
INTERNSHIP REPORT

ON
AWS CLOUD VIRTUAL

An internship report submitted in partial fulfillment of the requirements for the Award
of Degree of

BACHELOR OF TECHNOLOGY
in

ELECTRICAL AND ELECTRONICS ENGINEERING

by
Y.SATISH
(20501A0298)
Under the Esteemed guidance of
Ms. V. Sai Geetha Lakshmi
Assistant Professor

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

PRASAD V. POTLURI SIDDHARTHA INSTITUTE OF TECHNOLOGY


Autonomous & Permanent Affiliation to JNTUK, Kakinada,
AICTE Approved, NBA & NAAC accredited and ISO 9001: 2015 certified Institution

KANURU, VIJAYAWADA-520007

2022 – 2023
PRASAD V. POTLURI SIDDHARTHA INSTITUTE OF TECHNOLOGY

Autonomous & Permanent Affiliation to JNTUK, Kakinada,

AICTE Approved, NBA & NAAC accredited and ISO 9001: 2015 certified Institution

KANURU, VIJAYAWADA-520007

DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

CERTIFICATE
This is to certify that the Internship report titled "AWS CLOUD VIRTUAL" submitted by Y. SATISH (Reg. No: 20501A0298) is work done by him and submitted during the 2022 – 2023 academic year, in partial fulfillment of the requirements for the award of the degree of BACHELOR OF TECHNOLOGY in ELECTRICAL AND ELECTRONICS ENGINEERING. This internship was done at AICTE EDUSKILLS AWS CLOUD VIRTUAL.

Supervisor: Ms. V. Sai Geetha Lakshmi, Assistant Professor

Head of Department: Dr. Ch. Padmanabha Raju, Professor & HOD

EXTERNAL EXAMINER
ACKNOWLEDGEMENT
It is my responsibility to thank Dr. K. Pavan Kumar, M.Tech, Ph.D., for his immeasurable help, timely suggestions, invaluable guidance, and constant encouragement given throughout this internship. I am very much indebted to him for the valuable suggestions and inspiration he has offered throughout the course of my internship.

I wish to record my deep sense of gratitude and profound thanks to my supervisor, Ms. V. Sai Geetha Lakshmi, Assistant Professor, Department of Electrical and Electronics Engineering, PVPSIT, for her keen interest, inspiring guidance, and constant encouragement at all stages of my work, which brought this report to fruition.

I am highly grateful to Dr. Ch. Padmanabha Raju, Head of the Electrical and Electronics Engineering Department, for his most cooperative attitude during the preparation of this internship report.

I am thankful to our beloved Principal, Dr. K. SIVAJI BABU, for his encouragement and the facilities provided to us.

I am also thankful to all the staff members for their valuable support in completing the internship.

Finally, I express my thanks to everyone who directly or indirectly helped me in completing this internship.

Y.SATISH

(Reg.No: 20501A0298)
ABSTRACT
Cloud computing has become an important tool not only in the business world but also in our day-to-day activities. Most businesses have opted for cloud computing because it is considered safer and more reliable, especially for tasks such as inventory tracking. Cloud computing is the on-demand provision of services through which data and projects can be stored and accessed easily. Amazon is at the forefront of providing cloud computing services globally through a platform called Amazon Web Services (AWS), which allows customers to store data and run workloads on the platform. AWS is a prevalent platform that increases efficiency while supporting many business practices. In the early 2000s, organizations depended entirely on purchased physical servers. Such servers offered limited functionality at steep prices, and keeping a server functioning required numerous validations. The more a business grows, the more servers and optimization work are needed, and acquiring these proved unproductive and at times excessively costly. The benefits of Amazon Web Services have been the answer to many of these problems: organizations that use AWS have instantly available servers, various improved storage options, flexible workloads, and enhanced security measures.
CONTENTS

1. INTRODUCTION
1.1 AWS
1.2 History of AWS

2. CLOUD COMPUTING
2.1 Introduction
2.2 Infrastructure as Hardware
2.3 Infrastructure as Software
2.4 Features of Cloud Computing

3. AMAZON WEB SERVICES
3.1 Introduction
3.2 Services of AWS
3.3 Three Ways to Interact with AWS
3.4 The AWS Cloud Adoption Framework
3.5 The Future of AWS
3.6 Advantages of AWS

4. CLOUD ARCHITECTURE
4.1 Introduction to Cloud Architecture
4.2 AWS Well-Architected Framework
4.3 Storing Data in Amazon S3
4.4 Adding a Compute Layer in EC2
4.5 Adding a Database Layer

5. CREATING AND CONNECTING NETWORKS
5.1 Creating an AWS Networking Environment
5.2 Connecting your AWS Networking Environment to the Internet
5.3 Securing your AWS Networking Environment
5.4 Connecting to your Remote Network with AWS Site-to-Site VPN
5.5 Connecting to your Remote Network with AWS Direct Connect
5.6 Connecting your VPC to Supported AWS Services

6. CAPSTONE PROJECT

7. CONCLUSION
LIST OF FIGURES

Name of Figure

Fig 2.1: Cloud Computing

Fig 2.2: Infrastructure as Hardware

Fig 2.3: Infrastructure as Software

Fig 2.4: Features of Cloud Computing

Fig 3.4: AWS CAF Perspectives

Fig 5.3: Default Security Groups


CHAPTER 1
INTRODUCTION

1.1: AWS

AWS stands for Amazon Web Services, a cloud provider used by millions. It is a secure cloud services platform that offers almost everything a business requires to develop sophisticated applications with reliability, scalability, and flexibility. Its billing model is generally referred to as "pay-as-you-go," with no upfront or capital cost. Amazon offers nearly 100 on-demand services, and the list keeps growing. Services can be operational almost immediately and are accessible with minimal setup. Mastering AWS is not only about building websites online: the platform gives developers access to an interconnected set of services offering database storage, compute power, content delivery, and a growing portfolio of related functionality. Organizations around the globe use AWS to develop and scale. Cloud computing is here to stay, and the solutions available from AWS are fast-tracking its development.

1.2: History of AWS

Amazon Web Services was launched in 2002. The company intended to offer its unused infrastructure to customers as a service, and the idea was met enthusiastically. Amazon launched its first major AWS products in 2006. In 2012, Amazon held a large event to gather customer input concerning AWS; to date, the organization continues to hold similar events, such as re:Invent, that let customers share feedback concerning AWS. In 2015, Amazon announced that AWS revenue had reached $7.8 billion. Between then and 2016, AWS launched measures to help customers migrate their services to AWS. These actions, together with AWS's growing and much-appreciated feature set, drove further economic growth: in 2016, AWS revenue rose to $12.2 billion. Presently, AWS provides customers with around 160 products and services.

CHAPTER 2
CLOUD COMPUTING
2.1: Introduction

Cloud computing is the on-demand delivery of compute power, database, storage, applications,
and other IT resources via the internet with pay-as-you-go pricing. These resources run on
server computers that are located in large data centers in different locations around the world.
When you use a cloud service provider like AWS, that service provider owns the computers
that you are using. These resources can be used together like building blocks to build solutions
that help meet business goals and satisfy technology requirements (as shown in Fig 2.1).

Fig 2.1: Cloud Computing


2.2: Infrastructure as Hardware
In the traditional computing model, infrastructure is thought of as hardware. Hardware
solutions are physical, which means they require space, staff, physical security, planning, and
capital expenditure. In addition to significant upfront investment, another prohibitive aspect of
traditional computing is the long hardware procurement cycle that involves acquiring,
provisioning, and maintaining on-premises infrastructure (as shown in Fig 2.2).

Fig 2.2: Infrastructure as Hardware


With a hardware solution, you must ask if there is enough resource capacity or sufficient
storage to meet your needs, and you provision capacity by guessing theoretical maximum
peaks. If you don’t meet your projected maximum peak, then you pay for expensive resources
that stay idle. If you exceed your projected maximum peak, then you don’t have sufficient
capacity to meet your needs. And if your needs change, then you must spend the time, effort,
and money required to implement a new solution.

For example, if you wanted to provision a new website, you would need to buy the hardware,
rack and stack it, put it in a data center, and then manage it or have someone else manage it.
This approach is expensive and time-consuming.

2.3 : Infrastructure as Software

Cloud computing enables you to stop thinking of your infrastructure as hardware, and instead
think of (and use) it as software.

Software solutions
are flexible. You can select the cloud services that best match your needs, provision and
terminate those resources on-demand, and pay for what you use. You can elastically scale
resources up and down in an automated fashion. With the cloud computing model, you can
treat resources as temporary and disposable. The flexibility that cloud computing offers enables
businesses to implement new solutions quickly and with low upfront costs (as shown in Fig 2.3).

Fig 2.3: Infrastructure as Software


Compared to hardware solutions, software solutions can change much more quickly, easily,
and cost-effectively. Cloud computing helps developers and IT departments avoid
undifferentiated work like procurement, maintenance, and capacity planning, thus enabling
them to focus on what matters most.

2.4: Features of Cloud Computing (as shown in Fig 2.4)

Fig 2.4: Features of Cloud Computing

1. Resource Pooling
Resource pooling means that the cloud provider pools computing resources to serve multiple customers through a multi-tenant model. Different physical and virtual resources are dynamically assigned and reassigned according to customer demand. The customer generally has no control over, or knowledge of, the exact location of the provided resources, but may be able to specify location at a higher level of abstraction.

2. On-Demand Self-Service
This is one of the most important and valuable features of cloud computing: the user can provision computing capabilities as needed and continuously monitor server uptime, capabilities, and allotted network storage.

3. Easy Maintenance
Servers are easily maintained, and downtime is very low; in some cases there is no downtime at all. Cloud platforms are updated regularly and gradually improve: updates are more compatible with devices and perform faster than older versions, with bugs fixed along the way.

4. Large Network Access
The user can access data in the cloud, or upload data to the cloud, from anywhere with just a device and an internet connection. These capabilities are available across the network and are accessed over the internet.

5. Availability
The capabilities of the cloud can be modified according to use and extended considerably. The platform analyzes storage usage and allows the user to buy extra cloud storage, if needed, for a very small amount.

6. Automatic System
Cloud computing automatically analyzes the resources needed and supports a metering capability at some level of service. Usage can be monitored, controlled, and reported, providing transparency for both the host and the customer.

7. Economical
For the host, it is a one-time investment: the company buys the storage once, and small parts of it can be provided to many companies, saving those companies monthly or yearly hardware costs. The only ongoing spend is basic maintenance and a few other minor expenses.

8. Security
Cloud security is one of the best features of cloud computing. Providers create snapshots of stored data so that data is not lost even if one of the servers gets damaged. Data is stored within storage devices that are designed to resist access and misuse by unauthorized parties. The storage service is quick and reliable.

9. Pay As You Go
In cloud computing, the user pays only for the service or the space actually utilized. There are no hidden or extra charges to be paid. The service is economical, and much of the time some space is allotted for free.

10. Measured Service
Cloud resource usage is monitored by the provider and recorded. This resource utilization is analyzed to support charge-per-use billing.

CHAPTER 3
AMAZON WEB SERVICES (AWS)

3.1 : Introduction

Amazon Web Services (AWS) is a secure cloud platform that offers a broad set of global cloud-
based products. Because these products are delivered over the internet, you have on-demand
access to the compute, storage, network, database, and other IT resources that you might need
for your projects—and the tools to manage them. You can immediately provision and launch
AWS resources. The resources are ready for you to use in minutes.

AWS offers flexibility. Your AWS environment can be reconfigured and updated on demand,
scaled up or down automatically to meet usage patterns and optimize spending, or shut down
temporarily or permanently. The billing for AWS services becomes an operational expense
instead of a capital expense.

AWS services are designed to work together to support virtually any type of application or
workload. Think of these services like building blocks, which you can assemble quickly to
build sophisticated, scalable solutions, and then adjust them as your needs change.

3.2 : Services of AWS

Since its inception, AWS has developed into a vital cloud computing technology. Below are some essential services offered by AWS:

1. Amazon S3

Amazon S3 is a low-cost object-storage service commonly used for backing up data over the internet. Its central advantage is that stored data can be retrieved from virtually anywhere it is needed.
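To make this concrete, a minimal illustrative sketch with the AWS CLI follows; the bucket name and file names are placeholders, and bucket names must be globally unique:

# Create a bucket to hold backups (bucket name is a placeholder)
aws s3 mb s3://my-example-bucket

# Upload a local backup file into the bucket
aws s3 cp backup.tar.gz s3://my-example-bucket/backups/backup.tar.gz

# The object can later be retrieved from anywhere with valid credentials
aws s3 cp s3://my-example-bucket/backups/backup.tar.gz restored.tar.gz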

2. Amazon EC2 (Elastic Compute Cloud)

It provides resizable, secure compute capacity that adjusts to your requirements. The service is designed to make web-scale cloud computing more accessible.

3. Amazon SNS (Simple Notification Services)

It is a tool for delivering notification messages to large numbers of subscribers via SMS or email. Alarms can be sent, including service notifications and other messages intended to call attention to important information.

4. AWS Lambda

Lambda runs code in response to particular events and manages the required resources automatically. You do not need to provision or operate servers, and you pay only for the time it takes to execute your code. This makes it cost-effective compared with services that charge at hourly rates.

5. Route 53

It is a DNS service in the cloud that doesn't require you to keep a separate DNS account. The aim is to provide businesses with a cost-effective and reliable way to route users to internet applications.

6. Amazon Elastic File System (EFS)

EFS can be used with AWS Cloud services and resources as well as on-premises resources. It provides simple, scalable, elastic file storage: through an intuitive interface, users can build and configure file systems quickly, and storage grows and shrinks automatically as files are added or removed, without disturbing application growth.

7. Amazon RDS

Easing the process of setting up, operating, and scaling a relational database in the cloud, Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administrative tasks such as database hardware setup, patching, and backups. The service is tuned for memory, performance, and input/output processes. Amazon RDS gives you the freedom to use the relational database of your choice, including the most popular open-source and commercial engines, as well as Amazon Aurora, a relational database built for the cloud that offers the performance and availability of traditional commercial databases at a fraction of the cost. RDS enables you to scale across a global footprint of data with enterprise-grade high availability and disaster recovery, no matter the size. It automates many previously cumbersome tasks: automatic failover, point-in-time backup and restore, disaster recovery, access management, encryption, secure networking, monitoring, and performance optimization. All of these and more can be enabled with a few clicks or API calls. Even highly regulated industries can leverage RDS, which carries a broad range of compliance certifications.

3.3 : Three ways to interact with AWS

There are three ways to create and manage resources on the AWS Cloud:

1. AWS Management Console


The console provides a rich graphical interface to a majority of the features offered by
AWS. (Note: From time to time, new features might not have all of their capabilities
included in the console when the feature initially launches.)
2. AWS Command Line Interface (AWS CLI)
The AWS CLI provides a suite of utilities that can be run from the command line or from scripts in Linux, macOS, or Microsoft Windows.
3. Software development kits (SDKs)
AWS provides packages that enable accessing AWS in a variety of popular
programming languages. This makes it easy to use AWS in your existing applications
and it also enables you to create applications that deploy and monitor complex systems
entirely through code.
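To make the CLI option concrete, a couple of illustrative commands are shown below; the Region is a placeholder and any Region works:

# List the S3 buckets in your account
aws s3 ls

# Describe the EC2 instances in a chosen Region
aws ec2 describe-instances --region us-east-1

The SDKs expose the same APIs from code, so anything you can do at the command line can also be scripted inside an application.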

3.4 : The AWS Cloud Adoption Framework (AWS CAF)

The AWS Cloud Adoption Framework (AWS CAF) provides guidance and best practices to
help organizations identify gaps in skills and processes. It also helps organizations build a
comprehensive approach to cloud computing—both across the organization and throughout
the IT lifecycle—to accelerate successful cloud adoption.

The AWS CAF organizes guidance into six areas of focus, called perspectives.

Perspectives consist of sets of business or technology capabilities that are the responsibility
of key stakeholders (as shown in Fig 3.4).

Fig 3.4: AWS CAF Perspectives

3.5 : The Future of AWS

As business, artificial intelligence, and IoT continue to evolve, the need for data storage, cloud computing, and security will rise to new levels. Additional services will be developed in the cloud as industries such as financial markets and healthcare become more reliant on these technologies. AWS, for its part, continues to develop scalable and easy-to-use solutions for deploying and managing web apps in the cloud. It is evident that there is a bright future and that this cloud has a silver lining. If you are ready to be a part of the future of AWS, certification courses such as Simplilearn's can prepare you to be an industry-ready, in-demand AWS solutions architect, with the privilege of firsthand experience managing AWS. You will study how cloud computing redefines the rules of IT architecture and how to design and scale AWS cloud operations using Amazon's recommended best practices.

3.6 : Advantages of AWS

Unilever, a well-known organization in the world of consumer goods, serves as an illustration. Unilever, which is spread across 190 countries and uses digital marketing for its products, required a quicker time-to-market and a consistent environment. Its prevailing on-premises legacy environment proved unworkable, incapable of catering to changing IT demands. It then moved a portion of its business to Amazon Web Services, and ever since, rollouts have been smooth, application provisioning has become dependable, and infrastructure provisioning has sped up. The company can also scale at the push of a button, and secured AWS backups ensure that all company data remains continuously accessible and protected. Unilever continues to build on Amazon Web Services, thanks to characteristics such as rapid rollout deployment, the capacity to produce up-to-date reports, and secure backups.

CHAPTER 4
CLOUD ARCHITECTURE

4.1: Introduction to Cloud Architecture

Cloud architecture is the practice of applying cloud characteristics to a solution that uses cloud services and features to meet an organization's technical needs and business use cases. A solution's architecture is similar to a blueprint for a building. Software systems require architects to manage their size and complexity.

Cloud architects:

1. Engage with decision makers to identify the business goals and the capabilities that need improvement.
2. Ensure alignment between technology deliverables of a solution and the business goals.
3. Work with delivery teams that are implementing the solution to ensure that the technology features are appropriate.

4.2: The AWS Well-Architected Framework

The AWS Well-Architected Framework provides a consistent approach to evaluate cloud architectures, along with guidance to help implement designs.

The AWS Well-Architected Framework is organized into five pillars:

1. The Security pillar addresses the ability to protect information, systems, and assets while
delivering business value through risk assessments and mitigation strategies.

2. The Operational Excellence pillar addresses the ability to run systems and gain insight into
their operations to deliver business value. It also addresses the ability to continuously improve
supporting processes and procedures.

3. The Reliability pillar addresses the ability of a system to recover from infrastructure or
service disruptions and dynamically acquire computing resources to meet demand.

4. The Performance Efficiency pillar addresses the ability to use computing resources efficiently to maximize performance, and to maintain that efficiency as demand changes.

5. The Cost Optimization pillar addresses the ongoing requirement of any good architectural design to avoid unnecessary costs. The process is iterative, and it should be refined and improved throughout the production lifecycle.

4.3 : Storing Data in Amazon S3

S3 Standard offers high durability, availability, and performant object storage for frequently
accessed data. Because it delivers low latency and high throughput, S3 Standard is appropriate
for a wide variety of use cases, including cloud applications, dynamic websites, content
distribution, mobile and gaming applications, and big data analytics. It provides durability
across at least three Availability Zones.

1. S3 Standard-Infrequent Access (S3 Standard-IA)

S3 Standard-IA offers all the benefits of Amazon S3 Standard, but it runs on a different cost model to store infrequently accessed data, such as older digital images or older log files. There is a 30-day minimum storage fee applied to any data placed in it, and it also costs more to retrieve data from S3 Standard-IA than from S3 Standard storage.

2. S3 One Zone-IA

It stores data in a single Availability Zone. It is ideal for customers who want a lower-cost option
and who do not need the availability and resilience of S3 Standard or S3 Standard-IA. It’s a
good choice for storing secondary backup copies of on-premises data or easily re-creatable
data. You can also use it as cost-effective storage for data that is replicated from another AWS
Region.

3. S3 Intelligent-Tiering

S3 Intelligent-Tiering is designed to optimize costs by automatically moving data to the most cost-effective
access tier, without performance impact or operational overhead. For a small monthly
monitoring and automation fee per object, Amazon S3 monitors access patterns of the objects
in S3 Intelligent-Tiering. It moves objects that have not been accessed for 30 consecutive days
to the infrequent access tier. If an object in the infrequent access tier is accessed, it is
automatically moved back to the frequent access tier. There are no retrieval fees when using
S3 Intelligent-Tiering and no additional tiering fees when objects are moved between tiers.

4. Amazon S3 Glacier Deep Archive

It is the lowest-cost storage class in Amazon S3. It supports long-term retention and digital preservation of data that might be accessed once or twice in a year.
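As a hedged sketch, the storage class can be chosen per object at upload time with the AWS CLI; the bucket and object names below are placeholders:

# Upload to S3 Standard (the default storage class)
aws s3 cp report.pdf s3://my-example-bucket/report.pdf

# Upload infrequently accessed data directly to S3 Standard-IA
aws s3 cp old-logs.tar.gz s3://my-example-bucket/old-logs.tar.gz --storage-class STANDARD_IA

# Let S3 Intelligent-Tiering move the object between tiers automatically
aws s3 cp dataset.csv s3://my-example-bucket/dataset.csv --storage-class INTELLIGENT_TIERING

# Send archival data straight to S3 Glacier Deep Archive
aws s3 cp archive-2020.tar s3://my-example-bucket/archive-2020.tar --storage-class DEEP_ARCHIVE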

4.4: Adding a Compute Layer in EC2

Adding compute with Amazon EC2

Amazon EC2 provides resizable compute capacity in the cloud.

1. Provides virtual machines (servers)


2. Provisions servers in minutes
3. Can automatically scale capacity up or down as needed
4. Enables you to pay only for the capacity that you use
5. Amazon EC2 enables you to run Microsoft Windows and Linux virtual machines in the
cloud.
6. You can use an EC2 instance when you need complete control of your computing
resources and want to run any type of workload.
7. When you launch an EC2 instance, you must choose an AMI and an instance type.
Launching an instance involves specifying configuration parameters, including
network, security, storage, and user data settings.
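As an illustrative sketch of that launch step, the AWS CLI call below starts one instance; every ID and name is a placeholder for a value from your own account:

# Launch one t2.micro instance from a chosen AMI (all IDs are placeholders)
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro \
    --key-name my-key-pair \
    --security-group-ids sg-0123456789abcdef0 \
    --subnet-id subnet-0123456789abcdef0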

Adding storage to an Amazon EC2 instance

EC2 instances have four main storage options:

1. Instance store

2. Amazon EBS

3. Amazon Elastic File System (Amazon EFS)

4. Amazon FSx for Windows File Server

All four options can be used to store a data volume. However, only an instance store or an
SSD-backed EBS volume can be used to store a root volume. In addition, an instance store or
EBS volume must be used by a single instance at a time. In the case of an instance store
volume, only the instance that the volume is added to can use it.

4.5: Adding a Database Layer

Database types typically fall into one of two broad categories: relational or non-relational.

1. Relational databases are the most familiar type of databases to most people.
Traditional examples include Microsoft SQL Server, Oracle Database, and MySQL.

2. Non-relational databases were developed more recently, but have been around for a
few decades. They play an essential role in the modern computing landscape.
Examples include MongoDB, Cassandra, and Redis.

Amazon RDS

Amazon RDS is a fully managed relational database service that creates and operates a
relational database in the cloud. However, before you learn more details about Amazon RDS,
you will first review the advantages of Amazon RDS as a managed database service. As a
relational database offering, Amazon RDS is a good choice for applications that have complex,
well-structured data. Amazon RDS is a good choice if your workloads must frequently combine
and join datasets, and must have syntax rules that are strictly enforced. For example, Amazon
RDS is frequently used to back traditional applications, enterprise resource planning (ERP)
applications, customer relationship management (CRM) applications, and ecommerce
applications. Amazon RDS is available with six database engines to choose from: Microsoft SQL Server, Oracle, MySQL, PostgreSQL, Amazon Aurora, and MariaDB. All Amazon RDS database types run on a server; the exception is Aurora, which can also run as a serverless option.
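A hedged sketch of creating a small MySQL instance from the AWS CLI follows; the identifier, instance class, and credentials are placeholders, and real deployments should avoid putting passwords on the command line:

# Create a small MySQL RDS instance (all values are placeholders)
aws rds create-db-instance \
    --db-instance-identifier example-db \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'example-password' \
    --allocated-storage 20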

Amazon DynamoDB

Amazon DynamoDB is a fully managed non-relational key-value and document NoSQL database service. DynamoDB is serverless and provides extreme horizontal scaling and low latency. DynamoDB global tables ensure that data is replicated to multiple Regions. DynamoDB provides eventual consistency by default (in general, reads are fully consistent 1 second after a write); strong consistency is also an option.
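As a small illustrative sketch, a key-value table might be created like this from the AWS CLI; the table and attribute names are placeholders:

# Create a simple key-value table keyed on a string attribute "Id"
aws dynamodb create-table \
    --table-name ExampleTable \
    --attribute-definitions AttributeName=Id,AttributeType=S \
    --key-schema AttributeName=Id,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST

PAY_PER_REQUEST billing avoids capacity planning, which suits the serverless character of DynamoDB.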

Database security controls

Security is a shared responsibility between you and AWS. AWS is responsible for security of
the cloud, which means that AWS protects the infrastructure that runs Amazon RDS.
Meanwhile, you are responsible for security in the cloud. One security recommendation for
Amazon RDS is to run your RDS instances in a virtual private cloud (VPC). A VPC enables
you to place your instance in a private subnet, which secures it from public routes on the
internet. The VPC also provides IP firewall protection and enables you to securely control the
applicable network configuration.

CHAPTER 5
CREATING AND CONNECTING NETWORKS

5.1 : Creating an AWS Networking Environment

Amazon Virtual Private Cloud (Amazon VPC) is a service that enables you to provision a logically isolated section of the AWS Cloud (called a virtual private cloud, or VPC) where you can launch your AWS resources. Amazon VPC gives you control over your virtual networking resources. For example, you can select your own IP address range, create subnets, and configure route tables and network gateways. You can use both IPv4 and IPv6 in your VPC for secure access to resources and applications. You can also customize the network configuration for your VPC. For example, you can create a public subnet for your web servers that can access the public internet. You can place your backend systems (such as databases or application servers) in a private subnet with no public internet access. Finally, you can use multiple layers of security to help control access to Amazon Elastic Compute Cloud (Amazon EC2) instances in each subnet. These security layers include security groups and network access control lists (network ACLs).
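As a hedged sketch, a VPC and one subnet could be provisioned from the AWS CLI as follows; the CIDR blocks and the VPC ID are placeholders:

# Create a VPC with a /16 IPv4 address range
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create a subnet inside that VPC (the VPC ID is a placeholder)
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.0.0/24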

5.2 : Connecting your AWS Networking Environment to the Internet

An internet gateway allows communication between instances in your VPC and the internet.
Route tables control traffic from your subnet or gateway.
Elastic IP addresses are static, public IPv4 addresses that can be associated with an instance
or elastic network interface. They can be remapped to another instance in your account.

NAT gateways enable instances in the private subnet to initiate outbound traffic to the internet
or other AWS services.

A bastion host is a server whose purpose is to provide access to a private network from an
external network, such as the internet.
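To illustrate how these pieces connect, a hedged CLI sketch follows; all resource IDs are placeholders:

# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Add a default route so the subnet's traffic can reach the internet
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0

A subnet whose route table contains such a route is a public subnet; without it, the subnet stays private.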

5.3 : Securing your AWS Networking Environment

Security groups

Security groups are stateful firewalls that act at the level of the instance or network interface.
Security group rules control inbound and outbound traffic to your AWS resources. You should
tightly configure these rules to restrict traffic and allow access only as needed. Traffic can be

restricted by any internet protocol, service port, and source or destination IP address (individual
IP address or CIDR block).

Default security groups

By default, a security group blocks all inbound traffic and allows all outbound traffic, as illustrated in Fig 5.3.
Fig 5.3: Default Security Groups

When you create a security group, it has no inbound rules. This means that you must add inbound rules to the security group to allow inbound traffic that originates from another host to your instance. By default, a security group includes an outbound rule that allows all outbound traffic. You can remove the rule and add outbound rules that allow specific outbound traffic only. If your security group has no outbound rules, no outbound traffic that originates from your instance is allowed (as shown in Fig 5.3).
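For example, a hedged sketch of adding a single inbound rule with the AWS CLI; the security group ID is a placeholder, and opening port 80 to 0.0.0.0/0 is appropriate only for a public web server:

# Allow inbound HTTP (TCP port 80) from anywhere
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0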

Network access control lists (network ACLs)

A network access control list (network ACL) is an optional layer of security for your VPC. It
acts as a firewall for controlling traffic in and out of one or more subnets. To add another layer
of security to your VPC, you can set up network ACLs with rules that are similar to your
security groups.

5.4 : Connecting to your remote network with AWS Site-to-Site VPN.

You can use AWS Site-to-Site Virtual Private Network (AWS Site-to-Site VPN) to securely
connect your on-premises network or branch office site to your VPC. Each AWS Site-to-Site
VPN connection uses internet protocol security (IPSec) communications to create encrypted
VPN tunnels between two locations. A VPN tunnel is an encrypted link where data can pass
from the customer network to or from AWS. The AWS side of the connection is the virtual
private gateway. (Note that instead of a virtual private gateway, you can also create a Site-to-
Site VPN connection as an attachment on a transit gateway. You will learn more about AWS
Transit Gateway later in this module.) The on-premises side of the connection is the customer
gateway. AWS Site-to-Site VPN provides two VPN tunnels across multiple Availability Zones
that you can use simultaneously for high availability. You can stream primary traffic through
the first tunnel and use the second tunnel for redundancy. If one tunnel goes down, traffic will
still get delivered to your VPC.

AWS Site-to-Site VPN supports two types of routing.

1. Static routing - Requires you to specify all routes (IP prefixes)


2. Dynamic routing - Uses the Border Gateway Protocol (BGP) to advertise its routes to
the virtual private gateway
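As a hedged sketch, a statically routed connection could be created from the AWS CLI like this; the gateway and connection IDs and the on-premises CIDR are placeholders:

# Create a Site-to-Site VPN connection that uses static routing
aws ec2 create-vpn-connection \
    --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --options StaticRoutesOnly=true

# Tell AWS which on-premises prefixes are reachable through the tunnel
aws ec2 create-vpn-connection-route \
    --vpn-connection-id vpn-0123456789abcdef0 \
    --destination-cidr-block 192.168.0.0/16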

5.5 : Connecting to your remote network with AWS Direct Connect.

AWS Direct Connect (or DX) is another solution that goes beyond simple connectivity over
the internet. DX uses open standard 802.1q virtual local area networks (VLANs) so you can
establish a dedicated, private network connection from your premises to AWS. This private
connection can reduce network costs, increase bandwidth throughput, and provide a more
consistent network experience than internet-based connections.

DX is useful for several scenarios, for example:

1. Hybrid environments

2. Transferring large datasets

3. Network performance predictability

5.6 : Connecting your VPC to supported AWS Services

A VPC endpoint enables you to privately connect your VPC to supported AWS services and to
VPC endpoint services that are powered by AWS PrivateLink. VPC endpoint services that are
powered by AWS PrivateLink include some AWS services, services hosted by other AWS
customers and AWS Partner Network (APN) Partners in their own VPCs (which are referred
to as endpoint services), and supported AWS Marketplace Partner services. VPC endpoints do
not require an internet gateway, NAT device, VPN connection, or DX connection. Instances in
your VPC do not require public IP addresses to communicate with resources in the service.
Traffic between your VPC and the other service does not leave the Amazon network. Endpoints
are virtual devices. They are horizontally scaled, redundant, and highly available VPC
components. Endpoints allow communication between instances in your VPC and services
without imposing availability risks or bandwidth constraints on your network traffic.

There are two types of VPC endpoints:

1. Interface endpoint
It is an elastic network interface with a private IP address. This IP address serves as an
entry point for traffic that is destined to a supported service.
2. Gateway endpoint
It is a gateway that you specify as a target for a route in your route table. The route is
for traffic that is destined to a supported AWS service. Amazon S3 and Amazon
DynamoDB are supported by gateway endpoints.
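As an illustrative sketch, a gateway endpoint for Amazon S3 could be created like this; the VPC and route table IDs are placeholders, and the service name varies by Region:

# Create a gateway VPC endpoint so instances reach S3 without an internet gateway
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0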

CHAPTER 6
CAPSTONE PROJECT

The capstone project is one way to practice applying the knowledge that you have developed
in the course to a real-world scenario. By completing and documenting the project, you will
have an asset that you can add to your portfolio of work for future opportunities.

The requirements for the capstone project are:

1. Provide secure hosting of the MySQL database

2. Provide secure access for an administrative user

3. Provide anonymous access to web users

4. Run the website on a t2.small EC2 instance, and provide Secure Shell (SSH) access to
administrators

5. Provide high availability to the website through a load balancer

6. Store database connection information in the AWS Systems Manager Parameter Store

7. Provide automatic scaling that uses a launch template

TASK1: CREATE A MYSQL RDS DATABASE INSTANCE

Step 1: Create Subnet Groups

RDS --> Choose Subnet Groups

Subnet Group Details:

Name- Example-DB-subnet

Description- Example-DB-subnet

VPC-Select Example VPC

Add Subnet:

AZ - Select us-east-1a (Private subnet 1) and us-east-1b (Private subnet 2)

Subnet - Select Private subnet 1 (10.0.2.0/23) and Private subnet 2 (10.0.4.0/23)

Step 2: Create Database

Navigate to the RDS service in AWS and create a DB instance:

RDS --> Database --> Create Database

Create database:

Choose a database creation method- Standard create

Engine options- Select MySQL

Templates - Dev/Test

Availability and durability - Choose Multi-AZ DB instance

Settings:

DB instance identifier - Example

Credentials Settings:

Master username - admin

Master password - password

Confirm password - password

Instance Configuration:

DB instance class-Choose Burstable classes (includes t classes)-t3.micro

Connectivity:

Virtual private cloud (VPC)-Select Example VPC

VPC Security Group-Example-DB

Database options

Initial database name- exampledb

Backup- uncheck it

Monitoring - Disable monitoring

Then click create database

TASK2: CREATE CLOUD9 ENVIRONMENT

Navigate to Cloud9 and create a Cloud9 environment.

Step 1:

Name environment

Name- capstone project

Step 2:

Network settings (advanced)

Network (VPC)-Select Example VPC

Subnet-Select Public Subnet2

then click Next

Step 3: Review, then click Create environment

TASK3: INSTALL A LAMP WEB SERVER ON AMAZON LINUX 2 ON THE CLOUD9 INSTANCE

Step 1: Prepare the LAMP server

sudo yum update -y

sudo amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2

sudo yum install -y httpd mariadb-server

sudo systemctl start httpd

sudo systemctl enable httpd

sudo systemctl is-enabled httpd

Step 2: Download the project assets (copy the link from the capstone project)

Enter the command in Cloud9:

wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/ILT-TF-200-ACACAD-20-EN/capstone-project/Example.zip

Then come back to the Cloud9 service and unzip the downloaded PHP files:

ls

mkdir Example

sudo unzip Example.zip -d Example/

sudo cp Example/* /var/www/html/

Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/

Check the public IP of the Cloud9 EC2 instance and paste it into a new tab. (At this point, the webpage does not load.)

Solution: Choose Instances and select your instance (the Cloud9-created instance, whose name starts with aws-cloud9).

On the Security tab, view the inbound rules. Add a rule for the HTTP protocol with source 0.0.0.0/0, then refresh your webpage again; it will now show the webpage.

TASK4: IMPORT THE DATABASE DUMP INTO THE DATABASE

Download the database dump:

wget https://aws-tc-largeobjects.s3-us-west-2.amazonaws.com/ILT-TF-200-ACACAD-20-EN/capstone-project/Countrydatadump.sql

Go back to the RDS dashboard; your database instance is now created (Available). Copy the DB endpoint, then go to the Cloud9 service to access the machine. The database file has downloaded successfully.

Important: Go to Security Groups, select Example-DB, and add an inbound rule for MySQL/Aurora with the source set to the Cloud9 instance; only then can the database be imported.

Then come back to the Cloud9 instance and import the file:

mysql -u admin -p exampledb --host <rds-endpoint> < Countrydatadump.sql

It asks for a password; give the password (copy it from the master password) and hit Enter.

Verify once again that the database is imported by using the following commands:

mysql -u admin -p --host <rds-endpoint>

MySQL [(none)]> use exampledb;

MySQL [exampledb]> show tables;

MySQL [exampledb]> select * from countrydata_final;

MySQL [exampledb]> exit;

TASK5: CONFIGURE PARAMETER VALUES IN AWS SYSTEMS MANAGER

Step 1: Navigate to AWS Systems Manager and create parameters for the following values:

/example/endpoint <rds-endpoint>

/example/username admin

/example/password password

/example/database exampledb

Step 2: Modify the IAM role on the Cloud9 instance

Go to the Cloud9 EC2 instance and attach the IAM role Inventory-App-Role, then refresh the web page again. Now you can access the database successfully.
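The same parameters from Step 1 could also be created from the AWS CLI; a hedged sketch follows, where the values mirror the lab placeholders and SecureString is used for the password as a safer choice than a plain String:

# Create the four parameters (values are placeholders from the lab)
aws ssm put-parameter --name /example/endpoint --type String --value "<rds-endpoint>"
aws ssm put-parameter --name /example/username --type String --value admin
aws ssm put-parameter --name /example/password --type SecureString --value "password"
aws ssm put-parameter --name /example/database --type String --value exampledb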

TASK6: CREATE AMI FOR AUTO SCALING (CLOUD9 INSTANCE)

In this step, we take an AMI of the Cloud9 instance:

Select the Cloud9 instance --> Actions --> Images and templates --> Create image

Image Name - CapstoneProjectAMI

Description - AMI for CapstoneProject

Then create the image. It will take a few minutes.

TASK7: CREATE LOAD BALANCER

Step1: Create Load Balancer

Go to EC2 console and select Load Balancer in new tab

Select Create Load Balancer

Select Application Load Balancer

Load balancer name-CapstoneProject-LB

Network mapping

VPC-Select Example VPC

Mappings - Select both us-east-1a (below it, choose Public subnet 1) and us-east-1b (below it, choose Public subnet 2)

Security groups

Security groups-Select ALBSG security group

Listeners and routing

Default action - requires a target group, so create one first.

Create Target Group:

Step 1: Specify group details:

Choose a target type-instance

Target group name- CapstoneProject-TG

VPC-Ensure Example VPC is selected then click next

Step 2: Register targets:

2 available targets are listed; click Create target group (do not select the instances).

Come back to the load balancer and refresh Listeners and routing; the created target group now appears. Select CapstoneProject-TG, then click Create load balancer.

Copy the DNS Name

TASK8: CREATE AUTOSCALING

In the EC2 management console, under Auto Scaling, choose Auto Scaling Groups in a new tab

Create Auto Scaling group

Step 1: Choose launch template or configuration

Name

Auto Scaling group name-NmindAcademy-ASG

Launch template

Launch template - Choose Example-LT, then modify the template to use the AMI we just created (CapstoneProjectAMI):

Select Example-LT, go to Details --> Actions --> Modify template

Scroll down and, under Launch template contents, choose our CapstoneProjectAMI ID, then create the template version.

Ensure the CapstoneProjectAMI ID is changed in our template, then click Next.

Step 2: Choose instance launch options

Network

VPC-Select Example VPC

Availability Zones and subnets-Select Private subnet1 & Private subnet2 then click next

Step 3 : Configure advanced options

Load balancing - optional-Select Attach to an existing load balancer

Attach to an existing load balancer

Existing load balancer target groups - Select CapstoneProject-TG

Health checks - Select ELB (check the box), then click Next

Step 4 :

Configure group size and scaling policies

Group size - Increase the group size as follows:

Desired capacity - 2

Minimum capacity - 2

Maximum capacity - 2

Then click Next

Step 5: Add notifications, then click Next.

Step 6: Add tags - Add a Name tag with the value Nminds-CapstoneProject and click Next.

Step 7: Review, then click Create Auto Scaling group.

Wait for 5 minutes.


TASK9: CHECK THE OUTPUT BY USING THE LOAD BALANCER DNS NAME

Final step 1: Edit the subnets of the Auto Scaling group to Public subnet 1 and Public subnet 2.

Step 2: Then stop the instances named Nminds. Wait for 5-10 minutes, refreshing several times.

Go to the load balancer, copy the DNS name, and paste it into a new tab.

Check whether the website and the database are connected.

CONCLUSION

Currently, in a marketplace where on-demand services are on the rise, AWS has developed a workable solution for business organizations searching for inexpensive, reliable, and scalable cloud computing services. Operating in 22 geographic regions, Amazon Web Services enables firms to manage many different services, including development, data processing, game development, warehousing, and much more. A distinguishing benefit of AWS is that a business can access EC2, which offers a virtual cluster of computers via the internet; the job of on-premises hardware resources is taken over by these much-helpful server farms located across the globe. Whether you are just starting out or are an already established enterprise, AWS is a solution that can provide extensive uptime, cost savings, and continuous support, which is undeniably a good return on investment.

