
MINOR PROJECT REPORT

Serverless using AWS Lambda


Submitted in partial fulfillment of the requirements for the award of the
degree of B.Sc. (IT)

SUBMITTED BY:
Name: - Diksha Mishra
Roll no: - ASC11044

SUBMITTED TO:

Department Of Computer Science & Information Technology

BABASAHEB BHIMRAO AMBEDKAR UNIVERSITY


SATELLITE CAMPUS
TIKARMAFI AMETHI, INDIA
Declaration

I, the undersigned, declare that the project entitled “Serverless using AWS Lambda”, submitted
by me to Babasaheb Bhimrao Ambedkar University Satellite Campus (A Central University),
Amethi, in partial fulfilment of the requirements for the award of the degree of Bachelor of
Science in Information Technology under the guidance of Dr. Neeraj Tiwari, is my original work.

Signature of student

Name: Diksha Mishra

Roll. No. : ASC-11044

Mobile No.: 8707744218

E-mail ID: mdiksha40@gmail.com

Date:

Certified that the above statement made by the student is correct to the best of our knowledge
and belief.

Signature of Supervisor: Signature of Examiner:

Acknowledgment
The satisfaction that accompanies the successful completion of this project would be
incomplete without mentioning the people who made it possible, without whose
constant guidance and encouragement my efforts would have gone in vain. I consider
myself privileged to express gratitude and respect to all those who guided me through the
completion of this project.

I convey my thanks to my project guide, Dr. Neeraj Tiwari of the Computer Science &
Information Technology department, for providing encouragement, constant support, and
guidance, which were of great help in completing this project.

Last but not least, I wish to thank my parents for financing my studies in this college as well
as for constantly encouraging me to learn. Their personal sacrifice in providing this
opportunity to learn is gratefully acknowledged.

Table of contents
Chapter Page

1. Declaration 2

2. Acknowledgment 3

3. List of figures 4

4. Cloud computing. 6
4.1. Cloud computing models.
4.2. Cloud computing deployment models.
4.3. Advantages of cloud computing.

5. AWS-Amazon Web Services 14


5.1. AWS Console.
5.2. Security and compliances.

6. AWS S3 17
6.1. Setup of S3 buckets.
6.2. Common use scenarios of S3.
6.3. Security measures with S3.

7. API Gateway 28
7.1. API Architecture.
7.2. Features of API Gateway.
7.3. Ways to access API Gateway.
7.4. Role of API in serverless.
7.5. Components of API in serverless.

8. Lambda 38
8.1. Why should we go with Lambda?
8.2. Creating a Lambda function.
8.3. Use of Designer.
8.4. Invoking of Lambda.
8.5. Basic components of Lambda.
8.6. Features of Lambda.

9. DynamoDB 43
9.1. High availability and durability.
9.2. Security measures with DynamoDB.

10. Implemented Lambda Architecture 46

11. Conclusion 49

3. List of figures

Figure Page

1. Types of cloud computing. 8


2. Cloud service models. 9
3. Deployment model in cloud computing. 10
4. Features of cloud computing. 12
5. Ways to provide security to the S3 bucket. 16
6. The architecture of API Gateway. 26
7. General features of API Gateway. 28
8. Integration of API with AWS Lambda. 41
9. Data extraction from DynamoDB. 43
10. Encryption techniques in DynamoDB. 44
11. Lambda Implemented Architecture 46

Introduction

Amazon launched AWS (Amazon Web Services) in 2006, and since then Amazon has become a
major cloud vendor in the arena of cloud computing web services. Back in time, timesharing
systems were used for scientific purposes or by organizations to provide a common resource
pool; they proposed the paradigm of computing systems that is known as cloud computing
nowadays. With this computing technique, the growing volume of data and computing
resources or power can be shared among many users and offered to the general public as
function-as-a-service. Cloud computing has introduced a new trending word, ‘Serverless’.
Serverless refers to a function-as-a-service platform that promises end-users reduced hosting
cost, high availability of resources, dynamic elasticity and fault tolerance. Instead of owning
their own hardware infrastructure, a client or organization uses the services on a rental basis
provided by the cloud vendor. Almost all those services where the client doesn’t need to be
physically present at the computing hardware can easily be delivered by the vendors. In this
project, the Amazon Lambda service is used to build efficient serverless architectures. Lambda
provides function-as-a-service: it gives the runtime environment for events and also creates
the instances for the requests and responses. Lambda manages the concurrency of the
requests and methods that are invoked simultaneously. AWS Lambda provides developers
with an environment that frees them from the administrative and management
responsibilities of application development.

The technologies used in this project:

1. AWS S3

2. API Gateway

3. AWS Lambda

4. Amazon Cognito User Pool

5. Dynamo DB

6. Amazon CloudFront

4. Cloud computing
With the emergence of cyberspace, traditional browsing architectures have largely been
replaced by cloud computing. Cloud computing is a way of accessing, manipulating and
retrieving data over the internet, as well as using resources like OS platforms, storage,
memory (RAM & ROM) and predefined application interfaces. For start-up businesses related
to cyberspace, cloud computing can be a major choice of platform for acquiring services like
server management, API interfaces, etc. The concept of cloud computing dates back to the
1950s era, when mainframe computing gradually grew in popularity. Multiple users were
able to access the central computer, or mainframe, through dumb terminals. By providing a
single resource pool for sharing data and information, with large storage capacity and lower
expense, this concept became the solution to sophisticated, expensive technology from a
business perspective.

Some time later, around the 1970s, Virtual Machine (VM) technology was introduced. With
this technology, users could finally operate multiple operating systems simultaneously in
a single environment. In simple words, a completely new system with a different operating
system, or the same operating system, can run inside a single hardware system. Virtual Machine
technology took the shared technology of mainframe computers to a new level by
permitting different computing environments in one physical system.

With the advancements in the telecommunication sector, virtual machine technology
merged with telecommunications and introduced the Virtual Private Network (VPN). This newly
launched technology let telecommunication companies give the same quality of service as
point-to-point links (as traditional communication was established at that time) with fewer
investments in physical infrastructure. The following list briefly explains the evolution of
cloud computing:

 Grid computing: parallel computing to solve one large dedicated problem.

 Utility computing: computing resources given at metered prices.
 SAAS: network-based subscriptions to applications.
 Cloud computing: providing IT resources and services in any environment, anywhere.

Figure 4.1. Types of cloud computing.

4.1. Cloud models


There are three major types of cloud computing models. Each model has its own
benefits. The following are the cloud computing models:

 SAAS: Software as a Service is a model that gives the user quick access to services
and software according to their requirements. The whole integrated stack of
computing is controlled by the cloud vendor and is accessible through the client's
web browser. The applications that users run in the cloud are based on a paid
licensed subscription; users can also use applications for free, but in that case the
services have limited access.
Because of SAAS, there is no requirement to install software or any
type of IDE platform needed for the application to run in the existing
computing infrastructure.
 IAAS: Infrastructure as a Service provides a virtual arrangement of computing
resources over the cloud network. The client can opt for any computing
resources that would otherwise be needed as physical infrastructure, such as
storage, memory and networking hardware, as well as their maintenance and
support.
 PAAS: Platform as a Service greatly reduces the complexity of the process of
software development for an organization. PAAS provides the same virtual
environment where an organization can develop, test and organize the platform for
application deployment. These services are provided in the form of servers,
storage, networking and the IDE platforms needed for application development.

By comparing and correlating the above-mentioned models, organizations can choose
their model, or a combination of models, according to their applications or services and as
per their requirements.

Figure 4.2. Cloud Service Models.

4.2. Cloud computing deployment models


Cloud deployment models describe how the cloud model is made available to users.
Each cloud deployment model has its own set of benefits, so it's
important for the organization to choose the suitable cloud deployment model according
to their business needs and infrastructure.

A cloud deployment is designated according to factors like who has control
over the deployment and where the infrastructure for the deployment resides.
Also, for the best user experience, each cloud deployment model delivers distinct
services. The following cloud deployment models are available:

1. Public cloud deployment model: As the name suggests, any user who wants to make
use of computing resources such as hardware (CPU, storage and memory, etc.) and
software (OS, application server and database) can use this model on a subscription
basis. This type of cloud model is mainly used for non-critical tasks and for testing and
developing application software.
2. Private cloud deployment model: This cloud deployment model is used by a single
organization for purposes like file sharing, package sharing and other resources
related to the organization's work. The management of this type of infrastructure is
shared by both parties, i.e. the service provider and the organization itself. A private
cloud is more expensive in terms of capital in comparison to the public cloud
deployment model because of the cost of acquiring and maintaining the private services.
Besides this, infrastructures with a private cloud deployment model have better security
and privacy than public and hybrid clouds, because everything is managed through the
shared responsibilities of the organization and the service provider. In simple words, a
private cloud deployment model's infrastructure is largely managed by the
organization itself, so the user experience is far better for its purposes.

3. Hybrid cloud deployment model: This deployment model is best for organizations
that suddenly need to increase the scalability of their resources. Many organizations
interconnect the public cloud deployment model and the private cloud deployment
model for their use. This model lets the public cloud supplement the resources
available within the private cloud deployment model.

4. Community cloud deployment model: This model is applied where multiple
organizations that are part of the same community need to share their resources and
data. Access to the resources available within the cloud is granted only to the people
who are part of the community.

Figure 4.3. Cloud Deployment Models

4.3. Advantages of cloud computing
The major advantages of AWS cloud computing are mentioned below:

1. Trade capital expense for variable expense: Clients don't have to make heavy investments
in buying servers and hardware. Clients only have to pay as they consume the resources and
according to how long the resources are used. They don't have to make up-front payments
as with traditional server and hardware usage.

2. Freedom from capacity limitations: Clients can decide when to deploy their web
services instead of sitting on idle resources or dealing with limited capacity. With
AWS, clients can use as much or as little capacity as they need and scale
up and down as required at any moment.

3. Increased speed and agility: In the cloud computing environment, resources can be made
available to developers at any instant or moment of time, instead of after weeks. This
results in a major increase in the agility of the organization, since the cost and time
of developing and experimenting with web services or web applications are significantly
reduced.

4. No cost of maintaining data centers: AWS cloud computing lets the client focus only on
their business implementation, not on infrastructure chores like the heavy lifting of
racking, stacking and powering servers.

5. Rapid global deployment: The client can easily deploy a service or application with
just a few configurations and a few clicks across regions around the globe. This means
the client's service or application is delivered with lower latency and a better
experience at minimal cost.


Figure 4.4. Features of Cloud Computing.

5. Amazon Web Services

As discussed earlier, cloud computing is a way of accessing, manipulating and retrieving data
over the internet, along with the resources needed to do so. AWS, or Amazon Web Services, is a
major cloud vendor that provides cloud services across many different areas. AWS manages all
the hardware, as well as the hardware configuration for its native software, connected over
networks present in the different regions of AWS's service-providing divisions. With this
management provided by the AWS team, clients get rapid access and pay-as-you-go pricing,
and they only have to focus on their business setup and business management, leaving the
technical management of the web infrastructure to the cloud vendor AWS's technical team.
As of 2019, over 165 services are provided by AWS.

5.1. AWS Management Console
AWS services can be accessed through a common console, the AWS Management Console. It is
a simple, user-friendly web interface for the management of the services. Clients have to
create an account by giving the necessary information to the cloud vendor. Not only business
users but even scholars can use AWS services by creating a free-tier account with basic and
minimal pricing rates. This console can be accessed from any network-connected device,
from anywhere.

5.2. Security and Compliance by AWS
The security of clients' data and information is the highest priority for AWS. In the AWS
cloud, the client doesn't have to maintain servers and hardware for security reasons
personally; they set and fine-tune the levels of security through software-based security
tools and permissions on the flow of data.

The client is able to maintain control of the security measures they choose to implement to
protect their own content, data, facts, figures and network, which is no different from the
actual configuration and maintenance of an onsite data center.

The AWS cloud uses a shared responsibility model of security, in which the client has full
control over their security configurations while the underlying infrastructure is managed by
the cloud vendor. This means users retain control of the security of their platform, applications,
cloud data sharing, systems and networks, no differently than if the client were present
at the data-center site. Clients get access to hundreds and thousands of tools and ways to
maintain the security of their network, data encryption, and access control on the AWS
cloud platform.

The IT infrastructure provided by AWS enables clients to share the responsibilities of choosing
the compliance and security management of the service infrastructure that is built on
the platform. The AWS infrastructure supports several security standards. The following is
a partial list of security standards with which AWS complies:

 SOC 1/ISAE 3402, SOC 2,SOC 3


 FISMA, DIACAP, and FedRAMP
 PCI DSS Level 1
 ISO 9001, ISO 27001, ISO 27017, ISO 27018

6. Amazon S3
Amazon S3 stands for Simple Storage Service. It is a storage service provided by
Amazon Web Services. Clients can use Amazon S3 to retrieve any type of data that has
already been uploaded to an S3 bucket (storage folders in S3 are formally known as S3
buckets) from any region at any moment in time. These Amazon S3 buckets can be configured
through the AWS Management Console.

Figure 6.1. S3 Bucket

Data is stored in the form of objects in the S3 bucket. Attached to each individual object is
metadata (data about the data) that describes the file information such as the file's origin,
size and date of creation. To store any object in the S3 bucket, the user needs to upload the
content and also set the permissions for access to the bucket and access to the metadata of
the bucket.

Basically, buckets are a type of container for objects. Users can create multiple buckets
for a single application or instance. Users also have the right to choose the location or
geographical region for the bucket, and they can set privacy and security measures for
access to the bucket and for the logs of the bucket. The logs can record, for example, who
created the bucket, who deleted the bucket and lists of the objects in the bucket.

This diagram shows the various operations that are available to perform on the S3 bucket.

Figure 6.2. Illustration of the life cycle of the S3 bucket.

6.1. Steps to create an Amazon S3 bucket

1. Sign up for Amazon S3:


To create an Amazon S3 bucket, the user first has to sign up for an AWS account or
log in to their existing AWS account. Billing for the S3 bucket starts when you begin
using the bucket by uploading objects or transferring content into it.

1. Go to https://aws.amazon.com/s3/ and choose Get started with Amazon S3.

2. Follow the on-screen instructions. AWS will notify you by email when your account is
active and available for you to use.

2. Create an S3 bucket:

After signing in, the next step is to create a bucket.

1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.

2. Choose Create bucket.

3. For the bucket name, in the Name field, a unique, DNS-compliant name is required. The
example S3 bucket configuration given illustrates the fields and their entries for the
creation of buckets.
There are several guidelines for naming a bucket; the guidelines are mentioned below:
 The DNS-compliant name must be unique across all existing buckets in all
regions.
 Once the name is assigned, the user can't change the name of the bucket.
 When naming the bucket, the user should choose a name that reflects the type or
property of the objects in the bucket, because the bucket name is visible in the
URL and also points to the objects that the user is going to upload.
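The same bucket can also be created programmatically. Below is a minimal sketch using the
boto3 SDK for Python; the bucket name and region are illustrative placeholders and must be
replaced with a globally unique name and the preferred region.

import boto3

# The region and bucket name here are illustrative placeholders.
s3 = boto3.client("s3", region_name="ap-south-1")

# Bucket names must be globally unique and DNS-compliant.
s3.create_bucket(
    Bucket="my-example-bucket-2019",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)
# Note: for the us-east-1 region, CreateBucketConfiguration must be omitted.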

3. Uploading or adding objects to the S3 bucket:

Objects can be any type of file (text, picture, video, etc.) to be uploaded into the
bucket.

1. To upload an object, from the list of bucket names, choose the bucket that is
going to be used.

2. Choose Upload.

3. From the upload dialogue box, choose the file that has to be uploaded.

4. Choose a file to upload and then choose Open.

5. Choose upload.
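Uploading (and the matching download used in the next step) can also be scripted. The sketch
below uses boto3 and assumes a hypothetical local file named photo.jpg and the placeholder
bucket created earlier; upload_file automatically handles multipart uploads for large files.

import boto3

s3 = boto3.client("s3")

# Upload a local file; the key "images/photo.jpg" becomes the object's name in the bucket.
s3.upload_file("photo.jpg", "my-example-bucket-2019", "images/photo.jpg")

# Download the same object back to the local system.
s3.download_file("my-example-bucket-2019", "images/photo.jpg", "photo-copy.jpg")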

4. View an object in the S3 bucket:

After a successful upload, to view information about the object and to
download the object file to the user's system, the steps are as follows:

1. To download the object file:

In the bucket name list, choose the bucket that you have created.

2. From the name list, select the checkbox that is present right next to the object
that you have uploaded and then select the Download option on the object
overview panel.

5. To move an object:

If the user wants to move an object from the existing bucket to another
folder, the steps are as follows:
1. First, create a folder, then copy the object and paste it into the
other folder. Now, from the bucket name list, the user has to choose the name of
the bucket that they created.
2. Choose Create folder.
3. Now the user has to enter the information, i.e. the folder name, and then for the
folder encryption setting the user can choose the security configuration
appropriate to the objects in the bucket.

4. Now, after completing the configuration and filling in the information as requested,
at last choose the Paste option.

6. To delete objects and empty the bucket:

If objects that have been uploaded to the S3 bucket are no longer
needed by the user, the user can delete the bucket or the objects
from S3 so that no further charges are applied.
 To delete the objects:
1. To delete an object, from the bucket's list of objects, the user has
to select the checkbox residing beside the name of the object.
2. After the selection, the user has to choose the option named Actions and
then the Delete option.

 To empty the bucket:
Emptying the bucket means deleting all the objects
that exist in that bucket.

To confirm the emptying of the bucket, in the Empty dialogue box, enter the name
of the bucket and choose Confirm.

 To delete the entire bucket
You can delete a bucket and all the objects in it. Once the bucket gets deleted, the
domain name of the bucket becomes available for use again.
From the bucket name list, choose the bucket icon next to the name of the bucket
that you want to delete and choose the option Delete bucket.

To confirm the deletion, in the Delete dialogue box, enter the name of the
bucket and then choose Confirm.
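These clean-up steps can likewise be performed with boto3. The sketch below uses the same
placeholder bucket and object names as before; note that a bucket must be empty before it
can be deleted.

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-example-bucket-2019")

# Delete a single object by its key.
bucket.Object("images/photo.jpg").delete()

# Empty the bucket: delete every remaining object.
bucket.objects.all().delete()

# Delete the (now empty) bucket itself; its name becomes available again.
bucket.delete()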

6.2. Common use scenarios of the Amazon S3 bucket

The AWS solutions web page lists multiple ways in which an S3 bucket can be used.
The following list summarizes some of those ways:
 Backup and storage - S3 provides data backup and storage services for other systems.
 Application hosting - S3 provides services for the deployment, installation, and
management of web applications.
 Media hosting - S3 supports building infrastructure that is highly scalable,
redundant and highly available and that hosts video, photo and music uploads and
downloads.
 Software delivery - host software applications that customers can download.

6.3 Security measures associated with S3
All the resources that come under Amazon S3, such as buckets, objects and related
subresources (like metadata about the objects and the logs related to them), are private
by default; this is a standard S3 property. Private access means that access is permitted
only to the root account owner. The owner can also grant access by creating security
policies.
Amazon S3 bucket policies are categorized into two parts:
 Resource-based policies: Policies that are attached to the resource bucket are
known as resource-based policies, for example bucket policies and access
control lists.
 User-based policies: Policies that are related to root account security or
IAM role policies.

The user can choose either resource-based policies or user-based policies, or a
combination of both, according to the needs of the organization and owner. The
following figure shows ways in which the user can secure their bucket.

Figure 6.3. Some ways to provide security to the S3 bucket.
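As an illustration of a resource-based policy, the boto3 sketch below attaches a bucket policy
that allows public read access to every object in the placeholder bucket. This only demonstrates
the mechanism; in practice the policy document should be written to match the organization's
own access requirements.

import json
import boto3

s3 = boto3.client("s3")

# Example resource-based policy: allow anyone to read objects (illustrative only).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-example-bucket-2019/*",
    }],
}

s3.put_bucket_policy(Bucket="my-example-bucket-2019", Policy=json.dumps(policy))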

7. Amazon API Gateway
Amazon API Gateway, or Amazon Application Programming Interface Gateway, provides
services for developing, creating, maintaining, publishing, monitoring and securing REST and
WebSocket APIs at any scale. API Gateway is used to create an interface for a client service
provided by the organization or by an AWS service consumer. API developers can create APIs
that can access any type of service and data in the AWS cloud. API Gateway creates two types
of APIs, i.e. REST APIs and WebSocket APIs.

API Gateway creates REST APIs that:

 Are HTTP based.

 Implement standard HTTP methods such as GET, POST, DELETE, PUT and PATCH.
 Comply with the REST style, which enables a stateless client-server architecture of
communication.

API Gateway creates WebSocket APIs that:

 Adhere to the WebSocket protocol, which enables full-duplex and stateful
communication between client and server.
 Route incoming messages based on message content.

7.1. The architecture of API Gateway

The following diagram shows the API Gateway architecture.

Figure 7.1. The architecture of API Gateway.

This diagram explains that any API created on the Amazon API Gateway interface provides
the user, or the organization's developers and users, with an integrated and consistent
developer environment and experience for building applications.

API Gateway handles several API-call tasks, such as:

 Accepting and processing thousands of requests that call simultaneously for
a response.
 Maintaining traffic management.
 Maintaining authorization and access control for security purposes.
 API version management.

API Gateway acts as a "front door" for applications:

 To access data.
 To access business logic.
 To access functionality from back-end services.
 To run workloads on Amazon Elastic Compute Cloud (Amazon EC2).
 To run code on AWS Lambda.
 To create real-time communication applications.

7.2. Features of API Gateway

Amazon API Gateway serves features such as:

 Support for stateful (WebSocket) and stateless (REST) APIs.
 Flexible and powerful authentication management, such as AWS Identity and
Access Management policies, Lambda function authorizers, and Amazon
Cognito user pools.
 Developer portals for the publishing of APIs.
 CloudTrail logging and monitoring of API usage and API changes.
 Support for custom domain names.
 Canary release deployments for safely rolling out changes.
 Support for alarms when integrated with CloudWatch.
 Protection against common web exploits when API Gateway is integrated
with AWS WAF.
 Easy measurement of performance latencies when API Gateway is integrated
with AWS X-Ray.

Figure 7.2. The diagram shows the general features of API Gateway.

7.3. Ways to access API Gateway
Users can access Amazon API Gateway in the following ways:
 AWS Management Console.
 AWS SDKs.
 AWS Command Line Interface.
 AWS Tools for Windows PowerShell.

7.4. Role of API Gateway in Serverless

Serverless computing provides much reliability and leverage to the developer team of any
organization, or to the client itself. Serverless doesn't mean that there is no server; in
reality, there is a runtime environment that executes the code, and that runtime has to
run on a server. In serverless computing, the developer writes the code and tells the
service provider when the code has to be executed. The serverless provider creates the
environment for the user when they need it, and stops it when the environment is no
longer required.

When AWS Lambda is integrated with API Gateway, it provides the application interface for
the user that enables serverless computing. The code runs on AWS Lambda, which is a
highly available compute infrastructure. When the runtime environment is created,
Lambda prepares all the execution and administration resources for efficient work. With
Lambda and HTTP endpoints, API Gateway also uses streamlined proxy integration to make
the integration work.

Figure 7.3. This diagram illustrates the integration of the API gateway with AWS Lambda.
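With Lambda proxy integration, API Gateway forwards the whole HTTP request to the function
and expects a response object in return. The sketch below is a hypothetical Python handler
illustrating that contract; the fields in the returned dictionary (statusCode, headers, body) are
what proxy integration expects.

import json

def lambda_handler(event, context):
    # API Gateway (proxy integration) passes the HTTP method, path, headers,
    # query string and body inside the event dictionary.
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # The function must return a status code, headers and a string body.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "Hello, " + name + "!"}),
    }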

7.5. Components of API Gateway in Serverless

 API Gateway REST API –

REST APIs use HTTP as the underlying protocol for communication, which follows the
request-response paradigm. A REST API is a collection of HTTP resources and methods that
are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. The
architecture of the resource logic is designed and arranged according to the application type
in a resource tree.

 API Gateway WebSocket API –

A WebSocket API provides a full-duplex and persistent connection between the client
and the server. This means the connection stays open while the application runs. The
API methods are invoked through frontend WebSocket connections that you can
associate with a registered domain name.

 Edge-optimized endpoint –

The default hostname of an API deployed in one geographical region is served through a
CloudFront distribution, so the client can access the API from any AWS region. This edge-
optimized endpoint reduces response times by serving requests from the nearest
CloudFront point of presence.

 Integration request –
The internal interface of a REST or WebSocket API in API Gateway, in which the body of a
route request, or the parameters and body of a method request, are mapped to the
formats required by the backend.

 Integration response –
The response received from the backend, with its status code, headers and payload, is
mapped through API Gateway and then returned to the client.

 Mapping template –
A mapping template can be stated for request integration and response integration. At
run time, it references data made available as context and stage variables. It is a
collection of Velocity Template Language scripts that transform the request body
from the frontend data format of the API to the backend data format of the API and
the response body from the backend data format to the frontend data format.

 Method request –
The public interface of a REST API method in API Gateway that defines the
parameters and the body that an app developer must send in requests to access the
backend through the API.

 Method response –
The public interface that defines the status codes, headers, and body models that
the developer should expect in the responses generated by the API
Gateway.

8. AWS Lambda
AWS Lambda is a compute service provided by AWS that lets users and clients run code
without managing and arranging servers. Lambda executes requests when they are needed
and scales automatically, from a few requests per day to the handling of thousands of
requests per day. No charges are made when the code is not running, and charges apply
only when the code enters the runtime environment for execution. With AWS Lambda, code
runs in a fully virtual environment without any administration. AWS Lambda runs the code
on a high-availability computing infrastructure and manages all the other required computing
resources, such as server management, operating system maintenance, handling capacity
changes by scaling up and down automatically, code monitoring and logging.

Figure 8.1. Working of AWS Lambda

With Lambda integration, we can run code in response to events such as:
 Changes in existing data in an S3 bucket or DynamoDB.
 Responses to HTTP requests through API Gateway.
 Invocation of AWS SDK code through API Gateway.

With these integrations and capabilities, users can make different types of data-processing
triggers for event responses, process streaming data stored in Kinesis, or create a backend
that operates at AWS scale, performance and security.

With AWS Lambda, developers can build serverless applications comprised of multiple
functions that are triggered by events, and these can also be deployed automatically using
AWS CodeBuild.

8.1. Why should we go with AWS Lambda?

When a client uses AWS Lambda, they are only responsible for their code. AWS Lambda
manages all other computing resources such as CPU, memory, storage, network and other
resources relevant to it. This is great leverage for the developers, in that they don't have to
configure and manage the compute instances or customize operating systems for the
runtime environment they need. AWS Lambda manages and performs all the administrative
activities on behalf of the user; these activities include monitoring fleet health, applying
security patches, provisioning capacity, deploying code, and monitoring and logging of
Lambda functions.
If the user wants to organize their computing resources on their own, AWS provides
some other services that let the user handle their configurations:
 Amazon Elastic Compute Cloud (EC2): This service provides a wide range of
computing options, but instead of the Amazon vendor handling and configuring
the resources, they are handled by the user. The customization options include
customized operating systems, network and security settings, and the entire
software stack, but the user is responsible for provisioning capacity, monitoring
health and logs, and using Availability Zones for fault tolerance.
 Elastic Beanstalk is an easy-to-use service for deploying and scaling applications
onto EC2 instances, under the full control of the user.

8.2. Creating a Lambda function with the console
After logging in to the AWS console, choose the service named 'AWS Lambda' from the
services list. The Lambda console then opens, where the user can create the Lambda function.
This Lambda function executes and returns responses. The Lambda function's logs can be
monitored through CloudWatch metrics.

To create a Lambda function:

 Open the 'AWS Lambda Console'.
 Choose 'Create a function'.
 For Function name, the user has to enter 'my-function'.
 Choose Create function.

With the above configuration, Lambda creates the function with a Node.js sample and an
execution role that grants the function permissions from AWS. Lambda assumes the
execution role when the user invokes the function, and uses it to create credentials for the
SDKs and to read data from event sources.

8.3. Use of Designer

The Designer section shows the layout, or whole design, of the function and its upstream
and downstream resources. You can use it to configure layers, triggers, and destinations.

Now, the user has to choose the 'my-function' option in the Designer section to return to
the panel of function code and configuration. Lambda offers multiple sample codes that
return a success response. The user can also write their own function code, but the function
code size for editing in the console is limited to 3 MB.
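The console's default sample is a Node.js function; an equivalent minimal handler written for
a Python runtime might look like the sketch below. It simply logs the incoming event and
returns a success response.

import json

def lambda_handler(event, context):
    # Log the incoming event (visible later in CloudWatch Logs).
    print("Received event:", json.dumps(event))

    # Return a simple success response.
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!"),
    }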

8.4. Invoking Lambda

From the Lambda console, the user can invoke their Lambda function using the sample event
data that is provided.

To invoke the Lambda function:

 In the upper right corner, choose 'Test'.
 On the 'Configure test event' page, choose 'Create new test event' and, in 'Event
template', leave the default 'Hello World' entry that can already be seen in the
function coding panel. Enter an event name and note down the sample template.

Users can change the keys and values in the sample JSON event template, but there should be
no changes in the structure of the event. If the user wants to change the key values, then
they should maintain the structural format of the template; otherwise, Lambda won't
execute it and the user won't get the expected result.

 On behalf of the user, the handler in the Lambda function receives and then executes
the sample event.

 When the execution of the Lambda function is successful, the results are shown in the
console. There are three sections in the console, named execution result, summary
and log output.

 After running the Lambda function a few times, some metrics get collected, and
that log data can be seen in CloudWatch.
 Choose 'Monitoring' to open the CloudWatch console with all the logs of the Lambda
function.
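The same function can also be invoked outside the console, for example from a script. The
sketch below uses boto3 to send a 'Hello World'-style test event to the hypothetical
my-function created earlier and prints the returned payload.

import json
import boto3

client = boto3.client("lambda")

# A sample event in the same shape as the console's "Hello World" template.
event = {"key1": "value1", "key2": "value2", "key3": "value3"}

response = client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",  # synchronous invocation
    Payload=json.dumps(event),
)

# The Payload field is a stream containing the function's JSON response.
print(json.loads(response["Payload"].read()))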

8.5. Basic components of Lambda
AWS Lambda tends to be invoked by requests from Amazon API Gateway or by the
invocation configuration of other services. AWS Lambda runs the function code for
event sources. Some basic components of AWS Lambda are mentioned below:
 Function
 Runtime
 Event
 Concurrency
 Trigger

Function – A function is a resource that contains the code for the event that has to be
invoked. A function's code processes events, and a runtime passes the requests and
responses between Lambda and the function code.

Runtime – The runtime is the basic environment where the function code executes.
The runtime environment supports many different languages. The runtime environment
sits between Lambda and the function code, relaying context information, invocation
events, and the responses exchanged between them.

Event – An event is a document, in JSON format, that contains the data the function
processes. The developer determines the structure and contents of the event when
invoking the function directly. When an AWS service invokes the function, the service
defines the event.

Concurrency – Concurrency can be defined as the number of requests that can be handled
by the Lambda function at the same time. Whenever the function gets invoked, Lambda
arranges instances according to the events being processed. When the function code
finishes, the Lambda instance prepares itself for the next request. When the next request
is invoked while the previous request is already running, the function creates a new
instance to respond to the new request. This arrangement of generating new instances,
and the resulting increase in the function's capacity, is known as the function's
concurrency.

Trigger – A trigger is a type of configuration that is used to invoke the Lambda function.
Triggers can be generated by other AWS services, included in applications that the user
develops, or defined in event source mappings. An event source mapping is a resource
that reads items from a queue or stream and invokes the function.

8.6. Features of AWS Lambda

9. DynamoDB

DynamoDB is a fully managed NoSQL database service that provides fast and predictable
performance with seamless scalability. DynamoDB frees the user from the administrative
burdens of operating and scaling a distributed database, so that users don't have to worry
about hardware arrangement, setup and configuration, replication, software patching, or
cluster scaling. DynamoDB also offers encryption at rest, which decreases the operational
burden and complexity involved in protecting sensitive data.
DynamoDB is capable of storing and retrieving a large amount of data, and it can handle
multiple concurrent requests at the same time. The performance and capacity of DynamoDB
can be scaled up and down as per the needs of the workload. The consumption of resources
can be monitored from the AWS console.

AWS DynamoDB provides on-demand backups. Users can create a backup of a whole table
at any time with a single click from the AWS Management Console. When these backup or
restore actions are applied, there is zero impact on the existing tables; in simple terms, no
changes occur to the data in the existing tables. DynamoDB provides point-in-time recovery
for tables. Point-in-time recovery protects existing tables against accidental writes or
deletes in real time. With this capability, point-in-time recovery provides recovery to any
instant within the last 35 days.

To reduce data redundancy, DynamoDB can delete data that has expired or is no longer in
use. This tends to save cost and time for the operations that are applied to the
DynamoDB table.
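The table-level features described above can be enabled programmatically. The following
boto3 sketch creates a hypothetical on-demand table, turns on time-to-live expiry (so items
carrying an epoch timestamp in a ttl attribute are deleted automatically) and enables
point-in-time recovery; the table and attribute names are placeholders.

import boto3

dynamodb = boto3.client("dynamodb", region_name="ap-south-1")

# Create a simple on-demand (pay-per-request) table keyed on "id".
dynamodb.create_table(
    TableName="Products",
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="Products")

# Expire items automatically by storing an epoch timestamp in a "ttl" attribute.
dynamodb.update_time_to_live(
    TableName="Products",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "ttl"},
)

# Point-in-time recovery allows restores to any instant within the last 35 days.
dynamodb.update_continuous_backups(
    TableName="Products",
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)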

Figure 9.1. This diagram illustrates data extraction from DynamoDB.

9.1. High Availability and Durability

To maintain smart and consistent performance, DynamoDB distributes the traffic for a table
among different AWS servers, which results in increased throughput and data accuracy.
Whatever data is saved in DynamoDB is replicated across the Availability Zones of an AWS
geographical region. The data is stored on SSDs (Solid State Drives).

This distribution of data among different Availability Zones provides high availability and
durability of the data. AWS also provides a global tables feature, which enables the user to
sync data across different regions.

9.2. Security measures with DynamoDB

Data privacy and security are important measures for AWS. AWS maintains and handles the
security measures in the services it provides, but users also have responsibilities: AWS uses
a shared responsibility model, which lets the user configure the settings required for high
protection.
DynamoDB is designed for mission-critical workloads and primary data storage protection.
DynamoDB protects data when it is in storage, and it also gives protection when data is
transmitted between the client and the database.
There are three mechanisms used by DynamoDB for protection:
 DynamoDB encryption at rest
 DAX encryption at rest
 Internetwork traffic privacy
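Encryption at rest can be requested explicitly when a table is created. The sketch below shows
a hypothetical table that uses a KMS-managed key for server-side encryption; the table name
and key choice are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Create a table whose data is encrypted at rest with an AWS managed KMS key.
dynamodb.create_table(
    TableName="Orders",
    KeySchema=[{"AttributeName": "orderId", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "orderId", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={"Enabled": True, "SSEType": "KMS"},
)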

Figure 9.2. Encryption techniques of DynamoDB.

10. Lambda-implemented serverless architecture

We have explored the technology so far, and lately a new buzzword is trending, which is
'Serverless Computing'. This trending technology lets users focus on their business or
development strategies instead of maintaining the physical, operational and administrative
requirements. Serverless computing fits naturally with DevOps (development and operations),
which is all about speed.
With the implementation of AWS Lambda and serverless architecture, delivery and
performance will be faster than with traditional service methods, and also highly cost-
effective in multiple ways.
The fully serverless architecture is a combination of multiple other services with Amazon API
Gateway and AWS Lambda. Serverless applications use a mixture of services that allows
developers to develop and deploy applications efficiently and quickly without any
responsibility for maintaining physical servers and physical infrastructure.

The services that are going to be used in this Lambda-implemented serverless architecture
are mentioned below in the list:
 Amazon S3
 API Gateway
 Amazon Cognito User Pool
 AWS Lambda
 Amazon DynamoDB

These above-mentioned services allow developers to develop applications without the use
of EC2 instances and Puppet scripts.

Figure 10.1. The architecture of Serverless computing with AWS Lambda.

1. Consumers are meant to be clients who are geographically distributed among
different AWS regions. Clients can request content through the domain name from
their devices, which must be connected to the network.

2. CloudFront routes the request to edge locations, which are data centers located
globally; this yields lower latency (time delay) and provides a better experience to
the clients by caching the data.

3. Amazon S3 hosts static website content such as HTML, CSS, JavaScript, etc. All the
content of the website that is static in nature is delivered through S3.

4. The Amazon Cognito User Pool arranges user management and provides validation
of the identity of the user, i.e. it verifies authentication.

5. While all the static content is delivered by the S3 bucket, the dynamic content that
needs to be responded to is handled through Amazon API Gateway. In the above-
mentioned architecture, a REST API is used for the operation. Amazon API Gateway
provides a secure endpoint to exchange requests.

6. AWS Lambda works on top of DynamoDB, provides the computation for the
instances, and processes the events as requests. It performs the CRUD operations
(i.e. Create, Read, Update and Delete); a minimal handler sketch is shown after this
list.

7. Amazon DynamoDB provides the backend support with a NoSQL database whose
capacity scales elastically with the traffic of the web application.
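The sketch below illustrates how step 6 could be implemented: a single Python Lambda
handler, sitting behind an API Gateway proxy integration, that maps HTTP methods onto CRUD
operations against a hypothetical DynamoDB table named Items (keyed on a string attribute
id). It is a simplified illustration of the pattern, not the exact code of the deployed architecture.

import json
import boto3

# Hypothetical table keyed on a string attribute "id".
table = boto3.resource("dynamodb").Table("Items")

def lambda_handler(event, context):
    # API Gateway (REST, proxy integration) supplies the HTTP method and body.
    method = event["httpMethod"]

    if method in ("POST", "PUT"):
        # Assumes the request body is a JSON object of string attributes
        # (DynamoDB numbers would need Decimal handling).
        item = json.loads(event["body"])
        table.put_item(Item=item)                      # create or update
        result = item
    elif method == "GET":
        key = event["queryStringParameters"]["id"]
        result = table.get_item(Key={"id": key}).get("Item")
    elif method == "DELETE":
        key = event["queryStringParameters"]["id"]
        table.delete_item(Key={"id": key})
        result = {"deleted": key}
    else:
        return {"statusCode": 405, "body": json.dumps("Method not allowed")}

    return {"statusCode": 200, "body": json.dumps(result)}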

With this architecture, the user can deploy their entire web application stack very quickly,
without managing servers, guessing at provisioned capacity, or paying extra for idle
resources. Moreover, security, reliability and performance are largely handled by the
managed services.

11. Conclusion

In the era of cloud computing, where physical infrastructure is no longer a concern, Amazon
Web Services provides a number of secure and efficient services within its Virtual Private
Cloud. This mechanism is far more effective than traditional services for businesses or
start-ups. The implementation of Amazon Lambda provides high scalability, durability and
easy management and administration to the infrastructure of the organization. The
architecture proposed in this project is a cost-effective as well as administratively effective
solution for any business startup, and it lets the organization focus on its business strategies,
not on the infrastructure.

12. References

[1] Serverless Architectures with AWS Lambda-whitepapers by AWS.

[2] Yan M., Castro P., Cheng P., Ishakian V., Building a Chatbot with Serverless Computing. In Proceedings of the 1st
International ACM Workshop on Mashups of Things and APIs, Trento, Italy,Dec 2016, 5 p.

[3] Hendrickson S., Sturdevant S., Harter T., Venkataramani V.,Arpaci-Dusseau A.C., Arpaci-Dusseau R.H.,
Serverlesscomputation with open lambda. In Proceedings of the 8th USENIX Conference on Hot Topics in Cloud Computing
(Hot Cloud '16), Denver, CO, June 2016, 7p.

[4] Baldini I., Castro P., Chang K., Cheng P., Fink S., Ishakian V., Mitchell N., Muthusamy V., Rabbah R., Slominski A., Suter P.,
Serverless Computing: Current Trends and Open Problems., arXiv preprint arXiv:1706.03178. June 2017, 20 p.

[5] Sill A. The design and architecture of microservices. IEEE Cloud Computing. 2016 Sep;3(5):76-80.

[6] H. Liu, A Measurement Study of Server Utilization in Public Clouds, Proc. 9th IEEE International Conference on Cloud and
Green Computing (CAG’11), Sydney, Australia, Dec 2011, pp.435-442.

[7] M. Rehman, M. Sakr, Initial Findings for Provisioning Variation in Cloud Computing, Proc. of the IEEE 2nd Intl. Conf. on
Cloud Computing Technology and Science (CloudCom '10), Indianapolis, IN, USA, Nov 30 - Dec 3, 2010, pp. 473-479.

[8] R. Buyya, “Cloud computing: The next revolution in information technology,” in Parallel Distributed and Grid Computing
(PDGC), 2010 1st International Conference on, pp. 2–3, Oct 2010.

[9] J. Lewis and M. Fowler, “Microservices,” 2014.

[10] A. W. Services, “Aws lambda,” 2015.

[11] GIGAOM, “The biggest thing amazon got right: The platform,” 2011.

[12] Nginx, “Adopting microservices at Netflix: Lessons for architectural design,” 2015.

[13] InfoQ, “From a monolith to microservices + rest: the evolution of Linkedin's service architecture,” 2015.

Implementation of AWS Lambda in Business Arena

Diksha Mishra
CS & IT
BBAU Satellite Campus,
Amethi.
mdiksha40@gmail.com

Abstract— With the emergence of traditional microservices for browsing over the internet, cloud
computing is a major service to acquire nowadays, as it provides function-as-a-service. For new
business implementations where web applications are required to run the business, the planning of
the required infrastructure, as well as its maintenance and management, is a hectic concern for the
businessmen. Instead of using traditional microservices, the cloud computing services provided by
cloud vendors are a major relief for their concerns. AWS is a major cloud vendor that provides over
165 web services, including AWS Lambda and the AWS API Gateway, which are the services required
to build the infrastructure for web services in business implementations. Serverless is cost-effective,
with pay-as-you-go pricing, and the security and compliance of the related web services are fully
managed by the cloud vendor.

In this paper, we will discuss the evolution of web services, the advantages of cloud computing over
traditional services and the level of usage of cloud computing in comparison to traditional services,
and AWS services and architectures regarding the business aspects of new startups.

Keywords—Cloud computing, AWS, Serverless, AWS Lambda.

I. INTRODUCTION

Amazon launched AWS (Amazon Web Services) in 2006, and since then Amazon has become a major
cloud vendor in the arena of cloud computing web services. Back in time, timesharing systems were
used for scientific purposes or by organizations to provide a common resource pool; they proposed
the paradigm of computing systems that is known as cloud computing nowadays. With this
computing technique, the growing volume of data and computing resources or power can be shared
among many users and offered to the general public as function-as-a-service. Cloud computing has
introduced a new trending word, 'Serverless'. Serverless refers to a function-as-a-service platform
that promises end-users reduced hosting cost, high availability of resources, dynamic elasticity and
fault tolerance. Instead of owning their own hardware infrastructure, a client or organization uses
the services on a rental basis provided by the cloud vendor. Almost all those services where the
client doesn't need to be physically present at the computing hardware can easily be delivered by
the vendors.

II. THE MODELS OF CLOUD COMPUTING

There are three major components of cloud computing. Those three models are PAAS (Platform as a
Service), SAAS (Software as a Service) and IAAS (Infrastructure as a Service).

Figure 1. Components of cloud computing.

SAAS offers the highest level of scaling. Finally, IaaS offers access to processing resources, usually by
renting Virtual Machines (VMs) and storage space. The serverless computing that is evaluated in this
paper can be considered part of PaaS, in that in both of these models there is no server management
and developers should only think about writing the code.

III. SERVERLESS COMPUTING

Serverless computing is a cloud computing model where the cloud provider dynamically manages
the allocation and provisioning of the servers. In other words, applications are hosted by a
third-party service, which means a developer does not have to be tied up with hardware and
software management. The term itself can create a false impression that there are no servers behind
the scenes; however, the servers are not eliminated, they are simply hidden from the consumers.

In Figure 2, you can see an example of serverless architecture with the Amazon Web Services
provider.

In Figure 2, a client makes a POST request with a JSON body, and the parameters ought to be put
into a DynamoDB table. The API is provided by means of the API Gateway service, and the resource
should trigger a Lambda function. The Lambda function gets the parameters and puts them into the
database. The majority of PaaS, or Platform as a Service, products offer similar advantages, where
the developers likewise don't have to worry about the backend servers; yet at the same time there
exists a critical difference between these two technologies. The essential distinction lies in how the
application is composed and delivered to the cloud.

Figure 2. AWS Serverless Architecture.

In contrast, in a conventional 'server' design, the developers must guess how much capacity is
required for their application and buy it, regardless of whether they end up utilizing it or not.

At long last, the client has the advantage of quick deployment and updates. As there is no need for
backend design from the developer's side, it is possible to rapidly upload the code and release a
new item. Likewise, as the application is isolated into small functions, it isn't necessary to make
changes to the entire application; rather, each function can be updated in turn.

IV. TRADITIONAL ARCHITECTURE VS SERVERLESS COMPUTING

Netizens make some assumptions, such as that serverless computing means no servers are used
during the computing. These assumptions are erroneous. To demonstrate this, the example given
below shows the discrepancy between a traditional service architecture with n-tier server logic and
the architecture of serverless computing.

Figure 3. A traditional architecture where the server and database are managed by the developers.

The traditional architecture we have spoken of here refers to the client-server architecture. The
client-server model makes the communication between the client and the server by exchanging
protocols with the server and the database. This traditional architecture is comprised of two
components, where the client always makes a request and the server responds according to the
request that is made. But there are some issues related to this architecture: the required network
setup is difficult to manage, and it also requires a lot of servers. Also, maintaining these servers to
deliver efficient services becomes quite expensive in terms of capital.

In the serverless architecture, as shown in Figure 2, some changes are made involving the server
and database; new components are added, like the API Gateway and Lambda functions. Instead of
having one server for each type of functionality, our FAAS now has one piece of functionality for
each type of function. For example, if we have a function for searching for a product, then there
also exists a function for buying that product.

In the cloud arena, the application runs on stateless compute, which is brought up and brought
down according to the trigger. Triggers are simply coded events that invoke the service, for example
according to the HTTP request that is made by the client. An AWS Lambda trigger can be created
within the function console, the command-line interface and within the same cloud provider's
environment. There are three types of triggers that majorly exist in the cloud computing arena:

 HTTP Trigger: It gives the function call according to the content, with rich features; for
example a file, text or JSON body, and the PUT, POST and DELETE HTTP methods.

 Database Trigger: It performs all those functions which are used to alter the existing
database, like insertion, deletion, updating and modification queries on records in the table.

 Storage Object Trigger: It is used for tracking objects that exist in cloud storage, like a
database, and some metadata about the event calls and objects.

In AWS, a trigger for a function can be an HTTP request, or it can also be a request from another
service. The AWS cloud provides many different types of serverless capabilities. These include [1]:

 Compute - AWS Lambda
 APIs - Amazon API Gateway
 Storage - Amazon Simple Storage Service (S3)
 Databases - Amazon DynamoDB
 Interprocess messaging - Amazon Simple Notification Service (Amazon SNS) and Amazon
Simple Queue Service (Amazon SQS)
 Orchestration - AWS Step Functions and Amazon CloudWatch Events
V. AWS LAMBDA

Amazon Lambda was launched by Amazon Web Services in 2014, and it was the first serverless
computing platform. AWS Lambda defines a few key aspects like cost, programming model,
deployment, security and monitoring. It supports many languages, for example Node.js, Python,
Java, GoLang and .NET [2]. AWS Lambda assists the progress of functions, which get automatically
scaled up to enable parallel computation, and those functional applications can easily be deployed.
AWS Lambda provides the logic layer for the architecture. When the scaling of function calls, like
event-driven functions, increases and too many requests are demanded simultaneously, Lambda
creates multiple containers by making copies of the existing function to respond to each request,
and runs them in parallel. That is why the possibility of an idle stage for the container or the server
is minimal. Deployed applications that use this architecture, which includes Lambda functions, can
be cost-effective, and they are designed in a way that reduces the wasted capacity of resources.

AWS Lambda is a type of FAAS service. FAAS approaches the event-driven computing system.

Figure 4: Relationship among event-driven, FaaS and Serverless FaaS.

Serverless FaaS relies on a function as the unit of deployment and execution [1]. Serverless FAAS is
a type of FaaS where no virtual machines, Dockers or containers are present in the programming
model [1]. We can run any type of backend service or application code virtually in the Lambda
environment, and this code can also run with a high inflation of scalability and capacity. Users don't
have to write extra code to integrate their event source functions, and no management is needed
to match the scalability of the requests for a response. Users can focus on their logic layer, not on
the infrastructure that is required for the deployment of the application.

VI. FUTURE SCOPE WITH AWS LAMBDA

By using AWS services for business start-ups as well as running businesses, the customer or owner
can focus only on their business growth strategy instead of focusing on the technical infrastructure
for maintaining the servers, with capital cost on a per-use or pay-as-you-go basis; it also provides
high scalability to absorb sudden inflations of multiple simultaneous requests and responses. Also,
AWS provides very high data recovery and disaster recovery guarantees to its users in any case.
Amazon is also working on deploying services that work with AWS Lambda for fields like Artificial
Intelligence and the Internet of Things.

VII. REFERENCES

[1] Serverless Architectures with AWS Lambda - whitepaper by AWS.

[2] Yan M., Castro P., Cheng P., Ishakian V., Building a Chatbot with Serverless Computing. In
Proceedings of the 1st International ACM Workshop on Mashups of Things and APIs, Trento, Italy,
Dec 2016, 5 p.

[3] Hendrickson S., Sturdevant S., Harter T., Venkataramani V., Arpaci-Dusseau A.C., Arpaci-Dusseau
R.H., Serverless computation with open lambda. In Proceedings of the 8th USENIX Conference on
Hot Topics in Cloud Computing (Hot Cloud '16), Denver, CO, June 2016, 7 p.

[4] Baldini I., Castro P., Chang K., Cheng P., Fink S., Ishakian V., Mitchell N., Muthusamy V., Rabbah
R., Slominski A., Suter P., Serverless Computing: Current Trends and Open Problems., arXiv preprint
arXiv:1706.03178, June 2017, 20 p.

[5] Sill A. The design and architecture of microservices. IEEE Cloud Computing. 2016 Sep;3(5):76-80.

[6] H. Liu, A Measurement Study of Server Utilization in Public Clouds, Proc. 9th IEEE International
Conference on Cloud and Green Computing (CAG'11), Sydney, Australia, Dec 2011, pp. 435-442.

[7] M. Rehman, M. Sakr, Initial Findings for Provisioning Variation in Cloud Computing, Proc. of the
IEEE 2nd Intl. Conf. on Cloud Computing Technology and Science (CloudCom '10), Indianapolis, IN,
USA, Nov 30 - Dec 3, 2010, pp. 473-479.

[8] R. Buyya, “Cloud computing: The next revolution in information technology,” in Parallel
Distributed and Grid Computing (PDGC), 2010 1st International Conference on, pp. 2-3, Oct 2010.

[9] J. Lewis and M. Fowler, “Microservices,” 2014.

[10] A. W. Services, “AWS Lambda,” 2015.

[11] GIGAOM, “The biggest thing amazon got right: The platform,” 2011.

[12] Nginx, “Adopting microservices at Netflix: Lessons for architectural design,” 2015.

[13] InfoQ, “From a monolith to microservices + rest: the evolution of Linkedin's service
architecture,” 2015.
