Minor Project Report-Diksha Mishra, V Sem
SUBMITTED BY:
Name: Diksha Mishra
Roll no.: ASC11044
SUBMITTED TO:
I, the undersigned, declare that the project entitled “Serverless using AWS Lambda”, submitted
by me to Babasaheb Bhimrao Ambedkar University Satellite Campus (A Central University),
Amethi, in partial fulfilment of the requirements for the award of the degree of Bachelor of Science
in Information Technology, carried out under the guidance of Dr. Neeraj Tiwari, is my original work.
Signature of student
Date:
Certified that the above statement made by the student is correct to the best of our knowledge
and belief.
Acknowledgment
The satisfaction that accompanies the successful completion of this project would be
incomplete without mention of the people who made it possible, and without whose
constant guidance and encouragement my efforts would have gone in vain. I consider
myself privileged to express gratitude and respect to all those who guided me through the
completion of this project.
I convey my thanks to my project guide, Dr. Neeraj Tiwari of the Computer Science
department, for providing the encouragement, constant support, and guidance that were of
great help in completing this project.
Last but not least, I wish to thank my parents for financing my studies in this college as well
as for constantly encouraging me to learn. Their personal sacrifice in providing this
opportunity to learn is gratefully acknowledged.
Table of contents
Chapter
1. Declaration
2. Acknowledgment
3. List of figures
4. Cloud computing
4.1. Cloud computing models.
4.2. Cloud computing deployment models.
4.3. Advantages of cloud computing.
5. Amazon Web Services
6. AWS S3
6.1. Setup of S3 buckets.
6.2. Common use scenarios of S3.
6.3. Security measures with S3.
7. API Gateway
7.1. API architecture.
7.2. Features of API Gateway.
7.3. Ways to access API.
7.4. Role of API in serverless.
7.5. Components of API in serverless.
8. Lambda
8.1. Why should we go with Lambda?
8.2. Creating a Lambda function.
8.3. Use of Designer.
8.4. Invoking of Lambda.
8.5. Basic components of Lambda.
8.6. Features of Lambda.
9. DynamoDB
9.1. High availability and durability.
9.2. Security measures with DynamoDB.
10. Lambda implemented serverless architecture
11. Conclusion
3. List of figures
Figure
4.1. Types of cloud computing.
4.2. Cloud service models.
7.2. General features of API Gateway.
7.3. Integration of API Gateway with AWS Lambda.
9.1. Working of data extraction in DynamoDB.
9.2. Encryption techniques of DynamoDB.
Introduction
Amazon launched AWS (Amazon Web Services) in 2006, and since then Amazon has become a
major vendor in the arena of cloud computing web services. Back when timesharing systems
were used for scientific purposes or by organizations, providing a common resource pool, they
proposed a paradigm of computing that is known today as cloud computing. With this
computing technique, growing data capacity and computing power can be shared among many
users and offered to the general public as a service. Cloud computing has introduced a new
trending word: ‘serverless’. Serverless refers to function-as-a-service platforms that promise
end-users reduced hosting costs, high availability of resources, dynamic elasticity, and fault
tolerance. Instead of owning their own hardware infrastructure, a client or organization rents
the services provided by the cloud vendor. Almost all services for which the client does not
need to be physically present at the computing hardware can be conveyed by the vendors. In
this project, the AWS Lambda service is used to build efficient serverless architectures. Lambda
provides function-as-a-service: it supplies the runtime environment for events and creates
instances for requests and responses. Lambda manages the concurrency of requests and
methods that are invoked simultaneously. AWS Lambda provides developers with an
environment that frees them from the administrative and management responsibilities of
application development.
The technologies used in this project:
1. AWS S3
2. API Gateway
3. AWS Lambda
4. DynamoDB
5. Amazon CloudFront
4. Cloud computing
With the emergence of cyberspace, traditional architectures have largely been replaced
by cloud computing. Cloud computing is a way of accessing, manipulating, and retrieving
data over the internet, as well as using resources such as OS platforms, storage, memory,
and predefined application interfaces. For start-up businesses in cyberspace, cloud
computing can be a major choice of platform for acquiring services like server management,
API interfaces, etc. The concept of cloud computing dates back to the 1950s, when
mainframe computing gradually grew in popularity. Multiple users were able to access a
central computer, or mainframe, through dumb terminals. By providing a single resource
pool for sharing data and information, with large storage capacity at low expense, this
concept became the solution to sophisticated, expensive technology for businesses.
Some time later, around the 1970s, virtual machine (VM) technology was introduced.
With this technology, users could finally operate multiple operating systems simultaneously
in a single environment. In simple words, complete systems with different (or the same)
operating systems can run on a single hardware system. Virtual machine technology took
the shared technology of mainframe computers to a new level by permitting different
computing environments on one physical system.
Figure 4.1. Types of cloud computing.
4.1. Cloud computing models
SaaS: Software as a Service is a model that gives the user quick access to services
and software according to their requirements. The whole integrated computing stack
is controlled by the cloud vendor and is accessible through the client's web browser.
Applications run in the cloud under a paid licensed subscription; users can also use
some applications for free, but in that case the services have limited access.
Because of SaaS, there is no need to install software, or any IDE platform the
application needs to run, on the existing computing infrastructure.
IaaS: Infrastructure as a Service provides a virtual arrangement of computing
resources over the cloud network. The client can opt for whichever computing
resources are needed as physical infrastructure, such as storage, memory, and
networking hardware, as well as maintenance and support for them.
PaaS: Platform as a Service greatly reduces the complexity of the software
development process for an organization. PaaS provides a virtual environment
where an organization can develop, test, and organize the platform for application
deployment. These services are provided in the form of servers, storage, networking,
and the IDE platforms needed for application development.
Figure 4.2. Cloud Service Models.
4.2. Cloud computing deployment models
Cloud deployment models are designated according to factors such as who controls the
deployment and where the deployment infrastructure resides. Each cloud deployment
model also delivers distinct services for the best user experience. The following cloud
deployment models are available:
1. Public cloud deployment model: As the name suggests, any user who wants to make
use of computing resources such as hardware (CPU, storage, memory, etc.) and software
(OS, application servers, and databases) can use this model on a subscription basis. This
type of cloud model is mainly used for non-critical tasks and for testing and developing
application software.
2. Private cloud deployment model: This cloud deployment model is used by a single
organization for purposes like file sharing, package sharing, and other resources
related to the organization's work. The management of this type of infrastructure is
shared by both parties, i.e. the service provider and the organization itself. A private
cloud is more expensive in terms of capital than the public cloud deployment model,
due to acquiring and maintaining the private services. On the other hand,
infrastructures with private cloud deployment models have better security and
privacy than public and hybrid clouds, because everything is managed through the
shared responsibilities of the organization and the service provider. In simple words, a
private cloud deployment infrastructure is majorly managed by the organization
itself, so the user experience is far better for its uses.
3. Hybrid cloud deployment model: This deployment model is best for organizations
that occasionally need to increase the scalability of their resources on short notice.
Many organizations interconnect the public and private cloud deployment models for
their use. This model lets the private cloud deployment leverage public cloud
resources to supplement its own.
4.3. Advantages of cloud computing
The major advantages of AWS cloud computing are mentioned below:
1. Variable capital expenses: Clients don't have to make heavy investments in buying
servers and hardware. Clients only pay as they consume resources and according to
their usage time. They don't have to pay upfront as with traditional server and
hardware provisioning.
2. Freedom from capacity limits: Clients can decide when to deploy web services
instead of sitting on idle resources or dealing with limited capacity. With AWS,
clients can use as much or as little capacity as they need and scale up and down as
required at any moment.
3. Increased speed and agility: In a cloud computing environment, resources are
available to developers at any moment, instead of after weeks. This results in a
major increase in the organization's agility, since the cost and time of developing
and experimenting with web services or web applications are significantly reduced.
4. No cost of maintaining data centers: AWS cloud computing lets the client focus
only on their business implementation, not on infrastructure chores like the heavy
lifting of racking, stacking, and powering servers.
5. Rapid global deployment: The client can easily deploy a service or application
across regions around the globe with just a few configurations and a few clicks.
This means users of the client's service or application get lower latency and a
better experience at minimal cost.
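The pay-as-you-go model behind advantage 1 can be sketched numerically. The rates below are purely illustrative placeholders, not real AWS prices, and the helper is an assumption made for this report, not an AWS API:

```python
# Sketch of pay-as-you-go billing: cost follows actual usage instead of
# upfront server purchases. Both rates are hypothetical placeholders.
ILLUSTRATIVE_PRICE_PER_GB_SECOND = 0.0000166667  # hypothetical compute rate
ILLUSTRATIVE_PRICE_PER_REQUEST = 0.0000002       # hypothetical request rate

def monthly_compute_cost(requests: int, avg_duration_s: float, memory_gb: float) -> float:
    """Estimate a monthly bill from usage alone (no idle-server cost)."""
    gb_seconds = requests * avg_duration_s * memory_gb
    return (gb_seconds * ILLUSTRATIVE_PRICE_PER_GB_SECOND
            + requests * ILLUSTRATIVE_PRICE_PER_REQUEST)

# A quiet month costs almost nothing; a busy month scales linearly with use.
quiet = monthly_compute_cost(requests=10_000, avg_duration_s=0.2, memory_gb=0.128)
busy = monthly_compute_cost(requests=10_000_000, avg_duration_s=0.2, memory_gb=0.128)
```

The point of the sketch is that a thousand-fold jump in traffic simply means a thousand-fold bill, with no pre-purchased capacity sitting idle in quiet months.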
5. Amazon Web Services
5.1. AWS Management Console
AWS services can be accessed through the common console of Amazon Web Services. It is a
simple, user-friendly web interface for managing the services. Clients have to make an
account by giving the necessary information to the cloud vendor. Not only business users
but even students can use AWS services by making a free-tier account with basic, minimal
pricing. The console can be accessed from any network-connected device, anywhere.
5.2. Security and Compliance by AWS
The security of clients' data and information is AWS's highest priority. In the AWS cloud,
the client doesn't have to maintain servers and hardware personally for security reasons;
they maintain and fine-tune their levels of security through software-based security tools
and permissions on the flow of data.
The client maintains control of the security they choose to implement to protect their own
content, data, facts, figures, and network, which is no different from configuring and
maintaining an onsite data center.
The AWS cloud uses a shared responsibility model of security, in which the client has full
control over their security configurations while the underlying platform is managed by the
cloud vendor. This means users retain control of the security of their platform, applications,
shared cloud data, systems, and networks, no differently than if they were present at the
data-center site themselves. Clients get access to hundreds of tools and ways to maintain
network security, data encryption, and access control on the AWS cloud platform.
The IT infrastructure provided by AWS enables you to share the responsibilities of choosing
the compliance and security management of the service infrastructure built on their
platform. The AWS infrastructure supports several security standards. The following is a
partial list of security standards with which AWS complies:
6. Amazon S3
Amazon S3 stands for Simple Storage Service. It is a storage service provided by Amazon
Web Services. Clients can use Amazon S3 to retrieve any data that has already been
uploaded to an S3 bucket (storage folders in S3 are formally known as buckets) from any
region at any moment. These Amazon S3 buckets can be configured through the AWS
Management Console.
Data is stored in the form of objects in an S3 bucket. Attached to each individual object
is metadata (data about the data) that describes file information such as origin, size, and
date of creation. To store an object in an S3 bucket, the user uploads the content and also
sets the permissions for access to the bucket or to the bucket's metadata.
Basically, buckets are containers for objects. Users can create as many buckets as they
want for a single application or instance. Users also have the right to choose the location
or geographical region for a bucket, and can set privacy and security measures for access
to the bucket and for its logs. The logs record things like who created the bucket, who
deleted the bucket, and lists of the objects in the bucket.
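The pairing of an object's content with its metadata can be sketched as a plain data structure. The helper below only builds the request shape; it does not call S3, and the bucket name, key, and metadata fields are invented for illustration:

```python
from datetime import datetime, timezone

def build_put_request(bucket: str, key: str, body: bytes) -> dict:
    """Bundle an object's body with descriptive metadata (data about the data)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "Body": body,
        "Metadata": {  # user-defined metadata stored alongside the object
            "origin": "minor-project-demo",
            "size-bytes": str(len(body)),
            "created": datetime.now(timezone.utc).isoformat(),
        },
    }

req = build_put_request("my-demo-bucket", "notes/report.txt", b"hello serverless")
```

In a real upload this dictionary's shape mirrors the parameters a PUT request to S3 would carry, with the metadata travelling next to the content.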
This diagram shows the various operations that can be performed on an S3 bucket.
2. Create an S3 bucket:
1. Sign in to the AWS Management Console and open the Amazon S3 console at
https://console.aws.amazon.com/s3/.
3. For the name of the bucket in the name column, a unique DNS-compliant name is
required. The given example of the S3 bucket configuration illustrates the columns and
their entries for the creation of buckets.
There are several guidelines for creating a bucket, mentioned below:
The DNS-compliant name must be unique across all existing buckets in all
regions.
Once the name is assigned, the user can't change the name of the bucket.
When naming the bucket, the user can choose a name that reflects the type or
property of the objects in the bucket, because the bucket name is visible in the
URL that points to the objects the user is going to upload.
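The naming guidelines above can be sketched as a small checker. The regular expression is a simplification of Amazon's full bucket naming rules (3 to 63 characters; lowercase letters, digits, hyphens, and dots; starting and ending with a letter or digit) and does not cover every edge case:

```python
import re

# Simplified DNS-compliance rule: 3-63 chars, lowercase letters, digits,
# hyphens and dots, with an alphanumeric first and last character.
_BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_dns_compliant(name: str) -> bool:
    """Return True if the name looks like a DNS-compliant bucket name."""
    return bool(_BUCKET_NAME.fullmatch(name))
```

For example, `my-project-bucket` passes, while `My_Bucket` fails on the uppercase letter and underscore, and `ab` fails on length.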
Objects can be any type of file (text, picture, or video) to be uploaded into the
bucket.
1. To upload an object, from the list of bucket names, choose the bucket that is
going to be used.
2. Choose Upload.
3. From the Upload dialog box, choose the file that has to be uploaded.
5. Choose Upload.
4. View an object in the S3 bucket:
After successfully uploading an object, to view information about the object and
download the object file to the user's system, follow these steps:
2. From the name list, select the checkbox that is present right next to the object
that you have uploaded, and then select the Download option on the object
overview panel.
5. To move an object:
If the user wants to move an object from its existing location to another folder,
the steps are as follows:
1. First, create a folder, then copy the object and paste it into the other
folder. From the bucket name list, the user has to choose the name of the
bucket that he had created.
2. Choose Create folder.
3. Now the user has to enter the folder name, and for the folder encryption
setting the user can choose the security configuration appropriate to the
objects of the bucket.
4. After completing the configuration and filling in the requested information,
choose the Paste option.
If objects that have been uploaded to the S3 bucket are no longer needed by the
user, then, so that no further charges are applied, the user can delete the object
or the bucket from S3.
To delete objects:
1. To delete an object, from the bucket's list of objects, the user has to
select the checkbox beside the name of the object.
2. After selection, the user has to choose the Actions option and then the
Delete option.
To empty the bucket:
Emptying a bucket means deleting all the objects that exist in that bucket.
To confirm emptying the bucket, enter the name of the bucket in the Empty
dialog box and choose Confirm.
To delete the entire bucket:
You can delete a bucket and all the objects in it. Once the bucket is deleted, the
domain name of the bucket becomes available for use again.
From the bucket name list, choose the icon next to the name of the bucket
that you want to delete and choose the Delete bucket option.
To confirm the deletion, enter the name of the bucket in the Delete dialog
box and then choose Confirm.
6.2. Common use scenarios of S3
The AWS solutions web page lists multiple ways in which an S3 bucket can be used.
The following list summarizes some of those ways:
Backup and storage: S3 provides data backup and storage services for others.
Application hosting: S3 provides services for the deployment, installation, and
management of web applications.
Media hosting: S3 supports building highly scalable, redundant, and highly
available infrastructure that hosts video, photo, and music uploads and
downloads.
Software delivery: Host software applications that customers can download.
6.3. Security measures associated with S3
All resources that come under Amazon S3, such as buckets, objects, and related
subresources (like metadata about the objects and the logs related to them), have
private access security by default. Private access security means that access is
permitted only to the root account owner. The owner can also grant access by creating
security policies.
Amazon S3 access policies are categorized into two parts:
Resource-based policies: Policies that are attached to a resource such as a
bucket are known as resource-based policies. For example, bucket policies and
access control lists are resource-based policies.
User-based policies: Policies that are related to root account security or
IAM role policies.
The user can choose either resource-based policies or user-based policies, or a
combination of both, according to the needs of the organization and the owner. The
following figure shows the ways in which the user can secure their bucket for use.
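As a concrete illustration of a resource-based policy, the sketch below builds a bucket policy document that grants public read access to objects. The bucket name is a placeholder, and this only constructs the JSON; attaching it would be done through the S3 console or API:

```python
import json

def public_read_policy(bucket: str) -> str:
    """Build a minimal S3 bucket policy allowing anyone to GET objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",                         # any requester
            "Action": "s3:GetObject",                 # read objects only
            "Resource": f"arn:aws:s3:::{bucket}/*",   # every object in the bucket
        }],
    }
    return json.dumps(policy)
```

A user-based policy would carry the same statement structure but be attached to an IAM user or role instead of the bucket.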
7. Amazon API Gateway
Amazon API Gateway (Application Programming Interface Gateway) provides services for
developing, creating, maintaining, publishing, monitoring, and securing REST and
WebSocket APIs at any scale. API Gateway is used to create an interface for a client service
provided by the organization or for an AWS service consumer. API developers can create
APIs that are able to access any services and data of the AWS cloud. API Gateway creates
two types of API: REST APIs and WebSocket APIs.
7.1. The architecture of API Gateway
This diagram explains that any API created on the Amazon API Gateway interface
provides the organization's developers and users with an integrated and consistent
developer environment and experience for building applications that:
access data;
contain business logic;
provide functionality for backend services;
maintain workloads on Amazon Elastic Compute Cloud (Amazon EC2);
run code on AWS Lambda;
create real-time communication applications.
7.2. Features of API Gateway
Figure 7.2. The diagram shows the general features of API Gateway.
7.3. Ways to access API Gateway
Users can access AWS API Gateway in the following ways:
AWS Management Console.
AWS SDKs.
AWS Command Line Interface.
AWS Tools for Windows PowerShell.
7.4. Role of API Gateway in serverless
When AWS Lambda is integrated with API Gateway, it provides the application interface
that enables serverless computing for the user. The code runs on AWS Lambda, which
provides a highly available computing infrastructure. When the runtime environment is
created, Lambda prepares all the execution and administration resources for efficient
work. With Lambda and HTTP endpoints, API Gateway also offers streamlined proxy
integrations.
Figure 7.3. This diagram illustrates the integration of API Gateway with AWS Lambda.
7.5. Components of API Gateway in serverless
REST APIs use HTTP as the underlying protocol for communication, which follows the
request-response paradigm. A REST API is a collection of HTTP resources and methods
that are integrated with backend HTTP endpoints, Lambda functions, or other AWS
services. The logical structure of the resources is designed and arranged according to
the application type in a resource tree.
Integration request:
The internal interface of a REST or WebSocket API in API Gateway maps the body of a
route request, or the parameters and body of a method request, to the formats required
by the backend.
Integration response:
The response received from the backend is mapped to a status code, headers, and
payload by API Gateway and then returned to the client.
Mapping template:
A mapping template can be stated for request integration and response integration. At
run time, templates reference data made available as context and stage variables. A
mapping template is a collection of Velocity Template Language (VTL) scripts that
transforms the request body from the frontend data format of the API to the backend
data format, and the response body from the backend data format to the frontend data
format.
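Real mapping templates are written in VTL, but the transformation they perform can be sketched in plain Python. The field names on both the frontend and backend sides here are invented for illustration:

```python
def map_request(frontend_body: dict) -> dict:
    """Map a frontend API request body to the format the backend expects."""
    return {
        "customer_id": frontend_body["userId"],    # rename a frontend field
        "query": frontend_body.get("q", ""),       # supply a default when absent
        "page_size": int(frontend_body.get("limit", 10)),
    }

def map_response(backend_body: dict) -> dict:
    """Map the backend response back to the frontend data format."""
    return {"results": backend_body["items"], "total": backend_body["count"]}
```

The VTL template does the same renaming and defaulting, but declaratively, inside API Gateway, so neither the client nor the backend has to know the other's format.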
Method request:
A method request is the public interface of a REST API method in API Gateway that
defines the parameters and body that an app developer must send in requests to access
the backend through the API.
Method response:
This public interface defines the status code, headers, and body models that the
developer should expect in responses returned by API Gateway.
8. AWS Lambda
AWS Lambda is a computing service provided by AWS that lets users and clients run code
without managing or arranging servers. Lambda executes requests when needed and scales
automatically, from a few requests per day to thousands of requests per day. No charges are
made while the code is not running; charges apply only when the code enters the runtime
environment for execution. With AWS Lambda, code runs in a fully virtual environment
without any administration. AWS Lambda runs code on a high-availability computing
infrastructure and manages all the other required computing resources, such as server
management, operating system maintenance, handling capacity changes by scaling up and
down automatically, code monitoring, and logging.
With Lambda integration, we can run code in response to events such as:
Changes to existing data in an S3 bucket or DynamoDB.
Responding to HTTP requests through API Gateway.
Invocations made with the AWS SDKs through API Gateway.
With these integrations and capabilities, users can build different kinds of data-processing
triggers for event responses, process streaming data stored in Kinesis, or create a backend
that operates at AWS scale, performance, and security.
With AWS Lambda, developers can build serverless applications composed of multiple
functions that are triggered by events, and can also deploy them automatically using
AWS CodeBuild.
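A Lambda function answering an API Gateway HTTP request can be as small as the sketch below. The event shape follows the API Gateway proxy integration; the greeting logic itself is just an example:

```python
import json

def lambda_handler(event, context):
    """Return an HTTP-style response for an API Gateway proxy event."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

When deployed behind API Gateway, a request like `GET /hello?name=Diksha` arrives as the `event` dictionary, and the returned dictionary is translated back into the HTTP response the client sees.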
8.1. Why should we go with AWS Lambda?
When a client uses AWS Lambda, they are responsible only for their code. AWS Lambda
manages all the other computing resources, such as CPU, memory, storage, network, and
everything relevant to them. This is great leverage for developers, in that they don't
have to configure and manage computing instances or customize operating systems for
the runtime environment they need. AWS Lambda manages and performs all administrative
activities on behalf of the user, such as monitoring fleet health, applying security
patches, provisioning capacity, deploying code, and monitoring and logging Lambda
functions.
If the user wants to organize their computing resources on their own, AWS provides
other services that let the user handle the configuration:
Amazon Elastic Compute Cloud (EC2): This service provides a wide range of
computing resources, but instead of the vendor handling and configuring the
resources, they are handled by the user. The customization options include
customized operating systems, network and security settings, and the entire
software stack, but you are responsible for provisioning capacity, monitoring
health and logs, and using Availability Zones for fault tolerance.
Elastic Beanstalk: An easy-to-use service for deploying and scaling applications
onto EC2 instances, which remain under the full control of the user.
8.2. Creating a Lambda function with the console
After logging in to the AWS console, choose the service named 'AWS Lambda' from the
services list. The Lambda console then opens, and here the user can create a Lambda
function. The Lambda function executes and returns responses. The monitoring of Lambda
function logs can be done through CloudWatch metrics.
After the above configuration, Lambda creates the function with Node.js along with an
execution role that grants permissions to the function. Lambda assumes the execution role
when the user invokes the function, and uses it to create credentials for the SDKs and to
read data from event sources.
Now the user has to choose the 'my-function' option in the Designer section to return to
the panel of function code and configuration. Lambda offers multiple code samples that
return a success response. The user can also write their own function code, but the
function code size is limited to 3 MB in the console editor.
Users can change the keys and values in the sample JSON event template, but there should
be no change in the structure of the queries. If the user wants to change key values, they
should maintain the structure and format of the template; otherwise Lambda doesn't
execute it and the user won't get the expected result.
On behalf of the user, the handler in the Lambda function receives and then executes
the sample event.
When the execution of the Lambda function is successful, the results are shown in the
console. There are three sections in the console, named execution result, summary,
and log output.
After running the Lambda function a few times, some metrics get collected, and
that log data can be seen in CloudWatch. Choose 'Monitoring' and then the CloudWatch
console to see all the logs of the Lambda function.
8.5. Basic components of Lambda
AWS Lambda tends to be invoked through Amazon API Gateway or by the invocation
configuration of other services. AWS Lambda runs the function code for event sources.
Some basic components of AWS Lambda are mentioned below:
Function
Runtime
Event
Concurrency
Trigger
Function: A function is a resource that contains the code for the event that has to be
invoked. A function's code processes events, and a runtime relays the requests and
responses between Lambda and the function code.
Runtime: The runtime is the basic environment in which the function code executes.
The runtime environment supports many different languages. The runtime sits between
Lambda and the function code, relaying context information, invocation events, and the
responses exchanged between them.
Event: An event is a JSON-formatted document that contains the data for the function to
process. The developer determines the structure and contents of events that are used
when the user invokes the function; when an AWS service invokes the function, the
service defines the event.
Concurrency: Concurrency can be defined as the number of requests that can be handled
by a Lambda function at the same time. Whenever the function gets invoked, Lambda
arranges instances to process the events. When the function code finishes, the Lambda
instance prepares itself for the next request. When the next request arrives while the
previous request is still running, Lambda creates a new instance to respond to the new
request. This arrangement of generating new instances, which results in increased
function capacity, is known as the function's concurrency.
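The concurrency behaviour described above can be estimated with a back-of-the-envelope formula (Little's law): concurrent instances are roughly the request rate times the average duration. The helper below is a sketch of that arithmetic, not an AWS API:

```python
import math

def required_concurrency(requests_per_second: float, avg_duration_s: float) -> int:
    """Instances Lambda would need running at once to keep up with the load."""
    return math.ceil(requests_per_second * avg_duration_s)

# 100 requests/s that each take 0.5 s keep about 50 instances busy at once.
```

This is why a function that finishes quickly needs far fewer simultaneous instances than a slow one, even at the same request rate.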
Trigger: A trigger is a configuration that is used to invoke a Lambda function. Triggers
can be generated by other AWS services, included in applications that the user develops,
or defined in event source mappings. An event source mapping is a resource that reads
items from a queue or stream and invokes the function.
8.6. Features of AWS lambda
9. DynamoDB
DynamoDB is a fully managed NoSQL database service that provides fast and
predictable performance with seamless scalability. DynamoDB frees users from the
administrative burdens of operating and scaling a distributed database, so users don't
have to worry about hardware provisioning, setup and configuration, replication,
software patching, or cluster scaling. DynamoDB also offers encryption at rest, which
decreases the operational burden and complexity involved in protecting sensitive data.
DynamoDB is capable of storing and retrieving large amounts of data, and it can handle
many concurrent requests at the same time. The performance and capacity of DynamoDB
can be scaled up and down as the application requires. The consumption of resources can
be monitored through the AWS console.
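Data stored in DynamoDB is shaped as items whose attribute values carry type descriptors ("S" for string, "N" for number). The sketch below only builds that request shape; the table and attribute names are invented for illustration, and nothing is sent to AWS:

```python
def build_put_item(table: str, user_id: str, score: int) -> dict:
    """Build the request shape for writing one item to a DynamoDB table."""
    return {
        "TableName": table,
        "Item": {
            "UserId": {"S": user_id},    # partition key, string type
            "Score": {"N": str(score)},  # numbers travel as strings on the wire
        },
    }

req = build_put_item("demo-scores", "u-123", 42)
```

The same typed-attribute shape appears in query and scan results, which is what lets DynamoDB stay schemaless while still knowing how to index and compare values.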
AWS DynamoDB provides on-demand backup. Users can create a backup of a whole
table at any moment with a single click in the AWS Management Console. These
backup and restore actions have no impact on the existing tables; in simple
terms, no changes occur to the data in the existing tables. DynamoDB also
provides point-in-time recovery, which protects existing tables against
accidental write and delete operations. With point-in-time recovery, a table can
be restored to any point in time within the last 35 days.
To reduce data redundancy, DynamoDB can automatically delete items that have
expired or are no longer in use (time to live, or TTL). This saves cost and time
for the operations applied to the DynamoDB table.
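To make the capacity, encryption, and expiry points above concrete, the following sketch builds the request parameters such a table might use (the table and attribute names are hypothetical). With boto3 they would be passed to `client.create_table(**table_params)` and `client.update_time_to_live(**ttl_params)`; those calls are left out so the sketch runs without AWS credentials.

```python
# Hypothetical table definition: on-demand capacity scales with traffic,
# SSE enables encryption at rest, and the TTL attribute lets DynamoDB
# delete expired items automatically.
table_params = {
    "TableName": "Orders",
    "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"},
    ],
    "BillingMode": "PAY_PER_REQUEST",       # scale up and down on demand
    "SSESpecification": {"Enabled": True},  # encryption at rest
}
ttl_params = {
    "TableName": "Orders",
    "TimeToLiveSpecification": {
        "Enabled": True,
        "AttributeName": "expires_at",  # epoch time after which items expire
    },
}
print(table_params["BillingMode"])  # PAY_PER_REQUEST
```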
Figure 9.1. This diagram illustrates the working of data extraction in DynamoDB.
The distribution of data among different availability zones provides high
availability and durability of the data. AWS also provides the global tables
feature, which enables the user to sync the data across different regions.
Figure 9.2. Encryption techniques of DynamoDB.
10. Lambda-implemented serverless architecture
We have experimented with many technologies so far, and the latest buzzword is
'Serverless Computing'. This trending technology lets users focus on their
business or development strategies instead of maintaining physical, operational,
and administrative requirements. Serverless computing is closely tied to DevOps
(development and operations), which is all about speed.
With the implementation of AWS Lambda and a serverless architecture, delivery and
performance can be faster than with traditional service methods, while also being
highly cost-effective in multiple ways.
A fully serverless architecture combines multiple other services with Amazon API
Gateway and AWS Lambda. Serverless applications use a mixture of services that
allows developers to develop and deploy applications efficiently and quickly,
without any responsibility for maintaining physical servers and infrastructure.
The services used in this Lambda-implemented serverless architecture are listed
below:
Amazon S3
API Gateway
Amazon Cognito User Pool
AWS Lambda
Amazon DynamoDB
These services allow developers to develop applications without provisioning EC2
instances or writing Puppet scripts.
Figure 10.1. The architecture of Serverless computing with AWS Lambda.
2. CloudFront routes the request to edge locations, which are data centers
located around the globe; this lowers latency (time delay) and provides a
better experience to clients by caching the data.
3. Amazon S3 hosts static website content such as HTML, CSS, and JavaScript. All
content of the website that is static in nature is delivered through S3.
4. Amazon Cognito User Pool handles user management and verifies the identity of
the user (authentication).
5. While all static content is delivered by the S3 bucket, dynamic content is
served through Amazon API Gateway. In this architecture, a REST API is used
for the operations. Amazon API Gateway provides a secure endpoint for
exchanging requests.
6. AWS Lambda works on top of DynamoDB, provides the computation for the
instances, and processes the events as requests. It performs the CRUD
operations (Create, Read, Update, and Delete).
7. Amazon DynamoDB provides backend support with a NoSQL database whose capacity
scales elastically with the traffic of the web application.
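Steps 5-7 can be sketched in a few lines of Python: an API Gateway (Lambda-proxy style) event is dispatched to CRUD logic against a table object. The table is injected so the sketch runs without AWS; inside Lambda it would be `boto3.resource("dynamodb").Table("Orders")`. The table and key names here are illustrative assumptions, not from the report.

```python
import json

def make_handler(table):
    """Build a handler performing CRUD against the given table object."""
    def handler(event, context):
        method = event["httpMethod"]
        if method == "POST":                      # Create
            item = json.loads(event["body"])
            table.put_item(Item=item)
            return {"statusCode": 201, "body": json.dumps(item)}
        if method == "GET":                       # Read
            key = event["queryStringParameters"]["order_id"]
            item = table.get_item(Key={"order_id": key}).get("Item")
            return {"statusCode": 200 if item else 404,
                    "body": json.dumps(item)}
        return {"statusCode": 405, "body": "method not allowed"}
    return handler

# In-memory stand-in mimicking the small slice of the Table API used above
class FakeTable:
    def __init__(self):
        self.items = {}
    def put_item(self, Item):
        self.items[Item["order_id"]] = Item
    def get_item(self, Key):
        return {"Item": self.items.get(Key["order_id"])}

handler = make_handler(FakeTable())
handler({"httpMethod": "POST", "body": '{"order_id": "42", "qty": 1}'}, None)
resp = handler({"httpMethod": "GET",
                "queryStringParameters": {"order_id": "42"}}, None)
print(resp["statusCode"])  # 200
```

Injecting the table keeps the handler unit-testable; swapping in the real DynamoDB table changes nothing in the handler body.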
With this architecture, users can deploy their entire web application stack very
quickly, without managing servers, guessing at capacity, or paying extra for idle
resources. In addition, much of the concern for security measures, reliability,
and performance is handled by the platform.
11. Conclusion
In the era of cloud computing, where physical infrastructure is no longer a
concern, Amazon Web Services provides a number of secure and efficient services
within its Virtual Private Cloud. This mechanism is far more effective than
traditional services for businesses and start-ups. AWS Lambda's implementation
provides high scalability, durability, and easy management and administration for
an organization's infrastructure. The architecture proposed in this project is a
cost-effective as well as administratively effective solution for any business
startup, letting the organization focus on its business strategies rather than on
its infrastructure.
12. References
[2] Yan M., Castro P., Cheng P., Ishakian V., Building a Chatbot with Serverless Computing. In Proceedings of the 1st International ACM Workshop on Mashups of Things and APIs, Trento, Italy, Dec 2016, 5 p.
[3] Hendrickson S., Sturdevant S., Harter T., Venkataramani V., Arpaci-Dusseau A.C., Arpaci-Dusseau R.H., Serverless computation with OpenLambda. In Proceedings of the 8th USENIX Conference on Hot Topics in Cloud Computing (HotCloud '16), Denver, CO, June 2016, 7 p.
[4] Baldini I., Castro P., Chang K., Cheng P., Fink S., Ishakian V., Mitchell N., Muthusamy V., Rabbah R., Slominski A., Suter P., Serverless Computing: Current Trends and Open Problems. arXiv preprint arXiv:1706.03178, June 2017, 20 p.
[5] Sill A., The design and architecture of microservices. IEEE Cloud Computing, Sep 2016, 3(5):76-80.
[6] Liu H., A Measurement Study of Server Utilization in Public Clouds. Proc. 9th IEEE International Conference on Cloud and Green Computing, Sydney, Australia, Dec 2011, pp. 435-442.
[7] Rehman M., Sakr M., Initial Findings for Provisioning Variation in Cloud Computing. Proc. of the IEEE 2nd Intl. Conf. on Cloud Computing Technology and Science (CloudCom '10), Indianapolis, IN, USA, Nov 30 - Dec 3, 2010, pp. 473-479.
[8] Buyya R., "Cloud computing: The next revolution in information technology," in Parallel Distributed and Grid Computing (PDGC), 2010 1st International Conference on, pp. 2-3, Oct 2010.
[11] GIGAOM, "The biggest thing Amazon got right: The platform," 2011.
Implementation of AWS Lambda in Business Arena
Diksha Mishra
CS & IT
BBAU Satellite Campus,
Amethi.
mdiksha40@gmail.com
Abstract— Beyond the traditional microservices used to serve applications over
the internet, cloud computing is a major emerging service nowadays, as it
provides function-as-a-service. For new business implementations where web
applications are required to run the business, planning the required
infrastructure as well as its maintenance and management is a hectic concern for
business owners. Instead of using traditionally hosted microservices, the cloud
computing services provided by cloud vendors are a major relief for these
concerns. AWS is a major cloud vendor that provides over 165 web services,
including AWS Lambda and AWS API Gateway, which are the services required to
build web-service infrastructure for business implementations. Serverless is
cost-effective, with a pay-as-you-go pricing model, and the security and
compliance of the related web services are fully managed by the cloud vendor.

II. THE MODELS OF CLOUD COMPUTING

There are three major models of cloud computing: PaaS (Platform as a Service),
SaaS (Software as a Service), and IaaS (Infrastructure as a Service).
The traditional client-server model makes the communication between the client
and the server by exchanging protocols between the server and the database. This
traditional architecture comprises two components, where the client always makes
the request and the server responds according to the request made. But there are
some issues with this architecture: the required network setup is difficult to
manage, and it requires a lot of servers. Keeping these servers maintained so
that they deliver efficient services also becomes quite expensive in terms of
capital.

In the serverless architecture, as shown in Figure 2, some changes are made to
the server and database, and new components such as API Gateway and Lambda
functions are added. Instead of having one server for every type of
functionality, the FaaS model now has one function for each piece of
functionality. For example, if there is a function for searching for a product,
there is also a function for buying that product.

Figure 2 shows the case of a serverless architecture with Amazon Web Services as
the provider. In Figure 2, a client sends a POST request with a JSON body, and
the parameters should be placed into a DynamoDB table. The API is exposed by
means of the API Gateway service, and the resource triggers a Lambda function.
The Lambda function receives the parameters and puts them into the database.
Most PaaS (Platform as a Service) products offer similar advantages, where
developers likewise do not need to worry about the backend servers, yet there
still exists a critical difference between these two approaches. The essential
distinction is in how the application is composed and delivered to the cloud.
V. AWS LAMBDA

AWS Lambda was launched by Amazon Web Services in 2014, and it was the first
serverless computing platform. AWS Lambda defines a few key aspects such as
cost, programming model, deployment, security, and monitoring. It supports many
languages, for example Node.js, Python, Java, Go, and .NET [2]. AWS Lambda
supports functions that are automatically scaled up and enables parallel
computation, and such functional applications can easily be deployed. AWS Lambda
provides the logic layer for the architecture.

When the number of event-driven function calls increases and too many requests
arrive simultaneously, Lambda creates multiple containers by making copies of
the existing function, responding to each request and running them in parallel.
That is why the possibility of an idle container or server is minimal.
Applications deployed on this architecture, which includes Lambda functions, can
be cost-effective, and they are designed in a way that reduces wasted resource
capacity. AWS Lambda is a type of FaaS service; FaaS is an approach to
event-driven computing.

Serverless computing relieves organizations from focusing on technical
infrastructure for maintaining servers; capital cost follows a per-use or
pay-as-you-go model, and it provides high scalability to absorb sudden
inflations of multiple simultaneous requests. AWS also provides a 9.9999% data
recovery and disaster recovery guarantee to users in any case. Amazon is also
working on deploying services that work with AWS Lambda in fields like
Artificial Intelligence and the Internet of Things.

VII. REFERENCES

[1] Serverless Architectures with AWS Lambda, whitepaper by AWS.

[2] Yan M., Castro P., Cheng P., Ishakian V., Building a Chatbot with Serverless
Computing. In Proceedings of the 1st International ACM Workshop on Mashups of
Things and APIs, Trento, Italy, Dec 2016, 5 p.

[3] Hendrickson S., Sturdevant S., Harter T., Venkataramani V., Arpaci-Dusseau
A.C., Arpaci-Dusseau R.H., Serverless computation with OpenLambda. In
Proceedings of the 8th USENIX Conference on Hot Topics in Cloud Computing
(HotCloud '16), Denver, CO, June 2016, 7 p.

[4] Baldini I., Castro P., Chang K., Cheng P., Fink S., Ishakian V., Mitchell
N., Muthusamy V., Rabbah R., Slominski A., Suter P., Serverless Computing:
Current Trends and Open Problems. arXiv preprint arXiv:1706.03178, June 2017,
20 p.