
Linköping University | Department of Computer and Information Science

Bachelor’s thesis, 16 ECTS | Microservice


2023 | LIU-IDA/LITH-EX-G--23/058--SE

Comparative Study of REST and gRPC for Microservices in Established Software Architectures

Isabella Olivos, Martin Johansson

Supervisor : Anders Fröberg


Examiner : Erik Berglund

Linköpings universitet
SE–581 83 Linköping
+46 13 28 10 00 , www.liu.se

Copyright
The publishers will keep this document online on the Internet - or its possible replacement - for a
period of 25 years starting from the date of publication barring exceptional circumstances.
The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission.
All other uses of the document are conditional upon the consent of the copyright owner. The publisher
has taken technical and administrative measures to assure authenticity, security and accessibility.
According to intellectual property law the author has the right to be mentioned when his/her work
is accessed as described above and to be protected against infringement.
For additional information about the Linköping University Electronic Press and its procedures
for publication and for assurance of document integrity, please refer to its www home page:
http://www.ep.liu.se/.

© Isabella Olivos, Martin Johansson


Abstract

This study compares two commonly used communication architectural styles for distributed systems, REST and gRPC. With the increasing use of microservices when migrating from monolithic structures, network performance plays a significantly larger role. Companies rely on their users, who demand higher performance from applications to enhance their experience. This study aims to determine which of these frameworks performs faster, in terms of response time, in different scenarios. We performed four tests that reflect real-life scenarios within an established API, together with baseline performance tests, to evaluate them. The results imply that gRPC performs better than REST as the size of the transmitted data grows. The study provides a brief understanding of how REST performs compared to newer frameworks and shows that exploring new options is valuable. A more in-depth evaluation is needed to further understand the different factors that influence performance.
Comparative Study of REST and gRPC for Microservices in
Established Software Architectures
Isabella Olivos (isade842@student.liu.se)
Martin Johansson (marjo380@student.liu.se)

ABSTRACT
This study compares two commonly used communication architectural styles for distributed systems, REST and gRPC. With the increasing use of microservices when migrating from monolithic structures, network performance plays a significantly larger role. Companies rely on their users, who demand higher performance from applications to enhance their experience. This study aims to determine which of these frameworks performs faster, in terms of response time, in different scenarios. We performed four tests that reflect real-life scenarios within an established API, together with baseline performance tests, to evaluate them. The results imply that gRPC performs better than REST as the size of the transmitted data grows. The study provides a brief understanding of how REST performs compared to newer frameworks and shows that exploring new options is valuable. A more in-depth evaluation is needed to further understand the different factors that influence performance.

INTRODUCTION
Ever since the mid-00s, when the concept of web-based microservices was introduced, the space it has occupied in current large-scale software has grown considerably. This approach to software applications is utilized in numerous well-known corporations such as Amazon1, Uber2, Spotify3 and Netflix. Netflix started integrating and developing microservices in 20114 [13].

The benefits of a microservice architecture are many. Instead of focusing on an entire application, teams can work exclusively with smaller services, making them easier to maintain and use in agile development. This, in turn, often leads to faster development cycles and less complexity5.

A Monolithic Architecture6, in comparison to the Microservice Architecture, is a unified application with all of its features incorporated. To run it with new changes, you need to compile the application in its entirety, which comes with many obstacles. Developers, especially newcomers, tend to be overwhelmed with familiarizing themselves with the application since all of its functionality is coupled. That results in them having to learn more about the surrounding functionality. This also creates an obstacle regarding continuous deployment.

As the expectation for additional functionality increases, monolithic applications have a tendency to keep growing incessantly. To address this problem, integrating microservices is becoming a more common solution.

Introducing microservices to a monolithic application comes with new challenges. One of them is changing the communication mechanism for the application. With network communication, there is a performance aspect that must be considered. Is it feasible to introduce microservices to an already established larger software product and maintain or improve performance?

Switching architectural design requires a plan to determine the functionality best suited for such extraction. Some functionality might create unnecessary requests when switching to a distributed design. It requires a concerted effort to connect and manage the different services. To simplify this endeavor, already existing frameworks for communication can be used. Which framework is best suited for a particular microservice?

The communication types that will be investigated are REST and gRPC. Differences in performance regarding speed and error handling in chosen scenarios will be evaluated.

RESTful APIs, currently the most commonly used architectural style for microservices, are built on the HTTP/1.1 protocol. Created by Roy Fielding in his doctoral dissertation in the year 2000, REST has a relatively long history, and there are many resources to use when developing microservices[5]. As demonstrated in a research article by SM Sohan et al. [12], REST APIs do need supporting documentation if development is meant to be continued. Otherwise, complex applications will have a hard time knowing what functions to use and how to use them.

gRPC7 was developed by Google and was initially called Stubby before being expanded upon and released in 2015 as gRPC (Google Remote Procedure Call), an open-sourced framework. It is a Remote Procedure Call framework that promises high performance and was designed based on that notion, providing a high-productivity design for distributed applications. The communication protocol is used with HTTP/2, encoded in a binary format, and offers less overhead than HTTP/1.1.

1 https://docs.aws.amazon.com/whitepapers/latest/microservices-on-aws/simple-microservices-architecture-on-aws.html
2 https://www.uber.com/en-SE/blog/microservice-architecture
3 https://www.youtube.com/watch?v=7LGPeBgNFuU
4 https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices
5 https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices
6 https://microservices.io/patterns/monolithic.html
7 https://www.logicmonitor.com/blog/what-are-microservices

The study is in collaboration with Infor. Infor is a company that offers enterprise software solutions for large businesses in over 175 different countries. As a part of their current ventures, they are looking to move several services from the core business application to a microservice solution. With this, they are interested in learning about different performance improvements that can be adopted.

Due to the different approaches of these two communication solutions in handling data transfer, it is necessary to consider how to evaluate and compare them fairly. First and foremost, ensuring similar data content when data is transferred is paramount for the study to achieve fairness. For comparison, real-life scenarios for the application will be tested to evaluate response time. When implementing and integrating these frameworks combined with real-life scenarios, we must ensure that additional overhead is handled similarly. The focal point of the comparison between gRPC and REST implementations lies in the exchange of data, encompassing both sending and receiving.

For the study, we concluded that the response time would be measured through four different test cases. These consist of two real-life scenarios using provided microservice APIs with differing-sized payloads and two exclusively protocol-centered tests.

Purpose
The purpose of this work will be to test and benchmark gRPC for Infor8. The framework will be compared to the current REST API in use. The results and possible conclusions can then be used to make an informed decision regarding future endeavors when expanding and distributing more microservices as part of Infor's overall architecture and enterprise software solutions.

Research Question
The purpose of this thesis work is to answer the following:
Which solution is the fastest in terms of response time, REST or gRPC?

THEORY
How and when to use different frameworks for communication is a large subject to tackle. We discuss some topics that will not be investigated thoroughly but are necessary to understand the context and to analyze the results we find.

Microservice Architecture
As described by Johannes Thönes, microservices are smaller applications that are developed to solve or fill a niche function[14]. These smaller services are easier to understand, develop and deliver than larger applications responsible for all functionality. This difference also necessitates a robust communication network for the services to access and make use of each other. While a large, monolithic application performs all computation and offers its complete functionality from a central location, microservices can be, and mostly are, placed in the cloud, able to run on different servers all over the world.

There are a lot of benefits to a microservice architecture according to Richardson9, such as:
• Continuous deployment
• Smaller services to test - since the functionality is stripped down
• Easier to understand for developers
• Faster to start an application

Different Kinds of Microservices
Different microservices will vary in both scope and complexity[6]. Some might focus entirely on database look-up, while others make more complex calculations and might even need to call other microservices to fulfill their function. Depending on the purpose of the service in the context of a larger system, the load on the service will differ considerably.

When developing, questions such as what possible bottlenecks exist (database look-up, CPU-intensive calculations, asynchronous requests) must be asked and answered. When combined with an understanding of the above-mentioned potential scope and context, an architectural pattern can be chosen and used in development. Methods already exist to analyze this information and make informed decisions, as proposed by de Oliveira Rosa, Thatiane, et al. [4]: specifying what the important criteria and trade-offs are, searching for patterns matching the named criteria, and finally analyzing and comparing the trade-offs.

Architectural Styles and Frameworks
Several frameworks simplify communication between larger applications and microservices. The frameworks offer varying support to connect, stream and monitor the traffic between services. This includes data serialization. New services added to a larger solution only need to adapt to the framework being used instead of adapting to all services they might need to interact with. The REST architecture and gRPC have many differences.

REST
REST stands for Representational State Transfer and utilizes the HTTP/1.1 protocol for communication between a client and a server over the network. REST consists of various parts, including resources with identifiable addresses. Representations illustrate how these resources are created and transferred, while messages organize requests and responses. Specific methods can be applied to these resources, and hypermedia helps navigate and find them. It decouples the presentation-layer data on the client side from storage and management on the server side. REST is stateless, meaning the server does not store client state, which makes the requests independent. It consists of several key components that compose a Uniform Interface that standardizes interactions.

URL/URI Structure
When building REST APIs, the URI (Uniform Resource Identifier) structure, which includes the URL (Uniform Resource Locator), is used to locate the correct resource over the internet.

8 https://www.infor.com/nordics/about
9 https://microservices.io/patterns/microservices.html

Data Representation Formats
Resources can be translated into various formats such as JSON, XML, or binary data. The format is determined by the client in its requests.

Methods
There are standardized HTTP methods, such as GET, POST, PUT, PATCH, and DELETE. This allows clients to retrieve, create, update, and delete resources if these methods are implemented on the server side.
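As an illustration of how these methods map onto a resource, the sketch below shows a minimal Spring Boot controller in the same style as the services described later in this thesis; the warehouse resource, its in-memory store, and the paths are hypothetical and only meant to show the GET/POST/DELETE mapping.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/warehouses")
public class WarehouseController {

    // Hypothetical in-memory store standing in for a real repository.
    private final Map<Integer, String> warehouses = new ConcurrentHashMap<>();

    // GET retrieves the resource identified by the URI.
    @GetMapping("/{id}")
    public String getWarehouse(@PathVariable int id) {
        return warehouses.getOrDefault(id, "not found");
    }

    // POST creates a new resource from the representation sent by the client.
    @PostMapping("/{id}")
    public void createWarehouse(@PathVariable int id, @RequestBody String name) {
        warehouses.put(id, name);
    }

    // DELETE removes the resource.
    @DeleteMapping("/{id}")
    public void deleteWarehouse(@PathVariable int id) {
        warehouses.remove(id);
    }
}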

gRPC
gRPC is a Remote Procedure Call framework that runs on top of the HTTP/2 protocol. A Remote Procedure Call (RPC) is made to request a service from a program or application. In the context of this paper, the program or application is a microservice. An RPC itself is synchronous but can be used with threads to be able to make multiple procedure calls simultaneously.

gRPC Architecture
gRPC has a client-server structure where the client can call the server. In gRPC, we define the service and its functions. The service defines the API, and the client stores information about which functions it can invoke on the server. gRPC is language agnostic, meaning the server and clients can communicate regardless of the programming language.

Protocol Buffers
When services send messages, gRPC uses Protocol Buffers to serialize the data into binary data. Protocol buffers are language-agnostic and define the communication data and the service. The service definition provides information about what types of remote functions are available. These protocols create small payloads for messages between the client and server at network transmission. The service definition is stored in .proto files.

syntax = "proto3";
package warehouse;

service WarehouseStock {
    rpc GetWarehouse (WarehouseRequest)
        returns (WarehouseInformationReply);
}

message WarehouseRequest {
    int32 id = 1;
}

message WarehouseInformationReply {
    int32 id = 1;
    string name = 2;
    Item item = 3;
}

message Item {
    float amount = 1;
    string name = 2;
}

Code snippet 1. An example of a service definition in a .proto file

Protoc compilation
When a service is defined in a protobuf, there is a specific compiler provided by Google to generate serialization and deserialization classes for the RPCs and the messages. Protoc supports many languages.

Available RPCs
The gRPC API provides four different types of Remote Procedure Calls. These are Unary, Server, Client, and Bi-directional RPCs.

Service method   Description
Unary            Client sends a single request to the server and receives a single response.
Server           Client makes one request and receives messages in a stream.
Client           Client sends requests in a stream, and the server responds with a single message.
Bi-directional   Client starts a call. The client and server send responses or requests through a stream.

Table 1. Service methods in gRPC summarized
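As a concrete illustration of the Unary row in Table 1, the sketch below shows how a client could invoke GetWarehouse from Code snippet 1 using the Java classes that protoc and the gRPC plugin generate; the host, port, and generated class names follow the default code-generation conventions and are assumptions rather than part of the thesis implementation (imports of the generated classes are omitted).

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class WarehouseClient {
    public static void main(String[] args) {
        // Channel to the server hosting the WarehouseStock service (address is illustrative).
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("localhost", 9090)
                .usePlaintext()
                .build();

        // Blocking stub generated from the service definition in Code snippet 1.
        WarehouseStockGrpc.WarehouseStockBlockingStub stub =
                WarehouseStockGrpc.newBlockingStub(channel);

        // Unary call: one request in, one response out.
        WarehouseInformationReply reply = stub.getWarehouse(
                WarehouseRequest.newBuilder().setId(1).build());

        System.out.println(reply.getName());
        channel.shutdown();
    }
}

The streaming variants in Table 1 differ mainly in the method signatures that protoc generates: server-streaming methods return an iterator of replies on the blocking stub, while client- and bi-directional streaming methods are exposed on the asynchronous stub through StreamObserver parameters.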
syntax = " proto 3"; Benchmark Testing
package warehouse ; There are many different criteria that can be used to compare
frameworks. For our purpose, we will be looking for execution
service WarehouseStock { speed. Speed can be compared by rigorous testing of the same
rpc GetWarehouse ( WarehouseRequest ) microservices using different frameworks to call on them.
returns ( WarehouseInformationReply );
} Locust
message WarehouseRequest { Locust is an open-source tool based on Python for testing
int 32 id = 1; and simulating web traffic with multiple concurrent users10 .
} It allows for evaluation and benchmarking of response time,
number of requests, and content size.
message WarehouseInformationReply {
int 32 id = 1;
string name = 2; RELATED WORK
Item item = 3; Previous research in the area of microservices and microser-
} vice frameworks.
message Item { In an article by Carolina Luiza Chamas et al. [2], the authors
float amount = 1; compared REST, gRPC, and others with a perspective of en-
string name = 2; ergy efficiency. Computational offloading to reduce energy
}
consumption in mobile applications was the aim of the study,
Code snippet 1. An example of a service definition in a .proto file
10 https://locust.io/

In an article by Carolina Luiza Chamas et al. [2], the authors compared REST, gRPC, and others from the perspective of energy efficiency. Computational offloading to reduce energy consumption in mobile applications was the aim of the study, and the results show that REST was the most efficient option of those compared, but only in the cases of larger data input and complex algorithms. For smaller use cases, local execution was more energy economical than the effort to offload the work.

Another study by Koji Yamamoto proposed and tested a depth-first approach for penetration testing of API sequences. Covering all parts of the various APIs in as few sequences as possible becomes a priority. The method suggested in the article shows significant results in reducing the number of sequences needed by offering fewer yet longer executable API sequences.

Implementing gRPC in a larger project using microservices is done in a Bachelor's thesis by Hoang Vo[15]. Hoang's assignment consists of four services connected by gRPC, including one service that acts as a router for incoming requests sent to the project using REST calls. One of the notable findings suggests that, in the case of small projects, the extra level of complexity is not worth the time spent decoupling the project into smaller services.

Performance differences between HTTP/1.1 and HTTP/2 are measured and compared in an article by Corbel, Romuald et al.[3]. The tests were done by measuring the time taken to download a website under varying network delays. Results showed HTTP/2 averaged at least a 15% shorter completion time.

Karandikar, Sagar, et al.[7] developed a hardware accelerator for the serialization and deserialization of protobuf messages. The "HyperProtoBenchmark" was implemented in RTL (Register-Transfer Level) and made open-source, enabling third parties to replicate and validate the findings. The end results showed a performance increase of between 3.8 and 11.2 times over regular serialization in various tests.

A paper by Kazuaki Maeda [9] looked at twelve different object serialization libraries. The twelve libraries used JSON-based, XML-based, or binary-based serialization. Protobuf-based libraries were shown to be better at reducing the size of serialized data. Protobuf was also generally at the top when it came to speed, but two different JSON libraries earned the top and bottom spots when looking at the speed, or time taken, to serialize given data.

Performance Comparisons
L.D.S.B. Weerasinghe et al.[16] evaluated REST, gRPC, and Websocket specifically from the point of view of performance between microservices. Their project was built using Spring Boot in Java, and three different tests were used: 1. requests sent with no payload, 2. requests sent with 1 KB of data, and 3. requests sent with 5 KB of data. Results showed gRPC as faster than both REST and Websocket.

A Polish article comparing REST, GraphQL, and gRPC from 2021[11] showed results favoring REST in performance, but it also found that the amount of data sent was the smallest when using gRPC.

In 2022, an article looking exclusively at comparing gRPC and REST[1] found that gRPC only showed better performance in cases where either the data sent was encrypted or in setups with a high amount of data traffic constantly being sent between services.

Kumar, Prajwal Kiran, et al. [8] analyzed the differences in response time, memory, network, and CPU usage across REST, gRPC, and Thrift. Their results concluded that both Thrift and gRPC were better in inter-microservice communication. One of the reasons was that the number of TCP packets sent via REST was higher for the same amount of data, due to Thrift and gRPC using HTTP/2 instead of HTTP/1.1, like REST.

METHOD
To measure the performance difference between gRPC and REST, we were given access to a microservice called Supply Chain Planning Service, already developed by Infor. This service used REST and was modified to support gRPC requests for our testing purposes.

System Description
To test communication between microservices, we developed our own smaller microservice, Communication Testing Service. Our service has the functionality to make both REST requests and gRPC calls to the Supply Chain Planning Service. Measuring between microservices was a decision made to simulate communication between services and learn about the frameworks during implementation. With the company's interest in mind, we knew that the most important service methods to test were the ones that were most commonly used.

We decided to build a system to test the performance in regard to response time. It consists of three components: test scripts with Locust, Communication Testing Service, and Supply Chain Planning Service. The Locust script runs with a set configuration. The script runs one test scenario at a time and generates CSV files with the results of the measurements.

System Architectures
The following pictures represent the final system design.

Figure 1. System architecture for REST

Figure 2. System architecture for gRPC

Measurement Decisions
Measurements of the two frameworks are the essential part of determining which one of them is the most efficient in terms of response time. The test scenarios for the thesis work are all described and motivated in this section.

Short Scenario
Communication Testing Service produces a small data transfer with a string of "Hello world" to the Supply Chain Planning Service, which in return responds with a string of "Hello from SCP Service". The character length sent in the message of the request is 11 characters.

In terms of efficiency, it is interesting to test sending a small data transfer to help evaluate which framework to use in such a scenario. Both frameworks introduce overhead with protocol headers and data serialization. Comparing the baseline of these gives us a better understanding of the average performance rate of REST and gRPC.

Typical Scenario
This typical request sends a small amount of data to the Supply Chain Planning Service, which in turn retrieves data from an external database and makes a calculation. The response is a list of JSON objects. The character length sent in the message of the request is 32 characters.

This typical request is one of the most used requests within Infor's own Supply Chain Planning Service. The performance measurement was chosen since it is frequently used, and the results will be used to determine the overall gains of a framework shift.

Huge Scenario
A long list formatted as JSON is sent to the Supply Chain Planning Service. Similar to the typical scenario, it performs a calculation but uses the provided data instead of connecting to the external database. The character length sent in the message of the request is 10300 characters.

This scenario was chosen in contrast to the typical scenario to represent a real-life use case where the payload sent over the network is relatively large.

Huge Scenario Without API Calculations
Like the Huge Scenario, the same data is sent, but without performing any calculations; the data is immediately returned. The character length sent in the message of the request is 10300 characters, the same as in the Huge Scenario.

We decided to compare a huge payload with the Short Scenario, without service functionality being an influencing factor.

Endpoint      Calls                 Description
Short         /short-message-scp    Sends a string and answers with a simple "Hello world"
Typical       /typical-scp          The most used function in the provided service; takes some data and makes database lookups as well as calculations before returning a fairly large data amount
Huge          /huge-scp             Sends the most amount of data over the network; makes similar calculations as Typical but with fewer lookups
HugeNoCalc    /huge-scp-no-calc     Sends the exact same data as Huge but, instead of making lookups and calculations, returns the same data directly

Table 2. Summarization of test scenarios

Implementation
This section describes how the test scenarios were implemented for the services and the test environment.

Communication Testing Service
The service was created using Spring Boot for Java. The service contains the data that we have to provide for the four test scenarios in Supply Chain Planning. When the application in Spring Boot was created, the generated controller was implemented with the four test scenarios.

Communication Testing Service - REST
Four endpoints were created, and all of them provided an authorization token as a header to access the Supply Chain Planning Service. The Short scenario is mapped to the endpoint /short-message-scp, accessed using the GET method. There, a WebClient is created that has the responsibility of sending the data to the server at the Supply Chain Planning Service. A request was then built for HTTP transfer, providing the message to be sent.
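A minimal sketch of how such a forwarding endpoint could look with Spring's WebClient is shown below; the Supply Chain Planning base URL, the target path, the way the message is passed, and the token handling are illustrative assumptions rather than the exact implementation used in this study.

import org.springframework.http.HttpHeaders;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;

@RestController
public class ShortMessageController {

    // WebClient pointing at the Supply Chain Planning Service (URL is illustrative).
    private final WebClient scpClient = WebClient.builder()
            .baseUrl("http://localhost:8081")
            .build();

    @GetMapping("/short-message-scp")
    public String shortMessage() {
        // Forward the "Hello world" message together with the required bearer token.
        return scpClient.get()
                .uri(uriBuilder -> uriBuilder.path("/short")
                        .queryParam("message", "Hello world")
                        .build())
                .header(HttpHeaders.AUTHORIZATION, "Bearer " + fetchToken())
                .retrieve()
                .bodyToMono(String.class)
                .block();
    }

    // Placeholder for however the authorization token is obtained.
    private String fetchToken() {
        return "<token>";
    }
}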
The Typical scenario has an endpoint /typical-scp and is accessed with a GET method. The request to Supply Chain Planning is composed of a warehouse id and an item id.

The Huge scenario has an endpoint /huge-scp and is accessed with a POST method. The request to Supply Chain Planning is composed of a provided material plan. This material plan is the information retrieved by Supply Chain Planning from a database in the Typical scenario and represents a large set of JSON objects.

The Huge scenario without API calculations has an endpoint /huge-scp-no-calc and is accessed with a POST method. The request to Supply Chain Planning is composed of a provided material plan, as with calculations.

Communication Testing Service - gRPC
There was some initial setup because of the stub and server creation, as opposed to Spring Boot, which provides that functionality for REST. The server and the stub were created as separate classes, and the service calls were implemented in the server by overriding the generated service class files.

The most important step was to define the gRPC service in a protofile. The version used for protobuf was proto3. After defining the protobuf, we compiled classes with the help of protoc, which generated classes for Java. Compilation of the classes with protoc was also done for Supply Chain Service and the Locust stub. These classes were used to implement the microservice API containing the different calls.

syntax = "proto3";

option java_multiple_files = true;
option java_package = "io.grpc.grpcServer";
option java_outer_classname = "communicationTestingProto";
option objc_class_prefix = "CMTS";
package gRPCService;

// RPC definitions
service PerformanceSCP {

    // Sends small request
    rpc ShortRequest (StringRequest) returns (StringReply) {}

    rpc Typical (TypicalRequest) returns (TypicalReply) {}

    rpc Huge (HugeRequest) returns (HugeReply) {}

    rpc HugeNoCalc (HugeRequest) returns (HugeReply) {}
}

message StringRequest {
    optional string message = 1;
}

message StringReply {
    optional string message = 1;
}

message TypicalRequest {
    optional string WarehouseID = 1;
    optional string ItemID = 2;
}

message TypicalReply {
    optional string message = 1;
}

message HugeRequest {
    optional string WarehouseID = 1;
    optional string ItemID = 2;
    optional string JSON = 5;
}

message HugeReply {
    optional string JSON = 1;
}

Code snippet 2. Service definition in a .proto file

In code snippet 3. below is an example of how the call was implemented using the generated classes from the protofile.

@Override
public void shortRequest(StringRequest req,
        StreamObserver<StringReply> responseObserver) {
    StringRequest request = StringRequest.newBuilder()
            .setMessage("Hello from Communication").build();
    StringReply reply = stub.shortRequest(request);

    responseObserver.onNext(reply);
    responseObserver.onCompleted();
}

Code snippet 3. Service class, example of Short for gRPC

Supply Chain Planning
Modifying and adapting the Supply Chain Planning Service turned out to be the biggest endeavor. The solution with REST and Spring could not easily be adapted for gRPC without spending time exploring and understanding the underlying classes and methods used in the service. To fairly compare the results, we bypassed several authorization steps already included in the provided service. These steps included sending an acceptable bearer token with the REST request and asking for database access. Since these steps were developed for the current REST solution, we had the option of either developing our own implementation for gRPC or bypassing them entirely.

Supply Chain Planning - REST
For REST, in the Supply Chain Planning Service, the methods used for the Typical and Huge scenarios did not need any modification since Infor had already implemented them. Instead, we implemented endpoints for /short-message-scp and /huge-scp-no-calc.

Supply Chain Planning - gRPC
Adding support in the Supply Chain Planning Service for gRPC meant generating the server classes using protobuf and overriding them. The core functionality is the same as in the REST implementations of the endpoints but is modified with the corresponding generated methods from gRPC.

During the implementation of gRPC, we found a restriction when we wanted to use the current API in the service. Spring Boot for REST maps and declares objects with underlying annotation functionality. To use the service as a Spring Boot API, we found a dependency called gRPC-Spring-Boot-Starter.
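A minimal sketch of how that dependency can be combined with the generated classes is shown below, assuming the net.devh gRPC-Spring-Boot-Starter, whose @GrpcService annotation registers the implementation with an embedded gRPC server; the class name and the reply content are illustrative, and the actual Supply Chain Planning logic is not shown.

import io.grpc.stub.StreamObserver;
import net.devh.boot.grpc.server.service.GrpcService;

// Registered automatically by gRPC-Spring-Boot-Starter; no manual server setup is needed.
@GrpcService
public class PerformanceScpGrpcService extends PerformanceSCPGrpc.PerformanceSCPImplBase {

    @Override
    public void shortRequest(StringRequest request,
                             StreamObserver<StringReply> responseObserver) {
        // Wrap the existing service logic's result in the generated reply type.
        StringReply reply = StringReply.newBuilder()
                .setMessage("Hello from SCP Service")
                .build();
        responseObserver.onNext(reply);
        responseObserver.onCompleted();
    }
}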
Performance Testing
When the tests were conducted, all of the services were run locally. In the locust file where the tests were defined, a gRPC stub was implemented.

The tests were then run with locust by having one call made at a time for 500 seconds for each test scenario, both with REST and gRPC.

Locust is often used to load test and check how a service performs under different loads, but in our case we only wanted to measure the response time from when Communication Testing Service was called to when a response was retrieved. The following command was used:

$ python -m locust -f locustfile.py --headless -u 1 -r 1 --run-time 500s --csv=results --stop-timeout 99

By measuring for 500 seconds, we obtained stable average response times that could be sensibly compared.

RESULTS
This section describes the results, summarized from the performance testing with Locust. The overall results are presented for the REST requests and the gRPC calls. Further, the individual results are divided into the corresponding scenario categories to facilitate analysis and comparison.

With Locust running, it generated CSV files, and those results are seen in Tables 3. and 4. There is a similar trend in regard to response times for all the test scenarios. No test failed during the runtime.

Locust tests one request per second at maximum, and therefore the information under the title Request Count is not relevant to our study.

Measurements                  /short-scp   /typical-scp   /huge-scp   /huge-scp-no-calc
Request Count                 176          66             72          174
Average Response Time (ms)    7.8          4791.9         4129.7      19.8
Min Response Time (ms)        4            4754           4046        6
Max Response Time (ms)        34           5092           5492        1091
Average Content Size          40           395            1036        3849
Requests/s                    0.354        0.133          0.145       0.352

Table 3. Performance testing results with Locust sending requests with REST

Measurements                  /short-scp   /typical-scp   /huge-scp   /huge-scp-no-calc
Request Count                 176          70             79          176
Average Response Time (ms)    3.1          4349.8         3589.5      6.2
Min Response Time (ms)        0            4247           3424        0
Max Response Time (ms)        30           4716           4149        214
Average Content Size          Null         Null           Null        Null
Requests/s                    0.354        0.140          0.157       0.354

Table 4. Performance testing results with Locust sending requests with gRPC

Test - Short Scenario

Figure 3. Average response times for REST and gRPC with Short call/endpoint

The test scenario Short, as shown in Figure 3, focused on testing response time. With an average response time of 3.1 ms, gRPC performed better than REST, with a response time of 5.5 ms. This results in a 77.42% faster response time for gRPC in the Short scenario.
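For clarity, the percentages reported in this section follow from comparing the REST time to the gRPC time, i.e. (t_REST - t_gRPC) / t_gRPC. For the Short scenario this gives (5.5 - 3.1) / 3.1 ≈ 0.774, which is reported as gRPC being 77.42% faster; the same calculation yields the percentages for the other scenarios.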
Test - Typical Scenario

Figure 4. Average response times for REST and gRPC with Typical call/endpoint

The test scenario Typical, as shown in Figure 4, tested response time. With an average response time of 4349.8 ms, gRPC performed better than REST, with a response time of 4791.9 ms. This results in a 10.16% faster response time for gRPC in the Typical scenario.

Test - Huge Scenario

Figure 5. Average response times for REST and gRPC with Huge call/endpoint

The test scenario Huge, as shown in Figure 5, tested response time. gRPC had an average response time of 3589.5 ms and performed better than REST, with a response time of 4129.7 ms. This results in a 15.04% faster response time for gRPC in the Huge scenario.

Test - Huge No Calculation Scenario

Figure 6. Average response times for REST and gRPC with Huge with No Calculation call/endpoint

The test scenario Huge No Calculation, as shown in Figure 6, tested response time. gRPC had an average response time of 6.2 ms and performed better than REST, with a response time of 19.8 ms. This results in a 219.35% faster response time for gRPC in the Huge No Calculation scenario.

DISCUSSION
This section will discuss and analyze several unresolved issues and limitations, as well as assumed reasons for the obtained results. Here we relate our findings to prior research done when comparing REST and gRPC and their components.

Method
When we set up and ran the test scenarios, the microservices and the testing script were running on the same machine. Since the tests were conducted on localhost, the advantage of this decision was that we were met with zero instances of packet loss. By isolating the system we can exclude latency differences, such as network latency, bandwidth limitations, and network congestion, as influencing factors when comparing the performance of the two frameworks. The consequence of this choice is a result that is detached from a real-life scenario. Since this study focuses on the performance of network transmission in terms of response times and does not investigate reliability, we argue that the method used was feasible.

We chose to build Communication Testing Service for the purpose of investigating the communication between microservices. But as seen in Figures 1. and 2. this means that data is sent a total of four times, two times as requests and two times as responses. If we instead had the testing script directly make requests to Supply Chain Planning, the results would be easier to interpret.

Bearer token
As part of making the comparison between frameworks unbiased, we skipped an authorization step in Supply Chain Planning that involved validating each request with another service called "Identity service". But our solution could not skip the need to include a bearer token11 with each REST request sent. Therefore, more data was included in every test scenario with REST than for the corresponding gRPC tests. With this test flaw in mind, we can see that the extra interval that needs to be added to the gRPC call results is between the results from the Short Scenario, 3.1 ms, and the Huge No Calculations Scenario, 6.2 ms. The bearer token is 320 characters; in the Short Scenario the message is 11 characters, and in the Huge No Calculation Scenario it is 10300 characters. With the added interval for gRPC, we can still see that it performed faster than REST.

Locust
As seen in Tables 3. and 4. we did not measure the content size for any request sent using gRPC. This data point is measured by default, but Locust is not usually used to measure gRPC. We were able to find examples helping us to add gRPC support12 but did not achieve content size measurements in the end. Without this data available for comparison, we cannot in our results provide evidence for our claim that gRPC offers data compression efficiency. The study conducted by R. Corbel, et al.[3] did find that HTTP/2 performed faster, but that performance result was not directly connected to the compression of data.

11 https://swagger.io/docs/specification/authentication/bearer-authentication/
12 https://docs.locust.io/en/stable/testing-other-systems.html

Test scenarios
In the Short Scenario, the biggest issue is the bearer token that is only included in the request made using REST. The token itself is larger than the amount of data sent, meaning that gRPC has the performance advantage in this scenario.

Repeated Field Modifier
As for the Huge scenario, where we sent a large JSON object, we could not make a perfectly comparable conversion from JSON to Java objects. Spring Boot was able to convert the JSON data in the REST request directly to Java objects. When implementing the scenario for gRPC, we had to write our own conversion for the same data when it was sent, converting it first from a string to a JSON object and then to the equivalent Java object. We are not sure how this affected the results.
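As an illustration of the extra step described above, the sketch below shows one way such a string-to-objects conversion could look with the Jackson library; the generic map representation is a simplification, and this is not necessarily the exact conversion used in the study.

import java.util.List;
import java.util.Map;
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MaterialPlanConverter {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Parse the JSON string carried in the gRPC message into equivalent Java objects.
    public static List<Map<String, Object>> fromJson(String json) throws Exception {
        return MAPPER.readValue(json, new TypeReference<List<Map<String, Object>>>() {});
    }
}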
While we implemented a working protofile for our gRPC implementation, we discovered that the use of the repeated field modifier in the protobuf definition was an option. If we had had more time, this optimization could have affected the results, likely in favor of gRPC.

Database
An external database connection is a factor in the response time but is not relevant in our study, because we want to test network transmission in terms of serialization. Although it is important to mention that the response time is affected by this, as long as both test cases include the database connection, it is a fair comparison. For the database connection, we decided to turn off the data cache option to prevent unforeseen results and thus eliminate external factors.

Results
Short
Figure 3. reveals the most telling result of our study. For small data transmissions, the result is 77.42% faster with gRPC. We suspect that this is because of the header compression in the HTTP/2 protocol. Due to the amount of data being very small and, therefore, overhead having the biggest part to play, gRPC outperforms REST and is almost twice as fast in the average case.

Typical
Looking at Typical in Figure 4. we see a considerable difference in favor of gRPC. Since this is the most commonly used method in our client's context, it is the most important for any conclusions and recommendations moving forward. As the API calculation is the most time-consuming part of this scenario, the percentual difference is not as important to look at; instead, the actual response time difference in milliseconds is what needs to be evaluated. In this particular scenario, the difference is 442.1 ms, which is noticeable for the user experience.

Huge and Huge No Calculation Scenario
These were the scenarios where the most data was sent, and we assumed that would be where gRPC showed its advantages the most. The results, as seen in Figures 5. and 6. confirm this suspicion. The more data that needs to be sent, the more opportunity gRPC has to compress and surpass REST.

When comparing the scenarios Huge and Huge No Calculation, we expected the difference in ms for Huge in Figure 5. to be about the same as the difference in the results from Figure 6., which was 13.6 ms. We have no explanation for this discrepancy, but we suspect that a factor can be the deserialization and serialization of the message.

Related Work
Our results and findings align with other performance comparisons in this area that show gRPC to be faster in inter-microservice communication. We found research on several key differences between gRPC and REST that all point to strengths of gRPC, such as the use of HTTP/2 over HTTP/1.1 and generally better data compression[3]. But we have only measured the end result from request to response. Therefore, we cannot answer in which areas gRPC outperformed REST in our specific test cases.

CONCLUSIONS
In this paper, we have made comparisons of REST and gRPC in regard to response time.

In the specific scenarios we have tested, we exclusively see performance advantages when using gRPC compared to REST. One of the reasons can be explained by the difference in the amount of data sent over the network. gRPC, with its generated protobuf classes, makes it possible to send messages with only the necessary data to reach an RPC method on another service. At the same time, REST requests have an inherently larger overhead for each request, as confirmed by Chamas, Carolina Luiza, et al. [2].

While a more in-depth comparison is necessary to determine whether a move to gRPC would be the best choice for the client, we can confirm that REST is not necessarily the best option if the primary objective is to increase the communication speed.

REFERENCES
[1] Marek Bolanowski, Kamil Żak, Andrzej Paszkiewicz, Maria Ganzha, Marcin Paprzycki, Piotr Sowiński, Ignacio Lacalle, and Carlos E. Palau. 2022. Efficiency of REST and gRPC realizing communication tasks in microservice-based ecosystems. arXiv preprint arXiv:2208.00682 (2022). DOI: http://dx.doi.org/10.3233/FAIA220242

[2] Carolina Luiza Chamas, Daniel Cordeiro, and Marcelo Medeiros Eler. 2017. Comparing REST, SOAP, Socket and gRPC in computation offloading of mobile applications: An energy cost analysis. In 2017 IEEE 9th Latin-American Conference on Communications (LATINCOM). IEEE, 1–6. DOI: http://dx.doi.org/10.1109/LATINCOM.2017.8240185

[3] Romuald Corbel, Emile Stephan, and Nathalie Omnes. 2016. HTTP/1.1 pipelining vs HTTP2 in-the-clear: Performance comparison. In 2016 13th International Conference on New Technologies for Distributed Systems (NOTERE). IEEE, 1–6. DOI: http://dx.doi.org/10.1109/NOTERE.2016.7745823

[4] T. de Oliveira Rosa, J.F.L. Daniel, E.M. Guerra, and A. Goldman. 2020. A Method for Architectural Trade-off Analysis Based on Patterns: Evaluating Microservices Structural Attributes. In Proceedings of the European Conference on Pattern Languages of Programs (2020). DOI: http://dx.doi.org/10.1145/3424771.3424809

[5] Roy Thomas Fielding. 2000. REST: architectural styles and the design of network-based software architectures. Doctoral dissertation, University of California (2000). https://dl.acm.org/doi/10.5555/932295

[6] Gastón Márquez and Hernán Astudillo. 2018. Actual Use of Architectural Patterns in Microservices-based Open Source Projects. In 25th Asia-Pacific Software Engineering Conference (APSEC) (2018). DOI: http://dx.doi.org/10.1109/APSEC.2018.00017

[7] Sagar Karandikar, Chris Leary, Chris Kennelly, Jerry Zhao, Dinesh Parimi, Borivoje Nikolic, Krste Asanovic, and Parthasarathy Ranganathan. 2021. A hardware accelerator for protocol buffers. In MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture. 462–478. DOI: http://dx.doi.org/10.1145/3466752.3480051

[8] Prajwal Kiran Kumar, Radhika Agarwal, Rahul Shivaprasad, Dinkar Sitaram, and Subramaniam Kalambur. 2021. Performance Characterization of Communication Protocols in Microservice Applications. In 2021 International Conference on Smart Applications, Communications and Networking (SmartNets). IEEE, 1–5. DOI: http://dx.doi.org/10.1109/SmartNets50376.2021.9555425

[9] Kazuaki Maeda. 2012. Performance evaluation of object serialization libraries in XML, JSON and binary formats. In 2012 Second International Conference on Digital Information and Communication Technology and its Applications (DICTAP). 177–182. DOI: http://dx.doi.org/10.1109/DICTAP.2012.6215346

[10] Roberto Peon and Herve Ruellan. 2015. HPACK: Header Compression for HTTP/2. Technical Report. DOI: http://dx.doi.org/10.17487/RFC7541

[11] Mariusz Śliwa and Beata Pańczyk. 2021. Performance comparison of programming interfaces on the example of REST API, GraphQL and gRPC. Journal of Computer Sciences Institute 21 (2021), 356–361. DOI: http://dx.doi.org/10.35784/jcsi.2744

[12] SM Sohan, Frank Maurer, Craig Anslow, and Martin P. Robillard. 2017. A study of the effectiveness of usage examples in REST API documentation. In 2017 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE, 53–61. DOI: http://dx.doi.org/10.1109/VLHCC.2017.8103450

[13] Davide Taibi, Valentina Lenarduzzi, and Claus Pahl. 2017. Processes, Motivations, and Issues for Migrating to Microservices Architectures: An Empirical Investigation. IEEE Cloud Computing 4, 5 (2017), 22–32. DOI: http://dx.doi.org/10.1109/MCC.2017.4250931

[14] Johannes Thönes. 2015. Microservices. IEEE Software 32, 1 (2015), 116–116. DOI: http://dx.doi.org/10.1109/MS.2015.11

[15] Hoang Vo. 2021. Applying microservice architecture with modern gRPC API to scale up large and complex application. (2021).

[16] L.D.S.B. Weerasinghe and I. Perera. 2022. Evaluating the Inter-Service Communication on Microservice Architecture. In 2022 7th International Conference on Information Technology Research (ICITR). IEEE, 1–6. DOI: http://dx.doi.org/10.1109/ICITR57877.2022.9992918

