Conference Paper · July 2019 · DOI: 10.1109/COMPSAC.2019.10232


Comparison of Runtime Testing Tools for Microservices

Juan P. Sotomayor∗, Sai Chaithra Allala∗, Patrick Alt†, Justin Phillips†, Tariq M. King∗† and Peter J. Clarke∗
∗School of Computing and Information Sciences
Florida International University, Miami, FL 33199, USA
Email: {jsoto128, salla010, tking003, clarkep}@cis.fiu.edu
†Ultimate Software
2250 North Commerce Parkway, Weston, Florida 33326
Email: quality@ultimatesoftware.com

Abstract—Recently there has been an increase in the number of software applications using the microservices architectural pattern. This is due to the benefits derived over the more traditional N-tier patterns that use monolithic designs for each of the tiers. The value of using the microservices architectural pattern, particularly in the cloud, has been pioneered by companies such as Netflix and Google. These companies have created protocols and tools to support the development of cloud-based applications. However, the testing of microservices applications continues to be challenging due to the added complexity of network communication between the collaborating services. In this paper we provide a comparison of several open-source tools used to support the testing of microservices, describe a polyglot Ridesharing microservices reference prototype, and present the results of a study that measures the overhead to test the Ridesharing application. We also describe our experiences in configuring and running the testing tools on the prototype.

Index Terms—Software testing, testing tools, microservices testing.

I. INTRODUCTION

Since 2014 there has been an increasing number of companies interested in adopting the Microservices Architecture pattern [1]. Due to its decentralized nature and its loosely coupled services approach, it is becoming a popular architecture for developing large applications. However, this pattern needs a cloud infrastructure in order to establish communication between microservices, databases, third-party services or other external components. The network connections used in this infrastructure add complexity to testing the application. Hence, it is necessary to test the application at runtime in order to monitor the behavior of the different components in a production environment. This paper compares different open-source tools that allow the execution and monitoring of these types of tests.

Although guidelines exist to test communication between services with contract and component testing, the nature of dynamically self-adapting microservices requires testing at runtime [2], especially the deployment of new service instances and the communication between them to evaluate response time and failure tolerance [3].

Some of the most representative technology companies worldwide, such as Netflix [4], have developed their own open-source tools to support the operation and testing of microservices, e.g., Eureka to provide a discovery service and Chaos Monkey to test the resiliency of the system, among many others. Another interesting tool is Hoverfly [5], which supports injecting failures into the communications to/from an API, allowing testers to write automated tests that simulate communication with other microservices.

This paper includes the following contributions:
• Provides a comparative survey of tools to support microservices testing,
• Describes the design of a polyglot Ridesharing microservices reference prototype for investigating runtime testing, and
• Discusses the results of a study that measures the overhead of runtime testing on a microservices-based application.

In order to accomplish these objectives the paper is organized as follows: in Section II we cover background information on software architecture and software testing. In Section III we compare different tools used to test microservices. In Sections IV and V we describe the microservices prototype and the case study, presenting the results and a discussion. Finally, in Sections VI and VII we present related work and the conclusion of the present study.

II. BACKGROUND

In this section we provide background on monolithic and microservices software architectures, and software testing.

A. Software Architecture

The work presented in this paper is centered around the microservices architectural pattern; however, we initially describe the 3-tiered architectural pattern to highlight some of the differences between a traditional architecture with a monolithic component and the microservices architecture.

By monolithic component we mean a single component that is deployed with all the modules and libraries needed to support its execution. Dehghani [6] states that a monolithic system is composed of tightly integrated layers that are deployed together and have brittle interdependencies. Lewis and Fowler [1] define microservices as an approach to developing a single
application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

Fig. 1. (a) 3-tier architecture with application logic monolithic component. (b) Microservices architecture with application logic service components.

An example of a layered architecture is the 3-tier architecture, which consists of: a client layer that handles all user interactions through the appropriate interfaces; a business logic layer that contains the business rules to process requests sent from the client layer; and a data layer that stores the persistent data for the application. Figure 1 (a) shows the layered structure of the 3-tier architecture.

The core concept in the microservices architecture is the notion of separately deployed units, also known as service components or simply services [7]. Figure 1 (b) shows an application using the microservices architecture with the application logic separated into several services and the API gateway (also a service). It is worth noting that each service is independently deployable, which is an advantage over monolithic architectures. Each of these services will have its own environment, and the most common method of communication is through a network interface.

B. Software Testing

The main objective of software testing is to verify the system against the expected behavior [8]. Software testing techniques are usually classified into the following categories: black-box, white-box and gray-box [9].

In this paper we focus on three levels of software testing, which are unit, integration and system testing [8]. Within unit testing we also look at component and contract testing.

In addition to the testing strategies previously mentioned, there is also the notion of testing objectives [8]. These objectives can be broadly classified into functional testing and non-functional testing. End-to-end testing is a black-box functional system testing technique. Non-functional testing includes performance testing (capacity and response time) and stress testing (system load).

III. TOOLS TO SUPPORT TESTING OF MICROSERVICES

In this section, we identify the testing tools that may be used to test microservices using different testing strategies (unit, component, contract, integration and system), and compare a subset of these tools for each strategy.

A. Classification of Tools

There are several tools to support the testing of microservices, including tools to support different levels of testing (unit, integration and system), different testing objectives, and tools for scaffolding, e.g., test harnesses. Table I shows a cross-section of the tools used to support the testing of microservices. In the subsequent paragraphs we describe the key aspects of the tools shown in the table.

Table I consists of 8 columns; from left to right these are:
• No. - number assigned to the tool.
• Name - name of the tool.
• Interface Method - identifies the interaction between the tool and the system or unit under test. WebDriver supports direct calls to the web browser. HTTP request is a communication protocol for distributed systems. Two-way proxy uses two proxies for communication between a local environment and an external system in production.
• Implementation - how the tester uses the tool to perform testing. IDE, integrated development environment. Library, a set of passive functions that can be called from an application. Framework, an active application that invokes calls to other applications. Side-car + proxy, an independent component that is attached to an application via a proxy. Web browser extension, a plugin that can be installed to provide added functionality to a browser. CLI, command line interface.
• Platform - groups of technologies used to support the execution of other applications. This column in the table lists mainly well-known platforms. SPP, Scala Platform Process - runtime system for Scala applications. OTP, Open Telecom Platform - collection of middleware, libraries and tools.
• Test Case Language(s) - implementation language(s) for writing test cases, e.g., WebDriver-compatible languages. The other languages referenced in this column are common programming languages.
• Testing Objectives/Support - identifies the testing objectives (see Section II-B) of the tool or the infrastructure provided by the tool.
• Testing Strategies - identifies the levels of testing, including unit, component, contract, integration and system (see Section II-B).

B. Testing Microservices

Unit testing of microservices is done using xUnit frameworks such as JUnit or NUnit, see Table I rows 7 and 9.
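To make the xUnit style concrete, the sketch below uses Python's built-in unittest module in place of JUnit or NUnit; the fare-calculation function and its rates are hypothetical. The test exercises a service's business logic in isolation, with no network involved:

```python
import unittest

def fare(distance_km, rate_per_km=1.50, base=2.00):
    """Hypothetical business-logic function of a trip-pricing service."""
    if distance_km < 0:
        raise ValueError("distance cannot be negative")
    return round(base + distance_km * rate_per_km, 2)

class FareTest(unittest.TestCase):
    """xUnit-style test case: each method sets up inputs, exercises
    the unit under test, and asserts on the expected behavior."""

    def test_regular_trip(self):
        self.assertEqual(fare(10), 17.00)

    def test_zero_distance_charges_base_fare(self):
        self.assertEqual(fare(0), 2.00)

    def test_negative_distance_is_rejected(self):
        with self.assertRaises(ValueError):
            fare(-1)

# Run with: python -m unittest <module>
```

JUnit and NUnit follow the same pattern: one test class per unit, one method per behavior, executed by the framework's runner.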
TABLE I
Tools used to support the testing of microservices.

No. | Name | Interface Method | Implementation | Platform | Test Case Language(s) | Testing Objectives/Support | Testing Strategies
1 | Appium [10] | WebDriver | IDE | iOS, Android | Any WebDriver-compatible language | Regression, Exploratory | System
2 | Docker Compose Rule [11] | HTTP Requests | Library | JVM | Java | Regression | Integration, System
3 | Gatling [12] | HTTP Requests | Framework | JVM, SPP (Scala) | Java, Scala | Performance, Stress | Component, System
4 | Gremlin [13] | HTTP Requests | Side-car + Proxy | Linux, Python Runtime Environment (PRE) | Linux scripting language, Python | End-to-End | Component, Integration, System
5 | Hoverfly [5] | HTTP Requests | Side-car + Proxy | JVM, PRE, MacOS, Windows, Linux | Java, Python, scripting language | End-to-End | Component, Integration
6 | JMeter [14] | HTTP Requests | IDE | JVM | Java | Functional, Regression | Component, Integration
7 | JUnit [15] | Method call | Framework | JVM | Java | Functional, Regression | Unit
8 | Minikube [16] | CLI | Container orchestrator | Linux, Windows, MacOS | Scripting language | Test Harness | Component, Integration
9 | NUnit [17] | Method call | Framework | .NET Framework | .NET languages | Functional, Regression | Unit
10 | Pact [18] | HTTP Requests | Framework | Cross-section of runtime systems | Any supported language | Regression | Contract, Component
11 | Selenium IDE [19] | HTTP Requests | Web browser extension | Chrome, Firefox | Selenium Script | Regression, Exploratory | System
12 | Spring Boot Starter Test [20] | Method call | Framework | JVM | Java | Test Harness | Unit
13 | Spring Cloud Contract [21] | HTTP Requests | Framework | JVM | Groovy, YAML | Regression, End-to-End | Contract
14 | Telepresence [22] | Two-way proxy | CLI | Windows, Linux | Any | Test Harness | Component
15 | Tsung [23] | HTTP Requests | Framework | OTP (Erlang) | XML | Stress | Component, System
16 | Watir [24] | WebDriver | CLI | Ruby VM | Ruby | Regression | System

Fowler [1] states that in the context of microservices the components are the services themselves, and they should be tested as independent systems by mocking or stubbing the interaction with external dependent components. By removing these dependencies during testing, the service does not rely on the network communication, which may modify the outcome of the test if the network is not behaving as expected. Tools to support component testing include Gatling [12] and Gremlin [13] (see Table I rows 3 and 4), among others.

When using microservices, the interfaces to be defined in the contract are implemented as communication protocols, usually a public API or a messaging system. Sundar [25] states that contract testing is integral to microservices testing and identifies two types of contract testing: in the producer-consumer service model, integration testing verifies the contract of the producer's interface by creating mocks or stubs of the producer, thereby replicating the service to be consumed; and consumer-driven contract testing is done on the producer's side by verifying the different consumer contracts that need to be serviced. Tools to support contract testing include Pact [18] and Spring Cloud Contract [21], see Table I rows 10 and 13.

Fowler [1] states that when performing integration testing of a system containing microservices, each service can be considered as a component and the services can be grouped together using the traditional approach to integration testing [8]. More specifically, a test or staging environment is used to integrate the individually tested microservices [25]. For example, using Docker Compose Rule [11] the tester can set up the environment with the services to be integrated and run test cases in this environment. Other tools that perform integration testing include Gremlin [13], Hoverfly [5] and JMeter [14], see Table I rows 2, 4, 5 and 6.

System testing for a microservices application is usually done by executing functionality through the UI or public API of the system. This type of testing includes end-to-end testing of user-defined requirements and involves total coverage of the various microservices. Tools used for system testing include Appium [10], Docker Compose Rule [11], Gatling [12], Gremlin [13] and Selenium IDE [19], among others, see Table I. Sundar [25] states that end-to-end testing of a microservices architecture can be difficult and expensive to debug.

IV. MICROSERVICES TESTBED

In this section we describe the polyglot testbed microservices system called Rideshare, which implements a cluster of microservices for a ridesharing application that allows passengers to request a ride from a list of available drivers going from a pickup address to a destination address. The description of the application includes a high-level architecture of the system and a component diagram of the system. The testbed can be downloaded from the AI Testing GitHub repository1.
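The mocking and stubbing approach described in Section III-B can be sketched as follows. This is a minimal illustration using Python's unittest.mock rather than any of the tools in Table I, and the TripEstimator component and its maps-client seam are hypothetical:

```python
from unittest import mock

class TripEstimator:
    """Hypothetical service component that depends on an external maps service."""

    def __init__(self, maps_client):
        # The dependency is injected, giving the test a seam to stub.
        self.maps_client = maps_client

    def estimate(self, origin, destination):
        # In production this call crosses the network.
        route = self.maps_client.get_route(origin, destination)
        return {"distance_km": route["distance_km"],
                "price": round(2.00 + 1.50 * route["distance_km"], 2)}

# Component test: the stub replaces the external service, so the test
# outcome cannot be affected by network behavior.
stub = mock.Mock()
stub.get_route.return_value = {"distance_km": 8.0}

result = TripEstimator(stub).estimate("Main St", "Airport")
assert result == {"distance_km": 8.0, "price": 14.0}
stub.get_route.assert_called_once_with("Main St", "Airport")
```

Because the stub is deterministic, the test isolates the component's own logic, which is exactly the property the component-testing tools above aim to provide at the service boundary.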
Fig. 2. Ridesharing testbed application.

A. Architecture

The Rideshare application uses the microservices architectural pattern, consisting of a front-end, backing services and domain services. The high-level architecture of the Rideshare application is shown in Figure 2. The front-end is a web user interface that provides user interfaces for both the passenger and the driver. These interfaces include screens for the requirements stated in the previous paragraph. The back-end services are the generic services to support running a microservices application and include the following:
• Authentication - validates user credentials.
• Edge - a proxy that provides communication between the front-end and the domain services.
• Discovery - provides a directory and lookup functionality for the active microservices.
• Configuration Services - provides a centralized repository of files to support configuration of the domain services.

The domain services are the inherent services that provide the functionality for the given application. These services include:
• Notification - a stateless communication service that sends asynchronous information expected by the front-end, such as ride updates, price estimates and payments.
• Google Maps Adapter - a stateless application that forwards requests for information regarding the route for the ride, retrieving information about the estimated time to reach the destination, directions and distance.
• Trip Calculator - takes the information retrieved by the Google Maps Adapter and calculates the price of the ride to be charged to the passenger, saving the information temporarily on a data store and sending it to the front-end through the Notification service.
• Passenger - keeps information about the passenger, e.g., name, phone, payment information and so on.
• Driver - keeps information about the driver, e.g., name, phone, vehicle, bank account and so on.
• Billing - bills customers for trips.
• Payment - requests payment from the third-party creditor.
• Trip - updates the status of the trip, including location, based on the passenger's and driver's locations.

Note that the Trip service shown in Figure 2 consists of command and query components, to show that the service interacts differently with the database.

B. Implementation

The front-end of the Rideshare application is implemented using AngularJS [26]. The backing services were developed with the Java Spring Boot framework [20]. The Edge service is an instance of Zuul [4] that performs proxy communication as previously mentioned; in addition to the proxy tasks, the Edge service also provides load balancing and fault tolerance. The Discovery service implements Eureka [4], which is a REST (Representational State Transfer) based service that provides support for locating other active services in the network. The Configuration service provides remote configuration capabilities to Spring Boot applications, and finally the Authentication service is an OAuth2 implementation [27] that authorizes applications on behalf of an authenticated user.

The domain services are implemented as follows: Notification uses RabbitMQ [28] as a message broker that facilitates asynchronous communication between all the services. Google Maps Adapter connects to the Google Maps Platform [29] to retrieve details of locations. The Trip Calculator is implemented in Golang [30] and uses a MongoDB database. The other services implement Command and Query Responsibility Segregation (CQRS); that is, every service is split into two smaller services, one that takes care of the command executions using an event bus implemented with RabbitMQ, and another one for queries.

V. CASE STUDY

In this section we present a case study that tests the Rideshare application presented in Section IV. The objective of the case study is to provide insights into the overhead required to set up and run the testing tools on the Rideshare application. In addition, we also provide static metrics on the tools and snippets of code to set up and test the application.

A. Method

The tools used to test the Rideshare application are Gremlin [13], Hoverfly [5], Minikube [16] and Telepresence [22]. The tools are used to perform component and integration testing on a set of services based on several use cases. These use cases include: create trip, request trip estimate, Gmaps adapter request, update destination, and request trip information.

1 https://github.com/AITestingOrg/microservices-sandbox
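The CQRS split described in Section IV-B can be sketched as follows. This is a simplified, single-process illustration in which an in-memory event bus stands in for RabbitMQ, and the service and event names are hypothetical:

```python
class EventBus:
    """Minimal in-process stand-in for the RabbitMQ event bus."""

    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, event):
        for handler in self.handlers:
            handler(event)

class TripCommandService:
    """Command side: accepts a command and emits an event on the bus."""

    def __init__(self, bus):
        self.bus = bus

    def create_trip(self, trip_id, pickup, destination):
        self.bus.publish({"type": "TripCreated", "id": trip_id,
                          "pickup": pickup, "destination": destination})

class TripQueryService:
    """Query side: maintains a read model built from published events."""

    def __init__(self, bus):
        self.read_model = {}
        bus.subscribe(self.on_event)

    def on_event(self, event):
        if event["type"] == "TripCreated":
            self.read_model[event["id"]] = event

    def get_trip(self, trip_id):
        return self.read_model.get(trip_id)

bus = EventBus()
queries = TripQueryService(bus)
commands = TripCommandService(bus)
commands.create_trip("t1", "Main St", "Airport")
assert queries.get_trip("t1")["destination"] == "Airport"
```

In the Rideshare prototype the command and query halves are separately deployable services and the bus is a real broker, but the division of responsibilities is the same: commands mutate state by publishing events, and queries read from a model built from those events.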
The platform used to execute the test cases on the Rideshare application is as follows. The hardware consisted of an Intel Core i7 2.2 GHz, 32 GB of RAM and 500 GB of hard drive space. The software installed on the platform included Windows 10, Oracle VirtualBox 5.2, Docker Toolbox, Gremlin 0.1, Hoverfly 0.17.7, Minikube 0.32.0 and kubectl 0.32.0.

To use Gremlin, the tester first runs the Gremlin SDK with Elasticsearch and specifies three files that contain the topology, the fault injection instructions, and the expected output. Every service needs to implement a proxy that supports failure injection, precisely: abort, delay, and mangle. The Gremlin project provides a lightweight proxy written in Golang [30] that can be attached to each microservice in order to process HTTP requests.

Unlike Gremlin, the other tools in our study are relatively easy to set up and execute. Hoverfly [5] may be used in several different modes, including capture, simulate, spy, synthesize, modify and diff. In this work we focus on capture mode. In capture mode Hoverfly is used to record data contained in the HTTP requests between services, including the time to complete a transaction. This captured data can then be used to simulate the behavior of the Rideshare application during system testing.

Telepresence [22] is used to debug a service in an application by executing that service on a local machine while connecting the service to the production environment. By running the service on the local machine, Telepresence supports the debugging of the service. In order for Telepresence to work, Kubernetes is required to establish a connection to the production environment running the Rideshare application.

Minikube [16] encapsulates services inside pods, thereby providing the services with some level of resiliency by monitoring the health of each service. That is, Minikube continuously sends a message to determine if the service is running. The system under test is deployed in Minikube by encapsulating each service to be tested in a pod, and exposing external ports to allow communication between the services.

B. Results

The data collected during the study includes dynamic metrics for the Rideshare application, the supporting platform, and the tools executing the test cases.

Table II shows the execution times to run 5 system test cases each for 10 iterations. We decided to run each test case 10 times to get a better estimate of the execution times. The columns in the table from left to right are the total time, the arithmetic mean, the standard deviation, the maximum time and the minimum time for all the iterations.

TABLE II
Execution times to run 5 system test cases, each iterated 10 times.

Tool | Total Time (sec.) | Mean (sec.) | Std Dev. (sec.) | Max. (sec.) | Min. (sec.)
Docker CE | 5.30 | 0.11 | 0.23 | 0.88 | <0.01
Gremlin | 6.36 | 0.13 | 0.28 | 1.19 | <0.01
Hoverfly | 5.57 | 0.11 | 0.27 | 1.15 | <0.01
Minikube | 14.85 | 0.30 | 0.43 | 2.17 | <0.01
Telepresence | 1,003.22 | 20.06 | 40.92 | 123.13 | 0.05

Since the Rideshare application runs on Docker, the execution times in the first row of the table can be considered the baseline for running the Rideshare application. Telepresence takes the longest time to run all iterations of the system test cases, with a total of 1,003.22 seconds. The percentage increase of execution time of Telepresence over Docker CE is 18,842%, while Hoverfly has the least percentage increase over Docker CE, 5.1%. Gremlin has an increase of 20.11% and Minikube 180.35% over Docker CE.

C. Discussion

The data shown in Table II provides some insight into the variability of the tools available to test microservices. The tools used in the study are a small sample of the list shown in Table I, and represent test harness tools (Minikube and Telepresence) and end-to-end testing tools (Gremlin and Hoverfly).

Some of the execution times shown in Table II were unexpected, particularly the high execution times observed for Minikube and Telepresence, both of which are test harnesses. It is inferred that these high execution times are related to the communication setup with the services to be tested. This thesis is supported by the fact that Hoverfly has execution times close to Docker CE, and requires manual changes to the system test cases in order to use the proxy service provided by Hoverfly.

One of the most challenging aspects of conducting the experiments using the tools in Table II is setting up the tools to run with the prototype. The tool configuration files need to be written manually and must match the environments in the containers of the services. Other challenges include: the resources needed to run both the tools and the prototype, e.g., RAM; unreliable network connectivity; and other supporting entities needed to run the tools on different operating systems. The unreliable network connectivity refers to the communication ports that are needed by the Docker containers on the host machine, which can sometimes be reassigned to other applications.

VI. RELATED WORK

In this section we present the work related to tools used to test microservices. Heorhiadi et al. [13] describe Gremlin as a framework for systematically testing the failure-handling capabilities of microservices. It is considered to be a general-purpose tool that performs tests on a fully deployed system or set of components. Additionally, it requires an HTTP header filter to work; this is usually achieved by a proxy attached to every service to redirect the HTTP messages to the Gremlin SDK. The work by Heorhiadi et al. motivated the contents of this paper, resulting in the comparison of a cross-section of microservices testing tools similar to Gremlin.

Savchenko et al. [31] describe an approach to testing the Mjolnirr microservice platform, which is a prototype developed by the authors. The work starts by describing the microservices
architectural pattern and comparing it to a monolithic system architecture. This introduction lays the foundation for the different feature types and testing levels that may be applied to a microservices architecture. Our work also initially compares microservices and monolithic architectures; however, we then provide a comparison of the tools that may be used to test microservices, which is not done by Savchenko et al. [31].

Unlike the previous works described in this section, Panda et al. [32] argue that the recent advances in formal methods can pave the way to build mechanisms to ensure the correctness of systems that use microservices. The authors propose a system, ucheck, that enforces invariants in microservices-based applications. The ucheck system can be used to define invariants on the remote procedure calls (RPCs), thereby indicating erroneous behavior that may occur in a microservices-based system, e.g., insertion calls to the database before a corresponding authentication call. ucheck requires no coordination, which results in minimal overhead to the running system. This work shows that researchers are seeking out solutions to improve the correctness of microservices applications using formal techniques.

VII. CONCLUSION

The increasing use of microservices in many software applications has resulted in the need for testing tools and techniques more suited to testing these applications. To address this problem, new testing techniques and tools are being developed that focus on the added complexity of the increased data communication required between several services. In this paper we compared several tools that can be used to test microservices at different levels, including end-to-end testing and regression testing, among others. Some of these tools also provide infrastructure support for executing test cases, e.g., test harnesses.

We also described an open-source polyglot Ridesharing microservices reference prototype that can be used as a testbed to evaluate new testing techniques and tools. The Ridesharing application test cases were executed using four different tools and the execution times were reported. Our future work includes developing a testing framework that can run system tests with minimal communication overhead. We also plan to extend the study presented in this paper to include additional tools and perform a more in-depth performance analysis using the Ridesharing prototype.

REFERENCES

[1] M. Fowler and J. Lewis, "Microservices: a definition of this new architectural term," http://web.archive.org/web/20180404230511/https://martinfowler.com/articles/microservices.html, [Online; accessed 5-Apr-2018].
[2] T. M. King, P. J. Clarke, M. Akour, and A. S. Ganti, "Validating autonomic service-driven applications: Challenges and approaches," in Handbook of Research on Architectural Trends in Service-Driven Computing, R. Ramanathan and K. Raja, Eds. IGI Global, 2014, ch. 9, pp. 197–219.
[3] D. Suliman, B. Paech, L. Borner, C. Atkinson, D. Brenner, M. Merdes, and R. Malaka, "The MORABIT approach to runtime component testing," in 30th Annual International Computer Software and Applications Conference (COMPSAC'06), vol. 2, Sept. 2006, pp. 171–176.
[4] Netflix Development Team, "Netflix," https://www.netflix.com/, 2019, [Online; accessed 18-Jan-2019].
[5] SpectoLabs, "Hoverfly," https://hoverfly.io/, 2019, [Online; accessed 18-Jan-2019].
[6] Z. Dehghani, "How to break a monolith into microservices," https://martinfowler.com/articles/break-monolith-into-microservices.html, April 2018, [Online; accessed 21-Jan-2019].
[7] M. Richards, Software Architecture Patterns. O'Reilly Media, Incorporated, 2015.
[8] IEEE Computer Society, P. Bourque, and R. E. Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK(R)): Version 3.0, 3rd ed. Los Alamitos, CA, USA: IEEE Computer Society Press, 2014.
[9] A. P. Mathur, Foundations of Software Testing, 2/e. Pearson Education India, 2013.
[10] JS Foundation, "Appium," http://appium.io/, 2019, [Online; accessed 25-Jan-2019].
[11] Palantir Technologies, "Docker Compose Rule Library," https://github.com/palantir/docker-compose-rule, 2018, [Online; accessed 15-Oct-2018].
[12] Gatling Corp, "Gatling," https://gatling.io/, 2019, [Online; accessed 25-Jan-2019].
[13] V. Heorhiadi, S. Rajagopalan, H. Jamjoom, M. Reiter, and V. Sekar, "Gremlin: Systematic resilience testing of microservices," in Proceedings of the 2016 IEEE 36th International Conference on Distributed Computing Systems (ICDCS 2016). Institute of Electrical and Electronics Engineers Inc., Aug. 2016, pp. 57–66.
[14] Apache Software Foundation, "JMeter," http://jmeter.apache.org/, 2019, [Online; accessed 25-Jan-2019].
[15] The JUnit Team, "JUnit," https://junit.org/, 2019, [Online; accessed 25-Jan-2019].
[16] Google, "Minikube," https://github.com/kubernetes/minikube, 2019, [Online; accessed 10-Jan-2019].
[17] The NUnit Team, "NUnit," http://nunit.org/, 2019, [Online; accessed 25-Jan-2019].
[18] Pact Foundation, "Pact," https://docs.pact.io/, 2019, [Online; accessed 25-Jan-2019].
[19] Software Freedom Conservancy (SFC), "Selenium IDE," https://www.seleniumhq.org/selenium-ide/, 2019, [Online; accessed 25-Jan-2019].
[20] Pivotal Software, Inc., "Spring Boot," https://spring.io/projects/spring-boot, 2019, [Online; accessed 25-Jan-2019].
[21] Pivotal Software, Inc., "Spring Cloud Contract," http://spring.io/projects/spring-cloud-contract, 2019, [Online; accessed 25-Jan-2019].
[22] Datawire, Inc., "Telepresence," https://www.telepresence.io, 2019, [Online; accessed 10-Jan-2019].
[23] N. Niclausse, "Tsung," http://tsung.erlang-projects.org/, 2019, [Online; accessed 25-Jan-2019].
[24] Watir Development Team, "Watir," http://watir.com/, 2019, [Online; accessed 25-Jan-2019].
[25] A. Sundar, "An insight into microservices testing strategies," https://www.infosys.com/it-services/validation-solutions/white-papers/documents/microservices-testing-strategies.pdf, [Online; accessed January 2019].
[26] Google, "AngularJS," https://angularjs.org/, 2019, [Online; accessed 10-Jan-2019].
[27] OAuth Community, "OAuth 2.0," https://oauth.net/2/, 2019, [Online; accessed 10-Jan-2019].
[28] Pivotal Software, Inc., "RabbitMQ," https://www.rabbitmq.com/, 2019, [Online; accessed 26-Jan-2018].
[29] Google, "Google Maps Platform," https://cloud.google.com/maps-platform/, 2019, [Online; accessed 10-Jan-2019].
[30] Google, "The Go Programming Language," https://golang.org/, 2019, [Online; accessed 10-Jan-2019].
[31] D. I. Savchenko, G. I. Radchenko, and O. Taipale, "Microservices validation: Mjolnirr platform case study," in 2015 38th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), May 2015, pp. 235–240.
[32] A. Panda, M. Sagiv, and S. Shenker, "Verification in the age of microservices," in Proceedings of the 16th Workshop on Hot Topics in Operating Systems (HotOS '17). New York, NY, USA: ACM, 2017, pp. 30–36. [Online]. Available: http://doi.acm.org/10.1145/3102980.3102986
