ON
MICROSERVICES FUNDAMENTALS
Submitted By:
Nishita Goyal
B.Tech (3rd year)
Bharati Vidyapeeth’s
College of Engineering,
New Delhi
ACKNOWLEDGEMENT
Sincerely,
Nishita Goyal
INDEX
1. Introduction
1.1 Monolithic v/s Micro-services
1.2 Micro-services – “The Saviour”
1.2.1 Benefits
1.2.2 Drawbacks
2. A Deep Dive into Micro-services
2.1 Micro-services Architecture Pattern
2.1.1 API Gateway (Zuul + Ribbon)
2.1.2 Load Balancer (Ribbon)
2.1.3 Service Discovery & Registry (Eureka Server)
2.1.4 Hystrix Server
3. Inter Service Communication
3.1 Using Rest-Template
3.2 Using Feign Client
4. Hystrix: Fault Tolerance in a connected world
4.1 Immediate Failures
4.2 Timely Failures
4.2.1 Dealing with Timely failures
4.2.2 Interceptor: It’s working
4.2.3 Netflix Hystrix
4.2.4 Resilience4j
5. Case Study 1: Implementing Hystrix in STS4
6. Database Management in Micro-services
6.1 Event-Driven architecture
6.2 Saga Pattern
6.3 Spring Data JPA: Handling complexity in database access
7. Micro-services Deployment
7.1 Virtualisation
7.2 Containerisation
7.3 Application Containers
7.4 Docker Containers
8. Case Study 2: Hosting the Spring Boot application in a Docker
container.
INTRODUCTION
Micro-services are commonly defined as an architectural approach that
decomposes an application into single-function modules with
well-defined interfaces. Each of these modules is independently
deployed and operated by a small team. Thus, micro-services accelerate
delivery by minimizing the communication and coordination needed
between people, while reducing the scope and risk of changes.
data model. Also, it often results in duplication of some data.
However, having a database schema per service is essential if you
want to benefit from micro-services, because it ensures loose
coupling.
The micro-services architecture pattern offers many benefits that
push us to choose it over a monstrous monolith. Keeping in mind that
nothing is perfect, I will also discuss some of its drawbacks, which
will help in making the best choice when developing an application
for your organisation.
Benefits:
It tackles the problem of complexity by decomposing what would
otherwise be a monstrous monolithic application into a set of
services. While the total amount of functionality is unchanged, the
application has been broken up into manageable chunks or services.
The Micro-services Architecture pattern enforces a level of
modularity that is extremely difficult to achieve in practice with a
monolithic code base. Consequently, individual services are much
faster to develop, and much easier to understand and maintain.
This architecture enables each service to be developed independently
by a team that is focused on that service, thereby increasing
scalability. The developers are free to choose whatever technologies
make sense. When writing a new service, they have the option of using
current technology; moreover, since services are relatively small, it
becomes feasible to rewrite an old service using current technology.
The Micro-services Architecture pattern makes continuous deployment
possible.
Drawbacks:
The term micro-services puts special emphasis on the size of the
services. While small services are preferable, it is important to
remember that small services are a means to an end, not the primary
goal.
Due to the presence of many independent services in a micro-services
architecture, developers need to take care in choosing a method for
inter-service communication.
A micro-services architecture uses multiple databases, which leads to
the challenge of updating databases owned by different services.
Testing a micro-services application is also much more complex. For
example, with a modern framework such as Spring Boot, it is trivial
to write a test class that starts up a monolithic web application and
tests its REST API. In contrast, a similar test class for a service
would need to launch that service and any services that it depends
upon, or at least configure stubs for those services.
In the Micro-services Architecture pattern, implementing changes that
span multiple services is a challenging task: one needs to carefully
plan and coordinate the rollout of changes to each of the services.
Deploying a micro-services based application is also much more
complex, as it typically consists of a large number of services, each
with multiple runtime instances. Thus, there are many more moving
parts that need to be configured, deployed, scaled, and monitored. In
addition, a proper service discovery mechanism needs to be
implemented that enables a service to discover the locations (hosts
and ports) of any other services it needs to communicate with.
A Deep Dive into “MICRO-SERVICES”
Despite the fact that micro-services have a number of drawbacks, they
are still the ideal choice for complex applications. This is because
of the great flexibility their architecture pattern offers.
service. For example, the product details page of an application not
only shows details like name and price, but also details like the
number of items in the cart, the order history, etc. In a
micro-services architecture, the data displayed on the product
details page is owned by multiple micro-services, and this is where
the API Gateway comes in.
How does it work?
The API Gateway is responsible for request routing,
composition, and protocol translation. All requests from clients
first go through the API Gateway. It then routes requests to the
appropriate micro-service. The API Gateway will often handle a
request by invoking multiple micro-services and aggregating the
results. It can translate between web protocols such as HTTP and
Web-Socket and web-unfriendly protocols that are used
internally.
How to implement an API Gateway?
While implementing an API Gateway, we need to consider the following
design issues:
Performance and Scalability - A micro-services architecture
consists of a large number of different services, so the
gateway must be capable of handling thousands of requests.
Reactive Programming model - The API Gateway handles requests
by routing them to the appropriate services. To minimise
latency, the gateway code should be written in a reactive
style.
Service Invocation - In a micro-services architecture,
inter-process communication can be carried out both
synchronously and asynchronously. Hence, the gateway must be
designed to support different communication mechanisms.
Service Discovery - The API Gateway needs to know the location
(IP address and port) of each micro-service with which it
communicates. To do so, the API Gateway, like any other
service client in the system, needs to use the system's
service discovery mechanism.
Handling Partial Failures - If the gateway fails to call a
service, it must be capable of returning cached or default
data in order to maintain a good user experience.
Netflix Zuul Server is one example of an API Gateway. It acts as a
gateway server through which all client requests must pass, so it
presents a unified interface to the client. The client uses a single
communication protocol to talk to all the micro-services, and the
Zuul server is in turn entrusted with the responsibility of calling
the various micro-services with their appropriate communication
protocols. Netflix Zuul also has an inbuilt load balancer (Ribbon) to
load-balance all the incoming requests from the client.
Netflix Ribbon is basically a client-side load balancer: it
load-balances all the incoming requests from the client, using a
simple Round Robin strategy by default. Consider an example with two
services, A and B, where several instances of service B are available
for service A to communicate with. How is a particular instance
selected for the connection? This task is performed by Ribbon.
The Netflix Zuul Server has Netflix Ribbon embedded in it. To use
Netflix Ribbon independently, we have to add the appropriate Maven
packages for Netflix Ribbon to the application.
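The default strategy can be sketched in a few lines of plain Java (a minimal illustration of round robin only, not Ribbon's actual implementation; the instance addresses are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of the default strategy Ribbon applies: plain
// round robin over the known instances of a service. In practice
// Ribbon would obtain the instance list from Eureka.
class RoundRobinBalancer {
    private final String[] instances;
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinBalancer(String... instances) {
        this.instances = instances;
    }

    // Each call returns the next instance, wrapping around;
    // getAndIncrement keeps the choice safe under concurrent requests.
    String choose() {
        return instances[Math.floorMod(position.getAndIncrement(), instances.length)];
    }
}
```

Successive calls to choose() cycle through the instances of service B, so each instance receives an equal share of service A's requests.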
Step 2: The client applies a load-balancing algorithm (usually Round
Robin) to select one instance from all the available instances.
Step 3: The client makes the request.
Client-side discovery is a pretty straightforward method. Moreover,
because the client knows about all the available instances, it can
make an intelligent choice. Spring Cloud uses this method.
Server-side Discovery
In this approach, the client does not make the request directly;
instead it goes via a load balancer, which queries the service
registry and forwards the request to an available instance. This
decouples the client from the discovery mechanism, at the cost of an
extra network hop compared with client-side discovery.
Self-registration pattern
In this pattern, a service instance is responsible for registering
and unregistering itself with the service registry. Also, if
required, a service instance sends heartbeat requests to prevent
its registration from expiring. It is relatively simple and doesn’t
require any other system components. However, a major
drawback is that it couples the service instances to the service
registry. So, one must implement the registration code in each
programming language and framework used by your services.
Third-party registration pattern
In this pattern, service instances aren’t responsible for
registering themselves with the service registry. Instead, another
system component known as the service registrar (example:
Netflix OSS Prana) handles the registration. The service registrar
tracks changes to the set of running instances by either polling
the deployment environment or subscribing to events. When it
notices a newly available service instance, it registers the
instance with the service registry. The service registrar also
unregisters terminated service instances.
Netflix Eureka is a good example of a service registry. It provides
a REST API for registering and querying service instances. A
service instance registers its network location using a POST
request. Every 30 seconds it must refresh its registration using a
PUT request. A registration is removed by either using an HTTP
DELETE request or by the instance registration timing out. As
you might expect, a client can retrieve the registered service
instances by using an HTTP GET request.
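These registry interactions can be sketched as plain HTTP calls (the paths follow Eureka's REST API; the application name SERVICE1 and the instance id host1:8080 are illustrative):

```
POST   /eureka/v2/apps/SERVICE1                # register: body carries host, port, status
PUT    /eureka/v2/apps/SERVICE1/host1:8080     # heartbeat, repeated every 30 seconds
DELETE /eureka/v2/apps/SERVICE1/host1:8080     # explicit de-registration
GET    /eureka/v2/apps/SERVICE1                # query the registered instances
```

If the heartbeats stop arriving, the registration simply times out and the instance disappears from the registry.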
4) Hystrix Server
Hystrix provides fault tolerance and resilience, and is used to avoid
complete failure of a software application. It does this through a
circuit-breaker mechanism: the circuit remains closed while the
application runs smoothly. If errors are encountered continuously,
the Hystrix circuit opens; further requests to the failing service
are stopped by Hystrix and diverted to a fallback service instead. In
this way it provides a highly resilient system.
INTER-SERVICE COMMUNICATION
In a monolithic application, components invoke one another via
language-level method or function calls. In contrast, a micro-
services based application is a distributed system running on
multiple machines.
So far, we have seen how the client makes requests to the different
services. We also know that a micro-services architecture consists of
a large number of services communicating with each other. Now the
question arises: how do these services communicate?
The different services communicate with each other with the help of
Feign Client or Rest-Template.
Rest-Template
Rest-Template is used to create applications that consume RESTful web
services. One can use its exchange() method to consume the web
services for all HTTP methods.
However, using Rest-Template leads to a lot of redundant code that
has little to do with the business logic. To avoid this redundancy,
Feign Client is used.
Feign Client
Feign is a Java-to-HTTP client binder: a library for creating REST
API clients in a declarative way, which makes writing web service
clients much easier.
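As a sketch, a declarative Feign client could look like the following (the annotations follow Spring Cloud OpenFeign; the service name, endpoint and return type are illustrative assumptions):

```java
// Hypothetical declarative client: Spring Cloud OpenFeign generates
// the HTTP plumbing from this interface at runtime. The logical name
// "service2" would be resolved through Eureka and Ribbon.
@FeignClient(name = "service2")
public interface Service2Client {

    // Maps to GET http://service2/api/items; Item is an illustrative DTO.
    @GetMapping("/api/items")
    List<Item> getItems();
}
```

The calling service simply injects Service2Client and calls getItems(), with no HTTP code of its own.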
Basically, the main purpose of the above two is to provide the
REST endpoints to Ribbon. Now, let’s understand this using an
example,
1) We hit the URL api/service1 (after starting the required services:
discovery-service, gateway-service, service1, service2 and service3).
2) The client request is received at the API Gateway.
3) The API Gateway (Zuul) resolves the port number from the service
name using the Service Discovery (Eureka).
4) The request is sent to service1, which internally uses the Feign
Client/Rest-Template to fetch data from the service2 micro-service.
5) The service2 micro-service in turn hits the service3 micro-service
using the Feign Client/Rest-Template.
6) The data is loaded from the service3 micro-service to the service2
micro-service, and from the service2 micro-service to the service1
micro-service.
7) The complete data is then sent as a response back to the client.
Hystrix: Fault Tolerance in a
Connected World
Let us consider two services, A and B, such that service A wishes to
talk to service B. If service A is built using a framework like
Spring Boot, it will have a servlet thread pool: for each incoming
request, a thread is assigned from the pool to serve it. The thread
does the request processing and, after sending the response back to
the user, is freed.
In micro-services architecture, the services can experience two
types of failures while communicating. These are:
1) Immediate Failures
2) Timely Failures
Immediate Failures
Say service A sends a connection request to service B, and a thread
from the pool is assigned to this work, but service B immediately
responds with a "Connection refused" exception. This could happen,
for example, if service B itself is down.
These exceptions are easily dealt with in code, for instance with
try/catch exception handling. Also, in this case the thread is freed
immediately because of the failure.
Timely Failures
Unlike immediate failures, here the failure message is received only
after a certain time has passed since the request was made. Service B
need not be down; there could be some other disturbance, such as
network latency.
In this situation, if request 1 hangs, the thread assigned to it
stays occupied. If another request arrives in the meanwhile, another
thread is assigned and occupied as well. In this way all the threads
may, after some time, become occupied and the pool exhausted. Service
A will then be unable to process further requests and will go down
itself.
Because the services are interconnected, the failure spreads from one
service to the next, finally leading to a Cascading Failure.
Interceptor: Working
The working of the interceptor can be understood by considering the
following two cases:
Case 1: When request is successful.
When the request made by service A to service B succeeds, the
interceptor simply allows it to pass, as in the ideal case.
Case 2: When the request is unsuccessful, i.e. it fails.
In this case the interceptor acts smartly and starts keeping track of
the failures (preferably as a percentage).
If this failure percentage crosses a threshold value, the interceptor
starts acting as a barrier and does not allow more requests to pass
through it: it moves to the "STOP" state. During this time it sends
default data back to service A as the response, and meanwhile service
B is allowed to recover. After a particular interval, the interceptor
again lets requests flow through to check the status of service B,
and the process continues.
It should be noted that the interceptor should not move straight from
the "STOP" state back to the "START" state. Instead, it should move
to an "ALLOW PARTIAL" state, in which it lets only a few requests
pass through, just to test the status of the previously down service.
If that service is working fine again, the interceptor switches back
to the ideal "START" state. The pattern the interceptor works on is
called the Circuit Breaker Pattern in micro-services.
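The three states described above can be sketched as a small state machine in plain Java (an illustration of the pattern only; the failure threshold and the single trial call are simplifying assumptions):

```java
// A minimal sketch of the circuit-breaker states described above,
// using the document's names: START (requests flow), STOP (requests
// blocked, fallback served) and ALLOW PARTIAL (trial requests only).
class CircuitBreaker {
    enum State { START, STOP, ALLOW_PARTIAL }

    private State state = State.START;
    private int failures = 0;
    private final int threshold;   // consecutive failures that trip the breaker

    CircuitBreaker(int threshold) { this.threshold = threshold; }

    State state() { return state; }

    // While in STOP, callers should serve the fallback (default data)
    // instead of calling the downstream service at all.
    boolean allowRequest() { return state != State.STOP; }

    // Record the outcome of one call to the downstream service.
    void record(boolean success) {
        if (state == State.ALLOW_PARTIAL) {
            // The trial call decides: recover fully or trip again.
            state = success ? State.START : State.STOP;
            if (state == State.START) failures = 0;
            return;
        }
        failures = success ? 0 : failures + 1;
        if (state == State.START && failures >= threshold) state = State.STOP;
    }

    // Called when the recovery interval elapses: let trial requests through.
    void recoveryIntervalElapsed() {
        if (state == State.STOP) state = State.ALLOW_PARTIAL;
    }
}
```

Real implementations track a failure percentage over a rolling window rather than a simple count, but the state transitions are the same.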
In micro-services, the two commonly used tools to implement this
pattern are:
1) Netflix Hystrix
2) Resilience4j
Netflix Hystrix
In a distributed environment, inevitably some of the many
service dependencies will fail. Hystrix is a library that helps you
control the interactions between these distributed services by
adding latency tolerance and fault tolerance logic. Hystrix does
this by isolating points of access between the services, stopping
cascading failures across them, and providing fallback options,
all of which improve your system’s overall resiliency.
Hystrix evolved out of resilience engineering work that the
Netflix API team began in 2011. In 2012, Hystrix continued to
evolve and mature, and many teams within Netflix adopted it.
Today tens of billions of thread-isolated, and hundreds of
billions of semaphore-isolated calls are executed via Hystrix
every day at Netflix. This has resulted in a dramatic
improvement in uptime and resilience.
Resilience4j
Resilience4j is a lightweight fault tolerance library inspired
by Netflix Hystrix, but designed for Java 8 and functional
programming. Lightweight, because the library only uses Vavr,
which does not have any other external library dependencies.
Netflix Hystrix, in contrast, has a compile dependency
to Archaius which has many more external library dependencies
such as Guava and Apache Commons Configuration.
Resilience4j provides higher-order functions (decorators) to
enhance any functional interface, lambda expression or method
reference with a Circuit Breaker, Rate Limiter, Retry or
Bulkhead. The advantage is that you have the choice to select the
decorators you need and nothing else.
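The decorator idea can be illustrated in plain Java with a hand-rolled retry wrapper (a sketch of the concept only, not Resilience4j's API; maxAttempts is assumed to be at least 1, and the flaky supplier is a made-up stand-in for a remote call):

```java
import java.util.function.Supplier;

// Sketch of the higher-order-function idea Resilience4j is built on:
// wrap any Supplier with retry behaviour without touching the
// business logic inside it.
class Retry {
    static <T> Supplier<T> decorate(Supplier<T> supplier, int maxAttempts) {
        return () -> {
            RuntimeException last = null;
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                try {
                    return supplier.get();
                } catch (RuntimeException e) {
                    last = e;   // remember the failure and try again
                }
            }
            throw last;         // all attempts exhausted
        };
    }

    // A stand-in for a remote call that fails the first few times.
    static Supplier<String> flakyService(int failuresBeforeSuccess) {
        int[] calls = {0};
        return () -> {
            if (++calls[0] <= failuresBeforeSuccess) {
                throw new RuntimeException("transient failure");
            }
            return "ok after " + calls[0] + " calls";
        };
    }
}
```

In Resilience4j the same composition style lets you stack a Retry, a CircuitBreaker and a Bulkhead around one lambda, picking only the decorators you need.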
Case Study 1:
Implementing Hystrix in Spring Tool
Suite 4
Step 1: Create a new service “Service1” in STS.
Step 3: Create HystrixdemoApplication.java class using the
following code as:
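The original code listing is not reproduced here, but a class of this shape would typically look something like the following sketch (the annotations follow Spring Cloud Netflix Hystrix; the endpoint, the service1 URL and the fallback name are illustrative assumptions, not taken from the case study):

```java
// Hypothetical sketch of a Hystrix-enabled Spring Boot application.
@SpringBootApplication
@EnableHystrix
@RestController
public class HystrixdemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(HystrixdemoApplication.class, args);
    }

    // Calls Service1; if the call fails, Hystrix opens the circuit
    // and diverts requests to the fallback method below.
    @GetMapping("/data")
    @HystrixCommand(fallbackMethod = "fallbackData")
    public String getData() {
        return new RestTemplate()
                .getForObject("http://localhost:8081/service1", String.class);
    }

    public String fallbackData() {
        return "Service1 is unavailable; returning default data";
    }
}
```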
Output:
Case 1: When “Service1” is running.
Database Management in
MICROSERVICES
In monolithic architecture, a single relational database is
present. This makes it easy to write queries because everything
is inside a single database.
But in a micro-services architecture each service has its own
database, which is harder to manage. Moreover, because each service
has its own database, each database can also be of a different type;
this is called a "polyglot" approach or architecture. For example,
one service can have a SQL database while another has a NoSQL
database.
To deal with this type of architecture, an Event-Driven Architecture
is used to manage the databases.
Event-Driven Architecture
In this architecture, a micro-service publishes an event when
something notable happens, such as when it updates a business
entity. Other micro-services subscribe to those events. When a
micro-service receives an event it can update its own business
entities, which might lead to more events being published. One
can use events to implement business transactions that span
multiple services. A transaction consists of a series of steps. Each
step consists of a micro-service updating a business entity and
publishing an event that triggers the next step. The micro-
services exchange events via a Message Broker.
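The publish/subscribe flow can be sketched with a tiny in-memory broker in plain Java (real systems would use a broker such as Kafka or RabbitMQ; the event and subscriber names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal in-memory sketch of the message broker described above:
// services subscribe to event types and are notified whenever
// another service publishes an event of that type.
class MessageBroker {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void publish(String eventType, String payload) {
        subscribers.getOrDefault(eventType, List.of())
                   .forEach(handler -> handler.accept(payload));
    }
}
```

A handler may itself publish further events, which is exactly how one business transaction ripples through several services.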
For database management, the best pattern to implement or
perform transaction is: SAGA PATTERN.
Saga Pattern
The saga pattern implements a business transaction as a sequence of
local transactions. Each local transaction updates its database and
publishes a message or event to trigger the next local transaction in
the saga. If a transaction fails because a business rule is violated,
the saga executes a series of compensating transactions that undo the
changes made by the preceding local transactions.
Two ways to implement a saga are:
1) Choreography - each local transaction publishes domain events
that trigger local transactions in other services.
2) Orchestration - an orchestrator tells the participants what
local transaction to execute.
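The orchestration variant with compensating transactions can be sketched in plain Java (an illustration only; the step names reserve-stock, charge-card and ship-order are made up):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Minimal sketch of an orchestrated saga: each successful local
// transaction registers a compensating action; if a later step
// fails, the compensations run in reverse order to undo the work.
class SagaOrchestrator {
    final List<String> log = new ArrayList<>();              // what actually ran
    private final Deque<Runnable> compensations = new ArrayDeque<>();

    // Run one local transaction; on success remember how to undo it.
    boolean step(String name, boolean succeeds, Runnable compensation) {
        if (succeeds) {
            log.add(name);
            compensations.push(compensation);
            return true;
        }
        // A business rule was violated: undo all preceding transactions.
        while (!compensations.isEmpty()) {
            compensations.pop().run();
        }
        return false;
    }
}
```

Because compensations are kept on a stack, the most recent local transaction is undone first, mirroring how a real saga rolls back.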
Now, let us see various factors to be considered to select an
appropriate database for the application-
1) Structure of data: The structure of the data basically
decides how we need to store and retrieve it. As our
applications deal with data present in a variety of formats,
selecting the right database should include picking the right
data structures for storing and retrieving the data. If we do
not select the right data structures for persisting our data,
our application will take more time to retrieve data from the
database, and will also require more development efforts to
work around any data issues.
2) Size of data to be stored: This factor takes into
consideration the quantity of data we need to store and
retrieve as critical application data. The amount of data we
can store and retrieve may vary depending on a
combination of the data structure selected, the ability of the
database to differentiate data across multiple file systems
and servers, and even vendor-specific optimisations. So we
need to choose our database keeping in mind the overall
volume of data generated by the application at any specific
time and also the size of data to be retrieved from the
database.
3) Speed and scalability: This decides the speed we require
for reading the data from the database and writing the data
to the database. It addresses the time taken to service all
incoming reads and writes to our application. Some
databases are actually designed to optimise read-heavy
applications, while others are designed in a way to support
write-heavy solutions. Selecting a database that can handle
our application’s input/output needs can actually go a long
way to making a scalable architecture.
4) Accessibility of data: The number of people or users
concurrently accessing the database and the level of
computation involved in accessing any specific data are also
important factors to consider while choosing the right
database. The processing speed of the application gets
affected if the database chosen is not good enough to handle
large loads.
5) Data modelling: This helps map our application's
features onto the data structures we will need to
implement. Starting with a conceptual model, we
can identify the entities, their associated attributes, and the
entity relationships that we will need. As we go through the
process, the type of data structures we will need in order to
implement the application will become more apparent. We
can then use these structural considerations to select the
right category of database that will serve our application the
best.
6) Scope for multiple databases: During the modelling
process, we may realise that we need to store our data in a
specific data structure, where certain queries cannot be
optimised fully. This may be because of various reasons
such as some complex search requirements, the need for
robust reporting capabilities, or the requirement for a data
pipeline to accept and analyse the incoming data. In all such
situations, more than one type of database may be required
for our application. When choosing more than one
database, it’s quite important to select one database that
will own any specific set of data. This database acts as the
canonical database for those entities. Any additional
databases that work with this same set of data may have a
copy, but will not be considered as the owner of this data.
7) Safety and security of data: We should also check the
level of security that any database provides to the data
stored in it. In scenarios where the data to be stored is
highly confidential, we need to have a highly secured
database. The safety measures implemented by the
database in case of any system crash or failure is quite a
significant factor to keep in mind while choosing a
database.
Spring Data JPA adds its own features on top of JPA, such as a
no-code implementation of the repository pattern and the creation of
database queries from method names.
Here are the three key features of Spring Data JPA:
1. No-code Repositories
The repository pattern is one of the most popular
persistence-related patterns. It hides the data store specific
implementation details and enables you to implement your
business code on a higher abstraction level. Implementing
that pattern isn’t too complicated but writing the standard
CRUD operations for each entity creates a lot of repetitive
code. Spring Data JPA provides you a set of repository
interfaces which you only need to extend to define a
specific repository for one of your entities.
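As a sketch, such a repository could look like the following (the entity and method names are illustrative; Spring Data JPA generates the implementation at runtime and derives the query from the method name):

```java
// Hypothetical entity repository: extending JpaRepository gives
// save(), findById(), findAll(), delete() and more with no
// implementation code of our own.
public interface ProductRepository extends JpaRepository<Product, Long> {

    // Derived query: Spring Data parses the method name into
    // "select p from Product p where p.name like %:fragment%".
    List<Product> findByNameContaining(String fragment);
}
```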
MICROSERVICES Deployment
Deploying a monolithic application means running one or more
identical copies of a single, usually large, application. You
typically provision N servers (physical or virtual) and run M
instances of the application on each server. The deployment of a
monolithic application is not always entirely straightforward,
but it is much simpler than deploying a micro-services
application.
A micro-services application consists of tens or even hundreds of
services. Services are written in a variety of languages and
frameworks. Each one is a mini-application with its own specific
deployment, resource, scaling, and monitoring requirements.
For example, you need to run a certain number of instances of each
service based on the demand for that service. Also, each service
instance must be provided with the appropriate CPU, memory, and I/O
resources. What is even more challenging is that despite this
complexity, deploying services must be fast, reliable and
cost-effective.
Containers and virtual machines are two of the most popular methods
for application deployment. Containerisation is essentially
operating-system (OS) level virtualisation, whereas deploying Virtual
Machines (VMs) is hardware virtualisation.
Virtualisation
Virtualisation is a hypervisor-based technology that lets us run one
OS on top of another, for example Linux running on Windows or vice
versa. Both the guest OS (Linux in this example) and the host OS
(Windows) run with their own kernel. Communication between the guest
and the actual hardware happens through an abstraction layer, the
hypervisor.
Since all communication between the guest and the host goes through
the hypervisor, this approach provides a very high level of isolation
and security. But it is considerably slow and incurs a performance
overhead due to hardware emulation. To reduce this overhead,
containerisation was introduced.
Containerisation
In containerisation, multiple isolated instances of a service use the
same kernel. All of the application's logic, including the
application platform and its dependencies, is packaged into one
single container. It is a virtualisation method for deploying
different distributed applications without launching an entire
virtual machine for each application.
Application Containers
Application containers comprise the application files, dependencies
and libraries an application needs to run on an OS. Multiple
application containers can be run in a micro-services architecture,
where each service making up the application runs independently of
the others.
Docker Containers
Docker is a tool that makes it very easy to deploy and run an
application using containers, since a container allows a developer to
create an all-in-one package of the application with all its
dependencies. For example, a Java application requires Java
libraries, and when we deploy it on any system or VM, we need to
install Java first. In a container, however, everything is kept
together and shipped as one package, such as a Docker container.
Case Study 2:
Hosting the Spring Boot application
in a Docker container
Step 3: To do so, first clean up the target folder with mvn
clean (this can also be done from the IDE by running Maven
Clean) and mvn install (this can also be done from IDE by
running Maven Install).
This command will create a “dockerdemo.jar” in the target
directory of the working directory.
The following four Dockerfile instructions will create an image of
our Java application so that it can run in Docker.
1. FROM java:8 means this is a Java application and will require
the Java libraries, so Docker will pull all the Java-related
libraries and add them to the container.
2. EXPOSE 8080 means that we would like to expose 8080
to the outside world to access our application.
3. ADD /target/dockerdemo.jar dockerdemo.jar
ADD <source from where Docker should create the image>
<destination>
4. ENTRYPOINT ["java", "-jar", "dockerdemo.jar"]
will run this command as the entry point, as this is a JAR and we
need to run it from within Docker.
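Putting the four instructions together, the Dockerfile described above reads:

```dockerfile
FROM java:8
EXPOSE 8080
ADD /target/dockerdemo.jar dockerdemo.jar
ENTRYPOINT ["java", "-jar", "dockerdemo.jar"]
```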
Step 6: This will create our image in Docker and load it up to the
container.
Step 7: Now that we have the image ready to run, let’s do that
with the following command.
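Assuming the image is tagged dockerdemo (the tag name is an assumption, as the original command screenshots are not reproduced here), the build and run commands would typically be:

```
docker build -t dockerdemo .         # Step 6: build the image from the Dockerfile
docker run -p 8080:8080 dockerdemo   # Step 7: run it, mapping container port 8080 to the host
```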
Step 8: The Spring Boot application now boots up and the server
is running on port 8080.