
INTERNSHIP REPORT

ON
MICROSERVICES FUNDAMENTALS

MENTOR: Mr. Subh Anand

Submitted By:
Nishita Goyal
B.Tech (3rd year)
Bharati Vidyapeeth’s
College of Engineering,
New Delhi

ACKNOWLEDGEMENT

The internship opportunity I had with Reliance Industries
Limited was a great chance for learning and professional
development. I therefore consider myself very fortunate to have
been given such an opportunity. I am also grateful for the chance
to meet so many wonderful people and professionals who guided
me through this internship period.

I am using this opportunity to express my deepest gratitude and
special thanks to Mr. Subh Anand, who, in spite of being
extraordinarily busy with his duties, took time out to hear me,
guide me, and keep me on the correct path, allowing me to carry
out my learning.

I perceive this opportunity as a big milestone in my career
development. I will strive to use the skills and knowledge I have
gained in the best possible way, and I will continue to work on my
areas of improvement. I hope to continue cooperating with all of
you in the future.

Sincerely,

Nishita Goyal

INDEX
1. Introduction
1.1 Monolithic v/s Micro-services
1.2 Micro-services – “The Saviour”
1.2.1 Benefits
1.2.2 Drawbacks
2. A Deep Dive into Micro-services
2.1 Micro-services Architecture Pattern
2.1.1 API Gateway (Zuul + Ribbon)
2.1.2 Load Balancer (Ribbon)
2.1.3 Service Discovery & Registry (Eureka Server)
2.1.4 Hystrix Server
3. Inter Service Communication
3.1 Using Rest-Template
3.2 Using Feign Client
4. Hystrix: Fault Tolerance in a connected world
4.1 Immediate Failures
4.2 Timely Failures
4.2.1 Dealing with Timely failures
4.2.2 Interceptor: It’s working
4.2.3 Netflix Hystrix
4.2.4 Resilience4j
5. Case Study 1: Implementing Hystrix in STS4
6. Database Management in Micro-services
6.1 Event-Driven architecture
6.2 Saga Pattern
6.3 Spring data JPA: Handling complexity in database access
7. Micro-services Deployment
7.1 Virtualisation
7.2 Containerisation
7.3 Application Containers
7.4 Docker Containers
8. Case Study 2: Hosting the Spring Boot application in a Docker
container.

INTRODUCTION
Micro-services can be defined as an approach that focuses on
decomposing an application into single-function modules with
well-defined interfaces. Each of these modules is independently
deployed and operated by a small team. Thus, micro-services
accelerate delivery by minimizing communication and
coordination between people while reducing the scope and risk
of changes.

Monolithic v/s Micro-services


Unlike micro-services, in monolithic architecture the entire
application is packaged as one piece. In a monolith, the core of
the application is the business logic, which is implemented by
modules that define services, domain objects, and events.
Surrounding the core are adapters that interface with the
external world.
The choice between the two approaches depends on the context
and complexity of the application. Micro-services do solve
problems that occur in large applications, such as scaling and
management, but they are not always the way to go.
Applications written using monolithic style are extremely
common. They are simple to develop since our IDEs and other
tools are focused on building a single application. These kinds of
applications are also simple to test. You can implement end-to-
end testing by simply launching the application and testing the
UI with a testing package such as Selenium. Monolithic
applications are also simple to deploy. You just have to copy the
packaged application to a server. You can also scale the
application by running multiple copies behind a load balancer.
But this simple-looking architecture has huge limitations.
 This fails when it comes to building large applications,
because it then becomes extremely difficult for a single
developer to understand the entire code. As a result, fixing
bugs and implementing new features correctly becomes
difficult and time consuming.
 The sheer size of the application also slows it down. The
larger the application, the longer the start-up time.
 Monoliths are difficult to scale. As discussed above, a
monolith consists of separate features in a single module,
and each feature has its own requirements; because they all
live in one module, these requirements have to be
compromised.
 As all the features are present in a single module, a
monolith offers poor reliability: one error can bring down
the entire application.
 A complex monolithic application also acts as an obstacle to
continuous deployment. Continuous deployment is
extremely difficult because, in order to update any one part
of the application, we must redeploy the whole of it.
Now, in order to overcome these limitations and to make agile
development possible, the micro-services architecture is being
adopted worldwide.

Micro-services – “The Saviour”


In micro-services, as discussed, the idea is to split our application
into a set of smaller, interconnected services. Using the micro-
services architecture, each service is designed as a mini-
application that has its own hexagonal architecture consisting of
business logic along with various adapters.
The micro-services Architecture pattern significantly impacts
the relationship between the application and the database.
Rather than sharing a single database schema with other
services, each service has its own database schema. On the one
hand, this approach is at odds with the idea of an enterprise-wide
data model. It also often results in duplication of some data.
However, having a database schema per service is essential if you
want to benefit from micro-services, because it ensures loose
coupling.
The micro-services architecture pattern offers many benefits
that push us to choose it over a monstrous monolith. Keeping in
mind that nothing is perfect, I will also discuss some of its
drawbacks, which will help in making the best choice when
developing an application for your organisation.

Benefits:
 It tackles the problem of complexity. It decomposes what
would otherwise be a monstrous monolithic application
into a set of services. While the total amount of functionality
is unchanged, the application has been broken up into
manageable chunks or services.
 The Micro-services Architecture pattern enforces a level of
modularity that in practice is extremely difficult to achieve
with a monolithic code base. Consequently, individual
services are much faster to develop, and much easier to
understand and maintain.
 This architecture enables each service to be developed
independently by a team that is focused on that service,
thereby increasing scalability. The developers are free to
choose whatever technologies make sense. When writing a
new service, they have the option of using current
technology. Moreover, since services are relatively small, it
becomes more feasible to rewrite an old service using
current technology.
 The Micro-services Architecture pattern makes continuous
deployment possible.

Drawbacks:
 The term micro-services puts special emphasis on the size
of the services. While small services are preferable, it’s
important to remember that small services are a means to
an end, not the primary goal.
 Due to the presence of a number of independent services in
the micro-services architecture, developers need to take
care in choosing a method to facilitate inter-service
communication.
 Micro-services architecture offers the use of multiple
databases which leads to the challenge of updating multiple
databases owned by different services.
 Testing a micro-services application is also much more
complex. For example, with a modern framework such as
Spring Boot, it is trivial to write a test class that starts up a
monolithic web application and tests its REST API. In
contrast, a similar test class for a service would need to
launch that service and any services that it depends upon,
or at least configure stubs for those services.
 In Micro-services Architecture pattern, implementing
changes that span multiple services is a challenging task. In
this, one needs to carefully plan and coordinate the rollout
of changes to each of the services.
 Deploying a micro-services based application is also much
more complex, as it typically consists of a large number of
services, and each service has multiple runtime instances.
Thus, there are many more moving parts that need to be
configured, deployed, scaled, and monitored. In addition, a
proper service discovery mechanism needs to be
implemented that enables a service to discover the locations
(hosts and ports) of any other services it needs to
communicate with.

A Deep Dive into “MICRO-SERVICES”
Despite the fact that micro-services come with several
drawbacks, they are still the ideal choice for complex
applications. This is because of the flexibility they offer in their
architecture pattern.

Micro-services Architecture Pattern


While studying micro-services, the following questions come to
mind:
 How will we make sure that there is a single interface
given to the client which reduces a lot of client side
efforts and makes that micro-services application very
easy to use?
 How to load balance incoming requests to micro-
services?
 How will we register micro-services so that clients can
use that registration information (Service Discovery)
and call in a micro-service?
 How will we ensure that micro-services application
which is built, is completely fault tolerant and highly
available?

Following are the different components used in the micro-
services architecture pattern, which provided me the answers to
all these questions:
1) API GATEWAY (ZUUL + RIBBON)
The API Gateway is an essential component that lets us decide
how our application’s clients will interact with the micro-
services. It is mainly responsible for performing request filtering.
The API Gateway acts as a single entry point into the system for
the client and is used to route each request to an appropriate
micro-service. For example, the product details page of an
application not only shows details like name, price, etc., but also
shows details like the number of items in the cart, order history,
etc. In a micro-services architecture, the data displayed on the
product details page is owned by multiple micro-services, and
this is where the API gateway comes in.
How does it work?
The API Gateway is responsible for request routing,
composition, and protocol translation. All requests from clients
first go through the API Gateway. It then routes requests to the
appropriate micro-service. The API Gateway will often handle a
request by invoking multiple micro-services and aggregating the
results. It can translate between web protocols such as HTTP and
Web-Socket and web-unfriendly protocols that are used
internally.
How to implement an API Gateway?
While implementing an API gateway, we need to consider the
following design issues:
 Performance and Scalability - A micro-services
architecture consists of a large number of different
services, so the gateway must be capable of handling
thousands of requests.
 Reactive Programming model - The API gateway
handles requests by routing them to appropriate
services. Thus, to minimize latency, the gateway code
should be written in a reactive style.
 Service Invocation - In a micro-services architecture,
inter-process communication can be carried out both
synchronously and asynchronously. Hence, the
gateway must be designed to support different types
of communication mechanisms.

 Service Discovery - The API Gateway needs to know
the location (IP address and port) of each micro-
service with which it communicates. For this purpose,
the API Gateway, like any other service client in the
system, needs to use the system’s service discovery
mechanism.
 Handling Partial Failures - If the gateway fails when
calling a service, it must be capable of serving cached
data or default data in order to maintain a good user
experience.
Netflix Zuul Server is one example of an API Gateway. It acts as
a gateway server through which all client requests have to pass,
so it acts as a unified interface for the client. The client uses a
single communication protocol to communicate with all the
micro-services, and the Zuul server is in turn entrusted with the
responsibility of calling the various micro-services with their
appropriate communication protocols. Netflix Zuul also has an
inbuilt load balancer (Ribbon) to load balance all the incoming
requests from the client.

2) LOAD BALANCER (RIBBON)


Load Balancing denotes automatically distributing incoming
application traffic between two or more service instances. It
enables us to achieve fault tolerance in our applications,
seamlessly providing the required amount of load balancing
capacity needed to route application traffic. Load balancing aims
to optimize resource use, maximize throughput, minimize
response time, and avoid overload of any single resource. Using
multiple components with load balancing instead of a single
component may increase reliability and availability through
redundancy.

Netflix Ribbon is basically a client-side load balancer that
balances all the incoming requests from the client. It uses a basic
Round Robin load balancing strategy by default. Consider an
example: there are two services, A and B, and several instances
of service B are available for service A to communicate with.
How does A select a particular instance to connect to? This task
is performed by Ribbon.
Netflix Zuul Server has Netflix Ribbon embedded in it. In order
to use Netflix Ribbon independently, we have to provision the
appropriate Maven packages of Netflix Ribbon for use in an
application.
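Ribbon’s default strategy can be illustrated with a minimal round-robin selector in plain Java. This is a sketch of the idea only; the class name and the "host:port" instance strings are invented for illustration, and real Ribbon also layers in health checks and pluggable rules:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal client-side round-robin selector, illustrating Ribbon's default rule.
// Each call to choose() returns the next instance in the list, wrapping around.
class RoundRobinBalancer {
    private final List<String> instances;              // e.g. "host:port" entries
    private final AtomicInteger position = new AtomicInteger(0);

    RoundRobinBalancer(List<String> instances) {
        this.instances = instances;
    }

    String choose() {
        // getAndIncrement is thread-safe, so concurrent callers still rotate fairly
        int index = position.getAndIncrement() % instances.size();
        return instances.get(index);
    }
}
```

With two instances of service B, successive calls simply alternate between them, which is exactly the behaviour described above.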

3) SERVICE DISCOVERY & REGISTRY (EUREKA SERVER)
In the micro-services architecture, in order to invoke a service or
establish communication with it, its network location is required.
For modern cloud-based applications, these network locations
are not fixed but keep changing dynamically. So, in order to find
them, a proper service discovery mechanism is used.
The service discovery mechanism uses two different types of
patterns-
1) Client-side Discovery
2) Server-side Discovery

Client-side Discovery (Netflix OSS)


In this, the client is mainly responsible for determining the
network locations of available instances. It performs discovery as
follows:
Step 1: It queries the service registry (defined as a database of all
the available instances) for the available instances.

Step 2: The client applies a load balancing algorithm (usually
Round Robin) to select one instance from all the available
instances.
Step 3: The client makes a request.
Client-side discovery is a pretty straightforward method. Also,
having information about all the available instances lets the
client make an intelligent decision. Spring Cloud uses this
method.

Server-side Discovery
In this, the client, instead of making the request directly to a
service instance, makes it via a load balancer. The load balancer
queries the service registry and forwards the request to an
available instance, so the client is freed from any discovery logic.

As discussed, service discovery makes use of the instances
registered in the service registry. Now, how do these services get
registered in the service registry?
Service registration can take place using two different patterns-
1) Self-registration pattern
2) Third-party registration pattern

Self-registration pattern
In this pattern, a service instance is responsible for registering
and unregistering itself with the service registry. Also, if
required, a service instance sends heartbeat requests to prevent
its registration from expiring. It is relatively simple and doesn’t
require any other system components. However, a major
drawback is that it couples the service instances to the service
registry. So, one must implement the registration code in each
programming language and framework used by your services.

Third-party registration pattern
In this pattern, service instances aren’t responsible for
registering themselves with the service registry. Instead, another
system component known as the service registrar (example:
Netflix OSS Prana) handles the registration. The service registrar
tracks changes to the set of running instances by either polling
the deployment environment or subscribing to events. When it
notices a newly available service instance, it registers the
instance with the service registry. The service registrar also
unregisters terminated service instances.
Netflix Eureka is a good example of a service registry. It provides
a REST API for registering and querying service instances. A
service instance registers its network location using a POST
request. Every 30 seconds it must refresh its registration using a
PUT request. A registration is removed by either using an HTTP
DELETE request or by the instance registration timing out. As
you might expect, a client can retrieve the registered service
instances by using an HTTP GET request.
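The registration lifecycle just described (register, refresh the lease periodically, expire on missed heartbeats) can be sketched with a tiny in-memory registry. This is an illustration of the mechanism only, not Eureka’s implementation; the class, the lease map, and the instance names are all invented:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Toy service registry with heartbeat-based expiry, mimicking the lifecycle
// Eureka exposes over REST (POST = register, PUT = heartbeat, timeout = removal).
class TinyRegistry {
    private final long ttlMillis;   // how long a lease survives without a heartbeat
    private final Map<String, Long> leases = new ConcurrentHashMap<>(); // instance -> last heartbeat

    TinyRegistry(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void register(String instance, long now)  { leases.put(instance, now); }     // like POST
    void heartbeat(String instance, long now) { leases.replace(instance, now); } // like PUT
    void deregister(String instance)          { leases.remove(instance); }       // like DELETE

    // Like GET: return instances whose lease has not timed out, evicting the rest.
    List<String> instances(long now) {
        leases.entrySet().removeIf(e -> now - e.getValue() > ttlMillis);
        return new ArrayList<>(leases.keySet());
    }
}
```

An instance that stops sending heartbeats simply disappears from query results once its lease times out, which is the "registration timing out" behaviour described above.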

4) Hystrix Server
Hystrix acts as a fault-tolerance mechanism that is used to avoid
complete failure of a software application. It does this by
providing a circuit breaker mechanism, in which the circuit
remains closed while the application is running smoothly
without any issues. If errors are continuously encountered in the
application, the Hystrix circuit opens, any further requests to the
failing service are stopped by Hystrix, and requests are instead
diverted to a fallback. In this way it provides a highly resilient
system.

INTER-SERVICE COMMUNICATION
In a monolithic application, components invoke one another via
language-level method or function calls. In contrast, a micro-
services based application is a distributed system running on
multiple machines.
So far, we have seen how the client makes requests to the
different services. We also know that a micro-services
architecture consists of a large number of different services
communicating with each other. Now the question arises: how
do these services communicate with each other?
The different services in micro-services communicate with each
other with the help of a Feign client or Rest-Template.
Rest-Template
Rest-Template is used to create applications that consume
RESTful web services. One can make use of the exchange()
method to consume web services for all HTTP methods.
However, the use of Rest-Template leads to a lot of redundant
code that has little to do with the business logic. To prevent this
redundancy, the Feign client is used.

Feign Client
Feign is a Java-to-HTTP client binder: a library for creating
REST API clients in a declarative way, which makes writing web
service clients much easier.
Basically, the main purpose of the above two is to provide the
REST endpoints to Ribbon. Let’s understand this using an
example:

1) When we hit the url: api/service1 (after starting the required
services such as discovery-service; gateway-service; service1;
service2; service3).
2) The client request is received at the API Gateway.
3) The API gateway (Zuul) resolves the port number from the
service name using the Service Discovery (Eureka).
4) The request is sent to the service1 which internally uses the
feign Client/Rest-Template to take data from the service2
micro-service.
5) The service2 micro-service in turn hits the service3 micro-
service using the feign Client/ Rest-Template.
6) The data is loaded from service3 micro-service to service2
micro-service and from the service2 micro-service to the
service1 micro-service.
7) The complete data is then sent as a response back to the client.
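The declarative style that Feign offers can be illustrated with a stripped-down binder built on Java’s dynamic proxies. Everything here (the ProductClient interface, the fake string "transport") is hypothetical; real Feign derives an actual HTTP request from annotations on the interface:

```java
import java.lang.reflect.Proxy;
import java.util.function.Function;

// The caller only declares an interface; the binder supplies the implementation.
interface ProductClient {
    String getProduct(String id);
}

class TinyBinder {
    // Build an implementation of the interface by routing every call through the
    // given "transport" function (a stand-in for Feign's HTTP layer).
    @SuppressWarnings("unchecked")
    static <T> T bind(Class<T> api, Function<String, String> transport) {
        return (T) Proxy.newProxyInstance(
                api.getClassLoader(),
                new Class<?>[] { api },
                (proxy, method, args) ->
                        transport.apply(method.getName() + "/" + args[0]));
    }
}
```

The point of the sketch is the division of labour: the service code only sees the interface, while all the request plumbing lives behind the proxy, which is what removes the Rest-Template boilerplate.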

Hystrix: Fault Tolerance in a
Connected World
Let us consider two services, A and B, such that service A wishes
to talk to service B. If service A is built using a framework like
Spring Boot, it will have a servlet thread-pool: for each incoming
request, a thread is assigned from the pool to serve the request.
This thread does the request processing and, after sending the
response back to the user, is freed.
In micro-services architecture, the services can experience two
types of failures while communicating. These are:
1) Immediate Failures
2) Timely Failures

Immediate Failures
Let’s say that service A sends a connection request to service B,
and a thread from the pool is assigned for this work, but service
B immediately responds with a “Connection refused” exception.
This can happen, for example, if service B itself is down.
These types of exceptions can easily be dealt with in our code, for
example by using try/catch exception handling. Also, in this case
the thread is freed immediately because of the failure.
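The try/catch handling mentioned above can be sketched as follows; callServiceB() is a hypothetical stand-in for a call that is refused immediately:

```java
// Hypothetical call to service B that fails at once, as with "Connection refused".
class ImmediateFailureDemo {
    static String callServiceB() {
        throw new RuntimeException("Connection refused");
    }

    // The caller catches the immediate failure and falls back to default data;
    // the calling thread is freed right away, so the pool is never exhausted.
    static String callWithFallback() {
        try {
            return callServiceB();
        } catch (RuntimeException e) {
            return "default response";
        }
    }
}
```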

Timely Failures
Unlike immediate failures, here the failure message is received
only after a certain time has passed since the request was made.
In this case it is not necessary that service B is down; there could
be some other disturbance as well.

So, if request 1 is made and it hangs, the thread assigned to that
request stays engaged. If another request comes in in the
meanwhile, another thread is assigned and gets engaged as well.
In this way, all the threads may eventually become engaged and
the thread-pool exhausted. Because of this, service A will not be
able to process further requests and will itself go down.
In this manner, all the services may go down because they are
interconnected, which finally leads to a Cascading Failure.

Dealing with Timely Failures:


While dealing with such failures, one should keep two things in
mind:
1) One should not keep sending requests when we know there
is a high chance of failure. Doing so leads to a lack of
available resources.
2) If a service is failing continuously, the caller should return
a default response, so that threads do not get stuck.
Both of the above tasks can be achieved with the use of an
interceptor in the micro-services architecture.

[Diagram: the interceptor sits between Service A and Service B; every request from A to B passes through it.]
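The first safeguard, not letting a thread wait indefinitely, usually takes the form of a timeout on the call. A sketch with a plain Future follows; the method name and delay values are arbitrary choices for illustration:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Run the remote call with a deadline; on timeout (or any failure) return
// default data instead of leaving the calling thread stuck.
class TimeoutCaller {
    static String callWithTimeout(Callable<String> remoteCall, long timeoutMillis) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            return executor.submit(remoteCall).get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (Exception e) {          // TimeoutException, ExecutionException, ...
            return "default response";
        } finally {
            executor.shutdownNow();      // free the worker thread
        }
    }
}
```

A fast call returns its real result; a call that outlives the deadline is abandoned and the caller gets the default response, so the thread-pool exhaustion described above cannot build up.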

Interceptor: Working
The working of the interceptor can be understood by
considering the following two cases:
Case 1: When request is successful.

When the request made by service A to service B is successful,
the interceptor simply allows it to pass, as in the ideal case.
Case 2: When the request is unsuccessful, i.e. it fails.
In this case, the interceptor starts keeping track of the failures
(it is better to track failures as a percentage).
If this failure percentage crosses a threshold value, the
interceptor starts acting as a barrier and does not allow more
requests to pass through it; it moves to the “STOP” state. During
this time, it sends the default data back to service A as the
response, and service B is meanwhile allowed to recover. After a
particular interval, the interceptor again allows requests to flow
through to check the status of service B, and the process
continues.
It should be noted that the interceptor should not move back to
the “START” state immediately after the “STOP” state. Instead,
it should move to an “ALLOW PARTIAL” state, in which it allows
only a few requests to pass through, just to test the status of the
previously down service. If the service is working fine now, the
interceptor is allowed to switch back to the ideal “START” state.
This pattern on which the interceptor works is called the Circuit
Breaker Pattern in micro-services.
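The START / STOP / ALLOW PARTIAL states above correspond to the CLOSED / OPEN / HALF-OPEN states usually used to describe the circuit breaker pattern. A minimal, illustrative state machine follows; the consecutive-failure threshold and single-probe policy are simplifying assumptions (real implementations track failure percentages over a rolling window):

```java
// Minimal circuit breaker: CLOSED = pass everything, OPEN = block everything,
// HALF_OPEN = let a trial request through to probe the downstream service.
class CircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private final int failureThreshold;

    CircuitBreaker(int failureThreshold) { this.failureThreshold = failureThreshold; }

    State state() { return state; }

    // Should this request be allowed to reach the downstream service?
    boolean allowRequest() {
        return state != State.OPEN;
    }

    void recordSuccess() {
        consecutiveFailures = 0;
        state = State.CLOSED;            // a successful probe closes the circuit
    }

    void recordFailure() {
        consecutiveFailures++;
        if (consecutiveFailures >= failureThreshold) {
            state = State.OPEN;          // stop sending requests downstream
        }
    }

    // Called after the recovery interval has elapsed: permit trial requests.
    void recoveryIntervalElapsed() {
        if (state == State.OPEN) {
            state = State.HALF_OPEN;
        }
    }
}
```

While the circuit is OPEN, the caller would serve the default response instead of invoking the service, exactly as the interceptor does in its “STOP” state.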

In micro-services, the two commonly used tools to implement
this pattern are:
1) Netflix Hystrix
2) Resilience4j

Netflix Hystrix
In a distributed environment, inevitably some of the many
service dependencies will fail. Hystrix is a library that helps you
control the interactions between these distributed services by
adding latency tolerance and fault tolerance logic. Hystrix does
this by isolating points of access between the services, stopping
cascading failures across them, and providing fallback options,
all of which improve your system’s overall resiliency.
Hystrix evolved out of resilience engineering work that the
Netflix API team began in 2011. In 2012, Hystrix continued to
evolve and mature, and many teams within Netflix adopted it.
Today tens of billions of thread-isolated, and hundreds of
billions of semaphore-isolated calls are executed via Hystrix
every day at Netflix. This has resulted in a dramatic
improvement in uptime and resilience.
Resilience4j
Resilience4j is a lightweight fault tolerance library inspired
by Netflix Hystrix, but designed for Java 8 and functional
programming. Lightweight, because the library only uses Vavr,
which does not have any other external library dependencies.
Netflix Hystrix, in contrast, has a compile dependency
to Archaius which has many more external library dependencies
such as Guava and Apache Commons Configuration.
Resilience4j provides higher-order functions (decorators) to
enhance any functional interface, lambda expression or method
reference with a Circuit Breaker, Rate Limiter, Retry or
Bulkhead. The advantage is that you have the choice to select the
decorators you need and nothing else.
Case Study 1:
Implementing Hystrix in Spring Tool
Suite 4
Step 1: Create a new service “Service1” in STS.

Step 2: Create another Hystrix Service called “Hystrix Demo”.


For this, include the following dependencies in pom.xml
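The original report shows the dependencies as a screenshot. A typical Spring Cloud Netflix Hystrix setup would add something along these lines to pom.xml; versions are managed by the Spring Cloud BOM and are omitted here, so treat this as a sketch rather than the report's exact configuration:

```xml
<!-- Hystrix circuit breaker support (Spring Cloud Netflix) -->
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-netflix-hystrix</artifactId>
</dependency>
<!-- Web starter so the demo can expose a REST endpoint -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```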

Step 3: Create the HystrixdemoApplication.java class using the
following code:
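The original class is shown as a screenshot. A sketch of what such a class typically looks like is given below; the endpoint path, the Service1 URL, and the fallback message are assumptions, not the report's actual code. It calls Service1 over HTTP and falls back to default data when Hystrix trips:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.circuitbreaker.EnableCircuitBreaker;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@SpringBootApplication
@EnableCircuitBreaker          // turns on Hystrix circuit-breaker support
@RestController
public class HystrixdemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(HystrixdemoApplication.class, args);
    }

    // Calls Service1; if it is down or slow, Hystrix invokes the fallback.
    @GetMapping("/demo")
    @HystrixCommand(fallbackMethod = "defaultResponse")
    public String callService1() {
        return new RestTemplate()
                .getForObject("http://localhost:8081/service1", String.class); // assumed URL
    }

    public String defaultResponse() {
        return "Service1 is unavailable - returning default data";
    }
}
```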

Step 4: Specify the port at which the application will run in the
application.yml file:
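The application.yml screenshot is likewise not reproduced; the port setting referred to is a one-liner (8082 is an arbitrary choice here):

```yaml
server:
  port: 8082   # port on which the Hystrix demo application listens
```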

Output:
Case 1: When “Service1” is running.

Case 2: When “Service1” has stopped.

Database Management in
MICROSERVICES
In monolithic architecture, a single relational database is
present. This makes it easy to write queries because everything
is inside a single database.
But in a micro-services architecture each service has its own
database, which makes data management more difficult. Also,
since each service has its own database, each database can be of
a different type: for example, one service can use a SQL database
while another uses a NoSQL database. This is called a “polyglot”
approach or architecture (polyglot persistence).
In order to deal with such an architecture, an Event-Driven
Architecture is used for managing the databases.

Event-Driven Architecture
In this architecture, a micro-service publishes an event when
something notable happens, such as when it updates a business
entity. Other micro-services subscribe to those events. When a
micro-service receives an event it can update its own business
entities, which might lead to more events being published. One
can use events to implement business transactions that span
multiple services. A transaction consists of a series of steps. Each
step consists of a micro-service updating a business entity and
publishing an event that triggers the next step. The micro-
services exchange events via a Message Broker.
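The publish/subscribe flow through a message broker can be sketched with an in-memory event bus. Event names and handlers below are made up for illustration; a real system would exchange the events through a broker such as RabbitMQ or Kafka:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// In-memory stand-in for a message broker: services subscribe to event types,
// and publishing an event invokes every subscriber for that type.
class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void publish(String eventType, String payload) {
        for (Consumer<String> handler : subscribers.getOrDefault(eventType, List.of())) {
            handler.accept(payload);   // each subscribing micro-service reacts
        }
    }
}
```

For instance, an order service might publish an "OrderCreated" event, and a billing service subscribed to that event would then update its own business entities, possibly publishing further events in turn.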
For database management, the best pattern with which to
implement such transactions is the SAGA PATTERN.

Saga Pattern
A saga is implemented as a sequence of local transactions. Each
local transaction updates its database and publishes a message
or event to trigger the next local transaction in the saga. If a local
transaction fails because a business rule is violated, the saga
executes a series of compensating transactions that undo the
changes that were made by the preceding local transactions.
Two ways to implement saga are:
1) Choreography- In this, each local transaction publishes
domain events that trigger local transactions in other
services.
2) Orchestration- In this, an orchestrator tells the participants
what local transactions to execute.
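The compensation behaviour of a saga can be sketched in a few lines of plain Java; the step names and actions in the usage below (reserve, charge, refund) are illustrative only:

```java
import java.util.ArrayList;
import java.util.List;

// A saga step pairs a local transaction with the compensating action that undoes it.
class SagaStep {
    final Runnable transaction;
    final Runnable compensation;
    SagaStep(Runnable transaction, Runnable compensation) {
        this.transaction = transaction;
        this.compensation = compensation;
    }
}

class Saga {
    // Run steps in order; on the first failure, run the compensations of the
    // already-completed steps in reverse order and report the saga as failed.
    static boolean run(List<SagaStep> steps) {
        List<SagaStep> completed = new ArrayList<>();
        for (SagaStep step : steps) {
            try {
                step.transaction.run();
                completed.add(step);
            } catch (RuntimeException businessRuleViolation) {
                for (int i = completed.size() - 1; i >= 0; i--) {
                    completed.get(i).compensation.run();
                }
                return false;
            }
        }
        return true;
    }
}
```

If, say, the "reserve stock" step succeeds but the "charge card" step violates a business rule, the saga cancels the reservation rather than leaving the databases of the two services inconsistent.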
Now, let us see various factors to be considered to select an
appropriate database for the application-
1) Structure of data: The structure of the data basically
decides how we need to store and retrieve it. As our
applications deal with data present in a variety of formats,
selecting the right database should include picking the right
data structures for storing and retrieving the data. If we do
not select the right data structures for persisting our data,
our application will take more time to retrieve data from the
database, and will also require more development efforts to
work around any data issues.
2) Size of data to be stored: This factor takes into
consideration the quantity of data we need to store and
retrieve as critical application data. The amount of data we
can store and retrieve may vary depending on a
combination of the data structure selected, the ability of the
database to distribute data across multiple file systems
and servers, and even vendor-specific optimisations. So we
need to choose our database keeping in mind the overall
volume of data generated by the application at any specific

time and also the size of data to be retrieved from the
database.
3) Speed and scalability: This decides the speed we require
for reading the data from the database and writing the data
to the database. It addresses the time taken to service all
incoming reads and writes to our application. Some
databases are actually designed to optimise read-heavy
applications, while others are designed in a way to support
write-heavy solutions. Selecting a database that can handle
our application’s input/output needs can actually go a long
way to making a scalable architecture.
4) Accessibility of data: The number of people or users
concurrently accessing the database and the level of
computation involved in accessing any specific data are also
important factors to consider while choosing the right
database. The processing speed of the application gets
affected if the database chosen is not good enough to handle
large loads.
5) Data modelling: This helps map our application’s
features into the data structure and we will need to
implement the same. Starting with a conceptual model, we
can identify the entities, their associated attributes, and the
entity relationships that we will need. As we go through the
process, the type of data structures we will need in order to
implement the application will become more apparent. We
can then use these structural considerations to select the
right category of database that will serve our application the
best.
6) Scope for multiple databases: During the modelling
process, we may realise that we need to store our data in a
specific data structure, where certain queries cannot be
optimised fully. This may be because of various reasons
such as some complex search requirements, the need for
robust reporting capabilities, or the requirement for a data
pipeline to accept and analyse the incoming data. In all such

situations, more than one type of database may be required
for our application. When choosing more than one
database, it’s quite important to select one database that
will own any specific set of data. This database acts as the
canonical database for those entities. Any additional
databases that work with this same set of data may have a
copy, but will not be considered as the owner of this data.
7) Safety and security of data: We should also check the
level of security that any database provides to the data
stored in it. In scenarios where the data to be stored is
highly confidential, we need to have a highly secured
database. The safety measures implemented by the
database in case of any system crash or failure is quite a
significant factor to keep in mind while choosing a
database.

Handling complexity in database access: Spring Data JPA
While implementing a new application, one should focus on the
business logic instead of technical complexity. That’s why the
Java Persistence API (JPA) specification and Spring Data
JPA are extremely popular. JPA handles most of the complexity
of JDBC-based database access and object-relational mappings.
On top of that, Spring Data JPA reduces the amount of
boilerplate code required by JPA. That makes the
implementation of your persistence layer easier and faster.
JPA is a specification that defines an API for object-relational
mappings and for managing persistent objects. Hibernate and
EclipseLink are two popular implementations of this specification.
Spring Data JPA adds a layer on top of JPA. That means it uses
all features defined by the JPA specification, especially the entity
and association mappings, the entity lifecycle management,
and JPA’s query capabilities. On top of that, Spring Data JPA

adds its own features like a no-code implementation of
the repository pattern and the creation of database queries from
method names.
Here are the three key features of Spring Data JPA:
1. No-code Repositories
The repository pattern is one of the most popular
persistence-related patterns. It hides the data-store-specific
implementation details and lets you implement your
business code at a higher level of abstraction. Implementing
the pattern isn't complicated, but writing the standard
CRUD operations for each entity creates a lot of repetitive
code. Spring Data JPA provides a set of repository
interfaces that you only need to extend to define a
specific repository for one of your entities.

2. Reduced Boilerplate Code
Spring Data JPA provides a default implementation for
each method defined by its repository interfaces, so you no
longer need to implement basic read or write operations
yourself. Even though these operations don't require much
code, not having to implement them makes life a little
easier and reduces the risk of careless bugs.
3. Generated Queries
Spring Data JPA can also generate database queries from
method names. As long as your query isn't too complex,
you just need to define a method on your repository
interface with a name that starts with find…By. Spring then
parses the method name and creates a query for it.
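To illustrate all three features, here is a minimal sketch of an entity and its repository. The Employee entity and the findByLastName method are illustrative assumptions, not part of the report's case study; in practice the two types would live in separate source files.

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;

// A hypothetical JPA entity (Employee.java)
@Entity
class Employee {
    @Id
    @GeneratedValue
    private Long id;
    private String lastName;
    // getters and setters omitted for brevity
}

// Extending JpaRepository gives us the standard CRUD methods
// (save, findById, findAll, delete, ...) with no implementation
// code of our own (EmployeeRepository.java).
interface EmployeeRepository extends JpaRepository<Employee, Long> {

    // Derived query: Spring parses the method name and generates
    // a query equivalent to
    // "select e from Employee e where e.lastName = ?1".
    List<Employee> findByLastName(String lastName);
}
```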

Micro-services Deployment
Deploying a monolithic application means running one or more
identical copies of a single, usually large, application. You
typically provision N servers (physical or virtual) and run M
instances of the application on each server. Deploying a
monolithic application is not always entirely straightforward,
but it is much simpler than deploying a micro-services
application.
A micro-services application consists of tens or even hundreds of
services, written in a variety of languages and frameworks. Each
one is a mini-application with its own specific deployment,
resource, scaling, and monitoring requirements. For example, you
need to run a certain number of instances of each service based
on the demand for that service. Each service instance must also
be provided with the appropriate CPU, memory, and I/O resources.
What is even more challenging is that despite this complexity,
deploying services must be fast, reliable, and cost-effective.
Containers and virtual machines are two of the most popular
methods for application deployment. Containerisation is
basically defined as an Operating System (OS) virtualisation
whereas, deploying Virtual Machines (VMs) is hardware
virtualisation.

Virtualisation
Virtualisation is a hypervisor-based technology that lets us run
one OS on top of another, for example Linux running on Windows or
vice versa. Here, both the guest OS (Linux in this example) and
the host OS (Windows in this example) run with their own kernel.
Communication between the guest and the actual hardware passes
through an abstraction layer provided by the hypervisor.
Since all communication between the guest and the host goes
through the hypervisor, this approach provides a very high level
of isolation and security. However, it is considerably slower and
incurs a performance overhead due to hardware emulation.
Containerisation was introduced to reduce this overhead.

Containerisation
In containerisation, multiple isolated instances of a service
share the same kernel. All of the application's logic, including
the application platform and its dependencies, is packaged into
one single container. Containerisation is a virtualisation method
for deploying different distributed applications without
launching an entire virtual machine for each application.

Application Containers
An application container comprises the application's files,
dependencies, and libraries needed to run on an OS. In a
micro-service-based architecture, multiple application containers
can run side by side, with each service running independently of
the others.

Docker Containers
Docker is a tool that makes it very easy to deploy and run an
application using containers, since a container allows a
developer to create an all-in-one package of the developed
application with all its dependencies. For example, a Java
application requires Java libraries, and when we deploy it on any
system or VM, we need to install Java first. In a Docker
container, however, everything is kept together and shipped as
one package.
Case Study 2:
Hosting the Spring Boot application
in a Docker container

Step 1: Create a Spring Boot application such as “DockerDemo”
that displays a simple message.
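The report's screenshot of the application is not reproduced here; a minimal sketch of such an application might look as follows (the class name and message are illustrative assumptions):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DockerDemoApplication {

    // Displays a simple message at http://localhost:8080/
    @GetMapping("/")
    public String hello() {
        return "Hello from DockerDemo!";
    }

    public static void main(String[] args) {
        SpringApplication.run(DockerDemoApplication.class, args);
    }
}
```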

Step 2: Now, we will deploy it in a Docker container. For this, we
have to create a Docker-file containing the steps Docker should
execute to build an image of this application and run that image.
To prepare for this, in the pom.xml we define the packaging as JAR.
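The packaging setting is a one-line fragment inside the <project> element of pom.xml:

```xml
<!-- pom.xml: build the application as an executable JAR -->
<packaging>jar</packaging>
```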

Step 3: First clean up the target folder with mvn clean (this can
also be done from the IDE by running Maven Clean), then build
with mvn install (Maven Install in the IDE).
These commands will create a “dockerdemo.jar” in the target
directory of the working directory.
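The two Maven commands from this step, run from the project root:

```shell
mvn clean      # remove the old target folder
mvn install    # build the project and create target/dockerdemo.jar
```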

Step 4: Creating a Docker-file.
Docker gives users the ability to create their own Docker images
and deploy them in Docker. To create your own Docker image, you
have to write your own Docker-file. A Docker-file is simply a
text file with all the instructions required to build the image.
Create a plain file in the project folder and add these steps to
it:
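The original screenshot of the file is not reproduced; assembled from the four instructions explained below, the Docker-file would contain:

```dockerfile
FROM java:8
EXPOSE 8080
ADD /target/dockerdemo.jar dockerdemo.jar
ENTRYPOINT ["java", "-jar", "dockerdemo.jar"]
```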

These are the four instructions that will create an image of our
Java application and allow it to run in Docker.
1. FROM java:8 declares that this is a Java application, so
Docker will pull the Java 8 base image with all the Java-
related libraries and add them to the container.

2. EXPOSE 8080 declares that we would like to expose port 8080
to the outside world so that our application can be accessed.
3. ADD /target/dockerdemo.jar dockerdemo.jar copies the JAR
into the image; the general form is ADD <source from which
Docker should copy> <destination>.
4. ENTRYPOINT ["java", "-jar", "dockerdemo.jar"] sets the
container's entry point: since the application is a JAR, we
need to run it with java -jar from within Docker.

Step 5: At this point, two pieces are ready:
1. the Java Spring Boot application, and
2. the Docker-file that will create the image to be run in the
Docker container.
To load these into a Docker container, we first create the image
and then run that image from the Docker container. The following
commands must be run in the folder that contains the Docker-file.

Step 6: This will create our image in Docker and load it into the
container.
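The screenshot of this step is omitted; a typical build command, assuming the image is tagged dockerdemo (an illustrative tag name), would be:

```shell
# Build the image from the Docker-file in the current directory
docker build -t dockerdemo .
```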

Step 7: Now that we have the image ready to run, let’s do that
with the following command.
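The command screenshot is omitted; assuming the image was tagged dockerdemo in the previous step, a typical run command mapping the exposed port would be:

```shell
# Run the image, mapping container port 8080 to host port 8080
docker run -p 8080:8080 dockerdemo
```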

Step 8: The Spring Boot application now boots up and the server
is running on port 8080.

REFERENCES
[1] https://www.nginx.com/blog/introduction-to-microservices/
[2] https://www.nginx.com/learn/microservices/
[3] https://dzone.com/refcardz/getting-started-with-microservices
[4] https://www.nginx.com/resources/library/designing-deploying-microservices/
[5] http://www.serverspace.co.uk/blog/containerisation-vs-virtualisation-whats-the-difference
[6] https://microservices.io/patterns/data/database-per-service.html
[7] https://dzone.com/articles/6-data-management-patterns-for-microservices-1
[8] https://blog.christianposta.com/microservices/the-hardest-part-about-microservices-data/
[9] https://thenewstack.io/selecting-the-right-database-for-your-microservices/
[10] https://microservices.io/patterns/reliability/circuit-breaker.html
[11] https://dzone.com/articles/circuit-breaker-pattern
[12] https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/
[13] https://dzone.com/articles/deploying-spring-boot-on-docker
