
High Availability Server Implementation using
NGINX, Keepalived, and Varnish to Ease Web
Server Access
Ezra Adriadi Putranto
Information and Computer Engineering
Jakarta State Polytechnic
Jakarta, Indonesia
ezra.putranto.tik15@mhsw.pnj.ac.id

Abstract—The research in this paper is conducted to create and test the performance of two high availability servers, which are used to increase data availability and reduce the load on backend web servers. The performance is measured by the response time of the server during failover and by logging the results of Secure Socket Layer (SSL) termination and website content caching inside the high availability server. Using the integration of Keepalived, NGINX, and Varnish, the high availability server is built and tested by sending requests via a virtual IP. The measured performance indicates that the high availability server is able to reduce downtime and lighten the load on the backend servers.

Keywords—Website; Availability; High Availability Server; Downtime; Keepalived; NGINX; Varnish; Cache; Failover; SSL Termination

I. INTRODUCTION
Have you ever wanted to throw away your phone because a tab in your browser kept loading forever? Most likely the browser was trying to load a website from an unoptimized, or perhaps unavailable, web server. Without web servers on the internet, there is nothing to browse at all. Web servers keep growing in number every year, and not all of them are managed or maintained properly. This can compromise the qualities a website should have, such as its availability. Placing high availability servers as a bridge between web servers and clients can help a website stay accessible despite trouble on the backend, thus minimizing the downtime experienced by the client. Through failover, SSL termination, and caching, a high availability server helps a website by switching between servers and taking over part of the backend servers' work. The high availability server therefore plays a significant role in website availability through its ability to reduce web server downtime.

II. LITERATURE REVIEW
A. Caching
Requests can be served faster by caching data. With caching, data loaded for past requests is reused: the data is kept in RAM and served again whenever a similar request arrives. There are several kinds of cache, such as browser cache, database cache, and object cache [1].

B. High Availability
High availability is a property of a system's design that enables an application to restart, or to divert its job to another, more capable system, when a failure occurs. It relies on a component that monitors failures inside the system and diverts the work to another part of the system that is better able to handle it [2].

C. Keepalived
Keepalived is a routing program that runs in the background and is capable of monitoring services and performing automatic failover when a failure occurs inside the system [2].

D. NGINX
Developed by Igor Sysoev in October 2004, NGINX is an open-source web server created as an answer to the C10K problem of handling 10,000 concurrent connections at the same time. As a web server, NGINX offers many features, such as caching, load balancing, SSL traffic handling, and reverse proxying [3].

E. Varnish
Varnish is an HTTP accelerator that speeds up access to websites. Varnish works on the server side, unlike Squid Proxy, which accelerates on the client side, and it is used by websites with high traffic [4]. Compared with other web cache software, Varnish is faster because it does not need to communicate with a database, it can still serve requests when the data source is damaged, and it can handle up to 275,000 requests per second. The Varnish log also records traffic and request statistics [5].

F. SSL Termination
Terminating SSL traffic is one configuration option for balancing encrypted traffic. Incoming SSL requests are decrypted by the load balancer server, and the decrypted requests are forwarded to the backend servers, so the load balancer takes over the intensive decryption work [6]. SSL termination can be applied on the load balancer server if the network between the load balancer and the backend servers is guaranteed to be safe [7].

III. EXPERIMENTS
In this paper, two high availability servers are used, and their performance in maintaining the availability of two backend web servers is tested. The high availability servers are installed with three main pieces of software: NGINX, Varnish, and Keepalived.
The performance of the high availability servers is tested by collecting the response time of the server during failover and by logging the results of caching and SSL traffic termination.



A. Virtual Machines Role Distribution and IP Allocation
To implement the high availability server setup, four virtual machines (VMs) are required. Two of them act as high availability servers, and the remaining two are configured as backend web servers. All of the VMs run Ubuntu 16.04.5 LTS and are installed on Microsoft Hyper-V.
Four different IP addresses are allocated to the VMs. These addresses can be chosen freely, as long as they remain in the same network. The addresses 172.31.254.225/24 and 172.31.254.226/24 are allocated to the high availability servers, and the backend web servers are given 172.31.254.223/24 and 172.31.254.224/24. An additional address, 172.31.254.227/24, is allocated as a virtual IP for the client to send requests to.

Fig. 1. Illustration of virtual machine roles and IP addresses

Each high availability server is installed with three pieces of software. The first is Keepalived, which provides failover if one of the servers fails. The second is Varnish, which stores a cache to lighten the load on the backend web servers. The last is NGINX, which terminates SSL-encrypted traffic.

Fig. 2. Block diagram of the high availability server
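For reference, on Ubuntu 16.04 the static addresses above would typically be set through the classic ifupdown configuration in /etc/network/interfaces; the snippet below is only a minimal sketch for high availability server 1 and assumes the interface name eth0 that appears later in the Keepalived configuration (the actual interface name may differ on a given Hyper-V guest):

auto eth0
iface eth0 inet static
    address 172.31.254.225
    netmask 255.255.255.0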
B. Failover Between Servers Using Keepalived
Failover is needed to maintain availability in case one of the servers goes down. In this scenario, failover is set up between the two high availability servers: if the master, e.g. high availability server 1, goes down, the second server takes over the master role.
Failover is achieved using Keepalived. To configure Keepalived, open the configuration file /etc/keepalived/keepalived.conf. The configuration used in this implementation is shown in TABLE I.

TABLE I. KEEPALIVED CONFIGURATION

High Availability Server 1:

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.254.227
    }
}

High Availability Server 2:

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 101
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.254.227
    }
}

After the configuration files are altered, restart the service and check its status to verify that Keepalived is running properly. If the configuration is correct, server 1 should report the MASTER state and server 2 the BACKUP state, as shown in Fig. 3.

Fig. 3. Status of Keepalived on both servers
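As an illustration, on Ubuntu 16.04 (systemd) the restart and status check described above can be performed with the standard service commands, and listing the interface addresses shows whether a node currently holds the virtual IP; the exact status output varies with the Keepalived version:

sudo systemctl restart keepalived
sudo systemctl status keepalived
ip addr show eth0    # the node in the MASTER state should list 172.31.254.227 on eth0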

C. Traffic Caching and Redirection Using Varnish
Varnish receives every request arriving on port 80 (HTTP) and checks whether it already holds a cached response for a similar request. If not, the request is forwarded to the backend web servers. This reduces the number of requests that reach the backend servers and thus lightens their load.
Install Varnish on both high availability servers and edit the configuration file /etc/default/varnish. Find the line that reads "# This file contains 4 alternatives, please use only one." and replace all configuration written after that line with the following:

TABLE II. VARNISH CONFIGURATION

DAEMON_OPTS="-a :80 \
             -b localhost:8080 \
             -T localhost:6082 \
             -S /etc/varnish/secret \
             -p thread_pools=2 \
             -p thread_pool_min=100 \
             -p thread_pool_max=2000 \
             -p thread_pool_add_delay=2 \
             -p session_linger=50 \
             -s malloc,512m \
             -f /etc/varnish/default.vcl"

Based on the configuration in TABLE II, Varnish listens on port 80 and forwards a request to port 8080 when no matching cache entry is found.
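A quick way to confirm that Varnish is answering on port 80 is to inspect the response headers from any client in the network; this check is only a sketch and relies on Varnish's usual behaviour of adding X-Varnish and Age headers, where an Age value greater than zero indicates that the object was served from the cache:

curl -I http://172.31.254.227/
curl -I http://172.31.254.227/    # on the repeated request, Age should be greater than 0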
D. Create and Terminate Secured Traffic Using NGINX
NGINX is installed to capture the traffic arriving on port 443 (HTTPS) and on port 8080 (from Varnish). Traffic secured with HTTPS is terminated by NGINX and forwarded as unencrypted HTTP traffic to the backend web servers. A key and a certificate are also required for NGINX to serve HTTPS traffic; they can be created using OpenSSL.
After NGINX is installed and the certificate and key are generated, change the NGINX configuration by deleting the default configuration /etc/nginx/sites-enabled/default and creating a new configuration file /etc/nginx/conf.d/load-balancer.conf with the content shown in TABLE III.

TABLE III. NGINX CONFIGURATION

upstream backend {
    server 172.31.254.223;
    server 172.31.254.224;
}
server {
    listen 8080;
    location / {
        proxy_pass http://backend;
    }
}
server {
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/loadbalancer.crt;
    ssl_certificate_key /etc/ssl/nginx/loadbalancer.key;
    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
        proxy_set_header X-Secure on;
    }
}
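The paper does not show the OpenSSL command used to create the key and certificate; one possible way to generate a self-signed pair at the paths referenced in TABLE III, then verify and apply the new NGINX configuration, is the following:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
     -keyout /etc/ssl/nginx/loadbalancer.key \
     -out /etc/ssl/certs/loadbalancer.crt
sudo nginx -t && sudo systemctl reload nginx    # test the configuration, then reload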
IV. TESTING AND ANALYSIS
A few tests are run to determine whether the high availability servers have met their goals. There are five tests, based on the features configured into the high availability servers: the high availability server failover test, the backend server failover test, the SSL termination test, the caching test, and the downtime test.

A. Testing and Analysis Preparation
1) Paessler Router Traffic Grapher (PRTG)
In PRTG, the IP address 172.31.254.227 is added as a new device under the Hyper-V Virtual Machines category. A ping sensor is also activated to monitor downtime on that IP.

2) PHP Script
A PHP script is needed to log the values that identify the client IP address, the protocol used, the proxy information, and the port number used. This script is stored on both backend web servers in the /var/www/html directory under the name status.php. The content of the script is shown in TABLE IV.

TABLE IV. STATUS.PHP CONTENT

<?php
header( 'Content-Type: text/plain' );
echo 'Host: ' . $_SERVER['HTTP_HOST'] . "\n";
echo 'Remote Address: ' . $_SERVER['REMOTE_ADDR'] . "\n";
echo 'X-Forwarded-For: ' . $_SERVER['HTTP_X_FORWARDED_FOR'] . "\n";
echo 'X-Forwarded-Proto: ' . $_SERVER['HTTP_X_FORWARDED_PROTO'] . "\n";
echo 'Server Address: ' . $_SERVER['SERVER_ADDR'] . "\n";
echo 'Server Port: ' . $_SERVER['SERVER_PORT'] . "\n\n";
?>
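Besides a browser, the script can also be queried from the command line; the requests below are illustrative, with the -k flag passed on the HTTPS request because the certificate from Section III-D is assumed to be self-signed:

curl http://172.31.254.227/status.php
curl -k https://172.31.254.227/status.php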
B. High Availability Server Failover Test
This test determines whether a failure of one of the high availability servers affects the availability of the backend web servers. To test the failover, first open the PHP script on the backend web server to identify which high availability server is currently active. The log generated by the script is shown in the browser by opening the address 172.31.254.227/status.php.
After the active high availability server has been identified from the IP address in the log, shut that server down so it becomes inactive and unable to process or forward any request. Then make another request by opening the PHP script from the browser and examine the log.

Fig. 4. High availability server before failover

Fig. 5. High availability server after failover

Fig. 4 and Fig. 5 indicate that the active high availability server changes after the previously active server has been failed over. The backend web servers also remain reachable even though one of the high availability servers has failed.

C. Backend Server Failover Test
This test determines whether a failure of one of the backend web servers affects the availability of the web server content. To test the failover, first open the PHP script to identify which backend web server is currently active. The log generated by the script is shown in the browser by opening the address 172.31.254.227/status.php.
After the active backend web server has been identified from the IP address in the log, shut that server down so it becomes inactive and unable to process any request. Then make another request by opening the PHP script from the browser and examine the log.

Fig. 6. Backend web server before failover

Fig. 7. Backend web server after failover

Fig. 6 and Fig. 7 indicate that the active backend web server changes after the previously active server has been failed over. The content of the web servers also remains accessible even though one of the backend web servers has failed.

D. SSL Termination Test
This test determines whether a request arriving over HTTPS can be forwarded to the backend web server over HTTP with the help of the SSL termination inside the high availability server. The log generated by the PHP script shows which protocol was used for the request.

Fig. 8. SSL termination test

In Fig. 8, the value of X-Forwarded-Proto is https, meaning that the request originated from secured traffic. The log also shows that the destination port on the backend web server is 80, the port for unsecured HTTP traffic. This indicates that SSL termination is successfully performed by the high availability server.
E. Caching Test
This test checks whether the high availability servers are able to store a cache and use it to serve clients' requests. The result can be monitored through the cache-hit counter in the Varnish statistics. Open the statistics with the varnishstat command and note the cache-hit value. Then make as many requests as possible to the virtual IP from any browser. After the requests are made, check the cache-hit value again and compare it with the previous value.

Fig. 9. Cache hits before the requests

Fig. 10. Cache hits after the requests

The increase of the cache-hit value from 9 to 31 shown in Fig. 9 and Fig. 10 indicates that the requests were served using the cache stored inside the high availability server, without making requests to the backend web servers.
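For reference, the counters consulted in this test can also be read non-interactively; the one-shot invocation below is a sketch that assumes the counter names used by Varnish 4.x (MAIN.cache_hit and MAIN.cache_miss):

varnishstat -1 | grep -E "cache_(hit|miss)"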
F. Downtime Measurement at Failover
The downtime test determines whether failover affects the availability percentage of the server. By monitoring the virtual IP from PRTG, the response time of the server is recorded. The test compares the response time of ping (ICMP) packets when the server is idle with the response time while a failover is in progress, repeated five times to obtain a more accurate result.
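A comparable manual check can be made from any Linux client in the same network with the standard ping tool while the active server is shut down; this is only an illustrative alternative to the PRTG ping sensor used in the paper:

ping -c 60 172.31.254.227    # watch for lost replies or increased round-trip times during the failover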
Fig. 11. Downtime test response time graph (ping response time in milliseconds for attempts 1 to 5, idle state vs. failover state)

TABLE V. DOWNTIME TEST RESULT

n-Attempt    Idle response time    Failover response time
1            8 ms                  8 ms
2            8 ms                  47 ms
3            10 ms                 8 ms
4            9 ms                  9 ms
5            9 ms                  9 ms
Average      8.8 ms                16.2 ms

The results in Fig. 11 and TABLE V show a difference of 7.4 milliseconds between the average response times of the idle and failover states, which is relatively small for a response time. During failover, every ping packet that was sent received a reply, which indicates that there is no downtime at failover.

V. CONCLUSION
The research in this paper is conducted to create and test the performance of two high availability servers, which are used to increase data availability and reduce the load on backend web servers. The high availability servers are built using the integration of Keepalived, Varnish, and NGINX. Their performance is measured by the response time of the server during failover and by logging the results of Secure Socket Layer termination and website content caching inside the high availability server. The tests indicate that the servers are able to perform failover and caching, and to terminate SSL connections into HTTP requests. In the failover state, the response time also remains quite small, indicating that there is no downtime affecting availability.

ACKNOWLEDGMENT
I would like to express my deepest gratitude to Mr. Nur Fauzi Soelaiman, S.T., M.Kom., my academic mentor from Jakarta State Polytechnic, to Mr. Aulia Rahman, S.Tr., my industrial mentor from PT. Mitra Akses Globalindo, and to my family and friends who have helped me finish this paper. Without their help and support, I would not have finished it on time.

REFERENCES
[1] Nathasya, "Penjelasan Cache dan Jenis-Jenisnya," 4 April 2018. [Online]. Available: https://www.dewaweb.com/blog/penjelasan-cache-dan-jenis-jenisnya/. [Accessed 19 December 2018].
[2] J. Ellingwood, "Web Caching Basics: Terminology, HTTP Headers, and Caching Strategies," 2015. [Online]. Available: https://www.digitalocean.com/community/tutorials/web-caching-basics-terminology-http-headers-and-caching-strategies. [Accessed 27 November 2018].
[3] Kinsta, "What Is NGINX? A Basic Look at What It Is and How It Works," 19 July 2018. [Online]. Available: https://kinsta.com/knowledgebase/what-is-nginx/. [Accessed 19 December 2018].
[4] Pusathosting, "Varnish Cache," 11 July 2018. [Online]. Available: https://pusathosting.com/kb/varnish. [Accessed 19 December 2018].
[5] Joe, "The Benefits of Varnish Cache," 8 September 2017. [Online]. Available: https://www.extreme-creations.co.uk/blog/the-benefits-of-varnish-cache/. [Accessed 19 December 2018].
[6] DigitalOcean, "How to Configure SSL Termination," 19 June 2018. [Online]. Available: https://www.digitalocean.com/docs/networking/load-balancers/how-to/ssl-termination/. [Accessed 19 December 2018].
[7] Fideloper LLC, "So You Got Yourself a Loadbalancer," 2015. [Online]. Available: https://serversforhackers.com/c/so-you-got-yourself-a-loadbalancer. [Accessed 13 January 2019].

