
ABSTRACT

Building a proxy-cache server is an effective solution to speed up access and save network
bandwidth. The performance of a proxy server can be improved by adding multiple servers
into a cluster. The additional servers are intended to divide the workload, so that the load on
each server becomes lighter and the performance of the proxy service increases. This requires
a load-balancing system that can distribute the traffic evenly according to a selected
scheduling algorithm. LVS (Linux Virtual Server) is an open-source, Linux-based application
for load balancing. The LVS-DR (Linux Virtual Server-Direct Routing) method is chosen to
distribute IP packets, meaning that reply packets from the proxy servers are sent directly to
the client without having to pass through the load balancer first. Because replies are routed
directly to the client, access with this method is considered faster than with other methods.
The proxy servers operate in transparent proxy mode using the Squid application running on
the Linux operating system. A transparent proxy is chosen because it does not require users
to configure any proxy settings; in addition, transparent proxying is supported by the LVS
scheduling algorithms. The scheduling algorithms designed specifically for load balancing a
transparent proxy-cache cluster are those that direct traffic to the proxy server that already
caches the requested content if that content has been requested before. They are DH
(Destination Hashing), LBLC (Locality-Based Least-Connection), and LBLCR (Locality-Based
Least-Connection with Replication). From the test results for the response time and
throughput of each algorithm, DH is the best algorithm, with the largest throughput of
5.21 MB/s and a relatively short and stable response time averaging 0.11 s.
Keywords: proxy-cache cluster, transparent proxy, load balancing, LVS, LVS-DR, scheduling
algorithms, DH, LBLC, LBLCR, response time, throughput.
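
As a rough illustration of the DH idea described above, the sketch below shows how hashing a packet's destination address always maps requests for the same origin server to the same proxy-cache node, which is why DH favors cache hits. This is a conceptual Python sketch only; the real DH scheduler is implemented inside the Linux kernel's IPVS module, and the proxy names, hash choice, and example IP addresses here are illustrative assumptions, not part of the paper's setup.

import hashlib

# Hypothetical proxy-cache nodes behind the LVS-DR load balancer (names are made up).
PROXY_SERVERS = ["proxy-1", "proxy-2", "proxy-3"]

def dh_schedule(destination_ip: str) -> str:
    """Pick a proxy server by hashing the packet's destination IP address."""
    digest = hashlib.md5(destination_ip.encode()).digest()
    index = digest[0] % len(PROXY_SERVERS)
    return PROXY_SERVERS[index]

if __name__ == "__main__":
    # Repeated requests toward the same origin always reach the same proxy,
    # so content that proxy has already cached can be served again.
    for dest in ["93.184.216.34", "142.250.4.100", "93.184.216.34"]:
        print(dest, "->", dh_schedule(dest))

By contrast, LBLC and LBLCR also remember which server last handled a destination but additionally take server load into account, reassigning (or, for LBLCR, replicating) a destination to another proxy when the preferred one is overloaded.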
