(IJCSIS) International Journal of Computer Science and Information Security, Vol. 8, No. 4, 2010
Load Balancing in Distributed Computer Systems
Ali M. Alakeel
College of Computing and Information Technology, University of Tabuk, Tabuk, Saudi Arabia. Email: alakeel@ut.edu.sa
Abstract—Load balancing in distributed computer systems is the process of redistributing the work load among processors in the system to improve system performance. Accomplishing this, however, is not an easy task. In recent research and literature, various approaches have been proposed to achieve this goal. Rather than promoting a specific load balancing policy, this paper presents and discusses different orientations toward solving the problem.
Keywords—distributed systems; load balancing; algorithms; performance evaluation
I. INTRODUCTION
Since the introduction of parallel computers, the main objective has been to allow more than one computer to cooperate in solving the same problem. Obviously, distributing a work load equally between equally capable processors should give the best results. Theoretically speaking, N computers should spend 1/Nth of the time a single computer spends in solving the same problem [1].

Unfortunately, cooperation by itself is not enough and could be potentially disastrous due to such factors as the communication overhead between processors and an imbalanced distribution of work among processors. Having some processors doing more or less work than others will degrade the overall performance of the system and, in the worst case, will cause some of the processors to contribute nothing at all to reducing the time to solve the problem. Since all processors have to communicate with each other at some point during their computation, a lightly loaded processor will spend most of its time waiting for a subsequent result from an overloaded processor. Consequently, we find that most of the processors' time is spent waiting rather than doing useful computation as intended.

Load balancing, the process of distributing the work fairly among participating processors, is a sub-problem of a bigger dilemma: distributed scheduling. Distributed scheduling is composed of two parts: local scheduling, which takes care of assigning processing resources to jobs within one node, and global scheduling, which determines which jobs are processed by which processor. Load balancing is a vital ingredient in any acceptable global scheduling policy. The aim of load balancing is to improve system performance by preventing some processors from being overwhelmed with work while others find no work to do. Achieving the best load balance in a distributed system is not an easy task.
A good load balancing policy should consider the following goals: (1) optimal overall system performance--maximum total processing capacity at acceptable delays; (2) fairness of service--jobs should be serviced equally regardless of their origin; and (3) failure tolerance--keeping the system performance at an acceptable level in the presence of partial failures in the system [4].

For the purpose of this presentation we assume the configuration shown in Figure 1. This configuration is one possible arrangement of a distributed homogeneous computer system which has N processors connected through a computer network. In this configuration, processors are assumed to be of the same type and to have the same power. Tasks (jobs) are independent of each other and can be processed by any processor. Moreover, tasks arrive at each processor P_j with an arrival rate λ_j, and a First Come-First-Serve (FCFS) queue discipline is assumed for all queues in the system under consideration.
Figure 1. An Example Distributed Computer System Configuration: N processors P_1, P_2, ..., P_N, with arrival rates λ_1, λ_2, ..., λ_N, connected through a computer network.
http://sites.google.com/site/ijcsis/ ISSN 1947-5500
Several attempts have been made in the load balancing field, yet with different approaches and orientations. Two different approaches have been taken by researchers in their attempts to achieve load balance in distributed systems: static and dynamic.

In the static approach, enough information about the status of all processors in the system is assumed to be available before the work load is distributed among them. Once tasks have been assigned to run at specific processors, however, this assignment is final and cannot be changed regardless of any changes occurring later in the system [4], [7], [10], [11], and [18]. Static strategies may be either deterministic or probabilistic. A deterministic strategy assigns tasks to processors based on a fixed criterion, while a probabilistic strategy uses probability values when the assignment is made. For example, a load balancing policy that always transfers extra tasks from processor A to processor B is deterministic, while a policy that transfers extra tasks from processor A to processor B with probability 0.7 and to processor C with probability 0.3 is probabilistic [5]. A static solution's ignorance of system workload fluctuations in decision making is a major disadvantage. On the other hand, static algorithms are easier to work with and analyze [4], [5], and [18]. Several static algorithms have been developed and implemented, some of which can be found in [12]-[14] and [19]-[22].

This paper concentrates almost completely on the dynamic approach, due to its more realistic treatment of load balancing. The goal of this paper is not to advance a specific dynamic load balancing policy, but rather to address the problem and present different approaches that have been used to develop a solution for it.
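The deterministic/probabilistic distinction can be sketched as follows. This is a minimal illustration only; the processor names A, B, C and the 0.7/0.3 split come from the example above, and the function names are not from the paper:

```python
import random

def deterministic_assign():
    """Static deterministic policy: extra tasks always go from A to B."""
    return "B"

def probabilistic_assign(rng=random.random):
    """Static probabilistic policy: route an extra task from A to B with
    probability 0.7, otherwise to C. `rng` is injectable for testing."""
    return "B" if rng() < 0.7 else "C"
```

In both cases the assignment rule is fixed before the system runs, which is exactly why a static policy cannot react to workload fluctuations.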
Section II presents dynamic load balancing strategies, and Section III identifies different methods depicting the assignment of responsibility for conducting load balance. Section IV presents different solutions a load balance strategy could yield, Section V identifies some techniques used to model and analyze a load balance strategy, and Section VI concludes the paper.

II. DYNAMIC LOAD BALANCING
The dynamic approach looks at the load balancing problem more realistically by assuming that little information is available before any assignment is made. It does not presume any knowledge of where a certain task will finally execute or in what environment. Dynamic load balancing algorithms monitor changes in the system work load and redistribute the work load accordingly [4], [7], [10], and [11]. Although the dynamic load balancing approach alleviates the drawbacks of the static approach, it is harder to work with and analyze [4]. Several dynamic algorithms, e.g., [2], [3], [15]-[17], [23]-[26], have been developed and implemented.

Research has shown that the dynamic approach outperforms the static approach and yields better system performance [5]. This section identifies some, though not necessarily all or the best, dynamic load balancing strategies that have been reported in the literature. It must be emphasized that these specific strategies were selected for presentation purposes only and are not to be interpreted as an exhaustive selection or classification. Four strategies will be presented: Bidding, Drafting, Threshold, and Greedy. All of the presented strategies try to minimize the response time of each process. A comparison of the strategies will be used to demonstrate their relative uses and effectiveness. The bidding strategy is compared with the drafting strategy based on their similarities more than their differences. The way each of them approaches the problem is presented, in addition to identifying the major parameters and properties of each. Communication overhead is used to discriminate between the performances of each strategy. Similarly, the threshold strategy is compared with the greedy strategy based on their similarities. Since both strategies assume the same amount of information to reach the same objective, their performance will be compared in addition to presenting the features of each strategy.
A. Bidding Strategy
The main concept of this approach is bids. An overloaded processor looking for help in executing some of its tasks requests other processors to submit their bids. Bid information includes the current work load status of each processor. After receiving all bids from participating processors, the original processor selects to whom it will send some of its tasks for execution. A major drawback of this strategy is the possibility that one processor will become overloaded as a result of winning many bids. To overcome this problem, some variations of this strategy allow the bidder processor to accept or reject the tasks sent by the original processor. This could be done by allowing it to send a message to the original processor informing it of whether the work has been accepted or rejected. Since a processor's load could change while these messages take place, the final selection might not turn out to be as good as it seemed at an earlier time, or vice versa [8]. Different algorithms have been proposed in the literature to determine who gets to initiate the bid, the bid information, bid selection, bid participation, and bid evaluation [4], [7]-[11].

The performance of this strategy depends on the amount of information exchanged, the selection of bids, and communication overhead [4]. More information exchange enhances the performance and provides a stronger basis for selection, but also requires extra communication overhead [4], [6], and [8]. Examples of this strategy can be found in [27]-[30].
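A single bidding round can be sketched as follows. This is a hedged illustration under simplifying assumptions (bids are just queue lengths, messages are function calls, and the data structure names are invented here), not the specific protocol of any cited algorithm:

```python
def run_bidding_round(overloaded, processors, tasks_to_offload):
    """One bidding round: the overloaded processor requests bids (current
    work load) from the others, picks the lightest-loaded bidder, and
    transfers some of its tasks there.

    `processors` maps a processor id to its current queue length.
    Returns the winning processor id, or None if there are no bidders."""
    # Collect bids: every other processor reports its current work load.
    bids = {p: load for p, load in processors.items() if p != overloaded}
    if not bids:
        return None
    # Select the winning bid: the processor reporting the lightest load.
    winner = min(bids, key=bids.get)
    # Transfer the tasks. A variant of the strategy would let the winner
    # reject them here, since its load may have changed while the bid
    # messages were in flight.
    processors[winner] += tasks_to_offload
    processors[overloaded] -= tasks_to_offload
    return winner
```

Note how the winner's load grows with every round it wins, which is exactly the drawback described above: a consistently light processor can win many bids in quick succession and itself become overloaded.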
B. Drafting Strategy
The drafting strategy differs from the bidding strategy in the way it allows process migration and in the manner it attempts to achieve load balance. The drafting policy tries to alleviate some of the communication overhead introduced by the bidding strategy. The drafting policy achieves load balance by keeping all processors busy rather than by evenly distributing the work load among participating processors (which is one of the objectives of the bidding strategy). In the bidding strategy, to keep all processors evenly loaded, groups of processes will be required to migrate from a heavily loaded processor to a lightly loaded processor. Consequently, it is possible to find some of these processes migrating back as a result of the unpredictable change of the processor's work load. To allow
for this problem in the bidding approach, the drafting strategy allows only one process to migrate at a time rather than group migration [8].

The drafting strategy adopts a process migration policy which is based on giving control to the lightly loaded processors. Lightly loaded processors initiate a process migration, instead of process migration being triggered by an overloaded processor as in the bidding strategy [8]. In drafting, the number of processes currently at the processor is used for work load evaluation. Each processor monitors its work load and identifies itself as being in one of the following states: H-Load, N-Load, or L-Load. An H-Load, heavy load, indicates that some of this processor's processes can migrate to other processors. An N-Load, normal load, indicates that there is no intention for process migration. An L-Load, light load, indicates that this processor is willing to accept some migrant processes. A load table is used at each processor to hold this information about other processors and to act as a billboard from which the global information of the system is obtained. When a load change occurs in a processor, it broadcasts its load information to the other nodes so that they can update their load tables.

When a processor becomes lightly loaded, i.e., L-Load, it identifies other processors having the status of H-Load from its load table and sends them a draft-request message. This message indicates that the drafting processor is willing to accept more work. Each remote (drafted) processor, if it is still in H-Load by the time it receives this message, responds by sending a draft-response message which contains draft-age information; otherwise its current load status is returned to the drafting processor. The underlying concept is that a process is allowed to migrate only if it is expecting a better response time, and an age is associated with each draftable process. Some of the parameters that may be used for age determination are: process waiting time, process priority, or process arrival time [8].

The draft-age is determined by the ages of those processes nominated to be drafted. Various alternatives for draft-age calculation are possible; among them are selecting the draft-age to be the maximum age of all draftable processes, the average age of the draftable processes, or simply the number of draftable processes [8]. When all draft-response messages are received, the drafting processor calculates a draft-standard criterion. The draft-standard is calculated based on the draft-ages received and is used to ensure fairness of selection among drafted processes. The choice of draft-standard is crucial to the performance of this strategy and is determined at the system design stage. After calculating the draft-standard, a draft-select message is sent to the drafted processor that has the highest draft-age. The drafting processor will send the draft-select message only if it is still in the L-Load state; otherwise it will not accept any migrating processes.

Research reported in [8] has shown that the drafting strategy alleviates many drawbacks encountered in the bidding algorithm, such as unnecessary communication messages and the possibility of a bid-winner processor becoming overloaded. Simulation results and detailed comparison results are reported in [8].
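The drafting exchange described above can be sketched as follows. This is a rough illustration under stated assumptions: the H/N/L cut-offs are invented here (the paper leaves them design-dependent), the load table is a plain dictionary, and the draft-age of each heavy processor is given directly rather than computed from its draftable processes:

```python
H_LOAD, N_LOAD, L_LOAD = "H", "N", "L"

def load_state(queue_len, low=1, high=4):
    """Classify a processor's load from its queue length.
    The cut-offs `low` and `high` are illustrative assumptions."""
    if queue_len <= low:
        return L_LOAD
    if queue_len >= high:
        return H_LOAD
    return N_LOAD

def draft(drafting_id, load_table, draft_ages):
    """One drafting round: a lightly loaded processor sends draft-requests
    to the H-Load processors in its load table and draft-selects the one
    with the highest draft-age. Exactly one process migrates (no group
    migration). Returns the drafted processor id, or None."""
    if load_state(load_table[drafting_id]) != L_LOAD:
        return None  # only an L-Load processor initiates drafting
    heavy = [p for p in load_table if load_state(load_table[p]) == H_LOAD]
    if not heavy:
        return None
    # Draft-select goes to the responder with the highest draft-age.
    chosen = max(heavy, key=lambda p: draft_ages[p])
    load_table[chosen] -= 1       # one process migrates out ...
    load_table[drafting_id] += 1  # ... and into the drafting processor
    return chosen
```

The key contrast with bidding is visible in the control flow: migration is initiated by the light processor, and only one process moves per round.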
C. Threshold Strategy
In this type of load balancing algorithm, a threshold value T is used to decide whether a task is executed locally or remotely. The threshold is defined on the processor's queue length, where the queue length is the number of processes in service plus the number of processes waiting in the queue. The threshold value is static in a given implementation of the algorithm [4] and [5].

In this strategy, a processor will try to execute a newly arriving task locally unless the processor's threshold has been reached. In that case, the processor selects another processor at random and probes it to determine whether transferring the task would place the probed processor above its threshold. If it would not, then the task is transferred to the probed processor, which must execute it there without any attempt to transfer it to a third processor. If it would, then another processor is selected in the same manner. This operation continues up to a certain limit called the probing limit; after that, if no destination processor is found, the task is executed locally [5]. It should be noted that the threshold strategy requires no exchange of state information among processors in order to decide whether to transfer a task [5]. This is an advantage because it minimizes communication overhead. Another advantage is that the threshold policy avoids extra transfers of tasks to processors that are above the threshold. It has been shown in [5] that the threshold algorithm with T=1 yields the best performance for low to moderate system load, and the threshold algorithm with T=2 gives the best performance for high system load.
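The placement decision above can be sketched as follows, assuming for illustration that probing a processor returns its queue length directly; T and the probing limit are parameters, and the random source is injectable so the sketch is testable:

```python
import random

def place_task(local, queue_lens, T=2, probe_limit=3, rng=random.randrange):
    """Decide where a newly arriving task at processor `local` runs.
    `queue_lens[i]` is processor i's queue length (in service + waiting).
    Returns the index of the processor that executes the task."""
    if queue_lens[local] < T:
        queue_lens[local] += 1          # below threshold: run locally
        return local
    n = len(queue_lens)
    for _ in range(probe_limit):
        probed = rng(n)                 # pick a processor at random
        # Transfer only if the task would NOT push the probed processor
        # above its threshold; the receiver must then execute it itself.
        if probed != local and queue_lens[probed] + 1 <= T:
            queue_lens[probed] += 1
            return probed
    queue_lens[local] += 1              # probing limit hit: run locally
    return local
```

Note that the decision uses only the probe replies; no global load information is ever exchanged, which is the source of the strategy's low communication overhead.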
D. Greedy Strategy
In this strategy, the current state of each processor P is represented by a function f(n), where n is the number of tasks currently at the processor. If a task arrives at P and the number of tasks n is greater than zero, then this processor looks for a remote processor whose state is less than or equal to f(n). If a remote processor is found with this property, then the task is transferred there. The performance of this strategy depends on the selection of the function f(n). It has been shown in [6] that f(n) < n must hold in order to achieve good performance. Possible choices for f(n) include n-1, n div 2, n div 3, n div 4, etc. Furthermore, it has been shown in [6] that f(n) = n div 3 yields the best results and that the greedy strategy outperforms the threshold strategy with T=1 in all experiments.

The greedy strategy adopts a cyclic probing mechanism instead of the random selection used in the threshold strategy. In this cyclic probing mechanism, processor i probes processor (i+j) mod N in the j-th probe, where N is the number of processors in the system, to locate a suitable destination processor. For example, in a system with 5 processors numbered 0, 1, 2, 3, and 4, processor 1 will first probe processor 2. If this attempt is not successful, it will probe processor 3, and so on. As in the threshold strategy, once a task is transferred to a remote processor it must be executed there [6]. Despite the similarities between the two strategies, it has been demonstrated using simulation results in [6] that the greedy strategy outperforms the threshold strategy. This improvement is attributed to the fact that the greedy strategy attempts to
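The greedy placement rule with cyclic probing can be sketched as follows. This is an illustrative reading of the description above (queue lengths stand in for processor state, and integer division implements "n div 3"), not the exact algorithm of [6]:

```python
def greedy_transfer(i, queue_lens, f=lambda n: n // 3):
    """Place a task arriving at processor i under the greedy strategy.
    `queue_lens[i]` is the number of tasks currently at processor i.
    Probes (i+1) mod N, (i+2) mod N, ... and transfers the task to the
    first processor whose load is <= f(n); f(n) = n div 3 is the choice
    reported best in [6]. Returns the executing processor's index."""
    n = queue_lens[i]
    N = len(queue_lens)
    if n == 0:
        queue_lens[i] += 1       # empty processor: execute locally
        return i
    for j in range(1, N):
        candidate = (i + j) % N  # cyclic probe order, as in the example
        if queue_lens[candidate] <= f(n):
            queue_lens[candidate] += 1  # transferred task must run there
            return candidate
    queue_lens[i] += 1           # no suitable destination: run locally
    return i
```

With queue lengths [6, 5, 1, 4, 0] and a task arriving at processor 1 (n = 5, so f(n) = 1), the cyclic probe order is 2, 3, 4, 0, and processor 2 is the first whose load (1) is within the bound, matching the probing order in the worked example above.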