International Journal of Computer Applications (0975 – 8887), Volume 28, No. 4, August 2011

An Optimal Task Allocation Model for System Cost Analysis in Heterogeneous Distributed Computing Systems: A Heuristic Approach

P. K. Yadav

Central Building Research Institute, Roorkee- 247667, Uttarakhand (INDIA)

M. P. Singh

Gurukul Kangri University, Haridwar- 249404, Uttarakhand (INDIA)

Kuldeep Sharma*

Krishna Institute of Engineering and Technology, Ghaziabad-201206, U.P. (INDIA)

ABSTRACT

In distributed computing systems (DCSs), the task allocation strategy is an essential phase in minimizing the system cost, i.e. the sum of execution and communication costs. To exploit the capabilities of a DCS for effective parallelism, the tasks of a parallel program must be properly allocated to the available processors in the system. The task allocation problem is inherently NP-hard; to overcome this, heuristics must be introduced to generate near-optimal solutions. This paper deals with the problem of task allocation in a DCS such that the system cost is minimized, which is achieved by minimizing the inter-processor communication cost (IPCC). We therefore propose an algorithm that allocates the tasks to the processors one by one on the basis of their communication link sum (CLS). This allocation policy reduces the inter-processor communication (IPC) and thus minimizes the system cost. For allocation purposes, the execution cost of each task on each processor and the communication costs between the tasks are taken in the form of matrices.

Keywords
Distributed computing system, task allocation, execution cost, communication cost, communication link sum.

1. INTRODUCTION

To meet the requirement of faster computation, one approach is to use distributed computing systems (DCSs). A DCS not only provides the facility for utilizing remote computer resources or data not available in local computer systems, but also reduces the system cost by enabling parallel processing [1, 8, 24]. A DCS consists of a set of multiple, geographically distributed processors interconnected by communication links. A common and interesting problem in a DCS is task allocation: finding an optimal allocation of tasks to processors so that the system cost (i.e. the sum of execution cost and communication cost) is minimized without violating any of the system constraints [3]. In a DCS, an allocation policy may be either static or dynamic, depending on the time at which the allocation decisions are made. In static task allocation, the information regarding the task and processor attributes is assumed to be known in advance, before the execution of the tasks [1]. We consider a static task allocation policy in this paper.

The task allocation problem is NP-hard when an optimal solution is required. The easiest way of finding an optimal solution is exhaustive enumeration, but this is impractical because there are n^m ways of allocating m tasks to n processors [3]. Much research effort on the task allocation problem has addressed performance measures such as minimizing the total sum of execution and communication costs [1-4, 6, 7, 11], minimizing the program turnaround time [8, 10, 22], and maximizing system reliability [12-19] and safety [16]. A large number of task allocation techniques for DCSs have been reported in [1-4, 5-8, 10-19, 21-24]. They can be broadly classified into three categories: graph-theoretic techniques [8, 9], integer programming techniques [6-8] and heuristic techniques [1-3, 23, 24]. Graph-theoretic and integer programming techniques always yield an optimal solution, but they are restricted to small problem sizes; if the problem size is very large, it is necessary to use a heuristic technique to get near-optimal solutions. The choice of a particular technique depends on the structure of the problem [14]. In this paper, we develop a task allocation model and propose a heuristic algorithm that finds a near-optimal solution to the problem. The proposed algorithm tries to minimize the inter-processor communication cost (IPCC) by assigning first those tasks which have the heaviest communication link sum (CLS). With this approach, the system cost is reduced more than with other heuristics.

The rest of this paper is organized as follows: Section 2 formulates the task allocation problem for minimizing the overall system cost; Section 3 discusses in detail the proposed allocation technique and algorithm; Section 4 gives an implementation of the proposed algorithm; Section 5 concludes the paper.
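As an illustrative aside (not part of the original paper), the impracticality of exhaustive enumeration is easy to see by counting the n^m candidate allocations for even a small instance:

```python
from itertools import product

# Every allocation maps each of the m tasks to one of the n processors,
# so the search space is the Cartesian product {P1..Pn}^m with n**m elements.
def count_allocations(m, n):
    return sum(1 for _ in product(range(n), repeat=m))

# For a 9-task, 3-processor instance (the size used in Example-1 below):
print(count_allocations(9, 3))  # 3**9 = 19683 candidate allocations
```

Even modest instances explode: 13 tasks on 7 processors already give 7^13 ≈ 9.7 × 10^10 allocations, which is why a heuristic of polynomial complexity is attractive.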

2. PROBLEM FORMULATION

In the past, different task allocation models and techniques for minimizing the overall system cost have been widely investigated in the literature. In this paper, we follow [1-4, 6, 7, 10] to formulate the task allocation problem.


2.1 Problem statement
The problem addressed in this paper is the optimal allocation of the tasks of a parallel application onto the processors of a DCS. We consider a distributed computing system made up of two sets: a set P = {P1, P2, ..., Pn} of heterogeneous processors, interconnected by communication links, and a set T = {t1, t2, ..., tm} of program tasks, which collectively serve a common goal [1]. An allocation of tasks to processors can be defined by a function X as follows:

X: T → P, such that X(i) = k, if the ith task is allocated to the kth processor.

The purpose of defining this function is to allocate each of the m tasks to one of the n processors such that the overall system cost is minimized. An optimal allocation is one that minimizes the system cost function subject to the system constraints.

2.2 Notations
T : the set of tasks of the parallel program to be executed.
P : the set of processors in the DCS.
m : the number of tasks forming the program.
n : the number of processors.
ti : the ith task of T.
Pk : the kth processor of P.
ecik : the execution cost (EC) incurred when task ti runs on processor Pk.
ccij : the inter-task communication cost (ITCC) incurred between tasks ti and tj if they are executed on separate processors.
ECM(.) : the execution cost matrix, of order m × n.
ITCCM(.) : the inter-task communication cost matrix, of order m × m.
xik : the decision variable, defined in (1).
Tass{} : a linear array holding the assigned tasks.
Tnon_ass{} : a linear array holding the non-assigned tasks.
TCLS{} : a linear array holding the tasks ordered according to their communication link sum.

The decision variable xik is defined as:

xik = 1, if the ith task is allocated to the kth processor; xik = 0, otherwise.   (1)

2.3 Definitions
2.3.1 Execution cost (EC)
The execution cost ecik of a task ti running on a processor Pk is the total cost needed for the execution of ti on that processor during the execution process. The execution costs of a task on different processors are different, and they are given in the form of a matrix of order m × n, named the execution cost matrix ECM(.). If a task is not executable on a particular processor, the corresponding execution cost is taken to be infinite (∞).

2.3.2 Communication cost (CC)
The communication cost ccij incurred due to inter-task communication is the total cost needed for exchanging data between ti and tj when they reside on separate processors during the execution process. If two tasks are executed on the same processor, then ccij = 0. The inter-task communication costs are given in the form of a symmetric matrix of order m × m, named the inter-task communication cost matrix ITCCM(.).

2.3.3 Communication link sum (CLS)
The communication link sum, denoted CLS, is an important characteristic of ITCCM(.), which measures how communication-intensive a task is. The CLS of a task ti, 1 ≤ i ≤ m, is determined by summing the communication costs of all the tasks that interact with ti:

CLS(ti) = ∑(j=1..m) ccij, for i = 1, 2, ..., m.

2.4 Assumptions
To allocate the tasks of a parallel program to processors in a DCS, we make the following assumptions:
2.4.1 The processors involved in the DCS are heterogeneous and do not have any particular interconnection structure.
2.4.2 The parallel program is a collection of m tasks that are, in general, free.
2.4.3 Once the tasks are allocated to the processors, they reside on those processors until the execution of the program is completed.
2.4.4 The number of tasks to be allocated is greater than the number of processors (m >> n), as in real-life situations.
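As an illustrative sketch (ours, not the paper's code), the CLS of every task can be computed directly from ITCCM(.); the matrix below is the 9-task inter-task communication cost matrix used later in Example-1 (Table 2):

```python
# Inter-task communication cost matrix ITCCM for the 9-task example (Table 2).
# itccm[i][j] = cc_ij; the matrix is symmetric with a zero diagonal.
itccm = [
    [0,  8, 10, 4,  0,  3,  4,  0,  0],
    [8,  0,  7, 0,  0,  0,  0,  3,  0],
    [10, 7,  0, 1,  0,  0,  0,  0,  0],
    [4,  0,  1, 0,  6,  0,  0,  8,  0],
    [0,  0,  0, 6,  0,  0,  0, 12,  0],
    [3,  0,  0, 0,  0,  0,  0,  0, 12],
    [4,  0,  0, 0,  0,  0,  0,  3, 10],
    [0,  3,  0, 8, 12,  0,  3,  0,  5],
    [0,  0,  0, 0,  0, 12, 10,  5,  0],
]

def cls(itccm):
    """CLS(t_i) = sum_j cc_ij: the total communication volume of each task."""
    return [sum(row) for row in itccm]

print(cls(itccm))  # [29, 18, 18, 19, 18, 15, 17, 31, 27]
```

Task t8 (CLS = 31) is the most communication-intensive and is therefore the first one the proposed heuristic allocates.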

2.5 Task allocation model for system cost
In this section, we develop a task allocation model to obtain an optimal system cost. In the present model, two types of costs are considered: the processor execution cost and the inter-processor communication cost. Both costs are application dependent and play an important role in task allocation. For an optimal allocation of tasks to processors in a DCS, we should know information about the input, such as the task attributes [e.g. execution cost, inter-task communication cost] and the processor attributes [e.g. processor topology, inter-processor distance]. Since obtaining such information at run time is beyond the scope of this paper, a deterministic model is assumed, in which the required information is available before the execution of the program [20].

2.5.1 Processor execution cost (PEC)
For a given task allocation X, the processor execution cost needed to execute all the tasks assigned to the kth processor can be computed as:

PEC(X)k = ∑(i=1..m) ecik · xik   (2)

2.5.2 Inter-processor communication cost (IPCC)
Inter-processor communication cost is incurred when data is transmitted between tasks residing on separate processors. The IPCC is proportional to the inter-task communication cost ccij [6, 24]. Under a task allocation X, the inter-processor communication cost for the kth processor can be computed as:

IPCC(X)k = ∑(i=1..m) ∑(j≠i) ccij · xik · xjb   (3)

where xjb = 1 indicates that task tj resides on some other processor Pb, b ≠ k.

2.5.3 System cost model
Thus, under a given task allocation X, the total cost on the kth processor is the sum of the processor execution cost and the inter-processor communication cost for that processor:

TCost(X)k = PEC(X)k + IPCC(X)k   (4)

and the total cost of the system is computed by:

SCost(X) = ∑(k=1..n) TCost(X)k   (5)

With the system resource constraints taken into account, the task allocation model for the system cost may be formulated as follows:

min SCost(X)
s.t. ∑(k=1..n) xik = 1, ∀ i = 1, 2, ..., m   (6)
xik ∈ {0, 1}, ∀ i, k   (7)

Constraint (6) states that each task must be assigned to exactly one processor, and constraint (7) guarantees that xik is a binary decision variable. The above model defines an integer programming problem and is known to be NP-hard [2-4, 12-19]. An optimal solution can be found by enumerating all possible allocations, but this requires O(n^m) time computations, which is prohibitive even for small problems. The technique given in the next section finds a near-optimal solution to the stated problem at all times.
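As an illustrative sketch (with a small hypothetical instance, not taken from the paper), the system cost of a candidate allocation can be evaluated as the total execution cost plus the communication cost of every pair of interacting tasks that ends up split across processors; counting each split pair once is the convention that reproduces the worked totals of Section 4:

```python
def s_cost(ecm, itccm, alloc):
    """System cost of an allocation: execution cost (Eq. 2 summed over k)
    plus the cost of split communication pairs, each pair counted once.

    ecm[i][k]   -- execution cost of task i on processor k
    itccm[i][j] -- symmetric inter-task communication cost matrix
    alloc[i]    -- processor index assigned to task i
    """
    m = len(ecm)
    exec_cost = sum(ecm[i][alloc[i]] for i in range(m))
    comm_cost = sum(itccm[i][j]
                    for i in range(m) for j in range(i + 1, m)
                    if alloc[i] != alloc[j])   # pair split across processors
    return exec_cost + comm_cost

# Hypothetical 3-task, 2-processor instance (illustrative values only):
ecm = [[5, 9], [4, 3], [8, 2]]
itccm = [[0, 2, 0], [2, 0, 6], [0, 6, 0]]
print(s_cost(ecm, itccm, [0, 1, 1]))  # 5+3+2 exec + 2 for split pair (t1,t2) = 12
print(s_cost(ecm, itccm, [0, 0, 0]))  # 17 exec, no split pairs
```

Note that a literal per-processor reading of Eq. (3) would charge a split pair at both endpoints; the single-count convention used here is the one consistent with the totals of Examples 1 and 2 below.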

3. PROPOSED TASK ALLOCATION TECHNIQUE AND ALGORITHM
The technique by which the tasks comprising a program are allocated to the available processors in a DCS is essential to performance. Obtaining an optimal allocation of the tasks of a random program to an arbitrary number of processors interconnected by non-uniform links is a very difficult problem [1]. In the present task allocation model, the execution cost ecik represents the cost of executing task ti on processor Pk and is used to control the corresponding processor allocation, while the inter-processor communication cost arises from the inter-task communication. One of the best options for minimizing the system cost is the minimization of the IPC [1, 2]. If the CLS of a task ti is much higher than that of the other tasks, the inter-task communication of ti is more intensive than that of the other tasks; by allocating such tasks first, the involvement of communication links decreases and the system cost is reduced. The order in which the tasks of a program are considered for allocation is therefore a critical factor affecting the optimality of the resulting allocation [21]. Hence, we select all the tasks for allocation according to their CLS: initially, all the tasks are sorted in monotonically decreasing order of their CLS in a linear array TCLS{}, and they are considered for allocation in that order. Tie-breaking is done randomly, i.e. one of the tasks with equal CLS is selected arbitrarily. Initially, Tass{} ← Φ and TCLS{} ← Tnon_ass{}. Here we present a heuristic algorithm that quickly finds a solution of high quality. To achieve this objective, we have defined two rules.

Rule-1: This rule is incorporated for the selection of a suitable processor, i.e. the processor whose capabilities are most appropriate for the task. We apply this rule as follows: pick up the task ti from Tnon_ass{} and assign it to the processor Pk (k = 1, 2, ..., n) for which ecik is minimum. Suppose that processor is Pr; then ti → Pr.

Rule-2: This rule is incorporated to add the effect of the IPC caused by the inter-task communication (ITC) of the task ti which has just been assigned to the rth processor, with the other tasks residing on processors other than Pr. We apply this rule as follows: select the ith column of ITCCM(.) and add it to all the columns of ECM(.) except the rth column. Thereafter, both matrices are modified: ITCCM(.) by deleting the ith row and column, and ECM(.) by deleting the ith row; Tnon_ass{} is modified by deleting the ith task. Both rules are repeated for each task of Tnon_ass{}, until Tnon_ass{} ← Φ.

3.1 Proposed algorithm
Our algorithm consists of the following steps:
Step-0: Input: m, n, ECM(.), ITCCM(.).
Step-1: Compute the communication link sum (CLS) of each task using CLS(ti) = ∑(j=1..m) ccij, for i = 1, 2, ..., m.
Step-2: Sort all the tasks in decreasing order of their CLS into the linear array TCLS{}.
Step-3: Initialize: TCLS{} ← Tnon_ass{}, Tass{} ← Φ.
Step-4: Pick up the ith task ti from Tnon_ass{}, then
  4.1: assign ti to the appropriate processor using Rule-1 and Rule-2;
  4.2: Tass{} ← Tass{} ∪ {ti} and Tnon_ass{} ← Tnon_ass{} \ {ti}.
Step-5: For all tasks of Tnon_ass{}, repeat Step-4 until Tnon_ass{} ← Φ and TCLS{} ← Tass{}.
Step-6: Compute TCost(X)k = PEC(X)k + IPCC(X)k and SCost(X) = ∑(k=1..n) TCost(X)k.
Step-7: End.

3.2 Algorithm complexity
Using the method suggested by H. Ellis et al. [25], the run-time complexity of the proposed algorithm can be analyzed as follows. Step-1 is executed in O(m) time. Step-2, sorting m tasks, has a worst-case time complexity of O(m log m). In Step-4, a single task requires O(n) time; hence Step-5, over m tasks, requires O(mn) time. Therefore, the overall time complexity of the proposed algorithm is O(m + m log m + mn). Since m >> n, the run-time complexity of the proposed algorithm is O(mn).

4. IMPLEMENTATION OF THE MODEL
In this section, we give two numerical examples to illustrate the formulation and solution procedure of the proposed task allocation model, and we show the performance of the allocation technique by testing the proposed algorithm on these examples.

4.1 Example-1
In this example, we consider a typical program made up of 9 executable tasks {t1, t2, ..., t9} to be executed on a DCS having three processors {P1, P2, P3}. The execution cost of each task on the different processors and the ITCC between the tasks are given in the form of the matrices ECM(.) and ITCCM(.), shown in Table-1 and Table-2 respectively. Using these inputs, the proposed algorithm traces the following output:
Step-0: Input: m = 9, n = 3, ECM(.), ITCCM(.).
Step-1: First of all, we calculate the communication link sum (CLS) of each task using CLS(ti) = ∑(j=1..m) ccij; we get CLS(ti) = 29, 18, 18, 19, 18, 15, 17, 31, 27, corresponding to i = 1, 2, ..., 9.

Step-2 and Step-3: Sort all the tasks in decreasing order of their CLS(ti) into TCLS{} (ties among t2, t3 and t5 broken arbitrarily). Thus TCLS{} = {t8, t1, t9, t4, t2, t3, t5, t7, t6}, TCLS{} ← Tnon_ass{}, and Tass{} ← Φ.

Step-4: Pick up the first task, t8, from Tnon_ass{}. Since the execution cost of t8 is minimum on processor P2, Rule-1 assigns t8 → P2. Then, using Rule-2, we add the effect of the communication of t8 with the other tasks: the t8-column of ITCCM(.) is added to the columns of ECM(.) other than that of P2, and both matrices are modified by removing the entries of t8. The modified ECM(.) and ITCCM(.) are given in Table-3 and Table-4. Thus Tass{} ← Φ ∪ {t8} = {t8} and Tnon_ass{} ← Tnon_ass{} \ {t8} = {t1, t2, t3, t4, t5, t6, t7, t9}.

Step-5: For all the remaining tasks of Tnon_ass{}, Step-4 is repeated in the above manner until Tnon_ass{} ← Φ and TCLS{} ← Tass{} = {t8, t1, t9, t4, t2, t3, t5, t7, t6}. In this way we obtain t5, t7 → P1; t2, t3, t8, t9 → P2; and t1, t4, t6 → P3. Table-5 shows this optimal allocation of tasks to processors.

Step-6: The processor-wise total costs are TCost(X)1 = 91, TCost(X)2 = 137 and TCost(X)3 = 300, and the total system cost is
SCost(X) = TCost(X)1 + TCost(X)2 + TCost(X)3 = 91 + 137 + 300 = 528.

Step-7: End.

Thus, for this example, the optimal processor costs of P1, P2 and P3 are 91, 137 and 300 respectively, and the optimal value of the system cost obtained with the proposed algorithm is 528.

4.2 Example-2
The efficacy of the proposed algorithm is shown by solving the same running example as in [26]. In this example, we consider a DCS consisting of three processors P = {P1, P2, P3} and a typical program made up of 4 executable tasks T = {t1, t2, t3, t4}. The execution cost of each task on the processors and the ITCC between the tasks are given in Table-6 and Table-7 respectively. The results obtained with the proposed algorithm and with the algorithm presented in [26] are given in Table-8. As can be observed from Table-8, the proposed algorithm produces a lower system cost than the algorithm presented in [26]; for this example, the system cost is reduced by 11.11%. This shows that the proposed algorithm tends to minimize the IPCC more than the algorithm of H. Kumar et al. [26]. Also, a comparison between the run-time complexity of the proposed algorithm, O(mn), and the complexities of the algorithms presented in [5, 26], which are O(n^m) and O(m² + mn) respectively, is given for several sets of input data (m, n) in Table-9 and Figure-1; the proposed algorithm is very time-saving in comparison, is suitable for a DCS having an arbitrary interconnection of processors with a random program structure, and is workable in all the cases.

5. CONCLUSION
In this paper, we have looked at the problem of task allocation in a DCS having an arbitrary structure of processors. The task allocation problem is NP-hard when an optimal solution is required, and one of the best options for minimizing the system cost is the minimization of the IPC; we have used a static task allocation policy to achieve this objective. We have proposed an efficient algorithm that finds a near-optimal system cost for the DCS by allocating the tasks to the processors on the basis of their CLS. It has been found that the proposed algorithm produces a near-optimal allocation at all times; hence it is a good heuristic for minimizing the system cost.

Table 1. Execution cost matrix
Tasks↓ Processors→    P1     P2     P3
t1                   174    176    110
t2                    95     15    134
t3                   196     79    156
t4                   148    215    143
t5                    44    234    122
t6                   241    225     27
t7                    12     28    192
t8                   215     13    122
t9                   211     11    208

Table 2. Inter-task communication cost matrix
       t1   t2   t3   t4   t5   t6   t7   t8   t9
t1      0    8   10    4    0    3    4    0    0
t2      8    0    7    0    0    0    0    3    0
t3     10    7    0    1    0    0    0    0    0
t4      4    0    1    0    6    0    0    8    0
t5      0    0    0    6    0    0    0   12    0
t6      3    0    0    0    0    0    0    0   12
t7      4    0    0    0    0    0    0    3   10
t8      0    3    0    8   12    0    3    0    5
t9      0    0    0    0    0   12   10    5    0

Table 3. Modified execution cost matrix
Tasks↓ Processors→    P1     P2     P3
t1                   173    176    110
t2                    98     15    137
t3                   196     79    156
t4                   156    215    151
t5                    56    234    134
t6                   241    225     27
t7                    15     28    195
t9                   216     11    213

Table 4. Modified inter-task communication cost matrix
       t1   t2   t3   t4   t5   t6   t7   t9
t1      0    8   10    4    0    3    4    0
t2      8    0    7    0    0    0    0    0
t3     10    7    0    1    0    0    0    0
t4      4    0    1    0    6    0    0    0
t5      0    0    0    6    0    0    0    0
t6      3    0    0    0    0    0    0   12
t7      4    0    0    0    0    0    0   10
t9      0    0    0    0    0   12   10    0

Table 5. An optimal allocation of tasks
Processors   Tasks             Total optimal processor's cost
P1           t5, t7            91
P2           t2, t3, t8, t9    137
P3           t1, t4, t6        300
Total optimal system cost: 528

Table 6. Execution cost matrix (Example-2)
Tasks↓ Processors→   P1   P2   P3
t1                    9    2    6
t2                    3    8    7
t3                    7   10    3
t4                    3    4    9

Table 7. Inter-task communication cost matrix (Example-2)
       t1   t2   t3   t4
t1      0    1    4    6
t2      1    0    2    0
t3      4    2    0    8
t4      6    0    8    0

Table 8. Comparison between the proposed algorithm and the algorithm of H. Kumar et al. [26]
Algorithm                 P1    P2       P3           Optimal system cost
H. Kumar et al. [26]      t2    t1, t4   t3           27
Proposed algorithm        t2    Nil      t1, t3, t4   24
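The complete procedure of Section 3 (CLS ordering, Rule-1 and Rule-2), applied to the Example-1 data of Tables 1-2, can be sketched as follows. This is our own sketch, not the authors' code, and it assumes ties in CLS and in Rule-1 are broken by lowest index rather than randomly; processors P1, P2, P3 are represented by the indices 0, 1, 2:

```python
# Execution cost matrix ECM (9 tasks x 3 processors) and inter-task
# communication cost matrix ITCCM, as given in Tables 1-2 of Example-1.
ECM = [
    [174, 176, 110], [95, 15, 134], [196, 79, 156],
    [148, 215, 143], [44, 234, 122], [241, 225, 27],
    [12, 28, 192], [215, 13, 122], [211, 11, 208],
]
ITCCM = [
    [0,  8, 10, 4,  0,  3,  4,  0,  0],
    [8,  0,  7, 0,  0,  0,  0,  3,  0],
    [10, 7,  0, 1,  0,  0,  0,  0,  0],
    [4,  0,  1, 0,  6,  0,  0,  8,  0],
    [0,  0,  0, 6,  0,  0,  0, 12,  0],
    [3,  0,  0, 0,  0,  0,  0,  0, 12],
    [4,  0,  0, 0,  0,  0,  0,  3, 10],
    [0,  3,  0, 8, 12,  0,  3,  0,  5],
    [0,  0,  0, 0,  0, 12, 10,  5,  0],
]

def allocate(ecm, itccm):
    """CLS-ordered heuristic (Rules 1-2 and Steps 0-7 of Section 3)."""
    m, n = len(ecm), len(ecm[0])
    work = [row[:] for row in ecm]                          # Rule-2 modifies a copy
    order = sorted(range(m), key=lambda i: -sum(itccm[i]))  # Steps 1-2: decreasing CLS
    alloc, pending = {}, set(range(m))
    for i in order:                                         # Steps 4-5
        r = min(range(n), key=lambda k: work[i][k])         # Rule-1: cheapest processor
        alloc[i] = r
        pending.discard(i)
        for j in pending:                                   # Rule-2: charge t_i's comm
            for k in range(n):                              # links to every processor
                if k != r:                                  # except P_r
                    work[j][k] += itccm[i][j]
    return [alloc[i] for i in range(m)]

def s_cost(ecm, itccm, alloc):
    """Execution cost plus communication over split pairs (each counted once)."""
    m = len(ecm)
    return (sum(ecm[i][alloc[i]] for i in range(m)) +
            sum(itccm[i][j] for i in range(m) for j in range(i + 1, m)
                if alloc[i] != alloc[j]))

alloc = allocate(ECM, ITCCM)
print(alloc)                      # [2, 1, 1, 2, 0, 2, 0, 1, 1]
print(s_cost(ECM, ITCCM, alloc))  # 528, matching Table-5
```

The resulting allocation (t5, t7 on P1; t2, t3, t8, t9 on P2; t1, t4, t6 on P3) and the system cost 528 reproduce Table-5.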

Table 9. Results of run-time complexity of the algorithms
Input combination          Algorithm of [5]   Algorithm of [26]   Proposed algorithm
(Tasks m, Processors n)    O(n^m)             O(m² + mn)          O(mn)
(4, 3)                             81                 28                 12
(5, 3)                            243                 40                 15
(6, 4)                           4096                 60                 24
(7, 4)                          16384                 77                 28
(8, 5)                         390625                104                 40
(9, 5)                        1953125                126                 45
(10, 6)                      60466176                160                 60
(11, 6)                     362797056                187                 66
(12, 7)                   13841287201                228                 84
(13, 7)                   96889010407                260                 91

Figure 1. Comparison between the complexities of the algorithms

6. REFERENCES
[1] G. Sagar and A.K. Sarje, "Task allocation model for distributed systems," International Journal of Systems Science, Vol. 22, No. 9, pp. 1671-1678, 1991.
[2] P.Y. Richard Ma, Edward Y.S. Lee and Masahiro Tsuchiya, "A task allocation model for distributed computing systems," IEEE Transactions on Computers, Vol. C-31, No. 1, pp. 41-47, 1982.
[3] Ajith Tom P. and C. Siva Ram Murthy, "An improved algorithm for module allocation in distributed computing systems," Journal of Parallel and Distributed Computing, Vol. 42, pp. 82-90, 1997.
[4] M. Kafil and I. Ahmad, "Optimal task assignment in heterogeneous computing systems," IEEE Concurrency, Vol. 6, No. 3, pp. 42-51, 1998.
[5] D.-T. Peng, K.G. Shin and T.F. Abdelzaher, "Assignment and scheduling of communicating periodic tasks in distributed real-time systems," IEEE Transactions on Software Engineering, Vol. 23, No. 12, pp. 745-758, 1997.
[6] W.W. Chu, Leslie J. Holloway, Min-Tsung Lan and Kemal Efe, "Task allocation in distributed data processing," Computer, Vol. 13, No. 11, pp. 57-69, November 1980.
[7] Peng-Yeng Yin, Shiuh-Sheng Yu, Pei-Pei Wang and Yi-Te Wang, "A hybrid particle swarm optimization algorithm for optimal task assignment in distributed systems," Computer Standards and Interfaces, Vol. 28, No. 4, 2006.
[8] C.-C. Shen and W.-H. Tsai, "A graph matching approach to optimal task assignment in distributed computing systems using a minimax criterion," IEEE Transactions on Computers, Vol. C-34, No. 3, 1985.
[9] H.S. Stone, "Critical load factors in two-processor distributed systems," IEEE Transactions on Software Engineering, Vol. SE-4, No. 3, pp. 254-258, May 1978.
[10] Imtiaz Ahmad, Muhammad K. Dhodhi and Arif Ghafoor, "Task assignment in distributed computing systems," Proceedings of the IEEE International Phoenix Conference on Computers and Communications, pp. 49-53, 1995.
[11] Cheol-Hoon Lee, Dongmyun Lee and Myunghwan Kim, "Optimal task assignment in linear array networks," IEEE Transactions on Computers, Vol. 41, No. 7, 1992.
[12] Sol M. Shatz, Jia-Ping Wang and Masanori Goto, "Task allocation for maximizing reliability of distributed computer systems," IEEE Transactions on Computers, Vol. 41, No. 9, 1992.
[13] S. Kartik and C. Siva Ram Murthy, "Task allocation algorithms for maximizing reliability of distributed computing systems," IEEE Transactions on Computers, Vol. 46, No. 6, pp. 719-724, 1997.
[14] A.K. Sarje and G. Sagar, "Heuristic model for task allocation in distributed computer systems," IEE Proceedings-E, Vol. 138, No. 5, 1991.
[15] Peng-Yeng Yin, Shiuh-Sheng Yu, Pei-Pei Wang and Yi-Te Wang, "Task allocation for maximizing reliability of a distributed system using hybrid particle swarm optimization," The Journal of Systems and Software, Vol. 80, pp. 724-735, 2007.
[16] Santhanam Srinivasan and Niraj K. Jha, "Safety and reliability driven task allocation in distributed systems," IEEE Transactions on Parallel and Distributed Systems, Vol. 10, No. 3, 1999.
[17] D.P. Vidyarthi and A.K. Tripathi, "Maximizing reliability of a distributed computing system with task allocation using simple genetic algorithm," Journal of Systems Architecture, Vol. 47, 2001.
[18] Pradeep Kumar Yadav, M.P. Singh and Kuldeep Sharma, "Task allocation model for reliability and cost optimization in distributed computing system," International Journal of Modeling, Simulation and Scientific Computations, Vol. 2, No. 1, pp. 1-19, 2011.
[19] Qin-Ma Kang, Hong He, Hui-Min Song and Rong Deng, "Task allocation for maximizing reliability of distributed computing systems using honeybee mating optimization," The Journal of Systems and Software, Vol. 81, 2008.
[20] Sung-Ho Woo, Sung-Bong Yang, Shin-Dug Kim and Tack-Don Han, "Task scheduling in distributed computing systems with a genetic algorithm," Proceedings of High Performance Computing on the Information Superhighway (HPC Asia '97), IEEE, pp. 301-305, 1997.
[21] Hongjun Lu, "Load balanced task allocation in locally distributed computer systems," Technical Report #633, 1986.
[22] A. Elsadek and B. Wells, "A heuristic model for task allocation in heterogeneous distributed computing systems," International Journal of Computers and Their Applications, Vol. 6, No. 1, March 1999.
[23] V.M. Lo, "Heuristic algorithms for task assignment in distributed systems," IEEE Transactions on Computers, Vol. 37, No. 11, pp. 1384-1397, November 1988.
[24] Kemal Efe, "Heuristic models of task assignment scheduling in distributed systems," Computer, Vol. 15, No. 6, pp. 50-56, June 1982.
[25] H. Ellis, S. Sahni and S. Rajasekaran, "Fundamentals of Computer Algorithms," Galgotia Publications Pvt. Ltd.
[26] H. Kumar, M.P. Singh and P.K. Yadav, "A task allocation model for distributed data network," Journal of Mathematical Sciences, 2006.
