HPC Exp6

Experiment No 6

60004190123
Vidhan Shah
CS-B/B3

Aim: Implement a program to demonstrate workload balancing using MPI.


Theory:
Load Balancing in MPI
MPI (Message Passing Interface) is currently the most widely used parallel
programming tool on clusters; it implements communication between parallel
processes through message passing. Load balancing an MPI parallel program is
important because it can reduce running time and improve performance, in
particular when addressing the dynamic balancing problem in a homogeneous
cluster system. Load balancing, or scheduling, is the process of assigning
resources to tasks so that the overall computation is efficient. A simple
static strategy is block decomposition: split the work into contiguous
chunks whose sizes differ by at most one, as sketched below.
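The following is a minimal, illustrative sketch of that block decomposition
(balanced_chunks is a hypothetical helper, separate from the experiment code
below); it shows how n work items split among p workers so that no worker
receives more than one extra item:

def balanced_chunks(n, p):
    # Each worker gets `base` items; the first `extra` workers get one more.
    base, extra = divmod(n, p)
    start = 0
    for i in range(p):
        end = start + base + (1 if i < extra else 0)
        yield (start, end)
        start = end

print(list(balanced_chunks(10, 3)))  # -> [(0, 4), (4, 7), (7, 10)], sizes 4, 3, 3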
Code & Output:
from mpi4py import MPI

print("Vidhan Shah - 60004190123")

def work_chunk(n, num_procs):
    # Split the range [0, n) into num_procs contiguous chunks whose sizes
    # differ by at most one, so the workload stays balanced.
    chunk_size = n // num_procs
    remainder = n % num_procs
    start = 0
    for i in range(num_procs):
        if i < remainder:
            end = start + chunk_size + 1
        else:
            end = start + chunk_size
        yield (start, end)
        start = end

def sum_squares(start, end):
    # Sum of i**2 over the half-open range [start, end).
    total = 0
    for i in range(start, end):
        total += i**2
    return total

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
num_procs = comm.Get_size()

# The root process builds one (start, end) chunk per process.
if rank == 0:
    n = 1000
    chunks = list(work_chunk(n, num_procs))
else:
    chunks = None

# Scatter hands each process exactly one chunk of the work.
chunks = comm.scatter(chunks, root=0)

# Each process computes its partial sum; reduce combines them at the root.
local_sum = sum_squares(*chunks)
total_sum = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"Total sum of squares is {total_sum}")

Conclusion: Thus, through this experiment we understood how MPI balances
workload across processes, implemented it in Python using mpi4py, and
observed its output.
