
Homework 5: Parallel Poisson Solver

I. Overview:

The homework required writing code to find the steady-state solution of a 2-D Poisson problem. The solver
was implemented in C using the Jacobi method, with MPI used to parallelize the work across processes.
The required results were a contour plot of phi and a quiver plot of the gradient field, obtained by
applying a central-difference method to phi at each grid point.
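
For reference, the discrete update and gradient formulas behind these steps can be sketched as follows (the array names phi, phi_new, gradx, grady, the grid spacing h, and the assumption of a zero source term are illustrative choices, not taken from the submitted code):

    /* One Jacobi update of an interior point (Laplace form, zero source term assumed) */
    phi_new[i][j] = 0.25 * (phi[i-1][j] + phi[i+1][j] + phi[i][j-1] + phi[i][j+1]);

    /* Central-difference gradient of phi at an interior point (h = grid spacing) */
    gradx[i][j] = (phi[i][j+1] - phi[i][j-1]) / (2.0 * h);
    grady[i][j] = (phi[i+1][j] - phi[i-1][j]) / (2.0 * h);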
II. Simulation Parameters:
The simulation was carried out on a 100 × 100 grid. The leftmost boundary was held at phi = -100 and
the rightmost boundary at phi = 100.
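
A minimal sketch of how these boundary values might be imposed (the phi[row][col] storage layout is an assumption made for illustration):

    /* Fixed Dirichlet values on the outer columns of the 100 x 100 grid */
    for (int i = 0; i < 100; i++) {
        phi[i][0]  = -100.0;   /* leftmost boundary  */
        phi[i][99] =  100.0;   /* rightmost boundary */
    }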
III. Domain Decomposition:
The domain was divided among the processes horizontally (along x), with each process working on a
sub-grid of 100/numprocs grid lines. Dirichlet boundary conditions were applied to the top and bottom rows.
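
A minimal sketch of this decomposition, assuming numprocs divides 100 evenly and that each process keeps two extra ghost lines for its neighbors' values (variable names are illustrative):

    /* 1-D slab decomposition of the 100-line grid across the MPI ranks */
    int numprocs, rank;
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int nlocal = 100 / numprocs;       /* grid lines owned by this rank   */
    int first  = rank * nlocal;        /* first global line owned         */
    int last   = first + nlocal - 1;   /* last global line owned          */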
IV. MPI Implementation Details:

MPI initialization was followed by division of the domain among the processes according to their rank (ID).
For each process, variables identifying its next and previous neighbors were also defined.
At the end of each iteration cycle, every node exchanged the overlapping (ghost) grid-point values with its neighbors.
The communication followed a two-stage schedule: nodes exchanged values with their next neighbor in the first stage and with their previous neighbor in the second stage, using blocking MPI sends/receives.
At the end of the iterations, all nodes received the final steady-state solution through an MPI_Allgather() call.
Convergence of the solution was tested on all nodes simultaneously.
The gradient was then calculated using the (un-parallelized) code written for Homework 4.
The master process printed the values of phi and the gradient to the screen; this output was then processed in MATLAB to visualize the field and the gradient.
A sketch of the communication, convergence test, and final gather is shown below.
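
As a rough illustration of this scheme, the iteration loop might look like the following. The names NGRID, MAX_ITER, TOL, phi, phi_new, and phi_global, the use of MPI_Sendrecv for the blocking exchanges, and the MPI_Allreduce-based convergence test are assumptions made for this sketch; the submitted code may differ in these details.

    #include <math.h>
    #include <mpi.h>

    #define NGRID    100      /* grid points per side (Section II)            */
    #define MAX_ITER 100000   /* assumed iteration cap                        */
    #define TOL      1e-4     /* assumed tolerance, matching the ~1E-4 error  */

    void jacobi_parallel(int rank, int numprocs, int nlocal,
                         double phi[][NGRID], double phi_new[][NGRID],
                         double phi_global[][NGRID])
    {
        int prev = (rank == 0)            ? MPI_PROC_NULL : rank - 1;
        int next = (rank == numprocs - 1) ? MPI_PROC_NULL : rank + 1;

        for (int iter = 0; iter < MAX_ITER; iter++) {
            /* Stage 1: pass the last owned line to the next neighbor and
             * receive the matching ghost line from the previous neighbor. */
            MPI_Sendrecv(phi[nlocal], NGRID, MPI_DOUBLE, next, 0,
                         phi[0],      NGRID, MPI_DOUBLE, prev, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            /* Stage 2: pass the first owned line to the previous neighbor
             * and receive the ghost line from the next neighbor. */
            MPI_Sendrecv(phi[1],          NGRID, MPI_DOUBLE, prev, 1,
                         phi[nlocal + 1], NGRID, MPI_DOUBLE, next, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            /* Jacobi sweep over locally owned points; how the globally
             * fixed Dirichlet boundary lines are excluded is omitted here. */
            double local_err = 0.0;
            for (int i = 1; i <= nlocal; i++) {
                for (int j = 1; j < NGRID - 1; j++) {
                    phi_new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j] +
                                            phi[i][j - 1] + phi[i][j + 1]);
                    double d = fabs(phi_new[i][j] - phi[i][j]);
                    if (d > local_err) local_err = d;
                }
            }
            for (int i = 1; i <= nlocal; i++)
                for (int j = 1; j < NGRID - 1; j++)
                    phi[i][j] = phi_new[i][j];

            /* Convergence is tested on all nodes simultaneously: every rank
             * obtains the same global maximum change and takes the same decision. */
            double global_err;
            MPI_Allreduce(&local_err, &global_err, 1, MPI_DOUBLE,
                          MPI_MAX, MPI_COMM_WORLD);
            if (global_err < TOL)
                break;
        }

        /* Every rank assembles the full steady-state solution. */
        MPI_Allgather(&phi[1][0],        nlocal * NGRID, MPI_DOUBLE,
                      &phi_global[0][0], nlocal * NGRID, MPI_DOUBLE,
                      MPI_COMM_WORLD);
    }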
V. Results:

The simulation produced the correct steady-state solution. The error was on the order of
1E-4. The resulting plots are shown below:

VI. Challenges and Improvements:

1. The results were printed to the screen and redirected to a file. An improvement would be
to use MPI I/O operations so that the processes write sequentially to a single shared file.
2. OpenMP parallelization of the loop that calculates the gradient was not implemented;
this is something I will certainly look into next. A rough sketch of both improvements is shown below.
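
As a sketch of what these two improvements might look like (the file name phi.dat, the binary output format, the offsets, and the array names are illustrative assumptions, not part of the submitted code):

    #include <mpi.h>
    #include <omp.h>

    /* (1) Each rank writes its own block of the solution into one shared
     *     file with MPI I/O, instead of funnelling output through the
     *     master process. */
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "phi.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * nlocal * NGRID * sizeof(double);
    MPI_File_write_at_all(fh, offset, &phi[1][0], nlocal * NGRID,
                          MPI_DOUBLE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    /* (2) OpenMP-parallelized central-difference gradient over the
     *     interior of the gathered field (h = grid spacing). */
    #pragma omp parallel for collapse(2)
    for (int i = 1; i < NGRID - 1; i++) {
        for (int j = 1; j < NGRID - 1; j++) {
            gradx[i][j] = (phi_global[i][j + 1] - phi_global[i][j - 1]) / (2.0 * h);
            grady[i][j] = (phi_global[i + 1][j] - phi_global[i - 1][j]) / (2.0 * h);
        }
    }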
