
Master in High Performance Computing

Advanced Parallel Programming


MPI: Topologies and Neighborhood Collectives

LABS 3
The labs will be performed on the Finis Terrae (FT2) supercomputer of the Galicia
Supercomputing Center (CESGA). Starting codes are provided in the Lab3Codes.zip file.
For each lab you will have to write a small report explaining what you have done
in each exercise, the resulting codes, and the performance analysis. The report can
be written in English or Spanish. The deadline dates for each lab will be
communicated via Slack.
You can use the Intel MPI implementation (module load intel impi) or the OpenMPI
one (module load gcc openmpi). There may be some differences between them.
The code stencil.c computes the evolution of the heat in a square area. The code
includes a function to create an output file with the result. In stencil codes, the
computation of a magnitude at one point always involves neighbor points. In a parallel
MPI version that means that communications with neighbor processes are needed to
compute the points on the borders.
The code printarr_par.c generates an output BMP file for parallel implementations
of stencil.
1. Parallelize the code stencil.c using a 2D topology and neighborhood collectives.
Compare it with an implementation using point-to-point communications in 2D.
