
Master in High Performance Computing

Advanced Parallel Programming


MPI: Remote Memory Access Operations

LAB 4
The labs will be performed on the Finis Terrae (FT2) supercomputer of the Galicia
Supercomputing Center (CESGA).
For each lab you will have to write a short report explaining what you did in each
exercise, the resulting codes, and the performance analysis. The report can be written
in English or Spanish. The deadline for each lab will be communicated via Slack.
You can use the Intel MPI implementation, module load intel impi, or the OpenMPI one,
module load gcc openmpi; there may be some differences between them. The exercises are
based on codes that you have previously parallelized with MPI.

1. Parallelize the code pi_integral.c using MPI RMA operations. Compare it with
the blocking collective version (see the RMA reduction sketch after this list).

2. Parallelize the code dotprod.c using MPI RMA operations. Compare it with the
blocking collective version (the same RMA reduction sketch applies here).

3. Parallelize the code mxvnm.c using MPI RMA operations. Compare it with the
blocking collective version. N = M can be assumed (see the MPI_Put result-assembly
sketch after this list).

4. Parallelize the code stencil.c using MPI RMA operations. Compare it with the
implementation using point-to-point communications (see the RMA halo-exchange sketch
after this list).
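
Sketch for exercises 1 and 2 (RMA reduction). The following is a minimal, illustrative
sketch, not the actual lab code: rank 0 exposes a single double through an RMA window
and every rank adds its partial result with MPI_Accumulate between two fences. The
interval count, variable names, and work distribution are assumptions for illustration.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long n = 100000000;              /* number of intervals (assumed) */
    const double h = 1.0 / (double)n;
    double local_sum = 0.0, pi = 0.0;

    /* Each rank integrates its own strided share of the intervals. */
    for (long i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        local_sum += 4.0 / (1.0 + x * x);
    }
    local_sum *= h;

    /* Rank 0 exposes pi through an RMA window; the other ranks attach a
       zero-size window so the (collective) window creation still matches. */
    MPI_Win win;
    MPI_Aint winsize = (rank == 0) ? (MPI_Aint)sizeof(double) : 0;
    MPI_Win_create((rank == 0) ? &pi : NULL, winsize, sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    MPI_Accumulate(&local_sum, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE,
                   MPI_SUM, win);          /* sums into pi on rank 0 */
    MPI_Win_fence(0, win);

    if (rank == 0) printf("pi ~= %.15f\n", pi);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}

The blocking collective version to compare against simply replaces the window, the
fences, and MPI_Accumulate with a single MPI_Reduce of local_sum onto rank 0.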
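
Sketch for exercise 3 (MPI_Put result assembly). A hedged sketch of one possible RMA
approach: each rank computes a block of rows of y = A*x locally and writes its block
into a result window exposed by rank 0 with MPI_Put. N, the row distribution, and the
variable names are illustrative assumptions; adapt them to the actual mxvnm.c.

#include <mpi.h>
#include <stdlib.h>

#define N 1024                            /* assumes N = M and N % size == 0 */

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int rows = N / size;                  /* rows of A owned by this rank */
    double *A = malloc((size_t)rows * N * sizeof(double));
    double *x = malloc(N * sizeof(double));
    double *y_local = malloc(rows * sizeof(double));
    double *y = (rank == 0) ? malloc(N * sizeof(double)) : NULL;

    /* Toy data; the real code would read or generate A and x. */
    for (size_t i = 0; i < (size_t)rows * N; i++) A[i] = 1.0;
    for (int j = 0; j < N; j++) x[j] = 1.0;

    /* Local block of the product y = A*x. */
    for (int i = 0; i < rows; i++) {
        y_local[i] = 0.0;
        for (int j = 0; j < N; j++)
            y_local[i] += A[(size_t)i * N + j] * x[j];
    }

    /* Rank 0 exposes the full result vector; every rank puts its block. */
    MPI_Win win;
    MPI_Aint winsize = (rank == 0) ? (MPI_Aint)(N * sizeof(double)) : 0;
    MPI_Win_create(y, winsize, sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    MPI_Put(y_local, rows, MPI_DOUBLE, 0,
            (MPI_Aint)rank * rows, rows, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    free(A); free(x); free(y_local);
    if (rank == 0) free(y);
    MPI_Finalize();
    return 0;
}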
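
Sketch for exercise 4 (RMA halo exchange). A minimal sketch of how the point-to-point
halo exchange could be expressed with RMA instead: each rank exposes its local array,
including the ghost cells, through a window and puts its boundary values into the
neighbours' ghost cells between two fences. The 1-D decomposition, array names, and
iteration count are assumptions for illustration.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int nloc = 1000;                 /* interior points per rank (assumed) */
    /* u[0] and u[nloc+1] are ghost cells filled by the neighbours. */
    double *u = calloc(nloc + 2, sizeof(double));

    /* Boundary ranks use MPI_PROC_NULL, which turns the puts into no-ops. */
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Expose the whole local array (including ghosts) through a window. */
    MPI_Win win;
    MPI_Win_create(u, (MPI_Aint)((nloc + 2) * sizeof(double)), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    for (int step = 0; step < 100; step++) {
        MPI_Win_fence(0, win);
        /* My first interior value goes to the right ghost of the left rank. */
        MPI_Put(&u[1], 1, MPI_DOUBLE, left, nloc + 1, 1, MPI_DOUBLE, win);
        /* My last interior value goes to the left ghost of the right rank. */
        MPI_Put(&u[nloc], 1, MPI_DOUBLE, right, 0, 1, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);

        /* ... stencil update of u[1..nloc] using the refreshed ghost cells ... */
    }

    MPI_Win_free(&win);
    free(u);
    MPI_Finalize();
    return 0;
}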
