Abstract
The Message Passing Interface (MPI) is a communication library specification for
parallel computers as well as for workstation networks. This paper describes the
role of MPI in parallel computing. To give a general understanding of the MPI
specification, it also presents a detailed view of the primary subjects,
including the evolution of MPI, MPI modes, MPI features, and communicators.
This paper is aimed at learners who are new to parallel programming with MPI.
Keywords
Message Passing Interface (MPI), MPI Modes, MPI Functions, Communicator
1.0 MPI
MPI is a portable standard library interface for writing parallel programs
using the distributed-memory programming model. MPI is primarily focused
on the message-passing parallel programming model: data is transferred
from the address space of one process to the address space of another
through cooperative operations on each process [1]. The goal of MPI is to
provide a universal standard for programs based on message passing.
Features of MPI-1
a. Point-to-point communication,
b. Collective communication,
c. Process groups and communication domains,
d. Virtual process topologies, and
e. Bindings for Fortran and C.
The second version of MPI (MPI-2) was released in 1997. MPI-2 extends the
basic message-passing model with one-sided communication, parallel I/O,
and dynamic process management.
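As a brief illustration of MPI-2 one-sided communication, the sketch below has rank 0 write a value directly into a memory window exposed by rank 1 with MPI_Put, without rank 1 posting a receive. It assumes an MPI-2-capable implementation and at least two processes; the value 42 is purely illustrative.

```c
#include <mpi.h>
#include <stdio.h>

/* One-sided (RMA) sketch: rank 0 puts a value into rank 1's window. */
int main(int argc, char *argv[])
{
    int rank, buf = 0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process exposes its own 'buf' as a one-sided window. */
    MPI_Win_create(&buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open an access epoch */
    if (rank == 0) {
        int value = 42;
        /* Write 'value' into 'buf' on rank 1; rank 1 makes no receive call. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);               /* close the epoch; transfer complete */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Run with at least two processes, e.g. mpirun -np 2 ./a.out.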
Communicator
A communication domain is a group of processes that communicate with
each other. Information about the communication domain is stored in a
variable of type MPI_Comm, called a "communicator".
MPI defines the default communicator MPI_COMM_WORLD, which includes all
MPI processes.
A customized communicator (process filtering) is required if the
programmer needs to send a message to only some of these processes.
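For instance, a customized communicator covering only a subset of the processes can be derived from MPI_COMM_WORLD with MPI_Comm_split. The sketch below splits the processes into an even-ranked group and an odd-ranked group; the even/odd criterion is chosen purely for illustration.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int worldRank, subRank;
    MPI_Comm subComm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &worldRank);

    /* Processes passing the same 'color' end up in the same new
       communicator: even and odd ranks form two separate groups. */
    MPI_Comm_split(MPI_COMM_WORLD, worldRank % 2, worldRank, &subComm);
    MPI_Comm_rank(subComm, &subRank);

    printf("world rank %d has rank %d in its sub-communicator\n",
           worldRank, subRank);

    MPI_Comm_free(&subComm);
    MPI_Finalize();
    return 0;
}
```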
Modes of MPI
a. Synchronous mode: The send does not complete until the matching
receive has started; the data transfer is not finished until both
functions have completed their operation. The send primitive is
MPI_Ssend.
b. Buffered mode: This mode allows the user to control the space
available for buffering within a user-defined buffer. A send can be
initiated whether or not the matching receive has been initiated, and
the send may complete before the matching receive is initiated. The
send primitive is MPI_Bsend.
c. Standard mode: The sender is blocked until the send buffer can be
reused without altering the message. It may behave like either
buffered mode or synchronous mode, depending on the specific MPI
implementation. The send primitive is MPI_Send.
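A buffered-mode send must first attach user-supplied buffer space with MPI_Buffer_attach before MPI_Bsend can be used. The sketch below shows this for a single integer message; the buffer size and message value are illustrative, and at least two processes are assumed.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, msg = 7;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Attach user-controlled buffer space; MPI_BSEND_OVERHEAD accounts
           for per-message bookkeeping inside the buffer. */
        int bufSize = sizeof(int) + MPI_BSEND_OVERHEAD;
        void *buf = malloc(bufSize);
        MPI_Buffer_attach(buf, bufSize);

        /* Completes locally once the message is copied into the buffer,
           whether or not the matching receive has started. */
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);

        MPI_Buffer_detach(&buf, &bufSize); /* waits for buffered sends */
        free(buf);
    } else if (rank == 1) {
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d via MPI_Bsend\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```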
The essential subroutines of MPI are defined in mpi.h (C) and mpif.h (Fortran). Some
of the primary subroutines are as follows:
I. mpi_init()
It initiates an MPI-based program or computation. It initializes the library,
builds the MPI environment, and creates an MPI job to store information
about the processes involved. It is defined as:
int MPI_Init(int *argc, char ***argv)
II. mpi_comm_rank()
Each process in an MPI job is given a unique rank number, which helps it
communicate with other processes. mpi_comm_rank() determines the rank
of the calling MPI process within a given communicator. It is defined as:
int MPI_Comm_rank(MPI_Comm comm, int *rank)
III. mpi_comm_size()
It is used to determine the total number of processes launched and
monitored by the MPI program. It is defined as:
int MPI_Comm_size(MPI_Comm comm, int *size)
IV. mpi_send()
This function is used to send data to another process in the communicator
group. It is defined as:
int MPI_Send(const void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
V. mpi_recv()
This function is used to receive data from another process in the
communicator group. It is defined as:
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)
VI. mpi_barrier()
This subroutine is used to synchronize the execution of a group of
processes specified by the communicator. When a process reaches
this operation, it must wait until all other processes have reached
the mpi_barrier(); i.e., no process returns from mpi_barrier() until all
the processes have called it. It is defined as:
int MPI_Barrier(MPI_Comm comm)
VII. mpi_finalize()
It is used to terminate an MPI process: it shuts down the currently
running task and informs the MPI library of the task's completion.
Failing to call mpi_finalize() at the end of a task leaves network
communication pipes open, causing system errors. It is defined as:
int MPI_Finalize(void)
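The barrier subroutine above can be illustrated with a short sketch in which every rank announces that it has arrived, waits at the barrier, and only then proceeds (the printed messages are illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    printf("rank %d reached the barrier\n", rank);

    /* No process returns from MPI_Barrier until all have called it. */
    MPI_Barrier(MPI_COMM_WORLD);

    printf("rank %d passed the barrier\n", rank);

    MPI_Finalize();
    return 0;
}
```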
Example
#include <stdio.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myRank, partner, size;
    char greeting[100];
    MPI_Status stat;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    snprintf(greeting, sizeof(greeting),
             "Greetings from process %d of %d\n", myRank, size);

    if (myRank == 0)
    {
        fputs(greeting, stdout);   /* rank 0 prints its own greeting first */
        for (partner = 1; partner < size; partner++)
        {
            /* then receives and prints one greeting per partner */
            MPI_Recv(greeting, sizeof(greeting), MPI_BYTE,
                     partner, 1, MPI_COMM_WORLD, &stat);
            fputs(greeting, stdout);
        }
    }
    else
    {
        MPI_Send(greeting, strlen(greeting) + 1, MPI_BYTE,
                 0, 1, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
Output
References
[1]. https://books.google.co.in/books?id=jQrpBQAAQBAJ&pg=PA45&dq=%22from+the+address+space+of+one+process+to+the+address+space+of+another%22&hl=en&sa=X&ved=0ahUKEwj23YXCzI_lAhUZcCsKHTcaBusQ6AEIKTAA#v=onepage&q=%22from%20the%20address%20space%20of%20one%20process%20to%20the%20address%20space%20of%20another%22&f=false
[2]. https://books.google.co.in/books?id=o58GiMl-_vkC&printsec=frontcover&dq=advanced+computer+architecture+,+RAJIV+CHOPRA&hl=en&sa=X&ved=0ahUKEwjGnr21wY3lAhVGfH0KHb3XDmgQ6AEIKTAA#v=snippet&q=MPI&f=false
[3]. https://computing.llnl.gov/tutorials/dataheroes/mpi/
[4]. www.mcs.anl.gov/~Balaji/permalinks/2014-06-06-argonne-mpi-basic
[5]. https://issuu.com/imsf/docs/mcse-011