Q1 Explain the principles of the message-passing paradigm with the send and recv operations. (10 marks)
Solutions:
The Building Blocks: Send and Receive Operations
On the receiving end, the status variable can be used to get information
about the MPI_Recv operation.
The corresponding data structure contains:
typedef struct MPI_Status {
    int MPI_SOURCE;
    int MPI_TAG;
    int MPI_ERROR;
};
The MPI_Get_count function returns the precise count of data items
received.
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
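A minimal sketch of how the status variable and MPI_Get_count fit together (the buffer sizes and message contents here are illustrative, not from the original):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[4] = {1, 2, 3, 4};
        MPI_Send(data, 4, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int buf[10];          /* receive buffer larger than the message */
        MPI_Status status;
        int count;
        /* the count argument of MPI_Recv is the buffer capacity;
           the message actually received may be shorter */
        MPI_Recv(buf, 10, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                 MPI_COMM_WORLD, &status);
        MPI_Get_count(&status, MPI_INT, &count);
        printf("received %d ints from rank %d with tag %d\n",
               count, status.MPI_SOURCE, status.MPI_TAG);
    }
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with mpirun -np 2; rank 1 reports the precise count (4), source, and tag recovered from the status object.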
1. MPI_Comm_split
2. MPI_Comm_size
3. MPI_Send
4. MPI_Cart_create
1) MPI_Comm_split() to form communicators

int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm);

Creates new communicators based on colors and keys. This function partitions the group associated with comm into disjoint subgroups, one for each value of color. Each subgroup contains all processes of the same color. Within each subgroup, the processes are ranked in the order defined by the value of the argument key, with ties broken according to their rank in the old group.
– comm: old communicator
– color: control of subset assignment (nonnegative integer). Processes with the same color are in the same new communicator
– key: control of rank assignment
– newcomm: new communicator
– This is a collective call. Each process can provide its own color and key.
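As a sketch of the color/key mechanics, the fragment below (my own illustration, not from the original) splits MPI_COMM_WORLD by rank parity, so even-ranked and odd-ranked processes end up in two disjoint communicators:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int world_rank, world_size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* color = rank parity: even ranks form one subgroup, odd ranks another.
       key = world_rank, so each subgroup keeps the old relative ordering. */
    int color = world_rank % 2;
    MPI_Comm newcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &newcomm);

    int new_rank, new_size;
    MPI_Comm_rank(newcomm, &new_rank);
    MPI_Comm_size(newcomm, &new_size);
    printf("world rank %d -> color %d, new rank %d of %d\n",
           world_rank, color, new_rank, new_size);

    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}
```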
2) MPI_Comm_size(comm, &size)
Returns the total number of processes within the communicator comm.

Example:
int my_rank, size;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &size);
3) Sending a message
In MPI, there are many flavors of sends and receives. This is one reason why
there are more than 125 routines provided in MPI. In our basic set of six calls,
we will look at just one type of send and one type of receive.
MPI_Send is a blocking send. This means the call does not return control to
your program until the data have been copied from the location you specify in
the parameter list. Because of this, you can change the data after the call and
not affect the original message. (There are non-blocking sends where this is not
the case.)
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
The parameters:
buf is the beginning of the buffer containing the data to be sent. For Fortran, this is often the name of an array in your program. For C, it is an address.
count is the number of elements of the given datatype to be sent.
datatype is the MPI type of each element (for example, MPI_INT or MPI_DOUBLE).
dest is the rank of the process which is the destination for the message.
tag is an integer label that the receiver can use to distinguish between messages.
comm is the communicator within which the communication takes place.
Receiving a message
Message passing in MPI-1 requires two-way communication. One-sided communication is one of the features added in MPI-2. In two-way communication (point-to-point communication), each time one process sends a message, another process must explicitly receive the message; for every place in your application that you call an MPI send routine, there needs to be a corresponding place where you call an MPI receive routine. Care must be taken to ensure that the send and receive parameters match. This will be discussed in more detail later.
Like MPI_Send, MPI_Recv is blocking. This means the call does not return control to your program until all the received data have been stored in the
variable(s) you specify in the parameter list. Because of this, you can use the
data after the call and be certain it is all there. (There are non-blocking receives
where this is not the case.)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
The parameters:
buf is the beginning of the buffer where the incoming data are to be stored.
For Fortran, this is often the name of an array in your program. For C, it is
an address.
source is the rank of the process from which data will be accepted
(This can be a wildcard, by specifying the parameter MPI_ANY_SOURCE.)
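Putting MPI_Send and MPI_Recv together, a minimal two-process exchange might look like this (the message text and tag value are illustrative):

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    int rank;
    char msg[32];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        strcpy(msg, "hello from rank 0");
        /* blocking send: returns once msg may safely be reused */
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* blocking receive: returns once the data is stored in msg */
        MPI_Recv(msg, 32, MPI_CHAR, 0, 99, MPI_COMM_WORLD, &status);
        printf("rank 1 got: %s\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```

Note how the destination/source ranks, the tag (99), and the communicator match on both sides; mismatched parameters are a common source of hangs.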
4) MPI_Cart_create
Fortran syntax:
subroutine MPI_Cart_create(old_comm, ndims, dim_size, periods, reorder, new_comm, ierr)
C syntax:
int MPI_Cart_create(MPI_Comm comm_old, int ndims, const int dims[], const int periods[], int reorder, MPI_Comm *comm_cart)
Input parameters:
– comm_old: input communicator
– ndims: number of dimensions of the Cartesian grid
– dims: integer array of size ndims giving the number of processes in each dimension
– periods: logical array of size ndims specifying whether each dimension is periodic (wraparound)
– reorder: whether ranks may be reordered in the new communicator
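A sketch of creating a 2 x 2 Cartesian topology with wraparound in both dimensions (grid shape chosen for illustration; run with 4 processes):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);

    /* 2 x 2 grid, periodic in both dimensions, reordering allowed */
    int dims[2] = {2, 2};
    int periods[2] = {1, 1};
    MPI_Comm cart_comm;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart_comm);

    /* query this process's rank and coordinates in the new topology */
    int cart_rank, coords[2];
    MPI_Comm_rank(cart_comm, &cart_rank);
    MPI_Cart_coords(cart_comm, cart_rank, 2, coords);
    printf("cart rank %d has grid coordinates (%d, %d)\n",
           cart_rank, coords[0], coords[1]);

    MPI_Comm_free(&cart_comm);
    MPI_Finalize();
    return 0;
}
```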
Q3 List and explain the minimal set of MPI routines to start and to terminate the MPI library. (10 marks)
MPI_Init is called prior to any calls to other MPI routines. Its purpose is to initialize the MPI environment.
int MPI_Init(int *argc, char ***argv)
MPI_Finalize is called at the end of the computation; it performs various clean-up tasks to terminate the MPI environment. No MPI calls may be performed after MPI_Finalize has been called, not even MPI_Init.
int MPI_Finalize()
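A minimal complete MPI program using just these start-up and shut-down routines (together with the rank and size queries shown earlier) can be sketched as:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);               /* must precede all other MPI calls */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* no MPI calls allowed after this */
    return 0;
}
```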
Q4 Explain the blocking message passing operations using send and receive. (10 marks)
P0:                     P1:
send(&a, 1, 1);         send(&a, 1, 0);
receive(&b, 1, 1);      receive(&b, 1, 0);

o The code fragment makes the value of a on each process available to the other. However, if send and receive are implemented using a blocking non-buffered protocol, the send at P0 waits for the matching receive at P1, whereas the send at P1 waits for the matching receive at P0. Since neither receive is ever posted, the two processes deadlock.
P0:                               P1:
for (i = 0; i < 1000; i++) {      for (i = 0; i < 1000; i++) {
    produce_data(&a);                 receive(&a, 1, 0);
    send(&a, 1, 1);                   consume_data(&a);
}                                 }

o In this producer-consumer fragment, if process P1 is slow in getting to this loop, process P0 might have sent all of its data into buffer space before P1 posts its first receive.
o If there is enough buffer space, then both processes can proceed;
however, if the buffer is not sufficient (i.e., buffer overflow), the
sender would have to be blocked until some of the corresponding
receive operations had been posted, thus freeing up buffer space.
o This can often lead to unforeseen overheads and performance
degradation. In general, it is a good idea to write programs that
have bounded buffer requirements.
Deadlocks
The following fragment shows how blocking sends and receives can deadlock when the receives are posted in the opposite order to the matching sends:

int a[10], b[10], myrank;
MPI_Status status;
...
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    MPI_Send(a, 10, MPI_INT, 1, 1, MPI_COMM_WORLD);
    MPI_Send(b, 10, MPI_INT, 1, 2, MPI_COMM_WORLD);
}
else if (myrank == 1) {
    MPI_Recv(b, 10, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
    MPI_Recv(a, 10, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
}
...

If MPI_Send is implemented without buffering, process 0 blocks in its first send (tag 1) until process 1 posts a matching receive; but process 1's first receive waits for tag 2, which process 0 never reaches. The result is a deadlock.
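A deadlock-free variant of this kind of exchange, sketched here as a complete two-process program, posts the receives in the same order as the sends, so the first send always finds its matching receive regardless of buffering:

```c
#include <mpi.h>

int main(int argc, char *argv[]) {
    int a[10] = {0}, b[10] = {0}, myrank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0) {
        MPI_Send(a, 10, MPI_INT, 1, 1, MPI_COMM_WORLD);
        MPI_Send(b, 10, MPI_INT, 1, 2, MPI_COMM_WORLD);
    }
    else if (myrank == 1) {
        /* receives posted in the same order as the sends:
           tag 1 first, then tag 2 */
        MPI_Recv(a, 10, MPI_INT, 0, 1, MPI_COMM_WORLD, &status);
        MPI_Recv(b, 10, MPI_INT, 0, 2, MPI_COMM_WORLD, &status);
    }
    MPI_Finalize();
    return 0;
}
```

Another standard remedy for symmetric exchanges is MPI_Sendrecv, which performs the send and receive as a single combined operation and lets the MPI library avoid the circular wait.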