Message Passing Interface (MPI)

2001

References:

1. MPI: The Complete Reference, MIT Press
2. A User's Guide to MPI, University of San Francisco
3. Course notes for MPI, Edinburgh Parallel Computing Centre
4. Installation Guide to mpich, Argonne National Laboratory
   (http://www.mcs.anl.gov/mpi/mpich/docs)

Contents

1. Introduction
   1.1 The message-passing model
   1.2 What is MPI
   1.3 Goals of MPI
   1.4 What MPI includes
2. Getting started with MPI
   2.1 MPI naming conventions
   2.2 Error codes
   2.3 Differences between the C and FORTRAN bindings
   2.4 MPI data types
       2.4.1 Basic data types
       2.4.2 Handle types in C
       2.4.3 Type-matching rules
   2.5 Starting MPI
       2.5.1 MPI_Initialized
   2.6 Ending an MPI program
       2.6.1 MPI_Abort
   2.7 Communicators
       2.7.1 MPI_Comm_size
       2.7.2 Process rank: MPI_Comm_rank
       2.7.3 Groups: MPI_Group_size, MPI_Group_rank, MPI_Comm_group
   2.8 Skeleton of an MPI program in C
   2.9 Skeleton of an MPI program in Fortran
   2.10 MPI_Get_processor_name
   2.11 Predefined constants and the profiling interface
   2.12 Opaque objects
   2.13 MPI terminology
   2.14 Argument conventions
3. Blocking point-to-point communication
   3.1 Introduction
   3.2 The parts of a message
   3.3 Point-to-point communication
       3.3.1 Order and fairness
   3.4 Blocking send and receive: MPI_Send, MPI_Recv
       3.4.1 Semantics of MPI_Send and MPI_Recv
       3.4.2 Message matching
       3.4.3 Issues with blocking receive
       3.4.4 MPI_Probe
       3.4.5 MPI_Pack and MPI_Unpack
   3.5 Examples
       3.5.1 Blocking send and receive
       3.5.2 Many processes greet the master
       3.5.3 Ping-pong
       3.5.4 When does MPI_Send block
       3.5.5 Computing pi with a dartboard algorithm
   3.6 The structure of MPI programs
   3.7 Summary
4. Non-blocking point-to-point communication
   4.1 Introduction
   4.2 Motivation
   4.3 Request objects
   4.4 Non-blocking send and receive
       4.4.1 Posting: MPI_Isend, MPI_Irecv
       4.4.2 Completion: MPI_Wait, MPI_Test
   4.5 Cancelling a communication
   4.6 Completing multiple communications
       4.6.1 MPI_Waitany
       4.6.2 MPI_Testany
       4.6.3 MPI_Waitall
       4.6.4 MPI_Testall
       4.6.5 MPI_Waitsome
       4.6.6 MPI_Testsome
   4.7 Overlapping communication and computation
   4.8 Communication modes
       4.8.1 Standard send
       4.8.2 Synchronous send: MPI_Ssend, MPI_Issend
       4.8.3 Buffered send: MPI_Ibsend, MPI_Buffer_attach, MPI_Buffer_detach
       4.8.4 Ready send: MPI_Rsend
5. Collective communication
   5.1 Introduction
   5.2 Characteristics of collective communication
   5.3 Collective versus point-to-point communication
   5.4 Collective operations: MPI_Barrier, MPI_Bcast, MPI_Gather,
       MPI_Scatter, MPI_Allgather, MPI_Alltoall
   5.5 Reduction operations
6. Process topologies
   6.1 Introduction
   6.2 Cartesian topologies: MPI_Cart_create, MPI_CART_RANK,
       MPI_CART_COORDS, MPI_Cart_sub
Appendix: Installing and running MPI
   1. Installing MPI (mpich)
      1.1 tstmachines
   2. Compiling programs
   3. Running programs
   4. Common MPI programming errors (general, point-to-point,
      collective, communicators)
   5. MPI resources

1. Introduction

1.1 The message-passing model
In the message-passing model, a parallel computation consists of a number of processes, each with its own private local memory. Processes share data and coordinate their work by explicitly sending and receiving messages; no memory is shared between them. The model maps directly onto distributed-memory multiprocessors and networks of workstations, but it can also be implemented efficiently on shared-memory machines, which makes it one of the most portable ways of writing parallel programs.

1.2 What is MPI
MPI (Message Passing Interface) is a specification for a library of functions (callable from C) and subroutines (callable from Fortran) for writing message-passing programs in C and FORTRAN.

MPI is a standard, not a particular piece of software. It defines the names, calling sequences and semantics of the library routines; how the routines are implemented is left to each vendor or research group. MPI deliberately does not specify everything: in particular, MPI-1 says nothing about run-time process management (how processes are created and started), so such details differ from implementation to implementation.

The MPI-1 standard was completed in 1994. It was designed by a forum that included parallel-computer vendors, library writers and application programmers. Its goals were a standard library interface for Fortran and C programs and, above all, portability: the same source code should run on any platform that provides MPI. At the same time, the interface was designed so that vendors can supply optimized versions that exploit the features of their own hardware.

MPI-2, a superset of MPI-1, extends the standard with, among other things, dynamic process management, parallel I/O, and bindings for C++ and Fortran 90. MPI-2 implementations remain compatible with MPI-1 programs.

1.3 Goals of MPI
MPI aims to make message-passing programming practical: programs should be portable across platforms, and implementations should be able to deliver good performance on each of them. A program does not need to use much of the standard to be useful; simple programs can be written with just a handful of routines:

- routines to initialize and terminate MPI,
- routines to identify processes,
- blocking send-receive operations,
- collective operations.

Apart from its communication calls, an MPI program is an ordinary sequential program, and calling an MPI routine is no different from calling any other library routine, such as printf in C. Bindings are defined for both C and FORTRAN.

1.4 What MPI includes

MPI-1 provides:
- point-to-point communication,
- collective communication,
- process groups,
- communicators,
- process topologies,
- bindings for C and FORTRAN.

Deliberately left outside MPI-1, and therefore implementation-dependent, are, among others:
- debugging facilities,
- explicit support for threads,
- dynamic process management,
- parallel I/O,
- remote-memory operations.

Several of these gaps are addressed by MPI-2.

2. Getting started with MPI

MPI defines bindings for both FORTRAN and C. Every MPI program must include the MPI header file, must call MPI_Init before any other MPI routine, and must call MPI_Finalize when it has finished with MPI. This chapter introduces the conventions that all MPI routines follow and the small set of calls needed to write a first program.

2.1 MPI naming conventions
All MPI names, both routines and constants, begin with the prefix MPI_. Predefined constants are written entirely in upper case. For example:

- MPI_SUCCESS is the error code returned by a routine that completed successfully, in both C and Fortran.
- MPI_COMM_WORLD is, in C, a constant of type MPI_Comm (a communicator); in Fortran it is an INTEGER. It denotes the communicator containing all the processes of the program.

The conventions for passing arguments and returning error codes differ between FORTRAN and C and are described below.

2.2 Error codes
Almost every MPI routine reports success or failure. In C, the routine's return value is the error code, of type int:

int err;
...
err = MPI_Init(&argc, &argv);
...

In FORTRAN, each routine has an extra final INTEGER argument, conventionally called IERR, that receives the error code:

INTEGER IERR
...
CALL MPI_INIT(IERR)

If the routine completed successfully, the error code is MPI_SUCCESS, so in C one can write:

if (err == MPI_SUCCESS) {
. . . routine ran correctly . . .
}

The meaning of a nonzero error code is implementation-dependent. Note also that the default behavior of most MPI implementations is to abort the program when an error occurs inside an MPI routine, so error codes become genuinely useful only after the default error handler has been replaced.

2.3 Differences between the C and FORTRAN bindings
All MPI names begin with the prefix MPI_. FORTRAN is case-insensitive, so MPI routine names may be written there in any case; in C the names are case-sensitive and must be spelled as defined (MPI_Init, MPI_Send, and so on). In either language, programmers should avoid declaring their own identifiers beginning with MPI_.

In FORTRAN, all MPI handles (communicators, groups, requests, and so on) are ordinary INTEGERs. In C, each kind of handle has its own typedef'd type, such as MPI_Datatype and MPI_Comm. Choice arguments, i.e. buffers that may hold data of any type (integer, real, etc.), are declared void * in C; in FORTRAN the corresponding argument is simply an array of the appropriate type. Schematically, a FORTRAN binding looks like:

MPI_ROUTINE (MY_ARGUMENT, IERR)
<type> MY_ARGUMENT

where MY_ARGUMENT may be of any type.

2.4 MPI data types
MPI defines its own named data types, corresponding to the elementary types of C and FORTRAN, and every message is described in terms of them. Passing the data type explicitly allows an MPI implementation to convert data representations when a message travels between machines with different formats.

2.4.1 Basic data types
In C, the basic MPI data types correspond to the C types as follows:

MPI data type          C data type
MPI_CHAR               signed char
MPI_SHORT              signed short int
MPI_INT                signed int
MPI_LONG               signed long int
MPI_UNSIGNED_CHAR      unsigned char
MPI_UNSIGNED_SHORT     unsigned short int
MPI_UNSIGNED           unsigned int
MPI_UNSIGNED_LONG      unsigned long int
MPI_FLOAT              float
MPI_DOUBLE             double
MPI_LONG_DOUBLE        long double
MPI_BYTE               (none)
MPI_PACKED             (none)

In Fortran, correspondingly, the MPI data types match the Fortran types:

MPI data type          FORTRAN data type
MPI_INTEGER            INTEGER
MPI_REAL               REAL
MPI_DOUBLE_PRECISION   DOUBLE PRECISION
MPI_COMPLEX            COMPLEX
MPI_LOGICAL            LOGICAL
MPI_CHARACTER          CHARACTER
MPI_BYTE               (none)
MPI_PACKED             (none)

MPI_BYTE and MPI_PACKED correspond to no C or Fortran type. An MPI_BYTE is 8 binary digits, transferred uninterpreted; it is useful, for example, for bit flags. It is not the same as a character type: on a heterogeneous network, a value sent with a typed datatype (e.g. an int) may be converted between representations, whereas a value sent as MPI_BYTE is delivered bit-for-bit unchanged. MPI_PACKED describes data that has been explicitly packed with MPI_Pack (section 3.4.5).

2.4.2 Handle types in C
In C, MPI objects are referred to through typedef'd handle types. Among them:

MPI_Comm     - a communicator
MPI_Status   - the status of a receive operation
MPI_Datatype - an MPI data type

These are used like ordinary C types. For example, a new communicator new_comm is declared as:

MPI_Comm new_comm;

In Fortran, all handles are of type INTEGER.

2.4.3 Type-matching rules
The data type given in a receive must match the data type given in the matching send: data sent as MPI_INTEGER, for example, must be received as MPI_INTEGER. If the types do not match, the result of the communication is undefined; in particular, data sent as MPI_INTEGER may not be received as float or char. The exceptions are MPI_PACKED and MPI_BYTE: MPI_PACKED matches data that has been explicitly packed, and MPI_BYTE matches any byte of stored data. For example, a float may be sent as MPI_BYTE and each of its bytes received, unconverted, as MPI_BYTE.

2.5 Starting MPI
Every MPI program in C must include the MPI header file, as either:

#include "mpi.h"

or

#include <mpi.h>

The header file mpi.h contains the prototypes of the MPI functions and the definitions of the MPI types and constants. In FORTRAN the corresponding header file is mpif.h.

Before any other MPI routine may be called, the program must call MPI_Init, which initializes the MPI environment (among other things, it sets up the communicator MPI_COMM_WORLD). MPI_Init must be called exactly once per process. Its C prototype is:

int MPI_Init(int *argc, char **argv)

and it is usually called from main as:

MPI_Init(&argc, &argv);

where argc is the argument count and argv the argument vector of main. This allows an MPI implementation to examine, and strip off, any command-line arguments added by the launcher; MPI_Init should therefore be called before the program itself uses argc and argv.

In FORTRAN, the only argument is the error code:

MPI_INIT(IERROR)
INTEGER IERROR

2.5.1 MPI_Initialized
Alongside MPI_Init, MPI provides MPI_Initialized, which reports whether MPI_Init has already been called: it sets its argument to true if MPI_Init has been executed. It is the only MPI routine that may be called before MPI_Init (a library, for instance, can use it to decide whether it must initialize MPI itself). Its prototype is:

int MPI_Initialized(int *flag)

The result is returned in flag.

2.6 Ending an MPI program
A program leaves MPI by calling MPI_Finalize(). Every process that called MPI_Init must eventually call MPI_Finalize, and after it no other MPI routine may be called, not even MPI_Init again. The user must ensure that all pending communications involving a process have completed before that process calls MPI_Finalize; the routine does not cancel or complete them, it only cleans up the MPI state. It is called simply as:

MPI_Finalize();

2.6.1 MPI_Abort
An MPI program can be terminated abnormally with MPI_Abort, which aborts the processes of a communicator. Its prototype is:

int MPI_Abort(MPI_Comm comm, int err)

where:

comm - the communicator whose processes are to be aborted.
err  - the error code returned to the invoking environment.

MPI_Abort makes a best effort to abort every task in comm, and is the appropriate way to kill a whole MPI job. In most implementations, however, it aborts all the processes of the program, not just those of comm, i.e. it behaves as if comm == MPI_COMM_WORLD whatever communicator is passed; for portability it is best called with comm = MPI_COMM_WORLD.

2.7 Communicators
A communicator is a handle to a group of processes that may communicate with one another. Every MPI communication routine takes a communicator argument: only processes belonging to the same communicator can exchange messages through it, and a message sent in one communicator can never be received in another. Communicators thus provide a private communication context, which is essential, for example, for parallel libraries whose internal messages must not interfere with the application's own.

When MPI is initialized, one communicator, MPI_COMM_WORLD, is predefined; it contains all the processes of the program. For most applications this single communicator is entirely sufficient, and all the examples in these notes use it.

The programmer can also create new communicators over subsets of the processes. For example, in a computation on a two-dimensional grid of processes indexed by coordinates (i, j), it can be convenient to give each row of processes its own communicator, so that operations within a row do not involve the other rows.

2.7.1 MPI_Comm_size
The number of processes contained in a communicator is obtained with MPI_Comm_size:

int MPI_Comm_size(MPI_Comm comm, int *size);

where comm is the communicator and the result is returned in size. For example:

MPI_Comm_size(MPI_COMM_WORLD, &size);

stores in size the number of processes in the communicator MPI_COMM_WORLD, i.e. the total number of processes of the MPI program.

2.7.2 Process rank: MPI_Comm_rank
Within a communicator, every process has a unique identifier, its rank. In a communicator with n processes, the ranks are the integers 0 through n-1. A process is addressed by its rank: the source and destination arguments of the point-to-point routines (e.g. MPI_Send and MPI_Recv) are ranks.

A rank is always relative to a particular communicator. The same process may have one rank in one communicator and a different rank in another; when several communicators are in use, MPI interprets each rank with respect to the communicator supplied in the call.

A process obtains its own rank in a communicator with MPI_Comm_rank:

int MPI_Comm_rank(MPI_Comm comm, int *rank);

where comm is the communicator and the rank is returned in rank. For example:

MPI_Comm_rank(MPI_COMM_WORLD, &rank);

Ranks are also the standard way of dividing work among processes: each process inspects its own rank and uses it to decide which part of the data, or which role in the computation, is its own.

2.7.3 Groups
Behind every communicator lies a group, an ordered set of processes. Groups can be manipulated independently of communicators: a new group can be formed from an existing one and then used to create a new communicator. The group associated with MPI_COMM_WORLD contains all the processes of the program. The basic group inquiry routines mirror those of communicators.

MPI_Group_size
MPI_Group_size is the analogue of MPI_Comm_size: it returns the number of processes in a group.

int MPI_Group_size(MPI_Group group, int *size)

where:

group - the group.
size  - (output) the number of processes in the group.

If group = MPI_GROUP_EMPTY, size is returned as 0. Calling it with group = MPI_GROUP_NULL is an error.

MPI_Group_rank
MPI_Group_rank returns the rank of the calling process within a group. Its prototype is:

int MPI_Group_rank(MPI_Group group, int *rank)

where:

group - the group.
rank  - (output) the rank of the calling process in the group.

If the calling process is not a member of the group, rank is returned as MPI_UNDEFINED.

MPI_Comm_group
MPI_Comm_group returns the group associated with a communicator. Its prototype is:

int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)

where:

comm  - the communicator.
group - (output) the group of the communicator.

2.8 Skeleton of an MPI program in C
The general structure of an MPI program in C is the following:

#include "mpi.h"
/* other #include directives */
.
.
.
/* declarations of types, constants, functions */
.
.
.
main(int argc, char **argv)
{
.
.
.
/* declarations of local variables; no MPI calls yet */
.
.
.
/* start of the MPI part of the program */
MPI_Init(&argc, &argv);
.
.
.
/* the parallel computation: ordinary C code
plus MPI calls */
.
.
.
/* end of the MPI part of the program */
MPI_Finalize();
/* no MPI calls after this point */
exit(0);
}

A small complete example:

#include "mpi.h"
#include <stdio.h>
int main(argc, argv)
int argc;
char *argv[ ];
{
int numtasks, rank, rc;
rc = MPI_Init(&argc,&argv);
if (rc != MPI_SUCCESS) {
printf("Error starting MPI program. Terminating.\n");
MPI_Abort(MPI_COMM_WORLD, rc);
}
MPI_Comm_size(MPI_COMM_WORLD,&numtasks);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
printf("Number of tasks= %d My rank= %d\n", numtasks,rank);
/**********************************/
/* the rest of the program
goes here */
/**********************************/
MPI_Finalize();
}

The program enters MPI with MPI_Init and checks the returned error code rc; on failure it terminates all tasks with MPI_Abort. If MPI_Init succeeds, MPI is running and the default communicator MPI_COMM_WORLD exists; each process then asks for the number of tasks and for its own rank in that communicator and prints them. Finally, every process leaves MPI with MPI_Finalize(). A run with 10 processes printed:

Number of tasks= 10 My rank= 0
Number of tasks= 10 My rank= 2
Number of tasks= 10 My rank= 1
Number of tasks= 10 My rank= 4
Number of tasks= 10 My rank= 3
Number of tasks= 10 My rank= 6
Number of tasks= 10 My rank= 5
Number of tasks= 10 My rank= 9
Number of tasks= 10 My rank= 8
Number of tasks= 10 My rank= 7

Note that the output lines do not appear in rank order: the processes run concurrently, and nothing synchronizes their printing.

2.9 Skeleton of an MPI program in Fortran
The structure of an MPI program in FORTRAN is the same. A minimal skeleton:

PROGRAM simple
include 'mpif.h'
integer errcode
C Initialize MPI
call MPI_INIT (errcode)
C Main part of program ...
call MPI_FINALIZE (errcode)
end

The FORTRAN version of the previous example:

program simple
include 'mpif.h'
integer numtasks, rank, ierr, rc
call MPI_INIT(ierr)
if (ierr .ne. MPI_SUCCESS) then
print *,'Error starting MPI program. Terminating.'
call MPI_ABORT(MPI_COMM_WORLD, rc, ierr)
end if
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
print *, 'Number of tasks=',numtasks,' My rank=',rank
C ****** the rest of the program goes here ******
call MPI_FINALIZE(ierr)
end

Note that in FORTRAN the error code is returned through the last argument of each MPI routine rather than as a function result, and that the header file is mpif.h rather than mpi.h.

2.10 MPI_Get_processor_name
When an MPI program runs on a network of machines, it is often useful for a process to learn which machine it is running on; the rank alone does not reveal this. MPI provides MPI_Get_processor_name, which returns the name of the processor (typically the host name) on which the calling process is running at the moment of the call:

int MPI_Get_processor_name(char *name, int *resultlen)

where:

name      - (output) a character string holding the processor name.
resultlen - (output) the length of the returned string.

The buffer name must be at least MPI_MAX_PROCESSOR_NAME characters long. The returned name identifies the actual hardware on which the process runs; on systems where processes can migrate between processors, successive calls may therefore return different results. The exact format of the name is implementation-defined, so programs should not rely on it.

2.11 Predefined constants and the profiling interface
MPI defines a number of constants that programs use directly, such as the wildcard values MPI_ANY_TAG and MPI_ANY_SOURCE (introduced in the next chapter) and the communicator MPI_COMM_WORLD; further constants will appear together with the routines that use them. The value of a constant such as MPI_SUCCESS is implementation-dependent (in many implementations it is 0, but it need not be), so programs must always compare against the named constant, never against a literal value.

MPI also specifies a profiling interface: every routine with a name of the form MPI_xxx can also be called under the name PMPI_xxx. This allows a user or tool writer to supply their own version of MPI_xxx, for example to count calls or measure time, which does its bookkeeping and then calls PMPI_xxx to do the actual work.

2.12 Opaque objects
Many MPI objects (communicators, groups, data types, requests, and so on) are stored in system memory that is not directly accessible to the user: they are opaque objects. The user manipulates them only through handles. A handle is returned by the call that creates the object and is passed to the calls that use or free it; what the handle actually is (in C a typedef'd type, in Fortran an INTEGER) is the implementation's business, and the user must make no assumptions about the contents of the object it designates.

Opaque objects are created and destroyed by explicit MPI calls. Destroying an object while a communication that uses it is still pending is an error; conversely, keeping objects alive that are no longer needed wastes system resources, so objects should be freed when the program is done with them. A freed handle must not be used again; MPI sets it to the appropriate null (flag) value when the object is deallocated.

Some opaque objects are predefined and need never be created or freed by the user; the communicator MPI_COMM_WORLD is the most important example.

2.13 MPI terminology
The descriptions of MPI operations use a small standard vocabulary, which the rest of these notes follows. An MPI operation is called:

- local, if completing it requires only the process that executes it; a local operation never needs communication with another process. Attaching a buffer, for instance, is local.
- non-local, if completing it may require the execution of some MPI routine on another process. A standard-mode send is non-local: it may have to wait for the matching receive.
- blocking, if the call does not return until the operation's buffers may safely be reused by the program.
- non-blocking, if the call may return before the operation it started has completed, and before the buffers involved may be reused.
- collective, if all processes of a group must call the routine. Broadcast is an example of a collective operation.

2.14 Argument conventions
The arguments of MPI routines fall into three classes:

1. IN arguments. The routine only reads them (read only); in C they are typically passed by value.
2. INOUT arguments. The routine both reads and updates them (read-write); in C they are passed by reference, using the & operator.
3. OUT arguments. The routine only writes them (write only), ignoring any value they had on entry; they too are passed by reference.

Finally, as described earlier, every MPI routine returns an error code, with MPI_SUCCESS indicating success.


3. Blocking point-to-point communication

3.1 Introduction
MPI processes communicate by sending and receiving messages. MPI supports two broad styles of communication. In point-to-point communication, a message travels from one named process to another; in collective communication, an entire group of processes takes part in the operation. This chapter covers point-to-point communication in its simplest, blocking form; collective communication is the subject of chapter 5.

3.2 The parts of a message
An MPI message consists of two parts: the envelope and the message body. The envelope is what MPI uses to route and match the message; it has four fields:

1. the source: the rank of the sending process;
2. the destination: the rank of the receiving process;
3. the communicator in which the communication takes place;
4. the tag: an integer chosen by the programmer to classify messages, so that a receiver can distinguish, for example, data messages from control messages.

The message body describes the data being transferred. It, too, has three parts:

1. the buffer: the memory area from which the data is taken in a send operation, and into which it is placed in a receive operation;
2. the datatype: the type of the transferred data. For elementary data this corresponds directly to a C or Fortran type (int and INTEGER, float and REAL, etc.). MPI also allows derived data types, built from the elementary ones, which can describe, for instance, a C structure or a non-contiguous section of an array; these make it possible to transfer complicated data layouts in a single message;
3. the count: the number of elements of the given datatype sent or received.

3.3 Point-to-point communication
Point-to-point communication is the basic communication mechanism of MPI: one process sends a message and one other process receives it. The source process names the rank of the destination, and the destination names the rank of the source, both relative to a communicator.

The two processes do not act in lockstep. The sender may issue its send long before the receiver is ready, and in general the sender does not know when its message is actually delivered; if the receiver has not yet asked for the message, it is up to the MPI system to hold it in the meantime.

MPI distinguishes, first, between blocking and non-blocking versions of send and receive, which differ in when the call returns relative to the progress of the communication. Independently of that, MPI offers several communication modes (standard, synchronous, buffered, ready), which differ in how the completion of a send depends on the receiver. This chapter uses the blocking calls in standard mode; non-blocking operation and the other modes are the subject of chapter 4.

3.3.1 Order and fairness
MPI guarantees that messages do not overtake one another: if a process sends two messages in succession to the same destination and both match the same receive, the receive takes the one that was sent first; likewise, if a receiver posts two receives in succession that both match the same message, the one posted first receives it. Within a communicator, point-to-point communication between a given pair of processes therefore preserves order.

MPI makes no guarantee of fairness. If two processes both send to the same destination and the destination's receive matches both messages, the accepted message may come from either sender; it is possible for one sender to be starved while the other repeatedly succeeds.

Sends and receives must be matched with care. If a send is issued whose matching receive is never posted, or a receive whose matching send never occurs, the processes involved wait forever: the program deadlocks.

3.4 Blocking send and receive
A blocking communication call does not return until the operation is, from the caller's point of view, complete: the buffer passed to the call may then safely be reused. The basic blocking calls are MPI_Send and MPI_Recv. This section describes their interfaces; the next describes their exact semantics, which, for MPI_Send, are subtler than they first appear.


MPI_Send
MPI_Send performs a blocking standard-mode send. Its prototype is:

int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm);

where:

buf      - the address of the send buffer, i.e. of the data to be sent.
count    - the number of elements to send (not the number of bytes).
datatype - the type of each element (MPI_INT, MPI_CHAR, ...).
dest     - the rank of the destination process.
tag      - the message tag.
comm     - the communicator containing sender and destination.

The message consists of count successive elements of type datatype, starting at address buf. Note again that count counts elements, not bytes; describing the data in elements rather than bytes is what allows MPI to convert representations byte by byte between heterogeneous machines.

MPI_Recv
MPI_Recv performs a blocking receive: it completes when a matching message has arrived and been stored in the receive buffer. Its prototype is:

int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag,
MPI_Comm comm, MPI_Status *status)

Most of the arguments mirror those of MPI_Send:

buf      - the address of the receive buffer, where the message will be stored.
count    - the capacity of the receive buffer, in elements.
datatype - the type of the elements.
source   - the rank of the source process.
tag      - the message tag.
comm     - the communicator of the communication.
status   - (output) information about the completed receive.


Three points about the receive arguments deserve attention.

First, count specifies the capacity of the buffer, not the exact length of the incoming message. The received message may be shorter than count elements; it is an error, however, for it to be longer, since the buffer would overflow.

Second, status is a structure (in C, of type MPI_Status) filled in by MPI with information about the message actually received: its source, its tag, and enough information to recover its length. The length cannot be read from a field directly; it is obtained with MPI_Get_count:

int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)

where:

status   - the status returned by MPI_Recv.
datatype - the datatype that was given to MPI_Recv.
count    - (output) the number of elements received.

MPI_Get_count examines status and returns in count the number of elements of type datatype in the received message. Internally the length is kept in bytes, and the datatype argument is what allows it to be converted to a number of elements.

Third, the datatype of the receive must obey the type-matching rules of section 2.4.3 with respect to the datatype of the matching send.

3.4.1 Semantics of MPI_Send and MPI_Recv
The semantics of MPI_Recv are simple: the call does not return until a matching message has arrived and been copied into the receive buffer. If no matching message is ever sent, MPI_Recv waits forever.

The semantics of MPI_Send are less obvious. MPI_Send returns as soon as the send buffer may be reused, and an implementation may achieve this in either of two ways:

1. it may copy the message into an internal buffer (the MPI internal buffer) and return immediately, letting the transfer proceed later; or
2. it may wait until a matching receive has been posted at the destination and the message is on its way out of the send buffer, and only then return.

The choice is up to the implementation and may depend on the size of the message: small messages are typically buffered, while for large ones, for which copying is expensive and buffer space scarce, MPI_Send waits for the receiver. A correct MPI program must therefore never rely on MPI_Send returning before the matching receive is posted. In other words, MPI_Send is non-local: its completion may depend on the behavior of the destination process.

3.4.2 Message matching
A receive selects the message it will accept through three of its arguments: source, tag and comm must match the corresponding values of the incoming message. For source and tag, wildcard values exist: MPI_ANY_SOURCE makes the receive accept a message from any source, and MPI_ANY_TAG a message with any tag. There is no wildcard for the communicator; comm must always name the communicator in which the message was sent.

When MPI_ANY_SOURCE or MPI_ANY_TAG is used, the receiver can discover where the accepted message actually came from, and with what tag, from the status argument of MPI_Recv, whose fields MPI_SOURCE and MPI_TAG hold the source and tag of the received message.

Tags let a receiver distinguish classes of messages from the same sender, and receive them selectively: a process can first receive the messages with one tag, then those with another, regardless of the order in which they arrive.

A send and a receive match only if source, tag and communicator all agree. If a receive is posted for which no matching send ever executes, or a send for which no matching receive is posted, the processes involved wait indefinitely: again, a deadlock. Care in pairing sends with receives is the central discipline of MPI programming.

3.4.3 Issues with blocking receive
A blocking receive commits the receiver: once MPI_Recv is called, the process can do nothing else until a matching message arrives. This raises several practical issues.

First, the receiver must provide a buffer large enough for whatever message may arrive, which means it must know, or bound, the message length in advance. When the length genuinely varies, the sender can transmit the length in a separate message first, or the receiver can inquire about the pending message before receiving it, with MPI_Probe, described next.

Second, when messages of different kinds may arrive in an unpredictable order, tags allow them to be told apart: a receive with MPI_ANY_SOURCE and MPI_ANY_TAG accepts whatever comes first, and the program can then dispatch on the MPI_SOURCE and MPI_TAG fields of the status.

Third, when one logical transfer consists of several values of different types, the sender can combine them into a single message by packing them, and the receiver extracts them again by unpacking; the routines MPI_Pack and MPI_Unpack, described in section 3.4.5, do this. Sending one combined message is generally cheaper than sending several small ones.

3.4.4 MPI_Probe
MPI_Probe lets a process inspect a pending message without receiving it. The call blocks until a message matching its source, tag and communicator is available, and then returns information about it, in particular its length, without consuming it. The receiver can therefore allocate a buffer of exactly the right size and then receive the message. Its prototype is:

int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)

where:

source - the rank of the source process, or MPI_ANY_SOURCE.
tag    - the message tag, or MPI_ANY_TAG.
comm   - the communicator.
status - (output) information about the pending message.

MPI_Probe does not receive the message; a subsequent MPI_Recv with matching arguments does, and it will receive the very message that was probed if no other receive intervenes. The fields MPI_SOURCE and MPI_TAG of the status structure give the message's actual source and tag; this is how they are learned when MPI_ANY_SOURCE or MPI_ANY_TAG was used in the probe, and they can then be passed as the source and tag of the following receive. The length of the pending message is obtained by applying MPI_Get_count to status.

3.4.5 MPI_Pack and MPI_Unpack
MPI allows data of different types, or from different memory areas, to be combined into one message: the pieces are packed into a contiguous buffer on the sending side, the buffer is sent as a single message of type MPI_PACKED, and the pieces are unpacked on the receiving side.

MPI_Pack has the prototype:

int MPI_Pack(void *inbuf, int incount, MPI_Datatype datatype, void *outbuf,
int outsize, int *position, MPI_Comm comm)

where:

inbuf    - the data to be packed.
incount  - the number of elements to pack (not bytes).
datatype - the type of the elements.
outbuf   - the packing buffer.
outsize  - the size of the packing buffer, in bytes.
position - (input/output) the current position in the packing buffer, in bytes.
comm     - the communicator in which the packed message will be sent.

MPI_Pack copies the message described by inbuf, incount and datatype (described exactly as for MPI_Send) into outbuf, which holds outsize bytes, starting at offset position, and advances position past the packed data. Calling MPI_Pack repeatedly with the same outbuf and position packs several pieces of data one after another; position always indicates where the next piece will go, and after the last call it equals the total packed size, which is the count to use when sending outbuf (e.g. with MPI_Send and datatype MPI_PACKED).

MPI_Unpack performs the inverse operation:

int MPI_Unpack(void *inbuf, int insize, int *position, void *outbuf,
int outcount, MPI_Datatype datatype, MPI_Comm comm)

where:

inbuf    - the buffer holding the packed data.
insize   - its size, in bytes.
position - (input/output) the current position in the packed buffer, in bytes.
outbuf   - where the unpacked data is placed.
outcount - the number of elements to unpack.
datatype - the type of the elements.
comm     - the communicator in which the packed message was received.

MPI_Unpack copies outcount elements of type datatype (described exactly as for MPI_Recv) from inbuf, which holds insize bytes, starting at offset position, into outbuf, and advances position past them. Successive calls with the same inbuf and position unpack the pieces in the order in which they were packed. Typically inbuf is the buffer filled by an MPI_Recv with datatype MPI_PACKED.

3.5 Examples
This section gives complete example programs, in C, using the routines introduced so far.

3.5.1 Blocking send and receive
The first example uses only MPI_Send and MPI_Recv: process 0 sends a string to process 1.

#include "mpi.h"
#include <string.h>
main(int argc, char **argv)
{
char msg[20];
int process_rank, tag = 100;
MPI_Status status;
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &process_rank);
if (process_rank == 0)
{
strcpy(msg, "Hello World");
MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, tag,
MPI_COMM_WORLD);
}
else if (process_rank == 1)
{
MPI_Recv(msg, 20, MPI_CHAR, 0, tag, MPI_COMM_WORLD,
&status);
}
MPI_Finalize();
}

The program proceeds as follows. All processes initialize MPI with MPI_Init, and each obtains its rank with MPI_Comm_rank, storing it in process_rank. Process 0 (the process whose rank is 0) executes the MPI_Send. The buffer is the char array msg, holding the string "Hello World"; the count strlen(msg) + 1 includes the null character that terminates a C string; MPI_CHAR states that the elements of msg are chars; the destination rank is 1; the tag is 100 (any value would do, as long as the receive uses the same one); and MPI_COMM_WORLD is the communicator within which the transfer takes place.

Process 1 executes the matching MPI_Recv, with the same tag and communicator, plus the status argument. status is of type MPI_Status, which in C is the structure:

typedef struct
{
int MPI_SOURCE;
int MPI_TAG;
int MPI_ERROR;
} MPI_Status;

whose fields record the source, the tag and an error code of the received message (an implementation may add further, private fields). Finally, all processes call MPI_Finalize(), which must come after all communication is done.

Both calls are blocking. MPI_Send does not return until the contents of msg may safely be changed, i.e. until the message has at least been copied out of msg; MPI_Recv does not return until the message has arrived in msg. Any processes with ranks other than 0 and 1, if present, take part in neither communication and pass straight through to MPI_Finalize.

3.5.2 Many processes greet the master
The second example runs with any number of processes. Every process except process 0 sends a greeting string to process 0, which receives the greetings in rank order and prints them.

#include <stdio.h>
#include <string.h>
#include "mpi.h"
main(int argc, char **argv)
{
int my_rank;        /* the rank of this process  */
int p;              /* the number of processes   */
int source;         /* the rank of the sender    */
int dest;           /* the rank of the receiver  */
int tag = 50;       /* the message tag           */
char message[100];  /* the message buffer        */
MPI_Status status;  /* the status of the receive */
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
MPI_Comm_size(MPI_COMM_WORLD, &p);
if (my_rank != 0) /* every process except 0 */
{
sprintf(message, "Greetings from process %d!", my_rank);
dest = 0;
/* strlen(message) + 1 so that the terminating \0 is sent too */
MPI_Send(message, strlen(message) + 1, MPI_CHAR, dest, tag,
MPI_COMM_WORLD);
}
else /* my_rank == 0 */
{
for (source = 1; source < p; source++)
{
MPI_Recv(message, 100, MPI_CHAR, source, tag,
MPI_COMM_WORLD, &status);
printf("%s\n", message);
}
}
MPI_Finalize();
}

Run with 2 processes, the program prints:

Greetings from process 1!

Run with 4 processes, it prints:

Greetings from process 1!
Greetings from process 2!
Greetings from process 3!

Because process 0 receives from source = 1, 2, ..., p-1 in turn, rather than with MPI_ANY_SOURCE, the greetings appear in rank order regardless of the order in which they arrive.

3.5.3 Ping-pong
In the next example two processes bounce a message between them: process 0 sends to process 1 and then waits for the reply, while process 1 first receives and then sends the message back. This pattern is often used to measure communication latency.

#include "mpi.h"
#include <stdio.h>
int main(argc,argv)
int argc;
char *argv[ ];
{
int numtasks, rank, dest, source, rc, tag = 1;
char inmsg[15], outmsg[15] = "Sample message";
MPI_Status Stat;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0)
{
dest = 1;
source = 1;
rc = MPI_Send(outmsg, 15, MPI_CHAR, dest, tag,
MPI_COMM_WORLD);
rc = MPI_Recv(inmsg, 15, MPI_CHAR, source, tag,
MPI_COMM_WORLD, &Stat);
}
else if (rank == 1)
{
dest = 0;
source = 0;
rc = MPI_Recv(inmsg, 15, MPI_CHAR, source, tag,
MPI_COMM_WORLD, &Stat);
rc = MPI_Send(outmsg, 15, MPI_CHAR, dest, tag,
MPI_COMM_WORLD);
}
MPI_Finalize();
}

3.5.4 When does MPI_Send block
As explained in section 3.4.1, MPI_Send may either copy its message into an internal buffer and return at once, or wait for the matching receive. The next program measures, for a particular MPI implementation, the message size at which the switch happens. Two processes exchange messages of increasing size, from 1 KB up to 16 MB. For each size, the receiver is kept busy for a while before posting its receive; by timing the send both when the receive is already waiting and when it is not, and comparing the two times, the sender detects the size at which MPI_Send stops being buffered and starts blocking until the receive is posted.
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
int main(argc,argv)
int argc;
char *argv[ ];
{
int myid, numprocs;
int namelen;
char processor_name[MPI_MAX_PROCESSOR_NAME];
char *buf;
int bufsize, other, done, i;
double t1, t2, tbase;
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
MPI_Get_processor_name(processor_name, &namelen);
printf("Process %d on %s\n", myid, processor_name);
bufsize = 1024;
other = (myid + 1) % 2;
done = 0;
while (!done && bufsize < 1024*1024*16) {
if ((buf = (char *) malloc (bufsize)) == NULL) {
printf("%d could not malloc %d bytes\n", myid, bufsize );
MPI_Finalize();
exit(-1);
}
printf("%d sending %d to %d\n", myid, bufsize, other );
if ((myid % 2) == 0) {
MPI_Send(MPI_BOTTOM, 0, MPI_INT, other, 1, MPI_COMM_WORLD);
MPI_Recv(MPI_BOTTOM, 0, MPI_INT, other, 2, MPI_COMM_WORLD,
&status );
/* Compute a time to send when the receive is waiting */
t1 = MPI_Wtime();
MPI_Send(buf, bufsize, MPI_CHAR, other, 100, MPI_COMM_WORLD);
t2 = MPI_Wtime();
tbase = t2 - t1;
MPI_Recv(MPI_BOTTOM, 0, MPI_INT, other, 2, MPI_COMM_WORLD,
&status );
/* Compute a time when the receive is NOT waiting */
t1 = MPI_Wtime();
MPI_Send(buf, bufsize, MPI_CHAR, other, 100, MPI_COMM_WORLD);
t2 = MPI_Wtime();
if (t2 - t1 > 1.5 && t2 - t1 > 2.0 * tbase) {
printf( "MPI_Send blocks with buffers of size %d\n", bufsize );
done = 1;
}
}
else
{
MPI_Recv(MPI_BOTTOM, 0, MPI_INT, other,1,MPI_COMM_WORLD,
&status);
t1 = MPI_Wtime();
MPI_Send(MPI_BOTTOM, 0, MPI_INT, other, 2, MPI_COMM_WORLD);
MPI_Recv(buf,bufsize, MPI_CHAR, other, 100, MPI_COMM_WORLD,
&status);
MPI_Send(MPI_BOTTOM, 0, MPI_INT, other, 2, MPI_COMM_WORLD);
while (MPI_Wtime() - t1 < 2.0) ;
MPI_Recv(buf, bufsize, MPI_CHAR, other, 100, MPI_COMM_WORLD,
&status );
}
printf("%d received %d from %d\n", myid, bufsize, other );
free( buf );
i = done;
MPI_Allreduce(&i, &done, 1, MPI_INT, MPI_SUM,
MPI_COMM_WORLD);
bufsize *= 2;
}
MPI_Finalize();
return 0;
}
The output of a run with 2 processes (first the lines printed by process 1, then those printed by process 0):
Process 1 on Manolis.Athens.Greece
1 sending 1024 to 0
1 received 1024 from 0
1 sending 2048 to 0
1 received 2048 from 0
1 sending 4096 to 0
1 received 4096 from 0
1 sending 8192 to 0
1 received 8192 from 0
1 sending 16384 to 0
1 received 16384 from 0
1 sending 32768 to 0
1 received 32768 from 0
1 sending 65536 to 0
1 received 65536 from 0
1 sending 131072 to 0
1 received 131072 from 0
1 sending 262144 to 0
1 received 262144 from 0
1 sending 524288 to 0
1 received 524288 from 0
1 sending 1048576 to 0
1 received 1048576 from 0
1 sending 2097152 to 0
1 received 2097152 from 0
1 sending 4194304 to 0
1 received 4194304 from 0
1 sending 8388608 to 0


1 received 8388608 from 0


Process 0 on Manolis.Athens.Greece
0 sending 1024 to 1
0 received 1024 from 1
0 sending 2048 to 1
0 received 2048 from 1
0 sending 4096 to 1
0 received 4096 from 1
0 sending 8192 to 1
0 received 8192 from 1
0 sending 16384 to 1
0 received 16384 from 1
0 sending 32768 to 1
0 received 32768 from 1
0 sending 65536 to 1
0 received 65536 from 1
0 sending 131072 to 1
0 received 131072 from 1
0 sending 262144 to 1
0 received 262144 from 1
0 sending 524288 to 1
0 received 524288 from 1
0 sending 1048576 to 1
0 received 1048576 from 1
0 sending 2097152 to 1
0 received 2097152 from 1
0 sending 4194304 to 1
0 received 4194304 from 1
0 sending 8388608 to 1
0 received 8388608 from 1

3.5.5 Computing pi with a dartboard algorithm
The final example estimates pi with a Monte Carlo "dartboard" algorithm: random points are thrown at the square [-1,1] x [-1,1], and the fraction that lands inside the unit circle approximates pi/4. All tasks compute their own estimate; the workers send theirs to the master, which averages them.
#include <stdlib.h>
#include <stdio.h>
#include "mpi.h"
#define sqr(x) ((x)*(x))
#define DARTS 5000    /* number of throws at dartboard */
#define ROUNDS 10     /* number of times "darts" is iterated */
#define MASTER 0      /* task ID of master task */

long random(void);

double dboard(int darts)
{
double x_coord,       /* x coordinate, between -1 and 1 */
y_coord,              /* y coordinate, between -1 and 1 */
pi,                   /* pi */
r;                    /* random number between 0 and 1 */
int score, n;         /* number of darts that hit circle */
unsigned long cconst; /* used to convert integer random number */
                      /* between 0 and 2^31 to double random number */
                      /* between 0 and 1 */
cconst = 2 << (31 - 1);
score = 0;
/* "throw darts at board" */
for (n = 1; n <= darts; n++) {
/* generate random numbers for x and y coordinates */
r = (double)random()/cconst;
x_coord = (2.0 * r) - 1.0;
r = (double)random()/cconst;
y_coord = (2.0 * r) - 1.0;
/* if dart lands in circle, increment score */
if ((sqr(x_coord) + sqr(y_coord)) <= 1.0)
score++;
}
/* calculate pi */
pi = 4.0 * (double)score/(double)darts;
return(pi);
}
MPI_Status status;
MPI_Request request;
main(int argc, char **argv)
{
double homepi, /* value of pi calculated by current task     */
pi,            /* average of pi after "darts" is thrown      */
avepi,         /* average pi value for all iterations        */
pirecv,        /* pi received from worker                    */
pisum;         /* sum of workers' pi values                  */
int mytid,     /* task ID - also used as seed number         */
nproc,         /* number of tasks                            */
source,        /* source of incoming message                 */
mtype,         /* message type                               */
msgid,         /* message identifier                         */
nbytes,        /* size of message                            */
rcode,         /* return code                                */
i, n;

/* Obtain number of tasks and task ID */


MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &mytid);
MPI_Comm_size(MPI_COMM_WORLD, &nproc);
printf ("MPI task ID = %d\n", mytid);
/* Set seed for random number generator equal to task ID */
srandom (mytid);
avepi = 0;
for (i = 0; i < ROUNDS; i++) {
/* All tasks calculate pi using dartboard algorithm */
homepi = dboard(DARTS);
/* Workers send homepi to master
*/
/* - Message type will be set to the iteration count */
/* - A non-blocking send is followed by mpi_wait */
/* this is safe programming practice
*/
if (mytid != MASTER) {
mtype = i;
MPI_Isend(&homepi, 1, MPI_DOUBLE, MASTER, mtype,
MPI_COMM_WORLD, &request);
MPI_Wait(&request, &status);
}
/* Master receives messages from all workers             */
/* - Message type will be set to the iteration count     */
/* - A message can be received from any task, as long as */
/*   the message types match                             */
/* - The return code will be checked, and a message      */
/*   displayed if a problem occurred                     */
else {
mtype = i;
pisum = 0;
for (n = 1; n < nproc; n++) {
MPI_Recv(&pirecv, 1, MPI_DOUBLE, MPI_ANY_SOURCE,
mtype, MPI_COMM_WORLD, &status);
/* keep running total of pi */
pisum = pisum + pirecv;
}
/* Master calculates the average value of pi for this iteration */
pi = (pisum + homepi)/nproc;
/* Master calculates the average value of pi over all iterations */
avepi = ((avepi * i) + pi)/(i + 1);
printf(" After %3d throws, average value of pi = %10.8f\n",
(DARTS * (i + 1)),avepi);
}
}
MPI_Finalize();
}
The output of a run with 10 tasks:
MPI task ID = 0
After 5000 throws, average value of pi = 3.14424000
After 10000 throws, average value of pi = 3.14088000
After 15000 throws, average value of pi = 3.14584000
After 20000 throws, average value of pi = 3.14762000
After 25000 throws, average value of pi = 3.14385600
After 30000 throws, average value of pi = 3.14318667
After 35000 throws, average value of pi = 3.14357714
MPI task ID = 1
MPI task ID = 4
MPI task ID = 8
MPI task ID = 7
MPI task ID = 9
MPI task ID = 6
MPI task ID = 2
MPI task ID = 3
MPI task ID = 5
After 40000 throws, average value of pi = 3.14346000
After 45000 throws, average value of pi = 3.14483556


After 50000 throws, average value of pi = 3.14424000


Note how the printed estimate converges, slowly, toward pi as the throws accumulate.

3.6 The structure of MPI programs
The examples above share a common structure, typical of MPI programs:

1. Include the MPI header file and declare variables.
2. Initialize MPI.
3. Obtain the number of processes and the rank of each process.
4. Branch on the rank: each process uses its rank to select its own part of the work and its own communication. When the work is done, all processes finalize MPI.

MPI programs are thus usually written in the Single Program Multiple Data (SPMD) style: a single program text is executed by all processes, and the rank determines what each of them actually does. A common pattern is that process 0 acts as the master, distributing work and collecting results, while the remaining processes act as workers.

3.7 Summary
Blocking send and receive are the basic communication operations of MPI. They are simple to use, but two of their properties must be kept in mind. First, a blocking call makes the process unavailable for anything else: time spent waiting for a slow partner is time lost from computation. Second, careless pairing of sends and receives can deadlock: if, for instance, two processes each execute a blocking send to the other before either posts its receive (and the messages are too large to be buffered), neither ever reaches its receive. The non-blocking operations of the next chapter address both problems by separating the start of a communication from its completion and letting computation proceed in between.


4. Non-blocking point-to-point communication

4.1 Introduction
This chapter introduces the non-blocking communication operations of MPI. A non-blocking call merely starts a communication and returns immediately; the program may then do other work while the transfer proceeds, and must later complete the operation with a separate call. Non-blocking operations thus serve two purposes: they allow communication to be overlapped with computation, and, because a process is never stuck inside a send or receive, they make it easier to avoid the deadlocks that blocking operations can cause. Orthogonally to blocking versus non-blocking, MPI also defines several communication modes, which are presented at the end of the chapter.

4.2 Motivation
Consider a ring of processes, say 0 through 5 in a communicator, in which every process must send a message to one neighbor and receive one from the other. If every process first executes an MPI_Send and then an MPI_Recv, the program is incorrect: should the implementation choose not to buffer the messages, every process blocks in its send, waiting for a receive that no process has yet posted, and the program deadlocks.

The program can be fixed with blocking calls by carefully ordering the operations, e.g. having processes 0, 2 and 4 send first while 1, 3 and 5 receive first, and then the roles reversed. But such orderings are easy to get wrong, depend on the communication pattern, and serialize transfers that could proceed concurrently. With non-blocking operations each process simply starts its receive and its send, in either order, and then completes both: no ordering discipline is needed and no deadlock can occur.

4.3 Request objects
A non-blocking communication is carried out in two stages:

- Posting the operation. A call posts the send (posting send) or posts the receive (posting receive), declaring the operation to MPI, and returns immediately; the transfer may then proceed in the background.
- Completing the operation. A second call completes the send (complete send) or completes the receive (complete receive), finishing the process's part in the communication: only when it returns may the send buffer be reused, or the received data be used.

Between posting and completion the program is free to compute. The scheme is analogous to a fax machine with a document feeder: one loads the document and starts the transmission (the posting), works on something else while the pages go through, and later checks that the transmission has finished (the completion).

To connect a completion call with the operation it completes, MPI uses request objects, referred to through request handles. A request object is an opaque object (section 2.12), allocated by MPI when the operation is posted; it identifies the posted communication, among other things its buffer. The posting call returns a handle to the request; the completion call takes this handle, waits for or tests the operation, and, once the operation is complete, deallocates the request object and invalidates the handle.

The constant MPI_REQUEST_NULL denotes the empty request; a request handle whose operation has completed and been deallocated is set to MPI_REQUEST_NULL.

4.4 Non-blocking send and receive
This section presents the calls that post a non-blocking send or receive, and the calls that complete them.

4.4.1 Posting a send or receive
The posting calls are named after their blocking counterparts with the prefix I (for Immediate, i.e. non-blocking): the non-blocking send is MPI_Isend and the non-blocking receive is MPI_Irecv.

MPI_Isend
MPI_Isend has the prototype:

int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm, MPI_Request *request)

where:

buf      - the address of the send buffer.
count    - the number of elements to send.
datatype - their type (MPI_INT, MPI_FLOAT, MPI_CHAR, ...).
dest     - the rank of the destination.
tag      - the message tag.
comm     - the communicator.
request  - (output) the handle of the request object.

The first six arguments are those of MPI_Send; the new one is request, through which MPI returns the handle of the request object identifying this communication. The handle is later passed to a completion call.

MPI_Isend only posts the send. When it returns, nothing may be assumed about the progress of the transfer, and the send buffer must not be modified until the operation has been completed with MPI_Wait or MPI_Test.

MPI_Irecv
MPI_Irecv posts a non-blocking receive. Its arguments are those of MPI_Recv, except that the status argument is replaced by a request handle:

int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source, int tag,
MPI_Comm comm, MPI_Request *request)

where:

buf      - the address of the receive buffer.
count    - its capacity, in elements.
datatype - the type of the elements.
source   - the rank of the source (or MPI_ANY_SOURCE).
tag      - the message tag (or MPI_ANY_TAG).
comm     - the communicator.
request  - (output) the handle of the request object.

MPI_Irecv only posts the receive. When it returns, the message has in general not yet arrived, and the receive buffer must not be read, or reused, until the operation has been completed.

4.4.2 Completing a send or receive
A posted operation (MPI_Isend or MPI_Irecv) is completed with one of two calls.

The Wait call, MPI_Wait, blocks until the operation has completed. It is used when the program can go no further until the communication is finished, e.g. when the result of a receive is about to be used, or a send buffer is about to be reused.

The Test call, MPI_Test, returns immediately, reporting with a true/false flag whether the operation has completed. It is used when the program has other work to do: it can poll the communication from time to time and compute in between, overlapping communication with computation.

MPI_Wait and MPI_Test take the request handle returned by the posting call. For a receive, they also return, on completion, a status carrying the same information that a blocking MPI_Recv would have returned (source, tag, length).
MPI_Wait
The blocking completion call is MPI_Wait. Its prototype is:

int MPI_Wait(MPI_Request *request, MPI_Status *status)

where:

request - (input/output) the handle of the request to wait for (returned by the posting call).
status  - (output) on completion of a receive, information about the received message.

MPI_Wait blocks until the operation identified by request, a posting send or a posting receive, has completed. On return it deallocates the request object and sets the request handle to MPI_REQUEST_NULL. If the completed operation was a receive, the source, tag and count of the received message can be extracted from status, exactly as after MPI_Recv; for a send, the contents of status are of little interest and are usually ignored after MPI_Wait.
MPI_Test
The non-blocking completion call is MPI_Test. Its prototype is:

int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)

where:

request - (input/output) the handle of the request to test.
flag    - (output) true if the operation has completed.
status  - (output) if flag is true and the operation was a receive, information about the received message.

MPI_Test returns immediately. If the operation identified by request (a posting send or posting receive) has completed, flag is set true, the request object is deallocated, and, as with MPI_Wait, the request handle is set to MPI_REQUEST_NULL; for a receive, the source, tag and count of the message are then available from status. If the operation has not yet completed, flag is set false, status is undefined, and the request remains valid, so MPI_Test may be called on it again later.

4.5 Cancelling a communication
A posted but not yet completed operation can be cancelled with MPI_Cancel:

int MPI_Cancel(MPI_Request *request)

where request is the handle of the posted operation (a posting send or posting receive) to cancel.

MPI_Cancel only marks the operation for cancellation and returns immediately; it does not deallocate the request. The operation must still be completed with MPI_Wait or MPI_Test (i.e. with a complete send or complete receive). If the cancellation succeeds, the operation is aborted as if it had never been posted; if the operation manages to complete first, the cancellation has no effect. Which of the two happened is recorded in the status returned by the completion call and is queried with MPI_Test_cancelled:

int MPI_Test_cancelled(MPI_Status *status, int *flag)

where:

status - the status returned by the completion call.
flag   - (output) true if the communication was actually cancelled.

If flag is returned true, the communication was cancelled and the remaining fields of status are undefined; if false, the communication completed normally and status may be read as usual. An example of the use of MPI_Cancel:

MPI_Comm_rank(comm, &rank);
if (rank == 0)
MPI_Send(a, 1, MPI_CHAR, 1, tag, comm);
else if (rank == 1)
{
MPI_Irecv(a, 1, MPI_CHAR, 0, tag, comm, &req);
MPI_Cancel(&req);
MPI_Wait(&req, &status);
MPI_Test_cancelled(&status, &flag);
if (flag) /* true: the cancel succeeded, so post the receive again */
MPI_Recv(a, 1, MPI_CHAR, 0, tag, comm, &status);
}


4.6 Completion of multiple operations

MPI also provides routines that complete several non-blocking operations at
once. Each takes an array of request handles and exists in a blocking
(Wait) and a non-blocking, query (Test) variant:

MPI_Waitany / MPI_Testany    complete any one operation from the array.
MPI_Waitall / MPI_Testall    complete all operations in the array.
MPI_Waitsome / MPI_Testsome  complete as many operations as possible,
                             reporting which ones completed.

4.6.1 MPI_Waitany

MPI_Waitany blocks until any one of the operations in an array of requests
has completed. Its syntax is:

int MPI_Waitany(int count, MPI_Request *array_of_requests, int *index,
                MPI_Status *status)

Parameters:
count              length of the request array.
array_of_requests  array of request handles.
index              index in the array of the request that completed.
status             status object describing the completed operation.

On return, index identifies the completed request and status describes the
completed operation. The completed request is deallocated and its handle in
the array is set to MPI_REQUEST_NULL. If more than one operation has
completed, one of them is chosen arbitrarily.

4.6.2 MPI_Testany

MPI_Testany is the non-blocking counterpart of MPI_Waitany. Its syntax is:

int MPI_Testany(int count, MPI_Request *array_of_requests, int *index,
                int *flag, MPI_Status *status)

Parameters:
count              length of the request array.
array_of_requests  array of request handles.
index              index in the array of the request that completed.
flag               true if some operation has completed.
status             status object describing the completed operation.

If some operation has completed, flag is set to true, index identifies the
completed request, status describes the completed operation, and the
request is deallocated with its handle set to MPI_REQUEST_NULL. If no
operation has completed, flag is set to false, status is undefined, and
index is set to MPI_UNDEFINED.

MPI_Testany(count, array_of_requests, &index, &flag, &status) behaves like
executing MPI_Test(&array_of_requests[i], &flag, &status) for i = 0, 1, ...,
count - 1, until some call returns flag = true or all the requests have
been tested. In the first case index is set to the corresponding i; in the
second, to MPI_UNDEFINED.

4.6.3 MPI_Waitall

MPI_Waitall blocks until all the operations in an array of requests have
completed. Its syntax is:

int MPI_Waitall(int count, MPI_Request *array_of_requests,
                MPI_Status *array_of_statuses)

Parameters:
count              length of the request array.
array_of_requests  array of request handles.
array_of_statuses  array of status objects.

On return, the i-th entry of array_of_statuses describes the i-th
operation. All the requests are deallocated and the handles in
array_of_requests are set to MPI_REQUEST_NULL.

MPI_Waitall(count, array_of_requests, array_of_statuses) behaves like
executing MPI_Wait(&array_of_requests[i], &array_of_statuses[i]) for
i = 0, 1, ..., count - 1. If one or more of the operations fail, MPI_Waitall
returns the error code MPI_ERR_IN_STATUS, and the error field of each
status records the outcome of the corresponding operation.

4.6.4 MPI_Testall

MPI_Testall sets flag to true only if all the operations in the array have
completed. Its syntax is:

int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag,
                MPI_Status *array_of_statuses)

Parameters:
count              length of the request array.
array_of_requests  array of request handles.
flag               true if all operations have completed.
array_of_statuses  array of status objects.

If flag is true, each entry of array_of_statuses describes the
corresponding completed operation, the requests are deallocated, and the
handles in array_of_requests are set to MPI_REQUEST_NULL. If flag is false,
no request is modified and the statuses are undefined.

4.6.5 MPI_Waitsome

MPI_Waitsome blocks until at least one of the operations in the array has
completed, and completes as many of them as possible. Its syntax is:

int MPI_Waitsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)

Parameters:
incount            length of array_of_requests.
array_of_requests  array of request handles.
outcount           number of operations that completed.
array_of_indices   indices in the array of the completed operations.
array_of_statuses  status objects for the completed operations.

4.6.6 MPI_Testsome

MPI_Testsome is the non-blocking counterpart of MPI_Waitsome: it completes
as many operations as possible without blocking. Like MPI_Waitsome, its
syntax is:

int MPI_Testsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices, MPI_Status *array_of_statuses)

Parameters:
incount            length of array_of_requests.
array_of_requests  array of request handles.
outcount           number of operations that completed.
array_of_indices   indices in the array of the completed operations.
array_of_statuses  status objects for the completed operations.

4.7 Blocking behaviour of MPI_Send


Whether MPI_Send blocks depends on the implementation. For short messages,
many implementations copy the message into an internal buffer and return at
once; for long messages, or when no buffer space is available, MPI_Send
does not return until the matching receive has been posted and the transfer
has started. A portable program must therefore never rely on MPI_Send
returning before the matching receive is posted: two processes that first
send to each other and then receive (a common exchange pattern) may
deadlock on some systems and run correctly on others.

4.8 Communication modes

MPI defines one receive mode and four send modes (communication modes): the
send modes are standard, synchronous, buffered and ready, and each exists
in a blocking and a non-blocking version. The communication mode determines
when a send operation is considered complete, i.e. when the send buffer may
safely be reused. The receive calls and the completion calls (MPI_Test and
MPI_Wait and their variants) are the same for all modes. MPI_Send and
MPI_Isend, described earlier, perform a standard-mode send.

The routine names are formed by inserting the letters S, B and R for the
synchronous, buffered and ready modes respectively. The full set of MPI
calls is:

Blocking:
Standard send      MPI_Send
Synchronous send   MPI_Ssend
Buffered send      MPI_Bsend
Ready send         MPI_Rsend
Receive            MPI_Recv

Non-blocking:
Standard send      MPI_Isend
Synchronous send   MPI_Issend
Buffered send      MPI_Ibsend
Ready send         MPI_Irsend
Receive            MPI_Irecv

All the send calls take the same arguments (buf, count, datatype, dest,
tag, comm, plus request for the non-blocking versions) as the corresponding
standard-mode call.

4.8.1 Standard send

In standard mode (standard send) it is up to the MPI implementation whether
the message is buffered. When a standard send is posted, one of two things
can happen:

1. The message is copied into an internal MPI buffer (MPI internal buffer)
   and the send completes immediately, independently of the receiver.
2. The message is (partly or wholly) held in the send buffer until the
   matching receive is posted, in which case the send does not complete
   until the transfer to the receiver has begun.

The choice typically depends on the message size and on the buffer space
available to the implementation, so a standard send may or may not block
until a matching receive is posted. The completion semantics are the same
for MPI_Send and MPI_Isend: when the operation completes, the send buffer
may be reused. The difference is that MPI_Send does not return until the
operation completes, while MPI_Isend returns immediately and completion is
tested separately.

Advice: because buffering in standard mode is implementation dependent, a
correct program must not assume that a standard send will complete before
the matching receive is posted.

4.8.2 Synchronous send

A synchronous send does not complete until the matching receive has been
posted and the receiver has started receiving the message. Completion of a
synchronous send therefore indicates not only that the send buffer may be
reused, but also that the receiver has reached a certain point in its
execution: the two processes synchronize, performing a handshake.

MPI_Ssend and MPI_Issend

MPI_Ssend is the blocking synchronous send and takes the same arguments as
the standard send:

int MPI_Ssend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
              MPI_Comm comm)

MPI_Issend is the non-blocking synchronous send and, like the other
non-blocking sends, adds a request argument:

int MPI_Issend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
               MPI_Comm comm, MPI_Request *request)

A synchronous send can be posted before or after the matching receive. For
the non-blocking form it is the completion test (MPI_Wait or MPI_Test) that
waits for the receive; MPI_Issend itself returns immediately. A synchronous
send is therefore generally slower than a standard send, since it may have
to wait for the receiver, but it uses no buffer space and makes the
synchronization between the two processes explicit, which can make it
easier to reason about the correctness of the program.


4.8.3 Buffered send

A buffered send completes immediately, whether or not the matching receive
has been posted: the message is copied into a buffer and delivered later.
Buffered mode thus decouples the sender from the receiver, but the user
must supply the buffer space. The buffer is registered with
MPI_Buffer_attach and must be large enough for all the messages that may be
outstanding at the same time; if it overflows, the send fails.

MPI_Ibsend

MPI_Ibsend is the non-blocking buffered send. Its syntax is:

int MPI_Ibsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
               MPI_Comm comm, MPI_Request *request)

The arguments are the same as for MPI_Issend.

MPI_Buffer_attach and MPI_Buffer_detach

MPI_Buffer_attach registers a user-supplied buffer for use by the buffered
mode. Only one buffer can be attached to a process at a time. Its syntax is:

int MPI_Buffer_attach(void *buffer, int size)

Parameters:
buffer  initial address of the buffer.
size    size of the buffer, in bytes.

The buffer is simply a region of memory (size bytes) passed as void *. The
space needed per message is the packed size of its data plus a constant
overhead of MPI_BSEND_OVERHEAD bytes.

MPI_Buffer_detach removes the buffer installed by MPI_Buffer_attach and
returns its address and size. Its syntax is:

int MPI_Buffer_detach(void *buffer, int *size)

The address and size of the detached buffer are returned in buffer and
size, so the buffer can be reused or freed. MPI_Buffer_detach is a blocking
call: it does not return until all the messages currently in the buffer
have been delivered. Note a subtlety of the C binding: although the formal
parameter is declared void *, the argument actually passed is the address
of a pointer (in effect a void **), so a type cast is needed when passing,
e.g., &buff where buff is a char *.

The following fragment shows how the buffer is attached and detached:

#define BUFFSIZE 10000
int size;
char *buff;
buff = (char*)malloc(BUFFSIZE);
MPI_Buffer_attach(buff, BUFFSIZE);
/* the 10000 bytes can now be used by MPI_Bsend calls */
MPI_Buffer_detach(&buff, &size);
/* all buffered sends have been delivered at this point */
MPI_Buffer_attach(buff, size);
/* the same buffer is attached again */

4.8.4 Ready send

A ready send may be posted only if the matching receive has already been
posted; otherwise the behaviour is undefined and the message may be lost.
Because the receive is guaranteed to be there, the implementation can skip
the handshake and transfer the data immediately, so a ready send can be
faster than a standard send. The correctness of the program, however, now
depends on the receiver having posted the receive first, which usually must
be ensured with an explicit synchronization. For this reason ready mode
should be used only when the required ordering is certain.

MPI_Rsend

MPI_Rsend is the blocking ready send. Its syntax is:

int MPI_Rsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
              MPI_Comm comm)

5. Collective communication

5.1 Introduction

Besides point-to-point communication between pairs of processes, MPI
provides collective operations in which a whole group of processes takes
part. In principle every collective operation could be written with sends
and receives, but doing so is tedious and error-prone. For example, to
broadcast a value from one process to 100 others, the source would have to
execute 100 sends, or the processes would have to be organized into a tree
with each one forwarding the value to its children. The collective routines
express such patterns in a single call, and the MPI implementation can use
the most efficient algorithm available for the underlying hardware.

5.2 Classes of collective operations

A collective operation is executed by all the processes of the group
associated with a communicator. Apart from MPI_Barrier, which only
synchronizes the processes, the collective operations of MPI fall into the
following classes:

Broadcast: one process sends the same data to all processes.
Gather: one process collects data from all processes.
Scatter: one process distributes (different) data to all processes.
Reduce: data from all processes are combined with an operation such as sum,
product, maximum, minimum, or a logical operation.

A collective call must be executed by every process of the group, with
consistent arguments; a program in which some member of the group does not
make the call is erroneous and typically hangs. In operations such as
broadcast and gather, one distinguished process is the source or
destination of the data; it is called the root.

In summary:
- one-to-all: one process sends to all the others (broadcast, scatter);
- all-to-one: all processes send to one process (gather, reduce);
- all-to-all: every process communicates with every other process
  (allgather, alltoall).

5.3 Differences from point-to-point communication

Collective communication in MPI differs from point-to-point communication
in several ways:

- A collective operation involves all the processes of the communicator's
  group, not a pair of processes.
- All collective calls are blocking in the point-to-point sense: when a
  process returns from the call, its buffers may be reused. The return of
  one process does not, however, imply that the other processes have
  completed (or even started) the operation; only MPI_Barrier guarantees
  the synchronization of all processes.
- Collective calls take no tag argument: the calls are matched purely by
  the order in which they are executed on each process of the communicator.
- The amount of data sent must exactly match the amount of data specified
  by the receiver; there is no notion of receiving "up to" a count, and no
  status argument is returned.
- There is a single communication mode, analogous to the standard mode of
  point-to-point communication.

Collective and point-to-point communication within the same communicator do
not interfere with each other.

5.4 The collective communication routines

MPI_Barrier

MPI_Barrier synchronizes all the processes of a communicator: no process
returns from the call until every process of the group has entered it. Its
syntax is:

int MPI_Barrier(MPI_Comm comm)

where comm is the communicator whose group of processes is synchronized.
Barriers are typically used to separate phases of a computation.

MPI_Bcast

MPI_Bcast broadcasts a message from the root process to all the processes
of the group. Its syntax is:

int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root,
              MPI_Comm comm)

Parameters:
buffer    address of the buffer (input on the root, output elsewhere).
count     number of elements in the buffer.
datatype  datatype of the buffer elements.
root      rank of the broadcasting process.
comm      communicator whose group takes part.

After the call, the contents of the root's buffer have been copied to the
buffer of every process in the group. The arguments root and comm, as well
as count and datatype, must be the same on all processes.

The following program illustrates MPI_Bcast.
#include "mpi.h"
#include <stdio.h>
int main(int argc, char *argv[])
{
    int rank;
    double var;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) var = 10.5;
    MPI_Bcast(&var, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    printf("process %d received %f\n", rank, var);
    MPI_Finalize();
    return 0;
}

Run with 6 processes, every process prints the value 10.5; the order of the
output lines (e.g. 0, 1, 3, 5, 2, 4) is unpredictable:

process 0 received 10.500000
process 1 received 10.500000
process 3 received 10.500000
process 5 received 10.500000
process 2 received 10.500000
process 4 received 10.500000

MPI_Gather

With MPI_Gather, every process of the group (including the root) sends the
contents of its send buffer to the root process, which receives the
messages and stores them in rank order in its receive buffer. Its syntax
is:

int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf,
               int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Parameters:
sendbuf    address of the send buffer.
sendcount  number of elements sent by each process.
sendtype   datatype of the elements in sendbuf.
recvbuf    address of the receive buffer (significant only at the root).
recvcount  number of elements received from each process.
recvtype   datatype of the elements in recvbuf.
root       rank of the receiving process.
comm       communicator whose group takes part.

MPI_Gather behaves as if each of the n processes executed
MPI_Send(sendbuf, sendcount, sendtype, root, ...) and the root executed n
calls MPI_Recv(recvbuf + i * recvcount, recvcount, recvtype, i, ...), so
the data are placed in the receive buffer in rank order. The receive
arguments (recvbuf, recvcount, recvtype) are significant only at the root;
note that recvcount is the number of elements received from each process,
not the total. On the other processes only sendbuf, sendcount, sendtype,
root and comm are significant. The values of root and comm must be the same
on all processes.

Example of MPI_Gather: the root gathers 100 ints from every process of the
group.

MPI_Comm comm;
int gsize, sendarray[100];
int root, *rbuf;
/* initialization of comm, root and sendarray */
. . .
MPI_Comm_size(comm, &gsize);
rbuf = (int *) malloc(100*gsize*sizeof(int));
MPI_Gather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);

MPI_Scatter

MPI_Scatter is the inverse of MPI_Gather: the root process splits the
contents of its send buffer into equal parts and sends the i-th part to the
process with rank i. Its syntax is:

int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype, void *recvbuf,
                int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)

Parameters:
sendbuf    address of the send buffer (significant only at the root).
sendcount  number of elements sent to each process.
sendtype   datatype of the elements in sendbuf.
recvbuf    address of the receive buffer.
recvcount  number of elements received by each process.
recvtype   datatype of the elements in recvbuf.
root       rank of the sending process.
comm       communicator whose group takes part.

MPI_Scatter behaves as if the root executed n calls
MPI_Send(sendbuf + i*sendcount, sendcount, sendtype, i, ...), for i = 0 to
n-1, and each process executed
MPI_Recv(recvbuf, recvcount, recvtype, root, ...). The send arguments
(sendbuf, sendcount, sendtype) are significant only at the root; sendcount
is the number of elements sent to each process, not the total. On the other
processes only recvbuf, recvcount, recvtype, root and comm are significant.

Example of MPI_Scatter: the root scatters 100 ints to every process of the
group.

MPI_Comm comm;
int gsize, *sendbuf;
int root, rbuf[100];
/* initialization of comm, root and sendbuf */
. . .
MPI_Comm_size(comm, &gsize);
sendbuf = (int *) malloc(100*gsize*sizeof(int));
MPI_Scatter(sendbuf, 100, MPI_INT, rbuf, 100, MPI_INT, root, comm);

MPI_Allgather

MPI_Allgather is like MPI_Gather, except that the result is received by all
processes, not only by the root: every process ends up with the
concatenation, in rank order, of the send buffers of all processes. Its
syntax is:

int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                  void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

Parameters:
sendbuf    address of the send buffer.
sendcount  number of elements sent by each process.
sendtype   datatype of the elements in sendbuf.
recvbuf    address of the receive buffer.
recvcount  number of elements received from each process.
recvtype   datatype of the elements in recvbuf.
comm       communicator whose group takes part.

The sendcount and sendtype of each process must match the recvcount and
recvtype of every other process. A call to MPI_Allgather on a group of n
processes is equivalent to n calls
MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm)
with root = 0, ..., n-1. Consequently the arguments have the same meaning
as in MPI_Gather, except that there is no root argument.

Example of MPI_Allgather: every process gathers 100 ints from every process
of the group.

MPI_Comm comm;
int gsize, sendarray[100];
int *rbuf;
/* initialization of comm and sendarray */
. . .
MPI_Comm_size(comm, &gsize);
rbuf = (int *) malloc(100*gsize*sizeof(int));
MPI_Allgather(sendarray, 100, MPI_INT, rbuf, 100, MPI_INT, comm);

After the call, every process has the gathered data in rbuf.
MPI_Alltoall

In MPI_Alltoall every process sends distinct data to every other process:
process i sends the j-th block of its send buffer to process j, which
stores it in the i-th block of its receive buffer. The syntax is similar to
that of MPI_Allgather:

int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)

Parameters:
sendbuf    address of the send buffer.
sendcount  number of elements sent to each process.
sendtype   datatype of the elements in sendbuf.
recvbuf    address of the receive buffer.
recvcount  number of elements received from each process.
recvtype   datatype of the elements in recvbuf.
comm       communicator whose group takes part.

The call behaves as if every process executed
MPI_Send(sendbuf + i * sendcount, sendcount, sendtype, i, ...) and
MPI_Recv(recvbuf + i * recvcount, recvcount, recvtype, i, ...) for
i = 0, ..., n-1. As with MPI_Allgather, there is no root argument.

5.5 A complete example

The following program builds a 10x10 table on process 0 and distributes one
row of 10 elements to each process with MPI_Scatter. Each process then
checks that it received the correct row.

#include "mpi.h"
#include <stdio.h>
#define MAX_PROCESSES 10

void Test_Waitforall( )
{
    int m, one, myrank, n;

    MPI_Comm_rank( MPI_COMM_WORLD, &myrank );
    MPI_Comm_size( MPI_COMM_WORLD, &n );
    one = 1;
    MPI_Allreduce( &one, &m, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD );
    if (m != n) {
        printf( "[%d] Expected %d processes to wait at end, got %d\n", myrank,
                n, m );
    }
    if (myrank == 0)
        printf( "All processes completed test\n" );
}

int main( int argc, char **argv )
{
    int rank, size, i, j;
    int table[MAX_PROCESSES][MAX_PROCESSES];
    int row[MAX_PROCESSES];
    int errors = 0;
    int participants;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    MPI_Comm_size( MPI_COMM_WORLD, &size );
    /* A maximum of MAX_PROCESSES processes can participate */
    if ( size > MAX_PROCESSES ) participants = MAX_PROCESSES;
    else                        participants = size;
    if ( rank < participants ) {
        int send_count = MAX_PROCESSES;
        int recv_count = MAX_PROCESSES;
        /* If I'm the root (process 0), then fill out the big table */
        if (rank == 0)
            for ( i=0; i<participants; i++)
                for ( j=0; j<MAX_PROCESSES; j++ )
                    table[i][j] = i+j;
        /* Scatter the big table to everybody's little table */
        MPI_Scatter(&table[0][0], send_count, MPI_INT,
                    &row[0], recv_count, MPI_INT, 0, MPI_COMM_WORLD);
        /* Now see if our row looks right */
        for (i=0; i<MAX_PROCESSES; i++)
            if ( row[i] != i+rank ) errors++;
    }
    Test_Waitforall( );
    MPI_Finalize();
    if (errors)
        printf( "[%d] done with ERRORS(%d)!\n", rank, errors );
    return errors;
}

If more than 10 processes are started, the extra ones take no part in the
scatter.


6. Virtual topologies

6.1 Introduction

A virtual topology attaches to the processes of a communicator a logical
arrangement that reflects the communication pattern of the application. In
many parallel programs the processes naturally form a grid or some other
structure, and each process communicates mainly with its neighbours in that
structure; a virtual topology lets the program address processes by their
position (e.g. by row and column) instead of by rank. The topology is
attached to a communicator, and knowing the communication pattern gives the
implementation the opportunity to map the processes onto the physical
hardware in a way that suits it.

MPI supports two kinds of virtual topology: cartesian virtual topologies
(regular grids of any number of dimensions, optionally periodic in each
dimension) and graph virtual topologies (arbitrary communication graphs).
Cartesian topologies cover the common cases of rings, meshes and tori;
graph topologies cover everything else.


6.2 Cartesian topologies

MPI provides the following routines for cartesian topologies:

MPI_Cart_create
MPI_Cart_rank
MPI_Cart_coords
MPI_Cart_sub
MPI_Cartdim_get
MPI_Cart_get

MPI_Cart_create

int MPI_Cart_create (MPI_Comm old_comm, int ndims, int *dims, int *periods,
                     int reorder, MPI_Comm *cart_comm)

MPI_Cart_create takes the group of processes of the communicator old_comm
and returns a new communicator cart_comm that carries a cartesian topology.
ndims is the number of dimensions of the grid; the array dims gives the
number of processes in each dimension; the array periods says, for each
dimension, whether it is periodic. If periods[i] is TRUE the dimension
wraps around, forming a ring; otherwise it is FALSE. The argument reorder
is TRUE or FALSE. With reorder FALSE, each process keeps in the new
communicator (cart_comm) the rank it had in the old one (old_comm); with
reorder TRUE the implementation is allowed to renumber the processes, which
may let it place them on the hardware in a way better suited to the
topology.

MPI_Cart_create is a collective operation: it must be called by all the
processes of the communicator. If the grid requires fewer processes than
the group contains, the surplus processes receive MPI_COMM_NULL.


MPI_Topo_test

MPI_Topo_test returns the type of topology attached to a communicator
(cartesian, graph, or none). It can be used to check whether a communicator
carries a topology before the topology inquiry functions are called.

MPI_Cart_rank and MPI_Cart_coords

MPI_Cart_rank translates the cartesian coordinates of a process into its
rank in the cartesian communicator:

int MPI_Cart_rank(MPI_Comm cart_comm, int *coords, int *rank)

MPI_Cart_coords performs the opposite translation: given a rank in the
cartesian communicator, it returns the coordinates of the process.

int MPI_Cart_coords(MPI_Comm cart_comm, int rank, int maxdims, int *coords)

maxdims is the length of the coords array in the calling program.

MPI_Cart_sub

MPI_Cart_sub partitions a cartesian grid into lower-dimensional subgrids,
each with its own cartesian communicator. It is useful, for example, for
performing collective operations along the rows or the columns of a grid.
The syntax of MPI_Cart_sub is:

int MPI_Cart_sub (MPI_Comm cart_comm, int *belongs, MPI_Comm *new_cart_comm)

The array belongs has ndims entries and says which dimensions of cart_comm
are kept in the subgrid communicator new_cart_comm. For example, if
cart_comm describes a 4x2x5 grid and belongs = (TRUE, FALSE, TRUE), then
MPI_Cart_sub creates two communicators of 20 processes each, arranged as
4x5 grids.


1. Obtaining and installing MPI

Several MPI implementations are freely available by anonymous ftp. The main
ftp sites are:

ftp://info.mcs.anl.gov/pub/mpi
    The MPICH implementation from Argonne National Laboratory.
ftp.epcc.ed.ac.uk/pub/chimp/release/chimp.tar.Z
    The CHIMP implementation from the Edinburgh Parallel Computing Centre.
ftp://tbag.osc.edu/pub/lam
    The LAM implementation from the Ohio Supercomputing Center.
ftp://ftp.erc.msstate.edu/unify
    The MPI-on-PVM implementation from Mississippi State University.

The following steps install mpich version 1.1.2 from Argonne National
Laboratory:

1. Fetch mpich.tar.gz (or mpich.tar.Z) by anonymous ftp. The .gz file is
   uncompressed with gunzip, the .Z file with uncompress.

2. Unpack the distribution, e.g.

       gunzip -c mpich.tar.gz | tar xovf -

   or

       zcat mpich.tar.Z | tar xovf -

   in a directory (e.g. under /usr) on a partition with enough free space.

3. Configure mpich for the local system by running the configure script.

The configure script examines the system and determines how MPI should be
built. It accepts options that select, among other things, the compiler,
the installation directory and the communication device. configure checks,
for example:

- which C compiler is available and whether it accepts ANSI C, which MPI
  requires (in particular the const qualifier);
- which header files are present on the system;
- the sizes of the basic types int, char, long and so on.

4. Build mpich by running make. The make utility of UNIX rebuilds the parts
   of a program that have changed, following the dependencies recorded in a
   Makefile; here the Makefiles are generated by configure. Running make in
   the mpich directory compiles the whole MPI library and its tools.

5. List the machines on which MPI programs are to run in the file
   mpich/util/machines/machines.xxx, where xxx identifies the architecture
   (e.g. LINUX). The file contains one machine name per line; by default it
   is created with five copies of the local machine name. See the README in
   the machines directory for the details of the format.

6. On a network of PCs running LINUX (and on most UNIX networks) each user
   must also have a .rhosts file in the HOME directory listing the machines
   of the machines.xxx file, so that remote shells can be started on them
   without a password.
1.1 The tstmachines script

The tstmachines script, found in mpich/util, checks that the machines
listed in machines.xxx can be used to run MPI programs. For every machine
in the file it verifies that a remote shell can be started, that files are
visible, and that a user program can be executed. With the -v option
tstmachines reports each test as it performs it. On a LINUX system the
output of tstmachines -v looks like this:

Trying true on Manolis.Athens.Greece...
Trying true on Manolis.Athens.Greece...
Trying true on Manolis.Athens.Greece...
Trying true on Manolis.Athens.Greece...
Trying true on Manolis.Athens.Greece...
Trying ls on Manolis.Athens.Greece...
Trying ls on Manolis.Athens.Greece...
Trying ls on Manolis.Athens.Greece...
Trying ls on Manolis.Athens.Greece...
Trying ls on Manolis.Athens.Greece...
Trying user program on Manolis.Athens.Greece...
Trying user program on Manolis.Athens.Greece...
Trying user program on Manolis.Athens.Greece...
Trying user program on Manolis.Athens.Greece...
Trying user program on Manolis.Athens.Greece...

Here machines.LINUX contains the same machine five times, so each test is
repeated five times. The script uses rsh to start a remote shell on every
machine of machines.xxx, and the tests assume that the user's files are
visible on all machines (e.g. via NFS).

2. Compiling MPI programs

MPI programs in C are compiled with the mpicc script, which invokes the
usual cc compiler of UNIX with the options needed to find the MPI headers
and libraries. mpicc accepts the same arguments as cc:

mpicc [args] filename

For example, a C source file example.c is compiled with:

mpicc -c example.c

which produces the object file example.o. The executable example is then
linked with:

mpicc -o example example.o

In general, mpicc -c compiles and mpicc -o links object files, e.g.
mpicc -o ExeFile file1.o file2.o file3.o. For C++ programs the
corresponding script is mpiCC, used exactly like mpicc.

3. Running MPI programs

MPI programs are started with the mpirun script, which hides the
differences between systems in how MPI programs are launched. Its basic
form is:

mpirun -np <number of processes> <program>

For example, to run example on 10 processes:

mpirun -np 10 example

mpirun chooses the machines from mpich/util/machines/machines.xxx. Command
line arguments that mpirun does not recognize are passed on to the program
and are available after the call to MPI_Init in main. The directory
mpich/bin, which contains mpicc, mpiCC and mpirun, should be added to the
path. mpirun can start MPI programs written in C, C++ and Fortran.

All the processes started by mpirun execute the same program; they are
distinguished by their rank. After MPI_Init, the communicator
MPI_COMM_WORLD contains all the processes of the MPI program.

4. MPI function prototypes

This appendix lists the C prototypes of the MPI functions, grouped by
category.

4.1 Environment management
int MPI_Abort(MPI_Comm comm, int errorcode)
int MPI_Errhandler_create(MPI_Handler_function *function,
MPI_Errhandler *errhandler)
int MPI_Errhandler_free(MPI_Errhandler *errhandler)
int MPI_Errhandler_get(MPI_Comm comm, MPI_Errhandler *errhandler)
int MPI_Errhandler_set(MPI_Comm comm, MPI_Errhandler errhandler)
int MPI_Error_class(int errorcode, int *errorclass)
int MPI_Error_string(int errorcode, char* string, int* resultlen)
int MPI_Finalize(void)
int MPI_Get_processor_name(char *name, int *resultlen)
int MPI_Init(int *argc, char **argv)
int MPI_Initialized(int *flag)
double MPI_Wtick(void)
double MPI_Wtime(void)

4.2 Point-to-Point
int MPI_Bsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
int MPI_Bsend_init(void *buf, int count, MPI_Datatype datatype, int dest,
int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Buffer_attach(void *buffer, int size)
int MPI_Buffer_detach(void *buffer, int* size)
int MPI_Cancel(MPI_Request *request)
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype, int *count)
int MPI_Get_elements(MPI_Status *status, MPI_Datatype datatype, int *count)
int MPI_Ibsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm, MPI_Request *request)
int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag, MPI_Status *status)
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype, int source,
              int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Isend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm, MPI_Request *request)
int MPI_Issend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm, MPI_Request *request)
int MPI_Irsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm, MPI_Request *request)
int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
int tag, MPI_Comm comm, MPI_Status *status)
int MPI_Recv_init(void *buf, int count, MPI_Datatype datatype, int source,
int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Request_free(MPI_Request *request)
int MPI_Rsend(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm
comm)
int MPI_Rsend_init(void *buf, int count, MPI_Datatype datatype, int dest,
int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
int MPI_Send_init(void *buf, int count, MPI_Datatype datatype, int dest,
int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Sendrecv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
                 int dest, int sendtag, void *recvbuf, int recvcount,
                 MPI_Datatype recvtype, int source, int recvtag,
                 MPI_Comm comm, MPI_Status *status)
int MPI_Sendrecv_replace(void *buf, int count, MPI_Datatype datatype,
int dest, int sendtag, int source, int recvtag,
MPI_Comm comm, MPI_Status* status)
int MPI_Ssend(void *buf, int count, MPI_Datatype datatype, int dest, int tag,
MPI_Comm comm)
int MPI_Ssend_init(void *buf, int count, MPI_Datatype datatype, int dest,
int tag, MPI_Comm comm, MPI_Request *request)
int MPI_Start(MPI_Request *request)
int MPI_Startall(int count, MPI_Request *array_of_requests)
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
int MPI_Test_cancelled(MPI_Status *status, int *flag)
int MPI_Testall(int count, MPI_Request *array_of_requests, int *flag,
MPI_Status *array_of_statuses)
int MPI_Testany(int count, MPI_Request *array_of_requests, int *index,
int *flag, MPI_Status *status)
int MPI_Testsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices,
                 MPI_Status *array_of_statuses)
int MPI_Wait(MPI_Request *request, MPI_Status *status)

int MPI_Waitall(int count, MPI_Request *array_of_requests,
                MPI_Status *array_of_statuses)
int MPI_Waitany(int count, MPI_Request *array_of_requests,
int *index, MPI_Status *status)
int MPI_Waitsome(int incount, MPI_Request *array_of_requests,
int *outcount, int *array_of_indices,
MPI_Status *array_of_statuses)

4.3 Collective communication
int MPI_Allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype,
MPI_Comm comm)
int MPI_Allgatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int *recvcounts, int *displs,
MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
int MPI_Alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype,
MPI_Comm comm)
int MPI_Alltoallv(void *sendbuf, int *sendcounts, int *sdispls,
MPI_Datatype sendtype, void* recvbuf, int *recvcounts,
int *rdispls, MPI_Datatype recvtype, MPI_Comm comm)
int MPI_Barrier(MPI_Comm comm)
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype, int root,
MPI_Comm comm)
int MPI_Gather(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype,
int root, MPI_Comm comm)
int MPI_Gatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int *recvcounts, int *displs,
MPI_Datatype recvtype, int root, MPI_Comm comm)
int MPI_Op_create(MPI_User_function *function, int commute, MPI_Op *op)
int MPI_Op_free(MPI_Op *op)
int MPI_Reduce(void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
int MPI_Reduce_scatter(void *sendbuf, void *recvbuf, int *recvcounts,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
int MPI_Scan(void *sendbuf, void *recvbuf, int count,
MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
int MPI_Scatter(void *sendbuf, int sendcount, MPI_Datatype sendtype,
void *recvbuf, int recvcount, MPI_Datatype recvtype,

                int root, MPI_Comm comm)
int MPI_Scatterv(void *sendbuf, int *sendcounts, int *displs,
                 MPI_Datatype sendtype, void *recvbuf, int recvcount,
                 MPI_Datatype recvtype, int root, MPI_Comm comm)

4.4 Groups
int MPI_Group_compare(MPI_Group group1, MPI_Group group2, int *result)
int MPI_Group_difference(MPI_Group group1, MPI_Group group2,
MPI_Group *newgroup)
int MPI_Group_excl(MPI_Group group, int n, int *ranks,
MPI_Group *newgroup)
int MPI_Group_free(MPI_Group *group)
int MPI_Group_incl(MPI_Group group, int n, int *ranks,
MPI_Group *newgroup)
int MPI_Group_intersection(MPI_Group group1, MPI_Group group2,
MPI_Group *newgroup)
int MPI_Group_range_excl(MPI_Group group, int n, int ranges[ ][3],
MPI_Group *newgroup)
int MPI_Group_range_incl(MPI_Group group, int n, int ranges[ ][3],
MPI_Group *newgroup)
int MPI_Group_rank(MPI_Group group, int* rank)
int MPI_Group_size(MPI_Group group, int *size)
int MPI_Group_translate_ranks(MPI_Group group1, int n, int *ranks1,
MPI_Group group2, int *ranks2)
int MPI_Group_union(MPI_Group group1, MPI_Group group2,
MPI_Group* newgroup)

4.5 Communicators
int MPI_Comm_compare(MPI_Comm comm1, MPI_Comm comm2, int *result)
int MPI_Comm_create(MPI_Comm comm, MPI_Group group,
MPI_Comm *newcomm)
int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *newcomm)
int MPI_Comm_free(MPI_Comm *comm)
int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)
int MPI_Comm_rank(MPI_Comm comm, int *rank)
int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)
int MPI_Comm_remote_size(MPI_Comm comm, int *size)
int MPI_Comm_size(MPI_Comm comm, int *size)
int MPI_Comm_split(MPI_Comm comm, int color, int key,
MPI_Comm *newcomm)
int MPI_Comm_test_inter(MPI_Comm comm, int *flag)

int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
                         MPI_Comm bridge_comm, int remote_leader,
                         int tag, MPI_Comm *newintercomm)
int MPI_Intercomm_merge(MPI_Comm intercomm, int high,
MPI_Comm *newintracomm)

4.6 Derived datatypes
int MPI_Type_commit(MPI_Datatype *datatype)
int MPI_Type_contiguous(int count, MPI_Datatype oldtype,
MPI_Datatype *newtype)
int MPI_Type_extent(MPI_Datatype datatype, MPI_Aint *extent)
int MPI_Type_free(MPI_Datatype *datatype)
int MPI_Type_hindexed(int count, int *array_of_blocklengths,
MPI_Aint *array_of_displacements,
MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_hvector(int count, int blocklength, MPI_Aint stride,
MPI_Datatype oldtype, MPI_Datatype *newtype)
int MPI_Type_indexed(int count, int *array_of_blocklengths,
int *array_of_displacements, MPI_Datatype oldtype,
MPI_Datatype* newtype)
int MPI_Type_lb(MPI_Datatype datatype, MPI_Aint *displacement)
int MPI_Type_size(MPI_Datatype datatype, int *size)
int MPI_Type_struct(int count, int *array_of_blocklengths,
MPI_Aint *array_of_displacements,
MPI_Datatype *array_of_types, MPI_Datatype *newtype)
int MPI_Type_ub(MPI_Datatype datatype, MPI_Aint *displacement)
int MPI_Type_vector(int count, int blocklength, int stride,
                    MPI_Datatype oldtype, MPI_Datatype *newtype)

4.7 Virtual topologies
int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims, int *coords)
int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims,
int *periods, int reorder, MPI_Comm *comm_cart)
int MPI_Cart_get(MPI_Comm comm, int maxdims, int *dims, int *periods,
int *coords)
int MPI_Cart_map(MPI_Comm comm, int ndims, int *dims, int *periods,
int *newrank)
int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank)
int MPI_Cart_shift(MPI_Comm comm, int direction, int disp,
int *rank_source, int *rank_dest)
int MPI_Cart_sub(MPI_Comm comm, int *remain_dims, MPI_Comm *newcomm)

int MPI_Cartdim_get(MPI_Comm comm, int *ndims)
int MPI_Dims_create(int nnodes, int ndims, int *dims)
int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int *index,
int *edges, int reorder, MPI_Comm *comm_graph)
int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges,
int *index, int *edges)
int MPI_Graph_map(MPI_Comm comm, int nnodes, int *index, int *edges,
int *newrank)
int MPI_Graph_neighbors(MPI_Comm comm, int rank, int maxneighbors,
int *neighbors)
int MPI_Graph_neighbors_count(MPI_Comm comm, int rank, int *nneighbors)
int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges)
int MPI_Topo_test(MPI_Comm comm, int *status)

4.8 Miscellaneous
int MPI_Address(void *location, MPI_Aint *address)
int MPI_Attr_delete(MPI_Comm comm, int keyval)
int MPI_Attr_get(MPI_Comm comm, int keyval, void *attribute_val, int *flag)
int MPI_Attr_put(MPI_Comm comm, int keyval, void *attribute_val)
int MPI_Keyval_create(MPI_Copy_function *copy_fn,
MPI_Delete_function *delete_fn, int *keyval, void *extra_state)
int MPI_Keyval_free(int *keyval)
int MPI_Pack(void *inbuf, int incount, MPI_Datatype datatype, void *outbuf,
int outsize, int *position, MPI_Comm comm)
int MPI_Pack_size(int incount, MPI_Datatype datatype, MPI_Comm comm, int *size)
int MPI_Pcontrol(const int level, ...)
int MPI_Unpack(void *inbuf, int insize, int *position, void *outbuf,
int outcount, MPI_Datatype datatype, MPI_Comm comm)

5. MPI defined constants

MPI defines a set of named constants for C and Fortran. The C constants,
declared in the header file mpi.h, are listed below, grouped by category;
the Fortran names are analogous.
5.1 General constants
MPI_CART
MPI_COMM_NULL


MPI_CONGRUENT
MPI_DATATYPE_NULL
MPI_ERRHANDLER_NULL
MPI_ERROR
MPI_GRAPH
MPI_IDENT
MPI_OP_NULL
MPI_PENDING
MPI_PROC_NULL
MPI_REQUEST_NULL
MPI_SIMILAR
MPI_SUCCESS
MPI_UNDEFINED
MPI_UNEQUAL

5.2 Error classes
MPI_ERR_ARG
MPI_ERR_BUFFER
MPI_ERR_COMM
MPI_ERR_COUNT
MPI_ERR_DIMS
MPI_ERR_GROUP
MPI_ERR_IN_STATUS
MPI_ERR_INTERN
MPI_ERR_LASTCODE
MPI_ERR_OP
MPI_ERR_OTHER
MPI_ERR_PENDING
MPI_ERR_RANK
MPI_ERR_REQUEST
MPI_ERR_ROOT
MPI_ERR_TAG
MPI_ERR_TOPOLOGY
MPI_ERR_TRUNCATE
MPI_ERR_TYPE
MPI_ERR_UNKNOWN

5.3 Wildcards
MPI_ANY_SOURCE
MPI_ANY_TAG


5.4 Datatypes
MPI_BYTE
MPI_CHAR
MPI_CHARACTER
MPI_COMPLEX
MPI_DOUBLE
MPI_DOUBLE_COMPLEX
MPI_DOUBLE_INT
MPI_DOUBLE_PRECISION
MPI_FLOAT
MPI_FLOAT_INT
MPI_INT
MPI_INTEGER
MPI_INTEGER1
MPI_INTEGER2
MPI_INTEGER4
MPI_LOGICAL
MPI_LONG
MPI_LONG_DOUBLE
MPI_LONG_DOUBLE_INT
MPI_LONG_INT
MPI_LONG_LONG
MPI_PACKED
MPI_REAL
MPI_REAL2
MPI_REAL4
MPI_REAL8
MPI_SHORT
MPI_SHORT_INT
MPI_UNSIGNED
MPI_UNSIGNED_CHAR
MPI_UNSIGNED_LONG
MPI_UNSIGNED_SHORT

5.5 Reduction operations and other constants
MPI_2DOUBLE_PRECISION
MPI_2INT
MPI_2INTEGER
MPI_2REAL
MPI_BAND
MPI_BOR


MPI_BOTTOM
MPI_BSEND_OVERHEAD
MPI_BXOR
MPI_COMM_SELF
MPI_COMM_WORLD
MPI_GROUP_EMPTY
MPI_ERRORS_ARE_FATAL
MPI_ERRORS_RETURN
MPI_GROUP_NULL
MPI_HOST
MPI_IO
MPI_KEYVAL_INVALID
MPI_LAND
MPI_LB
MPI_LOR
MPI_LXOR
MPI_MAX
MPI_MAX_ERROR_STRING
MPI_MAX_PROCESSOR_NAME
MPI_MAXLOC
MPI_MIN
MPI_MINLOC
MPI_PROD
MPI_SOURCE
MPI_STATUS_SIZE
MPI_SUM
MPI_TAG
MPI_TAG_UB
MPI_UB
MPI_WTIME_IS_GLOBAL

