Outline
SPMD: A dominant paradigm for
writing data parallel applications
main(int argc, char **argv)
{
if(process is assigned Master role)
{
/* Assign work and coordinate workers and collect results */
MasterRoutine(/*arguments*/);
}
else /* it is worker process */
{
/* interact with master and other workers. Do the work and send
results to the master*/
WorkerRoutine(/*arguments*/);
}
}
Messages
Messages are packets of data moving between sub-programs.
The message passing system has to be told the following
information:
Sending processor
Source location
Data type
Data length
Receiving processor(s)
Destination location
Destination size
Messages
Access:
Each sub-program needs to be connected to a message passing
system
Addressing:
Messages need to have addresses to be sent to
Reception:
It is important that the receiving process is capable of dealing
with the messages it is sent
A message passing system is similar to:
post office, phone line, fax, e-mail, etc.
Message Types:
Point-to-Point, Collective, Synchronous (telephone)/Asynchronous
(Postal)
Message Passing Systems and MPI
- www.mpi-forum.org
Initially each manufacturer developed their own message
passing interface
Wide range of features, often incompatible
MPI Forum brought together several vendors and users of HPC
systems from the US and Europe to overcome the above limitations.
Produced a document defining a standard, called
Message Passing Interface (MPI), derived from the
experience and common features/issues addressed by many
message-passing libraries. It aimed:
to provide source-code portability
to allow efficient implementation
to provide a high level of functionality
to support heterogeneous parallel architectures
to support parallel I/O (in MPI 2.0)
MPI 1.0 contains over 115 routines/functions that can be
grouped into 8 categories.
General MPI Program Structure
MPI programs
MPI helloworld.c
#include <mpi.h>
#include <stdio.h>
main(int argc, char **argv)
{
int numtasks, rank;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
printf("Hello World from process %d of %d\n", rank, numtasks);
MPI_Finalize();
}
How Manjra cluster looks
Compile:
manjra> mpicc helloworld.c -o helloworld
Run:
manjra> mpirun -np 3 helloworld [hosts picked from configuration]
manjra> mpirun -np 3 -machinefile machines.list helloworld
The file machines.list contains nodes list:
manjra.cs.mu.oz.au
node1
node2
..
node6
node13
Some nodes may be unavailable if they have failed.
Sample Run and Output
Run by default:
manjra> helloworld
Hello World from process 0 of 1
Sample Run and Output
Run with PBS:
manjra> qsub jobscript
where jobscript is the PBS script below.
PBS Script
#!/bin/bash
#PBS -l nodes=2
cd $PBS_O_WORKDIR
mpirun -np 2 helloworld
Output - Result/Error
Output
hello.bat.oXXXXX
Error, if any
hello.bat.eXXXXX
Where XXXXX is the ID assigned to your
job by PBS
#include <mpi.h>
#include <stdio.h>
main(int argc, char **argv)
{
int numtasks, rank;
int resultlen;
static char mpi_hostname[MPI_MAX_PROCESSOR_NAME];
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Get_processor_name(mpi_hostname, &resultlen);
printf("Hello World from process %d of %d on %s\n", rank, numtasks, mpi_hostname);
MPI_Finalize();
}
MPI Routines
MPI Routines - C and Fortran
Environment Management
MPI_Init
MPI_Comm_size
MPI_Comm_rank
MPI_Get_processor_name
MPI_Wtime
MPI_Finalize
Environment Management Routines
Point-to-Point Communication
Synchronous Sends
provide information about the completion of the
message
e.g. fax machines
Asynchronous Sends
Only know when the message has left
e.g. post cards
Blocking operations
only return from the call when operation has completed
Non-blocking operations
return straight away - can test/wait later for
completion
Collective Communications
MPI Send/Receive a Character

#include <mpi.h>
#include <stdio.h>
main(int argc, char **argv)
{
int numtasks, rank, dest, source, rc, tag = 1;
char inmsg, outmsg;
MPI_Status Stat;
MPI_Init(&argc, &argv);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
if (rank == 0) {
outmsg = 'X';
dest = 1;
rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
printf("Rank0 sent: %c\n", outmsg);
source = 1;
rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
printf("Rank0 recv: %c\n", inmsg);
}
else if (rank == 1) {
source = 0;
rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
printf("Rank1 received: %c\n", inmsg);
outmsg = 'Y';
dest = 0;
rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}
MPI_Finalize();
}
Execution Demo
mpicc mpi_com.c
[raj@manjra mpi]$ mpirun -np 2 a.out
Rank0 sent: X
Rank0 recv: Y
Rank1 received: X
Non Blocking Message Passing
Exercise: Ping Pong
strcpy(pingmsg, "ping");
strcpy(pongmsg, "pong");
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_C½MM_W½RLD, &numtasks);
MPI_Comm_rank(MPI_C½MM_W½RLD, &rank);
/* Exercise: repeatedly exchange pingmsg and pongmsg between rank 0 and rank 1 */
MPI_Finalize();
}
Timers