Fundamentals of Parallel Programming with MPI in C/C++
Đặng Nguyễn Phương
dnphuong1984@gmail.com
Contents

1 Introduction
2 MPI
  2.1 Overview
  2.2 Installing MPICH2
  2.3 Compiling and running programs with MPICH2
4 MPI routines
  4.1 MPI environment management routines
  4.2 Data types
  4.3 Message passing mechanisms
  4.4 Blocking message passing routines
  4.5 Non-blocking message passing routines
  4.6 Collective communication routines
5 Some examples
  5.1 Example: computing π
  5.2 Example: matrix multiplication
1 Introduction
Nowadays, most computational programs are designed to run on a single core, i.e. as serial computations. To run a program efficiently on a computer cluster or on a multi-core CPU, we need to parallelize it. The main advantage of parallel computation is the ability to handle many tasks at the same time. Parallel programming can be done by calling library functions (e.g. mpi.h) or by using features built into data-parallel compilers, such as OpenMP in Fortran F90/F95 compilers.

Parallel programming involves designing and writing parallel programs so that they can run on parallel computing systems; in other words, parallelizing serial programs in order to solve a larger problem, to reduce the execution time, or both. It focuses on decomposing the overall problem into smaller subtasks, assigning those subtasks to individual processors, and synchronizing them to obtain the final result. The most important principle here is concurrency: handling many tasks or processes at the same time. Therefore, before parallelizing a program we need to know whether the problem can be parallelized at all (based on either its data or its functionality). There are two main approaches to parallel programming:
- Implicit parallelism: the compiler or some other tool automatically divides the work among the processors.
- Explicit parallelism: the programmer has to partition the program so that it can be executed in parallel.
In addition, the programmer also has to take the load balancing of the system into account: the processors should perform roughly the same amount of work, and if one processor is overloaded, some of its work should be moved to a processor with a lighter load.
A parallel programming model is a collection of software techniques for expressing parallel algorithms and applying them on a parallel system. Such a model covers the applications, languages, compilers, libraries, communication system and parallel I/O. In practice, there is no parallel computer, and no way of distributing work among processors, that works efficiently for every problem. The programmer therefore has to choose either one particular model or a combination of models to develop parallel applications on a specific system.
There are many parallel programming models today, such as multi-threading, message passing, data parallel, hybrid, and so on. They are classified according to two criteria: process interaction and problem decomposition. By the first criterion there are two main kinds of model, shared memory and message passing. By the second criterion there are also two kinds, task parallelism and data parallelism.
- In the shared memory model, all processes access common data through a shared memory region.
- In the message passing model, every process has its own local memory, and processes exchange data with each other through send and receive operations.
- Task parallelism distributes different tasks to different compute nodes; the data used by those tasks may be exactly the same.
- Data parallelism distributes the data to different compute nodes to be processed simultaneously; the tasks running on the compute nodes may be exactly the same.
The message passing model is one of the most widely used models in parallel computing today. It is usually applied to distributed systems. Its main characteristics are:
- Threads use their own local memory during the computation.
- Several threads may share the same physical resource.
- Threads exchange data by sending and receiving messages.
- Data transfer usually requires cooperating operations carried out by each thread; for example, a send operation in one thread must be matched by a receive operation in another thread.
This document is intended to provide the basic knowledge needed to start writing a parallel program in C/C++ that uses the message passing approach with MPI-standard libraries. The aim is to run C/C++ programs on multi-core machines or computer clusters in order to improve computational performance. Throughout this document, the MPICH2 library is used to compile C/C++ programs on Linux.
2 MPI
2.1 Overview
The message passing model is one of the oldest and most widely used models in parallel programming. The two most popular toolkits for parallel programming with this model are PVM (Parallel Virtual Machine) and MPI (Message Passing Interface). They provide the routines used to exchange information between the computational processes of a parallel system.
MPI (Message Passing Interface) is a standard describing the features and syntax of a parallel programming library. It was released in 1994 by the Message Passing Interface Forum (MPIF) and was later upgraded to the MPI-2 standard. Many libraries are based on this standard, for example MPICH, OpenMPI and LAM/MPI.
MPICH2 is a free library implementing the MPI standard for message-passing parallel programming. It is designed for several programming languages (C/C++, Fortran, Python, ...) and can be used on many operating systems (Windows, Linux, macOS, ...).
2.2 Installing MPICH2
The MPICH2 package can be installed on every machine with the following command:
$ sudo apt-get install mpich2
After MPICH2 has been installed successfully, it has to be configured before running parallel jobs. If the installed version is 1.2.x or earlier, the default process manager is MPD; from version 1.3.x onwards the process manager is Hydra. The two process managers are configured as follows.
Hydra is configured in a similar way to MPD but more simply: we only need to create a single file named hosts in the directory /home/phuong that contains the names of all the machines (nodes) in the system:
master
node1
node2
node3
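With this file in place, a parallel job can be started across those machines by pointing mpiexec at it (a usage sketch; the -f option names the host file and 4 is an arbitrary process count chosen for illustration):

$ mpiexec -f /home/phuong/hosts -n 4 <program>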
For example, to compile an application written in C/C++ we can type the following command:
mpicc -o helloworld helloworld.c
Here helloworld.c is the file containing the program's source code, and the -o option lets us choose the name of the executable produced by the compilation, in this case helloworld.
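As a concrete starting point, a minimal helloworld.c might look as follows (a sketch built from the environment management routines described in section 4.1; it is not the author's original listing):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);                 /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* index of this process     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello world from process %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down MPI */
    return 0;
}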
Execution.  If the MPICH2 version in use relies on the MPD process manager, MPD has to be started before executing the program, either with the mpdboot command mentioned above or with
mpd &
A program compiled with MPI can then be executed with the command
mpirun -np <N> <program>
or
mpiexec -n <N> <program>
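Here <N> is the number of processes to start. For example, running the helloworld program sketched above on 4 processes (an arbitrary count chosen for illustration):

$ mpiexec -n 4 ./helloworld

Each of the 4 processes then prints its own "Hello world" line, not necessarily in rank order.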
Begin program
...
<serial code>
...
Initialize the MPI environment
...
<code to be executed in parallel>
...
Terminate the MPI environment
...
<serial code>
...
End program
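In C this structure corresponds roughly to the following skeleton (a minimal sketch; the comments only mark the serial and parallel regions):

#include <mpi.h>

int main(int argc, char *argv[])
{
    /* serial code */

    MPI_Init(&argc, &argv);     /* initialize the MPI environment */

    /* parallel region: rank-dependent work, message passing, ... */

    MPI_Finalize();             /* terminate the MPI environment */

    /* serial code */
    return 0;
}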
Note that the MPI header file (mpi.h) must be included in the source file in order to be able to call the MPI routines.

Declaring the MPI environment:
#include <stdio.h>
#include <mpi.h>
Note that the use of MPI routines in C and in C++ differs in two main respects:
- The C++ functions are used within the MPI namespace.
- The arguments of the C++ functions are passed by reference instead of by pointer as in the C functions. For example, the argc and argv arguments of MPI_Init are prefixed with & in C but not in C++.
MPI_Init(&argc, &argv);
}
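For comparison, the corresponding C++ calls (the C++ bindings are also used in the cout-based examples later in this document) would look roughly like this:

MPI::Init(argc, argv);                  // arguments passed by reference, no &
int rank = MPI::COMM_WORLD.Get_rank();  // rank query through the MPI namespace
MPI::Finalize();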
Suppose we want to execute a loop 1000 times. The number of iterations performed by each task is then 1000/ntasks, where ntasks is the total number of tasks, and we use the index of each task to mark the block of iterations that this task will execute:
MPI_Init(&argc, &argv);

count = 1000 / ntasks;
start = rank * count;
stop  = start + count;

nloops = 0;
for (i = start; i < stop; ++i) {
    ++nloops;
}
}
Here count is the number of iterations per task: the task with index rank executes the iterations from rank*count up to rank*count+count-1. The variable nloops counts the number of iterations actually performed by the task with index rank and is printed to the screen.
If the task being executed is not the master task (its rank is different from 0), it sends its nloops result to the master task:
#include <stdio.h>
#include <mpi.h>

MPI_Init(&argc, &argv);

nloops = 0;
for (i = start; i < stop; ++i) {
    ++nloops;
}

if (rank != 0) {
    MPI_Send(&nloops, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
}
If the task is the master task, it receives the nloops values sent by the other tasks and accumulates them:
#include <stdio.h>
#include <mpi.h>

MPI_Init(&argc, &argv);

nloops = 0;
for (i = start; i < stop; ++i) {
    ++nloops;
}

if (rank != 0) {
    MPI_Send(&nloops, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
} else {
    total_nloops = nloops;
    for (i = 1; i < ntasks; ++i) {
        MPI_Recv(&nloops, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        total_nloops += nloops;
    }
}
}
The master task then goes on to complete the full 1000 iterations, in case some iterations are left over after dividing 1000 by the total number of tasks:
#include <stdio.h>
#include <mpi.h>

MPI_Init(&argc, &argv);
nloops = 0;
for (i = start; i < stop; ++i) {
    ++nloops;
}

if (rank != 0) {
    MPI_Send(&nloops, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
} else {
    total_nloops = nloops;
    for (i = 1; i < ntasks; ++i) {
        MPI_Recv(&nloops, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        total_nloops += nloops;
    }

    /* the master performs any iterations left over from the division */
    nloops = 0;
    for (i = total_nloops; i < 1000; ++i) {
        ++nloops;
    }
}
}
Finally, we print the result, terminate the MPI environment and end the program:
#include <stdio.h>
#include <mpi.h>

MPI_Init(&argc, &argv);

nloops = 0;
for (i = start; i < stop; ++i) {
    ++nloops;
}

if (rank != 0) {
    MPI_Send(&nloops, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
} else {
    total_nloops = nloops;
    for (i = 1; i < ntasks; ++i) {
        MPI_Recv(&nloops, 1, MPI_INT, i, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        total_nloops += nloops;
}
nloops = 0;
for (i = total_nloops; i < 1000; ++i) {
    ++nloops;
}
MPI_Finalize();
return 0;
}
std::cout << "Task " << rank << " performed " << nloops
          << " iterations of the loop.\n";

if (rank != 0) {
    MPI::COMM_WORLD.Send(&nloops, 1, MPI_INT, 0, 0);
} else {
    int total_nloops = nloops;
    for (int i = 1; i < ntasks; ++i) {
        MPI::COMM_WORLD.Recv(&nloops, 1, MPI_INT, i, 0);
        total_nloops += nloops;
    }

    nloops = 0;
    for (int i = total_nloops; i < 1000; ++i) {
        ++nloops;
    }

    std::cout << "Task 0 performed the remaining " << nloops
              << " iterations of the loop\n";
}

MPI::Finalize();
return 0;
}
4 MPI routines
4.1 MPI environment management routines
These routines are used to set up the environment needed to execute MPI calls and to query information such as the index (rank) of a task, the MPI library version, and so on.
MPI_Get_processor_name returns the name of the processor:
MPI_Get_processor_name (&name,&resultlength)
Get_processor_name(&name,resultlen)
MPI_Finalize();
}
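As an illustration, a minimal program that queries the rank, the number of tasks and the processor name could look like this (a sketch; MPI_Comm_rank and MPI_Comm_size are the rank/size query routines used throughout the examples in this document):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &namelen);  /* name of the node running this task */

    printf("Task %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}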
Besides these, users can also create their own data structures based on these basic data types. Such user-defined structured types are called derived data types. The routines for defining new data structures include:
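For instance, a type consisting of several consecutive elements can be defined and registered as follows (a minimal sketch using MPI_Type_contiguous; the columntype used in the fragment below would instead be built with a strided constructor such as MPI_Type_vector):

MPI_Datatype rowtype;                         /* new derived data type        */

MPI_Type_contiguous(4, MPI_FLOAT, &rowtype);  /* 4 consecutive floats         */
MPI_Type_commit(&rowtype);                    /* must be committed before use */

/* ... rowtype can now be used in MPI_Send / MPI_Recv calls ... */

MPI_Type_free(&rowtype);                      /* release the type when done   */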
MPI_Status stat;
MPI_Datatype columntype;

MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);

if (numtasks == SIZE) {
    if (rank == 0) {
        for (i = 0; i < numtasks; i++)
            MPI_Send(&a[0][i], 1, columntype, i, tag, MPI_COMM_WORLD);
    }

    MPI_Recv(b, SIZE, MPI_FLOAT, source, tag, MPI_COMM_WORLD, &stat);
    printf("rank= %d  b= %3.1f %3.1f %3.1f %3.1f\n",
           rank, b[0], b[1], b[2], b[3]);
} else
    printf("Must specify %d processors. Terminating.\n", SIZE);

MPI_Type_free(&columntype);
MPI_Finalize();
}
MPI_Rsend sends a message in ready mode; it should only be used when the programmer is certain that the receiving process is ready to receive.
MPI_Rsend (&buf,count,datatype,dest,tag,comm)
Comm::Rsend(&buf,count,datatype,dest,tag)
MPI_Sendrecv sends a message and at the same time is ready to receive a message from another task.
MPI_Sendrecv (&sendbuf,sendcount,sendtype,dest,sendtag,
&recvbuf,recvcount,recvtype,source,recvtag,comm,&status)
Comm::Sendrecv(&sendbuf,sendcount,sendtype,dest,sendtag,
&recvbuf,recvcount,recvtype,source,recvtag,status)
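For example, two neighbouring tasks can exchange one integer each with a single call, avoiding the deadlock risk of two plain blocking sends (a sketch; prev and next are neighbour ranks computed as in the ring fragment further below):

int sendval = rank;        /* value this task sends            */
int recvval;               /* value received from its partner  */
MPI_Status status;

/* send to the next task in the ring, receive from the previous one */
MPI_Sendrecv(&sendval, 1, MPI_INT, next, 0,
             &recvval, 1, MPI_INT, prev, 0,
             MPI_COMM_WORLD, &status);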
if (rank == 0) {
    dest = 1;
    source = 1;
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
} else if (rank == 1) {
    dest = 0;
    source = 0;
    rc = MPI_Recv(&inmsg, 1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat);
    rc = MPI_Send(&outmsg, 1, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
}

MPI_Finalize();
}
MPI_Test checks the completion status of the non-blocking send and receive routines Isend() and Irecv(). The request argument is the request variable used in the corresponding send or receive call; the flag argument returns 1 if the operation has completed and 0 otherwise.
MPI_Test (&request,&flag,&status)
Request::Test(status)
prev = rank - 1;
next = rank + 1;
if (rank == 0) prev = numtasks - 1;
if (rank == (numtasks - 1)) next = 0;

MPI_Finalize();
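These neighbour indices are typically used with non-blocking calls such as the following (a sketch; the buffer and request names are illustrative, and MPI_Waitall is used here instead of polling with MPI_Test):

MPI_Request reqs[4];
MPI_Status  stats[4];
int buf[2];                               /* values received from prev and next */

/* post the receives first, then the sends; none of these calls block */
MPI_Irecv(&buf[0], 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[0]);
MPI_Irecv(&buf[1], 1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[1]);
MPI_Isend(&rank,   1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[2]);
MPI_Isend(&rank,   1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[3]);

/* wait until all four operations have completed */
MPI_Waitall(4, reqs, stats);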
logical), MPI_BOR (bitwise OR), MPI_LXOR (logical XOR), MPI_BXOR (bitwise XOR), MPI_MAXLOC (maximum value and its location), MPI_MINLOC (minimum value and its location).
Example:
#include <stdio.h>
#include <mpi.h>
if (numtasks == SIZE) {
    source = 1;
    sendcount = SIZE;
    recvcount = SIZE;
    MPI_Scatter(sendbuf, sendcount, MPI_FLOAT, recvbuf, recvcount,
                MPI_FLOAT, source, MPI_COMM_WORLD);

    printf("rank= %d  Results: %f %f %f %f\n", rank, recvbuf[0],
           recvbuf[1], recvbuf[2], recvbuf[3]);
} else
    printf("Must specify %d processors. Terminating.\n", SIZE);

MPI_Finalize();
}
5 Some examples
5.1 Example: computing π
In this example we parallelize the computation of the number π. The value of π can be determined via the integral formula

\[
\pi = \int_0^1 f(x)\,dx, \qquad f(x) = \frac{4}{1+x^2} \tag{1}
\]
In this example we use collective communication. From the formula above it is easy to see that there is only a single input parameter, namely n (the number of subintervals used to approximate the integral), so we broadcast this parameter to all tasks in the system with MPI_Bcast:

MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
h   = 1.0 / (double) n;
sum = 0.0;
for (i = myid + 1; i <= n; i += numprocs) {
    x = h * ((double)i - 0.5);          /* midpoint of the i-th subinterval */
    sum += (4.0 / (1.0 + x*x));
}
mypi = h * sum;

MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

MPI_Finalize();
return 0;
}
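Putting these pieces together, a self-contained non-interactive version might read as follows (a sketch with assumed declarations and a fixed n; the interactive version that follows instead reads n from the keyboard in a loop):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int    n = 10000;                /* number of subintervals (assumed fixed here) */
    int    myid, numprocs, i;
    double h, sum, x, mypi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    /* every task needs n; broadcast it from task 0 */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* each task sums every numprocs-th rectangle of the midpoint rule */
    h   = 1.0 / (double) n;
    sum = 0.0;
    for (i = myid + 1; i <= n; i += numprocs) {
        x = h * ((double)i - 0.5);
        sum += 4.0 / (1.0 + x*x);
    }
    mypi = h * sum;

    /* combine the partial sums on task 0 */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}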
while (1) {
    if (myid == 0) {
        printf("Enter the number of intervals: (0 quits) ");
        scanf("%d", &n);
    }
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (n == 0)
        break;
    else {
        h   = 1.0 / (double) n;
        sum = 0.0;
        for (i = myid + 1; i <= n; i += numprocs) {
            x = h * ((double)i - 0.5);
            sum += (4.0 / (1.0 + x*x));
        }
        mypi = h * sum;
        MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (myid == 0)
            printf("pi is approximately %.16f, Error is %.16f\n",
                   pi, fabs(pi - PI25DT));
    }
}
MPI_Finalize();
return 0;
}
while (1) {
    if (rank == 0) {
        cout << "Enter the number of intervals: (0 quits) " << endl;
        cin >> n;
    }
}
}
MPI::Finalize();
return 0;
}
5.2 Example: matrix multiplication

Printing the results:
/* Print results */
printf("\n");
printf("Result Matrix:\n");
for (i = 0; i < NRA; i++) {
    printf("\n");
    for (j = 0; j < NCB; j++)
        printf("%6.2f   ", c[i][j]);
}
printf("\n\n");
printf("Done.\n");
Returning the results to the master task:
/* Send results back to master task */
mtype = FROM_WORKER;
MPI_Send(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
#define NRA 62          /* number of rows in matrix A                    */
#define NCA 15          /* number of columns in A = number of rows in B  */
#define NCB 7           /* number of columns in B                        */
#define MASTER 0        /* task id of the master task                    */
#define FROM_MASTER 1   /* message tag for messages sent by the master   */
#define FROM_WORKER 2   /* message tag for messages sent by the workers  */
/* master task */
if (taskid == MASTER) {
    printf("mpi_mm has started with %d tasks.\n", numtasks);
    printf("Initializing arrays...\n");
    for (i = 0; i < NRA; i++)
        for (j = 0; j < NCA; j++)
            a[i][j] = i + j;
    for (i = 0; i < NCA; i++)
        for (j = 0; j < NCB; j++)
            b[i][j] = i * j;
    /* Receive results from worker tasks */
    mtype = FROM_WORKER;
    for (i = 1; i <= numworkers; i++) {
        source = i;
        MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&c[offset][0], rows*NCB, MPI_DOUBLE, source, mtype,
                 MPI_COMM_WORLD, &status);
        printf("Received results from task %d\n", source);
    }

    /* Print results */
    printf("\n");
    printf("Result Matrix:\n");
    for (i = 0; i < NRA; i++) {
        printf("\n");
        for (j = 0; j < NCB; j++)
            printf("%6.2f   ", c[i][j]);
    }
    printf("\n\n");
    printf("Done.\n");
}
/* worker task */
if (taskid > MASTER) {
    /* Receive matrix data from master task */
    mtype = FROM_MASTER;
    MPI_Recv(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&a, rows*NCA, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);
    MPI_Recv(&b, NCA*NCB, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD, &status);

    /* Do matrix multiply */
    for (k = 0; k < NCB; k++)
        for (i = 0; i < rows; i++) {
            c[i][k] = 0.0;
            for (j = 0; j < NCA; j++)
                c[i][k] = c[i][k] + a[i][j] * b[j][k];
        }