Parallel Computing (C3)
Ngô Văn Thành,
Viện Vật lý (Institute of Physics).
3.1 Fundamentals of communication by message passing
3.1.1 Message passing as a programming model. 3.1.2 Message-passing mechanisms. 3.1.3 Toward a language for parallel programming.
3.2 The Message Passing Interface (MPI) library
3.2.1 Introduction to MPI. 3.2.2 Parallel programming in C with the MPI library. 3.2.3 Communication techniques: broadcast, scatter, gather, blocking message passing...
3.3 Parallel Virtual Machine (PVM). 3.4 Designing and building a program that solves an NP-complete problem using MPI and C.
© 2009, Ngô Văn Thành - Viện Vật lý (Institute of Physics)
3.1 Fundamentals of communication by message passing
The message-passing approach is the earliest and most widely used technique in parallel programming. Information and data are exchanged between processors through pairs of send/receive operations; no shared memory is used. Each node has its own processor and its own memory. Messages are sent and received between nodes over a local network. Nodes communicate with one another over connections (links) called external channels.
An application program is divided into several processes that execute concurrently on the processors. Time-sharing: the number of processes exceeds the number of processors. Processes running on the same processor can exchange information through internal channels; processes running on different processors exchange information through external channels. A message can be a command, a piece of data, or an interrupt signal. Note: data exchanged between processors is never shared; each side only works with a copy. Process granularity is the size of a process, defined as the ratio of its computation time to its communication time:
process granularity = (computation time)/(communication time)
Advantages:
Data exchange does not require synchronized data structures. The number of processors can be changed easily. Each node can execute several different processes at the same time.
Each edge in the diagram represents the messages exchanged between a pair of processors, and the arrows show the direction in which the messages flow. A message-passing system interacts with the outside world (the external system) through the same message receive and send operations.
A message-passing system provides statements that let processes pass information to one another: send, receive, broadcast, and barrier.
send: takes data from a buffer (buffer memory) and transmits it to a given node. receive: accepts a message sent from another node and stores it in a private buffer.
Blocking mode: the send request issued by one processor and the receive request issued by the other both block. Data may be transferred if and only if the sending node has received a ready-to-receive reply from the receiving node. A blocking transfer takes three steps:
Step 1) send a transfer request to the receiving node.
Step 2) the receiving node records the request and sends back a reply message.
Step 3) the sending node starts transmitting the data once it has received the reply.
Advantages: simple; neither the sender nor the receiver needs a buffer. Disadvantages: both the sender and the receiver are blocked for the entire send/receive; during that time the processors sit idle, so communication and computation cannot overlap.
Nonblocking mode:
The sender transmits the message directly to the receiver without waiting for a reply. The data is held in a buffer and is transferred as soon as the link between the two nodes opens. Disadvantage: the buffer can overflow if the receiver cannot process the messages as fast as the sender produces them.
Example: computing (a+b)*(c+d).
On a single processor the computation takes 8 steps; on two processors it takes 7 steps.

Single-processor schedule:
Step  Task
1     Read a
2     Compute a+b
3     Store the result
4     Read c
5     Compute c+d
6     Compute (a+b)*(c+d)
7     Write the result
8     Finish
Routing: used by messages to choose a path over the network channels. The routing technique finds all feasible paths a message could take to its destination and then selects the best one. There are two kinds of routing:
Centralized routing: every path is fully established before the message is sent. This technique must know the idle state of every node in the network.
Distributed routing: each node chooses for itself the channels over which it forwards a message to another node. This technique only needs to know the state of the neighboring nodes.
Broadcast: one node sends a message to all other nodes; it is used to distribute data from one node to the others. Multicast: one node sends a message only to a selected subset of nodes; this technique is used in search algorithms on multiprocessor systems.
Switching: used to move data from an input channel to an output channel. Switching modes:
Store-and-forward: data is transmitted step by step through the network; the goal is dynamic load balancing as a message crosses the network.
Packet-switched: each message is split into small packets of equal size. Each node needs a buffer large enough to hold a packet before forwarding it, and the packets must be labeled so that they can be reassembled after transmission.
Virtual cut-through: a packet is stored at an intermediate node only if the next node is busy. If the next node on the path is not busy, the packet is forwarded immediately, without waiting for it to arrive completely from the previous node.
Circuit-switching: the links along the path from the source node to the destination node are set up end to end, and no buffer is needed at any node. After the data has been transferred, the links are released for other messages. This switching mode is used for transferring large volumes of data, thanks to its low latency; it is a form of static load balancing.
The processes are written together in one program, which contains control statements that distribute its different parts to the processors. The processes in the program are static.
This is the basis for the MPI (Message Passing Interface) library.
Alternatively, separate programs are written for each process, using the master-slave approach: one processor runs the master process, and the other processes (the slaves) are started from the master at run time. The processes are dynamic. This is the basis for the PVM (Parallel Virtual Machine) library.
These routines normally return when the message transfer has completed. Synchronous send: waits for an acceptance from the receiving process before sending the message. Synchronous receive: waits until the message arrives.
3.2 The Message Passing Interface (MPI) library
3.2.1 Introduction to MPI.
MPI is a library that supports message-passing programming. It includes point-to-point communication routines, as well as operations for data movement, computation, and synchronization. MPI-1 works only with static processes: all processes must be defined before execution begins, and all of them run concurrently. MPI-2 is the upgraded version, adding features for dynamic processes and the server-client model. In an application, the programmer adds calls that link to the functions of the MPI library. Each task in the program is ranked, i.e. numbered with an integer from 0 to n-1, where n is the total number of tasks. MPI uses the ranks to classify sent and received messages and then applies the appropriate operations to carry out the tasks. MPI tasks may run concurrently on the same processor or on different processors.
A communicator is the communication context for a group of tasks. To refer to a communicator, variables must be declared with the type MPI_Comm.
When an MPI program starts, every task is attached to a global communicator, MPI_COMM_WORLD. Task group type: MPI_Group.
MPI tasks can be divided into groups, and each group is labeled (named). MPI group operations work only with the members of the group, and each member is identified by its rank. MPI can create new groups whose members are drawn from a single group or from several different groups. Default communicator: MPI_COMM_WORLD.
MPI_COMM_WORLD:
This communicator is shared by all processes and remains unchanged for the entire run.
MPI_Init(): starts the MPI environment. MPI_Finalize(): shuts it down. Example:
int main(int argc, char *argv[])
{
    int myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        master();   /* master code */
    else
        slave();    /* slave code */
    MPI_Finalize();
}
Task rank: MPI_Comm_rank()
MPI_Comm_rank(): returns the rank of the calling task. Syntax:
MPI_Comm communicator;  /* communicator handle */
int my_rank;            /* the rank of the calling task */
MPI_Comm_rank(communicator, &my_rank);
MPI_Comm_group(): obtains the group associated with a communicator. Syntax:
MPI_Comm communicator;          /* communicator handle */
MPI_Group corresponding_group;  /* group handle */
MPI_Comm_group(communicator, &corresponding_group);
MPI_Comm_size(): returns the size of the group (the total number of tasks). Syntax:
MPI_Comm communicator;  /* communicator handle */
int number_of_tasks;
MPI_Comm_size(communicator, &number_of_tasks);
Example: a program has five tasks T0, T1, T2, T3, T4 with ranks 0, 1, 2, 3, 4. Initially all five tasks refer to the communicator MPI_COMM_WORLD.
Suppose task T3 makes the call MPI_Comm_rank(MPI_COMM_WORLD, &me); the variable me is set to 3. MPI_Comm_size(MPI_COMM_WORLD, &n) sets n to 5.
MPI_Comm_group(MPI_COMM_WORLD, &world_group)
Routines that create new communicators:
Duplicate a communicator: MPI_Comm_dup(oldcomm, &newcomm)
Create a communicator corresponding to a group of an existing communicator: MPI_Comm_create(oldcomm, group, &newcomm)
Create a communicator corresponding to a subgroup split off from the old group: MPI_Comm_split(oldcomm, split_key, rank_key, &newcomm)
Example: a program has five tasks T0..T4 with ranks 0..4. Initially there is one group, named small_group, with two members, T0 and T1. The call that creates a communicator for that group:
MPI_Comm_create(MPI_COMM_WORLD, small_group, &small_comm)
Splitting the tasks into two groups using the two split_key values 8 and 5 (held in x and y):
T0 calls with x = 8 and me = 0: MPI_Comm_split(MPI_COMM_WORLD, x, me, &newcomm)
T1 calls with y = 5 and me = 1: MPI_Comm_split(MPI_COMM_WORLD, y, me, &newcomm)
T2 calls with x = 8 and me = 2: MPI_Comm_split(MPI_COMM_WORLD, x, me, &newcomm)
T3 calls with y = 5 and me = 3: MPI_Comm_split(MPI_COMM_WORLD, y, me, &newcomm)
T4 calls with split_key = MPI_UNDEFINED and me = 4: MPI_Comm_split(MPI_COMM_WORLD, MPI_UNDEFINED, me, &newcomm)
MPI_Send(): the sender blocks until the message has been copied in full out of the send buffer.
MPI_Send(buf, count, data_type, dest, tag, comm)
buf: address of the send buffer; count: number of elements to send;
data_type: data type; dest: rank of the receiving process;
tag: message tag; comm: communicator.
MPI_Recv(): the receiver likewise blocks until the message has been received into the buffer.
MPI_Recv(buf, count, data_type, source, tag, comm, &status)
source: rank of the sending process; status: reports whether the receive succeeded.
The send and the matching receive must use the same comm.
MPI_Isend()/MPI_Irecv(): neither the sender nor the receiver blocks.
MPI_Isend(buf, count, data_type, dest, tag, comm, &request)
MPI_Irecv(buf, count, data_type, source, tag, comm, &request)
request is used to test whether the send/receive has completed.
MPI_Test(request, &flag, &status)
request: the request variable used in the Isend/Irecv call. flag: a logical variable, set to TRUE if the transfer has completed. status: additional information about the state of the Isend/Irecv.
Wait: blocks until the transfer has completed. MPI_Wait(request, &status)
Test and wait calls for several requests:
MPI_Testall(count, array_of_requests, &flag, &array_of_statuses) — flag becomes TRUE when all the requests have completed.
MPI_Testany(count, array_of_requests, &index, &flag, &status) — flag becomes TRUE when one of the requests has completed.
MPI_Waitall(count, array_of_requests, &array_of_statuses) — waits until all the requests have completed.
MPI_Waitany(count, array_of_requests, &index, &status) — waits until one of the requests has completed.
A task at a barrier must wait until every other task on the same communicator has arrived. Example: a program has five tasks T0..T4 with ranks 0..4, all referring to the communicator MPI_COMM_WORLD. The call MPI_Barrier(MPI_COMM_WORLD) forces each task to wait at the barrier until all tasks have reached it.
Ssend/Srecv: synchronous send and receive. Ssend waits until the message has been received; Srecv waits until the message has been sent. Both the receiving and the sending process block.
3.2.3 Communication techniques: broadcast, scatter, gather, blocking message passing...
Broadcast:
MPI_Bcast(buf, n, data_type, root, communicator)
Sends a copy of a buffer of n elements from the root task to every other task in the same communicator.
scatter/gather:
MPI_Scatter(sbuf, n, stype, rbuf, m, rtype, rt, communicator)
MPI_Gather(sbuf, n, stype, rbuf, m, rtype, rt, communicator)
Scatter: distributes a buffer across all the tasks; the buffer is divided into pieces of n elements. Gather: builds a buffer of its own from the pieces of data collected from the tasks.
sbuf: address of the send buffer.
n: number of elements sent to each task (scatter) or number of elements in the send buffer (gather).
stype: data type of the send buffer.
rbuf: address of the receive buffer.
m: number of elements in the receive buffer (scatter) or number of elements received from each sending task (gather).
rtype: data type of the receive buffer.
rt: rank of the sending task (scatter) or of the receiving task (gather).
MPI_Reduce():
MPI_Reduce(sbuf, rbuf, n, data_type, op, rt, communicator)
sbuf: address of the send buffer.
rbuf: address of the receive buffer.
n: number of data elements in the send buffer.
data_type: data type of the send buffer.
op: the reduction operation.
rt: rank of the root task.
Reduction operations:
MPI_SUM: sum. MPI_PROD: product. MPI_MIN: minimum. MPI_MAX: maximum. MPI_LAND: logical AND. MPI_LOR: logical OR.
3.3 Parallel Virtual Machine (PVM)
PVM is a collection of different computer systems connected over a network and controlled as one parallel system by a single computer. Each computer node in the network is called a host; a host may have one processor or many, and may even be a cluster, with the PVM software installed. A PVM system consists of two parts: the library of PVM functions/routines, and a daemon program installed on every node of the virtual machine. A PVM application combines several separate programs, each written for one or more processes of the parallel program. These programs are compiled for each host, and the executables are placed on the different hosts. A special program called the initiating task is started by hand on some host; the initiating task then automatically starts all the tasks on the other hosts. When identical tasks run on different ranges of data, this is the Single Program Multiple Data (SPMD) model.
When the tasks perform different functions, it is the Multiple Program Multiple Data (MPMD) model. Programs can run under different structures without any change to the source files; it is enough to copy from one structure to another, recompile, and run. PVM program structures: the most common is the star structure. The central node is the supervisor (master); the remaining nodes are workers (slaves). In this model the master node runs the initiating task and then starts all the other tasks on the slave nodes.
Tree structure (hierarchy): the master node at the top, which runs the initiating task, is the root node. The slave nodes sit on the branches and are divided into levels.
This is the special case of the tree structure with only one level: one master and many slaves. The master node is started by hand with the "initiating" task and interacts directly with the user. The master starts the slave nodes, assigns work to them, and finally collects the results from the slaves back to itself. The slave nodes do the computational work; they may operate independently or depend on one another. Slaves that depend on one another can exchange information directly to finish the job and then send the result back to the master.
Example: sorting the elements of an array. The master divides the array into sub-arrays with equal numbers of elements and assigns each sub-array to one slave. Each slave independently sorts the elements of its sub-array and then sends the result back to the master. Finally the master collects the sorted sub-arrays from the slaves and merges the sequences into one complete sorted sequence.
Tree structure (hierarchy).
The difference between the star structure and the tree structure is that each slave node can act as a secondary master and create secondary slave nodes of its own. The master node that runs the initiating task is the level-one node, or root; it creates the level-two slave nodes, the level-two nodes create the level-three nodes, and so on. The leaves of the tree are the nodes at the lowest level.
S divides the array into two parts and sends them to W1 and W2; W1 divides its sub-array into two parts and sends them to W3 and W4; W3 divides its sub-array into two parts and sends them to W7 and W8.
Task creation.
A PVM task can be created by hand or spawned by other tasks. The initiating task, created at the start of the program on the master node, is a static task. Tasks created by other tasks while the program is running are dynamic tasks. The function pvm_spawn() creates a dynamic task.
Parent task: the task that calls pvm_spawn().
Child task: a task created by pvm_spawn().
Functions related to task identifiers (TIDs):
pvm_spawn(child, arguments, flag, where, howmany, &tids): creates child tasks and returns an array tids holding the IDs of the children just created.
pvm_mytid(): returns the caller's own task ID. Example: mytid = pvm_mytid();
pvm_parent(): returns the task ID of the parent task. Example: my_parent_tid = pvm_parent();
pvm_tidtohost(tid): returns the ID of the host on which the daemon is running. Example: daemon_tid = pvm_tidtohost(tid);
Creating a dynamic task:
pvm_spawn(child, arguments, flag, where, howmany, &tids)
child: name of the executable file (the file must be present on the host where it runs).
arguments: array of program arguments.
flag: 0 means the hosts are chosen automatically by PVM; 1 means the new tasks run on the host named by the where parameter.
where: name of the host on which the new tasks run.
howmany: number of child tasks to create.
tids: array of child task IDs.
Example: n1 = pvm_spawn("/user/rewini/worker", 0, 1, "homer", 2, &tid1);
Task groups.
Tasks can join or leave a group, and each task may join several different groups. The tasks in a group are numbered consecutively starting from 0. pvm_joingroup(): adds the calling task to the group group_name and returns the task's instance number within the group.
i = pvm_joingroup(group_name);
pvm_getinst(): returns the instance number within the group of a task with a given TID: inst = pvm_getinst("slave", 100);
Example: there are four tasks T0, T1, T2, and T3, with TIDs 200, 100, 300, and 400 respectively. Tasks T0, T3, and T2 call, in that order:
i1 = pvm_joingroup("slave");  /* i1 = 0 */
i2 = pvm_joingroup("slave");  /* i2 = 1 */
i3 = pvm_joingroup("slave");  /* i3 = 2 */
Task T1 then joins the group:
i5 = pvm_joingroup("slave");  /* i5 = 3 */
The call tid = pvm_gettid("slave", 1) returns the TID 400 of T3. The call inst = pvm_getinst("slave", 100) returns the group instance number 3 of task T1, whose TID is 100.
Communication between tasks in a PVM system also uses message passing: the send/receive pair, carried out through the daemon program running on each node. The daemon determines the destination of each message. If the message is addressed to a local task (on the same node as the daemon), the daemon delivers it directly. If it is addressed to a task on another node, the daemon forwards the message over the network to the daemon of the receiving node. A message can be sent to one or several receiving nodes, and it can be received in blocking or nonblocking fashion.
In the timing diagram: the send is issued at point 1, and the message reaches the daemon at point 2; 3-4: control returns to the user once the data has been handed to the daemon at 2; 5-6: the receive on the receiver node; 7-8: control returns to the user.
Prepare the send buffer: the message must be packed into this buffer, and the message is then sent in full to the receiving nodes.
pvm_initsend(): creates a buffer and returns the ID of that buffer.
bufid = pvm_initsend(encoding_option);
The encoding_option parameter defaults to 0, meaning the data is encoded; with 1 the data is not encoded.
pvm_mkbuf(): creates a buffer; it is the more efficient choice when the program needs several buffers for several messages.
bufid = pvm_mkbuf(encoding_option);
PVM version 3 allows only one send buffer and one receive buffer to be active at any one time. The calls that set the active buffer are pvm_setsbuf(bufid) and pvm_setrbuf(bufid); each returns the ID of the previously active buffer and saves that buffer's state.
Packing a data array into the send buffer. The packing routines take three arguments:
a pointer to the first element of the array;
the number of array elements to be sent;
the stride between successive elements taken from the array.
The packing functions are named after the data type — integer, float, character string: pvm_pkint(), pvm_pkfloat(), pvm_pkstr()...
Sending a message:
Sending a message is asynchronous: it does not wait until the receiving task has finished. After the buffer has been initialized and the data packed into it, the data in the active buffer is sent. Sending to one receiving node with task ID tid:
info = pvm_send(tid, tag);
The tag parameter labels the message; the function returns a negative value on error.
Sending to several receivers: info = pvm_mcast(tids, n, tag); the parameter n is the number of receiving tasks. Sending to a group: info = pvm_bcast(group_name, tag). Pack-and-send: info = pvm_psend(tid, tag, my_array, n, PVM_INT) packs an array my_array of n integers into a message labeled tag and sends it to task tid.
Receiving a message:
Blocking receive: waits until the message has been received.
bufid = pvm_recv(tid, tag)
With tid = -1 or tag = -1, any message that arrives is accepted.
Nonblocking receive: does not wait until the message has been received.
bufid = pvm_nrecv(tid, tag)
If the message has not arrived yet, bufid is 0.
Timeout receive: does not wait indefinitely for the message.
bufid = pvm_trecv(tid, tag, timeout)
The call blocks for at most the interval timeout waiting for the message; if no message arrives, bufid = 0.
Receive-and-unpack: pvm_precv()
info = pvm_precv(tid, tag, my_array, len, datatype, &src, &atag, &alen)
The task ID of the sending node, the tag, and the length of the received message are stored in the variables src, atag, and alen.
Unpacking data:
The unpacking routines exactly mirror the packing routines; the function names gain the letter 'u': pvm_upkint(), pvm_upkfloat(), pvm_upkstr()...
Example: unpacking a character string: info = pvm_upkstr(string)
Unpacking an array of n elements with a stride of 1 between elements: info = pvm_upkint(my_array, n, 1)
Task synchronization:
pvm_barrier(): all the tasks must wait at the barrier.
info = pvm_barrier(group_name, ntasks)
ntasks is the number of group members that execute the barrier call.
3.4 Designing and building a program that solves an NP-complete problem using MPI and C.
MPI tutorial: https://computing.llnl.gov/tutorials/mpi/
Practical MPI Programming: http://www.redbooks.ibm.com/redbooks/pdfs/sg245380.pdf
Sequential version (core of the program):

/* Compute c */
for (k = 0; k < ncb; k++)
    for (i = 0; i < nra; i++) {
        c[i][k] = 0.0;
        for (j = 0; j < nca; j++)
            c[i][k] = c[i][k] + a[i][j]*b[j][k];
    }
/* Print the result */
printf("Product of the two matrices\n");
for (i = 0; i < nra; i++) {
    printf("\n");
    for (j = 0; j < ncb; j++)
        printf("%6.2f ", c[i][j]);
}
printf("\n");
}
}
The parallel version written with MPI:

#include <stdio.h>
#include "mpi.h"
#define nra 62        /* number of rows of matrix A    */
#define nca 15        /* number of columns of matrix A */
#define ncb 7         /* number of columns of matrix B */
#define MASTER 0
#define FROM_MASTER 1 /* message type */
#define FROM_WORKER 2
MPI_Status status;
main(int argc, char **argv)
{
int numtasks,    /* total number of tasks                  */
    taskid,      /* task index                             */
    numworkers,  /* number of slave tasks                  */
    source,      /* task id of the message source          */
    dest,        /* task id of the message destination     */
    nbytes,      /* number of bytes in a message           */
    mtype,       /* message type                           */
    intsize,     /* size of an integer in bytes            */
    dbsize,      /* size of a double in bytes              */
    rows,        /* rows of matrix A                       */
    averow, extra, offset, /* helper variables */
    i, j, k,               /* loop indices     */
    count;
double a[nra][nca], b[nca][ncb], c[nra][ncb];
intsize = sizeof(int);
dbsize = sizeof(double);
/************************************/
MPI_Init(&argc, &argv);
MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
numworkers = numtasks-1;
/************* master task **************/
if (taskid == MASTER) {
    printf("Number of worker tasks = %d\n", numworkers);
    for (i=0; i<nra; i++)
        for (j=0; j<nca; j++)
            a[i][j] = i+j;
    for (i=0; i<nca; i++)
        for (j=0; j<ncb; j++)
            b[i][j] = i*j;
    /* send matrix data to the worker tasks */
    averow = nra/numworkers;
    extra = nra%numworkers;
    offset = 0;
    mtype = FROM_MASTER;
    for (dest=1; dest<=numworkers; dest++) {
        rows = (dest <= extra) ? averow+1 : averow;
        printf(" sending %d rows to task %d\n", rows, dest);
        MPI_Send(&offset, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
        MPI_Send(&rows, 1, MPI_INT, dest, mtype, MPI_COMM_WORLD);
        count = rows*nca;
        MPI_Send(&a[offset][0], count, MPI_DOUBLE, dest, mtype, MPI_COMM_WORLD);
        count = nca*ncb;
        MPI_Send(&b, count, MPI_DOUBLE, dest, mtype, MPI_COMM_WORLD);
        offset = offset + rows;
    }
    /* wait for results from all worker tasks */
    mtype = FROM_WORKER;
    for (i=1; i<=numworkers; i++) {
        source = i;
        MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
        count = rows*ncb;
        MPI_Recv(&c[offset][0], count, MPI_DOUBLE, source, mtype, MPI_COMM_WORLD, &status);
    }
    /* print the result */
    printf("Here is the result matrix\n");
    for (i=0; i<nra; i++) {
        printf("\n");
        for (j=0; j<ncb; j++)
            printf("%6.2f ", c[i][j]);
    }
    printf("\n");
} /* end of master section */
/************** worker task ****************/
if (taskid > MASTER) {
    mtype = FROM_MASTER;
    source = MASTER;
    printf("Master =%d, mtype=%d\n", source, mtype);
    MPI_Recv(&offset, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
    printf("offset =%d\n", offset);
    MPI_Recv(&rows, 1, MPI_INT, source, mtype, MPI_COMM_WORLD, &status);
    printf("row =%d\n", rows);
    count = rows*nca;
    MPI_Recv(&a, count, MPI_DOUBLE, source, mtype, MPI_COMM_WORLD, &status);
    printf("a[0][0] =%e\n", a[0][0]);
    count = nca*ncb;
    MPI_Recv(&b, count, MPI_DOUBLE, source, mtype, MPI_COMM_WORLD, &status);
    printf("b =\n");
    for (k=0; k < ncb; k++)
        for (i=0; i < rows; i++) {
            c[i][k] = 0.0;
            for (j=0; j < nca; j++)
                c[i][k] = c[i][k] + a[i][j] * b[j][k];
        }
    mtype = FROM_WORKER;
    printf("after computing\n");
    MPI_Send(&offset, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
    MPI_Send(&rows, 1, MPI_INT, MASTER, mtype, MPI_COMM_WORLD);
    MPI_Send(&c, rows*ncb, MPI_DOUBLE, MASTER, mtype, MPI_COMM_WORLD);
    printf("after send\n");
} /* end of worker */
MPI_Finalize();
} /* end of main */
Example: computing the sum of a vector

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>
main(int argc, char **argv)
{
    int rank, size, myn, i, N;
    double *vector, *myvec, sum, mysum, total;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (rank == 0) {
        printf("Enter the vector length : ");
        scanf("%d", &N);
        vector = (double *)malloc(sizeof(double) * N);
        for (i = 0, sum = 0; i < N; i++)
            vector[i] = 1.0;
        myn = N / size;
    }
    MPI_Bcast(&myn, 1, MPI_INT, 0, MPI_COMM_WORLD);
    myvec = (double *)malloc(sizeof(double) * myn);
    MPI_Scatter(vector, myn, MPI_DOUBLE, myvec, myn, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    for (i = 0, mysum = 0; i < myn; i++)
        mysum += myvec[i];
    MPI_Gather(myvec, myn, MPI_DOUBLE, vector, myn, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    if (rank == 0)
        for (i = 0; i < N; i++)
            printf("[%d] %f\n", rank, vector[i]);
    MPI_Finalize();
    return 0;
}