
Seventh Semester B.E. Degree Examination, CBCS - Dec 2018

Advanced Computer Architecture

Time: 3 hrs.

Note: Answer any FIVE full questions.

Module - 1
1. a. List the performance factors and system attributes. Explain how the performance factors are influenced by the system attributes.
Ans. Performance factors:
• hardware technology
• architectural features
• efficient resource management
• algorithm design
• data structures
• language efficiency
• programmer skill
• compiler technology
When we talk about the performance of a computer system, we mean how quickly it executes a given program. Performance is influenced by:
• disk and memory access speed
• compiler quality
• operating system
• CPU time
The ideal performance of a computer system demands a perfect match between machine capability and program behavior. Machine capability can be enhanced by using better hardware technology and efficient resource management. Program behavior is harder to predict: it depends on code size and run-time conditions. Also, a machine's performance may vary from program to program. Because there are too many programs, and it is impractical to test all of them, benchmarks were developed. Computer architects have come up with a variety of metrics to describe computer performance, such as the clock rate and CPI.
Since I/O and system overhead frequently overlap processing by other programs, it is fair to consider only the CPU time used by a program, and the user CPU time is the most important factor. The CPU is driven by a clock with a constant cycle time τ (measured in nanoseconds). The inverse of the cycle time is the clock rate, f = 1/τ, measured in megahertz. A shorter clock cycle time, or equivalently a larger number of cycles per second, implies more operations can be performed per unit time by the program.
The size of a program is determined by its instruction count (Ic), the number of machine instructions to be executed by the program. Different machine instructions require different numbers of clock cycles to execute. CPI (cycles per instruction) is thus an important parameter.
Average CPI: it is easy to determine the average number of cycles per instruction for a particular processor if we know the frequency of occurrence of each instruction type. Of course, any estimate is valid only for a specific set of programs (which defines the instruction mix), and then only if there are sufficiently large numbers of instructions. In general, the term CPI is used with respect to a particular instruction set and a given program mix. The time required to execute a program containing Ic instructions is T = Ic × CPI × τ. Each instruction must be fetched from memory, decoded, then operands fetched from memory, the instruction executed, and the results stored.
The time required to access memory is called the memory cycle time, which is usually k times the processor cycle time τ. The value of k depends on the memory technology and the processor-memory interconnection scheme. The processor cycles required for each instruction (CPI) can be attributed to cycles needed for instruction decode and execution (p), and cycles needed for memory references (m × k).
The total time needed to execute a program can then be rewritten as T = Ic × (p + m × k) × τ.
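The formula above can be checked numerically; the following is a minimal Python sketch (all parameter values are invented for illustration, not taken from the question):

```python
# Hedged sketch: illustrative values only, showing how the CPU time formula
# T = Ic * (p + m*k) * tau combines the parameters defined above.

def cpu_time(ic, p, m, k, tau):
    """Total execution time for a program.

    ic  -- instruction count
    p   -- cycles per instruction for decode and execution
    m   -- memory references per instruction
    k   -- ratio of memory cycle time to processor cycle time
    tau -- processor cycle time (seconds)
    """
    return ic * (p + m * k) * tau

# Example: 1e9 instructions, 2 cycles decode/execute, 0.4 memory references
# per instruction, memory 4x slower than the processor, 1 ns cycle time.
t = cpu_time(1e9, 2, 0.4, 4, 1e-9)
print(t)  # 3.6 seconds
```

Note that the effective CPI here is p + m × k = 3.6, so the same answer follows from T = Ic × CPI × τ.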

b. Explain the architecture of a vector supercomputer with a neat diagram. (08 Marks)
Ans. Vector Supercomputers
In a vector computer, a vector processor is attached to the scalar processor as an optional feature. The host computer first loads program and data into the main memory. Then the scalar control unit decodes all the instructions. If the decoded instructions are scalar operations or program operations, the scalar processor executes those operations using scalar functional pipelines.
On the other hand, if the decoded instructions are vector operations, then the instructions will be sent to the vector control unit.
Vector functional units are pipelined and fully segmented: each stage of the pipeline performs a step of the function on different operand(s); once the pipeline is full, a new result is produced each clock period (cp).
Pipelining
The pipeline is divided up into individual segments, each of which is completely independent and involves no hardware sharing. This means that the machine can be working on separate operands at the same time. This ability enables it to produce one result per clock period as soon as the pipeline is full. The same instruction is obeyed repeatedly using the pipeline technique, so the vector processor processes all the elements of a vector in exactly the same way. The pipeline segments an arithmetic operation such as floating-point multiply into stages, passing the output of one stage to the next stage as input. The next pair of operands may enter the pipeline after the first stage has processed the previous pair of operands. The processing of a number of operands may be carried out simultaneously.
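The one-result-per-clock behaviour described above can be expressed as a cycle count; a minimal sketch, with made-up stage and operand counts:

```python
# Hedged sketch (stage count and operand count are invented): once a k-stage
# pipeline is full it delivers one result per clock period, so n vector
# elements need k + n - 1 cycles instead of k * n.

def pipelined_cycles(k, n):
    return k + n - 1  # k cycles to fill the pipeline, then one result per cycle

def serial_cycles(k, n):
    return k * n      # each operand passes through all k stages alone

print(pipelined_cycles(4, 64))  # 67
print(serial_cycles(4, 64))     # 256
```

For long vectors the fill time becomes negligible and throughput approaches one result per clock, which is exactly the claim in the paragraph above.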

[Figure: Architecture of a vector supercomputer - scalar processor with scalar functional pipelines, vector processor with vector control unit and vector functional pipelines, host computer, and main memory]
OR
2. a. What are the conditions of parallelism? Explain the types of data dependencies. (06 Marks)

Ans. The exploitation of parallelism in computing requires understanding the basic theory associated with it. Progress is needed in several areas: computation models for parallel computing, interprocessor communication in parallel architectures, and integration of parallel systems into general computing environments.
Data and Resource Dependences
Program segments cannot be executed in parallel unless they are independent. The ability to execute several segments in parallel requires that each segment be independent of the other segments: a segment's operands must not be modified by another segment, and control flow between segments must not force an ordering.
Data Dependence - 1
• Flow dependence (denoted S1 → S2): statement S2 is flow-dependent on S1 if an execution path exists from S1 to S2 and at least one output of S1 feeds in as input to S2.
• Antidependence (denoted S1 ⊣ S2): statement S2 is antidependent on S1 if S2 follows S1 in program order and the output of S2 overlaps the input of S1.
VII Sem (CSE/ISE)
Data Dependence - 2
• Output dependence (denoted S1 ○→ S2): two statements are output dependent if they produce (write) the same output variable.
• I/O dependence (denoted S1 I/O S2): read and write are I/O statements. I/O dependence occurs not because the same variable is involved but because the same file is referenced by both I/O statements.
Data Dependence - 3
• Unknown dependence: the dependence relation between two statements cannot be determined in the following situations: the subscript of a variable is itself subscripted; the subscript does not contain the loop index variable; a variable appears more than once with subscripts having different coefficients of the loop variable (that is, different functions of the loop variable); the subscript is nonlinear in the loop index variable.
• .
Parallel execution of program segments which do not have total data independence can produce non-deterministic results.
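The dependence types above can be checked mechanically from the read/write sets of two statements; a minimal sketch (the statements and variable names are invented examples):

```python
# Hedged sketch: classify the dependences defined above from the read/write
# sets of two statements S1 and S2, where S2 follows S1 in program order.

def dependences(w1, r1, w2, r2):
    """Classify dependences from statement S1 (writes w1, reads r1)
    to a later statement S2 (writes w2, reads r2)."""
    found = []
    if w1 & r2:
        found.append("flow")    # S1 writes a value that S2 reads
    if r1 & w2:
        found.append("anti")    # S2 overwrites a value S1 read
    if w1 & w2:
        found.append("output")  # both statements write the same variable
    return found

# S1: a = b + c      S2: b = a * 2   -> S2 reads a (flow), writes b (anti)
print(dependences({"a"}, {"b", "c"}, {"b"}, {"a"}))  # ['flow', 'anti']
```

Any non-empty result means the two statements cannot safely be reordered or run in parallel, which is the condition of parallelism stated above.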
b. What are the metrics affecting scalability of a computer system? (06 Marks)
Ans. The basic metrics affecting the scalability of a computer system are identified below.
• Machine size (n): the number of processors employed in a parallel computer system. A large machine size implies more resources and more computing power.
[Figure: Scalability of an (architecture, algorithm) combination, influenced by machine size, CPU time, clock rate, I/O demand, memory demand, communication overhead, programming cost, and computer cost]

• Clock rate (f): the clock rate determines the basic machine cycle. We hope to build a machine with components driven by a clock which can scale up with better technology.
• Problem size (s): the amount of computational workload, or the number of data points used to solve a given problem. The problem size is directly proportional to the sequential execution time T(s, 1) for a uniprocessor system because each data point may demand one or more operations.
• CPU time (t): the actual CPU time elapsed in executing a given program on a parallel machine with n processors collectively. This is the parallel execution time, denoted T(s, n), and is a function of both s and n.
• I/O demand (d): the input/output demand in moving the program, data, and results associated with a given application run. The I/O operation may overlap with the CPU operation in a multiprogrammed environment.
• Memory capacity (m): the amount of main memory used in a program execution.
CBCS - Dec 2018 / Jan 2019
• Communication overhead (h): the amount of time spent on interprocessor communication, synchronization, remote memory access, etc. This overhead also includes all noncompute operations which do not involve the CPUs or I/O devices. The overhead h(s, n) is a function of s and n and is not part of T(s, n).
• Computer cost (c): the total cost of hardware and software resources required to carry out the execution of a program.
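The metrics above combine into derived measures such as speedup and efficiency; a minimal sketch using T(s, 1) and T(s, n) with invented timings (the speedup and efficiency definitions are standard and not stated in the question):

```python
# Hedged sketch: derived scalability measures built from the metrics above,
# speedup S(s, n) = T(s, 1) / T(s, n) and efficiency E = S / n.

def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n):
    return speedup(t_serial, t_parallel) / n

# e.g. T(s, 1) = 100 s on one processor, T(s, 8) = 16 s on eight processors
print(speedup(100, 16))        # 6.25
print(efficiency(100, 16, 8))  # 0.78125
```

Communication overhead h(s, n) is what keeps T(s, n) above T(s, 1)/n, so efficiency below 1 is the normal case.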
c. What are the important characteristics of parallel algorithms? (04 Marks)
Ans.
• We can justify the importance of parallel computing for two reasons: very large application domains, and physical limitations of VLSI circuits.
• Though computers are getting faster and faster, user demand for solving very large problems is growing at a still faster rate.
• Some examples include weather forecasting, simulation of protein folding, computational physics etc.
Module - 2
3. a. What are the characteristics of CISC and RISC architectures? (04 Marks)
Ans. Characteristics of RISC -
• Simpler instructions, hence simple instruction decoding.
• Instructions fit within the size of one word.
• Instructions take a single clock cycle to get executed.
• More general-purpose registers.
• Simple addressing modes.
• Fewer data types.
• Pipelining can be achieved.
Characteristics of CISC -
• Complex instructions, hence complex instruction decoding.
• Instructions are larger than one word in size.
• Instructions may take more than a single clock cycle to get executed.
• Fewer general-purpose registers, as operations get performed in memory itself.
• Complex addressing modes.
• More data types.
b. What are the virtual memory models for multiprocessor systems? (04 Marks)
Ans. Private virtual memory
In the full-protection model, each process is given its own private virtual memory, which spans 2 or 3.5 gigabytes (depending on the CPU). This is accomplished by using the CPU's MMU. The performance cost for a process switch and a message pass will increase due to the increased complexity of obtaining addressability between two completely private address spaces.
Shared virtual memory
When programs are written, programmers take great care to make sure that code is not needlessly repeated. Subroutines to handle particular functions are written and used wherever possible. Subroutines that are useful to many programs are gathered

[Figure: (a) Private virtual memory spaces in different processors; (b) Globally shared virtual memory space]

c. Explain the address translation mechanism using TLB and page tables. (08 marks)
Ans. TLB
• The translation lookaside buffer caches virtual page → PTE (page frame number) mappings (not physical addresses), so translation can be done in a single machine cycle.
• The TLB is implemented in hardware as a fully associative cache (all entries searched in parallel): cache tags are virtual page numbers, cache values are PTEs (page frame numbers). With the PTE and offset, the MMU can directly calculate the physical address.
• TLBs exploit locality: processes only use a handful of pages at a time. 16-48 entries in a TLB is typical (64-192 KB), which can hold the "hot set" or "working set" of a process; hit rates in the TLB are therefore really important.

The use of a TLB and PTs for address translation is shown in the figure. Each virtual address is divided into three fields: the leftmost field holds the virtual page number, the middle field identifies the cache block number, and the rightmost field is the word address within the block.
Our purpose is to produce the physical address, consisting of the page frame number, the block number, and the word address. The first step of the translation is to use the virtual page number as a key to search through the TLB for a match. The TLB can be implemented with a special associative memory (content-addressable memory) or use part of the cache memory.
In case of a match (a hit) in the TLB, the page frame number is retrieved from the matched page entry. The cache block and word address are copied directly. In case the match cannot be found (a miss) in the TLB, a hashed pointer is used to identify
one of the page tables where the desired page frame number can be retrieved.

,I

I •

t
• ¥ii
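The hit/miss path described above can be sketched as a toy lookup (the page-table contents, TLB contents, and 4 KB page size are assumptions for illustration; a real TLB searches all entries associatively in hardware rather than via a dictionary):

```python
# Hedged sketch: try the TLB first, fall back to the page table on a miss,
# then refill the TLB. All mappings are invented.

PAGE_TABLE = {0x1: 0xA, 0x2: 0xB, 0x3: 0xC}   # virtual page -> page frame
TLB = {0x1: 0xA}                               # small cache of recent entries

def translate(vpage, offset, page_size=4096):
    if vpage in TLB:                  # TLB hit: the single-cycle case
        frame = TLB[vpage]
    else:                             # TLB miss: walk the page table
        frame = PAGE_TABLE[vpage]
        TLB[vpage] = frame            # refill the TLB for next time
    return frame * page_size + offset

print(hex(translate(0x1, 0x10)))  # TLB hit  -> 0xa010
print(hex(translate(0x2, 0x20)))  # TLB miss -> 0xb020
```

The refill step is why locality matters: once a page's entry is cached, every later reference to that page translates in the fast path.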

OR
4. a. Explain a typical superscalar RISC processor architecture. (08 marks)
Ans. Superscalar Processors:
• In a superscalar processor, multiple instruction pipelines are required. This implies that multiple instructions are issued per cycle and multiple results are generated per cycle.
• Superscalar processors are designed to exploit more instruction-level parallelism in user programs. Only independent instructions can be executed in parallel without causing a wait state.
• The instruction-issue degree in a superscalar processor is limited to 2-5 in practice.
• The effective CPI of a superscalar processor should be lower than that of a generic scalar RISC processor.

[Figure: A typical superscalar RISC processor architecture]
b. Explain inclusion, coherence and locality properties. (08 Marks)
Ans. Inclusion Property
The inclusion property is stated as M1 ⊂ M2 ⊂ ... ⊂ Mn. The set inclusion relationship implies that all information items are originally stored in the outermost level Mn. During the processing, subsets of Mn are copied into Mn-1. Similarly, subsets of Mn-1 are copied into Mn-2, and so on.
In other words, if an information word is found in Mi, then copies of the same word can also be found in all upper levels Mi+1, Mi+2, ..., Mn. However, a word stored in Mi+1 may not be found in Mi. A word miss in Mi implies that it is also missing from all lower levels Mi-1, Mi-2, ..., M1. The highest level is the backup storage, where everything can be found.


Information transfer between the CPU and cache is in terms of words (4 or 8 bytes each, depending on the word length of the machine). The cache (M1) is divided into cache blocks, also called cache lines by some authors. Each block is typically 32 bytes (8 words). Blocks are the units of data transfer between the cache and main memory.
The main memory (M2) is divided into pages, say, 4 Kbytes each. Each page contains 128 blocks. Pages are the units of information transferred between disk and main memory.
Coherence Property
The coherence property requires that copies of the same information item at successive memory levels be consistent. If a word is modified in the cache, copies of that word must be updated immediately or eventually at all higher levels. The hierarchy should be maintained as such. Frequently used information is often found in the lower levels in order to minimize the effective access time of the memory hierarchy. In general, there are two strategies for maintaining the coherence in a memory hierarchy.
The first method is called write-through (WT), which demands immediate update in Mi+1 if a word is modified in Mi, for i = 1, 2, ..., n-1.
The second method is write-back (WB), which delays the update in Mi+1 until the word being modified in Mi is replaced or removed from Mi.
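The WT/WB distinction can be sketched with a two-level toy hierarchy (M1 and M2 as plain dictionaries; the addresses and values are invented):

```python
# Hedged sketch of the two coherence strategies above: write-through updates
# M2 on every write to M1; write-back defers the update until the modified
# word is replaced (evicted) from M1.

class Hierarchy:
    def __init__(self, write_through):
        self.m1, self.m2 = {}, {}
        self.write_through = write_through

    def write(self, addr, val):
        self.m1[addr] = val
        if self.write_through:
            self.m2[addr] = val            # WT: immediate update of Mi+1

    def evict(self, addr):
        self.m2[addr] = self.m1.pop(addr)  # WB: update deferred to replacement

wt, wb = Hierarchy(True), Hierarchy(False)
wt.write(0, 7); wb.write(0, 7)
print(wt.m2.get(0))  # 7    (already consistent)
print(wb.m2.get(0))  # None (stale until eviction)
wb.evict(0)
print(wb.m2.get(0))  # 7
```

The trade-off is the one implied above: WT keeps the levels consistent at all times at the cost of traffic on every write, while WB batches updates at replacement time.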
Locality of References
There are three dimensions of the locality property: temporal, spatial, and sequential. During the lifetime of a software process, a number of pages are used dynamically. The references to these pages vary from time to time; however, they follow certain access patterns. These memory reference patterns are caused by the following locality properties:
1. Temporal locality - Recently referenced items (instructions or data) are likely to be referenced again in the near future. This is often caused by special program constructs such as iterative loops, process stacks, temporary variables, or subroutines. Once a loop is entered or a subroutine is called, a small code segment will be referenced repeatedly many times. Thus temporal locality tends to cluster the accesses in the recently used areas.

2. Spatial locality - This refers to the tendency for a process to access items whose addresses are near one another. For example, operations on tables or arrays involve accesses of a certain clustered area in the address space. Program segments, such as routines and macros, tend to be stored in the same neighborhood of the memory space.
3. Sequential locality - In typical programs, the execution of instructions follows a sequential order (or the program order) unless branch instructions create out-of-order executions. The ratio of in-order execution to out-of-order execution is roughly 5 to 1 in ordinary programs. Besides, the access of a large data array also follows a sequential order.

Module - 3
5. a. What is arbitration? Explain different types of arbitration. (08 Marks)
Ans. The device that is allowed to initiate data transfers on the bus at any given time is called the bus master. In a computer system there may be more than one bus master, such as a processor, DMA controller etc.
They share the system bus. When the current master relinquishes control of the bus, another bus master can acquire the control of the bus.
Bus arbitration is the process by which the next device to become the bus master is selected and bus mastership is transferred to it. The selection of the bus master is usually done on a priority basis.
There are two approaches to bus arbitration: centralized and distributed.
1. Centralized Arbitration: In centralized bus arbitration, a single bus arbiter performs the required arbitration. The bus arbiter may be the processor or a separate controller connected to the bus. There are three different arbitration schemes that use the centralized bus arbitration approach. These schemes are:
• Daisy chaining
• Polling method
• Independent request
2. Distributed Arbitration
• In distributed arbitration, all devices participate in the selection of the next bus master.
• In this scheme each device on the bus is assigned a 4-bit identification number.
• When one or more devices request control of the bus, they assert the start-arbitration signal and place their 4-bit ID numbers on the arbitration lines, ARB0 through ARB3.
• These four arbitration lines are all open-collector. Therefore, more than one device can place their 4-bit ID number to indicate that they need control of the bus. If one device puts 1 on a bus line and another device puts 0 on the same bus line, the bus line status will be 0. Each device reads the status of all lines through inverter buffers, so a device reads bus status 0 as logic 1. In this scheme the device having the highest ID number has the highest priority.

• When two or more devices place their ID numbers on the bus lines, it is necessary to identify the highest ID number from the status of the bus lines. Consider that two devices A and B, having ID numbers 1 and 6 respectively, are requesting the use of the bus.
• Device A puts the bit pattern 0001, and device B puts the bit pattern 0110. With this combination the status of the bus lines will be 1000; however, because of the inverter buffers, the code seen by both devices is 0111.
• Each device compares the code formed on the arbitration lines to its own ID, starting from the most significant bit. If it finds a difference at any bit position, it disables its drives at that bit position and for all lower-order bits.
• It does so by placing a 0 at the input of those drives. In our example, device A detects a difference on line ARB2 and hence it disables its drives on lines ARB2, ARB1 and ARB0. This causes the code on the arbitration lines to change to 0110. This means that device B has won the race.
• The decentralized arbitration offers high reliability because operation of the bus is not dependent on any single device.
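The race described above can be simulated; a minimal sketch of open-collector style arbitration (the wired line is modelled as the maximum of the driven bits, and the device IDs are taken from the example above):

```python
# Hedged sketch: each device drives its 4-bit ID MSB-first, compares the wired
# code with its own ID, and on the first mismatch disables its drives for that
# bit and all lower-order bits. The highest ID always survives.

def arbitrate(ids, bits=4):
    drive = {d: [(d >> i) & 1 for i in range(bits - 1, -1, -1)] for d in ids}
    active = {d: True for d in ids}  # still contending for low-order bits
    for bit in range(bits):
        # wired code on this arbitration line (1 if any active device drives 1)
        line = max(drive[d][bit] if active[d] else 0 for d in ids)
        for d in ids:
            own = (d >> (bits - 1 - bit)) & 1
            if active[d] and own < line:   # lost at this bit position
                active[d] = False
                for b in range(bit, bits):
                    drive[d][b] = 0        # disable this and lower-order drives
    return max(d for d in ids if active[d])

print(arbitrate([1, 6]))  # 6: device B wins, the code settles at 0110
```

Running it with IDs 1 and 6 reproduces the worked example: device A drops out at ARB2 and device B's code 0110 remains on the lines.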
b. Explain sequential and weak consistency models. (08 Marks)
Ans. The sequential consistency (SC) memory model is widely understood among multiprocessor designers. In this model, the loads, stores, and swaps of all processors appear to execute serially in a single global memory order that conforms to the individual program orders of the processors.
Sequential consistency: the result of any execution is the same as if the read and write operations by all processes were executed in some sequential order, and the operations of each individual process appear in this sequence in the order specified by its program.
• Any valid interleaving of read and write operations is OK, but all processes must see the same interleaving.
• The events observed by each process must globally occur in the same order, or it is not sequentially consistent. It doesn't actually matter if the events don't really agree with clock time, as long as they are consistent.
• Linearizability is weaker than strict consistency, but stronger than sequential consistency.
• Linearizability has proved useful for reasoning about program correctness, but it has not typically been used otherwise.
• Sequential consistency is implementable and widely used, but has poor performance.
• To get around the performance problems, weaker models that have better performance have been developed.
• This leads to "Weak Consistency".
• It is primarily designed to work with distributed critical regions.
• This model introduces the notion of a "synchronization variable", which is used to update all copies of the data-store.
The following criteria must be met:
• Accesses to synchronization variables associated with a data store are sequentially consistent - all processes see the synchronization calls in the same order.
• No operation on a synchronization variable is allowed to be performed until all previous writes have been completed everywhere.
• No read or write operation on data items is allowed to be performed until all previous operations to synchronization variables have been performed.
OR
6. a. What are the different techniques for branch prediction? Explain. (08 Marks)
Ans. In general, the problem of the branch becomes more important for deeply pipelined processors because the cost of incorrect predictions increases (the branches are resolved several stages after the ID stage).
The main goal of branch prediction techniques is to try to predict ASAP the outcome of a branch instruction.
The performance of a branch prediction technique depends on:
• Accuracy, measured in terms of the percentage of incorrect predictions given by the predictor.
• Cost of an incorrect prediction, measured in terms of the time lost to execute useless instructions (misprediction penalty), given by the processor architecture.
We also need to consider branch frequency: the importance of accurate branch prediction is higher in programs with higher branch frequency.
There are many methods to deal with the performance loss due to branch hazards:
• Static Branch Prediction Techniques: the actions for a branch are fixed for each branch during the entire execution. The actions are fixed at compile time.
• Dynamic Branch Prediction Techniques: the decision causing the branch prediction can dynamically change during the program execution.
In both cases, care must be taken not to change the processor state and registers until the branch is definitely known.
Static Branch Prediction is used in processors where the expectation is that the branch behavior is highly predictable at compile time.
• Static Branch Prediction can also be used to assist dynamic predictors.
1) Branch Always Not Taken (Predicted-Not-Taken)
2) Branch Always Taken (Predicted-Taken)
3) Backward Taken Forward Not Taken (BTFNT)
4) Profile-Driven Prediction
5) Delayed Branch
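As a concrete instance of the dynamic techniques mentioned above (the text does not name a specific scheme), the classic 2-bit saturating counter flips its prediction only after two consecutive mispredictions; a minimal sketch with an invented outcome stream:

```python
# Hedged sketch: a 2-bit saturating counter, one common dynamic branch
# prediction scheme. States 0-1 predict not-taken, states 2-3 predict taken.

class TwoBitPredictor:
    def __init__(self):
        self.state = 0  # start in strongly not-taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # saturate the counter toward the actual outcome
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True, True]  # e.g. a loop branch
hits = 0
for taken in outcomes:
    hits += (p.predict() == taken)
    p.update(taken)
print(hits)  # 3 of the 6 outcomes predicted correctly
```

The one not-taken outcome in the middle does not flip the prediction, which is exactly the hysteresis that makes this scheme effective on loop branches.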

b. Explain a multiply pipeline design to multiply two 8-bit integers. (08 Marks)
Ans. Arithmetic pipelines like floating-point multiplication are popular in general purpose computers, and the question naturally arises of when pipelining can be used. We have seen that pipelining can be used whenever you are getting continuous input and producing continuous output.
For a single task, it does not give you any benefit; the reason for that is, for a single task, the time required to perform the computation is more than that of a non-pipelined implementation.
That means, only when a large number of tasks are to be performed is a pipeline suitable.
Whenever you are performing multiplication of two numbers, say A and B - say A is equal to, let us assume, 1011 and the other number is 1010 - whenever you do it by hand computation, you multiply each bit of B with A in turn. Whenever you multiply with 0, you get 0000; whenever you multiply with 1, you get 1011; and you keep on shifting each result left. Essentially we do shift and add: you have to add these shifted rows together.
You perform the addition of these rows, and whenever you perform this addition by hand, we may do it simultaneously; that means we add all these bits together and produce the sum, as is shown in the diagram below. Here A and B are two 8-bit numbers; P0 is a partial product obtained by multiplying one bit of B with all the bits of A. In this way we have got partial products P0, P1, P2, P3, P4, P5, P6 and P7, corresponding to multiplication of the 8 bits of B with the bits of A.
So, these are the partial products which are to be added, and you can see 0 has been inserted on the right side. Whenever we do the addition of the entire column with the help of practical processors, an adder will take two numbers and produce a sum and a carry. So you cannot really perform all these additions simultaneously, as is shown in this diagram.
However, we can try to do it in a pipelined manner instead of doing it serially. Doing it serially means adding P0 and P1, then adding the result of that with P2, then the result of that with P3, and so on; that will take a long time - definitely 8 clock cycles, if we perform one addition per clock cycle.
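The shift-and-add procedure walked through above can be sketched directly: each bit of B selects a shifted copy of A (a partial product), and the partial products are summed, which is the addition step that the pipelined adder tree performs in stages:

```python
# Hedged sketch: generate and sum the partial products described above.
# Python's sum() collapses the additions that the pipeline would perform
# stage by stage.

def partial_products(a, b, bits=8):
    # bit i of B selects A shifted left by i positions (or an all-zero row)
    return [(a << i) if (b >> i) & 1 else 0 for i in range(bits)]

def multiply(a, b, bits=8):
    return sum(partial_products(a, b, bits))

a, b = 0b1011, 0b1010                 # the 4-bit example from the text
print([bin(p) for p in partial_products(a, b, 4)])
print(multiply(a, b, 4), a * b)       # 110 110
```

For the 4-bit example, the non-zero rows are 10110 and 1011000, matching the hand computation above; the pipeline's job is to add such rows a pair (or a few) at a time per stage rather than all at once.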

[Figure: Pipeline design for multiplying two 8-bit integers - partial products P0-P7 reduced in adder stages to a final product]

Module - 4
7. a. Explain routing in an Omega network. (08 Marks)
Ans. The Omega MIN
• Another popular MIN is the Omega MIN.
• The interconnections between the stages in an Omega network are defined by the perfect shuffle (a left rotate of the bits used in the node ID).
• Example: an 8 x 8 Omega network is interconnected as follows:
000 ---> 000 ---> 000 ---> 000
001 ---> 010 ---> 100 ---> 001
010 ---> 100 ---> 001 ---> 010
011 ---> 110 ---> 101 ---> 011
100 ---> 001 ---> 010 ---> 100
101 ---> 011 ---> 110 ---> 101
110 ---> 101 ---> 011 ---> 110
111 ---> 111 ---> 111 ---> 111
• Consists of four 2 x 2 switches per stage.
• The fixed links between every pair of stages are identical.
• A perfect shuffle is formed for the fixed links between every pair of stages.
• Has complexity of O(n lg n).

• For 8 possible inputs, there are a total of 8! = 40,320 1-to-1 mappings of the inputs onto the outputs. But there are only 12 switches, for a total of 2^12 = 4096 settings. Thus the network is blocking.
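The table above is generated by a 1-bit left rotate of the 3-bit node ID; a minimal sketch verifying one row and the fact that three shuffles return every node to itself:

```python
# Hedged sketch: the perfect-shuffle link pattern of the 8 x 8 Omega network
# above is a left rotate of the 3-bit node ID.

def shuffle(x, bits=3):
    msb = (x >> (bits - 1)) & 1
    return ((x << 1) & ((1 << bits) - 1)) | msb   # left rotate by one bit

row = [0b001]
for _ in range(3):
    row.append(shuffle(row[-1]))
print([format(v, '03b') for v in row])  # ['001', '010', '100', '001']
```

This matches the second row of the table (001 ---> 010 ---> 100 ---> 001); routing a message then only requires each 2 x 2 switch to look at one destination bit per stage.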

[Figure: An 8 x 8 Omega network built from four 2 x 2 switches per stage, with perfect-shuffle links between stages]

b. What are the different vector-access memory schemes? Explain any two of them. (08 Marks)
Ans. Three vector-access schemes from interleaved memory modules allow overlapped memory access:
• C-Access Memory Organization
• S-Access Memory Organization
• C/S-Access
S-Access Memory Organization:

[Figure: S-access memory organization - m modules fetched in parallel into data latches during the access cycle; a multiplexer uses the low-order address bits to select single words, with the (n-a) high-order address bits and read/write control shared by all modules]
• Similar to low-order interleaved memory.
• High-order bits select modules.
• Words from the modules are latched at the same time.
• Low-order bits select words from the data latches.
• This is done through the multiplexer at higher speed (minor cycles).
• Allows simultaneous access.
• This is called S-access.
Interleaved Fetch and Access
• If only the minor cycle selection is used, m words (one row) are accessed in 2 memory (major) cycles.
• If fetch and access to the latches are interleaved, m words are accessed in 1 memory cycle.
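The address split described above can be sketched as follows (the field widths and module count m = 8 are assumptions for illustration; the high-order bits form the word address broadcast to all modules, and the low-order bits drive the multiplexer that picks one latched word per minor cycle):

```python
# Hedged sketch of the S-access address decomposition: one row fetch latches
# m consecutive words, and the low-order bits then stream them out.

M = 8  # assumed number of interleaved modules (one data latch each)

def split(addr):
    row = addr >> 3       # high-order bits: row broadcast to all 8 modules
    latch = addr & 0b111  # low-order bits: which latch the multiplexer selects
    return row, latch

# Fetching row 5 latches words 40..47; the multiplexer then streams them out.
print([split(a) for a in range(40, 44)])  # [(5, 0), (5, 1), (5, 2), (5, 3)]
```

A stride-1 vector access thus pays for one module fetch per m words, which is what makes the scheme attractive for vector operands.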

[Figure: Interleaved fetch and access timing - fetch 1, fetch 2, fetch 3, ... overlapped with access 1, access 2, access 3, ..., delivering m words per memory cycle across cycles 1 to 4]

OR
8. a. What are the implementation models of SIMD? Explain them. (08 Marks)
Ans. Vector processing can be carried out by SIMD computers.
• A vector instruction's operands must have a fixed length of n, equivalent to the number of PEs.
• Two models:
- Distributed memory model
- Shared memory model
Distributed-Memory Model
• Spatial parallelism is explored - an array of PEs with an array control unit.
• Program and data are loaded into the control memory through a host unit.
• Instructions are sent to the control unit for decoding. A scalar or program control operation is directly executed by a scalar processor attached to the control unit.
• A vector instruction will be broadcast to all PEs for execution.
• Partitioned data sets are distributed to all the local memories attached to the PEs through a vector data bus.
• The PEs are synchronized in hardware by the control unit.
• The same instruction is executed by all the PEs in the same cycle.
• Masking logic is provided to disable any PE from participating in a given instruction cycle.
Inter-PE Communications
• The PEs are interconnected by a data-routing network which performs inter-PE data communications.
• The data-routing network is under program control through the control unit.

[Block diagram: distributed-memory SIMD model — a host (user I/O) loads program and data into the control memory; the control unit executes scalar instructions on an attached scalar processor and broadcasts vector instructions and constants over an instruction broadcast bus to the PEs (processing elements), each with its local memory (LM), linked by a data-routing network and a data bus.]
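The lockstep-with-masking behaviour described above can be sketched in plain Python (an illustrative model of my own, not from the original answer): every enabled PE applies the same broadcast instruction to its local operands in the same "cycle", and the mask bit disables individual PEs.

```python
def simd_step(op, local_a, local_b, mask):
    """Execute one SIMD instruction across an array of PEs.

    op       -- the common operation broadcast by the control unit
    local_a  -- one operand per PE (from its local memory)
    local_b  -- second operand per PE
    mask     -- masking logic: True keeps the PE active,
                False makes it hold its previous value
    """
    return [op(a, b) if m else a
            for a, b, m in zip(local_a, local_b, mask)]

# All PEs add in lockstep, but PE 2 is masked off.
result = simd_step(lambda a, b: a + b,
                   [1, 2, 3, 4], [10, 10, 10, 10],
                   [True, True, False, True])
# result == [11, 12, 3, 14]
```

Masking is what lets an array processor handle conditionals: both branches are broadcast, with complementary masks enabling the relevant PEs each time.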

Shared-Memory SIMD Model
This subclass of machines is in practice equivalent to the single-processor vector
processors, although other interesting machines in this subclass have existed (viz.
VLIW machines [31]). The block diagram depicts a generic model of a vector
architecture.
The single-processor vector machine will have only one of the vector processors
depicted, and the system may even have its scalar floating-point capability shared with
the vector processor (as was the case in some Cray systems). It may be noted that the
VPU does not show a cache. The majority of vector processors do not employ a cache
anymore. In many cases the vector unit cannot take advantage of it, and execution
speed may even be unfavourably affected because of frequent cache overflow.
Although vector processors have existed that loaded their operands directly from
memory and stored the results again immediately in memory (CDC Cyber 205,
ETA-10), all present-day vector processors use vector registers. This usually does not
impair the speed of operations while providing much more flexibility in gathering
operands and manipulating intermediate results.
Because of the generic nature of Figure 1, no details of the interconnection between
the VPU and the memory are shown in the scheme. Still, these details are very
important for the effective speed of a vector operation: when the bandwidth between
memory and the VPU is too small, it is not possible to take full advantage of the
VPU because it has to wait for operands and/or has to wait before it can store results.
When the ratio of arithmetic to load/store operations is not high enough to compensate
for such situations, severe performance losses may be incurred.
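The balance described here can be made concrete with a small roofline-style model (my own illustrative sketch, with made-up numbers): the sustained rate is limited either by the VPU's peak arithmetic rate or by what the memory bandwidth can feed, whichever is smaller.

```python
def sustained_gflops(peak_gflops, mem_bw_gwords, flops_per_word):
    """Roofline-style estimate of sustained vector performance.

    peak_gflops    -- peak arithmetic rate of the VPU
    mem_bw_gwords  -- memory bandwidth in 10^9 words/s
    flops_per_word -- ratio of arithmetic to load/store operations
    """
    bandwidth_bound = mem_bw_gwords * flops_per_word
    return min(peak_gflops, bandwidth_bound)

# A VPU with 4 GFLOP/s peak but only 1 Gword/s of bandwidth:
# at 1 flop/word it sustains only 1 GFLOP/s; at 8 flops/word
# the arithmetic peak becomes the limit.
low  = sustained_gflops(4.0, 1.0, 1)   # -> 1.0
high = sustained_gflops(4.0, 1.0, 8)   # -> 4.0
```

This is exactly the "severe performance loss" case: when flops_per_word is low, the VPU idles waiting on memory regardless of its peak rate.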
[Block diagram: shared-memory SIMD model — a host with mass storage and a scalar processor; the control unit exercises network control over the processing elements, which reach the shared memory modules through an alignment network and a common data bus.]

b. Explain four context-switching policies. (08 Marks)


Ans.
1. Switch on cache miss: This policy corresponds to the case where a context is
preempted when it causes a cache miss. In this case, R is taken to be the average
interval between misses (in cycles), and L the time required to satisfy the miss.
Here the processor switches context only when it is certain that the current one
will be delayed for a significant number of cycles.
2. Switch on every load: This policy allows switching on every load, independent
of whether it will cause a miss or not. In this case, R represents the average
interval between loads. A general multithreading model assumes that a context
is blocked for L cycles after every switch; but in the case of a switch-on-load
processor, this happens only if the load causes a cache miss.
3. Switch on every instruction: This policy allows switching on every instruction,
independent of whether it is a load or not.
4. Switch on block of instructions: Blocks of instructions from different threads
are interleaved. This will improve the cache-hit ratio due to locality. It will also
benefit single-context performance.
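The parameters R and L above, together with a switch overhead C, feed a common analytical model of multithreaded processor efficiency (a sketch only; the exact formula varies between texts): with enough resident contexts the latency is fully hidden and efficiency saturates, otherwise it grows linearly with the number of contexts.

```python
def efficiency(N, R, L, C):
    """Efficiency of a blocked multithreaded processor.

    N -- number of resident contexts
    R -- average run length between switches (cycles)
    L -- latency a switched-out context waits (cycles)
    C -- context-switch overhead (cycles)

    If the other N-1 contexts can cover the full latency L,
    efficiency saturates at R / (R + C); otherwise the
    processor still idles part of the time.
    """
    if (N - 1) * (R + C) >= L:
        return R / (R + C)          # saturation region
    return N * R / (R + L + C)      # linear region

# Switch-on-miss example: R = 40 cycles between misses,
# L = 100-cycle miss latency, C = 4-cycle switch cost.
e1 = efficiency(1, 40, 100, 4)   # single context: ~0.28
e4 = efficiency(4, 40, 100, 4)   # saturated: ~0.91
```

The example shows why switch-on-miss pays off: four contexts lift utilization from under 30% to over 90% for these (hypothetical) numbers.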

Module - 5
9. a. What are the issues in using shared variable model? (08 Marks)
Ans. Shared Variable Communication
• Shared variable and mutual exclusion
• Main Issues:
Protected access of critical sections
Memory Consistency
Atomicity of Memory Operations
Fast Synchronization
Shared data structures
Fast data movement techniques
• Critical Section (CS)
Code segments accessing shared variables with atomic operations
Requirements - Mutual Exclusion, No Deadlock in waiting, Non-preemption,
Eventual Entry
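Protected access to a critical section can be sketched with Python's threading module (an illustrative example, not part of the original answer): the lock provides mutual exclusion, so the shared variable is updated as if atomically, and releasing it on exit gives waiting threads eventual entry.

```python
import threading

counter = 0                 # shared variable
lock = threading.Lock()     # guards the critical section

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:          # entry: mutual exclusion enforced
            counter += 1    # critical section (read-modify-write)
        # exit: lock released -> eventual entry for other threads

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40000 every run; without the lock the
# read-modify-write could interleave and lose updates.
```

Using `with lock:` also guarantees the lock is released even if the critical section raises, avoiding one common source of deadlock.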
b. Explain different phases of parallelizing compiler with a diagram. (08 Marks)
Ans. Major Phases of parallelizing compiler
• Phase I : Flow Analysis
Data Dependence
Control Dependence
Reuse Analysis
• Phase II : Program Optimizations
Vectorization
Parallelizations
Locality
Pipelining
• Phase III : Parallel Code Generation
Granularity
Degree of Parallelism
Code Scheduling

[Diagram: major phases of a parallelizing compiler — Flow Analysis, then Program Optimizations, then Parallel Code Generation.]

OR
10. a. Explain testing algorithm for dependence testing. (08 Marks)
Ans. Partition-Based Algorithm
a) Partition the subscripts into separable and minimal coupled groups S1, S2, ..., Sm
for a single reference pair enclosed in a loop nest with indexes i1, i2, ..., in
b) Label each subscript as ZIV, SIV or MIV
c) For each separable subscript, apply the appropriate single subscript test (ZIV,
SIV or MIV) based on the complexity of the subscript. If independence is proved,
no further testing is needed; otherwise the test produces direction vectors for the
indexes occurring in that subscript.
d) For each coupled group, apply a multiple subscript test to produce a set of direction
vectors for the indexes occurring within that group.
i) If any test yields independence, no dependences exist.
ii) Otherwise merge all the direction vectors computed in the previous steps into a
single set of direction vectors for the two references.
iii) This algorithm is implemented in both PFC, an automatic vectorizing and
parallelizing compiler, as well as ParaScope, a parallel programming environment.
b. What are the principles of synchronization mechanisms? Explain them.
(08 Marks)
Ans. Synchronization
Synchronization, a.k.a. serialization, is making tasks run in a serial sequence (one
at a time) instead of all at once, because of dependencies between the tasks. An
example of this is that when building a house, the person building the basement needs
to tell the person building the ground floor when the basement is done, so that work
on the ground floor can start.
In a more complex case, it may be that parts of the ground floor are only dependent
on parts of the basement, so the person building the ground floor may be able to
begin work sooner. This more complex case is related to the Producer-Consumer
pattern discussed below.
Mutual Exclusion
Mutual exclusion is controlling access to shared resources. Continuing with the
construction example, only one person can use the same drill at the same time, or
things get ugly. Likewise, things like a disk or a data structure often require a limit
of one thread accessing it at a time to ensure the desired effect.
Data Sharing
Data sharing, which is closely related to mutual exclusion, is managing the duplication
of data structures or devices. These duplicated resources would in turn send updates
to a manager of sorts, and would be informed of updates by the manager.
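The Producer-Consumer pattern mentioned above can be sketched with Python's queue module (an illustrative example, not from the original text): the bounded queue serializes the hand-off, so the consumer only proceeds once the producer has finished each item, just as the ground-floor builder waits on finished pieces of the basement.

```python
import queue
import threading

work = queue.Queue(maxsize=2)   # bounded buffer between the two roles
results = []

def producer():
    for item in range(5):       # "basement" pieces, finished in order
        work.put(item)          # blocks if the buffer is full
    work.put(None)              # sentinel: nothing more to build

def consumer():
    while True:
        item = work.get()       # blocks until a piece is ready
        if item is None:
            break
        results.append(item * 10)   # "ground floor" work on that piece

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
# results == [0, 10, 20, 30, 40], in order: the dependency is enforced.
```

The small maxsize also throttles the producer, so neither side runs arbitrarily far ahead of the other.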


An example of this is what Massively-Multiplayer Online (MMO) games use to
avoid certain problems associated with network lag (the delay between sending a
notification of an action to the server and receiving the result of that action). If this
was done with all world data processing done on the server, a player wouldn't know
whether or not an opponent had been defeated until the network responds with the
result. With data sharing, the client program can immediately determine, based on its
copy of the local environment, that it is likely that the opponent has been defeated,
and then receive either confirmation or rejection of that assumption when the server
responds with its "official" version of events.
Message Passing is this way of managing duplication of state via coordinated
communication between copies. Shared Memory is a way to simulate duplication of
state in a way that, in some circumstances, can allow concurrent access, and generally
access with less overhead than message passing.
