Answer:
a) Parallel speedup is defined as the ratio of the time required to compute some function on a
single processor (T1) to the time required to compute it on P processors (TP). That is:
speedup = T1/TP. For example, if it takes 10 seconds to run a program sequentially and 2 seconds to
run it in parallel on some number of processors P, then the speedup is 10/2 = 5 times.
b) Parallel efficiency measures how effectively the parallel processors are being used. For P
processors, it is defined as: efficiency = (1/P) x speedup = (1/P) x (T1/TP). Continuing with the
same example, if P is 10 processors and the speedup is 5 times, then the parallel efficiency is
5/10 = 0.5. In other words, on average only half of the processors contributed to the speedup and
the other half were idle.
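The two definitions above can be checked with a short Python sketch (the function names are my own, not from the source):

```python
def speedup(t1, tp):
    """Parallel speedup: sequential time T1 divided by parallel time TP."""
    return t1 / tp

def efficiency(t1, tp, p):
    """Parallel efficiency: speedup divided by the processor count P."""
    return speedup(t1, tp) / p

# The example from the text: 10 s sequentially, 2 s in parallel, P = 10.
print(speedup(10, 2))         # 5.0
print(efficiency(10, 2, 10))  # 0.5
```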
c) Amdahl’s law states that the maximum speedup attainable when parallelizing an algorithm is limited by
the sequential portion of the code. Given an algorithm that is P% parallel, Amdahl’s law states
that: Maximum Speedup = 1/(1 - (P/100)). For example, if 80% of a program is parallel, then the
maximum speedup is 1/(1 - 0.8) = 1/0.2 = 5 times. If the program in question took 10 seconds to run
serially, the best we could hope for from a parallel execution would be for it to take 2 seconds (10/5 = 2).
This is because the serial 20% of the program cannot be sped up: it takes 0.2 x 10 seconds = 2
seconds even if the rest of the code runs perfectly in parallel on an infinite number of processors, so
that it takes 0 seconds to execute.
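The Amdahl's law example above can be reproduced numerically (a minimal sketch; the function name is my own):

```python
def amdahl_max_speedup(parallel_fraction):
    """Amdahl's law: maximum speedup with unlimited processors,
    given the fraction of the program that is parallelizable."""
    return 1.0 / (1.0 - parallel_fraction)

# The example from the text: 80% of the program is parallel.
best = amdahl_max_speedup(0.8)
print(round(best, 6))       # 5.0
print(round(10 / best, 6))  # 2.0 -- best possible time for a 10 s serial run
```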
The Gustafson-Barsis law states that speedup tends to increase with problem size (since the fraction
of time spent executing serial code goes down). Gustafson-Barsis’ law is thus a measure of what is
known as “scaled speedup” (scaled by the number of processors used on a problem) and it can be
stated as: Maximum Scaled Speedup = p + (1 - p)s, where p is the number of processors and s is the
fraction of total execution time spent in serial code. This law tells us that attainable speedup is often
related to problem size, not just the number of processors used. In essence, Amdahl’s law assumes
that the percentage of serial code is independent of problem size. This is not necessarily true (e.g.,
consider the overhead of managing the parallelism: synchronization, etc.). Thus, in some sense,
Gustafson-Barsis’ law generalizes Amdahl’s law.
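The scaled-speedup formula above can be sketched in the same way. The figures below (64 processors, 5% serial time) are hypothetical, chosen only to illustrate the formula; they do not come from the source:

```python
def gustafson_scaled_speedup(p, s):
    """Gustafson-Barsis law: scaled speedup on p processors,
    where s is the fraction of total execution time spent in serial code."""
    return p + (1 - p) * s

# Hypothetical example: 64 processors, 5% of time in serial code.
# 64 + (1 - 64) * 0.05 = 64 - 3.15 = 60.85
print(round(gustafson_scaled_speedup(64, 0.05), 2))  # 60.85
```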
Answer:
a) 8
b) 7
c) 14 / 7 = 2
d) 8
e) 8 → 14/7=2, 4 → 15/8 = 1.875, 2 → 15/10 = 1.5
4. Give the classification of shared memory computers with definitions. What are the limitations of
shared memory?
Ans:
5. What is Implicit Parallelism? Explain pipelining and superscalar execution in parallel
processing with suitable examples.
Ans:
6. Explain how the task interaction graph and the task dependency graph each play their own
role in mapping tasks to processes. Clearly explain why task interaction graphs represent
input and output dependencies while task dependency graphs represent flow and
anti-dependencies, with suitable examples.
Ans:
7. a) Consider the execution of a program of 55,000 instructions by a linear pipeline processor
with a clock rate of 30 MHz. Assume that the instruction pipeline has twelve stages and that
one instruction is issued per clock cycle. The penalties due to branch instructions and
out-of-sequence executions are ignored.
Calculate the speedup factor of using this pipeline to execute the program, as compared with
the use of an equivalent non-pipelined processor with an equal amount of flow-through
delay.
b) Justify the concept of SIMD computing in different parallel architectures.
Ans:
a)
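Part (a) can be checked numerically, assuming the standard linear-pipeline timing model: a k-stage pipeline completes n instructions in (k + n - 1) cycles, while an equivalent non-pipelined processor with the same flow-through delay needs n x k cycles:

```python
n = 55_000   # instructions in the program
k = 12       # pipeline stages
f = 30e6     # clock rate: 30 MHz

t_nonpipelined = n * k / f     # each instruction takes the full k cycles
t_pipelined = (k + n - 1) / f  # fill the pipe once, then one result per cycle

speedup = t_nonpipelined / t_pipelined  # = n*k / (k + n - 1)
print(round(speedup, 3))  # 11.998 -- approaches the stage count k for large n
```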
b) SIMD