Question Bank — Parallel and Distributed Computing
Each entry below lists the question type, difficulty, sub-topic, and marks.
1. (Single choice, Easy, Introduction to PDC, 1 mark)
The main purpose of parallel computing is to:
a. perform parallel tasks
b. perform computations faster
c. use a number of processors concurrently
d. all of the above

2. (Single choice, Hard, Introduction to PDC, 3 marks)
Three main factors contributed to the current strong trend of parallel processing:
a. hardware cost, applications and AI
b. speedup, execution time and complexity
c. cost-optimal processors, LSI circuits, faster cycle time
d. fluid dynamics, simulation and image processing

3. (Single choice, Easy, Introduction to PDC, 1 mark)
Which application areas fall within the scope of parallel computing?
a. scientific, engineering and design
b. commercial, computer systems
c. both (a) and (b)
d. neither (a) nor (b)

4. (Single choice, Easy, Parallel programming platforms, 1 mark)
The logical view of a sequential computer consists of:
a. processor, memory and data path
b. pipelines, processing elements and buses
c. all of the above
d. none of the above
69. (Single choice, Easy, Dijkstra's algorithm, 1 mark)
Which parallel approaches are used in Dijkstra's algorithm for the all-pairs shortest-path problem?
a. source partitioned and source parallel
b. source vertex and source parallel
c. source edge and source vertex
d. source edge and source parallel

70. (Single choice, Easy, Graph algorithms, 1 mark)
Every node in the graph is handled independently of the others in:
a. fine grain
b. vertex grain
c. edge grain
d. coarse grain

71. (Single choice, Easy, Graph algorithms, 1 mark)
The p/n processors share the work associated with one vertex in:
a. vertex grain
b. edge grain
c. fine grain
d. coarse grain

72. (Single choice, Hard, Algorithms for sparse graphs, 3 marks)
In sparse graphs, while using an adjacency-list representation, the algorithm can be modified to run efficiently when:
a. |E| = O(n*n/log n)
b. |E| = O(n^2/log n)
c. |E| = O(n/log n)
d. |E| = O(n^n/log n)

73. (Single choice, Easy, Dense matrix algorithms, 1 mark)
When the matrix is evenly distributed over the processors, the result is:
a. balanced work load and little communication
b. unbalanced work load and increased communication
c. unbalanced work load and little communication
d. balanced work load and increased communication
74. (Fill in the blanks, Medium, Other sorting algorithms, 2 marks)
In Johnson's algorithm, initially every vertex's distance is set to ____ except the starting vertex, which has ____.
a. infinity, 1
b. 0, 1
c. 1, infinity
d. infinity, 0

75. (Single choice, Hard, Dense matrix algorithms, 3 marks)
In row-wise 1-D partitioning, the total parallel time is:
a. O(n)
b. Θ(n*n)
c. Θ(n)
d. O(n*n)

76. (Single choice, Hard, Dense matrix algorithms, 3 marks)
In row-wise 1-D partitioning, the overall isoefficiency is:
a. W = O(p*p)
b. W = O(p^2)
c. W = Θ(p^2)
d. W = Θ(p*p)

77. (Single choice, Hard, Dense matrix algorithms, 3 marks)
The cost (processor-time product) in 2-D partitioning is:
a. Θ(log n)
b. Θ(n log n)
c. Θ(n*n log n)
d. Θ(n^2 log n)

78. (Single choice, Medium, Dense matrix algorithms, 2 marks)
In matrix-matrix multiplication, the serial complexity is:
a. O(n^3)
b. O(n^n)
c. O(n^2)
d. O(n*n)

79. (Single choice, Medium, Dense matrix algorithms, 2 marks)
Which algorithm uses 3-D partitioning?
a. Prim's algorithm
b. Dijkstra's algorithm
c. DNS algorithm
d. Floyd's algorithm
80. (Single choice, Hard, Other sorting algorithms, 3 marks)
The lower bound on any comparison-based sort of n numbers is Ω(n log n).
a. True
b. False
81. (Single choice, Easy, Search algorithms for discrete optimization, 1 mark)
A discrete optimization problem can be expressed as a:
a. set (S, f)
b. tuple (S, f)
c. solution (S, f)
d. function (S, f)

82. (Single choice, Easy, Search algorithms for discrete optimization, 1 mark)
In discrete optimization, if the estimate is guaranteed to be an underestimate, the heuristic is called:
a. heuristic estimate
b. heuristic underestimate
c. intermediate heuristic
d. admissible heuristic

83. (Single choice, Hard, Parallel DFS and BFS, 3 marks)
In the 8-puzzle problem, the distance between positions (i, j) and (k, l) is defined as:
a. |i-k| + |j+l|
b. |i-j| - |k-l|
c. |i-k| + |j-l|
d. |i+j| + |k+l|

84. (Single choice, Easy, Parallel DFS and BFS, 1 mark)
The main advantage of DFS is that:
a. its storage requirement is linear
b. its storage requirement is non-linear
c. its storage requirement is the state space
d. its storage requirement is the directed space

85. (Single choice, Easy, Parallel DFS and BFS, 1 mark)
Simple backtracking in the DFS algorithm is not guaranteed to find:
a. successors of the node
b. a minimum-cost solution
c. a maximum-cost solution
d. the first feasible solution

86. (Fill in the blanks, Easy, Sequential search algorithms, 1 mark)
In iterative deepening search, if no solution is found, the bound is ______ and the process is ______.
a. ignored, repeated
b. decreased, repeated
c. increased, not repeated
d. increased, repeated
87. (Single choice, Easy, Sequential search algorithms, 1 mark)
IDA defines a function for node x in the search space as L(x) =
a. f(x) + h(x)
b. g(x) + h(x)
c. f(x) + g(x)
d. f(x) - g(x)

88. (Single choice, Medium, Parallel DFS and BFS, 2 marks)
The total space requirement of the DFS algorithm is:
a. O(md)
b. Θ(md)
c. Ω(md)
d. O(1)

89. (Single choice, Medium, Speedup anomalies in parallel search algorithms, 2 marks)
The search overhead factor s is defined as:
a. (Wp/W)p
b. (Wp/p)W
c. Wp/W
d. W/Wp

90. (Single choice, Easy, Parallel DFS and BFS, 1 mark)
Requesting a randomly selected processor for work in a load-balancing scheme is known as:
a. global round robin
b. asynchronous round robin
c. round robin
d. random polling
91. (Single choice, Easy, Parallel BFS and DFS, 1 mark)
While analyzing DFS, the total number of network requests is O(V(p) log W).
a. True
b. False

92. (Single choice, Easy, Sequential search algorithms, 1 mark)
In asynchronous round robin, V(p) = O(p^2) in the worst case.
a. True
b. False

93. (Single choice, Hard, Speedup anomalies in parallel search algorithms, 3 marks)
The total communication overhead while analyzing DFS is given by T_o = t_comm V(p) log n.
a. True
b. False

94. (Fill in the blanks, Easy, Parallel BFS and DFS, 1 mark)
In tree-based termination detection, termination is signaled when the weight at processor ___ becomes ___ again.
a. P, 1
b. P0, 0
c. P1, 0
d. P0, 1

95. (Single choice, Medium, Sequential search algorithms, 2 marks)
The parallel formulations of IDA are:
a. common cost and variable cost
b. concurrency cost and common cost
c. variable cost and concurrency cost
d. only concurrency cost
96. (Single choice, Hard, Parallel BFS and DFS, 3 marks)
The upper bound on speedup in parallel BFS is (t_access + t_exp)/t_access.
a. True
b. False

97. (Single choice, Medium, Speedup anomalies in parallel search algorithms, 2 marks)
Executions yielding speedups greater than p using p processors are referred to as deceleration anomalies.
a. True
b. False

98. (Single choice, Hard, Parallel BFS and DFS, 3 marks)
In parallel BFS, each processor locks the queue, extracts the ______, and unlocks it.
a. successor node
b. worst node
c. best node
d. all the nodes

99. (Single choice, Medium, Parallel BFS and DFS, 2 marks)
Parallel formulations of DFBB are similar to those of parallel BFS.
a. True
b. False
100. (Match the following, Hard, Parallel DFS and BFS, 3 marks)
Match each load-balancing scheme with its property.
Schemes: (i) asynchronous round robin; (ii) global round robin; (iii) random polling.
Properties: (i) strikes a desirable compromise; (ii) has poor performance because of contention; (iii) has poor performance because it makes a large number of work requests.
a. (i)-(i), (ii)-(ii), (iii)-(iii)
b. (i)-(ii), (ii)-(iii), (iii)-(i)
c. (i)-(ii), (ii)-(i), (iii)-(iii)
d. (i)-(iii), (ii)-(ii), (iii)-(i)

1. (Subjective, Easy, Introduction to PDC, 2 marks) What is a parallel computer and what is parallel computing?
2. (Subjective, Easy, Physical organization, 2 marks) What are the subclasses of PRAM?

3. (Subjective, Easy, Communication costs in parallel machines, 3 marks) What are the principal parameters that determine the communication latency in message-passing platforms?

4. (Subjective, Hard, Interconnection networks, 3 marks) Differentiate between packet routing and cut-through routing.

5. (Subjective, Easy, Trends in microprocessor architectures, 3 marks) Briefly explain the terms: true data dependency, resource dependency, branch dependency, vertical waste, horizontal waste, VLIW.
6. (Subjective, Medium, Trends in microprocessor architectures, 2 marks) Briefly explain superscalar execution.

7. (Subjective, Medium, Physical organization, 2 marks) What is the architecture of an ideal parallel computer?

8. (Subjective, Hard, Interconnection networks, 3 marks) Explain store-and-forward routing with a neat diagram.

9. (Subjective, Hard, Trends in microprocessor architectures, 3 marks) What is pipelining? Briefly explain the instruction pipeline.

10. (Subjective, Medium, Mapping techniques, 2 marks) How do you embed a hypercube in a 2-D mesh?

11. (Subjective, Easy, Overview of CUDA, 2 marks) What is CUDA?
12. (Subjective, Easy, Overview of CUDA, 2 marks) What are GPU and GPGPU?

13. (Subjective, Easy, API functions to allocate memory, 2 marks) What are data parallelism and task parallelism?

14. (Subjective, Easy, Introduction to threads, blocks and grids, 2 marks) Explain threads, blocks and grids.

15. (Subjective, Medium, Introduction to threads, blocks and grids, 3 marks) Define vector addition of blocks.

16. (Subjective, Medium, Introduction to threads, blocks and grids, 3 marks) Define vector addition of threads.
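Questions 15 and 16 hinge on how CUDA maps vector addition onto blocks and threads: each thread computes one element at global index blockIdx.x * blockDim.x + threadIdx.x. The sketch below only simulates that index arithmetic with sequential Python loops (the names mirror CUDA's but this is an illustration, not GPU code):

```python
# Simulate CUDA-style vector addition: each (block, thread) pair computes
# one element via the global index block_idx * threads_per_block + thread_idx.
def vector_add(a, b, threads_per_block=4):
    n = len(a)
    c = [0] * n
    num_blocks = (n + threads_per_block - 1) // threads_per_block  # ceiling division
    for block_idx in range(num_blocks):              # on a GPU, blocks run concurrently
        for thread_idx in range(threads_per_block):  # threads within a block
            i = block_idx * threads_per_block + thread_idx  # global element index
            if i < n:                                # guard: last block may have idle threads
                c[i] = a[i] + b[i]
    return c

print(vector_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # [11, 22, 33, 44, 55]
```

The `if i < n` guard corresponds to the bounds check a real kernel needs when the vector length is not a multiple of the block size.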
17. (Subjective, Medium, Developing a kernel function, 2 marks) A CUDA programmer says that if they launch a kernel with only 32 threads in each block, they can leave out the __syncthreads() instruction wherever barrier synchronization is needed. Do you think this is a good idea? Explain.

18. (Subjective, Hard, Executing a kernel function, 3 marks) A student mentioned that he was able to multiply two 1024×1024 matrices using a tiled matrix-multiplication code with 32×32 thread blocks. He is using a CUDA device that allows up to 512 threads per block and up to 8 blocks per SM. He further mentioned that each thread in a thread block calculates one element of the result matrix. What would be your reaction, and why?

19. (Subjective, Hard, Developing a kernel function, 3 marks) What is a kernel function in CUDA?

20. (Subjective, Hard, Executing a kernel function, 3 marks) What is a host in CUDA?
21. (Subjective, Easy, Analytical modelling of parallel programs, 2 marks) What is analytical modelling of parallel programs?

22. (Subjective, Easy, Sources of overhead, 2 marks) What do you mean by overhead?

23. (Subjective, Easy, Effect of granularity on performance, 2 marks) What is the effect of granularity on performance?

24. (Subjective, Easy, Scalability of parallel systems, 2 marks) Explain the following terms: problem size and processing elements.

25. (Subjective, Medium, Sources of overhead, 2 marks) What is idling?

26. (Subjective, Medium, Performance metrics for parallel systems, 3 marks) How are efficiency and speedup related to each other?
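Question 26's relationship can be stated directly: speedup is serial time over parallel time, and efficiency is speedup divided by the number of processing elements. A minimal Python sketch (illustrative helper names, not from any library):

```python
# Standard performance metrics:
#   speedup    S = T_s / T_p   (serial time over parallel time on p processors)
#   efficiency E = S / p       (fraction of ideal speedup actually achieved)
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    return speedup(t_serial, t_parallel) / p

# Example: a 100 s serial job runs in 20 s on 8 processors.
print(speedup(100.0, 20.0))        # 5.0
print(efficiency(100.0, 20.0, 8))  # 0.625, i.e. 62.5% efficiency
```

E = 1 (100%) would mean perfectly linear speedup; real programs fall below it because of the overheads asked about in questions 22 and 25.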
27. (Subjective, Medium, Scalability of parallel systems, 3 marks) What is total parallel overhead?

28. (Subjective, Hard, Scalability of parallel systems, 3 marks) Differentiate between large isoefficiency and small isoefficiency.

29. (Subjective, Hard, Scalability of parallel systems, 3 marks) What is scalability of a parallel system?

30. (Subjective, Hard, Sources of overhead, 3 marks) Is it necessary for a parallel program to have a serial component? If there is a serial component, how will it be executed?

31. (Subjective, Easy, Dense matrix algorithms, 2 marks) Give a brief idea about matrix-vector multiplication.
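As background for question 31: the serial matrix-vector product computes y[i] as the dot product of row i of A with x. In a row-wise 1-D parallel formulation (cf. questions 75-76), each process would own a band of rows and compute the corresponding entries of y independently. A minimal serial sketch:

```python
# Serial matrix-vector multiplication y = A x, where y[i] = sum_j A[i][j] * x[j].
# In a row-wise 1-D partitioning, each process owns a contiguous band of rows
# of A and computes its slice of y with no communication beyond gathering x.
def mat_vec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4]]
x = [5, 6]
print(mat_vec(A, x))  # [17, 39]
```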
32. (Subjective, Easy, Dense matrix algorithms, 2 marks) Give a brief idea about matrix-matrix multiplication.

33. (Subjective, Easy, Dense matrix algorithms, 2 marks) What is a dense matrix algorithm and what are its types?

34. (Subjective, Hard, Dense matrix algorithms, 3 marks) Briefly explain Cannon's algorithm.

35. (Subjective, Medium, Bubble sort and variants, 2 marks) Explain bubble sort with the help of the algorithm.
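For question 35's sub-topic "Bubble sort and variants", the variant that matters for parallel machines is odd-even transposition sort: in each phase disjoint pairs are compared, so all n/2 comparisons of a phase could run concurrently. This serial Python sketch just illustrates the phase structure:

```python
# Odd-even transposition sort, the parallel-friendly variant of bubble sort.
# Even phases compare pairs (0,1), (2,3), ...; odd phases compare (1,2), (3,4), ...
# The pairs within a phase are disjoint, so a parallel machine could do them all at once.
def odd_even_sort(a):
    a = list(a)
    n = len(a)
    for phase in range(n):          # n phases suffice to fully sort
        start = phase % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```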
36. (Subjective, Medium, Graph algorithms, 3 marks) What are the two types of representation of graphs?

37. (Subjective, Medium, Graph algorithms, 3 marks) Explain the properties of graphs.

38. (Subjective, Easy, Minimum spanning tree, 2 marks) What do you mean by a minimum spanning tree?

39. (Subjective, Hard, Transitive closure, 3 marks) What is transitive closure?

40. (Subjective, Hard, Connected components, 3 marks) How are connected components used in graph algorithms?

41. (Subjective, Easy, Search algorithms for discrete optimization problems, 2 marks) What is discrete optimization, and what is a discrete optimization problem?

42. (Subjective, Easy, Parallel DFS and BFS, 2 marks) What is simple backtracking in DFS?

43. (Subjective, Easy, Parallel DFS and BFS, 2 marks) What is DFBB?

44. (Subjective, Easy, Sequential search algorithms, 2 marks) What is IDA?

45. (Subjective, Medium, Sequential search algorithms, 2 marks) What is iterative deepening search?
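For question 45 (and the fill-in question 86 above): iterative deepening search runs a depth-limited DFS, and if no solution is found, the bound is increased and the search repeated. This combines DFS's linear storage with BFS's shallowest-solution guarantee. A small illustrative sketch (assumes an acyclic search space; the graph and names are hypothetical):

```python
# Depth-limited DFS: explore from `node` at most `bound` edges deep.
def depth_limited(node, goal, neighbors, bound):
    if node == goal:
        return [node]
    if bound == 0:
        return None
    for nxt in neighbors(node):
        path = depth_limited(nxt, goal, neighbors, bound - 1)
        if path is not None:
            return [node] + path
    return None

# Iterative deepening: if no solution is found at the current bound,
# the bound is increased and the process is repeated.
def iterative_deepening(start, goal, neighbors, max_bound=20):
    for bound in range(max_bound + 1):
        path = depth_limited(start, goal, neighbors, bound)
        if path is not None:
            return path
    return None

# Tiny example DAG as an adjacency dict.
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(iterative_deepening('A', 'D', lambda v: graph[v]))  # ['A', 'B', 'D']
```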
46. (Subjective, Medium, Parallel DFS and BFS, 3 marks) What are the DFS storage requirements?

47. (Subjective, Medium, Parallel BFS and DFS, 3 marks) Differentiate between BFS and DFS.

48. (Subjective, Hard, Parallel DFS and BFS, 3 marks) Explain parallel DFS with the help of a diagram.

49. (Subjective, Hard, Parallel DFS and BFS, 3 marks) Explain tree-based termination detection with the help of a diagram.

50. (Subjective, Hard, Parallel DFS and BFS, 3 marks) Differentiate between parallel BFS and parallel DFS.
1. (Subjective, Easy, Dichotomy of parallel computing platforms, 5 marks) Explain the dichotomy of parallel computing platforms.

2. (Subjective, Hard, Interconnection networks, 5 marks) Explain interconnection networks with diagrams.

3. (Subjective, Hard, Decomposition techniques, 5 marks) What are decomposition techniques? Explain each of them.

4. (Subjective, Medium, Mapping techniques for load balancing, 5 marks) What mapping techniques are used for load balancing?

5. (Subjective, Medium, Characteristics of tasks, 5 marks) What are the characteristics of inter-task interaction?

6. (Subjective, Medium, API functions to allocate memory, 5 marks) What are the API functions in CUDA?

7. (Subjective, Hard, Introduction to threads, blocks, and grids, 5 marks) If a device supports compute capability 1.3, then it can have blocks with a maximum of 512 threads/block, and 8 blocks/SM can be scheduled concurrently. Each SM can schedule groups of 32-thread units called warps. The maximum number of resident warps per SM in a device that supports compute capability 1.3 is 32, and the maximum number of resident threads per SM is 1024. What would be the ideal block granularity to compute the product of two 2-D matrices of size 1024×1024?
a. 4×4
b. 8×8
8. (Subjective, Easy, API functions to transfer back data, 5 marks) How do you transfer data back to the host processor with an API function? Explain with the help of an example.

9. (Subjective, Easy, Isolating data used by parallelized code, 5 marks) How is isolated data used by parallelized code? Explain with an example.

10. (Subjective, Hard, Developing kernel functions, 5 marks) What are cudaMalloc(), cudaMallocHost(), cudaMemcpy() and cudaFree()? Explain with an example.

11. (Subjective, Easy, Performance metrics for parallel systems, 5 marks) What are the different performance metrics for parallel systems?

12. (Subjective, Medium, Scalability of parallel systems, 5 marks) Write the scaling characteristics of parallel programs with equations.

13. (Subjective, Medium, Scalability of parallel systems, 5 marks) What is the isoefficiency function? Write down the equations of the isoefficiency function.

14. (Subjective, Hard, Minimum execution time and cost-optimal execution time, 5 marks) What are minimum execution time and minimum cost-optimal execution time?

15. (Subjective, Easy, Sources of overhead, 5 marks) What are the sources of overhead in parallel programs?

16. (Subjective, Medium, Dijkstra's algorithm, 5 marks) Explain Dijkstra's algorithm with the help of an example.
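A worked example for question 16: Dijkstra's single-source shortest-path algorithm initializes every distance to infinity except the source, which gets 0 (the convention question 74 asks about), then repeatedly settles the closest unsettled vertex. A compact serial sketch with an illustrative graph:

```python
import heapq

# Dijkstra's single-source shortest-path algorithm over a weighted digraph
# given as an adjacency list of (neighbor, weight) pairs.
def dijkstra(graph, source):
    dist = {v: float('inf') for v in graph}  # all distances start at infinity...
    dist[source] = 0                         # ...except the source, which starts at 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)           # settle the closest unsettled vertex
        if d > dist[u]:
            continue                         # stale heap entry; u was settled earlier
        for v, w in graph[u]:
            if d + w < dist[v]:              # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Hypothetical example graph.
g = {'a': [('b', 4), ('c', 1)],
     'b': [('d', 1)],
     'c': [('b', 2), ('d', 5)],
     'd': []}
print(dijkstra(g, 'a'))  # {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```

Note how the path a→c→b→d (cost 4) beats the direct edges, which is exactly the relaxation step an exam answer should trace.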
17. (Subjective, Easy, Graph algorithms, 5 marks) Differentiate between all-pairs shortest path and single-source shortest path.

18. (Subjective, Easy, Issues in sorting on parallel computers, 5 marks) What are the issues in sorting on parallel computers?

19. (Subjective, Medium, Algorithms for sparse graphs, 5 marks) What algorithms are used for sparse graphs?

20. (Subjective, Hard, Prim's algorithm, 5 marks) Explain Prim's algorithm with the help of an example.
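A worked example for question 20: Prim's algorithm grows a minimum spanning tree from an arbitrary start vertex, always adding the cheapest edge that reaches a new vertex. A serial sketch with an illustrative undirected graph (each edge listed from both endpoints):

```python
import heapq

# Prim's minimum spanning tree algorithm on an undirected weighted graph
# given as an adjacency list of (neighbor, weight) pairs.
def prim(graph, start):
    visited = {start}
    heap = [(w, start, v) for v, w in graph[start]]  # edges (weight, from, to) leaving the tree
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)    # cheapest edge crossing the cut
        if v in visited:
            continue                     # edge no longer crosses the cut; skip it
        visited.add(v)
        mst.append((u, v, w))
        total += w
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))
    return mst, total

# Hypothetical example graph.
g = {'a': [('b', 2), ('c', 3)],
     'b': [('a', 2), ('c', 1)],
     'c': [('a', 3), ('b', 1)]}
edges, weight = prim(g, 'a')
print(edges, weight)  # edges (a,b,2) and (b,c,1), total weight 3
```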
21. (Subjective, Medium, Parallel DFS and BFS, 5 marks) Explain parallel BFS with the help of an example.

22. (Subjective, Easy, Parallel DFS and BFS, 5 marks) How do you analyze DFS for random polling?

23. (Subjective, Medium, Parallel DFS and BFS, 5 marks) Explain the analysis for the following: asynchronous round robin, global round robin, random polling.

24. (Subjective, Medium, Parallel DFS and BFS, 5 marks) What is Dijkstra's token termination detection?

25. (Subjective, Hard, Sequential search algorithms, 5 marks) Explain the formulation of DFBB and IDA.