
Name of College/University: NOIDA INSTITUTE OF ENGINEERING AND TECHNOLOGY
Course Name/Branch: B.Tech / Computer Science
Paper Code: RCS-083
Subject Name: Parallel and Distributed Computing
Exam Name: Pre-University Test
Total Question Pool: 175

Each question below is listed as: Question Type | Difficulty | Sub-Category | Marks.
(Every question belongs to the category "Parallel and Distributed Computing"; only the sub-category is shown.)
Single choice | Easy | Introduction to PDC | 1 mark
1. The main purpose of parallel computing is:
a. performs parallel tasks
b. performs computations faster
c. uses a number of processors concurrently
d. all of the above

Single choice | Hard | Introduction to PDC | 3 marks
2. Three main factors have contributed to the current strong trend in parallel processing:
a. hardware cost, applications and AI
b. speedup, execution time and complexity
c. cost-optimal processors, LSI circuits, faster cycle time
d. fluid dynamics, simulation and image processing

Single choice | Easy | Introduction to PDC | 1 mark
3. Which applications fall within the scope of parallel computing?
a. scientific, engineering and design
b. commercial, computer systems
c. both (a) and (b)
d. neither (a) nor (b)

Single choice | Easy | Parallel programming platforms | 1 mark
4. The logical view of a sequential computer consists of:
a. processor, memory and data path
b. pipelines, processing elements and buses
c. all of the above
d. none of the above

Single choice | Easy | Parallel programming platforms | 1 mark
5. The instruction cycle is divided into:
a. 2 steps
b. 4 steps
c. 6 steps
d. 8 steps
Single choice | Easy | Trends in microprocessor architectures | 1 mark
6. When two instructions compete for a single resource, it is called:
a. data dependency
b. procedural dependency
c. horizontal waste
d. none of the above

Single choice | Easy | Trends in microprocessor architectures | 1 mark
7. The full form of VLIW is:
a. very large instruction word
b. very low instrument word
c. very long instruction word
d. very low instruction word

Single choice | Easy | Dichotomy of parallel computing platforms | 1 mark
8. The dichotomy of parallel computing platforms is based on the control structure and the communication model.
a. True
b. False

Single choice | Medium | Physical organization | 2 marks
9. EREW is the most powerful PRAM model.
a. True
b. False

Single choice | Easy | Interconnection networks | 1 mark
10. Static networks consist of:
a. point-to-point buses
b. direct links
c. point-to-point communication links
d. indirect links

Single choice | Medium | Interconnection networks | 2 marks
11. Which criteria are used to evaluate an interconnection network?
a. diameter, connectivity, bandwidth and cost
b. diameter, nodes, links and cost
c. bisection width, channel rate and cost
d. arc connectivity, links and cost

Single choice | Easy | Interconnection networks | 1 mark
12. In cut-through routing, the message is broken into fixed-size units called packets.
a. True
b. False
Single choice | Easy | Decomposition techniques | 1 mark
13. Which routing scheme routes the message along a longer path to avoid network congestion?
a. minimal routing
b. non-minimal routing
c. deterministic routing
d. adaptive routing

Single choice | Hard | Mapping techniques | 3 marks
14. Embedding a mesh into a hypercube is an extension of embedding a ring into a hypercube.
a. True
b. False

Single choice | Hard | Decomposition techniques | 3 marks
15. The decomposition techniques, when used together, are known as:
a. collaborative decomposition
b. hierarchical decomposition
c. hybrid decomposition
d. similar decomposition

Single choice | Medium | Characteristics of tasks | 2 marks
16. The scenario in which all the tasks are known before the algorithm starts execution is called:
a. static task generation
b. dynamic task generation
c. input task generation
d. output task generation

Single choice | Hard | Mapping techniques | 3 marks
17. Which mapping technique distributes the work among processes during the execution of the algorithm?
a. static mapping
b. regular mapping
c. array mapping
d. dynamic mapping

Single choice | Medium | Decomposition techniques | 2 marks
18. The number and size of tasks into which a problem is decomposed is referred to as:
a. granularity of the decomposition
b. packets of the decomposition
c. flits of the decomposition
d. tasks of the decomposition

Single choice | Medium | Routing mechanisms | 2 marks
19. What do t(s), t(h) and t(w) denote?
a. serial time, head time and word time
b. sequential time, hop time and work time
c. startup time, header time and window time
d. startup time, hop time and word time

Single choice | Hard | Mapping techniques | 3 marks
20. A hypercube has a diameter of log p and its cost is (p log p)/2.
a. True
b. False
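Reference: for a p-node hypercube, taking cost in its standard sense of the total number of links, the static-network metrics are

    diameter = log2 p,   bisection width = p/2,   cost = (p log2 p)/2

since each of the p nodes has log2 p links and each link is shared by two nodes.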

Single choice | Easy | Overview of CUDA | 1 mark
21. The full form of CUDA is:
a. computer unifies data architecture
b. computing universe device architecture
c. compute unique data architecture
d. compute unified device architecture

Single choice | Medium | Overview of CUDA | 2 marks
22. The CUDA platform is designed to work with programming languages such as:
a. Java, VB.NET and HTML
b. C, C++ and Fortran
c. Fortran, Java and C
d. C, C++ and Python

Single choice | Easy | Isolating data to be used by parallelized code | 1 mark
23. A property that determines when and how changes made by one operation become visible to other concurrent users and systems:
a. parallelization
b. concurrency
c. data isolation
d. data validation

Single choice | Hard | API functions for allocating memory | 3 marks
24. Which API function is used to allocate a piece of global memory?
a. cudaalloc()
b. cudamemory()
c. cudaloc()
d. cudaMalloc()

Single choice | Medium | API functions for transferring back the data | 2 marks
25. cudaFree() can be used to:
a. exit the CUDA system
b. free the resources in CUDA
c. deallocate memory
d. reallocate freed memory
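Reference: a minimal sketch of how these two runtime calls pair up (the real API names are case-sensitive: cudaMalloc() and cudaFree()).

// Allocate and release an array of N floats in GPU global memory.
#include <cuda_runtime.h>

int main() {
    const int N = 1024;
    float *d_a = nullptr;                          // device pointer
    cudaMalloc((void **)&d_a, N * sizeof(float));  // allocate global memory on the GPU
    // ... launch kernels that use d_a ...
    cudaFree(d_a);                                 // deallocate it
    return 0;
}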
Single choice | Easy | Introduction to threads, blocks and grids | 1 mark
26. A programming abstraction used to represent a group of threads is:
a. grids
b. blocks
c. parallel threads
d. multiple threads

Single choice | Easy | Introduction to threads, blocks and grids | 1 mark
27. All the blocks in the same grid contain the same number of threads.
a. True
b. False

Single choice | Hard | Introduction to threads, blocks and grids | 3 marks
28. The number of threads in a block is limited to:
a. 1024
b. 2048
c. 4096
d. 8192
Single choice | Easy | API functions to transfer back the data | 1 mark
29. cudaMemcpy() is used to move data from the host to the GPU and vice-versa.
a. True
b. False
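Reference: a short sketch of cudaMemcpy() moving data in both directions between host and device.

// Copy an array to the GPU and back.
#include <cuda_runtime.h>

int main() {
    const int N = 256;
    float h_a[N] = {0};   // host array
    float *d_a;
    cudaMalloc((void **)&d_a, N * sizeof(float));
    cudaMemcpy(d_a, h_a, N * sizeof(float), cudaMemcpyHostToDevice); // host -> device
    // ... kernels operate on d_a ...
    cudaMemcpy(h_a, d_a, N * sizeof(float), cudaMemcpyDeviceToHost); // device -> host
    cudaFree(d_a);
    return 0;
}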
Single choice | Easy | Developing a kernel function | 1 mark
30. A single operation replicated over a collection of data is called:
a. matrix replication
b. matrix addition
c. vector replication
d. vector addition

Single choice | Easy | API functions to allocate memory | 1 mark
31. A way of performing parallel execution of an application on multiple processors is:
a. task parallelism
b. data parallelism
c. thread parallelism
d. block parallelism

Single choice | Medium | To transfer data in CUDA | 2 marks
32. Which memory-transfer mechanism does CUDA use?
a. UMA
b. NUMA
c. DMA
d. SMA
Single choice | Hard | Developing a kernel function | 3 marks
33. Each invocation in a CUDA program can refer to its block index using:
a. blockIdx.x
b. blockCUDAx.x
c. blockIndex.x
d. blockincvx.x

Single choice | Hard | Execution of kernel function | 3 marks
34. To launch the add() kernel on the GPU with N threads, which instruction is used?
a. add<<<N,1>>>(host copies of all variables)
b. add<<<N,1>>>(device copies of one variable)
c. add<<<N,1>>>(host copies of one variable)
d. add<<<N,1>>>(device copies of all variables)
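Reference: a sketch of the add() kernel these two questions describe, in the style of the classic CUDA vector-addition example (variable names are illustrative).

// One block per element; each kernel invocation reads its own block index.
__global__ void add(const int *a, const int *b, int *c) {
    int i = blockIdx.x;   // block index identifies this invocation
    c[i] = a[i] + b[i];
}

// Host side: launch N copies of add(), passing device copies of all variables:
//     add<<<N, 1>>>(d_a, d_b, d_c);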
Single choice | Easy | Introduction to threads, blocks and grids | 1 mark
35. A thread's execution is managed independently by:
a. OS
b. memory
c. scheduler
d. blocks

Single choice | Easy | API functions to allocate memory | 1 mark
36. Which form of parallelism focuses on distributing parallel execution threads across parallel computing nodes?
a. data parallelism
b. task parallelism
c. grid parallelism
d. explicit parallelism

Single choice | Easy | API functions to allocate memory | 1 mark
37. Data parallelism uses the _____ mode to control parallel data operations.
a. SIMD
b. MIMD
c. SPMD
d. MPMD

Single choice | Medium | API functions to allocate memory | 2 marks
38. Task-parallel algorithms commonly follow the ____ model for assigning task responsibility.
a. client-master
b. client-server
c. master-server
d. master-worker

Single choice | Medium | Overview of CUDA | 2 marks
39. CUDA is a system which allows _____ to take the GPU to the next level.
a. Microsoft
b. Ubuntu
c. Nvidia
d. Linux

Single choice | Hard | Execution of kernel functions | 3 marks
40. To declare a shared memory array consisting of N integers, we use:
a. __shared__ int sharememory[size];
b. extern __shared__ int sharememory[];
c. dynamicmem<<<threadSize, gridSize, n*sizeof(int)>>>(...);
d. dynamicmem<<<gridSize, blockSize, n*sizeof(int)>>>(...);
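Reference: a sketch combining the extern declaration with the dynamic launch parameter (the kernel and array names follow the options above and are illustrative).

// Dynamically sized shared memory: the byte count is the third
// parameter of the execution configuration.
__global__ void dynamicmem(int n) {
    extern __shared__ int sharememory[];  // n integers, sized at launch
    int t = threadIdx.x;
    if (t < n) sharememory[t] = t;
    __syncthreads();                      // barrier before the data is reused
    // ... use sharememory ...
}

// Host side: reserve n*sizeof(int) bytes of shared memory per block:
//     dynamicmem<<<gridSize, blockSize, n * sizeof(int)>>>(n);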
Single choice | Easy | Analytical modelling of parallel programs | 1 mark
41. In which terms is a sequential algorithm evaluated?
a. cost optimality
b. execution time
c. execution time and input size
d. input size

Single choice | Easy | Analytical modelling of parallel programs | 1 mark
42. A parallel system or program is:
a. the same as a sequential program
b. a combination of sequential and parallel programs
c. a combination of a parallel architecture and an algorithm
d. a part of the parallel architecture only

Single choice | Easy | Analytical modelling of parallel programs | 1 mark
43. Analytical modelling of a program refers to:
a. having a solution for analyzing the program
b. having a solution for multiple parts of the program
c. having a solution based on the problem, where the solution describes the changes in the system
d. having a solution based on parallel algorithms

Single choice | Easy | Sources of overhead | 1 mark
44. Which of these is not an overhead?
a. cost
b. communication
c. idle time
d. excess computation

Single choice | Easy | Sources of overhead | 1 mark
45. Overhead in a parallel system means:
a. processing in parallel systems
b. inter-process interaction is less
c. parallel scheduling is required
d. elevated processing in parallel systems

Single choice | Easy | Performance metrics for parallel systems | 1 mark
46. The serial runtime of a parallel system is:
a. the time elapsed between the start and the end of a process's execution
b. the time elapsed between processing elements' executions
c. the total time required by a system to execute a process
d. the time elapsed between the start and the end of the parallel process

Single choice | Medium | Sources of overhead | 2 marks
47. The overhead function is given by:
a. t(o) = t(s) + t(o) - pt(p)
b. t(o) = pt(p) - t(s)
c. t(o) = t(s) - pt(p)
d. t(o) = pt(p) + t(s)
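Reference: with t(s) the serial runtime and t(p) the parallel runtime on p processing elements, the overhead function is the total time spent collectively by all processes over and above the serial work:

    t(o) = p * t(p) - t(s)

so a system with zero overhead satisfies p * t(p) = t(s).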
Single choice | Easy | Performance metrics for parallel systems | 1 mark
48. Which measure captures the relative benefit of solving a problem in parallel?
a. execution time
b. overhead
c. speedup
d. size

Single choice | Hard | Scalability of parallel systems | 3 marks
49. In an ideal parallel system:
a. speedup is equal to p and efficiency is equal to 1
b. speedup is not equal to p and efficiency is equal to 1
c. speedup is equal to p and efficiency is equal to 0
d. speedup is less than p and efficiency is equal to 0

Single choice | Easy | Scalability of parallel systems | 1 mark
50. Efficiency is the ratio of:
a. cost to the number of processes
b. speedup to the number of processing elements
c. speedup to the cost
d. cost to the speedup
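Reference: the derived metrics behind questions 48-51 chain together as follows (standard definitions, in this pool's notation):

    speedup     S = t(s) / t(p)
    efficiency  E = S / p
    cost        C = p * t(p)

In an ideal parallel system S = p and E = 1, and a cost-optimal system keeps E at Θ(1).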
Single choice | Hard | Scalability of parallel systems | 3 marks
51. A cost-optimal parallel system has an efficiency of:
a. O(0)
b. Θ(0)
c. Ω(1)
d. Θ(1)

Single choice | Easy | Scalability of parallel systems | 1 mark
52. If we use fewer processing elements than the maximum number, it is called the isoefficiency function of the parallel system.
a. True
b. False

Single choice | Easy | Scalability of parallel systems | 1 mark
53. When p is less than n in a parallel system, is the system cost-optimal?
a. True
b. False

Single choice | Medium | Scalability of parallel systems | 2 marks
54. In the scalability of a parallel system, which is the first derived metric?
a. speedup
b. cost
c. efficiency
d. overhead

Single choice | Medium | Analytical modelling of parallel programs | 2 marks
55. In parallel systems, to solve a problem, we use:
a. 1 processing element
b. n processing elements
c. 1 serial element and n processing elements
d. only serial elements

Single choice | Hard | Scalability of parallel systems | 3 marks
56. Parallel execution time is expressed in terms of:
a. cost, overhead and processing elements
b. execution time and serial time
c. efficiency and cost
d. processing elements, overhead and problem size

Fill in the blanks | Medium | Scalability of parallel systems | 2 marks
57. For maintaining a fixed efficiency value, ____ is increased while ____ remains constant.
a. cost, efficiency
b. overhead, processing elements
c. problem size, cost
d. problem size, processing elements

Single choice | Hard | Scalability of parallel systems | 3 marks
58. Which isoefficiency function is best for a parallel system?
a. a large isoefficiency function
b. a moderate isoefficiency function
c. a small isoefficiency function
d. a very small isoefficiency function

Single choice | Medium | Performance metrics for parallel systems | 2 marks
59. Parallel runtime is denoted by:
a. t(s)
b. pt(p)
c. t(p)
d. st(p)

Single choice | Hard | Analytical modelling of parallel programs | 3 marks
60. A serial computation in a parallel system is done by:
a. only one serial component
b. n serial components
c. only one parallel component
d. n parallel components

Single choice | Easy | Graph algorithms | 1 mark
61. An acyclic graph is a graph which:
a. contains a cycle
b. contains no cycle
c. contains 2 cycles
d. contains 1 cycle

Single choice | Easy | Graph algorithms | 1 mark
62. A graph is said to be complete when there exists:
a. an edge between every pair of vertices
b. a path between every pair of vertices
c. a cycle between every pair of vertices
d. a subgraph between every pair of vertices

Single choice | Easy | Graph algorithms | 1 mark
63. A _____ consists of several trees.
a. subgraph
b. sparse graph
c. forest
d. spanning tree

Single choice | Easy | Graph algorithms | 1 mark
64. Which representations are used for graphs?
a. matrix and list
b. MST
c. forest
d. cycle

Single choice | Easy | Minimum spanning tree | 1 mark
65. The MST of a weighted graph is:
a. a spanning tree with maximum weight
b. a spanning tree with all the edges
c. a spanning tree with all the vertices
d. a spanning tree with minimum weight

Single choice | Medium | Prim's algorithm | 2 marks
66. In Prim's algorithm for MST, the while loop is executed:
a. n^2 times
b. n-1 times
c. n*n times
d. n times

Single choice | Medium | Prim's algorithm | 2 marks
67. How many total steps are there in Prim's algorithm for MST?
a. Ω(n^n) steps
b. Θ(n^n) steps
c. Θ(n^2) steps
d. Ω(n^2) steps
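Reference: the outer loop of sequential Prim's adds one vertex per iteration (n-1 iterations), and with an adjacency-matrix representation each iteration scans Θ(n) candidate vertices, giving

    t(s) = Θ(n^2)

In the usual parallel formulation (1-D partitioning of the matrix across p processes), the per-iteration scan is parallelized but the iterations themselves remain sequential, so t(p) = Θ(n^2/p) + Θ(n log p).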
Single choice | Easy | Prim's algorithm | 1 mark
68. While parallelizing Prim's algorithm, in every iteration we:
a. can parallelize the while loop
b. cannot parallelize the while loop
c. can choose 2 vertices at the same time
d. can choose 2 vertices at different times

Single choice | Easy | Dijkstra's algorithm | 1 mark
69. What are the parallel approaches used in Dijkstra's algorithm for all-pairs shortest paths?
a. source partitioned and source parallel
b. source vertex and source parallel
c. source edge and source vertex
d. source edge and source parallel

Single choice | Easy | Graph algorithms | 1 mark
70. Every node in the graph is handled independently of the others in:
a. fine grain
b. vertex grain
c. edge grain
d. coarse grain

Single choice | Easy | Graph algorithms | 1 mark
71. The p/n processors share the work for one vertex in:
a. vertex grain
b. edge grain
c. fine grain
d. coarse grain
Single choice | Hard | Algorithms for sparse graphs | 3 marks
72. In sparse graphs, while using a list representation, the algorithm can be modified to run efficiently for:
a. |E| = O(n*n/log n)
b. |E| = O(n^2/log n)
c. |E| = O(n/log n)
d. |E| = O(n^n/log n)

Single choice | Easy | Dense matrix algorithms | 1 mark
73. When the matrix is evenly distributed among the processors, we get:
a. a balanced workload and little communication
b. an unbalanced workload and increased communication
c. an unbalanced workload and little communication
d. a balanced workload and increased communication

Fill in the blanks | Medium | Other sorting algorithms | 2 marks
74. In Johnson's algorithm, initially all the vertices are set to ____ except the starting vertex, which is set to _____.
a. infinity, 1
b. 0, 1
c. 1, infinity
d. infinity, 0

Single choice | Hard | Dense matrix algorithms | 3 marks
75. In row-wise 1-D partitioning, the total parallel time is:
a. O(n)
b. Θ(n*n)
c. Θ(n)
d. O(n*n)

Single choice | Hard | Dense matrix algorithms | 3 marks
76. In row-wise 1-D partitioning, the overall isoefficiency is:
a. W = O(p*p)
b. W = O(p^2)
c. W = Θ(p^2)
d. W = Θ(p*p)

Single choice | Hard | Dense matrix algorithms | 3 marks
77. The cost (process-time product) in 2-D partitioning is:
a. Θ(log n)
b. Θ(n log n)
c. Θ(n*n log n)
d. Θ(n^2 log n)

Single choice | Medium | Dense matrix algorithms | 2 marks
78. In matrix-matrix multiplication, the serial complexity is:
a. O(n^3)
b. O(n^n)
c. O(n^2)
d. O(n*n)

Single choice | Medium | Dense matrix algorithms | 2 marks
79. Which algorithm uses 3-D partitioning?
a. Prim's algorithm
b. Dijkstra's algorithm
c. the DNS algorithm
d. Floyd's algorithm

Single choice | Hard | Other sorting algorithms | 3 marks
80. The lower bound on any comparison-based sort of n numbers is O(n log n).
a. True
b. False
Single choice | Easy | Search algorithms for discrete optimization | 1 mark
81. A discrete optimization problem can be expressed as a:
a. set (S, f)
b. tuple (S, f)
c. solution (S, f)
d. function (S, f)

Single choice | Easy | Search algorithms for discrete optimization | 1 mark
82. In discrete optimization, if the estimate is guaranteed to be an underestimate, the heuristic is called:
a. a heuristic estimate
b. a heuristic underestimate
c. an intermediate heuristic
d. an admissible heuristic

Single choice | Hard | Parallel DFS and BFS | 3 marks
83. In the 8-puzzle problem, the distance between positions (i,j) and (k,l) is defined as:
a. |i-k| + |j+l|
b. |i-j| - |k-l|
c. |i-k| + |j-l|
d. |i+j| + |k+l|

Single choice | Easy | Parallel DFS and BFS | 1 mark
84. The main advantage of DFS is that:
a. its storage requirement is linear
b. its storage requirement is non-linear
c. its storage requirement is state space
d. its storage requirement is directed space

Single choice | Easy | Parallel DFS and BFS | 1 mark
85. Simple backtracking in the DFS algorithm is not guaranteed to find:
a. successors of the node
b. a minimum-cost solution
c. a maximum-cost solution
d. the first feasible solution

Fill in the blanks | Easy | Sequential search algorithms | 1 mark
86. In iterative deepening search, if no solution is found, the bound is ______ and the process is ______.
a. ignored, repeated
b. decreased, repeated
c. increased, not repeated
d. increased, repeated

Single choice | Easy | Sequential search algorithms | 1 mark
87. IDA defines a function for node x in the search space as L(x) =
a. f(x) + h(x)
b. g(x) + h(x)
c. f(x) + g(x)
d. f(x) - g(x)
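Reference: in iterative deepening A* (IDA*), the bound function combines g(x), the cost of reaching node x, with h(x), a heuristic estimate of the cost remaining below x:

    L(x) = g(x) + h(x)

and the bound is raised between iterations until a solution is found.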
Single choice | Medium | Parallel DFS and BFS | 2 marks
88. The total space requirement of the DFS algorithm is:
a. O(md)
b. Θ(md)
c. Ω(md)
d. O(1)

Single choice | Medium | Speedup anomalies in parallel search algorithms | 2 marks
89. The search overhead factor s is defined as:
a. (Wp/W)p
b. (Wp/p)W
c. Wp/W
d. W/Wp

Single choice | Easy | Parallel DFS and BFS | 1 mark
90. Requesting a randomly selected processor for work in a load-balancing scheme is known as:
a. global round robin
b. asynchronous round robin
c. round robin
d. random polling

Single choice | Easy | Parallel BFS and DFS | 1 mark
91. While analyzing DFS, the total number of network requests is O(V(p) log W).
a. True
b. False

Single choice | Easy | Sequential search algorithms | 1 mark
92. In asynchronous round robin, V(p) = O(p^2) in the worst case.
a. True
b. False

Single choice | Hard | Speedup anomalies in parallel search algorithms | 3 marks
93. The total communication overhead while analyzing DFS is given by T(o) = t(comm) V(p) log n.
a. True
b. False

Fill in the blanks | Easy | Parallel BFS and DFS | 1 mark
94. In tree-based termination detection, termination is signaled when the weight at processor ___ becomes _____ again.
a. P, 1
b. P0, 0
c. P1, 0
d. P0, 1

Single choice | Medium | Sequential search algorithms | 2 marks
95. The parallel formulations of IDA are:
a. common cost and variable cost
b. concurrency cost and common cost
c. variable cost and concurrency cost
d. only concurrency cost

Single choice | Hard | Parallel BFS and DFS | 3 marks
96. The upper bound on speedup in parallel BFS is (t(access) + t(exp)) / t(access).
a. True
b. False

Single choice | Medium | Speedup anomalies in parallel search algorithms | 2 marks
97. Executions yielding speedups greater than p by using p processors are referred to as deceleration anomalies.
a. True
b. False

Single choice | Hard | Parallel BFS and DFS | 3 marks
98. In parallel BFS, each processor locks the queue, extracts the _______, and unlocks it.
a. successor node
b. worst node
c. best node
d. all the nodes

Single choice | Medium | Parallel BFS and DFS | 2 marks
99. Parallel formulations of DFBB are similar to those of parallel BFS.
a. True
b. False
Match the following | Hard | Parallel DFS and BFS | 3 marks
100. Match the following load-balancing schemes with their properties:
Schemes:
(i) asynchronous round robin
(ii) synchronous round robin
(iii) random polling
Properties:
(i) strikes a desirable compromise
(ii) has poor performance because of contention
(iii) has poor performance because it makes a large number of work requests
a. (i)-(i), (ii)-(ii), (iii)-(iii)
b. (i)-(ii), (ii)-(iii), (iii)-(i)
c. (i)-(ii), (ii)-(i), (iii)-(iii)
d. (i)-(iii), (ii)-(ii), (iii)-(i)

Subjective | Easy | Introduction to PDC | 2 marks
1. What is a parallel computer and parallel computing?
Subjective | Easy | Physical organization | 2 marks
2. What are the subclasses of PRAM?

Subjective | Easy | Communication costs in parallel machines | 3 marks
3. What are the principal parameters that determine the communication latency in message-passing platforms?

Subjective | Hard | Interconnection networks | 3 marks
4. Differentiate between packet routing and cut-through routing.

Subjective | Easy | Trends in microprocessor architectures | 3 marks
5. Briefly explain the terms: true data dependency, resource dependency, branch dependency, vertical waste, horizontal waste, VLIW.
Subjective | Medium | Trends in microprocessor architectures | 2 marks
6. Briefly explain superscalar execution.

Subjective | Medium | Physical organization | 2 marks
7. What is the architecture of an ideal parallel computer?

Subjective | Hard | Interconnection networks | 3 marks
8. Explain store-and-forward routing with a neat diagram.

Subjective | Hard | Trends in microprocessor architectures | 3 marks
9. What is pipelining? Briefly explain the instruction pipeline.

Subjective | Medium | Mapping techniques | 2 marks
10. How do you embed a hypercube in a 2-D mesh?
Subjective | Easy | Overview of CUDA | 2 marks
11. What is CUDA?

Subjective | Easy | Overview of CUDA | 2 marks
12. What are GPU and GPGPU?

Subjective | Easy | API functions to allocate memory | 2 marks
13. What are data parallelism and task parallelism?

Subjective | Easy | Introduction to threads, blocks and grids | 2 marks
14. Explain threads, blocks and grids.

Subjective | Medium | Introduction to threads, blocks and grids | 3 marks
15. Define vector addition of blocks.

Subjective | Medium | Introduction to threads, blocks and grids | 3 marks
16. Define vector addition of threads.

Subjective | Medium | Developing a kernel function | 2 marks
17. A CUDA programmer says that if they launch a kernel with only 32 threads in each block, they can leave out the __syncthreads() instruction wherever barrier synchronization is needed. Do you think this is a good idea? Explain.
Subjective | Hard | Executing a kernel function | 3 marks
18. A student mentioned that he was able to multiply two 1024 × 1024 matrices using a tiled matrix multiplication code with 32 × 32 thread blocks. He is using a CUDA device that allows up to 512 threads per block and up to 8 blocks per SM. He further mentioned that each thread in a thread block calculates one element of the result matrix. What would be your reaction and why?

Subjective | Hard | Developing a kernel function | 3 marks
19. What is a kernel function in CUDA?

Subjective | Hard | Executing a kernel function | 3 marks
20. What is a host in CUDA?
Subjective | Easy | Analytical modelling of parallel programs | 2 marks
21. What is analytical modelling of parallel programs?

Subjective | Easy | Sources of overhead | 2 marks
22. What do you mean by overhead?

Subjective | Easy | Effect of granularity on performance | 2 marks
23. What is the effect of granularity on performance?

Subjective | Easy | Scalability of parallel systems | 2 marks
24. Explain the following terms: problem size and processing elements.

Subjective | Medium | Sources of overhead | 2 marks
25. What is idling?

Subjective | Medium | Performance metrics for parallel systems | 3 marks
26. How are efficiency and speedup related to each other?

Subjective | Medium | Scalability of parallel systems | 3 marks
27. What is total parallel overhead?

Subjective | Hard | Scalability of parallel systems | 3 marks
28. Differentiate between large isoefficiency and small isoefficiency.

Subjective | Hard | Scalability of parallel systems | 3 marks
29. What is the scalability of a parallel system?

Subjective | Hard | Sources of overhead | 3 marks
30. Is it necessary for a parallel program to have a serial component? If there is a serial component, how will it be executed?
Subjective | Easy | Dense matrix algorithms | 2 marks
31. Give a brief idea about matrix-vector multiplication.

Subjective | Easy | Dense matrix algorithms | 2 marks
32. Give a brief idea about matrix-matrix multiplication.

Subjective | Easy | Dense matrix algorithms | 2 marks
33. What is a dense matrix algorithm, and what are its types?

Subjective | Hard | Dense matrix algorithms | 3 marks
34. Briefly explain Cannon's algorithm.

Subjective | Medium | Bubble sort and variants | 2 marks
35. Explain bubble sort with the help of the algorithm.

Subjective | Medium | Graph algorithms | 3 marks
36. What are the two types of representation of graphs?

Subjective | Medium | Graph algorithms | 3 marks
37. Explain the properties of graphs.

Subjective | Easy | Minimum spanning tree | 2 marks
38. What do you mean by a minimum spanning tree?

Subjective | Hard | Transitive closure | 3 marks
39. What is transitive closure?

Subjective | Hard | Connected components | 3 marks
40. What are the connected components used in graph algorithms?
Subjective | Easy | Search algorithms for discrete optimization problems | 2 marks
41. What is discrete optimization, and what is a discrete optimization problem?

Subjective | Easy | Parallel DFS and BFS | 2 marks
42. What is simple backtracking in DFS?

Subjective | Easy | Parallel DFS and BFS | 2 marks
43. What is DFBB?

Subjective | Easy | Sequential search algorithms | 2 marks
44. What is IDA?

Subjective | Medium | Sequential search algorithms | 2 marks
45. What is iterative deepening search?

Subjective | Medium | Parallel DFS and BFS | 3 marks
46. What are the DFS storage requirements?

Subjective | Medium | Parallel BFS and DFS | 3 marks
47. Differentiate between BFS and DFS.

Subjective | Hard | Parallel DFS and BFS | 3 marks
48. Explain parallel DFS with the help of a diagram.

Subjective | Hard | Parallel DFS and BFS | 3 marks
49. Explain tree-based termination detection with the help of a diagram.

Subjective | Hard | Parallel DFS and BFS | 3 marks
50. Differentiate between parallel BFS and parallel DFS.
Subjective | Easy | Dichotomy of parallel computing platforms | 5 marks
1. Explain the dichotomy of parallel computing platforms.

Subjective | Hard | Interconnection networks | 5 marks
2. Explain interconnection networks with diagrams.

Subjective | Hard | Decomposition techniques | 5 marks
3. What are decomposition techniques? Explain each of them.

Subjective | Medium | Mapping techniques for load balancing | 5 marks
4. What mapping techniques are used for load balancing?

Subjective | Medium | Characteristics of tasks | 5 marks
5. What are the characteristics of inter-task interaction?

Subjective | Medium | API functions to allocate memory | 5 marks
6. What are the API functions in CUDA?

Subjective | Hard | Introduction to threads, blocks and grids | 5 marks
7. If a device supports compute capability 1.3, then it can have blocks with a maximum of 512 threads/block, and 8 blocks/SM can be scheduled concurrently. Each SM can schedule groups of 32-thread units called warps. The maximum number of resident warps per SM in a device that supports compute capability 1.3 is 32, and the maximum number of resident threads per SM is 1024. What would be the ideal block granularity to compute the product of two 2-D matrices of size 1024 × 1024?
a. 4 × 4?
b. 8 × 8?
Subjective | Easy | API functions to transfer back data | 5 marks
8. How do you transfer data back to the host processor with an API function? Explain with the help of an example.

Subjective | Easy | Isolating data used by parallelized code | 5 marks
9. How is isolated data used by parallelized code? Explain with an example.

Subjective | Hard | Developing kernel functions | 5 marks
10. What are cudaMalloc(), cudaMallocHost(), cudaMemcpy() and cudaFree()? Explain with an example.

Subjective | Easy | Performance metrics for parallel systems | 5 marks
11. What are the different performance metrics for parallel systems?

Subjective | Medium | Scalability of parallel systems | 5 marks
12. Write the scaling characteristics of parallel programs with equations.

Subjective | Medium | Scalability of parallel systems | 5 marks
13. What is the isoefficiency function? Write down the equations of the isoefficiency function.

Subjective | Hard | Minimum execution time and cost-optimal execution time | 5 marks
14. What are the minimum execution time and the minimum cost-optimal execution time?

Subjective | Easy | Sources of overhead | 5 marks
15. What are the sources of overhead in parallel programs?
Subjective | Medium | Dijkstra's algorithm | 5 marks
16. Explain Dijkstra's algorithm with the help of an example.

Subjective | Easy | Graph algorithms | 5 marks
17. Differentiate between all-pairs shortest paths and single-source shortest paths.

Subjective | Easy | Issues in sorting on parallel computers | 5 marks
18. What are the issues in sorting on parallel computers?

Subjective | Medium | Algorithms for sparse graphs | 5 marks
19. What algorithms are used for sparse graphs?

Subjective | Hard | Prim's algorithm | 5 marks
20. Explain Prim's algorithm with the help of an example.

Subjective | Medium | Parallel DFS and BFS | 5 marks
21. Explain parallel BFS with the help of an example.

Subjective | Easy | Parallel DFS and BFS | 5 marks
22. How do you analyze DFS for random polling?

Subjective | Medium | Parallel DFS and BFS | 5 marks
23. Explain the analysis for the following: asynchronous round robin, global round robin, random polling.

Subjective | Medium | Parallel DFS and BFS | 5 marks
24. What is Dijkstra's token termination detection?

Subjective | Hard | Sequential search algorithms | 5 marks
25. Explain the formulation of DFBB and IDA.