50 Questions [One Mark each]
1) What functions have to be included in any MPI program?
A. MPI_Init and MPI_Abort
B. MPI_Init and MPI_Finalize
C. MPI_Send and MPI_Recv
D. MPI_Comm_size and MPI_Comm_rank
2) To run an MPI program on a linear array of 4 nodes, what
allocation of processes (viz., the list of the ranks of the processes
allocated on each node of the linear array) is possible?
A. 0 1 2 3
B. 3 2 1 0
C. 1 2 3 4
D. A and B
3) Suppose that node A is sending an n-packet message to node B in
a distributed-memory system with a static network. Also suppose
that the message must be forwarded through k intermediate nodes.
The startup time is s and the time for transmitting one packet to a
nearby node is c. What is the most proper formula for calculating the
time for the above communication?
A. s + k c + n - 1
B. s + k n c
C. k n (s + c)
D. s + (k + n) c
4) Which one of the following is NOT a collective communication function?
A. MPI_Send
B. MPI_Reduce
C. MPI_Bcast
D. MPI_Allgather
5) What is the primary reason for using parallel computing?
A. Parallel computing is a natural way of programming
B. Parallel computing is another programming paradigm
C. Hardware technology makes building supercomputers feasible
D. We cannot rely on increasing the speed of CPU to meet the needs
for more computational power
6) According to Flynn's taxonomy, the classical von Neumann
architecture is a
A. SISD system
B. SIMD system
C. MISD system
D. MIMD system
A static
B master-slave
C work pool
D dynamic
30) If a parallel program is developed so that a single source
program is written and each processor executes its own copy of
this program, independently and not in synchronism, this
program is in _____ structure.
A SIMD
B MIMD
C SPMD
D MPMD
31) If a parallel program is developed within the MIMD classification
so that each processor has its own program to execute, this
program is in _____ structure.
A SIMD
B MIMD
C SPMD
D MPMD
32) MPI uses
A static process creation
B dynamic process creation
C a routine spawn() to create processes
D both B and C
33) PVM uses
A static process creation
B dynamic process creation
C a routine spawn() to create processes
D both B and C
34) In the following general style of an MPI SPMD program, assume
process 0 is the master process, and master() and slave() are to be
executed by the master process and the slave processes, respectively.
What should be the missing expression in the if statement?
MPI_Init(&argc, &argv);
.
.
MPI_Comm_rank(MPI_COMM_WORLD, &myrank); /* find process rank */
if (_________) // missing expression
master();
else
slave();
.
.
MPI_Finalize();
A myrank == 0
B myrank != 0
C myrank == 1
D myrank != 1
35) MPI is a
A programming language
B set of preprocessor directives
C message passing software
D standard for message passing interface
36) Assume process 0 (the master process) wants to distribute an
array across a set of processes, i.e., partition the array into
subarrays and send each subarray to a process. If we want to use a
single MPI routine to do the partitioning and distribution, which MPI
routine is capable of doing this?
A MPI_Reduce
B MPI_Scatter
C MPI_Gather
D MPI_Alltoall
37) Assume process 0 (the master process) wants to gather results
(single values) from all the processes and combine them into a single
final result. If we want to use a single MPI routine to do the
data gathering and combination, which MPI routine is capable of
doing this?
A MPI_Reduce
B MPI_Scatter
C MPI_Gather
D MPI_Alltoall
38) Assume each process in a group of processes holds a row of a
matrix in process-rank order, i.e., row i of the matrix is held by
process i. If we want to use a single MPI routine to transpose the
matrix across the processes, i.e., the elements of each row are
distributed across the processes with element i sent to process i,
so that after the data exchange each process holds one column of the
matrix, with column i residing on process i, which MPI routine is
capable of doing this?
A MPI_Reduce
B MPI_Scatter
C MPI_Gather
D MPI_Alltoall
39) The following MPI program computes the sum of the elements in
array x and lets process 0 print out the result. What code in the
heading of the for loop would complete the program, without
changing any other part of it (assume all variables are
properly defined)?
int x[100];
MPI_Init(&argc, &argv);
.
.
MPI_Comm_rank(MPI_COMM_WORLD, &myrank); /* find process rank */
MPI_Comm_size(MPI_COMM_WORLD, &p); /* find total number of processes */
MPI_Bcast(x, 100, MPI_INT, 0, MPI_COMM_WORLD);
// calculate partial sums
part_sum = 0.0;
for (___________________) // missing code here
{
part_sum += x[i];
}
MPI_Reduce(&part_sum, &result, 1, MPI_INT, MPI_SUM, 0,
MPI_COMM_WORLD);
if (myrank == 0) printf("The final sum is: %d\n", result);
.
.
MPI_Finalize();
A i = myrank; i < n; i += p
B i = myrank+1; i <= n; i += p
C i = myrank*100/p-1; i < (myrank+1)*100/p-1; i++
D i = myrank*100/p+1; i < (myrank+1)*100/p+1; i++
40) If we use tree structured communication to distribute an array of
16 elements evenly across a set of 8 processes, how many elements
will be held by each process?
A 1
B 2
C 3
D 4