2. When should you define the problem during the problem-solving process?
a) At the end
b) After finding a solution
c) After identifying the problem
d) At the beginning
7. What is the significance of understanding the problem before attempting to solve it?
a) It reduces the chances of finding a solution
b) It helps in finding irrelevant solutions
c) It ensures the problem is ignored
d) It ensures a more focused and effective problem-solving process
10. What happens if the problem is not properly defined in the problem-solving process?
a) It guarantees finding the best solution
b) It reduces the likelihood of finding a solution
c) It eliminates the need for finding a solution
d) It helps in finding all possible solutions
3. When should you identify the problem during the problem-solving process?
a) At the beginning
b) At the end
c) Somewhere in the middle
d) It is not necessary to identify the problem
10. What happens if the problem is not properly identified in the problem-solving process?
a) It guarantees finding the best solution
b) It reduces the likelihood of finding a solution
c) It eliminates the need for finding a solution
d) It helps in finding all possible solutions
1. What is problem-solving?
a) Avoiding problems
b) Finding multiple solutions to a problem
c) Analyzing the consequences of a problem
d) Process of finding solutions to difficult or complex issues
Answer: d) Process of finding solutions to difficult or complex issues
10. What happens if problem-solving skills are not honed or practiced regularly?
a) It ensures immediate success in solving all problems
b) It makes problems more difficult to solve
c) It eliminates the need for problem-solving
d) It reduces the effectiveness of finding solutions
10. How can familiarity with introductory concepts benefit learners in their programming journey?
a) By memorizing complex algorithms
b) By avoiding data structures in problem-solving
c) By providing a better understanding of algorithm design and analysis
d) By eliminating the need for learning programming languages
10. Which of the following is NOT a common algorithm construct used in programming?
a) Conditional statements
b) Loops
c) Global variables
d) Functions
1. What is the primary goal of complexity analysis of algorithms using Big O notation?
a) Identifying the smallest algorithms
b) Analyzing the physical size of algorithms
c) Evaluating the efficiency and performance of algorithms
d) Comparing algorithms based on their difficulty level
Answer: c) It
Answer: a) O(1)
Answer: b) O(n)
7. How does the time complexity of an algorithm with O(n^2) compare to O(n)?
a) O(n) is more efficient than O(n^2)
b) O(n^2) is more efficient than O(n)
c) O(n^2) and O(n) are equally efficient
d) Both complexities have the same computational cost
8. When analyzing the complexity of an algorithm, what does "n" typically represent?
a) The number of steps in the algorithm
b) The physical size of the algorithm
c) The input size or number of elements in the algorithm
d) The number of iterations in the algorithm
9. Which of the following best describes an algorithm with constant time complexity (O(1))?
a) The algorithm takes the same amount of time regardless of the input size
b) The algorithm's performance improves linearly with the input size
c) The algorithm's performance worsens exponentially with the input size
d) The algorithm's performance varies unpredictably with the input size
Answer: a) The algorithm takes the same amount of time regardless of the input size
10. Why is it important to consider both time complexity and space complexity when analyzing
algorithms?
a) To increase the complexity of the algorithm
b) To measure the efficiency of the algorithm's code
c) To make the algorithm more difficult to understand
d) To avoid analyzing the performance of the algorithm
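The growth rates contrasted in the questions above can be sketched concretely. The three functions below are illustrative examples (not from the original text): an O(1) lookup does the same work for any input size n, an O(n) scan grows linearly, and an O(n^2) nested scan grows quadratically.

```python
# Sketch: three functions whose running time grows as O(1), O(n), and
# O(n^2) with the input size n (the length of `items`).
def first_element(items):            # O(1): independent of len(items)
    return items[0]

def contains(items, key):            # O(n): one pass over the input
    return any(x == key for x in items)

def has_duplicate(items):            # O(n^2): nested passes over the input
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))
```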
1. What is the primary objective of learning about Abstract Data Types (ADTs) in algorithm design?
a) Creating complex algorithms with unique data structures
b) Ignoring the concept of data structures
c) Implementing physical components of a computer
d) Encapsulating data and operations in a reusable manner
3. What distinguishes an Abstract Data Type (ADT) from a concrete data type?
a) ADTs have no specific operations associated with them
b) ADTs are limited to primitive data types
c) ADTs are solely used in theoretical computer science
d) Concrete data types have no relationship with algorithms
Answer: d) Loop
7. How does an Abstract Data Type (ADT) differ from a concrete data type in programming?
a) ADTs are only used in theoretical computer science
b) Concrete data types are more versatile than ADTs
c) ADTs provide a high-level description without specifying the implementation
d) Concrete data types are limited to primitive data structures
8. What does "data abstraction" mean in the context of Abstract Data Types (ADTs)?
a) Hiding the internal details of data structures
b) Displaying all the details of data structures
c) Ignoring the implementation of data structures
d) Avoiding the use of data structures in algorithms
Answer: c) To encapsulate data and operations for easy reuse and maintenance
10. How do Abstract Data Types (ADTs) contribute to the object-oriented programming paradigm?
a) By eliminating the use of objects in programming
b) By promoting procedural programming practices
c) By encapsulating data and behavior into objects
d) ADTs have no relevance to object-oriented programming
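A minimal sketch of the data abstraction and encapsulation ideas above, using a hypothetical `StackADT` class: the ADT exposes operations while hiding the internal representation (here a Python list), so callers depend only on the interface, never the implementation.

```python
# Sketch of an ADT: push/pop form the public interface; the underlying
# list is an internal detail hidden from callers (data abstraction).
class StackADT:
    def __init__(self):
        self._items = []          # hidden internal representation

    def push(self, x):
        self._items.append(x)

    def pop(self):
        return self._items.pop()

s = StackADT()
s.push(1)
s.push(2)
top = s.pop()                     # callers never touch s._items directly
```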
**2.5.1 Arrays**
Answer: b) 0
4. How can you access the element at index "i" in an array?
a) array[i]
b) array(index i)
c) array(i)
d) element[array(i)]
Answer: a) array[i]
5. What is the time complexity for accessing an element in an array with "n" elements?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
Answer: a) 10
7. What happens when you try to access an element outside the valid index range in an array?
a) The program crashes
b) The value at the invalid index is set to 0
c) The element at the valid index is returned
d) It depends on the programming language
8. What is the time complexity for inserting an element at the end of an array with "n" elements?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
9. How can you initialize an array with all elements set to zero in many programming languages?
a) array(0)
b) array = []
c) array = [0]
d) It depends on the programming language
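The array properties quizzed above can be sketched briefly in Python (one of many languages; as the questions note, out-of-range behavior and initialization syntax vary by language):

```python
# Sketch: array access by index is O(1) because the position of element
# i is computed directly from the base, without scanning.
arr = [0] * 5        # initialize five elements to zero (syntax is language-dependent)
arr[2] = 42          # O(1) write at index 2
value = arr[2]       # O(1) read
# Out-of-range access is language-dependent: Python raises IndexError,
# C reads arbitrary memory, other languages may abort the program.
try:
    arr[99]
    out_of_range = False
except IndexError:
    out_of_range = True
```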
**2.5.2 Stacks**
1. What is a stack in computer science?
a) A linear data structure with variable size
b) A collection of unrelated elements
c) A hierarchical data structure
d) A method for sorting data
3. What is the term used to describe the process of adding an element to a stack?
a) Insertion
b) Push
c) Append
d) Enqueue
Answer: b) Push
4. What happens when you try to remove an element from an empty stack?
a) The program crashes
b) The top element is removed
c) The stack becomes double-ended
d) A new element is added
Answer: a) O(1)
6. What is the term used to describe the process of removing an element from a stack?
a) Removal
b) Pop
c) Remove
d) Dequeue
Answer: b) Pop
Answer: a) O(1)
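The push/pop terminology and O(1) costs above can be sketched with a Python list standing in for the stack:

```python
# Sketch of stack (LIFO) behavior: append is "push" and pop removes the
# most recently added element; both are O(1) on average for a Python list.
stack = []
stack.append(1)      # push
stack.append(2)      # push
top = stack.pop()    # pop returns 2 — last in, first out
# Popping an empty stack is implementation-dependent: Python's list
# raises IndexError rather than crashing the program outright.
```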
**2.5.3 Queues**
3. What is the term used to describe the process of adding an element to a queue?
a) Insertion
b) Push
c) Append
d) Enqueue
Answer: d) Enqueue
4. What happens when you try to remove an element from an empty queue?
a) The program crashes
b) The front element is removed
c) The queue becomes double-ended
d) A new element is added
6. What is the term used to describe the process of removing an element from a queue?
a) Removal
b) Pop
c) Remove
d) Dequeue
Answer: d) Dequeue
Answer: a) O(1)
a) Sorting algorithms
b) Implementing recursive functions
c) Graph traversal algorithms
d) Scheduling tasks in operating systems
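The enqueue/dequeue terminology above can be sketched with `collections.deque`, which gives O(1) operations at both ends (a plain Python list would pay O(n) to remove from the front):

```python
from collections import deque

# Sketch of queue (FIFO) behavior: enqueue at the rear, dequeue from
# the front; the first element added is the first one removed.
q = deque()
q.append("a")        # enqueue
q.append("b")        # enqueue
front = q.popleft()  # dequeue returns "a" — first in, first out
```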
2. How does a circular queue handle the issue of space when adding elements?
a) It automatically resizes to accommodate more elements
b) It overwrites the oldest element to make space for new elements
c) It throws an error when the queue is full
d) It cannot add elements once the queue is full
Answer: b) It overwrites the oldest element to make space for new elements
Answer: a) O(1)
4. What is the time complexity for removing an element from a circular queue?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
5. What happens when you try to remove an element from an empty circular queue?
a) The program crashes
b) The front element is removed
c) The circular queue becomes double-ended
d) A new element is added
6. What is the term used to describe the process of adding an element to a circular queue?
a) Insertion
b) Push
c) Append
d) Enqueue
Answer: d) Enqueue
7. What is the term used to describe the process of removing an element from a circular queue?
a) Removal
b) Pop
c) Remove
d) Dequeue
Answer: d) Dequeue
9. What is the main advantage of using a circular queue over a regular queue?
a) It provides faster insertion and removal of elements
b) It requires less memory for implementation
c) It guarantees a fixed size for the queue
d) It prevents queue overflow and underflow
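The overwrite-on-full behavior described in Q2 can be sketched as a fixed-capacity ring buffer. This is one of several conventions (other implementations reject the enqueue when full), and the class name is illustrative:

```python
# Sketch of a circular queue as a ring buffer with fixed capacity.
# This variant overwrites the oldest element when full; every operation
# below is O(1) because indices just wrap around with modulo.
class CircularQueue:
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.head = 0                 # index of the front element
        self.size = 0

    def enqueue(self, x):
        tail = (self.head + self.size) % len(self.buf)
        self.buf[tail] = x
        if self.size == len(self.buf):            # full: drop the oldest
            self.head = (self.head + 1) % len(self.buf)
        else:
            self.size += 1

    def dequeue(self):
        if self.size == 0:            # behavior on empty queue is a design choice
            raise IndexError("dequeue from empty circular queue")
        x = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return x
```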
4. What is the time complexity for accessing an element in a singly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
6. What is the term used to describe the first element in a linked list?
a) First element
b) Front element
c) Head
d) Top element
Answer: c) Head
7. What is the term used to describe the last element in a linked list?
a) Last element
b) End element
c) Tail
d) Rear element
Answer: c) Tail
8. What happens when you try to access an element outside the valid index range in a linked list?
a) The program crashes
b) The value at the invalid index is set to 0
c) The element at the valid index is returned
d) It depends on the programming language
9. What is the time complexity for inserting an element at the beginning of a singly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
10. What is the time complexity for removing an element from the beginning of a singly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
Answer: c) Each element has one pointer pointing to the next element
2. How many pointers does the last element in a singly linked list have?
a) One pointing to the next element
b) Two - one for the previous element and one for the next element
c) None - it does not point to any element
d) It depends on the implementation
3. How do you traverse a singly linked list from the beginning to the end?
a) Move in a random order
b) Follow the pointers from one element to the previous element
c) Follow the pointers from one element to the next element
d) Jump directly to the middle element
Answer: c) Follow the pointers from one element to the next element
4. What is the primary advantage of using a singly linked list over a doubly linked list?
a) Singly linked lists allow constant-time deletion of elements
b) Singly linked lists require less memory per element
c) Singly linked lists provide faster traversal in both directions
d) Singly linked lists are more memory-efficient
5. What is the time complexity for accessing an element in a singly linked list at a specific index?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
Answer: a) Change the pointer of the last element to point to the new element
7. What is the time complexity for inserting an element at the end of a singly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
8. How do you remove an element from the middle of a singly linked list?
a) Change the pointer of the previous element to point to the next element
b) Change the pointer of the next element to point to the previous element
c) Remove the element and update the pointers of adjacent elements
d) Singly linked lists do not support removal of elements from the middle
Answer: c) Remove the element and update the pointers of adjacent elements
9. What is the time complexity for removing an element from the middle of a singly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
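The singly-linked-list costs above can be sketched with a minimal node class (illustrative names): head insertion is O(1), but reaching index i means following i pointers, hence O(n) access.

```python
# Sketch of a singly linked list: each node holds data and one pointer
# to the next node; the last node's pointer is None.
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

head = Node(3, Node(1, Node(2)))   # list: 3 -> 1 -> 2
head = Node(0, next=head)          # O(1) insertion at the head: 0 -> 3 -> 1 -> 2

def get(head, i):                  # O(n) access: follow i pointers
    node = head
    for _ in range(i):
        node = node.next
    return node.data
```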
Answer: a) Each element has two pointers - one for the previous element and one for the next element
2. How many pointers does the first element in a doubly linked list have?
a) One pointing to the next element
b) Two - one for the previous element and one for the next element
c) None - it does not point to any element
d) It depends on the implementation
3. How do you traverse a doubly linked list from the beginning to the end?
a) Move in a random order
b) Follow the pointers from one element to the previous element
c) Follow the pointers from one element to the next element
d) Jump directly to the middle element
Answer: c) Follow the pointers from one element to the next element
4. What is the primary advantage of using a doubly linked list over a singly linked list?
a) Doubly linked lists allow constant-time deletion of elements
b) Doubly linked lists require less memory per element
c) Doubly linked lists provide faster traversal in both directions
d) Doubly linked lists are more memory-efficient
5. What is the time complexity for accessing an element in a doubly linked list at a specific index?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
Answer: a) Change the pointer of the last element to point to the new element
7. What is the time complexity for inserting an element at the end of a doubly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
8. How do you remove an element from the middle of a doubly linked list?
a) Change the pointer of the previous element to point to the next element
b) Change the pointer of the next element to point to the previous element
c) Remove the element and update the pointers of adjacent elements
d) Doubly linked lists do not support removal of elements from the middle
Answer: c) Remove the element and update the pointers of adjacent elements
9. What is the time complexity for removing an element from the middle of a doubly linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
10. What is the main advantage of using a doubly linked list over a singly linked list?
a) Doubly linked lists require less memory per element
b) Doubly linked lists allow traversal in both directions and constant-time removal of a given node
c) Doubly linked lists are easier to implement than singly linked lists
d) Doubly linked lists have no advantages over singly linked lists
Answer: b) Doubly linked lists allow traversal in both directions and constant-time removal of a given node
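The O(1) middle removal claimed above can be sketched directly: because each doubly linked node stores both `prev` and `next` pointers, a node you already hold a reference to can be spliced out without any traversal (node and helper names are illustrative).

```python
# Sketch: O(1) removal of a known node from a doubly linked list —
# just update the two neighbors' pointers to bypass it.
class DNode:
    def __init__(self, data):
        self.data = data
        self.prev = None
        self.next = None

def link(a, b):
    a.next, b.prev = b, a

def unlink(node):
    if node.prev:
        node.prev.next = node.next
    if node.next:
        node.next.prev = node.prev

a, b, c = DNode("a"), DNode("b"), DNode("c")
link(a, b)
link(b, c)
unlink(b)            # list is now a <-> c, in constant time
```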
3. What is the term used to describe the first element in a circular linked list?
a) First element
b) Front element
c) Head
d) Top element
Answer: c) Head
4. What is the term used to describe the last element in a circular linked list?
a) Last element
b) End element
c) Tail
d) Rear element
Answer: c) Tail
5. What is the time complexity for accessing an element in a circular linked list at a specific index?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
6. How do you add an element to the end of a circular linked list?
a) Change the pointer of the last element to point to the new element
b) Change the pointer of the first element to point to the new element
c) Create a new list with the new element and append it to the existing list
d) Circular linked lists do not support adding elements to the end
Answer: a) Change the pointer of the last element to point to the new element
7. What is the time complexity for inserting an element at the end of a circular linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
8. How do you remove an element from the middle of a circular linked list?
a) Change the pointer of the previous element to point to the next element
b) Change the pointer of the next element to point to the previous element
c) Remove the element and update the pointers of adjacent elements
d) Circular linked lists do not support removal of elements from the middle
Answer: c) Remove the element and update the pointers of adjacent elements
9. What is the time complexity for removing an element from the middle of a circular linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
10. What is the main advantage of using a circular linked list over a singly linked list?
a) Circular linked lists require less memory per element
b) Circular linked lists allow constant-time access to elements at arbitrary positions
c) Circular linked lists provide faster traversal in both directions
d) Circular linked lists have no advantages over singly linked lists
Answer: c) A data structure containing data and a pointer to the next element
Answer: b) O(n)
5. What is the primary advantage of using node-based storage with arrays over standard arrays?
a) It provides faster access to elements
b) It allows constant-time insertion and deletion
c) It requires less memory per element
d) It provides faster traversal in both directions
6. What is the time complexity for inserting an element at the end of a node-based storage linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
7. How do you add an element to the beginning of a node-based storage linked list?
a) Change the pointer of the first node to point to the new node
b) Change the pointer of the last node to point to the new node
c) Create a new list with the new element and append it to the existing list
d) Node-based storage linked lists do not support adding elements to the beginning
Answer: a) Change the pointer of the first node to point to the new node
8. What is the time complexity for inserting an element at the beginning of a node-based storage linked
list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: a) O(1)
9. How do you remove an element from the middle of a node-based storage linked list?
a) Change the pointer of the previous node to point to the next node
b) Change the pointer of the next node to point to the previous node
c) Remove the node and update the pointers of adjacent nodes
d) Node-based storage linked lists do not support removal of elements from the middle
Answer: c) Remove the node and update the pointers of adjacent nodes
10. What is the time complexity for removing an element from the middle of a node-based storage
linked list?
a) O(1)
b) O(n)
c) O(log n)
d) O(n^2)
Answer: b) O(n)
Lecture 4: Recursion
**4.1 What is recursion?**
1. Recursion is a programming technique where a function calls itself directly or indirectly until it
reaches a specific condition called the:
a) Loop condition
b) Recursive condition
c) Base condition
d) Termination condition
Answer: d) A new instance of the function is created and added to the stack
3. Recursion provides an alternative approach to solving problems that can also be solved using:
a) Loops
b) Arrays
c) Pointers
d) Switch statements
Answer: a) Loops
Answer: d) O(2^n)
10. Recursion can be considered as an alternative to which of the following programming constructs?
a) Decision-making statements (if-else)
b) Looping statements (for, while)
c) Function declarations
d) Variable declarations
Answer: b) The recursive function stops calling itself and returns a value
6. How many times does a recursive function call itself when the base condition is met?
a) Zero times
b) One time
c) Multiple times
d) It depends on the implementation
10. What happens if the base condition is not met during recursion?
a) The program terminates successfully
b) The recursive function halts execution
c) The program enters an infinite loop
d) The function returns the default value of its return type
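The role of the base condition in the questions above can be sketched with the classic recursive factorial: once the base condition is met, the function makes zero further recursive calls; omit it and the calls never terminate, eventually overflowing the call stack.

```python
# Sketch: recursive factorial. The base condition (n <= 1) stops the
# recursion; each other call reduces n, moving toward that condition.
def factorial(n):
    if n <= 1:                        # base condition: no further calls
        return 1
    return n * factorial(n - 1)       # recursive call with smaller input
```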
5. Which of the following statements is true for direct and indirect recursion?
a) Both direct and indirect recursion involve only one function call
b) Direct recursion involves only one function call, while indirect recursion involves multiple function calls
c) Indirect recursion involves only one function call, while direct recursion involves multiple function calls
d) Both direct and indirect recursion involve multiple function calls
Answer: d) Both direct and indirect recursion involve multiple function calls
Answer: a) The recursive logic is complex and needs to be split into multiple functions
8. What happens if the base condition is not defined correctly in indirect recursion?
a) The program terminates successfully
b) The recursive function halts execution
c) The program enters an infinite loop
d) The function returns the default value of its return type
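Indirect recursion can be sketched with the standard mutually recursive pair below (an illustrative example, not from the original text): `is_even` calls `is_odd`, which calls `is_even` back, and each function still needs its own correct base condition or the pair loops forever.

```python
# Sketch of indirect (mutual) recursion: two functions calling each
# other, each with a base condition at n == 0.
def is_even(n):
    if n == 0:
        return True       # base condition
    return is_odd(n - 1)  # indirect: calls the other function

def is_odd(n):
    if n == 0:
        return False      # base condition
    return is_even(n - 1)
```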
2. Each instance of a recursive function call has its own set of:
a) Input parameters
b) Output parameters
c) Local variables
d) Static variables
Answer: b) Stack
Answer: c) Each function call has its own copy of local variables
6. Excessive recursion or recursion with a large number of function calls can lead to:
a) Increased memory consumption
b) Stack overflow and program crash
c) Faster execution of the program
d) Better performance
Answer: b) Stack overflow and program crash
9. What is the purpose of allocating memory separately for each function call in recursion?
a) To ensure faster execution of the program
b) To allow for easy sharing of data between function calls
c) To prevent memory leaks
d) To allow for parallel execution of function calls
10. When does the memory allocated to a recursive function call get released?
a) When the recursive call is made
b) After the base condition is met
c) When the program terminates
d) When the recursive call is completed
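The per-call memory described above can be sketched with a function whose local variable differs in every frame (the function name is illustrative): each recursive call gets its own stack frame with its own copy of `level`, and that frame is released when the call returns.

```python
# Sketch: each frame holds its own `level`; the deepest frame returns
# its value, and every frame's memory is freed as the calls unwind.
def depth_of(n, level=0):
    if n == 0:
        return level                   # this frame's own local variable
    return depth_of(n - 1, level + 1)  # new frame, new copy of `level`
```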
2. Recursion can lead to a more concise and readable code compared to:
a) Functions with loops
b) Functions with multiple return statements
c) Functions with many parameters
d) Functions with a single return statement
6. Recursive algorithms can be more intuitive to understand for problems that have a:
a) Linear structure
b) Hierarchical structure
c) Random structure
d) Circular structure
Answer: d) The efficiency of recursion and iteration depends on the problem being solved
Answer: d) The number of recursive calls and the time complexity of each call
3. What is the time complexity of a recursive function with multiple recursive calls and a time
complexity of O(n) per call?
a) O(n)
b) O(log n)
c) O(n^2)
d) O(2^n)
Answer: c) O(n^2)
Answer: c) The number of local variables and the maximum depth of the recursion
5. The space complexity of a recursive function with a time complexity of O(1) per call is:
a) O(1)
b) O(n)
c) O(log n)
d) O(2^n)
Answer: b) O(n)
6. What is the impact of increasing the depth of recursion on the space complexity of a recursive
function?
a) The space complexity remains constant
b) The space complexity increases linearly
c) The space complexity increases logarithmically
d) The space complexity decreases
7. The space complexity of a recursive function with a time complexity of O(log n) per call is:
a) O(1)
b) O(n)
c) O(log n)
d) O(2^n)
Answer: c) O(log n)
8. The space complexity of a recursive function with a time complexity of O(2^n) per call is:
a) O(1)
b) O(n)
c) O(log n)
d) O(2^n)
Answer: d) O(2^n)
9. Recursive functions that involve backtracking or multiple recursive calls may have:
a) Low time complexity but high space complexity
b) High time complexity but low space complexity
c) Low time complexity and low space complexity
d) High time complexity and high space complexity
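Naive Fibonacci (an illustrative example, not from the original text) is the standard case of the pattern in the question above: two recursive calls per invocation give roughly O(2^n) time, while the maximum recursion depth — and hence the stack space — grows only linearly.

```python
# Sketch: exponential time from branching recursion, but only O(n)
# stack space, since at most n frames are live at once.
def fib(n):
    if n < 2:                          # base condition
        return n
    return fib(n - 1) + fib(n - 2)     # two recursive calls per level
```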
10. The time and space complexity of a recursive function can be analyzed using techniques such as:
a) Stack tracing
b) Profiling
c) Debugging
d) Manual calculation
Answer: b) Profiling
Lecture 5: Trees & Applications
Answer: b) One
5. What is the maximum number of children that a node can have in a binary tree?
a) 1
b) 2
c) 3
d) There is no maximum limit
Answer: b) 2
6. The depth of a node in a tree is the number of edges in the path from the root node to that node.
What is the depth of the root node?
a) 0
b) 1
c) The depth of the root node is not defined
d) It depends on the height of the tree
Answer: a) 0
7. The height of a tree is the maximum depth of any node in the tree. What is the height of a tree with a
single node (no children)?
a) 0
b) 1
c) The height is not defined for a tree with a single node
d) It depends on the type of tree
Answer: a) 0
1. In a binary tree, a node that has at least one child node is called:
a) A leaf node
b) An internal node
c) A root node
d) A branch node
2. What is the maximum number of child nodes that an internal node can have in a binary tree?
a) 1
b) 2
c) 3
d) There is no maximum limit
Answer: b) 2
3. The total number of nodes in a binary tree with height h can be at most:
a) h
b) 2^h - 1
c) 2^h
d) h^2
Answer: b) 2^h - 1
Answer: a) 0
8. In a binary tree, the nodes at the same level are also called:
a) Siblings
b) Ancestors
c) Descendants
d) Cousins
Answer: a) Siblings
1. Which tree traversal visits the root node first, then the left subtree, and finally the right subtree?
a) Inorder traversal
b) Preorder traversal
c) Postorder traversal
d) Level-order traversal
3. What is the sequence of nodes visited during a postorder traversal of a binary tree?
a) Root, left subtree, right subtree
b) Left subtree, right subtree, root
c) Right subtree, root, left subtree
d) Left subtree, root, right subtree
4. Which traversal is commonly used to delete nodes from a binary search tree?
a) Inorder traversal
b) Preorder traversal
c) Postorder traversal
d) Level-order traversal
Answer: a) O(n)
8. In a binary tree, which traversal is used to obtain nodes in ascending order (sorted order)?
a) Inorder traversal
b) Preorder traversal
c) Postorder traversal
d) Level-order traversal
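The three depth-first orders quizzed above can be sketched with a minimal node class (illustrative): preorder visits the root first, postorder visits it last, and inorder on a binary search tree yields the keys in ascending order.

```python
# Sketch of preorder / inorder / postorder traversal of a binary tree.
class T:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def preorder(n):   # root, left subtree, right subtree
    return [n.key] + preorder(n.left) + preorder(n.right) if n else []

def inorder(n):    # left subtree, root, right subtree
    return inorder(n.left) + [n.key] + inorder(n.right) if n else []

def postorder(n):  # left subtree, right subtree, root
    return postorder(n.left) + postorder(n.right) + [n.key] if n else []

bst = T(2, T(1), T(3))   # a small binary search tree
```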
1. A binary tree is a tree data structure in which each node can have:
a) Two children
b) Three children
c) More than three children
d) Any number of children
Answer: b) 1
5. A binary tree with all leaf nodes at the same level is called a:
a) Full binary tree
b) Perfect binary tree
c) Balanced binary tree
d) Complete binary tree
6. A binary tree with all internal nodes having exactly one child is called a:
a) Full binary tree
b) Perfect binary tree
c) Balanced binary tree
d) Skewed binary tree
Answer: b) The heights of the left and right subtrees differ by at most one
Answer: a) 2^k
2. In a complete binary tree, if a node has an index i (0-based indexing) in the level-order traversal,
then its left child will have an index of:
a) i - 1
b) 2i
c) 2i + 1
d) i + 1
Answer: c) 2i + 1
3. In a complete binary tree, if a node has an index i (0-based indexing) in the level-order traversal,
then its right child will have an index of:
a) i - 1
b) 2i + 2
c) 2i + 1
d) i + 1
Answer: b) 2i + 2
4. In a complete binary tree with n nodes, what is the maximum height of the tree?
a) log2(n)
b) n
c) n - 1
d) n + 1
Answer: a) log2(n)
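The 0-based index arithmetic from the questions above can be checked directly: node i's children sit at 2i + 1 and 2i + 2, and its parent at (i - 1) // 2. (The ACBT array questions later in this set use the 1-based convention 2i, 2i + 1, i / 2 instead.)

```python
# Sketch: level-order array storage of a complete binary tree, with
# 0-based child and parent index formulas.
tree = [10, 5, 20, 3, 7]               # level-order storage
i = 1                                  # the node holding 5
left, right = 2 * i + 1, 2 * i + 2     # indices 3 and 4
parent = (i - 1) // 2                  # index 0
```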
Answer: b) 2^h
with nodes
b) Is a perfect binary tree
c) Is a full binary tree
d) Is a complete binary tree, but some nodes are missing from the last level
Answer: d) Is a complete binary tree, but some nodes are missing from the last level
7. In an almost complete binary tree (ACBT), the last level may not be completely filled, but:
a) All internal nodes have two children
b) The number of nodes at each level is the same
c) The left subtree is completely filled
d) The right subtree is completely filled
8. What is the key property that distinguishes an almost complete binary tree (ACBT) from a complete
binary tree?
a) The height of the tree
b) The number of nodes
c) The number of children per node
d) The completeness of the last level
1. In an array implementation of an almost complete binary tree (ACBT), if a node is at index i (using 1-based indexing), what are the indices of its left child and right child, respectively?
a) 2i and 2i + 1
b) i - 1 and i + 1
c) i + 1 and i + 2
d) i and i + 1
Answer: a) 2i and 2i + 1
2. In an array implementation of an almost complete binary tree (ACBT), what is the index of the parent node of a node at index i (using 1-based indexing)?
a) i / 2
b) i - 1
c) i + 1
d) i * 2
Answer: a) i / 2
3. If an array is used to represent an almost complete binary tree (ACBT) of n nodes, what will be the
size of the array?
a) n
b) n + 1
c) 2n
d) 2n + 1
Answer: a) n
4. In an array implementation of an almost complete binary tree (ACBT), the elements of the tree are
stored in the array such that:
a) All elements are stored in ascending order
b) The root element is at the first index (index 0)
c) The elements are stored in a zigzag manner
d) The parent node is always greater than its children
5. The array implementation of an almost complete binary tree (ACBT) saves memory compared to a
linked representation because:
a) It requires fewer nodes to represent the tree
b) It eliminates the need for pointers to link nodes
c) It allows for faster tree traversal
d) It has a smaller height
6. In an array representation of an almost complete binary tree (ACBT), if a node is at index i, and i is
greater than the size of the array, it indicates that:
a) The tree is not an ACBT
b) The node is a leaf node
c) The node has no children
d) The node is missing from the tree
7. One disadvantage of using an array to represent an almost complete binary tree (ACBT) is that:
a) It requires more memory compared to linked representation
b) It is difficult to perform insertion and deletion operations
c) It is not possible to find the parent of a node
d) It cannot handle trees with more than 100 nodes
8. When converting an ACBT into an array representation, the nodes are stored in the array in a
specific order to maintain:
a) Balanced height of the tree
b) The heap property of the tree
c) The completeness of the last level
d) The maximum number of nodes at each level
1. In a binary search tree (BST), the key value of the left child is:
a) Greater than the key value of the parent
b) Less than the key value of the parent
c) Equal to the key value of the parent
d) Unrelated to the key value of the parent
2. In a binary search tree (BST), the key value of the right child is:
a) Greater than the key value of the parent
b) Less than the key value of the parent
c) Equal to the key value of the parent
d) Unrelated to the key value of the parent
3. The binary search tree (BST) property allows for efficient searching of elements because:
a) All elements are stored in a sorted order
b) All elements have unique key values
c) All elements have the same key value
d) The tree is perfectly balanced
Answer: a) Following the left child pointers from the root until a leaf node is reached
Answer: b) Following the right child pointers from the root until a leaf node is reached
6. Which operation is commonly used to insert a new element into a binary search tree (BST) while
maintaining the BST property?
a) Left rotation
b) Right rotation
c) Preorder traversal
d) Binary search
7. In a binary search tree (BST), if a node with a given key value is not found during the search
operation, it indicates that:
a) The tree is not balanced
b) The tree is not an ACBT
c) The node does not exist in the tree
d) The tree is not sorted
8. The worst-case time complexity of searching for an element in a binary search tree (BST) with n nodes is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
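BST search as quizzed above can be sketched with a minimal node class (illustrative names): go left for smaller keys, right for larger. The cost is O(h) for tree height h — O(log n) when the tree is balanced, degrading to O(n) when it is skewed into a chain.

```python
# Sketch of binary search tree lookup using the BST ordering property.
class BSTNode:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def search(node, key):
    while node is not None:
        if key == node.key:
            return True
        # smaller keys live in the left subtree, larger in the right
        node = node.left if key < node.key else node.right
    return False   # the key does not exist in the tree

root = BSTNode(8, BSTNode(3, None, BSTNode(6)), BSTNode(10))
```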
1. An AVL tree is a self-balancing binary search tree, where the balance factor of every node is:
a) Greater than 0
b) Less than 0
c) Equal to 0
d) Between -1 and 1
2. In an AVL tree, the balance factor of a node is defined as the difference between:
a) The height of its left subtree and the height of its right subtree
b) The number of nodes in its left subtree and the number of nodes in its right subtree
c) The key value of its left child and the key value of its right child
d) The depth of its left subtree and the depth of its right subtree
Answer: a) The height of its left subtree and the height of its right subtree
3. What operation is performed to balance an AVL tree after inserting a new node?
a) Right rotation
b) Left rotation
c) Preorder traversal
d) Level-order traversal
4. In an AVL tree, after a right rotation is performed on a node, which node becomes the new parent of
the subtree?
a) The left child of the node
b) The right child of the node
c) The parent of the node
d) The grandparent of the node
Answer: a) The left child of the node
6. In an AVL tree, after a left rotation is performed on a node, which node becomes the new parent of
the subtree?
a) The left child of the node
b) The right child of the node
c) The parent of the node
d) The grandparent of the node
7. In an AVL tree, the balancing operation is triggered when the balance factor of a node becomes:
a) 0
b) 1
c) -1
d) Outside the range of -1 to 1
8. The main advantage of using AVL trees over regular binary search trees is that AVL trees:
a) Have faster search operations
b) Require less memory
c) Support more operations
d) Ensure a balanced height, resulting in faster overall operations
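The balance factor defined above can be computed directly. A small sketch, again using illustrative `(key, left, right)` tuples (names and layout are assumptions, not the source's):

```python
# Balance factor = height(left subtree) - height(right subtree).
# An AVL tree keeps this in {-1, 0, 1} at every node.

def height(node):
    if node is None:
        return 0
    _, left, right = node
    return 1 + max(height(left), height(right))

def balance_factor(node):
    _, left, right = node
    return height(left) - height(right)

# A balanced tree:   2          A right-heavy chain: 1 -> 2 -> 3
#                   / \         violates the AVL property at the root.
#                  1   3
balanced = (2, (1, None, None), (3, None, None))
chain = (1, None, (2, None, (3, None, None)))

assert balance_factor(balanced) == 0
assert balance_factor(chain) == -2   # outside [-1, 1] -> rotation needed
```

A left rotation at the chain's root would restore the balanced shape shown on the left.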
2. In a multi-way tree, the number of children a node can have is called the:
a) Degree of the node
b) Height of the node
c) Level of the node
d) Depth of the node
4. A binary tree is a special case of a multi-way tree where each internal node has:
a) One child
b) Two children
c) Three children
d) Multiple children
7. A multi-way tree with all internal nodes having the same number of children is called a:
a) Full multi-way tree
b) Perfect multi-way tree
c) Balanced multi-way tree
d) Complete multi-way tree
7. B-trees and B+ trees are particularly well-suited for use in databases because they:
a) Minimize the use of disk storage
b) Provide a constant-time search operation
c) Balance the tree automatically
d) Support efficient range queries and ordered traversals
8. B-trees and B+ trees are commonly used in scenarios where data is stored on secondary storage
devices like hard disks because they:
a) Reduce the number of disk accesses
b) Minimize the use of main memory
c) Support parallel processing
d) Allow for faster processing of complex algorithms
2. What is the time complexity of the sequential search algorithm in the worst case?
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
3. The sequential search algorithm is best suited for searching in which type of data structure?
a) Sorted arrays
b) Linked lists
c) Binary search trees
d) Hash tables
5. In the sequential search algorithm, if the target element is found early in the list, what is the best-case time complexity?
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: a) O(1)
1. The sequential search algorithm starts searching for the target element from the:
a) Last element of the list
b) Middle element of the list
c) First element of the list
d) Randomly selected element of the list
Answer: c) First element of the list
2. The sequential search algorithm compares the target element with each element in the list until:
a) It finds the target element
b) It reaches the middle element of the list
c) The list becomes empty
d) It reaches the last element of the list
Answer: a) It finds the target element
4. If the target element is not present in the list, the sequential search algorithm will:
a) Output the index of the target element as -1
b) Output the index of the target element as 0
c) Return an empty list
d) Iterate through the entire list without finding the target element
Answer: d) Iterate through the entire list without finding the target element
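The behaviour described in the questions above can be sketched in a few lines (a standard textbook version; the function name is illustrative):

```python
def sequential_search(items, target):
    """Return the index of target, or -1 if absent (scans left to right)."""
    for i, value in enumerate(items):
        if value == target:
            return i          # best case: target at index 0 -> O(1)
    return -1                 # worst case: all n elements compared -> O(n)

assert sequential_search([7, 3, 9, 1], 9) == 2
assert sequential_search([7, 3, 9, 1], 5) == -1   # full scan, not found
```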
1. The time complexity of the sequential search algorithm in the best-case scenario is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: a) O(1)
2. In the worst-case scenario, when the target element is not present in the list, the sequential search
algorithm will perform how many comparisons?
a) 1
b) n
c) n/2
d) n-1
Answer: b) n
4. The time complexity of the sequential search algorithm in the average-case scenario is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
5. The sequential search algorithm works well for small datasets because of its:
a) Low space complexity
b) Low time complexity
c) Ease of implementation
d) Recursive nature
2. The binary search algorithm divides the input data into two halves and compares the target element
with the:
a) Middle element of the array
b) First element of the array
c) Last element of the array
d) Randomly selected element of the array
3. In each step of the binary search algorithm, half of the remaining elements are:
a) Removed from the search
b) Duplicated for further search
c) Sorted in ascending order
d) Compared with the target element
4. If the target element is found during the binary search algorithm, it will be at the:
a) First index of the array
b) Last index of the array
c) Middle index of the array
d) Any index of the array
Answer: c) Middle index of the array
6. In the binary search algorithm, if the target element is greater than the middle element of the array,
the search continues in the:
a) Left half of the array
b) Right half of the array
c) Middle element of the array
d) Any random half of the array
1. The time complexity of the binary search algorithm in the best-case scenario is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: a) O(1)
2. In the worst-case scenario, when the target element is not present in the sorted array, the binary
search algorithm will perform how many comparisons?
a) 1
b) log n
c) n
d) n/2
Answer: b) log n
3. The binary search algorithm is best suited for searching in which type of data structure?
a) Unsorted arrays
b) Linked lists
c) Hash tables
d) Sorted arrays
Answer: d) Sorted arrays
4. The time complexity of the binary search algorithm in the average-case scenario is:
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: b) O(log n)
5. The binary search algorithm works well for large datasets because of its:
a) Low space complexity
b) Low time complexity
c) Ease of implementation
d) Recursive nature
Answer: b) Low time complexity
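The halving behaviour from the questions above, as a standard iterative sketch (function name illustrative):

```python
def binary_search(sorted_items, target):
    """Iterative binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2                 # compare with the middle element
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1                     # continue in the right half
        else:
            hi = mid - 1                     # continue in the left half
    return -1                                # at most ~log2(n) iterations

assert binary_search([1, 3, 5, 7, 9, 11], 7) == 3
assert binary_search([1, 3, 5, 7, 9, 11], 4) == -1
```

Each iteration discards half of the remaining elements, which is where the O(log n) bound comes from.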
4. The time complexity of a sorting algorithm is primarily measured by the number of:
a) Comparisons and swaps performed
b) Loops used in the algorithm
c) Recursion calls made
d) Random choices made during the process
5. In a stable sorting algorithm, elements with equal keys maintain their relative order in the:
a) Original array
b) Final sorted array
c) Middle of the array
d) Random positions in the array
2. In each iteration of the selection sort algorithm, the selected smallest element is placed at the:
a) Beginning of the array
b) End of the array
c) Middle of the array
d) Random position in the array
3. What is the time complexity of the selection sort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: d) O(n^2)
5. The selection sort algorithm is not suitable for sorting large datasets because of its:
a) Low time complexity
b) Low space complexity
c) High time complexity
d) High space complexity
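The selection step described above, sketched in its textbook form (at most n-1 swaps, but always Θ(n²) comparisons):

```python
def selection_sort(a):
    """In-place selection sort: O(n^2) comparisons, at most n-1 swaps."""
    n = len(a)
    for i in range(n - 1):
        smallest = i
        for j in range(i + 1, n):              # find the minimum of a[i:]
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]  # place it at the beginning
    return a

assert selection_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```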
1. The insertion sort algorithm works by dividing the input array into:
a) Two equal halves
b) Two sorted and unsorted parts
c) Three equal parts
d) Multiple subarrays
2. In each iteration of the insertion sort algorithm, the algorithm picks an element from the unsorted
part and places it in the correct position within the:
a) Unsorted part of the array
b) Sorted part of the array
c) Middle of the array
d) Random position in the array
Answer: b) Sorted part of the array
3. What is the time complexity of the insertion sort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
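The sorted/unsorted split described above can be sketched as follows; on already-sorted input the inner loop never runs, giving the O(n) best case:

```python
def insertion_sort(a):
    """Insertion sort: grows a sorted prefix one element at a time."""
    for i in range(1, len(a)):
        key = a[i]                     # next element from the unsorted part
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger sorted elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                 # insert into the sorted part
    return a

assert insertion_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```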
1. The bubble sort algorithm works by repeatedly comparing adjacent elements and swapping them if
they are in:
a) Random order
b) Ascending order
c) Descending order
d) Any order
2. In each iteration of the bubble sort algorithm, the largest element "bubbles up" to the:
a) Beginning of the array
b) End of the array
c) Middle of the array
d) Random position in the array
3. What is the time complexity of the bubble sort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n)
d) O(n^2)
Answer: c) O(n)
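The O(n) best case above assumes the early-exit variant of bubble sort, sketched here: if a full pass makes no swaps, the array is already sorted and the loop stops.

```python
def bubble_sort(a):
    """Bubble sort with early exit: O(n) best case on sorted input."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):             # largest element bubbles to the end
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                        # no swaps -> already sorted
            break
    return a

assert bubble_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```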
**6.3.4 Heapsort**
3. In heapsort, the largest element is repeatedly removed from the heap and placed at the:
a) Root of the heap
b) Leaf of the heap
c) Middle of the heap
d) Random position in the heap
4. What is the time complexity of the heapsort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n log n)
d) O(n^2)
Answer: c) O(n log n)
5. Heapsort is an example of a:
a) Divide and conquer algorithm
b) Greedy algorithm
c) Recursive algorithm
d) Linear search algorithm
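A compact heapsort sketch using the standard-library heap. Note this is a min-heap variant for brevity; the textbook in-place version uses a max-heap and moves the largest element to the end of the array on each extraction.

```python
import heapq

def heapsort(items):
    """Heapsort sketch via a min-heap: O(n log n) overall."""
    heap = list(items)
    heapq.heapify(heap)                  # build the heap in O(n)
    # n extractions, each O(log n), yield the elements in sorted order
    return [heapq.heappop(heap) for _ in range(len(heap))]

assert heapsort([5, 1, 4, 2, 3]) == [1, 2, 3, 4, 5]
```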
**6.3.5 Mergesort**
2. In mergesort, the input array is divided into smaller subarrays until each subarray contains:
a) One element
b) Two elements
c) Three elements
d) Four elements
3. The process of combining two sorted subarrays into a single sorted array is called:
a) Merge
b) Split
c) Heapify
d) Swap
Answer: a) Merge
4. What is the time complexity of the mergesort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n log n)
d) O(n^2)
Answer: c) O(n log n)
5. Mergesort is an example of a:
a) Divide and conquer algorithm
b) Greedy algorithm
c) Recursive algorithm
d) Linear search algorithm
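The divide-and-merge process described above, as a short sketch: split until subarrays hold one element, then merge sorted halves back together.

```python
def merge_sort(a):
    if len(a) <= 1:                    # a one-element subarray is sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # divide
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0            # conquer: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

assert merge_sort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```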
**6.3.6 Quicksort**
2. In quicksort, the input array is partitioned into two subarrays such that elements in one subarray are:
a) Smaller than the pivot element
b) Greater than the pivot element
c) Equal to the pivot element
d) In random order with respect to the pivot element
4. What is the time complexity of the quicksort algorithm in the best case?
a) O(1)
b) O(log n)
c) O(n log n)
d) O(n^2)
Answer: c) O(n log n)
5. Quicksort is an example of a:
a) Divide and conquer algorithm
b) Greedy algorithm
c) Recursive algorithm
d) Linear search algorithm
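The partitioning step described above, in a simple (not in-place) sketch: elements smaller than the pivot go to one subarray, larger ones to the other, and each side is sorted recursively.

```python
def quicksort(a):
    """Quicksort sketch: partition around a pivot, recurse on each side."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    smaller = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    larger  = [x for x in a if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

assert quicksort([5, 2, 4, 1, 3]) == [1, 2, 3, 4, 5]
```

Production quicksorts partition in place; this list-comprehension form trades memory for clarity.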
1. The time complexity of sorting algorithms is generally expressed in terms of the number of:
a) Swaps performed
b) Loops used in the algorithm
c) Comparisons performed
d) Recursive calls made
2. Which sorting algorithm has the lowest time complexity in the best-case scenario?
a) Selection sort
b) Bubble sort
c) Quick sort
d) Merge sort
3. A stable sorting algorithm maintains the relative order of elements with equal keys in the:
a) Original array
b) Final sorted array
c) Middle of the array
d) Random positions in the array
5. The worst-case time complexity of a sorting algorithm gives the upper bound on the time required to
sort the data when the data is:
a) Sorted in ascending order
b) Sorted in descending order
c) Sorted in any order
d) Sorted in reverse order
1. The selection sort algorithm has a time complexity of O(n^2) in both the best and worst cases.
a) True
b) False
Answer: a) True
3. Selection sort works well for large datasets due to its efficient time complexity.
a) True
b) False
Answer: b) False
4. The main advantage of selection sort is its low space complexity.
a) True
b) False
Answer: a) True
5. The number of swaps performed by selection sort is proportional to the number of elements in the
array.
a) True
b) False
Answer: a) True
1. The insertion sort algorithm has a time complexity of O(n) in the best case.
a) True
b) False
Answer: a) True
3. Insertion sort is well-suited for sorting linked lists due to its low time complexity.
a) True
b) False
Answer: a) True
4. The number of comparisons performed by insertion sort is proportional to the number of elements in
the array.
a) True
b) False
Answer: a) True
5. The main advantage of insertion sort is its efficient performance on large datasets.
a) True
b) False
Answer: b) False
1. The bubble sort algorithm has a time complexity of O(n^2) in the best case.
a) True
b) False
Answer: b) False
3. Bubble sort is well-suited for sorting arrays with a small number of elements due to its low time
complexity.
a) True
b) False
Answer: a) True
4. The number of swaps performed by bubble sort is proportional to the number of elements in the
array.
a) True
b) False
Answer: b) False
5. The main advantage of bubble sort is its efficient performance on large datasets.
a) True
b) False
Answer: b) False
**6.4.4 Heapsort**
1. The heapsort algorithm has a time complexity of O(n log n) in the best case.
a) True
b) False
Answer: a) True
3. Heapsort is well-suited for sorting linked lists due to its low time complexity.
a) True
b) False
Answer: b) False
4. The number of comparisons performed by heapsort is proportional to the number of elements in the
array.
a) True
b) False
Answer: b) False
**6.4.5 Mergesort**
1. The mergesort algorithm has a time complexity of O(n log n) in the best case.
a) True
b) False
Answer: a) True
3. Mergesort is well-suited for sorting arrays with a small number of elements due to its low time
complexity.
a) True
b) False
Answer: b) False
4. The number of swaps performed by mergesort is proportional to the number of elements in the array.
a) True
b) False
Answer: b) False
Lecture 7: Hash Functions and Hash Tables
**7.1 Hashing & Introduction to Hash Tables**
3. The data structure that uses hashing to store and retrieve data is called a:
a) Linked list
b) Binary tree
c) Hash table
d) Stack
5. Hash tables use an array to store data, and the array index is determined by the hash function. What
is the term used for the array index?
a) Hash value
b) Hash key
c) Hash code
d) Hash index
6. In hashing, what is the name given to the process of converting a key into its corresponding hash
value/index?
a) Hashing
b) Collision resolution
c) Hash function
d) Hashing algorithm
8. When two different keys generate the same hash value, it is called:
a) Clustering
b) Hash collision
c) Hash table overflow
d) Linear probing
Answer: b) Hash collision
Answer: c) Reversibility
5. Which of the following is a disadvantage of using a hash function with poor distribution properties?
a) Reduced collision probability
b) Increased load factor
c) Slower hash table resizing
d) Reduced memory usage
6. Which of the following hash functions is more likely to cause clustering in a hash table?
a) A hash function with uniform distribution
b) A hash function that always returns the same hash value
c) A hash function that multiplies the key by a prime number
d) A hash function that performs bitwise XOR on the key
Answer: b) A hash function that always returns the same hash value
7. One approach to handling hash collisions is to use a separate data structure to store multiple values
that share the same hash index. What is this data structure called?
a) Linked list
b) Binary tree
c) Stack
d) Queue
Answer: a) Multiplying the key by a prime number and taking the remainder
Answer: b) Breaks the key into several parts, adds them, and takes the remainder
Answer: b) Multiplies the key by itself and takes the middle digits
5. Which of the following is an advantage of using the mid-square method for hashing?
a) It provides a more uniform distribution of hash values.
b) It is computationally less expensive than other methods.
c) It guarantees no collisions for all input keys.
d) It is easy to implement and requires little additional memory.
Answer: a) The hash function produces the same value for two different keys
Answer: d) Separate chaining uses an additional data structure to store collided elements.
6. In the separate chaining collision resolution technique, each bucket in the hash table is a:
a) Linked list
b) Binary tree
c) Fixed-size array
d) Circular queue
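Separate chaining as described above, sketched with Python lists standing in for the per-bucket linked lists (class and method names are illustrative):

```python
class ChainedHashTable:
    """Separate chaining sketch: each bucket holds a list of (key, value)."""
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % len(self.buckets)    # hash function -> bucket index

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)        # update an existing key
                return
        bucket.append((key, value))             # colliding keys share a bucket

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None                             # key not present
```

With only two buckets, collisions are guaranteed, yet lookups still work because each bucket is searched sequentially.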
4. Linear probing may cause performance degradation when the load factor of the hash table:
a) Is close to 0.5
b) Is close to 1
c) Is greater than 1
d) Is less than 1
Answer: b) Is close to 1
5. Linear probing can lead to a phenomenon called "primary clustering." What does primary clustering
refer to in linear probing?
a) Keys that hash to the same index form a chain
b) Collided elements are placed at the beginning of the table
c) Collided elements are evenly distributed across the table
d) The hash table is resized to accommodate more elements
8. The linear probing technique usually requires __________ additional memory compared to separate
chaining.
a) More
b) Less
c) The same amount of
d) No
Answer: b) Less
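Linear probing, and the primary clustering it causes, can be demonstrated in a few lines (table size and keys chosen for illustration):

```python
def linear_probe_insert(table, key):
    """Insert key into an open-addressing table using linear probing."""
    n = len(table)
    i = key % n                    # initial hash index
    for step in range(n):
        slot = (i + step) % n      # step linearly, wrapping around
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

table = [None] * 7
# 3, 10 and 17 all hash to index 3 -> they pile up in consecutive slots
for key in (3, 10, 17):
    linear_probe_insert(table, key)
assert table[3:6] == [3, 10, 17]   # primary clustering in action
```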
4. Quadratic probing may cause performance degradation when the load factor of the hash table:
a) Is close to 0.5
b) Is close to 1
c) Is greater than 1
d) Is less than 1
Answer: b) Is close to 1
5. Quadratic probing can mitigate the problem of primary clustering experienced in linear probing.
How does quadratic probing achieve this?
a) By rehashing the keys that cause primary clustering
b) By evenly distributing collided elements across the table
c) By incrementing the index quadratically to find empty slots
d) By using a different hash function for quadratic probing
8. Quadratic probing can handle more collisions than linear probing before clustering degrades performance. What is the primary advantage of this property?
a) It allows for faster search operations.
b) It requires less memory for the hash table.
c) It reduces the need for hash table resizing.
d) It guarantees a lower load factor.
4. Double hashing may cause performance degradation when the load factor of the hash table:
a) Is close to 0.5
b) Is close to 1
c) Is greater than 1
d) Is less than 1
Answer: b) Is close to 1
5. Double hashing can handle primary clustering that occurs in linear probing. How does double
hashing achieve this?
a) By using two different hash functions to distribute collided elements
b) By rehashing the keys that cause primary clustering
c) By using a quadratic step size for the linear probe
d) By creating a separate linked list for collided elements
8. Which of the following is an advantage of using double hashing over linear probing?
a) Lower likelihood of collisions
b) Faster search operations
c) Reduced primary clustering
d) Better space utilization
2. In separate chaining, if the load factor of the hash table is high, what can be done to avoid hash table
overflow?
a) Resize the hash table to increase its size
b) Use linear probing for collision resolution
c) Use quadratic probing for collision resolution
d) Use a different hash function
3. When deleting an element from a hash table using separate chaining, what should be done with the
linked list in the corresponding bucket?
a) Delete the linked list
b) Remove the element from the linked list
c) Rehash the linked list
d) Reverse the linked list
4. In double hashing, how is the step size for the linear probe determined?
a) By the first hash function
b) By the second hash function
c) By the difference between the hash values generated by the two hash functions
d) By the load factor of the hash table
5. Which collision resolution technique has the least likelihood of causing secondary clustering?
a) Linear probing
b) Quadratic probing
c) Double hashing
d) Separate chaining
6. When using quadratic probing for collision resolution, what happens if the computed index for the
next probe exceeds the size of the hash table?
a) The probe continues from the beginning of the hash table.
b) The probe continues from the last available slot in the hash table.
c) The probe wraps around to the initial position and starts again.
d) The element is placed in the next available slot in the hash table.
Answer: c) The probe wraps around to the initial position and starts again.
7. In double hashing, what is the primary purpose of using two different hash functions?
a) To reduce the number of collisions
b) To ensure uniform distribution of keys
c) To avoid hash table overflow
d) To improve search efficiency
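Double hashing as described above: the second hash function supplies the probe step size, so keys that share a start index follow different probe sequences (table size and hash functions here are illustrative; a prime table size ensures every slot is reachable):

```python
def double_hash_insert(table, key):
    """Double hashing sketch: step size comes from a second hash function."""
    n = len(table)                     # n prime so every slot is reachable
    h1 = key % n                       # first hash: start index
    h2 = 1 + (key % (n - 1))           # second hash: step size, never 0
    for i in range(n):
        slot = (h1 + i * h2) % n
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

table = [None] * 7
for key in (3, 10, 17):                # same start index 3, different steps
    double_hash_insert(table, key)
assert table[3] == 3 and table[1] == 10 and table[2] == 17
```

Unlike the linear-probing example, the three colliding keys scatter across the table instead of forming a contiguous run.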
Answer: c) Tree
17. The space complexity of an adjacency matrix for an undirected graph with n vertices is:
a) O(n)
b) O(n^2)
c) O(n log n)
d) O(1)
Answer: b) O(n^2)
22. The space complexity of an adjacency list for a graph with n vertices and e edges is:
a) O(n)
b) O(e)
c) O(n + e)
d) O(n * e)
Answer: c) O(n + e)
25. In an adjacency list representation, the time complexity to check if there is an edge between two
vertices is:
a) O(1)
b) O(log n)
c) O(n)
d) O(e)
Answer: c) O(n)
**8.4.3 Graph Traversal Algorithms (Breadth First Search, Depth First Search)**
26. Breadth First Search (BFS) explores the graph in which order?
a) From the deepest nodes to the root
b) From the first encountered vertex to the last
c) From the source vertex to the farthest vertices
d) From the last encountered vertex to the first
28. In BFS, the shortest path between two vertices in an unweighted graph is found by:
a) Maintaining a priority queue of vertices
b) Exploring all possible paths from the source to the destination
c) Exploring the graph level by level
d) Using Dijkstra's algorithm
Answer: c) Exploring the graph level by level
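The level-by-level exploration above is what makes BFS find shortest paths in unweighted graphs; a minimal sketch (adjacency-list dict assumed):

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """BFS explores level by level; the first arrival at target gives the
    shortest unweighted path length. Returns -1 if target is unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for v in adj[u]:
            if v not in dist:          # unvisited -> one level deeper
                dist[v] = dist[u] + 1
                queue.append(v)
    return -1

adj = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
assert bfs_shortest_path(adj, 'A', 'D') == 2
```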
30. Which traversal algorithm is more suitable for finding the shortest path in an unweighted graph?
a) BFS (Breadth First Search)
b) DFS (Depth First Search)
c) Both BFS and DFS are equally suitable
d) Neither BFS nor DFS can find the shortest path
32. The priority queue is a crucial data structure used in Dijkstra's algorithm to:
a) Store all the vertices of the graph
b) Keep track of visited vertices
c) Maintain the distances to each vertex from the source
d) Ensure that the graph has no cycles
33. In Dijkstra's algorithm, what happens when a shorter path to a vertex is discovered?
a) The vertex is marked as visited and added to the shortest path tree
b) The vertex is removed from the priority queue
c) The distance to the vertex is updated in the priority queue
d) The algorithm backtracks to the previous vertex
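The priority-queue mechanics asked about above can be sketched with `heapq` (adjacency lists of `(neighbor, weight)` pairs assumed; non-negative weights required):

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra sketch: the priority queue always yields the unsettled
    vertex with the smallest tentative distance."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                   # stale entry: a shorter path was found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd           # shorter path discovered -> update
                heapq.heappush(pq, (nd, v))
    return dist

adj = {'A': [('B', 4), ('C', 1)], 'B': [('D', 1)],
       'C': [('B', 2)], 'D': []}
assert dijkstra(adj, 'A')['D'] == 4    # A -> C -> B -> D = 1 + 2 + 1
```

Rather than updating entries in place, this version pushes a new entry when a shorter path is found and skips stale ones on pop, a common practical simplification.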
Answer: a) The shortest path between all pairs of vertices in a weighted graph
38. The time complexity of the Floyd-Warshall algorithm for a graph with V vertices is:
a) O(V)
b) O(V^2)
c) O(V^3)
d) O(log V)
Answer: c) O(V^3)
39. The Floyd-Warshall algorithm uses a dynamic programming approach to find the shortest path.
What does "dynamic programming" mean in this context?
a) Solving a problem by breaking it into smaller subproblems and storing their solutions
b) Using random values to update the distance matrix
c) Always selecting the shortest path at each step
d) Simulating all possible paths in the graph
Answer: a) Solving a problem by breaking it into smaller subproblems and storing their solutions
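The dynamic-programming structure of Floyd-Warshall is visible in its three nested loops over vertices, which is where the O(V^3) bound comes from (distance matrix assumed, `INF` for missing edges):

```python
INF = float('inf')

def floyd_warshall(dist):
    """Floyd-Warshall sketch: dist[i][j] is shortened whenever routing
    through an intermediate vertex k gives a better path."""
    n = len(dist)
    for k in range(n):                 # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist                        # three nested loops -> O(V^3)

d = [[0, 3, INF],
     [INF, 0, 1],
     [INF, INF, 0]]
floyd_warshall(d)
assert d[0][2] == 4                    # 0 -> 1 -> 2 = 3 + 1
```

The subproblem being stored is "shortest path from i to j using only the first k vertices as intermediates", which is the overlapping-subproblem structure dynamic programming exploits.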
43. The time complexity of Prim's algorithm for finding the minimum spanning tree in a graph with V
vertices is:
a) O(V)
b) O(V^2)
c) O(V^3)
d) O(V log V)
Answer: b) O(V^2)
45. In Prim's algorithm, what happens when two disconnected components are present in the graph?
a) The algorithm enters an infinite loop
b) The algorithm outputs an error message
c) The algorithm connects the components with a minimum weight edge
d) The algorithm cannot handle disconnected components
Answer: d) The algorithm cannot handle disconnected components
46. When using Prim's algorithm, the initial vertex for the minimum spanning tree is chosen:
a) Arbitrarily
b) As the vertex with the smallest degree
c) As the vertex with the largest degree
d) As the vertex with the smallest total edge weight
Answer: a) Arbitrarily
Answer: a) The edge with the smallest weight from the current tree to a new vertex
48. Which statement is true about Prim's algorithm for finding the minimum spanning tree?
a) It can handle graphs with negative edge weights
b) It always starts with the vertex having the smallest degree
c) It may not work correctly for disconnected graphs
d) It guarantees the maximum spanning tree of the graph
50. The minimum spanning tree found by Prim's algorithm is unique if:
a) The graph has a cycle
b) The graph has multiple disconnected components
c) The graph has at least one negative weight edge
d) The graph has unique edge weights
52. Kruskal's algorithm sorts the edges of the graph based on:
a) The vertices they connect
b) The edge weights in non-decreasing order
c) The edge weights in non-increasing order
d) The total number of edges
53. The time complexity of Kruskal's algorithm for finding the minimum spanning tree in a graph
with E edges and V vertices is:
a) O(E)
b) O(E log V)
c) O(V^2)
d) O(V log V)
55. In Kruskal's algorithm, what happens when two disconnected components are present in the
graph?
a) The algorithm enters an infinite loop
b) The algorithm outputs an error message
c) The algorithm connects the components with a minimum weight edge
d) The algorithm cannot handle disconnected components
Answer: c) The algorithm connects the components with a minimum weight edge
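Kruskal's edge-sorting and cycle-checking steps, sketched with a simple union-find (edge format `(weight, u, v)` and function names are assumptions for illustration):

```python
def kruskal(n, edges):
    """Kruskal sketch: sort edges by weight, add each edge unless it
    would create a cycle (checked with a simple union-find)."""
    parent = list(range(n))

    def find(x):                       # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # non-decreasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                   # different components -> no cycle
            parent[ru] = rv            # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 1, 3), (3, 2, 3)]
mst, total = kruskal(4, edges)
assert total == 6                      # picks edges (0,1), (1,2), (2,3)
```

On a disconnected graph this same code returns a minimum spanning forest: one tree per component, since no edge can ever join two components that share no edges.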
Lecture 9: Algorithm Designs
56. Which class of algorithms is used to find the shortest path in a graph with non-negative edge
weights?
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Shortest Path algorithm
57. The class of algorithms that breaks a problem into smaller subproblems and solves each
subproblem independently is called:
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Brute force algorithm
58. The "Greedy algorithm" always makes the locally optimal choice at each step, with the hope of
finding a global optimum. Which statement is true about Greedy algorithms?
a) Greedy algorithms always guarantee the global optimum for all problems.
b) Greedy algorithms are usually more time-efficient than other algorithm classes.
c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
d) Greedy algorithms are only applicable to problems with a small number of variables.
Answer: c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
59. Dynamic Programming algorithms are particularly useful for solving problems that exhibit:
a) Overlapping subproblems and optimal substructure
b) Low complexity and small input size
c) Independent subproblems and no overlapping
d) Linear data structures and one-dimensional arrays
Answer: c) Examine all possible solutions and select the best one
62. An algorithm that has a time complexity of O(log n) is considered more efficient than an algorithm
with time complexity:
a) O(1)
b) O(n)
c) O(n^2)
d) O(n log n)
63. Which strategy is NOT typically used to improve the efficiency of an algorithm?
a) Divide and Conquer
b) Brute force
c) Dynamic Programming
d) Memoization
Answer: d) The efficiency of an algorithm can be measured by its time and space complexity.
68. Which of the following algorithm design techniques emphasizes breaking a problem into smaller
subproblems and solving each subproblem independently?
a) Greedy algorithm
b) Divide and Conquer algorithm
c) Brute force algorithm
d) Dynamic Programming algorithm
71. The "Greedy algorithm" always makes the locally optimal choice at each step, with the hope of
finding a global optimum. Which statement is true about Greedy algorithms?
a) Greedy algorithms always guarantee the global optimum for all problems.
b) Greedy algorithms are usually more time-efficient than other algorithm classes.
c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
d) Greedy algorithms are only applicable to problems with a small number of variables.
Answer: c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
72. Dynamic Programming algorithms are particularly useful for solving problems that exhibit:
a) Overlapping subproblems and optimal substructure
b) Low complexity and small input size
c) Independent subproblems and no overlapping
d) Linear data structures and one-dimensional arrays
Answer: a) Overlapping subproblems and optimal substructure
Answer: c) Examine all possible solutions and select the best one
Answer: c) Exploring all possible solutions and making choices along the way
77. In asymptotic analysis, the term "Big O notation" is used to represent the:
a) Best-case time complexity of the algorithm
b) Average-case time complexity of the algorithm
c) Worst-case time complexity of the algorithm
d) Space complexity of the algorithm
79. Which statement is true about the "Big O notation" in asymptotic analysis?
a) It provides an exact measure of the algorithm's execution time.
b) It represents the average-case time complexity of the algorithm.
c) It gives a lower bound on the algorithm's time complexity.
d) It helps analyze how the algorithm's performance scales with the input size.
Answer: d) It helps analyze how the algorithm's performance scales with the input size.
82. In algorithm analysis, the term "average-case analysis" considers the algorithm's performance based
on:
a) The best possible input
b) The worst possible input
c) Randomly distributed inputs
d) All possible inputs
Answer: d) It helps analyze how the algorithm performs on the least favorable input.
85. When analyzing an algorithm, we are primarily interested in its performance concerning:
a) The total execution time
b) The number of conditional statements used
c) The maximum input size it can handle
d) How the execution time grows with the input size
Answer: d) How the execution time grows with the input size
86. The "Divide and Conquer" algorithm technique is based on the principle of:
a) Iterative problem-solving
b) Exploring all possible solutions simultaneously
c) Breaking a problem into smaller subproblems and solving them independently
d) Reducing the problem size by increasing the input size
Answer: c) Breaking a problem into smaller subproblems and solving them independently
88. Which of the following best describes the "Divide and Conquer" algorithm?
a) It divides the problem into overlapping subproblems and combines their solutions.
b) It recursively breaks the problem into smaller independent subproblems.
c) It uses random choices to explore the solution space.
d) It uses a single approach to solve problems of different sizes.
89. A Divide and Conquer algorithm reduces the problem size by:
a) Increasing the input size
b) Combining all subproblems into one
c) Dividing the problem into subproblems of equal size
d) Ignoring subproblems that do not contribute to the final solution
Answer: c) The algorithm's performance grows at a linear rate with the input size.
92. The "Knapsack Problem" is a classic example of a problem solved using which technique?
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Brute force algorithm
a) Greedy algorithms always guarantee the global optimum for all problems.
b) Greedy algorithms are usually more time-efficient than other algorithm classes.
c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
d) Greedy algorithms are only applicable to problems with a small number of variables.
Answer: c) Greedy algorithms can sometimes lead to suboptimal solutions for some problems.
Answer: c) The optimal solution can be reached by making locally optimal choices at each step
96. Dynamic Programming algorithms are particularly useful for solving problems that exhibit:
a) Overlapping subproblems and optimal substructure
b) Low complexity and small input size
c) Independent subproblems and no overlapping
d) Linear data structures and one-dimensional arrays
Answer: a) Overlapping subproblems and optimal substructure
97. The "Fibonacci sequence" is a classic problem solved using which technique?
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Brute force algorithm
Answer: d) It solves subproblems recursively and stores their solutions to avoid redundant
computations.
99. In the context of dynamic programming, the term "memoization" refers to:
a) Storing data in one-dimensional arrays
b) Storing intermediate results to avoid redundant calculations
c) Solving the problem iteratively using a bottom-up approach
d) Using a divide and conquer approach to solve subproblems
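Memoization as defined above, applied to the Fibonacci sequence mentioned in question 97: caching each intermediate result turns an exponential recursion into a linear one.

```python
from functools import lru_cache

@lru_cache(maxsize=None)               # memoization: cache each result
def fib(n):
    """Naive recursion is O(2^n); with memoization each subproblem is
    solved once and then looked up, giving O(n)."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

assert fib(10) == 55
assert fib(50) == 12586269025          # instant with memoization
```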
Answer: c) Examine all possible solutions and select the best one
103. Brute force algorithms are most appropriate for problems where:
a) The optimal solution can be found by combining solutions to subproblems
b) The problem can be divided into smaller independent subproblems
c) The problem size is small, and all possible solutions can be checked
d) The problem involves dynamic programming and memoization
Answer: c) The problem size is small, and all possible solutions can be checked
104. The main drawback of using brute force algorithms is that they:
a) Require sophisticated data structures
b) Are not suitable for problems with overlapping subproblems
c) Have high time and space complexity for large input sizes
d) Are only applicable to problems with a small number of variables
Answer: c) Have high time and space complexity for large input sizes
105. The "Traveling Salesman Problem" is an example of a problem often solved using which
technique?
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Brute force algorithm
Answer: c) Exploring all possible solutions and making choices along the way
107. The "N-Queens Problem" is an example of a problem often solved using which technique?
a) Divide and Conquer algorithm
b) Greedy algorithm
c) Dynamic Programming algorithm
d) Backtracking algorithm
Answer: d) Backtracking algorithm. Backtracking is a form of recursion that explores all possible paths until a solution is found.
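The backtracking approach to the N-Queens Problem from question 107 can be sketched in a few lines (a minimal illustration; the name `n_queens` is chosen for this example):

```python
def n_queens(n: int) -> int:
    """Backtracking: place one queen per row; abandon (backtrack from)
    any partial placement that attacks an earlier queen."""
    count = 0

    def place(row, cols, diag1, diag2):
        nonlocal count
        if row == n:
            count += 1
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue  # this square is attacked; try the next column
            place(row + 1, cols | {col}, diag1 | {row - col}, diag2 | {row + col})

    place(0, set(), set(), set())
    return count

print(n_queens(6))  # 4
```

Dead-end branches are cut off as soon as a conflict appears, which is what distinguishes backtracking from plain brute force.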
**9.6.6 Branch-and-bound algorithms**
Answer: d) Explore only a subset of possible solutions while eliminating suboptimal paths
114. The "Traveling Salesman Problem" can also be solved using branch-and-bound algorithms. What
is the main benefit of using this technique?
a) It guarantees the optimal solution in all cases.
b) It explores all possible solutions in a short amount of time.
c) It significantly reduces the time complexity compared to other techniques.
d) It prunes suboptimal paths and eliminates redundant computations.
Answer: d) It prunes suboptimal paths and eliminates redundant computations.
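A branch-and-bound version of the Traveling Salesman search, as in question 114, can be sketched as follows (a simplified illustration: the bound used here is just the running tour cost, not a full lower-bound estimate, and the distance matrix and name `tsp_branch_and_bound` are invented for this example):

```python
def tsp_branch_and_bound(dist):
    """Branch and bound: grow partial tours, pruning any branch whose
    running cost already meets or exceeds the best complete tour."""
    n = len(dist)
    best = [float("inf")]

    def extend(city, visited, cost):
        if cost >= best[0]:  # bound: this partial tour cannot improve on the best
            return
        if len(visited) == n:
            best[0] = min(best[0], cost + dist[city][0])
            return
        for nxt in range(n):
            if nxt not in visited:
                extend(nxt, visited | {nxt}, cost + dist[city][nxt])

    extend(0, {0}, 0)
    return best[0]

dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_branch_and_bound(dist))  # 80
```

The pruning test is what separates this from brute force: whole subtrees of tours are discarded without ever being enumerated.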
Answer: d) Introduce randomness, which can lead to better solutions in some cases
Answer: d) Stochastic algorithms use random choices and probability distributions to search for
solutions.
**9.7 Complexity**
Answer: b) It represents the amount of memory used by the algorithm to store data.
125. In time complexity analysis, the term "Big O notation" is used to represent the:
a) Best-case time complexity of the algorithm
b) Average-case time complexity of the algorithm
c) Worst-case time complexity of the algorithm
d) Space complexity of the algorithm
Answer: c) Worst-case time complexity of the algorithm
Answer: d) Space complexity helps determine how much memory an algorithm needs to execute.
Answer: d) Time complexity helps determine the efficiency of an algorithm concerning the input size.
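The growth rates behind Big O notation can be made concrete by counting comparisons (a minimal illustration; the step-counting functions are invented for this example): linear search is O(n) in the worst case, while binary search on sorted data is O(log n).

```python
def linear_search_steps(items, target):
    """O(n): the worst case examines every element."""
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            break
    return steps

def binary_search_steps(items, target):
    """O(log n): each comparison halves the remaining range (sorted input)."""
    lo, hi, steps = 0, len(items) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return steps

data = list(range(1024))
print(linear_search_steps(data, 1023))  # 1024
print(binary_search_steps(data, 1023))  # 11
```

For 1024 elements the worst-case gap is 1024 comparisons versus about 11, and it widens rapidly as the input grows.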
Answer: c) Applying different algorithms to a specific problem and evaluating their performance
138. The main purpose of conducting a case study on algorithm design techniques is to:
a) Identify the best algorithm for a specific problem
b) Determine the average execution time of an algorithm
c) Explore different data structures used in algorithms
d) Analyze the space complexity of an algorithm
Answer: a) Identify the best algorithm for a specific problem
139. Which statement is true about a case study on algorithm design techniques?
a) The case study involves comparing algorithms based on their code complexity.
b) The case study helps determine the most optimal solution for any given problem.
c) The case study focuses on finding the average execution time of an algorithm.
d) The case study evaluates algorithm performance using real-world scenarios.
Answer: d) The case study evaluates algorithm performance using real-world scenarios.
140. The results of a case study on algorithm design techniques can be used to:
a) Prove the correctness of a particular algorithm
b) Determine the best algorithm for a specific problem under certain conditions
c) Establish the average execution time of an algorithm
d) Explore the mathematical properties of different algorithms
Answer: b) Determine the best algorithm for a specific problem under certain conditions
142. The primary purpose of using data structures in algorithm design is to:
a) Reduce the time complexity of the algorithm
b) Provide a visual representation of the algorithm's execution
c) Optimize the algorithm's performance for large input sizes
d) Facilitate the storage and retrieval of data during algorithm execution
Answer: d) Facilitate the storage and retrieval of data during algorithm execution
143. Which statement best describes the relationship between data structures and algorithms?
a) Data structures represent the steps in an algorithm's execution.
b) Data structures are the algorithms used to process data.
c) Data structures and algorithms are separate entities with no connection.
d) Data structures provide the foundation for algorithm design and execution.
Answer: d) Data structures provide the foundation for algorithm design and execution.
Answer: c) The input size and the specific operations required by the algorithm
Answer: d) Lists, queues, stacks, and trees to store and organize data
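The roles of the structures named above can be sketched in a few lines (a minimal illustration using Python's built-in list and `collections.deque`; the item names are invented for this example):

```python
from collections import deque

# A list used as a stack: last in, first out.
stack = []
stack.append("task1")
stack.append("task2")
print(stack.pop())      # task2 (the most recently added item comes off first)

# A deque used as a queue: first in, first out.
queue = deque()
queue.append("task1")
queue.append("task2")
print(queue.popleft())  # task1 (the oldest item comes off first)
```

Choosing the structure that matches the access pattern an algorithm needs is exactly the storage-and-retrieval role described in question 142.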