
Q.1 Differentiate between recursion and iteration


Recursion vs. Iteration
1) Recursion uses the selection structure; iteration uses the repetition structure.
2) Recursion reduces the size of the code; iteration increases the size of the code.
3) Recursion uses more memory in comparison to iteration; iteration uses less memory in comparison to recursion.
4) A recursive algorithm is one that calls itself repeatedly until a base condition is satisfied; iterative algorithms use constructs like loops and sometimes other data structures like stacks and queues to solve problems.
5) Recursion terminates when a base case is reached; iteration terminates when a condition is proven to be false.
6) Each recursive call requires extra space on the stack frame (memory); an iteration does not require extra space.
7) With infinite recursion, the program may run out of memory and result in stack overflow; an infinite loop can run forever since no extra memory is created.
8) Solutions to some problems are easier to formulate recursively; an iterative solution may not always be as obvious as a recursive one.
9) Recursion is slower than iteration; iteration is quick in comparison to recursion.
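As an illustration (a minimal Python sketch, not part of the original notes), the same problem, summing the first n natural numbers, can be solved both ways:

def sum_recursive(n):
    # Base case terminates the recursion
    if n == 0:
        return 0
    # Recursive call works on a smaller version of the problem
    return n + sum_recursive(n - 1)

def sum_iterative(n):
    # Loop-based version: no extra stack frames are created
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_recursive(5), sum_iterative(5))   # both print 15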

Q.2 What is recursion? Explain different applications of recursion with the following programs: (1) Fibonacci series (2) factorial of a number
1) Any function which calls itself is called recursive.
2) A recursive method solves a problem by calling a copy of itself to work on a smaller problem. This is called
the recursion step.
3) The recursion step can result in many more such recursive calls.
4) It is important to ensure that the recursion terminates.
5) Each time the function calls itself with a slightly simpler version of the original problem. The sequence of
smaller problems must eventually converge on the base case.
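The two programs asked for in the question can be sketched in Python as follows (function names are my own):

def factorial(n):
    # Base case: 0! = 1
    if n == 0:
        return 1
    # Recursion step: n! = n * (n - 1)!
    return n * factorial(n - 1)

def fibonacci(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # Recursion step: two calls on smaller problems
    return fibonacci(n - 1) + fibonacci(n - 2)

print(factorial(5))                       # 120
print([fibonacci(i) for i in range(8)])   # [0, 1, 1, 2, 3, 5, 8, 13]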

Q.4 What is sorting? Explain bubble sort with example.


Sorting
1) Sorting is an algorithm that arranges the elements of a list in a certain order [either ascending or
descending]. The output is a permutation or reordering of the input.
2) Sorting is one of the important categories of algorithms in computer science and a lot of research has gone
into this category.
3) Sorting can significantly reduce the complexity of a problem.
4) It is often used for database algorithms and searches.
5) Sorting algorithms are generally classified based on the following parameters.
a) By Number of Comparisons
b) By Number of Swaps
c) By Memory Usage
d) By Recursion
e) By Stability
Bubble sort
1) Bubble sort is the simplest sorting algorithm.
2) It works by iterating the input array from the first element to the last, comparing each pair of elements
and swapping them if needed.
3) Bubble sort continues its iterations until no more swaps are needed.
4) Some researchers suggest that we should not teach bubble sort because of its simplicity and high time
complexity.
5) The only significant advantage that bubble sort has over other implementations is that it can detect
whether the input list is already sorted or not.
6) The algorithm takes O(n^2) time (even in the best case).
7) Space complexity: O(1)
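A minimal Python sketch of bubble sort, including the early-exit check that detects an already-sorted list (the function name is my own):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        # No swaps in a full pass means the list is already sorted
        if not swapped:
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))   # [1, 2, 4, 5, 8]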
Q.5 What is sorting? Explain Selection sort with example.
Selection sort
1) Selection sort is an in-place sorting algorithm.
2) Selection sort works well for small files.
3) It is used for sorting files with very large values and small keys.
4) This is because selection is made based on keys, and swaps are made only when required.
5) Advantages :
a) Easy to implement
b) In-place sort (requires no additional storage space)
6) Disadvantages : Doesn't scale well: O(n2)
7) Algorithm :
a) Find the minimum value in the list
b) Swap it with the value in the current position
c) Repeat this process for all the elements until the entire array is sorted
8) This algorithm is called selection sort since it repeatedly selects the smallest element.
9) The algorithm takes O(n^2) time (even in the best case).
10) Space complexity: O(1)
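A minimal Python sketch of selection sort following the algorithm above (the function name is my own):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the index of the minimum value in the unsorted part
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it with the value in the current position (at most one swap per pass)
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([29, 10, 14, 37, 13]))   # [10, 13, 14, 29, 37]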

Q.6 What is sorting? Explain Insertion sort with example.


Insertion sort
1) Every repetition of insertion sort removes an element from the input data.
2) It inserts that element into the correct position in the already-sorted list, until no input elements remain.
3) Sorting is typically done in-place.
4) The resulting array after k iterations has the property that the first k + 1 entries are sorted.
5) Each element greater than x is copied to the right as it is compared against x.
6) The algorithm takes O(n^2) time.
7) Space complexity: O(1)
8) Example:
9) Given the array 6 8 1 4 5 3 7 2, the goal is to put the elements in ascending order.
   6 8 1 4 5 3 7 2 (Consider index 0)
   6 8 1 4 5 3 7 2 (Consider indices 0 - 1)
   1 6 8 4 5 3 7 2 (Consider indices 0 - 2: insertion sort places 1 in front of 6 and 8)
   1 4 6 8 5 3 7 2 (The same process is repeated until the array is sorted)
   1 4 5 6 8 3 7 2
   1 3 4 5 6 7 8 2
   1 2 3 4 5 6 7 8 (The array is sorted!)
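A minimal Python sketch of insertion sort applied to the same array (the function name is my own):

def insertion_sort(arr):
    for k in range(1, len(arr)):
        x = arr[k]
        j = k - 1
        # Each element greater than x is copied one position to the right
        while j >= 0 and arr[j] > x:
            arr[j + 1] = arr[j]
            j -= 1
        # Insert x into its correct position in the sorted prefix
        arr[j + 1] = x
    return arr

print(insertion_sort([6, 8, 1, 4, 5, 3, 7, 2]))   # [1, 2, 3, 4, 5, 6, 7, 8]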

Q.7 Compare and contrast performance analysis of different sorting techniques


1) Sorting is an algorithm that arranges the elements of a list in a certain order [either ascending or
descending]. The output is a permutation or reordering of the input.
2) Different sorting techniques can be compared on their complexity, stability and memory constraints.
3) Time complexity analysis considers the best, average and worst case complexity of different sorting techniques, along with the scenarios in which each case occurs.
4) Bubble sort and Insertion sort:
   a) Average and worst case time complexity: O(n^2)
   b) Best case time complexity: O(n), when the array is already sorted.
   c) Worst case: when the array is reverse sorted.
5) Selection sort: best, average and worst case time complexity is O(n^2), independent of the distribution of the data.
6) Merge sort: best, average and worst case time complexity is O(n log n), independent of the distribution of the data.

Q.8 What is a searching technique? Explain linear search with an example.


1) Searching is the process of selecting particular information from a collection of data based on specific
criteria.
2) We use the term searching to refer to the process of finding a specific item in a collection of data items.
3) The search operation can be performed on many different data structures.
4) It involves finding an item within a sequence, using a search key to identify the specific item.
5) A key is a unique value used to identify the data elements of a collection. In collections containing simple
types such as integers or reals, the values themselves are the keys.
The Linear Search
1) The simplest solution to the sequence search problem is the sequential or linear search algorithm.
2) This technique iterates over the sequence, one item at a time, until the specific item is found.
3) Example
a) The use of the in operator makes our code simple and easy to read but it hides the inner workings.
Underneath, the in operator is implemented as a linear search.
b) Consider the unsorted 1-D array of integer values shown in Figure(a). To determine if value 31 is in the
array, the search begins with the value in the first element.
c) Since the first element does not contain the target value, the next element in sequential order is
compared to value 31.
d) This process is repeated until the item is found in the sixth position.
e) What if the item is not in the array? For example, suppose we want to search for value 8 in the sample
array. The search begins at the first entry as before, but this time every item in the array is compared
to the target value. It cannot be determined that the value is not in the sequence until the entire array
has been traversed, as illustrated in Figure 5.1(b).
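A minimal Python sketch of linear search (the array values are made up, since the original figure is not reproduced here):

def linear_search(values, target):
    # Compare each element against the target, one item at a time
    for i, item in enumerate(values):
        if item == target:
            return i       # position of the first match
    return -1              # the target is not in the sequence

data = [10, 51, 2, 18, 4, 31, 13, 5]
print(linear_search(data, 31))   # 5 (found in the sixth position)
print(linear_search(data, 8))    # -1 (every item had to be examined)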
Binary search

Q.10 Compare and contrast linear and binary search with respect to time and space complexity

Q.11 What is a selection technique? Explain selection by sorting with an example.


1) A selection algorithm is an algorithm for finding the kth smallest/largest number in a list (also called the kth order statistic).
2) This includes finding the minimum, maximum, and median elements.
3) Selection by Sorting
a) A selection problem can be converted to a sorting problem. In this method, we first sort the input elements and then get the desired element (see the sketch after this list).
b) For example,
i) Suppose we want to get the minimum element.
ii) After sorting the input elements we can simply return the first element (assuming the array is sorted
in ascending order)
iii) Now, if we want to find the second smallest element, we can simply return the second element from
the sorted list.
iv) That means, for the second smallest element we are not performing the sorting again. The same is also the case with subsequent queries.
v) Even if we want to get the kth smallest element, just one scan of the sorted list is enough to find the element.
c) From the above discussion, what we can say is that after the initial sorting we can answer any query in one scan, O(n).
d) Linear Selection Algorithm - Median of Medians Algorithm
i) Worst-case performance O(n)
ii) Best-case performance O(n)
iii) Worst-case space complexity O(1) auxiliary
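A minimal Python sketch of selection by sorting (the data values and function name are my own; it uses Python's built-in sorted, not the median-of-medians algorithm):

def kth_smallest(arr, k):
    # Sort once; every subsequent query is answered from the sorted list
    sorted_arr = sorted(arr)
    return sorted_arr[k - 1]   # k is 1-based: k = 1 gives the minimum

data = [7, 2, 9, 4, 11, 3]
print(kth_smallest(data, 1))   # 2  (minimum)
print(kth_smallest(data, 2))   # 3  (second smallest)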

Q.1 Discuss different algorithm design techniques.


1) Classification: There are many ways of classifying algorithms; a few of them are shown below.
a) Implementation Method
b) Design Method
2) Classification by Implementation Method
a) Recursion : A recursive algorithm is one that calls itself repeatedly until a base condition is satisfied.
b) Iteration : Iterative algorithms use constructs like loops and sometimes other data structures like stacks
and queues to solve the problems.
c) Serial : In serial algorithms, we assume that computers execute one instruction at a time.
d) Parallel : Parallel algorithms take advantage of computer architectures to process many instructions at
a time.
e) Deterministic or Non-Deterministic : Deterministic algorithms solve the problem with a predefined
process, whereas non-deterministic algorithms guess the best solution at each step through the use of
heuristics.
3) Classification by Design Method
a) Greedy Method :
i) Greedy algorithms work in stages.
ii) A decision is made that is good at that point, without bothering about the future consequences.
Generally, this means that some local best is chosen.
iii) It assumes that the local best selection also makes for the global optimal solution.
b) Divide and Conquer :
i) Divide : Breaking the problem into sub problems that are themselves smaller instances of the same
type of problem.
ii) Recursion: Recursively solving these sub problems
iii) Conquer: Appropriately combining their answers.
c) Linear Programming : In linear programming, there are inequalities in terms of inputs and
maximizing (or minimizing) some linear function of the inputs.
4) Classification by Complexity : In this classification, algorithms are classified by the time they take to find a
solution based on their input size.
5) Randomized Algorithms : A few algorithms make their choices randomly.

Q.2 What is the Greedy technique?


1) Greedy algorithms work in stages.
2) In each stage, a decision is made that is good at that point, without bothering about the future. This means
that some local best is chosen.
3) It assumes that a local good selection makes for a global optimal solution.
4) Making locally optimal choices does not always work. Hence, Greedy algorithms will not always give the
best solution.
5) The two basic properties of optimal Greedy algorithms are
a) Greedy choice property
b) Optimal substructure
6) Advantages of Greedy Method :
a) It is easy to understand and easy to code.
b) It is straightforward.
7) Disadvantage of Greedy Method : In many cases there is no guarantee that making locally optimal
improvements to a locally optimal solution gives the optimal global solution.
8) Greedy Applications
a) Sorting: Selection sort, Topological sort
b) Priority Queues: Heap sort
c) Huffman coding compression algorithms
d) Shortest path in Weighted Graph
e) Fractional Knapsack problem
f) Job scheduling algorithm
g) Greedy techniques can be used as an approximation algorithm for complex problems.
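A minimal Python sketch of one application from the list above, the fractional knapsack problem (the item format and function name are my own):

def fractional_knapsack(items, capacity):
    # items: list of (value, weight) pairs
    # Greedy choice: always take the item with the best value/weight ratio
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or the fraction that fits
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0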

Q.3 Explain the concept of divide and conquer. State its advantages and disadvantages.
1) Divide and Conquer is an important algorithm design technique based on recursion.
2) The D & C algorithm works by recursively breaking down a problem into two or more sub problems of the
same type, until they become simple enough to be solved directly
3) The solutions to the sub problems are then combined to give a solution to the original problem.
4) The D & C strategy solves a problem by:
a) Divide: Breaking the problem into sub problems that are themselves smaller instances of the same type
of problem.
b) Recursion: Recursively solving these sub problems.
c) Conquer: Appropriately combining their answers.
5) It’s not possible to solve all the problems with the Divide & Conquer technique.
6) As per the definition of D & C, the recursion solves the subproblems which are of the same type.
7) For some problems it is not possible to find subproblems of the same type, so D & C is not a choice
for all problems.
8) Advantages of Divide and Conquer
a) D & C is a powerful method for solving difficult problems.
b) Dividing the problem into subproblems so that subproblems can be combined again is a major
difficulty in designing a new algorithm. For many such problems D & C provides a simple solution.
9) Disadvantages of Divide and Conquer
a) The main disadvantage of the D & C approach is that recursion is slow.
b) The D & C approach needs a stack for storing the recursive calls.
c) Another problem with D & C is that, for some problems, it may be more complicated than an iterative
approach.
10) Applications : 1) Merge sort 2) Strassen's matrix multiplication
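A minimal Python sketch of the first application, merge sort, showing the divide, recursion and conquer steps (the function name is my own):

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    # Divide: break the problem into two smaller instances and solve them recursively
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: combine the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]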

Q.4 Explain the concept of dynamic programming. State its advantages and disadvantages.
1) Dynamic programming (DP) is a simple technique but it can be difficult to master.
2) One easy way to identify and solve DP problems is by solving as many problems as possible.
3) The term Programming is not related to coding but it is from literature, and means filling tables (similar to
Linear Programming).
4) Dynamic programming and memoization work together.
5) The main difference between dynamic programming and divide and conquer is that in the case of the
latter, sub problems are independent, whereas in DP there can be an overlap of sub problems.
6) DP reduces the exponential complexity to polynomial complexity (O(n^2), O(n^3), etc.)
7) The major components of DP are: 1) Recursion: solves sub problems recursively. 2) Memoization: stores
already computed values in a table.
8) Dynamic Programming = Recursion + Memoization
9) Basically there are two approaches for solving DP problems (1) Bottom-up dynamic programming (2) Top-
down dynamic programming
10) Examples of Dynamic Programming Algorithms
a) Chain matrix multiplication
b) Subset Sum
c) Travelling salesman problem
d) Algorithms on graphs can be solved efficiently: Bellman-Ford algorithm for finding the shortest
distance in a graph
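A minimal Python sketch of "Recursion + Memoization", top-down DP for the Fibonacci numbers (using Python's functools.lru_cache as the memo table):

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Base cases
    if n < 2:
        return n
    # Recursion: the overlapping subproblems fib(n-1) and fib(n-2)
    # are each computed only once thanks to the memo table
    return fib(n - 1) + fib(n - 2)

print(fib(40))   # 102334155, computed in linear rather than exponential time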

Q.5 Explain the concept of backtracking. State its advantages and disadvantages.
1) Backtracking is an improvement of the brute force approach.
2) It systematically searches for a solution to a problem among all available options.
3) In backtracking, we start with one possible option out of the many available and try to solve the
problem. If we are able to solve the problem with the selected move, we print the solution; otherwise we
backtrack, select some other option, and try again. If none of the options work out, we claim that
there is no solution to the problem.
4) Backtracking allows us to deal with situations in which a raw brute-force approach would explode into an
impossible number of options to consider.
5) Backtracking can be thought of as a selective tree/graph traversal method.
6) What’s interesting about backtracking is that we back up only as far as needed to reach a previous decision
point with an as-yet-unexplored alternative.
7) In general, that will be at the most recent decision point.
8) Sometimes the best algorithm for a problem is to try all possibilities. This is always slow, but there are
standard tools that can be used to help.
9) Applications of backtracking :
a) Binary Strings: generating all binary strings
b) Generating k – ary Strings
c) N-Queens Problem
d) The Knapsack Problem
e) Generalized Strings
f) Graph Coloring Problem
