Paper Id: 110502 Sub Code-RCS502
B.TECH.
(SEM V) THEORY EXAMINATION 2019-20
DESIGN AND ANALYSIS OF ALGORITHM
TIME: 3 HRS TOTAL MARKS: 70
SECTION-A
1. Attempt all questions in brief: 2x7=14
a. How do you compare the performance of various
algorithms?
Ans. The performance of an algorithm is determined by how much computing time and storage it requires.
i) Time complexity: the amount of time the algorithm needs for computation.
ii) Space complexity: the amount of memory the algorithm requires during computation.
b. Take the following list of functions and arrange
them in ascending order of growth rate. That is, if
function g(n) immediately follows function f(n) in
your list, then it should be the case that f(n) is
O(g(n)).
f1(n) = n^2.5, f2(n) = (√2)^n, f3(n) = n + 10, f4(n) = 10n,
f5(n) = 100n, and f6(n) = n² log n
Sample values (log taken to base 10):
Function           n=1     n=2     n=3
f1(n) = n^2.5      1       5.66    15.59
f2(n) = (√2)^n     1.41    2       2.83
f3(n) = n + 10     11      12      13
f4(n) = 10n        10      20      30
f5(n) = 100n       100     200     300
f6(n) = n² log n   0       1.20    4.29
Values at small n do not determine asymptotic growth. f3, f4 and f5 are all Θ(n); f6 = n² log n grows more slowly than f1 = n^2.5, since log n = O(√n); and f2 = (√2)^n is exponential, so it grows fastest of all. The ascending order of growth rate is therefore:
f3 < f4 < f5 < f6 < f1 < f2
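As a quick sanity check (not part of the original answer), a small Python sketch printing each function at a large n makes the asymptotic ordering visible:

```python
import math

# Evaluate each function at a large n; the printed values appear in the
# same order as the asymptotic ranking f3 < f4 < f5 < f6 < f1 < f2.
n = 1000
funcs = {
    "f3 = n + 10":     n + 10,
    "f4 = 10n":        10 * n,
    "f5 = 100n":       100 * n,
    "f6 = n^2 log n":  n**2 * math.log10(n),
    "f1 = n^2.5":      n**2.5,
    "f2 = (sqrt 2)^n": math.sqrt(2)**n,
}
for name, value in funcs.items():
    print(f"{name:16s} {value:.3e}")
```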
c. What is advantage of binary search over linear
search? Also, state limitations of binary search.
SOL. Advantage:
Compared to linear search, binary search is much faster. Linear search takes, on average, N/2 comparisons (where N is the number of elements in the array), and N comparisons in the worst case. Binary search takes log2(N) comparisons in both the average and the worst case.
Limitation:
The only limitation is that the array or list of elements must be sorted for
the binary search algorithm to work on it.
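For illustration (not part of the original answer), a minimal iterative binary search in Python:

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1        # target lies in the right half
        else:
            hi = mid - 1        # target lies in the left half
    return -1

# The input must already be sorted, which is the limitation noted above.
print(binary_search([3, 9, 12, 23, 43, 81], 23))  # 3
```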
d. What are greedy algorithms? Explain their
characteristics?
Ans. Greedy is an algorithmic paradigm that builds up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Problems where the locally optimal choice also leads to a globally optimal solution are the best fit for the greedy approach.
Characteristics:
i) Greedy choice property
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on the choices made so far, but not on future choices or on the solutions to the subproblems.
ii) Optimal substructure
A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to the subproblems.
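An illustrative sketch (not from the original answer): the classic activity-selection problem exhibits both properties, and greedily picking the compatible activity that finishes earliest is always safe:

```python
def activity_selection(activities):
    """Greedy: repeatedly pick the compatible activity that finishes earliest.

    activities: list of (start, finish) pairs.
    Returns a maximum-size set of mutually non-overlapping activities.
    """
    selected = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # greedy choice: earliest finish first
            selected.append((start, finish))
            last_finish = finish
    return selected

print(activity_selection(
    [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```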
e. Explain applications of FFT.
Ans. The FFT has many applications and is used extensively in audio processing, radar, sonar and software-defined radio, to name but a few. In all these applications, a time-domain signal is converted by the FFT into a frequency-domain representation of the signal.
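A minimal illustration of this time-to-frequency conversion, assuming NumPy is available (not part of the original answer):

```python
import numpy as np

# A 5 Hz sine wave sampled at 100 Hz for 1 second.
fs = 100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t)

# The FFT converts the time-domain samples to the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The dominant bin sits at the 5 Hz component of the input.
print(freqs[np.argmax(np.abs(spectrum))])  # 5.0
```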
f. Define feasible and optimal solution.
Ans. A feasible solution is a set of values for the decision variables that satisfies all the constraints of the optimization problem. The feasible region of the optimization problem is the set of all feasible solutions.
An optimal solution is a feasible solution where the objective function
reaches its maximum (or minimum) value – for example, the most profit
or the least cost. A globally optimal solution is one where there are no
other feasible solutions with better objective function values.
g. What do you mean by polynomial time reduction?
Ans. In computational complexity theory, a polynomial-time
reduction is a method for solving one problem using another. One
shows that if a hypothetical subroutine solving the second problem
exists, then the first problem can be solved by transforming
or reducing it to inputs for the second problem and calling the subroutine
one or more times. If both the time required to transform the first
problem to the second, and the number of times the subroutine is called
is polynomial, then the first problem is polynomial-time reducible to the
second.
SECTION-B
2. Attempt any 3 of the following. 7x3=21
a. (i) Solve the recurrence T(n) = 2T(n/2) + n² + 2n + 1
Ans. Compare with the Master Theorem form T(n) = aT(n/b) + f(n):
a = 2, b = 2, f(n) = n² + 2n + 1 = Θ(n²)
n^(log_b a) = n^(log₂ 2) = n¹ = n
Since f(n) = Ω(n^(log_b a + ε)) with ε = 1, Case III may apply. It requires the regularity condition a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n:
2·f(n/2) = 2[(n²/4) + n + 1] = (n²/2) + 2n + 2
Taking c = 3/4, the inequality (n²/2) + 2n + 2 ≤ (3/4)(n² + 2n + 1) reduces to n²/4 ≥ (n/2) + (5/4), which holds for all n ≥ 4, so the regularity condition is satisfied.
By Case III:
T(n) = Θ(f(n)) = Θ(n² + 2n + 1) = Θ(n²)
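As a numerical sanity check (not part of the original answer), unrolling the recurrence with integer halving and an assumed base case T(1) = 1 shows T(n)/n² settling near a constant:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Unroll T(n) = 2T(n/2) + n^2 + 2n + 1 with T(1) = 1 and integer halving.
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n * n + 2 * n + 1

for n in [2**k for k in range(4, 14, 3)]:
    print(n, T(n) / n**2)   # ratio approaches a constant, consistent with Theta(n^2)
```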
(ii) Prove that worst case running time of any
comparison sort is Ω(nlogn).
Ans. Proof: Recall that the sorting algorithm must output a permutation of the input [a1, a2, ..., an]. The key to the argument is that (a) there are n! different possible permutations the algorithm might output, and (b) for each of these permutations, there exists an input for which that permutation is the only correct answer. For instance, the permutation [a3, a1, a4, a2] is the only correct answer for sorting the input [2, 4, 1, 3]. In fact, if you fix a set of n distinct elements, then there is a 1-1 correspondence between the different orderings the elements might be in and the permutations needed to sort them.
Given (a) and (b) above, we can fix some set of n! inputs (e.g., all orderings of {1, 2, ..., n}), one for each of the n! output permutations. Let S be the set of these inputs that are consistent with the answers to all comparisons made so far (so, initially, |S| = n!). We can think of a new comparison as splitting S into two groups: those inputs for which the answer would be YES and those for which the answer would be NO. Now, suppose an adversary always gives the answer to each comparison corresponding to the larger group. Then each comparison cuts down the size of S by at most a factor of 2. Since S initially has size n!, and by construction the algorithm must reduce |S| to 1 before it can know which output to produce, the algorithm must make at least log2(n!) comparisons before it can halt. We can then solve:
log2(n!) = log2(n) + log2(n − 1) + ... + log2(2)
= Ω(n log n).
b. Insert the following element in an initially empty
RB-Tree.
12, 9, 81, 76, 23, 43, 65, 88, 32, 54. Now delete 23 and 81.
c. Define spanning tree. Write Kruskal’s algorithm for finding
minimum cost spanning tree. Describe how Kruskal’s algorithm
is different from Prim’s algorithm for finding minimum cost
spanning tree.
Ans. Given an undirected and connected graph G=(V,E), a spanning tree of the graph G is a tree that spans G (that is, it includes every vertex of G) and is a subgraph of G (every edge in the tree belongs to G).
What is a Minimum Spanning Tree?
The cost of the spanning tree is the sum of the weights of all the edges in the tree.
There can be many spanning trees. Minimum spanning tree is the spanning tree where
the cost is minimum among all the spanning trees. There also can be many minimum
spanning trees.
Kruskal’s Algorithm
Kruskal's algorithm builds the spanning tree by adding edges one by one into a growing spanning tree. It follows the greedy approach: in each iteration it finds the edge of least weight and adds it to the growing spanning tree.
Algorithm Steps:
Sort the graph edges with respect to their weights.
Keep adding edges to the MST, from the edge of smallest weight towards the edge of largest weight.
Only add edges that do not form a cycle, i.e., edges that connect currently disconnected components.
Difference from Prim's algorithm: Kruskal's algorithm grows a forest, always picking the globally lightest remaining edge that does not create a cycle (typically using a union-find structure), whereas Prim's algorithm grows a single tree from a chosen start vertex, always picking the lightest edge that crosses from the tree to the rest of the graph (typically using a priority queue).
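A minimal union-find implementation of Kruskal's procedure (an illustrative sketch; the sample graph is made up):

```python
def kruskal(n, edges):
    """Kruskal's MST sketch: n vertices labelled 0..n-1,
    edges given as (weight, u, v) tuples."""
    parent = list(range(n))

    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # examine edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two different components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))   # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)
```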
d. What is dynamic programming? How is this
approach different from recursion? Explain with
example.
Ans. Dynamic Programming is also used for optimization problems. Like the divide-and-conquer method, Dynamic Programming solves problems by combining the solutions of subproblems. Moreover, a Dynamic Programming algorithm solves each subproblem just once and then saves its answer in a table, thereby avoiding the work of recomputing the answer every time.
Two main properties of a problem suggest that the given problem can be solved using
Dynamic Programming. These properties are overlapping sub-problems and
optimal substructure.
Steps of Dynamic Programming Approach:
Dynamic Programming algorithm is designed using the following four steps −
Characterize the structure of an optimal solution.
Recursively define the value of an optimal solution.
Compute the value of an optimal solution, typically in a bottom-up fashion.
Construct an optimal solution from the computed information.
Applications of Dynamic Programming Approach:
Matrix Chain Multiplication
Longest Common Subsequence
Travelling Salesman Problem
During recursion, the same subproblems may be solved multiple times, because functions are re-evaluated as often as the program calls them. In dynamic programming, once the result of a subproblem has been evaluated, it is stored in a table and can be retrieved from there whenever it is required again. DP is essentially a memoization technique: it uses a table to store the results of subproblems so that, if the same subproblem is encountered again, the result can be returned directly instead of being recalculated.
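An illustrative sketch using the Fibonacci numbers (a standard example, not taken from the original answer):

```python
def fib_recursive(n):
    """Plain recursion: fib(k) is recomputed many times -> exponential time."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_dp(n):
    """Dynamic programming: each subproblem is solved once and stored -> O(n)."""
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_recursive(10), fib_dp(10))  # 55 55
```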
e. Define NP-Hard and NP-complete problems. What
are the steps involved in proving a problem NP-
complete? Specify the problems already proved to
be NP-complete.
Sol. A language B is NP-complete if it satisfies two conditions:
B is in NP
Every A in NP is polynomial time reducible to B.
If a language satisfies the second property, but not necessarily the first one, the
language B is known as NP-Hard.
Informally, a search problem B is NP-Hard if there exists some NP-
Complete problem A that Turing reduces to B.
Steps to prove a problem Q is NP-complete:
Prove Q ∈ NP.
Pick a known NP-complete problem P.
Reduce P to Q:
Describe a transformation that maps instances of P to instances of Q, such that the answer is "yes" for Q if and only if it is "yes" for P.
Prove the transformation works.
Prove it runs in polynomial time.
Problems already proved to be NP-complete include SAT (by the Cook–Levin theorem), 3-SAT, Clique, Vertex Cover, Independent Set, Hamiltonian Cycle, the decision version of the Travelling Salesman Problem, Subset Sum, and Graph Colouring.
SECTION-C
3. Attempt any one part of the following: 7x1=7
a. Among Merge Sort, Insertion sort and quick sort
which sorting technique is the best in worst case.
Apply the best one among these algorithms to Sort
the list E, X, A, M, P, L, E in alphabetical order.
Ans.
Time complexity in the worst case:
Merge sort = O(n log n)
Quick sort = O(n²)
Insertion sort = O(n²)
Among these three, the worst-case time complexity of merge sort is lower than that of the other two. Thus, merge sort is the best in the worst case.
Applying merge sort to the list E, X, A, M, P, L, E:
Split: [E, X, A, M | P, L, E] → [E, X | A, M] and [P | L, E] → single elements.
Merge: [E, X] + [A, M] → [A, E, M, X]; [P] + [E, L] → [E, L, P];
final merge: [A, E, M, X] + [E, L, P] → [A, E, E, L, M, P, X].
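A minimal merge sort in Python confirming the result (an illustrative sketch, not part of the original answer):

```python
def merge_sort(a):
    """Recursively split, sort, and merge -> O(n log n) in the worst case."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort(list("EXAMPLE")))  # ['A', 'E', 'E', 'L', 'M', 'P', 'X']
```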
b. Solve the recurrence using recursion tree method:
T(n)=T(n/2)+T(n/4)+T(n/8)+n
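No worked solution is given in the original; a sketch of the recursion-tree argument, in LaTeX:

```latex
% Level 0 of the tree costs n. Each node spawns children of sizes
% n/2, n/4 and n/8, so the total cost at level i is at most (7/8)^i n.
% Summing the geometric series bounds T(n) above by a constant times n:
T(n) \;\le\; n \sum_{i=0}^{\infty} \left(\tfrac{7}{8}\right)^{i} \;=\; 8n,
\qquad T(n) \;\ge\; n,
\qquad\text{hence } T(n) = \Theta(n).
```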
4. Attempt any one part of the following: 7x1=7
a. Using minimum degree ‘t’ as 3, insert following sequence
of integers 10, 25, 20, 35, 30, 55, 40, 45, 50, 55, 60, 75, 70,
65, 80, 85 and 90 in an initially empty B-Tree. Give the
number of nodes splitting operations that take place.
b. Explain the algorithm to delete a given element in a
binomial heap. Give an example for the same.
Ans. Algorithm to delete a given element x from a binomial heap H:
Step 1: BINOMIAL-HEAP-DECREASE-KEY(H, x, −∞)
Step 2: BINOMIAL-HEAP-EXTRACT-MIN(H)
BINOMIAL-HEAP-DECREASE-KEY(H, x, k)
{
if k > key[x] then
error "new key is greater than current key"
key[x] <- k
y <- x
z <- p[y]
while z != NIL and key[y] < key[z]
{
exchange key[y] <-> key[z]   // bubble the decreased key up towards the root
y <- z
z <- p[y]
}
}
BINOMIAL-HEAP-EXTRACT-MIN(H)
Find the root x with the minimum key in the root list of H and remove x from the root list
H' <- MAKE-BINOMIAL-HEAP()
Reverse the order of the linked list of x's children and set the head of H' to point to the head of the resulting list
H <- BINOMIAL-HEAP-UNION(H, H')
Return x
Example: to delete the key 43 from a heap, set its key to −∞; since −∞ is smaller than every other key, it floats up to the root of its binomial tree, and the extract-min step then removes that root, reverses the list of its children, and unites them back into the heap.
5. Attempt any one part of the following: 7x1=7
a. Compare the various programming paradigms such
as divide-and-conquer, dynamic programming and
greedy approach.
Ans. The following is a list of several popular design approaches:
1. Divide and Conquer Approach: It is a top-down approach. The algorithms which follow the divide & conquer technique involve three steps:
o Divide the original problem into a set of subproblems.
o Solve every subproblem individually, recursively.
o Combine the solution of the subproblems (top level) into a solution of the whole original
problem.
2. Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values which are required either to be maximized or minimized (known as the objective), subject to some constraints or conditions.
3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems. This is particularly helpful when the number of overlapping subproblems is exponentially large. Dynamic Programming is frequently applied to optimization problems.
4. Branch and Bound: In a Branch & Bound algorithm, a given subproblem which cannot be bounded has to be divided into at least two new restricted subproblems. Branch and Bound algorithms are methods for global optimization of non-convex problems. They can be slow: in the worst case they require effort that grows exponentially with problem size, but in some cases the method converges with much less effort.
5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed
to access a source of independent, unbiased random bits, and it is then allowed to use these
random bits to influence its computation.
6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, the algorithm backtracks to the choice point, the place which presented different alternatives, and tries the next alternative.
b. What do you mean by convex hull? Describe an algorithm
that solves the convex hull problem. Find the time
complexity of the algorithm.
Ans. Given a set of points in the plane, the convex hull of the set is the smallest convex polygon that contains all the points of it.
Following is Graham's algorithm (a Python sketch appears after the complexity analysis below).
Let points[0..n-1] be the input array.
1) Find the bottom-most point by comparing the y coordinate of all points. If two points have the same y value, the point with the smaller x coordinate is taken. Let the bottom-most point be P0. Put P0 at the first position in the output hull.
2) Consider the remaining n-1 points and sort them by polar angle in counterclockwise order around points[0]. If the polar angle of two points is the same, put the nearer point first.
3) After sorting, check if two or more points have the same angle. If so, remove all points with the same angle except the one farthest from P0. Let the size of the new array be m.
4) If m is less than 3, return (a convex hull is not possible).
5) Create an empty stack S and push points[0], points[1] and points[2] onto S.
6) Process the remaining m-3 points one by one, doing the following for every point points[i]:
6.1) Keep removing points from the stack while the orientation of the following 3 points is not counterclockwise (i.e., they do not make a left turn):
a) the point next to the top of the stack
b) the point at the top of the stack
c) points[i]
6.2) Push points[i] onto S.
7) Print the contents of S.
Time Complexity: Let n be the number of input points. The algorithm takes O(n log n) time if we use an O(n log n) sorting algorithm.
The first step (finding the bottom-most point) takes O(n) time. The second step (sorting the points) takes O(n log n) time. The third step takes O(n) time. In the sixth step, every point is pushed and popped at most once, so processing the points one by one takes O(n) time, assuming the stack operations take O(1) time. The overall complexity is O(n) + O(n log n) + O(n) + O(n), which is O(n log n).
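A minimal Python sketch of Graham's scan (illustrative, not part of the original answer; the sample points are made up, and at least 3 non-collinear points are assumed):

```python
import math

def orientation(p, q, r):
    """Cross product of vectors pq and pr: > 0 counterclockwise (left turn),
    < 0 clockwise, 0 collinear."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def graham_scan(points):
    """Return the convex hull of a list of (x, y) points in CCW order."""
    # Step 1: bottom-most point (ties broken by smaller x).
    p0 = min(points, key=lambda p: (p[1], p[0]))
    # Step 2: sort the rest by polar angle around p0, nearer first on ties.
    rest = sorted((p for p in points if p != p0),
                  key=lambda p: (math.atan2(p[1] - p0[1], p[0] - p0[0]),
                                 (p[0] - p0[0]) ** 2 + (p[1] - p0[1]) ** 2))
    hull = [p0]
    for p in rest:
        # Step 6.1: pop while the last two hull points and p do not turn left;
        # popping on orientation == 0 also discards collinear points (step 3).
        while len(hull) > 1 and orientation(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull

pts = [(0, 3), (1, 1), (2, 2), (4, 4), (0, 0), (1, 2), (3, 1), (3, 3)]
print(graham_scan(pts))   # [(0, 0), (3, 1), (4, 4), (0, 3)]
```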
6. Attempt any one part of the following: 7x1=7
a. Solve the following 0/1 knapsack problem using
dynamic programming P= [11, 21, 31, 33]
w=[2, 11, 22, 15] c=20, n=4.
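No worked table is given in the original; a minimal dynamic-programming sketch (the standard O(n·c) formulation) that computes the answer for the given values:

```python
def knapsack(profits, weights, capacity):
    """0/1 knapsack: dp[j] = best profit achievable with capacity j so far."""
    dp = [0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for j in range(capacity, w - 1, -1):   # reverse scan: each item used once
            dp[j] = max(dp[j], dp[j - w] + p)
    return dp[capacity]

# P = [11, 21, 31, 33], w = [2, 11, 22, 15], c = 20
print(knapsack([11, 21, 31, 33], [2, 11, 22, 15], 20))  # 44 (items 1 and 4)
```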
b. Define Floyd Warshall Algorithm for all pair shortest
path and apply the same on the following graph:
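The graph from the question is not reproduced here. For the definition part, a minimal sketch of the Floyd-Warshall algorithm (the 4-vertex example matrix is made up for illustration):

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths: dist is an n x n matrix with
    dist[i][j] = edge weight or INF. Updated in place."""
    n = len(dist)
    for k in range(n):                 # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

d = [[0, 5, INF, 10],
     [INF, 0, 3, INF],
     [INF, INF, 0, 1],
     [INF, INF, INF, 0]]
for row in floyd_warshall(d):
    print(row)                         # first row becomes [0, 5, 8, 9]
```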
7. Attempt any one part of the following: 7x1=7
a. Describe in detail Knuth-Morris-Pratt string matching
algorithm. Compute the prefix function π for the
pattern ababbabbabbababbabb when the alphabet
is ∑ = {a, b}.
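No worked table appears in the original; a minimal sketch of the prefix-function computation, applied to the given pattern:

```python
def prefix_function(pattern):
    """pi[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it (0-indexed)."""
    pi = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[k] != pattern[i]:
            k = pi[k - 1]              # fall back to the next shorter border
        if pattern[k] == pattern[i]:
            k += 1
        pi[i] = k
    return pi

print(prefix_function("ababbabbabbababbabb"))
# [0, 0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2, 3, 4, 5, 6, 7, 8]
```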
b. What is an approximation algorithm? What is meant
by P(n) approximation algorithms? Discuss
approximation algorithm for Travelling Salesman
Problem.
Sol. An approximation algorithm is a way of dealing with NP-completeness for optimization problems. This technique does not guarantee the best solution. The goal of an approximation algorithm is to come as close as possible to the optimum value in a reasonable amount of time, which is at most polynomial time. Such algorithms are called approximation algorithms or heuristic algorithms.
o For the traveling salesperson problem, the optimization
problem is to find the shortest cycle, and the approximation
problem is to find a short cycle.
o For the vertex cover problem, the optimization problem is to
find the vertex cover with fewest vertices, and the
approximation problem is to find the vertex cover with few
vertices.
Approximation algorithms naturally arise in the field
of theoretical computer science as a consequence of the widely
believed P ≠ NP conjecture. Under this conjecture, a wide class
of optimization problems cannot be solved exactly
in polynomial time.
P(n) approximation algorithm: If, for any input of size n, the cost C of the solution produced by the algorithm is within a factor of p(n) of the cost C* of an optimal solution, we say that the algorithm is a p(n)-approximation algorithm.
Approximation Algorithm for Travelling Salesman Problem:
Travelling Salesman Problem (TSP): Given a set of cities and distance
between every pair of cities, the problem is to find the shortest possible
route that visits every city exactly once and returns to the starting point.
Note the difference between the Hamiltonian cycle problem and TSP. The Hamiltonian cycle problem is to find whether there exists a tour that visits every city exactly once. Here we know that a Hamiltonian tour exists (because the graph is complete), and in fact many such tours exist; the problem is to find a minimum-weight Hamiltonian cycle.
In fact, there is no polynomial time solution available for this problem as the
problem is a known NP-Hard problem. There are approximate algorithms to
solve the problem though. The approximate algorithms work only if the
problem instance satisfies Triangle-Inequality.
Triangle inequality: the least distant path to reach a vertex j from i is always to go directly from i to j, rather than through some other vertex k (or vertices), i.e., dist(i, j) is always less than or equal to dist(i, k) + dist(k, j). The triangle inequality holds in many practical situations.
When the cost function satisfies the triangle inequality, we can design an
approximate algorithm for TSP that returns a tour whose cost is never more
than twice the cost of an optimal tour. The idea is to
use Minimum Spanning Tree (MST). Following is the MST based algorithm.
Algorithm:
1) Let 1 be the starting and ending point for the salesman.
2) Construct an MST with 1 as the root, using Prim's algorithm.
3) List the vertices visited in a preorder walk of the constructed MST and add 1 at the end.
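A minimal Python sketch of this MST-based 2-approximation (illustrative, not part of the original answer; the distance matrix is made up and assumed to satisfy the triangle inequality):

```python
import heapq
from collections import defaultdict

def tsp_2_approx(dist, start=0):
    """MST-based 2-approximation for metric TSP.
    dist: symmetric matrix satisfying the triangle inequality."""
    n = len(dist)
    # Prim's algorithm rooted at `start`, recording the tree structure.
    children = defaultdict(list)
    in_tree = [False] * n
    heap = [(0, start, -1)]            # (edge weight, vertex, parent)
    while heap:
        w, u, parent = heapq.heappop(heap)
        if in_tree[u]:
            continue
        in_tree[u] = True
        if parent >= 0:
            children[parent].append(u)
        for v in range(n):
            if not in_tree[v]:
                heapq.heappush(heap, (dist[u][v], v, u))
    # Preorder walk of the MST, then return to the start.
    tour, stack = [], [start]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour + [start]

d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp_2_approx(d))   # [0, 1, 2, 3, 0]
```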