
II YEAR / IV SEMESTER B.TECH.

- IT

CS6402 - DESIGN AND ANALYSIS OF ALGORITHMS

UNIT III

DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

COMPILED BY

M.KARTHIKEYAN, M.E., (AP/IT)

VERIFIED BY

HOD PRINCIPAL CEO/CORRESPONDENT

SENGUNTHAR COLLEGE OF ENGINEERING – TIRUCHENGODE

DEPARTMENT OF INFORMATION TECHNOLOGY


UNIT III

DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

 Computing a Binomial Coefficient


 Warshall’s and Floyd’s algorithms
 Optimal Binary Search Trees
 Knapsack Problem and Memory functions.
 Greedy Technique
 Prim’s algorithm
 Kruskal's Algorithm
 Dijkstra's Algorithm
 Huffman Trees.
List of Important Questions

UNIT III

DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

PART A
1. Define the Single source shortest path algorithm.[N/D2019] Define the single
source shortest path problem. [M/J 2016]
2. State Assignment problem. (M/J 2016).(N/D2019)
3. State principles of optimality.[M/J 12,14,N/D 10,13,14][A/M 2019]
4. What is the constraint for binary search tree insertion? [A/M 2019]
5. Write down the optimization technique used for Warshall's algorithm. State
the rules and assumptions which are implied behind that. [A/M 15]
6. List out memory functions used under dynamic programming. [A/M 15]
7. What is knapsack problem? [A/M 13, M/J 13]
8. Write control abstraction for the ordering paradigm. [A/M 13]
9. Differentiate between subset paradigm and ordering paradigm. [N/D 12]
10. What is the drawback of Greedy algorithm? [A/M 12]
11. Differentiate greedy method and dynamic programming. [M/J 12]
12. What is the 0/1 knapsack problem? [N/D 13,12,14]
13. List advantages of dynamic programming. [M/J 14]
14. How efficient is Prim's algorithm? [N/D 06]
15. What do you mean by Huffman code? [N/D 06]
16. Write the pseudo code for Warshall’s algorithm. [N/D 08]
17. What is Greedy Algorithm?
18. What are the applications of Greedy Method?
19. What is Minimum Spanning Tree?
20. What are the algorithms to solve MST?
21. What is Prim’s algorithm?
22. What is kruskal’s algorithm?

PART B
1. i) Given the mobile numeric keypad, you can only press buttons that are up, left, right
or down of the first number pressed to obtain the subsequent numbers. You are not
allowed to press the bottom row corner buttons (i.e., * and #). Given a number N, how
many keystrokes will be involved to press the given number? What is the length of it?
Which dynamic programming technique could be used to find a solution for this? Explain
each step with the help of a pseudo code and derive its time complexity. [A/M 15] (12)
ii) How do you construct a minimum spanning tree using Kruskal's algorithm?
Explain. [A/M 15](4)[N/D2019]
2. i) Construct a Huffman tree by using these nodes.

Value A B C D E F
Frequency 5 25 7 15 4 12

[A/M 15] (8)[A/M 2019]


ii). Write an algorithm to construct the optimal binary search tree given the roots
r(i,j), 0<= i<=j<=n. Also prove that this could be performed in time O(n). [A/M
08,11,15 N/D 13,14]
3. With a suitable example explain the all-pairs shortest path algorithm. [M/J 12] [or]
Describe the all-pairs shortest path problem and write the procedure to compute the
length of the shortest path. [M/J 13] How is dynamic programming applied to solve the
travelling salesperson problem? Explain in detail with an example. [A/M 12, M/J 12]
4. Explain Warshall’s Algorithm for Transitive Closure of directed graph.
5. Explain Floyd’s algorithm to find shortest path of directed graph.[A/M2019]
6. Explain Knapsack problem and Memory functions in dynamic programming.
7. Write down and explain the algorithm to solve all pair shortest path problem. [A/M
10]
8. Discuss the algorithm and pseudo code to find the Minimum Spanning Tree
using Prim's Algorithm. Find the Minimum Spanning Tree for the graph shown
below. Also discuss the efficiency of the algorithm. (16) [M/J2016][N/D2019]

[Figure: a weighted undirected graph on the vertices a, b, c and d with edge weights 1, 2, 3, 4 and 6]
NOTES

UNIT III

DYNAMIC PROGRAMMING AND GREEDY TECHNIQUE

PART A

1. Define the Single source shortest path algorithm.[N/D2019] Define the single
source shortest path problem. [M/J 2016]

Dijkstra's algorithm solves the single source shortest path problem of finding
shortest paths from a given vertex (the source) to all the other vertices of a weighted
graph or digraph. Dijkstra's algorithm provides a correct solution for a graph with
nonnegative weights.
2. State Assignment problem. (M/J 2016).(N/D2019)

There are n people who need to be assigned to execute n jobs, one person per
job. (That is, each person is assigned to exactly one job and each job is assigned to
exactly one person.) The cost that would accrue if the ith person is assigned to the jth
job is a known quantity C[i,j] for each pair i,j = 1, 2, ..., n. The problem is to find an
assignment with the minimum total cost.

3. State principles of optimality.[M/J 12,14,N/D 10,13,14][A/M 2019]

The principle of optimality states that “in an optimal sequence of decisions or
choices, each subsequence must also be optimal”.

4. What is the constraint for binary search tree insertion.[A/M2019]

43, 10, 79, 90, 12, 54, 11

 Insert 43 into the tree as the root of the tree.
 Read the next element; if it is less than the root node element, insert it
as the root of the left sub-tree.
 Otherwise, insert it as the root of the right sub-tree.
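A minimal sketch of this insertion rule in C (the node structure and function names are illustrative, not part of the syllabus answer):

#include <stdio.h>
#include <stdlib.h>

/* A binary search tree node. */
struct Node {
    int key;
    struct Node *left, *right;
};

/* Insert a key following the rule above: smaller keys go to the left
   sub-tree, larger (or equal) keys go to the right sub-tree.          */
struct Node *insert(struct Node *root, int key)
{
    if (root == NULL) {                       /* empty spot found: create the node */
        struct Node *n = malloc(sizeof *n);
        n->key = key;
        n->left = n->right = NULL;
        return n;
    }
    if (key < root->key)
        root->left = insert(root->left, key);
    else
        root->right = insert(root->right, key);
    return root;
}

int main(void)
{
    int keys[] = {43, 10, 79, 90, 12, 54, 11};
    struct Node *root = NULL;
    for (int i = 0; i < 7; i++)
        root = insert(root, keys[i]);         /* 43 becomes the root, 10 its left child, ... */
    return 0;
}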
5. Write down the optimization technique used for Warshall's algorithm. State
the rules and assumptions which are implied behind that. [A/M 15]

Warshall's algorithm constructs the transitive closure of a given digraph with n
vertices through a series of n-by-n Boolean matrices R(0), ..., R(k-1), R(k), ..., R(n).
The optimization technique used for Warshall's algorithm is based on the construction
of these Boolean matrices.
The elements of each matrix are generated using the following formula:
rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
where rij(k) is the element in the ith row and jth column of matrix R(k).

6. List out memory functions used under dynamic programming. [A/M 15]

Memory function for the binomial coefficient:
C(n,k) = C(n-1,k-1) + C(n-1,k), with C(n,0) = C(n,n) = 1
Memory function for Warshall's algorithm:
rij(k) = rij(k-1) or (rik(k-1) and rkj(k-1))
Memory function for the knapsack problem:
table[i,j] = max{ table[i-1,j], vi + table[i-1, j-wi] }   if j >= wi
table[i,j] = table[i-1,j]                                 if j < wi
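A small sketch in C of the memory-function (memoized) computation of the binomial coefficient, following the first recurrence above; the table size and names are illustrative:

#include <stdio.h>
#include <string.h>

#define MAXN 50

long long table[MAXN + 1][MAXN + 1];    /* -1 marks "not yet computed" */

/* Memory function: compute C(n, k) top-down, storing each value once. */
long long binom(int n, int k)
{
    if (k == 0 || k == n)
        return 1;
    if (table[n][k] != -1)               /* reuse a previously computed value */
        return table[n][k];
    table[n][k] = binom(n - 1, k - 1) + binom(n - 1, k);
    return table[n][k];
}

int main(void)
{
    memset(table, -1, sizeof table);     /* sets every entry to -1 */
    printf("C(10,4) = %lld\n", binom(10, 4));   /* prints 210 */
    return 0;
}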
7. What is knapsack problem? [A/M 13, M/J 13]

We are given n objects and a knapsack (bag) of capacity W in which object i, of weight
wi, may be placed; placing a fraction xi of object i earns a profit pi·xi. The objective is
to fill the knapsack so that the maximum total profit is earned.

8. Write control abstraction for the ordering paradigm. [A/M 13]

Algorithm Store(n, limit)
{
j = 0;
for i <- 1 to n do
{
Write("Append program", i);
Write("Permutation for tape", j);
j = (j + 1) mod limit;
}
}

9. Differentiate between subset paradigm and ordering paradigm. [N/D 12]

Subset paradigm:
At each step a decision about the input is made, i.e., it is decided whether the
particular input belongs to an optimal solution or not.
Ordering paradigm:
In this paradigm, the decision is made by considering the inputs in some order.
This paradigm is useful for solving those problems that do not call for the selection
of an optimal subset in the greedy manner.

10. What is the drawback of Greedy algorithm? [A/M 12]

Following are the drawbacks of the greedy method:

1. The greedy method is comparatively more efficient than divide and conquer, but there
is no guarantee of getting the optimum solution.
2. In the greedy method, the selections are made without revising previously generated
solutions, so a locally optimal choice need not lead to a globally optimal one.

11. Differentiate greedy method and dynamic programming. [M/J 12]

Greedy method: builds a solution through a sequence of locally optimal, irrevocable
choices and considers only one decision sequence; it does not guarantee an optimal
solution for every problem.
Dynamic programming: solves a problem by combining the solutions of its overlapping
subproblems using the principle of optimality; it considers all relevant subproblems and
therefore guarantees an optimal solution.

12. What is 0/1 knapsack problem.[N/D 13,12,14]

0/1 Knapsack Problem: In this problem an item cannot be broken, which means the thief
should take an item as a whole or leave it; that is why it is called the 0/1 knapsack
problem. Each item is either taken or not taken; a fractional amount of an item cannot
be taken, nor can an item be taken more than once.

13. List advantages of dynamic programming.[M/J 14]


 As it is a recursive programming technique, it reduces the lines of code.
 It speeds up processing because previously calculated results are reused.

14. How efficient is Prim's algorithm.[N/D 06]


The efficiency of Prim's algorithm depends on which data structures are used to
implement it, but it should be clear that O(nm) time suffices. In each of n iterations, we will
scan through all the m edges and test whether the current edge joins a tree with a non-tree
vertex and whether this is the smallest edge seen thus far. By maintaining a Boolean flag
along with each vertex to denote whether it is in the tree or not, this test can be performed
in constant time. In fact, better data structures lead to a faster, O(n^2), implementation by
avoiding the need to sweep through more than n edges in any iteration.

15. What do you mean by Huffman code? [N/D 06]


Huffman coding is a lossless data compression algorithm. The idea is to assign
variable-length codes to input characters; the lengths of the assigned codes are based on
the frequencies of the corresponding characters. The most frequent character gets the
smallest code and the least frequent character gets the largest code. The variable-length
codes assigned to input characters are prefix codes, meaning the codes (bit sequences)
are assigned in such a way that the code assigned to one character is not a prefix of the
code assigned to any other character. This is how Huffman coding makes sure that there
is no ambiguity when decoding the generated bit stream.

16. Write the pseudo code for Warshall’s algorithm. [N/D 08]
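The standard pseudocode (Levitin's formulation, the same convention used for the other algorithms in these notes) is:

ALGORITHM Warshall(A[1..n, 1..n])
//Implements Warshall's algorithm for computing the transitive closure
//Input: The adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of the digraph
R(0) <- A
for k <- 1 to n do
    for i <- 1 to n do
        for j <- 1 to n do
            R(k)[i,j] <- R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
return R(n)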

17. What is Greedy Algorithm?


A greedy algorithm makes a locally optimal choice in the hope that this choice will lead
to a globally optimal solution.
The choice made at each step must be:
 Feasible
Satisfy the problem’s constraints
 locally optimal
Be the best local choice among all feasible choices
 Irrevocable
Once made, the choice can’t be changed on subsequent steps.

18. What are the applications of Greedy Method?


 Optimal solutions:
 change making
 Minimum Spanning Tree (MST)
 Single-source shortest paths
 Huffman codes
 Approximations:
 Traveling Salesman Problem (TSP)
 Knapsack problem
 other optimization problems

19. What is Minimum Spanning Tree?
A minimum spanning tree (MST) of a weighted graph G is a spanning tree of G
whose edge weights sum to the minimum possible value. In other words, an MST is a
tree formed from a subset of the edges in a given undirected graph, with two properties:
 it spans the graph, i.e., it includes every vertex of the graph.
 it is a minimum, i.e., the total weight of all the edges is as low as
possible.
Let G=(V, E) be a connected, undirected graph where V is a set of vertices (nodes) and E
is the set of edges. Each edge has a given non negative length.

20. What are the algorithms to solve MST?


There are two basic algorithms to solve this problem; both are greedy. We now describe
them.
 Prim's Algorithm
 Kruskal's Algorithm

21. What is Prim’s algorithm?


 Start with tree T1 consisting of one (any) vertex and “grow” tree one vertex at a time
to produce MST through a series of expanding subtrees T1, T2, …, Tn
 On each iteration, construct Ti+1 from Ti by adding vertex not in Ti that is closest to
those already in Ti (this is a “greedy” step!)
 Stop when all vertices are included.

22. What is kruskal’s algorithm?


 Sort the edges in nondecreasing order of lengths
 “Grow” tree one edge at a time to produce MST through a series of expanding
forests F1, F2, …, Fn-1
 On each iteration, add the next edge on the sorted list unless this would create a
cycle. (If it would, skip the edge.)

23. What is a single source shortest path algorithm?


Single Source Shortest Paths Problem:
Given a weighted connected (directed) graph G, find shortest paths from source
vertex s to each of the other vertices.

24. What is Dijkstra’s algorithm?


Dijkstra’s algorithm:
Similar to Prim’s MST algorithm, with a different way of computing numerical labels:
Among vertices not already in the tree, it finds vertex u with the smallest sum
dv + w(v,u)
where
v is a vertex for which shortest path has been already found
on preceding iterations (such vertices form a tree rooted at s)
dv is the length of the shortest path from source s to v
w(v,u) is the length (weight) of edge from v to u

25. Mention the facts of spanning trees?


 Any two vertices in a tree are connected by a unique path.
 Let T be a spanning tree of a graph G, and let e be an edge of G not in T. Then T+e
contains a unique cycle.
26. What are the Characteristics and Features of Problems solved by Greedy Algorithms?
The solution is constructed in an optimal way. The algorithm maintains two sets: one
contains chosen items and the other contains rejected items.
The greedy algorithm consists of four (4) functions:
1. A function that checks whether chosen set of items provide a solution.
2. A function that checks the feasibility of a set.
3. The selection function tells which of the candidates is the most promising.
4. An objective function, which does not appear explicitly, gives the value of a solution.

27. Write the algorithm for Dijkstra’s algorithm.


ALGORITHM Dijkstra(G, s)
// Dijkstra's algorithm for single source shortest paths
//Input: A weighted connected graph G = < V, E > with nonnegative weights and its
vertex s
//Output: the length dv of a shortest path from s to v and its penultimate vertex pv for
every vertex v in V
Initialize(Q) // initialize vertex priority queue to empty
for every vertex v in V do
    dv <- ∞; pv <- null
    Insert(Q, v, dv) //initialize vertex priority in the priority queue
ds <- 0; Decrease(Q, s, ds) //update priority of s with ds
VT <- Ø
for i <- 0 to | V | - 1 do
    u* <- DeleteMin(Q) //delete the minimum priority element
    VT <- VT U { u* }
    for every vertex u in V - VT that is adjacent to u* do
        if du* + w(u*, u) < du
            du <- du* + w(u*, u); pu <- u*
            Decrease(Q, u, du)

28. Write the prim’s algorithm.


ALGORITHM Prim(G)
//Prim's Algorithm for constructing minimum spanning tree
//Input: A weighted connected graph G = < V, E >
//Output: ET, the set of edges composing a minimum spanning tree of G
VT <- { v0 }   //the set of tree vertices can be initialized with any vertex
ET <- Ø
for i <- 1 to l V l – 1 do
    find a minimum-weight edge e* = ( v*, u* ) among all the edges (v, u) such that v is in
    VT and u is in V - VT
    VT <- VT U { u* }
    ET <- ET U { e* }
return ET
29.Define Warshall’s Algorithm.
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
Example of transitive closure:

30. Mention the time efficiency and space efficiency of Warshall’s Algorithm.

Time efficiency: Θ(n^3)

Space efficiency: Matrices can be written over their predecessors
(with some care), so it’s Θ(n^2).
31. Define Floyd’s algorithm.
Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.
Same idea as Warshall’s algorithm: construct the solution through a series of matrices
D(0), …, D(n) using increasing subsets of the vertices allowed as intermediate.
Example:

32. Mention time efficiency and space efficiency of Floyd’s algorithm.

 Time efficiency: Θ(n^3)
 Space efficiency: Matrices can be written over their predecessors.

33. What is OBST? Give the analysis for OBST.

 OBST is the Optimal Binary Search Tree.
 An OBST is a BST for which the average number of comparisons in a search is the
smallest possible.
Time efficiency: Θ(n^3), but can be reduced to Θ(n^2) by taking advantage of
monotonicity of entries in the root table, i.e., R[i,j] is always in the range
between R[i,j-1] and R[i+1,j].
Space efficiency: Θ(n^2)

34. What is the difference between Prim’s algorithm and Kruskal’s algorithm?


Prim’s algorithm:
In Prim's, you always keep a connected component, starting with a single vertex.
You look at all edges from the current component to other vertices and find the smallest
among them. You then add the neighbouring vertex to the component, increasing its size
by 1. In N-1 steps, every vertex would be merged to the current one if we have a
connected graph.
Kruskal’s algorithm:
In Kruskal's, you do not keep one connected component but a forest. At each stage,
you look at the globally smallest edge that does not create a cycle in the current forest.
Such an edge has to necessarily merge two trees in the current forest into one. Since you
start with N single-vertex trees, in N-1 steps, they would all have merged into one if the
graph was connected.
PART B
1. i) Given the mobile numeric keypad, you can only press buttons that are
up, left, right or down of the first number pressed to obtain the subsequent
numbers. You are not allowed to press the bottom row corner buttons (i.e., * and
#). Given a number N, how many keystrokes will be involved to press the
given number? What is the length of it? Which dynamic programming
technique could be used to find a solution for this? Explain each step with the
help of a pseudo code and derive its time complexity. [A/M 15] (12)

Solution:
In order to understand the problem, let us draw the numeric keypad as follows.
If we start with number 0, valid numbers will be 00, 08; therefore count = 2.
If we start with number 1, valid numbers will be 11, 12, 14; therefore count = 3.
If we start with number 2, valid numbers will be 22, 21, 23, 25; therefore count = 4.
If we start with number 3, valid numbers will be 33, 32, 36; therefore count = 3.
Counting in this fashion, we will get different counts for each number.

In pressing the valid numbers, we need to traverse from one key to another in the left,
right, up and down directions. This leads to many repeated traversals over smaller paths
while finding all possible longer paths.
Let us assume length of the path as N
If N= 4,then traversal for the button 8 will be

8 -> 5 -> 2
8 -> 7 -> 4
8 -> 9 -> 6

If this problem is solved using dynamic programming, then the two properties optimal
substructure and overlapping subproblems are used.

The implementation code using dynamic programming will be

int DynamicProgMethod(char keypad[][3], int n)
{
    /* Count the numbers of length n that can be keyed in, moving from each
       pressed key only to the same key or an adjacent key (up, down, left,
       right) and never pressing '*' or '#'.                                */
    if (keypad == NULL || n <= 0)
        return 0;
    if (n == 1)
        return 10;

    /* row/column offsets: stay, left, up, right, down */
    int row[] = {0, 0, -1, 0, 1};
    int col[] = {0, -1, 0, 1, 0};

    int count[10][n + 1];
    int i = 0, j = 0, k = 0, move = 0, r = 0, c = 0, num = 0;
    int nextnumber = 0, displaycount = 0;

    /* base cases: a single keypress forms one number for every digit */
    for (i = 0; i <= 9; i++)
    {
        count[i][0] = 0;
        count[i][1] = 1;
    }

    /* fill the table for lengths 2..n */
    for (k = 2; k <= n; k++)
    {
        for (i = 0; i < 4; i++)            /* keypad rows    */
        {
            for (j = 0; j < 3; j++)        /* keypad columns */
            {
                if (keypad[i][j] != '*' && keypad[i][j] != '#')
                {
                    num = keypad[i][j] - '0';
                    count[num][k] = 0;
                    for (move = 0; move < 5; move++)
                    {
                        r = i + row[move];
                        c = j + col[move];
                        if (r >= 0 && r <= 3 && c >= 0 && c <= 2 &&
                            keypad[r][c] != '*' && keypad[r][c] != '#')
                        {
                            nextnumber = keypad[r][c] - '0';
                            count[num][k] += count[nextnumber][k - 1];
                        }
                    }
                }
            }
        }
    }

    /* total numbers of length n starting from any digit */
    displaycount = 0;
    for (i = 0; i <= 9; i++)
        displaycount += count[i][n];
    return displaycount;
}
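A small driver for the above function (the keypad layout below is assumed for illustration; '*' and '#' are the forbidden keys):

#include <stdio.h>

int main(void)
{
    char keypad[4][3] = { {'1', '2', '3'},
                          {'4', '5', '6'},
                          {'7', '8', '9'},
                          {'*', '0', '#'} };
    int n = 2;   /* length of the numbers to count */
    /* e.g. for n = 2 this counts 36 valid numbers (00, 08, 11, 12, ...) */
    printf("Numbers of length %d: %d\n", n, DynamicProgMethod(keypad, n));
    return 0;
}

Since the table has 10 rows and n+1 columns and each entry is computed from at most five neighbours, the time complexity is O(n).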

ii) How do you construct a minimum spanning tree using Kruskal's algorithm?
Explain. [A/M 15](4)[N/D2019]
Another greedy algorithm for the minimum spanning tree problem that also
always yields an optimal solution is Kruskal’s algorithm.
Algorithm:
ALGORITHM Kruskal(G)
//Kruskal’s algorithm for constructing minimum spanning tree
//Input: A weighted connected graph G = <V,E>
//Output: ET, the set of edges composing a minimum spanning tree of G
sort E in nondecreasing order of the edge weights w(ei1) <= ... <= w(ei|E|)
ET <- Ø; ecounter <- 0 //initialize the set of tree edges and its size
k <- 0 //initialize the number of processed edges
while ecounter < l V l – 1 do
    k <- k + 1
    if ET U { eik } is acyclic
        ET <- ET U { eik }; ecounter <- ecounter + 1
return ET

Example:
2. i) Construct a Huffman tree by using these nodes.

Value A B C D E F
Frequency 5 25 7 15 4 12

[A/M 15] (8)[A/M 2019]


Huffman coding is a lossless data compression algorithm. The idea is to assign
variable-length codes to input characters; the lengths of the assigned codes are based on
the frequencies of the corresponding characters. The most frequent character gets the
smallest code and the least frequent character gets the largest code. The variable-length
codes assigned to input characters are prefix codes, meaning the codes (bit sequences)
are assigned in such a way that the code assigned to one character is not a prefix of the
code assigned to any other character. This is how Huffman coding makes sure that there
is no ambiguity when decoding the generated bit stream.

Applications of Huffman Coding.

There are mainly two major parts in Huffman Coding


1) Build a Huffman Tree from input characters.
2) Traverse the Huffman Tree and assign codes to characters.

Steps to build Huffman Tree


Input is array of unique characters along with their frequency of occurrences and output is
Huffman Tree.
1. Create a leaf node for each unique character and build a min heap of all leaf nodes (Min
Heap is used as a priority queue. The value of frequency field is used to compare two
nodes in min heap. Initially, the least frequent character is at root)

2. Extract two nodes with the minimum frequency from the min heap.

3. Create a new internal node with frequency equal to the sum of the two nodes
frequencies. Make the first extracted node as its left child and the other extracted node as
its right child. Add this node to the min heap.

4. Repeat steps 2 and 3 until the heap contains only one node. The remaining node is
the root node and the tree is complete.
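A compact sketch in C of this construction (for simplicity it repeatedly scans for the two smallest frequencies instead of using a min-heap, so it runs in O(n^2); the names and printing format are illustrative, and the exact tree shape can differ from the hand-worked example below when ties are broken differently, although the code lengths are the same):

#include <stdio.h>
#include <stdlib.h>

/* A node of the Huffman tree. */
struct Node {
    int freq;
    char value;                         /* meaningful only for leaves */
    struct Node *left, *right;
};

static struct Node *newNode(int freq, char value,
                            struct Node *left, struct Node *right)
{
    struct Node *n = malloc(sizeof *n);
    n->freq = freq; n->value = value; n->left = left; n->right = right;
    return n;
}

/* Index of the smallest-frequency node, ignoring position 'skip'. */
static int findMin(struct Node *nodes[], int count, int skip)
{
    int best = -1;
    for (int i = 0; i < count; i++)
        if (i != skip && (best == -1 || nodes[i]->freq < nodes[best]->freq))
            best = i;
    return best;
}

/* Steps 1-4 above: repeatedly merge the two least frequent nodes. */
struct Node *buildHuffman(const char value[], const int freq[], int count)
{
    struct Node *nodes[64];
    for (int i = 0; i < count; i++)
        nodes[i] = newNode(freq[i], value[i], NULL, NULL);

    while (count > 1) {
        int a = findMin(nodes, count, -1);       /* smallest frequency */
        int b = findMin(nodes, count, a);        /* second smallest    */
        struct Node *parent = newNode(nodes[a]->freq + nodes[b]->freq,
                                      '\0', nodes[a], nodes[b]);
        int lo = a < b ? a : b, hi = a < b ? b : a;
        nodes[lo] = parent;                      /* parent takes one slot      */
        nodes[hi] = nodes[count - 1];            /* last node fills the other  */
        count--;
    }
    return nodes[0];                             /* root of the Huffman tree */
}

/* Walk the tree to print each character's code (left = 0, right = 1). */
void printCodes(const struct Node *t, char code[], int depth)
{
    if (t->left == NULL && t->right == NULL) {
        code[depth] = '\0';
        printf("%c : %s\n", t->value, code);
        return;
    }
    code[depth] = '0'; printCodes(t->left, code, depth + 1);
    code[depth] = '1'; printCodes(t->right, code, depth + 1);
}

int main(void)
{
    char value[] = {'A', 'B', 'C', 'D', 'E', 'F'};
    int  freq[]  = { 5,  25,   7,  15,   4,  12};
    char code[64];
    printCodes(buildHuffman(value, freq, 6), code, 0);
    return 0;
}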

Huffman Coding Algorithm Example

Value A B C D E F
Frequency 5 25 7 15 4 12

Solution:

Step 1: According to the Huffman coding we arrange all the elements (values) in
ascending order of the frequencies.

Value E A C F D B
Frequency 4 5 7 12 15 25

Step 2: Combine the first two elements, which have the smallest frequencies (E and A).

Value C EA F D B
Frequency 7 9 12 15 25

Step 3: Take the next smallest frequencies (C and EA), combine them, and insert the result at the correct place.


Value F D CEA B
Frequency 12 15 16 25

Step 4: Next elements are F and D so we construct another subtree for F and D.

Value CEA B FD
Frequency 16 25 27

Step 5: Take the next value having the smallest frequency (B), add it with CEA, and
insert the result at the correct place.

Value FD CEAB
Frequency 27 41
Step 6: We have only two values left, hence we combine them by adding their frequencies.

Huffman Tree
Value FDCEAB
Frequency 68

Now the list contains only one element i.e. FDCEAB having frequency 68 and this
element (value) becomes the root of the Huffman tree.
ii). Write an algorithm to construct the optimal binary search tree given the roots
r(i,j), 0<= i<=j<=n. Also prove that this could be performed in time O(n). [A/M
08,11,15 N/D 13,14]

Problem:
Given n keys a1 < … < an and probabilities p1, …, pn of searching for them, find a BST
with a minimum average number of comparisons in a successful search. Since the total
number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially,
brute force is hopeless.

Example:
What is an optimal BST for keys A, B, C, and search probabilities 0.1, 0.2, 0.4, and 0.3,
respectively?

[Figure: the optimal BST has root C, with B as its left child (whose left child is A) and D as its right child]

Average # of comparisons = 1*0.4 + 2*(0.2+0.3) + 3*0.1 = 1.7

DP for Optimal BST Problem:

Let C[i,j] be minimum average number of comparisons made in T[i,j], optimal BST for keys
ai < …< aj , where 1 ≤ i ≤ j ≤ n. Consider optimal BST among all BSTs with some ak (i ≤ k
≤ j ) as their root; T[i,j] is the best among them.
[Figure: ak at the root, with the optimal BST for ai, ..., ak-1 as its left subtree and the optimal BST for ak+1, ..., aj as its right subtree]

C[i,j] = min{ pk · 1 + ∑ ps (level of as in T[i,k-1] + 1) + ∑ ps (level of as in T[k+1,j] + 1) },
where the minimum is taken over i ≤ k ≤ j, the first sum runs over s = i, ..., k-1 and the
second over s = k+1, ..., j.
After simplifications, we obtain the recurrence for C[i,j]:
C[i,j] = min { C[i,k-1] + C[k+1,j] } + ∑ ps for 1 ≤ i ≤ j ≤ n, where the minimum is taken
over i ≤ k ≤ j and the sum runs over s = i, ..., j
C[i,i] = pi for 1 ≤ i ≤ n
Example:

key A B C D
probability 0.1 0.2 0.4 0.3

The tables below are filled diagonal by diagonal: the left one is filled using the recurrence

C[i,j] = min { C[i,k-1] + C[k+1,j] } + ∑ ps , C[i,i] = pi ;

the right one, for the trees’ roots, records the values of k giving the minimum.

Main table C (average number of comparisons):

      j=0   j=1   j=2   j=3   j=4
i=1    0    .1    .4    1.1   1.7
i=2          0    .2    .8    1.4
i=3                0    .4    1.0
i=4                      0    .3
i=5                            0

Root table R:

      j=1   j=2   j=3   j=4
i=1    1     2     3     3
i=2          2     3     3
i=3                3     3
i=4                      4
Optimal BST: the root is C (since R[1,4] = 3); its left subtree has root B with left child A, and its right child is D.

Analysis DP for Optimal BST Problem:


Time efficiency: Θ(n^3), but can be reduced to Θ(n^2) by taking advantage of
monotonicity of entries in the root table, i.e., R[i,j] is always in the range
between R[i,j-1] and R[i+1,j].
Space efficiency: Θ(n^2)
Method can be expanded to include unsuccessful searches.
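For the part of the question about constructing the tree from the given roots r(i,j) in O(n) time, here is a minimal sketch in C (the structure and function names are illustrative). Each recursive call creates exactly one node and is made once per key, so building the tree from the root table takes O(n) time:

#include <stdio.h>
#include <stdlib.h>

#define N 4                             /* number of keys (A, B, C, D above) */

struct Node {
    int key;                            /* index of the key, 1..N */
    struct Node *left, *right;
};

/* Build the optimal BST for keys ai..aj from the root table R, where
   R[i][j] holds the index k of the key at the root of that subtree. */
struct Node *buildOBST(int R[][N + 1], int i, int j)
{
    if (i > j)
        return NULL;
    int k = R[i][j];
    struct Node *t = malloc(sizeof *t);
    t->key = k;
    t->left  = buildOBST(R, i, k - 1);
    t->right = buildOBST(R, k + 1, j);
    return t;
}

int main(void)
{
    /* Root table from the example above (keys 1=A, 2=B, 3=C, 4=D). */
    int R[N + 1][N + 1] = {0};
    R[1][1] = 1; R[1][2] = 2; R[1][3] = 3; R[1][4] = 3;
    R[2][2] = 2; R[2][3] = 3; R[2][4] = 3;
    R[3][3] = 3; R[3][4] = 3;
    R[4][4] = 4;

    struct Node *root = buildOBST(R, 1, N);
    printf("root key index = %d\n", root->key);   /* prints 3, i.e. key C */
    return 0;
}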
3. With a suitable example explain the all-pairs shortest path algorithm. [M/J 12] [or]
Describe the all-pairs shortest path problem and write the procedure to compute the
length of the shortest path. [M/J 13] How is dynamic programming applied to solve the
travelling salesperson problem? Explain in detail with an example. [A/M 12, M/J 12]

Travelling Salesperson Problem:

• This is a classic CS problem


• Given a graph (cities), and weights on the edges (distances) find a minimum weight tour
of the cities
– Start in a particular city
– Visit all other cities (exactly once each)
– Return to the starting city
• Cannot be done by brute-force as this is worst-case exponential or worse running time
– So we will look to backtracking with pruning to make it run in a reasonable
amount of time in most cases
• We will build our state space by:
– Having our children be all the potential cities we can go to next
– Having the depth of the tree be equal to the number of cities in the graph
• we need to visit each city exactly once
• So given a fully connected set of 5 nodes we have the following state space
– only partially completed
• Now we need to add bounding to this problem
– It is a minimization problem so we need to find a lower bound
• We can use:
– The current cost of getting to the node plus
– An underestimate of the future cost of going through the rest of the cities
• The obvious choice is to find the minimum weight edge in the graph and
multiply that edge weight by the number of remaining nodes to travel through
• As an example assume we have the given adjacency matrix
• If we started at node A and have just traveled to node B then we need to compute the
bound for node B
– Cost 14 to get from A to B
– Minimum weight in matrix is 2 times 4 more legs to go to get back to node A = 8
– For a grand total of 14 + 8 = 22

• Recall that if we can make the lower bound higher then we will get more pruning
• Note that in order to complete the tour we need to leave node B, C, D, and E
– The min edge we can take leaving B is min(14, 7, 8, 7) = 7
– Similarly, C=4, D=2, E=4
• This implies that at best the future underestimate can be 7+4+2+4=17
• 17 + current cost of 14 = 31
– This is much higher than 8 + 14 = 22
4. Explain Warshall’s Algorithm for Transitive Closure of directed graph.

Warshall’s Algorithm: Transitive Closure


• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure:

Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices
R(0), … , R(k), … , R(n), where R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only
the first k vertices allowed as intermediate. Note that R(0) = A (the adjacency matrix) and
R(n) = T (the transitive closure).

On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path
exists from i to j with just the vertices 1, …, k allowed as intermediate:
R(k)[i,j] = R(k-1)[i,j] (a path using just 1, …, k-1)
           or (R(k-1)[i,k] and R(k-1)[k,j]) (a path from i to k and from k to j using just 1, …, k-1)

Recurrence relating elements R(k) to elements of R(k-1) is:

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])


It implies the following rules for generating R(k) from R(k-1):
Rule 1 If an element in row i and column j is 1 in R(k-1),
it remains 1 in R(k)
Rule 2 If an element in row i and column j is 0 in R(k-1),
it has to be changed to 1 in R(k) if and only if
the element in its row i and column k and the element
in its column j and row k are both 1’s in R(k-1)
Rule for changing zeros in Warshall’s algorithm:

Example:
Analysis:

Time efficiency: Θ(n^3)

Space efficiency: Matrices can be written over their predecessors (with some care), so it’s
Θ(n^2).

5. Explain Floyd’s algorithm to find shortest paths in a directed graph. [A/M 2019]

Floyd’s Algorithm:
Matrix generation:

Pseudo code :
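The standard pseudocode (Levitin's formulation, the same convention used for the other algorithms in these notes) is:

ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle
//Output: The distance matrix of the shortest paths' lengths
D <- W //is not necessary if W can be overwritten
for k <- 1 to n do
    for i <- 1 to n do
        for j <- 1 to n do
            D[i,j] <- min{ D[i,j], D[i,k] + D[k,j] }
return D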
Application:

Analysis:
Time efficiency: Θ(n^3)
Space efficiency: Matrices can be written over their predecessors.
6. Explain Knapsack problem and Memory functions in dynamic programming.

Table for solving Knapsack problem


Memory function:
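A minimal sketch in C of the memory-function (memoized) version, following the recurrence table[i,j] = max{ table[i-1,j], vi + table[i-1, j-wi] } given in Part A; the item data below are an illustrative instance (weights 2, 1, 3, 2, values 12, 10, 20, 15, capacity 5), for which the optimal value is 37:

#include <stdio.h>

#define N 4      /* number of items (illustrative) */
#define W 5      /* knapsack capacity (illustrative) */

int weight[N + 1] = {0, 2, 1, 3, 2};      /* weights w1..wN */
int value[N + 1]  = {0, 12, 10, 20, 15};  /* values  v1..vN */
int table[N + 1][W + 1];                  /* -1 = not computed yet */

static int max(int a, int b) { return a > b ? a : b; }

/* Memory function: value of an optimal subset of the first i items
   that fits into a knapsack of capacity j.                          */
int MFKnapsack(int i, int j)
{
    if (i == 0 || j == 0)
        return 0;
    if (table[i][j] < 0) {                           /* not computed yet  */
        if (j < weight[i])
            table[i][j] = MFKnapsack(i - 1, j);      /* item i cannot fit */
        else
            table[i][j] = max(MFKnapsack(i - 1, j),
                              value[i] + MFKnapsack(i - 1, j - weight[i]));
    }
    return table[i][j];
}

int main(void)
{
    for (int i = 0; i <= N; i++)
        for (int j = 0; j <= W; j++)
            table[i][j] = -1;
    printf("Optimal value = %d\n", MFKnapsack(N, W));   /* 37 for this instance */
    return 0;
}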
7. Write down and explain the algorithm to solve the all-pairs shortest path problem. [A/M
10]
Dijkstra’s algorithm:
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path
from a single source vertex to all other vertices in the given graph.

Algorithm:

1) Create a set sptSet (shortest path tree set) that keeps track of vertices included in
shortest path tree, i.e., whose minimum distance from source is calculated and finalized.
Initially, this set is empty.
2) Assign a distance value to all vertices in the input graph. Initialize all distance values as
INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3) While sptSet doesn’t include all vertices

a) Pick a vertex u which is not there in sptSetand has minimum distance value.
b) Include u to sptSet.
c) Update distance value of all adjacent vertices of u. To update the distance values,
iterate through all adjacent vertices. For every adjacent vertex v, if sum of distance value
of u (from source) and weight of edge u-v, is less than the distance value of v, then update
the distance value of v.

DIJKSTRA(G, w, s)

INITIALIZE-SINGLE-SOURCE(G, s)

S <- { }                      // S will ultimately contain the vertices whose final
                              // shortest-path weights from s have been determined
Initialize priority queue Q, i.e., Q <- V[G]
while priority queue Q is not empty do
    u <- EXTRACT-MIN(Q)       // pull out a new vertex
    S <- S U { u }
    // perform relaxation for each vertex v adjacent to u
    for each vertex v in Adj[u] do
        RELAX(u, v, w)
Analysis:

Like Prim's algorithm, Dijkstra's algorithm runs in O(|E| lg |V|) time.
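A compact sketch in C of the steps above for a graph stored as an adjacency matrix (the graph in main is a small illustrative one, not the larger example walked through below; 0 means "no edge"):

#include <stdio.h>
#include <limits.h>

#define V 5                 /* number of vertices (illustrative) */

/* Index of the unvisited vertex with the smallest tentative distance. */
static int minDistance(const int dist[], const int inSptSet[])
{
    int min = INT_MAX, minIndex = -1;
    for (int v = 0; v < V; v++)
        if (!inSptSet[v] && dist[v] < min) {
            min = dist[v];
            minIndex = v;
        }
    return minIndex;
}

/* Dijkstra's algorithm on an adjacency matrix; 0 means "no edge". */
void dijkstra(int graph[V][V], int src, int dist[V])
{
    int inSptSet[V] = {0};
    for (int v = 0; v < V; v++)
        dist[v] = INT_MAX;
    dist[src] = 0;

    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, inSptSet);   /* step 3a: pick the closest vertex  */
        if (u == -1) break;                    /* remaining vertices are unreachable */
        inSptSet[u] = 1;                       /* step 3b: add it to sptSet          */

        /* step 3c: relax the edges leaving u */
        for (int v = 0; v < V; v++)
            if (!inSptSet[v] && graph[u][v] && dist[u] != INT_MAX &&
                dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
}

int main(void)
{
    /* A small illustrative graph (not the example described in the text below). */
    int graph[V][V] = {
        {0, 4, 0, 0, 8},
        {4, 0, 8, 0, 11},
        {0, 8, 0, 7, 0},
        {0, 0, 7, 0, 2},
        {8, 11, 0, 2, 0}
    };
    int dist[V];
    dijkstra(graph, 0, dist);
    for (int v = 0; v < V; v++)
        printf("distance to %d = %d\n", v, dist[v]);
    return 0;
}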

Let us understand with the following example:

The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF,
INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with minimum
distance value. The vertex 0 is picked, include it in sptSet. So sptSet becomes {0}. After
including 0 to sptSet, update distance values of its adjacent vertices. Adjacent vertices of 0
are 1 and 7. The distance values of 1 and 7 are updated as 4 and 8. Following subgraph
shows vertices and their distance values, only the vertices with finite distance values are
shown.

Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}.
Update the distance values of adjacent vertices of 1. The distance value of vertex 2
becomes 12.
Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values
of adjacent vertices of 7. The distance value of vertex 6 and 8 becomes finite (15 and 9
respectively).

Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance
values of adjacent vertices of 6. The
distance value of vertex 5 and 8 are
updated.

We repeat the above steps until sptSet doesn’t include all vertices of given graph. Finally,
we get the following Shortest Path Tree (SPT).

8. Discuss the algorithm and pseudocode to find the Minimum Spanning Tree
using Prim's Algorithm. Find the Minimum Spanning Tree for the graph shown
below. Also discuss the efficiency of the algorithm. (16) [M/J2016]

[Figure: a weighted undirected graph on the vertices a, b, c and d with edge weights 1, 2, 3, 4 and 6]

A spanning tree of an undirected connected graph is its connected acyclic
subgraph (i.e., a tree) that contains all the vertices of the graph. If such a graph has
weights assigned to its edges, a minimum spanning tree is its spanning tree of the
smallest weight, where the weight of a tree is defined as the sum of the weights on all its
edges. The minimum spanning tree problem is the problem of finding a minimum spanning
tree for a given weighted connected graph.
Efficiency of the algorithm: If the graph is represented by its adjacency lists and the priority
queue is implemented as a min-heap, the running time of the algorithm is O(|E| log |V|) in
a connected graph, where |V| - 1 ≤ |E|.
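A compact sketch in C of Prim's algorithm for a graph stored as an adjacency matrix (the 4-vertex graph in main is one assumed reading of the figure above, used only for illustration; 0 means "no edge"):

#include <stdio.h>
#include <limits.h>

#define V 4                     /* number of vertices: a, b, c, d */

/* Prim's algorithm on an adjacency matrix; 0 means "no edge".
   Prints the tree edges and returns the total weight of the MST. */
int primMST(int graph[V][V])
{
    int key[V];                 /* cheapest known edge weight linking v to the tree */
    int parent[V];              /* tree vertex on the other end of that edge        */
    int inMST[V] = {0};
    int total = 0;

    for (int v = 0; v < V; v++) {
        key[v] = INT_MAX;
        parent[v] = -1;
    }
    key[0] = 0;                 /* grow the tree starting from vertex 0 (a) */

    for (int count = 0; count < V; count++) {
        /* greedy step: pick the non-tree vertex closest to the tree */
        int u = -1;
        for (int v = 0; v < V; v++)
            if (!inMST[v] && (u == -1 || key[v] < key[u]))
                u = v;
        inMST[u] = 1;

        if (parent[u] != -1) {
            printf("edge (%c, %c) of weight %d\n",
                   'a' + parent[u], 'a' + u, graph[u][parent[u]]);
            total += graph[u][parent[u]];
        }

        /* update the cheapest connections of the remaining vertices */
        for (int v = 0; v < V; v++)
            if (graph[u][v] && !inMST[v] && graph[u][v] < key[v]) {
                key[v] = graph[u][v];
                parent[v] = u;
            }
    }
    return total;
}

int main(void)
{
    /* Assumed illustrative graph: a-b = 2, a-c = 4, b-c = 6, b-d = 3, c-d = 1 */
    int graph[V][V] = {
        {0, 2, 4, 0},
        {2, 0, 6, 3},
        {4, 6, 0, 1},
        {0, 3, 1, 0}
    };
    printf("total MST weight = %d\n", primMST(graph));
    return 0;
}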
