Chapter 4: Dynamic Programming


Graphs
A graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also referred to as vertices, and the edges are the lines or arcs that connect any two nodes in the graph. A graph is therefore a finite set of vertices connected by edges.
Undirected graphs have edges without a direction: each edge indicates a two-way relationship and can be traversed in both directions. Directed graphs have edges with a direction: each edge indicates a one-way relationship and can only be traversed in a single direction.
Graphs are used to solve many real-life problems. They are used to represent networks, such as the paths in a city, a telephone network, or a circuit network. Graphs are also used in social networks like LinkedIn and Facebook. In Facebook, for example, each person is represented by a vertex (or node), and each node is a structure containing information such as person id, name, gender, and locale.
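As a rough sketch of how such a graph might be represented in code, the C fragment below stores a small directed graph as an adjacency list; the vertex count and the sample edges are arbitrary and chosen only for illustration.

#include <stdio.h>
#include <stdlib.h>

#define MAX_VERTICES 5

/* One node of a vertex's adjacency list. */
struct AdjNode {
    int dest;               /* index of the neighbouring vertex */
    struct AdjNode *next;   /* next neighbour in the list       */
};

struct Graph {
    int numVertices;
    struct AdjNode *adj[MAX_VERTICES];  /* one list per vertex */
};

/* Add a directed edge u -> v; for an undirected graph, call it twice
   (u -> v and v -> u) so the edge can be traversed both ways. */
void addEdge(struct Graph *g, int u, int v)
{
    struct AdjNode *node = malloc(sizeof *node);
    node->dest = v;
    node->next = g->adj[u];
    g->adj[u] = node;
}

int main(void)
{
    struct Graph g = { MAX_VERTICES, { NULL } };

    addEdge(&g, 0, 1);      /* directed edge 0 -> 1 */
    addEdge(&g, 1, 2);
    addEdge(&g, 2, 0);

    for (int u = 0; u < g.numVertices; u++) {
        printf("%d:", u);
        for (struct AdjNode *p = g.adj[u]; p != NULL; p = p->next)
            printf(" -> %d", p->dest);
        printf("\n");    /* lists are not freed in this short sketch */
    }
    return 0;
}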

Dynamic Programming
Dynamic programming is a technique that solves many problems in O(n²) or O(n³) time for which a naive approach would take exponential time. It is typically applied to optimization problems. Dynamic programming is an algorithmic paradigm that solves a complex problem by breaking it into subproblems and storing the results of those subproblems so that the same results are not computed again. Two main properties of a problem suggest that it can be solved using dynamic programming:
1) Overlapping Subproblems
2) Optimal Substructure

Overlapping Subproblems:
Like divide and conquer, dynamic programming combines the solutions of subproblems. Dynamic programming is mainly used when the solutions of the same subproblems are needed again and again. The computed solutions to subproblems are therefore stored in a table so that they do not have to be recomputed. Dynamic programming is thus not useful when there are no common (overlapping) subproblems, because there is no point in storing solutions that will never be needed again.
For example, binary search has no common subproblems. But in the following recursive program for Fibonacci numbers, many subproblems are solved again and again.
int fib(int n)
{
    if (n <= 1)
        return n;                      /* base cases: fib(0)=0, fib(1)=1 */
    return fib(n - 1) + fib(n - 2);    /* the two calls recompute many shared subproblems */
}

Optimal Substructure
A problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine whether dynamic programming and greedy algorithms are applicable to a problem. There are two ways of exploiting it:
1) Top-Down: Start solving the given problem by breaking it down. If a subproblem has already been solved, just return the saved answer; if not, solve it and save the answer. This approach is usually easy to think of and very intuitive, and is referred to as memoization.
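As a minimal sketch of memoization, the earlier Fibonacci function can be rewritten so that each subproblem is solved only once and its answer saved; the table size MAXN below is an arbitrary choice for illustration.

#include <stdio.h>
#include <string.h>

#define MAXN 100
long long memo[MAXN];

long long fib_memo(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != -1)                       /* already solved: return the saved answer */
        return memo[n];
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);   /* solve once and save it */
    return memo[n];
}

int main(void)
{
    memset(memo, -1, sizeof memo);           /* -1 marks "not computed yet" */
    printf("%lld\n", fib_memo(40));          /* prints 102334155 */
    return 0;
}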


2) Bottom-Up: Analyze the problem, determine the order in which the subproblems are solved, and start solving from the trivial subproblems up towards the given problem. This process guarantees that the subproblems are solved before the problem that uses them, and is what is usually referred to as dynamic programming. Note that divide and conquer is a slightly different technique: there we divide the problem into non-overlapping subproblems and solve them independently, as in merge sort and quick sort.
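A corresponding bottom-up sketch of the same Fibonacci computation solves the trivial subproblems fib(0) and fib(1) first and builds upward, so only the two most recent values need to be kept:

long long fib_bottom_up(int n)
{
    long long prev = 0, curr = 1;        /* fib(0) and fib(1) */
    if (n <= 1)
        return n;
    for (int i = 2; i <= n; i++) {
        long long next = prev + curr;    /* fib(i) = fib(i-1) + fib(i-2) */
        prev = curr;
        curr = next;
    }
    return curr;
}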
Principle of optimality: The principle of optimality states that, no matter what the first decision is, the remaining decisions must be optimal with respect to the state that results from this first decision. This implies that an optimal decision sequence is composed of optimal decision subsequences. Since the principle of optimality may not hold for some formulations of some problems, it is necessary to verify that it does hold for the problem being solved; dynamic programming cannot be applied when this principle does not hold.

In dynamic programming the solution to a problem is the result of a sequence of decisions: at every stage we make a decision so as to obtain an optimal solution. The method is effective when a given subproblem may arise from more than one partial set of choices. It can bring an exponential-time algorithm down to a polynomial-time one, because it reduces the amount of enumeration by avoiding decision sequences that cannot possibly be optimal. A dynamic programming algorithm solves every subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem is encountered.
The basic steps of dynamic programming are:
1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct an optimal solution from the computed information.
Principle of Optimality
The principle states that an optimal sequence of decisions has the property that, whatever the initial state and first decision are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision.

Knapsack problem

In this problem we are given a knapsack (a bag or container) of capacity M and N objects with weights w1, w2, ..., wn and profits p1, p2, ..., pn. The objective is to place objects into the knapsack so that the maximum profit is obtained while the total weight of the chosen objects does not exceed the capacity of the knapsack.
Given the weights and values of n items, put these items in a knapsack of capacity W to get the maximum total value in the knapsack. In other words, given two integer arrays val[0..n-1] and wt[0..n-1] which represent the values and weights of the n items, and an integer W which represents the knapsack capacity, find the maximum-value subset of val[] such that the sum of the weights of this subset is smaller than or equal to W. You cannot break an item: either pick the complete item or don't pick it (the 0-1 property).
Problem description: we are given n objects and a knapsack, where object i has weight wi and profit pi, and the knapsack has capacity W. Each object is either taken or left (xi = 1 or xi = 0), and the profit earned is the sum of pi*xi over the chosen objects. The objective is to fill the knapsack so as to
maximize Σ pi*xi   subject to   Σ wi*xi ≤ W,   where 1 ≤ i ≤ n and xi ∈ {0, 1}.
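To make the 0/1 formulation concrete, the sketch below simply tries every assignment of the xi (every subset of the items) and keeps the best feasible profit; it is the exponential-time enumeration that dynamic programming is meant to avoid. The function name and parameter order here are illustrative only.

/* Naive enumeration of all 2^n subsets of the items. */
int knapsack_brute(int n, const int w[], const int p[], int W)
{
    int best = 0;
    for (int mask = 0; mask < (1 << n); mask++) {   /* one bit per item: xi = 0 or 1 */
        int weight = 0, profit = 0;
        for (int i = 0; i < n; i++) {
            if (mask & (1 << i)) {                  /* xi = 1: item i is taken */
                weight += w[i];
                profit += p[i];
            }
        }
        if (weight <= W && profit > best)           /* keep the best feasible subset */
            best = profit;
    }
    return best;
}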


The knapsack problem can be stated in 2 ways:


1. Continuous or fractional
2. Discrete or 0/1

The recurrence relation used to solve the knapsack problem by dynamic programming is:

V[i, j] = max( V[i-1, j], V[i-1, j-wi] + pi )   if wi ≤ j
V[i, j] = V[i-1, j]                             if wi > j
V[i, j] = 0                                     if i = 0 or j = 0

where V[i, j] is the maximum profit obtainable using the first i items when the knapsack capacity is j.
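The recurrence translates almost line by line into C. The sketch below is a plain recursive version (no table yet); it assumes w[] and p[] hold the weights and profits of items 1..n with index 0 unused. Adding a memo table, or filling the table bottom-up as in the examples that follow, turns it into the dynamic programming solution.

int max2(int a, int b) { return a > b ? a : b; }

/* V(i, j): best profit using the first i items with remaining capacity j. */
int V(int i, int j, const int w[], const int p[])
{
    if (i == 0 || j == 0)                  /* no items or no capacity left */
        return 0;
    if (w[i] > j)                          /* item i does not fit */
        return V(i - 1, j, w, p);
    return max2(V(i - 1, j, w, p),                     /* leave item i out */
                V(i - 1, j - w[i], w, p) + p[i]);      /* or take it       */
}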

Example 1: Apply the dynamic programming algorithm to the following instance of the knapsack problem.
Item Weight Value
1 2 12
2 1 10
3 3 20
4 2 15

With the capacity M=5.

Solution: It is given that the number of items N=4 and the capacity of the knapsack M=5, with weights w1=2, w2=1, w3=3, w4=2 and profits p1=12, p2=10, p3=20, p4=15.

Step 1: Since N=4 and M=5, we build a table with N+1 rows (i.e., 5 rows) and M+1 columns (i.e., 6 columns). From the recurrence above, whenever i=0 there are no items to select, so irrespective of the capacity of the knapsack the profit is 0; that is, V[i, j]=0 for 0<=j<=M when i=0.

So V[0,0] =V[0,1]=V[0,2]=V[0,3]=V[0,4]=V[0,5]=0

And whenever j=0, irrespective of the number of items available, nothing can be placed into the knapsack and the profit is 0; that is, V[i, j]=0 for 0<=i<=N when j=0.
So V[0,0]=V[1,0]=V[2,0]=V[3,0]=V[4,0]=0
The table with N+1 rows and M+1 columns can then be filled as shown below:


i\j 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0

Step 2: When i=1 W1 =2 and P1=12


V[1,1]=V[0,1]=0 (because j=1, which is less than w1=2)
V[1,2]=max(V[0,2],V[0,0]+12)=max{0,0+12}=12
V[1,3]=max(V[0,3],V[0,1]+12)= max{0,0+12}=12
V[1,4]=max(V[0,4],V[0,2]+12)= max{0,0+12}=12
V[1,5]=max(V[0,5],V[0,3]+12)= max{0,0+12}=12
Placing these values in the table shown in Step 1, we have the table below:

i\j 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0
3 0
4 0

Step 3: Consider i=2, w2=1, P2=10


v[2,1]=max{v[1,1],v[1,0]+10}=max{0,0+10}=10
v[2,2]=max{v[1,2],v[1,1]+10}=max{12,0+10}=12
v[2,3]=max{v[1,3],v[1,2]+10}=max{12,12+10}=22
v[2,4]=max{v[1,4],v[1,3]+10}=max{12,12+10}=22
v[2,5]=max{v[1,5],v[1,4]]+10}=max{12,12+10}=22

i\j 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0
4 0

Step 4: Consider i=3, w3=3, P3=20


v[3,1]=v[2,1]=10 (because j<w3, i.e., 1<3)
v[3,2]=v[2,2]=12 (because j<w3, i.e., 2<3)
v[3,3]=max{v[2,3],v[2,0]+20}=max{22,0+20}=22
v[3,4]=max{v[2,4],v[2,1]+20}=max{22,10+20}=30
v[3,5]=max{v[2,5],v[2,2]+20}=max{22,12+20}=32


i\j 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0

Step 5: Consider i=4, w4=2, P4=15


v[4,1]=v[3,1]=10 (because j<w4, i.e., 1<2)
v[4,2]=max{v[3,2],v[3,0]+15}=max{12,0+15}=15
v[4,3]=max{v[3,3],v[3,1]+15}=max{22,10+15}=25
v[4,4]=max{v[3,4],v[3,2]+15}=max{30,12+15}=30
v[4,5]=max{v[3,5],v[3,3]+15}=max{32,22+15}=37

i\j 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 12 12 12 12
2 0 10 12 22 22 22
3 0 10 12 22 30 32
4 0 10 15 25 30 37

Now the maximum value v[N, M], obtained after considering all N items against the full capacity M, gives the optimal solution.

Here, N=4 and M=5. Therefore the optimal solution is v[N,M]=v[4,5]=37 units.


Since v[4,5] ≠ v[3,5], item 4 was placed in the knapsack, reducing the remaining capacity to 5-2=3 (knapsack capacity minus the weight of item 4). The remaining state is represented by v[3,3]: 3 items are still to be considered and the remaining capacity is 3. Since v[3,3]=v[2,3], item 3 is not included in the knapsack. Next compare v[2,3] with v[1,3]: as v[2,3] ≠ v[1,3], item 2 is included and the capacity reduces to 3-1=2. Finally, v[1,2] ≠ v[0,2], so item 1 is included and the capacity reduces to 2-2=0; the knapsack is full. Including items 4, 2 and 1 gives profits of 15, 10 and 12 respectively, producing the maximum profit of 37.
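For concreteness, here is a minimal C sketch of the bottom-up table fill and the trace-back just described, hard-coded with the data of Example 1; it should print the maximum profit 37 and report items 4, 2 and 1 as taken.

#include <stdio.h>

#define N 4
#define M 5

int main(void)
{
    int w[N + 1] = {0, 2, 1, 3, 2};     /* weights w1..w4 (index 0 unused) */
    int p[N + 1] = {0, 12, 10, 20, 15}; /* profits p1..p4                  */
    int v[N + 1][M + 1];

    /* Fill the table row by row using the recurrence. */
    for (int i = 0; i <= N; i++) {
        for (int j = 0; j <= M; j++) {
            if (i == 0 || j == 0)
                v[i][j] = 0;
            else if (w[i] > j)
                v[i][j] = v[i - 1][j];
            else {
                int take = v[i - 1][j - w[i]] + p[i];
                v[i][j] = take > v[i - 1][j] ? take : v[i - 1][j];
            }
        }
    }
    printf("maximum profit = %d\n", v[N][M]);   /* prints 37 */

    /* Trace back: if v[i][j] differs from v[i-1][j], item i was taken. */
    int j = M;
    for (int i = N; i >= 1; i--) {
        if (v[i][j] != v[i - 1][j]) {
            printf("item %d taken\n", i);       /* prints items 4, 2, 1 */
            j -= w[i];
        }
    }
    return 0;
}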

Example 2:
Apply the dynamic programming algorithm to the following instance of the knapsack problem.

Item Weight Value
1 1 1
2 2 6
3 5 18
4 6 22
5 7 28

With the capacity M=11

The [i, j] entry of the table will be V[i, j], the best value obtainable using the first i items when the knapsack capacity is j. We begin with the initialization and the first row.

i=1 w1=1 p1=1


V[1][1]=Max{v[0,1],v[0,0]+1}=1
V[1][2]=Max{v[0,2],v[0,1]+1}=1
V[1][3]=Max{v[0,3],v[0,2]+1}=1
V[1][4]=Max{v[0,4],v[0,3]+1}=1
V[1][5]=Max{v[0,5],v[0,4]+1}=1
V[1][6]=Max{v[0,6],v[0,5]+1}=1
V[1][7]=Max{v[0,7],v[0,6]+1}=1
V[1][8]=Max{v[0,8],v[0,7]+1}=1
V[1][9]=Max{v[0,9],v[0,8]+1}=1
V[1][10]=Max{v[0,10],v[0,9]+1}=1
V[1][11]=Max{v[0,11],v[0,10]+1}=1

i\j 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0
3 0
4 0
5 0

i=2 w2=2 p2=6


V[2,1]=v[1,1]=1
V[2,2]=Max{v[1,2],v[1,0]+6}=6
V[2,3]=Max{v[1,3],v[1,1]+6}=7
V[2,4]=Max{v[1,4],v[1,2]+6}=7
V[2,5]=Max{v[1,5],v[1,3]+6}=7
V[2,6]=Max{v[1,6],v[1,4]+6}=7
V[2,7]=Max{v[1,7],v[1,5]+6}=7
V[2,8]=Max{v[1,8],v[1,6]+6}=7
V[2,9]=Max{v[1,9],v[1,7]+6}=7
V[2,10]=Max{v[1,10],v[1,8]+6}=7
V[2,11]=Max{v[1,11],v[1,9]+6}=7

i\j 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0
4 0
5 0

i=3 w3=5 p3=18


V[3][1]=v[2][1]=1
V[3][2]=v[2][2]=6
V[3][3]=v[2][3]=7
V[3][4]=v[2][4]=7
V[3][5]=Max{v[2,5],v[2,0]+18}=18
V[3][6]=Max{v[2,6],v[2,1]+18}=19
V[3][7]=Max{v[2,7],v[2,2]+18}=24
V[3][8]=Max{v[2,8],v[2,3]+18}=25
V[3][9]=Max{v[2,9],v[2,4]+18}=25
V[3][10]=Max{v[2,10],v[2,5]+18}=25
V[3][11]=Max{v[2,11],v[2,6]+18}=25

i\j 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0
5 0

i=4 w4=6 p4=22


V[4][1]=v[3][1]=1
V[4][2]=v[3][2]=6
V[4][3]=v[3][3]=7
V[4][4]=v[3][4]=7
V[4][5]=v[3][5]=18
V[4][6]=Max{v[3,6],v[3,0]+22}=22
V[4][7]=Max{v[3,7],v[3,1]+22}=24
V[4][8]=Max{v[3,8],v[3,2]+22}=28
V[4][9]=Max{v[3,9],v[3,3]+22}=29
V[4][10]=Max{v[3,10],v[3,4]+22}=29
V[4][11]=Max{v[3,11],v[3,5]+22}=40

i\j 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5 0


i=5 w5=7 p5=28


V[5][1]=v[4][1]=1
V[5][2]=v[4][2]=6
V[5][3]=v[4][3]=7
V[5][4]=v[4][4]=7
V[5][5]=v[4][5]=18
V[5][6]=v[4][6]=22
V[5][7]=Max{v[4,7],v[4,0]+28}=max{24,0+28}=28
V[5][8]=Max{v[4,8],v[4,1]+28}=max{28,1+28}=29
V[5][9]=Max{v[4,9],v[4,2]+28}=max{29,6+28}=34
V[5][10]=Max{v[4,10],v[4,3]+28}=max{29,7+28}=35
V[5][11]=Max{v[4,11],v[4,4]+28}=max{40,7+28}=40

i\j 0 1 2 3 4 5 6 7 8 9 10 11
0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 1 1 1 1 1 1 1 1 1 1 1
2 0 1 6 7 7 7 7 7 7 7 7 7
3 0 1 6 7 7 18 19 24 25 25 25 25
4 0 1 6 7 7 18 22 24 28 29 29 40
5 0 1 6 7 7 18 22 28 29 34 35 40

The maximum profit is v[5][11]=40. Comparing it with the cell above, v[4][11]=40, there is no change in value, so object 5 is not taken and we move to v[4][11]. Since v[4][11]=40 ≠ v[3][11]=25, object 4 is taken. The maximum knapsack capacity is 11, so after taking object 4 the remaining capacity is M=11-6=5 and 3 objects remain to be considered; we therefore check v[3,5]. As v[3,5]=18 differs from the cell above, v[2,5]=7, object 3 (of weight 5) is taken, and the remaining capacity becomes M=5-5=0.
The solution is therefore to take objects 4 and 3, producing the total maximum profit of 22+18=40.

Travelling Salesman Problem


A salesman has to visit n cities, visiting each city exactly once and returning to the city from which he started. There is an integer cost c(i, j) to travel from city i to city j. The salesman wishes to make the tour that visits all the cities and returns to his starting point at minimum total cost.

The recurrence used to solve this problem is

g(i, S) = min over j in S of { c(i, j) + g(j, S - {j}) },   with the base case g(i, 0) = c(i, 1) when S is empty,

where g(i, S) is the length of a shortest path that starts at city i, passes through every city in the set S exactly once, and ends at city 1, and c(i, j) is the cost of the edge from i to j. The set S can contain from 0 to n-1 cities, so we compute the values for |S|=0, |S|=1, ..., |S|=n-1; the optimal tour cost is g(1, {2, 3, ..., n}).

[Figure: a weighted graph on the four vertices 1–4; its edge costs are given by the cost matrix below.]

Solution:
Let the source vertex be 1. The cost matrix for the graph is:
C(i , j) 1 2 3 4
1 0 10 15 20
2 5 0 9 10
3 6 13 0 12
4 8 8 9 0

Step 1: Consider |S|=0. In this case no intermediate node is visited, i.e., we return to node 1 without passing through any intermediate vertex. Using the formula g(i, S) = min over j in S of { c(i, j) + g(j, S - {j}) }:
g(2,0)=c(2,1)=5   // path 2->1
g(3,0)=c(3,1)=6   // path 3->1
g(4,0)=c(4,1)=8   // path 4->1
Step 2: Consider |S|=1. In this case one intermediate node is visited, i.e., we return to node 1 through exactly one intermediate vertex.
g(2,{3})=c(2,3)+g(3,0)=9+6=15    // path 2->3->1
g(2,{4})=c(2,4)+g(4,0)=10+8=18   // path 2->4->1
g(3,{2})=c(3,2)+g(2,0)=13+5=18   // path 3->2->1
g(3,{4})=c(3,4)+g(4,0)=12+8=20   // path 3->4->1
g(4,{2})=c(4,2)+g(2,0)=8+5=13    // path 4->2->1
g(4,{3})=c(4,3)+g(3,0)=9+6=15    // path 4->3->1

Step 3: Consider |S|=2. In this case two intermediate nodes are visited, i.e., we return to node 1 through two intermediate vertices.
g(2,{3,4})=min{c(2,3)+g(3,{4}),c(2,4)+g(4,{3})}
=min {9+20,10+15}=min{29,25}=25

g(3,{2,4})=min{c(3,2)+g(2,{4}),c(3,4)+g(4,{2})}
=min {13+18, 12+13}=min{31,25}=25

g(4,{2,3})=min{c(4,2)+g(2,{3}),c(4,3)+g(3,{2})}
=min {8+15, 9+18} =min {23,27}=23


Step 4: Consider |S|=3. In this case three intermediate nodes are visited, i.e., we return to node 1 through all three intermediate vertices.
g(1,{2,3,4})=min{c(1,2)+g(2,{3,4}), c(1,3)+g(3,{2,4}), c(1,4)+g(4,{2,3})}
=min{10+25, 15+25, 20+23}=min{35,40,43}=35
Therefore the optimal tour cost is 35.

Step 5: Find the tour path


g(1,{2,3,4})= c(1,2) + g(2,{3,4})
= c(1,2) + c(2,4)+g(4,{3})
=c(1,2) + c(2,4) + c(4,3)+g(3,0)
= c(1,2) + c(2,4) + c(4,3)+c(3,1)

Therefore the tour path is 1->2->4->3->1.
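The computation above can be sketched in C as a memoised recursion over subsets (the Held–Karp method). Cities are numbered 0..3 here, with city 0 playing the role of city 1 in the text, and the cost matrix is the one from this example; the program should print the optimal tour cost 35. Using 0 to mean "not yet computed" in the memo table is safe here only because all tour costs are positive.

#include <stdio.h>

#define N 4
#define INF 1000000

int c[N][N] = {
    { 0, 10, 15, 20 },
    { 5,  0,  9, 10 },
    { 6, 13,  0, 12 },
    { 8,  8,  9,  0 },
};

int memo[N][1 << N];   /* memo[i][S] = 0 means "not computed yet" */

/* g(i, S): cheapest way to leave city i, visit every city in the
   bitmask S exactly once, and finish back at city 0. */
int g(int i, int S)
{
    if (S == 0)                        /* nothing left to visit: go home */
        return c[i][0];
    if (memo[i][S] != 0)
        return memo[i][S];

    int best = INF;
    for (int j = 1; j < N; j++) {
        if (S & (1 << j)) {            /* j is still unvisited */
            int cost = c[i][j] + g(j, S & ~(1 << j));
            if (cost < best)
                best = cost;
        }
    }
    return memo[i][S] = best;
}

int main(void)
{
    int all = ((1 << N) - 1) & ~1;     /* every city except city 0 */
    printf("optimal tour cost = %d\n", g(0, all));   /* prints 35 */
    return 0;
}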

Example 2:
Find the shortest tour for the travelling salesman problem, using dynamic programming, for a four-city graph with source vertex 1 whose edge costs (as used in the calculations below) are c(1,2)=c(2,1)=10, c(1,3)=c(3,1)=18, c(1,4)=c(4,1)=20, c(2,3)=c(3,2)=6, c(2,4)=c(4,2)=12 and c(3,4)=c(4,3)=18.

g(2,0)=c(2,1)=10
g(3,0)=c(3,1)=18
g(4,0)=c(4,1)=20

g(2,{3})=c(2,3)+g(3,0)=6+18=24
g(2,{4})=c(2,4)+g(4,0)=12+20=32
g(3,{2})=c(3,2)+g(2,0)=6+10=16
g(3,{4})=c(3,4)+g(4,0)=18+20=38
g(4,{2})=c(4,2)+g(2,0)=12+10=22
g(4,{3})=c(4,3)+g(3,0)=18+18=36

g(2,{3,4})=min{c(2,3)+g(3,{4}), c(2,4)+g(4,{3})}
=min{6+38, 12+36}=min{44,48}=44
g(3,{2,4})=min{c(3,2)+g(2,{4}), c(3,4)+g(4,{2})}


=min{6+32,18+22}=min{38,40}=38
g(4,{2,3})=min{c(4,2)+g(2,{3}), c(4,3)+g(3,{2})}
=min{12+24, 18+16}=min{36,34}=34

g(1,{2,3,4})=min{c(1,2)+g(2,{3,4}), c(1,3)+g(3,{2,4}), c(1,4)+g(4,{2,3})}
=min{10+44, 18+38, 20+34}=min{54,56,54}=54

Find the tour path by expanding g(1,{2,3,4}) and following the minimum value at each step.
Path 1: c(1,2)+g(2,{3,4})
= c(1,2)+c(2,3)+g(3,{4})
= c(1,2)+c(2,3)+c(3,4)+g(4,0)
= c(1,2)+c(2,3)+c(3,4)+c(4,1)

Solution 1: 1->2->3->4->1

Path 2: Expand and trace back g(1,{2,3,4}) through its other minimum term:
= c(1,4)+g(4,{2,3})
= c(1,4)+c(4,3)+g(3,{2})
= c(1,4)+c(4,3)+c(3,2)+g(2,0)
= c(1,4)+c(4,3)+c(3,2)+c(2,1)
Solution 2: 1->4->3->2->1
Both tours have the optimal cost of 54.
