
Algorithm Design Paradigms

(Dynamic Programming)

1
Introduction to Dynamic
Programming
● Similar to the divide-and-conquer approach
● Difference: applies "when subproblems share subsubproblems"

Divide-and-conquer: repeatedly solves the common subsubproblems, even if one has already been solved while solving another subproblem.

Dynamic programming: solves each subsubproblem just once and then saves its answer in a table; it refers to the table whenever the solution to an already-solved subsubproblem is needed.

● Typically applied to optimization problems
– Finding an optimal (minimum or maximum) solution out of many possible solutions.
● Can be considered an optimization of backtracking search
2
Divide and Conquer

[Tree diagram: problem P splits into subproblems S1, S2, S3; the subsubproblems appear with repetition (SS1, SS2, SS2, SS3, SS3, SS4) — SS2 and SS3 are each solved twice.]

Dynamic Programming

[Tree diagram: problem P splits into subproblems S1, S2, S3, which share the subsubproblems SS1, SS2, SS3, SS4 — each solved only once.]
3
Algorithm development steps in Dynamic Programming

1. Characterize the structure of an optimal solution.
2. Recursively define the value of an optimal solution.
3. Compute the value of an optimal solution (typically in a bottom-up fashion).
4. Construct an optimal solution from the computed information.

4
Implementing Dynamic Programming
● Top-down with memoization
– Write the procedure recursively in a natural manner, but modify it to save the result of each subproblem (usually in an array or hash table).
● Bottom-up method
– Sort the subproblems by size and solve them in smallest-first order.
– Solve each subproblem using its smaller subproblems, which are already solved and saved.
– Save the solution of each subproblem for use by subproblems of larger size.

The bottom-up approach often has much better constant factors, since it has less overhead for procedure calls.

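As a sketch, the two strategies can be illustrated on the Fibonacci numbers (an illustrative example, not from the slides):

```python
from functools import lru_cache

# Top-down with memoization: natural recursion, results cached
# so each subproblem is solved only once.
@lru_cache(maxsize=None)
def fib_top_down(n):
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: solve subproblems smallest-first, saving each in a table.
def fib_bottom_up(n):
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both compute the same values; the bottom-up version avoids recursive call overhead, matching the constant-factor remark above.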
5
Principle of Optimality

● An optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
● A problem exhibits optimal substructure if an optimal solution contains optimal solutions to its subproblems.
● Dynamic programming builds an optimal solution to the problem from optimal solutions to subproblems.

6
Knapsack problem

● Given a knapsack with maximum capacity W, and a set S consisting of n items
● Each item i has some weight wi and utility value pi (all wi, pi, and W are integer values)
● Problem: how to pack the knapsack to achieve the maximum total utility of the packed items?

7
0/1 Knapsack Problem

8
9
0-1 Knapsack
problem
Item   Weight wi   Utility value pi
 1        2            3
 2        3            4
 3        4            5
 4        5            8
 5        9           10

This is a knapsack with maximum weight W = 20.

10
0-1 Knapsack problem: brute-force approach

● If there are n items, there are 2^n possible combinations of items.
● Check all combinations and find the one with the maximum total value whose total weight is less than or equal to W.
● Running time will be O(2^n).

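The brute-force check can be sketched as follows (the item list and W = 20 are taken from the example slide; the variable names are illustrative):

```python
from itertools import combinations

# Brute force: try all 2^n subsets of the items and keep the best
# feasible one. (weight, utility) pairs from the example slide.
items = [(2, 3), (3, 4), (4, 5), (5, 8), (9, 10)]
W = 20

best = 0
for r in range(len(items) + 1):
    for subset in combinations(items, r):
        weight = sum(w for w, p in subset)
        value = sum(p for w, p in subset)
        if weight <= W and value > best:
            best = value
# best is 26: items of weight 2, 4, 5, 9 (total weight 20).
```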
11
0-1 Knapsack problem: Characterizing optimal solution

Maximize   Σ_{1≤i≤n} pi·xi

Subject to Σ_{1≤i≤n} wi·xi ≤ W,  and xi ∈ {0,1} for 1 ≤ i ≤ n

● n – number of items
● W – maximum weight
● wi – weight of item i
● pi – value of item i
● xi – selection decision for item i: 1 if selected, 0 if not selected
12
0-1 Knapsack: Recurrence of optimal solution
● p[k] – the value of item k
● w[k] – the weight of item k
● KS(W,k) – returns the total value of an optimal solution
– W : the capacity available for the solution
– k : index of the subproblem (decision index)

KS(W,k){
    if (k = 0)
        return 0;
    else if (W < w[k])
        return KS(W, k-1);
    else
        return max{ KS(W, k-1), p[k] + KS(W - w[k], k-1) };
}

● Every computed value can be stored in a table so that when it is needed again, it is simply returned from the table.
13
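A sketch of the memoized version of KS in Python, following the slide's recurrence (the 1-indexed item convention and the example data from the earlier slide are assumed):

```python
# Memoized 0-1 knapsack following the KS(W, k) recurrence.
# Items are 1-indexed: w[0] and p[0] are unused placeholders.
w = [0, 2, 3, 4, 5, 9]   # weights from the example slide
p = [0, 3, 4, 5, 8, 10]  # utility values from the example slide

memo = {}

def KS(cap, k):
    if k == 0:
        return 0
    if (cap, k) in memo:          # already computed: return from table
        return memo[(cap, k)]
    if cap < w[k]:
        result = KS(cap, k - 1)
    else:
        result = max(KS(cap, k - 1), p[k] + KS(cap - w[k], k - 1))
    memo[(cap, k)] = result       # save the answer in the table
    return result
```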
Recursive definition of subproblem
● P[k,w] = P[k-1,w]                                   if w[k] > w
●        = max{ P[k-1,w], P[k-1, w-w[k]] + p[k] }     otherwise

● The best subset of Sk that has total weight w either contains item k or it does not.
● First case: w[k] > w. Item k cannot be part of the solution, since if it were, the total weight would exceed w, which is unacceptable.
● Second case: w[k] <= w. Then item k can be in the solution, and we choose the case with the greater value.
14
0-1 Knapsack: computing optimal value

for (j = 0 to W)
    P[0,j] = 0
for (i = 0 to n)
    P[i,0] = 0
for (i = 1 to n)
    for (j = 1 to W)    // weight variation loop
        if (w[i] <= j)
            if (p[i] + P[i-1, j-w[i]] > P[i-1, j])
                P[i,j] = p[i] + P[i-1, j-w[i]]
            else
                P[i,j] = P[i-1, j]
        else
            P[i,j] = P[i-1, j]

15
0-1 Knapsack: Algorithm Analysis
● The initial loop over j from 0 to W runs O(W) times
● The initial loop over i from 0 to n runs O(n) times
● The nested loop over i and j runs O(n·W) times
● Therefore, the overall running time is O(n·W)
● Space complexity is O(n·W) (the table that stores the values)
16
Example
● n = 4
● W = 5
● Elements (weight, profit): (2,3), (3,4), (4,5), (5,6)
17
30
Finding the optimal set of items

i = n, k = W
while (i > 0 && k > 0)
    if (P[i,k] ≠ P[i-1,k])
        mark the ith item as in the knapsack
        k = k - w[i], i = i - 1
    else
        i = i - 1

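A sketch combining the table computation and this traceback, using the example data from the slides (n = 4, W = 5, items (2,3), (3,4), (4,5), (5,6)):

```python
# Bottom-up 0-1 knapsack plus traceback of the chosen items.
# Items are 1-indexed; w[0], p[0] are unused placeholders.
w = [0, 2, 3, 4, 5]
p = [0, 3, 4, 5, 6]
n, W = 4, 5

# Fill the table P[0..n][0..W].
P = [[0] * (W + 1) for _ in range(n + 1)]
for i in range(1, n + 1):
    for j in range(1, W + 1):
        if w[i] <= j:
            P[i][j] = max(P[i - 1][j], p[i] + P[i - 1][j - w[i]])
        else:
            P[i][j] = P[i - 1][j]

# Traceback: item i is in the solution iff P[i][k] != P[i-1][k].
chosen = []
i, k = n, W
while i > 0 and k > 0:
    if P[i][k] != P[i - 1][k]:
        chosen.append(i)
        k -= w[i]
    i -= 1
# Optimal value P[4][5] is 7, achieved by items 1 and 2.
```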
31
Longest Common
Subsequence
Problem

32
Longest Common
Subsequence
● Definition: subsequence
– A subsequence of a given sequence is just the given sequence with zero or more elements left out.
● Formally, given a sequence X = <x1, x2, ..., xm>, another sequence Z = <z1, z2, ..., zk> is a subsequence of X if there exists a strictly increasing sequence <i1, i2, ..., ik> of indices of X such that for all j = 1, 2, ..., k, we have xij = zj.
● Example
– Z = <B, C, D, B> is a subsequence of X = <A, B, C, B, D, A, B> with corresponding index sequence <2, 3, 5, 7>.
● Z is a common subsequence of X and Y if Z is a subsequence of both X and Y.

33
Longest Common
Subsequence
Problem
● We are given two sequences X = <x1, x2, ..., xm>
and Y = <y1, y2, ..., yn> and wish to find a
maximum length common subsequence (LCS)
of X and Y.

● Application
– Used in DNA analysis to find the longest common strand between DNA sequences

34
Characterizing a longest common subsequence
● In a brute-force approach, enumerate all subsequences of X and check each one to see whether it is also a subsequence of Y.
● Because X has 2^m subsequences, this requires exponential time, O(n·2^m).
● In the dynamic-programming approach to LCS, subproblems correspond to pairs of "prefixes" of the two input sequences.
● Given a sequence X = <x1, x2, ..., xm>, we define the ith prefix of X, for i = 0, 1, ..., m, as Xi = <x1, x2, ..., xi>.
● For example, if X = <A, B, C, B, D, A, B>, then X4 = <A, B, C, B> and X0 is the empty sequence.

35
Optimal substructure of an LCS
● Let X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> be sequences, and let Z = <z1, z2, ..., zk> be any LCS of X and Y.
● If xm = yn, then zk = xm = yn and Zk-1 is an LCS of Xm-1 and Yn-1.
● If xm ≠ yn and zk ≠ xm, then Z is an LCS of Xm-1 and Y.
● If xm ≠ yn and zk ≠ yn, then Z is an LCS of X and Yn-1.

36
A recursive solution
● Define c[i,j] to be the length of an LCS of the sequences Xi and Yj.
● If either i = 0 or j = 0, the LCS has length 0, i.e., c[0,j] = c[i,0] = 0.
● The optimal substructure of the LCS problem gives the recursive formula:

c[i,j] = 0                            if i = 0 or j = 0
c[i,j] = c[i-1,j-1] + 1               if i,j > 0 and x[i] = y[j]
c[i,j] = max( c[i,j-1], c[i-1,j] )    if i,j > 0 and x[i] ≠ y[j]
37
Computing the length of an
LCS

39
Time complexity : computing
length of LCS
● Line 4-5 runs m times
● Line 6-7 runs n times
● Line 8 runs m times
● Line 9-17 runs m.n
● times
Hence, time complexity

is (mn)
Space complexity
(mn)

40
Example

41
Printing the optimal solution

● Time complexity = Θ(m+n)
● Can we compute the optimal solution without the array b?


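As a sketch answering this in the affirmative, the LCS itself can be reconstructed from the length table c alone, re-deriving each backward step instead of storing a separate direction array b (the function name is illustrative):

```python
# LCS length table plus reconstruction without a direction array b:
# each backward move is re-derived from the c table itself.
def lcs(x, y):
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Walk back from c[m][n].
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For X = "ABCBDAB" and Y = "BDCABA", this returns an LCS of length 4.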
42
Dr. Alekha Kumar Mishra
All-Pairs Shortest Path Problem
(Floyd-Warshall Algorithm)

43
The All-Pairs Shortest Paths Problem
● Given a weighted digraph G(V,E) with a weight function w : E → R, determine the length of the shortest path between every pair of vertices in G.
● We assume that negative-weight edges may be present, but that there are no negative-weight cycles.

44
Graph input/output
representation

45
● The vertices v1, v2, ..., vk-1 are called the intermediate vertices of the path p = <v0, v1, ..., vk>.

Floyd-Warshall Algorithm
Optimal Structure

46
Shortest path
structure
● A shortest path does not contain the same vertex twice.
● For a shortest path from i to j such that all intermediate vertices on the path are chosen from the set {1,2,...,k}, there are two possibilities:

47
Shortest path
structure(2)
[Figure: a shortest path P from i to j with all intermediate vertices in {1,2,…,k} decomposes at vertex k into p1 (from i to k) and p2 (from k to j), each with all intermediate vertices in {1,2,…,k-1}.]

48
Shortest path
structure(3)

49
Computation of shortest path:
The bottom up approach

50
Computation of shortest path:
The bottom up approach

51
Analysis
● The algorithm's running time is Θ(n³).
● The predecessor pointers can be used to extract the final path.
● Space complexity is also Θ(n³).
● It is possible to reduce this to Θ(n²) by keeping only one matrix instead of n.

52
Space-efficient version
of Floyd-Warshall

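The space-efficient version (a single n × n distance matrix updated in place, giving Θ(n²) space) can be sketched as:

```python
# Space-efficient Floyd-Warshall: one n x n distance matrix,
# updated in place. INF marks the absence of an edge; the
# diagonal of the input matrix is assumed to be 0.
INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):              # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

The in-place update is safe because row k and column k do not change during iteration k.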
53
Example

54
Example of
Computation

55
Example(2)

56
Matrix Chain Multiplication
Problem

57
Matrix Chain
Multiplication
● "Given a sequence (chain) <A1, A2, ..., An> of n matrices, where for i = 1, 2, ..., n, matrix Ai has dimension pi-1 × pi, fully parenthesize the product A1A2...An in a way that minimizes the number of scalar multiplications."
● Matrix multiplication is associative, so all parenthesizations yield the same product.
● A product of matrices is fully parenthesized if it is
– either a single matrix, or
– the product of two fully parenthesized matrix products, surrounded by parentheses.
● The placement of parentheses in a chain of matrices can affect the cost of evaluating the product.
● The goal is only to determine an order for multiplying the matrices that has the lowest cost.

58
An example of a fully parenthesized product

59
Impact of different parenthesizations
● Three matrices <A1, A2, A3> with dimensions 10 × 100, 100 × 5, and 5 × 50, respectively
● ((A1A2)A3)
– A1A2 requires 10·100·5 = 5000 scalar multiplications
– (A1A2)A3 requires 10·5·50 = 2500 scalar multiplications
– Total = 7500 scalar multiplications
● (A1(A2A3))
– A2A3 requires 100·5·50 = 25,000 scalar multiplications
– A1(A2A3) requires 10·100·50 = 50,000 scalar multiplications
– Total = 75,000 scalar multiplications
● Thus, ((A1A2)A3) is 10 times faster than (A1(A2A3)).

60
Recurrence for the number of possible parenthesizations

● P(n) – the number of possible parenthesizations of a sequence of n matrices
– Basis: P(1) = 1
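The recurrence itself did not survive extraction here; the standard form (splitting the chain between positions k and k+1 and parenthesizing each half independently) is:

```latex
P(n) =
\begin{cases}
1 & \text{if } n = 1,\\[4pt]
\displaystyle\sum_{k=1}^{n-1} P(k)\,P(n-k) & \text{if } n \ge 2.
\end{cases}
```

This solution grows as the Catalan numbers, so the number of parenthesizations is Ω(2^n) and exhaustive enumeration is infeasible.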

61
Optimal structure of parenthesization
● Ai..j, where i ≤ j, denotes the matrix that results from the product AiAi+1...Aj.
● We must split the product between Ak and Ak+1 for some integer k in the range i ≤ k < j.
● Optimal substructure:
– The subchain AiAi+1...Ak within an optimal parenthesization of AiAi+1...Aj must itself be optimally parenthesized.
– In a similar way, the subchain Ak+1Ak+2...Aj within an optimal parenthesization of AiAi+1...Aj must be optimally parenthesized.

62
A recursive solution

● Let m[i,j] be the minimum number of scalar multiplications needed to compute the matrix Ai..j.
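The slide's formula is missing here; the standard recurrence, using the dimensions pi-1 × pi from the problem statement, is:

```latex
m[i,j] =
\begin{cases}
0 & \text{if } i = j,\\[4pt]
\displaystyle\min_{i \le k < j}\,\bigl( m[i,k] + m[k+1,j] + p_{i-1}\,p_k\,p_j \bigr) & \text{if } i < j.
\end{cases}
```

The term p_{i-1} p_k p_j is the cost of multiplying the two resulting matrices A_{i..k} (of shape p_{i-1} × p_k) and A_{k+1..j} (of shape p_k × p_j).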

63
The bottom-up approach

● Uses an additional table m[1..n, 1..n] to store the costs m[i,j], and a table s[1..n-1, 2..n] that records which index k achieved the optimal cost in computing m[i,j].
● The cost m[i,j] of computing a matrix-chain product of j-i+1 matrices depends only on the costs of products of fewer than j-i+1 matrices.
● Thus, the algorithm fills in the table m by solving the parenthesization problem on chains of increasing length.

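A sketch of the bottom-up MATRIX-CHAIN-ORDER procedure (the function name and 1-indexed tables follow the slides; dims is the list p0, p1, ..., pn of matrix dimensions):

```python
import math

# Bottom-up matrix-chain order: matrix A_i has shape
# dims[i-1] x dims[i], so dims has length n + 1.
def matrix_chain_order(dims):
    n = len(dims) - 1
    # m[i][j]: min scalar multiplications for A_i..A_j (1-indexed);
    # s[i][j]: the split index k achieving that minimum.
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = math.inf
            for k in range(i, j):           # try every split point
                cost = m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s
```

On the earlier example dims = [10, 100, 5, 50], this gives m[1][3] = 7500 with split s[1][3] = 2, i.e. ((A1A2)A3).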
64
Algorithm to compute the
tables

65
Example

66
Time complexity of Matrix-Chain-Order

● The loops are nested three deep, and each loop index (l, i, and k) takes on at most n-1 values.
● MATRIX-CHAIN-ORDER therefore yields a running time of O(n³).
● The algorithm requires Θ(n²) space to store the m and s tables.
● Thus, MATRIX-CHAIN-ORDER is much more efficient than the exponential-time brute-force method of enumerating all possible parenthesizations and checking each one.

67
Constructing an optimal solution

P(s,1,6)
( P(s,1,3) P(s,4,6) )
( ( P(s,1,1) P(s,2,3) ) ( P(s,4,5) P(s,6,6) ) )
( ( A1 ( P(s,2,2) P(s,3,3) ) ) ( ( P(s,4,4) P(s,5,5) ) A6 ) )
( ( A1 ( A2 A3 ) ) ( ( A4 A5 ) A6 ) )

68
