Dynamic Programming (DP)

- Like divide-and-conquer, DP solves a problem by combining the solutions to sub-problems.
- Differences between divide-and-conquer and DP:
  - Divide-and-conquer: independent sub-problems, solved independently and recursively (so the same sub(sub)problems may be solved repeatedly).
  - DP: sub-problems are dependent, i.e., sub-problems share sub-sub-problems; every sub(sub)problem is solved just once, and the solutions to sub(sub)problems are stored in a table and used for solving higher-level sub-problems.


Application domain of DP
- Optimization problems: find a solution with optimal (maximum or minimum) value.
- We seek an optimal solution, not the optimal solution, since there may be more than one optimal solution; any one of them is acceptable.


Typical steps of DP
- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution in a bottom-up fashion.
- Construct an optimal solution from computed/stored information.

DP Example -- Assembly Line Scheduling (ALS)


Concrete Instance of ALS

Brute Force Solution
- List all possible sequences.
- For each sequence of n stations, compute the passing time (the computation takes Θ(n) time).
- Record the sequence with the smallest passing time.
- However, there are 2^n possible sequences in total.

ALS -- DP steps: Step 1
- Step 1: find the structure of the fastest way through the factory.
- Consider the fastest way from the starting point through station S1,j (same for S2,j):
  - j=1: only one possibility.
  - j=2,3,…,n: two possibilities, from S1,j-1 or from S2,j-1.
    - From S1,j-1: additional time a1,j.
    - From S2,j-1: additional time t2,j-1 + a1,j.
- Suppose the fastest way through S1,j is through S1,j-1. Then the chassis must have taken a fastest way from the starting point through S1,j-1. Why?
- Similarly for S2,j-1.

DP step 1: Find Optimal Substructure
- An optimal solution to a problem contains within it optimal solutions to subproblems:
  - The fastest way through station Si,j contains within it the fastest way through station S1,j-1 or S2,j-1.
- Thus we can construct an optimal solution to a problem from the optimal solutions to subproblems.

ALS -- DP steps: Step 2
- Step 2: a recursive solution.
- Let fi[j] (i=1,2 and j=1,2,…,n) denote the fastest possible time to get a chassis from the starting point through Si,j.
- Let f* denote the fastest time for a chassis all the way through the factory. Then
  - f* = min(f1[n] + x1, f2[n] + x2)
  - f1[1] = e1 + a1,1
  - f1[j] = min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j)
  - Similarly for f2[j].

ALS -- DP steps: Step 2 (cont.)
- Recursive solution:
  - f* = min(f1[n] + x1, f2[n] + x2)
  - f1[j] = e1 + a1,1 if j=1; min(f1[j-1] + a1,j, f2[j-1] + t2,j-1 + a1,j) if j>1
  - f2[j] = e2 + a2,1 if j=1; min(f2[j-1] + a2,j, f1[j-1] + t1,j-1 + a2,j) if j>1
- fi[j] (i=1,2; j=1,2,…,n) records the optimal values of the subproblems.
- To keep track of the fastest way, introduce li[j] to record the line number (1 or 2) whose station j-1 is used in a fastest way through Si,j.
- Introduce l* to be the line whose station n is used in a fastest way through the factory.

ALS -- DP steps: Step 3
- Step 3: computing the fastest time.
- One option: a recursive algorithm. Let ri(j) be the number of references made to fi[j]:
  - r1(n) = r2(n) = 1
  - r1(j) = r2(j) = r1(j+1) + r2(j+1)
  - ri(j) = 2^(n-j)
- So f1[1] is referenced 2^(n-1) times, and the total number of references to all fi[j] is Θ(2^n). Thus the running time of the recursive algorithm is exponential.
- Better option: a non-recursive (bottom-up) algorithm.

ALS FAST-WAY algorithm
- Running time: O(n).
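The bottom-up FAST-WAY computation can be sketched in Python. This is a sketch following the recurrences above, not the textbook's exact pseudocode; the 0-indexed parameter layout for a, t, e, x is an assumption for illustration:

```python
def fastest_way(a, t, e, x, n):
    """Bottom-up DP for assembly-line scheduling (in the spirit of FAST-WAY).

    a[i][j]: processing time at station j of line i (i in {0, 1})
    t[i][j]: transfer time when leaving line i after station j
    e[i], x[i]: entry and exit times for line i
    Returns (f_star, l_star, l), where l[i][j] records which line's
    station j-1 is used on a fastest way through station j of line i.
    """
    f = [[0] * n for _ in range(2)]
    l = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]                      # same line
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]  # transfer
            if stay <= switch:
                f[i][j], l[i][j] = stay, i
            else:
                f[i][j], l[i][j] = switch, other
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 0, l
    return f[1][n - 1] + x[1], 1, l
```

On a hypothetical 2-line, 3-station instance a=[[4,5,3],[2,10,1]], t=[[1,1],[2,2]], e=[1,2], x=[1,1], this returns a fastest total time of 13, exiting from line 2.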

ALS -- DP steps: Step 4
- Step 4: construct the fastest way through the factory.
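Given the l table and l* produced in Step 3, the route can be reconstructed by tracing the choices backward from the last station, as in the textbook's PRINT-STATIONS. A sketch with 0-indexed lines internally and 1-based numbering in the output (the sample l table in the test is a hypothetical instance):

```python
def construct_stations(l, l_star, n):
    """Trace the line-choice table backward from station n.

    Returns the route as [(line, station), ...] in front-to-back order,
    with lines and stations reported 1-based.
    """
    route = [(l_star + 1, n)]       # last station lies on line l*
    i = l_star
    for j in range(n - 1, 0, -1):   # walk back: station j's line is l[i][j]
        i = l[i][j]
        route.append((i + 1, j))
    route.reverse()
    return route
```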

Matrix-chain multiplication (MCM) -- DP
- Problem: given A1, A2, …, An, compute the product A1·A2·…·An; find the fastest way (i.e., the minimum number of scalar multiplications) to compute it.
- Two matrices A(p,q) and B(q,r) can be multiplied into their product C(p,r) in p·q·r multiplications:
    for i=1 to p
      for j=1 to r
        C[i,j] = 0
    for i=1 to p
      for j=1 to r
        for k=1 to q
          C[i,j] = C[i,j] + A[i,k]·B[k,j]

Matrix-chain multiplication -- DP
- Different parenthesizations have different numbers of multiplications for a product of multiple matrices.
- Example: A(10,100), B(100,5), C(5,50)
  - If ((A·B)·C): 10·100·5 + 10·5·50 = 7500
  - If (A·(B·C)): 100·5·50 + 10·100·50 = 75000
  - The first way is ten times faster than the second!
- Denote A1, A2, …, An by <p0, p1, p2, …, pn>, i.e., Ai has dimensions (pi-1, pi).

Matrix-chain multiplication -- DP
- Intuitive brute-force solution: count the parenthesizations by exhaustively checking all possible parenthesizations.
- Let P(n) denote the number of alternative parenthesizations of a sequence of n matrices:
  - P(n) = 1 if n=1
  - P(n) = Σ_{k=1..n-1} P(k)·P(n-k) if n ≥ 2
- The solution to the recurrence is Ω(2^n).
- So brute force will not work.
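The recurrence for P(n) can be evaluated directly. A small sketch (memoized, so computing the count itself is cheap; the exponential growth is in the value of P(n), i.e., the number of parenthesizations a brute-force solver would have to examine):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parens(n):
    """Number of alternative parenthesizations of a chain of n matrices.

    Implements P(n) = 1 if n = 1, else sum over k of P(k) * P(n - k).
    (These are the Catalan numbers C(n-1).)
    """
    if n == 1:
        return 1
    return sum(num_parens(k) * num_parens(n - k) for k in range(1, n))
```

For example, four matrices admit 5 parenthesizations, and ten matrices already admit 4862.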

.. (iek<j)..j (iej) denote the matrix resulting from AivAi+1v«vAj ± Any parenthesization of AivAi+1v«vAj must split the product between Ak and Ak+1 for some k.. the parenthesization of ³prefix´ subchain AivAi+1v«vAk within this optimal parenthesization of AivAi+1v«vAj must be an optimal parenthesization of AivAi+1v«vAk.MCP DP Steps ‡ Step 1: structure of an optimal parenthesization ± Let Ai.k + # of computing Ak+1.j.k v Ak+1. The cost = # of computing Ai.j + # Ai.. ± If k is the position for an optimal parenthesization. ± AivAi+1v«vAk v Ak+1v«vAj 17 .

MCP DP Steps
- Step 2: a recursive relation.
  - Let m[i,j] be the minimum number of multiplications for Ai·Ai+1·…·Aj; m[1,n] will be the answer.
  - m[i,j] = 0 if i = j
  - m[i,j] = min over i ≤ k < j of { m[i,k] + m[k+1,j] + pi-1·pk·pj } if i < j

MCM DP Steps
- Step 3: computing the optimal cost.
  - A direct recursive algorithm takes exponential time Ω(2^n) (ref. to p. 346 for the proof), no better than brute force.
  - Total number of subproblems: C(n,2) + n = Θ(n^2).
  - The recursive algorithm encounters the same subproblem many times.
  - If we table the answers for subproblems, each subproblem is solved only once.
  - The second hallmark of DP: overlapping subproblems, with every subproblem solved just once.

MCM DP Steps
- Step 3: the algorithm.
  - The input to the algorithm is p = <p0, p1, …, pn>.
  - Array m[1..n, 1..n]: m[i,j] records the optimal cost for Ai·Ai+1·…·Aj.
  - Array s[1..n, 1..n]: s[i,j] records the index k that achieved the optimal cost when computing m[i,j].

MCM DP Steps
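The bottom-up computation (MATRIX-CHAIN-ORDER in the textbook) fills m and s by increasing chain length. A Python sketch using 1-based indices into (n+1)×(n+1) tables:

```python
def matrix_chain_order(p):
    """Bottom-up MCM DP in the spirit of MATRIX-CHAIN-ORDER.

    p: dimension list <p0, ..., pn>; Ai is p[i-1] x p[i].
    Returns (m, s): m[i][j] is the optimal cost of Ai..j,
    s[i][j] the split index k achieving it.
    """
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):              # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):               # try every split position
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s
```

For the earlier example <10, 100, 5, 50>, m[1][3] comes out to 7500 with split s[1][3] = 2, matching ((A·B)·C).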

MCM DP -- order of matrix computations
The entries m(i,j) (here for n = 6) are computed diagonal by diagonal, in increasing chain length j-i:
  length 1: m(1,1) m(2,2) m(3,3) m(4,4) m(5,5) m(6,6)
  length 2: m(1,2) m(2,3) m(3,4) m(4,5) m(5,6)
  length 3: m(1,3) m(2,4) m(3,5) m(4,6)
  length 4: m(1,4) m(2,5) m(3,6)
  length 5: m(1,5) m(2,6)
  length 6: m(1,6)

MCM DP Example

MCM DP Steps
- Step 4: constructing a parenthesization order for the optimal solution.
  - Since s[1..n, 1..n] has been computed, the parenthesization order can be obtained from it recursively, beginning from s[1,n].
  - s[i,j] is the split position for AiAi+1…Aj, i.e., the optimal split is (Ai…As[i,j])·(As[i,j]+1…Aj).

MCM DP Steps
- Step 4: the algorithm.
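Using the split table s, the optimal parenthesization can be printed recursively (as in the textbook's PRINT-OPTIMAL-PARENS). A minimal sketch; the s table below is hardcoded to the values obtained for the example chain <10, 100, 5, 50>:

```python
def print_optimal_parens(s, i, j):
    """Recursively build the optimal parenthesization string from s."""
    if i == j:
        return f"A{i}"
    k = s[i][j]                 # optimal split: (Ai..Ak)(Ak+1..Aj)
    return ("(" + print_optimal_parens(s, i, k)
            + print_optimal_parens(s, k + 1, j) + ")")

# Split positions for A1(10x100), A2(100x5), A3(5x50):
s = [[0] * 4 for _ in range(4)]
s[1][2], s[2][3], s[1][3] = 1, 2, 2
```

Calling print_optimal_parens(s, 1, 3) yields "((A1A2)A3)".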

Elements of DP
- Optimal (sub)structure:
  - An optimal solution to the problem contains within it optimal solutions to subproblems.
- Overlapping subproblems:
  - The space of subproblems is "small" in that a recursive algorithm for the problem solves the same subproblems over and over. The total number of distinct subproblems is typically polynomial in the input size.
- (Reconstructing an optimal solution.)

Finding Optimal Substructure
- Show that a solution to the problem consists of making a choice, which results in one or more subproblems to be solved.
- Suppose you are given a choice leading to an optimal solution:
  - Determine which subproblems follow and how to characterize the resulting space of subproblems.
- Show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal, by the cut-and-paste technique.

Characterize the Subproblem Space
- Try to keep the space as simple as possible; there is no need for a more general space than necessary.
- In assembly-line scheduling, S1,j and S2,j are good for the subproblem space.
- In matrix-chain multiplication, the subproblem space A1A2…Aj (varying at one end only) will not work. Instead, AiAi+1…Aj (varying at both ends) works.

A Recursive Algorithm for Matrix-Chain Multiplication

RECURSIVE-MATRIX-CHAIN(p, i, j)    (initially called with (p, 1, n))
1. if i = j then return 0
2. m[i,j] ← ∞
3. for k ← i to j-1
4.   do q ← RECURSIVE-MATRIX-CHAIN(p, i, k) + RECURSIVE-MATRIX-CHAIN(p, k+1, j) + pi-1·pk·pj
5.   if q < m[i,j] then m[i,j] ← q
6. return m[i,j]

The running time of the algorithm is Ω(2^n). Ref. to page 346 for the proof.
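A direct Python transcription of this recursion, as a sketch (exponential time, since overlapping subproblems are recomputed; the dimension list p is 0-indexed so that Ai is p[i-1] × p[i]):

```python
def recursive_matrix_chain(p, i, j):
    """Plain top-down recursion without a table -- exponential time,
    because the same subproblems are solved over and over."""
    if i == j:
        return 0
    best = float("inf")
    for k in range(i, j):       # try every split between Ak and Ak+1
        q = (recursive_matrix_chain(p, i, k)
             + recursive_matrix_chain(p, k + 1, j)
             + p[i - 1] * p[k] * p[j])
        best = min(best, q)
    return best
```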

Recursion Tree for RECURSIVE-MATRIX-CHAIN(p, 4)
- The root 1..4 spawns the subproblem pairs (1..1, 2..4), (1..2, 3..4), (1..3, 4..4), and each subtree branches similarly, so subproblems such as 2..3, 3..4, and 2..4 appear over and over.
- This divide-and-conquer style recursive algorithm solves the overlapping subproblems over and over.
- In contrast, DP solves each (overlapping) subproblem only once (at the first encounter), then stores the result in a table; when the same subproblem is encountered later, it just looks up the table to get the result.
- (In the original figure, the computations shown in green are the ones replaced by table lookups in MEMOIZED-MATRIX-CHAIN(p, 1, 4).)
- Divide-and-conquer is better for problems that generate brand-new subproblems at each step of the recursion.

Optimal Substructure Varies in Two Ways
- How many subproblems an optimal solution uses:
  - In assembly-line scheduling: one subproblem.
  - In matrix-chain multiplication: two subproblems.
- How many choices there are:
  - In assembly-line scheduling: two choices.
  - In matrix-chain multiplication: j-i choices.
- DP solves the problem in a bottom-up manner.

Running Time for DP Programs
- Running time = (#overall subproblems) × (#choices per subproblem):
  - In assembly-line scheduling: O(n) × O(1) = O(n).
  - In matrix-chain multiplication: O(n^2) × O(n) = O(n^3).
- The cost = cost of solving subproblems + cost of making the choice:
  - In assembly-line scheduling, the choice cost is ai,j if we stay in the same line, and ti',j-1 + ai,j (i' ≠ i) otherwise.
  - In matrix-chain multiplication, the choice cost is pi-1·pk·pj.

Subtleties when Determining Optimal Structure
- Take care: optimal substructure may not apply even when it seems to at first sight.
- Unweighted shortest path:
  - Find a path from u to v consisting of the fewest edges.
  - Can be proved to have optimal substructure.
- Unweighted longest simple path:
  - Find a simple path from u to v consisting of the most edges.
  - Figure 15.4 (vertices q, s, r, t) shows it does not satisfy optimal substructure: q→r→t is the longest simple path from q to t, but q→r is not the longest simple path from q to r.
- If a problem has optimal substructure, the subproblems must be independent (no sharing of resources among subproblems).

Reconstructing an Optimal Solution
- An auxiliary table:
  - Stores the choice made for the subproblem at each step.
  - The optimal steps are reconstructed from this table.
- The table may sometimes be deleted without affecting performance:
  - In assembly-line scheduling, l1[n] and l2[n] can easily be removed; reconstructing the optimal solution from f1[n] and f2[n] will still be efficient.
  - But in MCM, if s[1..n, 1..n] is removed, reconstructing the optimal solution from m[1..n, 1..n] will be inefficient.

Memoization
- A variation of DP.
- Keeps the same efficiency as DP, but works in a top-down manner.
- Idea:
  - Each entry in the table initially contains a special value indicating that the entry has yet to be filled in.
  - When a subproblem is first encountered, its solution is computed and stored in the corresponding entry of the table.
  - If the subproblem is encountered again later, just look up the table to take the value.

j] 36 .j] then m[i. if q< m[i.Memoized Matrix Chain LOOKUP-CHAIN(p.k+1.j] n0 3. return m[i.i.i.j)+pi-1pkpj 6. if m[i. do qn LOOKUP-CHAIN(p. LOOKUP-CHAIN(p.j] nq 7.j]<g then return m[i.j) 1. else for kni to j-1 4. if i=j then m[i.k)+ 5.j] 2.

DP vs. Memoization
- MCM can be solved by either DP or a memoized algorithm, both in O(n^3):
  - Θ(n^2) subproblems in total, with O(n) work for each.
- If all subproblems must be solved at least once, DP is better by a constant factor, since it involves no recursion as the memoized algorithm does.
- If some subproblems may never need to be solved, the memoized algorithm may be more efficient, since it solves only those subproblems that are definitely required.

Longest Common Subsequence (LCS)
- DNA analysis: comparison of two DNA strings.
- DNA string: a sequence of symbols A, C, G, T.
  - Example: S = ACCGGTCGAGCTTCGAAT
- Subsequence (of X): X with some symbols left out.
  - Z = CGTC is a subsequence of X = ACGCTAC.
- Common subsequence Z (of X and Y): a subsequence of X that is also a subsequence of Y.
  - Z = CGA is a common subsequence of both X = ACGCTAC and Y = CTGACA.
- Longest Common Subsequence (LCS): the longest of the common subsequences.
  - Z' = CGCA is an LCS of the above X and Y.
- LCS problem: given X = <x1, x2, …, xm> and Y = <y1, y2, …, yn>, find their LCS.

LCS Intuitive Solution -- Brute Force
- List all possible subsequences of X, check whether each is also a subsequence of Y, and keep the longer one each time.
- Each subsequence corresponds to a subset of the indices {1, 2, …, m}, so there are 2^m of them. So exponential.

LCS DP -- Step 1: Optimal Substructure
- Characterize the optimal substructure of LCS.
- Theorem 15.1: Let X = <x1, x2, …, xm> (= Xm) and Y = <y1, y2, …, yn> (= Yn), and let Z = <z1, z2, …, zk> (= Zk) be any LCS of X and Y.
  1. If xm = yn, then zk = xm = yn, and Zk-1 is an LCS of Xm-1 and Yn-1.
  2. If xm ≠ yn, then zk ≠ xm implies Z is an LCS of Xm-1 and Yn.
  3. If xm ≠ yn, then zk ≠ yn implies Z is an LCS of Xm and Yn-1.

LCS DP -- Step 2: Recursive Solution
- What the theorem says:
  - If xm = yn, find an LCS of Xm-1 and Yn-1, then append xm.
  - If xm ≠ yn, find an LCS of Xm-1 and Yn and an LCS of Xm and Yn-1, and take whichever is longer.
- Overlapping substructure:
  - Both the LCS of Xm-1 and Yn and the LCS of Xm and Yn-1 need to solve the LCS of Xm-1 and Yn-1.
- Let c[i,j] be the length of an LCS of Xi and Yj:
  - c[i,j] = 0 if i=0 or j=0
  - c[i,j] = c[i-1,j-1] + 1 if i,j > 0 and xi = yj
  - c[i,j] = max{c[i-1,j], c[i,j-1]} if i,j > 0 and xi ≠ yj

.n].j] is defined as above. where c[i.n]..0.j] points to the table entry corresponding to the optimal subproblem solution chosen when computing c[i.n] backward to find the LCS.m.n] is the answer (length of LCS). 42 . where b[i.j].1. ‡ b[1. ± c[m. ± From b[m.m...LCS DP-.step 3:Computing the Length of LCS ‡ c[0.

LCS computation example

LCS DP Algorithm
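The length computation and the traceback can be sketched together in Python. This version reconstructs the LCS directly from the c table (so no b table is needed):

```python
def lcs(X, Y):
    """Bottom-up LCS: fill c[0..m][0..n], then walk back from c[m][n]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # Traceback: at a match, take the symbol; otherwise follow the
    # larger neighbor, mirroring how c[i][j] was computed.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

On the slide's example, lcs("ACGCTAC", "CTGACA") returns "CGCA".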

LCS DP -- Step 4: Constructing an LCS

LCS Space-Saving Version
- Remove array b:

  Print_LCS_without_b(c, X, i, j) {
    if (i=0 or j=0) return;
    if (xi = yj and c[i,j] == c[i-1,j-1]+1)
      { Print_LCS_without_b(c, X, i-1, j-1); print xi }
    else if (c[i,j] == c[i-1,j])
      { Print_LCS_without_b(c, X, i-1, j); }
    else
      { Print_LCS_without_b(c, X, i, j-1); }
  }

- Can we do better?
  - Yes: 2·min{m,n}+1 space, or even min{m,n}+1 space, suffices for just the LCS value.
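A sketch of the two-row idea: only the previous and current rows of c are kept, giving the LCS length in O(min(m,n)) space (the strings are swapped so the shorter one indexes the columns):

```python
def lcs_length_two_rows(X, Y):
    """LCS length only, keeping just two rows of the c table."""
    if len(X) < len(Y):
        X, Y = Y, X                 # make Y the shorter string (columns)
    prev = [0] * (len(Y) + 1)
    for x in X:
        cur = [0] * (len(Y) + 1)
        for j, y in enumerate(Y, 1):
            if x == y:
                cur[j] = prev[j - 1] + 1
            else:
                cur[j] = max(prev[j], cur[j - 1])
        prev = cur                  # current row becomes previous
    return prev[-1]
```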

Summary
- The two important properties of DP.
- The four steps of DP.
- Differences among divide-and-conquer algorithms, DP algorithms, and memoized algorithms.
- Writing DP programs and analyzing their running time and space requirements.
- Modifying the discussed DP algorithms.
