
6. DYNAMIC PROGRAMMING I

‣ weighted interval scheduling


‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

Lecture slides by Kevin Wayne



Copyright © 2005 Pearson-Addison Wesley

http://www.cs.princeton.edu/~wayne/kleinberg-tardos

Last updated on 4/4/18 5:30 AM


Algorithmic paradigms

Greed. Build up a solution incrementally, myopically optimizing



some local criterion.


Divide-and-conquer. Break up a problem into independent subproblems;



solve each subproblem; combine solutions to subproblems to form solution
to original problem.

Dynamic programming. Break up a problem into a series of overlapping
subproblems; combine solutions to smaller subproblems to form solution
to large subproblem. ("Dynamic programming" is a fancy name for caching
intermediate results in a table for later reuse.)

Requirements:
(i) There are only a polynomial number of subproblems.
(ii) The solution to the original problem can be easily computed from the
solutions to the subproblems. (The original problem can actually be one of
the subproblems.)
(iii) There is a natural ordering on subproblems from smallest to largest,
together with an easy-to-compute recurrence that allows one to compute a
solution to a subproblem from the solutions to some number of smaller
subproblems.

Intuition: 3 × 89 = ?  Possible subproblems:
2 × 2 = 4: not very helpful.
3 × 88 = 264: useful.
Dynamic programming history

Bellman. Pioneered the systematic study of dynamic programming in 1950s.



Etymology.
Dynamic programming = planning over time.
Secretary of Defense had pathological fear of mathematical research.
Bellman sought a “dynamic” adjective to avoid conflict.

THE THEORY OF DYNAMIC PROGRAMMING


RICHARD BELLMAN

1. Introduction. Before turning to a discussion of some representa-


tive problems which will permit us to exhibit various mathematical
features of the theory, let us present a brief survey of the funda-
mental concepts, hopes, and aspirations of dynamic programming.
To begin with, the theory was created to treat the mathematical
problems arising from the study of various multi-stage decision
processes, which may roughly be described in the following way: We
have a physical system whose state at any time t is determined by a
set of quantities which we call state parameters, or state variables.
At certain times, which may be prescribed in advance, or which may
be determined by the process itself, we are called upon to make de-
cisions which will affect the state of the system. These decisions are
equivalent to transformations of the state variables, the choice of a
decision being identical with the choice of a transformation. The out-
come of the preceding decisions is to be used to guide the choice of
future ones, with the purpose of the whole process that of maximizing
some function of the parameters describing the final state.
Examples of processes fitting this loose description are furnished
by virtually every phase of modern life, from the planning of indus-
trial production lines to the scheduling of patients at a medical
clinic ; from the determination of long-term investment programs for
universities to the determination of a replacement policy for ma-
chinery in factories; from the programming of training policies for
skilled and unskilled labor to the choice of optimal purchasing and in-
ventory policies for department stores and military establishments.
It is abundantly clear from the very brief description of possible
applications that the problems arising from the study of these
processes are problems of the future as well as of the immediate
present.
Dynamic programming applications

Application areas.
Computer science: AI, compilers, systems, graphics, theory, ….
Operations research.
Information theory.
Control theory.
Bioinformatics.

Some famous dynamic programming algorithms.
Avidan–Shamir for seam carving.
Unix diff for comparing two files.
Viterbi for hidden Markov models.
De Boor for evaluating spline curves.
Bellman–Ford–Moore for shortest path.
Knuth–Plass for word wrapping text in TeX.

Cocke–Kasami–Younger for parsing context-free grammars.


Needleman–Wunsch/Smith–Waterman for sequence alignment.

Dynamic programming books
4. GREEDY ALGORITHMS I

‣ coin changing
‣ interval scheduling
‣ interval partitioning
‣ scheduling to minimize lateness
‣ optimal caching

SECTION 4.1
Interval scheduling

Job j starts at sj and finishes at fj.


Two jobs compatible if they don’t overlap.
Goal: find maximum subset of mutually compatible jobs.

[figure: jobs on a timeline from 0 to 11; jobs d and g are incompatible]
Greedy algorithms I: quiz 2

Consider jobs in some order, taking each job provided it′s compatible
with the ones already taken. Which rule is optimal?

A. [Earliest start time] Consider jobs in ascending order of sj.


B. [Earliest finish time] Consider jobs in ascending order of fj.


C. [Shortest interval] Consider jobs in ascending order of fj – sj.


D. None of the above.

[figures: counterexample for earliest start time; counterexample for shortest interval]
Interval scheduling: earliest-finish-time-first algorithm



 EARLIEST-FINISH-TIME-FIRST (n, s1, s2, …, sn , f1, f2, …, fn)

__________________________________________________

SORT jobs by finish times and renumber so that f1 ≤ f2 ≤ … ≤ fn.



S ← ∅. set of jobs selected

FOR j = 1 TO n

IF job j is compatible with S


 S ← S ∪ { j }.


 RETURN S.
__________________________________________________




Proposition. Can implement earliest-finish-time first in O(n log n) time.
Keep track of job j* that was added last to S.
Job j is compatible with S iff sj ≥ fj* .
Sorting by finish times takes O(n log n) time.
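
A minimal Python sketch of the algorithm above, for reference; the function name and the (start, finish) input format are illustrative choices, not part of the slides.

def earliest_finish_time_first(jobs):
    """Greedy interval scheduling.

    jobs: list of (start, finish) pairs. Returns indices of a maximum-size
    set of mutually compatible jobs. Runs in O(n log n) time (sorting dominates).
    """
    # Sort job indices by finish time, so f1 <= f2 <= ... <= fn.
    order = sorted(range(len(jobs)), key=lambda j: jobs[j][1])

    selected = []
    last_finish = float("-inf")   # finish time of the job j* added last to S
    for j in order:
        s, f = jobs[j]
        if s >= last_finish:      # job j is compatible with S iff sj >= fj*
            selected.append(j)
            last_finish = f
    return selected

# Example: an optimal schedule here has 3 jobs.
print(earliest_finish_time_first([(0, 6), (1, 4), (3, 5), (3, 8), (5, 7), (8, 11)]))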
Interval scheduling: analysis of earliest-finish-time-first algorithm

Theorem. The earliest-finish-time-first algorithm is optimal.



Pf. [by contradiction]
Assume greedy is not optimal, and let’s see what happens.
Let i1, i2, ... ik denote set of jobs selected by greedy.
Let j1, j2, ... jm denote set of jobs in an optimal solution with

i1 = j1, i2 = j2, ..., ir = jr for the largest possible value of r.

job ir+1 exists and finishes no later than jr+1

Greedy: i1 i2 ir ir+1 ... ik

Optimal: j1 j2 jr jr+1 ... jm

job jr+1 exists because m > k
why not replace job jr+1 with job ir+1?
Interval scheduling: analysis of earliest-finish-time-first algorithm

Theorem. The earliest-finish-time-first algorithm is optimal.

Pf. [by contradiction]


Assume greedy is not optimal, and let’s see what happens.
Let i1, i2, ... ik denote set of jobs selected by greedy.
Let j1, j2, ... jm denote set of jobs in an optimal solution with

i1 = j1, i2 = j2, ..., ir = jr for the largest possible value of r.

job ir+1 exists and finishes before jr+1

Greedy: i1 i2 ir ir+1 ... ik

Optimal: j1 j2 jr ir+1 ... jm

solution still feasible and optimal


(but contradicts maximality of r)

Greedy algorithms I: quiz 3

Suppose that each job also has a positive weight and the goal is to

find a maximum weight subset of mutually compatible intervals.

Is the earliest-finish-time-first algorithm still optimal? 


A. Yes, because greedy algorithms are always optimal.

B. Yes, because the same proof of correctness is valid.

C. No, because the same proof of correctness is no longer valid.

D. No, because you could assign a huge weight to a job that overlaps
the job with the earliest finish time.

[figure: counterexample for earliest finish time: the job with the earliest finish time has weight 1 and overlaps a job of weight 100]
6. DYNAMIC PROGRAMMING I

‣ weighted interval scheduling


‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

SECTIONS 6.1–6.2
Weighted interval scheduling

Job j starts at sj, finishes at fj, and has weight wj > 0.


Two jobs are compatible if they don’t overlap.
Goal: find max-weight subset of mutually compatible jobs.

[figure: jobs on a timeline from 0 to 11; each job j is labeled with its start sj, weight wj, and finish fj]
Earliest-finish-time first algorithm

Earliest finish-time first.


Consider jobs in ascending order of finish time.
Add job to subset if it is compatible with previously chosen jobs.

Recall. Greedy algorithm is correct if all weights are 1.


Observation. Greedy algorithm fails spectacularly for weighted version.

[figure: job b has weight 999; jobs a and h have weight 1; earliest-finish-time-first selects the weight-1 jobs and misses the weight-999 job]
Weighted interval scheduling

Convention. Jobs are in ascending order of finish time: f1 ≤ f2 ≤ . . . ≤ fn .



Def. p( j) = largest index i < j such that job i is compatible with j.
Ex. p(8) = 1, p(7) = 3, p(2) = 0.
(p(j) is the rightmost interval that ends before job j begins)

[figure: jobs 1–8 on a timeline from 0 to 11]
Dynamic programming: binary choice

Def. OPT( j) = max weight of any subset of mutually compatible jobs for
subproblem consisting only of jobs 1, 2, ..., j.

Goal. OPT(n) = max weight of any subset of mutually compatible jobs.

Case 1. OPT( j) does not select job j.
Must be an optimal solution to problem consisting of remaining

jobs 1, 2, ..., j – 1.

 optimal substructure property
(proof via exchange argument)
Case 2. OPT( j) selects job j.
Collect profit wj.
Can’t use incompatible jobs { p( j) + 1, p( j) + 2, ..., j – 1 }.
Must include optimal solution to problem consisting of remaining
compatible jobs 1, 2, ..., p( j).

Bellman equation.

OPT(j) = 0                                      if j = 0
OPT(j) = max { OPT(j – 1), wj + OPT(p(j)) }     if j > 0
Weighted interval scheduling: brute force

BRUTE-FORCE (n, s1, …, sn, f1, …, fn, w1, …, wn)


__________________________________________________

Sort jobs by finish time and renumber so that f1 ≤ f2 ≤ … ≤ fn.


Compute p[1], p[2], …, p[n] via binary search.
RETURN COMPUTE-OPT(n).

COMPUTE-OPT( j )
__________________________________________________

IF (j = 0)
RETURN 0.

ELSE
RETURN max {COMPUTE-OPT( j – 1), wj + COMPUTE-OPT( p[ j ]) }.

Weighted interval scheduling: brute force

Observation. Recursive algorithm is spectacularly slow because of


overlapping subproblems ⇒ exponential-time algorithm.

Ex. Number of recursive calls for family of “layered” instances grows like
Fibonacci sequence.

[figure: recursion tree for a layered instance with p(1) = 0, p(j) = j – 2]
Weighted interval scheduling: memoization

Top-down dynamic programming (memoization).


Cache result of subproblem j in M [ j].
Use M [ j] to avoid solving subproblem j more than once.

TOP-DOWN(n, s1, …, sn, f1, …, fn, w1, …, wn)


__________________________________________________

Sort jobs by finish time and renumber so that f1 ≤ f2 ≤ … ≤ fn.


Compute p[1], p[2], …, p[n] via binary search.
M [ 0 ] ← 0. global array

RETURN M-COMPUTE-OPT(n).

M-COMPUTE-OPT( j )
__________________________________________________

IF (M [ j] is uninitialized)
M [ j] ← max { M-COMPUTE-OPT ( j – 1), wj + M-COMPUTE-OPT( p[ j]) }.
RETURN M [ j].
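
A runnable sketch of TOP-DOWN / M-COMPUTE-OPT, assuming jobs are given as (start, finish, weight) tuples with start < finish (an input format chosen here for illustration).

import bisect

def weighted_interval_top_down(jobs):
    """Top-down DP (memoization) for weighted interval scheduling.

    jobs: list of (start, finish, weight) tuples with start < finish.
    Returns OPT(n), the maximum total weight of mutually compatible jobs.
    """
    jobs = sorted(jobs, key=lambda job: job[1])        # f1 <= f2 <= ... <= fn
    starts = [s for s, f, w in jobs]
    finishes = [f for s, f, w in jobs]
    weights = [w for s, f, w in jobs]
    n = len(jobs)

    # p[j] = largest index i < j with f_i <= s_j (0 if none), via binary search:
    # bisect_right counts how many jobs finish by s_j, which is exactly p[j]
    # in this 1-based numbering.
    p = [0] + [bisect.bisect_right(finishes, starts[j - 1]) for j in range(1, n + 1)]

    M = [None] * (n + 1)                               # memo table ("global array")
    M[0] = 0

    def m_compute_opt(j):
        if M[j] is None:                               # solve each subproblem at most once
            M[j] = max(m_compute_opt(j - 1),
                       weights[j - 1] + m_compute_opt(p[j]))
        return M[j]

    return m_compute_opt(n)

# Small illustrative instance: a single weight-999 job beats two weight-1 jobs.
print(weighted_interval_top_down([(0, 6, 1), (1, 10, 999), (7, 11, 1)]))   # 999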
Weighted interval scheduling: running time

Claim. Memoized version of algorithm takes O(n log n) time.


Pf.
Sort by finish time: O(n log n) via mergesort.
Compute p[ j] for each j : O(n log n) via binary search.


M-COMPUTE-OPT( j): each invocation takes O(1) time and either


- (1) returns an initialized value M [ j]
- (2) initializes M [ j] and makes two recursive calls


Progress measure Φ = # initialized entries among M [1.. n].


- initially Φ = 0; throughout Φ ≤ n.
- (2) increases Φ by 1 ⇒ ≤ 2n recursive calls.


Overall running time of M-COMPUTE-OPT(n) is O(n). ▪

Weighted interval scheduling: finding a solution

Q. DP algorithm computes optimal value. How to find optimal solution?


A. Make a second pass by calling FIND-SOLUTION(n).

FIND-SOLUTION(j)
__________________________________________________

IF (j = 0)
RETURN ∅.
ELSE IF (wj + M[p[j]] > M[ j – 1])
RETURN { j } ∪ FIND-SOLUTION(p[j ]).
ELSE
RETURN FIND-SOLUTION( j – 1).

M[j] = max { M[j – 1], wj + M[p[ j]] }.

Analysis. # of recursive calls ≤ n ⇒ O(n).


Weighted interval scheduling: bottom-up dynamic programming

Bottom-up dynamic programming. Unwind recursion.

BOTTOM-UP(n, s1, …, sn, f1, …, fn, w1, …, wn)


__________________________________________________

Sort jobs by finish time and renumber so that f1 ≤ f2 ≤ … ≤ fn.


Compute p[1], p[2], …, p[n].
M [ 0] ← 0. previously computed values

FOR j = 1 TO n
M [j ] ← max { M[ j – 1], wj + M [p[j]] }.
__________________________________________________

Running time. The bottom-up version takes O(n log n) time.
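
A sketch of BOTTOM-UP with the FIND-SOLUTION traceback from the previous slide folded in; the same (start, finish, weight) input format is assumed.

import bisect

def weighted_interval_bottom_up(jobs):
    """Bottom-up DP for weighted interval scheduling, with solution recovery.

    jobs: list of (start, finish, weight) tuples with start < finish.
    Returns (best_weight, chosen_jobs).
    """
    jobs = sorted(jobs, key=lambda job: job[1])        # f1 <= f2 <= ... <= fn
    n = len(jobs)
    starts = [s for s, f, w in jobs]
    finishes = [f for s, f, w in jobs]
    weights = [w for s, f, w in jobs]

    # p[j] = largest index i < j whose finish time is <= s_j (0 if none).
    p = [0] + [bisect.bisect_right(finishes, starts[j - 1]) for j in range(1, n + 1)]

    M = [0] * (n + 1)                                  # M[j] = OPT(j)
    for j in range(1, n + 1):
        M[j] = max(M[j - 1], weights[j - 1] + M[p[j]])

    # FIND-SOLUTION: job j belongs to an optimal solution iff
    # wj + M[p[j]] > M[j - 1]; otherwise move to subproblem j - 1.
    chosen, j = [], n
    while j > 0:
        if weights[j - 1] + M[p[j]] > M[j - 1]:
            chosen.append(jobs[j - 1])
            j = p[j]
        else:
            j -= 1
    return M[n], list(reversed(chosen))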

MAXIMUM SUBARRAY PROBLEM

Goal. Given an array x of n integers (positive or negative), find a contiguous
subarray whose sum is maximum.

12 5 −1 31 −61 59 26 −53 58 97 −93 −23 84 −15 6

maximum subarray sum = 187 (59 + 26 − 53 + 58 + 97)

Applications. Computer vision, data mining,



genomic sequence analysis, technical job interviews, ….

Maximum Subarray Problem (Cormen 4.2)
• Divide and Conquer: O(n log n)  • Dynamic Programming: O(n)
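
A minimal sketch of the O(n) dynamic program (Kadane's algorithm); the code itself is not from the slides.

def max_subarray_sum(x):
    """O(n) DP for the maximum subarray problem.

    best_ending_here = max sum of a subarray ending at the current position,
    i.e. max(x[i], x[i] + best_ending_here from the previous position);
    the answer is the maximum of these values over all positions.
    """
    best_ending_here = best = x[0]          # assumes a nonempty array
    for v in x[1:]:
        best_ending_here = max(v, best_ending_here + v)
        best = max(best, best_ending_here)
    return best

# The array from the slide above; the maximum contiguous sum is
# 59 + 26 - 53 + 58 + 97 = 187.
print(max_subarray_sum([12, 5, -1, 31, -61, 59, 26, -53, 58, 97, -93, -23, 84, -15, 6]))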
MAXIMUM RECTANGLE PROBLEM

Goal. Given an n-by-n matrix A, find a rectangle whose sum is maximum.

[figure: an n-by-n matrix A with a maximum-sum rectangle highlighted]

Applications. Databases, image processing, maximum likelihood estimation,
technical job interviews, ….
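
The slide does not spell out an algorithm; one standard approach (an assumption here) reduces the 2-D problem to the 1-D maximum subarray problem: fix a pair of top and bottom rows, compress each column to its sum over those rows, and run the O(n) subarray DP on the compressed row, for O(n^3) time overall on an n-by-n matrix.

def max_rectangle_sum(A):
    """Maximum-sum rectangle in a matrix, O(rows^2 * cols) time.

    For each pair (top, bottom) of rows, column j is compressed to
    sum(A[top..bottom][j]); the best rectangle using exactly those rows
    is then a maximum subarray of the compressed row (Kadane's algorithm).
    """
    n, m = len(A), len(A[0])
    best = A[0][0]
    for top in range(n):
        col_sums = [0] * m
        for bottom in range(top, n):
            for j in range(m):
                col_sums[j] += A[bottom][j]
            # 1-D maximum subarray over the compressed columns.
            ending_here = run_best = col_sums[0]
            for v in col_sums[1:]:
                ending_here = max(v, ending_here + v)
                run_best = max(run_best, ending_here)
            best = max(best, run_best)
    return best

# Illustrative example (not the matrix from the slide): the best rectangle
# spans the last two columns over all three rows, with sum
# (-2 + 3) + (4 + 5) + (-1 + 2) = 11.
print(max_rectangle_sum([[1, -2, 3],
                         [-4, 4, 5],
                         [2, -1, 2]]))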
6. DYNAMIC PROGRAMMING I

‣ weighted interval scheduling


‣ segmented least squares
‣ knapsack problem
‣ RNA secondary structure

SECTION 6.4
Knapsack problem

Goal. Pack knapsack so as to maximize total value.


There are n items: item i provides value vi > 0 and weighs wi > 0.
Knapsack has weight capacity of W.

Assumption. All input values are integral.

Ex. { 1, 2, 5 } has value $35 (and weight 10).


Ex. { 3, 4 } has value $40 (and weight 11).

i    vi    wi
1    $1    1 kg
2    $6    2 kg
3    $18   5 kg
4    $22   6 kg
5    $28   7 kg

knapsack instance (weight limit W = 11)
Dynamic programming: adding a new variable

Def. OPT(i, w) = max-profit subset of items 1, …, i with weight limit w.
Goal. OPT(n, W).

Case 1. OPT(i, w) does not select item i (possibly because wi > w).
OPT(i, w) selects best of { 1, 2, …, i – 1 } using weight limit w.

Case 2. OPT(i, w) selects item i.
Collect value vi.
New weight limit = w – wi.
OPT(i, w) selects best of { 1, 2, …, i – 1 } using this new weight limit.

optimal substructure property (proof via exchange argument)

Bellman equation.

OPT(i, w) = 0                                                 if i = 0
OPT(i, w) = OPT(i – 1, w)                                     if wi > w
OPT(i, w) = max { OPT(i – 1, w), vi + OPT(i – 1, w – wi) }    otherwise
Knapsack problem: bottom-up dynamic programming

KNAPSACK(n, W, w1, …, wn, v1, …, vn )


__________________________________________________

FOR w = 0 TO W
M [ 0, w] ← 0.


previously computed values


FOR i = 1 TO n
FOR w = 0 TO W
IF (wi > w) M[i, w] ← M[i – 1, w].
ELSE M[i, w ] ← max { M[i – 1, w], vi + M [i – 1, w – wi] }.


RETURN M [ n, W].
__________________________________________________

OPT(i, w) = 0                                                 if i = 0
OPT(i, w) = OPT(i – 1, w)                                     if wi > w
OPT(i, w) = max { OPT(i – 1, w), vi + OPT(i – 1, w – wi) }    otherwise
Knapsack problem: bottom-up dynamic programming demo

i    vi    wi
1    $1    1 kg
2    $6    2 kg
3    $18   5 kg
4    $22   6 kg
5    $28   7 kg
                         weight limit w
subset of items      0   1   2   3   4   5   6   7   8   9  10  11
{ }                  0   0   0   0   0   0   0   0   0   0   0   0
{ 1 }                0   1   1   1   1   1   1   1   1   1   1   1
{ 1, 2 }             0   1   6   7   7   7   7   7   7   7   7   7
{ 1, 2, 3 }          0   1   6   7   7  18  19  24  25  25  25  25
{ 1, 2, 3, 4 }       0   1   6   7   7  18  22  24  28  29  29  40
{ 1, 2, 3, 4, 5 }    0   1   6   7   7  18  22  28  29  34  35  40

OPT(i, w) = max-profit subset of items 1, …, i with weight limit w.
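
A runnable sketch of the KNAPSACK pseudocode, checked against the demo instance above; the traceback uses the rule from the next slide (take item i iff M[i, w] > M[i – 1, w]).

def knapsack(values, weights, W):
    """Bottom-up 0/1 knapsack with the Theta(nW)-size table M[i][w] = OPT(i, w).

    Returns (best_value, chosen_items) with items numbered 1..n.
    """
    n = len(values)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:                      # item i does not fit
                M[i][w] = M[i - 1][w]
            else:
                M[i][w] = max(M[i - 1][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])

    # Trace back: item i is taken iff M[i][w] > M[i - 1][w].
    chosen, w = [], W
    for i in range(n, 0, -1):
        if M[i][w] > M[i - 1][w]:
            chosen.append(i)
            w -= weights[i - 1]
    return M[n][W], sorted(chosen)

# Demo instance (values in dollars, weights in kg, W = 11):
# returns (40, [3, 4]), matching the bottom-right table entry.
print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))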


Knapsack problem: running time

Theorem. The DP algorithm solves the knapsack problem with n items



and maximum weight W in Θ(n W) time and Θ(n W) space.
Pf. (weights are integers between 1 and W)
Takes O(1) time per table entry.
There are Θ(n W) table entries.
After computing optimal values, can trace back to find solution:

OPT(i, w) takes item i iff M [i, w] > M [i – 1, w]. ▪

Dynamic programming: quiz 4

Does there exist a poly-time algorithm for the knapsack problem?


A. Yes, because the DP algorithm takes Θ(n W) time. “pseudo-polynomial”

B. No, because Θ(n W) is not a polynomial function of the input size.

C. No, because the problem is NP-hard.

D. Unknown.

equivalent to P ≠ NP conjecture
because knapsack problem is NP-hard

COIN CHANGING

Problem. Given n coin denominations { c1, c2, …, cn } and a target value V,


find the fewest coins needed to make change for V (or report impossible).

Recall. Greedy cashier’s algorithm is optimal for U.S. coin denominations,



but not for arbitrary coin denominations.

Ex. { 1, 10, 21, 34, 70, 100, 350, 1295, 1500 }.


Optimal. 140¢ = 70 + 70.

4. GREEDY ALGORITHMS I

‣ coin changing
‣ interval scheduling
‣ interval partitioning
‣ scheduling to minimize lateness
‣ optimal caching
Coin changing

Goal. Given U. S. currency denominations { 1, 5, 10, 25, 100 },



devise a method to pay amount to customer using fewest coins.



Ex. 34¢.


Cashier′s algorithm. At each iteration, add coin of the largest value that
does not take us past the amount to be paid.



Ex. $2.89.

Cashier′s algorithm

At each iteration, add coin of the largest value that does not take us past
the amount to be paid.

CASHIERS-ALGORITHM (x, c1, c2, …, cn)


__________________________________________________

SORT n coin denominations so that 0 < c1 < c2 < … < cn.


S ← ∅. multiset of coins selected

WHILE (x > 0)
k ← largest coin denomination ck such that ck ≤ x.
IF no such k, RETURN “no solution.”
ELSE
x ← x – ck.
S ← S ∪ { k }.
RETURN S.
__________________________________________________
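
A short Python sketch of CASHIERS-ALGORITHM; returning the list of coin values (or None when no coin fits) is an illustrative choice.

def cashiers_algorithm(x, denominations):
    """Greedy cashier's algorithm: repeatedly take the largest coin <= x.

    Returns the multiset of coins used (as a list), or None if at some point
    no coin fits, which can happen when the smallest denomination exceeds 1.
    """
    coins = sorted(denominations)               # 0 < c1 < c2 < ... < cn
    selected = []
    while x > 0:
        fitting = [c for c in coins if c <= x]  # candidates for the next coin
        if not fitting:
            return None                         # "no solution"
        x -= fitting[-1]                        # largest ck with ck <= x
        selected.append(fitting[-1])
    return selected

# $2.89 with U.S. denominations: [100, 100, 25, 25, 25, 10, 1, 1, 1, 1].
print(cashiers_algorithm(289, [1, 5, 10, 25, 100]))
# 15 cents with denominations {7, 8, 9}: greedy takes 9, then gets stuck (None),
# even though 15 = 7 + 8 is feasible.
print(cashiers_algorithm(15, [7, 8, 9]))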

Greedy algorithms I: quiz 1

Is the cashier’s algorithm optimal?

A. Yes, greedy algorithms are always optimal.

B. Yes, for any set of coin denominations c1 < c2 < … < cn provided c1 = 1.

C. Yes, because of special properties of U.S. coin denominations.

D. No.

Cashier′s algorithm (for arbitrary coin denominations)

Q. Is cashier’s algorithm optimal for any set of denominations?

A. No. Consider U.S. postage: 1, 10, 21, 34, 70, 100, 350, 1225, 1500.
Cashier’s algorithm: 140¢ = 100 + 34 + 1 + 1 + 1 + 1 + 1 + 1.
Optimal: 140¢ = 70 + 70.

A. No. It may not even lead to a feasible solution if c1 > 1: 7, 8, 9.


Cashier’s algorithm: 15¢ = 9 + ?.
Optimal: 15¢ = 7 + 8.

Properties of any optimal solution (for U.S. coin denominations)

Property. Number of pennies ≤ 4.


Pf. Replace 5 pennies with 1 nickel.

Property. Number of nickels ≤ 1.


Property. Number of quarters ≤ 3.


Property. Number of nickels + number of dimes ≤ 2.


Pf.
Recall: ≤ 1 nickel.
Replace 3 dimes and 0 nickels with 1 quarter and 1 nickel;
Replace 2 dimes and 1 nickel with 1 quarter.

[figure: coin denominations: dollars (100¢), quarters (25¢), dimes (10¢), nickels (5¢), pennies (1¢)]
Optimality of cashier′s algorithm (for U.S. coin denominations)

Theorem. Cashier’s algorithm is optimal for U.S. coins { 1, 5, 10, 25, 100 }.
Pf. [ by induction on amount to be paid x ]
Consider optimal way to change ck ≤ x < ck+1 : greedy takes coin k.
We claim that any optimal solution must take coin k.
- if not, it needs enough coins of type c1, …, ck–1 to add up to x
- table below indicates no optimal solution can do this
Problem reduces to coin-changing x – ck cents, which, by induction,

is optimally solved by cashier’s algorithm. ▪

k    ck     all optimal solutions      max value of coins c1, c2, …, ck–1
            must satisfy               in any optimal solution
1    1      P ≤ 4                      –
2    5      N ≤ 1                      4
3    10     N + D ≤ 2                  4 + 5 = 9
4    25     Q ≤ 3                      20 + 4 = 24
5    100    no limit                   75 + 24 = 99
Coin Changing

Def. OPT(v) = fewest coins needed to make change for v.

If the last coin used is ci, then OPT(v) = 1 + OPT(v – ci).

OPT(v) = 0                                                   if v = 0
OPT(v) = min { 1 + OPT(v – ci) : 1 ≤ i ≤ n, ci ≤ v }         if v > 0
(with OPT(v) = ∞ if v < 0)
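
A bottom-up sketch of this recurrence (the array-based formulation is an implementation choice); on the postage-style example from the quiz slide it finds the 2-coin optimum 70 + 70 for 140¢.

def min_coins(V, denominations):
    """Fewest coins needed to make change for V, by dynamic programming.

    f[v] = 0 for v = 0, and min over coins c <= v of 1 + f[v - c] otherwise;
    float('inf') marks amounts that cannot be made at all.
    """
    INF = float("inf")
    f = [0] + [INF] * V
    for v in range(1, V + 1):
        for c in denominations:
            if c <= v and f[v - c] + 1 < f[v]:
                f[v] = f[v - c] + 1
    return f[V]

# 140 cents with the postage denominations: the DP finds 2 coins (70 + 70),
# whereas the cashier's algorithm above uses 8.
print(min_coins(140, [1, 10, 21, 34, 70, 100, 350, 1225, 1500]))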


Dynamic programming summary

Outline.
Define a collection of subproblems (typically, only a polynomial number).
Solution to original problem can be computed from subproblems.
Natural ordering of subproblems from “smallest” to “largest” that
enables determining a solution to a subproblem from solutions to
smaller subproblems.

Techniques.
Binary choice: weighted interval scheduling.
Multiway choice: segmented least squares.
Adding a new variable: knapsack problem.
Intervals: RNA secondary structure.

Top-down vs. bottom-up dynamic programming. Opinions differ.
