Dynamic Programming: Fibonacci, Weighted Interval Scheduling
LECTURE 13
Dynamic Programming
• Fibonacci
• Weighted Interval Scheduling
Adam Smith
9/10/10
A. Smith; based on slides by E. Demaine, C. Leiserson, S. Raskhodnikova, K. Wayne
Design Techniques So Far
• Greedy. Build up a solution incrementally,
myopically optimizing some local criterion.
• Recursion / divide & conquer. Break up a
problem into subproblems, solve subproblems,
and combine solutions.
• Dynamic programming. Break problem into
overlapping subproblems, and build up solutions
to larger and larger sub-problems.
Dynamic Programming History
Etymology.
• Dynamic programming = planning over time.
• Secretary of Defense was hostile to mathematical research.
• Bellman sought an impressive name to avoid confrontation.
Areas.
• Bioinformatics.
• Control theory.
• Information theory.
• Operations research.
• Computer science: theory, graphics, AI, compilers, systems, ….
Sequence defined by
• a_1 = 1
• a_2 = 1
• a_n = a_(n-1) + a_(n-2)
1, 1, 2, 3, 5, 8, 13, 21, 34, …
Running Time?
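Before speeding things up, it helps to see why the direct recursive translation is slow. A minimal sketch in Python (the function name is mine):

```python
def fib_naive(n):
    # Direct translation of the recurrence a_n = a_(n-1) + a_(n-2).
    # Each call spawns two more recursive calls, so the call tree has
    # on the order of phi^n nodes (phi ~ 1.618): exponential time.
    if n == 1 or n == 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)
```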
Computing Fibonacci Sequence Faster
Resulting Algorithm:
Fib(n)
    If n = 1 or n = 2, then
        return 1
    Else
        f[1] = 1; f[2] = 1
        For i = 3 to n
            f[i] = f[i-1] + f[i-2]
        return f[n]
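The pseudocode above translates almost line for line into Python; a sketch (names are mine) that performs O(n) additions:

```python
def fib(n):
    # Bottom-up: fill f[1..n] left to right, so each Fibonacci
    # number is computed exactly once.
    if n == 1 or n == 2:
        return 1
    f = [0] * (n + 1)       # f[0] unused; mirrors the 1-indexed pseudocode
    f[1] = f[2] = 1
    for i in range(3, n + 1):
        f[i] = f[i - 1] + f[i - 2]
    return f[n]
```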
Unweighted Interval Scheduling Review
[Figure: time axis 0–11 with two overlapping intervals, a (weight = 1) and b (weight = 999).]
Weighted Interval Scheduling
[Figure: weighted intervals on a time axis 0–11.]
Dynamic Programming: Binary Choice
OPT(j) =
    0                                        if j = 0
    max{ v_j + OPT(p(j)), OPT(j - 1) }       otherwise
Weighted Interval Scheduling: Brute Force
Compute-Opt(j) {
    if (j = 0)
        return 0
    else
        return max(vj + Compute-Opt(p(j)), Compute-Opt(j-1))
}
Worst case:
T(n) = T(n-1) + T(n-2) + Θ(1),
which grows like the Fibonacci numbers: exponential in n.
Weighted Interval Scheduling: Memoization
M[0] = 0                ← M is a global array
for j = 1 to n
    M[j] = empty

M-Compute-Opt(j) {
    if (M[j] is empty)
        M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
    return M[j]
}
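A runnable sketch of the memoized recursion in Python, assuming the values v[1..n] and predecessors p[1..n] are supplied as lists with a dummy entry at index 0 (an input format I chose for illustration); functools.lru_cache plays the role of the global array M:

```python
from functools import lru_cache

def m_compute_opt(v, p):
    # v[j] = value of job j; p[j] = rightmost job that ends before
    # job j starts; both 1-indexed, index 0 unused.
    @lru_cache(maxsize=None)
    def opt(j):
        if j == 0:
            return 0
        return max(v[j] + opt(p[j]), opt(j - 1))
    return opt(len(v) - 1)
```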
Weighted Interval Scheduling: Running Time
Iterative-Compute-Opt {
    M[0] = 0
    for j = 1 to n
        M[j] = max(vj + M[p(j)], M[j-1])
}
Run M-Compute-Opt(n)
Run Find-Solution(n)
Find-Solution(j) {
    if (j = 0)
        output nothing
    else if (vj + M[p(j)] > M[j-1])
        print j
        Find-Solution(p(j))
    else
        Find-Solution(j-1)
}
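Putting the pieces together, here is an end-to-end sketch: jobs are (start, finish, value) triples (a format I chose for illustration), p(j) is computed by binary search over the sorted finish times, and the traceback mirrors Find-Solution:

```python
import bisect

def weighted_interval_scheduling(jobs):
    # jobs: list of (start, finish, value); intervals treated as [start, finish).
    jobs = sorted(jobs, key=lambda job: job[1])   # sort by finish time
    n = len(jobs)
    finishes = [f for _, f, _ in jobs]
    # p[j] = number of jobs finishing by the start of job j+1,
    # i.e. the 1-indexed predecessor p(j) from the lecture.
    p = [bisect.bisect_right(finishes, jobs[j][0]) for j in range(n)]

    # Iterative-Compute-Opt: M[j] = best value achievable with jobs 1..j.
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        _, _, v = jobs[j - 1]
        M[j] = max(v + M[p[j - 1]], M[j - 1])

    # Find-Solution: walk backward through M to recover a chosen set.
    chosen, j = [], n
    while j > 0:
        _, _, v = jobs[j - 1]
        if v + M[p[j - 1]] > M[j - 1]:
            chosen.append(jobs[j - 1])
            j = p[j - 1]
        else:
            j -= 1
    return M[n], list(reversed(chosen))
```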
• Many “packing” problems fit this model
  – Assigning production jobs to factories
  – Deciding which jobs to do on a single processor with bounded time
  – Deciding which problems to do on an exam

#  value  weight
1      1       1
2      6       2
3     18       5
4     22       6
5     28       7
Knapsack Problem
• Greedy heuristic: repeatedly add the remaining item with maximum ratio vi / wi that still fits.
• Example (W = 11): greedy picks { 5, 2, 1 } and achieves only value = 35, while { 3, 4 } achieves 40 ⇒ greedy not optimal.

#  value  weight
1      1       1
2      6       2
3     18       5
4     22       6
5     28       7
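The greedy-by-ratio heuristic can be checked directly on this instance (a sketch; the function and variable names are mine):

```python
def greedy_by_ratio(items, W):
    # items: list of (value, weight) pairs; take items in decreasing
    # value/weight ratio as long as they still fit in the knapsack.
    total, remaining = 0, W
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= remaining:
            total += v
            remaining -= w
    return total

items = [(1, 1), (6, 2), (18, 5), (22, 6), (28, 7)]  # the table above
print(greedy_by_ratio(items, 11))  # prints 35; the DP optimum is 40
```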
Knapsack Problem
• Given n items: item i has value vi > 0 and weight wi > 0.
• The knapsack has weight capacity W.
• Goal: choose a subset of items of maximum total value whose total weight is at most W.

(Running example: W = 11.)
#  value  weight
1      1       1
2      6       2
3     18       5
4     22       6
5     28       7
Dynamic programming: attempt 1
• Definition: OPT(i) = maximum profit subset of items 1, …, i.
  – Case 1: OPT does not select item i.
    • OPT selects best of { 1, 2, …, i-1 }
  – Case 2: OPT selects item i.
    • Without knowing what other items were selected before i, we don't even know if we have enough room for i.
• Conclusion. Need more sub-problems!
Adding a new variable
• Definition: OPT(i, w) = maximum profit subset of items 1, …, i with weight limit w.
  – Case 1: OPT does not select item i: OPT(i, w) = OPT(i-1, w).
  – Case 2: OPT selects item i: OPT(i, w) = vi + OPT(i-1, w - wi), feasible only when wi ≤ w.
• Recurrence:
    OPT(i, w) = 0                                              if i = 0
    OPT(i, w) = OPT(i-1, w)                                    if wi > w
    OPT(i, w) = max{ OPT(i-1, w), vi + OPT(i-1, w - wi) }      otherwise
Bottom-up algorithm
• Fill up an (n+1)-by-(W+1) array.

for w = 0 to W
    M[0, w] = 0
for i = 1 to n
    for w = 0 to W
        if (wi > w)
            M[i, w] = M[i-1, w]
        else
            M[i, w] = max{ M[i-1, w], vi + M[i-1, w - wi] }
return M[n, W]
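A direct Python transcription of the bottom-up pseudocode (a sketch; the names are mine), using 0-based Python lists for the 1-indexed items:

```python
def knapsack(values, weights, W):
    # M[i][w] = max value using items 1..i with capacity w.
    # (n+1) x (W+1) table; Theta(nW) time and space.
    n = len(values)
    M = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            if weights[i - 1] > w:
                M[i][w] = M[i - 1][w]          # item i doesn't fit
            else:
                M[i][w] = max(M[i - 1][w],
                              values[i - 1] + M[i - 1][w - weights[i - 1]])
    return M[n][W]
```

On the running example (values 1, 6, 18, 22, 28; weights 1, 2, 5, 6, 7; W = 11) this returns 40, matching the last entry of the table below.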
Knapsack table: M[i, w] for capacities w = 0 … 11

                   w:  0  1  2  3  4  5  6  7  8  9 10 11
Ø                      0  0  0  0  0  0  0  0  0  0  0  0
{ 1 }                  0  1  1  1  1  1  1  1  1  1  1  1
{ 1, 2 }               0  1  6  7  7  7  7  7  7  7  7  7
{ 1, 2, 3 }            0  1  6  7  7 18 19 24 25 25 25 25
{ 1, 2, 3, 4 }         0  1  6  7  7 18 22 24 28 29 29 40
{ 1, 2, 3, 4, 5 }      0  1  6  7  7 18 22 28 29 34 34 40
Time and Space Complexity
What is the input size?
• In words?
• In bits?

(Running example: W = 11 with the item table below.)
#  value  weight
1      1       1
2      6       2
3     18       5
4     22       6
5     28       7
Time and space complexity
• Time and space: Θ(n W).
  – Not polynomial in input size!
  – Space can be made O(W) (exercise!)
  – “Pseudo-polynomial.”
  – Decision version of Knapsack is NP-complete. [KT, chapter 8]
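One way to realize the O(W)-space remark (a sketch of the standard row-reuse trick, offered with the caveat that it gives away the exercise): keep a single row and sweep capacities downward so each item is counted at most once.

```python
def knapsack_1d(values, weights, W):
    # M[w] holds the previous row's entry for capacities not yet updated
    # and the current row's entry for those already updated; iterating w
    # downward means M[w - wt] still refers to the previous row, so each
    # item is used at most once.
    M = [0] * (W + 1)
    for v, wt in zip(values, weights):
        for w in range(W, wt - 1, -1):
            M[w] = max(M[w], v + M[w - wt])
    return M[W]
```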
• Knapsack approximation algorithm. There is a poly-time algorithm that produces a solution with value within 0.01% of optimum. [KT, section 11.8]