
Dynamic Programming is an algorithm design technique for optimization problems: often minimizing or maximizing.
Like divide and conquer, DP solves problems by combining solutions to subproblems.
Unlike divide and conquer, the subproblems are not independent:
subproblems may share subsubproblems;
however, the solution to one subproblem does not affect the solutions to other subproblems of the same problem.
DP reduces computation by
Solving subproblems in a bottom-up fashion.
Storing the solution to a subproblem the first time it is solved.
Looking up the stored solution when the subproblem is encountered again (see the sketch below).
Key: determine the structure of optimal solutions.
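As a minimal illustration of the store-and-look-up idea (memoization), the sketch below assumes a hypothetical recurrence solve(k) whose value depends on two smaller subproblems; the names memo and solve are illustrative only, not part of any algorithm in these slides.

    # Store each subproblem's solution the first time it is solved,
    # then look it up when the subproblem is encountered again.
    memo = {}

    def solve(k):
        if k in memo:              # already solved: look it up
            return memo[k]
        if k <= 1:                 # base case of the hypothetical recurrence
            result = k
        else:
            result = solve(k - 1) + solve(k - 2)   # overlapping subproblems
        memo[k] = result           # store the solution
        return result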

1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute optimal solution values bottom-up in a table.
4. Construct an optimal solution from the computed values.
We'll study these with the help of examples.

Two key ingredients:
Optimal substructure
Overlapping subproblems

Applies to a problem that at first seems to
require a lot of time (possibly exponential),
provided we have:

Simple subproblems: the subproblems can be defined in
terms of a few variables, such as j, k, l, m, and so on.

Subproblem optimality: the global optimum value can
be defined in terms of optimal subproblems.

Subproblem overlap: the subproblems are not
independent, but instead they overlap (hence, should be
constructed bottom-up).

If the subproblems are not independent,
i.e. subproblems share subsubproblems,
then a divide-and-conquer algorithm
repeatedly solves the common
subsubproblems.
Thus, it does more work than necessary!

Question: Any better solution?
Yes: Dynamic programming (DP)!
There are two versions of the knapsack problem:
(1) the 0-1 knapsack problem, and
(2) the fractional knapsack problem.

(1) Items are indivisible; you either take an item or not. Solved with dynamic programming.
(2) Items are divisible; you can take any fraction of an item. Solved with a greedy algorithm.

Given a knapsack with maximum capacity W, and a set S consisting of n items.
Each item i has some weight w_i and benefit value b_i (all w_i, b_i and W are integer values).


Goal: choose a subset $T \subseteq S$ of the items that maximizes the total benefit without exceeding the capacity:

$$\max \sum_{i \in T} b_i \quad \text{subject to} \quad \sum_{i \in T} w_i \le W$$
The problem is called a 0-1 problem, because each item
must be entirely accepted or rejected.
S_k: the set of items numbered 1 to k.
Define B[k,w] = the best selection from S_k with weight exactly equal to w.
Good news: this does have subproblem optimality:
i.e., the best subset of S_k with weight exactly w is either
the best subset of S_{k-1} with weight w, or
the best subset of S_{k-1} with weight w - w_k, plus item k.

$$B[k,w] = \begin{cases} B[k-1,w] & \text{if } w_k > w \\ \max\{\,B[k-1,w],\ B[k-1,w-w_k] + b_k\,\} & \text{otherwise} \end{cases}$$
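Before building the bottom-up table, note that the same recurrence can also be evaluated top-down with memoization. The sketch below is illustrative only (the names max_benefit and B are assumptions, not from the slides); it takes the weights and benefits as 0-indexed lists.

    # Top-down evaluation of the recurrence B[k,w], computing each value once.
    from functools import lru_cache

    def max_benefit(w, b, W):
        n = len(w)                      # w and b are 0-indexed lists of length n

        @lru_cache(maxsize=None)
        def B(k, cap):
            if k == 0 or cap == 0:      # no items left or no capacity
                return 0
            if w[k - 1] > cap:          # item k does not fit
                return B(k - 1, cap)
            return max(B(k - 1, cap),                        # skip item k
                       B(k - 1, cap - w[k - 1]) + b[k - 1])  # take item k

        return B(n, W)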
for w = 0 to W
    B[0,w] = 0
for i = 1 to n
    B[i,0] = 0
    for w = 1 to W
        if w_i <= w                          // item i can be part of the solution
            if b_i + B[i-1, w - w_i] > B[i-1, w]
                B[i,w] = b_i + B[i-1, w - w_i]
            else
                B[i,w] = B[i-1, w]
        else
            B[i,w] = B[i-1, w]               // w_i > w
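The pseudocode above translates directly into Python; the following is a sketch under that reading (the function name knapsack_table is illustrative, not from the slides).

    # Bottom-up 0-1 knapsack: B[i][w] = best benefit using items 1..i with capacity w.
    def knapsack_table(w, b, W):
        n = len(w)                                  # w, b: 0-indexed lists of length n
        B = [[0] * (W + 1) for _ in range(n + 1)]   # rows i = 0..n, columns w = 0..W
        for i in range(1, n + 1):
            for cap in range(1, W + 1):
                if w[i - 1] <= cap:                 # item i can be part of the solution
                    B[i][cap] = max(B[i - 1][cap],
                                    b[i - 1] + B[i - 1][cap - w[i - 1]])
                else:                               # item i does not fit
                    B[i][cap] = B[i - 1][cap]
        return B

    # On the example that follows (items (2,3), (3,4), (4,5), (5,6), W = 5),
    # this gives B[4][5] = 7.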
for w = 0 to W                 // O(W)
    B[0,w] = 0
for i = 1 to n                 // repeated n times
    B[i,0] = 0
    for w = 1 to W             // O(W)
        < the rest of the code >

What is the running time of this algorithm? O(n*W)
Remember that the brute-force algorithm takes O(2^n).
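For contrast, here is a brute-force sketch that simply tries all 2^n subsets of the items; it is illustrative only (the name knapsack_brute_force is not from the slides).

    from itertools import combinations

    # Brute force: examine every subset of the n items, i.e. about 2^n candidates.
    def knapsack_brute_force(w, b, W):
        n = len(w)
        best = 0
        for r in range(n + 1):
            for subset in combinations(range(n), r):
                if sum(w[i] for i in subset) <= W:
                    best = max(best, sum(b[i] for i in subset))
        return best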
Let's run our algorithm on the following data:
n = 4 (number of elements)
W = 5 (max weight)
Elements (weight, benefit): (2,3), (3,4), (4,5), (5,6)
Initialization (for w = 0 to W: B[0,w] = 0; for i = 1 to n: B[i,0] = 0): row 0 and column 0 of the table are filled with zeros.

Row i = 1 (item 1: w_1 = 2, b_1 = 3):
    w = 1: w_1 > w, so B[1,1] = B[0,1] = 0
    w = 2: w_1 <= w and b_1 + B[0,0] = 3 > B[0,2] = 0, so B[1,2] = 3
    w = 3, 4, 5: likewise B[1,3] = B[1,4] = B[1,5] = 3

Row i = 2 (item 2: w_2 = 3, b_2 = 4):
    w = 1, 2: w_2 > w, so copy the row above: B[2,1] = 0, B[2,2] = 3
    w = 3: b_2 + B[1,0] = 4 > B[1,3] = 3, so B[2,3] = 4
    w = 4: b_2 + B[1,1] = 4 > B[1,4] = 3, so B[2,4] = 4
    w = 5: b_2 + B[1,2] = 4 + 3 = 7 > B[1,5] = 3, so B[2,5] = 7

Row i = 3 (item 3: w_3 = 4, b_3 = 5):
    w = 1..3: w_3 > w, so copy the row above: 0, 3, 4
    w = 4: b_3 + B[2,0] = 5 > B[2,4] = 4, so B[3,4] = 5
    w = 5: b_3 + B[2,1] = 5 < B[2,5] = 7, so B[3,5] = 7

Row i = 4 (item 4: w_4 = 5, b_4 = 6):
    w = 1..4: w_4 > w, so copy the row above: 0, 3, 4, 5
    w = 5: b_4 + B[3,0] = 6 < B[3,5] = 7, so B[4,5] = 7

The completed table B[i,w] for items 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6):

    i\w   0   1   2   3   4   5
    0     0   0   0   0   0   0
    1     0   0   3   3   3   3
    2     0   0   3   4   4   7
    3     0   0   3   4   5   7
    4     0   0   3   4   5   7

The maximal benefit is B[4,5] = 7.
How do we find the actual knapsack items? All of the information we need is in the table.
V[n,W] is the maximal value of items that can be placed in the knapsack.
To recover the chosen items, trace back from V[n,W]:

    i = n, k = W
    while i > 0 and k > 0
        if V[i,k] != V[i-1,k] then        // taking item i is what changed the value
            mark the i-th item as in the knapsack
            i = i-1, k = k - w_i
        else
            i = i-1                       // the i-th item is not in the knapsack
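A minimal Python sketch of this traceback (the function name chosen_items is illustrative), applied to the table computed earlier:

    # Trace back through a filled table to recover the chosen items.
    # B is the (n+1) x (W+1) table from the bottom-up algorithm; w is the list of weights.
    def chosen_items(B, w, W):
        items = []
        i, k = len(w), W
        while i > 0 and k > 0:
            if B[i][k] != B[i - 1][k]:   # value changed, so item i was taken
                items.append(i)
                k -= w[i - 1]
            i -= 1
        return sorted(items)

    # The table computed above for items (2,3), (3,4), (4,5), (5,6) and W = 5:
    B = [[0, 0, 0, 0, 0, 0],
         [0, 0, 3, 3, 3, 3],
         [0, 0, 3, 4, 4, 7],
         [0, 0, 3, 4, 5, 7],
         [0, 0, 3, 4, 5, 7]]
    print(chosen_items(B, [2, 3, 4, 5], 5))   # [1, 2]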

Tracing back on the example table (items 1: (2,3), 2: (3,4), 3: (4,5), 4: (5,6); V[4,5] = 7):

    i = 4, k = 5: V[4,5] = 7 = V[3,5], so item 4 is not in the knapsack; i = 3.
    i = 3, k = 5: V[3,5] = 7 = V[2,5], so item 3 is not in the knapsack; i = 2.
    i = 2, k = 5: V[2,5] = 7 != V[1,5] = 3, so mark item 2; i = 1, k = k - w_2 = 2.
    i = 1, k = 2: V[1,2] = 3 != V[0,2] = 0, so mark item 1; i = 0, k = k - w_1 = 0.
    i = 0, k = 0: stop.

The optimal knapsack contains items {1, 2}, with total weight 5 and total benefit 3 + 4 = 7.
The greedy method is the most straightforward design technique. We consider a problem having n inputs and require obtaining a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We need to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.

The greedy method is a general algorithm
design paradigm, built on the following
elements:
configurations: different choices, collections, or values
to find
objective function: a score assigned to configurations,
which we want to either maximize or minimize
It works best when applied to problems with the
greedy-choice property:
a globally-optimal solution can always be found by a
series of local improvements from a starting
configuration.
Similar to dynamic programming, but a simpler approach.
Also used for optimization problems.
Idea: when we have a choice to make, make the one that looks best right now.
Make a locally optimal choice in the hope of getting a globally optimal solution.
Greedy algorithms don't always yield an optimal solution.
A greedy algorithm makes the choice that looks best at the moment in order to try to reach an optimal solution.

The function Select selects an input from a[] and removes it. The selected input's value is assigned to x.
Feasible is a Boolean-valued function that determines whether x can be included into the solution vector.
The function Union combines x with the solution and updates the objective function.

Algorithm Greedy(a, n)
// a[1:n] contains the n inputs
{
    solution = 0                 // initialize the solution
    for i = 1 to n do
    {
        x = Select(a)
        if Feasible(solution, x) then
            solution = Union(solution, x)
    }
    return solution
}
Problem: A dollar amount to reach and a collection of coin
amounts to use to get there.
Configuration: A dollar amount yet to return to a customer
plus the coins already returned
Objective function: Minimize number of coins returned.
Greedy solution: Always return the largest coin you can
Example 1: Coins are valued $.32, $.08, $.01
Has the greedy-choice property, since no amount over $.32 can
be made with a minimum number of coins by omitting a $.32
coin (similarly for amounts over $.08, but under $.32).
Example 2: Coins are valued $.30, $.20, $.05, $.01
Does not have greedy-choice property, since $.40 is best made
with two $.20s, but the greedy solution will pick three coins
(which ones?)
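A short sketch of this greedy rule, instantiating the Select/Feasible/Union template above on the coin-change examples (the function name greedy_change is illustrative, not from the slides):

    # Greedy coin change: always return the largest coin that still fits.
    def greedy_change(amount_cents, coin_values):
        coins = []
        for c in sorted(coin_values, reverse=True):   # Select: largest coin first
            while amount_cents >= c:                  # Feasible: coin still fits
                coins.append(c)                       # Union: add it to the solution
                amount_cents -= c
        return coins

    print(greedy_change(40, [30, 20, 5, 1]))   # [30, 5, 5]: three coins, though two $.20s suffice
    print(greedy_change(40, [32, 8, 1]))       # [32, 8]: optimal for this coin system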
Given: a set S of n items, with each item i having
    b_i - a positive benefit
    w_i - a positive weight
Goal: choose items with maximum total benefit but with weight at most W.
If we are allowed to take fractional amounts, then this is the fractional knapsack problem.
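For the fractional version, the standard greedy approach takes items in order of benefit-per-weight ratio, taking a fraction of the last item that fits. The sketch below is illustrative (the name fractional_knapsack is an assumption, not from the slides):

    # Fractional knapsack: greedily take items by decreasing benefit/weight ratio,
    # taking a fraction of the final item if only part of it fits.
    def fractional_knapsack(w, b, W):
        order = sorted(range(len(w)), key=lambda i: b[i] / w[i], reverse=True)
        total = 0.0
        remaining = W
        for i in order:
            if remaining <= 0:
                break
            take = min(w[i], remaining)        # whole item if it fits, else a fraction
            total += b[i] * (take / w[i])
            remaining -= take
        return total

    print(fractional_knapsack([2, 3, 4, 5], [3, 4, 5, 6], 5))   # 7.0 on the earlier example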

Dynamic programming:
We make a choice at each step.
The choice depends on solutions to subproblems.
Bottom-up solution, from smaller to larger subproblems.
Greedy algorithm:
Make the greedy choice and THEN solve the subproblem arising after the choice is made.
The choice we make may depend on previous choices, but not on solutions to subproblems.
Top-down solution, problems decrease in size.

Greedy and Dynamic Programming are
methods for solving optimization problems.
Greedy algorithms are usually more efficient
than DP solutions.
However, often you need to use dynamic
programming since the optimal solution
cannot be guaranteed by a greedy algorithm.
DP provides efficient solutions for some
problems for which a brute force approach
would be very slow.
