Faster Algorithms for Well-Behaved Optimization Problems
Dynamic Programming is Blind
■ Why?
■ We have many choices in computing an optimal solution.
■ We exhaustively (blindly) check all of them.
■ We would much rather have a way to decide which choice is best, or at
least to restrict the choices we have to try.
■ Greedy algorithms work for problems where we can decide what’s the best
choice.
Greedy choice:
We make the choice that looks best at the moment.
(Every time we make a choice, we greedily try to maximize our profit.)
An example where this does not work:
Finding a longest monotonically increasing subsequence.
Given sequence ‹3 4 5 17 7 8 9›,
a longest monotonically increasing subsequence is ‹3 4 5 7 8 9›.
The greedy choice after choosing ‹3 4 5› is to choose 17, which precludes the
addition of any further element and thus results in the suboptimal sequence
‹3 4 5 17›.
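A short illustrative sketch (in Python) contrasting this greedy choice with the standard O(n²) dynamic program on the sequence above:

```python
def greedy_lis(seq):
    """Greedy strategy: take every element larger than the last one taken."""
    out = []
    for x in seq:
        if not out or x > out[-1]:
            out.append(x)
    return out

def lis_length(seq):
    """Classic O(n^2) DP: best[i] = length of the LIS ending at position i."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i]:
                best[i] = max(best[i], best[j] + 1)
    return max(best, default=0)

seq = [3, 4, 5, 17, 7, 8, 9]
print(greedy_lis(seq))   # the greedy choice of 17 blocks 7, 8, 9
print(lis_length(seq))   # the true optimum has length 6
```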
[Figure: eight classes, drawn as intervals on a time line]
Let Si,j be the set of classes that begin after time fi and end before time sj; that is,
these classes can be scheduled between classes Ci and Cj.
We can add two fictitious classes C0 and Cn + 1 with f0 = –∞ and sn + 1 = +∞. Then
S0,n + 1 is the set of all classes.
[Figure: choosing class Ck in Si,j splits the problem into Si,k and Sk,j]
Hence, if Q(i, j) is the size of an optimal schedule for set Si,j, we have

Q(i, j) = 0, if Si,j = ∅, and
Q(i, j) = max{ Q(i, k) + Q(k, j) + 1 : Ck ∈ Si,j }, otherwise.
Lemma: There exists an optimal schedule for the set Si,j that contains the class Ck
in Si,j that finishes first, that is, the class Ck in Si,j with minimal index k.
Lemma: If we choose class Ck as in the previous lemma, then set Si,k is empty.
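Before exploiting the greedy lemmas, the recurrence for Q(i, j) can be evaluated directly by memoized recursion. A minimal Python sketch on a small hypothetical instance (the start/finish times below are illustrative, not from the slides):

```python
from functools import lru_cache

# Hypothetical instance, sorted by finish time: class C_k occupies [s[k], f[k]].
s = [0, 1, 3, 0, 5, 3, 5]          # index 0 is the fictitious class C_0
f = [0, 4, 5, 6, 7, 9, 9]
n = len(s) - 1                     # number of real classes
s.append(float('inf'))             # fictitious class C_{n+1}
f.append(float('inf'))

@lru_cache(maxsize=None)
def Q(i, j):
    """Size of an optimal schedule for S_{i,j}: classes that begin after
    C_i finishes and end before C_j starts."""
    best = 0
    for k in range(i + 1, j):
        if s[k] >= f[i] and f[k] <= s[j]:            # C_k lies in S_{i,j}
            best = max(best, Q(i, k) + Q(k, j) + 1)  # split at C_k
    return best

print(Q(0, n + 1))   # size of an optimal schedule for all classes
```

This is exactly the blind dynamic program that the greedy lemmas let us avoid: it tries every Ck instead of committing to the one that finishes first.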
Recursive-Activity-Selector(s, f, i, j)
1 m ← i + 1
2 while m < j and sm < fi
3     do m ← m + 1
4 if m < j
5     then return {am} ∪ Recursive-Activity-Selector(s, f, m, j)
6     else return ∅
Assuming the activities have already been sorted by finish times, the total
running time of the initial call Recursive-Activity-Selector(s, f, 0, n + 1) is Θ(n).
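The same greedy strategy is easy to write iteratively. A Python sketch (the sample instance below is illustrative):

```python
def select_activities(s, f):
    """Greedy activity selection. s[i], f[i] are the start and finish times
    of activity a_i, assumed sorted by finish time. Returns the indices of
    a maximum-size set of mutually compatible activities."""
    chosen = []
    last_finish = float('-inf')
    for i in range(len(s)):
        if s[i] >= last_finish:    # a_i starts after the last chosen one ends
            chosen.append(i)
            last_finish = f[i]
    return chosen

# Sample instance (start/finish pairs, sorted by finish time).
s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(select_activities(s, f))
```

Each activity is examined once, so the scan is Θ(n), matching the recursive version.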
Lemma: Let O be the current set of selected classes, and let Ck be the last class
added to O. Then any class Cl, l > k, that conflicts with a class in O conflicts with Ck.
Greedy choice property:
An optimal solution can be obtained by making choices that seem best at the
time, without considering their implications for solutions to subproblems.
Optimal substructure:
An optimal solution can be obtained by augmenting the partial solution
constructed so far with an optimal solution of the remaining subproblem.
Our next goal is to develop a code that represents a given text as compactly as
possible.
A standard encoding is ASCII, which represents every character using 7 bits:
“An English sentence”
1000001 (A) 1101110 (n) 0100000 ( ) 1000101 (E) 1101110 (n) 1100111 (g)
1101100 (l) 1101001 (i) 1110011 (s) 1101000 (h) 0100000 ( ) 1110011 (s)
1100101 (e) 1101110 (n) 1110100 (t) 1100101 (e) 1101110 (n) 1100011 (c)
1100101 (e)
= 133 bits ≈ 17 bytes
Fixed-length codes:
Every character is encoded using the same number of bits.
To determine the boundaries between characters, we form groups of w bits, where
w is the length of a character.
Examples:
■ ASCII
■ Our first improved code
Prefix codes:
No codeword is a prefix of another codeword.
Examples:
■ Fixed-length codes
■ Huffman codes
a = 01 m = 10 n = 111 o = 0 r = 11 s = 1 t = 0011
Now you send a fan letter to your favorite movie star. One of the sentences is
“You are a star.”
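The code above is not a prefix code, and that makes it ambiguous. A Python sketch using the table from the slide: the bit string encoding “star” is exactly the bit string encoding a rather less flattering word.

```python
# The ambiguous (non-prefix) code from the slide.
code = {'a': '01', 'm': '10', 'n': '111', 'o': '0',
        'r': '11', 's': '1', 't': '0011'}

def encode(word):
    """Concatenate the codewords of the characters in word."""
    return ''.join(code[ch] for ch in word)

print(encode('star'))    # 1 0011 01 11  -> 100110111
print(encode('moron'))   # 10 0 11 0 111 -> 100110111, the same bits!
```

Because 's' = 1 is a prefix of 'r' = 11 and of 'n' = 111, the receiver cannot tell where one codeword ends and the next begins.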
Assume that there are two characters c and c' that could potentially be the first
characters in the text. Assume that their encodings are x0x1…xk and y0y1…yl.
Assume further that k ≤ l.
Since both c and c' can occur at the beginning of the text, we have xi = yi,
for 0 ≤ i ≤ k; that is, x0x1…xk is a prefix of y0y1…yl, a contradiction.
Our example:
‹space› = 000 A = 0010 E = 0011 s = 010 c = 0110 g = 0111 h = 1000
i = 1001 l = 1010 t = 1011 e = 110 n = 111
[Figure: the code tree; left edges are labeled 0, right edges 1. The leaves
‹spc›, s, e, n sit at depth 3 and A, E, c, g, h, i, l, t at depth 4.]
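A minimal Python sketch of encoding and decoding with this prefix code. The decoder can emit a character as soon as the bits read so far match a codeword, precisely because no codeword is a prefix of another:

```python
# The prefix code from the example above.
code = {' ': '000', 'A': '0010', 'E': '0011', 's': '010', 'c': '0110',
        'g': '0111', 'h': '1000', 'i': '1001', 'l': '1010', 't': '1011',
        'e': '110', 'n': '111'}

def encode(text):
    return ''.join(code[ch] for ch in text)

def decode(bits):
    """Greedily match codewords left to right; unambiguous because the
    code is prefix-free."""
    inverse = {v: k for k, v in code.items()}
    out, cur = [], ''
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ''
    return ''.join(out)

bits = encode('An English sentence')
print(len(bits))     # 65 bits, versus 133 bits in 7-bit ASCII
print(decode(bits))  # round-trips back to the original text
```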
Let C be the set of characters in the text to be encoded, and let f(c) be the frequency
of character c.
Let dT(c) be the depth of node (character) c in the tree T representing the code.
Then

B(T) = Σ_{c ∈ C} f(c) · dT(c)

is the number of bits required to encode the text using the code represented by
tree T. We call B(T) the cost of tree T.
Observation: In a tree representing an optimal prefix code, every internal node
has two children.
Huffman(C)
1 n ← |C|
2 Q ← C
3 for i = 1..n – 1
4     do allocate a new node z
5        left[z] ← x ← Delete-Min(Q)
6        right[z] ← y ← Delete-Min(Q)
7        f[z] ← f[x] + f[y]
8        Insert(Q, z)
9 return Delete-Min(Q)

Example frequencies: f:5 e:9 c:12 b:13 d:16 a:45
On this example, the algorithm repeatedly extracts the two nodes of smallest
frequency and merges them:

f:5 + e:9 → 14
c:12 + b:13 → 25
14 + d:16 → 30
25 + 30 → 55
a:45 + 55 → 100 (the root)

Labeling every left edge 0 and every right edge 1 yields the codewords
a = 0, c = 100, b = 101, d = 111, f = 1100, e = 1101.
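The pseudocode translates directly to Python with a binary heap as the priority queue. A sketch (the tie-breaking counter only keeps heap entries comparable when frequencies are equal; it is an implementation detail, not part of the algorithm):

```python
import heapq
from itertools import count

def huffman(freqs):
    """Build a Huffman code for {symbol: frequency}; return {symbol: codeword}."""
    tie = count()
    heap = [(f, next(tie), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fx, _, x = heapq.heappop(heap)   # Delete-Min
        fy, _, y = heapq.heappop(heap)   # Delete-Min
        # Merge into a new internal node represented as a (left, right) pair.
        heapq.heappush(heap, (fx + fy, next(tie), (x, y)))
    _, _, root = heap[0]

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:                                # leaf: a character
            codes[node] = prefix or '0'
    walk(root, '')
    return codes

codes = huffman({'f': 5, 'e': 9, 'c': 12, 'b': 13, 'd': 16, 'a': 45})
print(codes)   # reproduces the codewords derived above
```

With a heap, each of the n – 1 iterations costs O(log n), so the whole construction runs in O(n log n) time.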
Why is merging the two nodes with smallest frequency into a subtree a greedy
choice?
We can alternatively define B(T) as

B(T) = Σ_{z internal node of T} f(z),

where f(z) = f(left[z]) + f(right[z]) is the total frequency of the leaves below z.
(Each leaf c contributes f(c) once per ancestor, that is, dT(c) times.)
By merging the two nodes with lowest frequency, we greedily try to minimize the
cost the new node contributes to B(T).
[Figure: tree T with leaves x, y and deeper sibling leaves a, b, and tree T'
obtained from T by exchanging x, y with a, b]
After joining two nodes x and y by making them children of a new node z, the
algorithm treats z as a leaf with frequency f(z) = f(x) + f(y).
Let C' be the character set in which x and y are replaced by the single character z
with frequency f(z) = f(x) + f(y), and let T' be an optimal tree for C'. Let T be the
tree obtained from T' by turning the leaf z into an internal node with children x
and y; we claim that T is optimal for C.
Assume the contrary. Then there exists a better tree T'' for C.
Also, there exists a tree T''' at least as good as T'' for C where x and y are sibling
leaves of maximal depth.
The removal of x and y from T''' turns their parent into a leaf; we can associate this
leaf with z.
The cost of the resulting tree is B(T''') – f(x) – f(y) < B(T) – f(x) – f(y) = B(T'),
contradicting the optimality of T' for C'.
Greedy algorithms are efficient algorithms for optimization problems that exhibit
two properties:
Greedy choice property: An optimal solution can be obtained by making locally
optimal choices.
Optimal substructure: An optimal solution contains within it optimal solutions
to smaller subproblems.
If only optimal substructure is present, dynamic programming may be a viable
approach; that is, the greedy choice property is what allows us to obtain faster
algorithms than what can be obtained using dynamic programming.