
# Computer Algorithms, Third Edition, Solutions to Selected Exercises

Sara Baase and Allen Van Gelder

February 25, 2000


## Introduction

This manual contains solutions for the selected exercises in *Computer Algorithms: Introduction to Design and Analysis*, third edition, by Sara Baase and Allen Van Gelder.

Solutions manuals are intended primarily for instructors, but it is a fact that instructors sometimes put copies in campus libraries or on their web pages for use by students. For instructors who prefer to have students work on problems without access to solutions, we have chosen not to include all the exercises from the text in this manual. The included exercises are listed in the table of contents. Roughly every other exercise is solved.

Some of the solutions were written specifically for this manual; others are adapted from solution sets handed out to students in classes we taught (written by ourselves, teaching assistants, and students). Thus there is some inconsistency in the style and amount of detail in the solutions. Some may seem to be addressed to instructors and some to students. We decided not to change these inconsistencies, in part because the manual will be read by instructors and students. In some cases there is more detail, explanation, or justification than a student might be expected to supply on a homework assignment.

Many of the solutions use the same pseudocode conventions used in the text, such as:

1. Block delimiters ("{" and "}") are omitted. Block boundaries are indicated by indentation.
2. The keyword static is omitted from method (function and procedure) declarations. All methods declared in the solutions are static.
3. Class name qualifiers are omitted from method (function and procedure) calls. For example, x = cons(z, x) might be written when the Java syntax requires x = IntList.cons(z, x).
4. Keywords to control visibility, public, private, and protected, are omitted.
5. Mathematical relational operators "≤", "≥", and "≠" are usually written, instead of their keyboard versions. Relational operators are used on types where the meaning is clear, such as String, even though this would be invalid syntax in Java.

We thank Chuck Sanders for writing most of the solutions for Chapter 2 and for contributing many solutions in Chapter 14. We thank Luo Hong, a graduate student at UC Santa Cruz, for assisting with several solutions in Chapters 9, 10, 11, and 13.

In a few cases the solutions given in this manual are affected by corrections and clarifications to the text. These cases are indicated at the beginning of each affected solution. The up-to-date information on corrections and clarifications, along with other supplementary materials for students, can be found at these Internet sites:

ftp://ftp.aw.com/cseng/authors/baase
http://www-rohan.sdsu.edu/faculty/baase
http://www.cse.ucsc.edu/personnel/faculty/avg.html

© Copyright 2000 Sara Baase and Allen Van Gelder. All rights reserved. Permission is granted for college and university instructors to make a reasonable number of copies, free of charge, as needed to plan and administer their courses. Instructors are expected to exercise reasonable precautions against further, unauthorized copies, whether on paper, electronic, or other media. Permission is also granted for Addison-Wesley-Longman editorial, marketing, and sales staff to provide copies free of charge to instructors and prospective instructors, and to make copies for their own use. Other copies, whether paper, electronic, or other media, are prohibited without prior written consent of the authors.

**List of Solved Exercises**

1. Analyzing Algorithms and Problems: Principles and Examples (1.1, 1.2, 1.4, 1.6, 1.8, 1.10, 1.12, 1.13, 1.15, 1.18, 1.20, 1.22, 1.23, 1.25, 1.28, 1.31, 1.33, 1.35, 1.37, 1.39, 1.42, 1.44, 1.46, 1.47, 1.48, 1.50)
2. Data Abstraction and Basic Data Structures (2.2, 2.4, 2.6, 2.8, 2.10, 2.12, 2.14, 2.16, 2.18)
3. Recursion and Induction (3.2, 3.4, 3.6, 3.8, 3.10, 3.12)
4. Sorting (4.2, 4.4, 4.6, 4.9, 4.11, 4.13, 4.15, 4.17, 4.19, 4.21, 4.23, 4.25, 4.26, 4.27, 4.29, 4.31, 4.34, 4.35, 4.37, 4.40, 4.42, 4.44, 4.45, 4.46, 4.48, 4.49, 4.51, 4.53, 4.55, 4.57, 4.59, 4.61, 4.63, 4.65)
5. Selection and Adversary Arguments (5.2, 5.4, 5.6, 5.8, 5.10, 5.12, 5.14, 5.16, 5.19, 5.21, 5.22, 5.24)
6. Dynamic Sets and Searching (6.1, 6.2, 6.4, 6.6, 6.8, 6.10, 6.12, 6.14, 6.16, 6.18, 6.20, 6.22, 6.24, 6.26, 6.28, 6.30, 6.32, 6.34, 6.36, 6.37, 6.40)
7. Graphs and Graph Traversals
8. Graph Optimization Problems and Greedy Algorithms
9. Transitive Closure, All-Pairs Shortest Paths
10. Dynamic Programming
11. String Matching
12. Polynomials and Matrices
13. NP-Complete Problems
14. Parallel Algorithms

## Chapter 1: Analyzing Algorithms and Problems: Principles and Examples

### Section 1.2: Java as an Algorithm Language

**1.1**
It is correct for instance fields whose type is an inner class to be declared before that inner class (as in Figure 1.2 in the text) or after (as here). Appendix A.7 gives an alternative to spelling out all the instance fields in the copy methods (functions).

```java
class Personal {
    public static class Name {
        String firstName;
        String middleName;
        String lastName;

        public static Name copy(Name n) {
            Name n2 = new Name();
            n2.firstName = n.firstName;
            n2.middleName = n.middleName;
            n2.lastName = n.lastName;
            return n2;
        }
    }

    public static class Address {
        String street;
        String city;
        String state;

        static Address copy(Address a) { /* similar to Name.copy() */ }
    }

    public static class PhoneNumber {
        int areaCode;
        int prefix;
        int number;

        public static PhoneNumber copy(PhoneNumber n) { /* similar to Name.copy() */ }
    }

    Name name;
    Address address;
    PhoneNumber phone;
    String eMail;

    public static Personal copy(Personal p) {
        Personal p2 = new Personal();
        p2.name = Name.copy(p.name);
        p2.address = Address.copy(p.address);
        p2.phone = PhoneNumber.copy(p.phone);
        p2.eMail = p.eMail;
        return p2;
    }
}
```
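As a quick illustration (ours, not part of the manual) that the field-by-field copy idiom above produces an independent object, here is a reduced Name class with the same copy method and a small driver; the class and method names mirror the solution, but the driver is hypothetical.

```java
// Demonstrates that copy() yields an object whose fields can change
// without affecting the original (the point of the copy methods in 1.1).
public class CopyDemo {
    static class Name {
        String firstName, middleName, lastName;
        static Name copy(Name n) {
            Name n2 = new Name();
            n2.firstName = n.firstName;
            n2.middleName = n.middleName;
            n2.lastName = n.lastName;
            return n2;
        }
    }
    public static void main(String[] args) {
        Name a = new Name();
        a.firstName = "Sara";
        a.lastName = "Baase";
        Name b = Name.copy(a);
        b.lastName = "Changed";
        // the original is untouched by changes to the copy
        if (!a.lastName.equals("Baase")) throw new AssertionError();
        if (!b.firstName.equals("Sara")) throw new AssertionError();
        System.out.println("ok");
    }
}
```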

### Section 1.3: Mathematical Background

**1.2**
For 0 < k < n, we have

    C(n-1, k)   = (n-1)! / (k! (n-1-k)!)  = (n-1)! (n-k) / (k! (n-k)!)
    C(n-1, k-1) = (n-1)! / ((k-1)! (n-k)!) = (n-1)! k / (k! (n-k)!)

Add them, giving

    C(n-1, k) + C(n-1, k-1) = (n-1)! ((n-k) + k) / (k! (n-k)!) = n! / (k! (n-k)!) = C(n, k).

For 0 ≤ n < k we use the fact that C(a, b) = 0 whenever b > a. (There is no way to choose more elements than there are in the whole set.) Thus C(n-1, k), C(n-1, k-1), and C(n, k) are all 0 in these cases, confirming the equation. If n = k, then C(n-1, k) = 0, while C(n-1, k-1) and C(n, k) are both 1, again confirming the equation. (We need the fact that 0! = 1.)

**1.4**
It suffices to show: log_b x = (log_b c)(log_c x). Consider b raised to each side:

    b^(left side)  = b^(log_b x) = x
    b^(right side) = b^((log_b c)(log_c x)) = (b^(log_b c))^(log_c x) = c^(log_c x) = x

So left side = right side.

**1.6**
The procedure computes x = ⌈lg(n+1)⌉. The solution is based on the fact that n + 1 ≤ 2^x if and only if lg(n+1) ≤ x.

```java
x = 0;
twoToTheX = 1;
while (twoToTheX < n+1)
    x += 1;
    twoToTheX *= 2;
return x;
```

The values computed by this procedure for small n and the approximate values of lg(n+1) are:

| n       | 0 | 1   | 2   | 3   | 4   | 5   | 6   | 7   | 8   | 9   |
|---------|---|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| x       | 0 | 1   | 2   | 2   | 3   | 3   | 3   | 3   | 4   | 4   |
| lg(n+1) | 0 | 1.0 | 1.6 | 2.0 | 2.3 | 2.6 | 2.8 | 3.0 | 3.2 | 3.3 |
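The loop in 1.6 can be checked against the table above. This is a sketch we added (ceilLg is our name for the procedure, and the wrapper class is hypothetical):

```java
// Verifies that the while-loop from 1.6 computes the least x with
// 2^x >= n+1, i.e. ceil(lg(n+1)), for the table values n = 0..9.
public class CeilLg {
    static int ceilLg(int n) {
        int x = 0;
        long twoToTheX = 1;
        while (twoToTheX < n + 1) {
            x += 1;
            twoToTheX *= 2;
        }
        return x;
    }
    public static void main(String[] args) {
        int[] expected = {0, 1, 2, 2, 3, 3, 3, 3, 4, 4};  // row "x" of the table
        for (int n = 0; n <= 9; n++)
            if (ceilLg(n) != expected[n]) throw new AssertionError("n=" + n);
        System.out.println("ok");
    }
}
```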

**1.8**
Pr(S | T) = Pr(S and T) / Pr(T) = Pr(S) Pr(T) / Pr(T) = Pr(S), using the independence of S and T. The second equation is similar.

**1.10**
We know A > B and D > C. Each of the 24 orderings of A, B, C, D has probability 1/24, and 6 of them satisfy A > B and D > C. By direct counting:

    Pr(A > C | A > B and D > C) = Pr(A > C and A > B and D > C) / Pr(A > B and D > C)
                                = (5/24) / (6/24) = 5/6
    Pr(A > D | A > B and D > C) = (3/24) / (6/24) = 3/6 = 1/2
    Pr(A > B > C | A > B and D > C) = (3/24) / (6/24) = 3/6 = 1/2
    Pr(A > B > D > C | A > B and D > C) = (1/24) / (6/24) = 1/6

**1.12**
We assume that the probability of each coin being chosen is 1/3, that the probability that it shows "heads" after being flipped is 1/2, and that the probability that it shows "tails" after being flipped is 1/2. Call the coins A, B, and C. Define the elementary events, each having probability 1/6, as follows:

    AH  A is chosen and flipped and comes out "heads".
    AT  A is chosen and flipped and comes out "tails".
    BH  B is chosen and flipped and comes out "heads".
    BT  B is chosen and flipped and comes out "tails".
    CH  C is chosen and flipped and comes out "heads".
    CT  C is chosen and flipped and comes out "tails".

a) BH and CH cause a majority to be "heads", so the probability is 2/6 = 1/3.

b) No event causes a majority to be "heads", so the probability is 0.

c) AH, BH, CH, and CT cause a majority to be "heads", so the probability is 4/6 = 2/3.

**1.13**
The entry in row i, column j is the probability that Di will beat Dj:

|    | D1    | D2    | D3    | D4    |
|----|-------|-------|-------|-------|
| D1 | --    | 22/36 | 18/36 | 12/36 |
| D2 | 12/36 | --    | 22/36 | 20/36 |
| D3 | 18/36 | 12/36 | --    | 22/36 |
| D4 | 22/36 | 16/36 | 12/36 | --    |

Note that D1 beats D2, D2 beats D3, D3 beats D4, and D4 beats D1, each with probability 22/36.
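The direct-counting answers in 1.10 can be confirmed by brute force over the 24 equally likely orderings. This is a sketch we added (variable names are ours); it counts the orderings satisfying each event.

```java
// Brute-force check of the counts behind 1.10: among the 4! = 24
// orderings of distinct values A, B, C, D, exactly 6 satisfy
// (A > B and D > C); of those, 5 have A > C, 3 have A > D, and
// exactly 1 is the full ordering A > B > D > C.
public class DirectCount {
    public static void main(String[] args) {
        int cond = 0, aOverC = 0, aOverD = 0, exact = 0;
        for (int A = 0; A < 4; A++)
         for (int B = 0; B < 4; B++)
          for (int C = 0; C < 4; C++)
           for (int D = 0; D < 4; D++) {
            if (A == B || A == C || A == D || B == C || B == D || C == D)
                continue;                       // not a permutation
            if (A > B && D > C) {
                cond++;
                if (A > C) aOverC++;
                if (A > D) aOverD++;
                if (A > B && B > D && D > C) exact++;
            }
           }
        if (cond != 6 || aOverC != 5 || aOverD != 3 || exact != 1)
            throw new AssertionError();
        System.out.println("ok");   // so the conditional probabilities are 5/6, 1/2, 1/6
    }
}
```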

**1.15**
The proof is by induction on n, the upper limit of the sum. The base case is n = 0. Then the sum is 0, and (2n^3 + 3n^2 + n)/6 = 0, so the equation holds for the base case. For n > 0, assume the formula holds for n - 1:

    sum_{i=1}^{n} i^2 = sum_{i=1}^{n-1} i^2 + n^2
                      = (2(n-1)^3 + 3(n-1)^2 + (n-1)) / 6 + n^2
                      = (2n^3 - 6n^2 + 6n - 2 + 3n^2 - 6n + 3 + n - 1) / 6 + (6n^2) / 6
                      = (2n^3 - 3n^2 + n + 6n^2) / 6
                      = (2n^3 + 3n^2 + n) / 6.

**1.18**
Consider any two reals w < z. We need to show that f(w) ≤ f(z). Since f(x) is differentiable, it is continuous. We call upon the Mean Value Theorem (sometimes called the Theorem of the Mean), which can be found in any college calculus text. By this theorem there is some point y, such that w < y < z, for which

    f'(y) = (f(z) - f(w)) / (z - w).

By the hypothesis of the lemma, f'(y) ≥ 0. Also, z - w > 0. Therefore, f(z) - f(w) ≥ 0, that is, f(w) ≤ f(z).

**1.20**
Let ⇔ abbreviate the phrase "is logically equivalent to". We use the identity ¬¬A ⇔ A as needed.

    ¬∀x(A(x) ⇒ B(x)) ⇔ ∃x ¬(A(x) ⇒ B(x))      (by Eq. 1.24)
                      ⇔ ∃x ¬(¬A(x) ∨ B(x))     (by Eq. 1.21)
                      ⇔ ∃x (A(x) ∧ ¬B(x))      (by DeMorgan's law, Eq. 1.23)

### Section 1.4: Analyzing Algorithms and Problems

**1.22**
The total number of operations in the worst case is 4n + 2; they are:

    Comparisons involving K:      n
    Comparisons involving index:  n + 1
    Additions:                    n
    Assignments to index:         n + 1

**1.23**
a)

```java
if (a < b)
    if (b < c)
        median = b;
    else if (a < c)
        median = c;
    else
        median = a;
else
    if (a < c)
        median = a;
    else if (b < c)
        median = c;
    else
        median = b;
```

b) D is the set of permutations of three items.

c) Worst case = 3; average = 2 2/3.

d) Three comparisons are needed in the worst case because knowing the median of three numbers requires knowing the complete ordering of the numbers.

**1.25**
Solution 1. Pair up the entries and find the larger of each pair ( ⌊n/2⌋ comparisons; if n is odd, one element is not examined). Then find the maximum among the larger elements using Algorithm 1.3, including the unexamined element if n is odd ( ⌈(n+1)/2⌉ - 1 comparisons). This is the largest entry in the set. Then find the minimum among the smaller elements using the appropriate modification of Algorithm 1.3, again including the unexamined element if n is odd ( ⌈(n+1)/2⌉ - 1 comparisons). This is the smallest entry in the set. Whether n is odd or even, the total is ⌈3n/2⌉ - 2. The following algorithm interleaves the three steps.

```java
/** Precondition: n > 0. */
if (odd(n))
    min = E[n-1];
    max = E[n-1];
else if (E[n-2] < E[n-1])
    min = E[n-2];
    max = E[n-1];
else
    min = E[n-1];
    max = E[n-2];
for (i = 0; i <= n-3; i = i+2)
    if (E[i] < E[i+1])
        if (E[i] < min) min = E[i];
        if (E[i+1] > max) max = E[i+1];
    else
        if (E[i] > max) max = E[i];
        if (E[i+1] < min) min = E[i+1];
```

Solution 2. When we assign this problem after covering Divide and Conquer sorting algorithms in Chapter 4, many students give the following Divide and Conquer solution. (But most of them cannot show formally that it does roughly 3n/2 comparisons.) If there are at most two entries in the set, compare them to find the smaller and larger. Otherwise, break the set in halves, and recursively find the smallest and largest in each half. Then compare the largest keys from each half to find the largest overall, and compare the smallest keys from each half to find the smallest overall.

Analysis of Solution 2 requires material introduced in Chapter 3. The recurrence equation for this procedure, assuming n is a power of 2, is

    W(n) = 1                 for n = 2
    W(n) = 2 W(n/2) + 2      for n > 2.

The recursion tree can be evaluated directly. It is important that the nonrecursive costs in the n/2 leaves of this tree are 1 each. The nonrecursive costs in the n/2 - 1 internal nodes are 2 each. This leads to the total of 3n/2 - 2 for the special case that n is a power of 2. More careful analysis verifies the result ⌈3n/2⌉ - 2 for all n.

### Section 1.5: Classifying Functions by Their Asymptotic Growth Rates

**1.28**
Let p(n) = a_k n^k + a_{k-1} n^{k-1} + ... + a_1 n + a_0. Then

    lim_{n→∞} p(n) / n^k = lim_{n→∞} ( a_k + a_{k-1}/n + a_{k-2}/n^2 + ... + a_1/n^{k-1} + a_0/n^k ) = a_k.

Since 0 < a_k < ∞, p(n) ∈ Θ(n^k).
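The median computation of 1.23 and the interleaved min/max algorithm of 1.25 can both be checked mechanically. The sketch below is ours (class and method names are hypothetical): it mirrors the pseudocode in plain Java, counts key comparisons for the min/max pass, and verifies the ⌈3n/2⌉ - 2 total.

```java
// Exhaustive check of the 1.23 decision structure (median of three) and
// a comparison-counting version of the 1.25 interleaved min/max scan.
public class MinMaxMedianCheck {
    static int median(int a, int b, int c) {
        if (a < b)
            if (b < c) return b;
            else if (a < c) return c;
            else return a;
        else                                // here a >= b
            if (a < c) return a;
            else if (b < c) return c;
            else return b;
    }

    static int comparisons;                 // key comparisons in minMax

    static int[] minMax(int[] E) {          // precondition: E.length > 0
        int n = E.length, min, max;
        if (n % 2 == 1) {
            min = E[n-1]; max = E[n-1];
        } else {
            comparisons++;
            if (E[n-2] < E[n-1]) { min = E[n-2]; max = E[n-1]; }
            else                 { min = E[n-1]; max = E[n-2]; }
        }
        for (int i = 0; i <= n-3; i += 2) {
            comparisons += 3;               // pair test + one vs min + one vs max
            if (E[i] < E[i+1]) {
                if (E[i] < min) min = E[i];
                if (E[i+1] > max) max = E[i+1];
            } else {
                if (E[i] > max) max = E[i];
                if (E[i+1] < min) min = E[i+1];
            }
        }
        return new int[]{min, max};
    }

    public static void main(String[] args) {
        int[][] perms = {{1,2,3},{1,3,2},{2,1,3},{2,3,1},{3,1,2},{3,2,1}};
        for (int[] p : perms)
            if (median(p[0], p[1], p[2]) != 2) throw new AssertionError();

        java.util.Random rnd = new java.util.Random(1);
        for (int n = 1; n <= 12; n++) {
            int[] E = new int[n];
            for (int i = 0; i < n; i++) E[i] = rnd.nextInt(100);
            comparisons = 0;
            int[] mm = minMax(E);
            int expected = (3 * n + 1) / 2 - 2;   // = ceil(3n/2) - 2
            if (comparisons != expected) throw new AssertionError("n=" + n);
            for (int v : E)
                if (v < mm[0] || v > mm[1]) throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```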

**1.31**
The solution here combines parts (a) and (b). The functions on the same line are of the same asymptotic order; each line grows strictly faster than the one above it.

```
lg lg n
lg n,  ln n
(lg n)^2
sqrt(n)
n
n lg n
n^(1+ε)
n^2,  n^2 + lg n
n^3
n - n^3 + 7n^5
2^(n-1),  2^n
e^n
n!
```

**1.33**
Let f(n) = n. For simplicity we show a counter-example in which a nonmonotonic function is used. Consider the function h(n):

    h(n) = n   for odd n
    h(n) = 1   for even n

Clearly h(n) ∈ O(f(n)). But h(n) ∉ Ω(f(n)), so h(n) ∈ O(f(n)) - Θ(f(n)). It remains to show that h(n) ∉ o(f(n)). But this follows from the fact that h(n)/f(n) = 1 for odd integers. With more difficulty, h(n) can be constructed to be monotonic: for all k ≥ 1, let h(n) be constant on the interval [k^k, (k+1)^(k+1) - 1] and let h(n) = k^k on this interval. Thus when n = k^k, h(n)/f(n) = 1, but when n = (k+1)^(k+1) - 1, h(n)/f(n) = k^k / ((k+1)^(k+1) - 1), which tends to 0 as n gets large. Therefore h(n) ∉ o(f(n)) and h(n) ∉ Θ(f(n)).

**1.35**
Property 1: Suppose f ∈ O(g). There are c > 0 and n_0 such that for n ≥ n_0, f(n) ≤ c g(n). Then for n ≥ n_0, g(n) ≥ (1/c) f(n), so g ∈ Ω(f). The other direction is similar.

Property 2: f ∈ Θ(g) means f ∈ O(g) ∩ Ω(g). By Property 1, g ∈ Ω(f) ∩ O(f), so g ∈ Θ(f).

Property 3: Since for any f, f ∈ Θ(f), we have reflexivity. Property 2 gives symmetry. Lemma 1.9 of the text gives transitivity.

Property 4: We show O(f + g) = O(max(f, g)). Let h ∈ O(f + g). There are c > 0 and n_0 such that for n ≥ n_0, h(n) ≤ c (f + g)(n). Then for n ≥ n_0, h(n) ≤ 2c max(f, g)(n), so h ∈ O(max(f, g)). The other direction is proved similarly.

**1.37**
We will use L'Hôpital's Rule, so we need to differentiate 2^n. Observe that 2^n = (e^(ln 2))^n = e^(n ln 2). The derivative of e^n is e^n, so, using the chain rule, we find that the derivative of 2^n is c 2^n, where c = ln 2 ≈ 0.7. Applying L'Hôpital's Rule repeatedly:

    lim_{n→∞} n^k / 2^n = lim_{n→∞} k n^(k-1) / (c 2^n)
                        = lim_{n→∞} k(k-1) n^(k-2) / (c^2 2^n)
                        = ...
                        = lim_{n→∞} k! / (c^k 2^n) = 0

since k is constant.

**1.39**

    f(n) = 1   for odd n        g(n) = n   for odd n
    f(n) = n   for even n       g(n) = 1   for even n

Neither f ∈ O(g) nor g ∈ O(f). There are also examples using continuous functions on the reals, as well as examples using monotonic functions.
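A quick numeric illustration (ours, not from the text) of the limit in 1.37: for fixed k, the ratio n^k / 2^n is eventually decreasing and heads toward 0, which is what the repeated application of L'Hôpital's Rule predicts.

```java
// Numeric illustration of 1.37: n^k / 2^n -> 0 as n -> infinity.
// For k = 5 the ratio is strictly decreasing once n > k / ln 2 (about 7.2).
public class PolyVsExp {
    public static void main(String[] args) {
        int k = 5;
        double prev = Double.MAX_VALUE;
        for (int n = 10; n <= 60; n++) {
            double ratio = Math.pow(n, k) / Math.pow(2, n);
            if (ratio >= prev) throw new AssertionError("not decreasing at n=" + n);
            prev = ratio;
        }
        // by n = 60 the ratio is already vanishingly small
        if (!(Math.pow(60, k) / Math.pow(2, 60) < 1e-9)) throw new AssertionError();
        System.out.println("ok");
    }
}
```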

### Section 1.6: Searching an Ordered Array

**1.42**
The revised procedure is:

```java
    int binarySearch(int[] E, int first, int last, int K)
 1.     if (last < first)
 2.         index = -1;
 3.     else if (last == first)
 4.         if (K == E[first])
 5.             index = first;
 6.         else
 7.             index = -1;
 8.     else
 9.         int mid = (first + last) / 2;
10.         if (K <= E[mid])
11.             index = binarySearch(E, first, mid, K);
12.         else
13.             index = binarySearch(E, mid+1, last, K);
14.     return index;
```

Compared to Algorithm 1.4 (Binary Search) in the text, this algorithm combines the tests of lines 5 and 7 into one test, on line 10, and the left subrange is increased from mid - 1 to mid, because mid might contain the key being searched for. An extra base case is needed in lines 3-7, which tests for exact equality when the range shrinks to a single entry. Actually, if we can assume the precondition first ≤ last, then lines 1-2 can be dispensed with; this procedure propagates that precondition into recursive calls, whereas the procedure of Algorithm 1.4 does not.

**1.44**
The sequential search algorithm considers only whether x is equal to or unequal to the element in the array being examined, so we branch left in the decision tree for "equal" and branch right for "unequal." Each internal node contains the index of the element to which x is compared. The tree will have a long path, with n internal nodes, labeled with indexes 1 through n, going down to the right. The left child for a node labeled i is an output node labeled i. The rightmost leaf is an output node for the case when x is not found.

**1.46**
The probability that x is in the array is n/m. Redoing the average analysis for Binary Search (Section 1.6.3) with the assumption that x is in the array (i.e., eliminating the terms for the gaps) gives an average of approximately lg(n) - 1. (The computation in Section 1.6.3 assumes that n = 2^k - 1 for some k, but this will give a good enough approximation for the average in general.) The probability that x is not in the array is 1 - n/m. In this case (again assuming that n = 2^k - 1), lg(n+1) = k comparisons are done. So the average for all cases is approximately

    (n/m)(lg n - 1) + (1 - n/m) lg n  =  lg n - n/m.

(Thus, under various assumptions, Binary Search does roughly lg n comparisons, of course.)

**1.47**
We examine selected elements in the array in increasing order until an entry larger than x is found, then do a binary search in the segment that must contain x if it is in the array at all. To keep the number of comparisons in O(log n), the distance between elements examined in the first phase is doubled at each step. That is, compare x to E[1], E[2], E[4], E[8], ..., E[2^k]. (If E[1] > x, examine E[0]. If x is found, the search terminates.) We will find an element larger than x (perhaps maxint) after at most ⌊lg n⌋ + 1 probes (i.e., k ≈ lg n). Then do the Binary Search in the range E[2^(k-1) + 1], ..., E[2^k - 1], using at most k - 1 comparisons. Thus the number of comparisons is at most 2k ≈ 2 lg n, which is in O(log n).

**1.48**
For x ln x, write x ln x as (ln x) / (1/x). By L'Hôpital's rule, the limit of this ratio as x → 0 is the same as the limit of

    (1/x) / (-1/x^2) = -x,

which is 0.
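The two-phase search of 1.47 can be sketched concretely. The version below is ours, not the text's (it reuses java.util.Arrays.binarySearch for the second phase): it probes E[1], E[2], E[4], ... as described, then binary-searches the last interval.

```java
// Doubling-probe ("galloping") search from 1.47: O(log n) comparisons
// overall, even though the array length may be much larger than the
// position of x.
import java.util.Arrays;

public class DoublingSearch {
    // returns the index of x in sorted array E, or -1 if absent
    static int search(int[] E, int x) {
        int n = E.length;
        if (n == 0) return -1;
        if (E[0] == x) return 0;
        int hi = 1;
        while (hi < n && E[hi] < x) hi *= 2;     // doubling phase: ~lg n probes
        if (hi < n && E[hi] == x) return hi;
        int lo = hi / 2 + 1;                     // x can only lie strictly between
        hi = Math.min(hi - 1, n - 1);            // the last two probed positions
        int pos = Arrays.binarySearch(E, lo, hi + 1, x);
        return pos >= 0 ? pos : -1;
    }

    public static void main(String[] args) {
        int[] E = {2, 3, 5, 8, 13, 21, 34, 55, 89};
        for (int i = 0; i < E.length; i++)
            if (search(E, E[i]) != i) throw new AssertionError("i=" + i);
        if (search(E, 4) != -1 || search(E, 100) != -1) throw new AssertionError();
        System.out.println("ok");
    }
}
```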

**1.50**
The strategy is to divide the group of coins known to contain the fake coin into subgroups such that, no matter what the outcome of the weighing, the subgroup known to contain the fake is as small as possible after one more weighing. Then weigh two of those subgroups that have an equal number of coins. If the two subgroups that are weighed have equal weight, the fake is in the third subgroup. Obviously, two equal subgroups would work, but we can do better by making three subgroups that are as equal as possible. The worst-case sizes of the group known to contain the fake are:

    Round 1:  24, 23, 23
    Round 2:  8, 8, 8   (or 8, 8, 7)
    Round 3:  3, 3, 2   (or 3, 3, 1)
    Round 4:  1, 1, 1   (or 1, 1, 0)

So four weighings suffice.
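The split schedule can be checked arithmetically. The sketch below is ours; it assumes the exercise starts with 70 coins (consistent with the 24, 23, 23 first-round split above) and verifies that repeatedly keeping the largest near-equal third reaches a single coin in four rounds, i.e., ⌈log_3 n⌉ weighings.

```java
// Worst-case number of weighings when the suspect group of n coins is
// always split into three near-equal subgroups and the fake lands in
// the largest one: n shrinks to ceil(n/3) each round.
public class Weighings {
    static int rounds(int n) {
        int r = 0;
        while (n > 1) {
            n = (n + 2) / 3;       // worst case: fake is in the largest third
            r++;
        }
        return r;
    }
    public static void main(String[] args) {
        if (rounds(70) != 4) throw new AssertionError();   // 70 -> 24 -> 8 -> 3 -> 1
        if (rounds(24) != 3) throw new AssertionError();
        if (rounds(8) != 2) throw new AssertionError();
        if (rounds(3) != 1) throw new AssertionError();
        System.out.println("ok");
    }
}
```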

## Chapter 2: Data Abstraction and Basic Data Structures

### Section 2.2: ADT Specification and Design Techniques

**2.2**
Several answers are reasonable, provided that they bring out that looking at the current implementation of the ADT is a bad idea.

Solution 1. BinTree.java would be preferable because the javadoc comments in it should contain the preconditions and postconditions for each operation, so it is a reliable source of information. Trying to infer what the ADT operations do by looking at the implementation in Gorp.java is not reliable. Also, as long as the ADT specifications do not change, the behavior of GorpTester.java will remain unchanged, even if the implementation in Gorp.java changes.

Solution 2. BinTree.java would be preferable because the javadoc utility (with an html browser) permits the ADT specifications and type declarations to be inspected without looking at the source code of Gorp.java.

### Section 2.3: Elementary ADTs — Lists and Trees

**2.4**
The proof is by induction on d, the depth of the node. For a given binary tree, let n_d denote the number of nodes at depth d. For d = 0, there is only 1 root node and 1 = 2^0. For d > 0, assume the formula holds for d - 1. Because each node at depth d - 1 has at most 2 children,

    n_d ≤ 2 n_{d-1} ≤ 2 (2^(d-1)) = 2^d.

**2.6**
A tree of height ⌈lg(n+1)⌉ - 2 would, by Lemma 2.2, have at most 2^(⌈lg(n+1)⌉ - 1) - 1 nodes. But

    2^(⌈lg(n+1)⌉ - 1) - 1 < 2^(lg(n+1)) - 1 = (n + 1) - 1 = n.

Because that expression is less than n, any tree with n nodes would have to have height at least ⌈lg(n+1)⌉ - 1.

**2.8**
The point of the exercise is to recognize that a binary tree in which all the left subtrees are nil is logically the same as a list. (It is also possible to make all the right subtrees nil and use the left subtrees to contain the elements, but this is less natural.) We would give full credit, or almost full credit, to any solution that brings out that idea, without worrying too much about details specific to Java. We will give several solutions to demonstrate the range of possibilities. Solutions 2 and 3 have been compiled and tested in Java 1.2.

Solution 1. Its virtue is simplicity. This solution merely uses the BinTree functions and does not even return objects in the List class; all the objects are in the class BinTree. Consequently, this solution doesn't really meet the specifications of the List ADT. (In C, the type discrepancy could be solved with a type cast, but Java is more strict.)

```java
public class List {
    public static final BinTree nil = BinTree.nil;

    public static Object first(BinTree aList) {
        return BinTree.root(aList);
    }

    public static BinTree rest(BinTree aList) {
        return BinTree.rightSubtree(aList);
    }

    public static BinTree cons(Object newElement, BinTree oldList) {
        return BinTree.buildTree(newElement, BinTree.nil, oldList);
    }
}
```

Solution 2. This solution creates objects in the List class, but each List object has a single instance field, listAsTree, which is the binary tree that represents that list. This solution uses only Java features covered in the first two chapters of the text. However, it has a subtle problem, which is pointed out after the code.

```java
public class List {
    BinTree listAsTree;
    public static final List nil = makeNil();

    private static List makeNil() {
        List newL = new List();
        newL.listAsTree = BinTree.nil;
        return newL;
    }

    public static Object first(List aList) {
        return BinTree.root(aList.listAsTree);
    }

    public static List rest(List aList) {
        List newL = new List();
        newL.listAsTree = BinTree.rightSubtree(aList.listAsTree);
        return newL;
    }

    public static List cons(Object newElement, List oldList) {
        List newL = new List();
        BinTree oldTree;
        if (oldList == List.nil)
            oldTree = BinTree.nil;
        else
            oldTree = oldList.listAsTree;
        newL.listAsTree = BinTree.buildTree(newElement, BinTree.nil, oldTree);
        return newL;
    }
}
```

The problem with this implementation is that rest creates a new object instead of simply returning information about an existing object, so the following postcondition does not hold:

    rest( cons(newElement, oldList) ) == oldList.

There is no reasonable way to get around this problem without using subclasses, which are covered in the appendix.

Solution 3. This solution is the "most correct" for Java, but the fine points of Java are tangential to the purpose of the exercise. It is included mainly to confirm the fact that it is possible to do sophisticated data abstraction with type safety, encapsulation, and precise specifications in Java. Here we extend BinTree to List. We assume that the BinTree class has been defined to permit subclasses, with the appropriate protected nondefault constructor, which is accessed by super in the code below. The technique follows appendix Section A.11, in which the class IntList is extended to IntListA, in analogy with the nondefault IntList constructor there.

```java
public class List extends BinTree {
    public static final List nil = (List)BinTree.nil;

    public static Object first(List aList) {
        return BinTree.root(aList);
    }

    public static List rest(List aList) {
        return (List)BinTree.rightSubtree(aList);
    }

    public static List cons(Object newElement, List oldList) {
        List newList = new List(newElement, oldList);
        return newList;
    }

    protected List(Object newElement, List oldList) {
        super(newElement, BinTree.nil, oldList);
    }
}
```

**2.10**

```java
Tree v = Tree.buildTree("v", TreeList.nil);
TreeList vList = TreeList.cons(v, TreeList.nil);
Tree u = Tree.buildTree("u", vList);
TreeList uList = TreeList.cons(u, TreeList.nil);
Tree t = Tree.buildTree("t", TreeList.nil);
TreeList tuList = TreeList.cons(t, uList);
Tree q = Tree.buildTree("q", tuList);
Tree s = Tree.buildTree("s", TreeList.nil);
TreeList sList = TreeList.cons(s, TreeList.nil);
Tree r = Tree.buildTree("r", TreeList.nil);
TreeList rsList = TreeList.cons(r, sList);
TreeList qrsList = TreeList.cons(q, rsList);
Tree p = Tree.buildTree("p", qrsList);
```
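The buildTree/cons calls in 2.10 construct a tree rooted at p with children q, r, s, where q has children t and u, and u has the single child v. The self-contained sketch below uses our own stand-ins for the book's Tree and TreeList ADTs (mock classes, not the text's) to check that shape.

```java
// Mock Tree/TreeList ADT mirroring the calls in 2.10, with a driver that
// verifies the resulting shape: p(q(t, u(v)), r, s).
import java.util.ArrayList;

public class TreeBuildCheck {
    static class Tree {
        final String root;
        final ArrayList<Tree> children;
        Tree(String root, ArrayList<Tree> children) {
            this.root = root;
            this.children = children;
        }
    }
    static ArrayList<Tree> nil() { return new ArrayList<Tree>(); }
    static ArrayList<Tree> cons(Tree t, ArrayList<Tree> rest) {
        ArrayList<Tree> r = new ArrayList<Tree>();
        r.add(t);
        r.addAll(rest);
        return r;
    }
    static Tree buildTree(String label, ArrayList<Tree> kids) {
        return new Tree(label, kids);
    }

    public static void main(String[] args) {
        Tree v = buildTree("v", nil());
        Tree u = buildTree("u", cons(v, nil()));
        Tree t = buildTree("t", nil());
        Tree q = buildTree("q", cons(t, cons(u, nil())));
        Tree s = buildTree("s", nil());
        Tree r = buildTree("r", nil());
        Tree p = buildTree("p", cons(q, cons(r, cons(s, nil()))));

        if (!p.root.equals("p") || p.children.size() != 3) throw new AssertionError();
        Tree qq = p.children.get(0);
        if (!qq.root.equals("q") || qq.children.size() != 2) throw new AssertionError();
        if (!qq.children.get(1).children.get(0).root.equals("v")) throw new AssertionError();
        System.out.println("ok");
    }
}
```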

**2.12**
A newly created node has one node (itself) in its in-tree, so use 1 for the initial node data.

```java
InTreeNode makeSizedNode() {
    return InTreeNode.makeNode(1);
}
```

When setting the parent of a node, we must first remove it from its current parent's tree. This decreases the size of our parent, our parent's parent, and so on all the way up the tree. Similarly, when attaching to the new parent, we must add to the sizes of the new parent and all of its ancestors.

```java
void setSizedParent(InTreeNode v, InTreeNode p) {
    InTreeNode ancestor = InTreeNode.parent(v);
    while (ancestor != InTreeNode.nil) {
        int ancestorData = InTreeNode.nodeData(ancestor);
        ancestorData -= InTreeNode.nodeData(v);
        InTreeNode.setNodeData(ancestor, ancestorData);
        ancestor = InTreeNode.parent(ancestor);
    }
    InTreeNode.setParent(v, p);
    ancestor = InTreeNode.parent(v);
    while (ancestor != InTreeNode.nil) {
        int ancestorData = InTreeNode.nodeData(ancestor);
        ancestorData += InTreeNode.nodeData(v);
        InTreeNode.setNodeData(ancestor, ancestorData);
        ancestor = InTreeNode.parent(ancestor);
    }
}
```

### Section 2.4: Stacks and Queues

**2.14**

```java
public class Stack {
    List theList;

    public static Stack create() {
        Stack newStack = new Stack();
        newStack.theList = List.nil;
        return newStack;
    }

    public static boolean isEmpty(Stack s) {
        return (s.theList == List.nil);
    }

    public static Object top(Stack s) {
        return List.first(s.theList);
    }

    public static void push(Stack s, Object e) {
        s.theList = List.cons(e, s.theList);
    }

    public static void pop(Stack s) {
        s.theList = List.rest(s.theList);
    }
}
```

Each Stack operation runs in O(1) time because it uses at most 1 List operation plus a constant number of steps.

**2.16**
a) The precondition for enqueue would be that the total number of enqueues minus the total number of dequeues must be less than n.

b) An array of n elements is used because there can never be more than n elements in the queue at a time. The indices for the front and back of the queue are incremented modulo n to circle back to the front of the array once the end is reached. It is also necessary to keep a count of the number of elements used, because frontIndex == backIndex both when the queue is empty and when it has n elements in it. (It could also be implemented without the count of used elements by making the array size n + 1, because frontIndex would no longer equal backIndex when the queue was full.)

```java
public class Queue {
    Object[] storage;
    int size, used, frontIndex, backIndex;

    public static Queue create(int n) {
        Queue newQueue = new Queue();
        newQueue.storage = new Object [n];
        newQueue.size = n;
        newQueue.used = 0;
        newQueue.frontIndex = 0;
        newQueue.backIndex = 0;
        return newQueue;
    }

    public static boolean isEmpty(Queue q) {
        return (q.used == 0);
    }

    public static Object front(Queue q) {
        return q.storage[q.frontIndex];
    }

    public static void enqueue(Queue q, Object e) {
        q.storage[q.backIndex] = e;
        q.backIndex = (q.backIndex + 1) % q.size;
        q.used++;
    }
```
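To see the wraparound arithmetic of part (b) in action, here is a condensed, self-contained version of the circular queue (our sketch, not the manual's class) with a small driver that forces backIndex to wrap past the end of the array.

```java
// Circular-array queue: indices wrap modulo the capacity, and a counter
// of used slots distinguishes the empty state from the full state
// (frontIndex == backIndex in both).
public class CircularQueue {
    private final Object[] storage;
    private int frontIndex = 0, backIndex = 0, used = 0;

    CircularQueue(int n) { storage = new Object[n]; }

    boolean isEmpty() { return used == 0; }
    boolean isFull()  { return used == storage.length; }

    void enqueue(Object e) {             // precondition: !isFull()
        storage[backIndex] = e;
        backIndex = (backIndex + 1) % storage.length;
        used++;
    }

    Object dequeue() {                   // precondition: !isEmpty()
        Object e = storage[frontIndex];
        frontIndex = (frontIndex + 1) % storage.length;
        used--;
        return e;
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        if (!q.isFull()) throw new AssertionError();
        if (!q.dequeue().equals(1)) throw new AssertionError();
        q.enqueue(4);                    // backIndex wraps around to slot 0
        if (!q.dequeue().equals(2)) throw new AssertionError();
        if (!q.dequeue().equals(3)) throw new AssertionError();
        if (!q.dequeue().equals(4)) throw new AssertionError();
        if (!q.isEmpty()) throw new AssertionError();
        System.out.println("ok");
    }
}
```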

public static void dequeue(Queue q) {
    q.frontIndex = (q.frontIndex + 1) % q.size;
    q.used--;
}

c) Incrementing frontIndex and backIndex modulo n would be unnecessary, because they could never wrap around. Also, storing used would be unnecessary, as isEmpty could just return (frontIndex == backIndex).

Additional Problems

2.18 The main procedure, convertTree, breaks the task up into two subtasks: finding the node degrees and building the out-tree.

Tree convertTree(InTreeNode[] inNode, int n) {
    // initialize remaining to be the number of children for each node
    int[] remaining = getNodeDegrees(inNode, n);
    // construct the out-tree bottom-up, using remaining for bookkeeping
    return buildOutTree(inNode, n, remaining);
}

int[] getNodeDegrees(InTreeNode[] inNode, int n) {
    int[] nodeDegrees = new int [n+1];
    for (int i = 1; i <= n; ++i)
        nodeDegrees[i] = 0;
    // calculate nodeDegrees to be the number of children for each node
    for (int i = 1; i <= n; ++i) {
        if (! InTreeNode.isRoot(inNode[i])) {
            InTreeNode parent = InTreeNode.parent(inNode[i]);
            int parentIndex = InTreeNode.nodeData(parent);
            nodeDegrees[parentIndex]++;
        }
    }
    return nodeDegrees;
}

Tree buildOutTree(InTreeNode[] inNode, int n, int[] remaining) {
    Tree outputTree;
    TreeList[] subtrees = new TreeList [n+1];
    for (int i = 1; i <= n; ++i)
        subtrees[i] = TreeList.nil;
    // nodes with no children are already "done" and
    // can be put into the sources stack
    Stack sources = Stack.create();
    for (int i = 1; i <= n; ++i) {
        if (remaining[i] == 0)
            Stack.push(sources, i);
    }
    // Process elements from sources, adding them to the TreeList for
    // their parent. When the stack is empty, all nodes will have
    // been processed and we're done.
    while ( ! Stack.isEmpty(sources) ) {
        int sourceIndex = Stack.top(sources);
        Stack.pop(sources);
        InTreeNode sourceNode = inNode[sourceIndex];
        Tree sourceTree = Tree.buildTree(sourceIndex, subtrees[sourceIndex]);
        if (InTreeNode.isRoot(sourceNode)) {
            // If this source node was the root of the input tree,
            // it is also the root of our output tree.
            outputTree = sourceTree;
        } else {
            InTreeNode parentNode = InTreeNode.parent(sourceNode);
            int parentIndex = InTreeNode.nodeData(parentNode);
            subtrees[parentIndex] = TreeList.cons(sourceTree, subtrees[parentIndex]);
            remaining[parentIndex]--;
            if (remaining[parentIndex] == 0)
                Stack.push(sources, parentIndex);
        }
    }
    return outputTree;
}

The buildOutTree procedure can also be implemented without the auxiliary stack, thereby saving some temporary space. It uses a form of the sweep and mark technique that is introduced for graph traversals in Chapter 7. The sweep goes through remaining[] looking for sources; when a source is found, it is immediately processed. If processing a node causes its parent to become a source, the parent is processed immediately (instead of being pushed on a stack). Processed sources are marked with -1 to prevent possible reprocessing later in the sweep. When the sweep completes, all nodes will have been processed and we're done.

Tree buildOutTree(InTreeNode[] inNode, int n, int[] remaining) {
    Tree outputTree;
    TreeList[] subtrees = new TreeList [n+1];
    for (int i = 1; i <= n; ++i)
        subtrees[i] = TreeList.nil;
    // Sweep through remaining[] looking for sources.
    for (int i = 1; i <= n; ++i) {
        int currentNode = i;
        while (remaining[currentNode] == 0) {
            remaining[currentNode] = -1;      // mark as done
            InTreeNode sourceNode = inNode[currentNode];
            Tree sourceTree = Tree.buildTree(currentNode, subtrees[currentNode]);
            if (InTreeNode.isRoot(sourceNode)) {
                // If this source node was the root of the input tree,
                // it is also the root of our output tree.
                outputTree = sourceTree;
            } else {
                InTreeNode parentNode = InTreeNode.parent(sourceNode);
                int parentIndex = InTreeNode.nodeData(parentNode);
                subtrees[parentIndex] = TreeList.cons(sourceTree, subtrees[parentIndex]);
                remaining[parentIndex]--;
                if (remaining[parentIndex] == 0) {
                    // the parent is now a source; move up to process it
                    currentNode = parentIndex;
                }
            }
        }
    }
    return outputTree;
}
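The same stack-based bottom-up construction of Exercise 2.18 can be sketched as runnable Java. This is a simplified stand-in for the text's ADTs: the in-tree is a plain parent array (parent[root] == 0), and the out-tree node holds a label plus its list of child subtrees. All names here are mine:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Runnable sketch of the stack-based buildOutTree (Exercise 2.18).
// Nodes are numbered 1..n; parent[i] == 0 marks the root. Runs in O(n).
public class OutTreeBuilder {
    public static final class Tree {
        final int label;
        final List<Tree> children;
        Tree(int label, List<Tree> children) { this.label = label; this.children = children; }
    }

    public static Tree build(int[] parent, int n) {
        int[] remaining = new int[n + 1];             // children not yet attached
        for (int i = 1; i <= n; i++)
            if (parent[i] != 0) remaining[parent[i]]++;

        List<List<Tree>> subtrees = new ArrayList<>();
        for (int i = 0; i <= n; i++) subtrees.add(new ArrayList<>());

        Deque<Integer> sources = new ArrayDeque<>();  // nodes with all children attached
        for (int i = 1; i <= n; i++)
            if (remaining[i] == 0) sources.push(i);

        Tree outputTree = null;
        while (!sources.isEmpty()) {
            int s = sources.pop();
            Tree sourceTree = new Tree(s, subtrees.get(s));
            if (parent[s] == 0) {
                outputTree = sourceTree;              // root of the out-tree
            } else {
                subtrees.get(parent[s]).add(sourceTree);
                if (--remaining[parent[s]] == 0) sources.push(parent[s]);
            }
        }
        return outputTree;
    }
}
```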

Chapter 3 Recursion and Induction

Section 3.2: Recursive Procedures

3.2 Combining parts 3 and 2 of Lemma 1.4, it suffices to show that the second derivative of x lg x is positive for x > 0:

    d^2(x lg x)/dx^2 = (lg e) d^2(x ln x)/dx^2 = (lg e) d(ln x + 1)/dx = (lg e)(1/x) > 0 for x > 0.

3.4 Statement 2 is the correct one. The proof is by induction on n, the parameter of F. The base cases are n = 1 and n = 2. For n = 1, F(1) = 1 >= 0.1 (3/2)^1 = 0.15. For n = 2, F(2) = 1 >= 0.1 (3/2)^2 = 0.225.

For n > 2, assume that F(k) >= 0.1 (3/2)^k for all 1 <= k < n. [Note that the range of k includes the base cases and is strictly less than the main induction variable; k is our auxiliary variable.] Then letting k = n - 1 in the first case below and k = n - 2 in the second case below [notice that both these values are in the correct range for k, but would not be if the inductive cases began with n = 2],

    F(n - 1) >= 0.1 (3/2)^(n-1)
    F(n - 2) >= 0.1 (3/2)^(n-2)

In this range of n we are given that F(n) = F(n - 1) + F(n - 2), so

    F(n) >= 0.1 (3/2)^(n-1) + 0.1 (3/2)^(n-2) = 0.1 (3/2)^n (2/3 + 4/9) = 0.1 (3/2)^n (10/9) >= 0.1 (3/2)^n.

So the claim holds for all n > 2 also.
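A quick numeric check of the claim in Exercise 3.4 (a sanity check, not part of the proof; names are mine): with F(1) = F(2) = 1 and F(n) = F(n-1) + F(n-2), the bound F(n) >= 0.1 (3/2)^n holds for every n tested:

```java
// Sanity check for Exercise 3.4: F(n) >= 0.1 * (3/2)^n for 1 <= n <= limit.
public class FibBound {
    public static boolean holdsUpTo(int limit) {
        long fPrev = 1, fCur = 1;               // F(1), F(2)
        double pow = 1.5 * 1.5;                 // (3/2)^2
        if (fPrev < 0.1 * 1.5) return false;    // n = 1 case
        for (int n = 2; n <= limit; n++) {
            if (fCur < 0.1 * pow) return false; // bound fails at n
            long next = fPrev + fCur;           // advance to F(n+1)
            fPrev = fCur; fCur = next;
            pow *= 1.5;
        }
        return true;
    }
}
```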

Section 3.5: Proving Correctness of Procedures

3.6 We give the proof for part (a) with the added statements needed for part (b) enclosed in square brackets. We use the lexicographic order on pairs of natural numbers (nonnegative integers). That is, (m, n) precedes (x, y) if m < x, or if m = x and n < y. The proof is by induction on (m, n) in this order.

The base cases are pairs of natural numbers of the form (0, n), where n > 0. (Recall that the precondition of gcd eliminates (0, 0).) In these cases gcd(0, n) returns n. By the definition of divisor, n is a divisor of both 0 and n, so n is a common divisor. [For part (b): No integer greater than n can be a divisor of n, so n is also the greatest common divisor.]

For pairs of natural numbers (m, n) with m > 0, assume the gcd function meets its objectives for pairs (i, j) that follow (0, 0) and precede (m, n) in the lexicographic order. There are two cases in the procedure: m > n and m <= n.

If m > n, gcd(m, n) returns the same value as gcd(n, m). But (n, m) follows (0, 0) and precedes (m, n), so by the inductive hypothesis gcd(n, m) is a common divisor of m and n. [For part (b): the greatest common divisor.] But common divisors and the greatest common divisor don't depend on order, so this value is correct for gcd(m, n).

If m <= n, the function returns gcd(m, nLess), where nLess = n - m >= 0. But (m, nLess) follows (0, 0) and precedes (m, n), so by the inductive hypothesis, gcd(m, nLess) returns a common divisor of m and nLess. [For part (b): the greatest common divisor.] Define d to be the value returned by gcd(m, nLess). Then n/d = nLess/d + m/d has no remainder, so d is a common divisor of m and n. This concludes the proof for part (a). [For part (b): Let h be any common divisor of m and n. It remains to prove that h <= d. But nLess/h = n/h - m/h also has no remainder, so h is a common divisor of m and nLess. Therefore, h <= d.]
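For reference, here is a runnable version of the subtractive gcd that the proof of Exercise 3.6 analyzes, reconstructed from the proof's three cases (the procedure itself appears in the text; this rendering is mine):

```java
// Subtractive gcd, following the case analysis in Exercise 3.6.
// Precondition: m >= 0, n >= 0, and not both zero.
public class Gcd {
    public static int gcd(int m, int n) {
        if (m == 0) return n;        // base case: gcd(0, n) = n
        if (m > n) return gcd(n, m); // reorder; (n, m) precedes (m, n)
        return gcd(m, n - m);        // (m, n - m) precedes (m, n)
    }
}
```

Each recursive call's argument pair strictly precedes (m, n) in lexicographic order, which is exactly why the induction in the proof goes through.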


Section 3.6: Recurrence Equations

3.8 Solution 1. Applying the Master Theorem (Theorem 3.17 of the text), we find that b = 1, c = 2, E = 0, and f(n) = cn in Θ(n^1). Case 3 applies, so W(n) in Θ(cn) = Θ(n).

Solution 2. Using the assumption that n is a power of 2, we can get an estimate for W(n) that will be of the same asymptotic order. To get the formula for W(n), we look at the recursion tree or expand a few terms to see the pattern.

    W(n) = cn + W(n/2) = cn + cn/2 + W(n/4) = cn + cn/2 + cn/4 + W(n/8) = ...
         = cn * sum for i = 0 to lg(n) - 1 of (1/2)^i + 1
         = cn * (1 - (1/2)^lg n) / (1 - 1/2) + 1     (using Equation 1.9 of the text)
         = 2cn (1 - 1/n) + 1
         = 2c(n - 1) + 1

So W(n) in Θ(n).

3.10 Notation is consistent with the Master Theorem (Theorem 3.17 of the text) and/or Exercise 3.9 and Eq. 3.14.

a) b = 1, c = 2, E = 0, f(n) = c lg n in Θ(log^1 n). So use Eq. 3.14 with a = 1. T(n) in Θ(log^2 n).

b) Same as Exercise 3.8. T(n) in Θ(n).

c) b = 2, c = 2, E = 1, f(n) = cn in Θ(n^1). By case 2 of the Master Theorem (or by Eq. 3.14 with a = 0), T(n) in Θ(n log n).

d) b = 2, c = 2, E = 1, f(n) = cn lg n in Θ(n^1 log^1 n). So use Eq. 3.14 with a = 1. T(n) in Θ(n log^2 n).

e) b = 2, c = 2, E = 1, f(n) = cn^2 in Θ(n^2). By case 3 of the Master Theorem, T(n) in Θ(n^2).

Additional Problems

3.12 The recurrence equation for the number of disk moves is:

    T(n) = 2 T(n-1) + 1    for n >= 1
    T(0) = 0

Use Eq. 3.12 as the solution of Eq. 3.11 with b = 2, c = 1, f(0) = 0, and f(n) = 1 for n >= 1:

    T(n) = sum for h = 1 to n of 2^(n-h) * 1 = 2^(n-1) + 2^(n-2) + ... + 1 = 2^n - 1

Alternatively, we can see by inspection that the recursion tree is a complete binary tree of height n, that all the internal nodes have a nonrecursive cost of 1 and the leaves have a nonrecursive cost of 0. So the total nonrecursive cost over the whole tree is 2^n - 1.
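The closed form for Exercise 3.12 can be confirmed against the recurrence directly (a check with names of my choosing, not from the text):

```java
// Exercise 3.12: T(n) = 2 T(n-1) + 1, T(0) = 0, has closed form 2^n - 1.
public class HanoiMoves {
    public static long byRecurrence(int n) {
        return n == 0 ? 0 : 2 * byRecurrence(n - 1) + 1;
    }
    public static long closedForm(int n) {
        return (1L << n) - 1;   // 2^n - 1
    }
}
```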

Chapter 4 Sorting

Section 4.1: Introduction

4.2 a) A worst case occurs when the keys are in decreasing order; the for loop is executed for each value of numPairs from n - 1 down to 1. The number of key comparisons is n(n - 1)/2.

b) If the keys are already sorted, the for loop is executed once, doing n - 1 comparisons and no interchanges.

4.4

a) Suppose the last interchange on some pass is done at the jth and (j+1)st positions. Then the keys in positions j+1 through n must be in order. If they were not, there would be a key that immediately preceded a smaller key, and those two would have been interchanged. We need to show that none of these keys belongs in earlier positions in the array. Let x be the largest key from among E[0], ..., E[j+1]. (If there may be duplicate keys, consider the last instance of x in this range.) In the previous pass through the while loop, x would always have interchanged with the next position, so it must have reached the (j+1)st position. Hence the smallest key among E[j+1], ..., E[n-1] is greater than or equal to all keys among E[0], ..., E[j+1], so E[j+1], ..., E[n-1] are in their proper positions.

b) The idea is to keep track of the position where the last interchange occurred and use that position to reset numPairs, instead of simply decrementing numPairs by 1. We can eliminate the variable didSwitch.

last = n-1;
while (last > 0)
    numPairs = last;
    last = -1;
    for (j = 0; j < numPairs; j++)
        if (E[j] > E[j+1])
            Interchange E[j] and E[j+1];
            last = j;
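Part (b)'s improved Bubblesort can be rendered as runnable Java (variable names follow the pseudocode; the class name is mine):

```java
// Bubblesort that resets numPairs to the position of the last
// interchange, as in Exercise 4.4(b).
public class Bubble {
    public static void sort(int[] E) {
        int last = E.length - 1;
        while (last > 0) {
            int numPairs = last;
            last = -1;                       // stays -1 if no interchange occurs
            for (int j = 0; j < numPairs; j++) {
                if (E[j] > E[j + 1]) {
                    int tmp = E[j]; E[j] = E[j + 1]; E[j + 1] = tmp;
                    last = j;                // remember where the last interchange was
                }
            }
        }
    }
}
```

By part (a), everything past the last interchange is already in final position, so the next pass can stop there.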

c) No, if the keys are in reverse order, the algorithm will still execute the for loop for each value of numPairs from n - 1 down to 1.

Section 4.2: Insertion Sort

4.6 1, n, n-1, ..., 2 and n, n-1, ..., 3, 1, 2.

4.9 The average will be lower. The while loop (in shiftVac) that ﬁnds the place for x stops if it ﬁnds a key equal to x.

4.11 This code uses insert1, which appears in Figure 2.6 of the text; the subroutine insert1() inserts one new element into a sorted linked list, using partial rebuilding.

IntList listInsSort(IntList unsorted)
    IntList sorted, remUnsort;
    sorted = nil;
    remUnsort = unsorted;
    while (remUnsort != nil)
        int newE = first(remUnsort);
        sorted = insert1(newE, sorted);
        remUnsort = rest(remUnsort);
    return sorted;

Analysis: Assume unsorted has n elements. The subroutine insert1 is called with k varying from 0 to n - 1, where k is the number of elements in sorted at the time of the call. When sorted has k elements, the call to insert1 has a best-case time of Θ(1) and average and worst-case times of Θ(k). (This assumes that first, rest, and cons are implemented in constant time; it is reasonable to assume this is a constant.) Therefore, for the overall procedure, the best-case time is Θ(n), and the average and worst-case times are Θ(n^2). In addition, linear space is needed for the recursion stack in the average and worst cases, since insert1 might have a depth of recursion that is about k, and k can be as large as n - 1.

The space usage also depends on how many garbage (unreferenced) objects are created during partial rebuilding; the number of garbage objects is proportional to the running time, and is in Θ(n^2) in the worst case. With an automatic garbage collector, it is reasonable to assume that any garbage collection takes time proportional to the number of garbage (unreferenced) objects collected. With a "mutable" list ADT it would be possible to avoid the creation of garbage objects by changing their instance fields. However, the exercise specified that the list ADT given in the text be used. Note that space required for the output is not counted in this analysis.

Section 4.4: Quicksort

4.13 /** Postcondition for extendSmallRegion:
      * The leftmost element in E[low], ..., E[highVac-1]
      * whose key is >= pivot is moved to E[highVac] and
      * the index from which it was moved is returned.
      * If there is no such element, highVac is returned.
      */

4.15 n(n - 1)/2 key comparisons.

4.17 Consider a subrange of k elements to be partitioned. Choosing the pivot requires 3 key comparisons; this is one more than if E[first] is simply chosen as the pivot. Now k - 3 elements remain to be compared to the pivot (it is easy to arrange not to compare elements that were candidates for the pivot again by swapping them to the extremes of the subrange). So k comparisons are done by the time partition is finished. We can arrange that the median-of-3 always comes out to be the second-smallest key in the range, so that only one element is less than the pivot and k - 2 elements are greater than the pivot. An example of the worst case might be E[first] = 100, E[last] = 98, E[(first+last)/2] = 99, and all other elements are over 100. Then 99 becomes the pivot, and 98 is the only element less than the pivot. The remaining keys can be chosen so that the worst case repeats. Since the larger subrange is reduced by only 2 at each depth of recursion in quickSort, it will require n/2 calls to partition, with ranges of size n, n - 2, n - 4, .... So, using the pairing-up trick mentioned in the text, the total number of comparisons is about n^2/4. An array that is initially in reverse order also requires Θ(n^2) comparisons with the median-of-3 strategy for choosing the pivot.

Total: n 2 moves in partition calls. So 2´k 1µ 2 moves occur on average in partitioning a range of k elements.19 Note: Students might give several different answers depending on whether certain moves are counted. is approximately 1 4n lgn plus a small linear term. 4. and can i be avoided by doing a test. 10 (10 was the pivot). whichever is used. 2. it is far from clear that it would save time in typical cases. as far as the leading term is concerned. However. (Half of these are moves “in place. a) After the ﬁrst call: 1. 6.23 This algorithm is simple and elegant. In extendSmallRegion an “unknown” element is moved if it is probability of which is 1 . using partitionL. The probability of this event is 1 . 8. 3. 9. So ´k 1µ 2 moves occur on average. Thus using the median-of-3 method for picking the pivot element reduces both the average number of comparisons and the average number of moves. This number is doubled if all elements to be merged are ﬁrst copied into a work area.” done in Line 5 of partitionL. After the second call: 8. the 2 probability of which is 1 . c) Quicksort with partition as given in the text does about 1 4 n lg n key comparisons but only about 0 7 n lgn element movements. including its subroutines. No moves in partition. the gion (but not both). the average number of moves goes down proportionally. 10 (9 was the pivot). 6. then the total is about 1 5 n lgn. but we would give full credit (or almost) for any of the reasonable variants that demonstrated a correct understanding of what is happening. We also mention that if the average number of comparisons in Quicksort is lower. 1. Thus the total number of moves done by Quicksort on average. 18 moves (2´n 1µ). for the complete run of Quicksort. In extendLargeRegion an “unknown” element is moved if it is the pivot. In both cases quickSort itself did an additional 20 moves. 5. 4.Chapter 4 Sorting 4. 5. each “unknown” element is examined by extendSmallRegion or extendLargeRethe pivot. 7. 4. 
This is only half the average number of com2 parisons. c) The only advantage of partitionL is its simplicity. 2. 3. 3. Although the test would probably save some time in this worst case. 5. 8. 10 (10 was the pivot). In most cases it does more element movement than partition. The question addresses how many moves are done by Quicksort. plus 2´n 1µ moves for pivots in quickSort itself. Parts (a) and (b) illustrated an extreme difference. 7. Mergesort always does an element movement after each key comparison and so does about n lg n element movements on average. The relatively small number of element movements is the source of Quicksort’s speed in practice. After the second call: the same (1 was the pivot).3. 1. Section 4. 1 Total: 2 ∑n 1 i ´n 1µn moves. 16 moves (2 ´n 2µ). but it uses linear space in the frame stack (unless the compiler performs tailrecursion optimization). whereas partitionL does a quadratic number of moves in this case. 7. quickSort itself does 2´n 1µ moves. before Algorithm 4. Thus the total number of moves done by Quicksort on average. As before. two moves occur in partitionL. using partition. The body of Quicksort does two moves of the pivot element in connection with each call to partitionL or partition. as described in Section 4. almost all of the moves occur in partitionL or partition.5. This is equal 2 to the average number of comparisons. assuming the optimization described in Exercise 4.27 is done. 9.21 The analysis assumes that all initial permutations are equally likely and all elements are distinct. The “take home” message is that partition does a linear number of moves when the initial sequence is in reverse order. a) Whenever the “unknown” element is less than the pivot.5 on page 175. 21 b) After the ﬁrst call: 9. . One move was done in partition. Since there are at most n 1 such calls in the entire run. where partitionL did 90 moves while partition did only 5. if only half of the elements to be merged are copied. 
is approximately 7n lg n plus a small linear term. we shall see that these moves may be neglected in determining the leading term for the average number of moves done by the algorithm. 2.5: Merging Sorted Sequences 4. b) For partition. as suggested in the text. 4. that is. 6.

the subproblem remaining B 0 . . merging the two halves will need only n 2 comparisons. i. By Exercise 1. merged = cons(e1. it doesn’t matter how after moving B 0 to the output area has P´k m 1µ possible permutations. (If A 0 the tie is broken. Once the slots for the k keys from the ﬁrst array are determined. Also. Some people prefer a less verbose style. if (in1 == nil) merged = in2. IntList in2) IntList merged. If A 0 B 0 . listMerge(rest(in1). in2)) : cons(first(in2). we have P´k mµ P´k 1 mµ · P´k m 1µ. merged = cons(e2. the subproblem remaining after moving A 0 to the output area has P´k 1 mµ possible permutations. Then. Thus the number of different output arrangements is the number of ways to choose k slots from m · k. one of the two above subproblems remains. int e2 = first(in2). If A 0 B 0 . else int e1 = first(in1). by expanding a few terms it is easy to see that W ´n µ Let k lg n. if k 0 or m 0 there is only 1 permutation.6: Mergesort 4. else remMerged = listMerge(in1. There are m · k slots in the output array.) Therefore. in2). rest(in2)). rest(in2))). else if (in2 == nil) merged = in1.e. 4.. P ´k m µ k Solution 2.25 Solution 1 (follows hint in text). then W ´nµ n 2 i 1 ∑ 2 · 2 k W ´n k n 2k µ lg n. such as: IntList listMerge(IntList in1. remMerged). So the recurrence relation is W ´nµ W´ n 2 µ ·W´ n 2 µ· n 2 and W ´1 µ 0 For simplicity. IntList in2) return in1 == nil ? in2 : in2 == nil ? in1 : first(in1) first(in2) ? cons(first(in1). for k 0 and m 0. return merged. remMerged). if (e1 e2) remMerged = listMerge(rest(in1).2 in the text we see that n is a solution to this recurrence equation and satisﬁes the boundary conditions. Let P´k mµ be the number of possible permutations of the merged sequences where the ﬁrst sequence has k elements and the second has m. listMerge(in1. m·k . assume that n is a power of 2. the keys from the second array must ﬁll the remaining slots in ¡ order. Let n k · m. k Section 4.26 If the keys are already sorted. 
IntList remMerged.22 Chapter 4 Sorting IntList listMerge(IntList in1.
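The recursive listMerge of Exercise 4.25 can be sketched as self-contained Java, using a minimal immutable list in place of the text's IntList class (all names here are mine):

```java
// Minimal immutable int list and the recursive merge of Exercise 4.25.
public class ListMerge {
    public static final class IntList {
        final int head;
        final IntList tail;
        IntList(int head, IntList tail) { this.head = head; this.tail = tail; }
    }

    public static IntList cons(int e, IntList rest) { return new IntList(e, rest); }

    // Merges two sorted lists; at most one comparison per element consumed,
    // so at most n1 + n2 - 1 comparisons in total.
    public static IntList merge(IntList in1, IntList in2) {
        if (in1 == null) return in2;
        if (in2 == null) return in1;
        if (in1.head <= in2.head)
            return cons(in1.head, merge(in1.tail, in2));
        else
            return cons(in2.head, merge(in1, in2.tail));
    }
}
```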

mergeSort(in. work.29 There are 2D 1 nodes at depth D 1 in the recursion tree. last.length]. int last) if (first == last) if (in out) out[first] = in[first].0 2.0. // else out[first] already = in[first]. as optimized here. last).2 ≥ 2:1 < ≥ < ≥ 1. 23 Now the idea is that the revised mergeSort.2 . 2:0 < ≥ 1:0 1:0 < 1:2 ≥ 2. there are only two distinct arrays.21 showed that Quicksort does about 0 7 n lg n element movements on average.7: Lower Bounds for Sorting by Comparison of Keys 4. 4. Then mergeSort calls merge to merge from the two subranges of work into the desired subrange of out.1. So there are 2´2D 1 Bµ leaves (base cases for Mergesort) at depth D. The changes to merge. is about n lg n element movements. mid). merge(work. b) Now merge does one move per comparison because its input and output areas are different. then ´2D 1 Bµ have two children each. are straightforward. mergeSort(in. mid+1. out. Element[] work.Chapter 4 Sorting 4. out. E. out). so the average for Mergesort.1 0. Altogether there are n bases cases for Mergesort.2.0 0. first. Although there are three parameters (in. else mid = (first + last) / 2. and so it arranges for the recursive calls to put their output in two subranges of work (its third parameter). Element[] out.27 a) Mergesort is revised to have a work array as an additional parameter. actually in its subroutines. the second index is the index of the pivot element. mergeSort(Element[] in.4 in the text. int last) Element[] W = new Element[E. work). mid.1 < 1. In each internal node in the decision tree below. first. int first.31 The comparisons are done in the partition algorithm (Algorithm 4.1. first. Section 4. If B of these have no children. anticipates that its output needs to be in out (its second parameter).3). last). out. A wrapper might be used to allocate the work array: mergeSortWrap(Element[] E. shown below. W. mergeSort(E. so n 2´2D 1 Bµ · B 2D B. compared to Algorithm 4. Exercise 4. int first. work.0.2. E and W.

68.34 Arranging the elements in a binary tree: 25 19 5 3 7 10 12 4 15 13 No. vacant is at H[8]. so so keep going with h 1. 98. each of the ´n 1µ 2 internal nodes has two children. move 69 to H[16]. the total is n 1. b) Since the keys are in decreasing order. the last internal node has only one child. 84. so two key comparisons are done by FixHeap for each one. In both cases. move 93 to H[4]. c) Best case. The overhead of an extra call to FixHeap is eliminated. Compare K with parent of H[8]. If n is odd.42 a) K H[100] 1. Compare 93. but there is no decrease in the number of key comparisons. Compare 85. so keep going with h 3. then compare K to 37. which contains 93. which contains 69. Compare 97.35 The array containing the heap. K is smaller. 96. If n is even. Compare K with parent of H[32]. . 4. K is smaller. Compare 99. no keys move. vacant 1 and h 6.40 The statement to be added is Interchange E[1] and E[2]. two key comparisons are done. “Pull out” 100 from H[1]. FixHeap does only one or two key comparisons each time it is called. Compare 69. so FixHeap does only one key comparison at that node. 92.8: Heapsort 4. it’s not a heap.24 Chapter 4 Sorting Section 4. 4. is: Y T X P O E M I C L 14 key comparisons are needed. For each other internal node. beginning at index 1. because 5 is smaller than its child 7. 36. so move 37 to H[32] and insert K in H[64]. 4. vacant is at H[16]. 4. Compare 37. vacant is at H[2]. move 85 to H[8]. move 99 to H[1].37 a) 9. move 97 to H[2]. vacant is at H[4]. As fixHeapFast begins. vacant is at H[32]. K is smaller.

44 The goal is to show that lg´ If h 2k 1. If there are n books and the list of publishers is sorted and searched by binary search. One sequential pass through the bills ﬁnds those that are unpaid. then lg´ 2k 1 1 and the left side lg´ 1 2h ·1 µ ·1 k. Consider an initial sequence in the reverse of sorted order. 25 c) fixHeap does 12 comparisons. Assuming the number of bills and the number of checks are approximately equal. the number of key comparisons is approximately 30 lg 30 1 4 £ 30 · n lg30 = 5n · 108. . Again. one for each publisher. then do binary search for each check. Otherwise. and so on.46 The radix (hence the number of buckets) is squared. An alternative solution would be to sort just the bills. so lg´h · 2µ Section 4. But lg´h · 1µ is not an exact integer in this lg´h · 1µ . and letting n be the approximate number. 2 Section 4. then check through the piles for bills without corresponding checks. 4. Then scan the list ignoring duplicates and count distinct people. or a specialized hash function can be designed to compute the proper counter index given a publisher’s name. Scan the list of books and increment the appropriate counter. 2 at each level except the leaf. this method does Θ´n lognµ steps. The list of publishers can be alphabetized. b) Use an array of counters.Chapter 4 Sorting b) 4 · 3 · 2 9. 1 is an integer. (Depending on the sorting algorithm used. and an equal number to 2 sort elements 1 k · 1 2k · 1 . This phase will cost about 1 n2 k.) The number of steps is in Θ´n log nµ.11: Radix Sorting 4. Also. Sort this list. which is in Ω´n2 µ.10: Shellsort 4. the right side 1 2 ´h · 2 µ µ 1 2 h · 1µ · 1 lg´2 If h is odd. again as was to be shown. the number of steps is in Θ´n lognµ. where k 1 2 h · 1µ · 1 1 2h lg´h · 1µ for all integers h 1 k. c) Enter the person’s name or identiﬁcation number from each record into an array. as was to be shown. so the last expression equals lg´h · 1µ .48 a) Sort both the bills and the checks by phone number. 
and the total number of steps is in Θ´nµ. case (here we use the fact that h 0 as well as even). 2 If h is even the last expression in the above equation equals lg´h · 2µ . Additional Problems 4. it may be convenient to eliminate duplicates during the sort. marking the bills that are paid. The ﬁrst phase of Shellsort (with increment k) will do about 1 n2 k2 comparisons to sort elements 0 k 2k . 2 1 ´h · 2µ h · 1.45 Suppose the largest of the 5 constants is k.

51 a) Insertion Sort: Stable. we would need a list of the sublists. which is now h-sorted. 1. 1. and Shellsort can be adapted to linked lists within the same asymptotic order. 2. and Radix Sort are all good answers. which in general must have Θ´nµ slots to achieve linear-time sorting. T ´n µ Ô Ô Ô Ô cn · n c n · n1 4T ´n1 4 µ cn · nT ´ nµ cn · cn · n1 2n1 4 T ´n1 cn · cn · n 1 2 ·1 4 4 µ ·n 1 8 cn 1 4 T ´n1 8 8 µ cn · cn · cn · n1 kcn · n 1 1 2k 2·1 4·1 8 1 2k T ´n1 µ T ´n µ The boundary condition is stated for T ´2µ. but this goes against the spirit of the question. 1. Thus we solve 2 Raising both sides to the power 2k gives 22 Then taking logs twice gives 2k So T ´nµ cn lg lg n · n1 1 cn lg lg n · n1 1 2lglg n lgn k n1 2k n lg n and k lg lg n ¾ Θ´n log log nµ 4. 1. 2. whereas it is possible with an array. but a sizable increase in the constant factor. it must be possible to access any element in this array in constant time. using Insertion Sort for the sorting. c) Bubblesort: Stable. Shellsort. it can overlap with the creation of the h sublists. One could argue that the binary tree can be implemented with linked lists. f) Mergesort: Stable. 3. 2 and h1 1. These extra steps take only linear time per increment. However. With considerable trouble it might use the usual binary tree structure with edges directed away from the root. (Actually. 3.53 There are several reasonable answers for this open-ended question. Heapsort requires some implementation of a binary tree.49 Expand the recurrence to see the pattern of the terms. That is. and this is true. and would degrade to a quadratic-time sort in any straightforward adaptation. rather than Ω´log nµ. So fixHeap. What matters most is if the choices are supported by good reasons. Shellsort cannot be easily adapted to linked lists. 1. Although Radix . Radix Sort also depends intimately on the buckets array. Heapsort. Note: This exercise was updated in the second printing. e) Heapsort: Not stable: 1. 
if the array were simulated by a linked list in a straightforward manner. where there’s a will there’s a way. But there is no way to jump long distances in a linked list in constant time. g) Shellsort: Not stable: 2. d) Quicksort: Not stable: 2.) Then go through the sublists in rotation and assemble them into one list.) Then sort each of the sublists. as with Heapsort. To sort subsequences separated by an increment h.26 Chapter 4 Sorting 4. would require Ω´nµ time in the worst case. There is no easy way to jump by the appropriate increment in constant time to ﬁnd the key to be compared. 4. 3. one could make a pass through the linked list and create h sublists. There is no way to use linked lists in a natural manner to implement Heapsort in Θ´n log nµ. for example. with h2 h) Radix Sort: Stable. taking elements in rotation. Be sure that the errata in the ﬁrst printing (available via Internet) are applied if you are using the ﬁrst printing. (For bookkeeping. b) Maxsort: Not stable: 1. so they do not increase the asymptotic order for the algorithm.

enough CEX instructions are done to let each new key ﬁlter all the way to the beginning of the array if necessary. One idea to keep in mind is that sorted lists can be constructed from the largest element back to the smallest. CEX 2.3 b) The following algorithm is based on Insertion Sort. W. // index of last red int u. n. Some students who follow the algorithms too literally observe that there are difﬁculties in using linked lists to exactly imitate the way an algorithm works on arrays. so there are n iterations. r ++.) 4.2 CEX 4.28). when switching data structures. Procedures can be found in Lisp texts. At an intermediate step the elements are divided into four We assume the elements are stored in the range 1 regions: the ﬁrst contains only reds.11).3 CEX 1. This is actually easier to do with lists than it is to do in place with arrays. 27 With each iteration of the while loop either u is incremented or b is decremented.4 CEX 2. u = 1.3 CEX 1. (From E. Since there are no test-and-branch instructions. // index of first unknown int b.2.5 CEX 3.3. it would degrade to quadratic time. b = n+1. u ++. for example.2 Insert x4 Insert x5 CEX 3. and the fourth contains only blues. else if (E[u] == white) u ++. int r. A Discipline of Programming.22). if it were required to use a list in place of the buckets array. Dijkstra. described in the comments below. While this is true for partition. 4. rather than the code. Thus the central idea of partition is to partition the set into “small” and “large” subsets whose keys do not overlap. Mergesort (see Exercise 4.55 This is an extension of the strategy used by partitionL. Quicksort (see Exercise 4.3 CEX 1.57 a) Time step 1 2 3 Instructions CEX 1. Time step 1 2 3 4 5 6 7 Instructions Insert x2 CEX 1. There are three indexes. // index of first blue r = 0. else // (E[u] == blue) Interchange E[b-1] and E[u]. The algorithm is shown for n 5. and Bubblesort can be adapted fairly easily for linked lists. Maxsort. 
the second contains only whites. the third contains elements of unknown color. b --. CEX 3.4 CEX 1.Chapter 4 Sorting Sort uses lists as well as the buckets array.2 .2 Insert x3 CEX 2.4 CEX 2.4 CEX 2. while (u < b) if (E[u] == red) Interchange E[r+1] and E[u]. the general form should be clear. Insertion Sort (see Exercise 4. it is preferable to think of adapting the ideas.
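The one-pass three-color partition of Exercise 4.55 can be rendered as runnable Java, with the ints 0/1/2 standing in for red/white/blue and zero-based indices in place of the solution's 1-based pseudocode (class and method names are mine):

```java
// Dutch-national-flag partition for Exercise 4.55: one pass, in place.
// Colors: 0 = red, 1 = white, 2 = blue.
public class FlagSort {
    public static void partition(int[] E) {
        int r = -1;            // index of last red
        int u = 0;             // index of first unknown
        int b = E.length;      // index of first blue
        while (u < b) {
            if (E[u] == 0) { swap(E, ++r, u); u++; }  // extend red region
            else if (E[u] == 1) u++;                  // extend white region
            else swap(E, --b, u);                     // extend blue region; u unchanged
        }
    }

    private static void swap(int[] E, int i, int j) {
        int t = E[i]; E[i] = E[j]; E[j] = t;
    }
}
```

Each iteration either increments u or decrements b, so there are exactly n iterations, as the solution's analysis requires.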

so Insertion Sort can be implemented in linear time using simultaneous CEX instructions. Insertion of xn, the last column, takes n - 1 time steps, and there are n - 2 earlier time steps needed to get the other insertions started, so the total is 2n - 3. See the odd-even transposition sort in Selim Akl, Parallel Sorting Algorithms, for another interesting approach.

4.59 This is a good problem to spend some time on in class because it has a variety of solutions that can be used to illustrate many points about developing good algorithms. The problem is discussed at length in Bentley, Programming Pearls (Addison-Wesley, 1986). Many students turn in complicated solutions with a page and a half of code, yet each of the solutions described below can be coded in fewer than ten lines. I (Sara Baase) write the following quotation from Antoine de Saint-Exupery on the board before I start discussing the problem: "A designer knows he has arrived at perfection not when there is no longer anything to add, but when there is no longer anything to take away."

a) Solution 1: Consider each starting point, s, and each ending point, e, for a subsequence (0 <= s <= e <= n-1), and compute the sum of each subsequence, always saving the maximum sum found so far. There are Θ(n²) subsequences, each with an average of roughly n/2 terms to be summed, so this brute-force method takes time in Θ(n³). The number of comparisons done to find the maximum sum is smaller than the number of additions.

Solution 2: Fix the starting point s. As e varies from s to n-1, compute each new subsequence sum by adding the next element to the current sum, using only one addition, always saving the maximum sum found so far. Do this for each s. For each s, n - s - 1 additions are done, so the total is in Θ(n²). The number of comparisons of sums is roughly equal to the number of additions, so this solution also takes time in Θ(n²).

Both of the solutions above consider every subsequence explicitly, and some students argue that this is necessary and therefore any solution would require time in Ω(n²). The next solution is linear, hence shows the subtlety sometimes necessary in lower bounds arguments.

Solution 3: Observe that if an element in a particular position is positive, or if the sum of a subsequence ending at that position is positive, then the maximum subsequence cannot begin at the next position, because including the positive element or subsequence would increase the sum. Thus each possible starting point for a subsequence does not have to be explicitly considered. The only places where a new subsequence must be started are immediately following a position where the sum has become negative. Here's the solution:

    maxSoFar = 0;
    currentSum = 0;
    for (i = 0; i < n; i ++)
        currentSum = currentSum + E[i];
        if (currentSum < 0)
            currentSum = 0;
        else if (currentSum > maxSoFar)
            maxSoFar = currentSum;
    return maxSoFar;

There are other solutions. Bentley describes a Divide and Conquer solution that runs in Θ(n log n) time. It is more complicated than the solutions described here, but it is also interesting to present in class.

b) Suppose an algorithm does not examine some element, say E[i], and suppose its answer is M. Simply change E[i] to M + 1. The algorithm would give the same response, and it would be wrong. Thus any (correct) algorithm must examine each element, hence do Ω(n) steps.

4.61 There are many reasonable answers for this open-ended question. What matters most is whether the choices are supported by good reasons.
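Solution 3 above translates directly into runnable code. A Python sketch following the pseudocode's variable names (the manual's code is Java-style pseudocode):

```python
def max_subsequence_sum(E):
    """Linear-time Solution 3: restart the running sum when it goes
    negative; the maximum subsequence never starts inside a stretch
    whose running sum is still positive."""
    max_so_far = 0      # the empty subsequence has sum 0
    current_sum = 0
    for x in E:
        current_sum += x
        if current_sum < 0:
            current_sum = 0       # a new subsequence must start after here
        elif current_sum > max_so_far:
            max_so_far = current_sum
    return max_so_far
```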

a) There are two basic approaches to this kind of sorting problem. One is to throw everything together and sort once on a combined key of course and student name. Another is to sort each class individually by student name, doing 5000 separate sorts. Because sorting costs grow faster than linearly, many small sorts will be cheaper than one large sort. The exercise implied that the input data was already grouped into classes. If not, then Radix Sort would probably be the best way to group it into classes so that each class can be sorted as a small set.

Having decided to sort each class individually, and given that most classes have about 30 students, it might be more expedient to use a simple method like Insertion Sort, even though its asymptotics are not favorable. There is no reason to expect the worst case for Insertion Sort, so let's look briefly at some average cases. On a set of 30, Insertion Sort does about 225 comparisons on average, while Mergesort does about 150. But Mergesort probably has more overhead per comparison than Insertion Sort, so the two might be quite close in time used for sets of 30. Because of the small sizes, the space overhead incurred by Mergesort is not a serious drawback. Since the largest class will be 200 at most, for the occasional large classes, Insertion Sort should use about 10,000 comparisons on average, while Mergesort would use about 1600; even allowing for Mergesort's higher overhead per comparison, it is probably 4 to 5 times faster than Insertion Sort. We might consider Quicksort also; if the data is randomly ordered, then Quicksort might perform slightly better than Mergesort on average because it does fewer moves and the data records are rather sizable. But suppose we frequently have data that is partially sorted, rather than random? This is a reasonable possibility when the data is extracted from another database, which it would be if it was in the order in which students registered. Quicksort would tend to degrade toward its worst case, Mergesort would stay about the same, and Insertion Sort would run faster than average.

A final issue to consider is the role of the student records, which are separate from the class records. The problem statement does not make it clear whether information is needed from the student records to compose the class lists. (This is often the case in real-world problems.) If such information is needed, the matching student record for each class record can be looked up after the class records are sorted, using the disk address in the class record. Readers who are familiar with the relative speeds of disk access and internal computations will realize that the time for these disk accesses will dwarf the sorting time, so the chosen sorting method is almost irrelevant in this case. Trying to optimize the pattern of disk accesses goes beyond the scope of the text.

b) The main problem with distributional sorts in a computer is the need to arrange space for buckets that will have an unknown number of items in them. For manual sorting of exams this is not a problem, as you just pile them up; if necessary, the person can shuffle things around to make enough space somewhere. So simply distribute into 26 main piles based on first letter. Each of these piles can probably be sorted ad hoc, but if one or two need a further breakdown, use the second table to distribute into 26 piles based on second letter.

4.63 One point to this problem is that you can't do any better than the obvious method (if only comparisons are available).

a) Simply sort them with a comparison sort using three-way comparisons. If any three-way comparison is "equal," there are duplicates. A little thought reveals that if there are duplicates, then there must be an "equal" comparison at some point, so if sorting completes without one, all keys are distinct, and the sorting procedure is correct. If three-way comparisons are not convenient, do a post-scan of the sorted array to check for equal adjacent elements.

b) The time is Θ(n log n) using Mergesort or Heapsort. See Exercise 5.19.

c) Simply count how many have each key value, using an array of 2n + 1 entries to store the counts. (We can quit as soon as any count becomes 2.)

4.65 To simplify the notation, let m denote the number of elements in E2; m is about lg k, and n = k + m, so m is in Θ(log n). For the first solution, sort the unsorted elements in E2 by themselves with Θ(m log m) = Θ(log n log log n) comparisons. Merge them with the sorted elements of E1, working right-to-left (or high-to-low), using the extra space at the end of E1 for the initial output area. The merge step requires n - 1 comparisons at most, so the total number of comparisons is in Θ(n). Since any method requires Ω(n) element movements in the worst case, it is sort of pointless to try to reduce the number of comparisons below Θ(n), but it is possible.
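The sort-and-scan test of Exercise 4.63 can be sketched in a few lines of runnable Python (the function name is mine; the manual's code is Java-style pseudocode):

```python
def has_duplicates(keys):
    """Exercise 4.63(a) flavor: sort with any Theta(n log n) comparison
    sort, then one linear post-scan of adjacent pairs. Equal keys must be
    adjacent in sorted order, so the scan finds every duplicate."""
    s = sorted(keys)                 # Theta(n log n) comparison sort
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))
```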

The second method will do o(n) comparisons: binary search in E1 (see Exercise 4.8) reduces the number of comparisons to Θ(log² n). First sort E2, as before. (Sorting E2 is not strictly necessary, but it simplifies the description of the algorithm.) After sorting E2, declare an array rankE1 with m entries. This array will record where each element in the sorted E2 array belongs in the final merged sequence. For each j, 0 <= j < m, use binary search in E1 to find out where E2[j] belongs in sorted order. If it is discovered that E2[j] should immediately precede E1[r], then store r in rankE1[j]. This step requires about m lg k comparisons. But m lg k is in Θ(log² n), and this dominates the cost of sorting E2. (A related technique comes up in Chapter 14, where it is called cross-ranking.)

Once we know where every element of E2 belongs in the merged array, we can perform the merge without doing any further comparisons. We work right-to-left through rankE1. For all i such that rankE1[j] <= i < rankE1[j+1], the element initially in E1[i] must move to E1[i+j+1], and E2[j] will go into E1[j+r]. This means that when E1 and E2 are merged, only n element movements are needed. Actually, the array rankE1 is not needed, as its elements can be computed on the fly. With all this machinery, however, reducing the number of comparisons does not reduce the number of element movements.

For a third method, which is quite simple, insert each element from E2 into E1, in the manner of Insertion Sort. If done naively, this requires about (k + (m-1)/2)m comparisons and element movements in the worst case, which is in Θ(n log n). Binary search in E1 again reduces the number of comparisons to Θ(log² n), but it does not reduce the number of element movements.
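The second method can be made concrete. A Python sketch, not the manual's code: names are mine, `bisect` plays the role of the binary searches, and the rank array is kept explicit rather than computed on the fly. The merge loop does element movements only, no key comparisons.

```python
import bisect

def merge_with_few_comparisons(E1, E2):
    """Exercise 4.65, second method (sketch): sort the short array E2,
    binary-search each E2 element's rank in E1, then merge right-to-left
    with no further comparisons and only n element movements."""
    E2 = sorted(E2)                                  # Theta(m log m)
    m, k = len(E2), len(E1)
    rank = [bisect.bisect_left(E1, x) for x in E2]   # ~ m lg k comparisons
    out = E1 + [None] * m            # extra space at the end of E1
    j = m - 1
    for i in range(k - 1, -1, -1):   # work right-to-left
        while j >= 0 and rank[j] > i:
            out[i + j + 1] = E2[j]   # E2[j] goes into slot rank[j] + j
            j -= 1
        out[i + j + 1] = E1[i]       # E1[i] shifts right by j + 1 slots
    while j >= 0:                    # E2 elements preceding all of E1
        out[j] = E2[j]
        j -= 1
    return out
```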

Chapter 5 Selection and Adversary Arguments

Section 5.1: Introduction

5.2

a) For each comparison, the adversary considers how many permutations of the keys are consistent with each of the two possible answers. It gives the response consistent with the most permutations. Thus after each comparison, the number of remaining permutations is at least half as many as before the comparison. There are n! possible permutations in all, so the adversary makes any algorithm do at least lg(n!) comparisons in the worst case. This is the same result we obtained in Theorem 4.10.

b) The adversary keeps track of all remaining possible arrangements of the keys consistent with the responses it has given to the algorithm and the fact that the two sequences being merged are sorted. It always chooses a response to preserve at least half the remaining possibilities. Exercise 4.25 tells us that the total number of possible arrangements is

    C(n, n/2) = n! / ((n/2)! (n/2)!)

Thus the adversary can force an algorithm to do at least lg C(n, n/2) comparisons. It requires some care to obtain a rigorous lower bound on this quantity:

    lg C(n, n/2) = lg(n!) - 2 lg((n/2)!)

The difficulty is that the simple lower bound given in Eq. 1.18 cannot be used on the negative terms.

Solution 1. By inspection of Stirling's formula (Eq. 1.19 in the text) we can write

    n! = Θ(√n (n/e)^n)
    lg(n!) = n (lg n - lg e) + Θ(log n)
    lg((n/2)!) = (n/2)(lg n - 1 - lg e) + Θ(log n)
    lg C(n, n/2) = n ± O(log n)    for n >= 2 and even.

We wrote "±O(log n)" to remind ourselves that we do not know whether this term is positive or negative; also we do not know (from the above analysis) whether it is ±Θ(log n), because the leading terms might cancel. Theorem 4.4 gave a lower bound of n - 1 for this problem, which is the best possible. Therefore the result given by this adversary argument is very close to the result of Theorem 4.4, but is weaker by a log term.

Solution 2. We can carry one more term in our simplification of Stirling's formula and get a more precise bound:

    lg(n!) = n (lg n - lg e) + (1/2) lg n + Θ(1)
    lg((n/2)!) = (n/2)(lg n - 1 - lg e) + (1/2) lg n + Θ(1)
    lg C(n, n/2) = n - (1/2) lg n ± O(1)    for n >= 2 and even.

This degree of analysis on the adversary argument gives us a lower bound with the same leading term (n) as that given in Theorem 4.4, but the second order term is not known. (There is some abuse of notation in treating Θ( ) and O( ) as functions, but the meaning is clear here.)

Solution 3. Applying Stirling's formula carefully gives the following bounds, which are often useful in their own right:
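A quick numeric check of the Solution 2 estimate (this check is mine, not part of the manual): the gap between n - (1/2) lg n and the exact lg C(n, n/2) stays bounded and settles near lg √(2/π), approximately -0.326, which is what Solution 3 pins down.

```python
import math

def lg_binom_central(n):
    """Exact lg C(n, n/2) for even n."""
    return math.log2(math.comb(n, n // 2))

for n in (10, 100, 1000):
    estimate = n - 0.5 * math.log2(n)      # Solution 2, dropping the O(1) term
    gap = estimate - lg_binom_central(n)   # bounded; tends to ~0.326 as n grows
```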

    √(2/π) (1 - 1/(4n)) (2^n / √n)  <=  C(n, n/2)  <=  √(2/π) (1 + 1/(11n)) (2^n / √n)

Therefore the "±O(1)" term in Solution 2 is actually between -0.32 and -0.62, and finally:

    lg C(n, n/2) >= n - (1/2) lg n - 1    for n >= 2 and even.

Section 5.3: Finding the Second-Largest Key

5.4

a) By examination of the for loop we see that the array positions are filled in reverse order from n - 1 down to 1, and that each position is assigned a value once and not changed. We use an informal induction argument to show that after heapFindMax executes, each position contains the maximum of all the leaves in its subtree; in particular, E[1] will have the maximum of all the leaves, the max of the original keys. The base cases are the leaves, positions n through 2n - 1. The claim is clearly true for them. Suppose the claim is true for all positions greater than k, and consider node k. It is filled when last = 2k. At that time, position k gets the maximum of its children, positions 2k and 2k + 1, each of which, by the induction hypothesis, is the maximum of the leaves in its subtree. So position k gets the maximum of the leaves in its subtree.

b) Let's assume the keys are distinct. At each internal node on the path down from the root along which each node contains a copy of max, the key in that node is the same as one of its children; the other child lost to the max when that node was filled. So, as we move down the tree from the root along the path where each node has a copy of max, the other child of each node we pass contains a key that lost to the max.

c)  max = E[1];
    secondLargest = -∞;
    i = 1;
    while (i < n)
        if (E[2*i] == max)
            i = 2*i;
            if (E[i+1] > secondLargest)
                secondLargest = E[i+1];
        else
            i = 2*i+1;
            if (E[i-1] > secondLargest)
                secondLargest = E[i-1];
    return secondLargest;

5.6

a) Worst case number of comparisons = 1 (before the for loop) + 2(n - 2) (in the loop) = 2n - 3. The worst case occurs for keys in increasing order.

b) One comparison is done before the loop, and each time through the loop either one or two comparisons are done. Two comparisons are done if and only if E[i] is one of the two largest keys among the first i keys; the probability for this is 2/i. (The probability that E[i] is one of the i - 2 smaller keys is (i - 2)/i.) So the average number of comparisons is

    A(n) = 1 + Σ_{i=3}^{n} (1 + 2/i) = n - 1 + 2 Σ_{i=3}^{n} 1/i ≈ n - 1 + 2(ln n - 1) = n - 3 + 2 ln n.

Equation 1.11 was used to estimate Σ 1/i.
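The tournament-tree construction of 5.4 and the walk down the max's path can be combined into one runnable Python sketch (names mine; leaves stored at positions n through 2n-1 as in the solution; keys assumed distinct, as in part (b)):

```python
def heap_find_max_then_second(keys):
    """Fill a tournament tree bottom-up (each internal node gets the max of
    its two children), then walk down the path of nodes holding copies of
    the max; the second largest is the best key that lost to the max."""
    n = len(keys)
    E = [None] * n + list(keys)        # leaves at positions n .. 2n-1
    for k in range(n - 1, 0, -1):      # heapFindMax: fill in reverse order
        E[k] = max(E[2 * k], E[2 * k + 1])
    maximum, second = E[1], float("-inf")
    i = 1
    while i < n:                       # follow the copies of the max downward
        if E[2 * i] == maximum:
            second = max(second, E[2 * i + 1])   # right child lost to max
            i = 2 * i
        else:
            second = max(second, E[2 * i])       # left child lost to max
            i = 2 * i + 1
    return maximum, second
```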

Section 5.4: The Selection Problem

5.8

a) The idea is to partition the array around a pivot, as Quicksort does, and then recursively search only the one segment that contains the k-th smallest key.

    /** Precondition: first <= k <= last. */
    Element findKth(Element[] E, int first, int last, int k)
        Element ans;
        if (first == last)
            ans = E[first];
        else
            Element pivotElement = E[first];
            Key pivot = pivotElement.key;
            int splitPoint = partition(E, pivot, first, last);
            E[splitPoint] = pivotElement;
            if (k < splitPoint)
                ans = findKth(E, first, splitPoint - 1, k);
            else if (k > splitPoint)
                ans = findKth(E, splitPoint + 1, last, k);
            else
                ans = pivotElement;
        return ans;

b) As with Quicksort, we have a worst case if each call to partition eliminates only one key. If the input is sorted, findKth(E, 1, n, n/2), which is the call that would be used to find the median, will generate successive calls with first = 1, 2, ..., n/2 and last = n. For all of these calls the subrange that is partitioned has at least n/2 elements, so the number of comparisons is in Ω(n²). Also, findKth never does more comparisons than Quicksort: Quicksort must recursively sort both segments after partitioning the array, whereas findKth need search only one segment. So the worst case of findKth is in Θ(n²).

c) Solution 1. A recurrence for the average number of comparisons when there are n elements and rank k is to be selected is:

    T(n, k) = n - 1 + (1/n) [ Σ_{i=0}^{k-1} T(n-1-i, k-1-i) + Σ_{i=k+1}^{n-1} T(i, k) ]    for n > 1, 0 <= k < n
    T(1, 0) = 0

The running time will be of the same asymptotic order as the number of comparisons.

Solution 2. To avoid a 2-parameter recurrence we might settle for an upper bound expression instead of an exact recurrence. Let T(n) denote the average number of comparisons when there are n elements and k has its most unfavorable value; that is, T(n) = max_k T(n, k). Then

    T(n) <= n - 1 + (1/n) max_k [ Σ_{i=0}^{k-1} T(n-1-i) + Σ_{i=k+1}^{n-1} T(i-1) ]

d) Working with Solution 1 above, we will guess that the solution is T(n, k) <= cn, where c is a constant to be chosen, and try to verify the guess. The right-hand side of the recurrence, with the guessed solution substituted in, becomes:

    n - 1 + (1/n) [ Σ_{i=0}^{k-1} c(n-1-i) + Σ_{i=k+1}^{n-1} c(i-1) ]
      = n - 1 + (1/n) [ (1/2) k (n-1 + n-k) c + (1/2)(n-1-k)(k+n-2) c ]
      = n - 1 + (c/2n)(n² + 2kn - 2k² - 3n + 2)
      = n (1 + (1/2)c) + ck(1 - k/n) + c(-3/2 + 1/n) - 1

We need to choose c so that this expression is at most cn when 0 <= k < n. Since ck(1 - k/n) <= cn/4, we choose c = 4. Then the expression is at most 3n + n - 7 + 4/n <= 4n, and the desired inequality holds. In other words, the average number of comparisons is at most 4n, which is linear in n. This method of solution also works for the recurrence for T(n) in Solution 2.
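The findKth idea in runnable form. This Python sketch (mine) partitions into fresh lists rather than in place, which keeps the recursion structure of part (a) visible; the manual's version partitions the array in place.

```python
def find_kth(E, k):
    """Return the k-th smallest element of E (0-based), partitioning
    around a pivot and recursing into only one segment (Exercise 5.8)."""
    pivot = E[0]
    smaller = [x for x in E[1:] if x < pivot]
    larger  = [x for x in E[1:] if x >= pivot]
    split = len(smaller)            # pivot's final rank within E
    if k < split:
        return find_kth(smaller, k)
    elif k > split:
        return find_kth(larger, k - split - 1)
    else:
        return pivot
```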

5.10 Building the heap takes at worst 2n comparisons. deleteMax uses fixHeap, which does approximately 2 lg m comparisons if there are m keys in the heap. Thus the for loop does approximately Σ_{i=1}^{k} 2 lg(n - i) comparisons, and the sum is bounded by 2k lg n. So k may be as large as n / lg n, and the total time will still be linear in n.

Section 5.5: A Lower Bound for Finding the Median

5.12 At least n - 1 crucial comparisons must be done whether n is even or odd. Since each comparison described in Table 5.4 creates at most one S key and at most one L key, and the adversary may give status S to at most n/2 - 1 keys and status L to at most n/2, the adversary can force any algorithm to do at least n/2 - 1 non-crucial comparisons. The modified version of Theorem 5.3 is: Any algorithm to find the median of n keys for even n, where the median is the n/2-th smallest key, must do at least 3n/2 - 2 comparisons in the worst case.

5.14 Any key that is found to be larger than three other keys or smaller than three other keys may be discarded; it cannot be the median of the five. For simplicity in presenting the solution, we use the CEX (compare-exchange) instructions described in Exercise 4.56: CEX i,j compares the keys with indexes i and j, by comparison of keys, and interchanges them if necessary so that the smaller key is in the position with the smaller index.

    CEX 1,2
    CEX 3,4
    CEX 1,3    // If an interchange occurs here, also interchange E[2] and E[4].
    // Now E[1] is smaller than three other keys; it can be rejected.
    // We know E[3] <= E[4].
    CEX 2,5
    CEX 2,3    // If an interchange occurs here, also interchange E[4] and E[5].
    // Now E[2] is smaller than three other keys; it can be rejected.
    // We know E[3] <= E[4].
    CEX 3,5
    // Now E[3] is the smallest of the three remaining keys, so it is the median.

Section 5.6: Designing Against an Adversary

5.16 The array indexes holding keys are 0, ..., n - 1. The search key is K and the array name is E. The adversary maintains variables that are initialized and have meanings as follows:

    L0 = 0             (low)       smallest index whose key might equal K
    H0 = n - 1         (high)      largest index whose key might equal K
    M0 = (n - 1)/2     (midpoint)  (L + H)/2, with any fractional part retained
    R0 = n             (range)     the number of slots that might hold K, (H + 1 - L)

The subscript on a variable indicates how many questions have been answered. M and R are recomputed whenever L or H is updated. When the algorithm compares K to E[i] (asks the q-th question), the adversary responds and updates L and H as follows:

1. If i < L_{q-1}: answer K > E[i]. No updates (i.e., L_q = L_{q-1} and H_q = H_{q-1}).
2. If i > H_{q-1}: answer K < E[i]. No updates (i.e., L_q = L_{q-1} and H_q = H_{q-1}).
3. If L_{q-1} <= i <= M_{q-1}: answer K > E[i]. Update L_q = i + 1 and H_q = H_{q-1}.
4. If M_{q-1} < i <= H_{q-1}: answer K < E[i]. Update L_q = L_{q-1} and H_q = i - 1.
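A CEX scheme of the kind Exercise 5.14 describes can be checked by brute force over all orderings of five distinct keys. The Python sketch below implements a standard 6-comparison median-of-5 in this style (the companion swaps keep each pair together); it is my rendering and may differ in detail from the manual's exact sequence.

```python
from itertools import permutations

def cex(E, i, j):
    """Compare-exchange on 1-based indexes: smaller key to smaller index."""
    swapped = E[i - 1] > E[j - 1]
    if swapped:
        E[i - 1], E[j - 1] = E[j - 1], E[i - 1]
    return swapped

def median_of_5(keys):
    """Median of 5 keys with six comparisons, CEX style (Exercise 5.14)."""
    E = list(keys)
    cex(E, 1, 2)
    cex(E, 3, 4)
    if cex(E, 1, 3):
        E[1], E[3] = E[3], E[1]     # also interchange E[2] and E[4]
    # E[1] is smaller than three other keys: rejected.
    cex(E, 2, 5)
    if cex(E, 2, 3):
        E[3], E[4] = E[4], E[3]     # also interchange E[4] and E[5]
    # E[2] is smaller than three other keys: rejected.
    cex(E, 3, 5)
    return E[2]                     # E[3] in 1-based indexing: the median
```

The exhaustive test over all 120 permutations confirms that two keys below the median are rejected and E[3] ends up holding the median.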

The essential invariant is

    R_q + 1 >= (1/2)(R_{q-1} + 1).

In cases 1 and 2 the range does not shrink, so the invariant surely holds. In case 4,

    R_q = H_q - L_q + 1 = (i - 1) - L_{q-1} + 1 >= M_{q-1} - L_{q-1} = (1/2)(L_{q-1} + H_{q-1}) - L_{q-1} = (1/2)(R_{q-1} - 1),

so R_q + 1 >= (1/2)(R_{q-1} + 1). Case 3 is similar. Note that as long as L_q <= H_q there are still at least two possible outcomes of the search: it is possible that K = E[L_q], and it is possible that K is not in the array; so the adversary never needs to answer "equal."

Now assume the algorithm stops when R_q = 0. By the recurrence for R_q + 1 and the base case R_0 + 1 = n + 1, we have R_q + 1 >= (1/2)^q (n + 1). So the algorithm cannot reach R_q = 0 until (1/2)^q (n + 1) <= 1, that is, until q >= lg(n + 1). Since q must be an integer, q >= ⌈lg(n + 1)⌉. If the algorithm stops before R_q = 0, then, as observed above, at least two outcomes are still consistent with all the answers given, so in the worst case its answer is wrong; the algorithm would be unsound.

Additional Problems

5.19

a) Sort the keys using a Θ(n log n) sort, but stop and report that a key is duplicated if any comparison comes out "equal." If sorting completes without any "equal" occurring, then all keys are distinct. If a standard sorting algorithm is used (without a test for equality), one pass through the sorted array can be used to detect duplicates in O(n) time, since in a sorted array equal keys are adjacent. The total time is Θ(n log n) in the worst case.

b) Consider a list where the keys are distinct, and let x1 < x2 < ... < xn be the keys in sorted order (not necessarily the way the algorithm sees them). If the algorithm says there are two equal keys, it is wrong. So suppose the algorithm says the keys are all distinct. To determine whether the keys are distinct, the algorithm must collect enough information to sort the keys. Suppose there are two keys xi and xi+1 whose relative order is not known by the algorithm; no comparisons of these keys with others in the list would provide that information. Then the adversary can change xi to make it equal to xi+1 without affecting the results of any other comparisons, so the algorithm would give the wrong answer for that input. Conversely, if the algorithm knows the relative order of each pair xi and xi+1, then it has enough information to sort the keys. (Every pair of keys that are adjacent in the final sequence must have been compared directly.) The three-way comparison is more powerful than the two-way comparisons for which we established the Ω(n log n) lower bound for sorting, but if the keys are distinct, the "=" test in the comparison does not provide any extra information. Thus determining if the keys are distinct also needs Ω(n log n) comparisons.

5.21 To establish the minimum number of cells needed requires both an algorithm and an argument that no other algorithm can solve the problem using fewer cells.

a) The algorithm uses two cells, maxSoFar and next. The first key is read into maxSoFar. Then each remaining key is read into next, compared with maxSoFar, and copied into maxSoFar if it is larger. The problem can't be solved with fewer than two cells, because if there were only one, no comparisons could be done.

b) For simplicity we assume n is odd. The problem can be solved with (n+1)/2 + 1 cells, using the extra cell for "scratch work." The algorithm's strategy is to save the (n+1)/2 smallest keys it has seen so far. We use shiftVac from Insertion Sort (Section 4.2) to insert the next key into the correct position if it is among the smallest keys. After all keys have been read, the largest key retained by the algorithm is the median. More specifically:

    Read the first (n+1)/2 keys into E[0], ..., E[(n+1)/2 - 1].
    Sort these keys (in place).
    for (i = (n+1)/2; i < n; i ++)
        Read the next key into scratch;
        if (scratch < E[(n+1)/2 - 1])
            // Insert the new key in its proper place;
            // the key that was in E[(n+1)/2 - 1] is discarded.
            vacant = shiftVac(E, (n+1)/2 - 1, scratch);
            E[vacant] = scratch;
    return E[(n+1)/2 - 1];

Now we show that the problem can't be solved with fewer than (n+1)/2 + 1 cells. Suppose an algorithm used fewer cells; we show that an adversary can make it give a wrong answer.

Case 1: The algorithm gives its answer before overwriting a key in one of the cells. Then, when the algorithm stops, it has read no more than (n+1)/2 keys, so there are at least (n-1)/2 keys it hasn't seen (at that time). If the algorithm chooses the largest of the keys it has seen, the adversary can make all the remaining keys, unseen by the algorithm, smaller than the key chosen; then the chosen key is the maximum, not the median. If it chooses any but the largest of these, then at least one key seen by the algorithm is larger than the key it chose, and the adversary can make all the unseen keys larger than the key chosen, so there are too many larger keys; so again, the one chosen by the algorithm is not the median. Either way, the adversary can demonstrate a consistent set of keys for which the output is wrong.

Case 2: Before choosing its answer, the algorithm reads a key into a cell that was used earlier, discarding the key that was there. We will show that an adversary can make the destroyed key be the median. Suppose the first key that is overwritten by the algorithm is the i-th smallest of the keys it has seen up to that point; there are at least (n-1)/2 keys it hasn't seen at that time. The adversary makes (n-1)/2 - (i-1) of the unseen keys smaller than the discarded key and the other unseen keys larger. The total number of keys smaller than the discarded key is then i - 1 (seen by the algorithm) + (n-1)/2 - (i-1) (not yet seen by the algorithm) = (n-1)/2, so the discarded key was the median. The discarded key cannot be the algorithm's output, so whatever output the algorithm makes, the adversary can demonstrate a consistent set of keys for which the output is wrong.

5.22

a) Find the minimum of E[k] through E[n] with n - k comparisons, in the usual way.

b) Follow the adversary strategy in the proof of Theorem 5.2 in the text, except that the replies are reversed. (Recall that in the proof of Theorem 5.2 the winner of a comparison is the larger key; in this problem we are looking for the champion loser.) As in the proof of Theorem 5.2, imagine that the adversary builds a tree to represent the ordering relations between the keys, except that the smallest key is at the root. Whenever i is the root of a tree, w(i) is the number of nodes in that tree; w(i) = 1 for all keys initially, and the sum of all weights is always n. When the algorithm compares E[i] and E[j], the adversary replies and updates weights as follows:

1. If w(i) > w(j), then reply that E[i] < E[j], and update new w(i) = prior (w(i) + w(j)), new w(j) = 0.
2. If w(i) = w(j) > 0, then reply that E[i] < E[j], and update new w(i) = prior (w(i) + w(j)), new w(j) = 0.
3. If w(i) < w(j), then reply that E[i] > E[j], and update new w(j) = prior (w(i) + w(j)), new w(i) = 0.
4. If w(i) = w(j) = 0, then reply consistently with earlier replies, and do not update any w.

A key has won a comparison if and only if its weight is zero. Two trees are combined only if neither root won before.
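The 5.21(b) algorithm in runnable form. A Python sketch (names mine): a sorted buffer of the (n+1)/2 smallest keys seen so far stands in for the array E, and `bisect.insort` plays the role of the shiftVac insertion.

```python
import bisect

def streaming_median(stream, n):
    """Exercise 5.21(b) sketch, n odd: keep only the (n+1)/2 smallest keys
    seen so far (one extra cell serves as scratch); after all n keys have
    been read, the largest retained key is the median."""
    it = iter(stream)
    keep = (n + 1) // 2
    E = sorted(next(it) for _ in range(keep))   # first (n+1)/2 keys, sorted
    for _ in range(n - keep):
        scratch = next(it)
        if scratch < E[-1]:
            E.pop()                    # the largest retained key is discarded
            bisect.insort(E, scratch)  # insert the new key in its proper place
    return E[-1]
```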

A tree of n - k + 1 nodes must have n - k edges, but each question creates at most one new edge in the set of trees that the adversary builds. As in the proof of Theorem 5.2 in the text, a correct algorithm can stop only when the weight of some key is at least n - k + 1. Suppose it stops sooner, with output m and E[m] in tree T. Then T contains at most n - k nodes, so at least k keys are not in tree T; the adversary makes all keys not in tree T less than all keys that are in tree T. Since only k - 1 keys lie outside the range E[k], ..., E[n], at least one key not in T is in the range and is smaller than E[m]; this refutes the algorithm. So the lower bound is n - k questions.

5.24 The adversary will force an algorithm to examine each entry on the diagonal from the upper right corner (M[0][n-1]) to the lower left corner (M[n-1][0]), and on the diagonal just below this one (M[1][n-1] to M[n-1][1]). There are 2n - 1 keys on these diagonals, so our argument will show that any algorithm for this problem must examine at least 2n - 1 matrix entries. The first diagonal is characterized by the property that i + j (the sum of the row and column numbers of an element) is n - 1; elements on the lower diagonal satisfy i + j = n. Observe that the known ordering of the rows and columns in the matrix implies nothing about the ordering of the keys on each diagonal.

Whenever the algorithm compares x to an element M[i][j], the adversary responds as follows, until there is only one element on the two diagonals described above that has not been examined by the algorithm:

    if i + j <= n - 1, then x > M[i][j]
    if i + j >= n, then x < M[i][j]

The adversary's answers are consistent with the given ordering of the rows and columns and with each other. If the algorithm says x is in M, the adversary can fill all slots with keys unequal to x, so the algorithm would be wrong. If the algorithm says x is not in M without examining all 2n - 1 keys on the two diagonals, the adversary puts x in an unexamined position on one of the diagonals, unseen by the algorithm; this refutes the algorithm.
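The 2n - 1 lower bound of 5.24 is tight: the classic staircase (saddleback) search examines at most 2n - 1 entries. A Python sketch of that matching algorithm (not part of the manual's solution):

```python
def search_sorted_matrix(M, x):
    """Search an n x n matrix with sorted rows and sorted columns.
    Start at the upper right corner; each comparison discards a whole
    row or column, so at most 2n - 1 entries are examined."""
    n = len(M)
    i, j = 0, n - 1
    while i < n and j >= 0:
        if M[i][j] == x:
            return (i, j)
        elif M[i][j] > x:
            j -= 1      # the rest of this column is even larger
        else:
            i += 1      # the rest of this row is even smaller
    return None
```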


Chapter 6 Dynamic Sets and Searching

Section 6.2: Array Doubling

6.1 The geometric series for array quadrupling is n + n/4 + n/16 + ..., which sums to at most (4n/3)t, compared to (2n)t for array doubling. However, the amount of space allocated might be almost four times what is necessary instead of only twice what is necessary.

Section 6.3: Amortized Time Analysis

6.2

a) Worst case cost is Ω(n). Say the stack size is N0 and n = N0 - 1. Do 2 pushes, forcing an array double, for actual cost 2 + tN0. The new size is 2N0. Now do 2 pops, reducing the number of elements to the original n, which is less than 2N0/2, forcing a shrink, at a cost of 2 + t(N0 - 1). The cost of 4 operations is 4 + 2tN0 - t >= 2tn. This sequence can be repeated indefinitely.

b) Use accounting costs as follows:

1. push, no array double: 2t.
2. push, array double from n to 2n: -nt + 2t.
3. pop, no shrink: 2t.
4. pop, leaving n elements and causing a shrink: -nt + 2t.

Then the amortized cost is 1 + 2t for push and 1 + 2t for pop. A proof by induction on i, the number of the operation, is straightforward. The cumulative accounting cost is always (4n_h - 2n - N)t, where the stack had n_h elements at high water since the last size change; after a size change the new n_h = n. If the i-th operation was a push that caused an array double, the new array size is 2n or 2n + 1, so the formula still holds. If the i-th operation was a pop that caused a shrink, note that the array size is at least n + 1 after the shrink and was at least 4n + 1 before being shrunk; by the inductive hypothesis the cumulative accounting cost was at least (2n)t before the shrink and at least nt afterwards, so the cumulative accounting cost never becomes negative.

c) Note that an accounting cost of 1t on pop is not enough in general: suppose N0 = 10; do 41 pushes, then 22 pops. This forces a shrink, and the cumulative accounting cost is negative. When the array shrinks only after becoming a quarter full, however, the accounting costs for pop can be t (no shrink) and -nt + t (shrink). The analysis is similar to part (b), except that before a shrink the cumulative accounting cost is at least nt, and after it is at least 0; the cumulative accounting cost is (3n_h - n - N)t when the stack had n_h elements at high water since the last size change.

d) Expand the array by a factor of E >= 2 and use accounting cost (E/(E-1))t instead of 2t for push; shrink by a factor of E when n < N/E². The accounting cost for pop is (1/(E-1))t when there is no shrink and -nt + (1/(E-1))t when there is a shrink.
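The doubling/quarter-shrink policy analyzed in 6.2 can be sketched directly (this class is mine, not the manual's): grow by doubling when full, halve only when the count drops below a quarter of the capacity, so pushes and pops cannot thrash as they do in part (a).

```python
class Stack:
    """Array-doubling stack: double when full, halve when a quarter full.
    Each resize copies n elements, but the amortized cost per operation
    is O(1), as in Exercise 6.2(b)-(c)."""
    def __init__(self):
        self.cap = 4
        self.arr = [None] * self.cap
        self.n = 0

    def push(self, x):
        if self.n == self.cap:                       # full: array double
            self._resize(2 * self.cap)
        self.arr[self.n] = x
        self.n += 1

    def pop(self):
        self.n -= 1
        x = self.arr[self.n]
        if self.cap > 4 and self.n < self.cap // 4:  # quarter full: shrink
            self._resize(self.cap // 2)
        return x

    def _resize(self, new_cap):
        new_arr = [None] * new_cap
        new_arr[:self.n] = self.arr[:self.n]         # copy n elements
        self.arr, self.cap = new_arr, new_cap
```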

Section 6.4: Red-Black Trees

6.4 The ARB1 trees: just one, call it a, a red root with two external children. The RB1 trees: four, call them A, B, C, D, each a black root whose left and right subtrees are each either an external node or the tree a: A has two external children, B has a as its left subtree, C has a as its right subtree, and D has a as both subtrees. The ARB2 trees: sixteen, a red root whose two principal subtrees are RB1 trees:

    e = (A, A),  f = (A, B),  g = (A, C),  h = (A, D)
    i, j, k, l: like the above, but with B as the left subtree
    m, n, o, p: like the above, but with C as the left subtree
    q, r, s, t: like the above, but with D as the left subtree

The RB2 trees: a black root whose principal subtrees are each one of the 20 trees A, ..., D, e, ..., t; the first row pairs A with each of the 20 right subtrees (A, A), (A, B), ..., (A, t), and there are 19 more rows like it with B, ..., t as the left subtrees, 400 trees in all. (The detailed tree diagrams do not reproduce here.)

6.6 The proof is by induction on h, the black height of the RBh or ARBh tree. The base case is h = 0. An RB0 tree consists of 1 external node, so parts 1-3 hold for this RB0 tree. An ARB0 tree does not exist, so parts 1-3 for ARB0 trees hold vacuously. For h > 0, assume the lemma holds for black heights 0 <= j < h.

The principal subtrees of an ARBh tree are both RB(h-1) trees. An ARBh tree has a maximum number of nodes when both principal subtrees do, so an ARBh tree has at most 1 + 2(4^(h-1) - 1) = (1/2)4^h - 1 internal nodes, and part 1 holds for ARBh trees. An RBh tree has a maximum number of nodes when both principal subtrees are ARBh trees. We showed above that each has at most (1/2)4^h - 1 nodes, so the RBh tree has at most 1 + 2((1/2)4^h - 1) = 4^h - 1 internal nodes, and part 2 holds for RBh trees.

An RBh tree has a minimum number of internal black nodes when both principal subtrees are RB(h-1) trees; this gives 1 + 2(2^(h-1) - 1) = 2^h - 1. Similarly, an ARBh tree has a minimum number of internal black nodes (its red root is not counted) when both principal subtrees do; this gives 2(2^(h-1) - 1) = 2^h - 2. For part 3, any path from the root to a black node has at least as many black nodes as red nodes, so the black depth of a node is at least half its depth.
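The recurrences in the 6.6 proof can be checked numerically against the closed forms. A Python sketch (mine), encoding exactly the structural facts the proof uses: an RBh tree maximizes nodes with two ARBh principal subtrees, an ARBh tree's principal subtrees are RB(h-1) trees, and minimum black-node counts come from RB(h-1) subtrees.

```python
from functools import lru_cache

@lru_cache(None)
def max_rb(h):
    """Maximum internal nodes of an RB_h tree."""
    if h == 0:
        return 0            # a single external node
    return 1 + 2 * max_arb(h)

@lru_cache(None)
def max_arb(h):
    """Maximum internal nodes of an ARB_h tree (h >= 1)."""
    return 1 + 2 * max_rb(h - 1)

@lru_cache(None)
def min_black_rb(h):
    """Minimum internal black nodes of an RB_h tree."""
    if h == 0:
        return 0
    return 1 + 2 * min_black_rb(h - 1)
```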

6.8 (The diagrams show the trees that result from inserting 10, 20, 30, ..., 100 in increasing order; they do not reproduce here.) Inserting 80 needs a color flip, then a rebalance higher in the tree. Inserting 100 needs a color flip, then another color flip higher in the tree.

6.10

a)  int colorOf(RBtree t)
        if (t == nil)
            return black;
        else
            return t.color;

b)  void colorFlip(RBtree t)
        t.color = red;
        t.leftSubtree.color = black;
        t.rightSubtree.color = black;

6.12 The proof is by induction on h, where oldRBtree is either an RBh tree or an ARB(h+1) tree. The base case is h = 0. If oldRBtree is an RB0 tree, it is nil, and case 3 of the lemma is true. If oldRBtree is an ARB1 tree, it is a red node with both children nil. Then rbtIns is called on one of these children; it is an RB0 tree, so the recursive call returns status brb. This status is passed to either repairLeft or repairRight. For definiteness, assume repairLeft was called; the case of repairRight is symmetrical. Then case 4 of the lemma applies (case 5 after repairRight).

For h > 0, assume the lemma holds for 0 <= j < h. Suppose oldRBtree is an RBh tree. Then each principal subtree is either an RB(h-1) tree or an ARBh tree, and the inductive hypothesis applies to whichever principal subtree is passed as the parameter in the recursive call to rbtIns. For definiteness, assume the recursion goes into the left subtree; the case of the right subtree is symmetrical. There are 5 cases to consider for the value returned by this recursive call.
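The 6.10 helpers in runnable form. A minimal Python sketch, assuming a simple node class of my own (the manual's code is Java-style); the point is that external (nil) nodes count as black, so colorOf must be nil-safe, while colorFlip is only called on a node with two internal children.

```python
BLACK, RED = "black", "red"

class RBNode:
    def __init__(self, color, left=None, right=None):
        self.color = color
        self.left = left
        self.right = right

def color_of(t):
    """Exercise 6.10(a): nil (external) nodes are black."""
    return BLACK if t is None else t.color

def color_flip(t):
    """Exercise 6.10(b): make t red and both its children black."""
    t.color = RED
    t.left.color = BLACK
    t.right.color = BLACK
```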

rbr is never assigned as a status in any of the procedures as given. are: RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 ans. which is ARBh·1 .status = brb The left case is again case 1 of the lemma. The recursive call returns ansLeft.status = brb and ansLeft.newTree is ARBh·1 . Further.newTree as the left subtree of oldRBtree we have one of these conﬁgurations. The case for the right subtree is symmetrical. 3. Then repairLeft attaches ansLeft. Now oldRBtree. the same as oldRBtree. 2.color = red.leftSubtree = ansLeft. In (recursive) case 1 of the lemma. .newTree. Then a recursive call to rbtIns is made with one of these principal subtrees as a parameter. where white circles denote red nodes: ARBh RBh−1 After attaching ansLeft. which is case 1 of the lemma.newTree is ARBh . Now suppose oldRBtree is an ARBh·1 tree. So all returns from the recursive call of rbtIns lead to one of the cases stated in the lemma.status = ok ans. This completes the proof for oldRBtree being an RBh tree.status = rrb and ans.42 Chapter 6 Dynamic Sets and Searching 1. Again we assume the recursion goes into the left subtree and consider the possible values the might be returned into ansLeft.newTree.status = rbr and ansLeft.status = ok and ansLeft.newTree = oldRBtree and ans. It could have been assigned after rebalLeft was called.status = brb and ansLeft. The recursive call returns ansLeft.newTree is RBh 1 . 5. Notice that case 2 of the lemma never applied.newTree is RBh 1 or ARBh .newTree as the new principal left subtree and returns status = ok. by arguments that are symmetrical to those above. now rooted at ans. This is the mirror image of case 4 above and leads to either case 1 or case 3 of the lemma. The recursive call returns ansLeft. so ans. Notice that only cases 1 and 3 of the lemma occurred when the parameter of rbtIns was an RBh tree. ansLeft. The recursive call returns ansLeft. either case 1 or case 5 of the lemma applies. 4. while the right case is case 3. Indeed. 
so each of its principal subtrees is an RBh tree. where the root is oldRBtree: RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 RBh−1 In the left case rebalLeft is called and in the right case colorFlip is called. and case 1 of the lemma applies. So all details of case 4 of the lemma are conﬁrmed.newTree looks like this. For oldRBtree being an ARBh·1 tree and the recursion going to the right subtree.status = rrb and ansLeft. In (recursive) case 3 of the lemma. This is similar to case 1 above. ans. and one of the cases described above applies to the value returned.rightSubtree. The outcomes of each.rightSubtree is RBh . no further repair is needed. This is similar to case 1 above.newTree.newTree.

To delete 90. To delete 100. since its left subtree is nil.17 (r is red) at 40 60 80 90 100 To delete 20. simply attach its right subtree. simply attach its right subtree. which is nil. 6.16(b) at 20 (mirror image) 60 50 70 80 90 100 10 20 50 Fig.16(b) at 20 40 20 30 50 60 70 80 90 100 20 30 50 40 60 80 20 30 40 50 70 Fig. move 70 into 60’s node and delete 70’s prior node (see next). Delete 30: Fig.8. 6.14 Note for instructor: The initial tree is not given explicitly in the text. but is the result of Exercise 6. then (p is red) at 60 40 80 90 100 To delete 60. Delete 50: 40 60 70 80 90 100 60 70 Fig. 6. The starting point for all parts is: 40 20 10 30 50 60 70 80 90 100 Circles with white backgrounds denote red nodes and gray circles are gray nodes. 6. as the new right subtree of its parent. move 30 into 20’s node and delete 30’s prior node (see next). a) Delete 10: Fig. Recolor the right subtree to black. move 50 into 40’s node and delete 50’s prior node (see next).17 (s is red) at 60. 6. which is solved above.17 (r is red) at 40 40 60 80 10 20 40 50 70 60 80 90 100 40 20 10 To delete 40. . move 90 into 80’s node and delete 90’s prior node (see next). as the new right subtree of its parent. 6.Chapter 6 Dynamic Sets and Searching 43 6. Delete 70: Fig. rooted at 100.17 (p is red) at 80 40 60 50 80 90 100 50 40 60 80 90 100 To delete 80. since its left subtree is nil.

move 100 into its place and delete 100’s node. Deleting 30 is similar to deleting 70 in part (b). shown in the third tree. Deleting 80 uses Fig. 80 20 10 30 90 100 10 20 30 80 90 100 10 20 30 90 100 To delete 90 (now in the root).17. giving the tree in the middle. Then deleting 60 is simple. move 80 into its place and delete 80’s prior node. delete 60 (now in the root) by moving 70 into its place and deleting 70’s prior node. Although 80 is initially gray after the transformation. and leads to the left tree below. Deleting 70 uses the case ““r is red” in Fig. After applying the transformation of Fig. 6. Deletion of the last two nodes is simple. and leads to the right tree. Next. leading to the second tree. The rest is simple. with 90 now as the gray node. and leads to the middle tree.” leading to the left tree below. terminating the operation. 50 80 60 70 90 100 70 60 80 90 100 10 20 30 80 70 90 100 To delete 70 (now in the root). 6. then apply the case “p is red. the result is shown on the left below.44 Chapter 6 Dynamic Sets and Searching b) Delete 40 as in part (a). 60 40 30 50 70 80 90 100 40 50 60 70 80 90 100 50 60 70 80 90 100 To delete 50. 80 is deleted without difﬁculty in the right tree. 20 10 30 100 10 20 30 100 10 20 30 100 10 30 100 10 100 c) Deleting 10 is seen in part (a). leading to the middle tree below. Then delete 50 (now in the root) by moving 60 into its place and deleting 60’s prior node.17. The fourth tree shows 20 being deleted.16(b) in the text. apply the transformation for the case “s is red” in Fig. 80 60 70 90 100 70 80 90 100 80 90 100 90 100 . shown in the fourth tree. 6. it is recolored to black. The result is shown in the right tree. The ﬁfth tree shows 30 being deleted. Then apply the transformation for the mirror image of the case “p is red. 6.17 of the text. giving the result shown on the left below. In this conﬁguration it is necessary to apply the mirror image of the same transformation.” to the third tree. 
since it is the root of the entire tree. giving the tree on the left. Apply the transformation for the mirror image of the case “s is red” in Fig. shown in the second tree. 6. which completes the operation.16(b). below. Then deleting 20 is simple. Then deleting 40 is simple. 6.17. This leads to the case “r is red” in Fig.

6.16
r is red: rightRotate(…), then leftRotate(p, s). The combination is also called a right double rotation.
p is red, or s is red: leftRotate(p, s).

Section 6.5: Hashing

6.18 The operations hashDelete and hashFind terminate in failure if they encounter an empty cell, whereas hashInsert terminates with success in that case. The operations hashDelete and hashFind continue over an obsolete cell, whereas hashInsert terminates with success at such a cell. The operations hashDelete and hashFind terminate with success at a cell whose key matches the search key, whereas hashInsert continues over it without checking for a match, assuming that duplicate keys are allowed. (We assume that hashDelete deletes just one occurrence if the key has multiple matches.)

Two counters, full and empty, are maintained. If hashInsert inserts into an empty cell, it increments full and decrements empty; if it inserts into an obsolete cell, it only increments full. hashDelete decrements full. So h − full − empty is the number of obsolete cells. If empty gets too small, the efficiency of hashDelete and hashFind will degrade. If full is also large, say full ≥ h/2, then reorganization into a larger array is indicated. If full is small, then reorganization into an array of the same size might be satisfactory, because all obsolete cells are discarded during reorganization.

The following code shows how obsolete is handled in hashDelete. Code for the other operations follows the same pattern, with the differences mentioned in the paragraphs above.

/** empty > 0 */
void hashDelete(Key K)
    i = hashCode(K);
    while (true)
        if (H[i] == emptyCell)
            break;                  // K not there
        if (H[i] ≠ obsolete)
            if (H[i].key == K)
                H[i] = obsolete;
                full --;
                break;
        i = (i + 1) % h;            // Continue loop.

Section 6.6: Dynamic Equivalence Relations and Union-Find Programs

6.20 The point of this exercise is to show that a straightforward matrix implementation is not efficient. Use an n × n matrix A where A[i][j] = 1 if i and j are in the same equivalence class, and A[i][j] = 0 otherwise. Implementation of an IS instruction is trivial: just examine the corresponding matrix entry. MAKE i ≡ j could be implemented as follows. The reader should be able to find ways to improve the algorithm below, but the order of the worst case time will still be high compared to the techniques developed in the text.

// Assign to row i the union of rows i and j.
for (k = 1; k ≤ n; k ++)
    if (A[j][k] == 1)
        A[i][k] = 1;
// Copy this union into each row k such that k is in the union.
for (k = 1; k ≤ n; k ++)
    if (A[i][k] == 1)
        Copy row i into row k;
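The bookkeeping described in 6.18 can be sketched as a small runnable class (Python; the class name and method signatures are my own, not the text's). Deleted cells are marked obsolete; delete and find probe past them, while insert may reuse them.

```python
EMPTY, OBSOLETE = object(), object()   # distinct sentinel cell states

class Hash:
    def __init__(self, h):
        self.h = h
        self.H = [EMPTY] * h
        self.full, self.empty = 0, h   # h - full - empty = obsolete cells

    def code(self, K):
        return hash(K) % self.h

    def insert(self, K):               # precondition: empty > 0
        i = self.code(K)
        while self.H[i] is not EMPTY and self.H[i] is not OBSOLETE:
            i = (i + 1) % self.h       # continue over occupied cells
        if self.H[i] is EMPTY:
            self.empty -= 1            # empty cell: both counters change
        self.full += 1                 # obsolete cell: only full changes
        self.H[i] = K

    def delete(self, K):               # deletes one occurrence of K
        i = self.code(K)
        while self.H[i] is not EMPTY:
            if self.H[i] is not OBSOLETE and self.H[i] == K:
                self.H[i] = OBSOLETE
                self.full -= 1
                return True
            i = (i + 1) % self.h
        return False                   # empty cell reached: K not there

    def find(self, K):
        i = self.code(K)
        while self.H[i] is not EMPTY:
            if self.H[i] is not OBSOLETE and self.H[i] == K:
                return True
            i = (i + 1) % self.h
        return False

table = Hash(8)
table.insert(1)
table.insert(9)          # collides with 1, probes to the next cell
table.delete(1)          # that cell becomes obsolete, not empty
assert table.find(9)     # the probe continues over the obsolete cell
```

The last assertion is the whole point of the obsolete marker: erasing the cell outright would break the probe sequence for key 9.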

The number of matrix entries operated on when implementing MAKE i ≡ j is proportional to n + pn, where p is the number of elements in the new equivalence class (the union of i's class and j's class). Consider the following program.

MAKE 1 ≡ 2
MAKE 3 ≡ 2
MAKE 4 ≡ 3
...
MAKE n ≡ n−1

The number of steps is proportional to ∑ (n + pn) for p = 2, ..., n, which is n(n−1) + n²(n+1)/2 − n ∈ Θ(n³).

6.22
a) Assume the trees are represented by arrays parent and height, where height[i] is the height of the tree rooted at i if i is a root (and is irrelevant otherwise).

void hUnion(int t, int u)
    if (height[t] > height[u])
        parent[u] = t;
    else
        parent[t] = u;
        if (height[t] == height[u])
            height[u] ++;

b) The instructions Union(1,2), Union(2,3), Union(4,5) produce the two trees shown in the following figure, no matter whether the height or the number of nodes is used as the weight. [Figure omitted: the tree rooted at 2 with children 1 and 3, and the tree rooted at 5 with child 4.] However, if the next instruction is union(2,5), then wUnion will produce a tree with root 2, and hUnion will produce a tree with root 5.

c) The worst case is in Θ(n log n). We just need to modify the proof of Lemma 6.6 for hUnion. In the proof of Lemma 6.6 modified for hUnion, if u was made a child of t, then either h = h1 (because h1 > h2) or h1 = h2 and h = h1 + 1 = h2 + 1 (in this case, u must have been the first argument of the hUnion). The proof for the first case is the same as in the text. For the second case, using the induction hypothesis, h = h1 + 1 ≤ lg k1 + 1 and h = h2 + 1 ≤ lg k2 + 1. So h ≤ lg(2 min(k1, k2)) ≤ lg(k1 + k2) = lg k. Theorem 6.7 completes the argument without modification.
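Part (a) of 6.22 translates directly into Python (array names as in the solution; find is a plain root-chase added only for the demonstration). The demonstration reruns the instructions of part (b) and confirms that hUnion ends with root 5.

```python
def hUnion(parent, height, t, u):
    # t and u must be roots; attach by height, as in part (a)
    if height[t] > height[u]:
        parent[u] = t
    else:
        parent[t] = u
        if height[t] == height[u]:
            height[u] += 1

def find(parent, v):
    # follow parent links to the root (no path compression)
    while parent[v] != v:
        v = parent[v]
    return v

# the example from part (b): elements 1..5, index 0 unused
parent, height = list(range(6)), [0] * 6
hUnion(parent, height, 1, 2)                               # Union(1,2)
hUnion(parent, height, find(parent, 2), 3)                 # Union(2,3)
hUnion(parent, height, 4, 5)                               # Union(4,5)
hUnion(parent, height, find(parent, 2), find(parent, 5))   # union(2,5)
assert find(parent, 1) == 5    # hUnion attaches the shorter tree's root under 5
```

With weighted union the first tree (4 nodes) would win instead, which is exactly the divergence part (b) points out.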

r0 . attaching v to r0 . if a tree can be decomposed as described. Thus the total amount of work done by the cFinds is bounded by a multiple of the number of distinct branches in T . Suppose that cFind(w) is done later. find(1): 2 parent accesses. find(6): 3 parent accesses. The handle of T . So attaching rk 2 to rk 1 creates an Sk tree. Let Tk 1 V . An S1 tree has two nodes. T was constructed by attaching the root of one Sk 1 tree U to the root of another V . The base case is k 1. a worst-case program must have some cFinds. That is. 6 7 8 6. Let T be a tree constructed by the wUnions. is the handle of Tk 2 as described. Then T0 is the only tree in the decomposition and is an S0 tree.) 6.) It is given that Tk 1 is an Sk 1 tree. . Let T ¼ be the tree that results from cFind(v). the path from T ’s handle v to the root of T consists of roots of Si trees for 0 i k 1.26 First we show that an Sk tree can be decomposed as described.29 in the text). The proof is by induction on k. U can be decomposed into v T0 requirements of the decomposition. so there is an Sk tree initially embedded in U ¼ . This can also be proved by a simple induction argument. the parent of each of these roots is r in U ¼ . For k 1. By the decomposition described in Exercise 6. The base case is k 1. 6. so it consists of one node. Now let T be an Sk tree for k 1 and suppose that any Sk 1 tree has the described decomposition. Attaching v to r0 yields an S1 tree.30 in the text). 6. suppose that the tree is decomposed into v and trees T0 such that Ti is an Si tree.28 Suppose that T is an Sk tree properly embedded in a tree U. say at x. 6. respectively.Chapter 6 6. By the inductive hypothesis.24 Tree before any ﬁnds: find(1): 4 parent accesses. cFind( f ´vµ) in U causes all these roots to become children of r. If the path in T from w to the root meets the path in T from v to the root. it satisﬁes the U. the total amount of work done by the cFinds is in O´nµ. 
Resulting tree: 9 1 2 4 3 7 6 8 1 9 2 4 3 find(2): 2 parent accesses. By the inductive hypothesis. Resulting tree: Dynamic Sets and Searching 47 9 4 2 1 3 7 6 8 find(4): 2 parent accesses. then cFind(w) follows only one link for the overlapping path segment since in T ¼ x is a child of the root. Since each wUnion introduces one new branch. (The sequence of attachments indexed by i is empty if k 2. (Notice that v is not part of this Sk tree. No change in tree.26 (see Fig. Then these Si trees.30 Since wUnions take constant time. and let r be the root of U. and the handle v is the child of r0 . then attaching ri to ri·1 for 0 i k 2 yields an Sk 1 tree. No changes in tree. satisfy the requirements of the decomposition described in Exercise 6. along with r.27 (see Fig. Tk 1 with roots r0 rk 1 . which is v. The root r0 is an S0 tree. in T ¼ all nodes on the path from v to the root of T are now children of the root. Conversely. We will use the name T to mean this tree as it appeared before any cFinds are done. then it is an Sk tree.

6.32 We will use a Dictionary ADT to keep track of variable and array names and to assign successive integers 1, 2, ... to them when they are first encountered. So the rest of the program can treat them as vertices in a graph, and can store information about them in arrays, indexed by the vertex number. We initialize a Union-Find ADT with create(0). When an equivalence declaration is encountered, say "equivalence (d1, d2, ..., dk)," we treat it as the sequence of binary equivalence declarations equivalence (d1, d2), equivalence (d1, d3), ..., equivalence (d1, dk), and we think of each binary equivalence as an edge between the two vertices whose names occur in the expressions di.

For each vertex (i.e., variable or array declaration), we keep track of its leader and its offset from its leader. The offset gives the relative memory location of the vertex from its leader. Conceptually,

offset(u) = address(u) − address(uLdr)

although the appropriate addresses will be determined later by a compiler; our algorithm only works with differences between addresses. When a declaration is first encountered, we execute makeSet on the new vertex; at this point, it is its own leader and its offset is 0. As usual with the Union-Find ADT, the leader information that is stored for a vertex might be out of date until a find is executed for that vertex. Similarly, the offset will be out of date in these cases. It is necessary to update offsets during a find according to the rule stated below.

Now consider a binary equivalence whose underlying vertices are u and v. The exact form of d1 and di determines a necessary offset between u and v, which we'll call newOffset(u,v). There are three possibilities for the new offset:

1. The offset is new information.
2. The offset is consistent with an offset between u and v that was determined earlier.
3. The offset is inconsistent with the earlier offset.

Which of these cases applies is determined as follows. First we execute find(u) and find(v) to get their current leaders, call them uLdr and vLdr.

1. If the leaders are different, the offset is new information. We combine the following conceptual equations:

address(u) = address(uLdr) + offset(u)
address(v) = address(vLdr) + offset(v)
address(u) − address(v) = newOffset(u,v)

to get the new relationship:

newOffset(uLdr, vLdr) = address(uLdr) − address(vLdr) = −offset(u) + offset(v) + newOffset(u,v)

Notice that we never need any actual addresses to compute newOffset(uLdr, vLdr). Now we execute union(uLdr, vLdr). If the result is that vLdr becomes the leader of uLdr, then offset(uLdr) is updated to become newOffset(uLdr, vLdr). If the result is that uLdr becomes the leader of vLdr, then offset(vLdr) is updated to become −newOffset(uLdr, vLdr).

2. If the leaders are the same and newOffset(u,v) = offset(u) − offset(v), then the new offset is consistent with the earlier offset. No more updates are required for this equivalence.

3. If the leaders are the same and newOffset(u,v) ≠ offset(u) − offset(v), then the new offset is inconsistent with the earlier offset. This equivalence should be flagged as an error.
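The whole scheme can be condensed into a short runnable sketch (Python; the class name and method signatures are mine). find compresses paths while accumulating offsets, and equivalence implements the three cases; for brevity it always attaches vLdr under uLdr instead of doing a weighted union.

```python
class OffsetUF:
    def __init__(self, n):
        self.parent = list(range(n))
        self.offset = [0] * n    # offset[v] = address(v) - address(parent[v])

    def find(self, v):
        if self.parent[v] != v:
            p = self.parent[v]
            root = self.find(p)
            self.offset[v] += self.offset[p]   # now relative to the root
            self.parent[v] = root
        return self.parent[v]

    def equivalence(self, u, v, new_offset):
        # new_offset plays the role of newOffset(u,v) = address(u) - address(v);
        # returns False when the declaration is inconsistent with earlier ones
        u_ldr, v_ldr = self.find(u), self.find(v)
        if u_ldr == v_ldr:
            # cases 2 and 3: consistent iff offset(u) - offset(v) matches
            return self.offset[u] - self.offset[v] == new_offset
        # case 1: newOffset(uLdr,vLdr) = -offset(u) + offset(v) + newOffset(u,v);
        # attaching vLdr under uLdr negates it
        self.parent[v_ldr] = u_ldr
        self.offset[v_ldr] = self.offset[u] - self.offset[v] - new_offset
        return True

uf = OffsetUF(4)
assert uf.equivalence(0, 1, 4)       # address(0) - address(1) = 4
assert uf.equivalence(1, 2, 4)
assert uf.equivalence(0, 2, 8)       # consistent: 4 + 4
assert not uf.equivalence(0, 2, 5)   # inconsistent: flagged as an error
```

The update in find is exactly the rule offset[v] = offset[v] + offset[parent[v]], applied after the parent's own offset has been made relative to the root.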

then the above argument still holds.2).5) is compared to 1(7.5) is compared to K and 3(4. Luckily. Now do a series of getMax and deleteMax until r is again the root of the only tree.36 Yes. It uses n 1 comparisons. 6. so we wait until we are traversing the list and come across it. making them less than r. it is always optimal for n 2k . suppose we start with one tree whose root r has m children.Chapter 6 offset[v] = offset[v] + offset[parent[v]]. vacant = 1 and xref[4] = 0 to delete element 4(3. ensuring that all of the keys become less than r.2. and reduce heapSize to 4.4) has no sibling to compare with. Now vacant = 2. Since lg´nµ is an integer.forest == nil). set xref[7] = 0. Then 5(5.37 An obsolete node is no longer referenced by the xref array. Execute decreaseKey on each vm creating m obsolete nodes in the children of r. 6. The n inserts create n trees of one node each and use no comparisons. and the obsolete nodes are still children of r. we store K in vacant and update xref[7] = 4.forest). -1) . Now vacant = 4. The second getMax performs lg´nµ 1 comparisons. It occurs somewhere in a list of trees. the total might be one less than the worst case. Now do deleteMax and the m obsolete nodes are roots.2.4) moves up to vacant. The main idea is that it might be expensive to remove it from the list at the time it becomes obsolete.5) moves up to vacant. v1 . Now isEmpty will require time in Θ´mµ. so there are lg´nµ trees at this point. This requires the update xref[3] = 1. This requires the update xref[5] = 2.id return (remForest). replacing lg´nµ by lg´nµ . 6. just as it would be in the tournament method of Section 5. a) To see that a lot of obsolete nodes can become roots of trees. the ﬁrst time we come across it we will be able to delete it cheaply. TreeList bypass(TreeList remForest) if (remForest == nil) return (remForest) else if (root(first(remForest)).22 of the text. else return bypass(rest(remForest)).forest = bypass(pq. Then 3(4. return (pq. vm . 
In this case. . Do additional decreaseKeys on all the descendants of the vi . but this was not required for the exercise. This exercise ﬂeshes out the details. Section 6.34 The point of this exercise is to show how xref is updated in tandem with the steps of the usual heap deletion procedure.7: Priority Queues with a Decrease Key Operation 6. The getMax reorganizes the elements into one tree in which the root has lg´nµ children. Dynamic Sets and Searching 49 In cFind.3. so 5(5. this statement would be placed right before the statement that updates parent[v] in Fig. Elements are shown in the format id(priority) for clarity. Since vacant is a leaf now. First we save K 7´6 4µ.4) and 3(4. The deleteMax removes the root and leaves a forest in which the children of the former root are now roots.4) is compared to K and 5(5. depending on the outcomes of comparisons. this agrees with Theorem 5. If n is not an exact power of 2. The code for isEmpty could be: boolean isEmpty(PairingForest pq) pq. The total number of comparisons is n lg´nµ 2. Next.

Since bypass runs in amortized time O(1) (shown in part (c) below), isEmpty does also.

b) The argument that pairForest does not run in O(k) is similar to part (a). Create a forest with k genuine trees and m obsolete trees. The cost is Θ(k + m), but m can be arbitrarily large in relation to k.

c) Charge decreaseKey an accounting cost of +2 for creating an obsolete node, in addition to the actual cost of 1. Charge the bypass subroutine an accounting cost of −2 for each obsolete node that it encounters (and discards, since it returns a list without that node), in addition to the actual cost of 2 for testing and discarding the node. Therefore the cumulative accounting cost can never be negative, and bypass runs in amortized time O(1).

To verify that the obsolete nodes really are discarded we note that any list passed to bypass is never accessed again, either because that variable is never accessed again or because the variable is overwritten with a new list from which the obsolete nodes seen by bypass have been removed. There are only four places to check, so this verification is straightforward.

The code, using bypass from part (a), could be:

TreeList pairForest(TreeList oldForest)
    TreeList newForest = nil;
    TreeList remForest = bypass(oldForest);
    while (remForest ≠ nil)
        Tree t1 = first(remForest);
        remForest = bypass(rest(remForest));
        if (remForest == nil)
            newForest = cons(t1, newForest);
        else
            Tree t2 = first(remForest);
            remForest = bypass(rest(remForest));
            Tree winner = pairTree(t1, t2);
            newForest = cons(winner, newForest);
        // Continue loop.
    return newForest;

Since bypass runs in amortized time O(1), pairForest runs in amortized time O(k) because its loop executes about k/2 times and each pass runs in amortized time O(1). Here we are using the facts that pairTree, first, cons, and rest all run in time O(1). Thus all operations have the same asymptotic order as they would for the binary-heap implementation of priority queues.

Additional Problems

6.40 It is straightforward for red-black trees to implement an elementary priority queue with each operation running in time O(log n). To find the minimum element simply follow left subtrees to the internal node whose left subtree is nil. The usual method can be used to insert new elements and test for emptiness.

For a full priority queue use an xref array similar to the one described in the text except that xref[i] refers to the subtree whose root holds element i. There is no problem doing getPriority in O(1), through xref. Treat decreaseKey as a delete (locating the node through xref instead of by a BST search), followed by an insert of the same id with the new priority. Thus decreaseKey runs in O(log n). The usual method can be used to delete this node. Whenever the red-black tree undergoes any structural change, as in rebalancing, the xref array has to be kept up to date. But each node contains the element's id, as well as its priority, and the id is the index in xref, so updating xref costs O(1) per structural change.
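A simplified runnable model of bypass and pairForest (Python; list-based, with names of my own choosing) can illustrate how obsolete roots are discarded during a pairing pass. Trees are collapsed to their root priorities and pair_tree just keeps the winner, so this shows only the list manipulation, not the full pairing-forest structure.

```python
OBSOLETE = object()   # sentinel standing in for an obsolete root

def bypass(forest):
    # discard obsolete trees from the front of the forest; the actual
    # cost is covered by the +2 charged to each decreaseKey
    while forest and forest[0] is OBSOLETE:
        forest = forest[1:]
    return forest

def pair_tree(t1, t2):
    # stand-in for pairTree: the winner is the smaller root priority
    return t1 if t1 <= t2 else t2

def pair_forest(old_forest):
    new_forest = []
    rem = bypass(old_forest)
    while rem:
        t1, rem = rem[0], bypass(rem[1:])
        if not rem:
            new_forest.insert(0, t1)            # odd tree out
        else:
            t2, rem = rem[0], bypass(rem[1:])
            new_forest.insert(0, pair_tree(t1, t2))
    return new_forest

# obsolete entries vanish; 7 pairs with 2, and 9 pairs with 4
assert pair_forest([7, OBSOLETE, 2, 9, OBSOLETE, 4]) == [4, 2]
```

Each genuine tree is touched once per pass and each obsolete tree is touched exactly once ever, which is the accounting argument of part (c) in miniature.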

This follows from the following lemma. No spanning tree of G with the same root can have smaller height. it puts v on w’s list in the new structure. The base case is k 0. the distance of a vertex x from v. and the length of the shortest path is called the distance from v to x. Let u be the next-to-last vertex on a shortest path from v to x in G. For any vertex x in the tree. so its distance from v in the tree is k.4 The diagrams show discovery and ﬁnishing times. but which help to interpret the DFS tree. Then x cannot already be in the BFS tree at a depth of k 1 or less.3: Traversing Graphs 7. (The length of a path from v to x is simply the number of edges in the path. the path from v to x using tree edges is a shortest path from v to x in the graph. (a) 3/10 A 4/7 B 8/9 F C 5/6 D 2/11 G 1/14 E 12/13 8/9 F 7/12 A 10/11 B C 3/4 (b) D 6/13 G 1/14 E 2/5 7. Lemma: Let v be the root of a breadth-ﬁrst search tree for a graph G. x will be put ¾ on depth k of the tree. then there is an edge vw in G. Then the distance from v to u is k 1 0. . which were not asked. so by the inductive hypothesis. For each vertex w found on vertex v’s list. Thus the height of a breadth-ﬁrst search tree is the distance of the farthest vertex from the root. The empty path is the shortest path from v to v and is the tree path from v to v.) Proof The proof is by induction on k.1 7. u is at depth k 1 in the breadth-ﬁrst search tree.6 It is always true that height´TD µ height´TB µ. Section 7. Since x is adjacent to u.Chapter 7 Graphs and Graph Traversals Section 7. Section 7.4: Depth-First Search on Directed Graphs 7.2: Deﬁnitions and Representations 7. Suppose the distance of a vertex x from v in G is k 0.3 If v and w are any two distinct vertices of G and there is a path from v to w.8 The algorithm goes through the adjacency lists of the given graph and constructs a new array of adjacency lists for the transpose.

trAdjVerts[w]). v++) trAdvVerts[v] = nil. (a) (b) A 1/22 b 2/21 B H 9/20 11/18 B A 10/19 b H 9/22 3/8 D b F 4/5 c G c E 10/19 12/17 D b c E 20/21 c C 11/18 b J 13/16 b K 14/15 F 13/14 c G 15/16 2/7 I b C 4/5 J 3/6 b K 1/8 6/7 12/17 I . and d for descendant (forward) edge. int n) IntList[] trAdjVerts = new IntList[].10 Gray. return trAdjVerts. c for cross edge. b) Assume the array of adjacency lists for G indexes vertices in alphabetical order. Heavy solid edges are trees edges. The adjacency lists of GT are as follows: A: B: C: D: E: F: G: F A F G G A E D E B D B A 7. for (v=1. remAdj = rest(remAdj). dotted edges are nontree edges.12 Roots of the DFS trees are shaded. The order of the actual adjacency lists for G is irrelevant. trAdjVerts[w] = cons(v.52 Chapter 7 a) Graphs and Graph Traversals IntList[] transpose(IntList[] adjVerts. v++) remAdj = adjVerts[v]. The adjacency lists in the transpose graph will be in reverse alphabetical order because cons puts new nodes at the beginning of a list. v n. their labels are b for back edge. for (v=1. v n. 7. while (remAdj nil) int w = first(remAdj).

Let c be node with the smallest active interval that contains the active intervals of both v and w. and use the results about active intervals in Theorem 7. it does not appear on the part of P2 that follows c. reversing the JC edge makes it a descendant edge. 9. Algorithm 7. 11.) Insertion at line 9 would also work. respectively. reversing the JK edge makes it a descendant edge. Solution 2. and c is a common ancestor of v and w. one of them contains the active interval of v and some other one contains the active interval of w. reversing the EC edge makes it a cross edge.18 There are six trees. and 13 and ans are not needed. ¯ ¯ ¯ For part (a). H(3/12). Node c exists because at least the root of the tree is on both paths. A(1/14).16 Modify the directed DFS skeleton. We would probably only assign one part of this exercise for homework and use another part for an exam. So the paths from c to v and c to w are disjoint except for c. as follows. 7. I(16/21). Small changes can cause all four edge types to appear in the other parts without having much effect on the solution. The active intervals of the children of c are disjoint. do a depth-ﬁrst search. At “Exploratory processing” (line 7). (Lines 2. For part (c). C(15/22).14 Solution 1 (follows the hint). For part (b). no node following c on path P1 appears on P2 at all.Chapter 7 (c) (d) Graphs and Graph Traversals 53 A 1/22 b 2/21 B d 5/10 D b F 7/8 d G 6/9 12/17 I b C 11/18 b J 13/16 K 14/15 E 4/19 11/16 D b F 13/14 d G H 3/20 19/20 B c A 18/21 b H 9/22 E 10/17 c C 4/5 b J 3/6 b K 1/8 2/7 I 12/15 Notes for instructor: Note that only part (d) has all four edge types. each time the course is taught. and are contained in the active interval of the root. Treat the tree as a directed graph. K(18/19). 7. B(2/13). Therefore the part of P1 following c is disjoint from the part of P2 following c. The active intervals of v and w are disjoint.1 of the text. the ﬁrst ﬁve have only one vertex each. 
and revises the times as follows: E(4/11). By the deﬁnition of c. makes a second DFS tree. Let P1 and P2 be the unique simple paths from the root to v and w.3. insert a statement to output the edge vw. there can’t be a tie because active intervals are either nested or disjoint—they can’t partially overlap. and shufﬂe them around from term to term. . J(17/20). 7. so in particular. Let c be the last vertex on path P1 that also appears in P2 .

5: Strongly Connected Components of a Directed Graph 7.8. and C I J K (with the appropriate edges). The vertex v can be reached by following edges from each other vertex because there are no cycles and no other dead ends. (This may be easier to see by looking at the adjacency lists in Example 7. only vertex 4 will have an empty adjacency list. Since each Si is strongly connected there is a path in G vi·1 (where we let vk v0 ). that vertex can reach all others in the original digraph.) If there is more than one vertex with an empty adjacency list. The finishStack at the end of Phase 1 is shown. from wi to vi·1 . Then check if there is exactly one vertex in the transpose graph with an empty adjacency list.14. c) There is exactly one vertex with an empty adjacency list. which we will indicate by wi v1 w1 vk 1 wk 1 vk ´ v0 µ. Now construct the adjacency lists for the transpose digraph. Section 7. The total is in Θ´n · mµ. We show the algorithm’s output.22 sk ´ s0 µ is a cycle in the condensation of G. E . under each diagram. go through the adjacency list array and determine if there is exactly one vertex. so the original cycle assumed to be in the condensation of G cannot exist. b) Checking the adjacency list arrays for empty lists takes Θ´nµ time. the scc array. neither can reach the other. for each part of the problem.) Thus the digraph in Example 7. The DFS trees in the transpose graph each encompass one SCC. But then all the Thus we can form the following cycle in G: v0 w0 vertices in this cycle should have been in the same strong component. We continue only if we found a unique vertex v with an empty list. Let Si be the strong component of G that corresponds to the vertex si in the condensation (0 i k. So. Construction of the adjcency lists for the transpose digraph takes Θ´n · mµ time. there is an edge vi wi in G where vi is in Si and wi is in Si·1 . so the DAG is not a lattice.24 The strong components are the same. 
(Otherwise there would be a cycle in the digraph. If the edges are all reversed. since there is an edge si si·1 in the condensation. say v whose list is empty. tree edges are heavy solid. of course. They are A B H .15 is a lattice.54 Chapter 7 Graphs and Graph Traversals 12/13 3 14/17 2 15/16 1 6 7/8 7 5/6 11/18 4 9/10 5 9 1/2 8 3/4 vertex topo 1 7 2 8 3 6 4 9 5 5 6 4 7 3 8 2 9 1 7. D F G . . Sk S0 ). If so.20 a) There must be at least one vertex with an empty adjacency list. For each i ´0 i k 1µ.) This takes Θ´n · mµ time. and suppose s0 s1 because a condensation has no edges of the form vv. In the diagrams the roots of the DFS trees are shaded. and nontree edges are dashed. We can assume k 2 Let G be a directed graph. vertex 9. containing the leaders for the components. (See Exercise 7. 7.

Chapter 7 (a) Graphs and Graph Traversals 55 A 1/22 A 1/6 2/21 B H 9/20 A B H E C I J K D G F finishStack 3/4 B H 2/5 3/8 D E 10/19 17/22 D E 7/8 F 4/5 G 6/7 12/17 I C 11/18 J 13/16 K 14/15 F 18/21 G 19/20 11/14 I C 9/16 J 10/15 K 12/13 Original graph Transpose graph A B C D E F G H I J K scc: A A C D E D D A C C C (b) A 10/19 A 3/4 11/18 B H 9/22 H E A B D G F K I J C finishStack 2/5 B H 1/6 12/17 D E 20/21 9/14 D E 7/8 F 13/14 G 15/16 2/7 I C 4/5 J 3/6 K 1/8 F 10/13 G 11/12 17/20 I C 18/19 J 16/21 K 15/22 Original Graph Transpose Graph A B C D E F G H I J K scc: H H K D E D D H K K K .

and shuffle them around from term to term.
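The two-phase strong-component computation traced in these diagrams can be sketched end to end (Python; the function name and list-based representation are mine). Phase 1 records finishing order, and Phase 2 runs DFS on the transpose in stack order, labelling each tree with its root as the leader.

```python
def scc_leaders(adj, n):
    # Phase 1: ordinary DFS; push each vertex as it finishes
    finish, seen = [], [False] * (n + 1)
    def dfs1(v):
        seen[v] = True
        for w in adj[v]:
            if not seen[w]:
                dfs1(w)
        finish.append(v)
    for v in range(1, n + 1):
        if not seen[v]:
            dfs1(v)
    # build the transpose graph
    tr = [[] for _ in range(n + 1)]
    for v in range(1, n + 1):
        for w in adj[v]:
            tr[w].append(v)
    # Phase 2: DFS on the transpose, taking roots from the top of
    # finishStack; each DFS tree is one strong component
    scc = [0] * (n + 1)
    def dfs2(v, leader):
        scc[v] = leader
        for w in tr[v]:
            if scc[w] == 0:
                dfs2(w, leader)
    for v in reversed(finish):
        if scc[v] == 0:
            dfs2(v, v)
    return scc

# components {1,2,3} and {4,5}: 1->2->3->1, 3->4, 4->5->4
scc = scc_leaders([[], [2], [3], [1, 4], [5], [4]], 5)
assert scc[1] == scc[2] == scc[3]
assert scc[4] == scc[5] != scc[1]
```

Both phases are linear in n + m, as in the solution's total of Θ(n + m).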

7. If it is known that G is connected.30 Use the connected component algorithm (Algorithm 7. we are done. A. a complete depth-ﬁrst search from w would have been completed before returning to v and the edge wv´ vwµ would have been encountered ﬁrst from w. We will show that there is a simple cycle C that contains e1 and e3 . at the time when wv is ﬁrst encountered.7: Biconnected Components of an Undirected Graph 7. This shows the value of working with the skeleton and inserting code only in the indicated places. if w had been visited while traversing subtrees of v. 9.6: Depth-First Search on Undirected Graphs Graphs and Graph Traversals 57 7. Case 2: The edge wv´ vwµ is ﬁrst encountered while traversing the adjacency list of w. then it contains e2 and e1 .e.Chapter 7 Section 7. count the edges.8. and 13 and ans are not needed. as follows. (Lines 2. it will be visited immediately as a child of v and vw will be a tree edge. If e3 is in C1 also. otherwise vw would have already been encountered from v. then G is a tree if and only if m n 1. Let C1 be a simple cycle containing e1 and e2 . Algorithm 7. At “Exploratory processing” (line 7) and “Checking” (line 11) insert statements to output the edge vw.28 Let vw be an edge. The symmetric property is satisﬁed because if a cycle contains e1 and e2 . after line 5 of the skeleton) will not work for undirected graphs. Thus v is an ancestor of w. we assume that v is encountered before w in the depth-ﬁrst search. (If m is not given as input. C is constructed by combining a piece of C1 with a piece of C2 . 7. although it would work for directed graphs. so wv is a back edge. this is not trivial. Without loss of generality. Since v was visited before w. Let C2 be a cycle containing e2 and e3 . the depth-ﬁrst search from v must still be in progress.) Notice that outputting the edge vw as soon as it is obtained from the adjacency list (i. At this point w could not yet have been visited because. Since w has not been visited. 7. 
The following ﬁgure illustrates the construction described in the next paragraph. then there is a simple cycle containing e1 and e3 .2) modiﬁed to quit with a negative answer if any back edge is encountered or if any vertices remain unmarked after the ﬁrst call to ccDFS. I.34 a) The reﬂexive property is satisﬁed because we are given that e1 Re2 if e1 e2 . Since we require a simple cycle. by the recursive nature of the algorithm.) Section 7.32 D.. e1 • • C1 w• • v e1 • • • v e2 • • • • e3 C2 w• e2 • • • • e3 . Suppose it is not.27 Modify the undirected DFS skeleton. For the transitive property we must show that if there is a simple cycle containing e1 and e2 and a simple cycle containing e2 and e3 . Case 1: The edge vw is ﬁrst encountered while traversing the adjacency list of v.
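The tree-edge/back-edge dichotomy argued in Exercise 7.28 can be demonstrated with a short program. The following is a minimal sketch (class and method names are our own, not the text's skeleton): a DFS of a connected undirected graph labels each edge, at the moment it is first encountered, as either a tree edge or a back edge, and no other cases arise.

```java
import java.util.*;

/** Sketch for Exercise 7.28: in a DFS of an undirected graph, every edge is
 *  first encountered either as a tree edge or as a back edge. */
public class UndirectedDfsEdges {
    static List<Integer>[] adj;
    static int[] discover;
    static int time;
    static List<String> edges;

    static void dfs(int v, int parent) {
        discover[v] = ++time;
        for (int w : adj[v]) {
            if (discover[w] == 0) {                      // w unvisited: vw is a tree edge
                edges.add("tree " + v + "-" + w);
                dfs(w, v);
            } else if (discover[w] < discover[v] && w != parent) {
                edges.add("back " + v + "-" + w);        // w is a proper ancestor of v
            }
            // If discover[w] > discover[v], the edge was already reported from w's side.
        }
    }

    @SuppressWarnings("unchecked")
    public static List<String> classify(int n, int[][] edgeList, int root) {
        adj = new List[n + 1];
        for (int i = 1; i <= n; i++) adj[i] = new ArrayList<>();
        for (int[] e : edgeList) { adj[e[0]].add(e[1]); adj[e[1]].add(e[0]); }
        discover = new int[n + 1];
        time = 0;
        edges = new ArrayList<>();
        dfs(root, 0);
        return edges;
    }

    public static void main(String[] args) {
        // A triangle: two tree edges and one back edge, nothing else.
        System.out.println(classify(3, new int[][]{{1, 2}, {2, 3}, {1, 3}}, 1));
        // prints [tree 1-2, tree 2-3, back 3-1]
    }
}
```

The `w != parent` test skips the tree edge as seen from the child's side; it assumes no parallel edges, which matches the simple graphs used in these exercises.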

In C1, begin at e1 and continue around C1 until reaching the first vertex, say v, that is also in C2. Continue around C1 to find the last vertex, say w, that C1 and C2 have in common. (They must have at least two common vertices because e2 is on both cycles.) Now break C1 at v and w; let C11 be the piece that contains e1. Break C2 at v and w. It also falls into two pieces; let C23 be the piece that contains e3. Attach C11 and C23 at v and w to form the cycle C. C is a simple cycle because no vertex of C2 appears on C11 (except the endpoints v and w).

b) The graph in the diagram in the text has two equivalence classes, the two triangles.

c) Let H be the subgraph based on such an equivalence class; that is, any two edges in H are in some simple cycle. We will use the facts that H contains at least two vertices and every vertex in H is incident upon some edge in H. We show first that H is biconnected, then that it is a maximal biconnected subgraph (i.e., a biconnected component).
If H contains only one edge, it is biconnected by definition, so assume that H has more than one edge for the rest of this paragraph. H is connected because for any two distinct vertices v and w in H, v and w are incident on edges in a cycle, so there is a path between them. Moreover, there are two disjoint paths between v and w (disjoint other than the endpoints v and w, of course), so the loss of any one other vertex leaves a path between v and w. Thus H is biconnected.
Now suppose H is properly contained within J, a larger subgraph of G that is also biconnected.
Case 1: All vertices in J are also in H. Then J must include an edge that is not in H; let xy be such an edge. The vertices x and y are in H, which is connected, so there is a path between x and y in H. We can append the edge xy to form a simple cycle in G, so xy should have been in H.
Case 2: J has at least one vertex not in H. Since J is connected (if it contained only additional vertices, it would not be connected), let xy be an edge with one vertex, x, in H and the other, y, in J but not H. Let v be any other vertex in H adjacent to x. Because J is biconnected, there must be a simple path from v to y in G that does not use x. This path forms a simple cycle when the edges xv and xy are included. Because the cycle is simple, xy should again have been in H.
Either case contradicts the choice of H, so H is a maximal biconnected subgraph.

7.35 Consider the following undirected graphs, in which the solid lines are edges in a DFS tree rooted at A and dashed lines are back edges.

[Figure: three DFS trees on the vertices A, B, C, D, E, F, annotated with old1 and old2 values.]

a) In the leftmost depth-first search tree, old1 B = A and old1 D = A, but B and D are not in the same biconnected component. In the middle tree, C and D (and the edge CD) form one bicomponent, but old1 C = old1 D.
b) In the rightmost DFS tree, old2 B = A and old2 D = D, but B and D are not in the same biconnected component. In the middle tree, C and D (and the edge CD) form one bicomponent, but old2 C ≠ old2 D.

7.37 The root of a depth-first search tree for a graph is an articulation point if and only if the root has at least two children.
Proof. If there is only one child, the subtree rooted at that child contains all the other vertices, and since it is a tree, there is a path between each pair of vertices (other than the root) that does not go through the root. So the root is not an articulation point. If the root has two (or more) children, there can be no path from the first child to the second child that does not go through the root. If there were, the second child of the root would have been encountered while doing the depth-first search from the first child and would have been put in the first subtree. Thus, if there is more than one child, the root is an articulation point.
A similar exercise is: Can a leaf in a depth-first search tree for a connected graph be an articulation point?

7.39 Consider the leftmost depth-first search tree shown in the solution of Exercise 7.35, and assume the vertices on each adjacency list are in alphabetical order. If an edge is stacked on edgeStack each time it is encountered, then after the depth-first search backs up from E, edgeStack contains (bottom) AB, BA, BC, CA, CB, CD, DC, DE, EC, ED (top). When the search backs up from D to C, it detects the bicomponent containing the vertices C, D, and E, and it pops the top five entries from the stack. Only then, when the search is ready to branch out again from C, is the edge CE encountered and stacked. It will be popped later as part of the other bicomponent.

7.40 No. Consider the leftmost graph shown in the solution of Exercise 7.35. Let v = C and w = D. At the time the algorithm backs up from D to C, back = 1 (the "back" value for C), so the test suggested in the exercise would fail to detect the bicomponent consisting of C, D, and E.

7.41 a) True except for a graph consisting of exactly one edge. Such a graph is biconnected, but removing the edge would leave two isolated vertices. If a biconnected graph has more than one edge, every edge is part of a cycle, so removal of any one edge can not disconnect the graph.
b) Not true. Consider a graph consisting of two triangles that share one vertex (the diagram in Exercise 7.34(b) in the text). The shared vertex is an articulation point, so the graph is not biconnected, but the graph is edge-biconnected.

Additional Problems

7.43 The general strategy of both solutions below is to first eliminate all but one vertex as possible hypersinks, then check whether the remaining vertex is actually a hypersink. (The definition of hypersink does not specify anything about edges of the form ss, so the solution given allows ss to be present or not. With minor changes, ss can be considered to be required or prohibited.) Let A = (aij) be the adjacency matrix (1 ≤ i, j ≤ n). Observe that for i ≠ j, if aij = 1, then vi cannot be a hypersink, and if aij = 0, then vj cannot be a hypersink. So each probe in the matrix can eliminate one vertex as a possible hypersink. Both solutions are linear in n; the first solution is a little bit easier to follow and to write out, the second is a little faster.
Solution 1. Let vs be a possible hypersink. We move down column s as long as we see 1's. When a 0 is encountered, we jump right to the column of the next possible hypersink (the row number where we encountered the 0) and continue moving down the column.

boolean hsink;
int s, i;
s = 1;                 // possible hypersink
while (s < n)
    // s is a candidate for the hypersink.
    hsink = true;
    for (i = s+1; i <= n; i++)
        if (a[i][s] == 0)
            // No vertex with index < i (including s) can be a hypersink.
            hsink = false;
            s = i;
            break;     // Continue outer loop.
        // Continue inner loop.
    if (hsink)
        break;         // All vertices except s were eliminated.
// Now check whether s is indeed a hypersink.
hsink = true;
for (i = 1; i <= n; i++)
    if (i != s && a[i][s] == 0)
        hsink = false;
        break;
    if (i != s && a[s][i] == 1)
        hsink = false;
        break;
return hsink;

The elimination phase of the algorithm examines n − 1 matrix entries (one from each row other than the first row). The verification phase examines 2n − 2 entries. The total is 3n − 3.

Solution 2. The elimination phase can be conducted like a tournament (where examining an entry aij is a "match" between vi and vj, and the vertex that is not eliminated is the "winner"). In Section 5.2 we saw that the overall winner of a tournament (in this problem, the one possible hypersink) will have competed directly against roughly lg n other vertices. The relevant matrix entries do not have to be reexamined in the verification phase. So the problem can be solved by examining at most 3n − lg n − 3 matrix entries. (For details see King and Smith-Thomas, "An optimal algorithm for sink-finding," Information Processing Letters, May 1982.)

7.45 For simplicity, we will output the edges of the desired path in sequence. Modify the undirected DFS skeleton, as follows.

¯ At "Exploratory processing" (line 7) insert a statement to output the edge vw.
¯ At "Backtrack processing" (line 9) insert a statement to output the edge wv.
¯ At "Checking" (line 11) insert statements to output the edge vw, then the edge wv.

For example, if the graph has edges (1,2), (1,3), and (2,3) and the DFS starts at 1, the outputs would be 12 23 31 13 32 21.

7.47 This problem is a good example of the power of strongly connected component analysis. First find the SCCs of the directed graph G and create the condensation graph (denoted G in the text), using Θ(n + m) time. Now take the transpose (reverse the edges) of the condensation graph, giving G T. In G T, determine how many vertices have no outgoing edges. If there is just one such vertex, say s, then it (or any vertex in its SCC) can reach all vertices in the original graph. The explanation is that every vertex can reach s in G T, because the graph is acyclic and has no other dead end. So s can reach every vertex in the condensation, and so any vertex in the SCC of s can reach every vertex in G.
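The elimination-then-verification strategy of Exercise 7.43 can be compiled and run. Below is a minimal sketch (0-based indexing, boolean matrix, and the names `findHypersink` and `a` are our own assumptions, not the text's): each probe eliminates exactly one candidate, and a final 2(n−1)-entry scan verifies the survivor.

```java
/** Sketch of the two-phase hypersink search: eliminate all but one candidate
 *  with n-1 probes, then verify the survivor. a[i][j] means an edge i -> j. */
public class Hypersink {
    /** Returns the hypersink's index, or -1 if there is none. */
    public static int findHypersink(boolean[][] a) {
        int n = a.length;
        int s = 0;                         // current candidate
        for (int i = 1; i < n; i++)
            if (!a[i][s])                  // a[i][s] == 0: s cannot be a hypersink,
                s = i;                     // and no j < i can be one either
        // Verification: every other vertex points to s, and s points to no one.
        for (int i = 0; i < n; i++) {
            if (i == s) continue;
            if (!a[i][s] || a[s][i]) return -1;
        }
        return s;
    }

    public static void main(String[] args) {
        boolean[][] a = {
            {false, false, true,  false},
            {false, false, true,  false},
            {false, false, false, false},
            {false, false, true,  false}
        };
        System.out.println(findHypersink(a)); // prints 2
    }
}
```

The elimination loop relies on the observation quoted above: a 1 entry eliminates its row's vertex (it has an out-edge), a 0 entry eliminates its column's vertex (it is missing an in-edge).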

7.49 Use depth-first search. For each vertex v, keep a count of how many descendants of v (including v itself) have been visited. This can be done by initializing v's descendant counter to 1 when v is first visited and adding in the descendant counter for each of its children when the search backs up from the child to v. Each time the search backs up to v, test whether count[v] ≥ n/2. If the test is satisfied, the algorithm outputs v and halts. If the test fails, the traversal continues. (If count[v] < n/2, another, unvisited, subtree of v could have more than n/2 vertices, so we cannot conclude that v satisfies the required property.)
We will show that the output vertex v satisfies the property. Let w be the root of a subtree of v that has been visited. When the search backed up to w, count[w] was found to be less than n/2 (otherwise the algorithm would have quit there and output w), so the subtree rooted at w has fewer than n/2 vertices. Since we have already visited at least n/2 descendants of v, each subtree of v that has not yet been visited has at most n/2 vertices, and the component that contains ancestors of v has at most n/2 vertices. When v is removed, each subtree of v will be a separate component, and the ancestors of v along with their descendants (other than v and its subtrees) will be one other component. Thus, when v is removed, each separate component has at most n/2 vertices.
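The descendant-counting idea above can be sketched as a short program. This is our own rendering (class and method names are assumptions): it DFSes a tree, and records the first vertex whose visited-descendant count reaches n/2 when the search backs up to it.

```java
import java.util.*;

/** Sketch for Exercise 7.49: find a vertex whose removal leaves components
 *  of at most n/2 vertices, by counting visited descendants during DFS. */
public class HalfSplitter {
    static List<Integer>[] adj;
    static boolean[] visited;
    static int n, answer;

    static int dfs(int v) {                       // returns the size of v's subtree
        visited[v] = true;
        int count = 1;
        for (int w : adj[v])
            if (!visited[w]) {
                count += dfs(w);
                // The test made each time the search backs up to v:
                if (answer == -1 && 2 * count >= n) answer = v;
            }
        return count;
    }

    @SuppressWarnings("unchecked")
    public static int find(int nn, int[][] edges, int root) {
        n = nn;
        adj = new List[n + 1];
        for (int i = 1; i <= n; i++) adj[i] = new ArrayList<>();
        for (int[] e : edges) { adj[e[0]].add(e[1]); adj[e[1]].add(e[0]); }
        visited = new boolean[n + 1];
        answer = -1;
        dfs(root);
        return answer;
    }

    public static void main(String[] args) {
        // A path 1-2-3-4-5: removing the middle vertex 3 leaves {1,2} and {4,5}.
        System.out.println(find(5, new int[][]{{1,2},{2,3},{3,4},{4,5}}, 1)); // prints 3
    }
}
```

The manual's algorithm halts as soon as the test succeeds; this sketch merely records the first vertex that passes, which identifies the same vertex.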


Chapter 8
Graph Optimization Problems and Greedy Algorithms

Section 8.2: Prim's Minimum Spanning Tree Algorithm

8.1 Start at s. [Figure: a weighted graph on the vertices s, u, v, w with edge weights 1, 1, 1, 8, 8, and 8.]

8.3 Add xy to T2, forming a cycle. By the MST property, all edges in the cycle have weights ≤ W(xy). At least one edge in the cycle, say zw, is not in T1. But zw is in T2, so W(xy) ≥ W(zw) ≥ W(uv).

8.5 Note: Part (a) of this exercise was updated in the second printing. Be sure that the errata for the first printing (available via Internet) are applied if you are using the first printing.
a) Let Gn be a simple cycle of n vertices; that is, there is an edge between the ith and (i+1)st vertices, for 1 ≤ i ≤ n − 1, and an edge between 1 and n. Let the start vertex be 1. Then there are at most two candidate edges to choose from at any one time, so each of the n getMins requires at most one comparison of edge weights. (We adopt the convention given in part (b) of the exercise, that a comparison of an edge weight with ∞ does not count.) The updateFringe subroutine only does an edge-weight comparison for the last vertex to enter the fringe.
b) Let Gn be a simple chain of n vertices; that is, there is an edge between the ith and (i+1)st vertices, for 1 ≤ i ≤ n − 1. For a more general class, let C be some constant, and consider the class of connected undirected graphs consisting of the union of at most C simple cycles. Then there are at most two fringe vertices per simple cycle at any one time, and updateFringe does at most one edge-weight comparison per simple cycle, so the total is about 2Cn.

Remarks: Focusing on weight comparisons, as is done in this exercise and in several others in the chapter, requires some justification, because it addresses a constant factor, not the asymptotic order of the algorithm. But this is only a partial justification. Prim's algorithm, as given in the text, does "bookkeeping" work that requires time in Θ(n²), so even if the number of weight comparisons is of a lower order, such as we saw in part (b) above, the bookkeeping work will dominate for large enough n. More sophisticated implementations can make the bookkeeping time proportional to the number of weight comparisons plus n, using a Pairing Forest or a Fibonacci Heap, which in turn are discussed in Section 6.7. These methods don't check all of the fringe vertices. (We can't omit "plus n" because of pathological examples in which the number of weight comparisons is o(n).) If the average size of the fringe is in Ω(n), then any implementation of getMin that checks all of the fringe vertices will put the overall cost in Θ(n²). (We mean the average over the course of the algorithm, not the average over some set of inputs.) There is no broad class of graphs for which we know that the average size of the fringe is in o(n). Section 8.8 outlines the best ways that are known to improve the performance of Prim's and Dijkstra's algorithms. There are three reasons why the text does not give one of these more sophisticated implementations for the "standard" version of Prim's algorithm or Dijkstra's algorithm:
1. We preferred to present the main ideas as simply as possible.
2. If the weights are floating point, weight comparisons are substantially more expensive than integer or boolean tests.
3. The payoff is a constant factor, not an improvement in the asymptotic order.
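For concreteness, the "standard" Θ(n²) version of Prim's algorithm discussed above can be sketched as follows. This is a minimal, runnable rendering on an adjacency matrix (class and method names are our own assumptions, not the text's pseudocode); the inner scans correspond to getMin and updateFringe.

```java
/** A minimal O(n^2) sketch of Prim's algorithm on an adjacency matrix.
 *  INF marks a missing edge; the graph is assumed connected. */
public class PrimSketch {
    public static final int INF = Integer.MAX_VALUE / 2;

    /** Returns the total weight of a minimum spanning tree, starting at s. */
    public static int mstWeight(int[][] w, int s) {
        int n = w.length;
        int[] fringeWgt = new int[n];
        boolean[] inTree = new boolean[n];
        java.util.Arrays.fill(fringeWgt, INF);
        fringeWgt[s] = 0;
        int total = 0;
        for (int step = 0; step < n; step++) {
            int v = -1;
            for (int u = 0; u < n; u++)           // getMin: scan the fringe
                if (!inTree[u] && (v == -1 || fringeWgt[u] < fringeWgt[v])) v = u;
            inTree[v] = true;
            total += fringeWgt[v];
            for (int u = 0; u < n; u++)           // updateFringe: relax edges from v
                if (!inTree[u] && w[v][u] < fringeWgt[u]) fringeWgt[u] = w[v][u];
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] w = {
            {INF, 1, 4, INF},
            {1, INF, 2, 6},
            {4, 2, INF, 3},
            {INF, 6, 3, INF}
        };
        System.out.println(mstWeight(w, 0)); // prints 6 (edges 0-1, 1-2, 2-3)
    }
}
```

The getMin scan always touches the whole fringe, which is exactly the Θ(n²) bookkeeping the remarks above discuss.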

With that said, we can briefly mention a few ways to reduce the bookkeeping in getMin to be proportional to the size of the fringe. One alternative is to use a (mutable) linked list to hold the fringe vertices. It is also possible to use the List ADT in the text, which is immutable, by incorporating the idea of obsolete nodes described in Section 6.7. We believe that the overhead of lists and the complications of implementing all the cases correctly make these alternatives less desirable than the alternative described below. Another alternative is to maintain the set of fringe vertices in an array, say inFringe. A counter, fringeSize, keeps track of how many fringe vertices there are at any moment, and they are stored in the range 1, ..., fringeSize of inFringe. insert adds new vertices at the end, incrementing fringeSize. Then findMin finds the vertex in inFringe with the minimum fringeWgt and swaps it with the vertex in inFringe[fringeSize]. Finally, deleteMin can simply subtract 1 from fringeSize. The asymptotic order is unchanged. Limited empirical results indicate that the last technique would be profitable when the average fringe size is n/20 or less, but we do not know what graphs tend to have this property. It would require extensive experimentation to determine under what circumstances these techniques produce improved performance. With one of these alternatives in place, evaluating the time of the algorithms by considering only edge-weight comparisons (plus a linear term to cover pathological cases) would be fully justified.

8.7 All n − 1 edges immediately become candidate edges, and each time a candidate edge is selected, every candidate edge is involved in a weight comparison to find the candidate with minimum weight. This accounts for (n − 2) + (n − 3) + ··· + 1 weight comparisons, or (n − 1)(n − 2)/2 in all.

8.8 a) Each time a vertex is taken into the tree (after the start vertex), its edge to every vertex not yet in the tree is involved in a weight comparison in updateFringe. This accounts for (n − 2) + (n − 3) + ··· + 1, or (n − 1)(n − 2)/2 in all. In addition, getMin does (n − 2) + (n − 3) + ··· + 1 weight comparisons over the course of the algorithm. So the total is (n − 1)(n − 2).
b) All of them.

8.10 In Algorithm 8.1, delete "if (pq.status[w] == unseen) insert(pq, w, newWgt); else", as the "if" will always be false. In the create function, initialize pq.numPQ = n, with status[v] = fringe and fringeWgt[v] = ∞ for 1 ≤ v ≤ n.

Section 8.3: Single-Source Shortest Paths

8.12 The source vertex is s. [Figure: a weighted graph on the vertices s, u, v, w with edge weights 1, 1, 1, 8, 8, and 8.]

8.14 Dijkstra's shortest path algorithm will not work correctly with negative weights. Consider the following figure. [Figure: the vertices s, v, w with W(sv) = 8, W(sw) = 4, and W(vw) = −8.]
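The array-based fringe described above is small enough to write out in full. The following is a sketch under our own naming assumptions: fringe vertices live in inFringe[0..fringeSize-1], findMin swaps the minimum into the last slot, and deleteMin just shrinks the counter.

```java
/** Sketch of the array-based fringe: insert appends, findMin swaps the
 *  minimum-weight vertex into the last slot, deleteMin shrinks the array. */
public class ArrayFringe {
    public final int[] inFringe;
    public final float[] fringeWgt;
    public int fringeSize = 0;

    public ArrayFringe(int n) {
        inFringe = new int[n];
        fringeWgt = new float[n];
    }
    public void insert(int v, float w) {     // add a new fringe vertex at the end
        inFringe[fringeSize++] = v;
        fringeWgt[v] = w;
    }
    public int findMin() {                   // swap the minimum into the last slot
        int best = 0;
        for (int i = 1; i < fringeSize; i++)
            if (fringeWgt[inFringe[i]] < fringeWgt[inFringe[best]]) best = i;
        int tmp = inFringe[best];
        inFringe[best] = inFringe[fringeSize - 1];
        inFringe[fringeSize - 1] = tmp;
        return tmp;
    }
    public void deleteMin() { fringeSize--; } // the minimum is in the last slot

    public static void main(String[] args) {
        ArrayFringe pq = new ArrayFringe(10);
        pq.insert(3, 5.0f); pq.insert(7, 2.0f); pq.insert(1, 9.0f);
        System.out.println(pq.findMin()); // prints 7
        pq.deleteMin();
        System.out.println(pq.findMin()); // prints 3
    }
}
```

Decreasing a fringe weight needs no structural work at all here, which is why this representation is attractive when the fringe stays small.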

The algorithm, with s as the start vertex, will pull w into the tree with the path of length 4, and then pull v into the tree. The edge with negative weight, which gives a shorter path to w, never becomes a candidate. Another exercise: Review the proof of Theorem 8.6 (the correctness argument for Dijkstra's algorithm) to find where it uses the hypothesis that the weights are nonnegative.

8.16 We use the notation of dijkstraSSSP and Section 8.3.2. Suppose that Dijkstra's algorithm adds vertices to its tree in the sequence v0 (= s), v1, ..., vn−1. (If some vertices are unreachable from s, they are added arbitrarily to the end of the sequence.) We will show that after k times through the main loop, the algorithm has computed the correct shortest path distances for v0, ..., vk. The proof is by induction on k, the index of this sequence. The base case is k = 0: the shortest path distance from s to s is 0, and d(s, s) = 0 before the main loop. For k > 0, assume the theorem holds for k − 1. If the candidate edge that is taken in the k-th pass through the main loop is t vk, then the hypotheses of Theorem 8.6 hold with vk in the role of z and v0, ..., vk−1 in the role of V′, so d(s, t) + W(t vk) is the shortest path distance from s to vk in G. This is the value that the algorithm stores for d(s, vk). If there are no candidate edges remaining at the beginning of the k-th pass through the main loop, then the algorithm terminates. Shortest path distances have been computed correctly for v0, ..., vk−1 (by the inductive hypothesis) and they are ∞ for vk, ..., vn−1, so the algorithm has computed the shortest paths for v0, ..., vn−1.

8.18 We use the notation of dijkstraSSSP and Section 8.3.2. Use an array count such that, for any vertex u, count[u] is the number of distinct shortest paths from s to u known so far. Note that two paths are called distinct if they have any difference at all; they may partially overlap. We assume there are no edges of weight 0; otherwise there can be an infinite number of shortest paths. Initialize count[s] = 1. When a vertex x is added to the tree, x's adjacency list is traversed. Let y be a vertex on the list. There are three cases to handle.
1. y was an unseen vertex. There were no paths to y known previously; now each best path to x gives a best path to y, so count[y] = count[x]. This is where the count is initialized for each vertex other than s.
2. y is a fringe vertex, and d(s, x) + W(xy) < d(s, y). The edge xy replaces the current candidate to y. The paths that had been the best paths to y are discarded, and each best path to x gives a best path to y, so again count[y] = count[x].
3. y is a fringe vertex, and d(s, x) + W(xy) = d(s, y). The current candidate edge to y is not replaced, but all shortest paths to x with xy appended on the end give paths to y as short as the best already known. Therefore, count[y] is updated as follows: count[y] += count[x].

8.20 We use the notation of dijkstraSSSP and Section 8.3.2.
a) This solution uses an n × n weight matrix W to represent the graph. The status array is replaced by a boolean array inFringe.
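The three-case path-counting rule of Exercise 8.18 can be checked on a small example. The following sketch (our own 0-based, adjacency-matrix rendering; the "unseen" and "fringe" cases merge naturally because unseen vertices have distance ∞) counts distinct shortest paths alongside Dijkstra's algorithm.

```java
import java.util.*;

/** Sketch of shortest-path counting: dist[] as in Dijkstra, plus
 *  count[u] = number of distinct shortest s-u paths found so far. */
public class CountShortestPaths {
    public static final int INF = Integer.MAX_VALUE / 2;

    public static long[] countPaths(int[][] w, int s) {
        int n = w.length;
        int[] dist = new int[n];
        long[] count = new long[n];
        boolean[] done = new boolean[n];
        Arrays.fill(dist, INF);
        dist[s] = 0;
        count[s] = 1;
        for (int iter = 0; iter < n; iter++) {
            int x = -1;
            for (int u = 0; u < n; u++)
                if (!done[u] && (x == -1 || dist[u] < dist[x])) x = u;
            if (dist[x] == INF) break;              // rest unreachable
            done[x] = true;
            for (int y = 0; y < n; y++) {
                if (done[y] || w[x][y] == INF) continue;
                int nd = dist[x] + w[x][y];
                if (nd < dist[y]) { dist[y] = nd; count[y] = count[x]; }  // cases 1 and 2
                else if (nd == dist[y]) { count[y] += count[x]; }          // case 3
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // A unit-weight square 0-1, 0-2, 1-3, 2-3: two shortest 0-to-3 paths.
        int[][] w = {
            {INF, 1, 1, INF},
            {1, INF, INF, 1},
            {1, INF, INF, 1},
            {INF, 1, 1, INF}
        };
        System.out.println(countPaths(w, 0)[3]); // prints 2
    }
}
```

The equality branch is exactly case 3 above: the candidate is kept and count[x] new equally short paths are added.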

void sssp(float[][] W, float[] dist, int[] parent, int n, int s)
    int x, y, treeSize;
    boolean[] inFringe = new boolean[n+1];
    // Initialization
    for (y = 1; y <= n; y++)
        dist[y] = ∞;
        parent[y] = -1;
        inFringe[y] = true;
    dist[s] = 0;
    // Main loop
    for (treeSize = 0; treeSize < n; treeSize++)
        x = findMin(dist, n, inFringe);
        inFringe[x] = false;
        updateFringe(W, dist, parent, n, x);
    return;

/** Scan vertices with inFringe[x] == true to find one for which dist[x]
 *  is minimum; return this vertex. */
int findMin(float[] dist, int n, boolean[] inFringe)
    int x, y;
    float best;
    best = ∞;
    x = -1;
    for (y = 1; y <= n; y++)
        if (inFringe[y] && dist[y] < best)
            best = dist[y];
            x = y;
        // Continue loop.
    return x;

/** Replace appropriate candidate edges with edges from x. */
void updateFringe(float[][] W, float[] dist, int[] parent, int n, int x)
    int y;
    for (y = 1; y <= n; y++)
        float newDist = dist[x] + W[x][y];
        if (newDist < dist[y])
            parent[y] = x;
            dist[y] = newDist;
        // Continue loop.
    // We did not have to explicitly test whether y is in the fringe or the
    // tree because if it is in the tree, the current path to y (of length
    // dist[y]) cannot have higher weight than the path through x to y.
    return;

b) The procedure in part (a) is clearly simpler than Algorithm 8.2. However, like any algorithm that treats the graph as a complete graph, this one uses Θ(n²) space for the weight matrix; if the actual graph is sparse, there is a lot of wasted space. As discussed in the solution of Exercise 8.5, this algorithm will always do a total of about ½ n² weight comparisons when searching the fringe vertices to find the candidate edge with lowest weight, since all vertices are in the fringe at the beginning, and only one is removed during each pass of the main loop. Algorithm 8.2 will do substantially fewer weight comparisons on some graphs, although we do not know any general class of graphs for which this is true. Focusing on weight comparisons requires some justification because the bookkeeping in both algorithms takes time in Θ(n²); however, more sophisticated implementations can make the bookkeeping time proportional to the number of weight comparisons. Also, if the weights are floating point, then weight comparisons are substantially more expensive than integer or boolean tests.
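The sssp procedure above compiles almost verbatim once block delimiters are restored. The following is our own compilable, 0-based rendering (Float.POSITIVE_INFINITY stands in for ∞):

```java
/** Compilable rendering of the matrix-based sssp procedure:
 *  findMin scans the fringe, updateFringe relaxes all edges from x. */
public class MatrixDijkstra {
    public static final float INF = Float.POSITIVE_INFINITY;

    public static float[] sssp(float[][] w, int s) {
        int n = w.length;
        float[] dist = new float[n];
        int[] parent = new int[n];
        boolean[] inFringe = new boolean[n];
        for (int y = 0; y < n; y++) { dist[y] = INF; parent[y] = -1; inFringe[y] = true; }
        dist[s] = 0;
        for (int treeSize = 0; treeSize < n; treeSize++) {
            int x = -1;
            float best = INF;
            for (int y = 0; y < n; y++)                 // findMin
                if (inFringe[y] && dist[y] < best) { best = dist[y]; x = y; }
            if (x == -1) break;                         // remaining vertices unreachable
            inFringe[x] = false;
            for (int y = 0; y < n; y++) {               // updateFringe
                float newDist = dist[x] + w[x][y];
                if (newDist < dist[y]) { dist[y] = newDist; parent[y] = x; }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        float[][] w = {
            {INF, 2, 8, INF},
            {INF, INF, 5, 1},
            {INF, INF, INF, INF},
            {INF, INF, 2, INF}
        };
        System.out.println(sssp(w, 0)[2]); // prints 5.0 (path 0 -> 1 -> 3 -> 2)
    }
}
```

Using IEEE infinity means dist[x] + w[x][y] stays infinite for missing edges, so no explicit fringe/tree test is needed, mirroring the comment in updateFringe above.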

Section 8.4: Kruskal's Minimum Spanning Tree Algorithm

8.22 If e forms a cycle with edges in F, then there is a path between its endpoints v and w that does not use e (since e is not in F), so v and w are in the same connected component of F. On the other hand, if v and w are in the same connected component of F, then there must be a path from v to w in F. Adjoining e to the path gives a cycle.

8.24 The equivalence classes other than singletons are shown after each union. Take AB: (AB). Take EF: (AB, EF). Take EK: (AB, EFK). Take GH: (AB, EFK, GH). Take GL: (AB, EFK, GHL). Take GM: (AB, EFK, GHLM). Take BC: (ABC, EFK, GHLM). Take CM: (ABCGHLM, EFK). Drop HL. Take DJ: (ABCGHLM, DJ, EFK). Take FG: (ABCGHLMEFK, DJ). Drop FK. Drop LM. Take JM: (ABCGHLMEFKDJ). Take AI: (ABCGHLMEFKDJI). This is a spanning tree now, so the remaining edges (CD, CJ, HM) can be ignored.

8.26 Dijkstra's shortest path algorithm could be used with all edge weights assigned 1, but there is a lot of unneeded overhead. The simplest solution is to use breadth-first search starting at s. When w is encountered, the path in the breadth-first search tree is a shortest path from s to w. There is no straightforward way to use depth-first search.

Additional Problems

8.27 Use a dynamic equivalence relation (Section 6.6 of the text), where two vertices are in the same equivalence class if and only if they are in the same connected component of the graph consisting of the edges read in so far.

Read(n, m);
count = 0;
for (i = 1; i <= m; i++)
    Read(v, w);
    if (IS(v, w) == false)
        MAKE v ≡ w;
        count++;
if (count == n-1)
    connected = true;
else
    connected = false;

Each IS is implemented with two cFinds, and each MAKE uses two cFinds and one wUnion. Thus this algorithm includes a Union-Find program with at most 2m + 3(n − 1) instructions. The worst-case running time is in O((n + m) lg*(n + m)), where lg* is the slowly growing function defined in Definition 6.9.
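The connectivity test of Exercise 8.27 is easy to make concrete. Below is a sketch with our own names (find/union in place of the text's cFind/wUnion/IS/MAKE), using weighted union and path compression as the text's operations do:

```java
/** Sketch of the union-find connectivity test: count successful unions;
 *  the graph on vertices 1..n is connected iff the count reaches n-1. */
public class ConnectivityCheck {
    final int[] parent, size;

    ConnectivityCheck(int n) {
        parent = new int[n + 1];
        size = new int[n + 1];
        for (int i = 1; i <= n; i++) { parent[i] = i; size[i] = 1; }
    }
    int find(int v) {                       // cFind-style: with path compression
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];  // halve the path as we walk up
            v = parent[v];
        }
        return v;
    }
    boolean union(int v, int w) {           // wUnion-style: weighted union
        int a = find(v), b = find(w);
        if (a == b) return false;           // IS(v, w) would have been true
        if (size[a] < size[b]) { int t = a; a = b; b = t; }
        parent[b] = a;
        size[a] += size[b];
        return true;
    }

    public static boolean connected(int n, int[][] edges) {
        ConnectivityCheck uf = new ConnectivityCheck(n);
        int count = 0;
        for (int[] e : edges)
            if (uf.union(e[0], e[1])) count++;
        return count == n - 1;
    }

    public static void main(String[] args) {
        System.out.println(connected(4, new int[][]{{1,2},{2,3},{3,4}})); // prints true
        System.out.println(connected(4, new int[][]{{1,2},{3,4}}));       // prints false
    }
}
```

Each edge costs at most two finds and one union, matching the 2m + 3(n − 1) instruction bound quoted above.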


Chapter 9
Transitive Closure, All-Pairs Shortest Paths

Section 9.2: The Transitive Closure of a Binary Relation

9.2 The point of this exercise is to see that depth-first search does not give a particularly good solution to the problem. Since the two solutions given here in part (a) are shown not to be very good in parts (b) and (c), and some students may come up with different solutions, the two given here should be taken with a grain of salt; we will not give a complete solution. An alternative second solution is sketched briefly, but part (c) shows that it also misses some paths.

a) Solution 1. We would expect students to try the following procedure (to be called repeatedly to start new depth-first searches from vertices that have not yet been visited), perhaps with the embellishment to be described in a moment. R is the (global) reachability matrix being computed. Initialize R to zeros, except 1's on the main diagonal (in time Θ(n²)). At postorder time for vertex v (line 13 in the DFS skeleton, Algorithm 7.3), compute row v of the matrix R as follows: in a loop through v's adjacency list, for every vertex w that is adjacent to v (via an out-edge), "or" row w of R into row v of R; that is, if R[w][i] == 1, set R[v][i] to 1, for 1 ≤ i ≤ n. The cost of this "step" for v is n times the out-degree of v, so the cost for the whole graph of these "steps" is in Θ(nm). (We put "step" in quotes because it is a for loop within a while loop—quite a substantial "step.") Except for the postorder processing, the search takes time in Θ(n + m). We will see when we look at the example in part (c) that this is not enough.

Solution 2. Perform a depth-first search according to the DFS skeleton, Algorithm 7.3, but access the vertices on the recursion stack (the vertices on the path from the root to the current vertex). This could be implemented by not using built-in recursion and manipulating the stack directly, but we use a linked list, named pathList, to store these vertices. The list operates like a stack, except that all elements can be inspected without modifying the list. The algorithm is written recursively for clarity. Assume the usual initializations are done in dfsSweep, and pathList = nil.

void reachFrom(int v)
    int w;
    IntList remAdj;
    color[v] = gray;
    // Number v and fill matrix entries for its ancestors.
    time++;
    discoverTime[v] = time;
    pathList = cons(v, pathList);    // "push" v.
    For all u in pathList:
        R[u][v] = 1;
    // Consider vertices adjacent to v.
    remAdj = adjacencyList[v];
    while (remAdj != nil)
        w = first(remAdj);
        if (color[w] == white)
            reachFrom(w);
        else if (discoverTime[w] < discoverTime[v])
            // vw is a back edge or cross edge
            For all u in pathList down to w (or to the end):
                R[u][w] = 1;
        remAdj = rest(remAdj);
    pathList = rest(pathList);    // "pop" v.
    color[v] = black;
    time++;
    finishTime[v] = time;
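The "do a DFS from every vertex" repair mentioned below for these buggy solutions is itself short enough to sketch. This rendering is our own (0-based, with edge-list input): row v of R is filled by one depth-first search rooted at v, using R's own row as the visited marker.

```java
import java.util.*;

/** Sketch of transitive closure by n depth-first searches:
 *  one DFS per root fills one row of the reachability matrix. */
public class ReachabilityByDfs {
    @SuppressWarnings("unchecked")
    public static boolean[][] closure(int n, int[][] edges) {
        List<Integer>[] adj = new List[n];
        for (int i = 0; i < n; i++) adj[i] = new ArrayList<>();
        for (int[] e : edges) adj[e[0]].add(e[1]);
        boolean[][] r = new boolean[n][n];
        for (int v = 0; v < n; v++) dfs(adj, v, v, r);
        return r;
    }
    static void dfs(List<Integer>[] adj, int root, int v, boolean[][] r) {
        r[root][v] = true;                      // row doubles as the visited set
        for (int w : adj[v])
            if (!r[root][w]) dfs(adj, root, w, r);
    }

    public static void main(String[] args) {
        boolean[][] r = closure(4, new int[][]{{0, 1}, {1, 2}, {3, 1}});  // 0->1->2, 3->1
        System.out.println(r[0][2] + " " + r[3][2] + " " + r[2][0]); // prints true true false
    }
}
```

Each of the n searches costs O(n + m), giving the O(n(n + m)) bound (O(n³) on dense graphs) mentioned in the discussion that follows.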

b) For solution 1, the times for various parts were given in part (a); the total is Θ(n(n + m)). For solution 2, regardless of how large n is, there are never more than n vertices on the stack, so the worst case time is in O(nm).

c) For solution 1, any path that requires a back edge will be missed. For example, if reachFrom is called with v1 as the root, the depth-first search tree will contain all five vertices, but the paths from v2, v3, and v4 to v5 will not be recorded in R. Solution 2 also misses some paths: it records that the ancestors of v reach w when a back edge or cross edge vw is found, but not that they reach the vertices reachable from w, so paths that continue beyond such an edge are missed.

Solution 3. The simple way to overcome the bugs in these solutions is to start a depth-first search from each vertex. This obviously will do some repeated work, but it is about the best solution known for sparse graphs, and it has the same asymptotic order on dense graphs (O(n³)) as Warshall's algorithm.

Warshall's algorithm and the other methods studied in this chapter introduce algorithmic themes that are useful in other situations. The themes of this chapter are revisited in Chapters 12 and 14.

Section 9.3: Warshall's Algorithm for Transitive Closure

9.4 Note to instructor: It is surprisingly difficult, if not impossible, to find a graph for which Algorithm 9.1 requires more than four passes through the while loop, including the pass to detect that nothing changed. It can be shown that at most four passes are needed; the authors have not been able to find a graph (with any number of vertices) that requires five or more passes. It is not easy to predict the course of the algorithm "by eye." Unless students check their constructions by programming and running Algorithm 9.1, they are likely to jump to the wrong conclusions (as both authors have, on various tries).

For purposes of making Algorithm 9.1 run for as many passes as possible through its while loop, it is sufficient to consider graphs whose directed edges form a single simple path through all n vertices. Such a graph is specified by n, merely giving a permutation of 1, ..., n. Here is a family of graphs that forces Algorithm 9.1 to make four passes through the while loop, including a last pass in which nothing changes. Let q be about √n. Make directed edges as follows, from left to right: (1, q+1), (q+1, 2q+1), ..., ((q−2)q+1, (q−1)q+1), then ((q−1)q+1, 2) to wrap around; then (2, q+2), (q+2, 2q+2), ..., ((q−2)q+2, (q−1)q+2), then ((q−1)q+2, 3) to wrap around; and so on. [The explicit edge list for a 7-vertex instance and its final 7 × 7 reachability matrix R appear here in the original.]

An instructive experiment: Program Algorithm 9.1, but instead of reading in a graph, have a procedure to generate one. For a sequence of p beginning at 2 and going as high as your computer can stand, set n = 2^p and generate a random permutation of vertices 1 through n. Make n − 1 directed edges between adjacent vertices in the random sequence. Run Algorithm 9.1 on this graph and report how many passes through the while loop were used. Do about 10 such runs for each p to get an idea of what is happening. Plot the maximum number of passes observed for each p, noting that p = lg n. Form a conjecture about the general case. Then do more runs for values where you hope to find a higher maximum than you found in your small sample. Note that a path beginning at any vertex, followed by zero or more vertices in descending order, followed by zero or more vertices in ascending order, will be found in one pass of the while loop. How many passes will the while loop take?
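The pass-counting experiment suggested above can be sketched directly. The following is our own rendering of a pass-based closure computation, not necessarily identical to the text's Algorithm 9.1: each pass sweeps the vertices in index order, OR-ing successor rows into each row, and the loop ends with a pass in which nothing changes.

```java
/** Sketch of pass-based transitive closure: sweep until a pass changes
 *  nothing, and report the number of passes (including the last one). */
public class PassCounting {
    public static int closurePasses(boolean[][] r) {
        int n = r.length;
        int passes = 0;
        boolean changed = true;
        while (changed) {
            changed = false;
            passes++;
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (r[i][j])                      // j reachable from i so far
                        for (int k = 0; k < n; k++)
                            if (r[j][k] && !r[i][k]) { r[i][k] = true; changed = true; }
        }
        return passes;
    }

    public static void main(String[] args) {
        int n = 5;
        boolean[][] r = new boolean[n][n];
        for (int i = 0; i < n; i++) r[i][i] = true;
        for (int i = 0; i + 1 < n; i++) r[i][i + 1] = true;  // ascending path 0->1->...->4
        System.out.println(closurePasses(r)); // prints 2: one working pass, one no-change pass
    }
}
```

The ascending path finishes in one working pass, matching the note above that ascending runs are absorbed in a single sweep; permuted paths generally need more passes.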

9.6 Let V = {1, 2, 3, 4}, let E = {12, 24, 43}, and let the weight of each edge be 1. Initially D[1][3] = D[1][4] = ∞. The algorithm, with the modification described in the exercise, computes D[1][3] and D[1][4] before it computes D[2][3]. It will incorrectly leave D[1][3] with the value ∞.

9.7

    D =
    0  3  5  4
    2  0  4  1
    4  7  0  4
    3  3  3  0

9.8 a) The matrices below show the steps to calculate the distance matrix: the first shows the matrix after all iterations of the inner for loops for k = 1, the second after the inner loops for k = 2, and so on. The last matrix is the distance matrix.

    0 2 4 3     0 2 4 3     0 2 4 1     0 0 4 1
    3 0 7 3     3 0 7 3     3 0 7 3     3 0 7 3
    5 7 0 3     5 7 0 3     5 7 0 3     1 4 0 3
    ∞ 1 4 0     2 1 4 0     2 1 4 0     2 1 4 0

b) As long as there are no negative-weight cycles, there is always a simple shortest path between any pair of nodes. That is, regardless of whether weights are negative or not, Algorithm 9.4 is guaranteed to find the shortest simple path.

9.10 The following algorithm computes the routing table go as well as the distance matrix D. Initially, D contains the weight matrix for the graph.

void routeTable(float[][] D, int[][] go, int n)
    int i, j, k;
    // Initialize go: If edge ij exists, first step from i is to j.
    for (i = 1; i ≤ n; i ++)
        for (j = 1; j ≤ n; j ++)
            if (D[i][j] == ∞)
                go[i][j] = -1;
            else
                go[i][j] = j;
    // Find shortest paths and record best first step in go.
    for (k = 1; k ≤ n; k ++)
        for (i = 1; i ≤ n; i ++)
            for (j = 1; j ≤ n; j ++)
                if (D[i][j] > D[i][k] + D[k][j])
                    D[i][j] = D[i][k] + D[k][j];
                    go[i][j] = go[i][k];
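The routing-table computation of 9.10 can be sketched as runnable Python (1-based tables as in the pseudocode; the function name and the small test graph are mine, not from the text):

```python
def route_table(D, n):
    """Floyd-Warshall with a routing table.

    D is an (n+1) x (n+1) 1-based matrix, inf for missing edges, 0 on the
    diagonal; it is updated in place to the distance matrix. Returns go,
    where go[i][j] is the first step on a shortest path from i to j
    (-1 if no path is known)."""
    INF = float('inf')
    go = [[-1] * (n + 1) for _ in range(n + 1)]
    # Initialize go: if edge ij exists, first step from i is to j.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if D[i][j] != INF:
                go[i][j] = j
    # Find shortest paths and record the best first step in go.
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    go[i][j] = go[i][k]
    return go
```

On the graph with edges 1→2 (weight 1), 2→3 (weight 1), and 1→3 (weight 5), the algorithm shortens D[1][3] to 2 and records that the first step from 1 toward 3 is vertex 2.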


9.12 The shortest nonempty path from one vertex to itself is the shortest cycle. In Algorithm 9.4, if we initialize D[i][i] = ∞ for 1 ≤ i ≤ n (after copying W into D), then run the algorithm, the final value of D[i][i] is the length of a shortest cycle containing i, as long as there are no negative-weight cycles. There is a negative-weight cycle if and only if some D[i][i] < 0. If the graph is undirected, the algorithm will incorrectly interpret an edge vw as a cycle v, w, v.

Although the exercise did not explicitly require discussion of negative-weight cycles, it is worth knowing that they can be dealt with fairly straightforwardly. Should a negative-weight cycle be discovered when Algorithm 9.4 terminates, then other nodes might be in nonsimple negative-weight cycles that have not been discovered yet. One simple way to find all such nodes is to set D[i][i] to −∞ wherever D[i][i] < 0 and use this matrix as input for a second run of Algorithm 9.4. (This time, of course, do not set D[i][i] = ∞ before starting. We might need to evaluate ∞ + (−∞); the appropriate value is ∞, denoting that the path doesn't exist.)

Section 9.5: Computing Transitive Closure by Matrix Operations

9.14 Let P = v0 (= v), v1, ..., vk (= w) be a path from v to w with k > n. Since there are only n vertices in the graph, some vertex must appear more than once on the path. Suppose vi = vj where i < j. Then v0, ..., vi, vj+1, ..., vk is also a path from v to w, but its length is strictly smaller than k. If the length of the new path is still larger than n, the same shortening process (removing loops in the path) can be used until the length is less than n.

Section 9.6: Multiplying Bit Matrices—Kronrod's Algorithm

9.16 Element j ∈ Ci if and only if

    ∑ (k = 1 to n) aik bkj ≥ 1,

if and only if there is a k such that aik = 1 and bkj = 1, if and only if there is a k ∈ Ai such that j ∈ Bk, if and only if j ∈ ∪ (k ∈ Ai) Bk.

Additional Problems
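The shortest-cycle modification of 9.12 can be sketched as runnable Python (function name and test graph are mine; no negative-weight cycles assumed):

```python
def shortest_cycles(W, n):
    """Run Floyd-Warshall after setting each diagonal entry to inf.

    W is an (n+1) x (n+1) 1-based weight matrix (0 on the diagonal,
    inf for missing edges). In the result, D[i][i] is the length of a
    shortest cycle through i (inf if i is on no cycle)."""
    INF = float('inf')
    D = [row[:] for row in W]          # copy W into D
    for i in range(1, n + 1):
        D[i][i] = INF                  # forbid the trivial empty cycle
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

For the directed triangle 1 → 2 → 3 → 1 with unit weights, every D[i][i] ends up 3, the length of the only cycle.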

9.18 If the graph is undirected, compute the Boolean matrix product A³ and see if there is a 1 on the main diagonal. If the graph is directed and we interpret a triangle as a simple cycle with three edges, we must avoid using self-loops: copy the adjacency matrix to a new matrix T, replace all main-diagonal entries with zero, compute T³, and see if there is a 1 on the main diagonal. If straightforward matrix multiplication is used, the number of operations is in Θ(n³).
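The directed case of 9.18 can be sketched in Python (helper names are mine; a straightforward cubic-time Boolean product, not a fast matrix-multiplication method):

```python
def has_triangle_directed(adj, n):
    """adj: (n+1) x (n+1) 1-based boolean adjacency matrix.

    Zero the main diagonal to avoid self-loops, cube the matrix under
    the Boolean product, and look for a 1 (True) on the main diagonal."""
    T = [[adj[i][j] and i != j for j in range(n + 1)] for i in range(n + 1)]

    def bool_mult(X, Y):
        return [[any(X[i][k] and Y[k][j] for k in range(1, n + 1))
                 for j in range(n + 1)] for i in range(n + 1)]

    T3 = bool_mult(bool_mult(T, T), T)
    return any(T3[i][i] for i in range(1, n + 1))
```

A 3-cycle 1 → 2 → 3 → 1 is reported as a triangle, while the simple path 1 → 2 → 3 is not.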

**Chapter 10 Dynamic Programming
**

Section 10.1: Introduction

10.2

if (n ≤ 1)
    return n;
prev2 = 0;
prev1 = 1;
for (i = 2; i ≤ n; i ++)
    cur = prev1 + prev2;
    prev2 = prev1;
    prev1 = cur;
return cur;
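The pseudocode translates directly to runnable Python:

```python
def fib(n):
    """Iterative Fibonacci, keeping only the last two values."""
    if n <= 1:
        return n
    prev2, prev1 = 0, 1
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + prev2   # cur = prev1 + prev2
    return prev1
```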

Section 10.3: Multiplying a Sequence of Matrices

10.4 Let n = high − low be the problem size. When k = low + 1, line 7 of the mmTry2 procedure in the text makes a recursive call with problem size n − 1. When k = high − 1, line 6 makes another recursive call with problem size n − 1. The loop beginning at line 5 executes n − 1 times, and makes 2 recursive calls in each pass, for a total of 2n − 2 calls. For n ≥ 2, we have 2n − 2 ≥ n. Therefore, letting T(n) denote the number of recursive calls made by mmTry2 on a problem of size n:

    T(n) ≥ 2T(n − 1) + n   for n ≥ 2,    T(1) = 0.

For an easy lower bound, we claim T(n) ≥ 2^n − 2 for n ≥ 1. The proof is simple by induction, since 2^1 − 2 = 0 and, for n ≥ 2,

    2(2^(n−1) − 2) + n = 2^n + n − 4 ≥ 2^n − 2.
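The recurrence and the claimed lower bound can be checked mechanically (a quick sketch of mine, not part of the solution):

```python
def T(n):
    """Call-count recurrence from the analysis above: T(1) = 0,
    T(n) = 2 T(n-1) + n for n >= 2."""
    return 0 if n == 1 else 2 * T(n - 1) + n
```

The first few values are T(1) = 0, T(2) = 2, T(3) = 7, and each satisfies T(n) ≥ 2^n − 2.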

10.5

    cost =
    0   600   2800   1680
         0    1200   1520
               0     2400
                     0

    last =
    .   1   1   1
        .   2   3
            .   3
                .

    multOrder = [2, 3, 1]

10.7 Let the dimensions of A, B, and C be 100 × 1, 1 × 100, and 100 × 1, respectively.

Section 10.4: Constructing Optimal Binary Search Trees
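With those dimensions the two parenthesizations differ by a factor of 100; a quick check (the helper name is mine):

```python
def mult_cost(p, q, r):
    """Scalar multiplications to compute a (p x q) times (q x r) product."""
    return p * q * r

# (AB)C: AB is 100 x 100, then times C.
cost_ab_c = mult_cost(100, 1, 100) + mult_cost(100, 100, 1)
# A(BC): BC is 1 x 1, then A times it.
cost_a_bc = mult_cost(1, 100, 1) + mult_cost(100, 1, 1)
```

(AB)C costs 20,000 multiplications while A(BC) costs only 200, which is the point of the exercise.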

10.9 a) In the matrix below, expressions p + q = w mean that p = p(i, j), the weight of elements i through j, and q is the minimum weighted cost of the two subtrees over the various choices of root from i through j, so w is the optimum weighted cost for subproblem (i, j).


    cost =
    0.20   0.44+0.20 = 0.64   0.60+0.36 = 0.96   0.88+0.80 = 1.68   0.92+0.88 = 1.80   1.00+1.08 = 2.08
           0.24                0.40+0.16 = 0.56   0.68+0.52 = 1.20   0.72+0.60 = 1.32   0.80+0.72 = 1.52
                               0.16                0.44+0.16 = 0.60   0.48+0.20 = 0.68   0.56+0.32 = 0.88
                                                   0.28                0.32+0.04 = 0.36   0.40+0.16 = 0.56
                                                                       0.04                0.12+0.04 = 0.16
                                                                                           0.08

    root =
    1   2   2   2   2   2
        2   2   3   3   4
            3   4   4   4
                4   4   4
                    5   6
                        6

b) The optimal tree:

         B
        / \
       A   D
          / \
         C   F
            /
           E
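The tables above can be reproduced by the standard cubic-time dynamic program; here is a Python sketch of mine, run on the access probabilities (0.20, 0.24, 0.16, 0.28, 0.04, 0.08) implied by the diagonal of the cost table:

```python
def optimal_bst(p):
    """p[0..n-1]: access probabilities for keys 1..n.

    Returns 1-based tables cost and root, where cost[i][j] is the
    optimum weighted cost for keys i..j and root[i][j] is the index
    of the root of that optimal subtree."""
    n = len(p)
    INF = float('inf')
    cost = [[0.0] * (n + 2) for _ in range(n + 2)]   # cost[i][i-1] = 0
    root = [[0] * (n + 2) for _ in range(n + 2)]
    w = [[0.0] * (n + 2) for _ in range(n + 2)]      # w[i][j] = p_i + ... + p_j
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            w[i][j] = w[i][j - 1] + p[j - 1]
    for size in range(1, n + 1):
        for i in range(1, n - size + 2):
            j = i + size - 1
            best, best_r = INF, i
            for r in range(i, j + 1):               # try each root
                q = cost[i][r - 1] + cost[r + 1][j]
                if q < best:
                    best, best_r = q, r
            cost[i][j] = w[i][j] + best
            root[i][j] = best_r
    return cost, root
```

For these probabilities it reproduces cost(1, 6) = 2.08 with key 2 (B) as the overall root, matching the table and the tree in part (b).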

10.10 We build a binary search tree where each node has a key ﬁeld Kk . The buildBST function is called with

T = buildBST(1, n).

**Here’s the function buildBST. To simplify the notation, we let the matrix root be global.
**

/** buildBST constructs the tree in postorder using the global root matrix.
    The current subtree being constructed will contain the keys K_first, ..., K_last. */
BinTree buildBST(int first, int last)
    BinTree ans;
    if (last < first)
        ans = nil;
    else
        int k = root[first][last]; // index of the root's key
        BinTree leftSubtree = buildBST(first, k-1);
        BinTree rightSubtree = buildBST(k+1, last);
        ans = buildTree(K_k, leftSubtree, rightSubtree);
    return ans;

Since there is no loop within the procedure and each call generates one new node, the running time is in Θ´nµ.
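A Python version of the same procedure, returning nested tuples instead of BinTree nodes (the 3-key root table in the test is a made-up example, not from the exercise):

```python
def build_bst(root, keys, first, last):
    """Build the optimal BST from a 1-based root table.

    root[first][last] is the index of the subtree's root key, as in the
    text. Returns (key, left, right) tuples; None is the empty tree."""
    if last < first:
        return None
    k = root[first][last]
    left = build_bst(root, keys, first, k - 1)
    right = build_bst(root, keys, k + 1, last)
    return (keys[k], left, right)
```

As in the pseudocode, there is no loop and each call creates one node, so the running time is linear in the number of keys.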

10.12 For keys in the range first through last, choose the most probable key in the range as the root of the BST. Suppose its index is k. Then recursively build the left subtree for the range first through k − 1 and build the right subtree for the range k + 1 through last (using the same greedy strategy).

The strategy is nonoptimal for the input A(.41), B(.40), C(.20) because it will make a stringy tree to the right with weighted cost 0.41 + 2(0.40) + 3(0.20) = 1.81. The balanced BST with root B has weighted cost 2(0.41) + 0.40 + 2(0.20) = 1.62.

Section 10.5: Separating Words into Lines

10.14 a)
    words     Those who cannot remember the past are condemned to repeat it
    X         7    1    4    4    0
    penalty   343  1    64   64   0       total penalty: 472

b) The subproblems that are evaluated are those beginning at words 11, 10, 9, 8, ..., 2, 1. Each subproblem is evaluated once in the DP version, so the total, including the main problem, is 11.

c) In the cost calculations below for the non-DP recursive version, "Tree k" refers to the subproblem tree rooted at subproblem k and the cost is the number of nodes in the tree. Each node represents one activation frame.

    Tree 11 costs 1. Tree 10 costs 1. Tree 9 costs 1.
    Tree 8 costs 1 + Tree 9 + Tree 10 = 3.
    Tree 7 costs 1 + Tree 8 + Tree 9 + Tree 10 = 6.
    Tree 6 costs 1 + Tree 7 + Tree 8 = 10.
    Tree 5 costs 1 + Tree 6 + Tree 7 + Tree 8 = 20.
    Tree 4 costs 1 + Tree 5 + Tree 6 = 31.
    Tree 3 costs 1 + Tree 4 + Tree 5 = 52.
    Tree 2 costs 1 + Tree 3 + Tree 4 = 84.
    Tree 1 costs 1 + Tree 2 + Tree 3 + Tree 4 = 168.

Additional Problems

10.16 It is not hard to see that if k > n then C(n, k) = 0; however, we are given the precondition that n ≥ k.

1. The recursive function definition is:

int C(int n, int k)
    if (n ≥ 0 && k == 0)
        ans = 1;
    else if (n == 0)
        ans = 0;
    else
        ans = C(n-1, k-1) + C(n-1, k);
    return ans;

As with the Fibonacci numbers, this is a very inefficient computation: the number of recursive calls to C (and the number of additions) will in general be exponential in n if n and k are large, because there will be more than C(n, k) leaves in the recursive call tree, and C(n, k) ≥ (n/k)^k. We use this fact in some of the solutions. Since the depth of the call tree will be n, Θ(n) stack space is used.

2. For Method 2, we could use a table C[0..n][0..k] with all entries in the lefthand column initialized to 1, and all entries in the top row, other than the first, initialized to 0. To compute an entry C[i][j], only two neighbors in the previous row are needed, so the table can easily be filled until C[n][k] is computed. At most nk entries are computed, so at most nk additions are done. This method is clearly superior to the first one.

A little thought shows that this solution can be improved. All entries on the diagonal consisting of C[0][0], ..., C[k][k] are equal to 1, entries above that diagonal are equal to 0 and are not needed in the computation, and entries below the diagonal consisting of C[n−k][0], ..., C[n][k] are not needed in the computation. Thus only entries in a strip of n − k + 1 diagonals need be computed. That includes k(n − k + 1) entries, hence k(n − k + 1) additions. Since the computation of each entry needs only entries in the previous row, at most two rows need be stored; with careful indexing, the two partial rows the algorithm is using at any one time can be stored in 2(n − k + 1) cells.

3. Method 3 could be implemented with one loop in Θ(k) time as follows:

if (k > n/2)
    k = n-k;
C = 1;
for (i = 0; i < k; i ++)
    C = (C * (n-i)) / (i+1);

Note the parenthesization. After j passes through the loop, the value of C is C(n, j), which is an integer, so the integer division is precise. Because k is made at most n/2, the value of C increases monotonically. This method is clearly faster than the first two and uses less space (essentially none). The only drawback is that for a small range of inputs it will overflow when the first two methods do not: intermediate values will be about k times as large as C and may cause an integer overflow that might have been avoided in the first two methods. The critical range is roughly maxint/k < C(n, k) ≤ maxint.

4. Method 4 could be implemented with simple loops in O(n + k) time as follows:

// Compute n!
numerator = 1;
for (i = 2; i ≤ n; i ++)
    numerator = numerator * i;
// Compute k! * (n-k)!
denominator = 1;
for (i = 2; i ≤ k; i ++)
    denominator = denominator * i;
for (i = 2; i ≤ n-k; i ++)
    denominator = denominator * i;
// Compute C
C = numerator / denominator;

However, computing numerator could cause integer overflow even when the quantity C(n, k) is well within the range of values that can be represented on the machine. If floating point arithmetic is used there is the possibility that the answer will be inaccurate and might not be an integer; floating point numbers even overflow their exponent parts for n! when n has fairly modest values.

The moral is that mathematical transformations may pay bigger dividends than clever programming.

10.18 Many students try to solve this problem by a brute force method using index variables to keep track of the current location in each string (more or less generalizing Algorithm 11.1, the brute force string matching algorithm).
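The one-loop Method 3 for C(n, k) in runnable form (Python's // makes the exact integer division explicit):

```python
def binom(n, k):
    """Binomial coefficient by the one-loop product method.

    After i passes, c == C(n, i), so every division is exact."""
    if k > n - k:
        k = n - k                      # use the smaller of k and n-k
    c = 1
    for i in range(k):
        c = c * (n - i) // (i + 1)     # parenthesization matters
    return c
```

With Python's unbounded integers the overflow caveat discussed above disappears, but in a fixed-width language the intermediate product c * (n - i) is the step that can overflow first.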

All such solutions that I have seen were wrong: they missed some valid matches. The worst case time for a correct brute force algorithm would probably be cubic.

Our first solution (dynamic programming) uses a table S[0..n][0..m] where S[i][j] = the length of the longest common substring ending at xi and yj. The top row and lefthand column are initialized to contain 0; the remaining entries are computed as follows:

    S[i][j] = S[i−1][j−1] + 1   if xi = yj,
    S[i][j] = 0                 otherwise.

The table entries can be computed row-by-row, and only two rows need actually be stored. It is easy to keep track of the largest value computed so far while filling the table; the answer to the problem is the largest entry. Computing each table entry and updating the maximum if necessary clearly takes constant time, so the total time in the worst case is in Θ(nm).

An alternative method uses a table M[1..n][1..m] where M[i][j] = 1 if xi = yj and 0 otherwise. After filling the table, the algorithm scans all diagonals (slanting downward to the right) for the longest string of contiguous 1's. Each such string corresponds to a common substring. This method also takes Θ(nm) time, but it uses more space (the whole n × m table).

10.19 We can construct a common subsequence by a variant of the merge procedure of Section 4.5 in the text. If the first elements of both sequences are equal, increment i and j, adding their common value to the common subsequence, and continue with the remaining subsequences. If they are unequal, remove one or the other; to find the longest common subsequence, we need to backtrack over both choices of which element to remove when the front elements are unequal. Define problem (i, j) to be the problem of finding the longest common subsequence of A[i..n] and B[j..n]. Actually, in the first version we will only find the length of the longest common subsequence, not its contents; when we convert into the DP version we will also record the decisions that allow us to construct it. The outline of a recursive solution is

int findLCS(i, j)
    if (i > n) return 0;
    if (j > n) return 0;
    // point 1
    if (A[i] == B[j])
        action = 'T';
        len = 1 + findLCS(i+1, j+1);
    else
        lenA = findLCS(i+1, j);
        lenB = findLCS(i, j+1);
        if (lenA > lenB)
            action = 'A';
            len = lenA;
        else
            action = 'B';
            len = lenB;
    // point 2
    return len;

The values of action will be one of the characters T, A, B, meaning:

    T   Take the element at both A[i] and B[j], incrementing i and j.
    A   Discard the element at A[i] and increment i.
    B   Discard the element at B[j] and increment j.

The significance of point 1 and point 2 is explained below.
Computing each table entry and updating the maximum if necessary clearly takes constant time so the total time in the worst case is in Θ´nmµ. else lenA = findLCS(i+1. but it uses more space (the whole n ¢ m table). meaning: T A B Take the element at both A i and B j . An alternative method uses a table M 1 n 1 m where Mi j 1 if xi y j 0 otherwise After ﬁlling the table. Deﬁne problem ´i jµ to be the problem of ﬁnding the longest common subsequence of A i n and B j n . we need to backtrack over both choices of which element to remove when the front elements are unequal. The values of lcsAction[i][j] will be one of the characters T. It can be implemented as a two-dimensional array lcsLength[n+1][n+1]. if (lenA > lenB) action = ’A’. so we deﬁne a second two-dimensional array lcsAction[n+1][n+1]. Our ﬁrst solution (dynamic programming) uses a table S 0 n 0 m where S i j = the length of the longest common substring ending at xi and y j . 10.

There are about n² subproblems. To convert into DP, define a dictionary with the id (i, j); it can be implemented as a two-dimensional array lcsLength[n+1][n+1]. Since we also want to record the decisions that led to the longest common subsequence, we define a second two-dimensional array lcsAction[n+1][n+1]. Each subproblem calls two other subproblems, so the subproblem graph has about 2n² edges, and depth-first search of the subproblem graph requires time in O(n²). The dictionary requires space in O(n²). In the above procedure, action was set on each call
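A bottom-up Python sketch of the same idea (equivalent to the memoized findLCS; it fills an lcsLength-style table and then replays the take/discard decisions to recover one longest common subsequence):

```python
def lcs(A, B):
    """Length table L[i][j] = length of an LCS of A[i:] and B[j:];
    then replay the decisions to build one LCS."""
    n, m = len(A), len(B)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if A[i] == B[j]:
                L[i][j] = 1 + L[i + 1][j + 1]      # action T
            else:
                L[i][j] = max(L[i + 1][j],         # action A
                              L[i][j + 1])         # action B
    out, i, j = [], 0, 0
    while i < n and j < m:
        if A[i] == B[j]:
            out.append(A[i]); i += 1; j += 1
        elif L[i + 1][j] >= L[i][j + 1]:
            i += 1
        else:
            j += 1
    return ''.join(out)
```

Both the table fill and the replay are within the O(n²)-time, O(n²)-space bounds discussed above.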
but not preserved after the procedure returned, so the DP version must store it. Initialize the entries of lcsLength to −1 to denote that the solution is unknown for that entry. Modify the recursive findLCS to check whether lcsLength[i][j] is nonnegative at point 1; if so, return its value immediately. If point 2 is reached, store len and action in lcsLength[i][j] and lcsAction[i][j] before returning. A wrapper initializes the dictionary and calls findLCS(1, 1). The processing per "vertex" and per "edge" of the subproblem graph is in O(1).

10.21 If ∑ si (for i = 1, ..., n) is odd, there is no solution. Suppose ∑ si is even, and let H = (1/2) ∑ si. Use a table P[1..n][0..H] where

    P[i][j] = true if some subset of s1, ..., si sums to j, and false otherwise.

All entries in column 0 are initialized to true (the empty set has sum 0). The rest of the first row is initialized with P[1][s1] = true and all other entries false. In general, to determine if a sum j can be achieved using some subset of s1, ..., si, we consider subsets that do not contain si, for which we can find the answer in P[i−1][j], and subsets of s1, ..., si that do contain si, where the rest of the subset must sum to j − si, so we find the answer in P[i−1][j − si]. So

    P[i][j] = P[i−1][j] or P[i−1][j − si],

where the second term is retrieved only if j ≥ si and is considered false otherwise. If any entry in the last column is found to be true, the answer to the problem is "yes." Computing each entry takes constant time, so the total time used in the worst case is in Θ(nH) = Θ(nS). As we have seen with many dynamic programming problems, the algorithm does not have to store the whole table: computation of each entry uses only the previous row. Remember that H is computed from the data; it is approximately the magnitude of the data, not the number of data, and S may be very large relative to n. In the terminology of Chapter 13, this is not a polynomial-bounded algorithm. Other algorithms whose running time varies with the magnitude of the data are considered in Section 13.3.

10.23 a) Use as many of the largest coin as possible, then as many of the next largest as possible, and so on. For example, to make change for $1.43 (U.S.), use two half-dollars, one quarter, one dime, one nickel, and three pennies.

b) An optimal way to make change must use at most four pennies (five could be replaced by one nickel), one nickel (two could be replaced by one dime), two dimes (three could be replaced by a quarter and a nickel), and one quarter. So half-dollars would be used for all but at most 54 cents (the value of four pennies, one nickel, two dimes, and one quarter). By considering all the remaining cases, it is easy to see that the greedy algorithm is optimal.

c) Let the coin denominations be 25, 10, and 1, and try to make change for 30 cents. The greedy algorithm uses six coins (one quarter and five pennies), but three dimes suffice.

d) Solution 1. There is a solution very similar to the solution for the Partition Problem, Exercise 10.21. Let the table be M[1..n][0..a], where M[i][j] is the minimum number of coins needed to make change for j cents using any of the coins c1, ..., ci. Then

    M[i][j] = min(M[i−1][j], 1 + M[i][j − ci]),

where the second term is considered to be ∞ if j < ci. Computing each table entry takes constant time, so the worst case time is in Θ(na). The answer will be in M[n][a].
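The one-dimensional table of Solution 2 for the change-making problem can be sketched in Python (function name is mine):

```python
def min_coins(coins, amount):
    """M[j] = minimum number of coins to make change for j cents;
    inf marks amounts that cannot be made at all."""
    INF = float('inf')
    M = [0] + [INF] * amount
    for j in range(1, amount + 1):
        for c in coins:                     # try each possible first coin
            if c <= j and M[j - c] + 1 < M[j]:
                M[j] = M[j - c] + 1
    return M[amount]
```

On the denominations 25, 10, 1 from part (c) it finds the 3-coin answer for 30 cents that the greedy algorithm misses.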

Solution 2 (for Exercise 10.23(d)) is similar to the solutions for the problems discussed in Section 10.3. We don't know what the first coin should be, so we try all possibilities; after deciding on the first coin, we would make change for the remaining amount in the optimal way. Let M[j] be the minimum number of coins needed to make change for j cents. In general,

    M[j] = min over 1 ≤ i ≤ n of (1 + M[j − ci]),

where M[0] = 0 and terms with index less than zero are ignored (or treated as ∞). Note that only a one-dimensional table is used here. Computing an entry may take as many as Θ(n) steps, so the worst case time is in Θ(na).

10.26 Let n be the number of songs. We use a table T[1..500][1..500] as described in the hint in the exercise. We define a summing operation # (to compute the amount of time a sequence of songs takes up on 60-minute disks) as follows:

    a # b = a + b       if (a mod 60) + b ≤ 60,
    a # b = 60k + b     otherwise, where k = ⌈a / 60⌉ (the new song starts on the next disk).

T[i][j] represents the minimum total time of a collection of i songs chosen from the first j songs. The collection of i songs whose total time is represented in T[i][j] may or may not include song j, so

    T[i][j] = min(T[i][j−1], T[i−1][j−1] # tj).

Only the entries on and above the main diagonal are needed. The top row and the main diagonal can be initialized by

    T[1][1] = t1;
    for (j = 2; j ≤ 500; j ++)
        T[1][j] = min(T[1][j-1], tj);
    for (i = 2; i ≤ 500; i ++)
        T[i][i] = T[i-1][i-1] # ti;

The entries can be computed column-by-column (only down as far as the main diagonal), and only two columns need be stored. To find the answer, search the last column from the bottom up for the first entry (the largest i) such that T[i][500] ≤ 300; at most i songs can fit on the five 60-minute disks satisfying the constraints of the problem. Computing each table entry takes constant time, so the total time is in Θ(n²). The initialization and search for the answer take Θ(n) steps.


Also. i = j.backup” has the same effect as “j = i” in that algorithm. j. . the statement “j = j .m” (on the line with the comment “Match found ”). when the algorithm terminates with k m · 1. i = j. return match.” IntList listScan(IntList P. 3. k. 2. k = P. match = nil. j = i.Chapter 11 String Matching Section 11. // i is the current guess at where P begins in T.1 In Algorithm 11. Because the invariant i j k·1 is satisﬁed in Algorithm 11. start over. while (j nil) if (k == nil) match = i. // value to return IntList i.2: A Straightforward Solution 11. int maxLength(OpenQueue q) precondition: q was created by create. j = rest(j). // j begins at the current character in T.1. j. // Match found break. Delete all other references to i. 11. maxLength(q) = n. The pattern length m is not needed because the list will be empty when the pattern is exhausted. if (first(j) == first(k)) j = rest(j).1. k = rest(k). OpenQueue create(int n) precondition: n > 0. postconditions: If q = create(n) 1. else // Back up over matched characters. IntList T) IntList match.2 The idea is that i. // Continue loop. “k = k .4 Answers can vary considerably for this open-ended question.backup” has the same effect as “k = 1. j m must be where the match began in the text. and k become (references to) linked lists and “++” becomes the rest operation. j = T. a) Looking ahead to part (c) we let elements of OpenQueue be type char instead of Object. Since j and k are incremented together when characters from the pattern and text match. depending on the software-engineering style. // k begins at the current character in P. // Slide pattern forward. 11. replace “match = i” with “match = j . q refers to a new object. k = P. length(q) = 0.

p). // circular array of elements void create(int n) OpenQueue q = new OpenQueue().front = (q. q. void deq(OpenQueue q) int n = q.82 Chapter 11 String Matching int length(OpenQueue q) precondition: q was created by create.1. // index of the first element in the queue int rear. q. p < k. none of the pattern-matching algorithms need it.count ++.length. // index of the last element in the queue int count. q. q. // number of elements in the queue char[] elements. 2. q. getElmt(q.rear = (q. 0) = e. class OpenQueue int front. All methods in Text are public static.length.elements.length.elements = new char[n].elements.front = 0.count --. 1. q. whose position is 0. char e) int n = q. 0 position < length(q). void enq(OpenQueue q. int position) int n = q. int position) preconditions: length(q) > 0. p) = getElmt(/q/. void enq(OpenQueue q. return (q.count. int index = (q. p+1) = getElmt(/q/.front + 1) % n. 2.rear = n-1.count = 0. int length(OpenQueue q) return q.length. . getElmt(q. b) All methods in OpenQueue are public static.elements[q. postconditions: Let k = length(/q/). getElmt(q.rear . q. char e) Notation: /q/ denotes the queue before the operation. precondition: length(/q/) < maxLength(/q/). precondition: length(q) > 0. 1. c) Note that position counts backward (to the left) from the most recently read text character.elements[index]). length(q) = k + 1. p). For 0 p < k-1.elements. postcondition: Let k = length(/q/). Although a method getPosition is provided for completeness.position + n) % n. int maxLength(OpenQueue q) return q.elements. length(q) = k .rear] = e.rear + 1) % n. For 03. void deq(OpenQueue q) Notation: /q/ denotes the queue before the operation. q. char getElmt(OpenQueue q. q. char getElmt(Queue q.

getPosition(T) = 0 and either (forward . Reader fp.Chapter 11 class Text OpenQueue q. int backup) T. T. newpos = 0.position = 0.position . char getTextChar(Text T) return getElmt(T. boolean eof.3: The Knuth-Morris-Pratt Algorithm 11. T.q = OpenQueue. int position. T.q. */ Section 11.eof = false. */ /** Postcondition: getPosition(T) = getPosition(/T/) + backup. */ void advanceText(Text T. /** m should be the pattern length. if (length(T. T.eof.fp = input.eof = true.q). enq(T.position = newpos.int forward) int newpos.fp. * Otherwise.create(m+1).position. (char)c). newpos = T.q).q)) // there’s no room. return T. int m) Text T = new Text().forward.6 a) b) c) A A A B 0 1 2 3 A A B A A C A A B A B A 0 1 2 1 2 3 1 2 3 4 5 1 A B R A C A D A B R A 0 1 1 1 2 1 2 1 2 3 4 .q.fp or endText(T) is true. */ /** Notation: /T/ denotes T before the operation. int getPosition(Text T) return T. boolean endText(Text T) return T.read(). T.forward.. delete one first deq(T.q) == maxLength(T. * Postcondition for advanceText(T. T /** Precondition: backup + getPosition(/T/) < length(T. */ if (c == -1) T.getPosition(/T/)) * new characters were read from T. String Matching 83 /** endText(T) is true if an earlier advanceText(T) encountered end-of-file. while (newpos < 0) int c = T.position)). then getPosition(T) = getPosition(/T/) . */ Text create(Reader input. // c is either -1 or in the range of the type char.position += backup. newpos ++. forward): * If forward getPosition(/T/). break. // add 1 for safety. void backupText(Text T.

Since j begins at 1. k is assigned 1). is executed at most 2n times. 11. where the character comparison is done. Then charJump t j .84 Chapter 11 d) String Matching A S T R A C A S T R A 0 1 1 1 1 2 1 2 3 4 5 11. 2). 11. either j is incremented (along with k) or k is decremented. Section 11. that should never happen. j can increase at most n times.10 a) The fail links are A B A A A ¡ ¡ ¡ 0 1 2 ¡ ¡ ¡ m 2 m 1 The procedure kmpSetup does m 2 character comparisons to compute them.12 Add the following loop at the end of the procedure kmpSetup (after the end of both the while loop and the for loop in the procedure). k can decrease at most n times. Thus the last if statement. Then kmpSetup will do 2m 5 character comparisons (for m 2). Since k starts at 1 and is never negative. is incremented by exactly 1 whenever it is incremented. if k 0. k is incremented at most n times. The pattern would shift left instead of right. k m. pattern character.8 At most one character comparison is done each time the body of the loop is executed. b) otherwise. b) 2n ´m 1µ (for m c) Let Q consist of m 2 A’s followed by two B’s.4: The Boyer-Moore Algorithm 11. k ++) if (pk == p f ail k ) fail[k] = fail[fail[k]]. Since k is incremented the same number of times as j (note that in the second if statement.15 a) charJump[A] charJump[B] charJump[C] charJump[D] charJump[R] charJump[x] charJump[A] charJump[C] charJump[R] charJump[S] charJump[T] charJump[x] = = = = = = = = = = = = 0 2 6 4 1 11 0 5 1 3 2 11 otherwise. 11. the last 0. suppose t j is the same as pm . So we count how many times j is incremented overall and how many times k is decremented. To see that this can happen.17 It is possible that charJump[t j ] is so small that j · charJump t j is smaller than the index of the text character that is currently lined up with the end of the pattern. in the last of the three if statements. for (k = 2. and the loop terminates when j n (where n is the length of T ). 
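The fail-link arrays of 11.6 can be computed with the standard algorithm; here is a 0-based Python sketch of mine (the book's arrays are 1-based, so its printed values may differ from these by an index offset):

```python
def compute_fail(P):
    """fail[k] = length of the longest proper border (prefix that is also
    a suffix) of P[0..k], the core of KMP preprocessing."""
    m = len(P)
    fail = [0] * m
    j = 0
    for k in range(1, m):
        while j > 0 and P[k] != P[j]:
            j = fail[j - 1]             # fall back along earlier borders
        if P[k] == P[j]:
            j += 1
        fail[k] = j
    return fail
```

Each character advances k once and can only shrink j by amounts previously added, so the whole computation is linear in the pattern length, consistent with the comparison counts discussed above.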
Each time this if statement is executed. and never decreases.

combined with cases in which pk·1 ¡¡¡ pm matches an earlier substring of P or some sufﬁx of pk·1 ¡¡¡ pm matches a preﬁx of P or neither (i.5: Approximate String Matching 11. If one wants to bias the algorithm in favor of a particular kind of mismatch. the longest sufﬁx and preﬁx that “match” are length 0). according to which case above gave the minimum value. 4. 5.23 The following modiﬁcation works for both the straightforward algorithm and Knuth-Morris-Pratt (Algorithms 11. while (endText(T. . so the correct way to increment j to force the pattern to move at least one place to the right is j += max(charJump[t j ]. . k = 1. j = j-m+1. is computed by taking the minimum of the following values: 1. // Continue main while loop. . Then length[m][j] is the length of the text segment that matches the pattern. D i 1 j 1 (if pi t j ) 2. If charJump[t j ] matchJump[k]. leave j unchanged. In the algorithm. Initialization lines unchanged. to find only // non-overlapping copies.. 11. The entry in the length table is updated as follows. the number of new characters of text read is matchJump k ´m k µ k m · matchJump k This doesn’t count text characters already read while matching right-to-left from pm to the current pk . Thus there may be more than one correct value for length of the matched segment. It outputs a list (possibly empty) of indexes in T where a copy of P begins. Note that more than one of the cases may occur (more than one of the values considered when ﬁnding the minimum may have the minimum value). k for some j. Another exercise: Determine in exactly which cases charJump[t j ] matchJump[k] when a mismatch occurs.3.1 and 11. Do you need to split these cases still further? Section 11. one can do so.1 or 5. length[i][j] = length [i-1][j-1] +1. D i j 1 · 1 (the case where t j is missing from P). if (k > m) Output (j-m). the number of differences between the two strings at this point. j) == false) // New outer while loop Unchanged while loop from Alg. m-k+1). 
Consider cases in which t j does or does not occur in P.Chapter 11 String Matching 85 The distance from t j to the text character that is currently lined up with the end of the pattern is m k.21 Use an additional 2-dimensional array length such that length[i][j] is the length of the text segment ending at t j that matches the pattern segment p1 ¡¡¡ pi . . length[i][j] = length [i-1][j].3). length[i][j] = length [i-1][j-1] +1. 2. 3. then the number of new text characters needed is greater than the above amount. D i 1 j · 1 (the case where pi is missing from T ). 4.e.19 Assuming that matchJump[k] slide k charJump[t j ]. length[i][j] = length [i][j-1] +1. D i 1 j 1 · 1 (if pi t j ) 3. // This will find overlapping copies. 1. Additional Problems 11. where m is the pattern length and k is the maximum number of differences Now suppose D m j allowed. that is D i j .

. j = j+m+1. The new if statement that follows the original (now inner) while loop would be if (k == 0) Output (j+1). // j += 2*m for non-overlapping instances k = m. two copies of Y concatenated together). Setting up the fail links takes Θ´nµ time and the scan takes Θ´2nµ = Θ´nµ time. an outer while loop is added as above. 11.86 Chapter 11 String Matching For the Boyer-Moore algorithm (Algorithm 11.25 Use the Knuth-Morris-Pratt algorithm to determine if there is a copy of X in YY (that is.6).

saving the intermediate results. A´kµ 2 · 2A´k 1µ 2 · 2 2 · 2A´k 2µ 2 · 4 · 4A´k 2µ 2 · 4 · 4 2 · 2A´k 3µ 2 · 4 · 8 · 8A´k 3µ 21 · 22 · ¡¡¡ · 2 j · 2 j A´k jµ k 1 i 1 So let j k 1 to get a sum without any recursive A terms.3: Vector and Matrix Multiplication 12. In the proof of Lemma 12. A´kµ 2A´k 1µ · 2 2 ¡ 3 ¡ 2k 2 2. The last expression simpliﬁes to 3 ¡ 2k 1 2. Solution 1. 3 ¡ 20 2 1. using at most k more multiplications.6 Since n 2k 1. ∑ 2 i · 2 k 1 ¡ 1 2 k 2 · 2 k 1 3 ¡ 2k 1 2 Section 12. 0 is substituted for an wherever it appears. This can be computed with at most by 2 lg n · 1 arithmetic operations on the data by the same technique used in part (a). . Then subtract 1 and divide by x 1. b) Observe that p´xµ ´x · 1µn .Chapter 12 Polynomials and Matrices Section 12. If there were any steps of the form s = q an this transformation would not produce a valid program. b) n · 1. that is.1. the simplest solution is to verify it by induction. Since we were given the answer. a) Observe that p´xµ xn · ¡¡¡ · 1 12.8 No */’s are required because each term ui vi in the dot product can be computed by adding ui copies of vi . x8 2k . 12.2: Evaluating Polynomial Functions 12. a step of the form s = q 0 would be illegal. Let k lg´n · 1µ . as required for k 1. we use the fact that A´1µ A´kµ for 1 j k 1. Solution 2. 21 appropriate terms.4 In a straight-line program. 12. we need to show that A´kµ 3 ¡ 2k 1 2. for a total of at most 2k · 3 2 lg´n · 1µ · 3 arithmetic operations on the data. x4 . start with t x and multiply t by itself k times. so xn·1 can be computed by multiplying together the Now. To get xn·1 . For k 1. Start by expanding the recurrence to see the pattern of the terms. x. the latter by the inductive hypothesis.10 a) n. Let’s suppose we weren’t given the answer and want to discover it. x 1 k x2 .2 x n ·1 1 . n · 1 is the sum of powers of 2 among 20 . x2 .

Section 12.4: The Fast Fourier Transform and Convolution

12.12 a) If n = 1, the only nth root of 1 is 1 itself, so the roots don't sum to zero. If n divides c, then ω^c = 1, so the sum for j = 0 to n−1 of (ω^c)^j is n.

b) For Property 1: ω^2 is an (n/2)-th root of 1 because (ω^2)^(n/2) = ω^n = 1, and ω^2 is a primitive (n/2)-th root of 1. Suppose the terms 1, ω^2, (ω^2)^2, ..., (ω^2)^(n/2 − 1) are not distinct; let ω^k and ω^m be two of these terms that are equal. But k and m are both less than n, contradicting the fact that ω is a primitive nth root of 1. Thus the terms must be distinct.

For Property 2: The square roots of 1 are 1 and −1. Now (ω^(n/2))^2 = ω^n = 1, but ω^(n/2) != 1 because ω is a primitive nth root of 1, so ω^(n/2) must be −1. An alternative argument for Property 2: In polar coordinates, ω = (1, 2π/n), so ω^(n/2) = (1, (n/2) · 2π/n) = (1, π) = −1.

12.14 Let a, b, c, and d be the given data, representing the complex numbers a + bi and c + di. The product of these complex numbers is (ac − bd) + (ad + bc)i,

12.16 a) The ith component of F_n V is the sum for j = 0 to n−1 of ω^(ij) v_j. Permuting the columns of F_n and the elements of V in the same way will affect the order of the terms in the summation, but not the sum itself.

b) G1[i, j] = ω^(2ij) and G2[i, j] = ω^(i(2j+1)).

c) The corresponding element in G3 is ω^(2(i + n/2)j) = ω^(2ij) · ω^(nj) = ω^(2ij), since ω^(nj) = 1. The corresponding element in G4 is ω^((i + n/2)(2j+1)) = ω^(i(2j+1)) · ω^(nj + n/2) = −ω^(i(2j+1)), since ω^(nj) = 1 and ω^(n/2) = −1.

d) The element in row i and column j of D·G1 is ω^i · G1[i, j] = ω^i · ω^(2ij) = ω^(i(2j+1)) = G2[i, j].

e) The entries of G1 are ω^(2ij), and ω^2 is a primitive (n/2)-th root of 1. The formula for F_n V follows from (a), (b), (c), and (d).

f) The recurrence equation for each operation is A(k) = 2A(k−1) + n/2 for k >= 1, with A(0) = 0. This method does roughly the same number of arithmetic operations on the data as recursiveFFT. (See the recurrence that follows Algorithm 12.4.)

Additional Problems

12.17 Use the technique of repeated squaring given in the solution for Exercise 12.2(a) to compute A^(n−1). This requires about 2 lg(n) matrix multiplications of 2 × 2 matrices, each involving eight multiplications and four additions. There is also bookkeeping proportional to lg(n) to find the 1-bits in the binary form of n. So the total is about 16 lg(n) *'s and 8 lg(n) +'s, plus bookkeeping. The recursive computation of F_n requires about F_n +'s, which is about 1.618^(n−1); the iterative computation of F_n requires about n +'s. So this matrix-power method with repeated squaring is "exponentially faster."
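The matrix-power method of 12.17 can be sketched directly, assuming the usual Fibonacci matrix A = [[1, 1], [1, 0]] (so that A^(n−1)[0][0] = F_n with F_1 = F_2 = 1). This is our illustration of the technique, not code from the text:

```python
def mat_mult(A, B):
    # 2x2 matrix product: eight multiplications and four additions.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib(n):
    # Compute A**(n-1) by repeated squaring; about 2 lg(n) 2x2 products.
    result = [[1, 0], [0, 1]]      # identity matrix
    base = [[1, 1], [1, 0]]
    e = n - 1
    while e > 0:
        if e & 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        e >>= 1
    return result[0][0]
```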
so we need to compute ac − bd and ad + bc using only three multiplications. Here's the algorithm:

    u = a * c;
    v = b * d;
    z = (a+b) * (c+d);            // z = ac + ad + bc + bd
    realPart = u - v;             // realPart = ac - bd
    imaginaryPart = z - u - v;    // imaginaryPart = ad + bc
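The three-multiplication trick can be checked directly; here is a minimal Python version of the same algorithm (the function name `complex_mult3` is ours):

```python
def complex_mult3(a, b, c, d):
    # (a + bi)(c + di) using only three multiplications.
    u = a * c
    v = b * d
    z = (a + b) * (c + d)      # z = ac + ad + bc + bd
    return u - v, z - u - v    # (real part, imaginary part)
```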

Chapter 13 NP-Complete Problems

Section 13.2: P and NP

13.2 The graph has no edges.

13.4 Note: Part (e) of this exercise was updated by a correction that was too late for the second printing. Be sure that the current errata (available via Internet) are applied if this error is present in any of the texts being used.

The formats for the proposed solutions are certainly not unique; these are some of several reasonable approaches. We would expect different levels of sophistication and completeness in solutions in undergraduate courses and graduate courses. For example, undergraduates would probably omit the discussion of the length of the proposed solution that is emphasized in parts (a) and (c) below. In all cases, part of the checking is to check for correct syntax. Spaces may appear for readability although they are not mentioned, so we do not state that each time.

a) The input is an integer k and a sequence of rational numbers s1, ..., sn. Let L be the length of the input.

Proposed solution: a sequence of indexes with commas separating items to go in the same bin and semicolons marking the end of the sequence of indexes for each bin. Example: 2, 3; 1, 6, 7; 4, 5;

The properties to be checked to verify the solution are:

- The proposed solution contains at most 2n symbols, counting each integer, comma, or semicolon as one symbol; also each integer is in the range 1, ..., n.
- The indexes in the sequence include all of the integers 1, ..., n.
- There are at most k semicolons (at most k bins).
- For each set of indexes I denoting a set of elements to go in a single bin, the sum over i in I of s_i is at most 1.

The length of a valid solution is in O(n log n), since writing each index may use up to Θ(log n) digits. An invalid solution may be arbitrarily long, but the checking algorithm returns false as soon as it detects an integer greater than n or more than 2n symbols. A simple marking scheme can be used to check that the indexes listed include all the integers 1, ..., n. Counting the semicolons is easy. Assuming that the sizes s_i are in an array, the sums can all be computed and checked in one pass over the proposed solution string. The only remaining question is how long that pass takes.

The input to the bin packing problem has length L larger than n, since each of the n sizes may be many digits long. We will show that the amount of time needed to check a proposed solution is bounded by a polynomial in L. Since the s_i are arbitrary precision, we cannot assume they can be added in one step. If each s_i is represented as a terminating decimal, the time to add two of them digit by digit is in O(L). An intermediate sum will never exceed L digits, so adding s_i to an intermediate sum is also in O(L). In this case the total time for checking the sums is certainly polynomial in L.

The issue is less clear if the s_i are represented in the "pure" form of rational numbers, as (n_i / d_i), where n_i and d_i are positive integers. Now forming the sum involves computing a common denominator.
We need to compute one common denominator and n new numerators. First note that the total number of digits in all the d_i together is less than L, so multiplying them all together yields a product with at most L digits: the common denominator. Multiplying two L-digit numbers takes time in O(L^2), so multiplying n L-digit numbers, where the product is at most L digits, takes time in O(nL^2), or O(L^3). Similarly, any numerator needed to represent an s_i with the common denominator is at most L digits, and computing all of the new numerators can be done in O(n^2 L^2), which is in O(L^4). Adding up the new numerators is in O(Ln). So in this representation, too, the checking time is polynomial in L.

Note: The issue of arithmetic with rational numbers also arises in Exercises 13.8 and 13.59.
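A certificate checker along the lines of part (a) is short to sketch. The Python version below (our own illustration; it takes the solution as a list of bins rather than a comma/semicolon string, and uses exact rational arithmetic) enforces the same four properties:

```python
from fractions import Fraction

def check_packing(sizes, k, solution):
    # sizes: list of Fractions; solution: list of bins, each a list of
    # 1-based item indexes. Returns True iff the certificate is valid.
    n = len(sizes)
    if len(solution) > k:              # at most k bins
        return False
    seen = set()
    for bin_contents in solution:
        total = Fraction(0)
        for i in bin_contents:
            if not (1 <= i <= n) or i in seen:   # range check, no repeats
                return False
            seen.add(i)
            total += sizes[i - 1]
        if total > 1:                  # each bin's sizes must sum to <= 1
            return False
    return len(seen) == n              # every item must be packed
```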

b) The input is an integer n and a sequence of edges i j, where i and j are between 1 and n. (An alternative is an integer n and an n × n bit matrix.)

Proposed solution: a sequence of indexes v1 v2 ... vn. Interpretation: This is the order in which to visit the vertices in a Hamiltonian cycle.

The properties to be checked to verify the solution are:

- The proposed solution contains exactly n indexes, and each index is in the range 1, ..., n.
- No index in the proposed solution is repeated.
- There is an edge v_i v_(i+1) for 1 <= i <= n−1 and an edge v_n v_1.

A proposed solution that is extremely long or contains extremely large numbers can be rejected as soon as we detect an integer that is not in the correct range or detect more than n integers. The work done to check these properties depends on the representation of the graph, but is clearly straightforward and can be done in O(n^2) time. (Since n may be very large in comparison to L, we do not want to assume we can use an array with n entries.)

c) The input is a CNF formula in which the propositional variables are represented by integers in the range 1, ..., n. The overall input length is L.

Proposed solution: a sequence of integers. Interpretation: if i is present, then the truth value to be assigned to variable i is true; otherwise variable i is false.

The properties to be checked to verify the solution are:

- The proposed solution is no longer than the input, at most L.
- Each clause in the CNF expression evaluates to true with these truth values.

A clause consisting of the literals x_j evaluates to true if at least one of the x_j evaluates to true. If x_j is the variable i and i appears in the proposed solution, then x_j evaluates to true. If x_j is the negation of variable i and i does not appear in the proposed solution, then x_j evaluates to true. The input formula can be checked clause by clause in time O(L^2), since each literal in a clause can be checked in O(L) by scanning the proposed solution.
d) The input is an integer k and a graph with vertices 1, ..., n and m edges.

Proposed solution: a sequence of integers. Interpretation: These are the vertices in a vertex cover.

The properties to be checked to verify the solution are:

- The proposed solution contains at most k integers (or, if k > m, at most m integers), and each of these integers is incident upon some edge in the input graph.
- For each edge (i, j) in the graph, at least one of i and j is in the sequence.

A proposed solution that is extremely long or contains extremely large numbers can be rejected as soon as we detect an integer that is not incident upon any input edge or detect more than m integers. The work to check these properties is clearly straightforward and can be done in O(m^3) time.

e) Note: Due to an error, the first two printings of the text ask for a proposed solution to the question "Is n prime?", but the intended question is "Is n nonprime?".

The input is an L-bit integer n. Note that n might be much larger than L.

Proposed solution: Two positive integers, x and y. Interpretation: xy = n.

The properties to be checked to verify the solution are:

- The proposed solution is a pair of integers of at most L bits each.
- Each integer is greater than 1.
- xy = n.

The product xy can be computed by the manual long-multiplication technique in time proportional to L^2.

The first question, "Is n prime?", is indeed also in NP; see Vaughan Pratt, "Every prime has a succinct certificate," SIAM Journal on Computing, vol. 4, no. 3, 1975, pp. 214–220. But it would be a very advanced graduate-student exercise, too hard even for two stars in the context of the text.

An advanced study question: Suppose the s_i are written as (n_i / d_i), but n_i and d_i may be written in factored form. For purposes of the discussion, let's use "2**100" to denote 2^100 and use × to denote multiplication. Then a very large integer can be written with just a few characters, such as 2**100 × 7**400, which is 13 characters in this form, but would be 369 digits if written out in decimal form. There is no problem about multiplying such numbers; the problem is adding them in polynomial time. Is bin packing still in NP?

Section 13.3: NP-Complete Problems

13.6 Let the problems be P, R, and Q, where P is polynomially reducible to R and R is polynomially reducible to Q. There is a polynomial-time transformation T that takes input x for P and produces T(x), an input for R, such that the answer for R with this input is the correct answer for P with input x. There is a polynomial-time transformation U that takes input y for R and produces U(y), an input for Q, such that the answer for Q with this input is the correct answer for R with input y. Let UT be the composite function that first applies T to its argument, then applies U to the result; that is, UT(x) = U(T(x)). UT takes input x for P and produces an input for Q, and the answer for Q with this input is the correct answer for P with input x.

There are two things we must verify: that UT preserves correct answers and that it can be computed in polynomial time. The first is easy to see. The second point may seem clear too, but there is a (small) difficulty in the proof that should not be overlooked.

13.8 Let SSI be the subset sum problem for integers and let SSR be the subset sum problem for rational numbers. Since the integers are a subset of the rational numbers, any algorithm for SSR solves SSI, so the identity transformation shows that SSI is polynomially reducible to SSR.

For the other direction, SSR reduces to SSI as follows. Let s1, ..., sn and C be an input for SSR. Let d be the product of the denominators of s1, ..., sn, and C. The input is transformed to d·s1, ..., d·sn, and d·C (all of which are integers). As discussed in Exercise 13.4, this can be done in polynomial time. For any subset J of {1, ..., n}, the sum over j in J of s_j equals C if and only if the sum over j in J of d·s_j equals d·C. So the input for SSR has a yes answer if and only if the transformed input for SSI does.

13.10 Transform an undirected graph G into a directed graph H by replacing each edge vw of G by edges in each direction, vw and wv. If G has a Hamiltonian cycle, then there is one in H: for each edge in the cycle in G, the cycle in H uses the edge from a vw and wv pair that corresponds to the direction the edge was traversed in G. If there is a Hamiltonian cycle in H, then there is one in G: simply replace either edge vw or wv in the cycle in H by the corresponding undirected edge in G. The transformation can be done in linear time.

13.12 Let the transformation T take an undirected graph G = (V, E) as input and produce H and k as output, where H is a complete weighted undirected graph and k is an integer. H and k are calculated from G as follows. H = (V, F), where the weight of an edge (i, j) in F is 1 if (i, j) is in E and is 2 otherwise. The value of k is |V|. Clearly T(G) can be computed in polynomial time.

If G has a Hamiltonian cycle, then H has a tour of length k by taking the edges of the Hamiltonian cycle. If H has a tour of length k, it must consist entirely of edges of weight 1, because k cities need to be visited. Therefore the edges in the tour all occur in G, so they provide a Hamiltonian cycle for G.
The hitch is the timing for UT. Students are likely to say: Since T and U are computed in polynomial time, the total time is (p + q)(n), where n is the size of x and p and q are polynomial time bounds for algorithms to compute T and U, respectively. But U is applied to y = T(x), which may have size much larger than n. Let c be the (constant) limit on the number of units of output (perhaps characters) an algorithm may produce in one step. Then we can conclude that the size of y is at most c·p(n), so the time for the transformation UT is at most p(n) + q(c·p(n)), which is polynomial. (A similar argument appears in the proof of Theorem 13.4.)
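The clearing-of-denominators transformation in 13.8 is mechanical; a small Python sketch (our own illustration, using exact rationals) shows it:

```python
from fractions import Fraction
from math import prod

def to_integer_instance(sizes, C):
    # Multiply every size and the target C by the product d of all
    # denominators; subset sums are preserved exactly under scaling by d.
    d = prod(f.denominator for f in sizes + [C])
    return [int(f * d) for f in sizes], int(C * d)
```

Any subset sums to C in the original instance exactly when it sums to d·C in the transformed one.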

13.16 There is a well-known way to encode graph colorability instances as CNF-satisfiability instances. We will use this encoding as our transformation to prove that graph coloring is reducible to SAT. Suppose a graph G has n vertices and m edges, and k colors are available. (The exercise asked for k = 3, but the encoding is general.) The problem is trivial if k >= n, so we never need to encode such problems; we may immediately reject any input with k >= n, so in the following discussion we assume k < n.

G is encoded as a CNF formula that is satisfiable if and only if G is k-colorable. The set of propositional variables is x_vc for 1 <= v <= n and 1 <= c <= k. The "meaning" of x_vc is that vertex v is colored c.

Every vertex must have some color, so for every vertex 1 <= v <= n we create the clause (x_v1 or x_v2 or ... or x_vk). This amounts to n clauses of k literals each. No two adjacent vertices may have the same color, so for every edge (i, j) in G and every color 1 <= c <= k we create the clause (not x_ic or not x_jc). This amounts to mk clauses, with two literals each. So the total number of literals in the formula is 2mk + nk. Therefore the formula's length is in O(n^3) and is easily computable in polynomial time. For this exercise, k was specified as 3, so the length is in O(n + m); also the formula is in 3-CNF in this case.

It remains to show that this transformation preserves both yes and no answers for the graph colorability problem. If G is k-colorable, then the CNF formula is satisfiable by setting all the propositional variables according to their "meaning," as given above. Conversely, if the formula is satisfiable, then we claim that the propositional variables that are true in a satisfying assignment define a k-coloring for G. For each v, at least one x_vc is true, for some 1 <= c <= k. (If more than one x_vc is true for a given v, choose any one.) And for each edge (i, j), if x_ic is true, then x_jc must be false, because the clause (not x_ic or not x_jc) is satisfied; so i and j have different colors. Thus the true variables define a valid k-coloring.
13.14 T(G) is bipartite, but there are some graphs G that have Hamiltonian cycles where T(G) does not. A Hamiltonian cycle does not in general traverse every edge in the graph. Thus a cycle in T(G) that corresponds to a cycle in G will miss the new vertices in the edges that aren't in G's cycle. There is an example in the figure below.

[Figure: G (has a Hamiltonian cycle); T(G) (has no Hamiltonian cycle)]

13.18 To see that the clique problem is in NP, let a proposed solution be a sequence of vertices. It is easy to check in polynomial time that there are k vertices in the sequence, that each is actually a vertex in the graph, and that they are mutually adjacent.

Now we show that SAT reduces to Clique. Let the CNF formula E have clauses C_1, ..., C_p, and take k = p in the clique problem. In the graph constructed for E (as described in the exercise), there is a vertex (i, j) for the jth literal of clause C_i, and two vertices are adjacent exactly when they come from different clauses and their literals are not complementary. Finally, we note that the transformation can be done in quadratic time.

First, suppose a CNF formula E is satisfiable, and fix some satisfying truth assignment for the variables. For each clause C_i, let t_i be an index of a true literal in that clause. (If more than one literal in a clause is true, choose any one of them.) Consider the vertices V' = { (i, t_i) : 1 <= i <= p }. Edges are omitted from the graph only between vertices that correspond to literals in the same clause and between complementary literals; two complementary literals cannot both be true, but all of these literals are true, so that can't happen here. So each pair of vertices in V' is adjacent, and V' is a k-clique.

Now suppose that the graph constructed for E has a k-clique. The vertices in the k-clique must differ in the first component, because there are no edges between vertices that correspond to literals in the same clause. (Recall k = p.) Thus we may conclude that the vertices in the k-clique are (i, m_i), for 1 <= i <= p. Assign the value true to each literal l_(i, m_i), for 1 <= i <= p. There can be no conflicts in this assignment, because complementary literals are never adjacent, so no l_(i, m_i) is the complement of any l_(j, m_j). Variables not mentioned may be set arbitrarily. One literal in each clause is thereby true, so E is satisfiable.
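The clause-graph construction in 13.18 can be sketched in a few lines of Python (the function name `sat_to_clique` and the integer literal encoding, where −v denotes the negation of variable v, are our own conventions):

```python
from itertools import combinations

def sat_to_clique(clauses):
    # clauses: list of clauses, each a list of nonzero ints (literal -v is
    # the negation of variable v). Vertices are (clause index, position);
    # edges join literals from different clauses that are not complementary.
    vertices = [(i, j) for i, clause in enumerate(clauses)
                       for j in range(len(clause))]
    edges = set()
    for (i, j), (p, q) in combinations(vertices, 2):
        if i != p and clauses[i][j] != -clauses[p][q]:
            edges.add(((i, j), (p, q)))
    return vertices, edges
```

The formula with clauses C_1 = (x1 or x2) and C_2 = (not x1) yields one edge, joining x2 with not x1; that edge is exactly the 2-clique corresponding to the satisfying assignment x1 = false, x2 = true.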
Another exercise: Show that Independent Set is NP-complete by reducing SAT to Independent Set. A related construction can be used.

Therefore all the vertices not in C comprise a clique in G. logarithmic) compared to N.. For purposes of the transformation. has the same vertices as G and has exactly those undirected edges that are not in G. where G is the complement of G. 13. Check each subset of four vertices. These conditions can be detected using a straightforward depth-ﬁrst search. We claim that the vertex cover for G is a feedback vertex set for H. 13. All the transformations have a similar ﬂavor. There are n 4 the existence (or nonexistence) of six edges must be checked. k) which returns true if and only if the n items whose sizes are in the array S can be packed in k bins.Chapter 13 N P -Complete Problems 13. Consider any directed cycle in H. Then any edge can be checked in O´1µ. so it must be an edge in G. vw and wv. there are no edges between any pair of vertices in K. Vertex Cover. There is no problem if G is presented using N adjacency lists or an N ¢ N adjacency matrix. then each connected component of the graph is either an isolated vertex.23 If each vertex has degree at most two. The input for the feedback vertex set problem will be H and k.) The chromatic number of the graph will be three if there is a cycle of odd length. However. and the integer c n k. Construct the adjacency matrix representation of the graph in Θ´n2 µ time if it is not given in that form. This edge came from an edge vw in G. 13. We claim it is a vertex cover for G. and there are k n c of them. or a simple cycle.20 This solution uses the notion of the complement of a graph. let it be the set of vertices C.21 Let G (an undirected graph) and k be an input for the vertex cover problem. The transformation can be computed in time O´n3 µ. Then i j cannot be an edge in H. let n be the number of such vertices. Suppose G has a vertex cover using at most k vertices. On the other hand. and Independent Set can be reduced to any of the others. Let vw be an edge of G. (The number of edges is in O´nµ. 
Thus H may have T heta´N 2µ edges. The question for the transformed input is. For an undirected graph G. otherwise it will be two (if there are any edges at all) or one (if there are no edges). the roughly N 2 edges of H cannot be constructed in polynomial time. G could be presented as the integer N. Choose the remaining c n k vertices (those not in K) as the vertex cover of H. denoted G. (We explain this restriction later. then a sequence of edges. so the algorithm’s time is in Θ´n4 µ. and select any edge in the cycle. that is. 93 . the complement of G. If H has a vertex cover of size c. “Does H have a vertex cover of size c (or less)?” If G has a clique of size k. Thus at least one of the vertices v or w must be in the feedback vertex set. We use the function as follows. In H. For each set of four vertices. Suppose G actually has N vertices.) We transform G and k to the graph H G. Transform G to a directed graph H by replacing each edge vw of G by edges in each direction.26 ¡ n´n 1µ´n 2µ´n 3µ 24 such sets. either has size in Ω´N µ. 13. suppose H has a feedback vertex set. n. whether the size of H is bounded by a polynomial function of the size of the input graph G. Therefore it is not necessary for any vertex in K to be in a vertex cover of H. Let i and j be any two vertices not in C. In H there is a cycle consisting of only the two directed edges vw and wv.g. so at least one vertex is in the vertex cover. If the number of edges is tiny (e. The transformation can be done in linear time. Other variations: Show that any of Clique. Why did we deﬁne n to exclude isolated vertices? The problem has to do with the size of H. we use only vertices incident upon some edge of G. a simple chain of vertices. Let the input for Clique be a graph G and an integer k.28 Suppose we have a polynomial time Boolean subroutine canPack(S. let it be the set of vertices K. specifying the number of vertices without enumerating them.

int minBins(S, n)
    for (k = 1; k <= n; k ++)
        cando = canPack(S, k);
        if (cando)
            break;
    return k;

We know there will be at most n iterations of the for loop, so if canPack runs in polynomial time, so does minBins.

Section 13.4: Approximation Algorithms

13.30 FS(F) is the set of all subsets of F that are covers of A. For any such cover x, val(F, x) is the number of sets in x.

Section 13.5: Bin Packing

13.32 The objects are specified in four groups by decreasing sizes. The idea is that the optimum packing fills k − 1 bins exactly, while First Fit Decreasing, since it handles larger sizes first, is "tricked" into a packing that creates some waste. This is the usual technique for fooling a greedy algorithm: make sure the optimal solution requires more lookahead than the greedy strategy has. We give two schemes for this. We won't include all the details; we will simply state each claim and the condition it requires on k.

Solution 1. We'll define ε(k) to be a small positive perturbation (on the order of 1/k^2). The following construction works for k >= 4:

    Group 1: k − 1 objects
    Group 2: k − 1 objects
    Group 3: k − 1 objects (sizes enlarged by ε(k))
    Group 4: k − 1 objects (sizes reduced by ε(k))

with the sizes nonincreasing from Group 1 to Group 4, chosen so that one object from each of Groups 1, 3, and 4 fills a bin completely. In an optimal packing, one object from each of Groups 1, 3, and 4 fill a bin completely, so k − 1 bins are filled with all the objects in these groups; all the objects in Group 2 fit in one more bin, so the optimal number of bins is k.
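The minBins wrapper in 13.28 is trivial to express; here is a Python sketch in which the polynomial-time subroutine canPack is hypothetical (that is the point of the exercise) and is supplied by the caller:

```python
def min_bins(sizes, can_pack):
    # can_pack(sizes, k) is an assumed yes/no subroutine; try k = 1, 2, ...
    # At most len(sizes) calls are made, so min_bins is polynomial
    # whenever can_pack is.
    k = 1
    while not can_pack(sizes, k):
        k += 1
    return k
```

For testing purposes one can pass any stand-in predicate, e.g. a toy rule saying a bin holds two items.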
We establish the behavior of First Fit Decreasing on this input by verifying a series of claims.

Claim: The sizes are listed in nonincreasing order, so FFD handles larger sizes first. (True if k >= 4.)

Claim: FFD puts one object from Group 1 and one object from Group 2 together in each of the first k − 1 bins.
- Verify: Two objects from Group 1 won't fit.
- Verify: An object from Group 1 and an object from Group 2 do fit.
- Verify: No other objects fit with these two. (True if k >= 4.)

Since FFD handles larger sizes first, it is "tricked" into distributing Group 2 among the first k − 1 bins after it distributes Group 1 into those bins, creating some waste. Nothing else will fit into the first k − 1 bins.

Claim: FFD puts all the Group 3 objects, and no others, in the kth bin.
- Verify: All Group 3 objects fit in one bin.
- Verify: No Group 4 object fits in this bin. (True if k >= 4.)

Thus FFD has k − 1 remaining objects (Group 4) that it puts in extra bins, so FFD uses at least k + 1 bins while the optimum is k.

If a person were trying to fill the bins, he or she would probably notice that one Group-1 object and one Group-2 object left a lot of waste space, but one each of Groups 1, 3, and 4 was a perfect fit.
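FFD itself is short to simulate. The Python sketch below is a generic First Fit Decreasing (our own illustration, not the text's exact construction), demonstrated on a small input where FFD uses three bins although two suffice (1/2 + 1/4 + 1/4 and 1/3 + 1/3 + 1/3):

```python
from fractions import Fraction

def first_fit_decreasing(sizes):
    # Sort nonincreasing; place each object in the first bin it fits in,
    # opening a new bin only when none fits. Returns the number of bins.
    bins = []                      # total size currently in each bin
    for s in sorted(sizes, reverse=True):
        for i, used in enumerate(bins):
            if used + s <= 1:
                bins[i] += s
                break
        else:
            bins.append(s)
    return len(bins)
```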

. 3. Claim: The sizes are listed in nonincreasing order. the common denominator.) 2 ´2k · 13k · 19µ. 1. since 2 2 ´k 1µ´k · 7k · 12µ · 3´k · 6k · 9µ D´kµ so the optimal number of bins is k. All the remaining objects ﬁt in one bin.. (In fact. The behavior of FFD is shown with claims similar to solution 1. they will all go in one bin.) It is informative to work out an example with k 10. leaving Group 2 and three objects from Group 4. Verify: No other objects ﬁt with these two.. 2. so k 1 bins are ﬁlled with the objects in these groups. ´k 1µ k k2 1 · k k2 1 1.Chapter 13 N P -Complete Problems k·1 1. Claim: FFD puts one object from Group 1 and one object from Group 2 together in each of the ﬁrst k 1 bins.. It is a reﬁnement of the idea in solution 1.10. 2. and 4 ﬁll a bin completely.e. The following construction works for k 2. k2 · ¯ Verify: No Group 4 object ﬁts in this bin.. and so covers all cases of Lemma 13.e. (True if k 2 µ Thus FFD has k 1 remaining objects that it puts in extra bins. i. to be D´kµ ´k · 2µ´k · 3µ´k · 4µ 95 ¯ Verify: All Group 3 objects ﬁt in one bin. D´kµ.. i. (True if k Verify: An object from Group 1 and Group 2 do ﬁt. ´k 1µ´k2 · 7k · 10µ · 3´k2 · 6k · 9µ 2 3.) Verify: Two objects from Group 1 won’t ﬁt.e. ´k 1µ´k k 2µ D´kµ. ´k2 · 7k · 12µ · ´k2 · 6k · 9µ Verify: These objects ﬁt. Claim: FFD puts all the Group-3 objects and exactly three Group-4 objects in the kth bin. i. 2k2 · 13k · 19 D´kµ k2 · 7k · 12 size D ´k µ 2 · 7k · 10 k size D ´k µ k2 · 6k · 9 size D´kµ size 1 ¯ ¯ ¯ ¯ ¯ ¯ Verify: D´kµ ´2k2 · 13k · 19µ 2 ´k · 7k · 12µ. Verify: A fourth Group-4 object does not ﬁt. i. (True if 2 · 7k · 10µ · 4´k · 6k · 9µ Thus FFD has k 1 remaining Group-4 objects that it puts in an extra bin. (True if k 2. We’ll deﬁne D´kµ.e.e. Solution 2. i. ´k 1µ k3 · 9k2 · 26k · 24 Group 1: Group 2: Group 3: Group 4: k 1 objects k 1 objects k 1 objects k · 2 objects In an optimal packing one object from each of the groups 1.

13.34 Best Fit Decreasing follows the procedure of First Fit Decreasing (Algorithm 13.1) closely, except that all bins are considered before choosing one. Lines that are new or changed relative to Algorithm 13.1 are numbered.

    binpackBFD(S, n, bin)
        float[] used = new float[n+1];
        // used[j] is the amount of space in bin j already used up.
        // bin[i] will be the number of the bin to which s_i is assigned.
        // Indexes for the array bin correspond to indexes in the sorted sequence.
        int binsUsed;
        int i, j, bestJ;
        Initialize all used entries to 0.0;
        Sort S into descending (nonincreasing) order, giving the sequence s1 >= s2 >= ... >= sn;
        binsUsed = 0;
        for (i = 1; i <= n; i ++)
            // Look for a bin in which s[i] fits best.
    1.      bestJ = binsUsed + 1;
    2.      for (j = 1; j <= binsUsed; j ++)
    3.          if (used[j] + s_i <= 1.0 && used[j] > used[bestJ])
    4.              bestJ = j;
            used[bestJ] += s_i;
            bin[i] = bestJ;
            if (bestJ > binsUsed)
                binsUsed = bestJ;
            // Continue for (i)

The worst-case time occurs when n bins are used: at the beginning of the ith iteration of the outer for loop, binsUsed may be i − 1, so the inner for loop executes Θ(n^2) times altogether.

13.35 a) An input on which BFD uses two bins but FFD uses three. (One such input, with two bins optimal, is 0.55, 0.5, 0.48, 0.02, followed by thirty copies of 0.015: FFD wastes part of the first bin on the 0.02, while BFD uses the 0.02 to finish off the second bin.) b) An input on which BFD uses three bins, but two is optimal. (One such input is 0.4, 0.4, 0.3, 0.3, 0.3, 0.3: BFD pairs the two 0.4's, and the four 0.3's no longer fit in one bin.)

13.37 Consider the input where s1 = 1 and s2 = C, the knapsack capacity, which can be chosen arbitrarily large. The greedy algorithm described in the exercise puts s1 in the knapsack and quits, although s2 would fill the knapsack completely. The ratio of the optimal to the algorithm's result is C.

13.39 The main departure from Algorithm 13.2 is that profit density is used. This is defined as p_i / s_i, where object i has profit p_i and size s_i. In the simplified version in the text, we assumed that p_i = s_i, so that all objects had equal profit density. As in Algorithm 13.2, we assume the class IndexSet is available to represent subsets of N and that the needed set operations are implemented efficiently in this class. The output parameter take is in this class.
bestJ.0 && used[j] > used[bestJ]) bestJ = j.25.1) closely. As in Algorithm 13. 13. j ++) if (used[j] + si 1. int i.

    knapsackk(C, P, S, take)
        int maxSum;
        float maxProfit;
        int j;
        maxSum = 0;  maxProfit = 0;  take = the empty set;
        Sort S and P into descending (nonincreasing) order of profit density,
            so that p1/s1 >= p2/s2 >= ... >= pn/sn;
        For each subset T of N with at most k elements:
            IndexSet T = new IndexSet;
            sum = the sum over i in T of s_i;
            profit = the sum over i in T of p_i;
            if (sum <= C)
                // Consider remaining objects in density order.
                For each j not in T:
                    if (sum + s_j <= C)
                        sum += s_j;  profit += p_j;  T = T union {j};
                // See if T fills the knapsack best so far.
                if (maxProfit < profit)
                    maxSum = sum;  maxProfit = profit;  Copy fields of T into take;
            // Continue with next subset of at most k indexes.
        return maxProfit;

The worst-case time for A_0 was linear before; now it is in Θ(n log n) because of the sorting. The proof of Theorem 13.12 must be modified to include the time for the sorting; the only case where this affects the result is when k = 0, since for k > 0 the worst-case time for A_k is in Ω(n^2). The proof of Theorem 13.13 becomes much more complicated.

Theorem. Let A_k be the sequence of algorithms given above for the general knapsack problem. Then R_Ak(m) and S_Ak(n), the worst-case ratios of the optimal solution to the value found by A_k, are at most 1 + 1/k for all m and n.

Proof: (Based on S. Sahni, "Approximate algorithms for the 0/1 knapsack problem," JACM, Jan. 1975.) Fix k and a particular input I with opt(I) = m. Let T* be the set of objects in an optimal solution, which has total profit m, and let t be the number of objects in T*. If t <= k, then T* is explicitly considered by A_k, so val(A_k, I) = m and r_Ak(I) = 1.

Suppose t > k. We will assume that the objects in the optimal solution are indexed in a particular way; to keep the notation simple, we will denote these objects by indexes 1, ..., t rather than their actual indexes. Let p1, ..., pk be the k objects from T* with the largest profits. Let p_(k+1), ..., p_t be indexed in decreasing (nonincreasing) profit density order:

    p1, ..., pk                                     k largest profits
    p_(k+1)/s_(k+1) >= ... >= p_(q-1)/s_(q-1)       Middle group
    p_q/s_q >= ... >= p_t/s_t                       Small group

Thus this indexing does not correspond exactly to the indexes used in the algorithm and does not imply that the optimal solution consists of the t objects with greatest profit density. The subset {1, ..., k} will be considered explicitly by A_k. (If the algorithm's output uses a different T, that set must give at least as good a profit sum, so proving the theorem for this T will be sufficient.) We will examine the action of A_k when it works with this particular T. A_k considers the remaining objects in profit density order, adding those that fit. Let q be the first index in the optimal solution that is not added to T by A_k. (If there is no such q, then A_k gives an optimal solution.) The middle group above consists of the optimal objects considered before q, and the small group of q and the rest; it will be useful to think of the objects divided into these three groups.

The sum of the profits in the optimal solution can be described as:

    m = kLargest + middles + smalls                                  (13.1)

Observe that p_q is not one of the k largest profits in the optimal solution, so

    p_q <= m / (k+1)                                                 (13.2)

We will use this bound later.

At the point when A_k considers and rejects object q, the algorithm's set consists of the k largest, the middle group, and perhaps some extra objects not in the optimal solution (the extras); its profit is

    profit = kLargest + middles + extras                             (13.3)

where the extras all have profit density at least p_q/s_q (because they were considered before the qth object). Now we aim to get bounds on the profits of some of the groups that make up the optimal solution and the algorithm's solution. We will write size(group) for the total size of the objects in a group. The first bound is:

    extras >= (p_q/s_q) · size(extras)                               (13.4)

Proof: extras = the sum of p_j = the sum of (p_j/s_j) s_j >= the sum of (p_q/s_q) s_j = (p_q/s_q) · size(extras), where the j in the summations indexes the extra objects added by A_k that are not in the optimal solution.

The next bound we need involves C, the capacity of the knapsack:

    smalls <= (p_q/s_q) (C − size(kLargest) − size(middles))         (13.5)

Proof: smalls = the sum for i = q to t of p_i = the sum of (p_i/s_i) s_i <= (p_q/s_q) times the sum for i = q to t of s_i <= (p_q/s_q)(C − size(kLargest) − size(middles)). The last inequality follows from the fact that the small group must fit in the knapsack along with the largest and middle groups.

Also, C − size(kLargest) − size(middles) − size(extras) < s_q, because otherwise A_k would have put the qth object in the knapsack. So, using the last result and Equations (13.1), (13.4), and (13.5), we can conclude that

    m = kLargest + middles + smalls
      <= kLargest + middles + (p_q/s_q)(C − size(kLargest) − size(middles))
      =  kLargest + middles + (p_q/s_q)(C − size(kLargest) − size(middles) − size(extras))
                            + (p_q/s_q) · size(extras)
      <= (kLargest + middles + extras) + (p_q/s_q) s_q
      =  (kLargest + middles + extras) + p_q.

Using Equations (13.3) and (13.2), we now have

    m <= profit + m/(k+1) = val(A_k, I) + m/(k+1)

and it is easy to rearrange terms to get m / val(A_k, I) <= 1 + 1/k. Thus R_Ak(m) <= 1 + 1/k and S_Ak(n) <= 1 + 1/k.

Section 13.7: Graph Coloring

13.42 For each vertex v, Sequential Coloring tries each color in turn, beginning with color 1, until it finds one it can use. A color is rejected only if there is a vertex adjacent to v that has already been assigned that color, so the maximum number of colors that can be rejected is the degree of v. Hence the color assigned to v is at most degree(v) + 1, and the maximum number of colors used is at most Δ(G) + 1.
beginning with color 1. So. until it ﬁnds one it can use.
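For concreteness, the subset-plus-greedy algorithm Ak can be sketched sequentially in Python. This is an unofficial sketch: the helper names (greedy_fill, A_k) and the small instance in the test are ours, not the text's.

```python
from itertools import combinations

def greedy_fill(chosen, order, sizes, profits, C):
    """Greedily add objects (in decreasing profit-density order) that still fit."""
    total_size = sum(sizes[i] for i in chosen)
    total_profit = sum(profits[i] for i in chosen)
    taken = set(chosen)
    for i in order:
        if i not in taken and total_size + sizes[i] <= C:
            taken.add(i)
            total_size += sizes[i]
            total_profit += profits[i]
    return total_profit, taken

def A_k(sizes, profits, C, k):
    """Try every subset T of at most k indexes; complete each greedily."""
    n = len(sizes)
    # Objects considered in decreasing profit density, as in A0.
    order = sorted(range(n), key=lambda i: profits[i] / sizes[i], reverse=True)
    best_profit, best_take = 0, set()
    for r in range(k + 1):
        for T in combinations(range(n), r):
            if sum(sizes[i] for i in T) > C:
                continue
            profit, taken = greedy_fill(T, order, sizes, profits, C)
            if profit > best_profit:
                best_profit, best_take = profit, taken
    return best_profit, best_take
```

On the instance below, A0 is fooled by the high-density object, while A2 reaches the optimum, illustrating why larger k improves the ratio.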

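A minimal sequential sketch of Sequential Coloring as described in 13.42 (the function name and the dictionary representation of the graph are our own choices):

```python
def sequential_coloring(adj):
    """adj: dict vertex -> set of neighbors. Returns dict vertex -> color (1, 2, ...)."""
    color = {}
    for v in adj:                       # vertices in some fixed order
        used = {color[w] for w in adj[v] if w in color}
        c = 1
        while c in used:                # try colors 1, 2, ... in turn
            c += 1
        color[v] = c                    # at most degree(v) + 1
    return color
```

Since at most degree(v) colors can be rejected at v, the result never uses more than ∆(G) + 1 colors.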
13.44 G1 × G2 has n1·n2 vertices and n1·m2 + m1·n2 edges.

13.47 In the graph at the right, the neighborhood of every vertex is 2-colorable, but the graph is not 3-colorable. [Figure omitted.] To verify the latter, note that any triangle must use three colors; trying to extend any such assignment, the possibilities are quickly exhausted.

Section 13.8: The Traveling Salesperson Problem

13.49 Consider the graph at the right. [Figure omitted; the edge weights shown are 15, 10, 12, 50, 12, and 5.] The nearest-neighbor strategy produces the cycle 1, 2, 3, 4, 1, which has weight 44. The shortest link algorithm chooses edge (2, 3), and it is eventually forced to take the edge of weight 50, so it produces a worse tour.

13.51 a)

    nearest2ends(V, E, W)
        Select an arbitrary vertex s to start the cycle C.
        u = s; // One end of the chain.
        v = s; // The other end.
        While there are vertices not yet in C:
            Select an edge uw or vw of minimum weight, where w is not in C;
            Add the selected edge to C;
            If the selected edge is uw,
                u = w;
            else
                v = w;
        Add the edge vu to C;
        return C;

In the graph at the right, nearest2ends produces a worse tour than the ordinary nearest-neighbor strategy. [Figure omitted; the edge weights shown are 20, 25, 50, 20, 22, and 15.] Start at vertex 1. The nearest-neighbor algorithm chooses the tour 1, 2, 3, 4, 1 for a cost of 82. Starting at 1, nearest2ends is eventually forced to take the edge of weight 50, and it completes its cycle for a cost of 105. The results are similar starting elsewhere.

b) It finds the same tour.

c) It might seem that having more choices at each step gives a better chance of finding a minimum tour. However, more chances to be greedy can produce even worse tours, since being greedy doesn't work for the TSP problem. Another exercise: Construct a graph for which nearest2ends does better than the nearest-neighbor strategy.

13.53 a) Θ(n²) is the best possible because the graph has about n²/2 edges (so we can't do better) and Prim's algorithm runs in Θ(n²). Prim's algorithm produces an in-tree representation of the minimum spanning tree, but an out-tree is more convenient for preorder traversal; Exercise 2.18 showed that an in-tree can be converted to an out-tree in linear time.

b) Rather than give a formal proof, we will present an intuitive argument. Consider an optimum tour, v1, v2, ..., vn, v1, with total weight T. Remove the minimum-weight edge in this tour. What remains is a spanning tree, so it has a weight at least as great as the minimum spanning tree. So it suffices to show that the tour generated by the MST-based algorithm costs at most twice the weight of the MST. Viewing DFS as a journey around the graph (i.e., the MST) is helpful here. Think of the MST-based tour as using only edges in the MST, but using each edge twice, once forward and once backward. The cost of this "pseudo-tour" that stays in the MST edges is twice the MST weight. The vertices are visited in the same order as in the MST-based tour, if we take preorder time as being when they are visited. The triangle inequality implies that a direct route between two cities is never more than a round-about route; although it is only explicit about round-about routes of two edges, it is easy to see that it extends to round-about routes of k ≥ 2 edges. So the actual MST-based tour costs at most the same as the pseudo-tour, because any direct routes it uses are less in weight than the round-about routes used by the pseudo-tour.

Section 13.9: Computing with DNA

13.54 At the beginning of Step 2c, we have single DNA strands representing paths of length n, beginning at vstart and ending at vend. Let R1, R2, and R3 be the strings for vertices 1, 2, and 3. Some strands include the sequence S1,2 S2,3, that is,

    d1,11 ··· d1,20  d2,1 ··· d2,10  d2,11 ··· d2,20  d3,1 ··· d3,10

If R4 happens to be d1,16 ··· d1,20 d2,1 ··· d2,15, then R4 could attach to such a strand.

Additional Problems

13.55 a) True. Any algorithm for satisfiability solves 2-SAT (2-satisfiability, sometimes called 2-CNF-SAT), so the identity transformation shows that 2-SAT ≤P satisfiability. But this does not imply that 2-SAT is N P-hard. (In fact, 2-SAT is in P, so using a general satisfiability algorithm would be overkill.)

b) False. If it were true, then exact graph coloring would be in P, which is impossible if P ≠ N P.

c) True. The traveling salesperson problem is N P-complete.

d) Unknown. All we know now is that if P ≠ N P, there is no such approximation algorithm.

13.57 A simple greedy algorithm could work as follows. Sort the job time requirements, ti, into decreasing (nonincreasing) order. Always assign the next job to the processor that finishes soonest. (In terms of bins, add the next ti to a bin with the smallest current sum.) This clearly can be implemented in O(n²) time. However, a small example with p = 2 and time requirements beginning 7, 6 shows that this strategy does not always produce optimal solutions.
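The two greedy tour heuristics can be sketched as follows. The weight matrix used in the test is a made-up complete graph, not the graph from the text's figures.

```python
def nearest_neighbor(W, start=0):
    """W: symmetric weight matrix. Repeatedly go to the nearest unvisited city."""
    n = len(W)
    tour, seen = [start], {start}
    while len(tour) < n:
        u = tour[-1]
        w = min((v for v in range(n) if v not in seen), key=lambda v: W[u][v])
        tour.append(w)
        seen.add(w)
    return tour

def nearest2ends(W, start=0):
    """Grow a chain; extend whichever end has the cheaper available edge."""
    n = len(W)
    chain, seen = [start], {start}
    while len(chain) < n:
        u, v = chain[0], chain[-1]
        end, w = min(((e, x) for e in (u, v) for x in range(n) if x not in seen),
                     key=lambda ew: W[ew[0]][ew[1]])
        if end == u:
            chain.insert(0, w)      # extend the front of the chain
        else:
            chain.append(w)         # extend the back
        seen.add(w)
    return chain                    # closing edge chain[-1] -> chain[0] is implicit

def tour_cost(W, tour):
    return sum(W[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))
```

As exercise 13.51c observes, neither heuristic dominates the other; which one wins depends on the instance.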

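The greedy schedule of 13.57 can be sketched with a heap instead of the O(n²) scan of bin sums; the heap is our design choice, and the job list in the test is a hypothetical instance.

```python
import heapq

def greedy_schedule(times, p):
    """Sort jobs into decreasing order; always give the next job to the
    processor that finishes soonest (the bin with the smallest current sum)."""
    loads = [(0, i) for i in range(p)]      # (current finish time, processor)
    heapq.heapify(loads)
    assignment = [[] for _ in range(p)]
    for t in sorted(times, reverse=True):
        load, i = heapq.heappop(loads)      # processor that finishes soonest
        assignment[i].append(t)
        heapq.heappush(loads, (load + t, i))
    makespan = max(load for load, _ in loads)
    return assignment, makespan
```

With p = 2 and times 7, 6, 4, 3, 2, the greedy makespan is 12, while 11 is achievable (7+4 versus 6+3+2), showing non-optimality.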
13.59 a) The TSP is commonly defined with integer edge weights; we will assume that the weights have been scaled to integers as a preprocessing step. (The method described in this solution does not work if the weights are floating point numbers.) If rational weights are allowed, the scaling can be done in polynomial time by putting all the rational numbers over a common denominator, then multiplying all the weights by that common denominator. This blows up the input length by a quadratic factor at most.

Let decideTSP(G, k) be a Boolean function that decides the traveling salesperson decision problem for a complete weighted graph G = (V, E, W) and tour bound k. Let Wmax be the largest edge weight in G, and let Tmax = Wmax · n. The weight of an optimal tour is at most Tmax. Note that the values of Wmax and Tmax are not polynomially bounded in the size of G, so we don't have time to check all the integers in the range 0 through Tmax to find the optimal tour weight. The idea of the algorithm is to use Binary Search on the integers 0 through Tmax, similar to the solution of an earlier exercise.

    int minTSP(G)
        Wmax = max (e ∈ E) W(e);
        Tmax = Wmax * n;
        low = 0; high = Tmax;
        while (low < high)
            mid = (low + high) / 2;
            if (decideTSP(G, mid) == true)
                high = mid;
            else
                low = mid + 1;
            // Continue loop.
        return high;

The number of iterations of the loop is about lg Tmax = lg(Wmax) + lg(n), which is polynomially bounded in the size of the input. If decideTSP runs in polynomial time, then so does minTSP.

b) Use the algorithm minTSP in part (a) to find T, the weight of an optimal tour for the graph G = (V, E, W). The next algorithm constructs an optimal tour by testing each edge in G and discarding those that are not required. We consider an edge discarded if its final weight is greater than T; clearly, an edge with weight greater than T cannot be in a tour of total weight T. Note that certain weights are increased in the for loop and are deliberately left that way.

    findTour(G, T)
        For each edge e in G:
            Change W(e) to T+1;
            if (decideTSP(G, T) == false)
                // e is needed for a minimum tour.
                Change W(e) back to its original value;
        Output edges with W(e) ≤ T;

decideTSP is called |E| times, so if decideTSP runs in polynomial time, then findTour does also.

13.61 Let polySatDecision(E) be a polynomial-time Boolean function that determines whether or not the CNF formula E is satisfiable. Let n be the number of variables in E, and let the variables be x1, ..., xn. The idea is to choose a truth value for each variable in turn, substitute it into the formula, and then use polySatDecision to check whether there is a truth value assignment for the remaining variables that will make the simpler formula true; if not, choose the opposite value for the variable. (The substitution process may create empty clauses. Although the text does not go into detail on empty clauses, the definitions given imply that an empty clause cannot be made true by any assignment, and so such a clause causes the CNF formula to be unsatisfiable.) A boolean array, assign, is supplied to store the assignment.

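A sketch of minTSP in Python, with a brute-force stand-in for decideTSP. The real decideTSP is only assumed (not shown) to run in polynomial time; the point here is how the binary search uses it.

```python
from itertools import permutations

def tour_weight(W, perm):
    return sum(W[perm[i]][perm[(i + 1) % len(perm)]] for i in range(len(perm)))

def decide_tsp(W, k):
    """Stand-in decision procedure: is there a tour of total weight <= k?"""
    n = len(W)
    return any(tour_weight(W, (0,) + p) <= k for p in permutations(range(1, n)))

def min_tsp(W):
    """Binary search on 0..Wmax*n, calling the decision procedure O(lg Tmax) times."""
    n = len(W)
    wmax = max(max(row) for row in W)
    low, high = 0, wmax * n
    while low < high:
        mid = (low + high) // 2
        if decide_tsp(W, mid):
            high = mid          # a tour of weight <= mid exists
        else:
            low = mid + 1       # every tour weighs more than mid
    return high
```
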
    satAssign(E, assign)
        if (polySatDecision(E) == false)
            Output "not satisfiable";
            return;
        for (i = 1; i ≤ n; i++)
            // Try assigning true to xi.
            Etrue = the formula constructed from E by deleting all clauses
                containing xi and deleting the literal ¬xi from all other clauses;
            if (polySatDecision(Etrue) == true)
                assign[i] = true;
                E = Etrue;
            else
                assign[i] = false;
                Efalse = the formula constructed from E by deleting all clauses
                    containing ¬xi and deleting the literal xi from all other clauses;
                E = Efalse;
            // Continue loop.
        Output truth assignment in assign array;

Since there are n variables, the size of the formula is at least n, so the number of iterations of the loop is polynomial in the size of the input. The modifications needed to produce the new formula can be made in one pass over the current version of the formula, and the formula never gets larger. Thus, if polySatDecision runs in polynomial time, then satAssign does also.

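A sketch of satAssign with a brute-force stand-in for polySatDecision. The signed-integer clause convention (literal k means xk, −k means ¬xk) is our own choice for the sketch.

```python
from itertools import product

def poly_sat_decision(clauses, nvars):
    """Stand-in for polySatDecision (the real one is assumed to be polynomial)."""
    if any(len(c) == 0 for c in clauses):   # an empty clause is unsatisfiable
        return False
    return any(
        all(any((lit > 0) == bits[abs(lit) - 1] for lit in c) for c in clauses)
        for bits in product([False, True], repeat=nvars)
    )

def substitute(clauses, var, value):
    """Delete satisfied clauses; delete falsified literals from the rest."""
    lit_true = var if value else -var
    out = []
    for c in clauses:
        if lit_true in c:
            continue                         # clause already satisfied
        out.append([lit for lit in c if lit != -lit_true])
    return out

def sat_assign(clauses, nvars):
    """Self-reduction: decide-then-substitute, one variable at a time."""
    if not poly_sat_decision(clauses, nvars):
        return None
    assign = []
    for v in range(1, nvars + 1):
        e_true = substitute(clauses, v, True)
        if poly_sat_decision(e_true, nvars):
            assign.append(True)
            clauses = e_true
        else:
            assign.append(False)
            clauses = substitute(clauses, v, False)
    return assign
```
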
Chapter 14: Parallel Algorithms

Section 14.3: Some Simple PRAM Algorithms

14.2 Each processor's index is pid; each processor carries out the algorithm using its own index for pid. The integers to be added are in memory cells M[0], ..., M[n-1], and the sum will be left in M[0].

    parSum(M, n)
        int incr, sum0, sum1;
        Write 0 into M[n+pid]; // Pad, so reads past the end get 0.
        incr = 1;
        while (incr < n)
            Read M[pid] into sum0 and read M[pid+incr] into sum1;
            Write (sum0 + sum1) into M[pid];
            incr = 2 * incr;

14.4 Use n³ processors, indexed Pijk for 0 ≤ i, j, k < n, to parallelize the nested for loops in the original algorithm. An element newR[i][j][k] in a three-dimensional work array is set to 1 in each iteration if and only if a path from i to j through k has been found and no path from i to j was already known. Binary fan-in then combines the results for each (i, j) and updates rij.

    parTransitiveClosure(A, R, n)
        int incr;
        Each Pij0 copies aij into rij; each Pii0 (those on the diagonal) sets rii to 1;
        incr = 1;
        while (incr < n)
            if (rij == 0) // If rij is already 1, do no more work.
                Each Pijk does: newR[i][j][k] = rik ∧ rkj;
                Each group of n processors Pij0, ..., Pij(n-1) does:
                    Compute the Boolean OR of newR[i][j][0], ..., newR[i][j][n-1],
                    using binary fan-in, leaving the result in newR[i][j][0];
                Each Pij0 does: if (newR[i][j][0] == 1) write 1 into rij;
            incr = 2 * incr;

Section 14.4: Handling Write Conflicts

14.5 This is a simple modification of Algorithm 14.1. The array idx[0], ..., idx[2n-1] is used, and the final result is left in idx[0] instead of M[0]; it is an index in the range 0, ..., n-1.

    parTournMaxIndex(M, idx, n)
        int incr, idx0, idx1, idxBig;
        Key key0, key1;
        Write pid into idx[pid] and write n into idx[pid+n];
        Write -∞ (some very small value) into M[n+pid]; // Pad.
        incr = 1;
        while (incr < n)
            Read idx[pid] into idx0 and read idx[pid+incr] into idx1;
            Read M[idx0] into key0 and read M[idx1] into key1;
            if (key1 > key0)
                idxBig = idx1;
            else
                idxBig = idx0;
            Write idxBig into idx[pid];
            incr = 2 * incr;

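The synchronous PRAM convention (all reads in a round happen before all writes) can be simulated sequentially by working from a snapshot. This is an unofficial model of parSum, not PRAM code:

```python
def par_sum_rounds(values):
    """Simulate parSum: n virtual processors; each round, every processor
    reads from a snapshot and then writes, so reads precede writes."""
    n = len(values)
    M = values + [0] * n             # pad with zeros so reads past the end are safe
    incr = 1
    while incr < n:
        snapshot = M[:]              # all processors read synchronously
        for pid in range(n):
            M[pid] = snapshot[pid] + snapshot[pid + incr]
        incr *= 2
    return M[0]
```

After lg n rounds, M[0] holds the total, matching the binary fan-in analysis.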
14.7 Let the matrices be A and B, and let C be the product, so that

    cij = OR (k = 0 to n-1) of (aik ∧ bkj).

We use n³ processors, Pijk, with one group of n processors Pij0, ..., Pij(n-1) to compute each entry cij.

    parMultBooleanMatrix(A, B, C, n)
        Step 1: Each Pij0 writes 0 into cij.
        Steps 2 and 3: Each Pijk computes aik ∧ bkj and stores it in cTerm[i][j][k].
        Step 4: Pij0, ..., Pij(n-1) compute the Boolean OR of cTerm[i][j][0], ...,
            cTerm[i][j][n-1] by binary fan-in, writing the result in cij instead of M[0].

14.8 The solution is similar to Exercise 14.4 (see above), except that newR is a local variable in each processor, and if Pijk's value of newR is 1, it writes a 1 into rij. More than one processor may write a 1 in the same cell at one time; that is, if multiple processors write into the same cell, they all write a 1, so this algorithm works on Common-Write or stronger PRAM models.

14.10 Suppose that there were an algorithm that can multiply two n × n Boolean matrices in o(log n) steps on a CREW PRAM. Let x0, x1, ..., x(n-1) be n bits. We will show that the matrix multiplication algorithm could be used to compute the Boolean OR of these bits in o(log n) steps, contradicting the lower bound for Boolean OR. Let A be an n × n Boolean matrix whose first row contains the xi, and let B be an n × n Boolean matrix whose first column contains all 1's. The first (upper left corner) element in the matrix product is x0 ∨ x1 ∨ ··· ∨ x(n-1).

14.11 Algorithm 14.3 could fail if each processor chooses the loser arbitrarily when the keys it compares are equal. Suppose there are three keys, all equal. P01 might write 1 in loser[0], P02 might write 1 in loser[2], and P12 might write 1 in loser[1], so there will be no entry in the loser array that is zero.

Section 14.5: Merging and Sorting

14.13 There are two natural solutions, depending on the PRAM model being used. One solution overlaps the insertion of keys; that is, it begins a new iteration of the outer loop (the for loop) in Algorithm 4.1 before earlier iterations are completed. The solution given here is a little easier to follow: it does each insertion (the work of the while loop) in parallel in a constant number of steps, using n-1 processors. Now the body of the while loop executes in Θ(1) steps, so the whole sort runs in Θ(n) steps.

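The value computed by parMultBooleanMatrix, and the OR-of-bits reduction used in 14.10, can be checked with a small sequential model (names are ours):

```python
def bool_mat_mult(A, B):
    """C[i][j] = OR over k of (A[i][k] AND B[k][j]). On the PRAM, each of the
    n^3 AND terms is one processor and the OR is a binary fan-in."""
    n = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]
```

For the reduction of 14.10: put the bits in the first row of A and all 1's in the first column of B; then C[0][0] is the OR of the bits.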
    parInsertionSort(M, n)
        Key key0, key1, newKey;
        for (i = 1; i < n; i++)
            // Insert the key in M[i] into its proper place among the sorted
            // keys in M[0], ..., M[i-1].
            Each processor does: Read M[i] into newKey;
            If (pid > 0) read M[pid-1] into key0; else key0 = -∞;
            Read M[pid] into key1;
            // This moves all keys larger than M[i] to the ``right.''
            if (newKey < key1) write key1 into M[pid+1];
            // Only one processor writes newKey (the original M[i]), and
            // it writes into the correct position.
            If (key0 ≤ newKey < key1) write newKey into M[pid];

Since there are no concurrent writes, this is a CREW PRAM algorithm.

14.14 Assume that there is an X element Kx at position mx and a Y element Ky at position my. Let the cross-rank of Kx be rx and the cross-rank of Ky be ry. If Kx < Ky, then ry ≥ mx and rx < my, so mx + rx < my + ry, and Kx and Ky cannot be written to the same output location. If Kx > Ky, then ry < mx and rx ≥ my, so mx + rx > my + ry, and again Kx and Ky cannot be written to the same output location. The exercise asks only about an X element and a Y element, but it is also easy to show that two elements in X, K1 and K2, cannot be written to the same output location. Suppose K1 is at position m1 and K2 is at position m2, where m1 < m2. Since K1 ≤ K2, the cross-rank of K1 is less than or equal to that of K2, so the output location of K1 is less than that of K2. The argument for two elements in Y is the same.

14.16 The keys are in M[0], ..., M[n-1]. Use n² processors, Pij for 0 ≤ i, j < n; the processors work in n groups of n, one group for each column. The algorithm computes elements of a matrix isBefore such that isBefore[i][j] is 1 if M[i] should be somewhere before M[j] in the sorted order and is 0 otherwise. Then the algorithm sums each column of the matrix to find rank(j), the number of keys that should precede M[j] in the sorted order. Finally, the original M[j] is placed in M[rank(j)].

    parCountingSort(M, n)
        Each Pij does:
            Read M[i] and M[j];
            if (M[i] < M[j] || (M[i] == M[j] && i < j))
                Write 1 into isBefore[i][j];
            else
                Write 0 into isBefore[i][j];
        Each group of processors with j as their second index does:
            Sum column j of isBefore using binary fan-in, leaving the result
            in isBefore[0][j];
        Each P0j does:
            Read isBefore[0][j] into rankJ; // M[j] was read earlier.
            Write the saved value of M[j] into M[rankJ];

The binary fan-in takes Θ(log n) steps, and the other parts take Θ(1) steps.

Section 14.6: Finding Connected Components

14.18 First, perform Transitive Closure by Shortcuts by the method from Exercise 14.8, using the isBefore matrix for work space; call the resulting matrix R. Next, for each row v of R, determine the minimum column index among elements in that row that contain a 1. This index in row v is leader[v].

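A sequential model of parCountingSort's rank computation; note the tie-breaking by index, exactly as in the pseudocode, which makes all ranks distinct:

```python
def counting_sort_by_rank(M):
    """isBefore[i][j] = 1 iff M[i] should precede M[j]; rank(j) = column sum."""
    n = len(M)
    is_before = [[1 if (M[i], i) < (M[j], j) else 0 for j in range(n)]
                 for i in range(n)]
    out = [None] * n
    for j in range(n):                                  # one processor group per column
        rank_j = sum(is_before[i][j] for i in range(n))
        out[rank_j] = M[j]                              # distinct ranks: no write conflict
    return out
```
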
The processors work in groups, one group for each row, with all rows handled in parallel.

a) For the Common-Write model, the Transitive Closure by Shortcuts can be done in Θ(log n) steps, and the leaders can be computed in Θ(1) steps with a variant of Algorithm 14.1, the processors working in n groups of n² for each row in parallel. So the total is Θ(log n).

b) For the CREW model, the Transitive Closure by Shortcuts can be done in Θ(log² n) steps, and the leaders can be computed in Θ(log n) steps with a variant of Algorithm 14.3. So the total time is Θ(log² n).

14.19 The idea is to count the roots. Let root be an n-element array in memory. Use n processors. Each Pi does:

    if (parent[i] == i)
        root[i] = 1;
    else
        root[i] = 0;

Then compute the sum of the entries in the root array in O(log n) steps, using the parallel-sum solution above.

14.20 The result is the same tree in both cases because 7 is its own parent and 7 is 8's parent. [Figure: a tree on vertices 7 through 12.]

14.22 If there is any undirected edge ij in G with i > j, then the conditional singleton hooking step of initParCC hooks i to something, so node i is not a singleton for the next step. No cycles are formed by conditional singleton hooking, because all directed edges (parent pointers) in the data structure point from a higher numbered vertex to a lower numbered one. If node i is still a singleton after this step, then any undirected edge ij in G must satisfy i < j. Now consider the unconditional singleton hooking step. Node i must get attached to something. Attaching the singleton i to the parent of a neighbor j cannot produce a cycle: j is attached to something, so j cannot be a root at this point, no node in j's tree is a singleton, and so no node in j's tree is attached to any new node in this step. In either case, node i is not a root for the next step, so all nodes are in in-trees of two or more nodes at the end of this step, except for isolated nodes.

14.24 Do in parallel for all vertices v:

    singleton[v] = true;
    if (parent[v] ≠ v)
        singleton[v] = false;
        singleton[parent[v]] = false;

14.25 Let the undirected graph Gn consist of edges (1, i) and (i-1, i) for 2 ≤ i ≤ n. In the conditional singleton hooking step of initParCC, it is possible that every node i ≥ 2 is attached to node 1, immediately making one star containing all the nodes; in that case the algorithm terminates after one iteration of the while loop, because nothing changes. It is also possible that each node i ≥ 2 is attached to node i-1, making an in-tree of height n-1. Nothing is hooked in the unconditional singleton hooking step in either case. In the second case, the depth of node n starts at n-1 and shrinks by a factor of approximately 2 in each iteration (see the proof of Theorem 14.2), so Θ(log n) iterations are needed.

Additional Problems

14.27 Based on Exercise 14.26, it is only necessary to record the edges ij that were associated with successful hooking operations. However, after a write operation is performed, it may be impossible to determine which processor actually succeeded in writing. (Two processors may have tried to write the same value in the same memory location, but for different hook operations, i.e., different edges.) We introduce a new array reserve and modify the hooking operation given in Definition 14.4 so that processors first try to reserve the right to write in parent[v] by writing their pid in reserve[v]:

    hook(i, j)
        Pij does:
            Read parent[i] into parI;
            Read parent[j] into parJ;
            Write pid into reserve[parI];
            Read reserve[parI] into winningPid;
            if (winningPid == pid)
                Write parJ into parent[parI];
                Write 1 into stc[i][j];

This decision is made during the computation phase, which precedes the write phase.

14.29 Processors are indexed by 0, 1, ..., n-1.

a) Each processor Pi reads k from M[0]; if i < k, then Pi writes 1 into M[i]. No cell is written by more than one processor.

b) Suppose there are 1's in M[0], ..., M[k-1], where k ≥ 0, and 0's elsewhere; the problem is to store k in M[0].

Solution 1. Each Pi does: Read M[i] into bit; if (bit == 0) write i into M[0]; else if (i == n-1) write n into M[0]. Because low numbered processors have priority, the processor that succeeds in writing is the one with the smallest index i such that M[i] = 0, and it writes k.

Solution 2. Each processor Pi reads M[n-1-i]; if it is a 1, then Pi writes n-i into M[0]. Processors Pn-k, Pn-k+1, ..., Pn-1 will all be trying to write results k, k-1, ..., 1, respectively, into M[0]. Because low numbered processors have priority, only processor Pn-k will succeed in writing k to M[0]. If k = 0, then no processor will attempt to write into M[0], but M[0] already contains a 0, so the result is correct.

c) Each processor Pi reads M[i] into cur, and reads M[i+1] into next, except that Pn-1 just assigns 0 to next. Each processor Pi then does: if cur is 1 and next is 0, write i+1 to M[0]. This works because only one processor, the one that reads the final 1, will write a result to M[0], and so there can be no write conflict. If there are no 1's in M[0], ..., M[n-1], then no processor writes, but M[0] already contains a 0, so the result is correct. The result may be written into a location different from M[0], if desired. But if the problem were changed slightly to require the output in a different location, some processor must write even when there are no 1's; then P0 decides whether to write a 0 (if M[0] = 0) or a 1 (if M[0] = 1 and M[1] = 0) or not write (both are 1's), so P0 still finishes in the second step.

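The depth-halving behavior cited in 14.25 (and in the proof of Theorem 14.2) comes from shortcutting parent pointers, often called pointer jumping. A sequential simulation (names are our own):

```python
def pointer_jump(parent):
    """Repeatedly replace parent[v] by parent[parent[v]] for all v 'in parallel'
    (i.e., from a snapshot). The depth of each node roughly halves per round,
    so a chain of length n collapses to a star in about lg n rounds."""
    rounds = 0
    while any(parent[parent[v]] != parent[v] for v in range(len(parent))):
        snapshot = parent[:]
        parent = [snapshot[snapshot[v]] for v in range(len(parent))]
        rounds += 1
    return parent, rounds
```
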
14.30 This solution is a simplified version of one given by Marc Snir, "On Parallel Searching," SIAM Journal on Computing, vol. 14, no. 3, 1985. The keys reside in indexes 0, ..., n-1 of M; we can think of M[n] as containing ∞. The variables first and last (the boundaries of the interval being searched) are in the shared memory. The idea is to divide the keys into p+1 roughly equal intervals and use the p processors to "sample" the endpoints of the first p intervals. From the results of these 2p comparisons, the processors can determine which interval can contain K (x, in the statement of the exercise) if it is in the array at all. The search then proceeds in the same manner on an interval roughly 1/(p+1) times the size of the previously searched interval. Each processor has local variables lower and upper that indicate which elements it is to examine, as well as pid, its processor index.

    parSearch(M, n, p, K)
        P0 and Pp-1 do in parallel:
            // Only one processor writes into first and into last.
            Write 0 into first; write n into last;
        P0 does:
            if (K < M[0])
                Output K " is not in the array"; halt;
            if (K > M[n-1])
                Output K " is not in the array"; halt;
        Each processor executes the following loop:
        while (first + 1 < last)
            // Precondition: M[first] ≤ K < M[last].
            Read first and read last;
            lower = ((p - pid + 1) * first + pid * last) / (p+1);
            upper = ((p - pid) * first + (pid + 1) * last) / (p+1);
            if (M[lower] ≤ K < M[upper])
                Write lower into first;
                Write upper into last;
            else if (pid == p-1 && K ≥ M[upper])
                Write upper into first; // last is unchanged.
            // Continue while loop.
        P0 does:
            if (K == M[first])
                Output first;
            else
                Output K " is not in the array";

The size of the interval being searched is roughly 1/(p+1) times the size of the interval at the preceding iteration, so the number of iterations is roughly lg(n+1) / lg(p+1). The work done in one iteration of the loop is constant. Thus the number of steps is in Θ(log(n+1) / log(p+1)).

14.32 (The wording of the exercise is a little loose: N C contains problems, not algorithms. The more precise question would be "Are there any algorithms in this chapter that do not run in poly-log time with a polynomially bounded number of processors?") Almost all the algorithms described in, or to be written for, the exercises solve problems that are in N C. All the following problems are in N C:

- Finding the maximum, the sum, and the Boolean or of n inputs.
- Merging sorted sequences (Algorithm 14.4).
- Sorting (Algorithm 14.5).
- Matrix multiplication.
- Finding connected components (Algorithm 14.6).
- Finding the transitive closure of a relation (Exercise 14.4).

The O(n) implementation of Insertion Sort (Exercise 14.13) does not run in poly-log time. A straightforward parallelization of some dynamic programming algorithms might not run in poly-log time.
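A sequential simulation of parSearch; each iteration of the outer loop plays one synchronous round of all p processors. This is an unofficial model of the pseudocode above, not PRAM code:

```python
def par_search(M, p, K):
    """(p+1)-ary search: p processors probe subinterval boundaries each round."""
    n = len(M)
    if K < M[0] or K > M[n - 1]:
        return None
    first, last = 0, n                  # invariant: M[first] <= K < M[last], M[n] = +inf
    while first + 1 < last:
        nxt = None                      # exactly one processor's test succeeds
        for pid in range(p):
            lower = ((p - pid + 1) * first + pid * last) // (p + 1)
            upper = ((p - pid) * first + (pid + 1) * last) // (p + 1)
            hi = M[upper] if upper < n else float("inf")
            if M[lower] <= K < hi:
                nxt = (lower, upper)
            elif pid == p - 1 and K >= hi:
                nxt = (upper, last)     # K lies in the last subinterval
        first, last = nxt
    return first if M[first] == K else None
```

Each round shrinks the interval by a factor of roughly p+1, matching the Θ(log(n+1)/log(p+1)) bound.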