
Introduction to Algorithms: 6.046J/18.410J
Massachusetts Institute of Technology
Professors Piotr Indyk and Charles E. Leiserson
September 24, 2004
Handout 7

Problem Set 1 Solutions

Exercise 1-1. Do Exercise 2.3-7 on page 37 in CLRS.

Solution: The following algorithm solves the problem:

1. Sort the elements in S using mergesort.
2. Remove the last element from S. Let y be the value of the removed element.
3. If S is nonempty, look for z = x - y in S using binary search.
4. If S contains such an element z, then STOP, since we have found y and z such that x = y + z. Otherwise, repeat Step 2.
5. If S is empty, then no two elements in S sum to x.

Notice that when we consider an element y_i of S during the ith iteration, we don't need to look at the elements that have already been considered in previous iterations. Suppose there exists y_j in S such that x = y_i + y_j. If j < i, i.e., if y_j was removed prior to y_i, then we would have found y_i when we were searching for x - y_j during the jth iteration, and the algorithm would have terminated then.

Step 1 takes Θ(n lg n) time. Step 2 takes O(1) time. Step 3 requires at most lg n time. Steps 2-4 are repeated at most n times. Thus, the total running time of this algorithm is Θ(n lg n). We can do a more precise analysis if we notice that Step 3 actually requires Θ(lg(n - i)) time in the ith iteration. However, if we evaluate the sum Σ_{i=1}^{n-1} lg(n - i), we get lg((n - 1)!), which is Θ(n lg n). So the total running time is still Θ(n lg n).
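As an illustration (not part of the original handout), the algorithm above can be sketched in Python. The function name has_pair_with_sum and the use of the standard bisect module for the binary search are our own choices.

    from bisect import bisect_left

    def has_pair_with_sum(S, x):
        """Return True if two elements of S sum to x; runs in O(n lg n) time."""
        S = sorted(S)                     # Step 1: an O(n lg n) comparison sort
        while S:                          # Steps 2-4 repeat at most n times
            y = S.pop()                   # Step 2: remove the last (largest) element
            i = bisect_left(S, x - y)     # Step 3: binary search for z = x - y
            if i < len(S) and S[i] == x - y:
                return True               # Step 4: found y and z with y + z = x
        return False                      # Step 5: S is empty, no such pair

    # Example: 3 + 7 = 10 is found, but no two elements sum to 20.
    assert has_pair_with_sum([1, 3, 7, 9], 10) is True
    assert has_pair_with_sum([1, 3, 7, 9], 20) is False

Because y is removed before the search, an element is never paired with itself, matching Step 2 of the algorithm.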

Exercise 1-2. Do Exercise 3.1-3 on page 50 in CLRS.

Exercise 1-3. Do Exercise 3.2-6 on page 57 in CLRS.

Exercise 1-4. Do Problem 3-2 on page 58 of CLRS.

Problem 1-1. Properties of Asymptotic Notation

Prove or disprove each of the following properties related to asymptotic notation. In each of the following, assume that f, g, and h are asymptotically nonnegative functions.
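For reference, these are the standard CLRS definitions that the proofs below appeal to (they are not restated in the original handout; we record them here in LaTeX for convenience):

    f(n) = O(g(n))      \iff \exists\, c, n_0 > 0 :\ 0 \le f(n) \le c\,g(n) \text{ for all } n \ge n_0
    f(n) = \Omega(g(n)) \iff \exists\, c, n_0 > 0 :\ 0 \le c\,g(n) \le f(n) \text{ for all } n \ge n_0
    f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
    f(n) = o(g(n))      \iff \forall\, c > 0 \;\exists\, n_0 > 0 :\ 0 \le f(n) < c\,g(n) \text{ for all } n \ge n_0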

(a) f(n) = O(g(n)) and g(n) = O(f(n)) implies that f(n) = Θ(g(n)).

Solution: This statement is true. Since f(n) = O(g(n)), there exist an n_0 and a c such that for all n ≥ n_0, f(n) ≤ c·g(n). Similarly, since g(n) = O(f(n)), there exist an n_0' and a c' such that for all n ≥ n_0', g(n) ≤ c'·f(n). Therefore, for all n ≥ max(n_0, n_0'),

(1/c')·g(n) ≤ f(n) ≤ c·g(n).

Hence, f(n) = Θ(g(n)).

(b) f(n) + g(n) = Θ(max(f(n), g(n))).

Solution: This statement is true. For all n ≥ 1, f(n) ≤ max(f(n), g(n)) and g(n) ≤ max(f(n), g(n)). Therefore,

f(n) + g(n) ≤ max(f(n), g(n)) + max(f(n), g(n)) = 2·max(f(n), g(n)),

and so f(n) + g(n) = O(max(f(n), g(n))). Additionally, for each n, either f(n) ≥ max(f(n), g(n)) or else g(n) ≥ max(f(n), g(n)). Therefore, for all n ≥ 1, f(n) + g(n) ≥ max(f(n), g(n)), and so f(n) + g(n) = Ω(max(f(n), g(n))). Thus, f(n) + g(n) = Θ(max(f(n), g(n))).

(c) Transitivity: f(n) = O(g(n)) and g(n) = O(h(n)) implies that f(n) = O(h(n)).

Solution: This statement is true. Since f(n) = O(g(n)), there exist an n_0 and a c such that for all n ≥ n_0, f(n) ≤ c·g(n). Similarly, since g(n) = O(h(n)), there exist an n_0' and a c' such that for all n ≥ n_0', g(n) ≤ c'·h(n). Therefore, for all n ≥ max(n_0, n_0'), f(n) ≤ c·c'·h(n). Hence, f(n) = O(h(n)).

(d) f(n) = O(g(n)) implies that h(f(n)) = O(h(g(n))).

Solution: This statement is false. We disprove this statement by giving a counter-example. Let f(n) = 3n, g(n) = n, and h(n) = 2^n. Then f(n) = O(g(n)), but h(f(n)) = 8^n and h(g(n)) = 2^n. Since 8^n is not O(2^n), this choice of f, g, and h is a counter-example which disproves the theorem.
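To see concretely why 8^n is not O(2^n) in the counter-example for (d) (a remark of ours, not in the original handout): the ratio of the two functions is unbounded, so no constant c can satisfy the definition of O-notation.

    \frac{8^n}{2^n} = 4^n \to \infty \quad \text{as } n \to \infty,
    \qquad \text{so no constant } c \text{ satisfies } 8^n \le c \cdot 2^n \text{ for all sufficiently large } n.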

(e) f(n) + o(f(n)) = Θ(f(n)).

Solution: This statement is true. We prove that f(n) + o(f(n)) = Θ(f(n)). Let h(n) = o(f(n)). Since h(n) = o(f(n)), there exists an n_0 such that for all n > n_0, h(n) ≤ f(n) (taking c = 1 in the definition of o-notation). Thus, for all n > n_0, f(n) + h(n) ≤ 2·f(n), and so f(n) + h(n) = O(f(n)). Also, since for all n ≥ 1, f(n) + h(n) ≥ f(n) (recall that h is asymptotically nonnegative), we have f(n) + h(n) = Ω(f(n)). Thus, f(n) + h(n) = Θ(f(n)), and therefore f(n) + o(f(n)) = Θ(f(n)).

(f) f(n) ≠ o(g(n)) and g(n) ≠ o(f(n)) implies f(n) = Θ(g(n)).

Solution: This statement is false. We disprove this statement by giving a counter-example. Consider f(n) = 1 + cos(π·n) and g(n) = 1 - cos(π·n). For all even values of n, f(n) = 2 and g(n) = 0, so there does not exist a c_1 for which f(n) ≤ c_1·g(n). Thus, f(n) is not o(g(n)), because if there does not exist a c_1 for which f(n) ≤ c_1·g(n), then it cannot be the case that for every c_1 > 0 and all sufficiently large n, f(n) < c_1·g(n). For all odd values of n, f(n) = 0 and g(n) = 2, so there does not exist a c for which g(n) ≤ c·f(n). By the above reasoning, it follows that g(n) is not o(f(n)). Also, there cannot exist a c_2 > 0 for which c_2·g(n) ≤ f(n), because we could set c = 1/c_2 if such a c_2 existed. We have shown that there do not exist constants c_1 > 0 and c_2 > 0 such that c_2·g(n) ≤ f(n) ≤ c_1·g(n). Thus, f(n) is not Θ(g(n)).

Problem 1-2. Computing Fibonacci Numbers

The Fibonacci numbers are defined on page 56 of CLRS as

F_0 = 0,  F_1 = 1,  F_n = F_{n-1} + F_{n-2} for n ≥ 2.

In Exercise 1-3 of this problem set, you showed that the nth Fibonacci number is

F_n = (φ^n - φ̂^n) / √5,

where φ is the golden ratio and φ̂ is its conjugate.
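As a quick numerical sanity check of this closed form (our own illustration, not part of the handout; floating-point rounding is adequate for small n):

    from math import sqrt

    def fib_recurrence(n):
        """F_n computed directly from F_0 = 0, F_1 = 1, F_n = F_{n-1} + F_{n-2}."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def fib_closed_form(n):
        """F_n from the closed form (phi^n - phihat^n) / sqrt(5), rounded to an integer."""
        phi = (1 + sqrt(5)) / 2       # the golden ratio
        phihat = (1 - sqrt(5)) / 2    # its conjugate
        return round((phi ** n - phihat ** n) / sqrt(5))

    # The two computations agree for small n.
    assert all(fib_recurrence(n) == fib_closed_form(n) for n in range(30))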

A fellow 6.046 student comes to you with the following simple recursive algorithm for computing the nth Fibonacci number.

FIB(n)
1  if n = 0
2      then return 0
3  elseif n = 1
4      then return 1
5  return FIB(n - 1) + FIB(n - 2)

This algorithm is correct, since it directly implements the definition of the Fibonacci numbers. Let's analyze its running time. Let T(n) be the worst-case running time of FIB(n). (In reality, the time it takes to add two numbers depends on the number of bits in the numbers being added, more precisely, on the number of memory words. However, for the purpose of this problem, please assume that all operations take unit time; the approximation of unit-time addition will suffice.)

(a) Give a recurrence for T(n), and use the substitution method to show that T(n) = O(F_n).

Solution: The recurrence is T(n) = T(n - 1) + T(n - 2) + 1. We use the substitution method, inducting on n. Our induction hypothesis is T(n) ≤ c·F_n - b. We choose b = 2 and c = 10. For the base case, consider n in {0, 1} and note the running time is no more than 10 - 2 = 8. To prove the inductive step:

T(n) ≤ (c·F_{n-1} - b) + (c·F_{n-2} - b) + 1 = c·F_n - 2b + 1 ≤ c·F_n - b,

provided that b ≥ 1. Therefore, T(n) = O(F_n).

(b) Similarly, show that T(n) = Ω(F_n), and hence that T(n) = Θ(F_n).

Solution: Again the recurrence is T(n) = T(n - 1) + T(n - 2) + 1. We use the substitution method, inducting on n. Our induction hypothesis is T(n) ≥ F_n. For the base case, consider n in {0, 1} and note the running time is no less than 1. To prove the inductive step:

T(n) ≥ F_{n-1} + F_{n-2} + 1 = F_n + 1 ≥ F_n.

Therefore, T(n) = Ω(F_n), and hence T(n) = Θ(F_n).
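The bounds from (a) and (b) can be spot-checked numerically (our own illustration; we arbitrarily charge a cost of 1 for each base case, which only affects the constants):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def T(n):
        """Cost of FIB(n) under the recurrence T(n) = T(n-1) + T(n-2) + 1."""
        if n < 2:
            return 1                  # assumed unit cost for the two base cases
        return T(n - 1) + T(n - 2) + 1

    def fib(n):
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Sandwich T(n) between F_n and c*F_n - b with c = 10 and b = 2, as in the proofs.
    for n in range(2, 25):
        assert fib(n) <= T(n) <= 10 * fib(n) - 2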

Professor Grigori Potemkin has recently published an improved algorithm for computing the nth Fibonacci number, which uses a cleverly constructed loop to get rid of one of the recursive calls. Professor Potemkin has staked his reputation on this new algorithm, and his tenure committee has asked you to review his algorithm.

FIB'(n)
1  if n = 0
2      then return 0
3  elseif n = 1
4      then return 1
5  sum ← 1
6  for k ← 1 to n - 2
7      do sum ← sum + FIB'(k)
8  return sum

Since it is not at all clear that this algorithm actually computes the nth Fibonacci number, let's prove that the algorithm is correct, using a loop invariant in the inductive step of the proof.

(c) State the induction hypothesis and the base case of your correctness proof.

Solution: To prove the algorithm is correct, we induct on n, showing that FIB'(n) returns F_n, the nth Fibonacci number. Our induction hypothesis is that for all n < m, FIB'(n) returns F_n. We observe that the first four lines of FIB' guarantee that FIB'(n) returns the correct value when n < 2, so our base case is m = 2.

(d) State a loop invariant for the loop in lines 6-7. Prove, using induction over k, that your "invariant" is indeed invariant.

Solution: Our loop invariant is that after the k = i iteration of the loop, sum = F_{i+2}. We prove the invariant using induction over k. Our base case is i = 1. We observe that after the first pass through the loop, sum = 2, which is F_3, the 3rd Fibonacci number. To complete the induction step, we assume that after the k = (i - 1) iteration of the loop, sum = F_{i+1}. If the call to FIB'(i) on line 7 correctly returns F_i (by the induction hypothesis of our correctness proof in the previous part of the problem), then after the k = i iteration of the loop, sum = F_{i+2}. This follows immediately from the fact that F_i + F_{i+1} = F_{i+2}.
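A direct Python transcription of FIB' (our own, for illustration) that also asserts the loop invariant sum = F_{k+2} after every iteration of the loop:

    def fib_prime(n):
        """Potemkin-style FIB': one recursive call inside a loop instead of two."""
        if n == 0:
            return 0
        if n == 1:
            return 1
        F = [0, 1]                       # reference values, used only to check the invariant
        while len(F) < n + 1:
            F.append(F[-1] + F[-2])
        total = 1                        # line 5: sum <- 1
        for k in range(1, n - 1):        # lines 6-7: k = 1, ..., n - 2
            total += fib_prime(k)
            assert total == F[k + 2]     # loop invariant: after iteration k, sum = F_{k+2}
        return total

    assert [fib_prime(n) for n in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]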

(e) Use your loop invariant to complete the inductive step of your correctness proof.

Solution: To complete the inductive step of our correctness proof, we must show that if FIB'(n) returns F_n for all n < m, then FIB'(m) returns F_m. From the previous part we know that if FIB'(n) returns F_n for all n < m, then at the end of the k = i iteration of the loop, sum = F_{i+2}. We can thus conclude that after the k = m - 2 iteration of the loop, sum = F_m, which completes our correctness proof.

(f) What is the asymptotic running time, T'(n), of FIB'(n)? Would you recommend tenure for Professor Potemkin?

Solution: We will argue that T'(n) = Ω(F_n), and thus that Potemkin's algorithm FIB' does not improve upon the asymptotic performance of the simple recursive algorithm FIB. Therefore we would not recommend tenure for Professor Potemkin. One way to see that T'(n) = Ω(F_n) is to observe that the only constants in the program are the 1s in lines 4 and 5, so in order for the program to return F_n, lines 4 and 5 must be executed a total of F_n times. Another way to see that T'(n) = Ω(F_n) is to use the substitution method with the hypothesis T'(n) ≥ F_n and the recurrence T'(n) = cn + Σ_{k=1}^{n-2} T'(k).

Problem 1-3. Polynomial multiplication

One can represent a polynomial, in a symbolic variable x, with degree-bound n as an array P[0 .. n] of coefficients. Consider two linear polynomials, A(x) = a_1·x + a_0 and B(x) = b_1·x + b_0, which can be represented by the arrays [a_0, a_1] and [b_0, b_1], respectively, where a_1, a_0, b_1, and b_0 are numerical coefficients. We can multiply A and B using the four coefficient multiplications

m_1 = a_1 · b_1,
m_2 = a_1 · b_0,
m_3 = a_0 · b_1,
m_4 = a_0 · b_0,

as well as one numerical addition, to form the polynomial

C(x) = m_1·x^2 + (m_2 + m_3)·x + m_4,

which can be represented by the array [c_0, c_1, c_2] = [m_4, m_2 + m_3, m_1].
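A small Python sketch of this four-multiplication scheme (our own illustration; the array convention [a_0, a_1] for a_1·x + a_0 follows the problem statement):

    def mul_linear_4(A, B):
        """Multiply a1*x + a0 by b1*x + b0 using four coefficient multiplications."""
        a0, a1 = A
        b0, b1 = B
        m1 = a1 * b1
        m2 = a1 * b0
        m3 = a0 * b1
        m4 = a0 * b0
        return [m4, m2 + m3, m1]          # [c0, c1, c2] representing c2*x^2 + c1*x + c0

    # Example: (2x + 3)(5x + 7) = 10x^2 + 29x + 21.
    assert mul_linear_4([3, 2], [7, 5]) == [21, 29, 10]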

(a) Give a divide-and-conquer algorithm for multiplying two polynomials of degree-bound n, represented as coefficient arrays, based on this formula.

Solution: We can use this idea to recursively multiply polynomials of degree n - 1, where n is a power of 2, as follows. Let p(x) and q(x) be polynomials of degree n - 1, and divide each into its upper n/2 and lower n/2 terms:

p(x) = a(x)·x^(n/2) + b(x),
q(x) = c(x)·x^(n/2) + d(x),

where a(x), b(x), c(x), and d(x) are polynomials of degree n/2 - 1. The polynomial product is then

p(x)·q(x) = (a(x)·x^(n/2) + b(x))·(c(x)·x^(n/2) + d(x))
          = a(x)c(x)·x^n + (a(x)d(x) + b(x)c(x))·x^(n/2) + b(x)d(x).

The four polynomial products a(x)c(x), a(x)d(x), b(x)c(x), and b(x)d(x) are computed recursively.

(b) Give and solve a recurrence for the worst-case running time of your algorithm.

Solution: Since we can perform the dividing and combining of polynomials in time Θ(n), recursive polynomial multiplication gives us a running time of

T(n) = 4T(n/2) + Θ(n) = Θ(n^2).

(c) Show how to multiply two linear polynomials A(x) = a_1·x + a_0 and B(x) = b_1·x + b_0 using only three coefficient multiplications.

Solution: Writing A(x) = ax + b and B(x) = cx + d for brevity, we can use the following three multiplications:

m_1 = (a + b)·(c + d) = ac + ad + bc + bd,
m_2 = ac,
m_3 = bd,

so the polynomial product is

(ax + b)·(cx + d) = m_2·x^2 + (m_1 - m_2 - m_3)·x + m_3.
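In code, the three-multiplication trick for linear polynomials looks as follows (our own sketch; it agrees with the four-multiplication version shown earlier):

    def mul_linear_3(A, B):
        """Multiply a1*x + a0 by b1*x + b0 using only three coefficient multiplications."""
        a0, a1 = A
        b0, b1 = B
        m1 = (a1 + a0) * (b1 + b0)        # = a1*b1 + a1*b0 + a0*b1 + a0*b0
        m2 = a1 * b1
        m3 = a0 * b0
        return [m3, m1 - m2 - m3, m2]     # [c0, c1, c2]

    # Same example as before: (2x + 3)(5x + 7) = 10x^2 + 29x + 21.
    assert mul_linear_3([3, 2], [7, 5]) == [21, 29, 10]

The saving of one multiplication at the cost of a few extra additions is what powers the improved recurrence in part (e).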

(d) Give a divide-and-conquer algorithm for multiplying two polynomials of degree-bound n based on your formula from part (c).

Solution: The algorithm is the same as in part (a), except that we need only compute three products of half-size polynomials to get the polynomial product.

(e) Give and solve a recurrence for the worst-case running time of your algorithm.

Solution: Similar to part (b):

T(n) = 3T(n/2) + Θ(n) = Θ(n^(lg 3)) ≈ Θ(n^1.585).

Alternative solution: Instead of breaking a polynomial p(x) into two smaller polynomials a(x) and b(x) such that p(x) = a(x)·x^(n/2) + b(x), as we did above, we could do the following. Collect all the even powers of p(x) and substitute y = x^2 to create the polynomial a(y). Then collect all the odd powers of p(x), factor out x, and substitute y = x^2 to create the second polynomial b(y). Then we can see that

p(x) = a(y) + x·b(y).

Both a(y) and b(y) are polynomials of (roughly) half the original size and degree, and we can proceed with our multiplications in a way analogous to what was done above. Notice that, at each level k of the recursion, we need to compute y_k = (y_{k-1})^2 (where y_0 = x), which takes time Θ(1) per level and does not affect the asymptotic running time.
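For completeness, here is a compact Python sketch of the Θ(n^(lg 3)) divide-and-conquer from part (d) (our own illustration; coefficient arrays are stored low-order first, and the common length n is assumed to be a power of 2):

    def poly_add(P, Q):
        """Coefficient-wise sum of two polynomials, low-order coefficient first."""
        m = max(len(P), len(Q))
        return [(P[i] if i < len(P) else 0) + (Q[i] if i < len(Q) else 0) for i in range(m)]

    def poly_mul(P, Q):
        """Multiply two polynomials using three recursive half-size products."""
        n = len(P)                         # assume len(P) == len(Q) == n, with n a power of 2
        if n == 1:
            return [P[0] * Q[0]]
        half = n // 2
        b, a = P[:half], P[half:]          # p(x) = a(x)*x^(n/2) + b(x)
        d, c = Q[:half], Q[half:]          # q(x) = c(x)*x^(n/2) + d(x)
        m1 = poly_mul(poly_add(a, b), poly_add(c, d))
        m2 = poly_mul(a, c)                # high product a(x)c(x)
        m3 = poly_mul(b, d)                # low product b(x)d(x)
        mid = [x - y - z for x, y, z in zip(m1, m2, m3)]   # a(x)d(x) + b(x)c(x)
        result = [0] * (2 * n - 1)
        for i, v in enumerate(m3):
            result[i] += v                 # + b(x)d(x)
        for i, v in enumerate(mid):
            result[i + half] += v          # + (a(x)d(x) + b(x)c(x)) * x^(n/2)
        for i, v in enumerate(m2):
            result[i + n] += v             # + a(x)c(x) * x^n
        return result

    # Example: (x + 2)(3x + 4) = 3x^2 + 10x + 8, with coefficient arrays [2, 1] and [4, 3].
    assert poly_mul([2, 1], [4, 3]) == [8, 10, 3]

The three recursive calls per level are exactly what produce the T(n) = 3T(n/2) + Θ(n) recurrence solved in part (e).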