
Exercises and Solutions

Most of the exercises below have solutions, but you should try to solve them on your own first. Each subsection with solutions follows the corresponding subsection with exercises.

1.1 Time complexity and Big-Oh notation: exercises

1. A sorting method with "Big-Oh" complexity O(n log n) spends exactly 1 millisecond to sort 1,000 data items. Assuming that the time T(n) of sorting n items is directly proportional to n log n, that is, T(n) = c·n·log n, derive a formula for T(n), given the time T(N) for sorting N items, and estimate how long this method will take to sort 1,000,000 items.

2. A quadratic algorithm with processing time T(n) = c·n^2 spends T(N) seconds for processing N data items. How much time will be spent for processing n = 5,000 data items, assuming that N = 100 and T(N) = 1 ms?

3. An algorithm with time complexity O(f(n)) and processing time T(n) = c·f(n), where f(n) is a known function of n, spends 10 seconds to process 1,000 data items. How much time will be spent to process 100,000 data items if f(n) = n and if f(n) = n^3?

4. Assume that each of the expressions below gives the processing time T(n) spent by an algorithm for solving a problem of size n. Select the dominant term(s) having the steepest increase in n and specify the lowest Big-Oh complexity of each algorithm.

   Expression                        Dominant term(s)    O(...)
   5 + 0.001n^3 + 0.025n
   500n + 100n^1.5 + 50n·log10 n
   0.3n + 5n^1.5 + 2.5n^1.75
   n^2·log2 n + n(log2 n)^2
   n·log3 n + n·log2 n
   3 log8 n + log2 log2 log2 n
   100n + 0.01n^2
   0.01n + 100n^2
   2n + n^0.5 + 0.5n^1.25
   0.01n·log2 n + n(log2 n)^2
   100n·log3 n + n^3 + 100n
   0.003 log4 n + log2 log2 n

5. The statements below show some features of "Big-Oh" notation for the functions f ≡ f(n) and g ≡ g(n). Determine whether each statement is TRUE or FALSE and correct the formula in the latter case.

   Statement                                               TRUE or FALSE?   Correct formula if FALSE
   Rule of sums: O(f + g) = O(f) + O(g)
   Rule of products: O(f·g) = O(f)·O(g)
   Transitivity: if g = O(f) and h = O(f) then g = O(h)
   5n + 8n^2 + 100n^3 = O(n^4)
   5n + 8n^2 + 100n^3 = O(n^2·log n)

6. Prove that T(n) = a0 + a1·n + a2·n^2 + a3·n^3 is O(n^3) using the formal definition of the Big-Oh notation. Hint: Find a constant c and a threshold n0 such that c·n^3 ≥ T(n) for all n ≥ n0.

7. Algorithms A and B spend exactly TA(n) = 0.1n^2·log10 n and TB(n) = 2.5n^2 microseconds, respectively, for a problem of size n. Choose the algorithm which is better in the Big-Oh sense, and find a problem size n0 such that for any larger size n > n0 the chosen algorithm outperforms the other. If your problems are of size n ≤ 10^9, which algorithm would you recommend?

8. Algorithms A and B spend exactly TA(n) = cA·n·log2 n and TB(n) = cB·n^2 microseconds, respectively, for a problem of size n. Find the best algorithm for processing n = 2^20 data items if algorithm A spends 10 microseconds to process 1,024 items and algorithm B spends only 1 microsecond to process 1,024 items.

9. Algorithms A and B spend exactly TA(n) = 5·n·log10 n and TB(n) = 25·n microseconds, respectively, for a problem of size n. Which algorithm is better in the Big-Oh sense? For which problem sizes does it outperform the other?

10. One of two software packages, A or B, should be chosen to process very big databases, containing up to 10^12 records each. The average processing time of package A is TA(n) = 0.1·n·log2 n microseconds, and the average processing time of package B is TB(n) = 5·n microseconds. Which package has better performance in the "Big-Oh" sense? Work out the exact conditions under which each package outperforms the other.

11. One of two software packages, A or B, should be chosen to process data collections containing up to 10^9 records each. The average processing time of package A is TA(n) = 0.001n milliseconds and the average processing time of package B is TB(n) = 500·√n milliseconds. Which package has better performance in the "Big-Oh" sense? Work out the exact conditions under which each package outperforms the other.

12. Software packages A and B of complexity O(n log n) and O(n), respectively, spend exactly TA(n) = cA·n·log10 n and TB(n) = cB·n milliseconds to process n data items. During a test, the average times of processing n = 10^4 data items with packages A and B are 100 milliseconds and 500 milliseconds, respectively. Work out the exact conditions under which one package actually outperforms the other and recommend the best choice if up to n = 10^9 items are to be processed.

13. Let the processing time of an algorithm of Big-Oh complexity O(f(n)) be directly proportional to f(n). Let three such algorithms A, B, and C have time complexity O(n^2), O(n^1.5), and O(n log n), respectively. During a test, each algorithm spends 10 seconds to process 100 data items. Derive the time each algorithm should spend to process 10,000 items.

14. Software packages A and B have processing times exactly TA(n) = 3n^1.5 and TB(n) = 0.03n^1.75, respectively. If you are interested in faster processing of up to n = 10^8 data items, which package should be chosen?

1.2 Time complexity and Big-Oh notation: solutions

1. Because the processing time is T(n) = c·n·log n, the constant factor is c = T(N)/(N·log N), so that T(n) = T(N)·(n·log n)/(N·log N). The ratio of logarithms of the same base is independent of the base (see Appendix in the textbook), hence any appropriate base can be used in this formula (say, base 10). Therefore, for n = 1,000,000 the time is
   T(1,000,000) = T(1,000)·(1,000,000·log10 1,000,000)/(1,000·log10 1,000) = 1·(1,000,000·6)/(1,000·3) = 2,000 ms.
   (See the sketch after solution 3 for a numerical cross-check.)

2. The constant factor is c = T(N)/N^2, therefore T(n) = T(N)·n^2/N^2 = n^2/10,000 ms and T(5,000) = 2,500 ms.

3. The constant factor is c = T(1,000)/f(1,000) = 10/f(1,000) seconds, hence T(n) = 10·f(n)/f(1,000) seconds and T(100,000) = 10·f(100,000)/f(1,000) seconds. If f(n) = n, then T(100,000) = 1,000 seconds. If f(n) = n^3, then T(100,000) = 10^7 seconds.
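The scaling rule used in solutions 1 and 2 is easy to check numerically. The sketch below is our own addition, not part of the original text (the class and method names are purely illustrative); it extrapolates a measured time via T(n) = T(N)·f(n)/f(N) for a chosen growth function f and reproduces the answers above.

    import java.util.function.DoubleUnaryOperator;

    // Extrapolates running time as T(n) = T(N) * f(n) / f(N), assuming the time is
    // exactly proportional to the growth function f (the premise of solutions 1-3).
    public class TimeScaling {

        static double scale( double timeAtN, double bigN, double n, DoubleUnaryOperator f ) {
            return timeAtN * f.applyAsDouble( n ) / f.applyAsDouble( bigN );
        }

        public static void main( String[] args ) {
            // Solution 1: 1 ms for 1,000 items, f(n) = n*log(n)  ->  2,000 ms for 1,000,000 items.
            System.out.println( scale( 1.0, 1_000, 1_000_000, x -> x * Math.log10( x ) ) );
            // Solution 2: 1 ms for 100 items, f(n) = n^2  ->  2,500 ms for 5,000 items.
            System.out.println( scale( 1.0, 100, 5_000, x -> x * x ) );
        }
    }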

4.
   Expression                        Dominant term(s)        O(...)
   5 + 0.001n^3 + 0.025n             0.001n^3                O(n^3)
   500n + 100n^1.5 + 50n·log10 n     100n^1.5                O(n^1.5)
   0.3n + 5n^1.5 + 2.5n^1.75         2.5n^1.75               O(n^1.75)
   n^2·log2 n + n(log2 n)^2          n^2·log2 n              O(n^2 log n)
   n·log3 n + n·log2 n               n·log3 n, n·log2 n      O(n log n)
   3 log8 n + log2 log2 log2 n       3 log8 n                O(log n)
   100n + 0.01n^2                    0.01n^2                 O(n^2)
   0.01n + 100n^2                    100n^2                  O(n^2)
   2n + n^0.5 + 0.5n^1.25            0.5n^1.25               O(n^1.25)
   0.01n·log2 n + n(log2 n)^2        n(log2 n)^2             O(n(log n)^2)
   100n·log3 n + n^3 + 100n          n^3                     O(n^3)
   0.003 log4 n + log2 log2 n        0.003 log4 n            O(log n)

5.
   Statement                                                TRUE or FALSE?   Correct formula if FALSE
   Rule of sums: O(f + g) = O(f) + O(g)                     FALSE            O(f + g) = max{ O(f), O(g) }
   Rule of products: O(f·g) = O(f)·O(g)                     TRUE
   Transitivity: if g = O(f) and h = O(f) then g = O(h)     FALSE            if g = O(f) and f = O(h) then g = O(h)
   5n + 8n^2 + 100n^3 = O(n^4)                              TRUE
   5n + 8n^2 + 100n^3 = O(n^2·log n)                        FALSE            5n + 8n^2 + 100n^3 = O(n^3)

6. It is obvious that T(n) ≤ |a0| + |a1|n + |a2|n^2 + |a3|n^3. Thus if n ≥ 1, then T(n) ≤ c·n^3 where c = |a0| + |a1| + |a2| + |a3|, so that T(n) is O(n^3).

7. In the Big-Oh sense, the algorithm B is better. It outperforms the algorithm A when TB(n) ≤ TA(n), that is, when 2.5n^2 ≤ 0.1n^2·log10 n. This inequality reduces to log10 n ≥ 25, or n ≥ n0 = 10^25. Thus for processing up to 10^9 data items, the method of choice is A.

8. In the "Big-Oh" sense, the algorithm A of complexity O(n log n) is better than the algorithm B of complexity O(n^2). The constant factors for A and B are
   cA = 10/(1024·log2 1024) = 1/1024 and cB = 1/1024^2.
   Thus, to process 2^20 = 1024^2 items the algorithms A and B will spend
   TA(2^20) = (1/1024)·2^20·log2(2^20) = 20,480 microseconds and TB(2^20) = (1/1024^2)·2^40 = 2^20 ≈ 10^6 microseconds,
   respectively. Because TB(2^20) is much greater than TA(2^20), the method of choice is A.

9. In the Big-Oh sense, the algorithm B of complexity O(n) is better than A of complexity O(n log n). It outperforms the algorithm A when TB(n) ≤ TA(n), that is, if 25n ≤ 5n·log10 n, or log10 n ≥ 5, or n ≥ 100,000.

10. In the "Big-Oh" sense, the package B of linear complexity O(n) is better than the package A of O(n log n) complexity. The package B begins to outperform A when TA(n) ≥ TB(n), that is, when 0.1n·log2 n ≥ 5n. This inequality reduces to 0.1·log2 n ≥ 5, or n ≥ 2^50 ≈ 10^15. Thus for processing up to 10^12 data items, the package of choice is A.

11. In the "Big-Oh" sense, the package B of complexity O(n^0.5) is better than A of complexity O(n). The package B begins to outperform A when TA(n) ≥ TB(n), that is, when 0.001n ≥ 500·√n. This inequality reduces to √n ≥ 5·10^5, or n ≥ 25·10^10. Thus for processing up to 10^9 data items, the package of choice is A.

12. In the "Big-Oh" sense, the package B of linear complexity O(n) is better than the package A of O(n log n) complexity. The processing times of the packages are TA(n) = cA·n·log10 n and TB(n) = cB·n, respectively, and the tests allow us to derive the constant factors:
   cA = 100/(10^4·log10 10^4) = 1/400,
   cB = 500/10^4 = 1/20.
   The package B begins to outperform A when TA(n) ≥ TB(n), that is, when (n·log10 n)/400 ≥ n/20. This inequality reduces to log10 n ≥ 400/20 = 20, or n ≥ 10^20. Thus for processing up to 10^9 data items, the package of choice is A. (See the sketch after solution 13 for a numerical illustration of this crossover.)

13. The test allows us to derive the constant factor c = T(100)/f(100) for each algorithm, so that T(10,000) = T(100)·f(10,000)/f(100):

   Algorithm   Complexity    Time to process 10,000 items
   A           O(n^2)        T(10,000) = T(100)·10000^2/100^2 = 10·10,000 = 100,000 sec
   B           O(n^1.5)      T(10,000) = T(100)·10000^1.5/100^1.5 = 10·1,000 = 10,000 sec
   C           O(n log n)    T(10,000) = T(100)·(10000·log 10000)/(100·log 100) = 10·200 = 2,000 sec
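The crossover thresholds found analytically in solutions 7 through 12 can also be located by direct evaluation. The sketch below is our own illustration (class and method names are assumptions, not from the textbook) for the two packages of solution 12, using the constant factors 1/400 and 1/20 derived there; it scans powers of 10 and reports the first size at which package A is no longer faster.

    // Scans powers of 10 for the packages of solution 12, TA(n) = n*log10(n)/400 and
    // TB(n) = n/20, and reports the first size at which TA(n) >= TB(n).
    public class CrossoverFinder {

        static double tA( double n ) { return n * Math.log10( n ) / 400.0; }
        static double tB( double n ) { return n / 20.0; }

        public static void main( String[] args ) {
            for ( int p = 1; p <= 25; p++ ) {
                double n = Math.pow( 10, p );
                if ( tA( n ) >= tB( n ) ) {
                    System.out.println( "Package B starts to win at about n = 10^" + p );
                    break;
                }
            }
        }
    }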

14. In the Big-Oh sense, the package A of complexity O(n^1.5) is better than the package B of complexity O(n^1.75). However, it begins to outperform the package B only when TA(n) ≤ TB(n), that is, when 3n^1.5 ≤ 0.03n^1.75. This inequality reduces to n^0.25 ≥ 3/0.03 (= 100), or n ≥ 10^8. Thus for processing up to 10^8 data items, the package of choice is B.

1.3 Recurrences and divide-and-conquer paradigm: exercises

1. Running time T(n) of processing n data items with a given algorithm is described by the recurrence
   T(n) = k·T(n/k) + c·n;  T(1) = 0.
   Derive a closed-form formula for T(n) in terms of c, n, and k. What is the computational complexity of this algorithm in a "Big-Oh" sense? Hint: To have a well-defined recurrence, assume that n = k^m with the integer m = log_k n.

2. Running time T(n) of processing n data items with another, slightly different algorithm is described by the recurrence
   T(n) = k·T(n/k) + c·k·n;  T(1) = 0.
   Derive a closed-form formula for T(n) in terms of c, n, and k and determine the computational complexity of this algorithm in a "Big-Oh" sense. Hint: To have a well-defined recurrence, assume that n = k^m with the integer m = log_k n.

3. What value of k = 2, 3, or 4 results in the fastest processing with the above algorithm? Hint: You may need the relation log_k n = ln n / ln k, where ln denotes the natural logarithm with the base e = 2.71828...

4. Derive the recurrence that describes the processing time T(n) of the recursive method:

   public static int recurrentMethod( int[] a, int low, int high, int goal ) {
       int target = arrangeTarget( a, low, high, goal );
       if ( goal < target )
           return recurrentMethod( a, low, target - 1, goal );
       else if ( goal > target )
           return recurrentMethod( a, target + 1, high, goal );
       else
           return a[ target ];
   }

   The range of input variables is 0 ≤ low ≤ goal ≤ high ≤ a.length − 1. A non-recursive method arrangeTarget() has linear time complexity T(n) = c·n, where n = high − low + 1, and returns an integer target in the range low ≤ target ≤ high. Output values of arrangeTarget() are equiprobable in this range; e.g. if low = 0 and high = n − 1, then every target = 0, 1, ..., n − 1 occurs with the same probability 1/n. Time for performing elementary operations like if-else or return should not be taken into account in the recurrence.
   Hint: consider a call of recurrentMethod() for n = a.length data items, e.g. recurrentMethod( a, 0, n − 1, goal ), and analyse all recurrences for T(n) that arise for different input arrays. Each recurrence involves the number of data items the method is recursively called on.

5. Derive an explicit (closed-form) formula for the time T(n) of processing an array of size n if T(n) is specified implicitly by the recurrence
   T(n) = (1/n)·(T(0) + T(1) + ... + T(n − 1)) + c·n;  T(0) = 0.
   Hint: You might need the n-th harmonic number Hn = 1 + 1/2 + 1/3 + ... + 1/n ≈ ln n + 0.577 for deriving the explicit formula for T(n).

6. Determine an explicit formula for the time T(n) of processing an array of size n if T(n) relates to the average of T(n − 1), ..., T(0) as follows:
   T(n) = (2/n)·(T(0) + ... + T(n − 1)) + c,  where T(0) = 0.
   Hint: You might need the equation 1/(1·2) + 1/(2·3) + ... + 1/(n(n+1)) = n/(n+1) for deriving the explicit formula for T(n).

7. The obvious linear algorithm for exponentiation x^n uses n − 1 multiplications. Propose a faster algorithm and find its Big-Oh complexity in the case when n = 2^m by writing and solving a recurrence formula.

1.4 Recurrences and divide-and-conquer paradigm: solutions

1. The closed-form formula can be derived either by "telescoping" or by guessing it from a few values and then proving the guess by mathematical induction. (If you know difference equations in math, you will easily notice that these recurrences are difference equations, and "telescoping" solves them by substituting the same equation for gradually decreasing arguments, n − 1 or n/k.)

   (a) Derivation by "telescoping": the recurrence T(k^m) = k·T(k^(m−1)) + c·k^m can be divided through by k^m and "telescoped" as follows:

       T(k^m)/k^m         = T(k^(m−1))/k^(m−1) + c
       T(k^(m−1))/k^(m−1) = T(k^(m−2))/k^(m−2) + c
       ...
       T(k)/k             = T(1)/1 + c

   Summing these equalities and cancelling the common terms (with T(1) = 0) gives T(k^m)/k^m = c·m, or T(k^m) = c·m·k^m, that is, T(n) = c·n·log_k n. The Big-Oh complexity is O(n log n).

   (b) Another version of telescoping: the same substitution can be applied directly to the initial recurrence:

       T(k^m)         = k·T(k^(m−1)) + c·k^m
       k·T(k^(m−1))   = k^2·T(k^(m−2)) + c·k^m
       ...
       k^(m−1)·T(k)   = k^m·T(1) + c·k^m

   so that T(k^m) = c·m·k^m.

   (c) Guessing and math induction: let us compute a few initial values of T(n), starting from the given T(1) and successively applying the recurrence:

       T(1)   = 0
       T(k)   = k·T(1) + c·k     = c·k
       T(k^2) = k·T(k) + c·k^2   = 2·c·k^2
       T(k^3) = k·T(k^2) + c·k^3 = 3·c·k^3

   This sequence suggests the assumption T(k^m) = c·m·k^m, to be verified by math induction. (i) The base case T(1) ≡ T(k^0) = c·0·k^0 = 0 holds under our assumption. (ii) Let the assumption hold for l = 0, 1, ..., m − 1. Then
       T(k^m) = k·T(k^(m−1)) + c·k^m = k·c·(m − 1)·k^(m−1) + c·k^m = c·(m − 1 + 1)·k^m = c·m·k^m.
   Therefore, the assumed relationship holds for all m = 0, 1, ..., so that T(n) = c·n·log_k n.
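As an optional numerical cross-check of the closed form just derived (our own addition; the class name and the constants k and c are arbitrary choices), the sketch below evaluates the recurrence T(n) = k·T(n/k) + c·n directly for n = k^m and compares it with c·n·log_k n.

    // Evaluates T(n) = k*T(n/k) + c*n with T(1) = 0 for n = k^m and compares the
    // result with the closed form c * n * log_k(n) derived in solution 1.
    public class RecurrenceCheck {

        static double recurrence( long n, int k, double c ) {
            if ( n == 1 ) return 0.0;                       // base case T(1) = 0
            return k * recurrence( n / k, k, c ) + c * n;   // T(n) = k*T(n/k) + c*n
        }

        static double closedForm( long n, int k, double c ) {
            return c * n * Math.log( n ) / Math.log( k );   // c * n * log_k(n)
        }

        public static void main( String[] args ) {
            int k = 3;
            double c = 2.0;
            long n = 1;
            for ( int m = 1; m <= 10; m++ ) {
                n *= k;                                     // n = k^m
                System.out.printf( "n = %7d: recurrence = %12.1f, closed form = %12.1f%n",
                        n, recurrence( n, k, c ), closedForm( n, k, c ) );
            }
        }
    }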

2. The recurrence T(k^m) = k·T(k^(m−1)) + c·k^(m+1) telescopes, after division by k^(m+1), as follows:

       T(k^m)/k^(m+1)   = T(k^(m−1))/k^m + c
       T(k^(m−1))/k^m   = T(k^(m−2))/k^(m−1) + c
       ...
       T(k)/k^2         = T(1)/k + c

   Summing these equalities (with T(1) = 0) gives T(k^m)/k^(m+1) = c·m, or T(k^m) = c·m·k^(m+1), that is, T(n) = c·k·n·log_k n. The complexity is O(n log n) because k is constant.

3. Processing time T(n) = c·k·n·log_k n can be rewritten as T(n) = c·(k/ln k)·n·ln n to make the dependence on k explicit. Because 2/ln 2 ≈ 2.8854, 3/ln 3 ≈ 2.7307, and 4/ln 4 ≈ 2.8854, the fastest processing is obtained for k = 3.

4. Because all the variants

       T(n) = T(0) + cn       or  T(n) = T(n − 1) + cn   if target = 0
       T(n) = T(1) + cn       or  T(n) = T(n − 2) + cn   if target = 1
       T(n) = T(2) + cn       or  T(n) = T(n − 3) + cn   if target = 2
       ...
       T(n) = T(n − 1) + cn   or  T(n) = T(0) + cn       if target = n − 1

   are equiprobable, the recurrence is
       T(n) = (1/n)·(T(0) + ... + T(n − 1)) + c·n.

5. The recurrence suggests that nT(n) = T(0) + T(1) + ... + T(n − 2) + T(n − 1) + cn^2. It follows that (n − 1)T(n − 1) = T(0) + ... + T(n − 2) + c(n − 1)^2, and by subtracting the latter equation from the former one we obtain the following basic recurrence: nT(n) − (n − 1)T(n − 1) = T(n − 1) + 2cn − c. It reduces to nT(n) = nT(n − 1) + 2cn − c, or T(n) = T(n − 1) + 2c − c/n. Telescoping results in the following system of equalities:

       T(n)     = T(n − 1) + 2c − c/n
       T(n − 1) = T(n − 2) + 2c − c/(n − 1)
       ...
       T(2)     = T(1) + 2c − c/2
       T(1)     = T(0) + 2c − c

   Because T(0) = 0, summing these up gives the explicit expression for T(n):
       T(n) = 2cn − c·(1 + 1/2 + ... + 1/n) = 2cn − c·Hn ≈ 2cn − c·(ln n + 0.577).
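Solution 5 can be checked the same way. The sketch below is our own addition (names and the value of c are illustrative only); it iterates the full-history recurrence bottom-up and compares it with 2cn − c·Hn, and the two agree up to rounding.

    // Iterates T(n) = (1/n)*(T(0)+...+T(n-1)) + c*n bottom-up and compares it with
    // the closed form 2*c*n - c*H_n from solution 5.
    public class FullHistoryCheck {

        public static void main( String[] args ) {
            double c = 1.0;
            int limit = 20;
            double[] t = new double[ limit + 1 ];   // t[0] = 0 by default
            double prefixSum = 0.0;                 // running value of T(0) + ... + T(n-1)
            double harmonic = 0.0;                  // running value of H_n = 1 + 1/2 + ... + 1/n
            for ( int n = 1; n <= limit; n++ ) {
                prefixSum += t[ n - 1 ];
                t[ n ] = prefixSum / n + c * n;     // the recurrence
                harmonic += 1.0 / n;
                System.out.printf( "n = %2d: recurrence = %8.4f, closed form = %8.4f%n",
                        n, t[ n ], 2 * c * n - c * harmonic );
            }
        }
    }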

Processing time for such more efficient 10 . and T (n) = cn. m−1 7. The recurrence suggests that nT (n) = 2(T (0)+ T (1)+ . + T (n − 2)+ T (n − 1)) + c · n. An alternative approach (guessing and math induction): T (0) = 0 2 T (1) = 1 ·0+c 2 (0 + c) + c T (2) = 2 T (3) = 2 3 (0 + c + 2c) + c T (4) = 2 4 (0 + c + 2c + 3c) + c = c = 2c = 3c = 4c It suggests an assumption T (n) = cn to be explored with math induction: (i) The assumption is valid for T (0) = 0. . the subtraction of the latter equality from the former one results in the following basic recurrence nT (n) − (n − 1)T (n − 1) = 2T (n − 1) + c.6. + (n − 1)c) + c 2 (n−1)n = n c+c 2 = (n − 1)c + c = cn The assumption is proven. . . more efficient exponentiation is performed as follows: x2 = x · x. the explicit expression for T (n) is: 1 c 1 c n T (n) = + + ··· + =c· n+1 1·2 2·3 (n − 1)n n(n + 1) n+1 so that T (n) = c · n. reduces to nT (n) = (n + 1)T (n − 1) + c. it n can be computed with only one multiplication more than to compute x 2 . . . . . etc. T (k ) = kc for k = 1. . . n − 1. + T (n − 2)) + c · (n − 1). that is. or T n+1 = n Telescoping results in the following system of equalities: T ( n) n+1 T (n−1) n = = ··· = = T (n−1) n T (n−2) n−1 + + ··· + + c n(n+1) c (n−1)n ··· T (2) 3 T (1) 2 ··· T (1) 2 T (0) 1 ··· c 2·3 c 1·2 Because T (0) = 0. . . . Then T (n) 2 = n (0 + c + 2c + . x8 = x4 · x4 . . Because (n − 1)T (n − 1) = 2(T (0) + . Therefore. . But because xn = x 2 · x 2 . n − 1. (ii) Let it be valid for k = 1. It T (n−1) ( n) + n(nc+1) . x4 = x2 · x2 . A straightforward linear exponentiation needs n multiplications 2 = 2 n n n n to produce x after having already x 2 .

i < n. i > 0. k += 2 ) { sum += (i + j * k ). Work out the computational complexity of the following piece of code.. Work out the computational complexity of the following piece of code: for( int i = for( int j for( int .. k < n. k += 2 ) { // constant number of operations 2. j > 0. i-. } } } n. i /= 2 ) { = 1.5 Time complexity of code: exercises 1. j < n. or T (n) = log2 n. i > 0. 1.) { = 1. T (2m ) = m. The “Big-Oh” complexity of such algorithm is O(log n).. j /= 2 ) { for ( k = j. for ( i=1. Work out the computational complexity of the following piece of code assuming that n = 2m : for( int i = for( int j for( int . and by telescoping one obtains: T (2m ) = T (2m−1 ) = ··· T (2) = T (1) = T (2m−1 ) + 1 T (2m−2 ) + 1 ··· T (1) + 1 0 that is. k < j. } } } 3. } } } n.. i *= 2 ) { for ( j = n.algorithm corresponds to a recurrence: T (2m ) = T (2m−1 ) + 1. j *= 2 ) { k = 0. k < n. j < n. j *= 2 ) { k = 0. k++ ) { // constant number C of operations 11 .

4. Work out the computational complexity (in the "Big-Oh" sense) of the following piece of code and explain how you derived it using the basic features of the "Big-Oh" notation:

   for( int bound = 1; bound <= n; bound *= 2 ) {
       for( int i = 0; i < bound; i++ ) {
           for( int j = 0; j < n; j += 2 ) {
               ... // constant number of operations
           }
           for( int j = 1; j < n; j *= 2 ) {
               ... // constant number of operations
           }
       }
   }

5. Determine the average processing time T(n) of the recursive algorithm:

   int myTest( int n ) {
       if ( n <= 0 ) return 0;
       else {
           int i = random( n - 1 );
           return myTest( i ) + myTest( n - 1 - i );
       }
   }

   providing that the algorithm random( int n ) spends one time unit to return a random integer value uniformly distributed in the range [0, n], whereas all other instructions spend a negligibly small time (e.g., T(0) = 0). Hints: derive and solve the basic recurrence relating T(n) on average to T(n − 1), ..., T(0). You might need the equation 1/(1·2) + 1/(2·3) + ... + 1/(n(n+1)) = n/(n+1) for deriving the explicit formula for T(n).

6. Assume that the array a contains n values, that the method randomValue takes a constant number c of computational steps to produce each output value, and that the method goodSort takes n log n computational steps to sort the array. Determine the Big-Oh complexity for the following fragment of code, taking into account only the above computational steps:

   for( i = 0; i < n; i++ ) {
       for( j = 0; j < n; j++ )
           a[ j ] = randomValue( i );
       goodSort( a );
   }

respectively. the total complexity is O(n(log n)2 ). and log n. Let n = 2k . so a straightforward (and valid) solution is that the overall complexity is O(n2 log n). + 2m−1 = 2m − 1 ≈ n times.. and outer loop is proportional to n. middle. . . 2. so the bounds may be multiplied to give that the algorithm is O n(log n)2 . . . .5 + (k − 1) · 2k−1 ≡ (log2 n − 1) n 2 = O(n log n) Thus.. because of doubling the variable j. 1. Thus the overall Big-Oh complexity is O(n(log n)2 ). the overall complexity of the innermost part is O(n). More detailed optional analysis gives the same value. The outermost and middle loops have complexity O(log n) and O(n). 13 . the inner loop has different execution times: j Inner iterations 2k 1 2k−1 (2k − 2k−1 ) 1 2 2k−2 (2k − 2k−2 ) 1 2 . For each i. the number of inner/middle steps is 1 + k · 2k−1 − (1 + 2 + . In the outer for-loop. log n..6 Time complexity of code: solutions 1. 3. For each i. 2k−1 . . For each j. .. the middle loop is executed k + 1 times. + 2k−1 ) 1 2 = = 1 + k · 2k−1 − (2k − 1) 1 2 1. so that the two inner loops together go round 1 + 2 + 4 + . and for each value j = 2k . next loop goes round also log2 n times. the variable i keeps halving so it goes round log2 n times. 2. the innermost loop by k goes round j times. respectively. Running time of the inner. Loops are nested. because of doubling the variable j. The first and second successive innermost loops have O(n) and O(log n) complexity. the next loop goes round m = log2 n times.1. . so the bounds may be multiplied to give that the algorithm is O n2 . . The outer for-loop goes round n times. Loops are nested. The innermost loop by k goes round n 2 times. Thus. Then the outer loop is executed k times. 21 (2k − 21 ) 1 2 0 2 (2k − 20 ) 1 2 In total. 4.

   More detailed analysis shows that the outermost and middle loops are interrelated: the number of repetitions of the innermost part is
       1 + 2 + ... + 2^m = 2^(m+1) − 1,
   where m = ⌊log2 n⌋ is the smallest integer such that 2^(m+1) > n. Since 2^(m+1) − 1 ≈ 2n = O(n) and the innermost part is O(n), this code actually has quadratic complexity O(n^2). But selection of either answer, O(n^2·log n) or O(n^2), is valid.

5. The algorithm suggests the relationship T(n) = T(i) + T(n − 1 − i) + 1, where i is the random value returned by random( n − 1 ). By summing this relationship over all the possible random values i = 0, 1, ..., n − 1, we obtain that on average nT(n) = 2(T(0) + T(1) + ... + T(n − 2) + T(n − 1)) + n. Because (n − 1)T(n − 1) = 2(T(0) + ... + T(n − 2)) + (n − 1), the basic recurrence is as follows: nT(n) − (n − 1)T(n − 1) = 2T(n − 1) + 1, or nT(n) = (n + 1)T(n − 1) + 1, or T(n)/(n + 1) = T(n − 1)/n + 1/(n(n + 1)). Telescoping results in the following system of expressions:

       T(n)/(n + 1) = T(n − 1)/n + 1/(n(n + 1))
       T(n − 1)/n   = T(n − 2)/(n − 1) + 1/((n − 1)n)
       ...
       T(2)/3       = T(1)/2 + 1/(2·3)
       T(1)/2       = T(0)/1 + 1/(1·2)

   Because T(0) = 0, the explicit expression for T(n) is
       T(n)/(n + 1) = 1/(1·2) + 1/(2·3) + ... + 1/(n(n + 1)) = n/(n + 1),
   so that T(n) = n. (See the sketch after solution 6 for an empirical check.)

6. The inner loop has linear complexity cn, but the method called next, goodSort, is of higher complexity n·log n. Because the outer loop is linear in n, the overall complexity of this piece of code is n^2·log n, that is, O(n^2 log n).
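As an empirical companion to solution 5 (our own addition, not part of the original solutions; class and method names are illustrative), the sketch below simulates the stated cost model of myTest, charging one time unit per call to random, and averages the total cost over many runs; the average matches the derived value T(n) = n.

    import java.util.Random;

    // Simulates the cost model of exercise 5 in section 1.5: each call with n > 0
    // charges one time unit (for random) and recurses on i and n - 1 - i,
    // where i is uniformly distributed on 0 .. n-1.
    public class MyTestCost {

        static final Random RNG = new Random();

        static long cost( int n ) {
            if ( n <= 0 ) return 0;
            int i = RNG.nextInt( n );               // plays the role of random( n - 1 )
            return 1 + cost( i ) + cost( n - 1 - i );
        }

        public static void main( String[] args ) {
            int n = 50;
            int trials = 100_000;
            double total = 0.0;
            for ( int t = 0; t < trials; t++ ) total += cost( n );
            System.out.printf( "average cost for n = %d: %.3f (derived value: %d)%n",
                    n, total / trials, n );
        }
    }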
