

Key topics:
* Recurrence Relations
* Solving Recurrence Relations
* The Towers of Hanoi
* Analyzing Recursive Subprograms

Up until now, we have been analyzing non-recursive algorithms, looking at how big-Oh notation may be used to characterize the growth rate of running times for various algorithms. Such algorithm analysis becomes a bit more complicated when we turn our attention to analyzing recursive algorithms. As a result, we augment the analytic tools in our repertoire to help us perform such analyses. One such tool is an understanding of recurrence relations, which we discuss presently.

Recurrence Relations

Recall that a recursive or inductive definition has the following parts:

1. Base Case: the initial condition or basis, which defines the first (or first few) elements of the sequence.
2. Inductive (Recursive) Case: an inductive step in which later terms in the sequence are defined in terms of earlier terms.

The inductive step in a recursive definition can be expressed as a recurrence relation, showing how earlier terms in a sequence relate to later terms. We define this notion more formally below.

A recurrence relation for a sequence a1, a2, a3, ... is a formula that relates each term ak to certain of its predecessors ak-1, ak-2, ..., ak-i, where i is a fixed integer and k is any integer greater than or equal to i. The initial conditions for such a recurrence relation specify the values of a1, a2, a3, ..., ai-1.

A recursive definition is one way of defining a sequence in mathematics (other ways to define sequences include enumerating the sequence or coming up with a formula to express the sequence). Suppose you have an enumerated sequence that satisfies a given recursive definition (or recurrence relation). It is frequently very useful to have a formula for the elements of this sequence in addition to the recurrence relation, especially if you need to determine a very large member of the sequence. Such an explicit formula is called a solution to the recurrence relation. If a member of the sequence can be calculated using a fixed number of elementary operations, we say the solution is a closed form formula. Solving recurrence relations is the key to analyzing recursive subprograms.


Example 1

a) Make a list of all bit strings of length 0, 1, 2, and 3 that do not contain the bit pattern 11. How many such strings are there?
b) Find the number of strings of length ten that do not contain the pattern 11.

a) One way to solve this problem is enumeration, excluding any string that contains the pattern 11:

    length 0: (empty string)             1 string
    length 1: 0, 1                       2 strings
    length 2: 00, 01, 10                 3 strings  (11 is excluded)
    length 3: 000, 001, 010, 100, 101    5 strings  (011, 110, 111 are excluded)

b) To do this for strings of length ten would be ridiculous, because there are 1024 possible strings. There must be a better way. Suppose the number of bit strings of length less than some integer k that do not contain the 11 pattern is known (it just so happens that we do know it for lengths 0, 1, 2, and 3). To find the number of strings of length k that do not contain the bit pattern 11, we can try to express strings that have a certain property in terms of shorter strings with the same property. This is called recursive thinking.

Consider the set of all bit strings of length k that do not contain 11. Any string in the set begins with a 0 or a 1. If the string begins with a 0, the remaining k-1 characters can be any sequence of 0's and 1's, except that the pattern 11 cannot appear. If the string begins with 1, then the second character must be a 0; the remaining k-2 characters can then be any sequence of 0's and 1's that does not contain the pattern 11. So, the set of all strings of length k that do not contain the pattern 11 can be partitioned as follows:

    strings of the form  0 --------    (k-1 bits without 11)
    strings of the form  10 -------    (k-2 bits without 11)

The number of elements in the entire set then equals the sum of the elements of the two subsets:

    the number of strings     the number of strings      the number of strings
    of length k that do   =   of length k-1 that do  +   of length k-2 that do
    not contain 11            not contain 11             not contain 11

Or, in other words (as a recurrence relation):

    1) s0 = 1, s1 = 2
    2) sk = sk-1 + sk-2

If we calculate this recurrence up to s10, we get 144. It works like magic. No, seriously.

In Death, as in Life… Bring on the Recurrence Relation

Many problems in the biological, management, and social sciences lead to sequences that satisfy recurrence relations (remember Fibonacci's rabbits?). Here are the famous Lanchester Equations of Combat: Two armies engage in combat, and each army counts the number of soldiers still in combat at the end of the day. Let a0 and b0 denote the number of soldiers in the first and second armies before combat begins, and let an and bn denote the number of men in the two armies at the end of the nth day. Suppose it is known that the decrease in the number of soldiers in each army is proportional to the number of soldiers in the other army at the beginning of each day. Specifically, we have constants A and B such that:

    an-1 - an = A * bn-1
    bn-1 - bn = B * an-1

These constants measure the effectiveness of the weapons of the different armies. Thus, an-1 - an represents the number of soldiers lost by the first army on the nth day. Similarly, bn-1 - bn represents the number of soldiers lost by the second army on the nth day, and can be computed using the recurrence relations for a and b.

Solving Recurrence Relations Using Repeated Substitution

One method of solving recurrence relations is called repeated substitution. It works like this: you take the recurrence relation and use it to enumerate elements of the sequence back from k. To find the next element back from k, we substitute k-1 in for k in the recurrence relation.

Example 2

Consider the following recurrence relation:

    a0 = 1
    ak = ak-1 + 2

Problem: What is the formula for this recurrence relation?

We start by substituting k-1 into the recursive definition above to obtain: ak-1 = ak-2 + 2.

Then, we substitute this value for ak-1 back into the original recurrence relation equation. We do this because we want to find an equation for ak in which we do not have to evaluate previous members of the sequence to get the one we want:

    ak = ak-1 + 2 = (ak-2 + 2) + 2 = ak-2 + 4

Then, we substitute k-2 for k in the original equation to get ak-2 = ak-3 + 2, and substitute this value for ak-2 in the previous expression:

    ak = ak-2 + 4 = (ak-3 + 2) + 4 = ak-3 + 6
    ak = ak-3 + 6 = (ak-4 + 2) + 6 = ak-4 + 8
    ak = ak-4 + 8, ...

What we are looking for is a lot of different representations of the formula for ak based on previous elements in the sequence, so if we enumerate back from k, we will hopefully find a pattern. The pattern that emerges is obvious:

    ak = ak-i + 2i

Now, we need to get rid of the ak-i on the right-hand side of the equation to get a closed form formula. Using the definition of the recurrence relation, we set i = k, yielding:

    ak = a0 + 2k
    ak = 1 + 2k

To substantiate this substitution (setting i = k), we would have to do a proof by induction to show that the recurrence relation represents the same sequence as the closed form formula. Since there's no better time than the present, we present this proof here.

Proof: Let P(n) denote the following: if a1, a2, a3, ... is the sequence defined by

    1. a0 = 1
    2. ak = ak-1 + 2

then an = 1 + 2n for all n >= 1.

Base Case: prove that P(1) is true. Using the definition of the recurrence relation, we have a1 = a0 + 2 = 1 + 2 = 3. We then show that we get the same answer using the formula: for n = 1, an = 1 + 2n = 1 + 2 = 3. So, the base case holds.

Inductive Case: The induction hypothesis is to assume P(k): ak = 1 + 2k. We then need to show P(k+1): ak+1 = 1 + 2(k + 1).

    ak+1 = a((k+1)-1) + 2     from the recurrence relation (this is a "given")
    ak+1 = ak + 2             algebra
    ak+1 = 1 + 2k + 2         substitution from inductive hypothesis (ak = 1 + 2k)
    ak+1 = 1 + 2(k + 1)       algebra

Since P(k+1) is true when P(k) is true, and we have shown that P(1) holds, we have shown inductively that P(n) is true for all n >= 1. Thus, we have shown the desired result.

Example 3

Let Kn be a complete graph on n vertices (i.e., between any two of the n vertices lies an edge). Define a recurrence relation for the number of edges in Kn, and then solve it by enumerating and guessing a formula.

Solution: By inspection, we can see that K1 = 0, since that graph has no edges. Now, we conjecture that adding another vertex to the graph will require us to add edges from each of the existing vertices in the graph to the new vertex we added. In other words, if there were already n vertices in the graph, when we add the (n + 1)st vertex, we must add n new edges to the graph to make it a complete graph. This gives us the recurrence relation:

    Kn+1 = Kn + n    or, alternatively,    Kn = Kn-1 + (n - 1)

Enumerating the first few values of this sequence yields:

    K2 = K1 + 1 = 0 + 1 = 1
    K3 = K2 + 2 = 1 + 2 = 3
    K4 = K3 + 3 = 3 + 3 = 6
    K5 = K4 + 4 = 6 + 4 = 10

These numbers seem to match the graphs given. So far, so good. Now, we just need to generalize a formula from this sequence. From the recurrence relation, we can see that this sequence may be given by the formula (for all n >= 1):

    Kn = 0 + 1 + 2 + ... + (n - 1) = sum of i from i = 0 to n-1 = n(n - 1)/2

We don't give a full inductive proof here that this formula is, in fact, correct, but the curious reader can quickly prove it themselves to get a little more comfortable with using induction.

Example 4

Consider the Tower of Hanoi puzzle (which you probably saw already in CS106B/X), which is defined by three pegs and a series of disks of various sizes that may be placed on the pegs. The disks all start by being placed in ascending order by size on the first peg. This configuration is depicted below.

The object of the Tower of Hanoi is to move all the disks from the first peg to the third peg while always following these rules:

1. Only one disk may be moved at a time (specifically, the top disk on any peg).
2. At no time can a larger disk be placed on a smaller one.

Thus, the goal configuration of the puzzle is all the disks stacked in ascending order by size on the third peg.

The pseudocode for a recursive algorithm that can solve this problem is given below, where k is the number of disks to move, Beg is the peg to move the disks from, Aux is the peg that may be used as "auxiliary storage" during the move, and End is the peg that the disks must ultimately be moved to.

    Tower(k, Beg, Aux, End) {
        if (k == 1)
            move disk from Beg to End;
        else {
            Tower(k-1, Beg, End, Aux);
            move disk from Beg to End;
            Tower(k-1, Aux, Beg, End);
        }
    }

Let Hn denote the number of moves needed to solve the Tower of Hanoi puzzle with n disks. Moving the n - 1 disks from the beginning peg to the auxiliary peg requires Hn-1 moves. Then, we use one move to transfer the bottom disk from the beginning peg to the ending peg. Finally, the n - 1 disks on the auxiliary peg are moved to the ending peg, requiring an additional Hn-1 moves. This gives us the recurrence relation:

    Hn = 2Hn-1 + 1

The base case for this relation is H0 = 0, since if there are no disks, there are no moves required to have "all" the disks on the third peg. To determine a closed form formula where there is no H term on the right side of the equation, we will try repeated substitution:

    Hn = 2Hn-1 + 1

       = 2(2Hn-2 + 1) + 1 = 2^2 * Hn-2 + 2 + 1
       = 2^2 (2Hn-3 + 1) + 2 + 1 = 2^3 * Hn-3 + 2^2 + 2 + 1

The pattern to this recurrence relation is:

    Hn = (2^i * Hn-i) + 2^(i-1) + 2^(i-2) + ... + 2^2 + 2^1 + 2^0

We can set i = n (this will require an inductive proof):

    Hn = (2^n * H0) + 2^(n-1) + 2^(n-2) + ... + 2^2 + 2^1 + 1

Substituting H0 = 0 yields the formula:

    Hn = 2^(n-1) + 2^(n-2) + ... + 2^2 + 2^1 + 1

This is a well known summation that can easily be proven by induction to be equal to 2^n - 1. Therefore, the number of moves required for n disks is 2^n - 1.

There is an ancient myth about this puzzle which says that there is a tower in Hanoi where monks are transferring 64 gold disks from one peg to another according to the rules of the puzzle. They take one second to move one disk, and the myth says that the world will end when they finish the puzzle. A quick calculation to see how much time this will take reveals:

    2^64 - 1 = 18,446,744,073,709,551,615 seconds

So, the monks should be busy for about 500 billion years to solve the puzzle. Lucky for us. Not so lucky for the monks, who probably aren't having much fun moving disks from peg to peg.

Example 5

Find a closed form solution for each of the following recurrence relations. For brevity, we omit the inductive proofs.

a) Recurrence relation:

    a0 = 2
    an = 3an-1

Solution:

    an = 3an-1
       = 3(3an-2) = 3^2 an-2
       = 3^2 (3an-3) = 3^3 an-3
       ...
       = 3^(i-1) (3an-i) = 3^i an-i

Set i = n, yielding:

    an = 3^n a0
    an = 2 * 3^n


b) Recurrence relation:

    a0 = 1
    an = 2an-1 - 1

Solution:

    an = 2(2an-2 - 1) - 1 = 2^2 an-2 - 2 - 1
       = 2^2 (2an-3 - 1) - 2 - 1 = 2^3 an-3 - 2^2 - 2 - 1
       ...
       = 2^i an-i - 2^(i-1) - 2^(i-2) - ... - 2^0

Set i = n, yielding:

    an = 2^n a0 - 2^(n-1) - 2^(n-2) - ... - 2^0
       = 2^n - (2^n - 1)        (Note: 2^(n-1) + 2^(n-2) + ... + 1 = 2^n - 1)
    an = 1

Note that while this recurrence relation looks deceptively similar to the recurrence relation in the Tower of Hanoi problem, the solution here gives a vastly different result.

c) Recurrence relation:

    a0 = 5
    an = nan-1

Solution:

    an = n(n-1)an-2
       = n(n-1)(n-2)an-3
       ...
       = n(n-1)(n-2)...(n-i+1)an-i

Set i = n, yielding:

    an = n(n-1)(n-2)...(1) a0 = n! a0
    an = 5 * n!

Analyzing Recursive Programs

Solving recurrence relations is the key to analyzing recursive programs. We need a formula that represents the number of recursive calls and the work done in each call as the program converges to the base case. We will then use this formula to determine an upper bound on the program's worst-case performance. In terms of the analysis of algorithms, a recurrence relation can be viewed as follows:

A recurrence relation expresses the running time of a recursive algorithm. This includes how many recursive calls are generated at each level of the recursion, how much of the problem is solved by each recursive call, and how much work is done at each level. Say we have some unknown running time for function F that is defined in terms of F's parameter n (since this parameter controls the number of recursive calls). We call this TF(n). The value of TF(n) is established by an inductive definition based on the size of argument n. There are two cases, examined below:

1. n is sufficiently small that no recursive calls are made. This case corresponds to a base case in an inductive definition of TF(n).
2. n is sufficiently large that recursive calls are made. This case corresponds to an inductive step in the definition of TF(n).

The recurrence relation for TF(n) is derived by examining the code of F and defining the running times for the two cases stated above. However, we assume that whatever recursive calls F makes, they will be made with smaller parameter values. We then solve the recurrence relation to derive a big-Oh expression for F.

Example 7

Consider the following function to compute factorials:

    int factorial(int n) {
    1)    if (n <= 1)
    2)        return(1);
    3)    else return(n * factorial(n-1));
    }

We will use T(n) to represent the unknown running time of this function with n as the size of the parameter. For the base case of the inductive definition of T(n), we will take n to equal 1. When n = 1, we execute only lines 1 and 2, each of which has a constant running time (i.e., O(1)), which we will just designate as some time value a, since no recursive calls are made in this case. When n > 1, the condition of line 1 is false, so we execute lines 1 and 3. Line 1 takes some constant time value b, and line 3 takes T(n-1) since we are making a recursive call: if the running time of factorial with parameter value n is T(n), then the running time for a call to factorial with parameter value n - 1 will be T(n-1). This gives the following recurrence relation:

    base case:  T(1) = a
    induction:  T(n) = b + T(n-1), for n > 1

Now we must solve this recurrence relation to come up with a closed form formula. We can try just enumerating a few values to see if there is a pattern:

    T(1) = a
    T(2) = b + T(1) = b + a
    T(3) = b + T(2) = b + (b + a) = a + 2b
    T(4) = b + T(3) = b + (a + 2b) = a + 3b
    ...
    T(n) = a + (n-1)b    for all n >= 1

To be complete, we need to do an inductive proof to show that the running time formula T(n) = a + (n-1)b is true for all n >= 1.

Proof: Let P(n) denote that the running time of factorial is T(n) = a + (n-1)b. Recall that the recurrence relation definition gives us: T(1) = a, and T(n) = b + T(n-1) for n > 1.

Base case: prove that P(1) is true. T(1) = a + ((1 - 1) * b) = a + (0 * b) = a. This equation holds, since the base case of our inductive definition states that T(1) = a.

Inductive case: the inductive hypothesis is to assume P(k): T(k) = a + (k-1)b, and show P(k+1): T(k+1) = a + kb. We know from the recurrence relation definition that T(k+1) = b + T(k). We use the inductive hypothesis to substitute for T(k). This gives us:

    T(k+1) = b + T(k) = b + a + (k-1)b = b + a + kb - b = a + kb

P(k+1) holds when P(k) holds, and P(1) holds; therefore P(n) is true for all n >= 1.

Now that we know the closed form formula for this recurrence relation, we can use it to determine the complexity: T(n) = a + (n-1)b = a + bn - b, which has a complexity of O(n). This makes sense: all it says is that to compute n!, we make n calls to factorial, each of which requires O(1) time.

Example 8

Consider the following recursive selection sort algorithm:

    void SelectionSort(int A[], int i, int n) {
        int j, small, temp;
    1)  if (i < n) {
    2)      small = i;
    3)      for (j = i + 1; j <= n; j++) {
    4)          if (A[j] < A[small])
    5)              small = j;
            }
    6)      temp = A[small];
    7)      A[small] = A[i];
    8)      A[i] = temp;
    9)      SelectionSort(A, i + 1, n);
        }
    }

Parameter n indicates the size of the array; parameter i tells us how much of the array is left to sort, specifically A[i..n]. So, a call to SelectionSort(A, 1, n) will sort the entire array through recursive calls. Note that the size of the array to be sorted on each recursive call is equal to n - i + 1. We will denote this value as m, the size of the array during any particular recursive call.

We now develop a recurrence relation for the running time of this function. There is one base case (m = 1), when i = n. In this case, only line 1 is executed, taking some constant amount of time which we will call a. Note that m = 0 is not the base case, because the recursion converges down to a list with one element. The inductive case is for m > 1: this is when recursive calls are made. Lines 2, 6, 7, and 8 each take a constant amount of time. The for loop of lines 3, 4, and 5 will execute m times (where m = n - i + 1), so a recursive call to this function will be dominated by the time it takes to execute the for loop m times, which we shall designate O(m). The time for the recursive call of line 9 is T(m-1). So, the inductive definition of recursive SelectionSort is:

    T(1) = a
    T(m) = T(m-1) + O(m)

To solve this recurrence relation, we first get rid of the big-Oh expression by substituting the definition of big-Oh (f(n) = O(g(n)) if f(n) <= C * g(n)), so we can substitute C*m for O(m):

    T(1) = a
    T(m) = T(m-1) + C*m

Now, we can either try repeated substitution or just enumerate a few cases to look for a pattern.

Let's try repeated substitution:

    T(m) = T(m-1) + Cm
         = T(m-2) + 2Cm - C      because T(m-1) = T(m-2) + C(m-1)
         = T(m-3) + 3Cm - 3C     because T(m-2) = T(m-3) + C(m-2)
         = T(m-4) + 4Cm - 6C     because T(m-3) = T(m-4) + C(m-3)
         = T(m-5) + 5Cm - 10C    because T(m-4) = T(m-5) + C(m-4)
         ...
         = T(m-j) + jCm - (j(j-1)/2)C

If we were to continue the repeated substitution down to the last possible substitution, we would want to stop at T(1), because T(0) is really not the base case that the recursion converges on. To get a closed form formula, we therefore let j = m - 1, which leaves T(1) on the right-hand side. (Note that we would have to do an inductive proof later to allow us to do this substitution, but for now):

    T(m) = T(1) + (m-1)Cm - ((m-1)(m-2)/2)C
         = a + m^2 C - Cm - (m^2 C - 3Cm + 2C)/2
         = a + (2m^2 C - 2Cm - m^2 C + 3Cm - 2C)/2
         = a + (m^2 C + Cm - 2C)/2

So, finally we have a closed form formula: T(m) = a + (m^2 C + Cm - 2C)/2. The complexity of this formula is O(m^2), which is the same complexity as iterative selection sort, so doing it recursively did not save us any time. The above recurrence relation generally occurs with any recursive subprogram that has a single loop of some form and then does a recursive call (usually where there is tail recursion involved).

Example 9

MergeSort is an efficient sorting algorithm that uses a "divide and conquer" strategy. Suppose that we have a sorted array A with r elements and another sorted array B with s elements. Merging is an operation that combines the elements of A with the elements of B into one large sorted array C with (r + s) elements. MergeSort is based on this merging algorithm. What happens is we set up a recursive function that divides a list into two halves, sorts the two halves, and merges them together. That is, the problem of sorting a list is reduced to the problem of sorting two smaller lists and then merging those smaller sorted lists. The question is: how do you sort the two halves? We sort them by using MergeSort recursively. The pseudocode for MergeSort is given below:

    MergeSort(L)
    1) if (length of L > 1) {
    2)     Split list into first half and second half
    3)     MergeSort(first half)
    4)     MergeSort(second half)
    5)     Merge first half and second half into sorted list
       }

Note: we will also assume that we have Split and Merge functions that each run in O(n) time.

The recursive calls continue dividing the list into halves until each half contains only one element; obviously, a list of one element is sorted. The algorithm then merges these smaller halves into larger sorted halves until we have one large sorted list. For example, suppose the list contains the following integers:

    7 9 4 2 3 6 1 8

The first thing that happens is we split the list into two halves (we'll refer to this call as MergeSort #0, the original call):

    7 9 4 2    3 6 1 8

Then we call MergeSort recursively on the first half, which again splits the list into two halves (MergeSort #1):

    7 9    4 2

We call MergeSort recursively again on the first half, which splits the list into two parts again (MergeSort #2):

    7    9

When we call MergeSort the next time, on the list containing just 7, the if condition is false and we return up a level of recursion to MergeSort #2. We execute the next line in MergeSort #2, MergeSort(second half), which executes on the list containing just 9. Again the if condition is false, and we return back up to MergeSort #2. We execute the next line, which merges the "sorted" lists containing 7 and 9, producing:

    7 9

We have completed MergeSort #2, and we return up to recursive call MergeSort #1, where we find we are on the list containing 4 and 2. We execute the next line in MergeSort #1, calling MergeSort(second half) on the list containing 4 and 2, and end up with separate lists of 4 and 2:

    4    2

We end up merging these two sorted lists into one list containing 2 and 4, just as we did with the 7 and 9, producing:

    2 4

Once that work is complete, we return to MergeSort #1 and execute the merge on the two lists, each with two elements. This produces the sorted list:

    2 4 7 9

We return to recursive call MergeSort #0 (the original call) and execute the next line: MergeSort(second half). This sends us through another cycle of recursion just like what we did for the first part of the list. When we finish, we have a second sorted list:

    1 3 6 8

The last line of MergeSort #0 merges these two sorted lists of four elements into one sorted list:

    1 2 3 4 6 7 8 9

To analyze the complexity of MergeSort, we need to define a recurrence relation. The base case is when we have a list of 1 element: only line 1 of the function is executed, so the base case takes constant time, O(1). If the test of line 1 succeeds (the list has more than one element), we must execute lines 2-5. The time spent in this function when the length of the list > 1 is the sum of the following:

1) O(1) for the test on line 1
2) O(n) for the split function call on line 2
3) T(n/2) for the recursive call on line 3
4) T(n/2) for the recursive call on line 4
5) O(n) for the merge function call on line 5

If we drop the O(1)'s and apply the summation rule, we get 2T(n/2) + O(n). If we substitute constants in place of the big-Oh notation, we obtain:

    T(1) = a
    T(n) = 2T(n/2) + bn

To solve this recurrence relation, we will enumerate a few values. We will stick to n's that are powers of two so things divide evenly:

    T(2)  = 2T(1) + 2b  = 2a + 2b
    T(4)  = 2T(2) + 4b  = 2(2a + 2b) + 4b   = 4a + 8b
    T(8)  = 2T(4) + 8b  = 2(4a + 8b) + 8b   = 8a + 24b
    T(16) = 2T(8) + 16b = 2(8a + 24b) + 16b = 16a + 64b

There is obviously a pattern, but it is not as easy to represent as the others have been. Note the following relationships:

    value of n:        2   4   8    16
    coefficient of b:  2   8   24   64
    ratio:             1   2   3    4

So, it appears that the coefficient of b is n times another factor that grows by 1 each time n doubles. That factor is the "ratio" row above, and the ratio is log2 n, because log2 2 = 1, log2 4 = 2, log2 8 = 3, etc. Our "guess" for the solution of this recurrence relation is therefore T(n) = an + bn log2 n.

We could have used repeated substitution, in which case we would have the following formula:

    T(n) = 2^i T(n/2^i) + ibn

Now, if we let i = log2 n, we end up with:

    n*T(1) + bn log2 n = an + bn log2 n        (because 2^(log2 n) = n)

Of course, we would have to do an inductive proof to show that this formula does indeed hold for all n >= 1. Finally, note that the complexity of an + bn log2 n is O(n log n), which is considerably faster than Selection Sort.

A Theorem in the Home or Office is a Good Thing…

Fortunately, there is a handy theorem that is often useful in analyzing the complexity of general "divide and conquer" algorithms. You will often see the recurrence relation above in "divide and conquer" sorts. Of course, the theorem itself would have to be proved, but we leave the proof as an exercise for the reader.

Theorem 1 (Master Theorem): Let f be an increasing function that satisfies the recurrence relation:

    f(n) = a f(n/b) + cn^d

whenever n = b^k, where k is a positive integer, a >= 1, b is an integer greater than 1, c is a positive real number, and d is a non-negative real number. Then:

    f(n) = O(n^d)           if a < b^d
           O(n^d log n)     if a = b^d
           O(n^(log_b a))   if a > b^d

Example 10

Problem: Use Theorem 1 above to find the big-Oh running time of MergeSort (from Example 9).

Solution: In Example 9, we are given the recurrence relation for MergeSort as:

    T(n) = 2T(n/2) + xn

where we have simply replaced the positive constant b in the original recurrence relation with the constant x, so that we do not confuse variable names below. In order to apply Theorem 1, we must show that this recurrence relation fits the functional form for f(n) in the theorem. We choose: a = 2, b = 2, c = x, and d = 1. With these constants, the function f(n) in the theorem becomes:

    f(n) = 2 f(n/2) + xn^1

Note that f(n) now matches the function T(n) given earlier, making Theorem 1 applicable to the recurrence relation T(n). Given our choice of constants, we know that a = b^d, since 2 = 2^1. Thus, Theorem 1 tells us the complexity of our recurrence relation is O(n^d log n). Since we chose d = 1, we obtain the final complexity result O(n log n), which is the same complexity value we obtained via repeated substitution.

It's interesting to consider the three cases of the Master Theorem, which are based on the relationship between a and b^d. Here are some examples to consider:

* a < b^d: f(n) = 2 f(n/2) + xn^2. In this example, 2 < 2^2; the xn^2 term is growing faster than the 2 f(n/2) term and thus dominates, so f(n) = O(n^2).
* a = b^d: f(n) = 2 f(n/2) + xn^1. In this example, 2 = 2^1; the two terms grow together, so f(n) = O(n log n).
* a > b^d: f(n) = 8 f(n/2) + xn^2. In this example, 8 > 2^2; the 8 f(n/2) term is growing faster than the xn^2 term and thus dominates, so f(n) = O(n^(log2 8)) = O(n^3).

Example 11

Problem: Use repeated substitution to find the time complexity of the function recurse. Verify your result using induction.

    /* Assume only non-negative even values of n are passed in */
    int recurse(int n) {
        int i, total;
    1)  if (n == 0)
            return 1;
    2)  total = 0;
    3)  for (i = 0; i < 4; i++) {
            total += recurse(n-2);
        }
    4)  return total;
    }

Solution:

* When n = 0 (the base case), this function only executes line 1, doing O(1) work. We will note this constant amount of work as a.
* When n >= 2 (the recursive case), this function does O(1) work in executing lines 1, 2, and 4, which we will denote by b, and also makes four recursive calls with parameter value n-2 in the loop body (line 3).

We use these two points to perform the recurrence relation analysis below. From the base case, we have: T0 = a. From the recursive case, we have: Tn = 4Tn-2 + b.

We apply repeated substitution:

    Tn = 4Tn-2 + b
    Tn = 4(4Tn-4 + b) + b = 4^2 Tn-4 + 4b + b
    Tn = 4^2 (4Tn-6 + b) + 4b + b = 4^3 Tn-6 + 4^2 b + 4b + b
    ...
    Tn = 4^i Tn-2i + 4^(i-1) b + 4^(i-2) b + ... + 4^0 b

Note that 4^k = 2^(2k), yielding:

    Tn = 2^(2i) Tn-2i + 2^(2(i-1)) b + 2^(2(i-2)) b + ... + 2^0 b

Set i = n/2, yielding:

    Tn = 2^n T0 + 2^(n-2) b + 2^(n-4) b + ... + 2^0 b
       = 2^n a + (2^(n-2) + 2^(n-4) + ... + 2^0) b

So, we need to compute the closed form for 2^(n-2) + 2^(n-4) + ... + 2^0, which is (2^n - 1)/3. Here is a brief digression on how we compute this closed form formula. Let x = 2^(n-2) + 2^(n-4) + ... + 2^0. Then:

    2^2 x = 4x = 2^2 (2^(n-2) + 2^(n-4) + ... + 2^0) = 2^n + 2^(n-2) + ... + 2^2
    4x - x = 3x = (2^n + 2^(n-2) + ... + 2^2) - (2^(n-2) + 2^(n-4) + ... + 2^0)
    3x = 2^n - 2^0 = 2^n - 1
    x = (2^n - 1)/3 = 2^(n-2) + 2^(n-4) + ... + 2^0

Substituting this closed form for the sum back into the equation for Tn yields:

    Tn = 2^n a + (2^(n-2) + 2^(n-4) + ... + 2^0) b
    Tn = 2^n a + ((2^n - 1)/3) b

Now, we calculate the big-Oh complexity for the function. From the formula Tn = 2^n a + ((2^n - 1)/3) b, we see that the algorithm has time complexity:

    O(2^n a) + O(2^n b) = O(2^n) + O(2^n) = O(2^n)

Finally, we verify this result using induction. Let P(n) denote Tn = 2^n a + ((2^n - 1)/3) b.

Base case: Show P(0). T0 = 2^0 a + ((2^0 - 1)/3) b = a + ((1 - 1)/3) b = a. This is the initial condition.

Inductive case: Assume the inductive hypothesis P(n): Tn = 2^n a + ((2^n - 1)/3) b. Show P(n+2): Tn+2 = 2^(n+2) a + ((2^(n+2) - 1)/3) b. (Note: we only consider even values of n.)

    Tn+2 = 4Tn + b                                    by definition of the recurrence
         = 4(2^n a + ((2^n - 1)/3) b) + b            substituting Tn with the inductive hypothesis
         = 2^2 (2^n a + ((2^n - 1)/3) b) + b         algebra: 4 = 2^2
         = 2^(n+2) a + ((2^(n+2) - 2^2)/3) b + b     algebra: multiply in the 2^2
         = 2^(n+2) a + ((2^(n+2) - 3 - 1)/3) b + b   algebra: -2^2 = -3 - 1
         = 2^(n+2) a + ((2^(n+2) - 1)/3) b - b + b   algebra: (-3/3)b = -b
         = 2^(n+2) a + ((2^(n+2) - 1)/3) b           this is the desired result

QED.

Bibliography

A. Aho, J. D. Ullman. Foundations of Computer Science. New York: W.H. Freeman, 1992.
T. Cormen, C. Leiserson, R. Rivest. Introduction to Algorithms. New York: McGraw-Hill, 1991.

* Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution 2.5 License.

