
Recurrence Relations

Recurrences
The expression:
T(n) = c               if n = 1
T(n) = 2T(n/2) + cn    if n > 1
is a recurrence.
Recurrence: an equation that describes a function in
terms of its value on smaller inputs
Recurrence Examples

s(n) = 0               if n = 0
s(n) = c + s(n-1)      if n > 0

s(n) = 0               if n = 0
s(n) = n + s(n-1)      if n > 0

T(n) = c               if n = 1
T(n) = 2T(n/2) + c     if n > 1

T(n) = c               if n = 1
T(n) = aT(n/b) + cn    if n > 1
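To make these definitions concrete, here is a small sketch of my own (not from the slides) that evaluates the example recurrences by translating each one into a recursive Python function, taking c = 1 and reading n/2 as floor(n/2):

# My own sketch: the recurrences above written as recursive functions,
# with the constant c taken to be 1 and n/2 read as floor(n/2).

def s_plus_c(n):        # s(n) = 0 if n = 0, else c + s(n-1)   ->  s(n) = c*n
    return 0 if n == 0 else 1 + s_plus_c(n - 1)

def s_plus_n(n):        # s(n) = 0 if n = 0, else n + s(n-1)   ->  s(n) = n(n+1)/2
    return 0 if n == 0 else n + s_plus_n(n - 1)

def t_halving(n):       # T(n) = c if n = 1, else 2*T(n/2) + c
    return 1 if n == 1 else 2 * t_halving(n // 2) + 1

print(s_plus_c(10), s_plus_n(10), t_halving(16))   # 10 55 31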
Solving Recurrences
Substitution method
Iteration method
Master method
Solving Recurrences
The substitution method
The "making a good guess" method:
Guess the form of the answer, then use induction to
find the constants and show that the solution works.
Examples:
T(n) = 2T(n/2) + Θ(n)  →  T(n) = Θ(n lg n)
T(n) = 2T(n/2) + n     →  T(n) = Θ(n lg n)
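One way to arrive at such a guess, as a sketch of my own (the base case T(1) = 1 is an assumption, since the slide leaves it unspecified), is to evaluate the recurrence numerically and watch the ratio T(n) / (n lg n):

import math
from functools import lru_cache

# T(n) = 2*T(floor(n/2)) + n, with an assumed base case T(1) = 1.
@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

# The ratio T(n) / (n lg n) settles near a constant, which suggests
# the guess T(n) = Theta(n lg n).
for n in [2 ** k for k in range(2, 21, 3)]:
    print(n, round(T(n) / (n * math.log2(n)), 3))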
Substitution method

Guess the form of the solution.
Use mathematical induction to find the constants and show that the solution works.
The substitution method can be used to establish either upper or lower bounds on a recurrence.
An example (Substitution method)
T(n) = 2T(floor(n/2)) + n
We guess that the solution is T(n) = O(n lg n),
i.e. we show that T(n) ≤ c n lg n for some constant c > 0 and all n ≥ m.
Assume that this bound holds for floor(n/2). Then we get
T(n) ≤ 2(c · floor(n/2) · lg(floor(n/2))) + n
     ≤ c n lg(n/2) + n
     = c n lg n - c n lg 2 + n
     = c n lg n - c n + n
     ≤ c n lg n,
where the last step holds as long as c ≥ 1.
Boundary conditions:
Suppose T(1) = 1 is the sole boundary condition of the recurrence.
Then, for n = 1, the bound T(n) ≤ c n lg n yields
T(1) ≤ c · 1 · lg 1 = 0, which is at odds with T(1) = 1.
Thus the base case of our inductive proof fails to hold.
To overcome this difficulty, we can take advantage of asymptotic notation,
which only requires us to prove T(n) ≤ c n lg n for n ≥ m.
The idea is to remove the difficult boundary condition T(1) = 1
from consideration.
Thus we can replace T(1) by T(2) as the base case in the
inductive proof, letting m = 2.
Contd.
From the recurrence, with T(1) = 1, we get T(2) = 4.
We require T(2) ≤ c · 2 lg 2.
It is clear that any choice of c ≥ 2 suffices for this base case.
Are we done?
Have we proved the case for n = 3?
Have we proved that T(3) ≤ c · 3 lg 3?
No. Since floor(3/2) = 1, and the bound does not hold for n = 1,
the induction does not apply to n = 3.
From the recurrence, with T(1) = 1, we get T(3) = 5.
The inductive proof that T(n) ≤ c n lg n for some constant c ≥ 1
can now be completed by choosing c large enough that
T(3) ≤ c · 3 lg 3 also holds.
It is clear that any choice of c ≥ 2 is sufficient for this to hold.
Thus we can conclude that T(n) ≤ c n lg n for any c ≥ 2 and n ≥ 2.
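As an illustrative check (my own sketch, not part of the slides), the recurrence can be evaluated directly and the bound T(n) ≤ 2 n lg n verified over a range of n ≥ 2:

import math
from functools import lru_cache

# The recurrence from the example: T(1) = 1, T(n) = 2*T(floor(n/2)) + n for n > 1.
@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + n

# Check the bound T(n) <= c * n * lg n with c = 2 for 2 <= n < 10000,
# matching the conclusion of the inductive proof.
c = 2
for n in range(2, 10_000):
    assert T(n) <= c * n * math.log2(n), f"bound fails at n = {n}"
print("T(n) <= 2 n lg n holds for 2 <= n < 10000")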
Solving Recurrences using Recursion Tree Method

Here, while solving recurrences, we divide the problem into subproblems of equal size.
For example, T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1 and f(n) is a given function.
f(n) is the cost of splitting the problem or combining the results of the subproblems.
[Figure: a generic recursion tree whose root costs f(n) and whose children are the subproblems T(n/b), ..., T(n/b).]
1) T(n) = 2T(n/2) + n

The recursion tree for this recurrence is:
[Recursion tree: the root costs n; level 1 has 2 nodes costing n/2 each; level 2 has 4 nodes costing n/2^2 each; and so on, down to leaves of cost 1 after about log2 n levels.]
When we add the values across the levels of the recursion tree, we get a
value of n for every level.

We have n + n + n + ... (log2 n times)
= n (1 + 1 + 1 + ..., log2 n times)
= n · log2 n
= Θ(n log n)
Thus T(n) = Θ(n log n).
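A small sketch of my own, assuming n is a power of two, that lists the per-level costs of this recursion tree:

# Per-level cost of the recursion tree for T(n) = 2T(n/2) + n,
# taking n to be a power of two for simplicity.
def level_costs(n):
    costs = []
    nodes, size = 1, n
    while size >= 1:
        costs.append(nodes * size)   # `nodes` subproblems, each contributing `size`
        nodes, size = nodes * 2, size // 2
    return costs

costs = level_costs(64)
print(costs)        # every level sums to 64, i.e. to n
print(sum(costs))   # 448 = 64 * 7, i.e. n * (log2 n + 1) = Theta(n log n)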
II.
Given: T(n) = 2T(n/2) + 1
Solution: The recursion tree for the above recurrence is:
[Recursion tree: the root costs 1; level 1 has 2 nodes costing 1 each (level sum 2); level 2 has 4 nodes (level sum 4); and so on, for about log n levels.]
Now we add up the costs over all levels of the recursion tree to determine
the cost of the entire tree.
We get a series like
1 + 2 + 2^2 + 2^3 + ... (log n terms), which is a G.P.
[Using the formula for the sum of terms of a G.P.:
a + ar + ar^2 + ar^3 + ... + ar^(n-1) = a(r^n - 1)/(r - 1) ]
= 1 · (2^(log n) - 1)/(2 - 1)
= n - 1
= Θ(n)   (neglecting the lower order term)
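As a quick check (my own sketch, assuming the base case T(1) = 1, which the slide does not state), evaluating the recurrence directly shows the linear growth:

from functools import lru_cache

# The recurrence of Example II, with an assumed base case T(1) = 1
# (the slide does not state one explicitly).
@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 2 * T(n // 2) + 1

# T(n) stays within a constant factor of n, consistent with T(n) = Theta(n).
for n in [2 ** k for k in range(1, 16)]:
    print(n, T(n), round(T(n) / n, 3))   # the ratio stays bounded (it approaches 2)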
The Master Theorem
Given: a divide and conquer algorithm
An algorithm that divides a problem of size n into a subproblems, each of size n/b.
Let the cost of each stage (i.e., the work to divide the problem and combine the solved subproblems) be described by the function f(n).
Then the Master Theorem gives us a cookbook for the algorithm's running time:
The Master Theorem
if T(n) = aT(n/b) + f(n), then

T(n) = Θ(n^(log_b a))          if f(n) = O(n^(log_b a - ε)) for some constant ε > 0

T(n) = Θ(n^(log_b a) lg n)     if f(n) = Θ(n^(log_b a))

T(n) = Θ(f(n))                 if f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0,
                               and a f(n/b) ≤ c f(n) for some constant c < 1
                               and all sufficiently large n
Examples

Ex. T(n) = 4T(n/2) + n
a = 4, b = 2, so n^(log_b a) = n^2; f(n) = n.
Case 1: f(n) = O(n^(2-ε)) for ε = 1.
T(n) = Θ(n^2).

Ex. T(n) = 4T(n/2) + n^2
a = 4, b = 2, so n^(log_b a) = n^2; f(n) = n^2.
Case 2: f(n) = Θ(n^2 lg^0 n), that is, k = 0
(using the extended case 2: if f(n) = Θ(n^(log_b a) lg^k n), then T(n) = Θ(n^(log_b a) lg^(k+1) n)).
T(n) = Θ(n^2 lg n).
Examples
Ex. T(n) = 4T(n/2) + n^3
a = 4, b = 2, so n^(log_b a) = n^2; f(n) = n^3.
Case 3: f(n) = Ω(n^(2+ε)) for ε = 1,
and 4(n/2)^3 ≤ c n^3 (regularity condition) for c = 1/2.
T(n) = Θ(n^3).

Ex. T(n) = 4T(n/2) + n^2/lg n
a = 4, b = 2, so n^(log_b a) = n^2; f(n) = n^2/lg n.
Master method does not apply. In particular, for every constant ε > 0, we have n^ε = ω(lg n).
When the Master Theorem cannot be applied
T(n) = 2T(n/2) + n log n
T(n) = 2T(n/2) + n / log n
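As a rough aid (my own sketch: the helper master_case and the restriction to driving functions of the form f(n) = n^k are assumptions of mine), the three cases can be mechanized for plain polynomial f(n); the log-factor recurrences above remain out of its scope:

import math

# Decide which Master Theorem case applies to T(n) = a*T(n/b) + n**k.
# Only handles driving functions that are plain polynomials f(n) = n^k,
# so the lg-factor recurrences above are out of scope.
def master_case(a, b, k):
    crit = math.log(a, b)                  # critical exponent log_b a
    if math.isclose(k, crit):              # Case 2: f(n) = Theta(n^crit)
        return f"Case 2: T(n) = Theta(n^{crit:g} lg n)"
    if k < crit:                           # Case 1: f(n) = O(n^(crit - eps))
        return f"Case 1: T(n) = Theta(n^{crit:g})"
    # Case 3: k > crit; regularity a*(n/b)^k <= c*n^k holds since a / b**k < 1.
    return f"Case 3: T(n) = Theta(n^{k})"

print(master_case(4, 2, 1))   # T(n) = 4T(n/2) + n    ->  Theta(n^2)
print(master_case(4, 2, 2))   # T(n) = 4T(n/2) + n^2  ->  Theta(n^2 lg n)
print(master_case(4, 2, 3))   # T(n) = 4T(n/2) + n^3  ->  Theta(n^3)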
