
Design & Analysis of Algorithms

Dr Anwar Ghani

Semester: Fall 2017

Department of Computer Science & Software Engineering,


International Islamic University, Islamabad.
Recurrences
Topics
• Overview of Algorithm Design

• Recursive versus Nonrecursive Implementations

• Introduction to recurrences

• Using basic methods to solve recurrences

• Using theorems to solve recurrences


Algorithm Design
Paradigm
Algorithms are categorized according to the underlying approach or technique used to solve a
given problem. The main approaches are as follows:

• Divide-and-Conquer Algorithm

• Decrease-and-Conquer

• Dynamic Programming

• Greedy Algorithm

• Backtracking Algorithm
Algorithm Design
Strategies
Divide-and-Conquer: The problem is divided into two or more distinct and independent
subproblems, which are solved and combined to obtain the solution to the main problem.
Familiar applications are Binary Search, Quick Sort, and Merge Sort

Decrease-and-Conquer: The problem size is reduced by a fixed amount. The reduced
problem is solved and used to obtain the solution to the main problem. Examples include
Selection Sort and Linear Search

Dynamic Programming: The problem is split into overlapping subproblems, which
are solved to produce the solution to the main problem. Applications include Shortest Path
in a Network and Longest Common Subsequence

Greedy Algorithm: At each stage of problem solving, a greedy choice (of the largest or
smallest value) is made from amongst the available options. Common applications are
Shortest Paths in a Network and Huffman Coding

Backtracking: All available options for the problem are explored by pursuing different
paths in a tree-like structure. Either the solution is found or a dead end is reached; in the
latter case an alternative path is explored. Familiar examples are Depth First Search and
Exhaustive Search
Algorithm Analysis
Nonrecursive versus Recursive Implementation
The basic steps of a given algorithm design can be coded using iteration loops or recursive
calls. The former approach is also referred to as a nonrecursive implementation.

Nonrecursive Solution: The implementation is based on one or more loops, which may also
be nested. The time efficiency of the algorithm is determined by counting the basic
operations in each loop cycle. For example, if the inner loop executes n times and the outer
loop m times, the overall time complexity is O(n·m).
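For illustration (not part of the original slides), a minimal Python sketch of counting the basic operation in two nested loops; the function name count_basic_operations is illustrative:

def count_basic_operations(m, n):
    """Count the basic operation executed by two nested loops: m*n, i.e. O(n*m)."""
    count = 0
    for i in range(m):        # outer loop: m cycles
        for j in range(n):    # inner loop: n cycles per outer cycle
            count += 1        # one basic operation per inner cycle
    return count

if __name__ == "__main__":
    print(count_basic_operations(3, 5))   # prints 15 = 3 * 5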

Recursive Solution: This implementation involves a sequence of recursive function calls.
The procedure terminates when given conditions are met. To determine the time
complexity of the algorithm, the running time is expressed as a recurrence, which
essentially expresses the running time of a problem in terms of the running time of its
subproblems and the cost associated with the reduction steps. The recurrence equation is
solved to find the running time of the algorithm.

➢ Although an algorithm can be implemented and analyzed using a nonrecursive method,
the recursive approach is often used to determine the running times of several important
algorithms. Next we look at the definitions and solutions of recurrences for Decrease-
and-Conquer and Divide-and-Conquer algorithms.
Recurrences
Definition
A recurrence is an equation or relation that defines a function in terms of lower order
arguments, with the following properties:
(i) The function is defined over the set of natural numbers {0, 1, 2, 3, …}

(ii) The definition includes a base value, called the boundary condition or initial condition

Example(1): The factorial function f(n) = n! = 1·2·3···(n-1)·n

can be expressed as a recurrence:

f(n) = n · f(n-1) for n ≥ 1

f(0) = 1 (initial condition)
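As an illustration (not code from the slides), a direct recursive Python implementation that mirrors this recurrence:

def factorial(n):
    """Recursive factorial mirroring f(n) = n * f(n-1), f(0) = 1."""
    if n == 0:                   # initial condition f(0) = 1
        return 1
    return n * factorial(n - 1)  # reduce to the subproblem of size n-1

if __name__ == "__main__":
    print(factorial(5))          # 120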

Example(2): The Fibonacci numbers 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, … can be defined by the
recurrence

f(n) = f(n-1) + f(n-2) for n > 1

f(0) = 0, f(1) = 1 (initial conditions)
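Likewise, a direct recursive Python sketch of the Fibonacci recurrence (for illustration only; this naive version runs in exponential time):

def fib(n):
    """Recursive Fibonacci mirroring f(n) = f(n-1) + f(n-2)."""
    if n == 0:                   # initial condition f(0) = 0
        return 0
    if n == 1:                   # initial condition f(1) = 1
        return 1
    return fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    print([fib(i) for i in range(10)])   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]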


Solution to Recurrence
Closed Form
The closed form provides an exact solution to a recurrence, usually expressed as a formula.

Example: A closed form solution to the recurrence

T(n) = 1,             when n = 2
T(n) = 2T(n/2) + 2,   when n > 2

is given by the formula

T(n) = (3/2)n - 2

The solution can be verified for different n. Consider, for example, n = 8:
(i) Using the formula, T(8) = (3/2)·8 - 2 = 10.

(ii) Using the recurrence definition,


T(8) = 2T(4) + 2          [definition of T(8)]
     = 2[2T(2) + 2] + 2   [definition of T(4)]
     = 4T(2) + 6
     = 4·1 + 6 = 10       [initial condition, T(2) = 1]
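The check can also be done mechanically. A small Python sketch (assuming n is a power of 2; the function name T is illustrative) that evaluates the recurrence and compares it with the closed form:

def T(n):
    """Evaluate the recurrence T(2) = 1, T(n) = 2*T(n/2) + 2 for n a power of 2."""
    if n == 2:
        return 1
    return 2 * T(n // 2) + 2

if __name__ == "__main__":
    for n in [2, 4, 8, 16, 32]:
        closed = (3 / 2) * n - 2
        print(n, T(n), closed)   # the two columns agree, e.g. T(8) = 10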
Solution to Recurrence
Asymptotic Notation
In most cases the exact solution to a recurrence cannot be determined easily, or a closed
form solution may not exist. For analysis purposes, the solution is often expressed in
asymptotic notation (O, Θ, Ω)

Example: The solution to the recurrence

T(n) = c,                when n = 1
T(n) = 3T(n/4) + cn^2,   when n > 1

in asymptotic notation is

T(n) = Θ(n^2)
Methods for Solving Recurrences
Basic Techniques
• The basic methods for obtaining the solution to recurrence equations for the
Decrease-and-Conquer and Divide-and-Conquer algorithms are referred to as the Iteration
Method and the Substitution Method.

• Several theorems have been propounded which provide a convenient method to get a
quick solution to a recurrence. A celebrated theorem for the solution to generalized Divide-
and-Conquer problems is known as the Master Theorem. For details of the theorem, its proof
and applications, the book Introduction to Algorithms by Cormen et al. may be consulted
(3rd edition, page 97).

• A comprehensive treatment of the solution of a large variety of recurrences is given in
the book An Introduction to the Analysis of Algorithms by Sedgewick and Flajolet.
Decrease-and-Conquer Recurrences
Decrease-and-Conquer Algorithm
Running Time
The running time of a Decrease-and-Conquer algorithm can, in general, be expressed in terms
of a recurrence as follows:

T(n) = T(n-1) + f(n)

where T(n) is the running time of the problem of size n, T(n-1) is the running time of the
subproblem of size n-1, and f(n) is the cost of the reduction step.

➢ The cost function f(n) depends on the application


Solving Decrease-and-Conquer Recurrence
Iteration Method
The Iteration Method can be used to find the solution to Decrease-and-Conquer recurrences.
The method uses a top-down approach. Broadly, it involves the following steps:

Step #1: Use the recurrence to set up the equations for the arguments n, n-1, …, 3, 2, 1.

Step #2: On reaching the bottom level (n = 0), apply the boundary condition.

Step #3: Add the equations, and cancel identical terms on the left-hand and right-hand
sides.

Step #4: Perform the summation to obtain the solution in closed form or in asymptotic
notation.


The Iteration Method
Examples
Example(1): The running time for a linear search of an array is given by the recurrence
T(0)=0
T(n)= T(n-1) + c for n>0

➢ In this recurrence, T(n) is the time to search an array of size n, T(n-1) is the time to search a
subarray of size n-1, and c is the cost of searching one array cell. The solution is determined as
follows. Iterating the equation:
T(n) = T(n-1 ) + c
T(n-1) = T(n-2) + c
…………………………
T(2) = T(1) + c
T(1) = T(0) + c
Adding both sides of the equations, and canceling the equal terms on the left- and right-hand
sides:
T(n) = T(0) + c + c + … + c   (n terms)

Summing the constant terms, T(n) = n·c   (closed form)

Ignoring the constant, T(n) = Θ(n)   (asymptotic form)
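A recursive linear search matching this recurrence might look as follows; this is a minimal Python sketch, not code from the slides:

def linear_search(a, key, n=None):
    """Recursive linear search whose running time follows T(n) = T(n-1) + c.

    Returns an index of `key` in a[0:n], or -1 if it is absent.
    """
    if n is None:
        n = len(a)
    if n == 0:                            # base case: empty subarray, T(0) = 0
        return -1
    if a[n - 1] == key:                   # constant cost c: inspect one cell
        return n - 1
    return linear_search(a, key, n - 1)   # subproblem of size n-1

if __name__ == "__main__":
    print(linear_search([7, 3, 9, 1], 9))   # 2
    print(linear_search([7, 3, 9, 1], 5))   # -1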


The Iteration Method
Examples
Example(2): The running time of Selection Sort is given by the recurrence
T(0)=0
T(n) = T(n-1) + cn
➢ Here T(n) is the time to sort an array of size n, T(n-1) is the time to sort a subarray of size
n-1, and c·n is the cost of finding a maximum key in the array and swapping it with the last key.
The solution to the recurrence is obtained by iterating the equation, as follows:
Iterating:
T(n) = T(n-1 ) + nc
T(n-1) = T(n-2) +(n-1)c
……………………
T(2) = T(1) + 2c
T(1) = T(0) + c
Adding both sides of the equations, and canceling equal terms:
T(n) = T(0) + c(1 + 2 + 3 + … + n)

Evaluating the summation


T(n) = cn(n+1)/2   (closed form)

Ignoring the lower order term n compared to n^2, and the constant c:

T(n) = Θ(n^2)   (asymptotic notation)
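A recursive selection sort matching this recurrence, written as a minimal Python sketch (the slide does not prescribe this exact code):

def selection_sort(a, n=None):
    """Recursive selection sort whose running time follows T(n) = T(n-1) + c*n.

    Sorts a[0:n] in place: place the maximum of a[0:n] at position n-1,
    then sort the remaining subarray of size n-1.
    """
    if n is None:
        n = len(a)
    if n <= 1:                                        # base case: nothing to sort
        return a
    max_index = 0
    for i in range(1, n):                             # cost c*n: scan for the maximum key
        if a[i] > a[max_index]:
            max_index = i
    a[max_index], a[n - 1] = a[n - 1], a[max_index]   # swap it with the last key
    return selection_sort(a, n - 1)                   # subproblem of size n-1

if __name__ == "__main__":
    print(selection_sort([5, 2, 8, 1, 4]))   # [1, 2, 4, 5, 8]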
The Iteration Method
Examples
Example (3): Consider the Decrease-and-Conquer recurrence

T(0)=0
T(n) = T(n-1) + c lg n

Iterating:
T(n) = T(n-1 ) + c lg n
T(n-1) = T(n-2) + c lg(n-1)
……………………
T(2) = T(1) +c lg(2)
T(1) = T(0) + c lg(1)

Adding the equations, and canceling equal terms on the left- and right-hand sides:
T(n) = T(0) + c[lg(1) + lg(2) + … + lg(n-1) + lg(n)]

Using the initial condition and the product property of logarithms:

T(n) = c·lg(1·2·…·(n-1)·n)

Using the definition of factorial:

T(n) = c·lg(n!)   (closed form)
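A quick numerical check of this closed form, as a Python sketch (with c = 1 assumed for simplicity; the function name T is illustrative):

import math

def T(n, c=1.0):
    """Evaluate T(0) = 0, T(n) = T(n-1) + c*lg(n) directly."""
    return 0.0 if n == 0 else T(n - 1, c) + c * math.log2(n)

if __name__ == "__main__":
    for n in [1, 4, 10, 20]:
        closed = math.log2(math.factorial(n))       # c*lg(n!) with c = 1
        print(n, round(T(n), 6), round(closed, 6))  # the values agree up to rounding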
Divide-and-Conquer Recurrences
Divide-and-Conquer Algorithm
Running Time
The Divide-and-Conquer algorithm solves a problem by splitting it into subproblems, and
combining the solutions to generate the solution to the main problem.

Let n be the size of the main problem, a the number of subproblems, and n/b the size of each
subproblem, where a and b > 1 are constants. The running time of the algorithm is expressed in
terms of the recurrence

T(n) = aT(n/b) + f(n)

where T(n) is the running time of the main problem of size n, aT(n/b) is the running time of
the a subproblems, each of size n/b, and f(n) is the cost of dividing and combining.

➢ The subproblems are also referred to as partitions. The number and sizes of the partitions,
and the cost function, depend on the application
Solving Divide-and-Conquer Recurrences
Substitution Method
The Substitution Method for solving the Divide-and-Conquer recurrence consists of the
following steps

Step #1: In the recurrence, progressively plug in the values n/b, n/b^2, n/b^3, … on the
right-hand side of the equation

Step #2: Repeat the procedure until the base case is reached

Step #3: The iterative steps would generate some kind of pattern or a series. Perform the
summation to express the running time in closed form.

Step #4: Analyze the summation to express the running time in asymptotic notation
The Substitution Method
Examples
Example(1): The running time of the binary search algorithm is given by the recurrence
T(1) = c ( constant)
T(n) = T(n/2) + c, n > 1

➢ Here T(n/2) is the time to search the left half or right half of a sorted array, and c is the
combined cost of comparing one key and finding the middle element of the array. The solution
is as follows:

Initially, T(n) = T(n/2) + c = T(n/2^1) + c

Substituting for T(n/2), T(n) = T(n/4) + 2c
                              = T(n/2^2) + 2c

Again substituting for T(n/4), T(n) = T(n/8) + 3c
                                    = T(n/2^3) + 3c

Continuing, after the kth step, T(n) = T(n/2^k) + k·c

The base case is reached when n/2^k = 1, or n = 2^k, i.e. k = lg n

Substituting for k, T(n) = T(1) + c·lg n
                         = c + c·lg n   (closed form)

Ignoring constants, T(n) = Θ(lg n)   (asymptotic notation)
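A recursive binary search matching this recurrence; a minimal Python sketch, assuming a sorted Python list:

def binary_search(a, key, lo=0, hi=None):
    """Recursive binary search on a sorted list; running time follows T(n) = T(n/2) + c.

    Returns the index of `key` in a[lo:hi], or -1 if it is absent.
    """
    if hi is None:
        hi = len(a)
    if lo >= hi:                                     # empty range: key not present
        return -1
    mid = (lo + hi) // 2                             # constant cost c: find middle, compare
    if a[mid] == key:
        return mid
    if key < a[mid]:
        return binary_search(a, key, lo, mid)        # search left half
    return binary_search(a, key, mid + 1, hi)        # search right half

if __name__ == "__main__":
    data = [1, 3, 5, 7, 9, 11]
    print(binary_search(data, 7))    # 3
    print(binary_search(data, 4))    # -1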


The Substitution Method
Examples
Example(2): The recurrence for a Divide-and-Conquer algorithm with fixed cost, which splits
the problem into two subproblems of equal size, is as follows.
T(1) = c
T(n) = 2T(n/2) + c, n > 1

Initially: T(n) = 2T(n/2) + c
                = 2T(n/2^1) + 2^0·c

Substituting for T(n/2), T(n) = 2[2T(n/4) + c] + c
                              = 4T(n/4) + 3c
                              = 2^2 T(n/2^2) + (2^0 + 2^1)·c

Substituting for T(n/4), T(n) = 4[2T(n/8) + c] + 3c
                              = 8T(n/8) + 7c
                              = 2^3 T(n/2^3) + (2^0 + 2^1 + 2^2)·c

Continuing, after the kth substitution,

T(n) = 2^k T(n/2^k) + (2^0 + 2^1 + 2^2 + … + 2^(k-1))·c

Summing the geometric series,

T(n) = 2^k T(n/2^k) + (2^k - 1)·c          (cont’d)
The Substitution Method
Examples
Example(2) cont’d :

The base case is reached when n/2^k = 1, or n = 2^k

Substituting for 2^k, T(n) = n·T(1) + (n-1)·c

Using the initial condition, T(n) = nc + nc - c

Simplifying, T(n) = c(2n - 1)   (closed form)

Ignoring constants, T(n) = Θ(n)   (asymptotic notation)


The Substitution Method
Examples
Example(3): The running time of merge sort is given by the recurrence
T(1) = c
T(n) = 2T(n/2) + cn, n > 1
➢ In this recurrence, T(n) is the running time of sorting an array of size n, which is split into
two equal subarrays each of size n/2; T(n/2) is the running time to sort a subarray of size n/2,
and cn is the cost of splitting and merging the two subarrays. The solution to the recurrence is
as follows:
Initially, T(n) = 2T(n/2) + cn
                = 2^1 T(n/2^1) + cn

Substituting for T(n/2), T(n) = 2[2T(n/4) + cn/2] + cn
                              = 4T(n/4) + 2cn
                              = 2^2 T(n/2^2) + 2cn

Again substituting for T(n/4), T(n) = 2^2 [2T(n/8) + cn/4] + 2cn
                                    = 2^3 T(n/8) + 3cn
                                    = 2^3 T(n/2^3) + 3cn

Continuing, after the kth step, T(n) = 2^k T(n/2^k) + kcn


(cont’d)
The Substitution Method
Examples
Example(3) cont’d:

The base case is reached when n/2^k = 1, i.e. 2^k = n, or k = lg n

T(n) = n·T(1) + cn·lg n

Using the initial condition, T(n) = cn + cn·lg n

Simplifying, T(n) = c(n + n lg n)   (closed form)

Discarding the constant c, and the lower order term n in favor of the larger term n lg n:

T(n) = Θ(n lg n)   (asymptotic form)
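A recursive merge sort matching this recurrence, as a minimal Python sketch (the splitting and merging details are one possible choice, not taken from the slides):

def merge_sort(a):
    """Recursive merge sort; running time follows T(n) = 2T(n/2) + c*n."""
    n = len(a)
    if n <= 1:                        # base case T(1) = c
        return a
    left = merge_sort(a[:n // 2])     # subproblem of size n/2
    right = merge_sort(a[n // 2:])    # subproblem of size n/2
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step: linear cost c*n
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

if __name__ == "__main__":
    print(merge_sort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]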


The Substitution Method
Examples
Example(4): The recurrence for a Divide-and-Conquer algorithm that splits a problem into
three subproblems, each of size n/4, is

T(1) = c
T(n) = 3T(n/4) + cn, n > 1

Initially, T(n) = 3T(n/4) + cn

Substituting for T(n/4), T(n) = 3[3T(n/16) + cn/4] + cn
                              = 9T(n/16) + cn + cn·(3/4)
                              = 3^2 T(n/4^2) + cn[(3/4)^0 + (3/4)^1]

Again substituting for T(n/16), T(n) = 9[3T(n/64) + cn/16] + cn + 3cn/4
                                     = 27T(n/64) + cn + cn·(3/4) + cn·(9/16)
                                     = 3^3 T(n/4^3) + cn[(3/4)^0 + (3/4)^1 + (3/4)^2]

Continuing, after the kth step, it follows that

T(n) = 3^k T(n/4^k) + cn[(3/4)^0 + (3/4)^1 + (3/4)^2 + … + (3/4)^(k-1)]


(cont’d)
The Substitution Method
Examples
Example(4) cont’d:

The base case is reached when n/4^k = 1. Taking logarithms to base 4, k = log_4 n.

T(n) = c·3^(log_4 n) + cn[(3/4)^0 + (3/4)^1 + (3/4)^2 + … + (3/4)^(log_4 n - 1)]

The geometric series has ratio 3/4, which is less than 1. Therefore,
(3/4)^0 + (3/4)^1 + (3/4)^2 + … + (3/4)^(log_4 n - 1) = Θ(1)

Using the above relations:

T(n) = c·3^(log_4 n) + cn·Θ(1)

By the property of logarithms, 3^(log_4 n) = n^(log_4 3), so

T(n) = c·n^(log_4 3) + cn

Since log_4 3 < 1, the term n^(log_4 3) is smaller than n, and the term c·n^(log_4 3) is discarded
in favor of n. Therefore,

T(n) = Θ(n)   (asymptotic form)
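The Θ(n) behavior can be checked numerically. A Python sketch (assuming n is a power of 4 and c = 1; the function name T is illustrative) that evaluates the recurrence and shows T(n)/n settling toward a constant:

def T(n, c=1.0):
    """Evaluate T(1) = c, T(n) = 3*T(n/4) + c*n for n a power of 4."""
    return c if n == 1 else 3 * T(n // 4, c) + c * n

if __name__ == "__main__":
    for n in [4, 16, 64, 256, 1024]:
        print(n, T(n), T(n) / n)   # T(n)/n approaches a constant, consistent with Theta(n)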
Solving Divide-and-Conquer Recurrences
Fundamental Theorem
If a and b > 1 are constants, the solution to the Divide-and-Conquer recurrence

T(1) = c

T(n) = aT(n/b) + cn^x, for n > 1, where x is some constant,

is given by the formula

T(n) = Θ(n^x),            when a < b^x
T(n) = Θ(n^x log_b n),    when a = b^x
T(n) = Θ(n^(log_b a)),    when a > b^x


Using Theorem to Solve Recurrences
Examples
Recurrence: T(1) = c
            T(n) = aT(n/b) + cn^x

Formula: T(n) = Θ(n^x),            when a < b^x   (case 1)
         T(n) = Θ(n^x log n),      when a = b^x   (case 2)
         T(n) = Θ(n^(log_b a)),    when a > b^x   (case 3)

Example(1): T(n) = 2T(n/2) + n^3

Here x = 3, a = 2, b = 2, b^x = 2^3 = 8. Since a < b^x, case 1 of the theorem applies.
Therefore, the solution is T(n) = Θ(n^x) = Θ(n^3)

Example(2): T(n) = 4T(n/2) + n^2

x = 2, a = 4, b = 2, b^x = 2^2 = 4. Since a = b^x, case 2 of the theorem applies.
Therefore, the solution is T(n) = Θ(n^2 lg n)

Example(3): T(n) = 7T(n/2) + n^2

x = 2, a = 7, b = 2, b^x = 2^2 = 4. Since a > b^x, case 3 of the theorem applies.
Therefore, the solution is T(n) = Θ(n^(log_2 7))
Using notation for the binary logarithm, T(n) = Θ(n^(lg 7))
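The three cases can be packaged as a small helper. This Python sketch (the function name divide_and_conquer_order is made up for illustration) classifies a recurrence T(n) = aT(n/b) + cn^x and reports its asymptotic order:

import math

def divide_and_conquer_order(a, b, x):
    """Return the asymptotic order of T(n) = a*T(n/b) + c*n**x per the theorem's three cases."""
    if a < b ** x:
        return f"Theta(n^{x})"                   # case 1: a < b^x
    if a == b ** x:
        return f"Theta(n^{x} * log n)"           # case 2: a = b^x
    return f"Theta(n^{math.log(a, b):.3f})"      # case 3: a > b^x, i.e. Theta(n^(log_b a))

if __name__ == "__main__":
    print(divide_and_conquer_order(2, 2, 3))   # Theta(n^3)
    print(divide_and_conquer_order(4, 2, 2))   # Theta(n^2 * log n)
    print(divide_and_conquer_order(7, 2, 2))   # Theta(n^2.807), i.e. Theta(n^lg 7)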
Fundamental Theorem
Proof for Linear Cost Function
Consider the recurrence
T(1) = c
T(n) = aT(n/b) + c·n   (linear cost)
The Substitution Method can be used to prove the theorem, as shown below.
Substituting for T(n/b):
T(n) = a[aT(n/b^2) + c(n/b)] + cn = a^2 T(n/b^2) + cn[1 + (a/b)]

Substituting for T(n/b^2):

T(n) = a^2[aT(n/b^3) + c(n/b^2)] + cn[1 + (a/b)] = a^3 T(n/b^3) + cn[1 + (a/b) + (a/b)^2]

Again substituting for T(n/b^3):

T(n) = a^3[aT(n/b^4) + c(n/b^3)] + cn[1 + (a/b) + (a/b)^2] = a^4 T(n/b^4) + cn[1 + (a/b) + (a/b)^2 + (a/b)^3]

Continuing, after the kth step:

T(n) = a^k T(n/b^k) + cn[1 + (a/b) + (a/b)^2 + (a/b)^3 + … + (a/b)^(k-1)]

The term T(n/b^k) reduces to the base case T(1) when n/b^k = 1, or b^k = n.
Taking logarithms to base b, k = log_b n.

Substituting for k in the summation:

T(n) = a^(log_b n)·T(1) + cn[1 + (a/b) + (a/b)^2 + (a/b)^3 + … + (a/b)^(log_b n - 1)]

     = c·a^(log_b n) + cn[1 + (a/b) + (a/b)^2 + (a/b)^3 + … + (a/b)^(log_b n - 1)]   ………(1)   (cont’d)


Fundamental Theorem
Proof cont’d
T(n) = c·a^(log_b n) + cn[1 + (a/b) + (a/b)^2 + (a/b)^3 + … + (a/b)^(log_b n - 1)]   ………(1)

Now consider a^(log_b n). Multiplying and dividing by b^(log_b n):

a^(log_b n) = [a^(log_b n) · b^(log_b n)] / b^(log_b n) = (a/b)^(log_b n) · b^(log_b n)

Since b^(log_b n) = n^(log_b b) = n, it follows that a^(log_b n) = n·(a/b)^(log_b n)

Substituting into equation (1), and rearranging:

T(n) = cn[1 + (a/b) + (a/b)^2 + (a/b)^3 + … + (a/b)^(log_b n)]   ………(2)
The right side is a geometric series with ratio a/b. Its behavior is determined by the largest term
of the series. Three cases need to be considered separately.

Case a < b: The series (2) has decreasing terms. Its first term, (a/b)^0 = 1, makes the main
contribution asymptotically, so the sum is Θ(1). Therefore the recurrence has the solution
T(n) = cn·Θ(1) = Θ(n)   (dropping the constant)

Case a = b: All terms of the series are equal to 1. There being log_b n + 1 terms,
T(n) = cn·log_b n + cn = Θ(n log_b n) + Θ(n) = Θ(n log_b n)   (dropping lower order terms)

Case a > b: The series has increasing terms. The last term, (a/b)^(log_b n), makes the main
contribution asymptotically.
Since (a/b)^(log_b n) = a^(log_b n) / b^(log_b n) = n^(log_b a) / n^(log_b b) = n^(log_b a) / n,
the recurrence has the solution
T(n) = cn·n^(log_b a) / n = c·n^(log_b a) = Θ(n^(log_b a))   (dropping the constant)
General Theorem
Outline of Proof
• The recurrence

T(1) = c
T(n) = aT(n/b) + cn^x, for n > 1

can be solved by the Substitution Method.

• It follows that after k substitutions the result is

T(n) = a^k T(n/b^k) + cn^x[1 + (a/b^x) + (a/b^x)^2 + (a/b^x)^3 + … + (a/b^x)^(k-1)]

• The term T(n/b^k) reduces to the base case T(1) when n/b^k = 1, or b^k = n.
Taking logarithms to base b, k = log_b n. Substituting for k in the summation:

T(n) = a^(log_b n)·T(1) + cn^x[1 + (a/b^x) + (a/b^x)^2 + (a/b^x)^3 + … + (a/b^x)^(log_b n - 1)]

• The geometric series has ratio a/b^x. By considering the asymptotic behavior of the series,
it can be shown that

T(n) = Θ(n^x),            when a < b^x
T(n) = Θ(n^x log n),      when a = b^x
T(n) = Θ(n^(log_b a)),    when a > b^x
