
RECURRENCES AND

DIVIDE AND CONQUER


TOPICS TO BE COVERED
The basics of the divide & conquer method
Binary search
Merge sort
Quick sort
Solving recurrences:
Substitution method
Recursion tree method
Master method
Finding maximum and minimum
Strassen's matrix multiplication.
INTRODUCTION

Divide the problem (instance) into subproblems whose sizes are fractions of the original problem size.

Conquer the subproblems by solving them recursively.

Combine the subproblem solutions.


BINARY SEARCH
• Binary search looks for a particular item by comparing it with the middle-most item of the collection.
• If a match occurs, then the index of the item is returned.
• If the middle item is greater than the item, then the item is searched for in the sub-array to the left of the middle item.
• Otherwise, the item is searched for in the sub-array to the right of the middle item.
• This process continues on the sub-array until the size of the sub-array reduces to zero.
BINARY SEARCH EXAMPLE
Search for the location of the value 31 in a sorted array of 10 items (locations 0-9) using binary search.

mid = low + (high - low) / 2

Here it is 0 + (9 - 0) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array. The value stored at location 4 is not a match; it is less than what we are looking for, so the value must be in the upper part from this location.

low = mid + 1;  mid = low + (high - low) / 2

Our new mid is 7 now. We compare the value stored at location 7 with our target value 31. It is not a match, rather it is more than what we are looking for. So, the value must be in the lower part from this location. Hence, we calculate the mid again; this time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.
BINARY SEARCH (ALGORITHM)
Algorithm BinSearch(l, h, key)                       T(n)
{
    if (l == h)
    {
        if (A[l] == key)                             1
            return l;
        else
            return 0;
    }
    else
    {
        mid = (l + h) / 2;                           1
        if (key == A[mid])                           1
            return mid;
        if (key < A[mid])
            return BinSearch(l, mid-1, key);         T(n/2)
        else
            return BinSearch(mid+1, h, key);         T(n/2)
    }
}
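The recursion above can be checked in runnable form. Below is a Python sketch of the same algorithm; one deliberate deviation, noted in the comments, is returning -1 instead of 0 on failure, since 0 is itself a valid index.

```python
def bin_search(A, l, h, key):
    # Recursive binary search on the sorted list A over indices l..h.
    # Returns the index of key, or -1 (the slides return 0) if absent.
    if l > h:
        return -1
    if l == h:
        return l if A[l] == key else -1
    mid = (l + h) // 2
    if key == A[mid]:
        return mid
    if key < A[mid]:
        return bin_search(A, l, mid - 1, key)   # search left half
    return bin_search(A, mid + 1, h, key)       # search right half
```

For a sorted list A, `bin_search(A, 0, len(A) - 1, key)` returns the index of key, or -1 when it is not present.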
BINARY SEARCH (RECURRENCE
RELATION FROM THE ALGORITHM)

Recurrence
Relation:
T(n) = O(1)          if n = 1
T(n) = T(n/2) + b    if n > 1
Mergesort
• Mergesort.
• Divide array into two halves.
• Recursively sort each half.
• Merge two halves to make sorted whole.

A L G O R I T H M S

A L G O R   I T H M S     divide   O(1)

A G L O R   H I M S T     sort     2T(n/2)

A G H I L M O R S T       merge    O(n)
Merge-Sort
• Merge-sort on an input sequence S with n elements consists of three steps:
• Divide: partition S into two sequences S1 and S2 of about n/2 elements each
• Recur: recursively sort S1 and S2
• Conquer: merge S1 and S2 into a unique sorted sequence
Part 3: Divide and Conquer                         Master Informatique
MergeSort Example

Input:   85 24 63 45 17 31 96 50

Divide:  85 24 63 45 | 17 31 96 50
         85 24 | 63 45
         85 | 24
Merge:   24 85
Divide:  63 | 45
Merge:   45 63
Merge:   24 45 63 85
Divide:  17 31 | 96 50
         17 | 31
Merge:   17 31
Divide:  96 | 50
Merge:   50 96
Merge:   17 31 50 96
Merge:   17 24 31 45 50 63 85 96
MERGE SORT (ALGORITHM)
Algorithm MergeSort(l, h)                    T(n)
{
    if (l < h)                               1
    {
        mid = (l+h)/2;                       1
        MergeSort(l, mid);                   T(n/2)
        MergeSort(mid+1, h);                 T(n/2)
        Merge(l, mid, mid+1, h);             n
    }
}

Recurrence
Relation:
T(n) = O(1)           if n = 1
T(n) = 2T(n/2) + bn   if n > 1
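A runnable Python counterpart of the pseudocode above, keeping the inclusive bounds l..h. Since the slides do not list Merge's body, the helper below is the usual two-run merge, written here as an assumption.

```python
def merge(A, l, mid, h):
    # Merge the sorted runs A[l..mid] and A[mid+1..h] in place.
    left, right = A[l:mid + 1], A[mid + 1:h + 1]
    i = j = 0
    for k in range(l, h + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

def merge_sort(A, l, h):
    if l < h:
        mid = (l + h) // 2
        merge_sort(A, l, mid)       # T(n/2)
        merge_sort(A, mid + 1, h)   # T(n/2)
        merge(A, l, mid, h)         # n
```

Calling `merge_sort(A, 0, len(A) - 1)` sorts A in place.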
RECURRENCE RELATION
• Many algorithms are recursive in nature, so it is natural to analyze them using recurrence relations.
• A recurrence relation is a mathematical model that captures the underlying time complexity of an algorithm.
• The methods below are used to derive solutions to recurrence relations that yield the time complexity of algorithms:
 Substitution method
 Recursion tree method
 Master theorem
SUBSTITUTION METHOD
(BINARY SEARCH)

T(n) = O(1)          if n = 1
T(n) = T(n/2) + c    if n > 1

T(n) = T(n/2) + c                       (1)
but T(n/2) = T(n/2²) + c, so

T(n) = T(n/2²) + c + c
T(n) = T(n/2²) + 2c                     (2)
T(n/2²) = T(n/2³) + c

T(n) = T(n/2³) + c + 2c
T(n) = T(n/2³) + 3c                     (3)
...
T(n) = T(n/2^k) + kc                    (4)

Let's assume
n/2^k = 1
=> n = 2^k
=> k = log₂ n

Substituting back in (getting rid of k):

T(n) = T(1) + c log n
     = c log n + c₀
     = O(log n)
SUBSTITUTION METHOD
(MERGE SORT)

T(n) = O(1)           if n = 1
T(n) = 2T(n/2) + bn   if n > 1

T(n) = 2T(n/2) + n
     = 2(2T(n/4) + n/2) + n
     = 2²T(n/4) + 2n
     = 2²(2T(n/8) + n/4) + 2n
     = 2³T(n/8) + 3n
     ...
     = 2^k T(n/2^k) + kn

Let's assume
n/2^k = 1  =>  n = 2^k  =>  k = log₂ n

Substituting back in (getting rid of k):
T(n) = 2^k T(n/2^k) + kn
     = 2^(log n) T(n/n) + n log n
     = n T(1) + n log n
     = n log n + n

Time Complexity = O(n log n)
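The closed form just derived can be sanity-checked numerically: with the assumed base case T(1) = 1, the recurrence T(n) = 2T(n/2) + n equals n·log₂(n) + n exactly at powers of two.

```python
def T(n):
    # Merge sort recurrence with unit base case T(1) = 1.
    return 1 if n == 1 else 2 * T(n // 2) + n
```

For example, T(1024) = 1024·10 + 1024 = 11264, matching n log₂ n + n.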
RECURSION TREE

A recursion tree is a convenient way to visualize what happens when a recurrence is iterated.
Recursion tree
Solve T(n) = 2T(n/2) + cn, where c > 0 is constant.

                    cn                       ....  cn
             cn/2        cn/2                ....  cn
h = lg n   cn/4  cn/4  cn/4  cn/4            ....  cn
             ...
            Θ(1)       #leaves = n           ....  Θ(n)

Total = Θ(n lg n)
Example of recursion tree
Solve T(n) = T(n/4) + T(n/2) + n²:

                    n²                       ....  n²
            (n/4)²       (n/2)²              ....  (5/16) n²
      (n/16)²  (n/8)²  (n/8)²  (n/4)²        ....  (25/256) n²
             ...
            Θ(1)

Total = n² (1 + 5/16 + (5/16)² + (5/16)³ + ...)
      = Θ(n²)      (geometric series)
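The geometric series above sums to 1/(1 − 5/16) = 16/11, so T(n) should stay below (16/11)·n² once n is not tiny. A memoized numeric check, with an assumed base case T(n ≤ 1) = 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/4) + T(n/2) + n^2, with integer (floor) division.
    return 1 if n <= 1 else T(n // 4) + T(n // 2) + n * n
```

The ratio T(n)/n² approaches 16/11 ≈ 1.4545 from below, consistent with the series bound.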
The master method

The master method applies to recurrences of the form

T(n) = a T(n/b) + f(n),

where a ≥ 1, b > 1, and f is asymptotically positive.

Three common cases
Compare f(n) with n^(log_b a):
1. f(n) = O(n^(log_b a − ε)) for some constant ε > 0: f(n) grows polynomially slower than n^(log_b a) (by an n^ε factor).
   Solution: T(n) = Θ(n^(log_b a)).
2. f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0: f(n) and n^(log_b a) grow at similar rates.
   Solution: T(n) = Θ(n^(log_b a) lg^(k+1) n).
3. f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0: f(n) grows polynomially faster than n^(log_b a) (by an n^ε factor), and f(n) satisfies the regularity condition that a f(n/b) ≤ c f(n) for some constant c < 1.
   Solution: T(n) = Θ(f(n)).
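For the common special case f(n) = n^k, the three cases reduce to comparing k with log_b a. A small hypothetical helper (the function name and its output strings are ours, for illustration only):

```python
import math

def master_case(a, b, k):
    # Classify T(n) = a*T(n/b) + n^k by the master method.
    crit = math.log(a, b)          # critical exponent log_b a
    if k < crit:
        return f"Case 1: Theta(n^{crit:g})"
    if k == crit:
        # f(n) = n^k lg^0 n, so the solution gains one lg factor.
        return f"Case 2: Theta(n^{k:g} lg n)"
    # For f(n) = n^k with k > log_b a, the regularity condition
    # a*(n/b)^k <= c*n^k holds with c = a/b^k < 1.
    return f"Case 3: Theta(n^{k:g})"
```

For example, T(n) = 4T(n/2) + n falls in Case 1, while T(n) = 2T(n/2) + n falls in Case 2.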
Examples
EX. T(n) = 4T(n/2) + n
a = 4, b = 2  ⇒  n^(log_b a) = n²;  f(n) = n.
CASE 1: f(n) = O(n^(2 − ε)) for ε = 1.
∴ T(n) = Θ(n²).

EX. T(n) = 4T(n/2) + n³
a = 4, b = 2  ⇒  n^(log_b a) = n²;  f(n) = n³.
CASE 3: f(n) = Ω(n^(2 + ε)) for ε = 1,
and 4(n/2)³ ≤ cn³ (reg. cond.) for c = 1/2.
∴ T(n) = Θ(n³).
Quicksort
• Proposed by C.A.R. Hoare in 1962.
• Divide-and-conquer algorithm.
• Sorts “in place” (like insertion sort, but not like merge sort).
Divide and conquer
Quicksort an n-element array:
1. Divide: Partition the array into two subarrays around a pivot x
such that elements in the lower subarray ≤ x ≤ elements in the upper
subarray.
        [ ≤ x ][ x ][ ≥ x ]
2. Conquer: Recursively sort the two subarrays.
3. Combine: Trivial.
Partitioning subroutine
PARTITION(A, p, q)              ⊳ A[p . . q]
    x ← A[p]                    ⊳ pivot = A[p]
    i ← p
    for j ← p + 1 to q
        do if A[j] ≤ x
            then i ← i + 1
                 exchange A[i] ↔ A[j]
    exchange A[p] ↔ A[i]
    return i

Running time = O(n) for n elements.

Invariant:   [ x ][ ≤ x ][ ≥ x ][ ? ]
              p     i      j      q
Example of partitioning (pivot x = 6):

6 10 13  5  8  3  2 11
6  5 13 10  8  3  2 11
6  5  3 10  8 13  2 11
6  5  3  2  8 13 10 11
2  5  3  6  8 13 10 11     (pivot swapped into place; i = 3)
Pseudocode for quicksort
QUICKSORT(A, p, r)
    if p < r
        then q ← PARTITION(A, p, r)
             QUICKSORT(A, p, q − 1)
             QUICKSORT(A, q + 1, r)
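The two routines above translate directly to runnable Python (in place, inclusive bounds, pivot A[p] as in PARTITION):

```python
def partition(A, p, q):
    # Partition A[p..q] around the pivot x = A[p]; return its final index.
    x = A[p]
    i = p
    for j in range(p + 1, q + 1):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[p], A[i] = A[i], A[p]        # move pivot into its final place
    return i

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)
```

On the array 6 10 13 5 8 3 2 11 from the partitioning example, `partition` returns 3 and leaves 2 5 3 6 8 13 10 11, matching the trace.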
Analysis of quicksort

• Assume all input elements are distinct.


• In practice, there are better partitioning algorithms for when
duplicate input elements may exist.
• Let T(n) = worst-case running time on an array of n elements.
Worst-case of quicksort
• Input sorted or reverse sorted.
• Partition around min or max element.
• One side of partition always has no elements.
T(n) = T(0) + T(n − 1) + Θ(n)
     = Θ(1) + T(n − 1) + Θ(n)
     = T(n − 1) + Θ(n)
     = Θ(n²)     (arithmetic series)
Worst-case recursion tree
T(n) = T(0) + T(n − 1) + cn

           cn
    Θ(1)   c(n − 1)              height h = n
    Θ(1)   c(n − 2)
    Θ(1)   ...
           Θ(1)

Σ_{k=1}^{n} ck = Θ(n²), so
T(n) = Θ(n) + Θ(n²)
     = Θ(n²)
Best-case analysis
If we’re lucky, PARTITION splits the array evenly:
T(n) = 2T(n/2) + Θ(n)
     = Θ(n lg n)     (same as merge sort)

What if the split is always 1/10 : 9/10?
T(n) = T(n/10) + T(9n/10) + Θ(n)
What is the solution to this recurrence?
Analysis of “almost-best” case
Recursion tree for T(n) = T(n/10) + T(9n/10) + cn:

                      cn                          ....  cn
             cn/10         9cn/10                 ....  cn
       cn/100  9cn/100  9cn/100  81cn/100         ....  cn
             ...
            Θ(1)                 O(n) leaves

The leftmost branch bottoms out at depth log₁₀ n, the rightmost at
depth log₁₀/₉ n, so

cn log₁₀ n  ≤  T(n)  ≤  cn log₁₀/₉ n + O(n)

⇒ T(n) = Θ(n lg n). Lucky!
Randomized quicksort
IDEA: Partition around a random element.
• Running time is independent of the input order.
• No assumptions need to be made about the input distribution.
• No specific input elicits the worst-case behavior.
• The worst case is determined only by the output of a random-
number generator.
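The idea can be sketched compactly. Unlike the in-place version above, this is a functional sketch that builds new lists; it keeps the key property that the pivot is chosen uniformly at random.

```python
import random

def randomized_quicksort(a):
    # Expected Theta(n lg n) regardless of input order,
    # because the pivot is a uniformly random element.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    lo = [x for x in a if x < pivot]
    eq = [x for x in a if x == pivot]
    hi = [x for x in a if x > pivot]
    return randomized_quicksort(lo) + eq + randomized_quicksort(hi)
```

The result is always sorted; only the running time, not correctness, depends on the random choices.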
Randomized quicksort analysis
Let T(n) = the random variable for the running time of
randomized quicksort on an input of size n, assuming random
numbers are independent.
Matrix multiplication
Input: A = [a_ij], B = [b_ij],   i, j = 1, 2, …, n.
Output: C = [c_ij] = A · B.

c_ij = Σ_{k=1}^{n} a_ik · b_kj
Standard algorithm
for i ← 1 to n
    do for j ← 1 to n
        do c_ij ← 0
           for k ← 1 to n
               do c_ij ← c_ij + a_ik · b_kj

Running time = Θ(n³)
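The triple loop translates directly to Python for square matrices given as nested lists:

```python
def matmul(A, B):
    # Standard O(n^3) matrix multiplication: C = A * B.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # c_ij += a_ik * b_kj
    return C
```

The three nested loops of n iterations each give the Θ(n³) running time stated above.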


Divide-and-conquer algorithm
IDEA:
An n×n matrix = a 2×2 matrix of (n/2)×(n/2) submatrices:

⎡r s⎤   ⎡a b⎤   ⎡e f⎤
⎣t u⎦ = ⎣c d⎦ · ⎣g h⎦

C = A · B
r = ae + bg
s = af + bh     8 recursive mults of (n/2)×(n/2) submatrices
t = ce + dg     4 adds of (n/2)×(n/2) submatrices
u = cf + dh
Analysis of D&C algorithm
T(n) = 8 T(n/2) + Θ(n²)
         ↑    ↑       ↑
  #submatrices  submatrix size  work adding submatrices

n^(log_b a) = n^(log₂ 8) = n³  ⇒  CASE 1  ⇒  T(n) = Θ(n³).

No better than the ordinary algorithm.
Strassen’s idea
Multiply 2×2 matrices with only 7 recursive mults.

P1 = a · (f − h)          r = P5 + P4 − P2 + P6
P2 = (a + b) · h          s = P1 + P2
P3 = (c + d) · e          t = P3 + P4
P4 = d · (g − e)          u = P5 + P1 − P3 − P7
P5 = (a + d) · (e + h)
P6 = (b − d) · (g + h)    7 mults, 18 adds/subs.
P7 = (a − c) · (e + f)    Note: no reliance on commutativity of mult!
Strassen’s algorithm
1. Divide: Partition A and B into (n/2)×(n/2) submatrices. Form
terms to be multiplied using + and −.
2. Conquer: Perform 7 multiplications of (n/2)×(n/2) submatrices
recursively.
3. Combine: Form C using + and − on (n/2)×(n/2) submatrices.

T(n) = 7 T(n/2) + Θ(n²)
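The recursion can be written out for square matrices whose side is a power of two. The helpers `add`, `sub`, `split`, and `join` are our own names; the base case simply multiplies scalars.

```python
def add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def split(M):
    # Quadrants a, b, c, d of an n x n matrix (n even).
    m = len(M) // 2
    return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
            [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])

def join(r, s, t, u):
    # Reassemble C from its four quadrants.
    return [rr + ss for rr, ss in zip(r, s)] + \
           [tt + uu for tt, uu in zip(t, u)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    a, b, c, d = split(A)
    e, f, g, h = split(B)
    P1 = strassen(a, sub(f, h))
    P2 = strassen(add(a, b), h)
    P3 = strassen(add(c, d), e)
    P4 = strassen(d, sub(g, e))
    P5 = strassen(add(a, d), add(e, h))
    P6 = strassen(sub(b, d), add(g, h))
    P7 = strassen(sub(a, c), add(e, f))
    r = add(sub(add(P5, P4), P2), P6)    # r = P5 + P4 - P2 + P6
    s = add(P1, P2)                      # s = P1 + P2
    t = add(P3, P4)                      # t = P3 + P4
    u = sub(sub(add(P5, P1), P3), P7)    # u = P5 + P1 - P3 - P7
    return join(r, s, t, u)
```

Each level performs 7 sub-multiplications plus a constant number of Θ(n²) additions, which is exactly the recurrence T(n) = 7 T(n/2) + Θ(n²) above.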


Analysis of Strassen
T(n) = 7 T(n/2) + Θ(n²)
n^(log_b a) = n^(log₂ 7) ≈ n^2.81  ⇒  CASE 1  ⇒  T(n) = Θ(n^(lg 7)).

The number 2.81 may not seem much smaller than 3, but because
the difference is in the exponent, the impact on running time is
significant.

Best to date (of theoretical interest only): Θ(n^2.376).
