
Math 2320 B, Lecture 04, Last printed 10/1/2007 10:43 PM

7.3 Divide-and-Conquer Algorithms and Recurrence Relations
Many recursive algorithms take a problem with a given input and divide it
into one or more smaller problems. This reduction is applied repeatedly until
the solutions of the smaller problems can be found quickly. A procedure of
this kind is called a divide-and-conquer algorithm. Such algorithms divide a
problem into one or more instances of the same problem of smaller size, and
they conquer the problem by using the solutions of the smaller problems to
find a solution of the original problem, possibly with some additional work.
In this section we look at how recurrence relations can be used to estimate
the computational complexity of divide-and-conquer algorithms.
Divide-and-Conquer Recurrence Relations
Suppose that a recursive algorithm divides a problem of size n into a
sub-problems, each of size n/b. Also suppose that a total of g(n) extra
operations are needed in the conquer step of the algorithm to combine the
solutions of the sub-problems into a solution of the original problem. Let
f(n) be the number of operations required to solve a problem of size n.
Then f satisfies the recurrence relation
f(n) = a f(n/b) + g(n),
called a divide-and-conquer recurrence relation.
Example 1 Binary Search Let f(n) be the number of comparisons needed
to search for an element in a list of size n (suppose n is even). The search
in a list of size n is reduced to a search in a list of size n/2, and two
comparisons are needed to implement this reduction: one to decide which
half of the list to use and one to check whether any terms of the list
remain. So f(n) = f(n/2) + 2 for even n.
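The recurrence can be checked empirically by instrumenting a binary search with a counter. The following is a minimal sketch under our own counting convention (the function name is ours, not from the lecture): two comparisons per halving plus one final check, so the counts satisfy f(n) = f(n/2) + 2 with f(1) = 1.

```python
def binary_search_comparisons(lst, x):
    """Search a sorted list for x, counting comparisons.

    Illustration only: two comparisons per halving mirror the
    recurrence f(n) = f(n/2) + 2; the final check gives f(1) = 1.
    Returns (found, comparisons).
    """
    comparisons = 0
    lo, hi = 0, len(lst) - 1
    while lo < hi:
        comparisons += 1              # which half of the list to use?
        mid = (lo + hi) // 2
        if x > lst[mid]:
            lo = mid + 1
        else:
            hi = mid
        comparisons += 1              # do any terms of the list remain?
    comparisons += 1                  # final check: is lst[lo] the element?
    return lst[lo] == x, comparisons
```

On a sorted list of 16 elements the counter reports 9 comparisons, in agreement with f(16) = f(8) + 2 = f(4) + 4 = f(2) + 6 = f(1) + 8 = 9.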
Example 2 Finding the Maximum and Minimum of a List
Let {a1, a2, ..., an} be a list. If n = 1, then a1 is both the max and the min.
Suppose n > 1 and let f(n) be the total number of comparisons needed to find
the max and the min elements of a list with n elements. A list of size n is
split into two sub-lists of equal size (or with one sub-list having one more
element than the other). Two comparisons are then needed to implement the
conquer step: one to compare the maxima of the two sub-lists and one to
compare their minima. So the recurrence relation is f(n) = 2 f(n/2) + 2
for even n.
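The scheme in Example 2 can be written out directly. A minimal sketch (the function name is ours); the comparison count follows the lecture's model, two comparisons per conquer step, so f(1) = 0 and f(n) = 2 f(n/2) + 2 for even n:

```python
def max_min(lst):
    """Return (max, min, comparisons) for a nonempty list by
    divide and conquer, counting as in Example 2 (a sketch)."""
    n = len(lst)
    if n == 1:
        return lst[0], lst[0], 0      # one element is both max and min
    left_max, left_min, cl = max_min(lst[: n // 2])
    right_max, right_min, cr = max_min(lst[n // 2 :])
    # conquer: one comparison for the maxima, one for the minima
    return max(left_max, right_max), min(left_min, right_min), cl + cr + 2
```

For a list of 8 elements this model counts f(8) = 2 f(4) + 2 = 2(2 f(2) + 2) + 2 = 14 comparisons.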
Example 4 Fast Multiplication of Integers The divide-and-conquer
technique will be used. Suppose that a and b are integers with binary
expansions of length 2n (two 2n-bit integers). Split each integer into two
blocks of n bits each. Let
a = (a_{2n-1} a_{2n-2} ... a_1 a_0)_2,  b = (b_{2n-1} b_{2n-2} ... b_1 b_0)_2
and
a = 2^n A_1 + A_0,  b = 2^n B_1 + B_0,
where A_1 = (a_{2n-1} ... a_{n+1} a_n)_2, A_0 = (a_{n-1} ... a_1 a_0)_2,
B_1 = (b_{2n-1} ... b_{n+1} b_n)_2, B_0 = (b_{n-1} ... b_1 b_0)_2. So we can write
ab = (2^{2n} + 2^n) A_1 B_1 + 2^n (A_1 - A_0)(B_0 - B_1) + (2^n + 1) A_0 B_0.
This means that the multiplication of two 2n-bit integers can be carried out
using three multiplications of n-bit integers plus some shifts, subtractions
and additions. So if f(n) is the total number of bit operations needed to
multiply two n-bit integers, then f(2n) = 3 f(n) + Cn, where Cn is the number
of shifts, subtractions and additions needed in addition to the three n-bit
multiplications counted by 3 f(n).
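The three-product identity above translates directly into a recursive multiplication routine. Below is a minimal sketch (the function name and structure are ours, not from the lecture); it assumes n is a power of 2 and handles the possibly negative middle factors by recursing on absolute values:

```python
def fast_multiply(a, b, n):
    """Multiply two nonnegative n-bit integers using three half-size
    products, following the identity from Example 4 (a sketch):
    ab = (2^n + 2^(n/2)) A1 B1 + 2^(n/2) (A1-A0)(B0-B1) + (2^(n/2)+1) A0 B0.
    Assumes n is a power of 2."""
    if n == 1:
        return a * b
    half = n // 2
    A1, A0 = a >> half, a & ((1 << half) - 1)   # high / low half-blocks of a
    B1, B0 = b >> half, b & ((1 << half) - 1)   # high / low half-blocks of b
    p1 = fast_multiply(A1, B1, half)            # A1 * B1
    p3 = fast_multiply(A0, B0, half)            # A0 * B0
    # the middle factors can be negative, so recurse on absolute values
    sign = (1 if A1 >= A0 else -1) * (1 if B0 >= B1 else -1)
    p2 = sign * fast_multiply(abs(A1 - A0), abs(B0 - B1), half)
    # combine with shifts and additions only
    return (p1 << n) + ((p1 + p2 + p3) << half) + p3
```

For example, fast_multiply(13, 11, 4) splits 13 = (1101)_2 into A_1 = 3, A_0 = 1 and 11 = (1011)_2 into B_1 = 2, B_0 = 3, and the combination yields 143.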
Example 5 Fast Matrix Multiplication One can easily calculate that
multiplying two n x n matrices directly needs n^3 multiplications and
n^2 (n - 1) additions. Consequently this is O(n^3) operations. But there are
more efficient divide-and-conquer algorithms for multiplying two n x n
matrices. V. Strassen reduced the problem, for even n, to seven
multiplications and 15 additions of two half-size matrices. Since each
addition of two (n/2) x (n/2) matrices takes n^2/4 scalar additions, if f(n)
is the number of operations (multiplications and additions) used, then
f(n) = 7 f(n/2) + 15 n^2 / 4
for even n.
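To see when Strassen's recurrence pays off, the two operation counts can be compared numerically. A sketch, assuming n is a power of 2 and taking f(1) = 1 (one scalar multiplication) as an assumed base case; the exact crossover depends on these constants:

```python
def naive_ops(n):
    """Direct n x n matrix multiplication: n^3 multiplications plus
    n^2 (n - 1) additions, as computed in Example 5."""
    return n**3 + n**2 * (n - 1)

def strassen_ops(n):
    """Strassen's recurrence f(n) = 7 f(n/2) + 15 n^2 / 4 for n a
    power of 2, with the assumed base case f(1) = 1 (a sketch)."""
    if n == 1:
        return 1
    return 7 * strassen_ops(n // 2) + 15 * n**2 // 4
```

With these constants the divide-and-conquer count is larger for small n (22 vs. 12 at n = 2) and first drops below the direct count at n = 512, which is why the asymptotic advantage matters only for large matrices.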
So the recurrence relation f(n) = a f(n/b) + g(n) appears quite frequently.
How can we estimate the growth of such an f(n)? Let us suppose that f
satisfies the recurrence whenever n = b^k, k in N. Then, iterating the
recurrence,
f(n) = a f(n/b) + g(n)
     = a^2 f(n/b^2) + a g(n/b) + g(n)
     = ...
     = a^k f(n/b^k) + sum_{j=0}^{k-1} a^j g(n/b^j).
Since n/b^k = 1, we have
f(n) = a^k f(1) + sum_{j=0}^{k-1} a^j g(n/b^j).
This equation can be used to estimate the size of functions that satisfy
divide-and-conquer recurrence relations.
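The unrolling can be sanity-checked numerically by evaluating both sides. A minimal sketch (function names are ours, for illustration):

```python
import math

def f_recursive(n, a, b, g, f1):
    """Evaluate f(n) = a f(n/b) + g(n) directly, for n a power of b,
    with f(1) = f1."""
    if n == 1:
        return f1
    return a * f_recursive(n // b, a, b, g, f1) + g(n)

def f_unrolled(n, a, b, g, f1):
    """Evaluate the unrolled form
    f(n) = a^k f(1) + sum_{j=0}^{k-1} a^j g(n/b^j)."""
    k = round(math.log(n, b))                 # n = b^k, so k = log_b n
    return a**k * f1 + sum(a**j * g(n // b**j) for j in range(k))
```

For the binary-search recurrence (a = 1, b = 2, g(n) = 2, f(1) = 1) both forms give f(16) = 9; for a = 5, b = 2, g(n) = 3, f(1) = 7 they agree at every power of 2.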

Theorem 1
Let f be an increasing function that satisfies the recurrence relation
f(n) = a f(n/b) + c whenever n is divisible by b, where a, b in N, b > 1,
and c in R, c > 0. Then f(n) is
O(n^(log_b a)) if a > 1, and
O(log n) if a = 1.
Furthermore, when n = b^k, k in N, then f(n) = C1 n^(log_b a) + C2, where
C1 = f(1) + c/(a - 1) and C2 = -c/(a - 1).

Proof: First let n = b^k, k in N. Since g(n) = c, the unrolled recurrence
gives
f(n) = a^k f(1) + c sum_{j=0}^{k-1} a^j.
If a = 1, then f(n) = f(1) + ck. Since n = b^k, we have k = log_b n, so
f(n) = f(1) + c log_b n, and f(n) is O(log n) when n is a power of b. If n
is not a power of b, then b^k < n < b^(k+1) for some k in N. Since f(n) is
increasing,
f(n) <= f(b^(k+1)) = f(1) + c(k + 1) <= f(1) + c + c log_b n.
Therefore, f(n) is O(log n) if a = 1.
For a > 1, first let n = b^k, k in N. Then
f(n) = a^k f(1) + c (a^k - 1)/(a - 1) = C1 a^k + C2 = C1 n^(log_b a) + C2,
where C1 = f(1) + c/(a - 1) and C2 = -c/(a - 1), because
a^k = a^(log_b n) = n^(log_b a). If n is not a power of b, then
b^k < n < b^(k+1) for some k in N. Since f(n) is increasing,
f(n) <= f(b^(k+1)) = C1 a^(k+1) + C2 <= (a C1) n^(log_b a) + C2,
because a^k < a^(log_b n) = n^(log_b a). Therefore, f(n) is O(n^(log_b a))
if a > 1.
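The exact formula in Theorem 1 can be verified against the recurrence itself. A sketch for the a > 1 case (function names are ours); it uses n^(log_b a) = a^k to stay in exact arithmetic:

```python
def f_rec(n, a, b, c, f1):
    """f(n) = a f(n/b) + c for n a power of b, with f(1) = f1
    (the Theorem 1 setting)."""
    return f1 if n == 1 else a * f_rec(n // b, a, b, c, f1) + c

def theorem1_value(n, a, b, c, f1):
    """Theorem 1's exact value for n = b^k and a > 1:
    C1 * n^(log_b a) + C2, with C1 = f1 + c/(a-1), C2 = -c/(a-1)."""
    k = 0
    while b**k < n:                   # find k with n = b^k
        k += 1
    C1 = f1 + c / (a - 1)
    C2 = -c / (a - 1)
    return C1 * a**k + C2             # n^(log_b a) equals a^k here
```

For f(n) = 5 f(n/2) + 3 with f(1) = 7 (the recurrence of Example 6 below), both give f(64) = 121093.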


Example 6 Let f(n) = 5 f(n/2) + 3 and f(1) = 7. Find f(n) when n = 2^k,
k in N. Also estimate f(n) assuming that f is an increasing function.
Solution: Applying Theorem 1 with a = 5, b = 2, c = 3, we know that if
n = 2^k, then
f(n) = C1 n^(log_2 5) + C2 = (31/4) n^(log_2 5) - 3/4,
since C1 = f(1) + c/(a - 1) = 7 + 3/4 = 31/4 and C2 = -c/(a - 1) = -3/4.
If f is increasing, by Theorem 1 we see that f(n) is O(n^(log_2 5)).
Example 7 Estimate the number of comparisons used by a binary search.
Solution: From Example 1 we know that f(n) = f(n/2) + 2 for even n, if f(n)
is the number of comparisons needed to check whether an element x is in a
list of size n. Applying Theorem 1 with a = 1, b = 2, c = 2, it is clear
that f(n) is O(log n).
Example 8 Estimate the number of comparisons needed to find the max and
the min elements of a list with n elements.
Solution: From Example 2 we know that f(n) = 2 f(n/2) + 2 for even n, if
f(n) is the number of comparisons needed to find the max and the min in a
list of size n. Applying Theorem 1 with a = 2, b = 2, c = 2, it is clear
that f(n) is O(n^(log_2 2)) = O(n).
The following is a more general version of Theorem 1; it is called the
Master Theorem because it covers the complexity analysis of many
divide-and-conquer algorithms.
Theorem 2 MASTER THEOREM
Let f be an increasing function that satisfies the recurrence relation
f(n) = a f(n/b) + c n^d whenever n = b^k, k in N, where a, b in N, b > 1,
and c, d in R, c > 0, d >= 0. Then f(n) is
O(n^d) if a < b^d,
O(n^d log n) if a = b^d, and
O(n^(log_b a)) if a > b^d.
Example 10 Estimate the number of bit operations needed to multiply two
n-bit integers using the fast multiplication algorithm in Example 4.
Solution: From Example 4 we know that f(n) = 3 f(n/2) + Cn for even n, if
f(n) is the number of bit operations needed to multiply two n-bit integers
using the fast multiplication algorithm. Applying the Master Theorem with
a = 3, b = 2, d = 1, we have a > b^d, so it is clear that f(n) is
O(n^(log_2 3)). Since log_2 3 is approximately 1.585, this is faster than
the O(n^2) conventional algorithm.
Example 11 Estimate the number of multiplications and additions needed to
multiply two n x n matrices using the matrix multiplication algorithm in
Example 5.
Solution: Let f(n) be the number of multiplications and additions needed to
multiply two n x n matrices. From Example 5 we know that
f(n) = 7 f(n/2) + 15 n^2 / 4 when n is even. So by the Master Theorem with
a = 7, b = 2, c = 15/4, d = 2, we have a > b^d, and we see that f(n) is
O(n^(log_2 7)), approximately O(n^2.807).