
Divide and Conquer

General Method

Divide and conquer is a design strategy that is well known for breaking down
efficiency barriers. When the method applies, it often leads to a large
improvement in time complexity, for example from O(n^2) to O(n log n) for
sorting. The divide-and-conquer strategy is as follows: divide the problem
instance into two or more smaller instances of the same problem, solve the
smaller instances recursively, and assemble the solutions to form a solution of
the original instance. The recursion stops when an instance is reached that is
too small to divide. When dividing the instance, one can either use whatever
division comes most easily to hand or invest time in making the division
carefully so that the assembly is simplified.

Base Case: when the instance I of the problem P is sufficiently small, return the
answer P(I) directly, or resort to a different, usually simpler, algorithm that is
well suited for small instances.

Inductive Step:
1. Divide I into some number of smaller instances of the same problem P.
2. Conquer each of the smaller instances recursively to obtain their answers.
3. Combine the answers to produce an answer for the original instance I.

Divide and conquer has several nice properties. First, it very closely follows
the structure of an inductive proof, and therefore most often leads to rather
simple proofs of correctness. As in induction, one first proves that the base
case is correct. Then one can assume by strong (or structural) induction that the
recursive solutions are correct, and needs to show that, given correct solutions
to each smaller instance, the combined solution is a correct answer. A second
nice property is that divide and conquer can lead to quite efficient solutions to
a problem. To be efficient, however, one needs to be sure that the divide and
combine steps are efficient and that they do not create too many subinstances.
This brings us to the third nice property, which is that the work and span of
divide-and-conquer algorithms can be expressed as mathematical equations called
recurrences. Often these recurrences can be solved without too much difficulty,
making the analysis of the work and span of many divide-and-conquer algorithms
reasonably straightforward. Finally, divide and conquer is a naturally parallel
algorithmic technique: most often the subinstances can be solved in parallel.
This can lead to a significant amount of parallelism, since each level of the
recursion can create more instances to solve in parallel. Even if we only divide
our instance into two subinstances, each of those subinstances will itself
generate two more subinstances, and so on.
ALGORITHM:
dac(P)
{
    if P is small then return solution(P);
    else
    {
        divide P into smaller instances P1, P2, ..., Pk, k >= 1;
        apply dac recursively to each subinstance;
        return combine(dac(P1), dac(P2), ..., dac(Pk));
    }
}
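
As a concrete illustration of this template, here is a minimal C sketch that
finds the maximum of an array; the function range_max and the sample array are
our own, purely illustrative. It divides the range at its midpoint, conquers
each half recursively, and combines the two partial answers. Note that the two
recursive calls are independent, which is where the natural parallelism
mentioned above comes from.

#include <stdio.h>

/* Divide and conquer: maximum of arr[lo..hi] (inclusive). */
int range_max(const int arr[], int lo, int hi)
{
    if (lo == hi)                              /* base case: instance is small */
        return arr[lo];
    int mid = lo + (hi - lo) / 2;              /* divide at the midpoint */
    int left  = range_max(arr, lo, mid);       /* conquer left half */
    int right = range_max(arr, mid + 1, hi);   /* conquer right half */
    return left > right ? left : right;        /* combine the two answers */
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 4};
    printf("%d\n", range_max(a, 0, 4));        /* prints 9 */
    return 0;
}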
Master Theorem for Divide and Conquer

The master theorem is an analysis result that can be used to determine a big-O
bound for algorithms defined by recurrence relations. It is used to find the
time required by an algorithm and to express it in asymptotic notation.

For most recursive algorithms you will be able to find the time complexity using
the master theorem, but there are cases in which it is not applicable: when T(n)
is not monotone (for example, T(n) = sin n), or when the driving function f(n)
is not a polynomial.

Because the basic master theorem does not cover such cases, an advanced master
theorem was designed to handle recurrences of the form

T(n) = aT(n/b) + Θ(n^k log^p n)

where
n is the size of the problem,
a is the number of subproblems in the recursion, a > 0,
n/b is the size of each subproblem, b > 1, k >= 0, and p is a real number.

To solve a recurrence of this form, the following conditions are checked and the
matching bound is applied:
1. If log_b a > k, then T(n) = Θ(n^(log_b a)).
2. If log_b a = k, then:
   (a) if p > -1, T(n) = Θ(n^(log_b a) log^(p+1) n);
   (b) if p = -1, T(n) = Θ(n^(log_b a) log log n);
   (c) if p < -1, T(n) = Θ(n^(log_b a)).
3. If log_b a < k, then:
   (a) if p >= 0, T(n) = Θ(n^k log^p n);
   (b) if p < 0, T(n) = O(n^k).

Examples:
1. T(n) = 2T(n/2) + 1
here,
a = 2
b = 2
k = 0 (n^k = 1)
p = 0 (log^p n = 1)
also, log_b a = 1 and log_b a > k. [condition 1]

Therefore, T(n) = Θ(n^(log_b a))
= Θ(n)

2. T(n) = 2T(n/2) + n
here,
a = 2
b = 2
k = 1 (n^k = n)
p = 0 (log^p n = 1)
also, log_b a = 1 and log_b a = k. [condition 2]

Therefore, T(n) = Θ(n^(log_b a) log^(p+1) n)
= Θ(n log n)

3. T(n) = T(n/2) + n^2
here,
a = 1
b = 2
k = 2 (n^k = n^2)
p = 0 (log^p n = 1)
also, log_b a = 0 and log_b a < k. [condition 3]

Therefore, T(n) = Θ(n^k log^p n)
= Θ(n^2) (since log^p n = 1)

4. T(n) = T(n/2) + n^2/log n
here,
a = 1
b = 2
k = 2
p = -1 (n^2/log n = n^2 log^(-1) n)
also, log_b a = 0 and log_b a < k, with p < 0. [condition 3]

Therefore, T(n) = O(n^k)
= O(n^2)
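
The case analysis above is mechanical, so it can be sketched in code. The
following C helper is our own illustration, not a library routine: master
applies the three conditions to given values of a, b, k, and p and prints the
resulting bound, and running it reproduces the answers of the four examples.

#include <stdio.h>
#include <math.h>

/* Apply the advanced master theorem to T(n) = a*T(n/b) + Theta(n^k log^p n). */
void master(double a, double b, double k, double p)
{
    double e = log(a) / log(b);   /* e = log_b a */
    if (e > k)                    /* condition 1 */
        printf("Theta(n^%g)\n", e);
    else if (e == k) {            /* condition 2 (exact comparison is fine for these toy inputs) */
        if (p > -1)       printf("Theta(n^%g log^%g n)\n", e, p + 1);
        else if (p == -1) printf("Theta(n^%g log log n)\n", e);
        else              printf("Theta(n^%g)\n", e);
    } else {                      /* condition 3 */
        if (p >= 0) printf("Theta(n^%g log^%g n)\n", k, p);
        else        printf("O(n^%g)\n", k);
    }
}

int main(void)
{
    master(2, 2, 0, 0);    /* example 1: Theta(n^1) */
    master(2, 2, 1, 0);    /* example 2: Theta(n^1 log^1 n) = Theta(n log n) */
    master(1, 2, 2, 0);    /* example 3: Theta(n^2 log^0 n) = Theta(n^2) */
    master(1, 2, 2, -1);   /* example 4: O(n^2) */
    return 0;
}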

In this module, we will look at the following algorithms:


1. Binary Search
2. Merge Sort
3. Selection
4. Strassen’s Matrix Multiplication
1. Binary Search:
Binary search is the most popular search algorithm; it is efficient and one of
the most commonly used techniques for solving problems. It searches a sorted
array by repeatedly dividing the search interval in half: begin with an interval
covering the whole array; if the value of the search key is less than the item
in the middle of the interval, narrow the interval to the lower half, otherwise
narrow it to the upper half; repeat until the value is found or the interval is
empty. Note that it only works on a sorted set of elements.

Fact: if all the names in the world were written down together in sorted order
and you wanted to search for the position of a specific name, binary search
would accomplish this in at most 35 iterations, since 2^35 ≈ 3.4 × 10^10
comfortably exceeds the world's population.

Recursive Binary Search Algorithm:
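
A minimal recursive implementation in C might look as follows; the function name
binary_search and the sample array in main are illustrative, not part of any
standard library.

#include <stdio.h>

/* Recursively search the sorted array arr[low..high] (inclusive) for key.
   Returns the index of key, or -1 if it is not present. */
int binary_search(const int arr[], int low, int high, int key)
{
    if (low > high)                    /* interval is empty: key absent */
        return -1;
    int mid = low + (high - low) / 2;  /* midpoint, written to avoid overflow */
    if (key == arr[mid])
        return mid;
    else if (key < arr[mid])           /* narrow to the lower half */
        return binary_search(arr, low, mid - 1, key);
    else                               /* narrow to the upper half */
        return binary_search(arr, mid + 1, high, key);
}

int main(void)
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
    int n = sizeof a / sizeof a[0];
    printf("%d\n", binary_search(a, 0, n - 1, 23));  /* prints 5 */
    return 0;
}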


Binary Search Example
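For instance, searching for the key 23 in the sorted array
{2, 5, 8, 12, 16, 23, 38, 56, 72, 91} with the sketch above proceeds as follows:
1. low = 0, high = 9, mid = 4: arr[4] = 16 < 23, so narrow to the upper half.
2. low = 5, high = 9, mid = 7: arr[7] = 38 > 23, so narrow to the lower half.
3. low = 5, high = 6, mid = 5: arr[5] = 23, so the key is found at index 5.
Only three comparisons are needed for ten elements, in line with the logarithmic
bound derived next.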
Time Complexity:
Each step of binary search does a constant amount of work and then recurses on
half of the interval, so its time complexity satisfies the recurrence
T(n) = T(n/2) + c
Here a = 1, b = 2, k = 0, and p = 0, so log_b a = 0 = k, and condition 2 of the
master theorem gives
T(n) = Θ(n^k log^(p+1) n) = Θ(log n)
