
II YEAR / IV SEMESTER B.TECH. - IT

CS6402 - DESIGN AND ANALYSIS OF ALGORITHMS

UNIT - II

BRUTE FORCE AND DIVIDE-AND-CONQUER

COMPILED BY

M.KARTHIKEYAN, M.E., (AP/IT)

VERIFIED BY

HOD PRINCIPAL CEO/CORRESPONDENT

SENGUNTHAR COLLEGE OF ENGINEERING – TIRUCHENGODE

DEPARTMENT OF INFORMATION TECHNOLOGY

UNIT II
BRUTE FORCE AND DIVIDE-AND-CONQUER

 Brute Force
 Closest-Pair and Convex-Hull Problems - Exhaustive Search
 Travelling Salesman Problem
 Knapsack Problem
 Assignment problem.
 Divide and conquer methodology
 Merge sort
 Quick sort
 Binary search
 Multiplication of Large Integers
 Strassen’s Matrix Multiplication

List of Important Questions

UNIT II
BRUTE FORCE AND DIVIDE-AND-CONQUER
PART A

1. Design a brute force algorithm for computing the value of the polynomial p(x) =
a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 at a given point x0 and determine its worst case
efficiency class. [A/M 15][N/D2019]
2. Derive the complexity of binary search algorithm. [A/M 15][N/D2019]
3. What is the time and space complexity of merge sort? [A/M2019]
4. Write the Brute Force algorithm for string matching. [A/M2019]
5. Write an algorithm for brute force Closest-Pair Problem. [N/D 2016]
6. What is Closest-Pair Problem? [M/J 2016]
7. Give the general strategy of the divide and conquer method. [M/J 2016]
8. What is the time complexity of binary search? [M/J 12, A/M 15] [N/D 2016]
9. What do you mean by Divide and Conquer strategy? [M/J 13]
10. Write the control abstraction for the ordering paradigm. [M/J 13]
11. List out two drawbacks of binary search algorithm. [N/D 07]
12. Define feasible and optimal solution. [M/J 14,N/D 13]
13. What is the difference between quick sort and merge sort? [A/M 13]
14. Give the control abstraction for Divide and Conquer technique. [N/D 13]
15. Write any four examples for Brute force approach. [N/D 05]
16. Give the time efficiency and drawback of merge sort algorithm. [N/D 05]
17. Define External path length. [M/J 06]
18. Give any two methods for pattern matching. [M/J 06]
19. List out any two drawbacks of binary search algorithm. [N/D 07]
20. Give an example problem that cannot be solved by a Brute Force attack. [N/D
08]
21. What is Brute Force?
22. What are the different criteria used to improve the effectiveness of algorithm?
23. Define Convex-Hull Problem.

PART B

1. i) Solve the following using brute force algorithm:


Find whether the given string follows the specified pattern and return 0 or 1
accordingly.

Examples:
1) Pattern: “abba”,input:”redblueredblue” should return 1
2) Pattern:”aaaa”,input:”asdadasdasd” should return 1
3) Pattern:”aabb”, input:”xyzabcxyzabc” should return 0 [A/M 15] (10)

ii) Explain the convex hull problem and the solution involved behind it. (6) [A/M
15][A/M2019]

2. A pair contains two numbers and its second number is on the right side of the
first one in an array. The difference of a pair is the minus result while subtracting
the second number from the first one. Implement a function which gets the maximal
difference of all pairs in an array (using divide and conquer method). [A/M 15]

3. Explain divide and conquer method with merge sort algorithm. Give an example.
[M/J 12] [OR] Write an algorithm to sort a given list of elements using merge sort.
Show the operation of the algorithm on the list 38, 27, 43, 3, 9, 82, 10. [A/M 12](12)

4. Explain Binary Search algorithm with an example. [N/D 14][OR] What is divide and
conquer strategy and explain the binary search with suitable example problem. [N/D
11][OR] Write an algorithm to perform binary search on a sorted list of elements.
Analyze the algorithm for the best case, worst case and average case. [A/M 11,N/D
12]

5. Explain the Quick sort algorithm with the help of illustrative example. With an
example show that quick sort is not a stable sorting algorithm [N/D 2016] [N/D
12][A/M2019]

6. Write an algorithm for performing matrix multiplication using Strassen’s matrix


multiplication. [OR] Explain the method used for performing multiplication of two
large integers. Explain how Divide and conquer method can be used to solve the
same. [M/J2016][16m][N/D 2019]

7. Find all the solutions to the travelling salesman problem (cities and distances
shown below) by exhaustive search. Give the optimal solutions. (16) [M/J2016]

NOTES
UNIT 2
BRUTE FORCE AND DIVIDE-AND-CONQUER
PART A

1. Design a brute force algorithm for computing the value of the polynomial p(x) =
a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 at a given point x0 and determine its worst case
efficiency class. [A/M 15][N/D2019]

Design a brute force algorithm for computing the value of a polynomial at a
given point and determine its worst-case efficiency class.

Algorithm PolynomialEval
{
    int sum, n, i, x, a;
    Write("Enter the value of n");
    Read(n);
    sum = 0;
    Write("Enter the value of x");
    Read(x);
    for (i = n; i >= 0; i--)
    {
        Write("Enter the coefficient");
        Read(a);
        sum = sum + (a * pow(x, i));
    }
    Write("Value of polynomial is", sum);
}

The value of the polynomial is computed over its n + 1 terms. The basic operation in this
algorithm is the multiplication inside the for loop, which accumulates the evaluated sum.
Counting one pow(x, i) call as a single operation, the loop executes n + 1 times, so the
worst-case time complexity is O(n); if each power x^i is instead computed by repeated
multiplication, the brute-force algorithm performs Θ(n^2) multiplications.
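As an illustration (not part of the original notes), a minimal runnable C++ sketch of the same brute-force evaluation; the coefficient array and the test polynomial are made up for the example:

#include <cstdio>

// Brute-force evaluation of p(x) = a[n]x^n + ... + a[1]x + a[0].
// Each power x^i is recomputed from scratch with i multiplications, so
// counting multiplications the cost is Theta(n^2); counting loop
// iterations (one term per iteration) it is O(n).
double bruteForceEval(const double a[], int n, double x) {
    double sum = 0.0;
    for (int i = n; i >= 0; --i) {
        double power = 1.0;
        for (int k = 0; k < i; ++k)     // compute x^i from scratch
            power *= x;
        sum += a[i] * power;
    }
    return sum;
}

int main() {
    // p(x) = 3x^2 - 2x + 1, evaluated at x0 = 2; expected value 9.
    double a[] = {1.0, -2.0, 3.0};      // a[i] is the coefficient of x^i
    std::printf("p(2) = %g\n", bruteForceEval(a, 2, 2.0));
    return 0;
}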

2. Derive the complexity of binary search algorithm. [A/M 15][N/D2019]


To analyze the time complexity of binary search, first we compute the number of
comparisons expected for a successful search. Consider i such that 2^i >= N + 1. Then
2^(i-1) - 1 is the maximum number of elements left after the first comparison; similarly
2^(i-2) - 1 is the maximum number of elements left after the second comparison, and in
general 2^(i-k) - 1 is the maximum number of elements left after k comparisons. The
desired data item is therefore obtained by at most the i-th comparison, so the maximum
number of comparisons required for a successful search is
Cmax = i, i.e. Cmax ≈ log2(N + 1).
For an unsuccessful search the maximum number of comparisons is the same.

For the average number of comparisons Cs over successful searches, let C be the total
number of comparisons taken over all N positions:
Cs = C / N
C = i * 2^i - (2^0 + 2^1 + 2^2 + ... + 2^(i-1)) = i * 2^i - (2^i - 1) = 1 + 2^i (i - 1)
Now, assuming the probability of searching for any particular data item is 1/N,
Cs = [1 + 2^i (i - 1)] / N = [1 + (N + 1)(log2(N + 1) - 1)] / N
Cs ≈ log2 N (for large N)
It can be easily observed that for both successful and unsuccessful searches the expected
number of comparisons is O(log2 N).
Binary search therefore proves to be more efficient than sequential search, but it is not
efficient when implemented with linked lists.
On the basis of the above analysis, the worst-case number of comparisons of binary search is
E(n) = ⌊log2 n⌋ + 1 (equivalently, E(n) is the smallest integer with 2^E(n) > n), which is O(log2 n).
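As a quick, informal check of the ⌊log2 n⌋ + 1 bound (this snippet is an illustration, not part of the original notes), the worst-case comparison count of iterative binary search can be traced in C++:

#include <cstdio>

// Counts the three-way key comparisons made by iterative binary search
// on a sorted array of size n when the search always continues to the
// right (a worst-case path, e.g. a key larger than every element).
int worstCaseComparisons(int n) {
    int lo = 0, hi = n - 1, count = 0;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;
        ++count;                 // one comparison of the key with A[mid]
        lo = mid + 1;            // key "greater": search the right half
    }
    return count;                // equals floor(log2 n) + 1 for n >= 1
}

int main() {
    int ns[] = {1, 2, 16, 1000, 1000000};
    for (int n : ns)
        std::printf("n = %7d  worst-case comparisons = %d\n",
                    n, worstCaseComparisons(n));
    return 0;
}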

3. What is the time and space complexity of merge sort? [A/M2019]


Merge sort's time complexity is O(n log n) in the best, worst and average cases. Its space
complexity is O(n) when implemented with arrays, because merging requires an auxiliary array.

4. Write the Brute Force algorithm for string matching. [A/M2019]


Brute-Force String Matching
Searching for a pattern, P[0...m-1], in text, T[0...n-1]

Algorithm BruteForceStringMatch(T[0...n-1], P[0...m-1])


for i ← 0 to n-m do
j←0
while j < m and P[j] = T[i+j] do
j++
if j = m then return i
return -1
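A direct C++ rendering of the pseudocode above (a sketch for illustration; the test strings are made up):

#include <cstdio>
#include <string>

// Returns the index of the first occurrence of p in t, or -1 if there is none.
int bruteForceStringMatch(const std::string& t, const std::string& p) {
    int n = (int)t.size(), m = (int)p.size();
    for (int i = 0; i <= n - m; ++i) {        // try every shift i
        int j = 0;
        while (j < m && p[j] == t[i + j])
            ++j;
        if (j == m) return i;                 // whole pattern matched at shift i
    }
    return -1;
}

int main() {
    std::printf("%d\n", bruteForceStringMatch("NOBODY_NOTICED_HIM", "NOT"));  // 7
    std::printf("%d\n", bruteForceStringMatch("abc", "abd"));                 // -1
    return 0;
}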

5. Write an algorithm for the brute force Closest-Pair Problem. [N/D 2016]

Provide a function to find the closest two points among a set of given points in two
dimensions, i.e. to solve the Closest pair of points problem in the planar case.

The straightforward solution is an O(n^2) algorithm (which we can call the brute-force
algorithm); the pseudo-code (using indices) could be simply:

bruteForceClosestPair of P(1), P(2), ... P(N)


if N < 2 then
return ∞
else
minDistance ← |P(1) - P(2)|
minPoints ← { P(1), P(2) }
foreach i ∈ [1, N-1]
foreach j ∈ [i+1, N]
if |P(i) - P(j)| < minDistance then
minDistance ← |P(i) - P(j)|
minPoints ← { P(i), P(j) }
endif
endfor
endfor
return minDistance, minPoints
endif
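The same brute-force idea as a small, runnable C++ sketch (illustrative only; the sample points are made up):

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

// Checks every pair once: n(n-1)/2 distance computations, i.e. O(n^2).
void bruteForceClosestPair(const Point p[], int n, int& best_i, int& best_j) {
    double best = 1e300;                       // "infinity"
    best_i = best_j = -1;
    for (int i = 0; i < n - 1; ++i)
        for (int j = i + 1; j < n; ++j) {
            double d = std::sqrt((p[i].x - p[j].x) * (p[i].x - p[j].x) +
                                 (p[i].y - p[j].y) * (p[i].y - p[j].y));
            if (d < best) { best = d; best_i = i; best_j = j; }
        }
}

int main() {
    Point p[] = {{0, 0}, {5, 1}, {1, 1}, {7, 2}};
    int i, j;
    bruteForceClosestPair(p, 4, i, j);
    std::printf("closest pair: P(%d) and P(%d)\n", i, j);   // (0,0) and (1,1)
    return 0;
}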

6. What is Closest-Pair Problem? (M/J 2016)


The closest-pair problem finds the two closest points in a set of n points. It is
the simplest of a variety of problems in computational geometry that deals with
proximity of points in the plane or higher-dimensional spaces.

7. Give the general strategy of the divide and conquer method. (M/J 2016)

A divide and conquer algorithm works by recursively breaking down a problem into two
or more sub-problems of the same (or related) type (divide), until these become simple
enough to be solved directly (conquer).

Divide-and-conquer algorithms work according to the following general plan:


1. A problem is divided into several subproblems of the same type, ideally of about
equal size.
2. The subproblems are solved (typically recursively, though sometimes a different
algorithm is employed, especially when subproblems become small enough).
3. If necessary, the solutions to the subproblems are combined to get a solution to
the original problem.
Examples: Merge sort, Quick sort, Binary search, Multiplication of Large Integers and
Strassen's Matrix Multiplication.

8. What is the time complexity of binary search? [M/J 12, A/M 15] [N/D 2016]

Binary search algorithm:

Class: Search algorithm
Worst-case performance: O(log n)
Best-case performance: O(1)
Average-case performance: O(log n)
Worst-case space complexity: O(1)

9. What do you mean by divide and conquer strategy? [M/J 13]


1. A problem instance is divided into several smaller instances of the same problem,
ideally of about the same size.
2. The smaller instances are solved, typically recursively.
3. If necessary, the solutions obtained are combined to get the solution of the
original problem.

10. Write the control abstraction for the ordering paradigm. [M/J 13]

The general rule is that program i is stored on tape T(i mod m). On any given
tape the programs are stored in nondecreasing order of their lengths.

Greedy method control abstraction for the ordering paradigm

Algorithm Store(n, m)
{
    j = 0;
    for i = 1 to n do
    {
        Write("append program", i, "to permutation for tape", j);
        j = (j + 1) mod m;
    }
}
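A minimal C++ sketch of this ordering rule (illustrative; the program lengths and the number of tapes are made up): sort the programs in nondecreasing order of length and assign the i-th program in that order to tape i mod m.

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical program lengths and m = 2 tapes.
    std::vector<int> len = {12, 5, 8, 32, 7, 5, 18};
    int m = 2;

    // Greedy ordering rule: process programs in nondecreasing order of length.
    std::vector<int> order(len.size());
    for (size_t i = 0; i < order.size(); ++i) order[i] = (int)i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return len[a] < len[b]; });

    for (size_t i = 0; i < order.size(); ++i)
        std::printf("append program %d (length %d) to tape %d\n",
                    order[i], len[order[i]], (int)(i % m));
    return 0;
}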

11. List out two drawbacks of binary search algorithm. [N/D 07]

 In binary search the elements have to be arranged either in ascending or
descending order.
 Each time, the mid element has to be computed in order to partition the list into two
sublists.

12. Define feasible and optimal solution.[M/J 14,N/D 13]

Feasible solution: a solution that satisfies the problem's constraints.

Optimal solution: a feasible solution that is the best among all feasible solutions with respect
to the objective. (A locally optimal choice is the best choice among all feasible choices
available at that step.)

13. What is the difference between quick sort and merge sort? [A/M 13]

Quicksort:
For random data, this algorithm tends to partition the dataset into two similarly sized
pieces, placing one item in its final position, with the smaller items on one side and the larger
items on the other side. This means that in terms of locality, once we have a piece that fits
in memory, locality is exploited until this piece is fully sorted. Thus, apart from the O(n log n)
complexity of the algorithm, this method exploits locality quite reasonably.

Merge Sort:
This is used when several runs (pieces of the data) are already sorted. The locality
of the method comes from the fact that each run is traversed sequentially, so locality is
exploited reasonably. Also, the heap used to merge the runs has R entries, for R being the
number of runs to be sorted, which keeps locality reasonably good if the heap fits in memory.

14. Give the control abstraction for Divide and Conquer technique.[N/D 13]

Algorithm DC(P)
{
if P is too small then
return solution of P.
Else
{
Divide P and obtain subproblems P1, P2, ..., Pn where n >= 1
Apply DC to each subproblem
return Combine(DC(P1), DC(P2), ..., DC(Pn));
}
}

15. Write any four examples for Brute force approach.[N/D 05]

 Selection sort
 Bubble sort
 Sequential search
 Brute force string matching

16. Give the time efficiency and drawback of merge sort algorithm. [ N/D 05]

Time Efficiency: For the best, worst and average cases the time efficiency is O(n log n).

Drawback:

 Extra storage space of O(n) is required.
 It is suitable only for large instance sizes.

17. Define External path length.[M/J 06]

External path length E of an extended binary tree is defined as the sum of the lengths of
the paths from the root to each external node, taken over all external nodes.

E = I + 2n
where I -> the internal path length
      n -> the number of internal nodes in the tree.

18.Give any two methods for pattern matching. [M/J 06]

 Brute Force string matching
 Knuth-Morris-Pratt algorithm
 Boyer-Moore algorithm

19. List out any two drawbacks of binary search algorithm. [N/D 07]

 The input should be in sorted order.
 Even if the search element is in the first or last position, or the search is unsuccessful,
about log n comparisons are still needed.

20. Give an example problem that cannot be solved by a Brute Force attack.[N/D 08]

 Travelling salesman problem
 Knapsack problem

21. What is Brute Force?

Brute Force is a straightforward approach to solving a problem, usually directly based
on the problem's statement and definitions of the concepts involved.

22. What are the different criteria used to improve the effectiveness of algorithm?

• Input - zero or more quantities are externally supplied.
• Output - at least one quantity is produced.
• Definiteness - each instruction is clear and unambiguous.
• Finiteness - if we trace out the instructions of the algorithm, then for all cases the algorithm
terminates after a finite number of steps.
• Effectiveness - every instruction must be basic enough to be carried out.

23. Define Convex-Hull Problem.

A set of points (finite or infinite) in the plane is called convex if for any two points P
and Q in the set, the entire line segment with end points at P and Q belongs to the set.
The convex hull of a set S of points is the smallest convex set containing S, and the
convex-hull problem is the problem of constructing the convex hull (i.e. finding its extreme
points in order) for a given set of n points.
PART B

1. i) Solve the following using brute force algorithm:


Find whether the given string follows the specified pattern and return 0 or 1 accordingly.
Examples:
1) Pattern: “abba”,input:”redblueredblue” should return 1
2) Pattern:”aaaa”,input:”asdadasdasd” should return 1
3) Pattern:”aabb”, input:”xyzabcxyzabc” should return 0 [A/M 15] (10)

Solution:

The Brute force approach of string matching algorithm is very simple and straightforward.
According to this approach each character of pattern is compared with each corresponding
character of text.

Consider pattern: "abba" and text: "redblueredblue".

In the given text/pattern pair:

1) if we map 'r' of the string "red" to 'a' of the pattern, and
2) if we map 'b' of the string "blue" to 'b' of the pattern, then the algorithm will return 1.

Algorithm is given below:

int BruteForceAlgo(char t[14], char p[4], int n)
{
    int i, j, flag = 1;
    i = 0;
    j = 0;
    while (j < n)
    {
        if ((t[j] == 'r') && (p[i] == 'a'))
        {
            i = i + 1;
            j = j + 3;
        }
        else if ((t[j] == 'b') && (p[i] == 'b'))
        {
            i = i + 1;
            j = j + 4;
        }
        else
        {
            flag = 0;
            j++;
        }
    }
    return flag;
}

Step 1:

r e d b l u e b l u e r e d

a b b a

Step 2:

r e d b l u e b l u e r e d

a b b a

Step 3:

r e d b l u e b l u e r e d

a b b a

Step 4:

r e d b l u e b l u e r e d

a b b a

2) Consider pattern: "aaaa" and input text: "asdasdasdasd".

The simple logic to match the pattern against the text is to match the first letter 'a' of the
string "asd" with the letter 'a' of the pattern. The algorithm will be:

int BruteForceAlgo(char t[14],char p[4],int n)


{
int i,j,flag=1;
i=0;
j=0;

while(j<n)
{
if((t[j] == 'a')&&(p[i]=='a'))
{
i=i+1;
j=j+3;
}
else
{
flag=0;
j++;
}
}
return flag;
}

3) Similarly, for pattern "aabb" and input text "xyzabcxyzabc", we will map 'x' of the string
"xyz" to 'a' of the pattern, and 'a' of the string "abc" to 'b'. The algorithm will be:
int BruteForceAlgo(char t[20],char p[10],int n)
{
int i,j,flag=1;
i=0;
j=0;
while(j<n)
{
if((t[j] == 'x') && (p[i] == 'a'))
{
i=i+1;
j=j+3;
}
else if((t[j] == 'a') && (p[i] == 'b'))
{
i=i+1;
j=j+3;
}
else
{
flag = 0;
j++;
}
}
return flag;
}
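The routines above are hard-coded to the specific example strings. A more general brute-force sketch (an assumption added here, not part of the original notes) tries every possible non-empty substring for each pattern letter and backtracks on failure; note that with this general matcher the text "redblueredblue" corresponds to the pattern "abab" (its segments repeat in that order), so the first test case below uses that pattern.

#include <cstdio>
#include <map>
#include <string>

// Brute force: try every possible non-empty substring for each pattern letter,
// keep the letter -> substring binding consistent, and backtrack on failure.
bool matches(const std::string& pat, size_t pi, const std::string& s, size_t si,
             std::map<char, std::string>& bind) {
    if (pi == pat.size()) return si == s.size();
    char c = pat[pi];
    auto it = bind.find(c);
    if (it != bind.end()) {                                // letter already bound
        const std::string& w = it->second;
        if (s.compare(si, w.size(), w) != 0) return false;
        return matches(pat, pi + 1, s, si + w.size(), bind);
    }
    for (size_t len = 1; si + len <= s.size(); ++len) {    // try every candidate
        bind[c] = s.substr(si, len);
        if (matches(pat, pi + 1, s, si + len, bind)) return true;
        bind.erase(c);                                     // undo and try a longer piece
    }
    return false;
}

int wordPattern(const std::string& pattern, const std::string& text) {
    std::map<char, std::string> bind;
    return matches(pattern, 0, text, 0, bind) ? 1 : 0;
}

int main() {
    std::printf("%d\n", wordPattern("abab", "redblueredblue"));   // 1
    std::printf("%d\n", wordPattern("aaaa", "asdasdasdasd"));     // 1
    std::printf("%d\n", wordPattern("aabb", "xyzabcxyzabc"));     // 0
    return 0;
}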

ii) Explain the convex hull problem and the solution involved behind it. (6) [A/M
15][A/M2019]

Closest Pair Problem:


The brute force algorithm checks the distance between every pair of points and
keeps track of the minimum. The cost is n(n-1)/2 distance computations, i.e. quadratic.

The general approach of a merge-sort-like algorithm is to sort the points along the x-
dimension, then recursively divide the array of points and find the minimum. The only trick
is that we must check the distance between points from the two sets. This could have
quadratic cost if we checked each point against every other point, but since only a constant
number of candidate points has to be checked for each point in the dividing strip, the cost
is smaller.
Algorithm: Closest Pair (divide and conquer)
0. Initially sort the n points, Pi = (xi, yi), by their x dimension.
1. Then recursively divide the n points into S1 = {P1,...,Pn/2} and S2 = {Pn/2+1,...,Pn},
so that the S1 points are to the left of x = x_(n/2) and the S2 points are to the right of x = x_(n/2).
2. Recursively find the closest pair in each set: d1 for S1 and d2 for S2, and let d = min(d1, d2).
3. We must check every S1 point lying in the strip around the dividing line against every S2
point in the strip, and get the closest such distance d_between.
4. To do this efficiently, the points also need to be sorted along the y dimension, using a
merge-sort approach.
5. Then the minimum distance is min(d, d_between).
Analysis and Cost:

Brute-force version, for comparison:
Algorithm closestPoints(P)
min_dist <- infinity
for i <- 1 to n-1 do
    for j <- i+1 to n do
        dist <- sqrt((xi - xj)^2 + (yi - yj)^2)
        if dist < min_dist
            min_dist <- dist
            in1 <- i
            in2 <- j
return in1, in2

0. Initially sort the n points, Pi = (xi, yi), by their x dimension.

1. Then recursively divide the n points into S1 = {P1,...,Pn/2} and S2 = {Pn/2+1,...,Pn}, so that
the S1 points are to the left of x = x_(n/2) and the S2 points are to the right of x = x_(n/2).
Cost is O(1) for each recursive call.
2. Recursively find the closest pair in each set: d1 for S1 and d2 for S2, and let d = min(d1, d2).
Cost is O(1) for each recursive call.
Note that d is not necessarily the solution, because the closest pair could be a pair
between the sets, meaning one point from each set.
These points must lie in the vertical strip bounded by x = x_(n/2) - d and x = x_(n/2) + d
(the original diagram is not reproduced here).
3. We must check every S1 point lying in this strip against every S2 point in the strip, and get
the closest such distance d_between.
Note that for each S1 point there can be only 6 relevant S2 points, which must lie in
[yi - d, yi + d]. So the time for this step is Θ(6n/2) = Θ(3n). (The diagram showing the six
S2 points with respect to a point in S1 is not reproduced here.)
4. To accomplish this we also need the points sorted along the y dimension. We do not
want to sort from scratch for each recursive division, so we use a merge-sort approach,
and the cost of maintaining the sort along y is O(n).
5. Then the minimum distance is min(d, d_between).
The recurrence relation is
T(n) = 2T(n/2) + M(n), where M(n) is linear in n.
Using the Master Theorem (a = 2, b = 2, d = 1),
T(n) ∈ O(n lg n).
Note that it has been shown that the best that can be done is Ω(n lg n), so we have found
one of the best possible solutions.
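For reference, a sketch of the divide-and-conquer closest-pair idea in C++ (this is an illustration under simplifying assumptions, not the notes' own code): the strip is re-sorted by y at every level, which gives the slightly weaker O(n log^2 n) bound rather than the O(n log n) obtained by maintaining the y-order with a merge step.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double x, y; };

static double dist(const Point& a, const Point& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Closest distance among points[lo..hi-1], which must be pre-sorted by x.
static double closest(std::vector<Point>& pts, int lo, int hi) {
    int n = hi - lo;
    if (n <= 3) {                          // base case: brute force
        double best = 1e300;
        for (int i = lo; i < hi; ++i)
            for (int j = i + 1; j < hi; ++j)
                best = std::min(best, dist(pts[i], pts[j]));
        return best;
    }
    int mid = lo + n / 2;
    double midX = pts[mid].x;
    double d = std::min(closest(pts, lo, mid), closest(pts, mid, hi));

    // Collect the points within distance d of the dividing line, sorted by y.
    std::vector<Point> strip;
    for (int i = lo; i < hi; ++i)
        if (std::fabs(pts[i].x - midX) < d) strip.push_back(pts[i]);
    std::sort(strip.begin(), strip.end(),
              [](const Point& a, const Point& b) { return a.y < b.y; });

    // Each strip point is checked only against neighbours within d in y-order.
    for (size_t i = 0; i < strip.size(); ++i)
        for (size_t j = i + 1; j < strip.size() && strip[j].y - strip[i].y < d; ++j)
            d = std::min(d, dist(strip[i], strip[j]));
    return d;
}

int main() {
    std::vector<Point> pts = {{2,3},{12,30},{40,50},{5,1},{12,10},{3,4}};
    std::sort(pts.begin(), pts.end(),
              [](const Point& a, const Point& b) { return a.x < b.x; });
    // Closest pair here is (2,3) and (3,4), distance sqrt(2).
    std::printf("closest distance = %f\n", closest(pts, 0, (int)pts.size()));
    return 0;
}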
The divide-and-conquer convex hull approach:
 Sort the points on x.
 Divide the points equally into 2 subsets, and recursively find their convex hulls.
 Merge the two convex hulls:
 Connect the top points of the two hulls.
 Walk counter-clockwise on the left hull, and clockwise on the right hull.
 Follow a similar approach for the bottom side.
(A brute-force alternative for finding the hull edges is sketched below.)
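For comparison with the divide-and-conquer outline above, a brute-force convex hull sketch in C++ (illustrative, assuming points in general position; the sample points are made up): a segment PiPj is a hull edge exactly when all the other points lie on one side of the line through Pi and Pj, which gives an O(n^3) algorithm.

#include <cstdio>
#include <vector>

struct Point { double x, y; };

// Sign of the cross product: > 0 if r lies to the left of the directed line p -> q,
// < 0 if to the right, 0 if collinear.
double side(const Point& p, const Point& q, const Point& r) {
    return (q.x - p.x) * (r.y - p.y) - (q.y - p.y) * (r.x - p.x);
}

// Prints each hull edge once, in counter-clockwise orientation: the directed
// segment (i, j) is a hull edge iff every other point lies on its left side.
void bruteForceHullEdges(const std::vector<Point>& p) {
    int n = (int)p.size();
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j) {
            if (i == j) continue;
            bool allLeft = true;
            for (int k = 0; k < n; ++k)
                if (k != i && k != j && side(p[i], p[j], p[k]) < 0) {
                    allLeft = false;
                    break;
                }
            if (allLeft)
                std::printf("hull edge: (%g,%g) -> (%g,%g)\n",
                            p[i].x, p[i].y, p[j].x, p[j].y);
        }
}

int main() {
    std::vector<Point> pts = {{0,0},{4,0},{4,4},{0,4},{2,2}};
    bruteForceHullEdges(pts);    // prints the four sides of the square
    return 0;
}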

2. A pair contains two numbers and its second number is on the right side of the
first one in an array. The difference of a pair is the minus result while subtracting
the second number from the first one. Implement a function which gets the maximal
difference of all pairs in an array (using divide and conquer method). [A/M 15]

Solution:

Let the array contain the following elements:

10 15 5 20 12 13 11 8

Then the maximal-difference pair will be {20, 8} and the maximal difference will be 12.
For solving this problem using the divide and conquer strategy we will divide the array into
two sub-arrays of the same size.
The maximal difference of all pairs can be obtained from the following cases:

 Two numbers of the pair are both in the first sub-array.
 Two numbers of the pair are both in the second sub-array.
 The first number is the maximum of the first sub-array and the second number is the
minimum of the second sub-array; their difference gives the cross maximal difference.
Let us solve this problem using divide and conquer.

Step 1:

Index: 0  1  2  3  4  5  6  7
Value: 10 15  5 20 12 13 11  8

Divide the array into two sub-arrays of equal size: {10, 15, 5, 20} and {12, 13, 11, 8}.

Step 2: Divide each sub-array further in the same way.

Step 3: Solve each sub-array recursively (the figures for Steps 2 and 3 are not reproduced here).

Step 4:

LeftMaxDiff = 10 (from 15 - 5), RightMaxDiff = 5 (from 13 - 8)
CrossDifference = 20 - 8 = 12
Hence maximal difference = 12
The function can be written as

Algorithm:

#include <stdio.h>
#include <limits.h>

int DivideConquer(int* start, int* end, int* max, int* min)
{
    if (end == start)
    {
        *max = *min = *start;
        return INT_MIN;          /* a single element contains no pair */
    }
    int* middle = start + (end - start) / 2;

    int LeftMaxElement, LeftMinElement;
    int LeftDifference = DivideConquer(start, middle, &LeftMaxElement, &LeftMinElement);

    int RightMaxElement, RightMinElement;
    int RightDifference = DivideConquer(middle + 1, end, &RightMaxElement, &RightMinElement);

    int CrossDifference = LeftMaxElement - RightMinElement;

    *max = (LeftMaxElement > RightMaxElement) ? LeftMaxElement : RightMaxElement;
    *min = (LeftMinElement < RightMinElement) ? LeftMinElement : RightMinElement;

    int MaximumDifference = (LeftDifference > RightDifference) ? LeftDifference : RightDifference;
    MaximumDifference = (MaximumDifference > CrossDifference) ? MaximumDifference : CrossDifference;
    return MaximumDifference;
}

int GetMaxDiff(int Array[], unsigned length)
{
    if (Array == NULL || length < 2)
        return 0;
    int max, min;
    return DivideConquer(Array, Array + length - 1, &max, &min);
}

int main()
{
    int Array[] = {10, 15, 5, 20, 12, 13, 11, 8};
    unsigned length = 8;
    printf("%d", GetMaxDiff(Array, length));
    return 0;
}

3. Explain divide and conquer method with merge sort algorithm. Give an example.
[M/J 12] [OR] Write an algorithm to sort a given list of elements using merge sort.
Show the operation of the following on the list 38, 27, 43, 3, 9, 82,10. [A/M 12](12)

Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.

Algorithm:
ALGORITHM Mergesort ( A[0 … n-1] )
//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order

if n > 1
    copy A[0 … n/2 - 1] to B[0 … n/2 - 1]          // n/2 is integer division
    copy A[n/2 … n-1] to C[0 … (n - n/2) - 1]
    Mergesort ( B )
    Mergesort ( C )
    Merge ( B, C, A )

ALGORITHM Merge ( B[0 … p-1], C[0 … q-1], A[0 … p+q-1] )
//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: Sorted array A of elements from B & C

i ← 0; j ← 0; k ← 0
while i < p and j < q do
    if B[i] ≤ C[j]
        A[k] ← B[i]; i ← i + 1
    else
        A[k] ← C[j]; j ← j + 1
    k ← k + 1
if i = p
    copy C[j … q-1] to A[k … p+q-1]
else
    copy B[i … p-1] to A[k … p+q-1]
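Operation of merge sort on the list asked in the question (38, 27, 43, 3, 9, 82, 10), splitting the first ⌊n/2⌋ elements into B and the rest into C as in the pseudocode above:

[38 27 43 3 9 82 10]
split into [38 27 43] and [3 9 82 10]
[38 27 43] -> [38] and [27 43] -> [27] [43] -> merge to [27 43] -> merge with [38] to get [27 38 43]
[3 9 82 10] -> [3 9] and [82 10] -> [3 9] and [10 82] -> merge to [3 9 10 82]
final merge of [27 38 43] and [3 9 10 82] -> [3 9 10 27 38 43 82]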

Analysis:

• Input size: array size, n
• Basic operation: key comparison
• Best, worst and average cases exist.
  Worst case: during merging, neither of the two arrays becomes empty
  before the other one contains just one element.
• Let C(n) denote the number of times the basic operation is executed. Then
  C(n) = 2C(n/2) + C_merge(n) for n > 1, C(1) = 0,
  where C_merge(n) = n - 1 in the worst case, which gives
  C_worst(n) = n log2 n - n + 1 ∈ Θ(n log n).

Advantages:

• Number of comparisons performed is nearly optimal.


• Mergesort will never degrade to O(n^2)
• It can be applied to files of any size

Limitations:
• Uses O(n) additional memory.

4. Explain Binary Search algorithm with an example. [N/D 14][OR] What is divide and
conquer strategy and explain the binary search with suitable example problem. [N/D
11][OR] Write an algorithm to perform binary search on a sorted list of elements.
Analyze the algorithm for the best case, worst case and average case. [A/M 11,N/D
12]
Binary search is a dichotomic divide and conquer search algorithm. It inspects the
middle element of the sorted list. If it is equal to the sought value, then the position has been
found. Otherwise, if the key is less than the middle element, do a binary search on the first
half, else on the second half.
Algorithm:
Algorithm can be implemented as recursive or non-recursive algorithm.
ALGORITHM BinSrch ( A[0 … n-1], key)
//implements non-recursive binary search
//i/p: Array A in ascending order, key k
//o/p: Returns position of the key matched else -1

Analysis:
• Input size: array size, n
• Basic operation: key comparison
• The cases depend on the input:
  Best – the key matches the middle element.
  Worst – the key is not found, or is found only after the list has been narrowed to a single element.
• Let C(n) denote the number of times the basic operation is executed. Cworst(n) is the
worst-case efficiency. Since after each comparison the algorithm reduces the problem to
half its size, we have
  Cworst(n) = Cworst(n/2) + 1 for n > 1, Cworst(1) = 1
• Solving the recurrence equation using the Master Theorem (a = 1, b = 2, d = 0), the number
of times the search key is compared with an element in the array is
  Cworst(n) = ⌊log2 n⌋ + 1 ∈ Θ(log n)

Applications of binary search:


• Number guessing game
• Word lists/search dictionary etc
Advantages:
• Efficient on very big list
• Can be implemented iteratively/recursively
Limitations:
• Interacts poorly with the memory hierarchy
• Requires given list to be sorted
• Due to random access of list element, needs arrays instead of linked list.

Example:
Lists represented as arrays:
(The original figure is not reproduced here; assume a sorted array of nine elements,
indexed 0 to 8, with the value 38 at index 4, 44 at index 5 and 77 at index 6.)

The idea behind the Binary Search is to split this array in half multiple times to
"zero-in" on the value we're looking for. Assume we are looking for the value 44 in the
array, and we want to know the index of the element that this value is located in, if, in fact,
it is in the array at all (remember that we always have to prepare for the case where the
element is not found at all). In this example, the value 44 is located in the 6th element of
the array. In the linear search shown above, it would require six comparisons between
array elements and the search key to find out that the 6th element of the array contained
the value that we are looking for. Let's see how the binary search works now.
In the series of figures below, a sequence of passes is shown for the binary search.
Let's go over them step-by-step. The first step is to look at the array in its initial state. We
are going to have to keep three "pointers" into the array for this algorithm - three integer
variables that contain the indices of three different places that we are concerned with in
the array: the low index that we are still looking at, the high index that we are still looking at,
and the midpoint index between the low and high. The figure below shows you these
values. The low and high indices are the first and last element indices of the array, and the
midpoint is shown to be (low+high)/2. Note that we need to do integer division to find
the midpoint. That way, if the number of elements in the array is even, and thus the
"midpoint" is actually not an element, we will set the mid pointer to one less than what
floating point division would give us. If you didn't catch on to what integer division did way
back at the beginning of the semester, now is the time to make sure you do. So if there are
8 elements, then (0+7)/2 would be 3.5 with floating point division, but will return 3 with
integer division.
So, the mid pointer in the example below will start out pointing to element 4, which
contains the value 38. Here comes the key to the Binary Search - pay attention! First, you
compare the value that mid points to to see if it is the value we are looking for (44). It is
not in our case. So, you then ask the question: is the value that mid points to higher or
lower than our search value? In this example, it is lower. Now, since the array is sorted,
we KNOW that the value we are searching for MUST be in the UPPER HALF of the array,
since it is larger than the midpoint element value! So in one comparison, we have
discarded the lower half of the array as elements that we need to search! This is a
powerful tool for searching large arrays! But we still haven't found where the value really is,
so let's continue with the next figure.

So now we take the 2nd pass in the Binary Search algorithm. We know that we just
need to search the upper half of the array. We know that the value that mid pointed to
above is NOT the value that we are looking for. So, we now reset our "low" pointer to point
to the index of the element that is one higher than our previous midpoint. This actually
points to our search value, but we don't know that yet. So now low points to index 5, and
high still points to the last index in the array. We recalculate the midpoint, and using
integer division, (5+8)/2 will give 6 as the midpoint index to use. So now we repeat the
process. Does the element at the mid index contain our value? No. Is the value we are
searching for higher or lower than the value of the element at our midpoint index? In this
case, it is lower (44 is less than 77). So now we are going to reset our pointers and do a
third pass. See the next figure.

For our third pass, we reset the HIGH pointer since our search value was lower than
the value of the element at the midpoint. In the figure below, you can see that we reset the
high pointer to point to one less than the previous mid pointer (since we already knew that
the mid pointer did NOT point to our value). We leave the low pointer alone. Note that
now, low and high both point to element 5, and so (5+5)/2 = 5, and now the mid pointer
will point to 5 as well. So now we see if the element in the array that mid is pointing to
contains the value that we are searching for. And it does! We have successfully searched
for and found our value in three comparison steps! As noted at the beginning of this
section, the linear search would have taken six comparisons to find the same value.

5. Explain the Quick sort algorithm with the help of illustrative example. With an
example show that quick sort is not a stable sorting algorithm [N/D 2016] [N/D
12][A/M2019]
Quick Sort is also known as “partition-exchange sort”.

Definition:

Quick sort is a well-known sorting algorithm based on the divide and conquer approach. The
steps are:
1. Pick an element called pivot from the list
2. Reorder the list so that all elements which are less than the pivot come before the
pivot and all elements greater than pivot come after it. After this partitioning, the
pivot is in its final position. This is called the partition operation
3. Recursively sort the sub-list of lesser elements and the sub-list of greater elements
(a code sketch of these steps follows below).
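A minimal C++ sketch of these steps (illustrative; it assumes the Lomuto partition scheme with the last element as pivot, whereas Hoare's original scheme differs in detail):

#include <cstdio>
#include <utility>

// Lomuto partition: places the last element (pivot) in its final position
// and returns that position.
int partition(int a[], int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot)
            std::swap(a[++i], a[j]);
    std::swap(a[i + 1], a[hi]);
    return i + 1;
}

void quicksort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);     // sort the elements smaller than the pivot
        quicksort(a, p + 1, hi);     // sort the elements larger than the pivot
    }
}

int main() {
    int a[] = {5, 3, 1, 9, 8, 2, 4, 7};
    quicksort(a, 0, 7);
    for (int x : a) std::printf("%d ", x);
    std::printf("\n");
    return 0;
}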

Features:

• Developed by C.A.R. Hoare


• Efficient algorithm
• NOT stable sort
• Significantly faster in practice, than other algorithms

Analysis:
Worst-case running time:

When quicksort always has the most unbalanced partitions possible, the original call
takes cn time for some constant c, the recursive call on n-1 elements takes c(n-1) time,
the recursive call on n-2 elements takes c(n-2) time, and so on. (The tree of subproblem
sizes with their partitioning times is not reproduced here.)

When we total up the partitioning times for each level, we get

cn + c(n-1) + c(n-2) + ... + 2c = c(n + (n-1) + (n-2) + ... + 2) = c((n+1)(n/2) - 1).

The last line is because 1 + 2 + 3 + ... + n is the arithmetic series, as we saw when we
analyzed selection sort. (We subtract 1 because for quicksort the summation starts at 2,
not 1.) We have some low-order terms and constant coefficients, but when we use big-Θ
notation, we ignore them. In big-Θ notation, quicksort's worst-case running time is Θ(n^2).

Best-case running time

Quicksort's best case occurs when the partitions are as evenly balanced as
possible: their sizes either are equal or are within 1 of each other. The former case occurs
if the subarray has an odd number of elements and the pivot is right in the middle after
partitioning, so that each partition has (n-1)/2 elements. The latter case occurs if the
subarray has an even number n of elements, so that one partition has n/2 elements and
the other has n/2 - 1. In either of these cases, each partition has at most n/2 elements,
and the tree of subproblem sizes looks a lot like the tree of subproblem sizes for merge
sort, with the partitioning times looking like the merging times (the corresponding figure
is not reproduced here).

Using big-Θ notation, we get the same result as for merge sort: Θ(n log n).

Average-case running time

Showing that the average-case running time is also Θ(n log n) takes some pretty involved
mathematics, so we won't go there. But we can gain some intuition by looking at a couple
of other cases to understand why it might be O(n log n).

With an example show that quick sort is not a stable sorting algorithm [N/D
2016]

No, quicksort does not preserve the relative order of equal items. To prove this to
yourself, make an array of numbers, say 1 to 20 followed by 10 to 1, and a compare
function that claims they are always equal. Use quicksort on the array and note that your
pattern has been disturbed (you might "luck out" and the order might match up with the
disordering, so try a slightly different sized array... or your system's sort might not really be
quicksort, in which case implement your own).

A lazy try here, which assumes the system sort is indeed a quicksort:

#include <stdlib.h>
#include <stdio.h>

/* Comparator that claims every pair of elements is equal. */
int equal(const void *a, const void *b) {
    (void)a; (void)b;
    return 0;
}

int main(int argc, char *argv[]) {
    unsigned char values[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18,
                              19, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10};
    qsort(values, sizeof(values), 1, equal);

    for (int i = 0; i < (int)sizeof(values); i++) {
        printf("v[%d] = %d\n", i, (int)values[i]);
    }
}

sample output:

v[0] = 10
v[1] = 2
v[2] = 3
v[3] = 4
v[4] = 5
v[5] = 6
v[6] = 7
v[7] = 8
v[8] = 9
v[9] = 10
v[10] = 11
v[11] = 12
v[12] = 13
v[13] = 14
v[14] = 15
v[15] = 16
v[16] = 17
v[17] = 18
v[18] = 19
v[19] = 20
v[20] = 19
v[21] = 18
v[22] = 17
v[23] = 16
v[24] = 15
v[25] = 14
v[26] = 13
v[27] = 12
v[28] = 11
v[29] = 1

6. Write an algorithm for performing matrix multiplication using Strassen’s matrix


multiplication. [OR] Explain the method used for performing multiplication of two
large integers. Explain how Divide and conquer method can be used to solve the
same. [M/J2016][16m][N/D 2019]
Karatsuba algorithm for fast multiplication (Divide and Conquer)

Given two binary strings that represent value of two integers, find the product of two
strings. For example, if the first bit string is “1100” and second bit string is “1010”, output
should be 120.

For simplicity, let the lengths of the two strings be the same, say n.

A Naive Approach is to follow the process we study in school. One by one take all bits of
second number and multiply it with all bits of first number. Finally add all multiplications.
This algorithm takes O(n^2) time.

Using Divide and Conquer, we can multiply two integers in less time complexity. We
divide the given numbers in two halves. Let the given numbers be X and Y.

For simplicity let us assume that n is even

X = Xl * 2^(n/2) + Xr    [Xl and Xr contain the leftmost and rightmost n/2 bits of X]

Y = Yl * 2^(n/2) + Yr    [Yl and Yr contain the leftmost and rightmost n/2 bits of Y]

The product XY can be written as follows:

XY = (Xl * 2^(n/2) + Xr)(Yl * 2^(n/2) + Yr)
   = 2^n XlYl + 2^(n/2)(XlYr + XrYl) + XrYr

If we take a look at the above formula, there are four multiplications of size n/2, so we
basically divided the problem of size n into four sub-problems of size n/2. But that doesn't
help, because the solution of the recurrence T(n) = 4T(n/2) + O(n) is O(n^2). The tricky part
of this algorithm is to change the middle two terms to some other form so that only one extra
multiplication is sufficient. The following is the tricky expression for the middle two terms:

XlYr + XrYl = (Xl + Xr)(Yl + Yr) - XlYl- XrYr

So the final value of XY becomes

XY = 2^n XlYl + 2^(n/2) * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr

With the above trick, the recurrence becomes T(n) = 3T(n/2) + O(n), and the solution of this
recurrence is O(n^1.59).

What if the lengths of the input strings are different and are not even? To handle the different
length case, we append 0's at the beginning. To handle odd length, we put floor(n/2) bits in
the left half and ceil(n/2) bits in the right half. So the expression for XY changes to the following.

XY = 2^(2*ceil(n/2)) XlYl + 2^(ceil(n/2)) * [(Xl + Xr)(Yl + Yr) - XlYl - XrYr] + XrYr

The above algorithm is called Karatsuba algorithm and it can be used for any base.

Following is C++ implementation of above algorithm.

// C++ implementation of Karatsuba algorithm for bit string multiplication.


#include<iostream>
#include<stdio.h>
#include<string>

using namespace std;

// Helper method: given two unequal sized bit strings, converts them to
// the same length by adding leading 0s in the smaller string. Returns
// the new length.
int makeEqualLength(string &str1, string &str2)
{
int len1 = str1.size();
int len2 = str2.size();
if (len1 < len2)

{
for (int i = 0 ; i < len2 - len1 ; i++)
str1 = '0' + str1;
return len2;
}
else if (len1 > len2)
{
for (int i = 0 ; i < len1 - len2 ; i++)
str2 = '0' + str2;
}
return len1; // If len1 >= len2
}

// The main function that adds two bit sequences and returns the addition
string addBitStrings( string first, string second )
{
string result; // To store the sum bits

// make the lengths same before adding


int length = makeEqualLength(first, second);
int carry = 0; // Initialize carry

// Add all bits one by one


for (int i = length-1 ; i >= 0 ; i--)
{
int firstBit = first.at(i) - '0';
int secondBit = second.at(i) - '0';

// boolean expression for sum of 3 bits


int sum = (firstBit ^ secondBit ^ carry)+'0';

result = (char)sum + result;

// boolean expression for 3-bit addition


carry = (firstBit&secondBit) | (secondBit&carry) | (firstBit&carry);
}

// if overflow, then add a leading 1


if (carry) result = '1' + result;

return result;
}

// A utility function to multiply single bits of strings a and b


int multiplyiSingleBit(string a, string b)
{ return (a[0] - '0')*(b[0] - '0'); }

// The main function that multiplies two bit strings X and Y and returns
// result as long integer
long int multiply(string X, string Y)
{
// Find the maximum of lengths of x and Y and make length

// of smaller string same as that of larger string
int n = makeEqualLength(X, Y);

// Base cases
if (n == 0) return 0;
if (n == 1) return multiplyiSingleBit(X, Y);

int fh = n/2; // First half of string, floor(n/2)


int sh = (n-fh); // Second half of string, ceil(n/2)

// Find the first half and second half of first string.


// Refer http://goo.gl/lLmgn for substr method
string Xl = X.substr(0, fh);
string Xr = X.substr(fh, sh);

// Find the first half and second half of second string


string Yl = Y.substr(0, fh);
string Yr = Y.substr(fh, sh);

// Recursively calculate the three products of inputs of size n/2


long int P1 = multiply(Xl, Yl);
long int P2 = multiply(Xr, Yr);
long int P3 = multiply(addBitStrings(Xl, Xr), addBitStrings(Yl, Yr));

// Combine the three products to get the final result.


return P1*(1<<(2*sh)) + (P3 - P1 - P2)*(1<<sh) + P2;
}

// Driver program to test the above functions


int main()
{
printf ("%ld\n", multiply("1100", "1010"));
printf ("%ld\n", multiply("110", "1010"));
printf ("%ld\n", multiply("11", "1010"));
printf ("%ld\n", multiply("1", "1010"));
printf ("%ld\n", multiply("0", "1010"));
printf ("%ld\n", multiply("111", "111"));
printf ("%ld\n", multiply("11", "11"));
}

Output:

120
60
30
10
0
49
9

Time Complexity: Time complexity of the above solution is O(n^1.59).

The time complexity of multiplication can be further improved using another divide and
conquer algorithm, the fast Fourier transform.

7. Find all the solutions to the travelling salesman problem (cities and distances
shown below) by exhaustive search. Give the optimal solutions. (16) [M/J2016]

The traveling salesman problem (TSP) is one of the best-known combinatorial problems.
The problem asks to find the shortest tour through a given set of n cities that visits each
city exactly once before returning to the city where it started.
The problem can be conveniently modelled by a weighted graph, with the graph's vertices
representing the cities and the edge weights specifying the distances. Then the problem
can be stated as the problem of finding the shortest Hamiltonian circuit of the graph. (A
Hamiltonian circuit is defined as a cycle that passes through all the vertices of the graph
exactly once.) A Hamiltonian circuit can also be defined as a sequence of n + 1 adjacent
vertices v_i0, v_i1, ..., v_i(n-1), v_i0, where the first vertex of the sequence is the same as
the last one and all the other n - 1 vertices are distinct. All circuits start and end at one
particular vertex. (The figure presenting a small instance of the problem and its solution
by this method is not reproduced here.)
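Since the figure with the cities and distances is not reproduced here, a small exhaustive-search sketch in C++ over a made-up 4-city distance matrix shows the idea: fix the start city and try every permutation of the remaining cities.

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical 4-city instance (the matrix is made up for illustration);
    // dist[i][j] is the distance between city i and city j.
    const int n = 4;
    int dist[n][n] = {{0, 2, 5, 7},
                      {2, 0, 8, 3},
                      {5, 8, 0, 1},
                      {7, 3, 1, 0}};

    std::vector<int> perm = {1, 2, 3};    // fix city 0 as the start/end city
    std::vector<int> bestTour;
    int bestLen = 1 << 30;

    do {                                   // enumerate all (n-1)! tours
        int len = dist[0][perm[0]];
        for (int i = 0; i + 1 < (int)perm.size(); ++i)
            len += dist[perm[i]][perm[i + 1]];
        len += dist[perm.back()][0];
        if (len < bestLen) { bestLen = len; bestTour = perm; }
    } while (std::next_permutation(perm.begin(), perm.end()));

    // For this matrix the optimum is 11, e.g. the tour 0 -> 1 -> 3 -> 2 -> 0.
    std::printf("optimal length = %d, tour: 0", bestLen);
    for (int c : bestTour) std::printf(" -> %d", c);
    std::printf(" -> 0\n");
    return 0;
}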
