Daa Unit 2 - Completed ND2019 PDF
UNIT - II
UNIT II
BRUTE FORCE AND DIVIDE-AND-CONQUER
Brute Force
Closest-Pair and Convex-Hull Problems-Exhaustive Search
Travelling Salesman Problem
Knapsack Problem
Assignment problem.
Divide and conquer methodology
Merge sort
Quick sort
Binary search
Multiplication of Large Integers
Strassen’s Matrix Multiplication
List of Important Questions
UNIT II
BRUTE FORCE AND DIVIDE CONQUER
PART A
1. Design a brute force algorithm for computing the value of polynomial p(x) =
a_n x^n + a_(n-1) x^(n-1) + ... + a_1 x + a_0 at a given point x0 and determine its worst case
efficiency class. [A/M 15][N/D2019]
2. Derive the complexity of binary search algorithm. [A/M 15][N/D2019]
3. What is the time and space complexity of merge sort.[A/M2019]
4. Write The Brute Force algorithm to String Matching.[A/M2019]
5. Write an algorithm for brute force Closest-Pair Problem. [N/D 2016]
6. What is Closest-Pair Problem? [M/J 2016]
7. Give the General Strategy divide and conquer method. [M/J 2016]
8. What is time complexity of binary search? [M/J 12, A/M 15] [N/D 2016]
9. What do you meant by Divide and Conquer strategy? [M/J 13]
10. Write the control abstraction for the ordering paradigm. [M/J 13]
11. List out two drawbacks of binary search algorithm. [N/D 07]
12. Define feasible and optimal solution. [M/J 14,N/D 13]
13. What is the difference between quick sort and merge sort? [A/M 13]
14. Give the control abstraction for Divide and Conquer technique. [N/D 13]
15. Write any four examples for Brute force approach. [N/D 05]
16. Give the time efficiency and drawback of merge sort algorithm. [N/D 05]
17. Define External path length. [M/J 06]
18. Give any two methods for pattern matching. [M/J 06]
19. List out any two drawbacks of binary search algorithm. [N/D 07]
20. Give an example problem that cannot be solved by a Brute Force attack. [N/D
08]
21. What is Brute Force?
22. What are the different criteria used to improve the effectiveness of algorithm?
23. Define Convex-Hull Problem.
PART B
Examples:
1) Pattern: "abba", input: "redblueredblue" should return 1
2) Pattern: "aaaa", input: "asdadasdasd" should return 1
3) Pattern: "aabb", input: "xyzabcxyzabc" should return 0 [A/M 15] (10)
ii) Explain the convex hull problem and the solution involved behind it. (6) [A/M
15][A/M2019]
2. A pair contains two numbers and its second number is on the right side of the
first one in an array. The difference of a pair is the minus result while subtracting
the second number from the first one. Implement a function which gets the maximal
difference of all pairs in an array (using divide and conquer method). [A/M 15]
3. Explain divide and conquer method with merge sort algorithm. Give an example.
[M/J 12] [OR] Write an algorithm to sort a given list of elements using merge sort.
Show the operation of the following on the list 38, 27, 43, 3, 9, 82, 10. [A/M 12](12)
4. Explain Binary Search algorithm with an example. [N/D 14][OR] What is divide and
conquer strategy and explain the binary search with suitable example problem. [N/D
11][OR] Write an algorithm to perform binary search on a sorted list of elements.
Analyze the algorithm for the best case, worst case and average case. [A/M 11,N/D
12]
5. Explain the Quick sort algorithm with the help of illustrative example. With an
example show that quick sort is not a stable sorting algorithm [N/D 2016] [N/D
12][A/M2019]
7. Find all the solutions to the travelling salesman problem (cities and distances
shown below) by exhaustive search. Give the optimal solutions. (16) [M/J2016]
NOTES
UNIT 2
BRUTE FORCE AND DIVIDE CONQUER
PART A
1. Design a brute force algorithm for computing the value of polynomial p(x) = a_n x^n +
a_(n-1) x^(n-1) + ... + a_1 x + a_0 at a given point x0 and determine its worst case efficiency
class. [A/M 15][N/D2019]
The value of the polynomial is computed term by term for all n terms. The basic
operation in this algorithm is the for loop which accumulates the evaluated sum. Hence the worst
case time complexity of this algorithm is O(n).
2. Derive the complexity of binary search algorithm. [A/M 15][N/D2019]
Cs = C / N
C = i*2^i - (2^0 + 2^1 + ... + 2^(i-1)) = i*2^i - (2^i - 1)
Now, assuming the probability of searching for any requested data item is 1/N,
Cs = [1 + (N + 1)(log2(N + 1) - 1)] / N
Cs ≈ log2 N (for large N)
It can easily be observed that for both successful and unsuccessful search the expected
number of comparisons is given by:
Cs = Cu = O(log2 N)
It should be noted that binary search proves to be more efficient than
sequential search. But when implemented with linked lists it would not be efficient.
On the basis of the above analysis the time complexity of binary search is:
E(n) = floor(log2 n) + 1, since 2^E(n) > n, that is O(log2 n).
6. What is Closest-Pair Problem? [M/J 2016]
The closest-pair problem is to find the two closest points among a set of given points in two
dimensions, i.e. to solve the closest pair of points problem in the planar case.
7. Give the General Strategy of divide and conquer method. [M/J 2016]
A divide and conquer algorithm divides a problem into smaller subproblems, solves each
subproblem (typically recursively), and combines the sub-solutions into a solution to
the original problem.
Examples: Merge sort, Quick sort, Binary search, Multiplication of Large Integers and
Strassen's Matrix Multiplication.
8. What is time complexity of binary search? [M/J 12, A/M 15] [N/D 2016]
Class: Search algorithm
Worst case performance: O(log n)
Best case performance: O(1)
Average case performance: O(log n)
Worst case space complexity: O(1)
10. Write the control abstraction for the ordering paradigm. [M/J 13]
The general rule is that program i is stored on tape T(i mod m). On any given
tape the programs are stored in nondecreasing order of their lengths.
Algorithm Store(n, m)
{
    j := 0;
    for i := 1 to n do
    {
        write("append program", i, "to permutation for tape", j);
        j := (j + 1) mod m;
    }
}
11. List out two drawbacks of binary search algorithm. [N/D 07]
The list must already be sorted before searching.
It is not efficient when the list is implemented as a linked list, since random access to
the middle element is required.
13. What is the difference between quick sort and merge sort? [A/M 13]
Quicksort:
For random data, this algorithm tends to partition the dataset into two similarly sized
pieces, placing one item in its final position, with the smaller items on one side and the larger
items on the other. This means that in terms of locality, once we have a piece that fits
in memory, locality is exploited until that piece is fully sorted. Thus, apart from the O(n log n)
expected complexity of the algorithm, this method exploits locality quite reasonably.
Merge Sort:
This is used when several runs (pieces of the data) are already sorted. The locality
of the method comes from the fact that each run is traversed sequentially, so locality is
exploited reasonably. Also, the heap used to merge R runs has size R (with log R cost per
element), which keeps locality reasonably well as long as the heap fits in memory.
14. Give the control abstraction for Divide and Conquer technique. [N/D 13]
Algorithm DC(P)
{
    if P is too small then
        return solution of P;
    else
    {
        Divide P to obtain P1, P2, ..., Pn, where n >= 1;
        Apply DC to each subproblem;
        return Combine(DC(P1), DC(P2), ..., DC(Pn));
    }
}
15. Write any four examples for Brute force approach.[N/D 05]
Selection sort
Bubble sort
Sequential search
Brute force string matching
16. Give the time efficiency and drawback of merge sort algorithm. [N/D 05]
Time Efficiency: For best, average and worst cases the time efficiency is O(n log n).
Drawback: It uses O(n) additional memory.
17. Define External path length. [M/J 06]
External path length E of an extended binary tree is defined as the sum of the lengths of
the paths, taken over all external nodes, from the root to each external node.
E = I + 2n
where I is the internal path length and
n is the number of internal nodes in the tree.
19. List out any two drawbacks of binary search algorithm. [N/D 07]
20. Give an example problem that cannot be solved by a Brute Force attack.[N/D 08]
22. What are the different criteria used to improve the effectiveness of algorithm?
23. Define Convex-Hull Problem.
A set of points (finite or infinite) in the plane is called convex if for any two points P
and Q in the set, the entire line segment with the end points at P and Q belongs to the set.
The convex hull of a set S of points is the smallest convex set containing S; the convex-hull
problem is the problem of constructing the convex hull for a given set of n points.
PART B
Solution:
The brute force approach to string matching is simple and straightforward:
each character of the pattern is compared with the corresponding character of the text.
flag=0;
j++;
}
}
return flag;
}
Step 1:
r e d b l u e b l u e r e d
a b b a
Step 2:
r e d b l u e b l u e r e d
a b b a
Step 3:
r e d b l u e b l u e r e d
a b b a
Step 4:
r e d b l u e b l u e r e d
a b b a
The simple logic to match the pattern against the text is to match the first letter 'a' of the
string "asd" with the letter 'a' of the pattern. The algorithm will be
while(j<n)
{
if((t[j] == 'a')&&(p[i]=='a'))
{
i=i+1;
j=j+3;
}
else
{
flag=0;
j++;
}
}
return flag;
}
we will map 'x' of the "xyz" string with 'a', and 'a' of the "abc" string with 'b'. The algorithm will be
int BruteForceAlgo(char t[20],char p[10],int n)
{
int i,j,flag=1;
i=0;
j=0;
while(j<n)
{
if((t[j] == 'x') && (p[i] == 'a'))
{
i=i+1;
j=j+3;
}
else if((t[j] == 'a') && (p[i] == 'b'))
{
i=i+1;
j=j+3;
}
else
{
flag = 0;
j++;
}
}
return flag;
}
ii) Explain the convex hull problem and the solution involved behind it. (6) [A/M
15][A/M2019]
The general approach of a merge-sort like algorithm is to sort the points along the x-
dimension, then recursively divide the array of points and find the minimum in each half. The
only trick is that we must also check distances between points from the two sets. This could
have quadratic cost if we checked each point against every other; but since only a constant
number of cross-set candidates must be checked per point, the cost is less.
Algorithm Closest Pair
0. Initially sort the n points, Pi = (xi, yi), by their x dimension.
1. Then recursively divide the n points into S1 = {P1,...,Pn/2} and S2 = {Pn/2+1,...,Pn},
so that S1 points are to the left of x = xn/2 and S2 points are to the right of x = xn/2.
2. Recursively find the closest pair in each set: d1 for S1 and d2 for S2; d = min(d1, d2).
3. Check all the S1 points lying in the strip against every S2 point in the strip, and get the
closest cross-set distance d_between.
4. To do this efficiently, the points must also be sorted along the y dimension, using a
merge sort approach.
5. Then the minimum distance is min(d, d_between).
Analysis and Cost:
For comparison, the brute-force closest-pair algorithm checks every pair:
Algorithm closestPoints(P)
{
    min_dist <- infinity
    for i <- 1 to n-1 do
        for j <- i+1 to n do
            dist <- sqrt((xi - xj)^2 + (yi - yj)^2)
            if dist < min_dist
                min_dist <- dist
                in1 <- i
                in2 <- j
    return in1, in2
}
2. Recursively find the closest pair in each set, d1 of S1 and d2 of S2; d = min(d1, d2).
Cost is O(1) for each recursive call.
Note that d is not necessarily the solution, because the closest pair could be a pair
between the sets, meaning one point from each set. Such points must lie in the
vertical strip bounded by x = xn/2 - d and x = xn/2 + d.
3. We must check all the S1 points lying in this strip against every S2 point in the strip, and
get the closest cross-set distance d_between.
Note that for each S1 point there can be at most 6 candidate S2 points, which must
lie in [yi - d, yi + d]. So the time for this step is Θ(6n/2) = Θ(3n) = Θ(n).
4. To accomplish this we also need the points sorted along the y dimension. We do not
want to sort from scratch for each recursive division, so we use a merge sort approach,
and the cost of maintaining the sort along y is O(n).
5. Then the minimum distance is min(d, d_between).
The recurrence relation is
T(n) = 2T(n/2) + M(n), where M(n) is linear in n.
Using the Master Theorem (a = 2, b = 2, d = 1),
T(n) ∈ O(n lg n)
Note that it has been shown that the best that can be done is Ω(n lg n), so this is
one of the best possible solutions.
Convex hull by divide and conquer:
Sort the points on x.
Divide the points equally into 2 subsets, and recursively find their convex hulls.
Walk counterclockwise on the left hull, and clockwise on the right hull, to merge the
two hulls along the top side.
Follow a similar approach for the bottom side.
2. A pair contains two numbers and its second number is on the right side of the
first one in an array. The difference of a pair is the minus result while subtracting
the second number from the first one. Implement a function which gets the maximal
difference of all pairs in an array (using divide and conquer method). [A/M 15]
Solution:
Consider the array
10 15 5 20 12 13 11 8
The pair giving the maximal difference is {20, 8} and the maximal difference is 12.
For solving this problem using the divide and conquer strategy we divide the array into
two sub-arrays of the same size.
The maximal difference of all pairs can be obtained as follows
Step 1:
0 1 2 3 4 5 6 7
10 15 5 20 12 13 11 8
Divide the array into two sub-arrays of equal size
0 1 2 3 4 5 6 7
10 15 5 20 12 13 11 8
Step 2:
Step 3:
Step 4:
LeftMaxDiff = 10 RightMaxDiff = 5
CrossDifference= 20 – 8 = 12
Hence maximal difference = 12
The function can be written as
Algorithm:
}
int GetMaxDiff(int Array[], unsigned length)
{
    if (Array == NULL || length < 2)
        return 0;
    int max, min;
    return DivideConquer(Array, Array + length - 1, &max, &min);
}
void main()
{
    int Array[] = {10, 15, 5, 20, 12, 13, 11, 8};
    unsigned length = 8;
    printf("%d", GetMaxDiff(Array, length));
}
3. Explain divide and conquer method with merge sort algorithm. Give an example.
[M/J 12] [OR] Write an algorithm to sort a given list of elements using merge sort.
Show the operation of the following on the list 38, 27, 43, 3, 9, 82,10. [A/M 12](12)
Definition:
Merge sort is a sort algorithm that splits the items to be sorted into two groups,
recursively sorts each group, and merges them into a final sorted sequence.
Algorithm:
ALGORITHM Mergesort ( A[0… n-1] )
//sorts array A by recursive mergesort
//i/p: array A
//o/p: sorted array A in ascending order
if n > 1
copy A[0 … (n/2 - 1)] to B[0 … (n/2 - 1)]
copy A[n/2 … n-1] to C[0 … (n/2 - 1)]
Mergesort ( B[0… (n/2 -1)] )
Mergesort ( C[0… (n/2 -1)] )
Merge ( B, C, A )
ALGORITHM Merge ( B[0… p-1], C[0… q-1], A[0… p+q-1] )
//merges two sorted arrays into one sorted array
//i/p: arrays B, C, both sorted
//o/p: Sorted array A of elements from B & C
Analysis:
The recurrence for the number of key comparisons is C(n) = 2C(n/2) + Cmerge(n) for
n > 1, C(1) = 0, where Cmerge(n) is at most n - 1; by the Master Theorem the time
efficiency is Θ(n log n) in the best, average and worst cases.
Advantages:
• The number of comparisons performed is nearly optimal; performance is Θ(n log n)
even in the worst case.
Limitations:
• Uses O(n) additional memory.
4. Explain Binary Search algorithm with an example. [N/D 14][OR] What is divide and
conquer strategy and explain the binary search with suitable example problem. [N/D
11][OR] Write an algorithm to perform binary search on a sorted list of elements.
Analyze the algorithm for the best case, worst case and average case. [A/M 11,N/D
12]
Binary search is a dichotomic divide and conquer search algorithm. It inspects the
middle element of the sorted list. If it is equal to the sought value, then the position has been
found. Otherwise, if the key is less than the middle element, do a binary search on the first
half, else on the second half.
Algorithm:
Algorithm can be implemented as recursive or non-recursive algorithm.
ALGORITHM BinSrch ( A[0 … n-1], key)
//implements non-recursive binary search
//i/p: Array A in ascending order, key k
//o/p: Returns position of the key matched else -1
Analysis:
• Input size: Array size, n
• Basic operation: key comparison
• Depend on
Best – key matched with the mid element
Worst – key not found, or key found only at the last comparison
• Let C(n) denote the number of times the basic operation is executed. Then
Cworst(n) = worst case efficiency. Since after each comparison the algorithm
divides the problem into half the size, we have
Cworst(n) = Cworst(n/2) + 1 for n > 1
Cworst(1) = 1
• Solving the recurrence equation using master theorem, to give the number of
times the search key is compared with an element in the array, we have:
C(n) = C(n/2) + 1
a = 1, b = 2, d = 0; hence C(n) ∈ Θ(log n)
Example:
Lists represented as arrays:
The idea behind the Binary Search is to split this array in half multiple times to
"zero-in" on the value we're looking for. Assume we are looking for the value 44 in the
array, and we want to know the index of the element that this value is located in, if, in fact,
it is in the array at all (remember that we always have to prepare for the case where the
element is not found at all). In this example, the value 44 is located in the 6th element of
the array. In the linear search shown above, it would require six comparisons between
array elements and the search key to find out that the 6th element of the array contained
the value that we are looking for. Let's see how the binary search works now.
In the series of figures below, a sequence of passes is shown for the binary search.
Let's go over them step-by-step. The first step is to look at the array in its initial state. We
are going to have to keep three "pointers" into the array for this algorithm - three integer
variables that contain the indices of three different places that we are concerned with in
the array: the low index that we are still looking at, the high index that we are still looking at,
and the midpoint index between the low and high. The figure below shows you these
values. The low and high indices are the first and last element indices of the array, and the
midpoint is shown to be (low+high)/2. Note that we need to do integer division to find
the midpoint. That way, if the number of elements in the array is even, and thus the
"midpoint" is actually not an element, we will set the mid pointer to one less than what
floating point division would give us. If you didn't catch on to what integer division did way
back at the beginning of the semester, now is the time to make sure you do. So if there are
8 elements, then (0+7)/2 would be 3.5 with floating point division, but will return 3 with
integer division.
So, the mid pointer in the example below will start out pointing to element 4, which
contains the value 38. Here comes the key to the Binary Search - pay attention! First, you
compare the value that mid points to to see if it is the value we are looking for (44). It is
not in our case. So, you then ask the question: is the value that mid points to higher or
lower than our search value? In this example, it is lower. Now, since the array is sorted,
we KNOW that the value we are searching for MUST be in the UPPER HALF of the array,
since it is larger than the midpoint element value! So in one comparison, we have
discarded the lower half of the array as elements that we need to search! This is a
powerful tool for searching large arrays! But we still haven't found where the value really is,
so let's continue with the next figure.
So now we take the 2nd pass in the Binary Search algorithm. We know that we just
need to search the upper half of the array. We know that the value that mid pointed to
above is NOT the value that we are looking for. So, we now reset our "low" pointer to point
to the index of the element that is one higher than our previous midpoint. This actually
points to our search value, but we don't know that yet. So now low points to index 5, and
high still points to the last index in the array. We recalculate the midpoint, and using
integer division, (5+8)/2 will give 6 as the midpoint index to use. So now we repeat the
process. Does the element at the mid index contain our value? No. Is the value we are
searching for higher or lower than the value of the element at our midpoint index? In this
case, it is lower (44 is less than 77). So now we are going to reset our pointers and do a
third pass. See the next figure.
For our third pass, we reset the HIGH pointer since our search value was lower than
the value of the element at the midpoint. In the figure below, you can see that we reset the
high pointer to point to one less than the previous mid pointer (since we already knew that
the mid pointer did NOT point to our value). We leave the low pointer alone. Note that
now, low and high both point to element 5, and so (5+5)/2 = 5, and now the mid pointer
will point to 5 as well. So now we see if the element in the array that mid is pointing to
contains the value that we are searching for. And it does! We have successfully searched
for and found our value in three comparison steps! As noted at the beginning of this
section, the linear search would have taken six comparisons to find the same value.
5. Explain the Quick sort algorithm with the help of illustrative example. With an
example show that quick sort is not a stable sorting algorithm [N/D 2016] [N/D
12][A/M2019]
Quick Sort is also known as “partition-exchange sort”.
Definition:
Quick sort is a well-known sorting algorithm, based on the divide & conquer approach. The
steps are:
1. Pick an element called pivot from the list
2. Reorder the list so that all elements which are less than the pivot come before the
pivot and all elements greater than pivot come after it. After this partitioning, the
pivot is in its final position. This is called the partition operation
3. Recursively sort the sub-list of lesser elements and sub-list of greater elements.
Features:
Analysis:
Worst-case running time:
When quicksort always has the most unbalanced partitions possible, the
original call takes cn time for some constant c, the recursive call on n-1
elements takes c(n-1) time, the recursive call on n-2 elements takes
c(n-2) time, and so on. Summing the partitioning times over the tree of
subproblem sizes:
cn + c(n-1) + c(n-2) + ... + 2c = c(n + (n-1) + (n-2) + ... + 2) = c((n+1)(n/2) - 1).
The last line follows because 1 + 2 + 3 + ... + n is the arithmetic
series, as we saw when we analyzed selection sort. (We subtract 1 because for
quicksort, the summation starts at 2, not 1.) We have some low-order terms and constant
coefficients, but when we use big-Θ notation, we ignore them. In big-Θ notation,
quicksort's worst-case running time is Θ(n^2).
Quicksort's best case occurs when the partitions are as evenly balanced as
possible: their sizes either are equal or are within 1 of each other. The former case occurs
if the subarray has an odd number of elements and the pivot is right in the middle after
partitioning, and each partition has (n-1)/2 elements. The latter case occurs
if the subarray has an even number n of elements and one partition has n/2
elements with the other having n/2 - 1. In either of these cases, each partition
has at most n/2 elements, and the tree of subproblem sizes looks a lot like the tree
of subproblem sizes for merge sort, with the partitioning times looking like the merging
times.
Using big-Θ notation, we get the same result as for merge sort: Θ(n lg n).
Showing that the average-case running time is also Θ(n lg n)
takes some pretty involved mathematics, and so we won't go there. But we can gain some
intuition by looking at a couple of other cases to understand why it might be O(n lg n).
With an example show that quick sort is not a stable sorting algorithm [N/D
2016]
No, quick sort does not preserve the relative order of equal items. To prove this to
yourself, make an array of numbers, say 1 to 20 followed by 10 to 1, and a compare
function that claims they are always equal. Use quick sort on the array and note that your
pattern has been disturbed (you might "luck out" and the order might match up with the
disordering; try a slightly different sized array, or your system's sort might not really be
a quick sort, in which case implement your own).
A lazy try here assumes the system sort is indeed a quick sort:
#include <stdlib.h>
sample output:
v[0] = 10
v[1] = 2
v[2] = 3
v[3] = 4
v[4] = 5
v[5] = 6
v[6] = 7
v[7] = 8
v[8] = 9
v[9] = 10
v[10] = 11
v[11] = 12
v[12] = 13
v[13] = 14
v[14] = 15
v[15] = 16
v[16] = 17
v[17] = 18
v[18] = 19
v[19] = 20
v[20] = 19
v[21] = 18
v[22] = 17
v[23] = 16
v[24] = 15
v[25] = 14
v[26] = 13
v[27] = 12
v[28] = 11
v[29] = 1
Given two binary strings that represent value of two integers, find the product of two
strings. For example, if the first bit string is “1100” and second bit string is “1010”, output
should be 120.
A Naive Approach is to follow the process we study in school: one by one, take each bit of the
second number and multiply it with all bits of the first number, then add up all the partial
products. This algorithm takes O(n^2) time.
Using Divide and Conquer, we can multiply two integers with less time complexity. We
divide the given numbers into two halves. Let the given numbers be X and Y, each of n bits,
with left and right halves Xl, Xr and Yl, Yr:
XY = (Xl*2^(n/2) + Xr)(Yl*2^(n/2) + Yr)
   = 2^n*XlYl + 2^(n/2)*(XlYr + XrYl) + XrYr
If we take a look at the above formula, there are four multiplications of size n/2, so we
have basically divided the problem of size n into four sub-problems of size n/2. But that doesn't
help, because the solution of the recurrence T(n) = 4T(n/2) + O(n) is O(n^2). The tricky part of
this algorithm is to rewrite the middle two terms so that only one extra multiplication
is needed:
XlYr + XrYl = (Xl + Xr)(Yl + Yr) - XlYl - XrYr
With the above trick, the recurrence becomes T(n) = 3T(n/2) + O(n), and the solution of this
recurrence is O(n^1.59).
What if the lengths of the input strings are different or not even? To handle different
lengths, we append 0's at the beginning. To handle odd lengths, we put floor(n/2) bits in the
left half and ceil(n/2) bits in the right half, and the expression for XY changes accordingly.
The above algorithm is called the Karatsuba algorithm and it can be used with any base.
// Helper method: given two unequal sized bit strings, converts them to
// the same length by adding leading 0s to the smaller string. Returns
// the new length
int makeEqualLength(string &str1, string &str2)
{
int len1 = str1.size();
int len2 = str2.size();
if (len1 < len2)
{
for (int i = 0 ; i < len2 - len1 ; i++)
str1 = '0' + str1;
return len2;
}
else if (len1 > len2)
{
for (int i = 0 ; i < len1 - len2 ; i++)
str2 = '0' + str2;
}
return len1; // If len1 >= len2
}
// The main function that adds two bit sequences and returns the addition
string addBitStrings( string first, string second )
{
    string result;  // To store the sum bits
    int length = makeEqualLength(first, second), carry = 0;
    for (int i = length - 1; i >= 0; i--) {
        int sum = (first[i] - '0') + (second[i] - '0') + carry;
        result = char(sum % 2 + '0') + result;
        carry = sum / 2;
    }
    if (carry) result = '1' + result;
    return result;
}
// The main function that multiplies two bit strings X and Y and returns
// result as long integer
long int multiply(string X, string Y)
{
    // Find the maximum of the lengths of X and Y and make the length
    // of the smaller string same as that of the larger string
    int n = makeEqualLength(X, Y);
    // Base cases
    if (n == 0) return 0;
    if (n == 1) return multiplySingleBit(X, Y);
Output:
120
60
30
10
0
49
9
The time complexity of multiplication can be further improved using another Divide and
Conquer algorithm, the fast Fourier transform.
7. Find all the solution to the travelling salesman problem (cities and distance
shown beim+) by exhaustive search. Give the optimal solutions. (16) [M/J2016]
An exhaustive search generates every possible tour, i.e. every sequence of cities in which
the starting vertex appears first and last and all the other n - 1 vertices are distinct,
computes each tour's length, and selects the shortest. All circuits start and end at one
particular vertex. The figure presents a small instance of the problem and its solution by
this method.