Chapter Two
Divide and Conquer Algorithm
Basic content of the chapter
– General method of divide and conquer
– How the divide and conquer algorithm works
– Searching algorithms:
• Linear Search
• Binary Search
– Sorting and order statistics:
• Merge Sort, Quick Sort, Selection Sort, Insertion Sort and Heap Sort
– Finding the time complexity of:
• Linear Search, Binary Search, Merge Sort, Quick Sort, Selection Sort, Insertion Sort and Heap Sort
Divide and Conquer Algorithm
• Divide-and-conquer is one of the most popular algorithmic strategies.
• It works in two phases:
1. The problem is divided into sub-problems of smaller size until each sub-problem can be easily solved.
2. The solutions to all such sub-problems are combined to get the final solution.
• This approach, especially when used recursively, often yields efficient solutions to problems in which the sub-problems are smaller versions of the original problem and can be solved independently.
Divide and Conquer Algorithm…
• Broadly, we can understand the divide-and-conquer approach as a three-step process:
i. Divide
ii. Conquer
iii. Combine
Divide and Conquer Algorithm…
i. Divide/Break
– This step involves breaking the problem into smaller sub-problems using recursion.
– Sub-problems should represent a part of the original problem.
– This step generally takes a recursive approach to divide the problem until no sub-problem is further divisible.
– At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.
Divide and Conquer Algorithm…
ii. Conquer/Solve
– Solve the smaller sub-problems recursively. If a sub-problem is small enough, solve it directly.
– Generally, at this level, the sub-problems are considered 'solved' on their own.
iii. Combine/Merge
– When the smaller sub-problems are solved, this stage recursively combines their solutions until they form the solution of the original problem.
– This algorithmic approach works recursively, and the conquer and combine steps work so closely together that they appear as one.
General Method of Divide and Conquer Algorithm
DAC(p) {
    if (small(p)) {
        return S(p);                              // solve the small instance directly
    }
    else {
        divide p into p1, p2, p3, ..., pk         // divide
        apply DAC(p1), DAC(p2), ..., DAC(pk)      // conquer each sub-problem
        combine(DAC(p1), DAC(p2), ..., DAC(pk))   // combine the solutions
    }
}
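• To make the template concrete, here is a minimal sketch in C (an illustration, not taken from the slides): it finds the maximum of an array by dividing the range in half; the function name findMax and the sample array are assumptions made for the demonstration.

#include <stdio.h>

/* Divide-and-conquer sketch: maximum of arr[l..h].
   small(p): a single element; combine: take the larger of the two halves. */
int findMax(int arr[], int l, int h) {
    if (l == h)                                   /* small(p): solve directly */
        return arr[l];
    int mid = (l + h) / 2;                        /* divide */
    int leftMax  = findMax(arr, l, mid);          /* conquer left half  */
    int rightMax = findMax(arr, mid + 1, h);      /* conquer right half */
    return (leftMax > rightMax) ? leftMax : rightMax;   /* combine */
}

int main(void) {
    int a[] = {7, 2, 9, 4, 1};
    printf("%d\n", findMax(a, 0, 4));             /* prints 9 */
    return 0;
}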
Divide and Conquer: Example #1
Let us understand this concept with the help of an example (merge sort on an array).
1. Let the given array be as shown in the figure "Array for merge sort".
Solution…
2. Divide the array into two halves ("Divide the array into two subparts"). Again, divide each subpart recursively into two halves until you get individual elements ("Divide the array into smaller subparts").
Solution…
3. Now, combine the individual elements in a sorted manner. Here, the conquer and combine steps go side by side ("Combine the subparts").
Searching Algorithms
• What is Searching?
– Searching is the process of finding a particular element among several given elements.
– It enables its user to find documents, files, media, or any other type of data held inside a database.
– The search is successful if the required element is found.
– Otherwise, the search is unsuccessful.
• Searching Algorithms:
– Searching algorithms are a family of algorithms used for the purpose of searching.
– The search for an element in a given array may be carried out in the following two ways: Linear Search and Binary Search.
Linear Search
• Linear Search:
– Linear search is a very basic and simple search algorithm.
– It searches for an element by comparing it with each element of the array one by one.
– So, it is also called Sequential Search.
• The Linear Search algorithm is applied when-
– No information is given about the array.
– The given array is unsorted or the elements are unordered.
– The list of data items is small.
Linear Search: Example #1
• Consider-
– We are given the following linear array.
– Element 15 has to be searched in it using the Linear Search algorithm.
• Now,
– The Linear Search algorithm compares element 15 with all the elements of the array one by one.
– It continues searching until either the element 15 is found or all the elements have been searched.
Linear Search Algorithm works in the following steps
• Step-01:
– It compares element 15 with the 1st element, 92.
– Since 15 ≠ 92, the required element is not found.
– So, it moves to the next element.
• Step-02:
– It compares element 15 with the 2nd element, 87.
– Since 15 ≠ 87, the required element is not found.
– So, it moves to the next element.
Cont’d
• Step-03 and Step-04 follow the same procedure.
• Step-05:
– It compares element 15 with the 5th element, 15.
– Since 15 = 15, the required element is found.
– Now, it stops the comparison and returns index 4, at which element 15 is present.
• Note: Linear Search is less efficient when compared with other algorithms like Binary Search and hash tables.
• Those algorithms allow significantly faster searching.
Linear Search Algorithm
• Consider-
– There is a linear array 'a' of size 'n'.
– The linear search algorithm is used to search for an element 'item' in this linear array.
– If the search ends in success, it sets loc to the index of the element; otherwise, it sets loc to -1.
Linear Search Algorithm cont’d
int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++)     // scan the array from left to right
    {
        if (arr[i] == x)
            return i;               // x found at index i
    }
    return -1;                      // x is not found
}
T(n) = 1 + (n+1) + n + n + 1 = 3n + 3, which is O(n) in the worst case
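• A small usage sketch (assumed, not part of the slides) showing how the search function above could be called; only 92, 87 and 15 come from the earlier example, the remaining array values are made up for illustration.

#include <stdio.h>

/* assumes the search() function defined above */
int main(void) {
    int a[] = {92, 87, 53, 10, 15, 78};
    int loc = search(a, 6, 15);     /* expected result: index 4 */
    if (loc != -1)
        printf("Found at index %d\n", loc);
    else
        printf("Not found\n");
    return 0;
}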
Time Complexity Analysis: Linear Search
• Linear Search time complexity analysis is done below-
• Best case- In the best possible case,
– The element being searched may be found at the first position.
– In this case, the search terminates in success with just one comparison.
– Thus, in the best case, the linear search algorithm takes O(1) operations.
• Worst case- In the worst possible case,
– The element being searched may be present at the last position or not present in the array at all.
– In the former case, the search terminates in success with n comparisons.
– In the latter case, the search terminates in failure with n comparisons.
– Thus, in the worst case, the linear search algorithm takes O(n) operations.
Binary Search
• Binary search is an advanced type of search algorithm that finds and fetches data from a sorted list of items
– i.e., it searches in a sorted array.
• Binary search is a fast search algorithm with a run-time complexity of O(log n).
• This search algorithm works on the principle of divide and conquer.
• Binary search halves the set of searchable items at every step and thus reduces the number of comparisons to be made to a very small number.
Why Do We Need Binary Search?
• The following reasons make binary search a better choice as a search algorithm:
– It works efficiently on sorted data, no matter the size of the data.
– It accesses the data randomly (by index) to find the required element.
• This makes the search cycles shorter and more accurate.
– Binary search performs comparisons on the sorted data based on an ordering principle rather than using only equality comparisons, as linear search does, which is slower.
How Binary Search Works?
• Binary search looks for a particular item by comparing it with the middle-most item of the collection, as follows:
– The search item is compared with the middle element of the list.
– If the search item < the middle element, the search is restricted to the first half of the list.
– If the search item > the middle element, the second half of the list is searched.
– If the search item = the middle element, the search is complete.
How Binary Search Works? Step-by-Step
Step 1: Define start and end.
Step 2: Find middle = (start + end) / 2.
Step 3: If A[mid] < element,
            // search to the right of the middle element
            start = mid + 1, end = end
        If A[mid] > element,
            // search to the left of the middle element
            start = start, end = mid - 1
        If A[mid] = element,
            found: return mid
Step 4: If start = end, only one element is left:
        If A[mid] ≠ element,
            not found: return
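• The steps above translate into the following iterative C sketch (my own illustration, not taken from the slides); the function name binarySearch and returning -1 for "not found" are assumptions.

/* Iterative binary search over a sorted array a[0..n-1].
   Returns the index of key, or -1 if the key is not present. */
int binarySearch(int a[], int n, int key) {
    int start = 0, end = n - 1;      /* Step 1: define start and end */
    while (start <= end) {
        int mid = (start + end) / 2; /* Step 2: middle index */
        if (a[mid] == key)
            return mid;              /* found */
        else if (a[mid] < key)
            start = mid + 1;         /* search to the right of mid */
        else
            end = mid - 1;           /* search to the left of mid */
    }
    return -1;                       /* not found */
}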
Binary Search: Example #1
• Note: For a binary search to work, it is mandatory for the target array to be sorted.
• We shall learn the process of binary search with a pictorial example.
• The following sorted array is used, and let us assume that we need to search for the location of value 31 using binary search.
Solution
• First, we shall determine the middle of the array by using this formula −
  mid = low + (high - low) / 2
• Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So, 4 is the mid of the array.
• Now we compare the value stored at location 4 with the value being searched, i.e. 31.
• We find that the value at location 4 is 27, which is not a match.
• As the target value 31 is greater than 27 and we have a sorted array, we know that the target value must be in the upper portion of the array.
Solution…
• We change our low to mid + 1 and find the new mid value again.
– low = mid + 1
– mid = low + (high - low) / 2
– Our new mid is 7 now.
• We compare the value stored at location 7 with our target value 31.
• The value stored at location 7 is not a match; rather, it is more than what we are looking for.
• So, the value must be in the lower part from this location.
Solution…
• Hence, we calculate the mid again. This time it is 5.
• We compare the value stored at location 5 with our target value.
• We find that it is a match.
• We conclude that the target value 31 is stored at location 5.
Binary Search: Exercise #1
• Array:
50 7 11 9 25 20 27 2 51 60
a. Search for key=25
b. Search for key=10
Recursive Binary Search Algorithm
Algorithm RBSearch(l, h, key) {
    if (l > h)
        return -1;                              // empty range: key not found
    if (l == h) {                               // only one element left
        if (A[l] == key)
            return l;
        else
            return -1;                          // key not found
    }
    else {
        mid = (l + h) / 2;
        if (key == A[mid])
            return mid;
        else if (key < A[mid])
            return RBSearch(l, mid - 1, key);   // search the left half
        else
            return RBSearch(mid + 1, h, key);   // search the right half
    }
}
where l = low, h = high, key = the element to search for
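• A runnable C version of this recursive search (a sketch under the same logic; the lowercase function name and returning -1 for "not found" are my assumptions):

/* Recursive binary search on a sorted array a[], over indices l..h.
   Returns the index of key, or -1 if key is not present. */
int rbsearch(int a[], int l, int h, int key) {
    if (l > h)
        return -1;                            /* empty range: not found */
    int mid = (l + h) / 2;
    if (a[mid] == key)
        return mid;
    else if (key < a[mid])
        return rbsearch(a, l, mid - 1, key);  /* search the left half  */
    else
        return rbsearch(a, mid + 1, h, key);  /* search the right half */
}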
Time Complexity Analysis: Binary Search
• Best case complexity: O(1)
• Average case complexity: O(log n)
• Worst case complexity: O(log n)
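• One way to see the O(log n) bound (a standard argument, added here for illustration): each comparison discards half of the remaining elements, so the number of operations T(n) satisfies

T(n) = T(n/2) + c,  with T(1) = c
     = T(n/4) + 2c = ... = T(1) + c·log2(n) = O(log n)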
Sorting: Divide-and-Conquer Approach
• The following sorting algorithms are covered in this chapter −
– Merge Sort
– Quick Sort
– Selection Sort
– Insertion Sort
– Heap Sort
• Note: There are various ways available to solve any computer problem; among the algorithms above, merge sort and quick sort are good examples of the divide and conquer approach.
Merge Sort
• Merge sort is one of the most efficient sorting algorithms, and it is based on the divide and conquer technique.
• In merge sort, the problem is divided into two sub-problems in every step.
• It follows the divide and conquer approach:
– Divide: break the problem into 2 sub-problems; this continues until the problem set is left with one element only.
– Conquer: merge the 2 sorted sub-arrays back into the original array.
• With a worst case time complexity of O(n log n), merge sort is one of the most respected algorithms.
Steps in Merge Sort
• Steps:
– Divide the unsorted list into two sub-lists of about half the size. (divide)
– Sort each of the two sub-lists recursively. If they are small enough, just solve them in a straightforward manner. (conquer)
– Merge the two sorted sub-lists back into one sorted list. (combine)
Merge Sort: Example #1
Merge Sort Algorithm
Algorithm mergesort(arr[], l, h) {
    if (l < h) {                    // more than one element
        m = (l + h) / 2;            // divide
        mergesort(arr, l, m);       // conquer the left half
        mergesort(arr, m + 1, h);   // conquer the right half
        merge(arr, l, m, h);        // combine
    }
}
where l = low, h = high, m = middle
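• A self-contained C sketch of merge sort along these lines (my own illustration; the temporary-buffer merge, the function names and the sample array are assumptions, not from the slides):

#include <stdio.h>
#include <stdlib.h>

/* Merge the two sorted halves arr[l..m] and arr[m+1..h]. */
void merge(int arr[], int l, int m, int h) {
    int n = h - l + 1;
    int *tmp = malloc(n * sizeof(int));     /* temporary buffer */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= h)                /* pick the smaller front element */
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= m) tmp[k++] = arr[i++];     /* copy any leftovers */
    while (j <= h) tmp[k++] = arr[j++];
    for (k = 0; k < n; k++)                 /* copy back into arr[l..h] */
        arr[l + k] = tmp[k];
    free(tmp);
}

void mergesort(int arr[], int l, int h) {
    if (l < h) {
        int m = (l + h) / 2;                /* divide  */
        mergesort(arr, l, m);               /* conquer */
        mergesort(arr, m + 1, h);
        merge(arr, l, m, h);                /* combine */
    }
}

int main(void) {
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    mergesort(a, 0, 6);
    for (int i = 0; i < 7; i++) printf("%d ", a[i]);   /* 3 9 10 27 38 43 82 */
    printf("\n");
    return 0;
}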
Time Complexity of Merge Sort
• In every call, the problem is divided into 2 sub-problems of half the size, so the recursion has about log n levels.
• Each level performs O(n) work while merging, resulting in n log n operations in total.
• In the best case, that is an already sorted array, we can add a small modification (e.g. a check whether the halves are already in order), but the asymptotic complexity stays the same:
– Best time complexity: O(n log n)
– Average time complexity: O(n log n)
– Worst time complexity: O(n log n)
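• The standard recurrence behind these bounds (added for illustration): two half-size sub-problems plus a linear-time merge.

T(n) = 2·T(n/2) + c·n,  with T(1) = c
     = 4·T(n/4) + 2·c·n = ... = n·T(1) + c·n·log2(n) = O(n log n)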
Merge Sort: Exercise #2
• Sort the given array using merge sort.
• Array: 9 3 7 5 6 4 8 2
Quick Sort
• Quick sort is a highly efficient sorting algorithm and is based on partitioning an array of data into smaller arrays.
• A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is based, and the other holds values greater than the pivot value.
– i.e., it picks an element as the pivot and partitions the given array around the picked pivot.
• There are many different versions of quick sort that pick the pivot in different ways:
– Always pick the first element as the pivot
– Always pick the last element as the pivot
– Pick a random element as the pivot
Quicksort…
• Like binary search, this method uses a recursive, divide and conquer strategy.
• Quicksort partitions an array and then calls itself recursively twice to sort the two resulting sub-arrays.
• Quicksort is considered an in-place sorting algorithm.
• It reduces the space complexity and removes the use of the auxiliary array that is used in merge sort.
• Note: This algorithm is quite efficient for large-sized data sets, as its average-case complexity is O(n log n); its worst-case complexity is O(n²).
How Quick Sort Works?
• Select a pivot (partitioning element).
• Rearrange the list so that all the elements in the positions before the pivot are smaller than or equal to the pivot, and those after the pivot are larger than or equal to the pivot.
• Exchange the pivot with the last element in the first (i.e., ≤) sub-list
– the pivot is now in its final position.
• Sort the two sub-lists recursively.
Quick Sort: Example #1
• Which element is sorted? (see the figure)
Quick Sort: Example #2
• Consider an array having 9 elements:
10 16 8 12 15 6 3 9 5
• Arrange the elements in ascending order using the quick sort algorithm.
Solution
Step 1. Select the pivot (here, the first element).
Step 2. Assign i to the first element and j to the last element.
Step 3. i starts from the pivot and searches for an element > pivot.
Step 4. j starts from the last element and searches for an element < pivot.
Step 5. Increment i until A[i] > pivot and decrement j until A[j] < pivot; if i < j, swap A[i] and A[j] and continue.
Step 6. When i > j, stop; j is the final position of the pivot, so swap the pivot with A[j].
Step 7. Now the pivot is sorted (it is in its partition position), so we repeat the procedure on the two partitions on either side of it.
How Partitioning Works for Quick Sort
Partition(A, l, h) {
    pivot = A[h];                 // choose the last element as the pivot
    i = l - 1;
    for (j = l to h - 1) {
        if (A[j] <= pivot) {
            i = i + 1;
            exchange(A[i], A[j]); // move the smaller element into the left part
        }
    }
    exchange(A[i + 1], A[h]);     // place the pivot in its final position
    return i + 1;                 // final index of the pivot
}
Algorithm Quicksort
Quicksort(A, l, h) {
    if (l < h) {
        j = Partition(A, l, h);   // j is the final position of the pivot
        Quicksort(A, l, j - 1);   // sort the elements before the pivot
        Quicksort(A, j + 1, h);   // sort the elements after the pivot
    }
}
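• Putting the two routines together, here is a runnable C sketch of quick sort with the same last-element (Lomuto) partition scheme (an illustration; the swap helper and the sample array are my assumptions):

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

/* Lomuto partition: pivot = A[h]; returns the pivot's final index. */
int partition(int A[], int l, int h) {
    int pivot = A[h];
    int i = l - 1;
    for (int j = l; j <= h - 1; j++) {
        if (A[j] <= pivot) {          /* grow the "<= pivot" region */
            i++;
            swap(&A[i], &A[j]);
        }
    }
    swap(&A[i + 1], &A[h]);           /* place the pivot in its final position */
    return i + 1;
}

void quicksort(int A[], int l, int h) {
    if (l < h) {
        int j = partition(A, l, h);
        quicksort(A, l, j - 1);       /* sort elements before the pivot */
        quicksort(A, j + 1, h);       /* sort elements after the pivot  */
    }
}

int main(void) {
    int a[] = {10, 16, 8, 12, 15, 6, 3, 9, 5};
    quicksort(a, 0, 8);
    for (int i = 0; i < 9; i++) printf("%d ", a[i]);   /* 3 5 6 8 9 10 12 15 16 */
    printf("\n");
    return 0;
}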
Time Complexity of Quick Sort
• Quick sort time complexity:
• Best case complexity: O(n log n)
• Average case complexity: O(n log n)
• Worst case complexity: O(n²)
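• The recurrences behind these bounds (a standard analysis, added for illustration): a balanced partition behaves like merge sort, while a maximally unbalanced partition (e.g. an already sorted input with a last-element pivot) gives the worst case.

Best/average (balanced split):   T(n) = 2·T(n/2) + c·n  =>  O(n log n)
Worst case (split of 1 and n-1): T(n) = T(n-1) + c·n = c·(n + (n-1) + ... + 1)  =>  O(n²)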
Quick Sort: Example #3
Consider the following arrays of elements:
a. 5 2 6 1 3 4
b. 10 15 2 9 16 11
Arrange the elements in ascending order using the quick sort algorithm.
Selection Sort
• Selection sort is one of the easiest approaches to sorting.
• It is inspired by the way in which we sort things out in day-to-day life.
• It is an in-place sorting algorithm because it uses no auxiliary data structures while sorting.
How Selection Sort Works?
• Consider the following elements, to be sorted in ascending order using selection sort: 6, 2, 11, 7, 5
• Selection sort works as follows-
– It finds the first smallest element (2).
– It swaps it with the first element of the unordered list.
– It finds the second smallest element (5).
– It swaps it with the second element of the unordered list.
– Similarly, it continues to sort the given elements.
• As a result, the sorted elements in ascending order are- 2, 5, 6, 7, 11
Selection Sort: Example #1
• Consider the following elements, to be sorted in ascending order: 6, 2, 11, 7, 5
• The selection sort algorithm works as illustrated below-
• Step-01: For i = 0
Solution….
Solution….
Step-05: For i = 4
• The loop gets terminated as 'i' becomes 4.
• The state of the array after the loops have finished is as shown in the figure.
• With each loop cycle,
– The minimum element in the unsorted sub-array is selected.
– It is then placed at the correct location in the sorted sub-array, until array A is completely sorted.
Selection Sort: Example #2
• Consider the following elements and sort them using selection sort:
Array elements:
a) 15 28 17 12 18 9 6
b) 1 4 10 8 3 7
Selection Sort Algorithm
• Let A be an array with n elements. Then, the selection sort algorithm used for sorting is as follows-
SelectionSort(A, n) {
    for (i = 0; i < n - 1; i++) {
        min = i;                        // assume A[i] is the minimum
        for (j = i + 1; j < n; j++) {   // scan the unsorted sub-array
            if (A[j] < A[min])
                min = j;
        }
        temp = A[i];                    // swap A[i] and A[min]
        A[i] = A[min];
        A[min] = temp;
    }
}
• Here,
– i = variable to traverse the array A
– min = variable to store the index of the minimum element
– j = variable to traverse the unsorted sub-array
– temp = variable used for swapping
Selection Sort: Time Complexity Analysis
• The selection sort algorithm consists of two nested loops.
• Owing to the two nested loops, it has O(n²) time complexity.
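• A quick count of the comparisons made by the two nested loops (a standard argument, added for illustration): pass i scans the remaining unsorted elements, so

(n-1) + (n-2) + ... + 2 + 1 = n(n-1)/2 = O(n²) comparisons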
Insertion Sort
• Insertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time.
• It is much less efficient on large lists than more advanced algorithms such as quick sort, heap sort, or merge sort.
• However, insertion sort provides several advantages:
– Simple implementation
– Efficient for (quite) small data sets
– Stable; i.e., it does not change the relative order of elements with equal keys
– In-place; i.e., it only requires a constant amount O(1) of additional memory space
– Online; i.e., it can sort a list as it receives it
How Insertion Sort Works
• Suppose an array with n elements A[1], A[2], ..., A[n] is in memory.
• The insertion sort algorithm scans A from A[1] to A[n], inserting each element A[k] into its proper position in the previously sorted sub-array A[1], A[2], ..., A[k-1].
Insertion Sort: Example #1
• An insertion sort of an array of five integers
Insertion Sort: Example #2
• An array contains 8 elements as follows:
77 33 44 11 88 22 66 55
• The table given below shows an illustration of the insertion sort algorithm.
• The marked elements indicate A[K] in each pass of the algorithm, and the arrow indicates the proper place for A[K].
Insertion Sort Algorithm
Algorithm insertionSort(A, n) {
    for (i = 1; i < n; i++) {                 // A[0..i-1] is already sorted
        temp = A[i];                          // element to insert
        loc = i;
        while (loc > 0 && A[loc - 1] > temp) {
            A[loc] = A[loc - 1];              // shift larger elements right
            loc = loc - 1;
        }
        A[loc] = temp;                        // insert into its proper place
    }
}
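• A runnable C version of the same idea (my own sketch; the sample array reuses the values from Example #2):

#include <stdio.h>

/* Insertion sort: grow a sorted prefix A[0..i-1] one element at a time. */
void insertionSort(int A[], int n) {
    for (int i = 1; i < n; i++) {
        int temp = A[i];                      /* element to insert */
        int loc = i;
        while (loc > 0 && A[loc - 1] > temp) {
            A[loc] = A[loc - 1];              /* shift larger elements right */
            loc--;
        }
        A[loc] = temp;                        /* drop it into its place */
    }
}

int main(void) {
    int a[] = {77, 33, 44, 11, 88, 22, 66, 55};
    insertionSort(a, 8);
    for (int i = 0; i < 8; i++) printf("%d ", a[i]);   /* 11 22 33 44 55 66 77 88 */
    printf("\n");
    return 0;
}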
Insertion Sort Analysis: Best Case
• Best case (the input is already sorted): the inner while loop never shifts an element, so only n − 1 comparisons are made in total, i.e. O(n) time.
Insertion Sort Analysis: Worst Case
• Worst case (the input is in reverse order): each new element is shifted past all previously sorted elements, giving 1 + 2 + ... + (n − 1) = n(n − 1)/2 comparisons, i.e. O(n²) time.