
# Searching and Sorting

Insertion Sort
Insertion sort inserts each element into its proper place, much like arranging playing cards in order in one's hand. Given n elements in an array arr, each element is inserted at its proper place in the previously sorted part of the list:

Pass 1: arr[0] is already sorted, because a single element is trivially sorted.
Pass 2: arr[1] is inserted before or after arr[0], so arr[0] and arr[1] are sorted.
Pass 3: arr[2] is inserted before arr[0], between arr[0] and arr[1], or after arr[1], so arr[0], arr[1] and arr[2] are sorted.
Pass 4: arr[3] is inserted into its proper place among arr[0], arr[1], arr[2], so arr[0] to arr[3] are sorted.
..........
Pass n: arr[n-1] is inserted into its proper place among arr[0], arr[1], ..., arr[n-2], so arr[0] to arr[n-1] are sorted.

Example: Sort the following elements in increasing order using insertion sort: 5 2 1 3 6 4

Compare the first two elements, 5 and 2, and arrange them in increasing order: 2 5 1 3 6 4

Next, insert the third element so the first three are in increasing order: 1 2 5 3 6 4

Next, insert the fourth element so the first four are in increasing order: 1 2 3 5 6 4

Next, insert the fifth element so the first five are in increasing order: 1 2 3 5 6 4

Next, insert the sixth element so all six are in increasing order: 1 2 3 4 5 6

Analysis: In insertion sort we insert each element before or after the elements already placed, starting the comparisons from the first element. Since the first element has no elements before it, it requires no comparison. The second element requires at most 1 comparison, the third at most 2, the fourth at most 3, and so on; the last element requires at most n-1 comparisons. So the total number of comparisons is
1 + 2 + 3 + ... + (n-2) + (n-1)
This is an arithmetic progression with first term a = 1, common difference d = 1 and n-1 terms, so the sum is
S = (n-1)/2 * [2a + (n-2)d] = (n-1)/2 * [2 + (n-2)] = n(n-1)/2
which is O(n^2). This is the worst-case behavior of insertion sort, which occurs when the elements are in reverse order.
The advantage of insertion sort is its simplicity, and it is very efficient when the number of elements to be sorted is small: for a small n the difference between O(n^2) and O(n log n) is very small, while the O(n log n) methods are more complex sorting techniques. Insertion sort behaves as O(n) when the elements are already in sorted order, and it also performs well when the list is nearly sorted.
Algorithm:
Insertion(int x[ ], int n)
1. start
2. set i = 1
3. repeat steps 4, 5, 8 and 9 while (i < n)
4. set key = x[i] and j = i-1
5. repeat steps 6 and 7 while (j >= 0 && x[j] > key)
6. set x[j+1] = x[j]
7. set j = j-1
8. set x[j+1] = key
9. set i = i+1
10. stop

//program to arrange the given list of elements in increasing order using insertion sort technique
#include<iostream>
using namespace std;

template<class T>
class insertion
{
    T a[100];
    int i,n;
public:
    void read(void);
    void insertionsort(void);
    void display(void);
};

template<class T>
void insertion<T>::read(void)
{
    cout<<"enter n: ";
    cin>>n;
    cout<<"enter "<<n<<" elements: ";
    for(i=0;i<n;i++)
        cin>>a[i];
}

template<class T>
void insertion<T>::insertionsort(void)
{
    int j;
    T key;
    for(i=1;i<n;i++)
    {
        key=a[i];
        // shift larger elements one place right, then drop key in
        for(j=i-1;j>=0&&a[j]>key;j--)
            a[j+1]=a[j];
        a[j+1]=key;
    }
}

template<class T>
void insertion<T>::display(void)
{
    for(i=0;i<n;i++)
        cout<<a[i]<<"\t";
}

int main()
{
    insertion<int> ob;
    ob.read();
    cout<<"\nbefore sorting:\n";
    ob.display();
    ob.insertionsort();
    cout<<"\nafter sorting:\n";
    ob.display();
    return 0;
}

Selection Sort
As the name suggests, selection sort works by selecting an element and placing it in sorted order. Given a list of elements in unsorted order, we first take the smallest element and place it first, then the second smallest, and so on until the largest element is placed. Let us take an array a[0], a[1], ..., a[n-1]. First we search for the position of the smallest element among a[0]...a[n-1] and interchange that element with a[0]. Next we search for the position of the smallest remaining element (the second smallest, since a[0] now holds the smallest) among a[1]...a[n-1] and interchange it with a[1]. The process continues similarly for a[2], ..., a[n-2].

Example:
Position:  1  2  3  4  5  6  7  8  9  10

42 23 74 11 65 58 94 36 99 87 --- unsorted list

Find the minimum element and, if it is not the first element, interchange it with the first element. The list becomes
11 23 74 42 65 58 94 36 99 87 --- pass 1

Find the second minimum element and, if it is not the second element, interchange it with the second element. Here 23 is already in place, so the list is unchanged:
11 23 74 42 65 58 94 36 99 87 --- pass 2

Continue in the same way up to pass 9:
11 23 36 42 65 58 94 74 99 87 --- pass 3
11 23 36 42 65 58 94 74 99 87 --- pass 4
11 23 36 42 58 65 94 74 99 87 --- pass 5
11 23 36 42 58 65 94 74 99 87 --- pass 6
11 23 36 42 58 65 74 94 99 87 --- pass 7
11 23 36 42 58 65 74 87 99 94 --- pass 8
11 23 36 42 58 65 74 87 94 99 --- pass 9

Analysis: As we have seen, the selection sort algorithm searches for the smallest element in the array and puts it in its proper position. In pass 1 it makes n-1 comparisons; in pass 2 it makes n-2 comparisons, because the first element is already in its proper position; and so on. Elements that are already at their correct positions are not disturbed. The total number of comparisons is therefore the decreasing arithmetic series
Sum = (n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1 = n(n-1)/2
which is O(n^2). Since selection sort does not take the existing order of the elements into account, its behavior is nearly the same in the worst and best cases. The good points of selection sort are that after every pass one element reaches its correct position, very few temporary variables are needed for interchanging elements, and it is simple to implement.

Algorithm:
Selection(int x[ ], int n)
1. start
2. set i = 0
3. repeat steps 4, 5 and 7 while (i < n-1)
4. set min = i and set j = i+1
5. repeat step 6 while (j < n)
6. if (x[j] < x[min]) set min = j; set j = j+1
7. interchange x[i] and x[min]: temp = x[i], x[i] = x[min], x[min] = temp; set i = i+1
8. stop

//program to arrange the given list of elements in increasing order using selection sort technique
#include<iostream>
using namespace std;

template<class T>
class selection
{
    T a[100];
    int i,n;
public:
    void read(void);
    void selectionsort(void);
    void display(void);
};

template<class T>
void selection<T>::read(void)
{
    cout<<"enter n: ";
    cin>>n;
    cout<<"enter "<<n<<" elements: ";
    for(i=0;i<n;i++)
        cin>>a[i];
}

template<class T>
void selection<T>::selectionsort(void)
{
    int j,minpos;
    T temp;
    for(i=0;i<n-1;i++)
    {
        minpos=i;                 // position of smallest remaining element
        for(j=i+1;j<n;j++)
        {
            if(a[j]<a[minpos])
                minpos=j;
        }
        temp=a[i];                // put it at position i
        a[i]=a[minpos];
        a[minpos]=temp;
    }
}

template<class T>
void selection<T>::display(void)
{
    for(i=0;i<n;i++)
        cout<<a[i]<<"\t";
}

int main()
{
    selection<int> ob;
    ob.read();
    cout<<"\nbefore sorting:\n";
    ob.display();
    ob.selectionsort();
    cout<<"\nafter sorting:\n";
    ob.display();
    return 0;
}

Bubble Sort
If n elements are given in memory, then sorting proceeds as follows:

1. Compare the 1st and 2nd elements of the array; if the 1st > 2nd, interchange them.
2. Compare the 2nd and 3rd elements; if the 2nd > 3rd, interchange them.
3. Continue comparing adjacent elements in the same way until the (n-1)th element is compared with the nth element.
4. The highest value has now reached the nth place.
5. Repeat the same process over the first n-1 elements, then the first n-2, and so on, until the whole array is sorted.
Example:
42 23 74 11 65 58 94 36 99 87 --- unsorted list
23 42 11 65 58 74 36 94 87 99 --- pass 1
23 11 42 58 65 36 74 87 94 99 --- pass 2
11 23 42 58 36 65 74 87 94 99 --- pass 3
11 23 42 36 58 65 74 87 94 99 --- pass 4
11 23 36 42 58 65 74 87 94 99 --- pass 5
Passes 6 to 9 leave the list unchanged, since it is already sorted.

Algorithm
Bubblesort(int x[ ], int n)
1. start
2. set i = 0
3. repeat steps 4, 5 and 8 while (i < n-1)
4. set j = 0
5. repeat steps 6, 7 while (j < n-1-i)
6. if x[j] > x[j+1], interchange x[j] and x[j+1]: temp = x[j], x[j] = x[j+1], x[j+1] = temp
7. set j = j+1
8. set i = i+1
9. stop

Analysis:
In bubble sort there are n-1 comparisons in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on, so it is very simple to count the comparisons. The total number of comparisons is
(n-1) + (n-2) + (n-3) + ... + 3 + 2 + 1
an arithmetic progression whose sum is n(n-1)/2, which is O(n^2). The main advantages are the simplicity of the algorithm and that the only additional space required is one temporary variable; with a check that stops after a pass that makes no exchanges, it behaves as O(n) on an already sorted array.
Best case performance = O(n)
Worst case performance = 0.5*n*(n-1) = O(n^2)
Average case performance = O(n^2)

It is acceptable for sorting a table which contains a small number of records, but it becomes difficult for large sized tables.

//program to arrange the given list of elements in increasing order using bubble sort technique
#include<iostream>
using namespace std;

template<class T>
class bubble
{
    T a[100];
    int i,n;
public:
    void read(void);
    void bubblesort(void);
    void display(void);
};

template<class T>
void bubble<T>::read(void)
{
    cout<<"enter n: ";
    cin>>n;
    cout<<"enter "<<n<<" elements: ";
    for(i=0;i<n;i++)
        cin>>a[i];
}

template<class T>
void bubble<T>::bubblesort(void)
{
    int j;
    T temp;
    for(i=0;i<n-1;i++)
    {
        for(j=0;j<n-1-i;j++)      // largest of a[0..n-1-i] bubbles to the end
            if(a[j]>a[j+1])
            {
                temp=a[j];
                a[j]=a[j+1];
                a[j+1]=temp;
            }
    }
}

template<class T>
void bubble<T>::display(void)
{
    for(i=0;i<n;i++)
        cout<<a[i]<<"\t";
}

int main()
{
    bubble<int> b;
    b.read();
    cout<<"\nbefore sorting:\n";
    b.display();
    b.bubblesort();
    cout<<"\nafter sorting:\n";
    b.display();
    return 0;
}

Divide-and-Conquer Algorithms
The divide-and-conquer strategy solves a problem by:
1. Breaking it into subproblems that are themselves smaller instances of the same type of problem.
2. Recursively solving these subproblems.
3. Appropriately combining their answers.
Two sorting algorithms based on divide and conquer are:
1. Quick sort: Quick sort uses few comparisons (somewhat more than merge sort or heap sort). Like heap sort it can sort "in place" by moving data in an array.
2. Merge sort: Merge sort is good for data that is too big to have in memory at once, because its pattern of storage access is very regular. It also uses fewer comparisons than heap sort, and is especially suited for data stored as linked lists.
Merge Sort
If there are two sorted lists, then the process of combining them into a single sorted list is called merging. Take one element from each list, compare them, and move the smaller one into a third list. Repeat this process until the elements of one list are finished, then move the remaining elements of the unfinished list into the third list.
Example:
Sorted List1: 1 13 24 26
Sorted List2: 2 15 27 38

Step 1: compare 1 and 2, move 1.    List3: 1
Step 2: compare 13 and 2, move 2.   List3: 1 2
Step 3: compare 13 and 15, move 13. List3: 1 2 13
Step 4: compare 24 and 15, move 15. List3: 1 2 13 15
Step 5: compare 24 and 27, move 24. List3: 1 2 13 15 24
Step 6: compare 26 and 27, move 26. List3: 1 2 13 15 24 26
Step 7: List1 is exhausted, so move the remaining elements of List2.

After merging, List3 is: 1 2 13 15 24 26 27 38

//Program to implement merge sort without recursion
#include<iostream>

template<class T>
class merge
{
public:
    T a[100],b[100],c[100];
    int i,n1,n2,n3;
    void read(void);
    void bubblesort(T x[],int n);
    void mergesort(T a[],T b[],int n1,int n2);
    void display(void);
};

template<class T>
void merge<T>::read(void)
{
    std::cout<<"enter n1,n2: ";
    std::cin>>n1>>n2;
    std::cout<<"enter "<<n1<<" elements: ";
    for(i=0;i<n1;i++)
        std::cin>>a[i];
    std::cout<<"enter "<<n2<<" elements: ";
    for(i=0;i<n2;i++)
        std::cin>>b[i];
    n3=n1+n2;
}

template<class T>
void merge<T>::bubblesort(T x[],int n)
{
    int j;
    T temp;
    for(i=0;i<n-1;i++)
    {
        for(j=i+1;j<n;j++)
            if(x[j]<x[i])
            {
                temp=x[i];
                x[i]=x[j];
                x[j]=temp;
            }
    }
}

template<class T>
void merge<T>::display(void)
{
    for(i=0;i<n3;i++)
        std::cout<<c[i]<<"\t";
}

template<class T>
void merge<T>::mergesort(T a[],T b[],int n1,int n2)
{
    int asize=n1-1,bsize=n2-1;
    int apoint=0,bpoint=0,cpoint;
    // take the smaller front element of a and b until one list runs out
    for(cpoint=0;(apoint<=asize)&&(bpoint<=bsize);cpoint++)
    {
        if(a[apoint]<b[bpoint])
            c[cpoint]=a[apoint++];
        else
            c[cpoint]=b[bpoint++];
    }
    while(apoint<=asize)       // copy whatever remains
        c[cpoint++]=a[apoint++];
    while(bpoint<=bsize)
        c[cpoint++]=b[bpoint++];
}

int main()
{
    merge<int> ob;
    ob.read();
    ob.bubblesort(ob.a,ob.n1);
    ob.bubblesort(ob.b,ob.n2);
    ob.mergesort(ob.a,ob.b,ob.n1,ob.n2);
    std::cout<<"\nafter sorting, the elements of the c array:\n";
    ob.display();
    return 0;
}

Merge sort using recursion

We take pairs of consecutive array elements, merge each pair into a sorted list, then merge adjacent sorted lists, and so on until all the elements of the array are in a single sorted list.

//Merge sort using recursion
#include<iostream>
using namespace std;

class m
{
public:
    int a[100];
    void read(int);
    void display(int);
    void mergesort(int,int);
    void merge(int,int,int);
};

void m::read(int n)
{
    int i;
    cout<<"enter "<<n<<" elements: ";
    for(i=1;i<=n;i++)
        cin>>a[i];
    cout<<endl;
}

void m::display(int n)
{
    for(int i=1;i<=n;i++)
        cout<<a[i]<<endl;
}

void m::mergesort(int low,int high)
{
    int mid;
    if(low<high)
    {
        mid=(low+high)/2;
        mergesort(low,mid);       // sort the left half
        mergesort(mid+1,high);    // sort the right half
        merge(low,mid,high);      // merge the two sorted halves
    }
}

void m::merge(int low,int mid,int high)
{
    int i,j,k,b[100];
    i=low;
    j=mid+1;
    k=low;
    while(i<=mid && j<=high)
    {
        if(a[i]<a[j])
            b[k++]=a[i++];
        else
            b[k++]=a[j++];
    }
    while(i<=mid)
        b[k++]=a[i++];
    while(j<=high)
        b[k++]=a[j++];
    for(i=low;i<=high;i++)        // copy merged result back
        a[i]=b[i];
}

int main()
{
    m obj;
    int n;
    cout<<"\nEnter number of elements: ";
    cin>>n;
    obj.read(n);
    obj.mergesort(1,n);
    obj.display(n);
    return 0;
}

Analysis:
Let us take an array of size n for merge sort. Because we take elements in pairs and merge each sorted list with another sorted list, merge sort requires at most log2 n passes, and each pass makes at most n comparisons. Hence merge sort requires about n*log2 n comparisons, which is O(n log2 n). The main disadvantage of merge sort is its space requirement: it needs extra space of O(n).

Quick sort/Partition exchange sort
The basic version of the quick sort algorithm was invented by C. A. R. Hoare in 1960 and formally introduced in 1962. It is based on the principle of divide and conquer. Quick sort is the algorithm of choice in many situations because it is not difficult to implement, it is a good "general purpose" sort, and it consumes relatively fewer resources during execution.
The idea behind this sorting method is that it is much easier to sort two short lists than one long list. Divide and conquer means dividing the big problem into two small problems, then each of those into two smaller ones, and so on. For example, if we have a list of 100 names to arrange alphabetically, we can make two lists, A-L and M-Z, from the original list, then divide list A-L into A-F and G-L, and so on until each list can be sorted easily; the same policy is adopted for the list M-Z.

Quicksort(p, q)
if (p < q)
1. j = partition(a, p, q+1)
2. quicksort(p, j-1)
3. quicksort(j+1, q)

Quick sort works by partitioning a given array a[p..q] into two non-empty subarrays a[p..j-1] and a[j+1..q] such that every key in a[p..j-1] is less than or equal to every key in a[j+1..q]. The two subarrays are then sorted by recursive calls to quick sort. The exact position of the partition depends on the given array, and the index j is computed as part of the partitioning procedure. Note that to sort the entire array, the initial call is Quicksort(a, 1, length[a]).
As a first step, quick sort chooses as pivot one of the items in the array to be sorted. The array is then partitioned on either side of the pivot: elements less than or equal to the pivot move toward the left, and elements greater than or equal to the pivot move toward the right.
Partitioning the Array
The partitioning procedure rearranges the subarray in place.
partition(a, m, p)
1. int pivot = a[m]
2. int i = m
3. int j = p
4. repeat steps 5-7 while (i < j)
5. increment i while (a[i] < pivot) is true
6. decrement j while (a[j] > pivot) is true
7. if (i < j) interchange a[i] and a[j]
8. a[m] = a[j]
9. a[j] = pivot
10. return j
Partition selects the first key, a[m], as the pivot key about which the array will be partitioned: keys <= a[m] are moved towards the left and keys >= a[m] are moved towards the right. The running time of the partition procedure is O(n), where n = p - m + 1 is the number of keys in the array. Another way to see that partition runs in O(n) time on a subarray of size n is as follows: pointer i and pointer j start at opposite ends and move towards each other, converging somewhere in the middle. The total number of times that i can be incremented and j can be decremented is therefore O(n), and each increment or decrement involves O(1) comparisons and swaps. Hence the total time is O(n).
Performance of Quick Sort The running time of quick sort depends on whether partition is balanced or unbalanced, which in turn depends on which elements of an array to be sorted are used for partitioning.

A very good partition splits the array into two equal-sized subarrays. A bad partition, on the other hand, splits it into two subarrays of very different sizes. The worst partition puts only one element in one subarray and all the other elements in the other. If the partitioning is balanced, quick sort runs asymptotically as fast as merge sort; if it is unbalanced, quick sort runs asymptotically as slowly as insertion sort.
Best Case
The best thing that could happen in quick sort is that each partitioning stage divides the array exactly in half; in other words, the pivot is the median of the keys in a[p..q] every time the procedure 'Partition' is called, so 'Partition' always splits the array into two equal-sized subarrays. If 'Partition' produces two regions of size n/2, the recurrence relation is
T(n) = T(n/2) + T(n/2) + O(n) = 2T(n/2) + O(n)
and from case 2 of the Master theorem, T(n) = O(n lg n).
Worst-case Partitioning
The worst case occurs if the given array a[1..n] is already sorted. The call partition(a, m, p) then always returns p, so successive calls to partition split arrays of length n, n-1, n-2, ..., 2, giving a running time proportional to
n + (n-1) + (n-2) + ... + 2 = [(n+2)(n-1)]/2 = O(n^2)
The worst case also occurs if a[1..n] starts out in reverse order.
Randomized Quick Sort
In the randomized version of quick sort the pivot is chosen at random. This does not improve the worst-case running time, but it makes the running time independent of the input ordering. The partition procedure itself is unchanged.


Rquicksort(p, q)
if (p < q)
{
    if ((q-p) > 5)
        x = random(q-p+1) + p; interchange a[x] and a[p]
    j = partition(a, p, q+1);
    Rquicksort(p, j-1);
    Rquicksort(j+1, q);
}
Like other randomized algorithms, randomized quicksort has the property that no particular input elicits its worst-case behavior; the behavior of the algorithm depends only on the random-number generator. Even intentionally, we cannot produce a bad input for randomized quicksort unless we can predict what the random-number generator will produce next.
Analysis: The time requirement of quick sort depends on the position of the pivot in the list, i.e., on how the pivot divides the list into sublists: the division may be nearly equal, or one side may be empty.
Average Case: In the average case we assume the list is divided roughly equally: the list is split into two sublists, these two into four sublists, and so on. If there are n elements in the list, with n approximately 2^m, then m = log2 n. In the list of n elements, the pivot is placed in the middle with n comparisons, and the left and right sublists have approximately n/2 elements each; hence each again requires about n/2 comparisons to place its pivot in the middle. These two lists are again divided into 4 sublists, and so on. The total number of comparisons is
n + 2*(n/2) + 4*(n/4) + 8*(n/8) + ... + n*(n/n)
= n + n + n + ... + n   (m times)
= O(n*m) = O(n log2 n)
The number of comparisons at any level is at most n, so the running time of quick sort is O(n log n).
Worst Case: Suppose the elements of the list are already in sorted order. The pivot found is then the first element, so partitioning produces only one non-empty sublist, to the right of the first element; similarly, every later sublist is created only on the right side. The first element requires n comparisons, the second n-1 comparisons, and so on, so the total number of comparisons is
n + (n-1) + (n-2) + ... + 3 + 2 + 1 = n(n+1)/2
which is O(n^2).

Worst case = O(n^2)
Average case = Best case = O(n log2 n)
Example:
3 1 4 5 9 2 6 8 7
Calling quickSort on elements 1 to 9.
Definitions: a "small" element is one whose value is less than or equal to the value of the pivot; likewise, a "large" element is one whose value is larger than that of the pivot.
At the beginning, the entire array is passed into the quicksort function and is essentially treated as one large partition. At this time, two indices are initialized: the left-to-right search index, i, and the right-to-left search index, j. The value of i is the index of the first element in the partition, in this case 1, and the value of j is 9, the index of the last element in the partition. The relevance of these variables will be made apparent below.
3 1 4 5 9 2 6 8 7
The first element in the partition, 3, is chosen as the pivot element, around which two subpartitions will be created. The end goal is to have all the small elements at the front of the partition, in no particular order, followed by the pivot, followed by the large elements. To do this, quicksort scans rightwards for the first large element; once this is found, it looks for the first small element from the right, and the two are swapped. Since i is currently set to one, the pivot is actually compared to itself in the search for the first large element.
3 1 4 5 9 2 6 8 7
The search for the first large element continues rightwards; the value of i is incremented as the search moves to the right.
3 1 4 5 9 2 6 8 7
Since 4 is greater than the pivot, the rightwards search stops here. Thus the value of i remains 3.
3 1 4 5 9 2 6 8 7
Now, starting from the right end of the array, quicksort searches for the first small element, so j is decremented with each step leftwards through the partition.
3 1 4 5 9 2 6 8 7
Since 2 is not greater than the pivot, the leftwards search can stop.

3 1 2 5 9 4 6 8 7 Now elements 4 and 2 (at positions 3 and 6, respectively) are swapped. 3 1 2 5 9 4 6 8 7 Next, the rightwards search resumes where it left off: at position 3, which is stored in the index i. 3 1 2 5 9 4 6 8 7 Immediately a large element is found, and the rightwards search stops with i being equal to 4. 3 1 2 5 9 4 6 8 7 Next the leftwards search, too, resumes where it left off: j was 6 so the element at position 6 is compared to the pivot before j is decremented again in search of a small element.

3 1 2 5 9 4 6 8 7 This continues without any matches for some time... 3 1 2 5 9 4 6 8 7 3 1 2 5 9 4 6 8 7 The small element is finally found, but no swap is performed since at this stage, i is equal to j. This means that all the small elements are on one side of the partition and all the large elements are on the other. 2 1 3 5 9 4 6 8 7 Only one thing remains to be done: the pivot is swapped with the element currently at i. This is acceptable within the algorithm because it only matters that the small element be to the left of the pivot, but their respective order doesn't matter. Now, elements 1 to (i) form the left partition (containing all small elements) and elements j + 1 onward form the right partition (containing all large elements. Calling quickSort on elements 1 to 2 The right partition is passed into the quicksort function. 2 1 3 5 9 4 6 8 7 2 is chosen as the pivot. It is also compared to itself in the search for a small element within the partition. 2 1 3 5 9 4 6 8 7 The first, and in this case only, small element is found. 2 1 3 5 9 4 6 8 7 Since the partition has only two elements, the leftwards search begins at the second element and finds 1. 1 2 3 5 9 4 6 8 7 The only swap to be made is actually the final step where the pivot is inserted between the two partitions. In this case, the left partition has only one element and the right partition has zero elements. Calling quickSort on elements 1 to 1 Now that the left partition of the partition above is quicksorted: there is nothing else to be done Calling quickSort on elements 3 to 2 The right partition of the partition above is quicksorted. In this case the starting index is greater than the ending index due to the way these are generated: the right partition starts one past the pivot of its parent partition and goes until the last element of the parent partition. So if the parent partition is empty, the indices generated will be out of bounds, and thus no quicksorting will take place. 
Calling quickSort on elements 4 to 9.
The right partition of the entire array is now being quicksorted.
1 2 3 5 9 4 6 8 7
5 is chosen as the pivot.
1 2 3 5 9 4 6 8 7
The rightwards scan for a large element is initiated; 9 is immediately found.
1 2 3 5 9 4 6 8 7
Thus, the leftwards search for a small element begins...
1 2 3 5 9 4 6 8 7
At last, 4 is found. Note j = 6.

1 2 3 5 4 9 6 8 7 Thus the first large and small elements to be found are swapped. 1 2 3 5 4 9 6 8 7 The rightwards search for a large element begins anew.

1 2 3 5 4 9 6 8 7
Now that it has been found, the rightward search can stop.
1 2 3 5 4 9 6 8 7
Since j was stopped at 6, this is the index from which the leftward search resumes.
1 2 3 5 4 9 6 8 7
1 2 3 4 5 9 6 8 7
The last step for this partition is moving the pivot into the right spot. Thus the left partition consists only of the element at position 4, and the right partition spans positions 6 to 9 inclusive.
Calling quickSort on elements 4 to 4.
The left partition is quicksorted (although nothing needs to be done).
Calling quickSort on elements 6 to 9.
The right partition is now passed into the quicksort function.
1 2 3 4 5 9 6 8 7
9 is chosen as the pivot.
1 2 3 4 5 9 6 8 7
The rightward search for a large element begins.
1 2 3 4 5 9 6 8 7
1 2 3 4 5 9 6 8 7
No large element is found; the search stops at the end of the partition.
1 2 3 4 5 9 6 8 7
The leftwards search for a small element begins, but does not continue since the search indices i and j have crossed.
1 2 3 4 5 7 6 8 9
The pivot is swapped with the element at position j: this is the last step in splitting this partition into left and right subpartitions.
Calling quickSort on elements 6 to 8.
The left partition is passed into the quicksort function.
1 2 3 4 5 7 6 8 9
6 is chosen as the pivot.
1 2 3 4 5 7 6 8 9
The rightwards search for a large element begins from the left end of the partition.
1 2 3 4 5 7 6 8 9
The rightwards search stops as 8 is found.
1 2 3 4 5 7 6 8 9
The leftwards search for a small element begins from the right end of the partition.
1 2 3 4 5 7 6 8 9
Now that 6 is found, the leftwards search stops. As the search indices have already crossed, no swap is performed.
1 2 3 4 5 6 7 8 9
So the pivot is swapped with the element at position j, the last element compared to the pivot in the leftwards search.
Calling quickSort on elements 6 to 6.
The left subpartition is quicksorted; nothing is done since it is too small.
Calling quickSort on elements 8 to 8.
Likewise with the right subpartition.
Calling quickSort on elements 10 to 9.
Due to the "sort the partition starting one to the right of the pivot" construction of the algorithm, an empty partition is passed into the quicksort function. Nothing is done for this base case.
1 2 3 4 5 6 7 8 9
Finally, the entire array has been sorted.

//program to arrange the given list of elements in increasing order using quick sort technique
#include<iostream>
using namespace std;

template<class T>
class quick
{
public:
    T a[100];
    int n;
    void read(void);
    void quicksort(int,int);
    int partition(T a[],int,int);
    void display(void);
};

template<class T>
void quick<T>::read(void)
{
    cout<<"enter n: ";
    cin>>n;
    cout<<"enter "<<n<<" elements: ";
    for(int i=1;i<=n;i++)
        cin>>a[i];
}

template<class T>
void quick<T>::display(void)
{
    for(int i=1;i<=n;i++)
        cout<<a[i]<<"\t";
}

// The partition function
template<class T>
int quick<T>::partition(T a[], int m, int p)
{
    T pivot = a[m];
    int i=m;
    int j=p;
    do
    {
        do
        {
            i++;
        } while ( i < p && a[i] < pivot ); // the i<p guard stands in for the
                                           // sentinel value assumed at a[p]
        do
        {
            j--;
        } while ( a[j] > pivot );
        if ( i < j )
        {
            T tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    } while (i < j);
    a[m]=a[j];
    a[j]=pivot;
    return j;
}

// The quicksort recursive function
template<class T>
void quick<T>::quicksort(int p, int q)
{
    if ( p < q )
    {
        int j = partition(a,p,q+1);
        quicksort(p, j-1);
        quicksort(j+1, q);
    }
}

int main()
{
    quick<int> ob;
    ob.read();
    cout<<"\nbefore sorting:\n";
    ob.display();
    ob.quicksort(1,ob.n);
    cout<<"\nafter sorting:\n";
    ob.display();
    return 0;
}

//program to arrange the given list of elements in increasing order using randomized quick sort technique
#include<iostream>
#include<cstdlib>
using namespace std;

template<class T>
class quick
{
public:
    T a[100];
    int n;
    void read(void);
    void Rquicksort(int,int);
    int partition(T a[],int,int);
    void display(void);
};

template<class T>
void quick<T>::read(void)
{
    cout<<"enter n: ";
    cin>>n;
    cout<<"enter "<<n<<" elements: ";
    for(int i=1;i<=n;i++)
        cin>>a[i];
}

template<class T>
void quick<T>::display(void)
{
    for(int i=1;i<=n;i++)
        cout<<a[i]<<"\t";
}

// The partition function
template<class T>
int quick<T>::partition(T a[], int m, int p)
{
    T pivot = a[m];
    int i=m;
    int j=p;
    do
    {
        do
        {
            i++;
        } while ( i < p && a[i] < pivot ); // the i<p guard stands in for the
                                           // sentinel value assumed at a[p]
        do
        {
            j--;
        } while ( a[j] > pivot );
        if ( i < j )
        {
            T tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    } while (i < j);
    a[m]=a[j];
    a[j]=pivot;
    return j;
}

// The randomized quicksort recursive function
template<class T>
void quick<T>::Rquicksort(int p, int q)
{
    if ( p < q )
    {
        if((q-p)>5)
        {
            // standard rand() in place of Borland's random()
            int x=rand()%(q-p+1)+p;
            T temp=a[x];
            a[x]=a[p];
            a[p]=temp;
        }
        int j = partition(a,p,q+1);
        Rquicksort(p,j-1);
        Rquicksort(j+1, q);
    }
}

int main()
{
    quick<int> ob;
    ob.read();
    cout<<"\nbefore sorting:\n";
    ob.display();
    ob.Rquicksort(1,ob.n);
    cout<<"\nafter sorting:\n";
    ob.display();
    return 0;
}

Advantages: One of the fastest algorithms on average; it does not need additional memory (the sorting takes place in the array itself; this is called in-place processing). Compare with merge sort: merge sort needs additional memory for merging.
Disadvantages: The worst-case complexity is O(n^2).
Applications: Commercial applications use quicksort; it generally runs fast and needs no additional memory, which compensates for the rare occasions when it runs in O(n^2). Never use it in applications which require a guaranteed response time, such as life-critical systems (medical monitoring, life support in aircraft and spacecraft) or mission-critical systems (monitoring and control in industrial and research plants handling dangerous materials, control for aircraft, defence, etc.), unless you assume the worst-case response time.
Comparison with merge sort: merge sort guarantees O(n log n) time, but it requires additional memory of size n; quicksort does not require additional memory, but its speed is not guaranteed. Usually merge sort is not used for main-memory sorting, only for external-memory sorting.
So far, our best sorting algorithms have O(n log n) performance: can we do any better? For comparison-based sorting, in general, the answer is no.

Partition-based general selection algorithm
A general selection algorithm that is efficient in practice, but has poor worst-case performance, was conceived by the inventor of quicksort, C.A.R. Hoare, and is known as Hoare's selection algorithm, or quickselect. Quicksort contains a subprocedure, partition, that can, in linear time, group a list (ranging from indices left to right) into two parts: those less than a certain element, and those greater than or equal to it. In quicksort we recursively sort both parts, leading to best-case O(n log n) time. When doing selection, however, we already know which part the desired element lies in, since after partitioning the pivot is in its final sorted position, with every element before it no greater than the pivot and every element after it no smaller. Thus a single recursive call locates the desired element in the correct part:
//Selection problem: find the k-th smallest element by partitioning
#include<iostream>
using namespace std;

template<class T>
class quick
{
public:
    T a[100];
    int n;
    void read(void);
    void select(int,int,int);
    void display(void);
};

template<class T>
void quick<T>::read(void)
{
    cout<<"enter n";
    cin>>n;
    cout<<"enter "<<n<<" elements";
    for(int i=1;i<=n;i++)
        cin>>a[i];
}

template<class T>
void quick<T>::display(void)
{
    for(int i=1;i<=n;i++)
        cout<<a[i]<<"\t";
}

// The partition-based select function: partition a[m..p], then recurse
// only into the part that contains position k
template<class T>
void quick<T>::select(int m, int p, int k)
{
    if(k<m || k>p)
    {
        cout<<"\ninvalid k";
        return;
    }
    T pivot=a[p];                       // Lomuto-style partition around a[p]
    int i=m;
    for(int j=m;j<p;j++)
    {
        if(a[j]<pivot)
        {
            T tmp=a[i]; a[i]=a[j]; a[j]=tmp;
            i++;
        }
    }
    T tmp=a[i]; a[i]=a[p]; a[p]=tmp;    // pivot is now at its final position i
    if(i==k)
        cout<<"\n"<<k<<"th smallest is "<<a[i];
    else if(k<i)
        select(m,i-1,k);                // desired element lies left of the pivot
    else
        select(i+1,p,k);                // desired element lies right of the pivot
}

int main()
{
    quick<int> ob;
    int ind;
    ob.read();
    cout<<"\n elements:\n";
    ob.display();
    cout<<"\nenter k";
    cin>>ind;
    ob.select(1,ob.n,ind);
    return 0;
}
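For comparison, the standard library already offers a partition-based selection routine; a minimal sketch follows (kthSmallest is an illustrative wrapper name, and k is 0-based here, unlike the 1-based indexing of the listing above).

```cpp
#include <vector>
#include <algorithm>

// std::nth_element uses a partition-based scheme internally: after the
// call, v[k] holds the element that would be at index k if v were
// fully sorted, in linear time on average.
int kthSmallest(std::vector<int> v, int k) {
    std::nth_element(v.begin(), v.begin() + k, v.end());
    return v[k];
}
```

Like quickselect, it does not sort the whole array; only the k-th position is guaranteed.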

Heap Sort: Build a heap tree with the given elements, where a heap tree is a complete binary tree in which every parent node has a value greater than or equal to the values of its children.

The elements of the heap tree are represented by an array. The root is the largest element of the heap tree. Since the heap is maintained in the array, the largest value will end up as the last element of the array. For heap sorting we keep deleting the root until only one element is left in the tree; the array which represented the heap tree then contains the sorted elements. So we do the following steps for heap sorting:
1. Replace the root with the last node of the heap tree.
2. Put that node (now the root) at its proper position by adjusting the heap so that every parent is greater than its children.

Let us take a heap tree and apply heap sorting. (The original tree diagrams are omitted here; each array below lists the heap in level order, and elements after the bar are already in their final sorted positions.) Initially:

72 64 65 56 32 46 54 29 48

Step 1: Move 72 to the last position of the array and the last node, 48, to the root. The left and right children of 48 are 64 and 65; both are greater than 48, but the right child 65 is greater than the left child 64, hence exchange 48 with 65:

48 64 65 56 32 46 54 29 | 72
65 64 48 56 32 46 54 29 | 72

Now the right child of 48 is 54, which is greater than 48, hence exchange it with 54:

65 64 54 56 32 46 48 29 | 72

Step 2: Now 29 is the last node of the tree, so exchange it with the root 65 and adjust in the same way:

64 56 54 29 32 46 48 | 65 72

Step 3: Similarly, in the next iteration:

56 48 54 29 32 46 | 64 65 72

Step 4:

54 48 46 29 32 | 56 64 65 72

Step 5:

48 32 46 29 | 54 56 64 65 72

Step 6:

46 32 29 | 48 54 56 64 65 72

Step 7:

32 29 | 46 48 54 56 64 65 72

Step 8:

29 | 32 46 48 54 56 64 65 72

Finally we get the sorted array:

29 32 46 48 54 56 64 65 72
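Each step of the walkthrough above is one swap followed by one "adjust" pass. That adjust pass can be sketched as a sift-down over a 0-based array (siftDown is an illustrative name, not from the program below):

```cpp
#include <vector>
#include <utility>

// Move the value at index i down the max-heap a[0..n-1] until both of
// its children are no larger, restoring the heap property.
void siftDown(std::vector<int>& a, int i, int n) {
    while (2 * i + 1 < n) {
        int child = 2 * i + 1;                       // left child
        if (child + 1 < n && a[child + 1] > a[child])
            ++child;                                 // take the larger child
        if (a[i] >= a[child]) break;                 // heap restored
        std::swap(a[i], a[child]);
        i = child;
    }
}
```

For example, starting Step 1 with 48 already moved to the root, siftDown reproduces the heap 65 64 54 56 32 46 48 29.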

//Program to implement heap sort
#include<iostream>
using namespace std;

class heap
{
public:
    void read(int[ ],int);
    void buildheap(int[ ],int);
    void heapsort(int[ ],int);
    void display(int[ ],int);
};

void heap::read(int a[ ],int n)
{
    cout<<"\nEnter the elements:";
    for(int i=1;i<=n;i++)
        cin>>a[i];
}

// Phase 1: insert a[2..n] one by one into the growing heap
void heap::buildheap(int a[],int n)
{
    for(int k=2;k<=n;k++)
    {
        int i=k;
        int temp=a[k];
        int j=i/2;                       // parent of position i
        while(i>1 && temp>a[j])
        {
            a[i]=a[j];                   // move the smaller parent down
            i=j;
            j=i/2;
            if(j<1)
                j=1;
        }
        a[i]=temp;
    }
}

// Phase 2: repeatedly exchange the root with the last heap element
// and sift the new root down to restore the heap property
void heap::heapsort(int a[],int n)
{
    for(int k=n;k>=2;k--)
    {
        int temp=a[1];                   // largest element goes to the end
        a[1]=a[k];
        a[k]=temp;
        int i=1;
        int value=a[1];
        int j=2;                         // left child of the root
        while(j<=k-1)
        {
            if(j+1<=k-1 && a[j+1]>a[j])
                j++;                     // pick the larger child
            if(a[j]<=value)
                break;                   // heap property restored
            a[i]=a[j];
            i=j;
            j=2*i;
        }
        a[i]=value;
    }
}

void heap::display(int a[],int n)
{
    for(int i=1;i<=n;i++)
        cout<<a[i]<<"\t";
}

int main()
{
    int a[10],n;
    heap h;
    cout<<"\nEnter the limit:";
    cin>>n;
    h.read(a,n);
    cout<<"\nBefore sorting..\n";
    h.display(a,n);
    h.buildheap(a,n);
    h.heapsort(a,n);
    cout<<"\nAfter sorting..\n";
    h.display(a,n);
    return 0;
}

Analysis: The heap sort algorithm consists of two phases:
1. Building a heap tree.
2. Sorting.
To build a heap, start with one element and insert the next element into the existing heap so that the condition (parent value is greater than its child) is satisfied. The insertion process is repeated until a heap is built from all input elements. Once the heap is constructed, the largest element of the input must be at the root of the heap, located at position 1. In the sorting phase the root is placed at its correct position by swapping it with the element at the bottom of the array. Then the heap property is restored over all the elements excluding the largest one(s), which are already in their sorted positions. This is repeated until the heap is empty. Insertion into and deletion from a heap are both O(log n) operations. We need to perform n insertions to build the heap, which takes O(n log n) time, and restoring the heap n times also takes O(n log n) time. Hence the total time complexity is 2*O(n log n) = O(n log n).

The average, best and worst case performances of the heap sort algorithm are all O(n log n), as there is no separate best or worst case.
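The two phases also map directly onto the standard library, which gives a convenient cross-check of the walkthrough above (heapSort is an illustrative wrapper name; note that std::make_heap builds the heap in O(n) rather than by repeated insertion):

```cpp
#include <vector>
#include <algorithm>

// Phase 1: std::make_heap builds a max-heap over the whole vector.
// Phase 2: std::sort_heap repeatedly swaps the root with the last
// heap element and restores the heap, leaving the vector sorted.
std::vector<int> heapSort(std::vector<int> v) {
    std::make_heap(v.begin(), v.end());
    std::sort_heap(v.begin(), v.end());
    return v;
}
```

Running it on the walkthrough's input 72 64 65 56 32 46 54 29 48 yields the same sorted array.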

Searching techniques
Linear Search/Sequential Search: sequential search works on unsorted or sorted data. In this searching technique, the search key is compared sequentially with each element in the array, and the process of comparison stops as soon as a match is found. If no element matches, the key is not in the array.

Algorithm Linear(int x[ ],int n)
1. start
2. key=element to be searched, set i=0
3. repeat steps 4,5 while(i<n)
4. if(x[i]==key), key element is found, goto 7
5. i=i+1
6. element not found
7. stop

Analysis: If there are n items in the list, then it is obvious that in the worst case (i.e., when there is no target element in the list) n comparisons are required. Hence the worst-case performance of this algorithm is roughly proportional to n, i.e., O(n). In the best case, the first comparison returns a match, so a single comparison suffices and it is O(1). The average time depends on the probability that the key will be found in the list; the average case roughly requires n/2 comparisons to find the element. That means the average time, as in the worst case, is proportional to n and hence is O(n).

//program to implement linear search
#include<iostream>
using namespace std;

template<class T>
class linear
{
public:
    T a[100],key;
    int i,n;
    void read(void);
    void linearsearch();
};

template<class T>
void linear<T>::read(void)
{
    cout<<"enter n";
    cin>>n;
    cout<<"enter "<<n<<" elements";
    for(i=0;i<n;i++)
        cin>>a[i];
}

template<class T>
void linear<T>::linearsearch()
{
    cout<<"enter the element to search";
    cin>>key;
    for(i=0;i<n;i++)
    {
        if(a[i]==key)
        {
            cout<<key<<" is found at location "<<i+1;
            return;
        }
    }
    cout<<key<<" is not found";
}

int main()
{
    linear<int> ob;
    ob.read();
    ob.linearsearch();
    return 0;
}
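The same left-to-right scan is what the standard library's std::find performs; a minimal sketch returning a 0-based index, or -1 when the key is absent (linearSearch is an illustrative name):

```cpp
#include <vector>
#include <algorithm>

// Sequential search: std::find compares the key with each element in
// turn and stops at the first match, as in the listing above.
int linearSearch(const std::vector<int>& a, int key) {
    auto it = std::find(a.begin(), a.end(), key);
    return it == a.end() ? -1 : (int)(it - a.begin());
}
```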

Binary search/Divide and conquer scheme: search on sorted data
• Here elements must be in ascending/descending order.
Algorithm Binarysearch(int x[ ],int n)
1. start
2. set low=0, high=n-1, key is the element to be searched
3. repeat steps 4,5,6 while(low<=high)
4. mid=(low+high)/2
5. if(x[mid]==key), element is found, goto 8
6. if(x[mid]<key), low=mid+1, goto step 3, else if(x[mid]>key), high=mid-1, goto step 3
7. element not found
8. stop

Analysis: Sequential search is at its worst when the element is at the end of the list. To eliminate this problem we have a more efficient search technique called binary search. The condition for binary search is that all the data must be in a sorted array. We compare the key with the middle element of the array. If it is less than the middle element, we search the left portion of the array; if it is greater than the middle element, we search the right portion. We then take only that portion and compare the key with its middle element, and this process iterates until we find the element or the remaining portion has no left or right part to search. Each step of the algorithm divides the list into two parts; the search continues in one of them and the other is discarded. The search requires at most k comparison steps, where 2^k >= n, which gives k = log2 n. Thus the running time (both average and worst case) of a binary search is proportional to log n, i.e., O(log n).

//program to implement binary search on a sorted list of elements
#include<iostream>
using namespace std;

template<class T>
class binary
{
public:
    T a[100],key;
    int i,n;
    void read(void);
    void binarysearch();
};

template<class T>
void binary<T>::read(void)
{
    cout<<"enter n";
    cin>>n;
    cout<<"enter "<<n<<" elements in increasing order";
    for(i=0;i<n;i++)
        cin>>a[i];
}

template<class T>
void binary<T>::binarysearch()
{
    int low=0,high=n-1,mid;
    cout<<"enter the element to search";
    cin>>key;
    while(low<=high)
    {
        mid=(low+high)/2;
        if(a[mid]==key)
        {
            cout<<key<<" is found at position "<<mid+1;
            return;
        }
        if(a[mid]<key)
            low=mid+1;
        else
            high=mid-1;
    }
    cout<<key<<" is not found";
}

int main()
{
    binary<int> ob;
    ob.read();
    ob.binarysearch();
    return 0;
}
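The standard library's std::lower_bound performs the same repeated halving on a sorted range; a minimal sketch returning a 0-based index, or -1 when absent (binarySearch is an illustrative name):

```cpp
#include <vector>
#include <algorithm>

// Binary search on sorted data: std::lower_bound halves the range at
// each step, so the cost is O(log n) comparisons as derived above.
int binarySearch(const std::vector<int>& a, int key) {
    auto it = std::lower_bound(a.begin(), a.end(), key);
    if (it != a.end() && *it == key)
        return (int)(it - a.begin());
    return -1;   // key not present
}
```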

| Sorting | Best Case | Average Case | Worst Case | Comparison with other sorts |
|---|---|---|---|---|
| Insertion | O(n) | O(n2) | O(n2) | Requires more comparisons than quick sort, but easy |
| Selection | O(n2) | O(n2) | O(n2) | Requires more comparisons than quick sort, but easy |
| Bubble | O(n) | O(n2) | O(n2) | Easy, but requires more comparisons |
| Quick | O(nlogn) | O(nlogn) | O(n2) | Fast, but difficult to implement |
| Merge | O(nlogn) | O(nlogn) | O(nlogn) | ---- |
| Heap | O(nlogn) | O(nlogn) | O(nlogn) | ---- |
| Linear Search | O(1) | O(n) | O(n) | Number of comparisons to search an element is higher |
| Binary Search | O(1) | O(logn) | O(logn) | List must be in sorted order; number of comparisons is lower |
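The last two table rows can be checked by counting comparisons directly (the helper names are illustrative; searching for the largest of n sorted elements is the worst case for the linear scan):

```cpp
#include <vector>

// Worst-case linear search: count how many elements are inspected
// before the key is found (or the array is exhausted).
int linearComparisons(const std::vector<int>& a, int key) {
    int count = 0;
    for (int x : a) {
        ++count;
        if (x == key) break;
    }
    return count;
}

// Binary search on sorted data: count the probes of the middle element.
int binaryComparisons(const std::vector<int>& a, int key) {
    int count = 0, low = 0, high = (int)a.size() - 1;
    while (low <= high) {
        int mid = (low + high) / 2;
        ++count;
        if (a[mid] == key) break;
        if (a[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return count;
}
```

For a sorted array of 1000 elements, finding the last element costs 1000 comparisons with the linear scan but only about log2(1000), roughly 10 probes, with binary search.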