
ASSIGNMENT

Q1) Explain the following :-

(i) Linear Search :


Linear search is a very simple search algorithm. In this type of search, a sequential
pass is made over all items one by one. Every item is checked, and if a match is
found then that particular item is returned; otherwise the search continues till the
end of the data collection.

Algorithm: LinearSearch(Array A, Value x)

Step 1: Set i to 1

Step 2: if i > n then go to step 7

Step 3: if A[i] = x then go to step 6

Step 4: Set i to i + 1

Step 5: Go to Step 2

Step 6: Print Element x Found at index i and go to step 8

Step 7: Print element not found

Step 8: Exit
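
A minimal C++ sketch of the algorithm above (the sample array and searched value are made up for the example; C++ arrays are 0-indexed, so the returned index starts at 0):

#include <iostream>
using namespace std;

// Returns the index of x in a[0..n-1], or -1 if x is not present.
int linearSearch(int a[], int n, int x)
{
    for (int i = 0; i < n; i++)
    {
        if (a[i] == x)   // check every item one by one
            return i;    // match found: return its index
    }
    return -1;           // reached the end of the collection: not found
}

int main()
{
    int a[] = {9, 4, 7, 1, 3};
    int x = 7;
    int pos = linearSearch(a, 5, x);
    if (pos >= 0)
        cout << "Element " << x << " found at index " << pos;
    else
        cout << "Element not found";
    return 0;
}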

(ii) Binary Search :


Binary search is a fast search algorithm with run-time complexity of O(log n). This
search algorithm works on the principle of divide and conquer. For this algorithm to
work properly, the data collection should be in sorted form. Binary search looks
for a particular item by comparing the middle-most item of the collection with the
target. If a match occurs, then the index of the item is returned. If the middle item is
greater than the target, then the target is searched for in the sub-array to the left of
the middle item. Otherwise, it is searched for in the sub-array to the right of the
middle item. This process continues on the sub-array until the size of the sub-array
reduces to zero.
Example :

For a binary search to work, it is mandatory for the target array to be sorted. We
shall learn the process of binary search with a worked example. Let the following be
our sorted array (indices 0 to 9):

10 14 19 26 27 31 33 35 42 44

and let us assume that we need to search for the location of value 31 using
binary search.

First, we shall determine the middle of the array by using this formula:

Mid = Low + (High - Low) / 2

Here it is 0 + (9 - 0) / 2 = 4 (the integer part of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4 with the value being searched, i.e.
31. We find that the value at location 4 is 27, which is not a match. As the target is
greater than 27 and we have a sorted array, we also know that the target value
must be in the upper portion of the array.

We change our Low to Mid + 1 and find the new Mid value again:

Low = Mid + 1
Mid = Low + (High - Low) / 2

Our new Mid is 7 now. We compare the value stored at location 7 with our target
value 31.

The value stored at location 7 (35) is not a match; rather, it is more than what we are
looking for. So, the value must be in the lower part from this location.
Hence, we set High = Mid - 1 and calculate the Mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a
match.

We conclude that the target value 31 is stored at location 5.

Binary search halves the searchable items at every step and thus reduces the
number of comparisons to be made to very few.
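
A minimal C++ sketch of the procedure traced above, using the same ten-element array (the function name binarySearch is our own):

#include <iostream>
using namespace std;

// Returns the index of x in the sorted array a[0..n-1], or -1 if absent.
int binarySearch(int a[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;  // same Mid formula as above
        if (a[mid] == x)
            return mid;                    // match: return the index
        else if (a[mid] < x)
            low = mid + 1;                 // target is in the upper portion
        else
            high = mid - 1;                // target is in the lower portion
    }
    return -1;                             // sub-array reduced to zero
}

int main()
{
    int a[] = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44};
    cout << "Found at index " << binarySearch(a, 10, 31);  // prints 5
    return 0;
}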

(iii) Interpolation Search:


Interpolation search is an improved variant of binary search. This search algorithm
probes the position of the required value instead of always probing the middle. For
this algorithm to work properly, the data collection should be sorted and uniformly
distributed.

Binary search has a huge advantage of time complexity over linear search. Linear
search has worst-case complexity of O(n), whereas binary search has O(log n).

There are cases where the approximate location of the target data is known in
advance. For example, in the case of a telephone directory, suppose we want to
search for the telephone number of Rishabh. Here, linear search and even binary
search will seem slow, as we could directly jump to the memory space where the
names starting with 'R' are stored.

Algorithm:

As interpolation search is a refinement of the binary search algorithm, we mention
the steps to search for the 'target' data value's index using position probing:

Step 1 − Start searching data from the middle of the list.
Step 2 − If it is a match, return the index of the item, and exit.
Step 3 − If it is not a match, compute the probe position.
Step 4 − Divide the list using the probing formula and find the new middle.
Step 5 − If the data is greater than the middle, search in the higher sub-list.
Step 6 − If the data is smaller than the middle, search in the lower sub-list.
Step 7 − Repeat until a match is found or the sub-list reduces to zero.
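
A commonly used probing formula is

Pos = Lo + ((x - A[Lo]) * (Hi - Lo)) / (A[Hi] - A[Lo])

A minimal C++ sketch based on it (the uniformly distributed sample array is made up for the example):

#include <iostream>
using namespace std;

// Returns the index of x in the sorted array a[0..n-1], or -1 if absent.
int interpolationSearch(int a[], int n, int x)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi && x >= a[lo] && x <= a[hi])
    {
        if (a[hi] == a[lo])                    // all values equal: avoid dividing by zero
            return (a[lo] == x) ? lo : -1;
        // Probe a position proportional to where x lies between a[lo] and a[hi].
        int pos = lo + (int)((long long)(x - a[lo]) * (hi - lo) / (a[hi] - a[lo]));
        if (a[pos] == x)
            return pos;
        if (a[pos] < x)
            lo = pos + 1;                      // search in the higher sub-list
        else
            hi = pos - 1;                      // search in the lower sub-list
    }
    return -1;
}

int main()
{
    int a[] = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
    cout << "Found at index " << interpolationSearch(a, 10, 70);  // prints 6
    return 0;
}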

(iv) Jump Search:


Jump search, also called block search, can be used only on a sorted list or array. In
jump search, it is not at all necessary to scan every element in the list as we do in
the linear search algorithm. We just check the R-th element, and if it is less than the
key element, then we move to the (R + R)-th element, skipping all the elements in
between. This process is continued until we reach an element equal to or greater
than the key element, called the boundary value. The value of R is given by
R = sqrt(n), where n is the total number of elements in the array. Once the jump
reaches the boundary value, a linear search is done to find the key value and its
position in the array. It must be noted that in the jump search algorithm, this linear
search is done in reverse, that is, from the boundary value back toward the previous
jump position.

Example :

In an array of 16 elements (1 to 16), we need to find our key element 7 using the
jump search algorithm.

Step 1: Find the value of R. Here R = sqrt(16), i.e., R = 4.

Step 2: Skip the first three elements (1, 2, 3) in the array and check whether the
fourth value (4) is equal to or greater than the key value (7).

Step 3: If not, skip the next three elements (5, 6, 7) in the array and check
whether the eighth value (8) is equal to or greater than the key value (7). In this
case it is greater than the key value.

Step 4: Now, by using the linear search algorithm, move in reverse from the value
8 (the boundary value) toward the value 4 (the previous jump position) to find the
key value (7).

Step 5: Thus, using linear search, the key value is found at position array[6]
(zero-based indexing).
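
A minimal C++ sketch of jump search (for simplicity, this sketch scans forward within the block identified by the jumps, which finds the same position as the reverse scan described above):

#include <iostream>
#include <cmath>
using namespace std;

// Returns the index of key in the sorted array a[0..n-1], or -1 if absent.
int jumpSearch(int a[], int n, int key)
{
    int r = (int)sqrt((double)n);   // block size R = sqrt(n)
    int prev = 0, step = r;

    // Jump ahead R elements at a time until the end of a block
    // reaches or passes the key (the boundary value).
    while (step < n && a[step - 1] < key)
    {
        prev = step;
        step += r;
    }
    if (step > n)
        step = n;

    // Linear search inside the block a[prev..step-1].
    for (int i = prev; i < step; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main()
{
    int a[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16};
    cout << "Found at index " << jumpSearch(a, 16, 7);  // prints 6, matching the example
    return 0;
}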
Q2) WAP for the following :-

(i) Bubble Sort :

#include <iostream>
using namespace std;

int main()
{
    int a[50], i, j, n, s;
    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    // In each pass, compare adjacent elements and swap them if they are out
    // of order, so the largest remaining element bubbles to the end.
    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - 1 - i; j++)
        {
            if (a[j] > a[j + 1])
            {
                // swapping
                s = a[j];
                a[j] = a[j + 1];
                a[j + 1] = s;
            }
        }
    }
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";

    return 0;
}

(ii) Selection Sort :

#include <iostream>
using namespace std;

int main()
{
    int a[20], i, j, n;
    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    for (i = 0; i < n - 1; i++)
    {
        // Find the index of the smallest element in the unsorted part a[i..n-1].
        int min = i;
        for (j = i + 1; j < n; j++)
        {
            if (a[j] < a[min])
                min = j;
        }
        // swapping: move the minimum into position i
        int s = a[min];
        a[min] = a[i];
        a[i] = s;
    }
    cout << "Sorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";

    return 0;
}

(iii) Merge Sort :

#include <iostream>
using namespace std;

// Merges two sorted sub-arrays arr[l..m] and arr[m+1..r].
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    int L[n1], R[n2]; // temporary arrays (variable-length arrays, supported by g++)

    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy any remaining elements of L[] and R[].
    while (i < n1)
    {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        arr[k] = R[j];
        j++;
        k++;
    }
}

void mergeSort(int arr[], int l, int r)
{
    if (l < r)
    {
        int m = l + (r - l) / 2;

        // Sort first and second halves, then merge them.
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        merge(arr, l, m, r);
    }
}

void printArray(int A[], int size)
{
    int i;
    for (i = 0; i < size; i++)
        cout << A[i] << " ";
    cout << "\n";
}

int main()
{
    int a[20];
    int n, i;

    cout << "Enter the number of elements in the array: ";
    cin >> n;
    cout << "Enter the array: ";
    for (i = 0; i < n; i++)
        cin >> a[i];
    mergeSort(a, 0, n - 1);

    cout << "Sorted array is: ";
    printArray(a, n);
    return 0;
}

(iv) Quick Sort :

#include <iostream>
using namespace std;

void quick_sort(int[], int, int);
int partition(int[], int, int);

int main()
{
    int a[50], n, i;
    cout << "How many elements? ";
    cin >> n;
    cout << "Enter array elements: ";
    for (i = 0; i < n; i++)
        cin >> a[i];

    quick_sort(a, 0, n - 1);
    cout << "Array after sorting: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";

    return 0;
}

void quick_sort(int a[], int l, int u)
{
    int j;
    if (l < u)
    {
        j = partition(a, l, u);   // pivot lands at index j
        quick_sort(a, l, j - 1);
        quick_sort(a, j + 1, u);
    }
}

// Partitions a[l..u] around the pivot v = a[l] and returns its final index.
int partition(int a[], int l, int u)
{
    int v, i, j, temp;
    v = a[l];
    i = l;
    j = u + 1;

    do
    {
        do
            i++;
        while (i <= u && a[i] < v);   // scan right for an element >= pivot

        do
            j--;
        while (v < a[j]);             // scan left for an element <= pivot

        if (i < j)
        {
            temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
    } while (i < j);

    a[l] = a[j];
    a[j] = v;

    return j;
}

(v) Radix Sort :

#include <iostream>
using namespace std;

// A utility function to get the maximum value in arr[].
int getMax(int arr[], int n)
{
    int mx = arr[0];
    for (int i = 1; i < n; i++)
        if (arr[i] > mx)
            mx = arr[i];
    return mx;
}

// A function to do counting sort of arr[] according to
// the digit represented by exp.
void countSort(int arr[], int n, int exp)
{
    int output[n]; // output array
    int i, count[10] = {0};

    // Store count of occurrences in count[].
    for (i = 0; i < n; i++)
        count[(arr[i] / exp) % 10]++;

    // Change count[i] so that count[i] now contains the actual
    // position of this digit in output[].
    for (i = 1; i < 10; i++)
        count[i] += count[i - 1];

    for (i = n - 1; i >= 0; i--)
    {
        output[count[(arr[i] / exp) % 10] - 1] = arr[i];
        count[(arr[i] / exp) % 10]--;
    }

    for (i = 0; i < n; i++)
        arr[i] = output[i];
}

void radixsort(int arr[], int n)
{
    // Find the maximum number to know the number of digits.
    int m = getMax(arr, n);

    // Do counting sort for every digit. Note that instead
    // of passing the digit number, exp is passed. exp is 10^i
    // where i is the current digit number.
    for (int exp = 1; m / exp > 0; exp *= 10)
        countSort(arr, n, exp);
}

void print(int arr[], int n)
{
    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";
}

// Driver program to test the above functions.
int main()
{
    int arr[] = {170, 45, 75, 90, 802, 24, 2, 66};
    int n = sizeof(arr) / sizeof(arr[0]);
    radixsort(arr, n);
    print(arr, n);
    return 0;
}

(vi) Heap Sort :

#include <iostream>
using namespace std;

// Restores the max-heap property at index i; heapsize is the last valid index.
void max_heapify(int a[], int i, int heapsize)
{
    int tmp, largest;
    int l = (2 * i) + 1;
    int r = (2 * i) + 2;
    if ((l <= heapsize) && (a[l] > a[i]))
        largest = l;
    else
        largest = i;
    if ((r <= heapsize) && (a[r] > a[largest]))
        largest = r;
    if (largest != i)
    {
        tmp = a[i];
        a[i] = a[largest];
        a[largest] = tmp;
        max_heapify(a, largest, heapsize);
    }
}

void build_max_heap(int a[], int heapsize)
{
    int i;
    for (i = heapsize / 2; i >= 0; i--)
        max_heapify(a, i, heapsize);
}

void heap_sort(int a[], int heapsize)
{
    int i, tmp;
    build_max_heap(a, heapsize);
    for (i = heapsize; i > 0; i--)
    {
        // Move the current maximum to the end and re-heapify the rest.
        tmp = a[i];
        a[i] = a[0];
        a[0] = tmp;
        heapsize--;
        max_heapify(a, 0, heapsize);
    }
}

// Inserts x into a heap of n elements by placing it at the end
// and sifting it up to its correct position.
void insert(int a[], int x, int n)
{
    a[n] = x;
    int i = n;
    while (i > 0 && a[(i - 1) / 2] < a[i])
    {
        int tmp = a[i];
        a[i] = a[(i - 1) / 2];
        a[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

int main()
{
    int i, x, n;
    int a[50];
    cout << "Enter the number of terms in the heap: ";
    cin >> n;
    cout << "Enter the elements: ";
    for (i = 0; i < n; i++)
        cin >> a[i];

    build_max_heap(a, n - 1);
    cout << "Max-heap: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";

    cout << "\nNumber to be inserted: ";
    cin >> x;
    insert(a, x, n);
    n++;

    heap_sort(a, n - 1);
    cout << "\nSorted array: ";
    for (i = 0; i < n; i++)
        cout << a[i] << " ";
    return 0;
}

(vii) Shell Sort :

#include <iostream>
using namespace std;

// A function implementing Shell sort.
void ShellSort(int a[], int n)
{
    int i, j, k, temp;
    // Gap 'i' between the indices of the elements compared, initially n/2.
    for (i = n / 2; i > 0; i = i / 2)
    {
        for (j = i; j < n; j++)
        {
            for (k = j - i; k >= 0; k = k - i)
            {
                // If the value at the higher index is greater, break the loop.
                if (a[k + i] >= a[k])
                    break;
                // Switch the values otherwise.
                else
                {
                    temp = a[k];
                    a[k] = a[k + i];
                    a[k + i] = temp;
                }
            }
        }
    }
}

int main()
{
    int n, i;
    cout << "\nEnter the number of data elements to be sorted: ";
    cin >> n;

    int arr[n]; // variable-length array, supported by g++
    for (i = 0; i < n; i++)
    {
        cout << "Enter element " << i + 1 << ": ";
        cin >> arr[i];
    }

    ShellSort(arr, n);

    // Printing the sorted data.
    cout << "\nSorted Data ";
    for (i = 0; i < n; i++)
        cout << "->" << arr[i];

    return 0;
}

Q3) Compare the complexity analysis for Q2 :-

(i) Bubble Sort:

1. The time complexity of bubble sort in the worst case is O(N^2), which makes it quite
inefficient for sorting large data volumes. It is O(N^2) because it places only one item
in its final position per pass, and in each pass it has to compare n - i elements.

2. The time complexity of bubble sort in the best case is O(N). When the given data set
is already sorted, bubble sort can identify this in one single pass, hence O(N): while
iterating from i = 0 to the end of the array, if no swap is required, the array is already
sorted and the sort can stop there (see the flag variant sketched below).

3. Bubble sort can identify when the list is sorted and can stop early.

4. Bubble sort is efficient for (quite) small data sets.

5. It is a stable sort; i.e., it does not change the relative order of elements with equal
keys.

6. It takes O(1) extra space.
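
Points 2 and 3 rely on an optimised variant of the program in Q2(i): a swapped flag lets the sort stop as soon as a pass makes no swaps. A minimal sketch of that variant:

#include <iostream>
using namespace std;

void bubbleSort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
    {
        bool swapped = false;               // no swaps seen in this pass yet
        for (int j = 0; j < n - 1 - i; j++)
        {
            if (a[j] > a[j + 1])
            {
                int s = a[j];
                a[j] = a[j + 1];
                a[j + 1] = s;
                swapped = true;
            }
        }
        if (!swapped)                       // a full pass with no swaps means the
            break;                          // array is sorted: best case O(N)
    }
}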

(ii) Selection Sort:

Selecting the lowest element requires scanning all n elements (this takes n - 1
comparisons) and then swapping it into the first position.

Finding the next lowest element requires scanning the remaining n - 1 elements, and
so on, for a total of

(n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1) / 2 = O(n^2) comparisons.

Best Case: O(n^2)
Worst Case: O(n^2)
Average Case: O(n^2)
Worst-Case Space Complexity: O(1)
Stable: No

(iii) Merge Sort:

In sorting n objects, merge sort has an average and worst-case performance of
O(n log n). If the running time of merge sort for a list of length n is T(n), then the
recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the
algorithm to two lists of half the size of the original list, and add the n steps taken to
merge the resulting two lists). The closed form follows from the master theorem for
divide-and-conquer recurrences.

In the worst case, the number of comparisons merge sort makes is equal to or slightly
smaller than n⌈lg n⌉ - 2^⌈lg n⌉ + 1, which is between (n lg n - n + 1) and
(n lg n + n + O(lg n)).

In the worst case, merge sort makes about 39% fewer comparisons than quicksort does
in its average case. In terms of moves, merge sort's worst-case complexity is
O(n log n), the same as quicksort's best case, and merge sort's best case takes about
half as many iterations as its worst case.

Merge sort is more efficient than quicksort for some types of lists if the data to be
sorted can only be accessed efficiently in sequence, and it is thus popular in languages
such as Lisp, where sequentially accessed data structures are very common. Unlike
some (efficient) implementations of quicksort, merge sort is a stable sort.

Merge sort's most common implementation does not sort in place; memory the size of
the input must be allocated for the sorted output to be stored in (there are variants
that need only n/2 extra spaces).
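
Expanding this recurrence makes the closed form visible: each level of the recursion tree does a total of n work to merge, and halving the list length repeatedly gives log n levels:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     = ...
     = 2^k T(n/2^k) + k*n

With 2^k = n, i.e. k = log2(n), and T(1) = O(1), this gives T(n) = O(n log n).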

(iv) Quick Sort:

In general, QuickSort obeys the recurrence

T(n) = T(k) + T(n - k - 1) + Θ(n)

where the first two terms are for the two recursive calls, the last term is for the
partition process, and k is the number of elements smaller than the pivot.
The time taken by QuickSort depends upon the input array and the partition strategy.
Following are the three cases.

Worst Case: The worst case occurs when the partition process always picks the greatest
or smallest element as the pivot. If we consider the partition strategy of Q2(iv), where
the first element is always picked as the pivot, the worst case occurs when the array is
already sorted in increasing or decreasing order. The recurrence for the worst case is
T(n) = T(n - 1) + Θ(n), and its solution is Θ(n^2).

Best Case: The best case occurs when the partition process always picks the middle
element as the pivot. The recurrence for the best case is T(n) = 2T(n/2) + Θ(n), and its
solution is Θ(n log n).

Average Case:
To do an average-case analysis, we would need to consider all possible permutations of
the array and calculate the time taken by each, which is not easy.
We can get an idea of the average case by considering the case when partition puts
O(n/10) of the elements in one set and O(9n/10) in the other. The recurrence for this
case is T(n) = T(n/10) + T(9n/10) + Θ(n), and its solution is also O(n log n).

Although the worst-case time complexity of QuickSort is O(n^2), which is more than
that of many other sorting algorithms such as Merge Sort and Heap Sort, QuickSort is
faster in practice, because its inner loop can be implemented efficiently on most
architectures and for most real-world data. QuickSort can be implemented in different
ways by changing the choice of pivot, so that the worst case rarely occurs for a given
type of data. However, Merge Sort is generally considered better when the data is huge
and stored in external storage.

(v) Radix Sort :

Each key is looked at once for each digit (or letter, if the keys are alphabetic) of the
longest key. Hence, if the longest key has m digits and there are n keys, radix sort has
order O(m*n).

However, the size of the keys is usually small relative to the number of keys. For
example, with six-digit keys we could have a million different records. When the size
of the keys is not significant in this way, the algorithm is of linear complexity O(n).

Let there be d digits in the input integers. Radix sort takes O(d*(n+b)) time, where b is
the base used for representing numbers; for the decimal system, b is 10. What is the
value of d? If k is the maximum possible value, then d would be O(log_b(k)). So the
overall time complexity is O((n+b) * log_b(k)), which looks like more than the time
complexity of comparison-based sorting algorithms for a large k. Now let k <= n^c,
where c is a constant. In that case, the complexity becomes O(n log_b(n)). If we set b
equal to n, we get the time complexity O(n). In other words, we can sort an array of
integers with range from 1 to n^c if the numbers are represented in base n (or every
digit takes log2(n) bits).
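
As a quick sanity check of the O(d*(n+b)) bound (the numbers here are only illustrative): sorting n = 1,000,000 six-digit decimal keys means d = 6 and b = 10, so radix sort performs on the order of 6 * (1,000,000 + 10) ≈ 6 million counting-sort steps, whereas a comparison-based sort needs roughly n * log2(n) ≈ 20 million comparisons.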

(vi) Heap Sort:

Heap sort's worst-case, best-case, and average-case time complexity is guaranteed
O(n log n), and its space complexity is O(1).

The height of a complete binary tree containing n elements is log(n). To fully heapify
an element whose subtrees are already max-heaps, we need to keep comparing the
element with its left and right children and pushing it downwards until it reaches a
point where both its children are smaller than it. In the worst-case scenario, we will
need to move an element from the root down to a leaf node, making a multiple of
log(n) comparisons and swaps.

During the build_max_heap stage, we do that for n/2 elements, so the worst-case
complexity of the build_max_heap step is n/2 * log(n) ~ n log n.

During the sorting step, we exchange the root element with the last element and
heapify the root element. For each element, this again takes log n time in the worst
case, because we might have to bring the element all the way from the root to a leaf.
Since we repeat this n times, the heap_sort step is also n log n. As the build_max_heap
and heap_sort steps are executed one after another, their costs add rather than
multiply, and the overall complexity remains of the order n log n. The sort is also
performed in O(1) extra space.

(vii) Shell Sort:

The worst case is O(n^2) and the best case is O(n log n), which is reasonable for Shell
sort.

The best case = O(n log n):

The best case is when the array is already sorted. Then the comparison in the
innermost loop is never true, making the innermost loop a constant-time operation.
Using the bounds of the two outer loops gives O(n log n). A best case of O(n) is reached
by using a constant number of increments.

The worst case = O(n^2):

The given upper bound for each loop yields O((log n) * n^2) for the worst case. But
introduce another variable for the gap size, g. The number of compare/exchanges
needed in the innermost loop is then <= n/g, and the number of compare/exchanges in
the middle loop is <= n^2/g. Adding the upper bounds of the number of
compare/exchanges for each gap together gives: n^2 + n^2/2 + n^2/4 + ... <= 2n^2
∈ O(n^2). This matches the known worst-case complexity for the gap sequence
n/2, n/4, ..., 1 used in Q2(vii).

The worst case = Ω(n^2):

Consider an array where all the even-positioned elements are greater than the median.
The odd- and even-positioned elements are not compared until we reach the last
increment of 1, so the number of compare/exchanges needed in that final pass is
Ω(n^2).
