
Unit – IV

Introduction to sorting

The arrangement of data in a preferred order is called sorting in data structures. Sorted data is much easier to search
through quickly. The simplest everyday example of sorting is a dictionary.

What is sorting in Data Structure?

Sorting is the process of arranging items in a specific order or sequence. It is a common algorithmic problem in computer
science and is used in various applications such as searching, data analysis, and information retrieval.
In other words, you can say that sorting is also used to present data in a more readable format. Some real-life examples
of sorting are:
o The contact list in your mobile phone is arranged alphabetically (lexicographically), so you do not have to scan it
at random: any contact can be found quickly. The same applies to the apps on your phone.
o The index keywords in a book are also kept in lexicographical order, so you can look a term up and find it by
chapter.

Why Sorting is Important?

o When you perform sorting on an array, many problems become easy (e.g. finding the min/max or the kth smallest/largest element)
o Sorting also yields a number of algorithmic solutions that illustrate many other ideas, such as:
o Iteration
o Divide-and-conquer
o Comparison-based vs. non-comparison-based methods
o Recursion

The main advantage of sorting is improved time complexity, and that is the most important thing when you solve a problem,
because it is not enough to be able to solve a problem: you should be able to solve it in the minimum time possible.
Sometimes a problem can be solved easily and quickly by sorting first, which can save you from every coder's
nightmare, i.e. TLE (Time Limit Exceeded).

Sorting Categories
The Sorting categories in data structures can be broadly classified into the following types:

Comparison-based Sorting Algorithms: These algorithms compare the elements being sorted to each other
and then place them in the desired order. Examples include Bubble Sort, Selection Sort, Insertion Sort, QuickSort, Merge
Sort, and Heap Sort.

Non-Comparison-based Sorting Algorithms: These algorithms do not compare the elements being sorted to each
other. Instead, they use some specific characteristics of the data to sort them. Examples include Counting Sort, Radix
Sort, and Bucket Sort.

Stable Sorting Algorithms: These algorithms maintain the relative order of elements with equal keys during sorting.
Examples include Merge Sort and Insertion Sort.
Unstable Sorting Algorithms: These algorithms do not maintain the relative order of elements with equal keys
during sorting. Examples include QuickSort and Heap Sort.

Efficiency of algorithm in data structure

An algorithm is considered efficient if its resource consumption, also known as computational cost, is at or below some
acceptable level. Roughly speaking, 'acceptable' means: it will run in a reasonable amount of time or space on an
available computer, typically as a function of the size of the input.

Algorithm Efficiency

Time efficiency − a measure of the amount of time an algorithm takes to execute.

Space efficiency − a measure of the amount of memory an algorithm needs to execute.

Complexity theory − the study of algorithm performance.

An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the
desired output. Algorithms are generally created independent of underlying languages, i.e. an algorithm can be
implemented in more than one programming language.
From the data structure point of view, following are some important categories of algorithms −
 Search − Algorithm to search an item in a data structure.
 Sort − Algorithm to sort items in a certain order.
 Insert − Algorithm to insert item in a data structure.
 Update − Algorithm to update an existing item in a data structure.
 Delete − Algorithm to delete an existing item from a data structure.

Characteristics of an Algorithm
Not all procedures can be called an algorithm. An algorithm should have the following characteristics −
 Unambiguous − Algorithm should be clear and unambiguous. Each of its steps (or phases), and their
inputs/outputs should be clear and must lead to only one meaning.
 Input − An algorithm should have 0 or more well-defined inputs.
 Output − An algorithm should have 1 or more well-defined outputs, and should match the desired output.
 Finiteness − Algorithms must terminate after a finite number of steps.
 Feasibility − Should be feasible with the available resources.
 Independent − An algorithm should have step-by-step directions, which should be independent of any
programming code.

How to Write an Algorithm?


There are no well-defined standards for writing algorithms. Rather, it is problem- and resource-dependent. Algorithms are
never written to support a particular programming code.
As we know, all programming languages share basic code constructs like loops (do, for, while), flow control (if-
else), etc. These common constructs can be used to write an algorithm.
We usually write algorithms in a step-by-step manner, but that is not always the case. Algorithm writing is a process that is
carried out after the problem domain is well-defined. That is, we should know the problem domain for which we are
designing a solution.
Example
Let's try to learn algorithm-writing by using an example.
Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Algorithms tell the programmers how to code the program. Alternatively, the algorithm can be written as −
Step 1 − START ADD
Step 2 − get values of a & b
Step 3 − c ← a + b
Step 4 − display c
Step 5 − STOP
In design and analysis of algorithms, usually the second method is used to describe an algorithm. It makes it easy for the
analyst to analyze the algorithm while ignoring all unwanted definitions: one can observe what operations are being used and
how the process is flowing.
Writing step numbers is optional.
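For concreteness, the second form of the algorithm maps almost line for line onto C. A minimal sketch (the sample values 10 and 20 are assumed purely for illustration):

#include <stdio.h>

int main(void) {
    int a = 10, b = 20;   /* Step 2 − get values of a & b (sample values assumed) */
    int c = a + b;        /* Step 3 − c ← a + b */
    printf("%d\n", c);    /* Step 4 − display c */
    return 0;             /* Step 5 − STOP */
}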
We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way.

Hence, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solution
algorithms and implement the best suitable solution.
Algorithm Analysis
Efficiency of an algorithm can be analyzed at two different stages, before implementation and after implementation.
They are the following −
 A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by
assuming that all other factors, for example, processor speed, are constant and have no effect on the
implementation.
 A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented
in a programming language and then executed on a target computer. In this analysis, actual statistics,
such as running time and space required, are collected.
We shall learn about a priori algorithm analysis. Algorithm analysis deals with the execution or running time of various
operations involved. The running time of an operation can be defined as the number of computer instructions executed
per operation.

Algorithm Complexity
Suppose X is an algorithm and n is the size of input data, the time and space used by the algorithm X are the two main
factors, which decide the efficiency of X.
 Time Factor − Time is measured by counting the number of key operations such as comparisons in the sorting
algorithm.
 Space Factor − Space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm f(n) gives the running time and/or the storage space required by the algorithm in terms
of n as the size of input data.

Space Complexity
Space complexity of an algorithm represents the amount of memory space required by the algorithm in its life cycle. The
space required by an algorithm is equal to the sum of the following two components −
 A fixed part that is a space required to store certain data and variables, that are independent of the size of the
problem. For example, simple variables and constants used, program size, etc.
 A variable part is a space required by variables, whose size depends on the size of the problem. For example,
dynamic memory allocation, recursion stack space, etc.
Space complexity S(P) of any algorithm P is S(P) = C + Sp(I), where C is the fixed part and Sp(I) is the variable part of
the algorithm, which depends on instance characteristic I. Following is a simple example that tries to explain the concept

Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - Stop
Here we have three variables (A, B, and C) and one constant (10). Hence S(P) = 1 + 3. The actual space further depends on the
data types of the given variables and constants, and is multiplied by the size of each type accordingly.
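To make the fixed/variable distinction concrete, here is a small illustrative C sketch (the function names sumFixed and sumArray are hypothetical): the first function uses a constant amount of space regardless of its input, while the second allocates a buffer whose size grows with the problem size n.

#include <stdlib.h>

/* Fixed part only: a few scalar variables, independent of input size, so S(P) = O(1). */
int sumFixed(int a, int b) {
    int c = a + b;
    return c;
}

/* Fixed part plus a variable part: the buffer holds n integers, so S(P) = O(n). */
int sumArray(int n) {
    int *buf = malloc(n * sizeof(int));   /* variable part: grows with n */
    int total = 0;
    for (int i = 0; i < n; i++) buf[i] = i + 1;
    for (int i = 0; i < n; i++) total += buf[i];
    free(buf);
    return total;                         /* sum of 1..n */
}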

Time Complexity
Time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time
requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, provided
each step consumes constant time.
For example, addition of two n-bit integers takes n steps. Consequently, the total computational time is T(n) = c ∗ n,
where c is the time taken for the addition of two bits. Here, we observe that T(n) grows linearly as the input size
increases.

Asymptotic Analysis
Asymptotic analysis of an algorithm refers to defining the mathematical boundation/framing of its run-time performance.
Using asymptotic analysis, we can very well conclude the best case, average case, and worst case scenario of an
algorithm.
Asymptotic analysis is input bound i.e., if there's no input to the algorithm, it is concluded to work in a constant time.
Other than the "input" all other factors are considered constant.
Asymptotic analysis refers to computing the running time of any operation in mathematical units of computation. For
example, the running time of one operation may be computed as f(n) while for another operation it is computed as g(n²).
This means the first operation's running time will increase linearly with the increase in n, and the running time of the
second operation will increase quadratically as n increases. Similarly, the running times of both operations will be
nearly the same if n is small.
Usually, the time required by an algorithm falls under three types −
 Best Case − Minimum time required for program execution.
 Average Case − Average time required for program execution.
 Worst Case − Maximum time required for program execution.

Asymptotic Notations
Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.

 Ο Notation
 Ω Notation
 θ Notation
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst
case time complexity or the longest amount of time an algorithm can possibly take to complete.

For example, for a function f(n)


Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c.f(n) for all n > n0 }
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case
time complexity, or the minimum amount of time an algorithm can possibly take to complete.

For example, for a function f(n)


Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c.f(n) for all n > n0 }
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.
It is represented as follows −

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) }

Common Asymptotic Notations


Following is a list of some common asymptotic notations −

Constant − Ο(1)

Logarithmic − Ο(log n)
Linear − Ο(n)

n log n − Ο(n log n)

Quadratic − Ο(n²)

Cubic − Ο(n³)

Polynomial − n^Ο(1)

Exponential − 2^Ο(n)

Sorting Algorithms

Sorting is the process of arranging the elements of an array so that they are placed either in ascending or descending
order. For example, consider an array A = {A1, A2, A3, ..., An}; the array is said to be in ascending order if the
elements of A are arranged such that A1 ≤ A2 ≤ A3 ≤ ... ≤ An.

Consider an array;

int A[10] = { 5, 4, 10, 2, 30, 45, 34, 14, 18, 9 };

The Array sorted in ascending order will be given as;

A[] = { 2, 4, 5, 9, 10, 14, 18, 30, 34, 45 }

There are many techniques by using which, sorting can be performed. In this section of the tutorial, we will discuss each
method in detail.

Sorting Algorithms

The common sorting algorithms are described below, each with a short description.

1. Bubble Sort − The simplest sorting method. It sorts by repeatedly moving the largest element to the highest index of
the array: each element is compared with its adjacent element, and the two are swapped if they are out of order.

2. Bucket Sort − Also known as bin sort. It works by distributing the elements into groups called buckets. The buckets
are then sorted individually, using some other sorting algorithm.

3. Comb Sort − An advanced form of bubble sort. Bubble sort compares adjacent values only, while comb sort compares
elements a gap apart, which quickly removes the "turtles", i.e. small values near the end of the list.

4. Counting Sort − A technique based on keys, i.e. objects are counted according to keys that are small integers.
Counting sort counts the number of occurrences of each key value; a running (prefix) sum of those counts then gives
each object its position in the output array.

5. Heap Sort − A min heap or max heap is built from the array elements, depending on the desired order, and the
elements are sorted by repeatedly deleting the root element of the heap.

6. Insertion Sort − As the name suggests, insertion sort inserts each element of the array into its proper place. It is
a very simple method, the one typically used to arrange a hand of cards while playing bridge.

7. Merge Sort − Follows the divide-and-conquer approach: the list is divided into two halves, each half is sorted using
merge sort, and the two sorted halves are then merged back together to form the fully sorted array.

8. Quick Sort − One of the most optimized sorting algorithms; it sorts with O(n log n) comparisons on average. Like
merge sort, quick sort works by the divide-and-conquer approach.

9. Radix Sort − Sorting is done digit by digit, much as names are sorted according to their alphabetical order. It is a
linear sorting algorithm used for integers.

10. Selection Sort − Finds the smallest element in the array and places it in the first position, then finds the second
smallest and places it in the second position, and so on until all elements are in their correct order. Its running time
is O(n²), which is worse than insertion sort.

11. Shell Sort − A generalization of insertion sort that overcomes insertion sort's main drawback by comparing elements
separated by a gap of several positions.
Bubble sort Algorithm

In the algorithm given below, suppose arr is an array of n elements. The assumed swap function in the algorithm will
swap the values of given array elements.

begin BubbleSort(arr, n)
   for i = 0 to n-2                 // one pass per iteration
      for j = 0 to n-2-i            // the last i elements are already in place
         if arr[j] > arr[j+1]
            swap(arr[j], arr[j+1])
         end if
      end for
   end for
   return arr
end BubbleSort
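A minimal runnable C version of the same procedure (the sample array matches the walkthrough below):

#include <stdio.h>

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {            /* one pass per outer iteration */
        for (int j = 0; j < n - 1 - i; j++) {    /* last i elements already in place */
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];                /* swap the out-of-order pair */
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
            }
        }
    }
}

int main(void) {
    int a[] = {13, 32, 26, 35, 10};
    int n = sizeof a / sizeof a[0];
    bubbleSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   /* prints: 10 13 26 32 35 */
    printf("\n");
    return 0;
}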

Working of Bubble sort Algorithm

Now, let's see the working of Bubble sort Algorithm.

To understand the working of the bubble sort algorithm, let's take an unsorted array. We take a short and simple
array, since we know the complexity of bubble sort is O(n²).
Let the elements of the array be {13, 32, 26, 35, 10}.

First Pass

Sorting starts with the first two elements; we compare them to check which is greater.
Here, 32 is greater than 13 (32 > 13), so this pair is already in order. Now, compare 32 with 26.

Here, 26 is smaller than 32, so swapping is required. After swapping, the array looks like {13, 26, 32, 35, 10}.

Now, compare 32 and 35.

Here, 35 is greater than 32, so no swapping is required; they are already in order.
Now, the comparison is between 35 and 10.

Here, 10 is smaller than 35, so they are not in order and swapping is required. We have now reached the end of the
array. After the first pass, the array is {13, 26, 32, 10, 35}.

Now, move to the second iteration.


Second Pass
The same process is followed in the second iteration.

Here, 10 is smaller than 32, so swapping is required. After swapping, the array is {13, 26, 10, 32, 35}.
Now, move to the third iteration.

Third Pass
The same process is followed in the third iteration.

Here, 10 is smaller than 26, so swapping is required. After swapping, the array is {13, 10, 26, 32, 35}.

Now, move to the fourth iteration.


Fourth pass
Similarly, in the fourth iteration 13 and 10 are swapped, and the array becomes {10, 13, 26, 32, 35}.

Hence, no further swapping is required, and the array is completely sorted.


Bubble sort complexity
Now, let's see the time complexity of bubble sort in the best case, average case, and worst case. We will also see the
space complexity of bubble sort.
1. Time Complexity
o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case
time complexity of bubble sort is O(n) (for the optimized version with the swapped flag described below).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of bubble sort is O(n²).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. That
means, suppose you have to sort the array elements in ascending order, but they are given in descending order.
The worst-case time complexity of bubble sort is O(n²).

2. Space Complexity

o The space complexity of bubble sort is O(1), because only one extra variable is required for swapping.
o The space complexity of the optimized bubble sort is also O(1): the two extra variables it uses (the swap
temporary and the swapped flag) are still a constant amount of space.
Now, let's discuss the optimized bubble sort algorithm.
Optimized Bubble sort Algorithm
In the plain bubble sort algorithm, comparisons are made even when the array is already sorted, and the execution
time increases because of that.
To solve this, we can use an extra variable swapped. It is set to true if a swap occurs during a pass; otherwise, it stays false.
This is helpful because if, after an iteration, no swapping was required, the value of the variable swapped will
be false. It means that the elements are already sorted, and no further iterations are required.
This method reduces the execution time and optimizes the bubble sort.
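A sketch of the optimized version in C, building on the bubbleSort sketch above; the swapped flag ends the sort as soon as a full pass makes no swap, which is exactly what yields the O(n) best case on an already sorted array.

#include <stdbool.h>

void bubbleSortOptimized(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;                    /* no swap seen in this pass yet */
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {
                int tmp = arr[j];
                arr[j] = arr[j + 1];
                arr[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;                     /* pass made no swap: already sorted */
    }
}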

Heap Sort Algorithm

Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to eliminate the elements one by one
from the heap part of the list, and then insert them into the sorted part of the list.

Heapsort is an in-place sorting algorithm.

Algorithm
HeapSort(arr)
   BuildMaxHeap(arr)
   for i = length(arr) downto 2
      swap arr[1] with arr[i]
      heap_size(arr) = heap_size(arr) - 1
      MaxHeapify(arr, 1)
End

BuildMaxHeap(arr)

BuildMaxHeap(arr)
   heap_size(arr) = length(arr)
   for i = length(arr)/2 downto 1
      MaxHeapify(arr, i)
End

MaxHeapify(arr,i)

MaxHeapify(arr, i)
   L = left(i)
   R = right(i)
   if L ≤ heap_size(arr) and arr[L] > arr[i]
      largest = L
   else
      largest = i
   if R ≤ heap_size(arr) and arr[R] > arr[largest]
      largest = R
   if largest != i
      swap arr[i] with arr[largest]
      MaxHeapify(arr, largest)
End
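The pseudocode above is 1-indexed; a runnable 0-indexed C translation (so the children of node i sit at 2i+1 and 2i+2) might look like this sketch:

void maxHeapify(int arr[], int heapSize, int i) {
    int l = 2 * i + 1, r = 2 * i + 2, largest = i;
    if (l < heapSize && arr[l] > arr[largest]) largest = l;
    if (r < heapSize && arr[r] > arr[largest]) largest = r;
    if (largest != i) {
        int tmp = arr[i]; arr[i] = arr[largest]; arr[largest] = tmp;
        maxHeapify(arr, heapSize, largest);      /* push the displaced value down */
    }
}

void buildMaxHeap(int arr[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)         /* last internal node down to the root */
        maxHeapify(arr, n, i);
}

void heapSort(int arr[], int n) {
    buildMaxHeap(arr, n);
    for (int i = n - 1; i >= 1; i--) {
        int tmp = arr[0]; arr[0] = arr[i]; arr[i] = tmp;   /* move the max to its final slot */
        maxHeapify(arr, i, 0);                   /* the heap shrinks by one each iteration */
    }
}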

Working of Heap sort Algorithm

Now, let's see the working of the Heapsort Algorithm.

In heap sort, there are basically two phases involved in sorting the elements:

o First, a heap is created by adjusting the elements of the array.
o After the heap is created, the root element of the heap is removed repeatedly by shifting it to the end of the
array, and the heap structure is restored among the remaining elements.

First, we construct a heap from the given array and convert it into a max heap. In this example, after heapification
the heap contains the elements 9, 11, 14, 22, 54, 76, 81, and 89, with 89 at the root.

Next, we delete the root element (89) from the max heap. To delete this node, we swap it with the last
node, i.e. 11, so 89 takes its final position at the end of the array. After deleting the root element, we heapify
again to restore the max heap among the remaining elements.

In the next step, we delete the root element (81). We swap it with the last node of the remaining heap, i.e. 54, and
heapify again.

In the next step, we delete the root element (76). We swap it with the last node, i.e. 9, and heapify again.

In the next step, we delete the root element (54). We swap it with the last node, i.e. 14, and heapify again.

In the next step, we delete the root element (22). We swap it with the last node, i.e. 11, and heapify again.

In the next step, we delete the root element (14). We swap it with the last node, i.e. 9, and heapify again.

Finally, we delete the root element (11) by swapping it with the last node, i.e. 9.

Now, the heap has only one element left. After deleting it, the heap will be empty.

After completion of sorting, the array elements are {9, 11, 14, 22, 54, 76, 81, 89}.

Now, the array is completely sorted.

Heap sort complexity

Now, let's see the time complexity of Heap sort in the best case, average case, and worst case. We will also see the space
complexity of Heapsort.

1. Time Complexity

Case Time Complexity

Best Case O(n log n)

Average Case O(n log n)

Worst Case O(n log n)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case
time complexity of heap sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. That
means, suppose you have to sort the array elements in ascending order, but they are given in descending order.
The worst-case time complexity of heap sort is O(n log n).

The time complexity of heap sort is O(n log n) in all three cases (best case, average case, and worst case). This is
because the height of a complete binary tree with n elements is log n, and each of the n root deletions costs at most that height.

2. Space Complexity

Space Complexity O(1)

Stable No

o The space complexity of heap sort is O(1).

Insertion Sort Algorithm

Following are some of the important characteristics of Insertion Sort:

1. It is efficient for smaller data sets, but very inefficient for larger lists.

2. Insertion sort is adaptive: it reduces its total number of steps if a partially sorted array is provided as
input, which makes it efficient in that case.

3. It generally performs better than selection sort and bubble sort.

4. Its space complexity is low: like bubble sort, insertion sort requires only a single additional memory location.

5. It is a stable sorting technique, as it does not change the relative order of equal elements.

Algorithm

The simple steps of achieving the insertion sort are listed as follows -

Step 1 - If the element is the first element, assume that it is already sorted.

Step 2 - Pick the next element, and store it separately as a key.

Step 3 - Now, compare the key with the elements in the sorted part of the array.

Step 4 - If the element in the sorted part is smaller than the current element, move to the next element. Otherwise,
shift the greater elements in the array towards the right.

Step 5 - Insert the key at the freed position.

Step 6 - Repeat until the array is sorted.
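As a sketch, these steps translate into C as follows; the inner while loop carries out the comparing and shifting of steps 3 and 4:

void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];                 /* Step 2: pick the next element */
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {  /* Steps 3-4: shift greater elements right */
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;                 /* Step 5: insert the value */
    }
}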


Complexity Analysis of Insertion Sort

As mentioned above, insertion sort is efficient in practice because its inner while loop does not run a preset number of
times: it stops as soon as the key is in place, avoiding extra steps once (part of) the array is sorted.

Even so, if we provide an already sorted array to the insertion sort algorithm, it still executes the outer for loop,
requiring n steps to "sort" an already sorted array of n elements. This makes its best-case time complexity a linear
function of n.

Worst Case Time Complexity [Big-O]: O(n²)

Best Case Time Complexity [Big-omega]: Ω(n)

Average Time Complexity [Big-theta]: θ(n²)

Space Complexity: O(1)

Merge Sort Algorithm


Merge sort is similar to the quick sort algorithm in that it uses the divide-and-conquer approach to sort the elements. It is one
of the most popular and efficient sorting algorithms. It divides the given list into two equal halves, calls itself on the two
halves, and then merges the two sorted halves. We have to define the merge() function to perform the merging.

The sub-lists are divided again and again into halves until each list cannot be divided further. Then we combine pairs of
one-element lists into two-element lists, sorting them in the process. The sorted two-element lists are merged into
four-element lists, and so on, until we get the fully sorted list.

Algorithm

In the following algorithm, arr is the given array, beg is the starting element, and end is the last element of the array.

MERGE_SORT(arr, beg, end)

if beg < end
   set mid = (beg + end)/2
   MERGE_SORT(arr, beg, mid)
   MERGE_SORT(arr, mid + 1, end)
   MERGE (arr, beg, mid, end)
end of if

END MERGE_SORT

The important part of the merge sort is the MERGE function. This function performs the merging of two sorted sub-
arrays that are A[beg…mid] and A[mid+1…end], to build one sorted array A[beg…end]. So, the inputs of
the MERGE function are A[], beg, mid, and end.
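A compact C sketch of both routines; the auxiliary buffer allocated inside merge is where merge sort's O(n) extra space comes from:

#include <stdlib.h>

static void merge(int a[], int beg, int mid, int end) {
    int n = end - beg + 1;
    int *tmp = malloc(n * sizeof(int));   /* auxiliary buffer: the O(n) space */
    int i = beg, j = mid + 1, k = 0;
    while (i <= mid && j <= end)          /* take the smaller head of the two runs */
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];   /* <= keeps the sort stable */
    while (i <= mid) tmp[k++] = a[i++];   /* copy any leftovers */
    while (j <= end) tmp[k++] = a[j++];
    for (k = 0; k < n; k++) a[beg + k] = tmp[k];
    free(tmp);
}

void mergeSort(int a[], int beg, int end) {
    if (beg < end) {
        int mid = beg + (end - beg) / 2;  /* same as (beg+end)/2, without overflow */
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}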
Merge sort complexity

Now, let's see the time complexity of merge sort in best case, average case, and in worst case. We will also see the space
complexity of the merge sort.

1. Time Complexity

Case Time Complexity

Best Case O(n log n)

Average Case O(n log n)

Worst Case O(n log n)

o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case
time complexity of merge sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of merge sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. The
worst-case time complexity of merge sort is O(n log n).

2. Space Complexity

Space Complexity O(n)

Stable YES

o The space complexity of merge sort is O(n), because merging requires an auxiliary array of up to n elements to
hold the merged output.

Quick Sort Algorithm

Sorting is a way of arranging items in a systematic manner. Quicksort is the widely used sorting algorithm that makes n
log n comparisons in average case for sorting an array of n elements. It is a faster and highly efficient sorting algorithm.
This algorithm follows the divide and conquer approach. Divide and conquer is a technique of breaking down the
algorithms into subproblems, then solving the subproblems, and combining the results back together to solve the original
problem.

Divide: In Divide, first pick a pivot element. After that, partition or rearrange the array into two sub-arrays such that
each element in the left sub-array is less than or equal to the pivot element and each element in the right sub-array is
larger than the pivot element.

Conquer: Recursively, sort two subarrays with Quicksort.

Combine: Combine the already sorted array.


Quicksort picks an element as pivot, and then it partitions the given array around the picked pivot element. In quick sort,
a large array is divided into two arrays in which one holds values that are smaller than the specified value (Pivot), and
another array holds the values that are greater than the pivot.

After that, left and right sub-arrays are also partitioned using the same approach. It will continue until the single element
remains in the sub-array.

Choosing the pivot

Picking a good pivot is necessary for a fast implementation of quicksort. However, it is difficult to pick a good
pivot in advance. Some of the ways of choosing a pivot are as follows -

o The pivot can be random, i.e. select a random element of the given array as the pivot.

o The pivot can be either the rightmost element or the leftmost element of the given array.

o Select the median as the pivot element.

Algorithm

Algorithm:

QUICKSORT (array A, start, end)
{
   if (start < end)
   {
      p = partition(A, start, end)
      QUICKSORT (A, start, p - 1)
      QUICKSORT (A, p + 1, end)
   }
}

Partition Algorithm:

The partition algorithm rearranges the sub-array in place.

PARTITION (array A, start, end)
{
   pivot ← A[end]
   i ← start - 1
   for j ← start to end - 1 {
      if (A[j] < pivot) {
         i ← i + 1
         swap A[i] with A[j]
      }
   }
   swap A[i+1] with A[end]
   return i+1
}
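A direct C translation of the two routines, using the rightmost element as the pivot as in the pseudocode (a minimal sketch, without the pivot-selection refinements discussed above):

static void swapInts(int *x, int *y) { int t = *x; *x = *y; *y = t; }

static int partition(int a[], int start, int end) {
    int pivot = a[end];                   /* pivot ← A[end] */
    int i = start - 1;
    for (int j = start; j < end; j++) {
        if (a[j] < pivot) {               /* grow the "less than pivot" region */
            i++;
            swapInts(&a[i], &a[j]);
        }
    }
    swapInts(&a[i + 1], &a[end]);         /* place the pivot between the two regions */
    return i + 1;
}

void quickSort(int a[], int start, int end) {
    if (start < end) {
        int p = partition(a, start, end);
        quickSort(a, start, p - 1);       /* left of the pivot */
        quickSort(a, p + 1, end);         /* right of the pivot */
    }
}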
Quicksort complexity

Now, let's see the time complexity of quicksort in best case, average case, and in worst case. We will also see the space
complexity of quicksort.

1. Time Complexity
Case Time Complexity

Best Case O(n log n)

Average Case O(n log n)

Worst Case O(n²)

o Best Case Complexity - In quicksort, the best case occurs when the pivot element is the middle element or near
the middle element. The best-case time complexity of quicksort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly
ascending nor properly descending. The average-case time complexity of quicksort is O(n log n).
o Worst Case Complexity - In quicksort, the worst case occurs when the pivot element is either the greatest or the
smallest element. For example, if the pivot element is always the last element of the array, the worst case occurs
when the given array is already sorted in ascending or descending order. The worst-case time complexity of quicksort
is O(n²).

Though the worst-case complexity of quicksort is higher than that of other sorting algorithms such as merge sort and heap sort,
it is still faster in practice. The worst case rarely occurs, because the choice of pivot can be varied; the worst case
can be largely avoided by choosing the pivot element well.

2. Space Complexity

Space Complexity O(log n)

Stable NO

o The space complexity of quicksort is O(log n) on average, for the recursion stack; in the worst case (maximally
unbalanced partitions) the stack can grow to O(n).

Radix Sort Algorithm

In this section, we will discuss the radix sort algorithm. Radix sort is a linear sorting algorithm used for
integers. In radix sort, digit-by-digit sorting is performed, starting from the least significant digit and moving to the most
significant digit.
The process of radix sort works similarly to sorting students' names in alphabetical order. In that case, there are
26 radixes, due to the 26 letters of the English alphabet. In the first pass, the names are grouped
according to the ascending order of the first letter of each name. Then, in the second pass, the names are grouped
according to the ascending order of the second letter, and the process continues until the sorted list is found.

Algorithm

radixSort(arr)

max = largest element in the given array

d = number of digits in the largest element (or, max)

create 10 buckets, one for each digit 0 - 9

for i ← 1 to d

   sort the array elements using counting sort (or any stable sort) according to the digit at the i-th place
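As a sketch in C, using counting sort as the stable per-digit pass; base 10 and non-negative integers are assumed:

#include <stdlib.h>

/* One stable counting-sort pass on the digit at place value exp (1, 10, 100, ...). */
static void countingPass(int a[], int n, int exp) {
    int *out = malloc(n * sizeof(int));
    int count[10] = {0};
    for (int i = 0; i < n; i++) count[(a[i] / exp) % 10]++;
    for (int d = 1; d < 10; d++) count[d] += count[d - 1];   /* prefix sums = end positions */
    for (int i = n - 1; i >= 0; i--)                         /* walk backwards to keep stability */
        out[--count[(a[i] / exp) % 10]] = a[i];
    for (int i = 0; i < n; i++) a[i] = out[i];
    free(out);
}

void radixSort(int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > max) max = a[i];
    for (int exp = 1; max / exp > 0; exp *= 10)              /* one pass per digit of max */
        countingPass(a, n, exp);
}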


Radix sort complexity

Now, let's see the time complexity of Radix sort in best case, average case, and worst case. We will also see the space
complexity of Radix sort.

1. Time Complexity

Case Time Complexity

Best Case Ω(n+k)

Average Case θ(nk)

Worst Case O(nk)


o Best Case Complexity - It occurs when no sorting is required, i.e. the array is already sorted. The best-case

time complexity of radix sort is Ω(n+k).

o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly

ascending nor properly descending. The average-case time complexity of radix sort is θ(nk).

o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. The

worst-case time complexity of radix sort is O(nk).

Radix sort is a non-comparative sorting algorithm that can beat the comparison-based sorting algorithms: its running
time is linear in nk, which is better than the O(n log n) bound of comparison-based algorithms when the number of
digits k is small.

2. Space Complexity

o The space complexity of Radix sort is O(n + k).

Shell Sort Algorithm

It is a sorting algorithm that is an extended version of insertion sort. Shell sort has improved the average time complexity
of insertion sort. As similar to insertion sort, it is a comparison-based and in-place sorting algorithm. Shell sort is
efficient for medium-sized data sets.
In insertion sort, at a time, elements can be moved ahead by one position only. To move an element to a far-away
position, many movements are required that increase the algorithm's execution time. But shell sort overcomes this
drawback of insertion sort. It allows the movement and swapping of far-away elements as well.

This algorithm first sorts the elements that are far apart from each other, then successively reduces the gap between
them. This gap is called the interval. The interval can be calculated using Knuth's formula given below -

h = h * 3 + 1

where 'h' is the interval, with an initial value of 1.

Now, let's see the algorithm of shell sort.

Algorithm

The simple steps of achieving the shell sort are listed as follows -

ShellSort(a, n) // 'a' is the given array, 'n' is the size of the array

for (interval = n/2; interval > 0; interval /= 2)
   for (i = interval; i < n; i += 1)
      temp = a[i];
      for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
         a[j] = a[j - interval];
      a[j] = temp;

End ShellSort

Shell sort uses a gapped insertion sort to sort the array elements. (Note that this pseudocode halves the interval each
round, which is Shell's original gap sequence, rather than using Knuth's formula above; both are valid gap sequences.)
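Wrapped into a complete C function, the pseudocode above reads:

void shellSort(int a[], int n) {
    for (int interval = n / 2; interval > 0; interval /= 2) {   /* shrink the gap each round */
        for (int i = interval; i < n; i++) {
            int temp = a[i];
            int j;
            for (j = i; j >= interval && a[j - interval] > temp; j -= interval)
                a[j] = a[j - interval];   /* gapped insertion sort */
            a[j] = temp;
        }
    }
}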


Shell sort complexity

Now, let's see the time complexity of Shell sort in the best case, average case, and worst case. We will also see the space
complexity of the Shell sort.

1. Time Complexity

Case Time Complexity

Best Case O(n log n)

Average Case O(n (log n)²)

Worst Case O(n²)

o Best Case Complexity - It occurs when no sorting is required, i.e., the array is already sorted. The best-

case time complexity of shell sort is O(n log n).

o Average Case Complexity - It occurs when the array elements are in jumbled order, neither properly

ascending nor properly descending. The average-case time complexity of shell sort is around O(n (log n)²); it

depends on the gap sequence used.

o Worst Case Complexity - It occurs when the array elements have to be sorted in reverse order. The

worst-case time complexity of shell sort (with the gap sequence above) is O(n²).

2. Space Complexity

o The space complexity of Shell sort is O(1).


Linear Search Algorithm

Searching is the process of finding some particular element in the list. If the element is present in the list, then the
process is called successful, and the process returns the location of that element; otherwise, the search is called
unsuccessful.

Two popular search methods are Linear Search and Binary Search. So, here we will discuss the popular searching
technique, i.e., Linear Search Algorithm.

Linear search is also called the sequential search algorithm. It is the simplest searching algorithm. In linear search, we
simply traverse the list completely and match each element of the list with the item whose location is to be found. If a
match is found, then the location of the item is returned; otherwise, the algorithm reports that the item is not present
(for example, by returning -1).

It is widely used to search an element from the unordered list, i.e., the list in which items are not sorted. The worst-case
time complexity of linear search is O(n).

The steps used in the implementation of Linear Search are listed as follows -

o First, we have to traverse the array elements using a for loop.

o In each iteration of for loop, compare the search element with the current array element, and -

o If the element matches, then return the index of the corresponding array element.

o If the element does not match, then move to the next element.

o If there is no match or the search element is not present in the given array, return -1.

Now, let's see the algorithm of linear search.

Algorithm
Linear_Search(a, n, val) // 'a' is the given array, 'n' is the size of the given array, 'val' is the value to search

Step 1: set pos = -1

Step 2: set i = 1

Step 3: repeat step 4 while i <= n

Step 4: if a[i] == val

            set pos = i

            print pos

            go to step 6

        [end of if]

        set i = i + 1

[end of loop]

Step 5: if pos = -1

            print "value is not present in the array"

        [end of if]

Step 6: exit
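A minimal C version of the same idea (0-indexed, returning -1 when the value is absent instead of printing):

/* Returns the 0-based index of val in a[0..n-1], or -1 if it is absent. */
int linearSearch(const int a[], int n, int val) {
    for (int i = 0; i < n; i++) {
        if (a[i] == val)
            return i;      /* match found: return its position */
    }
    return -1;             /* traversed the whole list without a match */
}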


Binary Search Algorithm
Searching is the process of finding some particular element in the list. If the element is present in the list, then the
process is called successful, and the process returns the location of that element. Otherwise, the search is called
unsuccessful.

Linear Search and Binary Search are the two popular searching techniques. Here we will discuss the Binary Search
Algorithm.

Binary search is the search technique that works efficiently on sorted lists. Hence, to search an element into some list
using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided into two halves, and the item is
compared with the middle element of the list. If the match is found then, the location of the middle element is returned.
Otherwise, we search into either of the halves depending upon the result produced through the match.

Algorithm

Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search

Step 1: set beg = lower_bound, end = upper_bound, pos = -1
Step 2: repeat steps 3 and 4 while beg <= end
Step 3: set mid = (beg + end)/2
Step 4: if a[mid] = val
            set pos = mid
            print pos
            go to step 6
        else if a[mid] > val
            set end = mid - 1
        else
            set beg = mid + 1
        [end of if]
[end of loop]
Step 5: if pos = -1
            print "value is not present in the array"
        [end of if]
Step 6: exit
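A minimal iterative C version of the same steps (0-indexed; the array must already be sorted in ascending order):

/* 'a' must be sorted in ascending order. Returns the index of val, or -1 if absent. */
int binarySearch(const int a[], int n, int val) {
    int beg = 0, end = n - 1;
    while (beg <= end) {
        int mid = beg + (end - beg) / 2;   /* same as (beg+end)/2, without overflow */
        if (a[mid] == val)
            return mid;
        else if (a[mid] > val)
            end = mid - 1;                 /* search the left half */
        else
            beg = mid + 1;                 /* search the right half */
    }
    return -1;
}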
Comparison of Searching methods in Data Structures
In different cases, we perform different searching schemes to find some keys. In this section we will see what are the
basic differences between two searching techniques, the sequential search and binary search.

Sequential Search | Binary Search

Time complexity is O(n). | Time complexity is O(log n).

Finds a key at the first position in constant time. | Finds a key at the centre position in constant time.

The order of the elements in the container does not matter. | The elements in the container must be sorted.

Can be implemented on arrays and on linked lists. | Cannot be implemented directly on a linked list; the basic access rules of the list would have to change.

The algorithm is iterative in nature. | The algorithm technique is divide and conquer.

Easy to implement, and requires little code. | Slightly more complex; takes more code to implement.

n comparisons are required in the worst case. | log n comparisons are sufficient in the worst case.
