
SUBJECT NAME: DESIGN AND ANALYSIS OF

ALGORITHM
SUBJECT CODE: CCS53
CLASS : III B.Sc
SEMESTER : V
UNIT : II
1. General Method
2. Binary Search
3. Recurrence Equation for Divide and Conquer
4. Finding the Maximum and Minimum
5. Merge Sort
6. Quick Sort
7. Performance Measurement
8. Randomized Sorting Algorithm
9. Selection Sort
10. A Worst-Case Optimal Algorithm
11. Implementation of Select2 – Strassen's Matrix Multiplications

1. Divide and Conquer Introduction


Divide and Conquer is an algorithmic pattern. The idea is to take a problem on a
large input, break the input into smaller pieces, solve the problem on each of the
small pieces, and then merge the piecewise solutions into a global solution. This
mechanism of solving the problem is called the Divide & Conquer Strategy.

A Divide and Conquer algorithm solves a problem using the following three steps:

1. Divide: Break the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to
the whole problem.

Generally, we can follow the divide-and-conquer approach in a three-step process.

Examples: The following well-known computer algorithms are based on the Divide & Conquer
approach:

1. Maximum and Minimum Problem


2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi.

Fundamentals of Divide & Conquer Strategy:


There are two fundamentals of the Divide & Conquer Strategy:

1. Relational Formula
2. Stopping Condition

1. Relational Formula: It is the formula that we generate from the given technique.
After generating the formula, we apply the D&C strategy, i.e., we break the problem
recursively & solve the resulting subproblems.

2. Stopping Condition: When we break the problem using the Divide & Conquer
Strategy, we need to know how long to keep dividing. The condition at which we stop
the recursive steps of D&C is called the Stopping Condition.

Applications of Divide and Conquer Approach:


The following algorithms are based on the concept of the Divide and Conquer technique:

1. Binary Search: The binary search algorithm is a searching algorithm, which is


also called a half-interval search or logarithmic search. It works by comparing
the target value with the middle element of a sorted array. If the values
differ, the half that cannot contain the target is eliminated, and the search
continues on the other half: we again take its middle element and compare it
with the target value. The process keeps repeating until the target value is
found. If the remaining half is empty when the search ends, we conclude
that the target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, also known as
partition-exchange sort. It starts by selecting a pivot value from the array and
then divides the remaining elements into two sub-arrays. The partition is made
by comparing each element with the pivot value, i.e., whether the element is
greater or smaller than the pivot, and then the sub-arrays are sorted
recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making
comparisons. It starts by dividing the array into sub-arrays and then recursively
sorts each of them. After the sorting is done, it merges them back together.
4. Closest Pair of Points: It is a problem of computational geometry. This
algorithm emphasizes finding out the closest pair of points in a metric space,
given n points, such that the distance between the pair of points should be
minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is
named after Volker Strassen. It has proven to be much faster than the
traditional algorithm when it works on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier
Transform algorithm is named after J. W. Cooley and John Tukey. It follows
the Divide and Conquer approach and achieves a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest known
multiplication algorithms, discovered by Anatoly Karatsuba in 1960 and
published in 1962. It multiplies two n-digit numbers by recursively reducing
the problem to three multiplications of roughly half-size numbers instead of
the four required by the straightforward method.

Advantages of Divide and Conquer


o Divide and Conquer successfully solves some famously hard problems, such
as the Tower of Hanoi, a mathematical puzzle. Complicated problems for
which you have no basic idea are challenging to solve, but the divide and
conquer approach lessens the effort, as it works by dividing the main problem
into halves and then solving them recursively. The resulting algorithms are
often much faster than naive alternatives.
o It efficiently uses cache memory without occupying much space because it
solves simple subproblems within the cache memory instead of accessing the
slower main memory.
o It is more proficient than its counterpart, the Brute Force technique.
o Since these algorithms exhibit parallelism, they can be handled, without
modification, by systems incorporating parallel processing.

Disadvantages of Divide and Conquer


o Since most of its algorithms are designed with recursion, they demand
careful memory management.
o An explicit stack may overuse the space.
o It may even crash the system if the recursion depth exceeds the stack space
available.

2. Binary Search Algorithm


In this section, we will discuss the Binary Search Algorithm. Searching is the process of finding
a particular element in a list. If the element is present in the list, the search is called
successful, and the process returns the location of that element. Otherwise, the search is called
unsuccessful.

Linear Search and Binary Search are the two popular searching techniques. Here we will discuss
the Binary Search Algorithm.

Binary search is a search technique that works efficiently on sorted lists. Hence, to search for an
element in some list using the binary search technique, we must ensure that the list is sorted.

Binary search follows the divide and conquer approach in which the list is divided into two
halves, and the item is compared with the middle element of the list. If a match is found,
the location of the middle element is returned. Otherwise, we search in one of the halves
depending on the result of the comparison.
Algorithm
1. Binary_Search(a, lower_bound, upper_bound, val) // 'a' is the given array, 'lower_bound' is the index of the first array element, 'upper_bound' is the index of the last array element, 'val' is the value to search
2. Step 1: set beg = lower_bound, end = upper_bound, pos = -1
3. Step 2: repeat steps 3 and 4 while beg <= end
4. Step 3: set mid = (beg + end)/2
5. Step 4: if a[mid] = val
6. set pos = mid
7. print pos
8. go to step 6
9. else if a[mid] > val
10. set end = mid - 1
11. else
12. set beg = mid + 1
13. [end of if]
14. [end of loop]
15. Step 5: if pos = -1
16. print "value is not present in the array"
17. [end of if]
18. Step 6: exit
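The pseudocode above translates directly into C. Below is a minimal iterative sketch; the sample array values and the -1 "not found" return value are illustrative choices, not part of the original algorithm text:

#include <stdio.h>

/* Iterative binary search: returns the index of val in the sorted
   array a[lower_bound..upper_bound], or -1 if it is absent. */
int binary_search(int a[], int lower_bound, int upper_bound, int val)
{
    int beg = lower_bound, end = upper_bound;
    while (beg <= end) {
        int mid = (beg + end) / 2;   /* middle element, as in Step 3 */
        if (a[mid] == val)
            return mid;              /* match: report the position */
        else if (a[mid] > val)
            end = mid - 1;           /* discard the right half */
        else
            beg = mid + 1;           /* discard the left half */
    }
    return -1;                       /* value is not present */
}

int main(void)
{
    int a[] = {10, 12, 24, 29, 39, 40, 51, 56, 69};  /* sorted sample data */
    int pos = binary_search(a, 0, 8, 56);
    if (pos == -1)
        printf("value is not present in the array\n");
    else
        printf("found at index %d\n", pos);
    return 0;
}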

Working of Binary search


Now, let's see the working of the Binary Search Algorithm.

To understand the working of the Binary search algorithm, let's take a sorted array. It
will be easy to understand the working of Binary search with an example.

There are two methods to implement the binary search algorithm -

o Iterative method
o Recursive method
The recursive method of binary search follows the divide and conquer approach.
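A sketch of the recursive variant in C (the notes do not give its pseudocode, so the function name and form here are illustrative assumptions):

/* Recursive binary search: the divide-and-conquer formulation.
   Returns the index of val in the sorted range a[beg..end], or -1. */
int binary_search_rec(int a[], int beg, int end, int val)
{
    if (beg > end)
        return -1;                                       /* base case: empty range */
    int mid = (beg + end) / 2;
    if (a[mid] == val)
        return mid;
    else if (a[mid] > val)
        return binary_search_rec(a, beg, mid - 1, val);  /* search left half */
    else
        return binary_search_rec(a, mid + 1, end, val);  /* search right half */
}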

Let the elements of the array be -

Let the element to search be K = 56

We have to use the below formula to calculate the mid of the array -

1. mid = (beg + end)/2

So, in the given array -

beg = 0

end = 8

mid = (0 + 8)/2 = 4. So, 4 is the mid of the array.


Now, the element to search is found. So the algorithm will return the index of the
matched element.

Binary Search complexity


Now, let's see the time complexity of Binary search in the best case, average case, and
worst case. We will also see the space complexity of Binary search.
1. Time Complexity

Case            Time Complexity

Best Case       O(1)

Average Case    O(log n)

Worst Case      O(log n)

o Best Case Complexity - In Binary search, the best case occurs when the element
to search is found in the first comparison, i.e., when the first middle element itself
is the element to be searched. The best-case time complexity of Binary search
is O(1).
o Average Case Complexity - The average-case time complexity of Binary
search is O(log n).
o Worst Case Complexity - In Binary search, the worst case occurs when we
have to keep reducing the search space until it has only one element. The worst-
case time complexity of Binary search is O(log n).

2. Space Complexity
Space Complexity O(1)

o The space complexity of binary search is O(1) (for the iterative implementation; the recursive version uses O(log n) stack space).

3. Recurrence Relation
A recurrence is an equation or inequality that describes a function in terms of its
values on smaller inputs. To solve a Recurrence Relation means to obtain a function
defined on the natural numbers that satisfy the recurrence.
For example, the worst-case running time T(n) of the MERGE SORT procedure
is described by the recurrence

T(n) = θ(1)                if n = 1
T(n) = 2T(n/2) + θ(n)      if n > 1

There are four methods for solving Recurrence:

1. Substitution Method
2. Iteration Method
3. Recursion Tree Method
4. Master Method

1. Substitution Method:
The Substitution Method Consists of two main steps:

1. Guess the Solution.


2. Use the mathematical induction to find the boundary condition and shows that
the guess is correct.

Example 1: Solve the following equation by the Substitution Method.

T(n) = T(n/2) + 1

We have to show that it is asymptotically bounded by O(log n).

Solution:
For T(n) = O(log n)

We have to show that for some constant c

1. T(n) ≤ c log n.
Put this in the given recurrence equation:

T(n) ≤ c log(n/2) + 1

     = c log n - c log 2 + 1

     ≤ c log n        for c ≥ 1 (taking logs base 2, so log 2 = 1)
Thus T(n) = O(log n).

Example 2: Consider the recurrence

T(n) = 2T(n/2) + n        n > 1

Find an asymptotic bound on T.

Solution:
We guess the solution is O(n log n). Thus for some constant c,
T(n) ≤ c n log n
Put this in the given recurrence equation.
Now,

T(n) ≤ 2c(n/2) log(n/2) + n

     = c n log n - c n log 2 + n
     = c n log n - n(c log 2 - 1)
     ≤ c n log n        for c ≥ 1
Thus T(n) = O(n log n).

2. Iteration Method
It means to expand the recurrence and express it as a summation of terms of n and
initial condition.

Example1: Consider the Recurrence

1. T (n) = 1          if n = 1
2.       = 2T (n-1)   if n > 1

Solution:

T(n) = 2T(n-1)
     = 2[2T(n-2)] = 2^2 T(n-2)
     = 4[2T(n-3)] = 2^3 T(n-3)
     = 8[2T(n-4)] = 2^4 T(n-4)        (Eq. 1)

Repeating the procedure i times:

T(n) = 2^i T(n-i)
Put n - i = 1, i.e. i = n - 1, in (Eq. 1):
T(n) = 2^(n-1) T(1)
     = 2^(n-1) · 1        {T(1) = 1 ..... given}
     = 2^(n-1)
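The closed form T(n) = 2^(n-1) can be sanity-checked by evaluating the recurrence directly; a small illustrative C sketch:

#include <stdio.h>

/* Evaluate the recurrence T(n) = 1 if n = 1, else 2*T(n-1). */
unsigned long T(int n)
{
    return (n == 1) ? 1UL : 2UL * T(n - 1);
}

int main(void)
{
    /* Each printed value should equal 2^(n-1). */
    for (int n = 1; n <= 10; n++)
        printf("T(%d) = %lu\n", n, T(n));
    return 0;
}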

4. Finding the Maximum and Minimum

Divide and Conquer (DAC) approach has three steps at each level of recursion:

1. Divide the problem into a number of smaller units called sub-problems.


2. Conquer (Solve) the sub-problems recursively.
3. Combine the solutions of all the sub-problems into a solution for the original problem.

Maximum and Minimum:

1. Let us consider a simple problem that can be solved by the divide-and-conquer technique.
2. The problem is to find the maximum and minimum values in a set of 'n' elements.
3. The time complexity of this algorithm can be analyzed by counting the number of element
comparisons.
4. Hence, the time is determined mainly by the total cost of the element comparisons.

Explanation:
a. StraightMaxMin requires 2(n-1) element comparisons in the best, average & worst cases.
b. An improvement is possible by realizing that the comparison a[i] < Min is necessary only
when the comparison a[i] > Max is false.
c. Hence we can replace the contents of the for loop by: If (a[i] > Max) then Max = a[i]; Else if
(a[i] < Min) then Min = a[i];
d. On average, a[i] > Max about half the time, and so the average number of comparisons is 3n/2 - 1.
A Divide and Conquer algorithm for this problem would proceed as follows:
a. Let P = (n, a[i],……,a[j]) denote an arbitrary instance of the problem.
b. Here 'n' is the number of elements in the list (a[i],….,a[j]) and we are interested in finding the
maximum and minimum of this list.
c. If the list has more than 2 elements, P has to be divided into smaller instances.
d. For example, we might divide 'P' into the 2 instances P1 = ([n/2], a[1],……,a[[n/2]]) and P2 = (n -
[n/2], a[[n/2]+1],….., a[n]). After having divided 'P' into 2 smaller subproblems, we can solve
them by recursively invoking the same divide-and-conquer algorithm.
Algorithm:
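The pseudocode figure from the original notes is not reproduced here; the following is a minimal C sketch of the recursive MaxMin procedure just described (function and parameter names are illustrative choices):

/* Recursive MaxMin on a[i..j]: returns the results through *max and *min. */
void max_min(int a[], int i, int j, int *max, int *min)
{
    if (i == j) {                      /* one element: no comparison */
        *max = *min = a[i];
    } else if (i == j - 1) {           /* two elements: 1 comparison */
        if (a[i] < a[j]) { *min = a[i]; *max = a[j]; }
        else             { *min = a[j]; *max = a[i]; }
    } else {                           /* divide P into P1 and P2 */
        int mid = (i + j) / 2;
        int max1, min1, max2, min2;
        max_min(a, i, mid, &max1, &min1);
        max_min(a, mid + 1, j, &max2, &min2);
        /* combine: 2 more comparisons, matching the +2 in the recurrence below */
        *max = (max1 > max2) ? max1 : max2;
        *min = (min1 < min2) ? min1 : min2;
    }
}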
Example:

A        1    2    3    4    5    6    7    8    9

Values  22   13   -5   -8   15   60   17   31   47
Tree Diagram:

i. As shown in figure 4, in this algorithm each node has 4 items of information: i, j, max & min.
ii. In figure 4, the root node contains 1 & 9 as the values of i & j corresponding to the initial call
to MaxMin.
iii. This execution produces 2 new calls to MaxMin, where i & j have the values 1, 5 and 6, 9
respectively, thus splitting the set into 2 subsets of approximately the same size.
iv. The maximum depth of recursion is 4.
Complexity:
If T(n) represents the number of comparisons, then the resulting recurrence relation is

T(n) = T(⌊n/2⌋) + T(⌈n/2⌉) + 2    n > 2
     = 1                          n = 2
     = 0                          n = 1

When 'n' is a power of 2, n = 2^k for some positive integer 'k', then

T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     ...
     = 2^(k-1) T(2) + Σ_{1 ≤ i ≤ k-1} 2^i
     = 2^(k-1) + 2^k - 2

T(n) = (3n/2) - 2
Note that (3n/2) - 2 is the best, average, and worst-case number of comparisons when 'n' is a power of
2.
5. Merge Sort
Merge sort is yet another sorting algorithm that falls under the category of the Divide and
Conquer technique. It is one of the best sorting techniques, and it is built around a clean
recursive algorithm.

Divide and Conquer Strategy


In this technique, we segment a problem into two halves and solve them individually.
After finding the solution of each half, we merge them back to represent the solution
of the main problem.

Suppose we have an array A, and our main concern is to sort its subsection,
which starts at index p and ends at index r, represented by A[p..r].

Divide

Assume q to be the central point somewhere in between p and r; then we


fragment the subarray A[p..r] into two subarrays A[p..q] and A[q+1..r].

Conquer

After splitting the array into two halves, the next step is to conquer. In this step, we
individually sort both of the subarrays A[p..q] and A[q+1..r]. If we have not yet
reached the base case, we again follow the same procedure, i.e., we further
segment these subarrays and sort them separately.

Combine

When the conquer step reaches the base case, we successfully get our sorted
subarrays A[p..q] and A[q+1..r], after which we merge them back to form a new
sorted array A[p..r].

Merge Sort algorithm


The MergeSort function keeps on splitting an array into two halves until a condition is
met where we try to perform MergeSort on a subarray of size 1, i.e., p == r.
And then, it combines the individually sorted subarrays into larger arrays until the
whole array is merged.

ALGORITHM: MERGE-SORT (A, p, r)
1. If p < r
2. Then q ← ⌊(p + r)/2⌋
3. MERGE-SORT (A, p, q)
4. MERGE-SORT (A, q+1, r)
5. MERGE (A, p, q, r)
Here we called MergeSort(A, 0, length(A)-1) to sort the complete array.

As you can see in the image given below, the merge sort algorithm recursively divides
the array into halves until the base condition is met, where we are left with only 1
element in the array. And then, the merge function picks up the sorted sub-arrays and
merges them back to sort the entire array.

The following figure illustrates the dividing (splitting) procedure.


FUNCTION: MERGE (A, p, q, r)
1. n1 ← q - p + 1
2. n2 ← r - q
3. create arrays L[1.....n1 + 1] and R[1.....n2 + 1]
4. for i ← 1 to n1
5. do L[i] ← A[p + i - 1]
6. for j ← 1 to n2
7. do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13. do if L[i] ≤ R[j]
14. then A[k] ← L[i]
15. i ← i + 1
16. else A[k] ← R[j]
17. j ← j + 1
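A runnable C rendering of the two procedures above (a sketch: it keeps the pseudocode's sentinel technique, with INT_MAX standing in for ∞, and uses a fixed temporary buffer size for brevity):

#include <limits.h>

#define MAX_N 100   /* illustrative bound on the subarray length */

/* MERGE: combine sorted A[p..q] and A[q+1..r] into sorted A[p..r]. */
void merge(int A[], int p, int q, int r)
{
    int n1 = q - p + 1, n2 = r - q;
    int L[MAX_N + 1], R[MAX_N + 1];
    for (int i = 0; i < n1; i++) L[i] = A[p + i];
    for (int j = 0; j < n2; j++) R[j] = A[q + 1 + j];
    L[n1] = INT_MAX;                 /* sentinels play the role of infinity */
    R[n2] = INT_MAX;
    int i = 0, j = 0;
    for (int k = p; k <= r; k++)     /* always take the smaller head element */
        A[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
}

/* MERGE-SORT: sort A[p..r] by splitting at the midpoint q. */
void merge_sort(int A[], int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;
        merge_sort(A, p, q);
        merge_sort(A, q + 1, r);
        merge(A, p, q, r);
    }
}

Calling merge_sort(A, 0, length - 1) sorts the whole array, matching the MergeSort(A, 0, length(A)-1) call mentioned above.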

The merge step of Merge Sort


Mainly, the recursive algorithm depends on a base case as well as on its ability to merge
back the results derived from the base cases. Merge sort is no different; it is just that
here the merge step carries more of the weight.

For any given problem, the merge step is the solution that combines the two
individually sorted lists (arrays) to build one large sorted list (array).
The merge sort algorithm upholds three pointers, i.e., one for each of the two arrays
and one to preserve the final sorted array's current index.

1. Did you reach the end of the array?


2. No:
3. Firstly, start with comparing the current elements of both the arrays.
4. Next, copy the smaller element into the sorted array.
5. Lastly, move the pointer of the element containing a smaller element.
6. Yes:
7. Simply copy the rest of the elements of the non-empty array

Merge( ) Function Explained Step-By-Step


Consider the following example of an unsorted array, which we are going to sort with
the help of the Merge Sort algorithm.

A= (36,25,40,2,7,80,15)

Step1: The merge sort algorithm repeatedly divides the array into equal halves until we
reach an atomic value. If there is an odd number of elements in an array,
then one of the halves will have one more element than the other.

Step2: After dividing the array into two subarrays, we will notice that it did not
hamper the order of elements as they were in the original array. Now, we will
further divide these two arrays into halves.

Step3: Again, we will divide these arrays until we achieve an atomic value, i.e., a
value that cannot be further divided.

Step4: Next, we will merge them back in the same way as they were broken down.

Step5: For each list, we will first compare the element and then combine them to form
a new sorted list.

Step6: In the next iteration, we will compare the lists of two data values and merge
them back into a list of four data values, all placed in sorted order.
Hence the array is sorted.

Analysis of Merge Sort:


Let T(n) be the total time taken by the Merge Sort algorithm.

o Sorting the two halves will take at most 2T(n/2) time.


o When we merge the sorted lists, we make a total of n - 1 comparisons, because
the last element that is left over is simply copied down into the combined list
with no comparison.

Thus, the relational formula will be

T(n) = 2T(n/2) + n - 1

But we ignore the '-1', because copying the last element into the merged list still takes some time.

So, T(n) = 2T(n/2) + n        ... (equation 1)

Note: Stopping condition T(1) = 0 because at the end, there will be only 1 element left that needs
to be copied, and there will be no comparison.

Expanding equation 1 by repeatedly substituting the recurrence into itself:

T(n) = 2T(n/2) + n
     = 4T(n/4) + 2n
     = 8T(n/8) + 3n
     ...
     = 2^i T(n/2^i) + i·n

From the stopping condition, the expansion ends when n/2^i = 1, i.e., n = 2^i.

Apply log on both sides:

log n = log 2^i
log n = i log 2

i = log n / log 2

log2 n = i

Hence

T(n) = 2^i T(1) + i·n = n · 0 + n log2 n = n log2 n

Best Case Complexity: The merge sort algorithm has a best-case time complexity
of O(n*log n), even for an already sorted array.

Average Case Complexity: The average-case time complexity for the merge sort
algorithm is O(n*log n), which happens when the elements are jumbled, i.e.,
neither in ascending order nor in descending order.
Worst Case Complexity: The worst-case time complexity is also O(n*log n), which
occurs, for example, when we sort an array in descending order into ascending order.

Space Complexity: The space complexity of merge sort is O(n).

6. Quick Sort

Like Merge Sort, QuickSort is a Divide and Conquer algorithm. It picks an


element as a pivot and partitions the given array around the picked pivot.
There are many different versions of quickSort that pick the pivot in different
ways.
 Always pick the first element as a pivot.
 Always pick the last element as a pivot.
 Pick a random element as a pivot.
 Pick median as the pivot.

Divide: Rearrange the elements and split the array into two sub-arrays with an element in between,
such that each element in the left sub-array is less than or equal to that middle element (the pivot)
and each element in the right sub-array is larger than it.

Conquer: Recursively sort the two sub-arrays.

Combine: Combine the already sorted arrays.

Algorithm:
QUICKSORT (array A, int m, int n)
1. if (n > m)
2. then
3. i ← a random index from [m, n]
4. swap A[i] with A[m]
5. o ← PARTITION (A, m, n)
6. QUICKSORT (A, m, o - 1)
7. QUICKSORT (A, o + 1, n)
Partition Algorithm:
The partition algorithm rearranges the sub-array in place.

PARTITION (array A, int m, int n)
1. x ← A[m]
2. o ← m
3. for p ← m + 1 to n
4. do if (A[p] < x)
5. then o ← o + 1
6. swap A[o] with A[p]
7. swap A[m] with A[o]
8. return o
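In C, the two procedures above can be sketched as follows (illustrative: rand() from <stdlib.h> supplies the random pivot index; seeding with srand is left to the caller):

#include <stdlib.h>

/* PARTITION: pivot x = A[m]; on return, A[o] = x with smaller
   elements to its left and larger ones to its right. */
int partition(int A[], int m, int n)
{
    int x = A[m], o = m;
    for (int p = m + 1; p <= n; p++) {
        if (A[p] < x) {
            o++;
            int t = A[o]; A[o] = A[p]; A[p] = t;   /* swap A[o] with A[p] */
        }
    }
    int t = A[m]; A[m] = A[o]; A[o] = t;           /* place pivot at A[o] */
    return o;
}

/* QUICKSORT on A[m..n]: a random pivot is moved to the front first. */
void quicksort(int A[], int m, int n)
{
    if (n > m) {
        int i = m + rand() % (n - m + 1);          /* random index in [m, n] */
        int t = A[i]; A[i] = A[m]; A[m] = t;       /* swap A[i] with A[m] */
        int o = partition(A, m, n);
        quicksort(A, m, o - 1);
        quicksort(A, o + 1, n);
    }
}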

Figure: execution trace of the partition algorithm.

Example of Quick Sort:


1. 44 33 11 55 77 90 40 60 99 22 88
Let 44 be the pivot element, with scanning done from right to left.

Comparing 44 with the right-side elements: if a right-side element is smaller than 44, then
swap it. As 22 is smaller than 44, we swap them.

22 33 11 55 77 90 40 60 99 44 88

Now comparing 44 with the left-side elements: an element greater than 44 must be swapped with
it. As 55 is greater than 44, we swap them.

22 33 11 44 77 90 40 60 99 55 88

Repeating steps 1 & 2 recursively until we get two lists, one to the left of pivot element 44 &
one to its right:

22 33 11 40 77 90 44 60 99 55 88

Swap 44 with 77:

22 33 11 40 44 90 77 60 99 55 88

Now, the elements on the right side and the left side of 44 are greater than and smaller than 44, respectively.

Now we get two sorted lists:

And these sublists are sorted by the same process as above.

These two sorted sublists are then placed side by side.


Merging Sublists:

7. Performance Measurement

When there are several different algorithms to solve a problem, we evaluate the
performance of all of them. Performance evaluation aids in the selection of
the best algorithm from a set of competing algorithms for a given problem. So, we can
describe algorithm performance as the practice of making evaluation judgments
about algorithms.
Factors Determining an Algorithm's Performance

To compare algorithms, we consider a collection of parameters or components such as


the amount of memory required by the algorithm, its execution speed, how easy it is
to comprehend and execute, and so on. In general, an algorithm’s performance is
determined by the following factors:

 Is that algorithm giving you the perfect solution to your problem?


 Is it straightforward to comprehend?
 Is it simple to put into practice?
 How much memory (space) is needed to solve the problem?
 How long does it take to solve the problem?
When we aim to analyze an algorithm, we just look at space and time requirements of
that algorithm and neglect anything else. On the basis of this information, an
algorithm’s performance may alternatively be described as a technique of determining
the space and time requirements of an algorithm.

The following metrics are used to evaluate the performance of an algorithm:

 The amount of space necessary to perform the algorithm’s task (Space


Complexity). It consists of both program and data space.
 The time necessary to accomplish the algorithm’s task (Time Complexity).
Space Complexity
When we create a problem-solving algorithm, it demands the use of computer
memory to complete its execution. Memory is necessary for the following purposes in
any algorithm:

 To keep track of software instructions.


 To keep track of constant values.
 To keep track of variable values.
 Additionally, for a few additional things such as function calls, jump statements,
and so on.
When software is running, it often utilizes computer memory for the following
reasons:

 The amount of memory needed to hold compiled versions of instructions, which


is referred to as instruction space.
 The amount of memory utilized to hold information about partially run functions
at the time of a function call, which is known as the environmental stack.
 The amount of memory needed to hold all of the variables and constants, which is
referred to as data space.
Example
We need to know how much memory is required to store distinct datatype values to
compute the space complexity (according to the compiler). Take a look at the
following code:

int square(int a)
{
    return a * a;
}

In the preceding piece of code, variable 'a' takes up 2 bytes of memory, and the return
value takes up another 2 bytes, i.e., a total of 4 bytes of memory to finish its
execution, and this 4-byte requirement is fixed for any input value of 'a'. This type of
space complexity is called Constant Space Complexity.
Time Complexity
Every algorithm needs a certain amount of computer time to carry out its instructions
and complete the operation. The amount of computer time required is referred to as
time complexity. In general, an algorithm’s execution time is determined by the
following:

 Whether it runs on a single-processor or a multi-processor machine.


 Whether it is a 32-bit or a 64-bit machine.
 The machine’s read and write speeds.
 The time taken by the machine to perform arithmetic, logical, return-value, and
assignment operations, among other things.
 The input data.
Example
Calculating an algorithm’s Time Complexity based on the system configuration is a
challenging undertaking since the configuration varies from one system to the next.
We must assume a model machine with a certain setup to tackle this challenge. As a
result, we can compute generalized time complexity using that model machine. Take a
look at the following code:

int sum(int a, int b)
{
    return a + b;
}

In the preceding example code, calculating a+b takes 1 unit of time, and returning the
value takes 1 unit of time, i.e., it takes two units of time to perform the task, and this is
unaffected by the input values of a and b. It takes the same amount of
time for all input values, i.e., 2 units.
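For contrast with this constant-time example, consider a hypothetical function whose time grows with the input size on the same model machine:

/* Summing n array elements: the loop body runs n times, so the total
   time is roughly n units -- it grows linearly with n, i.e., O(n). */
int sum_array(int a[], int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += a[i];      /* about 1 unit of time per element */
    return total;
}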
Notation of Performance Measurement

The Notation for Performance Measurement of an Algorithm


We must compute the complexity of an algorithm if we wish to do an analysis on it.
However, calculating an algorithm’s complexity does not reveal the actual amount of
resources required. Rather than taking the precise quantity of resources, we describe
the complexity in a generic form (Notation), which yields the algorithm’s essential
structure. That Notation is termed Asymptotic Notation and is a mathematical
representation of the algorithm’s complexity. The following are three asymptotic
notations for indicating time-complexity each of which is based on three separate
situations, namely, the best case, worst case, and average case:
Big – O (Big-Oh)

The top-bound of an algorithm’s execution time is represented by the Big-O notation.


It’s the total amount of time an algorithm takes for all input values. It indicates an
algorithm’s worst-case time complexity.

Big – Ω (Omega)

The omega notation represents the lower bound of an algorithm’s execution time. It
specifies the shortest time an algorithm requires over all input values. It is the best-case
scenario for the time complexity of an algorithm.
Big – Θ (Theta)

The theta notation encloses the function from above and below. It reflects the
average case of an algorithm’s time complexity and defines both the upper and lower
boundaries of its execution time.
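For reference, these notations have standard formal definitions (stated here for completeness; c and n0 denote positive constants):

f(n) = O(g(n))  if there exist c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0
f(n) = Ω(g(n))  if there exist c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0
f(n) = Θ(g(n))  if f(n) = O(g(n)) and f(n) = Ω(g(n))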

8. Randomized Sorting Algorithm


An algorithm that uses random numbers to decide what to do next anywhere in its logic
is called a Randomized Algorithm. For example, in Randomized Quick Sort, we use a
random number to pick the next pivot (or we randomly shuffle the array). Typically, this
randomness is used to reduce the time complexity or space complexity of other standard
algorithms.
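As an illustration of the shuffling variant mentioned above, here is a Fisher-Yates shuffle in C that can be run on the array before an ordinary quicksort (a sketch; seeding rand with srand is left to the caller):

#include <stdlib.h>

/* Fisher-Yates shuffle: puts the array into uniformly random order,
   so no fixed input can reliably force quicksort's worst case. */
void shuffle(int a[], int n)
{
    for (int i = n - 1; i > 0; i--) {
        int j = rand() % (i + 1);                /* random index in [0, i] */
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }
}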

9. Selection Sort
The selection sort enhances the bubble sort by making only a single swap for each
pass through the list. In order to do this, a selection sort searches for the biggest
value as it makes a pass and, after finishing the pass, places it in its proper location.
As with a bubble sort, after the first pass the biggest item is in the right
place. After the second pass, the next biggest is in place. This procedure continues
and requires n-1 passes to sort n items, since the final item must be in place after the
(n-1)th pass.

ALGORITHM: SELECTION SORT (A)


1. k ← length [A]
2. for j ← 1 to k - 1
3. smallest ← j
4. for i ← j + 1 to k
5. if A[i] < A[smallest]
6. then smallest ← i
7. exchange (A[j], A[smallest])
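A C rendering of this pseudocode (illustrative; zero-based indexing replaces the 1-based indexing used above):

/* Selection sort: on pass j, find the smallest element in the
   unsorted suffix A[j..n-1] and swap it into position j. */
void selection_sort(int A[], int n)
{
    for (int j = 0; j < n - 1; j++) {
        int smallest = j;
        for (int i = j + 1; i < n; i++)
            if (A[i] < A[smallest])
                smallest = i;
        int t = A[j]; A[j] = A[smallest]; A[smallest] = t;  /* exchange */
    }
}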

How Selection Sort works


1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we will compare the minimum with the second element. If the second
element turns out to be smaller than the minimum, we will set it as the new
minimum, and then move on to compare with the third element.
3. Else, if the second element is greater than the minimum (which is our first
element), we will do nothing, move on to the third element, and compare it
with the minimum.
We will repeat this process until we reach the last element.
4. After the completion of each iteration, we will notice that our minimum has
been swapped to the start of the unsorted list.
5. For each iteration, we will start the indexing from the first element of the
unsorted list. We will repeat Steps 1 to 4 until the list gets sorted and all
the elements are correctly positioned.
Consider the following example of an unsorted array that we will sort with the
help of the Selection Sort algorithm.

A [] = (7, 4, 3, 6, 5)

1st Iteration:

Set minimum = 7

o Compare a0 and a1

As, a0 > a1, set minimum = 4.

o Compare a1 and a2
As, a1 > a2, set minimum = 3.

o Compare a2 and a3

As, a2 < a3, set minimum= 3.

o Compare a2 and a4

As, a2 < a4, set minimum =3.

Since 3 is the smallest element, so we will swap a0 and a2.

2nd Iteration:

Set minimum = 4

o Compare a1 and a2
As, a1 < a2, set minimum = 4.

o Compare a1 and a3

As, a1 < a3, set minimum = 4.

o Compare a1 and a4

Again, a1 < a4, set minimum = 4.

Since the minimum is already placed in the correct position, there will be no
swapping.
3rd Iteration:

Set minimum = 7

o Compare a2 and a3

As, a2 > a3, set minimum = 6.

o Compare a3 and a4

As, a3 > a4, set minimum = 5.

Since 5 is the smallest element among the leftover unsorted elements, we will swap
7 and 5.
4th Iteration:

Set minimum = 6

o Compare a3 and a4

As a3 < a4, set minimum = 6.

Since the minimum is already placed in the correct position, there will be no
swapping.

Complexity Analysis of Selection Sort


Input: Given n input elements.

Output: Number of steps incurred to sort a list.


Logic: If we are given n elements, then in the first pass it will do n-1 comparisons; in
the second pass, n-2; in the third pass, n-3; and so on. Thus, the
total number of comparisons is

(n-1) + (n-2) + (n-3) + ... + 2 + 1 = n(n-1)/2

Therefore, the selection sort algorithm encompasses a time complexity of O(n2) and a
space complexity of O(1), since it needs only a constant amount of extra memory (a
temp variable for swapping).

Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time
complexity of O(n2), even for an already sorted array.
o Average Case Complexity: The average-case time complexity for the
selection sort algorithm is O(n2), in which the existing elements are in jumbled
order, i.e., neither in ascending order nor in descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which
occurs when we sort an array in descending order into ascending order.

In the selection sort algorithm, the time complexity is O(n2) in all three cases. This is
because, in each step, we are required to find the minimum element so that it can be
placed in the correct position, and only once we have traced the complete remaining
array do we get our minimum element.
