An algorithm can be understood by taking the example of cooking a new recipe. To cook a new recipe,
one reads the instructions and executes them one by one, in the given sequence.
The result is the new dish, cooked perfectly. Every time you use your phone,
computer, laptop, or calculator, you are using algorithms. Similarly, in programming, algorithms
help carry out a task and produce the expected output.
Algorithms are language-independent, i.e. they are just plain instructions that
can be implemented in any language, and the output will be the same, as expected.
To design an algorithm, the following parameters must be considered:
1. The problem that is to be solved by the algorithm, i.e. a clear problem definition.
2. The constraints of the problem that must be respected while solving it.
3. The input to be taken to solve the problem.
4. The output expected when the problem is solved.
5. The solution to the problem, within the given constraints.
The algorithm is then written with the help of the above parameters such that it solves the
problem.
Example: Consider the task of adding three numbers and printing their sum.
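As a minimal Java sketch of this example (the variable names and the fixed input values are illustrative assumptions, not from the original text):

// Adds three numbers and prints the sum
public class SumOfThree {
    public static void main(String[] args) {
        int a = 5, b = 10, c = 15;          // step 1: take the input (hypothetical values)
        int sum = a + b + c;                 // step 2: add the three numbers
        System.out.println("Sum = " + sum);  // step 3: print the result
    }
}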
Time complexity is a concept in computer science that quantifies the amount of time taken by a
piece of code or an algorithm to run, as a function of the amount of input. In other words, time
complexity describes how long a program takes to process a given input. The efficiency of an
algorithm depends on two parameters:
Time Complexity
Space Complexity
Time Complexity: It is defined as the number of times a particular instruction set is
executed rather than the total time taken, because the total time taken also depends on
external factors like the compiler used, the processor's speed, etc.
Space Complexity: It is the total memory space required by the program for its execution.
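As a brief illustration (this snippet is not from the original text), both measures can be read off a simple function: the loop below executes n times, so its time complexity is O(n), while it uses only a fixed number of variables, so its space complexity is O(1):

// Sums the first n natural numbers: O(n) time, O(1) space
static int sum(int n) {
    int total = 0;               // a constant number of variables
    for (int i = 1; i <= n; i++)
        total += i;              // this instruction executes n times
    return total;
}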
Best-case time complexity of different data structures for different operations
[Table: data structure vs. Access, Search, Insertion, Deletion]
Worst-case time complexity of different data structures for different operations
[Table: data structure vs. Access, Search, Insertion, Deletion]
Asymptotic Notations
We have discussed asymptotic analysis, and the worst, average, and best cases of algorithms.
The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that
doesn't depend on machine-specific constants and doesn't require algorithms to be implemented
or the running times of programs to be compared. Asymptotic notations are mathematical tools to
represent the time complexity of algorithms for asymptotic analysis.
There are mainly three asymptotic notations (their formal definitions are sketched after this list):
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
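For reference, the standard definitions of the three notations (a sketch using the usual textbook formulation, consistent with the Big-O definition given later in this section):

f(n) = O(g(n)) if ∃ c > 0 and n₀ ≥ 1 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀
f(n) = Ω(g(n)) if ∃ c > 0 and n₀ ≥ 1 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀
f(n) = Θ(g(n)) if ∃ c₁, c₂ > 0 and n₀ ≥ 1 such that c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀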
Properties of Asymptotic Notations
2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).
Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property is satisfied by both Θ and Ω notations.
We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
3. Reflexive Properties:
Reflexive properties are easy to understand once the transitive ones are clear.
If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself.
Hence f(n) and O(f(n)) are always tied in a reflexive relation.
Example:
f(n) = n² ; O(n²) i.e O(f(n))
Similarly, this property is satisfied by both Θ and Ω notations.
We can say that,
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).
4. Symmetric Properties:
If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).
Example:
If f(n) = n² and g(n) = n²
then f(n) = Θ(n²) and g(n) = Θ(n²)
This property is satisfied only by Θ notation.
Big-O analysis
In our previous articles on Analysis of Algorithms, we discussed asymptotic notations and
their worst-case and best-case performance in brief. In this article, we discuss the analysis of
algorithms using Big-O asymptotic notation in detail.
Big-O Analysis of Algorithms
We can express algorithmic complexity using big-O notation. For a problem of size N:
A constant-time function/method is "order 1": O(1)
A linear-time function/method is "order N": O(N)
A quadratic-time function/method is "order N squared": O(N²)
Definition: Let g and f be functions from the set of natural numbers to itself. The function f is
said to be O(g) (read: big-oh of g) if there exist a constant c > 0 and a natural number n₀ such
that f(n) ≤ c·g(n) for all n ≥ n₀.
Note: O(g) is a set!
Abuse of notation: f = O(g) is conventionally written to mean f ∈ O(g); it does not assert that the two are equal.
The Big-O Asymptotic Notation gives us the Upper Bound Idea, mathematically described
below:
f(n) = O(g(n)) if there exist a positive integer n₀ and a positive constant c, such that f(n) ≤ c·g(n)
for all n ≥ n₀.
The general step-wise procedure for Big-O runtime analysis is as follows (a worked sketch follows the list):
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations the algorithm performs in terms of n.
3. Eliminate everything except the highest-order term.
4. Remove all constant factors.
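As a small worked example (the function below is a hypothetical illustration, not from the original text), consider counting the operations of a nested loop:

// Prints all ordered pairs (i, j): the inner statement runs n * n times
static void printPairs(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            System.out.println(i + ", " + j);  // executed n^2 times
}

Here n represents the input size, the operation count is roughly c·n² plus lower-order loop overhead, and dropping the lower-order terms and constant factors gives O(n²).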
Some useful properties of Big-O notation analysis are as follows:
Constant Multiplication:
If f(n) = c·g(n), then O(f(n)) = O(g(n)); where c is a nonzero constant.
Polynomial Function:
If f(n) = a₀ + a₁·n + a₂·n² + ... + aₘ·nᵐ, then O(f(n)) = O(nᵐ).
Summation Function:
If f(n) = f₁(n) + f₂(n) + ... + fₘ(n) and fᵢ(n) ≤ fᵢ₊₁(n) for all i = 1, 2, ..., m-1,
then O(f(n)) = O(max(f₁(n), f₂(n), ..., fₘ(n))).
Logarithmic Function:
If f(n) = logₐ(n) and g(n) = log_b(n), then O(f(n)) = O(g(n));
since logₐ(n) = log_b(n) / log_b(a), all logarithm functions differ only by a constant factor and grow in the same manner in terms of Big-O.
Basically, this asymptotic notation is used to measure and compare the worst-case
scenarios of algorithms theoretically. For any algorithm, the Big-O analysis should be
straightforward as long as we correctly identify the operations that are dependent on n, the
input size.
Runtime Analysis of Algorithms
In general, we mainly measure and compare the worst-case theoretical running time
complexities of algorithms for performance analysis.
The fastest possible running time for any algorithm is O(1), commonly referred to
as Constant Running Time. In this case, the algorithm always takes the same amount of time
to execute, regardless of the input size. This is the ideal runtime for an algorithm, but it’s
rarely achievable.
In actual cases, the performance (runtime) of an algorithm depends on n, that is, the size of
the input or the number of operations required for each input item.
The algorithms can be classified as follows, from best to worst performance (running time complexity):
O(1): constant time
O(log n): logarithmic time
O(n): linear time
O(n log n): log-linear time
O(n²): quadratic time
O(2ⁿ): exponential time
O(n!): factorial time
As an example, consider computing the Fibonacci numbers, defined by the recurrence:
F₀ = 0
F₁ = 1
Fₙ = Fₙ₋₁ + Fₙ₋₂ for n ≥ 2
FIB (n)
1. If (n < 2)
2. then return n
3. else return FIB (n - 1) + FIB (n - 2)
[Figure: four levels of recursion for the call fib(8).]
A single recursive call to fib(n) results in one recursive call to fib(n - 1), two recursive calls to
fib(n - 2), three recursive calls to fib(n - 3), five recursive calls to fib(n - 4) and, in general, Fₖ₊₁
recursive calls to fib(n - k). We can avoid this unneeded repetition by writing down the results of
recursive calls and looking them up again if we need them later. This process is called memoization.
MEMOFIB (n)
1 if (n < 2)
2 then return n
3 if (F[n] is undefined)
4 then F[n] ← MEMOFIB (n - 1) + MEMOFIB (n - 2)
5 return F[n]
If we trace through the recursive calls to MEMOFIB, we find that the array F[] gets filled from the
bottom up: first F[2], then F[3], and so on, up to F[n]. We can therefore replace the recursion with a
simple for-loop that fills up the array F[] in that order.
ITERFIB (n)
1 F [0] ← 0
2 F [1] ← 1
3 for i ← 2 to n
4 do
5 F[i] ← F [i - 1] + F [i - 2]
6 return F[n]
This algorithm clearly takes only O(n) time to compute Fₙ. By contrast, the original recursive
algorithm takes O(φⁿ) time, where φ = (1 + √5)/2 ≈ 1.618. ITERFIB thus achieves an exponential
speedup over the original recursive algorithm.
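A hedged Java sketch of the two pseudocode routines above (the method names and the use of a long[] cache are my assumptions, not part of the original):

import java.util.Arrays;

public class Fib {
    // Memoized version of MEMOFIB: cache[i] < 0 means "undefined"
    static long memoFib(int n, long[] cache) {
        if (n < 2) return n;
        if (cache[n] < 0)                       // not computed yet
            cache[n] = memoFib(n - 1, cache) + memoFib(n - 2, cache);
        return cache[n];
    }

    // Iterative version of ITERFIB: fills the table bottom-up in O(n)
    static long iterFib(int n) {
        if (n < 2) return n;
        long[] f = new long[n + 1];
        f[0] = 0; f[1] = 1;
        for (int i = 2; i <= n; i++)
            f[i] = f[i - 1] + f[i - 2];
        return f[n];
    }

    public static void main(String[] args) {
        long[] cache = new long[31];
        Arrays.fill(cache, -1);                  // mark all entries undefined
        System.out.println(memoFib(30, cache));  // 832040
        System.out.println(iterFib(30));         // 832040
    }
}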
What is Recursion?
The process in which a function calls itself directly or indirectly is called recursion, and the
corresponding function is called a recursive function. Using a recursive algorithm, certain
problems can be solved quite easily. Examples of such problems are Towers of Hanoi
(TOH), Inorder/Preorder/Postorder tree traversals, DFS of a graph, etc. A recursive function
solves a particular problem by calling a copy of itself to solve smaller subproblems of the
original problem. More recursive calls can be generated as and when required. It is
essential to provide a base case in order to terminate this recursion process; each recursive
call then works on a simpler version of the original problem.
Need of Recursion
Recursion is a powerful technique with the help of which we can reduce the length of our
code and make it easier to read and write. It has certain advantages over the iteration
technique, which will be discussed later. For a task that can be defined in terms of similar
subtasks, recursion is one of the best solutions. For example: the factorial of a number.
Properties of Recursion:
Performing the same operations multiple times with different inputs.
In every step, we try smaller inputs to make the problem smaller.
A base condition is needed to stop the recursion, otherwise an infinite loop will occur.
A Mathematical Interpretation
Let us consider the problem of determining the sum of the first n natural numbers. There are
several ways of doing that, but the simplest approach is simply to add the numbers from 1 to n.
So the function looks like this:
approach(1) – Simply adding one by one
f(n) = 1 + 2 + 3 + ... + n
but there is another mathematical approach of representing this,
approach(2) – Recursive adding
f(n) = 1,            n = 1
f(n) = n + f(n-1),   n > 1
There is a simple difference between approach (1) and approach (2): in approach (2) the
function "f()" is itself called inside the function, so this phenomenon is called recursion, and a
function containing recursion is called a recursive function. In the end, this is a great tool in the
hands of programmers to code some problems in a much easier and more efficient way.
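A minimal Java sketch of approach (2) (the method name sumOfN is my own choice, not from the text):

// Recursive sum of the first n natural numbers, defined for n >= 1
static int sumOfN(int n) {
    if (n == 1)                // base case: f(1) = 1
        return 1;
    return n + sumOfN(n - 1);  // recursive case: f(n) = n + f(n-1)
}

Calling sumOfN(5) evaluates 5 + 4 + 3 + 2 + 1 and returns 15.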
How are recursive functions stored in memory?
Recursion uses more memory, because the recursive function adds a frame to the stack with each
recursive call and keeps the values there until the call is finished. Like the stack data structure,
recursive calls follow a LIFO (Last In, First Out) order
(see https://www.geeksforgeeks.org/stack-data-structure/).
int fact(int n)
{
    if (n <= 1) // base case
        return 1;
    else
        return n * fact(n - 1);
}
In the above example, the base case for n <= 1 is defined, and a larger number can be solved by
reducing it to a smaller one until the base case is reached.
How is a particular problem solved using recursion?
The idea is to represent a problem in terms of one or more smaller problems, and to add one or
more base conditions that stop the recursion. For example, we can compute the factorial of n if we
know the factorial of (n-1). The base case for factorial would be n = 0: we return 1 when n = 0.
Why does a stack overflow error occur in recursion?
If the base case is not reached or not defined, then the stack overflow problem may arise. Let
us take an example to understand this.
int fact(int n)
{
// wrong base case (it may cause
// stack overflow).
if (n == 100)
return 1;
else
return n*fact(n-1);
}
If fact(10) is called, it will call fact(9), fact(8), fact(7), and so on but the number will never
reach 100. So, the base case is not reached. If the memory is exhausted by these functions on
the stack, it will cause a stack overflow error.
What is the difference between direct and indirect recursion?
A function fun is called directly recursive if it calls the same function fun. A function fun is
called indirectly recursive if it calls another function, say fun_new, and fun_new calls fun
directly or indirectly. The difference between direct and indirect recursion is illustrated in the
examples below.
// An example of direct recursion
void directRecFun()
{
// Some code....
directRecFun();
// Some code...
}
// An example of indirect recursion
void indirectRecFun1()
{
// Some code...
indirectRecFun2();
// Some code...
}
void indirectRecFun2()
{
// Some code...
indirectRecFun1();
// Some code...
}
What is the difference between tail and non-tail recursion?
A recursive function is tail recursive when the recursive call is the last thing executed by the
function; see the sketch below. Please refer to the tail recursion article for details.
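A hedged Java illustration of the difference (these factorial helpers are my own illustrative examples, not from the text):

// Non-tail recursive: the multiplication happens AFTER the recursive call returns
static int fact(int n) {
    if (n <= 1) return 1;
    return n * fact(n - 1);           // work remains after the call
}

// Tail recursive: the recursive call is the very last operation
static int factTail(int n, int acc) {
    if (n <= 1) return acc;
    return factTail(n - 1, n * acc);  // nothing left to do after the call
}

factTail(5, 1) computes the same result as fact(5), namely 120.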
How is memory allocated to different function calls in recursion?
When any function is called from main(), memory is allocated to it on the stack. When a
recursive function calls itself, the memory for the called function is allocated on top of the memory
allocated to the calling function, and a different copy of the local variables is created for each
call. When the base case is reached, the function returns its value to the function that called it,
its memory is de-allocated, and the process continues.
Let us take a simple function as an example of how recursion works (a sketch follows).
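A minimal sketch of such a function (the name countDown and its behavior are my own illustration, not from the text):

// Prints n, n-1, ..., 1: each call pushes a new frame until the base case
static void countDown(int n) {
    if (n == 0)            // base case: stop the recursion
        return;
    System.out.println(n);
    countDown(n - 1);      // recursive call with a smaller input
}

Calling countDown(3) prints 3, 2, 1: the frames for n = 3, 2, 1 are pushed in turn and popped in LIFO order once the base case returns.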
Searching Algorithms
Searching algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored. Based on the type of search operation, these algorithms are generally
classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every element is
checked. For example: Linear Search.
2. Interval Search: These algorithms are specifically designed for searching in sorted data
structures. These types of searching algorithms are much more efficient than linear
search, as they repeatedly target the center of the search structure and divide the search
space in half. For example: Binary Search.
Linear search to find the element "20" in a given list of numbers.
Algorithm:
Start from the leftmost element of the array and compare the target with each element one by one.
If the target matches an element, return its index.
If the target doesn't match any element, return -1.
Let's see an example of linear search in Java, where we search for an element sequentially
in an array.
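A minimal Java sketch (the array contents and class name are illustrative assumptions):

public class LinearSearch {
    // Returns the index of x in arr, or -1 if not found
    static int linearSearch(int[] arr, int x) {
        for (int i = 0; i < arr.length; i++)
            if (arr[i] == x)
                return i;   // found: report the position
        return -1;          // not found
    }

    public static void main(String[] args) {
        int[] arr = { 10, 50, 30, 70, 80, 20, 90, 40 };
        System.out.println(linearSearch(arr, 20));  // prints 5
    }
}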
The Divide and Conquer algorithm solves a problem using the following three steps:
1. Divide: break the original problem into subproblems.
2. Conquer: solve each subproblem recursively.
3. Combine: merge the subproblem solutions into a solution to the original problem.
There are two fundamentals of the Divide & Conquer strategy:
1. Relational Formula
2. Stopping Condition
1. Relational Formula: It is the formula that we generate from the given technique. After generating
the formula, we apply the D&C strategy, i.e. we break the problem recursively and solve the broken
subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we
need to know how long to keep applying it. The condition at which we stop the recursive steps
of D&C is called the stopping condition.
Examples: The following specific computer algorithms are based on the Divide & Conquer approach:
1. Binary Search: The binary search algorithm is a searching algorithm, also called a
half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. If the values differ, the half that cannot contain the target
is eliminated, and the search continues on the other half. We again consider the middle
element of that half and compare it with the target value, and the process keeps repeating
until the target value is found. If the remaining half is empty, it can be concluded that the
target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, also known as partition-
exchange sort. It starts by selecting a pivot value from the array and then divides the remaining
elements into two sub-arrays. The partition is made by comparing each element with the pivot,
checking whether it holds a greater or lesser value than the pivot, and the sub-arrays are then
sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts by
dividing the array into sub-arrays and then recursively sorts each of them. After the sorting is
done, it merges them back together.
4. Closest Pair of Points: It is a problem of computational geometry: given n points in a metric
space, find the pair of points with the minimal distance between them.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, named after Volker
Strassen. It has proven to be much faster than the traditional algorithm when working on large
matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer
approach and has a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest multiplication
algorithms, invented by Anatoly Karatsuba in 1960 and published in 1962. It multiplies two
n-digit numbers using at most n^(log₂ 3) ≈ n^1.585 single-digit multiplications.
binarySearch(arr, x, low, high)
    if low > high
        return -1
    else
        mid = (low + high) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]
            return binarySearch(arr, x, mid + 1, high)
        else
            return binarySearch(arr, x, low, mid - 1)
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element from the
unsorted part and putting it at the beginning. Consider the array:
64 25 12 22 11
First Pass:
For the first position in the sorted list, the whole array is traversed sequentially. 11 is the least
value in the array, so replace 64 with 11. After one iteration, 11 appears in the first position of
the sorted list.
11 25 12 22 64
Second Pass:
For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.
11 25 12 22 64
After traversing, we found that 12 is the second lowest value in the array and it
should appear at the second place in the array, thus swap these values.
11 12 25 22 64
Third Pass:
Now, for the third place, where 25 is present, again traverse the rest of the array and find
the third least value present in it.
11 12 25 22 64
While traversing, 22 came out to be the third least value and it should appear at the
third place in the array; thus swap 22 with the element present at the third position.
11 12 22 25 64
Fourth Pass:
Similarly, for the fourth position, traverse the rest of the array and find the fourth least
element.
As 25 is the 4th lowest value, it will be placed at the fourth position.
11 12 22 25 64
Fifth Pass:
At last, the largest value present in the array automatically gets placed at the last
position.
The resulting array is the sorted array:
11 12 22 25 64
Follow the below steps to solve the problem (a Java sketch follows the list):
Initialize the minimum index (min_idx) to location 0.
Traverse the unsorted part of the array to find its minimum element; whenever an element
smaller than arr[min_idx] is found, update min_idx.
After the traversal, swap the minimum element with the first element of the unsorted part.
Then move the boundary of the sorted part one element ahead.
Repeat until the array is sorted.
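A minimal Java sketch of these steps (class and method names are my own, not from the text):

public class SelectionSort {
    // Sorts arr[] in ascending order using selection sort
    static void selectionSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int min_idx = i;                 // assume first unsorted element is the minimum
            for (int j = i + 1; j < n; j++)
                if (arr[j] < arr[min_idx])
                    min_idx = j;             // found a new minimum: remember its index
            int tmp = arr[min_idx];          // swap the minimum into position i
            arr[min_idx] = arr[i];
            arr[i] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] arr = { 64, 25, 12, 22, 11 };
        selectionSort(arr);
        for (int v : arr)
            System.out.print(v + " ");       // prints: 11 12 22 25 64
    }
}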
Insertion Sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing
cards in your hands. The array is virtually split into a sorted and an unsorted part. Values
from the unsorted part are picked and placed at the correct position in the sorted part.
Characteristics of Insertion Sort:
This algorithm is one of the simplest algorithms, with a simple implementation.
Basically, insertion sort is efficient for small data values.
Insertion sort is adaptive in nature, i.e. it is appropriate for data sets which are
already partially sorted.
Working of Insertion Sort algorithm:
Consider an example: arr[]: {12, 11, 13, 5, 6}
12 11 13 5 6
First Pass:
Initially, the first two elements of the array are compared in insertion sort.
12 11 13 5 6
Here, 12 is greater than 11; hence they are not in ascending order and 12 is not at
its correct position. Thus, swap 11 and 12.
So, for now, 11 is stored in the sorted sub-array.
11 12 13 5 6
Second Pass:
Now, move to the next two elements and compare them.
11 12 13 5 6
Here, 13 is greater than 12, thus both elements seem to be in ascending order; hence,
no swapping will occur. 12 is also stored in the sorted sub-array, along with 11.
Third Pass:
Now, two elements are present in the sorted sub-array: 11 and 12.
Moving forward to the next two elements, which are 13 and 5.
11 12 13 5 6
Both 5 and 13 are not present at their correct places, so swap them.
11 12 5 13 6
After swapping, elements 12 and 5 are not sorted, thus swap again.
11 5 12 13 6
Again, 11 and 5 are not sorted, so swap once more; 5 reaches its correct position.
5 11 12 13 6
Fourth Pass:
Now, the sorted sub-array holds 5, 11 and 12, and the next two elements to compare are 13 and 6.
5 11 12 13 6
Clearly, they are not sorted, thus swap them.
5 11 12 6 13
After swapping, 12 and 6 are not sorted, thus swap again.
5 11 6 12 13
Finally, 11 and 6 are not sorted either, so swap them, after which 6 reaches its correct position.
5 6 11 12 13
Insertion Sort Algorithm
To sort an array of size N in ascending order:
Iterate from arr[1] to arr[N-1] over the array.
Compare the current element (key) to its predecessor.
If the key element is smaller than its predecessor, compare it to the elements
before it. Move the greater elements one position up to make space for the key
element.
class InsertionSort {
    // Sorts arr[] in ascending order using insertion sort
    void sort(int arr[]) {
        int n = arr.length;
        for (int i = 1; i < n; ++i) {
            int key = arr[i];
            int j = i - 1;
            // Shift elements greater than key one position to the right
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }

    // Prints the array
    static void printArray(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    // Driver method
    public static void main(String args[]) {
        int arr[] = { 12, 11, 13, 5, 6 };
        InsertionSort ob = new InsertionSort();
        ob.sort(arr);
        printArray(arr);
    }
}