
The word "algorithm" means "a set of rules to be followed in calculations or other problem-solving operations" or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".
Therefore, an algorithm refers to a finite sequence of steps to solve a particular problem.
Algorithms can be simple or complex depending on what you want to achieve.

It can be understood by taking the example of cooking a new recipe. To cook a new recipe,
one reads the instructions and steps and executes them one by one, in the given sequence.
The result thus obtained is the new dish cooked perfectly. Every time you use your phone,
computer, laptop, or calculator you are using Algorithms. Similarly, algorithms help to do a
task in programming to get the expected output.
Algorithms are designed to be language-independent, i.e. they are just plain instructions that
can be implemented in any language, and the output will be the same, as expected.

What are the Characteristics of an Algorithm?


Just as one would not follow arbitrary written instructions to cook a recipe, but only the standard
one, not every set of written instructions for programming is an algorithm. For
some instructions to be an algorithm, they must have the following characteristics:
 Clear and Unambiguous: The algorithm should be clear and unambiguous. Each
of its steps should be clear in all aspects and must lead to only one meaning.
 Well-Defined Inputs: If an algorithm says to take inputs, it should be well-defined
inputs.
 Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well.
 Finiteness: The algorithm must be finite, i.e. it should terminate after a finite
time.
 Feasible: The algorithm must be simple, generic, and practical, such that it can be
executed with the available resources. It must not depend on future technology
or on resources that are not available.
 Language Independent: The Algorithm designed must be language-independent,
i.e. it must be just plain instructions that can be implemented in any language, and
yet the output will be the same, as expected.
Properties of Algorithm:
 It should terminate after a finite time.
 It should produce at least one output.
 It should take zero or more input.
 It should be deterministic, i.e. it should give the same output for the same input.
 Every step in the algorithm must be effective i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm: It is the simplest approach to a problem. A brute force
algorithm is usually the first approach that comes to mind when we see a problem.
2. Recursive Algorithm: A recursive algorithm is based on recursion. In this case, a problem
is broken into several sub-parts and the same function is called again and again.
3. Backtracking Algorithm: The backtracking algorithm builds the solution by
searching among all possible solutions. Using this algorithm, we keep on building the
solution piece by piece according to the given criteria. Whenever a partial solution fails, we trace back to the
failure point, build the next candidate, and continue this process until we find the solution or all
possible solutions have been examined.
4. Searching Algorithm: Searching algorithms are the ones that are used for searching
elements or groups of elements from a particular data structure. They can be of different
types based on their approach or the data structure in which the element should be found.
5. Sorting Algorithm: Sorting is arranging a group of data in a particular manner according
to the requirement. The algorithms which help in performing this function are called sorting
algorithms. Generally sorting algorithms are used to sort groups of data in an increasing or
decreasing manner.
6. Hashing Algorithm: Hashing algorithms work similarly to the searching algorithm. But
they contain an index with a key ID. In hashing, a key is assigned to specific data.
7. Divide and Conquer Algorithm: This algorithm breaks a problem into sub-problems,
solves a single sub-problem and merges the solutions together to get the final solution. It
consists of the following three steps:
 Divide
 Solve
 Combine
8. Greedy Algorithm: In this type of algorithm the solution is built part by part. The
solution of the next part is built based on the immediate benefit of the next part. The one
solution giving the most benefit will be chosen as the solution for the next part.
9. Dynamic Programming Algorithm: This algorithm uses the concept of using the already
found solution to avoid repetitive calculation of the same part of the problem. It divides the
problem into smaller overlapping subproblems and solves them.
10. Randomized Algorithm: A randomized algorithm uses a random number at some step of
its logic. The random number helps in deciding the expected outcome.
Advantages of Algorithms:
 It is easy to understand.
 An algorithm is a step-wise representation of a solution to a given problem.
 In an algorithm, the problem is broken down into smaller pieces or steps; hence, it is
easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
 Writing an algorithm takes a long time, so it is time-consuming.
 Understanding complex logic through algorithms can be very difficult.
 Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
In order to write an algorithm, the following things are needed as a pre-requisite:

1. The problem that is to be solved by this algorithm i.e. clear problem definition.
2. The constraints of the problem must be considered while solving the problem.
3. The input to be taken to solve the problem.
4. The output to be expected when the problem is solved.
5. The solution to this problem, within the given constraints.
Then the algorithm is written with the help of the above parameters such that it solves the
problem.
Example: Consider the example to add three numbers and print the sum.

 Step 1: Fulfilling the pre-requisites


As discussed above, in order to write an algorithm, its pre-requisites must be
fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers
and print their sum.
2. The constraints of the problem that must be considered while
solving the problem: The numbers must contain only digits and no
other characters.
3. The input to be taken to solve the problem: The three numbers to be
added.
4. The output to be expected when the problem is solved: The sum of
the three numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution
consists of adding the 3 numbers. It can be done with the help of ‘+’
operator, or bit-wise, or any other method.
 Step 2: Designing the algorithm
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2 and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2,
and num3 respectively.
4. Declare an integer variable sum to store the resultant sum of the 3
numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
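A minimal Java sketch of this algorithm (reading the three numbers from standard input and the class name SumOfThree are assumptions for illustration):

import java.util.Scanner;

// A minimal sketch of the algorithm above: read three integers and print their sum.
public class SumOfThree {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);  // assumed input source
        int num1 = in.nextInt();              // Step 3: take the three numbers as input
        int num2 = in.nextInt();
        int num3 = in.nextInt();
        int sum = num1 + num2 + num3;         // Step 5: add and store the result in sum
        System.out.println(sum);              // Step 6: print the value of sum
        in.close();
    }
}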

Time Complexity is a concept in computer science that deals with the quantification of the
amount of time taken by a set of code or algorithm to process or run as a function of the
amount of input. In other words, the time complexity is how long a program takes to process
a given input. The efficiency of an algorithm depends on two parameters:
 Time Complexity
 Space Complexity
Time Complexity: It is defined in terms of the number of times a particular instruction set is
executed rather than the total time taken, because the total time also depends on
external factors like the compiler used, the processor's speed, etc.
Space Complexity: It is the total memory space required by the program for its execution.
Best case time complexity of different data structures for different
operations
Data structure        Access     Search     Insertion   Deletion
Array                 O(1)       O(1)       O(1)        O(1)
Stack                 O(1)       O(1)       O(1)        O(1)
Queue                 O(1)       O(1)       O(1)        O(1)
Singly Linked List    O(1)       O(1)       O(1)        O(1)
Doubly Linked List    O(1)       O(1)       O(1)        O(1)
Hash Table            O(1)       O(1)       O(1)        O(1)
Binary Search Tree    O(log n)   O(log n)   O(log n)    O(log n)
AVL Tree              O(log n)   O(log n)   O(log n)    O(log n)
B Tree                O(log n)   O(log n)   O(log n)    O(log n)
Red-Black Tree        O(log n)   O(log n)   O(log n)    O(log n)

Worst Case time complexity of different data structures for different operations
Data structure        Access     Search     Insertion   Deletion
Array                 O(1)       O(N)       O(N)        O(N)
Stack                 O(N)       O(N)       O(1)        O(1)
Queue                 O(N)       O(N)       O(1)        O(1)
Singly Linked List    O(N)       O(N)       O(1)        O(1)
Doubly Linked List    O(N)       O(N)       O(1)        O(1)
Hash Table            O(N)       O(N)       O(N)        O(N)
Binary Search Tree    O(N)       O(N)       O(N)        O(N)
AVL Tree              O(log N)   O(log N)   O(log N)    O(log N)
Binary Tree           O(N)       O(N)       O(N)        O(N)
Red-Black Tree        O(log N)   O(log N)   O(log N)    O(log N)

Average time complexity of different data structures for different operations


Data structure        Access     Search     Insertion   Deletion
Array                 O(1)       O(N)       O(N)        O(N)
Stack                 O(N)       O(N)       O(1)        O(1)
Queue                 O(N)       O(N)       O(1)        O(1)
Singly Linked List    O(N)       O(N)       O(1)        O(1)
Doubly Linked List    O(N)       O(N)       O(1)        O(1)
Hash Table            O(1)       O(1)       O(1)        O(1)
Binary Search Tree    O(log N)   O(log N)   O(log N)    O(log N)
AVL Tree              O(log N)   O(log N)   O(log N)    O(log N)
B Tree                O(log N)   O(log N)   O(log N)    O(log N)
Red-Black Tree        O(log N)   O(log N)   O(log N)    O(log N)

Asymptotic Notations
We have discussed Asymptotic Analysis, and Worst, Average, and Best Cases of Algorithms.
The main idea of asymptotic analysis is to measure the efficiency of algorithms in a way that
does not depend on machine-specific constants and does not require algorithms to be implemented
and the running times of programs to be compared. Asymptotic notations are mathematical tools to
represent the time complexity of algorithms for asymptotic analysis.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

1. Theta Notation (Θ-Notation):


Theta notation encloses the function from above and below. Since it represents the upper and
the lower bound of the running time of an algorithm, it is used for analyzing the average-
case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to
be Θ(g) if there are constants c1, c2 > 0 and a natural number n0 such that c1 * g(n) ≤ f(n) ≤
c2 * g(n) for all n ≥ n0.

Figure: Theta notation.

Mathematical Representation of Theta notation:


Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 *
g(n) for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be described as if f(n) is theta of g(n), then the value f(n) is always
between c1 * g(n) and c2 * g(n) for large values of n (n ≥ n0). The definition of theta also
requires that f(n) must be non-negative for values of n greater than n0.
A simple way to get the Theta notation of an expression is to drop low-order terms and
ignore leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 =
Θ(n^3); dropping the lower-order terms is always fine because there will always be a
number n0 after which n^3 has higher values than n^2, irrespective of the constants
involved. For a given function g(n), Θ(g(n)) denotes the following set of functions.
Examples:
{ 100 , log(2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
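As a worked check of the dropped-terms rule above, one possible (not unique) choice of constants for f(n) = 3n^3 + 6n^2 + 6000 and g(n) = n^3, written in LaTeX:

\[
3n^3 \;\le\; 3n^3 + 6n^2 + 6000 \;\le\; 3n^3 + 6n^3 + 6000n^3 \;=\; 6009\,n^3
\quad \text{for all } n \ge 1,
\]

so c1 = 3, c2 = 6009 and n0 = 1 satisfy c1 * g(n) ≤ f(n) ≤ c2 * g(n), confirming f(n) = Θ(n^3).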

2. Big-O Notation (O-notation):


Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it
gives the worst-case complexity of an algorithm.
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive
constant c and a natural number n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
For example, consider the case of Insertion Sort. It takes linear time in the best case and
quadratic time in the worst case. We can safely say that the time complexity of Insertion
Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion sort, we have to use two
statements for best and worst cases:
 The worst-case time complexity of Insertion Sort is Θ(n^2).
 The best-case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the time complexity of an
algorithm. Many times we easily find an upper bound by simply looking at the algorithm.
Examples :
{ 100 , log(2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact or
upper bounds.

3. Omega Notation (Ω-Notation):


Omega notation represents the lower bound of the running time of an algorithm. Thus, it
provides the best case complexity of an algorithm.
Let g and f be functions from the set of natural numbers to itself. The function f is said to
be Ω(g) if there is a constant c > 0 and a natural number n0 such that c * g(n) ≤ f(n) for all n ≥
n0.
Mathematical Representation of Omega notation :
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time complexity of Insertion Sort
can be written as Ω(n), but it is not very useful information about insertion sort, as we are
generally interested in worst-case and sometimes in the average case.
Examples :
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log(2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω provides exact or
lower bounds.

Properties of Asymptotic Notations:


1. General Properties:
If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.
Example:
f(n) = 2n²+5 is O(n²)
then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)), where a is a constant.

2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).
Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))

3. Reflexive Properties:
Reflexive properties are always easy to understand after transitive.
If f(n) is given then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself!
Hence, x = f(n) and y = O(f(n)) always tie themselves in a reflexive relation.
Example:
f(n) = n² ; O(n²) i.e O(f(n))
Similarly, this property satisfies both Θ and Ω notation.
We can say that,
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).

4. Symmetric Properties:
If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).
Example:
If f(n) = n² and g(n) = n²,
then f(n) = Θ(n²) and g(n) = Θ(n²).
This property holds only for Θ notation.

5. Transpose Symmetric Properties:


If f(n) is O(g(n)) then g(n) is Ω (f(n)).
Example:
If f(n) = n and g(n) = n²,
then n is O(n²) and n² is Ω(n).
This property holds only for O and Ω notations.

6. Some More Properties:


1. If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))
2. If f(n) = O(g(n)) and d(n)=O(e(n)) then f(n) + d(n) = O( max( g(n), e(n) ))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) + d(n) = n + n² i.e O(n²)
3. If f(n)=O(g(n)) and d(n)=O(e(n)) then f(n) * d(n) = O( g(n) * e(n))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) * d(n) = n * n² = n³ i.e O(n³)
_______________________________________________________________________________
Note: If f(n) = O(g(n)) then g(n) = Ω(f(n))

Big-O analysis
In our previous articles on Analysis of Algorithms, we had discussed asymptotic notations,
their worst and best case performance, etc. in brief. In this article, we discuss the analysis of
the algorithm using Big – O asymptotic notation in complete detail.
Big-O Analysis of Algorithms
We can express algorithmic complexity using the big-O notation. For a problem of size N:
 A constant-time function/method is “order 1” : O(1)
 A linear-time function/method is “order N” : O(N)
 A quadratic-time function/method is “order N squared” : O(N^2)
Definition: Let g and f be functions from the set of natural numbers to itself. The function f is
said to be O(g) (read "big-oh of g") if there is a constant c > 0 and a natural number n0 such
that f(n) ≤ c·g(n) for all n ≥ n0.
Note: O(g) is a set!
Abuse of notation: f = O(g) is commonly written, but what it really means is f ∈ O(g).
The Big-O Asymptotic Notation gives us the Upper Bound Idea, mathematically described
below:
f(n) = O(g(n)) if there exist a positive integer n0 and a positive constant c such that f(n) ≤ c·g(n)
∀ n ≥ n0
The general step wise procedure for Big-O runtime analysis is as follows:
1. Figure out what the input is and what n represents.
2. Express the maximum number of operations, the algorithm performs in terms of n.
3. Eliminate all but the highest-order terms.
4. Remove all the constant factors.
Some of the useful properties of Big-O notation analysis are as follow:

Constant Multiplication:
If f(n) = c·g(n), then O(f(n)) = O(g(n)), where c is a nonzero constant.

Polynomial Function:
If f(n) = a0 + a1·n + a2·n^2 + ... + am·n^m, then O(f(n)) = O(n^m).

Summation Function:
If f(n) = f1(n) + f2(n) + ... + fm(n) and fi(n) ≤ fi+1(n) ∀ i = 1, 2, ..., m-1,
then O(f(n)) = O(max(f1(n), f2(n), ..., fm(n))).

Logarithmic Function:
If f(n) = log_a(n) and g(n) = log_b(n), then O(f(n)) = O(g(n));
all log functions grow in the same manner in terms of Big-O.
Basically, this asymptotic notation is used to measure and compare the worst-case
scenarios of algorithms theoretically. For any algorithm, the Big-O analysis should be
straightforward as long as we correctly identify the operations that are dependent on n, the
input size.
Runtime Analysis of Algorithms
In general, we mainly measure and compare the worst-case theoretical
running time complexities of algorithms for performance analysis.
The fastest possible running time for any algorithm is O(1), commonly referred to
as Constant Running Time. In this case, the algorithm always takes the same amount of time
to execute, regardless of the input size. This is the ideal runtime for an algorithm, but it’s
rarely achievable.
In actual cases, the performance (runtime) of an algorithm depends on n, that is, the size of
the input or the number of operations required for each input item.
The algorithms can be classified as follows from the best-to-worst performance (Running
Time Complexity):

A logarithmic algorithm – O(log n)
Runtime grows logarithmically in proportion to n.

A linear algorithm – O(n)
Runtime grows directly in proportion to n.

A superlinear algorithm – O(n log n)
Runtime grows in proportion to n log n.

A polynomial algorithm – O(n^c)
Runtime grows faster than all of the above, as a polynomial in n.

An exponential algorithm – O(c^n)
Runtime grows even faster than a polynomial algorithm as n increases.

A factorial algorithm – O(n!)
Runtime grows the fastest and becomes quickly unusable for even
small values of n.

Where n is the input size and c is a positive constant.

Algorithmic Examples of Runtime Analysis:


Some examples of all those types of algorithms (in worst-case scenarios) are
mentioned below:

Logarithmic algorithm – O(log n) – Binary Search.
Linear algorithm – O(n) – Linear Search.
Superlinear algorithm – O(n log n) – Heap Sort, Merge Sort.
Polynomial algorithm – O(n^c) – Strassen’s Matrix Multiplication, Bubble Sort, Selection
Sort, Insertion Sort, Bucket Sort.
Exponential algorithm – O(c^n) – Tower of Hanoi.
Factorial algorithm – O(n!) – Determinant Expansion by Minors, Brute force Search
algorithm for Traveling Salesman Problem.
Mathematical Examples of Runtime Analysis:
The performances (Runtimes) of different orders of algorithms separate rapidly as n (the
input size) gets larger. Let’s consider the mathematical example:
(taking log to base 2)
                  n = 10            n = 20
log2(n)           ≈ 3.32            ≈ 4.32
n                 10                20
n·log2(n)         ≈ 33.2            ≈ 86.4
n^2               100               400
2^n               1,024             1,048,576
n!                3,628,800         ≈ 2.43 × 10^18
Memory Footprint Analysis of Algorithms
For performance analysis of an algorithm, runtime is not the only relevant metric;
we also need to consider the amount of memory the program uses. This is referred to
as the memory footprint of the algorithm, also known as space complexity.
Here also, we need to measure and compare the worst case theoretical space complexities of
algorithms for the performance analysis.
It basically depends on two major aspects described below:
 Firstly, the implementation of the program is responsible for memory usage. For
example, a recursive implementation usually reserves more
memory than the corresponding iterative implementation of a particular problem.
 And the other one is n, the input size or the amount of storage required for each
item. For example, a simple algorithm with a large input can
consume more memory than a complex algorithm with a smaller input.
Algorithmic Examples of Memory Footprint Analysis: The algorithms with examples are
classified from the best-to-worst performance (Space Complexity) based on the worst-case
scenarios are mentioned below:

Ideal algorithm – O(1) – Linear Search, Binary Search,
Bubble Sort, Selection Sort, Insertion Sort, Heap Sort, Shell Sort.

Logarithmic algorithm – O(log n) – Quick Sort (auxiliary recursion stack).

Linear algorithm – O(n) – Merge Sort (auxiliary merge buffer).

Algorithm with O(n + k) auxiliary space – Radix Sort.

Time-Space Trade-Off in Algorithms


A tradeoff is a situation where one thing increases and another thing decreases. It is a way to
solve a problem in:
 Either in less time, by using more space, or
 In very little space, by spending more time.
The best algorithm is one that solves a problem while requiring less space in
memory and also taking less time to generate the output. But in general, it is not always
possible to achieve both of these goals at the same time. The most common case is
an algorithm using a lookup table: the answers to some questions for every
possible value can be written down in advance. One way of solving such a problem is to write down the
entire lookup table, which will let you find answers very quickly but will use a lot of space.
Another way is to calculate the answers without writing down anything, which uses very
little space but might take a long time. Therefore, the more time-efficient an algorithm is,
the less space-efficient it tends to be, and vice versa.
Types of Space-Time Trade-off
 Compressed or uncompressed data
 Re-rendering or stored images
 Smaller code or loop unrolling
 Lookup tables or recalculation
Compressed or Uncompressed data: A space-time trade-off can be applied to the problem
of data storage. If data stored is uncompressed, it takes more space but less time. But if the
data is stored compressed, it takes less space but more time to run the decompression
algorithm. There are many instances where it is possible to directly work with compressed
data. One such case is compressed bitmap indices, where it is faster to work with compression
than without compression.
Re-Rendering or Stored images: In this case, storing only the source and re-rendering it as an
image when needed takes less space but more time; storing the rendered image in the cache is faster than
re-rendering but requires more space in memory.
Smaller code or Loop Unrolling: Smaller code occupies less space in memory but it
requires high computation time that is required for jumping back to the beginning of the
loop at the end of each iteration. Loop unrolling can optimize execution speed at the cost of
increased binary size. It occupies more space in memory but requires less computation time.
Lookup tables or Recalculation: An implementation can include the
entire lookup table, which reduces computing time but increases the amount of memory needed, or it
can recalculate, i.e. compute table entries as needed, increasing computing time but reducing
memory requirements.
For example: In mathematical terms, the sequence F(n) of the Fibonacci Numbers is defined
by the recurrence relation:
F(n) = F(n-1) + F(n-2),
where F(0) = 0 and F(1) = 1.
A simple solution is to compute the Nth Fibonacci term using recursion directly from the above
recurrence: it uses very little extra space but recomputes the same terms many times.
There is usually a trade-off between optimal memory use and runtime performance.
In general, for an algorithm, space efficiency and time efficiency sit at two opposite ends,
and each point in between them has a certain time and space efficiency. So, the more time
efficiency you have, the less space efficiency you have, and vice versa.
For example, Mergesort algorithm is exceedingly fast but requires a lot of space to do the
operations. On the other side, Bubble Sort is exceedingly slow but requires the minimum
space.
At the end of this topic, we can conclude that finding an algorithm that runs in less
time and also requires less memory space can make a huge difference in how
well an algorithm performs.
Fibonacci sequence
The Fibonacci sequence is the sequence of numbers in which every next item is the sum of the previous
two items. Each number of the Fibonacci sequence is called a Fibonacci number.

Example: 0 ,1,1,2,3,5,8,13,21,....................... is a Fibonacci sequence.

The Fibonacci numbers F(n) are defined as follows:

F(0) = 0
F(1) = 1
F(n) = F(n-1) + F(n-2) for n ≥ 2
FIB (n)
1. If (n < 2)
2. then return n
3. else return FIB (n - 1) + FIB (n - 2)

Figure: Four levels of recursion for the call fib(8).

Figure: Recursive calls during computation of a Fibonacci number.

A single call to fib(n) results in one recursive call to fib(n - 1), two recursive calls to fib(n
- 2), three recursive calls to fib(n - 3), five recursive calls to fib(n - 4) and, in general, F(k+1) recursive
calls to fib(n - k). We can avoid this unneeded repetition by writing down the results of recursive
calls and looking them up again if we need them later. This process is called memoization.

Here is the algorithm with memorization

MEMOFIB (n)
1 if (n < 2)
2 then return n
3 if (F[n] is undefined)
4 then F[n] ← MEMOFIB (n - 1) + MEMOFIB (n - 2)
5 return F[n]

If we trace through the recursive calls to MEMOFIB, we find that the array F[] gets filled from the bottom
up, i.e. first F[2], then F[3], and so on, up to F[n]. We can replace the recursion with a simple for-loop
that just fills up the array F[] in that order.

ITERFIB (n)
1 F [0] ← 0
2 F [1] ← 1
3 for i ← 2 to n
4 do
5 F[i] ← F [i - 1] + F [i - 2]
6 return F[n]

This algorithm clearly takes only O(n) time to compute F(n). By contrast, the original recursive
algorithm takes O(φ^n) time, where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio. ITERFIB therefore gives an
exponential speedup over the original recursive algorithm.
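A hedged Java sketch of MEMOFIB and ITERFIB (the class and method names are illustrative, not from the original notes):

import java.util.Arrays;

// Illustrative Java versions of the memoized and iterative Fibonacci algorithms.
public class Fibonacci {

    // Memoized recursion: fills F[] top-down; each entry is computed only once.
    static long memoFib(int n, long[] F) {
        if (n < 2) return n;
        if (F[n] == -1)                        // -1 marks "undefined"
            F[n] = memoFib(n - 1, F) + memoFib(n - 2, F);
        return F[n];
    }

    // Iterative version: fills the array bottom-up in O(n) time.
    static long iterFib(int n) {
        if (n < 2) return n;
        long[] F = new long[n + 1];
        F[0] = 0;
        F[1] = 1;
        for (int i = 2; i <= n; i++)
            F[i] = F[i - 1] + F[i - 2];
        return F[n];
    }

    public static void main(String[] args) {
        int n = 10;
        long[] memo = new long[n + 1];
        Arrays.fill(memo, -1);
        System.out.println(memoFib(n, memo));  // prints 55
        System.out.println(iterFib(n));        // prints 55
    }
}
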

What is Recursion?
The process in which a function calls itself directly or indirectly is called recursion and the
corresponding function is called a recursive function. Using a recursive algorithm, certain
problems can be solved quite easily. Examples of such problems are Towers of Hanoi
(TOH), Inorder/Preorder/Postorder Tree Traversals, DFS of Graph, etc. A recursive function
solves a particular problem by calling a copy of itself and solving smaller subproblems of the
original problems. Many more recursive calls can be generated as and when required. It is
essential to provide a base case in order to terminate this recursion
process; each time, the function calls itself with a simpler version of the
original problem.
Need of Recursion
Recursion is an amazing technique with the help of which we can reduce the length of our
code and make it easier to read and write. It has certain advantages over the iteration
technique which will be discussed later. For a task that can be defined in terms of similar
subtasks, recursion is one of the best solutions. For example: the factorial of a number.
Properties of Recursion:
 Performing the same operations multiple times with different inputs.
 In every step, we try smaller inputs to make the problem smaller.
 Base condition is needed to stop the recursion otherwise infinite loop will occur.
A Mathematical Interpretation
Let us consider the problem of determining the sum of the first n natural
numbers. There are several ways of doing that, but the simplest approach is simply to add the
numbers starting from 1 to n. So the function simply looks like this,
approach(1) – Simply adding one by one
f(n) = 1 + 2 + 3 +……..+ n
but there is another mathematical approach of representing this,
approach(2) – Recursive adding
f(n) = 1,            if n = 1
f(n) = n + f(n-1),   if n > 1
There is a simple difference between approach (1) and approach (2): in approach (2) the
function “f()” itself is called inside the function. This phenomenon is named recursion, and a
function containing recursion is called a recursive function. In the end, this is a great tool in the
hands of programmers to code some problems in a much easier and more efficient way (a small Java
sketch follows below).
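A minimal Java sketch of approach (2); the function name f is kept from the text, while the class name and driver are illustrative:

// Recursive sum of the first n natural numbers, following approach (2).
public class RecursiveSum {
    static int f(int n) {
        if (n == 1) return 1;      // base case: f(1) = 1
        return n + f(n - 1);       // recursive case: f(n) = n + f(n-1)
    }

    public static void main(String[] args) {
        System.out.println(f(5));  // prints 15 = 1 + 2 + 3 + 4 + 5
    }
}
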
How are recursive functions stored in memory?
Recursion uses more memory, because the recursive function adds to the stack with each
recursive call, and keeps the values there until the call is finished. The recursive function
uses a LIFO (Last In, First Out) structure, just like the stack data
structure (see: https://www.geeksforgeeks.org/stack-data-structure/).

What is the base condition in recursion?


In the recursive program, the solution to the base case is provided and the solution to the
bigger problem is expressed in terms of smaller problems.

int fact(int n)
{
    if (n <= 1) // base case
        return 1;
    else
        return n * fact(n - 1);
}
In the above example, the base case for n < = 1 is defined and the larger value of a number
can be solved by converting to a smaller one till the base case is reached.
How a particular problem is solved using recursion?
The idea is to represent a problem in terms of one or more smaller problems, and add one or
more base conditions that stop the recursion. For example, we compute factorial n if we
know the factorial of (n-1). The base case for factorial would be n = 0. We return 1 when n =
0.
Why Stack Overflow error occurs in recursion?
If the base case is not reached or not defined, then the stack overflow problem may arise. Let
us take an example to understand this.
int fact(int n)
{
    // wrong base case (it may cause
    // stack overflow)
    if (n == 100)
        return 1;
    else
        return n * fact(n - 1);
}
If fact(10) is called, it will call fact(9), fact(8), fact(7), and so on but the number will never
reach 100. So, the base case is not reached. If the memory is exhausted by these functions on
the stack, it will cause a stack overflow error.
What is the difference between direct and indirect recursion?
A function fun is called direct recursive if it calls the same function fun. A function fun is
called indirect recursive if it calls another function say fun_new and fun_new calls fun
directly or indirectly. The difference between direct and indirect recursion is
illustrated in the examples below.
// An example of direct recursion
void directRecFun()
{
// Some code....

directRecFun();

// Some code...
}

// An example of indirect recursion


void indirectRecFun1()
{
// Some code...

indirectRecFun2();

// Some code...
}
void indirectRecFun2()
{
// Some code...

indirectRecFun1();

// Some code...
}
What is the difference between tailed and non-tailed recursion?
A recursive function is tail recursive when the recursive call is the last thing executed by the
function. Please refer to the tail recursion article for details; a brief illustrative contrast is sketched below.
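A brief illustrative contrast in Java (the helper factTail and its accumulator parameter are assumptions, not from the original text):

// Non-tail vs. tail recursion for factorial (illustrative sketch).
public class FactorialDemo {

    // Non-tail recursion: the multiplication happens AFTER the recursive call returns.
    static int fact(int n) {
        if (n <= 1) return 1;
        return n * fact(n - 1);           // work remains after the call, so not tail recursive
    }

    // Tail recursion: the recursive call is the last action; the running result
    // is carried in an accumulator parameter.
    static int factTail(int n, int acc) {
        if (n <= 1) return acc;
        return factTail(n - 1, n * acc);  // nothing left to do after the call
    }

    public static void main(String[] args) {
        System.out.println(fact(5));         // prints 120
        System.out.println(factTail(5, 1));  // prints 120
    }
}
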
How memory is allocated to different function calls in recursion?
When any function is called from main(), memory is allocated to it on the stack. When a
recursive function calls itself, the memory for the called function is allocated on top of the memory
allocated to the calling function, and a different copy of the local variables is created for each
function call. When the base case is reached, the function returns its value to the function by
which it was called, its memory is de-allocated, and the process continues.
Let us take the example of how recursion works by tracing a simple function.
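For instance, a sketch of how calls to the simple fact() function shown earlier stack up and unwind (the trace in the comments is illustrative):

// Tracing fact(3):
//   fact(3) is pushed on the stack  -> needs fact(2)
//     fact(2) is pushed on the stack -> needs fact(1)
//       fact(1) hits the base case and returns 1; its frame is popped
//     fact(2) resumes, returns 2 * 1 = 2; its frame is popped
//   fact(3) resumes, returns 3 * 2 = 6; its frame is popped
public class RecursionTrace {
    static int fact(int n) {
        if (n <= 1) return 1;        // base case
        return n * fact(n - 1);      // each call waits on the stack for the inner call
    }

    public static void main(String[] args) {
        System.out.println(fact(3)); // prints 6
    }
}
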
Searching Algorithm
Searching Algorithms are designed to check for an element or retrieve an element from any data
structure where it is stored. Based on the type of search operation, these algorithms are generally
classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every element is
checked. For example: Linear Search.
2. Interval Search: These algorithms are specifically designed for searching in sorted data
structures. These types of searching algorithms are much more efficient than Linear
Search, as they repeatedly target the center of the search structure and divide the search
space in half. For example: Binary Search.
Figure: Linear Search to find the element “20” in a given list of numbers.

Figure: Binary Search to find the element “23” in a given list of numbers.

Linear Search in Java
Linear search is used to search a key element from multiple elements. Linear search is less used today
because it is slower than binary search and hashing.

Algorithm:

o Step 1: Traverse the array


o Step 2: Match the key element with array element
o Step 3: If key element is found, return the index position of the array element
o Step 4: If key element is not found, return -1

Let's see an example of linear search in java where we are going to search an element sequentially
from an array.

public class LinearSearchExample {
    public static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String a[]) {
        int[] a1 = { 10, 20, 30, 50, 70, 90 };
        int key = 50;
        System.out.println(key + " is found at index: " + linearSearch(a1, key));
    }
}
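When run, this program prints "50 is found at index: 3", since 50 sits at index 3 of the array {10, 20, 30, 50, 70, 90}.
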
Quick summary of typical worst-case time complexities:

Time complexity of searching/traversing a queue: O(n)

Time complexity of searching/traversing a stack: O(n)

Time complexity of searching/traversing a linked list: O(n)

Time complexity of linear search: O(n)

Time complexity of insertion sort: O(n^2)

Time complexity of bubble sort: O(n^2)

Time complexity of binary search: O(log n)

Time complexity of merge sort: O(n log n)

Time complexity of quick sort: O(n log n) on average (O(n^2) in the worst case)

Time complexity of selection sort: O(n^2)

Divide and Conquer Introduction


Divide and Conquer is an algorithmic pattern. In this algorithmic method, the design is to take a problem
over a huge input, break the input into minor pieces, solve the problem on each of the small pieces,
and then merge the piecewise solutions into a global solution. This mechanism of solving the
problem is called the Divide & Conquer Strategy.

A Divide and Conquer algorithm solves a problem using the following three steps.

1. Divide: Break the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the whole
problem.

Generally, we can follow the divide-and-conquer approach in a three-step process.

Examples: The specific computer algorithms are based on the Divide & Conquer approach:

1. Maximum and Minimum Problem


2. Binary Search
3. Sorting (merge sort, quick sort)
4. Tower of Hanoi.
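As a concrete sketch of the three steps, here is a minimal Java merge sort (class and method names are illustrative, not from the original text):

import java.util.Arrays;

// Merge sort as a concrete Divide & Conquer example (illustrative sketch).
public class MergeSortDemo {

    static void mergeSort(int[] a, int lo, int hi) {
        if (lo >= hi) return;                 // stopping condition: at most one element
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);                // Divide + Conquer: left half
        mergeSort(a, mid + 1, hi);            // Divide + Conquer: right half
        merge(a, lo, mid, hi);                // Combine: merge the two sorted halves
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] left = Arrays.copyOfRange(a, lo, mid + 1);
        int[] right = Arrays.copyOfRange(a, mid + 1, hi + 1);
        int i = 0, j = 0, k = lo;
        while (i < left.length && j < right.length)
            a[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        while (i < left.length) a[k++] = left[i++];
        while (j < right.length) a[k++] = right[j++];
    }

    public static void main(String[] args) {
        int[] arr = { 38, 27, 43, 3, 9, 82, 10 };
        mergeSort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr)); // [3, 9, 10, 27, 38, 43, 82]
    }
}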

Fundamental of Divide & Conquer Strategy:


There are two fundamentals of the Divide & Conquer Strategy:

1. Relational Formula
2. Stopping Condition

1. Relational Formula: It is the formula that we generate from the given technique. After generating
the formula, we apply the D&C strategy, i.e. we break the problem recursively and solve the broken
subproblems.
2. Stopping Condition: When we break the problem using the Divide & Conquer strategy, we
need to know how long we must keep applying it. The condition at which we need to stop
our recursion steps of D&C is called the stopping condition.

Applications of Divide and Conquer Approach:


Following algorithms are based on the concept of the Divide and Conquer Technique:

1. Binary Search: The binary search algorithm is a searching algorithm, which is also called a
half-interval search or logarithmic search. It works by comparing the target value with the
middle element of a sorted array. After making the comparison, if the values differ,
then the half that cannot contain the target is eliminated, and the search continues
on the other half. We again consider the middle element and compare it with
the target value. The process keeps repeating until the target value is found. If the
remaining half is empty after ending the search, it can be concluded that the target is not
present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms, also known as partition-
exchange sort. It starts by selecting a pivot value from the array and then dividing the rest
of the array elements into two sub-arrays. The partition is made by comparing each of the
elements with the pivot value, i.e. whether the element holds a greater or lesser value
than the pivot, and then the sub-arrays are sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts by
dividing the array into sub-arrays and then recursively sorts each of them. After the sorting is
done, it merges them back together.
4. Closest Pair of Points: It is a problem of computational geometry. This algorithm emphasizes
finding out the closest pair of points in a metric space, given n points, such that the distance
between the pair of points should be minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is named after Volker
Strassen. It has proven to be much faster than the traditional algorithm when works on large
matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer
approach and has a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest classical multiplication
algorithms, invented by Anatoly Karatsuba in 1960 and published in 1962. It multiplies two n-digit
numbers by reducing the multiplication to at most n^log2(3) ≈ n^1.585 single-digit
multiplications.

Advantages of Divide and Conquer


o Divide and Conquer tends to successfully solve some famously hard problems, such as the Tower
of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems for which
you have no basic idea, but the divide and conquer approach lessens the effort, as it works by
dividing the main problem into smaller subproblems and then solving them
recursively. Such algorithms are often much faster than naive alternatives.
o It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
o It is more proficient than that of its counterpart Brute Force technique.
o Since the subproblems are independent, these algorithms naturally exhibit parallelism and can be
handled by systems incorporating parallel processing without much modification.

Disadvantages of Divide and Conquer


o Since most of its algorithms are designed using recursion, they require careful
memory management.
o An explicit stack may overuse the space.
o It may even crash the system if the recursion depth exceeds the stack space available.

Binary Search Approach:


Binary Search is a searching algorithm used in a sorted array by repeatedly dividing the
search interval in half. The idea of binary search is to use the information that the array is
sorted and reduce the time complexity to O(Log n).
Binary Search Algorithm: The basic steps to perform Binary Search are:
 Begin with the mid element of the whole array as a search key.
 If the value of the search key is equal to the item then return an index of the search
key.
 Or if the value of the search key is less than the item in the middle of the interval,
narrow the interval to the lower half.
 Otherwise, narrow it to the upper half.
 Repeatedly check from the second point until the value is found or the interval is
empty.
Binary Search Algorithm can be implemented in the following two ways
1. Iterative Method
2. Recursive Method
1. Iterative Method
binarySearch(arr, x, low, high)
    repeat while low <= high
        mid = (low + high) / 2
        if (x == arr[mid])
            return mid
        else if (x > arr[mid])    // x is on the right side
            low = mid + 1
        else                      // x is on the left side
            high = mid - 1
    return -1                     // x is not present
2. Recursive Method (the recursive method follows the divide and conquer approach)
binarySearch(arr, x, low, high)
    if low > high
        return False
    else
        mid = (low + high) / 2
        if x == arr[mid]
            return mid
        else if x > arr[mid]      // x is on the right side
            return binarySearch(arr, x, mid + 1, high)
        else                      // x is on the left side
            return binarySearch(arr, x, low, mid - 1)
Figure: Example of the Binary Search Algorithm.


Step-by-step Binary Search Algorithm: We basically ignore half of the elements just after
one comparison.
1. Compare x with the middle element.
2. If x matches with the middle element, we return the mid index.
3. Else If x is greater than the mid element, then x can only lie in the right half
subarray after the mid element. So we recur for the right half.
 4. Else (x is smaller) recur for the left half. A Java sketch of the iterative method follows.
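A minimal Java sketch of the iterative method described above (the class name and sample array are illustrative):

// Iterative binary search on a sorted array (illustrative sketch).
public class BinarySearchDemo {

    static int binarySearch(int[] arr, int x) {
        int low = 0, high = arr.length - 1;
        while (low <= high) {                  // interval not yet empty
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
            if (arr[mid] == x)
                return mid;                    // found: return the index
            else if (x > arr[mid])
                low = mid + 1;                 // x is in the right half
            else
                high = mid - 1;                // x is in the left half
        }
        return -1;                             // not present
    }

    public static void main(String[] args) {
        int[] arr = { 2, 5, 8, 12, 16, 23, 38, 56, 72, 91 };
        System.out.println(binarySearch(arr, 23)); // prints 5
    }
}
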
Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the
adjacent elements if they are in the wrong order. This algorithm is not suitable for large data
sets as its average and worst-case time complexity is quite high.
How does Bubble Sort Work?
Input: arr[] = {5, 1, 4, 2, 8}
First Pass:
 Bubble sort starts with the very first two elements, comparing them to check which one is
greater.
 ( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two
elements, and swaps since 5 > 1.
 ( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
 ( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
 ( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order
(8 > 5), algorithm does not swap them.
Second Pass:
 Now, during second iteration it should look like this:
 ( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
 ( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Third Pass:
 Now, the array is already sorted, but our algorithm does not know if it is completed.
 The algorithm needs one whole pass without any swap to know it is sorted.
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
 ( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Follow the below steps to solve the problem (a Java sketch follows the list):
 Run a nested for loop to traverse the input array using two variables i and j, such
that 0 ≤ i < n-1 and 0 ≤ j < n-i-1
 If arr[j] is greater than arr[j+1] then swap these adjacent elements, else move on
 Print the sorted array
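A minimal Java sketch of this procedure (the early-exit flag and class name are illustrative additions):

// Bubble sort following the nested-loop steps above (illustrative sketch).
public class BubbleSortDemo {

    static void bubbleSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            boolean swapped = false;                // detects a pass with no swaps
            for (int j = 0; j < n - i - 1; j++) {
                if (arr[j] > arr[j + 1]) {          // adjacent elements out of order
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;                    // a full pass without swaps: already sorted
        }
    }

    public static void main(String[] args) {
        int[] arr = { 5, 1, 4, 2, 8 };
        bubbleSort(arr);
        for (int v : arr) System.out.print(v + " "); // 1 2 4 5 8
        System.out.println();
    }
}
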
Java Program for Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element
(considering ascending order) from the unsorted part and putting it at the beginning.
The algorithm maintains two subarrays in a given array.
 The subarray which is already sorted.
 The remaining subarray which is unsorted.
In every iteration of the selection sort, the minimum element (considering ascending order)
from the unsorted subarray is picked and moved to the sorted subarray.

Figure: Flowchart of Selection Sort.

How does selection sort work?


Let's consider the following array as an example: arr[] = {64, 25, 12, 22, 11}
First pass:
 For the first position in the sorted array, the whole array is traversed from index 0 to
4 sequentially. 64 is presently stored at the first position; after traversing the whole
array, it is clear that 11 is the lowest value.

64 25 12 22 11
 Thus, swap 64 with 11. After one iteration, 11, which happens to be the least value
in the array, appears in the first position of the sorted list.

11 25 12 22 64

Second Pass:
 For the second position, where 25 is present, again traverse the rest of the array in a
sequential manner.

11 25 12 22 64

 After traversing, we found that 12 is the second lowest value in the array and it
should appear at the second place in the array, thus swap these values.

11 12 25 22 64

Third Pass:
 Now, for the third place, where 25 is present, again traverse the rest of the array and find
the third least value present in the array.

11 12 25 22 64

 While traversing, 22 came out to be the third least value and it should appear at the
third place in the array; thus, swap 22 with the element present at the third position.

11 12 22 25 64

Fourth pass:
 Similarly, for the fourth position, traverse the rest of the array and find the fourth least
element in the array.
 As 25 is the 4th lowest value, it will be placed at the fourth position.

11 12 22 25 64

Fifth Pass:
 At last the largest value present in the array automatically get placed at the last
position in the array
 The resulted array is the sorted array.

11 12 22 25 64
Follow the below steps to solve the problem (a Java sketch follows the list):
 Initialize the minimum index (min_idx) to location 0.
 Traverse the unsorted part of the array to find its minimum element; whenever an
element smaller than arr[min_idx] is found, update min_idx.
 After the pass, swap the element at min_idx with the first element of the unsorted part.
 Move the boundary between the sorted and unsorted parts one position to the right.
 Repeat until the array is sorted.
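A minimal Java sketch of selection sort following these steps (class and variable names are illustrative):

// Selection sort: repeatedly select the minimum of the unsorted part (illustrative sketch).
public class SelectionSortDemo {

    static void selectionSort(int[] arr) {
        int n = arr.length;
        for (int i = 0; i < n - 1; i++) {
            int minIdx = i;                          // start of the unsorted part
            for (int j = i + 1; j < n; j++)
                if (arr[j] < arr[minIdx])            // remember the smallest element seen so far
                    minIdx = j;
            int tmp = arr[minIdx];                   // move it to the front of the unsorted part
            arr[minIdx] = arr[i];
            arr[i] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] arr = { 64, 25, 12, 22, 11 };
        selectionSort(arr);
        for (int v : arr) System.out.print(v + " "); // 11 12 22 25 64
        System.out.println();
    }
}
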
Insertion Sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing
cards in your hands. The array is virtually split into a sorted and an unsorted part. Values
from the unsorted part are picked and placed at the correct position in the sorted part.
Characteristics of Insertion Sort:
 This algorithm is one of the simplest algorithms, with a simple implementation
 Basically, Insertion sort is efficient for small data values
 Insertion sort is adaptive in nature, i.e. it is appropriate for data sets which are
already partially sorted.
Working of Insertion Sort algorithm:
Consider an example: arr[]: {12, 11, 13, 5, 6}

12 11 13 5 6
First Pass:
 Initially, the first two elements of the array are compared in insertion sort.

12 11 13 5 6

 Here, 12 is greater than 11, hence they are not in ascending order and 12 is not at
its correct position. Thus, swap 11 and 12.
 So, for now, 11 is stored in a sorted sub-array.

11 12 13 5 6

Second Pass:
 Now, move to the next two elements and compare them

11 12 13 5 6

Here, 13 is greater than 12, thus both elements seem to be in ascending order; hence,

no swapping will occur. 12 is also stored in the sorted sub-array along with 11.
Third Pass:
 Now, two elements are present in the sorted sub-array which are 11 and 12
 Moving forward to the next two elements which are 13 and 5

11 12 13 5 6

 Both 5 and 13 are not present at their correct place so swap them

11 12 5 13 6

 After swapping, elements 12 and 5 are not sorted, thus swap again

11 5 12 13 6

 Here, again 11 and 5 are not sorted, hence swap again

5 11 12 13 6

 Here, 5 is at its correct position.


Fourth Pass:
 Now, the elements which are present in the sorted sub-array are 5, 11 and 12
 Moving to the next two elements 13 and 6

5 11 12 13 6

 Clearly, they are not sorted, thus perform swap between both

5 11 12 6 13

 Now, 6 is smaller than 12, hence, swap again

5 11 6 12 13

 Here, also swapping makes 11 and 6 unsorted hence, swap again

5 6 11 12 13

 Finally, the array is completely sorted.

Insertion Sort Algorithm
To sort an array of size N in ascending order:
 Iterate from arr[1] to arr[N-1] over the array.
 Compare the current element (key) to its predecessor.
 If the key element is smaller than its predecessor, compare it to the elements
before. Move the greater elements one position up to make space for the swapped
element.

// Java program for implementation of Insertion Sort

class InsertionSort {

    /* Function to sort array using insertion sort */
    void sort(int arr[])
    {
        int n = arr.length;
        for (int i = 1; i < n; ++i) {
            int key = arr[i];
            int j = i - 1;

            /* Move elements of arr[0..i-1] that are
               greater than key one position ahead
               of their current position */
            while (j >= 0 && arr[j] > key) {
                arr[j + 1] = arr[j];
                j = j - 1;
            }
            arr[j + 1] = key;
        }
    }

    /* A utility function to print an array of size n */
    static void printArray(int arr[])
    {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    // Driver method
    public static void main(String args[])
    {
        int arr[] = { 12, 11, 13, 5, 6 };
        InsertionSort ob = new InsertionSort();
        ob.sort(arr);
        printArray(arr);
    }
} /* This code is contributed by Rajat Mishra. */

Difference between Insertion Sort and Selection Sort

1. Insertion Sort: Inserts each value into the presorted part of the array to sort the set of values.
   Selection Sort: Finds the minimum/maximum number from the unsorted list and places it in
   ascending/descending order.

2. Insertion Sort: It is a stable sorting algorithm.
   Selection Sort: It is an unstable sorting algorithm.

3. Insertion Sort: The best-case time complexity is Ω(N) when the array is already in ascending
   order; it has Θ(N^2) in the worst and average cases.
   Selection Sort: For the best, worst, and average cases, selection sort has complexity Θ(N^2).

4. Insertion Sort: The number of comparison operations performed is less than the number of
   swaps performed.
   Selection Sort: The number of comparison operations performed is more than the number of
   swaps performed.

5. Insertion Sort: It is more efficient than Selection Sort.
   Selection Sort: It is less efficient than Insertion Sort.

6. Insertion Sort: Here the element is known beforehand, and we search for the correct position
   to place it.
   Selection Sort: The location where to put the element is known in advance; we search for the
   element to insert at that position.

7. Insertion Sort is used when:
    The array has a small number of elements
    There are only a few elements left to be sorted
   Selection Sort is used when:
    A small list is to be sorted
    The cost of swapping does not matter
    Checking of all the elements is compulsory
    The cost of writing to memory matters, as in flash memory (the number of swaps is O(n)
     as compared to O(n^2) for bubble sort)

8. Insertion Sort is adaptive, i.e. efficient for data sets that are already substantially sorted:
   the time complexity is O(kn) when each element in the input is no more than k places away from
   its sorted position.
   Selection Sort is an in-place comparison sorting algorithm.
