
Algorithm

➢ An algorithm is a step-by-step procedure that defines a set of instructions to be executed in a certain order to get the desired output.
➢ Algorithms are generally created independent of the underlying language, i.e. an algorithm can be implemented in more than one programming language.
Some important categories of algorithms −

• Search − Algorithm to search for an item in a data structure.
• Sort − Algorithm to sort items in a certain order.
• Insert − Algorithm to insert an item into a data structure.
• Update − Algorithm to update an existing item in a data structure.
• Delete − Algorithm to delete an existing item from a data structure.
Characteristics of an Algorithm

• Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
• Input − An algorithm should have 0 or more well-defined inputs.
• Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
• Finiteness − An algorithm must terminate after a finite number of steps.
• Feasibility − An algorithm should be feasible with the available resources.
• Independent − An algorithm should have step-by-step directions that are independent of any programming code.
• We design an algorithm to get a solution to a given problem. A problem can be solved in more than one way.

• Hence, many solution algorithms can be derived for a given problem. The next step is to analyze those proposed solutions and implement the most suitable one.
Algorithm Analysis
The efficiency of an algorithm can be analyzed at two different stages: before implementation and after implementation. They are the following −
• A Priori Analysis −
➢ This analysis is done before implementing the algorithm in a particular programming language on any system.
➢ It is expressed using asymptotic notations.
➢ It is independent of the hardware and the programming language.
➢ It depends on the number of times each statement is executed.
• A Posteriori Analysis −
➢ This analysis is done after implementing the algorithm in a particular programming language on a particular system, i.e. it measures the actual execution time (2 sec, 3 sec, etc.).
➢ In industry, a posteriori analysis is generally not performed.
A posteriori analysis vs. a priori analysis
1. A posteriori analysis is a relative analysis; a priori analysis is an absolute analysis.
2. A posteriori analysis depends on the language of the compiler and the type of hardware; a priori analysis is independent of both.
3. A posteriori analysis gives an exact answer; a priori analysis gives an approximate answer.
4. A posteriori analysis does not use asymptotic notations to represent the time complexity of an algorithm; a priori analysis uses asymptotic notations to represent how much time the algorithm will take to complete its execution.
5. The time complexity of an algorithm obtained by a posteriori analysis differs from system to system; the time complexity obtained by a priori analysis is the same on every system.
6. In a posteriori analysis, if the program takes less time, the credit goes to the compiler and hardware; in a priori analysis, if the algorithm runs faster, the credit goes to the programmer.
How to Write an Algorithm?

• There are no well-defined standards for writing algorithms.
• Rather, it is problem- and resource-dependent. Algorithms are never written to support a particular programming code.
Example
Let's try to learn algorithm-writing by using an example.

Problem − Design an algorithm to add two numbers and display the result.
Step 1 − START
Step 2 − declare three integers a, b & c
Step 3 − define values of a & b
Step 4 − add values of a & b
Step 5 − store output of step 4 to c
Step 6 − print c
Step 7 − STOP
Algorithmic Efficiency and its Complexity

• 2.1 Time and space complexity
– Time Complexity:
• The time complexity of an algorithm is the amount of CPU time it needs to run to completion.
Why do we study it?
– To write better programs
– To compare algorithms
Let’s say we have two algorithms:
– Algorithm A
– Algorithm B
• Suppose Algorithm A needs more processing and Algorithm B needs less processing.
• So we generally say that Algorithm B is better than Algorithm A, or that Algorithm B is faster than Algorithm A.
• But what if we run Algorithm A on a fast machine and Algorithm B on a slow machine?
• Then we might conclude that Algorithm A is actually better.
• Such comparisons prove nothing on their own; to compare algorithms rigorously, we use time complexity.
Space complexity of Algorithm
– Space complexity:
• Space complexity of an algorithm is the amount of
memory it needs to run to completion.
• A good algorithm keeps space complexity as low as
possible.
Asymptotic notations
Asymptotic notations are used to make meaningful statements about the efficiency of algorithms. They help us make approximate but meaningful statements about time and space complexity.
Commonly used asymptotic notations are:
1. Big O notation: upper bound → commonly associated with the worst case
2. Omega notation: lower bound → commonly associated with the best case
3. Theta notation: tight bound → commonly associated with the average case
Worst case (longest time) → The longest time required for execution of the algorithm. It means the algorithm will take at most that particular time; it can take less than that time, but never more.
Example: let’s say I have a Honda car that takes at most 6 hours to go from 0 to 100 km; or if I have an apple, I will take at most 5 seconds to finish it.
Average case (average time) → The average time required for execution of the algorithm. Example: the car takes between 3 and 6 hours to go from 0 to 100 km; the apple takes between 3 and 5 seconds to finish.

Best case (shortest time) → The shortest time required for execution of the algorithm.
Example: the car takes at least 3 hours to go from 0 to 100 km; the apple takes at least 3 seconds to finish.
There are mainly three asymptotic notations:

• Big-O notation
• Omega notation
• Theta notation
Big O Notation
✓Also called the upper bound
✓Used for the worst case
This notation gives an upper bound for a function to within a constant factor. We write f(n) = O(g(n)) if there are positive constants n0 and c such that, to the right of n0, the value of f(n) always lies on or below c·g(n).
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
Big O graph
Example of Time Complexity
Finding the worst case using Big O notation:
f(n) = 5n + 3
We ignore constant factors and lower-order terms (i.e. 5 and 3 are dropped),
so the time complexity is O(n).
Quadratic time → O(n^2)
Example
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
    }
}
Here the outer loop executes n times and, for each outer iteration, the inner loop executes n times. So the total complexity is O(n*n) = O(n^2).

Cubic time → O(n^3)
Example
for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {
        for (int k = 0; k < n; k++) {
        }
    }
}
Here each of the three nested loops executes n times, so the total complexity is O(n*n*n) = O(n^3).
Complexity growth
Common complexities in increasing order:
O(1) < O(log n) < O(n) < O(n log n) < O(n^2) < O(n^3)
Examples
As we know, Big O notation considers only the upper bound.
– f(n) = 2n^2 + 3n → O(n^2), ignoring the constants and lower-degree terms
– f(n) = 4n^4 + 3n^3 → O(n^4)
– f(n) = n^2 + log n → O(n^2), since log n is smaller than n^2
– f(n) = 200 → O(1)
Let’s work through a full example.
If f(n) = O(g(n)), there exist a positive integer n0 and a positive constant c such that
f(n) ≤ c·g(n) for all n ≥ n0.
Here g(n) is an upper bound for f(n), as c·g(n) grows at least as fast as f(n).
Example:
Prove 3n^2 + 4n + 6 = O(n^2).
We know 0 ≤ f(n) ≤ c·g(n):
0 ≤ 3n^2 + 4n + 6 ≤ c·n^2
c ≥ 3 + 4/n + 6/n^2 → putting in the minimum value of n, i.e. n = 1:
n = 1: c ≥ 3 + 4/1 + 6/1^2
c ≥ 13
Now, with n = n0 = 1 and c = 13, the inequality becomes
0 ≤ 3n^2 + 4n + 6 ≤ 13·n^2.
For any value of n ≥ n0 = 1 that we put into the inequality, the condition holds.
Omega Notation(Ω-notation)

• This notation gives a lower bound for a function to within a constant factor. We write f(n) = Ω(g(n)) if there are positive constants n0 and c such that, to the right of n0, the value of f(n) always lies on or above c·g(n).
• Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
Example:
• f(n) = 2n^2 + 3n + 5, g(n) = n^2; we have to find a lower bound for this function.
• The definition of Omega notation is:
• 0 ≤ c·g(n) ≤ f(n)
• 0 ≤ c·n^2 ≤ 2n^2 + 3n + 5
• c·n^2 ≤ 2n^2 + 3n + 5
• c ≤ 2 + 3/n + 5/n^2
• We have to put in a large value of n to get the minimum value of c.
• Letting n → ∞, the terms 3/n and 5/n^2 go to zero.
• Now
c ≤ 2 + 0 + 0
c ≤ 2
So we take c = 2.
Finding the value of n0:
• 2·n^2 ≤ 2n^2 + 3n + 5
Here the left side is never greater than the right side. If we put in the minimum value n = 1, the condition holds:
2 ≤ 2 + 3 + 5
2 ≤ 10
So for any value n ≥ 1, i.e. 1, 2, 3, 4, …, the condition holds.
So, c = 2, n = n0 = 1:
0 ≤ 2·n^2 ≤ 2n^2 + 3n + 5
f(n) = Ω(n^2)
Omega graph
Theta notation(Θ-notation)

• This notation bounds a function to within constant factors. We say f(n) = Θ(g(n)) if there exist positive constants n0, c1 and c2 such that, to the right of n0, the value of f(n) always lies between c1·g(n) and c2·g(n) inclusive.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
Theta notation graph
Example:
f(n) = 2n^2 + 3n + 5, g(n) = n^2
We have to prove f(n) ∈ Θ(n^2), i.e.
0 ≤ c1·n^2 ≤ 2n^2 + 3n + 5 ≤ c2·n^2
Finding c1 and n0 so that c1·n^2 ≤ 2n^2 + 3n + 5:
c1·n^2 ≤ 2n^2 + 3n + 5
c1 ≤ 2 + 3/n + 5/n^2
We have to find the minimum value of the right side, which we get by putting in the maximum value of n, i.e. letting n → ∞:
c1 ≤ 2 + 3/∞ + 5/∞^2
Therefore c1 ≤ 2; take c1 = 2.
Putting n = 1: 2 ≤ 2 + 3 + 5, i.e. 2 ≤ 10 → true
Putting n = 2: 2·4 ≤ 8 + 6 + 5, i.e. 8 ≤ 19 → true
True for all n = 1, 2, 3, …
So, c1 = 2, n = n0 = 1.
Now proving the right-hand side:
2n^2 + 3n + 5 ≤ c2·n^2
2 + 3/n + 5/n^2 ≤ c2
Putting in the minimum value n = 1 to get the maximum value of c2 (the upper bound):
2 + 3 + 5 ≤ c2
c2 ≥ 10; take c2 = 10.
Now putting n = 1: 2 + 3 + 5 ≤ 10 → 10 ≤ 10, and the inequality also holds for n = 2, 3, …
So, c2 = 10, n = n0 = 1:
0 ≤ 2n^2 ≤ 2n^2 + 3n + 5 ≤ 10n^2
At n = 1: 0 ≤ 2 ≤ 10 ≤ 10 → proved
So we can say f(n) ∈ Θ(n^2).
