
Analysis of Algorithms:

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

[Figure: plot of Running Time versus Input Size, showing the best-case, average-case, and worst-case running times of Input -> Algorithm -> Output.]

• Most algorithms transform input objects into output objects.
• The running time of an algorithm typically grows with the input size.
• Average-case time is often difficult to determine.
• We focus on the worst-case running time and best-case running time.
  – Easier to analyze

Performance Analysis of Algorithms:
Performance analysis of an algorithm is the process of calculating the amount of
computer memory and time required to run the algorithm.

Criteria for Measurement:

Two criteria are used to measure an algorithm:

1. Time Complexity:
   The time required to complete the task of the algorithm.
2. Space Complexity:
   The space required to complete the task of the algorithm.

Performance Analysis of Algorithms:
Space and Time Calculation
By inspecting the pseudocode, we can determine the data space needed and
the maximum number of primitive operations executed by an algorithm
Primitive operations :
Examples:
• Assigning a value to a variable
• Evaluating an expression
• Evaluating a condition
• Indexing into an array
• Calling a method
• Returning from a method

Performance Analysis of Algorithms:
PROBLEM: Exchanging (swapping) the contents of two variables
Algorithm: Swap
Input: a, b
Output: contents exchanged

Version 1 (with a temporary):
Algorithm Swap(a,b)
{
  t := a;
  a := b;
  b := t;
}
Memory <--- 3 DS (data space: a, b, t)
Time   <--- 3 assignments

Version 2 (arithmetic, no temporary):
Algorithm Swap(a,b)
{
  a := a+b;
  b := a-b;
  a := a-b;
}
Memory <--- 2 DS (data space: a, b)
Time   <--- 3 assignments + 1 addition + 2 subtractions
Performance Analysis of Algorithms:
Which version would suffer more computational pitfalls? Why?

{
t := a;
a := b;
b := t;
}

{
a := a+b;
b := a-b;
a := a-b;
}
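A minimal sketch of the two swap strategies (function names are assumed, not from the slides). Python integers do not overflow, but the arithmetic version still fails when both operands refer to the same memory location, such as the same array cell; in fixed-width languages like C, a+b can additionally overflow.

```python
def swap_temp(arr, i, j):
    t = arr[i]          # 3 assignments, one extra data space (t)
    arr[i] = arr[j]
    arr[j] = t

def swap_arith(arr, i, j):
    arr[i] = arr[i] + arr[j]   # no temporary needed, but...
    arr[j] = arr[i] - arr[j]
    arr[i] = arr[i] - arr[j]   # ...if i == j the value is destroyed

a = [5, 9]
swap_temp(a, 0, 1)      # a is now [9, 5]
swap_arith(a, 0, 1)     # swapped back to [5, 9]

b = [7]
swap_arith(b, 0, 0)     # aliasing pitfall: b[0] becomes 0, not 7
```

This is why the temporary-variable version, despite using one more word of memory, is the safer default.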
Performance Analysis of Algorithms:
Algorithm: Sum of two numbers
Input: a, b
Output: sum
Algorithm Sum(a,b)
{
  c := a+b;
}
Memory <--- 3 DS (data space: a, b, c)
Time   <--- 1 assignment + 1 addition

Algorithm: Maximum of two numbers
Input: a, b
Output: max
Algorithm Maximum(a,b)
{
  if a > b then
    max := a;
  else
    max := b;
}
Memory <--- 3 DS (data space: a, b, max)
Time   <--- 1 condition + 1 assignment
Performance Analysis of Algorithms:
Let us assume that the multiplication operator (*) is not available to compute the
product of two integers m and n.
Algorithm: * in terms of repeated + operations

Version 1:
p := 0;
for i := 1 to m do
  p := p + n;

Version 2:
p := 0;
for i := 1 to n do
  p := p + m;

For m = 12, n = 6:
Version 1: p := p + n will be executed 12 times (m times).
Version 2: p := p + m will be executed 6 times (n times).
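A sketch of the two repeated-addition versions (function names are assumed). Version 1 loops m times and Version 2 loops n times, so looping over the smaller operand does less work for the same product.

```python
def multiply_v1(m, n):
    p = 0
    for _ in range(m):   # p := p + n executed m times
        p += n
    return p

def multiply_v2(m, n):
    p = 0
    for _ in range(n):   # p := p + m executed n times
        p += m
    return p

print(multiply_v1(12, 6))  # 72, after 12 additions
print(multiply_v2(12, 6))  # 72, after only 6 additions
```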
Space Complexity:
• Space Complexity of an algorithm is the amount of memory it needs to run to completion.
• Fixed part: space needed for instructions, simple variables, and constants, independent of
  the input and output sizes.
  1. Such as int a (2 bytes), float b (4 bytes), etc.
  2. Constants
  Denoted by C.
• Variable part: space needed for instance variables whose size depends on the particular
  problem instance.
  1. Dynamic arrays, i.e. a[]
  2. Recursion stack space
  Denoted by Sp.
The memory space S(P) needed by a program or algorithm P is:
  S(P) = C + Sp
Space Complexity Calculations:
Example 1:
Algorithm Fun1(a, b, c)
{
  return a+b+b*c+a*(a+b);
}
What is S(P) = C + Sp?
Space complexity S(P) = 3 + 0 = 3,
where C = 3 (three variables a, b, c, one word each) and
Sp = 0 (no instance variables).
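The variable part Sp includes recursion stack space. A hedged sketch (not from the slides) of how recursion changes the count: a recursive sum holds one stack frame per element, so Sp grows with n and S(P) = C + n, unlike Fun1 where Sp = 0.

```python
def rsum(a, n):
    # One stack frame per recursion level, so depth (and Sp) is about n.
    if n == 0:
        return 0
    return rsum(a, n - 1) + a[n - 1]

data = [1, 2, 3, 4]
print(rsum(data, len(data)))  # 10
```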

Time Complexity
– Time required to complete the task of the algorithm (time complexity).
• Time complexity is the sum of two components:
• Fixed part: compile time, independent of the problem instance.
  Once compiled, the program can be run several times without recompilation.
  Denoted by C.
• Variable part: run time, dependent on the particular problem instance.
  Denoted by Tp.

Time Complexity (Contd.)
The time T(P) needed by a program or algorithm P consists of two components:
  T(P) = C + Tp
where C is the compile time and Tp is the run time.

• The compile time C is usually ignored; only the run time Tp is measured.
How to measure the time complexity T(P)?
There are two methods to calculate the time complexity of an algorithm:
1. Tabular Method
2. Count Variable Method

Time Complexity Calculation:
1. Using Tabular Method:
we use the step/execution count by building a table.

Example 1: Iterative function for the sum of array elements:
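The slide's table is an image; the tabular method totals, per statement, (steps per execution) x (frequency). For the iterative array sum this gives 1 + (n+1) + n + 1 = 2n+3 steps, matching the count given later in these notes. A small tally sketch (row layout assumed):

```python
def sum_steps(n):
    # (statement, steps per execution, frequency)
    rows = [
        ("s := 0",             1, 1),
        ("for i := 1 to n do", 1, n + 1),
        ("s := s + a[i]",      1, n),
        ("return s",           1, 1),
    ]
    return sum(se * freq for _, se, freq in rows)

print(sum_steps(5))   # 2*5 + 3 = 13
```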

Time Complexity Calculation:
Example 2: Matrix addition
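The matrix-addition table itself is an image on the slide; a count-instrumented sketch (loop structure assumed) for adding two m x n matrices. The outer loop header is evaluated m+1 times, each inner header n+1 times, and the element addition m*n times, giving T(P) = (m+1) + m*(n+1) + m*n = 2mn + 2m + 1.

```python
def matrix_add_steps(a, b):
    m, n = len(a), len(a[0])
    c = [[0] * n for _ in range(m)]
    count = 0
    for i in range(m + 1):          # outer header: m+1 evaluations
        count += 1
        if i == m:
            break
        for j in range(n + 1):      # inner header: n+1 evaluations per row
            count += 1
            if j == n:
                break
            c[i][j] = a[i][j] + b[i][j]
            count += 1              # one step per element assignment
    return c, count

c, steps = matrix_add_steps([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(c)      # [[6, 8], [10, 12]]
print(steps)  # 2*2*2 + 2*2 + 1 = 13
```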

Time Complexity Calculation:
2. Using Count Variable Method:
we use a step count in the program.
• In this method, we introduce a new global variable, count, into the program.
• How to calculate time complexity using the count variable method:
  - count is a global variable with initial value 0.
  - Calculating the time complexity becomes easy by counting the number of steps
    each statement in the program executes.
2. Count Variable Method example

Algorithm sum(a,n)
{
  s := 0;                  1
  for i := 1 to n do       n+1
    s := s+a[i];           n
  return s;                1
}
T(P) = 1 + (n+1) + n + 1 = 2n+3

S(P) = n+3  (array a: n words; s, i, n: one word each)

2. Count Variable Method example: count = 0
Algorithm sum(a,n)
{
  s := 0;
  count := count+1;   // for the s := 0 assignment
  for i := 1 to n do
  {
    count := count+1; // for the for-loop condition (true)
    s := s+a[i];
    count := count+1; // for the assignment
  }
  count := count+1;   // for the last for-loop condition (false)
  count := count+1;   // for the return
  return s;
}
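A direct Python transcription of the count-variable algorithm above, checking that the final count matches T(P) = 2n+3:

```python
count = 0

def array_sum(a, n):
    global count
    s = 0
    count += 1              # for the s := 0 assignment
    for i in range(n):
        count += 1          # for the for-loop condition (true)
        s += a[i]
        count += 1          # for the assignment
    count += 1              # for the last for-loop condition (false)
    count += 1              # for the return
    return s

total = array_sum([2, 4, 6], 3)
print(total)  # 12
print(count)  # 2*3 + 3 = 9
```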
Example:

t_best  : the time for the best exit, i.e. the earliest exit
t_worst : the time for the worst exit, i.e. the most delayed exit
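The slide's worked example is an image; linear search is assumed here as a standard illustration of t_best (earliest exit, key at the first position) versus t_worst (most delayed exit, key at the last position or absent).

```python
def linear_search(a, key):
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == key:
            return i, comparisons   # early exit as soon as the key is found
    return -1, comparisons          # most delayed exit: n comparisons

a = [7, 3, 9, 1, 5]
print(linear_search(a, 7))   # (0, 1): best exit, one comparison
print(linear_search(a, 5))   # (4, 5): worst successful exit, n comparisons
```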
Asymptotic Notations:
• It deals with the behavior of functions in the limit i.e for sufficiently large values of
parameters
There are three main notations:
1. Big Oh (Order) Notation (O)
2. Omega Notation (Ω)
3. Theta Notation (Ɵ)
1. Big Oh Notation:
• O notation provides an asymptotic upper bound on a function
  f(n). The upper bound on f(n) indicates that, in the worst case,
  the function does not consume more than this computing time.
• The function f(n) = O(g(n)) ["read as f of n is big oh of g of n"]
  if and only if there exist positive constants c and n0 such that
  f(n) <= c*g(n) for all n >= n0.
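A numeric check of the definition, using the assumed witnesses c = 4 and n0 = 2: f(n) = 3n+2 is O(n), since 3n+2 <= 4n for all n >= 2.

```python
def f(n):
    return 3 * n + 2

c, n0 = 4, 2
# Verify f(n) <= c*g(n) with g(n) = n over a large range of n >= n0.
ok = all(f(n) <= c * n for n in range(n0, 10_000))
print(ok)  # True
```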
Asymptotic Notations:
2. Omega Notation:
• Ω notation provides an asymptotic lower bound on a function
  f(n). The lower bound on f(n) indicates that, in the best case,
  the function does not consume less than this computing time.
• The function f(n) = Ω(g(n)) ["read as f of n is omega of g of n"]
  if and only if there exist positive constants c and n0 such that
  f(n) >= c*g(n) for all n >= n0.

Asymptotic Notations:
3. Theta Notation:
• For some functions the lower and upper bounds coincide, i.e. the
  Ω and O bounds use the same function g(n).
• In that case, the function f(n) characterizes the average case.
• The function f(n) = Ɵ(g(n)) ["read as f of n is theta of g of n"]
  if and only if there exist positive constants c1, c2, and n0
  such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
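A numeric check of the Theta definition with the assumed constants c1 = 3, c2 = 4, n0 = 2: f(n) = 3n+2 is Ɵ(n), since 3n <= 3n+2 <= 4n for all n >= 2, so the same g(n) = n serves as both the Ω and the O bound.

```python
def f(n):
    return 3 * n + 2

c1, c2, n0 = 3, 4, 2
# Verify c1*g(n) <= f(n) <= c2*g(n) with g(n) = n over a large range of n >= n0.
ok = all(c1 * n <= f(n) <= c2 * n for n in range(n0, 10_000))
print(ok)  # True
```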

THANK YOU
