
Complexity

• A problem in a Data Structure may be solved by applying more than one logic.
• The logic for a problem is usually expressed in terms of an Algorithm.
• There may be 'n' number of Algorithms (or n different ways) to solve a particular problem.
• Main Objective: to choose the best Algorithm for a problem by comparing the candidates on the basis of their performance.
Performance of an Algorithm is measured in terms of two parameters:
• A) Time Complexity B) Space Complexity
• Time Complexity: It measures the amount of time taken by an algorithm to run as a function of the length of the input data.
• Space Complexity: It measures the amount of memory taken by an algorithm to run as a function of the length of the input data.
• Usually we judge the performance of Algorithms on the basis of Time Complexity.
Asymptotic Notations are languages that allow us to express the Time Complexity of Algorithms in terms of the Input Data Size (i.e., the rate of growth of the function with respect to the Input Data Size).
Here we consider 3 cases:
Best Case: Minimum amount of time the algorithm takes in terms of its input size.
Average Case: Average amount of time the algorithm takes in terms of its input size.
Worst Case: Maximum amount of time the algorithm takes in terms of its input size.
• Types of Asymptotic Notation:
• A) Ο Notation (Big O Notation) - measures the Worst Case Time Complexity
• B) Ω Notation (Omega Notation) - measures the Best Case Time Complexity
• C) θ Notation (Theta Notation) - measures the Average Case Time Complexity
How to calculate the Time Complexity of the given code:
Assumptions:
A) All arithmetic and logical operations take 1 unit of time.
B) All assignment operations take 1 unit of time.
C) All return statements take 1 unit of time.
int Sum_Numbers()                       Cost    Number of Times
{
    sum = 0;   // only assignment        1       1
    i = 0;                               1       1
    for( ; i < n; i++)   // check        1       n+1
                         // increment    1       n
    {
        sum = sum + i;                   2       n
    }
    return sum;                          1       1
}

Total Cost = 1·1 + 1·1 + 1·(n+1) + 1·n + 2·n + 1·1
           = 1 + 1 + n + 1 + n + 2n + 1
           = 4n + 4

4n + 4 is the running time of the algorithm, whose value depends upon the size of the input 'n'.
If we take n = 5, it consumes 4·5 + 4 = 24 units of time, but as we increase the size of the input to n = 100, it consumes 4·100 + 4 = 404 units of time.
As we increase the size of the input, the running time increases.
4n + 4 is of the form an + b, where a and b are constants.
In Asymptotic Notation, the lower order terms and constants become negligible as we increase the size of n.
Consider the following code:            Cost    Number of Times
i = 1;                                   1       1
sum = 0;                                 1       1
while(i <= n)                            1       n+1
{
    j = 1;                               1       n
    while(j <= n)                        1       n·(n+1)
    {
        sum = sum + 1;                   2       n·n
        j = j + 1;                       2       n·n
    }
    i = i + 1;                           2       n
}

Total Cost f(n) = 1 + 1 + (n+1) + n + n·(n+1) + 2n² + 2n² + 2n = 5n² + 5n + 3

Neglecting all the lower order terms and constants, we are left with n², which is equal to O(n²).
Time Complexities
A) statement;
Time Complexity of a Single Statement will be Constant.
The running time of the statement will not change in relation to N.
B) for(i=0; i < N; i++)
{ statement;
}
The time complexity of the single loop will be Linear.
The running time of the loop is directly proportional to N.
When N doubles, so does the running time.
for(i=0; i < N; i++) {
for(j=0; j < N;j++)
{
statement;
}
}
• The time complexity for the above code will be Quadratic.
• The running time of the two nested loops is proportional to the square of N.
• When N doubles, the running time increases fourfold (N·N grows by a factor of 4).
Big Oh (measures only the WORST CASE)
• A function f(n) is of order at most g(n), that is, f(n) = O(g(n)), if there exist a constant c > 0 and a positive integer n0 >= 1 such that f(n) <= c·g(n) for all n >= n0.
In other words:
• Here f(n) represents the Time Complexity of an Algorithm.
• g(n) is the highest order term, or most significant term.
To compute the Worst Case Time Complexity we have to find O(g(n)) by:
a) finding for what value of 'c' the inequality f(n) <= c·g(n) holds,
b) and then finding for what values of n the above inequality is true.
• At n0 both functions intersect, but as the value of n increases (the size of the input data increases) and g(n) is multiplied by c, c·g(n) always remains >= f(n), which indicates the Upper Bound.
Consider the following f(n) and g(n):

f(n) = 3n + 2
g(n) = n

Compute O(g(n)):

a) Choose the value of c for which f(n) <= c·g(n):

f(n) <= c·g(n), i.e., 3n + 2 <= c·n

Here if we take:
a) c = 1, (3n+2) > n, so the condition is not met
b) c = 2, (3n+2) > 2n, so the condition is not met
c) c = 3, (3n+2) > 3n, so the condition is not met
d) c = 4, (3n+2) <= 4n for large enough n, so the condition is met for c = 4

B) Finding for what values of n:

3n + 2 <= 4·n

a) n = 1, 3(1)+2 <= 4(1), 5 <= 4, the condition is not met
b) n = 2, 3(2)+2 <= 4(2), 8 <= 8, the condition is met
c) n = 3, 3(3)+2 <= 4(3), 11 <= 12, the condition is met, and it stays true for all n >= 2

For c = 4 and n >= 2, f(n) <= c·g(n), so f(n) = O(n).


Omega specifically describes the best case scenario (the minimum amount of time taken by the algorithm based on the size of the input data).

• A function f(n) is of order at least g(n), that is, f(n) = Ω(g(n)), if there exist a constant c > 0 and a positive integer n0 >= 1 such that f(n) >= c·g(n) for all n >= n0.
• Consider the following f(n) and g(n):
f(n) = 3n + 2
g(n) = n
Compute Ω(g(n)):
f(n) >= c·g(n), i.e., 3n + 2 >= c·n
A) For c = 1, 3n + 2 >= n is true.
B) For what values of n: n = 1, 3(1)+2 >= 1·1, i.e., 5 >= 1, true.
• So for c = 1 and n >= 1 the condition holds, and f(n) = Ω(n).
Theta specifically describes the average case scenario (the average amount of time taken by the algorithm based on the size of the input data).
• A function f(n) is of order exactly g(n), that is, f(n) = Θ(g(n)), if there exist constants c1 > 0, c2 > 0 and a positive integer n0 >= 1 such that c1·g(n) <= f(n) <= c2·g(n) for all n >= n0.
Consider the following f(n) and g(n):
f(n) = 3n + 2
g(n) = n
• If we want to represent f(n) as Θ(g(n)), then it must satisfy c1·g(n) <= f(n) <= c2·g(n) for some values of c1, c2 > 0 and n0 >= 1:
• c1·n <= 3n + 2 <= c2·n
• The above condition is TRUE for c1 = 1, c2 = 4 and all n >= 2.
By using Big-Theta notation we can represent the time complexity as follows:
• 3n + 2 = Θ(n)
Sorting
Arrangement of the elements either in ascending order or descending order.
Sorting Algorithms are:
a) Bubble Sort
b) Selection Sort
c) Insertion Sort
Bubble Sort: It scans the unsorted list from left to right and swaps any pair of adjacent elements that is found to be out of order. After 1 complete pass, the largest element is at the right end of the unsorted list, but the earlier elements may still be out of order. The size of the unsorted list is decreased by one and the size of the sorted list is increased by one.
This process is repeated till the unsorted list vanishes and the entire list becomes sorted.
for(i = 0; i < n-1; i++)
{
    for(j = 0; j < n-1-i; j++)
    {
        if(a[j] > a[j+1])
        {
            temp = a[j];
            a[j] = a[j+1];
            a[j+1] = temp;
        }
    }
}

• Total (N-1) passes are needed to sort the entire list.
• The outer loop counts the Number of Passes.
• In the first Pass, the number of comparisons is (N-1).
• The inner loop counts the Number of Comparisons.
• Example: N = 4
• Total passes = (N-1) = 3
• Pass 1: (N-1) comparisons, so 3 comparisons, and the largest element is placed at the right.
• After the completion of Pass 1, the unsorted portion is reduced to 3 elements.
• Pass 2: 2 comparisons, and the largest remaining element is placed at the right.
• After the completion of Pass 2, the unsorted portion is reduced to 2 elements.
• Pass 3: only one comparison, and the largest remaining element is placed at the right.
• The entire list gets sorted.
• Worst Case Analysis:

Pass        Number of Comparisons
1           N-1
2           N-2
3           N-3
.           .
Last        1

• Let T(n) be the time complexity of Bubble Sort: the sum of the number of comparisons in each pass.
• T(n) = (N-1) + (N-2) + (N-3) + ... + 2 + 1
• It comes out to be the sum of the first (N-1) natural numbers
• = N(N-1)/2 = (N² - N)/2 = (1/2)N² - N/2
• Finding the Big-O by neglecting lower order terms and constants, we are left with only
• O(N²)
Insertion Sort: It divides the given unsorted list of array elements into two parts:
a) Sorted List b) Unsorted List
The Sorted List initially consists of a single element: the element at index 0 is taken as the first element of the Sorted List, while the Unsorted List consists of the rest of the array elements.
Insertion Sort works by removing one element from the Unsorted List, keeping it in a temporary variable, and then comparing it with the elements present in the Sorted List to find its appropriate position among them.
To make room for the insertion, some of the elements in the Sorted List may need to be moved from one position to another.
* It needs (n-1) passes to sort the array elements through insertion.

for(i = 1; i < n; i++)
{
temp = a[i];
pos=i;
while(pos>0 && a[pos-1]>temp)
{
a[pos] = a[pos -1];
pos = pos-1;
}
a[pos] = temp;
}
• Best Case:
• When the elements are already sorted: 1 2 3 4 5
• 1 | 2 3 4 5
Pass 1: 1 > 2 (False, one comparison)   1 2 | 3 4 5
Pass 2: 2 > 3 (False, one comparison)   1 2 3 | 4 5
Pass 3: 3 > 4 (False, one comparison)   1 2 3 4 | 5
Pass 4: 4 > 5 (False, one comparison)   1 2 3 4 5 |   Unsorted List empty
To sort 'n' elements we need (n-1) passes, and for each of those passes we have to count how many comparisons we perform.
Here in the Best Case we go through (n-1) passes and in each pass we perform only 1 comparison, so the total is 1·(n-1) = n-1; neglecting constants, the best case = Ω(n).
• Worst Case
The elements are in reverse order: 5 4 3 2 1
5 | 4 3 2 1
In Pass 1: 5 > 4, only 1 comparison              4 5 | 3 2 1
In Pass 2: 5 > 3 and 4 > 3, two comparisons      3 4 5 | 2 1
In Pass 3: 5 > 2, 4 > 2, 3 > 2, three comparisons    2 3 4 5 | 1
:
For Pass (N-1) there can be (N-1) comparisons.
Adding the number of comparisons from each pass:
1 + 2 + 3 + ... + (N-1) = N(N-1)/2 = O(N²)
