
Time Complexity

- Nehu Gumber
Imagine a classroom of 100 students in which you gave your pen to one person. Now you want
that pen back. Here are some ways to find the pen and the Big O order of each.

O(n²): You go and ask the first person in the class if he has the pen. Then you ask that person
about the other 99 people in the classroom, whether any of them has the pen, and so on.

This is what we call O(n²).

O(n): Going and asking each student individually is O(n).

O(log n): Now I divide the class into two groups and ask: “Is the pen on the left side, or the right side of
the classroom?” Then I take the group that has it, divide it into two, and ask again, and so on. Repeat the
process till you are left with the one student who has your pen. This is what we mean by O(log n).

Time complexity of an algorithm/program is not a measure of the actual time taken for the program to
execute; rather, it is the number of times each statement of the logic gets executed to produce the required
output.

In simpler terms, here is what we mean. Consider the code below.

#include <stdio.h>

int main()
{
    int i, n = 5;

    for (i = 1; i <= n; i++) {
        printf("FACE Prep\n");
    }

    return 0;
}
So, when the above code is compiled and run, the compiler reports that it took 1.166 secs to
execute. A lot of us assume this figure to be the time complexity, but it isn't.
Rather, the time complexity of this code depends on the number of times its statements get
executed. Here, the for loop runs 'n' times, and hence the complexity is O(n).
Now, what is this ‘O’ in O(n)?


Why do you need to Calculate Time Complexity?
A lot of times, there is more than one way to solve a particular problem, and in such cases you
should be able to identify the most efficient solution of them all. This is where time complexity
comes into the picture: it helps you compare the performance of different solutions and thus
determine the most efficient one.
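For instance, the first n natural numbers can be summed with a loop (n iterations) or with the
closed-form formula n * (n + 1) / 2 (a constant number of operations). The sketch below is purely
illustrative and not part of the original programs; the function names are made up.

#include <stdio.h>

/* Illustrative sketch (names made up): two ways to sum 1..n. */

/* O(n): the loop body runs n times. */
long sum_loop(long n)
{
    long sum = 0, i;
    for (i = 1; i <= n; i++)
        sum += i;
    return sum;
}

/* O(1): a fixed number of operations, whatever n is. */
long sum_formula(long n)
{
    return n * (n + 1) / 2;
}

int main()
{
    long n = 100;
    printf("%ld %ld\n", sum_loop(n), sum_formula(n));  /* both print 5050 */
    return 0;
}

Both functions give the same answer, but the second one does not grow slower as n grows, which is
exactly what time complexity lets you see before you run anything.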
How do you Calculate the Time Complexity of an Algorithm?

Time complexity can be expressed using the three notations below, called asymptotic notations.

● Big Theta (Θ)
● Big Oh (O)
● Big Omega (Ω)

But most of the time, we will use the Big O notation because it gives us an upper limit on the
execution time, i.e. the execution time for worst-case inputs. Also, an algorithm's running time
may vary among different inputs of the same size, and hence we consider the worst-case complexity,
which is the maximum amount of time required for inputs of a given size.
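As an illustration (this example and the function name are assumptions, not from the article),
consider a linear search: if the key happens to sit in the first cell, the loop stops after one
comparison, but in the worst case it makes n comparisons, so its Big O complexity is O(n).

#include <stdio.h>

/* Sketch: linear search. Best case 1 comparison, worst case n comparisons,
   so the worst-case (Big O) complexity is O(n). */
int linear_search(int arr[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)
        if (arr[i] == key)
            return i;   /* found early: fewer than n comparisons */
    return -1;          /* worst case: all n elements were checked */
}

int main()
{
    int a[] = {4, 8, 15, 16, 23, 42};
    printf("%d\n", linear_search(a, 6, 42));  /* last element: worst case */
    return 0;
}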
Program 1
#include <stdio.h>
int main()
{
    int a = 4;
    int b = 6;
    int c;
    c = a + b;
    printf("%d", c);
    return 0;
}
Time Complexity Calculation: The time complexity of the above program is O(1), as it consists only
of assignment and arithmetic operations, all of which are executed exactly once.
Program 2
int count(int arr[], int n)
{
    int sum = 0, i;

    for (i = 0; i < n; i++)    // control statement, runs n times
        sum = sum + arr[i];

    return sum;
}
Time Complexity Calculation: In the above snippet, we have a control statement that executes
n times. Along with that, we also have an assignment, an arithmetic operation and a return
statement, each of which executes a constant number of times. Hence, the time complexity works
out to O(n + 3).

For larger values of n, the constant terms become negligible. So if a program contains a control
statement, the contributions of the assignment, arithmetic, logical and return statements can be
ignored.

Hence, the final time complexity of the above-given snippet is O(n).
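To see why the constant becomes negligible, the small sketch below (illustrative only, not part of
the original programs) prints the ratio (n + 3) / n for growing n; it approaches 1, so dropping
the + 3 changes nothing for large inputs.

#include <stdio.h>

/* Sketch: the exact count n + 3 versus n alone, for growing n. */
int main()
{
    long sizes[] = {10, 1000, 1000000};
    int k;
    for (k = 0; k < 3; k++) {
        long n = sizes[k];
        printf("n = %ld: n + 3 = %ld, ratio = %f\n",
               n, n + 3, (double)(n + 3) / n);
    }
    return 0;
}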


Program 3
int i, j, n = 8;

for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j++) {
        printf("FACE Prep");
    }
}
Time Complexity Calculation: In the above snippet, the outer for loop runs n times, and for each of
its iterations the inner for loop also runs n times. So the time complexity works out to n * n = O(n²).
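To verify the n * n count, here is a small sketch (added for illustration, not part of the original
program) that replaces the printf with a counter and prints the total.

#include <stdio.h>

/* Sketch: count how many times the innermost statement of Program 3 runs. */
int main()
{
    int i, j, n = 8, count = 0;
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++)
            count++;            /* stands in for the printf call */
    printf("inner statement executed %d times (n * n = %d)\n", count, n * n);
    return 0;
}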
Program 4
/* low, high, mid, n and arr are assumed to be declared and initialised elsewhere */
while (low <= high)
{
    mid = (low + high) / 2;
    if (n < arr[mid])
        high = mid - 1;
    else if (n > arr[mid])
        low = mid + 1;
    else
        break;
}
Time Complexity Calculation: This is the binary search algorithm. It splits the given (sorted) set of
elements into two halves and searches only the half that can contain the element. It keeps halving
the remaining range in this way until the element is found or only one element is left. All
algorithms that work on this principle of repeatedly dividing the input into halves have O(log n)
complexity.
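Program 4 is only a fragment, so for completeness here is a minimal self-contained sketch of the
same idea; the function name, signature and sample array are illustrative, not from the article.

#include <stdio.h>

/* Sketch: iterative binary search over a sorted array.
   Each iteration halves the search range, giving O(log n) comparisons. */
int binary_search(int arr[], int size, int n)
{
    int low = 0, high = size - 1, mid;
    while (low <= high) {
        mid = (low + high) / 2;
        if (n < arr[mid])
            high = mid - 1;     /* n can only be in the left half */
        else if (n > arr[mid])
            low = mid + 1;      /* n can only be in the right half */
        else
            return mid;         /* found */
    }
    return -1;                  /* not present */
}

int main()
{
    int a[] = {2, 5, 8, 12, 16, 23, 38, 56};
    printf("%d\n", binary_search(a, 8, 23));  /* prints 5 */
    return 0;
}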
