
ASYMPTOTIC ANALYSIS

Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time performance. Using asymptotic analysis, we can conclude the best case, average case, and worst case scenarios of an algorithm. Asymptotic analysis is input bound, i.e., if there is no input to the algorithm, it is concluded to work in constant time. Other than the input, all other factors are treated as constant.

We need to evaluate/measure the performance of an algorithm so that important qualities such as user friendliness, modularity, security, and maintainability can be taken care of. To achieve these features, performance is key. The reason we must study performance is speed. For example, if we have a text editor that can load 1000 pages but can only spell-check one page per minute, or an image editor that takes 1 hour to rotate an image 90° left, such software is inefficient and its performance will be rated very low. A software feature that cannot cope with the scale of task users need to perform is as good as dead.
When we have two algorithms for a given task, how do we find out which of them is better?
A naïve way of doing this is to implement both algorithms and run the two programs on a
computer for different inputs to see which of them takes less time. This approach is faced with
the following problems:
1. For some inputs, the first algorithm may perform better than the second, while for other inputs the second may perform better.
2. It is also possible that for some inputs the first algorithm performs better on one machine, while for some other inputs the second algorithm performs better on another machine.
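
To see the naïve approach in action, here is a minimal Python timing sketch. The two functions, algorithm_a and algorithm_b, are placeholders chosen for illustration (a built-in sort versus a simple selection sort), not algorithms from this text:

import random
import time

def algorithm_a(data):
    # placeholder: Python's built-in sort (Timsort)
    return sorted(data)

def algorithm_b(data):
    # placeholder: simple selection sort
    data = list(data)
    for i in range(len(data)):
        m = min(range(i, len(data)), key=data.__getitem__)
        data[i], data[m] = data[m], data[i]
    return data

for n in (100, 1000, 5000):
    data = [random.randint(0, 10**6) for _ in range(n)]
    for algo in (algorithm_a, algorithm_b):
        start = time.perf_counter()
        algo(data)
        elapsed = time.perf_counter() - start
        print(f"n={n:5d}  {algo.__name__}: {elapsed:.6f} s")

The numbers printed depend entirely on the machine, its current load, and the particular inputs chosen, which is exactly why this approach is unreliable.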
The issues above can be handled using asymptotic analysis. In asymptotic analysis, we evaluate the performance of an algorithm in terms of input size (we do not measure the actual running time); we calculate how the time (or space) taken by an algorithm increases with the input size. For example, consider the problem of searching for a given item in a sorted array. One way to search is Linear Search (order of growth is linear) and the other is Binary Search (order of growth is logarithmic). Suppose we run Linear Search on a fast Computer A and Binary Search on a slow Computer B, and we pick constant factors for the two machines that tell us exactly how long each takes to perform the search, in seconds. Say the constant for A is 0.2 and the constant for B is 1000; this means Computer A is 5000 times more powerful than Computer B.
For small values of the input array size n, the fast computer may take less time, but after a certain input size, Binary Search will start taking less time than Linear Search even though Binary Search is running on the slow machine. The reason is that the order of growth of Binary Search with respect to input size is logarithmic while that of Linear Search is linear, so machine-dependent constants can always be ignored after a certain input size.
Example running times:
Linear Search running time in secs on Computer A: 0.2*n
Binary Search running time in secs on Computer B: 1000*log(n)

n          Running time on A    Running time on B
10         2 secs               1 hr
100        20 secs              1.8 h
10^6       55.5 h               5.5 h
10^9       ~6.3 years           8.3 h
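
These figures can be checked directly from the two formulas above. The following Python sketch recomputes the table; the unit-formatting helper fmt is our own, added only for readable output:

import math

def fmt(seconds):
    # render a duration in the most readable unit (output helper only)
    if seconds < 60:
        return f"{seconds:.0f} secs"
    if seconds < 3600 * 24 * 30:
        return f"{seconds / 3600:.1f} h"
    return f"{seconds / (3600 * 24 * 365):.1f} years"

for n in (10, 100, 10**6, 10**9):
    a = 0.2 * n                 # Linear Search on fast Computer A
    b = 1000 * math.log2(n)     # Binary Search on slow Computer B
    print(f"n = {n:>10}: A takes {fmt(a)}, B takes {fmt(b)}")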

Asymptotic analysis is not always perfect, but it is the best way available for analyzing algorithms. For example, if we have two sorting algorithms that take 1000*n*log(n) and 2*n*log(n) time respectively on a machine, both are asymptotically the same (order of growth is n log n), and with asymptotic analysis we cannot judge which is better, because we ignore constants. Moreover, in asymptotic analysis we always talk about input sizes larger than a constant value; it may be that such large inputs are never given to your software, and an algorithm that is asymptotically slower always performs better for your particular situation. In that case, an algorithm that is asymptotically slower but faster for your software may be chosen.

Worst case, Average case and Best Case


Earlier, we discussed how asymptotic analysis overcomes the problems of the naïve way of analyzing algorithms. Let us now take Linear Search as an example of analyzing an algorithm using asymptotic analysis. There are three cases in which to analyze an algorithm: the Worst case, the Average case, and the Best case.
1. Worst case Analysis (usually done): Here, we calculate the upper bound on the running time of an algorithm. We must know the case that causes the maximum number of operations to be executed. In Linear Search, the worst case happens when the element to be searched (x in the code) is not present in the array. When x is not present, the search() function compares it with all the elements of arr[] one by one. Hence, the worst-case time complexity of Linear Search is O(n). (A code sketch follows this list.)
2. Average case Analysis (sometimes done): In this method, we take all possible inputs and calculate the computing time for each of them; we then sum all the calculated values and divide the sum by the total number of inputs. We must know (or predict) the distribution of cases. Taking the Linear Search problem as an example, if we assume that all cases are uniformly distributed (including the case of x not being present in the array), we can sum the costs of all the cases and divide the sum by (n+1). The average-case time complexity is then:
Average Case Time = (Σ from i = 1 to n+1 of θ(i)) / (n+1)
                  = θ((n+1)*(n+2)/2) / (n+1)
                  = θ(n)
3. Best Case Analysis (bogus): Here, we calculate the lower bound on the running time of an algorithm. We must know the case that causes the minimum number of operations to be executed. In the Linear Search problem, the best case occurs when x is present at the first location. In the best case, the number of operations is constant (not dependent on n), so the best-case time complexity is Θ(1).
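
As referenced in the list above, here is a minimal Python sketch of the search() function over arr[] that all three cases refer to; the example array and values are our own:

def search(arr, x):
    # return the index of x in arr, or -1 if x is not present
    for i, value in enumerate(arr):
        if value == x:
            return i
    return -1

arr = [10, 30, 15, 7, 22]
print(search(arr, 10))   # best case: x at the first location, Θ(1)
print(search(arr, 22))   # x at the last location: n comparisons
print(search(arr, 99))   # worst case: x not present, O(n) comparisons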

Most of the time, we do worst-case analysis to analyze algorithms. In the worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is good information. In most practical cases, the average-case analysis is not easy to do and is rarely done; the issue is that we must know (or predict) the mathematical distribution of all possible inputs. The best-case analysis is bogus: guaranteeing a lower bound on an algorithm does not provide any information, since in the worst case an algorithm may take years to run.

Asymptotic Notations
The main idea behind asymptotic analysis is to have a measure of the efficiency of algorithms that does not depend on machine-specific constants and does not require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are mathematical tools to represent the time complexity of algorithms for asymptotic analysis. In the analysis of algorithms, the following three asymptotic notations are often used to represent the time complexity of algorithms.
 
1. Theta Notation, θ (Average case)
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. It measures the average-case time complexity an algorithm can possibly take to complete. The θ notation bounds a function from above and below, so it defines the exact asymptotic behavior. A simple way to get the θ notation of an expression is to drop the low-order terms and ignore the leading constants.

For example, consider the following expression. 


3n^3 + 6n^2 + 6000 = Θ(n^3)
Dropping the lower-order terms is always fine because there will always be a value of n (n0) after which n^3 has higher values than n^2, irrespective of the constants involved.
For a given function g(n), we denote by Θ(g(n)) the following set of functions:
Θ(g(n)) = {f(n): there exist positive constants c1, c2 and n0 such
that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0}
The above definition means that if f(n) is theta of g(n), then the value of f(n) is always between c1*g(n) and c2*g(n) for large values of n (n >= n0). The definition of theta also requires that f(n) be non-negative for values of n greater than n0.
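
To make the definition concrete, the following Python sketch numerically checks the Θ(n^3) claim for the expression 3n^3 + 6n^2 + 6000 above. The constants c1 = 3, c2 = 4 and n0 = 25 are one valid choice we picked for illustration; they are not unique:

def f(n):
    return 3 * n**3 + 6 * n**2 + 6000   # the expression from the text

def g(n):
    return n**3

c1, c2, n0 = 3, 4, 25                   # illustrative constants, our choice
assert all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10000))
print("3*n^3 <= f(n) <= 4*n^3 holds for all tested n >= 25")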
 
2. Big Oh Notation, Ο (Worst case)

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time; it bounds a function only from above. It measures the worst-case time complexity, the longest amount of time an algorithm can possibly take to complete.
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2); note that O(n^2) also covers linear time. If we used the Θ notation to represent the time complexity of Insertion Sort, we would have to use two statements for the best and worst cases:
i. The worst-case time complexity of Insertion Sort is Θ(n^2).
ii. The best-case time complexity of Insertion Sort is Θ(n).
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm, and most times we can easily find an upper bound simply by looking at the algorithm. For a given function g(n), we denote by O(g(n)) the following set of functions:
O(g(n)) = { f(n): there exist positive constants c and
n0 such that 0 <= f(n) <= c*g(n) for
all n >= n0}
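
A minimal insertion sort sketch, with a comparison counter added purely for illustration, makes the O(n^2) worst case and Θ(n) best case visible:

def insertion_sort(arr):
    # sort a copy of arr and count key comparisons
    arr = list(arr)
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # key is compared against arr[j]
            if arr[j] <= key:
                break                 # key belongs just after arr[j]
            arr[j + 1] = arr[j]       # shift the larger element right
            j -= 1
        arr[j + 1] = key
    return arr, comparisons

print(insertion_sort([1, 2, 3, 4, 5]))   # sorted input: n-1 comparisons, Θ(n)
print(insertion_sort([5, 4, 3, 2, 1]))   # reversed input: n(n-1)/2, Θ(n^2)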

3. Omega Notation, Ω (Best case)

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, the least amount of time an algorithm can possibly take to complete. Just as the Big O notation provides an asymptotic upper bound on a function, the Ω (omega) notation provides an asymptotic lower bound.
The Ω notation can be useful when we have a lower bound on the time complexity of an algorithm. As discussed earlier, the best-case performance of an algorithm is generally not useful, so the Omega notation is the least used of the three.
For a given function g(n), we denote by Ω(g(n)) the following set of functions:
Ω (g(n)) = {f(n): there exist positive constants c and
n0 such that 0 <= c*g(n) <= f(n) for
all n >= n0}.
Let us consider the same Insertion Sort example here. The time complexity of Insertion Sort can be written as Ω(n), but this is not very useful information about Insertion Sort, as we are generally interested in the worst case and sometimes in the average case.

Common Asymptotic Notations


The following is a list of some common asymptotic notations:
Constant    - O(1)
Logarithmic - O(log n)
Linear      - O(n)
n log n     - O(n log n)
Quadratic   - O(n^2)
Cubic       - O(n^3)
Polynomial  - n^O(1)
Exponential - 2^O(n)
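
A short Python sketch can make the ordering of these growth rates visible by evaluating each one at a few values of n:

import math

rates = [
    ("O(1)",       lambda n: 1),
    ("O(log n)",   lambda n: math.log2(n)),
    ("O(n)",       lambda n: n),
    ("O(n log n)", lambda n: n * math.log2(n)),
    ("O(n^2)",     lambda n: n**2),
    ("O(n^3)",     lambda n: n**3),
    ("O(2^n)",     lambda n: 2**n),
]

for n in (10, 20, 40):
    print(f"n = {n}:")
    for name, f in rates:
        print(f"  {name:<10} {f(n):>16,.0f}")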
