Asymptotic analysis is not perfect, but it is the best available way to analyze algorithms. For example, if two sorting algorithms take 1000nLogn and 2nLogn time respectively on a machine, both are asymptotically the same (their order of growth is nLogn). With asymptotic analysis we cannot judge which one is better, because constants are ignored. Moreover, asymptotic analysis only speaks about input sizes larger than some constant value. It may well be that such large inputs are never given to your software, in which case an algorithm that is asymptotically slower may always perform better for your particular situation, and choosing the asymptotically slower but practically faster algorithm can be the right decision.
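To make this concrete, here is a small sketch (the cost functions and constants are illustrative, chosen to match the 1000nLogn / 2nLogn example above) showing both effects: a 500x constant factor that asymptotic analysis hides, and an asymptotically slower algorithm winning on small inputs.

```python
import math

def cost_a(n):
    # e.g. a linearithmic algorithm with a large constant: 1000 * n * log2(n)
    return 1000 * n * math.log2(n)

def cost_b(n):
    # same order of growth, much smaller constant: 2 * n * log2(n)
    return 2 * n * math.log2(n)

def cost_c(n):
    # asymptotically slower (quadratic): 5 * n^2
    return 5 * n * n

# Same order of growth, yet a 500x difference hidden by asymptotic analysis.
print(cost_a(1024) / cost_b(1024))      # 500.0

# For a small input the quadratic algorithm is cheaper...
print(cost_c(8) < cost_a(8))            # True: 320 < 24000

# ...but for a large input the quadratic term dominates.
print(cost_c(10**6) < cost_a(10**6))    # False
```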
Most of the time, we do worst-case analysis to analyze algorithms. In worst-case analysis, we guarantee an upper bound on the running time of an algorithm, which is useful information. In most practical cases, average-case analysis is not easy to do and is rarely done: it requires knowing (or predicting) the mathematical distribution of all possible inputs. Best-case analysis is bogus, because guaranteeing a lower bound on an algorithm provides no useful guarantee; in the worst case, the algorithm may still take years to run.
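As an illustration of how the cases differ for one algorithm, consider a minimal linear search (the function and variable names here are ours, for illustration):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = [3, 7, 1, 9, 4]

# Best case: the target is the first element -> one comparison, O(1).
print(linear_search(data, 3))   # 0

# Worst case: the target is absent -> n comparisons, O(n).
print(linear_search(data, 8))   # -1
```

Worst-case analysis tells us the search never takes more than n comparisons, which is the kind of guarantee the paragraph above describes; the best case tells us nothing useful about other inputs.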
Asymptotic Notations
The main idea behind asymptotic analysis is to measure the efficiency of algorithms in a way that does not depend on machine-specific constants and does not require the algorithms to be implemented and their running times compared. Asymptotic notations are mathematical tools for representing the time complexity of algorithms in asymptotic analysis. In the analysis of algorithms, the following three asymptotic notations are most often used to represent the time complexity of algorithms.
1. Theta Notation, θ (Average case)
The notation Θ(n) is the formal way to express both a lower bound and an upper bound on an algorithm's running time. Because it bounds a function from above and below, it defines the exact asymptotic behavior; it is commonly used to state the average-case time an algorithm takes to complete. A simple way to obtain the Θ notation of an expression is to drop the low-order terms and ignore the leading constant. It is represented as follows –

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and
            n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n)
            for all n >= n0 }
2. Big O Notation, Ο (Worst case)
The notation Ο(n) is the formal way to express an upper bound on an algorithm's running time; it bounds a function only from above. It measures the worst-case time complexity, that is, the longest time an algorithm can possibly take to complete.
For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2). Note that O(n^2) also covers linear time. If we use Θ notation to represent the time complexity of Insertion Sort, we have to use two statements for the best and worst cases:
i. The worst-case time complexity of Insertion Sort is Θ(n^2).
ii. The best case time complexity of Insertion Sort is Θ(n).
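The two statements above can be checked empirically by counting comparisons in an instrumented Insertion Sort (the counting instrumentation is ours; the algorithm is the standard one):

```python
def insertion_sort(arr):
    """Sort arr in place; return the number of element comparisons made."""
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if arr[j] <= key:
                break          # already in position: best case stops here
            arr[j + 1] = arr[j]  # shift the larger element right
            j -= 1
        arr[j + 1] = key
    return comparisons

# Best case (already sorted): one comparison per element -> Θ(n).
print(insertion_sort(list(range(8))))       # 7

# Worst case (reverse sorted): 1 + 2 + ... + (n-1) = n(n-1)/2 -> Θ(n^2).
print(insertion_sort(list(range(8, 0, -1))))  # 28
```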
The Big O notation is useful when we only have an upper bound on the time complexity of an algorithm, and most of the time we can easily find an upper bound simply by looking at the algorithm. It is represented as follows –
O(g(n)) = { f(n): there exist positive constants c and
            n0 such that 0 <= f(n) <= c*g(n)
            for all n >= n0 }
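The definition can be exercised with concrete witnesses. Taking, for illustration, f(n) = 3n^2 + 2n + 5 and g(n) = n^2, the constants c = 4 and n0 = 4 (chosen by hand for this example) satisfy the inequality:

```python
def f(n):
    # example running-time function (ours): 3n^2 + 2n + 5
    return 3 * n * n + 2 * n + 5

def g(n):
    return n * n

c, n0 = 4, 4  # hand-picked witnesses for this f and g

# The definition requires 0 <= f(n) <= c*g(n) for every n >= n0.
print(all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000)))  # True

# The choice of n0 matters: the inequality fails just below it.
print(f(3) <= c * g(3))  # False: 38 > 36
```

So f(n) = O(n^2) with these witnesses; dropping the low-order terms and the leading constant, as described earlier, predicts exactly this.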