The term "analysis of algorithms" was coined by Donald Knuth.[1] Algorithm analysis is an
important part of a broader computational complexity theory, which provides theoretical
estimates for the resources needed by any algorithm which solves a given computational
problem. These estimates provide an insight into reasonable directions of search for efficient
algorithms.
Usually, the efficiency or running time of an algorithm is stated as a function relating the input
length to the number of steps (time complexity) or storage locations (space complexity).
In theoretical analysis of algorithms it is common to estimate their
complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large
input. Big O notation, Big-omega notation and Big-theta notation are used to this end.
Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually
require certain assumptions concerning the particular implementation of the algorithm, called
model of computation. A model of computation may be defined in terms of an abstract
computer, e.g. Turing machine, and/or by postulating that certain operations are executed in
unit time. For example, if the sorted list to which we apply binary search has n elements, and
we can guarantee that each lookup of an element in the list can be done in unit time, then at
most log2(n) + 1 time units are needed to return an answer.
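This bound can be sketched in code. The following is a minimal binary search that counts its element lookups and checks them against the log2(n) + 1 bound; the function name and counting convention are illustrative, not taken from any particular library.

```python
import math

def binary_search(sorted_list, target):
    """Return (index, lookups), where lookups counts unit-time element accesses.

    Returns index -1 if target is absent.
    """
    lo, hi = 0, len(sorted_list) - 1
    lookups = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        lookups += 1  # one unit-time lookup per iteration
        if sorted_list[mid] == target:
            return mid, lookups
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, lookups

# Every search on a 1000-element list stays within log2(1000) + 1 lookups.
n = 1000
data = list(range(n))
for target in (0, 499, 999, -5):
    _, lookups = binary_search(data, target)
    assert lookups <= math.log2(n) + 1
```

The assertion holds because each iteration halves the remaining search range, so at most ⌈log2(n + 1)⌉ iterations can occur.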
Shortcomings of empirical metrics
Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose
this program were implemented on Computer A, a state-of-the-art machine, using a linear
search algorithm, and on Computer B, a much slower machine, using a binary search algorithm.
Benchmark testing on the two computers running their respective programs might look
something like the following:
n (list size)    Computer A run-time (ns)    Computer B run-time (ns)
16               8                           100,000
63               32                          150,000
Based on these metrics, it would be easy to jump to the conclusion that Computer A is running
an algorithm far superior in efficiency to that of Computer B. However, if the size of the
input list is increased sufficiently, that conclusion is dramatically demonstrated to be in error.
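Why the conclusion reverses can be sketched by modeling each machine's step cost. The per-step costs below are hypothetical assumptions chosen only to roughly match the benchmark figures, not measurements:

```python
import math

# Hypothetical per-step costs (assumptions, not part of the benchmark):
# Computer A executes one linear-search step in 0.5 ns;
# Computer B executes one binary-search step in 25,000 ns.
A_STEP_NS = 0.5
B_STEP_NS = 25_000

def linear_time_ns(n):
    return A_STEP_NS * n                   # linear search: up to n steps

def binary_time_ns(n):
    return B_STEP_NS * (math.log2(n) + 1)  # binary search: at most log2(n) + 1 steps

# For small lists, the fast machine running the slow algorithm wins...
assert linear_time_ns(16) < binary_time_ns(16)
# ...but a sufficiently large list reverses the conclusion.
assert linear_time_ns(10**9) > binary_time_ns(10**9)
```

The crossover happens because n eventually dwarfs log2(n) no matter how large the constant factor in front of the logarithm is.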
Orders of growth
Main article: Big O notation
Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical
function if beyond a certain input size n, the function f(n) times a positive constant provides an
upper bound or limit for the run-time of that algorithm. In other words, for a given input size n
greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c
× f(n). This concept is frequently expressed using Big O notation. For example, since the run-
time of insertion sort grows quadratically as its input size increases, insertion sort can be said to
be of order O(n²).
Big O notation is a convenient way to express the worst-case scenario for a given algorithm,
although it can also be used to express the average-case — for example, the worst-case scenario
for quicksort is O(n²), but the average-case run-time is O(n log n).
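The quadratic bound for insertion sort can be sketched by counting comparisons on the worst-case (reverse-sorted) input and checking them against c · n² with c = 1; the counting convention here is illustrative.

```python
def insertion_sort_comparisons(a):
    """Sort a copy of a, returning the number of element comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

# On reverse-sorted input (the worst case), comparisons grow quadratically:
for n in (10, 100, 1000):
    worst = insertion_sort_comparisons(range(n, 0, -1))
    assert worst == n * (n - 1) // 2  # exactly n(n-1)/2 comparisons here
    assert worst <= 1 * n * n         # so the run-time is O(n²) with c = 1
```

Each element i must be compared with all i preceding elements in the worst case, giving the n(n-1)/2 total.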
Cost models
Time efficiency estimates depend on what we define to be a step. For the analysis to correspond
usefully to the actual run-time, the time required to perform a step must be guaranteed to be
bounded above by a constant. One must be careful here; for instance, some analyses count an
addition of two numbers as one step. This assumption may not be warranted in certain contexts.
For example, if the numbers involved in a computation may be arbitrarily large, the time
required by a single addition can no longer be assumed to be constant.
Two cost models are generally used:

- the uniform cost model, also called uniform-cost measurement (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
- the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved
The latter is more cumbersome to use, so it's only employed when necessary, for example in the
analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
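The difference between the two models can be sketched with illustrative cost functions; the names and cost conventions below are assumptions for the sketch, not a standard API.

```python
def uniform_cost(x, y):
    """Uniform model: every addition costs 1, whatever the operand sizes."""
    return 1

def logarithmic_cost(x, y):
    """Logarithmic model: cost proportional to the operands' bit length."""
    return max(x.bit_length(), y.bit_length())

# Under the uniform model, adding two small numbers and adding two
# 4096-bit numbers cost the same; under the logarithmic model the
# big-number addition is over a thousand times more expensive.
assert uniform_cost(2**4096, 3) == uniform_cost(3, 5)
assert logarithmic_cost(2**4096, 3) > 1000 * logarithmic_cost(3, 5)
```

This is why analyses of arbitrary-precision arithmetic must charge per bit: counting a 4096-bit addition as one step would understate its true cost.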
A key point which is often overlooked is that published lower bounds for problems are often
given for a model of computation that is more restricted than the set of operations that could
be used in practice; therefore, there are algorithms that are faster than what would naively
be thought possible.
Empirical orders of growth
Assuming the run-time follows the power rule, t ≈ k·n^a, the coefficient a can be found [8] by taking
empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating
t2/t1 = (n2/n1)^a, so that a = log(t2/t1)/log(n2/n1). In other words, this measures the slope of the
empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of
growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line),
the empirical value of a will stay constant at different ranges; if not, it will change (and the
line is a curved line), but it could still serve for comparison of any two given algorithms as to
their empirical local orders of growth behaviour. Applied to the above table:
n (list size)    Computer A run-time (ns)    Local order of growth (n^a)    Computer B run-time (ns)    Local order of growth (n^a)
15               7                                                          100,000
It is clearly seen that the first algorithm indeed exhibits a linear order of growth, following the
power rule. The empirical values for the second one diminish rapidly, suggesting it follows
another rule of growth and in any case has much lower local orders of growth (and is
improving further still), empirically, than the first one.
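The power-rule slope just described can be computed directly from the benchmark figures in the earlier table; this is a minimal sketch of that calculation.

```python
import math

def empirical_order(n1, t1, n2, t2):
    """Slope of the run-time curve on a log-log plot: a in t ≈ k * n^a."""
    return math.log(t2 / t1) / math.log(n2 / n1)

# Computer A (linear search): run-times 8 ns at n=16 and 32 ns at n=63.
a_A = empirical_order(16, 8, 63, 32)
assert 0.9 < a_A < 1.1   # close to 1: linear order of growth

# Computer B (binary search): 100,000 ns at n=16 and 150,000 ns at n=63.
a_B = empirical_order(16, 100_000, 63, 150_000)
assert a_B < 0.5         # well below 1: sub-linear growth
```

A straight line on the log–log plot corresponds to a constant value of a; a curved line corresponds to a value of a that drifts as n changes, as happens for the logarithmic algorithm here.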
Analysis Types
Algorithm complexity can be analyzed for the best, average, or worst case, and the result can be
expressed using Big O notation. The best, worst, and average cases of a given algorithm express what
the resource usage is at least, at most, and on average, respectively. Big O notation simplifies the
comparison of algorithms.
Best Case
Best-case performance is used in computer science to describe an algorithm's behavior under optimal
conditions. An example of best-case performance would be trying to sort a list that is already sorted
using some sorting algorithm, e.g. [1,2,3] --> [1,2,3].
Average Case
Average-case performance is measured over typical inputs, i.e. inputs that are neither the best nor
the worst case for the problem. For example, a list in arbitrary order that you want sorted into a
certain order, e.g. [2,1,5,3] --> [1,2,3,5] or [2,1,5,3] --> [5,3,2,1].
Worst Case
Worst-case performance is used to analyze the algorithm's behavior under the input requiring the most
work to solve the problem. It determines when the algorithm will perform worst for the given inputs.
An example of worst-case performance would be a list of names already sorted in ascending order
that you want to sort in descending order, e.g. [Abby, Bill, Catherine] --> [Catherine, Bill, Abby].
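The three cases can be sketched with a simple linear search rather than sorting (a deliberate simplification: counting the elements inspected is easy to verify by hand).

```python
def linear_search_steps(data, target):
    """Return the number of elements inspected before finding target."""
    for steps, value in enumerate(data, start=1):
        if value == target:
            return steps
    return len(data)  # target absent: every element was inspected

data = [1, 2, 3, 4, 5, 6, 7, 8]
assert linear_search_steps(data, 1) == 1          # best case: first element
assert linear_search_steps(data, 8) == len(data)  # worst case: last element

# Average case: the mean over all possible targets is (n + 1) / 2 steps.
avg = sum(linear_search_steps(data, t) for t in data) / len(data)
assert avg == (len(data) + 1) / 2
```

So linear search is Θ(1) in the best case but Θ(n) in both the average and worst cases, which is why the three analyses can give genuinely different answers for one algorithm.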
Every algorithm gives an output based on some parameters, like the number of loops, sample
input size, and various others. In an experimental analysis, these data points are plotted on a
graph to understand the behavior of the algorithm. We consider the worst-case running times.
The graphs show the running time of an algorithm with increasing input size for worst,
average, and best-case running time in the form of a histogram and a plotted graph. We chose
the x-axis as the input size because the running time depends on the input size. As an
experimental analysis depends on the output results, an algorithm cannot be measured unless
an equivalent program is implemented.
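Such an experimental measurement can be sketched as follows; the workload and the input sizes are illustrative assumptions, and the resulting (size, time) pairs are exactly the data points one would plot.

```python
import timeit

def work(n):
    """Toy quadratic workload whose running time we measure empirically."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

# Measure run-time at increasing input sizes; these pairs are the
# data points an experimental analysis would plot against input size.
sizes = [50, 100, 200]
times = [timeit.timeit(lambda n=n: work(n), number=3) for n in sizes]

# The measured run-time should grow with the input size.
assert times[0] < times[2]
```

Note that, as the text says, this kind of analysis only becomes possible once a runnable program exists; the numbers also depend on the machine, so they describe this implementation, not the algorithm in the abstract.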
Algorithm theories represent the structure common to a class of algorithms, such as divide-and-conquer
or backtrack. An algorithm theory for a class provides the basis for design tactics—specialized methods
for designing algorithms from formal problem specifications.
The general stepwise procedure for Big-O runtime analysis is to identify the input size n, count the elementary operations the algorithm performs as a function of n, and then keep only the dominant (fastest-growing) term, dropping constant factors.
In computer science, the analysis of algorithms is the process of finding the computational complexity of
algorithms – the amount of time, storage, or other resources needed to execute them.
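Step-counting of this kind can be sketched for a trivial algorithm; the convention for what counts as one operation is an illustrative assumption, since any fixed convention yields the same Big-O class.

```python
def summed_with_step_count(values):
    """Sum a sequence while counting elementary operations."""
    steps = 1        # initialization: total = 0
    total = 0
    for v in values:
        total += v
        steps += 2   # one loop iteration + one addition
    steps += 1       # returning the result
    return total, steps

# T(n) = 2n + 2; the dominant term is n, so the algorithm is O(n).
for n in (10, 100, 1000):
    _, steps = summed_with_step_count(range(n))
    assert steps == 2 * n + 2
```

Changing the counting convention (say, charging 3 operations per iteration) changes only the constants in T(n), never the dominant term, which is exactly why Big O notation discards them.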