
Module 3: Complexity of an Algorithm (Week 2)

3.1 Measuring the Complexity of Algorithms


Computational complexity, or simply the complexity of an algorithm, is a
measure of the amount of time and/or space required by an algorithm for an
input of a given size. It evaluates the order of the number of operations
executed by an algorithm as a function of the input data size. To assess
complexity, the order (an approximation) of the operation count is considered
instead of counting the exact steps.

The analysis of algorithms is the process of finding the computational
complexity of algorithms.

Usually, this involves determining a function that relates the length of an
algorithm's input to the number of steps it takes (its time complexity) or the
number of storage locations it uses (its space complexity). An algorithm is said
to be efficient when this function's values are small, or grow slowly compared
to the growth in the size of the input.

Suppose X is an algorithm and n is the size of the input data. The time and
space used by the algorithm X are the two main factors that decide the
efficiency of X.
(a) Time Complexity: Time is measured by counting the number of key
operations, such as comparisons in a sorting algorithm.
(b) Space Complexity: Space is measured by counting the maximum
memory space required by the algorithm.

The complexity of an algorithm, f(n), gives the running time and/or the storage
space required by the algorithm in terms of n, the size of the input data.

3.2 Types of Algorithm Analysis


Algorithm analysis deals with the execution or running time of the various
operations involved in the algorithm. The running time of an operation can be
defined as the number of computer instructions executed per operation. The
efficiency of an algorithm can be analyzed at two different stages: before
implementation and after implementation. These are the following:

(a) A Priori Analysis: This is a theoretical analysis of an algorithm. The
efficiency of an algorithm is measured by assuming that all other
factors, for example, processor speed, are constant and have no effect
on the implementation.

(b) A Posteriori Analysis: This is an empirical analysis of an algorithm. The
selected algorithm is implemented in a programming language and then
executed on a target machine. In this analysis, actual statistics, such as
running time and space required, are collected.

Usually, there are three types of algorithm analysis:

(a) Best Case: Minimum time required for program execution.
(b) Average Case: Average time required for program execution.
(c) Worst Case: Maximum time required for program execution.
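
As an illustration (the linear_search function and the sample data below are
assumptions made for this example, not part of the module), a short Python
sketch showing how linear search exhibits all three cases:

def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:
            return i          # found: stop early
    return -1                 # not found: every element was examined

data = [7, 3, 9, 1, 5]

# Best case: the target is the first element -> 1 comparison.
linear_search(data, 7)

# Average case: the target sits somewhere in the middle -> about n/2 comparisons.
linear_search(data, 9)

# Worst case: the target is absent -> all n comparisons.
linear_search(data, 4)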

Rate of growth is defined as the rate at which the running time of the
algorithm increases as the input size increases. The growth rate could be
categorized into two types: linear and exponential. If the running time of the
algorithm increases linearly with the input size, it has a linear growth rate;
if the running time increases exponentially with the increase in input size, it
has an exponential growth rate.

3.3 Analytical Tools and Features


The efficiency of an algorithm depends on the amount of time, storage and
other resources required to execute the algorithm. The efficiency is measured
with the help of asymptotic notations. An algorithm may not have the same
performance for different types of inputs, and as the input size increases, the
performance will change. The study of the change in performance of the
algorithm with the change in the order of the input size is called asymptotic
analysis.

Asymptotic notations are the mathematical notations used to describe the
running time of an algorithm when the input tends towards a particular or
limiting value. For example, in bubble sort, when the input array is already
sorted, the time taken by the algorithm is linear, i.e. the best case. But when
the input array is in reverse order, the algorithm takes the maximum
(quadratic) time to sort the elements, i.e. the worst case. When the input
array is neither sorted nor in reverse order, it takes average time. These
durations are denoted using asymptotic notations.

Asymptotic analysis refers to computing the running time of any operation in
mathematical units of computation. For example, the running time of one
operation may be computed as f(n) = n while, for another operation, it is
computed as g(n) = n². This means the running time of the first operation will
increase linearly with the increase in n, while the running time of the second
operation will increase quadratically as n increases. For sufficiently small n,
however, the running times of both operations will be nearly the same.
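
A quick numeric sketch of this contrast (the values of n below are chosen
arbitrarily for illustration):

# Compare f(n) = n (linear) with g(n) = n * n (quadratic) as n grows.
for n in (1, 2, 10, 100, 1000):
    print(f"n={n:>5}  f(n)={n:>5}  g(n)={n * n:>8}")

# For n = 2 the two counts are close (2 vs 4), but for n = 1000 they
# differ by a factor of 1000 (1000 vs 1000000).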

There are mainly three asymptotic notations:
✓ Big-O notation
✓ Omega notation
✓ Theta notation

3.3.1 Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an
algorithm. Thus, it gives the worst-case complexity of an algorithm.

[Figure: Big-O gives the upper bound of a function]

For a function g(n), O(g(n)) is given by the relation:

O(g(n)) = { f(n): there exist positive constants c and n0
            such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be read as: a function f(n) belongs to the set
O(g(n)) if there exists a positive constant c such that f(n) lies between 0 and
cg(n) for sufficiently large n. For all n ≥ n0, the running time of the
algorithm does not exceed the bound given by O(g(n)).

Since it gives the worst-case running time of an algorithm, it is widely used
to analyze an algorithm, as we are always interested in the worst-case
scenario.
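
As a minimal sketch (the functions f and g and the witnesses c = 4 and
n0 = 10 below are chosen for this example, not taken from the module), the
definition can be checked numerically: f(n) = 3n + 10 is O(n).

def f(n):
    return 3 * n + 10      # example running-time function

def g(n):
    return n               # candidate bound: g(n) = n

c, n0 = 4, 10              # witnesses for the definition

# Verify 0 <= f(n) <= c * g(n) for a range of n >= n0.
assert all(0 <= f(n) <= c * g(n) for n in range(n0, 10_000))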

3.3.2 Omega Notation (Ω-notation)


Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best-case complexity of an algorithm.

[Figure: Omega gives the lower bound of a function]

For a function g(n), Ω(g(n)) is given by the relation:
Ω(g(n)) = {f(n): there exist positive constants c and n0
such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0}

The above expression can be read as: a function f(n) belongs to the set
Ω(g(n)) if there exists a positive constant c such that f(n) lies above cg(n)
for sufficiently large n. For all n ≥ n0, the minimum time required by the
algorithm is given by Ω(g(n)).

3.3.3 Theta Notation (Θ-notation)


Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an algorithm,
it is used for analyzing the average-case complexity of an algorithm.

For a function g(n), Θ(g(n)) is given by the relation:

Θ(g(n)) = { f(n): there exist positive constants c1, c2 and n0
            such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }

[Figure: Theta bounds the function within constant factors]

The above expression can be read as: a function f(n) belongs to the set
Θ(g(n)) if there exist positive constants c1 and c2 such that it can be
sandwiched between c1g(n) and c2g(n) for sufficiently large n. If a function
f(n) lies anywhere between c1g(n) and c2g(n) for all n ≥ n0, then g(n) is said
to be an asymptotically tight bound for f(n).
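
Continuing the same illustrative style (the constants c1 = 3, c2 = 4 and
n0 = 10 are chosen for this example), f(n) = 3n + 10 is also Θ(n), since it
can be sandwiched between 3n and 4n:

def f(n):
    return 3 * n + 10

def g(n):
    return n

c1, c2, n0 = 3, 4, 10      # example witnesses for the Theta definition

# Verify 0 <= c1*g(n) <= f(n) <= c2*g(n) for a range of n >= n0.
assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))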

Commonly used asymptotic notations are shown in the table below:

Type          Big-O notation
Constant      O(1)
Logarithmic   O(log n)
Linear        O(n)
N log n       O(n log n)
Quadratic     O(n²)
Cubic         O(n³)
Polynomial    n^O(1)
Exponential   2^O(n)

3.4 Space Complexities and Analysis


Space complexity of an algorithm represents the amount of memory space
required by the algorithm in its life cycle. The space required by an algorithm
is equal to the sum of the following two components:
(a) A fixed part, that is, the space required to store certain data and
variables that are independent of the size of the problem. For example,
simple variables and constants used, program size, etc.
(b) A variable part, that is, the space required to store variables, whose size
depends on the size of the problem. For example, dynamic memory
allocation, recursion stack space, etc.

Space complexity S(P) of an algorithm P is S(P) = C + SP(I), where C is the fixed
part and SP(I) is the variable part of the algorithm, which depends on instance
characteristic I.
The following simple example illustrates the concept:

Algorithm: SUM(A, B)
Step 1 - START
Step 2 - C ← A + B + 10
Step 3 - STOP

Here we have three variables, A, B, and C, and one constant, 10. Hence
S(P) = 1 + 3. The actual space depends on the data types of the given
variables and constants, and the unit count is multiplied accordingly.
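
A small Python sketch (illustrative only; the function names are assumptions
for this example) contrasting a fixed-space and a variable-space version of
summing a list:

def sum_constant_space(A):
    # Fixed part only: total and x occupy the same space regardless
    # of len(A), so the auxiliary space is constant, O(1).
    total = 0
    for x in A:
        total += x
    return total

def sum_linear_space(A):
    # Variable part: prefix stores a running total for every element,
    # so the auxiliary space grows with n, i.e. O(n).
    prefix = []
    running = 0
    for x in A:
        running += x
        prefix.append(running)
    return prefix[-1] if prefix else 0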

3.5 Time Complexities Analysis
Time complexity of an algorithm represents the amount of time required by
the algorithm to run to completion. Time requirements can be defined as a
numerical function T(n), where T(n) can be measured as the number of steps,
provided each step consumes constant time. For example, addition of two n-
bit integers takes n steps. Consequently, the total computational time is T(n)
= c ∗ n, where c is the time taken for the addition of two bits. Here, we observe
that T(n) grows linearly as the input size increases.
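
A minimal sketch of this idea (the bit-list representation, least significant
bit first, is an assumption made for the example): each of the n bit positions
is processed exactly once, so the step count is c · n.

def add_n_bit(a, b):
    """Add two n-bit integers given as lists of bits (least significant first).
    Each loop iteration does a constant amount of work, so T(n) = c * n."""
    n = len(a)
    result, carry = [], 0
    for i in range(n):                     # exactly n steps
        s = a[i] + b[i] + carry
        result.append(s % 2)
        carry = s // 2
    result.append(carry)                   # possible final carry bit
    return result

# 3-bit example: 5 (101) + 3 (011) = 8 (1000), least significant bit first.
add_n_bit([1, 0, 1], [1, 1, 0])            # -> [0, 0, 0, 1]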

3.6 Algorithmic Trade-offs (Resources)


A trade-off is a situation where one thing increases while another decreases.
An algorithmic trade-off is a way to solve a problem either in less time by
using more space, or in very little space by spending a longer amount of time.
The best algorithm is one that solves a problem using less space in memory
while also taking less time to generate the output. In general, however, it is
not always possible to achieve both of these conditions at the same time.
Types of space-time trade-offs include the following:

(a) Compressed or uncompressed data: A space-time trade-off can be
applied to the problem of data storage. If data is stored uncompressed,
it takes more space but less time. If the data is stored compressed, it
takes less space but more time, since the decompression algorithm must
be run. There are, however, instances where it is possible to work
directly with compressed data; this is the case with compressed bitmap
indices, where it is faster to work with compression than without it.
(b) Re-rendering or stored images: In this case, storing only the source and
re-rendering it as an image each time takes less space but more time;
storing the rendered image in a cache is faster than re-rendering but
requires more space in memory.
(c) Smaller code or loop unrolling: Smaller code occupies less space in
memory but requires more computation time, due to the jump back to
the beginning of the loop at the end of each iteration. Loop unrolling
can optimize execution speed at the cost of increased binary size: it
occupies more space in memory but requires less computation time.
(d) Lookup tables or recalculation: An implementation can include an entire
lookup table, which reduces computing time but increases the amount of
memory needed. Alternatively, it can recalculate, i.e., compute table
entries as needed, increasing computing time but reducing memory
requirements (see the sketch after this list).
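
The following Python sketch (factorial is used here purely as an illustrative
example; the names are assumptions) contrasts the two approaches:

import math

# Lookup table: precompute factorials once (more memory, faster queries).
FACT_TABLE = [math.factorial(i) for i in range(21)]

def factorial_lookup(n):
    return FACT_TABLE[n]           # O(1) time per query, O(table size) memory

# Recalculation: compute on demand (no table, more time per query).
def factorial_recalc(n):
    result = 1
    for i in range(2, n + 1):      # O(n) time per query, O(1) extra memory
        result *= i
    return result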

3.7 Assignment on Time and Space Complexities
