
Algorithm Efficiency

OBJECTIVES
• Efficient Algorithms: Why/How?
• Time & Space Constraints
• Approaches to determine Efficiency
• Order of growth and Growth rate
• Algorithm Complexity
• Big O Notation & Graphs
Good Algorithm
• What makes a good algorithm?
– It must work
– It must complete its task in a finite amount of time
Efficient Algorithm
• Two or more algorithms solving the same
problem can satisfy the criteria of being a good
algorithm
• To determine the best one, the efficiency of each
algorithm is calculated
Efficiency of algorithm
• There are generally two criteria used to determine
the efficiency of an algorithm
– Time requirements
– Space requirements
Because today's computers have vast amounts of memory,
both primary and secondary, the primary concern will be
time requirements.
The differences in the space requirements of most algorithms
are insignificant.
• Time/space Tradeoff
Determining Time requirements
• Approach 1: Write and run a program
Write the two programs and then run them with some type of
timing device.
Measure the running time with system timing utilities, for example
C's <time.h> facilities (see the sketch after this list):
#include <time.h>
CLOCKS_PER_SEC, clock(), clock_t

• Approach 2: Mathematical Analysis


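As a rough illustration of Approach 1 (a sketch added here, not part of the original slides): in C# the elapsed time of a run can be measured with System.Diagnostics.Stopwatch, which plays the role that clock() and CLOCKS_PER_SEC from <time.h> play in C. The timed loop is only placeholder work.

using System;
using System.Diagnostics;

class TimingDemo
{
    static void Main()
    {
        Stopwatch watch = Stopwatch.StartNew();   // start timing

        // Placeholder work: the algorithm being measured would go here.
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
            sum += i;

        watch.Stop();                             // stop timing
        Console.WriteLine("Result:  " + sum);
        Console.WriteLine("Elapsed: " + watch.ElapsedMilliseconds + " ms");
    }
}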
Problems with Approach 1
To measure the time efficiency of an algorithm, you can write a program
based on the algorithm, execute it, and measure the time it takes to run.
The execution time that you measure in this case would depend on a
number of factors such as:
Speed of the machine
Compiler
Operating system
Programming language
Input data
However, we would like to determine how the execution time is affected
by the nature of the algorithm.
Approach 2
• A more reliable approach to the analysis of the efficiency of
algorithms is to mathematically measure the speed of the
algorithm in terms of "time units."
• Different types of operations or statements may require
different time units.
• Some common time units that are often considered include:
– comparisons (most common)
– assignments (next most common)
– I/O operations
– numerical operations (i.e. additions, subtractions,
multiplications, ...)
Algorithm 1
Set I = 0            // 1 assignment   (a time units)
While (I < n):       // n comparisons  (b time units each)
    Display a[I]     // n writes       (c time units each)
    Increment I by 1 // n increments   (d time units each)

The execution time required for the preceding algorithm is given by:
T = a + b×n + c×n + d×n
T = a + n (b + c + d)
Here, T is the total running time of the algorithm expressed as a
linear function of the number of elements (n) in the array. From the
preceding expression, it is clear that T is directly proportional to n.
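A C# version of Algorithm 1 is sketched below (an illustration added here, mirroring the pseudocode above); each line corresponds to one of the counted operations.

using System;

class Algorithm1Demo
{
    // Mirrors Algorithm 1: 1 assignment, then n comparisons,
    // n writes and n increments for an array of n elements,
    // giving T = a + n(b + c + d).
    static void DisplayAll(int[] a)
    {
        int i = 0;                    // 1 assignment      (a time units)
        while (i < a.Length)          // n comparisons     (b time units each)
        {
            Console.WriteLine(a[i]);  // n writes          (c time units each)
            i++;                      // n increments      (d time units each)
        }
    }

    static void Main()
    {
        DisplayAll(new int[] { 3, 1, 4, 1, 5 });
    }
}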
Algorithm 2
1. Set I = 0               // 1 assignment
2. While (I < n):          // n comparisons
   a. Set J = 0            // n assignments
   b. While (J < n):       // n × n comparisons
      i. Display (a[I][J]) // n × n writes
Here the n × n comparisons and writes dominate, so the running time T
grows in proportion to n² (a quadratic order of growth).
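A C# version of Algorithm 2 is sketched below (an illustration added here; a is assumed to be an n × n array, and the loop-counter increments that the pseudocode leaves implicit are included).

using System;

class Algorithm2Demo
{
    // Mirrors Algorithm 2: the inner loop body executes n * n times,
    // so the running time grows in proportion to n².
    static void DisplayAll(int[,] a)
    {
        int n = a.GetLength(0);              // assumes a is n x n
        int i = 0;                           // 1 assignment
        while (i < n)                        // n comparisons
        {
            int j = 0;                       // n assignments
            while (j < n)                    // n × n comparisons
            {
                Console.WriteLine(a[i, j]);  // n × n writes
                j++;
            }
            i++;
        }
    }

    static void Main()
    {
        DisplayAll(new int[,] { { 1, 2 }, { 3, 4 } });
    }
}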
Efficiency using comparisons
• In most cases, just comparisons will be used
because a comparison (i.e. decision) is much
more costly (in terms of time) than an
assignment or calculation.
Growth rate of an algorithm
• The running time of an algorithm depends on the
amount of data (n) it must process
• If n is small, there is no significant difference in the
speed of the algorithms, so you could use whichever
algorithm you prefer.
• What happens to the speed of an algorithm as the
number of data items grows large?
Order of growth
• If an algorithm must handle n pieces of data, then
the time required will turn out to be proportional
to some function of n.
The number of time units is some constant
multiple of a function of the size of the data set,
that is:

time = k * f(n)

For Algorithm 1, for example, f(n) = n and (apart from
the initial constant a) k = b + c + d.
Growth rate of algorithms
The rate at which the running time of an
algorithm increases as a result of an increase in
the volume of input data is called the order of
growth of the algorithm.
The order of growth of an algorithm is defined by
using the Big O notation.
The Big O notation has been accepted as a
fundamental technique for describing the
efficiency of an algorithm.
Growth rate of algorithms contd.
The different orders of growth and their corresponding
big O notations are:
Constant - O(1)
Logarithmic - O(log n)
Linear - O(n)
Log linear - O(n log n)
Quadratic - O(n²)
Cubic - O(n³)
Exponential - O(2ⁿ), O(10ⁿ)
Selecting an Efficient Algorithm
According to their orders of growth, the big O notations can be arranged in
increasing order as:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ) < O(10ⁿ)
Graphs depicting orders of growth for various big O notations:
Big O Graphs
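The graphs themselves are not reproduced here; as a numeric stand-in (a sketch added here, not from the original slides), the snippet below prints a few of the growth functions for increasing n, which makes the ordering above visible.

using System;

class GrowthDemo
{
    static void Main()
    {
        // Print a few growth functions to show how quickly they diverge.
        foreach (int n in new[] { 10, 100, 1000 })
        {
            Console.WriteLine("n = " + n);
            Console.WriteLine("  log n   = " + Math.Log(n, 2));
            Console.WriteLine("  n log n = " + n * Math.Log(n, 2));
            Console.WriteLine("  n^2     = " + (long)n * n);
            Console.WriteLine("  n^3     = " + (long)n * n * n);
        }
    }
}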
Worst Case, Average Case and Best Case Complexity
• The worst-case complexity of an algorithm is the
maximum number of steps it takes on any instance
of size n.
• The average-case complexity of an algorithm is the
average number of steps it takes over all instances of size n.
• The best-case complexity of an algorithm is the
minimum number of steps it takes on any instance
of size n.
• Example: the linear search algorithm (see the sketch below)
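A minimal linear-search sketch in C# (added here; the slide only names the example): the best case is a single comparison (the key is at index 0), the worst case is n comparisons (the key is last or absent), and the average case, when the key is present, is roughly n/2 comparisons.

using System;

class LinearSearchDemo
{
    // Returns the index of key in a, or -1 if it is not present.
    static int LinearSearch(int[] a, int key)
    {
        for (int i = 0; i < a.Length; i++)
        {
            if (a[i] == key)   // best case: true on the very first comparison
                return i;
        }
        return -1;             // worst case: all n comparisons fail
    }

    static void Main()
    {
        int[] data = { 7, 2, 9, 4 };
        Console.WriteLine(LinearSearch(data, 7));   // 0  (best case)
        Console.WriteLine(LinearSearch(data, 5));   // -1 (worst case)
    }
}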
Time Complexity and Speed
Complexity   | n=10    | n=20  | n=50     | n=100 | n=1 000 | n=10 000 | n=100 000
O(1)         | <1 s    | <1 s  | <1 s     | <1 s  | <1 s    | <1 s     | <1 s
O(log n)     | <1 s    | <1 s  | <1 s     | <1 s  | <1 s    | <1 s     | <1 s
O(n)         | <1 s    | <1 s  | <1 s     | <1 s  | <1 s    | <1 s     | <1 s
O(n log n)   | <1 s    | <1 s  | <1 s     | <1 s  | <1 s    | <1 s     | <1 s
O(n²)        | <1 s    | <1 s  | <1 s     | <1 s  | <1 s    | 2 s      | 3-4 min
O(n³)        | <1 s    | <1 s  | <1 s     | <1 s  | 20 s    | 5 hours  | 231 days
O(2ⁿ)        | <1 s    | <1 s  | 260 days | hangs | hangs   | hangs    | hangs
O(n!)        | <1 s    | hangs | hangs    | hangs | hangs   | hangs    | hangs
O(nⁿ)        | 3-4 min | hangs | hangs    | hangs | hangs   | hangs    | hangs

Complexity Examples (1)
int FindMaxElement(int[] array)
{
int max = array[0];
for (int i=0; i<array.Length; i++)
{
if (array[i] > max)
{
max = array[i];
}
}
return max;
}
• Runs in O(n) where n is the size of the array
Complexity Examples (2)
long FindInversions(int[] array)
{
long inversions = 0;
for (int i=0; i<array.Length; i++)
for (int j = i+1; j<array.Length; j++)
if (array[i] > array[j])
inversions++;
return inversions;
}

• Runs in O(n²) where n is the size of the array
• The number of elementary steps is n*(n-1) / 2
Complexity Examples (3)
decimal Sum3(int n)
{
decimal sum = 0;
for (int a=0; a<n; a++)
for (int b=0; b<n; b++)
for (int c=0; c<n; c++)
sum += a*b*c;
return sum;
}

• Runs in cubic time O(n³)
• The number of elementary steps is n³
Complexity Examples (4)
long SumMN(int n, int m)
{
long sum = 0;
for (int x=0; x<n; x++)
for (int y=0; y<m; y++)
sum += x*y;
return sum;
}

• Runs in quadratic time O(n*m)


• The number of elementary steps is n*m
Complexity Examples (5)
long SumMN(int n, int m)
{
long sum = 0;
for (int x=0; x<n; x++)
for (int y=0; y<m; y++)
if (x==y)
for (int i=0; i<n; i++)
sum += i*x*y;
return sum;
}
• Runs in quadratic time O(n*m)
• The number of elementary steps is n*m + min(m,n)*n
Complexity Examples (6)
decimal Calculation(int n)
{
decimal result = 0;
for (int i = 0; i < (1<<n); i++)
result += i;
return result;
}

• Runs in exponential time O(2ⁿ)
• The number of elementary steps is 2ⁿ
Complexity Examples (7)
decimal Factorial(int n)
{
if (n==0)
return 1;
else
return n * Factorial(n-1);
}
• Runs in linear time O(n)
• The number of elementary steps is n
QUESTIONS
