
EE 232 Data Structures

Session-05, Spring-07

Chapter 2
Algorithm Analysis

Chapter 2: Algorithm Analysis
2.1 Mathematical Background
2.2 Model
2.3 What to Analyze
2.4 Running Time Calculations

Chapter 2: Algorithm Analysis
Our goals: we shall learn

 How to estimate the time required for a program
 How to reduce the running time of a program
 The results of careless use of recursion
 Very efficient algorithms to raise a number to a power and to compute the greatest common divisor of two numbers

Let's Recall
 What is an Algorithm?
 a clearly specified set of simple instructions to be followed to solve a problem
• takes a set of values as input and
• produces a value, or set of values, as output

 What are Data structures?

 Methods of organizing data

Let's Recall…

 Writing a working program is not good enough!

 The program may be inefficient
 If the program is run on a large data set, then the
running time becomes an issue
• Recall the selection problem

Chapter 2: Algorithm Analysis
2.1 Mathematical Background
2.2 Model
2.3 What to Analyze
2.4 Running Time Calculations

Mathematical Background
 We need a theoretical framework upon which we can compare algorithms

 The idea is to establish a relative order among different algorithms, in terms of their relative rates of growth (of the running time)

 The rates of growth are expressed as functions, which are generally in terms of the number of inputs n; e.g. a running time function can be expressed as T(n)

Mathematical Background…

 Such functions describe the utilization of some

precious resource (e.g. time or space) as the size of
the problem increases

 We can determine the exact running time of an algorithm; however, for large inputs the multiplicative constants and lower-order terms are dominated by the effects of the input size itself

Mathematical Background…
 Consider the two functions
f(n) = n^2 and g(n) = n^2 – 3n + 2
 Around n = 0, they look very different

Mathematical Background…
 If we look at a slightly larger range, n = [0, 10], we begin to note that they are more similar:

Mathematical Background…
 Extending the range to n = [0, 100], the similarity increases:

Mathematical Background…
 And on the range n = [0, 1000], they are (relatively) indistinguishable:

Mathematical Background…
 They are different absolutely, for example,
f(1000) = 1,000,000
g(1000) = 997,002
 however, the relative difference is very small:

(f(1000) – g(1000)) / f(1000) = 0.002998 ≈ 0.3%

 and this difference goes to zero as n → ∞
 The justification for the pair of polynomials being similar is that both have the same leading term: n^2

Mathematical Background…

We can throw away the lower-order terms and ignore the coefficient of the highest-order term as the problem size grows really, really large

Mathematical Background…
Asymptotic Analysis
 simplifies the analysis of running time by getting rid of "details" that may be affected by the specific implementation and hardware
 like "rounding": 1,000,001 ≈ 1,000,000
 3n^2 ≈ n^2

Asymptotic Efficiency
 how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound

Mathematical Background…
Asymptotic Notation
 The "big-Oh" O-Notation
 f(n) = O(g(n))¹ if there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0
 f(n) and g(n) are functions over non-negative integers
 The growth rate of f(n) is less than or equal to that of g(n)

[figure: running time vs. input size; f(n) lies at or below c·g(n) for all n ≥ n0]

¹ the "=" here means set membership, i.e. f(n) Є O(g(n))
Mathematical Background…
 g(n) is an upper bound on f(n)
 O-notation provides an asymptotic upper bound on a function
 Used for worst-case analysis

Examples
 If f(n) = 2n^2, then f(n) = O(n^4), f(n) = O(n^3) and f(n) = O(n^2) are all technically correct, but the last answer is the best
Mathematical Background…
 The "big-Omega" Ω-Notation
 f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
c·g(n) ≤ f(n) for all n ≥ n0
 The growth rate of f(n) is greater than or equal to that of g(n)

[figure: running time vs. input size; f(n) lies at or above c·g(n) for all n ≥ n0]

Mathematical Background…
 Ω-notation provides an asymptotic lower bound on a function
 Used to describe best-case running times

Examples
 n^3 grows faster than n^2, so we can say that n^3 = Ω(n^2)

Mathematical Background…
 The "big-Theta" Θ-Notation
 f(n) = Θ(g(n)) if there exist positive constants c1, c2, and n0 such that
c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0
 for all values of n to the right of n0, the value of f(n) lies at or above c1·g(n) and at or below c2·g(n)

[figure: running time vs. input size; f(n) is sandwiched between c1·g(n) and c2·g(n) for all n ≥ n0]

Mathematical Background…
 f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n))
 The growth rate of f(n) equals the growth rate of g(n)
 g(n) is an asymptotically tight bound for f(n)

Examples
 ½n^2 – 3n = Θ(n^2)
 6n^3 ≠ Θ(n^2)

Mathematical Background…
 The "Little-Oh" o-notation
f(n) = o(g(n)) if f(n) = O(g(n)) and f(n) ≠ Θ(g(n))
 The growth rate of f(n) is less than the growth rate
of g(n)
 This is different from Big-Oh, because Big-Oh
allows the possibility that growth rates are the
same
Examples
 The bound 2n^2 = O(n^2) is asymptotically tight, but the bound 2n = O(n^2) is not
 2n = o(n^2), but 2n^2 ≠ o(n^2)

Mathematical Background…
Rule 1
 If T1(n) = O(f(n)) and T2(n) = O(g(n)), then
• T1(n) + T2(n) = max( O(f(n)), O(g(n)) )
• T1(n) * T2(n) = O( f(n) * g(n) )
Rule 2
 If T(n) is a polynomial of degree k, then T(n) = Θ(n^k)
• e.g. f(n) = an^2 + bn + c ⇒ f(n) = Θ(n^2)

Rule 3
 log^k n = O(n) for any constant k, indicating logarithms grow very slowly
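Rule 2 can be justified directly; for f(n) = an^2 + bn + c with a > 0, explicit constants witness the Θ bound (a worked sketch, not on the slides; the choices of c1, c2, n0 are ours):

```latex
\begin{align*}
  n \ge 1 \;&\Rightarrow\; f(n) \le (a + |b| + |c|)\,n^2
      && \text{take } c_2 = a + |b| + |c| \\
  n \ge n_0 = \max\!\Bigl(1,\ \tfrac{2(|b|+|c|)}{a}\Bigr) \;&\Rightarrow\;
      bn + c \ge -(|b|+|c|)\,n \ge -\tfrac{a}{2}\,n^2
      \;\Rightarrow\; f(n) \ge \tfrac{a}{2}\,n^2
      && \text{take } c_1 = \tfrac{a}{2}
\end{align*}
```

Hence c1·n^2 ≤ f(n) ≤ c2·n^2 for all n ≥ n0, i.e. f(n) = Θ(n^2).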

Mathematical Background…
Asymptotic notation in equations and inequalities
 Asymptotic notation can be used within
mathematical formulas
 2n^2 + 3n + 1 = 2n^2 + Θ(n) means that 2n^2 + 3n + 1 = 2n^2 + f(n), where f(n) is some function in the set Θ(n)

 When the asymptotic notation stands alone on the right-hand side of an equation (or inequality), as in n = O(n^2), we define the equal sign to mean set membership: n Є O(n^2)

Mathematical Background…
Typical growth rates

Function    Name
c           constant
log n       logarithmic
log^2 n     log-squared
n           linear
n log n
n^2         quadratic
n^3         cubic
2^n         exponential

Mathematical Background…
A Hierarchy of Growth Rates

log n < log^2 n < log^k n < n < n log n < n^2 < n^3 < 2^n < 3^n < n!

Mathematical Background…
 We can always determine the relative growth rates of two functions f(n) and g(n) by computing

lim (n → ∞) f(n)/g(n)

using L'Hôpital's rule if necessary

 If the limit is 0 ⇒ f(n) = o(g(n)) ⇒ f(n) becomes insignificant relative to g(n) as n approaches infinity
 If the limit is c ≠ 0 ⇒ f(n) = Θ(g(n))
 If the limit is ∞ ⇒ g(n) = o(f(n))

Mathematical Background…
L'Hôpital's Rule

If lim (n → ∞) f(n) = ∞ and lim (n → ∞) g(n) = ∞

then lim (n → ∞) f(n)/g(n) = lim (n → ∞) f ′(n)/g′(n)

where f ′(n) and g′(n) are the derivatives of f(n) and g(n) respectively

Mathematical Background…
Example
Let
f(n) = 25n^2 + n and
g(n) = n^2

Then lim (n → ∞) f(n)/g(n) = 25

So 25n^2 + n = Θ(n^2)

Mathematical Background…
Example
Let
f(n) = n log n and
g(n) = n^1.5

Then lim (n → ∞) f(n)/g(n) = ?

Chapter 2: Algorithm Analysis
2.1 Mathematical Background
2.2 Model
2.3 What to Analyze
2.4 Running Time Calculations

Model

 In order to analyze algorithms in our formal framework, we need a model of computation

 Instructions are executed sequentially ⇒ instructions are executed one after another, with no concurrent operations ⇒ not parallel computers
 Simple instructions such as addition, multiplication, comparison and assignment; each instruction is executed in exactly one time unit ⇒ not a realistic assumption

Model…

 Assume infinite memory  never worry about

page faults
 Although the compiler and computer affect algorithms, we will not model them here

Why do we use a model?

 simple
 easier to prove things for the model than the real
machine
 We can estimate the algorithm behavior
irrespective of the hardware/software
Chapter 2: Algorithm Analysis
2.1 Mathematical Background
2.2 Model
2.3 What to Analyze
2.4 Running Time Calculations

What to analyze

 We only analyze correct algorithms

 An algorithm is correct
 If, for every input instance, it halts with the correct
output

 Incorrect algorithms
 Might not halt at all on some input instances
 Might halt with other than the desired answer

What to analyze…
Analysing an algorithm
 Predicting the resources that the algorithm
requires

 Resources include
• Memory (space complexity)
• Computational time (time complexity)

What to analyze…
 Factors affecting the running time of an algorithm

 computer's architecture
 compiler's design
 algorithm used
 input to the algorithm
• typically, the input size (number of items in the input) is the main consideration
– e.g. sorting problem ⇒ the number of items to be sorted

(side note: for a statement like i = j-1, if c is its cost and it executes n times, the total is c*n; c depends on the computer's clock and the compiler)

What to analyze…
 So we want to evaluate an algorithm as the size of the problem becomes large

 An algorithm that takes linear time grows more slowly than one that is quadratic, which in turn grows more slowly than one that is exponential

 Each of these characterizations --- linear, quadratic, and so on --- identifies a class of functions with similar growth behavior
What to analyze…
Best-case running time
 if there is a permutation of input data that
minimizes “run time efficiency”, then that minimum
is the best-case run time efficiency

Worst-case running time

 if there is a permutation of input data that
maximizes the “run time efficiency”, then that
maximum is the worst-case run time efficiency

 We use worst-case running time for analyzing algorithms, as this provides a bound for all input

What to analyze…
Best/Worst/Average Case
 For a specific size of input n, investigate running times for different input instances:

[figure: bar chart of running times (1n … 6n) for different input instances of the same size n]

What to analyze…
Best/Worst/Average Case
 For inputs of all sizes:

[figure: running time vs. input instance size (1, 2, 3, … 12, …); worst-case curve on top, average-case in the middle, best-case at the bottom]
Chapter 2: Algorithm Analysis
2.1 Mathematical Background
2.2 Model
2.3 What to Analyze
2.4 Running Time Calculations

Running Time Calculations
To simplify the analysis
 We adopt the convention that there are no particular units of time, and thus we can throw away the leading constants

 We will also throw away low-order terms, so we compute the Big-Oh running time, which provides an upper bound

 We should never underestimate the running time of a program

Running Time Calculations…
A Simple Example: computing ∑ (i = 1 to N) i^3

int sum (int N)
{
    int i, PartialSum;          /* no time for declarations */
    PartialSum = 0;             /* 1 for the assignment */
    for ( i = 1; i <= N; i++)   /* 1 assignment, N+1 tests, and N increments */
        PartialSum += i * i * i;  /* 4 units (an assignment, an addition, and
                                     two multiplications) per loop */
    return PartialSum;          /* 1 for the return statement */
}

T(N) = 1 + (1 + N+1 + N) + 4N + 1 = 6N + 4 = O(N)
Analysis becomes too complex for larger programs – there are ways to simplify this
Running Time Calculations…
A Simpler Analysis: computing ∑ (i = 1 to N) i^3

int sum (int N)
{
    int i, PartialSum;
    PartialSum = 0;             /* insignificant compared with the for loop */
    for ( i = 1; i <= N; i++)   /* N loops */
        PartialSum += i * i * i;  /* O(1) per execution, so no need to count
                                     whether it is two, three or four units */
    return PartialSum;          /* insignificant compared with the for loop */
}

T(N) = N = O(N)

The Big-Oh is the same
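The routine above compiles as-is; as a sanity check (not on the slides), its result can be compared against the known closed form (N(N+1)/2)^2 for the sum of the first N cubes:

```c
/* The sum-of-cubes routine from the slide, with the closed form
   (N(N+1)/2)^2 as an independent check (the check is not from the slides) */
int sum(int N)
{
    int i, PartialSum;

    PartialSum = 0;
    for (i = 1; i <= N; i++)
        PartialSum += i * i * i;
    return PartialSum;
}

int sumClosedForm(int N)
{
    int half = N * (N + 1) / 2;   /* 1 + 2 + ... + N */
    return half * half;           /* the sum of cubes equals its square */
}
```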

Running Time Calculations…
General Rules
 Rule 1 – for loops
 The running time of a for loop is at most the running time of the statements inside the for loop times the number of iterations

for ( i = 0; i < N; i++)
    sum += i;               /* ∑ (i = 0 to N-1) 1 = N */

 The T(N) of the above example is O(N)

Running Time Calculations…
 Rule 2 – nested for loops
 analyze these inside out
 The total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the for loops

for ( i = 0; i < N; i++)
    for ( j = 0; j < N; j++)
        k++;                /* ∑ (i = 0 to N-1) ∑ (j = 0 to N-1) 1
                               = ∑ (i = 0 to N-1) N = N^2 */

 The T(N) of the above example is O(N^2)
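The count can also be checked mechanically: tallying how often the innermost statement of the nested loop runs (a small check, not from the slides) gives exactly N·N:

```c
/* Counts how many times the innermost statement of the slide's
   doubly nested loop executes: exactly N * N times */
int CountNestedIterations(int N)
{
    int i, j, k = 0;

    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            k++;            /* executed N * N times in total */
    return k;
}
```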

Running Time Calculations…
 Rule 3 – consecutive statements
 These just add (which means that the maximum is the one that counts)

for ( i = 0; i < N; i++)
    A[ i ] = 0;             /* ∑ (i = 0 to N-1) 1 = N */

for ( i = 0; i < N; i++)
    for ( j = 0; j < N; j++)
        A[ i ] += A[ j ] + i + j;   /* ∑ ∑ 1 = N^2 */

 The T(N) of the above example is O(N) + O(N^2) = O(N^2)

Running Time Calculations…
 Rule 4 – if/else
 For the fragment: if (condition) s1 else s2
 The running time is never more than the running time of the test plus the larger of the running times of s1 and s2

if (condition == 1)
    for (i = 0; i < N; i++)
        A[ i ] = 0;
else
    for ( i = 0; i < N; i++)
        for ( j = 0; j < N; j++)
            A[ i ] += A[ j ] + i + j;

/* max( ∑ 1, ∑ ∑ 1 ) = max( N, N^2 ) = N^2 */

 The T(N) of the above example is O(N^2)
Running Time Calculations…
Basic strategy
 analyze from the inside (or deepest part) first and work outwards
 This even works for recursive functions

Running Time Calculations…
Recursive Functions

long int Factorial (int N) {
/*1*/    if (N <= 1)
/*2*/        return 1;
    else
/*3*/        return N * Factorial(N-1);
}

Computing the time units
• For N = 0 and N = 1, the running time is constant: the time to do the test and return
• For N ≥ 2, the running time is a constant amount of work at line 1 + the work at line 3
• The work at line 3 is one multiplication and one function call

 Let T(N) be the running time for Factorial(N)
 Factorial(N-1) requires T(N-1) time units

T(0) = T(1) = 1, T(N) = T(N-1) + 2
Running Time Calculations…
 A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs

T(N) = Θ(1)               if N ≤ 1     (recurrence relation)
T(N) = T(N-1) + Θ(1)      if N ≥ 2

 we can replace each asymptotic notation with a representative from its equivalence class
 replace Θ(1) with 1, and if we had Θ(n) we could replace it with n

Running Time Calculations…
 we substitute the recurrence relation continually on the R.H.S.

Main equation            ⇒ T(N) = T(N-1) + 1
Substitute N-1           ⇒ T(N-1) = T(N-2) + 1
Back into the main equation ⇒ T(N) = T(N-2) + 2
Continuing               ⇒ T(N) = T(N-k) + k ; if k = N – 1 then
                         ⇒ T(N) = T(1) + N – 1 = N = O(N)

Running Time Calculations…
Careless use of recursion

long int Fib (int N)
{
/*1*/    if (N <= 1)
/*2*/        return 1;
    else
/*3*/        return Fib(N-1) + Fib(N-2);
}

Computing the time units
• For N = 0 and N = 1, the running time is constant: the time to do the test and return
• For N ≥ 2, the running time is a constant amount of work at line 1 + the work at line 3
• The work at line 3 is one addition and two function calls

 Let T(N) be the running time for Fib(N)
 Fib(N-1) requires T(N-1) time units
 Fib(N-2) requires T(N-2) time units

T(0) = T(1) = 1; T(N) = T(N-1) + T(N-2) + 2
Running Time Calculations…
Careless use of recursion…
 thus for N ≥ 2 , running time of Fib(N) is
 T(N) = T(N-1) + T(N-2) + 2

 since Fib(N) = Fib(N-1) + Fib(N-2) , we can show

T(N) ≥ Fib(N) with the help of induction

 we can also show with the help of induction that for N > 4, Fib(N) ≥ (3/2)^N

 thus the running time of the program grows

exponentially
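The blow-up comes from recomputing the same subproblems over and over; an iterative version (a sketch, not from the slides, keeping the slide's convention Fib(0) = Fib(1) = 1) does one addition per step and runs in O(N):

```c
/* Iterative Fibonacci (not from the slides): O(N) time instead of the
   exponential time of the naive recursion; Fib(0) = Fib(1) = 1 as above */
long int FibIter(int N)
{
    long int Prev = 1, Curr = 1, Next;
    int i;

    if (N <= 1)
        return 1;
    for (i = 2; i <= N; i++) {  /* N - 1 additions in total */
        Next = Prev + Curr;
        Prev = Curr;
        Curr = Next;
    }
    return Curr;
}
```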

Running Time Calculations…
Maximum Subsequence Sum problem
 Given integers A1, A2, …, AN, find the maximum value of ∑ (k = i to j) Ak

 For convenience, the maximum subsequence sum is 0 if all the integers are negative

Example
 -2, 11, -4, 13, -5, -2  (the answer is 20: A2 through A4, i.e. 11 – 4 + 13)

Running Time Calculations…
 Algorithm 1
 Exhaustively tries all the possibilities

int MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, i, j, k;

/* 1*/    MaxSum = 0;
/* 2*/    for( i = 0; i < N; i++ )
/* 3*/        for( j = i; j < N; j++ )
          {
/* 4*/            ThisSum = 0;
/* 5*/            for( k = i; k <= j; k++ )
/* 6*/                ThisSum += A[ k ];
/* 7*/            if( ThisSum > MaxSum )
/* 8*/                MaxSum = ThisSum;
          }
/* 9*/    return MaxSum;
}

 1st loop is of size N
 2nd loop is of size N – i
 3rd loop is of size j – i + 1
 The running time is O(N^3)
 Precise analysis: ∑ (i = 0 to N-1) ∑ (j = i to N-1) ∑ (k = i to j) 1 = ?
Running Time Calculations…
 Algorithm 2
 The running time is O(N^2)

int MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, i, j;

/* 1*/    MaxSum = 0;
/* 2*/    for( i = 0; i < N; i++ )
          {
/* 3*/        ThisSum = 0;
/* 4*/        for( j = i; j < N; j++ )
              {
/* 5*/            ThisSum += A[ j ];
/* 6*/            if( ThisSum > MaxSum )
/* 7*/                MaxSum = ThisSum;
              }
          }
/* 8*/    return MaxSum;
}

 Precise analysis: ∑ (i = 0 to N-1) ∑ (j = i to N-1) 1 = ?
Running Time Calculations…
 Algorithm 3
 This algorithm uses a divide-and-conquer
strategy

 divide  split the problem into two roughly

equal subproblems, which are then solved
recursively

 conquer  patching together the two solutions

of the subproblems, and possibly doing a
small amount of additional work, to arrive at a
solution for the whole problem

Running Time Calculations…

4 -3 5 -2 -1 2 6 -2

 maximum subsequence sum for left half is 6

 maximum subsequence sum for right half is 8
 maximum sum in the left half that includes the last element in the left half is 4 (A1–A4), and the maximum sum in the right half that includes the first element in the right half is 7 (A5–A7)
 Thus the maximum sum that spans both halves and
goes through the middle is 4 + 7 = 11
Running Time Calculations…
 Algorithm 3…
 maximum subsequence sum can be in one of the
three places
1) entirely in the left half of the input,
2) entirely in the right half of the input
3) it crosses the middle and is in both halves

 The first two cases can be solved recursively

 The last case can be obtained by finding largest
sum in the left half that includes its last element
and the largest sum in the right half that includes
its first element
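The three cases above translate directly into the following routine (a sketch of Algorithm 3 assembled from the slide's description; the names MaxSubSum and Max3 are ours):

```c
/* Hypothetical helper: maximum of three ints */
static int Max3(int a, int b, int c)
{
    int m = a > b ? a : b;
    return m > c ? m : c;
}

static int MaxSubSum(const int A[], int Left, int Right)
{
    int MaxLeftBorderSum = 0, MaxRightBorderSum = 0;
    int LeftBorderSum = 0, RightBorderSum = 0;
    int Center, i;

    if (Left == Right)                  /* base case: one element */
        return A[Left] > 0 ? A[Left] : 0;

    Center = (Left + Right) / 2;

    /* cases 1 and 2: best sum entirely in one half, solved recursively */
    int MaxLeftSum = MaxSubSum(A, Left, Center);
    int MaxRightSum = MaxSubSum(A, Center + 1, Right);

    /* case 3: best sum crossing the middle = best left border + best right border */
    for (i = Center; i >= Left; i--) {
        LeftBorderSum += A[i];
        if (LeftBorderSum > MaxLeftBorderSum)
            MaxLeftBorderSum = LeftBorderSum;
    }
    for (i = Center + 1; i <= Right; i++) {
        RightBorderSum += A[i];
        if (RightBorderSum > MaxRightBorderSum)
            MaxRightBorderSum = RightBorderSum;
    }
    return Max3(MaxLeftSum, MaxRightSum,
                MaxLeftBorderSum + MaxRightBorderSum);
}

int MaxSubsequenceSum3(const int A[], int N)
{
    return MaxSubSum(A, 0, N - 1);
}
```

On the slide's example 4 -3 5 -2 -1 2 6 -2 this returns max(6, 8, 4 + 7) = 11.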
Running Time Calculations…
 Algorithm 3…
 Let T(N) be the time to solve a maximum subsequence sum problem of size N
 If N = 1, the program takes constant time: T(1) = 1

Computing Time Units
-------------------------------
Call Algorithm 3 recursively for the left side:      T(N/2)
Call Algorithm 3 recursively for the right side:     T(N/2)
Look for the solution in the middle (two for loops): O(N)
Return the maximum of the three

Running Time Calculations…
 Algorithm 3…
 Recurrence equation:
T(N) = 1                  if N = 1
T(N) = 2T(N/2) + O(N)     if N ≥ 2
 Replace O(N) with N:
T(N) = 2T(N/2) + N
T(1) = 1
T(2) = 2(1) + 2 = 4 = 2 * (1 + 1)
T(4) = 2(4) + 4 = 12 = 4 * (2 + 1)
T(8) = 2(12) + 8 = 32 = 8 * (3 + 1)
If N = 2^k, T(N) = N * (k + 1) = Nk + N = N log N + N = O(N log N)
Running Time Calculations…
 Algorithm 4
 The running time is O(N)

int MaxSubsequenceSum( const int A[ ], int N )
{
    int ThisSum, MaxSum, j;

/* 1*/    ThisSum = MaxSum = 0;
/* 2*/    for( j = 0; j < N; j++ )
          {
/* 3*/        ThisSum += A[ j ];
/* 4*/        if( ThisSum > MaxSum )
/* 5*/            MaxSum = ThisSum;
/* 6*/        else if( ThisSum < 0 )
/* 7*/            ThisSum = 0;
          }
/* 8*/    return MaxSum;
}
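Run on the earlier example -2, 11, -4, 13, -5, -2, Algorithm 4 returns 20 (11 – 4 + 13). The routine is repeated below, reindented and without the line labels, so the check compiles on its own:

```c
/* Algorithm 4 from the slide, repeated verbatim so the check is self-contained */
int MaxSubsequenceSum(const int A[], int N)
{
    int ThisSum, MaxSum, j;

    ThisSum = MaxSum = 0;
    for (j = 0; j < N; j++) {
        ThisSum += A[j];
        if (ThisSum > MaxSum)
            MaxSum = ThisSum;
        else if (ThisSum < 0)
            ThisSum = 0;    /* a negative prefix can never start the best subsequence */
    }
    return MaxSum;
}
```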
Running Time Calculations…
Logarithms in the running time
 General rule
 An algorithm is O(logN) if it takes a constant
O(1) time to cut the problem size by a fraction (
which is usually ½ )

Running Time Calculations…
Binary Search
 Given an integer X and integers A0, A1,…,AN-1 ,
which are presorted and already in memory, find i
such that Ai = X , or return i = -1 if X is not in the
input

Algorithm 1
 Scan through the list from left to right
 Running time is linear

Running Time Calculations…
Algorithm 2
 Check if X is the middle element
 If X is smaller than the middle element, apply the same strategy to the sorted subarray to the left of the middle element
 Likewise, if X is larger than the middle element, look in the right half

int BinarySearch( const ElementType A[ ], ElementType X, int N )
{
    int Low, Mid, High;

/* 1*/    Low = 0; High = N - 1;
/* 2*/    while( Low <= High )
          {
/* 3*/        Mid = ( Low + High ) / 2;
/* 4*/        if( A[ Mid ] < X )
/* 5*/            Low = Mid + 1;
          else
/* 6*/        if( A[ Mid ] > X )
/* 7*/            High = Mid - 1;
          else
/* 8*/            return Mid;
          }
/* 9*/    return NotFound;
}
Running Time Calculations…
Running time
 All the work done inside the loop takes O(1) per iteration, so let's determine the number of times around the loop
 The loop starts with High – Low = N - 1 and finishes with High – Low ≥ -1
 Every time through the loop, the value High – Low must be at least halved from its previous value
 Thus the number of times around the loop is at most log(N - 1) + 2
 Running time is O(log N)
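To make the routine compile on its own, ElementType and NotFound must be defined; the definitions below are assumptions (the problem statement's "return i = -1" suggests NotFound is -1):

```c
/* BinarySearch from the slide, assuming ElementType is int and
   NotFound is -1 (consistent with the problem statement) */
typedef int ElementType;
#define NotFound (-1)

int BinarySearch(const ElementType A[], ElementType X, int N)
{
    int Low, Mid, High;

    Low = 0; High = N - 1;
    while (Low <= High) {
        Mid = (Low + High) / 2;
        if (A[Mid] < X)
            Low = Mid + 1;      /* X is in the right half */
        else if (A[Mid] > X)
            High = Mid - 1;     /* X is in the left half */
        else
            return Mid;         /* found */
    }
    return NotFound;
}
```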

Running Time Calculations…
Euclid’s Algorithm
 The greatest common divisor (Gcd) of two integers
is the largest integer that divides both

 e.g. Gcd(50,15) = 5

 Euclid's algorithm depends on the result that if M = Nq + r, then gcd(M,N) = gcd(N,r)
 The algorithm works by continually computing
remainders until 0 is reached
 The last nonzero remainder is the answer

Running Time Calculations…
Compute gcd(324,148)

gcd(324,148):  324 = 148 * 2 + 28
gcd(148,28):   148 = 28 * 5 + 8
gcd(28,8):     28 = 8 * 3 + 4
gcd(8,4):      8 = 4 * 2 + 0
gcd(4,0):      return 4

gcd(8,4) = 4, so gcd(324,148) = 4

Running Time Calculations…
Running time
 The entire running time of the algorithm depends on determining how long the sequence of remainders is
 Though the remainder does not decrease by a constant factor in one iteration, after two iterations it is at most half of its original value
 Running time is O(log N)

unsigned int Gcd( unsigned int M, unsigned int N )
{
    unsigned int Rem;

/* 1*/    while( N > 0 )
          {
/* 2*/        Rem = M % N;
/* 3*/        M = N;
/* 4*/        N = Rem;
          }
/* 5*/    return M;
}
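As a quick check (not on the slides), the Gcd routine reproduces the worked example gcd(324,148) = 4; it is repeated below so the check compiles on its own:

```c
/* The Gcd routine from the slide, repeated verbatim for a self-contained check */
unsigned int Gcd(unsigned int M, unsigned int N)
{
    unsigned int Rem;

    while (N > 0) {
        Rem = M % N;   /* after two iterations the remainder is at most half of M */
        M = N;
        N = Rem;
    }
    return M;
}
```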

Running Time Calculations…
Exponentiation
 Raising an integer to a power (which is also an
integer)
 Number of multiplications can be used as the
measurement of running time
 Computing X^N naively uses N-1 multiplications
 The following recursive algorithm does better
 If N is even, we have X^N = X^(N/2) * X^(N/2), and
 if N is odd, we have X^N = X^((N-1)/2) * X^((N-1)/2) * X

Running Time Calculations…
Example
 To calculate 3^14, we don't need to do 13 multiplications (3*3*3*3*3*3*3*3*3*3*3*3*3*3)
 Instead we notice that 3^14 = (3^2)^7 = 9^7
 But 9^7 can be computed as 9 * 9^6 = 9 * (9^2)^3 = 9 * 81^3
 But 81^3 can be computed as 81 * (81^2)
 81^2 = 6,561
 81 * 6,561 = 531,441
 9 * 531,441 = 4,782,969 which is in fact 3^14
 Each time we square the number, we cut N in half

Running Time Calculations…
Exponentiation

long int Pow( long int X, unsigned int N )
{                                   /* Computing Time Units:               */
/* 1*/    if( N == 0 )              /* concentrate on the multiplications  */
/* 2*/        return 1;
/* 3*/    if( N == 1 )
/* 4*/        return X;
/* 5*/    if( IsEven( N ) )
/* 6*/        return Pow( X * X, N / 2 );      /* 1 multiplication and T(N/2)  */
    else
/* 7*/        return Pow( X * X, N / 2 ) * X;  /* 2 multiplications and T(N/2) */
}

T(N) = 2 + T(N/2)

Running Time Calculations…
T ( N )  T ( N / 2)  2
T (1)  1
T ( 2)  1  2  3
T ( 4)  3  2  5
T (8)  5  2  7
If N  2 k , T ( N )  2k  1 
2 log N  1  O (log N )
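IsEven is not defined on the slide; with a minimal assumed definition, Pow reproduces the worked example 3^14 = 4,782,969 (the routine is repeated so the check compiles on its own):

```c
/* Pow from the slide; IsEven is not shown there, so a minimal
   definition is assumed */
#define IsEven(N) (((N) % 2) == 0)

long int Pow(long int X, unsigned int N)
{
    if (N == 0)
        return 1;
    if (N == 1)
        return X;
    if (IsEven(N))
        return Pow(X * X, N / 2);       /* one multiplication, then T(N/2)  */
    else
        return Pow(X * X, N / 2) * X;   /* two multiplications, then T(N/2) */
}
```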

Running Time Calculations…
 Once an analysis has been performed, it is
desirable to see if the answer is correct and as
good as possible
 i.e. verify that some program is O(f(N))
Method 1
• Code up your program and see if the
empirically observed running time matches the
running time predicted by the analysis
• Difficult to differentiate linear programs from O(N log N) ones purely on empirical evidence

Running Time Calculations…
Method 2
 Compute the values of T(N)/f(N) for a range of N
where T(N) is the empirically observed running
time
 If f(N) is a tight answer for running time, then the
computed values converge to a positive constant
 If f(N) is an overestimate, values converge to zero
 If f(N) is an underestimate and hence wrong , the
values will diverge

Algorithm Analysis: Summary
Covered
 Asymptotic notations
 Model of computation
 How to estimate the running time of a program

You now have learnt how to analyze the complexity of programs, though the picture is not complete; we will build on it in the coming chapters
 Both the Gcd and exponentiation algorithms are used in cryptography

Proof that T(N) ≥ F(N)
Base cases: T(0) = 1 ≥ F(0) = 1,
            T(1) = 1 ≥ F(1) = 1,
            T(2) = 4 ≥ F(2) = 2
We know that T(N+1) ≥ T(N) + T(N-1)
and F(N+1) = F(N) + F(N-1)
Assume the theorem holds for all k, 1 ≤ k ≤ N
Now prove the N+1 case:
T(N+1) ≥ T(N) + T(N-1) ≥ F(N) + F(N-1) = F(N+1)

Proof that F(N) ≥ (3/2)^N

Base cases: F(5) = 8 ≥ (3/2)^5 ≈ 7.6,
            F(6) = 13 ≥ (3/2)^6 ≈ 11.4

Assume the theorem holds for all k, 5 ≤ k ≤ N

Now prove the N+1 case:
F(N+1) = F(N) + F(N-1) ≥ (3/2)^N + (3/2)^(N-1)
       = (3/2)^N (1 + 2/3) = (3/2)^N (5/3)
       ≥ (3/2)^N (3/2) = (3/2)^(N+1)