
Complexity Analysis

Dr. Muhammad Shahid Iqbal


Algorithms

• Algorithm: A well-defined procedure for transforming some
  input to a desired output.
• Major concerns:
• Correctness: Is it correct?
• Efficiency: Time complexity? Space complexity?
• Best case? Worst case? Average case?

• Better algorithms?
  • How: Faster algorithms? Algorithms with a smaller space
    requirement?
  • Optimality: Can we prove that an algorithm is the best
    possible (optimal)? Can we establish a lower bound?
Complexity Analysis

• Complexity
  • A measure of the performance of an algorithm
  • Expresses the relationship between the input size N and
    the amount of time T required by the algorithm
Complexity Analysis

Size = N, Time = T
T1 = c * N1 (c is a constant)
Increasing the data size by a factor of 5 increases the time
by the same factor:
N2 = 5 * N1
T2 = 5 * T1
Functions (usually more complex than this) can be used to
express the relationship between N and T
Complexity Analysis

• Any term that doesn't considerably change the function's
  magnitude should be eliminated from the function
• The resulting function gives only an approximate measure
  of the original
• The approximation is sufficiently close to the original
  (for large data)
• This measure of efficiency is called asymptotic complexity
Asymptotic Complexity

• Calculating the exact function is difficult; an
  approximation is useful
• Ignore certain terms of the function to express the
  efficiency of an algorithm
• F(N) = N^2 + 100N + log10(N) + 1000
• Consider small values of N: N = 10, 100, 1000, ...
Asymptotic Complexity

T(n) = 10n^2 + n + 7
Dominant term when n is large: 10n^2

n         T(n)                   10n^2
10        10^3 + 17              10^3
100       10^5 + 107             10^5
10^3      10^7 + 1007            10^7
10^100    10^201 + 10^100 + 7    10^201

T(n) grows like 10n^2
T(n) is of order n^2 (ignoring constant factors)
Computing Asymptotic Complexity

Constant Order

a = 3 * b + 2;
c = c + 1;

If a, b, and c are scalars, this piece of code takes a
constant amount of time
O(1) means constant time; the constant might be 1, 10, or 100
Computing Asymptotic Complexity

Linear Loops
The running time is, at most, the running time of the
statements inside the loop (including tests) multiplied by
the number of iterations

for (i = 1; i <= n; i++) {   // executed n times
    a = a + 2;               // constant time
}

Total time = a constant c * n = cn = O(n)
Computing Asymptotic Complexity

Nested Loops: Analyze inside out
The running time is the product of the inner loop iterations
and the outer loop iterations

for (i = 1; i <= n; i++) {       // outer loop: executed n times
    for (j = 1; j <= n; j++) {   // inner loop: executed n times
        k = k + 1;               // constant time
    }
}

Total time = c * n * n = cn^2 = O(n^2)
Computing Asymptotic Complexity

Nested Loops
Outer loop i = 1: inner loop runs n times
Outer loop i = 2: inner loop runs n times
Outer loop i = 3: inner loop runs n times
......
Outer loop i = n: inner loop runs n times

Total iterations = n * n
Computing Asymptotic Complexity

Logarithmic
An algorithm is O(log N) if it takes constant time to cut the
problem size by a constant fraction (usually 1/2),
e.g. Binary Search (sketched below)
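
A minimal C sketch of binary search (the slides name the algorithm
but give no code, so the function name and signature here are
illustrative). Each comparison halves the remaining range, so a
sorted array of n elements needs at most about log2(n) probes:

int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {                 /* range shrinks by half each pass */
        int mid = lo + (hi - lo) / 2;  /* midpoint without (lo+hi) overflow */
        if (a[mid] == key)
            return mid;                /* found */
        else if (a[mid] < key)
            lo = mid + 1;              /* discard the lower half */
        else
            hi = mid - 1;              /* discard the upper half */
    }
    return -1;                         /* not present */
}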
Computing Asymptotic Complexity

Logarithmic Loops

i = 1;
while (i <= 1000) {
    i = i * 2;
}

Total time = O(?)
Computing Asymptotic Complexity

Iteration 1: i = 2
Iteration 2: i = 4
Iteration 3: i = 8
......
Iteration 9: i = 512
Iteration 10: i = 1024; the test i <= 1000 now fails, so the loop exits

For a general bound n the loop runs about log2(n) times, so
Total time = O(log n)
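
The doubling pattern generalizes from 1000 to any bound n. A small
sketch (a hypothetical helper, not from the slides) that counts the
iterations:

int count_doublings(long n) {
    int count = 0;
    for (long i = 1; i <= n; i *= 2)
        count++;          /* one iteration per doubling of i */
    return count;         /* equals floor(log2(n)) + 1 for n >= 1 */
}

For n = 1000 this returns 10, matching the trace above.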


Basic asymptotic efficiency classes

1         constant

log n     logarithmic

n         linear

n log n   n-log-n

n^2       quadratic

n^3       cubic

2^n       exponential

n!        factorial
Basic asymptotic efficiency classes
n      lg n   n lg n   n^2        n^3           2^n
0      -      -        0          0             1
1      0      0        1          1             2
2      1      2        4          8             4
4      2      8        16         64            16
8      3      24       64         512           256
16     4      64       256        4096          65536
32     5      160      1024       32768         4294967296
64     6      384      4096       262144        1.84467E+19
128    7      896      16384      2097152       3.40282E+38
256    8      2048     65536      16777216      1.15792E+77
512    9      4608     262144     134217728     1.3408E+154
1024   10     10240    1048576    1073741824
2048   11     22528    4194304    8589934592
(- marks undefined values; lg 0 does not exist)
Basic asymptotic efficiency classes

[Growth-rate plot (y-axis 0 to 120000): curves for lg n, n,
n lg n, n^2, n^3, and 2^n versus n]
Basic asymptotic efficiency classes
Execution times for algorithms with the given time complexities
(the entries correspond to roughly one basic operation per nanosecond)

n      f(n) = lg n       f(n) = n         f(n) = n lg n     f(n) = n^2      f(n) = n^3      f(n) = 2^n
10     0.003 micro sec   0.01 micro sec   0.033 micro sec   0.1 micro sec   1 micro sec     1 micro sec
20     0.004 micro sec   0.02 micro sec   0.086 micro sec   0.4 micro sec   8 micro sec     1 milli sec
30     0.005 micro sec   0.03 micro sec   0.147 micro sec   0.9 micro sec   27 micro sec    1 sec
40     0.005 micro sec   0.04 micro sec   0.213 micro sec   1.6 micro sec   64 micro sec    18.3 min
50     0.006 micro sec   0.05 micro sec   0.282 micro sec   2.5 micro sec   125 micro sec   13 days
10^2   0.007 micro sec   0.10 micro sec   0.664 micro sec   10 micro sec    1 milli sec     4 x 10^13 years
10^3   0.010 micro sec   1.00 micro sec   9.966 micro sec   1 milli sec     1 sec
10^4   0.013 micro sec   10 micro sec     130 micro sec     100 milli sec   16.7 min
10^5   0.017 micro sec   0.10 milli sec   1.67 milli sec    10 sec          11.6 days
10^6   0.020 micro sec   1 milli sec      19.93 milli sec   16.7 min        31.7 years
10^7   0.023 micro sec   0.01 sec         0.23 sec          1.16 days       31709 years
10^8   0.027 micro sec   0.10 sec         2.66 sec          115.7 days      3.17 x 10^7 years
10^9   0.030 micro sec   1 sec            29.90 sec         31.7 years
Order Notation
O: Upper Bounding Function

• Def: f(n) = O(g(n)) if there exist c > 0 and n0 > 0 such that
  f(n) ≤ c·g(n) for all n ≥ n0.
• How to show O (Big-Oh) relationships?
  f(n) = O(g(n)) iff lim(n→∞) f(n)/g(n) < ∞, including the case
  where the limit is 0
• f(n) certainly doesn't grow at a faster rate than g(n); it
  might grow at the same rate or more slowly

Ω: Lower Bounding Function

• Def: f(n) = Ω(g(n)) if there exist c > 0 and n0 > 0 such that
  0 ≤ c·g(n) ≤ f(n) for all n ≥ n0.
• How to show Ω (Big-Omega) relationships?
  f(n) = Ω(g(n)) iff lim(n→∞) f(n)/g(n) > 0, including the case
  where the limit is ∞
• f(n) certainly doesn't grow at a slower rate than g(n); it
  might grow at the same rate or faster

Θ: Tightly Bounding Function

• Def: f(n) = Θ(g(n)) if there exist c1, c2 > 0 and n0 > 0 such
  that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
• How to show Θ relationships?
  f(n) = Θ(g(n)) iff lim(n→∞) f(n)/g(n) = c, where 0 < c < ∞
• f(n) grows at the same rate as g(n), within constant multiples

≥ : Ω(g(n)), functions that grow at least as fast as g(n)
=  : Θ(g(n)), functions that grow at the same rate as g(n)
≤ : O(g(n)), functions that grow no faster than g(n)
Some Examples

3n^2 + 100n + 6 = O(n^2)?   Yes
3n^2 + 100n + 6 = O(n^3)?   Yes (a looser upper bound)
3n^2 + 100n + 6 = O(n)?     No: it grows faster than n

3n^2 + 100n + 6 = Ω(n^2)?   Yes
3n^2 + 100n + 6 = Ω(n^3)?   No: it grows more slowly than n^3
3n^2 + 100n + 6 = Ω(n)?     Yes

3n^2 + 100n + 6 = Θ(n^2)?   Yes: it is both O(n^2) and Ω(n^2)
3n^2 + 100n + 6 = Θ(n^3)?   No
3n^2 + 100n + 6 = Θ(n)?     No
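
As a worked check of the first claim against the definition of O
(one possible choice of constants, not the only one): take c = 4 and
n0 = 101. For n ≥ 101, 100n + 6 ≤ 100n + n = 101n ≤ n·n = n^2, so
3n^2 + 100n + 6 ≤ 3n^2 + n^2 = 4n^2 = c·n^2, as required.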
Complexity analysis example: Sorting

• Input: A sequence of n numbers <a1, a2, ..., an>.
• Output: A permutation <a1', a2', ..., an'> such that
  a1' ≤ a2' ≤ ... ≤ an'.

• Input:  <8, 6, 9, 7, 5, 2, 3>
• Output: <2, 3, 5, 6, 7, 8, 9>


Insertion Sort
InsertionSort(A)
1. for j ← 2 to length[A] do
2.     key ← A[j];
3.     /* Insert A[j] into the sorted sequence A[1..j-1]. */
4.     i ← j - 1;
5.     while i > 0 and A[i] > key do
6.         A[i+1] ← A[i];
7.         i ← i - 1;
8.     A[i+1] ← key;
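
A direct C translation of the pseudocode above (a sketch, not part
of the original slides; C arrays are 0-indexed, so the outer loop
starts at j = 1 rather than 2):

void insertion_sort(int A[], int n) {
    for (int j = 1; j < n; j++) {      /* steps 1-2 */
        int key = A[j];                /* element to insert */
        int i = j - 1;                 /* step 4 */
        while (i >= 0 && A[i] > key) { /* step 5 */
            A[i + 1] = A[i];           /* step 6: shift right */
            i--;                       /* step 7 */
        }
        A[i + 1] = key;                /* step 8: insert */
    }
}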
Exact Analysis of Insertion Sort

• The for loop test is executed (n-1) + 1 times: the body runs
  n-1 times (j = 2, ..., n) and the test fails once more at j = n+1.
• tj: the number of times the while-loop test is executed for a
  given value of j
• Step 5 is executed t2 + t3 + ... + tn times.
• Step 6 is executed (t2 - 1) + (t3 - 1) + ... + (tn - 1) times.
Exact Analysis of Insertion Sort (cont’d)

• Best case: If the input is already sorted, all tj's are 1.
  Linear: T(n) = (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8)
• Worst case: If the array is in reverse sorted order, tj = j for all j.
  Quadratic: T(n) = (c5/2 + c6/2 + c7/2)n^2
                    + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)n
                    - (c2 + c4 + c5 + c8)
• Exact analysis is often hard!
Asymptotic Analysis

• Asymptotic analysis looks at the growth of T(n) as n → ∞.
• Θ notation: Drop low-order terms and ignore the leading
  constant.
• E.g., 8n^3 - 4n^2 + 5n - 2 = Θ(n^3).
• Worst case: input reverse sorted, the while loop is Θ(j)
Divide-and-Conquer Algorithms

• The divide-and-conquer paradigm
  • Divide the problem into a number of subproblems.
  • Conquer the subproblems (solve them).
  • Combine the subproblem solutions to get the solution to the
    original problem.

• Merge sort:
  • Divide the n-element sequence to be sorted into two
    n/2-element sequences.
  • Conquer: sort the subproblems recursively using merge sort.
  • Combine: merge the two resulting sorted n/2-element sequences.
Merge Sort: A Divide-and-Conquer Algorithm

MergeSort(A, p, r)                 T(n)
1. if p < r then                   Θ(1)
2.     q ← ⌊(p+r)/2⌋               Θ(1)
3.     MergeSort(A, p, q)          T(n/2)
4.     MergeSort(A, q+1, r)        T(n/2)
5.     Merge(A, p, q, r)           Θ(n)

[Figure: trace on <8 3 2 9 7 1 5 4>: the array is split down to
single elements, then merged pairwise into <3 8> <2 9> <1 7> <4 5>,
then <2 3 8 9> <1 4 5 7>, and finally <1 2 3 4 5 7 8 9>]
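
A C sketch of the same algorithm (the slides omit the body of Merge,
so the merge below is an assumed implementation; tmp is
caller-provided scratch space of at least r - p + 1 ints):

#include <string.h>  /* memcpy */

void merge(int A[], int p, int q, int r, int tmp[]) {
    int i = p, j = q + 1, k = 0;
    while (i <= q && j <= r)               /* scan both sorted halves */
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= q) tmp[k++] = A[i++];      /* copy any leftovers */
    while (j <= r) tmp[k++] = A[j++];
    memcpy(&A[p], tmp, k * sizeof(int));   /* write back: Theta(n) total */
}

void merge_sort(int A[], int p, int r, int tmp[]) {
    if (p < r) {
        int q = (p + r) / 2;               /* divide at the midpoint */
        merge_sort(A, p, q, tmp);          /* conquer left half */
        merge_sort(A, q + 1, r, tmp);      /* conquer right half */
        merge(A, p, q, r, tmp);            /* combine */
    }
}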
Recurrence: Analyzing Divide-and-Conquer Algorithms
• A recurrence describes a function recursively in terms of itself.
• Recurrence for a divide-and-conquer algorithm:

  T(n) = Θ(1)                      if n ≤ c
         a·T(n/b) + D(n) + C(n)    otherwise

• a: # of subproblems
• n/b: size of the subproblems
• D(n): time to divide the problem of size n into subproblems
• C(n): time to combine the subproblem solutions to get the answer
  for the problem of size n
• Merge sort:

  T(n) = Θ(1)               if n = 1
         2T(n/2) + Θ(n)     if n > 1

• a = 2: two subproblems
• n/b = n/2: each subproblem has size ≈ n/2
• D(n) = Θ(1): compute the midpoint of the array
• C(n) = Θ(n): merging by scanning the sorted subarrays
Solving Recurrences

• Some general methods for solving recurrences:
  • Recursion Tree Method
  • Iteration Method
  • Master Theorem

• Two simplifications that won't affect the asymptotic analysis:
  • Ignore floors and ceilings.
  • Assume base cases are constant, i.e., T(n) = Θ(1) for small n.
Merge Sort Recurrence: Recursion Tree Method

T(n) = Θ(1)               if n = 1
       2T(n/2) + Θ(n)     if n > 1

• The recursion tree has lg n levels, and the work summed across
  each level is Θ(n), so T(n) = Θ(n lg n).
• Θ(n lg n) grows more slowly than Θ(n^2).
• Thus merge sort asymptotically beats insertion sort in the
  worst case.
Master Theorem

The solution to the recurrence

T(n) = aT(n/b) + Θ(n^k), where a ≥ 1 and b > 1, is given by
the Master Theorem:

       O(n^(log_b a))    if a > b^k
T(n) = O(n^k · log n)    if a = b^k
       O(n^k)            if a < b^k
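
As a quick worked instance, apply the theorem to the merge sort
recurrence from earlier, T(n) = 2T(n/2) + Θ(n): here a = 2, b = 2,
and k = 1, so b^k = 2 = a and the middle case gives
T(n) = O(n^1 · log n) = O(n log n), matching the recursion-tree result.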
Class Exercise

Solve the following using Master Theorem
