• Better algorithms?
How: faster algorithms? Algorithms with smaller space requirements?
Optimality: can we prove that an algorithm is the best possible, or establish a lower bound?
Complexity Analysis
Complexity
A measure of the performance of an algorithm.
Size = N, Time = T
T1 = c*N1 (c is a constant)
Increasing the data size by a factor of 5 increases the time by the same factor:
N2 = 5 N1
T2 = 5 T1
In general, (usually more complex) functions are used to express the relationship between N and T.
T(n) = 10n² + n + 7
Dominant term when n is large: 10n²

n       T(n)                10n²
10      10³ + 17            10³
100     10⁵ + 107           10⁵
10³     10⁷ + 1007          10⁷
10¹⁰⁰   10²⁰¹ + 10¹⁰⁰ + 7   10²⁰¹

T(n) grows like 10n²: T(n) is of order n² (ignoring constant factors)
Computing Asymptotic Complexity
Constant Order

a = 3*b + 2;
c = c + 1;

If a, b and c are scalars, this piece of code takes a constant amount of time.
O(1) means a constant amount of time; the constant might be 1, 10, or 100.
Linear Loops
The running time is at most the running time of the statements inside the loop (including the loop test) multiplied by the number of iterations.

for (i = 1; i <= n; i++) {   // executed n times
    a = a + 2;               // constant time
}

Total time = a constant c * n = cn = O(n)
Nested Loops

Outer loop i = 1: inner loop runs n times
Outer loop i = 2: inner loop runs n times
Outer loop i = 3: inner loop runs n times
……
Outer loop i = n: inner loop runs n times

Total iterations: n * n = n²
Logarithmic
An algorithm is O(log N) if it takes constant time to cut the problem size by a constant fraction (usually 1/2),
e.g. binary search.
Logarithmic Loops

i = 1;
while (i <= 1000) {
    i = i * 2;
}

Total time = O(?)
Iteration 1: i = 2
Iteration 2: i = 4
Iteration 3: i = 8
……
Iteration 9: i = 512
Iteration 10: i = 1024 → test fails, loop exits

The body executes 10 ≈ log₂ 1000 times, so the loop is O(log n).
1        constant
log n    logarithmic
n        linear
n log n  n-log-n
n²       quadratic
n³       cubic
2ⁿ       exponential
n!       factorial
Basic asymptotic efficiency classes
n      lg n   n lg n   n²        n³           2ⁿ
1      0      0        1         1            2
2      1      2        4         8            4
4      2      8        16        64           16
8      3      24       64        512          256
16     4      64       256       4096         65536
32     5      160      1024      32768        4294967296
64     6      384      4096      262144       ≈ 1.84×10¹⁹
128    7      896      16384     2097152      ≈ 3.40×10³⁸
256    8      2048     65536     16777216     ≈ 1.16×10⁷⁷
512    9      4608     262144    134217728    ≈ 1.34×10¹⁵⁴
1024   10     10240    1048576   1073741824
2048   11     22528    4194304   8589934592
[Figure: growth of lg n, n, n lg n, n², n³, and 2ⁿ as n increases; the exponential curve quickly dominates all polynomial curves.]
Execution time for algorithms with the given time complexities:

n     f(n)=lg n   f(n)=n    f(n)=n lg n   f(n)=n²      f(n)=n³      f(n)=2ⁿ
10    0.003 µs    0.01 µs   0.033 µs      0.1 µs       1 µs         1 µs
20    0.004 µs    0.02 µs   0.086 µs      0.4 µs       8 µs         1 ms
30    0.005 µs    0.03 µs   0.147 µs      0.9 µs       27 µs        1 s
40    0.005 µs    0.04 µs   0.213 µs      1.6 µs       64 µs        18.3 min
50    0.006 µs    0.05 µs   0.282 µs      2.5 µs       125 µs       13 days
10²   0.007 µs    0.10 µs   0.664 µs      10 µs        1 ms         4×10¹³ years
10³   0.010 µs    1.00 µs   9.966 µs      1 ms         1 s
10⁴   0.013 µs    10 µs     130 µs        100 ms       16.7 min
10⁵   0.017 µs    0.10 ms   1.67 ms       10 s         11.6 days
10⁶   0.020 µs    1 ms      19.93 ms      16.7 min     31.7 years
10⁷   0.023 µs    0.01 s    0.23 s        1.16 days    31709 years
10⁸   0.027 µs    0.10 s    2.66 s        115.7 days   3.17×10⁷ years
10⁹   0.030 µs    1 s       29.90 s       31.7 years
Order Notation
O: Upper Bounding Function
• Def: f(n) = O(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.
• How to show O (Big-Oh) relationships?
f(n) = O(g(n)) if lim(n→∞) f(n)/g(n) exists and is finite, including the case where the limit is 0.
Reading growth-rate comparisons against g(n):
  "=" : Θ(g(n)), the functions that grow at the same rate as g(n)
  "≤" : O(g(n)), the functions that grow no faster than g(n)
Some Examples
• Merge sort:
• Divide the n-element sequence to be sorted into two n/2-element sequences.
• Conquer: sort the two subsequences recursively using merge sort.
• Combine: merge the resulting two sorted n/2-element sequences.
Merge Sort: A Divide-and-Conquer Algorithm

MergeSort(A, p, r)             T(n)
1. if p < r then               Θ(1)
2.   q ← ⌊(p + r)/2⌋           Θ(1)
3.   MergeSort(A, p, q)        T(n/2)
4.   MergeSort(A, q + 1, r)    T(n/2)
5.   Merge(A, p, q, r)         Θ(n)

Example on [8 3 2 9 7 1 5 4]:
Split:  [8 3 2 9] [7 1 5 4] → [8 3] [2 9] [7 1] [5 4] → [8] [3] [2] [9] [7] [1] [5] [4]
Merge:  [3 8] [2 9] [1 7] [4 5] → [2 3 8 9] [1 4 5 7] → [1 2 3 4 5 7 8 9]
Recurrence: Analyzing Divide-and-Conquer Algorithms
• A recurrence describes a function recursively in terms of itself.
• Recurrence for a divide-and-conquer algorithm:
T(n) = Θ(1)                      if n ≤ c
T(n) = a·T(n/b) + D(n) + C(n)    otherwise
• a: # of subproblems
• n/b: size of the subproblems
• D(n): time to divide the problem of size n into subproblems
• C(n): time to combine the subproblem solutions to get the answer for the
problem of size n
• Merge sort:
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
• a = 2: two subproblems
• n/b = n/2: each subproblem has size n/2
• D(n) = Θ(1): compute the midpoint of the array
• C(n) = Θ(n): merging by scanning the sorted subarrays
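Unrolling the merge-sort recurrence (writing the Θ(n) term as cn for a constant c) shows where the n log n bound comes from:

```latex
\begin{align*}
T(n) &= 2T(n/2) + cn \\
     &= 4T(n/4) + 2cn \\
     &= 8T(n/8) + 3cn \\
     &\;\vdots \\
     &= 2^{k}T(n/2^{k}) + kcn \qquad \text{until } n/2^{k} = 1,\ \text{i.e. } k = \log_2 n \\
     &= nT(1) + cn\log_2 n \;=\; \Theta(n \log n).
\end{align*}
```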
Solving Recurrences
T(n) = Θ(1)              if n = 1
T(n) = 2T(n/2) + Θ(n)    if n > 1
For the general form T(n) = a·T(n/b) + O(nᵏ), the solution is:
T(n) = O(n^(log_b a))    if a > bᵏ
T(n) = O(nᵏ log n)       if a = bᵏ
T(n) = O(nᵏ)             if a < bᵏ
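Applying the middle case (a = bᵏ) to the merge-sort recurrence above:

```latex
a = 2,\quad b = 2,\quad k = 1
\quad\Rightarrow\quad a = b^{k}
\quad\Rightarrow\quad T(n) = O(n^{k}\log n) = O(n\log n).
```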
Class Exercise