
Analysis of Algorithms

Input → Algorithm → Output

An algorithm is a step-by-step procedure for
solving a problem in a finite amount of time.

Running Time (§3.1)

Most algorithms transform input objects into
output objects.
The running time of an algorithm typically grows
with the input size.
Average-case running time is often difficult to
determine.
We focus on the worst-case running time:
- Easier to analyze
- Crucial to applications such as games, finance,
  and robotics
[Figure: best-case, average-case, and worst-case running
time (ms) versus input size, for input sizes 1000-4000]

Experimental Studies (§3.1.1)

Write a program implementing the algorithm.
Run the program with inputs of varying size and
composition.
Use a function, like the built-in clock() function,
to get an accurate measure of the actual running time.
Plot the results.
[Figure: measured running time (ms) versus input size]

Limitations of Experiments

It is necessary to implement the algorithm, which
may be difficult.
Results may not be indicative of the running time on
other inputs not included in the experiment.
In order to compare two algorithms, the same hardware
and software environments must be used.

Theoretical Analysis

Uses a high-level description of the algorithm
instead of an implementation.
Characterizes running time as a function of the
input size, n.
Takes into account all possible inputs.
Allows us to evaluate the speed of an algorithm
independent of the hardware/software environment.

Pseudocode (§3.1.2)

High-level description of an algorithm:
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array.

Algorithm arrayMax(A, n)
  Input array A of n integers
  Output maximum element of A
  currentMax ← A[0]
  for i ← 1 to n − 1 do
    if A[i] > currentMax then
      currentMax ← A[i]
  return currentMax
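The arrayMax pseudocode above translates almost line for line into a program. A minimal sketch in Python (the function name mirrors the pseudocode; it is illustrative, not library code):

```python
def array_max(A):
    """Return the maximum element of a non-empty list A,
    following the arrayMax pseudocode."""
    current_max = A[0]            # currentMax <- A[0]
    for i in range(1, len(A)):    # for i <- 1 to n - 1 do
        if A[i] > current_max:    # if A[i] > currentMax then
            current_max = A[i]    #   currentMax <- A[i]
    return current_max            # return currentMax
```

Note the single pass over the array: each element is examined exactly once, which is what makes the later O(n) analysis possible.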

Pseudocode Details

Control flow:
- if … then … [else …]
- while … do …
- repeat … until …
- for … do …
- Indentation replaces braces

Method declaration:
  Algorithm method(arg [, arg…])
    Input …
    Output …

Method/function call: var.method(arg [, arg…])
Return value: return expression
Expressions:
- ← assignment (like = in C++)
- = equality testing (like == in C++)
- Superscripts (e.g., n²) and other mathematical
  formatting allowed

The Random Access Machine (RAM) Model

- A CPU
- A potentially unbounded bank of memory cells,
  each of which can hold an arbitrary number or
  character

Memory cells are numbered, and accessing any cell
in memory takes unit time.

Primitive Operations

Basic computations performed by an algorithm.
Identifiable in pseudocode.
Largely independent of the programming language.
Exact definition not important (we will see why later).
Assumed to take a constant amount of time in the
RAM model.

Examples:
- Evaluating an expression
- Assigning a value to a variable
- Indexing into an array
- Calling a method
- Returning from a method

Counting Primitive Operations (§3.4.1)

By inspecting the pseudocode, we can determine the
maximum number of primitive operations executed by
an algorithm, as a function of the input size.

Algorithm arrayMax(A, n)            # operations
  currentMax ← A[0]                 2
  for i ← 1 to n − 1 do             2 + n
    if A[i] > currentMax then       2(n − 1)
      currentMax ← A[i]             2(n − 1)
    { increment counter i }         2(n − 1)
  return currentMax                 1
                           Total    7n − 1
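The tally in the table can be mirrored in code. The sketch below computes the maximum while accumulating the slide's per-line worst-case counts; which steps count as one "operation" follows the table's bookkeeping, and the assignment to currentMax is charged on every iteration (the worst case), even when the branch is not taken:

```python
def array_max_with_count(A):
    """arrayMax together with the slide's worst-case
    operation tally, which totals 7n - 1."""
    n = len(A)
    current_max = A[0]
    ops = 2                  # currentMax <- A[0]: index + assign
    ops += 2 + n             # for i <- 1 to n - 1: init + loop tests
    for i in range(1, n):
        ops += 2             # if A[i] > currentMax: index + compare
        if A[i] > current_max:
            current_max = A[i]
        ops += 2             # currentMax <- A[i], charged every
                             # iteration per the worst-case tally
        ops += 2             # increment counter i
    ops += 1                 # return currentMax
    return current_max, ops
```

For an input of size n this always reports 2 + (2 + n) + 6(n − 1) + 1 = 7n − 1 operations, matching the table's total.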

Estimating Running Time

Algorithm arrayMax executes 7n − 1 primitive
operations in the worst case. Define:
  a = time taken by the fastest primitive operation
  b = time taken by the slowest primitive operation
Let T(n) be the worst-case time of arrayMax. Then
  a(7n − 1) ≤ T(n) ≤ b(7n − 1)
Hence, the running time T(n) is bounded by two
linear functions.

Growth Rate of Running Time

Changing the hardware/software environment
- affects T(n) by a constant factor, but
- does not alter the growth rate of T(n).
The linear growth rate of the running time T(n) is
an intrinsic property of algorithm arrayMax.

Growth Rates

Growth rates of functions:
- Linear ≈ n
- Quadratic ≈ n²
- Cubic ≈ n³

In a log-log chart, the slope of the line
corresponds to the growth rate of the function.
[Figure: log-log plot of T(n) for linear, quadratic,
and cubic growth]

Constant Factors

The growth rate is not affected by
- constant factors or
- lower-order terms.

Examples:
- 10²n + 10⁵ is a linear function
- 10⁵n² + 10⁸n is a quadratic function
[Figure: log-log plot showing 10²n + 10⁵ parallel to the
linear curve and 10⁵n² + 10⁸n parallel to the quadratic curve]

Big-Oh Notation (§3.5)

Given functions f(n) and g(n), we say that f(n) is
O(g(n)) if there are positive constants c and n0
such that
  f(n) ≤ cg(n) for n ≥ n0

Example: 2n + 10 is O(n)
- 2n + 10 ≤ cn
- (c − 2)n ≥ 10
- n ≥ 10/(c − 2)
- Pick c = 3 and n0 = 10
[Figure: log-log plot of 3n, 2n + 10, and n]

Big-Oh Example

Example: the function n² is not O(n)
- n² ≤ cn
- n ≤ c
- The above inequality cannot be satisfied,
  since c must be a constant
[Figure: log-log plot of n², 100n, 10n, and n]
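Both claims can be spot-checked numerically. The check below verifies the slide's choice of c = 3, n0 = 10 over a finite range, and illustrates why no constant works for n² versus n: the c that would be needed grows with n. (A finite check illustrates but does not prove the asymptotic statement.)

```python
# 2n + 10 <= 3n holds for every n >= 10 in the tested range.
assert all(2 * n + 10 <= 3 * n for n in range(10, 10_001))

# For n^2 <= c*n we would need c >= n, so the "constant"
# required keeps growing: n^2 is not O(n).
required_c = [n * n / n for n in (10, 100, 1000)]
assert required_c == [10.0, 100.0, 1000.0]
```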

More Big-Oh Examples

7n − 2 is O(n):
  need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0;
  this is true for c = 7 and n0 = 1.

3n³ + 20n² + 5 is O(n³):
  need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0;
  this is true for c = 4 and n0 = 21.

3 log n + log log n is O(log n):
  need c > 0 and n0 ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n0;
  this is true for c = 4 and n0 = 2.

Big-Oh and Growth Rate

The big-Oh notation gives an upper bound on the
growth rate of a function.
The statement "f(n) is O(g(n))" means that the growth
rate of f(n) is no more than the growth rate of g(n).
We can use the big-Oh notation to rank functions
according to their growth rate.

                    f(n) is O(g(n))    g(n) is O(f(n))
  g(n) grows more   Yes                No
  f(n) grows more   No                 Yes
  Same growth       Yes                Yes
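The constants in these examples can also be checked numerically over a finite range. One detail worth noticing: c = 4 for the cubic example genuinely fails just below n0 = 21, so the slide's n0 is tight. Logarithms are taken base 2 here, an arbitrary choice since the base only changes the constant factor:

```python
import math

f = lambda n: 3 * n**3 + 20 * n**2 + 5
assert f(20) > 4 * 20**3                              # c = 4 fails at n = 20,
assert all(f(n) <= 4 * n**3 for n in range(21, 2000)) # but holds from n0 = 21

g = lambda n: 3 * math.log2(n) + math.log2(math.log2(n))
assert all(g(n) <= 4 * math.log2(n) for n in range(2, 2000))
```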

Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is
O(n^d), i.e.,
1. Drop lower-order terms
2. Drop constant factors
Use the smallest possible class of functions:
- Say "2n is O(n)" instead of "2n is O(n²)"
Use the simplest expression of the class:
- Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines
the running time in big-Oh notation.
To perform the asymptotic analysis:
- We find the worst-case number of primitive operations
  executed as a function of the input size
- We express this function with big-Oh notation
Example:
- We determine that algorithm arrayMax executes at most
  7n − 1 primitive operations
- We say that algorithm arrayMax "runs in O(n) time"
Since constant factors and lower-order terms are
eventually dropped anyhow, we can disregard them
when counting primitive operations.

Computing Prefix Averages

We further illustrate asymptotic analysis with two
algorithms for prefix averages.
The i-th prefix average of an array X is the average
of the first (i + 1) elements of X:
  A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)
Computing the array A of prefix averages of another
array X has applications to financial analysis.
[Figure: bar chart of an array X and its prefix
averages A]

Prefix Averages (Quadratic)

The following algorithm computes prefix averages in
quadratic time by applying the definition.

Algorithm prefixAverages1(X, n)
  Input array X of n integers
  Output array A of prefix averages of X    # operations
  A ← new array of n integers               n
  for i ← 0 to n − 1 do                     n
    s ← X[0]                                n
    for j ← 1 to i do                       1 + 2 + … + (n − 1)
      s ← s + X[j]                          1 + 2 + … + (n − 1)
    A[i] ← s / (i + 1)                      n
  return A                                  1
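A direct sketch of prefixAverages1 in Python (illustrative, mirroring the pseudocode): each prefix sum is recomputed from scratch, which is exactly where the nested loop and the quadratic cost come from.

```python
def prefix_averages1(X):
    """Quadratic-time prefix averages, per prefixAverages1:
    recompute each prefix sum from scratch."""
    n = len(X)
    A = [0.0] * n
    for i in range(n):
        s = X[0]                    # s <- X[0]
        for j in range(1, i + 1):   # for j <- 1 to i do
            s += X[j]               #   s <- s + X[j]
        A[i] = s / (i + 1)          # A[i] <- s / (i + 1)
    return A
```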

Arithmetic Progression

The running time of prefixAverages1 is
O(1 + 2 + … + n).
The sum of the first n integers is n(n + 1)/2.
- There is a simple visual proof of this fact.
Thus, algorithm prefixAverages1 runs in O(n²) time.
[Figure: unit squares stacked in columns of height
1, 2, …, n, illustrating that 1 + 2 + … + n = n(n + 1)/2]

Prefix Averages (Linear)

The following algorithm computes prefix averages in
linear time by keeping a running sum.

Algorithm prefixAverages2(X, n)
  Input array X of n integers
  Output array A of prefix averages of X    # operations
  A ← new array of n integers               n
  s ← 0                                     1
  for i ← 0 to n − 1 do                     n
    s ← s + X[i]                            n
    A[i] ← s / (i + 1)                      n
  return A                                  1

Algorithm prefixAverages2 runs in O(n) time.
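The linear version, sketched the same way: carrying the running sum s across iterations removes the inner loop, so each element is touched once. Both versions produce identical output; only the cost differs.

```python
def prefix_averages2(X):
    """Linear-time prefix averages, per prefixAverages2:
    carry a running sum s across iterations."""
    A = [0.0] * len(X)
    s = 0
    for i, x in enumerate(X):
        s += x                # s <- s + X[i]
        A[i] = s / (i + 1)    # A[i] <- s / (i + 1)
    return A
```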

Math You Need to Review

Summations (Sec. 1.3.1)
Logarithms and Exponents (Sec. 1.3.2)
- Properties of logarithms:
    logb(xy) = logb x + logb y
    logb(x/y) = logb x − logb y
    logb(x^a) = a logb x
    logb a = logx a / logx b
- Properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(loga b)
    b^c = a^(c · loga b)
Proof techniques (Sec. 1.3.3)
Basic probability (Sec. 1.3.4)

Relatives of Big-Oh

big-Omega
- f(n) is Ω(g(n)) if there is a constant c > 0 and an
  integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
big-Theta
- f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0
  and an integer constant n0 ≥ 1 such that
  c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n0
little-oh
- f(n) is o(g(n)) if, for any constant c > 0, there is an
  integer constant n0 ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n0
little-omega
- f(n) is ω(g(n)) if, for any constant c > 0, there is an
  integer constant n0 ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n0
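The logarithm and exponent identities above are easy to sanity-check numerically. The sample values b = 2, x = 8, y = 4, a = 3, c = 5 below are arbitrary illustrative choices:

```python
import math

b, x, y, a, c = 2.0, 8.0, 4.0, 3.0, 5.0
log_b = lambda v: math.log(v, b)

assert math.isclose(log_b(x * y), log_b(x) + log_b(y))    # log(xy)
assert math.isclose(log_b(x / y), log_b(x) - log_b(y))    # log(x/y)
assert math.isclose(log_b(x ** a), a * log_b(x))          # log(x^a)
assert math.isclose(log_b(a), math.log(a, 10) / math.log(b, 10))  # base change
assert math.isclose(a ** math.log(b, a), b)               # b = a^(log_a b)
assert math.isclose(b ** c, a ** (c * math.log(b, a)))    # b^c = a^(c log_a b)
```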

Intuition for Asymptotic Notation

Big-Oh
- f(n) is O(g(n)) if f(n) is asymptotically less than
  or equal to g(n)
big-Omega
- f(n) is Ω(g(n)) if f(n) is asymptotically greater
  than or equal to g(n)
big-Theta
- f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
little-oh
- f(n) is o(g(n)) if f(n) is asymptotically strictly
  less than g(n)
little-omega
- f(n) is ω(g(n)) if f(n) is asymptotically strictly
  greater than g(n)

Example Uses of the Relatives of Big-Oh

5n² is Ω(n²):
  f(n) is Ω(g(n)) if there is a constant c > 0 and an
  integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for
  n ≥ n0; let c = 5 and n0 = 1.
5n² is Ω(n):
  f(n) is Ω(g(n)) if there is a constant c > 0 and an
  integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for
  n ≥ n0; let c = 1 and n0 = 1.
5n² is ω(n):
  f(n) is ω(g(n)) if, for any constant c > 0, there is
  an integer constant n0 ≥ 0 such that f(n) ≥ c·g(n)
  for n ≥ n0; need 5n0² ≥ c·n0, so given c, any
  n0 ≥ c/5 satisfies this.
