# Outlines

Programming and Data Structures

Department of Computer Sc & Engg IIT Kharagpur

April 15, 2009

Dept of Computer Sc & Engg, IIT Kharagpur © C. Mandal


## Complexity

### Asymptotic Complexity

Suppose we determine that a program takes 8n + 5 steps to solve a problem of size n. What is the significance of the 8 and the +5?

- As n gets large, the +5 becomes insignificant.
- The 8 is inaccurate, as different operations require varying amounts of time.
- What is fundamental is that the time is linear in n.

Asymptotic complexity: as n gets large, ignore all lower-order terms and concentrate on the highest-order term only.


### Asymptotic Complexity (Contd.)

8n + 5 is said to grow asymptotically like n, and so does 119n − 45. This gives us a simplified approximation of the complexity of the algorithm, leaving out details that become insignificant for larger input sizes.


### Big-O Notation
We have talked of O(n), O(n²) and O(n³) before. The Big-O notation is used to express an upper bound on a function. If f(n) and g(n) are two functions, then we say f(n) ∈ O(g(n)) if there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n > n0.

- cg(n) dominates f(n) for n > n0 (for large n).
- This is read "f(n) is order g(n)", or "f(n) is big-O of g(n)".
- Loosely speaking, f(n) grows no faster than g(n).
- Sometimes people also write f(n) = O(g(n)), but that notation is misleading, as there is no straightforward equality involved.
- This characterisation is not tight: if f(n) ∈ O(n), then f(n) ∈ O(n²) as well.

### Big-Omega Notation
While discussing determinant evaluation by Cramer's rule we mentioned that the number of operations to be performed is worse than n!. The Big-Omega notation is used to express a lower bound on a function. If f(n) and g(n) are two functions, then we say f(n) ∈ Ω(g(n)) if there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n > n0.

- f(n) dominates cg(n) for n > n0 (for large n).
- Loosely speaking, f(n) grows at least as fast as g(n).
- Sometimes people also write f(n) = Ω(g(n)), but that notation is misleading, as there is no straightforward equality involved.
- This characterisation is also not tight.

### Big-Theta Notation

The Big-Theta notation is used to express the notion that a function g(n) is a good (preferably simpler) characterisation of another function f(n). If f(n) and g(n) are two functions such that f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)), then f(n) ∈ Θ(g(n)).

- Loosely speaking, f(n) grows like g(n).
- Sometimes people also write f(n) = Θ(g(n)), but that notation is misleading.
- This characterisation is tight.


### Summary

- If f(n) ∈ Θ(g(n)), we say that f(n) and g(n) grow at the same rate asymptotically.
- If f(n) ∈ O(g(n)) but f(n) ∉ Ω(g(n)), then we say that f(n) is asymptotically slower growing than g(n).
- If f(n) ∈ Ω(g(n)) but f(n) ∉ O(g(n)), then we say that f(n) is asymptotically faster growing than g(n).


### Sample Growth Functions

The functions below are given in ascending order of growth:

| Function | Name |
|---|---|
| O(k) = O(1) | Constant Time |
| O(log_b n) = O(log n) | Logarithmic Time |
| O(n) | Linear Time |
| O(n log n) | Log-linear Time |
| O(n²) | Quadratic Time |
| O(n³) | Cubic Time |
| ... | |
| O(kⁿ) | Exponential Time |
| O(n!) | Exponential Time |


## Common Recurrence Relations

### Sample Recurrences and Their Solutions

- T(N) = 1, for N = 1
- T(N) = T(N − 1) + 1, for N ≥ 2

Solution: T(N) = N ∈ O(N).

Show that this recurrence captures the running-time complexity of determining the maximum element, and of searching in an unsorted array.


### Sample Recurrences and Their Solutions (Contd.)

- T(N) = 1, for N = 1
- T(N) = T(N − 1) + N, for N ≥ 2

Solution: T(N) = N(N + 1)/2 ∈ O(N²).

Show that this recurrence captures the running-time complexity of bubble, insertion and selection sort.


### Sample Recurrences and Their Solutions (Contd.)

- T(N) = 1, for N = 1
- T(N) = T(N/2) + 1, for N ≥ 2

Solution: T(N) = lg N + 1 ∈ O(lg N).

Show that this recurrence captures the running-time complexity of binary search.


### Sample Recurrences and Their Solutions (Contd.)

- T(N) = 0, for N = 1
- T(N) = T(N/2) + N, for N ≥ 2

Solution: T(N) ≈ 2N ∈ O(N), since N + N/2 + N/4 + · · · is a geometric series bounded by 2N.

We have not yet examined a problem in this course whose behaviour is modelled by this recurrence relation.


### Sample Recurrences and Their Solutions (Contd.)

- T(N) = 0, for N = 1
- T(N) = 2T(N/2) + N, for N ≥ 2

Solution: T(N) = N lg N ∈ O(N lg N).

Show that this recurrence captures the running-time complexity of quicksort (in the best and average cases, where the pivot splits the array roughly evenly).


### Sample Recurrences and Their Solutions (Contd.)

- T(N) = 1, for N = 1
- T(N) = 2T(N − 1) + 1, for N ≥ 2

Solution: T(N) = 2^N − 1 ∈ O(2^N).

Show that this recurrence captures the running-time complexity of the Towers of Hanoi problem.
