
Running Time

• The running time of an algorithm is the number of primitive steps that are executed.
• Analysis of Insertion Sort
  Best case:  T(n) = an + b, where a and b depend on the statement costs ci
  Worst case: T(n) = an² + bn + c, where a, b, and c depend on the ci
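For reference, a minimal C++ sketch of insertion sort (assuming int data; not from the slides), with comments marking where the best and worst cases arise:

#include <vector>
using namespace std;

void insertion_sort(vector<int>& A)
{
    for (size_t j = 1; j < A.size(); ++j) {
        int key = A[j];
        size_t i = j;
        // Best case (already-sorted input): this loop never iterates,
        // so the total work is linear, T(n) = an + b.
        // Worst case (reverse-sorted input): it iterates j times for
        // each j, giving T(n) = an² + bn + c.
        while (i > 0 && A[i-1] > key) {
            A[i] = A[i-1];   // shift the larger element one slot right
            --i;
        }
        A[i] = key;          // drop key into its sorted position
    }
}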
Asymptotic Performance
• Asymptotic performance: how does the algorithm behave as the problem size gets very large?
  • Running time
  • Memory/storage requirements
• Remember that we use the RAM model:
  • All memory is equally expensive to access
  • No concurrent operations
  • All reasonable instructions take unit time
    • Except, of course, function calls
  • Constant word size
    • Unless we are explicitly manipulating bits
Analysis
• Best case: the time taken on the input that yields the best possible performance. You cannot do better, so this is a lower bound.
• Worst case
  • Provides an upper bound on the running time
  • Longest running time
• Average case
  • Provides the expected running time
  • Use a randomized algorithm, which makes random choices, to allow probabilistic analysis
  • Random (equally likely) inputs
  • Real-life inputs
Order of Growth
• Simplifications
  • Ignore actual and abstract statement costs
  • Order of growth is the interesting measure: the highest-order term is what counts; we consider only the leading term of the formula, an².
  • We neglect the lower-order terms as well as constant coefficient factors, since they are less significant than the rate of growth in determining computational efficiency for large inputs.
• We say insertion sort's running time is O(n²)
  • Read O as "Big-O" (you'll also hear it as "order")
• Note: even though it is correct to say "7n − 3 is O(n³)", a better statement is "7n − 3 is O(n)"; that is, one should make the approximation as tight as possible.
• Simple rule: drop lower-order terms and constant factors
  7n − 3 is O(n)
  8n² log n + 5n² + n is O(n² log n)
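To see that such bounds really hold, one can exhibit explicit constants (one possible choice, not the only one):

\[ 7n - 3 \le 7n \ \text{ for all } n \ge 1, \text{ so } c = 7,\ n_0 = 1 \text{ witnesses } 7n - 3 = O(n); \]
\[ 8n^2\log n + 5n^2 + n \le 14\,n^2\log n \ \text{ for all } n \ge 2, \text{ since } 5n^2 \le 5n^2\log n \text{ and } n \le n^2\log n \text{ once } \log n \ge 1. \]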
Asymptotic Notation (terminology)
• Special classes of algorithms:
  constant:     O(1)
  logarithmic:  O(log n)
  linear:       O(n)
  quadratic:    O(n²)
  polynomial:   O(nᵏ), k ≥ 1
  exponential:  O(aⁿ), a > 1
• NOTE: one algorithm is more efficient than another if its worst-case running time has a lower order of growth.
• Comparing asymptotic running times:
  - an algorithm that runs in O(n) time is better than one that runs in O(n²) time
  - similarly, O(log n) is better than O(n)
  - hierarchy of functions: log n << n << n² << n³ << 2ⁿ
Categories of algorithm efficiency
Efficiency           Big O

Constant             O(1)
Logarithmic          O(log n)
Linear               O(n)
Linear logarithmic   O(n log n)
Quadratic            O(n²)
Polynomial           O(nᵏ)
Exponential          O(cⁿ)
Factorial            O(n!)
Designing Algorithms

Divide and Conquer algorithms


 This technique divides a given problem into smaller instances
of the same problem, Solves the smaller problems recursively
and combine solutions to the subproblems to obtain the solution
of original problem.
 We Compute the Worst Case Running time of algo which is
detemined by Recurrence relation
Motivation
• Given a large data set
• (DIVIDE): partition the data set into smaller sets
• (CONQUER): solve the problem for each of the smaller sets
• (COMBINE): combine the results of the smaller sets
Merge Sort
• Sort: 10, 4, 20, 8, 15, 2, 1, −5

            10  4  20  8  15  2  1  −5
• DIVIDE:   10  4  20  8 | 15  2  1  −5
• DIVIDE:   10  4 | 20  8 | 15  2 | 1  −5
• Sort (CONQUER):
             4 10 |  8 20 |  2 15 | −5  1
• COMBINE:   4  8 10 20   | −5  1  2 15
• COMBINE:  −5  1  2  4  8 10 15 20
• Brute force: n²; merge sort: n log n
What did we do?
• We break a larger problem down into smaller sets (DIVIDE).
• After solving the smaller problems (CONQUER), we combine the answers (COMBINE).
MERGE SORT

• The merge sort procedure sorts the elements in the subarray A[low..high], e.g. A[p..r].
• If p ≥ r, the subarray has at most one element and is therefore already sorted.
• Otherwise we divide it into A[p..q] and A[q+1..r].
Merge_sort(A,p,r)

Merge_sort procedure sorts the elements in subarray A [p..r]

1  if p < r
2     then q ← ⌊(p + r)/2⌋
3          Merge_sort(A, p, q)
4          Merge_sort(A, q+1, r)
5          MERGE(A, p, q, r)
Merging Two Sequences

Pseudo-code for merging two sorted sequences into a single sorted sequence, replacing the current subarray A[p..r].
Procedure MERGE(A, p, q, r)                                              Effort
 1  n1 ← q − p + 1         computes the length n1 of A[p..q]             Θ(1)
 2  n2 ← r − q             computes the length n2 of A[q+1..r]           Θ(1)
 3  create arrays L[1..n1+1] and R[1..n2+1]                              Θ(1)
 4  for i ← 1 to n1                                                      Θ(n1+n2)
 5      do L[i] ← A[p+i−1]  copies the subarray A[p..q] into L[1..n1]      "
 6  for j ← 1 to n2                                                        "
 7      do R[j] ← A[q+j]    copies the subarray A[q+1..r] into R[1..n2]    "
 8  L[n1+1] ← ∞                                                          Θ(1)
 9  R[n2+1] ← ∞             putting sentinels at the end of L and R      Θ(1)
10  i ← 1                   lines 10–17 perform r − p + 1 basic steps    Θ(1)
11  j ← 1                   for merging the two subsequences             Θ(1)
12  for k ← p to r                                                       Θ(n)
13      do if L[i] ≤ R[j]
14            then A[k] ← L[i]
15                 i ← i + 1
16            else A[k] ← R[j]
17                 j ← j + 1

• Here the MERGE procedure takes Θ(n) time, where n = r − p + 1 is the number of elements being merged.
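A compilable C++ rendering of the two procedures above (a sketch assuming int elements; INT_MAX stands in for the ∞ sentinels, and 0-based indices replace the pseudocode's 1-based ones):

#include <climits>
#include <vector>
using namespace std;

// Merge the sorted subarrays A[p..q] and A[q+1..r]; Θ(n) for n = r-p+1.
void merge(vector<int>& A, int p, int q, int r)
{
    vector<int> L(A.begin() + p,     A.begin() + q + 1);  // copy A[p..q]
    vector<int> R(A.begin() + q + 1, A.begin() + r + 1);  // copy A[q+1..r]
    L.push_back(INT_MAX);                                 // sentinel for ∞
    R.push_back(INT_MAX);
    int i = 0, j = 0;
    for (int k = p; k <= r; ++k)        // r-p+1 basic merge steps
        A[k] = (L[i] <= R[j]) ? L[i++] : R[j++];
}

// Sort A[p..r] by divide and conquer.
void merge_sort(vector<int>& A, int p, int r)
{
    if (p < r) {
        int q = (p + r) / 2;      // divide
        merge_sort(A, p, q);      // conquer left half
        merge_sort(A, q + 1, r);  // conquer right half
        merge(A, p, q, r);        // combine
    }
}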
Analyzing divide and conquer algorithms
• The running time of divide and conquer algorithms can be described by a RECURRENCE RELATION.
• A recurrence for the running time of a divide and conquer algorithm is based on the three steps:
1. Let T(n) be the running time on a problem of size n. If the problem size is small enough, say n ≤ c, a straightforward solution takes constant time, say Θ(1).
2. Suppose our division of the problem generates a subproblems, each of size 1/b the size of the original.
3. If it takes D(n) time to divide the problem into subproblems and C(n) time to combine the solutions into the solution of the original problem, we get the relation:
   T(n) = Θ(1)                      if n ≤ c
        = aT(n/b) + D(n) + C(n)     otherwise
For merge sort: worst-case running time
If there is only 1 element, the algorithm takes constant time, but if n > 1 we break the running time down as follows:
Divide: computing the middle of the subarray takes constant time, thus D(n) = Θ(1).
Conquer: we recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.
Combine: the MERGE procedure takes Θ(n), so C(n) = Θ(n).
Analysis of Merge Sort
Statement                          Effort

Merge_Sort(A, p, r) {              T(n)
    if (p < r) {                   Θ(1)
        q = floor((p + r) / 2);    Θ(1)
        Merge_Sort(A, p, q);       T(n/2)
        Merge_Sort(A, q+1, r);     T(n/2)
        Merge(A, p, q, r);         Θ(n)
    }
}
• Worst-case running time for merge sort, after adding D(n) and C(n):
  T(n) = Θ(1)            when n = 1, and
       = 2T(n/2) + Θ(n)  when n > 1
 So what (more succinctly) is T(n)?
Recurrences
• The expression

      T(n) = c               if n = 1
           = 2T(n/2) + cn    if n > 1

  is a recurrence.
• Recurrence: an equation that describes a function in terms of its value on smaller inputs.
Recurrence Examples

  s(n) = 0               if n = 0        s(n) = 0               if n = 0
       = c + s(n−1)      if n > 0             = n + s(n−1)      if n > 0

  T(n) = c               if n = 1        T(n) = c               if n = 1
       = 2T(n/2) + c     if n > 1             = aT(n/b) + cn    if n > 1
Growth of Function
• When the input size is very large, the better way to analyze an algorithm is asymptotic analysis.
• Asymptotic efficiency: we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the input size increases without bound.
• To simplify asymptotic analysis, certain standards and asymptotic notations have been defined.
ASYMPTOTIC NOTATIONS
Θ-notation
For a given function g(n), Θ(g(n)) is given by:

Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀
            such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀ }

g(n) is an asymptotically tight bound for f(n).
Example
Θ(g(n)) = { f(n) : ∃ positive constants c₁, c₂, and n₀,
            such that ∀ n ≥ n₀, 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) }
• 10n² − 3n = Θ(n²)
• What constants for n₀, c₁, and c₂ will work?
• Making c₁ a little smaller than the leading coefficient, and c₂ a little bigger, permits the inequalities in the definition of Θ to be satisfied.
• Exercise: prove that 3n + 2 = Θ(n)
• Verify: 6n³ ≠ Θ(n²)
• How about 2²ⁿ = Θ(2ⁿ)?
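For the exercise 3n + 2 = Θ(n), one workable choice of constants (there are many):

\[ 3n \le 3n + 2 \le 4n \quad \text{for all } n \ge 2, \]

so c₁ = 3, c₂ = 4, n₀ = 2 satisfy the definition, and 3n + 2 = Θ(n).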
Example

• There is no unique set of values for n₀ and c in proving an asymptotic bound, but whatever value of n₀ you take, the bound must hold for all n > n₀. (Note again: for all n > n₀.)

Example: to prove f(n) = O(g(n))

Suppose we get the values c = 100 and n₀ = 5. This means the asymptotic bound is valid for c = 100 and all n > 5. For the same question, we could also have had c = 105 and n₀ = 1, which means the asymptotic bound is valid for c = 105 and all n > 1. Please note that, in each case, the bound must hold for all values of n greater than the chosen n₀.
O-notation

For a given function g(n), O(g(n)) is given by:

O(g(n)) = { f(n) : ∃ positive constants c and n₀ such that
            0 ≤ f(n) ≤ cg(n) for all n ≥ n₀ }

g(n) is an asymptotic upper bound for f(n).

NOTE: f(n) = Θ(g(n)) ⇒ f(n) = O(g(n)), since Θ is a stronger notation than O; thus Θ(g(n)) ⊆ O(g(n)).
Examples:
• Show that 100n + 6 = O(n)
• Any linear function an + b is in O(n²). How?
• Show that 3n³ = O(n⁴) for appropriate c and n₀.
• Show that 3n³ = O(n³) for appropriate c and n₀.
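As an illustration of the second claim (any linear function an + b is in O(n²)), a short sketch assuming a > 0:

\[ an + b \le an^2 + |b|\,n^2 = (a + |b|)\,n^2 \quad \text{for all } n \ge 1, \]

so c = a + |b| and n₀ = 1 witness an + b = O(n²).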
Ω-notation

For a given function g(n), Ω(g(n)) is given by:

Ω(g(n)) = { f(n) : ∃ positive constants c and n₀ such that
            0 ≤ cg(n) ≤ f(n) for all n ≥ n₀ }

g(n) is an asymptotic lower bound for f(n).

NOTE: f(n) = Θ(g(n)) ⇒ f(n) = Ω(g(n)).
      Θ(g(n)) ⊆ Ω(g(n)).
Examples

• 100n + 5 ≠ Ω(n²)
  To show this, we would need c, n₀ such that 0 ≤ cn² ≤ 100n + 5 for all n ≥ n₀.
  cn² ≤ 100n + 5  ⇒  c ≤ 100/n + 5/n²
  So for n₀ = 1 we find c ≤ 105, but what about n = 2 with c = 105? The inequality does not hold.
• Contradiction: since 100/n + 5/n² shrinks toward 0 as n grows, no positive constant c can satisfy the inequality for all n ≥ n₀.
Examples
• 100n + 5 ≠ Ω(n²)
  For the above question, a solution was proposed as follows to try to prove 100n + 5 = Ω(n²):
  cn² ≤ 100n + 5
  Let n₀ = 100. That gives us
  c × 10000 ≤ 100 × 100 + 5
  c ≤ 10005/10000
  So the constant c = 1.0005.
  But does this work when c = 1.0005 and n = 200? (Remember, it should hold for all n > n₀; here n is 200 and n₀ is 100.)
  No, it doesn't work. Hence you cannot prove that 100n + 5 = Ω(n²); that is,
  100n + 5 ≠ Ω(n²).
Running Times
• "Running time is O(f(n))" ⇒ the worst case is O(f(n))
• An O(f(n)) bound on the worst-case running time implies an O(f(n)) bound on the running time of every input.
• A Θ(f(n)) bound on the worst-case running time does NOT imply a Θ(f(n)) bound on the running time of every input.
• "Running time is Ω(f(n))" ⇒ the best case is Ω(f(n))
• Can still say "worst-case running time is Ω(f(n))"
  • Means the worst-case running time is given by some unspecified function g(n) ∈ Ω(f(n)).
Asymptotic Notations
• A way to describe the behavior of functions in the limit
• Abstracts away low-order terms and constant factors
• How we indicate the running times of algorithms
• Describes the running time of an algorithm as n grows to ∞
• O notation: asymptotic "less than":    f(n) "≤" g(n)
• Ω notation: asymptotic "greater than": f(n) "≥" g(n)
• Θ notation: asymptotic "equality":     f(n) "=" g(n)
Asymptotic Notation in Equations
• Can use asymptotic notation in equations to replace expressions containing lower-order terms.
• For example,
  4n³ + 3n² + 2n + 1 = 4n³ + 3n² + Θ(n)
                     = 4n³ + Θ(n²) = Θ(n³).   How to interpret?
• In equations, Θ(f(n)) always stands for an anonymous function g(n) ∈ Θ(f(n)).
• In the example above, Θ(n²) stands for 3n² + 2n + 1.
Relations Between Θ, O, Ω
Practical Complexity

[Plot: growth of f(n) = log n, n, n log n, n², n³, and 2ⁿ for n = 1 to 20; the vertical axis runs from 0 to 250.]
Relations Between Θ, Ω, O
Theorem: For any two functions g(n) and f(n),
         f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).

• That is, Θ(g(n)) = O(g(n)) ∩ Ω(g(n))
• In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
Mappings for n²

[Venn diagram: Θ(n²) is the intersection of Ω(n²) and O(n²).]
o-notation
For a given function g(n), the set little-o:
o(g(n)) = { f(n) : for any constant c > 0, ∃ n₀ > 0 such that
            ∀ n ≥ n₀, we have 0 ≤ f(n) < cg(n) }.
Ex: 2n = o(n²) but 2n² ≠ o(n²)

f(n) becomes insignificant relative to g(n) as n approaches infinity:
    lim (n→∞) [f(n) / g(n)] = 0

g(n) is an upper bound for f(n) that is not asymptotically tight.

Observe the difference in this definition from the previous ones. Why? Here the bound must hold for every positive constant c, whereas O requires it only for some constant c.
ω-notation
For a given function g(n), the set little-omega:

ω(g(n)) = { f(n) : for any constant c > 0, ∃ n₀ > 0 such that
            ∀ n ≥ n₀, we have 0 ≤ cg(n) < f(n) }.
Example: n²/2 = ω(n), but n²/2 ≠ ω(n²)

f(n) becomes arbitrarily large relative to g(n) as n approaches infinity:
    lim (n→∞) [f(n) / g(n)] = ∞.

g(n) is a lower bound for f(n) that is not asymptotically tight.
Recurrences
• What are recurrence relations?
• What is their relationship to algorithm design?
• Methods to solve them:
  • Substitution (induction in disguise)
  • Recursion trees
  • Master theorem

What Are Recurrence Relations?
Def.: Recurrence = an equation or inequality that describes a function in terms of its value on smaller inputs, and one or more base cases
E.g.: T(n) = T(n−1) + n
• A recurrence relation expresses the value of a function f for an argument n in terms of the values of f for arguments less than n.
Examples: f(n) = 1              if n ≤ 1
               = 2f(n/2) + n    if n > 1
Recurrence Relations and Recursive Algorithms

• The running time of a recursive algorithm is easily expressed using a recurrence relation.
• Example: (Merge-Sort)
  Merge-Sort(A, p, r)
  1  if r > p
  2     then q ← ⌊(p+r)/2⌋
  3          Merge-Sort(A, p, q)
  4          Merge-Sort(A, q+1, r)
  5          Merge(A, p, q, r)

Running time:
  T(n) = O(1)            if n ≤ 1
       = 2T(n/2) + O(n)  if n > 1
Solving Recurrence Relations

• Given two algorithms for the same problem, with running times
  Algorithm 1: T1(n) = 2T1(n/2) + O(n lg n)
  Algorithm 2: T2(n) = 3T2(n/2) + O(n)
• Which one is faster?
  We need closed forms of T1(n) and T2(n) (expressions for T1(n) and T2(n) that are not recurrence relations).
• To "solve" a recurrence relation means to derive such a closed form from the recurrence relation.
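For the two algorithms above, the master theorem introduced later in this section gives the closed forms (a sketch: Case 2 with k = 1 for Algorithm 1; Case 1 for Algorithm 2, since log₂ 3 ≈ 1.585):

\[ T_1(n) = \Theta(n \log^2 n), \qquad T_2(n) = \Theta(n^{\log_2 3}) \approx \Theta(n^{1.585}), \]

so Algorithm 1 is asymptotically faster.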
Methods for Solving Recurrence Relations

Substitution method:
• Make a guess
• Verify the guess using induction
Recursion trees:
• Visualize how the recurrence unfolds
• May lead to a guess, to be verified using substitution
• If done carefully, may lead to an exact solution
Master theorem:
• "Cook-book" solution to a common class of recurrence relations
Substitution Method

• Three steps:
  1. Guess the form of the solution.
  2. Prove that the guess is correct, assuming it holds for all n less than some n₀ (inductive step).
  3. Verify the guess for all n ≤ n₀ (base case).
• Why do we switch the two parts of the inductive proof? Our guess is vague.
  Example: T(n) ≤ cn lg n (that is, T(n) = O(n lg n))
• We do not know c, and we do not want to know; we do not care.
• The inductive step may work only for certain values of c.
• The base case usually works for any value of c.
Guess-and-Test Method
• In the guess-and-test method, we guess a closed-form solution and then try to prove it is true by induction:

  T(n) = b                      if n < 2
       = 2T(n/2) + bn log n     if n ≥ 2

• Guess: T(n) < cn log n.

  T(n) = 2T(n/2) + bn log n
       ≤ 2(c(n/2) log(n/2)) + bn log n
       = cn (log n − log 2) + bn log n
       = cn log n − cn + bn log n

• Wrong: we cannot make this last line be less than cn log n.
Guess-and-Test Method
• Recall the recurrence equation:

  T(n) = b                      if n < 2
       = 2T(n/2) + bn log n     if n ≥ 2

• Guess #2: T(n) < cn log² n.

  T(n) = 2T(n/2) + bn log n
       ≤ 2(c(n/2) log²(n/2)) + bn log n
       = cn (log n − log 2)² + bn log n
       = cn log² n − 2cn log n + cn + bn log n
       ≤ cn log² n    if c > b.

• So, T(n) is O(n log² n).
• In general, to use this method, you need to have a good guess and you need to be good at induction proofs.
Master Method
Let a ≥ 1 and b > 1, let f(n) be a function over the positive integers, and let T(n) be given by the following recurrence:
  T(n) = aT(n/b) + f(n)
Then the following three cases apply:

1. if f(n) is O(n^(log_b a − ε)) for some ε > 0, then T(n) is Θ(n^(log_b a))
2. if f(n) is Θ(n^(log_b a) logᵏ n), then T(n) is Θ(n^(log_b a) logᵏ⁺¹ n)
3. if f(n) is Ω(n^(log_b a + ε)) for some ε > 0, then T(n) is Θ(f(n)),
   provided a·f(n/b) ≤ δ·f(n) for some δ < 1.
Strategy
• Extract a, b, and f(n) from a given recurrence
• Determine n^(log_b a)
• Compare f(n) and n^(log_b a) asymptotically
• Determine the appropriate case of the master theorem, and apply it
• Example: merge sort
  T(n) = 2T(n/2) + Θ(n)
  a = 2, b = 2; n^(log_b a) = n^(log₂ 2) = n = Θ(n)
  Also f(n) = Θ(n)
  ⇒ Case 2 (k = 0): T(n) = Θ(n^(log_b a) lg n) = Θ(n lg n)
Master Method, Example 1
• The form:
  T(n) = c                if n < d
       = aT(n/b) + f(n)   if n ≥ d
• The Master Theorem:
  1. if f(n) is O(n^(log_b a − ε)), then T(n) is Θ(n^(log_b a))
  2. if f(n) is Θ(n^(log_b a) logᵏ n), then T(n) is Θ(n^(log_b a) logᵏ⁺¹ n)
  3. if f(n) is Ω(n^(log_b a + ε)), then T(n) is Θ(f(n)),
     provided a·f(n/b) ≤ δ·f(n) for some δ < 1.
• Example:
  T(n) = 4T(n/2) + n
  Solution: log_b a = 2, since f(n) = n = O(n^(log_b a − 1)).
  Here ε = 1, so Case 1 says T(n) = Θ(n²).
Master Method, Example 2
• The form:
  T(n) = c                if n < d
       = aT(n/b) + f(n)   if n ≥ d
• The Master Theorem:
  1. if f(n) is O(n^(log_b a − ε)), then T(n) is Θ(n^(log_b a))
  2. if f(n) is Θ(n^(log_b a) logᵏ n), then T(n) is Θ(n^(log_b a) logᵏ⁺¹ n)
  3. if f(n) is Ω(n^(log_b a + ε)), then T(n) is Θ(f(n)),
     provided a·f(n/b) ≤ δ·f(n) for some δ < 1.
• Example:
  T(n) = 2T(n/2) + n log n
  Solution:
  Here a = 2, b = 2, log_b a = 1, f(n) = n log n, k = 1.
  f(n) = n log n = Θ(n^(log_b a) log¹ n), so Case 2 says T(n) = Θ(n log² n).
Example 3

T(n) = 3T(n/4) + n lg n
  a = 3, b = 4; n^(log₄ 3) ≈ n^0.793
  f(n) = n lg n, f(n) = Ω(n^(log₄ 3 + ε)) with ε ≈ 0.2
  ⇒ Case 3:
  Regularity condition:
  a·f(n/b) = 3(n/4) lg(n/4) ≤ (3/4) n lg n = δ·f(n) for δ = 3/4
  T(n) = Θ(n lg n)

T(n) = 2T(n/2) + n lg n
  a = 2, b = 2; n^(log₂ 2) = n¹
  f(n) = n lg n; is f(n) = Ω(n^(1+ε)), and with what ε?
  Since n lg n / n¹ = lg n, which grows more slowly than nᵉ for any ε > 0, no ε works:
  ⇒ neither Case 3 nor the basic Case 2 (k = 0); the extended Case 2 with the logᵏ n factor is needed here, as in the previous example.
Master Method, Example

4. T(n) = T(n/3) + n log n
Solution:
  b = 3, a = 1, log_b a = 0, f(n) = n log n = Ω(n^(log_b a + ε)) with ε = 1,
  so Case 3 says T(n) is Θ(n log n).

  Regularity condition: a·f(n/b) ≤ δ·f(n) for some δ < 1.
  For this case, a = 1, b = 3:
  1 · (n/3) log(n/3) = (1/3) n log(n/3) ≤ (1/3) n log n = δ·f(n) for δ = 1/3.
  Therefore the solution is T(n) = Θ(n log n).
Example 5: T(n) = 8T(n/2) + n²

Solution: b = 2, a = 8, log_b a = 3;
  f(n) = n² = O(n^(3−1)), so Case 1 says T(n) is Θ(n³).

Example 6: T(n) = 2T(n/2) + log n

Solution: log_b a = 1 and f(n) = log n = O(n^(1−ε)), so Case 1 says T(n) is Θ(n).
Example 7:
  T(n) = 4T(n/2) + n³
  a = 4, b = 2; n^(log₂ 4) = n²
  f(n) = n³; f(n) = Ω(n^(2+ε)) with ε = 1
  ⇒ Case 3: T(n) = Θ(n³)
  Checking the regularity condition:
  4·f(n/2) ≤ δ·f(n)
  4n³/8 ≤ δn³
  n³/2 ≤ δn³
  δ = 3/4 < 1 (any δ ≥ 1/2 works)
Example 8:
  T(n) = T(n/3) + n
  n^(log_b a) = n^(log₃ 1) = n⁰ = 1
  f(n) = n = Ω(n^(0+ε)) for ε = 1
  We still need to check regularity: a·f(n/b) = n/3 = (1/3)·f(n)
  T(n) = Θ(f(n)) = Θ(n)   (Case 3)
Example 9:
  T(n) = 9T(n/3) + n^2.5
  a = 9, b = 3, and f(n) = n^2.5,
  so n^(log_b a) = n^(log₃ 9) = n²
  f(n) = Ω(n^(2+ε)) with ε = 1/2
  Case 3 applies if a·f(n/b) ≤ δ·f(n) for some δ < 1:
  a·f(n/b) = 9(n/3)^2.5 = (1/3)^0.5 · f(n)
  Using δ = (1/3)^0.5 ≈ 0.577, Case 3 applies and
  T(n) = Θ(n^2.5)
Suppose T(n) = aT(n/b) + cnᵏ for n > 1 and n a power of b,
with T(1) = d,
where b ≥ 2 and k ≥ 0 are integers, a > 0, c > 0, d ≥ 0.
Then  T(n) = Θ(nᵏ)            if a < bᵏ
           = Θ(nᵏ lg n)        if a = bᵏ
           = Θ(n^(log_b a))    if a > bᵏ
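Two quick checks of this special case against results seen elsewhere in these notes:

\[ \text{Merge sort: } a = 2,\ b = 2,\ k = 1 \Rightarrow a = b^k \Rightarrow T(n) = \Theta(n^k \lg n) = \Theta(n \lg n); \]
\[ T(n) = T(n/2) + c:\ a = 1,\ b = 2,\ k = 0 \Rightarrow a = b^k \Rightarrow T(n) = \Theta(\lg n). \]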
Iteration method
• In iterative substitution we repeatedly apply the recurrence equation to itself and see if we can find a pattern.
• The idea is to expand (iterate) the recurrence and express it as a summation of terms depending only on n and the initial conditions.
• Example:
  T(n) = k + T(n/2)   if n > 1
       = c            if n = 1
  T(n) = k + T(n/2)
  T(n/2) = k + T(n/4)

  T(n) = k + k + T(n/4)
  Repeating this process we get
  T(n) = k + k + k + T(n/8)
  and repeating over and over we get
  T(n) = k + k + … + k + T(1)
       = k + k + … + k + c
  How many k's are there? This is the number of times we can divide n by 2 to get down to 1, that is, log n. Thus
  T(n) = (log n)·k + c, where k and c are both constants. Thus
  T(n) = O(log n)
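The "divide by 2 until 1" count is easy to check directly; a small C++ illustration (hypothetical sample values, not from the slides):

#include <iostream>

int main()
{
    for (long n : {16L, 1024L, 1000000L}) {
        long m = n;
        int steps = 0;
        while (m > 1) { m /= 2; ++steps; }  // one k of work per halving
        std::cout << n << " halves to 1 in " << steps
                  << " steps (about log2 n)\n";
    }
}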
Repeated Substitution Method
• Let's find the running time of merge sort (let's assume that n = 2ᵏ, for some k).

  T(n) = 1              if n = 1
       = 2T(n/2) + n    if n > 1

  T(n) = 2T(n/2) + n                      substitute
       = 2(2T(n/4) + n/2) + n             expand
       = 2²T(n/4) + 2n                    substitute
       = 2²(2T(n/8) + n/4) + 2n           expand
       = 2³T(n/8) + 3n                    observe the pattern
  T(n) = 2ⁱT(n/2ⁱ) + i·n
       = 2^(lg n) T(n/n) + n lg n = n + n lg n = Θ(n lg n)
Tower of Hanoi Problem
• n discs are stacked on pole A. We must move them to pole B, keeping the following constraints:
  • We can move a single disc at a time.
  • We can move only discs that are placed on the top of their pole.
  • A disc may be placed only on top of a larger disc, or on an empty pole.
  • The third pole C can be used to temporarily hold discs.
• Analyze the given solution for the Towers of Hanoi problem; how many moves are needed to complete the task?
[Slides: step-by-step pictures of the recursive solution to the Tower of Hanoi.]
Recursive Algorithm
#include <iostream>
#include <string>
using namespace std;

/* minimal helper (assumed) so the example compiles: report one disc move */
void Move( const string& a, const string& b )
{ cout << "move a disc from " << a << " to " << b << "\n"; }

/* move n discs from pole a to pole b, using pole c as temporary storage */
void Hanoi( int n, string a, string b, string c )
{
    if (n == 1)                 /* base case */
        Move( a, b );
    else {                      /* recursion */
        Hanoi( n-1, a, c, b );  /* move n-1 discs from a onto c */
        Move( a, b );           /* move the largest disc to b */
        Hanoi( n-1, c, b, a );  /* move the n-1 discs from c onto b */
    }
}
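A possible driver (hypothetical, not from the slides); by the recurrence derived below, Hanoi performs 2ⁿ − 1 moves, so this call prints 7 moves:

int main()
{
    Hanoi( 3, "A", "B", "C" );   // the 7 moves for 3 discs
    return 0;
}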
• T(n) = the number of moves needed in order to move n discs from pole A to pole B.
• T(n−1) = the number of moves required in order to move n−1 discs from pole A to pole C.
• 1 = one move is needed in order to put the largest disc on pole B.
• T(n−1) = the number of moves required in order to move n−1 discs from pole C to pole B.
• Show that the number of moves T(n) required by the algorithm to solve the n-disc problem satisfies the recurrence relation
  T(1) = 1
  T(n) = 2T(n−1) + 1
Guess and Prove
• Calculate M(n) for small n and look for a pattern.
• Guess the result and prove your guess correct using induction.

  n    M(n)
  1      1
  2      3
  3      7
  4     15
  5     31

  (The pattern suggests the guess M(n) = 2ⁿ − 1.)
Analysis Of Recursive Towers of Hanoi Algorithm
By the iterative substitution method, expanding:
T(1) = a                                                        (1)
T(n) = 2T(n − 1) + b   if n > 1                                 (2)
     = 2[2T(n − 2) + b] + b = 2²T(n − 2) + 2b + b               by substituting T(n − 1) in (2)
     = 2²[2T(n − 3) + b] + 2b + b = 2³T(n − 3) + 2²b + 2b + b   by substituting T(n − 2) in (2)
     = 2³[2T(n − 4) + b] + 2²b + 2b + b = 2⁴T(n − 4) + 2³b + 2²b + 2¹b + 2⁰b
                                                                by substituting T(n − 3) in (2)
     = …
     = 2ᵏT(n − k) + b[2^(k−1) + 2^(k−2) + … + 2¹ + 2⁰]

The base case is reached when n − k = 1, i.e. k = n − 1; we then have:
T(n) = 2^(n−1)·a + b(2^(n−1) − 1)

Therefore, the method hanoi is O(2ⁿ).


Tower of Hanoi Problem (by the Master Method)
• M(n) = 2M(n−1) + 1
  We cannot use the master theorem directly, because the subproblem size shrinks by subtraction, not by division.
  A trick called reparameterization can be used: let m = bⁿ for some fixed base b, so n = log_b m, and define
  M′(m) = M(log_b m) = M(n).
  Then
  M′(m) = M(n) = 2M(n−1) + 1 = 2M′(m/b) + 1,
  and now we can use the master theorem:
  a = 2 and f(m) = 1 = m⁰, while m^(log_b a) = m^(log_b 2) for b > 1.
  Since f(m) = O(m^(log_b 2 − ε)), Case 1 gives
  M′(m) = Θ(m^(log_b 2)) = Θ(2^(log_b m)) = Θ(2ⁿ).
Quicksort
• Sorts in place
• Similar to merge sort, but the partitioning of the array is not into halves; instead a pivot point is chosen, with lesser numbers on the left, greater on the right.
• Quicksort partitions the array of numbers to be sorted, but does not need to keep a temporary array.
• Sorts in O(n lg n) in the average case
• Sorts in O(n²) in the worst case
  • But in practice, it's quick
  • And the worst case doesn't happen often (but more on this later…)
Quick Sort Approach
[Diagram: array A[1..n]; a pivot element x is chosen; after partitioning, elements y ≤ x lie to the left of x and elements y ≥ x to the right; each side is then sorted recursively.]
Quicksort
• Another divide-and-conquer algorithm
• Divide: the array A[p..r] is partitioned into two non-empty subarrays A[p..q] and A[q+1..r]
  • Invariant: all elements in A[p..q] are less than all elements in A[q+1..r]
• Conquer and Combine: the subarrays are recursively sorted by calls to quicksort
  • Unlike merge sort, there is no combining step: the two sorted subarrays already form a sorted array, so the entire array A[p..r] is sorted.
Complexity analysis for Quick sort

• The best case occurs when the pivot partitions the set into halves. The complexity is then O(N log N).
• The worst case occurs when the pivot is the smallest element. The complexity is then O(N²).
• The average case is also O(N log N), because the problem is roughly halved at each step.
Quick Sort Algorithm
Input: Unsorted sub-array A[p..r]
Output: Sorted sub-array A[p..r]

QUICKSORT (A, p, r)
  if p < r
     then q ← PARTITION(A, p, r)
          QUICKSORT (A, p, q)
          QUICKSORT (A, q+1, r)
Partition Algorithm

Input: subarray A[p..r]

Output: subarray A[p..r] where each element of A[p..q] is ≤ each element of A[(q+1)..r]; returns the index q

PARTITION (A, p, r)                     Θ(n) where n = r − p + 1
 1  x ← A[p]
 2  i ← p − 1
 3  j ← r + 1
 4  while TRUE
 5      repeat j ← j − 1
 6      until A[j] ≤ x
 7      repeat i ← i + 1
 8      until A[i] ≥ x
 9      if i < j
10         then exchange A[i] ↔ A[j]
11         else return j
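The same algorithm in compilable C++ (a sketch assuming int elements; like the pseudocode, it uses Hoare-style partitioning with A[p] as the pivot):

#include <utility>
#include <vector>
using namespace std;

// Hoare partition around pivot x = A[p]: returns index q such that every
// element of A[p..q] is <= every element of A[q+1..r].
int partition(vector<int>& A, int p, int r)
{
    int x = A[p];
    int i = p - 1, j = r + 1;
    while (true) {
        do { --j; } while (A[j] > x);   // scan right to left until A[j] <= x
        do { ++i; } while (A[i] < x);   // scan left to right until A[i] >= x
        if (i < j) swap(A[i], A[j]);
        else return j;                  // pointers crossed: partition done
    }
}

void quicksort(vector<int>& A, int p, int r)
{
    if (p < r) {
        int q = partition(A, p, r);
        quicksort(A, p, q);      // (p, q), not (p, q-1), with Hoare partitioning
        quicksort(A, q + 1, r);
    }
}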
Partition
• Clearly, all the action takes place in the partition() function
  • Rearranges the subarray in place
  • End result:
    • Two subarrays
    • All values in the first subarray ≤ all values in the second
  • Returns the index of the "pivot" element separating the two subarrays
• How do you suppose we implement this?

Partition In Words
• Partition(A, p, r):
  • Select an element to act as the "pivot" (which?)
  • Grow two regions, A[p..i] and A[j..r]
    • All elements in A[p..i] ≤ pivot
    • All elements in A[j..r] ≥ pivot
  • Increment i until A[i] ≥ pivot
  • Decrement j until A[j] ≤ pivot
  • Swap A[i] and A[j]
  • Repeat until i ≥ j
  • Return j        (Note: slightly different from the book's partition())
Picking a pivot

• Choosing the first element of the array as the pivot will give us O(N²) complexity if the array is sorted.
• Choosing the middle point is safe; it will give average complexity, but does not necessarily give the fastest running time.
• The median would give us the best running time, but it is time-consuming to compute the median.
• We approximate the median by taking the median of 3 numbers in the array, namely the first, the middle, and the last elements; see the sketch below.
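A small C++ helper sketching the median-of-three choice just described (a hypothetical helper name, assuming int elements):

#include <algorithm>
#include <vector>
using namespace std;

// Return the median of the first, middle, and last elements of A[p..r],
// to be used as the pivot value.
int median_of_three(const vector<int>& A, int p, int r)
{
    int a = A[p], b = A[(p + r) / 2], c = A[r];
    return max(min(a, b), min(max(a, b), c));   // median of three values
}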
Quick Sort

- Each time the function recurses, a "key" (pivot) value of the structure is selected to be positioned.
- Values less than the key are passed to the left side of the structure, and values greater than the key are passed to the right side.
- The function repeatedly scans through the structure from two directions.
- These "left to right" and "right to left" scans and swaps continue until a flag condition tells them to stop.
- This occurs when the structure can be divided into two sorted partitions.
- The key is then switched into its final position.
- Next, the quicksort function is called to sort the partition that contains the values less than the key.
- Then the quicksort function is called to sort the partition that contains the values greater than the key.
- The above algorithm continues until a condition indicates that the structure is sorted.
Quick Sort
A Step-Through Example

1. This is the initial array that you are starting the sort with.
2. The array is pivoted about its first element, p = 3.
3. Find the first element larger than the pivot (underlined) and the last element not larger than the pivot (italicised).
Quick Sort

A Step-Through Example (continued)

4. Swap those elements.
5. Scan again in both directions.
6. Swap.
Quick Sort

A Step-Through Example (continued)

7. Scan.
8. The pointers have crossed: swap the pivot with the italicised element.
9. Pivoting is now complete. Recursively sort the subarrays on each side of the pivot.

The array is now sorted.
Quick Sort
Treatment of elements equal to the pivot

• If we ignore elements equal to the pivot, then i and j do not stop. This creates wildly uneven subsets in the recursion, and can lead to a complexity of O(N²).
• If i and j both stop at elements equal to the pivot, and the array is made up entirely of duplicates, then everyone swaps. In this case all subsets in the recursion are even, and the complexity is O(N log N).
Analysis of Quick Sort

• Worst case: unbalanced partitioning
  - One region with 1 element and the other with n−1 elements
  - If this happens in every step: Θ(n²)
  - T(n) = T(n−1) + T(1) + Θ(n) ⇒ Θ(n²)

  [Recursion sketch: n splits into (1, n−1), then (1, n−2), then (1, n−3), …]

• The worst case occurs
  - when the array is already sorted (in increasing order)

• Best case: balanced partitioning
  - Two regions, each with n/2 elements
  - T(n) = 2T(n/2) + Θ(n) ⇒ Θ(n lg n)

• The average case is closer to the best case than to the worst case.
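The worst-case bound quoted above can be checked by unrolling the recurrence:

\[ T(n) = T(n-1) + \Theta(n) = \sum_{k=1}^{n} \Theta(k) = \Theta\!\left(\tfrac{n(n+1)}{2}\right) = \Theta(n^2). \]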
