
UITC010 - Design and Analysis of Algorithms
Syllabus

Introduction
Basic Algorithm, Pseudo code for expressing algorithms, Performance Analysis,
Asymptotic Notation, Algorithm Design techniques, Growth of Functions,
Mathematical analysis of Recursive and Non-recursive Algorithms.
 
Divide-and-conquer algorithms
Binary search, finding maximum and minimum, quick sort, merge sort, heap sort,
Multiplication of Large Integers, Strassen’s Matrix multiplication.
 
Greedy method techniques
Knapsack problem, Minimum cost spanning tree, Dijkstra’s Algorithm.
 

Dynamic Programming techniques
Floyd-Warshall algorithm, Optimal binary search trees, 0/1 knapsack
problem, Hamiltonian circuit Problem.
 
Backtracking techniques
Eight Queens problem, Sum of subsets, Hamiltonian cycle, Knapsack
problem, Traveling Salesman problem.
 
Branch and Bound techniques
Travelling salesman problem, 0/1 knapsack Problem, Assignment Problem,
Computational complexity, Necessity of approximation scheme, Polynomial
time approximation schemes, NP-Hard and NP-Complete problems

References
• Horowitz E., Sahni S., “Fundamentals of Computer Algorithms”, Galgotia Publications, 2000.
• Anany Levitin, “Introduction to the Design & Analysis of Algorithms”, Pearson Education, 2000.
• Aho, Hopcroft, Ullman, “The Design and Analysis of Computer Algorithms”, Pearson Education, 2000.
• Parag H. Dave, Himanshu B. Dave, “Design and Analysis of Algorithms”, Pearson Education, 2008.
• Donald E. Knuth, “The Art of Computer Programming”, Volumes 1 & 3, Pearson Education, 2009.
• Steven S. Skiena, “The Algorithm Design Manual”, Second Edition, Springer, 2008.
UNIT I INTRODUCTION
Notion of an Algorithm

What is an algorithm?
An algorithm is a sequence of unambiguous instructions for
solving a problem, i.e., for obtaining a required output for any
legitimate input in a finite amount of time.

Properties of Algorithm
1. Finiteness
 terminates after a finite number of steps
2. Definiteness
 rigorously and unambiguously specified
3. Clearly specified input
 valid inputs are clearly specified
4. Clearly specified/expected output
 can be proved to produce the correct output given a valid input
5. Effectiveness
 steps are sufficiently simple and basic
6. Multiplicity
 The same algorithm can be represented in several different ways.
7. Speed
 Algorithms for the same problem can be based on very different ideas
and can solve the problem with different speeds.
Methods for solving the same problem
• Euclid’s Algorithm for computing greatest common divisor
GCD(m,n)
• Euclid’s algorithm is based on repeated application of
equality
gcd(m,n) = gcd(n, m mod n)

• until the second number becomes 0, which makes the


problem trivial.

• Example: gcd(60,24) = gcd(24,12) = gcd(12,0) = 12

7
Two descriptions of Euclid’s algorithm
1.Structured description

Step 1 If n = 0, return m and stop; otherwise go to Step 2


Step 2 Divide m by n and assign the value of the remainder to r
Step 3 Assign the value of n to m and the value of r to n. Go to
Step 1.
• gcd(60,24) = gcd(24,12) = gcd(12,0) = 12
2.Pseudocode

while n ≠ 0 do
 r ← m mod n
 m ← n
 n ← r
return m
Euclid’s algorithm

Euclid(m, n) {
 while n ≠ 0 do
  r ← m mod n
  m ← n
  n ← r
 end
 return m
}
Eg. m = 36, n = 48:
 r = 36 mod 48 = 36, so m = 48, n = 36
 r = 48 mod 36 = 12, so m = 36, n = 12
 r = 36 mod 12 = 0, so m = 12, n = 0
 n = 0, so return m = 12
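As a cross-check, the pseudocode above can be sketched in Python (the function name gcd_euclid is my own, not from the slides):

```python
def gcd_euclid(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n)."""
    while n != 0:
        m, n = n, m % n   # r = m mod n; m = n; n = r
    return m

print(gcd_euclid(60, 24))  # -> 12
```

Note that a first iteration with m < n (e.g. m = 36, n = 48) simply swaps the two arguments, after which the remainders shrink until n reaches 0.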
Other methods for gcd(m,n) [cont.]
Middle-school procedure
Step 1 Find the prime factorization of m
Step 2 Find the prime factorization of n
Step 3 Find all the common prime factors
Step 4 Compute the product of all the common prime factors
and return it as gcd(m,n)
Eg. m=36 n=48 find gcd(m,n)
m=2*2*3*3
n=2*2*2*2*3
Common factors are 2,2,3.
Gcd=2*2*3=12

Other methods for computing gcd(m,n)
Consecutive integer checking algorithm
Step 1 Assign the value of min{m,n} to t
Step 2 Divide m by t. If the remainder is 0, go to Step 3;
otherwise, go to Step 4
Step 3 Divide n by t. If the remainder is 0, return t and stop;
otherwise, go to Step 4
Step 4 Decrease t by 1 and go to Step 2
t = min(m, n). If t divides both m and n, return t; otherwise decrease t
by 1 and repeat.
Eg. m = 60, n = 12: t = 12 divides both 60 and 12, so gcd(60, 12) = 12.
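A minimal Python sketch of the consecutive integer checking algorithm (the name gcd_consecutive is mine; like the slide's version, it assumes m and n are positive):

```python
def gcd_consecutive(m, n):
    """Consecutive integer checking: try t = min(m, n), then t - 1, ..."""
    t = min(m, n)                           # Step 1
    while not (m % t == 0 and n % t == 0):  # Steps 2-3: does t divide both?
        t -= 1                              # Step 4: decrease t, try again
    return t

print(gcd_consecutive(60, 12))  # -> 12
```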
Fundamentals of Algorithmic Problem Solving
• We discuss the sequence of steps involved in designing and analyzing
an algorithm.
• Understanding the Problem
• Decision making on
Ascertaining the Capabilities of the Computational Device
Choosing between Exact and Approximate Problem Solving
Algorithm Design Techniques
Designing an Algorithm and Data Structures
• Methods of Specifying an Algorithm
• Proving an Algorithm’s Correctness
• Analyzing an Algorithm
• Coding an Algorithm

Fundamentals of Algorithmic Problem Solving (Contd..)

Understanding the Problem


• understand the problem completely before designing the
algorithm.
• Read the problem’s description carefully and ask questions if
you have any doubts about the problem.
• An input to an algorithm specifies an instance of the problem the
algorithm solves.
• It is important to specify exactly the set of instances the algorithm
needs to handle.
• If you fail to do this, your algorithm may work correctly for a
majority of inputs but crash on some “boundary” value.
• A correct algorithm is not one that works most of the time, but
one that works correctly for all legitimate inputs.
Fundamentals of Algorithmic Problem Solving (Contd..)

Decision making
Once you completely understand a problem, you need to decide on
- the Capabilities of the Computational Devices
- Choosing between Exact and Approximate Problem
Solving
- Algorithm Design Techniques
- Designing an Algorithm and Data Structures

Fundamentals of Algorithmic Problem Solving (Contd..)
• We need to ascertain the capabilities of the computational device the
algorithm is intended for.
• Algorithms can be classified from the execution point of view:
• If instructions are executed one after another, one operation at a
time, the algorithms designed for such machines are called
sequential algorithms (RAM model).
• If instructions are executed in parallel, with concurrent operations,
the algorithms designed for such machines are called parallel
algorithms.
Fundamentals of Algorithmic Problem Solving (Cont..)

Choosing between Exact and Approximate Problem Solving


• The next principal decision is to choose between solving
the problem exactly (an exact algorithm) or solving the
problem approximately (an approximation algorithm).
• Approximation is unavoidable for problems that cannot be solved
exactly, e.g. extracting square roots, solving nonlinear equations,
and evaluating definite integrals.
• Approximation algorithms are also used when exact algorithms are
too slow, e.g. Vertex Cover, Traveling Salesman Problem.
Fundamentals of Algorithmic Problem Solving (Cont..)
Algorithm Design Techniques
An algorithm design technique (or “strategy” or “paradigm”)
is a general approach to solving problems algorithmically that
is applicable to a variety of problems from different areas of
computing.
Eg.
• Brute force technique
• Divide and conquer
• Greedy technique
• Dynamic Programming
• Backtracking
• Branch and bound

Fundamentals of Algorithmic Problem Solving (Cont..)
Designing an Algorithm and Data Structures
• Data structures and algorithms work together; they are
interdependent.
• Hence the choice of a proper data structure is required before
designing the algorithm.
• Implementation of a program = data structure + algorithm.
Fundamentals of Algorithmic Problem Solving (Cont..)
Methods of Specifying an Algorithm
Various ways to specify an algorithm
• Natural language: easy to write but often ambiguous.
• Pseudocode: a mixture of natural-language and
programming-language constructs. Pseudocode is usually
more precise than natural language.
• Flowchart: a method of expressing an algorithm by a
collection of connected geometric shapes containing
descriptions of the algorithm’s steps.
Fundamentals of Algorithmic Problem Solving (Cont..)

Proving an Algorithm’s Correctness/verification


• Once an algorithm has been specified, you have to prove its
correctness. i.e you have to prove that the algorithm yields a
required result for every legitimate input in a finite amount of
time.
• A common technique for proving correctness is to use
mathematical induction because an algorithm’s iterations
provide a natural sequence of steps needed for such proofs.

Fundamentals of Algorithmic Problem Solving (Cont..)
Analyzing an Algorithm
There are two kinds of algorithm efficiency to analyze:
• time efficiency: indicates how fast the algorithm runs
• space efficiency: indicates how much extra memory it uses
Other desirable characteristics of an algorithm are:
• simplicity: the generated sequence of instructions is easy to
understand
• generality: the generality of the problem the algorithm
solves and of the set of inputs it accepts; a more general
algorithm is sometimes easier to design
• range of inputs
If these factors are not satisfactory, redesign the algorithm.
Fundamentals of Algorithmic Problem Solving (Cont..)
Coding an Algorithm / Implementation
The algorithm is implemented in a suitable programming language.
Optimality: the program code should be written efficiently, which
reduces the burden on the compiler.
Important problem types
Computing problems can be classified as
• Sorting
• Searching
• Numerical
• Geometric
• Combinatorial
• Graph
• String processing

Sorting

• Rearrange the items of a given list in increasing or decreasing order.
• Eg. sort lists of numbers, characters from an alphabet, character
strings, etc.
• We need to choose a piece of information to guide the sorting.
Such a piece of information is called a key.
Why do we want sorting?
• Makes many questions about the list easier to answer.
• Useful in searching (eg. telephone books, dictionaries).
• A good sorting algorithm can sort an arbitrary array of size n using
about n log2 n comparisons.
Properties of sorting

• All sorting algorithms can be analyzed by two important properties.

1) Stable: preserves the relative ordering of items with equal values.

Unsorted list:
GPA   NAME
9.2   Anu
9.3   Shenba    (equal elements)
9.3   Mahesh    (equal elements)
9.3   Arathana  (equal elements)
9.4   tamil
Properties of sorting

Sorted list:
GPA   NAME
9.2   Anu
9.3   Arathana
9.3   Mahesh
9.3   Shenba
9.4   tamil
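Stability can be demonstrated with Python's built-in sorted, which is guaranteed stable: sorting by GPA alone leaves records with equal GPA in their original relative order (the tuples below are my own illustration):

```python
records = [(9.2, "Anu"), (9.3, "Shenba"), (9.3, "Mahesh"),
           (9.3, "Arathana"), (9.4, "tamil")]

# A stable sort by GPA keeps the three 9.3 records in their input order.
by_gpa = sorted(records, key=lambda r: r[0])
names_93 = [name for gpa, name in by_gpa if gpa == 9.3]
print(names_93)  # -> ['Shenba', 'Mahesh', 'Arathana']
```

To obtain the alphabetical order shown in the sorted table above, a secondary key on the name would be needed, e.g. key=lambda r: (r[0], r[1]).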
Properties of sorting

• In-place: does not require extra memory while sorting the list.
• Not in-place: requires extra memory while sorting the list.
Searching
• Find a desired element in the list.
• Searching is done with respect to a search key.
How to choose the best searching algorithm?
• Some algorithms work faster but require more memory.
• Some algorithms work better if the list is almost sorted.
Thus the efficiency of an algorithm varies with the situation.
Numerical problems

• Mathematical equations, systems of equations, computing
definite integrals, evaluating functions, ...
• These problems are mostly solved by approximate algorithms.
Geometric problems

• Deal with geometric objects such as points, lines & polygons.
• Used in computer graphics, robotics, etc.
• Eg. closest pair, convex hull
Combinatorial problems

• Problems involving permutations and combinations.
• There are a lot of possible solutions.
• Find the optimal solution among the possible solutions.
• As the problem size becomes big, the complexity becomes high.
• Eg. graph coloring problem, Traveling Salesman Problem
Graph problems

A graph is a collection of vertices and edges.
Eg. shortest path, graph traversals, Traveling Salesman Problem
String processing problems

• A string is a collection of characters.
• A typical example is pattern matching: searching for a particular
word in a text.
Fundamentals of the Analysis of Algorithm Efficiency / Analysis
Framework

Measure the performance of an algorithm by computing two factors:
• Space complexity
• Time complexity
The framework also covers:
• Measuring an input size
• Measuring running time
• Order of growth
Space complexity

The space complexity of an algorithm is the amount of space
(memory) taken by the algorithm. By computing space
complexity, we can analyze whether an algorithm requires
more or less space.
• Memory space S(P) needed by a program P, consists of two
components:
– A fixed part: needed for instruction space (byte code),
simple variable space, constants space etc.  c
– A variable part: dependent on a particular instance of
input and output data.  Sp(instance)
S(P) = c + Sp(instance)

Space complexity

Algorithm add(a, b, c)
{
 return a + b + c;
}
For every instance, 3 computer words are required to store the
variables a, b, and c (assume each occupies one word).
Therefore S(P) = 3.

Space complexity
Algorithm Sum(a[], n)
{
 s := 0.0;
 for i = 1 to n do
  s := s + a[i];
 return s;
}
Every instance needs to store the array a[] & n.
– Space needed to store n = 1 word.
– Space needed to store a[] = n floating-point words (or at
least n words).
– Space needed to store i and s = 2 words.
• Hence S(P) = (n + 3). (i.e. 1 + n + 2 = n + 3)
Time complexity
• The time complexity of an algorithm is the amount of time taken
by the algorithm. By computing time complexity, we can analyze
whether an algorithm is fast or slow.
• It is difficult to measure physically clocked time, since execution
time depends on many factors such as
• System load
• Number of other programs running
• Instruction set
• Speed of hardware
• The time complexity is therefore given in terms of frequency count.
• Frequency count is a count denoting the number of times a
statement is executed.
Time complexity
Matrix addition
for (i = 0; i < n; i++)
{
 for (j = 0; j < n; j++)
 {
  C[i][j] = a[i][j] + b[i][j];
 }
}
Time complexity
• The frequency count (FC) can be computed as follows:
for (i = 0; i < n; i++)
• i = 0 executes once: FC = 1
• i < n executes n + 1 times
• i++ executes n times
for (j = 0; j < n; j++)
• j = 0 executes n * 1 = n times
• j < n executes n * (n + 1) = n^2 + n times
• j++ executes n * n = n^2 times
C[i][j] = a[i][j] + b[i][j] executes n * n = n^2 times
Total: 3n^2 + 4n + 2
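The count 3n^2 + 4n + 2 can be verified mechanically by instrumenting the loops; the helper below is my own sketch, incrementing a counter once per executed statement of the matrix-addition loop nest:

```python
def matrix_add_frequency_count(n):
    """Count every statement execution in the doubly nested addition loop."""
    fc = 0
    fc += 1                        # i = 0
    i = 0
    while True:
        fc += 1                    # test i < n (runs n + 1 times)
        if not i < n:
            break
        fc += 1                    # j = 0
        j = 0
        while True:
            fc += 1                # test j < n (runs n * (n + 1) times)
            if not j < n:
                break
            fc += 1                # C[i][j] = a[i][j] + b[i][j]
            fc += 1                # j++
            j += 1
        fc += 1                    # i++
        i += 1
    return fc

print(matrix_add_frequency_count(3))  # -> 41 = 3*9 + 4*3 + 2
```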
Measuring an input size

If the input size is larger, the algorithm usually runs for a longer
time. Hence we express the efficiency of an algorithm as a function
of the input size.
Eg. to multiply two matrices we should know the order of the
matrices; only then can we enter the elements of the matrices.
Measuring running time
• Time complexity is measured in terms of FC.
• We first identify the basic operation of the algorithm: the
most time-consuming logical operation (such a basic operation is
usually located in the innermost loop).
T(n) ≈ c_op · C(n)
• c_op: execution time of the algorithm’s basic operation on a
particular computer
• C(n): number of times this operation needs to be executed
for this algorithm
• T(n): running time of the algorithm
Measuring running time

Eg. in sorting, comparing the elements and then placing them in
the correct locations is a basic operation.
Eg.
PROBLEM                      INPUT SIZE            BASIC OPERATION
Searching an element in a    List of n elements    Comparison of the key
list of n elements                                 with every element of
                                                   the list
Matrix multiplication        Two matrices of       Actual multiplication of
                             order n * n           the elements in the
                                                   matrices
Gcd of 2 numbers             2 numbers             Division

Order of growth
• Measuring the performance of an algorithm in relation to the
input size n is called order of growth.
• For example, the order of growth for varying input size n is
as given below.
Worst-Case, Best-Case, and Average-Case Efficiencies

Worst Case Time Complexity:

• For sequential search, the worst case occurs when there are no
matching elements or the first matching element happens to be the
last one on the list.
• The algorithm makes the largest number of key comparisons among
all possible inputs of size n:
Cworst(n) = n.
• The algorithm runs the longest among all possible inputs of that
size.
• Time complexity for linear search in the worst case = O(n)
Best case time complexity

• The algorithm runs the fastest among all possible inputs of that
size.
• For example, the best-case inputs for sequential search are
lists of size n with their first element equal to the search key:
Cbest(n) = 1
• Time complexity for linear search in the best case = O(1)
Average Case time complexity

• This type of complexity gives information about the behavior
of an algorithm on a typical (random) input.
• Let p = probability of a successful search.
• Let n = total number of elements in the list.
• If the first match occurs at the ith position, the probability of
this happening is p/n for every i.
• The probability of an unsuccessful search is (1 − p).
Cavg(n) = (comparisons over successful searches, positions 1 to n) +
(comparisons over unsuccessful searches)
Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n · (1 − p)
        = p(n + 1)/2 + n(1 − p)
In an unsuccessful search, all n elements are compared, hence the
term n · (1 − p). For a successful search (p = 1), Cavg(n) = (n + 1)/2.
Time complexity for linear search in the average case = O(n)
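The three cases can be observed directly by instrumenting a sequential search with a comparison counter (a sketch; the (index, comparisons) return convention is mine):

```python
def linear_search(a, key):
    """Sequential search returning (index or -1, number of key comparisons)."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1           # basic operation: compare key with element
        if x == key:
            return i, comparisons
    return -1, comparisons

a = [5, 3, 8, 1]
print(linear_search(a, 5))  # -> (0, 1):  best case, Cbest(n) = 1
print(linear_search(a, 9))  # -> (-1, 4): worst case, Cworst(n) = n
```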
Amortized analysis

• Amortized analysis is a method for analyzing a given
algorithm’s complexity: how much of a resource,
especially time or memory, it takes to execute.
• The motivation for amortized analysis is that looking only at the
worst-case run time per operation can be too pessimistic; the cost
of an occasional expensive operation is averaged over a sequence
of operations.
Fundamentals of the Analysis of Algorithm Efficiency (Cont..)

The properties of order of growth:

• If f1(n) is of order g1(n) and f2(n) is of order g2(n), then
f1(n) + f2(n) ∈ O(max{g1(n), g2(n)})
• For a polynomial of degree m, the maximum degree is
considered: the polynomial is of order n^m.
Asymptotic Notations and its Properties
• The efficiency analysis framework concentrates on the order of growth of
an algorithm’s basic operation count as the principal indicator of the
algorithm’s efficiency.
• To compare and rank such orders of growth, computer scientists use
three notations: O (big oh), Ω (big omega), and Θ (big theta).
• t(n) and g(n) can be any nonnegative functions defined on the set of
natural numbers.
• t(n) will be an algorithm’s running time (usually indicated by its basic
operation count C(n)), and
• g(n) will be some simple function to compare the count with.
• To choose the best algorithm, we need to check the efficiency of each
algorithm. The efficiency can be measured by computing the time
complexity of each algorithm.
• Asymptotic notation is a shorthand way to represent time
complexity. Using asymptotic notations we can give time complexity
as “fastest possible”, “slowest possible” or “average time”.
Asymptotic Notations and its Properties (Cont..)

The big oh notation is denoted by O. It is a method of representing
the upper bound of an algorithm’s running time. Using big oh notation
we can give the longest amount of time taken by the algorithm to
compute.
DEFINITION
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)),
if t(n) is bounded above by some constant multiple of g(n) for all
large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that t(n) ≤ c·g(n) for all n ≥ n0.
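The definition can be sanity-checked numerically. For example, for t(n) = 100n + 5 the witnesses c = 105 and n0 = 1 (my choice of constants) show that t(n) ∈ O(n):

```python
# Check t(n) <= c * g(n) for all n >= n0 over a large range of n.
t = lambda n: 100 * n + 5      # running-time function
g = lambda n: n                # comparison function
c, n0 = 105, 1                 # witnesses: 100n + 5 <= 105n whenever n >= 1

assert all(t(n) <= c * g(n) for n in range(n0, 10_000))
print("100n + 5 is in O(n)")
```

A finite check is not a proof, but here the algebra is immediate: 100n + 5 ≤ 105n is equivalent to 5 ≤ 5n, which holds for all n ≥ 1.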
Asymptotic Notations and its Properties (Cont..)

• Omega notation is denoted by ‘Ω’. This notation is used to
represent the lower bound of an algorithm’s running time. Using
omega notation we can denote the shortest amount of time taken by
the algorithm.
DEFINITION
• A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)),
if t(n) is bounded below by some positive constant multiple of
g(n) for all large n, i.e., if there exist some positive constant c
and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)

• The theta notation is denoted by Θ. By this method the running
time is between the upper bound and the lower bound.
DEFINITION
• A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if
t(n) is bounded both above and below by some positive constant
multiples of g(n) for all large n, i.e., if there exist some positive
constants c1 and c2 and some nonnegative integer n0 such that
c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.
Asymptotic Notations and its Properties (Cont..)

• For polynomials of degree m, the maximum degree is
considered. Eg. for 3n^3 + 2n^2 + 10 the time
complexity is O(n^3).
• Any constant value leads to O(1) time complexity: if
f(n) = c then f(n) ∈ O(1).
Basic Efficiency Classes

The time efficiencies of a large number of algorithms fall into only a
few classes: 1 (constant), log n (logarithmic), n (linear), n log n,
n^2 (quadratic), n^3 (cubic), 2^n (exponential), and n! (factorial).
Mathematical Analysis for Recursive Algorithms

Mathematical Analysis for Recursive
Algorithms (Cont..)
Methods for Solving Recurrence Equations:
• Substitution method
– Forward substitution

– Backward substitution

• Master’s method
Mathematical Analysis for Recursive
Algorithms (Cont..)

In the substitution method, a guess for the solution is
made. There are two types of substitution:

• Forward substitution

• Backward substitution
Mathematical Analysis for Recursive
Algorithms (Cont..)
• Forward substitution method: This method
makes use of an initial condition in the initial
term and value for the next term is generated.
This process is continued until some formula is
guessed.

Mathematical Analysis for Recursive
Algorithms (Cont..)

• Backward substitution method: In this method


values are substituted recursively in order to
derive some formula.

Forward substitution

T(n) = T(n-1) + n with initial condition T(0) = 0
If n = 1, then T(1) = T(0) + 1 = 0 + 1 = 1
If n = 2, then T(2) = T(1) + 2 = 1 + 2 = 3
If n = 3, then T(3) = T(2) + 3 = 3 + 3 = 6
We can guess the formula
T(n) = n(n+1)/2 = n^2/2 + n/2
T(n) = O(n^2)
backward substitution

T(n) = T(n-1) + n ------------------------------ (1)
with initial condition T(0) = 0
Let n = n-1:
T(n-1) = T(n-2) + (n-1) ------------------------ (2)

Substitute eqn (2) in eqn (1):

T(n) = T(n-2) + (n-1) + n ---------------------- (3)
Let n = n-2:
T(n-2) = T(n-3) + (n-2) ------------------------ (4)

Substitute eqn (4) in eqn (3):

T(n) = T(n-3) + (n-2) + (n-1) + n
.
.
.
backward substitution
T(n) = T(n-3) + (n-2) + (n-1) + n
General form (here k = 3):
T(n) = T(n-k) + (n-k+1) + (n-k+2) + ... + n

If k = n then
T(n) = T(n-n) + (n-n+1) + (n-n+2) + ... + n
T(n) = T(0) + 1 + 2 + ... + n   (initial condition T(0) = 0)
T(n) = 0 + 1 + 2 + ... + n
We can derive the formula
T(n) = n(n+1)/2 = n^2/2 + n/2
T(n) = O(n^2)
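The closed form can be double-checked by evaluating the recurrence directly and comparing it with n(n+1)/2 (a small verification sketch, not part of the slides):

```python
def T(n):
    """T(n) = T(n-1) + n with T(0) = 0, evaluated directly."""
    return 0 if n == 0 else T(n - 1) + n

# The derived formula T(n) = n(n+1)/2 matches the recurrence exactly.
assert all(T(n) == n * (n + 1) // 2 for n in range(50))
print(T(10))  # -> 55
```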
Mathematical Analysis for Non-Recursive Algorithms
(Cont..)
General Plan for Analyzing the Time Efficiency of Non-Recursive
Algorithms
• Decide on parameter n indicating input size
• Identify algorithm’s basic operation
• Determine worst, average, and best cases for input
of size n
• Set up a sum for the number of times the basic
operation is executed
• Simplify the sum using standard formulas and rules
Mathematical Analysis for Non-Recursive Algorithms
(Cont..)
EXAMPLE 1 Consider the problem of finding the value of the largest
element
in a list of n numbers. For simplicity, we assume that the list is implemented as
an array. The following is pseudocode of a standard algorithm for solving the
problem.
ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n − 1 do
 if A[i] > maxval
  maxval ← A[i] //if a value larger than the current maxval is
found, set maxval to that larger value
return maxval
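A direct Python transcription of MaxElement, extended with a comparison counter (the counter is my addition) to show that C(n) = n − 1:

```python
def max_element(a):
    """MaxElement: returns (largest value, number of comparisons made)."""
    maxval = a[0]
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1       # basic operation: A[i] > maxval
        if a[i] > maxval:
            maxval = a[i]
    return maxval, comparisons

print(max_element([3, 9, 2, 7]))  # -> (9, 3): n - 1 = 3 comparisons
```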
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

• measure of an input’s size here is the number of elements in the array, i.e., n.
• There are two operations in the loop’s body:
• the comparison A[i]> maxval and
• the assignment maxval←A[i].
• Which of these two operations should we consider basic?
• Since the comparison is executed on each repetition of the loop
• and the assignment is not
• we should consider the comparison to be the algorithm’s basic operation.
• Note: the comparison is made the same number of times for every
input of size n, so there is no need for worst-, average-, and best-case
analysis here.
• C(n): the number of times this comparison is executed.
• The algorithm makes one comparison on each execution of the loop,
which is repeated for each value of the loop’s variable i within the
bounds 1 and n − 1, inclusive. Hence C(n) = n − 1.
Mathematical Analysis for Non-Recursive Algorithms
(Cont..)
EXAMPLE 2 Consider the element uniqueness problem: check
whether all the elements in a given array of n elements are distinct.
This problem can be solved by the following straightforward algorithm.
ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns “true” if all the elements in A are distinct
// and “false” otherwise
for i ← 0 to n − 2 do
 for j ← i + 1 to n − 1 do
  if A[i] = A[j] return false //if any two elements in the array are
equal, return false: the elements are not distinct
return true
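A Python transcription of UniqueElements (the function name is mine); the nested loops make at most n(n − 1)/2 element comparisons:

```python
def unique_elements(a):
    """Return True if all elements of a are distinct, else False."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:   # basic operation: compare a pair of elements
                return False
    return True

print(unique_elements([1, 2, 3]))  # -> True
print(unique_elements([1, 2, 1]))  # -> False
```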
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

• Input’s size: n, the number of elements in the array.

• Basic operation: the innermost loop contains a single operation
(the comparison of two elements).
• The worst-case input is an array for which the number of element
comparisons Cworst(n) is the largest among all arrays of size n.
• There are two kinds of worst-case inputs:
1) arrays with no equal elements
2) arrays in which the last two elements are the only pair of equal
elements.
• For such inputs, one comparison is made for each repetition of the
innermost loop, i.e., for each value of the loop variable j between its
limits i + 1 and n − 1; this is repeated for each value of the outer loop,
i.e., for each value of the loop variable i between its limits 0 and n − 2.
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

Cworst(n) = Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1 = Σ_{i=0}^{n-2} (n − 1 − i)
          = n(n − 1)/2 ∈ Θ(n^2)
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1],


B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-
based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
for i ←0 to n − 1 do
for j ←0 to n − 1 do
C[i, j ]←0.0
for k←0 to n − 1 do
C[i, j] ← C[i, j] + A[i, k] ∗ B[k, j]
return C
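A Python transcription of the definition-based algorithm (the function name is mine); the innermost statement runs n^3 times:

```python
def matrix_multiply(A, B):
    """Definition-based product of two n x n matrices: C = A * B."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):                 # n multiplications per (i, j)
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# -> [[19.0, 22.0], [43.0, 50.0]]
```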
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

• Input’s size: the matrix order n.

• Multiplication is the basic operation.
• Set up a sum for the total number of multiplications M(n)
executed by the algorithm.
• One multiplication is executed on each repetition of the algorithm’s
innermost loop, which is governed by the variable k ranging from
the lower bound 0 to the upper bound n − 1.
• Therefore, the number of multiplications made for every pair of
specific values of variables i and j is n.
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

• The total number of multiplications M(n) is expressed by the following
triple sum:
M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1
     = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} n = n · n · n = n^3

Time complexity = Θ(n^3)
Mathematical Analysis for Non-Recursive Algorithms
(Cont..)

EXAMPLE 4 The following algorithm finds the number of


binary digits in the
binary representation of a positive decimal integer.
ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ←1
while n > 1 do
count ←count + 1
n←floor(n/2)
return count

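A Python transcription of Binary(n) (the function name is mine), checked against the formula floor(log2 n) + 1 for the number of binary digits:

```python
import math

def binary_digit_count(n):
    """Number of binary digits of a positive integer n."""
    count = 1
    while n > 1:
        count += 1
        n = n // 2             # floor(n / 2)
    return count

print(binary_digit_count(5))   # -> 3, and floor(log2 5) + 1 = 3
```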
Mathematical Analysis for Non-Recursive Algorithms (Cont..)
While loop execution, eg. n = 5:
count = 1
1) while (n > 1): 5 > 1
   count = count + 1 → count = 2
   n = floor(n/2) = floor(5/2) = 2
2) while (n > 1): 2 > 1
   count = count + 1 → count = 3
   n = floor(n/2) = 2/2 = 1
3) while (n > 1): 1 > 1 is false, so the loop stops
Mathematical Analysis for Non-Recursive Algorithms (Cont..)

Analysis
• Input size: n
• Basic operation: the comparison n > 1 in the while loop
• The value of n is about halved on each repetition of the loop,
hence the efficiency is log2 n.
floor(log2 n) = floor(log2 5) = floor(2.32) = 2
• The comparison is executed floor(log2 n) + 1 times (the +1
accounts for the final test, 1 > 1, which is false).
• Time complexity = Θ(log2 n)
Mathematical Analysis of Recursive Algorithms

General Plan for Analyzing the Time Efficiency of Recursive
Algorithms
1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed
can vary on different inputs of the same size; if it can, the worst-case,
average-case, and best-case efficiencies must be investigated
separately.
4. Set up a recurrence relation, with an appropriate initial condition,
for the number of times the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of
its solution.
Mathematical Analysis of Recursive Algorithms
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) ∗ n
input size: n
• The basic operation of the algorithm is multiplication, whose number of
executions we denote M(n).
• Since the function F(n) is computed according to the formula
• F(n) = F(n − 1) * n for n > 0
• the number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n − 1) + 1 for n > 0

• M(n − 1)-- to compute F(n−1) +1-------to multiply F(n−1) by n


108
Mathematical Analysis of Recursive Algorithms
• M(n − 1) multiplications are spent to compute F(n − 1).
eg. n = 5: computing F(4) = 1 * 2 * 3 * 4 takes
n − 1 = 4 multiplications.
• One more multiplication is needed to multiply the
result by n: (1 * 2 * 3 * 4) * 5.
• If n = 0, the algorithm returns 1.
• When n = 0, the algorithm performs no
multiplications. Therefore, the initial condition is M(0) = 0.
• M(0): the calls stop when n = 0
• 0: no multiplications when n = 0
• the recurrence relation and initial condition for the algorithm’s
number of multiplications
M(n) = M(n − 1) + 1 for n > 0,
M(0) = 0.
backward substitutions:
M(n) = M(n − 1) + 1 //(substitute M(n − 1) = M(n − 2) + 1)
= [M(n − 2) + 1]+ 1= M(n − 2) + 2 //(substitute M(n − 2) = M(n− 3) + 1)
= [M(n − 3) + 1]+ 2 = M(n − 3) + 3.
General formula for the pattern:
M(n) = M(n − i) + i.
• Substitute i = n in the pattern’s formula to get the ultimate
result of our backward substitutions:
• M(n) = M(n − 1) + 1 = ... = M(n − i) + i = ...
= M(n − n) + n = n.
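The result M(n) = n can be confirmed by counting multiplications while computing n! recursively (the counting wrapper is my own sketch):

```python
def factorial_with_count(n):
    """Return (n!, number of multiplications), following F(n) = F(n-1) * n."""
    if n == 0:
        return 1, 0                      # M(0) = 0: no multiplications
    f, m = factorial_with_count(n - 1)
    return f * n, m + 1                  # one extra multiplication by n

print(factorial_with_count(5))  # -> (120, 5), i.e. M(5) = 5
```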
Tower of Hanoi puzzle
• n disks of different sizes that can slide onto any of three pegs.
• Initially, all the disks are on the first peg in order of size, the largest on
the bottom and the smallest on top.
• The goal is to move all the disks to the third peg, using the second one as
an auxiliary, if necessary.
• We can move only one disk at a time, and it is forbidden to place a larger
disk on top of a smaller one.
• To move n>1 disks from peg 1 to peg 3 (with peg 2 as auxiliary),
• we first move recursively n − 1 disks from peg 1 to peg 2 (with peg 3 as
auxiliary),
• Then move the largest disk directly from peg 1 to peg 3, and, finally,
move recursively n − 1 disks from peg 2 to peg 3 (using peg 1 as
auxiliary).
• Of course, if n = 1, we simply move the single disk directly from the
source peg to the destination peg.
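The recursive procedure described above can be sketched in Python; collecting the moves lets us count them (the function and variable names are mine):

```python
def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst using aux, appending each move."""
    if n == 1:
        moves.append((src, dst))         # single disk: move it directly
        return
    hanoi(n - 1, src, dst, aux, moves)   # n-1 disks onto the auxiliary peg
    moves.append((src, dst))             # largest disk straight to dst
    hanoi(n - 1, aux, src, dst, moves)   # n-1 disks on top of the largest

moves = []
hanoi(3, 1, 2, 3, moves)
print(len(moves))  # -> 7 = 2**3 - 1
```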
Tower of Hanoi puzzle

Counting the moves:
3 disks: 3 + 1 + 3 = 7 moves
4 disks: 7 + 1 + 7 = 15 moves
5 disks: 15 + 1 + 15 = 31 moves
In general, M(n) = M(n − 1) + 1 + M(n − 1) for n > 1.
TIME COMPLEXITY = 2^n − 1
• Input’s size: the number of disks n.
• Moving one disk is the algorithm’s basic operation.
• The number of moves M(n) depends on n only, and we
get the following recurrence equation for it:
M(n) = M(n − 1) + 1 + M(n − 1) for n > 1,
with initial condition M(1) = 1.

• we have the following recurrence relation for the number of moves M(n):
M(n) = 2M(n − 1) + 1 for n > 1,
M(1) = 1.
