
Algorithm

-> is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.

-> is thus a sequence of computational steps that transform the input into the output.
Properties of algorithms
• Input: what the algorithm takes in as input
• Output: what the algorithm produces as output
• Definiteness: the steps are defined precisely
• Correctness: should produce the correct output
• Finiteness: the steps required should be finite
• Effectiveness: each step must be able to be performed in a finite amount of time
• Generality: the algorithm should be applicable to all problems of a similar form


Program vs. Algorithm
A program is one or more algorithms, customized to solve a specific task under a specific set of circumstances and expressed in a specific language (a programming language).
An algorithm is a general method; a program is a specific method. A program is a set of instructions.
General approaches to algorithm design
Divide and conquer
Greedy method
Dynamic Programming
Basic Search and Traversal Technique
Advanced Data Structures (B-Tree, Fibonacci Heap, Binomial Heap and Red-Black Tree)
Backtracking
Branch and Bound
Approximation Algorithm
Randomized algorithms
NP Problem
What do we analyze about them?
Correctness
Does the input/output relation match algorithm
requirement?
Amount of work done (complexity)
Basic operations to do task
Amount of space used
Memory used
Simplicity, clarity
Verification and implementation.
Complexity
The complexity of an algorithm is simply the amount
of work the algorithm performs to complete its task.
Algorithm 1: Maximum element
procedure max (a1, a2, …, an: integers)
max := a1
for i := 2 to n
    if max < ai then max := ai

Trace on a1 … a10 = 4 1 7 0 5 2 9 3 6 8: max takes the values 4, 7, 9 as i runs from 2 to 10, ending with max = 9.
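The procedure above translates directly into Python; a minimal sketch (the function name find_max is my own):

```python
def find_max(a):
    """Return the largest element of a non-empty list, scanning left to right."""
    max_val = a[0]                 # max := a1
    for i in range(1, len(a)):     # for i := 2 to n
        if max_val < a[i]:
            max_val = a[i]
    return max_val

print(find_max([4, 1, 7, 0, 5, 2, 9, 3, 6, 8]))  # 9
```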
Which algorithm is better?
The algorithms are correct, but which is the best?
• Measure the running time (number of operations needed).
• Measure the amount of memory used.
Note that the running time of an algorithm increases as the size of the input increases.
What do we need?
Correctness: whether the algorithm computes the correct solution for all instances
Efficiency: resources needed by the algorithm
1. Time: number of steps.
2. Space: amount of memory used.
Measurement “model”: worst case, average case and best case.
Searching algorithms
Given a list, find a specific element in the list.
We will see two types:
• Linear search (sequential search)
• Binary search
Algorithm: Linear search
Given a list, find a specific element in the list. The list does NOT have to be sorted!

procedure linear_search (x: integer; a1, a2, …, an: integers)
i := 1
while ( i ≤ n and x ≠ ai )
    i := i + 1
if i ≤ n then location := i
else location := 0

{location is the subscript of the term that equals x, or it is 0 if x is not found}
Algorithm: Linear search, take 1
Searching for x = 3 in a1 … a10 = 4 1 7 0 5 2 9 3 6 8: the while loop advances i from 1 to 8, where a8 = 3 = x, so the procedure returns location = 8.
Algorithm: Linear search, take 2
Searching for x = 11 in the same list: i runs past the last element to 11, the test i ≤ n fails, and the procedure returns location = 0 (not found).
Linear search running time
How long does this take? If the list has n elements, the worst-case scenario is that it takes n “steps”. Here, a step is a single step through the list.
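The pseudocode above can be sketched in Python, keeping the 1-based position convention of the slides (the function name linear_search mirrors the procedure):

```python
def linear_search(x, a):
    """Return the 1-based position of x in list a, or 0 if x is absent.
    The list does not have to be sorted; worst case is n steps."""
    i = 1
    while i <= len(a) and x != a[i - 1]:
        i += 1
    return i if i <= len(a) else 0

a = [4, 1, 7, 0, 5, 2, 9, 3, 6, 8]
print(linear_search(3, a))   # 8
print(linear_search(11, a))  # 0
```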
Algorithm: Binary search
• Given a list, find a specific element in the list
• The list MUST be sorted!
• Each iteration cuts the search interval in half

procedure binary_search (x: integer; a1, a2, …, an: increasing integers)
i := 1   { i is left endpoint of search interval }
j := n   { j is right endpoint of search interval }
while i < j
begin
    m := ⌊(i+j)/2⌋   { m is the point in the middle }
    if x > am then i := m+1
    else j := m
end
if x = ai then location := i
else location := 0

{location is the subscript of the term that equals x, or it is 0 if x is not found}
Algorithm: Binary search, take 1
Searching for x = 14 in a1 … a10 = 2 4 6 8 10 12 14 16 18 20: the interval [i, j] shrinks from [1, 10] to [6, 10], [6, 8], [6, 7] and finally [7, 7]; since a7 = 14 = x, location = 7.
Algorithm: Binary search, take 2
Searching for x = 15 in the same list: the interval shrinks to [8, 8], but a8 = 16 ≠ 15, so location = 0 (not found).
Binary search running time
How long does this take (worst case)?
• If the list has 8 elements, it takes 3 steps
• If the list has 16 elements, it takes 4 steps
• If the list has 64 elements, it takes 6 steps
• If the list has n elements, it takes log2 n steps
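The binary search procedure can be sketched in Python, again with 1-based positions to match the pseudocode:

```python
def binary_search(x, a):
    """a must be sorted in increasing order; returns the 1-based
    position of x, or 0 if x is not present. Worst case ~log2 n steps."""
    if not a:
        return 0
    i, j = 1, len(a)            # left and right endpoints of search interval
    while i < j:
        m = (i + j) // 2        # midpoint, integer division = floor
        if x > a[m - 1]:
            i = m + 1
        else:
            j = m
    return i if x == a[i - 1] else 0

lst = [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print(binary_search(14, lst))  # 7
print(binary_search(15, lst))  # 0
```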
Algorithm Analysis
Measures the efficiency of an algorithm, or its implementation as a program, as the input size becomes very large.
We evaluate a new algorithm by comparing its performance with that of previous approaches.
Comparisons are asymptotic analyses of classes of algorithms.
We usually analyze the time required by an algorithm and the space required by a data structure.
Algorithm Analysis
Many factors affect the running time of an algorithm, including:
• speed of CPU, bus and peripheral hardware
• design time, programming time and debugging time
• language used and coding efficiency of the programmer
• quality of input (good, bad or average)
Algorithm Analysis
For a given input size n we express the time T to run the algorithm as a function T(n).
The concept of growth rate allows us to compare the running times of two algorithms without writing two programs and running them on the same computer.
ALGORITHM
REPRESENTATION
Flowcharts for three constructs
Pseudo code for three constructs
Example 1
Write an algorithm that finds the average of two numbers.

Algorithm: Average of two
Average Of Two
Input: Two numbers
1. Add the two numbers
2. Divide the result by 2
3. Return the result of step 2
End
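The three steps above can be sketched in Python (the function name average_of_two is my own):

```python
def average_of_two(a, b):
    """Average of two numbers, following the algorithm's steps."""
    result = a + b        # step 1: add the two numbers
    result = result / 2   # step 2: divide the result by 2
    return result         # step 3: return the result of step 2

print(average_of_two(4, 6))  # 5.0
```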
Example 2
Write an algorithm to change a numeric grade to a pass/no pass grade.

Algorithm: Pass/no pass grade
Pass/NoPassGrade
Input: One number
1. if (the number is greater than or equal to 60)
   then
      1.1 Set the grade to “pass”
   else
      1.2 Set the grade to “nopass”
   End if
2. Return the grade
End
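A direct Python sketch of this algorithm (function name is my own):

```python
def pass_no_pass(number):
    """Returns "pass" for scores >= 60, otherwise "nopass"."""
    if number >= 60:
        grade = "pass"     # step 1.1
    else:
        grade = "nopass"   # step 1.2
    return grade           # step 2

print(pass_no_pass(75))  # pass
```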
Example 3
Write an algorithm to change a numeric grade to a letter grade.

Algorithm: Letter grade
Letter Grade
Input: One number
1. if (the number is between 90 and 100, inclusive)
   then
      1.1 Set the grade to “A”
   End if
2. if (the number is between 80 and 89, inclusive)
   then
      2.1 Set the grade to “B”
   End if
3. if (the number is between 70 and 79, inclusive)
   then
      3.1 Set the grade to “C”
   End if
4. if (the number is between 60 and 69, inclusive)
   then
      4.1 Set the grade to “D”
   End if
5. if (the number is less than 60)
   then
      5.1 Set the grade to “F”
   End if
6. Return the grade
End
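The five independent if-blocks translate directly into Python; this sketch assumes an integer grade, as the inclusive ranges in the algorithm do:

```python
def letter_grade(number):
    """Mirrors the algorithm's five if-blocks; assumes an integer grade 0..100."""
    grade = None
    if 90 <= number <= 100:
        grade = "A"
    if 80 <= number <= 89:
        grade = "B"
    if 70 <= number <= 79:
        grade = "C"
    if 60 <= number <= 69:
        grade = "D"
    if number < 60:
        grade = "F"
    return grade

print(letter_grade(85))  # B
```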
Example 4
Write an algorithm to find the largest of a set of numbers. You do not know how many numbers there are.

Algorithm: Find largest
Find Largest
Input: A list of positive integers
1. Set Largest to 0
2. while (more integers)
      2.1 if (the integer is greater than Largest)
          then
             2.1.1 Set Largest to the value of the integer
          End if
   End while
3. Return Largest
End
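A Python sketch of Find Largest; note it relies on the input being positive integers, as the algorithm specifies, since Largest starts at 0:

```python
def find_largest(integers):
    """Largest of a list of positive integers; returns 0 for an empty list."""
    largest = 0                    # step 1
    for value in integers:         # step 2: while more integers
        if value > largest:        # step 2.1
            largest = value        # step 2.1.1
    return largest                 # step 3

print(find_largest([3, 9, 2, 7]))  # 9
```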
Example 5
Write an algorithm to find the largest of 1000 numbers.

Algorithm: Find largest of 1000 numbers
Find Largest
Input: 1000 positive integers
1. Set Largest to 0
2. Set Counter to 0
3. while (Counter less than 1000)
      3.1 if (the integer is greater than Largest)
          then
             3.1.1 Set Largest to the value of the integer
          End if
      3.2 Increment Counter
   End while
4. Return Largest
End
What is Algorithm Analysis?
How to estimate the time required for an algorithm
Techniques that drastically reduce the running time of
an algorithm
A mathematical framework that more rigorously
describes the running time of an algorithm
Asymptotic Performance
In this course, we care most about asymptotic
performance
How does the algorithm behave as the problem size gets
very large?
 Running time
 Memory/storage requirements
Growth of the Functions
Visualizing Orders of Growth
On a graph, as you go to the right, a faster growing function eventually becomes larger.
For example, fB(n) = n² + 1 eventually overtakes fA(n) = 30n + 8 as n increases.
The Growth of Functions
“Popular” functions g(n) are: n log n, 1, 2^n, n², n!, n, n³, log n

Listed from slowest to fastest growth:
• 1
• log n
• n
• n log n
• n²
• n³
• 2^n
• n!
Different functions and their graphs

Expression    Name
------------------------------------------
1             constant
log n         logarithmic
log² n        log squared
n             linear
n log n       n log n
n²            quadratic
n³            cubic
2^n           exponential
------------------------------------------
Common time complexities
BETTER
O(1)        constant time
O(log n)    log time
O(n)        linear time
O(n log n)  log linear time
O(n²)       quadratic time
O(n³)       cubic time
O(2^n)      exponential time
WORSE
Basic Efficiency Classes
Class    Name           Comments
1        constant       May be in best cases
lg n     logarithmic    Halving problem size at each iteration
n        linear         Scan a list of size n
n lg n   linearithmic   Divide and conquer algorithms, e.g., mergesort
n²       quadratic      Two embedded loops, e.g., selection sort
n³       cubic          Three embedded loops, e.g., matrix multiplication
2^n      exponential    All subsets of an n-element set
n!       factorial      All permutations of an n-element set
Some Big-O Complexity Classes
Class            Name
O(1)             constant
O(log log n)     bi-logarithmic
O(log n)         logarithmic
O((log n)^k)     poly-logarithmic (also written O(log^k n))
O(n^0.5)
O(n)             linear
O(n log n)       linear logarithmic
O(n²)            quadratic
O(n² log n)      quadratic logarithmic
O(n³)            cubic
O(2^n)           base-2 exponential
O(e^n)           natural exponential
O(3^n)           base-3 exponential
O(n!)            factorial
O(n^n)           hyper-exponential

Notes:
• For any k > 0, log^k n ∈ O(n^0.5). Example: log^700 n ∈ O(n^0.5).
• For any constants k ≥ 2 and c > 1, and for sufficiently large n, an algorithm that is O(n^k) is more efficient than one that is O(c^n).
• An algorithm that is O(n^k) is called a polynomial algorithm; an algorithm that is O(c^n) is called an exponential algorithm.
• Algorithms whose run times are independent of the size of the problem’s inputs are said to have constant time complexity: O(1).
Common computing time functions
O(1) ⊂ O(log n) ⊂ O(n) ⊂ O(n log n) ⊂ O(n²) ⊂ O(n³) ⊂ O(2^n) ⊂ O(n!) ⊂ O(n^n)
Exponential algorithm: O(2^n)
Polynomial algorithm: O(n^k) for constant k

Algorithm A: O(n³); algorithm B: O(n).
Should algorithm B run faster than A? NO!
It is true only when n is large enough!
Rate of Growth ≡ Asymptotic Analysis
Using rate of growth as a measure to compare different functions implies comparing them asymptotically.
If f(x) is faster growing than g(x), then f(x) always eventually becomes larger than g(x) in the limit (for large enough values of x).
Complexity of Algorithm
Complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n).

What affects the run time of an algorithm?
(a) computer used, the hardware platform
(b) representation of abstract data types (ADTs)
(c) efficiency of compiler
(d) competence of implementer (programming skills)
(e) complexity of underlying algorithm
(f) size of the input
Analysis of Algorithm
Analysis of algorithm is the process of analyzing the problem-solving capability of the algorithm in terms of the time and size required (the size of memory for storage during implementation). However, the main concern of analysis of algorithms is the required time or performance. Generally, we perform the following types of analysis:

• Worst case: inputs are provided in such a way that maximum time is required to perform the algorithm. In worst-case analysis, we calculate an upper bound on the running time of an algorithm.
• Best case: inputs are provided in such a way that minimum time is required to perform the algorithm. In best-case analysis, we calculate a lower bound on the running time of an algorithm.
• Average case: inputs are provided in such a way that average time is required to perform the algorithm. In average-case analysis, we calculate a tight bound on the running time of an algorithm.
Asymptotic Notation
Asymptotic notations are the mathematical notations used to describe the running time of an algorithm when the input tends towards a particular value or a limiting value.

For example: in bubble sort, when the input array is already sorted, the time taken by the algorithm is linear, i.e. the best case.
But when the input array is in reverse order, the algorithm takes the maximum time (quadratic) to sort the elements, i.e. the worst case.
When the input array is neither sorted nor in reverse order, it takes average time. These durations are denoted using asymptotic notations.
Asymptotic Notation
• O (Big-Oh): worst case (upper bound)
• Θ (Theta): average case (tight bound)
• Ω (Omega): best case (lower bound)
Big O-notation
Big-O notation represents the upper bound of the running time of an algorithm. Thus, it gives the worst-case complexity of an algorithm.

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

The above expression can be described as: a function f(n) belongs to the set O(g(n)) if there exists a positive constant c such that f(n) lies between 0 and cg(n) for sufficiently large n.
g(n) is an asymptotic upper bound for f(n).
Big-Oh Examples
3n+2 = O(n) as 3n+2 ≤ 4n for all n ≥ 2.
3n+3 = O(n) as 3n+3 ≤ 4n for all n ≥ 3.
100n+6 = O(n) as 100n+6 ≤ 101n for all n ≥ 6.
10n²+4n+2 = O(n²) as 10n²+4n+2 ≤ 11n² for all n ≥ 5.
1000n²+100n−6 = O(n²) as 1000n²+100n−6 ≤ 1001n² for all n ≥ 100.
6·2^n+n² = O(2^n) as 6·2^n+n² ≤ 7·2^n for all n ≥ 4.
3n+3 = O(n²) as 3n+3 ≤ 3n² for all n ≥ 2.
10n²+4n+2 = O(n⁴) as 10n²+4n+2 ≤ 10n⁴ for all n ≥ 2.
**************
3n+2 ≠ O(1) as 3n+2 is not less than or equal to c for any constant c and all n ≥ n0.
10n²+4n+2 ≠ O(n).
*****************
f(n) = 3n+2
To show f(n) = O(g(n)), find c and n0 with f(n) ≤ c·g(n):
3n+2 ≤ 3n+n for n ≥ 2
3n+2 ≤ 4n
c = 4, g(n) = n, n0 = 2
Therefore f(n) = O(n).
Points about the definition
Note that f is O(g) as long as any values of c and k exist that satisfy the definition.
But: the particular c, k values that make the statement true are not unique; any larger value of c and/or k will also work.
You are not required to find the smallest c and k values that work.
Omega Notation( -notation)

Omega notation represents the lower bound of the running


time of an algorithm. Thus, it provides best case complexity of
an algorithm.

Ω(g(n)) = { f(n): there exist positive constants c


and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
The above expression can be described
as a function f(n) belongs to the
set Ω(g(n)) if there exists a positive
constant c such that it lies above cg(n),
for sufficiently large n.
g(n) is an asymptotic lower bound
for f(n).
56
Big-Omega Examples
3n+2 = Ω(n) as 3n+2 ≥ 3n for all n ≥ 1.
3n+3 = Ω(n) as 3n+3 ≥ 3n for all n ≥ 1.
100n+6 = Ω(n) as 100n+6 ≥ 100n for all n ≥ 1.
10n²+4n+2 = Ω(n²) as 10n²+4n+2 ≥ n² for all n ≥ 1.
6·2^n+n² = Ω(2^n) as 6·2^n+n² ≥ 2^n for all n ≥ 1.

3n+3 = Ω(1)
10n²+4n+2 = Ω(n)
10n²+4n+2 = Ω(1)
6·2^n+n² = Ω(n^100)
6·2^n+n² = Ω(n^50.2)
6·2^n+n² = Ω(n²)
6·2^n+n² = Ω(1)
Theta Notation(-notation)
Theta notation encloses the function from above and below.
Since it represents the upper and the lower bound of the
running time of an algorithm, it is used for analyzing the
average case complexity of an algorithm.
Θ(g(n)) = { f(n): there exist positive constants c1, c2 and
n0 such that 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0 }
. The above expression can be described as a
function f(n) belongs to the set Θ(g(n)) if
there exist positive constants c1 and c2 such
that it can be sandwiched
between c1g(n) and c2g(n), for sufficiently
large n.
. If a function f(n) lies anywhere in
between c1g(n) and c2 > g(n) for all n ≥ n0,
then f(n) is said to be asymptotically tight
bound. 58
Intuition for Asymptotic Notation
Big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
Big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
Big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
No Uniqueness
There is no unique set of values for n0 and c in proving asymptotic bounds.
Prove that 100n + 5 = O(n²):
• 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5, so n0 = 5 and c = 101 is a solution
• 100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1, so n0 = 1 and c = 105 is also a solution
We must find SOME constants c and n0 that satisfy the asymptotic notation relation.
Relations Between Θ, Ω, O
Theorem: For any two functions g(n) and f(n),
f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)),
i.e., Θ(g(n)) = O(g(n)) ∩ Ω(g(n)).

In practice, asymptotically tight bounds are obtained from asymptotic upper and lower bounds.
FINDING THE COMPLEXITY OF SMALL ITERATIVE ALGORITHMS
Example 01: Linear Search
INPUT: a sequence of n numbers, key to search for.
OUTPUT: true if key occurs in the sequence, false otherwise.

Linear Search(A, key)                  cost   times
1  i ← 1                                1      1
2  while i ≤ n and A[i] ≠ key           1      n+1
3      do i ← i + 1                     1      n
4  if i ≤ n                             1      1
5      then return true                 1      1
6      else return false                1      1

Assign a cost of 1 to all statement executions. The running time then ranges between
1 + 1 + 1 + 1 = 4 (best case), so T(n) = Ω(1),
and
1 + (n+1) + n + 1 + 1 = 2n + 4 (worst case), so T(n) = O(n).
Example 02
To determine the maximum number in an array.

Algorithm arrayMax(A, n)             # operations
    currentMax ← A[0]                 1
    for i ← 1 to n − 1 do             n
        if A[i] > currentMax then     (n − 1)
            currentMax ← A[i]         (n − 1)
        { increment counter i }       (n − 1)
    return currentMax                 1
                             Total:   4n − 1
Example 03
Algorithm Averages(X, n)
Input: array X of n integers
Output: array A of prefix averages of X    # operations
    s ← 0                                   1
    for i ← 0 to n − 1 do                   n + 1
        s ← s + X[i]                        n
        A[i] ← s / (i + 1)                  n
    return A                                1

Algorithm Averages runs in O(n) time.
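The prefix-averages algorithm above can be sketched in Python; keeping the running sum s is what makes it one pass, O(n):

```python
def prefix_averages(X):
    """A[i] = average of X[0..i]; a single pass using a running sum."""
    s = 0
    A = [0.0] * len(X)
    for i in range(len(X)):
        s += X[i]            # s accumulates X[0] + ... + X[i]
        A[i] = s / (i + 1)   # average of the first i+1 elements
    return A

print(prefix_averages([1, 3, 5]))  # [1.0, 2.0, 3.0]
```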
Example 04
Alg.: MIN(a[1], …, a[n])
    m ← a[1]
    for i ← 2 to n
        if a[i] < m
            then m ← a[i]

Running time: the number of primitive operations (steps) executed before termination:
T(n) = 1 [first step] + n [for loop] + (n−1) [if condition] + (n−1) [the assignment in then] = 3n − 1
Order (rate) of growth:
• The leading term of the formula
• Expresses the asymptotic behavior of the algorithm
Example 06
Associate a “cost” with each statement and find the “total cost” by finding the total number of times each statement is executed. Express running time in terms of the size of the problem.

Algorithm 1              Cost      Algorithm 2              Cost
arr[0] = 0;              c1        for(i=0; i<N; i++)       c2
arr[1] = 0;              c1            arr[i] = 0;          c1
arr[2] = 0;              c1
...
arr[N-1] = 0;            c1
-----------                        -------------
c1+c1+...+c1 = c1 x N              (N+1) x c2 + N x c1 = (c2 + c1) x N + c2
Example 07
                               Cost
sum = 0;                       c1
for(i=0; i<N; i++)             c2
    for(j=0; j<N; j++)         c2
        sum += arr[i][j];      c3
------------
c1 + c2 x (N+1) + c2 x N x (N+1) + c3 x N x N
In summary: in Example 06, both Algorithm 1 and Algorithm 2 are O(n); in Example 07, the doubly nested loop gives O(n²).
Example 10
int FindMaxElement(int[] array)
{
    int max = array[0];
    for (int i=0; i<array.length; i++)
    {
        if (array[i] > max)
        {
            max = array[i];
        }
    }
    return max;
}

Runs in O(n), where n is the size of the array; the number of elementary steps is ~ n.
Example 11
long FindInversions(int[] array)
{
    long inversions = 0;
    for (int i=0; i<array.Length; i++)
        for (int j = i+1; j<array.Length; j++)
            if (array[i] > array[j])
                inversions++;
    return inversions;
}

Runs in O(n²), where n is the size of the array; the number of comparisons is n*(n-1)/2, one per pair (i, j) with i < j.
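The same inversion count can be sketched in Python; the pair loop makes the quadratic cost explicit:

```python
def find_inversions(array):
    """Count pairs (i, j) with i < j and array[i] > array[j].
    Examines all n*(n-1)/2 pairs, so it runs in O(n^2)."""
    inversions = 0
    n = len(array)
    for i in range(n):
        for j in range(i + 1, n):
            if array[i] > array[j]:
                inversions += 1
    return inversions

print(find_inversions([3, 2, 1]))  # 3
```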
Example 12
decimal Sum3(int n)
{
    decimal sum = 0;
    for (int a=0; a<n; a++)
        for (int b=0; b<n; b++)
            for (int c=0; c<n; c++)
                sum += a*b*c;
    return sum;
}

Runs in cubic time O(n³); the number of elementary steps is ~ n³.
Example 13
long SumMN(int n, int m)
{
    long sum = 0;
    for (int x=0; x<n; x++)
        for (int y=0; y<m; y++)
            sum += x*y;
    return sum;
}

Runs in quadratic time O(n*m); the number of elementary steps is ~ n*m.
Example 14
long SumMN(int n, int m)
{
    long sum = 0;
    for (int x=0; x<n; x++)
        for (int y=0; y<m; y++)
            if (x==y)
                for (int i=0; i<n; i++)
                    sum += i*x*y;
    return sum;
}

Runs in quadratic time O(n*m); the number of elementary steps is ~ n*m + min(m,n)*n.
Example 15
Algorithm SumTripleArray(X, n)
Input: triple array X[ ][ ][ ] of n by n by n integers
Output: sum of elements of X               # operations
    s ← 0                                   1
    for i ← 0 to n − 1 do                   n
        for j ← 0 to n − 1 do               n + n + … + n = n²
            for k ← 0 to n − 1 do           n² + n² + … + n² = n³
                s ← s + X[i][j][k]          n² + n² + … + n² = n³
    return s                                1

Algorithm SumTripleArray runs in O(n³) time.
Example 16
ALGORITHM MaxElement(A[0..n-1])
//Determines the largest element
    maxval ← A[0]
    for i ← 1 to n-1 do
        if A[i] > maxval
            maxval ← A[i]
    return maxval

Input size: n. Basic operation: the comparison > (or the assignment ←).
Example 18
ALGORITHM MatrixMultiplication(A[0..n-1, 0..n-1], B[0..n-1, 0..n-1])
//Output: C = AB
    for i ← 0 to n-1 do
        for j ← 0 to n-1 do
            C[i, j] ← 0.0
            for k ← 0 to n-1 do
                C[i, j] ← C[i, j] + A[i, k] × B[k, j]
    return C

T(n) ≈ cm·M(n) + ca·A(n) = cm·n³ + ca·n³ = (cm + ca)·n³,
where M(n) and A(n) count multiplications and additions, with costs cm and ca.
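The triple loop translates directly into Python; the n³ multiply-add pairs are what give the (cm + ca)·n³ cost:

```python
def matrix_multiply(A, B):
    """Classic triple-loop n x n matrix product: n^3 multiply-add steps."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]   # one multiplication + one addition
    return C

print(matrix_multiply([[1, 0], [0, 1]], [[5, 6], [7, 8]]))
```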
LOGARITHMIC COMPLEXITY
An algorithm is O(log n) if it takes constant time to cut the problem size by a fraction (usually by 1/2). As an example, consider the following program:
for(i=1;i<=n;)
    i=i*2;
If we observe carefully, the value of i doubles every time: initially i=1, in the next step i=2, and in subsequent steps i=4, 8 and so on. Assume the loop executes k times. At the kth step 2^k = n and we come out of the loop. Taking logarithms on both sides gives
log(2^k) = log(n), so k·log 2 = log n, so k = log2 n.
Total time = O(log2 n) = O(lg n). The same discussion holds for a decreasing sequence as well; the worst-case rate of growth is again O(lg n):
for(i=n;i>=1;)
    i=i/2;
The above loop is used in BINARY SEARCH (finding a word in a dictionary of n pages).
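The halving loop's step count can be verified empirically in Python; with integer division the loop body runs ⌊log2 n⌋ + 1 times:

```python
import math

def halving_steps(n):
    """Count iterations of: for(i=n; i>=1;) i = i/2;  (integer halving)."""
    steps = 0
    i = n
    while i >= 1:
        i //= 2       # cut the problem size in half
        steps += 1
    return steps

print(halving_steps(1024))  # 11, i.e. log2(1024) + 1
```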
LOGARITHMIC COMPLEXITY
An algorithm is O(log n) if it takes constant time to cut the problem size by a fraction (in the case below, 1/3). As an example, consider the following program:
for(i=n;i>=1;i=i/3)
    print ”HAPPY FRIENDSHIP DAY ”;
The value of i is reduced to a third each time: initially i=n, in the next step i=n/3, and in subsequent steps i=n/9, n/27 and so on. Assume the loop executes k times. At the kth step n/3^k = 1, i.e. n = 3^k, and we come out of the loop. Taking logarithms base 2 on both sides gives
log2 n = k·log2 3
k = log2 n / log2 3 = log3 n
Total time = O(log3 n)
LOGARITHMIC COMPLEXITY
DAA(n)
{                          Cost    Number of times
1.  while(n>1)             C1      log2 n + 1
    {
2.      n ← n/2            C2      log2 n
3.      print n            C3      log2 n
    }
}
Total cost T(n) = C1[log2 n + 1] + C2[log2 n] + C3[log2 n]
                = (C1+C2+C3)·log2 n + C1,
which is logarithmic in nature.
LOGARITHMIC COMPLEXITY
What is the complexity of the program?
void function (int n){
    int i,j,k,count=0;
    for(i=n/2;i<=n;i++)          // executes n/2 times
        for(j=1;j<=n;j=j*2)      // executes lg n times
            for(k=1;k<=n;k=k*2)  // executes lg n times
                count++;
}
Complexity of the algorithm is O(n log² n): roughly (n/2) · lg n · lg n increments of count.
LOGARITHMIC COMPLEXITY
What is the complexity of the program?
void function (int n){
    int i,j,k,count=0;
    for(i=n/2;i<=n;i++)          // outer loop executes n/2 times
        for(j=1;j+n/2<=n;j++)    // middle loop executes n/2 times
            for(k=1;k<=n;k=k*2)  // inner loop executes lg n times
                count++;
}
Complexity of the algorithm is O(n² log n).
Family of Polynomials
Constant function: f(n) = 1
Linear function: f(n) = n
Quadratic function: f(n) = n²
Cubic function: f(n) = n³
A general polynomial:
f(n) = a0 + a1·n + a2·n² + a3·n³ + … + ad·n^d
The Logarithm Function
f(n) = log2(n) = log(n); the default base is 2.
The natural logarithm ln x uses base e = 2.718281828…; it satisfies
y = ln x  ⟹  y′ = 1/x, and ∫ (1/x) dx = ln x.
The Exponential Function
f(n) = a^n
Some identities (for positive a, b, and c):
a^(b+c) = a^b · a^c
a^(bc) = (a^b)^c
a^b / a^c = a^(b−c)
b = a^(log_a b)
b^c = a^(c·log_a b)
log n!
Recall that 1! = 1 and n! = (n−1)! · n.

Theorem: log n! = Θ(n log n)

Proof (upper bound):
log n! = log 1 + log 2 + … + log n
       ≤ log n + log n + … + log n = n log n
Hence log n! = O(n log n).
log n!
On the other hand,
log n! = log 1 + log 2 + … + log n
       ≥ log((n+1)/2) + … + log n
       ≥ ((n+1)/2) · log((n+1)/2)
       ≥ (n/2) · log(n/2)
       = Ω(n log n)
For the last step, note that
lim_{n→∞} ((n/2) log(n/2)) / (n log n) = 1/2.
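The two bounds in the proof can be checked numerically in Python (a sanity check, not a proof):

```python
import math

def log_factorial_bounds_hold(n):
    """Check (n/2)*log2(n/2) <= log2(n!) <= n*log2(n) for n >= 2."""
    log_fact = sum(math.log2(k) for k in range(1, n + 1))  # log2(n!)
    upper = n * math.log2(n)
    lower = (n / 2) * math.log2(n / 2)
    return lower <= log_fact <= upper

print(all(log_factorial_bounds_hold(n) for n in range(2, 200)))  # True
```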
Monotonicity
f(n) is
• monotonically increasing if m ≤ n ⟹ f(m) ≤ f(n)
• monotonically decreasing if m ≤ n ⟹ f(m) ≥ f(n)
• strictly increasing if m < n ⟹ f(m) < f(n)
• strictly decreasing if m < n ⟹ f(m) > f(n)
Exponentials
Useful identities:
a^(−1) = 1/a
(a^m)^n = a^(mn)
a^m · a^n = a^(m+n)
Exponentials and polynomials (for a > 1):
lim_{n→∞} n^b / a^n = 0  ⟹  n^b = o(a^n)
Logarithms
x = log_b a is the exponent for a = b^x.
Natural log: ln a = log_e a
Binary log: lg a = log2 a
lg² a = (lg a)²
lg lg a = lg (lg a)

Identities:
b^(log_b a) = a
log_c(ab) = log_c a + log_c b
log_b a^n = n · log_b a
log_b a = log_c a / log_c b
log_b(1/a) = −log_b a
log_b a = 1 / log_a b
a^(log_b c) = c^(log_b a)
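These identities can be verified numerically in Python using the change-of-base formula (floating-point comparison, so math.isclose is used):

```python
import math

def logb(a, b):
    """log base b of a, via change of base."""
    return math.log(a) / math.log(b)

a, b, c = 5.0, 2.0, 7.0
assert math.isclose(b ** logb(a, b), a)                        # b^(log_b a) = a
assert math.isclose(logb(a * c, b), logb(a, b) + logb(c, b))   # log of a product
assert math.isclose(logb(a ** 3, b), 3 * logb(a, b))           # log of a power
assert math.isclose(logb(1 / a, b), -logb(a, b))               # log of a reciprocal
assert math.isclose(logb(a, b), 1 / logb(b, a))                # swap base and argument
assert math.isclose(a ** logb(c, b), c ** logb(a, b))          # a^(log_b c) = c^(log_b a)
print("all identities hold")
```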
Comparison of Two Algorithms
Two sorting algorithms:
• Merge sort is O(n log n)
• Insertion sort is O(n²)
To sort 1M items:
• Insertion sort ≈ 70 hours
• Merge sort ≈ 40 seconds
For a faster machine:
• Insertion sort ≈ 40 minutes
• Merge sort ≈ 0.5 seconds
