Preface
Best regards,
Md Shahid (Assistant Professor, MIET, Meerut)
2nd edition
Year-2023
The author can be reached at shahid.55505@gmail.com.
Unit-01
Properties of Algorithm
It should terminate after a finite time.
It should produce at least one output.
It should take zero or more inputs externally.
It should be deterministic (unambiguous).
It should be language independent.
Example – an algorithm to add two numbers:
1. Read two numbers a and b.
2. c = a + b
3. return c
while(1)
{
    printf("MIET");
}
The above example is not an algorithm, as it will never terminate, though it is a program.
Q. Define 'algorithm,' discuss its main properties, and outline the steps
for designing it.
Q. What do you mean by an algorithm, and what are the main
algorithm design techniques?
Analysis of Algorithms
The efficiency of an algorithm can be analyzed at two different stages:
before implementation (A Priori Analysis) and after implementation (A
Posteriori Analysis).
We say that
f(n) = O(g(n)) if and only if
f(n) <= c.g(n) for some constant c > 0 and all n >= n0 >= 0
Question: Find out upper bound for the function f(n) = 3n+2.
Solution:
Steps:
1. We know that definition of upper bound is f(n) <= c. g(n).
2. f(n) = 3n + 2 (Given)
3. We need to find out c and g(n).
4. If we choose c=5 and g(n) = n, then 3n+2 <= 5*n.
5. For c=5 and n0 = 1 (starting value of “n”), f(n)<=c. g(n).
6. Therefore, f(n) = O(n)
Note: Other functions that are larger than f(n), such as n^2, n^3,
nlogn, etc., will also serve as upper bounds for the function f(n).
However, we typically consider only the closest upper bound among
all possible candidates, as the remaining upper bounds are not as
useful.
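The claim that 3n + 2 <= 5n for every n >= 1 can also be checked numerically; a minimal C sketch (the function name is ours, not from the notes):

```c
/* Checks the witness pair c = 5, g(n) = n for f(n) = 3n + 2. */
int bound_holds(int n) {
    return 3 * n + 2 <= 5 * n;   /* holds for every n >= n0 = 1 */
}
```

Note that bound_holds(0) is false, which is exactly why the definition needs the threshold n0 = 1.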
1. Given f(n) = 2n^2 + 3n + 2 and g(n) = n^2, prove that f(n) = O(g(n)).
Solution:
Steps:
1. We have to show f(n) <= c.g(n), where c and n0 are some positive constants, for all n >= n0. …(1)
2. Now find values of c and n0 such that inequality (1) holds: for n >= 1, 2n^2 + 3n + 2 <= 2n^2 + 3n^2 + 2n^2 = 7n^2.
3. Therefore, with c = 7 and n0 = 1, f(n) <= c.g(n), and hence f(n) = O(n^2).
2. Given f(n) = 5n^2 + 6n + 4 and g(n) = n^2, prove that f(n) is O(n^2).
Solution:
f(n) will be O(n^2) if and only if the following condition holds good:
5n^2 + 6n + 4 <= c.n^2
For n >= 1, 5n^2 + 6n + 4 <= 5n^2 + 6n^2 + 4n^2 = 15n^2. If we put c = 15 and n0 = 1, then we get
15 <= 15 (which is true), hence f(n) = O(n^2).
3. Solve the function f(n) = 2^n + 6n^2 + 3n and find the big-oh (O) notation.
Steps:
1. Find the fastest-growing term of f(n); this gives the big-oh.
2. Prove it using the definition f(n) <= c.g(n).
Solution:
Big-oh (upper bound) of f(n) = 2^n + 6n^2 + 3n will be 2^n iff
f(n) <= c.2^n for some constant c > 0 and n >= n0 >= 0
2^n + 6n^2 + 3n <= c.2^n
If we put c = 11 and n0 = 1, then we get
11 <= 22 (It is true.)
It means big-oh of f(n) is O(2^n) when c = 11 and n0 = 1.
We say that
f(n) = Ω(g(n)) if and only if
f(n) >= c.g(n) for some constant c > 0 and all n >= n0 >= 0
For example :
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Ω(g(n)).
Solution
1. We have to show that f(n) >= c.g(n), where c and n0 are some positive constants, for all n >= n0.
2. 3n + 2 >= c.n
3. Put c = 1:
4. 3n + 2 >= n
5. Put n = 1:
6. 5 >= 1 [ True ]
7. Hence, with c = 1 and n0 = 1, f(n) = Ω(g(n)).
Solution :
Steps:
1. Find the smallest-degree term of n in f(n). This gives the lower bound (best case) for the function f(n).
2. Use the formula to find c and n0 to prove your claim.
We say that
f(n) = Θ(g(n)) if and only if
c1.g(n) <= f(n) <= c2.g(n) for all n >= n0 >= 0, for some constants c1, c2 > 0
For example
1. Given f(n) = 3n + 2 and g(n) = n, prove that f(n) = Θ(g(n)).
Solution
1. We have to show that c1.g(n) <= f(n) <= c2.g(n) for all n >= n0 >= 0, for some constants c1, c2 > 0.
2. c1.g(n) <= f(n) <= c2.g(n)
3. c1.n <= 3n + 2 <= c2.n
4. Put c1 = 1, c2 = 4 and n = 2; then 2 <= 8 <= 8 [ True ]
5. Hence, with c1 = 1, c2 = 4 and n0 = 2, f(n) = Θ(g(n)).
3. Solve the function f(n) = 27n^2 + 16 and find the tight bound (Θ) for it.
Solution:
1. When the upper bound (big-oh) and the lower bound (big-omega) of f(n) are equal, we call it the Theta notation of f(n). Here both are n^2.
2. Use the formula c1.g(n) <= f(n) <= c2.g(n): with c1 = 27, c2 = 43 and n0 = 1, 27n^2 <= 27n^2 + 16 <= 43n^2 holds, hence f(n) = Θ(n^2).
We say that
f(n) = o(g(n)) if and only if
0 <= f(n) < c.g(n) for every constant c > 0 and all n >= n0 > 0
Or
lim (n→∞) f(n)/g(n) = 0
For example
1. Given f(n) = 2n and g(n) = n^2, prove that f(n) = o(g(n)).
Solution
lim (n→∞) 2n/n^2 = lim (n→∞) 2/n = 2/∞ = 0
Hence f(n) = o(g(n)).
We say that
f(n) = ω(g(n)) if and only if
0 <= c.g(n) < f(n) for every constant c > 0 and all n >= n0 > 0
Or
lim (n→∞) f(n)/g(n) = ∞
For example
1. Given f(n) = n^2/2 and g(n) = n, prove that f(n) = ω(g(n)).
Solution
lim (n→∞) (n^2/2)/n = lim (n→∞) n/2 = ∞/2 = ∞
Hence f(n) = ω(g(n)).
Complexity of Algorithms
1. Time complexity
2. Space complexity
The running time of the algorithm is the sum of running times for
each statement executed; a statement that takes Ci steps to execute
and executes “n” times will contribute (Ci * n) to the total running
time.
A(n)
{                                       Cost    Times
    int i;                              C1      1
    for (i = 1; i <= n; i++)            C2      n+1
        printf("MIET");                 C3      n
}
Total running time T(n) = C1·1 + C2·(n+1) + C3·n = O(n).
A(n)
{
    int i;
    for (i = 1; i < n; i = i * 2) {
        pf("MIET");
    }
}
As the loop variable is not incremented by one, we have to carefully calculate the number of times "MIET" will be printed.
i = 1, 2, 4, . . . , n
After k iterations, i reaches n:
Iterations: 2^0, 2^1, 2^2, …, 2^k
2^k = n
Convert it into logarithmic form… [ If a^b = c, we can write it as log_a c = b ]
k = log2(n)
Time complexity is O(logn).
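The count can be confirmed by simulating the loop; a small C sketch (the function name is illustrative):

```c
/* Counts how often the body of  for (i = 1; i < n; i = i * 2)  runs.
   i takes the values 1, 2, 4, ..., so the count is ceil(log2(n)). */
int doubling_count(int n) {
    int count = 0;
    for (int i = 1; i < n; i = i * 2)
        count++;
    return count;
}
```

For n = 16 the body runs 4 times (i = 1, 2, 4, 8), matching log2(16) = 4.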
A(n)
{
    int i, j;
    for (i = 1 to n)
        for (j = 1 to n)
            pf("MIET");   // It will be printed n^2 times
}

A(n)
{
    int i, j, k;
    for (i = 1 to n)
        for (j = 1 to n)
            for (k = 1 to n)
                pf("MIET");   // n^3 times
}
A(n)
{
1.  int i = 1, j = 1;
2.  while (j <= n)
    {
3.      i++;
4.      j = j + i;
5.      pf("MIET");   // We need to know the number of times it will be printed
    }
}
Solution:
We have to find out the number of times "MIET" will be printed to know the time complexity of the above program. There is a dependency between line number 2 (the while condition) and line number 4 (the value of j, which in turn depends on i).
i = 1, 2, 3, 4, …, k
j = 1, 3, 6, 10, …, k(k+1)/2 [ sum of the first k natural numbers ]
k(k+1)/2 > n [ the loop stops as soon as j exceeds n ]
k^2 ≈ n [ we drop constant factors and lower-order terms ]
k ≈ √n, so the time complexity is O(√n).
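The derivation can be checked by running the same loop with a counter in place of the print; a C sketch:

```c
/* Same loop as above, counting iterations instead of printing. */
int sqrt_loop_count(int n) {
    int i = 1, j = 1, count = 0;
    while (j <= n) {
        i++;
        j = j + i;   /* j = 1, 3, 6, 10, ... = k(k+1)/2 */
        count++;     /* one "MIET" per iteration */
    }
    return count;
}
```

For n = 100 this returns 13, close to √(2·100) ≈ 14, consistent with O(√n).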
Pattern05: When there is a dependency between loops (having more
than one loop)
Note – We have to unroll loops in order to find out the number of times
a particular statement gets executed.
A(n)
{
    int i, j, k;
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            for (k = 1; k <= 100; k++)
            {
                pf("MIET");
            }
        }
    }
}
There is a dependency between the second and the first loop; therefore, we unroll the loops to count the number of times "MIET" will be printed:
i = 1: j runs 1 time,  inner loop 1 × 100 times
i = 2: j runs 2 times, inner loop 2 × 100 times
i = 3: j runs 3 times, inner loop 3 × 100 times
…
i = n: j runs n times, inner loop n × 100 times
Total = 100 × (1 + 2 + … + n) = 100 × n(n+1)/2 = O(n^2).
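The unrolling can be confirmed by counting directly; a C sketch:

```c
/* Counts the prints of the dependent triple loop above:
   total = 100 * (1 + 2 + ... + n) = 100 * n(n+1)/2. */
long dependent_loop_count(int n) {
    long count = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            for (int k = 1; k <= 100; k++)
                count++;
    return count;
}
```

For n = 10 this gives 100 × 55 = 5500, i.e. Θ(n^2) growth.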
Let’s learn how to write a recurrence relation for the given program
having a recursive call/function.
A(n)
{
    if (n > 0)      // stopping condition for the recursive call
    {
        pf("MIET");
        A(n - 1);   // calling itself (recursive function)
    }
}
We assume that T(n) is the total time taken to solve A(n) , where n is the
input. It means that this T(n) is split up among all statements inside the
function i.e., time taken by all instructions inside a function is equal to
T(n).
Note: “if” and “print” take constant amount of time step as per the RAM
model, and we can use either 1 or C to indicate it. When “if- condition”
gets false, it again takes constant amount of time — (one-time step).
A(n)
{
    if (n > 0)                     …… 1
    {
        for (i = 1; i <= n; i++)   …… n+1
        {
            pf("MIET");            …… n
        }
        A(n - 1);                  …… T(n-1)
    }
}
Recurrence: T(n) = T(n-1) + 2n + 2, which solves to O(n^2).
#Factorial of a number
fact(n)
{
    if (n <= 1)
        return 1;
    else
        return n * fact(n - 1);   // here "*" takes a constant time step
}
Recurrence: T(n) = T(n-1) + 1.
#Fibonacci number Fn
fib(n)
{
if (n== 0 || n==1)
return n;
else
return fib(n-1)+ fib(n-2);
}
Solving T(n) = T(n-1) + 1 by the iteration method:
T(n-1) = T(n-2) + 1
T(n-2) = T(n-3) + 1
Substituting repeatedly, T(n) = T(n-k) + k; with k = n-1 we reach T(1), so
T(n) = O(n)
For the recurrence T(n) = 2T(n/2) + n, the recursion tree has logn levels of cost n each, giving n + nlogn = O(nlogn).
Question. State the Master Theorem and find the time complexity for the recurrence relation T(n) = 9T(n/3) + n.
Solution — Let a >= 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence
T(n) = aT(n/b) + f(n)
where we interpret n/b to mean either the floor or the ceiling of n/b. Then T(n) has the following asymptotic bounds:
1. If f(n) = O(n^(log_b a − ϵ)) for some constant ϵ > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · logn).
3. If f(n) = Ω(n^(log_b a + ϵ)) for some constant ϵ > 0, and if a·f(n/b) <= c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
Here a = 9, b = 3 and f(n) = n, so n^(log_b a) = n^(log_3 9) = n^2. Since f(n) = n = O(n^(2−ϵ)) with ϵ = 1, case 1 applies and T(n) = Θ(n^2).
Question. Using the Master Theorem, find the time complexity for the recurrence relation T(n) = 3T(n/4) + nlogn.
Solution — Given: a = 3, b = 4 and f(n) = nlogn.
Now we calculate n^(log_b a), as it is the common term in all three cases of the Master Theorem:
n^(log_b a) = n^(log_4 3) = n^0.793
Since nlogn = Ω(n^(log_4 3 + ϵ)) with ϵ ≈ 0.2, case 3 may apply. The regularity condition also holds: a·f(n/b) = 3(n/4)log(n/4) <= (3/4)·nlogn = c·f(n) with c = 3/4 < 1. Hence T(n) = Θ(nlogn).
Solution:
Step 1: Guess the solution
T(n) = O(n^n) ……. [1] [ You can easily get it using the iteration method. ]
It should be true for 1, 2, 3, . . ., k:
T(k) <= c.k^k [ 1 <= k <= n ]
When we move forward from 1 to n, somewhere we get k = n-1:
T(n-1) <= c.(n-1)^(n-1) …………………… [2]
Shell-Sort Algorithm
Q. Describe the difference between the average-case and the worst-case analysis of an algorithm, and give an example of an algorithm whose average-case running time differs from its worst-case running time.
1. Count sort
2. Radix sort
3. Bucket sort
Counting-Sort Algorithm
Counting Sort assumes that each of the n input elements is an integer in the range 0 to k, where k is the largest value in the input.
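A minimal C sketch of counting sort under that assumption (the fixed bound k <= 100 is ours, for illustration):

```c
#include <string.h>

/* Counting sort: stable, O(n + k), for integers in [0, k], k <= 100 here. */
void counting_sort(const int a[], int b[], int n, int k) {
    int count[101];
    memset(count, 0, sizeof count);
    for (int i = 0; i < n; i++)        /* count each value's occurrences */
        count[a[i]]++;
    for (int v = 1; v <= k; v++)       /* prefix sums give final positions */
        count[v] += count[v - 1];
    for (int i = n - 1; i >= 0; i--)   /* scan right-to-left for stability */
        b[--count[a[i]]] = a[i];
}
```

Sorting {2, 5, 3, 0, 2, 3, 0, 3} with k = 5 yields {0, 0, 2, 2, 3, 3, 3, 5}.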
Radix-Sort Algorithm
Bucket-Sort Algorithm
Unit-02
Red-Black Tree
It is a Binary Search Tree (BST) with one extra bit of storage per node: its
color, which can be either red or black.
Properties of RB Tree:
1. Every node is either red or black.
2. The root is black.
3. Every leaf node (NIL) is black.
4. If a node is red, then both its children are black. It means we need
to avoid red-red (RR) conflict.
5. For each node, all simple paths from the node to descendant
leaves contain the same number of black nodes.
Q5. Construct an RB Tree, and let h be the height of the tree and n be
the number of internal nodes in the tree. Show that h<= 2log2(n+1).
B-Trees
A B-tree is a self-balancing m-way search tree with the following
restrictions:
1. The root node must have at least two children.
2. Every node, except for the root and leaf nodes, must have at least ⌈m/2⌉
children.
3. All leaf nodes must be at the same level.
4. The creation process is performed bottom-up.
Note— An m-way search tree, also known as an m-ary search tree, does not have
strict height control during its construction. Depending on the order (m) and the
specific keys inserted, an m-way search tree can grow to a height of "n," where "n"
is the number of keys inserted into the tree. To address this issue and maintain a
more balanced structure, B-trees were introduced. B-trees are a type of m-way tree
with specific restrictions and properties designed to keep their height under control
and ensure balanced branching.
Note—The simplest B-tree occurs when t=2. Every internal node then
has either 2, 3, or 4 children, and we have a 2-3-4 tree.
Q8. Define Binomial heap, write an algorithm for union of two Binomial
heaps and also write its time complexity.
BINOMIAL-HEAP-DECREASE-KEY (H, x, k)
1  if k > key[x]
2      then error "new key is greater than current key"
3  key[x] ← k
4  y ← x
5  z ← p[y]
6  while z ≠ NIL and key[y] < key[z]
7      do exchange key[y] and key[z]
8         if y and z have satellite fields, exchange them, too
9         y ← z
10        z ← p[y]
Running time T(n) = O(logn)
Q10. Define Fibonacci heap and also compare the complexities of Binary
heap, Binomial heap and Fibonacci heap.
Steps:
1. Delete the minimum node.
2. Join the root list of the deleted node's children to the original root list.
3. Traverse the root list from left to right:
   3.a Find the new minimum.
   3.b Merge trees having the same degree.
Example-
Q. Explain the Search operation in Skip list with a suitable example. Also
write its algorithm.
Q. Insert the following strings into the initially empty trie: DOG, DONE, CAT, CAN, RIGHT, DO,
JUG, DAA, CA, CAME. Then delete the strings DO, CAME, and RIGHT from it.
Q14. Insert strings < ten, test, car, card, nest, next, tea, tell, park, part,
see, seek, seen> in an empty Trie data structure and also compress the
Trie.
Unit-03
1. Quick sort
2. Merge sort
3. Strassen’s algorithm for matrix multiplication
4. Convex Hull algorithm
5. Closest pair of points
3. For each part, find the point P with the maximum distance from line L. P forms a triangle with the points min_x and max_x.
4. The above step divides the problem into two sub-problems, which are solved recursively: the line joining P and min_x and the line joining P and max_x become the new lines, and the points lying outside the triangle become the new sets of points. Repeat step 3 until no point is left outside a line. The endpoints found this way form the convex hull.
Example:
Find a convex hull of a set of points given below.
Matrix Multiplication
V.V.I
Q2. Compare and contrast BFS and DFS. How do they fit into the
decrease and conquer strategy?
Decrease and Conquer Strategy: The name 'divide and conquer' is used only when a problem may generate two or more sub-problems. The name 'decrease and conquer' is used for the single sub-problem class. Binary search comes under decrease and conquer because each step produces only one sub-problem. Other examples are BFS and DFS.
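Binary search illustrates the single-sub-problem idea directly; a brief C sketch: each comparison discards half the range, leaving exactly one smaller sub-problem, hence O(logn).

```c
/* Binary search on a sorted array: one sub-problem of half the size
   per step (decrease and conquer). Returns the index of key, or -1. */
int binary_search(const int a[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;   /* keep only the right half */
        else
            hi = mid - 1;   /* keep only the left half */
    }
    return -1;
}
```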
5. It is more suitable for game or puzzle problems. We make a decision and then explore all paths following this decision. If this decision leads to a win situation, we stop.
Greedy(arr[], n)
{
    solution = 0;
    for i = 1 to n
    {
        x = select(arr[i]);
        if feasible(x)
        {
            solution = solution + x;
        }
    }
}
Initially, the solution is assigned the value zero. We pass the array and the number of elements to the greedy algorithm. Inside the for loop, we select the elements one by one and check whether including each one keeps the solution feasible. If it does, we add it to the solution. A greedy algorithm makes good local choices in the hope that the final solution will be feasible or optimal.
Feasible solution - Most of the problems have ‘n’ inputs and require us
to obtain a subset that satisfies some constraints. Any subset that
satisfies these constraints is called a feasible solution. A problem can
have many feasible solutions.
“A feasible solution satisfies all constraints of the problem.”
Optimal solution - It is the best solution out of all possible feasible
solutions.
Greedy Approach:
1. We make the choice that seems best at the moment, in the hope that it will lead to a globally optimal solution.
2. It does not guarantee an optimal solution.
3. It takes less memory.
4. Fractional Knapsack is an example of the Greedy approach.
Q6. Find the optimal solution for the fractional knapsack problem with n = 7 and a knapsack capacity of m = 15, where the profits and weights of the items are as follows: (p1, p2, . . . , p7) = (10, 5, 15, 7, 6, 18, 3) and (w1, w2, . . . , w7) = (2, 3, 5, 7, 1, 4, 1), respectively.
Capacity of knapsack (bag) = 15
We have to put objects in the bag (knapsack) such that we get the maximum profit.
Profit = Σ xi·pi, where xi (0 <= xi <= 1) is the fraction of object i taken.
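The greedy rule is to take items in decreasing order of profit/weight ratio; a C sketch that solves this very instance (the selection-by-scanning and the bound n <= 32 are ours, for brevity):

```c
/* Greedy fractional knapsack: repeatedly take the remaining item with the
   best profit/weight ratio; take a fraction of the last item if needed. */
double fractional_knapsack(const double p[], const double w[], int n, double cap) {
    int used[32] = {0};
    double profit = 0.0;
    while (cap > 0) {
        int best = -1;
        for (int i = 0; i < n; i++)   /* best remaining ratio p[i]/w[i] */
            if (!used[i] && (best < 0 || p[i] / w[i] > p[best] / w[best]))
                best = i;
        if (best < 0) break;          /* all items taken */
        used[best] = 1;
        if (w[best] <= cap) {
            profit += p[best];        /* whole item fits */
            cap -= w[best];
        } else {
            profit += p[best] * cap / w[best];   /* fractional part */
            cap = 0;
        }
    }
    return profit;
}
```

For the instance above it returns 55.33: objects 5, 1, 6, 3, 7 are taken whole, plus 2/3 of object 2.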
We are given a set of n jobs. Associated with job i is an integer deadline di >= 0 and a profit pi > 0. For any job i, the profit pi is earned if and only if the job is completed by its deadline. The objective is to maximize the total profit by scheduling jobs so that they meet their deadlines.
Q. Using a greedy method, find the optimal solution for the “job
sequencing problem with deadlines” with n = 4, where (p1, p2, p3,
p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).
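The greedy schedule (jobs by decreasing profit, each placed in the latest free slot on or before its deadline) can be sketched in C; the array bounds assume small n:

```c
/* Greedy job sequencing with deadlines; returns the total profit earned. */
int job_sequencing(const int p[], const int d[], int n) {
    int slot[16], order[16], maxd = 0, total = 0;
    for (int i = 0; i < n; i++)
        if (d[i] > maxd) maxd = d[i];
    for (int t = 1; t <= maxd; t++)
        slot[t] = -1;                       /* all time slots free */
    for (int i = 0; i < n; i++)
        order[i] = i;
    for (int i = 0; i < n; i++)             /* sort jobs by decreasing profit */
        for (int j = i + 1; j < n; j++)
            if (p[order[j]] > p[order[i]]) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }
    for (int i = 0; i < n; i++) {
        int job = order[i];
        for (int t = d[job]; t >= 1; t--)   /* latest free slot <= deadline */
            if (slot[t] == -1) {
                slot[t] = job;
                total += p[job];
                break;
            }
    }
    return total;
}
```

For the instance above it schedules job 4 in slot 1 and job 1 in slot 2, for a total profit of 127.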
Activity   A1  A2  A3  A4  A5  A6  A7  A8
Start       1   0   1   4   2   5   3   4
Finish      3   4   2   6   9   8   5   5
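Greedy activity selection on this table can be sketched in C (sort by finish time, then keep picking the first compatible activity; the in-place selection sort is ours for brevity):

```c
/* Activity selection: sort by finish time, then repeatedly pick the first
   activity whose start is >= the finish of the last one picked. */
int max_activities(int s[], int f[], int n) {
    for (int i = 0; i < n; i++)          /* sort both arrays by finish time */
        for (int j = i + 1; j < n; j++)
            if (f[j] < f[i]) {
                int t = f[i]; f[i] = f[j]; f[j] = t;
                t = s[i]; s[i] = s[j]; s[j] = t;
            }
    int count = 1, last = f[0];          /* the earliest finisher is always safe */
    for (int i = 1; i < n; i++)
        if (s[i] >= last) {
            count++;
            last = f[i];
        }
    return count;
}
```

For the eight activities above it selects A3, A7 and A6, so the maximum number of compatible activities is 3.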
Algorithm:
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree formed so far. If a cycle is not formed, include this edge; else, discard it.
3. Repeat step 2 until there are (V-1) edges in the spanning tree.
Given a graph G = (V, E), we want to find a shortest path from a given source vertex s ∈ V to each vertex v ∈ V.
Q. Apply the greedy single source shortest path algorithm on the graph
given below.
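As the graph figure is not reproduced in the text, here is a C sketch of the greedy (Dijkstra) algorithm on a made-up 5-vertex example; the adjacency matrix, with 0 meaning "no edge", is ours:

```c
#define V 5
#define INF 1000000

/* Dijkstra's single-source shortest paths, O(V^2) adjacency-matrix version. */
void dijkstra(int g[V][V], int src, int dist[V]) {
    int done[V] = {0};
    for (int i = 0; i < V; i++)
        dist[i] = INF;
    dist[src] = 0;
    for (int it = 0; it < V; it++) {
        int u = -1;
        for (int i = 0; i < V; i++)      /* closest unfinished vertex */
            if (!done[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        done[u] = 1;
        for (int v = 0; v < V; v++)      /* relax every edge out of u */
            if (g[u][v] && dist[u] + g[u][v] < dist[v])
                dist[v] = dist[u] + g[u][v];
    }
}
```

On an undirected example graph with edges 0-1:10, 0-2:3, 1-2:4, 1-3:2, 2-3:8, 3-4:7 and source 0, the distances come out 0, 7, 3, 9, 16.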
Unit-04
Dynamic Programming
Fibonacci series: 0 1 1 2 3 5 . . .
Index n:          0 1 2 3 4 5 . . . (Fn = the nth Fibonacci term)
F3 = 2, F1 = 1, F4 = 3, etc.
int fib(int n)
{
if(n<=1)
return n;
return fib(n-2) + fib(n-1);
}
T(n) = 1                     if n <= 1
T(n) = T(n-2) + T(n-1) + 1   if n > 1
Now, try to observe repeated recursive calls for the same argument
(input value ) using a recursive tracing tree.
int F[50];   // memo table, every entry initialized to -1

int fib(int n)
{
    if (n <= 1)
        return n;
    if (F[n] != -1)
        return F[n];                   // reuse the stored value
    return F[n] = fib(n-2) + fib(n-1); // compute once and store
}

void main(void)
{
    int i, result = 0;
    …
Note – If weights are not given in the increasing order, then arrange
them in the increasing order and also arrange profits accordingly.
Pi = profits
Wi = weights
i = Objects
Formula to fill out cells: mat[i][w] = max( mat[i-1][w], mat[i-1][w - w[i]] + p[i] )
2. For the first object, check its weight (w1), which is 2. For capacity w = 2 we can place it, so put its profit in that cell: mat[1][2] = 1. So far we have only one object to consider, so the first object (i = 1), having 2 units of weight, fits in every knapsack of capacity w >= 2.
3. For cells whose capacity is smaller than the object's weight (here mat[1][0] and mat[1][1]), the object cannot be placed, so we copy the value from the row above: mat[1][1] = mat[0][1] = 0.
4. For the second object, the weight is 3 units. Now we can consider two objects (1 and 2) together. The second object, having 3 units of weight, can be placed in cell [2][3], which has 3 units of capacity. Both objects together weigh 5 units, so they fit in cells [2][5], [2][6], [2][7] and [2][8], which have capacity 5 or more. For cell [2][2] the second object does not fit, so copy mat[1][2] = 1 from the row above; follow the same rule for cell [2][4].
Selection of objects Xi = X1 X2 X3 X4 ( 0 1 0 1 )
Only objects 2 and 4 have been placed in the knapsack to gain
maximum profit.
                         Capacity (w)
Pi  Wi  Objects (i)   0  1  2  3  4  5  6  7  8
        0             0  0  0  0  0  0  0  0  0
1   2   1             0  0  1  1  1  1  1  1  1
2   3   2             0  0  1  2  2  3  3  3  3
5   4   3             0  0  1  2  5  5  6  7  7
6   5   4             0  0  1  2  5  6  6  7  8
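The whole table follows from the cell formula above; a C sketch fixed to this instance size (the bounds n <= 7, W <= 15 are ours):

```c
/* Bottom-up 0/1 knapsack using
   mat[i][w] = max(mat[i-1][w], mat[i-1][w - wt[i-1]] + p[i-1]). */
int knapsack01(const int p[], const int wt[], int n, int W) {
    int mat[8][16] = {{0}};              /* row 0 (no objects) stays 0 */
    for (int i = 1; i <= n; i++)
        for (int w = 0; w <= W; w++) {
            mat[i][w] = mat[i - 1][w];   /* leave object i out */
            if (wt[i - 1] <= w && mat[i - 1][w - wt[i - 1]] + p[i - 1] > mat[i][w])
                mat[i][w] = mat[i - 1][w - wt[i - 1]] + p[i - 1];  /* take it */
        }
    return mat[n][W];
}
```

With profits (1, 2, 5, 6), weights (2, 3, 4, 5) and W = 8 it returns 8, the bottom-right cell of the table.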
We are given n matrices A1, A2, …. , An and asked in what order these
matrices should be multiplied so that it would take a minimum number of
computations to derive the result.
Two matrices are called compatible only if the number of columns in the first
matrix and the number of rows in the second matrix are the same. Matrix
multiplication is possible only if they are compatible. Let A and B be two
compatible matrices of dimensions p x q and q x r. Each element of each row
of the first matrix is multiplied with corresponding elements of the appropriate
column in the second matrix.
A1 = 5 x 4
A2 = 4 x 3
Resultant matrix will have 15 elements ( 5 rows and 3 columns ), and each
element in the resultant matrix is derived using 4 multiplications. It means 60
(5 x 4 x 3) multiplications are required.
For n = 3 matrices, the number of parenthesizations is
(1/3) × C(4, 2)
= (1/3) × 4! / (2! × (4 − 2)!)
= (1/3) × (4 × 3) / 2!
= 2 (We can parenthesize these three matrices in only two ways.)
Both multiplication sequences would produce the same resultant matrix, having 5 rows and 2 columns, but the numbers of multiplications are different. This leads to the question: what order should be selected for a chain of matrices to minimize the number of multiplications and reduce time complexity?
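The dynamic-programming answer to that question can be sketched in C; matrix A_k has dimensions d[k-1] × d[k], and the bound n <= 7 is ours:

```c
/* Matrix-chain order: m[i][j] = minimum scalar multiplications for A_i..A_j. */
int matrix_chain(const int d[], int n) {     /* n = number of matrices */
    int m[8][8] = {{0}};                     /* m[i][i] = 0: single matrix */
    for (int len = 2; len <= n; len++)       /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = 1 << 30;
            for (int k = i; k < j; k++) {    /* try every split point */
                int cost = m[i][k] + m[k + 1][j] + d[i - 1] * d[k] * d[j];
                if (cost < m[i][j])
                    m[i][j] = cost;
            }
        }
    return m[1][n];
}
```

For the two-matrix example above (dimensions 5×4 and 4×3) it gives 60; for a chain with dimensions 5×4, 4×6, 6×2 it gives 88, by choosing (A1(A2A3)) instead of ((A1A2)A3), which would cost 180.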
The goal is to assign colors to the vertices of a graph in such a way that no two adjacent
vertices (connected by an edge) have the same color, while using the minimum number of
colors.
In this problem, a salesman must visit n cities. We can say that the salesman wishes to make a tour, visiting each city exactly once and finishing at the city he starts from. The goal is to find a tour of minimum cost. We assume that every two cities are connected, so we can model the cities as a complete graph of n vertices, where each vertex represents a city.
Unit-05
Problems are divided into solvable and unsolvable problems. Solvable problems are further classified into the classes P, NP and NPC (NP-complete), with both P and NPC contained in NP.
Fig - How P, NP, NP-hard and NP-complete are related to each other.
Q2. What are approximation algorithms? What is meant by a ρ(n)-approximation algorithm? Discuss the approximation algorithm for the Vertex Cover problem.
An approximation algorithm is a way of dealing with NP-completeness for
an optimization problem. This technique does not guarantee the best
solution. The goal of the approximation algorithm is to come as close as
possible to the optimal solution in polynomial time.
C = cost of the solution produced by the approximation algorithm
C* = cost of an optimal solution
For a maximization problem, 0 < C <= C*, and the ratio C*/C (the approximation ratio) gives the factor by which the cost of an optimal solution is larger than the cost of the approximate solution.
For a minimization problem, 0 < C* <= C, and the ratio C/C* gives the factor by which the cost of the approximate solution is larger than the cost of an optimal solution.
Vertex Cover Problem – Given an undirected graph, the vertex cover problem is to find a minimum-size vertex cover. Although the name is Vertex Cover, the set covers all edges of the given graph. It is a minimization problem because we have to find a set containing the minimum number of vertices that covers all edges of the given undirected graph.
Example –
[Figure: an undirected graph on the vertices a, b, c, d, e]
Solution –
Line 1. C = solution set, which is empty in the beginning.
Line 2. E ʹ = { (a, b), (a, c), (c, d), (b, d), (d, e)} // set of edges
Line 3. While E ʹ ! = empty
Line 4. Add an arbitrary edge from E ʹ in the solution set on line 5.
Suppose we consider (a, b) from E ʹ, then
Line 5. C = C U { a, b }
C = { a, b } // As of now solution set contains two vertices
Line 6. Remove every edge incident on either vertex a or vertex b. Therefore,
remove the following edges from E ʹ:
(a, b), (a, c), and (b, d) [ These three edges are incident on a or b. ]
Now, E ʹ = { (c, d), (d, e) }
As E ʹ is not empty, add another arbitrary edge from E ʹ to the solution set C. Let's take the edge (c, d) now.
C = { a, b, c, d }
Using line number 6, remove every edge incident on either vertex c or d; therefore, remove (c, d) and (d, e) from E ʹ. E ʹ is empty now, so return C using line number 7, which contains the four vertices a, b, c, and d.
C covers all edges of the graph. It is not necessarily a minimum cover, but the algorithm guarantees that |C| is at most twice the size of an optimal vertex cover.
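The walk-through above follows APPROX-VERTEX-COVER; a compact C sketch on the same graph, with the vertices a-e encoded as 0-4 (the edge-list encoding is ours):

```c
/* APPROX-VERTEX-COVER: take an arbitrary uncovered edge (u, v), add both
   endpoints to the cover, and skip every edge already touched.
   This is the classic 2-approximation. Returns the cover size. */
int approx_vertex_cover(const int eu[], const int ev[], int m, int in_cover[]) {
    int size = 0;
    for (int i = 0; i < m; i++)
        if (!in_cover[eu[i]] && !in_cover[ev[i]]) {  /* edge still uncovered */
            in_cover[eu[i]] = 1;                     /* C = C U { u, v } */
            in_cover[ev[i]] = 1;
            size += 2;
        }
    return size;
}
```

On the edges (a,b), (a,c), (c,d), (b,d), (d,e) taken in that order it returns the cover {a, b, c, d} of size 4, exactly as in the walk-through; the optimal cover {a, d} has size 2, so the factor-of-two bound is tight here.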
Randomized algorithm – An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next pivot (or we randomly shuffle the array). Typically, this randomness is used to reduce the time complexity or space complexity of other standard algorithms.
Algorithm
NAIVE-STRING-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. for s ← 0 to n -m
4. do if P [1...m] = T [s + 1....s + m]
5. then print "Pattern occurs with shift" s
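The same matcher in C, returning the first shift instead of printing every one (0-indexed shifts; returning only the first match is our simplification):

```c
#include <string.h>

/* NAIVE-STRING-MATCHER: try every shift s and compare P with T[s..s+m-1].
   O((n - m + 1) * m). Returns the first matching shift, or -1. */
int naive_match(const char *T, const char *P) {
    int n = strlen(T), m = strlen(P);
    for (int s = 0; s + m <= n; s++)
        if (strncmp(T + s, P, m) == 0)
            return s;              /* pattern occurs with shift s */
    return -1;
}
```

For the text and pattern used in the next question, naive_match("3141592653589793", "26") returns the shift 6.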
Algorithm
Q 3. For q = 11, how many valid and spurious hits are found for
the given Text and Pattern:
T = 3141592653589793
P = 26
Solution –
Step 1: Find P mod q
26 mod 11 = 4
T=3141592653589793
31 mod 11 = 9
14 mod 11 = 3
41 mod 11 = 8
15 mod 11 = 4
59 mod 11 = 4
92 mod 11= 4
26 mod 11 = 4
65 mod 11 = 10
53 mod 11 = 9
35 mod 11 = 2
58 mod 11 = 3
89 mod 11 = 1
97 mod 11 = 9
79 mod 11 = 2
93 mod 11 = 5
The windows whose hash value equals P mod q = 4 are 15, 59, 92 and 26. Of these, only 26 actually equals the pattern, so there is 1 valid hit and 3 spurious hits:
15 mod 11 = 4 [ spurious hit ]
59 mod 11 = 4 [ spurious hit ]
92 mod 11 = 4 [ spurious hit ]
26 mod 11 = 4 [ Here, 26 is equal to P (the pattern), so it is a valid hit ]
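The whole hit count can be reproduced by a Rabin-Karp sketch in C over decimal digits (the rolling-hash arithmetic is the standard one; the hit counters are ours):

```c
#include <string.h>

/* Rabin-Karp: compare the window hash mod q first; verify characters only
   on a hash hit, counting valid and spurious hits separately. */
void rabin_karp_hits(const char *T, const char *P, int q,
                     int *valid, int *spurious) {
    int n = strlen(T), m = strlen(P);
    int h = 1, p = 0, t = 0;
    *valid = *spurious = 0;
    for (int i = 0; i < m - 1; i++)
        h = (h * 10) % q;                    /* h = 10^(m-1) mod q */
    for (int i = 0; i < m; i++) {
        p = (10 * p + (P[i] - '0')) % q;     /* hash of the pattern */
        t = (10 * t + (T[i] - '0')) % q;     /* hash of the first window */
    }
    for (int s = 0; s + m <= n; s++) {
        if (p == t) {                        /* hash hit */
            if (strncmp(T + s, P, m) == 0)
                (*valid)++;
            else
                (*spurious)++;
        }
        if (s + m < n)                       /* roll the hash to shift s+1 */
            t = (10 * (t - (T[s] - '0') * h % q + q) + (T[s + m] - '0')) % q;
    }
}
```

For T = 3141592653589793, P = 26 and q = 11 this reports 1 valid hit and 3 spurious hits, matching the table above.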
Algorithm
Length of "aba" = 3
Therefore, σ(x) = 3
Transition function δ : Q × Σ → Q
KMP-MATCHER (T, P)
1. n ← length [T]
2. m ← length [P]
3. Π ← COMPUTE-PREFIX-FUNCTION (P)
4. q ← 0                        // number of characters matched
5. for i ← 1 to n               // scan T from left to right
6.     do while q > 0 and P[q + 1] ≠ T[i]
7.         do q ← Π[q]          // next character does not match
8.     if P[q + 1] = T[i]
9.         then q ← q + 1       // next character matches
10.    if q = m                 // is all of P matched?
11.        then print "Pattern occurs with shift" i − m
12.        q ← Π[q]             // look for the next match
Prefix: a substring that starts at the beginning of the string.
Suffix: a substring that ends at the end of the string.
Π – It is also called the table of the longest proper prefix which is also a suffix (LPS).
Index  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15 16 17 18 19
p      a  b  a  b  b  a  b  b  a  b  b  a  b  a  b  b  a  b  b
Π      0  0  1  2  0  1  2  0  1  2  0  1  2  3  4  5  6  7  8
Note – Use any short-cut trick to prepare LPS or Π table in the exam.
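For checking such a table, the prefix function can also be computed mechanically; a 0-indexed C sketch:

```c
#include <string.h>

/* KMP prefix (failure) function: lps[i] = length of the longest proper
   prefix of P[0..i] that is also a suffix of P[0..i]. */
void compute_lps(const char *P, int lps[]) {
    int m = strlen(P), k = 0;
    lps[0] = 0;
    for (int i = 1; i < m; i++) {
        while (k > 0 && P[k] != P[i])
            k = lps[k - 1];       /* fall back to the next shorter border */
        if (P[k] == P[i])
            k++;                  /* extend the current border */
        lps[i] = k;
    }
}
```

Running compute_lps on the pattern above, "ababbabbabbababbabb", reproduces the Π row of the table (shifted to 0-indexing).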