
DESIGN AND ANALYSIS OF ALGORITHMS (III - CSE) – I SEM

UNIT I

Introduction: Algorithm Definition, Algorithm Specification, Performance Analysis,
Performance Measurement, Asymptotic Notation, Randomized Algorithms. Sets & Disjoint Set
Union: Introduction, Union and Find operations.

Basic Traversal & Search Techniques: Techniques for Graphs, connected components and
Spanning Trees, Bi-connected components and DFS.

ALGORITHM

An algorithm is any well-defined computational procedure that takes some value, or set of
values, as input and produces some value, or set of values, as output. Thus an algorithm is a
sequence of computational steps that transforms the input into the output.

Definition:

“An Algorithm is a finite set of instructions that, if followed, accomplishes a particular task.”

In addition every algorithm must satisfy the following criteria:

Input: zero or more quantities externally supplied.

Output: At least one quantity is produced.

Definiteness: Each instruction must be clear and unambiguous.

Finiteness: (the algorithm terminates after a finite number of steps)

If we trace out the instructions of an algorithm, then for all cases the algorithm terminates
after a finite number of steps.

Effectiveness: (every instruction must be basic i.e. simple instruction)

Every instruction must be sufficiently basic that it can in principle be carried out by a person
using only pencil and paper. It is not enough that each operation be definite, but it must also be
feasible.


Issues in the study of algorithms:

The study of algorithms includes many important and active areas of research.

 How to devise algorithms?

 Creating an algorithm is an art which may never be fully automated.

 How to validate algorithms?

 Once an algorithm is devised, it is necessary to show that it computes the correct


answer for all possible legal inputs.

 A program can be written and then be verified.

 How to test a program?

 Debugging is the process of executing programs on sample data sets to determine
whether faulty results occur and, if so, to correct them.

 Profiling is the process of executing a correct program on data sets and measuring
the time and space it takes to compute the results.

 How to analyze algorithms?

 Analysis of algorithms refers to the task of determining how much computing


time and storage an algorithm requires.

 Theoretical analysis:

– All possible inputs.

– Independent of hardware/ software implementation

 Experimental Study:

– Some typical inputs.

– Depends on hardware/ software implementation.


Algorithm Specification:
An algorithm can be specified in three ways.

1. Natural language like English:

When this way is chosen, care must be taken to ensure that each and every statement is
definite (unambiguous).

2. Graphic representation called a flowchart:

This method works well when the algorithm is small and simple.

3. Pseudo-code method:
This method describes the algorithm as a program that resembles a language such as Pascal or
Algol.

Pseudo-Code Conventions for expressing algorithms:

1. Comments begin with // and continue until the end of line.

2. Blocks are indicated with matching braces {and}.

3. An identifier begins with a letter. The data types of variables are not explicitly declared.

4. Compound data types can be formed with records. Here is an example:

node = record
{
    datatype_1 data_1;
    .
    .
    .
    datatype_n data_n;
    node *link;
}
Here link is a pointer to the record type node. Individual data items of a record can be
accessed with → and period.
5. Assignment of values to variables is done using the assignment statement.
<Variable>:= <expression>;

6. There are two Boolean values TRUE and FALSE.


 Logical Operators AND, OR, NOT
 Relational Operators <, <=,>,>=, =, !=


7. The following looping statements are employed.


For, while and repeat-until
While Loop:
While < condition > do
{
<statement-1>
.
.

<statement-n>

}
For Loop:
For variable := value-1 to value-2 step step do
{
<statement-1>
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
<statement-n>
until<condition>

8. A conditional statement has the following forms.


 If <condition> then <statement>
 If <condition> then <statement-1>
Else <statement-2>

Case statement:
Case
{
:<condition-1> :<statement-1>
.
.
:<condition-n> :<statement-n>

:else :<statement-n+1>
}


9. Input and output are done using the instructions read & write.

10. There is only one type of procedure: Algorithm. The heading takes the form

Algorithm Name(<parameter list>)

Examples:
 Algorithm to find the maximum of n numbers stored in an array:

Algorithm Max(A, n)
// A is an array of size n.
{
    Result := A[1];
    for i := 2 to n do
        if A[i] > Result then
            Result := A[i];
    return Result;
}

Algorithm for Selection Sort:

Algorithm SelectionSort(a, n)
// Sort the array a[1:n] into non-decreasing order.
{
    for i := 1 to n do
    {
        // find the index j of the smallest element in a[i:n]
        j := i;
        for k := i + 1 to n do
            if (a[k] < a[j]) then j := k;
        // swap that element into position i
        t := a[i]; a[i] := a[j]; a[j] := t;
    }
}
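The pseudocode above can be checked against a runnable version. The following Python sketch
(not part of the original notes) is a near line-for-line translation; it assumes 0-based list
indexing, so the loop bounds are shifted accordingly.

import copy

def selection_sort(a):
    # Sort the list a in non-decreasing order, in place, mirroring SelectionSort(a, n).
    n = len(a)
    for i in range(n):
        j = i
        # find the index of the smallest element in a[i:]
        for k in range(i + 1, n):
            if a[k] < a[j]:
                j = k
        a[i], a[j] = a[j], a[i]   # swap it into position i
    return a

print(selection_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]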


Recursive Algorithms:

A Recursive function is a function that is defined in terms of itself.


 Similarly, an algorithm is said to be recursive if the same algorithm is invoked in its
body.

 An algorithm that calls itself is direct recursive.

 An algorithm 'A' is said to be indirect recursive if it calls another algorithm which in
turn calls 'A'.
 Recursive mechanisms are extremely powerful, but even more importantly, they can
often express an otherwise complex process very clearly. For these reasons we
introduce recursion here.
 The following example, the Towers of Hanoi problem, shows how to develop a
recursive algorithm.

1. Towers of Hanoi:

In the Towers of Hanoi problem, a number of disks of decreasing size are stacked on a source
tower (A), largest at the bottom and smallest at the top. Besides this there are two other towers
(B and C); one acts as the destination tower and the other acts as an intermediate tower. The
task is to move the disks from the source tower to the destination tower. The conditions
imposed in this problem are:

1) Only one disk should be moved at a time.


2) No larger disk should ever be placed on top of a smaller disk.


Consider an example to explain more about towers of Hanoi:

Consider there are three towers A, B, C and there will be three disks present in tower A.
Consider C as the destination tower and B as the intermediate tower. The steps involved in
moving the disks from A to C are:

Step 1: Move the smaller disk which is present at the top of the tower A to C.
Step 2: Then move the next smallest disk present at the top of the tower A to B.
Step 3: Now move the smallest disk present at tower C to tower B
Step 4: Now move the largest disk present at tower A to tower C
Step 5: Move the smallest disk present at the top of tower B to tower A.
Step 6: Move the disk present at tower B to tower C.
Step 7: Move the smallest disk present at tower A to tower C
In this way disks are moved from source tower to destination tower.

ALGORITHM FOR TOWERS OF HANOI:

Algorithm TowersOfHanoi(n, x, y, z)
// Move the top n disks from tower x to tower y, using tower z as the intermediate tower.
{
    if (n >= 1) then
    {
        TowersOfHanoi(n - 1, x, z, y);
        write("move top disk from tower ", x, " to top of tower ", y);
        TowersOfHanoi(n - 1, z, y, x);
    }
}

TIME COMPLEXITY OF TOWERS OF HANOI:

The recurrence relation for the step count is:

    t(n) = 1                if n = 0
    t(n) = 2t(n-1) + 2      if n >= 1

Solving this recurrence gives t(n) = 3*2^n - 2, so the time complexity of Towers of Hanoi is
O(2^n). (The number of disk moves made for n disks is 2^n - 1.)
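The recursive structure is easy to exercise with a small runnable sketch. The Python version
below is illustrative (the function and variable names are my own); it records the moves for
n = 3 and confirms that 2^n - 1 = 7 moves are made, consistent with the O(2^n) bound.

def towers_of_hanoi(n, source, destination, intermediate, moves):
    # Move n disks from source to destination using intermediate; record each move.
    if n >= 1:
        towers_of_hanoi(n - 1, source, intermediate, destination, moves)
        moves.append((source, destination))
        towers_of_hanoi(n - 1, intermediate, destination, source, moves)

moves = []
towers_of_hanoi(3, 'A', 'C', 'B', moves)
for src, dst in moves:
    print("move top disk from tower", src, "to top of tower", dst)
print("total moves:", len(moves))   # 7 = 2**3 - 1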


Performance Analysis:

The performance of a program is the amount of computer memory and time needed to run a
program. We use two approaches to determine the performance of a program. One is analytical,
and the other experimental. In performance analysis we use analytical methods, while in
performance measurement we conduct experiments.

1. Space Complexity:
The space complexity of an algorithm is the amount of memory it needs to run to completion.

2. Time Complexity:
The time complexity of an algorithm is the amount of computer time it needs to run to
completion.

Space Complexity:

Space complexity is the amount of memory used by the algorithm (including the input values to
the algorithm) to execute and produce the result.

Auxiliary space is the extra or temporary space used by the algorithm during its execution.

Space Complexity = Auxiliary Space + Input space.

The space needed by an algorithm is the sum of the following components:

1. A fixed part that is independent of the characteristics (e.g., number, size) of the inputs and
outputs. This part typically includes the instruction space (i.e., space for the code), space for
simple variables and fixed-size component variables (also called aggregates), space for
constants, and so on.

2. A variable part that consists of the space needed by component variables whose size depends
on the particular problem instance being solved, the space needed by referenced variables, and
the recursion stack space.

The space requirement S(P) of any algorithm P may therefore be written as

S(P) = c + Sp(instance characteristics)

where c is a constant denoting the fixed part and Sp denotes the variable part.


Example 1:

Algorithm abc (a,b,c)


{
return a+b*c+(a+b-c)/(a+b) +4.0;
}

In this algorithm Sp = 0. Assume each simple variable occupies one word; then the three
variables a, b and c need at least three words, so

S(P) >= 3.

Example 2:

Algorithm Sum(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}
In the above algorithm n, s and i occupy one word each and the array a occupies n words, so
S(P) >= n + 3.

Example 3:
ALGORITHM FOR SUM OF NUMBERS USING RECURSION:
Algorithm RSum (a, n)
{
if(n<=0) then
return 0.0;
else
return RSum(a,n-1)+a[n];
}
The space complexity of the above algorithm is determined by the recursion: each recursive
call requires space for the value of n, the return address and a pointer to the array, i.e., three
words per call. The depth of recursion is (n + 1), so the total space needed is S(P) >= 3(n + 1).


Time Complexity:

The time T(p) taken by a program P is the sum of the compile time and the run time(execution
time)

The compile time does not depend on the instance characteristics. Also, we may assume that a
compiled program will be run several times without recompilation. The run time is denoted by
tp(instance characteristics).

The number of steps any problem statement is assigned depends on the kind of statement.

For example:

comments count as 0 steps;
an assignment statement (which does not involve any calls to other algorithms) counts as 1 step;
for an iterative statement such as for, while or repeat-until, we count the steps only for the
control part of the statement.

We can determine the number of steps needed by a program to solve a particular problem
instance in two ways.

1. We introduce a variable count into the program with initial value 0. Statements to increment
count by the appropriate amount are introduced into the program, so that each time a statement
in the original program is executed, count is incremented by the step count of that statement.
(The second way is to build a table listing, for each statement, the steps per execution and the
frequency with which it is executed; the examples below use the first method.)

Example 1:

Algorithm without count statements:

Algorithm Sum(a, n)
{
    s := 0.0;
    for i := 1 to n do
        s := s + a[i];
    return s;
}

Algorithm with count statements added:

Algorithm Sum(a, n)
{
    s := 0.0;
    count := count + 1;          // for the assignment to s
    for i := 1 to n do
    {
        count := count + 1;      // for the for statement
        s := s + a[i];
        count := count + 1;      // for the assignment
    }
    count := count + 1;          // for the last time of the for statement
    count := count + 1;          // for the return
    return s;
}

If count is zero to start with, then it will be 2n + 3 on termination. So each invocation of Sum
executes a total of 2n + 3 steps.
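The same bookkeeping can be reproduced in a runnable form. This Python sketch (illustrative,
not from the original notes) mirrors the instrumented pseudocode and reports a count of 2n + 3.

def sum_with_count(a):
    # Return (sum of a, step count), mirroring the instrumented pseudocode above.
    count = 0
    s = 0.0
    count += 1              # for the assignment to s
    for x in a:
        count += 1          # for each test of the for statement
        s = s + x
        count += 1          # for the assignment inside the loop
    count += 1              # for the last (failing) test of the for statement
    count += 1              # for the return
    return s, count

print(sum_with_count([1, 2, 3, 4, 5]))   # (15.0, 13), and 2*5 + 3 = 13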

Example 2:
Algorithm RSum(a,n)
{
count:=count+1;// For the if conditional
if(n<=0)then
{
count:=count+1; //For the return
return 0.0;
}
else
{
count:=count+1; //For the addition,function invocation and return
return RSum(a,n-1)+a[n];
}
}
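Counting in the same way for RSum: when n <= 0 the count is 2 (one for the if test and one for
the return); when n > 0 the count is 2 plus whatever the recursive call contributes. This gives
the recurrence

    t(n) = 2              if n = 0
    t(n) = 2 + t(n-1)     if n > 0

whose solution is t(n) = 2n + 2. So each invocation of RSum(a, n) executes a total of 2n + 2
steps.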


Example 3:

ALGORITHM FOR MATRIX ADDITION (with count statements added):

Algorithm Add(a, b, c, m, n)
{
    for i := 1 to m do
    {
        count := count + 1;      // for the 'for i' statement
        for j := 1 to n do
        {
            count := count + 1;  // for the 'for j' statement
            c[i, j] := a[i, j] + b[i, j];
            count := count + 1;  // for the assignment
        }
        count := count + 1;      // for the last time of 'for j'
    }
    count := count + 1;          // for the last time of 'for i'
}

If count is zero to start with, then it will be 2mn + 2m + 1 on termination. So each invocation
of Add executes a total of 2mn + 2m + 1 steps.


Asymptotic Notations:

The best algorithm can be chosen by comparing efficiencies, and the efficiency of an algorithm
is measured by computing its time complexity. Asymptotic notations are used to express the
time complexity of an algorithm.

Asymptotic notations give the fastest possible time, the slowest possible time and the average
time of the algorithm.

The basic asymptotic notations are


1. Big Oh (O)
2. Big Omega (Ω)
3. Theta (Θ) Notation
4. Small oh (o)
5. Small Omega (ω)

1. BIG-OH (O) NOTATION: (Upper Bound)

(i) It is denoted by 'O'.


(ii) It is used to find the upper bound time of an algorithm, that means the maximum time taken
by the algorithm.
Definition:
Let f(n) and g(n) be two non-negative functions. If there exist two positive constants c and n0
such that f(n) <= c*g(n) for all n >= n0, then we say that f(n) = O(g(n)).
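For example, consider f(n) = 3n + 2 and g(n) = n, and take c = 4.

If n = 2: 3n + 2 <= 4n => 3(2) + 2 <= 4(2) => 8 <= 8 (true)
If n = 3: 3n + 2 <= 4n => 3(3) + 2 <= 4(3) => 11 <= 12 (true)
If n = 4: 3n + 2 <= 4n => 3(4) + 2 <= 4(4) => 14 <= 16 (true)

In general 3n + 2 <= 4n for all n >= 2, so with c = 4 and n0 = 2 we have f(n) = O(g(n)), i.e.,
f(n) = O(n).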


2. OMEGA (Ω) NOTATION: (Lower Bound)

(i) It is denoted by ' Ω'.


(ii) It is used to find the lower bound time of an algorithm, that means the minimum time taken
by an algorithm.
Definition: Let f(n) and g(n) be two non-negative functions. If there exist two positive
constants c and n0 such that f(n) >= c*g(n) for all n >= n0, then we say that f(n) = Ω(g(n)).


Example 1:

Consider f(n) = 2n + 5, g(n) = 2n.

Sol:
Let us take c = 1.

If n = 1:
2n + 5 >= 2n => 2(1) + 5 >= 2(1) => 7 >= 2 (true)

If n = 2:
2n + 5 >= 2n => 2(2) + 5 >= 2(2) => 9 >= 4 (true)

If n = 3:
2n + 5 >= 2n => 2(3) + 5 >= 2(3) => 11 >= 6 (true)

Thus f(n) >= c*g(n) for all n >= 1, so f(n) = Ω(g(n)), i.e., f(n) = Ω(n).


3. THETA (Θ) NOTATION: (Same order)


(i) It is denoted by the symbol Θ.
(ii) It is used to bound the time of an algorithm from both below and above (between the lower
bound time and the upper bound time).
Definition: Let f(n) and g(n) be two non-negative functions. If there exist positive constants
c1, c2 and n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0, then we say that
f(n) = Θ(g(n)).


Example :
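Consider f(n) = 3n + 2 and g(n) = n. Since 3n <= 3n + 2 for all n >= 1, and 3n + 2 <= 4n for
all n >= 2, choosing c1 = 3, c2 = 4 and n0 = 2 gives c1*g(n) <= f(n) <= c2*g(n) for all
n >= n0. Therefore f(n) = Θ(n), i.e., f(n) = Θ(g(n)).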

4. LITTLE-OH (o) NOTATION:

Definition: Let f(n) and g(n) be two non-negative functions.

If lim (n→∞) [f(n) / g(n)] = 0, then we say that f(n) = o(g(n)).

Example:
Consider f(n) = 2n + 3, g(n) = n^2.

Sol:
lim (n→∞) f(n)/g(n) = lim (n→∞) (2n + 3) / n^2
                    = lim (n→∞) n(2 + 3/n) / n^2
                    = lim (n→∞) (2 + 3/n) / n
                    = 2/∞ = 0

Therefore f(n) = o(n^2), i.e., f(n) = o(g(n)).

5. LITTLE-OMEGA (ω) NOTATION:

Definition: Let f(n) and g(n) be two non-negative functions.

If lim (n→∞) [g(n) / f(n)] = 0, then we say that f(n) = ω(g(n)).

Example:
Consider f(n) = n^2, g(n) = 2n + 5.

Sol:
lim (n→∞) g(n)/f(n) = lim (n→∞) (2n + 5) / n^2
                    = lim (n→∞) n(2 + 5/n) / n^2
                    = lim (n→∞) (2 + 5/n) / n
                    = 2/∞ = 0

Therefore f(n) = ω(g(n)), i.e., f(n) = ω(n).


RANDOMIZED ALGORITHMS:

Randomized algorithms are non-deterministic


 Makes some random choices in some steps of the algorithms
 Output and/or running time of the algorithm may depend on the random choices made
 If you run the algorithm more than once on the same input data, the results may differ
depending on the random choice.

A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic.
The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in
the hope of achieving good performance in the "average case" over all possible choices of
random bits.

The output and/or the running time are functions of the input and of the random bits chosen.

1. Basics of Probability Theory:

Probability theory has the goal of characterizing the outcomes of natural or conceptual
"experiments." Examples of such experiments include tossing a coin ten times, rolling a die
three times, playing a lottery, gambling, and picking a ball from an urn containing white and
red balls.

Each possible outcome of an experiment is called a sample point, and the set of all possible
outcomes is known as the sample space S. In this text we assume that S is finite (such a sample
space is called a discrete sample space). An event E is a subset of the sample space S. If the
sample space consists of n sample points, then there are 2^n possible events.


Example : [Tossing three coins]


When a coin is tossed, there are two possible outcomes: heads (H) and tails (T). Consider the
experiment of throwing three coins. There are eight possible outcomes: HHH, HHT, HTH, HTT,
THH, THT, TTH, and TTT. Each such outcome is a sample point. The sets {HHT, HTT, TTT},
{HHH, TTT}, and { } are three possible events. The third event has no sample points and is the
empty set.
For this experiment there are 2^8 = 256 possible events.

Types of Randomized Algorithms:

1. Randomized Las Vegas Algorithms:


2. Randomized Monte Carlo Algorithms:

Randomized Las Vegas Algorithms:

 These algorithms always produce a correct or optimum result.


 Running time is a random variable
Example: Randomized Quick Sort

In a Las Vegas randomized algorithm, the answer is guaranteed to be correct no matter which
random choices the algorithm makes. But the running time is a random variable, making the
expected running time the most important parameter. Our first algorithm today, Quicksort, is a
Las Vegas algorithm.

Quicksort is one of the basic sorting algorithms normally encountered in a data structures course.
The idea is simple:
• Choose a pivot element p
• Divide the elements into those less than p, those equal to p, and those greater than p
• Recursively sort the “less than” and “greater than” groups
• Assemble the results into a single list

The algorithm as written here is underspecified because we haven't said how to choose the
pivot. The choice turns out to be important! Any pivot will have some number k of elements
less than it, and thus at most n − k − 1 elements greater than it. The non-recursive parts of the
algorithm clearly take O(n) time. We thus get the recurrence:

T(n) ≤ T(k) + T(n − k − 1) + O(n)
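A minimal runnable sketch of randomized Quicksort follows (illustrative, not from the original
notes; it picks the pivot uniformly at random and, for clarity, partitions into new lists rather
than in place).

import random

def randomized_quicksort(a):
    # Las Vegas algorithm: always returns a correctly sorted list;
    # only the running time depends on the random pivot choices.
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                        # random pivot
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]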


Randomized Monte Carlo Algorithms:

 Randomized Monte Carlo algorithms always have the same time complexity, but may
sometimes produce incorrect output depending on the random choices made.

 Time complexity is deterministic, correctness is probabilistic: the algorithm produces a
correct or optimum result with some probability, while the running time is deterministic.

Example: Randomized algorithm for approximate median


Let p = probability that it will produce incorrect output
Idea:

Design the algorithm such that p ≤ ½


 Run the algorithm k times on the same input
 Probability that it will produce incorrect output on all k trials ≤ 1/2^k
 Can make the probability of error arbitrarily small by making k sufficiently large
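As an illustration of the Monte Carlo idea, the sketch below (illustrative; the helper names are
my own) returns a uniformly random element as a candidate approximate median. A single
trial succeeds, i.e. the element's rank lands in the middle half, with probability about 1/2, and
independent repetition drives the failure probability down geometrically.

import random

def approximate_median(a):
    # Monte Carlo sketch: return a random element as a candidate approximate median.
    # The running time is deterministic; correctness holds with probability about 1/2.
    return random.choice(a)

def is_approximate_median(a, x):
    # Check whether x's rank lies in the middle half of a (n/4 <= rank <= 3n/4).
    n = len(a)
    rank = sum(1 for y in a if y < x)
    return n // 4 <= rank <= (3 * n) // 4

a = list(range(100))
trials = 10
# The probability that all of the independent trials fail is at most about (1/2)**trials.
successes = sum(is_approximate_median(a, approximate_median(a)) for _ in range(trials))
print(successes, "of", trials, "trials produced an approximate median")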

Sets and Disjoint Set Union:

Set:
A set is a collection of distinct elements. A set can be represented, for example, as
S1 = {1, 2, 5, 10}.

Disjoint Sets:

Disjoint sets are sets that do not have any common element.
For example, if S1 = {1, 7, 8, 9} and S2 = {2, 5, 10}, then we can say that S1 and S2 are two
disjoint sets.

Disjoint Set Operations:


The disjoint set operations are
1. Union
2. Find

Disjoint set Union:


If S1 and S2 are two disjoint sets, then their union S1 U S2 consists of all the elements x such
that x is in S1 or S2.


Example:
S1={1,7,8,9} S2={2,5,10}
S1 U S2={1,2,5,7,8,9,10}

Find:
Given an element i, Find(i) returns the set containing i.

Example:
S1 = {1, 7, 8, 9}, S2 = {2, 5, 10}, S3 = {3, 4, 6}. Then
Find(4) = S3, Find(5) = S2, Find(7) = S1.

UNION operation:
Union(i, j) requires the two trees with roots i and j to be joined. S1 U S2 is obtained by making
one of the sets a subtree of the other.

Set Representation:
Each set is represented as a tree in which every child stores the address of its parent; the root
node stores a null (or -1) parent value. Any element of the set can be chosen as the root node;
generally we select the first element as the root.

Example:
S1 = {1, 7, 8, 9}, S2 = {2, 5, 10}, S3 = {3, 4, 6}
These sets are then represented as three trees, one per set, with each element pointing to its
parent and the chosen root standing for the whole set.

Disjoint Union:

To perform a disjoint-set union of two sets Si and Sj, we take the root of one tree and make it a
subtree of the other. For the example sets S1 and S2 above, the union of S1 and S2 can be
represented either by making the root of S1 a child of the root of S2, or by making the root of
S2 a child of the root of S1.


Find:

To perform the find operation, along with the tree structure we need to maintain the name of
each set. So we require one more data structure to store the set names; it contains two fields,
the set name and a pointer to the root of the corresponding tree.

Union and Find Algorithms:


In presenting the Union and Find algorithms, we ignore the set names and identify sets just by
the roots of the trees representing them. To represent the sets, we use an array P[1..n], where n
is the maximum value among the elements of all sets. The index values represent the nodes
(elements of the sets) and the entries represent the parent nodes; each root stores the value -1.

Example:
For the sets above, P[i] = -1 for each root i, and for every other element P[i] holds the index of
the parent of i.


Algorithm for Union operation:

To perform a union, the SimpleUnion(i, j) procedure takes as input the two set roots i and j and
makes j the parent of i, i.e., the second root becomes the parent of the first root.

Algorithm SimpleUnion(i, j)
{
    P[i] := j;
}

Algorithm for find operation:

The SimpleFind(i) algorithm takes an element i and finds the root of the tree containing i. It
starts at i and follows parent pointers until it reaches a node whose parent value is -1 (the root).

Algorithm SimpleFind(i)
{
    while (P[i] >= 0) do
        i := P[i];
    return i;
}
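The two routines translate directly into runnable code. The following Python sketch
(illustrative, not from the original notes) uses a list P indexed 1..n with -1 marking a root,
exactly as described above.

def make_sets(n):
    # Create n singleton sets with elements 1..n. P[i] = -1 means i is a root. Index 0 is unused.
    return [-1] * (n + 1)

def simple_find(P, i):
    # Follow parent pointers from i until a root (entry -1) is reached.
    while P[i] >= 0:
        i = P[i]
    return i

def simple_union(P, i, j):
    # Make root j the parent of root i (both i and j must currently be roots).
    P[i] = j

P = make_sets(10)
simple_union(P, 1, 7)                          # {1, 7}
simple_union(P, 7, 8)                          # {1, 7, 8}
simple_union(P, 8, 9)                          # {1, 7, 8, 9}
print(simple_find(P, 1), simple_find(P, 9))    # both print 9, the current root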

Analysis of SimpleUnion(i,j) and SimpleFind(i):

Although the SimpleUnion(i, j) and SimpleFind(i) algorithms are easy to state, their
performance characteristics are not very good. For example, start with the singleton sets {1},
{2}, ..., {n} and perform the sequence of unions Union(1, 2), Union(2, 3), ..., Union(n-1, n).
This produces a degenerate tree that is a chain of n nodes, so a single Find on the deepest
element takes O(n) time, and a sequence of n such finds takes O(n^2) time.


BASIC TRAVERSAL AND SEARCH TECHNIQUES

Search means finding a path or traversal between a start node and one of a set of goal
nodes. Search is a study of states and their transitions.
Search involves visiting nodes of a graph in a systematic manner, and may or may not result in
a visit to all nodes. When the search necessarily involves the examination of every vertex, it is
called a traversal.

Techniques for Traversal of a Binary Tree:


A binary tree is a finite (possibly empty) collection of elements. When the binary tree is not
empty, it has a root element and remaining elements (if any) are partitioned into two binary trees,
which are called the left and right sub trees.

There are three common ways to traverse a binary tree:


(a) Inorder (Left, Root, Right)
(b) Preorder (Root, Left, Right)
(c) Postorder (Left, Right, Root)
In all the three traversal methods, the left subtree of a node is traversed before the right subtree.
The difference among the three orders comes from the difference in the time at which a node is
visited.

Inorder Traversal:
In this traversal method, the left subtree is visited first, then the root and later the right sub-tree.
We should always remember that every node may represent a subtree itself.
The steps for traversing a binary tree in inorder traversal are:
1. Visit the left subtree, using inorder.
2. Visit the root.
3. Visit the right subtree, using inorder.


Algorithm Inorder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
    if (t ≠ 0) then
    {
        Inorder(t -> lchild);
        visit(t);
        Inorder(t -> rchild);
    }
}

Consider, for example, the binary tree with root A, whose left child B has children D and E,
and whose right child C has children F and G. We start from A and, following inorder
traversal, we move to its left subtree B; B is also traversed inorder. The process goes on until
all the nodes are visited. The output of inorder traversal of this tree is

D→B→E→A→F→C→G
Preorder Traversal:
In this traversal method, the root node is visited first, then the left subtree and finally the right
subtree (this is the order in which a depth-first search first reaches the nodes). The steps for
traversing a binary tree in preorder are:
1. Visit the root.
2. Visit the left subtree, using preorder.
3. Visit the right subtree, using preorder.


The algorithm for preorder traversal is as follows:


Algorithm Preorder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
    if (t ≠ 0) then
    {
        visit(t);
        Preorder(t -> lchild);
        Preorder(t -> rchild);
    }
}

The output of preorder traversal of the same tree is A→B→D→E→C→F→G

Postorder Traversal:
In this traversal method, the root node is visited last, hence the name. First we traverse the left
subtree, then the right subtree and finally the root node.
The steps for traversing a binary tree in postorder traversal are:
1. Visit the left subtree, using postorder.
2. Visit the right subtree, using postorder
3. Visit the root.


The algorithm for postorder traversal is as follows:


Algorithm Postorder(t)
// t is a binary tree. Each node of t has three fields: lchild, data, and rchild.
{
    if (t ≠ 0) then
    {
        Postorder(t -> lchild);
        Postorder(t -> rchild);
        visit(t);
    }
}

The output of postorder traversal of the same tree is D→E→B→F→G→C→A
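All three traversals can be verified with a small runnable sketch. The Python version below
(illustrative, not from the original notes) builds the example tree A(B(D, E), C(F, G)) and
prints the three orders.

class Node:
    def __init__(self, data, lchild=None, rchild=None):
        self.data, self.lchild, self.rchild = data, lchild, rchild

def inorder(t, out):
    if t is not None:
        inorder(t.lchild, out); out.append(t.data); inorder(t.rchild, out)

def preorder(t, out):
    if t is not None:
        out.append(t.data); preorder(t.lchild, out); preorder(t.rchild, out)

def postorder(t, out):
    if t is not None:
        postorder(t.lchild, out); postorder(t.rchild, out); out.append(t.data)

# Example tree: A(B(D, E), C(F, G))
root = Node('A', Node('B', Node('D'), Node('E')), Node('C', Node('F'), Node('G')))
for name, walk in (('inorder', inorder), ('preorder', preorder), ('postorder', postorder)):
    out = []
    walk(root, out)
    print(name, '->'.join(out))
# inorder   D->B->E->A->F->C->G
# preorder  A->B->D->E->C->F->G
# postorder D->E->B->F->G->C->A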


TECHNIQUES FOR GRAPHS

Graph traversal is a technique for visiting each node of a graph G. It also determines the order
in which the vertices are visited. Starting from one node, we visit all the nodes connected to it
without going into a loop. The edges may be directed or undirected. The graph is represented
as G = (V, E).

 A graph search (or traversal) technique visits every node exactly once in a systematic
fashion. Two standard graph search techniques are widely used:
1. Breadth-First Search (BFS)
2. Depth-First Search (DFS)

Breadth-First Search (BFS):

 BFS is a traversing algorithm where we start from a selected source node and traverse
the graph level by level, exploring the neighbouring nodes.
 The data structures used in BFS are a queue and the graph itself. The algorithm makes
sure that every node is visited not more than once.
 BFS follows these four steps:
1. Begin by deciding the key/element to be searched (if any); the search starts from
the root (source) node.
2. Visit an adjacent unvisited vertex, mark it as visited, and display it (if needed). If
this is the required key, stop; otherwise add it to the queue.
3. If no adjacent unvisited vertex is found, remove (dequeue) the first vertex from
the queue.
4. Repeat steps 2 and 3 until the queue is empty.
Step-by-step BFS traversal (the accompanying figures are omitted; the example graph has start
node S adjacent to A, B and C, and A adjacent to D):

1. Initialize the queue.

2. We start by visiting S (the starting node) and mark it as visited.

3. We then see an unvisited adjacent node from S. In this example we have three such
nodes, but alphabetically we choose A, mark it as visited and enqueue it.

4. Next, the unvisited adjacent node from S is B. We mark it as visited and enqueue it.

5. Next, the unvisited adjacent node from S is C. We mark it as visited and enqueue it.

6. Now S is left with no unvisited adjacent nodes. So we dequeue and find A.

7. From A we have D as an unvisited adjacent node. We mark it as visited and enqueue it.

8. At this stage we are left with no unmarked (unvisited) nodes. But as per the algorithm we
keep on dequeuing in order to get all unvisited nodes. When the queue gets emptied, the
program is over.


Time Complexity

 If the entire graph is a tree, the traversal takes O(V) time, where V is the number of nodes.
 If the graph is represented as an adjacency list:
o Here, each node maintains a list of all its adjacent edges. Let's assume that there
are V nodes and E edges in the graph.


o For each node, we discover all its neighbors by traversing its adjacency list just
once in linear time.
o For a directed graph, the sum of the sizes of the adjacency lists of all the nodes is
E. So the time complexity in this case is O(V) + O(E) = O(V + E).
o For an undirected graph, each edge appears twice, once in the adjacency list of
each endpoint. The time complexity for this case will be O(V) + O(2E) ~
O(V + E).
 If the graph is represented as an adjacency matrix (a V x V array):
o For each node, we will have to traverse an entire row of length V in the matrix to
discover all its outgoing edges.
o Note that each row in an adjacency matrix corresponds to a node in the graph, and
that row stores information about edges emerging from the node. Hence, the time
complexity of BFS in this case is O(V * V) = O(V^2).
 The time complexity of BFS therefore depends on the data structure used to represent
the graph.

Space Complexity

 Since we are maintaining a FIFO queue to keep track of the discovered nodes, in the worst
case the queue could hold up to all of the nodes (vertices) of the graph. Hence, the space
complexity is O(V).
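A minimal runnable BFS sketch follows (illustrative, not from the original notes; the graph is
an adjacency-list dictionary reconstructed from the walkthrough above, and collections.deque
serves as the FIFO queue).

from collections import deque

def bfs(graph, start):
    # Return the vertices of `graph` in breadth-first order from `start`.
    # `graph` maps each vertex to a list of adjacent vertices.
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        u = queue.popleft()          # dequeue the front vertex
        order.append(u)
        for v in sorted(graph[u]):   # visit neighbours alphabetically, as in the example
            if v not in visited:
                visited.add(v)       # mark before enqueueing so it is never added twice
                queue.append(v)
    return order

# The example graph from the walkthrough above (figures omitted):
graph = {'S': ['A', 'B', 'C'], 'A': ['S', 'D'], 'B': ['S', 'D'],
         'C': ['S', 'D'], 'D': ['A', 'B', 'C']}
print(bfs(graph, 'S'))   # ['S', 'A', 'B', 'C', 'D']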

2. Depth-First Search (DFS)

 The DFS algorithm is a search algorithm that begins at the root (source) node and goes
down one branch at a time, as deep as possible, looking for a particular key. If the key is
not found, the search retraces its steps back to the point from which another branch was
left unexplored, and the same procedure is repeated for that branch.
 DFS follows the following 3 steps:
 Visit a node “S”.


 Mark “S” as visited.


 Recursively visit every unvisited node attached to "S".
 Since DFS has a recursive nature, it can be implemented using a stack.

Step-by-step DFS traversal (the accompanying figures are omitted; the same example graph is
used: S adjacent to A, B and C, A adjacent to D, and D adjacent to B and C):

1. Initialize the stack.

2. Mark S as visited and put it onto the stack. Explore any unvisited adjacent node from S.
We have three such nodes and we can pick any of them; for this example, we take the
nodes in alphabetical order.

3. Mark A as visited and put it onto the stack. Explore any unvisited adjacent node from A.
Both S and D are adjacent to A, but we are concerned with unvisited nodes only.

4. Visit D, mark it as visited and put it onto the stack. Here we have B and C, which are
adjacent to D and both unvisited. Again we choose in alphabetical order.

5. We choose B, mark it as visited and put it onto the stack. B does not have any unvisited
adjacent node, so we pop B from the stack.

6. We check the stack top to return to the previous node and check whether it has any
unvisited nodes. Here we find D on the top of the stack.

7. The only unvisited adjacent node of D is now C. So we visit C, mark it as visited and put
it onto the stack.

8. As C does not have any unvisited adjacent node, we keep popping the stack until we find
a node that has an unvisited adjacent node. In this case there is none, and we keep
popping until the stack is empty.


Complexity Analysis of Depth First Search

Time Complexity

 If the entire graph is a tree, the traversal takes O(V) time, where V is the number of nodes.
 If the graph is represented as an adjacency list:
o Here, each node maintains a list of all its adjacent edges. Let's assume that there
are V nodes and E edges in the graph.
o For each node, we discover all its neighbors by traversing its adjacency list just
once in linear time.
o For a directed graph, the sum of the sizes of the adjacency lists of all the nodes is
E. So the time complexity in this case is O(V) + O(E) = O(V + E).
o For an undirected graph, each edge appears twice, once in the adjacency list of
each endpoint. The time complexity for this case will be O(V) + O(2E) ~
O(V + E).
 If the graph is represented as an adjacency matrix (a V x V array):
o For each node, we will have to traverse an entire row of length V in the matrix to
discover all its outgoing edges.
o Note that each row in an adjacency matrix corresponds to a node in the graph, and
that row stores information about edges emerging from the node. Hence, the time
complexity of DFS in this case is O(V * V) = O(V^2).
 The time complexity of DFS therefore depends on the data structure used to represent
the graph.

Space Complexity

 Since we are maintaining a stack to keep track of the last visited node, in the worst case the
stack could hold up to all of the nodes (vertices) of the graph. Hence, the space complexity
is O(V).
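A matching DFS sketch follows (illustrative, not from the original notes; same adjacency-list
representation as in the BFS sketch, written recursively rather than with an explicit stack).

def dfs(graph, start, visited=None, order=None):
    # Return the vertices reachable from `start` in depth-first order.
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)
    for v in sorted(graph[start]):       # explore neighbours in alphabetical order
        if v not in visited:
            dfs(graph, v, visited, order)
    return order

graph = {'S': ['A', 'B', 'C'], 'A': ['S', 'D'], 'B': ['S', 'D'],
         'C': ['S', 'D'], 'D': ['A', 'B', 'C']}
print(dfs(graph, 'S'))   # ['S', 'A', 'D', 'B', 'C'], matching the walkthrough above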


Connected components and Spanning Trees


A connected component (or simply component) of an undirected graph is a maximal subgraph
in which each pair of nodes is connected to each other via a path. A set of nodes forms a
connected component in an undirected graph if any node of the set can reach any other node of
the set by traversing edges. The main point here is reachability: in a connected component, all
the nodes are always reachable from each other.
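A short runnable sketch (illustrative, not from the original notes) that finds the connected
components by starting a traversal from every vertex not yet visited; each new start discovers
exactly one component.

def connected_components(graph):
    # Return a list of connected components (each a set of vertices) of an
    # undirected graph given as an adjacency-list dictionary.
    visited = set()
    components = []
    for start in graph:
        if start not in visited:
            # depth-first traversal of one component using an explicit stack
            component = set()
            stack = [start]
            while stack:
                u = stack.pop()
                if u in visited:
                    continue
                visited.add(u)
                component.add(u)
                stack.extend(v for v in graph[u] if v not in visited)
            components.append(component)
    return components

g = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4], 6: []}
print(connected_components(g))   # [{1, 2, 3}, {4, 5}, {6}]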

Spanning Trees: (Connected graph with no cycle)

A spanning tree is a subgraph of a graph G that contains all the vertices of G with the minimum
possible number of edges. Hence, a spanning tree does not have cycles and it cannot be
disconnected. By this definition, we can conclude that every connected and undirected graph G
has at least one spanning tree. A disconnected graph does not have any spanning tree, as it
cannot be spanned to all its vertices.

G = (V, E) connected; T = (V, E′) is a spanning tree of G when E′ ⊆ E and T is connected and
acyclic (so |E′| = |V| − 1).


Several different spanning trees can be found for one complete graph. A complete undirected
graph can have at most n^(n-2) spanning trees, where n is the number of nodes. For n = 3,
3^(3-2) = 3 spanning trees are possible.

Example:
Consider an undirected, simple and connected graph G having 4 vertices: A, B, C and D.
Many spanning trees are possible for such a graph; each one is formed by choosing three edges
that connect all four vertices without creating a cycle.


Biconnected Components and DFS:

An undirected graph is said to be a biconnected graph if there are two vertex-disjoint paths
between every pair of vertices. In other words, any two vertices lie on a common cycle.

A connected graph with no articulation points is said to be biconnected. Depth-first search is
particularly useful in finding the biconnected components of a graph.


Let G = (V, E) be a connected undirected graph. Consider the following definitions:

Articulation Point (or Cut Vertex):

An articulation point of a connected graph is a vertex whose deletion (together with the
removal of all incident edges) breaks the graph into two or more pieces.

Bridge:

A bridge is an edge whose removal results in a disconnected graph.

Biconnected:

A graph is biconnected if it contains no articulation points. In a biconnected graph, two distinct


paths connect each pair of vertices. A graph that is not biconnected divides into biconnected
components.


A Depth-First-Search (DFS) based algorithm can find all the articulation points in a graph.
Given a graph, the algorithm first constructs a DFS tree.

Using the DFS traversal, we then check whether any articulation point is present. We also
check whether all vertices are visited by the DFS; if they are not, we can say that the graph is
not connected.


Algorithm to Find All Articulation Points:

Initially, the algorithm chooses any vertex to start from and marks its status as visited. The next
step is to calculate the depth of the selected vertex; the depth of each vertex is the order in
which it is visited (its discovery number).

Next, we need to calculate the lowest discovery number (low value) of each vertex. This is the
smallest depth reachable from the vertex (or from its DFS subtree) by following tree edges and
at most one back edge. A back edge is an edge of the original graph that connects a vertex to
one of its ancestors in the DFS tree but is not itself part of the DFS tree.

After calculating the depth and lowest discovery number for the first picked vertex, the
algorithm then searches its adjacent vertices. It checks whether an adjacent vertex has already
been visited or not; if not, the algorithm marks it as the current vertex and calculates its depth
and lowest discovery number.


Depth first search provides a linear time algorithm to find all articulation points in a connected
graph.
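A compact runnable sketch of this DFS-based computation follows (illustrative, not from the
original notes; it records a discovery depth and a low value for every vertex and reports the
articulation points).

def articulation_points(graph):
    # Return the set of articulation points of a connected undirected graph
    # given as an adjacency-list dictionary, using DFS depth and low values.
    depth = {}                 # discovery depth of each visited vertex
    low = {}                   # lowest depth reachable using at most one back edge
    points = set()

    def dfs(u, parent, d):
        depth[u] = low[u] = d
        children = 0
        for v in graph[u]:
            if v not in depth:                 # tree edge
                children += 1
                dfs(v, u, d + 1)
                low[u] = min(low[u], low[v])
                # u is an articulation point if some child's subtree cannot reach above u
                if parent is not None and low[v] >= depth[u]:
                    points.add(u)
            elif v != parent:                  # back edge
                low[u] = min(low[u], depth[v])
        # special case: the DFS root is an articulation point iff it has two or more children
        if parent is None and children >= 2:
            points.add(u)

    root = next(iter(graph))
    dfs(root, None, 0)
    return points

g = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(sorted(articulation_points(g)))    # [2, 4]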


Example 2: (the example graph figure is omitted in these notes; in that graph, the articulation
points are vertices 3, 2 and 5.)
