
SUBJECT NAME : Design and Analysis of Algorithm

SUBJECT CODE : CCS53

SEMESTER : V

UNIT – IV: DYNAMIC PROGRAMMING, TRAVERSAL & SEARCHING

1. The General Method


2. Multistage Graphs
3. All Pair Shortest Path
4. Optimal Binary Search Trees
5. String Editing
6. 0/1 Knapsack
7. Reliability Design
8. The Traveling Salesperson Problem.
9. Techniques for Binary Trees
10. Techniques for Graphs – BFS – DFS.
1. The General Method

Dynamic programming is a technique that breaks a problem into sub-problems and saves their
results for future use, so that we do not need to compute the same result again. The property that
the overall solution is built from optimized solutions to the subproblems is known as the optimal
substructure property. The main use of dynamic programming is to solve optimization problems,
that is, problems in which we are trying to find the minimum or the maximum solution. Dynamic
programming is guaranteed to find an optimal solution of a problem if one exists.

The definition of dynamic programming says that it is a technique for solving a complex
problem by first breaking it into a collection of simpler subproblems, solving each subproblem just
once, and then storing their solutions to avoid repetitive computations.

Let's understand this approach through an example.

Consider an example of the Fibonacci series. The following series is the Fibonacci series:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

The numbers in the above series are not randomly calculated. Mathematically, we can write
each of the terms using the formula below:

F(n) = F(n-1) + F(n-2),

with the base values F(0) = 0 and F(1) = 1. To calculate the other numbers, we follow the above
relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.
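To see why this matters, consider a direct recursive implementation of the formula (a minimal Java sketch, not part of the original notes): it recomputes the same terms over and over, which is exactly the repeated work dynamic programming avoids.

// Naive recursive Fibonacci: follows F(n) = F(n-1) + F(n-2) directly.
// Runs in exponential time because subproblems are recomputed many times.
public class FibNaive {
    static long fib(int n) {
        if (n <= 1) return n;            // base cases F(0) = 0, F(1) = 1
        return fib(n - 1) + fib(n - 2);  // same calls repeat, e.g. fib(2) many times
    }
    public static void main(String[] args) {
        System.out.println(fib(10));     // prints 55
    }
}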

Approaches of dynamic programming


There are two approaches to dynamic programming:

o Top-down approach
o Bottom-up approach

Top-down approach

The top-down approach follows the memoization technique, while the bottom-up approach follows
the tabulation method. Here, memoization is equal to the sum of recursion and caching.
Recursion means calling the function itself, while caching means storing the intermediate results.
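The contrast between the two approaches can be illustrated on the same Fibonacci example. The following Java sketch (an illustration, assuming nothing beyond the recurrence above) shows a top-down memoized version and a bottom-up tabulated version side by side.

import java.util.Arrays;

public class FibDP {
    // Top-down: recursion + caching (memoization).
    static long[] memo;
    static long fibTopDown(int n) {
        if (n <= 1) return n;
        if (memo[n] != -1) return memo[n];              // reuse a cached result
        memo[n] = fibTopDown(n - 1) + fibTopDown(n - 2);
        return memo[n];
    }

    // Bottom-up: fill a table from the base cases upward (tabulation).
    static long fibBottomUp(int n) {
        if (n <= 1) return n;
        long[] table = new long[n + 1];
        table[0] = 0; table[1] = 1;
        for (int i = 2; i <= n; i++)
            table[i] = table[i - 1] + table[i - 2];
        return table[n];
    }

    public static void main(String[] args) {
        int n = 40;
        memo = new long[n + 1];
        Arrays.fill(memo, -1);                           // -1 marks "not computed yet"
        System.out.println(fibTopDown(n));               // 102334155
        System.out.println(fibBottomUp(n));              // 102334155
    }
}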

Advantages

o It is very easy to understand and implement.


o It solves the subproblems only when it is required.
o It is easy to debug.

Disadvantages

It uses recursion, which occupies more memory in the call stack. Sometimes, when the
recursion is too deep, a stack overflow condition will occur.

The extra memory it occupies degrades the overall performance.

2. Multistage Graphs

A multistage graph is a directed, weighted graph in which the nodes can be divided
into a set of stages such that all edges go from one stage to the next stage only (in other
words, there is no edge between vertices of the same stage, or from a vertex of the current
stage to a previous stage).

The vertices of a multistage graph are divided into n number of disjoint subsets S = {
S1 , S2 , S3 ……….. Sn }, where S1 is the source and Sn is the sink ( destination ). The
cardinality of S1 and Sn are equal to 1. i.e., |S1| = |Sn| = 1.
We are given a multistage graph, a source and a destination, and we need to find the shortest
path from the source to the destination. By convention, we consider the source to be at stage 1
and the destination to be at the last stage.
(The example graph referred to in this section is not reproduced here.)
A multistage graph G = (V, E) is a directed graph whose vertices are partitioned
into k (where k > 1) disjoint subsets S = {s1, s2, …, sk} such that if edge (u, v) is in E,
then u Є si and v Є si+1 for some i, and |s1| = |sk| = 1.
The vertex s Є s1 is called the source and the vertex t Є sk is called the sink.
G is usually assumed to be a weighted graph. In this graph, the cost of an edge (i, j) is represented
by c(i, j). Hence, the cost of a path from source s to sink t is the sum of the costs of the edges on this
path.
The multistage graph problem is finding the path with minimum cost from source s to sink t.

Example

Consider the following example to understand the concept of multistage graph.

The cost of reaching the sink from a vertex j in stage i is given by the backward recurrence
Cost(i, j) = min {c(j, l) + Cost(i + 1, l)}, taken over all vertices l in stage i + 1 adjacent to j,
with the cost at the sink equal to 0. According to this formula, we calculate Cost(i, j) using the
following steps.
Step-1: Cost (K-2, j)
In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three options to choose
the minimum cost at this step.
Cost(3, 4) = min {c(4, 7) + Cost(7, 9),c(4, 8) + Cost(8, 9)} = 7
Cost(3, 5) = min {c(5, 7) + Cost(7, 9),c(5, 8) + Cost(8, 9)} = 5
Cost(3, 6) = min {c(6, 7) + Cost(7, 9),c(6, 8) + Cost(8, 9)} = 5
Step-2: Cost (K-3, j)
Two nodes are selected as j because at stage k - 3 = 2 there are two nodes, 2 and 3. So i = 2,
and j takes the values 2 and 3.
Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9),c(2, 6) +
Cost(6, 8) + Cost(8, 9)} = 8
Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9), c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10
Step-3: Cost (K-4, j)
Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9)}
= min {…, 12, 13} = 12
Hence, the path having the minimum cost is 1 → 3 → 5 → 8 → 9, with total cost 12.
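The recurrence above translates directly into code. The following Java sketch illustrates it on a small hypothetical stage graph (the original figure is not reproduced here, so the cost matrix below is a stand-in example): vertices are numbered in stage order, and costs are computed backwards from the sink.

public class MultistageGraph {
    static final int INF = Integer.MAX_VALUE / 2;

    // cost[j] = cheapest way from vertex j to the sink (vertex n-1, 0-indexed).
    static int shortestPath(int[][] c) {
        int n = c.length;
        int[] cost = new int[n];
        int[] next = new int[n];
        cost[n - 1] = 0;                         // cost at the sink is 0
        for (int j = n - 2; j >= 0; j--) {       // process stages backwards
            cost[j] = INF;
            for (int l = j + 1; l < n; l++) {
                if (c[j][l] < INF && c[j][l] + cost[l] < cost[j]) {
                    cost[j] = c[j][l] + cost[l];
                    next[j] = l;                 // remember the best successor
                }
            }
        }
        // print the minimum-cost path from the source (vertex 0)
        for (int v = 0; v != n - 1; v = next[v]) System.out.print((v + 1) + " -> ");
        System.out.println(n);
        return cost[0];
    }

    public static void main(String[] args) {
        // hypothetical 5-vertex, 3-stage graph; INF marks absent edges
        int[][] c = {
            {INF, 2, 3, INF, INF},
            {INF, INF, INF, 4, INF},
            {INF, INF, INF, 1, INF},
            {INF, INF, INF, INF, 5},
            {INF, INF, INF, INF, INF}
        };
        System.out.println(shortestPath(c));     // prints "1 -> 3 -> 4 -> 5" then 9
    }
}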

3. All Pair Shortest Path


The all-pairs shortest path algorithm, also known as the Floyd-Warshall algorithm, is used to
solve the all-pairs shortest path problem on a given weighted graph. The algorithm
generates a matrix representing the minimum distance from each node to every other node in
the graph.

At first, the output matrix is the same as the given cost matrix of the graph. After that, the output
matrix is updated by considering each vertex k in turn as an intermediate vertex.
The time complexity of this algorithm is O(V³), where V is the number of vertices in the graph.
Input − The cost matrix of the graph.
0 3 6 ∞ ∞ ∞ ∞
3 0 2 1 ∞ ∞ ∞
6 2 0 1 4 2 ∞
∞ 1 1 0 2 ∞ 4
∞ ∞ 4 2 0 2 1
∞ ∞ 2 ∞ 2 0 1
∞ ∞ ∞ 4 1 1 0
Output − Matrix of all-pairs shortest paths (first four rows):
0 3 4 5 6 7 7
3 0 2 1 3 4 4
4 2 0 1 3 2 3
5 1 1 0 2 3 3

Algorithm

floydWarshall(cost)
Input − The cost matrix of the given graph.
Output − Matrix of shortest paths between every pair of vertices.
Begin
   for k := 0 to n-1, do
      for i := 0 to n-1, do
         for j := 0 to n-1, do
            if cost[i,k] + cost[k,j] < cost[i,j], then
               cost[i,j] := cost[i,k] + cost[k,j]
         done
      done
   done
   display the current cost matrix
End
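A runnable Java version of this pseudocode is sketched below, using the 7-vertex cost matrix above with a large constant standing in for ∞.

public class FloydWarshall {
    static final int INF = Integer.MAX_VALUE / 2;   // stands in for ∞, safe against overflow

    static int[][] allPairs(int[][] cost) {
        int n = cost.length;
        int[][] d = new int[n][];
        for (int i = 0; i < n; i++) d[i] = cost[i].clone();  // start from the cost matrix
        for (int k = 0; k < n; k++)                 // allow k as an intermediate vertex
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j])
                        d[i][j] = d[i][k] + d[k][j];
        return d;
    }

    public static void main(String[] args) {
        int[][] cost = {
            {0, 3, 6, INF, INF, INF, INF},
            {3, 0, 2, 1, INF, INF, INF},
            {6, 2, 0, 1, 4, 2, INF},
            {INF, 1, 1, 0, 2, INF, 4},
            {INF, INF, 4, 2, 0, 2, 1},
            {INF, INF, 2, INF, 2, 0, 1},
            {INF, INF, INF, 4, 1, 1, 0}
        };
        for (int[] row : allPairs(cost)) {          // print the all-pairs distance matrix
            for (int x : row) System.out.print(x + " ");
            System.out.println();
        }
    }
}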

4. Optimal Binary Search Trees

A Binary Search Tree (BST) is a tree where the key values are stored in the internal nodes. The
external nodes are null nodes. The keys are ordered, i.e. for each internal node
all the keys in the left sub-tree are less than the key in the node, and all the keys in the right sub-
tree are greater.
When we know the frequency of searching each one of the keys, it is quite easy to compute the
expected cost of accessing each node in the tree. An optimal binary search tree is a BST which
has minimal expected cost of locating each node.
The worst-case search time for an element in a BST is O(n), whereas in a balanced BST the search
time is O(log n). The search time can be improved further in an optimal cost binary search tree
by placing the most frequently used data in the root and closer to the root element, while placing
the least frequently used data near leaves and in leaves.
Here, the Optimal Binary Search Tree algorithm is presented. First, we build a BST from a set
of n provided distinct keys < k1, k2, k3, ... kn >. Here we assume the probability of
accessing key Ki is pi. Some dummy keys (d0, d1, d2, ... dn) are added, since some searches may be
performed for values which are not present in the key set K. We assume that for each dummy
key di the probability of access is qi.
Optimal-Binary-Search-Tree(p, q, n)
e[1…n + 1, 0…n],
w[1…n + 1, 0…n],
root[1…n + 1, 0…n]
for i = 1 to n + 1 do
   e[i, i - 1] := qi-1
   w[i, i - 1] := qi-1
for l = 1 to n do
   for i = 1 to n – l + 1 do
      j := i + l – 1
      e[i, j] := ∞
      w[i, j] := w[i, j - 1] + pj + qj
      for r = i to j do
         t := e[i, r - 1] + e[r + 1, j] + w[i, j]
         if t < e[i, j] then
            e[i, j] := t
            root[i, j] := r
return e and root

Analysis

The algorithm requires O(n³) time, since three nested for loops are used, and each of these loops
takes on at most n values.
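As a concrete sketch, the pseudocode can be translated into Java as follows; the probabilities are the ones used in the example below, and the arrays are oversized so the 1-based indices of the pseudocode carry over unchanged.

public class OptimalBST {
    // p[1..n] are key probabilities, q[0..n] are dummy-key probabilities.
    static double[][] optimalBST(double[] p, double[] q, int n) {
        double[][] e = new double[n + 2][n + 1];
        double[][] w = new double[n + 2][n + 1];
        int[][] root = new int[n + 1][n + 1];
        for (int i = 1; i <= n + 1; i++) {
            e[i][i - 1] = q[i - 1];              // empty subtree: just the dummy key
            w[i][i - 1] = q[i - 1];
        }
        for (int l = 1; l <= n; l++) {           // l = number of keys in the range
            for (int i = 1; i <= n - l + 1; i++) {
                int j = i + l - 1;
                e[i][j] = Double.MAX_VALUE;
                w[i][j] = w[i][j - 1] + p[j] + q[j];
                for (int r = i; r <= j; r++) {   // try each key as the subtree root
                    double t = e[i][r - 1] + e[r + 1][j] + w[i][j];
                    if (t < e[i][j]) {
                        e[i][j] = t;
                        root[i][j] = r;
                    }
                }
            }
        }
        return e;
    }

    public static void main(String[] args) {
        double[] p = {0, 0.15, 0.10, 0.05, 0.10, 0.20};  // index 0 unused
        double[] q = {0.05, 0.10, 0.05, 0.05, 0.05, 0.10};
        System.out.printf("%.2f%n", optimalBST(p, q, 5)[1][5]);  // expected cost 2.75
    }
}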

Example

Considering the following tree, the cost is 2.80, though this is not an optimal result.
Node Depth Probability Contribution

k1 1 0.15 0.30

k2 0 0.10 0.10

k3 2 0.05 0.15

k4 1 0.10 0.20

k5 2 0.20 0.60

d0 2 0.05 0.15

d1 2 0.10 0.30

d2 3 0.05 0.20

d3 3 0.05 0.20

d4 3 0.05 0.20

d5 3 0.10 0.40

Total 2.80

To get an optimal solution, using the algorithm discussed in this chapter, the following tables are
generated.
In the following tables, column index is i and row index is j.

e 1 2 3 4 5 6
5 2.75 2.00 1.30 0.90 0.50 0.10
4 1.75 1.20 0.60 0.30 0.05
3 1.25 0.70 0.25 0.05
2 0.90 0.40 0.05
1 0.45 0.10
0 0.05

w 1 2 3 4 5 6
5 1.00 0.80 0.60 0.50 0.35 0.10
4 0.70 0.50 0.30 0.20 0.05
3 0.55 0.35 0.15 0.05
2 0.45 0.25 0.05
1 0.30 0.10
0 0.05

root 1 2 3 4 5
5 2 4 5 5 5
4 2 2 4 4
3 2 2 3
2 1 2
1 1

From these tables, the optimal tree can be formed.

5. String Editing
Given two strings str1 and str2, and the operations below that can be performed on str1, find the
minimum number of edits (operations) required to convert ‘str1’ into ‘str2’.
1. Insert
2. Remove
3. Replace
All of the above operations are of equal cost.
Examples:
Input: str1 = “geek”, str2 = “gesek”
Output: 1
Explanation: We can convert str1 into str2 by inserting an ‘s’.
Input: str1 = “cat”, str2 = “cut”
Output: 1
Explanation: We can convert str1 into str2 by replacing ‘a’ with ‘u’.
Input: str1 = “sunday”, str2 = “saturday”
Output: 3
Explanation: The last three characters and the first character are the same. We basically need to
convert “un” into “atur”. This can be done using three operations: replace ‘n’ with ‘r’, insert ‘t’,
insert ‘a’.
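The standard dynamic-programming solution builds a table dp[i][j] holding the minimum number of edits needed to convert the first i characters of str1 into the first j characters of str2. A minimal Java sketch:

public class EditDistance {
    static int minEdits(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1];
        for (int i = 0; i <= m; i++) dp[i][0] = i;   // remove all of s1's prefix
        for (int j = 0; j <= n; j++) dp[0][j] = j;   // insert all of s2's prefix
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                if (s1.charAt(i - 1) == s2.charAt(j - 1))
                    dp[i][j] = dp[i - 1][j - 1];     // characters match: no edit needed
                else
                    dp[i][j] = 1 + Math.min(dp[i - 1][j - 1],              // replace
                                   Math.min(dp[i - 1][j], dp[i][j - 1]));  // remove / insert
        return dp[m][n];
    }

    public static void main(String[] args) {
        System.out.println(minEdits("sunday", "saturday"));  // 3
        System.out.println(minEdits("cat", "cut"));          // 1
    }
}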

6. 0/1 Knapsack

Here, a knapsack is like a container or a bag. Suppose we are given some items which have
weights and profits. We have to put items in the knapsack in such a way that the total value
produces the maximum profit.

For example, suppose the capacity of the knapsack is 20 kg. We have to select the items in such a
way that the sum of the weights of the items is either smaller than or equal to the capacity of the
knapsack, and the profit is maximum.

There are two types of knapsack problems:

o 0/1 knapsack problem


o Fractional knapsack problem

We will discuss both the problems one by one. First, we will learn about the 0/1 knapsack
problem.
What is the 0/1 knapsack problem?

The 0/1 knapsack problem means that each item is either placed in the knapsack completely or
not at all. For example, suppose we have two items having weights 2 kg and 3 kg, respectively. If
we pick the 2 kg item, we cannot take only 1 kg of it (the item is not divisible); we have to
pick the 2 kg item completely. This is the 0/1 knapsack problem, in which we either pick an item
completely or leave it out. The 0/1 knapsack problem is solved by dynamic
programming.

Example of 0/1 knapsack problem.

Consider a problem with the following weights and profits:

Weights: {3, 4, 6, 5}

Profits: {2, 3, 1, 4}

The weight of the knapsack is 8 kg

The number of items is 4

The above problem can be approached by enumerating selection vectors, for example:

xi = {1, 0, 0, 1}

= {0, 0, 0, 1}

= {0, 1, 0, 1}

The above are some of the possible combinations, where 1 denotes that the item is picked
completely and 0 means that the item is not picked. Since there are 4 items, the number of possible
combinations is 2⁴ = 16. Once all the combinations are enumerated, we select the combination
that provides the maximum profit.
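Rather than enumerating all 16 combinations, dynamic programming fills a table K[i][w] giving the best profit achievable using the first i items with capacity w. A Java sketch using the data above:

public class Knapsack01 {
    static int knapsack(int[] wt, int[] val, int W) {
        int n = wt.length;
        int[][] K = new int[n + 1][W + 1];
        for (int i = 1; i <= n; i++)
            for (int w = 0; w <= W; w++) {
                K[i][w] = K[i - 1][w];                       // skip item i
                if (wt[i - 1] <= w)                          // or take it, if it fits
                    K[i][w] = Math.max(K[i][w],
                                       val[i - 1] + K[i - 1][w - wt[i - 1]]);
            }
        return K[n][W];
    }

    public static void main(String[] args) {
        int[] weights = {3, 4, 6, 5};
        int[] profits = {2, 3, 1, 4};
        System.out.println(knapsack(weights, profits, 8));   // 6, from items 1 and 4
    }
}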

7. Reliability Design

Say we are to design a 3-stage system with device types D1, D2, and D3, with costs of $30, $15, and
$20, respectively. The cost of the system is to be no more than $105. The reliabilities of the
device types are 0.9, 0.8, and 0.5.
Cost Reliability Max number of devices

D1 30 0.9 2

D2 15 0.8 3

D3 20 0.5 3

 Total cost used = 30 + 15 + 20 = 65

 Available amount = 105 − 65 = 40

The maximum number of copies of device Di that can be used is 1 + ⌊(c − (c1 + c2 + c3)) / ci⌋;
for D1, this gives 1 + 1 = 2.

 Maximum number of additional D1 devices we can purchase = ⌊40/30⌋ = 1

 Maximum number of additional D2 devices we can purchase = ⌊40/15⌋ = 2
 Maximum number of additional D3 devices we can purchase = ⌊40/20⌋ = 2

Let S0 = {(1, 0)}.

S11 = {(0.9, 30)}

Probability that a D1 device malfunctions = 1 − 0.9 = 0.1

Reliability of two D1 devices = 1 − 0.1 × 0.1 = 0.99

S21 = {(0.99, 60)}

S1 = {(0.9, 30), (0.99, 60)}

S12 = {(0.72, 45), (0.792, 75)}
Reliability of two D2 devices = 1 − (1 − 0.8)² = 0.96

S22 = {(0.864, 60)}

The tuple (0.9504, 90) is eliminated because we cannot purchase a D3 device with the remaining
amount of $15. Reliability of three D2 devices = 1 − (1 − 0.8)³ = 0.992

S32 = {(0.8928, 75)}

The tuple (0.792, 75) is eliminated because the tuple (0.864, 60) dominates it.

S2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}

S13 = {(0.36, 65), (0.432, 80), (0.4464, 95)}

Reliability of two D3 devices = 1 − (1 − 0.5)² = 0.75, so S23 = {(0.54, 85), (0.648, 100)}

Reliability of three D3 devices = 1 − (1 − 0.5)³ = 0.875, so S33 = {(0.63, 105)}

The tuple (0.63, 105) is dominated by (0.648, 100), and the tuple (0.4464, 95) is dominated by
(0.54, 85).

S3 = {(0.36, 65), (0.432, 80), (0.54, 85), (0.648, 100)}

Tracing back through the sets Si, we can determine that 1 D1 device, 2 D2 devices, and 2 D3 devices
give the highest reliability (0.648, at a cost of $100).
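The answer can be cross-checked with a small brute-force sketch (illustrative Java, not the tuple-set method itself) that tries every allowed number of copies of each device within the $105 budget.

public class ReliabilityDesign {
    public static void main(String[] args) {
        int[] cost = {30, 15, 20};
        double[] r = {0.9, 0.8, 0.5};
        int budget = 105;
        double best = 0; int b1 = 0, b2 = 0, b3 = 0;
        for (int m1 = 1; m1 <= 2; m1++)              // at most 2 copies of D1
            for (int m2 = 1; m2 <= 3; m2++)          // at most 3 copies of D2
                for (int m3 = 1; m3 <= 3; m3++) {    // at most 3 copies of D3
                    if (m1 * cost[0] + m2 * cost[1] + m3 * cost[2] > budget) continue;
                    // stage reliability with m parallel copies: 1 - (1 - r)^m
                    double rel = (1 - Math.pow(1 - r[0], m1))
                               * (1 - Math.pow(1 - r[1], m2))
                               * (1 - Math.pow(1 - r[2], m3));
                    if (rel > best) { best = rel; b1 = m1; b2 = m2; b3 = m3; }
                }
        System.out.printf("%d x D1, %d x D2, %d x D3, reliability %.3f%n",
                          b1, b2, b3, best);         // 1 x D1, 2 x D2, 2 x D3, 0.648
    }
}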

8. The Traveling Salesperson Problem.

In the traveling salesman problem, a salesman must visit n cities. We can say that the
salesman wishes to make a tour, or Hamiltonian cycle, visiting each city exactly once and
finishing at the city he starts from. There is a non-negative cost c(i, j) to travel from
city i to city j. The goal is to find a tour of minimum cost. We assume that every two
cities are connected. Such problems are called traveling-salesman problems (TSP).

We can model the cities as a complete graph of n vertices, where each vertex represents a
city.

It can be shown that TSP is NP-complete.

If we assume the cost function c satisfies the triangle inequality, then we can use the
following approximate algorithm.

Triangle inequality
Let u, v, w be any three vertices; then we have c(u, w) ≤ c(u, v) + c(v, w).

One important observation used to develop an approximate solution is that if we remove an edge
from an optimal tour H*, the remainder becomes a spanning tree.

Approx-TSP (G = (V, E))
{
1. Compute a MST T of G;
2. Select any vertex r as the root of the tree;
3. Let L be the list of vertices visited in a preorder tree walk of T;
4. Return the Hamiltonian cycle H that visits the vertices in the order L;
}

Intuitively, Approx-TSP first makes a full walk of the MST T, which visits each edge exactly
two times. To create a Hamiltonian cycle from the full walk, it bypasses some vertices
(which corresponds to making a shortcut).
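A Java sketch of Approx-TSP is given below; the cost matrix is a small hypothetical metric instance, the MST is built with Prim's algorithm, and the tour is the preorder walk of that tree.

import java.util.*;

public class ApproxTSP {
    // 2-approximation for metric TSP: preorder walk of a minimum spanning tree.
    static List<Integer> approxTour(int[][] c) {
        int n = c.length;
        int[] parent = new int[n];
        int[] key = new int[n];
        boolean[] in = new boolean[n];
        Arrays.fill(key, Integer.MAX_VALUE);
        key[0] = 0; parent[0] = -1;
        for (int it = 0; it < n; it++) {             // Prim's algorithm, rooted at vertex 0
            int u = -1;
            for (int v = 0; v < n; v++)
                if (!in[v] && (u == -1 || key[v] < key[u])) u = v;
            in[u] = true;
            for (int v = 0; v < n; v++)
                if (!in[v] && c[u][v] < key[v]) { key[v] = c[u][v]; parent[v] = u; }
        }
        List<List<Integer>> children = new ArrayList<>();
        for (int v = 0; v < n; v++) children.add(new ArrayList<>());
        for (int v = 1; v < n; v++) children.get(parent[v]).add(v);
        List<Integer> tour = new ArrayList<>();
        preorder(0, children, tour);                 // the preorder walk is the shortcut tour
        tour.add(0);                                 // return to the start to close the cycle
        return tour;
    }

    static void preorder(int u, List<List<Integer>> ch, List<Integer> out) {
        out.add(u);
        for (int v : ch.get(u)) preorder(v, ch, out);
    }

    public static void main(String[] args) {
        // hypothetical 4-city cost matrix satisfying the triangle inequality
        int[][] c = { {0, 2, 8, 6}, {2, 0, 6, 4}, {8, 6, 0, 3}, {6, 4, 3, 0} };
        System.out.println(approxTour(c));           // prints [0, 1, 3, 2, 0]
    }
}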

9. Techniques for Binary Trees


Dynamic programming (DP) is a technique to solve problems by breaking them down
into overlapping sub-problems which follow the optimal substructure property. There are
various problems using DP, like subset sum, knapsack, coin change, etc. DP can also be
applied on trees to solve some specific problems.
Pre-requisite: DFS
Given a tree with N nodes and N-1 edges, calculate the maximum sum of the node
values on a path from the root to any of the leaves without re-visiting any node (a sketch follows below).
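As a sketch of this idea (illustrative Java; the tree and its node values are hypothetical), a DFS computes, at each node, the node's own value plus the best result among its children.

import java.util.*;

public class TreeDP {
    static List<List<Integer>> adj;
    static int[] value;

    // Maximum sum on a root-to-leaf path: DFS using optimal substructure.
    static int maxPathSum(int u, int parentNode) {
        int best = Integer.MIN_VALUE;
        for (int v : adj.get(u))
            if (v != parentNode) best = Math.max(best, maxPathSum(v, u));
        return value[u] + (best == Integer.MIN_VALUE ? 0 : best);  // a leaf contributes just itself
    }

    public static void main(String[] args) {
        // hypothetical tree: 0 is the root, edges 0-1, 0-2, 1-3, 1-4
        value = new int[]{3, 2, 1, 5, 4};
        adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(maxPathSum(0, -1));   // 3 + 2 + 5 = 10
    }
}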

Binary Search Trees

A binary search tree is organized in the form of a binary tree. Such a tree can be defined by a linked data
structure in which a particular node is an object. In addition to a key field, each node contains
fields left, right, and p that point to the nodes corresponding to its left child, its right child, and its
parent, respectively. If a child or the parent is missing, the appropriate field contains the value NIL.
The root node is the only node in the tree whose parent field is NIL.

Binary Search Tree Property

Let x be a node in a binary search tree.

o If y is a node in the left subtree of x, then key [y] ≤ key [x].


o If y is a node in the right subtree of x, then key [x] ≤ key [y].

In this tree key [x] = 15

o If y is a node in the left subtree of x, then key [y] = 5.

1. i.e. key [y] ≤ key[x].


o If y is a node in the right subtree of x, then key [y] = 20.
1. i.e. key [x] ≤ key[y].

Traversal in Binary Search Trees:

1. In-Order-Tree-Walk (x): Always prints the keys of a binary search tree in sorted order.

INORDER-TREE-WALK (x) - Running time is θ(n)


1. If x ≠ NIL.
2. then INORDER-TREE-WALK (left [x])
3. print key [x]
4. INORDER-TREE-WALK (right [x])

2. PREORDER-TREE-WALK (x): In which we visit the root node before the nodes in either
subtree.

PREORDER-TREE-WALK (x):
1. If x ≠ NIL.
2. then print key [x]
3. PREORDER-TREE-WALK (left [x]).
4. PREORDER-TREE-WALK (right [x]).

3. POSTORDER-TREE-WALK (x): In which we visit the root node after the nodes in its
subtree.

POSTORDER-TREE-WALK (x):
1. If x ≠ NIL.
2. then POSTORDER-TREE-WALK (left [x]).
3. POSTORDER-TREE-WALK (right [x]).
4. print key [x]

Querying a Binary Search Trees:

1. Searching: The TREE-SEARCH (x, k) algorithm searches the subtree rooted at x for a node whose
key value equals k. It returns a pointer to the node if it exists; otherwise NIL.

TREE-SEARCH (x, k)
1. If x = NIL or k = key [x].
2. then return x.
3. If k < key [x].
4. then return TREE-SEARCH (left [x], k)
5. else return TREE-SEARCH (right [x], k)

Clearly, this algorithm runs in O(h) time, where h is the height of the tree. The iterative version
of the above algorithm is very easy to implement.

ITERATIVE-TREE- SEARCH (x, k)


1. while x ≠ NIL and k ≠ key [x].
2. do if k < key [x].
3. then x ← left [x].
4. else x ← right [x].
5. return x.

2. Minimum and Maximum: An item in a binary search tree whose key is a minimum can
always be found by following left child pointers from the root until a NIL is encountered. The
following procedure returns a pointer to the minimum element in the subtree rooted at a given
node x.

TREE- MINIMUM (x)


1. While left [x] ≠ NIL.
2. do x←left [x].
3. return x.
TREE-MAXIMUM (x)
1. While right [x] ≠ NIL.
2. do x←right [x].
3. return x.

3. Successor and predecessor: Given a node in a binary search tree, we sometimes need to find
its successor in the sorted order determined by an inorder tree walk. If all keys are distinct, the
successor of a node x is the node with the smallest key greater than key[x]. The structure of a
binary search tree allows us to determine the successor of a node without ever comparing keys. The
following procedure returns the successor of a node x in a binary search tree if it exists, and NIL if x
has the greatest key in the tree:

TREE-SUCCESSOR (x)


1. If right [x] ≠ NIL.
2. then return TREE-MINIMUM (right [x])
3. y←p [x]
4. While y ≠ NIL and x = right [y]
5. do x←y
6. y←p[y]
7. return y.
The code for TREE-SUCCESSOR is broken into two cases. If the right subtree of node x is
nonempty, then the successor of x is just the leftmost node in the right subtree, which we find in
line 2 by calling TREE-MINIMUM (right [x]). On the other hand, if the right subtree of node x
is empty and x has a successor y, then y is the lowest ancestor of x whose left child is also an
ancestor of x. To find y, we quickly go up the tree from x until we encounter a node that is the
left child of its parent; lines 3-7 of TREE-SUCCESSOR handle this case.

The running time of TREE-SUCCESSOR on a tree of height h is O (h) since we either follow a
simple path up the tree or follow a simple path down the tree. The procedure TREE-
PREDECESSOR, which is symmetric to TREE-SUCCESSOR, also runs in time O (h).
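To tie these procedures together, here is a compact Java sketch of the node structure and three of the operations above (TREE-SEARCH, TREE-MINIMUM, TREE-SUCCESSOR), following the same logic as the pseudocode and the 15/5/20 example tree.

public class BST {
    static class Node {
        int key;
        Node left, right, p;    // left child, right child, parent
        Node(int key) { this.key = key; }
    }

    // TREE-SEARCH: O(h) descent guided by the BST property.
    static Node search(Node x, int k) {
        while (x != null && k != x.key)
            x = (k < x.key) ? x.left : x.right;
        return x;
    }

    // TREE-MINIMUM: keep following left-child pointers.
    static Node minimum(Node x) {
        while (x.left != null) x = x.left;
        return x;
    }

    // TREE-SUCCESSOR: leftmost node of the right subtree, or the
    // lowest ancestor whose left child is also an ancestor of x.
    static Node successor(Node x) {
        if (x.right != null) return minimum(x.right);
        Node y = x.p;
        while (y != null && x == y.right) { x = y; y = y.p; }
        return y;
    }

    public static void main(String[] args) {
        // build the small example tree: 15 at the root, 5 to its left, 20 to its right
        Node root = new Node(15);
        root.left = new Node(5);  root.left.p = root;
        root.right = new Node(20); root.right.p = root;
        System.out.println(successor(search(root, 5)).key);  // prints 15
    }
}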

10. Techniques for Graphs – BFS – DFS.

Graph traversals
Graph traversal means visiting every vertex and edge exactly once in a well-defined order. While
using certain graph algorithms, you must ensure that each vertex of the graph is visited exactly
once. The order in which the vertices are visited is important and may depend upon the
algorithm or question that you are solving.

During a traversal, it is important that you track which vertices have been visited. The most
common way of tracking vertices is to mark them.

Breadth First Search (BFS)


There are many ways to traverse graphs. BFS is the most commonly used approach.

BFS is a traversing algorithm where you start traversing from a selected node (the source or
starting node) and traverse the graph layerwise, thus exploring the neighbour nodes (nodes which
are directly connected to the source node). You must then move towards the next-level neighbour
nodes.

As the name BFS suggests, you are required to traverse the graph breadthwise as follows:

1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer

BFS algorithm

In this article, we will discuss the BFS algorithm in the data structure. Breadth-first search is a
graph traversal algorithm that starts traversing the graph from the root node and explores all the
neighboring nodes. Then, it selects the nearest node and explores all the unexplored nodes.
While using BFS for traversal, any node in the graph can be considered as the root node.
There are many ways to traverse the graph, but among them, BFS is the most commonly used
approach. It is an algorithm that searches all the vertices of a tree or graph data structure level by
level, using a queue. BFS puts every vertex of the graph into one of two categories - visited and
non-visited. It selects a single node in a graph and, after that, visits all the nodes adjacent to the
selected node.

Applications of BFS algorithm

The applications of breadth-first-algorithm are given as follows -

o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all the
neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ this
process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages. It starts traversing from the source page
and follows the links associated with the page. Here, every web page is considered as a
node in the graph.
o BFS is used to determine shortest paths and minimum spanning trees in unweighted graphs.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow
network.

Algorithm

The steps involved in the BFS algorithm to explore a graph are given as follows -

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until QUEUE is empty

Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).

Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and set
their STATUS = 2 (waiting state)

[END OF LOOP]
Example of BFS algorithm

Now, let's understand the working of BFS algorithm by using an example. In the example given
below, there is a directed graph having 7 vertices.

Step 1 - First, add the starting node A to QUEUE1; QUEUE2 starts empty.

1. QUEUE1 = {A}
2. QUEUE2 = {NULL}

Step 2 - Now, delete node A from queue1 and add it into queue2. Insert all neighbors of node A
to queue1.

1. QUEUE1 = {B, D}
2. QUEUE2 = {A}

Step 3 - Now, delete node B from queue1 and add it into queue2. Insert all neighbors of node B
to queue1.

1. QUEUE1 = {D, C, F}
2. QUEUE2 = {A, B}

Step 4 - Now, delete node D from queue1 and add it into queue2. Insert all neighbors of node D
to queue1. The only neighbor of Node D is F since it is already inserted, so it will not be inserted
again.

1. QUEUE1 = {C, F}
2. QUEUE2 = {A, B, D}

Step 5 - Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to
queue1.

1. QUEUE1 = {F, E, G}
2. QUEUE2 = {A, B, D, C}
Step 6 - Delete node F from queue1 and add it into queue2. Insert all neighbors of node F to
queue1. Since all the neighbors of node F are already present, we will not insert them again.

1. QUEUE1 = {E, G}
2. QUEUE2 = {A, B, D, C, F}

Step 7 - Delete node E from queue1. Since all of its neighbors have already been added, we
will not insert them again. Now, all the nodes are visited, and the target node E is present
in queue2.

1. QUEUE1 = {G}
2. QUEUE2 = {A, B, D, C, F, E}

Complexity of BFS algorithm

Time complexity of BFS depends upon the data structure used to represent the graph. The time
complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm explores every
node and edge. In a graph, the number of vertices is O(V), whereas the number of edges is O(E).

The space complexity of BFS can be expressed as O(V), where V is the number of vertices.

Implementation of BFS algorithm

Now, let's see the implementation of the BFS algorithm in Java.

In this code, we are using the adjacency list to represent our graph. Implementing the Breadth-
First Search algorithm in Java makes it much easier to deal with the adjacency list since we only
have to travel through the list of nodes attached to each node once the node is dequeued from the
head (or start) of the queue.
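The original code listing is not reproduced in these notes, so the following is a minimal adjacency-list sketch of such an implementation (the example graph is hypothetical).

import java.util.*;

public class BFSExample {
    static void bfs(List<List<Integer>> adj, int source) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new LinkedList<>();
        visited[source] = true;           // mark before enqueueing (waiting state)
        queue.add(source);
        while (!queue.isEmpty()) {
            int n = queue.poll();         // dequeue and process (processed state)
            System.out.print(n + " ");
            for (int neighbor : adj.get(n))
                if (!visited[neighbor]) { // enqueue only ready-state neighbours
                    visited[neighbor] = true;
                    queue.add(neighbor);
                }
        }
    }

    public static void main(String[] args) {
        // hypothetical graph with 5 vertices
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {2, 4}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        bfs(adj, 0);   // prints 0 1 2 3 4
    }
}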

DFS (Depth First Search) algorithm

In this article, we will discuss the DFS algorithm in the data structure. It is a recursive algorithm
to search all the vertices of a tree data structure or a graph. The depth-first search (DFS)
algorithm starts with the initial node of graph G and goes deeper until we find the goal node or
the node with no children.

Because of the recursive nature, stack data structure can be used to implement the DFS
algorithm. The process of implementing the DFS is similar to the BFS algorithm.

The step by step process to implement the DFS traversal is given as follows -

1. First, create a stack with the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of traversal, and push that vertex onto the
stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto the
top of the stack.
4. Now, repeat step 3 until no non-visited vertex is left adjacent to the vertex on the stack's
top.
5. If no such vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 3, 4, and 5 until the stack is empty.

Applications of DFS algorithm

The applications of using the DFS algorithm are given as follows -

o DFS algorithm can be used to implement the topological sorting.


o It can be used to find the paths between two vertices.
o It can also be used to detect cycles in the graph.
o DFS algorithm is also used for puzzles with only one solution.
o DFS is used to determine whether a graph is bipartite or not.

Algorithm

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until STACK is empty

Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)

Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS = 1)
and set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT

Pseudocode
1. DFS(G,v) ( v is the vertex where the search starts )
2. Stack S := {}; ( start with an empty stack )
3. for each vertex u, set visited[u] := false;
4. push S, v;
5. while (S is not empty) do
6. u := pop S;
7. if (not visited[u]) then
8. visited[u] := true;
9. for each unvisited neighbour w of u
10. push S, w;
11. end if
12. end while
13. END DFS()
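A direct Java rendering of this pseudocode (a sketch using an explicit stack, on a hypothetical example graph) is:

import java.util.*;

public class DFSExample {
    static void dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int u = stack.pop();
            if (!visited[u]) {            // mark on pop, as in the pseudocode
                visited[u] = true;
                System.out.print(u + " ");
                for (int w : adj.get(u))
                    if (!visited[w]) stack.push(w);
            }
        }
    }

    public static void main(String[] args) {
        // hypothetical graph with 5 vertices
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {2, 4}};
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        dfs(adj, 0);   // prints 0 2 4 1 3
    }
}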

Example of DFS algorithm

Now, let's understand the working of the DFS algorithm by using an example. In the example
given below, there is a directed graph having 7 vertices.

Now, let's start examining the graph starting from Node H.

Step 1 - First, push H onto the stack.

1. STACK: H

Step 2 - POP the top element from the stack, i.e., H, and print it. Now, PUSH all the neighbors
of H onto the stack that are in ready state.

1. Print: H
2. STACK: A
Step 3 - POP the top element from the stack, i.e., A, and print it. Now, PUSH all the neighbors
of A onto the stack that are in ready state.

1. Print: A
2. STACK: B, D

Step 4 - POP the top element from the stack, i.e., D, and print it. Now, PUSH all the neighbors
of D onto the stack that are in ready state.

1. Print: D
2. STACK: B, F

Step 5 - POP the top element from the stack, i.e., F, and print it. Now, PUSH all the neighbors of
F onto the stack that are in ready state.

1. Print: F
2. STACK: B

Step 6 - POP the top element from the stack, i.e., B, and print it. Now, PUSH all the neighbors of
B onto the stack that are in ready state.

1. Print: B
2. STACK: C

Step 7 - POP the top element from the stack, i.e., C, and print it. Now, PUSH all the neighbors of
C onto the stack that are in ready state.
