
Data Structures & Algorithms

SECTION 9:

OBJECTIVES: At the end of the section, the student is expected to be able to


1. explain and illustrate the minimum spanning tree theorem
2. define Prim’s and Kruskal’s algorithms and discuss the issues involved in
implementing these algorithms on the computer
3. define the SSSP and APSP problems
4. describe Dijkstra's and Floyd's algorithms and discuss how and why these
algorithms work
5. define transitive closure of an adjacency matrix
6. describe Warshall’s algorithm and discuss how and why the algorithm works

DISCUSSION:

In this section we will examine two important graph problems which arise in
modeling and solving certain “real world” problems, such as minimizing the cost of linking
the various nodes of a communications network or finding the cheapest way of going
from one city to another. These are the problems of:

a. Finding minimum cost spanning trees for undirected graphs, and


b. Finding shortest paths in directed graphs

We are interested in the algorithms to solve these problems not only for their
practical usefulness, but more for the lesson to be derived from them as prime examples
of a particular technique for algorithm design (the ‘greedy’ approach) and also for the
challenge of implementing them efficiently on the computer. The beauty of these
algorithms lies in their brevity and simplicity, and that in itself is a valuable lesson to learn.

Minimum cost spanning trees for undirected graphs

In the previous section, we have seen how depth-first search or breadth-first
search initiated from any vertex in a connected undirected graph G can be used to
generate a spanning tree for G. For the case in which costs are assigned to the edges in
the tree, we define the cost of the spanning tree as the sum of the costs of the edges, or

Section 9 Page 1 of 23
Jennifer Laraya-Llovido
branches. An important problem related to such connected weighted undirected graphs is
finding a spanning tree of minimum cost.

For example, if we think of a communication network as a weighted graph, in
which vertices are nodes in the network and edges are communication links, then a
minimum cost spanning tree for the graph represents a network connecting all nodes at
minimum cost.

The number of spanning trees which can be constructed for a given graph is
rather large. Specifically, for a complete graph on n vertices, the number of spanning
trees is n^(n-2). This result follows from the following theorem:

Cayley's theorem: The number of spanning trees for n distinct vertices is n^(n-2).

Thus, for a complete graph on four vertices, the number of spanning trees is 16; for ten
vertices, it is 100 million! Even for a graph that is not complete, it is reasonable to expect
that the number of spanning trees is still quite large. Obviously, finding a minimum cost
spanning tree for a given undirected graph by enumeration, i.e., constructing all possible
spanning trees for the graph, computing the cost of each, and selecting one with minimum
cost, is definitely out of the question. (In any case, such an approach would be tedious
and totally uninteresting.)
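As a quick arithmetic check of Cayley's theorem, a few lines of Python (the function name is ours, not from the text) reproduce the counts just cited:

```python
def num_spanning_trees(n):
    # Cayley's theorem: the complete graph on n distinct
    # vertices has n^(n-2) spanning trees.
    return n ** (n - 2)

print(num_spanning_trees(4))   # 16
print(num_spanning_trees(10))  # 100000000, i.e., 100 million
```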

Actually, there are a number of elegant and efficient algorithms for generating a
minimum cost spanning tree for a weighted undirected graph. Among these, the two most
popular are Prim's algorithm and Kruskal's algorithm. Both are greedy algorithms.
Following Standish, we define a greedy algorithm as an algorithm in which a sequence of
locally opportunistic choices succeeds in finding a global optimum.

Both algorithms are based on the following theorem:

MST Theorem: Let G = (V, E) be a connected, weighted, undirected graph. Let U
be some proper subset of V and (u, v) be an edge of least cost such that u ∈ U and
v ∈ V-U. There exists a minimum cost spanning tree T such that (u, v) is an edge in T.


Proof: Suppose T' is a minimum cost spanning tree for G and edge (u, v) is not in
T'. Now add (u, v) to T'. Clearly a cycle is formed in T' with (u, v) as one of the edges in
the cycle. Likewise, there must be some edge (p, q), with p ∈ U and q ∈ V-U, in the
resulting cycle [see figure below]. Since edge (u, v) is an edge of least cost among those
edges with one vertex in U and the other vertex in V-U, the cost of (u, v) ≤ cost of (p, q).
Hence, removing (p, q) from T' + (u, v) yields a spanning tree whose cost cannot exceed
that of T'; this is a minimum cost spanning tree T in which (u, v) is an edge.

[Figure: the cut between U and V-U, crossed by the least-cost edge (u, v) and by
another edge (p, q) of the cycle, with cost(u, v) ≤ cost(p, q)]

Prim’s algorithm

Let G = (V, E) be a connected, weighted, undirected graph. The minimum cost
spanning tree, T, is generated by initially choosing one vertex, any vertex, in G. The tree
then grows one edge, or branch, at a time, as vertices are successively chosen.

Let U denote the set of vertices already chosen and T denote the set of edges
already taken in at any instance of the algorithm. Initially, U and T are both empty. Prim's
algorithm may be stated as follows:

1. [Initial vertex] Choose any vertex in V and place it in U.

2. [Next vertex] From among the vertices in V-U, choose that vertex, say v, which is
connected to some vertex, say u, in U by an edge of least cost. Add vertex v to U
and edge (u, v) to T.

3. [All vertices considered?] Repeat Step 2 until U = V. Then, T is a minimum cost
spanning tree for G.

We see from this description of Prim's algorithm that it is a direct and straightforward
application of the MST theorem. To see how the algorithm actually works, and to assess
which steps are crucial in implementing the algorithm on a computer, consider the
following example.


Example: Prim’s algorithm to find a minimum cost spanning tree.

22
2 3
10 14 13 23
1 4 5
12 18 15 8

8 7 6
16 17

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . .

T U V–U Edges from U to V-U


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . .. . .

1 1 2, 3, 4, 5, 6, 7, 8 (1, 2) -- 10
(1, 7) -- 18
(1, 8) -- 12
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . ..
2
1, 2 3, 4, 5, 6, 7, 8 (1, 7) -- 18
1 (1, 8) -- 12
(2, 3) -- 22
(2, 4) -- 14
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . ..
1, 2, 8 3, 4, 5, 6, 7 (1, 7) -- 18
2
(2, 3) -- 22
1 (2, 4) -- 14
(8, 7) -- 16
8
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . . . . . . . . . . . ..

2 1, 2, 4, 8 3, 5, 6, 7 (1, 7) -- 18
1
(2, 3) -- 22
4 (4, 5) -- 13
(4, 7) -- 15
8 (8, 7) -- 16

Section 9 Page 4 of 23
Jennifer Laraya-Llovido
Data Structures & Algorithms

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . .

T U V-U Edges from U to V- U


. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . .
2 1, 2, 4, 5, 8 3, 6, 7 (1, 7) -- 18
1
(2, 3) -- 22
4 5
(4, 7) -- 15
(5, 3) -- 23
8 (5, 6) -- 8 .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

2 1, 2, 4, 5, 6, 8 3, 7 (1, 7) -- 18
1
(2, 3) -- 22
4 5
(4, 7) -- 15
(5, 3) -- 23
8 6 (6, 7) -- 17

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

2 1, 2, 4, 5, 6,
1
7, 8 3 (2, 3) -- 22
4 5
(5, 3) -- 23
7 6
8
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

22 3
10 2 14 1, 2, 4, 5, 6,
1
13 5
7, 8
4
12 15 8
7 6
8

Cost = 94

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. . . . . . . . . . . . . . .

Program implementation of Prim’s algorithm


The major computational effort incurred in implementing Prim's algorithm by
computer is finding the edge of least cost connecting some vertex u in U to some vertex
in V-U at each step of the algorithm. One way to carry out this search efficiently is to
maintain a heap which contains all edges connecting the vertices in U to the vertices
in V-U, with the heap ordered by cost, from smallest to largest. Then, the edge at the
root of the heap is the desired edge. Let this edge be (u, v). Once vertex v is added to U,
the heap is updated so that it contains only edges from the new U to the new V-U. This
means deleting edges terminating in v, which is now in U, and adding
edges originating from v and terminating in the new V-U.
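The heap-based approach can be sketched with Python's heapq module (our own sketch, not from the text; it lazily skips stale edges whose far endpoint has already entered U, a common simplification of the deletion step described above):

```python
import heapq

def prim_heap(adj, s):
    """Heap-based Prim: adj[u] is a list of (cost, v) pairs for the
    undirected graph. Returns the total cost of a minimum cost
    spanning tree grown from vertex s."""
    n = len(adj)
    in_u = [False] * n
    in_u[s] = True
    heap = list(adj[s])                  # edges from U to V-U, keyed by cost
    heapq.heapify(heap)
    mincost, taken = 0, 0
    while heap and taken < n - 1:
        cost, v = heapq.heappop(heap)    # least-cost edge sits at the root
        if in_u[v]:
            continue                     # stale entry: v entered U meanwhile
        in_u[v] = True
        mincost += cost
        taken += 1
        for edge in adj[v]:              # add edges from v to the new V-U
            heapq.heappush(heap, edge)
    return mincost
```

On the eight-vertex example graph above (relabeled 0–7), this returns a total cost of 94.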


Another approach is to define two vectors of size n, where n is the number of
vertices in the graph, say CLOSEST and LOWCOST, with elements defined as follows:

CLOSEST(i) = vertex in U that is currently closest to vertex i in V-U

LOWCOST(i) = cost of the edge (CLOSEST(i), i)

Example:

[Figure: a vertex i in V-U joined to vertices p, q and r in U by edges of cost 25,
30 and 15, respectively; the nearest vertex in U is r, so CLOSEST(i) = r and
LOWCOST(i) = 15]

Initially, each element of CLOSEST is set equal to the starting vertex, say s
(since, initially, s is the only vertex in U). Correspondingly, LOWCOST is initialized to
contain the costs of the edges from each vertex in V-U to vertex s in U. LOWCOST(s) is
set to infinity to indicate that s is already in U. Subsequently, at each step, we find the
smallest element of LOWCOST, say LOWCOST(k), to give us the vertex k in V-U that is
connected to some vertex in U by an edge of least cost [by definition, this vertex is
CLOSEST(k)]. Vertex k is then added to U, edge (CLOSEST(k), k) is attached to the
growing tree, and the vectors LOWCOST and CLOSEST are accordingly updated.

[Figure: before the update, vertex k is the vertex in V-U with the least LOWCOST
value; after k is added to U, LOWCOST(k) is set to ∞, and a vertex i still in V-U that
is nearer to k than to its previously closest vertex r has CLOSEST(i) updated from r
to k and LOWCOST(i) reduced accordingly, e.g., from 17 to 13]

Figure 1. Updating the CLOSEST and LOWCOST vectors


The EASY procedure PRIM implements Prim's algorithm using this approach. In
this and the succeeding procedures we will assume that the number of vertices n and
edges e are global variables.

procedure PRIM(C, s)
//Generates a minimum cost spanning tree for a connected, weighted, undirected graph
on n vertices using Prim's algorithm. Graph is represented by its full cost adjacency
matrix C. Vertex s is the start vertex.//

array C(1:n, 1:n), LOWCOST(1:n), CLOSEST(1:n)

//Initializations//
for i ← 1 to n do
   CLOSEST(i) ← s
   LOWCOST(i) ← C(s, i)
endfor
LOWCOST(s) ← ∞
mincost ← 0 //cost of minimum cost spanning tree//

//Process rest of vertices in V-U//
nodes ← n - 1
while nodes > 0 do
   leastcost ← ∞
   for i ← 1 to n do //find edge of least cost//
      if LOWCOST(i) < leastcost then [leastcost ← LOWCOST(i); k ← i]
   endfor
   output CLOSEST(k), k, leastcost //print edge and corresponding cost//
   mincost ← mincost + leastcost
   LOWCOST(k) ← ∞
   for i ← 1 to n do //update LOWCOST and CLOSEST vectors//
      if LOWCOST(i) < ∞ and C(k, i) < LOWCOST(i) then
         [LOWCOST(i) ← C(k, i); CLOSEST(i) ← k]
   endfor
   nodes ← nodes - 1
endwhile
output mincost
end PRIM

For an undirected graph on n vertices and e edges, the time complexity of Prim's
algorithm as implemented using the LOWCOST and CLOSEST vectors is clearly O(n²).
If a heap is used to store the edges from U to V-U and to find the edge of least cost, an
O(e log₂ e) version is possible.

Kruskal’s Algorithm


Let G = (V, E) be a connected, weighted, undirected graph on n vertices. The
minimum cost spanning tree, T, is built edge by edge, with the edges considered in
nondecreasing order of their cost.

Initially, the edge of least cost is chosen. Subsequently, at each step, the edge of
least cost among the remaining edges in E is considered for inclusion in T. If including
this edge in T will create a cycle with the edges already in T, then it is rejected. The
algorithm terminates once n-1 edges have been included in T.

As with Prim’s algorithm, we see from this description of Kruskal’s algorithm that it
is also a straightforward application of the MST theorem. To see how the algorithm
actually works, and to assess what steps are critical in implementing the algorithm on a
computer, consider the following example. For the moment, ignore the FOREST column.

Example: Kruskal’s algorithm to find a minimum cost spanning tree

EDGE COST
===== =====
(1, 7) -- 1
1 2 (3, 4) -- 3
(2, 7) -- 4
(3, 7) -- 9
(2, 3) -- 15
6 7 3
(4, 7) -- 16
(4, 5) -- 17
5 4 (1, 2) -- 20
(1, 6) -- 23
(5, 7) -- 25
(5, 6) -- 28
(6, 7) -- 36

Section 9 Page 8 of 23
Jennifer Laraya-Llovido
Data Structures & Algorithms

EDGE ACTION T FOREST

1 2 3 4 5 6 7

(1,7) Accept
1 2 3 4 5 6 7
7

(3, 4) Accept
1 2 4 5 6 7

7 3
3 1
4

(2, 7) Accept
1 2 4 5 6 7

3
7 3 1 2

(3, 7) Accept
1 2 5 6 7

7
3 1 2 4

4
3

(2, 3) Reject =As is= 5 6 7

1 2 3 4

(4, 7) Reject =As is= =As is=

Section 9 Page 9 of 23
Jennifer Laraya-Llovido
Data Structures & Algorithms

EDGE ACTION T FOREST

(4, 5) Accept
1 2 6 7

7
3 1 2 3 4 5

5 4

(1, 2) Reject =As is= =As is=

(1, 6) Accept
1 7
23 1 4 2
8
6 7 3
17 3 1 2 3 4 5 6
5 4

Cost = 57

Program implementation of Kruskal’s algorithm

The major computational effort incurred in using Kruskal's algorithm is sorting the
edges in nondecreasing order of their cost. If e is the number of edges in the graph, this
takes O(e log₂ e) time if heapsort, say, is used.

As regards Kruskal's algorithm proper, the crucial task is determining whether
accepting a candidate edge will create a cycle with the edges already in T. To this end,
suppose we imagine the vertices of the graph as constituting a forest of trees, where
each vertex is initially the root of its own one-vertex tree. Then, each edge that is
included in T will be joining two such vertices, say u and v. If u and v belong to two
different trees in the forest, then edge (u, v) is accepted, and the two trees are merged
into one. On the other hand, if u and v belong to the same tree in the forest, then edge
(u, v) is rejected, since accepting it will result in a cycle in T.

It is immediately clear that we have here an instance of the equivalence problem
previously discussed. Each component of the growing tree is an equivalence class, and
an edge that is incident on two vertices that belong to the same equivalence class
is rejected (because a cycle will otherwise be formed in this particular component of T),
while an edge incident on two vertices that belong to two different equivalence classes is
accepted, resulting in a union of the two classes into one. (In terms of the growing tree,
this is equivalent to joining two components together.) The UNION and FIND procedures
of Session 10 are the key to an efficient implementation of Kruskal's algorithm, as is
abundantly evident in the EASY procedure KRUSKAL.

procedure KRUSKAL(EDGE, COST)
//Generates a minimum cost spanning tree for a connected, weighted, undirected graph
G on n vertices and e edges. The edges of G are stored in the array EDGE in
nondecreasing order of their cost. The vector COST contains the corresponding cost of
each edge in EDGE. KRUSKAL invokes the UNION and FIND procedures.//

array EDGE(1:e, 1:2), COST(1:e), FATHER(1:n)

//Initialize FATHER vector for UNION-FIND algorithms.//
for j ← 1 to n do
   FATHER(j) ← -1
endfor

//Perform Kruskal's algorithm.//
mincost ← 0 //will contain cost of minimum cost spanning tree//
i ← 1
nedges ← 0 //will contain count of edges already in T//

while nedges < n-1 do
   j ← EDGE(i, 1) //first vertex of ith edge//
   k ← EDGE(i, 2) //second vertex of ith edge//
   j ← FIND(j)
   k ← FIND(k)
   if j <> k then [call UNION(j, k) //merge trees//
                   output EDGE(i, 1), EDGE(i, 2), COST(i)
                   nedges ← nedges + 1 //update count of edges in T//
                   mincost ← mincost + COST(i)] //update running cost//
   i ← i + 1
endwhile
output mincost
end KRUSKAL


//The EASY procedure UNION implements the weighting rule for the UNION operation//
procedure UNION(i, j)
//Merges trees with roots i and j, i <> j, using the weighting rule for union.//
array FATHER(1:n)
count ← FATHER(i) + FATHER(j)
if |FATHER(i)| > |FATHER(j)| then [FATHER(j) ← i
                                   FATHER(i) ← count]
else [FATHER(i) ← j
      FATHER(j) ← count]
end UNION

//The EASY procedure FIND implements the collapsing rule for the FIND operation//
procedure FIND(i)
//Finds the root of the tree containing node i and compresses the path from node i to the
root//
array FATHER(1:n)
//Find root//
k ← i
while FATHER(k) > 0 do
   k ← FATHER(k)
endwhile
//Compress path from node i to root k//
j ← i
while j <> k do
   temp ← FATHER(j)
   FATHER(j) ← k
   j ← temp
endwhile
return(k)
end FIND

In Kruskal’s algorithm, as implemented by procedure KRUSKAL, a sequence of


O(e) UNION-FIND operations takes O(eG(e)) time, where (if you recall) G (e) ≤ 5 for all e
65536.
≤2 However, using KRUSKAL requires that a sorting algorithm (heapsort etc.) and
the UNION-FIND procedures be available.
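For concreteness, here is a Python sketch of the same machinery (our own transcription, not from the text: the FATHER vector becomes a list storing the negated tree size at each root, vertices are 0-indexed, and edges are passed as (cost, u, v) triples):

```python
def find(father, i):
    """FIND with the collapsing rule: returns the root of i's tree
    and compresses the path from i to the root."""
    root = i
    while father[root] >= 0:
        root = father[root]
    while i != root:                  # collapse the path from i to the root
        father[i], i = root, father[i]
    return root

def union(father, i, j):
    """Weighted UNION of the trees with roots i and j, i != j.
    A root r stores -(number of nodes in its tree) in father[r]."""
    count = father[i] + father[j]
    if father[i] < father[j]:         # tree i has more nodes
        father[j] = i
        father[i] = count
    else:
        father[i] = j
        father[j] = count

def kruskal(n, edges):
    """Kruskal's algorithm; edges is a list of (cost, u, v) triples.
    Returns (tree_edges, mincost)."""
    father = [-1] * n                 # each vertex starts as its own root
    tree, mincost = [], 0
    for cost, u, v in sorted(edges):  # nondecreasing order of cost
        ru, rv = find(father, u), find(father, v)
        if ru != rv:                  # accepting (u, v) creates no cycle
            union(father, ru, rv)
            tree.append((u, v))
            mincost += cost
    return tree, mincost
```

On the seven-vertex example graph above (relabeled 0–6), kruskal accepts six edges with total cost 57.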

Shortest path problems for directed graphs

Let G = (V, E) be a directed graph with nonnegative weights, or costs, assigned to
its edges. We define the length, or cost, of a path in G as the sum of the costs of the
edges comprising the path. Two important path problems on weighted graphs are:

(a) the single source shortest paths (SSSP) problem – determine the cost of the
shortest path from a given vertex, called the source vertex, to every other
vertex in V
(b) the all-pairs shortest paths (APSP) problem – determine the cost of the
shortest path from each vertex to every other vertex in V.

The classical solution to the SSSP problem is called Dijkstra's algorithm, which
is in the same greedy class as Prim's and Kruskal's algorithms. For a graph on n
vertices, we can view the APSP problem as n instances of the SSSP problem. To
solve the APSP problem, therefore, we can apply Dijkstra's algorithm n times,
taking each vertex in turn as the source vertex. However, there is a more direct
solution to the APSP problem called Floyd's algorithm.

We will now study these two algorithms in depth.

Dijkstra’s algorithm for the SSSP problem


Given a weighted digraph G = (V, E) on n vertices labeled 1, 2, 3, ..., n, Dijkstra's
algorithm finds the cost of the shortest path from some source vertex, say k, to some
destination vertex, say l, k <> l, 1 ≤ k, l ≤ n. In the process of determining the cost of the
shortest path from k to l, Dijkstra's algorithm also finds the cost of the shortest path from k
to some (possibly all) of the other vertices in V. In terms of complexity, therefore, the
problem of finding the shortest path from one vertex to another vertex in a directed graph
is not any different from the problem of finding the shortest path from one vertex to every
other vertex in the graph.

The general idea behind Dijkstra’s algorithm may be stated as follows: Each
vertex is assigned a class and a value. A class 1 vertex is a vertex whose shortest
distance from the source vertex, say k, has already been found; a class 2 vertex is a
vertex whose shortest distance from k has yet to be found. The value of a class 1 vertex
is equal to its distance from vertex k along a shortest path; the value of a class 2 vertex is
its shortest distance from vertex k found thus far.
Now, the algorithm:

1. Place vertex k in class 1 and all other vertices in class 2

2. Set the value of vertex k to zero and the value of all other vertices to ∞

3. Do the following until vertex l is placed in class 1:

a. Define the pivot vertex as the vertex most recently placed in class 1

b. Adjust all class 2 nodes in the following way:

i. If a vertex is not connected to the pivot vertex, its value remains the
same.
ii. If a vertex is connected to the pivot vertex, replace its value by the
minimum of its current value and the value of the pivot vertex plus
the distance from the pivot vertex to the vertex in class 2.

c. Choose a class 2 vertex with minimal value and place it in class 1.

Dijkstra’s algorithm is a greedy algorithm. The greedy strategy is applied in step


3.c when the class 2 vertex with the smallest value, say vertex j, is placed next in
class 1. That this is the locally best thing to do hinges on two facts about vertex j:
1. The shortest path from the source vertex k to vertex j passes through
class 1 vertices only (call this a special path).
2. Step 3.b.ii correctly computes the cost of this shortest special path
(which is assigned as the value of vertex j).

Suppose, to the contrary, that there are class 2 vertices in a special path, as
shown in Figure 2. If the hypothetical path s2 + s3 is shorter than s1, then it
must be true that s2 < s1, since s3 cannot be negative if all costs are
nonnegative. Now, if s2 < s1, then vertex x would have been placed in class 1
ahead of vertex j. Since this is not in fact the case, s1 must be shorter
than s2, which means that the shortest path from k to j passes through class 1
vertices only. Note that this argument hinges on the requirement that edge
costs be nonnegative; if there are negative costs, Dijkstra's algorithm will not
work properly.

[Figure: the next class 1 vertex j is a class 2 vertex with minimum value, reached
from k by a shortest path of length s1 whose intermediate vertices are all in class 1;
a hypothetical shorter path to j of length s2 + s3, passing through an intermediate
vertex x not in class 1, cannot exist]

Figure 2. A shortest path passes through class 1 vertices only

In step 3.b.ii of Dijkstra's algorithm, the cost of the shortest path from the source
vertex k to some vertex, say j, in class 2 is computed as

value(j) = minimum[value(j), value(p) + COST(p, j)]          Eq. (1)

where p is the current pivot vertex. Figure 3a shows graphically the meaning of this
formula. Note that in the new, possibly shorter, path from k to j which passes through p,
there is no intermediate vertex in the path from p to j. Could it be that there is a yet
shorter path from k to j passing through p and some intermediate vertex q between p and
j, as shown in Figure 3b? Such a path would in fact be shorter if s2 + s3 < s1. But this
cannot be, since q is an older class 1 node than p. Hence, the only possible shorter path
from k to j passing through p is the one depicted in Figure 3a; hence Eq. (1) is
sufficient.

[Figure: (a) a candidate path from k to j that goes through pivot p, ending with the
edge (p, j) of cost COST(p, j); (b) a hypothetical shorter path from k to j through p
and an older class 1 vertex q, with segment lengths s1, s2 and s3]

Figure 3. Finding the shortest path from k to j

Dijkstra’s algorithm gives the cost of the shortest path from the source vertex k to
the destination vertex l, but it does not tell which edges in E comprise the path. To
construct the path, we can define a vector of size n, say PATH, such that PATJ(i) = j if
vertex I changes value in step 3.b.ii when the pivot vertex is j. By Dijkstra’s algorithm,
vertex j is simply the predecessor of vertex i in the shortest path from vertex k to vertex l.

Section 9 Page 15 of 23
Jennifer Laraya-Llovido
Data Structures & Algorithms

Example: Dijkstra’s algorithm at work


40
1 3 k
30 10
30

2 4 110
10 70
20 50

60
7
40 10
6 5 1
20

VERTEX   CLASS   VALUE   PATH   REMARKS

  1        2       ∞       0
  2        2       ∞       0
  3        1       0       0     First pivot vertex is the source
  4        2       ∞       0     vertex, vertex 3.
  5        2       ∞       0
  6        2       ∞       0
  7        2       ∞       0

  1        2       ∞       0
  2        2      10       3
  3        1       0       0     Next pivot vertex is vertex 2.
  4        2      30       3
  5        2     110       3
  6        2       ∞       0
  7        2       ∞       0

  1        2       ∞       0
  2        1      10       3
  3        1       0       0     Next pivot vertex is vertex 4.
  4        2      30       3
  5        2     110       3
  6        2      70       2
  7        2       ∞       0

  1        2       ∞       0
  2        1      10       3
  3        1       0       0     Next pivot vertex is vertex 6.
  4        1      30       3
  5        2     100       4
  6        2      50       4
  7        2      80       4

  1        2       ∞       0     Next pivot vertex is vertex 5,
  2        1      10       3     i.e., vertex l. Vertex l enters
  3        1       0       0     class 1 next and the algorithm
  4        1      30       3     terminates. Length of shortest
  5        2      70       6     path from 3 to 5 is
  6        1      50       4     VALUE(5) = 70.
  7        2      80       4

The path from vertex 3 to vertex 5 can be determined from the PATH vector by
tracing the predecessor vertices in reverse order starting at vertex 5, thus:

PATH(5) = 6 (destination vertex is 5)
PATH(6) = 4
PATH(4) = 3 (source vertex is 3)

Hence, the shortest path is:

3 --30--> 4 --20--> 6 --20--> 5

The EASY procedure DIJKSTRA implements Dijkstra's algorithm for a directed
graph on n vertices which is represented by its cost adjacency matrix, COST. The
procedure returns true if a shortest path is found from the source vertex k to the end
vertex l; else, it returns false. The cost of the shortest path found is returned in VALUE
and the actual path is encoded in PATH.
procedure DIJKSTRA(COST, VALUE, PATH, k, l)
//Finds the shortest path, and its cost, from vertex k to vertex l in a weighted
digraph G on n vertices labeled 1, 2, 3, ..., n. COST is the cost adjacency matrix
for G. Function value is true if the shortest path is found; else, false.//
array COST(1:n, 1:n), VALUE(1:n), PATH(1:n), CLASS(1:n)
//Initializations//
for i ← 1 to n do
   CLASS(i) ← 2; VALUE(i) ← ∞; PATH(i) ← 0
endfor
CLASS(k) ← 1; VALUE(k) ← 0
p ← k //first pivot vertex is source vertex//
//Perform Dijkstra's algorithm//
while CLASS(l) = 2 do
   minval ← ∞
   for i ← 1 to n do
      if CLASS(i) = 2 then [if COST(p, i) <> ∞ then
                               [newval ← VALUE(p) + COST(p, i)
                                if newval < VALUE(i) then [VALUE(i) ← newval
                                                           PATH(i) ← p]]
                            if VALUE(i) < minval then [j ← i
                                                       minval ← VALUE(i)]]
   endfor
   if minval = ∞ then return(false) //no path exists from k to l//
   CLASS(j) ← 1; p ← j //next pivot node//
endwhile
return(true)
end DIJKSTRA

Note that in the process of calculating the cost of the shortest path from the
source vertex k to the end vertex l, the procedure DIJKSTRA also finds the cost of the
shortest paths from k to the other vertices which entered class 1 ahead of vertex l. Thus,
to solve the SSSP problem in full, we need only modify the condition in the while loop
such that exit from the loop occurs when all vertices are in class 1.

The while loop in procedure DIJKSTRA will be executed n-1 times (if vertex l
enters class 1 last). The inner for loop is executed n times for each outer loop. Hence, the
time complexity of the procedure is O(n²).
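Procedure DIJKSTRA can be transcribed into Python along the following lines (our own sketch, not from the text: vertices are 0-indexed, math.inf plays the role of ∞, and the cost and predecessor vector are returned rather than printed; the helper trace is ours):

```python
import math

def dijkstra(cost, k, l):
    """Cost of the shortest path from k to l in the digraph with cost
    adjacency matrix `cost` (math.inf where no edge exists).
    Returns (value, path) on success, or None if no path exists."""
    n = len(cost)
    class_ = [2] * n                   # 2: shortest distance not yet final
    value = [math.inf] * n             # best distance from k found so far
    path = [None] * n                  # predecessor on that best path
    class_[k], value[k] = 1, 0
    p = k                              # first pivot vertex is the source
    while class_[l] == 2:
        minval, j = math.inf, None
        for i in range(n):
            if class_[i] == 2:
                newval = value[p] + cost[p][i]   # route through pivot p
                if newval < value[i]:
                    value[i], path[i] = newval, p
                if value[i] < minval:            # candidate next pivot
                    minval, j = value[i], i
        if minval == math.inf:
            return None                # no path exists from k to l
        class_[j], p = 1, j            # j enters class 1 and becomes pivot
    return value[l], path

def trace(path, k, l):
    """Recovers the vertex sequence k ... l from the predecessor vector."""
    route = [l]
    while route[-1] != k:
        route.append(path[route[-1]])
    return route[::-1]
```

On the example digraph above (vertices relabeled 0–6, so k = 2 and l = 4), dijkstra returns a cost of 70, and trace recovers the path 3 → 4 → 6 → 5 in 0-indexed form.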

Floyd’s algorithm for the all-pairs shortest paths problem

Given a digraph G = (V, E) on n vertices labeled 1, 2, 3, ..., n and with
nonnegative weights assigned to the edges, Floyd's algorithm finds the cost of the
shortest path between every pair of vertices in V. The main idea behind the algorithm is
to generate a series of matrices Ak, 0 ≤ k ≤ n, with elements defined as follows:

Ak(i, j) = cost of the shortest path from vertex i to vertex j which goes through
no intermediate vertex of index greater than k

By this definition, A0 is simply the cost adjacency matrix for the graph. Subsequently,
each successive Ak is generated using the iterative formula

Ak(i, j) = minimum[Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)]          Eq. (2)

i.e., the cost of the best path from i to j avoiding vertex k, versus the cost of the best
path from i to k followed by the best path from k to j.

For any given pair of vertices i and j, the iterative application of Eq. (2) is
equivalent to systematically considering the other vertices for inclusion in the path from
vertex i to vertex j. If, at the kth iteration, including vertex k in the path from i to j results
in a shorter path, then the cost of this shorter path becomes the new value of Ak(i, j).
Clearly, the nth iteration value of this cost is the cost of the shortest path from vertex i
to vertex j.


Floyd’s algorithm may be stated simply as follows

1. [Initialize] Ao(i, j) Å COST(i, j), 1 ≤ i, j ≤ n


2. [Iterate] Repeat for k = 1, 2, 3, …, n

Ak(i, j) = minimum [Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)], 1 ≤ i, j ≤ n

Then, An(i, j) is the cost of the shortest path from vertex i to vertex j for any 1≤ i, j ≤ n.
Floyd’s algorithm gives the cost of the shortest path between every pair of vertices i
and j, but not the past itself. The intermediate vertices along this shortest can be found by
maintaining an n x n matrix, say PATH, such that

PATH (i, j) = 0 initially, indicating that, initially, the shortest path between i and j is the
edge (i, j), if it exists
= k if, including k in the path from i to j at the kth iteration, yields a shorter
path
Example: Floyd’s algorithm at work
11

10
1 2
2 8

4 2 3 2
3
1 4

4
4 5
2


         A0                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0  10   2   4   ∞      1  0  0  0  0  0
  2  11   0   8   ∞   3      2  0  0  0  0  0
  3   2   8   0   1   4      3  0  0  0  0  0
  4   2   ∞   1   0   2      4  0  0  0  0  0
  5   ∞   2   4   4   0      5  0  0  0  0  0

         A1                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0  10   2   4   ∞      1  0  0  0  0  0
  2  11   0   8  15   3      2  0  0  0  1  0
  3   2   8   0   1   4      3  0  0  0  0  0
  4   2  12   1   0   2      4  0  1  0  0  0
  5   ∞   2   4   4   0      5  0  0  0  0  0

         A2                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0  10   2   4  13      1  0  0  0  0  2
  2  11   0   8  15   3      2  0  0  0  1  0
  3   2   8   0   1   4      3  0  0  0  0  0
  4   2  12   1   0   2      4  0  1  0  0  0
  5  13   2   4   4   0      5  2  0  0  0  0

         A3                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0  10   2   3   6      1  0  0  0  3  3
  2  10   0   8   9   3      2  3  0  0  3  0
  3   2   8   0   1   4      3  0  0  0  0  0
  4   2   9   1   0   2      4  0  3  0  0  0
  5   6   2   4   4   0      5  3  0  0  0  0

         A4                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0  10   2   3   5      1  0  0  0  3  4
  2  10   0   8   9   3      2  3  0  0  3  0
  3   2   8   0   1   3      3  0  0  0  0  4
  4   2   9   1   0   2      4  0  3  0  0  0
  5   6   2   4   4   0      5  3  0  0  0  0

         A5                        PATH
      1   2   3   4   5         1  2  3  4  5
  1   0   7   2   3   5      1  0  5  0  3  4
  2   9   0   7   7   3      2  5  0  5  5  0
  3   2   5   0   1   3      3  0  5  0  0  0
  4   2   4   1   0   2      4  0  5  0  0  0
  5   6   2   4   4   0      5  3  0  0  0  0

The EASY procedure FLOYD implements Floyd's algorithm for a weighted
directed graph on n vertices, with the added feature of encoding the shortest paths found.

procedure FLOYD(COST, A, PATH)
//Finds the shortest path, and its cost, between every pair of vertices in a weighted,
directed graph G on n vertices labeled 1, 2, 3, ..., n. COST is the cost adjacency matrix
for G. Upon return to the calling program, A contains the costs of the shortest paths
between all pairs of vertices, and encoded in PATH are the corresponding shortest
paths.//
array COST(1:n, 1:n), A(1:n, 1:n), PATH(1:n, 1:n)
//Initializations//
for i ← 1 to n do
   for j ← 1 to n do
      A(i, j) ← COST(i, j)
      PATH(i, j) ← 0
   endfor
endfor
//Perform Floyd's algorithm//
for k ← 1 to n do
   for i ← 1 to n do
      for j ← 1 to n do
         aikj ← A(i, k) + A(k, j)
         if aikj < A(i, j) then [A(i, j) ← aikj; PATH(i, j) ← k]
      endfor
   endfor
endfor
end FLOYD

It is clear that the time complexity of Floyd's algorithm is O(n³). Calling DIJKSTRA
n times solves the APSP problem also in O(n³) time, but FLOYD involves less
computational effort.

The EASY procedure PRINTPATH constructs the shortest path for any given pair
of vertices i and j from the PATH matrix generated by FLOYD.

procedure PRINTPATH(i, j)
//Prints the intermediate vertices in the shortest path from vertex i to vertex j. PATH is
the matrix generated by procedure FLOYD.//
array PATH(1:n, 1:n)
k ← PATH(i, j)
if k = 0 then return
else [call PRINTPATH(i, k)
      output k
      call PRINTPATH(k, j)]
end PRINTPATH
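In Python, FLOYD and PRINTPATH might be sketched as follows (our own transcription, not from the text: vertices are 0-indexed, float('inf') marks missing edges, None replaces the 0 entries of PATH, and the print procedure returns the list of intermediate vertices instead of printing them):

```python
def floyd(cost):
    """Floyd's algorithm on a cost adjacency matrix (float('inf') where
    there is no edge). Returns (a, path): a[i][j] is the cost of the
    shortest path from i to j, and path[i][j] is an intermediate vertex
    on that path, or None if none was recorded."""
    n = len(cost)
    a = [row[:] for row in cost]          # A0 is the cost adjacency matrix
    path = [[None] * n for _ in range(n)]
    for k in range(n):                    # admit k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if a[i][k] + a[k][j] < a[i][j]:
                    a[i][j] = a[i][k] + a[k][j]
                    path[i][j] = k
    return a, path

def print_path(path, i, j):
    """Returns the list of intermediate vertices on the shortest path
    from i to j, mirroring the recursive PRINTPATH procedure."""
    k = path[i][j]
    if k is None:
        return []
    return print_path(path, i, k) + [k] + print_path(path, k, j)
```

On the five-vertex example above, the computed matrix agrees with A5: for instance, the cost from vertex 1 to vertex 2 comes out as 7, with intermediate vertices 3, 4 and 5.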


Transitive closure: Warshall’s algorithm

Consider the digraph G = (V, E) and its corresponding adjacency matrix, S. Recall
that the length of a path in G is the number of edges in the path.

[Figure: a digraph on vertices 1–4 with adjacency matrix]

        1 2 3 4
    1   0 1 0 1
    2   0 0 1 0
    3   0 0 0 1
    4   0 1 1 0

Now define a matrix T whose elements are

T(i, j)= 1 (true) if there is a path of length ≥ 1 from vertex i to vertex j

= 0 (false) otherwise

T is called the transitive closure of the adjacency matrix S. It simply indicates the
existence, or nonexistence, of a path of length at least 1 for every pair of vertices i and j
in G. The problem of generating T from S is similar to the problem of generating the least
cost matrix A from the cost adjacency matrix COST in Floyd's algorithm. The algorithm to
generate T from S is called Warshall's algorithm, and is, in fact, an older algorithm than
Floyd's. Warshall's algorithm may be stated as follows:

1. [Initialize] T0(i, j) ← S(i, j), 1 ≤ i, j ≤ n

2. [Iterate] Repeat for k = 1, 2, 3, ..., n

   Tk(i, j) ← Tk-1(i, j) or (Tk-1(i, k) and Tk-1(k, j)), 1 ≤ i, j ≤ n

Then, Tn is the transitive closure of S.

The iterative step in Warshall's algorithm simply tests whether there is a path from
vertex i to vertex j as vertices 1, 2, ..., n are successively considered for inclusion as
intermediate vertices in the path, and sets T(i, j) to true once such a path is found.
Clearly, if there is a path from i to j which contains no intermediate vertices of index
greater than k-1, or if there are paths from i to k and from k to j which contain no
intermediate vertices of index greater than k-1, then there must be a path from i to j which
contains no intermediate vertices of index greater than k. The algorithm may repeatedly
set T(i, j) to true in the course of the iterations.
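A Python sketch of Warshall's algorithm (our own transcription, not from the text, using booleans for the matrix entries and 0-indexed vertices):

```python
def warshall(s):
    """Transitive closure of the boolean adjacency matrix s:
    t[i][j] is True iff there is a path of length >= 1 from i to j."""
    n = len(s)
    t = [[bool(x) for x in row] for row in s]    # T0 = S
    for k in range(n):          # consider k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t
```

Applied to the 4-by-4 matrix S above, every vertex turns out to reach vertices 2, 3 and 4 (via the cycle 2 → 3 → 4 → 2), while no vertex reaches vertex 1.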