
Lecture 5

Graphs
[Part 1]

Lecture Content

1. Graph Basics
1.1 Definitions and Terminologies
1.2 Data Structures Used to Store Graphs
2. Graph Traversal
2.1 Depth-First Search (DFS)
2.2. Breadth-First Search (BFS)

Lecture Content

3. Topological Sorting

[Figure: course-prerequisite DAG; one topologically sorted order: C F B A E D G H]
1. Graph Basics

• A tree generalizes linear structures (e.g., the singly linked list), and a graph generalizes the tree.
1. Graph Basics

• The key difference between a tree and a graph is that, in a graph, there may be more than one path between two nodes.
• Many real-world problems can be modeled using graphs. For example,
- finding the fastest routes for a mass transportation system (answering the question: what is the shortest driving route from city A to city B),
1. Graph Basics

- finding a minimum spanning tree (answering the question: how can computers be connected with the least amount of cable),
- routing electronic mail through a computer network.
1.1 Definitions and Terminologies

• A graph G consists of a set of vertices (also called nodes) V and a set of edges (also called arcs) E that connect the vertices.
• That is, G = (V, E), where V is a set of n vertices {v0, v1, …, vn-1} and E is a set of m edges {e0, e1, …, em-1}.
• Each edge e ∈ E is a pair (u, v), where u, v ∈ V (i.e., e = (u, v)).
1.1 Definitions and Terminologies

• The numbers of vertices and edges of a graph G are denoted |V| and |E|, respectively (i.e., |V| = n and |E| = m).
• If each edge e = (u, v) in G is ordered (i.e., (u, v) ≠ (v, u)), the graph is called a directed graph (also called a digraph). Otherwise, the graph is called an undirected graph.
1.1 Definitions and Terminologies

• If a graph is directed, the in-degree of a vertex is the number of edges entering it.
• The out-degree of a vertex is the number of edges leaving it.
• The degree of a vertex is the sum of its in-degree and out-degree.
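Concretely, all three quantities can be computed in one pass over an edge list. A minimal Python sketch (the three-vertex graph at the bottom is a hypothetical example, and `degrees` is an illustrative name):

```python
# Sketch: in-degree, out-degree, and degree of each vertex of a
# directed graph given as a list of edges (u, v).

def degrees(vertices, edges):
    in_deg = {v: 0 for v in vertices}
    out_deg = {v: 0 for v in vertices}
    for u, v in edges:
        out_deg[u] += 1   # edge leaves u
        in_deg[v] += 1    # edge enters v
    # degree = in-degree + out-degree
    deg = {v: in_deg[v] + out_deg[v] for v in vertices}
    return deg, in_deg, out_deg

deg, ind, outd = degrees(["a", "b", "c"], [("a", "b"), ("a", "c"), ("c", "b")])
# b has in-degree 2 and out-degree 0, so its degree is 2
```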
1.1 Definitions and Terminologies

• If each edge e = (u, v) in G has a weight w (also called a cost or length), the graph is called a weighted graph.
• Vertex v is adjacent to (also called a neighbor of) vertex u if there is an edge from u to v.
• A path in a graph is a sequence of vertices connected by edges.
1.1 Definitions and Terminologies

• In unweighted graphs, a path's length is the number of edges on the path.
• The distance between two vertices is the length of the shortest path between them.
• A weighted path length is the sum of the weights (costs or lengths) of the edges on the path.
1.1 Definitions and Terminologies

• If |E| = Θ(|V|²), then the graph G is called a dense graph.
• If |E| = Θ(|V|), then the graph G is called a sparse graph.
• A cycle is a path from a vertex back to itself.
1.1 Definitions and Terminologies

• A graph with no cycle is called an acyclic graph. A directed acyclic graph is called a DAG.
• A graph in which every pair of vertices is connected by a path is said to be connected.
• Let G be a simple graph (i.e., no parallel edges and no self-loops) with n vertices and m edges. If G is undirected, then m ≤ n(n − 1)/2. If G is directed, then m ≤ n(n − 1).
1.2 Data Structures Used to Store Graphs

• A graph can be stored using an adjacency matrix (i.e., a two-dimensional array, also called a neighbor matrix) or an adjacency list (i.e., an array of singly linked lists).
• Normally, dense graphs are represented using an adjacency matrix and sparse graphs using an adjacency list.
Adjacency Matrix

• An adjacency matrix (e.g., int adjMat[][]) is a two-dimensional array in which the elements indicate whether an edge is present between two vertices. If a graph has n vertices, an n×n adjacency matrix is needed to store the graph.
Adjacency Matrix

• Example: Consider the following graph

Adjacency Matrix

• The adjacency matrix for the graph is as follows.

Adjacency List

• An adjacency list is an array of singly linked lists. Each individual list shows which vertices a given vertex is adjacent to.
• The adjacency lists for the graph are given next.
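The two representations can be sketched side by side in Python (the 4-vertex edge list is a hypothetical example, and `adj_mat`/`adj_list` are illustrative names):

```python
# Sketch: building both graph representations for an undirected graph
# with n vertices labeled 0..n-1, given an edge list.

n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]   # hypothetical example edges

# Adjacency matrix: adj_mat[u][v] == 1 iff edge (u, v) exists.
adj_mat = [[0] * n for _ in range(n)]
for u, v in edges:
    adj_mat[u][v] = adj_mat[v][u] = 1      # undirected: symmetric

# Adjacency list: one list of neighbors per vertex.
adj_list = [[] for _ in range(n)]
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)

# The matrix uses O(n^2) space; the lists use O(n + m) space,
# which is the reason behind the dense/sparse rule of thumb above.
```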

2. Graph Traversal

• Three traversals of a tree are preorder, inorder, and postorder. Tree traversal always starts at the root of the tree.
• Two traversals of a graph are depth-first search (DFS) and breadth-first search (BFS). Since a graph has no root, we must specify a vertex at which to begin a traversal.
• Depth-first search is essentially a generalization of the preorder traversal of a rooted tree.
2.1 Depth-First Search (DFS)

• Example: List the order in which the nodes of the undirected graph shown in the figure below are visited by a depth-first traversal that starts from vertex a. Assume that we choose to visit adjacent vertices in alphabetical order.
2.1 Depth-First Search (DFS)

Algorithm DFS // M.H. Alsuwaiyel
Input: A directed or undirected graph G = (V, E).
Output: Numbering of the vertices in depth-first search order.
1. predfn ← 1; postdfn ← 1
2. for each vertex v ∈ V
3.     mark v unvisited
4. end for
5. for each vertex v ∈ V
6.     if v is marked unvisited then dfs(v) // starting vertex
7. end for
2.1 Depth-First Search (DFS)

Procedure dfs(v) // v is the starting vertex; uses a stack
1. S ← {v} // push v onto the stack
2. mark v visited
3. while S ≠ {}
4.     v ← Peek(S) // v is the current vertex
5.     find an unvisited neighbor w of v
6.     if w exists then
7.         Push(w, S)
8.         mark w visited
9.         predfn ← predfn + 1
10.    else
11.        Pop(S); postdfn ← postdfn + 1
12.    end if
13. end while
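The stack-based procedure above can be sketched as runnable Python. This version records the visit order instead of the predfn/postdfn counters, and tries neighbors in sorted order, matching the slides' "alphabetical order" convention (the graph `g` is a hypothetical example):

```python
# Sketch of the stack-based dfs procedure above (adjacency-list input).

def dfs(graph, start):
    visited, order = {start}, [start]
    stack = [start]                      # S <- {v}
    while stack:
        v = stack[-1]                    # Peek(S): current vertex
        w = next((u for u in sorted(graph[v]) if u not in visited), None)
        if w is not None:                # unvisited neighbor exists: go deeper
            stack.append(w)
            visited.add(w)
            order.append(w)
        else:                            # dead end: backtrack (Pop)
            stack.pop()
    return order

g = {"a": ["b", "g"], "b": ["a", "c"], "c": ["b"], "g": ["a"]}  # hypothetical graph
dfs(g, "a")  # visits a, b, c, backtracks to a, then visits g
```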
2.1 Depth-First Search (DFS)

• The stack contents during DFS are given below.

2.1 Depth-First Search (DFS)

• The order in which the nodes are visited by a DFS that starts from vertex a is a, b, c, d, e, f, g, h, i, j.
• The resulting tree (i.e., the depth-first search tree) is shown next.
2.1 Depth-First Search (DFS)

• Tree edges: edges in the depth-first search tree. An edge (v, w) is a tree edge if w was first visited when exploring the edge (v, w).
• Back edges: all other edges.
2.1 Depth-First Search (DFS)

In depth-first search traversal of directed graphs, however, the edges of G are classified into four types:
• Tree edges: edges in the depth-first search tree. An edge (v, w) is a tree edge if w was first visited when exploring the edge (v, w).
• Back edges: edges of the form (v, w) such that w is an ancestor of v in the depth-first search tree (constructed so far) and vertex w was marked visited when (v, w) was explored.
2.1 Depth-First Search (DFS)

• Forward edges: edges of the form (v, w) such that w is a descendant of v in the depth-first search tree (constructed so far) and vertex w was marked visited when (v, w) was explored.
• Cross edges: all other edges.
2.1 Depth-First Search (DFS)

Procedure dfs(v) // v is the starting vertex; uses recursion
1. mark v visited
2. predfn ← predfn + 1
3. for each edge (v, w) ∈ E
4.     if w is marked unvisited then dfs(w)
5. end for
6. postdfn ← postdfn + 1
2.2 Breadth-First Search (BFS)

• The depth-first search algorithm gets as far away from the starting point as quickly as possible. DFS is implemented using a stack.
• In contrast, the breadth-first search algorithm stays as close as possible to the starting point. BFS visits all the vertices adjacent to the starting vertex, and then goes further afield. BFS is implemented using a queue.
• The level-order traversal of a tree is an example of breadth-first traversal.
2.2 Breadth-First Search (BFS)

• Example: List the order in which the nodes of the undirected graph shown in the figure below are visited by a breadth-first traversal that starts from vertex a. Assume that we choose to visit adjacent vertices in alphabetical order.
2.2 Breadth-First Search (BFS)

Algorithm BFS // M.H. Alsuwaiyel
Input: A directed or undirected graph G = (V, E).
Output: Numbering of the vertices in BFS order.
1. bfn ← 1
2. for each vertex v ∈ V
3.     mark v unvisited
4. end for
5. for each vertex v ∈ V
6.     if v is marked unvisited then bfs(v) // starting vertex
7. end for
2.2 Breadth-First Search (BFS)

Procedure bfs(v) // v is the starting vertex; uses a queue
1. Q ← {v} // insert v into the queue
2. mark v visited
3. while Q ≠ {}
4.     v ← dequeue(Q) // v is the current vertex
5.     for each edge (v, w) ∈ E
6.         if w is marked unvisited then
7.             enqueue(w, Q)
8.             mark w visited
9.             bfn ← bfn + 1
10.        end if
11.    end for
12. end while
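The bfs procedure above can be sketched as runnable Python, again recording the visit order and taking neighbors in sorted order as the slides' examples do (the graph `g` is a hypothetical example):

```python
from collections import deque

# Sketch of the queue-based bfs procedure above (adjacency-list input).

def bfs(graph, start):
    visited, order = {start}, [start]
    q = deque([start])                   # Q <- {v}
    while q:
        v = q.popleft()                  # dequeue: current vertex
        for w in sorted(graph[v]):
            if w not in visited:         # enqueue each unvisited neighbor
                q.append(w)
                visited.add(w)
                order.append(w)
    return order

g = {"a": ["b", "g"], "b": ["a", "c"], "c": ["b"], "g": ["a"]}  # hypothetical graph
bfs(g, "a")  # visits a, then its neighbors b and g, then c
```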
2.2 Breadth-First Search (BFS)

• The queue contents during BFS are given below.

2.2 Breadth-First Search (BFS)

• The order in which the nodes are visited by a BFS that starts from vertex a is a, b, g, c, f, h, d, e, i, j.
• The resulting tree (i.e., the breadth-first search tree) is shown next.
2.2 Breadth-First Search (BFS)

• Tree edges: edges in the breadth-first search tree. An edge (v, w) is a tree edge if w was first visited when exploring the edge (v, w).
• Cross edges: all other edges.
3. Topological Sorting

• Given a directed acyclic graph (dag for short) G = (V, E), the problem of topological sorting is to find a linear ordering of its vertices in such a way that if (v, w) ∈ E, then v appears before w in the ordering.
3. Topological Sorting

• For example, one possible topological sorting of the vertices in the dag shown in figure (a) above is b, d, a, c, f, e, g (or a, b, d, c, e, f, g).
• We will assume that the dag has only one vertex, say s, of in-degree 0. If not, we may simply add a new vertex s and edges from s to all vertices of in-degree 0.
3. Topological Sorting

• Next, we simply carry out a depth-first search on G starting at vertex s.
• When the traversal is complete, the values of the counter postdfn define a reverse topological ordering of the vertices in the dag.
• Thus, to obtain the ordering, we may add an output step to Algorithm DFS just after the counter postdfn is incremented. The resulting output is reversed to obtain the desired topological ordering.
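The steps above (record each vertex as its DFS call finishes, then reverse that order) can be sketched as follows; the DAG `dag` is a hypothetical example, and the sketch loops over all vertices rather than adding an artificial source s:

```python
# Sketch: topological sort by reversing the DFS finishing order.
# Assumes the input is a DAG given as an adjacency list.

def topological_sort(graph):
    visited, postorder = set(), []

    def dfs(v):
        visited.add(v)
        for w in graph[v]:
            if w not in visited:
                dfs(w)
        postorder.append(v)          # "output just after postdfn is incremented"

    for v in graph:                  # handles several in-degree-0 vertices
        if v not in visited:
            dfs(v)
    return postorder[::-1]           # reverse to get the topological order

dag = {"a": ["c"], "b": ["c", "d"], "c": ["e"], "d": [], "e": []}  # hypothetical DAG
order = topological_sort(dag)
# every edge (v, w) has v before w in `order`
```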
Exercises

1. Write a complete program to implement DFS.
2. Write a complete program to implement BFS.
3. Modify the DFS program to find the topologically sorted order of a given dag.
Exercises

4. List the order in which the nodes of the undirected graph shown in the figure below are visited by a depth-first traversal that starts from vertex a. Repeat this exercise for a depth-first traversal starting from vertex d.
Exercises

5. List the order in which the nodes of the undirected graph shown in the figure below are visited by a breadth-first traversal that starts from vertex a. Repeat this exercise for a breadth-first traversal starting from vertex d.
References

1. Noel Kalicharan. 2008. Data Structures in Java. CreateSpace. ISBN 143827517X. Chapter 5.
2. Noel Kalicharan. 2008. Data Structures in C. CreateSpace Press. ISBN 1438253273. Chapter 7.
3. Robert Lafore. 2002. Data Structures and Algorithms in Java. 2nd ed. SAMS. ISBN 0672324539.
References

4. M.H. Alsuwaiyel. 1999. Algorithms Design Techniques and Analysis. World Scientific Publishing. ISBN 9810237405. Chapters 1, 4, 6.
5. Anany V. Levitin. 2011. Introduction to the Design and Analysis of Algorithms. 3rd ed. Addison-Wesley. ISBN 0132316811.
CS161, Winter 2011 Handout #16

Notes on Strongly Connected Components


Recall from Section 3.5 of the Kleinberg-Tardos book that the strongly connected components of a directed
graph G are the equivalence classes of the following equivalence relation: u ∼ v if and only if there is a directed
u → v path and also there is a directed v → u path. (Check that this is indeed an equivalence relation.) For
example, in the directed graph in Figure 1, the strongly connected components are identified by the dashed
circles.

Figure 1: The strongly connected components of a directed graph.

1 The Algorithm
Goal of Lecture: to give a linear-time (i.e., O(m+n)-time) algorithm that computes the strongly connected
components of a directed graph.
The algorithm we present is essentially two passes of depth-first search, plus some extremely clever
additional book-keeping. The algorithm is described in a top-down fashion in Figures 2–4.

Input: a directed graph G = (V, E), in adjacency list representation. Assume that the vertices V are labeled
1, 2, 3, . . . , n.
1. Let Grev denote the graph G after the orientation of all arcs have been reversed.
2. Run the DFS-Loop subroutine on Grev , processing vertices according to the given order, to obtain a
finishing time f (v) for each vertex v ∈ V .
3. Run the DFS-Loop subroutine on G, processing vertices in decreasing order of f (v), to assign a leader
to each vertex v ∈ V .
4. The strongly connected components of G correspond to vertices of G that share a common leader.

Figure 2: The top level of our SCC algorithm. The f -values and leaders are computed in the first and second
calls to DFS-Loop, respectively (see below).

Input: a directed graph G = (V, E), in adjacency list representation.
1. Initialize a global variable t to 0.
[This keeps track of the number of vertices that have been fully explored.]
2. Initialize a global variable s to NULL.
[This keeps track of the vertex from which the last DFS call was invoked.]
3. For i = n downto 1:

[In the first call, vertices are labeled 1, 2, . . . , n arbitrarily. In the second call, vertices are labeled by
their f (v)-values from the first call.]

(a) if i not yet explored:


i. set s := i
ii. DFS(G, i)

Figure 3: The DFS-Loop subroutine.

Input: a directed graph G = (V, E), in adjacency list representation, and a source vertex i ∈ V .
1. Mark i as explored.
[It remains explored for the entire duration of the DFS-Loop call.]
2. Set leader(i) := s
3. For each arc (i, j) ∈ G:

(a) if j not yet explored:


i. DFS(G, j)

4. t++
5. Set f (i) := t

Figure 4: The DFS subroutine. The f -values only need to be computed during the first call to DFS-Loop, and
the leader values only need to be computed during the second call to DFS-Loop.

As we’ve seen, each invocation of DFS-Loop can be implemented in linear time. You should think
about how to implement the remaining details of the algorithm so that its overall running time is linear (i.e.,
O(m + n)).
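A runnable sketch of the two-pass algorithm described in Figures 2–4, under the assumption of a small adjacency-list input (plain recursion rather than the book-keeping needed for a tight linear-time implementation):

```python
# Sketch of the SCC algorithm: pass 1 computes a finishing order on the
# reversed graph; pass 2 runs DFS on G in decreasing finishing order,
# grouping each DFS call's vertices under one leader.

def scc(graph):
    rev = {v: [] for v in graph}
    for v, ws in graph.items():
        for w in ws:
            rev[w].append(v)             # G with every arc reversed

    finish, seen = [], set()
    def dfs1(v):                         # first DFS-Loop, on G_rev
        seen.add(v)
        for w in rev[v]:
            if w not in seen:
                dfs1(w)
        finish.append(v)                 # f(v) = position in this list
    for v in graph:
        if v not in seen:
            dfs1(v)

    leader, seen = {}, set()
    def dfs2(v, s):                      # second DFS-Loop, on G
        seen.add(v)
        leader[v] = s                    # leader(v) := s
        for w in graph[v]:
            if w not in seen:
                dfs2(w, s)
    for v in reversed(finish):           # decreasing order of f(v)
        if v not in seen:
            dfs2(v, v)

    comps = {}
    for v, s in leader.items():
        comps.setdefault(s, set()).add(v)
    return list(comps.values())

g = {1: [2], 2: [3], 3: [1, 4], 4: []}   # hypothetical: {1,2,3} is one SCC
```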

2 An Example
But why on earth should this algorithm work? An example should increase its plausibility (though it
certainly doesn’t constitute a proof of correctness). Figure 5(a) displays a reversed graph Grev , with its
vertices numbered arbitrarily, and the f -values computed in the first call to DFS-Loop. In more detail, the
first DFS is initiated at node 9. The search must proceed next to node 6. DFS then has to make a choice
between two different adjacent nodes; we have shown the f -values that ensue when DFS visits node 3 before
node 8.[1] When DFS visits node 3 it gets stuck; at this point node 3 is assigned a finishing time of 1. DFS
backtracks to node 6, proceeds to node 8, then node 2, and then node 5. DFS then backtracks all the way
back to node 9, resulting in nodes 5, 2, 8, 6, and 9 receiving the finishing times 2, 3, 4, 5, and 6, respectively.
Execution returns to DFS-Loop, and the next (and final) call to DFS begins at node 7.
Figure 5(b) shows the original graph (with all arcs now unreversed), with nodes labeled with their finishing
times. The magic of the algorithm is now evident, as the SCCs of G present themselves to us in order: the
first call to DFS discovers the nodes 7–9 (with leader 9); the second the nodes 1, 5, and 6 (with leader 6);
and the third the remaining three nodes (with leader 4).

[Figure 5: Example execution of the strongly connected components algorithm. In (a), the first DFS-Loop on Grev: nodes are labeled arbitrarily and their finishing times are shown, namely f(3) = 1, f(5) = 2, f(2) = 3, f(8) = 4, f(6) = 5, f(9) = 6, f(1) = 7, f(4) = 8, f(7) = 9. In (b), the second DFS-Loop on G: nodes are labeled by their finishing times, and the three SCCs receive leaders 9, 6, and 4.]

3 Proof of Correctness
3.1 The Acyclic Meta-Graph of SCCs
First, observe that the strongly connected components of a directed graph form an acyclic “meta-graph”,
where the meta-nodes correspond to the SCCs C1 , . . . , Ck , and there is an arc Ch → Cℓ with h ≠ ℓ if and
only if there is at least one arc (i, j) in G with i ∈ Ch and j ∈ Cℓ . This directed graph must be acyclic:
since within a SCC you can get from anywhere to anywhere else on a directed path, in a purported directed
cycle of SCCs you can get from every node in a constituent SCC to every other node of every other SCC
in the cycle. Thus the purported cycle of SCCs is actually just a single SCC. Summarizing, every directed
graph has a useful “two-tier” structure: zooming out, one sees a DAG on the SCCs of the graph; zooming
in on a particular SCC exposes its finer-grained structure. For example, the meta-graphs corresponding to
the directed graphs in Figures 1 and 5(b) are shown in Figure 6.

3.2 The Key Lemma


Correctness of the algorithm hinges on the following key lemma.
[1] Different choices of which node to visit next generate different sets of f-values, but our proof of correctness will apply to all ways of resolving these choices.

[Figure 6: The DAGs of the SCCs of the graphs in Figures 1 and 5(b), respectively: (a) the SCC graph for Figure 1, with meta-nodes C1, C2, C3, C4; (b) the SCC graph for Figure 5(b), with meta-nodes C1, C2, C3.]

Key Lemma: Consider two “adjacent” strongly connected components of a graph G: components C1
and C2 such that there is an arc (i, j) of G with i ∈ C1 and j ∈ C2 . Let f (v) denote the finishing time of
vertex v in some execution of DFS-Loop on the reversed graph Grev . Then

max_{v ∈ C1} f(v) < max_{v ∈ C2} f(v).

Proof of Key Lemma: Consider two adjacent SCCs C1 and C2 , as they appear in the reversed graph Grev
— where there is an arc (j, i), with j ∈ C2 and i ∈ C1 (Figure 7). Because the equivalence relation defining
the SCCs is symmetric, G and Grev have the same SCCs; thus C1 and C2 are also SCCs of Grev . Let v
denote the first vertex of C1 ∪ C2 visited by DFS-Loop in Grev . There are now two cases.
First, suppose that v ∈ C1 (Figure 7(a)). Since there is no non-trivial cycle of SCCs (Section 3.1), there
is no directed path from v to C2 in Grev . Since DFS discovers everything reachable and nothing more, it
will finish exploring all vertices in C1 without reaching any vertices in C2 . Thus, every finishing time in C1
will be smaller than every finishing time in C2 , and this is even stronger than the assertion of the lemma.
(Cf., the left and middle SCCs in Figure 5.)
Second, suppose that v ∈ C2 (Figure 7(b)). Since DFS discovers everything reachable and nothing more,
the call to DFS at v will finish exploring all of the vertices in C1 ∪ C2 before ending. Thus, the finishing
time of v is the largest amongst vertices in C1 ∪ C2 , and in particular is larger than all finishing times in C1 .
(Cf., the middle and right SCCs in Figure 5.) This completes the proof.

(a) All f -values in C1 smaller than in C2 (b) v has the largest f -value in C1 ∪ C2

Figure 7: Proof of Key Lemma. Vertex v is the first in C1 ∪ C2 visited during the execution of DFS-Loop
on Grev .

3.3 The Final Argument


The Key Lemma says that traversing an arc from one SCC to another (in the original, unreversed graph)
strictly increases the maximum f -value of the current SCC. For example, if fi denotes the largest f -value of
a vertex in Ci in Figure 6(a), then we must have f1 < f2 , f3 < f4 . Intuitively, when DFS-Loop is invoked

on G, processing vertices in decreasing order of finishing times, the successive calls to DFS peel off the SCCs
of the graph one at a time, like layers of an onion.
We now formally prove correctness of our algorithm for computing strongly connected components.
Consider the execution of DFS-Loop on G. We claim that whenever DFS is called on a vertex v, the
vertices explored — and assigned a common leader — by this call are precisely those in v’s SCC in G. Since
DFS-Loop eventually explores every vertex, this claim implies that the SCCs of G are precisely the groups
of vertices that are assigned a common leader.
We proceed by induction. Let S denote the vertices already explored by previous calls to DFS (initially
empty). Inductively, the set S is the union of zero or more SCCs of G. Suppose DFS is called on a vertex v
and let C denote v’s SCC in G. Since the SCCs of a graph are disjoint, S is the union of SCCs of G,
and v ∉ S, no vertices of C lie in S. Thus, this call to DFS will explore, at the least, all vertices of C.
By the Key Lemma, every outgoing arc (i, j) from C leads to some SCC C ′ that contains a vertex w with
a finishing time larger than f (v). Since vertices are processed in decreasing order of finishing time, w has
already been explored and belongs to S; since S is the union of SCCs, it must contain all of C ′ . Summarizing,
every outgoing arc from C leads directly to a vertex that has already been explored. Thus this call to DFS
explores the vertices of C and nothing else. This completes the inductive step and the proof of correctness.

Lecture 7: Minimum Spanning Trees and
Prim’s Algorithm
CLRS Chapter 23

Outline of this Lecture

Spanning trees and minimum spanning trees.

The minimum spanning tree (MST) problem.

The generic algorithm for MST problem.

Prim’s algorithm for the MST problem.

– The algorithm

– Correctness

– Implementation + Running Time

Spanning Trees

Spanning Trees: A subgraph T of an undirected graph G = (V, E) is a spanning tree of G if it is a tree and contains every vertex of G.

Example:

[Figure: a graph on vertices a, b, c, d, e, and three of its spanning trees (spanning tree 1, spanning tree 2, spanning tree 3).]
Spanning Trees

Theorem: Every connected graph has a spanning tree.

Question: Why is this true?

Question: Given a connected graph G, how can you find a spanning tree of G?
Weighted Graphs

Weighted Graphs: A weighted graph is a graph in which each edge has a weight (some real number).

Weight of a Graph: The sum of the weights of all edges.

Example:

[Figure: a weighted graph on vertices a–e (edge weights 7, 9, 10, 23, 32) and three spanning trees: Tree 1 with w = 74, Tree 2 with w = 71 (the minimum spanning tree), and Tree 3 with w = 72.]
Minimum Spanning Trees

A Minimum Spanning Tree in an undirected connected weighted graph is a spanning tree of minimum weight (among all spanning trees).

Example:

[Figure: the same weighted graph and three spanning trees as before; Tree 2, with w = 71, is the minimum spanning tree.]
Minimum Spanning Trees

Remark: The minimum spanning tree may not be unique. However, if the weights of all the edges are pairwise distinct, it is indeed unique (we won’t prove this now).

Example:

[Figure: a weighted graph with edge weights 1, 2, 2, 24, 67, 100, and two different minimum spanning trees MST1 and MST2.]
Minimum Spanning Tree Problem

MST Problem: Given a connected weighted undirected graph G, design an algorithm that outputs a minimum spanning tree (MST) of G.

Question: What is the most intuitive way to solve this?

Generic approach: A tree is an acyclic graph. The idea is to start with an empty graph and try to add edges one at a time, always making sure that what is built remains acyclic. If we are sure that, every time, the resulting graph is a subset of some minimum spanning tree, we are done.
Generic Algorithm for MST problem

Let A be a set of edges such that A ⊆ T, where T is a MST. An edge (u, v) is a safe edge for A if A ∪ {(u, v)} is also a subset of some MST.

If at each step we can find a safe edge (u, v), we can ’grow’ a MST. This leads to the following generic approach:

Generic-MST(G, w)
    Let A = EMPTY;
    while A does not form a spanning tree
        find an edge (u, v) that is safe for A
        add (u, v) to A
    return A

How can we find a safe edge?
How to find a safe edge

We first give some definitions. Let G = (V, E) be a connected and undirected graph. We define:

Cut: A cut (S, V \ S) of G is a partition of V.

Cross: An edge (u, v) ∈ E crosses the cut (S, V \ S) if one of its endpoints is in S and the other is in V \ S.

Respect: A cut respects a set A of edges if no edge in A crosses the cut.

Light edge: An edge is a light edge crossing a cut if its weight is the minimum of any edge crossing the cut.
How to find a safe edge

Lemma

Let G = (V, E) be a connected, undirected graph with a real-valued weight function w defined on E. Let A be a subset of E that is included in some minimum spanning tree for G, let (S, V \ S) be any cut of G that respects A, and let (u, v) be a light edge crossing the cut (S, V \ S). Then, edge (u, v) is safe for A.

It means that we can find a safe edge by

1. first finding a cut that respects A,

2. then finding the light edge crossing that cut.

That light edge is a safe edge.
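Step 2 of this recipe, finding the light edge crossing a given cut, can be sketched directly (the weighted edge list and the cut below are hypothetical examples):

```python
# Sketch: given a cut (S, V \ S) and a weighted edge list, pick the
# light edge: a minimum-weight edge with endpoints on both sides.

def light_edge(edges, S):
    # an edge crosses the cut iff exactly one endpoint is in S
    crossing = [(w, u, v) for u, v, w in edges if (u in S) != (v in S)]
    w, u, v = min(crossing)          # minimum weight among crossing edges
    return (u, v, w)

edges = [("a", "b", 4), ("a", "c", 8), ("b", "d", 9), ("c", "d", 2)]  # hypothetical
light_edge(edges, {"a"})  # cut ({a}, {b,c,d}): the light edge is (a, b, 4)
```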
Proof

1. Let A ⊆ T, where T is a MST. Suppose (u, v) ∉ T.

2. The trick is to construct another MST T′ that contains both A and (u, v), thereby showing that (u, v) is a safe edge for A.
3. Since u and v are on opposite sides of the cut (S, V \ S), there is at least one edge on the path in T from u to v that crosses the cut. Let (x, y) be such an edge. Since the cut respects A, (x, y) ∉ A. Since (u, v) is a light edge crossing the cut, we have w(u, v) ≤ w(x, y).

[Figure: the MST T and another MST T′; a cut respecting A separates x from y and u from v.]
4. Add (u, v) to T; it creates a cycle. By removing an edge from the cycle, it becomes a tree again. In particular, we remove (x, y) to make a new tree T′.

5. The weight of T′ is w(T′) = w(T) − w(x, y) + w(u, v) ≤ w(T).

6. Since T is a MST, we must have w(T) ≤ w(T′); hence w(T′) = w(T) and T′ is also a MST.

7. Since A ∪ {(u, v)} is also a subset of T′ (a MST), (u, v) is safe for A.
Prim’s Algorithm

The generic algorithm gives us an idea of how to ’grow’ a MST.

If you read the theorem and the proof carefully, you will notice that the choice of a cut (and hence the corresponding light edge) in each iteration is immaterial. We can select any cut (that respects the selected edges) and find the light edge crossing that cut to proceed.

Prim’s algorithm makes a natural choice of the cut in each iteration: it grows a single tree and adds a light edge in each iteration.
Prim’s Algorithm : How to grow a tree

Grow a Tree

Start by picking any vertex r to be the root of the tree.

While the tree does not contain all vertices in the graph, find the shortest edge leaving the tree and add it to the tree.

Running time: O((n + m) log n) with a binary-heap priority queue (derived below).
More Details

Step 0: Choose any element r ∈ V; set S = {r} and A = ∅. (Take r as the root of our spanning tree.)

Step 1: Find a lightest edge such that one endpoint is in S and the other is in V \ S. Add this edge to A and its (other) endpoint to S.

Step 2: If V \ S = ∅, then stop and output the (minimum) spanning tree (S, A). Otherwise go to Step 1.

The idea: expand the current tree by adding the lightest (shortest) edge leaving it and its endpoint.

[Figure: a partial tree rooted at r is grown by adding the lightest edge leaving it (the new edge) together with its endpoint.]
Prim’s Algorithm

Worked Example

[Figure: a connected weighted graph on vertices a, b, c, d, e, f, g; the same graph is shown at each step below.]

Step 0
S = {a}
V \ S = {b,c,d,e,f,g}
lightest edge = {a,b}
Prim’s Algorithm

Prim’s Example – Continued

Step 1.1: lightest edge = {a,b}; add it. Now S = {a,b}; V \ S = {c,d,e,f,g}; A = {{a,b}}.
Step 1.2: lightest edges = {b,d}, {a,c}; {b,d} is added. Now S = {a,b,d}; V \ S = {c,e,f,g}; A = {{a,b},{b,d}}.
Step 1.3: lightest edge = {d,c}; add it. Now S = {a,b,c,d}; V \ S = {e,f,g}; A = {{a,b},{b,d},{c,d}}.
Step 1.4: lightest edge = {c,f}; add it. Now S = {a,b,c,d,f}; V \ S = {e,g}; A = {{a,b},{b,d},{c,d},{c,f}}.
Step 1.5: lightest edge = {f,g}; add it. Now S = {a,b,c,d,f,g}; V \ S = {e}; A = {{a,b},{b,d},{c,d},{c,f},{f,g}}.
Step 1.6: lightest edge = {f,e}; add it. Now S = {a,b,c,d,e,f,g}; V \ S = {}; A = {{a,b},{b,d},{c,d},{c,f},{f,g},{f,e}}. MST completed.
Recall Idea of Prim’s Algorithm

Step 0: Choose any element r ∈ V and set S = {r} and A = ∅. (Take r as the root of our spanning tree.)

Step 1: Find a lightest edge such that one endpoint is in S and the other is in V \ S. Add this edge to A and its (other) endpoint to S.

Step 2: If V \ S = ∅, then stop and output the minimum spanning tree (S, A). Otherwise go to Step 1.

Questions:

Why does this produce a Minimum Spanning Tree?

How does the algorithm find the lightest edge and update A efficiently?

How does the algorithm update S efficiently?
Prim’s Algorithm

Question: How does the algorithm update S efficiently?

Answer: Color the vertices. Initially all are white. Change the color to black when the vertex is moved to S. Use color[v] to store the color.

Question: How does the algorithm find the lightest edge and update A efficiently?

Answer:
(a) Use a priority queue to find the lightest edge.
(b) Use pred[v] to update A.
Reviewing Priority Queues

A priority queue is a data structure (it can be implemented as a heap) which supports the following operations:

insert(u, key): insert u with the key value key into Q.

u = extractMin(): extract the item with the minimum key value from Q.

decreaseKey(u, new-key): decrease u’s key value to new-key.

Remark: Priority queues can be implemented so that each operation takes O(log n) time. See CLRS!
Using a Priority Queue to Find the Lightest Edge

Each item of the queue is a triple (u, key[u], pred[u]), where
- u is a vertex in V \ S,
- key[u] is the weight of the lightest edge from u to any vertex in S, and
- pred[u] is the endpoint of this edge in S.
The pred array is used to build the MST tree.

[Figure: before adding vertex f, key[f] = 8 (pred[f] = e), key[g] = 16 (pred[g] = c), key[h] = 24 (pred[h] = b), and key[i] = ∞ (pred[i] = nil); f has the minimum key. After adding the new edge and vertex f, key[v] and pred[v] are updated for each vertex v adjacent to f; in particular, key[i] = 23 and pred[i] = f.]
Description of Prim’s Algorithm

Remark: is given by adjacency lists. The vertices in 
are stored in a priority queue with key=value of lightest edge to
vertex in .

Prim(
 )
for
 
     
each initialize
 ;
  
  ;
 

  

 
   ; ! start at root
 ;
"  "
new
"
PriQueue(  ); put vertices in
while(
 "$#
is nonempty) until all vertices in MST
u= extraxtMin(); )(* lightest edge
&
%  '
for
 each (
    )  

if-(( .  %! )&&(  %! ,+ )) %!
%/  %! ; - . new lightest edge
"$#
decreaseKey( % %! );
 0 
 %!  ;

  1 2
 


"  
When the algorithm terminates, and the MST is
3     0  #
%  4/%&
%! $   
The pred pointers define the MST as an inverted tree
rooted at .
28
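A minimal Python sketch of the pseudocode above (the names prim_mst and adj are illustrative; as before, decreaseKey is simulated by pushing duplicate heap entries and skipping stale ones):

```python
import heapq

def prim_mst(adj, r):
    """Prim's algorithm sketch. adj: {u: [(v, w), ...]} for an undirected
    weighted graph; r: root vertex. Returns the pred mapping."""
    key = {u: float('inf') for u in adj}
    pred = {u: None for u in adj}
    color = {u: 'white' for u in adj}      # white = not yet in the MST
    key[r] = 0
    pq = [(0, r)]
    while pq:
        k, u = heapq.heappop(pq)
        if color[u] == 'black' or k > key[u]:
            continue                       # stale heap entry, skip it
        color[u] = 'black'                 # u joins the MST
        for v, w in adj[u]:
            if color[v] == 'white' and w < key[v]:
                key[v] = w                 # new lightest edge to v
                pred[v] = u
                heapq.heappush(pq, (w, v))
    return pred
```

The returned pred pointers define the MST as an inverted tree rooted at r, matching the remark on the previous slide.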
Example for Running Prim’s Algorithm

[Figure: weighted graph on vertices a, b, c, d, e, f with edge
weights 1, 1, 2, 3, 3, 4, 4, 5, 10; the trace table below is to be
filled in while running the algorithm.]

u        a   b   c   d   e   f
key[u]
pred[u]
29
Analysis of Prim’s Algorithm

Let n = |V| and m = |E|. The data structure PriQueue
supports the following two operations: (See CLRS)

O(log n) to extract each vertex from the queue.
Done once for each vertex: O(n log n).

O(log n) time to decrease the key value of a neigh-
boring vertex.
Done at most once for each edge: O(m log n).

Total cost is then

O((n + m) log n)
30
Analysis of Prim’s Algorithm – Continued

Prim(G, w, r) {
for each (u in V)
{
key[u] = +infinity; 2n
color[u] = white;
}

key[r] = 0; 1
pred[r] = nil; 1
Q = new PriQueue(V); n

while (Q. nonempty()) 1


{
u = Q.extractMin(); O(log n)
for each (v in adj[u])
{
if ((color[v] == white) && 1
(w(u,v) < key[v]) 1 O(deg(u) log n)
{
key[v] = w(u, v); 1
Q.decreaseKey(v, key[v]); O(log n)
pred[v] = u; 1
}
}
color[u] = black; 1
}
}
Total: Σ_{u in V} [O(log n) + O(deg(u) log n)]
31
Analysis of Prim’s Algorithm – Continued

So the overall running time is

T(n, m) = Σ_{u ∈ V} ( O(log n) + O(deg(u) log n) )
        = O( log n · Σ_{u ∈ V} (1 + deg(u)) )
        = O( log n · (n + 2m) )
        = O((n + m) log n)
32
Lecture 8: Kruskal’s MST Algorithm
CLRS Chapter 23

Main Topics of This Lecture

Kruskal’s algorithm
Another, but different, greedy MST algorithm

Introduction to UNION-FIND data structure.


Used in Kruskal’s algorithm
Will see implementation in next lecture.

1
Idea of Kruskal’s Algorithm

Kruskal’s algorithm is based directly on the generic
algorithm. Unlike Prim’s algorithm, we make a different
choice of cuts.

Initially, trees of the forest are the vertices (no edges).

In each step add the cheapest edge that does not cre-
ate a cycle.

Observe that unlike Prim’s algorithm, which only grows


one tree, Kruskal’s algorithm grows a collection of trees
(a forest).

Continue until the forest merges into a single tree.


(Why is a single tree created?)

This is a minimum spanning tree


(we must prove this).
2
Outline by Example

10
a b a b
9
5
3 12 c c

e d 2 e d
7
original graph forest MST

edge weight Forest (V, A)


{d, c} 2
{a, e} 3
E {a, d} 5
{e, d} 7
{b, c} 9
{a, b} 10 A={ }
{b, d} 12

3
Outline of Kruskal’s Algorithm

   
Step 0: Set A = ∅ and F = E, the set of all edges.

Step 1: Choose an edge e in F of minimum weight,
and check whether adding e to A creates a cycle.

If “yes”, remove e from F.

If “no”, move e from F to A.

Step 2: If F = ∅, stop and output the minimal span-
ning tree (V, A). Otherwise go to Step 1.

Remark: Will see later that, after each step, (V, A) is a
subgraph of a MST.

4
Outline of Kruskal’s Algorithm

Implementation Questions:

How does the algorithm choose an edge e in F with min-
imum weight?

How does the algorithm check whether adding e to A
creates a cycle?

5
How to Choose the Edge of Least Weight

Question:
How does the algorithm choose an edge e in F with mini-
mum weight?

Answer: Start by sorting the edges in E in order of in-
creasing weight.
Walk through the edges in this order.
(Once an edge causes a cycle it will always cause a cycle, so it
can be thrown away.)

6
How to Check for Cycles

Observation: At each step of the outlined algorithm,
(V, A) is acyclic, so it is a forest.

If u and v are in the same tree, then adding edge
(u, v) to A creates a cycle.

If u and v are not in the same tree, then adding edge
(u, v) to A does not create a cycle.

Question: How to test whether u and v are in the
same tree?

High-Level Answer: Use a disjoint-set data structure.
Vertices in a tree are considered to be in the same set.
Test if Find-Set(u) = Find-Set(v)?

Low-Level Answer:
The UNION-FIND data structure implements this:
7
The UNION-FIND Data Structure

UNION-FIND supports three operations on collections
of disjoint sets. Let n be the size of the universe.

Create-Set(u):
Create a set containing the single element u.

Find-Set(u):
Find the set containing the element u.

Union(u, v):
Merge the sets respectively containing u and v
into a common set.

For now we treat UNION-FIND as a black box.
Will see implementation in next lecture.

8
Kruskal’s Algorithm: the Details

       
Sort E in increasing order by weight w;
/* After sorting, E = ⟨e1, e2, ..., em⟩ with w(e1) ≤ w(e2) ≤ ... ≤ w(em) */

A = ∅;
for (each u in V) CREATE-SET(u);

for i from 1 to m do
  let ei = (ui, vi);
  if (FIND-SET(ui) != FIND-SET(vi)) {
    add (ui, vi) to A;
    UNION(ui, vi);
  }

return(A);

Remark: With a proper implementation of UNION-FIND, Kruskal’s
algorithm has running time O(|E| log |E|).

9
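The details above can be sketched in Python. This is a sketch, not the lecture's code: the union-find uses union by rank with path compression (to be covered in the next lecture), and names such as kruskal_mst are illustrative.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm sketch. n: number of vertices 0..n-1;
    edges: list of (w, u, v). Returns the chosen edge list A."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):                      # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                  # UNION by rank
        rx, ry = find(x), find(y)
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1

    A = []
    for w, u, v in sorted(edges):     # walk edges in increasing weight
        if find(u) != find(v):        # adding (u, v) creates no cycle
            A.append((w, u, v))
            union(u, v)
    return A
```

On the example from slide 3 (vertices a..e mapped to 0..4), the algorithm picks the four edges of total weight 2 + 3 + 5 + 9 = 19.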
Why Kruskal’s Algorithm is correct?

Let A be the edge set which has been selected by
Kruskal’s Algorithm, and (u, v) be the edge to be added
next. It suffices to show there is a cut which respects
A, and (u, v) is the light edge crossing that cut.

1. Let A′ = (V′, E′) denote the tree of the forest (V, A)
that contains u. Consider the cut (V′, V − V′).

2. Observe that no edge in A crosses this
cut, so the cut respects A.

3. Since adding (u, v) to A does not induce a cy-
cle, (u, v) crosses the cut. Moreover, since (u, v)
is currently the smallest edge, (u, v) is the light
edge crossing the cut. This completes the cor-
rectness proof of Kruskal’s Algorithm.

10
Why Kruskal’s Algorithm is correct?

cut (V’, V−V’)

A’=(V’, E’)
u

G=(V, E)

11
Single Source Shortest Paths
• Given a weighted graph G = (V, E) where the weights are > 0.
• A source vertex vo ∈ V.
• Find the shortest path from vo to all other nodes in G.
• Shortest paths are generated in increasing order: 1, 2, 3, ...

Dijkstra Algorithm
• S: set of vertices (including vo) whose final shortest paths from the source
vo have already been determined.
• For each node w ∈ V − S,
Dist(w): the length of the shortest path starting from vo, going through only
vertices which are in S, and ending at w.
• The next path is generated as follows:
it is the path of a vertex u which has Dist(u) minimum among all vertices in
V − S.
Put u in S.
• Dist(w) for w in V − S may be decreased by going through u.
Algorithm:

Procedure SSSP (vo, cost, n)
  Array S(1:n);
Begin
  /* initialization */
  For i = 1 to n do
    S(i) = 0; Dist(i) = cost(vo, i);
  End for;
  S(vo) = 1; Dist(vo) = 0;
  For i = 1 to n-1 do
    Choose u s.t. Dist(u) = min { Dist(w) : S(w) = 0 };
    S(u) = 1;
    For all w with S(w) = 0 do
      Dist(w) = min (Dist(w), Dist(u) + Cost(u, w));
    End for;
  End for;
End.
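The procedure translates almost line-for-line into Python. A sketch (the function name sssp and the matrix encoding with float('inf') for missing edges are assumptions):

```python
def sssp(v0, cost):
    """Dijkstra sketch following the SSSP procedure above. cost is an
    n x n matrix; cost[i][j] = edge weight, float('inf') if absent."""
    n = len(cost)
    S = [False] * n
    dist = list(cost[v0])             # one-edge paths from the source
    S[v0] = True
    dist[v0] = 0
    for _ in range(n - 1):
        # choose the unvisited vertex u with minimum dist
        u = min((w for w in range(n) if not S[w]), key=lambda w: dist[w])
        S[u] = True
        for w in range(n):            # dist[w] may decrease through u
            if not S[w]:
                dist[w] = min(dist[w], dist[u] + cost[u][w])
    return dist
```

Run on the A–F example below (A = 0, ..., F = 5) it reproduces the final row of the iteration table: DB = 2, DC = 3, DD = 5, DE = 3, DF = 4.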

Example:

2 1
A B
5 2 E

3 This will change from 5 to 4 because


F
Min{(A-F), (A-B-F)}
1 3 2 Min{5,4}
C D
4

Iteration N DB DC DD DE DF
Initial {A} 2 3   5
1 {A,B} 2 3  3 4
2 {A,B, C} 2 3 7(A-C-D) 3 4(no change because
of same cost A-C-F)

3 {A,B, C,E} 2 3 5 (min{A-C-D, A-B-E-D}) 3 4
4 { A,B, C,E,F} 2 3 5 no change 3 4
5 { A,B, C,E,F,D} 2 3 5 3 4


Implementation using min heap

• Build heap: O(v)
• Extracting the min element from the min-heap and adjusting the heap,
  v times: O(v log2 v)
• Decrease-key operation:
  o Delete the min key from the heap: O(1).
  o Adjust the root: O(log2 v).
  o Each decrease-key adjusts the min-heap in O(log2 v) time, and a
    decrease-key is performed at most once per edge, so in total these
    operations take O(e log2 v). In a dense graph e = O(v^2) in the
    worst case.

Time complexity: O(v) + O(v log2 v) + O(e log2 v)
(build heap + extract-min with heap adjustment, v times + decrease-key
over all edges)

Time complexity is O(v^2) when the input is represented using an adjacency
matrix; if the input is represented using an adjacency list, it can be
reduced to O((e + v) log v) with the help of a binary heap.

Drawback:

Dijkstra's algorithm can fail when the graph contains negative edge
weights (and the SSSP problem itself becomes meaningless when there is a
negative weight cycle).
Outline Shortest path Dijkstra Bellman-Ford All-pairs Floyd

Shortest Paths
Dijkstra Bellman-Ford Floyd All-pairs paths

Lecturer: Georgy Gimel’farb

COMPSCI 220 Algorithms and Data Structures

1 / 69

1 Single-source shortest path

2 Dijkstra’s algorithm

3 Bellman-Ford algorithm

4 All-pairs shortest path problem

5 Floyd’s algorithm

2 / 69

Paths and Distances Revisited

Cost of a walk / path v0 , v1 , . . . , vk in a digraph G = (V, E) with


edge weights {c(u, v) | (u, v) ∈ E}:
cost(v0, v1, . . . , vk) = Σ_{i=0}^{k−1} c(vi, vi+1)

Distance d(u, v) between two vertices u and v of V (G): the


minimum cost of a path between u and v.
Eccentricity of a node u ∈ V: ec[u] = max_{v∈V} d(u, v).
Radius of G: the minimum eccentricity over u ∈ V: min_{u∈V} ec[u].
Diameter of G: the maximum eccentricity over u ∈ V: max_{u∈V} ec[u].
Note: there are analogous definitions for graphs.
3 / 69
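These definitions translate directly into code. A small Python sketch, assuming the arc weights are available as a dict c and the pairwise distances d(u, v) as a matrix d (e.g. computed by an all-pairs algorithm later in the lecture):

```python
def walk_cost(walk, c):
    """Cost of a walk v0, ..., vk: the sum of arc weights along it."""
    return sum(c[(walk[i], walk[i + 1])] for i in range(len(walk) - 1))

def eccentricity(d, u):
    """ec[u] = max over v of d(u, v), given a distance matrix d."""
    return max(d[u][v] for v in range(len(d)))

def radius(d):
    """Minimum eccentricity over all nodes."""
    return min(eccentricity(d, u) for u in range(len(d)))

def diameter(d):
    """Maximum eccentricity over all nodes."""
    return max(eccentricity(d, u) for u in range(len(d)))
```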

Unweighted / Weighted Graphs: Shortest Paths


The shortest path from the vertex A to the vertex D:
A B A B
4

3
F 1 F

3
8
C D C D

2 3
E E

min{2A,C,D , 3A,C,E,D , 3A,B,F,D } min{9A,C,D , 6A,C,E,D , 10A,B,F,D }

4 / 69

Single-source Shortest Path (SSSP) in G = (V, E, c)

Given a source node v, find the shortest (minimum weight) path to


each other node.
• Weight of a path: the sum of weights (costs) on the arcs.
• BFS works only if all weights c(u, v); (u, v) ∈ E, are equal.
• Dijkstra’s algorithm – one of the known solutions.
• A greedy algorithm: each locally best choice is globally
best.
• Works only if all weights are non-negative.
• Initial paths: one-arc paths from s to v of weight
cost(s, v).
• Each step compares the shortest paths with and without
each new node.

5 / 69

Single-source Shortest Path (SSSP) in G = (V, E, c)

1 Build a list S of visited nodes (say, using a priority queue).


2 Iterative propagation of the shortest paths:
1 Choose the closest unvisited node u being on a path with
internal nodes in S.
2 If adding the node u has established shorter paths,
update distances of remaining unvisited nodes v from the
source s.
Complexity depends on data structures used.
• For a priority queue, such as a binary heap, running time
O((m + n) log n) is possible.
• If every node is reachable from the source: O(m log n).
• More sophisticated Fibonacci heaps lead to the best
complexity of O(m + n log n).
6 / 69

Dijkstra’s Algorithm
algorithm Dijkstra( weighted digraph (G, c), node s ∈ V (G) )
array colour[n] = {WHITE, . . . , WHITE}
array dist[n] = {c[s, 0], . . . , c[s, n − 1]}
colour[s] ← BLACK
while there is a WHITE node do
pick a WHITE node u, such that dist[u] is minimum
colour[u] ← BLACK
for each x adjacent to u do
if colour[x] = WHITE  then
dist[x] ← min dist[x], dist[u] + c[u, x]
end if
end for
end while
return dist
end
7 / 69

Dijkstra’s Algorithm: Example 1

[Figure: weighted digraph on nodes a, b, c, d, e.]

BLACK List S   dist[x]
               a   b   c   d   e
a              0   3   8   ∞   ∞
ab             0   3   8   5   ∞
abd            0   3   7   5   10
abcd           0   3   7   5   9
abcde          0   3   7   5   9

8 / 69

Why Does Dijkstra’s Algorithm Work?


[Figure: an S-path from s to x; the intermediate nodes u, y lie in S.]

Let an S-path be a path starting
at node s and ending at node x
with all the intermediate nodes
coloured BLACK, i.e., from the
list S, except possibly x.

Theorem 6.8: Suppose that all arc weights are nonnegative.


Then these two properties hold at the top of while-loop:
P1: If x ∈ V (G), then dist[x] is the minimum cost of an S-path
from s to x.
P2: If colour[w] = BLACK (i.e., w ∈ S), then dist[w] is the
minimum cost of a path from s to w.

Once a node u is added to S and dist[u] is updated, dist[u] never changes in


subsequent steps. After S = V , dist holds the goal shortest distances.
9 / 69

Proving Why Dijkstra’s Algorithm Works



The update rule: dist[x] ← min dist[x], dist[u] + c[u, x] .
dist[x] is the length of some path from
s to x at every step.
• If x ∈ S, then it is an S-path.
• Updated dist[v] never increases.
To prove P1 and P2: induction on the
number of times k of going through the
while-loop (Sk ; S0 = {s}; dist[s] = 0).
• k = 0: P1 and P2 hold as dist[s] = 0.
• Inductive hypothesis: P1 and P2 hold for k ≥ 0; Sk+1 = Sk ∪ {u}.

• Inductive steps for P2 and P1:


• Consider any s-to-w Sk+1 -path γ = (s, . . . , y, u) of the weight
|γ|.
• If w ∈ Sk , consider the hypothesis.
• If w ∉ Sk , γ extends some s-to-y Sk -path γ1 = (s, . . . , y).
10 / 69

Proving Why Dijkstra’s Algorithm Works


Inductive step for P2:
• For w ∈ Sk+1 and w 6= u, P2 holds by inductive hypothesis.
• For w = u, P2 holds, too, because any Sk+1 -path γ = (s, . . . , y, u)
of weight |γ| extends some Sk -path γ1 = (s, . . . , y) of weight |γ1 |:
• By the inductive hypothesis, dist[y] ≤ |γ1 |.
• By the update rule, dist[u] ≤ dist[y] + c(y, u).
• Therefore, dist[u] ≤ |γ| = |γ1 | + c(y, u).

u
y

γ1
s
Sk

11 / 69

Proving Why Dijkstra’s Algorithm Works


Inductive step for P1: x ∈ V(G); γ – any s-to-x Sk+1-path; Sk+1 = Sk ∪ {u}:

• u ∉ γ: γ is an Sk-path and |γ| ≤ dist[x] by the inductive hypothesis.

• u ∈ γ = (γ1, x), where γ1 = (s, . . . , u): by the update rule,
|γ| = |γ1| + c(u, x) ≥ dist[x].

• u ∈ γ = (γ1, x), where γ1 = (s, . . . , u, . . . , y) returns to Sk after u:
by the update rule,

|γ| = |γ1| + c(y, x) ≥ |β| + c(y, x) ≥ dist[y] + c(y, x) ≥ dist[x]

where |β| is the min weight of an s-to-y Sk-path.

[Figure: a path from s leaving Sk through u and returning to y ∈ Sk
before reaching x.]

12 / 69

Dijkstra’s Algorithm: Example 2

[Figure: weighted graph on nodes A–F.]

Node u      A   B   C   D    E    F
            0   7   9   ∞    ∞    14
A           0   7   9   ∞    ∞    14
AB          0   7   9   22   ∞    14
ABC         0   7   9   20   ∞    11
ABCF        0   7   9   20   20   11
ABCDF       0   7   9   20   20   11
ABCDEF      0   7   9   20   20   11

for u ∈ V(G): dist[u] ← c[A, u]

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

colour[A] ← BLACK; dist[A] ← 0


9 10

A B
7

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

while-loop:
9 10 WHITE B, C, D, E, F : min dist[B]
colour[B] ← BLACK
for x ∈ V (G)
A B  dist[x] ←
7 min dist[x], dist[B] + c[B, x]

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

while-loop:
9 10 WHITE C, D, E, F : min dist[C]
colour[C] ← BLACK;
for x ∈ V (G)
A B  dist[x] ←
7 min dist[x], dist[C] + c[C, x]

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

while-loop:
9 10 WHITE D, E, F : min dist[F ]
colour[F ] ← BLACK;
for x ∈ V (G)
A B  dist[x] ←
7 min dist[x], dist[F ] + c[F, x]

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

while-loop:
9 10 WHITE D, E: min dist[D]
colour[D] ← BLACK;
for x ∈ V (G)
A B  dist[x] ←
7 min dist[x], dist[D] + c[D, x]

13 / 69

Dijkstra’s Algorithm: Example 2

9 6 Node u A B C D E F
F E D 0 7 9 ∞ ∞ 14
A 0 7 9 ∞ ∞ 14
2 11 A B 0 7 9 22 ∞ 14
A B C 0 7 9 20 ∞ 11
A B C F 0 7 9 20 20 11
A B C DF 0 7 9 20 20 11
14 C 15 A B C DEF 0 7 9 20 20 11

while-loop:
9 10 WHITE E: min dist[E]
colour[E] ← BLACK;
for x ∈ V (G)
A B  dist[x] ←
7 min dist[x], dist[E] + c[E, x]

13 / 69

Dijkstra’s Algorithm: PFS Version


Input: weighted digraph (G, c); source node s ∈ V(G);
priority queue Q; arrays dist[0..n − 1]; colour[0..n − 1]

for u ∈ V(G) do: colour[u] ← WHITE
colour[s] ← GREY; Q.insert(s, key = 0)
while Q is not empty do:
    u ← Q.peek(); τ ← Q.getKey(u); Q.delete()
    dist[u] ← τ
    for each x adjacent to u do:
        t ← τ + c(u, x)
        if colour[x] = WHITE then
            colour[x] ← GREY; Q.insert(x, t)
        else if colour[x] = GREY and Q.getKey(x) > t then
            Q.decreaseKey(x, t)
return dist

14 / 69

Dijkstra’s Algorithm: PFS Version: Start at a


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Initialisation:
Priority queue Q = {akey=0 }
v∈V a b c d e f g
keyv 0
dist[v] − − − − − − −
15 / 69

Dijkstra’s Algorithm: PFS Version: Steps 1 – 2


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← a; t1 ← keya = 0; x ∈ {b, c, d}
x ← b: t2 = t1 + cost(a, b) = 2; Q = {a0 , b2 }
v∈V a b c d e f g
keyv 0 2
dist[v] − − − − − − −
16 / 69

Dijkstra’s Algorithm: PFS Version: Step 3


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u = a; t1 = keya = 0; x ∈ {b, c, d}
x ← c: t2 = t1 + cost(a, c) = 3; Q = {a0 , b2 , c3 }
v∈V a b c d e f g
keyv 0 2 3
dist[v] − − − − − − −
17 / 69

Dijkstra’s Algorithm: PFS Version: Step 4


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u = a; t1 = keya = 0; x ∈ {b, c, d}
x ← d: t2 = t1 + cost(a, d) = 3; Q = {a0 , b2 , c3 , d3 }
v∈V a b c d e f g
keyv 0 2 3 3
dist[v] − − − − − − −
18 / 69

Dijkstra’s Algorithm: PFS Version: Step 5


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = a
dist[a] ← t1 = 0; Q = {b2 , c3 , d3 }
v∈V a b c d e f g
keyv 0 2 3 3
dist[v] 0 − − − − − −
19 / 69

Dijkstra’s Algorithm: PFS Version: Steps 6 – 7


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← b; t1 ← keyb = 2; x ∈ {c, e}
x ← c: t2 = t1 + cost(b, c) = 2 + 4 = 6; keyc = 3 < t2 = 6
v∈V a b c d e f g
keyv 0 2 3 3
dist[v] 0 − − − − − −
20 / 69

Dijkstra’s Algorithm: PFS Version: Step 8


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u = b; t1 = keyb = 2; x ∈ {c, e}
x ← e: t2 = t1 + cost(b, e) = 2 + 3 = 5; Q = {b2 , c3 , d3 , e5 }
v∈V a b c d e f g
keyv 0 2 3 3 5
dist[v] 0 − − − − − −
21 / 69

Dijkstra’s Algorithm: PFS Version: Step 9


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = b
dist[b] ← t1 = 2; Q = {c3 , d3 , e5 }
v∈V a b c d e f g
keyv 0 2 3 3 5
dist[v] 0 2 − − − − −
22 / 69

Dijkstra’s Algorithm: PFS Version: Steps 10 – 11


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← c; t1 ← keyc = 3; x ∈ {d, e, f }
x ← d: t2 = t1 + cost(c, d) = 3 + 5 = 8; keyd = 3 < t2 = 8
v∈V a b c d e f g
keyv 0 2 3 3 5
dist[v] 0 2 − − − − −
23 / 69

Dijkstra’s Algorithm: PFS Version: Step 12


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u = c; t1 = keyc = 3; x ∈ {d, e, f }
x ← e: t2 = t1 + cost(c, e) = 3 + 1 = 4; keye = 5 > t2 = 4; keye ← 4
v∈V a b c d e f g
keyv 0 2 3 3 4
dist[v] 0 2 − − − − −
24 / 69

Dijkstra’s Algorithm: PFS Version: Step 13


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u = c; t1 = keyc = 3; x ∈ {d, e, f }
x ← f : t2 = t1 + cost(c, f ) = 3 + 6 = 9; Q = {c3 , d3 , e4 , f9 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 − − − − −
25 / 69

Dijkstra’s Algorithm: PFS Version: Step 14


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = c
dist[c] ← t1 = 3; Q = {d3 , e4 , f9 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 3 − − − −
26 / 69

Dijkstra’s Algorithm: PFS Version: Steps 15 – 16


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← d; t1 ← keyd = 3; x ∈ {f }
x ← f : t2 = t1 + cost(d, f ) = 3 + 7 = 10; keyf = 9 < t2 = 10
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 3 − − − −
27 / 69

Dijkstra’s Algorithm: PFS Version: Step 17


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = d
dist[d] ← t1 = 3; Q = {e4 , f9 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 3 3 − − −
28 / 69

Dijkstra’s Algorithm: PFS Version: Steps 18 – 19


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← e; t1 ← keye = 4; x ∈ {f }
x ← f : t2 = t1 + cost(e, f ) = 4 + 8 = 12; keyf = 9 < t2 = 12
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 3 3 − − −
29 / 69

Dijkstra’s Algorithm: PFS Version: Step 20


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = e
dist[e] ← t1 = 4; Q = {f9 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9
dist[v] 0 2 3 3 4 − −
30 / 69

Dijkstra’s Algorithm: PFS Version: Steps 21 – 22


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
u ← f ; t1 ← keyf = 9; x ∈ {g}
x ← g: t2 = t1 + cost(f, g) = 9 + 9 = 18; Q = {f9 , g18 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9 18
dist[v] 0 2 3 3 4 − −
31 / 69

Dijkstra’s Algorithm: PFS Version: Step 23


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = f
dist[f ] ← t1 = 9; Q = {g18 }
v∈V a b c d e f g
keyv 0 2 3 3 4 9 18
dist[v] 0 2 3 3 4 9 −
32 / 69

Dijkstra’s Algorithm: PFS Version: Steps 24 – 25


a d
3

7
3 5

2 c f g
6 9

8
4 1

b e
3
Completing the while-loop for u = g
dist[g] ← t1 = 18; no adjacent vertices for g; empty Q = {}
v∈V a b c d e f g
keyv 0 2 3 3 4 9 18
dist[v] 0 2 3 3 4 9 18
33 / 69

SSSP: Bellman-Ford Algorithm

algorithm Bellman-Ford( weighted digraph (G, c); node s )


array dist[n] = {∞, ∞, . . .}
dist[s] ← 0
for i from 0 to n − 1 do
for x ∈ V (G) do
for v ∈ V (G) do
dist[v] ← min(dist[v], dist[x] + c(x, v))
end for
end for
end for
return dist
end
Time complexity – Θ(n³); unlike Dijkstra’s algorithm, it handles negative
weight arcs (but no negative weight cycles, which make the SSSP senseless).

34 / 69

SSSP: Bellman-Ford Algorithm (Alternative Form)

algorithm Bellman-Ford( weighted digraph (G, c); node s )


array dist[n] = {∞, ∞, . . .}
dist[s] ← 0
for i from 0 to n − 1 do
for (x, v) ∈ E(G) do
dist[v] ← min(dist[v], dist[x] + c(x, v))
end for
end for
return dist
end

Replacing the two nested for-loops by the nodes x, v ∈ V (G) with a single
for-loop by the arcs (x, v) ∈ E(G).
Time complexity: Θ(mn) using adjacency lists vs. Θ(n3 ) using an adjacency
matrix.
35 / 69
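A Python sketch of this edge-list form (the name bellman_ford and the return convention are illustrative). It also applies the early-termination idea mentioned later in the lecture: if distances still change on the n-th pass, there must be a negative weight cycle, and the sketch returns None.

```python
def bellman_ford(n, arcs, s):
    """Bellman-Ford sketch (edge-list form). arcs: list of (x, v, w);
    returns dist, or None if a negative weight cycle is detected."""
    INF = float('inf')
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n):                # one extra pass for cycle detection
        changed = False
        for x, v, w in arcs:
            if dist[x] + w < dist[v]:
                dist[v] = dist[x] + w
                changed = True
        if not changed:               # early termination: converged
            return dist
    # a change on the n-th pass means a negative weight cycle
    return None
```

On the example of slide 39 (nodes a..e mapped to 0..4), it converges to dist = [0, 0, −1, 2, −1], matching the tables.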

Bellman-Ford Algorithm

Slower than Dijkstra’s algorithm when all arcs are nonnegative.


Basic idea as in Dijkstra’s: to find the single-source shortest paths
(SSSP) under progressively relaxing restrictions.
• Dijkstra’s: one node at a time based on its current distance
estimate.
• Bellman-Ford: all nodes at “level” 0, 1, . . . , n − 1 in turn.
• Level of a node v – the minimum possible number of arcs in a
minimum weight path to that node from the source s.

Theorem 6.9
If a graph G contains no negative weight cycles, then after the ith
iteration of the outer for-loop, the element dist[v] contains the
minimum weight of a path to v for all nodes v with level at most i.

36 / 69

Proving Why Bellman-Ford Algorithm Works

Just as for Dijkstra’s, the update ensures dist[v] never increases.

Induction by the level i of the nodes:


• Base case: i = 0; the result is true due to initialisation:
dist[s] = 0; dist[v] = ∞; v ∈ V \s.
• Induction hypothesis: dist[v]; v ∈ V , are true for i − 1.
• Induction step for a node v at level i:
• Due to no negative weight cycles, a min-weight s-to-v path, γ,
has i arcs.
• If y is the last node before v and γ1 the subpath to y, then
dist[y] ≤ |γ1 | by the induction hypothesis.
• Thus by the update rule:

dist[v] ≤ dist[y] + c(y, v) ≤ |γ1 | + c(y, v) ≤ |γ|

as required at level i.
37 / 69

Illustrating Bellman-Ford Algorithm

[Figure: digraph on nodes a, b, c, d, e; the arc weights are listed in
the arc table of the alternative-form illustration.]

  i    dist[x]
       a   b   c   d   e
  0    0   ∞   ∞   ∞   ∞
  1    0   3  −1   ∞   ∞
  2    0   0  −1   3   5
  3    0   0  −1   2   0
  4    0   0  −1   2  −1

38 / 69

Illustrating Bellman-Ford Algorithm

3 b i dist[x]
3
a b c d e
2 0 0 ∞ ∞ ∞ ∞
a
2 1 0 3 −1 ∞ ∞
-2
2 0 0 −1 3 5
1 ∞ d 3 0 0 −1 2 0
-1 2 4 0 0 −1 2 −1

-1 4 -3
c
6

e

38 / 69

Illustrating Bellman-Ford Algorithm

3 b i dist[x]
0
a b c d e
2 0 0 ∞ ∞ ∞ ∞
a
2 1 0 3 −1 ∞ ∞
-2
2 0 0 −1 3 5
1 3 d 3 0 0 −1 2 0
-1 2 4 0 0 −1 2 −1

-1 4 -3
c
6
5
e

38 / 69

Illustrating Bellman-Ford Algorithm

3 b i dist[x]
0
a b c d e
2 0 0 ∞ ∞ ∞ ∞
a
2 1 0 3 −1 ∞ ∞
-2
2 0 0 −1 3 5
1 2 d 3 0 0 −1 2 0
-1 2 4 0 0 −1 2 −1

-1 4 -3
c
6
-1
e

38 / 69

Illustrating Bellman-Ford Algorithm (Alternative Form)


Arc (x, v): a,b a,c b,a b,d c,b c,d c,e d,b d,c d,e
c(x, v): 3 −1 2 2 1 4 6 −2 2 −3

Iteration i = 0
x, v Distance d[v] ← min{d[v], d[x] + c(x, v)} a b c d e
0 ∞ ∞ ∞ ∞
a, b d[b] ← min{∞, 0 + 3} = 3 0 3 ∞ ∞ ∞
a, c d[c] ← min{∞, 0 − 1} = −1 0 3 −1 ∞ ∞
b, a d[a] ← min{0, 3 + 2} = 0 0 3 −1 ∞ ∞
b, d d[d] ← min{∞, 3 + 2} = 5 0 3 −1 5 ∞
c, b d[b] ← min{3, −1 + 1} = 0 0 0 −1 5 ∞
c, d d[d] ← min{5, −1 + 4} = 3 0 0 −1 3 ∞
c, e d[e] ← min{∞, −1 + 6} = 5 0 0 −1 3 5
d, b d[b] ← min{0, 3 − 2} = 0 0 0 −1 3 5
d, c d[c] ← min{−1, 3 + 2} = −1 0 0 −1 3 5
d, e d[e] ← min{5, 3 − 3} = 0 0 0 −1 3 0

39 / 69

Illustrating Bellman-Ford Algorithm (Alternative Form)

Arc (x, v): a,b a,c b,a b,d c,b c,d c,e d,b d,c d,e
c(x, v): 3 −1 2 2 1 4 6 −2 2 −3

Iteration i = 1
x, v Distance d[v] ← min{d[v], d[x] + c(x, v)} a b c d e
0 0 −1 3 0
a, b d[b] ← min{0, 0 + 3} = 0 0 0 −1 3 0
a, c d[c] ← min{−1, 0 − 1} = −1 0 0 −1 3 0
b, a d[a] ← min{0, 0 + 2} = 0 0 0 −1 3 0
b, d d[d] ← min{3, 0 + 2} = 2 0 0 −1 2 0
c, b d[b] ← min{0, −1 + 1} = 0 0 0 −1 2 0
c, d d[d] ← min{2, −1 + 4} = 2 0 0 −1 2 0
c, e d[e] ← min{0, −1 + 6} = 0 0 0 −1 2 0
d, b d[b] ← min{0, 2 − 2} = 0 0 0 −1 2 0
d, c d[c] ← min{−1, 2 + 2} = −1 0 0 −1 2 0
d, e d[e] ← min{0, 2 − 3} = −1 0 0 −1 2 −1

39 / 69

Illustrating Bellman-Ford Algorithm (Alternative Form)

Arc (x, v): a,b a,c b,a b,d c,b c,d c,e d,b d,c d,e
c(x, v): 3 −1 2 2 1 4 6 −2 2 −3

Iteration i = 2..4
x, v Distance d[v] ← min{d[v], d[x] + c(x, v)} a b c d e
0 0 −1 2 −1
a, b d[b] ← min{0, 0 + 3} = 0 0 0 −1 2 −1
a, c d[c] ← min{−1, 0 − 1} = −1 0 0 −1 2 −1
b, a d[a] ← min{0, 0 + 2} = 0 0 0 −1 2 −1
b, d d[d] ← min{2, 0 + 2} = 2 0 0 −1 2 −1
c, b d[b] ← min{0, −1 + 1} = 0 0 0 −1 2 −1
c, d d[d] ← min{2, −1 + 4} = 2 0 0 −1 2 −1
c, e d[e] ← min{−1, −1 + 6} = −1 0 0 −1 2 −1
d, b d[b] ← min{0, 3 − 2} = 0 0 0 −1 2 −1
d, c d[c] ← min{−1, 3 + 2} = −1 0 0 −1 2 −1
d, e d[e] ← min{−1, 3 − 3} = −1 0 0 −1 2 −1

39 / 69

Comments on Bellman-Ford Algorithm

• This (non-greedy) algorithm handles negative weight arcs, but


not negative weight cycles.
• Running time with the two innermost nested for-loops:
O(n3 ).
• Runs slower than Dijkstra’s algorithm since it considers all
nodes at “level” i = 0, 1, . . . , n − 1, in turn.
• The alternative form where the two inner-most for-loops are
replaced with: for (u, v) ∈ E(V ) runs in time O(nm).
• The outer for-loop (by i) in this case can be terminated after
no distance changes during the iteration (e.g., after i = 2 in
the example on Slide 39).
• Bellman-Ford algorithm can be modified to detect negative
weight cycle (see Textbook, Exercise 6.3.4)

40 / 69

All Pairs Shortest Path (APSP) Problem

Given a weighted digraph (G, c), determine for each pair of nodes
u, v ∈ V (G) (the length of) a minimum weight path from u to v.
 
Convenient output: a distance matrix D = D[u, v] u,v∈V (G)
• Time complexity Θ(nAn,m ) of computing the matrix D by
finding the single-source shortest paths (SSSP) from each
node as the source in turn.
• An=|V (G)|,m=|E(G)| – the complexity of the SSSP algorithm.

• The APSP complexity Θ(n3 ) for the adjacency matrix version


of the Dijkstra’s SSSP algorithm: An,m = n2 .

• The APSP complexity Θ(n2 m) for the Bellman-Ford SSSP


algorithm: An,m = mn.

41 / 69

All Pairs Shortest Path (APSP) Problem


Floyd’s algorithm – one of the known simpler algorithms for computing
the distance matrix (three nested for-loops; Θ(n3 ) time complexity):

1 Number all nodes (say, from 0 to n − 1).


2 At each step k, maintain the matrix of shortest distances from node
i to node j, not passing through nodes higher than k.
3 Update the matrix at each step to see whether the node k shortens
the current best distance.
An alternative to running the SSSP algorithm from each node.
• Better than Dijkstra’s algorithm for dense graphs, probably not
for sparse ones.
• Unlike Dijkstra’s algorithm, can handle negative costs.
• Based on Warshall’s algorithm (just tells whether there is a path from
node i to node j, not concerned with length).
42 / 69

Floyd’s Algorithm

algorithm Floyd( weighted digraph (G, c) )

Initialisation: for u, v ∈ V(G) do D[u, v] ← c(u, v) end for
for x ∈ V(G) do
  for u ∈ V(G) do
    for v ∈ V(G) do
      D[u, v] ← min{D[u, v], D[u, x] + D[x, v]}
    end for
  end for
end for

This algorithm is based on dynamic programming principles.

At the bottom of the outer for-x-loop, D[u, v] for each u, v ∈ V(G) is
the length of the shortest path from u to v whose intermediate nodes all
belong to the set of x values seen so far in that loop.
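The pseudocode above translates directly into a few lines; this sketch uses 0-indexed node numbers and `float("inf")` for missing edges, and is exercised on the lecture’s 5-node example:

```python
INF = float("inf")

def floyd(cost):
    """Floyd's APSP: cost is an n x n matrix, INF marks missing edges."""
    n = len(cost)
    D = [row[:] for row in cost]          # D starts as a copy of c
    for x in range(n):                    # intermediate node
        for u in range(n):
            for v in range(n):
                if D[u][x] + D[x][v] < D[u][v]:
                    D[u][v] = D[u][x] + D[x][v]
    return D
```

Running it on the cost matrix of the following illustration reproduces the final distance matrix D4 shown there.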

Illustrating Floyd’s Algorithm

[Figure: a 5-node digraph on nodes 0–4 with edges 0→1 (3), 0→2 (−1),
1→0 (2), 1→3 (2), 2→1 (1), 2→3 (4), 2→4 (6), 3→1 (−2), 3→2 (2),
3→4 (−3).]

Adjacency/cost matrix c[u, v]:

        0     1     2     3     4
  0 [   0     3    −1     ∞     ∞ ]
  1 [   2     0     ∞     2     ∞ ]
  2 [   ∞     1     0     4     6 ]
  3 [   ∞    −2     2     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

Illustrating Floyd’s Algorithm: x = 0

Distance matrix D0[u, v]:

        0     1     2     3     4
  0 [   0     3    −1     ∞     ∞ ]
  1 [   2     0     1     2     ∞ ]
  2 [   ∞     1     0     4     6 ]
  3 [   ∞    −2     2     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

D0[1, 2] = min{∞, c[1, 0] + c[0, 2]} = min{∞, 2 + (−1)} = 1

Illustrating Floyd’s Algorithm: x = 1

Distance matrix D1[u, v]:

        0     1     2     3     4
  0 [   0     3    −1     5     ∞ ]
  1 [   2     0     1     2     ∞ ]
  2 [   3     1     0     3     6 ]
  3 [   0    −2    −1     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

D1[0, 3] = min{∞, D0[0, 1] + D0[1, 3]} = min{∞, 3 + 2} = 5
D1[2, 3] = min{4, D0[2, 1] + D0[1, 3]} = min{4, 1 + 2} = 3
D1[3, 2] = min{2, D0[3, 1] + D0[1, 2]} = min{2, −2 + 1} = −1

Illustrating Floyd’s Algorithm: x = 2

Distance matrix D2[u, v]:

        0     1     2     3     4
  0 [   0     0    −1     2     5 ]
  1 [   2     0     1     2     7 ]
  2 [   3     1     0     3     6 ]
  3 [   0    −2    −1     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

D2[0, 1] = min{3, D1[0, 2] + D1[2, 1]} = min{3, −1 + 1} = 0
D2[0, 3] = min{5, D1[0, 2] + D1[2, 3]} = min{5, −1 + 3} = 2
D2[0, 4] = min{∞, D1[0, 2] + D1[2, 4]} = min{∞, −1 + 6} = 5
D2[1, 4] = min{∞, D1[1, 2] + D1[2, 4]} = min{∞, 1 + 6} = 7

Illustrating Floyd’s Algorithm: x = 3

Distance matrix D3[u, v]:

        0     1     2     3     4
  0 [   0     0    −1     2    −1 ]
  1 [   2     0     1     2    −1 ]
  2 [   3     1     0     3     0 ]
  3 [   0    −2    −1     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

D3[0, 4] = min{5, D2[0, 3] + D2[3, 4]} = min{5, 2 + (−3)} = −1
D3[1, 4] = min{7, D2[1, 3] + D2[3, 4]} = min{7, 2 + (−3)} = −1
D3[2, 4] = min{6, D2[2, 3] + D2[3, 4]} = min{6, 3 + (−3)} = 0

Illustrating Floyd’s Algorithm: x = 4

Final distance matrix D ≡ D4[u, v]:

        0     1     2     3     4
  0 [   0     0    −1     2    −1 ]
  1 [   2     0     1     2    −1 ]
  2 [   3     1     0     3     0 ]
  3 [   0    −2    −1     0    −3 ]
  4 [   ∞     ∞     ∞     ∞     0 ]

Node 4 has no outgoing edges, so the pass x = 4 changes nothing:
D4 = D3.

Proving Why Floyd’s Algorithm Works

Theorem 6.12: At the bottom of the outer for-loop, for all nodes u and v,
D[u, v] contains the minimum length of all paths from u to v that are
restricted to using only intermediate nodes that have been seen in the
outer for-loop.

When the algorithm terminates, all nodes have been seen and D[u, v] is the
length of the shortest u-to-v path.
Notation: Sk – the set of nodes seen after k passes through this loop;
Sk-path – one with all intermediate nodes in Sk; Dk – the corresponding
value of D.
Induction on the outer for-loop:
• Base case: k = 0; S0 = ∅, and the result holds.
• Induction hypothesis: It holds after k ≥ 0 times through the loop.
• Inductive step: To show that Dk+1[u, v] after k + 1 passes
through this loop is the minimum length of a u-to-v Sk+1-path.

Proving Why Floyd’s Algorithm Works

Inductive step:
Suppose that x is the last node seen in the loop, so Sk+1 = Sk ∪ {x}.
• Fix an arbitrary pair of nodes u, v ∈ V(G) and let L be the
minimum length of a u-to-v Sk+1-path, so that obviously
L ≤ Dk+1[u, v].
• To show that also Dk+1[u, v] ≤ L, choose a u-to-v Sk+1-path γ of
length L. If x ∉ γ, the result follows from the induction hypothesis.
• If x ∈ γ, let γ1 and γ2 be, respectively, the u-to-x and x-to-v
subpaths. Then γ1 and γ2 are Sk-paths and, by the induction
hypothesis,
    L = |γ1| + |γ2| ≥ Dk[u, x] + Dk[x, v] ≥ Dk+1[u, v]

Non-negativity of the weights is not used in the proof, so Floyd’s algorithm
works for negative weights (but negative-weight cycles must not be present).

Floyd’s Algorithm: Example 2

[Figure: an undirected weighted graph on nodes a–g with edges
a–b (2), a–c (3), a–d (3), b–c (4), b–e (3), c–d (5), c–e (1),
c–f (6), d–f (7), e–f (8), f–g (9).]

Computing all-pairs shortest paths

Floyd’s Algorithm: Example 2 (Initialisation)

Initialise D[u, v] ← c(u, v) for all u, v ∈ V(G):

        a     b     c     d     e     f     g
  a [   0     2     3     3     ∞     ∞     ∞ ]
  b [   2     0     4     ∞     3     ∞     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     ∞     5     0     ∞     7     ∞ ]
  e [   ∞     3     1     ∞     0     8     ∞ ]
  f [   ∞     ∞     6     7     8     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

for x ∈ V = {a, b, c, d, e, f, g} do
  for u ∈ V = {a, b, c, d, e, f, g} do
    for v ∈ V = {a, b, c, d, e, f, g} do
      D[u, v] ← min{D[u, v], D[u, x] + D[x, v]}
    end for
  end for
end for

Floyd’s Algorithm: Example 2 (x ← a)

D[u, v] ← min{D[u, v], D[u, a] + D[a, v]} for all (u, v) ∈ V²:

        a     b     c     d     e     f     g
  a [   0     2     3     3     ∞     ∞     ∞ ]
  b [   2     0     4     5     3     ∞     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     5     5     0     ∞     7     ∞ ]
  e [   ∞     3     1     ∞     0     8     ∞ ]
  f [   ∞     ∞     6     7     8     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

E.g., D[b, d] ← min{D[b, d], D[b, a] + D[a, d]} = min{∞, 2 + 3} = 5

Floyd’s Algorithm: Example 2 (x ← b)

D[u, v] ← min{D[u, v], D[u, b] + D[b, v]} for all (u, v) ∈ V²:

        a     b     c     d     e     f     g
  a [   0     2     3     3     5     ∞     ∞ ]
  b [   2     0     4     5     3     ∞     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     5     5     0     8     7     ∞ ]
  e [   5     3     1     8     0     8     ∞ ]
  f [   ∞     ∞     6     7     8     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

E.g., D[a, e] ← min{D[a, e], D[a, b] + D[b, e]} = min{∞, 2 + 3} = 5

Floyd’s Algorithm: Example 2 (x ← c)

D[u, v] ← min{D[u, v], D[u, c] + D[c, v]} for all (u, v) ∈ V²:

        a     b     c     d     e     f     g
  a [   0     2     3     3     4     9     ∞ ]
  b [   2     0     4     5     3    10     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     5     5     0     6     7     ∞ ]
  e [   4     3     1     6     0     7     ∞ ]
  f [   9    10     6     7     7     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

E.g., D[a, f] ← min{D[a, f], D[a, c] + D[c, f]} = min{∞, 3 + 6} = 9

Floyd’s Algorithm: Example 2 (x ← d)

D[u, v] ← min{D[u, v], D[u, d] + D[d, v]} for all (u, v) ∈ V²
(no entries change at this step):

        a     b     c     d     e     f     g
  a [   0     2     3     3     4     9     ∞ ]
  b [   2     0     4     5     3    10     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     5     5     0     6     7     ∞ ]
  e [   4     3     1     6     0     7     ∞ ]
  f [   9    10     6     7     7     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

E.g., D[a, f] ← min{D[a, f], D[a, d] + D[d, f]} = min{9, 3 + 7} = 9

Floyd’s Algorithm: Example 2 (x ← e)

D[u, v] ← min{D[u, v], D[u, e] + D[e, v]} for all (u, v) ∈ V²
(no entries change at this step):

        a     b     c     d     e     f     g
  a [   0     2     3     3     4     9     ∞ ]
  b [   2     0     4     5     3    10     ∞ ]
  c [   3     4     0     5     1     6     ∞ ]
  d [   3     5     5     0     6     7     ∞ ]
  e [   4     3     1     6     0     7     ∞ ]
  f [   9    10     6     7     7     0     9 ]
  g [   ∞     ∞     ∞     ∞     ∞     9     0 ]

E.g., D[b, f] ← min{D[b, f], D[b, e] + D[e, f]} = min{10, 3 + 7} = 10

Floyd’s Algorithm: Example 2 (x ← f)

D[u, v] ← min{D[u, v], D[u, f] + D[f, v]} for all (u, v) ∈ V²:

        a     b     c     d     e     f     g
  a [   0     2     3     3     4     9    18 ]
  b [   2     0     4     5     3    10    19 ]
  c [   3     4     0     5     1     6    15 ]
  d [   3     5     5     0     6     7    16 ]
  e [   4     3     1     6     0     7    16 ]
  f [   9    10     6     7     7     0     9 ]
  g [  18    19    15    16    16     9     0 ]

E.g., D[a, g] ← min{D[a, g], D[a, f] + D[f, g]} = min{∞, 9 + 9} = 18

The final pass x ← g changes nothing (every path through g must enter
and leave g via f), so this is the final distance matrix.
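The whole of Example 2 can be checked mechanically. This sketch builds the symmetric cost matrix from the edge list of the figure and runs the same triple loop as the pseudocode:

```python
INF = float("inf")

def floyd(cost):
    """Same triple loop as in the Floyd pseudocode above."""
    n = len(cost)
    D = [row[:] for row in cost]
    for x in range(n):
        for u in range(n):
            for v in range(n):
                if D[u][x] + D[x][v] < D[u][v]:
                    D[u][v] = D[u][x] + D[x][v]
    return D

def cost_matrix(names, edges):
    """Symmetric cost matrix for an undirected weighted graph."""
    idx = {s: i for i, s in enumerate(names)}
    n = len(names)
    C = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        C[idx[u]][idx[v]] = C[idx[v]][idx[u]] = w
    return C
```

Running it on the a–g graph reproduces the final matrix above, including D[a, g] = 18.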

Computing Actual Shortest Paths

• In addition to knowing the shortest distances, the shortest
paths often have to be reconstructed.
• Floyd’s algorithm can be enhanced to also compute the
predecessor matrix Π = [πi,j], 1 ≤ i, j ≤ n, where vertex πi,j
precedes vertex j on a shortest path from vertex i.

Compute a sequence Π(0), Π(1), …, Π(n),
where vertex π(k)i,j precedes vertex j on a shortest path from
vertex i with all intermediate vertices in V(k) = {1, 2, …, k}.

For the case of no intermediate vertices:

π(0)i,j = NIL  if i = j or c[i, j] = ∞
          i    if i ≠ j and c[i, j] < ∞


Floyd’s Algorithm with Predecessors

algorithm FloydPred( weighted digraph (G, c) )

D ← c          Create initial distance matrix from weights.
Π ← Π(0)       Initialise predecessors from c as defined above.

for k from 1 to n do
  for i from 1 to n do
    for j from 1 to n do
      if D[i, j] > D[i, k] + D[k, j] then
        D[i, j] ← D[i, k] + D[k, j]; Π[i, j] ← Π[k, j]
      end if
    end for
  end for
end for
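The FloydPred pseudocode carries over directly; this 0-indexed sketch uses `None` for NIL and is checked against the 5-vertex predecessor example that follows:

```python
INF = float("inf")

def floyd_pred(c):
    """Floyd's algorithm that also records predecessors.

    c: n x n cost matrix (INF = no edge). Returns (D, P) where P[i][j]
    is the predecessor of j on a shortest i-to-j path, or None.
    """
    n = len(c)
    D = [row[:] for row in c]
    # P(0): predecessor is i wherever a direct edge exists.
    P = [[None if i == j or c[i][j] == INF else i for j in range(n)]
         for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = P[k][j]   # inherit predecessor via k
    return D, P
```

On the example below (vertices renumbered 0–4), this yields the matrices D(5) and Π(5) shown on the last illustration slide.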

Illustrating Floyd’s Algorithm with Predecessors

[Figure: a 5-vertex digraph with edges 1→2 (3), 1→3 (8), 1→5 (−4),
2→4 (1), 2→5 (7), 3→2 (4), 4→1 (2), 4→3 (−5), 5→4 (6).]

D(0):
        1     2     3     4     5
  1 [   0     3     8     ∞    −4 ]
  2 [   ∞     0     ∞     1     7 ]
  3 [   ∞     4     0     ∞     ∞ ]
  4 [   2     ∞    −5     0     ∞ ]
  5 [   ∞     ∞     ∞     6     0 ]

Π(0):
        1     2     3     4     5
  1 [ NIL     1     1   NIL     1 ]
  2 [ NIL   NIL   NIL     2     2 ]
  3 [ NIL     3   NIL   NIL   NIL ]
  4 [   4   NIL     4   NIL   NIL ]
  5 [ NIL   NIL   NIL     5   NIL ]

Illustrating Floyd’s Algorithm with Predecessors: k = 1

D(1):
        1     2     3     4     5
  1 [   0     3     8     ∞    −4 ]
  2 [   ∞     0     ∞     1     7 ]
  3 [   ∞     4     0     ∞     ∞ ]
  4 [   2     5    −5     0    −2 ]
  5 [   ∞     ∞     ∞     6     0 ]

Π(1):
        1     2     3     4     5
  1 [ NIL     1     1   NIL     1 ]
  2 [ NIL   NIL   NIL     2     2 ]
  3 [ NIL     3   NIL   NIL   NIL ]
  4 [   4     1     4   NIL     1 ]
  5 [ NIL   NIL   NIL     5   NIL ]

Illustrating Floyd’s Algorithm with Predecessors: k = 2

D(2):
        1     2     3     4     5
  1 [   0     3     8     4    −4 ]
  2 [   ∞     0     ∞     1     7 ]
  3 [   ∞     4     0     5    11 ]
  4 [   2     5    −5     0    −2 ]
  5 [   ∞     ∞     ∞     6     0 ]

Π(2):
        1     2     3     4     5
  1 [ NIL     1     1     2     1 ]
  2 [ NIL   NIL   NIL     2     2 ]
  3 [ NIL     3   NIL     2     2 ]
  4 [   4     1     4   NIL     1 ]
  5 [ NIL   NIL   NIL     5   NIL ]

Illustrating Floyd’s Algorithm with Predecessors: k = 3

D(3):
        1     2     3     4     5
  1 [   0     3     8     4    −4 ]
  2 [   ∞     0     ∞     1     7 ]
  3 [   ∞     4     0     5    11 ]
  4 [   2    −1    −5     0    −2 ]
  5 [   ∞     ∞     ∞     6     0 ]

Π(3):
        1     2     3     4     5
  1 [ NIL     1     1     2     1 ]
  2 [ NIL   NIL   NIL     2     2 ]
  3 [ NIL     3   NIL     2     2 ]
  4 [   4     3     4   NIL     1 ]
  5 [ NIL   NIL   NIL     5   NIL ]

Illustrating Floyd’s Algorithm with Predecessors: k = 4

D(4):
        1     2     3     4     5
  1 [   0     3    −1     4    −4 ]
  2 [   3     0    −4     1    −1 ]
  3 [   7     4     0     5     3 ]
  4 [   2    −1    −5     0    −2 ]
  5 [   8     5     1     6     0 ]

Π(4):
        1     2     3     4     5
  1 [ NIL     1     4     2     1 ]
  2 [   4   NIL     4     2     1 ]
  3 [   4     3   NIL     2     1 ]
  4 [   4     3     4   NIL     1 ]
  5 [   4     3     4     5   NIL ]

Illustrating Floyd’s Algorithm with Predecessors: k = 5

D(5):
        1     2     3     4     5
  1 [   0     1    −3     2    −4 ]
  2 [   3     0    −4     1    −1 ]
  3 [   7     4     0     5     3 ]
  4 [   2    −1    −5     0    −2 ]
  5 [   8     5     1     6     0 ]

Π(5):
        1     2     3     4     5
  1 [ NIL     3     4     5     1 ]
  2 [   4   NIL     4     2     1 ]
  3 [   4     3   NIL     2     1 ]
  4 [   4     3     4   NIL     1 ]
  5 [   4     3     4     5   NIL ]

Getting Shortest Paths from the Π Matrix

The recursive algorithm using the predecessor matrix Π = Π(n) to
print the shortest path between vertices i and j:

algorithm PrintPath( Π, i, j )
if i = j then print i
else
  if πi,j = NIL then print “no path from i to j”
  else
    PrintPath( Π, i, πi,j )
    print j
  end if
end if
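The same recursion can return the path as a list instead of printing it (a sketch with 0-indexed vertices and `None` for NIL):

```python
def build_path(P, i, j):
    """Recover the i-to-j shortest path from predecessor matrix P.

    Mirrors the recursive PrintPath: walk predecessors back from j.
    Returns the list of vertices, or None if j is unreachable from i.
    """
    if i == j:
        return [i]
    if P[i][j] is None:
        return None                      # no path from i to j
    prefix = build_path(P, i, P[i][j])   # path from i to j's predecessor
    return None if prefix is None else prefix + [j]
```

On the Π(5) matrix of the illustration (shifted to 0-indexing), `build_path` reproduces the traces on the next slide.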

Illustrating PrintPath Algorithm

Π(5):
        1     2     3     4     5
  1 [ NIL     3     4     5     1 ]
  2 [   4   NIL     4     2     1 ]
  3 [   4     3   NIL     2     1 ]
  4 [   4     3     4   NIL     1 ]
  5 [   4     3     4     5   NIL ]

PrintPath( Π(5), 5, 3 )                 PrintPath( Π(5), 1, 2 )
→ PrintPath( Π(5), 5, π5,3 = 4 )        → PrintPath( Π(5), 1, π1,2 = 3 )
→ PrintPath( Π(5), 5, π5,4 = 5 )        → PrintPath( Π(5), 1, π1,3 = 4 )
  print 5                               → PrintPath( Π(5), 1, π1,4 = 5 )
  print 4                               → PrintPath( Π(5), 1, π1,5 = 1 )
  print 3                                 print 1
                                          print 5
                                          print 4
                                          print 3
                                          print 2
Single source shortest paths algorithms

Shortest paths

Let G = (V, E) be a directed graph, w : E → R a weight function.
The weight of a path P = (v1, v2, …, vk) is the sum of the
weights of its edges: w(P) = Σ_{i=2..k} w(v_{i−1}, v_i).
A shortest path from u to v is a path from u to v with
minimum weight. The shortest path is not necessarily unique.
δ(u, v) is the weight of a shortest path from u to v. If v is
not reachable from u then δ(u, v) = ∞.

[Figure: the running example digraph on vertices s, a, b, c, d with
edges s→a (10), s→c (5), a→b (1), a→c (2), b→d (4), c→a (3),
c→b (9), c→d (2), d→b (6), d→s (7).]
δ(s, b) = 9
δ(b, s) = 11
Single-source shortest paths

Given a weighted graph G and a vertex s, the goal is to compute
δ(s, v) for every vertex v.

[Figure: the same example digraph as above.]
δ(s, s) = 0, δ(s, a) = 8, δ(s, b) = 9, δ(s, c) = 5, δ(s, d) = 7

General structure for shortest paths algorithms

δ(s, v) = min_{u : (u,v) ∈ E} { δ(s, u) + w(u, v) }.

Store values v.d and v.π in each vertex.
  v.d is the length of the shortest path discovered so far from s
  to v (v.d ≥ δ(s, v)).
  v.π is the vertex preceding v on the shortest path discovered
  so far from s to v.
When processing an edge (u, v), we update v.d due to the
newly discovered path s ⇝ u → v.

Relax(u, v, w)
(1) if u.d + w(u, v) < v.d
(2)   v.d ← u.d + w(u, v)
(3)   v.π ← u

[Example: s →(2) u →(3) v with s.d = 0, u.d = 2, v.d = 6;
relaxing (u, v) lowers v.d from 6 to u.d + w(u, v) = 5.]
Directed acyclic graphs

DAG-Shortest-Paths(G, w, s)  // weights can be negative
(1) Initialize(G, s)
(2) Topologically sort the vertices of G
(3) foreach u ∈ G.V in topologically sorted order
(4)   foreach v ∈ G.Adj[u]
(5)     Relax(u, v, w)

Initialize(G, s)
(1) foreach v ∈ G.V
(2)   v.d ← ∞
(3)   v.π ← NULL
(4) s.d ← 0

[Example DAG, in topological order t, s, a, b, c, d:
t 5 s 2 a 7 b -1 c -2 d, with additional arcs of weights 6, 1 drawn
above and 3, 4, 2 drawn below the chain.]

Running the algorithm with source s, the d values
(t, s, a, b, c, d) evolve as follows:
  after Initialize:  ∞ 0 ∞ ∞ ∞ ∞
  processing s:      ∞ 0 2 6 ∞ ∞
  processing a:      ∞ 0 2 6 6 4
  processing b:      ∞ 0 2 6 5 4
  processing c:      ∞ 0 2 6 5 3
(t keeps d = ∞: it is not reachable from s.)

Claim: When a vertex u is processed in line 3, u.d = δ(s, u).
Proof: Due to the topological order, we already called
Relax(x, u, w) for every edge (x, u) entering u.
By induction, when Relax(x, u, w) is called, x.d = δ(s, x).
Thus, u.d = min_{x : (x,u) ∈ E} { δ(s, x) + w(x, u) } = δ(s, u).

Running time:
  Line 1: Θ(V)
  Line 2: Θ(V + E)
  Lines 3–5: Θ(V + E)
  Total time: Θ(V + E)
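A runnable sketch of DAG-Shortest-Paths, with the topological sort done by depth-first search. The example edge list in the test is an assumption read off the garbled figure; it reproduces the d values of the trace above.

```python
import math

def dag_shortest_paths(vertices, edges, s):
    """DAG-Shortest-Paths: topological sort, then relax each edge once.

    vertices: iterable of vertex names; edges: dict u -> list of (v, w).
    Negative weights are fine because the graph is acyclic.
    """
    # Topological sort by depth-first search (graph assumed acyclic).
    order, seen = [], set()
    def dfs(u):
        seen.add(u)
        for v, _ in edges.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in vertices:
        if u not in seen:
            dfs(u)
    order.reverse()

    d = {u: math.inf for u in vertices}
    pi = {u: None for u in vertices}
    d[s] = 0
    for u in order:                      # process in topological order
        for v, w in edges.get(u, []):
            if d[u] + w < d[v]:          # Relax(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
    return d, pi
```

Each edge is relaxed exactly once, giving the Θ(V + E) total above.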

Backward dynamic programming

DAG-Shortest-Paths(G, w, s)
(1) Initialize(G, s)
(2) Initialize array AdjR
(3) foreach u ∈ G.V
(4)   foreach v ∈ G.Adj[u]
(5)     Add u to AdjR[v]
(6) Topologically sort the vertices of G
(7) foreach v ∈ G.V in topologically sorted order
(8)   foreach u ∈ AdjR[v]
(9)     Relax(u, v, w)

[Same example DAG as above.]
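Lines 2–5 build the reversed adjacency structure AdjR; a minimal sketch of that step:

```python
from collections import defaultdict

def reverse_adjacency(edges):
    """Build AdjR: for each v, the list of (u, w) with an edge u -> v.

    edges: dict u -> list of (v, w).
    """
    adj_r = defaultdict(list)
    for u, out in edges.items():
        for v, w in out:
            adj_r[v].append((u, w))
    return adj_r
```

With AdjR in hand, vertex v’s final d value can be computed in one pass over its incoming edges, which is the backward (pull-based) form of the same dynamic program.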


Problem variants

Variant 1: Given G, s, compute the maximum weight of a path
from s to every vertex v.
Variant 2: Given a directed graph G with weights on the vertices
and a vertex s, compute the maximum weight of a path from s to
every vertex v.

Variant 1 can be solved by changing Relax(u, v, w) to
(1) if v.d < u.d + w(u, v)
(2)   v.d ← u.d + w(u, v)
(3)   v.π ← u

Variant 2 can be solved by changing Relax(u, v, w) to
(1) if v.d < u.d + w(v)
(2)   v.d ← u.d + w(v)
(3)   v.π ← u

(For maximisation, Initialize sets v.d ← −∞ instead of ∞.)
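Variant 1 for a DAG is the earlier algorithm with the comparison flipped and d initialised to −∞ (a sketch under those assumptions):

```python
import math

def dag_longest_paths(vertices, edges, s):
    """Variant 1 sketch: maximum path weight from s in a DAG.

    Same scheme as DAG-Shortest-Paths; unreachable vertices stay -inf.
    """
    order, seen = [], set()
    def dfs(u):                          # DFS topological sort
        seen.add(u)
        for v, _ in edges.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in vertices:
        if u not in seen:
            dfs(u)
    order.reverse()

    d = {u: -math.inf for u in vertices}
    d[s] = 0
    for u in order:
        for v, w in edges.get(u, []):
            if d[u] + w > d[v]:          # flipped comparison
                d[v] = d[u] + w
    return d
```

Note that this only works on DAGs; on general graphs the longest-path problem is NP-hard.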
General graphs

Dijkstra(G, w, s)  // weights are non-negative
(1) Initialize(G, s)
(2) Build a priority queue Q on the vertices (keys are d values)
(3) while Q is not empty
(4)   u ← ExtractMin(Q)
(5)   foreach v ∈ G.Adj[u]
(6)     Relax(u, v, w)

[Example: the digraph with vertices s, a, b, c, d and edges s→a (10),
s→c (5), a→b (1), a→c (2), b→d (4), c→a (3), c→b (9), c→d (2),
d→b (6), d→s (7).]

Trace (d values of s, a, b, c, d):
  Initialize, build Q:   Q = {s, a, b, c, d}, s.d = 0
  ExtractMin → s:        relax s→a: a.d = 10; relax s→c: c.d = 5
  ExtractMin → c:        relax c→a: a.d = 8; relax c→b: b.d = 14;
                         relax c→d: d.d = 7
  ExtractMin → d:        relax d→b: b.d = 13
  ExtractMin → a:        relax a→b: b.d = 9
  ExtractMin → b:        Q = ∅; done
Final: s.d = 0, a.d = 8, b.d = 9, c.d = 5, d.d = 7.

When the d values are the keys of the priority queue, Relax uses
DecreaseKey:

Relax(u, v, w)
(1) if u.d + w(u, v) < v.d
(2)   DecreaseKey(Q, v, u.d + w(u, v))
(3)   v.π ← u

Running time:
  Line 1: Θ(V)
  Line 2: Θ(V)
  Line 4: O(V log V)
  Line 6: O(E log V)
  Total: O((V + E) log V)
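A runnable sketch of Dijkstra’s algorithm with a binary heap. Python’s `heapq` has no DecreaseKey, so the standard workaround is used instead: push a new entry on every successful relaxation and skip stale entries on extraction ("lazy deletion"), which keeps the O((V + E) log V) flavour of the bound since at most one entry is pushed per edge.

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's algorithm; adj[u] = list of (v, w) with w >= 0."""
    dist = {u: float("inf") for u in adj}
    prev = {u: None for u in adj}
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)         # ExtractMin
        if d > dist[u]:
            continue                     # stale entry: skip
        for v, w in adj[u]:              # Relax(u, v, w)
            if d + w < dist[v]:
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev
```

On the running example it reproduces the trace above, including the successive improvements b.d = 14 → 13 → 9.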
Correctness

Claim: During the algorithm, u.d = δ(s, u) for every u ∉ Q.
The claim is proved using induction on time.
Consider the time t just before a vertex u is removed from Q.
Let
  P = a shortest path from s to u.
  y = the first vertex on P which is in Q.
  x = the vertex preceding y on P (x ∉ Q).
  P1 = the prefix of P up to y.
  P2 = the suffix of P from y.

  s ⇝ x → y ⇝ u
    P1        P2
Correctness

1. P1 is a shortest path from s to y.
2. By induction, x.d = δ(s, x) at the time x was removed from Q.
3. After x was removed from Q, Relax(x, y, w) was called.
   From 1, 2: y.d ≤ x.d + w(x, y) = δ(s, x) + w(x, y) = δ(s, y).
4. From 1: δ(s, y) = w(P1) ≤ w(P1) + w(P2) = w(P) = δ(s, u)
   (this is where non-negative weights are used: w(P2) ≥ 0).
5. From 3, 4: y.d ≤ δ(s, y) ≤ δ(s, u) ≤ u.d.
6. If y = u then y.d = u.d. Otherwise, since u is removed from
   Q at time t, y.d ≥ u.d.
7. From 5, 6: y.d = δ(s, y) = δ(s, u) = u.d.

  s ⇝ x → y ⇝ u
    P1        P2
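Step 4 is exactly where non-negativity matters. A hypothetical three-vertex example (vertex names made up for illustration) shows how a Dijkstra variant that finalises each extracted vertex goes wrong when an edge is negative:

```python
import heapq

def dijkstra_settled(adj, s):
    """Dijkstra that finalises each vertex when it leaves the queue.

    Correct only for non-negative weights: once u is settled, its
    distance is never revisited.
    """
    dist = {u: float("inf") for u in adj}
    dist[s] = 0
    settled = set()
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue
        settled.add(u)                   # u.d treated as final here
        for v, w in adj[u]:
            if d + w < dist[v] and v not in settled:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist
```

With edges s→a (2), s→b (5), b→a (−4), the true distance δ(s, a) is 1 via b, but a is settled at distance 2 before b is processed, so the later, shorter path is ignored.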
