
Unit 8: Graphs

․Course contents:
⎯ Elementary graph algorithms
⎯ Minimum spanning trees
⎯ Shortest paths
⎯ Maximum flow
․Reading:
⎯ Chapters 22, 23, 24, 25
⎯ Sections 26.1--26.3

Unit 8 Y.-W. Chang 1


Graphs
․A graph G = (V, E) consists of a set V of vertices
(nodes) and a set E of directed or undirected edges.
⎯ For analysis, we use V for |V| and E for |E|.
․Any binary relation is a graph.
⎯ Network of roads and cities, circuit representation, etc.



Representations of Graphs: Adjacency List
․Adjacency list: An array Adj of |V| lists, one for each
vertex in V. For each u ∈ V, Adj[u] contains pointers to all
the vertices adjacent to u.
․Advantage: O(V+E) storage; good for sparse graphs.
․Drawback: Need to traverse a list to find an edge.



Representations of Graphs: Adjacency Matrix
․Adjacency matrix: A |V| × |V| matrix A = (aij) such that
aij = 1 if (i, j) ∈ E, and aij = 0 otherwise.
․Advantage: O(1) time to find an edge.


․Drawback: O(V²) storage; more suitable for dense
graphs.
․How to save space if the graph is undirected?



Tradeoffs between Adjacency List and Matrix



Breadth-First Search (BFS)
BFS(G, s)
1. for each vertex u ∈ V[G] - {s}
2.     color[u] ← WHITE;
3.     d[u] ← ∞;
4.     π[u] ← NIL;
5. color[s] ← GRAY;
6. d[s] ← 0;
7. π[s] ← NIL;
8. Q ← {s};
9. while Q ≠ ∅
10.    u ← head[Q];
11.    for each vertex v ∈ Adj[u]
12.        if color[v] = WHITE
13.            color[v] ← GRAY;
14.            d[v] ← d[u] + 1;
15.            π[v] ← u;
16.            Enqueue(Q, v);
17.    Dequeue(Q);
18.    color[u] ← BLACK.

․ color[u]: white (undiscovered) → gray (discovered) → black (explored: out-edges are all discovered).
․ d[u]: distance from source s; π[u]: predecessor of u.
․ Use a queue Q for the gray vertices.
․ Time complexity: O(V+E) (adjacency list).



BFS Example

․ color[u]: white (undiscovered) → gray (discovered) → black (explored: out-edges are all discovered).
․ Use queue Q for gray vertices.
․ Time complexity: O(V+E) (adjacency list).
⎯ Each vertex is enqueued and dequeued once: O(V) time.

⎯ Each edge is considered once: O(E) time.

․ Breadth-first tree: Gπ = (Vπ, Eπ), Vπ = {v ∈ V | π[v] ≠ NIL} ∪ {s},
Eπ = {(π[v], v) ∈ E | v ∈ Vπ - {s}}.
Depth-First Search (DFS)
DFS(G)
1. for each vertex u ∈ V[G]
2.     color[u] ← WHITE;
3.     π[u] ← NIL;
4. time ← 0;
5. for each vertex u ∈ V[G]
6.     if color[u] = WHITE
7.         DFS-Visit(u).

DFS-Visit(u)
1. color[u] ← GRAY; /* white vertex u has just been discovered */
2. d[u] ← time ← time + 1;
3. for each vertex v ∈ Adj[u] /* explore edge (u, v) */
4.     if color[v] = WHITE
5.         π[v] ← u;
6.         DFS-Visit(v);
7. color[u] ← BLACK; /* blacken u; it is finished */
8. f[u] ← time ← time + 1.

․ color[u]: white (undiscovered) → gray (discovered) → black (explored: out-edges are all discovered).
․ d[u]: discovery time (gray); f[u]: finishing time (black); π[u]: predecessor.
․ Time complexity: O(V+E) (adjacency list).
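A minimal Python sketch of DFS with discovery/finishing times; the example graph is invented for illustration, and a one-element list stands in for the shared `time` counter:

```python
def dfs(adj):
    """DFS over all vertices of adj (dict: vertex -> neighbor list).
    Returns discovery times d, finishing times f, and predecessors pi."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    d, f, pi = {}, {}, {u: None for u in adj}
    time = [0]                 # boxed counter shared with the nested function

    def visit(u):
        color[u] = GRAY        # white vertex u has just been discovered
        time[0] += 1
        d[u] = time[0]
        for v in adj[u]:       # explore edge (u, v)
            if color[v] == WHITE:
                pi[v] = u
                visit(v)
        color[u] = BLACK       # blacken u; it is finished
        time[0] += 1
        f[u] = time[0]

    for u in adj:
        if color[u] == WHITE:
            visit(u)
    return d, f, pi

# hypothetical example: a -> b -> c, plus a shortcut edge a -> c
adj = {'a': ['b', 'c'], 'b': ['c'], 'c': []}
d, f, pi = dfs(adj)
```

On this graph the [d, f] intervals nest as the parenthesis property on the next slides predicts: c's interval sits inside b's, which sits inside a's.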



DFS Example

․ color[u]: white → gray → black.


․ Depth-first forest: Gπ = (V, Eπ), Eπ = {(π[v], v) ∈ E | v ∈ V, π[v] ≠ NIL}.
Parenthesis Property
․In any depth-first search of G, for u, v in G, exactly one
of the following three conditions holds:
⎯ [d[u], f[u]] and [d[v], f[v]] are disjoint.
⎯ [d[u], f[u]] is contained entirely within [d[v], f[v]], and u is a descendant of v.
⎯ [d[v], f[v]] is contained entirely within [d[u], f[u]], and v is a descendant of u.



Edge Classification
․Directed graph
⎯ Tree edge: (u, v) with white v.
⎯ Back edge: (u, v) with gray v.
⎯ Forward edge: (u, v) with black v, d[u] < d[v].
⎯ Cross edge: (u, v) with black v, d[u] > d[v].
․Undirected graph
⎯ Tree edge
⎯ Back edge



BFS Application: Lee’s Maze Router
․ Find a path from S to T by “wave propagation.”
․ We mainly discuss single-layer routing.
․ Strength: Guaranteed to find a minimum-length connection
between two terminals if one exists.
․ Weakness: Time & space complexity for an M × N grid:
O(MN) (huge!)

BFS + DFS: Soukup’s Maze Router
․ Depth-first (line) search is first directed toward target T until
an obstacle or T is reached.
․ Breadth-first (Lee-type) search is used to “bubble” around an
obstacle if an obstacle is reached.
․ Time and space complexities: O(MN), but 10--50 times
faster than Lee's algorithm.
․ Finds a path between S and T, but it may not be the shortest!



DFS Application: Hightower’s Maze Router
․A single escape point on each line segment.
․If a line runs parallel to the blocked cells, the escape point
is placed just past the endpoint of the segment.
․Time and space complexities: O(L), where L is the # of
line segments generated.



Topological Sort
․ A topological sort of a directed acyclic graph (DAG) G = (V, E) is a
linear ordering of V s.t. (u, v) ∈ E ⇒ u appears before v.
Topological-Sort(G)
1. call DFS(G) to compute finishing times f[v] for each vertex v
2. as each vertex is finished, insert it onto the front of a linked list
3. return the linked list of vertices
․ Time complexity: O(V+E) (adjacency list).
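The three-step procedure above can be sketched in Python; inserting each vertex at the front of a list as it finishes is the pseudocode's linked-list trick. The dressing-order DAG is a made-up example:

```python
def topological_sort(adj):
    """Topological sort of a DAG via DFS finishing times: as each vertex
    finishes, insert it onto the front of the output list."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        for v in adj[u]:
            if v not in visited:
                visit(v)
        order.insert(0, u)    # u is finished: front of the linked list

    for u in adj:
        if u not in visited:
            visit(u)
    return order

# hypothetical DAG: shirt before tie and belt, both before jacket
adj = {'shirt': ['tie', 'belt'], 'tie': ['jacket'],
       'belt': ['jacket'], 'jacket': []}
order = topological_sort(adj)
```

For every edge (u, v), u ends up before v in `order`, which is exactly the defining property of a topological sort.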



Strongly Connected Component (SCC)
․ A strongly connected component (SCC) of a directed graph
G = (V, E) is a maximal set of vertices U ⊆ V s.t. u ⇝ v and v ⇝ u
for every pair u, v ∈ U (u and v are mutually reachable).
Strongly-Connected-Components(G)
1. call DFS(G) to compute finishing times f[u] for each vertex u
2. compute Gᵀ (the transpose of G: G with every edge reversed)
3. call DFS(Gᵀ), but in the main loop of DFS, consider the
vertices in order of decreasing f[u] (as computed in line 1)
4. output the vertices of each tree in the depth-first forest of step
3 as a separate strongly connected component

․ Time complexity: O(V+E) (adjacency list).
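The four steps above can be sketched directly in Python (this is the classic two-pass DFS formulation; the second pass is written iteratively with an explicit stack, and the test graph is invented):

```python
def strongly_connected_components(adj):
    """Two-pass SCC algorithm: DFS for finishing order, then DFS on the
    transpose in decreasing finishing-time order."""
    order, seen = [], set()

    def visit(u):                       # pass 1: record finishing order
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                visit(v)
        order.append(u)                 # appended when u finishes

    for u in adj:
        if u not in seen:
            visit(u)

    gt = {u: [] for u in adj}           # compute the transpose G^T
    for u in adj:
        for v in adj[u]:
            gt[v].append(u)

    comps, assigned = [], set()
    for u in reversed(order):           # decreasing finishing time
        if u not in assigned:
            comp, stack = [], [u]       # one tree of the second DFS forest
            assigned.add(u)
            while stack:
                x = stack.pop()
                comp.append(x)
                for y in gt[x]:
                    if y not in assigned:
                        assigned.add(y)
                        stack.append(y)
            comps.append(comp)
    return comps

# hypothetical graph with SCCs {a, b} and {c, d}
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['d'], 'd': ['c']}
comps = strongly_connected_components(adj)
```

Each tree grown in the second pass is output as one SCC, "peeled off" in decreasing order of finishing times.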



Strongly Connected Component (SCC) Properties
․ If two vertices are in the same SCC, no path between them
ever leaves the component.
․ In any DFS forest, all vertices in the same SCC are in the
same tree.
․ Forefather φ(u): the vertex w such that u ⇝ w and f[w] is maximum.
․ The forefather φ(u) of any vertex u in a depth-first tree is an
ancestor of u.
⎯ φ(u) and u lie in the same SCC.
⎯ Forefathers are roots of the depth-first trees.



How Does the SCC Algorithm Work?
․ u and v lie in the same SCC iff they have the same forefather.
⎯ The SCCs are sets of vertices with the same forefather.
․ DFS(G) finds vertices with directed paths from roots; DFS(GT)
finds vertices with directed paths to roots (also with paths
from roots).
․ In line 3, when vertices are processed in order of decreasing
f[u], no edge from any previous depth-first tree goes into the
currently processed tree.
⎯ “Peel” off the SCCs one by one in decreasing order of finishing
times.



Minimum Spanning Tree (MST)
․ Given an undirected graph G = (V, E) with weights on the
edges, a minimum spanning tree (MST) of G is a subset T
⊆ E such that
⎯ T is connected and has no cycles,
⎯ T covers (spans) all vertices in V, and
⎯ sum of the weights of all edges in T is minimum.
․ |T| = |V| - 1.
․ Applications: circuit interconnection (minimizing tree radius),
communication network (minimizing tree diameter), etc.



Growing a Minimum Spanning Tree (MST)
․ Grows an MST by adding one safe edge at a time.
Generic-MST(G,w)
1. A ← ∅ ;
2. while A does not form a spanning tree
3. find an edge (u,v) that is safe for A;
4. A ← A ∪ {(u,v)};
5. return A.

․ A cut (S, V-S) of a graph G = (V, E) is a partition of V.


․ An edge (u, v) ∈ E crosses the cut (S, V-S) if one of its endpoints
is in S and the other is in V-S.
․ A cut respects the set A of edges if no edge in A crosses the cut.



Greedy-Choice Property for MST
․An edge is a light edge crossing a cut if its weight is
the minimum of any edge crossing the cut.
․A light edge crossing the cut (S, V-S) is safe for A.
⎯ G = (V, E).
⎯ A: subset of E included in some minimum spanning tree for G.
⎯ (S, V-S): any cut respecting A.
․If (u, v) is a light edge and (x, y) is the tree edge of T crossing
the cut, the exchanged tree T' satisfies w(T') = w(T) - w(x, y) + w(u, v) ≤ w(T).



Kruskal's MST Algorithm
MST-Kruskal(G,w)
1. A ← ∅ ;
2. for each vertex v ∈ V[G]
3. Make-Set(v);
4. sort the edges of E[G] by nondecreasing weight w;
5. for each edge (u,v) ∈ E[G], in order by nondecreasing weight
6. if Find-Set(u) ≠ Find-Set(v)
7. A ← A ∪ {(u, v)};
8. Union(u,v);
9. return A;

․ Add a safe edge at a time to the growing forest by finding an edge of


least weight (from those connecting two trees in the forest).
․ Time complexity: O(E lg E + V) (= O(E lg V + V), why?)
⎯ Lines 2--3: O(V).
⎯ Line 4: O(E lg E).
⎯ Lines 5--8: O(E) operations on the disjoint-set forest, so O(E α(E, V)).
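A compact Python sketch of Kruskal's algorithm; the disjoint-set forest here uses path compression only (no union by rank), and the weighted graph is a made-up example:

```python
def mst_kruskal(vertices, edges):
    """Kruskal's MST. edges: list of (weight, u, v) tuples.
    Returns the chosen edges as (u, v, weight)."""
    parent = {v: v for v in vertices}        # Make-Set for each vertex

    def find(x):                             # Find-Set with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # halve the path as we walk up
            x = parent[x]
        return x

    A = []
    for w, u, v in sorted(edges):            # nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                         # (u, v) connects two trees: safe
            A.append((u, v, w))
            parent[ru] = rv                  # Union
    return A

# hypothetical weighted graph
edges = [(1, 'a', 'b'), (4, 'a', 'c'), (2, 'b', 'c'), (7, 'c', 'd')]
mst = mst_kruskal(['a', 'b', 'c', 'd'], edges)
```

The sort dominates the running time, matching the O(E lg E) term in the analysis above.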



Example: Kruskal's MST Algorithm



Prim's (Prim-Dijkstra's?) MST Algorithm
MST-Prim(G,w,r)
/* Q: priority queue for vertices not in the tree, based on key[]. */
/* key: min weight of any edge connecting to a vertex in the tree. */
1. Q ← V[G];
2. for each vertex u ∈ Q
3. key[u] ← ∞;
4. key[r] ← 0;
5. π[r] ← NIL;
6. while Q ≠ ∅
7. u ← Extract-Min(Q);
8. for each vertex v ∈ Adj[u]
9. if v ∈ Q and w(u,v) < key[v]
10. π[v] ← u;
11. key[v] ← w(u,v)

․ Starts from a vertex and grows until the tree spans all the vertices.
⎯ The edges in A always form a single tree.
⎯ At each step, a safe, light edge connecting a vertex in A to a
vertex in V - A is added to the tree.
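A Python sketch of MST-Prim. Since `heapq` has no Decrease-Key, this variant pushes a fresh heap entry on every key update and skips stale entries when popped (a standard lazy-deletion substitute); the example graph is invented:

```python
import heapq

def mst_prim(adj, r):
    """Prim's MST from root r. adj: dict u -> list of (v, weight) pairs.
    Returns predecessors pi and final key values."""
    key = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    key[r] = 0
    in_tree = set()                      # vertices already spanned by A
    q = [(0, r)]                         # priority queue keyed on key[]
    while q:
        _, u = heapq.heappop(q)          # Extract-Min
        if u in in_tree:
            continue                     # stale entry: u already extracted
        in_tree.add(u)
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                pi[v] = u
                key[v] = w               # Decrease-Key, done lazily
                heapq.heappush(q, (w, v))
    return pi, key

# hypothetical undirected graph (each edge listed in both directions)
adj = {'a': [('b', 4), ('h', 8)], 'b': [('a', 4), ('h', 11)],
       'h': [('a', 8), ('b', 11)]}
pi, key = mst_prim(adj, 'a')
```

At every extraction the popped edge is a light edge crossing the cut (tree, non-tree), so it is safe.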
Example: Prim's MST Algorithm
(figure: successive iterations of the algorithm; the annotations show key[a]=0, π[a]=NIL; key[b]=4, π[b]=a; key[c]=8, π[c]=b; key[i]=2, π[i]=c; key[d]=7, π[d]=c; key[f]=4, π[f]=c; key[h]=8, π[h]=a, later improved to key[h]=7, π[h]=i; and key[g]=6, π[g]=i.)


Time Complexity of Prim's MST Algorithm
MST-Prim(G,w, r)
1. Q ← V[G];
2. for each vertex u ∈ Q
3. key[u] ← ∞;
4. key[r] ← 0;
5. π[r] ← NIL;
6. while Q ≠ ∅
7. u ← Extract-Min(Q);
8. for each vertex v ∈ Adj[u]
9. if v ∈ Q and w(u,v) < key[v]
10. π[v] ← u;
11. key[v] ← w(u,v)

․ Q is implemented as a binary heap: O(E lg V).
⎯ Lines 1--5: O(V).
⎯ Line 7: O(lg V) for Extract-Min, so O(V lg V) with the while loop.
⎯ Lines 8--11: O(E) operations, each takes O(lg V) time for Decrease-Key
(maintaining the heap property after changing a node).
․ Q is implemented as a Fibonacci heap: O(E + V lg V). (Fastest to date!)
․ |E| = O(V) ⇒ only O(E lg* V) time. (Fredman & Tarjan, 1987)
Complexity of Mergeable Heaps

․ Make-Heap(): creates and returns a new heap containing no
elements.
․ Minimum(H): returns a pointer to the node with the minimum
key.
․ Extract-Min(H): deletes the node with the minimum key.
․ Decrease-Key(H, x, k): assigns to node x the new key value
k, which is ≤ its current key value.
․ Delete(H, x): deletes node x from heap H.
Single-Source Shortest Paths (SSSP)
․The Single-Source Shortest Path (SSSP) Problem
⎯ Given: A directed graph G=(V, E) with edge weights, and a
specific source node s.
⎯ Goal: Find a minimum weight path (or cost) from s to every
other node in V.
․Applications: weights can be distances, times, wiring
costs, delays, etc.
․Special case: BFS finds shortest paths when all edge
weights are 1 (identical).



Negative Cycles in Shortest Paths
․ A weighted, directed graph G = (V, E) with the weight function w: E
→ R.

⎯ Weight of path p = &lt;v0, v1, …, vk&gt;: w(p) = Σi=1..k w(vi-1, vi).
⎯ Shortest-path weight from u to v: δ(u, v) = min{w(p): p is a path
from u to v} if such a path exists, and δ(u, v) = ∞ otherwise.
․ Warning! negative-weight edges/cycles are a problem.


⎯ Cycle <e, f, e> has weight -3 < 0 ⇒ δ(s, g) = - ∞.
⎯ Vertices h, i, j not reachable from s ⇒ δ(s, h) = δ(s, i) = δ(s, j) = ∞.
․ Do the algorithms still apply in the presence of negative-weight edges/cycles??



Optimal Substructure of a Shortest Path
․ Subpaths of shortest paths are shortest paths.
⎯ Let p = <v1, v2, …, vk> be a shortest path from vertex v1 to
vertex vk, and let pij = <vi, vi+1, …, vj> be the subpath of p from
vertex vi to vertex vj, 1 ≤ i ≤ j ≤ k. Then, pij is a shortest path
from vi to vj.
․ Suppose that a shortest path p from a source s to a vertex v can
be decomposed into s ⇝ u → v. Then, δ(s, v) = δ(s, u) + w(u, v).
․ For all edges (u, v) ∈ E, δ(s, v) ≤ δ(s, u) + w(u, v).



Relaxation
Initialize-Single-Source(G, s)
1. for each vertex v ∈ V[G]
2.     d[v] ← ∞; /* upper bound on the weight of a shortest path from s to v */
3.     π[v] ← NIL; /* predecessor of v */
4. d[s] ← 0;

Relax(u, v, w)
1. if d[v] > d[u] + w(u, v)
2.     d[v] ← d[u] + w(u, v);
3.     π[v] ← u;

․ d[v] ≤ d[u] + w(u, v) after calling Relax(u, v, w).
․ d[v] ≥ δ(s, v) during the relaxation steps; once d[v] achieves its
lower bound δ(s, v), it never changes.
․ Let s ⇝ u → v be a shortest path. If d[u] = δ(s, u) prior to the
call Relax(u, v, w), then d[v] = δ(s, v) after the call.



Dijkstra's Shortest-Path Algorithm
Dijkstra(G, w, s)
1. Initialize-Single-Source(G, s);
2. S ← ∅ ;
3. Q ← V[G];
4. while Q ≠ ∅
5. u ← Extract-Min(Q);
6. S ← S ∪ {u};
7. for each vertex v ∈ Adj[u]
8. Relax(u, v, w);

․ Combines greedy and dynamic-programming schemes.
․ Works only when all edge weights are nonnegative.
․ Executes essentially the same as Prim's algorithm.
․ Naive analysis: O(V²) time using adjacency lists.
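A Python sketch of Dijkstra's algorithm using a binary heap. As with the Prim sketch, `heapq` lacks Decrease-Key, so a stale-entry check substitutes for it; the example graph (and its weights) is assumed for illustration:

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's SSSP (nonnegative weights). adj: u -> [(v, w)].
    Returns shortest-path estimates d and predecessors pi."""
    d = {u: float('inf') for u in adj}
    pi = {u: None for u in adj}
    d[s] = 0
    q = [(0, s)]
    while q:
        du, u = heapq.heappop(q)        # Extract-Min; adds u to S
        if du > d[u]:
            continue                    # stale heap entry: skip
        for v, w in adj[u]:             # Relax(u, v, w)
            if d[v] > d[u] + w:
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(q, (d[v], v))
    return d, pi

# hypothetical graph
adj = {'s': [('u', 10), ('x', 5)], 'u': [('v', 1)],
       'x': [('u', 3), ('y', 2)], 'y': [('v', 6)], 'v': []}
d, pi = dijkstra(adj, 's')
```

Note how d[u] first becomes 10 via edge (s, u) and is later relaxed to 8 via x, mirroring the example slide.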



Example: Dijkstra's Shortest-Path Algorithm
(figure: successive iterations of the algorithm; the annotations show d[s]=0, π[s]=NIL; d[x]=5, π[x]=s; d[y]=7, π[y]=x; d[u]=10, π[u]=s, later improved to d[u]=8, π[u]=x; and d[v]=14, π[v]=x.)


Runtime Analysis of Dijkstra's Algorithm
Dijkstra(G, w, s)
1. Initialize-Single-Source(G, s);
2. S ← ∅ ;
3. Q ← V[G];
4. while Q ≠ ∅
5. u ← Extract-Min(Q);
6. S ← S ∪ {u};
7. for each vertex v ∈ Adj[u]
8. Relax(u, v, w);

․ Q is implemented as a linear array: O(V²).
⎯ Line 5: O(V) for Extract-Min, so O(V²) with the while loop.
⎯ Lines 7--8: O(E) operations, each takes O(1) time.
․ Q is implemented as a binary heap: O(E lg V).
⎯ Line 5: O(lg V) for Extract-Min, so O(V lg V) with the while loop.
⎯ Lines 7--8: O(E) operations, each takes O(lg V) time for Decrease-Key
(maintaining the heap property after changing a node).
․ Q is implemented as a Fibonacci heap: O(E + V lg V).



The Bellman-Ford Algorithm for SSSP
Bellman-Ford(G,w, s)
1. Initialize-Single-Source(G, s);
2. for i ← 1 to |V[G]|-1
3. for each edge (u, v) ∈ E[G]
4. Relax(u, v, w);
5. for each edge (u, v) ∈ E[G]
6. if d[v] > d[u] + w(u, v)
7. return FALSE;
8. return TRUE

․ Solves the case where edge weights can be negative.


․ Returns FALSE if there exists a negative-weight cycle
reachable from the source; TRUE otherwise.
․ Time complexity: O(VE).
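A Python sketch of Bellman-Ford over an edge list; the graphs in the test are invented examples, one feasible and one with a negative-weight cycle:

```python
def bellman_ford(vertices, edges, s):
    """Bellman-Ford SSSP. edges: list of (u, v, w); w may be negative.
    Returns (d, pi, ok); ok is False iff a negative-weight cycle is
    reachable from s."""
    INF = float('inf')
    d = {v: INF for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):    # |V| - 1 full passes
        for u, v, w in edges:             # Relax every edge
            if d[u] + w < d[v]:
                d[v] = d[u] + w
                pi[v] = u
    # one more pass: any still-relaxable edge implies a negative cycle
    ok = all(d[u] + w >= d[v] for u, v, w in edges)
    return d, pi, ok

edges = [('s', 'a', 6), ('s', 'b', 7), ('a', 'b', 8),
         ('b', 'a', -3), ('a', 'c', 5), ('b', 'c', 9)]
d, pi, ok = bellman_ford(['s', 'a', 'b', 'c'], edges, 's')
```

Each pass relaxes all |E| edges, giving the O(VE) bound; after pass i, all shortest paths using at most i edges are final.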



Example: The Bellman-Ford Algorithm



SSSPs in Directed Acyclic Graphs (DAGs)
DAG-Shortest-Paths(G, w, s)
1. topologically sort the vertices of G;
2. Initialize-Single-Source(G, s);
3. for each vertex u taken in topologically sorted order
4. for each vertex v ∈ Adj[u]
5. Relax(u, v, w);

․ Time complexity: O(V+E) (adjacency-list representation).
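The two-phase structure (topological sort, then one relaxation sweep) can be sketched as follows; the DAG is a made-up example, and negative weights would be fine here since no cycles exist:

```python
def dag_shortest_paths(adj, s):
    """SSSP in a DAG: topologically sort, then relax each vertex's
    out-edges once, in topological order. adj: u -> [(v, w)]."""
    order, visited = [], set()

    def visit(u):
        visited.add(u)
        for v, _ in adj[u]:
            if v not in visited:
                visit(v)
        order.insert(0, u)        # topological order by finishing time

    for u in adj:
        if u not in visited:
            visit(u)

    d = {u: float('inf') for u in adj}
    d[s] = 0
    for u in order:               # single sweep in topological order
        for v, w in adj[u]:
            if d[u] + w < d[v]:
                d[v] = d[u] + w
    return d

# hypothetical DAG
adj = {'r': [('s', 5)], 's': [('t', 2), ('x', 6)],
       't': [('x', 7)], 'x': []}
d = dag_shortest_paths(adj, 's')
```

By the time a vertex is processed, all of its predecessors already hold their final distances, so one pass suffices, for O(V+E) total.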



Linear Programming
․ Linear programming (LP): Mathematical program in which the
objective function is linear in the unknowns and the constraints
consist of linear equalities (inequalities).
․ Standard form:
min/max c1 x1 + c2 x2 + … + cn xn
subject to a11x1 + a12x2 + … + a1nxn ≤ b1
⋮
am1x1 + am2x2 + … + amnxn ≤ bm
and x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0.

․ Compact vector form:
min/max cᵀx
subject to Ax ≤ b and x ≥ 0,
where x is an n-dimensional column vector, cᵀ is an n-dimensional
row vector, A is an m × n matrix, and b is an m-dimensional
column vector. x ≥ 0 means that each component of x is
nonnegative.
Difference Constraints
․ System of difference constraints: Each row of the linear-programming
matrix A contains exactly one 1, one -1, and 0's for all other entries.
⎯ Form: xj - xi ≤ bk, 1 ≤ i, j ≤ n, 1 ≤ k ≤ m.

․ Example:

․ Equivalent to finding the unknowns xi, i = 1, 2, …, 5 s.t. the following


difference constraints are satisfied:



Constraint Graph
․ x = (x1, x2, …, xn) is a solution to Ax ≤ b of difference constraints ⇒ so is
x + d = (x1 + d, x2 + d, …, xn+ d).
․ Constraint graph: Weighted, directed graph G=(V, E).
⎯ V = { v0, v1, …, vn}

⎯ E = {(vi, vj): xj - xi ≤ bk} ∪ {(v0, v1), (v0, v2),…, (v0, vn)}

⎯ w(vi, vj) = bk if xj - xi ≤ bk is a difference constraint.

․ If G contains no negative-weight cycle, then x = (δ(v0, v1), δ(v0, v2), …,
δ(v0, vn)) is a feasible solution; otherwise, there is no feasible solution.
․ Example: x = (-5, -3, 0, -1, -4) and x' =(d-5, d-3, d, d-1, d-4) are feasible
solutions.

⎯ The triangle inequality δ(v0, vj) ≤ δ(v0, vi) + w(vi, vj) gives
xj - xi ≤ bk, with xj = δ(v0, vj), xi = δ(v0, vi), and bk = w(vi, vj).



All-Pairs Shortest Paths (APSP)
․ The All-Pairs Shortest Paths (APSP) Problem
⎯ Given: A directed graph G = (V, E) with edge weights.
⎯ Goal: Find a minimum-weight path (or cost) between every
pair of vertices in V.
․ Method 1: Extend the SSSP algorithms.
⎯ No negative-weight edges: Run Dijkstra's algorithm |V| times,
once with each v ∈ V as the source.
„ Adjacency list + Fibonacci heap: O(V² lg V + VE) time.
⎯ With negative-weight edges: Run the Bellman-Ford algorithm
|V| times, once with each v ∈ V as the source.
„ Adjacency list: O(V²E) time.
․ Method 2: Apply the Floyd-Warshall algorithm (negative-weight
edges allowed).
⎯ Adjacency matrix: O(V³) time.
․ Method 3: Apply Johnson's algorithm for sparse graphs
(negative-weight edges allowed).
⎯ Adjacency list: O(V² lg V + VE) time.



Overview of Floyd-Warshall’s APSP Algorithm
․ Applies dynamic programming.
1. Characterize the structure of an optimal solution?
2. Recursively define the value of an optimal solution?
3. Compute the value of an optimal solution in a bottom-up
fashion?
4. Construct an optimal solution from computed information?
․ Uses the adjacency matrix A for G = (V, E): aij = w(i, j) if (i, j) ∈ E;
aij = 0 if i = j; aij = ∞ otherwise.
․ Goal: Compute the |V| × |V| matrix D where D[i, j] = dij is the weight of
a shortest i-to-j path.
․ Allows negative-weight edges.
․ Runs in O(V³) time.



Shortest-Path Structure
․ An intermediate vertex of a simple path <v1, v2, …, vl> is any
vertex in {v2, v3, …, vl-1}.
․ Let dij(k) be the weight of a shortest path from vertex i to vertex j
with all intermediate vertices in {1, 2, …, k}.
⎯ The path does not use vertex k: dij(k) = dij(k-1).
⎯ The path uses vertex k: dij(k) = dik(k-1) + dkj(k-1).
․ Def: Dk[i, j] = dij(k) is the weight of a shortest i-to-j path with
intermediate vertices in {1, 2, …, k}.



The Floyd-Warshall Algorithm for APSP
Floyd-Warshall(W)
1. n ← rows[W]; /* W = A */
2. D(0) ← W;
3. for k ← 1 to n
4. for i ← 1 to n
5. for j ← 1 to n
6. D(k)[i, j] ← min(D(k-1)[i, j], D(k-1)[i, k] + D(k-1)[k, j]);
7. return D(n);

․D(k)[i, j] = dij(k): weight of a shortest i-to-j path with
intermediate vertices in {1, 2, …, k}.
⎯ D(0) = A: original adjacency matrix (paths are single
edges).
⎯ D(n) = (dij(n)): the final answer (dij(n) = δ(i, j)).
․Time complexity: O(V³).
․Question: How to detect negative-weight cycles?
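A Python sketch of the triple loop over a small weight matrix (the matrix entries are invented). It updates D in place, which is safe here because row k and column k never change during iteration k:

```python
def floyd_warshall(W):
    """Floyd-Warshall APSP on an n x n weight matrix W (list of lists,
    float('inf') for missing edges, 0 on the diagonal)."""
    n = len(W)
    D = [row[:] for row in W]                 # D(0) = W
    for k in range(n):                        # allow intermediate vertex k
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

INF = float('inf')
W = [[0,   3,   8],
     [INF, 0,   1],
     [2,   INF, 0]]
D = floyd_warshall(W)
```

Answering the slide's question: a negative-weight cycle exists iff some diagonal entry D[i][i] becomes negative.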
Constructing a Shortest Path
․ Predecessor matrix Π = (πij): πij is
⎯ NIL if either i=j or there is no path from i to j, or
⎯ Some predecessor of j on a shortest path from i

․ Compute Π(0), Π(1), …, Π(n), where πij(k) is defined to be the
predecessor of vertex j on a shortest path from i with all
intermediate vertices in {1, 2, …, k}.

Print-All-Pairs-Shortest-Path(Π, i, j)
1. if i=j
2. print i;
3. else if πij = NIL
4. print “no path from i to j exists”;
5. else Print-All-Pairs-Shortest-Path(Π, i, πij);
6. print j;
Example: The Floyd-Warshall Algorithm



Transitive Closure of a Directed Graph
․ The Transitive Closure Problem:
⎯ Input: a directed graph G = (V, E)
⎯ Output: a |V| × |V| matrix T s.t. T[i, j] = 1 iff there is a path from i to j
in G.
․ Method 1: Run the Floyd-Warshall algorithm on a unit-weight
graph. If there is a path from i to j (T[i, j] = 1), then dij &lt; n; dij = ∞
otherwise.
⎯ Runs in O(V³) time and uses O(V²) space.
⎯ Needs to compute lengths of shortest paths.
․ Method 2: Define tij(k) = 1 if there exists a path in G from i to j with
all intermediate vertices in {1, 2, …, k}, and tij(k) = 0 otherwise.
․ For k ≥ 1, tij(k) = tij(k-1) ∨ (tik(k-1) ∧ tkj(k-1)).
⎯ Runs in O(V³) time and uses O(V²) space, but only single-bit Boolean
values are involved ⇒ faster!



Transitive-Closure Algorithm
Transitive-Closure(G)
1. n ← |V[G]|;
2. for i ← 1 to n
3. for j ← 1 to n
4. if i = j or (i, j) ∈ E[G]
5. tij(0) ← 1;
6. else tij(0) ← 0;
7. for k ← 1 to n
8. for i ← 1 to n
9. for j ← 1 to n
10. tij(k) ← tij(k-1) ∨ (tik(k-1) ∧ tkj(k-1));
11. return T(n).



Johnson's Algorithm for Sparse Graphs
․ Idea: If all edge weights are nonnegative, Dijkstra's algorithm is
applicable (+ Fibonacci-heap priority queue: O(V² lg V + VE)).
⎯ Reweighting: If there exists some edge weight w &lt; 0, reweight
to a new weight function ŵ with ŵ ≥ 0.
⎯ ŵ must satisfy two properties:
1. ∀u, v ∈ V, a path is a shortest path from u to v w.r.t. w ⇔ it is a
shortest path from u to v w.r.t. ŵ.
2. ∀e ∈ E, ŵ(e) ≥ 0.
․ Preserving shortest paths by reweighting: Given directed G = (V,
E) with weight function w: E → R, let h: V → R. For each edge (u, v)
∈ E, define
ŵ(u, v) = w(u, v) + h(u) - h(v).
Let p = &lt;v0, v1, …, vk&gt;. Then
1. w(p) = δ(v0, vk) iff ŵ(p) = δ̂(v0, vk).
2. G has a negative cycle w.r.t. w iff G has a negative cycle
w.r.t. ŵ.



Preserving Shortest Paths by Reweighting

1. Show that w(p) = δ(v0, vk) ⇒ ŵ(p) = δ̂(v0, vk). Note that for any
path p = &lt;v0, …, vk&gt;, the h terms telescope: ŵ(p) = w(p) + h(v0) - h(vk).
Suppose ∃ p' = &lt;v0, …, vk&gt; with ŵ(p') &lt; ŵ(p)
⇒ w(p') &lt; w(p), contradiction!
(ŵ(p) = δ̂(v0, vk) ⇒ w(p) = δ(v0, vk): similar!)
2. Show that G has a negative cycle w.r.t. w iff G has a negative
cycle w.r.t. ŵ. Cycle c = &lt;v0, v1, …, vk = v0&gt; ⇒ ŵ(c) = w(c) + h(v0)
- h(v0) = w(c).
Producing Nonnegative Weights by Reweighting
1. Construct a new graph G' = (V', E'), where
V' = V ∪ {s}
E' = E ∪ {(s, v): v ∈ V}
w(s, v) = 0, ∀v ∈ V
2. G has no negative cycle ⇔ G' has no negative cycle.
3. If G and G' have no negative cycles, define h(v) = δ(s, v), ∀v ∈ V'.
4. Define the new weight: ŵ(u, v) = w(u, v) + h(u) - h(v)
≥ 0 (since δ(s, v) ≤ δ(s, u) + w(u, v)).



Johnson's Algorithm for APSP
Johnson(G)
1. compute G', where V[G']=V[G] ∪ {s} and
E[G']=E[G] ∪{(s,v):v ∈ V[G]};
2. if Bellman-Ford(G', w, s) = FALSE
3. print “the input graph contains a negative-weight cycle”;
4. else for each vertex v ∈ V[G']
5. set h(v) to the value of δ(s,v)
computed by the Bellman-Ford algorithm;
6. for each edge (u, v) ∈ E[G']
7. ŵ(u, v) ← w(u, v) + h(u) - h(v);
8. for each vertex u ∈ V[G]
9. run Dijkstra(G, ŵ, u) to compute δ̂(u, v) for all v ∈ V[G];
10. for each vertex v ∈ V[G]
11. duv ← δ̂(u, v) + h(v) - h(u);
12. return D;

․ Steps: Compute the new graph ⇒ Bellman-Ford ⇒ Reweighting ⇒
Dijkstra's (+ Fibonacci heap) ⇒ Recompute the weights.
․ Time complexity: O(V) + O(VE) + O(E) + O(V² lg V + VE) + O(V²) =
O(V² lg V + VE).
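A compact Python sketch of the whole pipeline: Bellman-Ford from a fresh supersource, reweighting, then Dijkstra from each vertex. A binary heap with lazy deletion stands in for the Fibonacci heap, and the test graphs are invented:

```python
import heapq

def johnson(vertices, edges):
    """Johnson's APSP. edges: list of (u, v, w); negative w allowed.
    Returns D with D[u][v] = delta(u, v), or None on a negative cycle."""
    # 1. G': supersource s with a 0-weight edge to every vertex
    s = object()                          # fresh vertex not in V
    h = {v: float('inf') for v in vertices}
    h[s] = 0
    ext = edges + [(s, v, 0) for v in vertices]
    for _ in range(len(vertices)):        # Bellman-Ford: |V'| - 1 passes
        for u, v, w in ext:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    if any(h[u] + w < h[v] for u, v, w in ext):
        return None                       # negative-weight cycle
    # 2. reweight: w_hat(u, v) = w + h(u) - h(v) >= 0 by triangle inequality
    adj = {v: [] for v in vertices}
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # 3. Dijkstra from every vertex, then undo the reweighting
    D = {}
    for src in vertices:
        d = {v: float('inf') for v in vertices}
        d[src] = 0
        q = [(0, src)]
        while q:
            du, u = heapq.heappop(q)
            if du > d[u]:
                continue                  # stale heap entry
            for v, w in adj[u]:
                if d[u] + w < d[v]:
                    d[v] = d[u] + w
                    heapq.heappush(q, (d[v], v))
        D[src] = {v: d[v] + h[v] - h[src] for v in vertices}
    return D

edges = [('a', 'b', -2), ('b', 'c', 3), ('a', 'c', 4)]
D = johnson(['a', 'b', 'c'], edges)
```

Unreachable pairs simply keep the value ∞, since adding the finite h terms leaves it infinite.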
Example: Johnson's Algorithm for APSP

(figure: reweighting one edge by ŵ(u, v) = w(u, v) + h(u) - h(v), e.g. 1 = 2 + (-1) - 0.)



Maximum Flow
․ Flow network: directed G=(V, E)
⎯ capacity c(u, v) : c(u, v) > 0, ∀(u, v) ∈ E; c(u, v) = 0, ∀(u, v) ∉ E.
⎯ Exactly one node with no incoming (outgoing) edges, called the
source s (sink t).
․ Flow f: V × V → R that satisfies
⎯ Capacity constraint: f(u, v) ≤ c(u, v), ∀u, v ∈ V.
⎯ Skew symmetry: f(u, v) = -f(v, u).
⎯ Flow conservation: ∑v ∈ Vf(u, v) = 0, ∀u ∈ V - {s, t}.
․ Value of a flow f: |f| = ∑v ∈ V f(s, v) = ∑v ∈ V f(v, t), where f(u, v) is the
net flow from u to v.
․ The maximum flow problem: Given a flow network G with source
s and sink t, find a flow of maximum value from s to t.



Network with Multiple Sources and Sinks
․ Reduce a multiple-source, multiple-sink maximum-flow
problem to a problem with a single source and a single sink.
⎯ Given a flow network with sources S = {s1, s2, …, sm} and sinks T = {t1,
t2, …, tn}, introduce a supersource s and edges (s, si), i = 1,
2, …, m, with capacity c(s, si) = ∞ and a supersink t and
edges (ti, t), i = 1, 2, …, n, with capacity c(ti, t) = ∞.
⎯ Any flow in a multiple-source, multiple-sink flow network
corresponds to a flow of identical value in a single-source,
single-sink flow network, and vice versa.
⎯ Reduction cost?



Working with Flows
․If X, Y ⊆ V, f(X, Y) = ∑x ∈ X ∑y ∈ Yf(x, y).
․G= (V, E): flow network, f: flow in G
⎯ ∀ X ⊆ V, f(X, X) = 0.
⎯ ∀ X, Y ⊆ V, f(X, Y) = - f(Y, X).
⎯ ∀ X, Y, Z ⊆ V with X ∩ Y = ∅ ,
f(X ∪Y, Z) = f(X, Z) + f(Y, Z) and f(Z, X ∪Y) = f(Z, X) + f(Z, Y).
․Value of flow = net flow into the sink: |f| = f(V, t).
|f| = f (s, V) /* by definition */
= f (V, V) - f (V - s, V) /* V - s ≡ V - {s} */
= f (V, V - s)
= f (V, t) + f(V, V - s - t)
= f (V, t).
․Cancelling flow going in the opposite direction:



Basic Ford-Fulkerson Method
Ford-Fulkerson-Method(G, s, t)
1. Initialize flow f to 0;
2. while there exists an augmenting path p
3. Augment flow f along p;
4. return f;

․ Ford & Fulkerson, 1956


․ Augmenting path: A path from s to t along which we can
push more flow.
․ Need to construct a residual network to find an
augmenting path.



Residual Network
․ Construct a residual network to find an augmenting path.
․ Residual capacity of edge (u, v), cf(u, v): Amount of additional
net flow that can be pushed from u to v before exceeding c(u, v),
cf (u, v) = c(u, v) - f(u, v).
․ Gf = (V, Ef): residual network of G = (V, E) induced by f, where
Ef = {(u, v) ∈ V × V: cf(u, v) &gt; 0}.
․ The residual network contains residual edges that can admit a
positive net flow (|Ef| ≤ 2|E|).
․ Let f and f' be flows in G and Gf, respectively. The flow sum f + f':
V × V → R, (f + f')(u, v) = f(u, v) + f'(u, v), is a flow in G with value
|f + f'| = |f| + |f'|.



Augmenting Path
․ An augmenting path p is a simple path from s to t in the residual
network Gf.
⎯ (u, v) ∈ E on p in the forward direction (a forward edge), f(u, v) < c(u,
v).
⎯ (u, v) ∈ E on p in the reverse direction (a backward edge), f(u, v) > 0.
․ Residual capacity of p, cf (p): Maximum amount of net flow that
can be pushed along the augmenting path p, i.e.,
cf (p) = min{cf (u, v) : (u, v) is in p}.
․ Let p be an augmenting path in Gf. Define fp: V × V → R by
fp(u, v) = cf(p) if (u, v) is on p; fp(u, v) = -cf(p) if (v, u) is on p;
fp(u, v) = 0 otherwise.
Then, fp is a flow in Gf with value |fp| = cf(p) &gt; 0.



Cuts of Flow Networks
․ A cut (S, T) of flow network G = (V, E) is a partition of
V into S and T = V - S such that s ∈ S and t ∈ T.
⎯ Capacity of a cut: c(S, T) =∑u ∈ S, v ∈ T c(u, v). (Count only
forward edges!)
⎯ f(S, T) = |f| ≤ c(S, T), where f(S, T) is net flow across the cut
(S, T).
․ Max-flow min-cut theorem: The following conditions
are equivalent
1. f is a max-flow.
2. Gf contains no augmenting path.
3. |f| = c(S, T) for some cut (S, T).



Ford-Fulkerson Algorithm
Ford-Fulkerson(G, s, t)
1. for each edge (u, v) ∈ E[G]
2. f[u, v] ← 0;
3. f[v, u] ← 0;
4. while there exists a path p from s to t in the residual network Gf
5. cf (p) ←min {cf (u, v): (u, v) is in p};
6. for each edge (u, v) in p
7. f[u, v] ← f[u, v] + cf (p);
8. f[v, u] ← -f[u, v];

․ Time complexity (assuming integral capacities): O(E |f *|),
where f * is the maximum flow.
⎯ Each run augments the flow value by at least 1 ⇒ at most |f *| runs.
⎯ Each run takes O(E) time (using BFS or DFS).
⎯ Polynomial-time algorithm?



Example: Ford-Fulkerson Algorithm



Edmonds-Karp Algorithm
․ The Ford-Fulkerson algorithm may be bad when |f *| is very large.
⎯ 2M iterations in the worst case vs. 2 iterations.

․ Edmonds-Karp Algorithm (1969): Find an augmenting path by


shortest path (minimum # of edges on the path), i.e., use breadth-
first search (BFS).
⎯ Using the Edmonds-Karp algorithm, the shortest-path distance δf (s, v)
in the residual network Gf increases monotonically with each flow
augmentation.
⎯ The # of flow augmentations performed by the Edmonds-Karp
algorithm is at most O(VE).
⎯ Each flow augmentation can be found in O(E) time by BFS.
⎯ Total running time of the Edmonds-Karp algorithm is O(VE²).
․ Goldberg &amp; Tarjan (1985): O(EV lg(V²/E)); Ahuja, Orlin, Tarjan
(1989): O( ), U = max. edge capacity.
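A Python sketch of the Edmonds-Karp method: Ford-Fulkerson with BFS so each augmenting path has the minimum number of edges. The capacity dictionary is a made-up example:

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow via Ford-Fulkerson with BFS augmenting paths.
    cap: dict of dicts, cap[u][v] = capacity of edge (u, v)."""
    # residual capacities, with reverse edges initialized to 0
    res = {u: dict(cs) for u, cs in cap.items()}
    for u, cs in cap.items():
        for v in cs:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS in the residual network for a shortest augmenting path
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow              # no augmenting path: flow is maximum
        path = []                    # recover the s-to-t path
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        cf = min(res[u][v] for u, v in path)   # residual capacity of p
        for u, v in path:            # augment along p
            res[u][v] -= cf
            res[v][u] += cf          # reverse edge lets flow be cancelled
        flow += cf

# hypothetical flow network
cap = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2},
       'b': {'t': 3}, 't': {}}
maxflow = edmonds_karp(cap, 's', 't')
```

Because BFS is used, at most O(VE) augmentations occur, each costing O(E), for the O(VE²) total stated above.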
Maximum Cardinality Matching
․ Given an undirected G=(V, E), M ⊆ E is a matching iff at
most one edge of M is incident on v, ∀ v ∈ V.
⎯ v ∈ V is matched if some edge in M is incident on v; otherwise, v is
unmatched.
․ M is a maximum matching iff |M| ≥ |M'| ∀ matching M'.

(figure: examples contrasting maximal matchings with maximum matchings.)


Application: Maximum Cardinality Bipartite Matching
․ Given a bipartite graph G = (V, E), V = L ∪ R, construct a unit-
capacity flow network G' = (V', E'):
V' = V ∪ {s, t}
E '= {(s, u): u ∈ L} ∪{(u, v): u ∈ L, v ∈ R, (u, v) ∈ E} ∪{(v, t): v ∈ R}.
․ The cardinality of a maximum matching in G = the value of a
maximum flow in G' (i.e., |M| = |f|).
․ Time complexity: O(VE) by the Ford-Fulkerson algorithm.
⎯ Value of maximum flow ≤ min(|L|, |R|) = O(V).
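Specialized to unit capacities, Ford-Fulkerson on G' reduces to the following augmenting-path sketch: each `augment` call searches for one s-to-t augmenting path, rematching already-matched right vertices along the way. The bipartite graph is a made-up example:

```python
def max_bipartite_matching(L, adj):
    """Maximum bipartite matching. adj: u in L -> list of neighbors in R.
    Equivalent to unit-capacity Ford-Fulkerson on the network G'."""
    match = {}                      # match[v] = u for each matched v in R

    def augment(u, seen):
        """Try to find an augmenting path starting at free left vertex u."""
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or v's current partner can be rematched elsewhere
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    size = 0
    for u in L:                     # at most min(|L|, |R|) augmentations
        if augment(u, set()):
            size += 1
    return size, match

# hypothetical bipartite graph: left side l1..l3, right side r1, r2
adj = {'l1': ['r1', 'r2'], 'l2': ['r1'], 'l3': ['r2']}
size, match = max_bipartite_matching(['l1', 'l2', 'l3'], adj)
```

Here only two of the three left vertices can be matched, so the maximum matching has cardinality 2, matching the max-flow value in G'.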

