
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow

VU Algorithmics
Part IV: Network Flows
Mario Ruthmair, Markus Leitner, Günther Raidl
Algorithms and Data Structures Group
Institute of Computer Graphics and Algorithms
Vienna University of Technology
WS 2011/12
VU Algorithmics M. Ruthmair 1
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Topics of this part
Maximum Flow Basics
Ford-Fulkerson Maximum Flow Algorithm
Preflow-Push Maximum Flow Algorithm
Networks with Lower Capacity Bounds
Minimum Cost Flow Problem
VU Algorithmics M. Ruthmair 2
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows in Networks
VU Algorithmics M. Ruthmair 3
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
The maximum flow problem arises in a wide variety of situations:
Transport of petroleum products in a pipeline network.
Transmission of data between two stations in a telecommunication network.
. . .
Subproblem in the solution of other, more difficult network problems, e.g. minimum cost flow.
Literature:
R.K. Ahuja, T.L. Magnanti, J.B. Orlin: Network Flows,
Prentice Hall, 1993
VU Algorithmics M. Ruthmair 4
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Definition 1 (Flow Network)
A flow network is a 5-tuple A = (V, A, c, s, t), with (V, A) being a directed graph with node set V and arc set A, a function c : A → R≥0, and two nodes s, t ∈ V, s ≠ t. The function c associates nonnegative capacities with the arcs (u, v); node s is called the source, node t the target or sink.
Extension: c(a) = 0 for all arcs a ∈ (V × V) \ A.
VU Algorithmics M. Ruthmair 5
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Assumption
The network A is connected
(guaranteed by the extension c(a) = 0 ∀a ∈ (V × V) \ A).
Assumption
The network A does not contain a directed path from s to t composed only of infinite-capacity arcs.
Assumption
The network A does not contain parallel arcs (i.e., two or more arcs with the same tail and head nodes).
VU Algorithmics M. Ruthmair 6
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Definition 2 (Flow)
A flow is a real function f : V × V → R with the following three properties:
1. Skew symmetry: f(u, v) = −f(v, u) ∀u, v ∈ V
2. Capacity constraints: f(u, v) ≤ c(u, v) ∀u, v ∈ V
3. Flow conservation: Σ_{v∈V} f(u, v) = 0 ∀u ∈ V \ {s, t}
f(u, v) is the net flow from node u to node v:
a real flow of 4 units from u to v and of 3 units from v to u gives the net flow f(u, v) = 1 and f(v, u) = −1.
VU Algorithmics M. Ruthmair 7
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: A flow network A with associated arc capacities c(u, v); nodes s, v_1, ..., v_4, t.]
VU Algorithmics M. Ruthmair 8
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: A flow f (red) in the network A.]
VU Algorithmics M. Ruthmair 9
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Definition 3 (Value of a Flow, Maximum Flow)
The value of a flow f is the total amount of flow reaching the sink t:
|f| = Σ_{v∈V} f(v, t).
f* is a maximum flow if there is no flow g with |g| > |f*|.
Definition 4 (Residual Capacity)
Given a flow f in the network A, the residual capacity of an arc a ∈ V × V with respect to f is defined as r_f(a) = c(a) − f(a).
Notice: An arc a with r_f(a) > 0 is called a residual arc; additional flow can be sent along it. Otherwise (r_f(a) = 0) the arc is called saturated.
VU Algorithmics M. Ruthmair 10
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Definition 5 (Residual Network)
The graph G_f = (V, A_f), with A_f being the set of all residual arcs (i.e., all arcs a with r_f(a) > 0), is called the residual network with respect to a given flow f.
Definition 6 (Augmenting Path)
A path P in the residual network G_f from source s to sink t is called an augmenting path with respect to a given flow f.
Notice: An augmenting path in G_f can be used to increase the flow from s to t, leading to a new flow f′ and residual network G_{f′}.
VU Algorithmics M. Ruthmair 11
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Definition 7 (Push)
The basic operation of augmenting a flow f along an arc (u, v) ∈ A by some value x is referred to as a push.
Notice: f′(u, v) = f(u, v) + x, f′(v, u) = f(v, u) − x.
Increasing a flow f in a network A:
1. Find an augmenting path P from s to t in G_f; x ← minimum residual capacity along P.
2. Push x along P (this saturates at least one arc) → f′.
3. |f′| = |f| + x
VU Algorithmics M. Ruthmair 12
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: A flow f (red) in the network A.]
VU Algorithmics M. Ruthmair 13
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: The residual network G_f with respect to f.]
VU Algorithmics M. Ruthmair 14
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: An augmenting path P in the residual network G_f.]
VU Algorithmics M. Ruthmair 15
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: Minimum residual capacity along P: 4.]
VU Algorithmics M. Ruthmair 16
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: Flow f before the push operation.]
VU Algorithmics M. Ruthmair 17
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
[Figure: Flow f′ after the push operation (4 units along P).]
VU Algorithmics M. Ruthmair 18
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flows: Introduction
Question: Flow f → augmenting path P in G_f → pushes along P → new flow f′; but is f′ really a flow (as defined before)?
Answer: Yes.
Proof:
Skew symmetry:
See the push operation: f(u, v) + x, f(v, u) − x.
Capacity constraints:
Construction of G_f: the residual capacity r_f(a) gives the maximum amount of additional flow the arc a can carry.
Flow conservation:
For all u ∈ V \ {s, t} along P the net flow remains 0: x is added to the incoming as well as the outgoing flow of u.
VU Algorithmics M. Ruthmair 19
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Definition 8 (Cut, Capacity of a Cut)
A cut is a set of nodes S ⊂ V with s ∈ S and t ∈ S̄, where S̄ := V \ S; i.e., a cut is a partition of V into two non-empty sets S and S̄.
The capacity of a cut is defined as the total capacity of all arcs crossing the cut from S to S̄:
c(S, S̄) = Σ_{u∈S} Σ_{v∈S̄} c(u, v).
The number of possible cuts is 2^{|V|−2}.
VU Algorithmics M. Ruthmair 20
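To make Definition 8 concrete, here is a tiny Python sketch (an illustration, not lecture code) that evaluates c(S, S̄) for a capacity dictionary; the arc directions of the example network are assumptions based on the figures.

def cut_capacity(cap, S):
    # cap[(u, v)] = c(u, v); S = node set containing s. Returns c(S, S-bar).
    return sum(c for (u, v), c in cap.items() if u in S and v not in S)

# assumed capacities of the slides' example network
cap = {('s', 'v1'): 16, ('s', 'v2'): 13, ('v1', 'v2'): 10, ('v2', 'v1'): 4,
       ('v1', 'v3'): 12, ('v3', 'v2'): 9, ('v2', 'v4'): 14, ('v4', 'v3'): 7,
       ('v3', 't'): 20, ('v4', 't'): 4}
print(cut_capacity(cap, {'s', 'v1', 'v2', 'v4'}))   # 12 + 7 + 4 = 23 for this cut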
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Lemma 9 (Flow / Cut Capacity)
No flow f in a network A can have a value greater than the capacity of any cut S.
Proof (using the definition of a flow, properties 1-3):
|f| = f(V, t) = f(V, S̄) [by (3)] = f(S, S̄) + f(S̄, S̄) = f(S, S̄) ≤ c(S, S̄) [by (2)],
where f(A, B) = Σ_{u∈A} Σ_{v∈B} f(u, v).
Implication: |f| = f(S, S̄) for any flow f and any cut S in A.
Definition 10 (Saturated Cut)
A flow f saturates a cut S iff f(S, S̄) = c(S, S̄).
VU Algorithmics M. Ruthmair 21
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Theorem 11 (Max-Flow / Min-Cut)
Let f be a flow in a network A; then the following three conditions are equivalent:
1. There is a cut S in A saturated by f.
2. f is a maximum flow in A.
3. There is no augmenting path P in the residual network G_f.
Ford/Fulkerson and Elias/Feinstein/Shannon, 1956
The max-flow min-cut theorem is one of the central statements in optimization theory!
VU Algorithmics M. Ruthmair 22
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
[Figure: Flow f′ (after a push operation).]
VU Algorithmics M. Ruthmair 23
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
[Figure: Residual network G_{f′} with respect to flow f′.]
VU Algorithmics M. Ruthmair 24
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
[Figure: No augmenting path from s to t in G_{f′}.]
VU Algorithmics M. Ruthmair 25
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
[Figure: Maximum flow f* in the network A.]
VU Algorithmics M. Ruthmair 26
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Proof (max-flow min-cut theorem, circular reasoning):
1 ⇒ 2: saturated cut ⇒ maximum flow [based on Lemma 9]
For any flow g and any cut S: |g| ≤ c(S, S̄)
⇒ f is a maximum flow, because f(S, S̄) = c(S, S̄) due to condition 1.
2 ⇒ 3: maximum flow ⇒ no augmenting path
If there were an augmenting path, f could be increased by some positive value x ⇒ f would not be a maximum flow.
3 ⇒ 1: no augmenting path ⇒ saturated cut
Let S be the set of nodes in G_f reachable from s ⇒ s ∈ S, and t ∉ S due to condition 3 ⇒ S is a cut.
∀(u, v) ∈ A, u ∈ S, v ∈ S̄: f(u, v) = c(u, v), otherwise r_f(u, v) > 0 ⇒ (u, v) ∈ G_f ⇒ v could be reached from s ⇒ contradiction to the definition of S.
VU Algorithmics M. Ruthmair 27
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Background: Why is it called the Max-Flow Min-Cut theorem?
Let f be a maximum flow in A. Lemma 9 states that |f| is bounded above by the capacity of any cut. Therefore a cut S with c(S, S̄) = |f| has to be a cut of minimum capacity; such a saturated cut exists due to the theorem (2 ⇒ 1).
Suppose there is a cut S′ with c(S′, S̄′) < c(S, S̄); then the following must hold:
|f| ≤ c(S′, S̄′) < c(S, S̄) = |f| ⇒ contradiction.
The problem became of major interest during the Cold War between the United States and the Soviet Union (from the mid-1940s until the early 1990s): Where to hit the Soviet rail system to prevent transport of troops and supplies to Eastern Europe?
VU Algorithmics M. Ruthmair 28
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow / Minimum Cut
Harris, Ross: Fundamentals of a Method for Evaluating Rail Net Capacities
Research Memorandum RM-1537, 1955
VU Algorithmics M. Ruthmair 29
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm
VU Algorithmics M. Ruthmair 30
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm
Proposed by Ford and Fulkerson 1962.
Directly motivated by the proof of the max-flow min-cut theorem:
3 (no augmenting path) ⇒ 2 (maximum flow).
Basic idea:
Start with a null flow.
As long as an augmenting path can be found: increase the flow along this path.
The Ford-Fulkerson algorithm is restricted to integer capacities!
VU Algorithmics M. Ruthmair 31
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm
Procedure FordFulkerson()
    i ← 0;                                              /* initialization */
    f_i ← null flow;
    while ∃ augmenting path P from s to t in G_{f_i} do  /* algorithm */
        x ← minimum residual capacity along P;
        augment flow of value x along P;
        f_{i+1} ← f_i + x;
        i ← i + 1;
    return f_i;
VU Algorithmics M. Ruthmair 32
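Since the pseudocode leaves the path search abstract, here is a minimal, self-contained Python sketch of the same augmenting-path scheme (an illustration, not the lecture's code). It finds augmenting paths by BFS (the Edmonds-Karp path choice discussed later) and represents flows with skew symmetry as in Definition 2; the function names are illustrative.

from collections import deque

def bfs_augmenting_path(c, f, s, t):
    # Return an s-t path with positive residual capacity r_f(u,v) = c(u,v) - f(u,v), or None.
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in c[u]:
            if v not in parent and c[u][v] - f[u][v] > 0:
                parent[v] = u
                if v == t:
                    path = [t]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return list(reversed(path))
                queue.append(v)
    return None

def ford_fulkerson(cap, s, t):
    # cap[u][v] = capacity of arc (u, v); returns the value of a maximum flow.
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    c = {u: {v: cap.get(u, {}).get(v, 0) for v in nodes} for u in nodes}
    f = {u: {v: 0 for v in nodes} for u in nodes}   # net flow, skew symmetric
    value = 0
    while True:
        path = bfs_augmenting_path(c, f, s, t)
        if path is None:
            return value
        x = min(c[u][v] - f[u][v] for u, v in zip(path, path[1:]))
        for u, v in zip(path, path[1:]):            # push x along P
            f[u][v] += x
            f[v][u] -= x                            # skew symmetry
        value += x

Called on the slides' example network it should return the value 23 shown in the later figures.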
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: A flow network A with associated arc capacities c(u, v).]
VU Algorithmics M. Ruthmair 33
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: A first augmenting path in the original network A (= G_{f_0}).]
VU Algorithmics M. Ruthmair 34
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Minimum capacity along the path: 4 ⇒ flow f_1.]
VU Algorithmics M. Ruthmair 35
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Residual network G_{f_1}.]
VU Algorithmics M. Ruthmair 36
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Augmenting path in G_{f_1}.]
VU Algorithmics M. Ruthmair 37
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Minimum capacity along the path: 7.]
VU Algorithmics M. Ruthmair 38
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: New flow f_2.]
VU Algorithmics M. Ruthmair 39
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Residual network G_{f_2}.]
VU Algorithmics M. Ruthmair 40
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Augmenting path in G_{f_2}.]
VU Algorithmics M. Ruthmair 41
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Minimum capacity along the path: 8.]
VU Algorithmics M. Ruthmair 42
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: New flow f_3.]
VU Algorithmics M. Ruthmair 43
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Residual network G_{f_3}.]
VU Algorithmics M. Ruthmair 44
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Augmenting path in G_{f_3}.]
VU Algorithmics M. Ruthmair 45
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Minimum capacity along the path: 4.]
VU Algorithmics M. Ruthmair 46
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: New flow f_4.]
VU Algorithmics M. Ruthmair 47
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: Residual network G_{f_4}; no augmenting s-t path.]
VU Algorithmics M. Ruthmair 48
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Example
[Figure: The maximum flow f_4 in the network A.]
VU Algorithmics M. Ruthmair 49
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm
Correctness (without proof): The Ford-Fulkerson algorithm terminates and computes a maximum flow in case the capacities in the flow network A are integral.
The maximum flow f* is integral.
Runtime:
Null flow: Θ(|A|).
Find an augmenting path, DFS / BFS: O(|A|).
Number of augmenting paths: O(|f*|).
Integrality condition ⇒ |f| is increased by at least 1 in each iteration.
⇒ The Ford-Fulkerson algorithm runs in time O(|A| · |f*|) (worst case).
VU Algorithmics M. Ruthmair 50
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
Problem: Pseudopolynomial running time:
[Figure: Flow network A with |f*| = 2·10^6 (arcs of capacity 10^6 plus a connecting arc of capacity 1 between v_1 and v_2), on which the Ford-Fulkerson algorithm requires time Θ(|A| · |f*|).]
VU Algorithmics M. Ruthmair 51
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure repeated from the previous slide.]
Improvement: Edmonds-Karp Algorithm (1972)
Two heuristics for choosing augmenting paths:
Fat Pipes:
Augmenting path with largest bottleneck value;
running time O(|A|² log|A| · log|f*|). (modified Prim's MST)
Short Pipes:
Shortest (with respect to the number of arcs) augmenting path;
running time O(|V| · |A|²). (BFS)
VU Algorithmics M. Ruthmair 52
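As an illustration of the Fat Pipes rule, here is a small Python sketch (my own, not from the lecture) that computes a maximum-bottleneck augmenting path with a Dijkstra-like search over residual capacities; residual(u, v) is an assumed helper returning r_f(u, v).

import heapq

def fat_pipe_path(nodes, residual, s, t):
    # Return an s-t path maximizing the minimum residual capacity, or None.
    best = {u: 0 for u in nodes}          # best bottleneck value known so far
    best[s] = float('inf')
    parent = {}
    heap = [(-best[s], s)]                # max-heap via negated keys
    while heap:
        b, u = heapq.heappop(heap)
        b = -b
        if u == t:
            break
        if b < best[u]:
            continue                      # outdated heap entry
        for v in nodes:
            nb = min(b, residual(u, v))   # bottleneck of extending the path by (u, v)
            if nb > best[v]:
                best[v] = nb
                parent[v] = u
                heapq.heappush(heap, (-nb, v))
    if best[t] == 0:
        return None
    path = [t]
    while path[-1] != s:
        path.append(parent[path[-1]])
    return list(reversed(path))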
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
Problem: Capacities ∈ R, irrational capacities:
Rational capacities: scale to integers ⇒ the running time can explode (pseudopolynomial).
Irrational capacities: the algorithm can loop forever and may converge to a wrong maximum flow value.
Sketch of proof ⇒
Details: Uri Zwick: The smallest networks on which the Ford-Fulkerson maximum flow procedure may fail to terminate. (1993)
VU Algorithmics M. Ruthmair 53
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
Basic idea:
Use a network A to simulate the computation of a sequence a_i:
a_0 = 1
a_1 = r = (√5 − 1)/2 ≈ 0.62
a_i = a_{i−2} − a_{i−1} for i ≥ 2
a_2 = a_0 − a_1 = 1 − r = r²
a_3 = r − r² = r(1 − r) = r·r² = r³
⇒ a_i = r^i
VU Algorithmics M. Ruthmair 54
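A quick numeric check of the recurrence, a sketch under the assumption that the dropped constant above is r = (√5 − 1)/2:

import math

r = (math.sqrt(5) - 1) / 2          # r satisfies 1 - r = r^2
a = [1.0, r]
for i in range(2, 8):
    a.append(a[i - 2] - a[i - 1])   # a_i = a_{i-2} - a_{i-1}
for i, ai in enumerate(a):
    print(i, ai, r ** i)            # the two columns agree: a_i = r^i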
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Flow network A with irrational capacity r = (√5 − 1)/2 ≈ 0.62, M = some large integer. Maximum flow |f*| = 2M + 1.]
VU Algorithmics M. Ruthmair 55
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: First flow to initialize the network A (flow value 1 = a_0).]
VU Algorithmics M. Ruthmair 56
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: The first residual network (only the residual arcs and capacities of the critical edges between the nodes v_i, i = 1..4, are illustrated); |f| = 1.]
VU Algorithmics M. Ruthmair 57
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
Sequence of augmenting paths p_i, i = 1..3, used to simulate the computation of the a_i (infinite loop):
p_1, p_2, p_1, p_3, . . .
[Figure: the three augmenting paths p_1, p_2, p_3 through the critical arcs.]
VU Algorithmics M. Ruthmair 58
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Flow along augmenting path p_1; bottleneck arc: v_1 → v_2. New flow value 1 + r.]
VU Algorithmics M. Ruthmair 59
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Residual network; capacity of residual arc v_3 → v_4 is now a_2. Flow value 1 + r.]
VU Algorithmics M. Ruthmair 60
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Flow along augmenting path p_2; bottleneck arc: v_2 → v_1. Flow value 1 + r + r.]
VU Algorithmics M. Ruthmair 61
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Residual network; the capacities of the residual arcs v_1 → v_2 and v_2 → v_3 are reset. Flow value 1 + r + r.]
VU Algorithmics M. Ruthmair 62
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Flow along augmenting path p_1; bottleneck arc: v_3 → v_4. Flow value 1 + r + r + (1 − r).]
VU Algorithmics M. Ruthmair 63
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Residual network; capacity of residual arc v_1 → v_2 is now a_3. Flow value 1 + r + r + (1 − r).]
VU Algorithmics M. Ruthmair 64
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Flow along augmenting path p_3; bottleneck arc: v_3 → v_2. Flow value 1 + r + r + (1 − r) + (1 − r).]
VU Algorithmics M. Ruthmair 65
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Residual network; the capacity of residual arc v_3 → v_4 is reset. Flow value 1 + r + r + (1 − r) + (1 − r).]
VU Algorithmics M. Ruthmair 66
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
[Figure: Residual network after the sequence p_1 p_2 p_1 p_3: the critical residual capacities have changed from a_1 to a_3 and from a_0 to a_2 (a_2 = 1 − r = r², a_3 = r·r² = r³). Flow value 1 + r + r + (1 − r) + (1 − r).]
VU Algorithmics M. Ruthmair 67
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
|f| = 1 + r + r + (1 − r) + (1 − r) + . . .
    = 1 + 2r + 2r² + 2r³ + . . .
    = 1 + 2 Σ_{i=1}^∞ r^i          (geometric series)
    = 1 + 2 · 1/(1 − r) − 2
    = 2 + √5 < 5.
Remember:
Maximum flow |f*| in the network A: 2M + 1.
[Figure: the flow network A from before, with the capacity-M arcs and the three critical arcs.]
VU Algorithmics M. Ruthmair 68
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Ford-Fulkerson Algorithm: Problems
Drawback of all augmenting path algorithms:
[Figure: Sending flow along an s-t path is a computationally expensive operation; it requires O(n) time in the worst case.]
Improvement: Preflow-Push algorithms.
VU Algorithmics M. Ruthmair 69
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm
VU Algorithmics M. Ruthmair 70
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Proposed by Goldberg and Tarjan 1988.
Running time: O(|V|² · |A|).
For comparison, the Edmonds-Karp algorithms: O(|A|² log|A| · log|f*|) resp. O(|V| · |A|²).
Basic idea:
Relax the flow conservation rule.
Push flow along individual arcs, not along complete s-t paths.
Every node has an overflow basin of unlimited size to buffer flow.
Direct flow from basins with excess to the target t.
The Preflow-Push algorithm is not restricted to integer capacities!
VU Algorithmics M. Ruthmair 71
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Definition 12 (Preflow)
A preflow is a real function f : V × V → R with the following three properties:
1. Skew symmetry: f(u, v) = −f(v, u) ∀u, v ∈ V
2. Capacity constraints: f(u, v) ≤ c(u, v) ∀u, v ∈ V
3. Excess condition: Σ_{u∈V} f(u, v) = e_f(v) ≥ 0 ∀v ∈ V \ {s}
Intermediate stages:
Augmenting path algorithms → feasible flows.
Preflow-push algorithms → infeasible flows (preflows).
VU Algorithmics M. Ruthmair 72
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Illustration of the preflow-push algorithm on the example network (basic idea, no labels).]
VU Algorithmics M. Ruthmair 73
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Initialization: saturate all arcs having s as their source node; e_f(v_1) = 16, e_f(v_2) = 13.]
VU Algorithmics M. Ruthmair 74
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; two nodes ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 75
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_1 (e_f(v_1) = 16) to nodes v_2 and v_3; e_f(v_2) = 13 + 4, e_f(v_3) = 12.]
VU Algorithmics M. Ruthmair 76
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; two nodes ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 77
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_3 (e_f(v_3) = 12) to the target node t; e_f(v_2) = 17, e_f(t) = 12.]
VU Algorithmics M. Ruthmair 78
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; one node ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 79
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push as much excess of v_2 (e_f(v_2) = 17) as possible to node v_4; e_f(v_2) = 17 − 14, e_f(v_4) = 14, e_f(t) = 12.]
VU Algorithmics M. Ruthmair 80
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; two nodes ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 81
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push as much excess of v_4 (e_f(v_4) = 14) as possible to v_3 and t; e_f(v_2) = 3, e_f(v_3) = 7, e_f(v_4) = 14 − 11, e_f(t) = 12 + 4.]
VU Algorithmics M. Ruthmair 82
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; three nodes ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 83
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_3 (e_f(v_3) = 7) to the target node t; e_f(v_2) = 3, e_f(v_4) = 3, e_f(t) = 16 + 7.]
VU Algorithmics M. Ruthmair 84
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; two nodes ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 85
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: No possibility to push the excess of nodes v_2 and v_4 towards the target t → send the excess back to the source s.]
VU Algorithmics M. Ruthmair 86
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_4 (e_f(v_4) = 3) back to v_2; e_f(v_2) = 3 + 3, e_f(t) = 23.]
VU Algorithmics M. Ruthmair 87
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; one node ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 88
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_2 (e_f(v_2) = 6) back to v_1; e_f(v_1) = 6, e_f(t) = 23.]
VU Algorithmics M. Ruthmair 89
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; one node ≠ t with excess → continue.]
VU Algorithmics M. Ruthmair 90
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Push the excess of v_1 (e_f(v_1) = 6) back to the source node s; e_f(t) = 23.]
VU Algorithmics M. Ruthmair 91
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Residual network G_f; no node ≠ t with excess → terminate.]
VU Algorithmics M. Ruthmair 92
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
[Figure: Valid maximum flow within the network A: |f*| = e_f(t) = 23.]
VU Algorithmics M. Ruthmair 93
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Definition 13 (Label / Height of a Node; Valid Labeling)
Label / height: a function d : V → N_0.
A labeling is called valid iff:
d(s) = |V| = n and d(t) = 0,
d(u) ≤ d(v) + 1 for all residual arcs (u, v) in G_f.
[Figure: Valid and invalid labelings of nodes u and v in a residual network G_f; declining arcs are only allowed if the difference in height is not more than 1.]
VU Algorithmics M. Ruthmair 94
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Definition 14 (Admissible Arc)
An arc (u, v) in G_f (i.e., f(u, v) < c(u, v)) is called admissible iff d(u) = d(v) + 1 (declining arc).
Definition 15 (Active Node)
A node v ∈ V \ {s, t} is called active iff its excess e_f(v) > 0.
Definition 16 (Saturating / Nonsaturating Push)
Let u be an active node.
A saturating push of value x along a residual arc (u, v) in G_f removes this arc from the residual network (x = r_f(u, v)).
A nonsaturating push along (u, v) reduces the excess at u to zero (e_f(u) = x < r_f(u, v)).
VU Algorithmics M. Ruthmair 95
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Procedure push(u, v)
    /* precondition: u active, (u, v) admissible */
    x ← min{r_f(u, v), e_f(u)};
    f(u, v) ← f(u, v) + x;
    f(v, u) ← −f(u, v);
Procedure lift(u)
    /* precondition: u active, no admissible arc (u, v) */
    d(u) ← d(u) + 1;
VU Algorithmics M. Ruthmair 96
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Procedure GenericPreflowPush()
    d(s) ← n;                                   /* initialization */
    forall v ∈ V \ {s} do d(v) ← 0;
    forall (u, v) ∈ A do f(u, v) = f(v, u) ← 0;
    forall (s, v) ∈ A do
        f(s, v) ← c(s, v);
        f(v, s) ← −f(s, v);
    while ∃ active node u in G_f do             /* algorithm */
        if ∃ admissible arc (u, v) in G_f then
            push(u, v);
        else
            lift(u);
VU Algorithmics M. Ruthmair 97
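A compact, self-contained Python sketch of the generic preflow-push scheme above (my own illustration, not the lecture's reference code); capacities are given as a dict cap[u][v], and active nodes are picked in arbitrary order.

def preflow_push(cap, s, t):
    # Generic preflow-push; cap[u][v] = capacity of arc (u, v). Returns |f*|.
    nodes = set(cap) | {v for u in cap for v in cap[u]}
    n = len(nodes)
    c = {u: {v: cap.get(u, {}).get(v, 0) for v in nodes} for u in nodes}
    f = {u: {v: 0 for v in nodes} for u in nodes}
    e = {v: 0 for v in nodes}                    # excess e_f(v)
    d = {v: 0 for v in nodes}                    # labels / heights
    d[s] = n
    for v in nodes:                              # saturate all arcs leaving s
        if c[s][v] > 0:
            f[s][v] = c[s][v]
            f[v][s] = -c[s][v]
            e[v] += c[s][v]
            e[s] -= c[s][v]

    def active():
        return next((u for u in nodes if u not in (s, t) and e[u] > 0), None)

    u = active()
    while u is not None:
        pushed = False
        for v in nodes:
            # admissible residual arc: r_f(u, v) > 0 and d(u) = d(v) + 1
            if c[u][v] - f[u][v] > 0 and d[u] == d[v] + 1:
                x = min(e[u], c[u][v] - f[u][v])     # push(u, v)
                f[u][v] += x
                f[v][u] -= x
                e[u] -= x
                e[v] += x
                pushed = True
                break
        if not pushed:
            d[u] += 1                                # lift(u)
        u = active()
    return e[t]

Run on the example network of the previous slides it computes the value 23 illustrated above.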
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 17 (Labeling / Preflow)
The labeling d is always valid and f is always a preflow.
Proof:
lift(u):
Preflow: after the initialization the flow f is a preflow; f is not modified by a lift() operation.
Validity: after the initialization the labeling d is valid (only because of the saturation of the arcs (s, v));
precondition of the lift() operation: ∀(u, v) ∈ G_f: d(u) ≤ d(v), otherwise push() would have been called
⇒ d(u) + 1 cannot lead to an invalid labeling.
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preow-Push Algorithm
Lemma 17 (Labeling / Preow)
The labeling d is always valid and f is always a preow.
Proof:
push(u,v):
Preow: Skew symmetry, capacity constraints, excess condition.
Validity: 2 dierent cases:
Nonsaturating push: Perhaps arc (v, u) G
f
after
push() operation; precondition: d(u) = d(v) + 1
d(v) d(u) + 1 valid labeling.
Saturating push: Arc (u, v) will no longer be part of
G
f
no condition to be met.
VU Algorithmics M. Ruthmair 99
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 18 (No Augmenting Path)
Let d be a valid labeling and f a preflow: there exists no augmenting path from s to t in G_f.
Proof:
An augmenting path from s to t cannot consist of more than n − 1 (|V| = n) arcs. Due to the definition of a valid labeling, d(s) = n, d(t) = 0, and there exists no arc (u, v) ∈ G_f with d(u) > d(v) + 1.
With a valid labeling it is not possible to connect a node at height n with a node at height 0 without skipping at least one level if the path consists of only n − 1 arcs.
VU Algorithmics M. Ruthmair 100
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 19 (Partial Correctness of the Preflow-Push Algorithm)
In case the generic preflow-push algorithm terminates, f is a maximum flow in the network A.
Proof:
The algorithm terminates
⇒ no active nodes, i.e., e_f(v) = 0 ∀v ∈ V \ {s, t}
⇒ f is not only a preflow but a flow;
according to Lemma 18 there exists no augmenting s-t path in G_f, and f is a valid flow
⇒ f is a maximum flow.
Still to prove: the algorithm terminates. Worst-case runtime?
VU Algorithmics M. Ruthmair 101
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 20 (Excess Nodes Connected To Source)
Let f be a preflow and u an active node, i.e., e_f(u) > 0: there exists a path from u to the source s in the residual graph G_f.
Proof:
Let T ⊆ V be the set of nodes reachable from u in G_f, and T̄ = V \ T; then the following holds:
Σ_{v∈T} e_f(v) = f(V, T) = f(T̄, T) + f(T, T) = f(T̄, T) ≤ 0   (using skew symmetry (1)).
f(T̄, T) cannot be positive: a flow f(w, v) > 0 from a node w ∈ T̄ to a node v ∈ T would lead to a residual arc (v, w) in G_f ⇒ contradiction to the definition of T (v is reachable from u, but the arc (v, w) would make node w also reachable from u ⇒ w ∈ T and w ∈ T̄).
Excess condition (preflow definition): e_f(v) ≥ 0 ∀v ∈ V \ {s}, and u is an active node (e_f(u) > 0) ⇒ there has to be a negative term in the sum above ⇒ the source node s has to be an element of the set T and is therefore reachable from u.
VU Algorithmics M. Ruthmair 103
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 21 (Height Restriction)
For every node u ∈ V: d(u) ≤ 2n − 1.
Proof:
It is sufficient to prove this for active nodes, because inactive nodes are not lifted:
Lemma 20: there is a path P from u to s in the residual graph G_f, which cannot consist of more than n − 1 arcs;
d(s) = n and a valid labeling
⇒ d(s) + n − 1 is an upper bound for the height of u, i.e., d(u) ≤ 2n − 1.
VU Algorithmics M. Ruthmair 104
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 22 (Number of Relabel Operations)
The number of relabel resp. lift() operations is bounded above by 2n².
Proof:
Direct consequence of Lemma 21:
lift() increases the height of a node by 1,
no operation decreases the height of a node,
n − 2 nodes (without s, t),
2n − 1 is an upper bound for the height of each node
⇒ a maximum of 2n² − 5n + 2 ≤ 2n² lift() operations can be performed.
VU Algorithmics M. Ruthmair 105
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 23 (Number of Saturating Pushes)
The number of saturating push() operations is bounded above by 2nm (m = |A|).
Proof:
Between two consecutive saturating pushes along an arc (u, v) ∈ A the heights of the nodes u and v have to increase by at least 2.
[Figure: A saturating push along (u, v) removes this arc from G_f; to perform another saturating push along it there has to be a push along (v, u) first, to bring (u, v) back into G_f.]
2n − 1 is an upper bound for the height of the nodes u and v ⇒ the arc (u, v) can be saturated at most n times;
the number of arcs in G_f can be up to 2m (for every arc (u, v) ∈ A there can also be an arc (v, u) ∈ G_f)
⇒ the number of saturating pushes is bounded above by n · 2m.
VU Algorithmics M. Ruthmair 107
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Lemma 24 (Number of Nonsaturating Pushes)
The number of nonsaturating push() operations is ≤ 6n²m.
Proof:
Let X be the (changing over time) set of active nodes. We define the following potential function:
Φ = Σ_{u∈X} d(u)
At the beginning Φ = 0, and during the execution of the algorithm Φ ≥ 0.
A nonsaturating push along an arc (u, v) reduces the excess at u to 0 ⇒ X = X \ {u}. Let Φ′ be the resulting potential:
Φ′ ≤ Φ − d(u) if v was already ∈ X, or v = t,
Φ′ ≤ Φ − d(u) + d(v) if v was ∉ X,
Φ − d(u) + d(v) = Φ − 1, because (u, v) has to be an admissible arc ⇒ d(v) = d(u) − 1.
Saturating pushes:
a push along (u, v) can insert v into the set X,
d(v) ≤ 2n − 1 (Lemma 21),
the number of saturating pushes is ≤ 2nm (Lemma 23)
⇒ saturating pushes can increase Φ by at most 4n²m.
lift() operations:
lift(u) increases d(u) by 1,
the number of lift() operations is bounded by 2n² (Lemma 22)
⇒ lift() operations can increase Φ by at most 2n².
⇒ The number of nonsaturating pushes is bounded above by 4n²m + 2n² ≤ 6n²m.
VU Algorithmics M. Ruthmair 111
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Generic Preflow-Push Algorithm
Theorem 25 (Correctness of the Preflow-Push Algorithm)
The generic preflow-push algorithm terminates after O(n²m) push() and lift() operations, and calculates the maximum flow f in the network A.
Proof:
Direct consequence of Lemmas 17 to 24.
Note: This theorem also proves that every network A = (V, A, c, s, t) has a maximum flow.
VU Algorithmics M. Ruthmair 112
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements without changing the worst-case runtime complexity:
Definition 26 (Maximum Preflow)
A preflow with the maximum possible flow into the target node t is called a maximum preflow.
Definition 27 (V′)
V′ ⊆ V is the set of nodes with no directed path to t in the residual network G_f (nodes disconnected from the sink).
After the initialization V′ = {s}.
VU Algorithmics M. Ruthmair 113
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements without changing the worst-case runtime complexity:
The generic preflow-push algorithm performs push() and lift() operations at active nodes until
1. all excess reaches the target node t, or
2. the remaining excess returns to the source node s.
Once a maximum preflow is established, the excess of the remaining active nodes has to be pushed back to s (to transform the preflow into a flow)
⇒ a substantially large number of subsequent push()/lift() operations is required to raise these nodes until they are sufficiently higher than n.
VU Algorithmics M. Ruthmair 114
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements without changing the worst-case runtime complexity:
Improvement 1:
Start with the set V′ = {s}.
V′ ← V′ ∪ {u} if d(u) ≥ n.
Perform no push()/lift() on nodes ∈ V′.
Stop the algorithm when there are no active nodes in V \ V′.
At termination the current preflow is also an optimal preflow
⇒ convert the maximum preflow into a maximum flow [exercise]
⇒ substantial reduction in running time according to empirical tests.
VU Algorithmics M. Ruthmair 115
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements without changing the worst-case runtime complexity:
Improvement 2:
Start with the set V′ = {s}.
Occasionally perform a reverse BFS from t in G_f to
obtain exact labels / heights, and to
add all nodes not reachable from t to V′.
Perform no push()/lift() on nodes ∈ V′.
Stop the algorithm when there are no active nodes in V \ V′.
Occasionally: after α·n lift() operations (α constant)
⇒ does not change the worst-case complexity [exercise].
VU Algorithmics M. Ruthmair 116
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements changing the worst-case runtime complexity:
Bottleneck: the number of nonsaturating pushes.
Different rules for selecting active nodes ⇒ various algorithms that can substantially reduce these bottleneck operations.
Definition 28 (Node Examination)
Whenever an active node u is selected by the algorithm, it keeps pushing flow from that node until
the excess of u becomes 0 (saturating pushes, except the last one which could be a nonsaturating push), or
u is lifted.
This sequence of operations is referred to as a node examination.
VU Algorithmics M. Ruthmair 117
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements changing the worst-case runtime complexity:
FIFO Preflow-Push Algorithm:
All active nodes are stored in a queue Q.
Get node u from Q, examine u:
add newly active nodes to the rear of Q;
if u is lifted (excess still available) add u to the rear of Q and continue with the next node in Q;
if u becomes inactive continue with the next node in Q.
Stop the algorithm when the queue of active nodes is empty.
Worst-case running time: O(n³).
VU Algorithmics M. Ruthmair 118
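A minimal Python sketch of how the FIFO selection rule changes the main loop (assuming an examine(u) helper that pushes from u until u is inactive or lifted, as in the generic code above; all names are illustrative):

from collections import deque

def fifo_main_loop(initially_active, examine):
    # examine(u) returns (newly_active_nodes, u_still_active).
    queue = deque(initially_active)
    in_queue = set(initially_active)
    while queue:
        u = queue.popleft()
        in_queue.discard(u)
        newly_active, u_still_active = examine(u)
        for v in newly_active:                 # new active nodes go to the rear
            if v not in in_queue:
                queue.append(v)
                in_queue.add(v)
        if u_still_active and u not in in_queue:
            queue.append(u)                    # u was lifted, re-queue it
            in_queue.add(u)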
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Preflow-Push Algorithm: Improvements
Improvements changing the worst-case runtime complexity:
Highest-Label Preflow-Push Algorithm:
Push flow from an active node u with the highest distance label d(u).
How to select a node with highest d() without too much effort?
active[k], k = 0, ..., 2n − 1: list of active nodes with d() = k.
level: highest value of k where active[k] is nonempty:
lift(u) of an examined node u ⇒ level ← level + 1;
active[k] becomes empty without a lift() operation ⇒ check active[k−1], active[k−2], ..., until a nonempty list is found;
the total increase in level is bounded by 2n² ⇒ total decrease is O(2n²).
Worst case: O(n²√m), currently the most efficient method in practice.
VU Algorithmics M. Ruthmair 119
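A sketch of the bucket structure behind the highest-label rule (illustrative Python, assuming labels stay below 2n):

def highest_label_selector(n):
    # active[k] = set of active nodes with d() = k; select_highest scans
    # downward from the current level, as described above.
    active = [set() for _ in range(2 * n)]
    state = {"level": 0}

    def activate(u, d_u):
        active[d_u].add(u)
        state["level"] = max(state["level"], d_u)

    def deactivate(u, d_u):
        active[d_u].discard(u)

    def select_highest():
        while state["level"] > 0 and not active[state["level"]]:
            state["level"] -= 1                # drop to the next nonempty bucket
        bucket = active[state["level"]]
        return next(iter(bucket)) if bucket else None

    return activate, deactivate, select_highest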
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Maximum Flow: Networks
with Lower Capacity Bounds
VU Algorithmics M. Ruthmair 120
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Definition 29 (Flow Network with Lower and Upper Bounds)
A flow network with lower and upper capacity bounds is a 6-tuple A = (V, A, c_L, c_U, s, t), with (V, A) being a directed graph with node set V and arc set A, two nodes s, t ∈ V, s ≠ t, and two functions c_L, c_U : A → R≥0, the lower (c_L) and upper (c_U) capacity bounds, respectively.
It must hold: c_L(a) ≤ c_U(a) for all arcs a ∈ A.
Extension: c_L(a) = c_U(a) = 0 for all arcs a ∈ (V × V) \ A.
VU Algorithmics M. Ruthmair 121
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Definition 30 (Flow with Nonnegative Lower Bounds)
A flow is a real function f : V × V → R with the following two properties:
1. Capacity constraints: c_L(u, v) ≤ f(u, v) ≤ c_U(u, v) ∀u, v ∈ V
2. Flow conservation: Σ_{v∈V} f(v, u) − Σ_{v∈V} f(u, v) = 0 ∀u ∈ V \ {s, t}
Note: Skew symmetry has to be discarded.
VU Algorithmics M. Ruthmair 122
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Problem: There is no guarantee that a feasible solution to the maximum flow problem exists in an arbitrary network A with nonnegative lower and upper bounds:
[Figure: A flow network A with no feasible solution: an arc (s, v) with bounds [5, 10] followed by an arc (v, t) with bounds [2, 4].]
Two-phase approach to solve the maximum flow problem:
1. Determine whether the problem is feasible, and if so
2. compute a maximum flow in a transformed network A′ without lower bounds.
VU Algorithmics M. Ruthmair 123
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 2 (Maximum Flow):
Precondition:
f is a feasible flow, in particular: c_L(a) ≤ f(a) ≤ c_U(a) ∀a ∈ (V × V).
Build the residual graph G_f using the following residual capacities:
r_f(u, v) = (c_U(u, v) − f(u, v)) + (f(v, u) − c_L(v, u))
Note: r_f(u, v) is always nonnegative.
Compute a maximum flow f⁺ in G_f, and
combine the initial feasible flow f and f⁺ to get the maximum flow f* of the original network A with lower and upper capacity bounds [exercise].
VU Algorithmics M. Ruthmair 124
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Generalized Maximum Flow / Minimum Cut Theorem:
Definition 31 (Capacity of a Cut [Extension])
The capacity of an s-t cut (S, S̄), s ∈ S, t ∈ S̄, in a flow network A with nonnegative lower bounds is defined as follows:
c(S, S̄) = c_U(S, S̄) − c_L(S̄, S) = Σ_{u∈S} Σ_{v∈S̄} c_U(u, v) − Σ_{v∈S̄} Σ_{u∈S} c_L(v, u)
VU Algorithmics M. Ruthmair 125
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Generalized Maximum Flow / Minimum Cut Theorem:
Remember:
|f| = f(S, S̄) = Σ_{u∈S} Σ_{v∈S̄} f(u, v) − Σ_{v∈S̄} Σ_{u∈S} f(v, u).
Substitute the flow by the corresponding capacity bounds:
f(u, v) ≤ c_U(u, v) and c_L(v, u) ≤ f(v, u)
⇒ |f| ≤ Σ_{u∈S} Σ_{v∈S̄} c_U(u, v) − Σ_{u∈S} Σ_{v∈S̄} c_L(v, u) = c(S, S̄)
VU Algorithmics M. Ruthmair 126
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Generalized Maximum Flow / Minimum Cut Theorem:
Optimality criterion for a maximum flow: no augmenting s-t path in G_f
⇒ there exists an s-t cut (S, S̄) with r_f(u, v) = 0 for all u ∈ S, v ∈ S̄:
r_f(u, v) = (c_U(u, v) − f(u, v)) + (f(v, u) − c_L(v, u))
r_f(u, v) = 0 ⇒ f(u, v) = c_U(u, v) and f(v, u) = c_L(v, u)
Theorem 32 (Generalized Max-Flow / Min-Cut)
Let f be a flow in A and (S, S̄) be defined as above: the maximum value of a flow from s to t equals the minimum capacity among all s-t cuts:
|f*| = min_{(S,S̄)} c(S, S̄) = min_{(S,S̄)} (c_U(S, S̄) − c_L(S̄, S)).
VU Algorithmics M. Ruthmair 127
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 1 (Feasible Flow):
Transformation of the maximum flow problem into a circulation problem:
New arc (t, s) with capacity bounds [0, +∞].
[Figure: Circulation problem — the network with an added arc from t back to s.]
Note:
feasible flow ⇒ feasible circulation, but
feasible circulation ⇒? feasible flow?
VU Algorithmics M. Ruthmair 128
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 1 (Feasible Flow):
Circulation Problem
In a feasible circulation a flow f satisfies the following constraints:
Σ_{v:(u,v)∈A} f(u, v) − Σ_{v:(v,u)∈A} f(v, u) = 0   ∀u ∈ V
c_L(u, v) ≤ f(u, v) ≤ c_U(u, v)   ∀(u, v) ∈ A
Note: The flow conservation constraints now hold for every node u ∈ V, including s and t.
VU Algorithmics M. Ruthmair 129
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 1 (Feasible Flow): Alternative 1
Replace f(u, v) by f′(u, v) + c_L(u, v) in the flow conservation constraints:
Σ_{v:(u,v)∈A} (f′(u, v) + c_L(u, v)) − Σ_{v:(v,u)∈A} (f′(v, u) + c_L(v, u)) = 0
⇒ Σ_{v:(u,v)∈A} f′(u, v) − Σ_{v:(v,u)∈A} f′(v, u) = b(u)
with b(u) = Σ_{v:(v,u)∈A} c_L(v, u) − Σ_{v:(u,v)∈A} c_L(u, v)   ∀u ∈ V
VU Algorithmics M. Ruthmair 130
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 1 (Feasible Flow): Alternative 1
This way the lower capacity bounds are removed,
0 ≤ f′(u, v) ≤ c_U(u, v) − c_L(u, v)   ∀(u, v) ∈ A,
and supplies / demands b(·) are introduced.
Note:
Σ_{u∈V} b(u) = 0, since each c_L(u, v) appears twice in this expression, once positively and once negatively.
There are algorithms to handle multiple sources / sinks.
VU Algorithmics M. Ruthmair 131
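A small Python sketch of this Alternative 1 transformation (my own illustration): it strips the lower bounds and returns the reduced upper capacities together with the node imbalances b(u).

def remove_lower_bounds(arcs):
    # arcs: dict (u, v) -> (lo, up). Returns (reduced upper capacities, b).
    reduced = {}
    b = {}
    for (u, v), (lo, up) in arcs.items():
        reduced[(u, v)] = up - lo              # 0 <= f'(u, v) <= c_U - c_L
        b[u] = b.get(u, 0) - lo                # the lower bound leaves u ...
        b[v] = b.get(v, 0) + lo                # ... and enters v
    # each c_L(u, v) appears once positively and once negatively
    assert sum(b.values()) == 0
    return reduced, b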
Maximum Flow Basics Ford-Fulkerson Preow-Push Lower Bounds Minimum Cost Flow
Networks with Lower Capacity Bounds
Phase 1 (Feasible Flow): Alternative 2
Theorem 33 (Circulation Feasibility Conditions)
A circulation problem with nonnegative lower bounds on the arc flows is feasible iff for every arbitrary set S ⊂ V, S ≠ ∅, S̄ = V \ S, the following condition holds:
c_L(S̄, S) ≤ c_U(S, S̄), i.e., Σ_{v∈S̄} Σ_{u∈S} c_L(v, u) ≤ Σ_{u∈S} Σ_{v∈S̄} c_U(u, v)
Note: Relation to the generalized max-flow / min-cut theorem!
Theorem 33 states a necessary condition:
\[
  \sum_{u \in S} \sum_{v \in \bar{S}} f(u, v) - \sum_{v \in \bar{S}} \sum_{u \in S} f(v, u) = 0
\]
(generalization of the flow conservation conditions).
Using f(u, v) ≤ κ_U(u, v) and f(v, u) ≥ κ_L(v, u):
\[
  \sum_{v \in \bar{S}} \sum_{u \in S} \kappa_L(v, u) \;\le\; \sum_{u \in S} \sum_{v \in \bar{S}} \kappa_U(u, v).
\]
Algorithmic proof that Theorem 33 is a sufficient condition:
Definition 34 (Feasible / Infeasible Arc)
With respect to a flow f, an arc (u, v) is called infeasible if f(u, v) < κ_L(u, v); otherwise it is a feasible arc, i.e., κ_L(u, v) ≤ f(u, v).
Basic idea:
Start with a flow fulfilling the flow conservation conditions but violating some lower capacity bounds; transform the flow (while still ensuring flow conservation and the upper capacity bounds), if possible, into a circulation that also satisfies the lower capacity bounds.
Computation of residual capacities:
Feasible arc:
\[
  r_f(u, v) = \bigl(\kappa_U(u, v) - f(u, v)\bigr) + \bigl(f(v, u) - \kappa_L(v, u)\bigr).
\]
Infeasible arc:
\[
  r_f(u, v) = \kappa_U(u, v) - f(u, v).
\]
Function feasible_circulation
  f(u, v) ← 0  ∀(u, v) ∈ A;                                   /* initialization */
  while ∃ an infeasible arc (u, v) in G_f do                  /* algorithm */
      find a directed path P(v, u) in G_f;
      if no such path P(v, u) exists then
          return S ← {v} ∪ {nodes reachable from v in G_f};
      P(v, u) ∪ (u, v) is an augmenting cycle in G_f;
      augment the flow along P(v, u) ∪ (u, v)
          until (u, v) becomes feasible or the cycle cannot carry more flow;
  return f;
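A minimal executable sketch of this procedure is shown below, assuming integral bounds and at most one arc per ordered node pair; all names and the representation are illustrative, not part of the lecture material. Forward residual arcs allow raising the flow up to κ_U, backward residual arcs allow lowering it down to κ_L, matching the residual capacities defined above.

from collections import deque

def feasible_circulation(arcs, low, up):
    # arcs: iterable of (u, v); low/up: dicts arc -> lower/upper bound (integral)
    arcs = list(arcs)
    f = {a: 0 for a in arcs}

    def residual_out(x):
        # residual arcs leaving x: (head, kind, original_arc)
        out = []
        for (u, v) in arcs:
            if u == x and up[(u, v)] - f[(u, v)] > 0:
                out.append((v, 'fwd', (u, v)))      # raise flow towards the upper bound
            if v == x and f[(u, v)] - low[(u, v)] > 0:
                out.append((u, 'bwd', (u, v)))      # lower flow towards the lower bound
        return out

    while True:
        bad = next((a for a in arcs if f[a] < low[a]), None)
        if bad is None:
            return f                                # feasible circulation found
        u, v = bad
        # BFS for a directed path P(v, u) in the residual graph
        pred = {v: None}
        queue = deque([v])
        while queue and u not in pred:
            x = queue.popleft()
            for y, kind, arc in residual_out(x):
                if y not in pred:
                    pred[y] = (x, kind, arc)
                    queue.append(y)
        if u not in pred:
            return set(pred)                        # infeasibility certificate: the set S
        # reconstruct P(v, u) and the bottleneck of the cycle P(v, u) + (u, v)
        path, y = [], u
        while pred[y] is not None:
            x, kind, arc = pred[y]
            path.append((kind, arc))
            y = x
        delta = low[bad] - f[bad]                   # make (u, v) feasible if possible
        for kind, arc in path:
            cap = up[arc] - f[arc] if kind == 'fwd' else f[arc] - low[arc]
            delta = min(delta, cap)
        for kind, arc in path:
            f[arc] += delta if kind == 'fwd' else -delta
        f[bad] += delta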
The algorithm either terminates returning a feasible circulation, or it returns a set S while an infeasible arc (u, v) remains:
Let S̄ = V \ S, x ∈ S, y ∈ S̄. Then r_f(x, y) = 0 in G_f (otherwise y could be reached from v) ⇒ f(x, y) = κ_U(x, y) and f(y, x) ≤ κ_L(y, x), so it is not possible to send more flow out of S.
Furthermore u ∈ S̄ (there is no path from v to u), v ∈ S, and (u, v) is infeasible ⇒ f(u, v) < κ_L(u, v), i.e., at least one arc in (S̄, S) would require sending more flow into S.
\[
  \sum_{y \in \bar{S}} \sum_{x \in S} f(y, x) = \sum_{x \in S} \sum_{y \in \bar{S}} f(x, y) \qquad \text{(flow conservation)}
\]
\[
  \Rightarrow \quad \sum_{y \in \bar{S}} \sum_{x \in S} \kappa_L(y, x) \;>\; \sum_{x \in S} \sum_{y \in \bar{S}} \kappa_U(x, y),
\]
a contradiction to the condition of Theorem 33.
Problem: Does a feasible circulation imply a feasible s-t flow?
Figure: A network with a feasible circulation but no feasible s-t flow (cycle arcs with bounds [2, 2], connecting arcs with bounds [0, 1]): it is not possible to bring the required flow from s to the cycle v_1 → v_2 → v_3 → v_1.
feasible_circulation(): It has to be ensured that the arc (t, s) is part of the directed path from v to u, i.e., P(v, u) = v ⇝ t → s ⇝ u.
Minimum Cost Flow in Networks
Minimum Cost Flow: Introduction
Definition 35 (Minimum Cost Flow)
Given a directed graph G = (V, A) with costs c(u, v) and a capacity κ(u, v) associated with each arc (u, v) ∈ A, the minimum cost flow problem can be stated as follows:
\[
  \text{Minimize } z(f) = \sum_{(u,v) \in A} c(u, v)\, f(u, v)
\]
subject to:
\[
  \sum_{v:(u,v) \in A} f(u, v) - \sum_{v:(v,u) \in A} f(v, u) = b(u) \qquad \forall u \in V
\]
\[
  0 \le f(u, v) \le \kappa(u, v) \qquad \forall (u, v) \in A.
\]
Definition 36 (Supply / Demand)
A value b(v) > 0 denotes a supply of b(v) units of flow at node v, whereas a value b(v) < 0 denotes a demand at this node.
Assumption
All data, i.e., costs, capacity, supply, and demand, are integral.
Assumption
All arc costs are nonnegative, or at least there is no directed negative cost cycle of infinite capacity.
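For quick experimentation, off-the-shelf solvers exist. The following sketch uses NetworkX; the small instance shown here is made up purely for illustration and is not from the lecture. Note that NetworkX uses the opposite sign convention for b(v): a node attribute demand = -b(v).

import networkx as nx

# Illustrative instance (not from the lecture): b(s) = 4, b(t) = -4.
G = nx.DiGraph()
G.add_node('s', demand=-4)          # NetworkX: negative demand = supply b(v) > 0
G.add_node('t', demand=4)           # positive demand = demand b(v) < 0
G.add_node('v1', demand=0)
G.add_edge('s', 'v1', capacity=3, weight=2)   # weight = cost c(u, v)
G.add_edge('s', 't', capacity=2, weight=5)
G.add_edge('v1', 't', capacity=3, weight=1)

flow = nx.min_cost_flow(G)           # dict of dicts: flow[u][v]
print(nx.cost_of_flow(G, flow))      # objective value z(f)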
Minimum cost flows arise in many different applications, and various (sub-)problems can be reformulated as a minimum cost flow problem:
Shipping and distribution: the transportation problem (e.g., plants with supplies, warehouses with demands, minimizing the shipping costs).
Optimal loading of a hopping airplane.
Reconstruction of organs (e.g., the ventricle) based on x-ray projections.
Scheduling with deferral costs (uniform processing times of jobs).
Efficient solving of linear programs with special structure (0/1 matrix, consecutive 1s in columns).
. . .
Assumption
The minimum cost flow problem has a feasible solution; i.e., amongst others, the supplies and demands satisfy the following condition:
\[
  \sum_{v \in V} b(v) = 0
\]
Test for feasibility (see the sketch below):
Introduce two new, additional nodes s and t.
Introduce new arcs:
For every node v with b(v) > 0: A = A ∪ {(s, v)}, κ(s, v) = b(v).
For every node v with b(v) < 0: A = A ∪ {(v, t)}, κ(v, t) = -b(v).
Solve a maximum flow problem on the modified graph. If all the arcs (s, ·) and (·, t) are saturated, there exists a feasible solution to the original minimum cost flow problem.
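A possible implementation of this feasibility test, again using NetworkX as an off-the-shelf max-flow solver; the helper function and all names are illustrative assumptions:

import networkx as nx

def is_feasible(G, b):
    """Feasibility test for a minimum cost flow instance (sketch).
    G: nx.DiGraph with edge attribute 'capacity'; b: dict node -> b(v)."""
    H = G.copy()
    H.add_node('s*'); H.add_node('t*')          # super source / super sink
    for v, bv in b.items():
        if bv > 0:
            H.add_edge('s*', v, capacity=bv)    # supply nodes fed from s*
        elif bv < 0:
            H.add_edge(v, 't*', capacity=-bv)   # demand nodes drained into t*
    value, _ = nx.maximum_flow(H, 's*', 't*')
    # feasible iff all arcs leaving s* (equivalently entering t*) are saturated
    return value == sum(bv for bv in b.values() if bv > 0)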
Definition 37 (Residual Network)
Given a flow f, each arc (u, v) ∈ A is replaced in the residual network G_f by two arcs:
an arc (u, v) with costs c(u, v) and a residual capacity r_f(u, v) = κ(u, v) - f(u, v), and
an arc (v, u) with costs c(v, u) = -c(u, v) and r_f(v, u) = f(u, v).
Note: r_f is always ≥ 0.
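A direct translation of this definition into code might look as follows; the dictionary-based representation and the names are assumptions for illustration only:

def residual_network(arcs, kappa, c, f):
    """Build G_f as a dict (u, v) -> (residual capacity, cost) (sketch).
    Assumes A contains no pair of opposite arcs."""
    res = {}
    for (u, v) in arcs:
        res[(u, v)] = (kappa[(u, v)] - f[(u, v)], c[(u, v)])   # forward arc
        res[(v, u)] = (f[(u, v)], -c[(u, v)])                  # backward arc, negated cost
    # only arcs with positive residual capacity can carry additional flow
    return {a: rc for a, rc in res.items() if rc[0] > 0}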
Minimum Cost Flow: Optimality Conditions
Negative Cycle Optimality Condition:
A feasible solution f* is an optimal solution of the minimum cost flow problem iff the residual network G_{f*} contains no directed negative cost cycle.
Sketch of Proof:
Sending flow along a cycle does not change the flow conservation conditions at any node of the network.
The residual network G_f only contains arcs that can carry additional flow, i.e., it is possible to send flow along such arcs without violating the capacity bounds.
Consequence: When sending flow along a negative cost cycle in G_f, the flow f remains feasible but the costs can be reduced.
Reduced Costs Optimality Condition:
Observation
Optimality condition for shortest paths regarding costs:
\[
  c_d(u, v) = d(u) + c(u, v) - d(v) \ge 0 \qquad \forall (u, v) \in A
\]
c_d(u, v) is referred to as the reduced cost of arc (u, v).
Interpretation:
c_d(u, v) measures the cost of the arc (u, v) relative to the shortest path distances d(u) and d(v).
Note: If (u, v) is part of the shortest path from a node s to v, then c_d(u, v) = 0, otherwise c_d(u, v) > 0.
Definition 38 (Node Potential)
With each node v ∈ V we associate a real number π(v), unrestricted in sign, called the potential of the node.
Interpretation: π(v) is the linear programming dual variable corresponding to the flow conservation condition at node v.
Definition 39 (Reduced Costs (Minimum Cost Flow))
Based on the node potentials π(·), the reduced cost of an arc (u, v) in G or G_f is defined as follows:
\[
  c_\pi(u, v) = c(u, v) - \pi(u) + \pi(v).
\]
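Computing reduced costs, and checking the reduced cost optimality condition stated below, is straightforward; a sketch with illustrative names, reusing the residual representation from the residual_network sketch above:

def reduced_cost(cost, pi, u, v):
    """c_pi(u, v) = c(u, v) - pi(u) + pi(v) (sketch; names illustrative)."""
    return cost - pi[u] + pi[v]

def satisfies_reduced_cost_condition(res, pi):
    """Check c_pi(u, v) >= 0 for every arc of the residual network G_f
    (the optimality condition stated below); res: dict (u, v) -> (capacity, cost)."""
    return all(reduced_cost(cost, pi, u, v) >= 0
               for (u, v), (cap, cost) in res.items() if cap > 0)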
Lemma 40 (Path and Node Potentials)
For any directed path P from u to v the following equation holds:
\[
  \sum_{(i,j) \in P} c_\pi(i, j) = \sum_{(i,j) \in P} c(i, j) - \pi(u) + \pi(v).
\]
Lemma 41 (Cycle and Node Potentials)
For any directed cycle W the following equation holds:
\[
  \sum_{(i,j) \in W} c_\pi(i, j) = \sum_{(i,j) \in W} c(i, j).
\]
Consequence: a negative cost cycle with respect to c(·) is also a negative cost cycle with respect to c_\pi(·), and vice versa.
A feasible solution f* is an optimal solution of the minimum cost flow problem iff some set of node potentials π(·) satisfies the reduced cost optimality conditions:
\[
  c_\pi(u, v) \ge 0 \qquad \forall (u, v) \in G_{f^*}
\]
Proof:
"⇐" is a direct consequence of the negative cycle optimality condition and the preceding lemma.
"⇒": Assume a solution f* contains no negative cycle in G_{f*}; let d(·) be the shortest path distances from a fixed node to all other nodes in G_{f*}.
Then d(v) ≤ d(u) + c(u, v) ⇒ c(u, v) - (-d(u)) + (-d(v)) ≥ 0; with π(·) = -d(·): c(u, v) - π(u) + π(v) = c_π(u, v) ≥ 0.
Economic interpretation:
c(u, v): the cost to send one unit of flow from u to v.
Let -π(u) be the cost to obtain a unit of flow at u ⇒ c(u, v) - π(u) is the cost of a unit of flow at v in case arc (u, v) is used to transport it.
-π(v) ≤ c(u, v) - π(u) ⟺ c(u, v) - π(u) + π(v) ≥ 0:
If a unit of flow is sent to v using arc (u, v), then -π(v) = c(u, v) - π(u); otherwise there is a cheaper way of getting this unit of flow to v, i.e., -π(v) < c(u, v) - π(u).
Complementary Slackness Optimality Condition:
A feasible solution f* is an optimal solution of the minimum cost flow problem iff for some set of node potentials π(·) the reduced costs and flow values satisfy the following complementary slackness optimality conditions for every (u, v) ∈ A (original network):
If c_π(u, v) > 0, then f*(u, v) = 0.
If 0 < f*(u, v) < κ(u, v), then c_π(u, v) = 0.
If c_π(u, v) < 0, then f*(u, v) = κ(u, v).
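The three cases translate directly into a small validation routine; a sketch with illustrative names, where kappa, c, pi, and f are dictionaries over the original arc set:

def complementary_slackness_holds(arcs, kappa, c, pi, f):
    """Check the three complementary slackness cases for every original arc (sketch)."""
    for (u, v) in arcs:
        cp = c[(u, v)] - pi[u] + pi[v]          # reduced cost c_pi(u, v)
        if cp > 0 and f[(u, v)] != 0:
            return False
        if cp < 0 and f[(u, v)] != kappa[(u, v)]:
            return False
        if 0 < f[(u, v)] < kappa[(u, v)] and cp != 0:
            return False
    return True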
Sketch of Proof:
If the node potentials π(·) and the flow f* satisfy the reduced cost optimality conditions (c_π(u, v) ≥ 0 ∀(u, v) ∈ G_{f*}), then they also satisfy the complementary slackness optimality conditions:
If c_π(u, v) > 0 ⇒ arc (v, u) ∉ G_{f*}, because c_π(u, v) = c(u, v) - π(u) + π(v) = -c_π(v, u), so c_π(v, u) < 0 would contradict the optimality condition ⇒ f*(u, v) = 0.
If 0 < f*(u, v) < κ(u, v), then G_{f*} contains both arcs (u, v) and (v, u) ⇒ c_π(u, v) ≥ 0, c_π(v, u) ≥ 0, and c_π(u, v) = -c_π(v, u) ⇒ c_π(u, v) = c_π(v, u) = 0.
If c_π(u, v) < 0, then arc (u, v) ∉ G_{f*} (otherwise a contradiction to the assumption) ⇒ f*(u, v) = κ(u, v).
Minimum Cost Flow: Algorithms
Cycle-Canceling Algorithm:
Basic Idea: Establish a feasible flow; keep the flow feasible but improve the costs until the optimum is reached.
Procedure Cycle-Canceling( )
  establish a feasible flow f in the network;                 /* initialization */
  while ∃ a negative cost cycle W in G_f do                   /* algorithm */
      x ← minimum residual capacity along W;
      augment the flow by value x along W;
      update G_f;
Figure: Initialization: a feasible flow from s to t in the example network (supplies b(s) = 5, b(t) = -5; arc labels [κ, c] with the flow values f in red).
Figure: The residual network G_f (arc labels [r_f, c]).
Figure: Negative cost cycle: t → v_2 → v_1 → s → v_3 → t, cost: -1.
Figure: Minimum residual capacity: r_f(s, v_3) = 1; augment the flow along the cycle.
Figure: Resulting residual network G_f.
Figure: Negative cost cycle: t → v_2 → v_1 → v_3 → t, cost: -2.
Figure: Minimum residual capacity: r_f(v_1, v_3) = 1; augment the flow along the cycle.
Figure: Resulting residual network G_f; no negative cost cycle remains.
Figure: Original flow and the flows augmented along the negative cost cycles.
Figure: The resulting minimum cost flow in the example network.
Definition 42 (C, U)
C = max{c(u, v) : (u, v) ∈ A}.
U = max{κ(u, v) : (u, v) ∈ A, κ(u, v) < ∞}.
Running time:
Number of iterations: O(mCU) (integrality condition).
Identifying a negative cost cycle: O(nm) (shortest path algorithm).
⇒ O(nm^2 CU)
Variation: Network simplex algorithm: widely considered one of the fastest algorithms in practice; it identifies a negative cost cycle in O(m) (but the objective function cannot be reduced in every iteration).
Successive Shortest Path Algorithm:
Basic Idea: Start with a solution satisfying the reduced cost optimality condition, but not flow conservation (excess / deficit ⇒ pseudoflow); keep the optimality condition and transform the pseudoflow into a feasible flow.
Definition 43 (E, D)
\[
  e_f(u) = b(u) - \sum_{v:(u,v) \in A} f(u, v) + \sum_{v:(v,u) \in A} f(v, u) \qquad \forall u \in V
\]
E = set of nodes with excess (e_f(·) > 0).
D = set of nodes with deficit (e_f(·) < 0).
Procedure Successive Shortest Path( )
  f(·) ← 0; π(·) ← 0;                                         /* initialization */
  e(v) ← b(v) ∀v ∈ V; initialize the sets E and D;
  while E ≠ ∅ do                                               /* algorithm */
      select a node u ∈ E and a node v ∈ D;
      compute shortest path distances d(·) from u to all other nodes in G_f
          with respect to the reduced costs c_π(·);
      P ← shortest path from u to v;
      π(·) ← π(·) - d(·);
      x ← min{e(u), -e(v), min{r_f(i, j) : (i, j) ∈ P}};
      augment a flow of value x along P;
      update f(·), G_f, E, D, c_π(·);
Running time:
Number of iterations: O(nU) (U: upper bound on the largest supply).
Shortest path algorithm: S(n, m, C) (nonnegative arc costs).
⇒ O(nU · S(n, m, nC)) (the reduced costs c_π in G_f are bounded by nC).
So Many Algorithms, So Little Time . . .