
A Course in Combinatorial Optimization
Contents
1. Shortest trees and branchings 3
1.1. Minimum spanning trees 3
1.2. Finding optimum branchings 5
2. Matchings and covers 7
2.1. Matchings, covers, and Gallai’s theorem 7
2.2. Kőnig's theorems 8
2.3. Tutte’s 1-factor theorem and the Tutte-Berge formula 9
3. Edge-colouring 12
3.1. Vizing’s theorem 12
3.2. NP-completeness of edge-colouring 13
4. Multicommodity ﬂows and disjoint paths 15
4.1. Introduction 15
4.2. Two commodities 19
5. Matroids 22
5.1. Matroids and the greedy algorithm 22
5.2. Examples of matroids 25
5.3. Duality, deletion, and contraction 27
5.4. Two technical lemmas 30
5.5. Matroid intersection 31
5.6. Weighted matroid intersection 36
5.7. Matroids and polyhedra 39
6. Perfect matchings in regular bipartite graphs 44
6.1. Counting perfect matchings in 3-regular bipartite graphs 44
6.2. The factor 4/3 is best possible 46
7. Minimum circulation of railway stock 48
7.1. The problem 48
7.2. A network model 49
7.3. More types of trains 52
References 57
Name index 61
Subject index 62
1. Shortest trees and branchings
1.1. Minimum spanning trees
Let G = (V, E) be a connected graph and let l : E → R be a function, called the length
function. For any subset F of E, the length l(F) of F is, by definition:

l(F) := ∑_{e∈F} l(e).    (1)
In this section we consider the problem of ﬁnding a spanning tree in G of minimum length.
There is an easy algorithm for ﬁnding a minimum-length spanning tree, essentially due to
Borůvka [1926]. There are a few variants. The first one we discuss is sometimes called the
Dijkstra-Prim method (after Prim [1957] and Dijkstra [1959]).
Choose a vertex v_1 ∈ V arbitrarily. Determine edges e_1, e_2, . . . successively as follows.
Let U_0 := {v_1}. Suppose that, for some k ≥ 0, edges e_1, . . . , e_k have been chosen, spanning
a tree on the set U_k. Choose an edge e_{k+1} ∈ δ(U_k) that has minimum length among all
edges in δ(U_k).^1 Let U_{k+1} := U_k ∪ e_{k+1}.
By the connectedness of G we know that we can continue choosing such an edge until
U_k = V . In that case the selected edges form a spanning tree T in G. This tree has
minimum length, which can be seen as follows.
Call a forest F greedy if there exists a minimum-length spanning tree T of G that
contains F.
Theorem 1.1. Let F be a greedy forest, let U be one of its components, and let e ∈ δ(U).
If e has minimum length among all edges in δ(U), then F ∪ {e} is again a greedy forest.
Proof. Let T be a minimum-length spanning tree containing F. Let P be the unique path
in T between the end vertices of e. Then P contains at least one edge f that belongs to
δ(U). So T′ := (T \ {f}) ∪ {e} is a tree again. By assumption, l(e) ≤ l(f) and hence
l(T′) ≤ l(T). Therefore, T′ is a minimum-length spanning tree. As F ∪ {e} ⊆ T′, it follows
that F ∪ {e} is greedy.
Corollary 1.1a. The Dijkstra-Prim method yields a spanning tree of minimum length.
Proof. It follows inductively with Theorem 1.1 that at each stage of the algorithm we have
a greedy forest. Hence the ﬁnal tree is greedy — equivalently, it has minimum length.
The Dijkstra-Prim method is an example of a so-called greedy algorithm: we construct
a spanning tree by repeatedly choosing an edge that looks best at the moment, and in the
end obtain a minimum-length spanning tree. Once an edge has been chosen, we never have
to replace it by another edge (there is no 'backtracking').
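For concreteness, the Dijkstra-Prim rule can be sketched in code. The following Python fragment is an illustrative sketch only (the function and variable names are not from the text); it assumes the graph is given as a vertex list, an edge list of vertex pairs, and a length dictionary keyed by edges, and that the graph is connected.

```python
import heapq

def dijkstra_prim(vertices, edges, length):
    """Grow a tree from an arbitrary start vertex, always adding a
    minimum-length edge of delta(U); assumes the graph is connected."""
    vertices = list(vertices)
    adj = {v: [] for v in vertices}
    for e in edges:
        u, w = e
        adj[u].append((length[e], e, w))
        adj[w].append((length[e], e, u))
    U = {vertices[0]}          # the vertex set U_k of the text
    tree = []
    heap = list(adj[vertices[0]])
    heapq.heapify(heap)
    while heap and len(U) < len(vertices):
        l, e, w = heapq.heappop(heap)
        if w in U:
            continue           # both ends in U: e no longer lies in delta(U)
        U.add(w)
        tree.append(e)
        for item in adj[w]:
            heapq.heappush(heap, item)
    return tree
```

With a binary heap the sketch runs in O(m log n) time; the text's description itself does not fix a data structure.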
There is a slightly different method of finding a minimum-length spanning tree, Kruskal's
method (Kruskal [1956]). It is again a greedy algorithm, and again edges e_1, e_2, . . . are
chosen iteratively, but by a different rule.
^1 δ(U) is the set of edges e satisfying |e ∩ U| = 1.
Suppose that, for some k ≥ 0, edges e_1, . . . , e_k have been chosen. Choose an edge
e_{k+1} such that {e_1, . . . , e_k, e_{k+1}} forms a forest, with l(e_{k+1}) as small as possible.
By the connectedness of G we can (starting with k = 0) iterate this until the selected edges
form a spanning tree of G.
Corollary 1.1b. Kruskal’s method yields a spanning tree of minimum length.
Proof. Again directly from Theorem 1.1.
In a similar way one ﬁnds a maximum-length spanning tree.
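Kruskal's rule, too, admits a short sketch; the union-find structure below is a standard way to test the forest condition, though the text itself does not prescribe one (names are illustrative). Passing maximize=True scans edges in decreasing order and so yields a maximum-length spanning tree, as mentioned above.

```python
def kruskal(vertices, edges, length, maximize=False):
    """Scan edges in order of length, keeping an edge iff its ends lie in
    different components of the forest built so far (union-find)."""
    parent = {v: v for v in vertices}

    def find(v):               # find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for e in sorted(edges, key=lambda e: length[e], reverse=maximize):
        ru, rw = find(e[0]), find(e[1])
        if ru != rw:           # e joins two components: keep it
            parent[ru] = rw
            tree.append(e)
    return tree
```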
Application 1.1: Minimum connections. There are several obvious practical situations where
ﬁnding a minimum-length spanning tree is important, for instance, when designing a road system,
electrical power lines, telephone lines, pipe lines, wire connections on a chip. Also when clustering
data say in taxonomy, archeology, or zoology, ﬁnding a minimum spanning tree can be helpful.
Application 1.2: The maximum reliability problem. Often in designing a network one is not
primarily interested in minimizing length, but rather in maximizing ‘reliability’ (for instance when
designing energy or communication networks). Certain cases of this problem can be seen as ﬁnding a
maximum length spanning tree, as was observed by Hu [1961]. We give a mathematical description.
Let G = (V, E) be a graph and let s : E → R_+ be a function. Let us call s(e) the strength of
edge e. For any path P in G, the reliability of P is, by definition, the minimum strength of the edges
occurring in P. The reliability r_G(u, v) of two vertices u and v is equal to the maximum reliability
of P, where P ranges over all paths from u to v.
Let T be a spanning tree of maximum strength, i.e., with ∑_{e∈ET} s(e) as large as possible. (Here
ET is the set of edges of T.) So T can be found with any maximum spanning tree algorithm.
Now T has the same reliability as G, for each pair of vertices u, v. That is:

r_T(u, v) = r_G(u, v) for each u, v ∈ V .    (2)
We leave the proof as an exercise (Exercise 1.5).
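Identity (2) can also be checked computationally: r(u, v) equals the strength at which u and v first fall into a common component when edges are scanned by decreasing strength, so one union-find sweep computes all reliabilities. The sketch below (illustrative, not from the text) runs that sweep on G and on a maximum-strength spanning tree.

```python
def reliabilities(vertices, edges, strength):
    """r(u, v) for all pairs: the strength of the edge whose addition first
    merges u and v when edges are scanned by decreasing strength."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    r = {}
    for e in sorted(edges, key=lambda e: -strength[e]):
        ru, rw = find(e[0]), find(e[1])
        if ru != rw:
            # every pair split between the two components gets reliability s(e)
            A = [x for x in vertices if find(x) == ru]
            B = [x for x in vertices if find(x) == rw]
            for a in A:
                for b in B:
                    r[frozenset((a, b))] = strength[e]
            parent[ru] = rw
    return r

def max_strength_tree(vertices, edges, strength):
    """Maximum-strength spanning tree via the same union-find scan."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for e in sorted(edges, key=lambda e: -strength[e]):
        ru, rw = find(e[0]), find(e[1])
        if ru != rw:
            parent[ru] = rw
            tree.append(e)
    return tree
```

Running both sweeps on a small example and comparing the two dictionaries is, in effect, a numerical check of (2).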
Exercises
1.1. Find, both with the Dijkstra-Prim algorithm and with Kruskal’s algorithm, a spanning tree
of minimum length in the graph in Figure 1.1.
Figure 1.1
1.2. Find a spanning tree of minimum length between the cities given in the following distance
table:
Ame Ams Ape Arn Ass BoZ Bre Ein Ens s-G Gro Haa DH s-H Hil Lee Maa Mid Nij Roe Rot Utr Win Zut Zwo
Amersfoort 0 47 47 46 139 123 86 111 114 81 164 67 126 73 18 147 190 176 63 141 78 20 109 65 70
Amsterdam 47 0 89 92 162 134 100 125 156 57 184 20 79 87 30 132 207 175 109 168 77 40 151 107 103
Apeldoorn 47 89 0 25 108 167 130 103 71 128 133 109 154 88 65 129 176 222 42 127 125 67 66 22 41
Arnhem 46 92 25 0 132 145 108 78 85 116 157 112 171 63 64 154 151 200 17 102 113 59 64 31 66
Assen 139 162 108 132 0 262 225 210 110 214 25 182 149 195 156 68 283 315 149 234 217 159 143 108 69
Bergen op Zoom 123 134 167 145 262 0 37 94 230 83 287 124 197 82 119 265 183 59 128 144 57 103 209 176 193
Breda 86 100 130 108 225 37 0 57 193 75 250 111 179 45 82 228 147 96 91 107 49 66 172 139 156
Eindhoven 111 125 103 78 210 94 57 0 163 127 235 141 204 38 107 232 125 153 61 50 101 91 142 109 114
Enschede 114 156 71 85 110 230 193 163 0 195 135 176 215 148 132 155 237 285 102 187 192 134 40 54 71
’s-Gravenhage 81 57 128 116 214 83 75 127 195 0 236 41 114 104 72 182 162 124 133 177 26 61 180 146 151
Groningen 164 184 133 157 25 287 250 235 135 236 0 199 147 220 178 58 309 340 174 259 242 184 168 133 94
Haarlem 67 20 109 112 182 124 111 141 176 41 199 0 73 103 49 141 226 165 130 184 67 56 171 127 123
Den Helder 126 79 154 171 149 197 179 204 215 114 147 73 0 166 109 89 289 238 188 247 140 119 220 176 144
’s-Hertogenbosch 73 87 88 63 195 82 45 38 148 104 220 103 166 0 69 215 123 141 46 81 79 53 127 94 129
Hilversum 18 30 65 64 156 119 82 107 132 72 178 49 109 69 0 146 192 172 81 150 74 16 127 83 88
Leeuwarden 147 132 129 154 68 265 228 232 155 182 58 141 89 215 146 0 306 306 171 256 208 162 183 139 91
Maastricht 190 207 176 151 283 183 147 125 237 162 309 226 289 123 192 306 0 243 135 50 191 176 213 183 218
Middelburg 176 175 222 200 315 59 96 153 285 124 340 165 238 141 172 306 243 0 187 203 98 156 264 231 246
Nijmegen 63 109 42 17 149 128 91 61 102 133 174 130 188 46 81 171 135 187 0 85 111 76 81 48 83
Roermond 141 168 127 102 234 144 107 50 187 177 259 184 247 81 150 256 50 203 85 0 151 134 166 133 168
Rotterdam 78 77 125 113 217 57 49 101 192 26 242 67 140 79 74 208 191 98 111 151 0 58 177 143 148
Utrecht 20 40 67 59 159 103 66 91 134 61 184 56 119 53 16 162 176 156 76 134 58 0 123 85 90
Winterswijk 109 151 66 64 143 209 172 142 40 180 168 171 220 127 127 183 213 264 81 166 177 123 0 44 92
Zutphen 65 107 22 31 108 176 139 109 54 146 133 127 176 94 83 139 183 231 48 133 143 85 44 0 48
Zwolle 70 103 41 66 69 193 156 144 71 151 94 123 144 129 88 91 218 246 83 168 148 90 92 48 0
1.3. Let G = (V, E) be a graph and let l : E → R be a 'length' function. Call a forest T good if
l(ET′) ≥ l(ET) for each forest T′ satisfying |ET′| = |ET|. (ET is the set of edges of T.)
Let T be a good forest and e be an edge not in T such that T ∪ {e} is a forest and such that
l(e) is as small as possible. Show that T ∪ {e} is good again.
1.4. Let G = (V, E) be a complete graph and let l : E → R_+ be a length function satisfying
l(uw) ≥ min{l(uv), l(vw)} for all distinct u, v, w ∈ V . Let T be a longest spanning tree in G.
Show that for all u, w ∈ V , l(uw) is equal to the minimum length of the edges in the unique
u−w path in T.
1.5. Prove (2).
1.2. Finding optimum branchings
Let D = (V, A) be a directed graph. A subset A′ of A is called a branching, with root
r, or an r-branching (or r-arborescence), if (V, A′) is a rooted tree with root r.
Thus if A′ is an r-branching then there is exactly one r−s path in A′, for each s ∈ V .
Moreover, D contains an r-branching if and only if each vertex of D is reachable in D from
r.
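The reachability criterion just stated is easy to test with a single graph search; a minimal sketch (illustrative names, not from the text):

```python
def has_r_branching(V, A, r):
    """D = (V, A) contains an r-branching iff every vertex is reachable
    from r, which a depth-first search from r decides."""
    out = {}
    for u, v in A:
        out.setdefault(u, []).append(v)
    reach, stack = {r}, [r]
    while stack:
        u = stack.pop()
        for v in out.get(u, []):
            if v not in reach:
                reach.add(v)
                stack.append(v)
    return len(reach) == len(set(V))
```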
Given a directed graph D = (V, A), a root s, and a length function l : A → Q_+, a
minimum-length s-arborescence can be found as follows (Edmonds [1967]).
Let A_0 := {a ∈ A | l(a) = 0}. If A_0 contains an s-arborescence B, then B is a minimum-
length s-arborescence. If A_0 does not contain an s-arborescence, there is a strong component
K of (V, A_0) such that s ∉ K and such that l(a) > 0 for each a ∈ δ^in(K). Let ε :=
min{l(a) | a ∈ δ^in(K)}. Set l′(a) := l(a) − ε if a ∈ δ^in(K) and l′(a) := l(a) otherwise.
Find (recursively) a minimum-length s-arborescence B with respect to l′. Since K
is a strong component of (V, A_0), we can choose B so that |B ∩ δ^in(K)| = 1, since if
|B ∩ δ^in(K)| ≥ 2, then for each a_0 ∈ B ∩ δ^in(K), (B \ {a_0}) ∪ A_0 contains an s-arborescence,
say B′, with l′(B′) ≤ l′(B) − l′(a_0) ≤ l′(B).
Then B is also a minimum-length s-arborescence with respect to l, since for any s-
arborescence C:

l(C) = l′(C) + ε|C ∩ δ^in(K)| ≥ l′(C) + ε ≥ l′(B) + ε = l(B).    (3)
Since the number of iterations is O(m), we have:
Theorem 1.2. An optimum s-arborescence can be found in polynomial time.
Proof. See above.
In fact, direct analysis gives:
Theorem 1.3. An optimum s-arborescence can be found in time O(nm).
Proof. First note that there are at most 2n−3 iterations. This can be seen as follows. Let
𝒦 be the collection of components K to which we applied the algorithm in some iteration.
Then |𝒦| ≤ 2n − 3, since if K, L ∈ 𝒦 then K ∩ L = ∅, or K ⊆ L, or L ⊆ K. Moreover, to
any K ∈ 𝒦, the iteration is applied exactly once, since after application, some arc entering
K has length 0.
Next, each iteration can be performed in time O(m). Indeed, in time O(m) we can
identify the set U of vertices not reachable in (V, A_0) from s. Next, one can identify the
strong components of the subgraph of (V, A_0) induced by U, in time O(m). Moreover, we
can order the vertices in U pre-topologically. Then the first vertex in this order belongs to
a strong component K so that each arc a entering K has l(a) > 0.
For more on algorithms for optimum branchings, see Edmonds [1967], Fulkerson [1974],
Chu and Liu [1965], Bock [1971], Tarjan [1977].
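The zero-arc recursion above is close in spirit to the contraction algorithm of Chu and Liu [1965] and Edmonds [1967] cited here. As a hedged illustration (not the text's own pseudocode), the following Python sketch computes only the length of a minimum s-arborescence, repeatedly contracting the cycles formed by cheapest incoming arcs:

```python
def min_arborescence_length(n, arcs, root):
    """Length of a minimum root-arborescence of a digraph on vertices
    0..n-1, or None if some vertex is unreachable from the root.
    arcs: list of (tail, head, length) with nonnegative lengths."""
    total = 0
    while True:
        # cheapest incoming arc for every non-root vertex
        min_in = [None] * n
        for u, v, w in arcs:
            if v != root and u != v and (min_in[v] is None or w < min_in[v][1]):
                min_in[v] = (u, w)
        if any(min_in[v] is None for v in range(n) if v != root):
            return None
        # walk the chosen arcs backwards to detect cycles
        comp = [-1] * n
        in_cycle = [False] * n
        state = [0] * n          # 0 new, 1 on current walk, 2 finished
        state[root] = 2
        ncomp = 0
        found_cycle = False
        for v in range(n):
            path, x = [], v
            while state[x] == 0:
                state[x] = 1
                path.append(x)
                x = min_in[x][0]
            if state[x] == 1:    # a new cycle through x: contract and pay for it
                found_cycle = True
                y = x
                while True:
                    comp[y] = ncomp
                    in_cycle[y] = True
                    total += min_in[y][1]
                    y = min_in[y][0]
                    if y == x:
                        break
                ncomp += 1
            for y in path:
                state[y] = 2
        if not found_cycle:
            return total + sum(min_in[v][1] for v in range(n) if v != root)
        for v in range(n):
            if comp[v] == -1:
                comp[v] = ncomp
                ncomp += 1
        # shrink each cycle to a single vertex; arcs into a cycle are reweighted
        new_arcs = []
        for u, v, w in arcs:
            if comp[u] != comp[v]:
                new_arcs.append((comp[u], comp[v],
                                 w - min_in[v][1] if in_cycle[v] else w))
        n, arcs, root = ncomp, new_arcs, comp[root]
```

Recovering the arborescence itself requires undoing the contractions, which the sketch omits for brevity.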
2. Matchings and covers
2.1. Matchings, covers, and Gallai’s theorem
Let G = (V, E) be a graph. A coclique is a subset C of V such that e ⊈ C for each edge
e of G. A vertex cover is a subset W of V such that e ∩ W ≠ ∅ for each edge e of G. It is
not difficult to show that for each U ⊆ V :

U is a coclique ⇐⇒ V \ U is a vertex cover.    (1)
A matching is a subset M of E such that e ∩ e′ = ∅ for all e, e′ ∈ M with e ≠ e′. A matching
is called perfect if it covers all vertices (that is, has size ½|V|). An edge cover is a subset
F of E such that for each vertex v there exists e ∈ F satisfying v ∈ e. Note that an edge
cover can exist only if G has no isolated vertices.
Define:

α(G) := max{|C| | C is a coclique},
ρ(G) := min{|F| | F is an edge cover},
τ(G) := min{|W| | W is a vertex cover},
ν(G) := max{|M| | M is a matching}.    (2)

These numbers are called the coclique number, the edge cover number, the vertex cover
number, and the matching number of G, respectively.
It is not diﬃcult to show that:
α(G) ≤ ρ(G) and ν(G) ≤ τ(G). (3)
The triangle K_3 shows that strict inequalities are possible. In fact, equality in one of the
relations (3) implies equality in the other, as Gallai [1958,1959] proved:
Theorem 2.1 (Gallai's theorem). For any graph G = (V, E) without isolated vertices one
has

α(G) + τ(G) = |V| = ν(G) + ρ(G).    (4)
Proof. The ﬁrst equality follows directly from (1).
To see the second equality, first let M be a matching of size ν(G). For each of the
|V| − 2|M| vertices v missed by M, add to M an edge covering v. We obtain an edge cover
of size |M| + (|V| − 2|M|) = |V| − |M|. Hence ρ(G) ≤ |V| − ν(G).
Second, let F be an edge cover of size ρ(G). For each v ∈ V delete from F, d_F(v) − 1
edges incident with v. We obtain a matching of size at least |F| − ∑_{v∈V}(d_F(v) − 1) =
|F| − (2|F| − |V|) = |V| − |F|. Hence ν(G) ≥ |V| − ρ(G).
This proof also shows that if we have a matching of maximum cardinality in any graph
G, then we can derive from it a minimum cardinality edge cover, and conversely.
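The constructive step in this proof is easy to code. A small brute-force sketch (suitable for tiny graphs only; the names are illustrative, not from the text):

```python
import itertools

def maximum_matching_brute(edges):
    """Largest set of pairwise disjoint edges, by exhaustive search."""
    for k in range(len(edges), 0, -1):
        for M in itertools.combinations(edges, k):
            ends = [v for e in M for v in e]
            if len(ends) == len(set(ends)):
                return list(M)
    return []

def matching_to_edge_cover(vertices, edges, M):
    """Gallai's construction: for each vertex missed by the maximum
    matching M, add an arbitrary incident edge."""
    cover = list(M)
    covered = {v for e in M for v in e}
    for v in vertices:
        if v not in covered:
            e = next(e for e in edges if v in e)
            cover.append(e)
            covered.update(e)
    return cover
```

By the argument above, when M is maximum the resulting cover has size |V| − |M|, which is minimum.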
2.2. Kőnig's theorems
A classical min-max relation due to Kőnig [1931] (extending a result of Frobenius [1917])
characterizes the maximum size of a matching in a bipartite graph:
Theorem 2.2 (Kőnig's matching theorem). For any bipartite graph G = (V, E) one has

ν(G) = τ(G).    (5)
That is, the maximum cardinality of a matching in a bipartite graph is equal to the minimum
cardinality of a vertex cover.
Proof. Let G = (V, E) be a bipartite graph, with colour classes U and W, say. By (3) it
suffices to show that ν(G) ≥ τ(G), which we do by induction on |V|. We distinguish two
cases.
Case 1: There exists a vertex cover C with |C| = τ(G) intersecting both U and W.
Let U′ := U ∩ C, U″ := U \ C, W′ := W \ C and W″ := W ∩ C. Let G′ and G″ be the
subgraphs of G induced by U′ ∪ W′ and U″ ∪ W″ respectively.
We show that τ(G′) ≥ |U′|. Let K be a vertex cover of G′ of size τ(G′). Then K ∪ W″
is a vertex cover of G, since K intersects all edges of G that are contained in U′ ∪ W′ and
W″ intersects all edges of G that are not contained in U′ ∪ W′ (since each edge intersects
C = U′ ∪ W″). So |K ∪ W″| ≥ τ(G) = |U′| + |W″| and hence |K| ≥ |U′|.
So τ(G′) ≥ |U′|. It follows by our induction hypothesis that G′ contains a matching of
size |U′|. Similarly, G″ contains a matching of size |W″|. Combining the two matchings we
obtain a matching of G of size |U′| + |W″| = τ(G).
Case 2: There exists no such vertex cover C.
Let e = uw be any edge of G. Let G′ be the subgraph of G induced by V \ {u, w}. We
show that τ(G′) ≥ τ(G) − 1. Suppose to the contrary that G′ contains a vertex cover K of
size τ(G) − 2. Then C := K ∪ {u, w} would be a vertex cover of G of size τ(G) intersecting
both U and W, a contradiction.
So τ(G′) ≥ τ(G) − 1, implying by our induction hypothesis that G′ contains a matching
M of size τ(G) − 1. Hence M ∪ {e} is a matching of G of size τ(G).
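Kőnig's theorem also has an algorithmic side: a maximum matching in a bipartite graph can be found with augmenting paths. A minimal Hungarian-style sketch (illustrative names; not the proof technique used above):

```python
def bipartite_maximum_matching(U, adj):
    """adj maps each u in colour class U to its neighbours in the other
    colour class. Returns a dict w -> u describing a maximum matching."""
    match = {}

    def augment(u, seen):
        # try to match u, possibly re-routing previously matched vertices
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                if w not in match or augment(match[w], seen):
                    match[w] = u
                    return True
        return False

    for u in U:
        augment(u, set())
    return match
```

Each call to augment searches for an augmenting path, so the whole run takes O(|V| · |E|) time.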
Combination of Theorems 2.1 and 2.2 yields the following result of Kőnig [1932].
Corollary 2.2a (Kőnig's edge cover theorem). For any bipartite graph G = (V, E), without
isolated vertices, one has

α(G) = ρ(G).    (6)

That is, the maximum cardinality of a coclique in a bipartite graph is equal to the minimum
cardinality of an edge cover.
Proof. Directly from Theorems 2.1 and 2.2, as α(G) = |V| − τ(G) = |V| − ν(G) = ρ(G).
Exercises
2.1. (i) Prove that a k-regular bipartite graph has a perfect matching (if k ≥ 1).
(ii) Derive that a k-regular bipartite graph has k disjoint perfect matchings.
(iii) Give for each k > 1 an example of a k-regular graph not having a perfect matching.
2.2. Prove that in a matrix, the maximum number of nonzero entries with no two in the same line
(=row or column), is equal to the minimum number of lines that include all nonzero entries.
2.3. Let A = (A_1, . . . , A_n) be a family of subsets of some finite set X. A subset Y of X is called
a transversal or a system of distinct representatives (SDR) of A if there exists a bijection
π : {1, . . . , n} → Y such that π(i) ∈ A_i for each i = 1, . . . , n.
Decide if the following collections have an SDR:
(i) {3, 4, 5}, {2, 5, 6}, {1, 2, 5}, {1, 2, 3}, {1, 3, 6},
(ii) {1, 2, 3, 4, 5, 6}, {1, 3, 4}, {1, 4, 7}, {2, 3, 5, 6}, {3, 4, 7}, {1, 3, 4, 7}, {1, 3, 7}.
2.4. Let A = (A_1, . . . , A_n) be a family of subsets of some finite set X. Prove that A has an SDR
if and only if

|⋃_{i∈I} A_i| ≥ |I|    (7)

for each subset I of {1, . . . , n}.
[Hall’s ‘marriage’ theorem (Hall [1935]).]
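Condition (7) can be checked by brute force over all index sets I; a tiny sketch (exponential in n, so only for small families; the function name is illustrative):

```python
from itertools import combinations

def hall_condition(sets):
    """True iff |union of A_i for i in I| >= |I| for every index set I,
    i.e. (by Hall's theorem) iff the family has an SDR."""
    n = len(sets)
    for k in range(1, n + 1):
        for I in combinations(range(n), k):
            if len(set().union(*(sets[i] for i in I))) < k:
                return False
    return True
```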
2.3. Tutte’s 1-factor theorem and the Tutte-Berge formula
A basic result on matchings was found by Tutte [1947]. It characterizes graphs that
have a perfect matching. A perfect matching (or a 1-factor) is a matching M covering all
vertices of the graph.
In order to give Tutte's characterization, define, for each subset U of the vertex set V
of a graph G,

o(U) := the number of odd components of the subgraph G|U of G induced by U.    (8)
Here a component is odd (even, respectively) if it has an odd (even) number of vertices.
An important inequality is that for each matching M and each subset U of V one has

|M| ≤ (|V| + |V \ U| − o(U)) / 2.    (9)

This follows from the fact that at most (|U| − o(U))/2 edges of M are contained in U, while
at most |V \ U| edges of M intersect V \ U. So |M| ≤ (|U| − o(U))/2 + |V \ U|, implying
(9).
It will turn out that there is always a matching M and a subset U of V attaining equality
in (9).
Theorem 2.3 (Tutte's 1-factor theorem). A graph G = (V, E) has a perfect matching if
and only if

o(U) ≤ |V \ U| for each U ⊆ V .    (10)
Proof. Necessity of (10) follows directly from (9). To see sufficiency, suppose there exist
graphs G = (V, E) satisfying the condition, but not having a perfect matching. Fixing
V , take G such that G is simple and |E| is as large as possible. Let U := {v ∈ V | v is
nonadjacent to at least one other vertex of G}. We show:

if a, b, c ∈ U and ab, bc ∈ E then ac ∈ E.    (11)

For suppose ac ∉ E. By the maximality of |E|, adding ac to E makes that G has a perfect
matching (condition (10) is maintained under adding edges). So G has a matching M
missing only a and c. As b ∈ U, there exists a d with bd ∉ E. Again by the maximality of
|E|, G has a matching N missing only b and d. Now each component of M△N contains the
same number of edges in M as in N — otherwise there would exist an M- or N-augmenting
path, and hence a perfect matching in G, a contradiction. So the component P of M△N
containing d is a path starting in d, with first edge in M and last edge in N, and hence
ending in a or c; by symmetry we may assume it ends in a. Moreover, P does not traverse
b. Then extending P by the edge ab gives an N-augmenting path, and hence a perfect
matching in G — a contradiction. This shows (11).
By (11), each component of G|U is a complete graph. Moreover, by (10), G|U has at
most |V \ U| odd components. This implies that G has a perfect matching, contradicting
our assumption.
This proof is due to Lovász [1975]. For another proof, see Anderson [1971].
We derive from Tutte’s 1-factor theorem a min-max formula for the maximum cardinality
of a matching in a graph, the Tutte-Berge formula.
Corollary 2.3a (Tutte-Berge formula). For each graph G = (V, E),

ν(G) = min_{U⊆V} (|V| + |V \ U| − o(U)) / 2.    (12)
Proof. (9) implies that ≤ holds in (12). To see the reverse inequality, let m be the minimum
value in (12). Extend G by a set W of |V| − 2m new vertices, so that each vertex in W is
adjacent to each vertex in V ∪ W. This makes the graph G′. If G′ has a perfect matching
M′, then at most |V| − 2m edges in M′ intersect W. Deleting these edges gives a matching
M in G with |M| ≥ |M′| − (|V| − 2m) = ½(|V| + |W|) − (|V| − 2m) = m.
So we may assume that G′ does not have a perfect matching. Then by Tutte's 1-factor
theorem, there is a subset U of V ∪ W such that the subgraph G′|U of G′ induced by U has
more than |(V ∪ W) \ U| odd components. If U intersects W, G′|U has only one component,
and hence |(V ∪ W) \ U| = 0, that is, U = V ∪ W. But then o(U) = 0 since |V ∪ W| is even.
So U ∩ W = ∅, giving o(U) ≤ |V| + |V \ U| − 2m = |(V ∪ W) \ U|, a contradiction.
Stating this corollary diﬀerently, each graph G = (V, E) has a matching M and a
subset U of V having equality in (9). So M is a maximum-size matching and U attains
the minimum in (12). In the following sections we will show how to ﬁnd such M and U
algorithmically. It yields an alternative proof of the results in this section.
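On small graphs, equality in (12) can also be verified exhaustively; the following sketch (illustrative, exponential-time, not the algorithmic method announced above) brute-forces both sides:

```python
import itertools

def nu(V, E):
    """Matching number by exhaustive search (tiny graphs only)."""
    for k in range(len(E), 0, -1):
        for M in itertools.combinations(E, k):
            ends = [v for e in M for v in e]
            if len(ends) == len(set(ends)):
                return k
    return 0

def odd_components(U, E):
    """Number of odd components of the subgraph induced by U."""
    U, seen, count = set(U), set(), 0
    for v in U:
        if v not in seen:
            stack, size = [v], 0
            seen.add(v)
            while stack:
                x = stack.pop()
                size += 1
                for a, b in E:
                    if x == a and b in U and b not in seen:
                        seen.add(b)
                        stack.append(b)
                    elif x == b and a in U and a not in seen:
                        seen.add(a)
                        stack.append(a)
            count += size % 2
    return count

def tutte_berge(V, E):
    """min over U of (|V| + |V \\ U| - o(U)) / 2, by brute force."""
    n = len(V)
    return min((n + (n - k) - odd_components(U, E)) // 2
               for k in range(n + 1)
               for U in itertools.combinations(V, k))
```

(The numerator is always even, by the parity argument behind (9), so integer division is exact.)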
With Gallai’s theorem, the Tutte-Berge formula implies a formula for the edge cover
number ρ(G):
Corollary 2.3b. Let G = (V, E) be a graph without isolated vertices. Then

ρ(G) = max_{U⊆V} (|U| + o(U)) / 2.    (13)
Proof. By Gallai's theorem (Theorem 2.1) and the Tutte-Berge formula (Corollary 2.3a),

ρ(G) = |V| − ν(G) = |V| − min_{W⊆V} (|V| + |W| − o(V \ W)) / 2 = max_{U⊆V} (|U| + o(U)) / 2.    (14)
Exercises
2.5. (i) Show that a tree has at most one perfect matching.
(ii) Show (not using Tutte’s 1-factor theorem) that a tree G = (V, E) has a perfect matching
if and only if the subgraph G−v has exactly one odd component, for each v ∈ V .
2.6. Let G be a 3-regular graph without any isthmus. Show that G has a perfect matching.
2.7. Let G = (V, E) be a graph and let T be a subset of V . Then G has a matching covering T, if
and only if the number of odd components of G−W contained in T is at most |W|, for each
W ⊆ V .
3. Edge-colouring
3.1. Vizing’s theorem
We recall some definitions and notation. Let G = (V, E) be a graph. An edge-colouring is
a partition of E into matchings. Each matching in an edge-colouring is called a colour or an
edge-colour. A k-edge-colouring is an edge-colouring with k colours. G is k-edge-colourable
if a k-edge-colouring exists. The smallest k for which there exists a k-edge-colouring is
called the edge-colouring number of G, denoted by χ(G). Since an edge-colouring of G is a
(vertex-)colouring of the line-graph L(G) of G, we have that χ(G) = γ(L(G)).
The maximum degree of G is denoted by ∆(G).
Vizing [1964,1965] showed the following (we follow the proof of Ehrenfeucht, Faber, and
Kierstead [1984]):
Theorem 3.1 (Vizing’s theorem). For any simple graph G one has ∆(G) ≤ χ(G) ≤
∆(G) + 1.
Proof. The inequality ∆(G) ≤ χ(G) being trivial, we show χ(G) ≤ ∆(G) + 1. Let G =
(V, E) be a simple graph. We show the theorem by induction on |V|. Let k := ∆(G) + 1.
Given any partial k-edge-colouring, let F_u be the set of colours that miss u, and let
F_vu := F_v ∩ F_u.
Choose a vertex v ∈ V , and suppose we have k-edge-coloured the graph G − v. Next
colour a maximum number of edges incident with v in such a way that

for each uncoloured edge vu one has F_vu ≠ ∅, and there is at most one
uncoloured edge vu with |F_vu| = 1.    (1)
Such a colouring exists, since colouring no edge incident with v gives (1).
Let U := {u ∈ V | vu is an uncoloured edge}. Assume that U ≠ ∅. So |F_v| ≥ |U| + 1,
since at most k − 1 − |U| edges incident with v are coloured (as v has degree at most k − 1).
Suppose there exists an i ∈ ⋃_{u∈U} F_vu such that i belongs to F_vu for at most one u ∈ U
with |F_vu| ≤ 2. Then we can give colour i to the edge vu for which |F_vu| is smallest, without
violating (1). This contradicts our maximality assumption.
So we know that for each i ∈ ⋃_{u∈U} F_vu there exist two distinct vertices u ∈ U with i ∈
F_vu and |F_vu| ≤ 2. Hence |⋃_{u∈U} F_vu| ≤ |U|, and therefore there is a colour j ∈ F_v \ ⋃_{u∈U} F_u.
Choose w ∈ U such that |F_vu| ≥ 2 for each u ∈ U with u ≠ w. Choose i ∈ F_vw.
Consider the j-i path P starting at w, interchange colours j and i on P, and give edge vw
colour j. Then for all u ∈ U with u ≠ w the set F_vu is unchanged, except for at most one
u ≠ w (if the end vertex of P belongs to U), for which F_vu is replaced by F_vu \ {i}. So (1)
is maintained.
In this theorem we cannot delete the condition that G be simple: the graph G obtained
from K_3 by replacing each edge by two parallel edges has χ(G) = 6 and ∆(G) = 4.
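The bound and this multigraph example can be checked mechanically with a small backtracking search (an illustrative sketch, exponential-time, so tiny graphs only):

```python
def edge_colourable(edges, k):
    """Backtracking test whether a (multi)graph, given as a list of vertex
    pairs (repeats allowed), admits a proper k-edge-colouring."""
    colour = [None] * len(edges)

    def clashes(i, c):
        # colour c is forbidden for edge i if an already-coloured edge of
        # colour c shares an endpoint with it
        u, v = edges[i]
        return any(colour[j] == c
                   and (edges[j][0] in (u, v) or edges[j][1] in (u, v))
                   for j in range(i))

    def solve(i):
        if i == len(edges):
            return True
        for c in range(k):
            if not clashes(i, c):
                colour[i] = c
                if solve(i + 1):
                    return True
                colour[i] = None
        return False

    return solve(0)
```

On the doubled triangle all six edges are pairwise adjacent, so five colours fail and six suffice, even though ∆ + 1 = 5.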
3.2. NP-completeness of edge-colouring
Holyer [1981] showed:
Theorem 3.2. It is NP-complete to decide if a given 3-regular graph is 3-edge-colourable.
Proof. We show that the 3-satisﬁability problem (3-SAT) can be reduced to the edge-
colouring problem of 3-regular graphs. To this end, consider the graph fragment called the
inverting component given by the left picture of Figure 3.1, where the right picture gives its
symbolic representation if we take it as part of larger graphs.
Figure 3.1 The inverting component and its symbolic representation.
This graph fragment has the property that a colouring of the edges a, b, c, d and
e can be properly extended to a colouring of the edges spanned by the fragment, if
and only if either a and b have the same colour while c, d, and e have three distinct
colours, or c and d have the same colour while a, b, and e have three distinct colours.
The pairs a, b and c, d are called the output pairs.
Consider now an instance of the 3-satisﬁability problem. From the inverting component
we build larger graph fragments. First we construct for each variable u a variable-setting
component given by Figure 3.2.
The figure shows the case where u occurs (as u or ū) in exactly four clauses. The
general case, where u occurs (as u or ū) in exactly k clauses, is constructed similarly, with
2k inverting components and k output pairs.
Next we construct, for each clause C, a satisfaction-testing component given by Figure
3.3.
Now if a variable u occurs in a clause C as u, we connect one of the output pairs of u
with one of the output pairs of C. If a variable u occurs in a clause C as ū, we connect
one of the output pairs of u with one side of an inverting component, and connect the other
side of it with one of the output pairs of C.
In this way we can match up all output pairs of the variable-setting and satisfaction-
testing components, yielding a fragment F with only single edges leaving it. We complete
the graph G by making a copy F′ of F and connecting any single edge end to its copy in
F′.
Now, given the properties of the fragments, one easily checks that the input of the 3-
satisﬁability problem is satisﬁable if and only if G is 3-edge-colourable.
Figure 3.2 The variable-setting component for a variable u occurring in four clauses.
This graph fragment has the property that a colouring of the edges leaving the frag-
ment can be extended to a proper colouring of the edges spanned by the fragment,
if and only if either each of the output pairs leaving the fragment is monochromatic,
or none of them is monochromatic.
Figure 3.3 The satisfaction-testing component for a clause C.
This graph fragment has the property that a colouring of the edges leaving the frag-
ment can be extended to a proper colouring of the edges spanned by the fragment, if
and only if at least one of the output pairs leaving the fragment is monochromatic.
4. Multicommodity ﬂows and disjoint paths
4.1. Introduction
The problem of ﬁnding a maximum ﬂow from one ‘source’ r to one ‘sink’ s is highly
tractable. There is a very eﬃcient algorithm, which outputs an integer maximum ﬂow if all
capacities are integer. Moreover, the maximum ﬂow value is equal to the minimum capacity
of a cut separating r and s. If all capacities are equal to 1, the problem reduces to ﬁnding
arc-disjoint paths. Some direct transformations give similar results for vertex capacities
and for vertex-disjoint paths.
Often in practice however, one is not interested in connecting only one pair of source
and sink by a ﬂow or by paths, but several pairs of sources and sinks simultaneously. One
may think of a large communication or transportation network, where several messages or
goods must be transmitted all at the same time over the same network, between diﬀerent
pairs of terminals. A recent application is the design of very large-scale integrated (VLSI)
circuits, where several pairs of pins must be interconnected by wires on a chip, in such a
way that the wires follow given ‘channels’ and that the wires connecting diﬀerent pairs of
pins do not intersect each other.
Mathematically, these problems can be formulated as follows. First, there is the multi-
commodity ﬂow problem (or k-commodity ﬂow problem):
given: a directed graph G = (V, E), pairs (r_1, s_1), . . . , (r_k, s_k) of vertices of G,
a 'capacity' function c : E → Q_+, and 'demands' d_1, . . . , d_k,

find: for each i = 1, . . . , k, an r_i − s_i flow x_i ∈ Q^E_+ so that x_i has value d_i
and so that for each arc e of G:

∑_{i=1}^{k} x_i(e) ≤ c(e).    (1)
The pairs (r_i, s_i) are called commodities. (We assume r_i ≠ s_i throughout.)
If we require each x_i to be an integer flow, the problem is called the integer multicom-
modity flow problem or integer k-commodity flow problem. (To distinguish from the integer
version of this problem, one sometimes adds the adjective fractional to the name of the
problem if no integrality is required.)
The problem has a natural analogue to the case where G is undirected. We replace
each undirected edge e = {v, w} by two opposite arcs (v, w) and (w, v) and ask for flows
x_1, . . . , x_k of values d_1, . . . , d_k, respectively, so that for each edge e = {v, w} of G:

∑_{i=1}^{k} (x_i(v, w) + x_i(w, v)) ≤ c(e).    (2)

Thus we obtain the undirected multicommodity flow problem or undirected k-commodity flow
problem. Again, we add integer if we require the x_i to be integer flows.
If all capacities and demands are 1, the integer multicommodity flow problem is equiv-
alent to the arc- or edge-disjoint paths problem:

given: a (directed or undirected) graph G = (V, E), pairs (r_1, s_1), . . . , (r_k, s_k)
of vertices of G,

find: pairwise edge-disjoint paths P_1, . . . , P_k where P_i is an r_i − s_i path
(i = 1, . . . , k).    (3)
Related is the vertex-disjoint paths problem:

given: a (directed or undirected) graph G = (V, E), pairs (r_1, s_1), . . . , (r_k, s_k)
of vertices of G,

find: pairwise vertex-disjoint paths P_1, . . . , P_k where P_i is an r_i − s_i path
(i = 1, . . . , k).    (4)
We leave it as an exercise (Exercise 4.1) to check that the vertex-disjoint paths problem
can be transformed to the directed edge-disjoint paths problem.
The (fractional) multicommodity flow problem can be easily described as one of solving
a system of linear inequalities in the variables x_i(e) for i = 1, . . . , k and e ∈ E. The
constraints are the flow conservation laws for each flow x_i separately, together with the
inequalities given in (1). Therefore, the fractional multicommodity flow problem can be
solved in polynomial time with any polynomial-time linear programming algorithm.
In fact, the only polynomial-time algorithm known for the fractional multicommodity
ﬂow problem is any general linear programming algorithm. Ford and Fulkerson [1958]
designed an algorithm based on the simplex method, with column generation.
The following cut condition trivially is a necessary condition for the existence of a
solution to the fractional multicommodity flow problem (1):

for each W ⊆ V the capacity of δ^out_E(W) is not less than the demand of
δ^out_R(W),    (5)

where R := {(r_1, s_1), . . . , (r_k, s_k)}. However, this condition is in general not sufficient, even
not in the two simple cases given in Figure 4.1 (taking all capacities and demands equal to
1).
Figure 4.1
One may derive from the max-flow min-cut theorem that the cut condition is sufficient
if r_1 = r_2 = ··· = r_k (similarly if s_1 = s_2 = ··· = s_k) — see Exercise 4.3.
Similarly, in the undirected case a necessary condition is the following cut condition:

   for each W ⊆ V, the capacity of δ_E(W) is not less than the demand of δ_R(W) (6)

(taking R := {{r_1, s_1}, ..., {r_k, s_k}}). In the special case of the edge-disjoint paths problem
(where all capacities and demands are equal to 1), the cut condition reads:

   for each W ⊆ V, |δ_E(W)| ≥ |δ_R(W)|. (7)
Figure 4.2 shows that this condition again is not suﬃcient.
Figure 4.2
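On small instances the cut condition (7) can be verified by brute force over all vertex subsets W; a sketch (function and variable names are ours):

```python
from itertools import combinations

def cut_condition_holds(vertices, edges, pairs):
    """Check condition (7): for every W subset of V, the number of graph
    edges crossing the cut (W, V \\ W) must be at least the number of
    commodity pairs {r_i, s_i} that the cut separates."""
    vs = list(vertices)
    for size in range(1, len(vs)):
        for W in combinations(vs, size):
            Wset = set(W)
            capacity = sum(1 for u, v in edges if (u in Wset) != (v in Wset))
            demand = sum(1 for r, s in pairs if (r in Wset) != (s in Wset))
            if capacity < demand:
                return False
    return True
```

Exercise 4.4 implies that, on a connected graph, it would suffice to test only those W for which both sides induce connected subgraphs.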
However, Hu [1963] showed that the cut condition is suﬃcient for the existence of a
fractional multicommodity ﬂow, in the undirected case with k = 2 commodities. He gave
an algorithm that yields a half-integer solution if all capacities and demands are integer.
This result was extended by Rothschild and Whinston [1966]. We discuss these results in
Section 4.2.
Similar results were obtained by Okamura and Seymour [1981] for arbitrary k, provided
that the graph is planar and all terminals r_i, s_i are on the boundary of the unbounded face.
The integer multicommodity flow problem is NP-complete, even in the undirected case
with k = 2 commodities and all capacities equal to 1, with arbitrary demands d_1, d_2 (Even,
Itai, and Shamir [1976]). This implies that the undirected edge-disjoint paths problem is
NP-complete, even if |{{r_1, s_1}, ..., {r_k, s_k}}| = 2.
In fact, the disjoint paths problem is NP-complete in all modes (directed/undirected,
vertex/edge disjoint), even if we restrict the graph G to be planar (D.E. Knuth (see Karp
[1975]), Lynch [1975], Kramer and van Leeuwen [1984]). For general directed graphs the
arc-disjoint paths problem is NP-complete even for k = 2 ‘opposite’ commodities (r, s) and
(s, r) (Fortune, Hopcroft, and Wyllie [1980]).
On the other hand, it is a deep result of Robertson and Seymour [1995] that the undi-
rected vertex-disjoint paths problem is polynomially solvable for any ﬁxed number k of
commodities. Hence also the undirected edge-disjoint paths problem is polynomially solv-
able for any ﬁxed number k of commodities.
Robertson and Seymour observed that if the graph G is planar and all terminals r_i, s_i
are on the boundary of the unbounded face, there is an easy ‘greedy-type’ algorithm for the
vertex-disjoint paths problem.
It is shown by Schrijver [1994] that for each ﬁxed k, the k disjoint paths problem is
solvable in polynomial time for directed planar graphs. For the directed planar arc-disjoint
version, the complexity is unknown. That is, there is the following research problem:
Research problem. Is the directed arc-disjoint paths problem polynomially solvable for
planar graphs with k = 2 commodities? Is it NP-complete?
Application 4.1: Multicommodity ﬂows. Certain goods or messages must be transported
through the same network, where the goods or messages may have diﬀerent sources and sinks.
This is a direct special case of the problems described above.
Application 4.2: VLSI-routing. On a chip certain modules are placed, each containing a number
of ‘pins’. Certain pairs of pins should be connected by an electrical connection (a ‘wire’) on the chip,
in such a way that each wire follows a certain (very fine) grid on the chip and that wires connecting
different pairs of pins are disjoint.
Determining the routes of the wires clearly is a special case of the disjoint paths problem.
Exercises
4.1. Show that each of the following problems (a), (b), (c) can be reduced to problems (b), (c),
(d), respectively:
(a) the undirected edge-disjoint paths problem,
(b) the undirected vertex-disjoint paths problem,
(c) the directed vertex-disjoint paths problem,
(d) the directed arc-disjoint paths problem.
4.2. Show that the undirected edge-disjoint paths problem for planar graphs can be reduced to the
directed arc-disjoint paths problem for planar graphs.
4.3. Derive from the max-flow min-cut theorem that the cut condition (5) is sufficient for the
existence of a fractional multicommodity flow if r_1 = ··· = r_k.
4.4. Show that if the undirected graph G = (V, E) is connected and the cut condition (7) is
violated, then it is violated by some W ⊆ V for which both W and V \ W induce connected
subgraphs of G.
4.5. (i) Show with Farkas’ lemma²: the fractional multicommodity flow problem (1) has a solution,
if and only if for each ‘length’ function l : E −→ Q_+ one has:

   ∑_{i=1}^{k} d_i · dist_l(r_i, s_i) ≤ ∑_{e∈E} l(e)c(e). (8)

(Here dist_l(r, s) denotes the length of a shortest r − s path with respect to l.)
(ii) Interpret the cut condition (5) as a special case of this condition.
² Farkas’ lemma states: let A be an m × n matrix and let b ∈ R^m; then there exists a vector x ∈ R^n
satisfying Ax ≤ b, if and only if for each vector y ∈ R^m_+ with y^T A = 0 one has y^T b ≥ 0.
4.2. Two commodities
Hu [1963] gave a direct combinatorial method for the undirected two-commodity ﬂow
problem and he showed that in this case the cut condition suﬃces. In fact, he showed
that if the cut condition holds and all capacities and demands are integer, there exists a
half-integer solution. We ﬁrst give a proof of this result due to Sakarovitch [1973].
Consider a graph G = (V, E), with commodities {r_1, s_1} and {r_2, s_2}, a capacity function
c : E −→ Z_+ and demands d_1 and d_2.
Theorem 4.1 (Hu’s two-commodity ﬂow theorem). The undirected two-commodity ﬂow
problem, with integer capacities and demands, has a half-integer solution, if and only if the
cut condition (6) is satisﬁed.
Proof. Suppose the cut condition holds. Orient the edges of G arbitrarily, yielding the
directed graph D = (V, A). For any a ∈ A we denote by c(a) the capacity of the underlying
undirected edge.
Define for any x ∈ R^A and any v ∈ V:

   f(x, v) := ∑_{a∈δ^out(v)} x(a) − ∑_{a∈δ^in(v)} x(a). (9)
So f(x, v) is the ‘net loss’ of x in vertex v.
By the max-flow min-cut theorem there exists a function x′ : A −→ Z satisfying:

   f(x′, r_1) = d_1,  f(x′, s_1) = −d_1,  f(x′, r_2) = d_2,  f(x′, s_2) = −d_2,
   f(x′, v) = 0 for each other vertex v,
   |x′(a)| ≤ c(a) for each arc a of D. (10)
This can be seen by extending the undirected graph G by adding two new vertices r′ and s′
and four new edges {r′, r_1}, {s_1, s′} (both with capacity d_1) and {r′, r_2}, {s_2, s′} (both with
capacity d_2) as in Figure 4.3.
Figure 4.3
Then the cut condition for the two-commodity flow problem implies that the minimum
capacity of any r′ − s′ cut in the extended graph is equal to d_1 + d_2. Hence, by the max-flow
min-cut theorem, there exists an integer-valued r′ − s′ flow in the extended graph of value
d_1 + d_2. This gives x′ satisfying (10).
Similarly, the max-flow min-cut theorem implies the existence of a function x″ : A −→ Z
satisfying:

   f(x″, r_1) = d_1,  f(x″, s_1) = −d_1,  f(x″, r_2) = −d_2,  f(x″, s_2) = d_2,
   f(x″, v) = 0 for each other vertex v,
   |x″(a)| ≤ c(a) for each arc a of D. (11)
To see this we extend G with vertices r″ and s″ and edges {r″, r_1}, {s_1, s″} (both with
capacity d_1) and {r″, s_2}, {r_2, s″} (both with capacity d_2) (cf. Figure 4.4).
Figure 4.4
After this we proceed as above.
Now consider the vectors

   x_1 := (1/2)(x′ + x″)  and  x_2 := (1/2)(x′ − x″). (12)
Since f(x_1, v) = (1/2)(f(x′, v) + f(x″, v)) for each v, we see from (10) and (11) that x_1
satisfies:

   f(x_1, r_1) = d_1,  f(x_1, s_1) = −d_1,  f(x_1, v) = 0 for all other v. (13)
So x_1 gives a half-integer r_1 − s_1 flow in G of value d_1. Similarly, x_2 satisfies:

   f(x_2, r_2) = d_2,  f(x_2, s_2) = −d_2,  f(x_2, v) = 0 for all other v. (14)

So x_2 gives a half-integer r_2 − s_2 flow in G of value d_2.
Moreover, x_1 and x_2 together satisfy the capacity constraint, since for each arc a of D:

   |x_1(a)| + |x_2(a)| = (1/2)|x′(a) + x″(a)| + (1/2)|x′(a) − x″(a)|
                       = max{|x′(a)|, |x″(a)|} ≤ c(a). (15)

(Note that (1/2)|α + β| + (1/2)|α − β| = max{|α|, |β|} for all reals α, β.)
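The identity behind (15), and the splitting (12) itself, are easy to check numerically; a small sketch with exact rational arithmetic (all names are ours):

```python
from fractions import Fraction

def combine(xp, xpp):
    """Form x_1 = (x' + x'')/2 and x_2 = (x' - x'')/2 as in (12), given the
    two integer single-commodity flows as dicts mapping arc -> value."""
    x1 = {a: Fraction(xp[a] + xpp[a], 2) for a in xp}
    x2 = {a: Fraction(xp[a] - xpp[a], 2) for a in xp}
    return x1, x2

def capacity_used(x1, x2, a):
    # by the identity, this equals max(|x'(a)|, |x''(a)|), hence is <= c(a)
    return abs(x1[a]) + abs(x2[a])
```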
So we have a half-integer solution to the two-commodity ﬂow problem.
This proof also directly gives a polynomial-time algorithm for ﬁnding a half-integer ﬂow.
The cut condition is not enough to derive an integer solution, as is shown by Figure 4.5
(taking all capacities and demands equal to 1).
Figure 4.5
Moreover, as mentioned, the undirected integer two-commodity ﬂow problem is NP-complete
(Even, Itai, and Shamir [1976]).
However, Rothschild and Whinston [1966] showed that an integer solution exists if the
cut condition holds, provided that the following Euler condition is satisfied:

   ∑_{e∈δ(v)} c(e) ≡ 0 (mod 2)   if v ≠ r_1, s_1, r_2, s_2,
               ≡ d_1 (mod 2)  if v = r_1, s_1,
               ≡ d_2 (mod 2)  if v = r_2, s_2. (16)

(Equivalently, the graph obtained from G by replacing each edge e by c(e) parallel edges and
adding d_i parallel edges connecting r_i and s_i (i = 1, 2), should be an Eulerian graph.)
Exercises
4.6. Derive from Theorem 4.1 the following max-biflow min-cut theorem of Hu: Let G = (V, E)
be a graph, let r_1, s_1, r_2, s_2 be distinct vertices, and let c : E −→ Q_+ be a capacity function.
Then the maximum value of d_1 + d_2 so that there exist r_i − s_i flows x_i of value d_i (i = 1, 2),
together satisfying the capacity constraint, is equal to the minimum capacity of a cut both
separating r_1 and s_1 and separating r_2 and s_2.
4.7. Derive from Theorem 4.1 that the cut condition suffices to have a half-integer solution to
the undirected k-commodity flow problem (with all capacities and demands integer), if there
exist two vertices u and w so that each commodity {r_i, s_i} intersects {u, w}. (Dinits (cf.
5. Matroids
5.1. Matroids and the greedy algorithm
Let G = (V, E) be a connected undirected graph and let w : E −→ Z be a ‘weight’
function on the edges. We saw that a minimum-weight spanning tree can be found quite
straightforwardly with Kruskal’s so-called greedy algorithm.
The algorithm consists of selecting successively edges e_1, e_2, ..., e_r. If edges e_1, ..., e_k
have been selected, we select an edge e ∈ E so that:

   (i) e ∉ {e_1, ..., e_k} and {e_1, ..., e_k, e} is a forest,
   (ii) w(e) is as small as possible among all edges e satisfying (i). (1)

We take e_{k+1} := e. If no e satisfying (1)(i) exists, that is, if {e_1, ..., e_k} forms a spanning
tree, we stop, setting r := k. Then {e_1, ..., e_r} is a spanning tree of minimum weight.
By replacing ‘as small as possible’ in (1)(ii) by ‘as large as possible’, one obtains a
spanning tree of maximum weight.
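A minimal sketch of this greedy procedure in Python, using a union-find structure to test whether adding an edge keeps a forest (all names are ours):

```python
def kruskal(vertices, weighted_edges):
    """Select edges in order of increasing weight, keeping an edge exactly
    when it joins two different components, i.e. when the chosen set stays
    a forest; returns a minimum-weight spanning tree of a connected graph."""
    parent = {v: v for v in vertices}

    def find(v):                      # root of v's component, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for w, u, v in sorted(weighted_edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: still a forest
            parent[ru] = rv
            tree.append((w, u, v))
    return tree
```

Sorting once and then scanning realizes rule (1): an edge rejected because it would close a circuit can never become acceptable later, since forests are closed under taking subsets.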
It is obviously not true that such a greedy approach would lead to an optimal solution
for any combinatorial optimization problem. We could think of such an approach to find a
matching of maximum weight. Then in (1)(i) we replace ‘forest’ by ‘matching’ and ‘small’
by ‘large’. Application to the weighted graph in Figure 5.1 would give e_1 = cd, e_2 = ab.
Figure 5.1
However, ab and cd do not form a matching of maximum weight.
It turns out that the structures for which the greedy algorithm does lead to an optimal
solution are the matroids. They are worth studying, not only because this enables us to
recognize when the greedy algorithm applies, but also because there exist fast algorithms
for ‘intersections’ of two diﬀerent matroids.
The concept of matroid is defined as follows. Let X be a finite set and let I be a
collection of subsets of X. Then the pair (X, I) is called a matroid if it satisfies the following
conditions:

   (i) ∅ ∈ I,
   (ii) if Y ∈ I and Z ⊆ Y then Z ∈ I,
   (iii) if Y, Z ∈ I and |Y| < |Z| then Y ∪ {x} ∈ I for some x ∈ Z \ Y. (2)
For any matroid M = (X, I), a subset Y of X is called independent if Y belongs to I,
and dependent otherwise.
Let Y ⊆ X. A subset B of Y is called a basis of Y if B is an inclusionwise maximal
independent subset of Y. That is, for any set Z ∈ I with B ⊆ Z ⊆ Y one has Z = B.
It is not diﬃcult to see that condition (2)(iii) is equivalent to:
for any subset Y of X, any two bases of Y have the same cardinality. (3)
(Exercise 5.1.) The common cardinality of the bases of a subset Y of X is called the rank
of Y, denoted by r_M(Y).
We now show that if G = (V, E) is a graph and I is the collection of forests in G, then
(E, I) indeed is a matroid. Conditions (2)(i) and (ii) are trivial. To see that condition (3)
holds, let E′ ⊆ E. Then, by definition, each basis Y of E′ is an inclusionwise maximal forest
contained in E′. Hence Y forms a spanning tree in each component of the graph (V, E′).
So Y has |V| − k elements, where k is the number of components of (V, E′). So each basis
of E′ has |V| − k elements, proving (3).
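The rank formula just derived, r_{M(G)}(E′) = |V| minus the number of components of (V, E′), is easy to compute; a sketch (names are ours):

```python
def graphic_rank(vertices, edge_subset):
    """Rank of E' in the cycle matroid M(G): |V| minus the number of
    components of (V, E'), maintained with a union-find structure."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    components = len(parent)
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1           # this edge merges two components
    return len(parent) - components   # = size of a maximal forest inside E'
```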
A set is called simply a basis if it is a basis of X. The common cardinality of all bases
is called the rank of the matroid. If I is the collection of forests in a connected graph
G = (V, E), then the bases of the matroid (E, I) are exactly the spanning trees in G.
We next show that the matroids indeed are those structures for which the greedy
algorithm leads to an optimal solution. Let X be some finite set and let I be a collection of
subsets of X satisfying (2)(i) and (ii).
For any weight function w : X −→ R we want to find a set Y in I maximizing

   w(Y) := ∑_{y∈Y} w(y). (4)
The greedy algorithm consists of selecting y_1, ..., y_r successively as follows. If y_1, ..., y_k
have been selected, choose y ∈ X so that:

   (i) y ∉ {y_1, ..., y_k} and {y_1, ..., y_k, y} ∈ I,
   (ii) w(y) is as large as possible among all y satisfying (i). (5)

We stop if no y satisfying (5)(i) exists, that is, if {y_1, ..., y_k} is a basis.
Now:
Theorem 5.1. The pair (X, I) satisfying (2)(i) and (ii) is a matroid, if and only if the
greedy algorithm leads to a set Y in I of maximum weight w(Y), for each weight function
w : X −→ R_+.
Proof. Sufficiency. Suppose the greedy algorithm leads to an independent set of maximum
weight for each weight function w. We show that (X, I) is a matroid.
Conditions (2)(i) and (ii) are satisfied by assumption. To see condition (2)(iii), let
Y, Z ∈ I with |Y| < |Z|. Suppose that Y ∪ {z} ∉ I for each z ∈ Z \ Y.
Consider the following weight function w on X. Let k := |Y|. Define:

   w(x) := k + 2  if x ∈ Y,
   w(x) := k + 1  if x ∈ Z \ Y,
   w(x) := 0      if x ∈ X \ (Y ∪ Z). (6)
Now in the first k iterations of the greedy algorithm we find the k elements in Y. By
assumption, at any further iteration, we cannot choose any element in Z \ Y. Hence any
further element chosen has weight 0. So the greedy algorithm will yield a basis of weight
k(k + 2).
However, any basis containing Z will have weight at least |Z ∩ Y|(k + 2) + |Z \ Y|(k + 1) ≥
|Z|(k + 1) ≥ (k + 1)(k + 1) > k(k + 2). Hence the greedy algorithm does not give a
maximum-weight independent set.
Necessity. Now let (X, I) be a matroid. Let w : X −→ R_+ be any weight function on X.
Call an independent set Y greedy if it is contained in a maximum-weight basis. It suffices
to show that if Y is greedy, and x is an element in X \ Y such that Y ∪ {x} ∈ I and such
that w(x) is as large as possible, then Y ∪ {x} is greedy.
As Y is greedy, there exists a maximum-weight basis B ⊇ Y. If x ∈ B then Y ∪ {x}
is greedy again. If x ∉ B, then there exists a basis B′ containing Y ∪ {x} and contained
in B ∪ {x}. So B′ = (B \ {x′}) ∪ {x} for some x′ ∈ B \ Y. As w(x) is chosen maximum,
w(x) ≥ w(x′). Hence w(B′) ≥ w(B), and therefore B′ is a maximum-weight basis. So
Y ∪ {x} is greedy.
Note that by replacing “as large as possible” in (5) by “as small as possible”, one obtains
an algorithm for ﬁnding a minimum-weight basis in a matroid. Moreover, by ignoring
elements of negative weight, the algorithm can be adapted to yield an independent set of
maximum weight, for any weight function w : X −→R.
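The greedy algorithm of Theorem 5.1 needs only a membership oracle for I; a sketch (names are ours). For a matroid, scanning the elements once in order of decreasing weight is equivalent to re-selecting a maximum-weight addable element each round, because independence is closed under taking subsets by (2)(ii): an element rejected once can never become addable later.

```python
def greedy_max_weight(X, independent, w):
    """Greedy algorithm (5): grow an independent set, always trying the
    heaviest remaining element. 'independent' is an oracle for membership
    in I; for nonnegative weights this is optimal exactly on matroids."""
    Y = set()
    for x in sorted(X, key=w, reverse=True):
        if independent(Y | {x}):
            Y.add(x)
    return Y
```

For example, with the oracle of a 2-uniform matroid the algorithm simply returns the two heaviest elements.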
Exercises
5.1. Show that condition (3) is equivalent to condition (2)(iii) (assuming (2)(i) and (ii)).
5.2. Let X be a finite set and let B be a nonempty collection of subsets of X. Show that B is the
collection of bases of some matroid on X, if and only if:

   if B, B′ ∈ B and x ∈ B \ B′, then there exists a y ∈ B′ \ B such that
   (B \ {x}) ∪ {y} ∈ B. (7)
5.3. Let X be a finite set and let r : P(X) −→ Z. Show that r = r_M for some matroid M on X,
if and only if:

   (i) 0 ≤ r(Y) ≤ |Y| for each subset Y of X;
   (ii) r(Z) ≤ r(Y) whenever Z ⊆ Y ⊆ X;
   (iii) r(Y ∩ Z) + r(Y ∪ Z) ≤ r(Y) + r(Z) for all Y, Z ⊆ X. (8)
5.4. Let M = (X, I) be a matroid. A subset C of X is called a circuit if C is an inclusionwise
minimal dependent set. Show that if C and C′ are different circuits of M and x ∈ C ∩ C′ and
y ∈ C \ C′, then (C ∪ C′) \ {x} contains a circuit containing y.

5.5. Let M = (X, I) be a matroid, let B be a basis of M, and let x ∈ X \ B. Show that there
exists a unique circuit C with the property that x ∈ C and C ⊆ B ∪ {x}.

5.6. Let M = (X, I) be a matroid. Two elements x, y of X are called parallel if {x, y} is a
circuit. Show that if x and y are parallel and Y is an independent set with x ∈ Y, then also
(Y \ {x}) ∪ {y} is independent.
5.7. Let M = (X, I) be a matroid, and order the elements of X as x_1, x_2, x_3, ..., x_n. Define

   Y := {x_i | r_M({x_1, ..., x_i}) > r_M({x_1, ..., x_{i−1}})}. (9)
Prove that Y is a basis of M.
5.2. Examples of matroids
In this section we describe some classes of examples of matroids.
I. Graphic matroids. As a ﬁrst example we consider the matroids described in Section
5.1.
Let G = (V, E) be a graph. Let I be the collection of all forests in G. Then M = (E, I)
is a matroid, as we saw in Section 5.1.
The matroid M is called the cycle matroid of G, denoted by M(G). Any matroid
obtained in this way, or isomorphic to such a matroid, is called a graphic matroid.
Note that the bases of M(G) are exactly those forests F of G for which the graph (V, F)
has the same number of components as G. So if G is connected, the bases are the spanning
trees.
Note also that the circuits of M(G), in the matroid sense, are exactly the circuits of G,
in the graph sense.
II. Cographic matroids. There is an alternative way of obtaining a matroid from a graph
G = (V, E). Let I be the collection of those subsets F of E that satisfy:

   the graph (V, E \ F) has the same number of components as G. (10)
Then:
Theorem 5.2. (E, I) is a matroid.
Proof. Condition (2)(i) is trivial. Condition (2)(ii) follows from the fact that if F′ ⊆ F,
then the graph (V, E \ F) has at least as many components as (V, E \ F′).
To see condition (2)(iii), let G have k components and let F and F′ be subsets of E so
that (V, E \ F) and (V, E \ F′) have k components and so that |F| < |F′|. We must show
that (V, E \ (F ∪ {e})) has k components for some e ∈ F′ \ F.
Suppose to the contrary that (V, E \ (F ∪ {e})) has more than k components, for every
e ∈ F′ \ F. That is, each e in F′ \ F is an isthmus in the graph (V, E \ F). Hence
(V, E \ (F ∪ F′)) has k + |F′ \ F| components. Now E \ F′ = (E \ (F ∪ F′)) ∪ (F \ F′).
This implies that (V, E \ F′) has at least k + |F′ \ F| − |F \ F′| components. However,
k + |F′ \ F| − |F \ F′| = k + |F′| − |F| > k, contradicting the fact that F′ belongs to I.
The matroid (E, I) is called the cocycle matroid of G, denoted by M*(G). Any matroid
obtained in this way, or isomorphic to such a matroid, is called a cographic matroid.
Note that the bases of M*(G) are exactly those subsets E′ of E for which E \ E′ is a
forest and (V, E \ E′) has the same number of components as G. So if G is connected, these
are exactly the complements of the spanning trees in G.
By definition, a subset C of E is a circuit of M*(G) if it is an inclusionwise minimal set
with the property that (V, E \ C) has more components than G. Hence C is a circuit of
M*(G) if and only if C is an inclusionwise minimal nonempty cutset in G.
III. Linear matroids. Let A be an m × n matrix. Let X = {1, ..., n} and let I be
the collection of all those subsets Y of X so that the columns with index in Y are linearly
independent. That is, so that the submatrix of A consisting of the columns with index in
Y has rank |Y|.
Now:

Theorem 5.3. (X, I) is a matroid.

Proof. Again, conditions (2)(i) and (ii) are easy to check. To see condition (2)(iii), let Y
and Z be subsets of X so that the columns with index in Y are linearly independent, and
similarly for Z, and so that |Y| < |Z|.
Suppose that Y ∪ {x} ∉ I for each x ∈ Z \ Y. This means that each column with index
in Z \ Y is spanned by the columns with index in Y. Trivially, each column with index
in Z ∩ Y is spanned by the columns with index in Y. Hence each column with index in
Z is spanned by the columns with index in Y. This contradicts the fact that the columns
indexed by Y span a |Y|-dimensional space, while the columns indexed by Z span a
|Z|-dimensional space, with |Z| > |Y|.
Any matroid obtained in this way, or isomorphic to such a matroid, is called a linear
matroid.
Note that the rank r_M(Y) of any subset Y of X is equal to the rank of the matrix
formed by the columns indexed by Y.
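Independence in a linear matroid is thus a rank computation; a sketch using exact Gaussian elimination over the rationals, so no floating-point rounding can misjudge independence (names are ours):

```python
from fractions import Fraction

def matrix_rank(rows):
    """Rank by Gaussian elimination over Q (exact arithmetic)."""
    rows = [[Fraction(a) for a in r] for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col] / rows[rank][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def columns_independent(A, Y):
    """Y is independent in the linear matroid of A iff the submatrix of
    the columns indexed by Y has rank |Y|."""
    sub = [[row[j] for j in sorted(Y)] for row in A]
    return matrix_rank(sub) == len(Y)
```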
IV. Transversal matroids. Let X_1, ..., X_m be subsets of the finite set X. A set Y =
{y_1, ..., y_n} is called a partial transversal (of X_1, ..., X_m), if there exist distinct indices
i_1, ..., i_n so that y_j ∈ X_{i_j} for j = 1, ..., n. A partial transversal of cardinality m is called
a transversal (or a system of distinct representatives, or an SDR).
Another way of representing partial transversals is as follows. Let H be the bipartite
graph with vertex set {1, ..., m} ∪ X and with edges all pairs {i, x} with i ∈ {1, ..., m}
and x ∈ X_i. (We assume here that {1, ..., m} ∩ X = ∅.)
For any matching M in H, let ρ(M) denote the set of those elements in X that belong
to some edge in M. Then it is not difficult to see that:

   Y ⊆ X is a partial transversal, if and only if Y = ρ(M) for some matching
   M in H. (11)

Now let I be the collection of all partial transversals for X_1, ..., X_m. Then:

Theorem 5.4. (X, I) is a matroid.
Proof. Again, conditions (2)(i) and (ii) are trivial. To see (2)(iii), let Y and Z be partial
transversals with |Y| < |Z|. Consider the graph H constructed above. By (11) there exist
matchings M and M′ in H so that Y = ρ(M) and Z = ρ(M′). So |M| = |Y| < |Z| = |M′|.
Consider the union M ∪ M′ of M and M′. Each component of the graph
({1, ..., m} ∪ X, M ∪ M′) is either a path, or a circuit, or a singleton vertex. Since
|M′| > |M|, at least one of these components is a path P with more edges in M′ than in M.
The path consists of edges alternatingly in M′ and in M, with end edges in M′.
Let N and N′ denote the edges in P occurring in M and M′, respectively. So |N′| =
|N| + 1. Since P has odd length, exactly one of its end vertices belongs to X; call this end
vertex x. Then x ∈ ρ(M′) = Z and x ∉ ρ(M) = Y. Define M″ := (M \ N) ∪ N′. Clearly,
M″ is a matching with ρ(M″) = Y ∪ {x}. So Y ∪ {x} belongs to I.
Any matroid obtained in this way, or isomorphic to such a matroid, is called a transversal
matroid. If the sets X_1, ..., X_m form a partition of X, one speaks of a partition matroid.
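Whether a given Y is a partial transversal is, by (11), a bipartite-matching question; a sketch with the standard augmenting-path method (names are ours):

```python
def is_partial_transversal(sets, Y):
    """Y is a partial transversal of X_1,...,X_m iff the bipartite graph H
    has a matching covering Y; we try to assign each y in Y a distinct set
    index, reassigning earlier choices along augmenting paths if needed."""
    match = {}                      # y -> index of the set representing it

    def assign(y, seen):
        for i, Xi in enumerate(sets):
            if y in Xi and i not in seen:
                seen.add(i)
                holder = next((z for z, j in match.items() if j == i), None)
                if holder is None or assign(holder, seen):
                    match[y] = i    # i was free, or its holder moved on
                    return True
        return False

    return all(assign(y, set()) for y in Y)
```

Combined with the greedy algorithm of Theorem 5.1, such an oracle optimizes any nonnegative weight function over a transversal matroid.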
These four classes of examples show that the greedy algorithm has a wider applicability
than just for ﬁnding minimum-weight spanning trees. There are more classes of matroids
(like ‘algebraic matroids’, ‘gammoids’), for which we refer to Welsh [1976].
Exercises
5.8. Show that a partition matroid is graphic and cographic.
5.9. Let M = (X, I) be the transversal matroid derived from subsets X_1, ..., X_m of X as in
Example IV.

   (i) Show with König’s matching theorem that:

      r_M(X) = min_{J⊆{1,...,m}} (|⋃_{j∈J} X_j| + m − |J|). (12)

   (ii) Derive a formula for r_M(Y) for any Y ⊆ X.
5.10. (i) Let (X_1, ..., X_m) be a partition of the finite set X. Let I be the collection of all subsets
      Y of X such that |Y ∩ X_i| ≤ 1 for each i = 1, ..., m. Show that (X, I) is a matroid.
      Such matroids are called partition matroids.
   (ii) Show that partition matroids are graphic, cographic, linear, and transversal matroids.
5.11. Let G = (V, E) be a graph. Let I be the collection of those subsets Y of E so that Y contains
at most one circuit. Show that (E, I) is a matroid.
5.12. Let G = (V, E) be a graph. Call a collection C of circuits a circuit basis of G if each circuit of
G is a symmetric diﬀerence of circuits in C. (We consider circuits as edge sets.)
   Give a polynomial-time algorithm to find a circuit basis C of G that minimizes ∑_{C∈C} |C|.
   (The running time of the algorithm should be bounded by a polynomial in |V| + |E|.)
5.3. Duality, deletion, and contraction
There are a number of operations that transform a matroid to another matroid. First
we consider the ‘dual’ of a given matroid.
In Exercise 5.2 we saw that a nonempty collection B of subsets of some ﬁnite set X is
the collection of bases of some matroid on X, if and only if (7) is satisﬁed. Now deﬁne
   B* := {X \ B | B ∈ B}. (13)
We show:
Theorem 5.5. If B is the collection of bases of some matroid M, then B* also is the
collection of bases of some matroid on X, denoted by M*.
The rank function r_{M*} of the dual matroid M* satisfies:

   r_{M*}(Y) = |Y| + r_M(X \ Y) − r_M(X). (14)
Proof. Let J denote the collection of subsets J of X so that J ⊆ D for some D ∈ B*.
Define

   ρ(Y) := max{|Z| : Z ∈ J, Z ⊆ Y} (15)

for Y ⊆ X. The function ρ clearly satisfies conditions (8)(i) and (ii). To see (8)(iii), observe
that

   ρ(Y) = max_{C∈B*} |Y ∩ C| = max_{B∈B} |Y \ B| (16)
        = max_{B∈B} |(X \ Y) ∩ B| + |Y| − r_M(X) = r_M(X \ Y) + |Y| − r_M(X).

Since r_M satisfies condition (8)(iii), (16) implies that also ρ satisfies condition (8)(iii).
Hence ρ is the rank function of some matroid N = (X, I′). Now for each Y ⊆ X:

   Y ∈ I′ ⇐⇒ ρ(Y) = |Y| ⇐⇒ Y ∈ J. (17)

Hence I′ = J, and therefore (X, J) is a matroid with basis collection B* and rank function
given by (14).
The matroid M* is called the dual matroid of M. Since (B*)* = B, we know (M*)* = M.
In fact, in the examples I and II above we saw that for any undirected graph G, the
cocycle matroid of G is the dual matroid of the cycle matroid of G. That is, M*(G) =
(M(G))*.
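Formula (14) turns any rank oracle for M into one for M*; a sketch, checked against the k-uniform matroid, whose dual is the (n − k)-uniform matroid (names are ours):

```python
def dual_rank(rank, ground):
    """Given the rank function of a matroid M on ground set X, return the
    rank function of the dual M*, via (14):
        r_M*(Y) = |Y| + r_M(X \\ Y) - r_M(X)."""
    X = frozenset(ground)
    return lambda Y: len(Y) + rank(X - frozenset(Y)) - rank(X)
```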
Another way of constructing matroids from matroids is by ‘deletion’ and ‘contraction’.
Let M = (X, I) be a matroid and let Y ⊆ X. Define

   I′ := {Z | Z ⊆ Y, Z ∈ I}. (18)

Then M′ = (Y, I′) is a matroid again, as one easily checks. M′ is called the restriction of
M to Y. If Y = X \ Z with Z ⊆ X, we say that M′ arises by deleting Z, and denote M′
by M \ Z.
Contracting Z means replacing M by (M* \ Z)*. This matroid is denoted by M/Z. One
may check that if G is a graph and e is an edge of G, then contracting edge {e} in the
cycle matroid M(G) of G corresponds to contracting e in the graph. That is, M(G)/{e} =
M(G/{e}), where G/{e} denotes the graph obtained from G by contracting e.
If matroid M′ arises from M by a series of deletions and contractions, M′ is called a
minor of M.
Exercises
5.13. Let G = (V, E) be a connected graph. For each subset E′ of E, let κ(V, E′) denote the number
of components of the graph (V, E′). Show that for each E′ ⊆ E:

   (i) r_{M(G)}(E′) = |V| − κ(V, E′);
   (ii) r_{M*(G)}(E′) = |E′| − κ(V, E \ E′) + 1.
5.14. Let G be a planar graph and let G* be a planar graph dual to G. Show that the cycle matroid
M(G*) of G* is isomorphic to the cocycle matroid M*(G) of G.
5.15. Show that the dual matroid of a linear matroid is again a linear matroid.
5.16. Let G = (V, E) be a loopless undirected graph. Let A be the matrix obtained from the V ×E
incidence matrix of G by replacing in each column, exactly one of the two 1’s by −1.
(i) Show that a subset Y of E is a forest if and only if the columns of A with index in Y
are linearly independent.
(ii) Derive that any graphic matroid is a linear matroid.
(iii) Derive (with the help of Exercise 5.15) that any cographic matroid is a linear matroid.
5.17. (i) Let X be a ﬁnite set and let k be a natural number. Let I := {Y ⊆ X | |Y | ≤ k}. Show
that (X, I) is a matroid. Such matroids are called k-uniform matroids.
(ii) Show that k-uniform matroids are transversal matroids. Give an example of a k-uniform
matroid that is neither graphic nor cographic.
5.18. Let M = (X, I) be a matroid and let k be a natural number. Define I′ := {Y ∈ I | |Y| ≤ k}.
Show that (X, I′) is again a matroid (called the k-truncation of M).
5.19. Let M = (X, I) be a matroid, let U be a set disjoint from X, and let k ≥ 0. Define

   I′ := {U′ ∪ Y | U′ ⊆ U, Y ∈ I, |U′ ∪ Y| ≤ k}. (19)

Show that (U ∪ X, I′) is again a matroid.
5.20. Let M = (X, I) be a matroid and let x ∈ X.
(i) Show that if x is not a loop, then a subset Y of X\{x} is independent in the contracted
matroid M/{x} if and only if Y ∪ {x} is independent in M.
(ii) Show that if x is a loop, then M/{x} = M \ {x}.
   (iii) Show that for each Y ⊆ X: r_{M/{x}}(Y) = r_M(Y ∪ {x}) − r_M({x}).
5.21. Let M = (X, I) be a matroid and let Y ⊆ X.
   (i) Let B be a basis of Y. Show that a subset U of X \ Y is independent in the contracted
       matroid M/Y, if and only if U ∪ B is independent in M.
   (ii) Show that for each U ⊆ X \ Y:

      r_{M/Y}(U) = r_M(U ∪ Y) − r_M(Y). (20)
5.22. Let M = (X, I) be a matroid and let Y, Z ⊆ X. Show that (M \ Y )/Z = (M/Z) \ Y . (That
is, deletion and contraction commute.)
5.23. Let M = (X, I) be a matroid, and suppose that we can test in polynomial time if any subset
Y of X belongs to I. Show that then the same holds for the dual matroid M*.
5.4. Two technical lemmas
In this section we prove two technical lemmas as a preparation to the coming sections
on matroid intersection.
Let M = (X, I) be a matroid. For any Y ∈ I define a bipartite graph H(M, Y) as
follows. The graph H(M, Y) has vertex set X, with colour classes Y and X \ Y. Elements
y ∈ Y and x ∈ X \ Y are adjacent if and only if

   (Y \ {y}) ∪ {x} ∈ I. (21)
Then we have:
Lemma 5.1. Let M = (X, I) be a matroid and let Y, Z ∈ I with |Y| = |Z|. Then H(M, Y)
contains a perfect matching on Y △ Z.³

Proof. Suppose not. By König’s matching theorem there exist a subset S of Y \ Z and a
subset S′ of Z \ Y such that for each edge {y, z} of H(M, Y) satisfying z ∈ S′ one has y ∈ S,
and such that |S| < |S′|.
As |(Y ∩ Z) ∪ S| < |(Y ∩ Z) ∪ S′|, there exists an element z ∈ S′ such that T :=
(Y ∩ Z) ∪ S ∪ {z} belongs to I. This implies that there exists a U ∈ I such that T ⊆ U ⊆ T ∪ Y
and |U| = |Y|. So U = (Y \ {x}) ∪ {z} for some x ∉ S. As {x, z} is an edge of H(M, Y)
this contradicts the choice of S and S′.
The following forms a counterpart:

Lemma 5.2. Let M = (X, I) be a matroid and let Y ∈ I. Let Z ⊆ X be such that
|Y| = |Z| and such that H(M, Y) contains a unique perfect matching N on Y △ Z. Then Z
belongs to I.

Proof. By induction on k := |Z \ Y|, the case k = 0 being trivial. Let k ≥ 1.
By the unicity of N there exists an edge {y, z} ∈ N, with y ∈ Y \ Z and z ∈ Z \ Y, with
the property that there is no z′ ∈ Z \ Y such that z′ ≠ z and {y, z′} is an edge of H(M, Y).
Let Z′ := (Z \ {z}) ∪ {y} and N′ := N \ {{y, z}}. Then N′ is the unique matching in
H(M, Y) with union Y △ Z′. Hence by induction, Z′ belongs to I.
There exists an S ∈ I such that Z′ \ {y} ⊆ S ⊆ (Y \ {y}) ∪ Z and |S| = |Y| (since
(Y \ {y}) ∪ Z = (Y \ {y}) ∪ {z} ∪ Z′ and since (Y \ {y}) ∪ {z} belongs to I). Assuming
Z ∉ I, we know z ∉ S and hence r((Y ∪ Z′) \ {y}) = |Y|. Hence there exists a z′ ∈ Z′ \ Y
such that (Y \ {y}) ∪ {z′} belongs to I. This contradicts the choice of y.
Exercises

5.24. Let M = (X, I) be a matroid, let B be a basis of M, and let w : X −→ ℝ be a weight function. Show that B is a basis of maximum weight if and only if w(B′) ≤ w(B) for every basis B′ with |B′ \ B| = 1.

³ A perfect matching on a vertex set U is a matching M with ⋃M = U.
5.25. Let M = (X, I) be a matroid and let Y and Z be independent sets with |Y| = |Z|. For any y ∈ Y \ Z define δ(y) as the set of those z ∈ Z \ Y which are adjacent to y in the graph H(M, Y).

(i) Show that for each y ∈ Y \ Z the set (Z \ δ(y)) ∪ {y} belongs to I.
(Hint: Apply inequality (8)(iii) to X′ := (Z \ δ(y)) ∪ {y} and X″ := (Z \ δ(y)) ∪ (Y \ {y}).)

(ii) Derive from (i) that for each y ∈ Y \ Z there exists a z ∈ Z \ Y so that {y, z} is an edge both of H(M, Y) and of H(M, Z).
5.5. Matroid intersection

Edmonds [1970] discovered that the concept of matroid has even more algorithmic power, by showing that there exist fast algorithms also for intersections of matroids.

Let M₁ = (X, I₁) and M₂ = (X, I₂) be two matroids, on the same set X. Consider the collection I₁ ∩ I₂ of common independent sets. The pair (X, I₁ ∩ I₂) is generally not a matroid again (cf. Exercise 5.26).

What Edmonds showed is that, for any weight function w on X, a maximum-weight common independent set can be found in polynomial time. In particular, a common independent set of maximum cardinality can be found in polynomial time.

We first consider some applications.
Example 5.5a. Let G = (V, E) be a bipartite graph, with colour classes V₁ and V₂, say. Let I₁ be the collection of all subsets F of E so that no two edges in F have a vertex in V₁ in common. Similarly, let I₂ be the collection of all subsets F of E so that no two edges in F have a vertex in V₂ in common. So both (E, I₁) and (E, I₂) are partition matroids.

Now I₁ ∩ I₂ is the collection of matchings in G. Finding a maximum-weight common independent set amounts to finding a maximum-weight matching in G.
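The two partition matroids of this example are easy to state as oracles; a set of edges is then a matching precisely when it is independent in both. A small illustrative sketch (the graph and vertex names are made up for the example):

```python
# Bipartite graph with colour classes V1 = {u1, u2}, V2 = {v1, v2};
# edges are pairs (ui, vj).
edges = [("u1", "v1"), ("u1", "v2"), ("u2", "v1")]

def indep1(F):  # no two edges of F share a vertex in V1
    tails = [e[0] for e in F]
    return len(tails) == len(set(tails))

def indep2(F):  # no two edges of F share a vertex in V2
    heads = [e[1] for e in F]
    return len(heads) == len(set(heads))

def is_matching(F):
    return indep1(F) and indep2(F)

print(is_matching({("u1", "v2"), ("u2", "v1")}))  # True: a perfect matching
print(is_matching({("u1", "v1"), ("u2", "v1")}))  # False: both edges meet v1
```
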
Example 5.5b. Let X₁, …, Xₘ and Y₁, …, Yₘ be subsets of X. Let M₁ = (X, I₁) and M₂ = (X, I₂) be the corresponding transversal matroids.

Then common independent sets correspond to common partial transversals. The collections (X₁, …, Xₘ) and (Y₁, …, Yₘ) have a common transversal if and only if the maximum cardinality of a common independent set is equal to m.
Example 5.5c. Let D = (V, A) be a directed graph. Let M₁ = (A, I₁) be the cycle matroid of the underlying undirected graph. Let I₂ be the collection of subsets Y of A so that each vertex of D is entered by at most one arc in Y. So M₂ := (A, I₂) is a partition matroid.

Now the common independent sets are those subsets Y of A with the property that each component of (V, Y) is a rooted tree. Moreover, D has a rooted spanning tree if and only if the maximum cardinality of a set in I₁ ∩ I₂ is equal to |V| − 1.
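This example can be phrased with two oracles as well: one tests acyclicity in the underlying undirected graph (the cycle matroid), the other tests in-degrees (the partition matroid). A sketch, with a hypothetical 3-vertex digraph:

```python
def indep_forest(arcs):
    """Cycle matroid oracle: the arcs form a forest, ignoring directions."""
    parent = {}
    def find(v):
        while parent.get(v, v) != v:
            v = parent[v]
        return v
    for u, v in arcs:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # would close an (undirected) cycle
        parent[ru] = rv
    return True

def indep_indegree(arcs):
    """Partition matroid oracle: each vertex is entered by at most one arc."""
    heads = [v for _, v in arcs]
    return len(heads) == len(set(heads))

# Arcs (1,2), (1,3) form a rooted (out-)tree with root 1: independent in both.
arcs = {(1, 2), (1, 3)}
print(indep_forest(arcs) and indep_indegree(arcs))  # True
print(indep_indegree({(1, 2), (3, 2)}))  # False: vertex 2 is entered twice
```
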
Example 5.5d. Let G = (V, E) be a connected undirected graph. Then G has two edge-disjoint spanning trees, if and only if the maximum cardinality of a common independent set in the cycle matroid M(G) of G and the cocycle matroid M*(G) of G is equal to |V| − 1.

In this section we describe an algorithm for finding a maximum-cardinality common independent set in two given matroids. In the next section we consider the more general maximum-weight problem.
For any two matroids M₁ = (X, I₁) and M₂ = (X, I₂) and any Y ∈ I₁ ∩ I₂, we define a directed graph H(M₁, M₂, Y) as follows. Its vertex set is X, while for any y ∈ Y, x ∈ X \ Y,

(y, x) is an arc of H(M₁, M₂, Y) if and only if (Y \ {y}) ∪ {x} ∈ I₁,
(x, y) is an arc of H(M₁, M₂, Y) if and only if (Y \ {y}) ∪ {x} ∈ I₂.  (22)

These are all arcs of H(M₁, M₂, Y). In fact, this graph can be considered as the union of directed versions of the graphs H(M₁, Y) and H(M₂, Y) defined in Section 5.4.

The following is the basis for finding a maximum-cardinality common independent set in two matroids.
Cardinality common independent set augmenting algorithm

input: matroids M₁ = (X, I₁) and M₂ = (X, I₂) and a set Y ∈ I₁ ∩ I₂;
output: a set Y′ ∈ I₁ ∩ I₂ with |Y′| > |Y|, if it exists.

description of the algorithm: We assume that M₁ and M₂ are given in such a way that for any subset Z of X we can check in polynomial time if Z ∈ I₁ and if Z ∈ I₂.

Consider the sets

X₁ := {y ∈ X \ Y | Y ∪ {y} ∈ I₁},
X₂ := {y ∈ X \ Y | Y ∪ {y} ∈ I₂}.  (23)

Moreover, consider the directed graph H(M₁, M₂, Y) defined above. There are two cases.

Case 1. There exists a directed path P in H(M₁, M₂, Y) from some vertex in X₁ to some vertex in X₂. (Possibly of length 0 if X₁ ∩ X₂ ≠ ∅.)

We take a shortest such path P (that is, with a minimum number of arcs). Let P traverse the vertices y₀, z₁, y₁, …, zₘ, yₘ of H(M₁, M₂, Y), in this order. By construction of the graph H(M₁, M₂, Y) and the sets X₁ and X₂, this implies that y₀, …, yₘ belong to X \ Y and z₁, …, zₘ belong to Y.

Now output

Y′ := (Y \ {z₁, …, zₘ}) ∪ {y₀, …, yₘ}.  (24)

Case 2. There is no directed path in H(M₁, M₂, Y) from any vertex in X₁ to any vertex in X₂. Then Y is a maximum-cardinality common independent set.
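One augmentation step can be sketched as follows, assuming the two matroids are given by independence oracles (the function names and the bipartite-matching demo are our own, not from the text):

```python
from collections import deque

def augment(X, Y, indep1, indep2):
    """One step of the cardinality common independent set augmenting
    algorithm: returns Y' with |Y'| = |Y| + 1, or None if Y is maximum."""
    Y = set(Y)
    out = set(X) - Y
    X1 = {x for x in out if indep1(Y | {x})}
    X2 = {x for x in out if indep2(Y | {x})}
    # Arcs of H(M1, M2, Y), as in (22).
    arcs = {v: [] for v in X}
    for y in Y:
        for x in out:
            swapped = (Y - {y}) | {x}
            if indep1(swapped):
                arcs[y].append(x)   # arc (y, x)
            if indep2(swapped):
                arcs[x].append(y)   # arc (x, y)
    # BFS for a shortest X1 -> X2 path (minimum number of arcs).
    pred = {x: None for x in X1}
    queue = deque(X1)
    while queue:
        v = queue.popleft()
        if v in X2:
            path = []
            while v is not None:
                path.append(v)
                v = pred[v]
            return Y ^ set(path)   # exchange along the path, as in (24)
        for u in arcs[v]:
            if u not in pred:
                pred[u] = v
                queue.append(u)
    return None

def max_common_independent(X, indep1, indep2):
    Y = set()
    while True:
        Z = augment(X, Y, indep1, indep2)
        if Z is None:
            return Y
        Y = Z

# Demo: bipartite matching via two partition matroids (Example 5.5a).
X = [(0, 3), (0, 4), (1, 3), (2, 5)]
i1 = lambda F: len({e[0] for e in F}) == len(F)
i2 = lambda F: len({e[1] for e in F}) == len(F)
print(max_common_independent(X, i1, i2))  # a maximum matching, of size 3
```

The loop in `max_common_independent` is the proof of Theorem 5.8 in miniature: at most |X| augmentations, each polynomial given the oracles.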
This finishes the description of the algorithm. The correctness of the algorithm is given in the following two theorems.
Theorem 5.6. If Case 1 applies, then Y′ ∈ I₁ ∩ I₂.

Proof. Assume that Case 1 applies. By symmetry it suffices to show that Y′ belongs to I₁.

To see that Y′ \ {y₀} belongs to I₁, consider the graph H(M₁, Y) defined in Section 5.4. Observe that the edges {zⱼ, yⱼ} form the only matching in H(M₁, Y) with union equal to {z₁, …, zₘ, y₁, …, yₘ} (otherwise P would have a shortcut). So by Lemma 5.2, Y′ \ {y₀} = (Y \ {z₁, …, zₘ}) ∪ {y₁, …, yₘ} belongs to I₁.

To show that Y′ itself belongs to I₁, observe that r_{M₁}(Y ∪ Y′) ≥ r_{M₁}(Y ∪ {y₀}) = |Y| + 1, and that, as (Y′ \ {y₀}) ∩ X₁ = ∅, r_{M₁}((Y ∪ Y′) \ {y₀}) = |Y|. Hence the independent set Y′ \ {y₀} can only be extended within Y ∪ Y′ by the element y₀, so Y′ ∈ I₁.
Theorem 5.7. If Case 2 applies, then Y is a maximum-cardinality common independent set.

Proof. As Case 2 applies, there is no directed X₁−X₂ path in H(M₁, M₂, Y). Hence there is a subset U of X containing X₂ such that U ∩ X₁ = ∅ and such that no arc of H(M₁, M₂, Y) enters U (take for U the set of vertices from which X₂ is reachable). We show

r_{M₁}(U) + r_{M₂}(X \ U) = |Y|.  (25)

To this end, we first show

r_{M₁}(U) = |Y ∩ U|.  (26)

Clearly, as Y ∩ U ∈ I₁, we know r_{M₁}(U) ≥ |Y ∩ U|. Suppose r_{M₁}(U) > |Y ∩ U|. Then there exists an x in U \ Y so that (Y ∩ U) ∪ {x} ∈ I₁. Since Y ∈ I₁, this implies that there exists a set Z ∈ I₁ with |Z| ≥ |Y| and (Y ∩ U) ∪ {x} ⊆ Z ⊆ Y ∪ {x}. Then Z = Y ∪ {x} or Z = (Y \ {y}) ∪ {x} for some y ∈ Y \ U.

In the first alternative, x ∈ X₁, contradicting the fact that x belongs to U. In the second alternative, (y, x) is an arc of H(M₁, M₂, Y) entering U. This contradicts the definition of U (as y ∉ U and x ∈ U).

This shows (26). Similarly we have r_{M₂}(X \ U) = |Y \ U|. Hence we have (25).

Now (25) implies that for any set Z in I₁ ∩ I₂ one has

|Z| = |Z ∩ U| + |Z \ U| ≤ r_{M₁}(U) + r_{M₂}(X \ U) = |Y|.  (27)

So Y is a common independent set of maximum cardinality.
The algorithm clearly has polynomially bounded running time, since we can construct the auxiliary directed graph H(M₁, M₂, Y) and find the path P (if it exists) in polynomial time.

It implies the result of Edmonds [1970]:

Theorem 5.8. A maximum-cardinality common independent set in two matroids can be found in polynomial time.

Proof. Directly from the above, as we can find a maximum-cardinality common independent set by applying the common independent set augmenting algorithm at most |X| times.

The algorithm also yields a min-max relation for the maximum cardinality of a common independent set, as was shown again by Edmonds [1970].
Theorem 5.9 (Edmonds' matroid intersection theorem). Let M₁ = (X, I₁) and M₂ = (X, I₂) be matroids. Then

max_{Y ∈ I₁∩I₂} |Y| = min_{U ⊆ X} (r_{M₁}(U) + r_{M₂}(X \ U)).  (28)

Proof. The inequality ≤ follows similarly as in (27). The reverse inequality follows from the fact that if the algorithm stops with set Y, we obtain a set U for which (25) holds. Therefore, the maximum in (28) is at least as large as the minimum.
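For small ground sets, both sides of (28) can be compared by brute force. A sketch using the two partition matroids of Example 5.5a (the particular graph is an arbitrary choice of ours):

```python
from itertools import chain, combinations

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

X = [(0, 2), (0, 3), (1, 2)]          # edges of a small bipartite graph

def indep1(F):  # at most one edge at each left vertex
    t = [e[0] for e in F]
    return len(t) == len(set(t))

def indep2(F):  # at most one edge at each right vertex
    h = [e[1] for e in F]
    return len(h) == len(set(h))

def rank(U, indep):
    return max(len(S) for S in subsets(U) if indep(set(S)))

lhs = max(len(Y) for Y in subsets(X)
          if indep1(set(Y)) and indep2(set(Y)))
rhs = min(rank(U, indep1) + rank(set(X) - set(U), indep2)
          for U in subsets(X))
print(lhs, rhs)  # both sides of (28) equal 2 here
```
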
Exercises

5.26. Give an example of two matroids M₁ = (X, I₁) and M₂ = (X, I₂) so that (X, I₁ ∩ I₂) is not a matroid.

5.27. Derive König's matching theorem from Edmonds' matroid intersection theorem.

5.28. Let (X₁, …, Xₘ) and (Y₁, …, Yₘ) be subsets of the finite set X. Derive from Edmonds' matroid intersection theorem: (X₁, …, Xₘ) and (Y₁, …, Yₘ) have a common transversal, if and only if

|(⋃_{i∈I} Xᵢ) ∩ (⋃_{j∈J} Yⱼ)| ≥ |I| + |J| − m  (29)

for all subsets I and J of {1, …, m}.

5.29. Reduce the problem of finding a Hamiltonian cycle in a directed graph to the problem of finding a maximum-cardinality common independent set in three matroids.

5.30. Let G = (V, E) be a graph and let the edges of G be coloured with |V| − 1 colours. That is, we have partitioned E into classes X₁, …, X_{|V|−1}, called colours. Show that there exists a spanning tree with all edges coloured differently, if and only if (V, E′) has at most |V| − t components, for any union E′ of t colours, for any t ≥ 0.
5.31. Let M = (X, I) be a matroid and let X₁, …, Xₘ be subsets of X. Then (X₁, …, Xₘ) has an independent transversal, if and only if the rank of the union of any t sets among X₁, …, Xₘ is at least t, for any t ≥ 0. (Rado [1942].)
5.32. Let M₁ = (X, I₁) and M₂ = (X, I₂) be matroids. Define

I₁ ∨ I₂ := {Y₁ ∪ Y₂ | Y₁ ∈ I₁, Y₂ ∈ I₂}.  (30)

(i) Show that the maximum cardinality of a set in I₁ ∨ I₂ is equal to

min_{U⊆X} (r_{M₁}(U) + r_{M₂}(U) + |X \ U|).  (31)

(Hint: Apply the matroid intersection theorem to M₁ and M₂*.)

(ii) Derive that for each Y ⊆ X:

max{|Z| : Z ⊆ Y, Z ∈ I₁ ∨ I₂} = min_{U⊆Y} (r_{M₁}(U) + r_{M₂}(U) + |Y \ U|).  (32)

(iii) Derive that (X, I₁ ∨ I₂) is again a matroid.
(Hint: Use Exercise 5.3.)

This matroid is called the union of M₁ and M₂, denoted by M₁ ∨ M₂. (Edmonds and Fulkerson [1965], Nash-Williams [1967].)

(iv) Let M₁ = (X, I₁), …, Mₖ = (X, Iₖ) be matroids and let

I₁ ∨ ⋯ ∨ Iₖ := {Y₁ ∪ ⋯ ∪ Yₖ | Y₁ ∈ I₁, …, Yₖ ∈ Iₖ}.  (33)

Derive from (iii) that M₁ ∨ ⋯ ∨ Mₖ := (X, I₁ ∨ ⋯ ∨ Iₖ) is again a matroid and give a formula for its rank function.
5.33. (i) Let M = (X, I) be a matroid and let k ≥ 0. Show that X can be covered by k independent sets, if and only if |U| ≤ k·r_M(U) for each subset U of X.
(Hint: Use Exercise 5.32.) (Edmonds [1965].)

(ii) Show that the problem of finding a minimum number of independent sets covering X in a given matroid M = (X, I) is solvable in polynomial time.

5.34. Let G = (V, E) be a graph and let k ≥ 0. Show that E can be partitioned into k forests, if and only if each nonempty subset W of V contains at most k(|W| − 1) edges of G.
(Hint: Use Exercise 5.33.) (Nash-Williams [1964].)
5.35. Let X₁, …, Xₘ be subsets of X and let k ≥ 0.

(i) Show that X can be partitioned into k partial transversals of (X₁, …, Xₘ), if and only if

k(m − |I|) ≥ |X \ ⋃_{i∈I} Xᵢ|  (34)

for each subset I of {1, …, m}.

(ii) Derive from (i) that {1, …, m} can be partitioned into classes I₁, …, Iₖ so that (Xᵢ | i ∈ Iⱼ) has a transversal, for each j = 1, …, k, if and only if Y contains at most k|Y| of the Xᵢ as a subset, for each Y ⊆ X.
(Hint: Interchange the roles of {1, …, m} and X.) (Edmonds and Fulkerson [1965].)
5.36. (i) Let M = (X, I) be a matroid and let k ≥ 0. Show that there exist k pairwise disjoint bases of M, if and only if k(r_M(X) − r_M(U)) ≥ |X \ U| for each subset U of X.
(Hint: Use Exercise 5.32.) (Edmonds [1965].)

(ii) Show that the problem of finding a maximum number of pairwise disjoint bases in a given matroid is solvable in polynomial time.

5.37. Let G = (V, E) be a connected graph and let k ≥ 0. Show that there exist k pairwise edge-disjoint spanning trees, if and only if for each t and each partition (V₁, …, Vₜ) of V into t classes, there are at least k(t − 1) edges connecting different classes of this partition.
(Hint: Use Exercise 5.36.) (Nash-Williams [1961], Tutte [1961].)
5.38. Let M₁ and M₂ be matroids so that, for i = 1, 2, we can test in polynomial time if a given set is independent in Mᵢ. Show that the same holds for the union M₁ ∨ M₂.

5.39. Let M = (X, I) be a matroid and let B and B′ be two disjoint bases. Let B be partitioned into sets Y₁ and Y₂. Show that there exists a partition of B′ into sets Z₁ and Z₂ so that both Y₁ ∪ Z₂ and Z₁ ∪ Y₂ are bases of M.
(Hint: Assume without loss of generality that X = B ∪ B′. Apply the matroid intersection theorem to the matroids (M \ Y₁)/Y₂ and (M* \ Y₁)/Y₂.)
5.40. The following is a special case of a theorem of Nash-Williams [1985]:

Let G = (V, E) be a simple, connected graph and let b : V −→ ℤ₊. Call a graph G̃ = (Ṽ, Ẽ) a b-detachment of G if there is a function φ : Ṽ −→ V such that |φ⁻¹(v)| = b(v) for each v ∈ V, and such that there is a one-to-one function ψ : Ẽ −→ E with ψ(e) = {φ(v), φ(w)} for each edge e = {v, w} of G̃.

Then there exists a connected b-detachment, if and only if for each U ⊆ V the number of components of the graph induced by V \ U is at most b(U) − |E_U| + 1.

Here E_U denotes the set of edges intersecting U.
5.6. Weighted matroid intersection

We next consider the problem of finding a maximum-weight common independent set in two given matroids, with a given weight function. The algorithm, again due to Edmonds [1970], is an extension of the algorithm given in Section 5.5. In each iteration, instead of finding a path P with a minimum number of arcs in H, we now require P to have minimum length with respect to some length function defined on H.

To describe the algorithm, if matroids M₁ = (X, I₁) and M₂ = (X, I₂) and a weight function w : X −→ ℚ are given, call a set Y ∈ I₁ ∩ I₂ a max-weight common independent set if w(Y′) ≤ w(Y) for each Y′ ∈ I₁ ∩ I₂ with |Y′| = |Y|.
Weighted common independent set augmenting algorithm

input: matroids M₁ = (X, I₁) and M₂ = (X, I₂), a weight function w : X −→ ℚ, and a max-weight common independent set Y;
output: a max-weight common independent set Y′ with |Y′| = |Y| + 1.

description of the algorithm: Consider again the sets X₁ and X₂ and the directed graph H(M₁, M₂, Y) on X as in the cardinality case.

For any x ∈ X define the 'length' l(x) of x by:

l(x) := w(x) if x ∈ Y,
l(x) := −w(x) if x ∉ Y.  (35)

The length of a path P, denoted by l(P), is equal to the sum of the lengths of the vertices traversed by P, counting multiplicities.

We consider two cases.

Case 1. There exists a directed path P in H(M₁, M₂, Y) from X₁ to X₂. We choose P so that l(P) is minimal and so that it has a minimum number of arcs among all minimum-length X₁−X₂ paths.

Let P traverse y₀, z₁, y₁, …, zₘ, yₘ, in this order. Output

Y′ := (Y \ {z₁, …, zₘ}) ∪ {y₀, …, yₘ}.  (36)

Case 2. There is no directed X₁−X₂ path in H(M₁, M₂, Y). Then Y is a maximum-cardinality common independent set.
This finishes the description of the algorithm. The correctness of the algorithm if Case 2 applies follows directly from Theorem 5.7. In order to show the correctness if Case 1 applies, we first prove a basic property of the length function l.

Let y ∈ Y and x ∈ X \ Y, and let P be a y − x path in H(M₁, M₂, Y). Let P traverse y = z₁, y₁, z₂, y₂, …, zₘ, yₘ = x, in this order. (So the zᵢ belong to Y and the yᵢ belong to X \ Y.)
Proposition 1. One of the following holds:

(i) there exists a y − x path P′ with l(P′) ≤ l(P) traversing fewer vertices than P; or
(ii) there exists a directed circuit C with l(C) < 0 traversing fewer vertices than P; or
(iii) (Y \ {z₁, …, zₘ}) ∪ {y₁, …, yₘ} ∈ I₁.  (37)
Proof. Suppose (37)(iii) does not hold. Then by Lemma 5.2,

{z₁, y₁}, …, {zₘ, yₘ}  (38)

is not the only matching in H(M₁, Y) with union {z₁, …, zₘ, y₁, …, yₘ}. That is, there exists a permutation (j₁, …, jₘ) of (1, …, m) with (j₁, …, jₘ) ≠ (1, …, m) so that (zᵢ, y_{jᵢ}) is an arc of H(M₁, M₂, Y) for each i = 1, …, m.

Now consider the arcs

(z₁, y₁), …, (zₘ, yₘ), (z₁, y_{j₁}), …, (zₘ, y_{jₘ}),
(y₁, z₂), …, (y_{m−1}, zₘ), (y₁, z₂), …, (y_{m−1}, zₘ),  (39)

counting multiplicities. Now each of y₁, z₂, y₂, …, y_{m−1}, zₘ is entered and left by exactly two arcs in (39), while z₁ is left by exactly two arcs in (39) and yₘ is entered by exactly two arcs in (39). Moreover, since (j₁, …, jₘ) ≠ (1, …, m), the arcs in (39) contain a directed circuit. Hence the arcs in (39) can be decomposed into two simple directed z₁ − yₘ paths P′ and P″ and a number of simple directed circuits C₁, …, Cₜ with t ≥ 1. We have

l(P′) + l(P″) + l(C₁) + ⋯ + l(Cₜ) = 2·l(P).  (40)

Now if l(Cⱼ) < 0 for some j we have (37)(ii). So we may assume that l(Cⱼ) ≥ 0 for each j = 1, …, t. Hence l(P′) + l(P″) ≤ 2·l(P). If both P′ and P″ traverse fewer vertices than P, one of them will satisfy (37)(i). If one of them, P″ say, traverses the same vertices as P, then l(P″) = l(P), hence l(P′) ≤ 2·l(P) − l(P″) = l(P), and P′ satisfies (37)(i). (Note that P′ traverses fewer vertices than P, since t ≥ 1.)
This implies:

Theorem 5.10. If Y is a max-weight common independent set, then H(M₁, M₂, Y) has no directed circuit of negative length.

Proof. Suppose H(M₁, M₂, Y) has a directed circuit C of negative length. Let C traverse z₁, y₁, …, zₘ, yₘ, in this order, with the zᵢ in Y and the yᵢ in X \ Y. Choose C so that m is minimal.

Now consider Z := (Y \ {z₁, …, zₘ}) ∪ {y₁, …, yₘ}. Since w(Z) = w(Y) − l(C) > w(Y), while |Z| = |Y|, we know that Z ∉ I₁ ∩ I₂. Without loss of generality, Z ∉ I₁. So (37)(i) or (ii) applies. This would give a directed circuit of negative length traversing fewer vertices than C. This contradicts the minimality of m.
This implies that if Case 1 in the algorithm applies, then the set Y′ belongs to I₁ ∩ I₂: since each directed circuit has nonnegative length (by Theorem 5.10), P is a path without shortcuts, and hence Theorem 5.6 implies that Y′ ∈ I₁ ∩ I₂.
This gives:

Theorem 5.11. If Case 1 applies, then Y′ is a max-weight common independent set.

Proof. Suppose Z ∈ I₁ ∩ I₂ with |Z| = |Y| + 1 and w(Z) > w(Y′).

Since |Z| > |Y|, there exist y, z ∈ Z \ Y so that Y ∪ {y} ∈ I₁ and Y ∪ {z} ∈ I₂. So y ∈ X₁ and z ∈ X₂.

Now by Lemma 5.1, there exist pairwise disjoint arcs in H(M₁, M₂, Y) so that the tails are the elements in Y \ Z and the heads are the elements in Z \ (Y ∪ {y}). Similarly, there exist pairwise disjoint arcs in H(M₁, M₂, Y) so that the tails are the elements in Z \ (Y ∪ {z}) and the heads are the elements in Y \ Z.

These two sets of arcs together form a disjoint union of one y − z path Q and a number of directed circuits C₁, …, Cₜ. Now each Cⱼ has nonnegative length, by Theorem 5.10. This implies

l(Q) ≤ l(Q) + Σ_{j=1}^{t} l(Cⱼ) = w(Y) − w(Z) < w(Y) − w(Y′) = l(P).  (41)

This contradicts the fact that P is a minimum-length X₁−X₂ path.
So the weighted common independent set augmenting algorithm is correct. It obviously has polynomially bounded running time. Thus we obtain the result of Edmonds [1970]:

Theorem 5.12. A maximum-weight common independent set in two matroids can be found in polynomial time.

Proof. Starting with the max-weight common independent set Y₀ := ∅, we can iteratively find max-weight common independent sets Y₀, Y₁, …, Yₖ, where |Yᵢ| = i for i = 0, …, k and where Yₖ is a maximum-cardinality common independent set. Taking one among Y₀, …, Yₖ of maximum weight, we obtain a maximum-weight common independent set.
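One possible rendering of the weighted augmentation step: since the vertex lengths (35) may be negative, the sketch below uses Bellman–Ford on (length, arc count) pairs to find a minimum-length X₁−X₂ path with fewest arcs. This is our illustrative reading of the algorithm, with oracle-based matroids and a made-up demo, not code from the text:

```python
def weighted_augment(X, Y, w, indep1, indep2):
    """One weighted augmentation step: given a max-weight common independent
    set Y of size k, return a max-weight one of size k + 1 (None if none)."""
    Y, out = set(Y), set(X) - set(Y)
    X1 = {x for x in out if indep1(Y | {x})}
    X2 = {x for x in out if indep2(Y | {x})}
    arcs = []
    for y in Y:
        for x in out:
            S = (Y - {y}) | {x}
            if indep1(S):
                arcs.append((y, x))
            if indep2(S):
                arcs.append((x, y))
    l = {x: (w[x] if x in Y else -w[x]) for x in X}   # lengths as in (35)
    # Bellman-Ford on (length, arc count) pairs: Theorem 5.10 rules out
    # negative circuits, so the distances converge within |X| rounds.
    dist = {x: (l[x], 0) for x in X1}
    pred = {x: None for x in X1}
    for _ in range(len(X)):
        for a, b in arcs:
            if a in dist:
                cand = (dist[a][0] + l[b], dist[a][1] + 1)
                if b not in dist or cand < dist[b]:
                    dist[b], pred[b] = cand, a
    end = min((x for x in X2 if x in dist), key=lambda x: dist[x], default=None)
    if end is None:
        return None
    path = []
    while end is not None:
        path.append(end)
        end = pred[end]
    return Y ^ set(path)   # exchange along the path, as in (36)

# Demo: max-weight matchings in a small bipartite graph, one size at a time.
X = [(0, 2), (0, 3), (1, 2)]
w = {(0, 2): 5, (0, 3): 1, (1, 2): 1}
i1 = lambda F: len({e[0] for e in F}) == len(F)   # partition matroid oracles
i2 = lambda F: len({e[1] for e in F}) == len(F)
Y1 = weighted_augment(X, set(), w, i1, i2)
Y2 = weighted_augment(X, Y1, w, i1, i2)
print(Y1, Y2)  # the best single edge, then the best matching of size 2
```
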
Exercises

5.41. Give an example of two matroids M₁ = (X, I₁) and M₂ = (X, I₂) and a weight function w : X −→ ℤ₊ so that there is no maximum-weight common independent set which is also a maximum-cardinality common independent set.

5.42. Reduce the problem of finding a maximum-weight common basis in two matroids to the problem of finding a maximum-weight common independent set.

5.43. Let D = (V, A) be a directed graph, let r ∈ V, and let l : A −→ ℤ₊ be a length function. Reduce the problem of finding a minimum-length rooted tree with root r to the problem of finding a maximum-weight common independent set in two matroids.

5.44. Let B be a common basis of the matroids M₁ = (X, I₁) and M₂ = (X, I₂) and let w : X −→ ℤ be a weight function. Define the length function l : X −→ ℤ by l(x) := w(x) if x ∈ B and l(x) := −w(x) if x ∉ B.
Show that B has maximum weight among all common bases of M₁ and M₂, if and only if H(M₁, M₂, B) has no directed circuit of negative length.
5.7. Matroids and polyhedra

The algorithmic results obtained in the previous sections have interesting consequences for polyhedra associated with matroids.

Let M = (X, I) be a matroid. The matroid polytope P(M) of M is, by definition, the convex hull of the incidence vectors of the independent sets of M. So P(M) is a polytope in ℝ^X.

Each vector z in P(M) satisfies the following linear inequalities:

z(x) ≥ 0 for x ∈ X,
z(Y) ≤ r_M(Y) for Y ⊆ X.  (42)

This follows from the fact that the incidence vector χ^Y of any independent set Y of M satisfies (42).

Note that if z is an integer vector satisfying (42), then z is the incidence vector of some independent set of M.

Edmonds [1970] showed that system (42) in fact fully determines the matroid polytope P(M). It means that for each weight function w : X −→ ℝ, the linear programming problem

maximize wᵀz,
subject to z(x) ≥ 0 (x ∈ X),
           z(Y) ≤ r_M(Y) (Y ⊆ X)  (43)

has an integer optimum solution z. This optimum solution necessarily is the incidence vector of some independent set of M. In order to prove this, we also consider the LP-problem dual to (43):

minimize Σ_{Y⊆X} y_Y·r_M(Y),
subject to y_Y ≥ 0 (Y ⊆ X),
           Σ_{Y⊆X, x∈Y} y_Y ≥ w(x) (x ∈ X).  (44)

We show:
Theorem 5.13. If w is integer, then (43) and (44) have integer optimum solutions.

Proof. Order the elements of X as y₁, …, yₘ in such a way that w(y₁) ≥ w(y₂) ≥ ⋯ ≥ w(yₘ). Let n be the largest index for which w(yₙ) ≥ 0. Define Xᵢ := {y₁, …, yᵢ} for i = 0, …, m and

Y := {yᵢ | i ≤ n; r_M(Xᵢ) > r_M(X_{i−1})}.  (45)

Then Y belongs to I (cf. Exercise 5.7). So z := χ^Y is an integer feasible solution of (43).

Moreover, define a vector y in ℝ^{P(X)} by:

y_Y := w(yᵢ) − w(y_{i+1}) if Y = Xᵢ for some i = 1, …, n − 1,
y_Y := w(yₙ) if Y = Xₙ,
y_Y := 0 for all other Y ⊆ X.  (46)

Then y is an integer feasible solution of (44).

We show that z and y have the same objective value, thus proving the theorem:

wᵀz = w(Y) = Σ_{x∈Y} w(x) = Σ_{i=1}^{n} w(yᵢ)·(r_M(Xᵢ) − r_M(X_{i−1}))
= w(yₙ)·r_M(Xₙ) + Σ_{i=1}^{n−1} (w(yᵢ) − w(y_{i+1}))·r_M(Xᵢ) = Σ_{Y⊆X} y_Y·r_M(Y).  (47)

So system (42) is totally dual integral. This directly implies:

Corollary 5.13a. The matroid polytope P(M) is determined by (42).

Proof. Immediately from Theorem 5.13.
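The greedy construction in this proof can be checked numerically. The sketch below builds the primal solution (45) and the dual solution (46) for a small graphic matroid (the triangle example and the brute-force rank are our own choices) and confirms the equality (47):

```python
from itertools import chain, combinations

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

# Graphic matroid of a triangle: ground set = edges, independent = forests.
X = ["e1", "e2", "e3"]
def indep(F):
    return len(F) < 3            # any 2 of the 3 triangle edges form a forest

def r(U):
    return max(len(S) for S in subsets(U) if indep(set(S)))

w = {"e1": 3, "e2": 2, "e3": 2}
order = sorted(X, key=lambda e: -w[e])          # w(y1) >= ... >= w(ym)
n = len([e for e in order if w[e] >= 0])
Xi = [set(order[:i]) for i in range(len(X) + 1)]

# Primal: greedy independent set Y as in (45).
Y = {order[i - 1] for i in range(1, n + 1) if r(Xi[i]) > r(Xi[i - 1])}

# Dual: y as in (46), supported on the sets X_i.
dual = {i: (w[order[i - 1]] - w[order[i]] if i < n else w[order[n - 1]])
        for i in range(1, n + 1)}

primal_value = sum(w[e] for e in Y)
dual_value = sum(dual[i] * r(Xi[i]) for i in dual)
print(primal_value, dual_value)  # equal, as equation (47) asserts
```
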
An even stronger phenomenon occurs at intersections of matroid polytopes. It turns out that the intersection of two matroid polytopes gives exactly the convex hull of the common independent sets, as was shown again by Edmonds [1970].

To see this, we first derive a basic property:

Theorem 5.14. Let M₁ = (X, I₁) and M₂ = (X, I₂) be matroids, let w : X −→ ℤ be a weight function and let B be a common basis of maximum weight w(B). Then there exist functions w₁, w₂ : X −→ ℤ so that w = w₁ + w₂, and so that B is both a maximum-weight basis of M₁ with respect to w₁ and a maximum-weight basis of M₂ with respect to w₂.

Proof. Consider the directed graph H(M₁, M₂, B) with length function l as defined in Exercise 5.44. Since B is a maximum-weight common basis, H(M₁, M₂, B) has no directed circuits of negative length. Hence there exists a function φ : X −→ ℤ so that φ(y) − φ(x) ≤ l(y) for each arc (x, y) of H(M₁, M₂, B). Using the definition of H(M₁, M₂, B) and l, this implies that for y ∈ B, x ∈ X \ B:

φ(x) − φ(y) ≤ −w(x) if (B \ {y}) ∪ {x} ∈ I₁,
φ(y) − φ(x) ≤ w(y) if (B \ {y}) ∪ {x} ∈ I₂.  (48)

Now define

w₁(y) := φ(y),  w₂(y) := w(y) − φ(y)  for y ∈ B,
w₁(x) := w(x) + φ(x),  w₂(x) := −φ(x)  for x ∈ X \ B.  (49)

Then w₁(x) ≤ w₁(y) whenever (B \ {y}) ∪ {x} ∈ I₁. So by Exercise 5.24, B is a maximum-weight basis of M₁ with respect to w₁. Similarly, B is a maximum-weight basis of M₂ with respect to w₂.
Note that if B is a maximum-weight basis of M₁ with respect to some weight function w, then this remains the case after adding a constant function to w.

This observation will be used in showing that a theorem similar to Theorem 5.14 holds for independent sets instead of bases.

Theorem 5.15. Let M₁ = (X, I₁) and M₂ = (X, I₂) be matroids, let w : X −→ ℤ be a weight function, and let Y be a maximum-weight common independent set. Then there exist weight functions w₁, w₂ : X −→ ℤ so that w = w₁ + w₂ and so that Y is both a maximum-weight independent set of M₁ with respect to w₁ and a maximum-weight independent set of M₂ with respect to w₂.
Proof. Let U be a set of cardinality |X| + 2 disjoint from X. Define

J₁ := {Y ∪ W | Y ∈ I₁, W ⊆ U, |Y ∪ W| ≤ |X| + 1},
J₂ := {Y ∪ W | Y ∈ I₂, W ⊆ U, |Y ∪ W| ≤ |X| + 1}.  (50)

Then M₁′ := (X ∪ U, J₁) and M₂′ := (X ∪ U, J₂) are matroids again. Define w̃ : X ∪ U −→ ℤ by

w̃(x) := w(x) if x ∈ X,
w̃(x) := 0 if x ∈ U.  (51)

Let W be a subset of U of cardinality |X \ Y| + 1. Then Y ∪ W is a common basis of M₁′ and M₂′. In fact, Y ∪ W is a maximum-weight common basis with respect to the weight function w̃. (Any basis B of larger weight would intersect X in a common independent set of M₁ and M₂ of larger weight than Y.)

So by Theorem 5.14, there exist functions w̃₁, w̃₂ : X ∪ U −→ ℤ so that w̃₁ + w̃₂ = w̃ and so that Y ∪ W is both a maximum-weight basis of M₁′ with respect to w̃₁ and a maximum-weight basis of M₂′ with respect to w̃₂.

Now w̃₁(u″) ≤ w̃₁(u′) whenever u′ ∈ W, u″ ∈ U \ W: otherwise we could replace u′ in Y ∪ W by u″ to obtain a basis of M₁′ of larger w̃₁-weight. Similarly, w̃₂(u″) ≤ w̃₂(u′) whenever u′ ∈ W, u″ ∈ U \ W.

Since w̃₁(u) + w̃₂(u) = w̃(u) = 0 for all u ∈ U, this implies that w̃₁(u″) = w̃₁(u′) whenever u′ ∈ W, u″ ∈ U \ W. As ∅ ≠ W ≠ U, it follows that w̃₁ and w̃₂ are constant on U. Since we can add a constant function to w̃₁ and subtract the same function from w̃₂ without spoiling the required properties, we may assume that w̃₁ and w̃₂ are 0 on U.

Now define w₁(x) := w̃₁(x) and w₂(x) := w̃₂(x) for each x ∈ X. Then Y is both a maximum-weight independent set of M₁ with respect to w₁ and a maximum-weight independent set of M₂ with respect to w₂.
Having this theorem, it is quite easy to derive that the intersection of two matroid polytopes has integer vertices, these being incidence vectors of common independent sets.

By Theorem 5.13, the intersection P(M₁) ∩ P(M₂) of the matroid polytopes associated with the matroids M₁ = (X, I₁) and M₂ = (X, I₂) is determined by:

z(x) ≥ 0 (x ∈ X),
z(Y) ≤ r_{M₁}(Y) (Y ⊆ X),
z(Y) ≤ r_{M₂}(Y) (Y ⊆ X).  (52)

The corresponding linear programming problem is, for any w : X −→ ℝ:

maximize wᵀz,
subject to z(x) ≥ 0 (x ∈ X),
           z(Y) ≤ r_{M₁}(Y) (Y ⊆ X),
           z(Y) ≤ r_{M₂}(Y) (Y ⊆ X).  (53)

Again we consider the dual linear programming problem:

minimize Σ_{Y⊆X} (y′_Y·r_{M₁}(Y) + y″_Y·r_{M₂}(Y)),
subject to y′_Y ≥ 0 (Y ⊆ X),
           y″_Y ≥ 0 (Y ⊆ X),
           Σ_{Y⊆X, x∈Y} (y′_Y + y″_Y) ≥ w(x) (x ∈ X).  (54)

Now:
Theorem 5.16. If w is integer, then (53) and (54) have integer optimum solutions.

Proof. Let Y be a common independent set of maximum weight w(Y). By Theorem 5.15, there exist functions w₁, w₂ : X −→ ℤ so that w₁ + w₂ = w and so that Y is a maximum-weight independent set in Mᵢ with respect to wᵢ (i = 1, 2).

Applying Theorem 5.13 to w₁ and w₂, respectively, we know that there exist integer optimum solutions y′ and y″, respectively, for problem (44) with respect to M₁, w₁ and M₂, w₂. One easily checks that y′, y″ forms a feasible solution of (54). Optimality follows from:

w(Y) = w₁(Y) + w₂(Y) = Σ_{Z⊆X} y′_Z·r_{M₁}(Z) + Σ_{Z⊆X} y″_Z·r_{M₂}(Z) = Σ_{Z⊆X} (y′_Z·r_{M₁}(Z) + y″_Z·r_{M₂}(Z)).  (55)

So system (52) is totally dual integral. Again, this directly implies:

Corollary 5.16a. The convex hull of the common independent sets of two matroids M₁ and M₂ is determined by (52).

Proof. Directly from Theorem 5.16.
Exercises

5.45. Give an example of three matroids M₁, M₂, and M₃ on the same set X so that the intersection P(M₁) ∩ P(M₂) ∩ P(M₃) is not the convex hull of the common independent sets.

5.46. Derive Edmonds' matroid intersection theorem (Theorem 5.9) from Theorem 5.16.
6. Perfect matchings in regular bipartite graphs

6.1. Counting perfect matchings in 3-regular bipartite graphs

We first consider the following theorem of Voorhoeve [1979]:

Theorem 6.1. Let G = (V, E) be a 3-regular bipartite graph with 2n vertices. Then G has at least (4/3)ⁿ perfect matchings.
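As a quick sanity check of the bound before the proof, one can count the perfect matchings of a small 3-regular bipartite graph by computing the permanent of its biadjacency matrix; for K₃,₃ (so n = 3) there are 6 ≥ (4/3)³ ≈ 2.37 of them:

```python
from itertools import permutations

# Biadjacency matrix of K_{3,3}, a 3-regular bipartite graph with 2n = 6.
A = [[1, 1, 1],
     [1, 1, 1],
     [1, 1, 1]]

def count_perfect_matchings(A):
    """Permanent of the biadjacency matrix = number of perfect matchings."""
    n = len(A)
    return sum(
        all(A[i][p[i]] for i in range(n))
        for p in permutations(range(n)))

m = count_perfect_matchings(A)
print(m, (4 / 3) ** 3)  # 6 perfect matchings; the bound is 64/27
```
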
Proof. For any graph G, let φ(G) denote the number of perfect matchings in G.

We now define three functions. Let h(n) be the minimum of φ(G) taken over all 3-regular bipartite graphs on 2n vertices. Let f(n) be the minimum of φ(G) taken over all bipartite graphs on 2n vertices where two vertices have degree 2, while all other vertices have degree 3. Let g(n) be the minimum of φ(G) taken over all bipartite graphs on 2n vertices where one vertex has degree 2, one vertex has degree 4, while all other vertices have degree 3.

So we must show

h(n) ≥ (4/3)ⁿ  (1)

for each n. We show a number of relations between the three functions h, f, and g that imply (1).

First we have

h(n) ≥ (3/2)·f(n)  (2)

for each n. Indeed, let G be a 3-regular bipartite graph on 2n vertices with φ(G) = h(n). Choose a vertex u, and let e₁, e₂, e₃ be the edges incident with u.

Let G − eᵢ be the graph obtained from G by deleting eᵢ. So G − eᵢ is a graph with two vertices of degree 2, while all other vertices have degree 3. So, by definition of f(n),

φ(G − eᵢ) ≥ f(n).  (3)

Now each perfect matching M in G is also a perfect matching in two of the graphs G − e₁, G − e₂, G − e₃ (it uses exactly one edge at u). Hence we have:

2h(n) = 2φ(G) = φ(G − e₁) + φ(G − e₂) + φ(G − e₃) ≥ 3f(n),  (4)

implying (2).

Having (2), it suffices to show:

f(n) ≥ (4/3)ⁿ  (5)

for each n.
Trivially one has

f(1) = 2.  (6)

Next we show that for each n:

g(n) ≥ (4/3)·f(n).  (7)

Let G be a bipartite graph with 2n vertices, with one vertex of degree 2, one vertex of degree 4, while all other vertices have degree 3, such that φ(G) = g(n).

Note that the vertices of degree 2 and 4 belong to the same colour class of G.

Let u be the vertex of degree 4, and let e₁, …, e₄ be the four edges incident with u. Then for each i = 1, …, 4, the graph G − eᵢ has two vertices of degree 2, while all other vertices have degree 3. So φ(G − eᵢ) ≥ f(n) for each i = 1, …, 4.

Moreover, each perfect matching M of G is a perfect matching of exactly three of the graphs G − e₁, …, G − e₄. Hence

3g(n) = 3φ(G) = φ(G − e₁) + ⋯ + φ(G − e₄) ≥ 4f(n),  (8)

implying (7).
We next show
f(n) ≥ (4/3) f(n−1). (9)
Then (5) follows by induction on n.
To see (9), let G be a bipartite graph on 2n vertices with two vertices, u and w say,
of degree 2, all other vertices having degree 3. Note that u and w necessarily belong to
different colour classes of G.
Let v_1 and v_2 be the two neighbours of u.
There are a number of cases (the ﬁrst case being most general).
Case 1: v_1 ≠ v_2 ≠ w ≠ v_1. So v_1 and v_2 are distinct vertices of degree 3. Contract the
edges uv_1 and uv_2. We obtain a graph G′ with one vertex w of degree 2, one vertex (the
new vertex arisen by the contraction) of degree 4, while all other vertices have degree 3.
Moreover, G′ has 2(n−1) vertices. Each perfect matching in G′ is the image of a perfect
matching in G. So
f(n) = φ(G) ≥ φ(G′) ≥ g(n−1) ≥ (4/3) f(n−1), (10)
using (7). (In fact one has φ(G) = φ(G′), but that is not needed in the proof.)
Case 2: v_1 ≠ v_2 = w. So v_1 has degree 3 and v_2 has degree 2. Contract the edges uv_1 and
uv_2. We obtain a 3-regular bipartite graph G′ with 2(n−1) vertices. Again, each perfect
matching in G′ is the image of a perfect matching in G. So
f(n) = φ(G) ≥ φ(G′) ≥ h(n−1) ≥ (3/2) f(n−1) ≥ (4/3) f(n−1), (11)
using (2).
Case 3: v_1 = v_2 ≠ w. So there are two parallel edges connecting u and v_1. Consider the
graph G′ = G − u − v_1 (the graph obtained by deleting vertices u and v_1 and all edges
incident with them). Then G′ is a bipartite graph with 2(n−1) vertices, with two vertices
of degree 2, while all other vertices have degree 3. Moreover, each perfect matching M in
G′ can be extended in two ways to a perfect matching in G (since there are two parallel
edges connecting u and v_1). So
f(n) = φ(G) ≥ 2φ(G′) ≥ 2f(n−1) ≥ (4/3) f(n−1). (12)
Case 4: v_1 = v_2 = w. So u and v_1 form a component of G, with two parallel edges
connecting u and v_1. Again, consider the graph G′ = G − u − v_1. Then G′ is a 3-regular
bipartite graph with 2(n−1) vertices. Each perfect matching M in G′ can be extended in
two ways to a perfect matching in G (since there are two parallel edges connecting u and
v_1). So
f(n) = φ(G) ≥ 2φ(G′) ≥ 2h(n−1) ≥ 3f(n−1) ≥ (4/3) f(n−1), (13)
using (2).
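As a quick sanity check of Theorem 6.1 (a sketch, not part of the proof): for a bipartite (multi)graph given by its bipartite adjacency matrix M, where M[i][j] is the number of parallel edges between u_i and v_j, the number φ(G) of perfect matchings equals the permanent of M, which can be computed by brute force for small n. The example matrices below are my own choices of 3-regular instances.

```python
from itertools import permutations
from math import prod

def count_perfect_matchings(M):
    """Permanent of the bipartite adjacency matrix M; M[i][j] is the number
    of parallel edges between u_i and v_j, so the permanent counts the
    perfect matchings of the (multi)graph."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# 3-regular bipartite examples (all row and column sums equal 3):
examples = [
    [[3]],                                                     # n = 1: a triple edge
    [[2, 1], [1, 2]],                                          # n = 2
    [[1, 1, 1], [1, 1, 1], [1, 1, 1]],                         # n = 3: K_{3,3}
    [[1, 1, 1, 0], [1, 1, 1, 0], [1, 0, 1, 1], [0, 1, 0, 2]],  # n = 4
]
for M in examples:
    assert count_perfect_matchings(M) >= (4 / 3) ** len(M)
```

For instance K_{3,3} has 3! = 6 perfect matchings, comfortably above (4/3)^3 ≈ 2.37.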
6.2. The factor 4/3 is best possible
Let α be the largest real number with the property that each 3-regular bipartite graph
with 2n vertices has at least α^n perfect matchings.
We show
Theorem 6.2. α = 4/3.
Proof. The fact that α ≥ 4/3 follows directly from Theorem 6.1. To see the reverse inequality,
fix n. Let Π be the set of permutations of {1, . . . , 3n}. For any π ∈ Π, let G_π be the bipartite
graph with vertices u_1, . . . , u_n, v_1, . . . , v_n and edges e_1, . . . , e_3n, where
e_i connects u_⌈i/3⌉ and v_⌈π(i)/3⌉ (14)
for i = 1, . . . , 3n. (Here ⌈x⌉ denotes the upper integer part of x.) So G_π is a 3-regular
bipartite graph with 2n vertices. Hence, by definition of α,
φ(G_π) ≥ α^n, (15)
where φ(G_π) denotes the number of perfect matchings in G_π.
On the other hand,
∑_{π∈Π} φ(G_π) = 3^n · 3^n · n!(2n)!. (16)
This can be seen as follows. The left hand side is equal to the number of pairs (π, I), where
π is a permutation of {1, . . . , 3n} and where I is a subset of {1, . . . , 3n} such that {e_i | i ∈ I}
forms a perfect matching in G_π; that is, such that
(i) |I ∩ {3j−2, 3j−1, 3j}| = 1 for each j = 1, . . . , n,
(ii) |π(I) ∩ {3j−2, 3j−1, 3j}| = 1 for each j = 1, . . . , n. (17)
Now by first choosing I satisfying (17)(i) (which can be done in 3^n ways), and next choosing
a permutation π of {1, . . . , 3n} satisfying (17)(ii) (which can be done in 3^n · n!(2n)! ways),
we obtain (16).
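The counting identity (16) can be checked by brute force for small n (a sketch; the code uses 0-based indices, so the blocks {3j−2, 3j−1, 3j} become {3j, 3j+1, 3j+2}):

```python
from itertools import permutations, product
from math import factorial

def pair_count(n):
    """Count the pairs (pi, I) satisfying (17) directly, 0-based."""
    blocks = [set(range(3 * j, 3 * j + 3)) for j in range(n)]
    total = 0
    for pi in permutations(range(3 * n)):
        for I in product(*blocks):                       # (17)(i) holds by construction
            image = {pi[i] for i in I}
            if all(len(image & b) == 1 for b in blocks):  # (17)(ii)
                total += 1
    return total

for n in (1, 2):
    assert pair_count(n) == 3**n * 3**n * factorial(n) * factorial(2 * n)
```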
Since |Π| = (3n)!, (15) and (16) imply
α ≤ (3^{2n} n!(2n)! / (3n)!)^{1/n} (18)
yielding α ≤ 4/3 (letting n → ∞), with Stirling's formula, which says that
n! ≈ (n/e)^n √(2πn); (19)
in fact,
lim_{n→∞} n!^{1/n}/n = 1/e. (20)
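The limit behaviour of the bound (18) can also be checked numerically (a sketch; the factorials are evaluated via lgamma in log space to avoid overflow):

```python
from math import lgamma, exp, log

def upper_bound(n):
    """(3^(2n) n! (2n)! / (3n)!)^(1/n), evaluated in log space."""
    ln_value = 2 * n * log(3) + lgamma(n + 1) + lgamma(2 * n + 1) - lgamma(3 * n + 1)
    return exp(ln_value / n)

values = [upper_bound(n) for n in (1, 10, 100, 1000, 10**5)]
assert all(v > 4 / 3 for v in values)          # always above 4/3 ...
assert values == sorted(values, reverse=True)  # ... and decreasing towards it
assert abs(upper_bound(10**5) - 4 / 3) < 1e-3
```

By Stirling, the bound equals (4/3) · (4πn/3)^{1/(2n)}, which indeed tends to 4/3.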
7. Minimum circulation of railway stock
7.1. The problem
Nederlandse Spoorwegen (Dutch Railways) runs an hourly train service on its route
Amsterdam-Schiphol Airport-Leyden-The Hague-Rotterdam-Dordrecht-Roosendaal-Middel-
burg-Vlissingen vice versa, with the following timetable, for each day from Monday till
Friday:
ride number 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191
Amsterdam d 6.48 7.55 8.56 9.56 10.56 11.56 12.56 13.56 14.56 15.56 16.56 17.56 18.56 19.56 20.56 21.56 22.56
Rotterdam a 7.55 8.58 9.58 10.58 11.58 12.58 13.58 14.58 15.58 16.58 17.58 18.58 19.58 20.58 21.58 22.58 23.58
Rotterdam d 7.00 8.01 9.02 10.03 11.02 12.03 13.02 14.02 15.02 16.00 17.01 18.01 19.02 20.02 21.02 22.02 23.02
Roosendaal a 7.40 8.41 9.41 10.43 11.41 12.41 13.41 14.41 15.41 16.43 17.43 18.42 19.41 20.41 21.41 22.41 23.54
Roosendaal d 7.43 8.43 9.43 10.45 11.43 12.43 13.43 14.43 15.43 16.45 17.45 18.44 19.43 20.43 21.43
Vlissingen a 8.38 9.38 10.38 11.38 12.38 13.38 14.38 15.38 16.38 17.40 18.40 19.39 20.38 21.38 22.38
ride number 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176
Vlissingen d 5.30 6.54 7.56 8.56 9.56 10.56 11.56 12.56 13.56 14.56 15.56 16.56 17.56 18.56 19.55
Roosendaal a 6.35 7.48 8.50 9.50 10.50 11.50 12.50 13.50 14.50 15.50 16.50 17.50 18.50 19.50 20.49
Roosendaal d 5.29 6.43 7.52 8.53 9.53 10.53 11.53 12.53 13.53 14.53 15.53 16.53 17.53 18.53 19.53 20.52 21.53
Rotterdam a 6.28 7.26 8.32 9.32 10.32 11.32 12.32 13.32 14.32 15.32 16.32 17.33 18.32 19.32 20.32 21.30 22.32
Rotterdam d 5.31 6.29 7.32 8.35 9.34 10.34 11.34 12.34 13.35 14.35 15.34 16.34 17.35 18.34 19.34 20.35 21.32 22.34
Amsterdam a 6.39 7.38 8.38 9.40 10.38 11.38 12.38 13.38 14.38 15.38 16.40 17.38 18.38 19.38 20.38 21.38 22.38 23.38
Table 1. Timetable Amsterdam-Vlissingen vice versa
The trains have more stops, but for our purposes only those given in the table are of
interest.
For each of the stages of any scheduled train, Nederlandse Spoorwegen has determined
an expected number of passengers, divided into ﬁrst class and second class, given in the
following table:
train number 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191
Amsterdam-Rotterdam
47 100 61 41 31 46 42 33 39 84 109 78 44 28 21 28 10
340 616 407 336 282 287 297 292 378 527 616 563 320 184 161 190 123
Rotterdam-Roosendaal
4 35 52 41 26 25 27 27 28 52 113 98 51 29 22 13 8
58 272 396 364 240 221 252 267 287 497 749 594 395 254 165 130 77
Roosendaal-Vlissingen
14 19 27 26 24 32 15 21 23 41 76 67 43 20 15
328 181 270 237 208 188 180 195 290 388 504 381 276 187 136
train number 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176
Vlissingen-Roosendaal
28 100 48 57 24 19 19 17 19 22 39 30 19 15 11
138 448 449 436 224 177 184 181 165 225 332 309 164 142 121
Roosendaal-Rotterdam
16 88 134 57 71 34 26 22 21 25 35 51 32 20 14 14 7
167 449 628 397 521 281 214 218 174 206 298 422 313 156 155 130 64
Rotterdam-Amsterdam
7 26 106 105 56 75 47 36 32 34 39 67 74 37 23 18 17 11
61 230 586 545 427 512 344 303 283 330 338 518 606 327 169 157 154 143
Table 2. Numbers of required ﬁrst class (up) and second class (down) seats
The problem to be solved is: What is the minimum amount of train stock necessary to
perform the service in such a way that at each stage there are enough seats?
In order to answer this question, one should know a number of further characteristics
and constraints. In a ﬁrst variant of the problem considered, the train stock consists of one
type of two-way train-units, each consisting of three carriages. The number of seats in any
unit is:
ﬁrst class 38
second class 163
Table 3. Number of seats
Each unit has at both ends an engineer’s cabin, and units can be coupled together, up to
a certain maximum number of units. This maximum is trajectory-dependent, and depends
on lengths of station platforms, curvature of bends, required acceleration speed and braking
distance, etc. On each of the trajectories of the line Amsterdam-Vlissingen this maximum
number is 15 carriages, meaning 5 train-units.
The train length can be changed, by coupling or decoupling units, at the terminal
stations of the line, that is at Amsterdam and Vlissingen, and en route at two intermediate
stations: Rotterdam and Roosendaal. Any train-unit decoupled from a train arriving at
place X at time t can be linked up to any other train departing from X at any time later
than t. (The Amsterdam-Vlissingen schedule is such that in practice this gives enough time
to make the necessary switchings.)
A last condition is that for each place X ∈ {Amsterdam, Rotterdam, Roosendaal,
Vlissingen}, the number of train-units staying overnight at X should be constant during the
week (but may vary for diﬀerent places). This requirement is made to facilitate surveying
the stock, and to equalize at any place the load of overnight cleaning and maintenance
throughout the week. It is not required that the same train-unit, after a night in Roosendaal,
say, should return to Roosendaal at the end of the day. Only the number of units is of
importance.
Given these problem data and characteristics, one may ask for the minimum number of
train-units that should be available to perform the daily cycle of train rides required.
It is assumed that if there is suﬃcient stock for Monday till Friday, then this should also
be enough for the weekend services, since in the weekend a few early trains are cancelled,
and on the remaining trains there is a smaller expected number of passengers. Moreover, it
is not taken into consideration that stock can be exchanged during the day with other lines
of the network. In practice this will happen, but initially this possibility is ignored. (We
will return below to this issue.)
Another point left out of consideration is the regular maintenance and repair of stock
and the amount of reserve stock that should be maintained, as this generally amounts to
just a fixed percentage added on top of the net minimum.
7.2. A network model
If only one type of railway stock is used, a classical method can be applied to solve the
problem, based on min-cost circulations in networks (see Bartlett [1957], cf. also Boldyreﬀ
[1955], Feeney [1957], Ferguson and Dantzig [1955], Norman and Dowling [1968], van Rees
[1965], White and Bomberault [1969]).
To this end, a directed graph D = (V, A) is constructed as follows. For each place
X ∈ {Amsterdam, Rotterdam, Roosendaal, Vlissingen} and for each time t at which any
train leaves or arrives at X, we make a vertex (X, t). So the vertices of D correspond to all
198 time entries in the timetable (Table 1).
For any stage of any train ride, leaving place X at time t and arriving at place Y at
time t′, we make a directed arc from (X, t) to (Y, t′). For instance, there is an arc from
(Roosendaal, 7.43) to (Vlissingen, 8.38).
Moreover, for any place X and any two successive times t, t′ at which any train leaves
or arrives at X, we make an arc from (X, t) to (X, t′). Thus in our example there will be arcs,
50 Chapter 7. Minimum circulation of railway stock
e.g., from (Rotterdam, 8.01) to (Rotterdam, 8.32), from (Rotterdam, 8.32) to (Rotterdam, 8.35),
from (Vlissingen, 8.38) to (Vlissingen, 8.56), and from (Vlissingen, 8.56) to (Vlissingen, 9.38).
Figure 7.1. The graph D (with a vertex for each time entry at the four stations Amsterdam, Rotterdam, Roosendaal, Vlissingen). All arcs are oriented clockwise
Finally, for each place X there will be an arc from (X, t) to (X, t′), where t is the last
time of the day at which any train leaves or arrives at X and where t′ is the first time of the
day at which any train leaves or arrives at X. So there is an arc from (Roosendaal, 23.54)
to (Roosendaal, 5.29).
We can now describe any possible routing of train stock as a function f : A → Z_+,
where f(a) denotes the following. If a corresponds to a ride stage, then f(a) is the number
of units deployed for that stage. If a corresponds to an arc from (X, t) to (X, t′), then
f(a) is equal to the number of units present at place X in the time period t–t′ (possibly
overnight).
First of all, this function is a circulation. That is, at any vertex v of D one should have:
∑_{a∈δ+(v)} f(a) = ∑_{a∈δ−(v)} f(a), (1)
the flow conservation law. Here δ+(v) denotes the set of arcs of D that are entering vertex
v and δ−(v) denotes the set of arcs of D that are leaving v.
Moreover, in order to satisfy the demand and capacity constraints, f should satisfy the
following condition for each arc a corresponding to a stage:
d(a) ≤ f(a) ≤ c(a). (2)
Here c(a) gives the ‘capacity’ for the stage, in our example c(a) = 15 throughout. Further-
more, d(a) denotes the ‘demand’ for that stage, that is, the lower bound on the number
of units required by the expected number of passengers as given in Table 2. That is, with
Table 3 we obtain the following lower bounds on the numbers of train-units:
train number 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191
Amsterdam-Rotterdam 3 4 3 3 2 2 2 2 3 4 4 4 2 2 1 2 1
Rotterdam-Roosendaal 1 2 3 3 2 2 2 2 2 4 5 4 3 2 2 1 1
Roosendaal-Vlissingen 3 2 2 2 2 2 2 2 2 3 4 3 2 2 1
train number 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176
Vlissingen-Roosendaal 1 3 3 3 2 2 2 2 2 2 3 2 2 1 1
Roosendaal-Rotterdam 2 3 4 3 4 2 2 2 2 2 2 3 2 1 1 1 1
Rotterdam-Amsterdam 1 2 4 4 3 4 3 2 2 3 3 4 4 3 2 1 1 1
Table 4. Lower bounds on the number of train-units
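Table 4 can be reproduced from Tables 2 and 3 (a sketch): the lower bound d(a) for a stage is the smallest number of units whose first and second class seats both cover the expected demand.

```python
from math import ceil

FIRST_PER_UNIT, SECOND_PER_UNIT = 38, 163   # seats per train-unit (Table 3)

def units_needed(p1, p2):
    """Lower bound d(a): enough units for both seat classes."""
    return max(ceil(p1 / FIRST_PER_UNIT), ceil(p2 / SECOND_PER_UNIT))

# spot checks against Table 4 (Amsterdam-Rotterdam, trains 2123 and 2127;
# Rotterdam-Roosendaal, train 2163):
assert units_needed(47, 340) == 3
assert units_needed(100, 616) == 4
assert units_needed(113, 749) == 5
```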
Note that, by the ﬂow conservation law, at any section of the graph in Figure 7.1, the
total ﬂow on the arcs crossing the section is independent of the choice of the section. It
gives the number of train-units that are used. This number is also equal to the total ﬂow
on the ‘overnight’ arcs. So if we wish to minimize the total number of units deployed, we
could restrict ourselves to:
Minimize ∑_{a∈A′} f(a). (3)
Here A′ denotes the set of overnight arcs. So |A′| = 4 in the example.
It is easy to see that this fully models the problem. Hence determining the minimum
number of train-units amounts to solving a minimum-cost circulation problem, where the
cost function is quite trivial: we have cost(a) = 1 if a is an overnight arc, and cost(a) = 0
for all other arcs.
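The model can be sketched in code on a hypothetical two-station timetable (an illustrative toy, not the NS data or NS's implementation). It uses the standard reduction of a minimum-cost circulation with lower bounds to an ordinary min-cost flow, solved here by successive shortest paths, one of the classical augmenting-path methods cited below.

```python
from collections import defaultdict

def add_edge(graph, edges, u, v, cap, cost):
    graph[u].append(len(edges)); edges.append([u, v, cap, cost])
    graph[v].append(len(edges)); edges.append([v, u, 0, -cost])

def min_cost_flow(graph, edges, s, t):
    """Successive shortest paths; Bellman-Ford tolerates the negative
    residual costs that appear after augmenting."""
    total_flow = total_cost = 0
    while True:
        dist, pred = {s: 0}, {}
        for _ in range(len(graph) + 1):
            changed = False
            for eid, (u, v, cap, w) in enumerate(edges):
                if cap > 0 and u in dist and dist[u] + w < dist.get(v, float("inf")):
                    dist[v], pred[v] = dist[u] + w, eid
                    changed = True
            if not changed:
                break
        if t not in dist:
            return total_flow, total_cost
        path, v = [], t                       # walk predecessor arcs back from t
        while v != s:
            path.append(pred[v]); v = edges[pred[v]][0]
        push = min(edges[eid][2] for eid in path)
        for eid in path:
            edges[eid][2] -= push
            edges[eid ^ 1][2] += push         # edges are stored in forward/backward pairs
        total_flow += push; total_cost += push * dist[t]

def min_cost_circulation(arcs):
    """arcs: (u, v, lower, cap, cost); minimum cost of a feasible circulation,
    via the standard lower-bound reduction to a max-flow between s and t."""
    graph, edges, d, base = defaultdict(list), [], defaultdict(int), 0
    for u, v, low, cap, cost in arcs:
        add_edge(graph, edges, u, v, cap - low, cost)
        d[u] += low; d[v] -= low; base += low * cost
    s, t, need = "_s", "_t", 0
    for v, dv in list(d.items()):
        if dv > 0:
            add_edge(graph, edges, v, t, dv, 0); need += dv
        elif dv < 0:
            add_edge(graph, edges, s, v, -dv, 0)
    flow, cost = min_cost_flow(graph, edges, s, t)
    assert flow == need, "no feasible circulation"
    return base + cost

# Hypothetical two-station day, vertices (X, t): ride arcs carry the demanded
# units as lower bounds; only the overnight arcs cost 1 per unit, so the
# optimum cost equals the minimum number of train-units.
arcs = [
    (("A", 8), ("B", 9), 2, 5, 0),     # ride arcs: lower bound, capacity 5, cost 0
    (("B", 8), ("A", 9), 1, 5, 0),
    (("A", 10), ("B", 11), 1, 5, 0),
    (("B", 10), ("A", 11), 2, 5, 0),
]
for X in ("A", "B"):
    for t in (8, 9, 10):
        arcs.append(((X, t), (X, t + 1), 0, 99, 0))   # waiting arcs
    arcs.append(((X, 11), (X, 8), 0, 99, 1))          # overnight arc, cost 1 per unit

assert min_cost_circulation(arcs) == 3
```

In this toy, three units are dispatched simultaneously at 8.00, so three units are also a lower bound; the circulation attains it.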
Having this model, we can apply standard min-cost circulation algorithms, based on min-
cost augmenting paths and cycles (Jewell [1958], Iri [1960], Busacker and Gowen [1960],
Edmonds and Karp [1972]) or on 'out-of-kilter' (Fulkerson [1961], Minty [1960]). Implementation
gives solutions of the problem (for the above data) in about 0.05 CPUseconds on an
SGI R4400. (See also the classical standard reference Ford and Fulkerson [1962] and the
recent encyclopedic treatment Ahuja, Magnanti, and Orlin [1993].)
Alternatively, the problem can be solved easily with any linear programming package,
since by the integrality of the input data and by the total unimodularity of the underlying
matrix the optimum basic solution will have integer values only. With the fast linear
programming package CPLEX (version 2.1) the following optimum solution was obtained
in 0.05 CPUseconds (on an SGI R4400):
train number 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191
Amsterdam-Rotterdam 3 4 3 3 2 2 2 2 5 5 4 4 2 2 1 2 1
Rotterdam-Roosendaal 1 2 3 3 2 2 2 2 2 4 5 4 3 2 2 1 1
Roosendaal-Vlissingen 3 2 2 2 2 2 2 2 2 3 4 3 2 2 1
train number 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176
Vlissingen-Roosendaal 1 3 3 3 2 2 2 2 2 2 3 2 2 1 4
Roosendaal-Rotterdam 2 4 4 3 4 2 2 2 2 3 2 4 3 1 1 1 1
Rotterdam-Amsterdam 1 2 4 4 3 4 3 2 2 3 3 4 4 3 2 1 1 1
Table 5. Minimum circulation with one type of stock
Required are 22 units, divided during the night over the four couple-stations as follows:
number of number of
units carriages
Amsterdam 4 12
Rotterdam 2 6
Roosendaal 8 24
Vlissingen 8 24
Total 22 66
Table 6. Required stock (one type)
It is quite direct to modify and extend the model so as to contain several other problems.
Instead of minimizing the number of train-units one can minimize the amount of carriage-
kilometers that should be made every day, or any linear combination of both quantities. In
addition, one can put an upper bound on the number of units that can be stored at any of
the stations.
Instead of considering one line only, one can more generally consider networks of lines
that share the same stock of railway material, including trains that are scheduled to be
split or combined. (Nederlandse Spoorwegen has trains from The Hague and Rotterdam
to Leeuwarden and Groningen that are combined to one train on the common trajectory
between Utrecht and Zwolle.)
If only one type of unit is employed for that part of the network, each unit having the
same capacity, the problem can be solved fast even for large networks.
7.3. More types of trains
The problem becomes harder if there are several types of trains that can be deployed
for the train service. Clearly, if for each scheduled train we would prescribe which type of
unit should be deployed, the problem could be decomposed into separate problems of the
type above. But if we do not make such a prescription, and if some of the types can be
coupled together to form a train of mixed composition, we should extend the model to a
‘multi-commodity circulation’ model.
Let us restrict ourselves to the case Amsterdam-Vlissingen again, where now we can
deploy two types of two-way train-units, that can be coupled together. The two types are
type III, each unit of which consists of 3 carriages, and type IV, each unit of which consists
of 4 carriages. The capacities are given in the following table:
type III IV
ﬁrst class 38 65
second class 163 218
Table 7. Number of seats
Again, the demands of the train stages are given in Table 2. The maximum number of
carriages that can be in any train is again 15. This means that if a train consists of x units
of type III and y units of type IV then 3x + 4y ≤ 15 should hold.
It is quite easy to extend the model above to the present case. Again we consider the
directed graph D = (V, A) as above. At each arc a let f(a) be the number of units of
type III on the stage corresponding to a and let g(a) similarly represent type IV. So both
f : A → Z_+ and g : A → Z_+ are circulations, that is, they satisfy the flow conservation law:
∑_{a∈δ−(v)} f(a) = ∑_{a∈δ+(v)} f(a), ∑_{a∈δ−(v)} g(a) = ∑_{a∈δ+(v)} g(a), (4)
for each vertex v. The capacity constraint now is:
3f(a) + 4g(a) ≤ 15 (5)
for each arc a representing a stage.
The demand constraint can be formulated as follows:
38f(a) + 65g(a) ≥ p_1(a), 163f(a) + 218g(a) ≥ p_2(a), (6)
for each arc a representing a stage, where p_1(a) and p_2(a) denote the number of first class
and second class seats required (Table 2). Note that, in contrast to the case of one type
of unit, now we cannot speak of a minimum number of units required: there are now two
dimensions, so that minimum train compositions need not be unique.
Let cost_III and cost_IV represent the cost of purchasing one unit of type III and of type
IV, respectively. Although train-units of type IV are more expensive than those of type III,
they are cheaper per carriage; that is:
cost_III < cost_IV < (4/3) cost_III. (7)
This is due to the fact that engineer’s cabins are relatively expensive.
One variant of the problem is to find f and g so as to
Minimize ∑_{a∈A′} (cost_III f(a) + cost_IV g(a)). (8)
However, the classical min-cost circulation algorithms do not apply now. One could
implement variants of augmenting paths and cycles techniques, but they generally lead to
fractional circulations, that is, with certain values being non-integer.
Similarly, when solving the problem as a linear programming problem, we lose the
pleasant phenomenon observed above that we automatically obtain an optimum solution
f, g : A → R with integer values only. (Also Ford and Fulkerson's column generation
technique (Ford and Fulkerson [1958]) yields fractional solutions.)
So the problem is an integer linear programming problem, with 198 integer variables.
Solving the problem in this form with the integer programming package CPLEX (version
2.1) would give (for the Amsterdam-Vlissingen example) a running time of several hours,
which is too long if one wishes to compare several problem data. This long running time
is caused by the fact that, although a fractional optimum solution is found quickly, a large
number of possibilities must be checked in a branching tree (corresponding to rounding
fractional values up or down) before an integer-valued optimum solution is found.
However, there are ways of speeding up the process, by sharpening the constraints and
by exploiting more facilities oﬀered by CPLEX.
The conditions (5) and (6) can be sharpened in the following way. For each arc a
representing a stage, the two-dimensional vector (f(a), g(a)) should be an integer vector in
the polygon
P_a := {(x, y) | x ≥ 0, y ≥ 0, 3x + 4y ≤ 15, 38x + 65y ≥ p_1(a), 163x + 218y ≥ p_2(a)}. (9)
For instance, the trajectory Rotterdam-Amsterdam of train 2132 gives the polygon
P_a = {(x, y) | x ≥ 0, y ≥ 0, 3x + 4y ≤ 15, 38x + 65y ≥ 47, 163x + 218y ≥ 344}. (10)
In a picture:
Figure 7.2. The polygon P_a (horizontal axis: units of type III; vertical axis: units of type IV)
In a sense, the inequalities are too wide. The constraints given in (10) could be tightened
so as to describe exactly the convex hull of the integer vectors in the polygon P_a (the
'integer hull'), as in:
Figure 7.3. The integer hull of P_a (horizontal axis: units of type III; vertical axis: units of type IV)
Thus for segment Rotterdam-Amsterdam of train 2132 the constraints (10) can be sharpened to:
x ≥ 0, y ≥ 0, x + y ≥ 2, x + 2y ≥ 3, y ≤ 3, 3x + 4y ≤ 15. (11)
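A quick brute-force check (a sketch; the capacity inequality of the sharpened system is read here as 3x + 4y ≤ 15) confirms that the sharpened inequalities admit exactly the same integer points as the polygon (10):

```python
def in_original(x, y):      # the polygon (10)
    return (x >= 0 and y >= 0 and 3*x + 4*y <= 15
            and 38*x + 65*y >= 47 and 163*x + 218*y >= 344)

def in_sharpened(x, y):     # the sharpened system, capacity taken as 3x+4y <= 15
    return (x >= 0 and y >= 0 and x + y >= 2 and x + 2*y >= 3
            and y <= 3 and 3*x + 4*y <= 15)

box = [(x, y) for x in range(6) for y in range(5)]   # 3x+4y <= 15 bounds x and y
pts_orig = {p for p in box if in_original(*p)}
pts_sharp = {p for p in box if in_sharpened(*p)}
assert pts_orig == pts_sharp and len(pts_orig) == 11
```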
Doing this for each of the 99 polygons representing a stage gives a sharper set of inequalities,
which helps to obtain more easily an integer optimum solution from a fractional solution.
(This is a weak form of application of the technique of polyhedral combinatorics.) Finding all
these inequalities can be done in a pre-processing phase, and takes about 0.04 CPUseconds.
Another ingredient that improves the performance of CPLEX when applied to this
problem is to give it an order in which the branch-and-bound procedure should select
variables. In particular, one can give higher priority to variables that correspond to peak
hours (as one may expect that they form the bottleneck in obtaining a minimum circulation),
and lower priority to those corresponding to oﬀ-peak periods.
With these techniques implemented, CPLEX gives a solution to the Amsterdam-
Vlissingen problem in 1.58 CPUseconds (taking cost_III = 4 and cost_IV = 5).
train number 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191
Amsterdam-Rotterdam 0+2 0+3 4+0 0+2 0+2 1+2 0+2 1+1 0+3 2+1 0+3 1+2 0+2 0+1 1+2 0+1 0+1
Rotterdam-Roosendaal 0+1 0+2 0+2 4+0 0+2 0+2 1+3 0+3 1+1 0+3 2+2 0+3 0+2 1+1 2+0 1+3 1+0
Roosendaal-Vlissingen 0+2 0+2 0+2 2+0 0+1 0+1 0+2 0+2 2+0 0+2 2+1 0+2 0+2 2+0 0+1
train number 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176
Vlissingen-Roosendaal 1+0 0+3 1+2 0+2 0+2 0+1 1+1 1+1 0+1 0+2 0+2 2+0 0+2 2+0 0+1
Roosendaal-Rotterdam 1+2 3+0 0+3 0+2 1+2 0+2 2+1 1+3 0+1 0+3 1+3 0+3 1+1 0+1 2+2 0+1 1+0
Rotterdam-Amsterdam 0+1 0+2 4+0 0+3 0+3 1+2 0+2 2+0 0+2 1+1 0+3 1+2 0+3 1+1 0+1 0+2 0+1 0+1
Table 8. Minimum circulation with two types of stock.
x +y means: x units of type III and y units of type IV
In total, one needs 5 units of type III and 12 units of type IV, divided during the night
as follows:
units of type III   units of type IV   total units   total carriages
Amsterdam 0 2 2 8
Rotterdam 0 2 2 8
Roosendaal 3 3 6 21
Vlissingen 2 5 7 26
Total 5 12 17 63
Table 9. Required stock (two types)
So comparing this solution with the solution for one type only (Table 6), the possibility
of having two types gives both a decrease in the number of train-units and in the number
of carriages needed.
Interestingly, it turns out that 17 is the minimum number of units needed and 63 is the
minimum number of carriages needed. (This can be shown by finding a minimum circulation
first for cost_III = cost_IV = 1 and next for cost_III = 3, cost_IV = 4.)
So any feasible circulation with stock of Types III and IV requires at least 17 train-
units and at least 63 carriages. In other words, the circulation is optimum for any cost
function satisfying (7). We observed a similar phenomenon when checking other input data
(although there is no mathematical reason for this fact and it is not diﬃcult to construct
examples where it does not show up).
Again variants as described at the end of Section 7.2 also apply to this more extended
model. One can include minimizing the number of carriage-kilometers as an objective, or
the option that in some of the trains a buﬀet section is scheduled (where some of the types
contain a buﬀet). Moreover, one can consider networks of lines.
Our research for NS in fact has focused on more extended problems that require more
complicated models and techniques. One requirement is that in any train ride Amsterdam-
Vlissingen there should be at least one unit that makes the whole trip. Moreover, it is
required that, at any of the four stations given (Amsterdam, Rotterdam, Roosendaal,
Vlissingen) one may either couple units to or decouple units from a train, but not both
simultaneously. Moreover, one may couple fresh units only to the front of the train, and
decouple laid oﬀ units only from the rear. (One may check that these conditions are not
met by all trains in the solution given in Table 8.)
All this means that the order of the different units in a train matters, and that the
conditions have a more global impact: the order of the units in a certain morning train can
still influence the order of some evening train.
model described above, and requires an extension. The method we have developed for NS so
far, based on introducing extra variables, extending the graph described above and utilizing
some heuristic arguments, yields a running time (with CPLEX) of about 30 CPUseconds
for the Amsterdam-Vlissingen problem.
References
References
[1975] G.M. Adel'son-Vel'skiĭ, E.A. Dinits, A.V. Karzanov, Potokovye algoritmy [Russian; Flow Algorithms],
Izdatel'stvo "Nauka", Moscow, 1975.
[1993] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows — Theory, Algorithms, and Applica-
tions, Prentice Hall, Englewood Cliﬀs, New Jersey, 1993.
[1971] I. Anderson, Perfect matchings of a graph, Journal of Combinatorial Theory, Series B 10
(1971) 183–186.
[1957] T.E. Bartlett, An algorithm for the minimum number of transport units to maintain a ﬁxed
schedule, Naval Research Logistics Quarterly 4 (1957) 139–149.
[1971] F. Bock, An algorithm to construct a minimum directed spanning tree in a directed network,
in: Developments in Operations Research, Volume 1 (Proceedings of the Third Annual Israel
Conference on Operations Research, Tel Aviv, 1969; B. Avi-Itzhak, ed.), Gordon and Breach,
New York, 1971, pp. 29–44.
[1955] A.W. Boldyreﬀ, Determination of the maximal steady state ﬂow of traﬃc through a railroad
network, Journal of the Operations Research Society of America 3 (1955) 443–465.
[1926] O. Borůvka, O jistém problému minimálním [Czech, with German summary; On a minimal
problem], Práce Moravské Přírodovědecké Společnosti Brno [Acta Societatis Scientiarum Naturalium
Moravi[c]ae] 3 (1926) 37–58.
[1960] R.G. Busacker, P.J. Gowen, A Procedure for Determining a Family of Minimum-Cost Network
Flow Patterns, Technical Paper ORO-TP-15, Operations Research Oﬃce, The Johns Hopkins
University, Bethesda, Maryland, 1960.
[1965] Y.-j. Chu, T.-h. Liu, On the shortest arborescence of a directed graph, Scientia Sinica
[Peking] 14 (1965) 1396–1400.
[1959] E.W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik 1
(1959) 269–271.
[1965] J. Edmonds, Minimum partition of a matroid into independent subsets, Journal of Research
National Bureau of Standards Section B 69 (1965) 67–72.
[1967] J. Edmonds, Optimum branchings, Journal of Research National Bureau of Standards Section
B 71 (1967) 233–240 [reprinted in: Mathematics of the Decision Sciences Part 1 (Proceedings
Fifth Summer Seminar on the Mathematics of the Decision Sciences, Stanford, California,
1967; G.B. Dantzig, A.F. Veinott, Jr, eds.) [Lectures in Applied Mathematics Vol. 11],
American Mathematical Society, Providence, Rhode Island, 1968, pp. 346–361].
[1970] J. Edmonds, Submodular functions, matroids, and certain polyhedra, in: Combinatorial Struc-
tures and Their Applications (Proceedings Calgary International Conference on Combinatorial
Structures and Their Applications, Calgary, Alberta, June 1969; R. Guy, H. Hanani, N. Sauer,
J. Schönheim, eds.), Gordon and Breach, New York, 1970, pp. 69–87.
[1965] J. Edmonds, D.R. Fulkerson, Transversals and matroid partition, Journal of Research National
Bureau of Standards Section B 69 (1965) 147–153.
[1972] J. Edmonds, R.M. Karp, Theoretical improvements in algorithmic eﬃciency for network ﬂow
problems, Journal of the Association for Computing Machinery 19 (1972) 248–264.
[1984] A. Ehrenfeucht, V. Faber, H.A. Kierstead, A new method of proving theorems on chromatic
index, Discrete Mathematics 52 (1984) 159–164.
[1976] S. Even, A. Itai, A. Shamir, On the complexity of timetable and multicommodity ﬂow prob-
lems, SIAM Journal on Computing 5 (1976) 691–703.
[1957] G.J. Feeney, The empty boxcar distribution problem, Proceedings of the First International
Conference on Operational Research (Oxford 1957) (M. Davies, R.T. Eddison, T. Page, eds.),
Operations Research Society of America, Baltimore, Maryland, 1957, pp. 250–265.
[1955] A.R. Ferguson, G.B. Dantzig, The problem of routing aircraft, Aeronautical Engineering Re-
view 14 (1955) 51–55.
[1958] L.R. Ford, Jr, D.R. Fulkerson, A suggested computation for maximal multi-commodity net-
work ﬂows, Management Science 5 (1958) 97–101.
[1962] L.R. Ford, Jr, D.R. Fulkerson, Flows in Networks, Princeton University Press, Princeton, New
Jersey, 1962.
[1980] S. Fortune, J. Hopcroft, J. Wyllie, The directed subgraph homeomorphism problem, Theoret-
ical Computer Science 10 (1980) 111–121.
[1917] G. Frobenius, Über zerlegbare Determinanten, Sitzungsberichte der Königlich Preußischen
Akademie der Wissenschaften zu Berlin (1917) 274–277 [reprinted in: Ferdinand Georg Frobenius,
Gesammelte Abhandlungen, Band III (J.-P. Serre, ed.), Springer, Berlin, 1968, pp. 701–704].
[1961] D.R. Fulkerson, An out-of-kilter method for minimal-cost ﬂow problems, Journal of the Society
for Industrial and Applied Mathematics 9 (1961) 18–27.
[1974] D.R. Fulkerson, Packing rooted directed cuts in a weighted directed graph, Mathematical
Programming 6 (1974) 1–13.
[1958] T. Gallai, Maximum-minimum Sätze über Graphen, Acta Mathematica Academiae Scientiarum
Hungaricae 9 (1958) 395–434.
[1959] T. Gallai, Über extreme Punkt- und Kantenmengen, Annales Universitatis Scientiarum
Budapestinensis de Rolando Eötvös Nominatae, Sectio Mathematica 2 (1959) 133–138.
[1935] P. Hall, On representatives of subsets, The Journal of the London Mathematical Society 10
(1935) 26–30 [reprinted in: The Collected Works of Philip Hall (K.W. Gruenberg, J.E. Rose-
blade, eds.), Clarendon Press, Oxford, 1988, pp. 165–169].
[1981] I. Holyer, The NP-completeness of edge-coloring, SIAM Journal on Computing 10 (1981)
718–720.
[1961] T.C. Hu, The maximum capacity route problem, Operations Research 9 (1961) 898–900.
[1963] T.C. Hu, Multi-commodity network ﬂows, Operations Research 11 (1963) 344–360.
[1960] M. Iri, A new method of solving transportation-network problems, Journal of the Operations
Research Society of Japan 3 (1960) 27–87.
[1958] W.S. Jewell, Optimal Flows Through Networks [Sc.D. Thesis], Interim Technical Report No.
8, Army of Ordnance Research: Fundamental Investigations in Methods of Operations Re-
search, Operations Research Center, Massachusetts Institute of Technology, Cambridge, Mas-
sachusetts, 1958 [partially reprinted in Jewell [1966]].
[1975] R.M. Karp, On the computational complexity of combinatorial problems, Networks 5 (1975)
45–68.
[1931] D. Kőnig, Graphok és matrixok [Hungarian; Graphs and matrices], Matematikai és Fizikai
Lapok 38 (1931) 116–119.
[1932] D. Kőnig, Über trennende Knotenpunkte in Graphen (nebst Anwendungen auf Determinanten
und Matrizen), Acta Litterarum ac Scientiarum Regiae Universitatis Hungaricae Francisco-
Josephinae, Sectio Scientiarum Mathematicarum [Szeged] 6 (1932-34) 155–179.
[1984] M.R. Kramer, J. van Leeuwen, The complexity of wire-routing and ﬁnding minimum area
layouts for arbitrary VLSI circuits, in: VLSI-Theory (F.P. Preparata, ed.) [Advances in
Computing Research, Volume 2], JAI Press, Greenwich, Connecticut, 1984, pp. 129–146.
[1956] J.B. Kruskal, Jr, On the shortest spanning subtree of a graph and the traveling salesman
problem, Proceedings of the American Mathematical Society 7 (1956) 48–50.
[1975] L. Lovász, Three short proofs in graph theory, Journal of Combinatorial Theory, Series B 19
(1975) 269–271.
[1975] J.F. Lynch, The equivalence of theorem proving and the interconnection problem, (ACM)
[1960] G.J. Minty, Monotone networks, Proceedings of the Royal Society of London, Series A. Math-
ematical and Physical Sciences 257 (1960) 194–212.
[1961] C.St.J.A. Nash-Williams, Edge-disjoint spanning trees of ﬁnite graphs, The Journal of the
London Mathematical Society 36 (1961) 445–450.
[1964] C.St.J.A. Nash-Williams, Decomposition of ﬁnite graphs into forests, The Journal of the
London Mathematical Society 39 (1964) 12.
[1967] C.St.J.A. Nash-Williams, An application of matroids to graph theory, in: Theory of Graphs —
International Symposium — Théorie des graphes — Journées internationales d’étude (Rome,
1966; P. Rosenstiehl, ed.), Gordon and Breach, New York, and Dunod, Paris, 1967, pp. 263–
265.
[1985] C.St.J.A. Nash-Williams, Connected detachments of graphs and generalized Euler trails, The
Journal of the London Mathematical Society (2) 31 (1985) 17–29.
[1968] A.R.D. Norman, M.J. Dowling, Railroad Car Inventory: Empty Woodrack Cars on the Louisville
and Nashville Railroad, Technical Report 320-2926, IBM New York Scientiﬁc Center, New
York, 1968.
[1981] H. Okamura, P.D. Seymour, Multicommodity ﬂows in planar graphs, Journal of Combinatorial
Theory, Series B 31 (1981) 75–81.
[1957] R.C. Prim, Shortest connection networks and some generalizations, The Bell System Technical
Journal 36 (1957) 1389–1401.
[1942] R. Rado, A theorem on independence relations, The Quarterly Journal of Mathematics (Ox-
ford) (2) 13 (1942) 83–89.
[1965] J.W.H.M.T.S.J. van Rees, Een studie omtrent de circulatie van materieel, Spoor- en Tramwe-
gen 38 (1965) 363–367.
[1995] N. Robertson, P.D. Seymour, Graph minors. XIII. The disjoint paths problem, Journal of
Combinatorial Theory, Series B 63 (1995) 65–110.
[1966] B. Rothschild, A. Whinston, Feasibility of two commodity network ﬂows, Operations Research
14 (1966) 1121–1129.
[1973] M. Sakarovitch, Two commodity network ﬂows and linear programming, Mathematical Pro-
gramming 4 (1973) 1–20.
[1994] A. Schrijver, Finding k disjoint paths in a directed planar graph, SIAM Journal on Computing
23 (1994) 780–788.
[1977] R.E. Tarjan, Finding optimum branchings, Networks 7 (1977) 25–35.
[1947] W.T. Tutte, The factorization of linear graphs, The Journal of the London Mathematical
Society 22 (1947) 107–111.
[1961] W.T. Tutte, On the problem of decomposing a graph into n connected factors, The Journal
of the London Mathematical Society 36 (1961) 221–230 [reprinted in: Selected Papers of W.T.
Tutte Volume I (D. McCarthy, R.G. Stanton, eds.), The Charles Babbage Research Centre,
St. Pierre, Manitoba, 1979, pp. 214–225].
[1964] V.G. Vizing, Ob otsenke khromaticheskogo klassa p-grafa [Russian; On an estimate of the
chromatic class of a p-graph], Diskretnyĭ Analiz 3 (1964) 25–30.
[1965] V.G. Vizing, Khromaticheskiĭ klass mul’tigrafa [Russian], Kibernetika [Kiev] 1965:3 (1965)
29–39 [English translation: The chromatic class of a multigraph, Cybernetics 1:3 (1965) 32–41].
[1979] M. Voorhoeve, A lower bound for the permanents of certain (0, 1)-matrices, Indagationes
Mathematicae 41 (1979) 83–86.
[1976] D.J.A. Welsh, Matroid Theory, Academic Press, London, 1976.
[1969] W.W. White, A.M. Bomberault, A network algorithm for empty freight car allocation, IBM
Systems Journal 8 (1969) 147–169.
Name index
Ahuja, R.K. 51,57
Anderson, I. 10,57
Bartlett, T.E. 49,57
Berge, C. 1,9-11
Bock, F.C. 6,57
Boldyreﬀ, A.W. 49,57
Bomberault, A.M. 49,60
Boruvka, O. 3,57
Busacker, R.G. 51,57
Chu, Y.-j. 6,57
Dantzig, G.B. 49,58
Dijkstra, E.W. 3-4,57
Dinits, E.A. 21,57
Dowling, M.J. 49,59
Edmonds, J.R. 5-6,31,33-36,38-40,43,51,57
Ehrenfeucht, A. 12,57
Even, S. 17,21,58
Faber, V. 12,57
Farkas, Gy. 18
Feeney, G.J. 49,58
Ferguson, A.R. 49,58
Ford, Jr, L.R. 16,51,53,58
Fortune, S. 17,58
Frobenius, F.G. 8,58
Fulkerson, D.R. 6,16,34-35,51,53,57-58
Gallai, T. 1,7,11,58
Gowen, P.J. 51,57
Hall, P. 9,58
Holyer, I. 13,58
Hopcroft, J.E. 17,58
Hu, T.C. 4,17,19,21,58
Iri, M. 51,58
Itai, A. 17,21,58
Jewell, W.S. 51,58
Karp, R.M. 17,51,57-58
Karzanov, A.V. 21,57
Knuth, D.E. 17
König, D. 1,8,27,30,34
Kőnig, D. 8,58-59
Kramer, M.R. 17,59
Kruskal, Jr, J.B. 3-4,22,59
Leeuwen, J. van 17,59
Liu, T.-h. 6,57
Lovász, L. 10,59
Lynch, J.F. 17,59
Magnanti, T.L. 51,57
Minty, G.J. 51,59
Nash-Williams, C.St.J.A. 34-35,59
Norman, A.R.D. 49,59
Okamura, H. 17,59
Orlin, J.B. 51,57
Prim, R.C. 3-4,59
Rees, J.W.H.M.T.S.J. van 49,59
Robertson, G.N. 17,59
Rothschild, B. 17,21,59
Sakarovitch, M. 19,59
Schrijver, A. 18,59
Seymour, P.D. 17,59
Shamir, A. 17,21,58
Tarjan, R.E. 6,60
Tutte, W.T. 1,9-11,35,60
Vizing, V.G. 1,12,60
Voorhoeve, M. 44,60
Welsh, D.J.A. 27,60
Whinston, A. 17,21,59
White, W.W. 49,60
Wyllie, J. 17,58
Subject index
arborescence
r- 5
arc-disjoint paths problem 16-18
b-detachment 35
basis in a matroid 23
bipartite matching 8-9,31
cardinality 8-9
branching 5
r- 5
cardinality bipartite matching 8-9
cardinality common independent set algorithm
32-33
cardinality common independent set
augmenting algorithm 32-33
cardinality common independent set problem
31-35
cardinality matching 8-11
cardinality matroid intersection 31-35
cardinality matroid intersection algorithm
32-33
cardinality nonbipartite matching 9-11
circuit basis 27
circuit in a matroid 24
coclique 7
coclique number 7
cocycle matroid 25
cographic matroid 25-26,29
colour 12
edge- 12
colourable
k-edge- 12
colouring
edge- 12
k-edge- 12
colouring number
edge- 12
commodity ﬂow problem
fractional k- 15-18
integer k- 15-18
integer undirected k- 15-18
k- 15-18
undirected k- 15-18
common independent set 31
common independent set algorithm
cardinality 32-33
weighted 36-38
common independent set augmenting
algorithm
cardinality 32-33
weighted 36-38
common independent set problem 31-39
cardinality 31-35
weighted 36-39
common SDR 31,34
common system of distinct representatives
31,34
common transversal 31,34
component
even 9
inverting 13
odd 9
satisfaction-testing 13
variable-setting 13
contraction in a matroid 28-29
cover
edge 7
vertex 7
cover number
edge 7,11
vertex 7
cut condition 16,18
cycle matroid 25
deletion in a matroid 28-29
dependent set in a matroid 23
detachment
b- 35
Dijkstra-Prim method 3
disjoint paths problem
arc- 16-18
edge- 16-18
vertex- 16-18
disjoint spanning trees problem
edge- 31,35
distinct representatives
system of 9,34
dual matroid 27-29
dual of a matroid 29
edge-colour 12
edge-colourable
k- 12
edge-colouring 12
k- 12
edge-colouring number 12
edge cover 7
edge cover number 7,11
edge cover theorem
König’s 8
edge-disjoint paths problem 16-18
edge-disjoint spanning trees problem 31,35
Edmonds’ matroid intersection theorem 34,43
Euler condition 21
even component 9
factor
1- 9-11
factor theorem
Tutte’s 1- 9-10
ﬂow problem
fractional k-commodity 15-18
fractional multicommodity 15-18
integer k-commodity 15-18
integer multicommodity 15-18
integer undirected k-commodity 15-18
integer undirected multicommodity
15-18
k-commodity 15-18
multicommodity 15-18
undirected k-commodity 15-18
undirected multicommodity 15-18
fractional k-commodity ﬂow problem 15-18
fractional multicommodity ﬂow problem
15-18
Gallai’s theorem 7
good forest 5
graphic matroid 25,29
greedy algorithm 22-25
greedy forest 3
Hall’s marriage theorem 9
Hu’s two-commodity ﬂow theorem 19-21
independent set algorithm
cardinality common 32-33
weighted common 36-38
independent set augmenting algorithm
cardinality common 32-33
independent set in a matroid 23
independent set problem
cardinality common 31-35
common 31-39
weighted common 36-39
integer k-commodity ﬂow problem 15-18
integer multicommodity ﬂow problem 15-18
integer undirected k-commodity ﬂow problem
15-18
integer undirected multicommodity ﬂow
problem 15-18
inverting component 13
k-commodity ﬂow problem 15-18
fractional 15-18
integer 15-18
integer undirected 15-18
undirected 15-18
k-edge-colourable 12
k-edge-colouring 12
k-truncation of a matroid 29
k-uniform matroid 29
König’s edge cover theorem 8
König’s matching theorem 8,34
Kruskal’s method 3-4
linear matroid 26,29
marriage theorem
Hall’s 9
matching 7-11
bipartite 8-9,31
cardinality 8-11
cardinality bipartite 8-9
cardinality nonbipartite 9-11
nonbipartite 9-11
perfect 7,9-11
matching number 7,10
matching theorem
König’s 8,34
matroid 22-43
matroid intersection 31-39
cardinality 31-35
weighted 36-39
matroid intersection algorithm
cardinality 32-33
weighted 36-38
matroid intersection theorem
Edmonds’ 34,43
matroid polytope 39-42
max-biﬂow min-cut theorem 21
maximum reliability problem 4
minimum spanning tree 3-5
minor of a matroid 28
multicommodity ﬂow problem 15-18
fractional 15-18
integer 15-18
integer undirected 15-18
64 Subject index
undirected 15-18
nonbipartite matching 9-11
cardinality 9-11
odd component 9
1-factor 9-11
1-factor theorem
Tutte’s 9-10
output pairs 13
parallel elements in a matroid 24
partition matroid 27
perfect matching 7,9-11
polytope
matroid 39-42
r-arborescence 5
r-branching 5
rank of a matroid 23
reliability problem
maximum 4
representatives
system of distinct 9,34
restriction in a matroid 28
root of a branching 5
rooted tree 31,39
routing
VLSI- 18
satisfaction-testing component 13
SDR 9,34
spanning tree
minimum 3-5
spanning trees problem
edge-disjoint 31,35
system of distinct representatives 9,34
transversal 9,34
transversal matroid 26-27
tree
minimum spanning 3-5
trees problem
edge-disjoint spanning 31,35
truncation of a matroid 29
k- 29
Tutte-Berge formula 10
Tutte’s 1-factor theorem 9-10
two-commodity ﬂow theorem
Hu’s 19-21
undirected k-commodity ﬂow problem 15-18
integer 15-18
undirected multicommodity ﬂow problem
15-18
integer 15-18
uniform matroid 29
k- 29
union of matroids 34
variable-setting component 13
vertex cover 7
vertex cover number 7
vertex-disjoint paths problem 16-18
VLSI-routing 18
weighted common independent set algorithm
36-38
weighted common independent set augmenting
algorithm 36-38
weighted common independent set problem
36-39
weighted matroid intersection 36-39
weighted matroid intersection algorithm
36-38

Uk = V. In that case the selected edges form a spanning tree T in G. This tree has minimum length, which can be seen as follows. Call a forest F greedy if there exists a minimum-length spanning tree T of G that contains F.

Theorem 1.1. Let F be a greedy forest, let U be one of its components, and let e ∈ δ(U). If e has minimum length among all edges in δ(U), then F ∪ {e} is again a greedy forest.

Proof. Let T be a minimum-length spanning tree containing F. Let P be the unique path in T between the end vertices of e. Then P contains at least one edge f that belongs to δ(U). So T' := (T \ {f}) ∪ {e} is a tree again. By assumption, l(e) ≤ l(f) and hence l(T') ≤ l(T). Therefore, T' is a minimum-length spanning tree. As F ∪ {e} ⊆ T', it follows that F ∪ {e} is greedy.

Corollary 1.1a. The Dijkstra-Prim method yields a spanning tree of minimum length.

Proof. It follows inductively with Theorem 1.1 that at each stage of the algorithm we have a greedy forest. Hence the final tree is greedy, that is, it has minimum length.

The Dijkstra-Prim method is an example of a so-called greedy algorithm. We construct a spanning tree by throughout choosing an edge that seems the best at the moment. Finally we get a minimum-length spanning tree.
Once an edge has been chosen, we never have to replace it by another edge (no ‘back-tracking’). There is a slightly different method of finding a minimum-length spanning tree, Kruskal’s method (Kruskal [1956]). It is again a greedy algorithm, and again iteratively edges e1, e2, . . . are chosen, but by some different rule.
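Both greedy rules are short enough to sketch in code. The following is a minimal Python sketch; the graph representation (edges as frozensets mapped to lengths), the function names, and the example are ours, not from the text. `prim` grows a tree from a chosen vertex, always adding a minimum-length edge with exactly one end in the tree so far, while `kruskal` scans the edges by increasing length and keeps an edge whenever it joins two different components of the forest built so far (tracked with a union-find structure).

```python
import heapq

def prim(vertices, edges, start):
    """Dijkstra-Prim: grow a tree from `start`, repeatedly adding a cheapest
    edge with exactly one end in the tree built so far."""
    adj = {v: [] for v in vertices}
    for e, l in edges.items():
        u, w = tuple(e)
        adj[u].append((l, w))
        adj[w].append((l, u))
    seen, tree = {start}, []
    heap = [(l, start, w) for l, w in adj[start]]
    heapq.heapify(heap)
    while heap and len(seen) < len(vertices):
        l, u, w = heapq.heappop(heap)
        if w in seen:
            continue                      # both ends already in the tree
        seen.add(w)
        tree.append(frozenset({u, w}))
        for l2, x in adj[w]:
            if x not in seen:
                heapq.heappush(heap, (l2, w, x))
    return tree

def kruskal(vertices, edges):
    """Kruskal: scan edges by increasing length, keeping those that join
    two different components (union-find with path halving)."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for e in sorted(edges, key=edges.get):
        u, w = tuple(e)
        ru, rw = find(u), find(w)
        if ru != rw:
            parent[ru] = rw
            tree.append(e)
    return tree
```

On a connected graph both return spanning trees of the same minimum total length, though not necessarily the same tree.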
1 δ(U) is the set of edges e satisfying |e ∩ U| = 1.
Suppose that, for some k ≥ 0, edges e1, . . . , ek have been chosen. Choose an edge ek+1 such that {e1, . . . , ek, ek+1} forms a forest, with l(ek+1) as small as possible. By the connectedness of G we can (starting with k = 0) iterate this until the selected edges form a spanning tree of G.

Corollary 1.1b. Kruskal’s method yields a spanning tree of minimum length.

Proof. Again directly from Theorem 1.1.

In a similar way one finds a maximum-length spanning tree.

Application 1.1: Minimum connections. There are several obvious practical situations where finding a minimum-length spanning tree is important, for instance, when designing a road system, electrical power lines, telephone lines, pipe lines, wire connections on a chip. Also when clustering data, say in taxonomy, archeology, or zoology, finding a minimum spanning tree can be helpful.

[Figure 1.1: Minimum connections]

Application 1.2: The maximum reliability problem. Often in designing a network one is not primarily interested in minimizing length, but rather in maximizing ‘reliability’ (for instance when designing energy or communication networks). Certain cases of this problem can be seen as finding a maximum-length spanning tree, as was observed by Hu [1961]. We give a mathematical description.

Let G = (V, E) be a graph and let s : E −→ R+ be a function. Let us call s(e) the strength of edge e. For any path P in G, the reliability of P is, by definition, the minimum strength of the edges occurring in P. The reliability rG(u, v) of two vertices u and v is equal to the maximum reliability of P, where P ranges over all paths from u to v.

Let T be a spanning tree of maximum strength, i.e., with ∑e∈ET s(e) as large as possible. (Here ET is the set of edges of T.) Now T has the same reliability as G, for each pair of vertices u, v. That is:

(2) rT(u, v) = rG(u, v) for each u, v ∈ V.

We leave the proof as an exercise (Exercise 1.5). So T can be found with any maximum spanning tree algorithm.

Exercises

1.1. Find, both with the Dijkstra-Prim algorithm and with Kruskal’s algorithm, a spanning tree of minimum length in the graph in Figure 1.2.

[Figure 1.2]

or an r-branching (or r-arborescence) if (V. l(uw) is equal to the minimum length of the edges in the unique u − w path in T . Finding optimum branchings Let D = (V. 1. say B . A0 ). with l (B ) ≤ l (B) − l (a0 ) ≤ l (B). A) be a directed graph. Since K is a strong component of (V. Prove (2). Let G = (V. Show that for all u. and a length function l : A −→ Q+ . v.5. A0 ) such that s ∈ K and such that l(a) > 0 for each a ∈ δ in (K). A ) is a rooted tree with root r.2. we can choose B so that |B ∩ δ in (K)| = 1.2. Let ε := min{l(a)|a ∈ δ in (K)}. A). l(vw)} for all distinct u. Let A0 := {a ∈ A|l(a) = 0}.) Let T be a good forest and e be an edge not in T such that T ∪ {e} is a forest and such that l(e) is as small as possible. for each s ∈ V .2. a root s. Let T be a longest spanning tree in G. there is a strong component K of (V. a minimum-length s-arborescence can be found as follows (Edmonds [1967]). Set l (a) := l(a) − ε if a ∈ δ in (K) and l (a) := l(a) otherwise. (ET is the set of edges of T . (B \ {a0 }) ∩ A0 contains an s-arborescence. with root r. since if B ∩ δ in (K)| ≥ 2.Section 1. Show that T ∪ {e} is good again. D contains an r-branching if and only if each vertex of D is reachable in D from r. E) be a complete graph and let l : E −→ R+ be a length function satisfying l(uw) ≥ min{l(uv). Thus if A is an r-branching then there is exactly one r − s path in A .4. Finding optimum branchings 5 1. 
Find a spanning tree of minimum length between the cities given in the following distance table: Amersfoort Amsterdam Apeldoorn Arnhem Assen Bergen op Zoom Breda Eindhoven Enschede ’s-Gravenhage Groningen Haarlem Den Helder ’s-Hertogenbosch Hilversum Leeuwarden Maastricht Middelburg Nijmegen Roermond Rotterdam Utrecht Winterswijk Zutphen Zwolle Ame 0 47 47 46 139 123 86 111 114 81 164 67 126 73 18 147 190 176 63 141 78 20 109 65 70 Ams 47 0 89 92 162 134 100 125 156 57 184 20 79 87 30 132 207 175 109 168 77 40 151 107 103 Ape 47 89 0 25 108 167 130 103 71 128 133 109 154 88 65 129 176 222 42 127 125 67 66 22 41 Arn 46 92 25 0 132 145 108 78 85 116 157 112 171 63 64 154 151 200 17 102 113 59 64 31 66 Ass 139 162 108 132 0 262 225 210 110 214 25 182 149 195 156 68 283 315 149 234 217 159 143 108 69 BoZ 123 134 167 145 262 0 37 94 230 83 287 124 197 82 119 265 183 59 128 144 57 103 209 176 193 Bre 86 100 130 108 225 37 0 57 193 75 250 111 179 45 82 228 147 96 91 107 49 66 172 139 156 Ein 111 125 103 78 210 94 57 0 163 127 235 141 204 38 107 232 125 153 61 50 101 91 142 109 144 Ens 114 156 71 85 110 230 193 163 0 195 135 176 215 148 132 155 237 285 102 187 192 134 40 54 71 s-G 81 57 128 116 214 83 75 127 195 0 236 41 114 104 72 182 162 124 133 177 26 61 180 146 151 Gro 164 184 133 157 25 287 250 235 135 236 0 199 147 220 178 58 309 340 174 259 242 184 168 133 94 Haa 67 20 109 112 182 124 111 141 176 41 199 0 73 103 49 141 226 165 130 184 67 56 171 127 123 DH 126 79 154 171 149 197 179 204 215 114 147 73 0 166 109 89 289 238 188 247 140 119 220 176 144 s-H 73 87 88 63 195 82 45 38 148 104 220 103 166 0 69 215 123 141 46 81 79 53 127 94 129 Hil 18 30 65 64 156 119 82 107 132 72 178 49 109 69 0 146 192 172 81 150 74 16 127 83 88 Lee 147 132 129 154 68 265 228 232 155 182 58 141 89 215 146 0 306 306 171 256 208 162 183 139 91 Maa 190 207 176 151 283 183 147 125 237 162 309 226 289 123 192 306 0 243 135 50 191 176 213 183 218 Mid 176 175 222 200 315 59 96 153 285 124 340 
165 238 141 172 306 243 0 187 203 98 156 264 231 246 Nij 63 109 42 17 149 128 91 61 102 133 174 130 188 46 81 171 135 187 0 85 111 76 81 48 83 Roe 141 168 127 102 234 144 107 50 187 177 259 184 247 81 150 256 50 203 85 0 151 134 166 133 168 Rot 78 77 125 113 217 57 49 101 192 26 242 67 140 79 74 208 191 98 111 151 0 58 177 143 148 Utr 20 40 67 59 159 103 66 91 134 61 184 56 119 53 16 162 176 156 76 134 58 0 123 85 90 Win 109 151 66 64 143 209 172 142 40 180 168 171 220 127 127 183 213 264 81 166 177 123 0 44 92 Zut 65 107 22 31 108 176 139 109 54 146 133 127 176 94 83 139 183 231 48 133 143 85 44 0 48 Zwo 70 103 41 66 69 193 156 114 71 151 94 123 144 129 88 91 218 246 83 168 148 90 92 48 0 1. then B is a minimumlength s-arborescence. w ∈ V . Given a directed graph D = (V. Let G = (V. E) be a graph and let l : E −→ R be a ‘length’ function. then for each a0 ∈ B ∩ δ in (K). 1. 1. If A0 contains an s-arborescence B. Find (recursively) a minimum-length s-arborescence B with respect to l . Moreover. w ∈ V .3. . A subset S of A is called an branching. If A0 does not contain an s-arborescence. Call a forest T good if l(ET ) ≥ l(ET ) for each forest T satisfying |ET | = |ET |.

See above.3. For more on algorithms for optimum branchings. Proof. A0 ) from s. In fact. Fulkerson [1974]. we can order the vertices in U pre-topologically. since if K. Proof. Let K be the collection of components K to which we applied the algorithm in some iteration. An optimum s-arborescence can be found in polynomial time. or K ⊆ L. Moreover. some arc leaving K has length 0. we have: Theorem 1. First note that there are at most 2n − 3 iterations. L ∈ K then K ∩ L = ∅. since for any sarborescence C: (3) l(C) = l (C) + ε|C ∩ δ in (K)| ≥ l (C) + ε ≥ l (B) + ε = l(B). direct analysis gives: Theorem 1. Then the ﬁrst vertex in this order belongs to a strong component K so that each arc a entering K has l(a) > 0. in time O(m).2. Bock [1971]. Next. to any K ∈ K.6 Chapter 1. or L ⊆ K. Tarjan [1977]. Chu and Liu [1965]. Since the number of iterations is O(m). This can be seen as follows. Indeed. A0 ) induced by U . each iteration can be performed in time O(m). Then |K| ≤ 2n − 3. see Edmonds [1967]. in time O(m) we can identify the set U of vertices not reachable in (V. . one can identify the strong components of the subgraph of (V. the iteration is applied exactly once. Moreover. Shortest trees and branchings Then B is also a minimum-length s-arborescence with respect to l. since after application. Next. An optimum s-arborescence can be found in time O(nm).
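In substance, the recursive length-reduction above is the cycle-contraction algorithm for minimum arborescences of Chu and Liu [1965] and Edmonds [1967]. The compact Python sketch below is ours, not from the text; it returns only the minimum total length, and it assumes every vertex is reachable from the root.

```python
def min_arborescence_length(vertices, root, arcs):
    """Minimum total length of an arborescence rooted at `root`
    (Chu-Liu/Edmonds cycle contraction). arcs: (tail, head, length) triples;
    every vertex is assumed reachable from the root."""
    vertices, arcs, total = list(vertices), list(arcs), 0
    while True:
        # cheapest arc entering each non-root vertex
        best = {}
        for u, v, w in arcs:
            if v != root and u != v and (v not in best or w < best[v][0]):
                best[v] = (w, u)
        total += sum(w for w, _ in best.values())
        # look for a circuit among the chosen arcs
        visited, cycle = set(), None
        for start in vertices:
            path, v = [], start
            while v not in visited and v in best:
                visited.add(v)
                path.append(v)
                v = best[v][1]
            if v in path:                 # walked back into the current path
                cycle = set(path[path.index(v):])
                break
        if cycle is None:
            return total
        # contract the circuit into one new vertex; arcs entering it
        # get reduced lengths, mirroring the l' of the text
        c = ("contracted", len(vertices))
        new_arcs = []
        for u, v, w in arcs:
            nu = c if u in cycle else u
            nv = c if v in cycle else v
            if nu != nv:
                nw = w - best[v][0] if v in cycle else w
                new_arcs.append((nu, nv, nw))
        vertices = [x for x in vertices if x not in cycle] + [c]
        arcs = new_arcs
```

Contracting a circuit of chosen arcs and recursing on reduced lengths plays the role of choosing B with exactly one arc entering K in the argument above.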

2. Matchings and covers

2.1. Matchings, covers, and Gallai’s theorem

Let G = (V, E) be a graph. A matching is a subset M of E such that e ∩ e' = ∅ for all e, e' ∈ M with e ≠ e'. A matching is called perfect if it covers all vertices (that is, has size |V|/2). A vertex cover is a subset W of V such that e ∩ W ≠ ∅ for each edge e of G. An edge cover is a subset F of E such that for each vertex v there exists e ∈ F satisfying v ∈ e. Note that an edge cover can exist only if G has no isolated vertices. A coclique is a subset C of V such that e ⊈ C for each edge e of G.

It is not difficult to show that for each U ⊆ V:

(1) U is a coclique ⇐⇒ V \ U is a vertex cover.

Define:

(2) α(G) := max{|C| : C is a coclique},
ρ(G) := min{|F| : F is an edge cover},
τ(G) := min{|W| : W is a vertex cover},
ν(G) := max{|M| : M is a matching}.

These numbers are called the coclique number, the edge cover number, the vertex cover number, and the matching number of G, respectively. It is not difficult to show that:

(3) α(G) ≤ ρ(G) and ν(G) ≤ τ(G).

The triangle K3 shows that strict inequalities are possible. In fact, equality in one of the relations (3) implies equality in the other, as Gallai [1958, 1959] proved:

Theorem 2.1 (Gallai’s theorem). For any graph G = (V, E) without isolated vertices one has

(4) α(G) + τ(G) = |V| = ν(G) + ρ(G).

Proof. The first equality follows directly from (1). To see the second equality, first let M be a matching of size ν(G). For each of the |V| − 2|M| vertices v missed by M, add to M an edge covering v. We obtain an edge cover of size |M| + (|V| − 2|M|) = |V| − |M|. Hence ρ(G) ≤ |V| − ν(G).

Second, let F be an edge cover of size ρ(G). For each v ∈ V, delete from F dF(v) − 1 edges incident with v. We obtain a matching of size at least |F| − ∑v∈V (dF(v) − 1) = |F| − (2|F| − |V|) = |V| − |F|. Hence ν(G) ≥ |V| − ρ(G).

This proof also shows that if we have a matching of maximum cardinality in any graph G, then we can derive from it a minimum cardinality edge cover, and conversely.
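The second half of the proof is constructive: from any matching one obtains an edge cover with at most |V| − |M| edges, and starting from a maximum matching this attains ρ(G) = |V| − ν(G). A small Python sketch of that construction (representation and names ours):

```python
def matching_to_edge_cover(vertices, edges, matching):
    """Gallai's construction: extend a matching to an edge cover by adding,
    for each vertex missed by the matching, an arbitrary edge covering it.
    Assumes the graph has no isolated vertices."""
    cover = set(matching)
    matched = {v for e in matching for v in e}
    for v in vertices:
        if v not in matched:
            cover.add(next(e for e in edges if v in e))
    return cover
```

On the path a-b-c with maximum matching {ab}, the construction adds an edge covering c and returns the edge cover {ab, bc} of size |V| − ν(G) = 2.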

2.2. König’s theorems
A classical min-max relation due to Kőnig [1931] (extending a result of Frobenius [1917]) characterizes the maximum size of a matching in a bipartite graph:

Theorem 2.2 (König’s matching theorem). For any bipartite graph G = (V, E) one has

(5) ν(G) = τ(G).

That is, the maximum cardinality of a matching in a bipartite graph is equal to the minimum cardinality of a vertex cover.

Proof. Let G = (V, E) be a bipartite graph, with colour classes U and W, say. By (3) it suffices to show that ν(G) ≥ τ(G), which we do by induction on |V|. We distinguish two cases.

Case 1: There exists a vertex cover C with |C| = τ(G) intersecting both U and W. Let U' := U ∩ C, U'' := U \ C, W' := W \ C and W'' := W ∩ C. Let G' and G'' be the subgraphs of G induced by U' ∪ W' and U'' ∪ W'' respectively.

We show that τ(G') ≥ |U'|. Let K be a vertex cover of G' of size τ(G'). Then K ∪ W'' is a vertex cover of G, since K intersects all edges of G that are contained in U' ∪ W' and W'' intersects all edges of G that are not contained in U' ∪ W' (since each edge intersects C = U' ∪ W''). So |K ∪ W''| ≥ τ(G) = |U'| + |W''| and hence |K| ≥ |U'|. So τ(G') ≥ |U'|. It follows by our induction hypothesis that G' contains a matching of size |U'|. Similarly, G'' contains a matching of size |W''|. Combining the two matchings we obtain a matching of G of size |U'| + |W''| = τ(G).

Case 2: There exists no such vertex cover C. Let e = uw be any edge of G. Let G' be the subgraph of G induced by V \ {u, w}. We show that τ(G') ≥ τ(G) − 1. Suppose to the contrary that G' contains a vertex cover K of size τ(G) − 2. Then C := K ∪ {u, w} would be a vertex cover of G of size τ(G) intersecting both U and W, a contradiction. So τ(G') ≥ τ(G) − 1, implying by our induction hypothesis that G' contains a matching M of size τ(G) − 1. Hence M ∪ {e} is a matching of G of size τ(G).

Combination of Theorems 2.1 and 2.2 yields the following result of Kőnig [1932].

Corollary 2.2a (König’s edge cover theorem). For any bipartite graph G = (V, E), without isolated vertices, one has

(6) α(G) = ρ(G).

That is, the maximum cardinality of a coclique in a bipartite graph is equal to the minimum cardinality of an edge cover. Proof. Directly from Theorems 2.1 and 2.2, as α(G) = |V | − τ (G) = |V | − ν(G) = ρ(G).
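A maximum-cardinality matching in a bipartite graph can be computed with the standard augmenting-path scheme; the minimal Python sketch below is ours, not an algorithm from the text. On the example, the returned value 2 equals the size of the minimum vertex cover {a, b}, as König’s matching theorem predicts.

```python
def max_bipartite_matching(left, adj):
    """Maximum matching via augmenting paths.
    adj maps each left vertex to the list of its right neighbours."""
    match_of = {}                  # right vertex -> its matched left vertex
    def augment(u, seen):
        # try to match u, possibly re-routing previously matched vertices
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                if w not in match_of or augment(match_of[w], seen):
                    match_of[w] = u
                    return True
        return False
    return sum(augment(u, set()) for u in left)
```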

Exercises

2.1. (i) Prove that a k-regular bipartite graph has a perfect matching (if k ≥ 1).
(ii) Derive that a k-regular bipartite graph has k disjoint perfect matchings.
(iii) Give for each k > 1 an example of a k-regular graph not having a perfect matching.

2.2. Prove that in a matrix, the maximum number of nonzero entries with no two in the same line (= row or column) is equal to the minimum number of lines that include all nonzero entries.

2.3. Let A = (A1, . . . , An) be a family of subsets of some finite set X. A subset Y of X is called a transversal or a system of distinct representatives (SDR) of A if there exists a bijection π : {1, . . . , n} −→ Y such that π(i) ∈ Ai for each i = 1, . . . , n. Decide if the following collections have an SDR:
(i) {3, 4, 5}, {2, 5, 6}, {1, 2, 5}, {1, 2, 3}, {1, 3, 6};
(ii) {1, 2, 3, 4, 5, 6}, {1, 3, 4}, {1, 4, 7}, {2, 3, 5, 6}, {3, 4, 7}, {1, 3, 4, 7}, {1, 3, 7}.

2.4. Let A = (A1, . . . , An) be a family of subsets of some finite set X. Prove that A has an SDR if and only if

(7) |∪i∈I Ai| ≥ |I|

for each subset I of {1, . . . , n}. [Hall’s ‘marriage’ theorem (Hall [1935]).]
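Condition (7) can be checked by brute force on small families, such as those of Exercise 2.3; the function name and representation below are ours.

```python
from itertools import combinations

def has_sdr(family):
    """Check Hall's condition (7): every subfamily must cover at least as
    many elements as it has sets. By Hall's theorem this decides whether
    an SDR exists (exponential; small families only)."""
    n = len(family)
    return all(len(set().union(*(family[i] for i in I))) >= len(I)
               for k in range(1, n + 1)
               for I in combinations(range(n), k))
```

For instance, in the second collection of Exercise 2.3 the five sets {1,3,4}, {1,4,7}, {3,4,7}, {1,3,4,7}, {1,3,7} together cover only {1,3,4,7}, so (7) fails there.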

2.3. Tutte’s 1-factor theorem and the Tutte-Berge formula

A basic result on matchings was found by Tutte [1947]. It characterizes graphs that have a perfect matching. A perfect matching (or a 1-factor) is a matching M covering all vertices of the graph. In order to give Tutte’s characterization, let, for each subset U of the vertex set V of a graph G,

(8) o(U) := number of odd components of the subgraph G|U of G induced by U.

Here a component is odd (even, respectively) if it has an odd (even) number of vertices. An important inequality is that for each matching M and each subset U of V one has

(9) |M| ≤ (|V| + |V \ U| − o(U))/2.

This follows from the fact that at most (|U| − o(U))/2 edges of M are contained in U, while at most |V \ U| edges of M intersect V \ U. So |M| ≤ (|U| − o(U))/2 + |V \ U|, implying (9). It will turn out that there is always a matching M and a subset U of V attaining equality in (9).

Theorem 2.3 (Tutte’s 1-factor theorem). A graph G = (V, E) has a perfect matching if and only if (10) o(U ) ≤ |V \ U | for each U ⊆ V .

Proof. Necessity of (10) follows directly from (9). To see suﬃciency, suppose there exist graphs G = (V, E) satisfying the condition, but not having a perfect matching. Fixing V , take G such that G is simple and |E| is as large as possible. Let U := {v ∈ V |v is nonadjacent to at least one other vertex of G}. We show: (11) if a, b, c ∈ U and ab, bc ∈ E then ac ∈ E.

For suppose ac ∉ E. By the maximality of |E|, adding ac to E makes that G has a perfect matching (condition (10) is maintained under adding edges). So G has a matching M missing only a and c. As b ∈ U, there exists a d with bd ∉ E. Again by the maximality of |E|, G has a matching N missing only b and d. Now each component of M △ N contains the same number of edges in M as in N (otherwise there would exist an M- or N-augmenting path, and hence a perfect matching in G, a contradiction). So the component P of M △ N containing d is a path starting in d, with first edge in M and last edge in N, and hence ending in a or c; by symmetry we may assume it ends in a. Moreover, P does not traverse b. Then extending P by the edge ab gives an N-augmenting path, and hence a perfect matching in G, a contradiction. This shows (11).

By (11), each component of G|U is a complete graph. Moreover, by (10), G|U has at most |V \ U| odd components. This implies that G has a perfect matching, contradicting our assumption.

This proof is due to Lovász [1975]. For another proof, see Anderson [1971].

We derive from Tutte’s 1-factor theorem a min-max formula for the maximum cardinality of a matching in a graph, the Tutte-Berge formula.

Corollary 2.3a (Tutte-Berge formula). For each graph G = (V, E),

(12) ν(G) = min_{U ⊆ V} (|V| + |V \ U| − o(U))/2.

Proof. (9) implies that ≤ holds in (12). To see the reverse inequality, let m be the minimum value in (12). Extend G by a set W of |V| − 2m new vertices, so that each vertex in W is adjacent to each vertex in V ∪ W. This makes the graph G'. If G' has a perfect matching M, then at most |V| − 2m edges in M intersect W. Deleting these edges gives a matching M' in G with |M'| ≥ |M| − (|V| − 2m) = (|V| + |W|)/2 − (|V| − 2m) = m.

So we may assume that G' does not have a perfect matching. Then by Tutte’s 1-factor theorem, there is a subset U of V ∪ W such that the subgraph G'|U of G' induced by U has more than |(V ∪ W) \ U| odd components. If U intersects W, G'|U has only one component, and hence |(V ∪ W) \ U| = 0, that is, U = V ∪ W. But then o(U) = 0 since |V ∪ W| is even. So U ∩ W = ∅, giving o(U) ≤ |V| + |V \ U| − 2m = |(V ∪ W) \ U|, a contradiction.

Stating this corollary differently, each graph G = (V, E) has a matching M and a subset U of V having equality in (9). So M is a maximum-size matching and U attains the minimum in (12). In the following sections we will show how to find such M and U algorithmically. It yields an alternative proof of the results in this section.
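On tiny graphs the formula can be verified exhaustively. The sketch below is ours (exponential, for illustration only): it computes ν(G) directly and as the minimum in (12), using the text’s convention that o(U) counts odd components of the subgraph induced by U.

```python
from itertools import combinations

def max_matching_size(edges):
    """nu(G) by exhaustive search; edges are frozensets of two vertices."""
    best = 0
    for k in range(1, len(edges) + 1):
        if any(all(e.isdisjoint(f) for e, f in combinations(M, 2))
               for M in combinations(edges, k)):
            best = k
    return best

def odd_components(edges, U):
    """Number of odd components of the subgraph induced by the vertex set U."""
    U, seen, count = set(U), set(), 0
    for s in U:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(w for e in edges if v in e and e <= U
                           for w in e if w != v)
        seen |= comp
        count += len(comp) % 2
    return count

def tutte_berge_min(vertices, edges):
    """The minimum in (12), taken over all subsets U of V."""
    V = list(vertices)
    return min((len(V) + (len(V) - len(U)) - odd_components(edges, U)) // 2
               for k in range(len(V) + 1)
               for U in combinations(V, k))
```

For the 5-cycle both computations give 2; the minimum in (12) is attained at U = V, where there is one odd component, so (|V| + 0 − 1)/2 = 2.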

With Gallai's theorem (Theorem 2.1), the Tutte-Berge formula implies a formula for the edge cover number ρ(G):

Corollary 2.3b. Let G = (V, E) be a graph without isolated vertices. Then

(13) ρ(G) = max_{U ⊆ V} ½ ( |U| + o(U) ).

Proof. By Gallai's theorem (Theorem 2.1) and the Tutte-Berge formula (Corollary 2.3a),

(14) ρ(G) = |V| − ν(G) = |V| − min_{W ⊆ V} ½ ( |V| + |W| − o(V \ W) ) = max_{U ⊆ V} ½ ( |U| + o(U) ).

Exercises

2.5. (i) Show that a tree has at most one perfect matching.
(ii) Show (not using Tutte's 1-factor theorem) that a tree G = (V, E) has a perfect matching if and only if, for each v ∈ V, the subgraph G − v has exactly one odd component.

2.6. Let G be a 3-regular graph without any isthmus. Show that G has a perfect matching.

2.7. Let G = (V, E) be a graph and let T be a subset of V. Show that G has a matching covering T if and only if, for each W ⊆ V, the number of odd components of G − W contained in T is at most |W|.

3. Edge-colouring

3.1. Vizing's theorem

We recall some definitions and notation. Let G = (V, E) be a graph. An edge-colouring is a partition of E into matchings. Each matching in an edge-colouring is called a colour or an edge-colour. A k-edge-colouring is an edge-colouring with k colours, and G is k-edge-colourable if a k-edge-colouring exists. The smallest k for which there exists a k-edge-colouring is called the edge-colouring number of G, denoted by χ(G). The maximum degree of G is denoted by ∆(G). Since an edge-colouring of G is a (vertex-)colouring of the line-graph L(G) of G, we have that χ(G) = γ(L(G)).

Vizing [1964,1965] showed the following (we follow the proof of Ehrenfeucht, Faber, and Kierstead [1984]):

Theorem 3.1 (Vizing's theorem). For any simple graph G one has ∆(G) ≤ χ(G) ≤ ∆(G) + 1.

In this theorem we cannot delete the condition that G be simple: the graph G obtained from K3 by replacing each edge by two parallel edges has χ(G) = 6 and ∆(G) = 4.

Proof. The inequality ∆(G) ≤ χ(G) being trivial, we show χ(G) ≤ ∆(G) + 1. Let k := ∆(G) + 1. We show the theorem by induction on |V|. Choose a vertex v ∈ V, and suppose we have k-edge-coloured the graph G − v. Given any partial k-edge-colouring, for any vertex u let Fu be the set of colours that miss u, and let Fvu := Fv ∩ Fu.

Next colour a maximum number of edges incident with v in such a way that

(1) for each uncoloured edge vu one has Fvu ≠ ∅, and there is at most one uncoloured edge vu with |Fvu| = 1.

Such a colouring exists, since colouring no edge incident with v gives (1). Let U := {u ∈ V | vu is an uncoloured edge}, and assume that U ≠ ∅. Then |Fv| ≥ |U| + 1, since at most k − 1 − |U| edges incident with v are coloured (as v has degree at most k − 1).

Suppose there exists an i ∈ ∪u∈U Fvu such that i belongs to Fvu for at most one u ∈ U with |Fvu| ≤ 2. Then we can give colour i to the edge vu for which |Fvu| is smallest, without violating (1): for each other u ∈ U, the set Fvu is either unchanged or replaced by Fvu \ {i}, and by the choice of i at most one of the resulting sets has size 1. This contradicts our maximality assumption.

So we know that for each i ∈ ∪u∈U Fvu there exist two distinct vertices u ∈ U with i ∈ Fvu and |Fvu| ≤ 2. Hence |∪u∈U Fvu| ≤ |U|, and therefore there is a colour j ∈ Fv \ ∪u∈U Fvu. By (1) we can choose w ∈ U such that |Fvu| ≥ 2 for each u ∈ U with u ≠ w. Choose i ∈ Fvw. Consider the ji path P starting at w, that is, the path whose edges are coloured j and i alternately. Note that v misses both i and j, so P does not traverse v. Interchange colours j and i on P; after the interchange, j misses both v and w, so we may give edge vw colour j. Then for all u ∈ U with u ≠ w the set Fvu is unchanged, except for at most one u ≠ w (if the end vertex of P belongs to U), for which Fvu is replaced by Fvu \ {i}. As |Fvu| ≥ 2 for each such u, (1) is maintained, again contradicting the maximality assumption.

Hence U = ∅, that is, all edges incident with v are coloured, and we have a k-edge-colouring of G.
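Vizing's bounds can be verified experimentally on small instances. The following sketch is ours (not from the notes): it computes the edge-colouring number of a small simple graph by backtracking and checks ∆(G) ≤ χ(G) ≤ ∆(G) + 1 on odd cycles, the simplest graphs attaining the upper bound.

```python
def edge_chromatic_number(edges):
    """Smallest k admitting a proper k-edge-colouring, by backtracking."""
    def colourable(k):
        colour = {}
        def extend(i):
            if i == len(edges):
                return True
            u, v = edges[i]
            # colours already used on edges sharing an endpoint with edges[i]
            used = {c for e, c in colour.items() if u in e or v in e}
            for c in range(k):
                if c not in used:
                    colour[edges[i]] = c
                    if extend(i + 1):
                        return True
                    del colour[edges[i]]
            return False
        return extend(0)
    k = 0
    while not colourable(k):
        k += 1
    return k

def max_degree(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return max(deg.values())

# Odd cycles have Delta = 2 but edge-colouring number 3.
for n in (3, 5):
    cyc = [(i, (i + 1) % n) for i in range(n)]
    assert max_degree(cyc) <= edge_chromatic_number(cyc) <= max_degree(cyc) + 1
```

The backtracking search is exponential in general, which is consistent with the NP-completeness result of the next section.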

3.2. NP-completeness of edge-colouring

Holyer [1981] showed:

Theorem 3.2. It is NP-complete to decide if a given 3-regular graph is 3-edge-colourable.

Proof. We show that the 3-satisfiability problem (3-SAT) can be reduced to the edge-colouring problem of 3-regular graphs.

To this end, consider the graph fragment called the inverting component, given by the left picture of Figure 3.1, where the right picture gives its symbolic representation if we take it as part of larger graphs.

Figure 3.1 The inverting component and its symbolic representation.

This graph fragment has the property that a colouring of the edges a, b, c, d and e can be properly extended to a colouring of the edges spanned by the fragment, if and only if either a and b have the same colour while c, d and e have three distinct colours, or c and d have the same colour while a, b and e have three distinct colours. The pairs a, b and c, d are called the output pairs.

From the inverting component we build larger graph fragments. First we construct for each variable u a variable-setting component, given by Figure 3.2, with 2k inverting components and k output pairs, where u occurs (as u or ¬u) in exactly k clauses. The figure shows the case where u occurs in exactly four clauses; the general case is constructed similarly.

Figure 3.2 The variable-setting component for a variable u occurring in four clauses.

This graph fragment has the property that a colouring of the edges leaving the fragment can be extended to a proper colouring of the edges spanned by the fragment, if and only if either each of the output pairs leaving the fragment is monochromatic, or none of them is monochromatic.

Next we construct, for each clause C, a satisfaction-testing component, given by Figure 3.3.

Figure 3.3 The satisfaction-testing component for a clause C.

This graph fragment has the property that a colouring of the edges leaving the fragment can be extended to a proper colouring of the edges spanned by the fragment, if and only if at least one of the output pairs leaving the fragment is monochromatic.

Consider now an instance of the 3-satisfiability problem. If a variable u occurs in a clause C as u, we connect one of the output pairs of u with one of the output pairs of C. If a variable u occurs in a clause C as ¬u, we connect one of the output pairs of u with one side of an inverting component, and connect the other side of it with one of the output pairs of C. In this way we can match up all output pairs of the variable-setting and satisfaction-testing components, yielding a fragment F with only single edges leaving it. We complete the graph G by making a copy F′ of F and connecting any single edge end to its copy in F′. Now, given the properties of the fragments, one easily checks that the input of the 3-satisfiability problem is satisfiable if and only if G is 3-edge-colourable.

4. Multicommodity flows and disjoint paths

4.1. Introduction

The problem of finding a maximum flow from one 'source' r to one 'sink' s is highly tractable. There is a very efficient algorithm, which outputs an integer maximum flow if all capacities are integer. Moreover, the maximum flow value is equal to the minimum capacity of a cut separating r and s. Some direct transformations give similar results for vertex capacities and for vertex-disjoint paths.

Often in practice, however, one is not interested in connecting only one pair of source and sink by a flow or by paths, but several pairs of sources and sinks simultaneously. One may think of a large communication or transportation network, where several messages or goods must be transmitted all at the same time over the same network, between different pairs of terminals. A recent application is the design of very large-scale integrated (VLSI) circuits, where several pairs of pins must be interconnected by wires on a chip, in such a way that the wires follow given 'channels' and that the wires connecting different pairs of pins do not intersect each other.

Mathematically, these problems can be formulated as follows. First, there is the multicommodity flow problem (or k-commodity flow problem):

(1) given: a directed graph G = (V, E), pairs (r1, s1), . . . , (rk, sk) of vertices of G, a 'capacity' function c : E → Q+, and 'demands' d1, . . . , dk,
find: for each i = 1, . . . , k, an ri − si flow xi ∈ Q^E_+ so that xi has value di and so that for each arc e of G:

Σ_{i=1}^{k} xi(e) ≤ c(e).

The pairs (ri, si) are called commodities. (We assume ri ≠ si throughout.)

If we require each xi to be an integer flow, the problem is called the integer multicommodity flow problem or integer k-commodity flow problem. (To distinguish from the integer version of this problem, one sometimes adds the adjective fractional to the name of the problem if no integrality is required.) If all capacities are equal to 1, the problem reduces to finding arc-disjoint paths.

The problem has a natural analogue to the case where G is undirected. We replace each undirected edge e = {v, w} by two opposite arcs (v, w) and (w, v) and ask for flows x1, . . . , xk of values d1, . . . , dk, respectively, so that for each edge e = {v, w} of G:

(2) Σ_{i=1}^{k} (xi(v, w) + xi(w, v)) ≤ c(e).

Thus we obtain the undirected multicommodity flow problem or undirected k-commodity flow problem. Again, we add integer if we require the xi to be integer flows.

If all capacities and demands are equal to 1, the integer multicommodity flow problem is equivalent to the arc- or edge-disjoint paths problem:

(3) given: a (directed or undirected) graph G = (V, E), pairs (r1, s1), . . . , (rk, sk) of vertices of G,
find: pairwise edge-disjoint paths P1, . . . , Pk where Pi is an ri − si path (i = 1, . . . , k).

Related is the vertex-disjoint paths problem:

(4) given: a (directed or undirected) graph G = (V, E), pairs (r1, s1), . . . , (rk, sk) of vertices of G,
find: pairwise vertex-disjoint paths P1, . . . , Pk where Pi is an ri − si path (i = 1, . . . , k).

We leave it as an exercise (Exercise 4.1) to check that the vertex-disjoint paths problem can be transformed to the directed edge-disjoint paths problem.

The (fractional) multicommodity flow problem can be easily described as one of solving a system of linear inequalities in the variables xi(e) for i = 1, . . . , k and e ∈ E. The constraints are the flow conservation laws for each flow xi separately, together with the inequalities given in (1). Therefore, the fractional multicommodity flow problem can be solved in polynomial time with any polynomial-time linear programming algorithm. In fact, the only polynomial-time algorithm known for the fractional multicommodity flow problem is any general linear programming algorithm. Ford and Fulkerson [1958] designed an algorithm based on the simplex method, with column generation.

The following cut condition trivially is a necessary condition for the existence of a solution to the fractional multicommodity flow problem (1):

(5) for each W ⊆ V, the capacity of δ^out_E(W) is not less than the demand of δ^out_R(W),

where R := {(r1, s1), . . . , (rk, sk)}. However, this condition is in general not sufficient, even not in the two simple cases given in Figure 4.1 (taking all capacities and demands equal to 1).

Figure 4.1 (left: terminals r1, r2 = s1, s2; right: terminals r1 = s2, r2 = s1)

One may derive from the max-flow min-cut theorem that the cut condition is sufficient if r1 = r2 = · · · = rk (similarly if s1 = s2 = · · · = sk); see Exercise 4.3.
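For small directed instances the cut condition (5) can be checked by enumerating all vertex subsets W. The sketch below is ours (names are illustrative, and the enumeration is exponential in |V|, so it is only a checking aid):

```python
from itertools import combinations

def cut_condition_holds(V, arcs, cap, commodities, demand):
    """Check condition (5): for each W, capacity out of W >= demand out of W.

    arcs: list of (v, w); cap: dict arc -> capacity;
    commodities: list of (r_i, s_i); demand: list of d_i.
    """
    V = list(V)
    for r in range(len(V) + 1):
        for W in combinations(V, r):
            W = set(W)
            cap_out = sum(cap[a] for a in arcs
                          if a[0] in W and a[1] not in W)
            dem_out = sum(demand[i] for i, (ri, si) in enumerate(commodities)
                          if ri in W and si not in W)
            if cap_out < dem_out:
                return False
    return True
```

As the text notes, a True answer does not guarantee a fractional solution exists; a False answer does rule one out.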

Similarly, in the undirected case a necessary condition is the following cut condition:

(6) for each W ⊆ V, the capacity of δE(W) is not less than the demand of δR(W)

(taking R := {{r1, s1}, . . . , {rk, sk}}). Figure 4.2 shows that this condition again is not sufficient.

Figure 4.2

If all capacities and demands are equal to 1, the cut condition reads:

(7) |δE(W)| ≥ |δR(W)| for each W ⊆ V.

The integer multicommodity flow problem is NP-complete, even in the undirected case with k = 2 commodities and all capacities equal to 1, with arbitrary demands d1, d2 (Even, Itai, and Shamir [1976]). This implies that the undirected edge-disjoint paths problem is NP-complete, even if |{{r1, s1}, . . . , {rk, sk}}| = 2. In fact, the disjoint paths problem is NP-complete in all modes (directed/undirected, vertex/edge disjoint), even if we restrict the graph G to be planar (D.E. Knuth (see Karp [1975]), Lynch [1975], Kramer and van Leeuwen [1984]). For general directed graphs the arc-disjoint paths problem is NP-complete even for k = 2 'opposite' commodities (r, s) and (s, r) (Fortune, Hopcroft, and Wyllie [1980]).

On the other hand, it is a deep result of Robertson and Seymour [1995] that the undirected vertex-disjoint paths problem is polynomially solvable for any fixed number k of commodities. Hence also the undirected edge-disjoint paths problem is polynomially solvable for any fixed number k of commodities. Moreover, Robertson and Seymour observed that there is an easy 'greedy-type' algorithm for the vertex-disjoint paths problem, provided that the graph is planar and all terminals ri, si are on the boundary of the unbounded face.

Hu [1963] showed that the cut condition is sufficient for the existence of a fractional multicommodity flow, in the undirected case with k = 2 commodities. He gave an algorithm that yields a half-integer solution if all capacities and demands are integer. This result was extended by Rothschild and Whinston [1966]. Similar results were obtained by Okamura and Seymour [1981] for arbitrary k, provided that the graph is planar and all terminals ri, si are on the boundary of the unbounded face. We discuss these results in Section 4.2.

It is shown by Schrijver [1994] that for each fixed k, the k disjoint paths problem is solvable in polynomial time for directed planar graphs. For the directed planar arc-disjoint version, the complexity is unknown. That is, there is the following research problem:

Research problem. Is the directed arc-disjoint paths problem polynomially solvable for planar graphs with k = 2 commodities? Is it NP-complete?

Application 4.1: Multicommodity flows. Certain goods or messages must be transported through the same network, where the goods or messages may have different sources and sinks. This is a direct special case of the problems described above.

Application 4.2: VLSI-routing. On a chip certain modules are placed, each containing a number of 'pins'. Certain pairs of pins should be connected by an electrical connection (a 'wire') on the chip, in such a way that each wire follows a certain (very fine) grid on the chip and that wires connecting different pairs of pins are disjoint. Determining the routes of the wires clearly is a special case of the disjoint paths problem.

Exercises

4.1. Show that each of the following problems (a), (b), (c) can be reduced to problems (b), (c), (d), respectively:
(a) the undirected edge-disjoint paths problem,
(b) the undirected vertex-disjoint paths problem,
(c) the directed vertex-disjoint paths problem,
(d) the directed arc-disjoint paths problem.

4.2. Show that the undirected edge-disjoint paths problem for planar graphs can be reduced to the directed arc-disjoint paths problem for planar graphs.

4.3. Derive from the max-flow min-cut theorem that the cut condition (5) is sufficient for the existence of a fractional multicommodity flow if r1 = · · · = rk.

4.4. Show that if the undirected graph G = (V, E) is connected and the cut condition (7) is violated, then it is violated by some W ⊆ V for which both W and V \ W induce connected subgraphs of G.

4.5. (i) Show with Farkas' lemma²: the fractional multicommodity flow problem (1) has a solution, if and only if for each 'length' function l : E → Q+ one has:

(8) Σ_{i=1}^{k} di · distl(ri, si) ≤ Σ_{e∈E} l(e)c(e).

(Here distl(r, s) denotes the length of a shortest r − s path with respect to l.)
(ii) Interpret the cut condition (5) as a special case of this condition.

² Farkas' lemma states: let A be an m × n matrix and let b ∈ R^m. Then there exists a vector x ∈ R^n_+ satisfying Ax ≤ b, if and only if for each vector y ∈ R^m_+ with y^T A ≥ 0 one has y^T b ≥ 0.

4.2. Two commodities

Hu [1963] gave a direct combinatorial method for the undirected two-commodity flow problem, and he showed that in this case the cut condition suffices. In fact, if the cut condition holds and all capacities and demands are integer, there exists a half-integer solution. We first give a proof of this result due to Sakarovitch [1973].

Theorem 4.1 (Hu's two-commodity flow theorem). The undirected two-commodity flow problem, with integer capacities and demands, has a half-integer solution, if and only if the cut condition (6) is satisfied.

Proof. Consider a graph G = (V, E), with commodities {r1, s1} and {r2, s2}, a capacity function c : E → Z+ and demands d1 and d2. Suppose the cut condition holds.

Orient the edges of G arbitrarily, yielding the directed graph D = (V, A). For any a ∈ A we denote by c(a) the capacity of the underlying undirected edge. Define for any x ∈ R^A and any v ∈ V:

(9) f(x, v) := Σ_{a∈δout(v)} x(a) − Σ_{a∈δin(v)} x(a).

So f(x, v) is the 'net loss' of x in vertex v.

By the max-flow min-cut theorem there exists a function x′ : A → Z satisfying:

(10) f(x′, r1) = d1, f(x′, s1) = −d1, f(x′, r2) = d2, f(x′, s2) = −d2, f(x′, v) = 0 for each other vertex v, |x′(a)| ≤ c(a) for each arc a of D.

This can be seen by extending the undirected graph G by adding two new vertices r′ and s′ and four new edges {r′, r1}, {s1, s′} (both with capacity d1) and {r′, r2}, {s2, s′} (both with capacity d2), as in Figure 4.3.

Figure 4.3

Then the cut condition for the two-commodity flow problem implies that the minimum capacity of any r′ − s′ cut in the extended graph is equal to d1 + d2. Hence, by the max-flow min-cut theorem, there exists an integer-valued r′ − s′ flow in the extended graph of value d1 + d2. This gives x′ satisfying (10).

Similarly, the max-flow min-cut theorem implies the existence of a function x″ : A → Z satisfying:

(11) f(x″, r1) = d1, f(x″, s1) = −d1, f(x″, r2) = −d2, f(x″, s2) = d2, f(x″, v) = 0 for each other vertex v, |x″(a)| ≤ c(a) for each arc a of D.

To see this we extend G with vertices r″ and s″ and edges {r″, r1}, {s1, s″} (both with capacity d1) and {r″, s2}, {r2, s″} (both with capacity d2) (cf. Figure 4.4). After this we proceed as above.

Figure 4.4

Now consider the vectors

(12) x1 := ½(x′ + x″) and x2 := ½(x′ − x″).

Since f(x1, v) = ½(f(x′, v) + f(x″, v)) for each v, we see from (10) and (11) that x1 satisfies:

(13) f(x1, r1) = d1, f(x1, s1) = −d1, f(x1, v) = 0 for all other v.

So x1 gives a half-integer r1 − s1 flow in G of value d1. Similarly, x2 satisfies:

(14) f(x2, r2) = d2, f(x2, s2) = −d2, f(x2, v) = 0 for all other v.

So x2 gives a half-integer r2 − s2 flow in G of value d2. Moreover, x1 and x2 together satisfy the capacity constraint, since for each arc a of D:

(15) |x1(a)| + |x2(a)| = ½|x′(a) + x″(a)| + ½|x′(a) − x″(a)| = max{|x′(a)|, |x″(a)|} ≤ c(a).

(Note that ½|α + β| + ½|α − β| = max{|α|, |β|} for all reals α, β.)
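The combination step (12)-(15) can be traced numerically on a toy instance chosen by us for illustration: the 4-cycle with terminals r1, s2, r2, s1 in cyclic order, all capacities and demands 1, and x′, x″ picked by hand to satisfy (10) and (11).

```python
# 0 = r1, 1 = s2, 2 = r2, 3 = s1; arcs of the arbitrarily oriented 4-cycle
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
cap = {a: 1 for a in arcs}

def excess(x, v):
    """f(x, v) as in (9): net outflow of x at vertex v."""
    return (sum(x[a] for a in arcs if a[0] == v)
            - sum(x[a] for a in arcs if a[1] == v))

# x' as in (10): excess +1 at r1 and r2, -1 at s1 and s2
xp = {(0, 1): 1, (1, 2): 0, (2, 3): 1, (3, 0): 0}
# x'' as in (11): excess +1 at r1 and s2, -1 at s1 and r2
xpp = {(0, 1): 0, (1, 2): 1, (2, 3): 0, (3, 0): -1}

# the combination (12): a half-integer r1-s1 flow and r2-s2 flow
x1 = {a: (xp[a] + xpp[a]) / 2 for a in arcs}
x2 = {a: (xp[a] - xpp[a]) / 2 for a in arcs}
```

Every arc then carries |x1(a)| + |x2(a)| = max{|x′(a)|, |x″(a)|} = 1 ≤ c(a), matching (15); this is also the instance where no integer solution exists, so the halves are unavoidable.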

So we have a half-integer solution to the two-commodity flow problem. This proof also directly gives a polynomial-time algorithm for finding a half-integer flow.

The cut condition is not enough to derive an integer solution, as is shown by Figure 4.5 (taking all capacities and demands equal to 1).

Figure 4.5

Moreover, as mentioned, the undirected integer two-commodity flow problem is NP-complete (Even, Itai, and Shamir [1976]). However, Rothschild and Whinston [1966] showed that an integer solution exists if the cut condition holds, provided that the following Euler condition is satisfied:

(16) Σ_{e∈δ(v)} c(e) ≡ 0 (mod 2) if v ≠ r1, s1, r2, s2,
≡ d1 (mod 2) if v = r1 or v = s1,
≡ d2 (mod 2) if v = r2 or v = s2.

(Equivalently, the graph obtained from G by replacing each edge e by c(e) parallel edges and by adding di parallel edges connecting ri and si (i = 1, 2), should be an Eulerian graph.)

Exercises

4.6. Derive from Theorem 4.1 that the cut condition suffices to have a half-integer solution to the undirected k-commodity flow problem (with all capacities and demands integer), if there exist two vertices u and w so that each commodity {ri, si} intersects {u, w}. (Dinits (cf. Adel'son-Vel'skiĭ, Dinits, and Karzanov [1975]).)

4.7. Derive from Theorem 4.1 the following max-biflow min-cut theorem of Hu: Let G = (V, E) be a graph, let r1, s1, r2, s2 be distinct vertices, and let c : E → Q+ be a capacity function. Then the maximum value of d1 + d2 so that there exist ri − si flows xi of value di (i = 1, 2), together satisfying the capacity constraint, is equal to the minimum capacity of a cut both separating r1 and s1 and separating r2 and s2.

5. Matroids

5.1. Matroids and the greedy algorithm

Let G = (V, E) be a connected undirected graph and let w : E → Z be a 'weight' function on the edges. We saw that a minimum-weight spanning tree can be found quite straightforwardly with Kruskal's so-called greedy algorithm. The algorithm consists of selecting successively edges e1, e2, . . . . If edges e1, . . . , ek have been selected, we select an edge e so that:

(1) (i) e ∉ {e1, . . . , ek} and {e1, . . . , ek, e} is a forest,
(ii) w(e) is as small as possible among all edges e satisfying (i).

We take ek+1 := e. If no e satisfying (1)(i) exists, that is, if {e1, . . . , ek} forms a spanning tree, we stop, setting r := k. Then {e1, . . . , er} is a spanning tree of minimum weight. By replacing 'as small as possible' in (1)(ii) by 'as large as possible', one obtains a spanning tree of maximum weight.

It is obviously not true that such a greedy approach would lead to an optimal solution for any combinatorial optimization problem. We could think of such an approach to find a matching of maximum weight. Then in (1)(i) we replace 'forest' by 'matching', and 'small' in (1)(ii) by 'large'. However, application to the weighted graph in Figure 5.1 would give e1 = cd and e2 = ab, while ab and cd do not form a matching of maximum weight.

Figure 5.1 (the 4-cycle a, b, c, d with w(ab) = 1, w(bc) = w(ad) = 3, w(cd) = 4)

It turns out that the structures for which the greedy algorithm does lead to an optimal solution are the matroids. It is worth studying them, not only because it enables us to recognize when the greedy algorithm applies, but also because there exist fast algorithms for 'intersections' of two different matroids.

The concept of matroid is defined as follows. Let X be a finite set and let I be a collection of subsets of X. Then the pair (X, I) is called a matroid if it satisfies the following conditions:

(2) (i) ∅ ∈ I,
(ii) if Y ∈ I and Z ⊆ Y then Z ∈ I,
(iii) if Y, Z ∈ I and |Y| < |Z| then Y ∪ {x} ∈ I for some x ∈ Z \ Y.

For any matroid M = (X, I), a subset Y of X is called independent if Y belongs to I, and dependent otherwise. For Y ⊆ X, a subset B of Y is called a basis of Y if B is an inclusionwise maximal independent subset of Y; that is, for any set Z ∈ I with B ⊆ Z ⊆ Y one has Z = B.

It is not difficult to see that condition (2)(iii) is equivalent to:

(3) for any subset Y of X, any two bases of Y have the same cardinality

(Exercise 5.1). The common cardinality of the bases of a subset Y of X is called the rank of Y, denoted by rM(Y). A set is called simply a basis if it is a basis of X. The common cardinality of all bases is called the rank of the matroid.

We now show that if G = (V, E) is a graph and I is the collection of forests in G, then (E, I) indeed is a matroid. Conditions (2)(i) and (ii) are trivial. To see that condition (3) holds, let E′ ⊆ E. Then, by definition, each basis Y of E′ is an inclusionwise maximal forest contained in E′. Hence Y forms a spanning tree in each component of the graph (V, E′). So Y has |V| − k elements, where k is the number of components of (V, E′). So each basis of E′ has |V| − k elements, proving (3). In particular, if G is connected, the bases of the matroid (E, I) are exactly the spanning trees in G.

Now let X be some finite set and let I be a collection of subsets of X satisfying (2)(i) and (ii). For any weight function w : X → R we want to find a set Y in I maximizing

(4) w(Y) := Σ_{y∈Y} w(y).

The greedy algorithm consists of selecting y1, . . . , yr successively as follows. If y1, . . . , yk have been selected, choose y ∈ X so that:

(5) (i) y ∉ {y1, . . . , yk} and {y1, . . . , yk, y} ∈ I,
(ii) w(y) is as large as possible among all y satisfying (i).

We stop if no y satisfying (5)(i) exists. We next show that the matroids indeed are those structures for which the greedy algorithm leads to an optimal solution.

Theorem 5.1. Let X be a finite set and let I be a collection of subsets of X satisfying (2)(i) and (ii). Then the pair (X, I) is a matroid, if and only if the greedy algorithm leads to a set Y in I of maximum weight w(Y), for each weight function w : X → R+.

Proof. Sufficiency. Suppose the greedy algorithm leads to an independent set of maximum weight for each weight function w. We show that (X, I) is a matroid. Conditions (2)(i) and (ii) are satisfied by assumption. To see condition (2)(iii), let Y, Z ∈ I with |Y| < |Z|, and suppose that Y ∪ {z} ∉ I for each z ∈ Z \ Y. Let k := |Y|. Consider the following weight function w on X:

(6) w(x) := k + 2 if x ∈ Y,
w(x) := k + 1 if x ∈ Z \ Y,
w(x) := 0 if x ∈ X \ (Y ∪ Z).

Now in the first k iterations of the greedy algorithm we find the k elements in Y. By assumption, at any further iteration we cannot choose any element in Z \ Y. Hence any further element chosen has weight 0, and the greedy algorithm will yield an independent set of weight k(k + 2). However, any basis containing Z will have weight at least |Z ∩ Y|(k + 2) + |Z \ Y|(k + 1) ≥ |Z|(k + 1) ≥ (k + 1)(k + 1) > k(k + 2). Hence the greedy algorithm does not give a maximum-weight independent set, a contradiction.

Necessity. Now let (X, I) be a matroid and let w : X → R+ be any weight function on X. Call an independent set Y greedy if it is contained in a maximum-weight basis. It suffices to show that if Y is greedy, and x is an element in X \ Y such that Y ∪ {x} ∈ I and such that w(x) is as large as possible, then Y ∪ {x} is greedy.

As Y is greedy, there exists a maximum-weight basis B ⊇ Y. If x ∈ B, then Y ∪ {x} is greedy again. If x ∉ B, then there exists a basis B′ containing Y ∪ {x} and contained in B ∪ {x}. So B′ = (B \ {x′}) ∪ {x} for some x′ ∈ B \ Y. As w(x) is chosen maximum, w(x) ≥ w(x′). Hence w(B′) ≥ w(B), and therefore B′ is a maximum-weight basis. So Y ∪ {x} is greedy.

Note that by replacing 'as large as possible' in (5) by 'as small as possible', one obtains an algorithm for finding a minimum-weight basis in a matroid. Moreover, for any weight function w : X → R, the algorithm can be adapted to yield an independent set of maximum weight, by ignoring elements of negative weight.

Let M = (X, I) be a matroid. A subset C of X is called a circuit if C is an inclusionwise minimal dependent set. Two elements x, y of X are called parallel if {x, y} is a circuit.

Exercises

5.1. Show that condition (3) is equivalent to condition (2)(iii) (assuming (2)(i) and (ii)).

5.2. Let M = (X, I) be a matroid, let B be a basis of M, and let x ∈ X \ B. Show that there exists a unique circuit C with the property that x ∈ C and C ⊆ B ∪ {x}.

5.3. Let M = (X, I) be a matroid. Show that if C and C′ are different circuits of M and x ∈ C ∩ C′ and y ∈ C \ C′, then (C ∪ C′) \ {x} contains a circuit containing y.

5.4. Let X be a finite set and let B be a nonempty collection of subsets of X. Show that B is the collection of bases of some matroid on X, if and only if:

(7) if B, B′ ∈ B and x ∈ B \ B′, then there exists a y ∈ B′ \ B such that (B \ {x}) ∪ {y} ∈ B.

5.5. Let X be a finite set and let r : P(X) → Z. Show that r = rM for some matroid M on X, if and only if:

(8) (i) 0 ≤ r(Y) ≤ |Y| for each subset Y of X,
(ii) r(Z) ≤ r(Y) whenever Z ⊆ Y ⊆ X,
(iii) r(Y ∩ Z) + r(Y ∪ Z) ≤ r(Y) + r(Z) for all Y, Z ⊆ X.

5.6. Let M = (X, I) be a matroid. Show that if x and y are parallel and Y is an independent set with x ∈ Y, then also (Y \ {x}) ∪ {y} is independent.
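The greedy algorithm (5) needs nothing from the matroid beyond membership tests for I, so it is natural to code it against an independence oracle. A minimal sketch (ours, with the remark about negative weights built in):

```python
def greedy_max_weight(X, independent, w):
    """Greedy algorithm (5): scan elements by decreasing weight and add
    each one of nonnegative weight that keeps the set independent.

    `independent` is an oracle deciding membership of a set in I.
    """
    Y = set()
    for x in sorted(X, key=w, reverse=True):
        if w(x) >= 0 and independent(Y | {x}):
            Y.add(x)
    return Y

# Example: the uniform matroid of rank 2 on {1,...,5} (independent = size <= 2)
Y = greedy_max_weight([1, 2, 3, 4, 5], lambda S: len(S) <= 2, lambda x: x)
```

By Theorem 5.1 the result is a maximum-weight independent set whenever the oracle describes a matroid; for a non-matroid oracle (such as matchings, Figure 5.1) the same code can return a suboptimal set.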

5.7. Let M = (X, I) be a matroid, and order the elements of X as x1, x2, . . . , xn. Define

(9) Y := {xi | rM({x1, . . . , xi}) > rM({x1, . . . , xi−1})}.

Prove that Y is a basis of M.

5.2. Examples of matroids

In this section we describe some classes of examples of matroids.

I. Graphic matroids. As a first example we consider the matroids described in Section 5.1. Let G = (V, E) be a graph and let I be the collection of all forests in G. Then M = (E, I) is a matroid, as we saw in Section 5.1. The matroid M is called the cycle matroid of G, denoted by M(G). Any matroid obtained in this way, or isomorphic to such a matroid, is called a graphic matroid.

Note that the bases of M(G) are exactly those forests F of G for which the graph (V, F) has the same number of components as G. So if G is connected, the bases are the spanning trees. Note also that the circuits of M(G), in the matroid sense, are exactly the circuits of G, in the graph sense.

II. Cographic matroids. There is an alternative way of obtaining a matroid from a graph G = (V, E). Let I be the collection of those subsets F of E that satisfy:

(10) the graph (V, E \ F) has the same number of components as G.

Then:

Theorem 5.2. (E, I) is a matroid.

Proof. Condition (2)(i) is trivial. Condition (2)(ii) follows from the fact that if F ⊆ F′ and (V, E \ F′) has the same number of components as G, then so has (V, E \ F).

To see condition (2)(iii), let G have k components and let F and F′ be subsets of E so that (V, E \ F) and (V, E \ F′) have k components and so that |F| < |F′|. We must show that (V, E \ (F ∪ {e})) has k components for some e ∈ F′ \ F.

Suppose to the contrary that, for every e ∈ F′ \ F, the graph (V, E \ (F ∪ {e})) has more than k components. This means that each e in F′ \ F is an isthmus in the graph (V, E \ F), which implies that (V, E \ (F ∪ F′)) has k + |F′ \ F| components. Now E \ F′ = (E \ (F ∪ F′)) ∪ (F \ F′). Hence (V, E \ F′) has at least

k + |F′ \ F| − |F \ F′| = k + |F′| − |F| > k

components, contradicting the fact that F′ belongs to I.

The matroid (E, I) is called the cocycle matroid of G, denoted by M∗(G). Any matroid obtained in this way, or isomorphic to such a matroid, is called a cographic matroid. Note that the bases of M∗(G) are exactly those subsets E′ of E for which E \ E′ is a forest and (V, E \ E′) has the same number of components as G. So if G is connected, these are exactly the complements of the spanning trees in G.
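For the cycle matroid M(G) the independence oracle is "F is a forest", which a union-find structure answers incrementally; running the greedy algorithm with 'as small as possible' then is exactly Kruskal's minimum spanning tree algorithm. A sketch (ours):

```python
def min_spanning_forest(n, edges, weight):
    """Greedy on the cycle matroid M(G): scan edges by increasing weight
    and keep an edge iff it joins two different components (independence
    oracle realised by union-find with path halving)."""
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    forest = []
    for e in sorted(edges, key=weight):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:          # adding e keeps the edge set a forest
            parent[ru] = rv
            forest.append(e)
    return forest
```

By Theorem 5.1 the output is a minimum-weight basis of M(G), that is, a minimum spanning tree in each component of G.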

By definition, a subset C of E is a circuit of M∗(G) if it is an inclusionwise minimal set with the property that (V, E \ C) has more components than G. Hence C is a circuit of M∗(G) if and only if C is an inclusionwise minimal nonempty cutset in G.

III. Linear matroids. Let A be an m × n matrix. Let X = {1, . . . , n} and let I be the collection of all those subsets Y of X so that the columns with index in Y are linearly independent, that is, so that the submatrix of A consisting of the columns with index in Y has rank |Y|. Then:

Theorem 5.3. (X, I) is a matroid.

Proof. Conditions (2)(i) and (ii) are trivial. To see condition (2)(iii), let Y and Z be subsets of X so that the columns with index in Y are linearly independent, and similarly for Z, and so that |Y| < |Z|. Suppose that Y ∪ {x} ∉ I for each x ∈ Z \ Y. This means that each column with index in Z \ Y is spanned by the columns with index in Y. Trivially, each column with index in Z ∩ Y is spanned by the columns with index in Y. Hence each column with index in Z is spanned by the columns with index in Y. This contradicts the fact that the columns indexed by Y span an |Y|-dimensional space, while the columns indexed by Z span an |Z|-dimensional space, with |Z| > |Y|.

Any matroid obtained in this way, or isomorphic to such a matroid, is called a linear matroid. Note that the rank rM(Y) of any subset Y of X is equal to the rank of the matrix formed by the columns indexed by Y.

IV. Transversal matroids. Let X1, . . . , Xm be subsets of the finite set X. A set Y = {y1, . . . , yn} is called a partial transversal (of X1, . . . , Xm), if there exist distinct indices i1, . . . , in so that yj ∈ Xij for j = 1, . . . , n. A partial transversal of cardinality m is called a transversal (or a system of distinct representatives, or an SDR).

Another way of representing partial transversals is as follows. Let G be the bipartite graph with vertex set V := {1, . . . , m} ∪ X and with edges all pairs {i, x} with i ∈ {1, . . . , m} and x ∈ Xi. (We assume here that {1, . . . , m} ∩ X = ∅.) For any matching M in G, let ρ(M) denote the set of those elements in X that belong to some edge in M. Then it is not difficult to see that:

(11) Y ⊆ X is a partial transversal, if and only if Y = ρ(M) for some matching M in G.

Now let I be the collection of all partial transversals for X1, . . . , Xm. Then:

Theorem 5.4. (X, I) is a matroid.

Proof. Again, conditions (2)(i) and (ii) are easy to check. To see condition (2)(iii), let Y and Z be partial transversals with |Y| < |Z|. Consider the graph G constructed above. By (11) there exist matchings M and M′ in G so that Y = ρ(M) and Z = ρ(M′). So |M| = |Y| < |Z| = |M′|.

Consider the union M ∪ M′ of M and M′. Each component of the graph (V, M ∪ M′) is a path, a circuit, or a singleton vertex, with edges alternatingly in M and in M′. Since |M| < |M′|, at least one of these components is a path P with more edges in M′ than in M, and hence with both end edges in M′. Let N and N′ denote the sets of edges of P in M and in M′, respectively. Since P has odd length, exactly one of its end vertices belongs to X; call this end vertex x. Then x ∈ ρ(M′) = Z and x ∉ ρ(M) = Y. Define M″ := (M \ N) ∪ N′. Clearly, M″ is a matching with ρ(M″) = Y ∪ {x}. So Y ∪ {x} belongs to I.

Any matroid obtained in this way, or isomorphic to such a matroid, is called a transversal matroid.

These four classes of examples show that the greedy algorithm has a wider applicability than just for finding minimum-weight spanning trees. There are more classes of matroids (like 'algebraic matroids' and 'gammoids'), which we do not discuss here.

Exercises

5.8. Let G = (V, E) be a graph. Call a collection C of circuits a circuit basis of G if each circuit of G is a symmetric difference of circuits in C. (We consider circuits as edge sets.) Give a polynomial-time algorithm to find a circuit basis C of G that minimizes Σ_{C∈C} |C|. (The running time of the algorithm should be bounded by a polynomial in |V| + |E|.)

5.9. Let M = (X, I) be the transversal matroid derived from subsets X1, . . . , Xm of X.
(i) Show with König's matching theorem that:

(12) rM(X) = min_{J⊆{1,...,m}} ( |∪_{j∈J} Xj| + m − |J| ).

(ii) Derive a formula for rM(Y) for any Y ⊆ X.

5.10. (i) Let (X1, . . . , Xm) be a partition of the finite set X and let I be the collection of those subsets Y of X with |Y ∩ Xi| ≤ 1 for each i = 1, . . . , m. Show that (X, I) is a matroid. Such matroids are called partition matroids.
(ii) Show that partition matroids are graphic, linear, and transversal matroids.

5.3. Duality, deletion, and contraction

There are a number of operations that transform a matroid to another matroid. First we consider the 'dual' of a given matroid.

. The path consists of edges alternatingly in M and in M . Then x ∈ ρ(M ) = Z and x ∈ ρ(M ) = Y . If the sets X1 .8. E) be a graph. I) is a matroid. . m. Deﬁne M := (M \ N ) ∪ N . . 5. Any matroid obtained in this way. Since P has odd length.) 5. .3. M ∪ M ) is either a path. . (The running time of the algorithm should be bounded by a polynomial in |V | + |E|. if and only if (7) is satisﬁed.. exactly one of its end vertices belongs to X. call this end vertex x. Clearly. In Exercise 5. Duality. deletion. I) be the transversal matroid derived from subsets X1 .. one speaks of a partition matroid. . (i) Let (X1 . and transversal matroids. Xm ) be a partition of the ﬁnite set X. .) Give a polynomial-time algorithm to ﬁnd a circuit basis C of G that minimizes C∈C |C|. Exercises 5. Each component of the graph (V.11. These four classes of examples show that the greedy algorithm has a wider applicability than just for ﬁnding minimum-weight spanning trees. (ii) Show that partition matroids are graphic. is called a transversal matroid. with end edges in M .10. or a singleton vertex. and contraction 27 Consider the union M ∪ M of M and M . Let M = (V. . First we consider the ‘dual’ of a given matroid.. .m} min (| j∈J Xj | + m − |J|). Let G = (V.3.Section 5. 5. . or isomorphic to such a matroid. (ii) Derive a formula for rM (Y ) for any Y ⊆ X. M is a matching with ρ(M ) = Y ∪ {x}. I) is a matroid.. There are more classes of matroids (like ‘algebraic matroids’. . Show that (E. and contraction There are a number of operations that transform a matroid to another matroid.12. linear. So Y ∪ {x} belongs to I. 5. (We consider circuits as edge sets. Call a collection C of circuits a circuit basis of G if each circuit of G is a symmetric diﬀerence of circuits in C. . (i) Show with K¨nig’s matching theorem that: o (12) rM (X) = J⊆{1. . at least one of these components is a path P with more edges in M than in M .9. Such matroids are called partition matroids. ‘gammoids’). 
Let N and N denote the edges in P occurring in M and M . Now deﬁne (13) B ∗ := {X \ B | B ∈ B}. Let G = (V. . . Since |M | > |M |. . Show that (X. Show that a partition matroid is graphic and cographic. Xm form a partition of X. So |N | = |N | + 1. Xm of X as in Example IV. cographic. Let I be the collection of those subsets Y of E so that F has at most one circuit. E) be a graph. Duality. Let I be the collection of all subsets Y of X such that |Y ∩ Xi | ≤ 1 for each i = 1. deletion. for which we refer to Welsh [1976]. respectively. 5.2 we saw that a nonempty collection B of subsets of some ﬁnite set X is the collection of bases of some matroid on X. or a circuit. .
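The independence oracle of a graphic matroid ("is F a forest?") is easy to implement with a union-find structure. The following is a minimal Python sketch (the function name and the representation of edges as vertex pairs are choices made here for illustration, not part of the text):

```python
def is_forest(edges):
    """Independence oracle of a graphic matroid: True iff `edges` has no cycle.
    `edges` is an iterable of vertex pairs (an assumption of this sketch)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:        # u and v already connected: this edge closes a circuit
            return False
        parent[ru] = rv
    return True
```

With such an oracle, the greedy algorithm of Section 5.1 finds a maximum-weight forest, i.e., a minimum spanning tree after negating lengths.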

Theorem 5.5. If B is the collection of bases of some matroid M, then B∗ also is the collection of bases of some matroid on X. Moreover, the rank function rM∗ of this matroid satisfies

(14) rM∗(Y) = |Y| + rM(X \ Y) − rM(X).

Proof. Let J denote the collection of subsets J of X so that J ⊆ D for some D ∈ B∗. Define

(15) ρ(Y) := max{|Z| | Z ∈ J, Z ⊆ Y} for Y ⊆ X.

The function ρ clearly satisfies conditions (8)(i) and (ii). To see (8)(iii), observe that

(16) ρ(Y) = max over C ∈ B∗ of |Y ∩ C| = max over B ∈ B of |Y \ B|
  = max over B ∈ B of |(X \ Y) ∩ B| + |Y| − rM(X) = rM(X \ Y) + |Y| − rM(X).

Since rM satisfies condition (8)(iii), (16) implies that also ρ satisfies condition (8)(iii). Hence ρ is the rank function of some matroid N = (X, I′). Now for each Y ⊆ X:

(17) Y ∈ I′ ⟺ ρ(Y) = |Y| ⟺ Y ∈ J.

Hence I′ = J, and therefore (X, J) is a matroid with basis collection B∗ and rank function given by (14). □

The matroid with basis collection B∗ is called the dual matroid of M, denoted by M∗. Since (B∗)∗ = B, we know (M∗)∗ = M. In fact, in the examples I and II above we saw that for any undirected graph G, the cocycle matroid of G is the dual matroid of the cycle matroid of G. That is,

M∗(G) = (M(G))∗.

Another way of constructing matroids from matroids is by 'deletion' and 'contraction'. Let M = (X, I) be a matroid and let Y ⊆ X. Define

(18) I′ := {Z | Z ⊆ Y, Z ∈ I}.

Then M′ = (Y, I′) is a matroid again, as one easily checks. M′ is called the restriction of M to Y. If Y = X \ Z with Z ⊆ X, we say that M′ arises by deleting Z, and we denote M′ by M \ Z. Contracting Z means replacing M by (M∗ \ Z)∗; this matroid is denoted by M/Z. One may check that if G is a graph and e is an edge of G, then contracting the edge {e} in the cycle matroid M(G) of G corresponds to contracting e in the graph. That is, M(G)/{e} = M(G/{e}), where G/{e} denotes the graph obtained from G by contracting e.

If a matroid M′ arises from M by a series of deletions and contractions, M′ is called a minor of M.

Exercises

5.13. Let G be a planar graph and let G∗ be a planar graph dual to G. Show that the cycle matroid M(G∗) of G∗ is isomorphic to the cocycle matroid M∗(G) of G.

5.14. Let M = (X, I) be a matroid and let k be a natural number. Define I′ := {Y ∈ I | |Y| ≤ k}. Show that (X, I′) is again a matroid (called the k-truncation of M).

5.15. Show that the dual matroid of a linear matroid is again a linear matroid.

5.16. Let G = (V, E) be a loopless undirected graph. Let A be the matrix obtained from the V × E incidence matrix of G by replacing, in each column, exactly one of the two 1's by −1.
(i) Show that a subset Y of E is a forest if and only if the columns of A with index in Y are linearly independent.
(ii) Derive that any graphic matroid is a linear matroid.
(iii) Derive (with the help of Exercise 5.15) that any cographic matroid is a linear matroid.

5.17. (i) Let X be a finite set and let k be a natural number. Let I := {Y ⊆ X | |Y| ≤ k}. Show that (X, I) is a matroid. Such matroids are called k-uniform matroids.
(ii) Show that k-uniform matroids are transversal matroids.
(iii) Give an example of a k-uniform matroid that is neither graphic nor cographic.

5.18. Let M = (X, I) be a matroid, let U be a set disjoint from X, and let k ≥ 0. Define

(19) I′ := {U′ ∪ Y | U′ ⊆ U, Y ∈ I, |U′ ∪ Y| ≤ k}.

Show that (U ∪ X, I′) is again a matroid.

5.19. Let M = (X, I) be a matroid and let x ∈ X.
(i) Show that if x is not a loop, then a subset Y of X \ {x} is independent in the contracted matroid M/{x} if and only if Y ∪ {x} is independent in M.
(ii) Show that if x is a loop, then M/{x} = M \ {x}.
(iii) Show that for each Y ⊆ X \ {x}: rM/{x}(Y) = rM(Y ∪ {x}) − rM({x}).

5.20. Let M = (X, I) be a matroid and let Y ⊆ X.
(i) Let B be a basis of Y. Show that a subset U of X \ Y is independent in the contracted matroid M/Y, if and only if U ∪ B is independent in M.
(ii) Show that for each U ⊆ X \ Y:

(20) rM/Y(U) = rM(U ∪ Y) − rM(Y).

5.21. Let M = (X, I) be a matroid and let Y, Z ⊆ X be disjoint. Show that (M \ Y)/Z = (M/Z) \ Y.

5.22. Let G = (V, E) be a connected graph. For each subset E′ of E, let κ(V, E′) denote the number of components of the graph (V, E′). Show that for each E′ ⊆ E:
(i) rM(G)(E′) = |V| − κ(V, E′);
(ii) rM∗(G)(E′) = |E′| − κ(V, E \ E′) + 1.

5.23. Let M = (X, I) be a matroid, and suppose that we can test in polynomial time if any subset Y of X belongs to I. Show that then the same holds for the dual matroid M∗.
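Formula (14) can be checked by brute force on a small example. The sketch below (Python; the uniform matroid U(2,3), which is also the cycle matroid of a triangle, is chosen here for concreteness) computes the rank functions of M and M∗ directly from the definitions and verifies (14) for every subset:

```python
from itertools import combinations

X = frozenset({'a', 'b', 'c'})

def indep(Y):                 # U_{2,3}: every set of at most 2 elements is independent
    return len(Y) <= 2

def rank_of(S, oracle):
    # rank = size of a largest independent subset of S
    return max(len(T) for k in range(len(S) + 1)
               for T in combinations(sorted(S), k) if oracle(frozenset(T)))

bases = [frozenset(B) for B in combinations(sorted(X), rank_of(X, indep))
         if indep(frozenset(B))]

def indep_dual(Y):            # Y is independent in M* iff X \ Y contains a basis of M
    return any(B <= X - Y for B in bases)

# verify formula (14) for every subset Y of X
for k in range(len(X) + 1):
    for Yt in combinations(sorted(X), k):
        Y = frozenset(Yt)
        assert rank_of(Y, indep_dual) == len(Y) + rank_of(X - Y, indep) - rank_of(X, indep)
```

The same loop, run with the roles of `indep` and `indep_dual` interchanged, illustrates (M∗)∗ = M.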

5.24. Let M = (X, I) be a matroid, let B be a basis of M, and let w : X → R be a weight function. Show that B is a basis of maximum weight, if and only if w(B′) ≤ w(B) for every basis B′ with |B′ \ B| = 1.

5.4. Two technical lemmas

In this section we prove two technical lemmas as a preparation to the coming sections on matroid intersection.

Let M = (X, I) be a matroid. For any Y ∈ I define a bipartite graph H(M, Y) as follows. The graph H(M, Y) has vertex set X, with colour classes Y and X \ Y. Elements y ∈ Y and x ∈ X \ Y are adjacent if and only if

(21) (Y \ {y}) ∪ {x} ∈ I.

Then we have:

Lemma 5.1. Let M = (X, I) be a matroid and let Y, Z ∈ I with |Y| = |Z|. Then H(M, Y) contains a perfect matching on Y △ Z. (A perfect matching on a vertex set U is a matching N with ∪N = U.)

Proof. Suppose not. By König's matching theorem there exist a subset S of Y \ Z and a subset S′ of Z \ Y such that for each edge {y, z} of H(M, Y) satisfying z ∈ S′ one has y ∈ S, and such that |S| < |S′|. As |(Y ∩ Z) ∪ S| < |(Y ∩ Z) ∪ S′| and (Y ∩ Z) ∪ S′ belongs to I, there exists an element z ∈ S′ such that T := (Y ∩ Z) ∪ S ∪ {z} belongs to I. This implies that there exists a U ∈ I such that T ⊆ U ⊆ T ∪ Y and |U| = |Y|. So U = (Y \ {x}) ∪ {z} for some x ∈ Y with x ∉ S. As then {x, z} is an edge of H(M, Y) with z ∈ S′ and x ∉ S, this contradicts the choice of S and S′. □

The following forms a counterpart:

Lemma 5.2. Let M = (X, I) be a matroid and let Y ∈ I. Let Z ⊆ X be such that |Y| = |Z| and such that H(M, Y) contains a unique perfect matching N on Y △ Z. Then Z belongs to I.

Proof. By induction on k := |Z \ Y|, the case k = 0 being trivial. Let k ≥ 1. By the unicity of N there exists an edge {y, z} in N, with y ∈ Y \ Z and z ∈ Z \ Y, with the property that there is no z′ ∈ Z \ Y such that z′ ≠ z and {y, z′} is an edge of H(M, Y).

Let Z′ := (Z \ {z}) ∪ {y} and N′ := N \ {{y, z}}. Then N′ is the unique matching in H(M, Y) with union Y △ Z′. Hence, by induction, Z′ belongs to I.

Suppose now that Z ∉ I. As (Y \ {y}) ∪ {z} belongs to I (since {y, z} is an edge of H(M, Y)) and is contained in (Y \ {y}) ∪ Z, there exists an S ∈ I such that Z \ {z} ⊆ S ⊆ (Y \ {y}) ∪ Z and |S| = |Y|. As Z ∉ I, we know z ∉ S, and hence rM((Y ∪ Z′) \ {y}) = |Y|. Hence there exists a z′ ∈ Z \ Y with z′ ≠ z such that (Y \ {y}) ∪ {z′} belongs to I; that is, {y, z′} is an edge of H(M, Y). This contradicts the choice of y. □

Exercises

5.25. Let M = (X, I) be a matroid and let Y and Z be independent sets with |Y| = |Z|. For any y ∈ Y \ Z, define δ(y) as the set of those z ∈ Z \ Y which are adjacent to y in the graph H(M, Y).
(i) Show that for each y ∈ Y \ Z the set (Z \ δ(y)) ∪ {y} belongs to I. (Hint: Apply inequality (8)(iii) to X′ := (Z \ δ(y)) ∪ {y} and X″ := (Z \ δ(y)) ∪ (Y \ {y}).)
(ii) Derive from (i) that for each y ∈ Y \ Z there exists a z ∈ Z \ Y so that {y, z} is an edge both of H(M, Y) and of H(M, Z).

5.5. Matroid intersection

Edmonds [1970] discovered that the concept of matroid has even more algorithmic power, by showing that there exist fast algorithms also for intersections of matroids.

Let M1 = (X, I1) and M2 = (X, I2) be two matroids, on the same set X. Consider the collection I1 ∩ I2 of common independent sets. The pair (X, I1 ∩ I2) is generally not a matroid again (cf. Exercise 5.26). What Edmonds showed is that, for any weight function w on X, a maximum-weight common independent set can be found in polynomial time. In particular, a common independent set of maximum cardinality can be found in polynomial time. We consider first some applications.

Example 5.5a. Let G = (V, E) be a bipartite graph, with colour classes V1 and V2, say. Let I1 be the collection of all subsets F of E so that no two edges in F have a vertex in V1 in common; similarly, let I2 be the collection of all subsets F of E so that no two edges in F have a vertex in V2 in common. So both (E, I1) and (E, I2) are partition matroids. Now I1 ∩ I2 is the collection of matchings in G. Finding a maximum-weight common independent set amounts to finding a maximum-weight matching in G.

Example 5.5b. Let X1, . . . , Xm and Y1, . . . , Ym be subsets of X. Let M1 = (X, I1) and M2 = (X, I2) be the corresponding transversal matroids. Then common independent sets correspond to common partial transversals. The collections (X1, . . . , Xm) and (Y1, . . . , Ym) have a common transversal, if and only if the maximum cardinality of a common independent set is equal to m.

Example 5.5c. Let D = (V, A) be a directed graph. Let M1 = (A, I1) be the cycle matroid of the underlying undirected graph, and let I2 be the collection of all subsets Y of A so that each vertex of D is entered by at most one arc in Y. So M2 := (A, I2) is a partition matroid. Now the common independent sets are those subsets Y of A with the property that each component of (V, Y) is a rooted tree. Moreover, D has a rooted spanning tree, if and only if the maximum cardinality of a common independent set is equal to |V| − 1.

Example 5.5d. Let G = (V, E) be a connected undirected graph. Then G has two edge-disjoint spanning trees, if and only if the maximum cardinality of a common independent set in the cycle matroid M(G) of G and the cocycle matroid M∗(G) of G is equal to |V| − 1.

In this section we describe an algorithm for finding a maximum-cardinality common independent set in two given matroids. In the next section we consider the more general maximum-weight problem.
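Lemma 5.1 lends itself to a brute-force check on a small cycle matroid. In the sketch below (Python; the multigraph with a parallel pair of edges is chosen here so that the matroid is not uniform), the exchange graph H(M, Y) is built from condition (21) and a perfect matching on Y △ Z is sought by enumeration:

```python
from itertools import combinations, permutations

# cycle matroid of the multigraph with edges a = {1,2}, b = {2,3}, c = {1,3}, d = {1,3}
EDGES = {'a': (1, 2), 'b': (2, 3), 'c': (1, 3), 'd': (1, 3)}
X = set(EDGES)

def indep(F):
    """F is independent iff the edges in F form a forest (union-find test)."""
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for e in F:
        ru, rv = find(EDGES[e][0]), find(EDGES[e][1])
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def H_edges(Y):
    # adjacency (21) of the exchange graph H(M, Y), as pairs (y, x)
    return {(y, x) for y in Y for x in X - Y if indep((Y - {y}) | {x})}

def matches_perfectly(Y, Z):
    edges = H_edges(Y)
    ys, xs = sorted(Y - Z), sorted(Z - Y)
    return any(all((y, x) in edges for y, x in zip(ys, p))
               for p in permutations(xs))

# Lemma 5.1: every pair of equal-size independent sets admits such a matching
for k in range(3):
    for Yt in combinations(sorted(X), k):
        for Zt in combinations(sorted(X), k):
            Y, Z = set(Yt), set(Zt)
            if indep(Y) and indep(Z):
                assert matches_perfectly(Y, Z)
```

The enumeration over permutations is of course exponential; it only serves to confirm the lemma on a toy instance.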

For any two matroids M1 = (X, I1) and M2 = (X, I2) and any Y ∈ I1 ∩ I2, we define a directed graph H(M1, M2, Y) as follows. Its vertex set is X, and for any y ∈ Y, x ∈ X \ Y:

(22) (y, x) is an arc of H(M1, M2, Y) if and only if (Y \ {y}) ∪ {x} ∈ I1;
  (x, y) is an arc of H(M1, M2, Y) if and only if (Y \ {y}) ∪ {x} ∈ I2.

These are all arcs of H(M1, M2, Y). In fact, this graph can be considered as the union of directed versions of the graphs H(M1, Y) and H(M2, Y) defined in Section 5.4.

The following is the basis for finding a maximum-cardinality common independent set in two matroids.

Cardinality common independent set augmenting algorithm

input: matroids M1 = (X, I1) and M2 = (X, I2) and a set Y ∈ I1 ∩ I2;
output: a set Y′ ∈ I1 ∩ I2 with |Y′| > |Y|, if it exists.

description of the algorithm: We assume that M1 and M2 are given in such a way that for any subset Z of X we can check in polynomial time if Z ∈ I1 and if Z ∈ I2. Consider the sets

(23) X1 := {y ∈ X \ Y | Y ∪ {y} ∈ I1},
  X2 := {y ∈ X \ Y | Y ∪ {y} ∈ I2},

and consider the directed graph H(M1, M2, Y) defined above. There are two cases.

Case 1. There exists a directed path P in H(M1, M2, Y) from some vertex in X1 to some vertex in X2. (Possibly of length 0 if X1 ∩ X2 ≠ ∅.) We take a shortest such path P (that is, with a minimum number of arcs). Let P traverse the vertices y0, z1, y1, z2, y2, . . . , zm, ym of H(M1, M2, Y), in this order. By construction of the graph H(M1, M2, Y) and the sets X1 and X2, this implies that y0, . . . , ym belong to X \ Y and z1, . . . , zm belong to Y. Now output

(24) Y′ := (Y \ {z1, . . . , zm}) ∪ {y0, . . . , ym}.

Case 2. There is no directed path in H(M1, M2, Y) from any vertex in X1 to any vertex in X2. Then Y is a maximum-cardinality common independent set.

This finishes the description of the algorithm. The correctness of the algorithm is given in the following two theorems.
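The algorithm can be sketched directly from (22)-(24), given independence oracles for M1 and M2. In the following Python sketch (the oracle interface, the breadth-first search, and the example data are implementation choices made here, not prescribed by the text), a shortest X1 to X2 path is found and the augmentation (24) is applied; with the two partition matroids of Example 5.5a it becomes a bipartite matching algorithm:

```python
from collections import deque

def augment(X, indep1, indep2, Y):
    """One pass of the augmenting algorithm: return a strictly larger common
    independent set, or None if Y already has maximum cardinality."""
    X1 = {x for x in X - Y if indep1(Y | {x})}
    X2 = {x for x in X - Y if indep2(Y | {x})}
    arcs = set()                              # arcs of H(M1, M2, Y), following (22)
    for y in Y:
        for x in X - Y:
            swap = (Y - {y}) | {x}
            if indep1(swap):
                arcs.add((y, x))
            if indep2(swap):
                arcs.add((x, y))
    pred = {s: None for s in X1}              # BFS gives a shortest X1 - X2 path
    queue = deque(sorted(X1))
    goal = None
    while queue:
        v = queue.popleft()
        if v in X2:
            goal = v
            break
        for u, x in arcs:
            if u == v and x not in pred:
                pred[x] = v
                queue.append(x)
    if goal is None:                          # Case 2: Y is maximum
        return None
    path = []
    while goal is not None:                   # walk back along the path P
        path.append(goal)
        goal = pred[goal]
    return (Y - set(path)) | (set(path) - Y)  # (24): Y' = Y symmetric-diff V(P)

def max_common_independent(X, indep1, indep2):
    Y = frozenset()
    while True:
        bigger = augment(X, indep1, indep2, Y)
        if bigger is None:
            return Y
        Y = frozenset(bigger)
```

For instance, taking X to be the edge set of a bipartite graph and the two "no repeated endpoint" partition matroids of Example 5.5a, `max_common_independent` returns a maximum matching.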

. . To this end. . It implies the result of Edmonds [1970]: Theorem 5. we ﬁrst show (26) rM1 (U ) = |Y ∩ U |. So Y is a common independent set of maximum cardinality. M2 . . Hence there is a subset U of X containing X1 such that U ∩ X2 = ∅ and such that no arc of H(M1 . . . A maximum-cardinality common independent set in two matroids can be found in polynomial time. To show that Y belongs to I 1 . as we can ﬁnd a maximum-cardinality common independent set after applying at most |X| times the common independent set augmenting algorithm. we know rM1 (U ) ≥ |Y ∩ U |. Then there exists an x in U \ Y so that (Y ∩ U ) ∪ {x} ∈ I1 . As Case 2 applies.8. . Since Y ∈ I1 . ym } (otherwise P would have a shortcut). rM1 (Y ∪ Y ∪ {y0 }) = |Y |. zm . Directly from the above. . Hence we have (25). Y ) entering U . Matroid intersection 33 equal to {z1 . Proof. . . Y ) leaves U . We show (25) rM1 (U ) + rM2 (X \ U ) = |Y |. as Y ∩ U ∈ I1 . Proof. M2 . This shows (26). M2 . Then Z = Y ∪ {x} or Z = (Y \ {y}) ∪ {x} for some y ∈ Y \ U .5. . y1 . . . as (Y \ {y0 }) ∩ X1 = ∅. there is no directed X1 − X2 path in H(M1 . In the ﬁrst alternative. In the second alternative. as was shown again by Edmonds [1970].7. Theorem 5. .2. ym } belongs to I 1 . Now (25) implies that for any set Z in I1 ∩ I2 one has (27) |Z| = |Z ∩ U | + |Z \ U | ≤ rM1 (U ) + rM2 (X \ U ) = |Y |. x) is an arc of H(M1 . this implies that there exists a set Z ∈ I1 with |Z| ≥ |Y | and (Y ∩ U ) ∪ {x} ⊆ Z ⊆ Y ∪ {x}. As Y \ {y0 } ∈ I 1 . in polynomial time. since we can construct the auxiliary directed graph H(M1 . observe that rM1 (Y ∪ Y ) ≥ rM1 (Y ∪ {y0 }) = |Y | + 1. zm }) ∪ {y1 . (y. So by Lemma 5. Y ) and ﬁnd the path P (if it exists). . If Case 2 applies.Section 5. This contradicts the deﬁnition of U (as y ∈ U and x ∈ U ). Similarly we have that rM2 (X \ U ) = |Y \ U |. . . Y \ {y0 } = (Y \ {z1 . we know Y ∈ I 1. Clearly. M2 . then Y is a maximum-cardinality common independent set. 
contradicting the fact that x belongs to U . x ∈ X1 . Suppose rM1 (U ) > |Y ∩ U |. The algorithm clearly has polynomially bounded running time. Y ). The algorithm also yields a min-max relation for the maximum cardinality of a common independent set. and that.

Theorem 5.9 (Edmonds' matroid intersection theorem). Let M1 = (X, I1) and M2 = (X, I2) be matroids. Then

(28) max over Y ∈ I1 ∩ I2 of |Y| = min over U ⊆ X of (rM1(U) + rM2(X \ U)).

Proof. The inequality ≤ follows similarly as in (27). The reverse inequality follows from the fact that if the algorithm stops with set Y, we obtain a set U for which (25) holds. Therefore, the maximum in (28) is at least as large as the minimum. □

Exercises

5.26. Give an example of two matroids M1 = (X, I1) and M2 = (X, I2) so that (X, I1 ∩ I2) is not a matroid.

5.27. Derive König's matching theorem from Edmonds' matroid intersection theorem.

5.28. Let M = (X, I) be a matroid and let X1, . . . , Xm be subsets of X. Derive from Edmonds' matroid intersection theorem: (X1, . . . , Xm) has an independent transversal, if and only if the rank of the union of any t sets among X1, . . . , Xm is at least t, for any t ≥ 0. (Rado [1942].)

5.29. Let (X1, . . . , Xm) and (Y1, . . . , Ym) be subsets of the finite set X. Derive from Edmonds' matroid intersection theorem: (X1, . . . , Xm) and (Y1, . . . , Ym) have a common transversal, if and only if

(29) |(∪i∈I Xi) ∩ (∪j∈J Yj)| ≥ |I| + |J| − m

for all subsets I and J of {1, . . . , m}.

5.30. Reduce the problem of finding a Hamiltonian cycle in a directed graph to the problem of finding a maximum-cardinality common independent set in three matroids.

5.31. Let G = (V, E) be a graph and let the edges of G be coloured with |V| − 1 colours. That is, we have partitioned E into classes X1, . . . , X|V|−1, called colours. Show that there exists a spanning tree with all edges coloured differently, if and only if (V, E′) has at most |V| − t components, for any union E′ of t colours, for any t ≥ 0.

5.32. Let M1 = (X, I1) and M2 = (X, I2) be matroids. Define

(30) I1 ∨ I2 := {Y1 ∪ Y2 | Y1 ∈ I1, Y2 ∈ I2}.

(i) Show that the maximum cardinality of a set in I1 ∨ I2 is equal to

(31) min over U ⊆ X of (rM1(U) + rM2(U) + |X \ U|).

(Hint: Apply the matroid intersection theorem to M1 and M2∗.)
(ii) Derive that for each Y ⊆ X:

(32) max{|Z| | Z ⊆ Y, Z ∈ I1 ∨ I2} = min over U ⊆ Y of (rM1(U) + rM2(U) + |Y \ U|).

(iii) Derive that (X, I1 ∨ I2) is again a matroid. (This matroid is called the union of M1 and M2, denoted by M1 ∨ M2.) (Edmonds and Fulkerson [1965], Nash-Williams [1967].)
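The min-max relation (28) can be verified by exhaustive enumeration on a small instance; the sketch below (Python, reusing the bipartite-matching setting of Example 5.5a with a three-edge example graph chosen here) compares both sides:

```python
from itertools import chain, combinations

def subsets(S):
    S = sorted(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

E = {(1, 3), (1, 4), (2, 3)}
indep1 = lambda Y: len({u for u, _ in Y}) == len(Y)   # partition matroid, left ends
indep2 = lambda Y: len({v for _, v in Y}) == len(Y)   # partition matroid, right ends

def rank(S, indep):
    return max(len(T) for T in subsets(S) if indep(T))

# left-hand side of (28): largest common independent set
max_common = max(len(T) for T in subsets(E) if indep1(T) and indep2(T))
# right-hand side of (28): best split of X into U and X \ U
min_side = min(rank(U, indep1) + rank(E - set(U), indep2) for U in subsets(E))
assert max_common == min_side == 2
```

For bipartite graphs this equality specializes to König's matching theorem (cf. Exercise 5.27).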

(iv) Let M1 = (X, I1), . . . , Mk = (X, Ik) be matroids and let

(33) I1 ∨ · · · ∨ Ik := {Y1 ∪ · · · ∪ Yk | Y1 ∈ I1, . . . , Yk ∈ Ik}.

Derive from (iii) that M1 ∨ · · · ∨ Mk := (X, I1 ∨ · · · ∨ Ik) is again a matroid, and give a formula for its rank function.

5.33. (i) Let M = (X, I) be a matroid and let k ≥ 0. Show that X can be covered by k independent sets, if and only if |U| ≤ k · rM(U) for each subset U of X. (Hint: Use Exercise 5.32.) (Edmonds [1965].)
(ii) Show that the problem of finding a minimum number of independent sets covering X in a given matroid M = (X, I) is solvable in polynomial time.

5.34. Let X1, . . . , Xm be subsets of X and let k ≥ 0.
(i) Show that X can be partitioned into k partial transversals of (X1, . . . , Xm), if and only if

(34) k(m − |I|) ≥ |X \ ∪i∈I Xi|

for each subset I of {1, . . . , m}.
(ii) Derive from (i) that {1, . . . , m} can be partitioned into classes I1, . . . , Ik so that (Xi | i ∈ Ij) has a transversal, for each j = 1, . . . , k, if and only if each Y ⊆ X contains at most k|Y| of the Xi as a subset. (Hint: Interchange the roles of {1, . . . , m} and X.) (Edmonds and Fulkerson [1965].)

5.35. Let M = (X, I) be a matroid and let k ≥ 0. Show that there exist k pairwise disjoint bases of M, if and only if k(rM(X) − rM(U)) ≥ |X \ U| for each subset U of X. (Hint: Use Exercise 5.32.) (Edmonds [1965].)

5.36. Let G = (V, E) be a graph and let k ≥ 0. Show that E can be partitioned into k forests, if and only if each nonempty subset W of V contains at most k(|W| − 1) edges of G. (Hint: Use Exercise 5.33.) (Nash-Williams [1964].)

5.37. Let G = (V, E) be a connected graph and let k ≥ 0. Show that there exist k pairwise edge-disjoint spanning trees, if and only if for each partition (V1, . . . , Vt) of V into t classes there are at least k(t − 1) edges connecting different classes of this partition. (Hint: Use Exercise 5.35.) (Nash-Williams [1961], Tutte [1961].)

5.38. Let M = (X, I) be a matroid and let B and B′ be two disjoint bases. Let B be partitioned into sets Y1 and Y2. Show that there exists a partition of B′ into sets Z1 and Z2 so that both Y1 ∪ Z2 and Z1 ∪ Y2 are bases of M. (Hint: Assume without loss of generality that X = B ∪ B′. Apply the matroid intersection theorem to the matroids (M \ Y1)/Y2 and (M∗ \ Y1)/Y2.)

5.39. Let M1 and M2 be matroids so that, for i = 1, 2, we can test in polynomial time if a given set is independent in Mi.
(i) Show that the same holds for the union M1 ∨ M2.
(ii) Show that the problem of finding a maximum number of pairwise disjoint bases in a given matroid is solvable in polynomial time.

5.40. The following is a special case of a theorem of Nash-Williams [1985]. Let G = (V, E) be a simple, connected graph and let b : V → Z+. Call a graph G̃ = (Ṽ, Ẽ) a b-detachment of G if there is a function φ : Ṽ → V such that |φ⁻¹(v)| = b(v) for each v ∈ V, and such that there is a one-to-one function ψ : Ẽ → E with ψ(ẽ) = {φ(v), φ(w)} for each edge ẽ = {v, w} of G̃. Prove: there exists a connected b-detachment, if and only if for each U ⊆ V the number of components of the graph induced by V \ U is at most b(U) − |EU| + 1. Here EU denotes the set of edges intersecting U.
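Formula (31) for the union of two matroids can likewise be checked by enumeration. The sketch below (Python; both matroids are taken equal to the uniform matroid U(2,3), a choice made here for brevity) compares the maximum size of a set in I1 ∨ I2 with the right-hand side of (31):

```python
from itertools import chain, combinations

def subsets(S):
    S = sorted(S)
    return chain.from_iterable(combinations(S, k) for k in range(len(S) + 1))

X = {'a', 'b', 'c'}
indep = lambda Y: len(set(Y)) <= 2        # both matroids equal to U_{2,3}
rank = lambda S: min(len(set(S)), 2)

def in_union(Y):   # Y is in I1 v I2 iff Y splits into two independent parts
    Y = set(Y)
    return any(indep(T) and indep(Y - set(T)) for T in subsets(Y))

max_union = max(len(Y) for Y in subsets(X) if in_union(Y))
# right-hand side of (31), with rM1 = rM2 = rank
min_rhs = min(2 * rank(U) + len(X - set(U)) for U in subsets(X))
assert max_union == min_rhs == 3
```

Here two bases of U(2,3) together cover all three elements, so the whole ground set lies in I1 ∨ I2, in accordance with the formula.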

5.6. Weighted matroid intersection

We next consider the problem of finding a maximum-weight common independent set, in two given matroids, with a given weight function. The algorithm, again due to Edmonds [1970], is an extension of the algorithm given in Section 5.5. In each iteration, instead of finding a path P with a minimum number of arcs in H, we will now require P to have minimum length with respect to some length function defined on H.

To describe the algorithm: if matroids M1 = (X, I1) and M2 = (X, I2) and a weight function w : X → Q are given, call a set Y ∈ I1 ∩ I2 a max-weight common independent set if w(Y′) ≤ w(Y) for each Y′ ∈ I1 ∩ I2 with |Y′| = |Y|.

Weighted common independent set augmenting algorithm

input: matroids M1 = (X, I1) and M2 = (X, I2), a weight function w : X → Q, and a max-weight common independent set Y;
output: a max-weight common independent set Y′ with |Y′| = |Y| + 1, if it exists.

description of the algorithm: Consider again the sets X1 and X2 and the directed graph H(M1, M2, Y) on X as in the cardinality case. For any x ∈ X define the 'length' l(x) of x by:

(35) l(x) := w(x) if x ∈ Y,
  l(x) := −w(x) if x ∉ Y.

The length of a path P, denoted by l(P), is equal to the sum of the lengths of the vertices traversed by P, counting multiplicities. We consider two cases.

Case 1. There exists a directed path P in H(M1, M2, Y) from X1 to X2. We choose P so that l(P) is minimal and so that it has a minimum number of arcs among all minimum-length X1 − X2 paths. Let P traverse the vertices y0, z1, y1, . . . , zm, ym, in this order. Set

(36) Y′ := (Y \ {z1, . . . , zm}) ∪ {y0, . . . , ym},

and output Y′. (Repeating this augmenting step yields a maximum-weight common independent set; see Theorem 5.12 below.)

Case 2. There is no directed X1 − X2 path in H(M1, M2, Y). Then Y is a maximum-cardinality common independent set.

This finishes the description of the algorithm. The correctness of the algorithm if Case 2 applies follows directly from Theorem 5.7. In order to show the correctness if Case 1 applies, we first prove the following.
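The per-cardinality maxima that the weighted algorithm maintains can be exhibited by brute force. In the sketch below (Python; the bipartite instance and the weights are chosen here so that the overall optimum is not of maximum cardinality), w(Yk) is computed for the max-weight common independent set of each size k:

```python
from itertools import combinations

E = {(1, 3), (1, 4), (2, 3)}
w = {(1, 3): 5, (1, 4): 1, (2, 3): 3}
indep1 = lambda Y: len({u for u, _ in Y}) == len(Y)
indep2 = lambda Y: len({v for _, v in Y}) == len(Y)

# w(Y_k) for the max-weight common independent set Y_k of each cardinality k
best = {}
for k in range(len(E) + 1):
    weights = [sum(w[e] for e in T) for T in combinations(sorted(E), k)
               if indep1(T) and indep2(T)]
    if weights:
        best[k] = max(weights)

assert best == {0: 0, 1: 5, 2: 4}
assert max(best.values()) == 5   # the overall optimum, taken over all cardinalities
```

Note that here the overall maximum-weight common independent set has size 1, while the maximum cardinality is 2; this is why the algorithm must keep the best set of every cardinality.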

Proposition 1. Let y ∈ Y and x ∈ X \ Y and let P be a y − x path in H(M1, M2, Y). Then at least one of the following holds:

(37) (i) there exists a y − x path P′ with l(P′) ≤ l(P) and traversing fewer vertices than P;
  (ii) there exists a directed circuit C with l(C) < 0 and traversing fewer vertices than P;
  (iii) (Y \ {z1, . . . , zm}) ∪ {y1, . . . , ym} belongs to I1, where P traverses the vertices y = z1, y1, z2, y2, . . . , zm, ym = x, in this order.

(So the zi belong to Y and the yi belong to X \ Y.)

Proof. Let P traverse y = z1, y1, z2, y2, . . . , zm, ym = x, in this order, and suppose that (37)(iii) does not hold. Then by Lemma 5.2,

(38) {z1, y1}, . . . , {zm, ym} is not the only matching in H(M1, Y) with union {z1, . . . , zm, y1, . . . , ym}.

Hence there exists a permutation (j1, . . . , jm) of (1, . . . , m) with (j1, . . . , jm) ≠ (1, . . . , m) so that (zi, yji) is an arc of H(M1, M2, Y) for each i = 1, . . . , m. Now consider the collection

(39) of arcs consisting of (zi, yi) and (zi, yji) for i = 1, . . . , m, together with two copies of each arc (yi, zi+1) for i = 1, . . . , m − 1.

Now each of y1, . . . , ym−1, z2, . . . , zm is entered and left by exactly two arcs in (39), while z1 is left by exactly two arcs in (39) and ym is entered by exactly two arcs in (39). Hence the arcs in (39) can be decomposed into two simple directed z1 − ym paths P′ and P″ and a number of simple directed circuits C1, . . . , Ct. Since (j1, . . . , jm) ≠ (1, . . . , m), we have t ≥ 1. Counting the lengths of the vertices with multiplicities, we have

(40) l(P′) + l(P″) + l(C1) + · · · + l(Ct) = 2 · l(P).

If l(Cj) < 0 for some j, then (37)(ii) holds, since Cj traverses fewer vertices than P. So we may assume that l(Cj) ≥ 0 for each j = 1, . . . , t. Hence l(P′) + l(P″) ≤ 2 · l(P). If both P′ and P″ traverse fewer vertices than P, then one of them satisfies (37)(i). If one of them, P′ say, traverses the same vertices as P, then l(P′) = l(P), and hence l(P″) ≤ 2 · l(P) − l(P′) = l(P). (Note that P″ then traverses fewer vertices than P, since t ≥ 1.) So (37)(i) or (ii) applies. □

This implies:

Theorem 5.10. If Y is a max-weight common independent set, then H(M1, M2, Y) has no directed circuit of negative length.

Proof. Suppose H(M1, M2, Y) has a directed circuit C of negative length. Let C traverse the vertices z1, y1, z2, y2, . . . , zm, ym, in this order, with the zi in Y and the yi in X \ Y, and choose C so that m is minimal.

Suppose first that {z1, y1}, . . . , {zm, ym} is the only matching in H(M1, Y) with union {z1, . . . , zm, y1, . . . , ym}, and that similarly the edges {yi, zi+1} (indices mod m) form the only such matching in H(M2, Y). Then by Lemma 5.2, applied to M1 and to M2, we know that Z := (Y \ {z1, . . . , zm}) ∪ {y1, . . . , ym} belongs to I1 ∩ I2. Since w(Z) = w(Y) − l(C) > w(Y), while |Z| = |Y|, this contradicts the assumption that Y is a max-weight common independent set.

So, without loss of generality, {z1, y1}, . . . , {zm, ym} is not the only matching in H(M1, Y) with union {z1, . . . , zm, y1, . . . , ym}. Hence there exists a permutation (j1, . . . , jm) of (1, . . . , m) with (j1, . . . , jm) ≠ (1, . . . , m) so that (zi, yji) is an arc of H(M1, M2, Y) for each i = 1, . . . , m. The arcs (zi, yji) (i = 1, . . . , m), together with the arcs (yi, zi+1) of C (indices mod m), decompose into simple directed circuits C1, . . . , Ct with l(C1) + · · · + l(Ct) = l(C) < 0. Hence l(Cj) < 0 for some j, where Cj traverses fewer vertices than C. This would give a directed circuit of negative length traversing fewer vertices than C, contradicting the minimality of m. □

Theorem 5.11. Let Y be a max-weight common independent set, and suppose Case 1 applies. Then the output Y′ = (Y \ {z1, . . . , zm}) ∪ {y0, . . . , ym} is again a max-weight common independent set.

Proof. By Theorem 5.10, H(M1, M2, Y) has no directed circuit of negative length. First, the set Y′ belongs to I1 ∩ I2: this follows from the fact that P is a path without shortcuts (since each directed circuit has nonnegative length, a shortcut would give an X1 − X2 path that is not longer and has fewer arcs, contradicting the choice of P); hence Theorem 5.6 implies that Y′ ∈ I1 ∩ I2.

Suppose now that there is a Z ∈ I1 ∩ I2 with |Z| = |Y| + 1 and w(Z) > w(Y′). Since |Z| > |Y|, there exist y, z ∈ Z \ Y so that Y ∪ {y} ∈ I1 and Y ∪ {z} ∈ I2. So y ∈ X1 and z ∈ X2. Moreover, by Lemma 5.1 there exist pairwise disjoint arcs in H(M1, M2, Y) so that the tails are the elements in Y \ Z and the heads are the elements in Z \ (Y ∪ {y}); similarly, there exist pairwise disjoint arcs in H(M1, M2, Y) so that the tails are the elements in Z \ (Y ∪ {z}) and the heads are the elements in Y \ Z. These two sets of arcs together form a disjoint union of one y − z path Q and a number of directed circuits C1, . . . , Ct. Now each Cj has nonnegative length, by Theorem 5.10. This implies

(41) l(Q) ≤ l(Q) + l(C1) + · · · + l(Ct) = w(Y) − w(Z) < w(Y) − w(Y′) = l(P).

This contradicts the fact that P is a minimum-length X1 − X2 path. □

So the weighted common independent set augmenting algorithm is correct. It obviously has polynomially bounded running time. Thus we obtain the result of Edmonds [1970]:

Theorem 5.12. A maximum-weight common independent set in two matroids can be found in polynomial time.

Proof. Starting with the max-weight common independent set Y0 := ∅, we can find iteratively max-weight common independent sets Y0, Y1, . . . , Yk, where |Yi| = i for i = 0, . . . , k and where Yk is a maximum-cardinality common independent set. Taking one among Y0, . . . , Yk of maximum weight, we have a maximum-weight common independent set. □

Exercises

5.41. Give an example of two matroids M1 = (X, I1) and M2 = (X, I2) and a weight function w : X → Z+ so that there is no maximum-weight common independent set which is also a maximum-cardinality common independent set.

5.42. Reduce the problem of finding a maximum-weight common basis in two matroids to the problem of finding a maximum-weight common independent set.

5.43. Let B be a common basis of the matroids M1 = (X, I1) and M2 = (X, I2) and let w : X → Z be a weight function. Define the length function l : X → Z by l(x) := w(x) if x ∈ B and l(x) := −w(x) if x ∉ B. Show that B has maximum weight among all common bases of M1 and M2, if and only if H(M1, M2, B) has no directed circuit of negative length.

5.44. Let D = (V, A) be a directed graph, let r ∈ V, and let l : A → Z+ be a length function. Reduce the problem of finding a minimum-length rooted tree with root r to the problem of finding a maximum-weight common independent set in two matroids.

5.7. Matroids and polyhedra

The algorithmic results obtained in the previous sections have interesting consequences for polyhedra associated with matroids.

Let M = (X, I) be a matroid. The matroid polytope P(M) of M is, by definition, the convex hull of the incidence vectors of the independent sets of M. So P(M) is a polytope in R^X.

Each vector z in P(M) satisfies the following linear inequalities:

(42) z(x) ≥ 0 for x ∈ X,
  z(Y) ≤ rM(Y) for Y ⊆ X.

This follows from the fact that the incidence vector χ^Y of any independent set Y of M satisfies (42). Note that if z is an integer vector satisfying (42), then z is the incidence vector of some independent set of M.

Edmonds [1970] showed that system (42) in fact fully determines the matroid polytope P(M). It means that for each weight function w : X → R, the linear programming problem

(43) maximize w^T z subject to z(x) ≥ 0 (x ∈ X), z(Y) ≤ rM(Y) (Y ⊆ X)

has an integer optimum solution z. This optimum solution necessarily is the incidence vector of some independent set of M. In order to prove this, we also consider the LP-problem dual to (43):

(44) minimize Σ over Y ⊆ X of yY · rM(Y)
  subject to yY ≥ 0 (Y ⊆ X), Σ over Y ⊆ X with x ∈ Y of yY ≥ w(x) (x ∈ X).

We show:

Theorem 5.13. For each w : X → R, the linear program (43) has an integer optimum solution; if w is integer, then also (44) has an integer optimum solution.

Proof. Order the elements of X as y1, . . . , ym in such a way that w(y1) ≥ w(y2) ≥ · · · ≥ w(ym). Let n be the largest index for which w(yn) ≥ 0. Define Xi := {y1, . . . , yi} for i = 0, . . . , m, and

(45) Y := {yi | i ≤ n, rM(Xi) > rM(Xi−1)}.

Then Y belongs to I (cf. Section 5.1), so z := χ^Y is an integer feasible solution of (43). Moreover, define a vector y in R^{P(X)} by:

(46) yY := w(yi) − w(yi+1) if Y = Xi for some i = 1, . . . , n − 1,
  yY := w(yn) if Y = Xn,
  yY := 0 for all other Y ⊆ X.

Then y is a feasible solution of (44), integer if w is integer. We show that z and y have the same objective value, thus proving the theorem:

(47) w^T z = w(Y) = Σ over x ∈ Y of w(x) = Σ from i = 1 to n of w(yi) · (rM(Xi) − rM(Xi−1))
  = w(yn) · rM(Xn) + Σ from i = 1 to n−1 of (w(yi) − w(yi+1)) · rM(Xi) = Σ over Y ⊆ X of yY · rM(Y). □

So system (42) is totally dual integral. This directly implies:

Corollary 5.13a. The matroid polytope P(M) is determined by (42).

Proof. Immediately from Theorem 5.13. □
Proof. Order the elements of X as y1, . . . , ym in such a way that w(y1) ≥ w(y2) ≥ · · · ≥ w(ym). Let n be the largest index for which w(yn) ≥ 0. Define Xi := {y1, . . . , yi} for i = 0, . . . , m and

(45) Y := {yi | i ≤ n, rM(Xi) > rM(Xi−1)}.

Then Y belongs to I (cf. the greedy algorithm of Section 5.1). So z := χY is an integer feasible solution of (43).

Moreover, define a vector y in RP(X) by:

(46) yY := w(yi) − w(yi+1) if Y = Xi for some i = 1, . . . , n − 1,
yY := w(yn) if Y = Xn,
yY := 0 for all other Y ⊆ X.

Then y is an integer feasible solution of (44). We show that z and y have the same objective value, thus proving the theorem:

(47) wTz = w(Y) = Σx∈Y w(x) = Σ(i=1..n) w(yi) · (rM(Xi) − rM(Xi−1))
= w(yn) · rM(Xn) + Σ(i=1..n−1) (w(yi) − w(yi+1)) · rM(Xi) = ΣY⊆X yY rM(Y).

So system (42) is totally dual integral.

An even stronger phenomenon occurs at intersections of matroid polytopes. It turns out that the intersection of two matroid polytopes is exactly the convex hull of the common independent sets, as was shown again by Edmonds [1970]. In order to prove this, we first derive a basic property:

Theorem 5.14. Let M1 = (X, I1) and M2 = (X, I2) be matroids, let w : X −→ Z be a weight function, and let B be a common basis of maximum weight w(B). Then there exist functions w1, w2 : X −→ Z so that w = w1 + w2, and so that B is both a maximum-weight basis of M1 with respect to w1 and a maximum-weight basis of M2 with respect to w2.

Proof. Consider the directed graph H(M1, M2, B) with length function l as defined in Exercise 5.44. Since B is a maximum-weight common basis, H(M1, M2, B) has no directed circuits of negative length. Hence there exists a function φ : X −→ Z so that φ(y) − φ(x) ≤ l(y) for each arc (x, y) of H(M1, M2, B). Using the definition of H(M1, M2, B) and l, this implies that for y ∈ B, x ∈ X \ B:

(48) φ(x) − φ(y) ≤ −w(x) if (B \ {y}) ∪ {x} ∈ I1,
φ(y) − φ(x) ≤ w(x) if (B \ {y}) ∪ {x} ∈ I2.
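The primal solution (45) and the dual solution (46) can be generated mechanically from a rank oracle, and their objective values compared as in (47). The following sketch is our own toy check, on the uniform matroid U(2,4) (rank = min(|Y|, 2)) with an assumed weight function; it is an illustration of the construction, not part of the notes.

```python
def rank(Y):
    # rank function of the uniform matroid U(2,4): sets of at most 2 elements are independent
    return min(len(Y), 2)

def tdi_solutions(X, w, rank):
    order = sorted(X, key=lambda x: w[x], reverse=True)   # w(y1) >= ... >= w(ym)
    n = sum(1 for x in order if w[x] >= 0)                # largest index with w(yn) >= 0
    prefix = [order[:i] for i in range(len(order) + 1)]   # X_0, X_1, ..., X_m
    # (45): Y := { y_i : i <= n, r(X_i) > r(X_{i-1}) }, the greedy independent set
    Y = [order[i - 1] for i in range(1, n + 1)
         if rank(prefix[i]) > rank(prefix[i - 1])]
    # (46): dual weights carried by the nested sets X_1, ..., X_n only
    dual = {tuple(prefix[i]): w[order[i - 1]] - w[order[i]] for i in range(1, n)}
    if n > 0:
        dual[tuple(prefix[n])] = w[order[n - 1]]
    return Y, dual

X = [0, 1, 2, 3]
w = {0: 5, 1: 3, 2: 3, 3: 1}
Y, dual = tdi_solutions(X, w, rank)
primal_value = sum(w[x] for x in Y)                       # w^T z in (47)
dual_value = sum(coef * rank(S) for S, coef in dual.items())
```

Both objective values come out equal, as (47) asserts for every integer w.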

Now define

(49) w1(y) := φ(y) and w2(y) := w(y) − φ(y) for y ∈ B,
w1(x) := w(x) + φ(x) and w2(x) := −φ(x) for x ∈ X \ B.

Then w1(x) ≤ w1(y) whenever (B \ {y}) ∪ {x} ∈ I1. So, by Exercise 5.24, B is a maximum-weight basis of M1 with respect to w1. Similarly, B is a maximum-weight basis of M2 with respect to w2.

Note that if B is a maximum-weight basis of M1 with respect to some weight function w, then after adding a constant function to w this remains the case. This observation will be used in showing that a theorem similar to Theorem 5.14 holds for independent sets instead of bases.

Theorem 5.15. Let M1 = (X, I1) and M2 = (X, I2) be matroids, let w : X −→ Z be a weight function, and let Y be a maximum-weight common independent set. Then there exist weight functions w1, w2 : X −→ Z so that w = w1 + w2 and so that Y is both a maximum-weight independent set of M1 with respect to w1 and a maximum-weight independent set of M2 with respect to w2.

Proof. Let U be a set of cardinality |X| + 2 disjoint from X. Define

(50) J1 := {Y ∪ W | Y ∈ I1, W ⊆ U, |Y ∪ W| ≤ |X| + 1},
J2 := {Y ∪ W | Y ∈ I2, W ⊆ U, |Y ∪ W| ≤ |X| + 1}.

Then M̃1 := (X ∪ U, J1) and M̃2 := (X ∪ U, J2) are matroids again. Define w̃ : X ∪ U −→ Z by

(51) w̃(x) := w(x) if x ∈ X,
w̃(x) := 0 if x ∈ U.

Let W be a subset of U of cardinality |X \ Y| + 1. Then Y ∪ W is a common basis of M̃1 and M̃2. In fact, Y ∪ W is a maximum-weight common basis with respect to the weight function w̃. (Any basis B of larger weight would intersect X in a common independent set of M1 and M2 of larger weight than Y.)

So, by Theorem 5.14, there exist functions w̃1, w̃2 : X ∪ U −→ Z so that w̃1 + w̃2 = w̃ and so that Y ∪ W is both a maximum-weight basis of M̃1 with respect to w̃1 and a maximum-weight basis of M̃2 with respect to w̃2.

As ∅ ≠ W ≠ U, we have w̃1(u′) ≤ w̃1(u″) whenever u′ ∈ W, u″ ∈ U \ W. (Otherwise we could replace u′ in Y ∪ W by u″ to obtain a basis of M̃1 of larger w̃1-weight.) Similarly, w̃2(u′) ≤ w̃2(u″) whenever u′ ∈ W, u″ ∈ U \ W. Since w̃1(u) + w̃2(u) = w̃(u) = 0 for all u ∈ U, it follows that w̃1 and w̃2 are constant on U. Since we can add a constant function to w̃1 and subtract the same function from w̃2 without spoiling the required properties, we may assume that w̃1 and w̃2 are 0 on U.

Now define w1(x) := w̃1(x) and w2(x) := w̃2(x) for each x ∈ X. Then Y is both a maximum-weight independent set of M1 with respect to w1 and a maximum-weight independent set of M2 with respect to w2.

Having this theorem, it is quite easy to derive that the intersection of two matroid polytopes has integer vertices.

Theorem 5.16. The convex hull of the common independent sets of two matroids M1 = (X, I1) and M2 = (X, I2) is determined by:

(52) z(x) ≥ 0 (x ∈ X),
z(Y) ≤ rM1(Y) (Y ⊆ X),
z(Y) ≤ rM2(Y) (Y ⊆ X).

Proof. The corresponding linear programming problem is, for any w : X −→ R:

(53) maximize wTz subject to z(x) ≥ 0 (x ∈ X), z(Y) ≤ rM1(Y) (Y ⊆ X), z(Y) ≤ rM2(Y) (Y ⊆ X).

Again we consider the dual linear programming problem:

(54) minimize ΣY⊆X (y′Y rM1(Y) + y″Y rM2(Y)) subject to y′Y ≥ 0, y″Y ≥ 0 (Y ⊆ X), Σ(Y⊆X, x∈Y) (y′Y + y″Y) ≥ w(x) (x ∈ X).

Let Z be a common independent set of maximum weight w(Z). By Theorem 5.15, there exist functions w1, w2 : X −→ Z so that w1 + w2 = w and so that Z is a maximum-weight independent set in Mi with respect to wi (i = 1, 2). Applying the proof of Theorem 5.13 to w1 and w2, we know that there exist integer optimum solutions y′ and y″ for problem (44) with respect to M1, w1 and M2, w2, respectively. One easily checks that y′, y″ forms a feasible solution of (54). Optimality follows from:

(55) w(Z) = w1(Z) + w2(Z) = ΣY⊆X y′Y rM1(Y) + ΣY⊆X y″Y rM2(Y) = ΣY⊆X (y′Y rM1(Y) + y″Y rM2(Y)).

So for every w an optimum solution of (53) is attained by the incidence vector of a common independent set, proving the theorem. Moreover, system (52) is totally dual integral. Again, this directly implies:

Corollary 5.16a. If w is integer, then (53) and (54) have integer optimum solutions.

Proof. Directly from the proof of Theorem 5.16.
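For small ground sets, the common independent sets in Theorem 5.16 can simply be enumerated. The sketch below, our own illustration, encodes a bipartite matching problem as the intersection of two partition matroids (one per colour class) and finds a maximum-weight common independent set by brute force; the edge set and weights are assumptions made for the example.

```python
from itertools import combinations

edges = [('a', 1), ('a', 2), ('b', 1), ('c', 2)]            # ground set X
w = {('a', 1): 2, ('a', 2): 3, ('b', 1): 4, ('c', 2): 1}

def indep1(S):
    # partition matroid M1: at most one edge at every left vertex
    lefts = [u for u, _ in S]
    return len(lefts) == len(set(lefts))

def indep2(S):
    # partition matroid M2: at most one edge at every right vertex
    rights = [v for _, v in S]
    return len(rights) == len(set(rights))

# common independent sets = matchings; take one of maximum weight
common = [S for r in range(len(edges) + 1)
          for S in combinations(edges, r)
          if indep1(S) and indep2(S)]
best = max(common, key=lambda S: sum(w[e] for e in S))
```

Here the optimum is the matching {('b',1), ('a',2)} of weight 7, the quantity that the LP (53) over the intersection of the two matroid polytopes also attains.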

Exercises

5.45. Derive Edmonds' matroid intersection theorem (Theorem 5.9) from Theorem 5.16.

5.46. Give an example of three matroids M1, M2, and M3 on the same set X so that the intersection P(M1) ∩ P(M2) ∩ P(M3) is not the convex hull of the common independent sets.

6. Perfect matchings in regular bipartite graphs

6.1. Counting perfect matchings in 3-regular bipartite graphs

We first consider the following theorem of Voorhoeve [1979]:

Theorem 6.1. Let G = (V, E) be a 3-regular bipartite graph with 2n vertices. Then G has at least (4/3)^n perfect matchings.

Proof. For any graph G, let φ(G) denote the number of perfect matchings in G. We now define three functions. Let h(n) be the minimum of φ(G) taken over all 3-regular bipartite graphs on 2n vertices. Let f(n) be the minimum of φ(G) taken over all bipartite graphs on 2n vertices where two vertices have degree 2, while all other vertices have degree 3. Let g(n) be the minimum of φ(G) taken over all bipartite graphs on 2n vertices where one vertex has degree 2, one vertex has degree 4, while all other vertices have degree 3.

So we must show:

(1) h(n) ≥ (4/3)^n for each n.

We show a number of relations between the three functions h, f, and g that imply (1). First we have:

(2) h(n) ≥ (3/2) f(n) for each n.

Indeed, let G be a 3-regular bipartite graph on 2n vertices with φ(G) = h(n). Choose a vertex u, and let e1, e2, e3 be the edges incident with u. Let G − ei be the graph obtained from G by deleting ei. So G − ei is a graph with two vertices of degree 2, while all other vertices have degree 3. So, by definition of f(n),

(3) φ(G − ei) ≥ f(n).

Now each perfect matching M in G is also a perfect matching in two of the graphs G − e1, G − e2, G − e3. Hence we have:

(4) 2h(n) = 2φ(G) = φ(G − e1) + φ(G − e2) + φ(G − e3) ≥ 3f(n),

implying (2). Having (2), it suffices to show:

(5) f(n) ≥ (4/3)^n for each n.

Trivially one has:

(6) f(1) = 2.

Next we show that for each n:

(7) g(n) ≥ (4/3) f(n).

To see (7), let G be a bipartite graph with 2n vertices, with one vertex of degree 2, one vertex of degree 4, while all other vertices have degree 3, such that φ(G) = g(n). (Note that the vertices of degree 2 and 4 belong to the same colour class of G.) Let u be the vertex of degree 4, and let e1, . . . , e4 be the four edges incident with u. Then for each i = 1, . . . , 4, the graph G − ei has two vertices of degree 2, all other vertices having degree 3. So φ(G − ei) ≥ f(n) for each i = 1, . . . , 4. Moreover, each perfect matching M of G is a perfect matching of exactly three of the graphs G − e1, . . . , G − e4. Hence

(8) 3g(n) = 3φ(G) = φ(G − e1) + · · · + φ(G − e4) ≥ 4f(n),

implying (7).

We next show:

(9) f(n) ≥ (4/3) f(n − 1).

Then (5) follows by induction on n, using (6). To see (9), let G be a bipartite graph on 2n vertices with two vertices, u and w say, of degree 2, while all other vertices have degree 3, such that φ(G) = f(n). Note that u and w necessarily belong to different colour classes of G. Let v1 and v2 be the two neighbours of u. There are a number of cases (the first case being most general).

Case 1: v1 ≠ v2 and w ∉ {v1, v2}. So v1 and v2 are distinct vertices of degree 3. Contract the edges uv1 and uv2. We obtain a graph G′ with 2(n − 1) vertices, one vertex (the new vertex arisen by the contraction) of degree 4, one vertex (w) of degree 2, while all other vertices have degree 3. Each perfect matching in G′ is the image of a perfect matching in G. (In fact one has φ(G) = φ(G′), but that is not needed in the proof.) So, using (7),

(10) f(n) = φ(G) ≥ φ(G′) ≥ g(n − 1) ≥ (4/3) f(n − 1).

Case 2: v1 ≠ v2 = w. So v1 has degree 3 and v2 has degree 2. Contract the edges uv1 and uv2. We obtain a 3-regular bipartite graph G′ with 2(n − 1) vertices. Again, each perfect matching in G′ is the image of a perfect matching in G. So, using (2),

(11) f(n) = φ(G) ≥ φ(G′) ≥ h(n − 1) ≥ (3/2) f(n − 1) ≥ (4/3) f(n − 1).

Case 3: v1 = v2 ≠ w. So there are two parallel edges connecting u and v1, and v1 has degree 3. Consider the graph G′ = G − u − v1 (the graph obtained by deleting vertices u and v1 and all edges incident with them). Then G′ is a bipartite graph with 2(n − 1) vertices, with two vertices of degree 2 (w and the third neighbour of v1), while all other vertices have degree 3. Moreover, each perfect matching M in G′ can be extended in two ways to a perfect matching in G (since there are two parallel edges connecting u and v1). So

(12) f(n) = φ(G) ≥ 2φ(G′) ≥ 2f(n − 1) ≥ (4/3) f(n − 1).

Case 4: v1 = v2 = w.
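For small graphs, φ(G) is the permanent of the biadjacency matrix, so the bound of Theorem 6.1 can be spot-checked directly. A brute-force sketch (our own, exponential and only for tiny n): K3,3 is 3-regular bipartite with 2n = 6 vertices and has 3! = 6 perfect matchings, comfortably above (4/3)^3 ≈ 2.37.

```python
from itertools import permutations

def count_perfect_matchings(biadj):
    # permanent of the n x n biadjacency matrix, by brute force over permutations
    n = len(biadj)
    return sum(all(biadj[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

k33 = [[1, 1, 1] for _ in range(3)]   # biadjacency matrix of K_{3,3}
phi = count_perfect_matchings(k33)
```

For a 3-regular bipartite graph on 2n vertices this count is always at least (4/3)^n by Theorem 6.1.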

So u and v1 form a component of G, with two parallel edges connecting u and v1. Consider the graph G′ = G − u − v1. Then G′ is a 3-regular bipartite graph with 2(n − 1) vertices. Again, each perfect matching M in G′ can be extended in two ways to a perfect matching in G (since there are two parallel edges connecting u and v1). So, using (2),

(13) f(n) = φ(G) ≥ 2φ(G′) ≥ 2h(n − 1) ≥ 3f(n − 1) ≥ (4/3) f(n − 1).

This proves (9), and hence Theorem 6.1.

6.2. The factor 4/3 is best possible

Let α be the largest real number with the property that each 3-regular bipartite graph with 2n vertices has at least α^n perfect matchings. The fact that α ≥ 4/3 follows directly from Theorem 6.1. We show:

Theorem 6.2. α = 4/3.

Proof. To see the inequality α ≤ 4/3, fix n. Let Π be the set of permutations of {1, . . . , 3n}. For any π ∈ Π, let Gπ be the bipartite graph with vertices u1, . . . , un, v1, . . . , vn and edges e1, . . . , e3n, where

(14) ei connects u⌈i/3⌉ and v⌈π(i)/3⌉ for i = 1, . . . , 3n.

(Here ⌈x⌉ denotes the upper integer part of x.) So Gπ is a 3-regular bipartite graph with 2n vertices. Hence, by definition of α,

(15) φ(Gπ) ≥ α^n,

where φ(Gπ) denotes the number of perfect matchings in Gπ.

On the other hand,

(16) Σπ∈Π φ(Gπ) = 3^n 3^n n!(2n)!.

This can be seen as follows. The left hand side is equal to the number of pairs (π, I), where π is a permutation of {1, . . . , 3n} and where I is a subset of {1, . . . , 3n} such that {ei | i ∈ I} forms a perfect matching in Gπ, that is, such that

(17) (i) |I ∩ {3j − 2, 3j − 1, 3j}| = 1 for each j = 1, . . . , n,
(ii) |π(I) ∩ {3j − 2, 3j − 1, 3j}| = 1 for each j = 1, . . . , n.

Now by first choosing I satisfying (17)(i) (which can be done in 3^n ways), and next choosing a permutation π of {1, . . . , 3n} satisfying (17)(ii) (which can be done in 3^n n!(2n)! ways), we obtain (16).

Since |Π| = (3n)!, (15) and (16) imply

(18) α ≤ (3^{2n} n!(2n)! / (3n)!)^{1/n}.
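The counting identity (16) can be verified exhaustively for small n by enumerating the pairs (π, I) of (17). A sketch (our own; 0-based indices, so block j is {3j, 3j+1, 3j+2} and condition (17)(ii) becomes: π maps the chosen indices into n distinct blocks):

```python
from itertools import permutations, product
from math import factorial

def count_pairs(n):
    # pairs (pi, I): I takes one edge index from each block of three, (17)(i),
    # and pi sends the chosen indices into n distinct blocks, (17)(ii)
    blocks = [range(3 * j, 3 * j + 3) for j in range(n)]
    total = 0
    for pi in permutations(range(3 * n)):
        for I in product(*blocks):
            if len({pi[i] // 3 for i in I}) == n:
                total += 1
    return total
```

For n = 1 this gives 18 = 3·3·1!·2!, and for n = 2 it gives 3888 = 9·9·2!·4!, matching the right-hand side of (16).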

Hence, with Stirling's formula

(19) n! ≈ (n/e)^n √(2πn),

which says, in fact, that

(20) lim(n→∞) n!^{1/n} / n = 1/e,

one derives from (18) that α ≤ 4/3. Indeed, by (20) we have n!^{1/n} ≈ n/e, ((2n)!)^{1/n} ≈ (2n/e)^2 and ((3n)!)^{1/n} ≈ (3n/e)^3, so the right-hand side of (18) tends to 9 · (n/e) · (2n/e)^2 / (3n/e)^3 = 4/3 as n → ∞. This completes the proof of Theorem 6.2.

7. Minimum circulation of railway stock

7.1. The problem

Nederlandse Spoorwegen (Dutch Railways) runs an hourly train service on its route Amsterdam-Schiphol Airport-Leyden-The Hague-Rotterdam-Dordrecht-Roosendaal-Middelburg-Vlissingen vice versa, with a fixed timetable of rides 2108-2176 and 2123-2191 between Amsterdam, Rotterdam, Roosendaal, and Vlissingen, for each day from Monday till Friday.

Table 1. Timetable Amsterdam-Vlissingen vice versa (departure and arrival times per ride; the detailed times are not reproduced here).

The trains have more stops, but for our purposes only those given in the table are of interest. In a first variant of the problem considered, the train stock consists of one type of two-way train-units, each consisting of three carriages. The number of seats in any unit is: 38 first class and 163 second class.

Table 3. Number of seats.

For each of the stages of any scheduled train, Nederlandse Spoorwegen has determined an expected number of passengers, divided into first class and second class.

Table 2. Numbers of required first class (up) and second class (down) seats (one pair of entries per stage of each scheduled train; details not reproduced here).

The problem to be solved is: What is the minimum amount of train stock necessary to perform the service in such a way that at each stage there are enough seats? In order to answer this question, one should know a number of further characteristics and constraints.
Each unit has at both ends an engineer's cabin, and units can be coupled together, up to a certain maximum number of units. This maximum is trajectory-dependent, and depends on lengths of station platforms, curvature of bends, required acceleration speed and braking distance, etc. On each of the trajectories of the line Amsterdam-Vlissingen this maximum number is 15 carriages, meaning 5 train-units.

The train length can be changed, by coupling or decoupling units, at the terminal stations of the line, that is at Amsterdam and Vlissingen, and en route at two intermediate stations: Rotterdam and Roosendaal. Any train-unit decoupled from a train arriving at place X at time t can be linked up to any other train departing from X at any time later than t. (The Amsterdam-Vlissingen schedule is such that in practice this gives enough time to make the necessary switchings.)

A last condition put is that for each place X ∈ {Amsterdam, Rotterdam, Roosendaal, Vlissingen}, the number of train-units staying overnight at X should be constant during the week (but may vary for different places). This requirement is made to facilitate surveying the stock, and to equalize at any place the load of overnight cleaning and maintenance throughout the week. It is not required that the same train-unit, after a night in Roosendaal, say, should return to Roosendaal at the end of the day. Only the number of units is of importance.

It is assumed that if there is sufficient stock for Monday till Friday, then this should also be enough for the weekend services, since in the weekend a few early trains are cancelled, and on the remaining trains there is a smaller expected number of passengers. Moreover, it is not taken into consideration that stock can be exchanged during the day with other lines of the network. In practice this will happen, but initially this possibility is ignored. (We will return below to this issue.) Another point left out of consideration is the regular maintenance and repair of stock and the amount of reserve stock that should be maintained, as this generally amounts to just a fixed percentual addition on top of the net minimum.

Given these problem data and characteristics, one may ask for the minimum number of train-units that should be available to perform the daily cycle of train rides required.

7.2. A network model

If only one type of railway stock is used, a classical method can be applied to solve the problem, based on min-cost circulations in networks (see Ferguson and Dantzig [1955], Bartlett [1957]; cf. also Boldyreff [1955], Feeney [1957], van Rees [1965], Norman and Dowling [1968], White and Bomberault [1969]).

To this end, a directed graph D = (V, A) is constructed as follows. For each place X ∈ {Amsterdam, Rotterdam, Roosendaal, Vlissingen} and for each time t at which any train leaves or arrives at X, we make a vertex (X, t). So the vertices of D correspond to all 198 time entries in the timetable (Table 1). For any stage of any train ride, leaving place X at time t and arriving at place Y at time t′, we make a directed arc from (X, t) to (Y, t′). For instance, there is an arc from (Roosendaal, 7.43) to (Vlissingen, 8.38).

We can now describe any possible routing of train stock as a function f : A −→ Z+ . f should satisfy the . 8.56). Here δ + (v) denotes the set of arcs of D that are entering vertex v and δ − (v) denotes the set of arcs of D that are leaving v. from (Rotterdam. the ﬂow conservation law.35).38).01) to (Rotterdam. t ). All arcs are oriented clockwise Finally. at any vertex v of D one should have: (1) a∈δ + (v) f (a) = a∈δ − (v) f (a). 8. this function is a circulation. Minimum circulation of railway stock e. 8. t) to (X. 12 11 10 13 14 9 15 8 16 7 17 6 18 5 Vlissingen 19 4 Roosendaal 20 3 Rotterdam 21 2 1 Amsterdam 22 23 24 Figure 7. t ). then f (a) is the number of units deployed for that stage. where t is the last time of the day at which any train leaves or arrives at X and where t is the ﬁrst time of the day at which any train leaves or arrives at X.38) to (Vlissingen. 8. in order to satisfy the demand and capacity constraints. and from (Vlissingen. for each place X there will be an arc from (X. 8.54) to (Roosendaal. then f (a) is equal to the number of units present at place X in the time period t–t (possibly overnight). from (Rotterdam. First of all. If a corresponds to an arc from (X. That is.32). So there is an arc from (Roosendaal. from (Vlissingen. where f (a) denotes the following.56) to (Vlissingen.50 Chapter 7. t) to (X.1 The graph D. Moreover. 5.g. If a corresponds to a ride stage. 9. 8.32) to (Rotterdam. 23. 8..29).

That is. Having this model. A network model following condition for each arc a corresponding to a stage: (2) d(a) ≤ f (a) ≤ c(a). It gives the number of train-units that are used. we could restrict ourselves to: (3) Minimize a∈A◦ f (a).1) the following optimum solution was obtained in 0. (See also the classical standard reference Ford and Fulkerson [1962] and the recent encyclopedic treatment Ahuja. This number is also equal to the total ﬂow on the ‘overnight’ arcs. With the fast linear programming package CPLEX (version 2. So if we wish to minimize the total number of units deployed.1. and Orlin [1993]. the total ﬂow on the arcs crossing the section is independent of the choice of the section. Hence determining the minimum number of train-units amounts to solving a minimum-cost circulation problem. So |A◦ | = 4 in the example. based on mincost augmenting paths and cycles ( Jewell [1958]. the problem can be solved easily with any linear programming package. and cost(a) = 0 for all other arcs. Magnanti. 51 Here c(a) gives the ‘capacity’ for the stage. Busacker and Gowen [1960].05 CPUseconds (on an SGI R4400): train number Amsterdam-Rotterdam Rotterdam-Roosendaal Roosendaal-Vlissingen train number Vlissingen-Roosendaal Roosendaal-Rotterdam Rotterdam-Amsterdam 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191 1 3 3 2 2 4 3 2 3 3 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 5 4 3 5 5 4 4 4 3 4 3 2 2 2 2 2 2 1 1 1 2 1 1 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176 2 2 1 4 4 3 4 4 3 3 3 3 4 4 2 2 3 2 2 2 2 2 2 2 2 3 2 3 3 2 2 4 3 4 4 2 3 3 2 1 2 1 1 1 4 1 1 1 1 1 Table 5. the lower bound on the number of units required by the expected number of passengers as given in Table 2. Minty [1960]. d(a) denotes the ‘demand’ for that stage. Minimum circulation with one type of stock . Iri [1960]. Edmonds and Karp [1972]) or on ‘out-of-kilter’ (Fulkerson [1961]. 
in our example c(a) = 15 throughout.05 CPUseconds on an SGI R4400. Furthermore. Implementation gives solutions of the problem (for the above data) in about 0. Here A◦ denotes the set of overnight arcs.Section 7. Lower bounds on the number of train-units Note that. where the cost function is quite trivial: we have cost(a) = 1 if a is an overnight arc. at any section of the graph in Figure 7.2. with Table 3 we obtain the following lower bounds on the numbers of train-units: train number Amsterdam-Rotterdam Rotterdam-Roosendaal Roosendaal-Vlissingen train number Vlissingen-Roosendaal Roosendaal-Rotterdam Rotterdam-Amsterdam 2123 2127 2131 2135 2139 2143 2147 2151 2155 2159 2163 2167 2171 2175 2179 2183 2187 2191 1 3 3 2 2 4 3 2 3 3 2 3 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 4 3 4 5 4 4 4 3 4 3 2 2 2 2 2 2 1 1 1 2 1 1 2108 2112 2116 2120 2124 2128 2132 2136 2140 2144 2148 2152 2156 2160 2164 2168 2172 2176 2 2 1 3 4 3 4 4 3 3 3 3 4 4 2 2 3 2 2 2 2 2 2 2 2 3 2 2 3 2 2 4 3 3 4 2 2 3 2 1 2 1 1 1 1 1 1 1 1 1 Table 4. by the ﬂow conservation law. that is. Alternatively. It is easy to see that this fully models the problem. we can apply standard min-cost circulation algorithms. since by the integrality of the input data and by the total unimodularity of the underlying matrix the optimum basic solution will have integer values only.

Required are 22 units, divided during the night over the four couple-stations as follows (Table 6. Required stock (one type)): Amsterdam 4 units (12 carriages), Rotterdam 2 units (6 carriages), Roosendaal 8 units (24 carriages), Vlissingen 8 units (24 carriages); total 22 units (66 carriages).

It is quite direct to modify and extend the model so as to contain several other problems. Instead of minimizing the number of train-units, one can minimize the amount of carriage-kilometers that should be made every day, or any linear combination of both quantities. Furthermore, one can put an upper bound on the number of units that can be stored at any of the stations.

Instead of considering one line only, one can more generally consider networks of lines that share the same stock of railway material, including trains that are scheduled to be split or combined. (Nederlandse Spoorwegen has trains from The Hague and Rotterdam to Leeuwarden and Groningen that are combined to one train on the common trajectory between Utrecht and Zwolle.) If only one type of unit is employed for such a part of the network, the problem can be solved fast even for large networks.

7.3. More types of trains

The problem becomes harder if there are several types of trains that can be deployed for the train service, and if some of the types can be coupled together to form a train of mixed composition. Let us restrict ourselves to the case Amsterdam-Vlissingen again, where now we can deploy two types of two-way train-units that can be coupled together. The two types are type III, each unit of which consists of 3 carriages, and type IV, each unit of which consists of 4 carriages. The capacities are given in the following table (Table 7. Number of seats): type III has 38 first class and 163 second class seats; type IV has 65 first class and 218 second class seats.

Again, the demands of the train stages are given in Table 2. The maximum number of carriages that can be in any train is again 15. This means that if a train consists of x units of type III and y units of type IV, then 3x + 4y ≤ 15 should hold.

It is quite easy to extend the model above to the present case. Clearly, if for each scheduled train we would prescribe which type of unit should be deployed, the problem could be decomposed into separate problems of the type above. But if we do not make such a prescription, we should extend the model to a 'multi-commodity circulation' model. Again we consider the directed graph D = (V, A) as above. At each arc a, let f(a) be the number of units of type III and let g(a) similarly represent the number of units of type IV on the stage corresponding to a.

However, in contrast to the case of one type of unit, now we cannot speak of a minimum number of units required: there are two dimensions. Both f : A −→ Z+ and g : A −→ Z+ are circulations, that is, they satisfy the flow conservation law:

(4) Σa∈δ+(v) f(a) = Σa∈δ−(v) f(a) and Σa∈δ+(v) g(a) = Σa∈δ−(v) g(a) for each vertex v.

The capacity constraint now is:

(5) 3f(a) + 4g(a) ≤ 15 for each arc a representing a stage.

The demand constraint can be formulated as follows:

(6) 38f(a) + 65g(a) ≥ p1(a) and 163f(a) + 218g(a) ≥ p2(a) for each arc a representing a stage,

where p1(a) and p2(a) denote the number of first class and second class seats required (Table 2).

Let costIII and costIV represent the cost of purchasing one unit of type III and of type IV, respectively. Although train-units of type IV are more expensive than those of type III, they are cheaper per carriage, that is:

(7) costIII < costIV < (4/3) costIII.

This is due to the fact that engineer's cabins are relatively expensive.

One variant of the problem is to find f and g so as to

(8) Minimize Σa∈A◦ (costIII f(a) + costIV g(a)).

So the problem is an integer linear programming problem, with 198 integer variables. However, we lose the pleasant phenomenon observed above that when solving the problem as a linear programming problem we automatically obtain an optimum solution with integer values only: the classical min-cost circulation algorithms do not apply now. One could implement variants of augmenting paths and cycles techniques, but they generally lead to fractional circulations, with certain values being non-integer. (Also Ford and Fulkerson's column generation technique, Ford and Fulkerson [1958], yields fractional solutions, so that minimum train compositions need not be unique.)

Solving the problem in this form with the integer programming package CPLEX (version 2.1) would give (for the Amsterdam-Vlissingen example) a running time of several hours, which is too long if one wishes to compare several problem data. This long running time is caused by the fact that, although a fractional optimum solution is found quickly, a large number of possibilities should be checked in a branching tree (corresponding to rounding fractional values up or down) before one has found an integer-valued optimum solution. However, there are ways of speeding up the process, by sharpening the constraints and by exploiting more facilities offered by CPLEX.
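Per stage, the compositions allowed by (5) and (6) form a small finite set, so the cheapest one can be found by enumeration. This deliberately ignores the coupling between stages that the circulation model enforces, so it is only a per-stage sketch (our own; costIII = 4 and costIV = 5 are the values used later in these notes):

```python
def cheapest_composition(p1, p2, cost3=4, cost4=5):
    # cheapest (x, y) = (# type III units, # type IV units) satisfying (5) and (6)
    feasible = [(x, y)
                for x in range(6) for y in range(4)
                if 3 * x + 4 * y <= 15          # (5): at most 15 carriages
                and 38 * x + 65 * y >= p1       # (6): first class seats
                and 163 * x + 218 * y >= p2]    # (6): second class seats
    return min(feasible, key=lambda xy: cost3 * xy[0] + cost4 * xy[1])
```

For the stage with p1 = 47 and p2 = 344, for instance, one unit of each type (cost 9) beats two type IV units (cost 10).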

The conditions (5) and (6) can be sharpened in the following way. For each arc a representing a stage, the two-dimensional vector (f(a), g(a)) should be an integer vector in the polygon

(9) Pa := {(x, y) | x ≥ 0, y ≥ 0, 3x + 4y ≤ 15, 38x + 65y ≥ p1(a), 163x + 218y ≥ p2(a)}.

For instance, the trajectory Rotterdam-Amsterdam of train 2132 gives the polygon

(10) Pa = {(x, y) | x ≥ 0, y ≥ 0, 3x + 4y ≤ 15, 38x + 65y ≥ 47, 163x + 218y ≥ 344}.

Figure 7.2. The polygon Pa (horizontal axis: type III; vertical axis: type IV).

In a sense, the inequalities (10) are too wide: they could be tightened so as to describe exactly the convex hull of the integer vectors in the polygon Pa (the 'integer hull').

Figure 7.3. The integer hull of Pa.

Thus for the segment Rotterdam-Amsterdam of train 2132 the constraints (10) can be sharpened to:

(11) x ≥ 0, y ≥ 0, y ≤ 3, 3x + 4y ≤ 15, x + y ≥ 2, x + 2y ≥ 3.
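For the polygon (10) one can enumerate its integer points and check that the inequalities added in (11), x + y ≥ 2 and x + 2y ≥ 3, are valid for every integer point while cutting off part of the fractional polygon. A small verification sketch (our own):

```python
def in_Pa(x, y):
    # the LP relaxation (10) for the Rotterdam-Amsterdam stage of train 2132
    return (x >= 0 and y >= 0 and 3 * x + 4 * y <= 15
            and 38 * x + 65 * y >= 47 and 163 * x + 218 * y >= 344)

def cuts(x, y):
    # the extra inequalities of (11)
    return x + y >= 2 and x + 2 * y >= 3

integer_points = [(x, y) for x in range(6) for y in range(4) if in_Pa(x, y)]
fractional = (0, 1.6)   # lies in Pa but violates the cut x + y >= 2
```

Every integer point of Pa satisfies both cuts, so the cuts shave only fractional area, which is exactly what makes them useful in the branch-and-bound phase.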

Doing this for each of the 99 polygons representing a stage gives a sharper set of inequalities, which helps to obtain more easily an integer optimum solution from a fractional solution. (This is a weak form of application of the technique of polyhedral combinatorics.) Finding all these inequalities can be done in a pre-processing phase, which takes about 0.04 CPU-seconds.

Another ingredient that improves the performance of CPLEX when applied to this problem is to give it an order in which the branch-and-bound procedure should select variables. In particular, one can give higher priority to variables that correspond to peak hours (as one may expect that they form the bottleneck in obtaining a minimum circulation), and lower priority to those corresponding to off-peak periods.

Implementation of these techniques makes that CPLEX gives a solution to the Amsterdam-Vlissingen problem in 1.58 CPU-seconds (taking costIII = 4 and costIV = 5):

Table 8. Minimum circulation with two types of stock; an entry x + y means x units of type III and y units of type IV (details not reproduced here).

In total, one needs 5 units of type III and 12 units of type IV, divided during the night as follows:

            number of units   number of units   total number   total number
            of type III       of type IV        of units       of carriages
Amsterdam         0                 2                2               8
Rotterdam         0                 2                2               8
Roosendaal        3                 3                6              21
Vlissingen        2                 5                7              26
Total             5                12               17              63

Table 9. Required stock (two types)

So comparing this solution with the solution for one type only (Table 6), the possibility of having two types gives both a decrease in the number of train-units and in the number of carriages needed.

Interestingly, it turns out that 17 is the minimum number of train-units needed and 63 is the minimum number of carriages needed. (This can be shown by finding a minimum circulation first for costIII = costIV = 1 and next for costIII = 3, costIV = 4.) So any feasible circulation with stock of Types III and IV requires at least 17 train-units and at least 63 carriages. Moreover, the circulation is optimum for any cost function satisfying (7). We observed a similar phenomenon when checking other input data (although there is no mathematical reason for this fact, and it is not difficult to construct examples where it does not show up).

Again, variants as described at the end of Section 7.2 also apply to this more extended model. One can include minimizing the number of carriage-kilometers as an objective. Moreover, one can consider networks of lines, or the option that in some of the trains a buffet section is scheduled (where some of the types contain a buffet).

Our research for NS in fact has focused on more extended problems that require more complicated models and techniques. One requirement is that, at any of the four stations given (Amsterdam, Rotterdam, Roosendaal, Vlissingen), one may either couple units to or decouple units from a train, but not both simultaneously. Moreover, one may couple fresh units only to the front of the train, and decouple laid-off units only from the rear. A further requirement is that in any train ride Amsterdam-Vlissingen there should be at least one unit that makes the whole trip. (One may check that these conditions are not met by all trains in the solution given in Table 8.) All this means that the order of the different units in a train does matter, and that conditions have a more global impact: the order of the units in a certain morning train can still influence the order of some evening train. This does not fit directly in the circulation model described above, and requires an extension. The method we have developed for NS so far, based on introducing extra variables, extending the graph described above, and utilizing some heuristic arguments, yields a running time (with CPLEX) of about 30 CPUseconds for the Amsterdam-Vlissingen problem.
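The totals behind Table 9 can be checked directly from the carriage counts (3 carriages per Type III unit and 4 per Type IV unit, as in the bound 3x + 4y ≤ 15); a minimal sketch:

```python
# Arithmetic check of the stock totals in Table 9:
# Type III units have 3 carriages, Type IV units have 4.
stock = {  # station: (units of type III, units of type IV)
    "Amsterdam":  (0, 2),
    "Rotterdam":  (0, 2),
    "Roosendaal": (3, 3),
    "Vlissingen": (2, 5),
}
units = {s: x + y for s, (x, y) in stock.items()}
carriages = {s: 3*x + 4*y for s, (x, y) in stock.items()}
print(sum(units.values()), sum(carriages.values()))  # 17 63
```

Minimizing with costIII = costIV = 1 minimizes the number of units, while costIII = 3, costIV = 4 minimizes the number of carriages; that one and the same circulation attains both minima is exactly the phenomenon described above.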

References

[1975] G.M. Adel'son-Vel'skiĭ, E.A. Dinits, A.V. Karzanov, Potokovye algoritmy [Russian: Flow algorithms], Izdatel'stvo "Nauka", Moscow, 1975.
[1993] R.K. Ahuja, T.L. Magnanti, J.B. Orlin, Network Flows — Theory, Algorithms, and Applications, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
[1971] I. Anderson, Perfect matchings of a graph, Journal of Combinatorial Theory, Series B 10 (1971) 183-186.
[1957] T.E. Bartlett, An algorithm for the minimum number of transport units to maintain a fixed schedule, Naval Research Logistics Quarterly 4 (1957) 139-149.
[1971] F. Bock, An algorithm to construct a minimum directed spanning tree in a directed network, in: Developments in Operations Research, Volume 1 (Proceedings of the Third Annual Israel Conference on Operations Research, Tel Aviv, June 1969; B. Avi-Itzhak, ed.), Gordon and Breach, New York, 1971, pp. 29-44.
[1955] A.W. Boldyreff, Determination of the maximal steady state flow of traffic through a railroad network, Journal of the Operations Research Society of America 3 (1955) 443-465.
[1926] O. Boruvka, O jistém problému minimálním [Czech, with German summary: On a minimal problem], Práce Moravské Přírodovědecké Společnosti Brno [Acta Societatis Scientiarum Naturalium Moravicae] 3 (1926) 37-58.
[1960] R.G. Busacker, P.J. Gowen, A Procedure for Determining a Family of Minimum-Cost Network Flow Patterns, Technical Paper ORO-TP-15, Operations Research Office, The Johns Hopkins University, Bethesda, Maryland, 1960.
[1965] Y.-j. Chu, T.-h. Liu, On the shortest arborescence of a directed graph, Scientia Sinica [Peking] 14 (1965) 1396-1400.
[1959] E.W. Dijkstra, A note on two problems in connexion with graphs, Numerische Mathematik 1 (1959) 269-271.
[1965] J. Edmonds, Minimum partition of a matroid into independent subsets, Journal of Research National Bureau of Standards Section B 69 (1965) 67-72.
[1967] J. Edmonds, Optimum branchings, Journal of Research National Bureau of Standards Section B 71 (1967) 233-240 [reprinted in: Mathematics of the Decision Sciences Part 1 (Proceedings Fifth Summer Seminar on the Mathematics of the Decision Sciences, Stanford, California, 1967; G.B. Dantzig, A.F. Veinott, Jr, eds.) [Lectures in Applied Mathematics Vol. 11], American Mathematical Society, Providence, Rhode Island, 1968, pp. 346-361].
[1970] J. Edmonds, Submodular functions, matroids, and certain polyhedra, in: Combinatorial Structures and Their Applications (Proceedings Calgary International Conference on Combinatorial Structures and Their Applications, Calgary, Alberta, June 1969; R. Guy, H. Hanani, N. Sauer, J. Schönheim, eds.), Gordon and Breach, New York, 1970, pp. 69-87.
[1965] J. Edmonds, D.R. Fulkerson, Transversals and matroid partition, Journal of Research National Bureau of Standards Section B 69 (1965) 147-153.
[1972] J. Edmonds, R.M. Karp, Theoretical improvements in algorithmic efficiency for network flow problems, Journal of the Association for Computing Machinery 19 (1972) 248-264.
[1984] A. Ehrenfeucht, V. Faber, H.A. Kierstead, A new method of proving theorems on chromatic index, Discrete Mathematics 52 (1984) 159-164.

[1976] S. Even, A. Itai, A. Shamir, On the complexity of timetable and multicommodity flow problems, SIAM Journal on Computing 5 (1976) 691-703.
[1957] G.J. Feeney, The empty boxcar distribution problem, in: Proceedings of the First International Conference on Operational Research (Oxford 1957) (M. Davies, R.T. Eddison, T. Page, eds.), Operations Research Society of America, Baltimore, Maryland, 1957, pp. 250-265.
[1955] A.R. Ferguson, G.B. Dantzig, The problem of routing aircraft, Aeronautical Engineering Review 14 (1955) 51-55.
[1958] L.R. Ford, Jr, D.R. Fulkerson, A suggested computation for maximal multi-commodity network flows, Management Science 5 (1958) 97-101.
[1962] L.R. Ford, Jr, D.R. Fulkerson, Flows in Networks, Princeton University Press, Princeton, New Jersey, 1962.
[1980] S. Fortune, J. Hopcroft, J. Wyllie, The directed subgraph homeomorphism problem, Theoretical Computer Science 10 (1980) 111-121.
[1917] G. Frobenius, Über zerlegbare Determinanten, Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin (1917) 274-277 [reprinted in: Ferdinand Georg Frobenius, Gesammelte Abhandlungen, Band III (J.-P. Serre, ed.), Springer, Berlin, 1968, pp. 701-704].
[1961] D.R. Fulkerson, An out-of-kilter method for minimal-cost flow problems, Journal of the Society for Industrial and Applied Mathematics 9 (1961) 18-27.
[1974] D.R. Fulkerson, Packing rooted directed cuts in a weighted directed graph, Mathematical Programming 6 (1974) 1-13.
[1958] T. Gallai, Maximum-minimum Sätze über Graphen, Acta Mathematica Academiae Scientiarum Hungaricae 9 (1958) 395-434.
[1959] T. Gallai, Über extreme Punkt- und Kantenmengen, Annales Universitatis Scientiarum Budapestinensis de Rolando Eötvös Nominatae, Sectio Mathematica 2 (1959) 133-138.
[1935] P. Hall, On representatives of subsets, The Journal of the London Mathematical Society 10 (1935) 26-30 [reprinted in: The Collected Works of Philip Hall (K.W. Gruenberg, J.E. Roseblade, eds.), Clarendon Press, Oxford, 1988, pp. 165-169].
[1981] I. Holyer, The NP-completeness of edge-coloring, SIAM Journal on Computing 10 (1981) 718-720.
[1961] T.C. Hu, The maximum capacity route problem, Operations Research 9 (1961) 898-900.
[1963] T.C. Hu, Multi-commodity network flows, Operations Research 11 (1963) 344-360.
[1960] M. Iri, A new method of solving transportation-network problems, Journal of the Operations Research Society of Japan 3 (1960) 27-87.
[1958] W.S. Jewell, Optimal Flows Through Networks [Sc.D. Thesis], Interim Technical Report No. 8, U.S. Army of Ordnance Research: Fundamental Investigations in Methods of Operations Research, Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts, 1958 [partially reprinted in Jewell [1966]].
[1975] R.M. Karp, On the computational complexity of combinatorial problems, Networks 5 (1975) 45-68.
[1931] D. Kőnig, Graphok és matrixok [Hungarian: Graphs and matrices], Matematikai és Fizikai Lapok 38 (1931) 116-119.

[1932] D. Kőnig, Über trennende Knotenpunkte in Graphen (nebst Anwendungen auf Determinanten und Matrizen), Acta Litterarum ac Scientiarum Regiae Universitatis Hungaricae Francisco-Josephinae, Sectio Scientiarum Mathematicarum [Szeged] 6 (1932-34) 155-179.
[1984] M.R. Kramer, J. van Leeuwen, The complexity of wire-routing and finding minimum area layouts for arbitrary VLSI circuits, in: VLSI-Theory (F.P. Preparata, ed.) [Advances in Computing Research, Volume 2], JAI Press, Greenwich, Connecticut, 1984, pp. 129-146.
[1956] J.B. Kruskal, Jr, On the shortest spanning subtree of a graph and the traveling salesman problem, Proceedings of the American Mathematical Society 7 (1956) 48-50.
[1975] L. Lovász, Three short proofs in graph theory, Journal of Combinatorial Theory, Series B 19 (1975) 269-271.
[1975] J.F. Lynch, The equivalence of theorem proving and the interconnection problem, (ACM) SIGDA Newsletter 5:3 (1975) 31-36.
[1960] G.J. Minty, Monotone networks, Proceedings of the Royal Society of London, Series A. Mathematical and Physical Sciences 257 (1960) 194-212.
[1961] C.St.J.A. Nash-Williams, Edge-disjoint spanning trees of finite graphs, The Journal of the London Mathematical Society 36 (1961) 445-450.
[1964] C.St.J.A. Nash-Williams, Decomposition of finite graphs into forests, The Journal of the London Mathematical Society 39 (1964) 12.
[1967] C.St.J.A. Nash-Williams, An application of matroids to graph theory, in: Theory of Graphs — International Symposium — Théorie des graphes — Journées internationales d'étude (Rome, 1966; P. Rosenstiehl, ed.), Gordon and Breach, New York, and Dunod, Paris, 1967, pp. 263-265.
[1985] C.St.J.A. Nash-Williams, Connected detachments of graphs and generalized Euler trails, The Journal of the London Mathematical Society (2) 31 (1985) 17-29.
[1968] A.R.D. Norman, M.J. Dowling, Railroad Car Inventory: Empty Woodrack Cars on the Louisville and Nashville Railroad, Technical Report 320-2926, IBM New York Scientific Center, New York, 1968.
[1981] H. Okamura, P.D. Seymour, Multicommodity flows in planar graphs, Journal of Combinatorial Theory, Series B 31 (1981) 75-81.
[1957] R.C. Prim, Shortest connection networks and some generalizations, The Bell System Technical Journal 36 (1957) 1389-1401.
[1942] R. Rado, A theorem on independence relations, The Quarterly Journal of Mathematics (Oxford) (2) 13 (1942) 83-89.
[1965] J.W. van Rees, Een studie omtrent de circulatie van materieel [Dutch: A study concerning the circulation of rolling stock], Spoor- en Tramwegen 38 (1965) 363-367.
[1995] N. Robertson, P.D. Seymour, Graph minors. XIII. The disjoint paths problem, Journal of Combinatorial Theory, Series B 63 (1995) 65-110.
[1966] B. Rothschild, A. Whinston, Feasibility of two commodity network flows, Operations Research 14 (1966) 1121-1129.
[1973] M. Sakarovitch, Two commodity network flows and linear programming, Mathematical Programming 4 (1973) 1-20.
[1994] A. Schrijver, Finding k disjoint paths in a directed planar graph, SIAM Journal on Computing 23 (1994) 780-788.

[1977] R.E. Tarjan, Finding optimum branchings, Networks 7 (1977) 25-35.
[1947] W.T. Tutte, The factorization of linear graphs, The Journal of the London Mathematical Society 22 (1947) 107-111.
[1961] W.T. Tutte, On the problem of decomposing a graph into n connected factors, The Journal of the London Mathematical Society 36 (1961) 221-230 [reprinted in: Selected Papers of W.T. Tutte Volume I (D. McCarthy, R.G. Stanton, eds.), The Charles Babbage Research Centre, St. Pierre, Manitoba, 1979, pp. 214-225].
[1964] V.G. Vizing, Ob otsenke khromaticheskogo klassa p-grafa [Russian: On an estimate of the chromatic class of a p-graph], Diskretnyĭ Analiz 3 (1964) 25-30.
[1965] V.G. Vizing, Khromaticheskiĭ klass mul'tigrafa [Russian: The chromatic class of a multigraph], Kibernetika [Kiev] 1965:3 (1965) 29-39 [English translation: Cybernetics 1:3 (1965) 32-41].
[1979] M. Voorhoeve, A lower bound for the permanents of certain (0,1)-matrices, Indagationes Mathematicae 41 (1979) 83-86.
[1976] D.J.A. Welsh, Matroid Theory, Academic Press, London, 1976.
[1969] W.W. White, A.M. Bomberault, A network algorithm for empty freight car allocation, IBM Systems Journal 8 (1969) 147-169.

Name index

Adel'son-Vel'skiĭ, G.M.
Ahuja, R.K.
Anderson, I.
Bartlett, T.E.
Berge, C.
Bock, F.
Boldyreff, A.W.
Bomberault, A.M.
Boruvka, O.
Busacker, R.G.
Chu, Y.-j.
Dantzig, G.B.
Dijkstra, E.W.
Dinits, E.A.
Dowling, M.J.
Edmonds, J.
Ehrenfeucht, A.
Even, S.
Faber, V.
Farkas, Gy.
Feeney, G.J.
Ferguson, A.R.
Ford, L.R., Jr
Fortune, S.
Frobenius, G.
Fulkerson, D.R.
Gallai, T.
Gowen, P.J.
Hall, P.
Holyer, I.
Hopcroft, J.
Hu, T.C.
Iri, M.
Itai, A.
Jewell, W.S.
Karp, R.M.
Karzanov, A.V.
Kierstead, H.A.
Knuth, D.E.
Kőnig, D.
Kramer, M.R.
Kruskal, J.B., Jr
Leeuwen, J. van
Liu, T.-h.
Lovász, L.
Lynch, J.F.
Magnanti, T.L.
Minty, G.J.
Nash-Williams, C.St.J.A.
Norman, A.R.D.
Okamura, H.
Orlin, J.B.
Prim, R.C.
Rado, R.
Rees, J.W. van
Robertson, N.
Rothschild, B.
Sakarovitch, M.
Schrijver, A.
Seymour, P.D.
Shamir, A.
Tarjan, R.E.
Tutte, W.T.
Vizing, V.G.
Voorhoeve, M.
Welsh, D.J.A.
Whinston, A.
White, W.W.
Wyllie, J.

Subject index

arborescence, r- 5
arc-disjoint paths problem 16-18
b-detachment 35
basis in a matroid 23
bipartite matching 8-9, 31
  cardinality 8-9
branching 5
  r- 5
cardinality bipartite matching 8-9
cardinality common independent set algorithm 32-33
cardinality common independent set augmenting algorithm 32-33
cardinality common independent set problem 31-35
cardinality matching 8-11
cardinality matroid intersection 31-35
cardinality matroid intersection algorithm 32-33
cardinality nonbipartite matching 9-11
circuit basis 27
circuit in a matroid 24
coclique 7
coclique number 7
cocycle matroid 25
cographic matroid 25-26, 29
colour 12
colourable, k-edge- 12
colouring, edge- 12
colouring number, edge- 12
commodity flow problem, fractional k- 15-18
  integer k- 15-18
  integer undirected k- 15-18
  k- 15-18
  undirected k- 15-18
common independent set 31
common independent set algorithm, cardinality 32-33
  weighted 36-38
common independent set augmenting algorithm, cardinality 32-33
  weighted 36-38
common independent set problem 31-39
  cardinality 31-35
  weighted 36-39
common SDR 31, 34
common system of distinct representatives 31, 34
common transversal 31, 34
component, even 9
  inverting 13
  odd 9
  satisfaction-testing 13
  variable-setting 13
contraction in a matroid 28-29
cover, edge 7
  vertex 7
cover number, edge 7, 12
  vertex 7
cut condition 16, 18
cycle matroid 25
deletion in a matroid 28-29
dependent set in a matroid 23
detachment, b- 35
Dijkstra-Prim method 3
disjoint paths problem, arc- 16-18
  edge- 16-18
  vertex- 16-18
disjoint spanning trees problem, edge- 31, 35
distinct representatives, system of 9, 34
dual matroid 27-29
dual of a matroid 29
edge-colour 12
edge-colourable 12
  k- 12
edge-colouring 12
  k- 12

edge-colouring number 12
edge cover 7
edge cover number 7, 11
edge cover theorem, Kőnig's 8
edge-disjoint paths problem 16-18
edge-disjoint spanning trees problem 31, 35
Edmonds' matroid intersection theorem 34, 35
Euler condition 21
even component 9
factor, 1- 9-11
factor theorem, Tutte's 1- 9-10
flow problem, fractional k-commodity 15-18
  fractional multicommodity 15-18
  integer k-commodity 15-18
  integer multicommodity 15-18
  integer undirected k-commodity 15-18
  integer undirected multicommodity 15-18
  k-commodity 15-18
  multicommodity 15-18
  undirected k-commodity 15-18
  undirected multicommodity 15-18
fractional k-commodity flow problem 15-18
fractional multicommodity flow problem 15-18
Gallai's theorem 7
good forest 5
graphic matroid 25, 29
greedy algorithm 22-25
greedy forest 3
Hall's marriage theorem 9
Hu's two-commodity flow theorem 19-21
independent set algorithm, cardinality common 32-33
  weighted common 36-38
independent set augmenting algorithm, cardinality common 32-33
independent set in a matroid 23
independent set problem, cardinality common 31-35
  common 31-39
  weighted common 36-39
integer k-commodity flow problem 15-18
integer multicommodity flow problem 15-18
integer undirected k-commodity flow problem 15-18
integer undirected multicommodity flow problem 15-18
inverting component 13
k-commodity flow problem 15-18
  fractional 15-18
  integer 15-18
  integer undirected 15-18
  undirected 15-18
k-edge-colourable 12
k-edge-colouring 12
k-truncation of a matroid 29
k-uniform matroid 29
Kőnig's edge cover theorem 8
Kőnig's matching theorem 8, 34
Kruskal's method 3-4
linear matroid 26, 29
marriage theorem, Hall's 9
matching 7-11
  bipartite 8-9, 31
  cardinality 8-11
  cardinality bipartite 8-9
  cardinality nonbipartite 9-11
  nonbipartite 9-11
  perfect 7, 9-11
matching number 7, 10
matching theorem, Kőnig's 8, 34
matroid 22-43
matroid intersection 31-39
  cardinality 31-35
  weighted 36-39
matroid intersection algorithm, cardinality 32-33
  weighted 36-38
matroid intersection theorem, Edmonds' 34, 35
matroid polytope 39-42
max-biflow min-cut theorem 21
maximum reliability problem 4
minimum spanning tree 3-5
minor of a matroid 28
multicommodity flow problem 15-18
  fractional 15-18
  integer 15-18
  integer undirected 15-18
  undirected 15-18

nonbipartite matching 9-11
  cardinality 9-11
odd component 9
1-factor 9-11
1-factor theorem, Tutte's 9-10
output pairs 13
parallel elements in a matroid 24
partition matroid 27
perfect matching 7, 9-11
polytope, matroid 39-42
r-arborescence 5
r-branching 5
rank of a matroid 23
reliability problem, maximum 4
representatives, system of distinct 9, 34
restriction in a matroid 28
root of a branching 5
rooted tree 31, 35
routing, VLSI- 18
satisfaction-testing component 13
SDR 9, 34
spanning tree, minimum 3-5
spanning trees problem, edge-disjoint 31, 35
system of distinct representatives 9, 34
transversal 9, 34
transversal matroid 26-27
tree, minimum spanning 3-5
trees problem, edge-disjoint spanning 31, 35
truncation of a matroid 29
  k- 29
Tutte-Berge formula 10
Tutte's 1-factor theorem 9-10
two-commodity flow theorem, Hu's 19-21
undirected k-commodity flow problem 15-18
  integer 15-18
undirected multicommodity flow problem 15-18
  integer 15-18
uniform matroid 29
  k- 29
union of matroids 34
variable-setting component 13
vertex cover 7
vertex cover number 7
vertex-disjoint paths problem 16-18
VLSI-routing 18
weighted common independent set algorithm 36-38
weighted common independent set augmenting algorithm 36-38
weighted common independent set problem 36-39
weighted matroid intersection 36-39
weighted matroid intersection algorithm 36-38