
3.5 The Theorem of Minkowski-Weyl


Proof of Theorem 3.13.
⇒ Let P = {x ∈ Rn : Ax ≤ b}.
• Let C = {(x, y) ∈ Rn+1 : Ax − by ≤ 0, y ≥ 0}.
• Theorem 3.11 ⇒ C is finitely generated.
• Since y ≥ 0 is valid for C, every generator of C has last coordinate ≥ 0; scaling the generators with positive last coordinate so that it equals 1, ∃ v1, . . . , vp, r1, . . . , rq ∈ Rn s.t.

  C = cone((v1, 1), . . . , (vp, 1), (r1, 0), . . . , (rq, 0)).   (*)

• Since P = {x : (x, 1) ∈ C},

  ⇒ P = conv(v1, . . . , vp) + cone(r1, . . . , rq).

⇐ Let P = conv(v1, . . . , vp) + cone(r1, . . . , rq).


• Let C be the finitely generated cone (*).
• Theorem 3.11 ⇒ C is a polyhedral cone.
• ⇒ ∃A, b s.t. C = {(x, y) ∈ Rn+1 : Ax − by ≤ 0}.
• Since P = {x : (x, 1) ∈ C},

⇒ P = {x ∈ Rn : Ax ≤ b}.
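
The two directions of the proof can be traced numerically on a small example. The sketch below (Python with numpy/scipy) uses an assumed toy polyhedron P = {x ∈ R2 : x1 ≥ 0, x2 ≥ 0, x1 + x2 ≥ 1} with its claimed decomposition conv((1,0), (0,1)) + cone((1,0), (0,1)); this data is invented for illustration and is not part of the slides. The code checks that the lifted generators of (*) lie in the homogenized cone C and that both inclusions between {x : Ax ≤ b} and conv(V) + cone(R) hold on sampled points.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed example: P = {x in R^2 : x1 >= 0, x2 >= 0, x1 + x2 >= 1}, written as Ax <= b,
# with vertices v1 = (1,0), v2 = (0,1) and rays r1 = (1,0), r2 = (0,1).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
V = np.array([[1.0, 0.0], [0.0, 1.0]]).T      # columns are the v's
R = np.array([[1.0, 0.0], [0.0, 1.0]]).T      # columns are the r's

# The generators (v_i, 1) and (r_j, 0) of (*) lie in C = {(x, y) : Ax - by <= 0, y >= 0}.
for v in V.T:
    assert np.all(A @ v - b <= 1e-9)          # (v, 1) in C
for r in R.T:
    assert np.all(A @ r <= 1e-9)              # (r, 0) in C

rng = np.random.default_rng(0)

# Direction "conv(V) + cone(R) contained in {x : Ax <= b}": sample random combinations.
for _ in range(1000):
    lam = rng.dirichlet(np.ones(2))           # lam >= 0, sum(lam) = 1
    mu = rng.uniform(0, 5, 2)                 # mu >= 0
    assert np.all(A @ (V @ lam + R @ mu) <= b + 1e-9)

# Direction "{x : Ax <= b} contained in conv(V) + cone(R)": for random feasible x, the
# system V lam + R mu = x, sum(lam) = 1, lam >= 0, mu >= 0 is feasible (checked by an LP).
A_eq = np.vstack([np.hstack([V, R]), np.hstack([np.ones(2), np.zeros(2)])])
checked = 0
while checked < 100:
    x = rng.uniform(0, 4, 2)
    if not np.all(A @ x <= b):
        continue
    res = linprog(np.zeros(4), A_eq=A_eq, b_eq=np.append(x, 1.0),
                  bounds=(0, None), method="highs")
    assert res.status == 0
    checked += 1
```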

BACK TO SLIDES

3.14 Carathéodory’s Theorem
Proof of Theorem 3.40.
• Let S be an inclusionwise minimal subset of X s.t.
v ∈ cone(S).
• S is finite, say S = {v1, . . . , vk}, and ∃ λ1, . . . , λk ≥ 0 s.t. λ1v1 + · · · + λkvk = v.
• It suffices to show that the vectors in S are lin.ind.
• Suppose not, and let µ ∈ Rk, µ ≠ 0, s.t. µ1v1 + · · · + µkvk = 0.
• Wlog µ has a positive component.
• Let λ′ := λ − θµ, where θ := min{λi/µi : µi > 0} = λh/µh (the minimum attained at index h).
• We have λ′1v1 + · · · + λ′kvk = v, λ′h = 0, and λ′ ≥ 0.

• ⇒ v ∈ cone(S \ {vh}), contradicting the minimality of S.
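
The argument above is effectively an algorithm: while the support of λ is linearly dependent, pick a dependence µ with a positive entry and move to λ − θµ, which zeroes at least one coefficient. A minimal Python/numpy sketch of this reduction (the function name caratheodory_cone and the example data are made up for illustration):

```python
import numpy as np

def caratheodory_cone(V, lam, tol=1e-9):
    """Reduce a conic combination v = sum_i lam[i] * V[:, i] (lam >= 0) to one whose
    support corresponds to linearly independent columns of V, following the proof."""
    V = np.asarray(V, dtype=float)
    lam = np.asarray(lam, dtype=float).copy()
    while True:
        support = np.flatnonzero(lam > tol)
        S = V[:, support]
        if np.linalg.matrix_rank(S) == len(support):
            return support, lam               # vectors in the support are lin. ind.
        # Find mu != 0 with S @ mu = 0 (last right singular vector spans the null space).
        mu = np.linalg.svd(S)[2][-1]
        if mu.max() <= tol:                   # w.l.o.g. mu has a positive component
            mu = -mu
        pos = mu > tol
        theta = np.min(lam[support][pos] / mu[pos])   # theta := min over mu_i > 0 of lam_i / mu_i
        lam[support] = lam[support] - theta * mu      # lam' := lam - theta * mu
        lam[lam < tol] = 0.0                          # at least one coefficient drops out

# Example: v as a conic combination of 4 generators in R^2.
V = np.array([[1.0, 0.0, 1.0, 2.0],
              [0.0, 1.0, 1.0, 1.0]])
lam = np.array([0.5, 0.5, 1.0, 0.25])
v = V @ lam
support, lam_red = caratheodory_cone(V, lam)
assert np.allclose(V @ lam_red, v) and len(support) <= 2
print(support, lam_red[support])
```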

Proof of Corollary 3.41.

• v ∈ conv(X) ⇒ (v, 1) ∈ cone(X × {1}).
• Theorem 3.40 ⇒ ∃ v1, . . . , vk ∈ X and λ1, . . . , λk ≥ 0 s.t. (v1, 1), . . . , (vk, 1) are lin.ind. and λ1(v1, 1) + · · · + λk(vk, 1) = (v, 1).
• ⇒ v1, . . . , vk are aff.ind., λ1v1 + · · · + λkvk = v, and λ1 + · · · + λk = 1.
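
The lifting in this corollary also gives a computational shortcut: a basic feasible solution of {λ ≥ 0 : λ1(v1, 1) + · · · + λk(vk, 1) = (v, 1)} has linearly independent support, so it uses at most n + 1 points of X. A hedged Python/scipy sketch (the point set X and the target v are invented; it also assumes the HiGHS dual simplex returns a basic solution, which it normally does):

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 6 points in R^2 (columns of X) and a point v in their convex hull.
X = np.array([[0.0, 2.0, 3.0, 1.0, 0.5, 2.5],
              [0.0, 0.0, 2.0, 3.0, 1.5, 1.0]])
v = X @ np.full(6, 1 / 6)                     # average of the 6 points, so v in conv(X)

# Lifting trick: v in conv(X)  <=>  (v, 1) in cone(X x {1}).
M = np.vstack([X, np.ones(6)])                # columns are (x, 1) for x in X
w = np.append(v, 1.0)

# A basic feasible solution of {lam >= 0 : M lam = w} has at most n + 1 = 3 nonzeros.
res = linprog(c=np.zeros(6), A_eq=M, b_eq=w, bounds=(0, None), method="highs-ds")
lam = res.x
support = np.flatnonzero(lam > 1e-9)
assert np.allclose(X[:, support] @ lam[support], v)     # v is a convex combination ...
assert abs(lam[support].sum() - 1.0) < 1e-8             # ... of the points in the support
print(len(support), support)                            # expected: at most 3 points used
```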

BACK TO SLIDES

Proximity
Proof of Proximity Theorem
• For simplicity, assume {x : Ax ≤ b} pointed.
• Let y∗ be an optimal solution of LP and let x∗ be an optimal solution of IP.
• Split Ax ≤ b as A1x ≤ b1, A2x ≤ b2 s.t. A1y∗ < A1x∗, A2y∗ ≥ A2x∗, and consider the cone
  C := {r : A1r ≤ 0, A2r ≥ 0}.
• y∗ − x∗ ∈ C. Carathéodory’s Theorem ⇒
  y∗ − x∗ = λ1r1 + · · · + λqrq
  for some λ1, . . . , λq ≥ 0 and lin.ind. extreme rays r1, . . . , rq of C.
• Let r be an extreme ray of C. Exercise: We can
assume wlog r ∈ Zn and ∥r∥∞ ≤ ∆. Hint: Write a
system of equations defining (a scaling of) r and use
Cramer’s rule.
• Claim: If 0 ≤ µi ≤ λi ∀i, then
  x∗ + µ1r1 + · · · + µqrq
  satisfies Ax ≤ b.
  Proof: Since A1ri ≤ 0 ∀i:
  A1(x∗ + µ1r1 + · · · + µqrq) ≤ A1x∗ ≤ b1.
  Since A2ri ≥ 0 ∀i:
  A2(x∗ + µ1r1 + · · · + µqrq) = A2(y∗ − (λ1 − µ1)r1 − · · · − (λq − µq)rq) ≤ A2y∗ ≤ b2.
• Let

  x⋄ := x∗ + ⌊λ1⌋r1 + · · · + ⌊λq⌋rq ∈ Zn.

  Claim ⇒ Ax⋄ ≤ b.
• Optimality. We show cx⋄ ≥ cx∗. This holds because cri ≥ 0 ∀i. We show the latter. As A1y∗ < A1x∗ ≤ b1, by complementary slackness, ∃ v ≥ 0 with vA2 = c. Hence ∀ r ∈ C, cr = v(A2r) ≥ 0.
• Distance. Denoting by {λi} := λi − ⌊λi⌋, we have

  x⋄ = y∗ − {λ1}r1 − · · · − {λq}rq.

  Thus

  ∥y∗ − x⋄∥∞ = ∥{λ1}r1 + · · · + {λq}rq∥∞ ≤ ∥r1∥∞ + · · · + ∥rq∥∞ ≤ q∆ ≤ n∆,

  using ∥ri∥∞ ≤ ∆ and q ≤ n (the ri are lin.ind.).

  This shows 1.

• Let
  y⋄ := y∗ − ⌊λ1⌋r1 − · · · − ⌊λq⌋rq = x∗ + {λ1}r1 + · · · + {λq}rq.
  Claim ⇒ Ay⋄ ≤ b.
• Optimality. We show cy⋄ ≥ cy∗. Otherwise ∃ i with ⌊λi⌋ > 0 and cri > 0. Claim ⇒ x∗ + ⌊λi⌋ri is an integral solution of Ax ≤ b, and c(x∗ + ⌊λi⌋ri) > cx∗, contradicting the optimality of x∗.

• Distance. We have

  ∥x∗ − y⋄∥∞ = ∥{λ1}r1 + · · · + {λq}rq∥∞ ≤ n∆.

  This shows 2.
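
The constructions in this proof can be traced on a toy configuration. In the Python/numpy sketch below all data (A, b, x∗, y∗, the extreme rays and the multipliers λ) are hand-picked assumptions satisfying the hypotheses used above; they are not obtained by solving an actual LP/IP pair, and the optimality steps are not re-checked. The code only verifies the Claim, the feasibility of x⋄ and y⋄, and the two distance bounds (here ∆ = 1).

```python
import numpy as np

# Assumed split of Ax <= b for n = 2:
#   A1 x <= b1 :  x1 <= 2   (rows with A1 y* < A1 x*)
#   A2 x <= b2 :  x2 <= 2   (rows with A2 y* >= A2 x*)
A1, b1 = np.array([[1.0, 0.0]]), np.array([2.0])
A2, b2 = np.array([[0.0, 1.0]]), np.array([2.0])
A, b = np.vstack([A1, A2]), np.concatenate([b1, b2])

x_star = np.array([2.0, 0.0])                 # plays the role of the IP optimum (integral, feasible)
y_star = np.array([0.5, 1.5])                 # plays the role of the LP optimum (feasible)

# Extreme rays of C = {r : A1 r <= 0, A2 r >= 0} and the decomposition y* - x* = sum_i lam_i r^i.
R = np.array([[-1.0, 0.0], [0.0, 1.0]]).T     # columns r1 = (-1, 0), r2 = (0, 1)
lam = np.array([1.5, 1.5])
assert np.allclose(R @ lam, y_star - x_star)

# Claim: for any 0 <= mu <= lam, x* + mu1 r1 + mu2 r2 satisfies Ax <= b.
rng = np.random.default_rng(0)
for _ in range(1000):
    mu = rng.uniform(0, 1, 2) * lam
    assert np.all(A @ (x_star + R @ mu) <= b + 1e-9)

# Rounding as in the proof.
x_diamond = x_star + R @ np.floor(lam)        # x* + sum_i floor(lam_i) r^i  -> (1, 1), integral
y_diamond = y_star - R @ np.floor(lam)        # y* - sum_i floor(lam_i) r^i  -> (1.5, 0.5)
assert np.all(A @ x_diamond <= b) and np.all(A @ y_diamond <= b)

# Distance bounds with n = 2 and Delta = 1 (all subdeterminants of A are 0 or 1).
n, Delta = 2, 1
assert np.max(np.abs(y_star - x_diamond)) <= n * Delta
assert np.max(np.abs(x_star - y_diamond)) <= n * Delta
print(x_diamond, y_diamond)
```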
