The Dual Simplex Method: Combinatorial Problem Solving (CPS)
Basic Idea
min −x1 − x2          min −x1 − x2            min −x1 − x2
2x1 + x2 ≥ 3          2x1 + x2 ≥ 3            2x1 + x2 − x3 = 3
2x1 + x2 ≤ 6    ⇒    −2x1 − x2 ≥ −6    ⇒    −2x1 − x2 − x4 = −6
x1 + 2x2 ≤ 6         −x1 − 2x2 ≥ −6          −x1 − 2x2 − x5 = −6
x1 ≥ 0                x1 ≥ 0                  x1, x2, x3, x4, x5 ≥ 0
x2 ≥ 0                x2 ≥ 0
min −6 + x2 + x5
x1 = 6 − 2x2 − x5
x3 = 9 − 3x2 − 2x5
x4 = −6 + 3x2 + 2x5

Basis (x1, x3, x4) is optimal (= satisfies the optimality conditions) but is not feasible!
[Figure: feasible region of min −x1 − x2 s.t. 2x1 + x2 ≥ 3, 2x1 + x2 ≤ 6, x1 + 2x2 ≤ 6, x1, x2 ≥ 0; the basic solution (6, 0) lies outside the feasible region.]
■ Let us make a violating basic variable non-negative ...
min −6 + x2 + x5              min −4 + (1/3)x4 + (1/3)x5
x1 = 6 − 2x2 − x5      ⇒     x1 = 2 − (2/3)x4 + (1/3)x5
x3 = 9 − 3x2 − 2x5            x2 = 2 + (1/3)x4 − (2/3)x5
x4 = −6 + 3x2 + 2x5           x3 = 3 − x4
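The pivot above can be reproduced mechanically. A minimal sketch (not the course's code): each dictionary is stored as a Python dict mapping a basic variable to its constant term and nonbasic coefficients, with exact Fraction arithmetic; the representation and names are mine.

```python
from fractions import Fraction as F

# Dictionary for basis (x1, x3, x4), stored as
#   basic variable -> (constant, {nonbasic variable: coefficient}).
rows = {
    "x1": (F(6),  {"x2": F(-2), "x5": F(-1)}),
    "x3": (F(9),  {"x2": F(-3), "x5": F(-2)}),
    "x4": (F(-6), {"x2": F(3),  "x5": F(2)}),
}
obj = (F(-6), {"x2": F(1), "x5": F(1)})   # min  -6 + x2 + x5

def dual_pivot(rows, obj, leaving):
    """One dual simplex pivot: drive the violating basic variable
    `leaving` (negative constant term) out of the basis."""
    b, coefs = rows[leaving]
    # Dual ratio test: the entering variable minimizes cost/coefficient
    # among nonbasic variables with a positive coefficient in this row.
    entering = min((v for v in coefs if coefs[v] > 0),
                   key=lambda v: obj[1][v] / coefs[v])
    a_e = coefs[entering]
    # Solve the leaving row for the entering variable:
    #   entering = -b/a_e + (1/a_e)*leaving - sum_{v != entering} (a_v/a_e)*v
    new_b = -b / a_e
    new_coefs = {leaving: F(1) / a_e}
    new_coefs.update({v: -a / a_e for v, a in coefs.items() if v != entering})

    def substitute(const, cs):
        cs = dict(cs)
        a = cs.pop(entering)          # replace `entering` by its new row
        const += a * new_b
        for v, c in new_coefs.items():
            cs[v] = cs.get(v, F(0)) + a * c
        return const, cs

    new_rows = {x: substitute(c, cs) for x, (c, cs) in rows.items()
                if x != leaving}
    new_rows[entering] = (new_b, new_coefs)
    return new_rows, substitute(*obj)

rows, obj = dual_pivot(rows, obj, "x4")
print(obj)    # min  -4 + (1/3) x4 + (1/3) x5
print(rows)   # x1 = 2 - (2/3)x4 + (1/3)x5,  x2 = 2 + (1/3)x4 - (2/3)x5,  x3 = 3 - x4
```

Pivoting x4 out (and x2 in, as selected by the ratio test) reproduces the dictionary on the right-hand side above.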
[Figure: feasible region of the same LP; after the pivot the basic solution moves from (6, 0) to (2, 2), which is feasible and optimal.]
Outline of the Dual Simplex
Duality

■ To understand better how the dual simplex works: theory of duality
■ We can get lower bounds on the LP optimum value by adding constraints in a convenient way

min −x1 − x2
2x1 + x2 ≥ 3
−2x1 − x2 ≥ −6
−x1 − 2x2 ≥ −6
x1 ≥ 0
x2 ≥ 0

For example, adding −x1 − 2x2 ≥ −6 and x2 ≥ 0 gives

−x1 − x2 ≥ −6

so the optimum value is at least −6.
■ In general we can get lower bounds on the LP optimum value by linearly combining constraints with convenient multipliers

min −x1 − x2
2x1 + x2 ≥ 3
−2x1 − x2 ≥ −6
−x1 − 2x2 ≥ −6
x1 ≥ 0
x2 ≥ 0

1 · ( 2x1 + x2 ≥ 3 )
2 · ( −2x1 − x2 ≥ −6 )
1 · ( x1 ≥ 0 )

Scaling gives 2x1 + x2 ≥ 3, −4x1 − 2x2 ≥ −12 and x1 ≥ 0; summing up:

−x1 − x2 ≥ −9
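The combination can be checked numerically; a small sketch with NumPy (array names are mine):

```python
import numpy as np

# Constraint system  a_i . x >= b_i  of the example (the rows used above).
A = np.array([[ 2.0,  1.0],    #  2x1 +  x2 >=  3
              [-2.0, -1.0],    # -2x1 -  x2 >= -6
              [ 1.0,  0.0]])   #   x1       >=  0
b = np.array([3.0, -6.0, 0.0])
mu = np.array([1.0, 2.0, 1.0])   # the multipliers chosen above

lhs = mu @ A    # coefficients of the combined inequality
rhs = mu @ b    # its right-hand side
print(lhs, rhs)  # [-1. -1.] -9.0, i.e.  -x1 - x2 >= -9
```

Since the combined coefficients equal those of the objective −x1 − x2, every feasible point satisfies −x1 − x2 ≥ −9, so −9 is a lower bound on the optimum.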
■ Let µ1, ..., µ5 ≥ 0:

min −x1 − x2          µ1 · ( 2x1 + x2 ≥ 3 )
2x1 + x2 ≥ 3          µ2 · ( −2x1 − x2 ≥ −6 )
−2x1 − x2 ≥ −6        µ3 · ( −x1 − 2x2 ≥ −6 )
−x1 − 2x2 ≥ −6        µ4 · ( x1 ≥ 0 )
x1 ≥ 0                µ5 · ( x2 ≥ 0 )
x2 ≥ 0

Scaling gives:

2µ1x1 + µ1x2 ≥ 3µ1
−2µ2x1 − µ2x2 ≥ −6µ2
−µ3x1 − 2µ3x2 ≥ −6µ3
µ4x1 ≥ 0
µ5x2 ≥ 0

■ Summing up we have (2µ1 − 2µ2 − µ3 + µ4)x1 + (µ1 − µ2 − 2µ3 + µ5)x2 ≥ 3µ1 − 6µ2 − 6µ3. This bounds the objective −x1 − x2 from below whenever the combined coefficients equal those of the objective; treating µ4 and µ5 as slack variables, the best such bound is found by solving

max 3µ1 − 6µ2 − 6µ3
2µ1 − 2µ2 − µ3 ≤ −1
µ1 − µ2 − 2µ3 ≤ −1
µ1, µ2, µ3 ≥ 0
■ Best possible lower bound with this “trick” can be found by solving

max 3µ1 − 6µ2 − 6µ3
2µ1 − 2µ2 − µ3 ≤ −1
µ1 − µ2 − 2µ3 ≤ −1
µ1, µ2, µ3 ≥ 0

The optimal multipliers are µ1 = 0, µ2 = 1/3, µ3 = 1/3:

0 · ( 2x1 + x2 ≥ 3 )
1/3 · ( −2x1 − x2 ≥ −6 )
1/3 · ( −x1 − 2x2 ≥ −6 )

−x1 − x2 ≥ −4

Matches the optimum!
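Both LPs can be checked with an off-the-shelf solver; a sketch assuming SciPy's `linprog` is available (it expects ≤ constraints, so the ≥ rows are negated):

```python
from scipy.optimize import linprog

# Primal:  min -x1 - x2  s.t.  2x1 + x2 >= 3,  2x1 + x2 <= 6,  x1 + 2x2 <= 6
res_p = linprog(c=[-1, -1],
                A_ub=[[-2, -1], [2, 1], [1, 2]],   # linprog uses A_ub @ x <= b_ub
                b_ub=[-3, 6, 6],
                bounds=[(0, None), (0, None)])

# Dual:  max 3u1 - 6u2 - 6u3  ->  min -3u1 + 6u2 + 6u3
res_d = linprog(c=[-3, 6, 6],
                A_ub=[[2, -2, -1], [1, -1, -2]],
                b_ub=[-1, -1],
                bounds=[(0, None)] * 3)

print(res_p.fun)   # -4.0, attained at x = (2, 2)
print(-res_d.fun)  # -4.0: the best lower bound equals the primal optimum
```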
Dual Problem

■ If we multiply Ax ≥ b by multipliers y^T ≥ 0 we get y^T Ax ≥ y^T b
■ If y^T A ≤ c^T then we get a lower bound y^T b for the cost function c^T x (since x ≥ 0 implies y^T Ax ≤ c^T x)
■ Given an LP (called primal problem)

min c^T x
Ax ≥ b
x ≥ 0

its dual problem is

max b^T y
A^T y ≤ c
y ≥ 0

■ Prop. The dual of the dual is the primal

Proof:

max b^T y          min (−b)^T y
A^T y ≤ c    ⇒    −A^T y ≥ −c
y ≥ 0              y ≥ 0

and the dual of the right-hand LP is max (−c)^T x s.t. −Ax ≤ −b, x ≥ 0, i.e., the primal min c^T x s.t. Ax ≥ b, x ≥ 0.
■ Prop.

min c^T x          max b^T y
Ax = b      and    A^T y ≤ c
x ≥ 0

form a primal-dual pair

Proof:

min c^T x          min c^T x
Ax = b      ⇒     Ax ≥ b
x ≥ 0              −Ax ≥ −b
                   x ≥ 0

max b^T y1 − b^T y2                         max b^T y
A^T y1 − A^T y2 ≤ c    ⇒ (y := y1 − y2)    A^T y ≤ c
y1, y2 ≥ 0
Duality Theorems

■ Th. (Weak Duality) Let (P, D) be a primal-dual pair

(P) min c^T x      (D) max b^T y
    Ax = b             A^T y ≤ c
    x ≥ 0

Then c^T x ≥ b^T y holds for every feasible solution x of P and y of D.

Proof:
c − A^T y ≥ 0, i.e., c^T − y^T A ≥ 0, and x ≥ 0 imply c^T x − y^T Ax ≥ 0.
So c^T x ≥ y^T Ax, and

c^T x ≥ y^T Ax = y^T b = b^T y
■ Th. (Strong Duality) Let (P, D) be a primal-dual pair

(P) min c^T x      (D) max b^T y
    Ax = b             A^T y ≤ c
    x ≥ 0

If any of P or D has a feasible solution and a finite optimum then the
same holds for the other problem and the two optimum values are equal.
■ Proof (Th. of Strong Duality):
By duality it is sufficient to prove only one direction.
Wlog. let us assume P is feasible with finite optimum.
After executing the simplex algorithm on P we find an optimal feasible basis B.
Then the optimality conditions hold: c^T − c_B^T B^{−1} A ≥ 0.
Hence c_B^T B^{−1} A ≤ c^T.
So π^T := c_B^T B^{−1} is a dual feasible solution: π^T A ≤ c^T, i.e., A^T π ≤ c
Moreover, c_B^T β = c_B^T B^{−1} b = π^T b = b^T π
By the theorem of weak duality, π is optimum for D
■ If B is an optimal feasible basis for P, then the simplex multipliers π^T := c_B^T B^{−1} are an optimal feasible solution for D
■ We can solve the dual by applying the simplex algorithm on the primal
■ We can solve the primal by applying the simplex algorithm on the dual
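For the running example this can be verified numerically; a NumPy sketch, using the standard-form data derived earlier and the optimal basis (x1, x2, x3) found by the pivot above:

```python
import numpy as np

# Standard form of the running example:  min c^T x,  Ax = b,  x >= 0.
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

basis = [0, 1, 2]                       # optimal basis (x1, x2, x3)
B = A[:, basis]
beta = np.linalg.solve(B, b)            # primal basic solution beta = B^{-1} b
pi = np.linalg.solve(B.T, c[basis])     # simplex multipliers pi^T = c_B^T B^{-1}

print(beta)                             # [2. 2. 3.]
print(pi)                               # [0, 1/3, 1/3]: feasible for the dual
print(np.all(A.T @ pi <= c + 1e-9))     # A^T pi <= c  ->  True
print(c[basis] @ beta, b @ pi)          # both equal -4: the optima coincide
```

The multipliers π = (0, 1/3, 1/3) are exactly the optimal µ found on the earlier slide.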
■ Prop. Let (P, D) be a primal-dual pair

(P) min c^T x      (D) max b^T y
    Ax = b             A^T y ≤ c
    x ≥ 0

(1) If the objective of P is unbounded, then D is infeasible
(2) If the objective of D is unbounded, then P is infeasible

Proof:
Let us prove (1) by contradiction.
If y were a feasible solution to D,
then by the weak duality theorem the objective of P would be bounded from below!
Karush-Kuhn-Tucker Optimality Conditions

■ Consider a primal-dual pair of the form

(P) min c^T x      (D) max b^T y            max b^T y
    Ax = b             A^T y ≤ c    ⟺      A^T y + w = c
    x ≥ 0                                   w ≥ 0

■ Optimality conditions (KKT):
• Ax = b
• A^T y + w = c
• x, w ≥ 0
• x^T w = 0 (complementary slackness)
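The conditions can be checked directly on the running example; a NumPy sketch, with the optimal x and y taken from the earlier slides:

```python
import numpy as np

# Running example in standard form:  min c^T x,  Ax = b,  x >= 0.
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

x = np.array([2.0, 2.0, 3.0, 0.0, 0.0])   # primal optimum from the slides
y = np.array([0.0, 1/3, 1/3])             # dual optimum (the multipliers mu)
w = c - A.T @ y                            # dual slacks

print(np.allclose(A @ x, b))               # Ax = b
print(np.all(x >= 0), np.all(w >= -1e-9))  # x, w >= 0
print(abs(x @ w) < 1e-9)                   # complementary slackness x^T w = 0
```

All four conditions hold, certifying that x and (y, w) are jointly optimal.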
■ Th. x and (y, w) satisfy the (KKT) conditions ⟺ x is optimal for (P) and (y, w) is optimal for (D)

(P) min c^T x      (D) max b^T y         (KKT)
    Ax = b             A^T y + w = c     • Ax = b
    x ≥ 0              w ≥ 0             • A^T y + w = c
                                         • x, w ≥ 0
                                         • x^T w = 0

Proof:
⇒ By 0 = x^T w = x^T (c − A^T y) = c^T x − (Ax)^T y = c^T x − b^T y, and Weak Duality
Relating Bases

■ Consider a primal-dual pair of the form

(P) min z = c^T x      (D) max Z = b^T y
    Ax = b                 A^T y + w = c
    x ≥ 0                  w ≥ 0

■ Let B be the submatrix of basic columns of A and R the submatrix of non-basic columns. Splitting A^T y + w row by row:

A^T y + w = ( a1^T y + w1, ..., an^T y + wn )^T = ( B^T y + w_B ; R^T y + w_R )

where the first m components correspond to the basic columns B and the remaining n − m to the non-basic columns R.
■ Hence we have

A^T y + w = ( B^T y + w_B ; R^T y + w_R )

■ Dual variables (y, w_R) determine a basis of D, with basis matrix

B̂ = [ B^T  0 ]        B̂^{−1} = [  B^{−T}       0 ]
     [ R^T  I ]                  [ −R^T B^{−T}  I ]
Dual Feasibility = Primal Optimality

■ If B̂ is feasible for dual D, what about B in primal P?
■ Let us compute the basic solution of basis B̂ in the dual problem D

[ y   ]  =  B̂^{−1} c  =  [  B^{−T}       0 ] [ c_B ]  =  [       B^{−T} c_B      ]
[ w_R ]                   [ −R^T B^{−T}  I ] [ c_R ]     [ c_R − R^T B^{−T} c_B ]

■ So y^T = c_B^T B^{−1} are the simplex multipliers of B, and w_R = c_R − R^T y are the reduced costs d_R: feasibility of B̂ (w_R ≥ 0) is exactly the primal optimality condition for B.
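A quick numerical check of this identity on the running example, sketched with NumPy:

```python
import numpy as np

# Running example; basis (x1, x2, x3) splits A into B and R.
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])
basic, nonbasic = [0, 1, 2], [3, 4]
B, R = A[:, basic], A[:, nonbasic]

y = np.linalg.solve(B.T, c[basic])      # y = B^{-T} c_B
w_R = c[nonbasic] - R.T @ y             # w_R = c_R - R^T B^{-T} c_B

# w_R coincides with the primal reduced costs d_R of basis B:
d_R = c[nonbasic] - c[basic] @ np.linalg.inv(B) @ R
print(w_R, d_R)   # both [1/3 1/3]; w_R >= 0  <=>  B is primal optimal
```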
Dual Optimality = Primal Feasibility

■ If B̂ satisfies the optimality conds. for dual D, what about B in primal P?
■ Let us formulate the optimality conds. of basis B̂ in the dual problem D
■ Non-B̂-basic vars: w_B with costs (0)
■ B̂-basic vars: (y | w_R) with costs (b^T | 0)
■ Matrix of non-B̂-basic vars:
[ I ]
[ 0 ]

Increasing the non-basic variable w_{B_p} from 0 to t changes the basic solution to

[ y(t)   ]  =  [ B^{−T} c_B ]  −  B̂^{−1} [ t·e_p ]  =  [ B^{−T} c_B − t·B^{−T} e_p ]
[ d_R(t) ]     [ d_R        ]             [ 0     ]     [ d_R + t·R^T B^{−T} e_p   ]
Improving a Non-Optimal Solution

■ Of all basic dual variables, only the w_R variables need to be ≥ 0
■ For j ∈ R the value d_j + t·α_pj must stay ≥ 0, where α_pj = ρ_p^T a_j and ρ_p^T = e_p^T B^{−1}; this bounds the step t by

Θ_D = min { d_j / (−α_pj) : j ∈ R, α_pj < 0 }

■ Variable w_q is blocking when Θ_D = d_q / (−α_pq)
1. If Θ_D = +∞ (there is no j ∈ R such that α_pj < 0):
   The value of the dual objective can be increased without bound.
   The dual LP is unbounded, hence the primal LP is infeasible.
Update
■ We do not actually need to form the dual LP:
it is enough to have a representation of the primal LP
Algorithmic Description

1. Initialization: Find an initial dual feasible basis B.
   Compute B^{−1}, β = B^{−1} b,
   y^T = c_B^T B^{−1}, d_R^T = c_R^T − y^T R, Z = b^T y
2. Dual Pricing:
   If β_i ≥ 0 for all i ∈ B then return OPTIMAL.
   Else let p be such that β_p < 0.
   Compute ρ_p^T = e_p^T B^{−1} and α_pj = ρ_p^T a_j for j ∈ R
3. Dual Ratio Test:
   If α_pj ≥ 0 for all j ∈ R then return INFEASIBLE.
   Else compute Θ_D = d_q / (−α_pq) = min { d_j / (−α_pj) : j ∈ R, α_pj < 0 }
4. Update:
   B̄ = B − {k_p} ∪ {q}
   Dual solution:
     Z̄ = Z − Θ_D β_p
     ȳ = y − Θ_D ρ_p
     d̄_j = d_j + Θ_D α_pj if j ∈ R,   d̄_{k_p} = Θ_D
   Primal solution:
     Compute α_q = B^{−1} a_q and Θ_P = β_p / α_pq
     β̄_p = Θ_P,   β̄_i = β_i − Θ_P α_iq if i ≠ p
   B̄^{−1} = E B^{−1}
   Go to 2.
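The steps above can be assembled into a compact revised dual simplex. A sketch, not production code: for clarity B^{−1} is recomputed each iteration instead of being updated via the elementary matrix E, and pricing picks the most negative β_i:

```python
import numpy as np

def dual_simplex(A, b, c, basis, tol=1e-9):
    """Revised dual simplex following steps 1-4 above.
    `basis` must be dual feasible (all reduced costs >= 0)."""
    n = A.shape[1]
    basis = list(basis)
    while True:
        Binv = np.linalg.inv(A[:, basis])
        beta = Binv @ b                       # primal basic solution
        y = c[basis] @ Binv                   # y^T = c_B^T B^{-1}
        d = c - y @ A                         # reduced costs (>= 0 invariant)
        # Step 2, dual pricing: pick a violating row (most negative beta_p).
        p = int(np.argmin(beta))
        if beta[p] >= -tol:
            return beta, basis, c[basis] @ beta        # OPTIMAL
        alpha_p = Binv[p] @ A                 # row p of B^{-1} A
        # Step 3, dual ratio test over nonbasic j with alpha_pj < 0.
        cand = [j for j in range(n) if j not in basis and alpha_p[j] < -tol]
        if not cand:
            raise ValueError("dual unbounded => primal LP infeasible")
        q = min(cand, key=lambda j: d[j] / -alpha_p[j])
        # Step 4, update: q replaces the p-th basic variable.
        basis[p] = q

# Running example; start from the dual feasible basis (x1, x3, x4).
A = np.array([[ 2.0,  1.0, -1.0,  0.0,  0.0],
              [-2.0, -1.0,  0.0, -1.0,  0.0],
              [-1.0, -2.0,  0.0,  0.0, -1.0]])
b = np.array([3.0, -6.0, -6.0])
c = np.array([-1.0, -1.0, 0.0, 0.0, 0.0])

beta, basis, z = dual_simplex(A, b, c, basis=[0, 2, 3])
print(z)               # -4.0
print(sorted(basis))   # [0, 1, 2]  ->  optimal basis (x1, x2, x3)
```

On the example it performs exactly the single pivot shown earlier: x4 leaves, x2 enters, and the method stops at the feasible optimum (2, 2) with value −4.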
Primal vs. Dual Simplex
[Table: side-by-side comparison of the PRIMAL and DUAL simplex methods]