
E1 251 Linear and Nonlinear Optimization

Chapter 11: Linear programming

11.1. Standard form of a linear program

Minimize cT x
subject to Ax = b and x ≥ 0

Transforming problems with inequality constraints: case I
Minimize c1 x1 + c2 x2 + … + cn xn
subject to
a11 x1 + a12 x2 + … + a1n xn ≤ b1
a21 x1 + a22 x2 + … + a2n xn ≤ b2
⋮
am1 x1 + am2 x2 + … + amn xn ≤ bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0

The above problem in matrix form:
Minimize cT x subject to Ax ≤ b and x ≥ 0
where c = [c1 … cn]T, x = [x1 … xn]T, b = [b1 … bm]T, {A}i,j = aij.
After adding slack variables y1, …, ym:

Minimize c1 x1 + c2 x2 + … + cn xn
subject to a11 x1 + a12 x2 + … + a1n xn + y1 = b1
a21 x1 + a22 x2 + … + a2n xn + y2 = b2
⋮
am1 x1 + am2 x2 + … + amn xn + ym = bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
y1 ≥ 0, y2 ≥ 0, …, ym ≥ 0

The above problem in matrix form:
Minimize c′T x′ subject to A′x′ = b and x′ ≥ 0, where
c′ = [c1 … cn 0 … 0]T, x′ = [x1 … xn y1 … ym]T, A′ = [A I].
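As a concrete numerical illustration (not from the original slides), the slack-variable conversion can be checked by solving both forms; the small LP below is hypothetical, and SciPy's linprog (HiGHS backend) is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical inequality-form LP: minimize c^T x  s.t.  A x <= b, x >= 0
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])

# Standard form via slack variables: A' = [A I], c' = [c; 0]
A_std = np.hstack([A, np.eye(len(b))])
c_std = np.concatenate([c, np.zeros(len(b))])

res_ineq = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
res_std = linprog(c_std, A_eq=A_std, b_eq=b, bounds=(0, None), method="highs")

# Both formulations reach the same optimal value
print(res_ineq.fun, res_std.fun)
```

The slack components of `res_std.x` report by how much each original inequality is inactive at the optimum.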
Transforming problems with inequality constraints: case II

Minimize c1 x1 + c2 x2 + … + cn xn
subject to
a11 x1 + a12 x2 + … + a1n xn ≥ b1
a21 x1 + a22 x2 + … + a2n xn ≥ b2
⋮
am1 x1 + am2 x2 + … + amn xn ≥ bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0

After subtracting surplus variables y1, …, ym:

Minimize c1 x1 + c2 x2 + … + cn xn
subject to a11 x1 + a12 x2 + … + a1n xn − y1 = b1
a21 x1 + a22 x2 + … + a2n xn − y2 = b2
⋮
am1 x1 + am2 x2 + … + amn xn − ym = bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
y1 ≥ 0, y2 ≥ 0, …, ym ≥ 0

The above problem in matrix form:
Minimize c′T x′ subject to A′x′ = b and x′ ≥ 0, where
c′ = [c1 … cn 0 … 0]T, x′ = [x1 … xn y1 … ym]T, A′ = [A −I].
11.2. Application of optimality conditions
The problem:
Minimize cT x subject to Ax = b and x ≥ 0
x : n × 1 vector; b : m × 1 vector; A : m × n matrix.
Regularity is not required:
T(x*) and M(x*) are obviously equivalent subspaces.
KKT conditions:
AT λ* + µ* = c, µ* ≥ 0, µ*T x* = 0, Ax* = b, x* ≥ 0
Sufficiency of KKT conditions:
For a point (x*, λ*, µ*) satisfying the KKT conditions, we have
cT x* = (AT λ* + µ*)T x* = (Ax*)T λ* + µ*T x* = bT λ*,
since Ax* = b and µ*T x* = 0.

For any feasible point x, we have
cT x = (AT λ* + µ*)T x = bT λ* + xT µ* = cT x* + xT µ*.
Since xT µ* ≥ 0, we have cT x ≥ cT x*.
11.3. Dual problem

The Lagrangian:
L(x, λ , µ ) = cT x + λ T (b − Ax) − µ T x (cross-check with chap 6)
= (cT − λ T A − µ T )x + λ T b.

L(x, λ , µ ) has finite infimum with respect to x only if cT − λ T A − µ T = 0.

Hence the dual function is q(λ , µ ) = bT λ with domain specified by


the constraint cT − λ T A − µ T = 0.

Hence the dual problem is:


maximize bT λ subject to c − A T λ − µ = 0, µ ≥ 0

Equivalent problem:
minimize − bT λ subject to A T λ − c ≤ 0.
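As a numerical sanity check (not part of the original slides), we can solve a small hypothetical primal in standard form together with the dual derived above, and confirm that the optimal values coincide; SciPy's linprog is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical primal in standard form: min c^T x  s.t.  A x = b, x >= 0
c = np.array([-1.0, -2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])

primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None), method="highs")

# Dual: maximize b^T lam  s.t.  A^T lam <= c, lam free.
# linprog minimizes, so pass -b and negate the optimal value.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None), method="highs")

print(primal.fun, -dual.fun)  # equal at the optimum (strong duality)
```

Note that the dual variable λ is unbounded in sign here, because it multiplies equality constraints.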
Optimality (KKT)

The problem:
minimize f (λ ) = −bT λ subject to g(λ ) = A T λ − c ≤ 0.
The KKT equations are
∇f(λ*) + Jg(λ*)T x* = 0 ⇒ − b + Ax* = 0
x* ≥ 0
AT λ * − c ≤ 0
x*T (A T λ * − c) = 0

Define µ* = −AT λ* + c. Then the KKT equations become
identical to the KKT equations of the primal problem.
Sufficiency of KKT conditions for the dual problem
Given the solution to the KKT conditions (x*, λ*, µ*), we have for any
feasible point λ that
bT λ = (x*)T AT λ = (x*)T(AT λ − c) + cT x* = −(x*)T µ + cT x*,
where µ = c − AT λ ≥ 0 by dual feasibility.
Since (x*)T µ ≥ 0, bT λ ≤ cT x* = bT λ*.

An observation:
cT x − bT λ = (c − AT λ)T x = µT x ≥ 0.
⇒ When both the primal and dual variables are feasible, the dual
cost lower-bounds the primal cost and the primal cost
upper-bounds the dual cost (weak duality theorem).
The strong duality theorem (Th. 11.3A):
(i) If either the primal or the dual problem has an optimal
solution, so does the other and their optimal values are equal.
(ii) If either the primal or the dual problem is unbounded,
the other problem is infeasible.
11.4. Geometric aspects of linear programs

Some definitions:

The problem:
Minimize cT x
subject to Ax = b ....(E1)
and x ≥ 0
(the pair of constraints is labeled (E2); the full minimization problem is labeled (E3))

Assumption: n > m and the matrix A is of full rank (rank m).

From the n columns of A we select a set of m linearly independent columns
(such a set exists since the rank of A is m). For notational simplicity, assume
that we select the first m columns of A and denote the m × m matrix determined
by these columns by B. The matrix B is then nonsingular and we may uniquely
solve the equation BxB = b for the m-vector xB. By putting x = [xB 0]T (that is,
setting the first m components of x equal to those of xB and the remaining
components equal to zero), we obtain a solution to Ax = b. This leads to the
following definition.

Definition. Given the set of m simultaneous linear equations in n unknowns
(E1), let B be any nonsingular m × m submatrix made up of columns of A.
Then, if all n − m components of x not associated with columns of B are set
equal to zero, the solution to the resulting set of equations is said to be a basic
solution to (E1) with respect to the basis B. The components of x associated
with columns of B are called basic variables.

In the above definition we refer to B as a basis, since B consists of m linearly
independent columns that can be regarded as a basis for the space Em. The
basic solution corresponds to an expression for the vector b as a linear
combination of these basis vectors.

The basic variables in a basic solution are not necessarily all nonzero. This is
noted by the following definition.

Definition. If one or more of the basic variables in a basic solution has value
zero, that solution is said to be a degenerate basic solution.

Similar definitions apply when the nonnegativity constraints are also
considered. Thus, consider now the system of constraints (E2), which
represents the constraints of a linear program in standard form.

Definition. A vector x satisfying (E2) is said to be feasible for these
constraints. A feasible solution to the constraints (E2) that is also basic is said
to be a basic feasible solution; if this solution is also a degenerate basic
solution, it is called a degenerate basic feasible solution.

Definition. A solution to the problem (E3) is called an optimal feasible
solution; if this solution is basic, it is called a basic optimal feasible solution.

Through the fundamental theorem of linear programming, we establish the
primary importance of basic feasible solutions in solving linear programs. The
method of proof of the theorem is in many respects as important as the result
itself, since it represents the beginning of the development of the simplex
method.
Theorem 11.4A (fundamental theorem of linear programming)

Minimize cT x
subject to Ax = b ....(E1)
and x ≥ 0
(the pair of constraints is labeled (E2); the full minimization problem is labeled (E3))

Given a linear program in standard form (E3) where A is an m × n
matrix of rank m,
(i) if there is a feasible solution, there is a basic feasible solution;
(ii) if there is an optimal feasible solution, there is an optimal basic
feasible solution.
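Since there are only finitely many ways to pick m columns from A, the theorem suggests a brute-force (exponential, but instructive) check: enumerate every candidate basis, keep the basic feasible solutions, and take the cheapest. A minimal sketch on hypothetical toy data:

```python
import itertools
import numpy as np

# Hypothetical standard-form data:  min c^T x  s.t.  Ax = b, x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
m, n = A.shape

best = None
for cols in itertools.combinations(range(n), m):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:   # columns must be linearly independent
        continue
    xB = np.linalg.solve(B, b)          # basic solution for this basis
    if np.any(xB < -1e-9):              # basic but not feasible
        continue
    x = np.zeros(n)
    x[list(cols)] = xB
    if best is None or c @ x < best[0]:
        best = (float(c @ x), x)

print(best)  # the optimum is attained at one of the finitely many BFS
```

The simplex method of Section 11.5 replaces this exhaustive enumeration with a guided walk from one BFS to a better one.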

Geometric interpretation of linear programs

We establish the correspondence between extreme points of the feasible set
and basic feasible solutions as follows. (The reader is referred to Appendix B
for a more complete summary of concepts related to convexity, but the
definition of an extreme point is stated here.)

Definition. A point x in a convex set C is said to be an extreme point of C
if there are no two distinct points x1 and x2 in C such that x = αx1 + (1 − α)x2
for some α, 0 < α < 1.

An extreme point is thus a point that does not lie strictly within a line segment
connecting two other points of the set. The extreme points of a triangle, for
example, are its three vertices.

Theorem 11.4B (equivalence of extreme points and basic solutions). Let A be
an m × n matrix of rank m and b an m-vector. Let K be the convex polytope
consisting of all n-vectors x satisfying
Ax = b, x ≥ 0. ....(E2)
A vector x is an extreme point of K if and only if x is a basic feasible solution
to (E2).
Proof of 11.4A.(i):
(1) Suppose there exists a feasible solution with the number of positive elements
equal to p, and WLOG let the first p elements of the solution vector, x1, x2, …, xp,
be the positive elements.
Then x1a1 + x2a2 + … + xpap = b, where
a1, a2, …, ap are the first p columns of A.

(2) If p > m, then a1, a2, …, ap are necessarily dependent since the rank of A
is m.
Hence there exist y1, y2, …, yp, not all zero, such that y1a1 + y2a2 + … + ypap = 0.

Hence, for any ε > 0, (x1 − εy1)a1 + (x2 − εy2)a2 + … + (xp − εyp)ap = b,
which shows that x1 − εy1, x2 − εy2, …, xp − εyp is also a feasible solution
for sufficiently small ε.
(3) If we choose ε such that ε = min{xi / yi : yi > 0} (replacing y by −y if no yi
is positive), then x1 − εy1, …, xp − εyp
will have at most p − 1 positive elements, with the other elements equal to zero.

(4) This shows that if there exists a feasible solution with p positive elements,
with p > m, then it is also possible to find a feasible solution with at most p − 1
positive elements. Repeat this until we get at most m positive elements in the solution.

Proof of 11.4A.(ii):
(1) Suppose there exists an optimal feasible solution with the number of positive
elements equal to p, and WLOG let the first p elements of the solution vector,
x1, x2, …, xp, be the positive elements.
Then x1a1 + x2a2 + … + xpap = b, where a1, a2, …, ap are the first p columns
of A.

(2) If p > m, then a1, a2, …, ap are necessarily dependent since the rank of A
is m.
Hence there exist y1, y2, …, yp, not all zero, such that y1a1 + y2a2 + … + ypap = 0.

Hence, for any ε > 0, (x1 − εy1)a1 + (x2 − εy2)a2 + … + (xp − εyp)ap = b,
which shows that x1 − εy1, x2 − εy2, …, xp − εyp is also a feasible solution
for sufficiently small ε.
(3) Next, note that both v and −v are feasible
directions at x′, where v = [y1 … yp 0 … 0]T and x′ = [x1 … xp 0 … 0]T,
since Av = 0.

Since x′ is a local minimum of cT x, we have cT(x′ + εv) ≥ cT x′
and cT(x′ − εv) ≥ cT x′ for sufficiently small ε > 0.

Hence cT v = 0. This implies that x1 − εy1, …, xp − εyp is
also an optimal feasible solution.
(4) If we choose ε such that ε = min{xi / yi : yi > 0}, then x1 − εy1, …, xp − εyp
will have at most p − 1 positive elements, with the other elements equal to zero.

(5) This shows that if there exists an optimal feasible solution with p positive
elements, with p > m, then it is also possible to find an optimal feasible solution
with at most p − 1 positive elements. Repeat this until we get at most m positive
elements in the solution.

Proof of 11.4B:

To show: if x is a basic feasible solution, and if there exist feasible
solutions y and z such that x = αy + (1 − α)z, 0 < α < 1,
then x = y = z (i.e., x is an extreme point).

Let x = [x1 x2 … xm 0 … 0]T be the basic solution. Then we have
x1a1 + x2a2 + … + xmam = b.

Let y and z be feasible vectors such that x = αy + (1 − α)z, 0 < α < 1.

Since the last n − m components of x are zero, the elements of
y and z are non-negative, and 0 < α < 1, the last n − m components of
y and z must be zero as well.
Hence we have
x1a1 + x2a2 + … + xmam = b
y1a1 + y2a2 + … + ymam = b
z1a1 + z2a2 + … + zmam = b

Since a1, a2, …, am are linearly independent by the definition of a basic
solution, we have x1 = y1 = z1, x2 = y2 = z2, …, xm = ym = zm.

Hence x = y = z, which means that x is an extreme point.
To show: for an extreme point x with non-zero elements given by
{x1, x2, …, xk}, the corresponding columns of A given by {a1, a2, …, ak}
are linearly independent, which will mean that k ≤ m, i.e., x is a basic
solution.

If the vectors {a1, a2, …, ak} were linearly dependent, then we would have
scalars {y1, y2, …, yk}, not all zero, such that y1a1 + y2a2 + … + ykak = 0.

Then for any ε, x + εy and x − εy are solutions of Ax = b, where
y = [y1 y2 … yk 0 … 0]T.

Since the nonzero elements of x are strictly positive, there exists a small
ε > 0 such that x + εy ≥ 0
and x − εy ≥ 0, meaning that x + εy and x − εy are feasible as well.
Then x = 0.5(x + εy) + 0.5(x − εy), meaning that x is a convex combination
of two other feasible solutions, which contradicts the assumption that x is an
extreme point.

Hence y = 0, i.e., {a1, a2, …, ak} are linearly independent.

Hence, k ≤ m, i.e., x is a basic solution.

11.5. The simplex method

11.5.1 Solving the KKT conditions with a chosen set of basic variables

Minimize cT x
subject to Ax = b ....(E1)
and x ≥ 0
(the pair of constraints is labeled (E2); the full minimization problem is labeled (E3))

Assumption: the first m variables, if solved for from (E1) by forcing the other
variables to be zero, will lead to a non-degenerate solution for (E2).

Consider the partition: A = [B N] (B : m × m matrix, N : m × (n − m) matrix)
xT = [xBT xNT], cT = [cBT cNT]

Goal: to solve the following equations
AT λ* + µ* = c, µ* ≥ 0, µ*T x* = 0, Ax* = b, x* ≥ 0
with the restriction xN = 0, µB = 0.
Selected KKT equations with the partition:
BT λ* = cB, NT λ* + µN* = cN, BxB* = b

Candidate solution (BFS1):
xB* = B−1b, xN* = 0
λ* = B−T cB
µB* = 0, µN* = cN − NT λ* = cN − NT B−T cB

(BFS1) solves all the KKT equations except the condition µN* ≥ 0,
provided xB* > 0.
11.5.2 Interpretation of negative values of µN* in BFS1 if xB* > 0

Suppose we want to increase one of the non-basic variables,
corresponding to index q, from zero.

Let x′ = [x′BT 0 … 0 x′q 0 … 0]T be the corresponding
solution to (E2).

Then x′B = xB* − B−1aq x′q, where aq is the qth column of A.

The new cost is
cT x′ = cBT x′B + cq x′q, where cq is the qth element of c,
= cBT xB* − cBT B−1aq x′q + cq x′q
= cBT xB* + (cq − aqT B−T cB) x′q
Comparing with BFS1, we see that (cq − aqT B−T cB) is a non-basic
component of µ*.

Hence, from a given BFS x* = [xB*T 0 … 0]T, if we want to
move to x′ = [x′BT 0 … 0 x′q 0 … 0]T such that x′B = xB* − B−1aq x′q,
the new cost is
f(x′) = cT x* + µq* x′q

Hence, for the BFS x* = [xB*T 0 … 0]T,
d = [(−B−1aq)T 0 … 0 1 0 … 0]T (with the 1 in the qth position)
is a feasible direction with µq*
as the rate of change in cost.
11.5.3 How far can you go along the feasible direction?

This is determined by the restriction that x′B = xB* − B−1aq x′q ≥ 0.

If B−1aq has some positive elements:
the maximum allowable value for x′q is
x′q = min over {i : {B−1aq}i > 0} of {xB*}i / {B−1aq}i.
This maximum value will give a new BFS.

If B−1aq has no positive elements:
the problem is unbounded.

11.5.4 The simplex algorithm

Given the partition A = [B N] and B−1 such that B−1b ≥ 0:
Step 1) Compute xB* = B−1b and µN* = cN − NT B−T cB.
If µN* ≥ 0, stop ("optimal BFS found"). Else go to step 2.
Step 2) Select q ∈ [m + 1, n] such that µq* < 0.
Compute dq = B−1aq.
If dq ≤ 0, stop ("problem is unbounded"). Otherwise go to step 3.
Step 3) Compute x′q = min over {i : {dq}i > 0} of {xB*}i / {dq}i.
Let p be the index that gives the minimum.
Compute x′B = xB* − x′q dq.
Define x* = [x′BT 0 … 0 x′q 0 … 0]T.
Step 4) Rearrange x*, A, and c, using p and q, such that the new
solution x* has its nonzero entries in the first m positions.
Step 5) Compute the new B−1. Go to step 1.
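Steps 1 through 5 above can be sketched in NumPy as a minimal revised simplex. This is an illustrative sketch, not a production solver: the anti-cycling and degeneracy safeguards of real implementations are omitted, and the instance data is hypothetical.

```python
import numpy as np

def revised_simplex(A, b, c, basis):
    """Follow steps 1-5: price the non-basic columns, pick an entering
    column, run the ratio test, and swap the basis."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        xB = np.linalg.solve(B, b)                 # step 1: x_B* = B^{-1} b
        lam = np.linalg.solve(B.T, c[basis])       # lambda* = B^{-T} c_B
        nonbasis = [j for j in range(n) if j not in basis]
        mu = c[nonbasis] - A[:, nonbasis].T @ lam  # mu_N* = c_N - N^T lambda*
        if np.all(mu >= -1e-9):                    # optimal BFS found
            x = np.zeros(n)
            x[basis] = xB
            return x, float(c @ x)
        q = nonbasis[int(np.argmin(mu))]           # step 2: mu_q* < 0 enters
        d = np.linalg.solve(B, A[:, q])            # d_q = B^{-1} a_q
        if np.all(d <= 1e-12):
            raise ValueError("problem is unbounded")
        ratios = np.full(m, np.inf)                # step 3: ratio test
        pos = d > 1e-12
        ratios[pos] = xB[pos] / d[pos]
        p = int(np.argmin(ratios))                 # leaving basic variable
        basis[p] = q                               # steps 4-5: update basis

# Hypothetical instance: min -x1 - 2*x2
# s.t. x1 + x2 + x3 = 4,  x1 + x4 = 2,  x >= 0   (x3, x4 are slacks)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x_opt, cost = revised_simplex(A, b, c, basis=[2, 3])  # slacks as initial basis
print(x_opt, cost)
```

Instead of physically rearranging the columns of A and c (step 4), the sketch keeps an index list `basis`, which is the common trick in practice.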


11.6. Choosing the initial basic variables

Case I: If the standard form was derived from inequality constraints (case I of
Section 11.1, with b ≥ 0), select the slack variables as the basic variables:

Minimize c1 x1 + c2 x2 + … + cn′ xn′
subject to
a11 x1 + a12 x2 + … + a1n′ xn′ + y1 = b1
a21 x1 + a22 x2 + … + a2n′ xn′ + y2 = b2
⋮
am1 x1 + am2 x2 + … + amn′ xn′ + ym = bm
x1 ≥ 0, x2 ≥ 0, …, xn′ ≥ 0
y1 ≥ 0, y2 ≥ 0, …, ym ≥ 0

Select y1, y2, …, ym as the basic variables, with the solution
y1* = b1, y2* = b2, …, ym* = bm.
Case II: If the problem was originally in standard form, construct an
auxiliary linear program.

Original problem:
Minimize cT x subject to Ax = b and x ≥ 0
(with b ≥ 0, which can be arranged by multiplying rows by −1)

Auxiliary problem:
Minimize [1 1 … 1]y subject to Ax + y = b and x ≥ 0, y ≥ 0.

1) A basic feasible solution for the auxiliary problem can be
initialized to y = b and x = 0.
2) The minimizer of the auxiliary problem, when the minimum value is zero,
yields a basic feasible solution for the original problem.
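A minimal phase-1 sketch of Case II, on hypothetical data and assuming SciPy's linprog is available: if the auxiliary optimum is zero, the x-part of its minimizer is feasible for the original constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Constraints of a hypothetical standard-form LP:  Ax = b, x >= 0  (b >= 0)
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
m, n = A.shape

# Auxiliary problem: minimize 1^T y  s.t.  Ax + y = b, x >= 0, y >= 0
A_aux = np.hstack([A, np.eye(m)])
c_aux = np.concatenate([np.zeros(n), np.ones(m)])

# (x, y) = (0, b) is an obvious basic feasible starting point here.
aux = linprog(c_aux, A_eq=A_aux, b_eq=b, bounds=(0, None), method="highs")

x0 = aux.x[:n]   # feasible for the original constraints when aux.fun == 0
print(aux.fun, x0)
```

If the auxiliary minimum were strictly positive, the original constraints would be infeasible, which is exactly how phase 1 detects infeasibility.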
11.7. The simplex algorithm in tableau form

11.7.1 The relative cost expression

Given a BFS x* = [xB*T 0]T, let x′ = [x′BT x′NT]T be a new feasible solution.

Then x′B = xB* − B−1N x′N.

The new cost is
cT x′ = cBT x′B + cNT x′N
= cBT xB* − cBT B−1N x′N + cNT x′N
= cBT xB* + (cN − NT B−T cB)T x′N = cBT xB* + µN*T x′N
11.7.2 The tableau form

Add a variable z which is equal to the value of the objective function, and add
the constraint c1 x1 + c2 x2 + … + cn xn − z = 0.

Define the tableau:
T = ⎡ B    N    0   b ⎤
    ⎣ cBT  cNT  −1  0 ⎦

Apply a transformation by pivoting such that we get the standard-form
tableau:
T′ = ⎡ I  N′   0   b′  ⎤
     ⎣ 0  rNT  −1  −z0 ⎦

Then N′ = B−1N, b′ = B−1b = xB*, z0 = cBT B−1b (verify).

Hence the cost is f(x′) = z = z0 + rNT x′N.

Comparing with the previous page, rNT = cNT − cBT B−1N = µN*T.

Further, dq = B−1aq is the (q − m)th column of N′.
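The identities N′ = B−1N, b′ = B−1b = xB*, z0 = cBT B−1b, and rN = µN* can be spot-checked numerically; the data below is a hypothetical toy instance, with the basis taken as the first two columns.

```python
import numpy as np

# Hypothetical data for checking the tableau identities
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

B, N = A[:, :2], A[:, 2:]      # basis = first two columns of A
cB, cN = c[:2], c[2:]
Binv = np.linalg.inv(B)

Nprime = Binv @ N              # N' = B^{-1} N
bprime = Binv @ b              # b' = B^{-1} b = x_B*
z0 = cB @ bprime               # z0 = c_B^T B^{-1} b
rN = cN - N.T @ Binv.T @ cB    # r_N = mu_N* (relative costs)

print(Nprime, bprime, z0, rN)
```

For this basis one component of rN is negative, so the tableau is not yet optimal, consistent with the stopping rule of the next subsection.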


11.7.3 The simplex algorithm in tableau form

Given the partition A = [B N] such that B−1b ≥ 0, form the tableau
T = ⎡ B    N    b ⎤
    ⎣ cBT  cNT  0 ⎦
(initialization). Apply row reduction to get the standard form
T = ⎡ I  N′    b′  ⎤
    ⎣ 0  µN*T  −z0 ⎦
(step 0).

Step 1) If µN* ≥ 0, stop ("optimal BFS found"); xB* = B−1b = b′. Else go to step 2.
Step 2) Select q ∈ [m + 1, n] such that µq* < 0.
Get dq from N′ (its (q − m)th column).
If dq ≤ 0, stop ("problem is unbounded"). Otherwise go to step 3.
Step 3) Compute x′q = min over {i : {dq}i > 0} of {b′}i / {dq}i. Let p be the index
that gives the minimum.
Compute x′B = b′ − x′q dq. Define x* = [x′BT 0 … 0 x′q 0 … 0]T.
Step 4) Rearrange x* and T, using p and q, such that the new solution x* has its
nonzero entries in the first m positions.
Step 5) Reduce T to standard form and go to step 1.
Tableau at initialization

⎡ a1,1  a1,2  …  a1,m  a1,m+1  a1,m+2  …  a1,n  b1 ⎤
⎢ a2,1  a2,2  …  a2,m  a2,m+1  a2,m+2  …  a2,n  b2 ⎥
⎢  ⋮     ⋮        ⋮      ⋮       ⋮         ⋮    ⋮  ⎥
⎢ am,1  am,2  …  am,m  am,m+1  am,m+2  …  am,n  bm ⎥
⎣ c1    c2    …  cm    cm+1    cm+2    …  cn    0  ⎦

Tableau after step 0

⎡ 1  0  …  0  y1,m+1  y1,m+2  …  y1,n  y1,0 ⎤
⎢ 0  1  …  0  y2,m+1  y2,m+2  …  y2,n  y2,0 ⎥
⎢ ⋮  ⋮     ⋮    ⋮       ⋮         ⋮     ⋮   ⎥
⎢ 0  0  …  1  ym,m+1  ym,m+2  …  ym,n  ym,0 ⎥
⎣ 0  0  …  0  rm+1    rm+2    …  rn    −z0  ⎦

Example tableau at the end of step 4
(here the entering column q = m + 2 has been swapped with the leaving column p = 2)

⎡ 1  y1,m+2  …  0  y1,m+1  0  …  y1,n  y1,0 ⎤
⎢ 0  y2,m+2  …  0  y2,m+1  1  …  y2,n  y2,0 ⎥
⎢ ⋮    ⋮        ⋮    ⋮     ⋮       ⋮     ⋮  ⎥
⎢ 0  ym,m+2  …  1  ym,m+1  0  …  ym,n  ym,0 ⎥
⎣ 0  rm+2    …  0  rm+1    0  …  rn    −z0  ⎦
11.8. Dual simplex method

Primal problem:
Minimize cT x subject to Ax = b and x ≥ 0

Dual problem:
Maximize bT λ subject to c − AT λ ≥ 0

KKT conditions:
AT λ* + µ* = c, µ* ≥ 0, µ*T x* = 0, Ax* = b, x* ≥ 0

Given the partition A = [B N] and B−1:
xB* = B−1b, µN* = cN − NT B−T cB, λ* = B−T cB
µ*T = [µB*T µN*T] and x*T = [xB*T xN*T], xN* = 0, µB* = 0.

In the simplex method, xB* ≥ 0, but µN* does not satisfy µN* ≥ 0; i.e., the
solution is feasible for the primal problem but not optimal. Hence we hunt for
the basis set such that µN* ≥ 0.

In the dual simplex method, µN* ≥ 0, but xB* does not satisfy xB* ≥ 0; i.e., the
solution is feasible for the dual problem but not optimal. Hence we hunt for
the basis set such that xB* ≥ 0.

Interpretation of negative components of basic variables

Suppose xq* is negative and we want to make xq non-basic.
Hence we want to make the qth component of µB equal to some α > 0.
Let αv be the corresponding adjustment in λ.
Then, substituting
µ* = [0; µ′N] + α eq
into the KKT equation AT λ* + µ* = c, we get
BT(λ* + αv) + α eq = cB ---------- (E1′)
NT(λ* + αv) + µ′N = cN ---------- (E2′)
Since BT λ* = cB, from (E1′) we get
−BT v = eq.
The new dual cost is
bT(λ* + αv) = bT λ* + α bT v = bT λ* − α bT B−T eq
= bT λ* − α xB*T eq = bT λ* − α xq*

xq* is the rate of decrease of the dual function along direction v;
since xq* < 0, the dual cost increases along v.
How do we adjust the other components of µN*?

Let µ′N = µN* − αw be the adjustment.
From the KKT conditions,
µN* − αw = cN − NT(λ* + αv).
Since µN* = cN − NT λ*, we have w = NT v = −NT B−T eq.

α is computed such that µ′N ≥ 0 and µ′N has exactly one zero component:
α = min over {j : wj > 0} of {µN*}j / wj
p = argmin over {j ∈ [m + 1, n], wj > 0} of {µN*}j / wj
p : index of the variable that becomes basic.
11.9. Optimality and duality for the inequality problem

The problem:
Minimize cT x subject to Ax ≥ b and x ≥ 0
x : n × 1 vector; b : m × 1 vector; A : m × n matrix.

KKT conditions:
AT λ* + µ* = c, µ* ≥ 0, λ* ≥ 0,
µ*T x* = 0, λ*T(Ax* − b) = 0,
x* ≥ 0, Ax* − b ≥ 0.

Sufficiency of KKT conditions:
For a point (x*, λ*, µ*) satisfying the KKT conditions, we have
cT x* = (AT λ* + µ*)T x* = (Ax*)T λ* + µ*T x* = bT λ*,
since λ*T(Ax* − b) = 0 and µ*T x* = 0.

For any feasible point x, we have
cT x = (AT λ* + µ*)T x = (Ax)T λ* + xT µ* ≥ bT λ* + xT µ* = cT x* + xT µ*.
Since xT µ* ≥ 0, we have cT x ≥ cT x*.
The dual problem:
The Lagrangian
L(x, λ, µ) = cT x + λT(b − Ax) − µT x
= (cT − λT A − µT)x + λT b.

L(x, λ, µ) has a finite infimum with respect to x only if cT − λT A − µT = 0.

Hence the dual function is q(λ , µ ) = bT λ with domain specified by


the constraint cT − λ T A − µ T = 0.

Hence the dual problem is:


maximize bT λ subject to c − A T λ − µ = 0, λ ≥ 0, µ ≥ 0

Equivalent problem:
minimize − bT λ subject to A T λ − c ≤ 0, λ ≥ 0
Dual Optimality (KKT)

The problem:
minimize f(λ) = −bT λ subject to g(λ) = AT λ − c ≤ 0, λ ≥ 0

The KKT equations (with multiplier x* for g(λ) ≤ 0 and y* for −λ ≤ 0) are
∇f(λ*) + Jg(λ*)T x* − y* = 0 ⇒ −b + Ax* = y*
x* ≥ 0
y* ≥ 0
AT λ* − c ≤ 0
λ* ≥ 0
x*T(AT λ* − c) = 0
y*T λ* = 0

Eliminating y* = Ax* − b, these become
Ax* − b ≥ 0
x* ≥ 0
AT λ* − c ≤ 0
λ* ≥ 0
x*T(AT λ* − c) = 0
(Ax* − b)T λ* = 0

Define µ* = −AT λ* + c. Then the KKT equations become
identical to the KKT equations of the primal problem.
Sufficiency of KKT conditions for the dual problem
Given the solution to the KKT conditions (x*, λ*, µ*), we have for any
feasible point λ (i.e., AT λ ≤ c, λ ≥ 0) that
bT λ ≤ (x*)T AT λ = (x*)T(AT λ − c) + cT x* = −(x*)T µ + cT x*,
where µ = c − AT λ ≥ 0.
Since (x*)T µ ≥ 0, bT λ ≤ cT x* = bT λ*.

An observation:
cT x − bT λ ≥ (c − AT λ)T x = µT x ≥ 0.
⇒ When both the primal and dual variables are feasible, the dual
cost lower-bounds the primal cost and the primal cost
upper-bounds the dual cost (weak duality theorem).
The strong duality theorem (Th. 11.9A):
(i) For the inequality problem, if either the primal or the dual
problem has an optimal solution, so does the other and
their optimal values are equal.
(ii) For the inequality problem, if either the primal or the dual
problem is unbounded, the other problem is infeasible.
