
Linear Programming (LP)

LP is a class of optimization problems where the cost function is linear in the decision variable and the feasibility set is a polyhedron.

LP in standard equality form:

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & c^\top x \\
\text{s.t.} \quad & Ax = b, \\
& x \geq 0.
\end{aligned}
\tag{P}
\]

Obtaining a lower bound on the cost function
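(A sketch of the standard argument, included here for completeness.) For any $y \in \mathbb{R}^m$ with $A^\top y \leq c$ and any $x$ feasible for (P) (so $Ax = b$ and $x \geq 0$),
\[
b^\top y = (Ax)^\top y = x^\top (A^\top y) \leq x^\top c = c^\top x,
\]
where the inequality uses $x \geq 0$ together with $A^\top y \leq c$. Every such $y$ therefore certifies the lower bound $b^\top y$ on the optimal cost of (P).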

Finding best possible lower bound

This happens to be another linear program:

\[
\begin{aligned}
\max_{y \in \mathbb{R}^m} \quad & b^\top y \\
\text{s.t.} \quad & A^\top y \leq c.
\end{aligned}
\tag{D}
\]

The above problem is referred to as the dual of problem (P).


An LP stated as above is said to be in standard inequality form.
We can show that the dual of (D) is (P).
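As a quick numerical sanity check of this primal-dual relationship, the following minimal sketch solves a small instance of (P) and its dual (D) with SciPy's linprog and compares the optimal values; the data A, b, c below are made up purely for illustration.

import numpy as np
from scipy.optimize import linprog

# Made-up instance of (P): min c^T x  s.t.  A x = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

# Primal (P): linprog minimizes, and its default variable bounds already enforce x >= 0.
primal = linprog(c, A_eq=A, b_eq=b)

# Dual (D): max b^T y  s.t.  A^T y <= c, with y free.  linprog only minimizes,
# so minimize -b^T y and flip the sign of the optimal value.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * A.shape[0])

print("primal optimal value:", primal.fun)   # 1.0
print("dual optimal value:  ", -dual.fun)    # 1.0, matching strong duality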

Properties

Theorem 7
For the primal-dual pair of optimization problems stated above, the following
are true.
1. If (P) is infeasible, and (D) is feasible, then (D) is unbounded.
2. If (P) is unbounded, then (D) is infeasible.
3. Weak Duality: For any feasible solutions $\bar{x}$ and $\bar{y}$ of the respective problems, we always have $c^\top \bar{x} \geq b^\top \bar{y}$.
4. Strong Duality: If both problems admit optimal solutions $x^*$ and $y^*$, then $c^\top x^* = b^\top y^*$.

HW: Give an example of (P) and (D) where both are infeasible.

Proof

Farkas’ Lemma

To prove the strong duality theorem, we will make use of an alternative form of Farkas' lemma.
Lemma 2 (Farkas' Lemma). Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then, exactly one of the following sets must be empty:
1. $\{x \in \mathbb{R}^n \mid Ax = b,\; x \geq 0\}$
2. $\{y \in \mathbb{R}^m \mid A^\top y \leq 0,\; b^\top y > 0\}$.

Lemma 3 (Alternative form of Farkas' Lemma). Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Then, exactly one of the following sets must be empty:
1. $\{x \in \mathbb{R}^n \mid Ax \leq b\}$
2. $\{y \in \mathbb{R}^m \mid y \geq 0,\; y^\top A = 0,\; y^\top b < 0\}$.
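One way to obtain the alternative form from Lemma 2 (a brief sketch): write $Ax \leq b$ equivalently as $A x^{+} - A x^{-} + s = b$ with $x^{+}, x^{-}, s \geq 0$, i.e., as $\tilde{A} z = b$, $z \geq 0$, with $\tilde{A} := [A,\; -A,\; I]$. Applying Lemma 2 to $(\tilde{A}, b)$, the second set becomes $\{y \mid A^\top y \leq 0,\; -A^\top y \leq 0,\; y \leq 0,\; b^\top y > 0\} = \{y \mid A^\top y = 0,\; y \leq 0,\; b^\top y > 0\}$, and substituting $y \mapsto -y$ gives exactly the second set in Lemma 3.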

Lagrangian Function

Consider the following optimization problem in standard form:

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g_i(x) \leq 0, \quad i \in [m] := \{1, 2, \ldots, m\}, \\
& h_j(x) = 0, \quad j \in [p].
\end{aligned}
\]

The Lagrangian function $L : \mathbb{R}^n \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}$ is defined as

\[
L(x, \lambda, \mu) := f(x) + \sum_{i \in [m]} \lambda_i g_i(x) + \sum_{j \in [p]} \mu_j h_j(x),
\]

where

$\lambda_i$ is the Lagrange multiplier associated with $g_i(x) \leq 0$,

$\mu_j$ is the Lagrange multiplier associated with $h_j(x) = 0$.

Lower Bound Property:

Lemma 4. If $\bar{x}$ is feasible and $\bar{\lambda} \geq 0$, then $f(\bar{x}) \geq L(\bar{x}, \bar{\lambda}, \mu)$.
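For completeness, the one-line argument: feasibility gives $g_i(\bar{x}) \leq 0$ and $h_j(\bar{x}) = 0$, so with $\bar{\lambda} \geq 0$,
\[
L(\bar{x}, \bar{\lambda}, \mu) = f(\bar{x}) + \sum_{i \in [m]} \bar{\lambda}_i g_i(\bar{x}) + \sum_{j \in [p]} \mu_j h_j(\bar{x}) \leq f(\bar{x}),
\]
for any choice of $\mu$.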

Lagrangian Dual

From the previous lemma, we know that if $\bar{x}$ is feasible and $\bar{\lambda} \geq 0$, then

\[
f(\bar{x}) \geq L(\bar{x}, \bar{\lambda}, \mu) \geq \inf_{x} L(x, \bar{\lambda}, \mu) =: d(\bar{\lambda}, \mu),
\]

where
\[
d(\lambda, \mu) := \inf_{x} \Big[ f(x) + \sum_{i \in [m]} \lambda_i g_i(x) + \sum_{j \in [p]} \mu_j h_j(x) \Big].
\]

Evaluating $d(\lambda, \mu)$ requires solving an unconstrained optimization problem.

Given any $\lambda \geq 0$ and $\mu$, we have $d(\lambda, \mu) \leq f^*$, where $f^*$ is the optimal value.

$d(\lambda, \mu)$ may take the value $-\infty$ for some choices of $\lambda$ and $\mu$.

$d(\lambda, \mu)$ is concave in $\lambda$ and $\mu$.
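The concavity claim holds because, for each fixed $x$, the map $(\lambda, \mu) \mapsto f(x) + \sum_{i \in [m]} \lambda_i g_i(x) + \sum_{j \in [p]} \mu_j h_j(x)$ is affine in $(\lambda, \mu)$; $d$ is the pointwise infimum of this family of affine functions and is therefore concave, regardless of whether the primal problem is convex.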

Lagrangian Dual Optimization Problem

Let us compute the best lower bound on $f^*$:

\[
\begin{aligned}
\max_{\lambda \in \mathbb{R}^m,\, \mu \in \mathbb{R}^p} \quad & d(\lambda, \mu) \\
\text{s.t.} \quad & \lambda \geq 0, \\
& (\lambda, \mu) \in \operatorname{dom}(d).
\end{aligned}
\]

Here $\operatorname{dom}(d) := \{(\lambda, \mu) \mid d(\lambda, \mu) > -\infty\}$. The above is a convex optimization problem since $d(\lambda, \mu)$ is concave in $\lambda$ and $\mu$.

Let the optimal value be denoted $d^*$.

Example 1: Lagrangian Dual of LP

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & c^\top x \\
\text{s.t.} \quad & Ax = b, \quad x \geq 0.
\end{aligned}
\]

Find L, d and dom(d).
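One way the computation can go (sketch): with $g(x) = -x$ and $h(x) = Ax - b$,
\[
L(x, \lambda, \mu) = c^\top x - \lambda^\top x + \mu^\top (Ax - b) = (c - \lambda + A^\top \mu)^\top x - b^\top \mu,
\]
which is linear in $x$, so the infimum over $x$ is $-\infty$ unless the coefficient of $x$ vanishes:
\[
d(\lambda, \mu) = \begin{cases} -b^\top \mu, & c - \lambda + A^\top \mu = 0, \\ -\infty, & \text{otherwise}, \end{cases}
\qquad \operatorname{dom}(d) = \{(\lambda, \mu) \mid \lambda = c + A^\top \mu\}.
\]
Maximizing $-b^\top \mu$ over $\lambda = c + A^\top \mu \geq 0$ and substituting $y = -\mu$ recovers the dual (D) from the LP section.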

Example 2: Least Norm Solution

Least norm solution:


\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & \tfrac{1}{2} x^\top x \\
\text{s.t.} \quad & Ax = b.
\end{aligned}
\]

Find L and d.
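A sketch of the computation: here
\[
L(x, \mu) = \tfrac{1}{2} x^\top x + \mu^\top (Ax - b),
\]
which is strictly convex in $x$, so setting $\nabla_x L = x + A^\top \mu = 0$ gives the minimizer $x = -A^\top \mu$ and
\[
d(\mu) = -\tfrac{1}{2} \mu^\top A A^\top \mu - b^\top \mu.
\]
If $A$ has full row rank, maximizing $d$ yields $\mu^* = -(A A^\top)^{-1} b$ and hence the familiar least-norm solution $x^* = A^\top (A A^\top)^{-1} b$.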

Weak and Strong Duality

Weak Duality: $d^* \leq f^*$ always holds (even for non-convex problems).

Strong Duality: $d^* = f^*$ is guaranteed to hold for convex problems satisfying certain conditions, referred to as constraint qualification conditions.

Example 3

\[
\begin{aligned}
\min_{x \in \mathbb{R}} \quad & x^2 \\
\text{s.t.} \quad & x - 1 \leq 0, \quad -x \leq 0.
\end{aligned}
\]

Find the optimal value of the above problem, derive the dual and determine
whether strong duality holds.

Example 4

\[
\begin{aligned}
\min_{x \in \mathbb{R}^2} \quad & x_1^2 - x_2^2 \\
\text{s.t.} \quad & x_1^2 + x_2^2 - 1 \leq 0.
\end{aligned}
\]

Find the optimal value of the above problem, derive the dual and determine
whether strong duality holds.

KKT Optimality Conditions

For the above primal and dual optimization problems, $\bar{x}$, $\bar{\lambda}$ and $\bar{\mu}$ are said to satisfy the KKT optimality conditions if the following hold:

Primal Feasibility: $g_i(\bar{x}) \leq 0$ for all $i \in [m]$, and $h_j(\bar{x}) = 0$ for all $j \in [p]$.

Dual Feasibility: $\bar{\lambda} \geq 0$.

Complementary Slackness: $\bar{\lambda}_i g_i(\bar{x}) = 0$ for all $i \in [m]$.

Stationarity: $\nabla_x f(\bar{x}) + \sum_{i \in [m]} \bar{\lambda}_i \nabla_x g_i(\bar{x}) + \sum_{j \in [p]} \bar{\mu}_j \nabla_x h_j(\bar{x}) = 0$.
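As a quick illustration, consider the least-norm problem from Example 2, which has only equality constraints: dual feasibility and complementary slackness are vacuous, and the KKT conditions reduce to
\[
A\bar{x} = b \quad \text{(primal feasibility)}, \qquad \bar{x} + A^\top \bar{\mu} = 0 \quad \text{(stationarity)},
\]
which, when $A A^\top$ is invertible, give $\bar{x} = A^\top (A A^\top)^{-1} b$, matching the dual computation sketched earlier.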

Sufficient Condition for Optimality

Let $\bar{x}$, $\bar{\lambda}$ and $\bar{\mu}$ satisfy the KKT conditions stated above. From primal and dual feasibility we have
\[
d(\bar{\lambda}, \bar{\mu}) = \inf_{x} \Big[ f(x) + \sum_{i \in [m]} \bar{\lambda}_i g_i(x) + \sum_{j \in [p]} \bar{\mu}_j h_j(x) \Big]
\leq f(\bar{x}) + \sum_{i \in [m]} \bar{\lambda}_i g_i(\bar{x}) + \sum_{j \in [p]} \bar{\mu}_j h_j(\bar{x}) \leq f(\bar{x}).
\]

Further, both inequalities hold with equality when the primal problem is convex: complementary slackness and $h_j(\bar{x}) = 0$ make the second inequality tight, while stationarity implies that $\bar{x}$ minimizes the convex function $x \mapsto L(x, \bar{\lambda}, \bar{\mu})$, which makes the first inequality tight.

Thus, when the primal problem is convex, we have:

$d(\bar{\lambda}, \bar{\mu}) = f(\bar{x})$ (strong duality),

$\bar{x}$ is an optimal solution of the primal problem,

$(\bar{\lambda}, \bar{\mu})$ is an optimal solution of the dual problem.

Necessary and Sufficient Condition for Optimality

Theorem 8
Suppose the primal optimization problem is convex and satisfies Slater's constraint qualification condition: there exists $\bar{x} \in \operatorname{int}(D)$, where $D$ is the domain of the optimization problem, such that $g_i(\bar{x}) < 0$ for all $i \in [m]$ and $h_j(\bar{x}) = 0$ for all $j \in [p]$.

Then, strong duality holds. Equivalently, a feasible solution $x^*$ is optimal if and only if there exist $\lambda^*, \mu^*$ such that $(x^*, \lambda^*, \mu^*)$ satisfy the KKT optimality conditions.

Constraint qualification is required for the necessity part of the proof.
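A standard illustration of why some qualification is needed for necessity: for $\min_{x \in \mathbb{R}} x$ subject to $x^2 \leq 0$, the only feasible (hence optimal) point is $x^* = 0$, yet stationarity would require $1 + 2\lambda x^* = 1 = 0$, so no KKT multiplier exists; Slater's condition fails here since no point satisfies $x^2 < 0$.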

Convex Theorem of the Alternative

Consider the following general form of optimization problem:

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g_i(x) \leq 0, \quad i \in [m] := \{1, 2, \ldots, m\},
\end{aligned}
\]

where $f$ and $g_i$ are convex functions.


Theorem 9
Let the constraint functions $g_i$ satisfy Slater's condition: there exists $\bar{x}$ such that $g_i(\bar{x}) < 0$ for all $i \in [m]$. Then, exactly one of the following two systems must be empty.
$\{x \in \mathbb{R}^n \mid f(x) < 0,\; g_i(x) \leq 0,\; i \in [m]\}$
$\{\lambda \in \mathbb{R}^m \mid \lambda \geq 0,\; \inf_{x \in \mathbb{R}^n} [f(x) + \sum_{i \in [m]} \lambda_i g_i(x)] \geq 0\}$.

Proof: Blackboard.

Strong Duality Theorem

Consider the following general form of optimization problem:

\[
\begin{aligned}
\min_{x \in \mathbb{R}^n} \quad & f(x) \\
\text{s.t.} \quad & g_i(x) \leq 0, \quad i \in [m] := \{1, 2, \ldots, m\},
\end{aligned}
\]

where $f$ and $g_i$ are convex functions satisfying Slater's condition.


Theorem 10
$x^*$ is an optimal solution to the above problem if and only if there exists $\lambda \geq 0$ such that $\inf_{x \in \mathbb{R}^n} \big[ f(x) + \sum_{i \in [m]} \lambda_i g_i(x) \big] \geq f(x^*)$.

Proof: Blackboard.
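A brief sketch of one way the argument can go: if $x^*$ is optimal, then the system $\{f(x) - f(x^*) < 0,\; g_i(x) \leq 0,\; i \in [m]\}$ has no solution, so applying Theorem 9 with $f$ replaced by $f - f(x^*)$ yields $\lambda \geq 0$ with $\inf_{x}[f(x) - f(x^*) + \sum_{i \in [m]} \lambda_i g_i(x)] \geq 0$, which is the stated inequality. Conversely, if such a $\lambda$ exists, then for any feasible $x$ we have $f(x) \geq f(x) + \sum_{i \in [m]} \lambda_i g_i(x) \geq f(x^*)$, since each $\lambda_i g_i(x) \leq 0$, so $x^*$ is optimal.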

Other notions of constraint qualification

If all the constraint functions $g_i(x)$ and $h_j(x)$ are affine, then constraint qualification holds.

Relaxed Slater Condition: If some of the inequality constraints are affine, then they need not hold with strict inequality. It is sufficient to find $\bar{x} \in \operatorname{relint}(D)$ such that $g_i(\bar{x}) < 0$ for all $g_i$ that are not affine.

Linear Independence Constraint Qualification holds at a feasible solution $x^*$ if the vectors
\[
\nabla h_j(x^*), \; j \in [p], \qquad \nabla g_i(x^*), \; i \in \{k \in [m] \mid g_k(x^*) = 0\},
\]
are linearly independent.
