
Lagrange relaxation and KKT conditions

1. Lagrange relaxation
   - Global optimality conditions
   - KKT conditions for convex problems
   - Applications
2. KKT conditions for general nonlinear optimization problems


Lagrange relaxation

We consider the optimization problem

$$\begin{aligned} \text{minimize} \quad & f(x) \\ \text{s.t.} \quad & g_i(x) \le 0, \quad i = 1, \dots, m \end{aligned} \qquad (1)$$

where $f(x)$ and $g_i(x)$ are real-valued functions. If $g(x) = \big(g_1(x), \dots, g_m(x)\big)^T$, then (1) can be written

$$\begin{aligned} \text{minimize} \quad & f(x) \\ \text{s.t.} \quad & g(x) \le 0. \end{aligned} \qquad (2)$$


The idea behind Lagrange relaxation is to put non-negative prices $y_i \ge 0$ on the constraints and then add these to the objective function. This gives the (unconstrained) optimization problem

$$\text{minimize} \quad f(x) + \sum_{i=1}^{m} y_i\, g_i(x) \qquad (3)$$

which, using $y = (y_1, \dots, y_m)^T$, can be written

$$\text{minimize} \quad f(x) + y^T g(x).$$

The price $y_i$ is called a Lagrange multiplier.

Definition 1. The function $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ defined by $L(x, y) = f(x) + y^T g(x)$ is called the Lagrange function of (2).
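As a small added illustration (not from the original slides), consider minimizing $x_1^2 + x_2^2$ subject to the single constraint $1 - x_1 - x_2 \le 0$. Its Lagrange function is

$$L(x, y) = x_1^2 + x_2^2 + y\,(1 - x_1 - x_2), \qquad y \ge 0,$$

and for each fixed $y$ this is an unconstrained, smooth function of $x$.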

Weak duality

Theorem 1 (Weak duality). For an arbitrary $y \ge 0$ it holds that
$$\min_x L(x, y) \le f(\bar{x}), \qquad (4)$$
where $\bar{x}$ is an optimal solution to (2).

Proof: Since $\bar{x}$ is a feasible solution to (2) it holds that $g(\bar{x}) \le 0$. We get
$$\min_x L(x, y) \le L(\bar{x}, y) = f(\bar{x}) + y^T g(\bar{x}) \le f(\bar{x}),$$
where the last inequality uses $y \ge 0$ and $g(\bar{x}) \le 0$.

Hence, minimizing the Lagrange function provides lower bounds on the optimal value of (2). By an appropriate choice of $y$ one searches for a good approximation of the optimal value of (2). In practical algorithms one tries to solve
$$\max_{y \ge 0} \; \min_x L(x, y).$$
The next theorem gives conditions under which the Lagrange multiplier gives equality in (4).
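Before turning to that theorem, here is a quick numerical check of the weak duality bound for the small example problem above (a minimal sketch added here, not from the slides, assuming NumPy and SciPy are available). For every $y \ge 0$, the value $\min_x L(x, y)$ stays below the optimal value $f(\bar{x}) = 1/2$, with equality at $y = 1$.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative example: minimize x1^2 + x2^2  s.t.  1 - x1 - x2 <= 0.
# Optimal solution x_bar = (1/2, 1/2) with f(x_bar) = 1/2.
f_opt = 0.5

def lagrange_min(y):
    """Evaluate min_x L(x, y) with L(x, y) = x1^2 + x2^2 + y*(1 - x1 - x2)."""
    res = minimize(lambda x: x @ x + y * (1.0 - x.sum()), x0=np.zeros(2))
    return res.fun

for y in [0.0, 0.5, 1.0, 2.0]:
    phi = lagrange_min(y)            # analytically phi(y) = y - y**2 / 2
    print(f"y = {y:3.1f}   min_x L(x, y) = {phi:6.3f}   lower bound: {phi <= f_opt + 1e-8}")
```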

Global optimality conditions

Theorem 2. If $(\bar{x}, \bar{y}) \in \mathbb{R}^n \times \mathbb{R}^m$ satisfies the conditions
(1) $L(\bar{x}, \bar{y}) = \min_x L(x, \bar{y})$,
(2) $g(\bar{x}) \le 0$,
(3) $\bar{y} \ge 0$,
(4) $\bar{y}^T g(\bar{x}) = 0$,
then $\bar{x}$ is an optimal solution to (2).

Proof: If $x$ is an arbitrary feasible solution to (2) it holds that $g(x) \le 0$, which shows that
$$f(x) \ge f(x) + \bar{y}^T g(x) = L(x, \bar{y}) \ge L(\bar{x}, \bar{y}) = f(\bar{x}),$$
where the first inequality follows from (3) and $g(x) \le 0$, the second inequality follows from (1), and the last equality from (4).
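As a quick check of how the four conditions are used (an example added here, not taken from the slides), consider minimizing $x^2$ subject to $1 - x \le 0$. The pair $\bar{x} = 1$, $\bar{y} = 2$ satisfies all four conditions, so $\bar{x} = 1$ is optimal:

$$L(x, \bar{y}) = x^2 + 2(1 - x) = (x - 1)^2 + 1, \quad \text{so} \quad \min_x L(x, \bar{y}) = L(\bar{x}, \bar{y}) = 1 = f(\bar{x}),$$
$$g(\bar{x}) = 1 - \bar{x} = 0 \le 0, \qquad \bar{y} = 2 \ge 0, \qquad \bar{y}\, g(\bar{x}) = 0.$$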

Convex optimization problems

If the functions $f$ and $g_1, \dots, g_m$ are convex and continuously differentiable, then condition (1) in Theorem 2 is equivalent to the condition
$$\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{y}_i \nabla g_i(\bar{x}) = 0^T. \qquad (5)$$
This follows since $L(x, \bar{y})$ is convex when $\bar{y} \ge 0$, and then $\bar{x}$ is a minimum point of $L(x, \bar{y})$ if, and only if, $\nabla_x L(\bar{x}, \bar{y}) = 0^T$, i.e., if and only if (5) is satisfied.

The global optimality conditions in Theorem 2 are sufficient conditions for optimality, but in general not necessary. The next theorem shows that they are often also necessary conditions for convex optimization problems.

Definition 2. The optimization problem (1) is a regular convex optimization problem if the functions $f$ and $g_1, \dots, g_m$ are convex and continuously differentiable and there exists a point $x_0 \in \mathbb{R}^n$ such that $g_i(x_0) < 0$, $i = 1, \dots, m$.

Theorem 3 (KKT for convex problems). Assume that (1) is a regular convex problem. Then $\bar{x}$ is a (global) optimal solution if, and only if, there exists a vector $\bar{y} \in \mathbb{R}^m$ such that
(1) $\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{y}_i \nabla g_i(\bar{x}) = 0^T$,
(2) $g(\bar{x}) \le 0$,
(3) $\bar{y} \ge 0$,
(4) $\bar{y}^T g(\bar{x}) = 0$.

Proof: Sufficiency was shown previously. Necessity is shown in the book.

The conditions (2)-(4) can be made more explicit. We have that
$$\bar{y}^T g(\bar{x}) = \sum_{i=1}^{m} \bar{y}_i\, g_i(\bar{x}) = 0.$$
Since $g_i(\bar{x}) \le 0$ and $\bar{y}_i \ge 0$, it follows that $\bar{y}_i\, g_i(\bar{x}) = 0$, $i = 1, \dots, m$. We then get the equivalent conditions
(2') $g_i(\bar{x}) \le 0$, $i = 1, \dots, m$,
(3') $\bar{y}_i \ge 0$, $i = 1, \dots, m$,
(4') $\bar{y}_i\, g_i(\bar{x}) = 0$, $i = 1, \dots, m$.


Geometric interpretation

The complementarity condition (4') implies that if $g_i(\bar{x}) < 0$ then $\bar{y}_i = 0$. Therefore, condition (1) can be written
$$\nabla f(\bar{x}) = - \sum_{i:\, g_i(\bar{x}) = 0} \bar{y}_i \nabla g_i(\bar{x}),$$
i.e., the gradient of the objective is a negative linear combination of the gradients of the binding (active) constraints.

[Figure: sketches of a feasible region. At a boundary point $\bar{x}$ where $g_1$ and $g_2$ are active, $\nabla f(\bar{x})$ is a negative combination of $\nabla g_1(\bar{x})$ and $\nabla g_2(\bar{x})$; at an interior minimum, $\nabla f(\bar{x}) = 0^T$.]

Quadratic optimization with inequality constraints

$$\begin{aligned} \text{minimize} \quad & \tfrac{1}{2} x^T H x + c^T x + c_0 \\ \text{s.t.} \quad & Ax \le b. \end{aligned} \qquad (6)$$

If $H$ is positive semi-definite, then this is a convex optimization problem and we can apply Theorem 3.

Theorem 4. $\bar{x}$ is a (global) optimal solution to (6) if, and only if, there exists a vector $\bar{y} \in \mathbb{R}^m$ such that
(1) $H\bar{x} + c = -A^T \bar{y}$,
(2) $A\bar{x} \le b$,
(3) $\bar{y} \ge 0$,
(4) $\bar{y}^T (A\bar{x} - b) = 0$.
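A minimal numerical sketch of Theorem 4 (added here; the data $H$, $c$, $A$, $b$ and the candidate point are made up for illustration, and NumPy/SciPy are assumed to be available):

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# Illustrative QP data (hypothetical, not from the slides).
H = np.eye(2)                        # positive definite, so the problem is convex
c = np.array([-3.0, -4.0])
A = np.array([[1.0, 1.0]])           # single constraint x1 + x2 <= 2
b = np.array([2.0])

# Hand-derived KKT candidate: assuming the constraint is active,
# H x + c = -A^T y and A x = b give y = 2.5 and x = (0.5, 1.5).
x_bar = np.array([0.5, 1.5])
y_bar = np.array([2.5])

# Check the four conditions of Theorem 4.
print("stationarity ", np.allclose(H @ x_bar + c, -A.T @ y_bar))   # (1)
print("primal feas. ", np.all(A @ x_bar <= b + 1e-12))             # (2)
print("dual feas.   ", np.all(y_bar >= 0))                         # (3)
print("complement.  ", np.isclose(y_bar @ (A @ x_bar - b), 0.0))   # (4)

# Cross-check against a numerical solver.
res = minimize(lambda x: 0.5 * x @ H @ x + c @ x, x0=np.zeros(2),
               constraints=[LinearConstraint(A, -np.inf, b)])
print("solver optimum", res.x)       # should be close to x_bar
```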

Traffic control in communication systems

We consider a communication network consisting of two links. Three sources are sending data over the network to three different destinations.
[Figure: network diagram showing Sources 1-3, Links 1 and 2, and Destinations 1-3.]

Source 1 uses both links. Source 2 uses link 1. Source 3 uses link 2.

Link 1 has capacity 2 (in a normalized unit [data/s]) and link 2 has capacity 1. The three sources send data with speeds $x_r$, $r = 1, 2, 3$, and each source has a utility function $U_r(x_r)$, $r = 1, 2, 3$. A common choice of utility function is $U_r(x_r) = w_r \log(x_r)$. For efficient and fair use of the available capacity, the data speeds are chosen using the following optimization criterion:

$$\begin{aligned} \text{maximize} \quad & U_1(x_1) + U_2(x_2) + U_3(x_3) \\ \text{s.t.} \quad & x_1 + x_2 \le 2 \\ & x_1 + x_3 \le 1 \\ & x_1 \ge 0, \; x_2 \ge 0, \; x_3 \ge 0 \end{aligned}$$

Assume $U_r(x_r) = \log(x_r)$, $r = 1, 2, 3$. The optimization problem can then be written

$$\begin{aligned} \text{minimize} \quad & -\log(x_1) - \log(x_2) - \log(x_3) \\ \text{s.t.} \quad & x_1 + x_2 \le 2 \\ & x_1 + x_3 \le 1 \end{aligned}$$

We have relaxed the constraints $x_r \ge 0$, $r = 1, \dots, 3$, since they will be satisfied automatically ($-\log(x) \to \infty$ as $x \to 0^+$). The optimization problem is convex since the constraints are linear inequalities and the objective function is a sum of convex functions, and hence convex.


The optimality conditions in Theorem 3 are

(1) $\; -\frac{1}{x_1} + y_1 + y_2 = 0, \qquad -\frac{1}{x_2} + y_1 = 0, \qquad -\frac{1}{x_3} + y_2 = 0,$

(2) $\; x_1 + x_2 - 2 \le 0, \qquad x_1 + x_3 - 1 \le 0,$

(3) $\; y_1 \ge 0, \qquad y_2 \ge 0,$

(4) $\; y_1 (x_1 + x_2 - 2) = 0, \qquad y_2 (x_1 + x_3 - 1) = 0.$

From (1) we get
$$x_1 = \frac{1}{y_1 + y_2}, \qquad x_2 = \frac{1}{y_1}, \qquad x_3 = \frac{1}{y_2}.$$
This requires $y_1 > 0$ and $y_2 > 0$, hence the complementarity conditions (4) show that both constraints in (2) are satisfied with equality. We get
$$\frac{1}{y_1 + y_2} + \frac{1}{y_1} = 2, \qquad \frac{1}{y_1 + y_2} + \frac{1}{y_2} = 1,$$
which gives
$$y_1 = \frac{\sqrt{3}}{\sqrt{3} + 1}, \qquad y_2 = \sqrt{3},$$
which in turn gives the optimal data speeds
$$\bar{x}_1 = \frac{\sqrt{3} + 1}{\sqrt{3}\,(\sqrt{3} + 2)}, \qquad \bar{x}_2 = \frac{\sqrt{3} + 1}{\sqrt{3}}, \qquad \bar{x}_3 = \frac{1}{\sqrt{3}}.$$
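As a cross-check (a sketch added here, assuming SciPy is available; it is not part of the original derivation), the problem can also be solved numerically and compared with the closed-form speeds above:

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# minimize -log(x1) - log(x2) - log(x3)  s.t.  x1 + x2 <= 2,  x1 + x3 <= 1
objective = lambda x: -np.sum(np.log(x))
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
capacity = np.array([2.0, 1.0])

res = minimize(objective, x0=np.array([0.3, 0.5, 0.5]),
               bounds=[(1e-6, None)] * 3,
               constraints=[LinearConstraint(A, -np.inf, capacity)])

s3 = np.sqrt(3.0)
x_analytic = np.array([(s3 + 1) / (s3 * (s3 + 2)), (s3 + 1) / s3, 1 / s3])
print(res.x)        # approximately [0.4226, 1.5774, 0.5774]
print(x_analytic)   # the closed-form solution derived above
```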


General nonlinear problems under equality constraints

$$\begin{aligned} \text{minimize} \quad & f(x) \\ \text{s.t.} \quad & h_i(x) = 0, \quad i = 1, \dots, m \end{aligned} \qquad (7)$$

The feasible region $F = \{x \in \mathbb{R}^n : h_i(x) = 0, \ i = 1, \dots, m\}$ is not convex unless the functions $h_i$ are affine, i.e., $h_i(x) = a_i^T x + b_i$. We assume that $n > m$. We need the following technical assumption:

Definition 3. A feasible solution $x \in F$ is a regular point to (7) if $\nabla h_i(x)$, $i = 1, \dots, m$, are linearly independent.


Theorem 5 (KKT for problems with equality constraints). Assume that $\bar{x} \in F$ is a regular point and a local optimal solution to (7). Then there exists $\bar{u} \in \mathbb{R}^m$ such that
(1) $\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{u}_i \nabla h_i(\bar{x}) = 0^T$,
(2) $h_i(\bar{x}) = 0$, $i = 1, \dots, m$.

Proof idea: Let $x(t)$ be an arbitrary parameterized curve in the feasible set $F = \{x \in \mathbb{R}^n : h_i(x) = 0, \ i = 1, \dots, m\}$ such that $x(0) = \bar{x}$. The figure below illustrates how this curve is mapped to a curve $f(x(t))$ in the range space of the objective function. The feasible set $F$ is in general of dimension higher than one, which is illustrated in the right panel.

[Figure: a parameterized curve $x(t)$ in the feasible set $F$, passing through $\bar{x}$ at $t = 0$ (coordinates $x_1, x_2, \dots, x_n$), and the corresponding curve $f(x(t))$ in the range of the objective; the right panel illustrates that $F$ is in general of dimension higher than one.]

Since $x(0) = \bar{x}$ is a local optimal solution, it holds that
$$\frac{d}{dt} f(x(t))\Big|_{t=0} = \nabla f(\bar{x})\, \dot{x}(0) = 0.$$
Furthermore, $x(t) \in F$, which leads to
$$h_i(x(t)) = 0, \quad i = 1, \dots, m, \quad t \in (-\varepsilon, \varepsilon),$$
for some $\varepsilon > 0$.

This means that
$$\frac{d}{dt} h_i(x(t))\Big|_{t=0} = \nabla h_i(\bar{x})\, \dot{x}(0) = 0, \quad i = 1, \dots, m,$$
which in turn leads to $\dot{x}(0) \in N(A)$, where
$$A = \begin{pmatrix} \nabla h_1(\bar{x}) \\ \vdots \\ \nabla h_m(\bar{x}) \end{pmatrix}.$$
Conversely, the implicit function theorem can be used to show that if $p \in N(A)$, then there exists a parameterized curve $x(t) \in F$ with $x(0) = \bar{x}$ and $\dot{x}(0) = p$.


Altogether, the above argument shows that
$$\nabla f(\bar{x})\, p = 0 \quad \text{for all } p \in N(A),$$
i.e.,
$$\nabla f(\bar{x})^T \in N(A)^{\perp} = R(A^T), \quad \text{so} \quad \nabla f(\bar{x})^T = A^T v$$
for some $v \in \mathbb{R}^m$. If we let $\bar{u} = -v \in \mathbb{R}^m$, the last expression can be written
$$\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{u}_i \nabla h_i(\bar{x}) = 0^T,$$
which was to be proven.
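A small numerical illustration of Theorem 5 (added here; the problem is hypothetical and NumPy is assumed): for minimizing $f(x) = x_1 + x_2$ on the circle $h(x) = x_1^2 + x_2^2 - 1 = 0$, the point $\bar{x} = (-1/\sqrt{2}, -1/\sqrt{2})$ is a regular local minimizer with multiplier $\bar{u} = 1/\sqrt{2}$:

```python
import numpy as np

# Hypothetical example: minimize f(x) = x1 + x2  s.t.  h(x) = x1^2 + x2^2 - 1 = 0.
grad_f = lambda x: np.array([1.0, 1.0])            # gradient of the objective
grad_h = lambda x: np.array([2 * x[0], 2 * x[1]])  # gradient of the constraint

x_bar = np.array([-1.0, -1.0]) / np.sqrt(2.0)      # candidate minimizer on the circle
u_bar = 1.0 / np.sqrt(2.0)                         # multiplier solving condition (1)

# Condition (1): grad f + u * grad h = 0;  condition (2): h(x_bar) = 0.
print(np.allclose(grad_f(x_bar) + u_bar * grad_h(x_bar), 0.0))
print(np.isclose(x_bar @ x_bar - 1.0, 0.0))
```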


General nonlinear optimization problems with inequality constraints

$$\begin{aligned} \text{minimize} \quad & f(x) \\ \text{s.t.} \quad & g_i(x) \le 0, \quad i = 1, \dots, m \end{aligned} \qquad (8)$$

where $f(x)$ and $g_i(x)$ are real-valued functions. If the problem is not convex (i.e., if $F = \{x \in \mathbb{R}^n : g_i(x) \le 0, \ i = 1, \dots, m\}$ is not convex and/or $f$ is not convex on $F$), it is in general only possible to derive necessary optimality conditions. The regularity condition in Definition 2 has to be replaced with a stronger condition.

Definition 4. For $x \in F$ we let $I_a(x)$ denote the index set of the active constraints at the point $x$, i.e., $I_a(x) = \{i \in \{1, \dots, m\} : g_i(x) = 0\}$.

Definition 5. A feasible solution $x \in F$ is a regular point to (8) if $\nabla g_i(x)$, $i \in I_a(x)$, are linearly independent.

Theorem 6 (KKT for general problems with inequality constraints). Assume that $\bar{x}$ is a regular point to (8) and a local optimal solution. Then there exists a vector $\bar{y} \in \mathbb{R}^m$ such that
(1) $\nabla f(\bar{x}) + \sum_{i=1}^{m} \bar{y}_i \nabla g_i(\bar{x}) = 0^T$,
(2) $g_i(\bar{x}) \le 0$, $i = 1, \dots, m$,
(3) $\bar{y}_i \ge 0$, $i = 1, \dots, m$,
(4) $\bar{y}_i\, g_i(\bar{x}) = 0$, $i = 1, \dots, m$.

Proof: The proof is similar to the proof of Theorem 3.
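The conditions are easy to verify numerically at a candidate point. The sketch below (added here, with an illustrative non-convex example that is not from the slides) checks conditions (1)-(4) at a local minimizer:

```python
import numpy as np

# Illustrative non-convex problem:
# minimize f(x) = -x1*x2   s.t.  x1 + x2 - 2 <= 0,  -x1 <= 0,  -x2 <= 0.
grad_f = lambda x: np.array([-x[1], -x[0]])
g      = lambda x: np.array([x[0] + x[1] - 2.0, -x[0], -x[1]])
grad_g = lambda x: np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])

x_bar = np.array([1.0, 1.0])       # candidate local minimizer (regular point)
y_bar = np.array([1.0, 0.0, 0.0])  # multipliers: only the first constraint is active

print(np.allclose(grad_f(x_bar) + y_bar @ grad_g(x_bar), 0.0))  # (1) stationarity
print(np.all(g(x_bar) <= 1e-12))                                # (2) primal feasibility
print(np.all(y_bar >= 0))                                       # (3) dual feasibility
print(np.allclose(y_bar * g(x_bar), 0.0))                       # (4) complementarity
```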

