

Chapter 11
Optimization with Equality Constraints

Albert William Tucker (1905-1995)   Harold William Kuhn (1925-2014)
Joseph-Louis (Giuseppe Lodovico), comte de Lagrange (1736-1813)

11.1 General Problem


• Now, a constraint is added to the optimization problem:
max_{x,y} u(x, y)   s.t.   x p_x + y p_y = I,
where p_x, p_y are exogenous prices and I is exogenous income.
• Different methods to solve this problem:
– Substitution
– Total differential approach
– Lagrange Multiplier


11.1 Substitution Approach


• Easy to use in simple two-variable problems: use the constraint to substitute into the objective function, then optimize as usual.
• Example:
U  U ( x1 , x2 )  x1 x2  2 x1 s.t. 60  4 x1  2 x2
1) Solve for x 2
x2  30  2 x1
2) Substituti ng into U(x 1 , x 2 )
U  x1 30  2 x1   2 x1  32 x1  2 x12
3) F.o.c. :
dU dx1  32  4 x1  0;  x1*  8; and x2*  14;
Check s.o.c. :
d 2U dx12  4  0  maximum
4) Calculate maximum Value for U(.) : U *  128
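A quick check of steps 1)-4); a minimal sketch, assuming the SymPy library is available:

```python
# Minimal sketch (assuming SymPy): redo the substitution steps symbolically.
import sympy as sp

x1 = sp.symbols('x1')
x2 = 30 - 2*x1                       # step 1: constraint solved for x2
U = x1*x2 + 2*x1                     # step 2: substituted objective

crit = sp.solve(sp.diff(U, x1), x1)  # step 3: f.o.c. dU/dx1 = 0
print(crit)                          # [8]  -> x1* = 8
print(sp.diff(U, x1, 2))             # -4 < 0 -> maximum (s.o.c.)
print(x2.subs(x1, crit[0]), U.subs(x1, crit[0]))  # x2* = 14, U* = 128
```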

11.2 Total Differential Approach


• Total differentiation of objective function and constraints:

1-2) U = f(x, y);   s.t.   B = g(x, y)
3-4) dU = f_x dx + f_y dy = 0;   dB = g_x dx + g_y dy = 0
5-8) Solving each differential for the slope:   dx/dy = −f_y/f_x;   dx/dy = −g_y/g_x
9-10) f_y/f_x = g_y/g_x  ⟺  f_y/g_y = f_x/g_x

• Equation (10), together with the constraint (2), forms the basis for solving
this optimization problem.


11.2 Total Differential Approach


• Example: U(x_1, x_2) = x_1 x_2 + 2 x_1   s.t.   60 = 4 x_1 + 2 x_2
Taking first-order differentials of U and budget constraint (B):

dU = x_2 dx_1 + x_1 dx_2 + 2 dx_1 = 0;   dB = 4 dx_1 + 2 dx_2 = 0

dx_1/dx_2 = −x_1/(x_2 + 2);   dx_1/dx_2 = −1/2
−x_1/(x_2 + 2) = −1/2  ⟹  x_2 + 2 = 2 x_1
60 = 4(1 + (1/2) x_2) + 2 x_2  ⟹  x_2* = 14;  x_1* = 8
U*(8, 14) = 8(14) + 2(8) = 128

11.2 Total Differential Approach


• Graph: utility function and budget constraint (figure not reproduced).


11.3 Lagrange-multiplier Approach


• To avoid working with (possibly) zero denominators, let λ denote
the common value in (10). Rewriting (10) and adding the budget
constraint we are left with a 3x3 system of equations:
(10')  f_x = λ g_x
(10'') f_y = λ g_y
(2)    B = g(x, y)
• There is a convenient function that produces (10’), (10’’) and (2) as
a set of f.o.c.: The Lagrangian function, which includes the objective
function and the constraint:
L  f ( x1 , x 2 )  λ B  g ( x1 , x 2 ) 
• The constraint is multiplied by a variable, λ, called the Lagrange
7
multiplier (LM). 7

11.3 Lagrange-multiplier Approach


• Once we form the Lagrangian, it becomes the new objective function:

(1) L = f(x_1, x_2) + λ [B − g(x_1, x_2)]
(2) L_λ = B − g(x_1, x_2) = 0
(3) L_x1 = f_x1 − λ g_x1 = 0
(4) L_x2 = f_x2 − λ g_x2 = 0

(5) H̄ = | L_λλ    L_λx1    L_λx2   |
        | L_x1λ   L_x1x1   L_x1x2  |
        | L_x2λ   L_x2x1   L_x2x2  |


11.3 LM Approach
• Note that
L_λλ = 0
L_λx1 = −g_x1,  L_λx2 = −g_x2
(the sign of the border does not affect the determinant, so it is
conventionally written with +g).
• Then

(5') H̄ = | 0      g_x1     g_x2   |
         | g_x1   L_x1x1   L_x1x2 |
         | g_x2   L_x2x1   L_x2x2 |

• If the constraint is linear, the Hessian of the Lagrangian can be seen as
the Hessian of the original objective function, bordered by the first
derivatives of the constraint. This new matrix is called the bordered Hessian.
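As an illustration, a minimal sketch (assuming SymPy is available) that builds the bordered Hessian for the running example U = x_1 x_2 + 2 x_1 with the linear constraint 60 = 4 x_1 + 2 x_2:

```python
# Minimal sketch (assuming SymPy): bordered Hessian for U = x1*x2 + 2*x1
# subject to 60 = 4*x1 + 2*x2.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
U = x1*x2 + 2*x1
g = 4*x1 + 2*x2                       # constraint function (B = 60)
L = U + lam*(60 - g)                  # Lagrangian

# Border = first derivatives of the constraint; block = second derivatives of L.
Hbar = sp.Matrix([
    [0,              sp.diff(g, x1),     sp.diff(g, x2)],
    [sp.diff(g, x1), sp.diff(L, x1, x1), sp.diff(L, x1, x2)],
    [sp.diff(g, x2), sp.diff(L, x2, x1), sp.diff(L, x2, x2)],
])
print(Hbar.det())   # 16 > 0 -> maximum (matches |H| = 16 on the later slide)
```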

11.3 LM Approach: Example


Maximize utility   U = U(x, y),   where U_x, U_y > 0
subject to the budget constraint   B = x P_x + y P_y

L = U(x, y) + λ (B − x P_x − y P_y)
L_λ = B − x P_x − y P_y = 0
L_x = U_x − λ P_x = 0
L_y = U_y − λ P_y = 0
⟹  λ = U_x/P_x = U_y/P_y

|H̄| = | 0     −P_x   −P_y |
      | −P_x   U_xx   U_xy |  =  2 P_x P_y U_xy − P_y² U_xx − P_x² U_yy
      | −P_y   U_yx   U_yy |


11.3 LM Approach: Interpretation


(1) L = U(x, y) + λ (B − x P_x − y P_y)
(2) L_x = U_x − λ P_x = 0  ⟹  λ = U_x/P_x
(3) L_y = U_y − λ P_y = 0  ⟹  λ = U_y/P_y
(4) L_λ = B − x P_x − y P_y = 0

From the f.o.c.:
λ = U_x/P_x = U_y/P_y = ∂U*/∂B,   and   P_x/P_y = U_x/U_y

Note: From the f.o.c., at the optimal point:
- λ = marginal utility of good i, scaled by its price.
- Scaled marginal utilities are equal across goods.
- The ratio of marginal utilities (MRS) equals relative prices.
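The equality λ = ∂U*/∂B is the envelope result; a one-line sketch of why it holds (a standard argument, not spelled out in the slides):

```latex
% At the optimum, x*(B), y*(B), lambda*(B) satisfy the f.o.c., so by the
% envelope theorem only the direct partial derivative w.r.t. B survives:
\frac{dU^*}{dB}
  = \frac{\partial L}{\partial B}\bigg|_{(x^*,\,y^*,\,\lambda^*)}
  = \frac{\partial}{\partial B}\Big[U(x,y) + \lambda\,(B - xP_x - yP_y)\Big]
  = \lambda^*.
```

So λ* measures the marginal utility of income: relaxing the budget by one unit raises maximal utility by λ*.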

11.3 LM Approach: Second-order conditions


• λ has no effect on the value of L* because the constraint equals zero, but …
• A new set of second-order conditions is needed.
• The constraint changes the criterion for a relative max. or min. Consider the
quadratic form q:

(1) q = a x² + 2 h x y + b y²   s.t.   α x + β y = 0
(2) y = −(α/β) x   (solve the constraint for y)
(3) q = a x² + 2 h x (−(α/β) x) + b (−(α/β) x)²
(4) q = (a β² − 2 α β h + b α²)(x/β)²
(5) q > 0  iff  a β² − 2 α β h + b α² > 0


11.3 LM Approach: Second-order conditions

(1) q positive definite subject to the constraint  ⟺  a β² − 2 α β h + b α² > 0
(2) |H̄| = | 0  α  β |
          | α  a  h |  =  −a β² + 2 α β h − b α²
          | β  h  b |
(3) q is positive definite s.t. α x + β y = 0
    iff  |H̄| < 0  ⟹  min

11.3 LM Approach: Example

1-2) U = x_1 x_2 + 2 x_1   s.t.   B: 60 − 4 x_1 − 2 x_2 = 0
Form the Lagrangian function:
3) L = x_1 x_2 + 2 x_1 + λ (60 − 4 x_1 − 2 x_2)
F.o.c.:
4) L_λ = 60 − 4 x_1 − 2 x_2 = 0
5-6) L_x1 = x_2 + 2 − 4 λ = 0  ⟹  λ = (1/4) x_2 + 1/2
7-8) L_x2 = x_1 − 2 λ = 0  ⟹  λ = (1/2) x_1
9-10) (1/4) x_2 + 1/2 = (1/2) x_1  ⟹  x_2 = 2 x_1 − 2
11-12) 60 = 4 x_1 + 2 (2 x_1 − 2)  ⟹  x_1* = 8
13-14) 60 = 4(8) + 2 x_2  ⟹  x_2* = 14
15-17) U = 8(14) + 2(8);  U* = 128;  λ* = 4
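The same system can be handed to a computer-algebra package; a minimal sketch, assuming SymPy:

```python
# Minimal sketch (assuming SymPy): solve the f.o.c. of
# L = x1*x2 + 2*x1 + lam*(60 - 4*x1 - 2*x2) as a 3-equation system.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
L = x1*x2 + 2*x1 + lam*(60 - 4*x1 - 2*x2)

foc = [sp.diff(L, v) for v in (lam, x1, x2)]     # L_lam, L_x1, L_x2
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]
print(sol)                                       # {lam: 4, x1: 8, x2: 14}
print((x1*x2 + 2*x1).subs(sol))                  # U* = 128
```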



11.3 LM Approach: Example - Cramer’s rule

1) U = x_1 x_2 + 2 x_1   (utility function)
2) B: 60 − 4 x_1 − 2 x_2 = 0   (budget constraint)
3) L = x_1 x_2 + 2 x_1 + λ (60 − 4 x_1 − 2 x_2)   (Lagrangian function)
4) L_λ = 60 − 4 x_1 − 2 x_2 = 0   (1st-order conditions)
5) L_x1 = x_2 + 2 − 4 λ = 0
6) L_x2 = x_1 − 2 λ = 0
7) | 0   −4  −2 | | λ   |   | −60 |
   | −4   0   1 | | x_1 | = | −2  |
   | −2   1   0 | | x_2 |   |  0  |
8) |H̄| = 16 > 0;  d²L is negative definite (subject to the constraint), so L* is a maximum


11.3 LM Approach: Example - Cramer’s rule


0 4 2
9) J  4 0 1  8  8  16
2 1 0
 60  4  2
10) J   2 0 1  4  60  64
0 1 0
0  60  2
11) J x1   4 2 1  120  8  128
2 0 0
0  4  60
12) J x2   4 0 2  16  240  224
2 1 0
13  15) λ *  64 16  4 x1*  128 16  8 x2*  224 16 14
16) U *  x1* x 2*  2 x1*  814  28  128 16
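A minimal numerical cross-check of steps 9)-16), assuming NumPy:

```python
# Minimal sketch (assuming NumPy): Cramer's rule on the f.o.c. system.
import numpy as np

J = np.array([[ 0., -4., -2.],
              [-4.,  0.,  1.],
              [-2.,  1.,  0.]])
d = np.array([-60., -2., 0.])

detJ = np.linalg.det(J)                    # |J| = 16
for name, i in [('lam', 0), ('x1', 1), ('x2', 2)]:
    Ji = J.copy()
    Ji[:, i] = d                           # replace the i-th column by d
    print(name, np.linalg.det(Ji) / detJ)  # lam* = 4, x1* = 8, x2* = 14
```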


11.3 LM Approach: Restricted Least Squares


• The Lagrangian approach:

Min_{β,λ} L(β, λ | y, x) = Σ_{t=1}^{T} (y_t − x_t β)² + 2 λ (r β − q)

f.o.c.:

∂L/∂β = Σ_{t=1}^{T} 2 (y_t − x_t b*)(−x_t) + 2 λ r = 0  ⟹  −Σ_{t=1}^{T} (y_t x_t − x_t² b*) + λ r = 0

∂L/∂λ = 2 (r b* − q) = 0  ⟹  (r b* − q) = 0

• Then, from the 1st equation:

−(x'y − x'x b*) + λ r = 0  ⟹  b* = (x'x)⁻¹ x'y − λ (x'x)⁻¹ r
⟹  b* = b − λ (x'x)⁻¹ r


11.3 LM Approach: Restricted Least Squares


• b* = b − λ (x'x)⁻¹ r

• Premultiply both sides by r and then subtract q:

r b* − q = r b − λ r² (x'x)⁻¹ − q
0 = −λ r² (x'x)⁻¹ + (r b − q)

Solving for λ  ⟹  λ = [r² (x'x)⁻¹]⁻¹ (r b − q)

Substituting into b*  ⟹  b* = b − (x'x)⁻¹ r [r² (x'x)⁻¹]⁻¹ (r b − q)
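In the general matrix case (restrictions Rb = q), the same algebra gives b* = b − (X'X)⁻¹R'[R(X'X)⁻¹R']⁻¹(Rb − q). A minimal sketch with made-up data, assuming NumPy (the data-generating values and the restriction b1 + b2 = 1 are hypothetical):

```python
# Minimal sketch (assuming NumPy): restricted least squares,
# b* = b - (X'X)^{-1} R' [R (X'X)^{-1} R']^{-1} (R b - q).
import numpy as np

rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T), rng.normal(size=T)])
y = X @ np.array([0.5, 0.7, 0.3]) + rng.normal(scale=0.1, size=T)

R = np.array([[0., 1., 1.]])               # hypothetical restriction: b1 + b2 = 1
q = np.array([1.])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                      # unrestricted OLS estimate
lam = np.linalg.inv(R @ XtX_inv @ R.T) @ (R @ b - q)
b_star = b - XtX_inv @ R.T @ lam           # restricted estimate
print(b_star, R @ b_star)                  # R b* = q holds exactly
```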



11.3 LM Approach: N-variable Case


a) No one-variable test, because there must be at least one more variable than constraints.
b) Two-variable test of the s.o.c.:
|H̄₂| > 0  ⟹  negative definite  ⟹  max
|H̄₂| < 0  ⟹  positive definite  ⟹  min
c) Three-variable test of the s.o.c.:
|H̄₂| > 0, |H̄₃| < 0  ⟹  negative definite  ⟹  max
|H̄₂| < 0, |H̄₃| < 0  ⟹  positive definite  ⟹  min
d) n-variable case s.o.c. (p. 361):
|H̄₂| > 0, |H̄₃| < 0, |H̄₄| > 0, ..., (−1)ⁿ |H̄ₙ| > 0  ⟹  negative definite  ⟹  max
|H̄₂| < 0, |H̄₃| < 0, |H̄₄| < 0, ..., |H̄ₙ| < 0  ⟹  positive definite  ⟹  min
where
|H̄₂| = | 0    g_1   g_2  |      |H̄₃| = | 0    g_1   g_2   g_3  |
       | g_1  Z_11  Z_12 |             | g_1  Z_11  Z_12  Z_13 |
       | g_2  Z_21  Z_22 |             | g_2  Z_21  Z_22  Z_23 |
                                       | g_3  Z_31  Z_32  Z_33 |
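A minimal sketch (assuming NumPy) that computes the successive bordered principal minors |H̄₂|, ..., |H̄ₙ| for the one-constraint case, so the sign pattern above can be checked mechanically:

```python
# Minimal sketch (assuming NumPy): bordered leading principal minors
# |H2|, ..., |Hn| of H = [[0, g'], [g, Z]] for one constraint.
import numpy as np

def bordered_minors(g, Z):
    """g: constraint gradient (n,); Z: Hessian of the Lagrangian (n, n)."""
    n = len(g)
    H = np.zeros((n + 1, n + 1))
    H[0, 1:] = g
    H[1:, 0] = g
    H[1:, 1:] = Z
    # |Hk| keeps the border plus the first k variables, k = 2, ..., n
    return [np.linalg.det(H[:k + 1, :k + 1]) for k in range(2, n + 1)]

# Two-variable example from the text: U = x1*x2 + 2*x1, 4*x1 + 2*x2 = 60
print(bordered_minors(np.array([4., 2.]),
                      np.array([[0., 1.], [1., 0.]])))  # [16.0] > 0 -> max
```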

11.4 Optimality Conditions – Unconstrained Case


• Let x* be the point that we think is the minimum of f(x).
• Necessary condition (for optimality):
∇f(x*) = 0
• A point that satisfies the necessary condition is a stationary point.
– It can be a minimum, maximum, or saddle point.

• Q: How do we know that we have a minimum?

• Answer: Sufficiency condition:
The sufficient conditions for x* to be a strict local minimum are:
∇f(x*) = 0
∇²f(x*) is positive definite


11.4 Constrained Case – KKT Conditions

• To prove a claim of optimality in constrained minimization (or
maximization), we have to check the candidate point x* against the
Karush-Kuhn-Tucker (KKT) conditions.

• Kuhn and Tucker extended the Lagrangian theory to include the general
classical single-objective nonlinear programming (NLP) problem:

minimize f(x)
subject to g_j(x) ≤ 0 for j = 1, 2, ..., M
h_k(x) = 0 for k = 1, 2, ..., K
x = (x_1, x_2, ..., x_N)
Note: M inequality constraints, K equality constraints.

11.4 Interior versus Exterior Solutions


• Interior: If no constraints are active and (thus) the solution lies in the
interior of the feasible space, then the necessary condition for optimality
is the same as in the unconstrained case:
∇f(x*) = 0   (∇, "del", is the gradient operator)

Example: Minimize
f(x) = 4(x − 1)² + (y − 2)²
with constraints:
x ≥ −1 & y ≥ −1.

• Exterior: If the solution lies on the boundary of the feasible region (the
unconstrained minimum is outside), the condition ∇f(x*) = 0 does not apply,
because some constraints will block movement toward this minimum.


11.4 Interior versus Exterior Solutions


• If the solution lies on the boundary, the condition ∇f(x*) = 0 does not
apply: some constraints are active.

Example: Minimize
f(x) = 4(x − 1)² + (y − 2)²
with constraints:
x + y ≤ 2; x ≥ −1 & y ≥ −1.

– We cannot get any further improvement if there does not exist, at x*, a
vector d that is both a descent direction and a feasible direction.
– In other words: the set of feasible directions does not intersect the set
of descent directions at all.

11.4 Mathematical Form


• A vector d that is both a descent direction and a feasible direction cannot exist if
−∇f = Σ λᵢ ∇gᵢ (with λᵢ ≥ 0) for all active constraints i ∈ I.
– This can be rewritten as 0 = ∇f + Σ λᵢ ∇gᵢ.
– This condition is correct IF feasibility is defined as g(x) ≤ 0.
– If feasibility is defined as g(x) ≥ 0, then this becomes
−∇f = Σ λᵢ (−∇gᵢ)

• Again, this only applies to the active constraints i ∈ I.

• Usually the inactive constraints are also included, but the condition
λⱼ gⱼ = 0 (with λⱼ ≥ 0) is added for all inactive constraints j ∈ J.
– This is referred to as the complementary slackness condition.


11.4 Mathematical Form


• Note that the slackness condition is equivalent to stating that λⱼ = 0
for inactive constraints, i.e., zero price for non-binding constraints!

• That is, each inequality constraint is either active, in which case it
turns into an equality constraint of the Lagrange type, or inactive, in
which case it is void and does not constrain the solution.

• Note that I + J = M, the total number of (inequality) constraints.

• Analysis of the constraints can help to rule out some combinations.
However, in general, a 'brute force' approach to a problem with M
inequality constraints must be divided into 2^M cases (see the sketch
below). Each case must be solved independently for a minimum, and the
obtained solution (if any) must be checked for compliance with the
constraints. A lot of work!
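A minimal sketch of this brute-force enumeration (assuming SymPy; the example problem is the one solved on the following slides):

```python
# Minimal sketch (assuming SymPy): enumerate all 2^M active/inactive patterns
# for min 4(x-1)^2 + (y-2)^2 s.t. x+y <= 2, x >= -1, y >= -1.
from itertools import product
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 4*(x - 1)**2 + (y - 2)**2
g = [x + y - 2, -x - 1, -y - 1]            # constraints as g_i(x) <= 0

candidates = []
for pattern in product([False, True], repeat=len(g)):
    active = [gi for gi, on in zip(g, pattern) if on]
    mults = sp.symbols(f'mu0:{len(active)}') if active else ()
    L = f + sum(m*gi for m, gi in zip(mults, active))
    eqs = [sp.diff(L, v) for v in (x, y)] + list(active)
    for s in sp.solve(eqs, (x, y) + tuple(mults), dict=True):
        pt = {x: s[x], y: s[y]}
        if all(gi.subs(pt) <= 0 for gi in g):          # keep feasible candidates
            candidates.append((f.subs(pt), pt))
print(min(candidates, key=lambda c: c[0]))             # (4/5, {x: 4/5, y: 6/5})
```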

11.4 Necessary KKT Conditions


For the problem:
Min f(x)
s.t. g(x) ≤ 0
(n variables, M constraints)

The necessary conditions are:

∇f(x) + Σ λᵢ ∇gᵢ(x) = 0   (optimality)
gᵢ(x) ≤ 0 for i = 1, 2, ..., M   (primal feasibility)
λᵢ gᵢ(x) = 0 for i = 1, 2, ..., M   (complementary slackness)
λᵢ ≥ 0 for i = 1, 2, ..., M   (non-negativity, dual feasibility)

Note that the first condition gives n equations.



11.4 Necessary KKT Conditions - Example


Example: Let's minimize
f(x) = 4(x − 1)² + (y − 2)²
with constraints:
x + y ≤ 2; x ≥ −1 & y ≥ −1.
Form the Lagrangian:

L(x, λ, μ) = f(x) + Σ_{k=1}^{K} λ_k h_k(x) + Σ_{j=1}^{M} μ_j g_j(x)

L(x, y, μ) = 4(x − 1)² + (y − 2)² + μ₁(x + y − 2) + μ₂(−x − 1) + μ₃(−y − 1)

There are 3 inequality constraints, each of which can be chosen active or
inactive: 8 possible combinations. But the 3 constraints together
(x + y = 2 & x = −1 & y = −1) have no solution, and a combination of any
two of them yields a single intersection point.

11.4 Necessary KKT Conditions - Example


The general case is:
L(x, y, μ) = 4(x − 1)² + (y − 2)² + μ₁(x + y − 2) + μ₂(−x − 1) + μ₃(−y − 1)

We must consider all the combinations of active / inactive constraints:

(1) x + y = 2  ⟹  L(x, y, μ) = 4(x − 1)² + (y − 2)² + μ(x + y − 2)
(2) x = −1  ⟹  L(x, y, μ) = 4(x − 1)² + (y − 2)² + μ(−x − 1)
(3) y = −1  ⟹  L(x, y, μ) = 4(x − 1)² + (y − 2)² + μ(−y − 1)
(4) x + y = 2 and x = −1  ⟹  (x, y) = (−1, 3)
(5) x + y = 2 and y = −1  ⟹  (x, y) = (3, −1)
(6) x = −1 and y = −1  ⟹  (x, y) = (−1, −1)
(7) x + y = 2 and x = −1 and y = −1  ⟹  (x, y) = ∅
(8) Unconstrained:  L(x, y) = 4(x − 1)² + (y − 2)²


11.4 Necessary KKT Conditions - Example


(1) Lx, y,   4x 12   y  22  x  y  2
L 
 x  y 2  0 

 x  2 y  x, y  0.8,1.2
L  
 8x  8    0    8  8x   f x, y  0.8
x  2y  4  8  82  y  0
L    1.6  0
 2y  4    0
y 

(2) Lx, y,   4x 12   y  22  x 1


L 
 x 1  0 

 x  1  x, y   1,2
L  
 8x  8    0    8  8x  16  f x, y   16
x  
L y2    16  0
 2y  4  0 
y 

11.4 Necessary KKT Conditions - Example


L x , y,    4x  1  y  2    y  1
2 2
(3)
L 
 y 1  0 
 y  1 x , y   1,1

L  
 8 x  8  0   x  1   f x , y   9
x 
L   6  60
 2 y  4    0
y 

(4) x, y    1,3; f x, y   17

(5) x, y   3,1; f x, y   25

(6) x, y    1,1; f x , y   25

(7) x, y   
(8) x, y   1,2; f x , y   0 x  y  3  2 - beyond the range


11.4 Necessary KKT Conditions - Example


Finally, we compare the 8 cases we have studied: case (7) was
over-constrained and had no solution, and case (8) violated the constraint
x + y ≤ 2. Among cases (1)-(6), case (1), with (x, y) = (0.8, 1.2) and
f(x, y) = 0.8, yields the lowest value of f(x, y); a numerical check
follows below.
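A minimal cross-check of the answer with a numerical constrained solver, assuming SciPy is available:

```python
# Minimal sketch (assuming SciPy): verify case (1) with a constrained solver.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda v: 4*(v[0] - 1)**2 + (v[1] - 2)**2,
    x0=np.array([0.0, 0.0]),
    method='SLSQP',
    bounds=[(-1, None), (-1, None)],          # x >= -1, y >= -1
    constraints=[{'type': 'ineq', 'fun': lambda v: 2 - v[0] - v[1]}],
)
print(res.x, res.fun)                         # approx [0.8, 1.2], 0.8
```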

11.4 Necessary KKT Conditions: General Case


• For the general case (n variables, M inequalities, K equalities):
Min f(x) s.t.
gᵢ(x) ≤ 0 for i = 1, 2, ..., M
hⱼ(x) = 0 for j = 1, 2, ..., K
• In all this, the assumption is that the gradients ∇gᵢ(x*) for the active
constraints and ∇hⱼ(x*) for j = 1, ..., K are linearly independent. This is
referred to as the constraint qualification.
• The necessary conditions are:
∇f(x) + Σ λᵢ ∇gᵢ(x) + Σ μⱼ ∇hⱼ(x) = 0   (optimality)
gᵢ(x) ≤ 0 for i = 1, 2, ..., M   (primal feasibility)
hⱼ(x) = 0 for j = 1, 2, ..., K   (primal feasibility)
λᵢ gᵢ(x) = 0 for i = 1, 2, ..., M   (complementary slackness)
λᵢ ≥ 0 for i = 1, 2, ..., M   (non-negativity, dual feasibility)
(Note: μⱼ is unrestricted in sign.)


11.4 Necessary KKT Conditions (if g(x) ≥ 0)


• If the definition of feasibility changes, the optimality and feasibility
conditions change.

• The necessary conditions become:

∇f(x) − Σ λᵢ ∇gᵢ(x) + Σ μⱼ ∇hⱼ(x) = 0   (optimality)
gᵢ(x) ≥ 0 for i = 1, 2, ..., M   (feasibility)
hⱼ(x) = 0 for j = 1, 2, ..., K   (feasibility)
λᵢ gᵢ(x) = 0 for i = 1, 2, ..., M   (complementary slackness)
λᵢ ≥ 0 for i = 1, 2, ..., M   (non-negativity, dual feasibility)


11.4 Restating the Optimization Problem


• Kuhn-Tucker optimization problem: find vectors x (N×1), λ (1×M), and
μ (1×K) that satisfy:
∇f(x) + Σ λᵢ ∇gᵢ(x) + Σ μⱼ ∇hⱼ(x) = 0   (optimality)
gᵢ(x) ≤ 0 for i = 1, 2, ..., M   (feasibility)
hⱼ(x) = 0 for j = 1, 2, ..., K   (feasibility)
λᵢ gᵢ(x) = 0 for i = 1, 2, ..., M   (complementary slackness)
λᵢ ≥ 0 for i = 1, 2, ..., M   (non-negativity)

• If x* is an optimal solution to the NLP problem, then there exists a
(λ*, μ*) such that (x*, λ*, μ*) solves the Kuhn–Tucker problem.
• The above equations not only give the necessary conditions for
optimality, but also provide a way of finding the optimal point.



11.4 KKT Conditions: Limitations

• The necessity theorem helps identify points that are not optimal: a point
is not optimal if it does not satisfy the Kuhn–Tucker conditions.

• On the other hand, not all points that satisfy the Kuhn-Tucker conditions
are optimal points.

• The Kuhn–Tucker sufficiency theorem gives conditions under which a point
is an optimal solution to a single-objective NLP.


11.4 KKT Conditions: Sufficiency Condition


• Sufficient conditions for a point x* to be a strict local minimum of the
classical single-objective NLP problem, where f, gⱼ, and hₖ are twice-
differentiable functions, are that:
1) The necessary KKT conditions are met.
2) The Hessian matrix ∇²L(x*) = ∇²f(x*) + Σ λᵢ ∇²gᵢ(x*) + Σ μⱼ ∇²hⱼ(x*)
is positive definite on a subspace of Rⁿ, as defined by the condition:
yᵀ ∇²L(x*) y > 0 for every vector y (1×N) satisfying:
∇gⱼ(x*) y = 0 for j belonging to I₁ = { j | gⱼ(x*) = 0, λⱼ* > 0 }
(active constraints)
∇hₖ(x*) y = 0 for k = 1, ..., K
y ≠ 0



11.4 KKT Sufficiency Theorem (Special Case)


• Consider the classical single-objective NLP problem:
minimize f(x)
subject to gⱼ(x) ≤ 0 for j = 1, 2, ..., M
hₖ(x) = 0 for k = 1, 2, ..., K
x = (x₁, x₂, ..., x_N)
• Let the objective function f(x) be convex, the inequality constraints
gⱼ(x) all be convex functions for j = 1, ..., M, and the equality
constraints hₖ(x) for k = 1, ..., K be linear.
• If this is true, then the necessary KKT conditions are also sufficient.
• Therefore, in this case, if there exists a solution x* that satisfies the
KKT necessary conditions, then x* is an optimal solution to the NLP problem.
• In fact, it is a global optimum.

11.4 KKT Conditions: Closing Remarks

• Kuhn-Tucker conditions are an extension of the Lagrangian function and
method.
• They provide a powerful means to verify solutions.
• But there are limitations…
– Sufficiency conditions are difficult to verify.
– Practical problems do not have the required nice properties.
– For example, you will have problems if you do not know the explicit
constraint equations.
• If you have a multi-objective (lexicographic) formulation, then it is
suggested to test each priority level separately.


11.4 KKT Conditions: Example


Minimize C = (x₁ − 4)² + (x₂ − 4)²
s.t. 2x₁ + 3x₂ ≥ 6;  12 − 3x₁ − 2x₂ ≥ 0;  x₁, x₂ ≥ 0

Form the Lagrangian (feasibility is written as g(x) ≥ 0, so the multipliers
enter with minus signs):
L = (x₁ − 4)² + (x₂ − 4)² − λ₁(2x₁ + 3x₂ − 6) − λ₂(12 − 3x₁ − 2x₂)
F.o.c.:
1) L_x1 = 2(x₁ − 4) − 2λ₁ + 3λ₂ = 0
2) L_x2 = 2(x₂ − 4) − 3λ₁ + 2λ₂ = 0
3) L_λ1 = −(2x₁ + 3x₂ − 6) = 0
4) L_λ2 = −(12 − 3x₁ − 2x₂) = 0
Case 1: Let λ₂ = 0, L_λ2 ≠ 0 (2nd constraint inactive):
From 1) and 2)  ⟹  λ₁ = x₁ − 4 = (2/3)(x₂ − 4)
From 3)  ⟹  x₁ = 3 − (3/2) x₂
⟹  3 − (3/2) x₂ − 4 = (2/3)(x₂ − 4)  ⟹  x₂* = 10/13,
x₁* = 24/13,  λ₁ = −28/13 < 0   (violates the KKT conditions)
Case 2: Let λ₁ = 0, L_λ1 ≠ 0 (1st constraint inactive):
From 1) and 2)  ⟹  λ₂ = −(2/3)(x₁ − 4) = −(x₂ − 4)  ⟹  x₁ = (3/2)(x₂ − 4) + 4
From 4)  ⟹  x₁ = 4 − (2/3) x₂
⟹  4 − (2/3) x₂ = (3/2)(x₂ − 4) + 4  ⟹  x₂* = 36/13,
x₁* = 28/13,  λ₂ = 16/13 > 0   (meets the KKT conditions)
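The same answer can be cross-checked numerically; a minimal sketch, assuming SciPy:

```python
# Minimal sketch (assuming SciPy): min (x1-4)^2 + (x2-4)^2
# s.t. 2*x1 + 3*x2 >= 6, 12 - 3*x1 - 2*x2 >= 0, x1, x2 >= 0.
import numpy as np
from scipy.optimize import minimize

res = minimize(
    lambda v: (v[0] - 4)**2 + (v[1] - 4)**2,
    x0=np.array([1.0, 1.0]),
    method='SLSQP',
    bounds=[(0, None), (0, None)],
    constraints=[
        {'type': 'ineq', 'fun': lambda v: 2*v[0] + 3*v[1] - 6},
        {'type': 'ineq', 'fun': lambda v: 12 - 3*v[0] - 2*v[1]},
    ],
)
print(res.x)    # approx [2.1538, 2.7692] = [28/13, 36/13], matching Case 2
```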

11.4 KKT Conditions: Using Cramer’s rule


Assume L_λ1 = L_x1 = L_x2 = 0 and L_λ2 > 0; therefore x₁, x₂, λ₁ ≠ 0 and λ₂ = 0.
1) L_λ1 = 6 − 2x₁ − 3x₂ = 0
2) L_x1 = 2x₁ − 8 − 2λ₁ = 0
3) L_x2 = 2x₂ − 8 − 3λ₁ = 0
Use eqs. (1), (2), and (3) to solve for λ₁, x₁, and x₂:
| 0   −2  −3 | | λ₁ |   | −6 |
| −2   2   0 | | x₁ | = |  8 |
| −3   0   2 | | x₂ |   |  8 |
|J| = −26,  |J_λ1| = 56,  |J_x1| = −48,  |J_x2| = −20
λ₁ = 56/(−26) = −28/13 < 0;  x₁ = −48/(−26) = 24/13 > 0;  x₂ = −20/(−26) = 10/13 > 0
(λ₁, λ₂, x₁, x₂) do not meet the KKT conditions for a minimum.


11.4 KKT Conditions: Example


Assume L_x1 = L_x2 = L_λ2 = 0 and L_λ1 > 0; therefore x₁, x₂, λ₂ ≠ 0 and λ₁ = 0.
(1) L_λ2 = 12 − 3x₁ − 2x₂ = 0
(2) L_x1 = 2x₁ − 8 + 3λ₂ = 0
(3) L_x2 = 2x₂ − 8 + 2λ₂ = 0
Use eqs. (1), (2), and (3) to solve for λ₂, x₁, and x₂:
| 0  3  2 | | λ₂ |   | 12 |
| 3  2  0 | | x₁ | = |  8 |
| 2  0  2 | | x₂ |   |  8 |
|J| = −26,  |J_λ2| = −32,  |J_x1| = −56,  |J_x2| = −72
λ₂ = −32/(−26) = 16/13 > 0;  x₁ = −56/(−26) = 28/13 > 0;  x₂ = −72/(−26) = 36/13 > 0
(λ₁, λ₂, x₁, x₂) meet the KKT conditions for a minimum.

11.5 The Constraint Qualification

Example 1 - Irregularities at boundary points:

Maximize π = x₁
subject to x₂ − (1 − x₁)³ ≤ 0
and x₁, x₂ ≥ 0

L = x₁ + λ[−x₂ + (1 − x₁)³]

L_x1 = 1 − 3λ(1 − x₁)² = 0

The maximum is at x₁ = 1 (with x₂ = 0). However,
L_x1 = 1 − 3λ(1 − 1)² = 1, when it should equal zero.

Reason: the optimum lies on an inflection point or cusp of the boundary,
where the constraint qualification fails.


11.5 The Constraint Qualification


Example 2 - Irregularities at the boundary points:

Maximize π = x₁
subject to x₂ − (1 − x₁)³ ≤ 0
and 2x₁ + x₂ ≤ 2
where x₁, x₂ ≥ 0

L = x₁ + λ₁[−x₂ + (1 − x₁)³] + λ₂[2 − 2x₁ − x₂]

L_x1 = 1 − 3λ₁(1 − x₁)² − 2λ₂ = 0
L_x2 = −λ₁ − λ₂ = 0
L_λ1 = −x₂ + (1 − x₁)³ = 0
L_λ2 = 2 − 2x₁ − x₂ = 0

x₁ = 1,  x₂ = 0,  λ₁ = −1/2,  λ₂ = 1/2

11.5 The Constraint Qualification


Example 3 - The feasible region of the problem contains no cusp:

Maximize π = x₂ − x₁²
subject to (10 − x₁² − x₂)³ ≥ 0 and x₁ ≥ 2, where x₁, x₂ ≥ 0

L = x₂ − x₁² + λ₁(10 − x₁² − x₂)³ + λ₂(x₁ − 2)

L_x1 = −2x₁ − 6λ₁ x₁ (10 − x₁² − x₂)² + λ₂
L_x2 = 1 − 3λ₁(10 − x₁² − x₂)²
L_λ1 = (10 − x₁² − x₂)³
L_λ2 = x₁ − 2

x₁ = 2,  x₂ = 6,  λ₁ = 1,  λ₂ = 4, but
L_x2 = 1 − 3λ₁(10 − 2² − 6)² = 1, when it should equal zero.

The feasible region has no cusp, yet the constraint qualification still
fails: because the constraint is written as a cubed expression, its gradient
vanishes on the boundary 10 − x₁² − x₂ = 0.
