
Presession Mathematics Seminar

Áron Tóbiás
Ph.D. Student
Central European University
Department of Economics
September 16, 2009
Presession Mathematics Seminar 1 / 54
Constrained Optimization
General Formalization

Consider a twice differentiable function f : ℝⁿ → ℝ. Unconstrained optimization, which we have exhaustively discussed, amounted to solving the following problem (s.t. stands for "subject to" or "such that"):

    f(x) → max / min
    s.t. x ∈ ℝⁿ.

Now we consider constrained optimization, which consists of solving the following problem:

    f(x) → max / min
    s.t. x ∈ S ⊆ ℝⁿ.

S is called the constraint set pertaining to the optimization problem.
General Formalization

The constraint set is typically represented by means of some equalities and/or weak inequalities that x has to satisfy. The general form of a constrained optimization problem is the following:

    f(x) → max / min
    s.t. g₁(x) = b₁,  h₁(x) ≤ c₁,
         ⋮
         g_K(x) = b_K,  h_M(x) ≤ c_M,

where b₁, …, b_K and c₁, …, c_M are constants. That is, the constraint set is determined by K equalities and M weak inequalities. Formally:

    S = {x ∈ ℝⁿ : g₁(x) = b₁} ∩ … ∩ {x ∈ ℝⁿ : g_K(x) = b_K}
        ∩ {x ∈ ℝⁿ : h₁(x) ≤ c₁} ∩ … ∩ {x ∈ ℝⁿ : h_M(x) ≤ c_M}.

We generally assume that the functions g₁, …, g_K and h₁, …, h_M are also twice continuously differentiable.
General Formalization

Greater-than-or-equal-to constraints can also be incorporated in this framework. For example, if we require that h_m(x) ≥ c_m, we could as well require that −h_m(x) ≤ −c_m.

The whole process of solving such a constrained optimization problem is alternatively called nonlinear programming. The function f to be maximized or minimized is the objective function.

In the special case when the objective function and all of the constraints are linear, we have a linear programming problem. Such problems can be more efficiently analyzed using methods different from those we are going to discuss. This is beyond our scope.
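Although linear programming is beyond our scope, it may help to see that such problems are routinely handed to dedicated solvers. A minimal sketch using SciPy's `linprog` (the particular program below is a made-up illustration, not an example from the seminar):

```python
from scipy.optimize import linprog

# A made-up linear program:  max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1, x2 >= 0.
# linprog minimizes by convention, so we negate the objective coefficients.
res = linprog(c=[-3.0, -2.0],
              A_ub=[[1.0, 1.0]],
              b_ub=[4.0],
              bounds=[(0, None), (0, None)])

maximum = -res.fun      # optimal value of the original (max) problem
solution = res.x        # the optimizer, here (4, 0)
```

The simplex-type methods behind such solvers exploit linearity and are quite different from the Kuhn-Tucker machinery developed below.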
Kuhn-Tucker Conditions

How do we solve such a problem? For now, consider the case when there are only weak inequality constraints. Also, let's consider maximization only. This case is the most typical in economic analysis.

    f(x) → max
    s.t. h₁(x) ≤ c₁, …, h_M(x) ≤ c_M.

The first step is to set up the Lagrangian function:

    L(x) = f(x) − Σ_{m=1}^{M} λ_m [h_m(x) − c_m].

The variables λ_m (m = 1, …, M) are the Lagrange multipliers corresponding to the respective inequality constraints. Informally, each multiplier measures how restrictive the constraint is, that is, in which direction and to what extent the optimal value of the objective function would change if the constraint were to be slightly relaxed (or tightened).
Kuhn-Tucker Conditions

The second step is to find the critical point(s) of the Lagrangian, that is, the point(s) x* which solve(s) the following equations (i = 1, …, n):

    ∂L(x*)/∂xᵢ = ∂f(x*)/∂xᵢ − Σ_{m=1}^{M} λ_m ∂h_m(x*)/∂xᵢ = 0.

The third step is to prescribe the so-called complementarity conditions, which require the following three things: (i) Each of the multipliers must be nonnegative. (ii) If any constraint m is non-binding at (any of) the critical point(s) of the Lagrangian [that is, it holds with strict inequality: h_m(x*) < c_m], then the respective multiplier must be zero: λ_m = 0. (iii) If any of the multipliers is positive: λ_m > 0, then the respective constraint must be binding at the critical point: h_m(x*) = c_m. In short:

    λ_m ≥ 0,  λ_m [h_m(x*) − c_m] = 0.

The above two formulae are referred to as the Kuhn-Tucker conditions.
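The conditions above are mechanical enough to check numerically for any candidate point. A rough sketch of my own (the finite-difference gradient and tolerances are arbitrary choices; the example data come from Problem 1 later in the seminar):

```python
import numpy as np

def numeric_grad(f, x, eps=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def kt_check(f, hs, cs, x, lams, tol=1e-4):
    """True if (x, lams) satisfies stationarity, nonnegativity, and
    complementary slackness for  max f(x)  s.t.  h_m(x) <= c_m."""
    station = numeric_grad(f, x) - sum(l * numeric_grad(h, x)
                                       for l, h in zip(lams, hs))
    nonneg = all(l >= -tol for l in lams)
    comp = all(abs(l * (h(x) - c)) <= tol for l, h, c in zip(lams, hs, cs))
    return bool(np.all(np.abs(station) <= tol) and nonneg and comp)

# Candidate from Problem 1 later in the seminar: x* = (0, 1), lambda = 3/2.
f = lambda x: x[0]**2 + x[1]**2 + x[1] - 1
h = lambda x: x[0]**2 + x[1]**2
ok = kt_check(f, [h], [1.0], [0.0, 1.0], [1.5])
```

Such a check only verifies that a candidate satisfies the conditions; as the slides stress, it does not by itself certify a maximum.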
Kuhn-Tucker Conditions

Figure 1: Function f(x₁, x₂) has a global maximum at x**. The constraint h(x₁, x₂) ≤ c is non-binding at x**, thus the constrained maximum coincides with the unconstrained one and λ = 0. [The figure shows the (x₁, x₂) plane with the contour lines of f(x), the region h(x) ≤ c, and the point x** inside that region.]
Kuhn-Tucker Conditions

Figure 2: Function f(x₁, x₂) has a global maximum at x**, but the constrained maximum is x*. The constraint h(x₁, x₂) ≤ c is binding at x*. Any relaxation of the constraint increases the objective function and λ > 0. [The figure shows the contour lines of f(x), the region h(x) ≤ c with x* on its boundary, and x** outside the region.]
Constraints with Equality

Suppose that there are also equality constraints: g₁(x) = b₁, …, g_K(x) = b_K. The usual notation for the multipliers corresponding to these constraints is μ₁, …, μ_K. The Kuhn-Tucker conditions remain the same in the presence of equality constraints, with the only modification that we do not need complementarity conditions for these constraints. In particular, this means that any μ_k can be negative as well.

Try to explain graphically the relevance of equality constraints based on Figures 1 and 2.
Optimality of the Kuhn-Tucker Solution

The following result is very important in practice: If the constraint set S is nonempty and convex, and the objective function f is strictly quasiconcave, then any solution x* to the constrained maximization problem is unique. Moreover, provided that a technical condition (which we discuss later) holds, the Kuhn-Tucker conditions are both necessary and sufficient for determining this unique maximum. A special case when this holds is when f is strictly concave and the h's are convex. (Why?) If these convexity/concavity conditions on the constraint set and on the objective function do not hold, then the Kuhn-Tucker conditions are necessary but may not be sufficient.

Again, try to argue graphically. What happens if the maximization problem admits no solution at all?
Existence of a Solution

We have just concluded that if a solution exists to a nonlinear programming problem, then it has to satisfy the Kuhn-Tucker conditions. But when does a solution exist? Weierstrass's theorem says that if the constraint set is nonempty, bounded, and closed (closedness is always satisfied in our context; why?), then any continuous function takes both maximum and minimum values on this set.
Problem 1

Solve the following nonlinear programming problem:

    f(x₁, x₂) = x₁² + x₂² + x₂ − 1 → max
    s.t. h(x₁, x₂) = x₁² + x₂² ≤ 1.

Note that even though the constraint set is convex, the objective function is strictly convex (you should check these), so the Kuhn-Tucker conditions may not be sufficient. They are still necessary, however. Set up the Lagrangian:

    L(x₁, x₂) = x₁² + x₂² + x₂ − 1 − λ (x₁² + x₂² − 1).

Any maximum point x* (at least one exists by Weierstrass's theorem, since the constraint set is bounded) must satisfy the first-order conditions:

    ∂L(x*)/∂x₁ = 2x₁* − 2λx₁* = 2x₁*(1 − λ) = 0,
    ∂L(x*)/∂x₂ = 2x₂* + 1 − 2λx₂* = 2x₂*(1 − λ) + 1 = 0.
Problem 1

Cont'd:

    ∂L(x*)/∂x₁ = 2x₁* − 2λx₁* = 2x₁*(1 − λ) = 0,
    ∂L(x*)/∂x₂ = 2x₂* + 1 − 2λx₂* = 2x₂*(1 − λ) + 1 = 0.

Also, we have to have the complementarity condition satisfied:

    λ ≥ 0,  λ = 0 if (x₁*)² + (x₂*)² < 1.

From the first equation, either x₁* = 0 or λ = 1. Suppose that λ = 1. Then the second equation yields a contradiction (1 = 0). Therefore, it must be that x₁* = 0. Now suppose that the constraint (x₁*)² + (x₂*)² ≤ 1 holds with equality. (Give it a thought: Could λ be zero in this case? Yes, it possibly could!)
Problem 1

Maintaining the supposition that (x₁*)² + (x₂*)² = 1, the fact we've already established that x₁* = 0 yields that either x₂* = 1 or x₂* = −1. Assume for the moment that x₂* = 1. From the second equation, 2x₂*(1 − λ) + 1 = 0, we conclude that λ = 3/2, which clearly satisfies the complementarity condition. Hence x₁* = 0, x₂* = 1, and λ = 3/2 satisfies the Kuhn-Tucker conditions. But we're not done here. Now suppose that the constraint still holds with equality but x₂* = −1. By the second equation, λ = 1/2 ≥ 0. Therefore, the triple consisting of x₁* = 0, x₂* = −1, and λ = 1/2 also satisfies the Kuhn-Tucker conditions. Finally, suppose that (x₁*)² + (x₂*)² < 1. The complementarity condition yields that λ = 0. Since x₁* = 0 must still hold, we have from the second equation that x₂* = −1/2. Therefore, x₁* = 0, x₂* = −1/2, and λ = 0 satisfies the Kuhn-Tucker conditions, too.
Problem 1

We have three candidates for the maximum point:

                     x₁*    x₂*     λ
    Candidate #1      0      1     3/2
    Candidate #2      0     −1     1/2
    Candidate #3      0    −1/2     0

Let's evaluate the objective function f(x₁, x₂) = x₁² + x₂² + x₂ − 1 at each of them:

    f(0, 1) = 1,
    f(0, −1) = −1,
    f(0, −1/2) = −5/4.

Therefore, the only solution is x₁* = 0 and x₂* = 1 with λ = 3/2. Note, incidentally, that the point (0, −1/2) is a global minimum of the objective function. (Why?)
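A quick numerical sanity check of this conclusion (a brute-force grid search of my own, not part of the original problem):

```python
import numpy as np

# Brute-force check of Problem 1: evaluate f over a fine grid on the unit disk.
def f(x1, x2):
    return x1**2 + x2**2 + x2 - 1

xs = np.linspace(-1.0, 1.0, 801)
X1, X2 = np.meshgrid(xs, xs)
feasible = X1**2 + X2**2 <= 1.0                  # the constraint set
vals = np.where(feasible, f(X1, X2), -np.inf)    # exclude infeasible points
i = np.unravel_index(np.argmax(vals), vals.shape)
best_point = (X1[i], X2[i])                      # should be close to (0, 1)
best_value = vals[i]                             # should be close to 1
```

Grid search is of course hopeless in higher dimensions; it merely corroborates the analytic answer here.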
Problem 1

Figure 3: Graphical representation of Problem 1. [The figure shows the (x₁, x₂) plane over [−2, 2] × [−2, 2] with the contour lines of f(x), the constraint disk x₁² + x₂² ≤ 1, the points (0, −1/2) and (0, 1), and the direction of growth of the objective.]
Problem 2

Solve the following nonlinear programming problem:

    f(x₁, x₂, x₃) = −x₁² − x₂² − x₃² + 4x₃ → max
    s.t. h₁(x₁, x₂, x₃) = x₃ − x₁x₂ ≤ 0,
         h₂(x₁, x₂, x₃) = x₁² + x₂² + x₃² − 3 ≤ 0.

The first-order conditions and the complementarity conditions are as follows:

    −2x₁*(1 + λ₂) + λ₁x₂* = 0,
    −2x₂*(1 + λ₂) + λ₁x₁* = 0,
    4 − 2x₃*(1 + λ₂) − λ₁ = 0,
    λ₁ ≥ 0,  λ₁ = 0 if x₃* − x₁*x₂* < 0,
    λ₂ ≥ 0,  λ₂ = 0 if (x₁*)² + (x₂*)² + (x₃*)² < 3.
Problem 2

What we are going to do now is to contemplate every possible case corresponding to each of the constraints being either binding or non-binding. It'll be tedious, but it's worth it for practice.

Case 1 Both constraints are non-binding. From the fourth and the fifth conditions, λ₁ = λ₂ = 0. From the first three equations, x₁* = x₂* = 0 and x₃* = 2. But then (x₁*)² + (x₂*)² + (x₃*)² = 4 > 3, which violates the second constraint. Contradiction.

Case 2 The first constraint is non-binding, the second one is binding. Then λ₁ = 0 and (x₁*)² + (x₂*)² + (x₃*)² = 3. By the first equation, x₁* = 0 (since 1 + λ₂ ≥ 1 > 0). Similarly, by the second equation, x₂* = 0. Therefore, by the second constraint, x₃* = ±√3. But the first (non-binding) constraint requires that x₃* < x₁*x₂* = 0, so that x₃* = −√3. Now, from the third equation, λ₂ = −2/√3 − 1 < 0, which is impossible.
Problem 2

Case 3 The first constraint is binding, the second one is non-binding. Then x₁*x₂* = x₃* and λ₂ = 0. Suppose first that x₁* ≠ 0. Then, by the second equation, λ₁ = 2x₂*/x₁*. Substitute this into the first equation to conclude that (x₁*)² = (x₂*)², or x₂* = ±x₁*. Therefore, λ₁ = ±2. But λ₁ ≥ 0 by the complementarity condition, so λ₁ = 2 and x₂* = x₁*. By the third equation, we have that x₃* = 1. Since the first constraint is binding, it must be that (x₁*)² = (x₂*)² = 1. Therefore, (x₁*)² + (x₂*)² + (x₃*)² = 3, which violates the assumption that the second constraint is non-binding. Thus we can only have x₁* = 0. By the second equation, x₂* = 0. Therefore, by the first constraint, x₃* = 0. Now from the third equation, we have λ₁ = 4. To sum up, x₁* = x₂* = x₃* = 0 with λ₁ = 4 and λ₂ = 0 satisfies the Kuhn-Tucker conditions.
Problem 2

Case 4 Both constraints are binding: (x₁*)² + (x₂*)² + (x₃*)² = 3 and x₁*x₂* = x₃*. Subtract the second first-order equation from the first to conclude that (x₂* − x₁*)(2 + λ₁ + 2λ₂) = 0. The second factor of the product is strictly positive by the complementarity conditions, so it must be that x₁* = x₂*. Substituting this into the first constraint, we have x₃* = (x₁*)². Substituting again into the second constraint, we obtain (x₁*)⁴ + 2(x₁*)² − 3 = 0. The solutions to this quadratic equation in (x₁*)² are (x₁*)² = −1 ± 2. Since x₁* must be real, the only possibility is (x₁*)² = −1 + 2 = 1, that is, x₁* = ±1. Then x₂* = x₁* = ±1 and x₃* = 1. Substituting this into the first equation, we have (−2 + λ₁ − 2λ₂)x₁* = 0, which implies that −2 + λ₁ − 2λ₂ = 0, since x₁* ≠ 0. But the third equation with x₃* = 1 implies that 2 − λ₁ − 2λ₂ = 0. Add these two equations: −4λ₂ = 0, or λ₂ = 0. Moreover, λ₁ = 2. Therefore, λ₁ = 2 and λ₂ = 0 with either x₁* = x₂* = 1 and x₃* = 1, or x₁* = x₂* = −1 and x₃* = 1, both satisfy the Kuhn-Tucker conditions.
Problem 2

To sum up, we have three candidates for the maximum point:

                     x₁*   x₂*   x₃*   λ₁   λ₂
    Candidate #1      0     0     0     4    0
    Candidate #2      1     1     1     2    0
    Candidate #3     −1    −1     1     2    0

Let's evaluate the objective function f(x₁, x₂, x₃) = −x₁² − x₂² − x₃² + 4x₃ at each of them:

    f(0, 0, 0) = 0,
    f(1, 1, 1) = 1,
    f(−1, −1, 1) = 1.

Therefore, Candidates #2 and #3 both solve the constrained maximization problem.
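A small script of my own can double-check that all three candidates are feasible and that the last two attain the highest objective value:

```python
import numpy as np

f = lambda x: -x[0]**2 - x[1]**2 - x[2]**2 + 4 * x[2]
h1 = lambda x: x[2] - x[0] * x[1]                  # must be <= 0
h2 = lambda x: x[0]**2 + x[1]**2 + x[2]**2 - 3     # must be <= 0

candidates = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (-1.0, -1.0, 1.0)]
# Every candidate must lie in the constraint set.
assert all(h1(c) <= 1e-12 and h2(c) <= 1e-12 for c in candidates)

values = [f(c) for c in candidates]     # objective at each candidate
winners = [c for c, v in zip(candidates, values) if v == max(values)]
```

As expected, two distinct points tie for the maximum, which is consistent with the objective not being quasiconcave enough to force uniqueness here.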
Problem 3

Relax, this problem is going to be much less tiresome. Solve the following nonlinear programming problem:

    f(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²) → max
    s.t. h(x₁, x₂) = x₁² + x₂² ≤ 1/2.

Now the constraint set is convex and the objective function is strictly concave (you should check these). Therefore, if a solution exists, it is unique and can be completely characterized by the Kuhn-Tucker conditions. Moreover, the constraint set is also bounded, which means by Weierstrass's theorem that a solution indeed exists.
Problem 3

The first-order conditions, along with the complementarity condition, are as follows:

    x₁*(1 + λ) = 1,
    x₂* [λ + exp((x₂*)²)] = 0,
    λ ≥ 0,  λ = 0 if (x₁*)² + (x₂*)² < 1/2.

The second equation implies that x₂* = 0. By the first equation, x₁* = (1 + λ)⁻¹. Now use the complementarity condition. It tells us that if (x₁*)² + (x₂*)² = (1 + λ)⁻² < 1/2, then λ = 0, which is equivalent to 1/2 > (1 + 0)⁻² = 1. This is impossible. Thus, the only possibility is that (1 + λ)⁻² = 1/2, or λ = √2 − 1 and x₁* = (1 + λ)⁻¹ = 1/√2.

The quadratic equation (1 + λ)⁻² = 1/2 has another root for λ. Why did we neglect it?
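The claimed solution can be verified by direct substitution; a few lines of plain arithmetic (my own check) make the point:

```python
import math

# Verify the Kuhn-Tucker solution of Problem 3: x1 = 1/sqrt(2), x2 = 0, lam = sqrt(2) - 1.
lam = math.sqrt(2.0) - 1.0
x1, x2 = 1.0 / math.sqrt(2.0), 0.0

foc1 = x1 * (1.0 + lam) - 1.0           # first-order condition in x1: should be 0
foc2 = x2 * (lam + math.exp(x2**2))     # first-order condition in x2: should be 0
slack = x1**2 + x2**2 - 0.5             # the constraint holds with equality: should be 0
```

The other root of (1 + λ)⁻² = 1/2 would give λ = −√2 − 1 < 0, which the nonnegativity requirement on the multiplier rules out.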
Problem 4

Solve the following nonlinear programming problem:

    f(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²) → max
    s.t. h(x₁, x₂) = x₁² + x₂² ≤ 2.

This is the same as Problem 3 with the only modification that the right-hand side of the constraint is now 2 instead of 1/2.
Problem 4

The first-order conditions, along with the complementarity condition, are as follows:

    x₁*(1 + λ) = 1,
    x₂* [λ + exp((x₂*)²)] = 0,
    λ ≥ 0,  λ = 0 if (x₁*)² + (x₂*)² < 2.

The second equation implies that x₂* = 0. By the first equation, x₁* = (1 + λ)⁻¹. Now use the complementarity condition. It tells us that if (x₁*)² + (x₂*)² = (1 + λ)⁻² < 2, then λ = 0, which is equivalent to 2 > (1 + 0)⁻² = 1. This sounds reasonable. Thus, λ = 0, x₁* = (1 + λ)⁻¹ = 1, and x₂* = 0 satisfies the Kuhn-Tucker conditions. We have already seen that the solution in this case is unique, so no further search for other optima is needed.
Problem 4

Note that the point x₁* = 1 and x₂* = 0 is also a global maximum of the objective function f(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²). (You should check this.) This maximum point is now contained in the interior of the constraint set, which is tantamount to saying that the constraint is non-binding and the corresponding multiplier is thus zero. Note also that this maximum point was not contained in the constraint set in Problem 3, and there the function took its constrained maximum on the boundary of the constraint set, where the constraint binds.
Problem 4

Figure 4: Graphical representation of Problems 3 and 4. [The figure shows the contour lines of f(x) over [−2, 2] × [−2, 2], the constraint disks x₁² + x₂² ≤ 1/2 and x₁² + x₂² ≤ 2, the constrained maximizer of Problem 3 on the smaller circle, and the direction of growth of the objective.]
Value Function and the Envelope Theorem

Now we formalize the idea embodied in the multipliers. Consider again the general problem:

    f(x) → max / min
    s.t. g₁(x) = b₁,  h₁(x) ≤ c₁,
         ⋮
         g_K(x) = b_K,  h_M(x) ≤ c_M.

Suppose now that the parameters corresponding to the constraints, b ∈ ℝᴷ and c ∈ ℝᴹ, are allowed to vary. That is, for every possible set of values of the parameters, we solve the corresponding nonlinear programming problem and calculate the optimum value of the objective function. The function that assigns to any set of admissible parameter values this optimum value of the objective function is called the value function v(b, c).
Value Function and the Envelope Theorem

The envelope theorem says the following: Suppose that the value function is differentiable. Assume also that we have a set of parameter values b and c such that if we change any of them by a sufficiently small amount, then at the new, altered values every binding constraint remains binding and every non-binding constraint also remains so. Then the partial derivatives of the value function are exactly the multipliers at the actual optimum point. Formally, for any k = 1, …, K and m = 1, …, M, we have that

    ∂v(b, c)/∂b_k = μ_k,
    ∂v(b, c)/∂c_m = λ_m.
Value Function and the Envelope Theorem

As an example, let's solve Problems 3 and 4 again, but now let's allow the right-hand side of the inequality constraint to be any real number c ∈ ℝ. That is, our task is to solve the following nonlinear programming problem:

    f(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²) → max
    s.t. h(x₁, x₂) = x₁² + x₂² ≤ c.

Our ultimate goal is to determine the value function corresponding to this problem. Thus we have to solve this problem for every possible value of c. First of all, observe that if c < 0, then the constraint set is empty. Therefore, there exists no solution to the problem and the value function is not defined for these values of c. If c = 0, then the constraint set consists of the single point x₁ = x₂ = 0, and this is trivially the unique solution to the problem. That is, v(0) = f(0, 0) = 0.
Value Function and the Envelope Theorem

Next, we contemplate values c > 0. From Problems 3 and 4, we already know the Kuhn-Tucker conditions:

    x₁*(1 + λ) = 1,
    x₂* [λ + exp((x₂*)²)] = 0,
    λ ≥ 0,  λ = 0 if (x₁*)² + (x₂*)² < c.

The second equation implies that x₂* = 0. By the first equation, x₁* = (1 + λ)⁻¹. Now use the complementarity condition. It tells us that if (x₁*)² + (x₂*)² = (1 + λ)⁻² < c, then λ = 0, which is equivalent to c > (1 + 0)⁻² = 1. This makes sense if and only if c > 1. Therefore, if c > 1, then x₁* = 1, x₂* = 0 with λ = 0 is the unique solution and v(c) = f(1, 0) = 1. However, if c ≤ 1, then we run into a contradiction if we assume that the constraint is non-binding. In this case, the constraint must bind, and we have x₁* = (1 + λ)⁻¹ = √c and λ = 1/√c − 1. Also,

    v(c) = f(√c, 0) = 1 − (1 − √c)².
Value Function and the Envelope Theorem

To sum up:

                 x₁*    x₂*    λ             v(c)
    c < 0        N/A    N/A    N/A           N/A
    c = 0         0      0     N/A            0
    0 < c ≤ 1    √c      0     1/√c − 1      1 − (1 − √c)²
    c > 1         1      0     0              1

Let's calculate v′(c) wherever the value function is differentiable and compare its value with λ. What is our conclusion?
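Carrying out that comparison numerically (this little check is mine, not part of the slides):

```python
import math

def v(c):
    """Value function derived above (defined for c >= 0)."""
    return 1.0 if c > 1 else 1.0 - (1.0 - math.sqrt(c)) ** 2

def lam(c):
    """Multiplier at the optimum: binding for 0 < c <= 1, slack for c > 1."""
    return 0.0 if c > 1 else 1.0 / math.sqrt(c) - 1.0

# Envelope theorem: v'(c) should equal lam(c) wherever v is differentiable.
c, eps = 0.25, 1e-6
v_prime = (v(c + eps) - v(c - eps)) / (2.0 * eps)   # numeric derivative of v
```

At c = 0.25 the numeric derivative comes out equal to λ(0.25) up to discretization error, which is exactly what the envelope theorem predicts.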
Value Function and the Envelope Theorem

Figure 5: Value function of the generalized version of Problems 3 and 4. [The plot shows v(c) rising from 0 at c = 0 to 1 at c = 1 and staying constant at 1 thereafter.]
Problem 5

Let's solve Problem 4 yet again, but now let's prescribe that the constraint hold with equality:

    f(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²) → max
    s.t. h(x₁, x₂) = x₁² + x₂² = 2.

The Lagrangian pertaining to this problem is the following:

    L(x₁, x₂) = 2 − (x₁ − 1)² − exp(x₂²) − μ (x₁² + x₂² − 2).

What can we say about the existence and uniqueness of the maximum point? Think twice: The constraint set is not convex now!
Problem 5

The first-order conditions are the same as in Problem 4 (except that we have μ instead of λ), but now we do not require any complementarity condition:

    x₁*(1 + μ) = 1,
    x₂* [μ + exp((x₂*)²)] = 0.

By the second equation, one possibility is that x₂* = 0 and, from the constraint (x₁*)² + (x₂*)² = 2, x₁* = ±√2. From the first equation, the respective multipliers pertaining to these cases are μ = ±1/√2 − 1. Both values are negative. Since μ, as we have just seen, can also take negative values, we cannot know for sure that the only possibility for the second equation to hold is x₂* = 0, as it was in Problems 3 and 4. So we have to explore the possibility that μ + exp((x₂*)²) = 0, too.
Problem 5

So suppose that μ + exp((x₂*)²) = 0. From the first equation, μ = 1/x₁* − 1. Substitute this into the previous equation to yield 1/x₁* − 1 + exp((x₂*)²) = 0. From the constraint we know that (x₂*)² = 2 − (x₁*)². We conclude that we have to solve the following (univariate) equation:

    1/x₁* + exp(2 − (x₁*)²) − 1 = 0.

By numerical methods (you don't have to be able to calculate these), we find three solutions: −1.1768, −0.1613, and 1.6994. The third one is inadmissible, since it would yield no real value for x₂*. Each of the first two roots admits two values for x₂*: ±0.7842 and ±1.4050, respectively.
Problem 5

To sum up, we have six candidates (once we have x₁* and x₂*, the multiplier μ and the value of the objective function are easily calculated):

                      x₁*       x₂*        μ         f(x₁*, x₂*)
    Candidate #1     1.4142    0.0000    −0.2929      0.8284
    Candidate #2    −1.4142    0.0000    −1.7071     −4.8284
    Candidate #3    −1.1768    0.7842    −1.8497     −4.5884
    Candidate #4    −1.1768   −0.7842    −1.8497     −4.5884
    Candidate #5    −0.1613    1.4050    −7.1993     −6.5479
    Candidate #6    −0.1613   −1.4050    −7.1993     −6.5479

Therefore, we have a unique solution to the constrained maximization problem: x₁* = 1.4142 = √2 and x₂* = 0.
Problem 5

Figure 6: Graphical representation of Problem 5. [The figure shows the contour lines of f(x), the constraint circle x₁² + x₂² = 2, the direction of growth, and the six candidate points (1.4142, 0), (−1.4142, 0), (−1.1768, ±0.7842), and (−0.1613, ±1.4050).]
Regularity Condition

Remember that when we were discussing the optimality of the Kuhn-Tucker solution, we argued that the Kuhn-Tucker conditions are necessary (but sufficient only under further special conditions) for a point to be a maximum, provided that a technical condition is satisfied. This condition is called the regularity condition or constraint qualification. Be careful: If this condition fails, then it may be the case that there are some points which satisfy the Kuhn-Tucker conditions but none of them is a maximum point, and the maximum is elsewhere, at a point which does not satisfy the Kuhn-Tucker conditions!
Regularity Condition

Formally, consider the following nonlinear optimization problem:

    f(x) → max
    s.t. h₁(x) ≤ c₁, …, h_M(x) ≤ c_M.

We say that a point x ∈ S satisfies the regularity condition if the gradients of the constraint functions corresponding to the binding constraints are linearly independent at this point. More formally, suppose that the first M̃ ≤ M constraints are the ones which are binding. Then, the following matrix must have rank M̃:

    ⎛ ∂h₁(x)/∂x₁    …    ∂h₁(x)/∂xₙ  ⎞
    ⎜      ⋮        ⋱         ⋮      ⎟
    ⎝ ∂h_M̃(x)/∂x₁   …    ∂h_M̃(x)/∂xₙ ⎠.
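In practice this is a rank computation; a small helper of my own using NumPy:

```python
import numpy as np

def regularity_holds(binding_gradients):
    """True iff the gradients of the binding constraints are linearly
    independent, i.e. the Jacobian of binding constraints has full row rank."""
    J = np.atleast_2d(np.asarray(binding_gradients, dtype=float))
    return bool(np.linalg.matrix_rank(J) == J.shape[0])

# Hypothetical gradients in R^2, for illustration only:
independent = regularity_holds([[1.0, 0.0], [0.0, 1.0]])   # linearly independent
parallel = regularity_holds([[1.0, 2.0], [2.0, 4.0]])      # parallel rows
```

Note that numerical rank is computed up to a tolerance, so near-dependence is treated as dependence; for the textbook examples below the gradients are exact and this subtlety does not bite.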
Regularity Condition

So far, we were not concerned about the regularity condition when we were solving problems. However, as the following problem suggests, its failure to hold may well cause complications. Therefore, the general method of solving a nonlinear optimization problem must be complemented by a second step:

    1. Determine all points in the constraint set for which the Kuhn-Tucker conditions are satisfied. (We've been doing this until now.)
    2. Determine all points in the constraint set for which the regularity condition does not hold.

Then, if a maximum point exists (or several maxima exist), it (they) must be among the points that we can determine following these two steps.
Problem 6

Solve the following nonlinear programming problem, and this time be careful about the regularity condition:

    f(x₁, x₂) = 3x₁ + x₂ → max
    s.t. h₁(x₁, x₂) = (x₁ − 1)³ + x₂ ≤ 0,
         x₁ ≥ 0,
         x₂ ≥ 0.

First of all, let's rewrite the nonnegativity constraints into the following less-than-or-equal-to forms: h₂(x₁, x₂) = −x₁ ≤ 0 and h₃(x₁, x₂) = −x₂ ≤ 0. The Lagrangian is thus as follows:

    L(x₁, x₂) = 3x₁ + x₂ − λ₁ [(x₁ − 1)³ + x₂] + λ₂x₁ + λ₃x₂.
Problem 6

The first-order conditions, along with the complementarity conditions (Reminder: What do we call these two sets of conditions collectively?), are the following (with a slight abuse of notation, the usual asterisks are omitted for simplicity):

    3 − 3λ₁(x₁ − 1)² + λ₂ = 0,
    1 − λ₁ + λ₃ = 0,
    λ₁ ≥ 0,  λ₁ = 0 if (x₁ − 1)³ + x₂ < 0,
    λ₂ ≥ 0,  λ₂ = 0 if x₁ > 0,
    λ₃ ≥ 0,  λ₃ = 0 if x₂ > 0.

By either the first or the second equation, λ₁ > 0 must hold (why?). The first constraint must thus hold with equality: (x₁ − 1)³ + x₂ = 0. Now suppose that λ₃ > 0. Therefore, x₂ = 0. But then x₁ = 1, which leads us to a contradiction through the first equation. Therefore, it must be that λ₃ = 0.
Problem 6

But if λ₃ = 0, then λ₁ = 1 by the second equation. Substituting this into the first equation, we have 3 − 3(x₁ − 1)² = −λ₂ ≤ 0, which is equivalent to 1 ≤ (x₁ − 1)². Therefore, either x₁ ≥ 2 or x₁ ≤ 0. But if we substitute any x₁ ≥ 2 into the first constraint, which, as we have just seen, must hold with equality: (x₁ − 1)³ + x₂ = 0, then we obtain a negative value for x₂, which violates the nonnegativity constraint. So the only possibility left is that x₁ ≤ 0. Together with the nonnegativity constraint, this means that x₁ = 0. Putting this into the first constraint, we have that x₂ = 1. Moreover, λ₁ = 1, λ₃ = 0 (we've already established these), and, from the first equation, λ₂ = 0. This set of values clearly satisfies the Kuhn-Tucker conditions. Moreover, it is the only set of values that does so. We thus have a unique Kuhn-Tucker solution. The value of the objective function is f(0, 1) = 1.
Problem 6

Now let's determine the points in the constraint set where the regularity condition is not satisfied. Remember, it is not satisfied whenever the gradients of the binding constraints are not linearly independent. The constraints are represented by the following functions: h₁(x₁, x₂) = (x₁ − 1)³ + x₂, h₂(x₁, x₂) = −x₁, and h₃(x₁, x₂) = −x₂. The gradients are given as follows:

    ∂h₁(x)/∂x₁ = 3(x₁ − 1)²,   ∂h₁(x)/∂x₂ = 1,
    ∂h₂(x)/∂x₁ = −1,           ∂h₂(x)/∂x₂ = 0,
    ∂h₃(x)/∂x₁ = 0,            ∂h₃(x)/∂x₂ = −1.

We should systematically go through every possible case corresponding to each of the constraints being either binding or non-binding (there are eight possibilities). Again, it'll be tedious, but we have to practice.
Problem 6

Case 1 None of the constraints is binding. Then the regularity condition is trivially satisfied.

Case 2 Only the first constraint is binding. That is: (x₁ − 1)³ + x₂ = 0, x₁ > 0, and x₂ > 0. Since the second element of the gradient of this constraint is always 1, irrespective of x, the regularity condition holds. (Remember that if a set of vectors contains only one element, then we say that linear independence holds whenever this single vector is not 0.)

Case 3 Only the second constraint is binding. That is: (x₁ − 1)³ + x₂ < 0, x₁ = 0, and x₂ > 0. Since the first element of the gradient of this constraint is always −1, the regularity condition holds.

Case 4 Only the third constraint is binding. That is: (x₁ − 1)³ + x₂ < 0, x₁ > 0, and x₂ = 0. Analogous to Case 3.
Problem 6

Case 5 The first two constraints are binding, but the third is not. That is: (x₁ − 1)³ + x₂ = 0, x₁ = 0, and x₂ > 0. Therefore, for the regularity condition to hold, we must have that the following matrix (containing the gradients of the first two constraints as rows) has rank 2:

    ⎛ 3(x₁ − 1)²    1 ⎞
    ⎝     −1        0 ⎠.

But this matrix is always of rank 2. (To see this, calculate the determinant, for example.)
Problem 6

Case 6 The second and the third constraints are binding, but the first is not. That is: (x₁ − 1)³ + x₂ < 0, x₁ = 0, and x₂ = 0. For the regularity condition to hold, it must be that the following matrix is of rank 2:

    ⎛ −1    0 ⎞
    ⎝  0   −1 ⎠.

This matrix is always of rank 2.
Problem 6

Case 7 All constraints are binding. That is: (x₁ − 1)³ + x₂ = 0, x₁ = 0, and x₂ = 0. For the regularity condition to hold, it must be that the following matrix is of rank 3:

    ⎛ 3(x₁ − 1)²    1 ⎞
    ⎜     −1        0 ⎟
    ⎝      0       −1 ⎠.

But this matrix can never be of rank 3. (Why? Hint: Think of the fundamental theorem of linear algebra.) So, if such a point were to exist, we would have to conclude that the regularity condition is violated. But, fortunately, no such point can exist where all three constraints are simultaneously binding. (Why?)
Problem 6

Case 8 The first and the third constraints are binding, but the second is not. That is: (x₁ − 1)³ + x₂ = 0, x₁ > 0, and x₂ = 0. For the regularity condition to hold, it must be that the following matrix is of rank 2:

    ⎛ 3(x₁ − 1)²    1 ⎞
    ⎝      0       −1 ⎠.

We can easily show that this matrix has rank 2 if and only if 3(x₁ − 1)² ≠ 0. If 3(x₁ − 1)² = 0, then the matrix is of only rank 1 and the regularity condition is violated. This can occur if and only if x₁ = 1. But the third constraint is binding, therefore x₂ = 0. Note that the first constraint is indeed binding with this choice of values. We thus have the regularity condition violated at x₁ = 1 and x₂ = 0. Note that f(1, 0) = 3. Let's call this point an irregular point.
Problem 6

Now let's compare the Kuhn-Tucker solution and the irregular point:

                             x₁    x₂    f(x₁, x₂)
    Kuhn-Tucker solution      0     1        1
    Irregular point           1     0        3

We now see that the Kuhn-Tucker solution is not a maximum! Moreover, there is no way to assign values to the multipliers so that the irregular point satisfies the Kuhn-Tucker conditions. We conclude that the unique solution to the maximization problem is the irregular point in this case.
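A brute-force check of my own confirms that the maximum over the constraint set is indeed 3, attained at the irregular point, while the Kuhn-Tucker point only yields 1:

```python
import numpy as np

f = lambda x1, x2: 3.0 * x1 + x2

# Grid over [0, 2] x [0, 2]; nonnegativity is built into the grid itself.
xs = np.linspace(0.0, 2.0, 1001)
X1, X2 = np.meshgrid(xs, xs)
# Small tolerance guards against floating-point noise on the boundary.
feasible = (X1 - 1.0) ** 3 + X2 <= 1e-9
vals = np.where(feasible, f(X1, X2), -np.inf)

best_value = vals.max()     # maximum over the constraint set, attained at (1, 0)
kt_value = f(0.0, 1.0)      # value at the Kuhn-Tucker solution
```

This is exactly the pathology the regularity condition is meant to flag: the true maximizer leaves no multiplier values that satisfy the Kuhn-Tucker system.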
Problem 6

Figure 7: Example where the Kuhn-Tucker method does not work due to the regularity condition failing to hold. [The figure shows the constraint set given by (x₁ − 1)³ + x₂ ≤ 0, x₁ ≥ 0, x₂ ≥ 0, the Kuhn-Tucker solution (0, 1), the irregular point (1, 0), the gradients of the first constraint and of a nonnegativity constraint, the contour lines of the objective function, and its direction of growth (upward and to the right).]
Thank you for your attention.
