Optimization in Economic Theory
PD Dr. Johannes Paha
University of Hohenheim
winter term 2021/22
johannes.paha@uni-hohenheim.de Optimization in Economic Theory (winter term 2021/22) 1 / 35
Chapter 6 – Constrained Optimization II
Chapter 6
Constrained Optimization II (Several Constraints)
Dec 01, 2021
Optimization in Economic Theory
PD Dr. Johannes Paha
winter term 2021/22
Overview – Constrained Optimization
Recap: Constrained optimization with 2 variables and 1 constraint
Second order conditions
Generalization:
Lagrange with n choice variables and m equality constraints
Concluding remarks
Background reading: Simon and Blume (1994, Ch. 18.2, 19.3, 30.4)
Learning goals
Students
Can use the bordered Hessian matrix to determine whether the solution to a
constrained optimization problem with two variables
and one constraint is a maximum or minimum
Can discuss conditions under which the Lagrange method works
to solve a constrained optimization problem with two variables
and one constraint
Can set up and solve the Lagrange function associated with a constrained
optimization problem with n variables and m < n constraints, and use the bordered
Hessian matrix to determine whether the solution is a maximum or minimum
Lagrange Theorem
Lagrange’s theorem: If (x̄1, x̄2) maximizes F(x1, x2) subject to the
constraint H(x1, x2) = c with no other constraints and with Hj(x̄1, x̄2) ≠ 0 for at
least one j, then there is a value λ̄ such that
Lj(x̄1, x̄2, λ̄) = 0 for j = 1, 2 and Lλ(x̄1, x̄2, λ̄) = 0.
Further steps
Second order conditions
Constraint qualification
Generalization to n choice variables and m < n constraints
Second order conditions
Consider the maximization of F (x1 , x2 ) subject to H(x1 , x2 ) = c
with both being increasing functions of their arguments.
How can we be sure that we obtained a maximum
(and not a minimum)?
The relative curvature of the contours of F and H at the candidate point is
important.
In order to have a maximum, the contour of F should be more convex than that of
H.
Formally, we need the bordered Hessian matrix to evaluate whether this is the case.
Second order conditions
[Figure: Maximum vs. minimum – graphical illustration of the contours of F and H, panels (a) and (b)]
Second order conditions – The bordered Hessian matrix
Recall the Lagrangian
L = F (x1 , x2 ) + λ (c − H(x1 , x2 ))
The bordered Hessian matrix Hb reads

$$H^b = \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & F_{11}-\lambda H_{11} & F_{12}-\lambda H_{12} \\ -H_2 & F_{21}-\lambda H_{21} & F_{22}-\lambda H_{22} \end{pmatrix}$$
In the different parts, the bordered Hessian matrix contains the second derivatives
of the Lagrangian with respect to x1 , x2 , and λ
The 0 at the top left is the second derivative of the Lagrangian with respect to λ:
Lλλ = 0
The border entries −H1 and −H2 emerge as the cross derivatives Lλxi = −Hi.
At the bottom right, we have the second derivatives with respect to x1 and x2
Second order conditions – The bordered Hessian matrix
If, evaluated at (x̄1, x̄2, λ̄),

$$\det \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & F_{11}-\lambda H_{11} & F_{12}-\lambda H_{12} \\ -H_2 & F_{21}-\lambda H_{21} & F_{22}-\lambda H_{22} \end{pmatrix} > 0,$$

we have a maximum, because the Hessian of the Lagrangian is then negative
definite on the constraint set.
And if the sign is negative, we have a minimum, because the Hessian of the
Lagrangian is then positive definite on the constraint set.
Second order conditions – The bordered Hessian matrix
[Figure: see Gohout and Reimer (2002, p. 129)]
Second order conditions – The bordered Hessian matrix
Given

$$H^b = \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & F_{11}-\lambda H_{11} & F_{12}-\lambda H_{12} \\ -H_2 & F_{21}-\lambda H_{21} & F_{22}-\lambda H_{22} \end{pmatrix} = \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & L_{11} & L_{12} \\ -H_2 & L_{12} & L_{22} \end{pmatrix},$$

one finds

$$\det(H^b) = H_1 H_2 L_{12} + H_2 H_1 L_{12} - H_2^2 L_{11} - H_1^2 L_{22} = -\left( L_{11} H_2^2 - 2 L_{12} H_1 H_2 + L_{22} H_1^2 \right).$$
This term must be positive for a maximum.
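The simplified expression can be checked numerically against a direct cofactor expansion of the 3×3 determinant. This is a minimal sketch; the function names and the numeric values for H1, H2 and the second derivatives of L are purely illustrative:

```python
def det3(M):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def bordered_hessian_det(H1, H2, L11, L12, L22):
    """det(Hb) for two variables and one constraint, using the
    simplified formula -(L11*H2^2 - 2*L12*H1*H2 + L22*H1^2)."""
    return -(L11 * H2**2 - 2 * L12 * H1 * H2 + L22 * H1**2)

# Arbitrary illustrative values
H1, H2, L11, L12, L22 = 1.0, 2.0, -1.0, 0.5, -3.0
Hb = [[0.0, -H1, -H2],
      [-H1, L11, L12],
      [-H2, L12, L22]]

# Both routes to det(Hb) agree
assert abs(det3(Hb) - bordered_hessian_det(H1, H2, L11, L12, L22)) < 1e-12
print(bordered_hessian_det(H1, H2, L11, L12, L22))  # 9.0 > 0 -> maximum
```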
Second order conditions – The bordered Hessian matrix
Put differently:
Let CH = {(x1, x2) : H(x1, x2) = c} be the constraint set.
Moreover, we assume that at the maximizer (x̄1, x̄2) the conditions Hi(x̄) ≠ 0
apply (the constraint qualification is satisfied, see below).
Then, if F and H are C² functions, the constraint set can be considered as the
graph of a C¹ function x2 = ϕ(x1) around (x̄1, x̄2) (e.g., the budget line), so that
H(x1, ϕ(x1)) = c for all x1 near x̄1.
Differentiating yields (e.g., the slope of the budget line)

$$\frac{\partial H(\cdot)}{\partial x_1} + \frac{\partial H(\cdot)}{\partial x_2}\,\varphi'(x_1) = 0
\quad\Longleftrightarrow\quad
\varphi'(x_1) = \frac{\partial x_2}{\partial x_1} = -\frac{\partial H(\cdot)/\partial x_1}{\partial H(\cdot)/\partial x_2}.$$
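For a linear budget constraint this formula is easy to verify numerically: with H(x1, x2) = p1 x1 + p2 x2, the implicit function is ϕ(x1) = (I − p1 x1)/p2, whose slope should equal −H1/H2 = −p1/p2. A sketch with arbitrary illustrative prices and income:

```python
p1, p2, I = 2.0, 3.0, 12.0  # illustrative prices and income

def phi(x1):
    """Budget line solved for x2, so that H(x1, phi(x1)) = I."""
    return (I - p1 * x1) / p2

# Numerical derivative of phi via a central difference
x1, h = 1.5, 1e-6
phi_prime = (phi(x1 + h) - phi(x1 - h)) / (2 * h)

# Analytical slope: -H1/H2 = -p1/p2
assert abs(phi_prime - (-p1 / p2)) < 1e-6
print(phi_prime)  # approx -0.6667 = -p1/p2
```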
Second order conditions – The bordered Hessian matrix
Also let f(x1) ≡ F(x1, ϕ(x1)) be a function of one unconstrained variable.
To be shown: If f′(x̄1) = 0 and f″(x̄1) < 0, then x̄1 is a strict local maximum of f, and
(x̄1, x̄2) = (x̄1, ϕ(x̄1)) is a local constrained maximum of F.
Compute f′(x1):

$$f'(x_1) = F_1(x_1, \varphi(x_1)) + F_2(x_1, \varphi(x_1))\,\varphi'(x_1)$$

Multiply H1(x̄1, ϕ(x̄1)) + H2(x̄1, ϕ(x̄1))ϕ′(x̄1) = 0 by −λ and add it to f′(x̄1),
evaluating both at x1 = x̄1:

$$f'(\bar{x}_1) = \big(F_1(\cdot) - \lambda H_1(\cdot)\big) + \varphi'(\cdot)\big(F_2(\cdot) - \lambda H_2(\cdot)\big) = L_1(\cdot) + \varphi'(\cdot)\,L_2(\cdot)$$
Second order conditions – The bordered Hessian matrix
Compute f″(x̄1) at x̄1, given x̄2 = ϕ(x̄1):

$$f''(\bar{x}_1) = \left[L_{11} + L_{12}\varphi'\right] + \varphi'\left[L_{12} + \varphi' L_{22}\right]
= L_{11} + 2L_{12}\left(-\frac{H_1}{H_2}\right) + L_{22}\left(-\frac{H_1}{H_2}\right)^2
= H_2^{-2}\left( L_{11} H_2^2 - 2 L_{12} H_1 H_2 + L_{22} H_1^2 \right)$$

(x̄1, x̄2) is a maximum if f″(x̄1) < 0, so that

$$-\left( L_{11} H_2^2 - 2 L_{12} H_1 H_2 + L_{22} H_1^2 \right) = \det(H^b) > 0.$$
This proves that the determinant of the bordered Hessian matrix det(Hb) must be
positive for a maximum.
See the general case with n > 2 variables and m > 1 constraints below.
Example: Optimal production
Consider an economy with 100 units of labor
that can produce beer (x) or wine (y).
To produce x units of beer, x² units of labor are needed.
And to produce y units of wine, y² units of labor are required.
The economy’s resource constraint is
x² + y² = 100, where H(x, y) ≡ x² + y².
Geometrically, the PPF is a quarter-circle.
Suppose that the objective function is
F (x , y ) = ax + by
where a > 0 and b > 0 act as social values attached to the two goods.
Example: Optimal production
Step 1: Set up the Lagrangian

L(x, y, λ) = ax + by + λ(100 − x² − y²).

Step 2: Calculate the necessary FOCs

∂L/∂x = a − 2λx = 0,  (1)
∂L/∂y = b − 2λy = 0,  (2)
∂L/∂λ = 100 − x² − y² = 0.  (3)
Example: Optimal production
Step 3: Solve for the unknowns. For example: isolating x and y from (1) and (2)
and plugging the result into (3) yields

λ̄ = √(a² + b²) / 20.

We have excluded the solution with a negative λ because it is only consistent with
negative outputs x and y; see equations (1) and (2).
Step 4: Plugging this into x and y yields

x̄ = 10a / √(a² + b²),  ȳ = 10b / √(a² + b²).
Example: Optimal production
Insights
If a rises, x increases and y decreases.
If a and b increase by the same factor, x and y do not change.
Relative changes between a and b, however, do matter.
We will analyze this further in the comparative statics section.
Example: Optimal production
Second order condition
The bordered Hessian emerges as

$$H^b = \begin{pmatrix} 0 & -2x & -2y \\ -2x & -2\lambda & 0 \\ -2y & 0 & -2\lambda \end{pmatrix}$$

It is easy to show (rule of Sarrus) that det(Hb) = 8λ(x² + y²).
With x , y , λ > 0 at the candidate point (x̄ , ȳ , λ̄), det(Hb) > 0 and (x̄ , ȳ , λ̄) is a
local maximum of the constrained optimization problem.
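The whole example can be verified numerically. The sketch below uses illustrative social values a = 3, b = 4 (so that √(a² + b²) = 5), plugs the closed-form solution into the FOCs (1)–(3), and checks the sign of det(Hb) = 8λ(x² + y²):

```python
import math

a, b = 3.0, 4.0          # illustrative social values of beer and wine
root = math.sqrt(a**2 + b**2)

lam = root / 20          # lambda-bar
x = 10 * a / root        # x-bar
y = 10 * b / root        # y-bar

# FOCs (1)-(3) hold at the candidate point
assert abs(a - 2 * lam * x) < 1e-12
assert abs(b - 2 * lam * y) < 1e-12
assert abs(100 - x**2 - y**2) < 1e-9

# Second order condition: det(Hb) = 8*lambda*(x^2 + y^2) > 0 -> maximum
det_Hb = 8 * lam * (x**2 + y**2)
assert det_Hb > 0
print(x, y, lam, det_Hb)  # 6.0 8.0 0.25 200.0
```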
Constraint qualification (Recall from Chapter 5)
Consider the utility maximization problem with two goods
maximize U(x1 , x2 )
subject to p1 x1 + p2 x2 = I
The objective function F (x) is the utility function U(x1 , x2 )
The (equality) constraint H(x) is the budget constraint
Optimality requires

$$\frac{F_1(x_1,x_2)}{F_2(x_1,x_2)} = \frac{H_1(x_1,x_2)}{H_2(x_1,x_2)}
\quad\text{or}\quad
\frac{F_1(x_1,x_2)}{H_1(x_1,x_2)} = \frac{F_2(x_1,x_2)}{H_2(x_1,x_2)}$$

Therefore

$$\frac{MU_1}{p_1} = \frac{MU_2}{p_2},$$

which means that in the optimum, the marginal utility per unit of money spent is
equalized across the two goods.
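This equalization can be illustrated with a hypothetical Cobb-Douglas utility U(x1, x2) = x1^0.5 x2^0.5 (not from the slides; the prices, income, and the closed-form demands x1 = I/(2p1), x2 = I/(2p2) are standard textbook results used here as assumptions):

```python
p1, p2, I = 1.0, 2.0, 8.0   # illustrative prices and income

def MU1(x1, x2):
    """Marginal utility of good 1 for U = x1^0.5 * x2^0.5."""
    return 0.5 * (x2 / x1) ** 0.5

def MU2(x1, x2):
    """Marginal utility of good 2 for the same utility function."""
    return 0.5 * (x1 / x2) ** 0.5

# Cobb-Douglas demands: half the budget is spent on each good
x1, x2 = I / (2 * p1), I / (2 * p2)

assert abs(p1 * x1 + p2 * x2 - I) < 1e-12                 # budget holds
assert abs(MU1(x1, x2) / p1 - MU2(x1, x2) / p2) < 1e-12   # MU1/p1 = MU2/p2
```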
Constraint qualification
If one Hi (x̄1 , x̄2 ) becomes zero, good i is available for free
(pi = 0 in a utility maximization context).
We can change xi without affecting the constraint.
(The budget line becomes either vertical or horizontal.)
If this leads to a rise in the objective function (i.e. Fi (x̄1 , x̄2 ) > 0),
this is desirable
In this case, one should consume up to the point of satiation at which
Fi (x̄1 , x̄2 ) is also zero.
However, the Lagrange method fails if Hi (x̄1 , x̄2 ) = 0 for both i.
(It is not possible to draw the budget line any more.)
Constraint qualification fails
Consider consumer’s optimal consumption choice problem
Rewrite the budget constraint as

(p1 x1 + p2 x2 − I)³ = 0,

where the left-hand side defines H(x1, x2) and the right-hand side the constant c.
(A complicated though mathematically equivalent way of expressing the budget constraint.)
Partial derivatives: Hi(x1, x2) = 3pi (p1 x1 + p2 x2 − I)²
Any consumption bundle on the constraint satisfies p1 x1 + p2 x2 = I
and thus Hi (x1 , x2 ) = 0 for both i
But: Goods are not free at the margin (an increase in xi costs pi )
Yet, quantities have zero effects on the constraint
The Lagrange method cannot deal with this setting
(although we know that a solution exists).
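A short numerical sketch (with illustrative prices and income) confirms the failure: on the budget line the gradient of the cubed constraint vanishes identically, even though goods are not free at the margin.

```python
p1, p2, I = 2.0, 3.0, 12.0   # illustrative prices and income

def H_cubed_grad(x1, x2):
    """Gradient of H(x1, x2) = (p1*x1 + p2*x2 - I)^3,
    i.e. Hi = 3*pi*(p1*x1 + p2*x2 - I)^2."""
    slack = p1 * x1 + p2 * x2 - I
    return (3 * p1 * slack**2, 3 * p2 * slack**2)

# A bundle that exactly exhausts the budget: 2*3 + 3*2 = 12
H1, H2 = H_cubed_grad(3.0, 2.0)
assert H1 == 0.0 and H2 == 0.0   # constraint qualification fails here

# The ordinary linear budget constraint has gradient (p1, p2) != (0, 0)
assert (p1, p2) != (0.0, 0.0)
print(H1, H2)  # 0.0 0.0
```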
Constraint qualification fails
Remarks
In many economic applications, the failure of the constraint qualification is not the
relevant case.
Even if it is, it can usually be circumvented by rewriting the constraint.
Throughout the course, we will typically assume that the constraint qualification
holds, i.e., that Hi (x̄1 , x̄2 ) ̸= 0 for at least one i.
However, you should always keep in mind that the methods developed in this
course could fail if the constraint qualification does not hold.
More variables and constraints – The Lagrange function
With n choice variables, we have to determine x1, x2, …, xn.
With m < n equality constraints, we have

H^i(x1, x2, …, xn) = ci,  i = 1, 2, …, m.

The Lagrangian is

$$L(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = F(x_1, \ldots, x_n) + \sum_{i=1}^{m} \lambda_i \left( c_i - H^i(x_1, \ldots, x_n) \right),$$

where λi is the Lagrange parameter associated with constraint i.
A more compact notation is

L(x, λ) = F(x) + λ [c − H(x)],

where x is an n-dimensional vector, λ an m-dimensional row vector, c an
m-dimensional vector, F a function taking on scalar values, and
H a function taking on m-dimensional vector values.
More variables and constraints – The FOCs
The necessary FOCs at the optimum are

$$\frac{\partial L}{\partial x_j} = \frac{\partial F}{\partial x_j} - \sum_{i=1}^{m} \lambda_i \frac{\partial H^i}{\partial x_j} = 0, \qquad j = 1, 2, \ldots, n,$$

$$\frac{\partial L}{\partial \lambda_i} = c_i - H^i = 0, \qquad i = 1, 2, \ldots, m.$$
This yields (m + n) equations for the (m + n) unknowns x̄1 , x̄2 , . . . x̄n , λ1 , λ2 , . . . λm .
More variables and constraints – The FOCs
A more compact matrix notation for the n FOCs ∂L/∂xj:
Fx(x) is the 1 × n row vector of partial derivatives of the objective function F
with respect to xj, j = 1, …, n.
Each constraint H^i has a 1 × n row vector of partial derivatives H^i_x(x)
with components ∂H^i/∂xj.
These row vectors are stacked vertically to form an m × n matrix Hx(x).
The Lagrange multipliers form a 1 × m row vector λ.

$$\underbrace{\left( \frac{\partial F}{\partial x_1}, \ldots, \frac{\partial F}{\partial x_n} \right)}_{\equiv F_x(x)} - \underbrace{(\lambda_1, \ldots, \lambda_m)}_{\equiv \lambda} \underbrace{\begin{pmatrix} \frac{\partial H^1}{\partial x_1} & \cdots & \frac{\partial H^1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial H^m}{\partial x_1} & \cdots & \frac{\partial H^m}{\partial x_n} \end{pmatrix}}_{\equiv H_x(x)} = 0$$
More variables and constraints – Constraint qualification
Recall that Hx (x) is the m × n matrix where the row vectors of partial derivatives
are stacked
The constraint qualification is that rank Hx (x̄ ) = m
(the maximum possible; i.e. Hx has “full rank”).
An m × n matrix with m < n is of “full (row) rank” if all of its m rows are linearly independent.
A set of vectors is said to be linearly dependent if one of the vectors in
the set can be defined as a linear combination of the others.
If no vector in the set can be written in this way, then the vectors are
said to be linearly independent.
We say that the vector of constraints (H¹, …, H^m) satisfies the nondegenerate
constraint qualification (NDCQ) at x if the rank of Hx(x) is m.
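For small problems the NDCQ can be checked by hand via minors: a 2 × n Jacobian has rank 2 (full row rank) if and only if at least one of its 2 × 2 minors is non-zero. A minimal sketch, with hypothetical constraint gradients as inputs:

```python
def rank_2xN(J, tol=1e-12):
    """Rank of a 2 x n matrix: 2 iff some 2x2 minor is non-zero,
    1 iff all minors vanish but some entry is non-zero, else 0."""
    row1, row2 = J
    n = len(row1)
    minors = [row1[i] * row2[j] - row1[j] * row2[i]
              for i in range(n) for j in range(i + 1, n)]
    if any(abs(m) > tol for m in minors):
        return 2
    if any(abs(v) > tol for v in row1 + row2):
        return 1
    return 0

# Hypothetical Jacobian Hx(x) of m = 2 constraints in n = 3 variables
assert rank_2xN([[1.0, 1.0, 1.0], [2.0, 4.0, 0.0]]) == 2  # NDCQ holds (rank = m)
assert rank_2xN([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]) == 1  # dependent rows: NDCQ fails
```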
More variables and constraints – Lagrange Method
Consider the Lagrange function
L(x, λ) = F (x) + λ [c − H(x)]
Lagrange’s theorem: If x̄ maximizes F (x) subject to the constraint H(x) = c with
no other constraints and with rank Hx (x̄) = m, then
Lx (x̄, λ) = 0,
Lλ (x̄, λ) = 0.
This is a straightforward extension of the simple case with two variables and one
constraint.
More variables and constraints – 2nd Order Cond’ns
In the general case with n variables and m constraints, the bordered Hessian
matrix emerges as
$$H^b \equiv \begin{pmatrix} 0 & -H_x \\ -H_x^T & F_{xx} - \lambda H_{xx} \end{pmatrix},$$
where HxT is the transpose of Hx
The top left partition is (m × m), the bottom right (n × n),
the top right is (m × n), and the bottom left is (n × m).
We can start at the upper left corner and evaluate the signs of the leading
principal minors.
On the next slides, we see the less compact formulation that provides the intuition
for the structure of the matrix.
More variables and constraints – The bordered Hessian
In the case of n choice variables and m < n constraints we have

$$L(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m) = F(x_1, \ldots, x_n) + \sum_{i=1}^{m} \lambda_i \left( c_i - H^i(x_1, \ldots, x_n) \right).$$

The bordered Hessian in less compact notation is

$$H^b = \begin{pmatrix}
\frac{\partial^2 L}{\partial \lambda_1^2} & \cdots & \frac{\partial^2 L}{\partial \lambda_1 \partial \lambda_m} & \frac{\partial^2 L}{\partial \lambda_1 \partial x_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_1 \partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 L}{\partial \lambda_m \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_m^2} & \frac{\partial^2 L}{\partial \lambda_m \partial x_1} & \cdots & \frac{\partial^2 L}{\partial \lambda_m \partial x_n} \\
\frac{\partial^2 L}{\partial x_1 \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial \lambda_m} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial^2 L}{\partial x_n \partial \lambda_1} & \cdots & \frac{\partial^2 L}{\partial x_n \partial \lambda_m} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix}$$
More variables and constraints – The bordered Hessian
This can be simplified to

$$H^b = \begin{pmatrix}
0 & \cdots & 0 & -\frac{\partial H^1}{\partial x_1} & \cdots & -\frac{\partial H^1}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & -\frac{\partial H^m}{\partial x_1} & \cdots & -\frac{\partial H^m}{\partial x_n} \\
-\frac{\partial H^1}{\partial x_1} & \cdots & -\frac{\partial H^m}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_1 \partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
-\frac{\partial H^1}{\partial x_n} & \cdots & -\frac{\partial H^m}{\partial x_n} & \frac{\partial^2 L}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix}$$

This is in turn equivalent to

$$H^b = \begin{pmatrix} 0 & -H_x \\ -H_x^T & F_{xx} - \lambda H_{xx} \end{pmatrix}$$

in compact notation (as before).
More variables and constraints – The bordered Hessian
Determinants of the principal submatrices
First, note that the first m leading principal submatrices are zero matrices, so
their determinants are 0.
Next, note that the next m leading principal minors are either 0 as well or
contain no information on the objective function.
Only the last n − m leading principal submatrices have a (generically) non-zero
determinant and contain information on the objective function. We only need to
consider these submatrices.
More variables and constraints – 2nd Order Cond’ns
Altogether, we have the following second order conditions:
i) If the leading principal minors (from order 2m + 1 to m + n) alternate in sign, with the
last one (the determinant of Hb) having the sign of (−1)^n, then the Hessian of the
Lagrangian is negative definite on the constraint set and the candidate point is a local maximum.
ii) If all leading principal minors (from order 2m + 1 to m + n) have the same sign as (−1)^m,
then the Hessian of the Lagrangian is positive definite on the constraint set and the
candidate point is a local minimum.
iii) If some of these leading principal minors are non-zero but neither pattern holds, the
quadratic form is indefinite and the candidate point is neither a maximum nor a minimum.
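These sign rules can be encoded directly. The sketch below takes the last n − m leading principal minors of Hb (of orders 2m + 1 through m + n, in that order) and applies rules i)–iii); the function name and the example values are illustrative:

```python
def classify(minors, n, m):
    """Apply the second order conditions to the leading principal minors
    of Hb of orders 2m+1, ..., m+n (len(minors) must equal n - m)."""
    assert len(minors) == n - m and all(d != 0 for d in minors)
    signs = [1 if d > 0 else -1 for d in minors]
    alternating = all(signs[k] == -signs[k + 1] for k in range(len(signs) - 1))
    if alternating and signs[-1] == (-1) ** n:
        return "local maximum"         # rule i)
    if all(s == (-1) ** m for s in signs):
        return "local minimum"         # rule ii)
    return "indefinite"                # rule iii)

# n = 2, m = 1: only det(Hb) itself matters (2m + 1 = m + n = 3)
assert classify([9.0], n=2, m=1) == "local maximum"    # det(Hb) > 0
assert classify([-9.0], n=2, m=1) == "local minimum"   # det(Hb) < 0
```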
More var’s and constraints – Special case: n = 2, m = 1
Note that this definition nests the case with n = 2 and m = 1.
In this case, we only need to evaluate the determinant of the whole bordered
Hessian (because 2m + 1 = 3 = m + n).
Note that, with n = 2 and m = 1, (−1)^n > 0 and (−1)^m < 0.
With n = 2 and m = 1, evaluated at x̄, we have a maximum if

$$\det \begin{pmatrix} 0 & -H_1 & -H_2 \\ -H_1 & F_{11}-\lambda H_{11} & F_{12}-\lambda H_{12} \\ -H_2 & F_{21}-\lambda H_{21} & F_{22}-\lambda H_{22} \end{pmatrix} > 0$$
and a minimum if the inequality sign is reversed.
Concluding remarks
We have explored
Second order conditions to figure out whether we have identified a
maximum or a minimum
Constraint qualifications to figure out whether or not the Lagrange
method is applicable
Moreover, we have extended the Lagrange method to the case of n variables and
m < n equality constraints
As a next step, we will introduce inequality constraints.