
EC400 Part II, Math for Micro: Lecture 6

Leonardo Felli

NAB.SZT
16 September 2010
SOC with Two Variables and One Constraint

Consider the problem:

$$\max_{x_1, x_2} \; f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) \le b.$$

The bordered Hessian matrix is:

$$
B = \begin{pmatrix}
0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} \\[6pt]
\dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 f}{\partial x_1^2} - \lambda^* \dfrac{\partial^2 g}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} - \lambda^* \dfrac{\partial^2 g}{\partial x_1 \partial x_2} \\[6pt]
\dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} - \lambda^* \dfrac{\partial^2 g}{\partial x_1 \partial x_2} & \dfrac{\partial^2 f}{\partial x_2^2} - \lambda^* \dfrac{\partial^2 g}{\partial x_2^2}
\end{pmatrix}
$$
The leading principal submatrices are:

$$
B_1 = (0), \qquad
B_2 = \begin{pmatrix} 0 & \dfrac{\partial g}{\partial x_1} \\[6pt] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 f}{\partial x_1^2} - \lambda^* \dfrac{\partial^2 g}{\partial x_1^2} \end{pmatrix}, \qquad
B_3 = B.
$$
The sufficient SOC are:

$|B_2| < 0$ (the sign of $(-1)^1$), which is always satisfied, and $|B_3| > 0$ (the sign of $(-1)^2$).
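As an illustration (our worked example, not from the original slides): take $f(x_1, x_2) = x_1 x_2$ and the constraint $x_1 + x_2 \le b$ with $b > 0$. The FOC give $x_1^* = x_2^* = \lambda^* = b/2$, and since $g$ is linear while $f_{11} = f_{22} = 0$ and $f_{12} = 1$:

$$B = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \qquad |B_2| = -1 < 0, \qquad |B_3| = 2 > 0,$$

so the sufficient SOC hold and $(b/2,\, b/2)$ is a constrained local maximum.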

Consider now the problem:

$$\max_{x_1, x_2, x_3} \; f(x_1, x_2, x_3) \quad \text{s.t.} \quad g(x_1, x_2, x_3) \le b.$$

The bordered Hessian matrix is:

$$
H = \begin{pmatrix}
0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} & \dfrac{\partial g}{\partial x_3} \\[6pt]
\dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 L}{\partial x_1^2} & \dfrac{\partial^2 L}{\partial x_1 \partial x_2} & \dfrac{\partial^2 L}{\partial x_1 \partial x_3} \\[6pt]
\dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 L}{\partial x_2 \partial x_1} & \dfrac{\partial^2 L}{\partial x_2^2} & \dfrac{\partial^2 L}{\partial x_2 \partial x_3} \\[6pt]
\dfrac{\partial g}{\partial x_3} & \dfrac{\partial^2 L}{\partial x_3 \partial x_1} & \dfrac{\partial^2 L}{\partial x_3 \partial x_2} & \dfrac{\partial^2 L}{\partial x_3^2}
\end{pmatrix}
$$

where $L$ denotes the Lagrangian.
The leading principal submatrices are:

$$
H_1 = (0), \qquad
H_2 = \begin{pmatrix} 0 & \dfrac{\partial g}{\partial x_1} \\[6pt] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 L}{\partial x_1^2} \end{pmatrix},
$$

$$
H_3 = \begin{pmatrix} 0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} \\[6pt] \dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 L}{\partial x_1^2} & \dfrac{\partial^2 L}{\partial x_1 \partial x_2} \\[6pt] \dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 L}{\partial x_2 \partial x_1} & \dfrac{\partial^2 L}{\partial x_2^2} \end{pmatrix}, \qquad
H_4 = H.
$$
The sufficient SOC are then $|H_2| < 0$ (always satisfied), $|H_3| > 0$, and $|H_4| < 0$.
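These sign checks are mechanical, so they are easy to verify symbolically. Below is a minimal sketch (assuming SymPy; the example problem $\max x_1 x_2 x_3$ subject to $x_1 + x_2 + x_3 \le 3$ and its critical point are our own illustration, not from the lecture):

```python
import sympy as sp

# Example: max x1*x2*x3  s.t.  x1 + x2 + x3 <= 3.
# The FOC give the critical point x1 = x2 = x3 = 1 with lambda = 1.
x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam')
f = x1 * x2 * x3
g = x1 + x2 + x3
L = f - lam * g  # Lagrangian

xs = [x1, x2, x3]
grad_g = sp.Matrix([[g.diff(v) for v in xs]])  # 1 x 3 border
hess_L = sp.hessian(L, xs)                     # 3 x 3 core

# Assemble the bordered Hessian H: border the Hessian of L
# with a leading zero and the gradient of g.
H = sp.Matrix.vstack(
    sp.Matrix.hstack(sp.zeros(1, 1), grad_g),
    sp.Matrix.hstack(grad_g.T, hess_L),
)

point = {x1: 1, x2: 1, x3: 1, lam: 1}
for k in range(2, 5):
    # Leading principal minors |H2|, |H3|, |H4| at the critical point.
    print(k, H[:k, :k].det().subs(point))
# Prints -1, 2, -3: the sign pattern -, +, - required for a local max.
```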

Maximum Value Functions

Profit functions and indirect utility functions are examples of maximum value functions, whereas cost functions and expenditure functions are minimum value functions.

Definition
Let $x(b)$ be a solution to the problem of maximizing $f(x)$ subject to $g(x) \le b$; the corresponding maximum value function is then

$$v(b) = f(x(b)).$$

A maximum value function is non-decreasing: increasing $b$ enlarges the feasible set, so the maximized value cannot fall.
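For instance, in our worked example above ($f = x_1 x_2$, $x_1 + x_2 \le b$), $x(b) = (b/2, b/2)$ and

$$v(b) = f(x(b)) = \frac{b^2}{4},$$

which is indeed non-decreasing for $b \ge 0$.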

The Interpretation of the Lagrange Multiplier

Consider the problem:

$$\max_{x \in \mathbb{R}^n} f(x) \quad \text{s.t.} \quad g_1(x) \le b_1^*, \; \ldots, \; g_k(x) \le b_k^*.$$

Let $b^* = (b_1^*, \ldots, b_k^*)$, let $x_1^*(b^*), \ldots, x_n^*(b^*)$ denote the optimal solution, and let $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ be the corresponding Lagrange multipliers.

Theorem
Assume that, as $b$ varies near $b^*$, $x_1^*(b^*), \ldots, x_n^*(b^*)$ and $\lambda_1(b^*), \ldots, \lambda_k(b^*)$ are differentiable functions and that $x^*(b^*)$ satisfies the constraint qualification.
Then for each $j = 1, 2, \ldots, k$:

$$\frac{\partial}{\partial b_j}\, f(x^*(b^*)) = \lambda_j(b^*).$$

Proof:

We prove it in the case of a single equality constraint $h(x, y) = b$, with $f$ and $h$ functions of two variables.

The Lagrangian is

$$L(x, y, \lambda; b) = f(x, y) - \lambda\,(h(x, y) - b).$$

The solution satisfies, for all $b$:

$$0 = \frac{\partial L}{\partial x}(x^*(b), y^*(b), \lambda^*(b); b) = \frac{\partial f}{\partial x}(x^*(b), y^*(b)) - \lambda^*(b)\,\frac{\partial h}{\partial x}(x^*(b), y^*(b)),$$

and

$$0 = \frac{\partial L}{\partial y}(x^*(b), y^*(b), \lambda^*(b); b) = \frac{\partial f}{\partial y}(x^*(b), y^*(b)) - \lambda^*(b)\,\frac{\partial h}{\partial y}(x^*(b), y^*(b)).$$

Furthermore, since $h(x^*(b), y^*(b)) = b$ for all $b$:

$$\frac{\partial h}{\partial x}(x^*, y^*)\,\frac{dx^*(b)}{db} + \frac{\partial h}{\partial y}(x^*, y^*)\,\frac{dy^*(b)}{db} = 1.$$

Therefore, using the chain rule, we have:

$$
\begin{aligned}
\frac{d\, f(x^*(b), y^*(b))}{db}
&= \frac{\partial f}{\partial x}(x^*, y^*)\,\frac{dx^*(b)}{db} + \frac{\partial f}{\partial y}(x^*, y^*)\,\frac{dy^*(b)}{db} \\
&= \lambda^*(b)\left[\frac{\partial h}{\partial x}(x^*, y^*)\,\frac{dx^*(b)}{db} + \frac{\partial h}{\partial y}(x^*, y^*)\,\frac{dy^*(b)}{db}\right] \\
&= \lambda^*(b).
\end{aligned}
$$
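As a quick check in our worked example: $v(b) = b^2/4$ gives $v'(b) = b/2 = \lambda^*(b)$, exactly as the theorem predicts.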

Economic Interpretation:

The Lagrange multiplier can be interpreted as a ‘shadow price’.

For example, in a firm's profit-maximization problem, the Lagrange multipliers tell us how valuable another unit of input would be to the firm's profits.

Alternatively, they tell us how much the firm's maximum profit changes when the constraint is relaxed.

Finally, they identify the maximum amount the firm would be willing to pay to acquire another unit of input.

Recall that, by complementary slackness, $\lambda(b)\,(g(x(b), y(b)) - b) = 0$ at the optimum, so:

$$L(x(b), y(b), \lambda(b); b) = f(x(b), y(b)) - \lambda(b)\,(g(x(b), y(b)) - b) = f(x(b), y(b)).$$

So that

$$\frac{\partial}{\partial b}\, L(x(b), y(b), \lambda(b); b) = \frac{d}{db}\, f(x(b), y(b); b) = \lambda(b).$$

Envelope Theorem

What we have found is simply a particular application of the envelope theorem, which says that

$$\frac{\partial}{\partial b}\, L(x(b), y(b), \lambda(b); b) = \frac{d}{db}\, f(x(b), y(b); b).$$

Consider the problem:

$$\max_{x_1, x_2, \ldots, x_n} f(x_1, x_2, \ldots, x_n; c) \quad \text{s.t.} \quad h_1(x_1, \ldots, x_n, c) = 0, \; \ldots, \; h_k(x_1, \ldots, x_n, c) = 0.$$

Let $x_1^*(c), \ldots, x_n^*(c)$ denote the optimal solution and let $\mu_1(c), \ldots, \mu_k(c)$ be the corresponding Lagrange multipliers.

Suppose that $x_1^*(c), \ldots, x_n^*(c)$ and $\mu_1(c), \ldots, \mu_k(c)$ are differentiable functions and that $x^*(c)$ satisfies the constraint qualification. Then:

$$\frac{\partial}{\partial c}\, L(x^*(c), \mu(c); c) = \frac{d}{dc}\, f(x^*(c); c).$$

Notice: in the case where $h_j(x_1, x_2, \ldots, x_n, c) = 0$ takes the form $h_j^0(x_1, x_2, \ldots, x_n) - c = 0$, we are back to the previous case:

$$\frac{\partial}{\partial c}\, L(x^*(c), \mu(c); c) = \frac{d}{dc}\, f(x^*(c); c) = \mu_j(c),$$

but the statement is more general.

Proof:

We prove it for the simpler case of an unconstrained problem.

Let $\phi(x; a)$ be a continuous function of $x \in \mathbb{R}^n$ and a scalar $a$.

For any $a$, consider the maximization problem $\max_x \phi(x; a)$. Let $x^*(a)$ be the solution of this problem, and assume it is a continuous and differentiable function of $a$.

We will show that

$$\frac{d}{da}\, \phi(x^*(a); a) = \frac{\partial}{\partial a}\, \phi(x^*(a); a).$$

By the chain rule we have:

$$\frac{d}{da}\, \phi(x^*(a); a) = \sum_i \frac{\partial \phi}{\partial x_i}(x^*(a); a)\, \frac{dx_i^*(a)}{da} + \frac{\partial \phi}{\partial a}(x^*(a); a).$$

Since by the first order conditions

$$\frac{\partial \phi}{\partial x_i}(x^*(a); a) = 0 \quad \forall i,$$

we then get:

$$\frac{d}{da}\, \phi(x^*(a); a) = \frac{\partial \phi}{\partial a}(x^*(a); a).$$
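A minimal unconstrained illustration (ours, not from the slides): let $\phi(x; a) = -x^2 + 2ax$. The FOC gives $x^*(a) = a$, so $\phi(x^*(a); a) = a^2$ and $\frac{d}{da}\,\phi(x^*(a); a) = 2a$; differentiating directly with respect to the parameter, $\frac{\partial \phi}{\partial a}(x^*(a); a) = 2x^*(a) = 2a$, as the theorem asserts.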

Intuitively, when we are already at a maximum, slightly changing the parameters of the problem or the constraints does not affect, to a first order, the value of the maximum value function through changes in the solution $x^*(a)$, because of the first order conditions

$$\frac{\partial \phi}{\partial x_i}(x^*(a); a) = 0.$$

This is a local result: when we use the envelope theorem we have to make sure that the solution does not jump to another solution in a discrete manner.

Implicit Function Theorem

In economic theory, once we pin down an equilibrium or a solution to an optimization problem, we are interested in how changes in the exogenous variables affect the endogenous variables.

The key tool used in this endeavor is the Implicit Function Theorem.

We have been using the Implicit Function Theorem throughout without stating it or explaining why we can use it.

Implicit Function Theorem
The Implicit Function Theorem assures us of the conditions under which a set of simultaneous equations:

$$
\begin{aligned}
F^1(y_1, \ldots, y_n;\, x_1, \ldots, x_m) &= 0 \\
F^2(y_1, \ldots, y_n;\, x_1, \ldots, x_m) &= 0 \\
&\;\;\vdots \\
F^n(y_1, \ldots, y_n;\, x_1, \ldots, x_m) &= 0
\end{aligned}
\qquad (1)
$$

defines a set of implicit functions:

$$
\begin{aligned}
y_1 &= f^1(x_1, \ldots, x_m) \\
y_2 &= f^2(x_1, \ldots, x_m) \\
&\;\;\vdots \\
y_n &= f^n(x_1, \ldots, x_m)
\end{aligned}
\qquad (2)
$$

In other words, the conditions of the Implicit Function Theorem guarantee the existence of these implicit functions even though we may not be able to write them in an explicit form.

Consider (1).

Assume that the functions $F^1, \ldots, F^n$ all have continuous partial derivatives with respect to all $x$ and $y$ variables.

Assume also that at a point $(y_0, x_0)$ satisfying (1) the determinant of the $(n \times n)$ Jacobian with respect to the $y$-variables is not 0.

In other words, assume that:

$$
|J| = \begin{vmatrix}
\dfrac{\partial F^1(y_0, x_0)}{\partial y_1} & \dfrac{\partial F^1(y_0, x_0)}{\partial y_2} & \cdots & \dfrac{\partial F^1(y_0, x_0)}{\partial y_n} \\[6pt]
\dfrac{\partial F^2(y_0, x_0)}{\partial y_1} & \dfrac{\partial F^2(y_0, x_0)}{\partial y_2} & \cdots & \dfrac{\partial F^2(y_0, x_0)}{\partial y_n} \\[6pt]
\vdots & \vdots & \ddots & \vdots \\[2pt]
\dfrac{\partial F^n(y_0, x_0)}{\partial y_1} & \dfrac{\partial F^n(y_0, x_0)}{\partial y_2} & \cdots & \dfrac{\partial F^n(y_0, x_0)}{\partial y_n}
\end{vmatrix} \neq 0.
$$
Theorem (Implicit Function Theorem)
Then there exists an m-dimensional neighbourhood of $x_0$ in which the variables $y_1, \ldots, y_n$ are functions of $x_1, \ldots, x_m$ according to the $f^i$ functions as in (2) above.

The definitions in (2) are satisfied at $x_0$ and $y_0$.

They also satisfy (1) for every vector $x$ in the neighbourhood, thereby giving (1) the status of a set of identities.

Finally, the functions $f^i$ are continuous and have continuous partial derivatives with respect to all the $x$ variables.

Derivatives of Implicit Functions

It is then possible to find the partial derivatives of the implicit functions without having to solve explicitly for the $y$ variables.

This is because in the neighbourhood in question the equations (1) have the status of identities; we can then take the total differential of each of these and write $dF^j = 0$.

When considering only $dx_1 \neq 0$ and setting the remaining $dx_i = 0$, the result, in matrix notation, is:

$$
\begin{pmatrix}
\dfrac{\partial F^1}{\partial y_1} & \dfrac{\partial F^1}{\partial y_2} & \cdots & \dfrac{\partial F^1}{\partial y_n} \\[6pt]
\dfrac{\partial F^2}{\partial y_1} & \dfrac{\partial F^2}{\partial y_2} & \cdots & \dfrac{\partial F^2}{\partial y_n} \\[6pt]
\vdots & \vdots & \ddots & \vdots \\[2pt]
\dfrac{\partial F^n}{\partial y_1} & \dfrac{\partial F^n}{\partial y_2} & \cdots & \dfrac{\partial F^n}{\partial y_n}
\end{pmatrix}
\begin{pmatrix}
\dfrac{\partial y_1}{\partial x_1} \\[6pt] \dfrac{\partial y_2}{\partial x_1} \\[6pt] \vdots \\[2pt] \dfrac{\partial y_n}{\partial x_1}
\end{pmatrix}
= -
\begin{pmatrix}
\dfrac{\partial F^1}{\partial x_1} \\[6pt] \dfrac{\partial F^2}{\partial x_1} \\[6pt] \vdots \\[2pt] \dfrac{\partial F^n}{\partial x_1}
\end{pmatrix}
$$

Finally, since $|J|$ is non-zero, this linear system has a unique solution, which by Cramer's rule can be expressed as

$$\frac{\partial y_j}{\partial x_1} = \frac{|J_j|}{|J|},$$

where $|J_j|$ is the determinant of $J$ with its $j$-th column replaced by the right-hand-side vector.
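A one-equation illustration (ours, not from the slides): let $F(y; x) = y^3 + y - x = 0$. Since $\partial F / \partial y = 3y^2 + 1 > 0$ everywhere, the theorem applies at every solution and, without ever solving for $y(x)$ explicitly,

$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = \frac{1}{3\,y(x)^2 + 1}.$$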

Comparative Statics

This is for general problems.

Optimization problems have a special feature: the condition $|J| \neq 0$ is indeed satisfied.

This is because the system of equations that defines the implicit functions is the set of FOC. Then $J$ is simply the matrix of second partial derivatives of $L$, similar to the bordered Hessian.

In other words, we can take the maximum value function, or a set of equilibrium conditions, totally differentiate them, and find how the endogenous variables change with the exogenous ones in the neighbourhood of the solution.

For example, in the case of optimization with one equality constraint we get that

$$F^1(\lambda, x, y; b) = 0, \qquad F^2(\lambda, x, y; b) = 0, \qquad F^3(\lambda, x, y; b) = 0$$

is given by

$$
\begin{aligned}
b - g(x, y) &= 0 \\
f_x - \lambda g_x &= 0 \\
f_y - \lambda g_y &= 0
\end{aligned}
$$
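In our running illustration ($f = xy$, $g = x + y$) this system reads $b - x - y = 0$, $y - \lambda = 0$, $x - \lambda = 0$, with solution $x = y = \lambda = b/2$.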

We need to ensure that the Jacobian determinant is not zero; then we can use total differentiation.

In other words, we need to ensure that:

$$
|J| = \det \begin{pmatrix}
\dfrac{\partial F^1}{\partial \lambda} & \dfrac{\partial F^1}{\partial x} & \dfrac{\partial F^1}{\partial y} \\[6pt]
\dfrac{\partial F^2}{\partial \lambda} & \dfrac{\partial F^2}{\partial x} & \dfrac{\partial F^2}{\partial y} \\[6pt]
\dfrac{\partial F^3}{\partial \lambda} & \dfrac{\partial F^3}{\partial x} & \dfrac{\partial F^3}{\partial y}
\end{pmatrix} \neq 0.
$$

or:

$$
\begin{vmatrix}
0 & -g_x & -g_y \\
-g_x & f_{xx} - \lambda g_{xx} & f_{xy} - \lambda g_{xy} \\
-g_y & f_{xy} - \lambda g_{xy} & f_{yy} - \lambda g_{yy}
\end{vmatrix} \neq 0.
$$

But the determinant of $J$ is that of the bordered Hessian $B$.

Whenever the sufficient second order conditions are satisfied, we know that the determinant of the bordered Hessian is not zero (in fact, it is positive).

Now we can totally differentiate the equations:

$$
\begin{aligned}
g_x\,dx + g_y\,dy - db &= 0 \\
(f_{xx} - \lambda g_{xx})\,dx + (f_{xy} - \lambda g_{xy})\,dy - g_x\,d\lambda &= 0 \\
(f_{yx} - \lambda g_{yx})\,dx + (f_{yy} - \lambda g_{yy})\,dy - g_y\,d\lambda &= 0
\end{aligned}
$$

At the solution, we can then solve for $\dfrac{\partial x}{\partial b}$, $\dfrac{\partial y}{\partial b}$, $\dfrac{\partial \lambda}{\partial b}$.
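Continuing our running illustration ($f = xy$, $g = x + y$, so $x = y = \lambda = b/2$): the system becomes $dx + dy = db$, $dy - d\lambda = 0$ and $dx - d\lambda = 0$, hence

$$\frac{\partial x}{\partial b} = \frac{\partial y}{\partial b} = \frac{\partial \lambda}{\partial b} = \frac{1}{2},$$

consistent with the explicit solution $x^*(b) = y^*(b) = \lambda^*(b) = b/2$.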

