
Moral Hazard

EC202 Lectures XV & XVI

Francesco Nava

London School of Economics

February 2011



Summary

Hidden Action Problem, also known as:

1 Moral Hazard Problem
2 Principal-Agent Model

Outline

Simplified Model:
Complete Information Benchmark
Hidden Effort
Agency Cost

General Principal-Agent Model:
Complete Information Benchmark
Hidden Effort
Agency Cost
Outline: Moral Hazard Problem

The basic ingredients of a moral hazard model are as follows:

A principal and an agent are involved in a bilateral relationship
Principal wants Agent to perform some task
Agent can choose how much effort to devote to the task
The outcome of the task is pinned down by a mix of effort and luck
Principal cannot observe effort and can only motivate Agent by
paying him based on the outcome of the task

Timing:
1 Principal chooses a wage schedule which depends on the outcome
2 Agent chooses how much effort to devote to the task
3 Agent's effort and chance determine the outcome
4 Payments are made according to the proposed wage schedule
A simple Principal-Agent Model

Consider the following simplified model:

A task has two possible monetary outcomes: $\underline{q}$, $\bar{q}$ with $\underline{q} < \bar{q}$
Agent can choose one of two effort levels: $\{\underline{e}, \bar{e}\}$ with $\underline{e} < \bar{e}$
The probability of the high output given effort $e$ is:
$$\pi(e) = \Pr(q = \bar{q} \mid e)$$
Assume that $\pi(\underline{e}) < \pi(\bar{e})$, i.e. more effort $\Rightarrow$ better outcomes
Principal chooses a wage schedule $w$
Agent is risk averse and his preferences are:
$$U(w, e) = E[u(w, e)]$$
Principal is risk neutral and his preferences are:
$$V(w) = E[q - w]$$
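As a concrete illustration, a minimal Python sketch of these primitives follows. The specific numbers (outcomes 4 and 0, success probabilities 3/4 and 1/4) anticipate the worked example later in the lecture; the function names are placeholders:

```python
# Minimal sketch of the two-outcome moral hazard model.
# Parameter values anticipate the worked example later in the lecture.

q_hi, q_lo = 4.0, 0.0            # monetary outcomes, q_lo < q_hi
efforts = [0.0, 1.0]             # two effort levels, e_lo < e_hi

def pi(e):
    """Probability of the high outcome given effort (more effort => better odds)."""
    return 0.25 if e == 0.0 else 0.75

def expected_output(e):
    """E[q | e] = pi(e) * q_hi + (1 - pi(e)) * q_lo."""
    return pi(e) * q_hi + (1 - pi(e)) * q_lo

def expected_profit(e, w_hi, w_lo):
    """Risk-neutral Principal: V = E[q - w(q) | e]."""
    return pi(e) * (q_hi - w_hi) + (1 - pi(e)) * (q_lo - w_lo)

def expected_utility(e, w_hi, w_lo, u):
    """Agent: U(w, e) = E[u(w, e)] under an outcome-contingent wage."""
    return pi(e) * u(w_hi, e) + (1 - pi(e)) * u(w_lo, e)

print(expected_output(1.0))      # 3.0
```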
Simple Principal-Agent Model: Complete Info I

Begin by looking at the complete information benchmark:

Principal can observe the effort chosen by Agent
Principal picks a wage schedule $w(e)$ that depends on Agent's effort
Agent's reservation utility is $\bar{u}$, i.e. what he gets if he resigns
Thus the participation constraint of Agent is:
$$U(w(e), e) = u(w(e), e) \geq \bar{u}$$
By picking wages appropriately Principal de facto chooses $e$ and $w$
The problem of Principal thus is to:
$$\max_{e, w} \; E[q \mid e] - w(e) + \lambda [u(w(e), e) - \bar{u}]$$
Recall that $E[q \mid e] = \pi(e)\bar{q} + (1 - \pi(e))\underline{q}$
Simple Principal-Agent Model: Complete Info II

Recall the problem of Principal:
$$\max_{e, w} \; E[q \mid e] - w(e) + \lambda [u(w(e), e) - \bar{u}]$$

The lowest wage $\hat{w}(e)$ that induces effort $e$ from Agent is defined by:
$$u(\hat{w}(e), e) = \bar{u}$$
Thus Principal chooses to induce effort $e$ if and only if:
$$e \in \arg\max_{e' \in \{\underline{e}, \bar{e}\}} \; E[q \mid e'] - \hat{w}(e')$$
Principal then induces such effort choice by offering wages:
$$w(e') = \begin{cases} \hat{w}(e) & \text{if } e' = e \\ \hat{w}(e') - \varepsilon & \text{if } e' \neq e \end{cases}$$
Complete info implies that the FOC for the wage equates the marginal cost of providing utility to its shadow price:
$$1/u_w(w, e) = \lambda$$
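A quick numerical sketch of this logic, using the parameter values of the worked example later in these slides ($u(w, e) = w - e^2$, $\bar{u} = 1$, $\bar{q} = 4$, $\underline{q} = 0$); the root-finding approach is illustrative:

```python
# Complete information benchmark: find the cheapest wage that meets the
# participation constraint for each effort, then pick the profit-maximizing effort.
from scipy.optimize import brentq

q_hi, q_lo, u_bar = 4.0, 0.0, 1.0

def pi(e):                        # probability of the high outcome
    return 0.25 if e == 0.0 else 0.75

def u(w, e):                      # Agent's utility: u(w, e) = w - e^2
    return w - e**2

def w_hat(e):
    """Lowest wage with u(w_hat(e), e) = u_bar, i.e. the PC holds with equality."""
    return brentq(lambda w: u(w, e) - u_bar, 0.0, 100.0)

def profit(e):
    """E[q | e] - w_hat(e) for a risk-neutral Principal."""
    return pi(e) * q_hi + (1 - pi(e)) * q_lo - w_hat(e)

for e in (0.0, 1.0):
    print(e, w_hat(e), profit(e))  # w_hat(0) = 1, w_hat(1) = 2; profits 0 and 1
```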
Simple Principal-Agent Model: Incomplete Info I

Now consider the case in which effort is unobservable to Principal:

Suppose that Principal prefers Agent to exert high effort $\bar{e}$
Principal can only condition the wage $w(q)$ on the outcome $q$:
$$w(q) = \begin{cases} \bar{w} & \text{if } q = \bar{q} \\ \underline{w} & \text{if } q = \underline{q} \end{cases}$$
Agent's participation constraint at $\bar{e}$ requires:
$$U(w(q), \bar{e}) = \pi(\bar{e})u(\bar{w}, \bar{e}) + (1 - \pi(\bar{e}))u(\underline{w}, \bar{e}) \geq \bar{u} \quad \text{(PC($\bar{e}$))}$$
Agent's incentive constraint guarantees that he picks high effort:
$$U(w(q), \bar{e}) \geq U(w(q), \underline{e}) \quad \text{(IC)}$$



Simple Principal-Agent Model: Incomplete Info II

The problem of a principal who wants the agent to exert $\bar{e}$ is to:

Maximize his profits by choosing $w(q)$ subject to:
1 Agent's participation constraint at $\bar{e}$
  i.e. the agent prefers exerting high effort to resigning
2 Agent's incentive constraint
  i.e. the agent prefers exerting high effort to exerting low effort

Thus the Lagrangian of this problem is:
$$\max_{\bar{w}, \underline{w}} \; E[q - w(q) \mid \bar{e}] + \lambda[U(w(q), \bar{e}) - \bar{u}] + \mu[U(w(q), \bar{e}) - U(w(q), \underline{e})]$$

Recall that:
$$U(w(q), e) = \pi(e)u(\bar{w}, e) + (1 - \pi(e))u(\underline{w}, e)$$
$$E[q - w(q) \mid e] = \pi(e)[\bar{q} - \bar{w}] + (1 - \pi(e))[\underline{q} - \underline{w}]$$


Simple Principal-Agent Model: Incomplete Info III

Writing out the Lagrangian explicitly, the Principal's problem becomes:
$$\max_{\bar{w}, \underline{w}} \; \pi(\bar{e})[\bar{q} - \bar{w}] + (1 - \pi(\bar{e}))[\underline{q} - \underline{w}]$$
$$+ \lambda\left[\pi(\bar{e})u(\bar{w}, \bar{e}) + (1 - \pi(\bar{e}))u(\underline{w}, \bar{e}) - \bar{u}\right]$$
$$+ \mu\left[\pi(\bar{e})u(\bar{w}, \bar{e}) + (1 - \pi(\bar{e}))u(\underline{w}, \bar{e}) - \pi(\underline{e})u(\bar{w}, \underline{e}) - (1 - \pi(\underline{e}))u(\underline{w}, \underline{e})\right]$$

First order conditions for this problem are:
$$\pi(\bar{e})[-1 + \lambda u_w(\bar{w}, \bar{e}) + \mu u_w(\bar{w}, \bar{e})] - \pi(\underline{e})\mu u_w(\bar{w}, \underline{e}) = 0$$
$$(1 - \pi(\bar{e}))[-1 + \lambda u_w(\underline{w}, \bar{e}) + \mu u_w(\underline{w}, \bar{e})] - (1 - \pi(\underline{e}))\mu u_w(\underline{w}, \underline{e}) = 0$$

By rearranging it is possible to show that:
1 Both $\mu$ and $\lambda$ are positive if $u$ is increasing and concave
2 Incentive Constraint binds since $\mu > 0$
3 Participation constraint at high effort binds since $\lambda > 0$
4 Wages $\bar{w}, \underline{w}$ are found by solving the two binding constraints IC & PC
Simple Principal-Agent Model: Incomplete Info IV

For an explicit characterization let $u$ be additively separable in $w$ and $e$:
$$u(w, e) = \upsilon(w) + \eta(e)$$
First order conditions in this scenario become:
$$\pi(\bar{e})[-1 + \lambda \upsilon_w(\bar{w}) + \mu \upsilon_w(\bar{w})] - \pi(\underline{e})\mu \upsilon_w(\bar{w}) = 0$$
$$(1 - \pi(\bar{e}))[-1 + \lambda \upsilon_w(\underline{w}) + \mu \upsilon_w(\underline{w})] - (1 - \pi(\underline{e}))\mu \upsilon_w(\underline{w}) = 0$$
Solving, we find that $\lambda, \mu > 0$ (condition (L) parallels the complete info FOC $1/u_w = \lambda$, now holding in expectation):
$$\lambda = \frac{\pi(\bar{e})}{\upsilon_w(\bar{w})} + \frac{1 - \pi(\bar{e})}{\upsilon_w(\underline{w})} > 0 \quad \text{(L)}$$
$$\mu = \frac{\pi(\bar{e})(1 - \pi(\bar{e}))}{\pi(\bar{e}) - \pi(\underline{e})} \left[ \frac{1}{\upsilon_w(\bar{w})} - \frac{1}{\upsilon_w(\underline{w})} \right] > 0 \quad \text{(M)}$$
The two binding constraints IC & PC then pin down the wages:
$$\upsilon(\bar{w}) = \bar{u} + \frac{1 - \pi(\bar{e})}{\pi(\bar{e}) - \pi(\underline{e})}\, \eta(\underline{e}) - \frac{1 - \pi(\underline{e})}{\pi(\bar{e}) - \pi(\underline{e})}\, \eta(\bar{e})$$
$$\upsilon(\underline{w}) = \bar{u} - \frac{\pi(\bar{e})}{\pi(\bar{e}) - \pi(\underline{e})}\, \eta(\underline{e}) + \frac{\pi(\underline{e})}{\pi(\bar{e}) - \pi(\underline{e})}\, \eta(\bar{e})$$
Simple Principal-Agent Model: Example

Example: $e \in \{0, 1\}$, $u(w, e) = w - e^2$, $\bar{u} = 1$,
$\bar{q} = 4$, $\underline{q} = 0$, $\pi(1) = 3/4$, $\pi(0) = 1/4$

Complete Info: what are $w(0)$, $w(1)$, $e$?

Wages $w(1) = 2$ and $w(0) = 1$ are found from PC($e$):
$$w(1) - 1 = 1 \quad \text{and} \quad w(0) = 1$$
Optimal effort $e = 1$ is found by comparing profits:
$$(3/4)(4 - 2) + (1/4)(0 - 2) = 1 > (1/4)(4 - 1) + (3/4)(0 - 1) = 0$$

Incomplete Info: what are $\bar{w}$, $\underline{w}$, if the firm wants $e = 1$?

Wages $\bar{w} = 5/2$ and $\underline{w} = 1/2$ are found by solving PC(1) and IC:
$$(3/4)(\bar{w} - 1) + (1/4)(\underline{w} - 1) = 1$$
$$(3/4)(\bar{w} - 1) + (1/4)(\underline{w} - 1) = (1/4)\bar{w} + (3/4)\underline{w}$$
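A minimal check of these numbers in Python, solving the binding PC(1) and IC as a 2x2 linear system (the solver and layout are illustrative):

```python
# Verify the example: solve the binding PC(1) and IC for (w_hi, w_lo).
# PC(1): (3/4)(w_hi - 1) + (1/4)(w_lo - 1) = 1
# IC:    (3/4)(w_hi - 1) + (1/4)(w_lo - 1) = (1/4) w_hi + (3/4) w_lo
import numpy as np

# Rearranged as A @ [w_hi, w_lo] = b:
# PC(1): (3/4) w_hi + (1/4) w_lo = 2
# IC:    (1/2) w_hi - (1/2) w_lo = 1
A = np.array([[0.75, 0.25],
              [0.50, -0.50]])
b = np.array([2.0, 1.0])

w_hi, w_lo = np.linalg.solve(A, b)
print(w_hi, w_lo)                  # 2.5, 0.5

# The expected wage bill equals the complete info wage w(1) = 2 here,
# because this example's utility happens to be linear in w:
print(0.75 * w_hi + 0.25 * w_lo)   # 2.0
```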
Principal-Agent Model

Consider a general setup in which:

Agent chooses any effort level $e \in [0, 1]$
Agent's reservation utility is still $\bar{u}$
The state of the world $\omega$ lives in some interval $\Omega$
Output is produced according to a production function:
$$q = q(e, \omega)$$
Principal is risk neutral or risk averse and his preferences are:
$$V(w) = E[v(q - w)]$$
Agent is risk averse and his preferences are:
$$U(w, e) = E[u(w, e)]$$
Principal moves first and takes Agent's response as given
Principal-Agent Model: Complete Info I

Let's begin by analyzing the complete info model:

Principal can observe Agent's effort $e$ and output $q$...
... thus he can infer $\omega$ because he knows $q = q(e, \omega)$

Agent's participation constraint remains:
$$U(w, e) \geq \bar{u}$$

Principal can choose wages that depend on both $e$ and $\omega$...
... this is equivalent to Principal picking both $e$ and $w(\omega)$...
... since Principal could choose a wage schedule such that:
$$w(e, \omega) = \begin{cases} w(\omega) & \text{if } e \text{ is optimal for Principal} \\ w^0 \text{ such that } U(w^0, e) < \bar{u} & \text{otherwise} \end{cases}$$



Principal-Agent Model: Complete Info II

Thus the problem of Principal becomes:
$$\max_{e, w(\cdot)} \; E[v(q(e, \omega) - w(\omega))] + \lambda[E(u(w(\omega), e)) - \bar{u}]$$
First order conditions require that (writing $v_x$ for the derivative of $v$ with respect to its argument $x = q - w$):
$$E[v_x(q(e, \omega) - w(\omega))\, q_e(e, \omega)] + \lambda E(u_e(w(\omega), e)) = 0$$
$$-v_x(q(e, \omega) - w(\omega)) + \lambda u_w(w(\omega), e) = 0 \quad \text{for every } \omega$$
Combining the two equations one gets that:
$$E[v_x(q(e, \omega) - w(\omega))\, q_e(e, \omega)] = E\left[ v_x(q(e, \omega) - w(\omega)) \left( -\frac{u_e(w(\omega), e)}{u_w(w(\omega), e)} \right) \right]$$
If Principal is risk neutral this condition requires (efficiency):
$$MRT_{e,w} = E[q_e(e, \omega)] = E\left[ -\frac{u_e(w(\omega), e)}{u_w(w(\omega), e)} \right] = MRS_{e,w}$$
Solving FOC & PC yields the optimal effort $e$ and wage schedule $w(\omega)$
Principal-Agent Model: Incomplete Info I

Principal observes output $q$, but not effort $e$, and is thus unable to infer $\omega$:

Let $f(q \mid e)$ denote the probability density of output $q$ given effort $e$
Let $F(q \mid e)$ denote the cumulative distribution associated to $f(q \mid e)$
Assume that $f(q \mid e)$ satisfies:
1 Pdf of output has bounded support $[\underline{q}, \bar{q}]$
2 The support $[\underline{q}, \bar{q}]$ is publicly known
3 The support $[\underline{q}, \bar{q}]$ does not depend on $e$
4 If $e > e'$ then $F(q \mid e) < F(q \mid e')$ (first-order stochastic dominance)

Define the proportionate shift in output $\beta_e(q \mid e)$ by:
$$\beta_e(q \mid e) = f_e(q \mid e) / f(q \mid e)$$
Since $F(\bar{q} \mid e) = 1$ implies $F_e(\bar{q} \mid e) = 0$ we get that:
$$E[\beta_e(q \mid e)] = \int_{\underline{q}}^{\bar{q}} \beta_e(q \mid e) f(q \mid e)\, dq = F_e(\bar{q} \mid e) = 0$$
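A short numerical check of this zero-mean property, using an assumed parametric family $F(q \mid e) = q^{1+e}$ on $[0, 1]$ (an illustration, not a family used in the slides); it satisfies the assumptions above, since higher $e$ lowers $F$ pointwise:

```python
# Check E[beta_e(q|e)] = 0 for the illustrative family F(q|e) = q^(1+e) on [0, 1].
import numpy as np
from scipy.integrate import quad

def f(q, e):                      # density: f(q|e) = (1+e) q^e
    return (1 + e) * q**e

def beta_e(q, e):                 # proportionate shift: f_e / f = 1/(1+e) + ln q
    return 1.0 / (1 + e) + np.log(q)

e = 0.5
mean_beta, _ = quad(lambda q: beta_e(q, e) * f(q, e), 0.0, 1.0)
print(mean_beta)                  # ~0, as implied by F_e(q_bar | e) = 0

# beta_e is increasing in q: negative for low output, positive for high output
print(beta_e(0.1, e), beta_e(0.9, e))   # < 0 and > 0 respectively
```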



Principal-Agent Model: Incomplete Info II

Suppose that Principal offers wage schedule $w(q)$

If Agent participates, he chooses effort to maximize his wellbeing:
$$\max_{e \in [0,1]} U(w(q), e) = \max_{e \in [0,1]} \int_{\underline{q}}^{\bar{q}} u(w(q), e) f(q \mid e)\, dq$$

First order condition requires:
$$\int_{\underline{q}}^{\bar{q}} [u_e(w(q), e) f(q \mid e) + u(w(q), e) f_e(q \mid e)]\, dq = 0$$
$$\Rightarrow \; E[u_e(w(q), e)] + E[u(w(q), e) \beta_e(q \mid e)] = 0$$

The reduction in wellbeing due to extra effort is exactly compensated ...
... by the increase in expected income due to higher effort

Principal takes Agent's FOC as a constraint on his program
[as was the case with IC in the simplified model]
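To illustrate, the sketch below solves this first order condition numerically for an assumed specification: linear wage schedule $w(q) = q/2$, utility $u(w, e) = 2\sqrt{w} - e^2$, and the same illustrative output family $F(q \mid e) = q^{1+e}$. All of these are placeholders, not the slides' model:

```python
# Agent's effort choice: solve E[u_e] + E[u(w(q), e) beta_e(q|e)] = 0 numerically.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def f(q, e):      return (1 + e) * q**e            # density of output
def beta_e(q, e): return 1.0 / (1 + e) + np.log(q) # proportionate shift f_e / f
def w(q):         return q / 2                     # wage schedule offered by Principal
def u(wage, e):   return 2 * np.sqrt(wage) - e**2  # Agent's separable utility
def u_e(wage, e): return -2 * e                    # marginal disutility of effort

def agent_foc(e):
    """E[u_e(w(q), e)] + E[u(w(q), e) * beta_e(q|e)] evaluated by quadrature."""
    Eu_e, _ = quad(lambda q: u_e(w(q), e) * f(q, e), 0, 1)
    Eub, _ = quad(lambda q: u(w(q), e) * beta_e(q, e) * f(q, e), 0, 1)
    return Eu_e + Eub

e_star = brentq(agent_foc, 1e-6, 1.0)   # interior optimum where the FOC holds
print(e_star)                            # ~0.13 under these placeholder choices
```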
Principal-Agent Model: Incomplete Info III

Principal chooses the wages to maximize his wellbeing subject to:
1 Agent's participation constraint (PC)
2 Agent's first order condition (FOC)
In particular the problem of Principal is:
$$\max_{w(q), e} \; V(w(q)) + \lambda[U(w(q), e) - \bar{u}] + \mu[\partial U(w(q), e)/\partial e] =$$
$$\max_{w(q), e} \; E[v(q - w(q))] + \lambda[E[u(w(q), e)] - \bar{u}] + \mu\left[E[u_e(w(q), e)] + E[u(w(q), e)\beta_e(q \mid e)]\right]$$
For convenience assume that Agent's utility satisfies $u_{we} = 0$
First order conditions require:
$$-v_x(q - w(q)) + \lambda u_w(w(q), e) + \mu u_w(w(q), e)\beta_e(q \mid e) = 0 \quad \text{[w(q)]}$$
$$E[v(q - w(q))\beta_e(q \mid e)] + \mu\, \frac{\partial^2 E[u(w(q), e)]}{\partial e^2} = 0 \quad \text{[e]}$$
Principal-Agent Model: Incomplete Info IV

By rearranging terms the FOC become:
$$\frac{v_x(q - w(q))}{u_w(w(q), e)} = \lambda + \mu \beta_e(q \mid e) \quad \text{for any } q$$
$$E[v(q - w(q))\beta_e(q \mid e)] + \mu\, \frac{\partial^2 E[u(w(q), e)]}{\partial e^2} = 0$$

The second condition implies $\mu > 0$ since:
1 $E[\beta_e(q \mid e)] = 0$ & $v_x > 0$ imply that $E[v(q - w(q))\beta_e(q \mid e)] > 0$
2 Agent's second order conditions imply $\partial^2 E[u(w(q), e)]/\partial e^2 < 0$

Therefore Agent's FOC constraint binds

The first condition requires that Principal pays Agent:
1 More than with complete info if $q$ is high, since $\beta_e(q \mid e) > 0$
2 Less than with complete info if $q$ is low, since $\beta_e(q \mid e) < 0$
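The rearranged condition is easy to visualize numerically. Assuming a risk-neutral Principal ($v_x = 1$), utility $u(w, e) = 2\sqrt{w} - e^2$ (so $1/u_w = \sqrt{w}$), illustrative multipliers $\lambda, \mu$, and the same placeholder output family for $\beta_e$, the FOC can be inverted for $w(q)$ pointwise:

```python
# Invert the FOC v_x / u_w = lambda + mu * beta_e(q|e) for the wage schedule.
# With v_x = 1 and u(w, e) = 2*sqrt(w) - e^2 we have 1/u_w = sqrt(w), hence
# w(q) = (lambda + mu * beta_e(q|e))^2.  lambda, mu, and the family are placeholders.
import numpy as np

lam, mu, e = 2.0, 0.5, 0.5

def beta_e(q, e):                 # proportionate shift for F(q|e) = q^(1+e)
    return 1.0 / (1 + e) + np.log(q)

q_grid = np.linspace(0.05, 1.0, 6)
w_grid = (lam + mu * beta_e(q_grid, e))**2

for q, wage in zip(q_grid, w_grid):
    print(f"q = {q:.2f}  w(q) = {wage:.3f}")
# w(q) is increasing in q: low output is punished and high output rewarded,
# so Agent bears some risk and full insurance is given up to provide incentives.
```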
Principal-Agent Model: Incomplete Info V

Main Conclusions with Moral Hazard:

Compared to complete info, Principal:
1 pays Agent more when output is high
2 pays Agent less when output is low
3 no longer provides full insurance to Agent against the variable output

He does so to provide incentives for Agent to exert effort...
... since a fully insured Agent would have no motive to exert effort

These conclusions rely on the information problem of Principal and ...
... would hold even if Principal were risk neutral

They always hold so long as Agent is risk averse
