Perturbation and Projection 
Methods for Solving DSGE Models
Lawrence J. Christiano

Discussion of projections taken from Christiano‐Fisher, ‘Algorithms for Solving Dynamic Models with Occasionally 
Binding Constraints’, 2000, Journal of Economic Dynamics and Control.
Discussion of perturbations taken from Judd’s textbook.
Outline
• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Use of Projections and Perturbations.
• Neoclassical model.
– Projection methods
– Perturbation methods
• Make sense of the proposition, ‘to a first order approximation, can replace equilibrium conditions with linear expansion about nonstochastic steady state and solve the resulting system using certainty equivalence’.
(Note: at one level this is sort of trivial; at another level, very powerful and important.)
Simple Example
• Suppose that x is some exogenous variable and that the following equation implicitly defines y:
h(x, y) = 0, for all x ∈ X
• Let the solution be defined by the ‘policy rule’, g:
y = g(x)
‘Error function’
• satisfying
R(x; g) ≡ h(x, g(x)) = 0
• for all x ∈ X
Let f(d, x) be the payoff if decision d is taken and x occurs. Suppose d must be chosen before x is observed.

Example: x = 1 if people who recover are immune, x = 0 if not. The belief now is that E[x] = p, i.e., P[x = 1] = p.

Decision taking under uncertainty:
① choose d
② a value of x occurs
③ the payoff f(d, x) is realized.

If you chose the d that is optimal given x = 1, then you suffer extreme regret ex post if x = 0. The question for economics is how to choose d under these circumstances. Answer:

max_d E f(d, x)

Problem for economists: computing E f(d, x) is sometimes A PAIN. Economists and engineers use certainty equivalence as an approximation: replace the original problem with

max_d f(d, E x)

The perturbation method shows (proves) that certainty equivalence is correct if the VARIANCE of x is small enough. (In the current covid environment, things are most likely going away; the problem is a small, small chance that something we have never seen is growing.)
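The certainty-equivalence logic can be checked numerically. Below is a minimal sketch with an assumed payoff f(d, x) = −(d − x)² − 0.5(d − x)⁴ (my illustrative choice, not from the notes) and a two-point distribution for x whose mean stays at p while its spread shrinks:

```python
import numpy as np

# Illustrative payoff (assumed, not from the notes). The quartic term makes
# the problem non-quadratic, so certainty equivalence is only approximate.
def f(d, x):
    return -(d - x) ** 2 - 0.5 * (d - x) ** 4

grid = np.linspace(-1.0, 2.0, 3001)   # candidate decisions d
p = 0.6                               # belief: P[x near 1] = p
gaps = []
for scale in [1.0, 0.1, 0.01]:        # shrink the spread of x, keeping E[x] = p
    xs = np.array([p + scale * (1 - p), p - scale * p])
    probs = np.array([p, 1 - p])
    expected = np.array([probs @ f(d, xs) for d in grid])
    d_exact = grid[expected.argmax()]                 # solves max_d E f(d, x)
    d_ce = grid[np.argmax([f(d, p) for d in grid])]   # solves max_d f(d, E x)
    gaps.append(abs(d_exact - d_ce))
print(gaps)   # the gap between the two decisions shrinks with the variance
```

As the variance of x goes to zero, the certainty-equivalent decision converges to the exact one, which is the sense in which the perturbation argument validates certainty equivalence.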

To motivate the example: a model in which people work and consume. Each of the (identical) people solves

max U(c, l)

choosing consumption c and labor l, taking the wage w as given, with budget c = wl + other income. Competitive firms hire labor, so the wage equals the marginal product of labor:

w = F_l.

At the margin, w is the benefit of an extra unit of work, and the cost of that extra work, converted into consumption goods units, is −U_l/U_c. At the household’s optimum the two are equated:

w = −U_l / U_c,

the ‘cost of working in consumption units’.
Slightly more formal: what is the cost, in consumption units, to a person of dl > 0? That is, what dc > 0 is required so that dU = 0? Total differentiation:

dU = U_c dc + U_l dl = 0

⇒ dc = −(U_l / U_c) dl,

so −U_l/U_c is the cost of working, in consumption units, per unit of extra work. So far, this is just the economics relevant to the later discussion.
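The total-differentiation step can be checked numerically. A sketch with an assumed utility function U(c, l) = log(c) − l²/2 (my choice for illustration, not from the notes):

```python
import numpy as np

def U(c, l):
    return np.log(c) - 0.5 * l ** 2   # illustrative utility (assumed form)

c0, l0, eps = 2.0, 0.5, 1e-6
# Marginal utilities by central finite differences:
Uc = (U(c0 + eps, l0) - U(c0 - eps, l0)) / (2 * eps)
Ul = (U(c0, l0 + eps) - U(c0, l0 - eps)) / (2 * eps)
dl = 1e-4
dc = -(Ul / Uc) * dl   # compensation implied by U_c dc + U_l dl = 0
# Utility is unchanged (to second order) after the compensated move:
print(U(c0 + dc, l0 + dl) - U(c0, l0))
```

For this particular utility the compensation works out to dc = c·l·dl, i.e., the cost of working in consumption units is c·l per unit of extra work.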

For present purposes I want an economic illustration of

h(x, y) = 0.

Think of h as an equilibrium condition in a labor market model of this kind: because consumption can simply be substituted out, equilibrium reduces to a single equation linking an exogenous variable x and an endogenous variable y.

The Need to Approximate
• Finding the policy rule, g, is a big problem outside special cases
– ‘Infinite number of unknowns (i.e., one value of g for each possible x) in an infinite number of equations (i.e., one equation for each possible x).’
• Two approaches:
– projection and perturbation
(Unknowns: g(x) for all x ∈ X; equations: h(x, g(x)) = 0 for all x ∈ X.)
Projection
• Find a parametric function, ĝ(x; θ), where θ is a vector of parameters chosen so that it imitates the property of the exact solution, i.e., R(x; g) = 0 for all x ∈ X, as well as possible.
• Choose values for θ so that
R̂(x; θ) ≡ h(x, ĝ(x; θ))
• is close to zero for x ∈ X.
• The method is defined by how ‘close to zero’ is defined and by the parametric function, ĝ(x; θ), that is used.
Projection, continued
• Spectral and finite element approximations
– Spectral functions: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) for all x ∈ X. Example:
ĝ(x; θ) = ∑_{i=0}^{n} θ_i H_i(x), θ = (θ_0, ..., θ_n)
H_i(x) = x^i : ordinary polynomial (not computationally efficient)
H_i(x) = T_i(φ(x)),
T_i : [−1, 1] → [−1, 1], ith order Chebyshev polynomial (after the Russian mathematician)
φ : X → [−1, 1]
(In a spectral function, each parameter influences ĝ globally: change θ_3 and ĝ changes everywhere. That is sometimes not convenient, and a spectral approximation may need a huge n.)
Projection, continued
– Finite element approximations: functions, ĝ(x; θ), in which each parameter in θ influences ĝ(x; θ) over only a subinterval of x ∈ X.
(E.g., split X into subintervals x_1 < x_2 < ... < x_7 and make ĝ piecewise linear, so that θ_i matters only on the pieces touching x_i. Because each parameter acts locally, you can maybe get away with a small number of parameters on each piece.)
Projection, continued
• ‘Close to zero’: two methods
• Collocation: for n values of x : x_1, x_2, ..., x_n ∈ X, choose the n elements of θ : θ_1, ..., θ_n so that
R̂(x_i; θ) ≡ h(x_i, ĝ(x_i; θ)) = 0, i = 1, ..., n
– how you choose the grid of x’s matters…
• Weighted Residual: for m > n values of x : x_1, x_2, ..., x_m ∈ X, choose the n θ_i’s so that
∑_{j=1}^{m} w_ij h(x_j, ĝ(x_j; θ)) = 0, i = 1, ..., n

We want ĝ because we don’t know g. What we do know is what g satisfies: for all x ∈ X, h(x, g(x)) = 0, i.e., R(x; g) = 0. We also know what h is [see the earlier example of an economic model, where h is constructed from derivatives of U(c, l)]. In general there is no hope of determining g exactly (only in a small number of examples can we), but we can get pretty close: we approximate, using ĝ, looking for a ĝ such that R(x; ĝ) ≈ 0 for all x ∈ X.


Example (spectral, collocation): pick

ĝ(x) = θ_0 + θ_1 x,

with θ_0, θ_1 to be chosen to make R̂(x; θ) ≈ 0. The natural method is collocation: pick two grid points x_1, x_2 and impose

R̂(x_1; θ_0, θ_1) = 0
R̂(x_2; θ_0, θ_1) = 0,

then find the θ_0, θ_1 that solve the above equations. A natural way to pick the grid points: equidistant.

Example of Importance of Grid Points
• Here is an example, taken from a related problem, the problem of interpolation.
– You get to evaluate a function on a set of grid points that you select, and you must guess the shape of the function between the grid points.
• Consider the (Runge) function,
f(k) = 1/(1 + k²), k ∈ [−5, 5]
• Next slide shows what happens when you select 11 equally‐spaced grid points and interpolate by fitting a 10th order polynomial.
– As you increase the number of grid points on a fixed interval grid, oscillations in tails grow more and more violent.
• Chebyshev approximation theorem: distribute more points in the tails (by selecting zeros of Chebyshev polynomial) and get convergence in sup norm.
Interpolation: a situation where I know g. That makes it different from what we have now. It is similar because (suppose) it is computationally very expensive to evaluate g at any point, so we try to approximate it by a function that is fast to evaluate. Evaluate the function at points x_1, x_2, ..., x_11 and fit

f(x) ≈ a_0 + a_1 x + ... + a_10 x^10,

with 11 parameters a = (a_0, ..., a_10). Pick values for a_0, ..., a_10 so that the fit is exact at each grid point:

f(x_1) − (a_0 + a_1 x_1 + ... + a_10 x_1^10) = 0
⋮
f(x_11) − (a_0 + a_1 x_11 + ... + a_10 x_11^10) = 0.

To find the a’s, stack these as Y = Xa, a ‘regression’ with as many parameters as observations:

a = (X′X)^{−1} X′Y = X^{−1} Y,

so the residual is identically zero: the polynomial passes through every grid point.
How You Select the Grid Points Matters!
[Figure: polynomial interpolation of the Runge function; the choice of grid points determines the quality of the approximation.]
Figure from Christiano‐Fisher, JEDC, 1990
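The experiment behind the figure is easy to reproduce. A sketch using NumPy's polynomial fitting, comparing 11 equally-spaced nodes with the 11 Chebyshev zeros mapped onto [−5, 5]:

```python
import numpy as np

def runge(k):
    return 1.0 / (1.0 + k ** 2)

def sup_error(nodes):
    # Degree-10 interpolating polynomial through the 11 nodes (a = X^{-1} Y),
    # then the sup-norm error of the fit over [-5, 5].
    coeffs = np.polyfit(nodes, runge(nodes), deg=len(nodes) - 1)
    xs = np.linspace(-5.0, 5.0, 2001)
    return np.abs(np.polyval(coeffs, xs) - runge(xs)).max()

n = 11
uniform_err = sup_error(np.linspace(-5.0, 5.0, n))
# Chebyshev nodes: zeros of T_n on [-1, 1], scaled onto [-5, 5]
cheb_nodes = 5.0 * np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
cheb_err = sup_error(cheb_nodes)
print(uniform_err, cheb_err)   # uniform error is an order of magnitude larger
```

With equally-spaced nodes the sup-norm error is large (the tail oscillations of the figure); with Chebyshev nodes it is an order of magnitude smaller.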
[Figure: T_2, the 2nd order Chebyshev polynomial (the ith basis function for i = 2), with zeros at ±0.71.]
Chebyshev interpolation, as in the figure before:
① find the zeros of the 11th order Chebyshev polynomial;
② map these zeros from [−1, 1] onto the interval [−5, 5];
③ interpolate the function at those points using the Chebyshev basis functions.
Notation: for f : X → R and f_n : X → R,

|f − f_n| = { |f(x) − f_n(x)| : x ∈ X },

the set of all differences. Then

max_{x ∈ X} |f(x) − f_n(x)|

is the biggest error made in using f_n to approximate f, and

lim_{n→∞} max_{x ∈ X} |f(x) − f_n(x)| = 0

is ‘uniform convergence’ (convergence in the sup norm).
Integration: ∫_X f(x) dx. In many cases this can be done analytically, e.g.,

∫_a^b x² dx = (b³ − a³)/3,

or by symbolic algebra. In economics, though, the required integration is mostly not analytically tractable (Bayesian econometrics: constantly have to integrate functions that are mostly not tractable analytically), so approximate it. If f were a polynomial, ∫_a^b f dx would be easy. So let f_n → f be the nth order Chebyshev interpolation of f and use

∫ f(x) dx ≈ ∫ f_n(x) dx.

This is quadrature integration. Chebyshev is one method; there are many. A natural alternative is the Riemann definition of the integral itself: sum f over a fine grid.
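As a check on the quadrature idea, here is a sketch against the analytically known ∫ x² dx. The rule used is Gauss-Legendre simply because it is built into NumPy; Chebyshev-based rules work the same way:

```python
import numpy as np

a, b = 1.0, 3.0
exact = (b ** 3 - a ** 3) / 3.0       # the analytic answer

# 5-point Gauss-Legendre rule: nodes and weights on [-1, 1], mapped to [a, b]
nodes, weights = np.polynomial.legendre.leggauss(5)
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
quad = 0.5 * (b - a) * (weights @ x ** 2)

# Riemann-style midpoint sum over a fine grid, for comparison
m = 100
riemann = sum(((a + (i + 0.5) * (b - a) / m) ** 2) * (b - a) / m for i in range(m))
print(exact, quad, riemann)   # quadrature is exact here: the integrand is a polynomial
```

Because the integrand is a low-order polynomial, the 5-point rule recovers the integral to machine precision, while the 100-point Riemann sum is merely close.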


Projection, continued
• ‘Close to zero’: two methods
• Collocation: for n values of x : x_1, x_2, ..., x_n ∈ X, choose the n elements of θ : θ_1, ..., θ_n so that
R̂(x_i; θ) ≡ h(x_i, ĝ(x_i; θ)) = 0, i = 1, ..., n
– how you choose the grid of x’s matters…
• Weighted Residual (e.g., Galerkin): for m > n values of x : x_1, x_2, ..., x_m ∈ X, choose the n θ_i’s so that
∑_{j=1}^{m} w_ij h(x_j, ĝ(x_j; θ)) = 0, i = 1, ..., n
(Weighted residual methods evaluate the residual at x’s beyond the n grid points used in collocation.)

Perturbation
• Projection uses the ‘global’ behavior of the functional equation to approximate the solution.
– Problem: requires finding zeros of non‐linear equations. Iterative methods for doing this are a pain.
– Advantage: can easily adapt to situations where the policy rule is not continuous or simply non‐differentiable (e.g., occasionally binding zero lower bound on the interest rate).
• Perturbation method uses the Taylor series expansion (computed using the implicit function theorem) to approximate the model solution.
– Advantage: can implement the procedure using non‐iterative methods.
– Possible disadvantages:
• Global properties of the Taylor series expansion are not necessarily very good.
• Does not work when there are important non‐differentiabilities (e.g., occasionally binding zero lower bound on the interest rate).
Taylor Series Expansion
• Let f : R → R be k+1 times differentiable on the open interval and continuous on the closed interval between a and x.
– Then,
f(x) = P_k(x) + R_k(x)
– where P_k(x) is the kth order Taylor series expansion about x = a:
P_k(x) = f(a) + f′(a)(x − a) + (1/2!) f″(a)(x − a)² + ... + (1/k!) f^(k)(a)(x − a)^k
– and R_k(x) is the remainder:
R_k(x) = (1/(k+1)!) f^(k+1)(ξ)(x − a)^(k+1), for some ξ between x and a.
– Question: is the Taylor series expansion a good approximation for f?
(You might think f(x) − P_k(x) → 0 as k → ∞. Nope!)
Taylor Series Expansion
• It’s not as good as you might have thought.

• The next slide exhibits the accuracy of the 
Taylor series approximation to the Runge
function.
– In a small neighborhood of the point where the 
approximation is computed (i.e., 0), higher order 
Taylor series approximations are increasingly 
accurate.
– Outside that small neighborhood, the quality of 
the approximation deteriorates with higher order 
approximations.
Taylor Series Expansions about 0 of Runge Function
[Figure: 5th, 10th, and 25th order Taylor series expansions of 1/(1 + x²) on [−1.5, 1.5]. Increasing order of approximation leads to improved accuracy in a neighborhood of zero and reduced accuracy further away.]
Taylor Series Expansion
• Another example: the log function
– It is often used in economics. 
– Surprisingly, Taylor series expansion does not provide 
a great global approximation.

• Approximate log(x) by its kth order Taylor series approximation at the point, x = a:
log(x) ≈ log(a) + ∑_{i=1}^{k} (−1)^(i+1) (1/i) ((x − a)/a)^i
– This expression diverges as k → ∞ for x such that |x − a|/a ≥ 1
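The divergence is easy to see numerically. A sketch of the kth order series about a = 1, evaluated inside and outside the region |x − a|/a < 1:

```python
import math

def taylor_log(x, k, a=1.0):
    # kth order Taylor series of log(x) about a
    return math.log(a) + sum((-1) ** (i + 1) * ((x - a) / a) ** i / i
                             for i in range(1, k + 1))

for k in [2, 5, 10, 25]:
    inside = abs(taylor_log(1.5, k) - math.log(1.5))    # |x-a|/a = 0.5 < 1
    outside = abs(taylor_log(2.5, k) - math.log(2.5))   # |x-a|/a = 1.5 >= 1
    print(k, inside, outside)   # inside improves with k; outside blows up
```

At x = 1.5 the error shrinks rapidly with the order; at x = 2.5 it grows without bound, exactly the pattern in the next figure.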
Taylor Series Expansion of Log Function About x=1
[Figure: 2nd, 5th, 10th, and 25th order approximations versus log(x) on [1, 3]. Taylor series approximation deteriorates for x > 2 as order of approximation increases.]
Taylor Series Expansion
• In general, cannot expect Taylor series 
expansion to converge to actual function, 
globally. 

– There are some exceptions, e.g., Taylor’s series expansion of f(x) = e^x, cos(x), sin(x) about x=0 converges to f(x) even for x far from 0.

– Problem: in general it is difficult to say for what 
values of x the Taylor series expansion gives a 
good approximation.            
Taylor Versus Weierstrass
• Problems with the Taylor series expansion do not represent a problem with polynomials per se as approximating functions.

• Weierstrass approximation theorem: for every continuous function, f(x), defined on [a,b], and for every ε > 0, there exists a finite‐ordered polynomial, p(x), on [a,b] such that

|f(x) − p(x)| < ε, for all x ∈ [a, b]

• Weierstrass – polynomials may approximate well, even if 
sometimes the Taylor series expansion is not very good.

• Weierstrass theorem asserts that there exists some sequence of 


polynomials the limit of which approximates a function well. 
– We used the Runge function to illustrate two sequences that do not 
approximate well: interpolation with fixed interval weights and Taylor 
series expansion about zero.

– Interpolation with Chebyshev zeroes gives an excellent approximation. 
• Ok, we’re done with the digression on the 
Taylor series expansion.

• Now, back to the discussion of the 
perturbation method.
– It approximates a solution using the Taylor series 
expansion.
Perturbation Method
• Suppose there is a point, x* ∈ X, where we know the value taken on by the function, g, that we wish to approximate:
g(x*) = g*, some x*
• Use the implicit function theorem to approximate g in a neighborhood of x*
• Note:
R(x; g) = 0 for all x ∈ X
implies
R^(j)(x; g) ≡ (d^j/dx^j) R(x; g) = 0 for all j, all x ∈ X.
Perturbation, cnt’d
• Differentiate R with respect to x and evaluate the result at x*:
R^(1)(x*) = (d/dx) h(x, g(x))|_{x=x*} = h_1(x*, g*) + h_2(x*, g*) g′(x*) = 0
→ g′(x*) = − h_1(x*, g*) / h_2(x*, g*)
• Do it again!
R^(2)(x*) = (d²/dx²) h(x, g(x))|_{x=x*}
= h_11(x*, g*) + 2h_12(x*, g*) g′(x*) + h_22(x*, g*)[g′(x*)]² + h_2(x*, g*) g″(x*) = 0
→ Solve this linearly for g″(x*).


Perturbation, cnt’d
• Preceding calculations deliver (assuming enough differentiability, appropriate invertibility, a high tolerance for painful notation!), recursively:
g′(x*), g″(x*), ..., g^(n)(x*)
• Then, have the following Taylor’s series approximation:
g(x) ≈ ĝ(x)
ĝ(x) = g* + g′(x*)(x − x*) + (1/2) g″(x*)(x − x*)² + ... + (1/n!) g^(n)(x*)(x − x*)^n
Perturbation, cnt’d
• Check….
• Study the graph of
R(x; ĝ)
– over x ∈ X to verify that it is everywhere close to zero (or, at least in the region of interest).
Example: a Circle
• Function:
h(x, y) = x² + y² − 4 = 0.
• For each x except x = −2, 2, there are two distinct y that solve h(x, y) = 0:
y = g_1(x) ≡ √(4 − x²), y = g_2(x) ≡ −√(4 − x²).
• The perturbation method does not require that the function g that solves h(x, g(x)) = 0 be unique.
– When you specify the value of the function, g, at the point of the expansion, you select the function whose Taylor series expansion is delivered by the perturbation method.
Example of Implicit Function Theorem
h(x, y) = x² + y² − 4 = 0.
g(x) ≃ g* − (x*/g*)(x − x*)
[Figure: the circle of radius 2 centered at the origin, crossing the axes at ±2, with the tangent line at (x*, g*).]
g′(x*) = − h_1(x*, g*)/h_2(x*, g*) = − x*/g*;  h_2 had better not be zero!
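For the circle, the perturbation coefficients can be computed and checked directly: h_1 = 2x, h_2 = 2y, h_11 = h_22 = 2, h_12 = 0, so g′ and g″ follow from the formulas two slides back. A sketch at the (assumed) expansion point x* = 1 on the upper branch:

```python
import numpy as np

xstar = 1.0
gstar = np.sqrt(4.0 - xstar ** 2)             # g* on the upper branch, sqrt(3)
g1 = -xstar / gstar                           # g'(x*) = -h1/h2 = -x*/g*
g2 = -(2.0 + 2.0 * g1 ** 2) / (2.0 * gstar)   # from h11 + 2*h12*g' + h22*g'**2 + h2*g'' = 0

def ghat(x):
    # second order Taylor approximation of g around x*
    return gstar + g1 * (x - xstar) + 0.5 * g2 * (x - xstar) ** 2

xs = np.linspace(0.5, 1.5, 11)
err = np.abs(ghat(xs) - np.sqrt(4.0 - xs ** 2)).max()
print(g1, g2, err)   # the approximation error stays small near x* = 1
```

Starting from g* = −√3 instead would deliver the Taylor expansion of the lower branch, which is the sense in which specifying g* selects the solution the perturbation tracks.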
Outline
• A Toy Example to Illustrate the basic ideas.
– Functional form characterization of model solution.
– Projections and Perturbations.

• Neoclassical model. Done!
– Projection methods
– Perturbation methods

• Stochastic Simulations and Impulse Responses
– Focus on perturbation solutions of order two.
– The need for pruning.
Neoclassical Growth Model
• Objective:
E_0 ∑_{t=0}^{∞} β^t u(c_t), u(c_t) = (c_t^(1−γ) − 1)/(1 − γ)
• Constraints:
c_t + exp(k_{t+1}) ≤ f(k_t, a_t), t = 0, 1, 2, ....
a_t = ρ a_{t−1} + ε_t, ε_t ~ iid, E ε_t = 0, E ε_t² = V
f(k_t, a_t) = exp(α k_t + a_t) + (1 − δ) exp(k_t)
(k_t denotes the log of the capital stock.)


Efficiency Condition
E_t { u′(c_t) − β u′(c_{t+1}) f_K(k_{t+1}, a_{t+1}) } = 0,
where
c_t = f(k_t, a_t) − exp(k_{t+1}), c_{t+1} = f(k_{t+1}, a_{t+1}) − exp(k_{t+2}),
and f_K(k_{t+1}, a_{t+1}) is the period t+1 marginal product of capital.
• Here,
k_t, a_t ~ given numbers; ε_{t+1} ~ random variable; k_{t+1} ~ time t choice variable.
• Parameter, σ, indexes a set of models, with the model of interest corresponding to σ = 1.
Solution
• A policy rule,
k_{t+1} = g(k_t, a_t, σ).
• With the property:
R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
− β u′( f(g(k_t, a_t, σ), ρa_t + σε_{t+1}) − exp(g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ)) )
× f_K(g(k_t, a_t, σ), ρa_t + σε_{t+1}) } = 0,
• for all a_t, k_t, σ.
Projection Methods
• Let ĝ(k_t, a_t, σ; θ)
– be a function with finite parameters (could be either spectral or finite element, as before).
• Choose parameters, θ, to make
R(k_t, a_t, σ; ĝ)
– as close to zero as possible, over a range of values of the state.
– use weighted residuals or Collocation.
Occasionally Binding Constraints
• Suppose we add the non‐negativity constraint on investment:
exp(g(k_t, a_t, σ)) − (1 − δ) exp(k_t) ≥ 0
• Express the problem in Lagrangian form; the optimum is characterized in terms of equality conditions with a multiplier and with a complementary slackness condition associated with the constraint.
• Conceptually straightforward to apply the preceding method. For details, see Christiano‐Fisher, ‘Algorithms for Solving Dynamic Models with Occasionally Binding Constraints’, 2000, Journal of Economic Dynamics and Control.
– This paper describes alternative strategies, based on parameterizing the expectation function, that may be easier when constraints are occasionally binding.
Perturbation Approach
• Straightforward application of the perturbation approach, as in the simple example, requires knowing the value taken on by the policy rule at a point.
• The overwhelming majority of models used in macro do have this property.
– In these models, can compute the non‐stochastic steady state without any knowledge of the policy rule, g.
– Non‐stochastic steady state is k* such that
k* = g(k*, 0, 0)
(second argument: a = 0 in the nonstochastic steady state; third argument: σ = 0, no uncertainty)
– and
k* = (1/(1 − α)) log( αβ / (1 − β(1 − δ)) ).
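As a quick check, the steady-state formula can be evaluated at the Prescott (1986) parameter values used later in the deck:

```python
import math

beta, alpha, delta = 0.99, 0.36, 0.02
# k* = (1/(1 - alpha)) * log( alpha*beta / (1 - beta*(1 - delta)) )
kstar = math.log(alpha * beta / (1.0 - beta * (1.0 - delta))) / (1.0 - alpha)
print(round(kstar, 2))   # 3.88
```

The value 3.88 is the constant term that appears in the numerical second order approximation later in the deck.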
Perturbation
• Error function:
R(k_t, a_t, σ; g) ≡ E_t { u′( f(k_t, a_t) − exp(g(k_t, a_t, σ)) )
− β u′( f(g(k_t, a_t, σ), ρa_t + σε_{t+1}) − exp(g(g(k_t, a_t, σ), ρa_t + σε_{t+1}, σ)) )
× f_K(g(k_t, a_t, σ), ρa_t + σε_{t+1}) } = 0
– for all values of k_t, a_t, σ.
• So, all order derivatives of R with respect to its arguments are zero (assuming they exist!).
Four (Easy to Show) Results About Perturbations
• Taylor series expansion of policy rule:
g(k_t, a_t, σ) ≃ k + g_k(k_t − k) + g_a a_t + g_σ σ   [linear component of policy rule]
+ (1/2)[ g_kk(k_t − k)² + g_aa a_t² + g_σσ σ² ] + g_ka(k_t − k)a_t + g_kσ(k_t − k)σ + g_aσ a_t σ + ...   [second and higher order terms]
– g_σ = 0: to a first order approximation, ‘certainty equivalence’
– All terms found by solving linear equations, except the coefficient on the past endogenous variable, g_k, which requires solving for eigenvalues
– To second order approximation: slope terms certainty equivalent – g_kσ = g_aσ = 0
– Quadratic, higher order terms computed recursively.
First Order Perturbation
• Working out the following derivatives and evaluating at k_t = k*, a_t = 0, σ = 0:
R_k(k_t, a_t, σ; g) = R_a(k_t, a_t, σ; g) = R_σ(k_t, a_t, σ; g) = 0
• Implies:
R_k : u″(f_k − e^g g_k) − βu′ f_Kk g_k − βu″(f_k g_k − e^g g_k²) f_K = 0   [the ‘problematic term’ is the one quadratic in g_k]
R_a : u″(f_a − e^g g_a) − βu′(f_Kk g_a + f_Ka ρ) − βu″(f_k g_a + f_a ρ − e^g(g_k g_a + g_a ρ)) f_K = 0
R_σ : [ − u′ e^g + βu″(f_k − e^g g_k) f_K ] g_σ = 0 ⇒ g_σ = 0   [source of certainty equivalence in the linear approximation]
• Absence of arguments in these functions reflects that they are evaluated at k_t = k*, a_t = 0.
Technical notes for following slide
u″(f_k − e^g g_k) − βu′ f_Kk g_k − βu″(f_k g_k − e^g g_k²) f_K = 0
(f_k − e^g g_k) − β(u′/u″) f_Kk g_k − βf_K(f_k g_k − e^g g_k²) = 0
f_k − [ e^g + β(u′/u″) f_Kk + βf_K f_k ] g_k + βf_K e^g g_k² = 0
f_k e^{−g} − [ 1 + β(u′/u″) f_Kk e^{−g} + βf_K f_k e^{−g} ] g_k + βf_K g_k² = 0
• Simplify this further using:
βf_K = 1 ~ steady state equation
f_K = αK^(α−1) exp(a) + 1 − δ, K ≡ exp(k)
f_k = α exp(αk + a) + (1 − δ) exp(k) = f_K e^g
f_Kk = α(α − 1) exp((α−1)k + a), f_KK = α(α − 1)K^(α−2) exp(a), so f_KK = f_Kk e^{−g}
• to obtain the polynomial on the next slide.
First Order, cont’d
• Rewriting the R_k = 0 term:
1 − [ 1 + β + β(u′/u″)(f_KK/f_K) ] g_k + β g_k² = 0
• There are two solutions, 0 < g_k < 1 and g_k > 1.
– Theory (see Stokey‐Lucas) tells us to pick the smaller one.
– In general, could be more than one eigenvalue less than unity: multiple solutions.
• Conditional on the solution for g_k, g_a is solved for linearly using the R_a = 0 equation.
• These results all generalize to the multidimensional case.
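A sketch that solves the quadratic in g_k at the Prescott (1986) parameters (γ = 2). The steady-state objects are computed from the model's functional forms; for CRRA utility, u′/u″ = −c/γ:

```python
import numpy as np

beta, alpha, delta, gamma = 0.99, 0.36, 0.02, 2.0

# Steady state (a = 0): beta*f_K = 1 pins down K*, then c* from the constraint
Kstar = ((1.0 / beta - (1.0 - delta)) / alpha) ** (1.0 / (alpha - 1.0))
cstar = Kstar ** alpha - delta * Kstar
fK = alpha * Kstar ** (alpha - 1.0) + 1.0 - delta       # equals 1/beta
fKK = alpha * (alpha - 1.0) * Kstar ** (alpha - 2.0)
up_over_upp = -cstar / gamma                            # u'/u'' for CRRA utility

# beta*g**2 - [1 + beta + beta*(u'/u'')*(f_KK/f_K)]*g + 1 = 0
coefs = [beta, -(1.0 + beta + beta * up_over_upp * fKK / fK), 1.0]
roots = np.roots(coefs)
gk = float(np.real(roots[np.abs(roots) < 1.0][0]))      # pick the stable root
print(round(np.log(Kstar), 2), round(gk, 3))
```

The stable root comes out at about 0.98, and log K* at about 3.88, matching the numerical example reported in the deck; the other root exceeds one and is discarded.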
Numerical Example
• Parameters taken from Prescott (1986):
β = 0.99, γ = 2 (also 20), α = 0.36, δ = 0.02, ρ = 0.95, V = 0.01²
• Second order approximation:
ĝ(k_t, a_{t−1}, ε_t, σ) = k* + g_k(k_t − k*) + g_a a_t + g_σ σ
+ (1/2)[ g_kk(k_t − k*)² + g_aa a_t² + g_σσ σ² ] + g_ka(k_t − k*)a_t + g_kσ(k_t − k*)σ + g_aσ a_t σ
with (reading the reported coefficient values in order; the second number in each pair is for γ = 20): k* = 3.88; g_k = 0.98 (0.996); g_a = 0.06 (0.07); g_σ = 0; g_kk = 0.014 (0.00017); g_aa = 0.067 (0.079); g_σσ = 0.000024 (0.00068); g_ka = −0.035 (−0.028); g_kσ = g_aσ = 0; σ = 1.
• Following is a graph that compares the policy rules implied by the first and second order perturbation.
• The graph itself corresponds to the baseline parameterization, and results are reported in parentheses for risk aversion equal to 20.
‘If initial capital is 20 percent away from steady state, then capital choice differs by 0.03 (0.035) percent between the two approximations.’

‘If shock is 6 standard deviations away from its mean, then capital choice differs by 0.14 (0.18) percent between the two approximations.’

[Figure: two panels (γ = 2 and γ = 20) plotting 100×(k_{t+1}(2nd order) − k_{t+1}(1st order)) against 100×(k_t − k*), the percent deviation of initial capital from steady state, and against 100×a_t, the percent deviation of the initial shock from steady state.]

Numbers in parentheses at top correspond to γ = 20.
Conclusion
• For modest US‐sized fluctuations and for 
aggregate quantities, it is reasonable to work 
with first order perturbations.

• First order perturbation: linearize (or, log‐
linearize) equilibrium conditions around non‐
stochastic steady state and solve the resulting 
system. 
– This approach assumes ‘certainty equivalence’. Ok, as 
a first order approximation.
Solution by Linearization
• (log) Linearized Equilibrium Conditions:
E_t[ α_0 z_{t+1} + α_1 z_t + α_2 z_{t−1} + β_0 s_{t+1} + β_1 s_t ] = 0
(z_t : list of endogenous variables determined at t; s_t : exogenous shocks)
• Posit Linear Solution:
z_t = A z_{t−1} + B s_t, s_t − P s_{t−1} − ε_t = 0.
• To satisfy equil conditions, A and B must satisfy:
α_0 A² + α_1 A + α_2 I = 0, F ≡ (β_0 + α_0 B)P + β_1 + (α_0 A + α_1)B = 0
• If there is exactly one A with eigenvalues less than unity in absolute value, that’s the solution. Otherwise, multiple solutions.
• Conditional on A, solve linear system for B.