A Simple Introduction to
Dynamic Programming in
Macroeconomic Models
Ian King*
Department of Economics
University of Auckland
Auckland
New Zealand
April 2002
(October 1987)
Abstract
This is intended as a very basic introduction to the mathematical methods used in Thomas
Sargent's book Dynamic Macroeconomic Theory. It assumes that readers have no further
mathematical background than an undergraduate "Mathematics for Economists" course. It
contains sections on deterministic finite horizon models, deterministic infinite horizon
models, and stochastic infinite horizon models. Fully worked out examples are also provided.
Financial support of the Social Sciences and Humanities Research Council of Canada is gratefully
acknowledged. I am indebted to David Backus and Masahiro Igarashi for comments. I would like to thank Tom
McCurdy for his comments on earlier drafts, and for suggesting the last example presented here.
FOREWORD (2002)
I wrote this guide originally in 1987, while I was a graduate student at Queen's University at
Kingston, in Canada, to help other students learn dynamic programming as painlessly as
possible. The guide was never published, but was passed on through different generations of
graduate students as time progressed. Over the years, I have been informed that several
instructors at several different universities all over the world have used the guide as an
informal supplement to the material in graduate macro courses. The quality of the photocopies
has been deteriorating, and I have received many requests for new originals. Unfortunately, I
only had hard copy of the original, and this has also been deteriorating.
Because the material in this guide is not original at all (it simply summarizes material
available elsewhere), the usual outlets for publication seem inappropriate. I decided, therefore,
to simply reproduce the handout as a pdf file that anyone can have access to. This required
retyping the entire document. I am extremely grateful to Malliga Rassu, at the University of
Auckland for patiently doing most of this work. Rather than totally reorganize the notes in
light of what I've learned since they were originally written, I decided to leave them pretty
much as they were with some very minor changes (mainly references). I hope people
continue to find them useful.
INTRODUCTION (1987)
This note grew out of a handout that I prepared while tutoring a graduate macroeconomics
course at Queen's University. The main text for the course was Thomas Sargent's Dynamic
Macroeconomic Theory. It had been my experience that some first year graduate students
without strong mathematical backgrounds found the text heavy going, even though the text
itself contains an introduction to dynamic programming. This note could be seen as an introduction
to Sargent's introduction to these methods. It is not intended as a substitute for his chapter, but
rather, to make his book more accessible to students whose mathematical background does
not extend beyond, say, A.C. Chiang's Fundamental Methods of Mathematical Economics.
The paper is divided into 3 sections: (i) Deterministic Finite Horizon Models, (ii)
Deterministic Infinite Horizon Models, and (iii) Stochastic Infinite Horizon Models. It also
provides five fully worked out examples.
1. DETERMINISTIC FINITE HORIZON MODELS

Let us first define the variables and set up the most general problem (which is usually
unsolvable), then introduce some assumptions which make the problem tractable.

1.1 The General Problem:

max_{v_t; t=0,1,…,T−1}  U(x_0, x_1, …, x_T; v_0, v_1, …, v_{T−1})

subject to: i)   G(x_0, x_1, …, x_T; v_0, v_1, …, v_{T−1}) ≥ 0
            ii)  v_t ∈ Ω for all t = 0, …, T−1
            iii) x_0 given
            iv)  x_T ≥ 0

where: x_t   is a vector of state variables that describe the state of the system at any point in
             time t
       v_t   is a vector of control variables, chosen by the decision maker in each period
       U(·)  is the objective function which is, in general, a function of all the state and
             control variables for each time period
       G(·)  is a vector of constraint functions
       Ω     is the feasible set for the control variables, assumed to be closed and
             bounded.
In principle, we could simply treat this as a standard constrained optimisation problem. That
is, we could set up a Lagrangian function, and (under the usual smoothness and concavity
assumptions) grind out the Kuhn-Tucker conditions.
In general though, the first order conditions will be non-linear functions of all the state and
control variables. These would have to be solved simultaneously to get any results, and this
could be extremely difficult to do if T is large. We need to introduce some strong assumptions
to make the problem tractable.
1.2 Time Separability:

Here it is assumed that both the U(·) and the G(·) functions are time-separable. That is:

U(x_0, …, x_T; v_0, …, v_{T−1}) = U_0(x_0, v_0) + U_1(x_1, v_1) + … + U_{T−1}(x_{T−1}, v_{T−1}) + S(x_T)

where S(x_T) is a "scrap" value function at the end of the program (where no further decisions
are made). Also, the G(·) functions follow the Markov structure:

x_1 = G_0(x_0, v_0)
x_2 = G_1(x_1, v_1)
⋮
x_T = G_{T−1}(x_{T−1}, v_{T−1})        (Transition equations)

Note: Recall that each x_t is a vector of variables x^i_t, where i indexes different kinds of state
variables. Similarly with v_t. Time separability still allows interactions of different
state and control variables, but only within periods.
The time-separable problem is then:

max_{v_t; t=0,1,…,T−1}  Σ_{t=0}^{T−1} U_t(x_t, v_t) + S(x_T)

subject to: i)  x^i_{t+1} = G^i_t(x_t, v_t),   i = 1, …, n and t = 0, …, T−1
            ii) x^i_0 given,   i = 1, …, n
Once again, in principle, this problem could be solved using the standard constrained
optimisation techniques. The Lagrangian is:
L = Σ_{t=0}^{T−1} U_t(x_t, v_t) + S(x_T) + Σ_{t=0}^{T−1} Σ_{i=1}^{n} λ^i_t [G^i_t(x_t, v_t) − x^i_{t+1}]
This problem is often solvable using these methods, due to the temporal recursive structure of
the model. However, doing this can be quite messy. (For an example, see Sargent (1987)
section 1.2). Bellman's "Principle of Optimality" is often more convenient to use.
1.3 Bellman's Principle of Optimality:

Consider the following two problems. Problem A is the full time-separable problem:

max_{v_t; t=0,1,…,T−1}  Σ_{t=0}^{T−1} U_t(x_t, v_t) + S(x_T)

subject to: i)  x^i_{t+1} = G^i_t(x_t, v_t),   i = 1, …, n and t = 0, …, T−1
            ii) x^i_0 given,   i = 1, …, n

Problem B is the same problem, starting instead from some intermediate date t_0:

max_{v_t; t=t_0,…,T−1}  Σ_{t=t_0}^{T−1} U_t(x_t, v_t) + S(x_T)

subject to: i)  x^i_{t+1} = G^i_t(x_t, v_t),   i = 1, …, n and t = t_0, …, T−1
            ii) x^i_{t_0} given,   i = 1, …, n

Bellman's Principle of Optimality states that the control rules that solve Problem A also
solve Problem B, when Problem B starts from the value x*_{t_0} generated by the solution of
Problem A.

(Note: This result depends on additive time separability, since otherwise we couldn't "break"
the solution at t_0. Additive separability is sufficient for Bellman's principle of optimality.)
Interpretation: If the rules for the control variables chosen for the t_0 problem are optimal for
any given x_{t_0}, then they must be optimal for the x*_{t_0} of the larger problem.

Bellman's Principle of Optimality allows us to use the trick of solving the large Problem A by
solving the smaller Problem B, sequentially. Also, since t_0 is arbitrary, we can choose to solve
the problem backwards, starting from the final decision period:
Step 1:

max_{v_{T−1}}  U_{T−1}(x_{T−1}, v_{T−1}) + S(x_T)

subject to: i)  x_T = G_{T−1}(x_{T−1}, v_{T−1})
            ii) x_{T−1} given

One can easily substitute the first constraint into the objective function,
and use straightforward calculus to derive:

v_{T−1} = h_{T−1}(x_{T−1})        (Control rule for v_{T−1})

This can then be substituted back into the objective function to characterize the
solution as a value function:

V(x_{T−1}, 1) = max_{v_{T−1}} {U_{T−1}(x_{T−1}, v_{T−1}) + S(x_T)}
Step 2:

Now consider the problem faced two periods from the end:

max_{v_{T−2}} { U_{T−2}(x_{T−2}, v_{T−2}) + max_{v_{T−1}} [U_{T−1}(x_{T−1}, v_{T−1}) + S(x_T)] }
  = max_{v_{T−2}} { U_{T−2}(x_{T−2}, v_{T−2}) + V(x_{T−1}, 1) }

subject to: i)  x_{T−1} = G_{T−2}(x_{T−2}, v_{T−2})
            ii) x_{T−2} given

Once again, we can easily substitute the first constraint into the objective function,
and use straightforward calculus to derive:

v_{T−2} = h_{T−2}(x_{T−2})        (Control rule for v_{T−2})

This can be substituted back into the objective function to get a value function:

V(x_{T−2}, 2) = max_{v_{T−2}} {U_{T−2}(x_{T−2}, v_{T−2}) + V(x_{T−1}, 1)}
Step 3:

Using an argument analogous to that used in step 2, we know that, in general, the
problem in period T−k can be written as:

V(x_{T−k}, k) = max_{v_{T−k}} {U_{T−k}(x_{T−k}, v_{T−k}) + V(x_{T−k+1}, k−1)}        ("Bellman's Equation")

Step 4:

Iterate backwards in this way until period 0 (that is, k = T):

V(x_0, T) = max_{v_0} {U_0(x_0, v_0) + V(x_1, T−1)}

Now, recall that x_0 is given a value at the outset of the overall dynamic
problem. The maximization above therefore yields a number for v_0 from the control rule
v_0 = h_0(x_0).

Step 5:

Given x_0 and v_0, the transition equation makes
it simple to work out x_1, and hence v_1, from the control rule of that period.
This process can be repeated until all the x_t and v_t values are known. The
overall Problem A will then be solved.
1.4 An Example:

This is a simple two period minimization problem, which can be solved using this algorithm.

min_{v_t}  Σ_{t=0}^{1} [x_t² + v_t²] + x_2²        (1)

subject to: i)  x_{t+1} = 2x_t + v_t        (2)
            ii) x_0 = 1                     (3)
In this problem, T=2. To solve this, consider first the problem in period T-1 (i.e.: in period 1):
Step 1:

min_{v_1}  x_1² + v_1² + x_2²        (4)

subject to: i)  x_2 = 2x_1 + v_1        (5)
            ii) x_1 given               (6)

Substituting (5) into the objective function:

min_{v_1}  x_1² + v_1² + (2x_1 + v_1)²

FOC: 2v_1 + 2(2x_1 + v_1) = 0

⇒ v_1 = −x_1        (7)

⇒ V(x_1, 1) = 3x_1²
Step 2:

min_{v_0}  x_0² + v_0² + V(x_1, 1)        (8)

subject to: i)  x_1 = 2x_0 + v_0        (9)
            ii) x_0 = 1                  (10)

Substituting the constraints and V(x_1, 1) = 3x_1² into the objective function:

min_{v_0}  1 + v_0² + 3[2 + v_0]²        (11)

FOC: 2v_0 + 6[2 + v_0] = 0

⇒ v_0 = −3/2        (12)

Step 5:

Now work forwards using the transition equation and the control rules:

x_1 = 2(1) + (−3/2) = 1/2        (13)
v_1 = −x_1 = −1/2                (14)
x_2 = 2x_1 + v_1 = 1/2           (15)
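The backward-induction algorithm can be checked numerically on this example. The sketch below is my own illustration (the brute-force grid search is not from the text): each period's minimization is done by scanning a fine grid of candidate controls.

```python
# Numeric check of the two-period example, by backward induction.
# Each minimization is a brute-force scan over a fine, evenly spaced grid of
# candidate controls, so answers are accurate only up to the grid spacing.

def argmin_on_grid(f, lo=-5.0, hi=5.0, n=100001):
    """Return (minimizer, minimum) of f over an evenly spaced grid."""
    step = (hi - lo) / (n - 1)
    best_v, best_f = lo, f(lo)
    for i in range(1, n):
        v = lo + i * step
        fv = f(v)
        if fv < best_f:
            best_v, best_f = v, fv
    return best_v, best_f

# Step 1: the period-1 problem for a trial state x1 (with x2 = 2*x1 + v1 substituted in).
x1_trial = 0.7
v1_star, V1 = argmin_on_grid(lambda v: x1_trial**2 + v**2 + (2 * x1_trial + v)**2)
assert abs(v1_star - (-x1_trial)) < 1e-3      # control rule (7):  v1 = -x1
assert abs(V1 - 3 * x1_trial**2) < 1e-3       # value function:    V(x1, 1) = 3*x1^2

# Step 2: the period-0 problem, using V(x1, 1) = 3*x1^2 and x1 = 2*x0 + v0.
x0 = 1.0
v0, _ = argmin_on_grid(lambda v: x0**2 + v**2 + 3 * (2 * x0 + v)**2)

# Step 5: work forward through the transition equation and the control rules.
x1 = 2 * x0 + v0
v1 = -x1
x2 = 2 * x1 + v1
print(round(v0, 3), round(x1, 3), round(v1, 3), round(x2, 3))   # → -1.5 0.5 -0.5 0.5
```

The printed values match equations (12)-(15).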
2. DETERMINISTIC INFINITE HORIZON MODELS

2.1. Introduction:

One feature of the finite horizon models is that, in general, the functional form of the
control rules varies over time:

v_t = h_t(x_t)

That is, the h function is different for each t. This is a consequence of two features of
the problem:

i)  The finiteness of the horizon itself: each period stands at a different distance
    from the terminal date T.
ii) The fact that U_t(x_t, v_t) and G_t(x_t, v_t) have been permitted to depend on time
    in arbitrary ways.

In infinite horizon problems, assumptions are usually made to ensure that the control
rules have the same form in every period.
Consider the infinite horizon problem (with time-separability):

max_{v_t; t=0,1,…}  Σ_{t=0}^{∞} U_t(x_t, v_t)

subject to: i)  x_{t+1} = G_t(x_t, v_t)
            ii) x_0 given

For a unique solution to any optimization problem, the objective function should be
bounded away from infinity. One trick that facilitates this bounding is to introduce a
discount factor β^t, where 0 ≤ β < 1.
A convenient simplifying assumption that is commonly used in infinite horizon models
is stationarity:

Assume: i)   β_t = β^t
        ii)  U_t(x_t, v_t) = β^t U(x_t, v_t)
        iii) G_t(x_t, v_t) = G(x_t, v_t)

This assumption, however, is not necessary, and there are many problems where it is
not used.

The infinite horizon problem becomes:

max_{v_t; t=0,1,…}  Σ_{t=0}^{∞} β^t U(x_t, v_t)

subject to: i)  x_{t+1} = G(x_t, v_t)
            ii) x_0 given
In the finite horizon case, the value functions satisfied:

V_t(x_t) = max_{v_t} {U_t(x_t, v_t) + V_{t+1}(x_{t+1})}¹

With stationarity, it is more convenient to work with the current-value function
W_t(x_t) = β^{−t} V_t(x_t), which satisfies:

W_t(x_t) = max_{v_t} {U(x_t, v_t) + β W_{t+1}(x_{t+1})}        (*)

¹ A subtle change in notation has been introduced here. From this point on, V_t(x_t) represents the form of the
value function in period t.
It can be shown that the mapping defined in (*) is an example of a contraction mapping. In
Sargent's (1987) appendix (see, also, Stokey et al. (1989)), he presents a powerful result
called the contraction mapping theorem, which states that, under certain regularity
conditions,² iterations on (*) starting from any bounded continuous W_0 (say, W_0 = 0) will
cause W to converge as the number of iterations becomes large. Moreover, the W(·) that comes
out of this procedure is the unique optimal value function for the infinite horizon
maximization problem. Also, associated with W(·) is a unique control rule v_t = h(x_t) which
solves the maximization problem.

This means that there is a unique time invariant value function W(·) which satisfies:

W(x_t) = max_{v_t} {U(x_t, v_t) + β W(x_{t+1})}

subject to: i)  x_{t+1} = G(x_t, v_t)
            ii) x_0 given.

Associated with this value function is a unique time invariant control rule:

v_t = h(x_t)
2.2 Solution Methods:

² Stokey et al. (1989), chapter 3, spell these conditions out. Roughly, these amount to assuming that U(x_t, v_t) is
bounded, continuous, strictly increasing and strictly concave. Also, the production technology implicit in
G(x_t, v_t) must be continuous, monotonic and concave. Finally, the set of feasible end-of-period x must be
compact.
Method 1: Iterating on the Bellman equation.

Consider again the stationary problem:

max_{v_t; t=0,1,…}  Σ_{t=0}^{∞} β^t U(x_t, v_t)

whose Bellman equation is:

W_t(x_t) = max_{v_t} {U(x_t, v_t) + β W_{t+1}(x_{t+1})}

s.t. i)  x_{t+1} = G(x_t, v_t)
     ii) x_0 given.
Set W_{t+1}(x_{t+1}) = 0, and solve the maximization problem on the right side of the Bellman
equation. Substitute the resulting control rule v_t = h_t(x_t) back into
U(x_t, v_t), to get:

W_t(x_t) = U(x_t, h_t(x_t))

Next, set up the Bellman equation for the previous iteration:

W_{t−1}(x_{t−1}) = max_{v_{t−1}} {U(x_{t−1}, v_{t−1}) + β U(x_t, h_t(x_t))}

s.t. i)  x_t = G(x_{t−1}, v_{t−1})
     ii) x_0 given.

Solving this maximization problem yields a new value function W_{t−1}(x_{t−1}) and a new
control rule:

v_{t−1} = h_{t−1}(x_{t−1})

Repeating this procedure over and over, the successive W and h functions converge to the time
invariant W(·) and h(·) described above.
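Method 1 is easy to sketch numerically. The block below is my own illustration (the growth-model example, grid, and parameter values are arbitrary choices, not from the text): it iterates on a Bellman equation W(k) = max {ln(A·k^α − k′) + βW(k′)} starting from W = 0, and records the sup-norm distance between successive iterates, which the contraction mapping theorem says shrinks at least at rate β.

```python
import math

# Successive approximations (Method 1) on the Bellman equation
#   W(k) = max_{k'} { ln(A*k**alpha - k') + beta*W(k') }
# over a discrete grid of capital stocks, starting from W = 0.
A, alpha, beta = 1.0, 0.3, 0.9
grid = [0.05 + 0.01 * i for i in range(30)]      # illustrative capital grid

W = [0.0] * len(grid)
diffs = []                                       # sup-norm distance between iterates
for _ in range(300):
    W_new = []
    for k in grid:
        y = A * k ** alpha
        # next period's capital is chosen from grid points leaving C = y - k' > 0
        W_new.append(max(math.log(y - kp) + beta * Wv
                         for kp, Wv in zip(grid, W) if kp < y))
    diffs.append(max(abs(w1 - w0) for w1, w0 in zip(W_new, W)))
    W = W_new

# Each iteration shrinks the distance between iterates by (at least) factor beta.
print(diffs[-1] < 1e-10, diffs[10] <= beta * diffs[9] + 1e-12)   # → True True
```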
Method 2: Guess and verify.

As shown above, the infinite horizon problem has a time invariant value function satisfying:

W(x_t) = max_{v_t} {U(x_t, v_t) + β W(x_{t+1})}

subject to: i)  x_{t+1} = G(x_t, v_t)
            ii) x_0 given.

First, guess a functional form for the value function, with coefficients to be determined (see
the examples below). Second, substitute the guess and the transition equation into the right
side of the Bellman equation:

W(x_t) = max_{v_t} {U(x_t, v_t) + β W(G(x_t, v_t))}

Third, perform the maximization problem on the right side. Then substitute the resulting
control rules back into the Bellman equation to (hopefully) verify the initial guess. (That is,
the guess is verified if the right side turns out to have the same functional form as the guess,
which pins down the unknown coefficients.)

If the initial guess was correct, then the problem is solved. If the initial guess was incorrect,
try the form of the value function that is suggested by the first guess as a second guess, and
repeat the process. Successive guesses will (hopefully) converge to the
solution, as in Method 1.
Method 3: Using the Benveniste-Scheinkman formula.

Sargent (1987) presents a formula (due to Benveniste and Scheinkman) for the derivative of
the value function with respect to the state variables, evaluated at x_{t+1} = G(x_t, h(x_t)):

∂W(x_t)/∂x_t = ∂U(x_t, h(x_t))/∂x_t + β [∂G(x_t, h(x_t))/∂x_t] ∂W(x_{t+1})/∂x_{t+1}        (B-S Formula 1)

Sargent shows that if it is possible to re-define the state and control variables in such a
way as to exclude the state variables from the transition equations, the B-S formula
simplifies to:

∂W(x_t)/∂x_t = ∂U(x_t, h(x_t))/∂x_t        (B-S Formula 2)

The method proceeds in four steps:

(1) Redefine the state and control variables to exclude the state variables from the
    transition equations.
(2) Set up the Bellman equation with the value function in general form.
(3) Perform the maximization problem on the r.h.s. of the Bellman equation. That is,
    derive the F.O.C.s.
(4) Use B-S Formula 2 to substitute out any terms in the value function.

As a result, you will get an Euler equation, which may be useful for certain purposes,
although it is essentially a second-order difference equation in the state variables.
Example 2:

max_{C_t, k_{t+1}}  Σ_{t=0}^{∞} β^t ln C_t

s.t. i)  k_{t+1} = A k_t^α − C_t
     ii) k_0 given.

where: C_t  consumption
       k_t  capital stock
       0 < β < 1
       0 < α < 1
Since the period utility function is in the HARA class, it is worthwhile to use the guess-verify method.
The Bellman equation is:

W(k_t) = max_{C_t, k_{t+1}} {ln C_t + β W(k_{t+1})}        (1)
s.t. (i) and (ii)

Now guess that the value function has the form:

W(k_t) = E + F ln k_t        (2)

so that:

W(k_{t+1}) = E + F ln k_{t+1}        (2')

Substituting the constraint C_t = A k_t^α − k_{t+1} and the guess (2') into (1):

W(k_t) = max_{k_{t+1}} {ln[A k_t^α − k_{t+1}] + β[E + F ln k_{t+1}]}        (3)

FOC:  1/(A k_t^α − k_{t+1}) = βF/k_{t+1}

⇒ k_{t+1} = βF A k_t^α / (1 + βF)        (4)
(4)
RST
W (k ) = n Ak
U
t
PO
RST
F
F
Ak t + E + BF n
Ak t
1 + F
1 + F
F
1 + F
PO
Z XZ X
Y Y + (1 + F )W n k
(5)
Now, comparing equations (2) and (5), we can determine the coefficient for n k t :
F = (1 + F )
F=
(6)
kt +1 = Akt
C t = [1 ]Ak t
(7)
(8)
Given the initial k0 equations (7) - (8) completely characterize the solution paths of C and k.
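The verified guess can also be confirmed numerically: with F from (6), and the constant E obtained by collecting the constant terms in (5), the function W(k) = E + F ln k should reproduce itself on the right side of the Bellman equation, and the maximizer should match the policy (7). The parameter values and grid below are illustrative choices of mine.

```python
import math

# Check that W(k) = E + F*ln(k), with F from equation (6) and E obtained by
# collecting the constant terms in (5), satisfies the Bellman equation
#   W(k) = max_{k'} { ln(A*k**alpha - k') + beta*W(k') }.
A, alpha, beta = 1.2, 0.3, 0.95
ab = alpha * beta
F = alpha / (1 - ab)                                     # equation (6)
E = (math.log(A * (1 - ab)) + (ab / (1 - ab)) * math.log(ab * A)) / (1 - beta)

def W(k):
    return E + F * math.log(k)

for k in [0.2, 0.5, 1.0, 2.0]:
    y = A * k ** alpha
    # brute-force the right-hand side over a fine grid of feasible k'
    best_kp, best_val = None, -float("inf")
    for i in range(1, 200000):
        kp = y * i / 200000
        val = math.log(y - kp) + beta * W(kp)
        if val > best_val:
            best_kp, best_val = kp, val
    assert abs(best_val - W(k)) < 1e-6                 # the guess reproduces itself
    assert abs(best_kp - ab * A * k ** alpha) < 1e-3   # policy (7): k' = alpha*beta*A*k**alpha
print("guess verified")
```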
Example 3:

min_{v_t}  Σ_{t=0}^{∞} β^t [x_t² + v_t²]

subject to: i)  x_{t+1} = 2x_t + v_t
            ii) x_0 given

Since this is a linear-quadratic problem, it is possible to use the guess-verify method.
The Bellman equation is:

W(x_t) = min_{v_t} {x_t² + v_t² + β W(x_{t+1})}        (1)

s.t. i)  x_{t+1} = 2x_t + v_t
     ii) x_0 given

Now guess that the value function is of this form:

W(x_t) = P x_t²        (2)

so that:

W(x_{t+1}) = P x_{t+1}²        (2')

Substituting the constraint and (2') into (1):

W(x_t) = min_{v_t} {x_t² + v_t² + βP[2x_t + v_t]²}        (3)

FOC:  2v_t + 2βP[2x_t + v_t] = 0

⇒ v_t(1 + βP) = −2βP x_t

⇒ v_t = −2βP x_t / (1 + βP)        (4)
Substituting (4) back into (3):

W(x_t) = {1 + [2βP/(1 + βP)]² + βP[2 − 2βP/(1 + βP)]²} x_t²        (5)

For the guess (2) to be verified, the coefficient on x_t² must equal P:

P = 1 + [2βP/(1 + βP)]² + βP[2 − 2βP/(1 + βP)]²        (6)

which simplifies to P = 1 + 4βP/(1 + βP).

The P that solves (6) is the coefficient that was to be determined. For example, set β = 1.
Then (6) gives P² − 4P − 1 = 0, so P = 2 + √5 ≈ 4.24, and equation (4) and the transition
equation become:

v_t = −1.62 x_t        (7)
x_{t+1} = 0.38 x_t        (8)

For a given x_0, equations (7) and (8) completely characterize the solution. Note from
equation (8) that the system will be stable, since 0.38 < 1.
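Equation (6) can also be solved by simple fixed-point iteration on P. A minimal check, with β = 1 as in the example (the starting point and iteration count are my own choices):

```python
# Solve the fixed-point condition (6) for P by simple iteration, with beta = 1
# as in the worked example, then recover the control rule coefficient.
beta = 1.0
P = 0.0                                    # starting point (my choice)
for _ in range(200):
    s = 2 * beta * P / (1 + beta * P)
    P = 1 + s**2 + beta * P * (2 - s)**2   # right-hand side of equation (6)

feedback = -2 * beta * P / (1 + beta * P)  # equation (4):  v_t = feedback * x_t
closed_loop = 2 + feedback                 # transition:    x_{t+1} = closed_loop * x_t
print(round(P, 3), round(feedback, 3), round(closed_loop, 3))   # → 4.236 -1.618 0.382
```

The iterates converge to P = 2 + √5, reproducing the coefficients in (7) and (8).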
Example 4:

max_{C_t}  Σ_{t=0}^{∞} β^t U(C_t)        (1)

s.t. i)  A_{t+1} = R_t[A_t + y_t − C_t]        (Transition equation)        (2)
     ii) A_0 given        (3)

where: C_t  consumption
       y_t  known, exogenously given income stream
       A_t  non-labour wealth at the beginning of time t
       R_t  known, exogenous sequence of one-period rates of return on A_t

Solving (2) forward yields the lifetime budget constraint:

C_t + Σ_{j=1}^{∞} [Π_{k=0}^{j−1} R^{−1}_{t+k}] C_{t+j} = y_t + Σ_{j=1}^{∞} [Π_{k=0}^{j−1} R^{−1}_{t+k}] y_{t+j} + A_t        (4)
Here, we are not given an explicit functional form for the period utility function. We can
use Method 3 in this problem to derive a well-known result.

To use B-S Formula 2, we need to define the state and control variables so as to exclude the
state variables from the transition equation.

Define:
State variables:   {A_t, y_t, R_{t−1}}
Control variable:  v_t ≡ A_{t+1}/R_t        (5)

so that the transition equation becomes:

A_{t+1} = R_t v_t        (2')

Notice that (2') and (2) imply:  C_t = A_t + y_t − v_t        (6)
The reformulated problem is:

max_{v_t}  Σ_{t=0}^{∞} β^t U(A_t + y_t − v_t)        (1')

s.t. i)  A_{t+1} = R_t v_t        (2')
     ii) A_0 given        (3)

The Bellman equation is:

W(A_t, y_t, R_{t−1}) = max_{v_t} {U(A_t + y_t − v_t) + β W(A_{t+1}, y_{t+1}, R_t)}

s.t. (2') and (3).

Substituting (2') into the Bellman equation yields:

W(A_t, y_t, R_{t−1}) = max_{v_t} {U(A_t + y_t − v_t) + β W(R_t v_t, y_{t+1}, R_t)}        (7)
FOC:  −U′(A_t + y_t − v_t) + β R_t W_1(R_t v_t, y_{t+1}, R_t) = 0        (8)

By B-S Formula 2, W_1(A_t, y_t, R_{t−1}) = U′(A_t + y_t − v_t) = U′(C_t). Updating this one
period and substituting into (8) gives the Euler equation:

U′(C_t) = β R_t U′(C_{t+1})        (9)

where C_{t+1} = A_{t+1} + y_{t+1} − A_{t+2}/R_{t+1}.

Now, to get further results, let's impose structure on the model by specifying a
particular utility function:

Let U(C_t) = ln C_t, so that U′(C_t) = 1/C_t        (10)
Using (10), the Euler equation (9) becomes:

1/C_t = β R_t (1/C_{t+1})   ⇒   C_{t+1} = β R_t C_t        (11)

Updating one period: C_{t+2} = β R_{t+1} C_{t+1} = β² R_t R_{t+1} C_t. In general:

C_{t+j} = β^j [Π_{k=0}^{j−1} R_{t+k}] C_t        (12)

Substituting (12) into the lifetime budget constraint (4), the products of returns cancel on
the left side:

C_t + Σ_{j=1}^{∞} β^j C_t = C_t Σ_{j=0}^{∞} β^j = C_t/(1 − β) = y_t + Σ_{j=1}^{∞} [Π_{k=0}^{j−1} R^{−1}_{t+k}] y_{t+j} + A_t

so that:

C_t = (1 − β)[ y_t + Σ_{j=1}^{∞} (Π_{k=0}^{j−1} R^{−1}_{t+k}) y_{t+j} + A_t ]

That is, the consumer spends the constant fraction (1 − β) of total wealth (non-labour wealth
plus the present value of current and future income) in each period.
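This consumption function is easy to verify numerically. The sketch below uses my own simplifications (a constant gross return R and a long finite horizon in place of the infinite sums): consumption is set to (1 − β) times total wealth each period, and the implied path satisfies the Euler equation C_{t+1} = βR·C_t.

```python
# Verify the rule C_t = (1 - beta) * [A_t + PV of income] by simulating it
# forward and checking the Euler equation C_{t+1} = beta*R*C_t.
# A constant gross return R and a long finite horizon T are my simplifications.
beta, R = 0.95, 1.06
T = 1000
y = [1.0 + 0.001 * t for t in range(T)]   # an arbitrary known income stream
A0 = 3.0

def wealth(t, A):
    """A_t plus the present value, at date t, of income from t onward."""
    return A + sum(y[t + j] / R**j for j in range(T - t))

A, C = A0, []
for t in range(T):
    C.append((1 - beta) * wealth(t, A))   # consumption function
    A = R * (A + y[t] - C[t])             # transition equation (2)

for t in range(500):
    assert abs(C[t + 1] - beta * R * C[t]) < 1e-8    # Euler equation (11)
print("Euler equation verified")
```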
3. STOCHASTIC INFINITE HORIZON MODELS
3.1. Introduction:

As with deterministic infinite horizon problems, it is convenient to assume time separability,
stationarity, and boundedness of the payoff function. However, stochastic models are more
general than deterministic ones because they allow for some uncertainty.

The type of uncertainty that is usually introduced into these models is of a very specific kind.
To preserve the recursive structure of the models, it is typically assumed that the stochastic
shocks the system experiences follow a homogeneous first order Markov process.
In the simplest terms, this means that the value of this period's shock depends only on the
value of last period's shock, not upon any earlier values. (White noise shocks are a special
case of this: they do not even depend upon last period's shock.)
The problem is now:

max_{v_t; t=0,1,…}  E_0 Σ_{t=0}^{∞} β^t U(x_t, v_t)

subject to: i)  x_{t+1} = G(x_t, v_t, ε_{t+1})
            ii) x_0 given

where {ε_t}_{t=0}^{∞} is a sequence of random shocks that take on values in a closed interval
and follow a homogeneous first order Markov process with the conditional cumulative
distribution function:

F(ε′, ε) = Pr{ε_{t+1} ≤ ε′ | ε_t = ε}

The timing of events within each period t is as follows:
(1) x_t and ε_t are observed;
(2) the decision maker chooses v_t;
(3) nature chooses ε_{t+1};
(4) x_{t+1} is determined by the transition equation.

The Bellman equation for this problem is:

W(x_t, ε_t) = max_{v_t} {U(x_t, v_t) + β E_t W(x_{t+1}, ε_{t+1})}

where E_t denotes the expectation conditional on information available at time t.
As in the deterministic case, under the appropriate regularity conditions,⁴ iterations on:

W_t(x_t, ε_t) = max_{v_t} {U(x_t, v_t) + β E_t W_{t+1}(x_{t+1}, ε_{t+1})}

starting from any bounded continuous W_{t+1} (say, W_{t+1} = 0) will cause W(·) to converge
as the number of iterations becomes large. Once again, the W(·) that comes out of this
procedure is the unique optimal value function for the above problem.
Furthermore, associated with W(·) is a unique time invariant control rule v_t = h(x_t, ε_t)
which solves the maximization problem.
Under the appropriate assumptions made about preferences and technology in an economy,
the following optimal growth model can be interpreted as a competitive equilibrium model.⁵
That is, this model mimics a simple dynamic stochastic general equilibrium economy.
Because of this, the solution paths for the endogenous variables generated by this model can
be compared with actual paths of these variables observed in real-world macroeconomies.

⁴ The same conditions as in note 2, with the added assumption of homogeneous first order Markov shocks.
⁵ See Stokey et al. (1989), chapter 15, for further discussion of this topic.
Example 5:

max  E_0 Σ_{t=0}^{∞} β^t [ln C_t + γ ln(1 − n_t)]        (1)

subject to: i)   C_t + k_t = y_t        (2)
            ii)  y_{t+1} = A_{t+1} k_t^α n_t^{1−α}        (3)
            iii) ln A_{t+1} = ρ ln A_t + ε_{t+1}        (4)

where n_t represents labour supplied in period t, γ > 0 is a preference parameter, ρ is a
parameter which lies in the interval (−1, 1), and ε_t represents a white noise error process.⁶
The endogenous variable y_t represents output in period t. The rest of the notation is the same
as in example 2.

There are only two substantive differences between this example and example 2. First, a
labour-leisure decision is added to the model, represented by the inclusion of the term
γ ln(1 − n_t) in the payoff function (1) and the inclusion of n_t in the production function (3).
Second, the production technology parameter A_t in equation (3) is assumed to evolve over time
according to the Markov process (4).

Notice that in this example C_t, k_t, and n_t are the control variables chosen every period by
the decision maker, whereas y_t and A_t are the state variables. To solve this problem, we first
set up the Bellman equation:
W(y_t, A_t) = max_{C_t, k_t, n_t} {[ln C_t + γ ln(1 − n_t)] + β E_t W(y_{t+1}, A_{t+1})}        (5)

Since this is a logarithmic example, we can use the guess-verify method of solution. The
obvious first guess for the form of the value function is:

W(y_t, A_t) = D + G ln y_t + H ln A_t        (6)

so that:

W(y_{t+1}, A_{t+1}) = D + G ln y_{t+1} + H ln A_{t+1}        (6')

Substituting the production function (3) for ln y_{t+1}:

W(y_{t+1}, A_{t+1}) = D + G[ln A_{t+1} + α ln k_t + (1 − α) ln n_t] + H ln A_{t+1}

⁶ For example, a sequence of i.i.d. random variables with zero mean.
Now substitute this into equation (5):

W(y_t, A_t) = max_{C_t, k_t, n_t} {[ln C_t + γ ln(1 − n_t)] + β E_t [D + (G + H) ln A_{t+1} + Gα ln k_t + G(1 − α) ln n_t]}

Using the resource constraint (2) to substitute C_t = y_t − k_t:

W(y_t, A_t) = max_{k_t, n_t} {[ln(y_t − k_t) + γ ln(1 − n_t)] + βD + βGα ln k_t + βG(1 − α) ln n_t + β(G + H) E_t ln A_{t+1}}        (7)

FOCs:

n_t:   βG(1 − α)/n_t − γ/(1 − n_t) = 0
k_t:   −1/(y_t − k_t) + βGα/k_t = 0

⇒ n_t = βG(1 − α) / [γ + βG(1 − α)]        (8)
⇒ k_t = βGα y_t / (1 + βGα)        (9)
Substituting (8) and (9) back into (7):

W(y_t, A_t) = ln[y_t/(1 + βGα)] + γ ln{γ/[γ + βG(1 − α)]} + βD + βGα ln[βGα y_t/(1 + βGα)]
              + βG(1 − α) ln{βG(1 − α)/[γ + βG(1 − α)]} + β(G + H) E_t ln A_{t+1}        (7')

From the Markov process (4), E_t ln A_{t+1} = ρ ln A_t. Collecting terms:

W(y_t, A_t) = (1 + βGα) ln y_t + β(G + H)ρ ln A_t + constants        (10)
Comparing coefficients in equations (6) and (10):

G = 1 + βGα   ⇒   G = 1/(1 − αβ)        (11)
H = β(G + H)ρ  ⇒   H = βρ / [(1 − αβ)(1 − βρ)]        (12)

Similarly, the constant D can be determined from equations (6) and (10), knowing the values of
G and H given in equations (11) and (12).

Since all the coefficients have been determined, the initial guess of the form of the value
function (equation (6)) has been verified. We can now use equations (11) and (12) to solve for
the optimal paths of n_t, k_t, and C_t as functions of the state y_t:

From equation (9):  k_t = αβ y_t        (13)
From equation (2):  C_t = (1 − αβ) y_t        (14)
From equation (8):  n_t = β(1 − α) / [γ(1 − αβ) + β(1 − α)]        (15)
Notice that while the optimal k_t and C_t are functions of the state y_t, the optimal n_t is not.
That is, n_t is constant. This result is peculiar to the functional forms chosen for the
preferences and technology, and it provides some justification for the absence of labour-leisure
decisions in growth models with logarithmic functional forms.
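The closed-form decision rules (13)-(15) are easy to simulate. The sketch below is my own illustration (the parameter values and the normally distributed white-noise shocks are arbitrary choices): it simulates the model forward and confirms that hours n_t are constant while k_t and C_t move with y_t.

```python
import math
import random

# Simulate the closed-form decision rules of the stochastic growth example:
#   k_t = alpha*beta*y_t (13),  C_t = (1 - alpha*beta)*y_t (14),  n constant (15),
# with y_{t+1} = A_{t+1}*k_t**alpha * n**(1-alpha) and ln A following the AR(1) in (4).
alpha, beta, gamma, rho = 0.33, 0.96, 1.5, 0.9
random.seed(0)

n = beta * (1 - alpha) / (gamma * (1 - alpha * beta) + beta * (1 - alpha))  # eq. (15)
lnA, y = 0.0, 1.0
ys, ks, Cs = [], [], []
for t in range(200):
    k = alpha * beta * y              # eq. (13)
    C = (1 - alpha * beta) * y        # eq. (14)
    assert abs(C + k - y) < 1e-12     # resource constraint (2) holds each period
    ys.append(y)
    ks.append(k)
    Cs.append(C)
    lnA = rho * lnA + random.gauss(0.0, 0.02)        # technology shock, eq. (4)
    y = math.exp(lnA) * k**alpha * n**(1 - alpha)    # production, eq. (3)

print(0 < n < 1, min(ys) > 0)         # → True True
```

Hours are pinned down once and never appended to a path: the simulation makes the constancy of n_t concrete.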
REFERENCES
Blanchard, O., and Fischer, S., Lectures on Macroeconomics, MIT Press (1989).
Chiang, A.C., Fundamental Methods of Mathematical Economics, Third Edition, McGraw-Hill,
New York (1984).
Kydland, F., and Prescott, E., "Time to Build and Aggregate Fluctuations", Econometrica 50,
pp. 1345-1370 (1982).
Long, J., and Plosser, C., "Real Business Cycles", Journal of Political Economy 91, pp. 39-69
(1983).
Manuelli, R., and Sargent, T., Exercises in Dynamic Macroeconomic Theory, Harvard
University Press (1987).
Sargent, T., Dynamic Macroeconomic Theory, Harvard University Press (1987).
Stokey, N., and Lucas, R.E., with Prescott, E., Recursive Methods in Economic Dynamics,
Harvard University Press (1989).