LECTURE NOTES
Nezih Guner
Department of Economics
Pennsylvania State University
March 2006
1 Walrasian (Competitive) Equilibrium

This course is an introduction to modern macroeconomic theory. Our main emphasis will be the analysis of resource allocations in dynamic stochastic environments. We will start, however, with a simple environment, and analyze a finite dimensional (with a finite number of goods and agents) static exchange economy. By an economy we mean a full characterization of: agents' endowments, agents' preferences, and a description of how these agents interact. In an exchange economy people interact in the marketplace. They buy and sell goods, taking market prices as given. They do this in order to maximize their utility. Their choices are constrained by their endowments. If we can find a set of prices such that: given these prices people behave optimally, and there is no excess demand or excess supply of any good, we will say our economy is in an equilibrium.[1]
Consider an exchange economy with $i = 1, \ldots, n$ consumers and $j = 1, \ldots, m$ commodities. Each individual in this exchange economy is endowed with $w_i$ units of goods, where $w_i$ is an m-dimensional vector, i.e.
$$w_i = \left(w_i^1, w_i^2, \ldots, w_i^m\right).$$
Hence, the total resources $w$ in this economy are also an m-dimensional vector, given by
$$w = \left(w^1, w^2, \ldots, w^m\right) = \left(\sum_{i=1}^{n} w_i^1, \sum_{i=1}^{n} w_i^2, \ldots, \sum_{i=1}^{n} w_i^m\right).$$
Individuals have preferences over these goods and will trade with each other to maximize their well-being. We will assume that:

1. Consumers' preferences are representable by a utility function $u_i : X = \mathbb{R}^m_+ \to \mathbb{R}$, where $X$ is the consumption set.

2. $u_i \in C^2$, i.e. $u_i$ is continuous and its first and second derivatives exist. Here $C^2$ represents the set of such functions.

3. $Du_i(x) \gg 0$ for all $x \in X$, i.e. preferences are strictly monotonic. [A weaker assumption would be monotonicity, i.e. $Du_i(x) > 0$ for all $x \in X$.]

4. $u_i$ is strictly concave, i.e. $u_i(\lambda x + (1-\lambda)y) > \lambda u_i(x) + (1-\lambda)u_i(y)$ for all $x, y \in X$, $x \neq y$, and $\lambda \in (0,1)$. [A weaker assumption would be strict quasi-concavity, i.e. $u_i(\lambda x + (1-\lambda)y) > u_i(y)$ for all $x, y \in X$ with $u_i(x) \ge u_i(y)$, $\lambda \in (0,1)$.]

5. $w_i \in \mathbb{R}^m_{++}$, $i = 1, \ldots, n$, i.e. every agent is endowed with a positive amount of each good.
[1] There are several books that cover the material presented here in much more detail. See, for example, Varian (1992), Farmer (1993), and Mas-Colell, Whinston and Green (1995). The general framework used here and in subsequent chapters was developed in Arrow (1951), Debreu (1951), Arrow and Debreu (1954), and McKenzie (1954).
6. $\|Du_i(x^s)\| \to \infty$ as $x^s \to x$, where some component of $x$, $x^j = 0$. However, $Du_i(x) \cdot x$ is bounded for all $x$ in any bounded subset of $X$.

Assumption #3 implies that people will be on their budget sets; assumption #4 implies that indifference curves do not have flat sections; and #6 implies that in the case of two goods indifference curves are tangent to the axes, but never become vertical or horizontal. Hence preferences are represented by nice indifference curves, as shown in Figure 1.

Figure 1: Strictly Convex Indifference Curves (two bundles $x$ and $y$ and their convex combination $\lambda x + (1-\lambda)y$)
Remark 1 Note that we adopt the following notation:

• If $x = (x^1, \ldots, x^m)$ and $y = (y^1, \ldots, y^m)$ are m-dimensional vectors,
$x = y$ means $x^i = y^i$ for all $i$,
$x \ge y$ means $x^i \ge y^i$ for all $i$,
$x > y$ means $x^i \ge y^i$ for all $i$ and $x \neq y$,
$x \gg y$ means $x^i > y^i$ for all $i$.
Hence,
$$\mathbb{R}^m_+ = \{x \in \mathbb{R}^m \mid x \ge 0\} \quad \text{and} \quad \mathbb{R}^m_{++} = \{x \in \mathbb{R}^m \mid x \gg 0\}.$$

• $Du_i(x)$ represents the vector of first derivatives, i.e.
$$Du_i(x) = \left(\frac{\partial u_i(x)}{\partial x_i^1}, \frac{\partial u_i(x)}{\partial x_i^2}, \ldots, \frac{\partial u_i(x)}{\partial x_i^m}\right).$$

• Finally, if $x \in \mathbb{R}^m$, then the norm of $x$ is defined as
$$\|x\| = \left(\sum_{i=1}^{m} (x^i)^2\right)^{1/2}.$$
Given a set of prices $p = (p^1, p^2, \ldots, p^m)$, consumers in this economy try to solve the following problem:
$$\max_{x_i} u_i(x_i) \qquad (1)$$
subject to
$$p(x_i - w_i) \le 0, \quad \text{and} \quad x_i \in X. \qquad (2)$$
Here $x_i$ is the consumption bundle that the consumer chooses, given a price vector $p$ and the consumer's endowment vector $w_i$.
Hence, each individual tries to find the best possible consumption bundle $x_i = \left(x_i^1, x_i^2, \ldots, x_i^m\right)$ and is constrained by the value of his/her available resources. The budget constraint can be written more explicitly as
$$p x_i = p^1 x_i^1 + p^2 x_i^2 + \cdots + p^m x_i^m \le p^1 w_i^1 + p^2 w_i^2 + \cdots + p^m w_i^m = p w_i.$$
Remark 2 Note that since $p$ and $x_i$ are vectors, we should write $p' x_i$ (with $p'$ representing the transpose of $p$) to represent an inner product. Here I adopt a simpler notation and do not differentiate between row and column vectors. It is obvious that we mean an inner product of two vectors when we write $p x_i$.
Given our assumptions, there is a unique interior solution to this problem. This solution consists of m equations for each consumer:
$$x_i^1 = w_i^1 + f_i^1(p, p w_i)$$
$$x_i^2 = w_i^2 + f_i^2(p, p w_i)$$
$$\vdots$$
$$x_i^m = w_i^m + f_i^m(p, p w_i).$$
The consumer chooses $x_i$ given initial endowments. All we care about, however, are the functions $f_i^j$ representing the excess demand or excess supply of each good for each consumer. For each consumer, then, we will represent the optimal decisions as
$$x_i = w_i + f_i(p),$$
where we drop $w_i$ from $f_i$ as an argument, since it is given and its value is known to each consumer.
Note that the first order conditions are necessary and sufficient to characterize $x_i$ (since we are maximizing a strictly concave function on a convex set, a solution exists and it is unique). Hence, $x_i$ is the solution to consumer i's problem if and only if it satisfies
$$Du_i(x_i) = \lambda_i p, \qquad (3)$$
and
$$p x_i = p w_i. \qquad (4)$$
Here $\lambda_i$ is the Lagrange multiplier associated with the consumer's budget constraint. Note that $Du_i(x_i) = \lambda_i p$ is a set of m equations for each $i$, since there is one derivative and one price for each good. Figure 2 represents the optimal choice of a consumer for a two-good case.
Figure 2: Optimal Choice (the consumer moves from the endowment point $w$ to the optimal bundle $x$, with excess supply of good 1 and excess demand for good 2)
Given this setup, what do we know about $f_i$? We can state the following:

1. It is continuous. This follows from our assumptions on preferences. Note that in Figure 2 the optimal choice will change continuously as we change the prices, since the indifference curves are strictly convex.

2. It is bounded below.

3. Its value is zero, i.e.
$$p f_i(p) = 0.$$
This simply follows from monotonicity. With monotonicity people will be on their budget sets.

4. It is homogeneous of degree 0:
$$f_i(\lambda p) = f_i(p), \quad \text{for all } \lambda > 0.$$
This implies that only relative prices matter. Hence, if we multiply all the prices by a constant, the optimal choice does not change.

5. If $p^n \to p$, where some $p^j = 0$, then $\|f_i(p^n)\| \to \infty$. This implies that if the price of a good is zero, its demand will be infinite.
The aggregate excess demand function is then given by
$$f(p) = \sum_{i=1}^{n} f_i(p) = \left(f^1(p) = \sum_{i=1}^{n} f_i^1(p),\ f^2(p) = \sum_{i=1}^{n} f_i^2(p),\ \ldots,\ f^m(p) = \sum_{i=1}^{n} f_i^m(p)\right).$$
Given the properties of $f_i$, it is immediate that $f$ has the following properties as well:

1. It is continuous.

2. It is bounded below.

3. Its value is zero, i.e.
$$p f(p) = 0.$$

4. It is homogeneous of degree 0:
$$f(\lambda p) = f(p), \quad \text{for all } \lambda > 0.$$

5. If $p^n \to p$, where some $p^j = 0$, then $\|f(p^n)\| \to \infty$.
Since only relative prices matter, we can normalize prices. This will reduce the number of prices that we have to find by one. One normalization is to restrict prices to be in the unit simplex, defined as:
$$\Delta = \left\{p \in \mathbb{R}^m_{++} \ \middle|\ \sum_{j=1}^{m} p^j = 1\right\}.$$
Since $p f(p) = 0$, if all but one market is in equilibrium (i.e. has excess demand of zero), the remaining market must be in equilibrium as well. This is called Walras' Law, and it allows us to focus on $m-1$ markets rather than $m$. Then the fundamental question is whether we can find prices such that all markets are in equilibrium, i.e.
$$f(p^*) = 0.$$
Before going into the details of finding equilibrium prices, let's look at the consumer's problem in more detail. The following is the Lagrangian for the consumer's problem:
$$\mathcal{L} = u_i(x_i) + \lambda_i (p w_i - p x_i).$$
Then we have the following set of FOCs:
$$Du_i(x_i) = \lambda_i p.$$
Hence for each good $j$ and for each individual $i$ we have
$$\frac{\partial u_i(x_i)}{\partial x_i^j} = \lambda_i p^j.$$
The optimal decision for a consumer is therefore characterized by the familiar condition that for any two goods the marginal rate of substitution must be equal to the ratio of prices, i.e.
$$\frac{\partial u_i(x_i)/\partial x_i^h}{\partial u_i(x_i)/\partial x_i^k} = \frac{p^h}{p^k}, \quad \text{for all } i, h, k.$$
We also know that since any two agents face the same prices, their marginal rates of substitution must be the same for any two goods, i.e.
$$\frac{\partial u_i(x_i)/\partial x_i^h}{\partial u_i(x_i)/\partial x_i^k} = \frac{\partial u_j(x_j)/\partial x_j^h}{\partial u_j(x_j)/\partial x_j^k}, \quad \text{for all } i, j, h, k.$$
We now introduce some definitions that will simplify our exposition.

Definition 3 A consumer is a pair $e_i = (u_i, w_i)$.

Definition 4 An exchange economy, $\mathcal{E}$, is an n-tuple $(e_1, \ldots, e_n)$.

Definition 5 An allocation is a vector of consumption bundles
$$x = (x_1, x_2, \ldots, x_n) = \left(\left(x_1^1, x_1^2, \ldots, x_1^m\right), \left(x_2^1, x_2^2, \ldots, x_2^m\right), \ldots, \left(x_n^1, x_n^2, \ldots, x_n^m\right)\right).$$

Definition 6 An allocation is feasible if
$$\sum_{i=1}^{n} x_i \le \sum_{i=1}^{n} w_i.$$
Definition 7 A Walrasian (competitive) equilibrium for $\mathcal{E}$ is an allocation $x^*$ and a price vector $p^*$ such that

1. The allocation $x_i^*$ solves agent i's problem given $p^*$, i.e. $x_i^*$ maximizes $u_i$ subject to $p^*(x_i^* - w_i) \le 0$, for all $i$.

2. Markets clear, i.e.
$$\sum_{i=1}^{n} x_i^* \le \sum_{i=1}^{n} w_i. \qquad (5)$$
Figure 3: Edgeworth Box (the two agents' consumption is measured from opposite corners; the sides of the box are the total endowments of the two goods)

Figure 4: Walrasian Equilibrium (agent 1's excess supply of good 1 equals agent 2's excess demand for good 1, and similarly for good 2)
We will next look at a simple example of an economy with two agents and two goods. When we have a 2×2 economy, we can represent the total resources of this economy as a box (as in Figure 3). Such a box is called an Edgeworth Box. Given an initial endowment point $w$, we know that the equilibrium consumption and prices are then given by the tangency of two indifference curves (such as point $x$ in Figure 4).
Example 8 Consider the following version of a finite dimensional exchange economy with two goods and two agents. Utility functions take the form
$$u_i(x_i) = \begin{cases} \left(x_i^1\right)^{\alpha} \left(x_i^2\right)^{1-\alpha}, & \text{for } i = 1 \\ \left(x_i^1\right)^{\beta} \left(x_i^2\right)^{1-\beta}, & \text{for } i = 2 \end{cases}$$
where $\alpha, \beta \in (0,1)$. Endowments are given by $w_1 = (1, 0)$ and $w_2 = (0, 1)$. To find the demand functions of agents 1 and 2 for goods 1 and 2 (as functions of prices and endowments), we first need to set up the optimization problem for an agent. For agent 1, demands for goods 1 and 2 are chosen to solve the following problem:
$$\max_{\{x_1^1, x_1^2\}} \left(x_1^1\right)^{\alpha} \left(x_1^2\right)^{1-\alpha},$$
subject to
$$p^1 x_1^1 + p^2 x_1^2 = p^1.$$
Let $\lambda_1$ be the Lagrange multiplier associated with this budget constraint. Then it is easy to see that the FOCs are given by
$$\alpha \left(x_1^1\right)^{\alpha-1} \left(x_1^2\right)^{1-\alpha} = \lambda_1 p^1,$$
and
$$(1-\alpha) \left(x_1^1\right)^{\alpha} \left(x_1^2\right)^{-\alpha} = \lambda_1 p^2.$$
After dividing these two equations and rearranging terms, we arrive at
$$x_1^2 = \frac{p^1}{p^2} \left(\frac{1-\alpha}{\alpha}\right) x_1^1,$$
which can then be substituted into the budget constraint:
$$p^1 x_1^1 + p^2 \frac{p^1}{p^2} \left(\frac{1-\alpha}{\alpha}\right) x_1^1 = p^1.$$
Hence,
$$x_1^1 + \left(\frac{1-\alpha}{\alpha}\right) x_1^1 = 1,$$
or
$$x_1^1 = \alpha, \quad x_1^2 = (1-\alpha) \frac{p^1}{p^2}.$$
Similarly, for the second consumer we will get
$$x_2^1 = \beta \frac{p^2}{p^1}.$$
The market clearing condition for good 1 is
$$x_1^1 + x_2^1 = 1,$$
hence
$$f^1(p) = x_1^1 + x_2^1 - 1.$$
Then $f^1(p) = 0$ implies
$$\alpha + \beta \frac{p^2}{p^1} = 1,$$
or
$$\frac{p^2}{p^1} = \frac{1-\alpha}{\beta}.$$
As a normalization, let $p^2 = 1 - p^1$; then
$$\frac{1-p^1}{p^1} = \frac{1-\alpha}{\beta}.$$
Our equilibrium prices are given by
$$p^1 = \frac{\beta}{1-\alpha+\beta} \quad \text{and} \quad p^2 = 1 - \frac{\beta}{1-\alpha+\beta} = \frac{1-\alpha+\beta-\beta}{1-\alpha+\beta} = \frac{1-\alpha}{1-\alpha+\beta}.$$
Hence, the optimal consumption decisions are:
$$x_1^1 = \alpha, \quad x_1^2 = (1-\alpha)\frac{p^1}{p^2} = (1-\alpha)\frac{\beta/(1-\alpha+\beta)}{(1-\alpha)/(1-\alpha+\beta)} = (1-\alpha)\frac{\beta}{1-\alpha} = \beta.$$
Now let, for example, $\alpha = \beta = 0.5$; then we have
$$f^1(p^1) = \alpha + \beta \frac{1-p^1}{p^1} - 1 = 0.5\,\frac{1-p^1}{p^1} - 0.5 = 0.5\left(\frac{1-p^1}{p^1} - 1\right) = 0.5\left(\frac{1-p^1-p^1}{p^1}\right) = \frac{0.5-p^1}{p^1}.$$
Figure 5 shows what $f^1(p^1)$ looks like. Since it crosses the horizontal axis, there is an equilibrium price. Furthermore, this price is unique.
Figure 5: Excess Demand Function
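The uniqueness claim is easy to verify numerically. The following sketch (a minimal Python illustration, not part of the original notes) solves $f^1(p^1) = 0$ by bisection for the case $\alpha = \beta = 0.5$ and recovers the closed-form price $p^1 = \beta/(1-\alpha+\beta) = 0.5$:

```python
# Excess demand for good 1 in Example 8, after the normalization p2 = 1 - p1:
# f1(p1) = alpha + beta * (1 - p1) / p1 - 1.
def excess_demand_good1(p1, alpha=0.5, beta=0.5):
    return alpha + beta * (1.0 - p1) / p1 - 1.0

def bisect(f, lo, hi, tol=1e-12):
    # Excess demand is strictly decreasing in p1, so f(lo) > 0 > f(hi).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p1_star = bisect(excess_demand_good1, 1e-6, 1.0 - 1e-6)
# Closed form: p1* = beta / (1 - alpha + beta) = 0.5 when alpha = beta = 0.5.
```

Because excess demand is monotone in $p^1$ here, the bisection bracket can only contain one root, mirroring the uniqueness argument in the text.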
What we would like to know is whether a solution to $f(p) = 0$ exists in our more general setup. The following proposition states that such solutions indeed exist.

Theorem 9 If $f : \Delta \to \mathbb{R}^m$ is a continuous function that satisfies Walras' Law, $p f(p) = 0$, then there exists a $p^*$ in $\Delta$ such that $f(p^*) = 0$.

Proof. See Varian (1992).

We now know that an equilibrium exists. Next we would like to know if the equilibrium satisfies some efficiency conditions. The condition we will use is called Pareto optimality.
Definition 10 An allocation $x$ is Pareto optimal if it is feasible and there is no other feasible allocation $x'$ such that $u_j(x'_j) \ge u_j(x_j)$ for all $j$ and $u_j(x'_j) > u_j(x_j)$ for at least one $j$.
We will first show that every competitive equilibrium is Pareto optimal.

Theorem 11 (First Welfare Theorem) Every competitive equilibrium is Pareto Optimal.
Proof. Suppose $x$ is a competitive equilibrium allocation with price vector $p$. Suppose that the statement is not true, i.e. there is another feasible allocation $x'$ that is preferred to $x$ by at least one consumer. Since $x'$ is feasible, we know that
$$\sum_{i=1}^{n} x'_i \le \sum_{i=1}^{n} w_i.$$
If we write this explicitly we have:
$$x'_1 + x'_2 + \cdots + x'_n \le w_1 + w_2 + \cdots + w_n.$$
Note that this is a set of m inequalities, one per good:
$$x_1'^1 + x_2'^1 + \cdots + x_n'^1 \le w_1^1 + w_2^1 + \cdots + w_n^1, \qquad (6)$$
$$x_1'^2 + x_2'^2 + \cdots + x_n'^2 \le w_1^2 + w_2^2 + \cdots + w_n^2,$$
$$\vdots$$
$$x_1'^m + x_2'^m + \cdots + x_n'^m \le w_1^m + w_2^m + \cdots + w_n^m.$$
For the consumer who strictly prefers $x'$, it must lie outside his/her budget set at prices $p$; otherwise he/she would have chosen it in the first place. Then
$$p x'_j > p w_j,$$
or
$$p^1 x_j'^1 + p^2 x_j'^2 + \cdots + p^m x_j'^m > p^1 w_j^1 + p^2 w_j^2 + \cdots + p^m w_j^m.$$
For all other consumers $i \neq j$, we have $p x'_i \ge p w_i$ (a bundle that is weakly preferred to the optimal choice cannot cost strictly less, given strict monotonicity). Therefore,
$$\sum_{i=1}^{n} p x'_i > \sum_{i=1}^{n} p w_i.$$
If we write this explicitly we have:
$$p x'_1 + p x'_2 + \cdots + p x'_n > p w_1 + p w_2 + \cdots + p w_n,$$
or
$$\left(p^1 x_1'^1 + p^2 x_1'^2 + \cdots + p^m x_1'^m\right) + \left(p^1 x_2'^1 + p^2 x_2'^2 + \cdots + p^m x_2'^m\right) + \cdots + \left(p^1 x_n'^1 + p^2 x_n'^2 + \cdots + p^m x_n'^m\right) >$$
$$\left(p^1 w_1^1 + p^2 w_1^2 + \cdots + p^m w_1^m\right) + \left(p^1 w_2^1 + p^2 w_2^2 + \cdots + p^m w_2^m\right) + \cdots + \left(p^1 w_n^1 + p^2 w_n^2 + \cdots + p^m w_n^m\right).$$
Rearranging terms we get:
$$p^1\left[x_1'^1 + x_2'^1 + \cdots + x_n'^1\right] + p^2\left[x_1'^2 + x_2'^2 + \cdots + x_n'^2\right] + \cdots + p^m\left[x_1'^m + x_2'^m + \cdots + x_n'^m\right] > \qquad (7)$$
$$p^1\left[w_1^1 + w_2^1 + \cdots + w_n^1\right] + p^2\left[w_1^2 + w_2^2 + \cdots + w_n^2\right] + \cdots + p^m\left[w_1^m + w_2^m + \cdots + w_n^m\right].$$
But then we have a contradiction: if we multiply each of the m inequalities in (6) by the corresponding price and then sum the right and left sides of (6), we get the opposite of (7).
Next we would like to know if every Pareto optimal allocation is also a competitive equilibrium. In order to show this we first need to characterize the set of Pareto optimal allocations. Remember that an allocation $x$ is Pareto optimal if it is feasible and if there is no way to reallocate $x$ to make at least one person strictly better off without making anyone else worse off. Our claim is that any such allocation must be a solution to the following problem (you can try to show why this claim is correct):
$$\max_{x} \sum_{i=1}^{n} \alpha_i u_i(x_i), \quad \text{with } \sum_{i=1}^{n} \alpha_i = 1,$$
subject to
$$\sum_{i=1}^{n} x_i \le \sum_{i=1}^{n} w_i \quad \text{and} \quad x_i \ge 0.$$
This is a planner's problem, with $\alpha$ representing the weights of the different agents in the planner's objective. Note that we have a set of Pareto optimal allocations for each possible value of $\alpha$. The solution to this planning problem is characterized by two sets of equations:
$$\alpha_i Du_i(x_i) = \pi, \qquad (8)$$
where $\pi$ is the Lagrange multiplier for the planner's resource constraint; and
$$\sum_{i=1}^{n} w_i = \sum_{i=1}^{n} x_i, \qquad (9)$$
which is the feasibility constraint.
Now let's go back to the competitive allocations. Competitive allocations are characterized by
$$Du_i(x_i) = \lambda_i p, \qquad (10)$$
where $\lambda_i$ is the Lagrange multiplier for agent $i$ and $p$ is the set of prices; by
$$p(w_i - x_i) = 0, \qquad (11)$$
which is the budget constraint for individual $i$; and by
$$\sum_{i=1}^{n} w_i = \sum_{i=1}^{n} x_i, \qquad (12)$$
which is the feasibility constraint.

Note that (8) is a set of $n \cdot m$ equations, and that (9) is a set of m equations. Similarly, (10) is a set of $n \cdot m$ equations and (12) is a set of m equations. Furthermore, (9) and (12) are identical. If we set $p = \pi$ (i.e. set prices equal to the Lagrange multiplier for the constraint for each good), and $\alpha_i = \frac{1}{\lambda_i}$, then (8) and (10) are identical as well. Then, whether a Pareto optimal allocation can be decentralized comes down to whether the social planner can make sure that, at prices $\pi$, the planner's allocation is affordable for each consumer. This might require a redistribution of resources, defined by the following transfer functions:
$$t_i(\alpha) = \pi \left(x_i - w_i\right).$$
Theorem 12 (Second Welfare Theorem) Every Pareto Optimal allocation can be decentralized as a competitive equilibrium, i.e. given a Pareto Optimal allocation $x$, we can find a price vector $p$ and transfers $t_i$ such that: given the initial endowments and these transfers, $x$ is a competitive allocation with prices $p$.
Before we analyze some particular examples, note that the first order conditions for the planner's problem imply
$$\alpha_i \frac{\partial u_i(x_i)}{\partial x_i^j} = \pi^j \quad \text{for all } i, j.$$
Then we have
$$\frac{\partial u_i(x_i)/\partial x_i^h}{\partial u_i(x_i)/\partial x_i^k} = \frac{\partial u_j(x_j)/\partial x_j^h}{\partial u_j(x_j)/\partial x_j^k} = \frac{\pi^h}{\pi^k},$$
and in a Pareto Optimal allocation the marginal rate of substitution between any two goods must be the same for any two consumers. Figure 6 illustrates the set of Pareto Optimal allocations for a 2×2 economy. Figure 7 shows the basic idea behind the second welfare theorem. Given any Pareto optimal allocation $w^*$, we can find the prices that support this allocation and move from the original endowment point $w$ to $w^*$ using transfers.
Example 13 Consider again the following version of a finite dimensional exchange economy with two goods and two agents. Utility functions take the form
$$u_i(x_i) = \begin{cases} \left(x_i^1\right)^{\alpha} \left(x_i^2\right)^{1-\alpha}, & \text{for } i = 1 \\ \left(x_i^1\right)^{\beta} \left(x_i^2\right)^{1-\beta}, & \text{for } i = 2 \end{cases}$$
where $\alpha, \beta \in (0,1)$. Endowments are given by $w_i = (a_i, b_i)$ for $i = 1, 2$, with $a_i > 0$ and $b_i > 0$. To find the demand functions of agents 1 and 2 for goods 1 and 2 (as functions of prices and endowments), we first need to set up the optimization problem for an agent. For agent 1, demands for goods 1 and 2 are chosen to solve the following problem:
$$\max_{\{x_1^1, x_1^2\}} \left(x_1^1\right)^{\alpha} \left(x_1^2\right)^{1-\alpha},$$
subject to
$$p^1 x_1^1 + p^2 x_1^2 = p^1 a_1 + p^2 b_1.$$
Figure 6: Pareto Optimal Allocations

Figure 7: Second Welfare Theorem (transfers move the endowment from $w$ to $w^*$)
Let $\lambda$ be the Lagrange multiplier associated with this budget constraint. Then it is easy to see that the FOCs are given by
$$\alpha \left(x_1^1\right)^{\alpha-1} \left(x_1^2\right)^{1-\alpha} = \lambda p^1,$$
and
$$(1-\alpha) \left(x_1^1\right)^{\alpha} \left(x_1^2\right)^{-\alpha} = \lambda p^2.$$
After dividing these two equations you can arrive at
$$x_1^2 = \frac{p^1}{p^2} \frac{1-\alpha}{\alpha} x_1^1,$$
which can then be used to find $x_1^1$ and $x_1^2$ by substituting it into the budget constraint:
$$x_1^1 = \alpha \frac{p^1 a_1 + p^2 b_1}{p^1}, \quad x_1^2 = (1-\alpha) \frac{p^1 a_1 + p^2 b_1}{p^2}.$$
Similarly, for the second consumer we will get
$$x_2^1 = \beta \frac{p^1 a_2 + p^2 b_2}{p^1}, \quad x_2^2 = (1-\beta) \frac{p^1 a_2 + p^2 b_2}{p^2}.$$
Market clearing conditions are
$$\alpha \frac{p^1 a_1 + p^2 b_1}{p^1} + \beta \frac{p^1 a_2 + p^2 b_2}{p^1} = a_1 + a_2,$$
and
$$(1-\alpha) \frac{p^1 a_1 + p^2 b_1}{p^2} + (1-\beta) \frac{p^1 a_2 + p^2 b_2}{p^2} = b_1 + b_2.$$
If we use one of these equations and normalize prices by $p^2 = 1 - p^1$, we will get
$$p^1 = \frac{\alpha b_1 + \beta b_2}{(1-\alpha) a_1 + \alpha b_1 + (1-\beta) a_2 + \beta b_2}.$$
In order to characterize Pareto optimal allocations, first note that
$$MRS_1 = \frac{\alpha}{1-\alpha} \frac{x_1^2}{x_1^1}, \quad \text{and} \quad MRS_2 = \frac{\beta}{1-\beta} \frac{x_2^2}{x_2^1}.$$
Since
$$x_2^1 = a_1 + a_2 - x_1^1 \quad \text{and} \quad x_2^2 = b_1 + b_2 - x_1^2,$$
we can define Pareto optimal allocations as those allocations that satisfy
$$\frac{\alpha}{1-\alpha} \frac{x_1^2}{x_1^1} = \frac{\beta}{1-\beta} \frac{b_1 + b_2 - x_1^2}{a_1 + a_2 - x_1^1}. \qquad (13)$$
Note that equation (13) provides a complete characterization of all Pareto optimal allocations. Any division of the total resources between the two agents that satisfies it is a Pareto optimal allocation.
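As a numerical cross-check (a sketch, not part of the original notes), we can verify that the competitive allocation derived above satisfies condition (13): at the market-clearing price both agents' marginal rates of substitution coincide. The parameter and endowment values below are hypothetical sample choices.

```python
# Hypothetical parameters: alpha, beta and endowments w1 = (a1, b1), w2 = (a2, b2).
alpha, beta = 0.3, 0.7
a1, b1, a2, b2 = 2.0, 1.0, 1.0, 3.0

# Equilibrium price of good 1 under the normalization p2 = 1 - p1.
p1 = (alpha * b1 + beta * b2) / ((1 - alpha) * a1 + alpha * b1 + (1 - beta) * a2 + beta * b2)
p2 = 1.0 - p1

m1 = p1 * a1 + p2 * b1          # agent 1's wealth
m2 = p1 * a2 + p2 * b2          # agent 2's wealth

# Cobb-Douglas demands derived in the text.
x11, x12 = alpha * m1 / p1, (1 - alpha) * m1 / p2
x21, x22 = beta * m2 / p1, (1 - beta) * m2 / p2

# Marginal rates of substitution, as in condition (13).
mrs1 = alpha / (1 - alpha) * x12 / x11
mrs2 = beta / (1 - beta) * x22 / x21
```

Both markets clear at these demands, and mrs1 = mrs2 = $p^1/p^2$, so the equilibrium allocation sits on the Pareto set defined by (13).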
Example 14 Consider a 2×2 exchange economy. There are 2 goods and 2 individuals, and preferences and initial endowments are as follows:
$$u_1(x_1^1, x_1^2) = a \log(x_1^1) + (1-a) \log(x_1^2), \quad a \in (0,1),$$
$$u_2(x_2^1, x_2^2) = b \log(x_2^1) + (1-b) \log(x_2^2), \quad b \in (0,1),$$
and
$$w_1 = (w_1^1, w_1^2) = (1, 1), \quad w_2 = (w_2^1, w_2^2) = (2, 2).$$
Then individual 1's problem is
$$\max_{x_1^1, x_1^2} u_1(x_1^1, x_1^2),$$
subject to
$$p^1 x_1^1 + p^2 x_1^2 \le p^1 w_1^1 + p^2 w_1^2, \quad x_1^1 \ge 0, \quad x_1^2 \ge 0.$$
The Lagrangian for this problem is:
$$\mathcal{L}(x_1^1, x_1^2) = a \log(x_1^1) + (1-a) \log(x_1^2) - \lambda_1 \left[p^1 x_1^1 + p^2 x_1^2 - p^1 w_1^1 - p^2 w_1^2\right],$$
and the first order conditions (FOCs) are:
$$x_1^1: \quad \frac{a}{x_1^1} - \lambda_1 p^1 = 0 \implies \frac{a}{x_1^1} = \lambda_1 p^1,$$
and
$$x_1^2: \quad \frac{1-a}{x_1^2} - \lambda_1 p^2 = 0 \implies \frac{1-a}{x_1^2} = \lambda_1 p^2.$$
Hence, we get
$$x_1^2 = \frac{p^1}{p^2} x_1^1 \frac{1-a}{a}.$$
Substituting into the budget constraint, we have
$$x_1^1 = \frac{a(p^1 w_1^1 + p^2 w_1^2)}{p^1} = \frac{a(p^1 + p^2)}{p^1},$$
and
$$x_1^2 = \frac{(1-a)(p^1 w_1^1 + p^2 w_1^2)}{p^2} = \frac{(1-a)(p^1 + p^2)}{p^2}.$$
Similarly, the second individual's Lagrangian is:
$$\mathcal{L}(x_2^1, x_2^2) = b \log(x_2^1) + (1-b) \log(x_2^2) - \lambda_2 \left[p^1 x_2^1 + p^2 x_2^2 - p^1 w_2^1 - p^2 w_2^2\right],$$
with FOCs:
$$x_2^1: \quad \frac{b}{x_2^1} - \lambda_2 p^1 = 0 \implies \frac{b}{x_2^1} = \lambda_2 p^1,$$
and
$$x_2^2: \quad \frac{1-b}{x_2^2} - \lambda_2 p^2 = 0 \implies \frac{1-b}{x_2^2} = \lambda_2 p^2.$$
These imply
$$x_2^2 = \frac{p^1}{p^2} x_2^1 \frac{1-b}{b}.$$
Again substituting into the budget constraint, we get:
$$x_2^1 = \frac{b(p^1 w_2^1 + p^2 w_2^2)}{p^1} = \frac{b(2p^1 + 2p^2)}{p^1},$$
and
$$x_2^2 = \frac{(1-b)(p^1 w_2^1 + p^2 w_2^2)}{p^2} = \frac{(1-b)(2p^1 + 2p^2)}{p^2}.$$
Then the aggregate excess demand for good 1 is:
$$x_1^1 + x_2^1 - 3 = \frac{(a + 2b)(p^1 + p^2)}{p^1} - 3.$$
If we let $p^1 + p^2 = 1$, equate the excess demand to zero, and solve the two equations simultaneously, we get
$$p^1 = \frac{a + 2b}{3},$$
and
$$p^2 = \frac{3 - (a + 2b)}{3}.$$
Then, plugging these into the demand functions of each individual gives us the competitive allocations:
$$x_1^1 = \frac{3a}{a + 2b} \quad \text{and} \quad x_1^2 = \frac{3(1-a)}{3 - (a + 2b)},$$
and
$$x_2^1 = \frac{6b}{a + 2b} \quad \text{and} \quad x_2^2 = \frac{6(1-b)}{3 - (a + 2b)}.$$
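These closed forms are easy to verify numerically. The sketch below (with hypothetical sample values a = 0.4, b = 0.6; any values in (0,1) should work) checks that the demands at the candidate prices clear both markets:

```python
a, b = 0.4, 0.6                          # hypothetical preference parameters

p1 = (a + 2 * b) / 3.0                   # equilibrium prices, with p1 + p2 = 1
p2 = (3.0 - (a + 2 * b)) / 3.0

# Demand functions derived in the text; endowments are w1 = (1,1), w2 = (2,2).
x11 = a * (p1 + p2) / p1                 # = 3a / (a + 2b)
x12 = (1 - a) * (p1 + p2) / p2           # = 3(1-a) / (3 - (a + 2b))
x21 = b * (2 * p1 + 2 * p2) / p1         # = 6b / (a + 2b)
x22 = (1 - b) * (2 * p1 + 2 * p2) / p2   # = 6(1-b) / (3 - (a + 2b))
```

Adding the two agents' demands for each good reproduces the total endowments (3, 3), confirming that both market clearing conditions hold at these prices.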
The social planner's problem is given by
$$\max_{(x_1^1, x_1^2, x_2^1, x_2^2)} \ \alpha\left[a \log(x_1^1) + (1-a) \log(x_1^2)\right] + (1-\alpha)\left[b \log(x_2^1) + (1-b) \log(x_2^2)\right],$$
subject to:
$$x_1^1 + x_2^1 = 3, \quad x_1^2 + x_2^2 = 3,$$
and
$$x_1^1 \ge 0, \quad x_1^2 \ge 0, \quad x_2^1 \ge 0, \quad x_2^2 \ge 0.$$
Then the Lagrangian for this problem is:
$$\mathcal{L}(x_1^1, x_1^2, x_2^1, x_2^2) = \alpha\left[a \log(x_1^1) + (1-a) \log(x_1^2)\right] + (1-\alpha)\left[b \log(x_2^1) + (1-b) \log(x_2^2)\right]$$
$$- \pi^1\left[x_1^1 + x_2^1 - 3\right] - \pi^2\left[x_1^2 + x_2^2 - 3\right],$$
and the FOCs are:
$$x_1^1: \quad \frac{\alpha a}{x_1^1} = \pi^1,$$
$$x_1^2: \quad \frac{\alpha(1-a)}{x_1^2} = \pi^2,$$
$$x_2^1: \quad \frac{(1-\alpha)b}{x_2^1} = \pi^1,$$
and
$$x_2^2: \quad \frac{(1-\alpha)(1-b)}{x_2^2} = \pi^2.$$
Therefore we have
$$\frac{\alpha a}{x_1^1} = \frac{(1-\alpha)b}{3 - x_1^1},$$
and
$$\frac{\alpha(1-a)}{x_1^2} = \frac{(1-\alpha)(1-b)}{3 - x_1^2}.$$
Hence,
$$x_1^1 = \frac{3\alpha a}{(1-\alpha)b + \alpha a}, \quad x_1^2 = \frac{3\alpha(1-a)}{\alpha(1-a) + (1-\alpha)(1-b)},$$
$$x_2^1 = \frac{3(1-\alpha)b}{(1-\alpha)b + \alpha a}, \quad x_2^2 = \frac{3(1-\alpha)(1-b)}{\alpha(1-a) + (1-\alpha)(1-b)}.$$
By substituting these into the FOCs of the problem, we can find the values of the Lagrange multipliers in the planner's problem as
$$\pi^1 = \frac{(1-\alpha)b + \alpha a}{3}, \quad \text{and} \quad \pi^2 = \frac{\alpha(1-a) + (1-\alpha)(1-b)}{3}.$$
Note that it is very easy to show that when $\alpha = \frac{1}{3}$, the solutions to the social planner's problem will be the same as the Walrasian equilibrium allocations, and $\pi^1$ and $\pi^2$ will be the same as the Walrasian prices (up to a normalization, since only relative prices matter). Hence, our competitive equilibrium is Pareto optimal.
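The claim can be checked numerically. The following sketch (with hypothetical sample values a = 0.4, b = 0.6) confirms that with α = 1/3 the planner's allocation coincides with the Walrasian one, and that the multipliers are proportional to the Walrasian prices:

```python
a, b = 0.4, 0.6                 # hypothetical preference parameters
alpha = 1.0 / 3.0               # planner weight on agent 1

# Planner's solution from the FOCs above.
x11_p = 3 * alpha * a / ((1 - alpha) * b + alpha * a)
x12_p = 3 * alpha * (1 - a) / (alpha * (1 - a) + (1 - alpha) * (1 - b))
pi1 = ((1 - alpha) * b + alpha * a) / 3.0

# Walrasian equilibrium from Example 14.
p1 = (a + 2 * b) / 3.0
x11_w = 3 * a / (a + 2 * b)
x12_w = 3 * (1 - a) / (3 - (a + 2 * b))
# pi1 turns out to equal p1 / 3: the multipliers are proportional to prices.
```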
We also know that the Lagrange multipliers for the social planner's problem can be used as competitive equilibrium prices to decentralize any Pareto optimal allocation. So for any α, we should be able to find prices and transfers that decentralize a Pareto optimal allocation as a competitive equilibrium.

Let, for example, $\alpha = \frac{1}{2}$. Then the Pareto optimal allocations are:
$$x_1^1 = \frac{3a}{b + a} \quad \text{and} \quad x_1^2 = \frac{3(1-a)}{(1-a) + (1-b)}, \qquad (14)$$
for consumer 1, and
$$x_2^1 = \frac{3b}{b + a} \quad \text{and} \quad x_2^2 = \frac{3(1-b)}{(1-a) + (1-b)}, \qquad (15)$$
for consumer 2. The Lagrange multipliers in the planner's problem are:
$$\pi^1 = \frac{b + a}{6} \quad \text{and} \quad \pi^2 = \frac{(1-a) + (1-b)}{6}. \qquad (16)$$
Note that the above allocations are not the same as the ones we got when α = 1/3. Our claim is that if prices are given by (16), and people can afford to choose the allocations defined in (14) and (15), then they will indeed do so as their optimal choices. We first need to transfer goods between individuals and change their initial endowments to the above allocations. The transfer functions (in terms of α) can be defined as:
$$tr_1(\alpha) = \left(\frac{3\alpha a}{(1-\alpha)b + \alpha a} - 1,\ \frac{3\alpha(1-a)}{\alpha(1-a) + (1-\alpha)(1-b)} - 1\right),$$
and
$$tr_2(\alpha) = \left(\frac{3(1-\alpha)b}{(1-\alpha)b + \alpha a} - 2,\ \frac{3(1-\alpha)(1-b)}{\alpha(1-a) + (1-\alpha)(1-b)} - 2\right).$$
The first one is the transfer to individual 1 and the second one is the transfer to individual 2. Notice that these terms should sum to zero, good by good. One can plug in 1/2 for α and find the transfers for the above case.
Hence, we can now answer the following question. Suppose prices are given by (16), and individual 1's endowments are:
$$1 + \left(\frac{3 \cdot \frac{1}{2} a}{(1-\frac{1}{2})b + \frac{1}{2}a} - 1\right) = \frac{3a}{b + a} \quad \text{of good 1,}$$
and
$$1 + \left(\frac{3 \cdot \frac{1}{2}(1-a)}{\frac{1}{2}(1-a) + \frac{1}{2}(1-b)} - 1\right) = \frac{3(1-a)}{(1-a) + (1-b)} \quad \text{of good 2;}$$
what would be his/her choice in a competitive economy? We already know that consumer 1's optimal choice for a given set of prices is
$$x_1^1 = \frac{a(p^1 w_1^1 + p^2 w_1^2)}{p^1}.$$
Then we have, for example:
$$x_1^1 = \frac{a\left(\frac{b+a}{6} \cdot \frac{3a}{b+a} + \frac{(1-a)+(1-b)}{6} \cdot \frac{3(1-a)}{(1-a)+(1-b)}\right)}{\frac{b+a}{6}} = a\left(\frac{3a}{6} + \frac{3(1-a)}{6}\right)\frac{6}{a+b} = a\left(\frac{1}{2}\right)\frac{6}{a+b} = \frac{3a}{a+b},$$
which is exactly the planner's allocation of good 1 to individual 1.
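The whole decentralization argument for α = 1/2 can be sketched in a few lines of Python (with hypothetical sample values a = 0.4, b = 0.6): after the transfer moves agent 1's endowment to the planner's allocation, the agent's demand at the multiplier prices reproduces exactly that allocation.

```python
a, b = 0.4, 0.6                              # hypothetical preference parameters

# Planner allocation and multipliers for alpha = 1/2, equations (14)-(16).
x11_star = 3 * a / (a + b)
x12_star = 3 * (1 - a) / ((1 - a) + (1 - b))
pi1 = (a + b) / 6.0
pi2 = ((1 - a) + (1 - b)) / 6.0

# The transfer moves agent 1 from w1 = (1, 1) to the planner allocation.
w11_new = 1 + (x11_star - 1)
w12_new = 1 + (x12_star - 1)

# Agent 1's demand (log utility, as in Example 14) at prices (pi1, pi2).
wealth = pi1 * w11_new + pi2 * w12_new
x11_choice = a * wealth / pi1
x12_choice = (1 - a) * wealth / pi2
```

The optimal choice coincides with the planner's allocation, which is the content of the Second Welfare Theorem in this example.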
References

[1] Arrow, K. J. "The Role of Securities in the Optimal Allocation of Risk Bearing," Review of Economic Studies, 31, 91-96, 1964 (translation of original 1953 article from French).

[2] Arrow, K. J. and G. Debreu, "Existence of an Equilibrium for a Competitive Economy," Econometrica, 22, 265-290, 1954.

[3] Debreu, G. Theory of Value, Wiley, 1959.

[4] Farmer, Roger E. A. "General Equilibrium under Certainty (Chapter 4)," The Macroeconomics of Self-Fulfilling Prophecies, The MIT Press, Cambridge, MA, 1993.

[5] Mas-Colell, Andreu, Michael D. Whinston, and Jerry R. Green, Microeconomic Theory, Oxford University Press, 1995.

[6] McKenzie, Lionel W. "On Equilibrium in Graham's Model of World Trade and Other Competitive Systems," Econometrica, 22, 147-161, 1954.

[7] Varian, Hal R. "Exchange (Chapter 17)," in Microeconomic Analysis, 3rd Edition, W.W. Norton and Company, New York, 1992.
2 Exchange Economies with Infinitely-Lived Agents

In our static Walrasian economy agents live for a single period. In this section we will analyze model economies where they live forever. We will assume that time is discrete and the horizon is infinite, t = 0, 1, 2, .... As in the previous section there is a finite number of agents, indexed by i = 1, ..., n. We will assume, however, that there is one consumption good per period, i.e. m = 1 (see Kehoe (1989) for an analysis with m > 1). This consumption good is not storable. Agents have deterministic endowment streams. The endowment stream of agent i is denoted by $w^i = \{w_t^i\}_{t=0}^{\infty}$.
Let $c_t^i$ be the consumption of agent i at time t, and let $c^i = \{c_t^i\}_{t=0}^{\infty}$ be a consumption sequence. Agents' preferences are given by
$$U(c^i) = \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i),$$
where $\beta_i \in (0,1)$ is the discount factor of individual i. Hence we assume that lifetime utility $U(c^i)$ is time separable: time-t consumption only affects utility at time t. We assume that $u_i \in C^2$ is strictly increasing, strictly concave, and satisfies $\lim_{c \to 0} u'(c) = \infty$.
We still have to specify how markets work. We will assume that there is a market at time 0 where agents can buy and sell goods of different time periods. There is a price for every period's good. Hence each agent can sell his current and future endowments in this grand time-0 market and buy any amount of current and future goods he/she wants. The consumer therefore faces a single budget constraint:
$$\sum_{t=0}^{\infty} p_t c_t^i \le \sum_{t=0}^{\infty} p_t w_t^i.$$
After consumers choose their consumption sequences in this time-zero market, time does not play an explicit role. As time passes, the transactions (exchanges of goods) that were agreed upon at time zero take place. We assume that all contracts agreed upon at time 0 are honored. We call this market arrangement Arrow-Debreu markets. We will normalize prices and set $p_0 = 1$.
Definition 15 An Arrow-Debreu equilibrium is a sequence of allocations $c^i = \{c_t^i\}_{t=0}^{\infty}$ for each i, and a sequence of prices $p = \{p_t\}_{t=0}^{\infty}$, such that

1. Given p, $c^i$ solves agent i's maximization problem for each i:
$$\max_{c^i} \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i), \qquad (17)$$
subject to
$$\sum_{t=0}^{\infty} p_t c_t^i \le \sum_{t=0}^{\infty} p_t w_t^i. \qquad (18)$$

2. And markets clear:
$$\sum_{i=1}^{n} c_t^i \le \sum_{i=1}^{n} w_t^i \quad \text{for each } t. \qquad (19)$$
Remark 16 It is immediate that for the consumer's maximization problem to have a solution, it must be the case that $\sum_{t=0}^{\infty} p_t w_t^i$ is finite. Otherwise there is no well-defined solution to the maximization problem.

Remark 17 If $\sum_{t=0}^{\infty} p_t w_t^i$ is finite for each i, then the value of the aggregate endowment is also finite in an Arrow-Debreu equilibrium, since
$$\sum_{t=0}^{\infty} p_t \left(\sum_{i=1}^{n} w_t^i\right) = \sum_{i=1}^{n} \left(\sum_{t=0}^{\infty} p_t w_t^i\right).$$

If society's resources are finite in an Arrow-Debreu equilibrium, then it is immediate that any Arrow-Debreu equilibrium is Pareto optimal.

Proposition 18 Any Arrow-Debreu equilibrium is Pareto Optimal.
Proof. Suppose that, given Arrow-Debreu equilibrium allocations $c_t^i$ and prices p, there is another allocation $\bar{c}_t^i$ that Pareto dominates $c_t^i$. Then it must be the case that
$$\sum_{t=0}^{\infty} \beta_i^t u_i(\bar{c}_t^i) \ge \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i) \quad \text{for all } i,$$
and there exists a j such that
$$\sum_{t=0}^{\infty} \beta_j^t u_j(\bar{c}_t^j) > \sum_{t=0}^{\infty} \beta_j^t u_j(c_t^j).$$
First note that $\bar{c}_t$ is feasible, i.e.
$$\sum_{i=1}^{n} \bar{c}_t^i \le \sum_{i=1}^{n} w_t^i \quad \text{for all } t.$$
Since agent j strictly prefers $\bar{c}^j$, it must lie outside her budget set given prices p. For everyone else, $\bar{c}^i$ cannot cost strictly less than the endowment (otherwise, given strict monotonicity, $c^i$ would not have been optimal). Hence,
$$\sum_{i=1}^{n} \left(\sum_{t=0}^{\infty} p_t \bar{c}_t^i\right) > \sum_{i=1}^{n} \left(\sum_{t=0}^{\infty} p_t w_t^i\right) = \sum_{t=0}^{\infty} p_t \left(\sum_{i=1}^{n} w_t^i\right),$$
which contradicts the fact that $\bar{c}_t^i$ is feasible, since feasibility implies
$$\sum_{i=1}^{n} \left(\sum_{t=0}^{\infty} p_t \bar{c}_t^i\right) = \sum_{t=0}^{\infty} p_t \left(\sum_{i=1}^{n} \bar{c}_t^i\right) \le \sum_{t=0}^{\infty} p_t \left(\sum_{i=1}^{n} w_t^i\right).$$

Remark 19 Note that this proof follows exactly the same steps as the proof of the First Welfare Theorem in the last section.
What do equilibrium allocations look like? They are characterized by three sets of equations. The FOCs for the consumer:
$$\beta_i^t \frac{\partial u_i(c_t^i)}{\partial c_t^i} = \lambda_i p_t \quad \text{for each } i \text{ and each } t. \qquad (20)$$
Equilibrium allocations must also satisfy the budget constraint of each individual:
$$\sum_{t=0}^{\infty} p_t c_t^i = \sum_{t=0}^{\infty} p_t w_t^i.$$
There is one budget constraint per consumer. Finally, the allocations have to be feasible:
$$\sum_{i=1}^{n} c_t^i \le \sum_{i=1}^{n} w_t^i.$$
This feasibility constraint has to hold every period, since the good is perishable.
What does equation (20) tell us about consumer behavior? Note that for any consumer i and any two time periods t and t+1, we have
$$\frac{\beta_i^t \, \partial u_i(c_t^i)/\partial c_t^i}{\beta_i^{t+1} \, \partial u_i(c_{t+1}^i)/\partial c_{t+1}^i} = \frac{p_t}{p_{t+1}}.$$
Hence,
$$\frac{\partial u_i(c_t^i)}{\partial c_t^i} = \beta_i \frac{p_t}{p_{t+1}} \frac{\partial u_i(c_{t+1}^i)}{\partial c_{t+1}^i}. \qquad (21)$$
This equation, which will appear many times in this course, is the intertemporal optimization condition for the consumer. If the consumer allocates his/her resources optimally, the cost of reducing time-t consumption today, $\partial u_i(c_t^i)/\partial c_t^i$, must be equal to the benefit of increasing time-(t+1) consumption, $\partial u_i(c_{t+1}^i)/\partial c_{t+1}^i$, after taking into account the discount factor $\beta_i$ and the relative value of goods between the two periods, $p_t/p_{t+1}$. Discounting makes future consumption less valuable. If $p_t/p_{t+1}$ is high, the benefit of moving resources from t to t+1 is large, and if $p_t/p_{t+1}$ is low, then the benefit of moving resources from t to t+1 is small.
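For concreteness, here is a small sketch (all values hypothetical) of how (21) pins down a consumption path under log utility, where $u'(c) = 1/c$ and hence $c_{t+1}/c_t = \beta \, p_t/p_{t+1}$:

```python
beta = 0.96                    # hypothetical discount factor
p = [1.0, 0.97, 0.95, 0.92]    # assumed Arrow-Debreu prices, p0 normalized to 1

# Roll the Euler equation forward from an initial consumption level:
# with u'(c) = 1/c, (21) gives c_{t+1} = c_t * beta * p_t / p_{t+1}.
c = [1.0]
for t in range(len(p) - 1):
    c.append(c[-1] * beta * p[t] / p[t + 1])
```

With constant prices, consumption would shrink by the factor β each period; a rising relative price $p_t/p_{t+1}$ tilts the path toward the future, exactly as the discussion above suggests.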
Given the FOCs for individuals i and j, we also know that their consumption in any time period will be related by
$$\frac{\beta_i^t \, \partial u_i(c_t^i)/\partial c_t^i}{\beta_j^t \, \partial u_j(c_t^j)/\partial c_t^j} = \frac{\lambda_i}{\lambda_j}.$$
Again we can characterize the set of Pareto optimal allocations as solutions to the following planner's problem:
$$\max_{\{c_t^i\}_{t=0}^{\infty}} \ \sum_{i=1}^{n} \alpha_i \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i)$$
subject to
$$\sum_{i=1}^{n} c_t^i = \sum_{i=1}^{n} w_t^i, \quad t = 0, 1, 2, \ldots$$
The solution to this problem is characterized by the following FOCs:
$$\alpha_i \beta_i^t \frac{\partial u_i(c_t^i)}{\partial c_t^i} = \pi_t, \quad \text{for } i = 1, 2, \ldots, n \text{ and } t = 0, 1, \ldots,$$
where $\pi_t$ is the Lagrange multiplier on the time-t resource constraint.

Given $\alpha$, the allocation that solves the planner's problem is a Pareto optimal allocation. In order to decentralize the Pareto optimal allocations we use the Lagrange multipliers $\pi_t(\alpha)$ as prices and transfer resources among consumers according to:
$$t_i(\alpha) = \sum_{t=0}^{\infty} \pi_t(\alpha) \left[c_t^i(\alpha) - w_t^i\right],$$
where $c_t^i(\alpha)$ is a Pareto optimal allocation of goods.
We also know that if we can find an \alpha^* such that t^i(\alpha^*) = 0 for all i,
then the \pi_t(\alpha^*) are the Arrow-Debreu prices and the allocations c_t^i(\alpha^*) are Arrow-Debreu
allocations for this economy. The following example illustrates how this
approach can be an easier way to find Arrow-Debreu allocations than solving
for the Arrow-Debreu equilibrium directly. This method of computing competitive
equilibria was formulated by Negishi (1960).
Example 20 Let n = 2 and
u_1(c_t) = u_2(c_t) = \log(c_t),
and
w_t^1 = w_t^2 = 1 \text{ for all } t.
Also let the consumers differ in their discount factors, with \beta_1 < \beta_2. An Arrow-Debreu
equilibrium for this economy is characterized by the following three sets
of equations:
\beta_i^t \frac{1}{c_t^i} = \mu_i p_t , \quad \text{for } i = 1, 2 \text{ and } t = 0, 1, 2, ...,
\sum_{t=0}^{\infty} p_t c_t^i = \sum_{t=0}^{\infty} p_t \quad \text{for } i = 1, 2,
and
c_t^1 + c_t^2 = 2, \quad \text{for } t = 0, 1, 2, ....
The Pareto optimal allocations are the solutions to the following maximization
problem:
\max_{\{c_t^1, c_t^2\}_{t=0}^{\infty}} \alpha_1 \sum_{t=0}^{\infty} \beta_1^t \log(c_t^1) + \alpha_2 \sum_{t=0}^{\infty} \beta_2^t \log(c_t^2),
subject to
c_t^1 + c_t^2 = 2 \text{ for all } t.
Then the Pareto optimal allocations must satisfy the following conditions:
\alpha_i \beta_i^t \frac{1}{c_t^i} = \pi_t \quad \text{for } i = 1, 2,
and
c_t^1 + c_t^2 = 2 \text{ for all } t.
Using the FOCs for i = 1, 2 we have
\alpha_1 \beta_1^t \frac{1}{c_t^1} = \alpha_2 \beta_2^t \frac{1}{c_t^2} .
Hence,
c_t^2 = \frac{\alpha_2 \beta_2^t}{\alpha_1 \beta_1^t} c_t^1 .
Using the resource constraint, we have
c_t^1 \left[ 1 + \frac{\alpha_2 \beta_2^t}{\alpha_1 \beta_1^t} \right] = 2.
Then the Pareto optimal allocations are given by
c_t^i = \frac{2 \alpha_i \beta_i^t}{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t} \quad \text{for } i = 1, 2, \quad (22)
and the Lagrange multipliers for the planner's resource constraint are given by
\pi_t = \frac{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t}{2} . \quad (23)
Note that to find \pi_t, we simply used the FOC \alpha_i \beta_i^t \frac{1}{c_t^i} = \pi_t. Then we can
decentralize the allocations in (22) using the prices in (23) and the following transfers:
t^1(\alpha_1, \alpha_2) = \sum_{t=0}^{\infty} \pi_t (c_t^1 - 1) = \left[ \frac{\alpha_1}{1 - \beta_1} - \frac{\alpha_2}{1 - \beta_2} \right] \frac{1}{2},
and
t^2(\alpha_1, \alpha_2) = \sum_{t=0}^{\infty} \pi_t (c_t^2 - 1) = \left[ \frac{\alpha_2}{1 - \beta_2} - \frac{\alpha_1}{1 - \beta_1} \right] \frac{1}{2} .
Note that t^1 is calculated as:
t^1(\alpha_1, \alpha_2) = \sum_{t=0}^{\infty} \pi_t (c_t^1 - 1)
= \sum_{t=0}^{\infty} \frac{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t}{2} \left( \frac{2 \alpha_1 \beta_1^t}{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t} - 1 \right)
= \sum_{t=0}^{\infty} \frac{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t}{2} \left( \frac{2 \alpha_1 \beta_1^t - \alpha_1 \beta_1^t - \alpha_2 \beta_2^t}{\alpha_1 \beta_1^t + \alpha_2 \beta_2^t} \right)
= \sum_{t=0}^{\infty} \frac{\alpha_1 \beta_1^t - \alpha_2 \beta_2^t}{2} = \frac{\alpha_1}{2(1 - \beta_1)} - \frac{\alpha_2}{2(1 - \beta_2)} .
Then, the values of \alpha_1 and \alpha_2 that make t^1(\alpha_1, \alpha_2) equal to 0 satisfy
\frac{\alpha_1}{2(1 - \beta_1)} = \frac{\alpha_2}{2(1 - \beta_2)} .
Now first note that if t^1(\alpha_1, \alpha_2) = 0, then t^2(\alpha_1, \alpha_2) = 0 as well. Second, we
can normalize one of the weights to 1, since in the planner's problem all that
matters is the relative weights. Then, the set of weights that will give us the
competitive allocations is
\alpha_2 = 1 \quad \text{and} \quad \alpha_1 = \frac{1 - \beta_1}{1 - \beta_2} .
Hence, our claim is that the competitive allocations are
c_t^1 = \frac{2 \left( \frac{1 - \beta_1}{1 - \beta_2} \right) \beta_1^t}{\left( \frac{1 - \beta_1}{1 - \beta_2} \right) \beta_1^t + \beta_2^t},
and
c_t^2 = \frac{2 \beta_2^t}{\left( \frac{1 - \beta_1}{1 - \beta_2} \right) \beta_1^t + \beta_2^t} .
To prove our claim, let's go back to the competitive allocations. The FOC for
consumer i was
\beta_i^t \frac{1}{c_t^i} = \mu_i p_t .
Let's focus on i = 1; using the FOC for t and t+1, we get
\frac{\beta_1^t \frac{1}{c_t^1}}{\beta_1^{t+1} \frac{1}{c_{t+1}^1}} = \frac{p_t}{p_{t+1}} .
Hence,
c_{t+1}^1 = \frac{p_t}{p_{t+1}} \beta_1 c_t^1 .
We can use this relation to write
c_t^1 = \frac{p_0}{p_t} \beta_1^t c_0^1 .
Now using this rule in the consumer's budget constraint,
p_0 c_0^1 + p_1 c_1^1 + ..... = p_0 + p_1 + p_2 + .....,
we get
c_0^1 \left[ p_0 + \beta_1 p_0 + \beta_1^2 p_0 + .... \right] = \left[ p_0 + p_1 + ..... \right],
or
c_0^1 p_0 \left[ 1 + \beta_1 + \beta_1^2 + ..... \right] = \left[ p_0 + p_1 + ..... \right].
Then,
c_0^1 = \frac{\sum_{t=0}^{\infty} p_t / p_0}{\frac{1}{1 - \beta_1}} = (1 - \beta_1) \sum_{t=0}^{\infty} \frac{p_t}{p_0},
and
c_t^1 = \frac{p_0}{p_t} \beta_1^t c_0^1 = \frac{p_0}{p_t} \beta_1^t (1 - \beta_1) \sum_{\tau=0}^{\infty} \frac{p_\tau}{p_0} .
Since a similar rule also determines the consumption behavior of consumer
2, we have the following market clearing condition for t = 0:
(1 - \beta_1) \sum_{t=0}^{\infty} \frac{p_t}{p_0} + (1 - \beta_2) \sum_{t=0}^{\infty} \frac{p_t}{p_0} = 2.
Therefore,
p_0 = \frac{(1 - \beta_1) \sum_{t=0}^{\infty} p_t + (1 - \beta_2) \sum_{t=0}^{\infty} p_t}{2} .
Indeed, using the market clearing condition for any period t, we can get
p_t = \left[ \beta_1^t (1 - \beta_1) + \beta_2^t (1 - \beta_2) \right] \frac{\sum_{\tau=0}^{\infty} p_\tau}{2} .
Then,
c_0^1 = (1 - \beta_1) \sum_{t=0}^{\infty} \frac{p_t}{p_0} = \frac{2(1 - \beta_1)}{(1 - \beta_1) + (1 - \beta_2)},
and
c_t^1 = \frac{2 \left( \frac{1 - \beta_1}{1 - \beta_2} \right) \beta_1^t}{\left( \frac{1 - \beta_1}{1 - \beta_2} \right) \beta_1^t + \beta_2^t} .
It is then trivial to check that these are the allocations we found by solving the
planner's problem and setting the transfers to zero. In this example, it is much
easier to calculate Pareto optimal allocations than competitive (Arrow-Debreu)
allocations.
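The Negishi logic of Example 20 can also be verified numerically. The sketch below uses illustrative discount factors and a truncated horizon; it computes the zero-transfer weights, the planner allocations in (22), and the multiplier prices in (23), and checks that consumer 1's transfer is (approximately) zero and that the time-0 market clears.

```python
# Numerical check of Example 20: two log-utility consumers, w_t^i = 1,
# beta1 < beta2. Parameter values and the truncation horizon are
# illustrative choices, not part of the original text.
beta1, beta2 = 0.9, 0.95
T = 2000  # truncation of the infinite sums (beta^T is negligible here)

# Zero-transfer Negishi weights, normalizing alpha2 = 1
alpha1, alpha2 = (1 - beta1) / (1 - beta2), 1.0

def allocation(t):
    """Planner allocation (22): c_t^i = 2 a_i b_i^t / (a1 b1^t + a2 b2^t)."""
    denom = alpha1 * beta1**t + alpha2 * beta2**t
    return 2 * alpha1 * beta1**t / denom, 2 * alpha2 * beta2**t / denom

# Multipliers (23), used as Arrow-Debreu prices
prices = [(alpha1 * beta1**t + alpha2 * beta2**t) / 2 for t in range(T)]

transfer1 = sum(p * (allocation(t)[0] - 1) for t, p in enumerate(prices))
c10, c20 = allocation(0)
print(abs(transfer1) < 1e-9, abs(c10 + c20 - 2) < 1e-9)
```

With these parameter values, c_0^1 = 2(1 - \beta_1)/((1 - \beta_1) + (1 - \beta_2)) = 4/3, matching the direct Arrow-Debreu computation above.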
Example 21 Consider a simple exchange economy with two consumers, indexed
by i = 1, 2, who live forever, and with one perishable consumption good.
Time is discrete and indexed by t = 0, 1, ..... Each consumer values sequences
of consumption goods, c^i = \{c_t^i\}_{t=0}^{\infty}, according to
U(c^i) = \sum_{t=0}^{\infty} \beta^t \ln(c_t^i),
with \beta \in (0, 1). The endowment processes are given by
w_t^i = \begin{cases} w_t^1 = 2 \text{ and } w_t^2 = 0 & \text{if } t \text{ is even} \\ w_t^1 = 0 \text{ and } w_t^2 = 1 & \text{if } t \text{ is odd} \end{cases} .
An Arrow-Debreu equilibrium for this economy consists of allocations \{c_t^1, c_t^2\}_{t=0}^{\infty}
and prices \{p_t\}_{t=0}^{\infty} such that:
1. Given \{p_t\}_{t=0}^{\infty}, the allocations solve:
\max_{\{c_t^i\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t \ln(c_t^i),
subject to
\sum_{t=0}^{\infty} p_t c_t^i = \sum_{t=0}^{\infty} p_t w_t^i ,
for i = 1, 2 (note that since we are in an Arrow-Debreu world, agents face
one single budget constraint).
2. Markets clear, i.e.
c_t^1 + c_t^2 = w_t^1 + w_t^2 ,
for all t (note that since the good is perishable, the market clearing condition
has to hold in every period).
An allocation \{c_t^1, c_t^2\}_{t=0}^{\infty} is Pareto optimal if it is feasible and there is no
other feasible allocation \{\bar{c}_t^1, \bar{c}_t^2\}_{t=0}^{\infty} such that 1) u(\bar{c}^i) \geq u(c^i) for i = 1, 2, and
2) u(\bar{c}^i) > u(c^i) for at least one i = 1, 2 (note that an allocation \{c_t^1, c_t^2\}_{t=0}^{\infty} is
feasible if c_t^i \geq 0 for all t and i = 1, 2, and c_t^1 + c_t^2 = w_t^1 + w_t^2 for all t).
For this example we can write the social planner's problem as
\max_{\{c_t^1, c_t^2\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t \left[ \alpha \ln(c_t^1) + (1 - \alpha) \ln(c_t^2) \right],
subject to
c_t^1 + c_t^2 = 2 \text{ if } t \text{ is even},
and
c_t^1 + c_t^2 = 1 \text{ if } t \text{ is odd}.
The FOCs for this problem result in
\frac{\alpha \beta^t}{c_t^1} = \frac{(1 - \alpha) \beta^t}{c_t^2},
(note that given resources in any period, this condition equates the MRSs of agents
1 and 2). Hence,
c_t^2 = \frac{(1 - \alpha)}{\alpha} c_t^1 .
You can now use the resource constraint to arrive at
c_t^1 = 2\alpha, \text{ and } c_t^2 = 2 - 2\alpha \text{ if } t \text{ is even},
and
c_t^1 = \alpha, \text{ and } c_t^2 = 1 - \alpha \text{ if } t \text{ is odd}.
It is also easy to show that the Lagrange multiplier is
\lambda_t = \frac{\beta^t}{2} \text{ if } t \text{ is even, and } \lambda_t = \beta^t \text{ if } t \text{ is odd}.
We can then use the prices p_t = \frac{\beta^t}{2} if t is even, and p_t = \beta^t if t is odd, and find
transfer functions
t^i(\alpha) = \sum_{t=0}^{\infty} p_t \left( c_t^i(\alpha) - w_t^i \right),
that can be used to decentralize any Pareto optimal allocation.
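A quick numerical experiment (with an illustrative \beta and a truncated horizon) can recover the weight \alpha that sets the transfers to zero in Example 21; summing the two geometric series by hand gives the closed form \alpha^* = 1/(1+\beta), which the bisection below reproduces.

```python
# Hypothetical numerical check for Example 21: find the Pareto weight alpha
# that makes consumer 1's transfer zero, then compare with the closed form
# alpha* = 1/(1+beta). The value of beta is an illustrative choice.
beta = 0.96
T = 4000  # truncation of the infinite horizon (even, so odd/even sums pair up)

def transfer1(alpha):
    """t^1(alpha) = sum_t p_t (c_t^1(alpha) - w_t^1),
    with p_t = beta^t/2 for even t and p_t = beta^t for odd t."""
    total = 0.0
    for t in range(T):
        if t % 2 == 0:
            total += (beta**t / 2) * (2 * alpha - 2)  # even: c1 = 2a, w1 = 2
        else:
            total += beta**t * (alpha - 0)            # odd:  c1 = a,  w1 = 0
    return total

# Bisection on alpha in (0, 1): transfer1 is strictly increasing in alpha
lo, hi = 1e-9, 1 - 1e-9
for _ in range(100):
    mid = (lo + hi) / 2
    if transfer1(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha_star = (lo + hi) / 2
print(round(alpha_star, 6), round(1 / (1 + beta), 6))
```

The even-period series contributes (\alpha - 1)/(1 - \beta^2) and the odd-period series \alpha\beta/(1 - \beta^2), so the transfer vanishes exactly at \alpha^* = 1/(1 + \beta).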
2.1 Sequential Equilibrium
Our analysis above was built on Arrow-Debreu markets, where all trade takes
place at a time-0 market. There were no other markets in any other period.
Now we look at another possible market arrangement where we have a market
each period. Suppose now trades take place in spot markets that open every
period. Hence, at time t, agents only trade time-t goods in a spot market. Let
q_t be the price of the good in this time-t market. If agents can only trade the time-t
good at time t, and there are no credit arrangements, then this economy would look
like a sequence of static exchange economies.
With spot markets we need a credit mechanism that will allow agents to
move their resources between periods. Therefore, we will assume that there is a
one-period credit market that works as follows: each period agents can borrow
or lend in this one-period credit market. Let \bar{R}_t = 1 + \bar{r}_t be the gross interest
rate on time-t borrowing (lending). How does this credit system work? One can
think of a central credit agency that keeps track of people who borrow and lend.
If you want to lend, you bring your goods to the agency, which gives them to other
people. Next period you can go and receive your goods back (plus the interest)
or bring your goods to pay your debt. We assume everybody honors his/her
contract and there is perfect record keeping.
Then, each individual will have a sequence of budget constraints (rather than
a single one, as was the case with Arrow-Debreu markets):
q_0 c_0^i + l_0^i = q_0 w_0^i \quad (24)
q_1 c_1^i + l_1^i = q_1 w_1^i + (1 + \bar{r}_0) l_0^i
....
q_t c_t^i + l_t^i = q_t w_t^i + (1 + \bar{r}_{t-1}) l_{t-1}^i
....
Hence, given a sequence of prices \{q_t\}_{t=0}^{\infty} and \{\bar{r}_t\}_{t=0}^{\infty}, agent i's problem is
\max_{c^i} \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i),
subject to the sequence of budget constraints defined above.
In the Arrow-Debreu equilibrium, the value of the agent's resources had to be
finite to make sure we had a well-defined solution to the agent's maximization
problem. How can we make sure that the agent's resources are finite in this setup?
Let's begin by rearranging the equations in (24).
Note that for t = 1, we have
q_1 c_1^i + l_1^i = q_1 w_1^i + (1 + \bar{r}_0) \left[ q_0 w_0^i - q_0 c_0^i \right]
l_1^i = q_1 w_1^i + (1 + \bar{r}_0) q_0 w_0^i - q_1 c_1^i - (1 + \bar{r}_0) q_0 c_0^i .
Similarly for t = 2, we get
q_2 c_2^i + l_2^i = q_2 w_2^i + (1 + \bar{r}_1) l_1^i
l_2^i = q_2 w_2^i + (1 + \bar{r}_1) q_1 w_1^i + (1 + \bar{r}_1)(1 + \bar{r}_0) q_0 w_0^i - q_2 c_2^i - (1 + \bar{r}_1) q_1 c_1^i - (1 + \bar{r}_1)(1 + \bar{r}_0) q_0 c_0^i .
Working our way to time t we get
q_t c_t^i + l_t^i = q_t w_t^i + (1 + \bar{r}_{t-1}) l_{t-1}^i
l_t^i = q_t w_t^i + (1 + \bar{r}_{t-1}) q_{t-1} w_{t-1}^i + (1 + \bar{r}_{t-1})(1 + \bar{r}_{t-2}) q_{t-2} w_{t-2}^i +
..... + (1 + \bar{r}_{t-1}).....(1 + \bar{r}_0) q_0 w_0^i - q_t c_t^i - (1 + \bar{r}_{t-1}) q_{t-1} c_{t-1}^i - (1 + \bar{r}_{t-1})(1 + \bar{r}_{t-2}) q_{t-2} c_{t-2}^i -
..... - (1 + \bar{r}_{t-1}).....(1 + \bar{r}_0) q_0 c_0^i .
Multiplying both sides by \frac{1}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)}, we get
\frac{l_t^i}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)} = \frac{q_t w_t^i}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)} + \frac{q_{t-1} w_{t-1}^i}{(1 + \bar{r}_{t-2}).....(1 + \bar{r}_0)} + ... + q_0 w_0^i \quad (25)
- \frac{q_t c_t^i}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)} - \frac{q_{t-1} c_{t-1}^i}{(1 + \bar{r}_{t-2}).....(1 + \bar{r}_0)} - ... - q_0 c_0^i .
Note that the right-hand side of this equation is nothing but the time-0 present
value of the agent's resources minus the present value of his/her consumption. Since
in this economy credit arrangements simply move resources between periods and
do not add any resources to the agent's budget constraint, we need to impose
the following condition:
\lim_{t \to \infty} \frac{l_t^i}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)} = 0. \quad (26)
This condition (sometimes called the no-Ponzi-game condition) guarantees that
an agent does not run a scheme where he/she keeps borrowing more and more and
never pays.
What is the relation between Arrow-Debreu prices and spot prices? Obviously,
p_0 = q_0 = 1, where p_0 = 1 is the Arrow-Debreu price of the time-0 good. For
any t > 0, we have
p_t = \frac{q_t}{(1 + \bar{r}_0)(1 + \bar{r}_1)....(1 + \bar{r}_{t-1})} = \frac{q_t}{\prod_{\tau=0}^{t-1} (1 + \bar{r}_\tau)} . \quad (27)
This equation has a very simple interpretation. In an Arrow-Debreu world, you
have to pay p_t units of time-0 goods to get 1 unit of time-t goods. In a sequential
market one unit of the time-t good costs q_t at time t. How much of the time-0 good
do you need to be able to pay q_t? Since you can transfer resources from period 0
to period t using one-period credit arrangements, you need exactly \frac{q_t}{\prod_{\tau=0}^{t-1} (1 + \bar{r}_\tau)} .
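The accounting identity behind (25) and (27) can be checked mechanically: roll an agent's sequential budget constraints forward under arbitrary spot prices and rates, and compare the discounted terminal loan position with the Arrow-Debreu present value of endowments minus consumption. All the numbers below are illustrative.

```python
# Consistency check (illustrative numbers): sequential budgets vs. the
# Arrow-Debreu present-value form of equation (25).
import math

q = [1.0, 1.1, 0.9, 1.05, 1.2]    # spot prices, q_0 normalized to 1
rbar = [0.05, 0.03, 0.07, 0.04]   # one-period gross rates are 1 + rbar_t
w = [2.0, 0.5, 1.5, 1.0, 0.8]     # endowments
c = [1.0, 1.2, 0.9, 1.1, 1.0]     # an arbitrary consumption path

# Sequential accumulation: q_t c_t + l_t = q_t w_t + (1 + rbar_{t-1}) l_{t-1}
l = 0.0
for t in range(len(q)):
    carry = (1 + rbar[t - 1]) * l if t > 0 else 0.0
    l = q[t] * w[t] + carry - q[t] * c[t]

# Arrow-Debreu prices p_t = q_t / prod_{tau < t} (1 + rbar_tau), eq. (27)
disc, p = 1.0, []
for t in range(len(q)):
    p.append(q[t] / disc)
    if t < len(rbar):
        disc *= 1 + rbar[t]

lhs = l / math.prod(1 + x for x in rbar)  # discounted terminal loan position
rhs = sum(pt * (wt - ct) for pt, wt, ct in zip(p, w, c))
print(abs(round(lhs - rhs, 10)))   # → 0.0
```

The two sides match exactly: one-period credit only reshuffles resources across periods, which is precisely why the no-Ponzi condition (26) is needed to pin the agent to a single lifetime budget constraint.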
Then, equation (25) becomes
\frac{l_t^i}{(1 + \bar{r}_{t-1}).....(1 + \bar{r}_0)} = \sum_{\tau=0}^{t} p_\tau w_\tau^i - \sum_{\tau=0}^{t} p_\tau c_\tau^i .
Thus the condition (26) simply makes sure that as t \to \infty, agents are on their
budget constraint:
\sum_{t=0}^{\infty} p_t w_t^i = \sum_{t=0}^{\infty} p_t c_t^i ,
and if the agent's resources are finite, then there is a well-defined solution to the agent's
maximization problem. How can we make sure that (26) is satisfied? This can
be achieved by placing an upper bound, call it B^i, on how much each agent can
borrow. Note that any level of borrowing limit, even a very large one, will still
preclude the possibility of a Ponzi game.
In this environment, given a sequence of budget constraints, the agent's problem
is characterized by the following set of FOCs:
\beta^t \frac{\partial u_i(c_t^i)}{\partial c_t^i} = \lambda_t q_t ,
for c_t^i; and
-\lambda_t + (1 + \bar{r}_t) \lambda_{t+1} = 0,
for l_t^i, where \lambda_t is the Lagrange multiplier on the time-t budget constraint.
We can combine these two conditions to arrive at
\frac{\beta^t \, \partial u_i(c_t^i)/\partial c_t^i}{q_t} = (1 + \bar{r}_t) \frac{\beta^{t+1} \, \partial u_i(c_{t+1}^i)/\partial c_{t+1}^i}{q_{t+1}},
or
\frac{\partial u_i(c_t^i)}{\partial c_t^i} = (1 + \bar{r}_t) \frac{q_t}{q_{t+1}} \beta \frac{\partial u_i(c_{t+1}^i)}{\partial c_{t+1}^i} . \quad (28)
This is exactly the intertemporal optimization condition we had for the Arrow-Debreu
economy, see equation (21), since using equation (27) we have
\frac{\partial u_i(c_t^i)}{\partial c_t^i} = (1 + \bar{r}_t) \frac{p_t (1 + \bar{r}_0)(1 + \bar{r}_1)....(1 + \bar{r}_{t-1})}{p_{t+1} (1 + \bar{r}_0)(1 + \bar{r}_1)....(1 + \bar{r}_t)} \beta \frac{\partial u_i(c_{t+1}^i)}{\partial c_{t+1}^i} = \frac{p_t}{p_{t+1}} \beta \frac{\partial u_i(c_{t+1}^i)}{\partial c_{t+1}^i} .
In this economy, the credit balances are recorded in terms of the value of
time-t goods. The time-t budget constraint implies
l_t^i = q_t \left( w_t^i - c_t^i \right),
indicating the time-t value of your credit balance. Next period, the budget
constraint is
q_{t+1} c_{t+1}^i + l_{t+1}^i = q_{t+1} w_{t+1}^i + (1 + \bar{r}_t) l_t^i ,
and you pay (or receive from) the credit agency an amount (1 + \bar{r}_t) l_t^i.
Hence, you borrow \frac{l_t^i}{q_t} units of time-t goods and pay back \frac{(1 + \bar{r}_t) l_t^i}{q_{t+1}} units of
time-(t+1) goods. Then, the interest rate in terms of physical quantities is
1 + r_t = \frac{(1 + \bar{r}_t) l_t^i / q_{t+1}}{l_t^i / q_t} = \frac{(1 + \bar{r}_t) q_t}{q_{t+1}} . \quad (29)
Rather than keeping track of credit balances in terms of the value of goods, the credits
could also be recorded in physical amounts of goods. Suppose you borrow
(or lend) l_t^i units of goods at time t; then at time t+1 you pay (or receive)
l_t^i (1 + r_t) units of the good.
The spot prices are not necessary, then, to define an equilibrium in this environment.
Interest rates, which are defined in this way, already contain the information
about the relative value of the good. In this setup the budget constraint of the
individual is
c_0^i + l_0^i = w_0^i \quad (30)
c_1^i + l_1^i = w_1^i + (1 + r_0) l_0^i
....
c_t^i + l_t^i = w_t^i + (1 + r_{t-1}) l_{t-1}^i
....
The intertemporal FOC is given by
\frac{\partial u_i(c_t^i)}{\partial c_t^i} = (1 + r_t) \beta \frac{\partial u_i(c_{t+1}^i)}{\partial c_{t+1}^i} . \quad (31)
Again there is a well-defined relation between interest rates and Arrow-Debreu
prices, since
1 + r_t = \frac{(1 + \bar{r}_t) q_t}{q_{t+1}} = \frac{p_t}{p_{t+1}} . \quad (32)
This relation (1 + r_t = p_t / p_{t+1}) will appear many times as we move forward in this
course.
We are now ready to define a sequential equilibrium.
Definition 22 A sequential market equilibrium is a sequence of allocations c^i =
\{c_t^i\}_{t=0}^{\infty} and a sequence of lending/borrowing decisions l^i = \{l_t^i\}_{t=0}^{\infty} for each
i, a sequence of interest rates r = \{r_t\}_{t=0}^{\infty}, and a borrowing constraint B^i for each
individual, such that:
1. Given r, c^i and l^i solve agent i's maximization problem for each i:
\max_{c^i} \sum_{t=0}^{\infty} \beta_i^t u_i(c_t^i), \quad (33)
subject to
c_t^i + l_t^i = w_t^i + (1 + r_{t-1}) l_{t-1}^i \text{ for all } t \quad (34)
c_t^i \geq 0 \text{ for all } t,
\text{and } l_t^i \geq -B^i \text{ for all } t.
2. Markets clear:
\sum_{i=1}^{n} c_t^i \leq \sum_{i=1}^{n} w_t^i \text{ for each } t, \quad (35)
and
\sum_{i=1}^{n} l_t^i = 0 \text{ for each } t. \quad (36)
It should be rather straightforward that consumption allocations in an Arrow-Debreu
equilibrium and in a sequential equilibrium are identical.
References
[1] Kehoe, T. "Intertemporal General Equilibrium Models," in The Economics
of Missing Markets, Information, and Games, Frank Hahn (ed.), 1989.
[2] Negishi, T. "Welfare Economics and Existence of an Equilibrium for a
Competitive Economy," Metroeconomica, 12, 1960, 92-97.
3 Stochastic Endowments
Everything was certain in the economies we have analyzed so far. But we all
know that uncertainty is an important element in many economic activities. We
will now extend our previous analysis into a stochastic environment.
We assume time is discrete and the time horizon is infinite, t = 0, 1, 2, ..... There
is again a finite number of agents indexed by i = 1, ..., n and one consumption
good per period. This consumption good is not storable. In contrast to the
previous setup, endowments depend on the state of the economy. The state of
the economy is uncertain. There are, for example, good days and bad days,
sunny days and rainy days. Yet this uncertainty has a well-defined structure.
We will let s_t denote the state of the economy at time t. It is a stochastic
variable, and we will assume that s_t can take values from a given finite set S.
For example, let S = \{good, bad\}. Then, each period either s_t = good or s_t = bad.
We assume that s_t follows a Markov process, i.e. the probability that s_{t+1} = s'
only depends on the current state s_t. We will let
\pi(s'|s) = prob(s_{t+1} = s' | s_t = s)
represent the transition probability from s_t = s to s_{t+1} = s'. Since this is a
transition probability, it must satisfy
\sum_{s' \in S} \pi(s'|s) = 1.
We assume that s_t is publicly observed. Hence, given any value of s_0, we can use
the transition probabilities to determine the probability of any particular sequence
of states occurring. We will call a possible realization of states up to time t a
history, denoted by s^t:
s^t = [s_t, s_{t-1}, ...., s_0].
Given s_0, the probability of a particular history s^t is
\pi(s^t|s_0) = \pi(s_t|s_{t-1}).....\pi(s_2|s_1)\pi(s_1|s_0).
Figure (8) illustrates the possible 3-period histories when s_t can take two values
from the set S = \{h, l\} and s_0 = l.
Below we will assume that at t = 0, s_0 is known. Otherwise we have to
assume a probability distribution over s_0. If s_0 is known, then the transition
probabilities characterize the stochastic structure of this economy. We will
analyze Markov processes in more detail below.
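The history probabilities defined above can be enumerated directly for a small chain. The sketch below uses a two-state chain in the spirit of Figure 8, with made-up transition probabilities, and verifies that the probabilities of all histories of a given length sum to one.

```python
# History probabilities for a two-state Markov chain, S = {"l", "h"},
# in the spirit of Figure 8. Transition numbers are illustrative.
from itertools import product

S = ["l", "h"]
pi = {("l", "l"): 0.7, ("h", "l"): 0.3,   # pi[(s', s)] = prob(s_{t+1}=s' | s_t=s)
      ("l", "h"): 0.4, ("h", "h"): 0.6}
s0 = "l"

def history_prob(hist, s0):
    """pi(s^t | s_0) = pi(s_t|s_{t-1}) ... pi(s_1|s_0)."""
    p, prev = 1.0, s0
    for s in hist:
        p *= pi[(s, prev)]
        prev = s
    return p

# All 2^2 = 4 histories (s_1, s_2), as in the 3-period tree with s_0 fixed
hists = list(product(S, repeat=2))
total = sum(history_prob(h, s0) for h in hists)
print(len(hists), round(total, 10))   # → 4 1.0
```

Summing out s_2 and then s_1 collapses each \pi(\cdot|\cdot) row to 1, which is why the total is exactly one regardless of the transition numbers chosen.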
We assume that agents' endowments depend on s_t. In particular we assume
that
w_t^i = w^i(s_t),
i.e. agent i's endowment is a time-invariant function of s_t. For example, if
S = \{l, h\}, then
w_t^i = \begin{cases} 2 & \text{if } s_t = h \\ 0 & \text{if } s_t = l \end{cases}
is a well-defined time-invariant endowment function.
[Figure 8: Possible 3-period histories — a binary tree starting from s_0 = l, with
branches s_1 \in \{l, h\} and s_2 \in \{l, h\} and leaves labeled by the histories
s^2 = [l, l, l], [h, l, l], [l, h, l], [h, h, l].]
We assume that there is a time-0 Arrow-Debreu market where agents can buy
and sell not only goods of different periods but also goods of different histories.
People at time 0 choose a contingent consumption plan
c^i = \{c_t^i(s^t)\}_{t=0}^{\infty},
where the agent decides his/her consumption for every date and for every possible
realization of the history.
Agents try to maximize the expected value of their utility, defined as
U(c^i) = \sum_{t=0}^{\infty} \left[ \sum_{s^t} \beta^t u(c_t^i(s^t)) \pi(s^t|s_0) \right], \quad (37)
where the term \sum_{s^t} \beta^t u(c_t^i(s^t)) \pi(s^t|s_0) represents the expected utility of the
time-t consumption plan (given s_0). U(c^i) is called an expected utility function or
sometimes a von Neumann-Morgenstern utility function. We use \sum_{s^t} to denote
summation over all possible histories that can happen up to time t. We assume
that u is strictly increasing, strictly concave, and satisfies \lim_{c \to 0} u'(c) = \infty.
Let p_t(s^t) be the price of time-t, history-s^t goods at time 0. Then the budget
constraint of the individual is
\sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) c_t^i(s^t) = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) w_t^i(s_t). \quad (38)
The household's problem is to maximize (37) subject to (38). Note that for this
problem to have a well-defined solution, \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) w_t^i(s_t) must be finite.
Definition 23 An Arrow-Debreu equilibrium is a sequence of consumption plans
c^i = \{c_t^i(s^t)\}_{t=0}^{\infty} for each i, and a sequence of history-dependent prices p =
\{p_t(s^t)\}_{t=0}^{\infty} such that, given s_0:
1. Given p, c^i solves agent i's maximization problem for each i:
U(c^i) = \sum_{t=0}^{\infty} \left[ \sum_{s^t} \beta^t u(c_t^i(s^t)) \pi(s^t|s_0) \right], \quad (39)
subject to
\sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) c_t^i(s^t) = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) w_t^i(s_t). \quad (40)
2. Markets clear:
\sum_{i=1}^{n} c_t^i(s^t) \leq \sum_{i=1}^{n} w_t^i(s_t) \text{ for each } t \text{ and each } s^t. \quad (41)
Note that when we write \sum_{i=1}^{n} c_t^i(s^t) \leq \sum_{i=1}^{n} w_t^i(s_t), the consumption depends
on s^t while the endowments depend on s_t. This is fine, since any history s^t
implies a final state s_t. In what follows we will normalize prices by p_0(s_0) = 1.
The FOCs for the consumer are given by
\beta^t u'(c_t^i(s^t)) \pi(s^t|s_0) = \mu_i p_t(s^t) \text{ for all } t. \quad (42)
Note that if p_0(s_0) = 1, then for any agent i,
u'(c_0^i(s_0)) = \mu_i .
Again, we can use the FOC for t+1,
\beta^{t+1} u'(c_{t+1}^i(s^{t+1})) \pi(s^{t+1}|s_0) = \mu_i p_{t+1}(s^{t+1}) \text{ for all } t,
to arrive at
\frac{\beta^t u'(c_t^i(s^t)) \pi(s^t|s_0)}{p_t(s^t)} = \frac{\beta^{t+1} u'(c_{t+1}^i(s^{t+1})) \pi(s^{t+1}|s_0)}{p_{t+1}(s^{t+1})},
or
u'(c_t^i(s^t)) = \frac{p_t(s^t)}{p_{t+1}(s^{t+1})} \frac{\pi(s^{t+1}|s_0)}{\pi(s^t|s_0)} \beta u'(c_{t+1}^i(s^{t+1})). \quad (43)
For any two histories s^{t+1} and s^t such that s^{t+1} = [s_{t+1}, s^t], we have
\pi(s^{t+1}|s_0) = \pi(s_{t+1}|s_t) \pi(s^t|s_0).
Then, the intertemporal FOC becomes
u'(c_t^i(s^t)) = \beta \frac{p_t(s^t)}{p_{t+1}(s^{t+1})} \pi(s_{t+1}|s_t) u'(c_{t+1}^i(s^{t+1})).
Consumption of the time-t good in history s^t (with current state s_t) is related to
the consumption of time-(t+1) goods in history s^{t+1} (with last state s_{t+1}) in the
usual way.
Note also that the time-t, history-s^t consumption of any two agents is related
by
\frac{u'(c_t^i(s^t))}{u'(c_t^j(s^t))} = \frac{\mu_i}{\mu_j} . \quad (44)
Remark 24 Note that equation (44) has a very strong consumption insurance
implication: the ratio of marginal utilities of any two agents is constant across
time and states. For example, with CRRA utilities
u(c) = \frac{c^{1-\sigma}}{1 - \sigma} \text{ with } \sigma > 0,
we get
c_t^i(s^t) = c_t^j(s^t) \left( \frac{\mu_i}{\mu_j} \right)^{-\frac{1}{\sigma}} .
Hence, time-t consumption allocations to distinct agents are constant fractions
of each other, and as a result individual consumption is perfectly correlated
with aggregate consumption. Furthermore, time-t consumption is independent
of the time-t endowment of the agent.
Thus we can write agent i's consumption as
c_t^i(s^t) = u'^{-1}\left( u'(c_t^1(s^t)) \frac{\mu_i}{\mu_1} \right),
where u'^{-1} indicates the inverse function of u'. We know that feasibility
implies
\sum_{i=1}^{n} c_t^i(s^t) = \sum_{i=1}^{n} u'^{-1}\left( u'(c_t^1(s^t)) \frac{\mu_i}{\mu_1} \right) = \sum_{i=1}^{n} w_t^i(s_t).
Since the right-hand side of this equation only depends on s_t, the left-hand side
must also only depend on s_t; hence we have
c_t^i(s^t) = c_t^i(s_t) \text{ for all } i.
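The risk-sharing implication can be made concrete with CRRA utility: each agent consumes a fixed share of the aggregate endowment, so the ratio of marginal utilities reproduces \mu_i/\mu_j in every state. The multipliers, \sigma, and state endowments below are hypothetical numbers chosen only for illustration.

```python
# Risk-sharing illustration: with u(c) = c^(1-sigma)/(1-sigma), equation (44)
# implies c^i proportional to mu_i^(-1/sigma), i.e. fixed consumption shares.
sigma = 2.0
mu = [1.0, 1.8, 0.6]              # budget multipliers mu_i (hypothetical)
raw = [m ** (-1 / sigma) for m in mu]
shares = [r / sum(raw) for r in raw]

aggregate = {"l": 3.0, "h": 5.0}  # total endowment in each state (made up)
for state, total in aggregate.items():
    c = [s * total for s in shares]
    # the marginal-utility ratio of agents 1 and 2 is the same in every state
    ratio = (c[0] ** -sigma) / (c[1] ** -sigma)
    print(state, round(sum(c), 10), round(ratio, 10))
```

In both states the printed ratio equals \mu_1/\mu_2: the aggregate endowment scales everyone's consumption proportionally and cancels out of the marginal-utility ratio.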
Example 25 Suppose n = 2, and
u_i(c_t^i) = \log(c_t^i) \text{ for } i = 1, 2.
Suppose there are two states, S = \{1, 2\}. Let s_0 = 1, and
prob(s_{t+1} = 1|s_t = 1) = \pi_{11}, \text{ and } prob(s_{t+1} = 2|s_t = 1) = \pi_{12}.
The transition probabilities \pi_{22} and \pi_{21} are defined similarly. The endowment
functions are
w_t^1 = s_t ,
and
w_t^2 = 3 - s_t .
Hence, each period there is a fixed amount of the good (i.e. there is no aggregate
uncertainty):
w = w_t^1 + w_t^2 = 3 \text{ for all } t.
The FOC for c_t^i(s^t) is
\beta^t \frac{1}{c_t^i(s^t)} \pi(s^t|s_0) = \mu_i p_t(s^t).
Since p_0(s_0) = 1,
\frac{1}{c_0^i(s_0)} = \mu_i .
Let's focus on t = 0 and t = 1; using equation (43) we have
\beta \frac{1}{c_1^i(s^1)} \pi(s^1|s_0) = \frac{1}{c_0^i(s_0)} p_1(s^1),
or
\beta \frac{c_0^i(s_0)}{c_1^i(s^1)} \pi(s^1|s_0) = p_1(s^1).
Here s^1 is a history for t = 1. There are two possible histories, [1, 1] and [2, 1].
Since \beta, \pi(s^1|s_0), and p_1(s^1) are the same for both agents, we have
\frac{c_0^1(s_0)}{c_1^1(s^1)} = \frac{c_0^2(s_0)}{c_1^2(s^1)} .
Since c_0^1(s_0) + c_0^2(s_0) = 3 and c_1^1(s^1) + c_1^2(s^1) = 3, it must be the case that
c_0^1(s_0) = c_1^1(s^1) = c^1 \text{ and } c_0^2(s_0) = c_1^2(s^1) = c^2,
where c^1 + c^2 = 3. Indeed one can easily show that this will be true for any time
period. Hence, agents consume a fixed amount of goods every period. Then,
\beta \frac{c_0^i(s_0)}{c_1^i(s^1)} \pi(s^1|s_0) = p_1(s^1)
implies
p_1(s^1) = \beta \pi(s^1|s_0).
Indeed one can easily show that
p_t(s^t) = \beta^t \pi(s^t|s_0).
Then, the lifetime budget constraint for agent 1 implies
1 + \sum_{s^1} p_1(s^1) w_1^1(s^1) + \sum_{s^2} p_2(s^2) w_2^1(s^2) + ......
= c^1 + \sum_{s^1} p_1(s^1) c^1 + \sum_{s^2} p_2(s^2) c^1 + ..... .
Given that p_t(s^t) = \beta^t \pi(s^t|s_0), we have
1 + \sum_{s^1} p_1(s^1) w_1^1(s^1) + \sum_{s^2} p_2(s^2) w_2^1(s^2) + ......
= c^1 \left[ 1 + \sum_{s^1} \beta \pi(s^1|s_0) + \sum_{s^2} \beta^2 \pi(s^2|s_0) + ..... \right] = \frac{c^1}{1 - \beta} .
Therefore,
c^1 = (1 - \beta) \left[ 1 + \sum_{s^1} p_1(s^1) w_1^1(s^1) + \sum_{s^2} p_2(s^2) w_2^1(s^2) + ...... \right],
and
c^2 = (1 - \beta) \left[ 2 + \sum_{s^1} p_1(s^1) w_1^2(s^1) + \sum_{s^2} p_2(s^2) w_2^2(s^2) + ...... \right].
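The constant-consumption result of Example 25 can be checked numerically. The sketch below uses illustrative transition probabilities and a truncated horizon; since only the current state matters for endowments, it is enough to propagate the distribution of s_t forward and discount.

```python
# Numerical version of Example 25 (log utility, two states, s_0 = 1):
# prices p_t(s^t) = beta^t pi(s^t|s_0), endowments w^1 = s_t, w^2 = 3 - s_t.
# The transition probabilities and horizon are illustrative choices.
beta = 0.95
P = {1: {1: 0.8, 2: 0.2}, 2: {1: 0.3, 2: 0.7}}  # P[s][s'] = pi(s'|s)
s0, T = 1, 500                                   # truncate at horizon T

# Present value of endowments: PV_i = sum_t beta^t E[w^i(s_t)]
pv = [float(s0), float(3 - s0)]    # t = 0 terms, p_0 = 1
dist = {1: 1.0, 2: 0.0}            # distribution of s_t, starting at s_0 = 1
for t in range(1, T + 1):
    new = {1: 0.0, 2: 0.0}
    for s, pr in dist.items():
        for s2, q in P[s].items():
            new[s2] += pr * q
    dist = new
    pv[0] += beta**t * sum(pr * s for s, pr in dist.items())
    pv[1] += beta**t * sum(pr * (3 - s) for s, pr in dist.items())

c1, c2 = (1 - beta) * pv[0], (1 - beta) * pv[1]
print(round(c1 + c2, 6))   # aggregate consumption, approximately 3
```

Since the aggregate endowment is 3 in every state, the present values sum to 3/(1-\beta), so c^1 + c^2 recovers the aggregate endowment up to truncation error.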
Remark 26 With uncertainty, the sensitivity of agents to risk plays an important
role. Given u, there are two measures of risk aversion:
- Given a utility function u, with u' > 0 and u'' \leq 0, the coefficient of
absolute risk aversion is defined as:
CARA = \frac{-u''(c)}{u'(c)} .
- Given a utility function u, with u' > 0 and u'' \leq 0, the coefficient of relative
risk aversion is defined as:
CRRA = \frac{-u''(c) c}{u'(c)} .
Note that for a linear utility function both CARA and CRRA are zero.
Agents with linear utility are called risk neutral. Agents with concave utility
functions are risk averse, and the curvature of the utility function determines
the degree of risk aversion.
Note also that if
u(c) = \frac{c^{1-\sigma}}{1 - \sigma},
then
CRRA = \frac{-(-\sigma c^{-\sigma - 1}) c}{c^{-\sigma}} = \sigma,
and for this utility function \sigma determines both the elasticity of intertemporal
substitution and the coefficient of relative risk aversion.
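The claim that the power utility function has constant relative risk aversion equal to \sigma can be confirmed without any calculus, by approximating the derivatives with finite differences (the value of \sigma and the evaluation points are arbitrary):

```python
# Finite-difference check that u(c) = c^(1-sigma)/(1-sigma) has
# -u''(c) c / u'(c) = sigma at any c > 0. sigma is an illustrative choice.
sigma = 3.0
u = lambda c: c ** (1 - sigma) / (1 - sigma)

def crra(c, h=1e-5):
    up = (u(c + h) - u(c - h)) / (2 * h)              # u'(c), central difference
    upp = (u(c + h) - 2 * u(c) + u(c - h)) / h**2     # u''(c)
    return -upp * c / up

print(round(crra(1.7), 4))   # ≈ sigma = 3.0
```

The same check at any other positive consumption level returns the same value, which is exactly what "constant relative risk aversion" means.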
3.1 Asset Pricing
In an Arrow-Debreu equilibrium, there is a well-defined price for each good in
each possible state of the world. Since there is a complete set of prices, any asset
which promises to deliver a particular sequence of state-contingent goods is
redundant.
This redundancy condition can be used to price any asset. In particular, let
\{d(s_t)\}_{t=0}^{\infty} be an asset that promises to deliver d(s_t) amount of time-t goods
if the state is s_t, where d is a function that maps states into the positive real line.
What is the value of this asset? If you want to sell this asset, you have to
make sure that if the state is s_t at time t you have d(s_t) amount of time-t goods.
Then you have to buy d(s_t) amount of time-t goods in the (time-0) Arrow-Debreu
market. The state s_t is, however, not known at time 0. Thus you have to
cover all possible states that can happen. Then the cost of this asset for the
seller is
P_{0,0} = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t) d(s_t),
where P_{0,0} indicates that the asset starts payments at time 0, and its price is
measured in terms of the value of goods at time 0.
Example 27 Let d(s_t) = 1 for all s_t. The asset pays 1 unit in every date and
in every possible state. The cost of this asset is
P_{0,0} = \sum_{t=0}^{\infty} \sum_{s^t} p_t(s^t).
Example 28 Let d(s_t) = 1 if t = \tau, and d(s_t) = 0 otherwise. The asset pays
one unit at time \tau. Then the cost of this asset is
P_{0,0} = \sum_{s^\tau} p_\tau(s^\tau).
Suppose now we take an asset \{d(s_t)\}_{t=0}^{\infty} and get rid of the first \tau payments,
i.e. \tilde{d}(s_t) = d(s_t) if t > \tau, and 0 otherwise. The asset pays according to the function
d(s_t) starting at date \tau + 1. How much is this asset valued at time 0? Obviously
it will depend on what the history is up to \tau. Let P_{\tau,0}(s^\tau) be the price of this
asset, given history s^\tau, in terms of time-0 goods:
P_{\tau,0}(s^\tau) = \sum_{t=\tau+1}^{\infty} \sum_{\{\tilde{s}^t : \tilde{s}^\tau = s^\tau\}} p_t(\tilde{s}^t) d(\tilde{s}_t),
where \{\tilde{s}^t : \tilde{s}^\tau = s^\tau\} indicates that we are only summing over future histories that
share the right s^\tau.
What is the value of this asset in terms of time-\tau goods? It is simply
\frac{P_{\tau,0}(s^\tau)}{p_\tau(s^\tau)},
where p_\tau(s^\tau) is the price of time-\tau, history-s^\tau goods at time 0. Then,
P_{\tau,\tau}(s^\tau) = \sum_{t=\tau+1}^{\infty} \sum_{\{\tilde{s}^t : \tilde{s}^\tau = s^\tau\}} \frac{p_t(\tilde{s}^t)}{p_\tau(s^\tau)} d(\tilde{s}_t).
Now P_{\tau,\tau}(s^\tau) indicates the price of this asset, given history s^\tau, in terms of
time-\tau, history-s^\tau goods. In what follows we will simply use P_\tau(s^\tau).
Given equation (42),
\frac{p_t(s^t)}{p_\tau(s^\tau)} = \beta^{t-\tau} \frac{u'(c_t^i(s^t))}{u'(c_\tau^i(s^\tau))} \pi(s^t|s^\tau)
represents the price of time-t, history-s^t goods in terms of time-\tau, history-s^\tau
goods. Then P_\tau(s^\tau) is
P_\tau(s^\tau) = \sum_{t=\tau+1}^{\infty} \sum_{\{\tilde{s}^t : \tilde{s}^\tau = s^\tau\}} \beta^{t-\tau} \frac{u'(c_t^i(\tilde{s}^t))}{u'(c_\tau^i(s^\tau))} \pi(\tilde{s}^t|s^\tau) d(\tilde{s}_t).
This is our asset pricing formula.
Suppose the current history is s^t. Then, the time-t price of an asset that pays
d(s_{t+1}) tomorrow and nothing else, for example, is given by
P_t(s^t) = E_t \left[ \beta \frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))} d(s_{t+1}) \right],
where E_t represents the expectation over s_{t+1} given s^t.
Similarly, the time-t price of an asset that pays d(s_\tau) in every period starting
at t + 1 is
P_t(s^t) = E_t \left[ \sum_{\tau=t+1}^{\infty} \beta^{\tau-t} \frac{u'(c_\tau^i(s^\tau))}{u'(c_t^i(s^t))} d(s_\tau) \right].
Let's look at
P_t(s^t) = E_t \left[ \beta \frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))} d(s_{t+1}) \right]
more carefully. We can rearrange it to arrive at
u'(c_t^i(s^t)) = \beta E_t \left[ \frac{d(s_{t+1})}{P_t(s^t)} u'(c_{t+1}^i(s^{t+1})) \right].
For an asset that pays d(s_{t+1}) next period and nothing else, \frac{d(s_{t+1})}{P_t(s^t)} = R_{t+1}
represents its return. Then
u'(c_t^i(s^t)) = \beta \left\{ E_t[R_{t+1}] E_t \left[ u'(c_{t+1}^i(s^{t+1})) \right] + Cov_t \left[ u'(c_{t+1}^i(s^{t+1})), R_{t+1} \right] \right\},
or
E_t[R_{t+1}] = \frac{1}{\beta E_t \left[ u'(c_{t+1}^i(s^{t+1})) \right]} \left\{ u'(c_t^i(s^t)) - \beta Cov_t \left[ u'(c_{t+1}^i(s^{t+1})), R_{t+1} \right] \right\}.
What does the covariance term indicate about asset prices? Suppose c_{t+1}^i(s^{t+1})
and d(s_{t+1}) move together, i.e. the asset pays you a high amount when your
consumption is high. Then Cov_t \left[ u'(c_{t+1}^i(s^{t+1})), R_{t+1} \right] is negative, and E_t[R_{t+1}]
must be higher. Hence, if an asset's payoff is highly correlated with your consumption,
then its return must be high. You need a high return to hold this asset because
it is risky. If, on the other hand, c_{t+1}^i(s^{t+1}) and d(s_{t+1}) move in opposite directions,
the asset is less risky and the return can be lower. The risky asset has to pay a
premium. We call this the risk premium.
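The risk-premium logic can be illustrated by pricing two one-period assets with the Euler equation P = E[\beta u'(c')/u'(c) \, d] under CRRA utility. All the numbers below are made up; the point is only the ranking of the two expected returns.

```python
# Risk-premium sketch: price two one-period payoffs with the consumption
# Euler equation under CRRA utility. The asset whose payoff moves with
# consumption carries the higher expected return. Numbers are illustrative.
beta, sigma = 0.95, 2.0
c0 = 1.0
states = [("boom", 0.5, 1.2), ("bust", 0.5, 0.8)]   # (name, prob, c')

def price_and_return(payoff):
    """payoff maps state name -> d(s'); returns (P_t, E[R_{t+1}])."""
    price = sum(pr * beta * (c1 / c0) ** (-sigma) * payoff[name]
                for name, pr, c1 in states)
    exp_ret = sum(pr * payoff[name] for name, pr, c1 in states) / price
    return price, exp_ret

_, r_pro = price_and_return({"boom": 1.2, "bust": 0.8})      # pays in booms
_, r_counter = price_and_return({"boom": 0.8, "bust": 1.2})  # pays in busts
print(r_pro > r_counter)   # → True: the procyclical payoff demands a premium
```

Both payoffs have the same expected dividend, so the entire return gap comes from the covariance with marginal utility: the countercyclical asset insures the agent and can therefore pay less than one on average.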
Finally note that given
P_t(s^t) = E_t \left[ \sum_{\tau=t+1}^{\infty} \beta^{\tau-t} \frac{u'(c_\tau^i(s^\tau))}{u'(c_t^i(s^t))} d(s_\tau) \right]
= E_t \left[ \beta \frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))} d(s_{t+1}) + \beta^2 \frac{u'(c_{t+2}^i(s^{t+2}))}{u'(c_t^i(s^t))} d(s_{t+2}) + .... \right],
and
P_{t+1}(s^{t+1}) = E_{t+1} \left[ \sum_{\tau=t+2}^{\infty} \beta^{\tau-t-1} \frac{u'(c_\tau^i(s^\tau))}{u'(c_{t+1}^i(s^{t+1}))} d(s_\tau) \right]
= E_{t+1} \left[ \beta \frac{u'(c_{t+2}^i(s^{t+2}))}{u'(c_{t+1}^i(s^{t+1}))} d(s_{t+2}) + \beta^2 \frac{u'(c_{t+3}^i(s^{t+3}))}{u'(c_{t+1}^i(s^{t+1}))} d(s_{t+3}) + .... \right],
it is easy to show that
P_t(s^t) = E_t \left[ \beta \frac{u'(c_{t+1}^i(s^{t+1}))}{u'(c_t^i(s^t))} \left[ d(s_{t+1}) + P_{t+1}(s^{t+1}) \right] \right].
Then,
u'(c_t^i(s^t)) = E_t \left[ \beta u'(c_{t+1}^i(s^{t+1})) \frac{d(s_{t+1}) + P_{t+1}(s^{t+1})}{P_t(s^t)} \right], \quad (45)
which is nothing but our familiar intertemporal optimization condition, since
\frac{d(s_{t+1}) + P_{t+1}(s^{t+1})}{P_t(s^t)}
is the return on this asset. This should not be surprising, since we arrived at these
asset pricing functions from the agent's optimization problem.
Remark 29 The material in this section follows Ljungqvist and Sargent (2004).
Remark 30 Note that the asset pricing equations we analyze above provide a
relation between prices and quantities, but do not go far enough to derive an
asset pricing function that maps the fundamentals of the economy into prices in
equilibrium. This is done in Lucas (1978).
Remark 31 Mehra and Prescott (1985) apply the Lucas (1978) framework to the
U.S. data and investigate whether the risk premium implied by the model is
consistent with the data.
References
[1] Ljungqvist, Lars and Thomas J. Sargent. 2004. "Competitive Equilibrium
with Complete Markets (Chapter 8)," in Recursive Macroeconomic Theory,
MIT Press.
[2] Lucas, Robert E., Jr. "Asset Prices in an Exchange Economy," Econometrica,
46(6), 1978, 1429-1445.
[3] Mehra, Rajnish and Prescott, Edward C., "The Equity Premium: A Puzzle,"
Journal of Monetary Economics, vol. 15, March 1985, pages 145-161.
4 Overlapping Generations
In the previous sections of this course we focused on pure exchange economies
(static and dynamic) and studied some fundamental concepts of general equilibrium,
such as the Arrow-Debreu and the sequential equilibrium. We also looked
at the basic properties of intertemporal decision making in dynamic, stochastic
environments.
In the rest of this class, we will study two major workhorses of modern
macroeconomics: overlapping generations (OLG) models and the one-sector
growth model. For each of these models, we will present the basic environment,
introduce the equilibrium concepts that we will use, and look at the tools that
we need to analyze such environments. We will start with the OLG models.
This model was developed by Allais (1947), Samuelson (1958) and Diamond
(1965), and is used to study a variety of issues in modern macroeconomics.
4.1 Exchange Economy
Consider the overlapping generations structure described below. Time
is discrete and the horizon is infinite, t = 1, 2, .... There is one perishable
consumption good per period. Each agent lives for two periods. Therefore at each
point in time the economy is populated by two generations, the young and the
old. At each point in time a new generation appears.

GENERATIONS \ TIME   1      2      3     ...
0 (initial old)      old
1                    young  old
2                           young  old
3                                  young
...
There are N(t) members of generation t. Let c_t^h(t) and c_t^h(t+1) be the
consumption of agent h in generation t at time t (when young) and at t+1
(when old), respectively. Similarly, let w_t^h(t) and w_t^h(t+1) be the endowment
of agent h in generation t at time t (when young) and at t+1 (when old). At
time 1, there are N(0) old people (the initial old).
We will assume that the preferences of agent h in generation t \geq 1 are
represented by u_t^h(c_t^h(t), c_t^h(t+1)), where u : R_+^2 \to R is differentiable, with
\frac{\partial u_t^h(c_t^h(t), c_t^h(t+1))}{\partial c_t^h(t+j)} > 0 \quad \text{and} \quad \frac{\partial^2 u_t^h(c_t^h(t), c_t^h(t+1))}{\partial (c_t^h(t+j))^2} < 0 \quad \text{for } j = 0, 1.
We will also assume that the initial old have preferences represented by a strictly
increasing utility function.
Remark 32 We will also use c_y^h and c_{o+1}^h to denote an agent's consumption
when young and when old (similarly w_y^h and w_{o+1}^h to denote his endowments).
Furthermore, if it is understood that every generation is identical and all agents in
each generation are identical, we will simply refer to the consumption when young
and when old as c_y and c_o (or c_1 and c_2).
Definition 33 A consumption allocation is a sequence

    C = \{ \{c_{t-1}^h(t)\}_{h=1}^{N(t-1)}, \{c_t^h(t)\}_{h=1}^{N(t)} \}_{t=1}^{\infty}.
Definition 34 A consumption allocation C is feasible if, for all t,

    C(t) = \underbrace{\sum_{h=1}^{N(t)} c_t^h(t)}_{c_t(t)} + \underbrace{\sum_{h=1}^{N(t-1)} c_{t-1}^h(t)}_{c_{t-1}(t)} \leq \underbrace{\sum_{h=1}^{N(t)} w_t^h(t)}_{Y_t(t)} + \underbrace{\sum_{h=1}^{N(t-1)} w_{t-1}^h(t)}_{Y_{t-1}(t)} = Y(t).
Definition 35 A feasible consumption allocation C is efficient if there does
not exist any other feasible allocation \hat{C} such that \hat{C}(t) \geq C(t) for all t,
with \hat{C}(t) > C(t) for some t.
Definition 36 A consumption allocation C^A is Pareto superior to C^B if
(1) no agent strictly prefers B to A, and (2) at least one agent strictly prefers A to B.
Definition 37 A consumption allocation is Pareto optimal if it is feasible and
if there does not exist another feasible allocation that is Pareto superior to it.
Definition 38 A consumption allocation is symmetric if all members of all
generations consume the same consumption pair,

    c_t^h(t) = c_s^j(s) = c_1,  c_t^h(t+1) = c_s^j(s+1) = c_2,  for all h and j in generations t or s.

Hence, a symmetric allocation treats all young in all generations and all old in
all generations the same way. If N(t) = N(t+1) = N, and Y(t) = Y(t+1) = Y,
the set of symmetric and efficient allocations will be characterized by (see Figure 9):

    N c_1 + N c_2 = Y,

or

    c_1 + c_2 = \frac{Y}{N}.
4.2 Competitive Equilibrium
We will look at two different market arrangements: Arrow-Debreu markets and
sequential markets (see Kehoe (1989) for a detailed analysis of the AD setup).
[Figure 9: Symmetric and Efficient Allocations. The line c_1 + c_2 = Y/N in
(c_1, c_2) space, with slope -1 and intercepts Y/N on both axes.]
4.2.1 Arrow-Debreu Equilibrium
Suppose all agents (born and unborn) participate in a time-1 market where they
can sell their endowments and buy consumption goods. Let p(t) be the price of
time-t goods in this market. Then the problem of agent h in generation t \geq 1 is

    \max_{c_t^h(t), c_t^h(t+1)} u_t^h(c_t^h(t), c_t^h(t+1)),     P(1)

subject to

    p(t) c_t^h(t) + p(t+1) c_t^h(t+1) \leq p(t) w_t^h(t) + p(t+1) w_t^h(t+1),
    c_t^h(t) \geq 0,  c_t^h(t+1) \geq 0.
The problem of the initial old is much simpler and given by

    \max_{c_0^h(1)} u_0^h(c_0^h(1)),     P(2)

subject to

    p(1) c_0^h(1) \leq p(1) w_0^h(1).
Then, an Arrow-Debreu equilibrium is an allocation

    \hat{C} = \{ \{\hat{c}_{t-1}^h(t)\}_{h=1}^{N(t-1)}, \{\hat{c}_t^h(t)\}_{h=1}^{N(t)} \}_{t=1}^{\infty}

and a sequence of prices \{\hat{p}(t)\}_{t=1}^{\infty} such that given the sequence of prices the
elements of \hat{C} solve each generation's maximization problem and the markets
clear.
Definition 39 An AD equilibrium is a consumption allocation \hat{C} and a
sequence of prices \{\hat{p}(t)\}_{t=1}^{\infty} such that
(1) Given \{\hat{p}(t)\}_{t=1}^{\infty}, consumers maximize their utility by choosing the relevant
elements of \hat{C}, i.e. consumers solve P(1) and P(2).
(2) The goods market clears, that is, for all t \geq 1,

    \sum_{h=1}^{N(t)} c_t^h(t) + \sum_{h=1}^{N(t-1)} c_{t-1}^h(t) \leq \sum_{h=1}^{N(t)} w_t^h(t) + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t).
Given a set of prices \{p(t)\}_{t=1}^{\infty}, an agent of generation t solves the following
problem:

    \mathcal{L} = \max_{c_t^h(t), c_t^h(t+1)} u_t^h(c_t^h(t), c_t^h(t+1)) + \lambda [ p(t) w_t^h(t) + p(t+1) w_t^h(t+1) - p(t) c_t^h(t) - p(t+1) c_t^h(t+1) ].
Using the FOCs for this problem,

    \frac{\partial \mathcal{L}}{\partial c_t^h(t)} = \frac{\partial u_t^h(c_t^h(t), c_t^h(t+1))}{\partial c_t^h(t)} - \lambda p(t) = 0,

and

    \frac{\partial \mathcal{L}}{\partial c_t^h(t+1)} = \frac{\partial u_t^h(c_t^h(t), c_t^h(t+1))}{\partial c_t^h(t+1)} - \lambda p(t+1) = 0,

we can determine the optimal consumption/saving decision:

    \frac{p(t)}{p(t+1)} = \underbrace{\frac{\partial u_t^h / \partial c_t^h(t)}{\partial u_t^h / \partial c_t^h(t+1)}}_{MRS}.

Note that this is again the standard optimality condition that equates the MRS
to the price ratio.
If

    u_t^h(c_t^h(t), c_t^h(t+1)) = u(c_t^h(t)) + \beta u(c_t^h(t+1)),

this condition gives us the intertemporal optimization condition

    u'(c_t^h(t)) = \beta \frac{p(t)}{p(t+1)} u'(c_t^h(t+1)),

that we have seen again and again.
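As a numeric sanity check on this condition, the sketch below solves the log-utility case in closed form and verifies that the intertemporal condition and the budget constraint hold. All parameter values are illustrative assumptions, not taken from the notes.

```python
# Minimal numeric check of the Arrow-Debreu intertemporal condition
# u'(c(t)) = beta * (p(t)/p(t+1)) * u'(c(t+1)) for log utility.
# All numbers (beta, prices, endowments) are illustrative assumptions.
beta = 0.9
p1, p2 = 1.0, 0.8          # time-t and time-(t+1) goods prices
w1, w2 = 3.0, 1.0          # endowments when young and old

W = p1 * w1 + p2 * w2      # lifetime wealth at time-1 prices
c1 = W / ((1 + beta) * p1) # closed-form solution for log utility
c2 = beta * W / ((1 + beta) * p2)

# u'(c) = 1/c for log utility; the Euler equation holds exactly
lhs = 1 / c1
rhs = beta * (p1 / p2) * (1 / c2)
assert abs(lhs - rhs) < 1e-12
# the budget constraint holds with equality
assert abs(p1 * c1 + p2 * c2 - W) < 1e-12
```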
4.2.2 Sequential Markets
We could also imagine a market economy in which individuals trade their
endowments using one-period lending and borrowing arrangements to maximize
their lifetime utility, and in which market clearing determines the interest rate.
Let R(t) be the real gross interest rate between t and t+1, representing how
much the time-(t+1) goods market is willing to pay for each unit of time-t goods.
Then, the consumer's problem is given by

    \max_{c_t^h(t), c_t^h(t+1), l_t^h} u_t^h(c_t^h(t), c_t^h(t+1)),     P(3)

subject to

    c_t^h(t) \leq w_t^h(t) - \underbrace{l_t^h}_{lending/borrowing},
    c_t^h(t+1) \leq l_t^h R(t) + w_t^h(t+1),
    c_t^h(t) \geq 0,  c_t^h(t+1) \geq 0.
Generation 0 (the initial old) solves

    \max_{c_0^h(1)} u_0^h(c_0^h(1)),     P(4)

subject to

    c_0^h(1) \leq w_0^h(1).

Remark 40 Note that l_t^h < 0 represents borrowing.

Remark 41 Note that the old at time t can't borrow, since no one would lend
to them.
For generations 1, 2, ..., we can combine the two budget constraints to arrive at
a lifetime budget constraint (represented in Figure 10):

    c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} \leq \underbrace{w_t^h(t) + \frac{w_t^h(t+1)}{R(t)}}_{lifetime wealth}.

Note that lending/borrowing does not appear in this budget constraint, since it
is simply a way to allocate resources between the two periods.
An agent's optimal decision will be determined by the solution of the following
problem:

    \mathcal{L} = \max_{c_t^h(t), c_t^h(t+1)} u_t^h(c_t^h(t), c_t^h(t+1)) + \lambda [ w_t^h(t) + \frac{w_t^h(t+1)}{R(t)} - c_t^h(t) - \frac{c_t^h(t+1)}{R(t)} ].
Using the FOCs for this problem,

    \frac{\partial \mathcal{L}}{\partial c_t^h(t)} = \frac{\partial u_t^h(c_t^h(t), c_t^h(t+1))}{\partial c_t^h(t)} - \lambda = 0,

and

    \frac{\partial \mathcal{L}}{\partial c_t^h(t+1)} = \frac{\partial u_t^h(c_t^h(t), c_t^h(t+1))}{\partial c_t^h(t+1)} - \lambda \frac{1}{R(t)} = 0,

we can determine the optimal consumption/saving decision (represented by
point C in Figure 10):

    R(t) = \underbrace{\frac{\partial u_t^h / \partial c_t^h(t)}{\partial u_t^h / \partial c_t^h(t+1)}}_{MRS}.
Note again that if

    u_t^h(c_t^h(t), c_t^h(t+1)) = u(c_t^h(t)) + \beta u(c_t^h(t+1)),

this condition gives us the intertemporal optimization condition

    u'(c_t^h(t)) = \beta R(t) u'(c_t^h(t+1)).
[Figure 10: Optimal Decisions. The lifetime budget line in (c_t^h(t), c_t^h(t+1))
space has slope -R(t) and passes through the endowment point
(w_t^h(t), w_t^h(t+1)); the optimal choice is point C.]
Definition 42 A sequential market equilibrium is a consumption allocation \bar{C},
a sequence of lending/borrowing decisions \{ \{\bar{l}_t^h\}_{h=1}^{N(t)} \}_{t=1}^{\infty}, and a
sequence of interest rates \{\bar{R}(t)\}_{t=1}^{\infty} such that
(1) given \{\bar{R}(t)\}_{t=1}^{\infty}, consumers maximize their utility by choosing the relevant
elements of \bar{C}, i.e. consumers solve P(3) and P(4);
(2) the loans market clears, that is, for all t \geq 1,

    \sum_{h=1}^{N(t)} \bar{l}_t^h = 0.
Remark 43 Note that for any given exchange economy the Arrow-Debreu
equilibrium and the sequential market equilibrium are equivalent. That is, given any
Arrow-Debreu equilibrium \hat{C} and \{\hat{p}(t)\}_{t=1}^{\infty}, with \hat{p}(t) > 0 for all t, there is a
corresponding sequential equilibrium \bar{C} and \{\bar{R}(t)\}_{t=1}^{\infty} with \bar{R}(t) > 0 for all t,
such that \hat{C} and \bar{C} are identical. Similarly, given any sequential equilibrium \bar{C}
and \{\bar{R}(t)\}_{t=1}^{\infty}, with \bar{R}(t) > 0 for all t, there is a corresponding AD equilibrium
\hat{C} and \{\hat{p}(t)\}_{t=1}^{\infty} with \hat{p}(t) > 0 for all t, such that \bar{C} and \hat{C} are identical.
Remark 44 Note that in the definition of the sequential market equilibrium we
ignored the goods market equilibrium. This is because goods market clearing
implies loan market clearing. To see this, note that at any time t, the total
consumption of the young is

    \sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} l_t^h,

and the total consumption of the old is

    \sum_{h=1}^{N(t-1)} c_{t-1}^h(t) = R(t-1) \sum_{h=1}^{N(t-1)} l_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t).

Then

    \sum_{h=1}^{N(t)} c_t^h(t) + \sum_{h=1}^{N(t-1)} c_{t-1}^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} l_t^h + R(t-1) \sum_{h=1}^{N(t-1)} l_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t).

If goods markets clear, we have

    \sum_{h=1}^{N(t)} l_t^h = R(t-1) \sum_{h=1}^{N(t-1)} l_{t-1}^h.

If the initial old don't have any debt to pay, that is, if

    \sum_{h=1}^{N(0)} l_0^h = 0,

then

    \sum_{h=1}^{N(t)} l_t^h = 0  for all t \geq 1.
Example 45 Consider an OLG economy with identical agents and no population
growth. Let

    u^h(c_y, c_{o+1}) = \log(c_y) + \log(c_{o+1}),

and

    [w_y, w_{o+1}] = [3, 1].

Then the agent solves

    \max_{c_y, c_{o+1}} \log(c_y) + \log(c_{o+1})

subject to

    c_y + \frac{c_{o+1}}{R(t)} \leq w_y + \frac{w_{o+1}}{R(t)}.

With

    \mathcal{L} = \max_{c_y, c_{o+1}} \log(c_y) + \log(c_{o+1}) + \lambda ( w_y + \frac{w_{o+1}}{R(t)} - c_y - \frac{c_{o+1}}{R(t)} ),

the FOCs are

    \frac{1}{c_y} - \lambda = 0,

and

    \frac{1}{c_{o+1}} - \frac{1}{R(t)} \lambda = 0.

Therefore

    \frac{1}{c_{o+1}} = \frac{1}{R(t)} \frac{1}{c_y}  \Rightarrow  c_{o+1} = R(t) c_y,

    c_y + \frac{R(t) c_y}{R(t)} = w_y + \frac{w_{o+1}}{R(t)}  \Rightarrow  2 c_y = w_y + \frac{w_{o+1}}{R(t)},

and

    c_y = \frac{1}{2} [ w_y + \frac{w_{o+1}}{R(t)} ],  c_{o+1} = \frac{1}{2} [ w_y R(t) + w_{o+1} ].

Let s_t be the savings, given by

    s_t = w_y - c_y = \frac{1}{2} w_y - \frac{1}{2} \frac{w_{o+1}}{R(t)}.

Since in equilibrium s_t = 0 (no heterogeneity, hence no lending and borrowing):

    R(t) = \frac{w_{o+1}}{w_y} = \frac{1}{3}.
Remark 46 Note that although there is no lending or borrowing in equilibrium,
we still have to find the R(t) that makes this behavior optimal for the agents.

Note that R(t) = w_{o+1}/w_y = 1/3 simply reflects the resources of an agent when
he/she is old and young. In this economy, the interest rate is completely determined
by the endowment structure. The consumption allocation in competitive equilibrium
is

    C = \{ (c_{y1}, c_{o1}), (c_{y2}, c_{o2}), (c_{y3}, c_{o3}), ... \} = \{ (3, 1), (3, 1), (3, 1), ... \}.

Nobody saves in this economy (i.e. autarky is the equilibrium), since all agents
are identical and they all want to lend when they are young and borrow when
they are old. Note that this allocation is not Pareto optimal, since the consumption
allocation

    \hat{C} = \{ (\hat{c}_{y1}, \hat{c}_{o1}), (\hat{c}_{y2}, \hat{c}_{o2}), (\hat{c}_{y3}, \hat{c}_{o3}), ... \} = \{ (2, 2), (2, 2), (2, 2), ... \},

is both feasible and Pareto superior to C. Note that all generations prefer \hat{C},
since

    \log(2) + \log(2) > \log(3) + \log(1)  for all generations t \geq 1,

and

    \log(2) > \log(1)  for the initial old.
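The computations of Example 45 can be verified numerically; the sketch below reproduces the closed-form log-utility solutions derived above and checks the Pareto comparison:

```python
import math

# Numeric check of Example 45: log utility, endowments (w_y, w_o1) = (3, 1),
# identical agents, so equilibrium savings must be zero.
w_y, w_o1 = 3.0, 1.0

R = w_o1 / w_y                              # candidate equilibrium interest rate 1/3
c_y = 0.5 * (w_y + w_o1 / R)                # optimal consumption when young
c_o1 = 0.5 * (w_y * R + w_o1)               # optimal consumption when old
s = w_y - c_y                               # savings

assert abs(R - 1 / 3) < 1e-12
assert abs(s) < 1e-12                       # autarky: no lending or borrowing
assert abs(c_y - 3.0) < 1e-12 and abs(c_o1 - 1.0) < 1e-12

# The feasible allocation (2, 2) Pareto dominates autarky (3, 1):
assert math.log(2) + math.log(2) > math.log(3) + math.log(1)
assert math.log(2) > math.log(1)            # the initial old also gain
```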
Example 47 Consider an economy with agents having the same utility function
as in the previous example,

    \max_{c_y, c_{o+1}} \log(c_y) + \log(c_{o+1}),

but let the economy be populated by two types of agents that differ in their
endowments:

    [w_y^1, w_{o+1}^1] = [1, 1]  and  [w_y^2, w_{o+1}^2] = [2, 1].

Now type-2 agents want to lend when they are young, and might be able to do
this by offering an interest to type-1 agents. If we go through the same exercise
as above for each type of agent, we will get

    l_t^1 = w_y^1 - c_y^1 = \frac{1}{2} w_y^1 - \frac{1}{2} \frac{w_{o+1}^1}{R(t)} = \frac{1}{2} - \frac{1}{2R(t)},

and

    l_t^2 = w_y^2 - c_y^2 = \frac{1}{2} w_y^2 - \frac{1}{2} \frac{w_{o+1}^2}{R(t)} = 1 - \frac{1}{2R(t)}.

Therefore

    l_t^1 + l_t^2 = 0  \Rightarrow  \frac{1}{2} - \frac{1}{2R(t)} + 1 - \frac{1}{2R(t)} = 0  \Rightarrow  R(t) = \frac{2}{3},

and

    c_y^1 = \frac{1}{2} [ w_y^1 + \frac{w_{o+1}^1}{R(t)} ] = \frac{5}{4}  \Rightarrow  l_t^1 = -\frac{1}{4},

    c_{o+1}^1 = R(t) c_y^1 = \frac{5}{4} \cdot \frac{2}{3} = \frac{5}{6},

    c_y^2 = \frac{1}{2} [ w_y^2 + \frac{w_{o+1}^2}{R(t)} ] = \frac{7}{4}  \Rightarrow  l_t^2 = \frac{1}{4},

    c_{o+1}^2 = R(t) c_y^2 = \frac{7}{4} \cdot \frac{2}{3} = \frac{7}{6}.
Remark 48 Note again that R(t) = 2/3 simply reflects the relative total
endowments of the young and the old.
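A quick numeric check of Example 47, solving the loan-market-clearing condition for R(t) and recovering the allocations above:

```python
# Two types; solve l1(R) + l2(R) = 0 for R and verify the Example 47 allocations.
w1_y, w1_o = 1.0, 1.0    # type-1 endowments (young, old)
w2_y, w2_o = 2.0, 1.0    # type-2 endowments

# savings from the log-utility problem: l = w_y/2 - w_o/(2R);
# market clearing then gives R = (total old endowment)/(total young endowment)
R = (w1_o + w2_o) / (w1_y + w2_y)
assert abs(R - 2 / 3) < 1e-12

l1 = 0.5 * w1_y - w1_o / (2 * R)
l2 = 0.5 * w2_y - w2_o / (2 * R)
assert abs(l1 + l2) < 1e-12               # loans market clears
assert abs(l1 + 0.25) < 1e-12             # type 1 borrows 1/4
assert abs(l2 - 0.25) < 1e-12             # type 2 lends 1/4

c1_y = 0.5 * (w1_y + w1_o / R)            # = 5/4
c1_o = R * c1_y                           # = 5/6
c2_y = 0.5 * (w2_y + w2_o / R)            # = 7/4
c2_o = R * c2_y                           # = 7/6
assert abs(c1_y - 5 / 4) < 1e-12 and abs(c1_o - 5 / 6) < 1e-12
assert abs(c2_y - 7 / 4) < 1e-12 and abs(c2_o - 7 / 6) < 1e-12
```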
4.3 Pareto Optimality in OLG models
We will now investigate Pareto optimality in the simple OLG setting that we
constructed above. We will go through several claims:

Claim 1 If an allocation is PO, then it is efficient.

Claim 2 Suppose that u^h(c_t^h(t), c_t^h(t+1)) is the utility function of person h in
generation t, t \geq 1. Let the MRS be defined as

    \frac{u_1^h(c_t^h(t), c_t^h(t+1))}{u_2^h(c_t^h(t), c_t^h(t+1))},

where u_j^h is the partial derivative of the utility function with respect to its jth
argument. Suppose that h and h' are two members of generation t. A
feasible allocation that assigns positive 1st and 2nd period consumption
to h and h' and implies different MRS for h and h' is not PO.

Claim 3 Consider a stationary and symmetric OLG environment with endowments
given by (w_1, w_2). With strictly convex indifference curves, and with
w_1 > 0, the unique equilibrium (i.e. autarky) is PO if and only if the MRS
at the endowment point is greater than or equal to 1, i.e.

    \frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} \geq 1.
Proof. Suppose \frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} < 1. Then u_1(w_1, w_2) < u_2(w_1, w_2), and (c_1 =
w_1, c_2 = w_2) is not Pareto optimal. To see this, consider the alternative
feasible allocation that gives \bar{c}_1 = w_1 - \varepsilon to the young and \bar{c}_2 = w_2 + \varepsilon
to the old in every period. This allocation Pareto dominates c_1 = w_1 and
c_2 = w_2 for some \varepsilon \in (0, w_1) that makes u_1(\bar{c}_1, \bar{c}_2) = u_2(\bar{c}_1, \bar{c}_2).

Suppose now \frac{u_1(w_1, w_2)}{u_2(w_1, w_2)} > 1, and suppose c_1 = w_1 and c_2 = w_2 is not
Pareto optimal. Then, there must exist an alternative feasible allocation
\bar{c}_1 and \bar{c}_2 that Pareto dominates c_1 = w_1 and c_2 = w_2. That is, for all t,

    u(\bar{c}_1, \bar{c}_2) \geq u(w_1, w_2),

and for some t,

    u(\bar{c}_1, \bar{c}_2) > u(w_1, w_2).

Let t be the first date such that \bar{c}_1 \neq w_1 and \bar{c}_2 \neq w_2. The only way
\bar{c}_1 and \bar{c}_2 can be different from the autarky and Pareto improving is if
goods are transferred from the young to the old at time t.

Let \varepsilon_t be the amount of goods that the young give to the old at time t.
Then at time t, \bar{c}_2 = w_2 + \varepsilon_t and \bar{c}_1 = w_1 - \varepsilon_t. Now next period the old
(who were young at time t) must receive transfers from the young at time
t+1. Let's call this transfer \varepsilon_{t+1}.

Since u_1(w_1, w_2) > u_2(w_1, w_2), in order to make generation t as well off as
they were in the autarky, we need \varepsilon_{t+1} > \varepsilon_t. Then, by the same argument,
\varepsilon_{t+2} > \varepsilon_{t+1}, and eventually the required transfer will be more than w_1.
Hence, we can't find an alternative allocation that Pareto dominates c_1 =
w_1 and c_2 = w_2.
4.4 Introducing a government
Suppose now there is a government that can levy taxes and provide subsidies.
Let

    \tau_t^h = [ \tau_t^h(t), \tau_t^h(t+1) ]

be the taxes or subsidies that agent h of generation t faces in his/her lifetime.
The government budget has to be balanced, hence

    \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = 0.

Then, the lifetime budget constraint for an agent will be given by

    c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} \leq w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)}.
We can define an equilibrium as:

Definition 49 A sequential market equilibrium is a sequence of taxes
\{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty}, a consumption allocation \bar{C}, a sequence of
lending/borrowing decisions \{ \{\bar{l}_t^h\}_{h=1}^{N(t)} \}_{t=1}^{\infty}, and a sequence of interest
rates \{\bar{R}(t)\}_{t=1}^{\infty} such that
(1) given \{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty} and \{\bar{R}(t)\}_{t=1}^{\infty}, consumers maximize their
utility by choosing the relevant elements of \bar{C};
(2) the government budget balances, that is, for all t \geq 1,

    \sum_{h=1}^{N(t)} \bar{\tau}_t^h(t) + \sum_{h=1}^{N(t-1)} \bar{\tau}_{t-1}^h(t) = 0;

(3) the loans market clears, i.e. for all t \geq 1,

    \sum_{h=1}^{N(t)} \bar{l}_t^h = 0.
Example 50 Let [w_y, w_{o+1}] = [2, 1] for all t, and assume the same preference
structure as in the examples above. Let [\tau_y, \tau_{o+1}] = [\frac{1}{2}, -\frac{1}{2}] for all t. Hence,
the government taxes the young and transfers the proceeds to the old. Then, going
through the problem of the agents, we have

    l_t = w_y - \tau_y - c_y = \frac{1}{2}(w_y - \tau_y) - \frac{1}{2} \frac{w_{o+1} - \tau_{o+1}}{R(t)} = \frac{2 - \frac{1}{2}}{2} - \frac{1 + \frac{1}{2}}{2R(t)} = 0  \Rightarrow  R(t) = 1,

and

    c_y = c_{o+1} = \frac{3}{2}.

Note that this allocation is Pareto optimal. In contrast, if there were no government,
the allocation would not be Pareto optimal. Note also that \tau = \frac{1}{2} is not a random
tax rate for the economy in this example. If we wanted to find the tax/transfer
rate that maximizes a representative agent's lifetime utility, we would solve the
following problem:

    \max_{\tau} \log(w_y - \tau) + \log(w_o + \tau),

where we use the fact that autarky is the competitive equilibrium for this economy.
Hence the optimal tax rate is given by

    \frac{1}{w_y - \tau} = \frac{1}{w_o + \tau}  \Rightarrow  \tau^* = \frac{1}{2}.

Note also that at the optimal tax rate \tau^* = \frac{1}{2}, the interest rate is 1.
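Example 50's equilibrium and the optimal tax rate can be checked numerically:

```python
# Check Example 50: endowments (2, 1), taxes (tau_y, tau_o1) = (1/2, -1/2).
w_y, w_o1 = 2.0, 1.0
tau_y, tau_o1 = 0.5, -0.5

# with log utility and identical agents, zero savings pins down R(t) as the
# ratio of after-tax old-age to after-tax young-age endowments
R = (w_o1 - tau_o1) / (w_y - tau_y)
assert abs(R - 1.0) < 1e-12

c_y = 0.5 * ((w_y - tau_y) + (w_o1 - tau_o1) / R)
c_o1 = R * c_y
assert abs(c_y - 1.5) < 1e-12 and abs(c_o1 - 1.5) < 1e-12

# the optimal tax rate maximizes log(w_y - tau) + log(w_o1 + tau):
# FOC 1/(w_y - tau) = 1/(w_o1 + tau)  =>  tau* = (w_y - w_o1)/2
tau_star = (w_y - w_o1) / 2
assert abs(tau_star - 0.5) < 1e-12
```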
Now let the government be able to borrow from the young as well (note
that the government cannot borrow from the old). Suppose the government issues
one-period bonds that are sure claims on 1 unit of goods the next period. Then, if
B(t) is the number of bonds sold at time t, at time t+1 the government needs
B(t) units of time-(t+1) goods to be able to pay back its commitments. The
government can achieve this by taxing the young, taxing the old, or by issuing
new bonds at time t+1.
The government budget is then given by

    \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + p(t)B(t) - B(t-1) = 0,

where p(t) is the price of a unit of government bonds at time t. The government
budget is in balance if

    \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = 0,

and if \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) < 0, then p(t)B(t) - B(t-1) > 0 and
the government needs to issue new bonds, i.e. p(t)B(t) > B(t-1).
Note that with government bonds individual budget constraints become

    c_t^h(t) = w_t^h(t) - \tau_t^h(t) - l_t^h - p(t) b_t^h,

and

    c_t^h(t+1) = w_t^h(t+1) - \tau_t^h(t+1) + R(t) l_t^h + b_t^h,

where b_t^h is the demand for government bonds by agent h of generation t. Note
that an agent demands b_t^h units of government bonds at a unit price of p(t)
when young, which will deliver b_t^h units of goods next period. Note also that the
rate of return on government bonds is \frac{1}{p(t)}.
The lifetime budget constraint for an agent is then given by

    c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} = w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)} - b_t^h [ p(t) - \frac{1}{R(t)} ].

Note that an equilibrium exists only if R(t) = \frac{1}{p(t)}. Otherwise the return on
government bonds and the return on private lending and borrowing are not the
same, and agents can make profits by borrowing from other agents and lending
to the government (or vice versa). When R(t) = \frac{1}{p(t)}, these arbitrage
opportunities are all exhausted.
c


(t) ÷
c


(t ÷ 1)
1(t)
= n


(t) ÷t


(t) ÷
n


(t) ÷t


(t ÷ 1)
1(t)
.
Hence, agents simply try to maximize their lifetime utility given their lifetime
resources. They are indi¤erent between lending to the government or lending
to the other agents. All that matters is the total amount that agents want to
lend (to the government or to the others), :


= 


÷j(t)/


= 


÷
b
!
t
1()
.
In equilibrium,

    \sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \underbrace{\sum_{h=1}^{N(t)} l_t^h}_{=0} - \sum_{h=1}^{N(t)} p(t) b_t^h,

and the total amount that agents are willing to lend to the government is

    \underbrace{\sum_{h=1}^{N(t)} [ w_t^h(t) - \tau_t^h(t) - c_t^h(t) ] = \sum_{h=1}^{N(t)} p(t) b_t^h}_{total lending to the government},

and this must be equal to p(t)B(t).
An equilibrium is now defined as:

Definition 51 A sequential equilibrium is a sequence of taxes
\{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty}, a sequence of government borrowing \{\bar{B}(t)\}_{t=1}^{\infty},
a consumption allocation \bar{C}, a sequence of lending/borrowing decisions
\{ \{\bar{s}_t^h\}_{h=1}^{N(t)} \}_{t=1}^{\infty}, and a sequence of interest rates \{\bar{R}(t)\}_{t=1}^{\infty} such that
(1) given \{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty} and \{\bar{R}(t)\}_{t=1}^{\infty}, consumers maximize their
utility by choosing the relevant elements of \bar{C};
(2) the government budget constraint holds, that is, for all t,

    \sum_{h=1}^{N(t)} \bar{\tau}_t^h(t) + \sum_{h=1}^{N(t-1)} \bar{\tau}_{t-1}^h(t) + \frac{\bar{B}(t)}{\bar{R}(t)} - \bar{B}(t-1) = 0;

(3) the loans market clears, i.e. for all t,

    \underbrace{\sum_{h=1}^{N(t)} \bar{l}_t^h + \sum_{h=1}^{N(t)} \frac{\bar{b}_t^h}{\bar{R}(t)}}_{\bar{S}_t(\bar{R}(t))} - \frac{\bar{B}(t)}{\bar{R}(t)} = 0.
Remark 52 Note that we could introduce an exogenous stream of government
expenditure as well. Suppose the government spends G(t) each period. These
resources do not provide any direct utility to the consumers and must be financed
by taxes or by borrowing. Then the government budget constraint would become

    \sum_{h=1}^{N(t)} \bar{\tau}_t^h(t) + \sum_{h=1}^{N(t-1)} \bar{\tau}_{t-1}^h(t) + \frac{B(t)}{R(t)} - B(t-1) - G(t) = 0.
Remark 53 Note again that the goods market equilibrium and the government
budget constraint imply the loan market equilibrium. The budget constraints of
the young agents at time t imply

    \sum_{h=1}^{N(t)} c_t^h(t) = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t)} s_t^h,

and similarly the budget constraints of the old agents imply

    \sum_{h=1}^{N(t-1)} c_{t-1}^h(t) = R(t-1) \sum_{h=1}^{N(t-1)} s_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t).

Then

    \sum_{h=1}^{N(t)} c_t^h(t) + \sum_{h=1}^{N(t-1)} c_{t-1}^h(t)
    = \sum_{h=1}^{N(t)} w_t^h(t) - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t)} s_t^h + R(t-1) \sum_{h=1}^{N(t-1)} s_{t-1}^h + \sum_{h=1}^{N(t-1)} w_{t-1}^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t).

If goods markets clear, we have

    0 = - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t)} s_t^h + R(t-1) \sum_{h=1}^{N(t-1)} s_{t-1}^h - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t).

From the government budget constraint,

    \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + p(t)B(t) - B(t-1) = 0,

we have

    - \sum_{h=1}^{N(t)} \tau_t^h(t) - \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) = p(t)B(t) - B(t-1).

Then we have

    \sum_{h=1}^{N(t)} s_t^h = R(t-1) \sum_{h=1}^{N(t-1)} s_{t-1}^h + p(t)B(t) - B(t-1).

If the initial old and the government at time 0 have no outstanding debt, that is,
if

    \sum_{h=1}^{N(0)} s_0^h = 0  and  B(0) = 0,

we have

    \sum_{h=1}^{N(1)} s_1^h = p(1)B(1),

and the loan market clears at time 1. Then at time 2, using R(1)p(1) = 1,

    \sum_{h=1}^{N(2)} s_2^h = R(1) \sum_{h=1}^{N(1)} s_1^h + p(2)B(2) - B(1) = R(1)p(1)B(1) + p(2)B(2) - B(1) = p(2)B(2),

and the loan market clears at time 2 as well. Indeed, it clears in every period.
Example 54 Suppose that in period 1 the government wishes to borrow 5 units
of time-1 goods and transfer them to the initial old. It will pay off this debt by
taxing the young of generation 2 and will not issue new debt. Let

    [w_y^1, w_{o+1}^1] = [2, 1], with N_t^1 = 50,  and  [w_y^2, w_{o+1}^2] = [1, 1], with N_t^2 = 50,  for all t,

and

    u_t^h = c_y^h c_{o+1}^h.

It is straightforward to derive

    s_t^1 = 1 - \frac{1}{2R(t)},

and

    s_t^2 = \frac{1}{2} - \frac{1}{2R(t)}.

Then,

    \sum_h s_t^h(R(t)) = 50 [ 1 - \frac{1}{2R(t)} ] + 50 [ \frac{1}{2} - \frac{1}{2R(t)} ] = 75 - \frac{50}{R(t)}.

At t = 1, the loan market equilibrium implies

    75 - \frac{50}{R(1)} - \underbrace{\frac{B(1)}{R(1)}}_{=5} = 0,

and

    R(1) = \frac{5}{7}.

Given R(1),

    s_1^1 = 1 - \frac{1}{2R(1)} = 0.3,

and

    s_1^2 = \frac{1}{2} - \frac{1}{2R(1)} = -0.2.

Then,

    B(1) = 5 R(1) = \frac{25}{7}.

Note that type-1 agents at time 1 want to lend 0.3 each, or 15 as a whole. 10
units of this are borrowed by type-2 agents, and the remaining 5 are borrowed by
the government. Each bond costs p_1 = \frac{7}{5} units of time-1 goods. Since type-1
agents have more resources when young, they are willing to pay a price higher
than one for a claim on a unit of time-2 goods from the government.
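A numeric check of Example 54:

```python
# Check Example 54: two types (50 agents each), u = c_y * c_o1; the government
# borrows 5 units of time-1 goods, i.e. p(1)B(1) = B(1)/R(1) = 5.
N1 = N2 = 50
w1_y, w1_o = 2.0, 1.0
w2_y, w2_o = 1.0, 1.0
G_borrow = 5.0                       # period-1 goods raised by the government

# aggregate private savings S(R) = 75 - 50/R; loan market: S(R) = 5
# => 75 - 50/R = 5  =>  R(1) = 50/70 = 5/7
R1 = (N1 * w1_o / 2 + N2 * w2_o / 2) / (N1 * w1_y / 2 + N2 * w2_y / 2 - G_borrow)
assert abs(R1 - 5 / 7) < 1e-12

s1 = 0.5 * w1_y - w1_o / (2 * R1)    # per-capita savings, type 1
s2 = 0.5 * w2_y - w2_o / (2 * R1)    # per-capita savings, type 2
assert abs(s1 - 0.3) < 1e-12
assert abs(s2 + 0.2) < 1e-12

B1 = G_borrow * R1                   # bonds sold: B(1) = 5 R(1) = 25/7
assert abs(B1 - 25 / 7) < 1e-12
# 15 lent in total by type 1; 10 absorbed by type 2, 5 by the government
assert abs(N1 * s1 - 15.0) < 1e-12
assert abs(-N2 * s2 - 10.0) < 1e-12
```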
Example 55 Let

    [w_y, w_{o+1}] = [2, 1], with N_t = 100,

and

    u_t^h = c_y^h c_{o+1}^h.

Suppose the government issues bonds in the first period and gives the revenue to
the current old. Moreover, the government wants to raise 50 units of period-1
goods. After period 1, the government issues new bonds each period to pay the
outstanding claims. Note that

    s_t = 1 - \frac{1}{2R(t)},

hence

    \sum_h s_t^h = 100 - \frac{50}{R(t)}.

At t = 1,

    100 - \frac{50}{R(1)} = \frac{B(1)}{R(1)} = 50,

and

    R(1) = 1  and  B(1) = 50.

Then,

    s_1 = 1 - \frac{1}{2} = \frac{1}{2}.

Hence, each young agent lends \frac{1}{2} to the government and each old agent
consumes \frac{1}{2} extra units. Next period the same situation will be repeated, with
B(2) = 50, R(2) = 1, etc. Note that here the government takes the place of a
permanent borrower, and allows agents to transfer resources from one period to
the next.
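The rollover scheme of Example 55 can be simulated directly; the sketch below confirms that the same pair (R, B) = (1, 50) supports the debt every period:

```python
# Simulate the bond rollover of Example 55: N = 100 identical agents per
# generation, u = c_y * c_o1, endowments (2, 1); the government raises 50
# units of period-1 goods for the initial old and rolls the debt over forever.
N = 100
w_y, w_o = 2.0, 1.0
debt_goods = 50.0                       # p(t)B(t): goods value of new bonds

for _ in range(20):
    # aggregate savings S(R) = N*w_y/2 - N*w_o/(2R) must equal p(t)B(t)
    # => 100 - 50/R = 50  =>  R(t) = 1 every period
    R = (N * w_o / 2) / (N * w_y / 2 - debt_goods)
    assert abs(R - 1.0) < 1e-12
    s = 0.5 * w_y - w_o / (2 * R)       # each young agent lends 1/2
    assert abs(s - 0.5) < 1e-12
    # proceeds of new bonds exactly pay off maturing ones: B(t) = 50 forever
    B = debt_goods * R
    assert abs(B - 50.0) < 1e-12
```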
Remark 56 Since R(t) = \frac{1}{p(t)}, we could model government borrowing slightly
differently. We could simply state that the government wants to borrow some
amount B(t) at time t and, like any other agent in this economy, is willing to
pay an interest rate R(t). Then the government budget constraint would be

    \sum_{h=1}^{N(t)} \tau_t^h(t) + \sum_{h=1}^{N(t-1)} \tau_{t-1}^h(t) + B(t) - R(t-1)B(t-1) = 0,

and the loan market equilibrium condition would be

    \underbrace{\sum_{h=1}^{N(t)} l_t^h + \sum_{h=1}^{N(t)} b_t^h}_{S_t(R(t))} - B(t) = 0.
Consider again the budget constraint for agent h in generation t. Suppose
B(t) = 0 for all t. Then, the budget constraint is

    c_t^h(t) + \frac{c_t^h(t+1)}{R(t)} = w_t^h(t) - \tau_t^h(t) + \frac{w_t^h(t+1) - \tau_t^h(t+1)}{R(t)}.

Now consider an alternative tax scheme that agent h faces, [\bar{\tau}_t^h(t), \bar{\tau}_t^h(t+1)].
As long as

    \tau_t^h(t) + \frac{\tau_t^h(t+1)}{R(t)} = \bar{\tau}_t^h(t) + \frac{\bar{\tau}_t^h(t+1)}{R(t)},

the agent's optimal decision will be the same. This is simply because his/her
budget set doesn't change. Then, we can state the following:
Proposition 57 Consider a sequential equilibrium \{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty},
\{\bar{B}(t)\}_{t=1}^{\infty}, \bar{C} and \{\bar{R}(t)\}_{t=1}^{\infty}, where \bar{B}(t) = 0 for all t. Then alternative
taxes and transfers \{(\hat{\tau}_{t-1}^h(t), \hat{\tau}_t^h(t))\}_{t=1}^{\infty} that satisfy

    \hat{\tau}_t^h(t) + \frac{\hat{\tau}_t^h(t+1)}{\bar{R}(t)} = \bar{\tau}_t^h(t) + \frac{\bar{\tau}_t^h(t+1)}{\bar{R}(t)},

for all h and all t, and

    \sum_{h=1}^{N(t)} \hat{\tau}_t^h(t) + \sum_{h=1}^{N(t-1)} \hat{\tau}_{t-1}^h(t) = 0,

for all t, are equivalent. That is, allocations and prices under
\{(\hat{\tau}_{t-1}^h(t), \hat{\tau}_t^h(t))\}_{t=1}^{\infty} are identical to allocations and prices under
\{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty}.
Indeed, we could state the following two results as well (which you should try to
prove):

Proposition 58 Consider a sequential equilibrium \{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty},
\{\bar{B}(t)\}_{t=1}^{\infty}, \bar{C} and \{\bar{R}(t)\}_{t=1}^{\infty}. These equilibrium allocations can be
duplicated with alternative taxes \{(\hat{\tau}_{t-1}^h(t), \hat{\tau}_t^h(t))\}_{t=1}^{\infty} that satisfy

    \sum_{h=1}^{N(t)} \hat{\tau}_t^h(t) + \sum_{h=1}^{N(t-1)} \hat{\tau}_{t-1}^h(t) = 0, for all t,

together with \hat{B}(t) = 0. That is, given \{(\hat{\tau}_{t-1}^h(t), \hat{\tau}_t^h(t))\}_{t=1}^{\infty} and \hat{B}(t) = 0,
the equilibrium allocations and the equilibrium interest rates will be identical to \bar{C}
and \{\bar{R}(t)\}_{t=1}^{\infty}.
Proposition 59 Consider a sequential equilibrium \{(\bar{\tau}_{t-1}^h(t), \bar{\tau}_t^h(t))\}_{t=1}^{\infty},
\{\bar{B}(t)\}_{t=1}^{\infty}, \bar{C} and \{\bar{R}(t)\}_{t=1}^{\infty}. Then alternative taxes and transfers
\{(\hat{\tau}_{t-1}^h(t), \hat{\tau}_t^h(t))\}_{t=1}^{\infty} that satisfy

    \hat{\tau}_t^h(t) + \frac{\hat{\tau}_t^h(t+1)}{\bar{R}(t)} = \bar{\tau}_t^h(t) + \frac{\bar{\tau}_t^h(t+1)}{\bar{R}(t)}, for all h and all t,

are equivalent. That is, corresponding to the alternative taxation pattern, there
is a pattern of government borrowing such that the initial equilibrium's
consumption allocation and the initial equilibrium's gross interest rates constitute an
equilibrium under the alternative taxation pattern.
Note that both of these propositions imply the neutrality of government policy.
If taxes are changed in a particular way (i.e. if they change so that each agent
faces the same present value of taxes), the government policy has no real effect
(i.e. allocations and the interest rate remain the same). These results are usually
referred to as Ricardian Equivalence results. In the next section we will show that
if agents care about others (i.e. they are altruistic), then we can arrive at even
more general neutrality results.
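The logic behind these neutrality results can be illustrated with a minimal sketch: at a fixed interest rate, two tax schemes with the same present value imply the same budget set and hence the same choices. Log utility and all numbers below are illustrative assumptions:

```python
# Sketch of Proposition 57's logic at fixed prices: two tax schemes with the
# same present value leave the agent's budget set, hence choices, unchanged.
# R, endowments, taxes, and the shift (0.1) are illustrative assumptions.
R = 1.25
w_y, w_o = 3.0, 1.0

def log_choice(tau_y, tau_o):
    """Optimal (c_y, c_o) with log utility given lifetime after-tax wealth."""
    wealth = (w_y - tau_y) + (w_o - tau_o) / R
    c_y = wealth / 2
    return c_y, R * c_y

base = log_choice(0.4, -0.2)
# shift 0.1 of taxes from old age to youth, keeping the present value fixed:
# tau_y + tau_o/R is unchanged when tau_o falls by 0.1*R as tau_y rises by 0.1
alt = log_choice(0.4 + 0.1, -0.2 - 0.1 * R)

assert abs(base[0] - alt[0]) < 1e-12      # same consumption when young
assert abs(base[1] - alt[1]) < 1e-12      # same consumption when old
```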
4.5 Intergenerational Linkages
Since the government policies we have considered so far redistribute resources
between young and old that are alive at a point in time, or between young and
old that are alive in different periods, it is very important to ask the following
question: what are the effects of government policies if generations care about
each other's welfare (i.e. they are altruistic)?
To analyze this question, consider the following form of altruism: generation
0 cares about the well-being of generation 1, and generations 1, 2, ... are all
selfish. Hence,

    U_0^h = u_0^h ( c_0^h(1), u_1^h(c_1^h(1), c_1^h(2)) ),

where u_1^h(c_1^h(1), c_1^h(2)) is the lifetime utility of generation 1. Hence, generation
0 can choose to transfer some of his/her resources to generation 1. Let b^h(0)
be the bequest of generation 0 to generation 1. Note that since generation 1 is
selfish, he will simply take b^h(0) as an addition to his/her endowments. The
budget constraint faced by generation 0 is now given by

    c_0^h(1) = w_0^h(1) - \tau_0^h(1) - b^h(0),

and those for generation 1 are given by

    c_1^h(1) = w_1^h(1) - \tau_1^h(1) - l_1^h + b^h(0),

and

    c_1^h(2) = w_1^h(2) - \tau_1^h(2) + R(1) l_1^h.
Now consider policies of the following kind:

    \sum_{h=1}^{N(0)} \tau_0^h(1) + \sum_{h=1}^{N(1)} \tau_1^h(1) = 0,

where resources are transferred between generations 0 and 1 at time 1. Let's
focus on one agent. Suppose there is no heterogeneity, i.e. autarky is the
competitive equilibrium. Then the utility of generation 0 can be written as

    u_0 ( w_0(1) - b(0) - \tau_0(1), u_1(w_1(1) + b(0) - \tau_1(1), w_1(2)) ),

and the optimality condition for bequests is given by

    \underbrace{\frac{\partial u_0}{\partial c_0(1)}}_{marginal cost} = \underbrace{\frac{\partial u_0}{\partial u_1} \frac{\partial u_1}{\partial c_1(1)}}_{marginal benefit}.
[Figure 11: Bequest Decision. In (c_0(1), c_1(1)) space the transfer line has slope
-1; it passes through the endowment point (w_0(1), w_1(1)) and the after-tax
endowment point (w_0(1) - \tau_0(1), w_1(1) - \tau_1(1)). Point C is the optimal choice;
moves with b < 0 are not allowed.]
Figure 11 shows the tradeoff faced by generation 0. Given an after-tax
endowment point, generation 0 decides how much bequest to leave. We
restrict bequests to be nonnegative. Given the preferences of generation 0, we can
determine the optimal bequest level. Note that it is possible that the bequest motive
is not operative, i.e. b(0) = 0. Bequests will be zero if

    \frac{\partial u_0}{\partial c_0(1)} > \frac{\partial u_0}{\partial u_1} \frac{\partial u_1}{\partial c_1(1)}.

Now consider changes in taxes and transfers in Figure 11. Note that as long as
taxes and transfers do not change the location of point C, the bequest will be
reduced or increased by generation 0 one for one with taxes. Hence, as long as
the bequest motive is operative (i.e. the old choose to leave bequests), taxes
and transfers that redistribute income between the old and the young at time 1 have no
effect on real allocations. Taxes and transfers will have a real effect only if
they force the old not to leave any bequests.
In Figure 12, the old consume their after-tax endowments (point D) and do
not leave any bequests. Indeed, they would like to receive transfers from the
young (i.e. choose b(0) < 0), but that is ruled out.
[Figure 12: Bequests are Zero. The old consume their after-tax endowment point
(w_0(1) - \tau_0(1), w_1(1) - \tau_1(1)), labeled D, and leave no bequest.]

Note that without altruism, only taxes that keep each agent's lifetime tax
liabilities constant have no real effect. With altruism, government taxes that
transfer resources from the current old to the current young might have no
real effect, as long as the bequest motive is operative. Furthermore, government
borrowing from the current old (which is financed by taxing the young
in the future) might not have any real effect either. See Aiyagari (1993) and
Barro (1974) for a detailed discussion of the effects of intergenerational linkages on
government policies in OLG models.
4.6 Production
Consider an OLG economy with identical agents who live for two periods. When
young, each agent has 1 unit of labor that he/she supplies inelastically, earning
the wage w_t. Given w_t, each agent decides how much to consume, c_{1t}, and how
much to save, s_t. When old, agents rent their savings as capital to the firm, which
pays them back 1 + r_{t+1} for every unit rented. Hence, in this model agents can
store their goods and keep them for the next period as capital. Capital depreciates
at rate \delta \in (0, 1) after production. Moreover, assume that population grows at
rate n, with initial population size N_0, i.e. N_t = (1+n)^t N_0, and that the initial
old are endowed with k_1 units of per capita capital at time 1.
Agents discount the future, i.e. consumption when old, at rate \beta. Their
lifetime utility is given by

    u(c_{1t}) + \beta u(c_{2t+1}),  \beta < 1,  u' > 0,  u'' < 0.
There is a large number of firms that have access to the aggregate production
technology represented by

    Y = F(K, L),

where Y is output, K is capital used, and L is labor used. The production
function F is called a neoclassical production function if it has positive first and
negative second derivatives w.r.t. each argument,

    \frac{\partial F}{\partial K} > 0,  \frac{\partial F}{\partial L} > 0,  \frac{\partial^2 F}{\partial K^2} < 0,  \frac{\partial^2 F}{\partial L^2} < 0,

if it is constant returns to scale (CRS), i.e.

    F(\lambda K, \lambda L) = \lambda F(K, L), for all \lambda > 0,

and if it satisfies the Inada conditions

    \lim_{K \to 0} F_K = \lim_{L \to 0} F_L = \infty,  and  \lim_{K \to \infty} F_K = \lim_{L \to \infty} F_L = 0.
Since the production function is CRS, we can assume that there is only one
representative firm. The firm's objective is to maximize profits, i.e. the firm's
problem is

    \max_{K, L} F(K, L) - wL - rK.

Hence, the FOCs for this problem are

    w = \frac{\partial F}{\partial L},  and  r = \frac{\partial F}{\partial K},

which determine competitive input prices.
Moreover, since the production function is CRS, profits are zero. This
results from Euler's theorem, which states that if f(x, y) is homogeneous of
degree one, then

    f(x, y) = \frac{\partial f}{\partial x} x + \frac{\partial f}{\partial y} y.

Finally, since the production function is CRS, we can represent per capita
output as a function of the per capita capital stock, since

    Y = F(K, L) = L F(\frac{K}{L}, 1) = L f(\frac{K}{L}),

hence

    \frac{Y}{L} = y = f(k).
Therefore, the marginal products of capital and labor can be found as

    \frac{\partial F}{\partial K} = L f'(\frac{K}{L}) \frac{1}{L} = f'(k),

and

    \frac{\partial F}{\partial L} = f(\frac{K}{L}) - L f'(\frac{K}{L}) \frac{K}{L^2} = f(k) - k f'(k).
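These per capita formulas are easy to verify for a concrete case. The sketch below assumes a Cobb-Douglas technology F(K, L) = K^a L^{1-a} (an illustration, not the general case) and checks them against the direct marginal products and Euler's theorem:

```python
# Check f'(k) and f(k) - k f'(k) against the direct marginal products for an
# assumed Cobb-Douglas technology F(K, L) = K^a L^(1-a). Numbers are arbitrary.
a = 0.3
K, L = 8.0, 5.0
k = K / L

f = lambda k: k ** a                       # per capita production function
fprime = lambda k: a * k ** (a - 1)

FK = a * K ** (a - 1) * L ** (1 - a)       # dF/dK computed directly
FL = (1 - a) * K ** a * L ** (-a)          # dF/dL computed directly

assert abs(FK - fprime(k)) < 1e-12
assert abs(FL - (f(k) - k * fprime(k))) < 1e-12
# Euler's theorem: CRS implies F = F_K*K + F_L*L (zero profits)
assert abs(K ** a * L ** (1 - a) - (FK * K + FL * L)) < 1e-12
```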
4.6.1 Individual's problem
Each individual in generation t, t \geq 1, solves

    \max u(c_{1t}) + \beta u(c_{2t+1}),

subject to

    c_{1t} + s_t = w_t,

and

    c_{2t+1} = (1 - \delta) s_t + r_{t+1} s_t = (1 + r_{t+1} - \delta) s_t.

The FOC for this problem is given by

    u'(c_{1t}) = \beta (1 + r_{t+1} - \delta) u'(c_{2t+1}).
Note that this FOC implicitly defines a savings function

    s_t = s(w_t, r_{t+1}),

which can be analyzed, using the Implicit Function Theorem, to determine how
savings change as w_t and r_{t+1} change.
Theorem 60 (Implicit Function Theorem) Consider an equation of the following
form,

    F(y, x_1, x_2, ..., x_n) = 0,     (46)

that defines an implicit function of y,

    y = f(x_1, x_2, ..., x_n).

If (a) the function F has continuous partial derivatives F_y, F_{x_1}, ..., F_{x_n}, and (b)
at a point (y_0, x_{10}, ..., x_{n0}) satisfying (46), F_y is nonzero, then there exists an
n-dimensional neighborhood of (x_{10}, ..., x_{n0}), call it N, in which y is an implicit
function of x_1, ..., x_n. This implicit function satisfies y_0 = f(x_{10}, ..., x_{n0}). It
also satisfies (46) for every n-tuple in the neighborhood N, hence giving (46)
the status of an identity. Moreover, the implicit function f is continuous and
has continuous partial derivatives. In the neighborhood N,

    F_y dy + F_{x_1} dx_1 + ... + F_{x_n} dx_n = 0.
Hence, we can define

G(w_t, r_{t+1}, s_t(w_t, r_{t+1})) = u′(w_t − s_t) − β(1 + r_{t+1} − δ) u′((1 + r_{t+1} − δ)s_t) = 0,

and find

s_w = ∂s_t/∂w_t = −G_w/G_s > 0,

and

s_r = ∂s_t/∂r_{t+1} = −G_r/G_s ≷ 0.
4.6.2 Firms
Firms maximize profits, and labor and capital markets are competitive,
hence we will have the following FOCs

w_t = f(k_t) − k_t f′(k_t),

and

r_t = f′(k_t).
4.6.3 Determination of k_{t+1}
The capital stock next period is given by total savings this period, i.e.

K_{t+1} = N_t s(w_t, r_{t+1}),

or

(1 + n)k_{t+1} = s(w_t, r_{t+1}) = s(w_t(k_t), r_{t+1}(k_{t+1})),

so

k_{t+1} = s[f(k_t) − k_t f′(k_t), f′(k_{t+1})] / (1 + n).

Since the last equation relates current and next period levels of per capita
capital, it determines how this economy's per capita capital stock evolves given
any initial value of the per capita capital stock, k_0.
Remark 61 The dynamic properties of {k_t}_{t=1}^∞ depend on the derivative

dk_{t+1}/dk_t = −s_w(k_t) k_t f″(k_t) / [1 + n − s_r(k_{t+1}) f″(k_{t+1})].

Note that if s_r > 0, this derivative is positive.
Remark 62 Furthermore, given a steady state level of per capita capital stock
k* = k_t = k_{t+1}, such a steady state will be stable if

0 < −s_w(k*) k* f″(k*) / [1 + n − s_r(k*) f″(k*)] < 1.
Given this setup we can define an equilibrium as:
Definition 63 Given a population growth rate n and time-1 capital per worker
k_1, a sequential competitive equilibrium consists of a sequence of per capita capital
stocks {k_t}_{t=1}^∞, sequences of prices {r_t}_{t=1}^∞ and {w_t}_{t=1}^∞, and sequences of decisions
by the agents, {c_1^2, c_t^1, c_{t+1}^2, s_t}_{t=1}^∞, such that
« given prices, agents solve their maximization problems for all t ≥ 1,
« r_t and w_t are given by the firm's optimization problem for all t ≥ 1,
« {k_t}_{t=1}^∞ satisfies, for all t ≥ 1,

(1 + n)k_{t+1} = s(w_t, r_{t+1}) = s(w_t(k_t), r_{t+1}(k_{t+1})).
Remark 64 Note that we could let the firm's output be

Y = F(K, L) + (1 − δ)K.

In this case, the firm takes the capital of the old and pays

R = 1 + ∂F/∂K − δ.

It then sells the production and the undepreciated part of the capital to the
consumers. This would not change anything, since the decisions of the young and
the old would be the same as above.
4.6.4 An Example
Consider the following version of the economy outlined above, with n = 0 and
L_t = N for all t. Agents are identical and have the following preferences

u(c_{yt}, c_{ot+1}) = log(c_{yt}) + β log(c_{ot+1}),

except the initial old, who have the following preferences

u(c_{o1}) = c_{o1}.

When young, agents can work and save in the form of physical capital;
when old, they cannot work but receive a rental return by renting their capital.
Capital depreciates at rate δ. There exists a single firm that produces each period
by combining labor inputs from the young and capital inputs from the old according
to the following technology

Y_t = A K_t^a L_t^{1−a}.
First consider the firm's problem

max_{K_t, L_t} A K_t^a L_t^{1−a} − w_t L_t − r_t K_t,

where w_t is the wage rate and r_t is the user cost of capital. Then, the FOCs are

w_t = (1 − a) A K_t^a L_t^{−a} = (1 − a)A (K_t/L_t)^a = (1 − a)A (K_t/N)^a = (1 − a)A (k_t)^a,

and

r_t = a A K_t^{a−1} L_t^{1−a} = a A (k_t)^{a−1}.
The agents' problem for generations 1, 2, 3, ... is

max_{c_{yt}, c_{ot+1}} log(c_{yt}) + β log(c_{ot+1}),

subject to

c_{yt} + s_t = w_t,

and

c_{ot+1} = (1 + r_{t+1} − δ) s_t.

The lifetime budget constraint for an agent is then given by

c_{yt} + c_{ot+1}/(1 + r_{t+1} − δ) = w_t.

The FOCs are then given by

1/c_{yt} = λ,

and

β/c_{ot+1} = λ/(1 + r_{t+1} − δ).

Then,

c_{ot+1} = β c_{yt} (1 + r_{t+1} − δ).

And, using the lifetime budget constraint, we get

c_{yt} + β c_{yt} = w_t ⟹ c_{yt} = w_t/(1 + β) ⟹ s_t = w_t − c_{yt} = [1 − 1/(1 + β)] w_t = β w_t/(1 + β).
For the initial old

max_{c_{o1}} c_{o1},

subject to

c_{o1} = (1 + r_1 − δ) k_1,

where k_1 is the initial capital stock.
Finally, the goods market clears,

N c_{yt} + N c_{ot} + N s_t = A K_t^a L_t^{1−a} + (1 − δ)K_t, for t ≥ 1,

and the savings market clears,

N s_t = N(w_t − c_{yt}) = K_{t+1} for t ≥ 1.
From savings market clearing we have

s_t = β w_t/(1 + β) = k_{t+1} ⟹ [β/(1 + β)] (1 − a)A (K_t/N)^a = k_{t+1} ⟹ k_{t+1} = [β/(1 + β)] (1 − a)A (k_t)^a.
Hence, the equilibrium values of {k_t}_{t=1}^∞ must satisfy

k_{t+1} = [β/(1 + β)] (1 − a)A (k_t)^a, given any k_1.

This economy has two steady states where k_{t+1} = k_t = k (see Figure 13).
One is where k_t = 0 for all t. The other one is

k = [β/(1 + β)] (1 − a)A (k)^a ⟹ k* = {[β/(1 + β)] (1 − a)A}^{1/(1−a)}.
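The convergence to this steady state is easy to check numerically. The sketch below (all parameter values are arbitrary choices for the demonstration, not from the notes) iterates the equilibrium law of motion from a low initial capital stock and compares the limit with the closed form k*.

```python
# Illustrative sketch: iterate k_{t+1} = beta/(1+beta) * (1-a) * A * k_t**a
# and compare the limit with the closed-form steady state
# k* = [beta/(1+beta) * (1-a) * A]**(1/(1-a)).
# Parameter values (beta, a, A, and the initial k) are made up.

beta, a, A = 0.5, 0.3, 10.0
k = 0.1  # initial capital per worker k_1

for t in range(200):
    k = beta / (1 + beta) * (1 - a) * A * k**a

k_star = (beta / (1 + beta) * (1 - a) * A) ** (1 / (1 - a))
print(k, k_star)  # after 200 periods the sequence has reached the positive steady state
```

Since a < 1, the map is a contraction near k* > 0, so the sequence converges monotonically from any k_1 > 0, in line with Remark 62.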
4.7 Introducing Government
We can introduce different government policies into this setup.
Figure 13: Dynamics of k_t. [The figure plots k_{t+1} against k_t together with the 45-degree line; the two cross at the positive steady state k*, and the economy starts from k_0.]
4.7.1 Pay-as-You-Go Social Security System
A pay-as-you-go social security system simply taxes the young and transfers
those resources to the current old. If d is the payment to the social security system
by the current young and b is the benefit received by the current old, then
the contributions and benefits are related by

b = (1 + n)d.

Hence, the problem of an agent born at time t becomes

max u(c_t^1) + β u(c_{t+1}^2),

subject to

c_t^1 + s_t = w_t − d,

and

c_{t+1}^2 = (1 − δ)s_t + r_{t+1} s_t + (1 + n)d
= (1 + r_{t+1} − δ) s_t + (1 + n)d.
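With log utility, the FOC of this problem can be solved for savings in closed form, which makes the crowding-out effect of the pay-as-you-go system easy to see. The sketch below is my own derivation under the log-utility preferences of the example in Section 4.6.4 (it is not worked out in the notes), and every parameter value is made up.

```python
# Hedged illustration: with u = log, the FOC
#   1/(w - d - s) = beta*R / (R*s + (1+n)*d),  where R = 1 + r_{t+1} - delta,
# is linear in s, giving  s = (beta*R*(w - d) - (1+n)*d) / (R*(1+beta)).
# With d = 0 this reduces to s_t = beta/(1+beta) * w_t, as in the example above.
# All numbers (beta, R, n, w, d) are arbitrary.

def payg_savings(w, d, beta=0.5, R=1.05, n=1.02 - 1):
    return (beta * R * (w - d) - (1 + n) * d) / (R * (1 + beta))

s_no_tax = payg_savings(w=1.0, d=0.0)
s_payg = payg_savings(w=1.0, d=0.1)
print(s_no_tax, s_payg)  # pay-as-you-go contributions lower private savings
```

The contribution d reduces savings both directly (lower net wage) and through the promised old-age benefit (1 + n)d, so private capital accumulation falls.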
4.7.2 Income Taxes
We could also introduce income taxes. Suppose τ_w is the proportional tax
on labor income and τ_k is the proportional tax on capital income. Then, the
problem of an agent born at time t becomes

max u(c_t^1) + β u(c_{t+1}^2),

subject to

c_t^1 + s_t = (1 − τ_w) w_t,

and

c_{t+1}^2 = s_t + (1 − τ_k)(r_{t+1} − δ) s_t,

where the government taxes the net return on capital.
Suppose the government uses this income to finance an exogenous sequence of
government spending, {G_t}_{t=1}^∞. Then the government budget is balanced in
period t if

τ_w w_t + τ_k (r_t − δ) s_{t−1} = G_t.

Furthermore, if we allow the government to borrow from the young, then the
government budget at time t is

τ_w w_t + τ_k (r_t − δ) s_{t−1} + B(t) = G_t + B(t − 1)R(t),

where B(t) and B(t − 1) are government borrowing at time t and time t − 1,
and R(t) = 1 + r_t − δ. Note that the government has to pay the young the return they
would get by holding on to their goods and renting them out as capital.
4.8 Dynamic Efficiency
In OLG economies with production, we should also worry about whether the economy's
savings decision is efficient. In order to look at this issue, consider the following
planner's problem

max U = β u(c_1^2) + Σ_{t=1}^∞ [u(c_t^1) + β u(c_{t+1}^2)],

subject to

(1 − δ)k_t + f(k_t) = (1 + n)k_{t+1} + c_t^1 + c_t^2/(1 + n),

k_1 > 0, given.

The FOCs associated with this problem are given by

β u′(c_t^2) = (1 + n)^{−1} u′(c_t^1),

for c_t^2; and by

(1 + n) u′(c_{t−1}^1) = (1 + f′(k_t) − δ) u′(c_t^1),

for k_t.
The first FOC characterizes the allocation of resources between two people
who are alive at time t, while the second FOC characterizes the optimal
accumulation decision.
Note that this problem's FOCs imply the following steady state values for
c_1*, c_2*, and k*:

β u′(c_2*) = (1 + n)^{−1} u′(c_1*), (47)

and

1 − δ + f′(k*) = (1 + n). (48)

The last equation determines the steady state value of the per capita capital stock:

f′(k*) = n + δ.

This relation is called the golden rule of capital accumulation and characterizes
the efficient steady state capital stock.
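For a concrete illustration, assume a Cobb-Douglas technology f(k) = A k^α (an assumption made for this sketch; the derivation above holds for any neoclassical f). The golden rule then pins down the capital stock explicitly, which a grid search over steady state consumption confirms. All parameter values are arbitrary.

```python
# Sketch under an assumed Cobb-Douglas technology f(k) = A*k**alpha:
# the golden rule f'(k) = n + delta gives
#   k_gold = (alpha*A/(n + delta))**(1/(1 - alpha)).
# The grid search checks that this k maximizes steady state consumption
#   c(k) = f(k) - (n + delta)*k.

A, alpha, n, delta = 1.0, 0.3, 0.02, 0.05

k_gold = (alpha * A / (n + delta)) ** (1 / (1 - alpha))

def c_ss(k):  # steady state per capita consumption
    return A * k**alpha - (n + delta) * k

ks = [0.01 * i for i in range(1, 2000)]
k_best = max(ks, key=c_ss)
print(k_gold, k_best)  # the grid maximizer is close to the analytic golden rule
```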
Note that in this steady state

(1 − δ)k* + f(k*) = k* + nk* + c*,

where c* = c_1* + c_2*/(1 + n). Hence,

f(k*) − nk* − δk* = c*,

and the capital stock that maximizes the consumption of a representative agent
at the steady state is given by

dc*/dk* = f′(k*) − n − δ = 0.
Therefore,

dc*/dk* = f′(k*) − n − δ ≷ 0 ⟺ f′(k*) ≷ n + δ;

hence, if f′(k*) < n + δ, i.e. the capital stock exceeds the golden rule level, a decrease
in the capital stock will increase the steady state level of per capita consumption.
In other words, if f′(k*) < n + δ, the economy is over-accumulating capital, so that the
technology does not return what is necessary to keep the per capita capital stock
constant; see Abel et al. (1989) for further discussion.
References
[1] Abel, Andrew B., N. Gregory Mankiw, Lawrence H. Summers, and Richard J.
Zeckhauser, "Assessing Dynamic Efficiency: Theory and Evidence," Review of
Economic Studies, 56, pages 1-20, 1989.
[2] Aiyagari, S. Rao, "Intergenerational Linkages and Government Budget Policies,"
Federal Reserve Bank of Minneapolis Quarterly Review (Spring 1987): 14-23.
[3] Barro, Robert (1974), "Are Government Bonds Net Wealth?" Journal of
Political Economy, 82, 1095-1117.
[4] Diamond, Peter A. (1965), "National Debt in a Neoclassical Growth
Model," American Economic Review, 55, 1126-1150.
[5] Kehoe, Timothy J., "Intertemporal General Equilibrium Models," in The Economics
of Missing Markets, Information, and Games, Frank Hahn (ed.), 1989.
[6] Samuelson, Paul A. (1958), "An Exact Consumption-Loan Model of Interest
with or without the Social Contrivance of Money," Journal of Political
Economy, 66, 467-482.
5 Dynamic Programming
Agents live for a finite number of periods in OLG economies. As we have
already pointed out, if agents are altruistically linked in an OLG economy, they
will behave as if they live forever. We will now analyze models in which agents
solve an infinite horizon problem.
Our starting point will be the planning problem of a single agent who lives
forever. Later we will talk about market allocations. The problems we will
analyze have the following form:

sup_{{x_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t F(x_t, x_{t+1}), (SP)

subject to

x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, ...,

and

x_0 given.
In this setup, time is discrete and the horizon is infinite, t = 0, 1, 2, .... At
time 0, the economy starts with x_0. Given x_0, the agent chooses x_1 from a
feasible set Γ(x_0). The x_0 together with the choice of x_1 determines the current
return, denoted by F(x_0, x_1). At time 1, the economy starts with x_1 and the
agent chooses x_2, etc. The future is discounted at rate β ∈ (0, 1).
The problem is finding an infinite feasible sequence {x_{t+1}}_{t=0}^∞ that maximizes
Σ_{t=0}^∞ β^t F(x_t, x_{t+1}). In this part of the class we will try to find and characterize
solutions to these sequential problems. A sequential problem (SP) will be
characterized by three objects:
« A set X such that x_t ∈ X for all t.
« A correspondence of feasible actions that assigns to each x ∈ X a subset of X,
Γ : X → X.
« A return function that maps any two elements of X into the real line,
F : X × X → R.
5.1 Neoclassical Growth Model
We will start by analyzing the neoclassical growth model. Consider a production
economy (i.e. one in which the good is storable and can be used for consumption
or production). The economy is populated by a large number of identical agents,
each starting his/her life with k_0 units of capital. There is no population growth.
We will focus on a planning problem faced by a single representative agent.
Each period this agent can combine the capital stock and his/her labor, denoted
by n_t, to produce a single good, denoted by y_t. Imagine this as a backyard
technology. We will later see how/why the solution to this representative agent
problem is identical to the market allocations.
The production technology is represented by

y_t = F(k_t, n_t),

where F is a neoclassical production function. Hence, F(k, n) is continuously
differentiable, strictly increasing, concave, and CRS, i.e.

F_k(k, n) > 0, F_n(k, n) > 0 for all k, n > 0,

F(λk̄ + (1 − λ)k̂, λn̄ + (1 − λ)n̂) ≥ λF(k̄, n̄) + (1 − λ)F(k̂, n̂) for all k̄, n̄, k̂, n̂ > 0 and λ ∈ (0, 1),

λF(k, n) = F(λk, λn) for all λ > 0.

It also satisfies F(0, n) = 0 and the following Inada conditions

lim_{k→0} F_k(k, 1) = ∞, and lim_{k→∞} F_k(k, 1) = 0.
The output together with the undepreciated part of the current capital can
be used for consumption, denoted by c_t, or be kept as future capital stock,
denoted by k_{t+1}. Therefore,

c_t + k_{t+1} ≤ F(k_t, n_t) + (1 − δ)k_t,

where δ is the depreciation rate. We will let

f(k_t) = F(k_t, n_t) + (1 − δ)k_t

denote the total resources available at time t.
Finally, the representative agent has the following time separable utility
function

U({c_t}_{t=0}^∞) = Σ_{t=0}^∞ β^t u(c_t),

where u(c_t) is strictly increasing and strictly concave, with lim_{c→0} u′(c) = ∞.
Hence, the problem of a representative agent is

max_{{c_t, n_t, k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(c_t),

subject to

c_t + k_{t+1} ≤ f(k_t),

k_0 > 0 is given,

c_t ≥ 0, k_{t+1} ≥ 0,

and 1 ≥ n_t ≥ 0.
We already know several things about this problem:
« Since agents do not derive any utility from leisure, they will supply all of
their labor endowment, i.e. n_t = 1 for all t. Therefore, f(k_t) = F(k_t, 1) +
(1 − δ)k_t.
« Since the utility function is strictly increasing, it must be the case that
c_t + k_{t+1} = f(k_t).
Then, we can write this problem as:

max_{{k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

subject to

k_0 > 0 is given, and f(k_t) ≥ k_{t+1} ≥ 0,

where we have substituted c_t = f(k_t) − k_{t+1}. Hence, the problem is to find an
infinite sequence {k_{t+1}}_{t=0}^∞ that maximizes Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}).
Note that the neoclassical growth model fits our general description of a
sequential problem, since we can write:

F(x_t, x_{t+1}) = F(k_t, k_{t+1}) = u(f(k_t) − k_{t+1}),

and

Γ(x_t) = Γ(k_t) = [0, f(k_t)].

Remark 65 What about the set X?
How can we solve this problem? In order to gain some insight, first consider
a finite-time-horizon version of this problem, given by:

max_{{k_{t+1}}_{t=0}^T} Σ_{t=0}^T β^t u(f(k_t) − k_{t+1}),

subject to

k_0 > 0 is given, and f(k_t) ≥ k_{t+1} ≥ 0.

We can write down the Lagrangian for this problem as:

L = Σ_{t=0}^T β^t [u(f(k_t) − k_{t+1}) + λ_t k_{t+1}],

where λ_t is the Lagrange multiplier associated with the constraint k_{t+1} ≥ 0.
Remark 66 Note that in this problem it will never be optimal to set k_{t+1} =
f(k_t).
The first order conditions for this problem are given by:

∂L/∂k_{t+1} = −β^t u′(f(k_t) − k_{t+1}) + β^t λ_t + β^{t+1} u′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}) = 0,

and

∂L/∂k_{T+1} = −β^T u′(f(k_T) − k_{T+1}) + β^T λ_T = 0.

Furthermore, we have the following Kuhn-Tucker conditions:

λ_t k_{t+1} = 0, t = 1, ..., T,
λ_t ≥ 0, t = 1, ..., T,
k_{t+1} ≥ 0, t = 1, ..., T.
Now we can characterize the solution:
« Since u′(f(k_T) − k_{T+1}) = u′(c_T) > 0 and u′(f(k_T) − k_{T+1}) = λ_T, we
know that λ_T > 0. Then, it is the case that k_{T+1} = 0. Therefore, one
important piece of information in a finite horizon problem is that agents
will consume everything in the last period.
« We also know that λ_t = 0 for all t < T. If this were not the case (i.e. if λ_t
were strictly positive for some t), k_{t+1} would have to be zero for some t < T.
This cannot be optimal.
Then we have, for t = 1, 2, ..., T − 1,

u′(f(k_t) − k_{t+1}) = β u′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}),

where the left hand side is the utility cost of saving more k_{t+1}, and the right
hand side is the discounted utility benefit of having more k_{t+1} next period.
This equation is called an Euler equation. It characterizes k_{t+1} given k_t and
k_{t+2}. Since in this problem k_0 > 0 is given and k_{T+1} is 0, it provides us with a
complete characterization of the optimal path.
Example 67 Consider the following version of the neoclassical growth model
where

u(c_t) = log(c_t),

and

f(k_t) = k_t^α, with α ∈ (0, 1).

The Euler equation is then given by

u′(c_t) = β u′(c_{t+1}) f′(k_{t+1}),

or

1/c_t = β (α k_{t+1}^{α−1}) / c_{t+1}.

Hence,

c_{t+1} = β (α k_{t+1}^{α−1}) c_t,

or

k_{t+1}^α − k_{t+2} = β (α k_{t+1}^{α−1}) (k_t^α − k_{t+1}).

Note that at time T − 1 this equation becomes

k_T^α − k_{T+1} = β (α k_T^{α−1}) (k_{T−1}^α − k_T).

Since k_{T+1} = 0,

k_T^α = β (α k_T^{α−1}) (k_{T−1}^α − k_T),

and we can find k_T as a function of k_{T−1} and work our way back to k_1 as a
function of k_0 (see Exercise 2.2 in Stokey, Lucas with Prescott (1989)).
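This backward induction can be carried out in closed form. Guessing k_{t+1} = θ_t k_t^α, the Euler equation above reduces to a scalar recursion in the savings fraction θ_t. The reduction below is my own derivation (it is consistent with the Euler equation and the terminal condition k_{T+1} = 0, but it is not spelled out in the notes).

```python
# Sketch of the backward recursion: with the guess k_{t+1} = theta_t * k_t**alpha,
# the Euler equation  1/c_t = beta*alpha*k_{t+1}**(alpha-1) / c_{t+1}
# with c_t = k_t**alpha - k_{t+1} reduces to
#   theta_t = alpha*beta / (1 + alpha*beta - theta_{t+1}),  theta_T = 0.
# As the remaining horizon T - t grows, theta_t approaches alpha*beta,
# the savings rate of the infinite-horizon problem. Parameters are arbitrary.

alpha, beta, T = 0.3, 0.95, 50

theta = [0.0] * (T + 1)  # theta[T] = 0: consume everything in the last period
for t in range(T - 1, -1, -1):
    theta[t] = alpha * beta / (1 + alpha * beta - theta[t + 1])

print(theta[0], alpha * beta)  # theta_0 is already very close to alpha*beta
```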
Now we will look at an infinite horizon problem. Note that in a finite-horizon
problem the terminal condition, k_{T+1} = 0, gives us valuable information
with which to solve our problem. Hence, a sequence that satisfies the Euler equations
together with this terminal condition provides us with a solution. It turns
out that a similar condition is required for an infinite horizon problem. This
condition is called the transversality condition. The following proposition shows
that any interior infinite sequence {k_{t+1}}_{t=0}^∞ that satisfies the Euler equations
and the transversality condition is a solution to SP.
Proposition 68 (Sufficiency) Consider

max_{{k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t F(k_t, k_{t+1}),

subject to

k_0 > 0 given, and k_{t+1} ≥ 0.

Let F be continuously differentiable, and let F(x, y) be concave in (x, y) and
strictly increasing in x. If {k*_{t+1}}_{t=0}^∞ satisfies:
« k*_{t+1} > 0, for all t,
« F_2(k*_t, k*_{t+1}) + β F_1(k*_{t+1}, k*_{t+2}) = 0, for all t,
« lim_{t→∞} β^t F_1(k*_t, k*_{t+1}) k*_t = 0,
then {k*_{t+1}}_{t=0}^∞ maximizes the objective function.
Remark 69 Note that for the one sector growth model, where F(k_t, k_{t+1}) = u(f(k_t) −
k_{t+1}), the second condition implies

u′(f(k_t) − k_{t+1})(−1) + β u′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}) = 0,

or

u′(f(k_t) − k_{t+1}) = β u′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}).
Proof: Consider an alternative feasible sequence {k̃_{t+1}}_{t=0}^∞. We want to
show that

D = lim_{T→∞} Σ_{t=0}^T β^t [F(k*_t, k*_{t+1}) − F(k̃_t, k̃_{t+1})] ≥ 0.

Since F is concave,

D = lim_{T→∞} Σ_{t=0}^T β^t [F(k*_t, k*_{t+1}) − F(k̃_t, k̃_{t+1})]
≥ lim_{T→∞} Σ_{t=0}^T β^t [F_1(k*_t, k*_{t+1})(k*_t − k̃_t) + F_2(k*_t, k*_{t+1})(k*_{t+1} − k̃_{t+1})].

Note that k*_0 = k̃_0 = k_0 > 0 is given. Then, rearranging the sum,

D ≥ lim_{T→∞} { Σ_{t=0}^{T−1} β^t [F_2(k*_t, k*_{t+1}) + β F_1(k*_{t+1}, k*_{t+2})] (k*_{t+1} − k̃_{t+1})
+ F_1(k*_0, k*_1)(k*_0 − k̃_0)
+ β^T F_2(k*_T, k*_{T+1})(k*_{T+1} − k̃_{T+1}) },

where the bracketed term in the first line is zero by the Euler equation, and the
second line is zero since k*_0 = k̃_0. Since F_2(k*_t, k*_{t+1}) + β F_1(k*_{t+1}, k*_{t+2}) = 0
for all t,

D ≥ −lim_{T→∞} β^T F_1(k*_T, k*_{T+1})(k*_T − k̃_T)
≥ −lim_{T→∞} β^T F_1(k*_T, k*_{T+1}) k*_T = 0,

where we used lim_{T→∞} β^T F_1(k*_T, k*_{T+1}) k*_T = 0, and the fact that F_1 > 0 and
k̃_T ≥ 0.
Remark 70 Note that if f : R → R is differentiable, then f is concave if
and only if

Df(x)(y − x) ≥ f(y) − f(x) for all x, y ∈ R.

Remark 71 Note that for the one sector growth model

F(k_t, k_{t+1}) = u(f(k_t) − k_{t+1}) = u(F(k_t, 1) + (1 − δ)k_t − k_{t+1})

must be concave in (k_t, k_{t+1}).
What is the meaning of the transversality condition, lim_{t→∞} β^t F_1(k*_t, k*_{t+1}) k*_t =
0? It simply states that it cannot be optimal to choose a capital sequence such
that the discounted present value of k_t, i.e. β^t F_1(k*_t, k*_{t+1}) k*_t, remains positive
as t goes to infinity. This is not optimal, since then the agent is saving too much.
5.2 Dynamic Programming: An Introduction
The previous section focused on finding solutions to the following problem

max_{{k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

subject to

k_0 > 0 is given, and f(k_t) ≥ k_{t+1} ≥ 0.

We looked at solutions in terms of infinite sequences, {k_{t+1}}_{t=0}^∞, and provided
sufficient conditions that an optimal sequence must satisfy.
There is an alternative and easier way to attack infinite-horizon problems,
called dynamic programming. Let's start again from a finite horizon problem.
Consider the following problem

max_{{k_{t+1}}_{t=0}^T} Σ_{t=0}^T β^t ln(k_t − k_{t+1}), with k_0 given.

Suppose now we are at period T − 1 and have some capital stock k. Then the
only problem is to choose tomorrow's capital stock. Let's denote it by k′. Since
we know exactly what we will do with k′ tomorrow, the problem of choosing k′ is
simply

max_{k′ ≥ 0} {ln(k − k′) + β ln(k′)}.

The solution is given by

1/(k − k′) = β/k′,

or

k′ = βk/(1 + β).

We will call this the period T − 1 policy function:

g_{T−1}(k) = βk/(1 + β).

It tells us what we will do (i.e. how much we will save) if we enter the period
with a capital stock k. Once we know this we can also calculate the maximized
value function for period T − 1:

V_{T−1}(k) = ln(k − βk/(1 + β)) + β ln(βk/(1 + β))
= (1 + β) ln(k) + ln(1/(1 + β)) + β ln(β/(1 + β)).
It tells us the highest utility level we can reach if we enter period T − 1 with
k. Once we know V_{T−1}, however, g_{T−2} and V_{T−2} can be determined by

V_{T−2}(k) = max_{k′ ≥ 0} {ln(k − k′) + β V_{T−1}(k′)}.

Indeed, for any period t = 0, ..., T − 1, the problem boils down to finding
value functions and associated policy functions that satisfy

V_t(k) = max_{k′ ≥ 0} {ln(k − k′) + β V_{t+1}(k′)}.

The key feature of the dynamic programming approach is to split the problem
into a sequence of problems that involve only today and tomorrow.
Remark 72 Note that V_T(k) = u(k) and g_T(k) = 0 are determined trivially.
Remark 73 The dynamic programming approach focuses on finding the policy
functions and the value functions rather than finding the sequences.
Remark 74 As the following example shows, a pattern often emerges in V_t and
g_t that allows us to use an induction argument.
Example 75 Consider the following problem

max_{{k_{t+1}}_{t=0}^T} Σ_{t=0}^T ln(c_t),

subject to

k_0 > 0 is given,

k_{t+1} = (k_t − c_t)R, with R ≥ 1,

and k_{t+1} ≥ 0 and c_t ≥ 0.

How should we write this as a DP problem? Note that it makes sense to write
the value and policy functions as functions of k. If the agent knows k at period
t, then he/she can determine k′ (the next period's asset). Again let's start from
period T − 1. We know that V_T(k) = ln(k). Hence,

V_{T−1}(k) = max_{k′ ≥ 0} {ln(k − k′/R) + ln(k′)}.

Then, the FOC for k′ is

1/(Rk − k′) = 1/k′ ⟹ g_{T−1}(k) = Rk/2 and V_{T−1}(k) = 2 ln(k/2) + ln(R).

For t = T − 2, we have

V_{T−2}(k) = max_{k′ ≥ 0} {ln(k − k′/R) + 2 ln(k′/2) + ln(R)}.

Now the FOC for k′ is

1/(Rk − k′) = 2/k′ ⟹ g_{T−2}(k) = 2Rk/3 and V_{T−2}(k) = 3 ln(k/3) + 3 ln(R).
It is easy to guess now that

V_t(k) = (T − t + 1) ln(k/(T − t + 1)) + Σ_{i=1}^{T−t} i ln(R),

and

g_t(k) = [(T − t)/(T − t + 1)] Rk.

Suppose now that for periods t + 1, ..., T the value and policy functions are given
by these equations. Then,

V_t(k) = max_{k′ ≥ 0} {ln(k − k′/R) + (T − t) ln(k′/(T − t)) + Σ_{i=1}^{T−t−1} i ln(R)},

and the FOC for k′ is given by

1/(Rk − k′) = (T − t)/k′, or k′ = [(T − t)/(T − t + 1)] Rk,

and

V_t(k) = ln(k − [(T − t)/(T − t + 1)]k) + (T − t) ln([1/(T − t + 1)] Rk) + Σ_{i=1}^{T−t−1} i ln(R)
= ln(k/(T − t + 1)) + (T − t) ln(k/(T − t + 1)) + (T − t) ln(R) + Σ_{i=1}^{T−t−1} i ln(R)
= (T − t + 1) ln(k/(T − t + 1)) + Σ_{i=1}^{T−t} i ln(R),

which completes our backward induction.
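The guessed value functions can also be checked numerically. The sketch below (my own construction; R, T, and k are arbitrary) compares V_t(k) from the closed form against a brute-force grid maximization of the recursion V_t(k) = max_{k′} {ln(k − k′/R) + V_{t+1}(k′)}.

```python
# Numerical check of Example 75: the guessed
#   V_t(k) = (T-t+1)*ln(k/(T-t+1)) + sum_{i=1}^{T-t} i*ln(R)
# should coincide with a direct grid maximization of the recursion.

import math

R, T = 1.1, 5

def V(t, k):
    m = T - t + 1
    return m * math.log(k / m) + sum(i * math.log(R) for i in range(1, T - t + 1))

k = 2.0
for t in range(T):
    grid = [k * R * j / 5000 for j in range(1, 5000)]  # candidate k' values in (0, kR)
    best = max(math.log(k - kp / R) + V(t + 1, kp) for kp in grid)
    print(t, V(t, k), best)  # the closed form matches the grid maximum closely
```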
In an infinite horizon problem, the dynamic programming approach uses the
stationarity of infinite horizon problems to find a solution. Note that
in an infinite horizon problem, the problems today and tomorrow are identical.
The only thing that changes from today to tomorrow is the value of the current
capital stock.
In order to gain some further insight, let V(k_0) be the maximized value
function associated with the one sector growth model, i.e.

V(k_0) = max_{{k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1}),

s.t. k_0 > 0 is given, and f(k_t) ≥ k_{t+1} ≥ 0.

The maximized value function is the highest level of discounted utility we can get
if we start with some initial level of capital and follow the best possible sequence
of actions. It is a function that maps the set of capital stocks into the real line.
Note that we can write this problem as

max_{{k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(f(k_t) − k_{t+1})
= max_{0 ≤ k_1 ≤ f(k_0)} { u(f(k_0) − k_1) + β [ max_{k_{t+1} ∈ [0, f(k_t)]} Σ_{t=1}^∞ β^{t−1} u(f(k_t) − k_{t+1}) ] }.

Now if V(k_0) is the maximized value function, then it also gives us the maximum
value starting with k_1.
Then, we have

V(k_0) = max_{0 ≤ k_1 ≤ f(k_0)} [u(f(k_0) − k_1) + β V(k_1)].

But time does not play any particular role here other than indicating today vs.
tomorrow.
Hence, if V is the maximized value function, it should satisfy

V(k) = max_{0 ≤ k′ ≤ f(k)} [u(f(k) − k′) + β V(k′)],

where k and k′ denote today's and tomorrow's capital stock, and V is the value
function.
If we can find V (we do not yet know how, and we also do not yet know if a
solution is possible), then we have a complete characterization of the optimal
sequence of capital stocks.
For any value of k, we can then define the policy function

g(k) = arg max_{k′ ∈ [0, f(k)]} [u(f(k) − k′) + β V(k′)].

Once we know g(k), we can start from k_0 and characterize the optimal path.
Hence, the dynamic programming approach focuses on finding functions such as
V and g, rather than finding sequences. Of course the question is: how do we
find V?
Sometimes a guess and verify approach works, as the following example
demonstrates:
Example 76 Let

u(c) = ln(c),

and

f(k) = k^α.

Then, we can write the following dynamic programming problem

V(k) = max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β V(k′)]. (49)

Suppose we have the following guess

V(k) = A + B ln(k).

Once we have this guess, we have the following maximization problem

max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β(A + B ln(k′))].

Note that once we have a functional form for V, this maximization problem is
trivial. Now we can find the FOC for k′:

1/(k^α − k′) = βB/k′.

Hence,

k′ = βB k^α − k′ βB,

or

k′ = g(k) = βB k^α/(1 + βB). (50)

If V(k) is the true value function that solves equation (49), then if we follow g(k)
defined in equation (50) we should get back V(k), i.e.

A + B ln(k) = ln(k^α − βB k^α/(1 + βB)) + β [A + B ln(βB k^α/(1 + βB))].

One can rearrange these terms to arrive at

B = α/(1 − αβ),

and

A = [1/(1 − β)] [ (αβ/(1 − αβ)) ln(αβ) + ln(1 − αβ) ].

Hence, as we have guessed, A and B are constants. Once we have V, we can
also find the optimal policy function

k′ = g(k) = βB k^α/(1 + βB) = αβ k^α.
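The algebra can be verified numerically. The sketch below (illustrative α and β) plugs the computed A and B back into the right hand side of (49), evaluated at the policy k′ = αβ k^α, and checks that both sides agree.

```python
# Verify the guess-and-verify solution of Example 76: with
#   B = alpha/(1 - alpha*beta)  and
#   A = [alpha*beta/(1-alpha*beta)*ln(alpha*beta) + ln(1-alpha*beta)] / (1-beta),
# V(k) = A + B*ln(k) should equal ln(k^alpha - k') + beta*V(k') at k' = alpha*beta*k^alpha.

import math

alpha, beta = 0.3, 0.95
B = alpha / (1 - alpha * beta)
A = (alpha * beta / (1 - alpha * beta) * math.log(alpha * beta)
     + math.log(1 - alpha * beta)) / (1 - beta)

def V(k):
    return A + B * math.log(k)

for k in (0.2, 1.0, 3.0):
    kp = alpha * beta * k**alpha          # candidate policy g(k)
    rhs = math.log(k**alpha - kp) + beta * V(kp)
    print(V(k), rhs)  # both sides of (49) agree
```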
Remark 77 Note that this policy rule satisfies the transversality condition,
since for this problem

lim_{t→∞} β^t F_1(k*_t, k*_{t+1}) k*_t = lim_{t→∞} β^t [1/((k*_t)^α − k*_{t+1})] α(k*_t)^{α−1} k*_t
= lim_{t→∞} β^t [1/((k*_t)^α − αβ(k*_t)^α)] α(k*_t)^{α−1} k*_t
= lim_{t→∞} β^t α(k*_t)^α / [(k*_t)^α − αβ(k*_t)^α]
= lim_{t→∞} β^t α/(1 − αβ) = 0.
Remark 78 Check Exercise 2.8 in Stokey, Lucas with Prescott (1989) to see
how you can arrive at the guess for V using the policy rule for a finite horizon
problem.
It is obvious that the cases where we can find a nice guess that turns
out to be the true V are rather limited. What can we do if we have no idea what
V is?
We can still hope that starting from some initial guess V_0(k) might help.
For this problem, for example, let

V_0(k) = 0.

Then, we can find a new function for V, call it V_1(k), by solving

V_1(k) = max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β V_0(k′)].

Again, once we have V_0(k) = 0, solving the maximization problem on the right
hand side is trivial. Since V_0(k) = 0, the solution is k′ = 0. Then we have

V_1(k) = ln(k^α) = α ln(k).

If by any chance we had obtained V_1(·) = V_0(·), as we did in the previous example,
we would be done.
Let's now use this as a new guess to arrive at V_2(k):

V_2(k) = max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β V_1(k′)].

Again, once we have a guess for V on the right hand side, we can write the FOC
for k′ as

1/(k^α − k′) = βα/k′,

to arrive at

k′ = βα k^α/(1 + αβ),

and

V_2(k) = ln(k^α − βα k^α/(1 + αβ)) + βα ln(βα k^α/(1 + αβ)).

Now we can define V_3 using V_2, etc.
Our hope is that this sequence of functions will converge to some function
V, which is the true value function. Why should such a procedure work? The hope is
that, as we start from some initial guess V_0 and update the function, we reduce
the error in V, hence approaching the true value.
Note that the dynamic programming problem

V(k) = max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β V(k′)]

defines an operator on functions, which can be written as

(TV)(k) = max_{0 ≤ k′ ≤ k^α} [ln(k^α − k′) + β V(k′)],

where (TV) is a new function given V.
Then, our hope is that the sequence of functions defined by this operator
converges to a limit function V,

{V_n(k)} → V,

and furthermore,

(TV) = V,

i.e. V is a fixed point of the operator T.
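This iteration is easy to sketch on a computer. The code below (an illustration with an arbitrary capital grid, parameter values, and tolerance, not a serious solver) applies the operator T repeatedly starting from V_0 = 0 and compares the limit with the closed form V(k) = A + B ln(k) from Example 76.

```python
# Value function iteration on a grid for u = ln, f(k) = k**alpha.
# Starting from V_0 = 0, apply T until the sup-norm change is small,
# then compare with the known closed form V(k) = A + B*ln(k).

import math

alpha, beta = 0.3, 0.95
grid = [0.05 + 0.01 * i for i in range(60)]  # arbitrary capital grid

V = [0.0] * len(grid)                         # initial guess V_0 = 0
for _ in range(600):
    TV = []
    for k in grid:
        # maximize ln(k^alpha - k') + beta*V(k') over feasible grid points k'
        best = max(math.log(k**alpha - kp) + beta * v
                   for kp, v in zip(grid, V) if kp < k**alpha)
        TV.append(best)
    if max(abs(x - y) for x, y in zip(TV, V)) < 1e-10:
        V = TV
        break
    V = TV

B = alpha / (1 - alpha * beta)
A = (alpha * beta / (1 - alpha * beta) * math.log(alpha * beta)
     + math.log(1 - alpha * beta)) / (1 - beta)
errs = [abs(v - (A + B * math.log(k))) for k, v in zip(grid, V)]
print(max(errs))  # small, up to grid-approximation error
```

Because T is a contraction with modulus β here, the error shrinks geometrically; the remaining gap to the analytic V reflects only the coarseness of the grid.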
We will now try to make things more precise. In particular we will make
clear:
« What do we mean by a sequence of functions converging?
« Under what conditions does the operator T generate a sequence of functions
that converges?
« Under what conditions, if {V_n(k)} is a sequence of functions with certain
properties, will the limit function V also have those properties?
The next section provides some mathematical preliminaries to make these
claims more precise. Once we have the tools to solve for V, we will do two
things:
« show that V provides a solution to SP,
« characterize V and g.
Before going into details, note that we started this section with the
following sequential problem

sup_{{x_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t F(x_t, x_{t+1}), (SP)

subject to

x_{t+1} ∈ Γ(x_t), t = 0, 1, 2, ...,

and

x_0 given.

Now we are interested in solving the following functional equation (FE) to
find V:

V(x) = sup_{y ∈ Γ(x)} [F(x, y) + β V(y)], (FE)

where x is today's state and y is today's choice (and tomorrow's state).
6 Mathematical Preliminaries
This section is based on Stokey, Lucas with Prescott (1989), Sundaram (1996),
and Rudin (1976). We will proceed in the following steps:
« We will first define a set of objects that we can compare with each other
in a meaningful way.
« This will allow us to define convergence.
« Then we will focus on sequences of functions and define different convergence
notions for sequences of functions.
« We will then look at operators, such as the operator T defined above, that
generate sequences of functions that converge and such that TV preserves nice
properties of the function V.
6.1 Vector and Metric Spaces
Definition 79 A real vector space (or linear space) is a set of elements (vectors)
X together with two operations, addition and scalar multiplication (which are
defined so that x + y ∈ X and ax ∈ X for any two vectors x, y ∈ X and for any real
number a), such that for all x, y, z ∈ X and a, b ∈ R:

x + y = y + x,
(x + y) + z = x + (y + z),
a(x + y) = ax + ay,
(a + b)x = ax + bx,
(ab)x = a(bx),
∃ 0 ∈ X such that x + 0 = x and 0x = 0,
1x = x.
Definition 80 A metric space is a set S, together with a metric (distance function)
ρ : S × S → R, such that for all x, y, z ∈ S: (i) ρ(x, y) ≥ 0, with
equality if and only if x = y; (ii) ρ(x, y) = ρ(y, x); (iii) ρ(x, z) ≤ ρ(x, y) + ρ(y, z).
Example 81 The pair (S, ρ), where S = R² and
ρ(x, y) = [(x_1 − y_1)² + (x_2 − y_2)²]^{1/2} for all x and y in S, is a metric space.
Example 82 The pair (S, ρ), where S = R, the set of real numbers, and
ρ(x, y) = |x − y| for all x and y in S, is a metric space.
For vector spaces, metrics are defined in a way that measures the distance
between two points as the distance of their difference from the zero point.
Definition 83 A normed vector space is a vector space S, together with a norm
‖·‖ : S → R, such that for all x, y ∈ S and a ∈ R: (i) ‖x‖ ≥ 0, and ‖x‖ = 0
iff x = 0; (ii) ‖ax‖ = |a| ‖x‖; (iii) ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Remark 84 Note that ρ(x, y) = ‖x − y‖.
What about the distance between two functions?
Exercise 85 Let S = C(a, b) be the set of continuous and bounded functions
from [a, b] to R, and for x, y ∈ C(a, b) define ρ(x, y) by

ρ(x, y) = max_{t ∈ [a,b]} |x(t) − y(t)|.

Then (S, ρ) is a metric space. We need to show that:

(i) ρ(x, y) = max_{a ≤ t ≤ b} |x(t) − y(t)| = |x(t*) − y(t*)| ≥ 0,

where t* is the maximizer, and

ρ(x, y) = max_{a ≤ t ≤ b} |x(t) − y(t)| = 0 iff x(t) = y(t) ∀t ∈ [a, b], i.e. x = y;

(ii) ρ(x, y) = max_{a ≤ t ≤ b} |x(t) − y(t)| = max_{a ≤ t ≤ b} |y(t) − x(t)| = ρ(y, x);

and

(iii) ρ(x, z) ≤ ρ(x, y) + ρ(y, z):

ρ(x, z) = max_{a ≤ t ≤ b} |x(t) − z(t)| = |x(t*) − z(t*)| ≤ |x(t*) − y(t*)| + |y(t*) − z(t*)|
≤ max_{a ≤ t ≤ b} |x(t) − y(t)| + max_{a ≤ t ≤ b} |y(t) − z(t)| = ρ(x, y) + ρ(y, z),

where t* is the maximizer.
Example 86 Let S be the set of functions from some set A to R. For x, y ∈ S,
define another metric as

ρ(x, y) = sup_{t ∈ A} |x(t) − y(t)|.
6.1.1 Supremum and Infimum
Definition 87 Let A ≠ ∅, A ⊆ R. The set of upper bounds of A is defined as

U(A) = {u ∈ R | u ≥ a, for all a ∈ A},

and the set of lower bounds of A is defined as

L(A) = {l ∈ R | l ≤ a, for all a ∈ A}.

Then if U(A) ≠ ∅, A is bounded above, and if L(A) ≠ ∅, A is bounded below.
The supremum of A, sup A, is defined as the least upper bound of A, and the
infimum of A is defined as the greatest lower bound of A. Note that if U(A) = ∅,
then sup A = ∞; and if L(A) = ∅, then inf A = −∞. Also, max A = A ∩ U(A),
and min A = A ∩ L(A). Hence, while sup A and inf A are always defined for any
nonempty set (they could be infinite), max A and min A might not always exist.
Theorem 88 Suppose sup A is finite. Then, for any ε > 0, there is a(ε) ∈ A
such that a(ε) > sup A − ε.
6.2 Sequences and Completeness
Definition 89 A sequence {x_n}_{n=0}^∞ in a metric space S converges to a limit
x ∈ S if, for each ε > 0, there exists N_ε such that

ρ(x_n, x) < ε for all n ≥ N_ε.

Definition 90 A sequence {x_n}_{n=0}^∞ in a metric space S converges to x ∈ S if

ρ(x_n, x) → 0 as n → ∞.

We will be particularly interested in this class in metric spaces that are
formed by sets of functions with an appropriate metric defined on them.
Example 91 Let C(1, 2) denote the set of continuous functions from [1, 2] to R
with

ρ(x, y) = max_{t ∈ [1,2]} |x(t) − y(t)|.

Let

x_1(t) = t/(1 + t), x_2(t) = 2t/(2 + t), ..., x_n(t) = nt/(n + t), ....

Figure 14 shows x_n(t) = nt/(n + t).
Definition 92 If a sequence contains a convergent subsequence, the limit of the convergent subsequence is called a limit point of the original sequence.

Theorem 93 A convergent sequence in $\mathbb{R}^n$ can have at most one limit point. That is, if $\{x_k\}$ is a sequence converging to a point $x \in \mathbb{R}^n$, it cannot converge to a point $y \in \mathbb{R}^n$ for $x \ne y$.

Definition 94 A sequence in $\mathbb{R}^n$ is bounded if $\exists M$ such that $\|x_k\| \le M$ for all $k$.

Theorem 95 Every convergent sequence in $\mathbb{R}^n$ is bounded.

Definition 96 The limsup of a real valued sequence is the supremum of the set of limit points, and the liminf of a real valued sequence is the infimum of the set of limit points. Given a sequence $\{x_k\}$, which might not necessarily converge and therefore might have many limit points, limsup and liminf give the upper and lower bounds of the set of limit points.

As an example consider the following sequence,
\[
x_k = \begin{cases} 1 & \text{if } k \text{ is odd} \\ \frac{k}{2} & \text{if } k \text{ is even} \end{cases}
= \{1, 1, 1, 2, 1, 3, 1, 4, \dots\},
\]
where $\limsup x_k = \infty$ and $\liminf x_k = 1$. Note that limsup and liminf are themselves limit points of a sequence (when finite). Therefore, in order to show that a sequence converges we can use the following theorem.
[Figure 14: $x_n(t) = \frac{nt}{n+t}$ on $[1,2]$, plotted for $n = 1$ and $n = 10$.]
Theorem 97 (Sundaram (1996), 1.18) A sequence in $\mathbb{R}$ converges to a limit point $x \in \mathbb{R}$ if and only if $\limsup x_k = \liminf x_k = x$.

Note that the definition of convergence requires a candidate limit point $x$; when such a candidate is not available, the following criterion is useful.

Definition 98 A sequence $\{x_n\}_{n=0}^{\infty}$ in $S$ is a Cauchy sequence if for each $\varepsilon > 0$ there exists $N_\varepsilon$ such that
\[
\rho(x_n, x_m) < \varepsilon \text{ for all } n, m \ge N_\varepsilon.
\]
Theorem 99 Let $\{x_k\}$ be a Cauchy sequence in $\mathbb{R}^n$. Then, i) $\{x_k\}$ is bounded, and ii) $\{x_k\}$ has at most one limit point.

Of course, for the Cauchy criterion to be useful, we must work in spaces where the Cauchy criterion implies a limit point within that space.

Definition 100 A metric space $(S, \rho)$ is complete if every Cauchy sequence in $S$ converges to an element of $S$.

Note that in a complete metric space, if you can show that a sequence is Cauchy, you know for sure that its limit will be an element of that space.
Exercise 101 Let $X = C(a,b)$ be the set of continuous functions from $[a,b]$ to $\mathbb{R}$, and for $x, y \in C(a,b)$ define $\rho(x,y)$ by
\[
\rho(x,y) = \int_a^b |x(t) - y(t)|\, dt.
\]
Let's show that $(X, \rho)$ is a metric space and is not complete. Since $|x(t) - y(t)| \ge 0$,
\[
\text{(i) } \rho(x,y) = \int_a^b |x(t) - y(t)|\, dt \ge 0.
\]
If $x(t) = y(t)$, then $|x(t) - y(t)| = 0$, and
\[
\text{(ii) } \rho(x,y) = \int_a^b |x(t) - y(t)|\, dt = 0.
\]
On the other hand, if $\rho(x,y) = 0$, then $x = y$. In order to show this, let $\rho(x,y) = 0$ but $x \ne y$. Then there must exist a $t_0$ such that (without loss of generality) $x(t_0) - y(t_0) = \varepsilon > 0$. Since $x$ is a continuous function, $\exists \delta_x$ such that $|t_0 - t| < \delta_x$ implies $|x(t_0) - x(t)| < \frac{\varepsilon}{4}$. We can define $\delta_y$ similarly. Let $\delta = \min\{\delta_x, \delta_y\}$; then for all $t$ such that $|t_0 - t| < \delta$, $x(t) - y(t) > \frac{\varepsilon}{2}$. This follows from
\[
\varepsilon \le |x(t_0) - y(t_0)| \le |x(t_0) - x(t)| + |x(t) - y(t)| + |y(t) - y(t_0)|.
\]
Define $S = [a,b] \cap [t_0 - \delta, t_0 + \delta]$. Since $t_0 \in S$,
\[
\int_S |x(t) - y(t)|\, dt > \frac{\varepsilon}{2}\,\delta > 0.
\]
Note that
\[
\rho(x,y) = \int_a^b |x(t) - y(t)|\, dt \ge \int_S |x(t) - y(t)|\, dt > 0,
\]
which is a contradiction. Finally,
\[
\text{(iii) } \rho(x,z) \le \rho(x,y) + \rho(y,z)
\]
follows from the triangle inequality for the absolute value. To show that $C(a,b)$ is not complete, consider $f_n(t) = t^n$ for $t \in [0,1]$. This sequence converges to
\[
f_n \to f = \begin{cases} 0 & \text{if } t \in [0,1) \\ 1 & \text{if } t = 1, \end{cases}
\]
which is not an element of $C(0,1)$. It is easy to show, however, that this sequence is Cauchy, since for $m > n$
\[
\rho(f_n, f_m) = \int_0^1 (t^n - t^m)\, dt = \left.\left[\frac{t^{n+1}}{n+1} - \frac{t^{m+1}}{m+1}\right]\right|_0^1 = \frac{1}{n+1} - \frac{1}{m+1},
\]
which can be made arbitrarily small.
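As a quick numerical sanity check (a sketch of my own, not part of the original exercise), the closed form above can be compared with a direct Riemann-sum approximation of the $L^1$ distance:

```python
# Check that rho(f_n, f_m) = integral_0^1 |t^n - t^m| dt = 1/(n+1) - 1/(m+1)
# (for m > n, since t**n >= t**m on [0,1]) shrinks as n, m grow,
# so {t^n} is Cauchy in the L1 metric.

def rho(n, m, steps=100_000):
    """Midpoint Riemann-sum approximation of integral_0^1 |t**n - t**m| dt."""
    h = 1.0 / steps
    return sum(abs(((i + 0.5) * h) ** n - ((i + 0.5) * h) ** m) * h
               for i in range(steps))

for n, m in [(1, 2), (10, 20), (100, 200)]:
    exact = 1 / (n + 1) - 1 / (m + 1)
    print(n, m, rho(n, m), exact)
```

The numerical values agree with the closed form and decrease toward zero, which is exactly the Cauchy property used in the text.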
Example 102 (Stokey and Lucas, 1989, Exercise 3.3c, page 45) The set $S$ of all continuous, strictly increasing functions on $[a,b]$ with $\rho(x,y) = \max_{a \le t \le b} |x(t) - y(t)|$ is a metric space. We need to show that:
(i) $\rho(x,y) = \max_{a \le t \le b} |x(t) - y(t)| = |x(t^*) - y(t^*)| \ge 0$, where $t^*$ is the maximizer, and $\rho(x,y) = 0$ iff $x(t) = y(t)$ $\forall t \in [a,b]$, i.e. $x = y$.
(ii) $\rho(x,y) = \max_{a \le t \le b} |x(t) - y(t)| = \max_{a \le t \le b} |y(t) - x(t)| = \rho(y,x)$.
(iii) $\rho(x,z) \le \rho(x,y) + \rho(y,z)$:
\[
\rho(x,z) = \max_{a \le t \le b} |x(t) - z(t)| = |x(t^*) - z(t^*)| \le |x(t^*) - y(t^*)| + |y(t^*) - z(t^*)|
\le \max_{a \le t \le b} |x(t) - y(t)| + \max_{a \le t \le b} |y(t) - z(t)| = \rho(x,y) + \rho(y,z),
\]
where $t^*$ is the maximizer.
Example 103 (Stokey and Lucas, 1989, Exercise 3.6c, page 47) The metric space $(S, \rho)$ in the previous example is not complete. Consider the following example of a Cauchy sequence in $S$ that converges to a point that is not in $S$:
\[
\left\{ x_n = \frac{t}{n} \right\}_{n=1}^{\infty}, \quad \text{where } t \in [a,b].
\]
Each element of this sequence is continuous and strictly increasing on $[a,b]$. Hence, $\{x_n\}$ is contained in $S$. This is also a Cauchy sequence since
\[
\rho(x_n, x_m) = \max_{a \le t \le b} |x_n(t) - x_m(t)| = \max_{a \le t \le b} \left| \frac{t}{n} - \frac{t}{m} \right| = \left| \frac{1}{n} - \frac{1}{m} \right| \max_{a \le t \le b} |t|
\le \left( \left| \frac{1}{n} \right| + \left| \frac{1}{m} \right| \right) \max_{a \le t \le b} |t|,
\]
which can be made arbitrarily small by picking $n$ and $m$ large enough. The limit of this sequence of functions, however, is not in $S$ since
\[
\left\{ x_n = \frac{t}{n} \right\} \to 0,
\]
which is not strictly increasing. Hence, not all Cauchy sequences in $S$ converge to a limit in $S$; therefore the metric space $(S, \rho)$ is not complete.
Theorem 104 $\mathbb{R}$ is complete.

Theorem 105 $\mathbb{R}^n$ is complete.
6.3 Three Cs: Closedness, Boundedness and Compactness of Sets

Definition 106 In a metric space $(S, \rho)$, the set $A \subseteq S$ is closed if whenever $a_1, a_2, \dots \in A$ and $a_1, a_2, \dots \to a$, it follows that $a \in A$.

Definition 107 A set $A$ in a metric space $(S, \rho)$ is bounded if there exists a number $B$ such that $\rho(a, a') \le B$ for all $a, a' \in A$.

Theorem 108 (Rudin (1976), 2.41) A set $S \subseteq \mathbb{R}^n$ is compact if and only if it is closed and bounded.
6.4 Functions

Definition 109 Let $S$ and $T$ be metric spaces, $E \subseteq S$, $p \in E$, and let $f$ map $E$ into $T$. Then $f$ is said to be continuous at $p$ if for every $\varepsilon > 0$ there exists $\delta > 0$ such that
\[
\rho_T(f(x), f(p)) < \varepsilon \text{ for all points } x \in E \text{ for which } \rho_S(x, p) < \delta.
\]

Definition 110 Let $f: S \to T$, where $S \subseteq \mathbb{R}^n$ and $T \subseteq \mathbb{R}^l$. Then $f$ is said to be continuous at $x \in S$ if for all sequences $\{x_k\}$ such that $x_k \in S$ for all $k$ and $x_k \to x$, it is the case that $\{f(x_k)\} \to f(x)$.

Theorem 111 (Rudin (1976), 4.14) Suppose $f$ is a continuous mapping of a compact metric space $X$ into a metric space $Y$. Then $f(X)$ is compact.
6.5 Sequences of Functions

6.5.1 Pointwise Convergence

Remember our definition of convergence in a metric space.

Definition 112 Let $(S, \rho)$ be a metric space. A sequence $\{x_n\}_{n=0}^{\infty}$ in $S$ converges to $x \in S$ if for each $\varepsilon > 0$ there exists $N_\varepsilon$ such that
\[
\rho(x_n, x) < \varepsilon \text{ for all } n \ge N_\varepsilon.
\]
One problem with this definition was that we need a candidate limit point $x$ to check whether the sequence converges. When we have a sequence of functions, such a candidate naturally arises.

Definition 113 Let $\{f_n\}$ be a sequence of functions defined on a set $E \subseteq S$, and suppose that the sequence of numbers $\{f_n(x)\}$ converges for every $x \in E$. We can then define a function by
\[
f(x) = \lim_{n \to \infty} f_n(x), \quad x \in E. \tag{51}
\]
We say that $\{f_n\}$ converges to $f$ pointwise on $E$ if (51) holds.

The main question that we are interested in is whether the important properties of a sequence of functions are preserved under pointwise convergence. We would like to know, for example, whether the limiting function will be continuous if every function in the sequence is continuous.
Example 114 For each $n$, let $f_n: [1,2] \to \mathbb{R}$ be given by
\[
f_n(x) = \frac{nx}{n+x}, \quad x \in [1,2].
\]
Let $x = \frac{3}{2}$; then (see Figure (14))
\[
f_1\!\left(\tfrac{3}{2}\right) = \tfrac{3}{5},\quad f_2\!\left(\tfrac{3}{2}\right) = \tfrac{6}{7},\quad f_3\!\left(\tfrac{3}{2}\right) = 1,\ \dots,\ f_n\!\left(\tfrac{3}{2}\right) = \frac{3n}{2n+3} \to \frac{3}{2}.
\]
Indeed,
\[
f(x) = \lim_{n \to \infty} f_n(x) = x, \quad x \in [1,2].
\]
Given that $\{f_n\}$ converges pointwise to $f(x) = x$, we can use this limiting function as our candidate and check whether the sequence converges to it under a particular metric. Let
\[
\rho(f,g) = \max_{x \in [1,2]} |f(x) - g(x)|.
\]
Then,
\[
\rho(f_n, f) = \max_{x \in [1,2]} \left| \frac{nx}{n+x} - x \right| = \max_{x \in [1,2]} \left| \frac{x^2}{n+x} \right| < \max_{x \in [1,2]} \left| \frac{x^2}{n} \right| = \frac{4}{n}.
\]
Hence, $\rho(f_n, f) \to 0$ as $n \to \infty$, and $\{f_n\} \to f$.

In this example, the sequence of functions converges pointwise to a nice function. Moreover, the sequence also converges to the limiting function under our metric. This will not be the case in general. Before going into examples, however, note that convergence in this metric implies pointwise convergence.
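The bound $\rho(f_n, f) < 4/n$ derived above is easy to confirm on a grid (a minimal numerical sketch; the grid size is my own choice, not part of the example):

```python
def rho(n, grid=10_001):
    """Approximate max over [1,2] of |n*x/(n+x) - x| on an evenly spaced grid."""
    xs = [1 + i / (grid - 1) for i in range(grid)]
    return max(abs(n * x / (n + x) - x) for x in xs)

for n in (1, 10, 100):
    print(n, rho(n), 4 / n)   # rho(n) stays below the bound 4/n
```

Since $x^2/(n+x)$ is increasing in $x$ on $[1,2]$, the maximum is attained at $x = 2$, i.e. the exact value is $4/(n+2)$, consistent with the bound.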
Theorem 115 Let $X = C(a,b)$ be the set of continuous functions $[a,b] \to \mathbb{R}$, and for $f, g \in C(a,b)$ define $\rho(f,g)$ by
\[
\rho(f,g) = \max_{a \le x \le b} |f(x) - g(x)|.
\]
If $f_1, f_2, f_3, \dots$ are functions in $X$ such that $f_1, f_2, f_3, \dots \to f \in C(a,b)$, then for any $x \in [a,b]$, $f_1(x), f_2(x), f_3(x), \dots \to f(x)$ in $\mathbb{R}$.
The converse of this theorem, as we already pointed out, does not hold.
Example 116 For each $n$, let $f_n: [0,1] \to \mathbb{R}$ in $C(0,1)$ be given by
\[
f_n(x) = x^n, \quad x \in [0,1].
\]
Then,
\[
f_1(x), f_2(x), f_3(x), \dots \to 0, \text{ for } x \in [0,1),
\]
but
\[
f_1(x), f_2(x), f_3(x), \dots \to 1, \text{ for } x = 1.
\]
Hence,
\[
f(x) = \lim_{n \to \infty} f_n(x) = \begin{cases} 0 & \text{if } x < 1 \\ 1 & \text{if } x = 1, \end{cases}
\]
which is not a continuous function.
Example 117 For each $n$, let $f_n: [0,1] \to \mathbb{R}$ in $C(0,1)$ be given by
\[
f_n(x) = \frac{nx}{1 + n^2 x^2}, \quad x \in [0,1].
\]
[Figure 15: $f_n(x) = x^n$ on $[0,1]$, plotted for $n = 1$ and $n = 10$.]
In this case,
\[
f_1(x), f_2(x), f_3(x), \dots \to 0, \text{ for all } x \in [0,1].
\]
Yet,
\[
\rho(f_n, f) = \max_{x \in [0,1]} \left| \frac{nx}{1 + n^2 x^2} - 0 \right| = \frac{1}{2},
\]
the maximum being attained at $x = \frac{1}{n}$. Hence, $\rho(f_n, f) \nrightarrow 0$ as $n \to \infty$, and hence $\{f_n\} \nrightarrow f$ with $\rho(f,g) = \max_{0 \le x \le 1} |f(x) - g(x)|$.
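The failure of uniform convergence is visible numerically: the sup distance stays at $1/2$ no matter how large $n$ is (a small sketch on a grid fine enough to contain the maximizer $x = 1/n$):

```python
def M(n, grid=100_001):
    """Approximate sup over [0,1] of n*x/(1 + (n*x)**2) on an evenly spaced grid."""
    xs = [i / (grid - 1) for i in range(grid)]
    return max(n * x / (1 + (n * x) ** 2) for x in xs)

for n in (1, 10, 100):
    print(n, M(n))   # stays near 0.5 for every n
```

This is the quantity $M_n = \sup_x |f_n(x) - f(x)|$ that appears in Theorem 121 below: here $M_n = 1/2$ for all $n$, so the convergence cannot be uniform.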
These examples suggest that we need a stronger concept of convergence than pointwise convergence.
6.5.2 Uniform Convergence

Definition 118 We say that a sequence of functions $\{f_n\}$, $n = 1, 2, 3, \dots$ converges uniformly on $E$ to a function $f$ if for every $\varepsilon > 0$ there exists an integer $N$ such that $n \ge N$ implies
\[
|f_n(x) - f(x)| \le \varepsilon \text{ for all } x \in E.
\]
If $\{f_n\}$ converges uniformly on $E$, then it is possible, for each $\varepsilon$, to find one integer $N$ which works for all $x \in E$. The following theorem gives a very useful characterization of uniform convergence.

Theorem 119 The sequence of functions $\{f_n\}$, $n = 1, 2, 3, \dots$ converges uniformly on $E$ if and only if for every $\varepsilon > 0$ there exists an integer $N$ such that $m \ge N$, $n \ge N$, and $x \in E$ implies
\[
|f_n(x) - f_m(x)| \le \varepsilon.
\]
Proof. Suppose $\{f_n\}$ converges uniformly on $E$, and let $f$ be the limit function. Then there must exist an integer $N$ such that $n \ge N$ and $x \in E$ implies
\[
|f_n(x) - f(x)| \le \frac{\varepsilon}{2},
\]
so that if $n \ge N$, $m \ge N$, and $x \in E$, then
\[
|f_n(x) - f_m(x)| \le |f_n(x) - f(x)| + |f(x) - f_m(x)| \le \varepsilon.
\]
Conversely, suppose the Cauchy condition holds. Then for every $x$, $\{f_n(x)\}$ converges to a limit point that we may call $f(x)$. Thus, the sequence $\{f_n\}$ converges pointwise to $f$ on $E$. We need to show that the convergence is also uniform. Let $\varepsilon > 0$ be given, and choose $N$ such that the Cauchy condition holds. Fix $n$, and let $m \to \infty$ in
\[
|f_n(x) - f_m(x)| \le \varepsilon.
\]
Since $f_m(x) \to f(x)$ as $m \to \infty$, this gives
\[
|f_n(x) - f(x)| \le \varepsilon,
\]
for every $n \ge N$ and every $x \in E$, which completes the proof.
Hence, if a sequence of functions is uniformly Cauchy, then it converges pointwise to a limiting function, and moreover the convergence is uniform.

The following theorem shows that uniform convergence preserves the continuity and boundedness of the elements of a sequence of functions.

Theorem 120 (Uniform Convergence Theorem) Let $S \subseteq \mathbb{R}^n$, and let $\{f_n\}$ be a sequence of functions from $S$ to $\mathbb{R}$ such that $f_n \to f$ uniformly. If the functions $f_n$ are all bounded and continuous, then $f$ is bounded and continuous.
Proof. Boundedness is straightforward: since $f_n \to f$ uniformly, for $n$ large $|f(x)| \le |f_n(x)| + 1$ for all $x \in S$, so if $f$ were unbounded, then the functions $f_n$ would have to be unbounded as well.

To show continuity, fix $x \in S$; we need to show that for all $\varepsilon > 0$ there exists a $\delta > 0$ such that $y \in S$ and $\|x - y\| < \delta$ implies $|f(x) - f(y)| < \varepsilon$. Let $\varepsilon > 0$ be given. For any $k$,
\[
|f(x) - f(y)| \le |f(x) - f_k(x)| + |f_k(x) - f_k(y)| + |f_k(y) - f(y)|.
\]
Fix $N$ sufficiently large so that for all $n \ge N$ and for all $z \in S$,
\[
|f(z) - f_n(z)| < \frac{\varepsilon}{3}.
\]
Since $f_N$ is continuous, there exists a $\delta > 0$ such that
\[
\|x - y\| < \delta \implies |f_N(x) - f_N(y)| < \frac{\varepsilon}{3}.
\]
Therefore, whenever $\|x - y\| < \delta$, we have
\[
|f(x) - f(y)| \le |f(x) - f_N(x)| + |f_N(x) - f_N(y)| + |f_N(y) - f(y)| < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon.
\]
Note that the previous two theorems are building blocks in showing that $C(X)$, the set of bounded and continuous functions on a set $X \subseteq \mathbb{R}^l$, with $\rho(f,g) = \sup_{x \in X} |f(x) - g(x)|$, is complete [Theorem 3.1 on page 47 in Stokey and Lucas with Prescott (1989)].

The following theorem provides another characterization of uniform convergence.

Theorem 121 Suppose
\[
\lim_{n \to \infty} f_n(x) = f(x) \text{ for } x \in E.
\]
Let
\[
M_n = \sup_{x \in E} |f_n(x) - f(x)|.
\]
Then $f_n \to f$ uniformly on $E$ if and only if $M_n \to 0$ as $n \to \infty$.
Example 122 For each $n$, let $f_n: \mathbb{R}_+ \to \mathbb{R}_+$ be given by
\[
f_n(x) = \begin{cases} nx, & \text{for } x \le \frac{1}{n} \\ 1, & \text{for } x > \frac{1}{n}. \end{cases}
\]
Then $f_n$ converges pointwise to
\[
f(x) = \begin{cases} 0, & \text{for } x = 0 \\ 1, & \text{for } x > 0. \end{cases}
\]
However, $f_n \nrightarrow f$ uniformly; otherwise $f$ would be continuous.
The next theorem provides another useful way to check uniform convergence.

Theorem 123 Suppose $E$ is compact, and
(i) $\{f_n\}$ is a sequence of continuous functions on $E$,
(ii) $\{f_n\}$ converges pointwise to a continuous function $f$ on $E$,
(iii) $f_n(x) \ge f_{n+1}(x)$ for all $x \in E$, $n = 1, 2, 3, \dots$
Then $f_n \to f$ uniformly on $E$.

The following example shows that compactness of the set on which the sequence is defined is crucial in the last theorem.
Example 124 For each $n$, let $f_n: (0,1) \to \mathbb{R}$ in $C(0,1)$ be given by
\[
f_n(x) = \frac{1}{1 + nx}, \quad x \in (0,1).
\]
Note that $f_n(x) \to 0$ monotonically on $(0,1)$, yet the convergence is not uniform since
\[
\rho(f_n, f) = \sup_{x \in (0,1)} \left| \frac{1}{1 + nx} - 0 \right| = 1.
\]
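A numerical sketch of this failure: at any fixed $x$ the values $f_n(x)$ go to zero, but sampling near the open endpoint $0$ shows the sup distance staying near $1$ (the sample points are my own choice for the check):

```python
def f(n, x):
    return 1 / (1 + n * x)

# Pointwise: at a fixed x in (0,1), f_n(x) -> 0.
print([f(n, 0.5) for n in (1, 10, 100, 1000)])

# But the sup over (0,1) does not shrink: sample points approaching 0.
def sup_dist(n):
    return max(f(n, 10.0 ** -k) for k in range(1, 9))
```

Evaluating `sup_dist(n)` for large `n` still gives a value close to 1, so $M_n \nrightarrow 0$; the set $(0,1)$ is not compact, which is why Theorem 123 does not apply.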
Theorem 125 Let $X \subseteq \mathbb{R}^n$, and let $C(X)$ be the set of bounded continuous functions $f: X \to \mathbb{R}$ with the sup metric. Then $C(X)$ is a complete metric space.

Proof. See Stokey and Lucas with Prescott (1989), Theorem 3.1.

Hence, if we have a sequence of functions in $C(X)$ and the sequence satisfies the Cauchy criterion, then the limiting function is also in $C(X)$.
6.6 Contraction Mappings

We will now analyze operators, such as the operator $T$ we defined above:
\[
(TV)(k) = \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta V(k') \right].
\]
Definition 126 Let $(S, \rho)$ be a metric space and $T: S \to S$ be a function mapping $S$ into itself. $T$ is a contraction mapping (with modulus $\beta$) if for some $\beta \in (0,1)$,
\[
\rho(Tx, Ty) \le \beta \rho(x, y), \text{ for all } x, y \in S.
\]

Theorem 127 (Contraction Mapping or Banach Fixed Point Theorem) If $(S, \rho)$ is a complete metric space and $T: S \to S$ is a contraction mapping with modulus $\beta$, then
(i) $T$ has exactly one fixed point $V$ in $S$, and
(ii) for any $V_0 \in S$, $\rho(T^n V_0, V) \le \beta^n \rho(V_0, V)$, $n = 0, 1, 2, 3, \dots$
Proof. Choose $V_0$, and define $\{V_n\}_{n=0}^{\infty}$ by $V_{n+1} = TV_n$, so that $V_1 = TV_0$, etc. Then,
\[
\rho(V_2, V_1) = \rho(TV_1, TV_0) \le \beta \rho(V_1, V_0),
\]
and operating in the same way leads to
\[
\rho(V_{n+1}, V_n) \le \beta^n \rho(V_1, V_0), \quad n = 0, 1, 2, \dots
\]
For any $m > n$,
\[
\rho(V_m, V_n) \le \rho(V_m, V_{m-1}) + \dots + \rho(V_{n+1}, V_n)
\le [\beta^{m-1} + \dots + \beta^{n+1} + \beta^n]\, \rho(V_1, V_0)
= \beta^n [\beta^{m-n-1} + \dots + \beta + 1]\, \rho(V_1, V_0)
\le \frac{\beta^n}{1 - \beta}\, \rho(V_1, V_0).
\]
Hence, $\{V_n\}$ is a Cauchy sequence. Since $S$ is complete,
\[
V_n \to V \in S.
\]
To show that $TV = V$, note that for all $n$ and for all $V_0 \in S$,
\[
\rho(TV, V) \le \rho(TV, T^n V_0) + \rho(T^n V_0, V) \le \beta \rho(V, T^{n-1} V_0) + \rho(T^n V_0, V).
\]
As $n \to \infty$, both terms on the right go to zero, so $\rho(TV, V) = 0$, hence $TV = V$.

To show that $V$ is unique, suppose $\exists \hat{V} \in S$ such that $T\hat{V} = \hat{V}$ and $V \ne \hat{V}$. Then
\[
0 < \rho(\hat{V}, V) = \rho(T\hat{V}, TV) \le \beta \rho(\hat{V}, V),
\]
a contradiction, since $\beta < 1$. Finally, for any $n \ge 1$,
\[
\rho(T^n V_0, V) = \rho(T(T^{n-1} V_0), TV) \le \beta \rho(T^{n-1} V_0, V),
\]
and iterating this inequality gives (ii).
Corollary 128 Let $(S, \rho)$ be a complete metric space and let $T: S \to S$ be a contraction mapping with fixed point $V \in S$. If $S'$ is a closed subset of $S$ and $T(S') \subseteq S'$, then $V \in S'$. If, in addition, $T(S') \subseteq S'' \subseteq S'$, then $V \in S''$.
The Contraction Mapping Theorem states that if you have a contraction mapping $T$ on a complete metric space, then you always have a unique fixed point. Furthermore, you will get to this fixed point starting from any initial guess in this metric space. The corollary states that if the contraction mapping $T$ maps a closed subset $S'$ of $S$ into itself, then the fixed point has to be in this subset. If $T$ maps $S'$ into a closed subset $S''$ of $S'$, then the fixed point has to be in $S''$.
The question is how we can check whether $T$ is a contraction mapping. The following theorem provides a nice characterization.
Theorem 129 (Blackwell's Sufficiency Conditions) Let $X \subseteq \mathbb{R}^l$ and let $B(X)$ be a space of bounded functions $f: X \to \mathbb{R}$ with the sup norm. Let $T: B(X) \to B(X)$ satisfy
i) (monotonicity) for all $f, g \in B(X)$, $f(x) \le g(x)$ for all $x \in X$ implies $(Tf)(x) \le (Tg)(x)$ for all $x \in X$;
ii) (discounting) $\exists \beta \in (0,1)$ such that $[T(f + a)](x) \le (Tf)(x) + \beta a$, for all $f \in B(X)$, $a \ge 0$, and $x \in X$.
Then $T$ is a contraction mapping with modulus $\beta$.

Proof. See Stokey and Lucas with Prescott (1989), Theorem 3.3.
Consider the operator for the one-sector growth model,
\[
(TV)(k) = \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta V(k') \right].
\]
Let $W(k) \ge V(k)$ for all $k$; then
\[
(TW)(k) = \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta W(k') \right]
\ge \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta V(k') \right] = (TV)(k),
\]
and
\[
T(V + a)(k) = \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta V(k') + \beta a \right]
= \max_{0 \le k' \le f(k)} \left[ u(f(k) - k') + \beta V(k') \right] + \beta a
= (TV)(k) + \beta a.
\]
Hence both of Blackwell's conditions are satisfied, and $T$ is a contraction with modulus $\beta$.
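Since the conditions hold, the Bellman operator is a contraction, and iterating it from any initial guess converges. Below is a minimal value-function-iteration sketch on a grid, assuming (hypothetically, for illustration only) $u = \log$ and $f(k) = k^{\alpha}$; the grid and parameter values are my own choices, not from the notes:

```python
import math

alpha, beta = 0.3, 0.9
grid = [0.05 + 0.01 * i for i in range(100)]   # capital grid
fk = [k ** alpha for k in grid]                # f(k) precomputed on the grid

def bellman(V):
    """One application of (TV)(k) = max over feasible k' of log(f(k)-k') + beta*V(k')."""
    TV = []
    for i in range(len(grid)):
        best = -math.inf
        for j, kp in enumerate(grid):
            if kp < fk[i]:                     # feasibility: k' in [0, f(k)), consumption > 0
                best = max(best, math.log(fk[i] - kp) + beta * V[j])
        TV.append(best)
    return TV

V = [0.0] * len(grid)                          # any initial guess works
dists = []
for _ in range(200):
    TV = bellman(V)
    dists.append(max(abs(a - b) for a, b in zip(TV, V)))   # sup-norm distance
    V = TV
```

The successive sup-norm distances shrink geometrically at rate $\beta$, exactly as the Contraction Mapping Theorem predicts.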
Exercise 130 Consider the mapping $A$ of $n$-dimensional space into itself given by the system of linear equations
\[
y = A(x),
\]
where
\[
y_i = \sum_{j=1}^{n} a_{ij} x_j + b_i, \quad i = 1, \dots, n.
\]
If $A$ is a contraction mapping, we can use the method of successive approximations to solve the equation $Ax = x$. Given
\[
\rho(x,y) = \max_{1 \le i \le n} |x_i - y_i|,
\]
find the conditions on the $a_{ij}$ such that $A$ is a contraction mapping. First note that
\[
|(Ax)_i - (Ay)_i| = \left| \sum_{j=1}^{n} a_{ij}(x_j - y_j) \right| \le \sum_{j=1}^{n} |a_{ij}||x_j - y_j|
\le \max_{1 \le j \le n} |x_j - y_j| \sum_{j=1}^{n} |a_{ij}|.
\]
Then,
\[
\rho(Ax, Ay) = \max_{1 \le i \le n} |(Ax)_i - (Ay)_i|
\le \left( \max_{1 \le j \le n} |x_j - y_j| \right) \left( \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| \right)
= \left( \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| \right) \rho(x, y).
\]
If
\[
\max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}| \le k < 1,
\]
then $A$ is a contraction mapping with modulus $k$.
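This row-sum condition is easy to exploit numerically. A small sketch of the method of successive approximations, with a hypothetical $3 \times 3$ system chosen so that the condition holds:

```python
A = [[0.1, 0.2, 0.3],
     [0.0, 0.4, 0.1],
     [0.2, 0.2, 0.2]]          # max row sum of |a_ij| = 0.6 < 1
b = [1.0, 2.0, 3.0]

def apply_A(x):
    """One step of the mapping: (Ax)_i = sum_j a_ij x_j + b_i."""
    return [sum(A[i][j] * x[j] for j in range(3)) + b[i] for i in range(3)]

x = [0.0, 0.0, 0.0]
for _ in range(100):           # successive approximations x_{k+1} = A x_k + b
    x = apply_A(x)

# If x is (numerically) a fixed point, applying A once more barely moves it.
residual = max(abs(xi - yi) for xi, yi in zip(x, apply_A(x)))
```

After 100 iterations the residual is below $0.6^{100}$ times the initial distance, i.e. at machine-precision level, so `x` solves $x = Ax + b$.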
Exercise 131 Let $X = (1, \infty)$. Let $f: X \to \mathbb{R}$ be given by
\[
f(x) = \frac{1}{2}\left( x + \frac{a}{x} \right).
\]
Show that if $a \in (1, 3)$, then $f$ is a contraction. Find the fixed point of $f$ as a function of $a$. One needs to check two things: a) $f$ maps $X$ into $X$, and b) for all $x$ and $x'$ from $X$, $\rho(f(x), f(x')) \le k \rho(x, x')$ for some $k \in (0,1)$. To check that $f$ maps $X$ into $X$, observe that $f''(x) = \frac{a}{x^3} > 0$ for all $x \in X$ and $a \in (1,3)$. Thus, $f$ is strictly convex, which means that it has a unique minimum. To find that minimum, solve $f'(x) = \frac{1}{2}\left(1 - \frac{a}{x^2}\right) = 0$. The solution is $x^* = \sqrt{a} > 1$. Then $f(x^*) = \sqrt{a}$, which implies that for all $a \in (1,3)$, $f: X \to X$. To verify that $f(x)$ is actually a contraction, consider
\[
\rho(f(x), f(x')) = \left| \frac{1}{2}\left( x + \frac{a}{x} \right) - \frac{1}{2}\left( x' + \frac{a}{x'} \right) \right|
= \left| \frac{1}{2}\left( 1 - \frac{a}{x x'} \right) \right| |x - x'|.
\]
It suffices to show that $a \in (1,3)$ implies $\left| \frac{1}{2}\left(1 - \frac{a}{x x'}\right) \right| < 1$ for all $x, x' \in X$. For any fixed $a$, the function $g_a(z) = \left| \frac{1}{2}\left(1 - \frac{a}{z}\right) \right|$, $z \in X$, is decreasing on $(1, a)$, is equal to zero at $z = a$, and is increasing on $(a, +\infty)$ with $\lim_{z \to +\infty} g_a(z) = \frac{1}{2}$. Therefore, it is sufficient to consider $x$ and $x'$ such that $x x' \in (1, a)$. Then,
\[
0 \le \left| \frac{1}{2}\left(1 - \frac{a}{x x'}\right) \right| < \left| \frac{1}{2}(1 - a) \right|, \quad \text{and} \quad \left| \frac{1}{2}(1 - a) \right| < 1 \text{ if } a \in (1, 3).
\]
Finally, the fixed point solves $f(x) = x$, which gives $x = \sqrt{a}$.
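Since $f$ is a contraction on $X$, iterating it from any starting point in $X$ converges to the fixed point $\sqrt{a}$; this is the classical Babylonian method for square roots. A quick sketch:

```python
def f(x, a):
    """The contraction from Exercise 131."""
    return 0.5 * (x + a / x)

a = 2.5                 # any a in (1, 3)
x = 2.9                 # any starting point in X = (1, infinity)
for _ in range(50):
    x = f(x, a)
```

After a handful of iterations `x` agrees with $\sqrt{a}$ to machine precision, illustrating part (ii) of the Contraction Mapping Theorem.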
6.7 Correspondences

We will now have a closer look at the particular operator $T$ in
\[
V(x) = \max_{y \in \Gamma(x)} \left[ F(x,y) + \beta V(y) \right]. \tag{FE}
\]
Obviously, the properties of $T$ will depend on the maximization problem
\[
\max_{y \in \Gamma(x)} \left[ F(x,y) + \beta V(y) \right].
\]

Definition 132 A correspondence $\Gamma$ from $X \subseteq \mathbb{R}^l$ into $Y \subseteq \mathbb{R}^m$ is a map which associates with each element $x \in X$ a (nonempty) subset $\Gamma(x) \subseteq Y$.

Consider the following dynamic programming problem,
\[
V(x) = \sup_{y} \left[ F(x,y) + \beta V(y) \right] \quad \text{s.t. } y \text{ is feasible given } x,
\]
where $x \in X \subseteq \mathbb{R}^l$ is the beginning-of-period state variable, $y \in X$ is the end-of-period state variable (or control variable) to be chosen, and $F(x,y)$ is the current period return function.

Correspondences are used to denote the relationship between the current state variable, $x$, and the choice variable, $y$. A feasibility correspondence, $\Gamma: X \to X$, is used to define which values of $y$ are feasible given $x$. We would like to know how $\Gamma(x)$ behaves as $x$ changes over $X$ in order to be able to characterize how the maximizing values of $y$ and the value function $V(x)$ behave over $X$. Hence, we need to introduce a notion of continuity for correspondences.

Definition 133 $\Gamma: X \to Y$ is a compact-valued correspondence if $\Gamma(x)$ is a compact subset of $Y$ for each $x \in X$.

Definition 134 $\Gamma: X \to Y$ is a closed-valued correspondence if $\Gamma(x)$ is a closed subset of $Y$ for each $x \in X$.

Definition 135 $\Gamma: X \to Y$ is a convex-valued correspondence if $\Gamma(x)$ is a convex subset of $Y$ for each $x \in X$.

To give an example of a correspondence with these properties, consider $\Gamma(x) = \{y \mid 0 \le y \le x\}$, where $X \subseteq \mathbb{R}_+$ and $Y \subseteq \mathbb{R}_+$.

Definition 136 The graph of a correspondence $\Gamma(x)$ is the set $A$ defined as
\[
A = \{(x,y) \mid y \in \Gamma(x)\}.
\]
Definition 137 $\Gamma: X \to Y$ is a closed-graph correspondence if $A$ is a closed set.

Definition 138 $\Gamma: X \to Y$ is a convex-graph correspondence if $A$ is a convex set.

Note that a closed-graph correspondence is also closed-valued, and a convex-graph correspondence is also convex-valued. The converses, however, do not hold.
6.7.1 Lower Hemi-Continuity

Definition 139 A correspondence $\Gamma: X \to Y$ is lower hemi-continuous (l.h.c.) at $x$ if $\Gamma(x)$ is nonempty and if, for every $y \in \Gamma(x)$ and every sequence $x_n \to x$, there exist $N \ge 1$ and a sequence $\{y_n\}_{n=N}^{\infty}$ such that $y_n \to y$ and $y_n \in \Gamma(x_n)$ for all $n \ge N$.
Note that in order to check l.h.c. of a correspondence at $x$, we first pick any point $y \in \Gamma(x)$ and a sequence $x_n \to x$. Then, we look for a sequence $\{y_n\}$ which is contained in $\Gamma(x_n)$ and converges to the point $y$. Since we first pick any point $y$ in the image of $x$, l.h.c. fails if there is a sudden "blow-up" in the correspondence. To give an example, consider the correspondence in Figure 16. It fails to be l.h.c. at $x$ because there is no sequence $\{y_n\}$ which converges to $y$ and is contained in $\Gamma(x_n)$ for all $n$.
[Figure 16: Not l.h.c. at $x$ — a correspondence $\Gamma(x)$ that blows up at $x$, so that a point $y \in \Gamma(x)$ cannot be approached by any sequence $y_n \in \Gamma(x_n)$ along $x_n \to x$.]
6.7.2 Upper Hemi-Continuity

Definition 140 A compact-valued correspondence $\Gamma: X \to Y$ is upper hemi-continuous (u.h.c.) at $x$ if $\Gamma(x)$ is nonempty and if, for every sequence $x_n \to x$ and every sequence $\{y_n\}$ such that $y_n \in \Gamma(x_n)$ for all $n$, there exists a convergent subsequence of $\{y_n\}$ whose limit point $y$ is in $\Gamma(x)$.
Note that in order to check u.h.c. of a correspondence at $x$, we first pick $x_n \to x$ and a sequence $\{y_n\}$ contained in the images of the $x_n$. Then, we look for a convergent subsequence of $\{y_n\}$ which converges to a point $y$ in the image of $x$. Upper hemi-continuity will fail if there is a sudden "collapse" in the correspondence: then we could pick $x_n$ and $y_n$, but fail to find a point $y$ in the image of $x$ such that a subsequence of $\{y_n\}$ converges to that point. The correspondence in Figure 17 is not u.h.c. at $x$.
[Figure 17: Not u.h.c. at $x$ — a correspondence $\Gamma(x)$ that collapses at $x$, so that a sequence $y_n \in \Gamma(x_n)$ along $x_n \to x$ has no subsequence converging to a point of $\Gamma(x)$.]
Definition 141 A correspondence $\Gamma: X \to Y$ is continuous at $x \in X$ if it is both u.h.c. and l.h.c. at $x$.
6.8 The Theorem of the Maximum

Consider the following optimization problem,
\[
\sup_{y \in \Gamma(x)} f(x,y),
\]
where $f: X \times Y \to \mathbb{R}$ is a single-valued function, and $\Gamma: X \to Y$ is a nonempty correspondence. Let's define the maximized value function $h(x)$ as
\[
h(x) = \max_{y \in \Gamma(x)} f(x,y),
\]
and the set of maximizers $G(x)$ as
\[
G(x) = \{y \in \Gamma(x) \mid f(x,y) = h(x)\}.
\]
Theorem 142 Let $X \subseteq \mathbb{R}^l$ and $Y \subseteq \mathbb{R}^m$, let $f: X \times Y \to \mathbb{R}$ be a continuous function, and let $\Gamma: X \to Y$ be a compact-valued and continuous correspondence. Then:
(i) the function $h(x)$ is continuous, and
(ii) the correspondence $G(x)$ is nonempty, compact-valued, and u.h.c.
Proof.
Step 1: $G(x) \ne \emptyset$.
Fix $x$. Since $\Gamma(x)$ is compact-valued and $f(x, \cdot)$ is continuous, a maximum exists by the Weierstrass Theorem. Hence, $G(x) \ne \emptyset$.

Step 2: $G(x)$ is compact-valued.
Step 2a: $G(x)$ is bounded.
Fix $x$. Since $\Gamma(x)$ is compact-valued, it is also bounded. $G(x)$ is the set of maximizers and $\Gamma(x)$ is the feasible set; therefore $G(x) \subseteq \Gamma(x)$. Hence, $G(x)$ is bounded.

Step 2b: $G(x)$ is closed.
Fix $x$. Suppose $y_n \to y$ and $y_n \in G(x)$ for all $n$ (hence, every element of $\{y_n\}$ is a maximizer given $x$). We want to show that $y \in G(x)$. Since $G(x) \subseteq \Gamma(x)$, $y_n \in \Gamma(x)$ for all $n$ as well. Since $\Gamma(x)$ is compact-valued, it is also closed; therefore $y \in \Gamma(x)$ (hence, $y$ is feasible given $x$). Since $h(x) = f(x, y_n)$ for all $n$ and $f$ is continuous, $h(x) = f(x, y)$. Hence, $y \in G(x)$.

Step 3: $G(x)$ is u.h.c.
Fix $x$. Let $x_n \to x$ and choose $y_n \in G(x_n)$ for all $n$. We want to show that there exists a subsequence $\{y_{n_k}\}$ of $\{y_n\}$ such that $\{y_{n_k}\} \to y \in G(x)$. Since $G(x_n) \subseteq \Gamma(x_n)$, $y_n \in \Gamma(x_n)$ for all $n$ as well. Since $\Gamma$ is u.h.c., there must exist a subsequence $\{y_{n_k}\}$ of $\{y_n\}$ such that $\{y_{n_k}\} \to y \in \Gamma(x)$. Let $z \in \Gamma(x)$ (hence, $z$ is any feasible point given $x$). Since $\Gamma$ is also l.h.c., there must exist a sequence $z_{n_k} \to z$ such that $z_{n_k} \in \Gamma(x_{n_k})$ for all $k$.
Since $f(x_{n_k}, y_{n_k}) \ge f(x_{n_k}, z_{n_k})$ [note that the $y_{n_k}$ are maximizers, while the $z_{n_k}$ are merely feasible], and $f$ is continuous, we have $f(x, y) \ge f(x, z)$. Since this holds for any $z \in \Gamma(x)$, $y \in G(x)$. Hence, $G(x)$ is u.h.c.

Step 4: $h(x)$ is continuous.
Fix $x$. Let $\{x_n\}$ be a sequence converging to $x$. We want to show that $\{h(x_n)\} \to h(x)$. Choose $y_n \in G(x_n)$ for all $n$ (hence, $\{y_n\}$ is a sequence of maximizers corresponding to each element of $\{x_n\}$). Let's define $\overline{h} = \limsup h(x_n)$ and $\underline{h} = \liminf h(x_n)$. Note that we need to show that $\overline{h} = \underline{h} = h(x)$.

By the definition of limsup, there exists a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $\overline{h} = \lim f(x_{n_k}, y_{n_k})$. Note that since $y_n \in G(x_n)$ for all $n$, $\{f(x_{n_k}, y_{n_k})\}$ is a subsequence of $\{h(x_n)\}$.

Since $G$ is u.h.c., there must exist a convergent subsequence of $\{y_{n_k}\}$, call it $\{y'_j\}$, converging to a limit point $y \in G(x)$. Hence, $\overline{h} = \lim f(x_j, y'_j) = f(x, y) = h(x)$. A similar argument also shows that $\underline{h} = h(x)$.
Corollary 143 If $\Gamma$ is compact-valued, continuous, and convex-valued, and $f$ is continuous and strictly concave in $y$, then $G$ is single-valued; therefore it is a continuous function, called $g$.

Note that in the following problem,
\[
(TV)(x) = \max_{y \in \Gamma(x)} \left[ F(x,y) + \beta V(y) \right],
\]
$TV$ is the maximized value function, and therefore if $F(x,y) + \beta V(y)$ is continuous and $\Gamma(x)$ is compact-valued and continuous, then $TV$ is also a continuous function. Furthermore, if $F(x,y) + \beta V(y)$ is continuous and strictly concave and $\Gamma(x)$ is compact-valued, continuous, and convex-valued, then there is a unique maximizer $y$, and hence $G(x) = g(x)$ is a function.
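A tiny numerical illustration of the corollary, with hypothetical choices of my own (not from the notes): take $f(x,y) = xy - y^2$, which is strictly concave in $y$, and $\Gamma(x) = [0,x]$; then the unique maximizer is $g(x) = x/2$ and the value is $h(x) = x^2/4$, both continuous:

```python
def h(x, grid=10_001):
    """Grid approximation of h(x) = max over y in [0, x] of x*y - y**2."""
    ys = [x * i / (grid - 1) for i in range(grid)]   # Gamma(x) = [0, x]
    return max(x * y - y * y for y in ys)

for x in (0.5, 1.0, 2.0):
    print(x, h(x), x * x / 4)   # the grid maximum matches x**2/4
```

The grid contains the exact maximizer $y = x/2$, so the computed value coincides with the closed form up to rounding.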
References
[1] Rudin, W. Principles of Mathematical Analysis, McGraw-Hill, 1976.
[2] Sundaram, R. K. A First Course in Optimization Theory, Cambridge University Press, 1996.
[3] Stokey, N. L. and Lucas, R. E. with Prescott, E. Recursive Methods in Economic Dynamics, Harvard University Press, 1989.
7 Principle of Optimality

We started our analysis of infinite-horizon problems with the following sequential problem (SP):
\[
\sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1}), \tag{SP}
\]
subject to
\[
x_{t+1} \in \Gamma(x_t), \quad t = 0, 1, 2, \dots,
\]
and $x_0$ given.

We then argued that the supremum function defined by
\[
V^*(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})
\]
should satisfy the following functional equation (FE),
\[
V(x) = \sup_{y \in \Gamma(x)} \left[ F(x,y) + \beta V(y) \right]. \tag{FE}
\]
The supremum function $V^*(x_0)$ tells us the infinite discounted value of following the best sequence $\{x_{t+1}\}_{t=0}^{\infty}$. Our hope was that rather than finding the best sequence $\{x^*_{t+1}\}_{t=0}^{\infty}$, we can try to find the function $V^*(x_0)$ as a solution to the FE. If our conjecture is correct, then the function $V$ that solves the FE would give us the supremum function, i.e. $V(x_0) = V^*(x_0)$.

We will now try to show that:
• The supremum function $V^*$ has to satisfy the FE.
• If there is a solution to the FE, then it is the supremum function, i.e. the solution $V$ to the FE evaluated at $x_0$ gives the value of the supremum in the SP when the initial value is $x_0$.
• A sequence $\{x_{t+1}\}_{t=0}^{\infty}$ attains the supremum in the SP if and only if it satisfies
\[
V(x_t) = F(x_t, x_{t+1}) + \beta V(x_{t+1}), \quad t = 0, 1, 2, 3, \dots
\]
• There is indeed a solution to the FE.

In order to prove these claims, let's first introduce some notation. Let $X$ be the set of possible values of the state. We will let $\Gamma: X \to X$ denote the feasibility correspondence. The graph of the feasibility correspondence is then given by $A = \{(x,y) \in X \times X : y \in \Gamma(x)\}$. We will let $F: A \to \mathbb{R}$ be the return function. Note that $F$ indicates the momentary return of being in a particular state $x \in X$ and making a particular choice $y$ out of the feasible values $\Gamma(x)$. Finally, we will let $\beta \ge 0$ be the discount factor.
These are the givens in this problem. In order to gain some better insight, we will use the one-sector growth model as an example.

Example 144 (Stokey, Lucas, and Prescott (1989), Exercise 5.1, p. 103) Consider the dynamic programming problem for the one-sector growth model,
\[
V(x) = \max_{0 \le y \le f(x)} \{U[f(x) - y] + \beta V(y)\},
\]
under the following standard assumptions on preferences and technology (where $U: \mathbb{R}_+ \to \mathbb{R}$ and $f: \mathbb{R}_+ \to \mathbb{R}$):
• U1: $0 < \beta < 1$,
• U2: $U$ is continuous,
• U3: $U$ is strictly increasing,
• U4: $U$ is strictly concave,
• U5: $U$ is continuously differentiable,
• T1: $f$ is continuous,
• T2: $f(0) = 0$; for some $\bar{x} > 0$, $x \le f(x) \le \bar{x}$ for all $0 \le x \le \bar{x}$, and $f(x) < x$ for all $x > \bar{x}$,
• T3: $f$ is strictly increasing,
• T4: $f$ is weakly concave,
• T5: $f$ is continuously differentiable.

Note that in this example we can pick $X = [0, \bar{x}]$, where $\bar{x}$ is the highest maintainable capital stock (see Figure 18), $F(x,y) = U[f(x) - y]$, and $\Gamma(x) = [0, f(x)]$.

We will sometimes use the following notation to analyze the one-sector growth model,
\[
V(k) = \max_{0 \le k' \le f(k)} \{U[f(k) - k'] + \beta V(k')\},
\]
or
\[
V(k_t) = \max_{0 \le k_{t+1} \le f(k_t)} \{U[f(k_t) - k_{t+1}] + \beta V(k_{t+1})\}.
\]
[Figure 18: $X = [0, \bar{x}]$, where $\bar{x}$ is the upper bound determined by the intersection of $y = \delta x$ and $y = F(x,1)$.]
In order to define the supremum function $V^*(x_0)$, we will introduce some further notation. We will call any sequence of actions $\{x_{t+1}\}_{t=0}^{\infty}$ a plan. Obviously, we are only interested in the set of feasible plans starting from $x_0$, defined as
\[
\Pi(x_0) = \left\{ \{x_{t+1}\}_{t=0}^{\infty} : x_{t+1} \in \Gamma(x_t),\ t = 0, 1, 2, \dots \right\}.
\]
Let $\tilde{x} = (x_0, x_1, \dots) \in \Pi(x_0)$ be a typical feasible plan.

Note that finding a solution to the SP is simply finding the best feasible plan (or plans, if there is more than one). In order to be able to say anything about the solutions to the SP, we will need two assumptions:
• A1: $\Gamma(x) \ne \emptyset$ for all $x \in X$.
• A2: $\forall x_0 \in X$ and $\tilde{x} \in \Pi(x_0)$,
\[
\lim_{n \to \infty} \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1}) \text{ exists (it may be } +\infty \text{ or } -\infty).
\]
Hence, we want to make sure that i) $\Pi(x_0)$ is not empty, i.e. there is some feasible plan, and ii) we can evaluate (say how good or bad it is) any feasible plan.
For each $n = 0, 1, 2, \dots$ we will define $u_n: \Pi(x_0) \to \mathbb{R}$ by
\[
u_n(\tilde{x}) = \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1}),
\]
which simply gives us the discounted return from following a feasible plan $\tilde{x}$ from date 0 to date $n$. Also let $u: \Pi(x_0) \to \overline{\mathbb{R}} = \mathbb{R} \cup \{\pm\infty\}$,
\[
u(\tilde{x}) = \lim_{n \to \infty} u_n(\tilde{x}),
\]
be the infinite discounted sum of returns from following the feasible plan $\tilde{x}$. Then, the supremum function is simply defined as
\[
V^*(x_0) = \sup_{\tilde{x} \in \Pi(x_0)} u(\tilde{x}).
\]

Remark 145 Given A1 and A2, $V^*(x_0)$ is uniquely defined.
For some results below, we will make a more restrictive assumption than A2 and assume that $F$ is bounded.
• A2': $F: A \to \mathbb{R}$ is bounded.
Then, both $V^*$ and $V$ will be bounded functions. In particular, if $B$ is a bound on $F$, then
\[
|V^*(x_0)| \le \sup_{\tilde{x} \in \Pi(x_0)} \sum_{t=0}^{\infty} \beta^t |F(x_t, x_{t+1})| \le \sum_{t=0}^{\infty} \beta^t B = \frac{B}{1 - \beta}.
\]
Hence, the supremum in the SP is finite. Then, by definition, $V^*(x_0)$ is the unique function satisfying the following conditions:
\[
V^*(x_0) \ge u(\tilde{x}) \text{ for all } \tilde{x} \in \Pi(x_0), \tag{SP1}
\]
and, for any $\varepsilon > 0$,
\[
V^*(x_0) \le u(\tilde{x}) + \varepsilon \text{ for some } \tilde{x} \in \Pi(x_0). \tag{SP2}
\]
Hence, if $|V^*(x_0)| < \infty$, then it gives at least as much utility as any feasible plan, and it can be approached arbitrarily closely by some feasible plan.
We want to show that $V^{*}(x_0)$ satisfies the FE. We have to make clearer
what we mean by that. We will say that $V^{*}$ satisfies the FE if
$$V^{*}(x_0) \geq F(x_0, y) + \beta V^{*}(y) \quad \text{for all } y\in\Gamma(x_0), \tag{FE1}$$
and for any $\varepsilon > 0$,
$$V^{*}(x_0) \leq F(x_0, y) + \beta V^{*}(y) + \varepsilon \quad \text{for some } y\in\Gamma(x_0). \tag{FE2}$$
We will first prove that under A1 and A2′ the supremum function $V^{*}$ satisfies
the FE. First, however, we will prove a Lemma that is key. This Lemma simply
tells us that you can separate the discounted infinite sum of returns from any
feasible plan into current and future returns. This separation is key in dynamic
programming.
Lemma 146 Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A2. Then, for any $x_0 \in X$ and any
$(x_0, x_1, \ldots) = \tilde{x}\in\Pi(x_0)$,
$$u(\tilde{x}) = F(x_0, x_1) + \beta u(\tilde{x}'),$$
where $\tilde{x}' = (x_1, x_2, \ldots)$.

Proof: Remember
$$u(\tilde{x}) = \lim_{n\to\infty}\sum_{t=0}^{n}\beta^{t}F(x_t, x_{t+1}).$$
Then,
$$u(\tilde{x}) = F(x_0, x_1) + \beta\lim_{n\to\infty}\sum_{t=0}^{n}\beta^{t}F(x_{t+1}, x_{t+2}) = F(x_0, x_1) + \beta u(\tilde{x}').$$
Note that for our one-sector growth model example, this lemma simply states that
$$\sum_{t=0}^{\infty}\beta^{t}U(f(x_t) - x_{t+1}) = U(f(x_0) - x_1) + \beta\underbrace{\sum_{t=0}^{\infty}\beta^{t}U(f(x_{t+1}) - x_{t+2})}_{\text{the present value of following a feasible plan starting from } x_1}.$$
Theorem 147 Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A1–A2′. Then, $V^{*}$ satisfies the FE.

Proof: Suppose $\beta > 0$ (otherwise the result is trivial) and choose $x_0$. We
know that SP1 and SP2 hold. To establish FE1, let $x_1\in\Gamma(x_0)$, and let $\varepsilon > 0$ be
given. We know that there exists a feasible plan $\tilde{x}' = (x_1, x_2, \ldots)\in\Pi(x_1)$ such that
$$u(\tilde{x}') \geq V^{*}(x_1) - \varepsilon.$$
This is true since $V^{*}(x_1)$ is the supremum. Since $(x_0, \tilde{x}')\in\Pi(x_0)$, by
the previous Lemma
$$V^{*}(x_0) \geq u(\tilde{x}) = F(x_0, x_1) + \beta u(\tilde{x}') \geq F(x_0, x_1) + \beta V^{*}(x_1) - \beta\varepsilon.$$
Since $x_1$ is arbitrary, for all $y\in\Gamma(x_0)$,
$$V^{*}(x_0) \geq F(x_0, y) + \beta V^{*}(y) - \beta\varepsilon,$$
and since $\varepsilon$ is also arbitrary,
$$V^{*}(x_0) \geq F(x_0, y) + \beta V^{*}(y),$$
which establishes FE1. To establish FE2, choose $x_0\in X$ and $\varepsilon > 0$. Then, by
SP2 and the previous Lemma,
$$V^{*}(x_0) \leq u(\tilde{x}) + \varepsilon = F(x_0, x_1) + \beta u(\tilde{x}') + \varepsilon,$$
and since $u(\tilde{x}') \leq V^{*}(x_1)$,
$$V^{*}(x_0) \leq F(x_0, x_1) + \beta V^{*}(x_1) + \varepsilon.$$
Since $x_1\in\Gamma(x_0)$, FE2 follows.
This theorem showed that the supremum function $V^{*}$ satisfies the FE. The
following theorem provides a converse: if $V$ satisfies the FE (i.e. if $V$ is a
solution to the FE) and if it is bounded, then $V$ is the supremum function (i.e.
$V = V^{*}$).

Theorem 148 Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A1–A2. If $V$ is a solution to the FE
and satisfies
$$\lim_{n\to\infty}\beta^{n}V(x_n) = 0, \quad \text{for all } (x_0, x_1, \ldots)\in\Pi(x_0) \text{ and for all } x_0\in X,$$
then $V = V^{*}$.
Proof: Note that if we assumed A2′, the boundedness condition would be satisfied
trivially. Hence, we could state this theorem as: "Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A1–A2′.
If $V$ is a solution to the FE, then $V = V^{*}$." Stating the boundedness condition explicitly
makes its role more transparent. We need to show that FE1 and FE2 imply SP1
and SP2. Note that FE1 implies that for all $\tilde{x}\in\Pi(x_0)$,
$$V(x_0) \geq F(x_0, x_1) + \beta V(x_1) \geq F(x_0, x_1) + \beta F(x_1, x_2) + \beta^{2}V(x_2) \geq \cdots \geq u_n(\tilde{x}) + \beta^{n+1}V(x_{n+1}), \quad n = 1, 2, \ldots$$
As $n\to\infty$, $\lim_{n\to\infty}\beta^{n}V(x_n) = 0$, and hence $V(x_0) \geq \lim_{n\to\infty} u_n(\tilde{x})$;
that is, SP1 holds:
$$V(x_0) \geq u(\tilde{x}) \quad \text{for all } \tilde{x}\in\Pi(x_0).$$
We now need to show that for any $\varepsilon > 0$,
$$V(x_0) \leq u(\tilde{x}) + \varepsilon \quad \text{for some } \tilde{x}\in\Pi(x_0).$$
To this end, fix $\varepsilon > 0$ and choose $\{\delta_t\}_{t=1}^{\infty}$ in $\mathbb{R}_{+}$ such that
$$\sum_{t=1}^{\infty}\beta^{t-1}\delta_t \leq \frac{\varepsilon}{2}.$$
Since FE2 holds, we can choose $x_1\in\Gamma(x_0)$, $x_2\in\Gamma(x_1), \ldots$ so that
$$V(x_t) \leq F(x_t, x_{t+1}) + \beta V(x_{t+1}) + \delta_{t+1}, \quad t = 0, 1, \ldots,$$
i.e.
$$V(x_0) \leq F(x_0, x_1) + \beta V(x_1) + \delta_1,$$
$$V(x_1) \leq F(x_1, x_2) + \beta V(x_2) + \delta_2,$$
etc. Then, obviously $\tilde{x} = (x_0, x_1, \ldots)\in\Pi(x_0)$, and
$$V(x_0) \leq \sum_{t=0}^{n}\beta^{t}F(x_t, x_{t+1}) + \beta^{n+1}V(x_{n+1}) + (\delta_1 + \cdots + \beta^{n}\delta_{n+1}) \leq u_n(\tilde{x}) + \beta^{n+1}V(x_{n+1}) + \varepsilon/2, \quad n = 1, 2, \ldots,$$
which implies that for all $n$ sufficiently large, $V(x_0) \leq u_n(\tilde{x}) + \varepsilon$. Since $\varepsilon > 0$
was arbitrary, it follows that SP2 holds.
It is important to note that the proof requires that $\lim_{n\to\infty}\beta^{n}V(x_n) = 0$
hold for all feasible plans. Obviously this is satisfied if $F$, and as a result $V$, is
bounded. If boundedness is not satisfied, and if there is a feasible plan that does
not satisfy $\lim_{n\to\infty}\beta^{n}V(x_n) = 0$ for a given $V$, then we cannot conclude that $V$
is the supremum function (even if it satisfies the FE). It is also important to
understand what this theorem states. It states that if $V$ satisfies the FE and
it is bounded, then it is the supremum function. If boundedness fails, then
there might be solutions to the FE that are not the supremum function.
Example 149 Consider the following consumption–saving decision of an infinitely
lived agent with initial assets $x_0 \in X = \mathbb{R}$. The agent can borrow or lend
at rate $1 + r = R = \frac{1}{\beta} > 1$. Hence, the price of borrowing one unit for tomorrow is
$\beta$. There are no borrowing constraints, i.e.
$$c_t + \beta x_{t+1} \leq x_t.$$
Hence, the agent's problem is
$$V^{*}(x_0) = \sup_{\{c_t, x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty}\beta^{t}c_t$$
s.t.
$$0 \leq c_t \leq x_t - \beta x_{t+1},$$
$$x_0 \text{ given}.$$
What is the value of $V^{*}(x_0)$? Since the agent can borrow as much as he wants, it is
obvious that $V^{*}(x_0) = \infty$. Consider now the following FE that corresponds to
this problem:
$$V(x) = \sup_{y\in\mathbb{R}}\left[x - \beta y + \beta V(y)\right].$$
Both $V(x) = \infty$ and $\bar{V}(x) = x$ are solutions to the FE, but $\bar{V}$ does not satisfy the
boundedness condition in the previous theorem. Remember that when we solved a
SP using a Lagrangian approach, we needed the Transversality Condition (TC).
Hence, if a sequence of actions satisfies the Euler equation, it is the optimal
solution if it also satisfies the TC. Argue (with some carefully selected sentences)
that the TC and the boundedness condition in Theorem 4.3 (page 72 in SLP) serve
the same purpose.
The following two theorems show that a sequence that attains the supremum
in the SP must satisfy the FE and, with bounded returns, any sequence that satisfies
the FE is a solution to the SP.

Theorem 150 Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A1–A2. Let $\tilde{x}^{*}\in\Pi(x_0)$ be a feasible
plan that attains the supremum in (SP) for $x_0$. Then
$$V^{*}(x_t^{*}) = F(x_t^{*}, x_{t+1}^{*}) + \beta V^{*}(x_{t+1}^{*}).$$

Proof: See Stokey, Lucas and Prescott (1989), Theorem 4.3, page 75.

Theorem 151 Let $X$, $\Gamma$, $F$, and $\beta$ satisfy A1–A2. Let $\tilde{x}^{*}\in\Pi(x_0)$ be a feasible
plan from $x_0$ satisfying
$$V^{*}(x_t^{*}) = F(x_t^{*}, x_{t+1}^{*}) + \beta V^{*}(x_{t+1}^{*}),$$
with
$$\limsup_{t\to\infty}\beta^{t}V^{*}(x_t^{*}) \leq 0;$$
then $\tilde{x}^{*}$ attains the supremum in the SP for $x_0$.

Proof: See Stokey, Lucas and Prescott (1989), Theorem 4.3, page 76.
7.1 FE with Bounded Returns

Consider the following functional equation for a dynamic programming problem,
$$V(x) = \max_{y\in\Gamma(x)}\left[F(x, y) + \beta V(y)\right], \tag{1}$$
under the assumption that $F$ is bounded and $\beta < 1$.

As before, $X$ is the set of all possible values for the state variable $x$, $\Gamma(x)$ is
the feasibility correspondence, $A = \{(x, y)\in X\times X : y\in\Gamma(x)\}$ is the graph of
$\Gamma(x)$, $y$ is the next period state variable to be chosen from $\Gamma(x)$, $F : A\to\mathbb{R}$ is the
return function, and $\beta \geq 0$ is the discount factor. We are interested in establishing
assumptions on $X$, $\Gamma(x)$, $F$, and $\beta$ which will guarantee that the solution to the
functional equation (1) exists, is unique, and has some desirable properties.

• Assumption 1: $X$ is a convex subset of $\mathbb{R}^{l}$. $\Gamma : X \to X$ is a nonempty,
compact-valued, and continuous correspondence.

• Assumption 2: $F : A \to \mathbb{R}$ is bounded and continuous, and $0 < \beta < 1$.

Note that if $F$ is bounded, the supremum function $V^{*}$ will also be bounded.
Therefore, we can try to find a solution to (1) in the space of
bounded and continuous functions $C(X)$, with metric
$$\rho(f, g) = \sup_{x\in X}|f(x) - g(x)|.$$
The functional equation in (1) defines an operator on the elements of $C(X)$
given by
$$(Tf)(x) = \max_{y\in\Gamma(x)}\left[F(x, y) + \beta f(y)\right]. \tag{2}$$
Given any particular $f(\cdot)\in C(X)$, we can evaluate $F(x, y) + \beta f(y)$ for every
possible value of $y$ and solve the maximization problem for any $x$. This operation
will give us a new function, denoted by $(Tf)(\cdot)$. Then, the solution to (1) will be
given by a fixed point of the operator $T$. Hence, we are trying to find a $V$ that
satisfies $V = T(V)$.

Given a fixed point $T(V) = V \in C(X)$, we can characterize the policy
correspondence $G(x)$ given by
$$G(x) = \{y\in\Gamma(x) : V(x) = F(x, y) + \beta V(y)\}. \tag{3}$$
Remark 152 It is very important to understand what the operator $T$ does.
Given a function $f$, $Tf$ is simply the maximized value function in the following
problem:
$$(Tf)(x) = \max_{y\in\Gamma(x)}\left[F(x, y) + \beta f(y)\right].$$

• Hence, if $F(x, y) + \beta f(y)$ and $y\in\Gamma(x)$ satisfy the properties of the Maximum
Theorem, we can say something about the maximized value function $Tf$.

• Beyond the Maximum Theorem, in general we would like to show that
$T : S \to S$ (where $S$ is some function space). Hence, if $f\in S$, then
$Tf\in S$. Then we know that the fixed point $V$, $TV = V$, will also be an
element of $S$.

• We can also use the Corollary to the Contraction Mapping Theorem to
further characterize $V$.

• Finally, note that $T$ defines a sequence of functions starting from any
guess $f_0(x)$, by $f_1 = (Tf_0)(x)$. Sometimes we will work directly on this
sequence to be able to say something about where these functions converge.
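This last bullet is also how such problems are solved numerically. As an illustration (not part of the original notes), here is a minimal sketch of successive approximation on a grid for the one-sector growth model with $U = \ln$, $f(k) = k^{\alpha}$ and full depreciation; the parameter values, grid, and tolerances are assumptions chosen for illustration:

```python
import numpy as np

# One-sector growth model: V(k) = max_{0<=k'<=k^a} { ln(k^a - k') + beta*V(k') }
alpha, beta = 0.3, 0.95
grid = np.linspace(1e-3, 0.5, 200)            # capital grid standing in for X

def T(v):
    """Bellman operator on the grid: (Tv)(k_i) = max over feasible k'_j."""
    c = grid[:, None] ** alpha - grid[None, :]      # c[i, j] = k_i^a - k'_j
    obj = np.where(c > 0, np.log(np.maximum(c, 1e-12)) + beta * v[None, :], -np.inf)
    return obj.max(axis=1)

# Successive approximation: f_0 = 0, f_{n+1} = T f_n, stop when the
# sup-norm distance (the metric rho) between iterates is tiny.
v = np.zeros_like(grid)
for n in range(1000):
    v_new = T(v)
    if np.max(np.abs(v_new - v)) < 1e-8:
        break
    v = v_new

# T shifts a constant function by beta times the constant, so the
# sup-norm distance between these two iterates shrinks by exactly beta.
v1, v2 = np.zeros_like(grid), np.ones_like(grid)
assert np.max(np.abs(T(v1) - T(v2))) <= beta * np.max(np.abs(v1 - v2)) + 1e-10
```

Because $T$ is a contraction with modulus $\beta$, the distance between successive iterates shrinks geometrically, which is what makes the stopping rule above valid.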
The following theorem establishes that the operator $T$ has a unique fixed point, and
that $G(x)$ is compact-valued and u.h.c.

Theorem 153 Under Assumptions 1 and 2 on $X$, $\Gamma(x)$, $F$, and $\beta$:
(i) $T : C(X) \to C(X)$.
(ii) $T$ has exactly one fixed point $V$ in $C(X)$.
(iii) For any $V_0\in C(X)$, $\rho(T^{n}V_0, V) \leq \beta^{n}\rho(V_0, V)$ for $n = 0, 1, 2, \ldots$
(iv) $G : X \to X$ is compact-valued and u.h.c.
Sketch of the proof: (i) We need to show that $T$ maps the set of bounded and
continuous functions into itself. Hence, if we take any function $f\in C(X)$ and
find $(Tf)$, then it must be the case that $(Tf)\in C(X)$. Suppose $f(\cdot)$ is bounded
and continuous. Then, the objective function $F(x, y) + \beta f(y)$ is continuous
and the feasibility set $\Gamma(x)$ is compact. Therefore, a maximum exists and the
operator is well defined.

Since $F(x, y) + \beta f(y)$ is bounded, $Tf$ is bounded as well. Finally, since
$F(x, y) + \beta f(y)$ is continuous and $\Gamma(x)$ is compact-valued and continuous, the
maximized value function $Tf$ is continuous by the Theorem of the Maximum. Hence,
$T : C(X) \to C(X)$.

We established that $V = TV$ is bounded and continuous. This is done by
showing that $T$ preserves the boundedness and continuity properties of its argument. This
approach is an important tool in dynamic programming problems.

(ii)–(iii) These two results follow directly from the Contraction Mapping Theorem,
which states that a contraction mapping $T$ on a complete metric space $S$
has exactly one fixed point in $S$ [SLP (1989), Theorem 3.2]. Note that the
Contraction Mapping Theorem has two ingredients: the metric space $S$ must
be complete, and the operator $T$ must be a contraction mapping. Since the set
$C(X)$ with the uniform metric $\rho(f, g) = \sup_{x\in X}|f(x) - g(x)|$ is a complete metric
space [SLP (1989), Theorem 3.1], and $T$ satisfies Blackwell's Sufficiency
Conditions for a contraction mapping [SLP (1989), Theorem 3.3], the Contraction
Mapping Theorem applies to problem (1).

(iv) Since we are maximizing a continuous function over a compact-valued and
continuous feasibility correspondence, the properties of $G(x)$ follow directly from
the Theorem of the Maximum.
Let's now go back to our example of the one-sector growth model:
$$V(x) = \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta V(y)\}.$$
In this problem:

• $X$ is convex. Since $X = [0, \bar{x}]$, where $\bar{x}$ is the highest maintainable capital
stock, $X$ is a convex subset of $\mathbb{R}$.

• $\Gamma(x)$ is compact-valued. Fix any $x\in X$. Since $f(0) = 0$ and $f(x)\leq\bar{x}$, $\Gamma(x)$
is bounded. $\Gamma(x) = [0, f(x)]$ is obviously closed. Hence, it is compact.

• $\Gamma(x)$ is continuous. It is bounded below and above by continuous functions,
therefore it is continuous.

• $F : A \to \mathbb{R}$ is bounded. This follows from $U(0) \leq U(f(x) - y) \leq U(f(x))$.
$F$ is also continuous since $U$ and $f$ are continuous.

• $\beta\in(0, 1)$.

Hence this problem satisfies the conditions in Assumption 1 and Assumption 2.

Does $T$ defined by
$$(TV)(x) = \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta V(y)\}$$
satisfy Blackwell's sufficiency conditions?
• Monotonicity. Let $W(y) \leq \widehat{W}(y)$. Then
$$(TW)(x) = \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta W(y)\} \leq \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta\widehat{W}(y)\} = (T\widehat{W})(x).$$

• Discounting. Given $W(y) + a$ with $a \geq 0$,
$$(T(W + a))(x) = \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta(W(y) + a)\} = \max_{0\leq y\leq f(x)}\{U[f(x) - y] + \beta W(y)\} + \beta a = (TW)(x) + \beta a.$$
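For the discretized Bellman operator these two properties can also be checked mechanically. A small sketch (illustrative only; log utility, $f(k)=k^{\alpha}$, grid and parameter values are assumptions):

```python
import numpy as np

alpha, beta = 0.3, 0.95
grid = np.linspace(1e-3, 0.5, 100)

def T(v):
    # (Tv)(k) = max_{0 <= k' <= k^alpha} { ln(k^alpha - k') + beta*v(k') }
    c = grid[:, None] ** alpha - grid[None, :]
    obj = np.where(c > 0, np.log(np.maximum(c, 1e-12)) + beta * v[None, :], -np.inf)
    return obj.max(axis=1)

rng = np.random.default_rng(0)
W = rng.normal(size=grid.size)
W_hat = W + rng.uniform(0.0, 1.0, size=grid.size)   # W <= W_hat pointwise

# Monotonicity: W <= W_hat  =>  TW <= TW_hat
assert np.all(T(W) <= T(W_hat) + 1e-12)

# Discounting: T(W + a) = TW + beta*a for a constant a >= 0
a = 2.0
assert np.allclose(T(W + a), T(W) + beta * a)
```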
In order to characterize $V$ and $G$ further, we need more assumptions on $X$,
$\Gamma(x)$, $F$, and $\beta$.

Assumption 3: For each $y$, $F(\cdot, y)$ is strictly increasing. [$F$ is strictly increasing in
the current state.]

Assumption 4: $\Gamma$ is monotone: $x \leq x'$ implies $\Gamma(x) \subseteq \Gamma(x')$. [The feasibility set is
expanding in the current state.]

Theorem 154 Under Assumptions 1–4 on $X$, $\Gamma(x)$, $F$, and $\beta$, $V$ is strictly increasing.
Sketch of the proof: This result follows directly from the Corollary to
the Contraction Mapping Theorem [Stokey and Lucas (1989), Theorem 3.2,
Corollary 1], which states that if $S'$ is a closed subset of $S$ and $T(S') \subseteq S'$, then
$V\in S'$; if in addition $T(S') \subseteq S'' \subseteq S'$, then $V\in S''$.

Let $S'$ be the set of all bounded and continuous functions, and $S''$ be the
set of all bounded, continuous, and strictly increasing functions. Then, in order
to apply the Corollary we need to show that $T$ maps bounded and continuous
functions into strictly increasing functions.
To do this, suppose $V(\cdot)$ is nondecreasing, and use the assumptions on
$F$ and $\Gamma$ to show that $TV$ is strictly increasing. Let $x' > x$; we want to show that
$TV(x') > TV(x)$. Let $y^{*}$ be the maximizer when the current state is $x$, i.e.
$$(TV)(x) = \max_{y\in\Gamma(x)}\left[F(x, y) + \beta V(y)\right] = F(x, y^{*}) + \beta V(y^{*}).$$
Since $x' > x$ and $F(\cdot, y)$ is strictly increasing, we have
$$(TV)(x) = F(x, y^{*}) + \beta V(y^{*}) < F(x', y^{*}) + \beta V(y^{*}),$$
and since $y^{*}$ is feasible when the current state is $x'$ (by the monotonicity of $\Gamma$), we have
$$(TV)(x) < F(x', y^{*}) + \beta V(y^{*}) \leq \max_{y\in\Gamma(x')}\left[F(x', y) + \beta V(y)\right] = (TV)(x').$$
Again, let's go back to our example. We have:

• $F(\cdot, y)$ is strictly increasing. Note that $U$ and $f$ are strictly increasing.
Hence, $x' > x$ implies $U(f(x') - y) > U(f(x) - y)$.

• $\Gamma$ is monotone. Since $f$ is strictly increasing, $x \leq x'$ implies $[0, f(x)] \subseteq [0, f(x')]$, or $\Gamma(x) \subseteq \Gamma(x')$.

Therefore this example satisfies Assumption 3 and Assumption 4. Hence,
the solution must be strictly increasing.
We would also like to establish some concavity properties for $V$. In order to
do this we need further assumptions on $F$ and $\Gamma$.

Assumption 5: $F$ is strictly concave:
$$F(\theta(x, y) + (1 - \theta)(x', y')) \geq \theta F(x, y) + (1 - \theta)F(x', y'),$$
for all $(x, y), (x', y')\in A$ and for all $\theta\in(0, 1)$, with the inequality strict if
$x \neq x'$.

Assumption 6: $\Gamma$ is convex: for all $\theta\in[0, 1]$ and for all $x, x'\in X$,
$$y\in\Gamma(x) \text{ and } y'\in\Gamma(x') \implies \theta y + (1 - \theta)y' \in \Gamma(\theta x + (1 - \theta)x').$$
Theorem 155 Under Assumptions 1–2 and 5–6 on $X$, $\Gamma(x)$, $F$, and $\beta$, $V$
is strictly concave and $G$ is a continuous, single-valued function.

Sketch of the proof: The fact that $V$ is strictly concave follows from the
Corollary to the Contraction Mapping Theorem. The arguments are similar to
those in the previous theorem, with $S'$ being the set of all bounded, continuous, and
weakly concave functions and $S''$ being the set of all bounded, continuous, and
strictly concave functions. The fact that $G(x)$ is a single-valued function, call it
$g$, follows from the Theorem of the Maximum under the additional assumptions of
strict concavity of the objective function and convexity of the feasibility set.
Again, we want to show that $TV(x)$ is strictly concave, i.e. if $x^{\theta} = \theta x + (1 - \theta)x'$
with $x \neq x'$ and $\theta\in(0, 1)$, then
$$TV(x^{\theta}) > \theta(TV)(x) + (1 - \theta)(TV)(x').$$
Let $y^{*}$ be the maximizer for $x$ and $y^{**}$ be the maximizer for $x'$. Then we have
$$\theta(TV)(x) + (1 - \theta)(TV)(x') = \theta F(x, y^{*}) + (1 - \theta)F(x', y^{**}) + \beta\left[\theta V(y^{*}) + (1 - \theta)V(y^{**})\right]$$
$$< F(x^{\theta}, \theta y^{*} + (1 - \theta)y^{**}) + \beta V(\theta y^{*} + (1 - \theta)y^{**})$$
$$\leq \max_{y\in\Gamma(x^{\theta})}\left[F(x^{\theta}, y) + \beta V(y)\right] = TV(x^{\theta}),$$
where the strict inequality follows from the strict concavity of $F$ and the weak concavity of
$V$, and the last step follows since $y^{\theta} = \theta y^{*} + (1 - \theta)y^{**}$ is feasible given $x^{\theta}$ (by the
convexity of $\Gamma$).
Again, for our example:

• $F$ is strictly concave. To see this, let $x, x'\in[0, \bar{x}]$, $y\in\Gamma(x)$,
and $y'\in\Gamma(x')$. Let $x^{\theta} = \theta x + (1 - \theta)x'\in[0, \bar{x}]$ and $y^{\theta} = \theta y + (1 - \theta)y'$.
Since $f$ is concave and $U$ is strictly concave,
$$F[\theta(x, y) + (1 - \theta)(x', y')] = U[f(\theta x + (1 - \theta)x') - (\theta y + (1 - \theta)y')]$$
$$\geq U[\theta f(x) + (1 - \theta)f(x') - (\theta y + (1 - \theta)y')]$$
$$= U[\theta(f(x) - y) + (1 - \theta)(f(x') - y')]$$
$$> \theta U[f(x) - y] + (1 - \theta)U[f(x') - y'].$$

• $\Gamma$ is convex. To see this, let $x, x'\in[0, \bar{x}]$, $y\in\Gamma(x)$, and
$y'\in\Gamma(x')$. Hence, $y \leq f(x)$ and $y' \leq f(x')$. Let $x^{\theta} = \theta x + (1 - \theta)x'\in[0, \bar{x}]$
and $y^{\theta} = \theta y + (1 - \theta)y'$. Note that
$$f(x^{\theta}) = f(\theta x + (1 - \theta)x') \geq \theta f(x) + (1 - \theta)f(x') \geq \theta y + (1 - \theta)y' = y^{\theta}$$
$$\implies y^{\theta}\in[0, f(x^{\theta})] = \Gamma(x^{\theta}).$$
Now we also know that the solution to the one-sector growth problem is a strictly
concave function $V$.

The next theorem shows that under concavity restrictions, the policy functions
associated with the operator $T$ converge uniformly. Hence, if the $g_n(x)$ have
some nice properties that are preserved under uniform convergence, then $g(x)$
has those nice properties as well.

Theorem 156 Under Assumptions 1–2 and 5–6, if $V_0\in C(X)$ and $\{V_n, g_n\}$
are defined by
$$V_{n+1} = TV_n, \quad n = 0, 1, 2, \ldots$$
and
$$g_n(x) = \arg\max_{y\in\Gamma(x)}\left[F(x, y) + \beta V_n(y)\right], \quad n = 0, 1, 2, \ldots,$$
then $g_n \to g$ pointwise; if $X$ is compact, then the convergence is uniform.
The final property of the value function that we are interested in is its
differentiability. In order to establish it we need one more restriction on $F$.

Assumption 7: $F$ is continuously differentiable on the interior of $A$.

The following result is key to establishing the differentiability of $V$.

Theorem 157 (Benveniste and Scheinkman) Let $X \subseteq \mathbb{R}^{l}$ be a convex set, and $V :
X \to \mathbb{R}$ be a concave function. Let $x_0\in\mathrm{int}\,X$ and let $D$ be a neighborhood of $x_0$.
If there is a concave, differentiable function $W : D \to \mathbb{R}$ with $W(x_0) = V(x_0)$
and $W(x) \leq V(x)$ for all $x\in D$, then $V$ is differentiable at $x_0$ and
$$V_i(x_0) = W_i(x_0), \quad \text{for } i = 1, 2, \ldots, l.$$

Proof: See Figure 19. This theorem states that if you can find a function $W$
for which $V$ is like an envelope, then if $W$ is differentiable at $x_0$, so is $V$, and
their derivatives are identical.
The following theorem shows that for dynamic programming problems such
a function is readily available.

Theorem 158 Under Assumptions 1–2 and 5–7, if $x_0\in\mathrm{int}\,X$ and $g(x_0)\in
\mathrm{int}\,\Gamma(x_0)$, then $V$ is continuously differentiable at $x_0$, with
$$V_i(x_0) = F_i(x_0, g(x_0)),$$
where $V_i$ and $F_i$ are the derivatives with respect to the $i$th argument.

Sketch of the proof: The proof follows from the Theorem of Benveniste
and Scheinkman [Stokey and Lucas (1989), Theorem 4.10]. The basic idea is
that for any given $x_0$, and any neighborhood $D$ of $x_0$, the function $W : D \to \mathbb{R}$
given by
$$W(x) = F(x, g(x_0)) + \beta V(g(x_0))$$

[Figure 19: Benveniste and Scheinkman Theorem]

satisfies the conditions of the Theorem of Benveniste and Scheinkman. Note
that $W$ is concave. Also,
$$W(x_0) = V(x_0) = F(x_0, g(x_0)) + \beta V(g(x_0)),$$
since for $x_0$, $g(x_0)$ is the optimal policy. Finally,
$$W(x) = F(x, g(x_0)) + \beta V(g(x_0)) \leq V(x) = \max_{y}\left[F(x, y) + \beta V(y)\right].$$
One more time, let's go back to our example. We know that $U(f(x) - y)$ is a
differentiable function. Hence the solution $V$ is differentiable. Remember that
for the special case of
$$f(k) = k^{\alpha} \quad \text{and} \quad u(k^{\alpha} - k') = \ln(k^{\alpha} - k'),$$
we had shown (using a guess-and-verify approach) that the value function takes
the form
$$V(k) = A + B\ln(k),$$
where $A$ and $B$ are constants. Not surprisingly, $V(k) = A + B\ln(k)$ is strictly
increasing, strictly concave, and differentiable. Note, however, that $u(c) = \ln(c)$
is unbounded. Even if we restrict our attention to $X = (0, \bar{k}]$, it is still unbounded
below. Hence, in order to be able to apply these results we must restrict attention
to $X = [\underline{k}, \bar{k}]$. We will later see that for this problem, even when $X = (0, \infty)$,
the solution $V$ is still well defined.
Note that for any dynamic programming problem
$$V(x) = \max_{y\in\Gamma(x)}\left[F(x, y) + \beta V(y)\right]$$
that satisfies Assumptions 1–7, we can write the following FOC for $y$:
$$F_y(x, y) + \beta\frac{\partial V(y)}{\partial y} = 0,$$
where $F_y(x, y)$ denotes the derivative of the return function with respect to $y$
and $\frac{\partial V(y)}{\partial y}$ is the derivative of the value function with respect to $y$. We
can also write the following envelope condition:
$$\frac{\partial V(x)}{\partial x} = F_x(x, y).$$
Applying the envelope condition one period ahead, at the optimal choice $y = g(x)$, and combining the two equations, we arrive at
$$F_y(x, g(x)) + \beta F_x(g(x), g(g(x))) = 0.$$
This is the Euler equation.
Remark 159 Remember that when we solved this problem using a Lagrangian
approach, we got the same Euler equation.

Again, for our one-sector growth model the FOC for $y$ is
$$U'(f(x) - y)(-1) + \beta V'(y) = 0,$$
and the envelope condition is
$$V'(x) = U'(f(x) - y)f'(x).$$
Then, the Euler equation is
$$U'(f(x) - g(x))(-1) + \beta U'(f(g(x)) - g(g(x)))f'(g(x)) = 0,$$
or
$$U'(f(x) - g(x)) = \beta U'(f(g(x)) - g(g(x)))f'(g(x)).$$
Again, we could use $(k, k')$ notation to write
$$U'(f(k) - k') = \beta U'(f(k') - k'')f'(k'),$$
or
$$U'(f(k_t) - k_{t+1}) = \beta U'(f(k_{t+1}) - k_{t+2})f'(k_{t+1}).$$
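For the log-utility, full-depreciation special case recalled above ($f(k) = k^{\alpha}$, $U = \ln$), the guess-and-verify solution delivers the policy $g(k) = \alpha\beta k^{\alpha}$ (this closed form is assumed here, not derived in this section). It is easy to confirm numerically that this policy satisfies the Euler equation:

```python
import numpy as np

alpha, beta = 0.3, 0.95
f = lambda k: k ** alpha
f_prime = lambda k: alpha * k ** (alpha - 1)
u_prime = lambda c: 1.0 / c                  # U(c) = ln(c)
g = lambda k: alpha * beta * k ** alpha      # candidate policy k' = g(k)

for k in np.linspace(0.05, 0.5, 10):
    kp, kpp = g(k), g(g(k))
    lhs = u_prime(f(k) - kp)                 # U'(f(k_t) - k_{t+1})
    rhs = beta * u_prime(f(kp) - kpp) * f_prime(kp)
    assert abs(lhs - rhs) < 1e-9 * max(abs(lhs), 1.0)
```

Analytically, both sides reduce to $1/[(1-\alpha\beta)k^{\alpha}]$, which is why the residual is zero up to rounding.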
We will next go through several examples to illustrate:

• How we can write any SP as a dynamic programming (DP) problem.

• The properties of contraction mappings.

• Why a DP approach provides us with a powerful tool to characterize
solutions to SP problems.

Before going into the examples, however, we will first establish a result that will
be useful when the boundedness condition fails.
7.2 Unbounded Returns

Remember that we established the principle of optimality with two theorems. We
showed that under

• A1: $\Gamma(x) \neq \emptyset$ for all $x\in X$,

and

• A2: $\forall x_0\in X$ and $\tilde{x}\in\Pi(x_0)$, $\lim_{n\to\infty}\sum_{t=0}^{n}\beta^{t}F(x_t, x_{t+1})$ exists (may
be $+\infty$ or $-\infty$),

the supremum function $V^{*}$ satisfies the FE, and if a function satisfies the
FE and is bounded, i.e. if
$$\lim_{n\to\infty}\beta^{n}V(x_n) = 0, \quad \text{for all } (x_0, x_1, \ldots)\in\Pi(x_0) \text{ and for all } x_0\in X, \tag{B}$$
then $V$ is the supremum function.
It turns out that even if (B) fails, under certain conditions we can come up
with sufficient conditions such that any $V$ that solves the FE is the supremum
function. The basic idea is that even if $F$ is unbounded on $X$, we might still be
able to come up with a function $\hat{V}$ that provides an upper bound for $V^{*}$.

Theorem 160 Suppose $X$, $\Gamma$, $F$, and $\beta$ satisfy A1 and A2, and let $\Pi(x_0)$, $u(\tilde{x})$,
and $V^{*}$ be as defined before. Suppose there is a function $\hat{V} : X \to \mathbb{R}$ such that
(1) $T\hat{V} \leq \hat{V}$,
(2) $\lim_{n\to\infty}\beta^{n}\hat{V}(x_n) \leq 0$, for all $x_0\in X$ and $\tilde{x}\in\Pi(x_0)$,
(3) $u(\tilde{x}) \leq \hat{V}(x_0)$, for all $x_0\in X$ and $\tilde{x}\in\Pi(x_0)$.
If the function $V : X \to \mathbb{R}$ defined by
$$V(x) = \lim_{n\to\infty}(T^{n}\hat{V})(x)$$
is a fixed point of $T$, then $V = V^{*}$.

Proof: See SLP, page 93.
This theorem simply states that if you can come up with a function that is
an upper bound on $V^{*}$, then we can work our way down from $\hat{V}$ to $V$, and $V = V^{*}$.

Example 161 Consider the following SP:
$$\max_{\{k_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty}\beta^{t}\ln(k_t^{\alpha} - k_{t+1})$$
s.t.
$$0 \leq k_{t+1} \leq k_t^{\alpha}.$$
Let $X = (0, \infty)$. Then the return function is $F(x, y) = F(k_t, k_{t+1}) = \ln(k_t^{\alpha} - k_{t+1})$.
It turns out that the function
$$\hat{V}(k) = \frac{\alpha\ln(k)}{1 - \alpha\beta}$$
satisfies all of the conditions of the previous theorem. Not surprisingly, the
fixed point $V$ is what we find using a guess-and-verify method (see SLP page 95).
Where does $\hat{V}(k)$ come from? It is an upper bound on the utility available from any
feasible plan: since $k_{t+1} \leq k_t^{\alpha}$, the most we can consume at date $t$ is
$k_t^{\alpha} \leq k_0^{\alpha^{t+1}}$, so (for $k_0 \geq 1$) period-$t$ utility is at most $\alpha^{t+1}\ln k_0$, and
discounted lifetime utility is at most
$$\sum_{t=0}^{\infty}\beta^{t}\alpha^{t+1}\ln k_0 = \alpha\ln k_0\left(1 + \alpha\beta + (\alpha\beta)^{2} + \cdots\right) = \frac{\alpha\ln(k_0)}{1 - \alpha\beta} = \hat{V}(k_0).$$
In many economic applications it is not hard to find a function like $\hat{V}$, even if
$F$ is not bounded.
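A numerical sketch (grid and parameter values assumed) of the theorem at work in this example: starting the iteration at $\hat{V}$, each application of $T$ moves the function down, and the limit is the guess-and-verify solution $V(k) = A + B\ln k$:

```python
import numpy as np

# One-sector growth with U = ln, f(k) = k^alpha, full depreciation.
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)
V_hat = alpha * np.log(grid) / (1 - alpha * beta)   # the upper bound from the text

def T(v):
    c = grid[:, None] ** alpha - grid[None, :]
    obj = np.where(c > 0, np.log(np.maximum(c, 1e-12)) + beta * v[None, :], -np.inf)
    return obj.max(axis=1)

# Condition (1) gives T(V_hat) <= V_hat, so by monotonicity of T the
# iterates T^n(V_hat) decrease monotonically toward the fixed point.
v = V_hat.copy()
for _ in range(600):
    v_next = T(v)
    assert np.all(v_next <= v + 1e-10)
    v = v_next

# The limit matches the closed-form V(k) = A + B ln k, with
# B = alpha/(1 - alpha*beta) and A = [ln(1-ab) + (ab/(1-ab)) ln(ab)]/(1 - beta).
ab = alpha * beta
A = (np.log(1 - ab) + ab / (1 - ab) * np.log(ab)) / (1 - beta)
B = alpha / (1 - alpha * beta)
assert np.max(np.abs(v - (A + B * np.log(grid)))) < 1e-2
```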
8 Examples and Exercises

Exercise 162 Consider the following problem: choose a sequence $\{c_t, k_{t+1}\}$ for
$t \geq 0$ to maximize $\sum_{t=0}^{\infty}\beta^{t}u(c_t)$ subject to
$$c_t + k_{t+1} \leq k_t,$$
and $k_0 > 0$ given. The function $u$ maps $\mathbb{R}_{+}$ to $\mathbb{R}$ and is strictly increasing
and continuous, while $\beta\in(0, 1)$. Let $X$ be the state space, let $F : X\times X \to \mathbb{R}$
be the return function, and let $\Gamma : X \to X$ be the feasible set. Specify $X$, $F$, and
$\Gamma$ for the above problem.
Exercise 163 Consider the following problem: choose a sequence $\{c_t, k_{t+1}, n_t\}$
for $t \geq 0$ to maximize $\sum_{t=0}^{\infty}\beta^{t}u(c_t, 1 - n_t)$ subject to
$$c_t + k_{t+1} \leq f(k_t, n_t) + (1 - \delta)k_t, \quad 0 \leq n_t \leq 1,$$
and $k_0 > 0$ given. The functions $u$ and $f$ map $\mathbb{R}_{+}\times\mathbb{R}_{+}$ to $\mathbb{R}_{+}$ and are strictly
increasing and continuous, while $\beta\in(0, 1)$ and $\delta\in[0, 1]$. Let $X$ be the state
space, let $F : X\times X \to \mathbb{R}$ be the return function, and let $\Gamma : X \to X$ be the
feasible set. Specify $X$, $F$, and $\Gamma$ for the above problem.
Exercise 164 Consider the following problem: choose a sequence $\{c_t, n_{1t}, n_{2t}, k_{t+1}\}$
for $t \geq 0$ to maximize $\sum_{t=0}^{\infty}\beta^{t}U(c_t, 1 - n_{1t} - n_{2t})$ subject to
$$c_t = f(k_t, n_{1t}) \quad \text{and} \quad k_{t+1} = n_{2t},$$
and $k_0 > 0$ given. The functions $U$ and $f$ map $\mathbb{R}^{2}_{+}$ to $\mathbb{R}_{+}$ and both are strictly
increasing, continuous, and strictly concave, while $\beta\in(0, 1)$. Let $X$ be the state
space, let $F : X\times X \to \mathbb{R}$ be the return function, and let $\Gamma : X \to X$ be the
feasible set. Specify these objects for the above problem.
Exercise 165 Consider the following model. There is one person, two inputs
(capital and labor) and two outputs (a consumption good and an investment
good). The person has preferences given by $\sum_{t=0}^{\infty}\beta^{t}u(c_t)$, where $u : \mathbb{R}_{+}\to\mathbb{R}$
and $c_t$ is date-$t$ output of the consumption good. The production function for
output of the date-$t$ consumption good is $G^{1}(k_{1}, n_{1})$ and that for output of the
investment good is $G^{2}(k_{2}, n_{2})$, where $k_{1} + k_{2} \leq k_t$ and $n_{1} + n_{2} \leq 1$. At
date 0, $k_0$ is given, and $k_{t+1} = (1 - \delta)k_t + G^{2}(k_{2}, n_{2})$, where $\delta\in(0, 1)$. Aside
from the discount factor, the components of a sequential problem as defined in
Stokey, Lucas, with Prescott are as follows: $X$ (a set in which the state variable
lies), $\Gamma : X \to X$ (a constraint set), and $F : X\times X \to \mathbb{R}$ (a return function).
Describe $X$, $\Gamma$, and $F$ for the above model.
Exercise 166 Let $X = \mathbb{R}_{+}$, $F(x, y) = 0$ for all $(x, y)$, and $\Gamma(x) = [0, x/\beta]$,
where $\beta\in(0, 1)$ is the discount factor. Here $X$ is the state space, $F$ is
the return function, and $\Gamma$ is the feasible set. (a) Show that $v(x) = x$ satisfies
the associated functional equation. (b) Is $v(x) = x$ the maximized value of the
objective in the corresponding sequential problem? Explain.
Exercise 167 Consider the following economy, populated by a continuum of
identical agents. Time is discrete and the horizon is infinite. Each individual
owns a set of trees at time 0, denoted by $s_0$. In each period $t$, the individual has
to chop some trees to produce (and consume) fruits. The technology to produce
fruits is denoted by
$$y_t = f(\theta_t, n_t),$$
where $\theta_t$ is the amount of trees cut in period $t$, and $n_t$ is the labor used to
cut trees. Each individual is endowed with 1 unit of time each period. Agents
decide how much to work and how many trees to cut in each period in order to
maximize
$$\sum_{t=0}^{\infty}\beta^{t}u(c_t), \quad \text{with } \beta\in(0, 1),$$
where $c_t$ is consumption of fruits at time $t$.
Example 168 In Stokey, Lucas and Prescott (1989), a dynamic programming
problem is defined by $\beta$ (the discount factor), $X$ (the set in which the state
variable lies), $\Gamma : X \to X$ (the feasibility correspondence), and $F : X\times X \to \mathbb{R}$ (the
return function). Describe $X$, $\Gamma$, and $F$ for this model.

• Set up the dynamic programming problem faced by a representative agent.
Write down the first order condition for the agent's DP problem and interpret it.

• Let
$$f(\theta_t, n_t) = \theta_t^{\alpha}n_t^{1-\alpha}, \quad \text{with } \alpha\in(0, 1),$$
and
$$u(c_t) = \frac{c_t^{1-\sigma}}{1 - \sigma}, \quad \text{with } \sigma\in(0, 1).$$

• Show that the sequence of chopped trees will follow the equation
$$\theta_{t+1} = \beta^{\frac{1}{1-\alpha+\alpha\sigma}}\theta_t.$$
Argue that the relationship between the initial set of trees $s_0$ and the sequence
of chopped trees $\{\theta_t\}_{t=0}^{\infty}$ can be used to pin down $\theta_0$, and hence to
define the whole sequence $\{\theta_t\}_{t=0}^{\infty}$.
Exercise 169 This problem tries to get you acquainted with the Principle of
Optimality (i.e. Theorems 4.2 and 4.3 in Stokey, Lucas and Prescott 1989).
Consider the consumption–saving decision of an infinitely lived agent with initial
assets $x_0\in X = \mathbb{R}$. The agent can borrow or lend at rate $1 + r = R = \frac{1}{\beta} > 1$.
Hence, the price of borrowing one unit for tomorrow is $\beta$. There are no borrowing
constraints, i.e.
$$c_t + \beta x_{t+1} \leq x_t.$$
Hence, the agent's problem is
$$V^{*}(x_0) = \sup_{\{c_t, x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty}\beta^{t}c_t$$
s.t.
$$0 \leq c_t \leq x_t - \beta x_{t+1},$$
$$x_0 \text{ given}.$$
What is the value of $V^{*}(x_0)$? Consider the following FE that corresponds to this
problem:
$$V(x) = \sup_{y\in\mathbb{R}}\left[x - \beta y + \beta V(y)\right].$$
Show that both $V(x) = \infty$ and $\bar{V}(x) = x$ are solutions to the FE. Does $\bar{V}$
satisfy the boundedness condition in Theorem 4.3 (page 72 in SLP)? Remember that
when we solved a SP using a Lagrangian approach, we needed the Transversality
Condition (TC). Hence, if a sequence of actions satisfies the Euler equation, it
is the optimal solution if it also satisfies the TC. Argue (with some carefully
selected sentences) that the TC and the boundedness condition in Theorem 4.3 (page
72 in SLP) serve the same purpose.
Example 170 Consider the standard one-sector growth model,
$$V(k) = \max_{k'\geq 0}\{U(f(k) - k') + \beta V(k')\}.$$
By using FOCs for this problem, show that if $V$ is strictly concave, then $k' = g(k)$ is
increasing in $k$. Remember that the FOC for $k'$ is
$$U'(f(k) - k')(-1) + \beta V'(k') = 0.$$
Note that this equation characterizes $k'$ given $V'$. Hence, we can use the Implicit
Function Theorem and define
$$H(k', k) = U'(f(k) - k')(-1) + \beta V'(k').$$
Then,
$$g'(k) = \frac{dk'}{dk} = -\frac{H_k}{H_{k'}} = -\frac{U''(f(k) - k')f'(k)(-1)}{U''(f(k) - k')(-1)(-1) + \beta V''(k')} > 0,$$
since $U'' < 0$ and $V'' < 0$. Using FOCs and the Implicit Function
Theorem to characterize $g$ is a powerful tool that is often used. Note that often
we know whether $V$ is strictly increasing or not, or concave or not.
Example 171 Consider the following version of the one-sector growth model.
Again, each agent starts his/her life with $k_0$ and lives forever. Agents have to
decide each period how much to work (hence labor supply is endogenous) and
how much to save. Each agent has one unit of time. Let $n_t$ denote the labor
supply. Then, the production function is now
$$f(k_t, n_t) = F(k_t, n_t) + (1 - \delta)k_t,$$
and the utility function is
$$U(c_t, 1 - n_t).$$
Then, the agent's problem is
$$\max_{\{k_{t+1}, n_t\}_{t=0}^{\infty}} \sum_{t=0}^{\infty}\beta^{t}U(f(k_t, n_t) - k_{t+1}, 1 - n_t).$$
Let's write down the DP problem facing an agent. Note that the capital stock is
still the only state variable in this economy (i.e. at each date agents only need
to know $k_t$ in order to be able to make decisions on $n_t$ and $k_{t+1}$). Then,
$$V(k_t) = \max_{k_{t+1}, n_t}\left[U(f(k_t, n_t) - k_{t+1}, 1 - n_t) + \beta V(k_{t+1})\right].$$
We now have two FOCs. One is for $k_{t+1}$:
$$U_1(f(k_t, n_t) - k_{t+1}, 1 - n_t)(-1) + \beta V'(k_{t+1}) = 0,$$
and one is for $n_t$:
$$U_1(f(k_t, n_t) - k_{t+1}, 1 - n_t)f_2(k_t, n_t) + U_2(f(k_t, n_t) - k_{t+1}, 1 - n_t)(-1) = 0,$$
where $U_i$ and $f_i$ indicate the derivative w.r.t. the $i$th argument. The FOC for $n_t$
does not involve $V$. We will call it a static FOC. Note that it simply states that
$$\underbrace{U_1(f(k_t, n_t) - k_{t+1}, 1 - n_t)f_2(k_t, n_t)}_{\text{marginal benefit of work}} = \underbrace{U_2(f(k_t, n_t) - k_{t+1}, 1 - n_t)}_{\text{marginal cost of work}}.$$
When we have a static FOC, we can always solve it first and then focus on the
dynamic FOC. In this example, we can use the FOC for $n_t$ to find the optimal
work decision
$$n_t = N(k_t, k_{t+1}),$$
and then substitute it into the FOC for $k_{t+1}$:
$$U_1(f(k_t, N(k_t, k_{t+1})) - k_{t+1}, 1 - N(k_t, k_{t+1}))(-1) + \beta V'(k_{t+1}) = 0.$$
Note that now everything in this equation is in terms of $k_t$ and $k_{t+1}$.
Example 172 Consider the stochastic case of the standard one-sector growth
model. Let output be given by $f(k, z)$, where $z$ is a stochastic technology shock:
$$f(k, z) = zF(k, 1) + (1 - \delta)k.$$
A representative agent now sees $z_t$, and then makes a decision on $k_{t+1}$. Hence,
a representative agent needs to know both $k_t$ and $z_t$ in order to be able to decide
on $k_{t+1}$. Then, the SP will be
$$\max_{\{k_{t+1}\}_{t=0}^{\infty}} E_0\left[\sum_{t=0}^{\infty}\beta^{t}U(f(k_t, z_t) - k_{t+1})\right].$$
The DP problem is then
$$V(k, z) = \max_{0\leq k'\leq f(k, z)}\{U[f(k, z) - k'] + \beta E[V(k', z')]\},$$
where $z'$ is the value of next period's shock, which is unknown in the current period
when $k'$ is chosen, and $E(\cdot)$ is the expected value operator. Hence, we have two
new features in the stochastic case:

• First, the state of the problem now consists of both the current capital stock
$k$ and the current shock $z$. Therefore, in order to be able to analyze how the
solution of this problem behaves over time, we need to keep track of both
$k$ and $z$.

• Second, we need to characterize what we mean by the expression $E(\cdot)$.
Example 173 Consider again the stochastic version of the one-sector growth
model with δ = 1. Hence,

f(k, z) = z f(k), with f(k) = F(k, 1).

Furthermore, suppose z can only take values from a finite set Z = {z_1, z_2, ..., z_m}.
In this case a probability distribution over Z is simply an assignment of probabilities
(π_1, π_2, ..., π_m) to each element of Z. Since the π_i's are probabilities,

π_i = Pr(z = z_i),   π_i ≥ 0 for all i,   and   Σ_{i=1}^m π_i = 1.

We can then write the DP problem as

V(k, z_j) = max_{0 ≤ k' ≤ f(k) z_j} { U[f(k) z_j - k'] + β Σ_{i=1}^m V(k', z_i) π_i },   for each j.
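The Bellman equation above can be solved by iterating on it directly. The sketch below is a minimal value function iteration with illustrative assumptions (log utility, f(k) = k^α, two equally likely shocks); none of these functional forms or numbers come from the notes.

```python
import math

# Value function iteration for the iid-shock Bellman equation.
# Assumed primitives (illustrative only): U(c) = ln c, f(k) = k**alpha.
alpha, beta = 0.3, 0.9
Z = [0.9, 1.1]                                   # finite shock set {z_1, z_2}
pi = [0.5, 0.5]                                  # iid probabilities, sum to 1
grid = [0.05 + 0.01 * i for i in range(30)]      # capital grid, 0.05 .. 0.34

def f(k):
    return k ** alpha

# V[a][j] approximates V(grid[a], Z[j])
V = [[0.0, 0.0] for _ in grid]
for _ in range(300):
    # with iid shocks, E[V(k', z')] does not depend on today's z
    EV = [pi[0] * V[a][0] + pi[1] * V[a][1] for a in range(len(grid))]
    V = [[max(math.log(f(k) * z - kp) + beta * EV[a]
              for a, kp in enumerate(grid) if f(k) * z - kp > 0)
          for z in Z]
         for k in grid]
```

After 300 iterations the contraction factor β^300 is negligible, so V is an accurate fixed point on the grid; the approximate V inherits the expected monotonicity in both k and z.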
Example 174 Now let z take values from a finite set Z = {z_1, z_2, ..., z_m},
but follow a Markov process. Let π_{ij} denote the probability that next period's
shock will be z_j given that today's shock is z_i. The matrix formed by π_{ij} for all
i, j is called a transition matrix:

Π = [ π_{11}  π_{12}  ...  π_{1m} ]
    [ π_{21}  π_{22}  ...  π_{2m} ]
    [   ⋮       ⋮            ⋮   ]
    [ π_{m1}  π_{m2}  ...  π_{mm} ]

The defining properties of a transition matrix are:
(i) π_{ij} ≥ 0 for all i, j, and (ii) Σ_j π_{ij} = 1 for all i.

Then, the DP problem is

V(x, z_i) = max_{y ∈ Γ(x, z_i)} { F(x, y, z_i) + β Σ_{j=1}^m V(y, z_j) π_{ij} }.
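The conditional expectation inside this Bellman equation is just a row of the transition matrix times the vector of continuation values. A minimal sketch (the transition matrix below is made up for illustration, not taken from the notes):

```python
# E[V(k', z') | z = z_i] = sum_j pi[i][j] * V(k', z_j).
# Illustrative 2-state transition matrix: pi[i][j] = Pr(z' = z_j | z = z_i).
pi = [[0.8, 0.2],
      [0.3, 0.7]]

# defining properties of a transition matrix
assert all(p >= 0 for row in pi for p in row)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in pi)

def expected_value(V_next, i):
    """E[V(k', z') | z_i], where V_next[j] = V(k', z_j)."""
    return sum(pi[i][j] * V_next[j] for j in range(len(V_next)))

ev0 = expected_value([1.0, 2.0], 0)   # row 0: 0.8*1.0 + 0.2*2.0
```

Note that, unlike the iid case of Example 173, the expectation now depends on today's state i, so the conditioning index cannot be dropped.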
Example 175 Finally suppose Z = [z̲, z̄]. Then we have to define a distribution
function for z. Suppose z is iid with

H(a) = prob(z ≤ a).

Then, we can write the DP problem as

V(k, z) = max_{0 ≤ k' ≤ f(k,z)} { U[f(k, z) - k'] + β ∫_{z̲}^{z̄} V(k', z') dH(z') }.

On the other hand, if the future distribution of z depends on its current realization,
i.e. if

H(a | z) = prob(z' ≤ a | z),

we will write

V(k, z) = max_{0 ≤ k' ≤ f(k,z)} { U[f(k, z) - k'] + β ∫_{z̲}^{z̄} V(k', z') dH(z' | z) }.
Exercise 176 Consider the following functional equation,

(Tu)(x) = β ∫ u(x') dF(x', x) + g(x).

a) Let C(X) be the set of all bounded and continuous functions on X ⊆ R^l.
Under what conditions on F(·, x) and g(x) is T : C(X) → C(X)?
b) Show that T is a contraction mapping.

Exercise 177 Consider the following functional equation

(TW)(k) = max { Ω(k), βW(f(k)) },   0 < β < 1,   a > 0,

where Ω(k) is a continuous, increasing, bounded and strictly concave function
defined on [0, k̄] and f(k) is a continuous, increasing and bounded function
defined on [0, k̄]. Does T map strictly concave functions into strictly concave
functions? Prove or provide a counterexample.
Exercise 178 Prove the following: the operator T defined by

(TV)(x) = max { x/(1 - β), βV(x) },   with β ∈ (0, 1) and x ∈ [0, x̄],

on the set of bounded and continuous functions is a contraction mapping.

Exercise 179 Let f(·; α) : R^n → R^n satisfy Blackwell's sufficient conditions
for a contraction mapping. Here α ∈ R is a parameter. Let g(α) denote the
fixed point of f, namely, the unique solution to x = f(x; α). Prove the following:
If f(x; α) is decreasing in α, then g(α) is decreasing in α.
Exercise 180 Let the mapping T be defined by

T(v)(x) = max_{0 ≤ y ≤ f(x)} { U[f(x) - y] + βv(y) },

where the function U : R_+ → R and the function f : R_+ → R_+ are continuous
and strictly increasing, and β < 1. Prove the following.

• If v : R_+ → R is continuous, then T(v) is continuous.

• If v : R_+ → R is continuous, then T(v) is strictly increasing.
Exercise 181 Consider the following two-sector model of optimal growth. A
social planner seeks to maximize the utility of the representative agent given by

Σ_{t=0}^∞ β^t u(c_t, ℓ_t),

where c_t is consumption of good 1 at t, whereas ℓ_t is leisure at t. Sector 1
produces consumption goods using capital k_{1t} and labor n_{1t} according to the
production function c_t ≤ f_1(k_{1t}, n_{1t}). Sector 2 produces the capital good according
to the production function k_{t+1} ≤ f_2(k_{2t}, n_{2t}). Total employment is n_t = n_{1t} + n_{2t},
and leisure ℓ_t is constrained by the endowment of time: ℓ_t + n_t ≤ 1. The sum of
the amounts of capital used in both sectors cannot exceed the capital in
the economy, k_{1t} + k_{2t} ≤ k_t, with k_0 given.

• Formulate this problem as a dynamic programming problem.

• Consider now another economy that is similar to the previous one except
for the fact that capital is sector specific. The economy starts period
t with given amounts of capital k_{1t} and k_{2t} that must be used in sectors
1 and 2 respectively. During this period the capital-good sector produces
capital that is specific to each sector according to the transformation curve
g(k_{1,t+1}, k_{2,t+1}) ≤ f_2(k_{2t}, n_{2t}). Formulate this problem as a dynamic
programming problem.
Exercise 182 A firm maximizes the present value of cash flows, with future
earnings discounted at rate β. Income at time t is given by sales p_t · q_t,
where p_t is the price of the good and q_t is the quantity produced. The firm behaves
competitively and therefore takes prices as given. It knows that prices evolve
according to a law of motion given by p_{t+1} = f(p_t). Total or gross production
depends on the amount of capital k_t and labor n_t, and on the square of the
difference between the current ratio of sales to investment q_t/x_t and the previous period's
ratio. This last feature captures the notion that changes in the ratio of sales to
investment require some reallocation of resources within the firm. It is assumed
that the wage rate is constant and equal to w. Capital depreciates at rate δ. The
firm's problem is

max Σ_{t=0}^∞ β^t [ p_t q_t - w n_t ],

subject to

q_t + x_t ≤ g( k_t, n_t, (q_t/x_t - q_{t-1}/x_{t-1})^2 ),

k_{t+1} ≤ (1 - δ)k_t + x_t,

p_{t+1} = f(p_t),

k_0 > 0 and q_{-1}/x_{-1} > 0 given.

We assume that g is bounded, increasing in the first two arguments, and decreasing
in the last argument. Formulate this problem as a dynamic programming problem.
Exercise 183 A worker's instantaneous utility function u(·) depends on the
amount of market goods consumed c_{1t} and also the amount of home-produced goods
c_{2t}. In order to acquire market-produced goods the worker must allocate some
amount of time ℓ_{1t} to market activities that pay a salary of w_t. The worker takes
wages as given. It is known that wages move according to w_{t+1} = h(w_t).
The quantity of home-produced goods depends on the stock of expertise that the
worker has at the beginning of the period, which we label a_t. This stock of
expertise depreciates at rate δ and can be increased by allocating time ℓ_{2t} to non-market
activities. Hence, the agent's problem is

max Σ_{t=0}^∞ β^t u(c_{1t}, c_{2t})

subject to

c_{1t} ≤ w_t ℓ_{1t},

c_{2t} ≤ f(a_t),

a_{t+1} ≤ (1 - δ)a_t + ℓ_{2t},

ℓ_{1t} + ℓ_{2t} ≤ 1,

w_{t+1} = h(w_t),

a_0 > 0 given.

It is assumed that u and h are bounded and continuous. Formulate this problem
as a dynamic programming problem.
Exercise 184 Consider the problem of choosing a consumption sequence c_t to
maximize

Σ_{t=0}^∞ β^t (ln c_t + γ ln c_{t-1}),   0 < β < 1,   γ > 0,

subject to

c_t + k_{t+1} ≤ A k_t^α,   A > 0,   0 < α < 1,   k_0 > 0 and c_{-1} given.

Here c_t is consumption at t, and k_t is the capital stock at the beginning of period
t. The current utility function ln c_t + γ ln c_{t-1} is designed to represent habit
formation in consumption.

1. Formulate the dynamic programming problem of a representative agent.

2. Prove that the solution to this dynamic programming problem is of the form
v(k, c_{-1}) = D + E ln k + G ln c_{-1}, where c_{-1} is the lagged consumption and
D, E, and G are constants. Prove that the optimal policy function is of
the form ln k_{t+1} = I + H ln k_t, where I and H are constants. Give explicit
formulas for D, E, G, H, and I.
Exercise 185 Consider the following optimization problem:

max Σ_{t=0}^∞ u(a_t)

subject to Σ_{t=0}^∞ a_t ≤ s, and a_t ≥ 0. This problem, which is a dynamic programming
problem with β = 1, is known as the "cake-eating" problem. We begin with
a cake of size s and we need to allocate this cake to consumption in each period of
an infinite horizon.

• Show that this problem may be written as a dynamic programming problem.
That is, describe formally the state and choice spaces, the reward function,
and the feasibility correspondence.

• Show that if u : A → R is increasing and linear, i.e. u(a) = ba for some
b > 0, then the problem always has at least one solution. Find a solution.

• Show that if u : A → R is increasing and strictly concave, then the problem
has no solution.
Exercise 186 Write down the Bellman equation for the following problem.
Consider the problem of a worker who faces a wage offer w every period from
a distribution F(W) = prob(w ≤ W). If the worker accepts the offer he commits
to work at wage w forever. If he declines the offer, he can search one
more period and draw a new wage offer from the distribution F. If the worker is
unemployed, he receives unemployment compensation c.

Exercise 187 Write down the Bellman equation for the following problem.
Consider the problem of a firm which faces a firm-specific productivity shock
s_t each period. The price p of the firm's product and the wage rate w are constant
over time. Output is a function of the employment level n_t and s_t, and is given
by f(n_t, s_t). Upon observing s_t, the firm decides how much labor to employ for the
current period. Changing the level of employment implies an adjustment cost
given by g(n_t, n_{t-1}). The shock that the firm faces next period depends on
the value of the shock in the current period. The firm also has the option of exiting
next period.
Exercise 188 Consider the problem of a monopolist introducing a new product.
The monopolist desires to maximize the present value of his profits. He faces
a constant interest rate r. In each period the monopolist faces the downward-sloping
demand curve described by

p = D(q),   with D(0) = p̄, D(∞) = p̲, and D'(q) < 0.

That is, according to demand schedule D the monopolist can sell q units of
output at a price of p. Marginal revenue, D(q) + qD'(q), is assumed to be
strictly decreasing in q. Let the per-unit cost of production be given by c. This cost
declines convexly with production experience, e, according to

c = C(e),   with C(0) < p̄, C(∞) > p̲, C'(e) < 0, and C''(e) > 0.

Production experience is taken to be the cumulative sum of past production, so
the law of motion governing experience is given by

e' = e + q.

a) Formulate the monopolist's dynamic programming problem. Hint: cast the
relevant maximization problem in terms of the production experience variable.
b) Compute the first order condition associated with this problem. Interpret it.
Prove that the decision rule for q is strictly increasing. c) Consider the problem
of a myopic monopolist who maximizes current period profits. Who produces
more: the myopic or the farsighted monopolist?
Exercise 189 Consider the following version of the one-sector growth model. The
economy is populated by identical representative agents who want to maximize

E_0 Σ_{t=0}^∞ β^t u(c_t, n_t),   with 0 < β < 1,

subject to

c_t + g_t + k_{t+1} ≤ F(k_t, n_t),

and

0 ≤ n_t ≤ 1,   c_t ≥ 0,   k_{t+1} ≥ 0,   and k_0 given.

Here c_t is time-t consumption, n_t is hours worked, and g_t is government spending.
Government spending does not create any utility for the consumers. It is
iid and drawn from the following cumulative distribution each period:

Pr[g_t ≤ a] = G(a).

Agents make decisions after they observe the level of government spending. Assume
u_1 > 0, u_2 < 0, u_11 < 0, u_22 < 0, u_21 < 0, and F_1 > 0, F_2 > 0, F_11 < 0,
F_22 < 0. Write down the dynamic programming problem for a representative
agent (or equivalently for a central planner).

1. Derive the FOC for n_t. Show that ∂n_t/∂g_t ≥ 0.

2. Derive the FOC and Euler equation for k_{t+1}. Is k_{t+1} increasing or decreasing
in g_t?

3. Will your results about the properties of the policy function in the previous part change
if g_t were serially correlated?
Exercise 190 Consider the following savings-consumption problem for an infinitely
lived agent:

max U = E_0 [ Σ_{t=0}^∞ β^t ( (α + ε_t)c_t - b c_t^2 ) ]

s.t.   a_{t+1} = R a_t + y_t - c_t   and   a_0 > 0 given.

Here y_t is the time-t realization of a given nonrandom income stream, c_t is time-t
consumption, and ε_t is a taste shock. R is the gross rate of return on
keeping assets from one period to the next. Let R = 1/β.

• Write down the value function and the FOC for a_{t+1} for this problem.

• Assume that

ε_t = ρ ε_{t-1} + η_t,   0 < ρ < 1 and η_t iid.

How does the FOC look now?
Exercise 191 Assume that

u(c) = c^{1-σ} / (1 - σ),   σ > 0.

Assume that the gross interest rate R_t is independently and identically distributed
and is such that E(R_t^{1-σ}) < 1/β. Consider the problem of a representative
agent who wants to maximize

E Σ_{t=0}^∞ β^t u(c_t),   0 < β < 1,

subject to

A_{t+1} ≤ R_t (A_t - c_t),   and A_0 given,

where A_t is period-t assets. It is assumed that c_t must be chosen before R_t is
observed. Show that the optimal policy function takes the form c_t = λA_t and
give an explicit formula for λ. Hint: consider a value function of the general
form v(A) = B A^{1-σ}, for some constant B.
Exercise 192 Consider the following version of the one-sector growth model where
a representative agent tries to maximize

max Σ_{t=0}^∞ β^t U(c_t, ℓ_t)

subject to

c_t + i_t = F(h_t k_t, ℓ_t)

and

k_{t+1} = (1 - δ(h_t))k_t + i_t(1 + ε_t)

and

k_0 > 0 given.

Here c_t is consumption and ℓ_t is labor supply. Production for a given capital
stock k_t and labor supply ℓ_t is given by F(h_t k_t, ℓ_t). The variable h_t is the
intensity of factor utilization for capital, which is a choice variable. When h_t
is high, you get more services from a given capital stock. Using the capital stock
more intensely, however, has a cost: it makes the capital stock depreciate faster,
i.e. δ' > 0. Moreover, let 0 < δ < 1 and δ'' > 0. Hence in this economy depreciation
is not fixed, but given by the function δ(h). There is an investment-specific
technology shock here. The productivity of the existing capital at time t, k_t, is
not affected by this technology shock. When ε_t is high, however, it is cheaper
to make investment, since a given unit of [F(hk, ℓ) - c] can create more investment
and a higher capital stock next period. Let ε_t be an autocorrelated random
variable with cumulative distribution F(ε_{t+1} | ε_t), and let ε_t ∈ Q = [ε̲, ε̄].

• Let

U(c_t, ℓ_t) = U(c_t - G(ℓ_t)),

with U' > 0, U'' < 0, G' > 0, G'' > 0. Find the marginal rate of substitution
between consumption and labor given this particular utility function.
What is the significance of the MRS that this particular utility function
implies?

• Write down the dynamic programming problem faced by this consumer.

• Write down the FOCs for k_{t+1}, h_t, and ℓ_t. Write down also the Euler equation
for k_{t+1}. Interpret each of these FOCs.

• Using the FOC for k_{t+1}, show that if the value function is concave, then
∂k_{t+1}/∂ε_t > 0. Try to interpret ∂k_{t+1}/∂ε_t.
Exercise 193 Consider the following version of the one-sector growth model:

max_{{c_t, k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t u(c_t),

subject to

c_t + k_{t+1} ≤ F(k_t, ℓ_t),

k_0 > 0 given,

c_t ≥ 0,   k_{t+1} ≥ 0,   ℓ_t ≤ ℓ̂_t.

The agent has k_0 > 0 units of capital at time 0. The agent also has 2 units of
labor endowment in even periods and 1 unit of labor endowment in odd periods,
i.e.

(ℓ̂_0, ℓ̂_1, ℓ̂_2, ...) = (2, 1, 2, 1, ...),

where ℓ̂_t denotes the labor endowment at time t. Formulate the problem of
maximizing the consumer's utility subject to the feasibility constraint as a dynamic
programming problem.
Exercise 194 Consider the following version of the one-sector growth model
with linear utility:

max_{{c_t, k_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t c_t,

subject to

c_t + k_{t+1} ≤ f(k_t),

k_0 > 0 given,

c_t ≥ 0,   k_{t+1} ≥ 0.

Here f(k_t) represents the total amount of goods available at time t. The function
f is strictly increasing and strictly concave, with f(0) = 0. The representative agent
has k_0 units of capital at time 0 and chooses a sequence {c_t, k_{t+1}}_{t=0}^∞ to maximize
Σ_{t=0}^∞ β^t c_t.

• Write down the dynamic programming problem associated with this sequential
problem.

• Write down the FOC, the envelope condition, and the Euler equation associated
with the dynamic programming problem. Using the Euler equation,
show that the optimal decision rule for an interior solution is

k_{t+1} = g(k_t) = (f')^{-1}(1/β) = k*,

where (f')^{-1} indicates the inverse of the f' function.

• Suppose k_{t+1} = g(k_t) is an interior decision for all k_t; find the maximized
value function V*(k_0).

• Do you expect k_{t+1} = g(k_t) to be the optimal decision for any k_t? Explain.

• Describe the optimal decision rule as fully as possible.
9 Deterministic Dynamics

Consider the following sequential problem (SP)

max_{{x_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t F(x_t, x_{t+1}),   (SP)

subject to

x_{t+1} ∈ Γ(x_t),   t = 0, 1, 2, ...,

and

x_0 given,

where F is a bounded return function and Γ is a feasibility correspondence.
We know (see Chapter 4 of SLP) that the maximized value function

V*(x_0) = max_{{x_{t+1}}_{t=0}^∞} Σ_{t=0}^∞ β^t F(x_t, x_{t+1})

satisfies the following functional equation (FE),

V(x) = max_{y ∈ Γ(x)} [ F(x, y) + βV(y) ].   (FE)

As an example, consider the FE for the standard one-sector growth model

V(k) = max_{0 ≤ k' ≤ f(k)} { U(f(k) - k') + βV(k') },   (52)

where k is the current period capital stock, k' is the next period capital stock
to be chosen, f(·) is the production function, U(·) is the utility function, and β
is the discount factor. Assume that:

1. f : R_+ → R_+ and U : R_+ → R are continuous, strictly concave, strictly
increasing, and continuously differentiable.

2. β ∈ (0, 1).

3. f(0) = 0, lim_{k→0} f'(k) = ∞, lim_{k→∞} f'(k) = 0, and lim_{c→0} U'(c) = ∞.

Under these assumptions, we know that:

1. There exists a unique maximum maintainable capital stock k̄ > 0 such that
f(k̄) = k̄, and if k_t > k̄ then k_{t+1} ≤ f(k_t) < k_t. Hence the set of possible
values for the state variable k can be restricted to [0, k̄].

2. The functional equation (52) has a unique, bounded and continuous solution
on [0, k̄].

3. V is strictly increasing and strictly concave.

4. The maximum in (52) is attained at a unique value g(k), and the policy
function g(·) is continuous.

5. Given any k_0, the sequence k_{t+1} = g(k_t) defines an optimal path for the
capital stock, i.e. it is a solution for the SP.

We want to characterize g(·) in order to understand how the optimal sequence
of the capital stock behaves over time. In particular, we would first like
to find the stationary points of g (the points where g(k) = k), and then try to figure
out if and how the economy moves over time towards these stationary points.

We also know under these assumptions that the solution is everywhere interior,
and g(k) is characterized by the first order and envelope conditions of (52),

U'(f(k) - g(k)) = βV'(g(k)),   (53)

and

V'(k) = U'[f(k) - g(k)] f'(k).   (54)
Claim: The policy function g(·) is increasing.

Proof: Let k and k̂ be such that k, k̂ ∈ [0, k̄] and k̂ > k. We want to show
that g(k̂) > g(k). Consider the FOCs for k and k̂:

U'(f(k) - g(k)) = βV'(g(k)),

U'(f(k̂) - g(k̂)) = βV'(g(k̂)).

Take the ratio of these two equalities to get

U'(f(k) - g(k)) / U'(f(k̂) - g(k̂)) = V'(g(k)) / V'(g(k̂)).

Suppose g(k̂) ≤ g(k). Since k̂ > k, we have f(k̂) > f(k); then, since U'' < 0,

U'(f(k) - g(k)) / U'(f(k̂) - g(k̂)) > 1,

and since V'' < 0,

V'(g(k)) / V'(g(k̂)) ≤ 1,

leading to a contradiction.
9.1 Stationary Points of g(·)

Note that k = 0 is a trivial stationary point of g(·), since g(0) = 0. An economy
that starts with no capital will be stuck there. In order to characterize the
stationary points, let's evaluate the first order and envelope conditions at g(k) = k
to get

U'(f(k) - k) = βV'(k),   (55)

and

V'(k) = U'[f(k) - k] f'(k).   (56)

These give us the necessary condition for a stationary point:

f'(k) = 1/β.   (57)

Since f is strictly concave, f'(·) is continuous and strictly decreasing, so there is a unique k*
satisfying (57), given by

k* = (f')^{-1}(1/β).   (58)

This is our unique candidate for a stationary point.

We showed that g(k) = k implies (58) (the necessary condition); if we can
also show that (58) implies g(k) = k, we will have a full characterization of the
stationary points of g(·).
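As a quick numerical illustration of (57) and (58), suppose f(k) = k^α (an assumed functional form; the notes leave f general). Then (f')^{-1}(1/β) has a closed form:

```python
# Condition (57): f'(k*) = 1/beta. For the assumed form f(k) = k**alpha,
# f'(k) = alpha * k**(alpha - 1), so k* = (alpha * beta)**(1 / (1 - alpha)).
# Parameter values are illustrative only.
alpha, beta = 0.3, 0.96

k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))

# verify that k* satisfies the necessary condition f'(k*) = 1/beta
f_prime = alpha * k_star ** (alpha - 1.0)
assert abs(f_prime - 1.0 / beta) < 1e-12
```

Because f' is strictly decreasing for this f, the root is unique, matching the argument in the text.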
In order to get the sufficient condition, note that the strict concavity of V
implies that for any k and k',

[V'(k) - V'(k')][k - k'] ≤ 0, and

[V'(k) - V'(k')][k - k'] = 0 if and only if k = k'.

In particular, let k' = g(k); then

[V'(k) - V'(g(k))][k - g(k)] ≤ 0, and

[V'(k) - V'(g(k))][k - g(k)] = 0 if and only if k = g(k).

By using (53) and (54), we get

V'(g(k)) = (1/β) U'[f(k) - g(k)],

and

V'(k) = U'[f(k) - g(k)] f'(k),

leading to

[V'(k) - V'(g(k))] = U'[f(k) - g(k)] (f'(k) - 1/β).

Since U'(·) > 0,

[f'(k) - 1/β][k - g(k)] ≤ 0, and

[f'(k) - 1/β][k - g(k)] = 0 if and only if k = g(k).

Hence, k* is a stationary point, since the LHS of the inequality is zero at k*. Therefore,

k = k*  ⟹  [f'(k) - 1/β] = 0  ⟹  k = g(k).
In order to characterize the behavior of the economy out of the steady state,
note that since k* is the unique steady state, if k ≠ k*,

[f'(k) - 1/β][k - g(k)] < 0.

Since f(·) is concave,

f'(k) ≥ (≤) 1/β  ⟺  k ≤ (≥) k*.

Therefore,

g(k) > (<) k  ⟺  k < (>) k*,

and g(k) will look like Figure 20.

This analysis shows that: (i) there is a unique steady state k* > 0; (ii)
given any k_0 > 0, the economy will converge to k* (global stability); and (iii) the
convergence is monotone.

[Figure 20: g(k) for the one-sector growth model. The policy function lies above
the 45-degree line for k < k* and below it for k > k*, crossing it at k*.]
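The three properties can also be illustrated numerically. For the textbook special case U(c) = ln c, f(k) = k^α with full depreciation (a case assumed here for the sake of a closed form, not derived in these notes), the policy function is known to be g(k) = αβ k^α, with steady state k* = (αβ)^{1/(1-α)}. Simulating k_{t+1} = g(k_t) from both sides of k* shows monotone global convergence:

```python
# Monotone convergence to k* for the log-utility / Cobb-Douglas special case,
# where g(k) = alpha * beta * k**alpha. Parameters are illustrative.
alpha, beta = 0.3, 0.9
k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))

def g(k):
    return alpha * beta * k ** alpha

def path(k0, T=50):
    ks = [k0]
    for _ in range(T):
        ks.append(g(ks[-1]))
    return ks

low = path(0.2 * k_star)    # starts below k*: rises monotonically toward k*
high = path(5.0 * k_star)   # starts above k*: falls monotonically toward k*
```

Both paths approach k* from one side and never cross it, exactly the pattern drawn in Figure 20.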
9.1.1 A Non-monotone Example

Consider an economy with two sectors, one producing consumption goods and
the other producing investment goods. The agents have one unit of time. The
production of the consumption goods c_t requires both labor n_t and capital k_t,
and is given by

c_t = n_t f(k_t / n_t),

with f(·) a neoclassical production function, and lim_{n→0} n f(k/n) = 0.
The production of capital goods requires only labor,

k_{t+1} = 1 - n_t.

The dynamic programming problem faced by a representative agent is

V(k) = max_{y ∈ [0,1]} { U( (1 - y) f( k / (1 - y) ) ) + βV(y) },   (59)

where k is the current capital stock, and y = 1 - n is the next period capital
stock to be chosen. The production function for capital goods implies that y ∈
[0, 1]. [Check that V(·) is strictly increasing and strictly concave, and the policy
function g : [0, 1] → [0, 1] is continuous.]

Let's find the stationary points of g(·). In this example, k = 0 is no longer a
stationary point. Since capital goods are produced by labor only, an economy
with k = 0 will not be stuck there.

The first order and envelope conditions for this problem are given by

U'( [1 - g(k)] f( k/(1 - g(k)) ) ) [ f( k/(1 - g(k)) ) - (k/(1 - g(k))) f'( k/(1 - g(k)) ) ] = βV'(g(k)),

V'(k) = U'( [1 - g(k)] f( k/(1 - g(k)) ) ) f'( k/(1 - g(k)) ).

Setting g(k) = k, we get the necessary condition for a stationary point,

f( k/(1 - k) ) - ( k/(1 - k) + β ) f'( k/(1 - k) ) = 0.   (60)

Under the assumptions on f(·) there is again a unique k* satisfying the necessary
condition.
Since V(·) is strictly concave, we can show that this condition is also sufficient.
Using

[V'(k) - V'(g(k))][k - g(k)] ≤ 0, and

[V'(k) - V'(g(k))][k - g(k)] = 0 if and only if k = g(k),

we get

U'(·) [ f'( k/(1 - g(k)) ) - (1/β) f( k/(1 - g(k)) ) + (1/β)(k/(1 - g(k))) f'( k/(1 - g(k)) ) ] [k - g(k)] ≤ 0,

hence

U'(·) (1/β) [ f'( k/(1 - g(k)) )( β + k/(1 - g(k)) ) - f( k/(1 - g(k)) ) ] [k - g(k)] ≤ 0,

and since U'(·) > 0, (60) is also a sufficient condition for a stationary point.
We again have a unique stationary point k*, which is characterized by (60).
But it is obvious that g(·) will not look like Figure 20. If k is near 0, it is
optimal to spend most of the time producing capital; hence g(0) will be
close to one and g(·) will be decreasing near 0.
In order to investigate this possibility further, let

U(c) = c^α,   and   f(z) = z^θ,   where 0 < α < 1 and 0 < θ < 1.

Then the FOC becomes

U'( k^θ (1 - g(k))^{1-θ} ) [ (k/(1 - g(k)))^θ - (k/(1 - g(k))) θ (k/(1 - g(k)))^{θ-1} ] = βV'(g(k)),

resulting in

α(1 - θ) k^{αθ} (1 - g(k))^{α(1-θ)-1} = βV'(g(k)).

Claim: g(k) is decreasing.

Proof: Take any k and k̂, and let k̂ > k. Suppose g(k̂) ≥ g(k). From the FOCs
for k and k̂ we have

( k / k̂ )^{αθ} [ (1 - g(k)) / (1 - g(k̂)) ]^{α(1-θ)-1} = V'(g(k)) / V'(g(k̂)).

If g(k̂) = g(k), then LHS < 1 and RHS = 1. If g(k̂) > g(k), then LHS < 1 and
RHS > 1. Hence g(k) is decreasing.
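The decreasing policy can also be seen numerically. The sketch below iterates the Bellman equation (59) for the parametric example U(c) = c^α, f(z) = z^θ on a coarse grid; the particular values of α, θ, β are made up for illustration.

```python
# Value function iteration for the two-sector example:
# V(k) = max_{y in [0,1)} (k**theta * (1-y)**(1-theta))**alpha + beta * V(y).
# Parameter values are illustrative assumptions.
alpha, theta, beta = 0.5, 0.4, 0.9
grid = [i / 50.0 for i in range(50)]          # k and y grids on [0, 0.98]

def bellman(V, k):
    # one Bellman maximization at state k; returns (value, maximizing y)
    best_val, best_y = -1.0, 0.0
    for a, y in enumerate(grid):
        c = (k ** theta) * ((1.0 - y) ** (1.0 - theta))
        val = c ** alpha + beta * V[a]
        if val > best_val:
            best_val, best_y = val, y
    return best_val, best_y

V = [0.0] * len(grid)
for _ in range(300):
    V = [bellman(V, k)[0] for k in grid]
g = [bellman(V, k)[1] for k in grid]          # policy at the approximate fixed point
```

Consistent with the claim, the computed policy starts near one at k = 0 (all time goes to capital production) and falls as k rises.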
10 Euler Equations

Consider the Euler equation for the standard one-sector growth model,

U'(f(k_t) - k_{t+1}) = βU'(f(k_{t+1}) - k_{t+2}) f'(k_{t+1}),   (61)

where U(·) and f(·) are the utility and production functions, β is the discount factor,
and k_t is the capital stock. We know that, given any initial capital stock k_0, the
sequence of capital stocks {k_t} that satisfies (61) and the transversality condition
is an optimal solution for this problem. Note that (61) is a (nonlinear) second
order difference equation in k_t. Hence, analyzing how the optimal capital stock
moves over time amounts to analyzing how this difference equation behaves. As
we have already shown, in this example there exists a unique stationary
capital stock given by f'(k*) = 1/β, and it is possible to characterize how the
economy moves from any given k_0 towards k*.

For a general dynamic programming problem (X, Γ, A, F, β) where

1. X is a convex subset of R^l,

2. Γ : X → X is a nonempty, compact-valued, continuous, and convex
correspondence,

3. F : A → R is bounded, continuous, strictly concave, and continuously
differentiable on the interior of A,

4. β ∈ (0, 1),

the Euler equations are given by

F_y(x_t, x_{t+1}) + βF_x(x_{t+1}, x_{t+2}) = 0,   (62)

and the transversality condition is given by

lim_{t→∞} β^t F_x(x_t, x_{t+1}) x_t = 0.   (63)

Hence, any sequence of state variables {x_t} satisfying (62) and (63) is a
solution for the SP given x_0.
For the one-sector growth model,

F(k, k') = U(f(k) - k'),

therefore

F_y(x_t, x_{t+1}) = U'(f(k_t) - k_{t+1})(-1)

and

F_x(x_{t+1}, x_{t+2}) = U'(f(k_{t+1}) - k_{t+2}) f'(k_{t+1}),

which gives us (61). In order to be able to analyze equations like (61), we
need to study how to solve difference equations.
10.1 Solving Linear Difference Equations

Consider a system of first-order linear difference equations

x_{t+1} = A x_t,   (64)

where x_t is an (l × 1) vector and A is an (l × l) matrix of coefficients. If we had
an initial value for x, then the period-t solution would be x_t = A^t x_0.
There is no loss of generality in analyzing a first order system of the form (64),
since:

1. Any higher order system of linear equations can be written as a first order
system. Consider a q-th order system given by

z_{t+1} = a_1 z_t + a_2 z_{t-1} + ... + a_q z_{t-q+1}.   (65)

This can be written as a first order system as

[ z_{t+1}   ]   [ a_1  a_2  ...  a_{q-1}  a_q ] [ z_t       ]
[ z_t       ] = [ 1    0    ...  0        0   ] [ z_{t-1}   ]
[   ⋮       ]   [ 0    1    ...  0        0   ] [   ⋮       ]
[ z_{t-q+2} ]   [ 0    0    ...  1        0   ] [ z_{t-q+1} ]
   = x_{t+1}               = A                     = x_t

where x_t and x_{t+1} are (q × 1) vectors and A is a (q × q) matrix.
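This stacking trick can be checked in a few lines. For an illustrative second-order scalar equation (q = 2, with made-up coefficients a_1, a_2), iterating the companion system reproduces the original recursion:

```python
# Stack z_{t+1} = a1*z_t + a2*z_{t-1} into x_{t+1} = A x_t with x_t = (z_t, z_{t-1})'.
# Coefficients and initial values are illustrative.
a1, a2 = 0.5, 0.3
A = [[a1, a2],
     [1.0, 0.0]]

def step(A, x):
    # one application of x -> A x
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

x = [1.0, 2.0]                 # x_1 = (z_1, z_0)
for _ in range(5):
    x = step(A, x)             # after 5 steps, x = (z_6, z_5)

# direct recursion for comparison
hist = [2.0, 1.0]              # z_0, z_1
for _ in range(5):
    hist.append(a1 * hist[-1] + a2 * hist[-2])
```

The first component of the stacked state tracks z_t exactly, and the second component carries the lag, which is all the companion matrix does.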
2. Any nonhomogeneous first order system can be written as a homogeneous
system. Consider a nonhomogeneous system given by

z_{t+1} = C + A z_t.   (66)

If (I - A) is nonsingular, the steady state of (66) is given by z̄ = C + Az̄,
and therefore z̄ = (I - A)^{-1} C. Let x_t = z_t - z̄ and x_{t+1} = z_{t+1} - z̄; then

x_{t+1} = z_{t+1} - z̄ = C + A z_t - z̄ = C + A z_t - C - A z̄ = A(z_t - z̄) = A x_t.

In order to analyze the behavior of x_t = A^t x_0, we need to know how A^t
evolves.
Remark 195 Note that if l = 1 and a is a scalar, then x_t = a^t x_0 → 0 if |a| < 1.
Note that if A is a diagonal matrix, then it is easy to analyze A^t. Therefore,
it is a good idea to diagonalize A. To that end, let's introduce some definitions:

Definition 196 Given an l × l matrix A, an eigenvector e of A is an l × 1 vector
that is mapped by A into its scalar multiple. That is, Ae = λe for some scalar
λ. If e is an eigenvector, it must solve

(A - λI)e = 0.

Remark 197 Note that an l × l matrix is a linear operator on R^l. Then an
eigenvector is almost like a fixed point of this operator in the following sense:
A scales e by a constant λ.

Definition 198 (A - λI)e = 0 will have nonzero solutions if and only if λ is
chosen so that

det(A - λI) = 0.

The values of λ that satisfy this equation are called eigenvalues of A. For each
eigenvalue λ_i, (A - λ_i I)e = 0 then gives the corresponding eigenvector e_i.

Definition 199 A square matrix A is diagonalizable if there exists an invertible
matrix P such that P^{-1}AP is diagonal.

Theorem 200 Let A be an l × l matrix. If the l eigenvectors of A are linearly
independent, then it is diagonalizable, and P^{-1}AP = Λ, where P is the matrix
of eigenvectors, and Λ is a diagonal matrix with the eigenvalues of A on the
diagonal.

Theorem 201 If the l eigenvalues of A are all different, then its eigenvectors are
linearly independent; hence A is diagonalizable.
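These definitions can be exercised on a small example. The sketch below (an illustrative 2×2 matrix, not from the notes) computes the eigenvalues from the characteristic polynomial, builds P and P^{-1} by hand, and checks that x_t = PΛ^t P^{-1} x_0 matches direct iteration of x_{t+1} = A x_t:

```python
import math

# Diagonalization A = P Λ P^{-1} for an illustrative 2x2 matrix.
A = [[0.5, 0.2],
     [0.1, 0.4]]

# eigenvalues from det(A - λI) = λ^2 - tr(A) λ + det(A) = 0
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam1, lam2 = (tr + disc) / 2.0, (tr - disc) / 2.0

# eigenvectors: (A - λI)e = 0 gives e = (a12, λ - a11)' when a12 != 0
P = [[A[0][1], A[0][1]],
     [lam1 - A[0][0], lam2 - A[0][0]]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]

def x_t(x0, t):
    # x_t = P Λ^t P^{-1} x_0, with Λ^t applied componentwise
    xh = [Pinv[0][0] * x0[0] + Pinv[0][1] * x0[1],
          Pinv[1][0] * x0[0] + Pinv[1][1] * x0[1]]
    xh = [lam1 ** t * xh[0], lam2 ** t * xh[1]]
    return [P[0][0] * xh[0] + P[0][1] * xh[1],
            P[1][0] * xh[0] + P[1][1] * xh[1]]

# direct iteration x_{t+1} = A x_t for comparison
x = [1.0, 0.0]
for _ in range(10):
    x = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
```

Because both eigenvalues here are inside the unit circle, both computations also confirm x_t → 0.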
10.1.1 Two-dimensional case

Let l = 2, and consider

x_{t+1} = a_{11} x_t + a_{12} y_t,

and

y_{t+1} = a_{21} x_t + a_{22} y_t.

Then,

[ x_{t+1} ]   [ a_{11}  a_{12} ] [ x_t ]
[ y_{t+1} ] = [ a_{21}  a_{22} ] [ y_t ]
  = z_{t+1}        = A             = z_t

Let λ_1 and λ_2, with λ_1 ≠ λ_2, be the eigenvalues of A. Then A is diagonalizable
as

Λ = P^{-1} A P,

where

P = [ b_{11}  b_{12} ]
    [ b_{21}  b_{22} ],

with [b_{11} b_{21}]' and [b_{12} b_{22}]' the eigenvectors corresponding to λ_1 and λ_2,
and

Λ = [ λ_1  0  ]
    [ 0   λ_2 ].

Now, in order to be able to use Λ, we want to transform our original system
so that we end up with something like ẑ_{t+1} = Λẑ_t, where ẑ denotes the transformed
variable. To this end note that

P^{-1} z_{t+1} = P^{-1} A (P P^{-1}) z_t = (P^{-1} A P)(P^{-1} z_t) = Λ P^{-1} z_t.

Let ẑ_t = P^{-1} z_t. Then

[ x̂_{t+1} ]   [ λ_1  0  ] [ x̂_t ]
[ ŷ_{t+1} ] = [ 0   λ_2 ] [ ŷ_t ],

or

x̂_{t+1} = λ_1 x̂_t   and   ŷ_{t+1} = λ_2 ŷ_t.

Then,

x̂_t = λ_1^t x̂_0   and   ŷ_t = λ_2^t ŷ_0.

We can now recover our original variables using z_t = Pẑ_t,

[ x_t ]   [ b_{11}  b_{12} ] [ λ_1^t x̂_0 ]
[ y_t ] = [ b_{21}  b_{22} ] [ λ_2^t ŷ_0 ],

which implies

x_t = b_{11} λ_1^t x̂_0 + b_{12} λ_2^t ŷ_0,   (67)

and

y_t = b_{21} λ_1^t x̂_0 + b_{22} λ_2^t ŷ_0.   (68)
Finally, substituting ẑ_0 = P^{-1} z_0, i.e. using

[ x̂_0 ]   [ b̃_{11}  b̃_{12} ] [ x_0 ]
[ ŷ_0 ] = [ b̃_{21}  b̃_{22} ] [ y_0 ],

where b̃ denotes the elements of P^{-1}, we have

x̂_0 = b̃_{11} x_0 + b̃_{12} y_0   and   ŷ_0 = b̃_{21} x_0 + b̃_{22} y_0.

Then, equations (67) and (68) become:

x_t = b_{11} λ_1^t ( b̃_{11} x_0 + b̃_{12} y_0 ) + b_{12} λ_2^t ( b̃_{21} x_0 + b̃_{22} y_0 ),   (69)

and

y_t = b_{21} λ_1^t ( b̃_{11} x_0 + b̃_{12} y_0 ) + b_{22} λ_2^t ( b̃_{21} x_0 + b̃_{22} y_0 ).   (70)
Now, it is obvious that the behavior of (69) and (70) depends on λ_1 and λ_2:

• If |λ_1| < 1 and |λ_2| < 1, then x_t → 0 and y_t → 0.

• If |λ_1| > 1 and |λ_2| > 1, then x_t → ±∞ and y_t → ±∞ (unless x̂_0 = ŷ_0 = 0).

• If |λ_1| < 1 and |λ_2| > 1, then the system is stable only if

b̃_{21} x_0 + b̃_{22} y_0 = 0.   (71)

Hence, there is a particular set of initial conditions that put the system
onto a stable path (usually referred to as a saddle path). In other words,
the initial conditions y_0 and x_0 are not independent of each other.

In this case, i.e. if (71) holds, we have

x_t = b_{11} λ_1^t ( b̃_{11} x_0 + b̃_{12} y_0 )   and   y_t = b_{21} λ_1^t ( b̃_{11} x_0 + b̃_{12} y_0 ).

And

x_t = (b_{11} / b_{21}) y_t,   (72)

and we know exactly how x_t and y_t are related for all t > 0.
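The saddle-path logic can be made concrete numerically. For a made-up 2×2 matrix with one stable and one unstable eigenvalue, choosing (x_0, y_0) to satisfy condition (71) — zero weight on the unstable root — produces a path that converges to zero:

```python
import math

# Saddle path in the 2x2 case. The matrix entries are illustrative only.
A = [[0.5, 0.4],
     [0.4, 1.1]]
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4.0 * det)
lam_u, lam_s = (tr + disc) / 2.0, (tr - disc) / 2.0   # unstable 1.3, stable 0.3

# eigenvectors as columns of P: e_i = (a12, lam_i - a11)'
P = [[A[0][1], A[0][1]],
     [lam_u - A[0][0], lam_s - A[0][0]]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]

# condition (71): kill the unstable root by requiring (P^{-1} z_0)_1 = 0,
# i.e. Pinv[0][0]*x0 + Pinv[0][1]*y0 = 0, which ties y0 to x0
x0 = 1.0
y0 = -Pinv[0][0] * x0 / Pinv[0][1]

z = [x0, y0]
for _ in range(30):
    z = [A[0][0] * z[0] + A[0][1] * z[1], A[1][0] * z[0] + A[1][1] * z[1]]
```

Any other choice of y_0 would leave a nonzero coefficient on lam_u and the path would diverge; on the saddle path only the stable root survives, so z shrinks geometrically at rate lam_s.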
10.1.2 General Case

If A in x_{t+1} = A x_t is diagonalizable, then Λ = P^{-1}AP, or

A = P Λ P^{-1},

and we have

x_t = P Λ^t P^{-1} x_0.   (73)

Let again b represent the elements of P, and b̃ represent the elements of P^{-1}.
Then,

P Λ^t = [ b_{11} λ_1^t   b_{12} λ_2^t   ...   b_{1l} λ_l^t ]
        [ b_{21} λ_1^t   b_{22} λ_2^t   ...   b_{2l} λ_l^t ]
        [      ⋮              ⋮                    ⋮      ]
        [ b_{l1} λ_1^t   b_{l2} λ_2^t   ...   b_{ll} λ_l^t ],

and

P^{-1} x_0 = [ Σ_{j=1}^l b̃_{1j} x_{j,0} ]
             [ Σ_{j=1}^l b̃_{2j} x_{j,0} ]
             [          ⋮               ]
             [ Σ_{j=1}^l b̃_{lj} x_{j,0} ].
Therefore,

[ x_{1,t} ]   [ Σ_{i=1}^l b_{1i} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ) ]
[ x_{2,t} ] = [ Σ_{i=1}^l b_{2i} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ) ]
[   ⋮     ]   [                       ⋮                             ]
[ x_{l,t} ]   [ Σ_{i=1}^l b_{li} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ) ].

Hence, for any h, x_{h,t} is given by

x_{h,t} = Σ_{i=1}^l Σ_{j=1}^l b_{hi} λ_i^t b̃_{ij} x_{j,0} = Σ_{i=1}^l b_{hi} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ).   (74)

Note that this is exactly equations (69) and (70) that we derived for the two-dimensional
case. We are interested in the behavior of x_t as t → ∞. From (74) we have the
following immediate result.

Theorem 202 If all eigenvalues of A are less than 1 in absolute value, then
x_t → 0 as t → ∞, since λ_i^t → 0 for all i.

Suppose not all of the eigenvalues are less than one in absolute value, but only
m < l of them are less than one in absolute value. Without loss of generality,
suppose that the first m of the eigenvalues are less than one in absolute value and
the last l - m are greater than one in absolute value. Then it is obvious from (74)
that we need additional restrictions on the initial values.
Theorem 203 Suppose |λ_i| < 1 for 1 ≤ i ≤ m, and |λ_i| > 1 for i > m. Then
lim_{t→∞} x_t = 0 if and only if x_{j,0} satisfies the restrictions Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for
all i > m.

Proof: (Sufficiency) If Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for all i > m, then

x_{h,t} = Σ_{i=1}^m b_{hi} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} ),

and the effects of the unstable roots are killed.

(Necessity) Suppose Σ_{j=1}^l b̃_{ij} x_{j,0} ≠ 0 for some i > m. There must exist
some h such that b_{hi} ≠ 0. For this h, lim_{t→∞} x_{h,t} ≠ 0, since the term in

x_{h,t} = Σ_{i=1}^l b_{hi} λ_i^t ( Σ_{j=1}^l b̃_{ij} x_{j,0} )

associated with the unstable root diverges.

Note that the condition Σ_{j=1}^l b̃_{ij} x_{j,0} = 0 for all i > m is exactly equation
(71) that we got for the two-dimensional case.
This theorem implies that for the stability of the system the initial values
associated with the stable eigenvalues must form a basis for the system. To
see this note that the condition on the initial values for the state variables
associated with unstable eigenvalues,
l
¸=l
/
I¸
r
¸,0
= 0 for all i :, imply that
the  ÷ : initial values associated with unstable eigenvalues are functions of
: initial values associated with stable eigenvalues. In other words, the initial
values associated with the instable eigenvalues should not be exogenous to the
system.
Specifically, Σ_{j=1}^{ℓ} b̄_{ij} x_{j,0} = 0 for all i > m implies

Σ_{j=1}^{m} b̄_{m+1,j} x_{j,0} + Σ_{j=m+1}^{ℓ} b̄_{m+1,j} x_{j,0} = 0,

and similarly for i = m + 2, ..., ℓ, or,

[ b̄_{m+1,m+1}  b̄_{m+1,m+2}  ...  b̄_{m+1,ℓ} ] [ x_{m+1,0} ]     [ Σ_{j=1}^{m} b̄_{m+1,j} x_{j,0} ]
[ b̄_{m+2,m+1}  b̄_{m+2,m+2}  ...  b̄_{m+2,ℓ} ] [ x_{m+2,0} ] = − [ Σ_{j=1}^{m} b̄_{m+2,j} x_{j,0} ]
[ ...                                        ] [ ...       ]     [ ...                           ]
[ b̄_{ℓ,m+1}    b̄_{ℓ,m+2}    ...  b̄_{ℓ,ℓ}   ] [ x_{ℓ,0}   ]     [ Σ_{j=1}^{m} b̄_{ℓ,j} x_{j,0}   ],

where the matrix on the left is (ℓ−m)×(ℓ−m) and the vector it multiplies is (ℓ−m)×1. Hence, the initial values associated with the unstable eigenvalues cannot be independent of the initial values associated with the stable eigenvalues. The subspace of R^ℓ formed by the initial values associated with the stable eigenvalues is called the stable manifold of the system.
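The closed form (73) and the restriction in Theorem 203 can be checked numerically. The following is a minimal sketch with an assumed 2×2 example (not from the notes): B, its inverse, and the eigenvalues λ_1 = 0.5 (stable) and λ_2 = 2 (unstable) are chosen by hand.

```python
# Assumed example: solve x_{t+1} = A x_t via x_t = B Lambda^t B^{-1} x_0, eq. (73),
# and check the stability restriction of Theorem 203.

def matmul(M, N):
    # product of small matrices stored as nested lists
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

B = [[1.0, 1.0], [1.0, 2.0]]        # columns are the eigenvectors
Binv = [[2.0, -1.0], [-1.0, 1.0]]   # bbar_{ij}: elements of B^{-1}
lam = [0.5, 2.0]                    # eigenvalues, one stable and one unstable

Lam = [[lam[0], 0.0], [0.0, lam[1]]]
A = matmul(matmul(B, Lam), Binv)    # A = B Lambda B^{-1}

def x_t(x0, t):
    # x_t = B Lambda^t B^{-1} x_0, equation (73)
    Lt = [[lam[0] ** t, 0.0], [0.0, lam[1] ** t]]
    return matmul(matmul(B, Lt), matmul(Binv, [[x0[0]], [x0[1]]]))

# Theorem 203: x_t -> 0 iff the row of B^{-1} for the unstable root annihilates
# x_0: bbar_21 x_{1,0} + bbar_22 x_{2,0} = -x_{1,0} + x_{2,0} = 0, i.e. x_0
# proportional to the stable eigenvector (1, 1).
stable = x_t([1.0, 1.0], 50)        # satisfies the restriction
unstable = x_t([1.0, 0.0], 50)      # violates it

stable_norm = abs(stable[0][0]) + abs(stable[1][0])
unstable_norm = abs(unstable[0][0]) + abs(unstable[1][0])

# direct iteration x_{t+1} = A x_t should match the closed form
x = [[1.0], [1.0]]
for _ in range(10):
    x = matmul(A, x)
closed = x_t([1.0, 1.0], 10)
gap = abs(x[0][0] - closed[0][0]) + abs(x[1][0] - closed[1][0])
```

Iterating A directly and using the diagonalization give the same path, and only the initial condition on the stable manifold converges to zero.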
Note that although we have studied linear difference equations, there is no reason for (62) to be linear in x. Indeed, most likely these equations will be nonlinear. If the objective function F is quadratic, however, the Euler equations will be linear, and we can apply the results from linear difference equations. For nonlinear cases we will linearize the Euler equations around the steady state and study the linearized versions.
10.2 Quadratic Return Functions
Suppose F(x, y) is a quadratic function. Then the first derivatives of F(x, y) can be written as

F_x(x, y) = F_x + F_{xx} x + F_{xy} y,

and,

F_y(x, y) = F_y + F′_{xy} x + F_{yy} y,

where F_x and F_y are ℓ×1 vectors of constants, F_{xx}, F_{yy}, and F_{xy} are ℓ×ℓ matrices of constants, and F′_{xy} is the transpose of F_{xy}. Note that the first derivative of a quadratic function of (x, y) can only have a constant, terms with x, or terms with y. Hence, for the quadratic case the Euler equations become

F_y + F′_{xy} x_t + F_{yy} x_{t+1} + β{F_x + F_{xx} x_{t+1} + F_{xy} x_{t+2}} = 0,

which can be written as

F_y + βF_x + F′_{xy} x_t + (F_{yy} + βF_{xx}) x_{t+1} + βF_{xy} x_{t+2} = 0. (75)
To find the conditions for stationary points, let's evaluate these equations at x = x_t = x_{t+1} = x_{t+2}. This gives

F_y + βF_x + F′_{xy} x + (F_{yy} + βF_{xx}) x + βF_{xy} x = 0.

If (F′_{xy} + F_{yy} + βF_{xx} + βF_{xy}) is nonsingular, then a unique stationary point will be given by

x̄ = −(F′_{xy} + F_{yy} + βF_{xx} + βF_{xy})⁻¹ (F_y + βF_x).

Let the deviations from the steady state be denoted by z_t = x_t − x̄. If F_{xy} is nonsingular, then we have

β⁻¹ F⁻¹_{xy} F′_{xy} z_t + β⁻¹ F⁻¹_{xy} (F_{yy} + βF_{xx}) z_{t+1} + z_{t+2} = 0. (76)
Since this is a system of second-order linear difference equations in z_t, we can write it as a first-order system,

[ z_{t+2} ]   [ J  K ] [ z_{t+1} ]
[ z_{t+1} ] = [ I  0 ] [ z_t     ],  (77)

where

J = −β⁻¹ F⁻¹_{xy} (F_{yy} + βF_{xx}),

K = −β⁻¹ F⁻¹_{xy} F′_{xy},

and I and 0 are the ℓ×ℓ identity and zero matrices. Let

Z_t = [ z_{t+1} ; z_t ]  and  A = [ J  K ; I  0 ].

Then the dynamics of the system can be represented by

Z_{t+1} = A Z_t. (78)

Hence we need to analyze the eigenvalues of A, which is a 2ℓ×2ℓ matrix. The following lemma shows that there is an upper bound on the number of eigenvalues that can be less than one in absolute value in the quadratic case.
Lemma 204 Assume F_{xy} and (F′_{xy} + F_{yy} + βF_{xx} + βF_{xy}) are nonsingular, and let A be defined as in (78). Then if λ is a characteristic root of A, so is (βλ)⁻¹.

Proof: If λ is a characteristic root of A, then (A − λI) will be singular, that is, for some stacked vector x ≠ 0 with x′ = (x′_1, x′_2) ∈ R^{2ℓ},

[ J − λI   K   ] [ x_1 ]   [ 0 ]
[ I       −λI  ] [ x_2 ] = [ 0 ],

which leads to the following set of equations,

(J − λI) x_1 + K x_2 = 0  and  x_1 − λ x_2 = 0.

If F_{xy} and (F′_{xy} + F_{yy} + βF_{xx} + βF_{xy}) are nonsingular, one can show that A will be nonsingular (left as an exercise). If A is nonsingular, then λ ≠ 0. Then x_1 = λ x_2, and the two equations reduce to

(K + λJ − λ² I) x_2 = 0,

implying that (K + λJ − λ² I) is singular. Hence,

det(K + λJ − λ² I) = 0,
and λ is a characteristic root of A if and only if it satisfies this equation. Using the values of K and J this equation becomes

det(−β⁻¹ F⁻¹_{xy} F′_{xy} − λ β⁻¹ F⁻¹_{xy} (F_{yy} + βF_{xx}) − λ² I) = 0,

and since F_{xy} is nonsingular,

det(β⁻¹ F′_{xy} + λ β⁻¹ (F_{yy} + βF_{xx}) + λ² F_{xy}) = 0. (79)

Hence λ is a characteristic root of A if and only if (79) holds. Let λ̂ = (λβ)⁻¹; we need to show that (79) also holds for λ̂. If we replace λ with λ̂ in equation (79), we get

det(β⁻¹ F′_{xy} + λ̂ β⁻¹ (F_{yy} + βF_{xx}) + λ̂² F_{xy}) = 0,
or

det(β⁻¹ F′_{xy} + λ⁻¹ β⁻² (F_{yy} + βF_{xx}) + λ⁻² β⁻² F_{xy}) = 0.

Since (λ⁻² β⁻¹) is a nonzero constant, this is equivalent to

(λ⁻² β⁻¹) det(λ² F′_{xy} + λ β⁻¹ (F_{yy} + βF_{xx}) + β⁻¹ F_{xy}) = 0. (80)

Note that this determinant is zero if and only if (79) holds: the matrix in (80) is the transpose of the matrix in (79), since F_{xx} and F_{yy} are symmetric. Hence if λ_i is a root of A, so is (βλ_i)⁻¹.

The previous lemma implies that the roots of A come in pairs λ_i and (βλ_i)⁻¹, and we can have at most ℓ roots smaller than one in absolute value. Indeed, if exactly ℓ of the eigenvalues are smaller than one in absolute value, then we have the following global stability result.
Theorem 205 Let F : R^{2ℓ} → R be a strictly concave, quadratic function. Let Γ(x) = R^ℓ for all x ∈ R^ℓ, and 0 < β < 1. Assume that F_{xy} and (F′_{xy} + F_{yy} + βF_{xx} + βF_{xy}) are nonsingular, and let x̄ be the unique stationary point. Assume A has ℓ characteristic roots less than one in absolute value. Then for all x_0 ∈ R^ℓ, there exists a unique solution {x_t} to the optimization problem. This sequence satisfies (75) and has lim_{t→∞} x_t = x̄.

To gain some intuition about this theorem, let ℓ = 1. Then

[ z_{t+2} ]   [ c  d ] [ z_{t+1} ]
[ z_{t+1} ] = [ 1  0 ] [ z_t     ],  (81)

where c and d are some constants. We know from our analysis of the two-dimensional case that if one of the eigenvalues is less than one in absolute value and the other one is greater than one in absolute value, then we need the following condition

b̄_{21} z_1 + b̄_{22} z_0 = 0. (82)

Note that this restriction implies a relation between z_1 and z_0. The basic idea is that for any z_0, z_1 can be chosen to satisfy this equation. This will be the case if b̄_{21} ≠ 0. Suppose not, i.e. b̄_{21} = 0. Then both (z_0 = 0, z_1 = 0) and (z_0 = 0, z_1 = ε) would satisfy this restriction, contradicting the fact that there is at most one sequence that is optimal.
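The pairing of roots in Lemma 204 is easy to verify numerically. A small sketch for the scalar case ℓ = 1, with assumed coefficient values (not from the notes):

```python
# Assumed example: build A = [[J, K], [1, 0]] from quadratic coefficients
# F_xx, F_yy, F_xy and verify that the two roots come in the pair lambda and
# 1/(beta*lambda), so that lambda_1 * lambda_2 = 1/beta.
import math

beta, F_xx, F_yy, F_xy = 0.96, -2.0, -2.0, 1.0

J = -(F_yy + beta * F_xx) / (beta * F_xy)   # J = -beta^{-1} F_xy^{-1} (F_yy + beta F_xx)
K = -F_xy / (beta * F_xy)                   # K = -beta^{-1} F_xy^{-1} F_xy' = -1/beta

# characteristic polynomial of [[J, K], [1, 0]]: lambda^2 - J*lambda - K = 0
disc = J * J + 4.0 * K
lam1 = (J - math.sqrt(disc)) / 2.0          # smaller root
lam2 = (J + math.sqrt(disc)) / 2.0          # larger root

pair_product = lam1 * lam2                  # should equal -K = 1/beta
```

Since the product of the roots equals −K = 1/β, the roots are automatically β-reciprocal in the scalar case, with one inside and one outside the unit circle here.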
10.3 Linear Approximations
This theorem gives us a powerful tool for analyzing the stability of dynamic programming problems with quadratic return functions. What happens when the return function is not quadratic and the Euler equations are not linear? The following theorem deals with this problem and develops a local stability result.

Theorem 206 Suppose the dynamic programming problem (X, Γ, A, F, β) satisfies:

• X is a convex subset of R^ℓ. Γ : X → X is a nonempty, compact-valued, and continuous correspondence.

• F : A → R is bounded and continuous, and 0 < β < 1.

• F is strictly concave,

F(θ(x, y) + (1 − θ)(x′, y′)) ≥ θF(x, y) + (1 − θ)F(x′, y′),

for all (x, y), (x′, y′) ∈ A, and for all θ ∈ (0, 1), and the inequality is strict if x ≠ x′.

• Γ is convex. For all θ ∈ [0, 1], and for all x, x′ ∈ X,

y ∈ Γ(x) and y′ ∈ Γ(x′) ⟹ θy + (1 − θ)y′ ∈ Γ(θx + (1 − θ)x′).

• F is continuously differentiable on the interior of A.

Let x̄ be a stationary point. Assume further that F is twice continuously differentiable in a neighborhood N of (x̄, x̄). Let F_{xx}, F_{xy}, and F_{yy} be the matrices of second derivatives of F evaluated at (x̄, x̄), and assume that F_{xy} and (F′_{xy} + F_{yy} + βF_{xx} + βF_{xy}) are nonsingular. Let A be defined by

[ z_{t+2} ]   [ J  K ] [ z_{t+1} ]
[ z_{t+1} ] = [ I  0 ] [ z_t     ],

where J = −β⁻¹ F⁻¹_{xy} (F_{yy} + βF_{xx}) and K = −β⁻¹ F⁻¹_{xy} F′_{xy}. Suppose A has ℓ characteristic roots less than one in absolute value. Then there exists a neighborhood U of x̄ such that if x_0 ∈ U, then {x_t} satisfies lim_{t→∞} x_t = x̄.
Consider the standard one-sector growth model, where F(x, y) = U(f(x) − y). The steady state k* is characterized by

βf′(k*) = 1,

and the Euler equation is given by

−U′(f(k_t) − k_{t+1}) + βf′(k_{t+1}) U′(f(k_{t+1}) − k_{t+2}) = 0.

The linear approximation of this equation around k* is given by

0 = −U′′(·) f′(k*) (k_t − k*) + U′′(·) (k_{t+1} − k*) + βf′′(k*) U′(·) (k_{t+1} − k*)
  + βf′(k*) U′′(·) f′(k*) (k_{t+1} − k*) − βf′(k*) U′′(·) (k_{t+2} − k*).
Substituting the steady state condition f′(k*) = 1/β,

0 = −(1/β) U′′(·) (k_t − k*) + [ (1 + 1/β) U′′(·) + βf′′(k*) U′(·) ] (k_{t+1} − k*) − U′′(·) (k_{t+2} − k*),

where U(·) indicates that the function is evaluated at the steady state values. Hence we have the second-order system given by

(k_{t+2} − k*) = −(1/β) (k_t − k*) + [ (1 + 1/β) + βf′′(k*) U′(·)/U′′(·) ] (k_{t+1} − k*).
As a first-order system we have,

[ k_{t+2} − k* ]   [ (1 + 1/β) + (f′′/f′)/(U′′/U′)   −1/β ] [ k_{t+1} − k* ]
[ k_{t+1} − k* ] = [ 1                                 0   ] [ k_t − k*     ],

where we have used βf′′U′/U′′ = (f′′/f′)/(U′′/U′), which follows from β = 1/f′(k*). Hence A is the matrix which governs the behavior of this linearized system around k*.
We could also derive A using the formulas from the previous theorem,

F_x = U′(f(x) − y) f′(x),
F_{xx} = U′′(f(x) − y) (f′(x))² + U′(f(x) − y) f′′(x),
F_y = −U′(f(x) − y),
F_{yy} = U′′(f(x) − y),
F_{yx} = −U′′(f(x) − y) f′(x),

which, after evaluating at the steady state (where f′ = 1/β), give us

F′_{xy} + F_{yy} + βF_{xx} + βF_{xy} = −U′′(·) f′(·) + U′′(·) + βU′′(·)(f′(·))² + βU′(·) f′′(·) − βU′′(·) f′(·)
                                     = βU′ f′′,

and

F_{xy} = −(1/β) U′′,
which are nonzero (nonsingular). We can then find

J = −β⁻¹ F⁻¹_{xy} (F_{yy} + βF_{xx}) = (1/U′′) (U′′ + βU′′(f′)² + βU′ f′′) = 1 + 1/β + (f′′/f′)/(U′′/U′),

and

K = −β⁻¹ F⁻¹_{xy} F′_{xy} = −1/β,

leading to

A = [ J  K ]   [ (1 + 1/β) + (f′′/f′)/(U′′/U′)   −1/β ]
    [ 1  0 ] = [ 1                                 0   ].
In order to analyze the dynamics of this problem we need to find the two eigenvalues of A. From matrix algebra we know that λ_1 + λ_2 = trace(A) and λ_1 λ_2 = det(A). Under the standard assumptions on preferences and technology (f′′/f′)/(U′′/U′) > 0, since both f′′/f′ and U′′/U′ are negative. Hence, both the trace and the determinant are positive. Therefore both of the eigenvalues must be positive.
The characteristic equation for A is given by

p(λ) = λ² − trace(A) λ + det(A) = λ² − [ (1 + 1/β) + (f′′/f′)/(U′′/U′) ] λ + 1/β.

Note that

p(0) = 1/β,

p(1) = −(f′′/f′)/(U′′/U′),

and

p(1/β) = −(1/β) (f′′/f′)/(U′′/U′).
Since p(0) > 0 while p(1) < 0 and p(1/β) < 0, one of the eigenvalues of A lies between zero and one and the other is greater than 1/β > 1. Hence the system is stable around the steady state. Furthermore, as (f′′/f′)/(U′′/U′) becomes larger, the smaller eigenvalue, which determines the convergence of the system, becomes closer to zero. Hence, the curvature in the technology speeds up convergence, and the curvature in preferences retards it.
Finally, for this example, we have

k_{t+1} − k* = λ_1 (k_t − k*),

where λ_1 is the eigenvalue that is less than 1 in absolute value. Again from our analysis of the two-dimensional case, we know that this should correspond to

k_{t+1} − k* = (b_{11}/b_{12}) (k_t − k*).

Let b = [b_{11}  b_{12}]′ be the eigenvector associated with λ_1; then

[ (1 + 1/β) + (f′′/f′)/(U′′/U′)   −1/β ] [ b_{11} ]        [ b_{11} ]
[ 1                                 0   ] [ b_{12} ] = λ_1 [ b_{12} ],

which indeed implies that

b_{11} = λ_1 b_{12}  ⟹  b_{11}/b_{12} = λ_1.
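The linearized system above can be checked against a case with a known closed form. A sketch under the assumed (hypothetical) specification U(c) = ln c and f(k) = k^α with full depreciation: there the exact policy is k′ = αβk^α, so the stable eigenvalue of A should equal α exactly.

```python
# Assumed example: log utility, Cobb-Douglas technology, full depreciation.
# Build the linearized matrix A and check lambda_small = alpha and
# lambda_1 * lambda_2 = det(A) = 1/beta.
import math

alpha, beta = 0.3, 0.96
kstar = (alpha * beta) ** (1.0 / (1.0 - alpha))   # from beta * f'(k*) = 1
cstar = kstar ** alpha - kstar                    # steady state consumption

# curvature ratio (f''/f')/(U''/U') at the steady state
ratio = ((alpha - 1.0) / kstar) / (-1.0 / cstar)

trace = (1.0 + 1.0 / beta) + ratio
det = 1.0 / beta

disc = trace * trace - 4.0 * det
lam_small = (trace - math.sqrt(disc)) / 2.0
lam_big = (trace + math.sqrt(disc)) / 2.0
```

The smaller root matches α, the local convergence rate implied by the exact policy rule, while the larger root exceeds one, confirming the saddle-path structure.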
Example 207 SLP (1989), p.157, Exercise 6.7, a–c.

a) Given

V(k) = max_{0≤y≤1} { k^{αθ} (1 − y)^{α(1−θ)} + βV(y) },

where α, θ ∈ (0, 1). The first-order and envelope conditions are given by

k^{αθ} α(1−θ) (1 − k′)^{α(1−θ)−1} (−1) + βV′(k′) = 0,

and

V′(k) = αθ k^{αθ−1} (1 − k′)^{α(1−θ)},

leading to the Euler equation

k^{αθ} α(1−θ) (1 − k′)^{α(1−θ)−1} = βαθ (k′)^{αθ−1} (1 − k′′)^{α(1−θ)}.

b) Let g(k) = k; then the necessary condition for a stationary point is given by,

(1 − θ)/(1 − k) = βθ/k  ⟹  k* = βθ/(1 − θ + βθ).

Note that k* is the unique candidate for a stationary point. It is easy to verify that it is also sufficient.
c) Let's linearize the Euler equation around k*:

0 = αθ α(1−θ) (k*)^{αθ−1} (1 − k*)^{α(1−θ)−1} (k_t − k*)
  + α(1−θ) (1 − α(1−θ)) (k*)^{αθ} (1 − k*)^{α(1−θ)−2} (k_{t+1} − k*)
  + βαθ (1 − αθ) (k*)^{αθ−2} (1 − k*)^{α(1−θ)} (k_{t+1} − k*)
  + βαθ α(1−θ) (k*)^{αθ−1} (1 − k*)^{α(1−θ)−1} (k_{t+2} − k*).

Dividing by βαθ α(1−θ) (k*)^{αθ−1} (1 − k*)^{α(1−θ)−1} and substituting for k*, we have

(k_{t+2} − k*) + D (k_{t+1} − k*) + β⁻¹ (k_t − k*) = 0,

where

D = (1 − α(1−θ))/(α(1−θ)) + (1 − αθ)/(αθβ).
Hence,

[ (k_{t+2} − k*) ]   [ −D   −β⁻¹ ] [ (k_{t+1} − k*) ]
[ (k_{t+1} − k*) ] = [ 1     0   ] [ (k_t − k*)     ],

and the behavior of the system depends on the roots of A. Note that λ_1 + λ_2 = trace(A) = −D, and λ_1 λ_2 = det(A) = β⁻¹. You can easily check what conditions we need for one of the roots to be greater than one and the other to be less than one in absolute value.
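A quick numerical check of Example 207 under assumed parameter values (α = θ = 0.5, β = 0.96, chosen for illustration, not taken from the notes):

```python
# Compute k*, the coefficient D, and the two roots of
# lambda^2 + D*lambda + 1/beta = 0 (trace = -D, det = 1/beta), and verify the
# saddle-path configuration: lambda_1 * lambda_2 = 1/beta with one root inside
# the unit circle.
import math

alpha, theta, beta = 0.5, 0.5, 0.96

kstar = beta * theta / (1.0 - theta + beta * theta)
D = (1.0 - alpha * (1.0 - theta)) / (alpha * (1.0 - theta)) \
    + (1.0 - alpha * theta) / (alpha * theta * beta)

disc = D * D - 4.0 / beta
lam1 = (-D + math.sqrt(disc)) / 2.0   # root closer to zero
lam2 = (-D - math.sqrt(disc)) / 2.0
```

For these values both roots are negative, with the product equal to 1/β as the determinant requires.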
11 Stochastic Case – Finite Number of Shocks

Consider the stochastic case of the standard one-sector growth model. Output is given by f(k)z, where z is a stochastic technology shock. The Bellman equation for this problem will be

V(k, z) = max_{0≤k′≤f(k)z} { U[f(k)z − k′] + βE[V(k′, z′)] }, (83)

where z′ is the value of the next period shock, which is unknown at the current period when k′ is chosen, and E(·) is the expected value operator. Hence, we have two new features in the stochastic case: First, the state of the problem now consists of both the current capital stock k and the current shock z. Therefore, in order to be able to analyze how the solution of this problem behaves over time, we need to keep track of both k and z. Second, we need to characterize what we mean by the expression E(·).
11.1 Preliminaries
Suppose z can only take values from a finite set Z = {z_1, z_2, ..., z_m}. In this case a probability distribution over Z is simply an assignment of probabilities (π_1, π_2, ..., π_m) to each element of Z. Since the π_i's are probabilities,

π_i = Pr(z = z_i),  π_i ≥ 0 for all i,  and  Σ_{i=1}^{m} π_i = 1.

We can then rewrite (83) as

V(k, z) = max_{0≤k′≤f(k)z} { U[f(k)z − k′] + β Σ_{i=1}^{m} V(k′, z_i) π_i }. (84)
In this setup, an assignment of probabilities to the elements of Z will also define an assignment of probabilities for any subset of Z. For any set A ⊆ Z, let I_A = {i : z_i ∈ A} be the set of indices of elements of Z that belong to the subset A. We can define Pr(z ∈ A), call it μ(A), as follows

μ(A) = Σ_{i∈I_A} π_i,  where I_A = {i : z_i ∈ A}.

Definition 208 The function μ(·) defines a probability measure over any given family 𝒵 of subsets of Z that includes ∅ and Z if it satisfies

i) μ(A) ≥ 0 for all A ∈ 𝒵; ii) μ(∅) = 0 and μ(Z) = 1; and

iii) for any A_1, A_2, ... of pairwise disjoint sets in 𝒵, μ(∪_i A_i) = Σ_i μ(A_i).
How large can we pick the family of subsets of Z and still define a probability measure over it? Obviously, when Z is finite, we can take 𝒵 to be any family of subsets of Z, including the set of all subsets of Z, and define a probability measure. If Z is not finite, then we cannot define a probability measure on all subsets of Z that has the obvious adding-up property for disjoint unions. What properties must a family of subsets have so that we can define a meaningful probability measure over it?
Definition 209 Let S be a set and 𝒮 be a family of subsets of S. 𝒮 is called a σ-algebra if

a) ∅ ∈ 𝒮, S ∈ 𝒮;
b) A ∈ 𝒮 ⟹ A^c = S \ A ∈ 𝒮;
c) A_n ∈ 𝒮, n = 1, 2, ... ⟹ ∪_{n=1}^{∞} A_n ∈ 𝒮.

Hence the main properties of a σ-algebra are closure under complementation and countable union.

Definition 210 (S, 𝒮), where 𝒮 is a σ-algebra, is called a measurable space, and any A ∈ 𝒮 is called a measurable set.

Definition 211 (S, 𝒮, μ), where 𝒮 is a σ-algebra and μ is a probability measure over 𝒮, is called a probability space.

If S is a finite set, the family of all subsets of S obviously constitutes a σ-algebra. The family of all subsets of a finite set S is called the complete σ-algebra for S, and it is the σ-algebra routinely used for finite sets.

Definition 212 Given a measurable space (S, 𝒮), a real-valued function f : S → R is a measurable function w.r.t. 𝒮 if

{s ∈ S : f(s) ≤ a} ∈ 𝒮 for all a ∈ R.

If the space is a probability space, f is called a random variable.

Note that when S is a finite set, this definition is trivial. For any a ∈ R there is some set of elements of S for which f(s) ≤ a, and this set necessarily belongs to the complete σ-algebra for S. Therefore, if S is a finite set, and 𝒮 is the complete σ-algebra for S, then any function f : S → R is measurable w.r.t. 𝒮.
Definition 213 Let Z be a finite set, and 𝒵 be the complete σ-algebra of Z. Then, given a probability space (Z, 𝒵, μ), Σ_i f(z_i) μ(z_i), where the sum runs over all z_i ∈ Z, is called the expected value of the random variable f w.r.t. the distribution μ.

Hence, the problem (83) with a finite number of shocks is a well-defined problem. We can define the probability measure μ(·) over any subset of shocks and define the expected value of any function.
11.2 Transition Functions
We would also like to analyze the problem where the expected value of the next period shock depends on the current value of the shock. An autoregressive process over the shocks would be such a case. Then a general form of (83) will be

V(x, z) = max_{y∈Γ(x,z)} { F(x, y, z) + βE[V(y, z′) | z] }, (85)

where E[V(y, z′) | z] denotes the fact that the expected value of V(y, z′) depends on z. Suppose Z is a finite set. Then, instead of assigning unconditional probabilities to each element of Z, we need to define conditional probabilities or transition probabilities. Let π_{ij} denote the probability that the next period's shock will be z_j given that today's shock is z_i. Then (85) becomes

V(x, z_i) = max_{y∈Γ(x,z_i)} { F(x, y, z_i) + β Σ_{j=1}^{m} V(y, z_j) π_{ij} }. (86)
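A problem of the form (86) can be solved by iterating on the Bellman operator over a grid. The following is a minimal value-iteration sketch under an assumed parameterization (grid, shocks, and F(x, y, z_i) = ln(z_i·k^α − k′) are illustrative choices, not from the notes):

```python
# Assumed example: X is a small capital grid, Z = {0.9, 1.1} with transition
# matrix Pi, and the return is F(k, k', z) = ln(z*k^alpha - k').
import math

alpha, beta = 0.3, 0.96
Z = [0.9, 1.1]
Pi = [[0.8, 0.2], [0.2, 0.8]]                 # pi_ij = Pr(z' = z_j | z = z_i)
K = [0.02 + 0.02 * n for n in range(20)]      # capital grid

V = [[0.0] * len(Z) for _ in K]               # V[a][i] approximates V(k_a, z_i)
for _ in range(500):                           # iterate the Bellman operator
    newV = [[0.0] * len(Z) for _ in K]
    for a, k in enumerate(K):
        for i, z in enumerate(Z):
            best = -float("inf")
            for b, kp in enumerate(K):
                c = z * k ** alpha - kp
                if c <= 0.0:
                    continue                   # infeasible choice of k'
                # F(x, y, z_i) + beta * sum_j V(y, z_j) * pi_ij, as in (86)
                val = math.log(c) + beta * sum(Pi[i][j] * V[b][j]
                                               for j in range(len(Z)))
                best = max(best, val)
            newV[a][i] = best
    V = newV
```

The resulting value function is finite on the grid and increasing in k for each shock value, as the theory predicts.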
The matrix formed by π_{ij} for all i, j is called a transition matrix:

Π = [ π_{11}  π_{12}  ...  π_{1m} ]
    [ π_{21}  π_{22}  ...  π_{2m} ]
    [ ...                         ]
    [ π_{m1}  π_{m2}  ...  π_{mm} ]  (87)

The defining properties of a transition matrix are:

i) π_{ij} ≥ 0 for all i, j,  and  ii) Σ_j π_{ij} = 1 for all i.

Each row of a transition matrix must sum to one. Note that each row of a transition matrix defines a probability measure over all subsets of Z. A transition matrix naturally defines a transition function.

Definition 214 Let Z be a finite set and 𝒵 be the complete σ-algebra of Z. A transition function Π : Z × 𝒵 → [0, 1] satisfies:

i) for all z ∈ Z, Π(z, ·) is a probability measure,
ii) for all A ∈ 𝒵, Π(·, A) is a measurable function.

Note that for every combination of a current shock and a subset of next period shocks, Π assigns a probability. Hence, a transition function does two things: First, for any given value of the current shock z, it defines a probability measure over next period shocks (each row of the matrix Π above). Second, for any given subset A of next period shocks, it gives, as a function of the current shock, the probability of moving into that subset.

Using a transition matrix Π we can define two important operators:
Definition 215 For any 𝒵-measurable function f, the Markov operator T over f is given by

(Tf)(z_i) = Σ_j f(z_j) π(z_i, z_j),  all z_i ∈ Z.

Definition 216 For any probability measure λ over (Z, 𝒵), the adjoint T* of the Markov operator over λ is given by

(T*λ)(A) = Σ_i π(z_i, A) λ(z_i),  all A ∈ 𝒵.

Note that Tf(z_i) is the expected value of f next period if the current shock is z_i, and T*λ(A) is the probability that the next period shock lies in the set A, if the current state is drawn according to the probability measure λ. Hence, T*λ defines a probability measure over the next period, if λ is the probability measure over the current period.
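Definitions 215–216 translate directly into code. A small sketch with an assumed two-state matrix (the numbers are illustrative, not from the notes):

```python
# Assumed two-state example of the Markov operator T and its adjoint T*.

Pi = [[0.9, 0.1], [0.5, 0.5]]           # pi(z_i, z_j)

def T(f):
    # (Tf)(z_i) = sum_j f(z_j) pi(z_i, z_j): expected value of f next period
    return [sum(Pi[i][j] * f[j] for j in range(len(f))) for i in range(len(f))]

def T_star(lam, A):
    # (T* lam)(A) = sum_i pi(z_i, A) lam(z_i), where A is a set of state indices
    return sum(lam[i] * sum(Pi[i][j] for j in A) for i in range(len(lam)))

f = [1.0, 2.0]
Tf = T(f)                                # [1.1, 1.5]
lam = [0.5, 0.5]
prob_state0 = T_star(lam, {0})           # 0.7: probability next state is z_1
```

T acts on functions (column-wise, like Π·f), while T* acts on distributions (row-wise, like λ·Π), which is exactly the adjoint relationship.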
11.3 Policy Functions and Transition Functions
Consider the following dynamic programming problem

V(x, z_i) = max_{y∈Γ(x,z_i)} { F(x, y, z_i) + β Σ_{j=1}^{m} V(y, z_j) π_{ij} }, (88)

where z ∈ Z = {z_1, z_2, ..., z_m}, x ∈ X, and the transition probabilities are given by the elements of the transition matrix Π = [π_{ij}].

The state of this problem is s = (x, z) ∈ X × Z, and as in the deterministic case, we would like to know how s moves over time. The behavior of z is governed by Π, which is exogenously given. On the other hand, the behavior of x is governed by the optimal policy function g : X × Z → X, and it is part of the solution. Hence, the behavior of s is governed by g and Π together. Indeed, g and Π define a transition function P over (S, 𝒮), where S = X × Z, as follows

P[(x, z_t), A × B] = Π(z_t, B) if g(x, z_t) ∈ A, and 0 if g(x, z_t) ∉ A,

where x ∈ X, z_t ∈ Z, A ∈ 𝒳, and B ∈ 𝒵.

Suppose X is also a finite set. Then 𝒳 is the complete σ-algebra for X. Given the current state (x, z_t), P[(x, z_t), A × B] gives the probability that the next period state will be in the set A × B. This happens with probability Π(z_t, B) if the optimal choice x′ = g(x, z_t) is in the set A.
Example 217 Consider the following dynamic programming problem from Greenwood, Hercowitz, and Huffman (AER, 1988)

V(k_t; ε_t) = max_{c_t, k_{t+1}, l_t, h_t} { U(c_t, l_t) + β ∫_E V(k_{t+1}; ε_{t+1}) dF(ε_{t+1} | ε_t) } (89)

s.t.

c_t = F(k_t h_t, l_t) − k_{t+1}/(1 + ε_t) + (k_t/(1 + ε_t)) (1 − δ(h_t)).

Let k ∈ K = {k_1, k_2, ..., k_m}, and ε ∈ E = {ε_r, ε_s}, where ε_r = e^{γ_r} − 1 and ε_s = e^{γ_s} − 1. The transition matrix for ε is given by

Π = [ π_{rr}  π_{rs} ]
    [ π_{sr}  π_{ss} ],

where π_{rs} = Pr[ ε′ = e^{γ_s} − 1 | ε = e^{γ_r} − 1 ], 0 ≤ π_{ij} ≤ 1, π_{rr} + π_{rs} = 1, and π_{sr} + π_{ss} = 1. Then (89) becomes
V(k_i; γ_r) = max_{c, k′, l, h} { U(c, l) + β Σ_{s=1}^{2} π_{rs} V(k′; γ_s) }

s.t.

c = F(k_i h, l) − k′ e^{−γ_r} + k_i e^{−γ_r} (1 − δ(h)).

Suppose there exists a unique value of k′ = k′(k_i, γ_r) ∈ K for all (k_i, γ_r) ∈ K × E. Hence,

Pr[ k′ = k_j | k = k_i, γ = γ_r ] = 1 for exactly one j ∈ {1, 2, ..., m}, and 0 otherwise.

Using these probabilities we can define a transition matrix P over the state space of this problem. Note that the state space is given by

S = {(k_1, γ_r), (k_2, γ_r), ..., (k_m, γ_r), (k_1, γ_s), (k_2, γ_s), ..., (k_m, γ_s)},

which has 2m elements, and P will be given by

P = [p_{ir,js}] = Pr[ k′ = k_j | k = k_i, γ = γ_r ] π_{rs} (90)

for all i, j = 1, 2, ..., m and for all r, s = 1, 2.

Then, analyzing the behavior of the state of this problem over time, as we did for the nonstochastic case in Recitations #6 and #7, amounts to analyzing how this matrix P behaves over time.
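The construction in (90) can be sketched in code. The policy function g and the shock matrix Π below are hypothetical placeholders (a two-point capital grid and two shocks), chosen only to show the bookkeeping:

```python
# Assumed example: build the joint transition matrix P of equation (90) from a
# hypothetical policy g and the shock matrix Pi.  States are ordered
# (k_1, r), (k_2, r), (k_1, s), (k_2, s).

m = 2
Pi = [[0.9, 0.1], [0.3, 0.7]]                   # pi_rs

# g[(i, r)] = index j of the optimal k' = k_j when the state is (k_i, gamma_r)
g = {(0, 0): 1, (1, 0): 1, (0, 1): 0, (1, 1): 0}

P = [[0.0] * (2 * m) for _ in range(2 * m)]
for i in range(m):
    for r in range(2):
        for j in range(m):
            for s in range(2):
                # p_{ir,js} = 1{g(k_i, gamma_r) = k_j} * pi_rs
                indicator = 1.0 if g[(i, r)] == j else 0.0
                P[r * m + i][s * m + j] = indicator * Pi[r][s]

row_sums = [sum(row) for row in P]              # each row must sum to one
```

Because exactly one k_j receives the indicator in each state, every row of P inherits the row sum of Π, so P is itself a Markov matrix on the 2m states.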
12 Markov Chains
Let Z = {z_1, z_2, ..., z_ℓ} be a finite set and Δ^ℓ be the ℓ-dimensional unit simplex. Then a probability distribution p over Z is given by a row vector in Δ^ℓ:

p ∈ Δ^ℓ = { p ∈ R^ℓ : p ≥ 0, Σ_{i=1}^{ℓ} p_i = 1 }.
Let Π = [π_{ij}] be a transition matrix (Markov chain) over Z:

π_{ij} ≥ 0,  Σ_{j=1}^{ℓ} π_{ij} = 1.
Suppose p is the probability distribution over the current state; then the distribution over the next period state is given by

p̂ = pΠ = [ p_1  p_2  ...  p_ℓ ] [ π_{11}  π_{12}  ...  π_{1ℓ} ]
                                [ π_{21}  π_{22}  ...  π_{2ℓ} ]
                                [ ...                         ]
                                [ π_{ℓ1}  π_{ℓ2}  ...  π_{ℓℓ} ]

      = [ Σ_{i=1}^{ℓ} p_i π_{i1}   Σ_{i=1}^{ℓ} p_i π_{i2}   ...   Σ_{i=1}^{ℓ} p_i π_{iℓ} ].

Note that p̂ ∈ Δ^ℓ since Σ_j p̂_j = Σ_{i=1}^{ℓ} p_i Σ_j π_{ij} = 1. Similarly, if p is the probability distribution over the current state, then the probability distribution two periods ahead is given by (pΠ)Π = p(ΠΠ) = pΠ².
As in the deterministic case, we would like to know what happens to the state of the problem as we follow the optimal policy from any given initial value. In the deterministic case the optimal policy rule g(x) provided us with all the information we need about the behavior of the model.

Suppose we are given some initial probability distribution p_0 over Z. The long-run behavior of the state is given by p_0 Π^n. On the other hand, if the initial state is z_i, the probability distribution over states n periods ahead is simply the ith row of Π^n. The steady state for this problem will obviously be a stationary probability distribution over Z.

Definition 218 A set E ⊆ Z is called an ergodic set, if Π(z_i, E) = 1 for all z_i ∈ E, and if no proper subset of E has this property.

Hence, if the current state is in an ergodic set, then with probability one the next period state will also be in this ergodic set. It is obvious that there will be a close link between ergodic sets and stationary distributions.

Definition 219 An invariant distribution p* over Z is a probability distribution such that p* = p*Π.

Our main concern is with the conditions under which p_0 Π^n → p* as n → ∞.
12.1 Examples
Example 220 Let ℓ = 2, and suppose

Π = [ 3/4  1/4 ]
    [ 1/4  3/4 ].

In this example Z itself is the only ergodic set, and Π^n converges to

lim_{n→∞} Π^n = Q = [ 1/2  1/2 ]
                    [ 1/2  1/2 ].

Then for any initial distribution p_0 the invariant distribution is given by p* = (1/2, 1/2), and each row of Q is an invariant distribution:

[ 1/2  1/2 ] [ 3/4  1/4 ]
             [ 1/4  3/4 ] = [ 1/2  1/2 ].
Example 221 Let ℓ = 3, and suppose

Π = [ 1−γ  γ/2  γ/2 ]
    [ 0    1/2  1/2 ]
    [ 0    1/2  1/2 ].

In this example, if the current state is z_1, then there is a positive probability that the next period state will be z_2 or z_3. But once we reach z_2 or z_3 there is no possibility of going back to z_1. A state such as z_1 is called a transient state. Here E = {z_2, z_3} is the only ergodic set. Moreover,

Π^n = [ (1−γ)^n  δ_n/2  δ_n/2 ]
      [ 0        1/2    1/2   ]
      [ 0        1/2    1/2   ]
  →  Q = [ 0  1/2  1/2 ]
         [ 0  1/2  1/2 ]
         [ 0  1/2  1/2 ],

where γ ∈ (0, 1) and δ_n = 1 − (1 − γ)^n. Whatever the initial distribution p_0, the invariant distribution is given by p* = (0, 1/2, 1/2). Hence, the economy will eventually leave the transient state and end up in the ergodic set:

[ 0  1/2  1/2 ] [ 1−γ  γ/2  γ/2 ]
                [ 0    1/2  1/2 ]
                [ 0    1/2  1/2 ] = [ 0  1/2  1/2 ].

Note that in these two examples there is a unique ergodic set and a unique invariant distribution. The economy will eventually enter the ergodic set regardless of where it starts.
Example 222 Suppose

Π = [ 0    Π_1 ]
    [ Π_2  0   ],

where Π_1 and Π_2 are k × (ℓ−k) and (ℓ−k) × k Markov matrices. Then

Π^{2n} = [ (Π_1 Π_2)^n   0            ]   and   Π^{2n+1} = [ 0                 (Π_1 Π_2)^n Π_1 ]
         [ 0             (Π_2 Π_1)^n  ]                    [ (Π_2 Π_1)^n Π_2   0              ].

In this example there is only one ergodic set, Z itself. But the ergodic set has cyclically moving subsets. If the system begins in C_1 = {z_1, z_2, ..., z_k} ⊆ Z, then after any even number of periods it will be back in the set C_1, and after every odd number of periods it will be in the set C_2 = Z \ C_1. The reverse happens if the system begins in the set C_2. In this example, contrary to the first two, Π^n does not converge. Let

Π_1 = Π_2 = [ 3/4  1/4 ]
            [ 1/4  3/4 ],

then, as n → ∞,

Π^{2n} → [ Q  0 ]   and   Π^{2n+1} → [ 0  Q ],   where Q = [ 1/2  1/2 ]
         [ 0  Q ]                    [ Q  0 ]              [ 1/2  1/2 ].
Note that although Π^n does not converge, the following average does

lim_{N→∞} (1/N) Σ_{n=0}^{N−1} Π^n = [ 1/4  1/4  1/4  1/4 ]
                                    [ 1/4  1/4  1/4  1/4 ]
                                    [ 1/4  1/4  1/4  1/4 ]
                                    [ 1/4  1/4  1/4  1/4 ],

and each row of this matrix is an invariant distribution:

[ 1/4  1/4  1/4  1/4 ] [ 0    0    3/4  1/4 ]
                       [ 0    0    1/4  3/4 ]
                       [ 3/4  1/4  0    0   ]
                       [ 1/4  3/4  0    0   ] = [ 1/4  1/4  1/4  1/4 ].
Example 223 Suppose

Π = [ Π_1  0   ]
    [ 0    Π_2 ],

where Π_1 and Π_2 are k × k and (ℓ−k) × (ℓ−k) Markov matrices. In this example there are two ergodic sets, E_1 = {z_1, z_2, ..., z_k} and E_2 = Z \ E_1. If the economy starts in the set E_1 it will stay there forever, and if the economy starts in the set E_2 it will also stay there forever. In this example

Π^n = [ Π_1^n  0     ]
      [ 0      Π_2^n ],

and Π^n converges if and only if Π_1^n and Π_2^n converge. Let

Π_1 = Π_2 = [ 3/4  1/4 ]
            [ 1/4  3/4 ].

Then

lim_{n→∞} Π^n = [ 1/2  1/2  0    0   ]
                [ 1/2  1/2  0    0   ]
                [ 0    0    1/2  1/2 ]
                [ 0    0    1/2  1/2 ].

In this case there are two invariant distributions, p*_1 = (1/2, 1/2, 0, 0) and p*_2 = (0, 0, 1/2, 1/2). Note also that any convex combination of p*_1 and p*_2 is also an invariant distribution. Contrary to the previous examples, where the economy will end up depends on the initial distribution p_0. If p_0 = (π, 1−π, 0, 0), then p*_1 will be the invariant distribution. On the other hand, if p_0 = (0, 0, π, 1−π), then p*_2 will be the invariant distribution. Suppose, for example, p_0 = (1/3, 1/3, 1/3, 0); then p* = (1/3, 1/3, 1/6, 1/6), which is a convex combination of p*_1 and p*_2.
Example 224 Let ℓ = 3, and suppose

Π = [ 1−γ  αγ  βγ ]
    [ 0    1   0  ]
    [ 0    0   1  ],

where α, β, γ ∈ (0, 1) and α + β = 1. In this example z_1 is a transient state. Note that if the system ever reaches z_2 or z_3 it will never leave. Such states are called absorbing. Here, we have two ergodic sets, E_1 = {z_2} and E_2 = {z_3}. Furthermore,

Π^n = [ (1−γ)^n  αδ_n  βδ_n ]
      [ 0        1     0    ]
      [ 0        0     1    ]
  →  Q = [ 0  α  β ]
         [ 0  1  0 ]
         [ 0  0  1 ],

where δ_n = 1 − (1 − γ)^n. Each row of Q is an invariant distribution:

[ 0  α  β ] [ 1−γ  αγ  βγ ]
            [ 0    1   0  ]
            [ 0    0   1  ] = [ 0  α  β ].

Our concern is to find the conditions under which there exists a unique ergodic set, and a unique invariant distribution p*. Since p* was defined to be the probability distribution for which p* = p*Π holds, it is obvious that we need a fixed point argument to establish the uniqueness of p*.
12.2 Invariant Distributions
Let Δ^ℓ be the ℓ-dimensional unit simplex. We have already shown that the transition matrix Π maps Δ^ℓ into itself. Hence, if we can show that Δ^ℓ is a complete metric space with some appropriate metric, and that Π defines a contraction mapping on Δ^ℓ, then we can use the contraction mapping theorem to show that Π has a unique fixed point in Δ^ℓ.

Let ‖·‖_1 denote the norm on R^ℓ defined by

‖x‖_1 = Σ_{i=1}^{ℓ} |x_i|.

You can easily verify that (R^ℓ, ‖·‖_1) is a complete metric space. Since Δ^ℓ is a closed subset of R^ℓ, (Δ^ℓ, ‖·‖_1) is also a complete metric space.

Let T* : Δ^ℓ → Δ^ℓ be defined as T*p = pΠ. All we need is to find conditions on Π so that T* is a contraction mapping on Δ^ℓ.
Lemma 225 Let Π be an ℓ × ℓ Markov matrix, and for j = 1, 2, ..., ℓ let ε_j = min_i π_{ij} (the minimum element in column j). If Σ_{j=1}^{ℓ} ε_j = ε > 0, then T* : Δ^ℓ → Δ^ℓ defined by T*p = pΠ is a contraction mapping with modulus 1 − ε.
Proof: Let q, p ∈ Δ^ℓ. Then

‖T*p − T*q‖_1 = ‖pΠ − qΠ‖_1 = Σ_{j=1}^{ℓ} | Σ_{i=1}^{ℓ} (p_i − q_i) π_{ij} |

  = Σ_{j=1}^{ℓ} | Σ_{i=1}^{ℓ} (p_i − q_i)(π_{ij} − ε_j) + Σ_{i=1}^{ℓ} (p_i − q_i) ε_j |

  ≤ Σ_{j=1}^{ℓ} Σ_{i=1}^{ℓ} |p_i − q_i| (π_{ij} − ε_j) + Σ_{j=1}^{ℓ} ε_j | Σ_{i=1}^{ℓ} (p_i − q_i) |

  = Σ_{i=1}^{ℓ} |p_i − q_i| Σ_{j=1}^{ℓ} (π_{ij} − ε_j) + 0

  = (1 − ε) ‖p − q‖_1,

where the last line uses Σ_{i=1}^{ℓ} (p_i − q_i) = 0, since both p and q sum to one.

Note that for ε to be positive we need at least one column in Π with all positive entries. What happens if this condition fails? If ε is not positive, then each column has at least one zero entry. In other words, for every given state there exists another state from which it is impossible to transit into the given state. It is obvious that this can lead to several ergodic sets or to cyclically moving subsets within an ergodic set.
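The contraction bound of Lemma 225 can be checked directly. A sketch with an assumed two-state matrix (illustrative numbers, not from the notes):

```python
# Assumed example: with Pi = [[0.9, 0.1], [0.5, 0.5]] the column minima are
# eps_1 = 0.5 and eps_2 = 0.1, so eps = 0.6 and T*p = p Pi should contract
# l1 distances by at least the factor 1 - eps = 0.4.

Pi = [[0.9, 0.1], [0.5, 0.5]]
eps = sum(min(Pi[i][j] for i in range(2)) for j in range(2))   # 0.6

def T_star(p):
    # p -> p Pi
    return [sum(p[i] * Pi[i][j] for i in range(2)) for j in range(2)]

def dist(p, q):
    # l1 distance
    return sum(abs(a - b) for a, b in zip(p, q))

p, q = [1.0, 0.0], [0.0, 1.0]
before = dist(p, q)                     # 2.0
after = dist(T_star(p), T_star(q))      # 0.8 = (1 - eps) * 2.0
```

For these two extreme distributions the bound holds with equality, which shows the modulus 1 − ε in the lemma is tight.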
Theorem 226 Let Z = {z_1, z_2, ..., z_ℓ} be a finite set, and let the Markov matrix Π define transition probabilities on Z. For n = 1, 2, ... let ε_j^{(n)} = min_i π_{ij}^{(n)} for j = 1, 2, ..., ℓ (where π_{ij}^{(n)} is the (i, j) element of the matrix Π^n), and let ε^{(n)} = Σ_{j=1}^{ℓ} ε_j^{(n)}. Then Z has a unique ergodic set with no cyclically moving subsets if and only if for some N ≥ 1, ε^{(N)} > 0. In this case {p_0 Π^n} converges to a unique limit p* ∈ Δ^ℓ for all p_0 ∈ Δ^ℓ, and convergence is at a geometric rate independent of p_0.

Proof: Suppose ε^{(N)} > 0. Then by the previous lemma T*^N : Δ^ℓ → Δ^ℓ defined as T*^N p = pΠ^N is a contraction mapping with modulus 1 − ε^{(N)}. Since Δ^ℓ is a closed subset of a complete metric space, by the contraction mapping theorem T*^N has a unique fixed point. Let this fixed point be p*. Moreover,

‖p_0 Π^{kN} − p*‖_1 ≤ (1 − ε^{(N)})^k ‖p_0 − p*‖_1

for k = 1, 2, ... and for all p_0 ∈ Δ^ℓ.
Now suppose {p_0 Π^n} → p* for all p_0 ∈ Δ^ℓ. Then every row of Π^n must converge to p*,

Π^n → [ p*_1  p*_2  ...  p*_ℓ ]
      [ p*_1  p*_2  ...  p*_ℓ ]
      [ ...                   ]
      [ p*_1  p*_2  ...  p*_ℓ ],

where p*_i is the unique probability of being in state i in the long run. Then for some sufficiently large N there is at least one column j for which π_{ij}^{(N)} > 0 for all i (otherwise p* would not define a probability distribution over Z), and hence ε^{(N)} ≥ ε_j^{(N)} > 0.
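The condition ε^{(N)} > 0 of Theorem 226 separates the examples above cleanly. A sketch comparing the mixing matrix of Example 220 with the cyclic matrix of Example 222 (using Π_1 = Π_2 as in that example):

```python
# Check the Theorem 226 condition: the Example 220 matrix has eps^(1) > 0,
# while the cyclic Example 222 matrix has eps^(N) = 0 for every power N,
# matching the fact that its powers never converge.

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def eps(M):
    # eps^(n) = sum over columns of the column minimum of M = Pi^n
    return sum(min(row[j] for row in M) for j in range(len(M[0])))

mixing = [[0.75, 0.25], [0.25, 0.75]]            # Example 220
cyclic = [[0.0, 0.0, 0.75, 0.25],                # Example 222 block structure
          [0.0, 0.0, 0.25, 0.75],
          [0.75, 0.25, 0.0, 0.0],
          [0.25, 0.75, 0.0, 0.0]]

eps_mixing = eps(mixing)                          # 0.5 > 0

# powers of the cyclic matrix always keep a zero block, so eps^(N) = 0
eps_cyclic = []
P = cyclic
for _ in range(6):
    eps_cyclic.append(eps(P))
    P = matmul(P, cyclic)
```

Every power of the cyclic matrix is either block-diagonal or block-anti-diagonal, so each column always contains a zero and the theorem's condition fails, exactly as the cyclically moving subsets suggest.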
13 Recursive Competitive Equilibrium
If we define the supremum function

V*(x_0) = sup_{{x_{t+1}}_{t=0}^{∞}} Σ_{t=0}^{∞} β^t F(x_t, x_{t+1}),

then this function should satisfy the following functional equation (FE),

V(x) = sup_y [ F(x, y) + βV(y) ], (FE)

subject to

y ∈ Γ(x).
The supremum function V*(x_0) tells us the infinite discounted value of following the best sequence {x_{t+1}}_{t=0}^∞. Our strategy was that, rather than finding the best sequence {x*_{t+1}}_{t=0}^∞, we can try to find the function V*(x_0) as a solution to the FE and use the associated policy rule y = g(x) to analyze the optimal sequence. We then showed that: 1) The supremum function V* has to satisfy the FE, and if there is a solution to the FE, then it is the supremum function. 2) There is indeed a solution V to the FE. We also analyzed: 1) Properties of V. 2) The dynamic behavior implied by the policy function, y = g(x).
Our analysis so far was based on a planner's (or a representative agent's) problem. Now we will talk about market economies. We will imagine a large number of representative agents interacting in a market economy and try to understand what conditions have to be satisfied for this market economy to be in equilibrium. Once we move to a market economy we have to be clear about:

• The ownership structure (who owns capital, hence who decides on capital accumulation).

• Which markets are open (goods, capital, labor).

• The economic agents (households and firms).

In general a market equilibrium will be a situation such that, for some given prices, individuals' and firms' decisions are such that markets clear. There is more than one way to think about the markets. We have already seen Arrow-Debreu economies with time-zero trade (i.e. all agents participating in a big market at time 0), and economies with sequential markets where assets are traded every period. We will now introduce a third equilibrium concept, called the recursive competitive equilibrium, that is most suitable for the analysis of dynamic economies.

Recursive competitive equilibrium is based on the idea that dynamic programming problems can be split into decisions about today and the entire future. As you remember, the key in our dynamic programming problem was the idea of the state, i.e. the variables that provide all the information we need to make decisions. In a recursive competitive equilibrium the prices are defined as functions of the state. Hence, in a recursive competitive equilibrium both individual decisions (characterized by a value function and a decision rule) and the prices will be functions of the state.
We will now define the recursive competitive equilibrium for our one-sector growth model. We will imagine an economy that is populated by a large number of identical agents (households). The households own both capital and labor. They rent their capital and labor to a single firm that produces output with a constant returns to scale technology and pays the rental rate and the wage rate to the households. Households decide how much of their total resources (which consist of undepreciated capital, rental income and wage income) to consume and how much of it to save as future capital.

We will denote an individual's capital holdings by k, and the aggregate stock of capital by K. Suppose each household has one unit of time and starts with k_0 units of capital. Let r_t and w_t be the period-t rental rate and wage rate (we do not know yet how they are determined). The households want to maximize their lifetime utility by choosing the optimal consumption/saving path given a set of prices. Hence, the agent's problem is

max_{c_t, k_{t+1}} Σ_{t=0}^∞ β^t u(c_t), (HHP)

s.t.

c_t + k_{t+1} = w_t + (1 + r_t − δ)k_t = w_t + (1 − δ)k_t + r_t k_t,

and

k_0 > 0 given.

Note that since agents do not value leisure, they will supply all of their time to the firm.
The firm faces a simple profit maximization problem each period, given by

max_{K_t, N_t} (F(K_t, N_t) − r_t K_t − w_t N_t) for all t. (FP)

The first order conditions associated with the firm's maximization problem are

r_t = F_K(K*_t, N*_t),

and

w_t = F_N(K*_t, N*_t),

where K*_t and N*_t are the aggregate capital stock and labor demanded by the firm. Note that in equilibrium it must be the case that N*_t = 1. We also know that in equilibrium the households will supply all of their capital stock to the firm, i.e. K*_t = K_t, hence

r_t = F_K(K_t, 1),

and

w_t = F_N(K_t, 1).

These FOCs define, for every value of the aggregate capital stock, a rental rate and a wage rate. We will therefore define a rental rate function r : R_+ → R_+ and a wage function w : R_+ → R_+ as

r = r(K) and w = w(K).
Therefore, in each period, if each household knows the aggregate capital stock K, then it knows exactly what the current rental rate and wage rate are. Hence, a household needs to know both k and K to be able to solve its dynamic programming problem. Furthermore, each household also needs to know how K evolves over time. The aggregate capital stock, however, evolves over time as a result of the decisions of all of the households. This implies that we will need a consistency condition.

Let's first look at the household problem. Let V(k, K) be the value function for a household with k units of capital when the aggregate capital stock is K. This value function is defined as

V(k, K) = max_{c, k' ≥ 0} [u(c) + βV(k', K')], (HHP)

subject to

c + k' = r(K)k + (1 − δ)k + w(K),

and

K' = G(K).

Note that the solution to this problem will imply a law of motion for individual capital, given by

k' = g(k, K) = arg max (HHP).

Here, G(K) is the law of motion for the aggregate capital stock. The household needs to know G in order to be able to predict K'. Of course, in equilibrium G is not an arbitrary object.
Now we can define a recursive competitive equilibrium (RCE): A RCE is a set of functions for quantities G(K) and g(k, K), for the utility level V(k, K), and for prices r(K) and w(K) such that:

• V(k, K) solves (HHP) and g(k, K) is the associated policy function.

• Prices are competitive, i.e.

r(K) = F_K(K, 1),

and

w(K) = F_N(K, 1).

• Individual and aggregate decisions are consistent, i.e.

G(K) = g(K, K) for all K.

Note the following:

1. The third condition is the key feature of a RCE. It requires that whenever the individual consumer is endowed with the aggregate capital stock, his individual behavior is exactly the same as the aggregate behavior.
2. We did not mention the price of the aggregate output. Output serves as the numeraire: all other prices are in terms of output.

3. We did not mention a market clearing condition such as

C + K' = F(K, 1) + (1 − δ)K.

This will hold by the CRS assumption, since

F(K, 1) + (1 − δ)K = r(K)K + (1 − δ)K + w(K).
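The CRS argument behind point 3 is Euler's theorem: F(K, 1) = F_K(K, 1)·K + F_N(K, 1)·1, so paying each factor its marginal product exactly exhausts output. A quick numerical check, using a Cobb-Douglas technology as an illustrative choice (the text does not commit to a specific F here):

```python
alpha = 0.3   # capital share (illustrative)

def F(K, N):
    """Cobb-Douglas CRS technology F(K, N) = K^alpha N^(1-alpha)."""
    return K**alpha * N**(1.0 - alpha)

def r(K):
    """r(K) = F_K(K, 1), the marginal product of capital at N = 1."""
    return alpha * K**(alpha - 1.0)

def w(K):
    """w(K) = F_N(K, 1), the marginal product of labor at N = 1."""
    return (1.0 - alpha) * K**alpha

# Euler's theorem for CRS: factor payments exhaust output at every K
residual = max(abs(F(K, 1.0) - (r(K) * K + w(K))) for K in [0.5, 1.0, 2.0, 10.0])
```

The residual is zero up to floating-point error, which is why the market clearing condition need not be stated separately.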
We will now look at a slightly different example. Consider an economy populated by infinitely many identical households that live forever. Each household is endowed with 1 unit of time to allocate between leisure ℓ and work h. Hence, in each period,

ℓ_t + h_t = 1.

In addition, households own an initial stock of capital k_0, which they rent to firms and augment through investment. Households' utility is given by

U(c(·), ℓ(·)) = E[ Σ_{t=0}^∞ β^t U(c_t, 1 − h_t) ], with β ∈ (0, 1),

where c(·) and ℓ(·) are infinite sequences of consumption and leisure, U is continuously differentiable in both arguments, U_1 > 0, U_2 > 0, and U is strictly concave.
Let K and H denote the aggregate capital stock and labor supply. Firms have access to a CRS production technology F(K_t, H_t) : R²_+ → R_+ with F_1 > 0, F_2 > 0, F(0, 0) = 0, and F concave in K and H separately. Moreover,

Y_t = e^{z_t} F(K_t, H_t), with z_{t+1} = ρz_t + ε_{t+1}, ρ ∈ (0, 1), and ε ~ N(0, σ_ε),

and the aggregate capital stock evolves according to

K_{t+1} = (1 − δ)K_t + X_t,

where δ ∈ (0, 1) is the depreciation rate and X_t is aggregate investment. The aggregate resource constraint is then

Y_t = C_t + X_t.

We could also write the aggregate resource constraint as

K_{t+1} + C_t = (1 − δ)K_t + Y_t.
The firm's problem is given by the following static maximization problem (note that due to CRS there is only one representative firm):

max_{K_t, H_t} (e^{z_t} F(K_t, H_t) − r_t K_t − w_t H_t) for all t,

where r_t is the rental cost of capital and w_t is the wage rate. The first order conditions for this problem are given by

r_t = e^{z_t} F_K(K_t, H_t), and w_t = e^{z_t} F_H(K_t, H_t).
The representative household's problem in this economy is given by the following Bellman equation:

V(z, k, K) = max_{c, x, h} [U(c, 1 − h) + βE[V(z', k', K') | z]]

s.t. c + x ≤ r(z, K)k + w(z, K)h,
k' = (1 − δ)k + x,
K' = (1 − δ)K + X(z, K),
z' = ρz + ε, c ≥ 0, 0 ≤ h ≤ 1,

where c, k, x are individual consumption, capital stock and investment, and K and X are the aggregate capital stock and investment. Note that r(z, K) and w(z, K) indicate the fact that these prices depend on the aggregate state. In this problem the state variable for a representative household is given by s_t = (z_t, k_t, K_t), while the aggregate state is given by S_t = (z_t, K_t).
Note that we could also write the representative household's problem as

V(z, k, K) = max_{c, k', h} [U(c, 1 − h) + βE[V(z', k', K') | z]]

s.t. c + k' ≤ r(z, K)k + w(z, K)h + (1 − δ)k,
K' = G(z, K),
z' = ρz + ε, c ≥ 0, 0 ≤ h ≤ 1,

where c and k are individual consumption and capital stock, and K is the aggregate capital stock. Again, r(z, K) and w(z, K) indicate that these prices depend on the aggregate state.

REMARK: In the first formulation the household is given the pricing functions w and r as well as an aggregate investment function X. This allows the consumer to figure out current income as well as the future aggregate capital stock K'. In the second formulation, the household is given a function G that maps (z, K) to K' directly.
Then a RCE for this economy is a value function V(z, k, K); a set of decision rules c(z, k, K), h(z, k, K), and x(z, k, K); corresponding aggregate decision rules C(z, K), H(z, K), and X(z, K); and factor prices w(z, K) and r(z, K) such that:

• V(z, k, K) solves the household problem, with the associated solutions given by c(z, k, K), h(z, k, K), and x(z, k, K).

• Firms' FOCs are satisfied, i.e.

r(z, K) = e^z F_K(K, H(z, K)), and w(z, K) = e^z F_H(K, H(z, K)).

• Individual and aggregate decisions are consistent, i.e.

c(z, K, K) = C(z, K),
h(z, K, K) = H(z, K),
x(z, K, K) = X(z, K).

• The aggregate resource constraint is satisfied:

C(z, K) + X(z, K) = Y(z, K) for all z and K.

Alternatively, a RCE for this economy is a value function V(z, k, K); a set of decision rules g(z, k, K) and h(z, k, K); corresponding aggregate decision rules G(z, K) and H(z, K); and factor prices w(z, K) and r(z, K) such that:

• V(z, k, K) solves the household problem, with the associated solutions given by g(z, k, K) and h(z, k, K).

• Firms' FOCs are satisfied, i.e.

r(z, K) = e^z F_K(K, H(z, K)), and w(z, K) = e^z F_H(K, H(z, K)).

• Individual and aggregate decisions are consistent, i.e.

g(z, K, K) = G(z, K),
h(z, K, K) = H(z, K).

• The aggregate resource constraint is satisfied:

C(z, K) + G(z, K) = Y(z, K) + (1 − δ)K for all z and K.
14 Introduction to Numerical Methods
14.1 Deterministic Case
Consider the following version of the neoclassical growth model:
max Σ_{t=0}^∞ β^t U(c_t) (P1)

subject to

c_t + k_{t+1} = f(k_t), and k_0 > 0 given.
The DP problem associated with P1 is given by

V(k_t) = max_{k_{t+1} ∈ [0, f(k_t)]} {U(f(k_t) − k_{t+1}) + βV(k_{t+1})}. (P2)

The first order and envelope conditions for this problem are given by

U'(f(k_t) − k_{t+1}) = βV'(k_{t+1}), (FOC)

and

V'(k_t) = U'(f(k_t) − k_{t+1}) f'(k_t). (Envelope)

Suppose the utility function and the production function take the following parametric forms:

U(c) = c^{1−σ}/(1 − σ),

and

f(k) = k^α.

Then the steady state level of the capital stock, k = k_t = k_{t+1}, is given by the solution to the following Euler equation,

U'(k^α − k) = βU'(k^α − k) α k^{α−1},

which gives us

k* = (αβ)^{1/(1−α)}.
Hence, given a set of parameters we can find the steady state value of the capital stock.

Remark 227 We know that, given any value of k_0 > 0, this economy converges monotonically to k*. Hence, when we choose a grid for k below, we should make sure that k* is within that grid.

In order to be able to solve P2 numerically, first assume that the capital stock can only take values in a discrete set given by

k_t ∈ K = {k_1, k_2, . . . , k_N}, ∀t.
This will make the maximization over next period's capital stock k' much easier, since we only need to search over a finite number of possibilities.

Given this discrete set, we can define the following iteration on K:

V_m(k_i) = max_{k' ∈ K, k' ≤ f(k_i)} {U(f(k_i) − k') + βV_{m−1}(k')}, i = 1, . . . , N. (P3)

Let the optimal value of k' that solves P3 be denoted by

g_m(k_i) = arg max_{k' ∈ K, k' ≤ f(k_i)} {U(f(k_i) − k') + βV_{m−1}(k')}, i = 1, . . . , N. (P4)
Start with some initial guess for the value function V(k), denoted V_0(k), on this set K. Note that for a discrete set a function is simply a list of numbers corresponding to each element in that set. Hence, we can take the N × 1 vector

V_0 = (V_0(k_1), V_0(k_2), . . . , V_0(k_N))' = (0, 0, . . . , 0)'.

Then, for i = 1, . . . , N,

V_1(k_i) = max_{k' ∈ K, k' ≤ f(k_i)} {U(f(k_i) − k') + 0}
= max {U(f(k_i) − k_1), U(f(k_i) − k_2), . . . , U(f(k_i) − k_N)}.

This way we have a maximum value for each k_i, and we can store these values as our new guess for V, the N × 1 vector

V_1 = (V_1(k_1), V_1(k_2), . . . , V_1(k_N))'.
We can also store the maximizing values of k' (assuming the maximizer is unique) as our policy function, the N × 1 vector g_1 with entries

g_1(k_1) = arg max {U(f(k_1) − k_1), U(f(k_1) − k_2), . . . , U(f(k_1) − k_N)},
g_1(k_2) = arg max {U(f(k_2) − k_1), U(f(k_2) − k_2), . . . , U(f(k_2) − k_N)},
. . .
g_1(k_N) = arg max {U(f(k_N) − k_1), U(f(k_N) − k_2), . . . , U(f(k_N) − k_N)}.
Now we can proceed to the next iteration and get, for i = 1, . . . , N,

V_2(k_i) = max_{k' ∈ K, k' ≤ f(k_i)} {U(f(k_i) − k') + βV_1(k')}
= max {U(f(k_i) − k_1) + βV_1(k_1), . . . , U(f(k_i) − k_N) + βV_1(k_N)},

where the values V_1(k_i) come from the vector that we stored in the previous iteration (note that now βV_1(k') ≠ 0).

Let

e = norm(V_{m−1} − V_m) / norm(V_m),

and continue iterating until e < ε, where ε is a small number.
If our problem satisfies some nice properties, then we know that these iterations will converge to the unique value function of the problem. In the final iteration we can save the value function and the policy function, in order to analyze how this economy behaves:

V = (V(k_1), V(k_2), . . . , V(k_N))', g = (g(k_1), g(k_2), . . . , g(k_N))', both N × 1 vectors.
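The whole iteration (P3)–(P4) can be sketched compactly. The following Python/NumPy version is a sketch, not the growmodel.m listing; the parameter values and grid size are illustrative. It builds the N × N matrix of one-period returns once and then iterates:

```python
import numpy as np

# illustrative parameters; f(k) = k^alpha, U(c) = c^(1-sigma)/(1-sigma)
alpha, beta, sigma = 0.3, 0.96, 2.0
f = lambda k: k**alpha
U = lambda c: c**(1.0 - sigma) / (1.0 - sigma)

kstar = (alpha * beta)**(1.0 / (1.0 - alpha))   # steady state, so the grid brackets it
grid = np.linspace(0.5 * kstar, 1.5 * kstar, 201)

# one-period return matrix: R[i, l] = U(f(k_i) - k_l'), -inf where infeasible
C = f(grid)[:, None] - grid[None, :]
R = np.where(C > 0, U(np.maximum(C, 1e-12)), -np.inf)

V = np.zeros(len(grid))
for _ in range(2000):
    TV = np.max(R + beta * V[None, :], axis=1)        # the iteration (P3)
    e = np.max(np.abs(TV - V)) / max(np.max(np.abs(TV)), 1e-12)
    V = TV
    if e < 1e-8:                                      # stopping rule from the text
        break
g = grid[np.argmax(R + beta * V[None, :], axis=1)]    # policy (P4)
```

On exit V holds the converged value function on the grid and g the policy; the fixed point of g sits next to k* = (αβ)^{1/(1−α)}, as Remark 227 requires.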
Remark 228 The Matlab program growmodel.m implements a discrete state space solution for the nonstochastic one-sector growth model.

Remark 229 Note that we can easily add an endogenous labor supply decision to the one-sector growth model. If we do that, we will have a static maximization problem for labor supply. Indeed, before going into the value function iteration we can find the optimal labor supply decisions for all combinations of k and k'. Then we can use these labor supply values whenever we need them. The Matlab program growmodel2.m solves the nonstochastic version of the one-sector growth model with an endogenous labor supply decision. The utility function is assumed to have the following form:

(1 − φ) log(c) + φ log(1 − n).

Note that the program starts by finding the optimal labor supply decisions and utility values for all feasible combinations of (k, k'), and then enters the value function iteration stage. It uses fsolve.m, a built-in Matlab function that finds the zero of a function of one variable. The function solvelab.m calculates, for any given value of n, the value of the FOC for n.
Remark 230 Note that we used the set K = {k_1, k_2, . . . , k_N} in the above algorithm in two places: First, we defined the value functions on K. Second, we found the maximizer in the following problem from the set K:

g_m(k_i) = arg max_{k' ∈ K, k' ≤ f(k_i)} {U(f(k_i) − k') + βV_{m−1}(k')}, i = 1, . . . , N.

Indeed, we can solve this maximization problem better. Suppose we want to find the maximizer of this problem not on the grid (i.e. we want to ignore the constraint k' ∈ K), but among any feasible values (i.e. we want to care only about the constraint k' ≤ f(k_i)). How can we do this? Note that there are two parts to this maximization problem: First, the U(f(k_i) − k') part, which is continuous and can be evaluated for any value of k'. Second, the βV_{m−1}(k') part, which is only defined on K. However, we can use linear (or other forms of) interpolation to find the value of V_{m−1} for any k, given the values of V_{m−1} on K. The basic idea of linear interpolation is shown in Figure 21. Suppose we want to evaluate

U(f(k_i) − k') + βV_{m−1}(k')

for a value of k between k_i and k_{i+1}. Then, given V_{m−1}(k_i) and V_{m−1}(k_{i+1}), V_{m−1}(k) is given by

V_{m−1}(k) = V_{m−1}(k_i) + [(V_{m−1}(k_{i+1}) − V_{m−1}(k_i)) / (k_{i+1} − k_i)] (k − k_i).
The Matlab code growmodel3.m implements this procedure. The function value.m finds U(f(k_i) − k') + βV_{m−1}(k') for any value of k'. We then use a built-in maximizer, fminbnd.m, to find the maximum of this function over k'. What is the advantage of doing this? When we use a small number of grid points, maximization over K will not provide an accurate solution of these maximization problems. The interpolation method works fine even with a small number of grid points. The figures at the end show results from growmodel.m and growmodel3.m with 21 grid points. As you can see, the decision rule from growmodel.m is not very good, whereas growmodel3.m has no problem generating a nice function for k'.
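The same idea can be sketched in Python: np.interp does the linear interpolation, and a hand-rolled golden-section search stands in for fminbnd.m. The grid and the stand-in V_{m−1} below are made up purely for illustration:

```python
import numpy as np

# a coarse grid and a stand-in for the stored V_{m-1} (illustrative, not a solved model)
grid = np.linspace(0.5, 1.5, 11)
V = 5.0 * np.log(grid)

def V_interp(k):
    """Linear interpolation of V_{m-1} between grid points (np.interp clamps outside)."""
    return np.interp(k, grid, V)

def golden_max(obj, lo, hi, tol=1e-8):
    """Golden-section search for the maximizer of a unimodal function on [lo, hi]."""
    invphi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if obj(c) >= obj(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

alpha, beta = 0.3, 0.96
f = lambda k: k**alpha
k_i = 1.0
rhs = lambda kp: np.log(f(k_i) - kp) + beta * V_interp(kp)  # log-utility objective
kp_star = golden_max(rhs, 1e-6, f(k_i) - 1e-6)              # off-grid maximizer
```

The maximizer kp_star need not lie on the grid, which is exactly the gain over grid search with few points.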
14.2 Stochastic Case
Consider now the stochastic version of the neoclassical growth model:

max Σ_{t=0}^∞ β^t U(c_t)

subject to

c_t + k_{t+1} = exp(z_t) f(k_t),

and

ln z_t = ρ ln z_{t−1} + u_t, with u_t iid N(μ_u, σ_u).

Then the dynamic programming problem is given by

V(k_t, z_t) = max_{k_{t+1} ∈ [0, exp(z_t) f(k_t)]} {U(exp(z_t) f(k_t) − k_{t+1}) + βE[V(k_{t+1}, z_{t+1})]}. (P5)

The state for this economy is now given by s_t = (k_t, z_t).

Hence, in order to be able to apply discrete state space methods, we also need to define a grid for z_t. Suppose z_t can only take values in a finite set given by

z ∈ Z = {z_1, z_2, . . . , z_M}.
[Figure 21: Linear Interpolation — V(k) between grid points k_i and k_{i+1}.]
Then one can represent the autocorrelation of z_t using a transition matrix Π:

Π =
[ π_11  π_12  . . .  π_1M ]
[ π_21   .           .    ]
[  .          .      .    ]
[ π_M1  . . .       π_MM ]  (M × M),

where

π_ij = Pr[z_{t+1} = z_j | z_t = z_i],

π_ij ≥ 0 ∀i, j, Σ_{j=1}^M π_ij = 1,

and

∀i, ∃j s.t. π_ij > 0.
Now we can rewrite our iterations on the set K × Z as, for i = 1, . . . , N, j = 1, . . . , M,

V_m(k_i, z_j) = max_{k' ∈ K, k' ≤ exp(z_j) f(k_i)} {U(exp(z_j) f(k_i) − k') + β Σ_{r=1}^M π_{jr} V_{m−1}(k', z_r)}, (P6)

and

g_m(k_i, z_j) = arg max_{k' ∈ K, k' ≤ exp(z_j) f(k_i)} {U(exp(z_j) f(k_i) − k') + β Σ_{r=1}^M π_{jr} V_{m−1}(k', z_r)}. (P7)
Since V is now a function of both k and z, on a discrete state space it will be given by an N × M matrix. Again let

V_0(k_i, z_j) =
[ V_0(k_1, z_1)  V_0(k_1, z_2)  . . .  V_0(k_1, z_M) ]
[ V_0(k_2, z_1)       .                      .        ]
[      .                                     .        ]
[ V_0(k_N, z_1)      . . .            V_0(k_N, z_M) ]
= 0 (an N × M matrix of zeros).

Then

V_1(k_i, z_j) = max_{k' ∈ K, k' ≤ exp(z_j) f(k_i)} {U(exp(z_j) f(k_i) − k') + 0},

i.e. the (i, j) entry of the N × M matrix V_1 is the maximized one-period return max_{k' ∈ K, k' ≤ exp(z_j) f(k_i)} U(exp(z_j) f(k_i) − k').
Once we have stored V_1(k_i, z_j) and g_1(k_i, z_j), we can move to the next iteration and compute

V_2(k_i, z_j) = max_{k' ∈ K, k' ≤ exp(z_j) f(k_i)} {U(exp(z_j) f(k_i) − k') + β Σ_{r=1}^M π_{jr} V_1(k', z_r)},

where the π_{jr} are given exogenously and the V_1(k_i, z_r) are the values that we stored in the previous iteration. We can keep repeating this procedure until we have convergence.
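The stochastic iteration (P6)–(P7) can be sketched in Python as well. The sketch below uses log utility with f(k) = k^α, a case with the known closed-form policy k' = αβ exp(z)k^α, which gives us something to check the numerical policy against; the grid, the z values, and the transition matrix are illustrative choices, not values from the text:

```python
import numpy as np

alpha, beta = 0.3, 0.96
zgrid = np.array([-0.1, 0.0, 0.1])
Pi = np.array([[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]])          # pi_jr, rows sum to one

kgrid = np.linspace(0.1, 0.3, 301)
N, M = len(kgrid), len(zgrid)

# one-period utility u[i, j, l] = log(exp(z_j) k_i^alpha - k_l'), -inf if infeasible
c = np.exp(zgrid)[None, :, None] * kgrid[:, None, None]**alpha - kgrid[None, None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-300)), -np.inf)

V = np.zeros((N, M))                      # V_0 = 0, an N x M matrix
for _ in range(5000):
    EV = V @ Pi.T                         # EV[l, j] = sum_r pi_jr V(k_l', z_r)
    TV = np.max(u + beta * EV.T[None, :, :], axis=2)     # iteration (P6)
    if np.max(np.abs(TV - V)) < 1e-10:
        V = TV
        break
    V = TV
g = kgrid[np.argmax(u + beta * (V @ Pi.T).T[None, :, :], axis=2)]   # policy (P7)
```

With a fine enough grid the computed g(k_i, z_j) tracks the closed-form policy to within a few grid steps.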
14.2.1 Tauchen’s Method
In the previous section we represented the autocorrelated stochastic process

z_t = ρz_{t−1} + u_t, with u_t iid N(μ_u, σ_u),

with a transition matrix Π. If the only information we had were ρ, μ_u, and σ_u, could we come up with the entries of Π? One way to achieve this is to use Tauchen's method:

1. Determine a grid for Z. Let z_1 = μ_z − qσ_z and z_M = μ_z + qσ_z, where q is some integer value, and μ_z and σ_z are the unconditional mean and standard deviation of z. Then simply pick M equally spaced points between z_1 and z_M to form

Z = {z_1, . . . , z_M}.

2. Let the distance between adjacent points be w = z_k − z_{k−1}. Then for each i, if j ∈ {2, . . . , M − 1}, the transition probabilities are given by

p_ij = Pr[z_j − w/2 ≤ ρz_i + u ≤ z_j + w/2]
= F((z_j − ρz_i + w/2)/σ_u) − F((z_j − ρz_i − w/2)/σ_u),

where

Pr[u_t ≤ a] = F(a/σ_u),

and F is the cumulative distribution function of a standard normal.
[Figure 22: Tauchen's Method — Prob(z' = z_2) = F(z_2 + w/2) − F(z_2 − w/2).]
3. Finally,

p_i1 = F((z_1 − ρz_i + w/2)/σ_u),

and

p_iM = 1 − F((z_M − ρz_i − w/2)/σ_u).
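The three steps can be collected into one small routine. A Python sketch, assuming a mean-zero process (so μ_z = 0 and σ_z = σ_u/√(1 − ρ²)); the normal CDF comes from math.erf, and M and q are illustrative choices:

```python
import numpy as np
from math import erf, sqrt

def tauchen(rho, sigma_u, M=7, q=3):
    """Discretize z' = rho*z + u, u ~ N(0, sigma_u^2), by Tauchen's method."""
    Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
    sigma_z = sigma_u / sqrt(1.0 - rho**2)             # unconditional std of z
    z = np.linspace(-q * sigma_z, q * sigma_z, M)      # step 1: the grid
    w = z[1] - z[0]
    P = np.empty((M, M))
    for i in range(M):
        for j in range(M):
            if j == 0:                                 # step 3: left tail
                P[i, j] = Phi((z[0] - rho * z[i] + w / 2) / sigma_u)
            elif j == M - 1:                           # step 3: right tail
                P[i, j] = 1.0 - Phi((z[M - 1] - rho * z[i] - w / 2) / sigma_u)
            else:                                      # step 2: interior cells
                P[i, j] = (Phi((z[j] - rho * z[i] + w / 2) / sigma_u)
                           - Phi((z[j] - rho * z[i] - w / 2) / sigma_u))
    return z, P

z, P = tauchen(0.81, 0.02)   # the bc.m persistence and innovation std from below
```

Because the interior probabilities telescope and the two tails absorb the rest, each row of P sums to one by construction.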
14.2.2 Simulations
So far we have looked at numerical solutions of the nonstochastic and stochastic versions of the one-sector growth model. We showed how we can use discrete state space methods to find V and g. Once we find the value and policy functions, we would like to know how these artificial economies behave.

In the nonstochastic case, since the capital stock is the only state variable, g(k) gives us all the information we need. Given an initial capital stock k_0, we can analyze, using g(k), how this economy evolves, i.e. how the optimal sequence {k_t}_{t=0}^∞ behaves. We know that under standard assumptions on the utility and production functions, g(k) is strictly increasing and has a unique positive stationary point defined by

k* = (f')^{−1}(1/β). (93)
If we have the following parametric forms:

U(c) = c^{1−σ}/(1 − σ),

and

f(k) = k^α + (1 − δ)k,

then

k* = [1/(αβ) − (1 − δ)/α]^{1/(α−1)}.
Indeed, growmodel.m solves this particular model for a specific parameterization (the values of σ, α, β, and δ are set in the program), which implies k* = 11.0766. The program generates the policy function, g(k), shown in Figure 23, and the transition path (for 100 periods) shown in Figure 24.
[Figure 23: Policy Function from growmodel.m — next period capital against current capital.]
Consider now the stochastic version of the one-sector growth model:

max E_0 [ Σ_{t=0}^∞ β^t u(c_t) ]
[Figure 24: Transition path from growmodel.m — capital stock over 100 periods.]
subject to

c_t + k_{t+1} = exp(z_t)F(k_t).

The Matlab code bc.m solves a particular version of this problem in which

u(c) = c^{1−σ}/(1 − σ), and F(k) = k^α,

and z_t follows

z_t = ρz_{t−1} + u_t,

where u_t is iid normal with zero mean and standard deviation σ_u. In bc.m, the parameter values are

β = 0.95, σ = 2, α = 0.3, ρ = 0.81, σ_u = 0.02.

Figure 25 shows the decision rule for k_{t+1} that bc.m generates. Note that each line is a decision rule for a given value of z.

Given these decision rules, we would like to know how this economy behaves. In this case, we have to analyze how the capital stock and the exogenous shocks behave jointly.
In order to do this, let p_{ij,rs} be the probability that the current capital stock is k_i and the current shock is z_j, and the economy moves to a state where the capital stock is k_r and the shock is z_s. That is,

p_{ij,rs} = Pr[k' = k_r, z' = z_s | k = k_i, z = z_j].
[Figure 25: Policy Function (optimal investment decision) from bc.m.]
We know that the probability of moving from z_j to z_s is given by π_{js}. Hence,

p_{ij,rs} = Pr[k' = k_r | k = k_i, z = z_j] π_{js}.

What is the probability that we end up at k_r next period, given that the current state is s = (k_i, z_j)? We know the policy function g(k, z). Hence, given s = (k_i, z_j), we know exactly what the optimal choice of the capital stock is. Then,

p_{ij,rs} = π_{js} if g(k_i, z_j) = k_r, and 0 otherwise.
Consider the following ordering of all possible states for this economy:

S = {(k_1, z_1), (k_2, z_1), . . . , (k_N, z_1), (k_1, z_2), . . . , (k_N, z_M)},

and construct the following transition matrix on S:

P =
[ p_{11,11}  p_{11,21}  . . .  p_{11,NM} ]
[ p_{21,11}      .                 .     ]
[     .                            .     ]
[ p_{NM,11}    . . .        p_{NM,NM}   ]  (NM × NM).

Suppose now p^0 is an initial N·M vector of probabilities on S.
That is,

p^0 = (p^0_{11}, p^0_{21}, . . . , p^0_{NM})' (an NM × 1 vector),

with

p^0_{ij} ≥ 0, ∀i, j, and Σ_{i=1}^N Σ_{j=1}^M p^0_{ij} = 1.

Then we can iterate on p^0 using the transition matrix P to get the next period's distribution over S,

p^1 = Pp^0.

If we keep iterating on

p^m = Pp^{m−1},

we can hope to find the stationary distribution over S,

p* = Pp*.
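Building P from a policy and iterating p^m = Pp^{m−1} is mechanical. A toy sketch (the 3-point capital grid, the shock matrix, and the policy indices are made up purely for illustration — in practice the policy indices come from the solved model):

```python
import numpy as np

Pi = np.array([[0.9, 0.1],
               [0.2, 0.8]])          # shock transition pi_js (illustrative)
pol = np.array([[1, 2],
                [1, 2],
                [0, 1]])             # pol[i, j] = r such that g(k_i, z_j) = k_r

N, M = pol.shape[0], Pi.shape[0]

# state (k_i, z_j) gets index i + N*j; P[(r,s), (i,j)] = pi_js if g(k_i, z_j) = k_r
P = np.zeros((N * M, N * M))
for i in range(N):
    for j in range(M):
        for s in range(M):
            P[pol[i, j] + N * s, i + N * j] += Pi[j, s]

p = np.full(N * M, 1.0 / (N * M))    # start from the uniform distribution
for _ in range(2000):
    p = P @ p                        # p^m = P p^{m-1}

pk = p.reshape(M, N).sum(axis=0)     # integrate out the shocks: distribution over k
```

The last line is the "integrate it over productivity shocks" step discussed below: pk is the long-run distribution of capital.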
For the stochastic model, p* characterizes the stationary distribution over k. Note that p* is N·M dimensional. If we integrate it over the productivity shocks, we get a distribution over k. This distribution is the equivalent of k* in the nonstochastic version: there isn't a single level k* to which the economy converges, but there is a stationary probability distribution over k. Figure 26 shows the long run distribution of the capital stock from bc.m.

Grid for k. In the nonstochastic version of this problem we first found k*, and since we know that the economy converges to k*, we constructed a grid around k*. In the stochastic version, we can use p* to find an appropriate grid for capital. Suppose you start from some grid for k and plot the long run distribution of k (as in Figure 26). If your grid is not wide enough, you will observe probability mass piling up at the end points of your grid. Figure 27 shows results from bc.m with a narrow grid for k. If this occurs, you can expand your grid until you capture the ergodic set for k.
In this particular example there is another way to construct the grid for k, which is used in bc.m. Note that if σ = 0 (linear utility), the value function for this problem is

V(k_t, z_t) = max {(exp(z_t)k_t^α − k_{t+1}) + βE_t[V(k_{t+1}, z_{t+1})]}.

The solution can be characterized by the following Euler equation,

1 = αβE_t[exp(z_{t+1}) k_{t+1}^{α−1}]
= αβE_t[exp(ρz_t + u) k_{t+1}^{α−1}].
[Figure 26: Long Run Distribution of k from bc.m.]
[Figure 27: Long run distribution of capital with a narrow grid for k.]
Since the only random variable here is u, the above equation can be rearranged as

1 = αβE_t[exp(ρz_t) exp(u) k_{t+1}^{α−1}]
= αβ exp(ρz_t) k_{t+1}^{α−1} E_t[exp(u)].

Taking the logarithm of both sides, we get

0 = ln(αβ) + ρz_t + (α − 1) ln k_{t+1} + ln E_t[exp(u)].

Now notice that the moment generating function of a normal distribution is

E(e^{tx}) = M(t) = exp(μt + σ²t²/2).

Then,

E_t[exp(u)] = M(1) = exp(σ_u²/2),

and we obtain

0 = ln(αβ) + ρz_t + (α − 1) ln k_{t+1} + σ_u²/2.

Therefore,

ln k_{t+1} = [ln(αβ) + ρz_t + σ_u²/2] / (1 − α).

Now it immediately follows that

E(ln k_{t+1}) = [ln(αβ) + σ_u²/2] / (1 − α),

and

Var(ln k_{t+1}) = Var(ρz_t) / (1 − α)² = ρ²σ_u² / [(1 − α)²(1 − ρ²)].

So we can construct a grid for k around E(ln k_{t+1}). Although σ > 0 in bc.m, this approach still provides us with a nice grid for k.
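The mean and variance formulas can be checked by simulation. A sketch using the bc.m parameter values quoted above (the long simulated path exists only for this check):

```python
import numpy as np

alpha, beta, rho, sigma_u = 0.3, 0.95, 0.81, 0.02   # bc.m values from the text
rng = np.random.default_rng(1)

# simulate z_t = rho z_{t-1} + u_t
T = 200_000
u = rng.normal(0.0, sigma_u, T)
z = np.empty(T)
z[0] = 0.0
for t in range(1, T):
    z[t] = rho * z[t - 1] + u[t]

# ln k_{t+1} = [ln(alpha*beta) + rho*z_t + sigma_u^2/2] / (1 - alpha)
lnk = (np.log(alpha * beta) + rho * z + sigma_u**2 / 2.0) / (1.0 - alpha)

mean_theory = (np.log(alpha * beta) + sigma_u**2 / 2.0) / (1.0 - alpha)
var_theory = (rho**2 * sigma_u**2 / (1.0 - rho**2)) / (1.0 - alpha)**2
```

The sample mean and variance of ln k_{t+1} line up with the two formulas, so centering the grid at E(ln k_{t+1}) with a width of a few standard deviations is a reasonable starting point.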
Generating Time Series from the Model. Once we have a well-defined grid for k, we can use the optimal decision rule, g(k, z), to generate time series from the model that we can compare with the data. Suppose we want to create time series for T periods. We can do this as follows:

1. First, we need to generate a time series for z. In order to do that:

• Pick a starting value for z, say z_1 = z_j.

• Determine z_2 using [π_{j1}, π_{j2}, π_{j3}, . . . , π_{jM}]. Note that this is a probability distribution over Z. To determine the value of z_2, we can use a random number generator. For example, the Matlab command "rand" gives a uniform random number between 0 and 1. Suppose we draw such a random number; call it u. Now if u is less than π_{j1}, then we set z_2 = z_1. If it is larger than π_{j1} but less than π_{j1} + π_{j2}, we set z_2 = z_2 (note that the subscript on the left hand side refers to time and the one on the right hand side refers to the index in the set Z), and so on. This way we can determine z_2. Suppose, given u, we have z_2 = z_5; then we use [π_{51}, π_{52}, π_{53}, . . . , π_{5M}] and a new uniform random number to determine z_3, etc.

• Often you generate a series longer than T and discard the first set of simulations, to avoid the effect of the initial z.

2. Once you have T periods of z, you can start from some value of k (for example, you can set k_1 to the expected value of k using p*), and, given k_1 and z_1, get k_2 = g(k_1, z_1). Then you can get k_3 = g(k_2, z_2), etc. This way we can generate time series for y_t, c_t, k_t that we can compare with the data.

3. This way we can generate all the variables that we care about. Usually this procedure is repeated several times to get multiple simulations for each variable we care about.
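Steps 1 and 2 can be sketched as follows (Python; the chain, the lengths, and the policy function are illustrative stand-ins — in practice g(k, z) comes from the value function iteration):

```python
import numpy as np

rng = np.random.default_rng(0)

zgrid = np.array([-0.02, 0.0, 0.02])          # illustrative shock grid
Pi = np.array([[0.90, 0.10, 0.00],
               [0.05, 0.90, 0.05],
               [0.00, 0.10, 0.90]])

def simulate_chain(Pi, j0, T, rng):
    """Length-T index path: invert the CDF of row path[t-1] with a uniform draw."""
    path = np.empty(T, dtype=int)
    path[0] = j0
    for t in range(1, T):
        u = rng.uniform()
        cdf = np.cumsum(Pi[path[t - 1]])
        path[t] = min(int(np.searchsorted(cdf, u)), Pi.shape[0] - 1)  # clamp rounding
    return path

T = 150
idx = simulate_chain(Pi, 1, T, rng)
z = zgrid[idx[50:]]                           # discard the first 50 draws as burn-in

alpha, beta = 0.3, 0.96
g = lambda k, zv: alpha * beta * np.exp(zv) * k**alpha   # stand-in policy function

k = np.empty(len(z) + 1)
k[0] = (alpha * beta)**(1.0 / (1.0 - alpha))  # start near the deterministic steady state
for t in range(len(z)):
    k[t + 1] = g(k[t], z[t])
y = np.exp(z) * k[:-1]**alpha                 # implied output series
```

Repeating this with fresh draws gives the multiple simulations mentioned in step 3.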
After finding V, g, and p*, bc.m generates simulated data. It first generates 100-period-long series of productivity shocks (20 such series in total). Figure 28 shows examples of such time series for z.

Once we have a time series for z, we can use it to generate time series for the other variables we care about, using g(k, z). In order to do this, let {z_t}_{t=0}^{100} be one of the time series bc.m generates. Furthermore, let k_0 be the mean of the capital stock in Figure 26. Then y_0 = exp(z_0)k_0^α, k_1 = g(k_0, z_0), y_1 = exp(z_1)k_1^α, k_2 = g(k_1, z_1), and so on. This way we can construct a series for y_t. Figure 29 shows one such series for y_t that bc.m generates.

The next question is how we can compare the time series that our artificial economy generates with the U.S. data.
Hodrick-Prescott Filter. In order to compare our simulated data with the U.S. data, we will first look at the U.S. data. The figure below shows U.S. real quarterly GDP between 1947 and 2003. At first glance it does not look anything like the picture that comes out of our model. But our model does not have growth, so we have to remove the growth component from the data. Furthermore, we are mainly interested in business cycle fluctuations, i.e. fluctuations that occur with a frequency of 3 to 5 years.

The common procedure in the real business cycle literature is to use the Hodrick-Prescott (HP) filter, which works as follows. Consider a series {Y_t}_{t=1}^T. Suppose Y_t consists of two components: a trend component τ_t and a cyclical component Y_t^d:

Y_t = τ_t + Y_t^d.
[Figure 28: Simulated z series from bc.m (the first, second, tenth, and twentieth of the twenty simulated series).]
[Figure 29: Simulated y_t from bc.m (one simulation).]
[Figure 30: Deviations from Trend, Simulated Data.]
The HP filter picks τ_t from the following minimization problem:

min_{{τ_t}_{t=1}^T} Σ_{t=1}^T (Y_t − τ_t)² + λ Σ_{t=2}^{T−1} [(τ_{t+1} − τ_t) − (τ_t − τ_{t−1})]².

In this problem there is a trade-off between the extent to which the trend component tracks the actual series and the smoothness of the trend. If λ = 0, then Y_t = τ_t and Y_t^d = 0, while as λ → ∞, τ_t approaches a linear trend. For quarterly data it is customary to set λ = 1600. This way the HP filter eliminates fluctuations at frequencies lower than 32 quarters (8 years).

The following pictures show the actual data together with the trend component, as well as deviations from the trend.

Suppose we apply the same procedure to our simulated data. Figure 30 shows the cyclical component in the model. The basic comparison is between the cyclical component in the data and the cyclical component in the model.
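The minimization has a closed-form solution: stacking the first order conditions gives (I + λK'K)τ = Y, where K is the (T−2) × T second-difference matrix. A Python sketch using a direct dense solve (fine for short series; production code would exploit the banded structure):

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Return (trend, cycle): solve the HP FOC (I + lam*K'K) tau = y,
    with K the (T-2) x T second-difference matrix."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = np.zeros((T - 2, T))
    for t in range(T - 2):
        K[t, t:t + 3] = [1.0, -2.0, 1.0]   # tau_{t} - 2 tau_{t+1} + tau_{t+2}
    trend = np.linalg.solve(np.eye(T) + lam * (K.T @ K), y)
    return trend, y - trend
```

A linear series has zero second differences, so it is returned as pure trend for any λ, and setting λ = 0 returns the series itself as the trend — the two limiting cases noted above.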
14.2.3 Calibration — Prescott (1986)
In order to simulate the model, we need to choose functional forms for $u$ and $f$ and to specify parameter values. We also have to specify the stochastic structure for $z$. How can we do this? Prescott (1986) uses long-run (secular) growth observations to choose functional forms and parameters:
Observation 1: In the U.S. economy, the capital and labor shares of output have been relatively constant, while their relative prices change over time. This suggests a Cobb-Douglas production function
$$zf(k, n) = zk^{1-\theta}n^{\theta},$$
with labor share parameter $\theta$. Furthermore, the average value of the labor share in total output in the U.S. has been about 64%. Hence, we set
$$\theta = 0.64.$$
Observation 2: In the U.S. economy, real wages have increased over time, yet per capita market hours have been relatively constant. This suggests a unit elasticity of substitution between consumption and leisure. Hence, the following functional form is appropriate:
$$u(c, 1-n) = \frac{\left(c^{1-\varphi}(1-n)^{\varphi}\right)^{1-\gamma} - 1}{1-\gamma},$$
where $1/\gamma$ is the intertemporal elasticity of substitution.
There exists a range of estimates for $\gamma$ in the micro studies. Prescott (1986) picks $\gamma = 1$, which implies
$$u(c, 1-n) = (1-\varphi)\log(c) + \varphi\log(1-n).$$
Observation 3: Given the Cobb-Douglas production function, we have
$$\log(z_{t+1}) - \log(z_t) = (\log(Y_{t+1}) - \log(Y_t)) - (1-\theta)(\log(K_{t+1}) - \log(K_t)) - \theta(\log(N_{t+1}) - \log(N_t)),$$
where $Y_t$, $K_t$ and $N_t$ are aggregate output, capital and labor. Setting $\theta = 0.64$, one can construct a time series for $z_t$ and analyze its statistical properties.
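This construction of the Solow residual can be sketched as follows (the data arrays below are hypothetical; only the formula $\ln z_t = \ln Y_t - (1-\theta)\ln K_t - \theta\ln N_t$ comes from the text):

```python
import numpy as np

def solow_residual(Y, K, N, theta=0.64):
    """log z_t = log Y_t - (1 - theta) log K_t - theta log N_t,
    implied by the production function z * k^(1-theta) * n^theta."""
    Y, K, N = (np.asarray(x, dtype=float) for x in (Y, K, N))
    return np.log(Y) - (1.0 - theta) * np.log(K) - theta * np.log(N)
```

Applying `np.diff` to the returned series gives exactly the growth-accounting identity of Observation 3.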
Remaining Parameters: we still need to determine $\beta$, $\delta$, and $\varphi$. In order to choose these parameters, consider again the DP given by
$$V(k_t, z_t) = \max_{k_{t+1},\, n_t} \left\{ (1-\varphi)\log(c_t) + \varphi\log(1-n_t) + \beta E_t V(k_{t+1}, z_{t+1}) \right\}, \quad (P1)$$
subject to
$$c_t + k_{t+1} \leq z_t k_t^{1-\theta} n_t^{\theta} + (1-\delta)k_t,$$
$$z_{t+1} = \rho z_t + \varepsilon_{t+1}.$$
Then, the FOC for $k_{t+1}$ is given by
$$(1-\varphi)\frac{1}{c_t} = \beta E_t V_1(k_{t+1}, z_{t+1}),$$
the envelope condition is given by
$$V_1(k_t, z_t) = (1-\varphi)\frac{1}{c_t}\left( z_t(1-\theta)k_t^{-\theta}n_t^{\theta} + 1 - \delta \right),$$
while the FOC for $n_t$ is
$$(1-\varphi)\frac{1}{c_t}\, z_t\, \theta k_t^{1-\theta} n_t^{\theta-1} = \varphi\frac{1}{1-n_t}.$$
Finally, we can write the following Euler equation that determines the accumulation of capital:
$$(1-\varphi)\frac{1}{c_t} = \beta E_t\left[ (1-\varphi)\frac{1}{c_{t+1}}\left( z_{t+1}(1-\theta)k_{t+1}^{-\theta}n_{t+1}^{\theta} + 1 - \delta \right) \right].$$
Consider now the nonstochastic version of the Euler equation:
$$\frac{c_{t+1}}{c_t} = \beta\left( (1-\theta)k_{t+1}^{-\theta}n_{t+1}^{\theta} + 1 - \delta \right) \implies \frac{c_{t+1}}{c_t} = \beta\left( (1-\theta)\frac{k_{t+1}^{1-\theta}n_{t+1}^{\theta}}{k_{t+1}} + 1 - \delta \right).$$
Since in the steady state $c_{t+1} = c_t = c$ and $k_{t+1} = k_t = k$, this equation becomes
$$1 = \beta\left( (1-\theta)\frac{y}{k} + 1 - \delta \right)$$
or
$$\frac{1}{\beta} + \delta - 1 = (1-\theta)\frac{y}{k}, \quad (C1)$$
which can also be written as
$$\frac{1}{\beta} = \underbrace{1 + MPK - \delta}_{\text{return on capital}}. \quad (C1')$$
Next consider the FOC for labor, which is given by
$$(1-\varphi)\frac{1}{c_t}\theta k_t^{1-\theta}n_t^{\theta-1} = \varphi\frac{1}{1-n_t},$$
and can be written as
$$(1-\varphi)\frac{1}{c_t}\theta\frac{k_t^{1-\theta}n_t^{\theta}}{n_t} = \varphi\frac{1}{1-n_t},$$
or, in the steady state,
$$\theta\frac{y}{c}\frac{1-n}{n} = \frac{\varphi}{1-\varphi}. \quad (C2)$$
• In the U.S. economy, the rate of return on capital is about 4%. This suggests
$$\frac{1}{\beta} = 1 + 0.04 \implies \beta = 0.96.$$
• Given $\theta = 0.64$, $\beta = 0.96$, and $k/y = 2.6$ (which is the ratio in the U.S. economy), and using
$$\frac{1}{\beta} + \delta - 1 = (1-\theta)\frac{y}{k},$$
we have
$$\delta = 0.36\left(\frac{1}{2.6}\right) + 1 - \frac{1}{0.96} \approx 0.097.$$
• Finally, given $\theta = 0.64$, $n = 1/3$, and $y/c = 1.3$ (again the ratio for the U.S. economy), and using
$$\theta\frac{y}{c}\frac{1-n}{n} = \frac{\varphi}{1-\varphi},$$
we have
$$\frac{\varphi}{1-\varphi} = 0.64(1.3)\frac{2/3}{1/3} = 1.664,$$
which implies
$$\varphi = \frac{1.664}{2.664} = 0.625.$$
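The three calibration steps can be reproduced numerically; a sketch using the targets quoted in the text (a 4% return on capital, $k/y = 2.6$, $n = 1/3$, $y/c = 1.3$):

```python
theta = 0.64                 # labor share (Observation 1)
beta = round(1.0 / 1.04, 2)  # 1/beta = 1 + 0.04, rounded to 0.96 as in the text

# (C1): 1/beta + delta - 1 = (1 - theta) * y/k, with k/y = 2.6
k_over_y = 2.6
delta = (1.0 - theta) / k_over_y + 1.0 - 1.0 / beta

# (C2): theta * (y/c) * (1 - n)/n = phi / (1 - phi), with n = 1/3 and y/c = 1.3
n, y_over_c = 1.0 / 3.0, 1.3
ratio = theta * y_over_c * (1.0 - n) / n
phi = ratio / (1.0 + ratio)
```

Running this reproduces the values derived above: $\beta = 0.96$, $\delta \approx 0.097$, $\varphi \approx 0.625$.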
Remark 231 See Cooley and Prescott (1995) for a much more detailed discussion of the basic business cycle regularities and the calibration procedure.
The standard RBC model relies on a single productivity shock to generate business cycles. The success of the model is judged by its performance in generating unconditional second moments of detrended data. Tables 1 and 2 (taken from Hansen and Wright (1992)) show the basic set of observations that a standard RBC model tries to replicate. These observations are: a) the relative volatility of different variables (output, consumption, investment, productivity, and labor input); b) contemporaneous correlations between variables (correlations of consumption, investment and labor input with output, as well as the correlation between productivity and labor input).
Table 3 (again taken from Hansen and Wright (1992)) shows the performance of the standard RBC model. A comparison between Tables 1 and 2 and Table 3 reveals that: a) the standard model is not able to generate the level of volatility we observe in the data; b) it does relatively well with respect to relative volatilities; c) it does a poor job of generating the low correlation between hours and productivity (which is 0.93 in the standard model).
Remark 232 Note that the setup that Hansen and Wright (1992) consider is exactly the one we outlined above.
Charts 1 and 2 (from Hansen and Wright (1992)) show the relation between productivity and hours in the U.S. data and in the standard model. The standard model generates a positive relation between hours and productivity: productivity shocks operate as labor demand shocks along a stable labor supply curve. Furthermore, with the current calibration, the labor supply is not very elastic, so the relation in Chart 2 is quite steep. The low elasticity is responsible for the low volatility of output in Table 3. How can we improve upon this? There are two obvious candidates:
• increase the elasticity of labor supply;
• introduce labor demand shocks.
Hansen and Wright (1992) discuss different ways to achieve these goals.
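These comparisons boil down to computing, on logged and detrended series, standard deviations relative to output and contemporaneous correlations with output. A minimal sketch (function and variable names are ours, not from Hansen and Wright (1992)):

```python
import numpy as np

def business_cycle_moments(series, output_key="y"):
    """Relative volatilities std(x)/std(y) and correlations corr(x, y)
    for detrended (e.g. HP-filtered, logged) series stored in a dict."""
    y = np.asarray(series[output_key], dtype=float)
    moments = {}
    for name, x in series.items():
        x = np.asarray(x, dtype=float)
        moments[name] = (np.std(x) / np.std(y), np.corrcoef(x, y)[0, 1])
    return moments
```

Applied once to detrended U.S. data and once to simulated model data, the output is directly comparable to the entries in Tables 1-3.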
14.2.4 Linear Decision Rules
In the previous sections we analyzed how we can simulate artificial data from a stochastic dynamic general equilibrium model and looked at one possible way to confront the model with the data (by looking at unconditional second moments). The key step in this analysis was an approximation of the decision function $g(k_t, z_t)$. We approximated the value function $V$ and the associated decision rule $g$ on a finite number of grid points. Obviously, this is not the only way to find an approximation for $g$.
An alternative way is to linearize the model dynamics around the nonstochastic steady state and try to find a decision rule of the following sort:
$$g(k_t, z_t) = ak_t + bz_t.$$
In some particular cases (indeed, in one) such linear rules emerge without any approximation.
Remark 233 Unfortunately, the notation changes as we go along in these notes; e.g., here we use K to denote the per capita capital stock, while we used k for this variable above.
Consider again the standard stochastic one-sector growth model. Output is produced according to
$$Y_t = K_t^{\alpha}(A_t N_t)^{1-\alpha},$$
where $Y$, $K$, $A$, and $N$ are output, the capital stock, the labor-augmenting productivity shock, and labor input, respectively. The law of motion for the capital stock is given by
$$K_{t+1} = K_t(1-\delta) + I_t,$$
and the aggregate resource constraint is
$$Y_t = C_t + I_t.$$
Hence, the wage rate and the gross rate of return on capital are given by
$$w_t = (1-\alpha)K_t^{\alpha}(A_t N_t)^{-\alpha}A_t = (1-\alpha)\left(\frac{K_t}{A_t N_t}\right)^{\alpha}A_t,$$
and
$$R_t = 1 + \alpha\left(\frac{A_t N_t}{K_t}\right)^{1-\alpha} - \delta.$$
The economy is populated by identical agents (population is normalized to 1) who want to maximize
$$U = E_0\sum_{t=0}^{\infty}\beta^t u(C_t, 1-N_t),$$
where the momentary utility function is
$$u(C, 1-N) = \ln(C) + h\ln(1-N), \quad h > 0.$$
The productivity shock $A_t$ follows
$$\ln A_t = A + gt + a_t,$$
where
$$a_t = \rho a_{t-1} + \varepsilon_t,$$
and $\varepsilon_t$ is white noise.
Let $\delta = 1$. Then,
$$K_{t+1} = I_t = Y_t - C_t,$$
and
$$R_t = \alpha\left(\frac{A_t N_t}{K_t}\right)^{1-\alpha}.$$
The Euler equation for the representative agent is given by
$$\frac{1}{C_t} = \beta E_t\left[\frac{R_{t+1}}{C_{t+1}}\right].$$
We will try a guess-and-verify approach to the solution. Let $s_t$ be the agents' savings rate at time $t$, so that
$$C_t = (1-s_t)Y_t.$$
Taking the logarithm of the Euler equation,
$$-\ln(C_t) = \ln(\beta) + \ln E_t\left[\frac{R_{t+1}}{C_{t+1}}\right],$$
and substituting our guess, we have
$$-\ln[(1-s_t)Y_t] = \ln(\beta) + \ln E_t\left[\frac{R_{t+1}}{(1-s_{t+1})Y_{t+1}}\right].$$
Substituting $R_{t+1} = \alpha Y_{t+1}/K_{t+1}$ and $K_{t+1} = s_t Y_t$, we have
$$-\ln(1-s_t) - \ln(Y_t) = \ln(\beta) + \ln E_t\left[\frac{\alpha}{s_t Y_t(1-s_{t+1})}\right],$$
which becomes
$$\ln(s_t) - \ln(1-s_t) = \ln(\beta) + \ln(\alpha) + \ln E_t\left[\frac{1}{1-s_{t+1}}\right].$$
There is nothing stochastic in this equation, so $E_t\left[\frac{1}{1-s_{t+1}}\right] = \frac{1}{1-s_{t+1}}$, and we have
$$\ln(s) - \ln(1-s) = \ln(\beta) + \ln(\alpha) - \ln(1-s),$$
which gives us a constant savings rule:
$$s = \alpha\beta.$$
Hence, in this economy,
$$K_{t+1} = \alpha\beta Y_t = \alpha\beta\left(K_t^{\alpha}(A_t N_t)^{1-\alpha}\right),$$
which can be written simply in terms of $K_{t+1}$ and $A_t$, once we solve for $N_t$. To find $N_t$, note that the static FOC for $N_t$ is
$$\frac{C_t}{1-N_t} = \frac{w_t}{h}.$$
Substituting $C_t = (1-s_t)Y_t$ and $w_t$, it is easy to show that
$$N_t = N = \frac{1-\alpha}{(1-\alpha) + h(1-s)}.$$
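The guess can be checked numerically. Since $R_{t+1}/C_{t+1} = \alpha/[(1-s)K_{t+1}]$ is known at date $t$, the Euler equation should hold exactly, period by period, along any simulated path. A sketch (the parameter values $\alpha = 0.36$, $\beta = 0.96$, $h = 1.6$, $\rho = 0.95$ are illustrative, not from the notes; the trend in productivity is dropped for simplicity):

```python
import numpy as np

alpha, beta_, h = 0.36, 0.96, 1.6               # assumed parameter values
s = alpha * beta_                               # constant savings rate s = alpha*beta
N = (1 - alpha) / ((1 - alpha) + h * (1 - s))   # constant labor input

rng = np.random.default_rng(0)
T, rho = 200, 0.95
a = np.zeros(T)
for t in range(1, T):                           # a_t = rho a_{t-1} + eps_t
    a[t] = rho * a[t - 1] + 0.01 * rng.standard_normal()
A = np.exp(a)                                   # productivity level (no drift)

K = np.empty(T + 1)
K[0] = 0.2
Y = np.empty(T)
for t in range(T):
    Y[t] = K[t] ** alpha * (A[t] * N) ** (1 - alpha)
    K[t + 1] = s * Y[t]                         # K_{t+1} = alpha*beta*Y_t (delta = 1)
C = (1 - s) * Y

# Euler residual: 1/C_t - beta * R_{t+1}/C_{t+1}, with R_{t+1} = alpha*Y_{t+1}/K_{t+1};
# it is zero state by state because R_{t+1}/C_{t+1} is known at t.
R_over_C = alpha * Y[1:] / K[1:T] / C[1:]
resid = 1 / C[:-1] - beta_ * R_over_C
```

The residual is zero up to floating-point error, confirming that $s = \alpha\beta$ is exact, not an approximation.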
Let's put the pieces together now. First note that $\ln(Y_t)$ is
$$\ln(Y_t) = \alpha\ln(K_t) + (1-\alpha)(\ln A_t + \ln N_t) \quad (94)$$
$$= \alpha\ln(s) + \alpha\ln(Y_{t-1}) + (1-\alpha)(\ln A_t + \ln N)$$
$$= \alpha\ln(s) + \alpha\ln(Y_{t-1}) + (1-\alpha)(A + gt) + (1-\alpha)a_t + (1-\alpha)\ln(N),$$
which can be written as
$$\ln(Y_t) - gt = \alpha\ln(s) + \alpha\ln(Y_{t-1}) - \alpha gt + (1-\alpha)[A + \ln(N)] + (1-\alpha)a_t$$
$$= \alpha\ln(s) + \alpha[\ln(Y_{t-1}) - g(t-1)] + (1-\alpha)[A + \ln(N)] - \alpha g + (1-\alpha)a_t$$
$$= \underbrace{\alpha\ln(s) + (1-\alpha)[A + \ln(N)] - \alpha g}_{X} + \alpha[\ln(Y_{t-1}) - g(t-1)] + (1-\alpha)a_t.$$
Then, on the nonstochastic (i.e., with $a_t = 0$) balanced growth path, we have
$$\ln(Y_t) - gt = X + \alpha[\ln(Y_t) - g - g(t-1)] = X + \alpha[\ln(Y_t) - gt],$$
where we used the fact that $\ln(Y_t) - \ln(Y_{t-1}) = g$. Let $\ln(Y_t^*)$ be the value of $\ln(Y_t)$ along a nonstochastic growth path, which is given by
$$\ln(Y_t^*) = \frac{X}{1-\alpha} + gt.$$
Finally, let $\hat{y}_t$ be the deviation of $\ln(Y_t)$ from $\ln(Y_t^*)$, i.e.
$$\hat{y}_t = \ln(Y_t) - \ln(Y_t^*) = \ln(Y_t) - \frac{X}{1-\alpha} - gt. \quad (95)$$
Then,
$$\hat{y}_{t-1} = \ln(Y_{t-1}) - \frac{X}{1-\alpha} - g(t-1),$$
which gives us
$$\alpha\ln(Y_{t-1}) = \alpha\hat{y}_{t-1} + \frac{\alpha}{1-\alpha}X + \alpha g(t-1). \quad (96)$$
Substituting (95) and (96) into (94), we have
$$\hat{y}_t = \alpha\ln(s) + \alpha\hat{y}_{t-1} + \frac{\alpha}{1-\alpha}X + \alpha g(t-1) + (1-\alpha)[A + gt + a_t + \ln(N)] - \frac{1}{1-\alpha}X - gt$$
$$= \alpha gt + (1-\alpha)gt - gt + \alpha\hat{y}_{t-1} + (1-\alpha)a_t$$
$$= \alpha\hat{y}_{t-1} + (1-\alpha)a_t.$$
Then, it can be shown that the deviation of the logarithm of output from its steady state follows an AR(2) process:
$$\hat{y}_t = \alpha\hat{y}_{t-1} + (1-\alpha)(\rho a_{t-1} + \varepsilon_t)$$
$$= \alpha\hat{y}_{t-1} + \rho(\hat{y}_{t-1} - \alpha\hat{y}_{t-2}) + (1-\alpha)\varepsilon_t$$
$$= (\alpha+\rho)\hat{y}_{t-1} - \alpha\rho\hat{y}_{t-2} + (1-\alpha)\varepsilon_t.$$
Note that in this simple model, the linear decision rules allowed us to show that $\hat{y}_t$ follows a particular stochastic process. Hence, linearized models provide us with additional tools to confront the model with the data. Based on this simple model, a natural question we can ask, for example, is whether the deviations of the logarithm of GDP in the U.S. data follow an AR(2) process.
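The equivalence between the two representations of $\hat{y}_t$ can be verified by simulation: generate $a_t$ and $\hat{y}_t$ directly, then rebuild $\hat{y}_t$ from the AR(2) recursion alone. A sketch (the values $\alpha = 0.36$, $\rho = 0.9$ are assumed for illustration):

```python
import numpy as np

alpha, rho = 0.36, 0.9
rng = np.random.default_rng(1)
T = 500
eps = 0.01 * rng.standard_normal(T)

# direct simulation: a_t = rho a_{t-1} + eps_t,  yhat_t = alpha yhat_{t-1} + (1-alpha) a_t
a = np.zeros(T)
yhat = np.zeros(T)
for t in range(1, T):
    a[t] = rho * a[t - 1] + eps[t]
    yhat[t] = alpha * yhat[t - 1] + (1 - alpha) * a[t]

# AR(2) recursion: yhat_t = (alpha+rho) yhat_{t-1} - alpha*rho yhat_{t-2} + (1-alpha) eps_t
yhat2 = np.zeros(T)
yhat2[1] = (1 - alpha) * a[1]        # seed with the direct value (a_0 = yhat_0 = 0)
for t in range(2, T):
    yhat2[t] = ((alpha + rho) * yhat2[t - 1]
                - alpha * rho * yhat2[t - 2]
                + (1 - alpha) * eps[t])
```

The two paths coincide, so the AR(2) form carries exactly the same information as the $(a_t, \hat{y}_t)$ system.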
Method of Undetermined Coefficients Campbell (1994) analyzes linearized versions of the stochastic one-sector growth model that do not allow for analytic solutions. His strategy is first to linearize the model around a nonstochastic growth path. His basic setup is very similar to the one we analyzed above, without the restriction of $\delta = 1$. Output is produced according to
$$Y_t = (A_t N_t)^{\alpha}K_t^{1-\alpha},$$
where $Y$, $K$, $A$, and $N$ are output, the capital stock, the labor-augmenting productivity shock, and labor input, respectively. The law of motion for the capital stock is given by
$$K_{t+1} = K_t(1-\delta) + Y_t - C_t \quad (97)$$
and
$$\ln(A_t) = \phi\ln(A_{t-1}) + \varepsilon_t. \quad (98)$$
He starts with an economy in which the labor input is fixed, i.e. $N_t = 1$, and identical agents who want to maximize
$$U = E_0\sum_{i=0}^{\infty}\beta^i\frac{C_{t+i}^{1-\gamma}}{1-\gamma}.$$
The Euler equation for this economy is then given by
$$C_t^{-\gamma} = \beta E_t\left[C_{t+1}^{-\gamma}R_{t+1}\right]. \quad (99)$$
His strategy is first to log-linearize equations (97) and (99) to arrive at
$$k_{t+1} \approx \lambda_1 k_t + \lambda_2 a_t + (1-\lambda_1)c_t, \quad (100)$$
and
$$E_t\Delta c_{t+1} = \sigma\lambda_3 E_t(a_{t+1} - k_{t+1}), \quad (101)$$
where $x = \ln(X)$ for any variable of interest, $\lambda_1$, $\lambda_2$ and $\lambda_3$ are constants that depend on the model's parameters, and $\sigma = 1/\gamma$.
Then equations (100), (101), and (98) define a log-linear system. His strategy is to go from these three equations to linear decision rules for $k_{t+1}$ and $c_t$ that have the following forms:
$$k_{t+1} = \eta_{kk}k_t + \eta_{ka}a_t,$$
$$c_t = \eta_{ck}k_t + \eta_{ca}a_t.$$
The idea is to find these four coefficients (which is where the term "method of undetermined coefficients" comes from) so that they are consistent with (100), (101), and (98). Once we have these linear rules, we can: 1) use them to simulate data from our model economies (exactly as we did when we solved the model directly on a grid); 2) use these linear decision rules to determine the statistical properties that the model variables have to follow (e.g., in this case $y_t$ is an ARMA(2,1) process).
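To see the mechanics, plug the guessed rules into (100), (101), and (98) and match coefficients: the $k_t$ terms give a quadratic in $\eta_{kk}$ (we pick the stable root), and the $a_t$ terms then determine $\eta_{ca}$ and $\eta_{ka}$ linearly. A sketch of this calculation (the $\lambda$ values used in testing are placeholders, not Campbell's calibration):

```python
import numpy as np

def solve_undetermined(lam1, lam2, lam3, sigma, phi):
    """Solve for (eta_kk, eta_ka, eta_ck, eta_ca) in
         k_{t+1} = eta_kk k_t + eta_ka a_t,   c_t = eta_ck k_t + eta_ca a_t,
    consistent with
         (100) k_{t+1} = lam1 k_t + lam2 a_t + (1-lam1) c_t,
         (101) E_t dc_{t+1} = sigma lam3 E_t (a_{t+1} - k_{t+1}),
         (98)  a_t = phi a_{t-1} + eps_t."""
    # k_t terms of (100) and (101) give:
    #   eta_kk^2 - (1 + lam1 - (1-lam1)*sigma*lam3) * eta_kk + lam1 = 0
    roots = np.roots([1.0, -(1.0 + lam1 - (1.0 - lam1) * sigma * lam3), lam1])
    eta_kk = roots[np.argmin(np.abs(roots))]        # pick the stable root
    eta_ck = (eta_kk - lam1) / (1.0 - lam1)         # from (100), k_t terms
    # a_t terms give a linear equation for eta_ca (and then eta_ka):
    eta_ca = ((sigma * lam3 * phi - lam2 * (eta_ck + sigma * lam3))
              / ((1.0 - lam1) * (eta_ck + sigma * lam3) + phi - 1.0))
    eta_ka = lam2 + (1.0 - lam1) * eta_ca
    return eta_kk, eta_ka, eta_ck, eta_ca
```

Any solution returned can be checked by substituting back into (100) and (101) and verifying that the coefficients on $k_t$ and $a_t$ match on both sides.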
Can we generalize this procedure? This is done by Christiano (1992). His strategy is first to show that a linearized stochastic model can be written as
$$E_t\left[\sum_{i=0}^{m}\alpha_i z_{t+m-1-i} + \sum_{i=0}^{m-1}\beta_i s_{t+m-1-i}\right] = 0, \quad (102)$$
where $z_t$ is an endogenous state variable (like the capital stock), $s_t$ is an exogenous shock, and the $\alpha_i$'s and $\beta_i$'s are constants that depend on the model parameters. Suppose the shock follows an AR(1) process:
$$s_t = \rho s_{t-1} + \varepsilon_t. \quad (103)$$
Then, Christiano (1992) shows that (102) and (103) can be written as
$$z_t = Az_{t-1} + Bs_t,$$
where $A$ and $B$ are matrices (if $z_t$ contains more than one variable).
Remark 234 Obviously, this note constitutes a very preliminary introduction to the numerical methods that are used to study dynamic macroeconomic models. You can check Judd (1998) and Marimon and Scott (1999) for a more thorough analysis.
References
[1] Campbell, John. 1994. "Inspecting the Mechanism." Journal of Monetary Economics, 33, 463-506.
[2] Christiano, Lawrence J. 2001. "Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients." Mimeo, Northwestern University.
[3] Cooley, Thomas F. and Prescott, Edward C. 1995. "Economic Growth and Business Cycles." In Frontiers of Business Cycle Research, edited by Thomas F. Cooley, Princeton University Press.
[4] Hansen, Gary. 1985. "Indivisible Labor and the Business Cycle." Journal of Monetary Economics, 16, 309-327.
[5] Hansen, Gary and Wright, Randall. 1992. "The Labor Market in Business Cycle Theory." Federal Reserve Bank of Minneapolis Quarterly Review, Spring.
[6] Judd, Kenneth L. 1998. Numerical Methods in Economics. MIT Press.
[7] Marimon, Ramon and Scott, Andrew (eds.). 1999. Computational Methods for the Study of Dynamic Economies. Oxford University Press.
[8] Prescott, Edward C. 1986. "Theory Ahead of Business Cycle Measurement." Federal Reserve Bank of Minneapolis Quarterly Review, Fall.
[9] Tauchen, George. 1985. "Finite State Markov Chain Approximations to Univariate and Vector Autoregressions." Economics Letters, 20, 177-181.