
Output Dynamics with Microfoundations

From Solow-Swan to Ramsey-Cass-Koopmans & OLG

Mausumi Das

Lecture 4 & 5, EC004, DSE

24 May, 2022

Das (Lecture 4 & 5, EC004, DSE) Solow to RCK & OLG 24 May, 2022 1 / 15
Dynamic Optimization in Discrete Time: A Cake Eating Problem
In the last class we analysed a simple dynamic optimization problem where the household consumes out of a fixed initial endowment (a cake) of size W_0.

We first wanted to solve this problem using the Lagrangean method; hence we had restricted ourselves to a finite time horizon such that t = 0, 1, 2, ..., T.

The corresponding dynamic optimization problem was specified as:

Max_{ {c_t}_{t=0}^{T}, {W_{t+1}}_{t=0}^{T} } ∑_{t=0}^{T} β^t u(c_t);   u′ > 0; u″ < 0; 0 < β < 1

subject to

(i) c_t ≤ W_t for all t = 0, 1, 2, ..., T;
(ii) W_{t+1} = W_t − c_t; c_t > 0, W_{t+1} > 0 for t = 0, ..., T; W_0 given.

In addition we had assumed that lim_{c→0} u′(c) = ∞.
Cake Eating Problem: Direct Approach (Sequence Problem)

Solving the problem directly by using the standard Lagrangean method, we had derived a set of first order conditions, called the Euler Equations and the Transversality Condition, which are given below:

u′(c_t) = βu′(c_{t+1}) for all t = 0, 1, 2, ..., T − 1.   (1)

β^T u′(c_T) · W_{T+1} = 0   (2)

Since β^t u′(c_t) > 0 for all t ≤ T, by complementary slackness, the Transversality Condition for this problem got reduced to the following Terminal Condition:

W_{T+1} = 0   (2′)

A Cake Eating Problem: Direct Approach (Contd.)
In effect the solution to the problem has now been pinned down by:
an initial condition: W_0 (given);
a terminal condition: W_{T+1} = 0;
a difference equation for c_t: u′(c_t) = βu′(c_{t+1}) for all t;
and a difference equation for W_t: W_{t+1} = W_t − c_t for all t.
Together they determine the optimal time path of consumption {c_t}_{t=0}^{T} (the control variable), and the resulting time path of the size of the cake {W_{t+1}}_{t=0}^{T} (the state variable).
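These four conditions make the problem a two-point boundary value problem, which can be illustrated numerically by a shooting method: guess c_0, iterate the Euler equation and the resource constraint forward, and adjust c_0 until the terminal condition W_{T+1} = 0 is hit. A minimal sketch, under illustrative assumptions (log utility, so that the Euler equation reads c_{t+1} = βc_t, with β = 0.9, W_0 = 1, T = 10):

```python
# Shooting: find c0 such that iterating the Euler equation and the
# resource constraint forward hits the terminal condition W_{T+1} = 0.
# Illustrative assumptions: log utility, beta = 0.9, W0 = 1, T = 10.
beta, W0, T = 0.9, 1.0, 10

def terminal_wealth(c0):
    """Iterate c_{t+1} = beta*c_t and W_{t+1} = W_t - c_t; return W_{T+1}."""
    c, W = c0, W0
    for t in range(T + 1):
        W = W - c          # resource constraint
        c = beta * c       # Euler equation under log utility
    return W

# Bisect on c0: too high a c0 exhausts the cake early; too low leaves some over.
lo, hi = 0.0, W0
for _ in range(60):
    mid = (lo + hi) / 2
    if terminal_wealth(mid) > 0:
        lo = mid
    else:
        hi = mid
c0 = (lo + hi) / 2
```

The bisection converges to c_0 = (1 − β)W_0/(1 − β^{T+1}), here ≈ 0.1457, which is exactly what the closed-form solution of the log-utility example later in these notes delivers.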
Plugging these solutions back into the utility function, we can then derive the maximized value of the total life-time utility generated from a given initial size of the cake (W_0). Let this maximized value be denoted by V(W_0):

V(W_0) ≡ ∑_{t=0}^{T} β^t u(c_t)

We call this the Value Function. The value function is exactly analogous to the indirect utility function in consumer theory.
Solution Path of the Cake Eating Problem: An Exercise
As an illustration, you can find out the precise solution paths for a finite-time cake-eating problem with a specific utility function given by u(c_t) = log c_t.

For this utility function, the Euler equation gives us the following difference equation:

c_{t+1} = βc_t for all t   (3)

On the other hand, we have another difference equation from the resource constraint:

W_{t+1} = W_t − c_t for all t   (4)

These equations represent a system of two difference equations (linear and first order) in two variables (c_t and W_t), along with two boundary conditions: W_0 (given) and W_{T+1} = 0.

Solving, you should be able to derive the precise time paths of c_t and W_t, and also the maximised value of lifetime utility: V(W_0).

(Try this as homework.)
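A small numerical sketch you can use to check your homework answer: equations (3) and (4) plus the two boundary conditions imply c_t = β^t c_0 with ∑_{t=0}^{T} c_t = W_0, which pins down c_0. The code below (illustrative parameters: β = 0.9, W_0 = 1, T = 10) builds both time paths and V(W_0).

```python
# Numerical check for the finite-horizon log-utility cake-eating problem.
# Equations (3)-(4) with W_0 given and W_{T+1} = 0 imply c_t = beta^t * c_0
# and sum_t c_t = W_0, hence c_0 = (1 - beta) * W_0 / (1 - beta^(T+1)).
from math import log

beta, W0, T = 0.9, 1.0, 10

c0 = (1 - beta) * W0 / (1 - beta ** (T + 1))
c = [beta ** t * c0 for t in range(T + 1)]            # optimal consumption path
W = [W0]
for t in range(T + 1):
    W.append(W[t] - c[t])                             # resource constraint (4)
V0 = sum(beta ** t * log(c[t]) for t in range(T + 1)) # maximised lifetime utility
```

The last element of W is W_{T+1}, which comes out as (numerically) zero, confirming the terminal condition; the path c satisfies the Euler equation (3) by construction.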
Cake Eating Problem: Infinite Horizon
Let us now extend the time horizon of the cake eating problem to infinity.

The corresponding dynamic optimization problem will now be specified as:

Max_{ {c_t}_{t=0}^{∞}, {W_{t+1}}_{t=0}^{∞} } ∑_{t=0}^{∞} β^t u(c_t);   u′ > 0; u″ < 0; 0 < β < 1

subject to

(i) c_t ≤ W_t for all t ≥ 0;
(ii) W_{t+1} = W_t − c_t; c_t > 0, W_{t+1} > 0 for all t ≥ 0; W_0 given.

The direct solution method should work here exactly in the same way as before, except that we no longer have a well-defined terminal time. Manipulating an infinite number of resource constraints and an infinite number of FOCs could be inconvenient.

So we will now discuss the other technique - dynamic programming - and see how the same problem can be solved using this technique.
Cake Eating in Infinite Horizon: Dynamic Programming
Dynamic programming effectively converts the infinite horizon problem into a two-period problem by appropriately rewriting the objective function.

Let {c_t}_{t=0}^{∞} and {W_{t+1}}_{t=0}^{∞} denote the (unique) optimal solution paths for the infinite horizon problem defined above.

If these are indeed the solution to the infinite horizon maximization problem, then we can write the Value Function at time 0 as:

V(W_0) ≡ ∑_{t=0}^{∞} β^t u(c_t)

Using the constraint function, we can rewrite the value function at time 0 as:

V(W_0) ≡ Max_{ {W_{t+1}}_{t=0}^{∞} } ∑_{t=0}^{∞} β^t u(W_t − W_{t+1});   W_0 (given)

= ∑_{t=0}^{∞} β^t U(W_t, W_{t+1});   W_{t=0} = W_0 (given),

where U(W_t, W_{t+1}) ≡ u(W_t − W_{t+1}).
Value Function: (Contd.)
Suppose we were to repeat this exercise again in the next time period, i.e., at t = 1.

Now of course the time period t = 1 will be counted as the initial point, and the corresponding initial value of the state variable will be W_1.

Let τ denote the new time subscript which counts time with t = 1 as the initial point, going forward up to ∞. By construction then, τ = t − 1.

Note that when we set up the new optimization exercise in terms of τ (relevant for t = 1, 2, ..., ∞), it looks exactly analogous to the one defined in terms of t, except that the initial endowment differs.

In particular, the new value function in terms of τ will look as follows:

V(W_1) ≡ Max_{ {Ŵ_{τ+1}}_{τ=0}^{∞} } ∑_{τ=0}^{∞} β^τ u(Ŵ_τ − Ŵ_{τ+1});   Ŵ_{τ=0} = W_1 (given)

= ∑_{τ=0}^{∞} β^τ U(Ŵ_τ, Ŵ_{τ+1});   Ŵ_{τ=0} = W_1 (given)
Value Function & Principle of Optimality:

It is important to note here that, by the Principle of Optimality, if {W_{t+1}}_{t=0}^{∞} was an optimal solution to the problem that maximises

Max_{ {W_{t+1}}_{t=0}^{∞} } ∑_{t=0}^{∞} β^t U(W_t, W_{t+1});   (W_{t=0} = W_0 given),   (A)

then {Ŵ_{τ+1}}_{τ=0}^{∞} such that Ŵ_{τ=1} = W_{t=2}, Ŵ_{τ=2} = W_{t=3}, ... must be a solution to the problem that maximises

Max_{ {W_{τ+1}}_{τ=0}^{∞} } ∑_{τ=0}^{∞} β^τ U(W_τ, W_{τ+1});   (W_{τ=0} = W_{t=1} given).   (B)

Otherwise {W_{t+1}}_{t=0}^{∞} could not have been an optimal solution to problem (A) to begin with!

Value Function & Bellman Equation:

Noting the relationship between t and τ, and noting the principle of optimality, we can immediately see that the two value functions are related in the following way:

V(W_0) = ∑_{t=0}^{∞} β^t U(W_t, W_{t+1})

= U(W_0, W_1) + β ∑_{t=1}^{∞} β^{t−1} U(W_t, W_{t+1})

= U(W_0, W_1) + β ∑_{τ=0}^{∞} β^τ U(Ŵ_τ, Ŵ_{τ+1})   (Principle of Optimality)

= U(W_0, W_1) + βV(W_1).

The above relationship is the basic functional equation in dynamic programming which relates two successive value functions recursively. It is called the Bellman Equation.
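This recursion can be verified numerically. For log utility the infinite-horizon cake-eating problem happens to admit a closed-form value function and policy (W_1 = βW_0); both are assumed here without derivation, purely so that the identity V(W_0) = U(W_0, W_1) + βV(W_1) can be checked in a few lines:

```python
# Checking the Bellman recursion V(W0) = U(W0, W1) + beta * V(W1).
# The closed form below is assumed (it can be confirmed by guess-and-verify):
#   V(W) = log((1 - beta) * W) / (1 - beta) + beta * log(beta) / (1 - beta)**2,
# with optimal W1 = beta * W0, i.e. c0 = (1 - beta) * W0.
from math import log

beta, W0 = 0.9, 1.0

def V(W):
    return log((1 - beta) * W) / (1 - beta) + beta * log(beta) / (1 - beta) ** 2

W1 = beta * W0                       # optimal continuation cake size (assumed)
lhs = V(W0)
rhs = log(W0 - W1) + beta * V(W1)    # U(W0, W1) + beta * V(W1)
gap = abs(lhs - rhs)
```

The gap between the two sides is zero up to floating-point error, exactly as the recursive derivation above requires.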
Value Function & Bellman Equation: (Contd.)
Recursive Property:

The Bellman Equation is a recursive equation because it expresses the value function as a function of itself:

V(W_0) = U(W_0, W_1) + βV(W_1).
Value Function & Bellman Equation: (Contd.)
The Bellman equation plays a crucial role in the dynamic programming technique.

Since W_1 is an optimal value itself, we can write the Bellman equation as:

V(W_0) = Max_{W_1} [U(W_0, W_1) + βV(W_1)];   W_0 given.

Thus it reduces the initial optimization problem with an infinite number of variables and an infinite number of constraints to a simple optimization exercise involving only one variable (W_1).

Notice however that it breaks down the infinite horizon dynamic optimization problem into a two-stage problem:
Given W_0, what value of W_1 should you choose;
What is the associated value of the continuation path (V(W_1)).

Thus in choosing the optimal value of W_1, you have to take into account not only the value of the instantaneous utility U(W_0, W_1), but also the value of the continuation path V(W_1).
Value Function & Policy Function (Contd.)

Since the above functional relationship holds for any two successive values of the state variable, we can write the Bellman Equation more generally as:

V(W_t) = Max_{W_{t+1}} [U(W_t, W_{t+1}) + βV(W_{t+1})] for any t ≥ 0.

We shall now use the Bellman equation to derive the solution path of the cake eating problem under infinite horizon.
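One standard way to put the Bellman equation to work numerically is value function iteration: start from a guess V ≡ 0 on a grid of cake sizes and apply the Bellman operator repeatedly until it converges. The sketch below rests on several illustrative assumptions: log utility, β = 0.9, a log-spaced grid with linear interpolation in log W, and a search over savings shares W_{t+1}/W_t restricted to [0.50, 0.99] (a range that contains the optimum for this β).

```python
# Value function iteration for the infinite-horizon cake-eating problem,
#   V(W) = max_{W'} [ u(W - W') + beta * V(W') ],
# under illustrative assumptions: log utility, beta = 0.9, log-spaced grid.
from math import exp, log

beta = 0.9
n = 100
step = (log(1.0) - log(1e-4)) / (n - 1)
logs = [log(1e-4) + step * i for i in range(n)]   # grid of log cake sizes
grid = [exp(x) for x in logs]

def interp(V, W):
    """Evaluate V at cake size W by linear interpolation in log W."""
    x = log(W)
    if x <= logs[0]:
        return V[0]
    if x >= logs[-1]:
        return V[-1]
    j = min(int((x - logs[0]) / step), n - 2)
    w = (x - logs[j]) / step
    return (1 - w) * V[j] + w * V[j + 1]

# Candidate savings shares W'/W; the true optimum (beta) lies inside this range.
shares = [0.50 + 0.005 * k for k in range(99)]

V = [0.0] * n                                     # initial guess V = 0
for _ in range(200):                              # apply the Bellman operator
    V = [max(log(W * (1 - s)) + beta * interp(V, s * W) for s in shares)
         for W in grid]

def policy_share(W):
    """Optimal W'/W implied by the converged value function."""
    return max((log(W * (1 - s)) + beta * interp(V, s * W), s) for s in shares)[1]
```

For log utility the converged policy should be close to the known closed-form answer W_{t+1} = βW_t: for example, policy_share(1.0) comes out near 0.9.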

Solving the Cake Eating Problem through Dynamic Programming

Recall that the Bellman equation for the cake-eating problem at time 0 is given by:

V(W_0) ≡ Max_{W_1} [u(W_0 − W_1) + βV(W_1)]

We now only have to solve the above (reduced form) maximization problem with respect to W_1.

From the FONC (with respect to W_1):

u′(W_0 − W_1) = βV′(W_1)

Plugging back the value of c_0 (= W_0 − W_1), the FOC becomes:

u′(c_0) = βV′(W_0 − c_0)   (5)

Cake Eating: Dynamic Programming (Contd.)

It seems that from the above FOC we would be able to easily find the optimal c_0 once we know the exact specification of u(c) and the given value of W_0.

The matter is not that simple though. There is a catch!

Recall that unlike the direct approach, where we had first solved for the optimal consumption stream and then calculated the value function by plugging these optimal values into the objective function, here we have just used the knowledge that such a value function exists, without explicitly deriving it.

Hence we do not know the exact form of V(W_0) or V(W_1) or, for that matter, V′(W_1)!

Is there a way to derive the value of V′(W_1) without actually solving the entire problem?

(If not, then we are back to the direct approach; the dynamic programming approach then will have no special appeal!)
