Dynamic Programming 1
University of Hohenheim
Chapter 10
Dynamic Programming
Overview
Learning goals
Students
- can explain the difference between continuous and discrete time,
- can solve a two-period and a multi-period consumption problem in discrete time,
- can define a Bellman equation and solve a dynamic programming problem by applying a three-step procedure,
- can solve problems from optimal control theory.
Preliminaries
[Figure: a function of time plotted over continuous time and over discrete time]
Consider the plot of a function over continuous and over discrete time (e.g., GDP or population). In continuous time the function is defined at every instant and can be differentiated with respect to time; in discrete time it is defined only at separate dates, so it cannot.
Growth Rates
$X(t) = X_0 e^{gt}.$
Note that $g$ is the growth rate of $X$, which we denoted by $g_X$.
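The growth-rate formula can be checked numerically: since $\log X(t) = \log X_0 + gt$, the slope of $\log X$ with respect to time recovers $g$. A minimal sketch (the values of $X_0$ and $g$ below are illustrative, not from the text):

```python
import math

# Continuous-time exponential growth: X(t) = X0 * exp(g * t).
X0 = 100.0   # initial level (assumed value, e.g. GDP)
g = 0.02     # growth rate g_X (assumed value)

def X(t):
    """Level of X at time t under continuous exponential growth."""
    return X0 * math.exp(g * t)

# d/dt log X(t) = g, so a finite difference of log X recovers g:
h = 1e-6
numerical_g = (math.log(X(10 + h)) - math.log(X(10))) / h
print(round(numerical_g, 4))  # close to 0.02, up to floating-point error
```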
Lifetime utility in the two-period problem is $U = u(c_t, c_{t+1}).$
The Lagrangian is
$$\mathcal{L} = u(c_t, c_{t+1}) + \lambda \left[ a_t + w_t + \frac{w_{t+1}}{1+r} - c_t - \frac{c_{t+1}}{1+r} \right].$$
Ignoring non-negativity constraints for now, the FOCs are
$$\mathcal{L}_{c_t} = u_{c_t}(c_t, c_{t+1}) - \lambda = 0,$$
$$\mathcal{L}_{c_{t+1}} = u_{c_{t+1}}(c_t, c_{t+1}) - \frac{\lambda}{1+r} = 0,$$
$$\mathcal{L}_{\lambda} = a_t + w_t + \frac{w_{t+1}}{1+r} - c_t - \frac{c_{t+1}}{1+r} = 0.$$
Plugging $u_{c_t}(c_t, c_{t+1}) = \lambda$ from the first FOC into the second FOC yields the Euler equation
$$u_{c_t}(c_t, c_{t+1}) = (1+r)\, u_{c_{t+1}}(c_t, c_{t+1}).$$
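A minimal numeric sketch of the two-period problem, assuming the separable log utility $u(c_t, c_{t+1}) = \log c_t + \beta \log c_{t+1}$ (so the Euler condition from the two FOCs reads $1/c_t = (1+r)\beta/c_{t+1}$); $\beta$ and all parameter values below are illustrative assumptions, not from the text:

```python
# Two-period consumption: maximize log(c_t) + beta*log(c_{t+1})
# subject to c_t + c_{t+1}/(1+r) = a_t + w_t + w_{t+1}/(1+r).
a_t, w_t, w_t1 = 10.0, 50.0, 40.0   # assets and wages (assumed values)
r = 0.05                             # interest rate (assumed value)
beta = 0.96                          # discount factor (assumed value)

# Lifetime wealth from the budget constraint (third FOC):
W = a_t + w_t + w_t1 / (1 + r)

# Euler equation: 1/c_t = (1+r)*beta/c_{t+1}  =>  c_{t+1} = beta*(1+r)*c_t.
# Substituting into the budget c_t + c_{t+1}/(1+r) = W gives c_t*(1+beta) = W:
c_t = W / (1 + beta)
c_t1 = beta * (1 + r) * c_t

# Verify the Euler equation and the budget constraint hold numerically.
assert abs(1 / c_t - (1 + r) * beta / c_t1) < 1e-12
assert abs(c_t + c_t1 / (1 + r) - W) < 1e-12
print(round(c_t, 3), round(c_t1, 3))
```

With log utility the solution is available in closed form; for general $u$ the same two conditions would be solved numerically.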