
Notes on Dynamic Optimization in Continuous Time

Martin Scheffel

October 16, 2007

For dynamic optimization in continuous time, many books and articles use the
Hamiltonian. This approach was developed by Pontryagin (1962) using the calculus
of variations. You can find everything you need on this topic in an excellent small
book by Alpha Chiang. Moreover, the example of the neoclassical growth model is well
presented in the book by Robert Barro and Xavier Sala-i-Martin. The full references are:

Barro, Robert and Xavier Sala-i-Martin (2003): Economic Growth.
Chiang, Alpha (1999): Elements of Dynamic Optimization.

1 Task
Consider the following optimization problem in continuous time
max_{c(t)} ∫_0^T F(s(t), c(t), t) dt                                  (1)

subject to

ṡ(t) = f (s(t), c(t), t), ∀t ∈ [0, T ] (2)

s(0) given

where the functions s(t) and c(t) denote the paths of the state and control variables.
For simplicity assume that

1. equation (2) holds with equality

2. the initial condition s(0) = s0 is given

3. the time horizon T is given

2 Pontryagin’s Maximum Principle
To begin with, we want to use the familiar Lagrangean to solve this problem. Let λ(t)
denote the Lagrange multiplier associated with the constraint at time t. This gives

0 = f(s(t), c(t), t) − ṡ(t)

0 = λ(t) · [f(s(t), c(t), t) − ṡ(t)]

0 = ∫_0^T λ(t) · [f(s(t), c(t), t) − ṡ(t)] dt

Therefore, the Lagrangean reads

L = ∫_0^T F(s(t), c(t), t) dt + ∫_0^T λ(t) · [f(s(t), c(t), t) − ṡ(t)] dt

L = ∫_0^T [F(s(t), c(t), t) + λ(t) · f(s(t), c(t), t)] dt − ∫_0^T λ(t)ṡ(t) dt        (3)

Define the Hamiltonian as H(s(t), c(t), t) ≡ F(s(t), c(t), t) + λ(t) · f(s(t), c(t), t), such
that equation (3) becomes

L = ∫_0^T H(s(t), c(t), t) dt − ∫_0^T λ(t)ṡ(t) dt

Considering the second integral, integration by parts gives

∫_0^T λ(t)ṡ(t) dt = [λ(t)s(t)]_0^T − ∫_0^T λ̇(t)s(t) dt
                  = λ(T)s(T) − λ(0)s(0) − ∫_0^T λ̇(t)s(t) dt

such that the Lagrangean (3) reads

L = ∫_0^T H(s(t), c(t), t) dt + ∫_0^T λ̇(t)s(t) dt + λ(0)s(0) − λ(T)s(T)
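As a side check, the integration-by-parts identity used above can be verified numerically for arbitrary smooth functions (the test functions below are my own choice, not from the notes):

```python
import numpy as np

# Check  ∫_0^T λ ṡ dt = λ(T)s(T) - λ(0)s(0) - ∫_0^T λ̇ s dt
# on a fine grid with arbitrary smooth test functions.
T = 2.0
t = np.linspace(0.0, T, 200_001)

lam = np.exp(-0.5 * t)               # λ(t), arbitrary
lam_dot = -0.5 * np.exp(-0.5 * t)    # λ̇(t)
s = np.sin(t) + 1.0                  # s(t), arbitrary
s_dot = np.cos(t)                    # ṡ(t)

def integral(f, t):
    # trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(t)) / 2.0)

lhs = integral(lam * s_dot, t)
rhs = lam[-1] * s[-1] - lam[0] * s[0] - integral(lam_dot * s, t)
assert abs(lhs - rhs) < 1e-8
```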

Now we are going to use some principles of the calculus of variations. Assume that
we already know the optimal consumption policy c∗(t). Hence, we also know the
optimal path of the state variable s∗(t). Let p(t) denote an arbitrary continuous
function. For any small ε ∈ R, the plan c(t) = c∗(t) + εp(t) is a perturbation of the
optimal plan c∗(t). Hence, there is also an associated perturbed path of the state
variable s(t) = s∗(t) + εq(t). We can now rewrite the Lagrangean as a function of the
perturbation parameter ε and arrive at:

L(ε) = ∫_0^T H(s∗(t) + εq(t), c∗(t) + εp(t), t) dt + ∫_0^T λ̇(t)[s∗(t) + εq(t)] dt + λ(0)s(0) − λ(T)[s∗(T) + εq(T)]

We already know that the optimum is ε = 0. Nevertheless, the first order condition
with respect to ε gives a quite powerful insight into this dynamic optimization problem,
helping us to characterize the optimal plan c∗(t):

∂L/∂ε = ∫_0^T [∂H/∂s(t) · q(t) + ∂H/∂c(t) · p(t)] dt + ∫_0^T λ̇(t)q(t) dt − λ(T)q(T)

0 = ∫_0^T [∂H/∂s(t) + λ̇(t)] q(t) dt + ∫_0^T ∂H/∂c(t) · p(t) dt − λ(T)q(T)        (4)

Since the perturbation function p(t) and the induced perturbation function q(t) are
arbitrary, equation (4) must hold for every p(t) and associated q(t). Hence, the factors
in front of p(t) and q(t) have to be 0. That means ∂H/∂s(t) + λ̇(t) = 0 and
∂H/∂c(t) = 0, and consequently λ(T)q(T) = 0.
From the last condition, we get the so-called Transversality Condition (TVC), where
we have to distinguish three cases:
1. Assume s(T ) is given. This implies s∗ (T ) = s(T ), such that q(T ) = 0. Since
λ(T )q(T ) = 0, there is no further restriction on λ(T ).

2. Assume s(T) is free. Because s∗(T) ≠ s(T) is now possible, the perturbation q(T)
   can take any value. Since λ(T)q(T) = 0, the Lagrange multiplier has to satisfy
   λ(T) = 0.

3. Assume s(T) ≥ smin with smin given. This is a combination of both previous cases:
   either λ(T) = 0 or q(T) = 0.
The following theorem summarizes Pontryagin’s Maximum Principle:
Theorem 1 Let c∗ : [0, T ] → R and s∗ : [0, T ] → R be continuously differentiable functions.
If the plan c∗ is optimal, then there exists a continuously differentiable function λ :
[0, T ] → R such that:
1. c∗ (t) maximizes the Hamiltonian; for an interior solution, we have

   ∂H/∂c(t) = 0
2. s∗ (t) and λ(t) are solutions to the system of differential equations

ṡ(t) = f(s∗(t), c∗(t), t)

λ̇(t) = −∂H/∂s(t)

3. the boundary conditions (TVC) hold:

(a) If s(T ) is fixed, λ(T ) is arbitrary

(b) If s(T ) is free, λ(T ) = 0
(c) If s(T ) ≥ smin , then λ(T ) ≥ 0, [s∗ (T )−smin ] ≥ 0 and λ(T )[s∗ (T )−smin ] = 0
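To see Theorem 1 at work, consider a small test problem (my own illustration, not from the notes): maximize ∫_0^1 (s − c²/2) dt subject to ṡ = c, s(0) = 0, s(1) free. The Hamiltonian is H = s − c²/2 + λc, so ∂H/∂c = 0 gives c = λ, the costate equation gives λ̇ = −1, and TVC case (b) gives λ(1) = 0; hence λ(t) = 1 − t and c∗(t) = 1 − t. The sketch below checks numerically that perturbing c∗ never raises the objective:

```python
import numpy as np

# Test problem:  max_c ∫_0^1 (s - c²/2) dt,  ṡ = c,  s(0) = 0,  s(1) free.
# Pontryagin candidate: c*(t) = λ(t) = 1 - t  (derived in the lead-in above).

def objective(c_path, dt):
    # forward Euler for ṡ = c with s(0) = 0, left Riemann sum for the integral
    s = np.concatenate(([0.0], np.cumsum(c_path)[:-1] * dt))
    return float(np.sum((s - 0.5 * c_path**2) * dt))

n = 10_000
t = np.linspace(0.0, 1.0, n, endpoint=False)
dt = 1.0 / n

c_star = 1.0 - t                 # Pontryagin candidate
J_star = objective(c_star, dt)   # analytic optimum value is 1/6

rng = np.random.default_rng(0)
for k in range(5):
    # smooth perturbations of random amplitude: each should lower the objective
    p = 0.1 * rng.uniform(0.5, 1.5) * np.sin((k + 1) * np.pi * t)
    assert objective(c_star + p, dt) < J_star
```

The first-order gain from any perturbation cancels exactly (that is what the maximum principle asserts), so only the second-order loss −½∫p² dt remains.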

The extension to the infinite horizon problem is straightforward. We only have to
adjust the boundary condition; the first order conditions remain the same.

Theorem 2 For T → ∞, the transversality condition reads

lim_{T→∞} s(T)λ(T) = 0

3 Neoclassical Growth Model

The optimization problem of the representative agent in the continuous-time neoclassical
growth model is

max_{ĉ(t)} ∫_0^∞ e^{−(ρ−n)t} u(ĉ(t)) dt

subject to
K̇(t) = F (K(t), A(t)L(t)) − C(t) − δK(t), ∀t ∈ R+

where ĉ(t) ≡ C(t)/L(t) denotes per capita consumption. Using lower case letters for
variables in intensive form, we can rewrite the constraint as

k̇(t) = f (k(t)) − c(t) − (δ + n + g)k(t) ∀t ∈ R+

For convenience, drop the time dependency in the notation. We now proceed as follows:
Step 1: Set up the Hamiltonian

H = e^{−(ρ−n)t} u(ĉ) + λ[f(k) − c − (δ + n + g)k]        (5)

Step 2: Derive the first order conditions

∂H/∂ĉ = e^{−(ρ−n)t} u′(ĉ) − λ/A = 0        (6)

∂H/∂k = λ[f′(k) − (δ + n + g)] = −λ̇        (7)

∂H/∂λ = f(k) − c − (δ + n + g)k = k̇        (8)

Step 3: Solve the system of first order conditions. There is a trick that makes the
analysis very easy. First, solve equation (6) for λ and take the logarithm

log λ = −(ρ − n)t + log u′(ĉ) + log A

Then take the derivative with respect to time (using Ȧ/A = g) and arrive at

λ̇/λ = −(ρ − n) + [u″(ĉ)/u′(ĉ)] ĉ̇ + g

We can now use this equation to substitute the growth rate of the Lagrange multiplier
out of equation (7) and arrive at the consumption Euler equation

ĉ̇/ĉ = −[u′(ĉ)/(u″(ĉ)ĉ)] [f′(k) − δ − ρ]

In the book by David Romer, you will most often find this Euler equation for CRRA
utility with θ as the coefficient of relative risk aversion. In per capita units, the Euler
equation therefore reads

ĉ̇/ĉ = (1/θ)[f′(k) − δ − ρ]
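For CRRA utility u(ĉ) = ĉ^{1−θ}/(1−θ), the coefficient −u′(ĉ)/(u″(ĉ)ĉ) in the general Euler equation equals 1/θ, which is exactly what turns it into the per capita form above. A quick symbolic check (my addition, using sympy):

```python
import sympy as sp

c, theta = sp.symbols('c theta', positive=True)
u = c**(1 - theta) / (1 - theta)      # CRRA utility
up = sp.diff(u, c)                    # u'(ĉ)  = ĉ^(-θ)
upp = sp.diff(u, c, 2)                # u''(ĉ) = -θ ĉ^(-θ-1)

# coefficient in front of [f'(k) - δ - ρ] in the Euler equation
factor = sp.simplify(-up / (upp * c))
assert sp.simplify(factor * theta) == 1   # i.e. factor = 1/θ
```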
and in efficiency units, the Euler equation is

ċ/c = (1/θ)[f′(k) − δ − ρ] − g
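Setting ċ = 0 in the efficiency-units Euler equation gives the modified golden rule f′(k∗) = δ + ρ + θg, and k̇ = 0 then pins down c∗. A numerical illustration (Cobb–Douglas f(k) = k^α and the parameter values are my own, not from the notes):

```python
# Steady state of the system in efficiency units (illustrative parameters)
alpha, delta, rho, theta, g, n = 0.33, 0.05, 0.02, 2.0, 0.02, 0.01

# ċ/c = (1/θ)[f'(k) - δ - ρ] - g = 0  with f(k) = k**α  =>  α k^(α-1) = δ + ρ + θg
k_star = (alpha / (delta + rho + theta * g)) ** (1.0 / (1.0 - alpha))

# k̇ = 0  =>  c* = f(k*) - (δ + n + g) k*
c_star = k_star**alpha - (delta + n + g) * k_star

assert abs(alpha * k_star**(alpha - 1.0) - (delta + rho + theta * g)) < 1e-12
assert c_star > 0
```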

Step 4: Use the boundary conditions to pin down the equilibrium (saddle) path of
consumption and physical capital, similarly to what we did in class for the discrete time
case. We will not do this here.