
MS&E 322 Winter 2023

Stochastic Calculus and Control January 7, 2023


Prof. Peter W. Glynn

Section 5: The Connection between SDEs and PDEs

Contents

5.1 Introduction
5.2 The Backwards Equations
5.3 Infinite Horizon Discounted Reward
5.4 Expected Reward to a Hitting Time
5.5 Bounding Expected Reward to a Hitting Time
5.6 PDEs for Diffusions with Absorbing Boundaries
5.7 PDEs for Diffusions with Reflecting Boundaries
5.8 The Generator of a Diffusion
5.9 Integro-PDEs for Diffusions with Jump Boundaries

5.1 Introduction

In Section 1, we saw how we can heuristically derive the partial differential equations (PDEs)
that need to be solved in order to compute various probabilities and expectations associated with
a given diffusion process. In this chapter, we show how the Itô-Doeblin calculus can be used to
rigorously validate this connection. This rigorous discussion will also serve to clarify the role of
boundary conditions in uniquely specifying the probabilistically interesting solution to the PDE.

5.2 The Backwards Equations

Suppose that we wish to compute u∗(t, x) = Ex[f(X(t))], where X = (X(t) : t ≥ 0) is the solution to the d-dimensional SDE

dX(t) = µ(X(t))dt + σ(X(t))dB(t) (5.2.1)

and µ and σ satisfy the Lipschitz and growth conditions of Section 4.

Step 1: Assume that Ex[|f(X(t))|] < ∞ for all x ∈ Rd. If we set Y = f(X(t)), then

E[Y | X(u) : 0 ≤ u ≤ s]

is a martingale for 0 ≤ s ≤ t, adapted to (X(u) : 0 ≤ u ≤ t); see Section 2. But

E[Y | X(u) : 0 ≤ u ≤ s] = u∗(t − s, X(s)).

Hence, we know that (u∗(t − s, X(s)) : 0 ≤ s ≤ t) is a Px-martingale for each x ∈ Rd. Furthermore,
u∗(0, x) = f(x).

Step 2: In view of Step 1, we should seek conditions on u so that u(0, x) = f(x) and (u(t − s, X(s)) : 0 ≤ s ≤ t) is a Px-martingale. Assume that u ∈ C^{1,2}. In view of Itô's formula, we find that

du(t − s, X(s)) = ( −(∂u/∂t)(t − s, X(s)) + Σ_{i=1}^d μ_i(X(s)) (∂u/∂x_i)(t − s, X(s)) + ½ Σ_{i,j=1}^d b_ij(X(s)) (∂²u/∂x_i∂x_j)(t − s, X(s)) ) ds   (5.2.2)
+ Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(t − s, X(s)) σ_ij(X(s)) dB_j(s).

Since we know the stochastic integral is a local martingale, we can force u(t − s, X(s)) to be a local martingale by requiring that u satisfy the PDE

∂u/∂t = L u,   (5.2.3)

where

L = Σ_{i=1}^d μ_i(x) ∂/∂x_i + ½ Σ_{i,j=1}^d b_ij(x) ∂²/∂x_i∂x_j.

If u satisfies (5.2.3), then we find that

u(t − s, X(s)) − u(t, X(0)) = ∫₀^s Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(t − r, X(r)) σ_ij(X(r)) dB_j(r).

So, if u is bounded on [0, t] × Rd, then the left-hand side is bounded, so that the stochastic integral is a martingale. Similarly, if ∂u/∂x_i is bounded over [0, t] × Rd for 1 ≤ i ≤ d, the growth condition on σ and the fact that X is square-integrable ensure that the right-hand side is a martingale. If the stochastic integral is a martingale, then

Ex[u(0, X(t))] = Ex[u(t, X(0))],

so that

u(t, x) = Ex[f(X(t))].

We conclude that if u ∈ C^{1,2} satisfies

∂u/∂t = L u, subject to u(0, x) = f(x),   (5.2.4)

and if either u is bounded over [0, t] × Rd or ∂u/∂x_i is bounded over [0, t] × Rd for 1 ≤ i ≤ d, then u(t, x) = Ex[f(X(t))] (= u∗(t, x)). In other words, if u satisfies the PDE (5.2.4) and is either itself bounded or has bounded (spatial) derivatives, then u must be the expectation of interest.
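As a quick numerical sanity check of this conclusion, take d = 1 and let X be standard Brownian motion (μ = 0, σ = 1), so that L u = ½ ∂²u/∂x². With f(x) = cos x, the bounded function u(t, x) = e^{−t/2} cos x satisfies (5.2.4), so the theory predicts Ex[f(X(t))] = e^{−t/2} cos x. A minimal Monte Carlo sketch (the sample size, seed, and tolerance are ad hoc choices):

```python
import numpy as np

# Check of (5.2.4) in d = 1 for X = standard Brownian motion, so that
# L u = (1/2) u_xx.  With f(x) = cos(x), the bounded function
# u(t, x) = exp(-t/2) cos(x) satisfies u_t = (1/2) u_xx, u(0, x) = f(x),
# so the theorem predicts E_x[f(X(t))] = exp(-t/2) cos(x).
rng = np.random.default_rng(0)
t, x, n_paths = 1.0, 0.7, 200_000

# Under P_x, X(t) ~ Normal(x, t), so X(t) can be sampled exactly.
samples = x + np.sqrt(t) * rng.standard_normal(n_paths)
mc_estimate = np.mean(np.cos(samples))
pde_solution = np.exp(-t / 2) * np.cos(x)

print(mc_estimate, pde_solution)
assert abs(mc_estimate - pde_solution) < 0.01
```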

Exercise 5.2.1 Derive the backwards equation for the SDE

dX(t) = µ(X(t), t)dt + σ(X(t), t)dB(t)

and provide sufficient conditions under which the solution to the PDE is the required expectation.

5.3 Infinite Horizon Discounted Reward

Let X satisfy (5.2.1), where µ and σ satisfy the usual growth and Lipschitz conditions.

Step 1: Assume that

Ex[ ∫₀^∞ e^{−αt} |f(X(t))| dt ] < ∞

for each x ∈ Rd and α > 0; our goal is to compute u∗(x) = Ex[ ∫₀^∞ e^{−αt} f(X(t)) dt ]. Set

Y = ∫₀^∞ e^{−αt} f(X(t)) dt.

Then, E[Y | X(u) : 0 ≤ u ≤ t] is a martingale adapted to X = (X(t) : t ≥ 0). But

E[Y | X(u) : 0 ≤ u ≤ t] = ∫₀^t e^{−αu} f(X(u)) du + e^{−αt} u∗(X(t)).

Step 2: To find a PDE for u∗, we therefore seek conditions on u so that

∫₀^t e^{−αu} f(X(u)) du + e^{−αt} u(X(t))   (5.3.1)

is a martingale adapted to X = (X(t) : t ≥ 0). If u ∈ C², then Itô's formula yields

d( ∫₀^t e^{−αs} f(X(s)) ds + e^{−αt} u(X(t)) )
= e^{−αt} f(X(t)) dt − α e^{−αt} u(X(t)) dt
+ e^{−αt} ( Σ_{i=1}^d μ_i(X(t)) (∂u/∂x_i)(X(t)) + ½ Σ_{i,j=1}^d b_ij(X(t)) (∂²u/∂x_i∂x_j)(X(t)) ) dt
+ e^{−αt} Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(X(t)) σ_ij(X(t)) dB_j(t).

In order that (5.3.1) be representable as a stochastic integral (and hence a local martingale), we require that u satisfy the PDE

αu − L u = f.   (5.3.2)

If u satisfies (5.3.2), then

∫₀^t e^{−αs} f(X(s)) ds + e^{−αt} u(X(t)) − u(X(0))
= ∫₀^t e^{−αs} Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(X(s)) σ_ij(X(s)) dB_j(s).   (5.3.3)

Note that if f is bounded by ‖f‖, then |u∗(x)| ≤ ‖f‖/α. Hence, if f is bounded, we should seek bounded solutions u to (5.3.2). If u and f are both bounded, then the left-hand side of (5.3.3) is a martingale. Hence

Ex[ ∫₀^t e^{−αu} f(X(u)) du + e^{−αt} u(X(t)) ] = u(x).
3
§ SECTION 5: THE CONNECTION BETWEEN SDE’S AND PDE’S

If u and f are bounded, then clearly Ex[e^{−αt} u(X(t))] → 0 as t → ∞ and

Ex[ ∫₀^t e^{−αs} f(X(s)) ds ] → Ex[ ∫₀^∞ e^{−αt} f(X(t)) dt ]

as t → ∞, so that u(x) = u∗(x).

We conclude that if f is bounded and there exists a bounded solution u ∈ C² to

αu − L u = f,

then the solution u satisfies

u(x) = Ex[ ∫₀^∞ e^{−αt} f(X(t)) dt ].
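To illustrate, take X standard Brownian motion, α = 1 and f(x) = cos x. The bounded function u(x) = cos(x)/(α + ½) satisfies αu − ½u″ = f, so by the result above u = u∗. A Monte Carlo sketch using an Euler time grid (grid size, horizon, and tolerance are ad hoc choices):

```python
import numpy as np

# Check of (5.3.2) in d = 1 for X = standard Brownian motion, f(x) = cos(x)
# and alpha = 1.  The bounded function u(x) = cos(x) / (alpha + 1/2)
# satisfies alpha*u - (1/2) u'' = f, so it should equal
# u*(x) = E_x int_0^inf e^{-alpha t} f(X(t)) dt.
rng = np.random.default_rng(4)
alpha, x, dt, T, n_paths = 1.0, 0.5, 0.01, 10.0, 10_000
n_steps = int(T / dt)

pos = np.full(n_paths, x)
total = np.zeros(n_paths)
for k in range(n_steps):
    # left-endpoint Riemann sum of int e^{-alpha t} cos(X(t)) dt
    total += np.exp(-alpha * k * dt) * np.cos(pos) * dt
    pos += np.sqrt(dt) * rng.standard_normal(n_paths)

mc_estimate = np.mean(total)
exact = np.cos(x) / (alpha + 0.5)
print(mc_estimate, exact)
assert abs(mc_estimate - exact) < 0.02
```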

Exercise 5.3.1 Suppose that f : Rd → R+. Show that if u ∈ C² is a nonnegative solution to αu − L u = f, then u∗(x) ≤ u(x) for x ∈ Rd. More generally, show that if u ∈ C² is a nonnegative function satisfying the differential inequality αu − L u ≥ f, then u∗(x) ≤ u(x) for x ∈ Rd.

5.4 Expected Reward to a Hitting Time

Suppose X = (X(t) : t ≥ 0) satisfies (5.2.1), where μ and σ satisfy the usual growth and Lipschitz conditions. For f : Rd → R+ and C^c ⊆ Rd, our goal is to find the PDE for

u∗(x) = Ex[ ∫₀^τ f(X(s)) ds ],

where τ = inf{t ≥ 0 : X(t) ∈ C^c}.

Step 1: Assume that

Ex[ ∫₀^τ f(X(s)) ds ] < ∞

for x ∈ C, and put Y = ∫₀^τ f(X(s)) ds. Then

E[Y | X(u) : 0 ≤ u ≤ t]

is a martingale adapted to X = (X(t) : t ≥ 0). But

E[Y | X(u) : 0 ≤ u ≤ t] = ∫₀^{τ∧t} f(X(u)) du + u∗(X(τ ∧ t)).

Hence, (u∗(x) : x ∈ Rd) is a nonnegative function such that

∫₀^{τ∧t} f(X(s)) ds + u∗(X(τ ∧ t))

is a Px-martingale for each x ∈ Rd. Furthermore, u∗ = 0 on C^c.

Step 2: We seek conditions on a nonnegative u vanishing on C^c so that

∫₀^{τ∧t} f(X(s)) ds + u(X(τ ∧ t)) − u(X(0))   (5.4.1)

is a Px-martingale. If u ∈ C², it follows that

d( ∫₀^t f(X(s)) ds + u(X(t)) ) = f(X(t)) dt + (L u)(X(t)) dt + Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(X(t)) σ_ij(X(t)) dB_j(t)

and hence

∫₀^{τ∧t} f(X(s)) ds + u(X(τ ∧ t)) − u(X(0))
= ∫₀^{τ∧t} (L u + f)(X(s)) ds + ∫₀^{τ∧t} Σ_{i=1}^d Σ_{j=1}^m (∂u/∂x_i)(X(s)) σ_ij(X(s)) dB_j(s).   (5.4.2)

In order that (5.4.1) be a martingale, we choose u so that (L u + f)(X(s)) = 0 for s ≤ τ. But X(s) ∈ C for s ≤ τ, so this requires that L u + f = 0 on C. To guarantee the integrability of the left-hand side of (5.4.2), suppose that f and u are bounded on C, and Ex[τ] < ∞ for x ∈ C. Then, the left-hand side is integrable and it follows that (5.4.1) is a Px-martingale. Consequently,

Ex[ ∫₀^{τ∧t} f(X(s)) ds ] + Ex[u(X(τ ∧ t))] = u(x).   (5.4.3)

Since f and u are bounded with Ex[τ] < ∞, the Bounded and Monotone Convergence Theorems yield the identity

Ex[ ∫₀^τ f(X(s)) ds ] + Ex[u(X(τ))] = u(x)

when we send t → ∞ in (5.4.3). But u = 0 on C^c, so u(X(τ)) = 0, and hence

u(x) = Ex[ ∫₀^τ f(X(s)) ds ].

We conclude that if f and u are bounded nonnegative functions satisfying

L u = −f   (5.4.4)

on C, subject to u = 0 on C^c, and if Ex[τ] < ∞ for x ∈ C, then u(x) = u∗(x) for x ∈ C.
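For a concrete check, take X standard Brownian motion, C = (0, 1) and f ≡ 1, so that u∗(x) = Ex[τ]. The function u(x) = x(1 − x) is bounded, nonnegative, satisfies ½u″ = −1 on C, and vanishes on C^c, so the conclusion above gives Ex[τ] = x(1 − x). A simulation sketch (the discrete time step introduces a small positive bias in the exit time, hence the loose tolerance):

```python
import numpy as np

# Check of (5.4.4) in d = 1 for X = standard Brownian motion on C = (0, 1)
# with f = 1, so that u*(x) = E_x[tau].  The function u(x) = x(1 - x)
# satisfies (1/2) u'' = -1 on C with u = 0 on C^c, so E_x[tau] = x(1 - x).
rng = np.random.default_rng(1)
dt, n_paths, x = 1e-3, 10_000, 0.5

pos = np.full(n_paths, x)
steps = np.zeros(n_paths)            # steps until exit, per path
active = np.ones(n_paths, dtype=bool)
while active.any():
    pos[active] += np.sqrt(dt) * rng.standard_normal(active.sum())
    steps[active] += 1.0
    active &= (pos > 0.0) & (pos < 1.0)   # paths outside (0,1) have exited

mc_mean_tau = np.mean(steps) * dt
exact = x * (1 - x)
print(mc_mean_tau, exact)
assert abs(mc_mean_tau - exact) < 0.05
```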

5.5 Bounding Expected Reward to a Hitting Time

A key step in proving that u = u∗ in Section 5.4 above is establishing that

∫₀^τ f(X(s)) ds

is Px-integrable for each x ∈ C. This is needed, for example, to prove that (5.4.1) is a martingale (rather than just a local martingale). We now offer a method, based on construction of a suitable local supermartingale, for bounding

Ex[ ∫₀^τ f(X(s)) ds ]

when f is nonnegative. Of course, such bounds are of significant interest in their own right.

We start from (5.4.2), which holds for u ∈ C². Suppose now that rather than requiring L u = −f on C, we demand only that u be a nonnegative C² function satisfying the differential inequality L u ≤ −f on C. In this case,

∫₀^{τ∧t} f(X(s)) ds + u(X(τ ∧ t))

is a nonnegative (local) Px-supermartingale. Let (Tn : n ≥ 1) be the localizing sequence given by (for example)

Tn = inf{ t ≥ 0 : ∫₀^t Σ_{i=1}^d ( (∂u/∂x_i)(X(s)) Σ_{j=1}^m σ_ij(X(s)) )² ds ≥ n }.

Then,

Ex[ ∫₀^{τ∧Tn∧t} f(X(s)) ds ] + Ex[u(X(τ ∧ Tn ∧ t))] ≤ u(x)

for x ∈ C. Since u ≥ 0 on C, evidently

Ex[ ∫₀^{τ∧Tn∧t} f(X(s)) ds ] ≤ u(x).

Letting first n → ∞ and then t → ∞, we conclude from the Monotone Convergence Theorem that

Ex[ ∫₀^τ f(X(s)) ds ] ≤ u(x)   (5.5.1)

for x ∈ C. Hence, if f ≥ 0 and u ∈ C² is nonnegative on C with L u ≤ −f on C, we obtain the bound (5.5.1). In particular, if we can find a nonnegative u ∈ C² such that L u ≤ −1 on C, then we have the inequality

Ex[τ] ≤ u(x)

for x ∈ C. Such a function u is called a Lyapunov function.

One often finds such Lyapunov functions through an educated guess of some kind. Lyapunov functions that are frequently employed as guesses are ‖x‖^p, exp(‖x‖^p) and log(1 + ‖x‖^p) (p ≥ 2). But, in complex mathematical settings, one may need to be much more creative in finding a suitable choice of Lyapunov function.
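As a small worked instance of the Lyapunov bound, consider the Ornstein-Uhlenbeck SDE dX(t) = −X(t)dt + dB(t) and the guess u(x) = x². Then (L u)(x) = −x·2x + 1 = 1 − 2x² ≤ −1 on C = {x : |x| > 1}, so Ex[τ] ≤ x² for the hitting time τ of [−1, 1]. A simulation sketch checking the inequality (the starting point and step size are arbitrary):

```python
import numpy as np

# Lyapunov bound E_x[tau] <= u(x) for the OU process dX = -X dt + dB with
# u(x) = x^2:  (L u)(x) = -x * 2x + 1 = 1 - 2 x^2 <= -1 on C = {|x| > 1},
# so for tau = hitting time of [-1, 1] we expect E_x[tau] <= x^2.
rng = np.random.default_rng(2)
dt, n_paths, x = 1e-3, 5_000, 2.0

pos = np.full(n_paths, x)
steps = np.zeros(n_paths)
active = np.ones(n_paths, dtype=bool)
while active.any():
    # Euler step for the OU dynamics on the not-yet-absorbed paths
    pos[active] += -pos[active] * dt + np.sqrt(dt) * rng.standard_normal(active.sum())
    steps[active] += 1.0
    active &= np.abs(pos) > 1.0

mc_mean_tau = np.mean(steps) * dt
print(mc_mean_tau, "<=", x**2)
assert mc_mean_tau < x**2
```

In practice the bound is far from tight here; its virtue is that it requires no exact solution of L u = −1.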

Remark 5.5.1 To prove that (5.4.1) is a martingale and to pass limits through the expectation Ex[u(X(τ ∧ t))], we need to also show that (u(X(τ ∧ t)) : t ≥ 0) is suitably integrable. In particular, if τ < ∞ Px-a.s. and

sup_{t≥0} Ex[u(X(τ ∧ t))^p] < ∞   (5.5.2)

for p > 1, this suffices to ensure that (u(X(τ ∧ t)) : t ≥ 0) is uniformly integrable (so that (5.4.1) is a martingale) and that

Ex[u(X(τ ∧ t))] → 0 = Ex[u(X(τ))]

as t → ∞. The method of supermartingales/Lyapunov functions also works in this setting.

Specifically, suppose that one has solved (5.4.4) but the solution u is unbounded. To verify (5.5.2), we attempt to find a nonnegative Lyapunov function v ∈ C^{1,2} such that v(0, x) ≥ u(x)^p and

(L v)(t, x) ≤ (∂v/∂t)(t, x)

for t ≥ 0 and x ∈ C. If (Tn : n ≥ 1) is the appropriately chosen localizing sequence, we conclude that for 0 ≤ s ≤ t,

Ex[v(t − (s ∧ Tn ∧ τ), X(s ∧ Tn ∧ τ))] − v(t, x)
= Ex[ ∫₀^{s∧Tn∧τ} ( −(∂v/∂t)(t − r, X(r)) + (L v)(t − r, X(r)) ) dr ]
≤ 0

and hence

Ex[u(X(t ∧ Tn ∧ τ))^p]
= Ex[u(X(t ∧ Tn))^p I(τ > t ∧ Tn)]
≤ Ex[v(0, X(t ∧ Tn)) I(τ > t ∧ Tn)]
≤ Ex[v(t − (t ∧ Tn ∧ τ), X(t ∧ Tn ∧ τ))]
≤ v(t, x).

Since u ≥ 0, Fatou's lemma yields the bound

Ex[u(X(t ∧ τ))^p] ≤ v(t, x)

for t ≥ 0 and x ∈ C, from which (5.5.2) can now potentially be verified.

5.6 PDEs for Diffusions with Absorbing Boundaries

Let Y = (Y(t) : t ≥ 0) be the solution to the d-dimensional SDE

dY(t) = μ(Y(t)) dt + σ(Y(t)) dB(t),

where μ and σ satisfy the Lipschitz and growth conditions of Section 4. For C^c ⊆ Rd, let T = inf{t ≥ 0 : Y(t) ∈ C^c} be the exit time from C. Set

X(t) = Y(T ∧ t).

The process X = (X(t) : t ≥ 0) is "absorbed" at the boundary of C, and is a so-called "diffusion with an absorbing boundary".

We start by deriving the backwards equation for X. Let f be a bounded function, and set u∗(t, x) = Ex[f(X(t))] for x ∈ C. As in Subsection 5.2, let u ∈ C^{1,2} be bounded and satisfy

(∂u/∂t)(t, x) = (L u)(t, x)   (5.6.1)

for t ≥ 0 and x ∈ C̄ (the closure of C), subject to

u(0, x) = f(x).   (5.6.2)


Since u and f are bounded, we conclude that

Ex[u(t − (t ∧ T), Y(t ∧ T))] = u(t, x).

But

Ex[u(t − (t ∧ T), Y(t ∧ T))]
= Ex[u(0, X(t)) I(T > t)] + Ex[u(t − T, Y(T)) I(T ≤ t)]
= Ex[f(X(t)) I(T > t)] + Ex[u(t − T, Y(T)) I(T ≤ t)].

We want to choose u so that

Ex[u(t − T, Y(T)) I(T ≤ t)] = Ex[f(X(t)) I(T ≤ t)]   (5.6.3)

because this will imply that

u(t, x) = Ex[f(X(t))].

To obtain (5.6.3), select u so that u(s, x) = f(x) for s ≥ 0 when x ∈ ∂C (the boundary of C). Because of (5.6.1) and (5.6.2), this can be guaranteed if we add the boundary condition

(L u)(t, x) = 0

for t ≥ 0 and x ∈ ∂C. Hence u = u∗ if f is bounded and u is the bounded solution to

u_t = L u on R+ × C,
subject to u(0, x) = f(x) on C
and L u = 0 on ∂C.
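As a sanity check of this system, take Y standard Brownian motion and C = (−1, 1), and choose f(x) = cos(πx/2) (a choice made here precisely so that f = 0 and L f = 0 on ∂C). Then u(t, x) = e^{−π²t/8} cos(πx/2) is bounded and satisfies u_t = ½u_xx, u(0, x) = f(x), and L u = 0 on ∂C, so Ex[f(X(t))] should equal u(t, x) for the absorbed process. A Monte Carlo sketch (discrete monitoring of the exit time slightly biases the estimate, hence the loose tolerance):

```python
import numpy as np

# Check of the absorbed backward equation for Y = Brownian motion on
# C = (-1, 1).  With f(x) = cos(pi x / 2), f = 0 and L f = (1/2) f'' = 0
# on the boundary, and u(t, x) = exp(-pi^2 t / 8) cos(pi x / 2) satisfies
# u_t = (1/2) u_xx, u(0, x) = f(x), L u = 0 on {-1, 1}.
rng = np.random.default_rng(3)
dt, n_paths, x, t = 1e-3, 20_000, 0.0, 1.0
n_steps = int(t / dt)

pos = np.full(n_paths, x)
alive = np.ones(n_paths, dtype=bool)   # not yet absorbed
for _ in range(n_steps):
    pos[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
    alive &= np.abs(pos) < 1.0         # freeze paths on first exit

pos = np.clip(pos, -1.0, 1.0)          # absorbed paths sit on dC, where f = 0
mc_estimate = np.mean(np.cos(np.pi * pos / 2))
exact = np.exp(-np.pi**2 * t / 8) * np.cos(np.pi * x / 2)
print(mc_estimate, exact)
assert abs(mc_estimate - exact) < 0.04
```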

We provide a second illustration by deriving a PDE for

u∗(x) = Ex[ ∫₀^∞ e^{−αt} f(X(t)) dt ]

for x ∈ C, where α > 0 and f is bounded. As in Subsection 5.3, we consider a bounded u ∈ C² satisfying

αu − L u = f

on C. It follows that

Ex[ ∫₀^T e^{−αs} f(Y(s)) ds ] + Ex[e^{−αT} u(Y(T))] = u(x).

If, as in our earlier discussion, we require that L u = 0 on ∂C, then u(Y(T)) = f(Y(T))/α. But

f(Y(T))/α = ∫₀^∞ e^{−αt} f(X(T + t)) dt.

It follows that if L u = 0 on ∂C,

u(x) = Ex[ ∫₀^∞ e^{−αt} f(X(t)) dt ].

Hence, u = u∗ if u is a bounded C² solution of

αu − L u = f

on C, subject to

L u = 0 on ∂C.
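For a concrete instance, take Y standard Brownian motion, C = (0, ∞), α = 1 and f(x) = e^{−x}. Solving αu − ½u″ = f with u bounded and ½u″(0) = 0 suggests the candidate u(x) = 2e^{−x} − e^{−√2 x} (an assumption obtained by hand, verified numerically below). The sketch also checks the consistency relation u(0) = f(0)/α: started on the boundary, the absorbed process earns f(0) forever.

```python
import numpy as np

# Check of the absorbed-boundary system  alpha*u - L u = f on C,
# L u = 0 on dC,  for C = (0, inf), L = (1/2) d^2/dx^2, alpha = 1,
# f(x) = exp(-x).  Candidate (an assumption, from solving the ODE):
#   u(x) = 2 exp(-x) - exp(-sqrt(2) x).
alpha = 1.0
f = lambda x: np.exp(-x)
u = lambda x: 2 * np.exp(-x) - np.exp(-np.sqrt(2) * x)

h = 1e-4
xs = np.linspace(0.0, 5.0, 501)
u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2   # central second difference

residual = alpha * u(xs) - 0.5 * u_xx - f(xs)       # alpha*u - L u - f on C
print(np.max(np.abs(residual)))
assert np.max(np.abs(residual)) < 1e-5

# Boundary condition L u = 0 at x = 0, and u(0) = f(0)/alpha.
assert abs(0.5 * u_xx[0]) < 1e-5
assert abs(u(0.0) - f(0.0) / alpha) < 1e-12
```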

5.7 PDEs for Diffusions with Reflecting Boundaries

Suppose that X = (X(t) : t ≥ 0) is the solution to the one-dimensional SDE

dX(t) = μ(X(t)) dt + σ(X(t)) dB(t) + dL̃(t) − dŨ(t),

where μ(·) and σ(·) satisfy the regularity hypotheses described in Section 4 that guarantee existence/uniqueness, and L̃, Ũ are the minimal nondecreasing processes such that

I(X(t) ≠ a) dL̃(t) = 0,   (5.7.1)
I(X(t) ≠ b) dŨ(t) = 0   (5.7.2)

for a < b.

We start by deriving the backwards equation for X. Let f be bounded and set u∗(t, x) = Ex[f(X(t))]. Recall that (u∗(t − s, X(s)) : 0 ≤ s ≤ t) is then a martingale adapted to (X(s) : 0 ≤ s ≤ t). For u ∈ C^{1,2}, observe that

d(u(t − s, X(s)))
= ( −(∂u/∂t)(t − s, X(s)) + (L u)(t − s, X(s)) ) ds
+ (∂u/∂x)(t − s, X(s)) dL̃(s) − (∂u/∂x)(t − s, X(s)) dŨ(s) + (∂u/∂x)(t − s, X(s)) σ(X(s)) dB(s).

In order that u(t − s, X(s)) be a martingale, we require that

u_t = L u   (5.7.3)

on R+ × [a, b]. Also, because of (5.7.1),

(∂u/∂x)(t − s, X(s)) dL̃(s) = (∂u/∂x)(t − s, a) dL̃(s).

Similarly,

(∂u/∂x)(t − s, X(s)) dŨ(s) = (∂u/∂x)(t − s, b) dŨ(s).

So, we set

(∂u/∂x)(s, a) = (∂u/∂x)(s, b) = 0   (5.7.4)

for s ≥ 0, so that u(t − s, X(s)) − u(t, X(0)) will be a pure stochastic integral. Since u is bounded, we conclude that

Ex[u(t, X(0))] = Ex[u(0, X(t))],

so that

u(t, x) = Ex[f(X(t))].

Hence, u = u∗ if u ∈ C^{1,2} is a bounded solution to (5.7.3), subject to the boundary condition (5.7.4).

To further illustrate these ideas, suppose we wish to derive a PDE for

u∗(x) = Ex[ ∫₀^∞ e^{−αt} dL̃(t) ]

for α > 0.

Step 1: If u∗(x) < ∞ for a ≤ x ≤ b, then

∫₀^t e^{−αs} dL̃(s) + e^{−αt} u∗(X(t))

is a Px-martingale adapted to X = (X(t) : t ≥ 0).

Step 2: Let u ∈ C² be a nonnegative function. We seek conditions on u under which

∫₀^t e^{−αs} dL̃(s) + e^{−αt} u(X(t))

is a Px-martingale. Note that

d( ∫₀^t e^{−αs} dL̃(s) + e^{−αt} u(X(t)) )
= e^{−αt} dL̃(t) − α e^{−αt} u(X(t)) dt + e^{−αt} (L u)(X(t)) dt   (5.7.5)
+ e^{−αt} u′(X(t)) dL̃(t) − e^{−αt} u′(X(t)) dŨ(t) + e^{−αt} u′(X(t)) σ(X(t)) dB(t).

In order that the right-hand side of (5.7.5) be the differential of a stochastic integral, we choose u so that

L u − αu = 0   (5.7.6)

on [a, b], subject to

u′(a) = −1,   (5.7.7)
u′(b) = 0.   (5.7.8)

We conclude that if u ∈ C² is a bounded solution to (5.7.6) subject to (5.7.7) and (5.7.8), then u = u∗.
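To make this concrete, take reflected Brownian motion on [a, b] = [0, 1] (μ = 0, σ = 1). Solving the linear ODE (5.7.6)-(5.7.8) by hand suggests the bounded candidate u(x) = cosh(c(1 − x))/(c sinh c) with c = √(2α) (an assumption checked numerically below), which should then equal Ex ∫₀^∞ e^{−αt} dL̃(t):

```python
import numpy as np

# Check of (5.7.6)-(5.7.8) for reflected Brownian motion on [a, b] = [0, 1]
# (mu = 0, sigma = 1, so L = (1/2) d^2/dx^2).  Candidate solution (an
# assumption, from solving the ODE):
#   u(x) = cosh(c (1 - x)) / (c sinh(c)),  c = sqrt(2 alpha).
alpha = 2.0
c = np.sqrt(2 * alpha)
u = lambda x: np.cosh(c * (1 - x)) / (c * np.sinh(c))

h = 1e-5
xs = np.linspace(0.0, 1.0, 101)
u_x = (u(xs + h) - u(xs - h)) / (2 * h)              # central first difference
u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2    # central second difference

assert np.max(np.abs(0.5 * u_xx - alpha * u(xs))) < 1e-4   # (5.7.6)
assert abs(u_x[0] + 1.0) < 1e-6                             # (5.7.7): u'(0) = -1
assert abs(u_x[-1]) < 1e-6                                  # (5.7.8): u'(1) = 0
print("ODE and boundary conditions verified")
```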

5.8 The Generator of a Diffusion

We have seen that the rate matrix Q plays a key role in the analysis of Markov jump processes, as does the differential operator L in the setting of diffusion processes. In fact, the equations that arise in computing probabilities and expectations for the two classes of processes are essentially identical, provided that one substitutes the linear differential operator L in the diffusion setting for the (linear) operator/matrix Q that occurs in the jump process context.

The basic reason, of course, is that both Q and L describe the short-time (infinitesimal) dynamics of the corresponding Markov processes. Consider first a jump process X = (X(t) : t ≥ 0) taking values in a finite state space S having rate matrix Q. For any function f : S → R,

Ex[f(X(t))] = f(x) + t(Qf)(x) + o(t)   (5.8.1)

as t → 0. Of course, an equivalent mathematical statement is to require that

lim_{t↓0} ( Ex[f(X(t))] − f(x) ) / t = (Qf)(x).   (5.8.2)

Note that the limit (5.8.2) is guaranteed to exist for all functions f. The rate matrix Q is often called the generator of X, because Q "generates" the transition probabilities (P(t, x, y) : x, y ∈ S, t ≥ 0) via the Kolmogorov backwards and forwards differential equations, which in turn uniquely specify the finite-dimensional distributions of X.

In the diffusion setting, we can proceed similarly. Roughly speaking, we define the generator to be the operator A defined through the limit

lim_{t↓0} ( Ex[f(X(t))] − f(x) ) / t = (A f)(x).   (5.8.3)

Note that if the limit (5.8.3) exists (for all x ∈ Rd) for the functions f and g, the limit also exists for f + g and A(f + g) = A f + A g, so that A is a linear operator. The domain of A, denoted D(A), is clearly a vector space.

A key distinction between diffusions and jump processes is that not all functions f lie in D(A) when X is a diffusion.

Example 5.8.1 Let X = B, and put f(x) = I(x ≤ 0). Then, E0[f(B(t))] = P0(B(t) ≤ 0) = 1/2 for t > 0, whereas f(0) = 1. In this case, E0[f(B(t))] is not even continuous in t as t → 0, let alone differentiable (as required by (5.8.3)).

Hence, the set of functions f for which A f can be defined through the limit (5.8.3) (i.e. the "domain of A") is a proper subspace of the space of real-valued functions defined on Rd. The Itô-Doeblin calculus allows us to now identify sufficient conditions under which a function f lies in the domain.

In particular, suppose that X satisfies (5.2.1), where μ and σ satisfy the usual Lipschitz and growth conditions. If f ∈ C², Itô's lemma asserts that

f(X(t)) − f(X(0)) − ∫₀^t (L f)(X(s)) ds   (5.8.4)

is a Px-local martingale. If f and L f are bounded, then (5.8.4) is, in fact, a martingale. Because L f is continuous, it is evident that Ex[(L f)(X(t))] → (L f)(x) as t → 0, so

∫₀^t Ex[(L f)(X(s))] ds = t(L f)(x) + o(t)   (5.8.5)

as t → 0. Hence, a sufficient condition for f to lie in the domain of A is that f ∈ C² with f and L f bounded, in which case A f = L f.
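The limit (5.8.3) can also be seen numerically. For X Brownian motion and f(x) = sin x (so f and L f = −½ sin are bounded), Ex[f(X(t))] is a Gaussian expectation that Gauss-Hermite quadrature evaluates to high accuracy, and the difference quotient approaches (L f)(x):

```python
import numpy as np

# Numerical illustration of the generator limit (5.8.3) for X = Brownian
# motion and f(x) = sin(x).  E_x f(X(t)) = E[sin(x + sqrt(t) Z)], Z ~ N(0,1),
# is computed by Gauss-Hermite quadrature, and the difference quotient is
# compared with (L f)(x) = (1/2) f''(x) = -(1/2) sin(x).
x = 1.0
nodes, weights = np.polynomial.hermite.hermgauss(60)

def expected_f(t):
    # E[sin(x + sqrt(t) Z)] via the substitution z = sqrt(2 t) y
    return np.sum(weights * np.sin(x + np.sqrt(2 * t) * nodes)) / np.sqrt(np.pi)

Lf = -0.5 * np.sin(x)
for t in [1e-1, 1e-2, 1e-3]:
    quotient = (expected_f(t) - np.sin(x)) / t
    print(t, quotient, Lf)   # quotient approaches Lf as t decreases

assert abs((expected_f(1e-3) - np.sin(x)) / 1e-3 - Lf) < 1e-3
```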

We turn next to studying the role of the boundary behavior of X in restricting the domain of A. Suppose, for example, that X is a process on R+ satisfying

dX(t) = μ(X(t)) dt + σ(X(t)) dB(t) + dL(t),   (5.8.6)

where μ and σ satisfy the conditions of Lions and Sznitman (1984) that ensure existence of a unique solution to (5.8.6), and L is the minimal nondecreasing process such that I(X(t) > 0) dL(t) = 0.

For this reflecting process and f ∈ C², Itô's lemma ensures that

f(X(t)) − f(X(0)) − ∫₀^t (L f)(X(s)) ds − f′(0)(L(t) − L(0))

is a Px-local martingale. If f and L f are bounded and f′(0) = 0, then an argument identical to that just given for diffusions without boundaries establishes that for each x ≥ 0,

lim_{t↓0} ( Ex[f(X(t))] − f(x) ) / t = (L f)(x).

We conclude that for such reflecting diffusions, the domain includes all functions f ∈ C² for which f′(0) = 0 and f, L f are bounded, in which case A f = L f.

Remark 5.8.1 When modeling a jump process, the typical starting point is the specification of
the rate matrix Q. The first mathematical question to be answered is whether there exists a unique
jump process with the given rate matrix Q. For finite-state jump processes, this question is easily
answered.

The modern approach to diffusions is to model the process of interest by specifying an SDE. One then studies the question of whether there exists a unique solution to the SDE. But another approach (and one that was extensively followed in the early days of diffusions) is to model the process by specifying the generator and its domain. One then needs to establish that there exists a family of transition probabilities (P(t, x, ·) : t ≥ 0, x ∈ Rd) such that
P(t + s, x, dy) = ∫_{Rd} P(t, x, dz) P(s, z, dy).   (5.8.7)

Equation (5.8.7) is known as the semigroup property, and is essential to proving that the stochastic process that is constructed from the generator has finite-dimensional distributions that are consistent with the Markov property. Given a generator and its domain, the key question is therefore showing that there exists a unique semigroup that is generated by that generator and associated domain. This is a question that can be answered by appealing to a functional analysis result due to Hille and Yosida. Because this methodology takes advantage of Banach space ideas, one needs to strengthen the definition (5.8.3) so that it exists as a Banach space limit. For example, if the underlying Banach space is chosen to be the space of continuous functions equipped with the supremum norm, one would require that A f be a continuous function for which

sup_x | ( Ex[f(X(t))] − f(x) ) / t − (A f)(x) | → 0

as t ↓ 0. When this (strong) definition is followed, it leads to a definition of the generator under which many functions f fail to be in the domain. On the other hand, it offers the ability to apply functional-analytic tools to constructing diffusion processes.

Remark 5.8.2 One classical means of enlarging the class of functions lying in the domain of the generator is to consider instead the characteristic operator A. The characteristic operator is defined by letting Tε be the first time that ‖X(t) − X(0)‖ exceeds ε, and defining Af via

lim_{ε↓0} ( Ex[f(X(Tε))] − f(x) ) / Ex[Tε] = (Af)(x);   (5.8.8)

the domain of A is the class of functions f for which the limit (5.8.8) exists for all x ∈ Rd. Note that if f is C², it follows that (5.8.4) is a local martingale that is bounded for t ≤ Tε, proving the existence of the limit (5.8.8) for such functions, with Af = L f. Hence, we no longer need to require that f and L f be (globally) bounded in order that f lie in the domain of A.

The modern definition of the generator of a Markov process proceeds as follows (and leads to a domain that is very large). Given a Markov process X = (X(t) : t ≥ 0), we say that f lies in the domain of the generator A (and write f ∈ D(A)) if there exists a function g such that

f(X(t)) − ∫₀^t g(X(s)) ds   (5.8.9)

is a Px-local martingale for each x ∈ Rd, in which case we write g = A f.

This leads to a very large domain D(A). In addition to containing all C² functions f, this definition of the domain may include even non-smooth functions. Suppose, for example, that X satisfies (5.2.1) and is a positive recurrent diffusion. Then, there typically exists a stationary distribution π such that

Ex[h(X(t))] → ∫_S h(x) π(dx)

as t → ∞, for all functions h for which the right-hand side is integrable. Set

g(x) = h(x) − ∫_S h(y) π(dy).

If we assume that

∫₀^∞ | Ex[g(X(t))] | dt < ∞

for all x ∈ Rd (as would generally occur when the rate of convergence to equilibrium is exponentially fast), then

f(x) = − ∫₀^∞ Ex[g(X(t))] dt

is a well-defined (finite-valued) function. Furthermore, it is easily verified that (5.8.9) is then a Px-martingale for each x, so f ∈ D(A). Note that g need not be smooth, and there is no guarantee that f is smooth. So D(A) may include functions that lie (for example) outside C². Because this definition leads to a very large domain, it has become increasingly popular in the literature.
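A worked instance of this construction (the particular h below is our choice, made for tractability): for the OU process dX(t) = −X(t)dt + dB(t), the stationary distribution is π = N(0, 1/2). Taking h(x) = x² gives g(x) = x² − 1/2, and the exact OU moments yield Ex[g(X(t))] = (x² − 1/2)e^{−2t}, so f(x) = −(x² − 1/2)/2. The sketch checks that this f solves the Poisson equation L f = g, which is what makes (5.8.9) a local martingale:

```python
import numpy as np

# Illustration of (5.8.9) for the OU process dX = -X dt + dB, with
# h(x) = x^2, g(x) = x^2 - 1/2 and candidate f(x) = -(x^2 - 1/2)/2.
# Here (L f)(x) = -x f'(x) + (1/2) f''(x), and we expect L f = g.
f = lambda x: -(x**2 - 0.5) / 2
g = lambda x: x**2 - 0.5

# Check L f = g by finite differences on a grid.
h_step = 1e-5
xs = np.linspace(-3.0, 3.0, 121)
f_x = (f(xs + h_step) - f(xs - h_step)) / (2 * h_step)
f_xx = (f(xs + h_step) - 2 * f(xs) + f(xs - h_step)) / h_step**2
Lf = -xs * f_x + 0.5 * f_xx
assert np.max(np.abs(Lf - g(xs))) < 1e-4

# Check f(x) = -int_0^inf E_x g(X(t)) dt at one point, using the exact
# OU decay E_x g(X(t)) = (x^2 - 1/2) e^{-2t} and trapezoidal quadrature.
x = 1.3
ts = np.linspace(0.0, 20.0, 200_001)
integrand = (x**2 - 0.5) * np.exp(-2 * ts)
integral = np.sum((integrand[:-1] + integrand[1:]) / 2) * (ts[1] - ts[0])
assert abs(f(x) + integral) < 1e-6
print("Poisson equation L f = g verified")
```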

Again, the boundary behavior of X plays a role in defining the domain D(A). Consider, for example, a diffusion that is subject to absorption at the exit time from C. Such a diffusion can be represented as X(t) = Y(t ∧ T), where T is the exit time from C. Note that for t ≥ T,

f(X(t)) − ∫₀^t g(X(s)) ds = f(Y(T)) − ∫₀^T g(Y(s)) ds − g(Y(T))(t − T).

Hence, in order that this process be a local martingale, we must require that g = 0 on ∂C (or equivalently, A f = 0 on ∂C). Hence if the boundary is accessible (i.e. Px(T < ∞) > 0), f ∈ D(A) requires that A f = 0 on ∂C.

Exercise 5.8.1 Suppose that X satisfies (5.8.6). Show that f ∈ D(A ) if f ∈ C 2 and f 0 (0) = 0.

5.9 Integro-PDEs for Diffusions with Jump Boundaries

Suppose that X = (X(t) : t ≥ 0) is a process for which

X(t) = X(0) + ∫₀^t μ(X(s)) ds + ∫₀^t σ(X(s)) dB(s) + Σ_{i=1}^{N(t)} ξ_i,

where N(t) = max{n ≥ 0 : T_n ≤ t}, T_0 = 0, and

T_i = inf{t > T_{i−1} : X(t) = 0}.

We assume that the ξ_i's are iid positive random variables. In other words, X has state space [0, ∞), and X jumps into (0, ∞) according to the distribution P(ξ_1 ∈ ·) every time X hits the origin.
To illustrate the computation of probabilities and expectations for this process, consider the computation of

Ex[ ∫₀^∞ e^{−αt} r(X(t)) dt ],

where r is bounded. As usual, the appropriate martingale here is

∫₀^t e^{−αs} r(X(s)) ds + e^{−αt} u(X(t)).   (5.9.1)

Note that X evolves continuously between the jumps, so separating out these intervals and applying Itô's formula (assuming that u ∈ C²) yields

e^{−αt} u(X(t)) = u(X(0)) + ∫₀^t e^{−αs} ( (L u)(X(s)) − α u(X(s)) ) ds
+ ∫₀^t e^{−αs} u′(X(s)) σ(X(s)) dB(s)   (5.9.2)
+ Σ_{i=1}^{N(t)} ( u(X(T_i)) − u(X(T_i−)) ) e^{−αT_i}.

Indeed,

e^{−αt} u(X(t)) = u(X(0)) + ( e^{−αT_1} u(X(T_1−)) I(T_1 ≤ t) − u(X(0)) )
+ Σ_{i=1}^{N(t)} ( u(X(T_i)) − u(X(T_i−)) ) e^{−αT_i}
+ Σ_{i=1}^{N(t)−1} ( u(X(T_{i+1}−)) e^{−αT_{i+1}} − u(X(T_i)) e^{−αT_i} )
+ ( u(X(t)) e^{−αt} − u(X(T_{N(t)})) e^{−αT_{N(t)}} ).

So, on {T_1 ≤ t}, Itô's formula guarantees that if u ∈ C², then

e^{−αT_1} u(X(T_1−)) − u(X(0)) = ∫₀^{T_1} e^{−αs} ( (L u)(X(s)) − α u(X(s)) ) ds + ∫₀^{T_1} e^{−αs} u′(X(s)) σ(X(s)) dB(s).

Similarly, on {T_{i+1} ≤ t},

u(X(T_{i+1}−)) e^{−αT_{i+1}} − u(X(T_i)) e^{−αT_i} = ∫_{T_i}^{T_{i+1}} e^{−αs} ( (L u)(X(s)) − α u(X(s)) ) ds + ∫_{T_i}^{T_{i+1}} e^{−αs} u′(X(s)) σ(X(s)) dB(s).

Finally,

u(X(t)) e^{−αt} − u(X(T_{N(t)})) e^{−αT_{N(t)}} = ∫_{T_{N(t)}}^t e^{−αs} ( (L u)(X(s)) − α u(X(s)) ) ds + ∫_{T_{N(t)}}^t e^{−αs} u′(X(s)) σ(X(s)) dB(s).

Combining and collecting like terms yields (5.9.2).

Now,

Σ_{i=1}^{N(t)} ( u(X(T_i)) − u(X(T_i−)) ) e^{−αT_i}
= Σ_{i=1}^{N(t)} ( u(X(T_i)) − u(X(T_i−)) − E[ u(X(T_i)) − u(X(T_i−)) | X(s−) : 0 ≤ s ≤ T_i ] ) e^{−αT_i}
+ Σ_{i=1}^{N(t)} E[ u(X(T_i)) − u(X(T_i−)) | X(s−) : 0 ≤ s ≤ T_i ] e^{−αT_i}.

Note that the stochastic integral is a local martingale, and it can be shown that

Σ_{i=1}^{N(t)} ( u(X(T_i)) − u(X(T_i−)) − E[ u(X(T_i)) − u(X(T_i−)) | X(s−) : 0 ≤ s ≤ T_i ] ) e^{−αT_i}

is a martingale adapted to (X(t−) : t ≥ 0). Hence, in order that (5.9.1) be a martingale, we require that u satisfy

(L u)(x) − α u(x) + r(x) = 0, x ≥ 0,

subject to

∫₀^∞ u(y) P(ξ_1 ∈ dy) = u(0).

If we can find a bounded solution to this system, then our standard argument establishes that

u(x) = Ex[ ∫₀^∞ e^{−αt} r(X(t)) dt ]

for x ≥ 0.
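To make the system concrete, take μ = 0, σ = 1 (so L = ½ d²/dx²), α = 1, r(x) = e^{−x} and ξ₁ ~ Exp(1); the ds-terms in (5.9.1)-(5.9.2) force (L u)(x) − αu(x) + r(x) = 0. Solving this ODE and fitting the jump condition suggests the bounded candidate u(x) = 2e^{−x} − ((1 + √2)/√2) e^{−√2 x}, an assumption verified numerically below:

```python
import numpy as np

# Check of the jump-boundary system for Brownian motion between jumps
# (L = (1/2) d^2/dx^2), alpha = 1, r(x) = exp(-x), xi ~ Exp(1):
#   (1/2) u'' - u + r = 0 on [0, inf),  u(0) = int_0^inf u(y) e^{-y} dy.
# Candidate (an assumption, from solving the ODE and the jump condition):
#   u(x) = 2 exp(-x) + A exp(-sqrt(2) x),  A = -(1 + sqrt(2)) / sqrt(2).
A = -(1 + np.sqrt(2)) / np.sqrt(2)
u = lambda x: 2 * np.exp(-x) + A * np.exp(-np.sqrt(2) * x)
r = lambda x: np.exp(-x)

# ODE residual via central differences.
h = 1e-4
xs = np.linspace(0.0, 8.0, 401)
u_xx = (u(xs + h) - 2 * u(xs) + u(xs - h)) / h**2
assert np.max(np.abs(0.5 * u_xx - u(xs) + r(xs))) < 1e-5

# Jump boundary condition u(0) = E[u(xi)], xi ~ Exp(1), by quadrature.
dy = 1e-4
ys = np.arange(0.0, 40.0, dy)
vals = u(ys) * np.exp(-ys)
E_u_xi = np.sum((vals[:-1] + vals[1:]) / 2) * dy
print(u(0.0), E_u_xi)
assert abs(u(0.0) - E_u_xi) < 1e-6
```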

