
TCM310 Fall 2019: Stochastic Methods

lecture 25: differential calculus for the poisson process

Course handouts are designed as a study aid and are not meant to replace the recommended textbooks. Handouts
may contain typos and/or errors. Students are encouraged to verify the information contained within and to
report any issues to the lecturer.

contents

1 Introduction
2 Differential of the Poisson process
  2.1 Infinitesimal increments
  2.2 Differential of a function of the Poisson process
3 Continuity in mean square and in probability
4 Doob-Meyer decomposition of the differential
5 Itô integral in the space of mean square integrable functions
  5.1 Itô-Leibniz rule
References

1 introduction

A very concise and insightful introduction to stochastic differential equations with Poisson noise is contained
in lecture 11 of Robert Liptser's lecture notes [3]¹. A physics-style presentation is chapter 8 of [2]. A
mathematics (stochastic analysis) style introduction to the topic can be found in [1].

2 differential of the poisson process

A pure counting Poisson process N_t with rate r admits the pathwise representation

N_t = \sum_{n=1}^{\infty} n \, I_{[\sigma_n, \sigma_{n+1})}(t)   (1)



¹ https://www.eng.tau.ac.il/~liptser/


since the arrival process satisfies 0 < \sigma_n < \sigma_{n+1} with probability one for all n > 0. Our aim now is to recast this
expression into an equivalent form that affords a more transparent representation of the increments
of the process. To start with we observe that

proposition. we can equivalently write the set indicator function I_{[\sigma_n,\sigma_{n+1})}(t) as

I_{[\sigma_n,\sigma_{n+1})}(t) = I_{[0,t]}(\sigma_n)\, I_{(t,\infty)}(\sigma_{n+1})


Proof.
Using the definition, we verify that

I_{[\sigma_n,\sigma_{n+1})}(t) =
\begin{cases}
0 & t < \sigma_n < \sigma_{n+1} \\
1 & \sigma_n \le t < \sigma_{n+1} \\
0 & \sigma_n < \sigma_{n+1} \le t
\end{cases}
=
\begin{cases}
0 & t < \sigma_n \\
1 & \sigma_n \le t
\end{cases}
\times
\begin{cases}
1 & t < \sigma_{n+1} \\
0 & \sigma_{n+1} \le t
\end{cases}
= I_{[0,t]}(\sigma_n)\, I_{(t,\infty)}(\sigma_{n+1})

Furthermore, for all t \in \mathbb{R}_+ and all n the identity

1 = I_{[0,t]}(\sigma_n) + I_{(t,\infty)}(\sigma_n)

allows us to write

I_{[0,t]}(\sigma_n)\, I_{(t,\infty)}(\sigma_{n+1}) = I_{[0,t]}(\sigma_n) \left( 1 - I_{[0,t]}(\sigma_{n+1}) \right)
= I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_n)\, I_{[0,t]}(\sigma_{n+1}) = I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) + I_{(t,\infty)}(\sigma_n)\, I_{[0,t]}(\sigma_{n+1})

The claim follows because

\sigma_n < \sigma_{n+1} \;\Rightarrow\; I_{(t,\infty)}(\sigma_n)\, I_{[0,t]}(\sigma_{n+1}) = 0

with probability one.

We use the alternative expression of the set indicator function to recast (1) in the form of a telescopic series:

N_t = \sum_{n=1}^{\infty} n \left( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \right)
= \sum_{n=1}^{\infty} \sum_{l=1}^{n} \left( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \right)
= \sum_{l=1}^{\infty} \sum_{n=l}^{\infty} \left( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \right)   (2)

By hypothesis the arrival process satisfies with probability one

σ∞ = ∞

therefore


proposition.

N_t = \sum_{i=1}^{\infty} I_{[0,t]}(\sigma_i)   (3)

Proof.
We use in (2) the identity

\sum_{i=n}^{\infty} \left( I_{[0,t]}(\sigma_i) - I_{[0,t]}(\sigma_{i+1}) \right) = I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_\infty) = I_{[0,t]}(\sigma_n)

The pathwise representation (3) is the one we want to apply to write increments of the process. Namely,
for all t, s > 0 we obtain

N_{t+s} - N_t = \sum_{i=1}^{\infty} \left( I_{[0,t+s]}(\sigma_i) - I_{[0,t]}(\sigma_i) \right) = \sum_{i=1}^{\infty} I_{(t,t+s]}(\sigma_i)
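The arrival-time representation (3) translates directly into a simulation recipe: generate the σ_i as partial sums of independent exponential waiting times and count how many fall in [0, t]. A minimal NumPy sketch (the rate r = 2 and horizon T = 10 are illustrative choices, not values from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.0    # rate: illustrative value
T = 10.0   # time horizon: illustrative value

# Arrival times sigma_i are partial sums of i.i.d. Exp(r) waiting times,
# so 0 < sigma_n < sigma_{n+1} with probability one.
waits = rng.exponential(1.0 / r, size=1000)
sigma = np.cumsum(waits)

def N(t, sigma=sigma):
    """Pathwise representation (3): N_t = sum_i I_{[0,t]}(sigma_i)."""
    return int(np.sum(sigma <= t))

# Increments count the arrivals falling in (t, t+s], as in the last display.
t, s = 3.0, 2.0
assert N(t + s) - N(t) == int(np.sum((t < sigma) & (sigma <= t + s)))
```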

2.1 Infinitesimal increments

We can always reconstruct the statistics of a Poisson process over a finite time horizon from that of Poisson
increments over an infinitesimal time step. If s = dt we write

dN_t = \sum_{i=1}^{\infty} I_{(t,t+dt]}(\sigma_i)

proposition. Let N_t be a Poisson process with rate r. With probability one

dN_t : \mathbb{R}_+ \mapsto \{0, 1\}
Proof.
Arrival times obey the probability distribution

d P_{\sigma_i}(t) \equiv P(t < \sigma_i \le t + dt) = r^i\, \frac{t^{i-1}}{(i-1)!}\, e^{-r t}\, dt

The expectation value of an increment over dt is therefore

E\, dN_t = \sum_{i=1}^{\infty} E\, I_{(t,t+dt]}(\sigma_i) = \sum_{i=1}^{\infty} r^i\, \frac{t^{i-1}}{(i-1)!}\, e^{-r t}\, dt = r\, dt\, e^{-r t} \sum_{i=1}^{\infty} \frac{(r t)^{i-1}}{(i-1)!} = r\, dt

Upon recalling that the Poisson process has independent increments by definition, we conclude that in an
infinitesimally small time interval around t only two events are possible:

• no arrival occurs, i.e. I_{(t,t+dt]}(\sigma_i) = 0 for all i, and

P(dN_t = 0) = 1 - \sum_{i=1}^{\infty} d P_{\sigma_i}(t) = 1 - r\, dt


• one arrival occurs, i.e. there is an i^\star such that I_{(t,t+dt]}(\sigma_{i^\star}) = 1, and

P(dN_t = 1) = \sum_{i=1}^{\infty} d P_{\sigma_i}(t) = r\, dt

We may summarize the properties of an increment dN_t as

(dN_t)^2 = dN_t   (4a)

E\, dN_t = r\, dt   (4b)

Indeed, (4a) is satisfied only if dN_t \in \{0, 1\}, and (4b) then completely specifies the probability of the possible events.

Table 1: Differential algebra for infinitesimal increments of the Poisson process: we neglect terms of order o(dt).

  ×    | dt | dN_t
  dt   | 0  | 0
  dN_t | 0  | dN_t
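Properties (4a)-(4b) can be probed numerically: over a small step dt an increment is Poisson distributed with parameter r dt, so values ≥ 2 occur with probability O(dt²) while the mean is r dt. A Monte Carlo sanity check with NumPy (rate, step size and sample count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
r, dt = 2.0, 1e-4        # rate and time step: illustrative values
n = 1_000_000

# Over a step dt the increment dN_t is Poisson distributed with mean r*dt.
dN = rng.poisson(r * dt, size=n)

# (4a): dN_t is {0,1}-valued up to events of probability O(dt^2).
assert np.mean(dN >= 2) <= 1e-5

# (4b): E dN_t = r dt, within a few Monte Carlo standard deviations.
assert abs(dN.mean() - r * dt) < 5 * np.sqrt(r * dt / n)
```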

2.2 Differential of a function of the Poisson process

Let

f : \mathbb{R}_+ \mapsto \mathbb{R}

be any function continuous almost everywhere (i.e. it may have discontinuities on a set of Lebesgue measure zero).

definition. The differential of f with respect to a Poisson process N_t is

df(N_t) = f(N_{t+dt}) - f(N_t)

The differential is in fact linear in dNt

proposition.

df(N_t) = dN_t \left( f(N_t + 1) - f(N_t) \right)
Proof.
It is instructive to first verify the claim when f is analytic. In such a case we can apply (4a) to the Taylor
expansion of f:

f(N_{t+dt}) - f(N_t) = \sum_{n=1}^{\infty} \frac{(dN_t)^n}{n!}\, f^{(n)}(N_t) = dN_t \sum_{n=1}^{\infty} \frac{1}{n!}\, f^{(n)}(N_t)

where we denote

f^{(n)}(x) = \frac{d^n f}{dx^n}(x)

If we rewrite the Taylor series in terms of the finite unit increment we obtain

f(N_{t+dt}) - f(N_t) = dN_t \left( f(N_t + 1) - f(N_t) \right)

We now recognize that the result does not depend on the analyticity of f: it just states that the increment
is non-vanishing only when dN_t = 1.
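The proposition can be verified exactly rather than statistically: since dN_t ∈ {0, 1}, the identity f(N_t + dN_t) − f(N_t) = dN_t (f(N_t + 1) − f(N_t)) holds for every integer value of N_t and for any f whatsoever. A short sketch (the cubic is a hypothetical test function):

```python
# Exact check of df(N_t) = dN_t (f(N_t + 1) - f(N_t)): with at most one
# jump per infinitesimal step, the increment of f collapses to a first
# order difference regardless of how nonlinear f is.
def f(n: int) -> int:
    return n ** 3 - 2 * n        # hypothetical test function

for N_t in range(6):
    for dN in (0, 1):            # the only values an increment can take
        lhs = f(N_t + dN) - f(N_t)
        rhs = dN * (f(N_t + 1) - f(N_t))
        assert lhs == rhs
```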

3 continuity in mean square and in probability

The pathwise representations (1), (3) of the Poisson process are based on the idea that the process has jumps
on a countable set. A countable set has zero Lebesgue measure. This is why the Poisson process can
be thought of as continuous whenever the statistical indicators used to check continuity are insensitive to
events occurring on sets of zero Lebesgue measure.

proposition. A Poisson process is continuous in mean square:

\lim_{t \downarrow s} E (N_t - N_s)^2 = 0
Proof.
The generating function of a Poisson process of rate r,

E\, z^{N_t} = \exp\left( r\, t\, (z - 1) \right),

yields

\frac{d^2}{dz^2}\, E\, z^{N_t} \Big|_{z=1} = E\, N_t (N_t - 1) = (r\, t)^2

whence E\, N_t^2 = r\, t\, (r\, t + 1). Using the independence and stationarity of the increments (N_t - N_s is Poisson distributed with parameter r (t - s)) we obtain

E (N_t - N_s)^2 = r\, (t - s) \left( r\, (t - s) + 1 \right)

and the claim readily follows.
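The second-moment formula can be checked by sampling the increment, which is Poisson with parameter r(t − s); the mean square then shrinks linearly as t ↓ s. A Monte Carlo sketch (rate and gap values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
r = 2.0                               # rate: illustrative value

for tau in (1.0, 0.1, 0.01):          # tau = t - s, shrinking toward 0
    # The increment N_t - N_s is Poisson distributed with parameter r*tau.
    X = rng.poisson(r * tau, size=200_000).astype(float)
    ms = np.mean(X ** 2)
    exact = r * tau * (r * tau + 1)   # E (N_t - N_s)^2
    assert abs(ms - exact) < 0.05 * exact + 1e-3
```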

Continuity in mean square implies continuity in probability

proposition. For all \varepsilon > 0

\lim_{t \downarrow s} P(|N_t - N_s| > \varepsilon) = 0

Proof.
The property is a consequence of the Chebyshev inequality

P(|N_t - N_s| > \varepsilon) \le \frac{E (N_t - N_s)^2}{\varepsilon^2}


4 doob-meyer decomposition of the differential

A general result in the theory of stochastic processes, the Doob-Meyer decomposition theorem (see e.g. [4]),
ensures the existence of a unique decomposition of every adapted and integrable stochastic process into the
sum of a martingale and a predictable process starting at zero, called the "drift" or "compensator". The
manifestation of this general result in the case of differentials of a Poisson process satisfying (4b) is the
decomposition

dN_t = r\, dt + dM_t   (5)

where

definition. we refer to the ordinary differential (4b) specified by the expectation value as the compensator
of the Poisson process differential,

and

definition. we call dM_t the martingale part or Poisson white-noise part.

The decomposition (5) defines the statistics of dM_t

proposition.

(dM_t)^2 = dN_t   (6a)

E\, dM_t = 0   (6b)

Proof. (6b) follows immediately from (4b) and (5). To check (6a) we use elementary differential algebra:

(dM_t)^2 = (dN_t - r\, dt)^2 = (dN_t)^2 + o(dt) = dN_t

having used Table 1 to retain products of differentials within order O(dt).

It is instructive to look directly at the realizations of dM_t:

P(dM_t = -r\, dt) = 1 - r\, dt   (7a)

P(dM_t = 1 - r\, dt) = r\, dt   (7b)

whence we readily verify

E (dM_t)^2 = (-r\, dt)^2 (1 - r\, dt) + (1 - r\, dt)^2\, r\, dt = r\, dt + o(dt) = E\, dN_t

Table 2: Differential algebra for the martingale part increment.

  ×    | dt | dM_t
  dt   | 0  | 0
  dM_t | 0  | dN_t
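The statistics (6a)-(6b) of the martingale part are easy to probe by compensating simulated Poisson increments. A Monte Carlo sketch (rate, step and sample size are illustrative; tolerances are a few standard deviations):

```python
import numpy as np

rng = np.random.default_rng(3)
r, dt, n = 2.0, 1e-3, 500_000        # rate, step, sample size: illustrative

dN = rng.poisson(r * dt, size=n).astype(float)
dM = dN - r * dt                     # martingale part of the split (5)

# (6b): E dM_t = 0, and in expectation (6a) gives E (dM_t)^2 = r dt.
tol = 5 * np.sqrt(r * dt / n)        # a few Monte Carlo standard deviations
assert abs(dM.mean()) < tol
assert abs(np.mean(dM ** 2) - r * dt) < tol
```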

5 itô integral in the space of mean square integrable functions

Let us suppose that \{\xi_t\}_{t \ge 0} is a stochastic process adapted to the filtration of the Poisson process (4) and
satisfying the following properties:

1. mean square integrability:

E \int_0^t ds\, \xi_s^2 < \infty

2. non-anticipation: \xi_t may depend only on N_s with s \le t. As a consequence, \xi_t and dM_t = dN_t - r\, dt
are independent variables:

E\, \xi_t\, dM_t = E\, \xi_t\; E\, dM_t = 0

definition. For any stochastic process \xi_t satisfying the above two properties we can define the Itô integral

\int_0^t dM_s \cdot \xi_s := \lim_{\|p\| \downarrow 0} \sum_{t_k \in p} \xi_{t_{k-1}} \left( M_{t_k} - M_{t_{k-1}} \right)   (8)

where p denotes any partition of the interval 0 = t_0 \le t_1 \le \dots \le t_n = t and \|p\| the mesh of the partition

\|p\| = \max_{t_k \in p} (t_k - t_{k-1})

The product "\cdot" in (8) emphasizes the "pre-point" discretization rule of the Itô integral: for any interval
(t_{k-1}, t_k]

• the integrand is evaluated at t_{k-1} (Itô or pre-point convention);

• the increment M_{t_k} - M_{t_{k-1}} is thus independent of \xi_{t_{k-1}}.

In order to give a precise meaning to the limit over a sequence of partitions in (8) we may resort to a
"dyadic partitioning" of the interval [0, t]. In such a case we define

I_n = \sum_{t_k \in T_n} \xi_{t_{k-1}} \left( M_{t_k} - M_{t_{k-1}} \right)   where   T_n = \left\{ \frac{j\, t}{2^n} \right\}_{j=0}^{2^n}   (9)

so that the sum on the right-hand side consists of exactly 2^n addends. Increments of the Poisson process are
thus evaluated over nearest neighbors of the dyadic level T_n:

E \left( M_{t_k} - M_{t_{k-1}} \right)^2 = E \left( N_{t_k} - N_{t_{k-1}} \right) = \frac{r\, t}{2^n}
Under these hypotheses we can think of \xi_{t_{k-1}} as a function

\xi_{t_{k-1}} = f(N_0, N_{t_1}, \dots, N_{t_{k-1}})   for   t_1 \le \dots \le t_{k-1} \in T_n   (10)

The non-anticipating hypothesis is manifest in the fact that \xi_{t_{k-1}} depends on values of \{N_t,\, t \ge 0\} up to
t_{k-1} but no further.
We require the limit on the right-hand side of (8) to exist in the mean square sense. This means that we require
the sequence of approximations \{I_n\}_{n \ge 0} to converge in the sense of Cauchy:

\lim_{n \uparrow \infty} E (I_n - I_{n+m})^2 = 0 \qquad \forall\, m \ge 0


Note that we use here convergence in the sense of Cauchy because in general the explicit form of the
primitive function may not be available, as often happens also for ordinary integrals.
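The pre-point sums defining (8) can be emulated on a finite grid: with the integrand evaluated at the left endpoint, every term ξ_{t_{k−1}}(M_{t_k} − M_{t_{k−1}}) has zero mean, so the Itô sum itself averages to zero across paths. A NumPy sketch with the hypothetical choice ξ_s = N_s (grid and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
r, T = 2.0, 1.0                      # rate and horizon: illustrative
n_paths, n_steps = 20_000, 128       # sample and grid sizes: illustrative
dt = T / n_steps

# Simulate Poisson increments and their compensated martingale parts.
dN = rng.poisson(r * dt, size=(n_paths, n_steps)).astype(float)
dM = dN - r * dt

# Pre-point values N_{t_{k-1}}: the path value *before* each increment.
N_pre = np.cumsum(dN, axis=1) - dN

# Pre-point (Ito) sum: the integrand at the left endpoint multiplies the
# future increment, so every term has zero expectation.
I = np.sum(N_pre * dM, axis=1)

assert abs(I.mean()) < 0.08          # E of the Ito sum vanishes
```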
The definition (8) entails that

proposition. the Itô integral enjoys

i. the martingale property: for all s \le t

E\left[ \int_0^t dM_u \cdot \xi_u \,\middle|\, \{N_v,\, v \le s\} \right] = \int_0^s dM_u \cdot \xi_u   (11)

ii. the Itô isometry

E \left( \int_0^t dM_s \cdot \xi_s \right)^2 = \int_0^t ds\, r\, E\, \xi_s^2 = r\, E \int_0^t ds\, \xi_s^2   (12)
Proof.
We use the abridged notation

E_{N_s} \int_0^t dM_u \cdot \xi_u \equiv E\left[ \int_0^t dM_u \cdot \xi_u \,\middle|\, \{N_v,\, v \le s\} \right]
= \lim_{\|p\| \downarrow 0} \sum_{u_k \in p} E\left[ \xi_{u_{k-1}} \left( M_{u_k} - M_{u_{k-1}} \right) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right]

• The martingale property stems from the non-anticipating requirement imposed on \{\xi_t\}_{t \ge 0} and the
independence of the increments of the Poisson process. We can always decompose any partition p of
the interval [0, t] into a component p_1 partitioning [0, s] and a component p_2 partitioning (s, t]. Then

E_{N_s} \int_0^t dM_u \cdot \xi_u = E_{N_s} \int_0^s dM_u \cdot \xi_u + E_{N_s} \int_s^t dM_u \cdot \xi_u
= E_{N_s} \lim_{\|p\| \downarrow 0} \sum_{u_k \in p_1} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}}) + E_{N_s} \lim_{\|p\| \downarrow 0} \sum_{u_k \in p_2} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}})

By hypothesis we are entitled to exchange the limit with the expectation sign. It is thus sufficient to check
that

E_{N_s} \sum_{u_k \in p_1} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}}) \equiv E\left[ \sum_{u_k \in p_1} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}}) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right] = \sum_{u_k \in p_1} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}})

which holds true because we are evaluating the expectation value of functions of random variables

conditioned upon the values of the very same random variables, and

E_{N_s} \sum_{u_k \in p_2} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}}) \equiv E\left[ \sum_{u_k \in p_2} \xi_{u_{k-1}} (M_{u_k} - M_{u_{k-1}}) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right]
= \sum_{u_k \in p_2} E\, \xi_{u_{k-1}}\; E \left( M_{u_k} - M_{u_{k-1}} \right) = 0

which holds true because of the independence of the increments.

• The Itô isometry (and with it mean square integrability of the limit) is proved along the same lines:

E \left( \int_0^t dM_s \cdot \xi_s \right)^2 = \lim_{\|p\| \downarrow 0} \sum_{s_k, s_l \in p} E \left( M_{s_{k+1}} - M_{s_k} \right) \left( M_{s_{l+1}} - M_{s_l} \right) \xi_{s_k}\, \xi_{s_l}

As by hypothesis \xi_t is non-anticipating,

E \left\{ \left( M_{s_{k+1}} - M_{s_k} \right) \left( M_{s_{l+1}} - M_{s_l} \right) \xi_{s_k}\, \xi_{s_l} \right\} = \delta_{k l}\, r\, (s_{k+1} - s_k)\, E\, \xi_{s_k}^2

Hence, upon exchanging the order of the limit and expectation value operations,

E \left( \int_0^t dM_s \cdot \xi_s \right)^2 = \lim_{\|p\| \downarrow 0} \sum_{s_k \in p} r\, (s_{k+1} - s_k)\, E\, \xi_{s_k}^2 = r \int_0^t ds\, E\, \xi_s^2

which yields the claim.
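The isometry (12) can be checked on a finite grid, again with the hypothetical integrand ξ_s = N_s, for which E N_s² = r s + (r s)² makes the right-hand side available in closed form. A Monte Carlo sketch (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
r, T = 2.0, 1.0                      # rate and horizon: illustrative
n_paths, n_steps = 50_000, 128       # sample and grid sizes: illustrative
dt = T / n_steps

dN = rng.poisson(r * dt, size=(n_paths, n_steps)).astype(float)
dM = dN - r * dt
N_pre = np.cumsum(dN, axis=1) - dN   # left-endpoint (pre-point) values

I = np.sum(N_pre * dM, axis=1)       # pre-point Ito sum with xi_s = N_s

# Right-hand side of (12): E N_s^2 = r s + (r s)^2, so
# r * int_0^T E N_s^2 ds = r * (r T^2 / 2 + r^2 T^3 / 3).
rhs = r * (r * T ** 2 / 2 + r ** 2 * T ** 3 / 3)
assert abs(np.mean(I ** 2) - rhs) < 0.08 * rhs
```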

remark. The Doob-Meyer decomposition allows us to apply the foregoing construction to define, under the
same hypotheses,

\int_0^t dN_s \cdot \xi_s \equiv r \int_0^t ds\, \xi_s + \int_0^t dM_s \cdot \xi_s

The dN_t and dM_t integrals thus differ by an ordinary integral.


5.1 Itô-Leibniz rule


The upshot of the foregoing section is that the Itô integral is discretization dependent: the value of the
integral depends upon the way we sample the integrand in the sequences of finite sums used as
approximations of the integral. Self-consistency of the definition of the differential of a function,

f(N_{t_2}) - f(N_{t_1}) = \int_{t_1}^{t_2} df(N_s)


then requires adapting the Leibniz rule to take into account the discretization sensitivity of products of
differentials. Let

f, g : \mathbb{R}_+ \mapsto \mathbb{R}

be two smooth functions. Then, if we evaluate differential prefactors according to the pre-point convention,

d(f g)(N_t) = f(N_{t+dt})\, g(N_{t+dt}) - f(N_t)\, g(N_t)
= \left( f(N_{t+dt}) - f(N_t) \right) g(N_t) + f(N_t) \left( g(N_{t+dt}) - g(N_t) \right) + \left( f(N_{t+dt}) - f(N_t) \right) \left( g(N_{t+dt}) - g(N_t) \right)

we obtain

definition. the Itô-Leibniz differential

d(f g)(N_t) = (df)(N_t)\, g(N_t) + f(N_t)\, (dg)(N_t) + (df)(N_t)\, (dg)(N_t)
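As with the differential of a single function, the Itô-Leibniz rule is an exact algebraic identity once dN_t ∈ {0, 1}: expanding f(n + d) g(n + d) − f(n) g(n) for d ∈ {0, 1} reproduces df g + f dg + df dg term by term. A short exact check (the two polynomials are hypothetical test functions):

```python
# Exact check of the Ito-Leibniz rule d(fg) = df g + f dg + df dg:
# for jump noise the cross term df dg is not negligible, because
# (dN_t)^2 = dN_t rather than o(dt).
def f(n): return n ** 2 + 1          # hypothetical test functions
def g(n): return 3 * n - n ** 3

for n in range(6):
    for dN in (0, 1):
        d_fg = f(n + dN) * g(n + dN) - f(n) * g(n)
        df = f(n + dN) - f(n)
        dg = g(n + dN) - g(n)
        assert d_fg == df * g(n) + f(n) * dg + df * dg
```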


references

[1] R. F. Bass. Stochastic differential equations with jumps. Probability Surveys, 1:1–19, 2004.

[2] K. Jacobs. Stochastic Processes for Physicists: Understanding Noisy Systems. Cambridge University Press,
September 2010.

[3] R. Liptser. Stochastic Processes. Lecture Notes, Tel Aviv University, 1996.

[4] J. M. Steele. Stochastic Calculus and Financial Applications, volume 45 of Applications of Mathematics.
Springer, 2001.
