Course handouts are designed as a study aid and are not meant to replace the recommended textbooks. Handouts may contain typos and/or errors. Students are encouraged to verify the information contained within and to report any issues to the lecturer.
Contents

1 Introduction
References
1 Introduction
A very concise and insightful introduction to stochastic differential equations with Poisson noise is contained in lecture 11 of Robert Liptser's lecture notes [3]. A physics-style presentation is chapter 8 of [2]. A mathematics (stochastic analysis) style introduction to the topic can be found in [1].
2 Differential of the Poisson process
as the arrival process satisfies $0 < \sigma_n < \sigma_{n+1}$ with probability one for all $n > 0$. Our aim now is to recast this expression into an equivalent form which affords a more transparent representation of the increments of the process. To start with we observe that

proposition. We can equivalently write the set indicator function $I_{[\sigma_n, \sigma_{n+1})}(t)$ as
$$I_{[\sigma_n, \sigma_{n+1})}(t) = I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1})$$

Proof. By construction $I_{[\sigma_n, \sigma_{n+1})}(t) = I_{[0,t]}(\sigma_n)\, I_{(t,\infty)}(\sigma_{n+1})$. The identity
$$1 = I_{[0,t]}(\sigma_n) + I_{(t,\infty)}(\sigma_n)$$
allows us to write
$$I_{[0,t]}(\sigma_n)\, I_{(t,\infty)}(\sigma_{n+1}) = I_{[0,t]}(\sigma_n)\, \big( 1 - I_{[0,t]}(\sigma_{n+1}) \big) = I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_n)\, I_{[0,t]}(\sigma_{n+1})$$
$$= I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) + I_{(t,\infty)}(\sigma_n)\, I_{[0,t]}(\sigma_{n+1})$$
The last product vanishes with probability one, since $\sigma_n < \sigma_{n+1}$ excludes the event $\sigma_n > t \ge \sigma_{n+1}$.
We use this alternative expression of the set indicator function to recast (1) in the form of a telescopic series
$$N_t = \sum_{n=1}^{\infty} n \big( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \big) = \sum_{n=1}^{\infty} \sum_{l=1}^{n} \big( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \big) = \sum_{l=1}^{\infty} \sum_{n=l}^{\infty} \big( I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_{n+1}) \big) \qquad (2)$$
where we used $n = \sum_{l=1}^{n} 1$ and exchanged the order of summation. Since with probability one
$$\sigma_\infty = \infty$$
each inner sum telescopes, as we now show.
2.1 Infinitesimal increments
proposition.
$$N_t = \sum_{i=1}^{\infty} I_{[0,t]}(\sigma_i) \qquad (3)$$

Proof. We use in (2) the identity
$$\sum_{i=n}^{\infty} \big( I_{[0,t]}(\sigma_i) - I_{[0,t]}(\sigma_{i+1}) \big) = I_{[0,t]}(\sigma_n) - I_{[0,t]}(\sigma_\infty) = I_{[0,t]}(\sigma_n)$$
The pathwise representation (3) is the one we want to apply to write increments of the process. Namely, for all $t, s > 0$ we obtain
$$N_{t+s} - N_t = \sum_{i=1}^{\infty} \big( I_{[0,t+s]}(\sigma_i) - I_{[0,t]}(\sigma_i) \big) = \sum_{i=1}^{\infty} I_{(t,t+s]}(\sigma_i)$$
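These pathwise formulas are easy to check on simulated trajectories. The sketch below (rate, horizon, and sample size are illustrative choices, not part of the notes) builds the arrival times $\sigma_i$ from i.i.d. exponential inter-arrival gaps and verifies both the counting representation and the increment formula:

```python
import numpy as np

rng = np.random.default_rng(0)
r = 2.0  # rate of the Poisson process (illustrative value)

# Arrival times sigma_i: cumulative sums of i.i.d. Exp(r) inter-arrival gaps.
sigma = np.cumsum(rng.exponential(1.0 / r, size=10_000))

def N(t):
    """Pathwise counting representation N_t = sum_i I_{[0,t]}(sigma_i)."""
    return np.count_nonzero(sigma <= t)

t, s = 3.0, 1.5
# The increment N_{t+s} - N_t counts exactly the arrivals in (t, t+s].
lhs = N(t + s) - N(t)
rhs = np.count_nonzero((sigma > t) & (sigma <= t + s))
assert lhs == rhs
```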
We can always reconstruct the statistics of a Poisson process over a finite time horizon from that of Poisson increments over an infinitesimal time step. If $s = dt$ we write
$$dN_t = \sum_{i=1}^{\infty} I_{(t,t+dt]}(\sigma_i)$$

proposition. The infinitesimal increment is a binary random variable,
$$dN_t : \mathbb{R}_+ \mapsto \{0, 1\} \qquad (4a)$$
with expectation value
$$E\, dN_t = r\, dt \qquad (4b)$$
Proof. Arrivals obey the probability distribution
$$dP_{\sigma_i}(t) \equiv P(t < \sigma_i \le t + dt) = r\, \frac{(r t)^{i-1}}{(i-1)!}\, e^{-r t}\, dt$$
The expectation value of an increment over $dt$ is therefore
$$E\, dN_t = \sum_{i=1}^{\infty} E\, I_{(t,t+dt]}(\sigma_i) = \sum_{i=1}^{\infty} r\, \frac{(r t)^{i-1}}{(i-1)!}\, e^{-r t}\, dt = r\, dt\, e^{-r t} \sum_{i=1}^{\infty} \frac{(r t)^{i-1}}{(i-1)!} = r\, dt$$
Upon recalling that the Poisson process has independent increments by definition, we conclude that in an infinitesimally small time interval around $t$ only two events are possible: either $dN_t = 1$, with probability $r\, dt$, or $dN_t = 0$, with probability $1 - r\, dt$.
Indeed, (4a) is satisfied only if $dN_t \in \{0, 1\}$, and (4b) completely specifies the probability of the possible events.

Table 1: Differential algebra for infinitesimal increments of the Poisson process: we neglect terms of order $o(dt)$.
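The two-event structure can be illustrated numerically. In the sketch below (rate, window width, and sample size are arbitrary illustrative values) the increment over a small window, which for a Poisson process is a Poisson variate with mean $r\,dt$, is sampled many times; a single arrival has probability close to $r\,dt$ while multiple arrivals are negligible:

```python
import numpy as np

rng = np.random.default_rng(1)
r, dt, n_paths = 2.0, 1e-3, 200_000   # illustrative values

# Increments of a Poisson process over disjoint intervals are independent
# Poisson variates, so dN_t over a window of width dt has mean r*dt.
dN = rng.poisson(r * dt, size=n_paths)

p1 = np.mean(dN == 1)      # should be close to r*dt, up to o(dt)
p_many = np.mean(dN >= 2)  # should be o(dt), i.e. negligible
assert abs(p1 - r * dt) < 5e-4
assert p_many < 1e-4
```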
Let
$$f : \mathbb{R}_+ \mapsto \mathbb{R}$$
be any function continuous almost everywhere (i.e. it may have discontinuities on a set of Lebesgue measure zero).

proposition.
$$df(N_t) = dN_t\, \big( f(N_t + 1) - f(N_t) \big)$$
Proof. It is instructive to first verify the claim when $f$ is analytic. In such a case we can apply (4a) to the Taylor expansion of $f$:
$$f(N_{t+dt}) - f(N_t) = \sum_{n=1}^{\infty} \frac{(dN_t)^n}{n!}\, f^{(n)}(N_t) = dN_t \sum_{n=1}^{\infty} \frac{1}{n!}\, f^{(n)}(N_t)$$
where we denote
$$f^{(n)}(x) = \frac{d^n f}{dx^n}(x)$$
Recognizing in the remaining sum the Taylor series of $f$ evaluated at $N_t + 1$, i.e. rewriting it in terms of finite increments, we obtain
$$f(N_{t+dt}) - f(N_t) = dN_t\, \big( f(N_t + 1) - f(N_t) \big)$$
We now recognize that the result does not depend on the analyticity of $f$: it just states that the increment of $f(N_t)$ is non-vanishing only when $dN_t = 1$, in which case $N_t$ jumps by one.
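The identity $df(N_t) = dN_t\,(f(N_t+1) - f(N_t))$ holds pathwise and can be verified on a simulated trajectory; the rate, the test function, and the grid refinement below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
r = 2.0
sigma = np.cumsum(rng.exponential(1.0 / r, size=20))  # arrival times
T = sigma[-1]

f = lambda n: np.sin(n) + n ** 2   # any function of the integer state

# Choose a grid finer than the smallest inter-arrival gap, so each grid
# step contains at most one jump, i.e. dN in {0, 1}.
step = 0.25 * np.diff(np.concatenate(([0.0], sigma))).min()
grid = np.arange(0.0, T, step)
N = np.searchsorted(sigma, grid, side="right")   # N_t on the grid
dN = np.diff(N)
assert dN.max() <= 1

df = f(N[1:]) - f(N[:-1])          # actual increment of f(N_t)
assert np.array_equal(df, dN * (f(N[:-1] + 1) - f(N[:-1])))
```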
The pathwise representations (1), (3) of the Poisson process are based on the idea that the process has jumps on a countable set. A countable set has zero Lebesgue measure. This is the reason why the Poisson process can be thought of as continuous if the statistical indicators used to check continuity are insensitive to events occurring on sets of zero Lebesgue measure.

proposition. The Poisson process is continuous in mean square:
$$\lim_{t \downarrow s} E (N_t - N_s)^2 = 0$$
Proof. The generating function of a Poisson process of rate $r$ is
$$E\, z^{N_t} = \exp\big( r\, t\, (z - 1) \big)$$
which yields
$$\frac{d^2}{dz^2} E\, z^{N_t} \Big|_{z=1} = E\, N_t (N_t - 1) = (r t)^2$$
and hence $E\, N_t^2 = r t\, (r t + 1)$. Since the Poisson process has stationary and independent increments, $N_t - N_s$ is itself Poisson distributed with parameter $r\,(t - s)$, so that
$$E (N_t - N_s)^2 = r\, (t - s) \big( r\, (t - s) + 1 \big)$$
which vanishes as $t \downarrow s$. By Chebyshev's inequality
$$P(|N_t - N_s| > \varepsilon) \le \frac{E (N_t - N_s)^2}{\varepsilon^2}$$
the process is therefore also continuous in probability.
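The closed-form second moment of the increment can be compared against Monte Carlo estimates for a shrinking window; rate, base time, and sample size below are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(4)
r, s, n_paths = 2.0, 1.0, 400_000   # illustrative values

for t in (1.5, 1.1, 1.01):
    # N_t - N_s is Poisson distributed with parameter r*(t - s)
    incr = rng.poisson(r * (t - s), size=n_paths)
    m2 = np.mean(incr.astype(float) ** 2)
    predicted = r * (t - s) * (r * (t - s) + 1.0)
    assert abs(m2 - predicted) < 0.02 * predicted + 1e-3

# The second moment shrinks to 0 as t decreases to s: mean square continuity.
```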
5 Itô integral in the space of mean square integrable functions
A general result in the theory of stochastic processes, the Doob-Meyer decomposition theorem (see e.g. [4]), ensures the existence of a unique decomposition of every adapted and integrable stochastic process as the sum of a martingale and a predictable process starting at zero, called the "drift" or "compensator". The manifestation of this general result in the case of differentials of a Poisson process satisfying (4b) is the decomposition
$$dN_t = r\, dt + dM_t \qquad (5)$$
where

definition. we refer to the ordinary differential $r\, dt$ specified by the expectation value (4b) as the compensator of the Poisson process differential,

and

proposition.
$$E\, (dM_t)^2 = r\, dt + o(dt) \qquad (6a)$$
$$E\, dM_t = 0 \qquad (6b)$$

Proof. (6b) follows immediately from (4b). To check (6a) we need to use elementary differential algebra:
$$(dM_t)^2 = (dN_t - r\, dt)^2 = (dN_t)^2 - 2\, r\, dt\, dN_t + (r\, dt)^2 = dN_t + o(dt)$$
whence $E\, (dM_t)^2 = r\, dt + o(dt)$.
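A quick Monte Carlo check of the martingale differential $dM_t = dN_t - r\,dt$ from (5): its sample mean should vanish, and its variance should be close to $r\,dt$ to leading order in $dt$ (all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
r, dt, n_paths = 2.0, 1e-3, 500_000   # illustrative values

dN = rng.poisson(r * dt, size=n_paths)  # Poisson increment over width dt
dM = dN - r * dt                        # compensated increment, eq. (5)

assert abs(dM.mean()) < 5e-4            # E dM_t = 0
assert abs(dM.var() - r * dt) < 5e-4    # Var dM_t = r dt + o(dt)
```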
Let us suppose that $\{\xi_t\}_{t \ge 0}$ is a stochastic process adapted to the filtration of the Poisson process (4) and satisfying the properties

1. mean square integrability:
$$E \int_0^t ds\, \xi_s^2 < \infty$$

2. non-anticipation: $\xi_t$ is statistically independent of the increments of the Poisson process over any later interval, so that
$$E\, \xi_t\, dM_t = E\, \xi_t\; E\, dM_t = 0$$
definition. For any stochastic process $\xi_t$ satisfying the above two properties we can define the Itô integral
$$\int_0^t dM_s \cdot \xi_s := \lim_{\|p\| \downarrow 0} \sum_{t_k \in p} \xi_{t_{k-1}} \big( M_{t_k} - M_{t_{k-1}} \big) \qquad (8)$$
where $p$ denotes any partition of the interval $0 = t_0 \le t_1 \le \dots \le t_n = t$ and $\|p\|$ the mesh of the partition, i.e. the width of its largest sub-interval.

The symbol $\cdot$ in (8) emphasizes the "pre-point" discretization rule of the Itô integral: on any interval $(t_{k-1}, t_k]$ the integrand is evaluated at the left end-point $t_{k-1}$.
In order to give a precise meaning to the limit over a sequence of partitions in (8) we may resort to a "dyadic partitioning" of the interval $[0, t]$. In such a case we define
$$I_n = \sum_{t_k \in T_n} \xi_{t_{k-1}} \big( M_{t_k} - M_{t_{k-1}} \big) \qquad \text{where} \qquad T_n = \left\{ \frac{j\, t}{2^n} \right\}_{j=0}^{2^n} \qquad (9)$$
so that the sum on the right-hand side consists of exactly $2^n$ addends. Increments of the Poisson process are thus evaluated over nearest neighbors of the dyadic level $T_n$:
$$E \big( M_{t_k} - M_{t_{k-1}} \big)^2 = \frac{r\, t}{2^n}$$
Under these hypotheses we can think of $\xi_{t_{k-1}}$ as a function of the realization of the process up to time $t_{k-1}$. The non-anticipating hypothesis is manifest in the fact that $\xi_{t_{k-1}}$ depends on values of $\{N_t, t \ge 0\}$ up to $t_{k-1}$ but no further.

We require the limit on the right-hand side of (8) to exist in mean square sense. This means that we require the sequence of approximations $\{I_n\}_{n \ge 0}$ to converge in the sense of Cauchy.
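The dyadic approximations (9) can be watched converging on a single path. The sketch below uses a fixed, hand-picked set of jump times (purely illustrative) and the integrand $\xi_u = N_u$, for which the pre-point limit can be written in closed form pathwise, so that the error of $I_n$ is seen to shrink under refinement:

```python
import numpy as np

r, t = 2.0, 4.0
sigma = np.array([0.5, 1.3, 2.2, 3.1])   # one fixed illustrative path

def prepoint_sum(n):
    """I_n of eq. (9): pre-point sum over the dyadic grid T_n, xi_u = N_u."""
    tk = t * np.arange(2 ** n + 1) / 2 ** n
    Nk = np.searchsorted(sigma, tk, side="right")  # N on the grid
    Mk = Nk - r * tk                               # compensated process
    return float(np.sum(Nk[:-1] * np.diff(Mk)))    # pre-point evaluation

# Exact pathwise limit: sum over jumps of the pre-jump value N_{sigma_i^-}
# minus r times int_0^t N_s ds = r * sum_i (t - sigma_i).
exact = sum(range(len(sigma))) - r * np.sum(t - sigma)

vals = [prepoint_sum(n) for n in (6, 9, 12)]
errs = [abs(v - exact) for v in vals]
assert errs[0] > errs[1] > errs[2]   # dyadic refinement converges
```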
Note that we use here convergence in the sense of Cauchy because in general the explicit form of the primitive function may not be available, as often happens also for ordinary integrals.
The definition (8) entails that
$$E^{N_s} \int_0^t dM_u \cdot \xi_u \equiv E\left( \int_0^t dM_u \cdot \xi_u \,\middle|\, \{N_v,\, v \le s\} \right) = \lim_{\|p\| \downarrow 0} \sum_{u_k \in p} E\left( \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right)$$
• The martingale property stems from the non-anticipating requirement imposed on $\{\xi_t\}_{t \ge 0}$ and the independence of the increments of the Poisson process. We can always decompose any partition $p$ of the interval $[0, t]$ into a component $p_1$ partitioning $[0, s]$ and a component $p_2$ partitioning $(s, t]$. Then
$$E^{N_s} \int_0^t dM_u \cdot \xi_u = E^{N_s} \int_0^s dM_u \cdot \xi_u + E^{N_s} \int_s^t dM_u \cdot \xi_u$$
$$= E^{N_s} \lim_{\|p\| \downarrow 0} \sum_{u_k \in p_1} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) + E^{N_s} \lim_{\|p\| \downarrow 0} \sum_{u_k \in p_2} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big)$$
By hypothesis we are entitled to exchange the limit with the expectation value. It is thus sufficient to check that
$$E^{N_s} \sum_{u_k \in p_1} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) \equiv E\left( \sum_{u_k \in p_1} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right) = \sum_{u_k \in p_1} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big)$$
which holds true because we are evaluating the expectation value of functions of random variables
conditioned upon the values of the very same random variables, and
$$E^{N_s} \sum_{u_k \in p_2} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) \equiv E\left( \sum_{u_k \in p_2} \xi_{u_{k-1}} \big( M_{u_k} - M_{u_{k-1}} \big) \,\middle|\, N_{u_1}, N_{u_2}, \dots, N_s \right) = \sum_{u_k \in p_2} E\, \xi_{u_{k-1}}\; E \big( M_{u_k} - M_{u_{k-1}} \big) = 0$$
Hence, upon exchanging the order of the limit and expectation value operations,
$$E \left( \int_0^t dM_s \cdot \xi_s \right)^2 = \lim_{\|p\| \downarrow 0} \sum_{s_k, s_l \in p} \delta_{k l}\; r\, (s_{k+1} - s_k)\; E\, \xi_{s_k}^2 = r \int_0^t ds\; E\, \xi_s^2$$
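Both the zero-mean (martingale) property and the isometry can be probed by Monte Carlo. The sketch below (all parameters illustrative) uses the deterministic integrand $\xi_s = s$, for which $r \int_0^t s^2\,ds = r\,t^3/3$, and approximates the Itô integral by pre-point sums over a fine grid:

```python
import numpy as np

rng = np.random.default_rng(6)
r, t, n_paths, n_steps = 2.0, 1.0, 100_000, 256   # illustrative values

dt = t / n_steps
s_pre = np.arange(n_steps) * dt        # pre-points s_{k-1}

# Accumulate the pre-point sums of eq. (8) step by step to save memory.
ito = np.zeros(n_paths)
for k in range(n_steps):
    dN = rng.poisson(r * dt, size=n_paths)   # increments over one grid step
    ito += s_pre[k] * (dN - r * dt)          # xi at the PRE-point times dM

assert abs(ito.mean()) < 0.02                # martingale: zero mean
# isometry: E (int_0^t s dM_s)^2 = r int_0^t s^2 ds = r t^3 / 3
assert abs(np.mean(ito ** 2) - r * t ** 3 / 3) < 0.03
```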
remark. The Doob-Meyer decomposition (5) allows us to apply the foregoing construction to define, under the same hypotheses,
$$\int_0^t dN_s \cdot \xi_s \equiv r \int_0^t ds\; \xi_s + \int_0^t dM_s \cdot \xi_s$$
5.1 Itô-Leibniz rule

The fundamental theorem of calculus in the form
$$f(N_{t_2}) - f(N_{t_1}) = \int_{t_1}^{t_2} df(N_s)$$
then requires adapting the Leibniz rule to take into account the discretization sensitivity of products of differentials. Let
$$f, g : \mathbb{R}_+ \mapsto \mathbb{R}$$
be two smooth functions. Then, if we evaluate differential prefactors according to the pre-point convention, we obtain
$$d\big( f(N_t)\, g(N_t) \big) = f(N_t)\; dg(N_t) + g(N_t)\; df(N_t) + df(N_t)\; dg(N_t)$$
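With pre-point prefactors, the product rule for the Poisson differential picks up the extra term $df\,dg$: since $df(N_t) = dN_t\,(f(N_t+1) - f(N_t))$, the identity $d(fg) = f\,dg + g\,df + df\,dg$ holds pathwise as pure algebra. It can be checked on a simulated path; the jump times and test functions below are illustrative:

```python
import numpy as np

sigma = np.array([0.5, 1.3, 2.2, 3.1])   # fixed illustrative jump times
grid = np.linspace(0.0, 4.0, 4001)        # step 0.001, well below min gap
N = np.searchsorted(sigma, grid, side="right")   # N_t on the grid

f = lambda n: np.exp(-0.3 * n)
g = lambda n: n ** 2 + 1.0

df = f(N[1:]) - f(N[:-1])
dg = g(N[1:]) - g(N[:-1])
d_fg = f(N[1:]) * g(N[1:]) - f(N[:-1]) * g(N[:-1])

# pre-point Ito-Leibniz rule: d(fg) = f dg + g df + df dg
rhs = f(N[:-1]) * dg + g(N[:-1]) * df + df * dg
assert np.allclose(d_fg, rhs)
```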
References

[1] R. F. Bass. Stochastic differential equations with jumps. Probability Surveys, 1:1–19, 2004.
[2] K. Jacobs. Stochastic Processes for Physicists: Understanding Noisy Systems. Cambridge University Press, September 2010.
[3] R. Liptser. Stochastic Processes. Lecture notes, Tel Aviv University, 1996.
[4] J. M. Steele. Stochastic Calculus and Financial Applications, volume 45 of Applications of Mathematics. Springer, 2001.