CERTIFICATION
SIGNATURE.................... SIGNATURE...................
DEDICATION
I dedicate this work to my parents, Mama Bridget Kibong and Pa Ngwayi Cletus Tatah (late).
ACKNOWLEDGEMENTS
My genuine appreciation goes to my supervisor, Prof. Shu Felix Che, for his endeavour and assiduity in the realization of this project. He has been a very instrumental director who provided substantial materials whenever the need arose. I am also indebted to all the teaching staff and administration of the Faculty of Science, Department of Mathematics and Computer Science, for the enormous sacrifices tendered to us, especially during these very difficult moments of crisis.
To my parents, Mrs Kibong Bridget and Mr Ngwayi Cletus Tatah (late), and the rest of the family members and siblings, and to my wife Tabah Belinda and kids, for their moral, financial and material support for my education.
I will not forget my friends, especially all my classmates of the M.Sc. who have journeyed with me during this difficult moment of crisis, for their constant sacrifices to enable us to be elevated; especially Guy, who assisted me during my research with mutual understanding.
Finally, to God the Father Almighty, who granted me the wisdom, strength, knowledge and ability to put this work through.
ABSTRACT
Résumé
Contents
CERTIFICATION i
DEDICATION iii
ACKNOWLEDGEMENTS iv
ABSTRACT v
Résumé vi
Contents viii
1 Introduction 1
1.1 Statement Of Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Aims and objectives of the study . . . . . . . . . . . . . . . . . . . . 2
1.3 Project Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Potential Impact of the Research . . . . . . . . . . . . . . . . . . . . 3
2 LITERATURE REVIEW 4
2.1 History of Stochastic Differential Equations (SDE’s) . . . . . . . . . . 4
2.2 History of Delay Differential Equations . . . . . . . . . . . . . . . . . 5
2.3 Delay Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 6
2.3.1 Classification of (FDEs) and (RFDEs) . . . . . . . . . . . . . 8
2.3.2 Classification of Delay Differential Equations (DDEs) . . . . . 10
2.3.3 Types of Delay Differential Equation and its Applications . . . 11
2.3.4 Linear Delay Differential Equations (LDDEs) . . . . . . . . . 11
2.4 Uniqueness and Existence of DDEs . . . . . . . . . . . . . . . . . . . 12
3 Research Methodology 15
3.1 Multidimensional Langevin Stochastic Differential Equation . . . . . 15
3.2 Stationary Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 The covariance function of the stationary solution . . . . . . . . . . . 24
3.4 The Stationary Solution for (ar, br) Near ∂S . . . . . . . . . . . . 26
3.5 APPENDIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.6 I Asymptotic Behaviour of the Solutions of (3.2) . . . . . . . . . . . . 28
3.7 The Function v0 (a, b, r) . . . . . . . . . . . . . . . . . . . . . . . . 31
Bibliography 60
Chapter 1
Introduction
A common challenge posed by some scholars is this: why do researchers not concentrate on the study of ordinary differential equations (ODEs), stochastic differential equations (SDEs) or partial differential equations (PDEs) instead of studying equations with a time-delayed term? After all, we are familiar with the aforementioned differential equations, we have more information about them, and they are much easier to handle. An attempt to answer this challenge can be seen as follows:
This can be attributed to the crucial impact of time delay on every activity related to human life, encompassing a variety of domains and applications such as the economical sciences, biological sciences, ecological sciences, biometrical sciences, distributed networks and many others; see Banks[2], Cushing[7] or Hale[14] for applications and further references. There are a variety of real-life situations related to time-delayed terms in our society today. For example, the cultivation of crops and their normal period of harvest can be delayed by natural or artificial causes: drought, or the nature of the chemicals used, can slow down the growth of the crops, so that their normal period of harvest is delayed. A crisis can also delay many activities scheduled within a normal period, so that the period is extended. Therefore the time-delayed term is a vital component of any dynamic process in the life sciences.
In general, there are a variety of delay differential equations, such as linear delay differential equations (LDDEs), non-linear delay differential equations (NDDEs), stochastic delay differential equations (SDDEs), etc. In our research we shall study multidimensional Langevin stochastic differential equations extended by a time-delayed term:
\[
dX(t) = \sum_{i=1}^{N} \big[ a_i X(t) + b_i X(t-r) \big]\, dt + dW(t), \qquad t \ge 0 \tag{1.1}
\]
\[
X(s) = Z(s), \qquad s \in [-r, 0]
\]
where a, b, r are real constants, r > 0, and (Z(s), s ∈ [−r, 0]) is a given process.
Differential equations which include time-delayed terms are useful for stochastic modelling, e.g. in the biological, biometrical or economical sciences; see Banks[2], Cushing[7] or Hale[14] for applications and further references.
We shall be concerned with the explicit solution of (1.1) and with necessary and sufficient conditions on a, b and r such that a stationary solution X of (1.1) exists. In this case, we determine the function (if br ≠ 0). Moreover, we shall study the covariance function K of the stationary solution, deriving a differential equation for K, solving it on [−r, r] and showing that it tends to zero at an exponential rate. Contrary to the Ornstein–Uhlenbeck case, the covariance function may oscillate around zero. Equation (1.1) is a very special linear stochastic functional differential equation, and the results can be extended partially to more general cases, though these are not very easy to handle, where
\[
0 = r_0 < r_1 < r_2 < r_3 < \cdots < r_n, \qquad a_i \in \mathbb{R}
\]
are fixed. (For special cases see Bailey, Williams[1].) Secondly, the solution of (1.1) already shows typical effects due to the presence of time-delayed terms; extensions of the results to higher dimensions and to other driving terms than W(t) are possible and will be presented elsewhere.
The following notation will be used: \(\mathbb{N}\) is the set of non-negative integers, \(\mathbb{R}\) the reals, \(\mathbb{R}_+ = [0, \infty)\), \(\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}\), \(\mathbb{C}\) the set of complex numbers, and \(i\) the imaginary unit; \(1_A(\cdot)\) denotes the indicator function of the set \(A\), \(\partial A\) the boundary of \(A\), and \(L^p(c, d)\) stands for the linear space of real-valued functions p-integrable with respect to the Lebesgue measure. \(N(\mu, \sigma^2)\) denotes the normal distribution with expectation \(\mu\) and variance \(\sigma^2\).
1.3 Project Rationale
Differential equations which include time-delayed terms are useful for stochastic modelling, in fields such as the biological sciences, biometrical sciences or economical sciences; see Banks[2], Cushing[7] or Hale[14] for applications and further references.
• The knowledge acquired from the study of stochastic differential equations will enable us to carry out collaborative research.
• It is our desire to produce a thesis whose standard and level will be sufficient to fulfil one of the requirements for the award of a Master of Science (M.Sc.) degree in Mathematics.
Chapter 2
LITERATURE REVIEW
physicist Stratonovich, leading to a calculus similar to ordinary calculus [32].
It shall be noted that the Langevin equation, known by its acronym (LE), was first proposed in 1908 by Paul Langevin[22]. According to Paul Langevin, a Langevin equation is a stochastic differential equation describing how a system evolves when subject to a combination of deterministic and fluctuating ("random") forces:
\[
\frac{dx}{dt} = \mu(x, t) + \sigma(x, t)\,\eta(t) \qquad [22]
\]
where η(t) represents "random fluctuations". For such an equation we must specify the characteristic properties of η, e.g. the type of distribution its values are drawn from, its parameters, and their time correlations:
\[
C(\tau) = \langle \eta(0)\,\eta(\tau) \rangle = \lim_{T \to \infty} \frac{1}{T} \int_0^T \eta(t)\,\eta(t+\tau)\,dt
\]
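As an illustration, a Langevin equation of this form can be integrated numerically with the Euler–Maruyama scheme, replacing η(t) dt by Wiener increments dW. The sketch below is my own construction, not from the text: it takes µ(x, t) = −αx and constant σ (an Ornstein–Uhlenbeck-type Langevin equation) and checks that the ensemble variance settles near the stationary value σ²/(2α).

```python
import numpy as np

# Euler-Maruyama discretization of dx/dt = mu(x,t) + sigma(x,t)*eta(t),
# with white noise eta*dt replaced by Wiener increments dW.
# Illustrative choice: mu(x,t) = -alpha*x, sigma constant, i.e. an
# Ornstein-Uhlenbeck process with stationary variance sigma^2/(2*alpha).
rng = np.random.default_rng(0)
alpha, sigma = 1.0, 1.0
dt, n_steps, n_paths = 0.01, 500, 10_000

x = np.zeros(n_paths)                      # start all paths at x = 0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    x += -alpha * x * dt + sigma * dW      # one Euler-Maruyama step

print(x.var())   # should be close to sigma**2 / (2*alpha) = 0.5
```

After n_steps·dt = 5 time units the transient e^{−2αt} has died out, so the empirical variance over the paths approximates the stationary value.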
biology, microbiology, heat flow, engineering mechanics, nuclear reactions, physiology, etc.[17]. Laplace and Condorcet are the pioneers of this study; it appeared in the 18th century[11]. The main stability theory of basic DDEs was developed (elaborated) by Pontryagin in 1942; after World War II, there was rapid growth of the theory and its applications. Bellman and Cooke are credited with writing significant works about DDEs in 1963 [6]. DDE studies witnessed massive growth from the 1950s onwards, resulting in the publication of many important works, such as Myshkis[28], Krasovskii[19], Bellman and Cooke [6], Halanay[13], Norkin[31], Hale[14], Yanushevski[37] in 1978, and Marshal[25]; these researches and publications have lasted until this day in a variety of domains.
Figure 2.1: When the robot sent images to Earth
A DDE model depends on the initial function to determine a unique solution, because u′(t) depends on the solution at prior times. It is therefore necessary to supply an initial auxiliary function, sometimes called the "history" function, before t = 0; in many models the auxiliary function is constant. Here β := max βi.
Figure 2.2: The initial function defined over the interval [−β, 0] is mapped into a solution curve on the interval [0, t0 − β]. The initial function segment ϕ(σ), σ ∈ [−β, 0], has to be specified; at t = t0 the state of the system is the function segment u_{t0}(σ), σ ∈ [−β, 0].
There are not many differences between the properties of delay differential equations and those of ordinary differential equations; analytical methods for ODEs have sometimes been used for DDEs when it is possible to apply them. The order of a DDE is the highest derivative included in the equation (Driver[8]); in Table 2.1 we have shown some examples of the order of delay differential equations (DDEs). Table 2.1: The order of DDE and ODE
We have shown the substantial difference between DDEs and ODEs in Table 2.2
v(t) is given for the whole time interval necessary. The equation (2.2) gives rise to three kinds of differential equations (DEs):
i) If ψ1(t) ⊂ (−∞, 0] and ψ2(t) = ∅ for t ∈ [t0, t1], we say that the FDE is a retarded functional differential equation (RFDE); therefore the right-hand side of (2.2) does not depend on the derivative of u.
In other words, the rate of change of the state of an RFDE is determined by the inputs
v(t), as well as the present and past states of the system. An RFDE is sometimes also
designated as a hereditary differential equation or, in control theory as a time-delay
system.
ii) If ψ1(t) ⊂ (−∞, 0] and ψ2(t) ⊂ (−∞, 0] for t ∈ [t0, t1], we say that the FDE is a neutral functional differential equation (NFDE), meaning that the rate of change of the state depends on its own past values as well.
iii) An FDE is called an advanced functional differential equation (AFDE) if
ψ1 (t) ⊂ [0, ∞) and ψ2 (t) = ∅ for t ∈ [t0 , t1 ]. An equation of the advanced type
may represent a system in which the rate of change of a quantity depends on its
present and future values of the quantity and of the input signal v(t).
Note: Retarded functional differential equations (RFDEs) are further classified into other kinds of differential equations.
3. Delays that are constant are called fixed point delays. Systems which have only multiple constant time delays can be classified as follows: if the delays are related by integers, the system is called a linear commensurate time-delay system; if the delays are not related by integers, it is called a linear non-commensurate time-delay system. In Figure 2.3 below, functional differential equations and their branches are classified.
Figure 2.3: Classification of FDEs and RFDEs, (Schoen[33] )
• Autonomous delay differential equations (unchanged under a shift of t).
where p(t) is the initial history function and a(t) and b(t) are any constant functions, with β > 0 a constant. In general the solution u(t) of equation (2.4) has a jump discontinuity in u̇(t) at the initial point: the left and right derivatives are not equal,
\[
\lim_{t \to 0^-} \dot{u}(t) = p'(0) \ne \lim_{t \to 0^+} \dot{u}(t)
\]
For example, consider the simple delay differential equation u̇(t) = u(t − 1), t ≥ 0, with history function u(t) = 1, t ≤ 0; it is easy to verify that u̇(0+) = 1 ≠ u̇(0−) = 0. Another example: u̇(t) = −u(t − 1), t ≥ 0, with history function u(t) = 1, t ≤ 0; it is easy to verify that u̇(0+) = −1 ≠ u̇(0−) = 0. The second derivative ü(t) is given by ü(t) = −u̇(t − 1) and therefore it has a jump at t = 1 = β; the third derivative \(\dddot{u}(t)\) is given by \(\dddot{u}(t) = -\ddot{u}(t-1) = \dot{u}(t-2)\), and hence it has a jump at t = 2 = 2β. In general, the jump in u̇(t) at t = 0 propagates to a jump in u^{(n+1)}(t) at time t = nβ. This propagation of discontinuities is a feature of DDEs that does not occur in ODEs; the propagated jumps become subsequent discontinuity points (Bellen and Zennaro[3]).
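The propagation of the derivative jump can be checked with the method of steps. The sketch below is a minimal construction of mine, not from the text: it integrates u̇(t) = −u(t − 1) with history u ≡ 1 by Euler's method. By hand, u(t) = 1 − t on [0, 1] and u(t) = t²/2 − 2t + 3/2 on [1, 2], so u(1) = 0 and u(2) = −0.5.

```python
import numpy as np

# Method-of-steps integration of the DDE  u'(t) = -u(t-1),  u(t) = 1 for t <= 0.
# On [0,1] the delayed argument lies in the history, so u'(t) = -1; on [1,2]
# one integrates u'(t) = -(1-(t-1)).  A simple Euler scheme on a grid aligned
# with the delay reproduces the hand-computed values u(1) = 0 and u(2) = -0.5.
dt = 1e-4
m = round(1.0 / dt)                 # steps per delay interval
u = np.ones(3 * m + 1)              # u[0:m] holds the history on [-1, 0)
for n in range(m, 3 * m):           # integrate forward on [0, 2]
    u[n + 1] = u[n] - u[n - m] * dt

print(u[2 * m], u[3 * m])           # approximately 0 and -0.5
```

The jump in u̇ is visible in the first step: (u[m+1] − u[m])/dt = −1, while the history has slope 0.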
where a and β are any real numbers, with β > 0, d > 0 and θ ∈ C¹[−β, 0]. As we stated before, delay differential equations are a special class of functional differential equations (Falbo, 1995); the interval [−β, 0] is called the pre-interval and the function θ is called the pre-function.
Note: If d > β, then u ≡ 0 is the solution on the interval [0, β]; if d > 2β we transfer the DE to the interval [β, 2β], where we again have an interval of length β on which u = 0, so that we can solve the problem on [0, 2β]. If β < d < 2β, then the solution is extended to [0, d]. Continuing in this way, the solution is moved along to cover [0, d] for any positive real number d.
Proof: We observe that the DE itself is a linear first-order delay differential equation with a single constant delay and constant coefficient, and we observe by plugging in that the function u ≡ 0 is a solution on the interval [0, β]. Now if v(t) and u(t) are any two solutions, then v̇(t) = av(t − β) and u̇(t) = au(t − β). As well, if we define a function z(t) = J₁u(t) + J₂v(t) for any two constants J₁, J₂, then ż(t) = az(t − β). This means that z(t) is also a solution of the DE. As we know, the function u(t) ≡ 0 is one solution; now, for a contradiction, suppose there exists another function v(t), not identically zero, that satisfies equation (2.6). Thus v(t) satisfies the DE on the interval [0, β] and equals the function 0 (zero) on the interval [−β, 0], but takes on a nonzero value at least once somewhere in the semi-open interval (0, β]; that is, we are supposing that v(r) ≠ 0 for some r ∈ (0, β]. Let H be the set of reals such that τ ∈ H if and only if either τ = −β, or τ > −β and v(t) = 0 for all t ∈ [−β, τ].
Let K be the set of numbers such that τ ∈ K if and only if either τ = ε, or τ < ε and v(t) ≠ 0 for all t ∈ (τ, ε]. We can note that K is nonempty since t₀ ∈ K. Since v(t*) = 0, K is bounded below, because t* is one of its lower bounds; let x be the greatest lower bound (GLB) of K. Since v is continuous at x, we have v(x) = 0; otherwise v would be nonzero throughout an open interval (x − c*, x + c*), making x not a lower bound of K. Denote K by (x, e]. Since for all t ∈ K we have t < t** = t* + β/2, it follows that t − β ∈ H and v(t − β) = 0, so from the DE, v̇(t) = av(t − β) = 0. Hence v̇(t) ≡ 0 on (x, e]. This means that v(t) equals a constant J on (x, e]. But v(x) = 0, so by the continuity of v at x the constant must be zero. Therefore v(t) ≡ 0 on (x, e], contradicting the assumption that v(t₀) ≠ 0 at some point in [t*, d].
As well, on [−β, 0], v(t) = u(t) = θ(t); so z(t) = 0. Therefore z(t) is the trivial solution satisfying equation (2.6), and hence v(t) ≡ u(t) on [−β, d].
Chapter 3
Research Methodology
Solutions
Assume W = (W(t), F(t), t ≥ 0) is a real-valued standard Wiener process on a probability space (Ω, F, P), and Z = (Z(t), t ∈ [−r, 0]) is a stochastic process on the same space such that Z(t) is F(0)-measurable for t ∈ [−r, 0]. Such a process Z will be called an initial process.
Definition 3.1
A pathwise continuous stochastic process X = (X(t), t ≥ −r) on (Ω, F, P) is called a solution of (3.1) if
i) X(t) is F(t)-measurable;
ii) \( X(t) = Z(0) + \int_0^t \big( aX(s) + bX(s-r) \big)\,ds + W(t) \) P-a.s., t ≥ 0;
iii) X satisfies
A solution X is said to be unique if for every solution Y of (3.1) we have
\[
P\Big( \sup_{t \ge -r} |Y(t) - X(t)| > 0 \Big) = 0
\]
To solve the equation (3.1) we first have to consider the deterministic equation corresponding to it:
\[
\dot{x}(t) = \sum_{i=0}^{n} \big( a_i x(t) + b_i x(t - r_i) \big), \qquad t \ge 0 \tag{3.2a}
\]
\[
x(t) = g(t), \qquad t \in [-r, 0] \tag{3.2b}
\]
where g is a given function on [−r, 0]. A function x = (x(t), t ≥ −r) is called a solution of (3.2) if it is absolutely continuous on [0, ∞) and satisfies (3.2a) (Lebesgue almost everywhere) and (3.2b). The equation (3.2) can be solved step by step on the intervals [kr, (k + 1)r], k ≥ 0, provided that g ∈ L¹(−r, 0). In this way we get, for g(t) := 1_{\{0\}}(t), t ∈ [−r, 0], the so-called fundamental solution x₀ of (3.2):
\[
x_0(t) = \sum_{k=0}^{\lfloor t/r \rfloor} \frac{b^k}{k!}\,(t - kr)^k \exp\big( a(t - kr) \big), \qquad t \ge 0 \tag{3.3}
\]
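As a sanity check, the series (3.3) can be compared with a direct step-by-step (Euler) integration of (3.2) with initial function g = 1_{0}. The parameter values a, b, r below are arbitrary choices of mine, not from the text.

```python
import numpy as np
from math import exp, factorial, floor

# Fundamental solution x0 of  x'(t) = a x(t) + b x(t-r),
# x(0) = 1, x(t) = 0 for t in [-r, 0): series (3.3) vs. Euler step method.
a, b, r = -1.0, -0.5, 1.0            # illustrative values

def x0_series(t):
    """Series (3.3): sum over k = 0 .. floor(t/r)."""
    return sum((b**k / factorial(k)) * (t - k*r)**k * exp(a * (t - k*r))
               for k in range(floor(t / r) + 1))

dt = 1e-4
m = round(r / dt)                    # grid aligned with the delay
T = 3.0
n_steps = round(T / dt)
x = np.zeros(n_steps + m + 1)
x[m] = 1.0                           # x(0) = 1, zero history before 0
for n in range(m, m + n_steps):
    x[n + 1] = x[n] + (a * x[n] + b * x[n - m]) * dt

print(x[m + n_steps], x0_series(T))  # the two values should agree closely
```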
Let us now turn to the stochastic equation (3.1). It holds:
Proposition 3.2:
Assume Z has continuous trajectories. Then the stochastic equation (3.1) has a unique solution; it is given by
\[
X(t) = x_0(t) Z(0) + b \int_{-r}^{0} x_0(t - s - r)\, Z(s)\,ds + \int_0^t x_0(t - s)\,dW(s), \qquad t \ge 0 \tag{3.5}
\]
\[
X(t) = Z(t), \qquad t \in [-r, 0]
\]
where x₀ denotes the fundamental solution of (3.2).
Proof:
Existence and uniqueness immediately follow by solving (3.1) using the step method. The representation (3.5) is verified by inserting it into (3.1). ■
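A numerical sketch of this proposition, entirely my own construction: simulate (3.1) directly by Euler–Maruyama with a constant initial process Z ≡ 1, and compare the endpoint with the representation (3.5) discretized with the same Brownian increments. The parameters and tolerances are assumptions for illustration only.

```python
import numpy as np
from math import exp, factorial, floor

# Check of the representation (3.5) for dX = (a X(t) + b X(t-r)) dt + dW
# with constant initial process Z = z0:  an Euler-Maruyama path and the
# discretized right-hand side of (3.5), built from the SAME increments dW,
# should agree up to discretization error.
a, b, r, z0, T = -1.0, -0.5, 1.0, 1.0, 2.0

def x0(t):                            # fundamental solution, series (3.3)
    if t < 0:
        return 0.0
    return sum((b**k / factorial(k)) * (t - k*r)**k * exp(a * (t - k*r))
               for k in range(floor(t / r) + 1))

rng = np.random.default_rng(1)
dt = 1e-3
m, N = round(r / dt), round(T / dt)
dW = rng.normal(0.0, np.sqrt(dt), N)

X = np.full(N + m + 1, z0)            # constant history Z = z0 on [-r, 0]
for n in range(N):                    # direct Euler-Maruyama path
    X[m + n + 1] = X[m + n] + (a * X[m + n] + b * X[n]) * dt + dW[n]

s_hist = -r + dt * np.arange(m)       # s-grid on [-r, 0)
det = x0(T) * z0 + b * dt * sum(x0(T - s - r) * z0 for s in s_hist)
stoch = sum(x0(T - n * dt) * dW[n] for n in range(N))

print(X[m + N], det + stoch)          # the two values should agree closely
```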
Definition 3.3
A solution U = (U(t), t ≥ −r) of (3.1) is called a stationary solution if its finite-dimensional distributions are invariant under time translation, i.e.
\[
P\big( U(t + t_k) \in A_k,\ k = 1, \ldots, n \big) = P\big( U(t_k) \in A_k,\ k = 1, \ldots, n \big)
\]
Definition 3.4
A stationary solution is said to be uniquely determined if every two stationary solutions of (3.1a) have the same finite-dimensional distributions.
In the following, we suppose that W = (W(t), F(t), t ∈ ℝ). This is no restriction of generality.
Before formulating conditions for the existence of stationary solutions, we need some more knowledge of the deterministic equation (3.2) and related quantities; see Hale[14]. The characteristic function h(·) of (3.1a) is defined by
\[
h(\lambda) = \lambda - a - b \exp(-\lambda r), \qquad \lambda \in \mathbb{C}
\]
\[
\Lambda := \{ \lambda \in \mathbb{C} \mid h(\lambda) = 0 \} \tag{3.6}
\]
Lemma 3.5:
Assume a, b ∈ ℝ and r > 0 are fixed. Then it holds that:
i) For every real c the set Λ ∩ {λ ∈ ℂ | Re λ > c} is finite; in particular v₀(a, b, r) < ∞.
ii) For every v > v₀ there exist constants K_j = K_j(v) > 0, j = 0, 1, such that
Proof
The proof of proposition (i) and of the inequality (3.8a) can be found in Hale[14], Chapter 1. The inequality (3.8) then follows immediately. ■
Corollary 3.6
For every g ∈ L¹[−r, 0] and every v > v₀(a, b, r) there exists a constant C = C(g, v) > 0 such that
where
\[
v_1(u) :=
\begin{cases}
\dfrac{-u}{\cos \xi} & \text{for } u \ne 0 \\[4pt]
\dfrac{\pi}{2} & \text{for } u = 0
\end{cases}
\qquad (\xi = \xi(u) \text{ as above}), \qquad v_2(u) := -u
\]
The following proposition is essential for the solution and is a consequence of well-known results for deterministic difference-differential equations.
Proposition 3.7
We have v₀(a, b, r) < 0 if and only if (ar, br) ∈ S.
Proof
Introduce
\[
u := ar, \qquad v := br, \qquad \mu := \lambda r \tag{3.9a}
\]
and note that λ is a root of h(λ) = 0 if and only if µ is a root of h̃(µ) = 0, with
Defining
\[
\tilde{v}_0(u, v) := \max\{\operatorname{Re} \mu : \tilde{h}(\mu) = 0\} \tag{3.9c}
\]
we get
\[
v_0(a, b, r) = \frac{1}{r}\, \tilde{v}_0(ar, br) \tag{3.10}
\]
Note that h̃ is the characteristic function of
Now apply a result of Hayes[15] (see also Hale[14]) which says that ṽ₀(u, v) < 0 if and only if (u, v) ∈ S. ■
Proposition 3.8
For the equation (3.1a) the following properties are equivalent:
i) There exists a stationary solution X.
ii) All characteristic roots of (3.1a) have negative real part: v₀(a, b, r) < 0.
iii) (ar, br) ∈ S.
iv) The fundamental solution x₀ of (3.2) is square integrable:
\[
\sigma_0^2 := \int_0^\infty x_0^2(s)\,ds < \infty \tag{3.11}
\]
Proof
The equivalence of (ii) and (iii) was established in Proposition 3.7. We show (ii) ⟺ (iv): if v₀ is negative, then (3.11) follows from (3.8a) by choosing a v ∈ (v₀, 0).
Assume (3.11) holds. Then every solution of (3.2) with initial function g ∈ L²(−r, 0) is square integrable over ℝ₊; this follows from (3.4) using the Schwarz inequality. If v₀ ≥ 0, then there exists a characteristic root λ₀ with Re λ₀ ≥ 0. Obviously f(t) := exp(λ₀t), t ≥ −r, is a solution of (3.2) with f|[−r, 0] ∈ L²(−r, 0). But f is not square integrable on ℝ₊; thus v₀ < 0 must hold.
It remains now to show that (iv) ⟺ (i).
Suppose that (iv) holds. Then the following integral exists by assumption:
\[
U_t := \int_{-\infty}^{t} x_0(t - s)\,dW_s, \qquad t \in \mathbb{R}
\]
Obviously EU_t ≡ 0. Calculating the characteristic function, one can show that for all t₁ < ⋯ < tₙ the random vector (U(t₁), …, U(tₙ)) is normally distributed with covariance matrix G = (g_{ij}) given by
\[
g_{i,j} = \int_0^\infty x_0(|t_i - t_j| + s)\, x_0(s)\,ds, \qquad i, j = 1, \ldots, n \tag{3.12}
\]
In particular, U is continuous and stationary. That this process U satisfies the equation (3.1) is proved by inserting it and using that x₀(·) is the fundamental solution of (3.2).
Conversely, assume (i) holds and X is a stationary solution. In particular, X is continuous and has the representation (3.5). Thus, introducing X⁰ by
\[
X^0(t) := x_0(t)\, X(0) + b \int_{-r}^{0} x_0(t - s - r)\, X(s)\,ds, \qquad t \ge 0 \tag{3.13}
\]
we obtain
\[
E \exp\big( i\lambda X(t) \big) = E \exp\big( i\lambda X^0(t) \big) \cdot \exp\Big( \frac{-\lambda^2}{2} \int_0^t x_0^2(s)\,ds \Big), \qquad t \ge 0,\ \lambda \in \mathbb{R} \tag{3.14}
\]
Use
\[
\int_0^t x_0(t - s)\,dW_s \sim N\Big( 0,\ \int_0^t x_0^2(s)\,ds \Big)
\]
and the independence of (W(t), t ≥ 0) and F(0); the left-hand side of (3.14) is independent of t by stationarity. Thus we get (3.11), that is (iv), and therefore v₀ < 0 by (ii). By (3.8) and Corollary 3.6 it follows that
\[
\lim_{t \to \infty} X^0(t) = 0 \quad P\text{-a.s.}
\]
Consequently, we have
\[
E(X_t^0)^2 + \int_0^t x_0^2(v)\,dv = K(0) < \infty, \qquad t \ge 0
\]
Corollary 3.9
i) If
\[
a + b < 0 \quad \text{and} \quad a - b \le 0
\]
then there exists a stationary solution of (3.1a) regardless of the value of r > 0.
ii) If
\[
a + b \ge 0 \tag{3.16a}
\]
or
\[
ar \ge 1 \tag{3.16b}
\]
then there does not exist a stationary solution of (3.1a), regardless of r > 0 in the case (3.16a) or regardless of b ∈ ℝ in the case (3.16b), respectively.
iii) If
\[
a + b < 0 \quad \text{and} \quad a - b > 0
\]
then there exists a stationary solution of (3.1a) if and only if r < r₀(a, b),
where
\[
r_0(a, b) := \frac{\arccos\!\big( -\tfrac{a}{b} \big)}{(b^2 - a^2)^{1/2}}, \qquad \text{with } \arccos z \in [0, \pi] \text{ for } z \in [-1, 1].
\]
Proof
Obviously, (i) and (ii) follow from Proposition 3.8 and the definition of the set S. Let us prove (iii): defining r₀ = r₀(a, b) := sup{r > 0 : (ar, br) ∈ S}, we find that for fixed values a, b satisfying the conditions of (iii) it must hold that r₀ = (1/b)v₀ = (1/a)u₀, where v₀ = −u₀/cos ξ₀ = −ξ₀/sin ξ₀, ξ₀ = ξ₀(u₀) is the solution of ξ₀ = u₀ tan ξ₀ with ξ₀ ∈ (0, π) for u₀ ≠ 0, and u₀ and v₀ satisfy v₀/u₀ = b/a. Therefore,
\[
r_0 = -\frac{1}{b} \cdot \frac{\xi_0}{\sin \xi_0}
= \frac{-\frac{1}{b}\arccos\!\big( -\frac{a}{b} \big)}{\sin\!\big( \arccos\!\big( -\frac{a}{b} \big) \big)}
= \frac{-\frac{1}{b}\arccos\!\big( -\frac{a}{b} \big)}{\big( 1 - \frac{a^2}{b^2} \big)^{1/2}}
\]
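The critical delay r₀ can be checked numerically: at r = r₀ the characteristic function h(λ) = λ − a − b e^{−λr} should have a purely imaginary root λ = iω with ω = (b² − a²)^{1/2}. The values a = 0.5, b = −1 below are my own illustrative choices (they satisfy a + b < 0 and a − b > 0).

```python
import cmath
from math import acos, sqrt

# At the critical delay r0 = arccos(-a/b) / sqrt(b^2 - a^2) the characteristic
# function h(lambda) = lambda - a - b*exp(-lambda*r) has a root on the
# imaginary axis at lambda = i*omega with omega = sqrt(b^2 - a^2).
a, b = 0.5, -1.0                      # a + b < 0 and a - b > 0
r0 = acos(-a / b) / sqrt(b*b - a*a)
omega = sqrt(b*b - a*a)

h = lambda lam, r: lam - a - b * cmath.exp(-lam * r)
print(r0, abs(h(1j * omega, r0)))     # residual vanishes up to rounding
```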
Proposition 3.10
Let Z be an initial process and X the corresponding solution of (3.1a). If a stationary solution exists, then the distribution of (X(t + t₁), …, X(t + tₙ)), where n ∈ ℕ and t₁, …, tₙ are fixed with 0 ≤ t₁ < t₂ < ⋯ < tₙ, tends as t → ∞ to a zero-mean normal distribution with the covariance matrix (g_{i,j}) defined by (3.12).
Proof
From (3.5) it follows, with the notation (3.13), that (note that X is nonstationary in general now):
\[
X(t) = X^0(t) + G(t)
\]
with
\[
G(t) := \int_0^t x_0(t - s)\,dW_s, \qquad t \ge 0
\]
Because of the existence of a stationary solution we have v₀ < 0, and as in the proof of Proposition 3.8 we get X⁰(t) → 0 a.s.
To show that the finite-dimensional distributions of G tend to zero-mean normal distributions with covariance matrix (g_{ij}), calculate the characteristic function \( E \exp\big( i \sum_{k=1}^{n} \lambda_k\, G(t + t_k) \big) \). ■
Proposition 3.11
Assume a stationary solution V = (V(t), t ≥ −r) of (3.1a) exists. Then
i) V is the unique stationary solution of (3.1a) and it is a zero-mean Gaussian process with covariance function K(·) given by
\[
K(t) := \int_0^\infty x_0(s + t)\, x_0(s)\,ds, \qquad t \ge 0 \tag{3.19}
\]
ii) V has a spectral density f related to K by
\[
K(t) = \int_{\mathbb{R}} \exp(itu)\, f(u)\,du, \qquad t \in \mathbb{R}
\]
and given by
Proof (i) follows at once from Proposition 3.10, and the Gaussian property was shown in the proof of Proposition 3.8. Let us consider (ii).
Integrating (3.2a) we get for x₀
\[
x_0(t) = 1 + a \int_0^t x_0(s)\,ds + b \int_0^t x_0(s - r)\,ds, \qquad t \ge 0
\]
and thus
\[
|x_0(t)| \le 1 + (|a| + |b|) \int_0^t |x_0(s)|\,ds, \qquad t \ge 0
\]
Applying a Gronwall-type lemma we obtain
Thus, x₀(·) and x₀(· + t) are the inverse Fourier transforms of (2π)^{−1/2} h^{−1}(iu) and (2π)^{−1/2} h^{−1}(−iu) exp(iut), u ∈ ℝ, respectively. Applying the Parseval equation to (3.19) we obtain (3.20) and therefore (ii).
3.3 The covariance function of the stationary solution
Assume (3.1a) has a stationary solution, i.e. (ar, br) ∈ S, and let K(·) be its covariance function given by (3.19). We shall show that K(·) satisfies the difference-differential equation (3.2a) and calculate it explicitly on [−r, r]. Then, using the step method, K can in principle be calculated on the whole real axis. We shall treat K(·) on ℝ₊ only; all formulas obtained below can be extended to (−∞, 0] by the property K(−t) = K(t), t ≥ 0.
Lemma 3.12 The covariance function K(·) of the stationary solution of (3.1a) has the following properties:
iii) We have
\[
2aK(0) + 2bK(r) = -1 \tag{3.24}
\]
and
\[
\dot{K}(0+) = -1/2 \tag{3.25}
\]
iv) K is twice continuously differentiable on [0, r] and it holds
where K̈(t) is defined at t = 0 and t = r to be the right- or left-hand derivative of K̇, respectively.
Proof Using (3.11), the inequalities (3.8) and v₀ < 0, it follows from Lebesgue's dominated convergence theorem that K is differentiable on ℝ₊ with
\[
\dot{K}(t) = \int_0^\infty \dot{x}_0(s + t)\, x_0(s)\,ds, \qquad t \ge 0
\]
The continuity of K̇ on [0, ∞) follows from (3.8) similarly. Thus (i) is proved.
Now use that x₀ solves (3.2a), x₀(u) = 0 (u ∈ [−r, 0)) and K(−u) = K(u) to get (3.23). Furthermore, observe
\begin{align*}
bK(r) &= \int_0^\infty b\, x_0(s + r)\, x_0(s)\,ds \\
&= \int_0^\infty b\, x_0(s)\, x_0(s - r)\,ds \\
&= \int_0^\infty x_0(s)\, \dot{x}_0(s)\,ds - a \int_0^\infty x_0^2(s)\,ds \\
&= \lim_{s \to \infty} x_0^2(s)/2 - x_0^2(0)/2 - aK(0) \\
&= -1/2 - aK(0)
\end{align*}
Thus (3.24) holds. Inserting it into (3.23) and using the continuity of K and its symmetry, we get (3.25).
From (3.23) it follows for t ∈ [0, r]
The right-hand side of this equation is continuously differentiable on [0, r], where the derivatives at the boundaries t = 0 and t = r are understood one-sided. Use (3.23) to obtain (3.26).
Now we are ready to calculate K explicitly on [0, r]. Remark that (ar, br) ∈ S by assumption.
where \( l := |a^2 - b^2|^{1/2} \) and
\[
K(0) =
\begin{cases}
\big( b \sinh(lr) - l \big) \big/ \big( 2l\,(a + b \cosh(lr)) \big), & |b| < -a \\
(br - 1)/(4b), & b = a \\
\big( b \sin(lr) - l \big) \big/ \big( 2l\,(a + b \cos(lr)) \big), & b < -|a|
\end{cases} \tag{3.28}
\]
Proof
Solve (3.26) with the conditions (3.24)–(3.25). ■
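The relation (3.24) lends itself to a direct numerical check, since K can be computed from the fundamental solution alone. The sketch below uses my own parameter values (with (ar, br) in the stability region S) and truncates the integral (3.19) at a point where x₀ has decayed to numerical zero.

```python
import numpy as np
from math import exp, factorial, floor

# Check of (3.24):  2a K(0) + 2b K(r) = -1,  with
# K(t) = int_0^inf x0(s+t) x0(s) ds and x0 given by the series (3.3).
a, b, r = -1.0, -0.5, 1.0             # (ar, br) lies in S

def x0(t):
    return sum((b**k / factorial(k)) * (t - k*r)**k * exp(a * (t - k*r))
               for k in range(floor(t / r) + 1))

ds, T = 2e-3, 30.0                    # x0 decays exponentially; truncate at T
s = np.arange(0.0, T, ds)
x0_s  = np.array([x0(v) for v in s])
x0_sr = np.array([x0(v + r) for v in s])

K0 = np.sum(x0_s * x0_s) * ds         # K(0)
Kr = np.sum(x0_sr * x0_s) * ds        # K(r)
print(2*a*K0 + 2*b*Kr)                # should be close to -1
```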
strictly positive with
If (ar, br) ∈ S and br < −exp(ar − 1), then K(·) oscillates around zero with
(Oscillation around zero means that for every t₀ > 0 one can find t₁, t₂ > t₀ such that K(t₁) < 0 and K(t₂) > 0.)
Proof Because of (3.23), Proposition 3.2 and its proof, we have that K is strictly positive if and only if x₀ is strictly positive. Now apply Proposition 3.2 again. ■
tends to
\[
1/\big( 2\pi f^*(u) \big) := \big( u + b^* \sin(ur^*) \big)^2 + \big( a^* + b^* \cos(ur^*) \big)^2
\]
Proposition 3.15 If (a, b, r) tends to (a*, b*, r*) with (ar, br) ∈ S and (a*r*, b*r*) ∈ ∂S, then f(·) tends to f*(·) uniformly on compact subsets of ℝ∖{u*, −u*}.
3.5 APPENDIX
We shall summarize some more or less known facts about the deterministic equation (3.2), the fundamental solution x₀ and the function v₀. In particular, combining these with Proposition 3.14 one gets some more information on the behaviour of the covariance function K(·).
Lemma 3.18 (Myškis[29], p. 101) Every solution of (3.2) with initial function g ∈ L¹(−r, 0) has the following representation:
\[
x_g(t) = \sum_{\operatorname{Re} \lambda_k \ge \gamma} \exp(\lambda_k t)\big( C_{0,k} + \cdots + C_{m_k - 1,\, k}\, t^{m_k - 1} \big) + o\big( \exp(\gamma t) \big) \tag{3.31}
\]
where â := a − u and b̂ := b exp(−ur) for any real number u. Then we can form the characteristic functions h and ĥ for (3.6) and (3.7), respectively. It is easy to see that
\[
h(\lambda) = \hat{h}(\lambda - u), \qquad \lambda \in \mathbb{C} \tag{3.38}
\]
Now we use (3.39) with u = r₀ and obtain for t > 0 with x₀(t) ≠ 0
Indeed, if x₀ is the fundamental solution of (3.2), then
is a solution of
\[
\dot{\bar{x}}(t) = q\, \bar{x}(t - 1), \qquad t \ge 0
\]
with q := br exp(−ar),
and prove:
b) There exist positive solutions of (3.42) if and only if the fundamental solution of (3.2) is positive.
From (a) it then follows that all solutions of (3.42) oscillate around zero if and only if q > 1/e, and the lemma is proved.
Proof of (a) Assume there exists a positive solution g of (3.42). Then g(t) > 0 for all t ≥ 0 and ġ(t) < 0 for t ≥ 1. Therefore the limit lim_{t→∞} g(t − 1)/g(t) =: α exists, and it is easy to see that 1 ≤ α < ∞. Let ε > 0. Then there is a real number Q such that
\[
\alpha - \varepsilon \le g(t - 1)/g(t) \qquad \text{for } Q \le t < \infty.
\]
and therefore
\[
f(t) \le g(t), \quad -1 \le t \le 0, \qquad f(0) = g(0),
\]
\[
\hat{h}(z) := z - c \exp(-z), \qquad z \in \mathbb{C},
\]
\[
\hat{v}_0(c) := \max\{\operatorname{Re} z : \hat{h}(z) = 0\}
\]
Then v̂₀(c) < ∞ for c ∈ ℝ, and we have
\[
v_0(a, b, r) = \frac{1}{r}\Big[ \hat{v}_0\big( br \cdot \exp(-ar) \big) + ar \Big]
\]
Proof We have h(λ) = 0 if and only if ĥ(z) = 0 where z = (λ−a)r and c = br exp(−ar)
Consequently, we have to study the function v̂0 of one real variable c only. This
has been done by several authors, see e.g. Wright[36].
We shall present here without proof some properties of v̂0 , partially known from
Wright[36]. For details the reader is referred to Mensch[26]. Note that if a and r are
fixed, then v0 and v̂0 have very similar graphs as functions of br
Figure 1 The function v̂0 (c)
vi) If |c| < exp(−1) then it holds that
\[
\hat{v}_0(c) = \sum_{n=1}^{\infty} (-1)^{n-1}\, \frac{n^{n-1}}{n!}\, c^n \tag{3.43}
\]
Chapter 4
In this chapter we shall establish some examples to test the validity of the technique
developed.
4.2 Problem 1
Solve the stochastic differential equation
\[
dx_t = a(t)\,dt + b(t)\,dW_t, \qquad x(0) = X_0,
\]
where X₀ is constant. Show that the solution x(t) is a Gaussian deviate and find its mean and variance.
Solution
The formal solution to this problem is
\[
x(t) = X_0 + \int_0^t a(s)\,ds + \int_0^t b(s)\,dW_s
\]
where the second integral is to be interpreted as an Ito integral. The fact that x(t) is a Gaussian deviate stems from the fact that the stochastic part of x is simply a
weighted combination of Gaussian deviates, and is therefore a Gaussian deviate. The mean is
\[
\mu(t) = X_0 + \int_0^t a(s)\,ds
\]
and the variance is
\[
E\big[ (x - \mu)^2 \big] = E\Big[ \int_0^t b(s)\,dW_s \int_0^t b(s)\,dW_s \Big] = \int_0^t b^2(s)\,ds
\]
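These mean and variance formulas can be verified by Monte Carlo simulation. The choices a(s) = cos(s) and b(s) = 1 + s below are my own illustrations, not from the problem.

```python
import numpy as np

# Monte-Carlo check of Problem 1 with the illustrative choices
# a(s) = cos(s), b(s) = 1 + s:
#   mean  = X0 + int_0^T cos(s) ds = X0 + sin(T)
#   variance = int_0^T (1+s)^2 ds = ((1+T)^3 - 1)/3
rng = np.random.default_rng(2)
X0, T, dt, n_paths = 2.0, 1.0, 1e-2, 20_000
n = round(T / dt)
s = dt * np.arange(n)

dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n))
x = X0 + np.sum(np.cos(s) * dt + (1.0 + s) * dW, axis=1)

mean_exact = X0 + np.sin(T)
var_exact = ((1 + T)**3 - 1) / 3      # = 7/3 for T = 1
print(x.mean(), mean_exact, x.var(), var_exact)
```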
4.3 Problem 2
In an Ornstein–Uhlenbeck process, x(t), the state of a system at time t, satisfies the stochastic differential equation
\[
dx_t = -\alpha\,(x_t - X)\,dt + \sigma\,dW_t,
\]
where α and σ are positive constants and X is the equilibrium state of the system in the absence of system noise. Solve this SDE. Use the solution to explain why x(t) is a Gaussian process, and deduce its mean and variance.
Solution
For any function y = y(x, t), Ito's lemma gives
\[
dy = \Big( \frac{\partial y}{\partial t} - \alpha(x - X)\frac{\partial y}{\partial x} + \frac{\sigma^2}{2}\frac{\partial^2 y}{\partial x^2} \Big)\, dt + \sigma \frac{\partial y}{\partial x}\, dW_t
\]
We observe that if x(0) is constant or is itself a Gaussian deviate, then x(t) is simply a sum of Gaussian deviates and so is a Gaussian deviate. The mean value of x(t) is
\[
\mu(t) = E\Big[ X + (x(0) - X)e^{-\alpha t} + \sigma \int_0^t e^{-\alpha(t-s)}\,dW_s \Big] = X + (\bar{x}(0) - X)e^{-\alpha t}
\]
The variance of x(t) is now computed from
\begin{align*}
E\big[ (x - \mu)^2 \big] &= E\bigg[ \Big( (x(0) - \bar{x}(0))e^{-\alpha t} + \sigma \int_0^t e^{-\alpha(t-s)}\,dW_s \Big)^2 \bigg] \\
&= E\Big[ (x(0) - \bar{x}(0))^2 e^{-2\alpha t} + 2(x(0) - \bar{x}(0))e^{-\alpha t}\, \sigma \int_0^t e^{-\alpha(t-s)}\,dW_s \\
&\qquad\quad + \sigma^2 \int_0^t e^{-\alpha(t-s)}\,dW_s \int_0^t e^{-\alpha(t-u)}\,dW_u \Big] \\
&= \sigma_0^2\, e^{-2\alpha t} + \sigma^2 \int_0^t e^{-2\alpha(t-s)}\,ds
= \sigma_0^2\, e^{-2\alpha t} + \frac{\sigma^2}{2\alpha}\big( 1 - e^{-2\alpha t} \big)
\end{align*}
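The derived mean and variance can be checked by simulating the Ornstein–Uhlenbeck SDE directly. The parameter values below are my own illustrative choices, with a deterministic starting point so that σ₀² = 0.

```python
import numpy as np

# Euler-Maruyama simulation of  dx = -alpha (x - X) dt + sigma dW,
# x(0) = x0 deterministic, compared with the derived formulas
#   mean     = X + (x0 - X) e^{-alpha t}
#   variance = sigma^2 (1 - e^{-2 alpha t}) / (2 alpha)
rng = np.random.default_rng(3)
alpha, sigma, X, x0 = 2.0, 0.8, 1.0, 3.0
T, dt, n_paths = 1.0, 1e-3, 20_000

x = np.full(n_paths, x0)
for _ in range(round(T / dt)):
    x += -alpha * (x - X) * dt + sigma * rng.normal(0, np.sqrt(dt), n_paths)

mean_exact = X + (x0 - X) * np.exp(-alpha * T)
var_exact = sigma**2 * (1 - np.exp(-2 * alpha * T)) / (2 * alpha)
print(x.mean(), mean_exact, x.var(), var_exact)
```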
4.4 Problem 3
Let x = (x₁, …, xₙ) be the solution of the system of Ito stochastic differential equations
\[
dx_k = a_k\,dt + b_{k\alpha}\,dW_\alpha
\]
where a repeated Greek index indicates summation from α = 1 to α = m. Show that x = (x₁, …, xₙ) is the solution of the Stratonovich system
\[
dx_k = \Big( a_k - \frac{1}{2}\, b_{j\alpha}\, b_{k\alpha,j} \Big)\, dt + b_{k\alpha} \circ dW_\alpha
\]
Let ϕ = ϕ(t, x) be a suitably differentiable function of t and x. Show that ϕ is the solution of the stochastic differential equation
\[
d\phi = \Big( \frac{\partial \phi}{\partial t} + \bar{a}_k \frac{\partial \phi}{\partial x_k} \Big)\, dt + b_{k\alpha} \frac{\partial \phi}{\partial x_k} \circ dW_\alpha
\]
where
\[
\bar{a}_k = a_k - \frac{1}{2}\, b_{k\alpha,j}\, b_{j\alpha}.
\]
Solution
The SDE dxᵢ = aᵢ dt + b_{iα} dW^α has the formal solution
\[
x_i(t) = x_i(0) + \int_{t_0}^t a_i(s, x_s)\,ds + \int_{t_0}^t b_{i\alpha}(s, x_s)\,dW_s^\alpha
\]
The task is to relate the Ito integral in this solution to the corresponding Stratonovich integral. Each Wiener process behaves separately.
It follows directly from the stochastic differential equation that
(j) (j) β β
xk−1/2 −xk−1 = aj (tk−1 , xk−1 ) tk−1/2 − tk−1 +bjβ (tk−1 , xk−1 ) Wk−1/2 − Wk−1 +· · ·
\[ \int_{t_0}^t b_{i\alpha}(s, x_s)\circ dW_s^\alpha = \lim_{n\to\infty}\sum_{k=1}^n b_{i\alpha}\big(t_{k-1/2}, x_{k-1/2}\big)\big(W_k^\alpha - W_{k-1}^\alpha\big) \]
\[ = \lim_{n\to\infty}\sum_{k=1}^n b_{i\alpha}(t_{k-1}, x_{k-1})\big(W_k^\alpha - W_{k-1}^\alpha\big) + \lim_{n\to\infty}\sum_{k=1}^n \frac{\partial b_{i\alpha}(t_{k-1}, x_{k-1})}{\partial t}\big(t_{k-1/2} - t_{k-1}\big)\big(W_k^\alpha - W_{k-1}^\alpha\big) \]
\[ \qquad + \lim_{n\to\infty}\sum_{k=1}^n \frac{\partial b_{i\alpha}(t_{k-1}, x_{k-1})}{\partial x^{(j)}}\big(x^{(j)}_{k-1/2} - x^{(j)}_{k-1}\big)\big(W_k^\alpha - W_{k-1}^\alpha\big) + \cdots \]
\[ = \int_{t_0}^t b_{i\alpha}(s, x_s)\,dW_s^\alpha + \lim_{n\to\infty}\sum_{k=1}^n \frac{\partial b_{i\alpha}(t_{k-1}, x_{k-1})}{\partial x^{(j)}}\big(x^{(j)}_{k-1/2} - x^{(j)}_{k-1}\big)\big(W_k^\alpha - W_{k-1}^\alpha\big) \]
and therefore the value of the second contribution to the Stratonovich integral is
\[ \lim_{n\to\infty}\sum_{k=1}^n \frac{\partial b_{i\alpha}(t_{k-1}, x_{k-1})}{\partial x^{(j)}}\, b_{j\beta}(t_{k-1}, x_{k-1})\big(W^\beta_{k-1/2} - W^\beta_{k-1}\big)\big(W_k^\alpha - W_{k-1}^\alpha\big) \]
\[ = \lim_{n\to\infty}\frac{1}{2}\sum_{k=1}^n b_{i\alpha,j}(t_{k-1}, x_{k-1})\,b_{j\beta}(t_{k-1}, x_{k-1})\,\delta_{\alpha\beta}\,(t_k - t_{k-1}) = \frac{1}{2}\int_{t_0}^t b_{i\alpha,j}(s, x_s)\,b_{j\alpha}(s, x_s)\,ds, \]
since E[(W^β_{k−1/2} − W^β_{k−1})(W^α_k − W^α_{k−1})] = ½ δ_{αβ}(t_k − t_{k−1}).
In conclusion,
\[ \int_{t_0}^t b_{i\alpha}(s, x_s)\circ dW_s^\alpha = \int_{t_0}^t b_{i\alpha}(s, x_s)\,dW_s^\alpha + \frac{1}{2}\int_{t_0}^t b_{i\alpha,j}(s, x_s)\,b_{j\alpha}(s, x_s)\,ds \]
where repetition of α implies summation over the independent Wiener processes. The formal solution of the SDE with the Ito integral replaced by the Stratonovich integral is therefore
\[ x_i(t) = x_i(0) + \int_{t_0}^t \Big(a_i(s, x_s) - \frac{1}{2}\,b_{i\alpha,j}(s, x_s)\,b_{j\alpha}(s, x_s)\Big)ds + \int_{t_0}^t b_{i\alpha}(s, x_s)\circ dW_s^\alpha. \]
Thus the Stratonovich form of the SDE is obtained from the Ito form by replacing the drift component aᵢ by the modified drift
\[ \bar a_i = a_i - \frac{1}{2}\,b_{i\alpha,j}\,b_{j\alpha}. \]
Let ϕ = ϕ(t, x), then
\[ d\phi = \frac{\partial\phi}{\partial t}\,dt + \phi_{,j}\,dx^{(j)} + \frac{1}{2}\phi_{,ij}\,dx^{(i)}dx^{(j)} + \cdots \]
\[ = \frac{\partial\phi}{\partial t}\,dt + \phi_{,j}\big(a_j\,dt + b_{j\alpha}\,dW_\alpha\big) + \frac{1}{2}\phi_{,ij}\,b_{i\alpha}\,dW_\alpha\,b_{j\beta}\,dW_\beta + \cdots \]
\[ = \frac{\partial\phi}{\partial t}\,dt + \phi_{,j}\big(a_j\,dt + b_{j\alpha}\,dW_\alpha\big) + \frac{1}{2}\phi_{,ij}\,b_{i\alpha}b_{j\alpha}\,dt. \]
Now write
\[ b_{j\alpha}\,dW_\alpha = b_{j\alpha}\circ dW_\alpha - \frac{1}{2}\,b_{j\alpha,k}\,b_{k\alpha}\,dt \]
to obtain
\[ d\phi = \Big(\frac{\partial\phi}{\partial t} + \phi_{,j}\,a_j - \frac{1}{2}\phi_{,j}\,b_{j\alpha,k}\,b_{k\alpha} + \frac{1}{2}\phi_{,ij}\,b_{i\alpha}b_{j\alpha}\Big)dt + \phi_{,j}\,b_{j\alpha}\circ dW_\alpha \]
\[ = \Big(\frac{\partial\phi}{\partial t} + \phi_{,j}\,\bar a_j + \frac{1}{2}\phi_{,ij}\,b_{i\alpha}b_{j\alpha}\Big)dt + \phi_{,j}\,b_{j\alpha}\circ dW_\alpha. \]
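The drift correction ½ b_{jα} b_{kα,j} can be evaluated numerically for a concrete diffusion matrix. The sketch below uses the hypothetical diagonal choice b_{kα}(x) = σ x_k δ_{kα} (not from the text), for which the correction works out analytically to ½σ²x_k, and approximates the partial derivatives by central differences.

```python
import numpy as np

sigma = 0.7                                    # illustrative constant

def b(x):
    """Hypothetical diagonal diffusion b_{k alpha}(x) = sigma x_k delta_{k alpha}."""
    return sigma * np.diag(x)                  # entry [k, alpha]

def stratonovich_correction(x, h=1e-6):
    """(1/2) b_{j alpha} b_{k alpha, j} via central finite differences."""
    n = x.size
    corr = np.zeros(n)
    for j in range(n):
        e = np.zeros(n); e[j] = h
        db_dxj = (b(x + e) - b(x - e)) / (2 * h)   # d b_{k alpha} / d x_j
        corr += 0.5 * b(x)[j, :] @ db_dxj.T        # contract over alpha, sum over j
    return corr

x = np.array([1.5, -2.0, 0.3])
numeric = stratonovich_correction(x)
analytic = 0.5 * sigma**2 * x                  # hand-computed value for this b
```

For this diagonal b the numerical contraction reproduces ½σ²x_k componentwise.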
4.5 Problem 4
The position x(t) of a particle executing a uniform random walk is the solution of the stochastic differential equation
\[ dx_t = \mu\,dt + \sigma\,dW_t, \qquad x(0) = X, \]
where μ and σ are constants. Find the density of x at time t > 0.
Solution 1: Intuitive approach
The SDE can be integrated immediately to get
\[ x_t = X + \mu t + \sigma W_t. \]
Solution 2: PDE approach
If x satisfies the SDE dx_t = μ dt + σ dW_t then the density f(x, t) of x at time t satisfies the partial differential equation
\[ \frac{\partial f}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial x^2} - \mu\frac{\partial f}{\partial x}. \]
The task is to solve this equation with initial condition f(x, 0) = δ(x − X). In the absence of intuition, take either the Fourier transform of f with respect to x or the Laplace transform of f with respect to t. For example, let
\[ \hat f(t;\omega) = \int_{-\infty}^{\infty} f(x, t)\,e^{i\omega x}\,dx, \]
then
\[ \frac{d\hat f}{dt} = \frac{\sigma^2}{2}\int_{-\infty}^{\infty}\frac{\partial^2 f}{\partial x^2}e^{i\omega x}\,dx - \mu\int_{-\infty}^{\infty}\frac{\partial f}{\partial x}e^{i\omega x}\,dx = \Big(\frac{-\omega^2\sigma^2}{2} + \mu i\omega\Big)\hat f \]
with initial condition \(\hat f(0;\omega) = e^{i\omega X}\). Clearly the solution of this first order ODE is
\[ \hat f(t;\omega) = \exp\Big(\frac{-\omega^2\sigma^2 t}{2} + (X + \mu t)\,i\omega\Big). \]
However, this is the characteristic function of the Gaussian distribution with mean X + μt and variance σ²t. Thus
\[ f(x, t) = \frac{1}{\sigma\sqrt{2\pi t}}\,e^{-(x - X - \mu t)^2/2\sigma^2 t}. \]
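The inversion of the characteristic function can also be carried out numerically. The sketch below (illustrative parameter values) evaluates f(x, t) = (1/2π)∫ f̂(t; ω)e^{−iωx} dω by the trapezoid rule and compares it with the Gaussian density just obtained.

```python
import numpy as np

# Numerically invert f_hat(t; w) = exp(-w^2 sigma^2 t/2 + (X + mu t) i w)
# and compare with the claimed Gaussian density.  Parameters illustrative.
mu, sigma, X, t = 0.4, 0.8, 1.0, 2.0

w = np.linspace(-40.0, 40.0, 16_001)
dw = w[1] - w[0]
f_hat = np.exp(-w**2 * sigma**2 * t / 2 + 1j * (X + mu * t) * w)

x = np.linspace(X + mu * t - 4.0, X + mu * t + 4.0, 81)
kernel = np.exp(-1j * np.outer(x, w))          # e^{-i w x} on the (x, w) grid
f_num = np.real(kernel @ f_hat) * dw / (2 * np.pi)

f_exact = np.exp(-(x - X - mu * t) ** 2 / (2 * sigma**2 * t)) \
          / (sigma * np.sqrt(2 * np.pi * t))
```

Because f̂ decays rapidly in ω, the trapezoid rule reproduces the Gaussian density essentially to machine precision.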
4.6 Problem 5
Solve the stochastic differential equation
\[ dx_t = a(t)\,dt + b(t)\,dW_t, \qquad x(0) = X_0, \]
where a and b are prescribed functions of time and X₀ is constant. Show that the solution x_t is a Gaussian deviate and find its mean and variance.
Solution
The formal solution to this problem is
\[ x(t) = X_0 + \int_0^t a(s)\,ds + \int_0^t b(s)\,dW_s \]
where the second integral is to be interpreted as an Ito integral. The fact that x(t) is a Gaussian deviate stems from the fact that the stochastic part of x is simply a weighted combination of Gaussian deviates, and is therefore a Gaussian deviate. The mean is
\[ \mu(t) = X_0 + \int_0^t a(s)\,ds \]
and the variance is
\[ E\big[(x - \mu)^2\big] = E\Big[\int_0^t b(s)\,dW_s\int_0^t b(s)\,dW_s\Big] = \int_0^t b^2(s)\,ds. \]
4.7 Problem 6
It is given that the solution of the initial value problem
\[ dX = -\alpha X\,dt + \sigma X\,dW, \qquad X(0) = x_0, \]
is X(t) = x₀ exp(−(α + σ²/2)t + σW(t)). Show that
\[ E[X] = x_0\,e^{-\alpha t}, \qquad V[X] = e^{-2\alpha t}\big(e^{\sigma^2 t} - 1\big)x_0^2. \]
Solution
The expected value of X is
\[ E[X] = x(0)\,e^{-(\alpha+\sigma^2/2)t}\,E\big[e^{\sigma W(t)}\big] = x(0)\,e^{-(\alpha+\sigma^2/2)t}\,e^{\sigma^2 t/2} = x(0)\,e^{-\alpha t}, \]
using the fact that E[e^{σW(t)}] = e^{σ²t/2}. Consequently it follows that the variance of X is
\[ V[X] = E\Big[\big(x(0)\exp\big(-(\alpha + \sigma^2/2)t + \sigma W(t)\big) - x(0)e^{-\alpha t}\big)^2\Big] \]
\[ = x^2(0)\,e^{-2\alpha t}\,E\Big[\big(\exp(-\sigma^2 t/2 + \sigma W(t)) - 1\big)^2\Big] \]
\[ = x^2(0)\,e^{-2\alpha t - \sigma^2 t}\,E\Big[\big(e^{\sigma W(t)} - e^{\sigma^2 t/2}\big)^2\Big] \]
\[ = x^2(0)\,e^{-2\alpha t - \sigma^2 t}\,E\Big[e^{2\sigma W(t)} - 2e^{\sigma W(t)}e^{\sigma^2 t/2} + e^{\sigma^2 t}\Big] \]
\[ = x^2(0)\,e^{-2\alpha t - \sigma^2 t}\big(e^{2\sigma^2 t} - 2e^{\sigma^2 t} + e^{\sigma^2 t}\big) = x^2(0)\,e^{-2\alpha t}\big(e^{\sigma^2 t} - 1\big). \]
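A quick Monte Carlo sanity check of these two formulas, sampling W(t) ~ N(0, t) directly in the given solution; all parameter values are illustrative.

```python
import numpy as np

# Verify E[X] and V[X] for X(t) = x0 exp(-(alpha + sigma^2/2) t + sigma W(t)).
rng = np.random.default_rng(0)
alpha, sigma, x0, t, M = 1.0, 0.3, 2.0, 1.0, 400_000

W = np.sqrt(t) * rng.standard_normal(M)        # W(t) ~ N(0, t)
X = x0 * np.exp(-(alpha + 0.5 * sigma**2) * t + sigma * W)

E_exact = x0 * np.exp(-alpha * t)
V_exact = x0**2 * np.exp(-2 * alpha * t) * (np.exp(sigma**2 * t) - 1.0)
```

The sample mean and variance of `X` match the closed forms to Monte Carlo accuracy.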
4.8 Problem 7
Benjamin Gompertz (1840) proposed a well-known law of mortality that had the important property that financial products based on male and female mortality could be priced from a single mortality table with an age decrement in the case of females. Cell populations are also well-known to obey Gompertzian kinetics in which N(t), the population of cells at time t, evolves according to the ordinary differential equation
\[ \frac{dN}{dt} = \alpha N\log\frac{M}{N} \]
where M and α are constants, M representing the maximum resource-limited population of cells. Write down the stochastic form of this equation and deduce that ψ = log N satisfies an OU process. Further deduce that mean reversion takes place about a cell population that is smaller than M, and find this population.
Solution
The stochastic form of this SDE is
\[ dN = \alpha N\log\frac{M}{N}\,dt + \sigma N\,dW. \]
Ito's lemma applied to ψ = log N gives
\[ d\psi = \frac{dN}{N} - \frac{\sigma^2 N^2}{2N^2}\,dt = \alpha\log\frac{M}{N}\,dt + \sigma\,dW - \frac{\sigma^2}{2}\,dt = \Big(\alpha\log M - \alpha\psi - \frac{\sigma^2}{2}\Big)dt + \sigma\,dW. \]
Thus ψ satisfies an OU equation with mean state ψ̄ = log M − σ²/2α, which in turn translates into the population N = M e^{−σ²/2α}. The standard solution of the OU equation may be used to write down the general solution for ψ and consequently the general solution for N(t). The result of this calculation is that
\[ N(t) = \exp\Big(\big(1 - e^{-\alpha t}\big)\bar\psi + e^{-\alpha t}\psi_0 + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big) \]
\[ = \exp\Big(\big(1 - e^{-\alpha t}\big)\log\big(M e^{-\sigma^2/2\alpha}\big) + e^{-\alpha t}\log N_0 + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big) \]
\[ = (N_0)^{e^{-\alpha t}}\big(M e^{-\sigma^2/2\alpha}\big)^{(1 - e^{-\alpha t})}\exp\Big(\sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big) \]
\[ = M e^{-\sigma^2/2\alpha}\Big(\frac{N_0}{M}\Big)^{e^{-\alpha t}}\exp\Big(\frac{\sigma^2 e^{-\alpha t}}{2\alpha} + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big). \]
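The mean-reversion level can be observed in simulation. The sketch below (illustrative parameter values) advances ψ = log N with the Euler rule and checks that the long-run average of ψ settles at ψ̄ = log M − σ²/2α, corresponding to a population below M.

```python
import numpy as np

# Simulate d psi = (alpha log M - alpha psi - sigma^2/2) dt + sigma dW and
# check the mean-reversion level.  Parameter values are illustrative.
rng = np.random.default_rng(0)
alpha, sigma, M_cap, N0 = 1.0, 0.4, 100.0, 10.0   # M_cap plays the role of M
T, steps, paths = 10.0, 2000, 10_000
dt = T / steps

psi = np.full(paths, np.log(N0))
for _ in range(steps):
    psi += (alpha * np.log(M_cap) - alpha * psi - 0.5 * sigma**2) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal(paths)

psi_bar = np.log(M_cap) - sigma**2 / (2 * alpha)   # OU mean state
N_star = M_cap * np.exp(-sigma**2 / (2 * alpha))   # reversion population < M
```

After ten mean-reversion times, the cross-path average of ψ sits at ψ̄ and the corresponding population N_star is strictly smaller than M.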
4.9 Problem 8
The position x(t) of a particle executing a uniform random walk is the solution of the stochastic differential equation
\[ dx_t = \mu(t)\,dt + \sigma(t)\,dW_t, \qquad x(0) = X, \]
where μ and σ are now prescribed functions of time. Find the density of x at time t > 0.
Solution 1: Intuitive approach
In this case x_t − X − ∫₀ᵗ μ(s)ds is an N(0, ∫₀ᵗ σ²(s)ds) Gaussian random deviate. Thus
\[ f(x, t) = \frac{1}{\sqrt{2\pi\int_0^t\sigma^2(s)\,ds}}\exp\Bigg[-\frac{\big(x - X - \int_0^t\mu(s)\,ds\big)^2}{2\int_0^t\sigma^2(s)\,ds}\Bigg]. \]
Solution 2: PDE approach
If x satisfies the SDE dx_t = μ(t)dt + σ(t)dW_t then the density f(x, t) of x at time t satisfies the partial differential equation
\[ \frac{\partial f}{\partial t} = \frac{\sigma^2(t)}{2}\frac{\partial^2 f}{\partial x^2} - \mu(t)\frac{\partial f}{\partial x}. \]
The task is to solve this equation with initial condition f(x, 0) = δ(x − X). The procedure is precisely the same as in the previous question, except that the Fourier transform of f is now the preferred approach. Let
\[ \hat f(t;\omega) = \int_{-\infty}^{\infty} f(x, t)\,e^{i\omega x}\,dx, \]
then
\[ \frac{d\hat f}{dt} = \frac{\sigma^2(t)}{2}\int_{-\infty}^{\infty}\frac{\partial^2 f}{\partial x^2}e^{i\omega x}\,dx - \mu(t)\int_{-\infty}^{\infty}\frac{\partial f}{\partial x}e^{i\omega x}\,dx = \Big(\frac{-\omega^2\sigma^2(t)}{2} + \mu(t)\,i\omega\Big)\hat f \]
with initial condition \(\hat f(0;\omega) = e^{i\omega X}\). Clearly the solution of this first order ODE is
\[ \hat f(t;\omega) = \exp\Big(\frac{-\omega^2}{2}\int_0^t\sigma^2(u)\,du + \Big(X + \int_0^t\mu(u)\,du\Big)i\omega\Big). \]
This function is now the characteristic function of the Gaussian distribution with mean and variance
\[ X + \int_0^t\mu(u)\,du, \qquad \int_0^t\sigma^2(u)\,du. \]
Thus
\[ f(x, t) = \frac{1}{\sqrt{2\pi\int_0^t\sigma^2(u)\,du}}\exp\Bigg[-\frac{\big(x - X - \int_0^t\mu(u)\,du\big)^2}{2\int_0^t\sigma^2(u)\,du}\Bigg]. \]
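A short Monte Carlo check of the mean and variance appearing in this density, with the illustrative choices μ(t) = cos t and σ(t) = √(1 + t):

```python
import numpy as np

# Simulate dx = mu(t) dt + sigma(t) dW with mu(t) = cos t, sigma(t) = sqrt(1+t)
# (illustrative) and compare sample statistics with the quadrature formulas.
rng = np.random.default_rng(0)
X, T, N, M = 0.0, 2.0, 400, 100_000
dt = T / N
t_mid = (np.arange(N) + 0.5) * dt              # midpoint of each time step

x = np.full(M, X)
for tk in t_mid:
    x += np.cos(tk) * dt + np.sqrt(1.0 + tk) * np.sqrt(dt) * rng.standard_normal(M)

mean_exact = X + np.sin(T)                     # X + int_0^T cos s ds
var_exact = T + T**2 / 2                       # int_0^T (1 + s) ds
```

The sample mean and variance of `x` agree with X + ∫μ and ∫σ² respectively.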
4.10 Problem 9
The state x(t) of a particle satisfies the stochastic differential equation
\[ dX_t = A\,dt + B\,dW_t, \qquad X(0) = X_0, \]
where A is a constant vector, B is a constant matrix and W is a vector of independent Wiener processes. Find the density of X at time t > 0.
Solution 1: Intuitive approach
This is again a repeat of the previous example. In matrix notation this SDE can be integrated immediately to get
\[ X_t = X_0 + At + BW_t. \]
Hence X_t is Gaussian with density
\[ f(X, t) = \frac{1}{(2\pi t)^{N/2}|G|^{1/2}}\exp\Big[-\frac{(X - X_0 - At)\,G^{-1}(X - X_0 - At)^T}{2t}\Big], \]
where G = [g_{jk}] and g = BBᵀ.
Solution 2: PDE approach
The task is to solve the corresponding Fokker-Planck equation
\[ \frac{\partial f}{\partial t} = \frac{g_{jk}}{2}\frac{\partial^2 f}{\partial x_j\partial x_k} - a_k\frac{\partial f}{\partial x_k} \]
with initial condition f(x, 0) = δ(x − X). The n-dimensional Fourier transform of f with respect to x is defined by the formula
\[ \hat f(t;\omega) = \int_{\mathbb{R}^n} f(x, t)\,e^{i\langle\omega, x\rangle}\,dx, \]
so that
\[ \frac{d\hat f}{dt} = \frac{g_{jk}}{2}\int_{\mathbb{R}^n}\frac{\partial^2 f}{\partial x_j\partial x_k}e^{i\langle\omega, x\rangle}\,dx - a_k\int_{\mathbb{R}^n}\frac{\partial f}{\partial x_k}e^{i\langle\omega, x\rangle}\,dx = \Big(\frac{-g_{jk}\omega_j\omega_k}{2} + a_k\omega_k i\Big)\hat f \]
with initial condition \(\hat f(0;\omega) = e^{i\omega\cdot X}\). Clearly
\[ \hat f(t;\omega) = \exp\Big(\frac{-g_{jk}\omega_j\omega_k t}{2} + (X_k + a_k t)\,i\omega_k\Big). \]
By way of variety, the probability density function f(x, t) is computed by direct inversion of the Fourier transform using the identity
\[ f(x, t) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\hat f(t;\omega)\,e^{-i\omega\cdot x}\,d\omega = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\Big(-\frac{1}{2}\big[g_{jk}\omega_j\omega_k t + 2(x_k - X_k - a_k t)\,i\omega_k\big]\Big)d\omega. \]
Let v = x − X − at; then by observing that G = [g_{jk} t] is a symmetric positive definite matrix, it is straightforward algebra to demonstrate that
\[ g_{jk}\omega_j\omega_k t + 2(x_k - X_k - a_k t)\,i\omega_k = \big(\omega + ivG^{-1}\big)G\big(\omega + ivG^{-1}\big)^T + vG^{-1}v^T. \]
Therefore,
\[ f(x, t) = \exp\Big(-\frac{vG^{-1}v^T}{2}\Big)\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\Big(-\frac{1}{2}\big(\omega + ivG^{-1}\big)G\big(\omega + ivG^{-1}\big)^T\Big)d\omega = \exp\Big(-\frac{vG^{-1}v^T}{2}\Big)\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\Big(-\frac{1}{2}\xi G\xi^T\Big)d\xi. \]
In order to evaluate this integral, observe that since G is symmetric and positive definite there exists a non-singular matrix F such that G = FFᵀ. Thus
\[ \xi G\xi^T = \xi FF^T\xi^T = (\xi F)(\xi F)^T = \eta\eta^T, \qquad \eta = \xi F. \]
Changing variables from ξ to η = ξF gives
\[ f(x, t) = \exp\Big(-\frac{vG^{-1}v^T}{2}\Big)\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\frac{1}{\det F}\exp\Big(-\frac{\eta_1^2 + \eta_2^2 + \cdots + \eta_n^2}{2}\Big)d\eta. \]
Now |F|² = |G| = tⁿ|g| where g = [g_{jk}], so |F| = t^{n/2}|g|^{1/2}. Furthermore,
\[ \int_{\mathbb{R}^n}\exp\Big(-\frac{\eta_1^2 + \eta_2^2 + \cdots + \eta_n^2}{2}\Big)d\eta = (2\pi)^{n/2}, \]
and therefore
\[ f(x, t) = \frac{1}{(2\pi t)^{n/2}|g|^{1/2}}\exp\Big(-\frac{vG^{-1}v^T}{2}\Big), \]
in agreement with the intuitive approach.
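For a concrete two-dimensional example the density above can be checked by quadrature; the matrix g below is an illustrative positive definite choice, and the grid is taken in the centred variable v = x − X − at.

```python
import numpy as np

# Check that the 2-d Gaussian density integrates to one.  Values illustrative.
t = 0.7
g = np.array([[1.0, 0.3], [0.3, 0.5]])      # g = B B^T, positive definite
G = g * t                                    # covariance of v = x - X - a t
Ginv = np.linalg.inv(G)

s = np.linspace(-6.0, 6.0, 401)              # grid in each component of v
dv = s[1] - s[0]
V1, V2 = np.meshgrid(s, s, indexing="ij")
quad = Ginv[0, 0] * V1**2 + 2 * Ginv[0, 1] * V1 * V2 + Ginv[1, 1] * V2**2
f = np.exp(-0.5 * quad) / (2 * np.pi * t * np.sqrt(np.linalg.det(g)))

total = float(f.sum() * dv * dv)             # two-dimensional Riemann sum
```

With n = 2 the prefactor (2πt)^{n/2}|g|^{1/2} reduces to 2πt√|g|, and the numerical mass is one.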
4.11 Problem 10
The state x(t) of a system evolves in accordance with the stochastic differential equation
\[ dx_t = \mu x\,dt + \sigma x\,dW_t, \qquad x(0) = X, \]
where μ and σ are constants. Find the density of x at time t > 0.
Solution
Ito's lemma applied to Y = log X gives
\[ dY = \frac{dY}{dX}\,dX + \frac{\sigma^2 X^2}{2}\frac{d^2Y}{dX^2}\,dt = \frac{1}{X}\big(\mu X\,dt + \sigma X\,dW_t\big) + \frac{\sigma^2 X^2}{2}\Big(\frac{-1}{X^2}\Big)dt = \Big(\mu - \frac{\sigma^2}{2}\Big)dt + \sigma\,dW_t. \]
Alternatively, the density f(x, t) of x satisfies the Fokker-Planck equation
\[ \frac{\partial f(x, t)}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\big(\sigma^2 x^2 f(x, t)\big) - \frac{\partial}{\partial x}\big(\mu x f(x, t)\big). \]
Let z = log x, then
\[ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial z}\frac{1}{x}, \qquad \frac{\partial^2 f}{\partial x^2} = \frac{\partial^2 f}{\partial z^2}\frac{1}{x^2} - \frac{\partial f}{\partial z}\frac{1}{x^2}. \]
Thus it is seen that f satisfies the modified PDE
\[ \frac{\partial f}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\big(\sigma^2 x^2 f(x, t)\big) - \frac{\partial}{\partial x}\big(\mu x f(x, t)\big) \]
\[ = \frac{\sigma^2}{2}\frac{\partial}{\partial x}\Big(2x f(x, t) + x^2\frac{\partial f}{\partial x}\Big) - \frac{\partial}{\partial x}\big(\mu x f(x, t)\big) \]
\[ = \frac{\sigma^2}{2}\Big(2 f(x, t) + 4x\frac{\partial f}{\partial x} + x^2\frac{\partial^2 f}{\partial x^2}\Big) - \mu\Big(x\frac{\partial f}{\partial x} + f\Big) \]
\[ = \frac{\sigma^2 x^2}{2}\frac{\partial^2 f}{\partial x^2} + \big(2\sigma^2 - \mu\big)x\frac{\partial f}{\partial x} + \big(\sigma^2 - \mu\big)f \]
\[ = \frac{\sigma^2}{2}\Big(\frac{\partial^2 f}{\partial z^2} - \frac{\partial f}{\partial z}\Big) + \big(2\sigma^2 - \mu\big)\frac{\partial f}{\partial z} + \big(\sigma^2 - \mu\big)f \]
\[ = \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial z^2} + \Big(\frac{3}{2}\sigma^2 - \mu\Big)\frac{\partial f}{\partial z} + \big(\sigma^2 - \mu\big)f. \]
Now take the Fourier transform of this equation with respect to z. Let
\[ \hat f(t;\omega) = \int_{\mathbb{R}} f(z, t)\,e^{i\omega z}\,dz, \]
so that
\[ \frac{d\hat f}{dt} = \Big(-\frac{\sigma^2\omega^2}{2} - i\Big(\frac{3}{2}\sigma^2 - \mu\Big)\omega + \sigma^2 - \mu\Big)\hat f. \]
Solving this first order ODE with initial condition \(\hat f(0;\omega) = e^{i\omega\log X}/X\) (the transform of f(x, 0) = δ(x − X), since δ(x − X) = δ(z − log X)/X) and inverting the transform gives
\[ f = e^{(\sigma^2-\mu)t}\,\frac{1}{\sigma X\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(z - \log X + (\tfrac{3}{2}\sigma^2 - \mu)t\big)^2}{2\sigma^2 t}\Bigg] \]
\[ = e^{(\sigma^2-\mu)t}\,\frac{1}{\sigma X\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(\log x/X + (\tfrac{3}{2}\sigma^2 - \mu)t\big)^2}{2\sigma^2 t}\Bigg] \]
\[ = e^{(\sigma^2-\mu)t}\,\frac{1}{\sigma X\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(\log x/X + (\tfrac{\sigma^2}{2} - \mu)t + \sigma^2 t\big)^2}{2\sigma^2 t}\Bigg] \]
\[ = e^{(\sigma^2-\mu)t}\,\frac{1}{\sigma X\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(\log x/X + (\tfrac{\sigma^2}{2} - \mu)t\big)^2}{2\sigma^2 t} - \log\frac{x}{X} - \Big(\frac{\sigma^2}{2} - \mu\Big)t - \frac{\sigma^2 t}{2}\Bigg] \]
\[ = \frac{1}{\sigma X\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(\log x/X + (\tfrac{\sigma^2}{2} - \mu)t\big)^2}{2\sigma^2 t}\Bigg]\exp\Big(-\log\frac{x}{X}\Big) \]
\[ = \frac{1}{\sigma x\sqrt{2\pi t}}\exp\Bigg[-\frac{\big(\log x/X + (\tfrac{\sigma^2}{2} - \mu)t\big)^2}{2\sigma^2 t}\Bigg], \]
which is the log-normal density of geometric Brownian motion.
4.12 Problem 11
The state x(t) of a system evolves in accordance with the Ornstein-Uhlenbeck process
\[ dx = -\alpha(x - \beta)\,dt + \sigma\,dW_t, \qquad x(0) = X, \]
where α, β and σ are constants. Find the density of x at time t > 0.
Solution 1: Intuitive approach
The SDE is reorganised into the form dx + α(x − β)dt = σdW_t. Ito's lemma is now applied to (x − β)e^{αt} to obtain
\[ d\big((x - \beta)e^{\alpha t}\big) = \sigma e^{\alpha t}\,dW_t \quad\rightarrow\quad (x - \beta)e^{\alpha t} = (X_0 - \beta) + \sigma\int_0^t e^{\alpha s}\,dW_s. \]
Thus x(t) is a Gaussian deviate with mean value β + (X₀ − β)e^{−αt} and variance
\[ \sigma^2\int_0^t e^{-2\alpha(t-s)}\,ds = \frac{\sigma^2\big(1 - e^{-2\alpha t}\big)}{2\alpha}. \]
The final conclusion is that x has probability density function
\[ f(x, t) = \frac{1}{\sigma}\sqrt{\frac{\alpha}{\pi\big(1 - e^{-2\alpha t}\big)}}\exp\Bigg[-\frac{\alpha\big(x - \beta - (X_0 - \beta)e^{-\alpha t}\big)^2}{\sigma^2\big(1 - e^{-2\alpha t}\big)}\Bigg]. \]
Solution 2: PDE approach
The density f(x, t) satisfies the Fokker-Planck equation
\[ \frac{\partial f}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial x^2} + \alpha\frac{\partial}{\partial x}\big((x - \beta)f(x, t)\big), \qquad f(x, 0) = \delta(x - X). \]
Let
\[ \hat f(t;\omega) = \int_{\mathbb{R}} f(x, t)\,e^{i\omega x}\,dx; \]
then by taking the Fourier transform of the partial differential equation, it follows that \(\hat f(t;\omega)\) satisfies the ordinary differential equation
\[ \frac{\partial\hat f}{\partial t} = -\frac{\sigma^2\omega^2}{2}\hat f + \alpha\int_{\mathbb{R}}\frac{\partial}{\partial x}\big((x - \beta)f(x, t)\big)e^{i\omega x}\,dx \]
\[ = -\frac{\sigma^2\omega^2}{2}\hat f - i\alpha\omega\int_{\mathbb{R}}(x - \beta)f(x, t)\,e^{i\omega x}\,dx = -\frac{\sigma^2\omega^2}{2}\hat f + i\alpha\beta\omega\hat f - \alpha\omega\frac{\partial\hat f}{\partial\omega} \]
with initial condition \(\hat f(0;\omega) = e^{i\omega X}\). Clearly ϕ(t, ω) = log f̂ satisfies the first order partial differential equation
\[ \frac{\partial\phi}{\partial t} + \alpha\omega\frac{\partial\phi}{\partial\omega} = -\frac{\sigma^2\omega^2}{2} + i\alpha\beta\omega \]
with initial condition ϕ(0, ω) = iωX. We may solve this equation by characteristic methods. Take η = log ω − αt and ξ = ω, then
\[ \frac{\partial\phi}{\partial t} + \alpha\omega\frac{\partial\phi}{\partial\omega} = -\alpha\frac{\partial\phi}{\partial\eta} + \alpha\omega\Big(\frac{\partial\phi}{\partial\xi} + \frac{1}{\omega}\frac{\partial\phi}{\partial\eta}\Big) = \alpha\xi\frac{\partial\phi}{\partial\xi} = -\frac{\sigma^2\xi^2}{2} + i\alpha\beta\xi. \]
This equation is now integrated to obtain
\[ \phi = -\frac{\sigma^2\xi^2}{4\alpha} + i\beta\xi + \psi(\eta) \]
where the initial condition yields
\[ i\omega X = -\frac{\sigma^2\omega^2}{4\alpha} + i\beta\omega + \psi(\log\omega) \quad\rightarrow\quad \psi(\eta) = ie^{\eta}(X - \beta) + \frac{\sigma^2 e^{2\eta}}{4\alpha}. \]
It therefore follows that
\[ \phi = -\frac{\sigma^2\omega^2}{4\alpha} + i\beta\omega + i(X - \beta)e^{\log\omega - \alpha t} + \frac{\sigma^2}{4\alpha}e^{2(\log\omega - \alpha t)} \]
\[ = -\frac{\sigma^2\omega^2}{4\alpha} + i\beta\omega + i(X - \beta)\omega e^{-\alpha t} + \frac{\sigma^2\omega^2}{4\alpha}e^{-2\alpha t} \]
\[ = -\frac{\sigma^2\omega^2}{4\alpha}\big(1 - e^{-2\alpha t}\big) + i\omega\big(\beta + (X - \beta)e^{-\alpha t}\big). \]
Thus \(\hat f(t;\omega) = \exp\phi\) corresponds to a Gaussian distribution with mean β + (X − β)e^{−αt} and variance σ²(1 − e^{−2αt})/2α, that is
\[ f(x, t) = \frac{1}{\sigma}\sqrt{\frac{\alpha}{\pi\big(1 - e^{-2\alpha t}\big)}}\exp\Bigg[-\frac{\alpha\big(x - \beta - (X - \beta)e^{-\alpha t}\big)^2}{\sigma^2\big(1 - e^{-2\alpha t}\big)}\Bigg]. \]
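A quadrature sanity check of this transition density (illustrative parameter values): it should carry unit mass and have mean β + (X − β)e^{−αt}.

```python
import numpy as np

# Trapezoid-rule check of the OU transition density derived above.
alpha, beta, sigma, X, t = 1.5, 0.5, 0.8, 2.0, 1.0   # illustrative values

m = beta + (X - beta) * np.exp(-alpha * t)           # claimed mean
decay = 1.0 - np.exp(-2.0 * alpha * t)
v = sigma**2 * decay / (2.0 * alpha)                 # claimed variance

x = np.linspace(m - 10 * np.sqrt(v), m + 10 * np.sqrt(v), 100_001)
f = (1.0 / sigma) * np.sqrt(alpha / (np.pi * decay)) \
    * np.exp(-alpha * (x - m) ** 2 / (sigma**2 * decay))

dx = x[1] - x[0]
xf = x * f
total = float(0.5 * np.sum(f[1:] + f[:-1]) * dx)     # integral of f
mean = float(0.5 * np.sum(xf[1:] + xf[:-1]) * dx)    # integral of x f
```

Both the normalisation and the first moment match the Gaussian claims.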
4.13 Problem 12
Cox, Ingersoll and Ross proposed that the instantaneous interest rate r(t) should follow the stochastic differential equation
\[ dr = \alpha(\theta - r)\,dt + \sigma\sqrt{r}\,dW, \qquad r(0) = r_0, \]
where dW is the increment of a Wiener process and α, θ and σ are constant parameters. Show that this equation has associated transitional probability density function
\[ f(t, r) = c\Big(\frac{v}{u}\Big)^{q/2} e^{-(\sqrt{u}-\sqrt{v})^2}\,e^{-2\sqrt{uv}}\,I_q\big(2\sqrt{uv}\big) \]
where I_q(x) is the modified Bessel function of the first kind of order q and the functions c, u, v and the parameter q are defined by
\[ c = \frac{2\alpha}{\sigma^2\big(1 - e^{-\alpha(t - t_0)}\big)}, \qquad u = c\,r_0\,e^{-\alpha(t - t_0)}, \qquad v = c\,r, \qquad q = \frac{2\alpha\theta}{\sigma^2} - 1. \]
Solution
The transitional probability density function f(t, r) for the stochastic differential equation
\[ dr = \alpha(\theta - r)\,dt + \sigma\sqrt{r}\,dW \]
satisfies the Fokker-Planck equation
\[ \frac{\partial f}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2}{\partial r^2}\big(rf\big) - \alpha\frac{\partial}{\partial r}\big((\theta - r)f\big). \]
Let \(\hat f(t;\omega) = \int_0^{\infty} f(t, r)\,e^{-\omega r}\,dr\) be the Laplace transform of f, and let ϕ(t, ω) = log f̂(t; ω); then clearly ϕ is the solution of the partial differential equation
\[ \frac{\partial\phi}{\partial t} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\frac{\partial\phi}{\partial\omega} = -\omega\alpha\theta \]
with initial condition ϕ(0, ω) = log f̂(0; ω) = −ωR, where R = r₀. Take ξ = ω and η = log ω − log(2α + σ²ω) − αt, then
\[ \frac{\partial\phi}{\partial t} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\frac{\partial\phi}{\partial\omega} = -\alpha\frac{\partial\phi}{\partial\eta} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\Big[\frac{\partial\phi}{\partial\xi} + \Big(\frac{1}{\omega} - \frac{\sigma^2}{2\alpha + \sigma^2\omega}\Big)\frac{\partial\phi}{\partial\eta}\Big] \]
\[ = -\alpha\frac{\partial\phi}{\partial\eta} + \frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\frac{\partial\phi}{\partial\xi} + \frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\cdot\frac{2\alpha}{\omega(2\alpha + \sigma^2\omega)}\frac{\partial\phi}{\partial\eta} = \frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\frac{\partial\phi}{\partial\xi}. \]
Therefore ϕ satisfies the partial differential equation
\[ \frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\frac{\partial\phi}{\partial\xi} = -\omega\alpha\theta \quad\rightarrow\quad \frac{\partial\phi}{\partial\xi} = -\frac{2\alpha\theta}{\sigma^2\xi + 2\alpha}. \]
Thus the general solution for ϕ is
\[ \phi = -\frac{2\alpha\theta}{\sigma^2}\log\big(\sigma^2\xi + 2\alpha\big) + \psi(\eta), \]
where the initial condition gives
\[ -\omega R = -\frac{2\alpha\theta}{\sigma^2}\log\big(\sigma^2\omega + 2\alpha\big) + \psi\big(\log\omega - \log(2\alpha + \sigma^2\omega)\big). \]
Let λ = log(ω/(2α + σ²ω)), then
\[ \frac{\omega}{2\alpha + \sigma^2\omega} = e^{\lambda} \quad\rightarrow\quad \omega = \frac{2\alpha}{e^{-\lambda} - \sigma^2}. \]
Thus
\[ \psi(\lambda) = \frac{-2\alpha R}{e^{-\lambda} - \sigma^2} + \frac{2\alpha\theta}{\sigma^2}\log\Big(\frac{2\alpha e^{-\lambda}}{e^{-\lambda} - \sigma^2}\Big). \]
Bearing in mind that the task is to compute \(\hat f(t;\omega) = e^{\phi}\), it follows that
\[ \hat f(t;\omega) = \big(\sigma^2\omega + 2\alpha\big)^{-(1+q)}\,e^{\psi(\eta)} = \big(\sigma^2\omega + 2\alpha\big)^{-(1+q)}\Big(\frac{2\alpha}{1 - \sigma^2 e^{\eta}}\Big)^{q+1}\exp\Big(\frac{-2\alpha R e^{\eta}}{1 - \sigma^2 e^{\eta}}\Big) \]
\[ = \Big(\frac{2\alpha}{(\sigma^2\omega + 2\alpha)(1 - \sigma^2 e^{\eta})}\Big)^{q+1}\exp\Big(\frac{-2\alpha R e^{\eta}}{1 - \sigma^2 e^{\eta}}\Big), \]
where the parameter q = 2αθ/σ² − 1. Now
\[ e^{\eta} = \frac{\omega}{2\alpha + \sigma^2\omega}\,e^{-\alpha t}, \]
which further simplifies \(\hat f(t;\omega)\) to obtain
\[ \hat f(t;\omega) = \Big(\frac{2\alpha}{2\alpha + \sigma^2\omega(1 - e^{-\alpha t})}\Big)^{q+1}\exp\Big(\frac{-2\alpha R\omega e^{-\alpha t}}{2\alpha + \sigma^2\omega(1 - e^{-\alpha t})}\Big). \]
Let the functions c, u, v be defined by
\[ c = \frac{2\alpha}{\sigma^2\big(1 - e^{-\alpha t}\big)}, \qquad u = cRe^{-\alpha t}, \qquad v = cr, \]
then
\[ \hat f(t;\omega) = \Big(\frac{c}{c + \omega}\Big)^{q+1}\exp\Big(\frac{-cR\omega e^{-\alpha t}}{c + \omega}\Big) = \Big(\frac{c}{c + \omega}\Big)^{q+1}\exp\Big(\frac{-u\omega}{c + \omega}\Big) = \Big(\frac{c}{c + \omega}\Big)^{q+1} e^{-u}\exp\Big(\frac{cu}{c + \omega}\Big). \]
Since \(\hat f(t;\omega)\) is a function of (c + ω), it follows from the shift property of the Laplace transform that f(t, r) = e^{−cr} g(r), where g is the function whose Laplace transform is
\[ \hat g(\omega) = \Big(\frac{c}{\omega}\Big)^{q+1} e^{-u}\exp\Big(\frac{cu}{\omega}\Big). \]
To complete this calculation, we compute the Laplace transform of \(r^{q/2} I_q(2\sqrt{ucr})\), where I_q(x) is the modified Bessel function of argument x. The result is
\[ \mathcal{L}\big[r^{q/2} I_q(2\sqrt{ucr});\,\omega\big] = \int_0^{\infty} r^{q/2}\,I_q(2\sqrt{ucr})\,e^{-\omega r}\,dr \]
\[ = \int_0^{\infty} r^{q/2}\,\frac{(2\sqrt{ucr})^q}{2^q\,\Gamma(q+1)}\Bigg(\sum_{k=0}^{\infty}\frac{(ucr)^k}{k!\,(q+1)_k}\Bigg)e^{-\omega r}\,dr \]
\[ = \frac{(uc)^{q/2}}{\Gamma(q+1)}\sum_{k=0}^{\infty}\frac{(uc)^k}{k!\,(q+1)_k}\int_0^{\infty} r^{q+k}\,e^{-\omega r}\,dr \]
\[ = \frac{(uc)^{q/2}}{\Gamma(q+1)}\sum_{k=0}^{\infty}\frac{(uc)^k}{k!\,(q+1)_k}\cdot\frac{\Gamma(k+q+1)}{\omega^{q+k+1}} \]
\[ = \frac{1}{c}\Big(\frac{u}{c}\Big)^{q/2}\Big(\frac{c}{\omega}\Big)^{q+1}\sum_{k=0}^{\infty}\frac{(uc/\omega)^k}{k!} = \frac{1}{c}\Big(\frac{u}{c}\Big)^{q/2}\Big(\frac{c}{\omega}\Big)^{q+1}\exp(uc/\omega). \]
Thus
\[ f(r, t) = c\Big(\frac{c}{u}\Big)^{q/2} e^{-u}\,e^{-cr}\,r^{q/2}\,I_q(2\sqrt{ucr}) = c\,e^{-u}\,e^{-cr}\Big(\frac{cr}{u}\Big)^{q/2} I_q(2\sqrt{ucr}) = c\Big(\frac{v}{u}\Big)^{q/2} e^{-(\sqrt{u}-\sqrt{v})^2}\,e^{-2\sqrt{uv}}\,I_q(2\sqrt{uv}) \]
when cr is replaced by v, since e^{−u−v} = e^{−(√u−√v)²} e^{−2√(uv)}.
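The density just obtained can be checked numerically. The sketch below evaluates I_q by its power series (adequate for moderate arguments) and verifies by the trapezoid rule that the density integrates to one and reproduces the mean θ + (r₀ − θ)e^{−αt}; all parameter values are illustrative.

```python
import numpy as np
from math import lgamma

def iv_series(q, x, kmax=120):
    """Modified Bessel function I_q(x) from its power series (fine for x < ~40)."""
    s = np.zeros_like(x)
    logx2 = np.log(x / 2.0)
    for k in range(kmax):
        s += np.exp((2 * k + q) * logx2 - lgamma(k + 1) - lgamma(k + q + 1))
    return s

alpha, theta, sigma, r0, t = 1.0, 0.5, 0.4, 0.3, 1.0   # illustrative; t0 = 0
c = 2 * alpha / (sigma**2 * (1 - np.exp(-alpha * t)))
u = c * r0 * np.exp(-alpha * t)
q = 2 * alpha * theta / sigma**2 - 1

r = np.linspace(1e-8, 3.0, 60_000)
v = c * r
f = c * (v / u) ** (q / 2) * np.exp(-u - v) * iv_series(q, 2 * np.sqrt(u * v))

dr = r[1] - r[0]
total = float(0.5 * np.sum(f[1:] + f[:-1]) * dr)             # mass
mean = float(0.5 * np.sum((r * f)[1:] + (r * f)[:-1]) * dr)  # first moment
```

The numerical mass is one and the mean agrees with the known CIR conditional mean.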
4.14 Problem 13
Consider the problem of numerically integrating the stochastic differential equation
\[ dx = a(t, x)\,dt + b(t, x)\,dW, \qquad x(0) = X_0. \]
Develop an iterative scheme to integrate this equation over the interval [0, T] using the Euler-Maruyama algorithm.
It is well-known that the Euler-Maruyama algorithm has strong order of convergence one half and weak order of convergence one. Explain what programming strategy one would use to demonstrate these claims.
Solution
Let N be the number of steps to be taken in advancing the solution from t = 0 to t = T, so that ∆t = T/N. The standard deviation of each Wiener increment is therefore σ = √∆t, and the iterative scheme is captured by the pseudo-code
1. Initialise x at X₀.
2. Iterate N times: x → x + a(k∆t, x)∆t + σ b(k∆t, x)ξ, where ξ ∼ N(0, 1).
3. The final value of x is x(T).
Note that there is no requirement to store intermediate values of x.
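The pseudo-code translates directly into a short routine; the sketch below assumes NumPy, and the drift and diffusion used in the example are illustrative choices.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, N, rng):
    """Integrate dx = a(t,x) dt + b(t,x) dW over [0, T] in N steps of size T/N."""
    dt = T / N
    x = np.asarray(x0, dtype=float)
    for k in range(N):
        xi = rng.standard_normal(x.shape)          # xi ~ N(0, 1) per path
        x = x + a(k * dt, x) * dt + b(k * dt, x) * np.sqrt(dt) * xi
    return x                                        # x(T); intermediates discarded

# Example (illustrative SDE): dx = -x dt + 0.1 dW over [0, 1], many paths at once.
rng = np.random.default_rng(0)
xT = euler_maruyama(lambda t, x: -x, lambda t, x: 0.1, np.full(10_000, 1.0), 1.0, 500, rng)
```

Passing an array of initial states integrates many independent paths in one call, which is the form needed for the convergence study described next.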
Strategy. First it is necessary to simulate the integration process a large number of times, say M times, by which is meant that x(T) will be simulated M times with each simulation based on an underlying realisation of the Wiener process. Let x_k^{∆t}(T) be the value of x(T) returned by the kth simulation when using the step size ∆t. In computing x_k^{∆t}(T) we choose a very fine resolution of the interval [0, T], say involving N small time steps, and construct the corresponding series of N Wiener increments. These increments, appropriately packaged to build the Wiener increments ∆W_{∆t} over intervals of duration ∆t, define the realisation of the underlying Wiener process needed to compute x_k^{∆t}(T) and, of course, by integrating at the finest resolution, i.e. with ∆t = T/N, one computes numerically one's best estimate of the true value of x_k(T) against which numerical error will be measured. Needless to say, if the SDE has an exact solution expressed in terms of W(T), then this solution would be taken as the exact solution against which error is to be estimated. For each realisation the error e_k = x_k^{∆t}(T) − x_k(T) is computed. To test for strong convergence one plots log(Σ_{k=1}^M |e_k|/M) against log ∆t, and to test for weak convergence one plots log|Σ_{k=1}^M e_k/M| against log ∆t. The former plot will be an approximate straight line with gradient one half and the latter an approximate straight line of gradient one.
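The strategy above can be sketched as follows, using dx = μx dt + σx dW, whose exact solution x(T) = x₀ exp((μ − σ²/2)T + σW(T)) serves as the reference value; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, x0, T = 0.5, 0.4, 1.0, 1.0
M, n_fine = 10_000, 256                        # realisations, finest resolution

dt_fine = T / n_fine
dW_fine = np.sqrt(dt_fine) * rng.standard_normal((M, n_fine))
x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW_fine.sum(axis=1))

dts, strong_err, weak_err = [], [], []
for skip in (1, 2, 4, 8, 16):                  # step sizes dt = skip * dt_fine
    n = n_fine // skip
    dW = dW_fine.reshape(M, n, skip).sum(axis=2)   # repackage the fine increments
    dt = skip * dt_fine
    x = np.full(M, x0)
    for k in range(n):                         # Euler-Maruyama on each realisation
        x = x + mu * x * dt + sigma * x * dW[:, k]
    e = x - x_exact
    strong_err.append(np.mean(np.abs(e)))      # E|e_k|   -> gradient ~ 1/2
    weak_err.append(abs(np.mean(e)))           # |E e_k|  -> gradient ~ 1
    dts.append(dt)

p_strong = np.polyfit(np.log(dts), np.log(strong_err), 1)[0]
p_weak = np.polyfit(np.log(dts), np.log(weak_err), 1)[0]
```

The least-squares gradients of the two log-log plots come out close to ½ and 1, as claimed.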
4.15 Problem 14
If the state of a system satisfies the stochastic differential equation
\[ dx = a(x, t)\,dt + b(x, t)\,dW, \qquad x(0) = X, \]
write down the initial value problem satisfied by f(x, t), the probability density function of x at time t > 0. Determine the initial value problem satisfied by the cumulative distribution function of x.
Solution
Let f(x, t) be the probability density function corresponding to the distribution of the states of the stochastic differential equation dx = a(x, t)dt + b(x, t)dW, x(0) = X; then f(x, t) satisfies the partial differential equation
\[ \frac{\partial f}{\partial t} = \frac{1}{2}\frac{\partial^2(b^2 f)}{\partial x^2} - \frac{\partial(af)}{\partial x}, \qquad f(x, 0) = \delta(x - X). \]
Let F(x, t) be the cumulative distribution function of f(x, t), then
\[ F(x, t) = \int_{-\infty}^{x} f(u, t)\,du \quad\rightarrow\quad f(x, t) = \frac{\partial F}{\partial x}. \]
The equation satisfied by F(x, t) is therefore
\[ \frac{\partial^2 F}{\partial x\,\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\Big(b^2\frac{\partial F}{\partial x}\Big) - \frac{\partial}{\partial x}\Big(a\frac{\partial F}{\partial x}\Big) \quad\rightarrow\quad \frac{\partial}{\partial x}\Big[\frac{\partial F}{\partial t} - \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) + a\frac{\partial F}{\partial x}\Big] = 0. \]
Thus it follows that F satisfies
\[ \frac{\partial F}{\partial t} = \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) - a\frac{\partial F}{\partial x} + \psi(t). \]
However,
\[ F(x, t)\to 1, \qquad \frac{\partial F}{\partial x} = f\to 0, \qquad \frac{\partial^2 F}{\partial x^2} = \frac{\partial f}{\partial x}\to 0 \qquad\text{as } x\to\infty, \]
and therefore ψ(t) = 0. In conclusion, F(x, t) satisfies the partial differential equation
\[ \frac{\partial F}{\partial t} = \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) - a\frac{\partial F}{\partial x} \]
with initial condition
\[ F(x, 0) = \int_{-\infty}^{x} f(u, 0)\,du = \begin{cases} 0 & x < X \\ 1/2 & x = X \\ 1 & x > X. \end{cases} \]
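For constant coefficients a and b the initial value problem for F has the closed-form solution F(x, t) = Φ((x − X − at)/(b√t)), which gives a quick finite-difference check of the PDE just derived (parameter values illustrative):

```python
import numpy as np
from math import erf

# Verify F_t = (1/2) d/dx (b^2 F_x) - a F_x for constant a, b, where
# F(x, t) = Phi((x - X - a t)/(b sqrt(t))) is the Gaussian CDF solution.
a, b, X = 0.3, 0.7, 0.0

Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / np.sqrt(2.0))))
F = lambda x, t: Phi((x - X - a * t) / (b * np.sqrt(t)))

x = np.linspace(-2.0, 2.0, 41)
t, h, k = 1.0, 1e-4, 1e-4
F_t = (F(x, t + k) - F(x, t - k)) / (2 * k)
F_x = (F(x + h, t) - F(x - h, t)) / (2 * h)
F_xx = (F(x + h, t) - 2 * F(x, t) + F(x - h, t)) / h**2
residual = F_t - 0.5 * b**2 * F_xx + a * F_x      # should vanish identically
```

The finite-difference residual is zero to within truncation and rounding error.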
4.16 Problem 15
Compute the stationary densities for the following stochastic differential equations.
(a) dX = (β − αX)dt + σ√X dW
(b) dX = −α tan X dt + σ dW
(c) dX = [(θ₁ − θ₂)cosh(X/2) − (θ₁ + θ₂)sinh(X/2)]cosh(X/2)dt + 2cosh(X/2)dW
(d) dX = (α/X)dt + dW
(e) dX = (α/X − X)dt + dW
Solution
The stationary density f(x) satisfies
\[ \frac{1}{2}\frac{d(gf)}{dx} - \mu f = 0 \quad\rightarrow\quad \frac{d(gf)}{dx} = \frac{2\mu}{g}(gf) \quad\rightarrow\quad \frac{d\log(gf)}{dx} = \frac{2\mu}{g} \quad\rightarrow\quad f(x) = \frac{A}{g}\exp\Big(\int\frac{2\mu}{g}\,dx\Big), \]
where A is a constant which takes the value which ensures that f integrates to one.
(a) Here μ = β − αx and g(x) = σ²x. Thus
\[ f(x) = \frac{A}{\sigma^2 x}\exp\Big(\int\frac{2(\beta - \alpha x)}{\sigma^2 x}\,dx\Big) = \frac{A}{\sigma^2 x}\exp\Big(\int\Big(\frac{2\beta}{\sigma^2 x} - \frac{2\alpha}{\sigma^2}\Big)dx\Big) = \frac{A}{\sigma^2}\,x^{2\beta/\sigma^2 - 1}\,e^{-2\alpha x/\sigma^2}. \]
(c) Here μ = [(θ₁ − θ₂)cosh(x/2) − (θ₁ + θ₂)sinh(x/2)]cosh(x/2) and g(x) = 4cosh²(x/2). Thus
\[ f(x) = \frac{A}{4\cosh^2(x/2)}\exp\Big(\int\frac{(\theta_1 - \theta_2)\cosh(x/2) - (\theta_1 + \theta_2)\sinh(x/2)}{2\cosh(x/2)}\,dx\Big) \]
\[ = \frac{A}{4\cosh^2(x/2)}\exp\Big(\int\Big(\frac{\theta_1 - \theta_2}{2} - \frac{\theta_1 + \theta_2}{2}\frac{\sinh(x/2)}{\cosh(x/2)}\Big)dx\Big) \]
\[ = \frac{A}{4\cosh^2(x/2)}\exp\Big(\frac{(\theta_1 - \theta_2)x}{2} - (\theta_1 + \theta_2)\log\cosh(x/2)\Big) = \frac{A}{4\cosh^{2+\theta_1+\theta_2}(x/2)}\exp\Big(\frac{(\theta_1 - \theta_2)x}{2}\Big). \]
(e) Here μ = α/x − x and g(x) = σ². Thus
\[ f(x) = \frac{A}{\sigma^2}\exp\Big(\int\frac{2(\alpha/x - x)}{\sigma^2}\,dx\Big) = \frac{A}{\sigma^2}\exp\Big(\frac{2\alpha}{\sigma^2}\log x - \frac{x^2}{\sigma^2}\Big) = \frac{A}{\sigma^2}\,x^{2\alpha/\sigma^2}\,e^{-x^2/\sigma^2}. \]
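The general stationary-density formula can be verified numerically; the sketch below checks case (a), whose stationary density is a Gamma density, against the condition ½ d(gf)/dx − μf = 0 using finite differences, with illustrative parameter values.

```python
import numpy as np
from math import lgamma

# Case (a): mu = beta - alpha x, g = sigma^2 x, stationary density
# f(x) ~ x^{2 beta/sigma^2 - 1} e^{-2 alpha x/sigma^2}  (a Gamma density).
alpha, beta, sigma = 1.0, 2.0, 0.8          # illustrative values

shape, rate = 2 * beta / sigma**2, 2 * alpha / sigma**2
logA = shape * np.log(rate) - lgamma(shape)  # Gamma normalising constant

x = np.linspace(0.05, 10.0, 2001)
f = np.exp(logA + (shape - 1) * np.log(x) - rate * x)

g = sigma**2 * x                             # diffusion g(x)
mu = beta - alpha * x                        # drift mu(x)
residual = 0.5 * np.gradient(g * f, x) - mu * f
```

The residual of the stationarity condition vanishes to finite-difference accuracy across the grid.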
4.17 Discussion of Results
The results obtained above in solving the multidimensional Langevin stochastic differential equation extended by a time-delayed term, using Ito's formula and partial differential equations, produce better mathematical models. It should be noted that stochastic differential equations have played a vital role in the construction of various models. Most SDEs, including Langevin equations, are converted into a stochastic integral, which facilitates modelling.
Chapter 5
5.2 Conclusion
In this work, we studied the multidimensional Langevin stochastic differential equation extended by a time-delayed term. We have presented the techniques, and problems and solutions were obtained for different mathematical models. Different methods were used in the course of constructing the various models, and the better approaches were identified.
5.3 Recommendations
This dissertation was based on the multidimensional Langevin stochastic differential equation extended by a time-delayed term. The techniques were well defined, the problems and solutions were well established, and the mathematical models were constructed following standard practice. It is recommended that future investigations pursue the following direction: in principle, this work may be extended to any other type of differential equation.
Bibliography
[1] Herbert R Bailey and Michael Z Williams. Some results on the differential-
difference equation. Journal of Mathematical Analysis and Applications,
15(3):569–587, 1966.
[2] Harvey Thomas Banks. Modeling and control in the biomedical sciences, volume 6. Springer Science & Business Media, 2013.
[3] Alfredo Bellen and Marino Zennaro. Numerical methods for delay differential equations. Oxford University Press, 2013.
[5] C Chicone, Sergei M Kopeikin, Bahram Mashhoon, and David G Retzloff. Delay
equations and radiation damping. Physics Letters A, 285(1-2):17–26, 2001.
[9] Albert Einstein et al. On the motion of small particles suspended in liquids at rest required by the molecular-kinetic theory of heat. Annalen der Physik, 17(549-560):208, 1905.
[10] K Gopalsamy. A periodic partial integrodifferential equation. Houston Journal of Mathematics, 18(4), 1992.
[11] Henryk Gorecki, Stanislaw Fuksa, Piotr Grabowski, and Adam Korytowski.
Analysis and synthesis of time delay systems. 1989.
[13] Aristide Halanay. Differential equations: Stability, oscillations, time lags, volume 23. Academic Press, 1966.
[15] N. D. Hayes. Roots of the transcendental equation associated with a certain difference-differential equation. Journal of the London Mathematical Society, 1(3):226–232, 1950.
[16] Kiyosi Itô. On the ergodicity of a certain stationary process. Proceedings of the Imperial Academy, 20(2):54–55, 1944.
[17] Saad Idrees Jumaa. Solving Linear First Order Delay Differential Equations by MOC and Steps Method Comparing with Matlab Solver. PhD thesis, Near East University, Nicosia, 2017.
[19] NN Krasovskii. Some problems in the theory of motion stability. Moscow, USSR:
Fizmatgiz, 1959.
[20] Yang Kuang. Delay differential equations: with applications in population dynamics. Academic Press, 1993.
[24] Tatyana Luzyanina, Koen Engelborghs, Stephan Ehl, Paul Klenerman, and Gennady Bocharov. Low level viral persistence after infection with LCMV: a quantitative insight through numerical bifurcation analysis. Mathematical Biosciences, 173(1):1–23, 2001.
[27] Klaus Morgenthal. Über das asymptotische Verhalten der Lösungen einer linearen Differentialgleichung mit Nachwirkung. Zeitschrift für Analysis und ihre Anwendungen, 4(2):107–124, 1985.
[30] Patrick W Nelson and Alan S Perelson. Mathematical analysis of delay differential equation models of HIV-1 infection. Mathematical Biosciences, 179(1):73–94, 2002.
[31] Sim Borisovich Norkin et al. Introduction to the theory and application of differential equations with deviating arguments. Academic Press, 1973.
[33] Gerhard Manfred Schoen. Stability and stabilization of time-delay systems. PhD
thesis, ETH Zurich, 1995.
[34] Marian Smoluchowski. The kinetic theory of brownian molecular motion and
suspensions. Ann. Phys, 21:756–780, 1906.
[35] Lionel Weicker, Thomas Erneux, Otti d'Huys, Jan Danckaert, Maxime Jacquot, Yanne Chembo, and Laurent Larger. Slow-fast dynamics of a time-delayed electro-optic oscillator. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1999):20120459, 2013.
[36] Edward M Wright. A non-linear difference-differential equation. 1955.
[37] Sun Yi and A Galip Ulsoy. Solution of a system of linear delay differential equations using the matrix Lambert function. In 2006 American Control Conference, pages 6–pp. IEEE, 2006.