
On Multidimensional Langevin Stochastic Differential Equation Extended By A Time-Delayed Term

Ngwayi Marcel Ngwayi

July 16, 2022


DECLARATION

I, Ngwayi Marcel Ngwayi, registration number UBa20SP105, a student of the Department of Mathematics and Computer Science, in the Faculty of Science, hereby declare that this work titled "On Multidimensional Langevin Stochastic Differential Equation Extended By A Time-Delayed Term" is my original work. It has not been presented in any application for a degree or any academic pursuit. I have sincerely acknowledged all borrowed ideas, nationally and internationally, through citations.

Signature of candidate......... Date: ........

CERTIFICATION

This is to certify that the research entitled "On Multidimensional Langevin Stochastic Differential Equation Extended By A Time-Delayed Term" is the original work of Ngwayi Marcel Ngwayi.
This work is submitted in partial fulfillment of the requirements for the award of a Master's Degree (M.Sc) in Probability, in the Faculty of Science of The University of Bamenda.

Supervisor Head of Department

Prof Shu Felix Che Prof Shu Felix Che

SIGNATURE.................... SIGNATURE...................

DEDICATION

I dedicate this work to my parents, Mama Bridget Kibong and Pa Ngwayi Cletus Tatah (late).

ACKNOWLEDGEMENTS

My genuine appreciation goes to my supervisor, Prof Shu Felix Che, for his endeavor and assiduity in the realization of this project. He has been a very instrumental director who provided substantial materials whenever the need arose. I am also indebted to all the teaching staff and administration of the Faculty of Science, Department of Mathematics and Computer Science, for the enormous sacrifices tendered to us, especially during these very difficult moments of crisis.

To my parents, Mrs Kibong Bridget and Mr Ngwayi Cletus Tatah (late), the rest of my family members and siblings, and my wife Tabah Belinda and kids, for their moral, financial and material support for my education.

I will not forget my friends, especially all my classmates of the M.Sc who have journeyed with me during this difficult moment of crisis, for their constant sacrifices to enable us to be elevated, especially Guy, who assisted me during my research.

Finally, to God the Father Almighty, who granted me the wisdom, strength, knowledge and ability to put this work through.

ABSTRACT

Many physical problems in the engineering, biological, biometrical and economical sciences have been modeled in the form of multidimensional Langevin stochastic differential equations extended by a time-delayed term. For many decades the application of stochastic differential equations (SDEs) has been a challenging impetus in the aforementioned fields of science. In this dissertation, we present a variety of examples of solving Langevin stochastic differential equations using several approaches, some of which include Itô's formula, theorems, lemmas and partial differential equations (PDEs). Necessary and sufficient conditions are given under which a stationary solution exists. In this case, it is unique and Gaussian. Its covariance function and its spectral density are studied.
Keywords: Multidimension, Langevin Stochastic Differential Equation, Time-Delayed Term, Stationary Solutions, Covariance Function, Spectral Density.

Résumé

De nombreux problèmes physiques en sciences de l'ingénieur, biologiques, biométriques et économiques ont été modélisés sous la forme d'équations différentielles stochastiques de Langevin multidimensionnelles étendues par un terme à retard. Depuis plusieurs décennies, l'application des équations différentielles stochastiques (EDS) est une impulsion stimulante dans les domaines des sciences susmentionnés. Dans ce mémoire, nous avons présenté une variété d'exemples de résolution d'équations différentielles stochastiques de Langevin utilisant de nombreuses approches, dont certaines incluent la formule d'Itô, des théorèmes, des lemmes et des équations aux dérivées partielles (EDP). Les conditions nécessaires et suffisantes sont données sous lesquelles la solution stationnaire existe. Dans ce cas, elle est unique et gaussienne. Sa fonction de covariance et sa densité spectrale sont étudiées.
Mots clés : Multidimensionnel, Équation différentielle stochastique de Langevin, Terme à retard, Solutions stationnaires, Fonction de covariance, Densité spectrale.

Contents

DECLARATION i

DEDICATION iii

ACKNOWLEDGEMENTS iv

ABSTRACT v

Résumé vi

Contents viii

1 Introduction 1
1.1 Statement Of Problem . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Aims and objectives of the study . . . . . . . . . . . . . . . . . . . . 2
1.3 Project Rationale . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Potential Impact of the Research . . . . . . . . . . . . . . . . . . . . 3

2 LITERATURE REVIEW 4
2.1 History of Stochastic Differential Equations (SDE’s) . . . . . . . . . . 4
2.2 History of Delay Differential Equations . . . . . . . . . . . . . . . . . 5
2.3 Delay Differential Equations . . . . . . . . . . . . . . . . . . . . . . . 6
2.3.1 Classification of (FDEs) and (RFDEs) . . . . . . . . . . . . . 8
2.3.2 Classification of Delay Differential Equations (DDEs) . . . . . 10
2.3.3 Types of Delay Differential Equation and its Applications . . . 11
2.3.4 Linear Delay Differential Equations (LDDEs) . . . . . . . . . 11
2.4 Uniqueness and Existence of DDEs . . . . . . . . . . . . . . . . . . . 12

3 Research Methodology 15
3.1 Multidimensional Langevin Stochastic Differential Equation . . . . . 15
3.2 Stationary Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 The covariance function of the stationary solution . . . . . . . . . . . 24
3.4 The Stationary Solution For (ar, br) Near δS . . . . . . . . . . . . . 26
3.5 APPENDIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.6 I Asymptotic Behaviour of the Solutions of (3.2) . . . . . . . . . . . . 28
3.7 The Function v0 (a, b, r) . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Examples and Applications 34


4.1 Test Examples for Multidimensional Langevin Stochastic Differ-
ential Equations Extended By A Time-Delayed Term . . . . . . . . . 34
4.2 Problem 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 Problem 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.4 Problem 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.5 Problem 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.6 Problem 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.7 Problem 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.8 Problem 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.9 Problem 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.10 Problem 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.11 Problem 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.12 Problem 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.13 Problem 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.14 Problem 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.15 Problem 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.16 Problem 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.17 Discussion of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

5 Summary of Findings, Conclusion And Recommendations 59


5.1 Summary of Major Findings . . . . . . . . . . . . . . . . . . . . . . . 59
5.2 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
5.3 Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

Bibliography 60

Chapter 1

Introduction

A common challenge raised by some scholars here is: why do researchers not concentrate on the study of ordinary differential equations (ODEs), stochastic differential equations (SDEs) or partial differential equations (PDEs), instead of studying a time-delayed term? After all, we are familiar with the aforementioned differential equations, we have more information about them, and they are much easier to handle. An attempt to answer this challenge is as follows.
It can be attributed to the crucial impact of the time delay on every activity related to human life, encompassing a variety of domains and applications such as the economical, biological, ecological and biometrical sciences, distributed networks and many others; see Banks[2], Cushing[7] or Hale[14] for applications and further references. There are a variety of real-life situations related to time-delayed terms in our society today. For example, the cultivation of crops and their normal period of harvest can be delayed by natural or artificial causes: drought, or the nature of the chemicals used, can slow down the growth of the crops so that their normal period of harvest is delayed. A crisis can also delay many activities scheduled within a normal period, so that the period is extended. Therefore the time-delayed term is a vital component of any dynamic process in the life sciences.
In general, there are a variety of delay differential equations, such as linear delay differential equations (LDDEs), non-linear delay differential equations (NDDEs), stochastic delay differential equations (SDDEs), etc. In our research we shall study the multidimensional Langevin stochastic differential equation extended by a time-delayed term:
dX(t) = ∑_{i=1}^{N} [aᵢX(t) + bᵢX(t − r)] dt + dW(t),  t ≥ 0    (1.1)

X(s) = Z(s),  s ∈ [−r, 0]

where aᵢ, bᵢ and r are real constants, r > 0, and (Z(s), s ∈ [−r, 0]) is a given process.
Differential equations which include time-delayed terms are useful for stochastic modelling, e.g. in the biological, biometrical or economical sciences; see Banks[2], Cushing[7] or Hale[14] for applications and further references.
We shall be concerned with the explicit solution of (1.1) and with necessary and sufficient conditions on a, b and r such that a stationary solution X of (1.1) exists. In this case, we determine the function (if br ≠ 0). Moreover, we shall study the covariance function K of the stationary solution, deriving a differential equation for K, solving it on [−r, r] and showing that it tends to zero at an exponential rate. Contrary to the Ornstein-Uhlenbeck case, the covariance function may oscillate around zero. Equation (1.1) is a very special linear stochastic functional differential equation, and the results can be extended partially to more general cases, though these are not very easy to handle, where

0 = r₀ < r₁ < r₂ < r₃ < · · · < rₙ,  aᵢ ∈ ℝ

are fixed. (For special cases see Bailey, Williams[1].) Secondly, the solution of (1.1) already shows typical effects due to the presence of time-delayed terms; extensions of the results to higher dimensions and to other driving terms than W(t) are possible and will be presented elsewhere.
The following notations will be used: ℕ is the set of non-negative integers, ℝ the reals, ℝ₊ = [0, ∞), ℝ² = ℝ × ℝ, ℂ the set of complex numbers, i the imaginary unit. 1_A(·) denotes the indicator function of the set A, δA the boundary of A, and Lᵖ(c, d) stands for the linear space of real-valued functions p-integrable with respect to the Lebesgue measure. N(µ, σ²) denotes the normal distribution with expectation µ and variance σ².

1.1 Statement Of Problem


For many decades the application of stochastic differential equations has been a challenging impetus in the fields of science, engineering and technology, and as such needs to be addressed. It should be noted that there is a variety of methods to solve these problems, but due to high demand we can develop easier ways to solve them; as a result, we will apply a technique based on the multidimensional Langevin stochastic differential equation extended by a time-delayed term.

1.2 Aims and objectives of the study


This thesis focuses on how to find algebraic solutions of the multidimensional Langevin stochastic differential equation extended by a time-delayed term, for use in stochastic modelling such as in the biological, biometrical or economical sciences.

1.3 Project Rationale
Differential equations which include time-delayed terms are useful for stochastic modelling, for example in the biological, biometrical or economical sciences; see Banks[2], Cushing[7] or Hale[14] for applications and further references.

1.4 Potential Impact of the Research


At the end of this project, we expect that the research findings and results will be beneficial to those working on stochastic modelling, such as in the biological, biochemical or economical sciences in particular, and to the world at large as end users of research findings. As regards the aforementioned areas, we hope to come out with results that will be of great help and support to policy makers in decision making as far as the society is concerned.

• Stochastic differential equation (SDE) concepts, methods and processes should be well understood.

• The knowledge acquired from the study of stochastic differential equations will enable us to carry out collaborative research.

• It is our desire to come out with a thesis whose standard and level will be sufficient to fulfil one of the requirements for the award of a Master's degree of science (M.Sc) in Mathematics.

Chapter 2

LITERATURE REVIEW

Differential equations, known by the acronym DEs, play a central role in the application of mathematics to the natural sciences and engineering fields of study. However, when someone tries to find the solution of a DE, he will first try to determine which kind of DE is in question. In principle, we are familiar with many things in ordinary differential equations (ODEs) and likewise in partial differential equations (PDEs). The same cannot be said of special classes of DEs such as delay differential equations (DDEs) and stochastic differential equations (SDEs), which are our main objects in this topic. It should be noted that without background knowledge of DEs in general, it will be difficult to understand all aspects concerning SDEs and DDEs, and likewise this thesis in particular. Thus the main objective of this chapter is to give the reader an easy-to-comprehend background and history of DDEs and SDEs, to better serve the goal of this thesis, which is titled "On Multidimensional Langevin Stochastic Differential Equation Extended By A Time-Delayed Term".

2.1 History of Stochastic Differential Equations (SDEs)
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein[9] and Smoluchowski[34]. These early examples were linear stochastic differential equations, also called "Langevin" equations after the French physicist Langevin, describing the motion of a harmonic oscillator subjected to a random force[22].
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of the Japanese mathematician Kiyosi Itô, who introduced the concept of the stochastic integral and initiated the study of non-linear stochastic differential equations[16]. Another approach was later proposed by the Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus[32].
It shall be noted that the Langevin equation (LE) was first proposed in 1908 by Paul Langevin[22]. According to Paul Langevin, a Langevin equation is a stochastic differential equation describing how a system evolves when subject to a combination of deterministic and fluctuating ("random") forces:

dx/dt = µ(x, t) + σ(x, t)η(t)    [22]
where η(t) represents "random fluctuations". For such an equation, we must specify the characteristics of η, e.g. the type of distribution it is drawn from, its parameters, and its time correlations:

C(τ) = ⟨η(0)η(τ)⟩ = lim_{T→∞} (1/T) ∫₀ᵀ η(t)η(t + τ) dt

C should be a decreasing function of τ; a fast fall-off tells us that the fluctuations change quickly.
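The fall-off of C(τ) is easy to check numerically. The sketch below is our own illustration, not from the thesis (all names are ours): it draws a discretised white-noise signal η and estimates C(τ) by the time average defined above.

```python
import numpy as np

# Our own illustration (not from the thesis): estimate the time correlation
# C(tau) = <eta(0) eta(tau)> of a discretised white-noise signal by the
# long-time average (1/T) * integral_0^T eta(t) eta(t + tau) dt.
rng = np.random.default_rng(0)
n = 200_000
eta = rng.standard_normal(n)  # eta(t) ~ N(0, 1), drawn independently

def corr(signal, lag):
    """Time-average estimate of C(lag)."""
    if lag == 0:
        return float(np.mean(signal * signal))
    return float(np.mean(signal[:-lag] * signal[lag:]))

C0 = corr(eta, 0)  # ~ 1: the variance of the fluctuations
C1 = corr(eta, 1)  # ~ 0: white noise decorrelates after one step
```

For uncorrelated Gaussian noise the fall-off is immediate: the estimate C(0) is close to 1 while C(1) is already close to 0.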

Wiener Processes. The integral of η(t) is an unusual construct, which we begin by defining as the Wiener process Wₜ:

Wₜ = ∫₀ᵗ η(s) ds,  η(s) ∼ N(0, 1),  ⟨η(0)η(τ)⟩ = δ(τ)

Heuristically, a Wiener process Wₜ may be envisioned as a random walk where the step at time t is η(t), a random variable drawn from N(0, 1). The process is named after Norbert Wiener, who described its properties in the 1920s before going on to found cybernetics, an antecedent of systems biology. Wiener contributed to both World Wars, formalizing the notion of feedback in application to the development of anti-aircraft guns.
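The random-walk heuristic can be made concrete with a short simulation. The sketch below is our own (not part of the thesis) and assumes the standard discretisation in which each step over an interval dt is √dt · N(0, 1), so that Var(Wₜ) = t:

```python
import numpy as np

# Our own sketch of the heuristic above: build W_t as a random walk whose
# step over each interval dt is sqrt(dt) * N(0, 1), so that Var(W_t) = t.
rng = np.random.default_rng(1)
dt, n_steps, n_paths = 0.01, 1000, 2000
steps = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.cumsum(steps, axis=1)  # W[:, k] approximates W at time (k + 1) * dt

t_final = n_steps * dt               # 10.0
var_final = float(np.var(W[:, -1]))  # empirical Var(W_{t_final})
```

The empirical variance of W at the final time comes out close to t_final itself, which is the defining property Var(Wₜ) = t of the Wiener process.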

2.2 History of Delay Differential Equations


Researchers had been preoccupied with differential-integral equations, functional differential equations (FDEs) and difference differential equations for at least two centuries. The progress of human learning and reliance on automatic control systems after World War I gave birth to a different type of equation, named the delay differential equation (DDE). Over the last 60 years, researchers have been concerned with the theory of DDEs and FDEs. This theory has become an indispensable part of the glossary of any researcher who deals with particular applications such as biology, microbiology, heat flow, engineering mechanics, nuclear reactions, physiology, etc.[17]. Laplace and Condorcet are the pioneers of this study; it appeared in the 18th century[11]. The main stability theory of basic DDEs was elaborated by Pontryagin in 1942; after World War II, the theory and its applications grew rapidly. Bellman and Cooke are credited with writing significant works about DDEs in 1963[6]. DDE studies witnessed massive growth from 1950, resulting in the publication of many important works, such as Myshkis[28], Krasovskii[19], Bellman and Cooke[6], Halanay[13], Norkin[31], Hale[14], Yanushevski[37] in 1978, and Marshal[25]; these researches and publications have continued to this day in a variety of domains.

2.3 Delay Differential Equations


The more general kind of DE is called a functional differential equation (FDE); the delay differential equation is the simplest and maybe most natural class of functional differential equations[8]. If we look at various fields and their applications we see that time delays are normal ingredients of the dynamic processes of various life sciences, such as biology, economics, microbiology, ecology, distributed networks, mechanics, nuclear reactors, physiology, engineering systems, epidemiology and heat flow (Gopalsamy[10]), and "to ignore them is to ignore reality" (Kuang[20]). A delay differential equation (DDE) is of the form

u′(t) = g(t, u(t), u(t − β₁(t, u(t))), u(t − β₂(t, u(t))), . . .)

for t ≥ 0, where the delays βᵢ > 0, i = 1, 2, . . ., are commensurable physical quantities and may be constant. In a DDE the derivative at any time relies on the solution at previous times (and, in the situation of neutral equations, on the derivative at previous times); more generally, βᵢ = βᵢ(t, u(t)). An example of a familiar delay problem is remote control: images are sent to Earth and a signal is sent back. For the Moon, the time delay in the control loop is 2-10 s, and for Mars it is 40 minutes! (Erneux[35]). For many years ordinary differential equations were an essential tool of mathematical modelling. However, the delay has been ignored in ordinary differential equation models. A DDE model can be better than an ODE model because a DDE model can be used to approximate a high-dimensional model without delay by a lower-dimensional model with delay, the analysis of which is more easily carried out. This approach has been used extensively in the process control industry (Kolmanovskii and Myshkis[28]).

Figure 2.1: When the robot sends images to Earth
A DDE model depends on an initial function to determine a unique solution, because u(t) depends on the solution at prior times. It is therefore necessary to supply an initial auxiliary function, sometimes called the "history" function, before t = 0; in many models the auxiliary function is constant. Let β := max βᵢ.

Figure 2.2: The initial function defined over the interval [−β, 0] is mapped into a solution curve. The initial function segment ϕ(σ), σ ∈ [−β, 0], has to be specified, and at t = t₀ the function segment is u_{t₀}(σ), σ ∈ [−β, 0].
There are not many differences between the properties of delay differential equations and ordinary differential equations; sometimes the analytical methods of ODEs can be used for DDEs when it is possible to apply them. The order of a DDE is the highest derivative included in the equation (Driver[8]); in Table 2.1 we show some examples of the order of delay differential equations. Table 2.1: The order of DDE and ODE

We show the substantial differences between DDEs and ODEs in Table 2.2.

Table 2.2: Substantial differences between DDEs and ODEs

2.3.1 Classification of (FDEs) and (RFDEs)


In this section we introduce some nomenclature and definitions about DDEs that the reader will require in order to understand this topic well. As we said before, DDEs are a class of FDEs, so we will try to explain the relation between DDEs and FDEs. Suppose βmax = constant ∈ [0, ∞), and let u(t) be an n-dimensional variable portraying the behaviour of a process over the time period t ∈ [t₀ − βmax, t₁]. An FDE is formulated as follows. Let ψ₁(t) and ψ₂(t) be time-dependent sets of real numbers, ∀t ∈ [t₀, t₁]. Suppose that u is a continuous function on [t₀, t₁], and that u̇(t) for t ∈ [t₀, t₁] is the right-hand derivative of u. For each t ∈ [t₀, t₁], uₜ is defined by uₜ(r) = u(t + r), where r ∈ ψ₁(t), and analogously u̇ₜ is defined by u̇ₜ(r) = u̇(t + r), where r ∈ ψ₂(t). We say that u satisfies an FDE on [t₀, t₁] if ∀t ∈ [t₀, t₁] the following equation holds:

u̇(t) = g(t, uₜ, u̇ₜ, v(t))    (2.2)

v(t) is given for the whole time interval necessary. Equation (2.2) covers three kinds of differential equations (DEs):
i) If ψ₁(t) ⊂ (−∞, 0] and ψ₂(t) = ∅ for t ∈ [t₀, t₁], we say that the FDE is a retarded functional differential equation (RFDE); therefore the right-hand side of (2.2) does not depend on the derivative of u:

u̇(t) = g(t, uₜ, v(t))

In other words, the rate of change of the state of an RFDE is determined by the inputs v(t), as well as the present and past states of the system. An RFDE is sometimes also designated a hereditary differential equation or, in control theory, a time-delay system.
ii) If ψ₁(t) ⊂ (−∞, 0] and ψ₂(t) ⊂ (−∞, 0] for t ∈ [t₀, t₁], we say that the FDE is a neutral functional differential equation (NFDE), meaning that the rate of change of the state depends on its own past derivatives as well.
iii) An FDE is called an advanced functional differential equation (AFDE) if ψ₁(t) ⊂ [0, ∞) and ψ₂(t) = ∅ for t ∈ [t₀, t₁]. An equation of the advanced type may represent a system in which the rate of change of a quantity depends on the present and future values of the quantity and of the input signal v(t).
Note: A retarded functional differential equation (RFDE) can be further classified into other kinds of differential equations.

1. Retarded difference equations, sometimes called functional differential equations with discrete delay.

2. Functional differential equations containing distributed delays.

3. If the delays are constant they are called fixed point delays. Systems which have only multiple constant time delays can be classified as follows: if the delays are related by integers, the system is called a linear commensurate time-delay system; if the delays are not related by integers, it is called a linear non-commensurate time-delay system. In Figure 2.3 below, functional differential equations and their branches are classified.

Figure 2.3: Classification of FDEs and RFDEs, (Schoen[33] )

2.3.2 Classification of Delay Differential Equations (DDEs)


Delay differential equations can be classified as (Lumb[23]):

• Linear delay differential equations (LDDEs).

• Nonlinear delay differential equations (Non-LDDEs).

• Stochastic delay differential equations (SDDEs).

• Neutral delay differential equations (NDDEs).

• Autonomous delay differential equations (invariant under a shift of t).

• Non-autonomous delay differential equations.

2.3.3 Types of Delay Differential Equations and their Applications

The fact that ordinary differential equation models are being replaced by delay differential equation models has led to the rapid growth of delay differential equation models in a variety of fields, and each field has its scope of applications. The first mathematical modeler was Hutchinson; he introduced delay into a biological model (Driver[8]). Various classes of delay differential equations have various ranges of application (Lumb[23]). For instance, the retarded delay differential equation (RDDE) is applied in radiation damping (Chicone[5]) and in modeling tumor growth (Buric and Todorovic[4]); the application area of the distributed delay differential equation is in models of HIV infection (Nelson and Perelson[30]) and biomodeling; the application area of neutral delay differential equations (NDDEs) is distributed networks (Kolmanovskii and Myshkis[28]); the fixed delay differential equation is applied in cancer chemotherapy (Kolmanovskii[28]) and infectious disease modeling (Hairer[12]); and, in another class of model, the single fixed delay is applied in immunology (Luzyanina[24]) and the Nicholson blowflies model (Kolmanovskii and Myshkis[28]).

2.3.4 Linear Delay Differential Equations (LDDEs)


We consider the linear first-order delay differential equation with a single constant delay and constant coefficients:

u̇(t) = a(t)u(t) + b(t)u(t − β), for t > 0    (2.4)
u(p) = α(p), −β ≤ p ≤ 0

where α(p) is the initial history function, a(t) and b(t) are constant functions, and β > 0 is a constant. In general the solution u(t) of equation (2.4) has a jump discontinuity in u̇(t) at the initial point: the left and right derivatives are not equal,

lim_{t→0⁻} u̇(t) = α′(0) ≠ lim_{t→0⁺} u̇(t)

For example, for the simple delay differential equation u̇(t) = u(t − 1), t ≥ 0, with history function u(t) = 1, t ≤ 0, it is easy to verify that u̇(0⁺) = 1 ≠ u̇(0⁻) = 0. Another example: u̇(t) = −u(t − 1), t ≥ 0, with history function u(t) = 1, t ≤ 0; it is easy to verify that u̇(0⁺) = −1 ≠ u̇(0⁻) = 0. The second derivative ü(t) is given by ü(t) = −u̇(t − 1) and therefore has a jump at t = 1 = β; the third derivative is given by u⃛(t) = −ü(t − 1) = u̇(t − 2), and hence has a jump at t = 2 = 2β. In general, the jump in u̇(t) at t = 0 propagates to a jump in u⁽ⁿ⁺¹⁾(t) at time t = nβ. The propagation of discontinuities is a feature of DDEs that does not occur in ODEs; the propagated points become subsequent discontinuity points (Bellen and Zennaro[3]).

Figure 2.4: The propagation of discontinuities
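The second example above can be assembled explicitly by the method of steps. The sketch below is our own derivation for u̇(t) = −u(t − 1) with history u ≡ 1: it integrates one delay interval at a time and exhibits the jump of u̇ at t = 0.

```python
# Our own derivation, by the method of steps, for the example in the text:
# u'(t) = -u(t - 1), t >= 0, with history u(t) = 1 for t <= 0.
# On [0, 1] the delayed argument lies in the history, so u'(t) = -1 and
# u(t) = 1 - t; on [1, 2], u'(t) = -(1 - (t - 1)) = t - 2, started from u(1) = 0.
def u(t):
    """Piecewise-exact solution on [-1, 2]."""
    if t <= 0.0:
        return 1.0                      # history function
    if t <= 1.0:
        return 1.0 - t                  # first delay interval
    return t * t / 2.0 - 2.0 * t + 1.5  # second delay interval

# Exhibit the jump of the first derivative at t = 0 by one-sided differences.
eps = 1e-6
du_left = (u(0.0) - u(-eps)) / eps   # left derivative: 0 (constant history)
du_right = (u(eps) - u(0.0)) / eps   # right derivative: -1 (from the DDE)
```

Differentiating the pieces shows the propagation described above: u̇ jumps at t = 0, ü jumps at t = 1, and so on at each multiple of the delay.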

2.4 Uniqueness and Existence of DDEs

Delay differential equations (DDEs), like ordinary differential equations (ODEs), have uniqueness and existence theorems. Consider the Boundary Value Problem (BVP)

u̇(t) = au(t − β), β > 0, on [0, d]    (2.5)
u(t) = θ(t), on [−β, 0]

where a and β are real numbers, with β > 0 and d > 0, and θ ∈ C¹[−β, 0]. As we stated before, delay differential equations are a special class of functional differential equations (Falbo, 1995); the interval [−β, 0] is called the pre-interval and the function θ the pre-function.

2.4.0.1 Existence Theorem

The problem

u̇(t) = au(t − β), β > 0, on [0, d], d > 0    (2.6)
u(t) = 0 on [−β, 0]

has the unique solution u(t) ≡ 0 on the interval [−β, d].

Note: If d > β, then u ≡ 0 is the solution on the interval [0, β]; if d > 2β we transfer the DE to the interval [β, 2β], which reduces to a new interval of length β on which u = 0. This implies that we can solve the problem on [0, 2β]. If β < d < 2β, then the solution is extended on [0, d]. Continuing in this way, the solution is moved along to cover [0, d], for any positive real number d.
Proof: We observe that the DE itself is a linear first-order delay differential equation with a single constant delay and constant coefficient, and by substitution the function u ≡ 0 is a solution on the interval [0, β]. Now if v(t) and u(t) are any two solutions, then v̇(t) = av(t − β) and u̇(t) = au(t − β). As well, if we define a function z(t) = J₁u(t) + J₂v(t) for any two constants J₁, J₂, then ż(t) = az(t − β). This means that z(t) is also a solution of the DE. As we know, the function u(t) ≡ 0 is one solution; now, for contradiction, suppose there exists another function v(t), not identically zero, that satisfies equation (2.6). Thus v(t) satisfies the DE on the interval [0, β], and equals the function 0 (zero) on the interval [−β, 0]. But v takes on a nonzero value at least once somewhere in the semi-open interval (0, β]; that is, we are supposing that v(r) ≠ 0 for some r ∈ (0, β]. Let H be the set of reals such that τ ∈ H if and only if either τ = −β, or τ > −β and v(t) = 0 for all t ∈ [−β, τ].

Figure 2.5: The set H

The set H exists, since it contains all of the points in the interval [−β, 0]. H is bounded above, since r is one of its upper bounds. Let t* be the Least Upper Bound (LUB) of H. Note that v(t*) = 0; otherwise, by continuity, there would exist a positive number c such that v(t) ≠ 0 on (t* − c, t* + c), making t* − c an upper bound of H less than its least upper bound. Set t** = t* + β/2; then there exists a number t₀ between t* and t** such that v(t₀) ≠ 0. If there were no such t₀, then v(t) = 0 for all t between t* and t**, making t* not an upper bound of H. Since v is continuous, there exists an interval [e, r] containing t₀ as an interior point such that for all t ∈ [e, r], v(t) ≠ 0. Let ε be the minimum of r and t**; therefore v(t) ≠ 0 on the interval [e, ε], ε ≤ t**. Now let K be the set of numbers such that τ ∈ K if and only if either τ = ε, or τ < ε and v(t) ≠ 0 for all t ∈ (τ, ε]. We note that K exists, since t₀ ∈ K. Since v(t*) = 0, K is bounded below, because t* is one of its lower bounds; let x be the Greatest Lower Bound (GLB) of K. Since v is continuous at x, v(x) = 0; otherwise v would be nonzero throughout an open interval (x − c*, x + c*), making x not a lower bound of K. Denote K by (x, ε]. Since for all t ∈ K we have t < t** = t* + β/2, it follows that t − β ∈ H and v(t − β) = 0, so from the DE v̇(t) = av(t − β) = 0. Hence v̇(t) ≡ 0 on (x, ε]. This means that v(t) equals a constant, J, on (x, ε]. But v(x) = 0, so by continuity of v at x the constant must be zero. Therefore v(t) ≡ 0 on (x, ε], contradicting the assumption that v(t₀) ≠ 0.

2.4.0.2 Uniqueness Theorem

If v(t) and u(t) are solutions of the Boundary Value Problem (BVP) (2.5), then v(t) ≡ u(t) on [−β, d].
Proof: Let z(t) = v(t) − u(t). Then

ż(t) = v̇(t) − u̇(t)
     = av(t − β) − au(t − β)
     = az(t − β) on (0, d]

As well, on [−β, 0], v(t) = u(t) = θ(t), so z(t) = 0 there. Therefore z(t) satisfies equation (2.6), and by the Existence Theorem z is the trivial solution; hence v(t) ≡ u(t) on [−β, d].

Chapter 3

Research Methodology

Theorems and Proofs

In this chapter, we shall develop a technique that can be used to solve the multidimensional Langevin stochastic differential equation extended by a time-delayed term. There are many techniques for solving the aforementioned equation; some of these methods include the method of characteristic steps, the general solution, the method of steps, the Itô formula, the path-by-path integral, etc. In our research we shall dwell on the Itô formula and the path-by-path integral. It shall be noted that the other methods are also good, but we prefer the aforementioned because we are familiar with them.

3.1 Multidimensional Langevin Stochastic Differential Equation
Consider the Langevin stochastic differential equation
n
X
dX(t) = [ai X(t) + bi X(t − ri )]dt + dW (t), t ≥ 0 (3.1a)
i=1

With the initial condition


X(t) = Z(t), t ∈ [−r, 0] (3.1b)
where ai, bi ∈ R and ri > 0 for i = 1, . . . , n, with r := max_i ri.

Solutions
Assume W = (W(t), F(t), t ≥ 0) is a real-valued standard Wiener process on a
probability space (Ω, F, P), and let Z = (Z(t), t ∈ [−r, 0]) be a stochastic process on this
space such that Z(t) is F(0)-measurable for t ∈ [−r, 0]. Such a process Z will be called
an initial process.

Definition 3.1
A pathwise continuous stochastic process X = (X(t), t ≥ −r) on (Ω, F, P) is called a
solution of (3.1) if
i) X(t) is F(t)-measurable, t ≥ 0,
ii) X(t) = Z(0) + ∫₀ᵗ (aX(s) + bX(s − r)) ds + W(t) P-a.s., t ≥ 0,
iii) X satisfies (3.1b).
A solution X is said to be unique if for every solution Y of (3.1) we have
P( sup_{t ≥ −r} |Y(t) − X(t)| > 0 ) = 0.

To solve equation (3.1), we first have to consider the corresponding deterministic
equation

ẋ(t) = ax(t) + bx(t − r), t ≥ 0    (3.2a)
x(t) = g(t), t ∈ [−r, 0]    (3.2b)

where g is a given function on [−r, 0]. A function x = (x(t), t ≥ −r) is called a solution of (3.2) if
it is absolutely continuous on [0, ∞) and satisfies (3.2a) (Lebesgue almost everywhere)
and (3.2b). The equation (3.2) can be solved step by step on the intervals [kr, (k +
1)r], k ≥ 0, provided that g ∈ L1(−r, 0). In this way we get for g(t) := 1{0}(t), t ∈
[−r, 0], the so-called fundamental solution X0 of (3.2):
X0(t) = Σ_{k=0}^{[t/r]} (bᵏ/k!)(t − kr)ᵏ exp(a(t − kr)), t ≥ 0    (3.3)

where [t/r] := max{k ∈ N | k ≤ t/r}.
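To make the series concrete, here is a small numerical sketch (Python, for illustration only; the function name and parameter values are our own) that evaluates (3.3) and checks the delay equation (3.2a) by a finite difference away from the kink points kr:

```python
import math

def X0(t, a, b, r):
    """Fundamental solution (3.3) of x'(t) = a x(t) + b x(t - r),
    with initial data 1 at t = 0 and 0 on [-r, 0)."""
    if t < 0:
        return 0.0
    return sum(
        (b ** k / math.factorial(k)) * (t - k * r) ** k * math.exp(a * (t - k * r))
        for k in range(int(t // r) + 1)   # k = 0, ..., [t/r]
    )

# On [0, r) only the k = 0 term survives, so X0(t) = exp(a t) there
a, b, r = 1.0, 2.0, 1.0
print(abs(X0(0.5, a, b, r) - math.exp(0.5)))   # essentially zero

# Check the delay ODE at t = 1.5 (smooth point) with a central difference
h = 1e-6
lhs = (X0(1.5 + h, a, b, r) - X0(1.5 - h, a, b, r)) / (2 * h)
rhs = a * X0(1.5, a, b, r) + b * X0(0.5, a, b, r)
print(abs(lhs - rhs))   # small finite-difference residual
```

The truncation at k = [t/r] is exact, not an approximation: later terms vanish because (t − kr)ᵏ is only included while t − kr ≥ 0.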


For g ∈ L1(−r, 0) the solution Xg of (3.2) is given by

Xg(t) := X0(t) g(0) + b ∫_{−r}^{0} X0(t − s − r) g(s) ds, t ≥ 0    (3.4)

Now let us turn to the stochastic equation (3.1). It holds:

Proposition 3.2:
Assume Z has continuous trajectories. Then the stochastic equation (3.1) has a
unique solution; it is given by:

X(t) = X0(t)Z(0) + b ∫_{−r}^{0} X0(t − s − r)Z(s) ds + ∫_{0}^{t} X0(t − s) dW(s), t ≥ 0    (3.5)
X(t) = Z(t), t ∈ [−r, 0]

Where X0 denotes the fundamental solution of (3.2).

Proof:
Existence and uniqueness follow immediately by solving (3.1) with the step method.
The representation (3.5) is verified by inserting it into (3.1). ■
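The step method and the representation (3.5) can be compared numerically. The following sketch (illustrative only; the coefficient values and helper names are our own choices) runs an Euler scheme for (3.1) with a constant initial process and evaluates the discretised representation (3.5) on the same Brownian increments:

```python
import math, random

def X0(t, a, b, r):
    # fundamental solution, series (3.3)
    if t < 0:
        return 0.0
    return sum((b**k / math.factorial(k)) * (t - k*r)**k * math.exp(a*(t - k*r))
               for k in range(int(t // r) + 1))

a, b, r, z0 = -1.0, -0.5, 1.0, 1.0
T, n = 2.0, 2000
dt = T / n
rng = random.Random(42)
dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

# Euler scheme for dX = (aX(t) + bX(t - r))dt + dW, with X = z0 on [-r, 0]
lag = int(round(r / dt))
X = [z0]
for i in range(n):
    delayed = z0 if i < lag else X[i - lag]
    X.append(X[i] + (a * X[i] + b * delayed) * dt + dW[i])

# Representation (3.5) at time T, discretised on the same grid
det = X0(T, a, b, r) * z0 + b * z0 * sum(X0(T - s - r, a, b, r) * dt
                                         for s in (-r + j * dt for j in range(lag)))
sto = sum(X0(T - i * dt, a, b, r) * dW[i] for i in range(n))
gap = abs(X[-1] - (det + sto))
print(gap)   # small discretisation gap, shrinking as dt -> 0
```

Both quantities approximate the same solution with O(dt) strong error, so the gap observed here reflects discretisation only, not a difference between the two constructions.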

3.2 Stationary Solutions


We shall establish conditions under which a stationary solution of (3.1a) exists.

Definition 3.3
A solution U = (U(t), t ≥ −r) of (3.1) is called a stationary solution if its finite-
dimensional distributions are invariant under time translations, i.e.

P(U(t + tk) ∈ Ak, k = 1, · · · , n) = P(U(tk) ∈ Ak, k = 1, · · · , n)

for all t > 0, n ≥ 1, t1, · · · , tn ≥ −r and all Borel sets Ak, k = 1, · · · , n.


We say (3.1a) has a stationary solution U if there is an initial process Z such that U
is a stationary solution of (3.1) with U(t) = Z(t), t ∈ [−r, 0].

Definition 3.4
A stationary solution is said to be uniquely determined if every two stationary
solutions of (3.1a) have the same finite-dimensional distributions.
In the following, we suppose that W = (W(t), F(t), t ∈ R) is extended to a Wiener
process on the whole real line. This is no restriction of generality.
Before formulating conditions for the existence of stationary solutions, we need some
more knowledge on the deterministic equation (3.2) and related quantities Hale[14].
The characteristic function h(·) of (3.1a) is defined by

h(λ) = λ − a − b exp(−λr), λ ∈ C

A characteristic root of (3.1a) is a solution of h(λ) = 0. Define Λ to be the set of all
characteristic roots of (3.1a):

Λ := {λ ∈ C|h(λ) = 0} (3.6)

and introduce the notation

v0 = v0(a, b, r) := max{Re λ | λ ∈ Λ}    (3.7)

Then we have the following result.

Lemma 3.5:
Assume a, b ∈ R, r > 0 are fixed. Then it holds:
i) For every real c the set Λ ∩ {λ ∈ C | Re λ > c} is finite; in particular v0(a, b, r) < ∞.
ii) For every v > v0 there exist constants Kj = Kj(v) > 0, j = 0, 1, such that

|X0(t)| ≤ K0 exp(vt), t ≥ 0    (3.8a)

|Ẋ0(t)| ≤ K1 exp(vt), t ≥ 0    (3.8b)

Proof
The proof of proposition (i) and of the inequality (3.8a) can be found in Hale[14],
Chapter 1. The inequality (3.8b) then follows immediately from (3.2a) and (3.8a). ■

Corollary 3.6
For every g ∈ L1(−r, 0) and every v > v0(a, b, r) there exists a constant
C = C(g, v) > 0 such that

|Xg(t)| ≤ C exp(vt), t ≥ 0

Proof Apply (3.8a) and (3.4). ■


Define S := {(u, v) ∈ R² : u < 1, u + v < 0, −v < ξ sin ξ + u cos ξ},
where ξ = ξ(u) is the root of ξ = u tan ξ with 0 < ξ < π if u ≠ 0, and ξ = π/2 if u = 0.
The set S ⊆ R² can also be described in the following way:

S = {(u, v) ∈ R² : u < 1, v ∈ (v1(u), v2(u))}

where

v1(u) := −u/cos ξ(u) for u ≠ 0,  v1(0) := −π/2

(ξ = ξ(u) as above) and v2(u) := −u.
The following proposition is essential for the solution and a consequence of a
well-known results for deterministic difference-differential equations.
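Membership in S can be tested numerically. The sketch below (helper names are our own) finds ξ(u) by bisection on g(x) = x cos x − u sin x, which changes sign exactly once on (0, π) when u < 1, and then checks the three defining conditions:

```python
import math

def xi(u):
    """Root of xi = u * tan(xi) in (0, pi) (xi = pi/2 when u = 0), by bisection."""
    if u == 0.0:
        return math.pi / 2
    g = lambda x: x * math.cos(x) - u * math.sin(x)   # zero iff xi = u tan(xi)
    lo, hi = 1e-12, math.pi - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def in_S(u, v):
    """(u, v) in S  <=>  u < 1, u + v < 0 and -v < xi sin(xi) + u cos(xi)."""
    if u >= 1.0 or u + v >= 0.0:
        return False
    x = xi(u)
    return -v < x * math.sin(x) + u * math.cos(x)

print(in_S(0.0, -1.0), in_S(0.0, -2.0))   # True False, since v1(0) = -pi/2
```

For u = 0 the boundary value ξ sin ξ + u cos ξ equals π/2, so v = −1 is inside S while v = −2 is not, matching the description v ∈ (−π/2, 0).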

Proposition 3.7
We have v0 (a, b, r) < 0 if and only if (ar, br) ∈ S.

Proof
Introduce
u := ar, v := br, µ := λr    (3.9a)
and note that λ is a root of h(λ) = 0 if and only if µ is a root of h̃(µ) = 0, with

h̃(µ) := µ − u − v exp(−µ), µ ∈ C    (3.9b)

Defining
ṽ0(u, v) := max{Re µ : h̃(µ) = 0}    (3.9c)
we get:
v0(a, b, r) = (1/r) ṽ0(ar, br)    (3.10)

Note that h̃ is the characteristic function of

ẋ(t) = ux(t) + vx(t − 1), t ≥ 0.

Now apply a result of Hayes[15] (see also Hale[14]) which says that ṽ0(u, v) < 0 if and
only if (u, v) ∈ S. ■

Proposition 3.8
For the equation (3.1a) the following properties are equivalent:
i) There exists a stationary solution X.
ii) All characteristic roots of (3.1a) have negative real part: v0(a, b, r) < 0.
iii) (ar, br) ∈ S.
iv) The fundamental solution x0 of (3.2) is square integrable:

σ0² := ∫₀^∞ x0²(s) ds < ∞    (3.11)

Proof
The equivalence of (ii) and (iii) was established in Proposition 3.7. We show
(ii) ⟺ (iv): if v0 is negative, then (3.11) follows from (3.8a) by choosing a
v ∈ (v0, 0).
Assume (3.11) holds. Then every solution xg of (3.2) with initial function g ∈ L²(−r, 0)
is square integrable over R+; this follows from (3.4) using the Schwarz inequality. If v0 ≥ 0, then there exists a
characteristic root λ0 with Re λ0 ≥ 0. Obviously f(t) := exp(λ0 t), t ≥ −r, is a solution
of (3.2) with f|[−r, 0] ∈ L²(−r, 0).
But f is not square integrable on R+; thus v0 < 0 must hold.

It remains now to show that (iv) ⟺ (i).
Suppose that (iv) holds. Then the following stochastic integral exists by assumption:

U(t) := ∫_{−∞}^{t} x0(t − s) dWs, t ∈ R

Obviously EU(t) ≡ 0. Calculating the characteristic function, one can show that for
all t1 < · · · < tn the random vector (U(t1), · · · , U(tn)) is normally distributed with
covariance matrix G = (gij) given by

gij = ∫₀^∞ x0(|ti − tj| + s) x0(s) ds, i, j = 1, · · · , n    (3.12)

In particular, U is continuous and stationary. That this process U satisfies the
equation (3.1) is proved by inserting it and using that x0(·) is the fundamental solution
of (3.2).
Conversely, assume (i) holds and X is a stationary solution. In particular, X is
continuous and has the representation (3.5). Thus, introducing X⁰ by

X⁰(t) := x0(t)X(0) + b ∫_{−r}^{0} x0(t − s − r)X(s) ds, t ≥ 0    (3.13)

we obtain

E exp(iλX(t)) = E exp(iλX⁰(t)) · exp(−(λ²/2) ∫₀ᵗ x0²(s) ds), t ≥ 0, λ ∈ R    (3.14)

Here we use

∫₀ᵗ x0(t − s) dWs ∼ N(0, ∫₀ᵗ x0²(s) ds)

and the independence of (W(t), t ≥ 0) and F(0). The left-hand side of (3.14) is inde-
pendent of t by stationarity; thus we get (3.11), that is (iv), and therefore v0 < 0 by (ii).
By (3.8) and Corollary 3.6 it follows that

lim_{t→∞} X⁰(t) = 0 P-a.s.

Thus X(t) ∼ N(0, σ0²); in particular, X has finite second moments. ■
Now it is easy to see from Proposition 3.2 that the covariance function K(·) of the
stationary solution X is given by

K(u) = E X(t)X(t + u)
     = E X⁰(t)X⁰(t + u) + ∫₀ᵗ x0(t − s) x0(t + u − s) ds, u, t ≥ 0.

Consequently, we have

E(X⁰(t))² + ∫₀ᵗ x0²(v) dv = K(0) < ∞, t ≥ 0.

In the following corollary the condition of stationarity will be presented in a more
lucid form.

Corollary 3.9
i) If
a + b < 0 and a − b ≤ 0
then there exists a stationary solution of (3.1a) regardless of the value of r > 0.
ii) If
a + b ≥ 0    (3.16a)
or
ar ≥ 1    (3.16b)
then there does not exist a stationary solution of (3.1a), regardless of r > 0 in the
case (3.16a) or regardless of b ∈ R in the case (3.16b), respectively.
iii) If
a + b < 0 and a − b > 0
then there exists a stationary solution of (3.1a) if and only if

r ∈ (0, r0(a, b))

where

r0(a, b) := arccos(−a/b) / (b² − a²)^{1/2}, with arccos z ∈ [0, π] for z ∈ [−1, 1].

Proof
Obviously, (i) and (ii) follow from Proposition 3.8 and the definition of the set
S. Let us prove (iii): Defining r0 = r0(a, b) := sup{r > 0 : (ar, br) ∈ S}, we find
that for fixed values a, b satisfying the conditions of (iii) it must hold that r0 = (1/b)v0 = (1/a)u0, where
v0 = −u0/cos ξ0 = −ξ0/sin ξ0, ξ0 = ξ0(u0) is the solution of ξ0 = u0 tan ξ0 with ξ0 ∈ (0, π) for u0 ≠ 0,
and u0 and v0 satisfy v0/u0 = b/a. Therefore,

r0 = −(1/b)(ξ0 / sin ξ0)
   = −(1/b) arccos(−a/b) / sin(arccos(−a/b))
   = −(1/b) arccos(−a/b) / (1 − a²/b²)^{1/2}
   = arccos(−a/b) / (b² − a²)^{1/2}

(note that b < −|a| < 0 in case (iii)). For a = 0, that is for u0 = 0, we get r0 = −π/(2b). ■
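The critical delay can be checked directly against the characteristic function: at r = r0 the root λ = iω with ω = (b² − a²)^{1/2} is purely imaginary, i.e. lies on the stability boundary. A small sketch (parameter values chosen for illustration):

```python
import math, cmath

def r0(a, b):
    """Critical delay of Corollary 3.9(iii); assumes a + b < 0 and a - b > 0."""
    return math.acos(-a / b) / math.sqrt(b * b - a * a)

a, b = -0.5, -1.0
r = r0(a, b)
w = math.sqrt(b * b - a * a)
# characteristic function h(lambda) = lambda - a - b exp(-lambda r) at lambda = i w
h = 1j * w - a - b * cmath.exp(-1j * w * r)
print(r, abs(h))   # abs(h) ≈ 0: a purely imaginary characteristic root at r = r0
```

For a = 0, b = −1 the formula gives r0 = π/2, in agreement with the special case −π/(2b) noted above.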

Now we shall formulate a proposition crucial for the uniqueness of stationary solutions.

Proposition 3.10
Let Z be an initial process and X the corresponding solution of (3.1). If a stationary
solution of (3.1a) exists, then the distribution of (X(t + t1), · · · , X(t + tn)), where n ∈ N
and t1, . . . , tn are fixed with 0 ≤ t1 < t2 < · · · < tn, tends for t → ∞ to a zero mean
normal distribution with the covariance matrix (gij) defined by (3.12).

Proof
From (3.5) it follows, with the notation (3.13), that (note that X is nonstationary in
general now):

X(t) = X⁰(t) + G(t)

with

G(t) := ∫₀ᵗ x0(t − s) dWs, t ≥ 0.

Because of the existence of a stationary solution we have v0 < 0, and as in the proof
of Proposition 3.8 we get X⁰(t) → 0 a.s.
To show that the finite-dimensional distributions of G tend to zero mean normal
distributions with covariance matrix (gij), calculate the characteristic function
E exp(i Σ_{k=1}^{n} λk G(t + tk)). ■

Proposition 3.11
Assume a stationary solution V = (V(t), t ≥ −r) of (3.1a) exists. Then
i) V is the unique stationary solution of (3.1a) and it is a zero mean Gaussian process
with covariance function K(·) given by

K(t) := ∫₀^∞ x0(s + t) x0(s) ds, t ≥ 0,    (3.19)

K(t) := K(−t), t < 0

ii) V has a spectral density f related to K by

K(t) = ∫_R exp(itu) f(u) du, t ∈ R,

and given by

f(u) = (1/2π)|h(iu)|^{−2}    (3.20)
     = (1/2π)[(u + b sin ur)² + (a + b cos ur)²]^{−1}, u ∈ R.

iii) A version of V is given by U = (U (t), t ≥ −r) with


Z t
U (t) := x0 (t − s)dW (s), t ≥ −r.
−∞

Proof (i) follows at once from Proposition 3.10, and (iii) was shown in the proof of
Proposition 3.8. Let us consider (ii).
Integrating (3.2a) we get for x0
Z t Z t
x0 (t) = 1 + a x0 (s)ds + b x0 (s − r)ds, t ≥ 0
0 0

and thus Z t
|x0 (t)| ≤ 1 + (|a| + |b|) |x0 (s)| ds, t≥0
0
Applying a Gronwall-type lemma we obtain

|x0 (t)| ≤ K · exp(c · t), t≥0

with K := 1 + |b|r and c := |a| + |b|.


Thus the Laplace-transform L [x0 ] of x0 exists at least for Re λ > c. From (3.2) it
easily follows

L [x0 ] (λ) = (λ − a − b exp(−λr))−1 = 1/h(λ) Re λ > c. (3.21)

Using (3.21) we get for the fundamental solution x0 :


Z
x0 (t) = (1/2π) (exp(itu)/h(iu))du, t≥0 (3.22)
R

Thus, x0(·) and x0(· + t) are the inverse Fourier transforms of (2π)^{−1/2} h^{−1}(iu) and
(2π)^{−1/2} h^{−1}(−iu) exp(iut), u ∈ R, respectively. Applying the Parseval equation to
(3.19) we obtain (3.20) and therefore (ii). ■
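The relation K(0) = ∫_R f(u) du can be verified numerically at a point of S, here (a, b, r) = (0, −1, 1) (our own choice), against the closed form (3.28):

```python
import math

a, b, r = 0.0, -1.0, 1.0        # (ar, br) = (0, -1) lies in S
f = lambda u: (1.0 / (2 * math.pi)) / ((u + b * math.sin(u * r)) ** 2
                                       + (a + b * math.cos(u * r)) ** 2)

# trapezoidal rule on [-200, 200]; the tail beyond contributes O(1/(pi*200))
U, n = 200.0, 40000
h = 2 * U / n
vals = [f(-U + i * h) for i in range(n + 1)]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# K(0) from (3.28), case b < -|a|, with l = sqrt(b^2 - a^2) = 1
l = 1.0
K0 = (b * math.sin(l * r) - l) / (2 * l * (a + b * math.cos(l * r)))
print(integral, K0)   # both ≈ 1.70
```

The residual discrepancy is the truncated 1/u² tail of the spectral density, of order 1/(πU).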

3.3 The covariance function of the stationary solution

Assume (3.1a) has a stationary solution, i.e. (ar, br) ∈ S, and let K(·) be its covariance
function given by (3.19). We shall show that K(·) satisfies the difference-differential
equation (3.2a) and calculate it explicitly on [−r, r]. Then, using the step method,
K can in principle be calculated on the whole real axis. We shall treat K(·) on R+ only.
All formulas obtained below can be extended to (−∞, 0] by the property K(−t) = K(t), t ≥ 0.

Lemma 3.12 The covariance function K(·) of the stationary solution of (3.1a) has
the following properties:

i) K(·) is continuously differentiable on [0, ∞), where the derivative at zero is
understood to be the right-hand one.
ii) It satisfies

K̇(t) = aK(t) + bK(t − r), t ≥ 0    (3.23)

iii) We have
2aK(0) + 2bK(r) = −1    (3.24)
and
K̇(0+) = −1/2    (3.25)
iv) K is twice continuously differentiable on [0, r] and it holds that

K̈(t) = (a² − b²) K(t), t ∈ [0, r],    (3.26)

where K̈(t) is defined at t = 0 and t = r to be the right- or left-hand side derivative of
K̇, respectively.
Proof Using (3.11), the inequalities (3.8) and v0 < 0, it follows from Lebesgue's
dominated convergence theorem that K is differentiable on R+ with

K̇(t) = ∫₀^∞ ẋ0(s + t) x0(s) ds, t ≥ 0.

The continuity of K̇ on [0, ∞) follows from (3.8) similarly. Thus (i) is proved.
Now use that x0 solves (3.2a), x0(u) = 0 (u ∈ [−r, 0)) and K(−u) = K(u) to get (3.23).
Furthermore observe

bK(r) = ∫₀^∞ b x0(s + r) x0(s) ds
      = ∫₀^∞ b x0(s) x0(s − r) ds
      = ∫₀^∞ x0(s) ẋ0(s) ds − a ∫₀^∞ x0²(s) ds
      = lim_{s→∞} x0²(s)/2 − x0²(0)/2 − aK(0)
      = −1/2 − aK(0).

Thus (3.24) holds. Inserting it in (3.23) and using the continuity of K and its
symmetry, we get (3.25).
From (3.23) it follows for t ∈ [0, r] that

K̇(t) = aK(t) + bK(r − t).

The right-hand side of this equation is continuously differentiable on [0, r], where the
derivatives at the boundaries t = 0 and t = r are understood one-sided. Use (3.23)
to obtain (3.26). ■
Now we are ready to calculate K explicitly on [0, r]. Remark that (ar, br) ∈ S by
assumption.

Proposition 3.13 For t ∈ [0, r] we have

K(t) = K(0) cosh(lt) − (2l)^{−1} sinh(lt),  |b| < −a,
K(t) = K(0) − t/2,                          b = a,       (3.27)
K(t) = K(0) cos(lt) − (2l)^{−1} sin(lt),    b < −|a|,

where l := |a² − b²|^{1/2} and

K(0) = (b sinh(lr) − l) / (2l(a + b cosh(lr))),  |b| < −a,
K(0) = (br − 1)/(4b),                            b = a,       (3.28)
K(0) = (b sin(lr) − l) / (2l(a + b cos(lr))),    b < −|a|.

Proof
Solve (3.26) with the conditions (3.24)–(3.25). ■
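The formulas (3.27)–(3.28) can be sanity-checked against the conditions of Lemma 3.12; the sketch below does this for the oscillatory case b < −|a| (the parameter values are our own illustration):

```python
import math

a, b, r = 0.0, -1.0, 1.0                     # (ar, br) in S, case b < -|a|
l = math.sqrt(abs(a * a - b * b))            # here l = 1
K0 = (b * math.sin(l * r) - l) / (2 * l * (a + b * math.cos(l * r)))
K = lambda t: K0 * math.cos(l * t) - math.sin(l * t) / (2 * l)   # (3.27) on [0, r]

h = 1e-6
slope0 = (K(h) - K(0.0)) / h                               # K'(0+) = -1/2   (3.25)
balance = 2 * a * K(0.0) + 2 * b * K(r)                    # = -1            (3.24)
h2 = 1e-4
second = (K(0.4 + h2) - 2 * K(0.4) + K(0.4 - h2)) / h2**2  # (a^2-b^2)K(0.4) (3.26)
print(slope0, balance, second - (a * a - b * b) * K(0.4))
```

All three residuals vanish up to finite-difference error, confirming that the stated K(0) is exactly the constant that makes (3.24) and (3.25) compatible.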

Proposition 3.14 If (ar, br) ∈ S and br ≥ −exp(ar − 1), then K(·) is
strictly positive with

lim_{t→∞} t^{−1} ln K(t) = v0(a, b, r).    (3.29)

If (ar, br) ∈ S and br < −exp(ar − 1), then K(·) oscillates around zero with

lim_{t→∞} t^{−1} ln |K(t)| = v0(a, b, r).    (3.30)

(Oscillation around zero means that for every t0 > 0 one can find t1, t2 > t0 such
that K(t1) < 0 and K(t2) > 0.)
Proof Because of (3.19), Proposition 3.19 below and its proof, we have that K is strictly
positive if and only if x0 is strictly positive. Now apply Proposition 3.19 again. ■

3.4 The Stationary Solution For (ar, br) Near ∂S

If (ar, br) ∈ S then the stationary solution of (3.1a) has the variance K(0), which
depends on (a, b, r). It is easy to see that K(0) tends to infinity if (ar, br) tends
to a boundary point of S. Nevertheless, in some sense the normalized process
X(·)[K(0)]^{−1/2} converges to a limit process. This is shown in the present section.
Let a, b, a∗, b∗ ∈ R, r, r∗ > 0 be such that P := (ar, br) ∈ S and P∗ := (a∗r∗, b∗r∗) ∈ ∂S
(the latter relation means b∗ = −a∗ or b∗r∗ = v1(a∗r∗)). Denote by V the stationary
solution of (3.1a) for the given coefficients (a, b, r), by f its spectral density, and by K(·) its
covariance function.
We want to study the behaviour of the stationary solution V as (a, b, r) tends
to (a∗, b∗, r∗). Note that (a∗r∗, b∗r∗) ∉ S, so that no stationary solution corresponds
to (a∗, b∗, r∗). Assume (a, b, r) tends to (a∗, b∗, r∗). Then

1/(2πf (u)) = (u + b sin ur)2 + (a + b cos ur)2




tends to
1/ (2πf ∗ (u)) := (u + b∗ sin ur∗ )2 + (a∗ + b∗ cos ur∗ )2


uniformly on all compact sets.


Note that 1/(2πf(u)) > 0, u ∈ R, and that 1/(2πf∗(u)) has zeros at ±u∗ with

u∗ := 0 if b∗ = −a∗;  u∗ := ξ0(a∗r∗)/r∗ if b∗r∗ = v1(a∗r∗),

where ξ0(·) was defined in Section 3.2 above.


Now the following proposition is obvious:

Proposition 3.15 If (a, b, r) tends to (a∗, b∗, r∗) with (ar, br) ∈ S and
(a∗r∗, b∗r∗) ∈ ∂S, then f(·) tends to f∗(·) uniformly on compact subsets of
R \ {u∗, −u∗}.

Corollary 3.16 It holds that lim_{(a,b,r)→(a∗,b∗,r∗)} K(0) = ∞.
Proof The assertion follows from K(0) = ∫_{−∞}^{∞} f(u) du, ∫_{−∞}^{∞} f∗(u) du = ∞,
Proposition 3.15 and Fatou's lemma. ■
Now introduce the stationary process Y by
Y (t) := (K(0))−1/2 X(t), t ≥ −r
Then we have
dY (t) = [(aY (t) + bY (t − r))]dt + (K(0))−1/2 dW (t), t≥0
Its covariance function KY is given by
KY (t) = K(t)/K(0), t ∈ R.
Proposition 3.15, Corollary 3.16, Proposition 3.13 and formula (3.25)
now immediately imply

Proposition 3.17 If (a, b, r) with (ar, br) ∈ S tends to (a∗, b∗, r∗) with
(a∗r∗, b∗r∗) ∈ ∂S, then for all t ∈ R

KY(t) → cos u∗t ≡ 1 if b∗ = −a∗ (since then u∗ = 0),
KY(t) → cos u∗t if b∗r∗ = v1(a∗r∗).
Remark 3.18 If u∗ ≠ 0, then there exists a zero mean (non-Gaussian) weakly stationary
process Y∗ having cos(u∗t) as its covariance function and satisfying the difference-
differential equation (3.2a):

Y∗(t) := 2^{1/2} cos(u∗t + Φ), t ∈ R,

where Φ is a random variable uniformly distributed on [0, 2π].
Proof Because of

b∗r∗ = −ξ(a∗r∗)/sin ξ(a∗r∗) = −a∗r∗/cos ξ(a∗r∗) and ξ := ξ(a∗r∗) = u∗r∗,

we have

2^{−1/2} r∗ Ẏ∗(t) = −r∗u∗ sin(u∗t + Φ)
= −ξ sin(u∗t + Φ)
= a∗r∗ cos(u∗t + Φ) − a∗r∗ cos(u∗t + Φ) + b∗r∗ sin ξ sin(u∗t + Φ)
= a∗r∗ cos(u∗t + Φ) + b∗r∗ [cos ξ cos(u∗t + Φ) + sin ξ sin(u∗t + Φ)]
= a∗r∗ cos(u∗t + Φ) + b∗r∗ cos(u∗(t − r∗) + Φ), t ≥ 0.

The proof that Y∗ is a zero mean process with covariance function cos(u∗t) is an easy
calculation. ■

3.5 APPENDIX
We shall summarize some more or less known facts on the deterministic equation
(3.2), the fundamental solution x0 and the function v0. In particular, combining these with
Proposition 3.14 one gets some more information on the behaviour of the covariance
function K(·).

3.6 Asymptotic Behaviour of the Solutions of (3.2)


Let us start with another representation of solutions of (3.2).

Lemma 3.18 (Myškis[29], p. 101) Every solution of (3.2) with initial function
g ∈ L1(−r, 0) has the following representation:

xg(t) = Σ_{Re λk ≥ γ} exp(λk t)[C0,k + · · · + C_{mk−1,k} t^{mk−1}] + o(exp(γt))    (3.31)

where γ is an arbitrary real number and mk denotes the multiplicity of the characteristic
root λk of (3.2), k ∈ N.
This lemma will be used to prove the following.

Proposition 3.19 For all a, b ∈ R, r > 0 it holds:
If
br ≥ −exp(ar − 1)    (3.32)
then the fundamental solution x0 of (3.2) is strictly positive on [0, ∞), and it holds that
lim_{t→∞} t^{−1} ln x0(t) = v0(a, b, r).    (3.33)
If
br < −exp(ar − 1)    (3.34)
then x0 oscillates around zero, i.e. for every t0 > 0 one can find t1, t2 > t0 such that
x0(t1) < 0 and x0(t2) > 0, and it holds that
lim_{t→∞} t^{−1} ln |x0(t)| = v0(a, b, r).    (3.35)

Proof At first let us show that (3.35) holds. Consider the two equations

ẋ(t) = ax(t) + bx(t − r), t ≥ 0    (3.36)

ẋ(t) = âx(t) + b̂x(t − r), t ≥ 0    (3.37)

where â := a − u and b̂ := b exp(−ur) with an arbitrary real number u. Then we can form
the characteristic functions h and ĥ of (3.36) and (3.37), respectively. It is easy to see
that
h(λ) = ĥ(λ − u), λ ∈ C    (3.38)

x0(t) = exp(ut) x̂0(t), t ≥ 0    (3.39)

where x0 and x̂0 are the fundamental solutions corresponding to (3.36) and (3.37),
respectively.
From (3.38) we get:
if λ0 is a root of h with Re λ0 = z, then λ0 − u is a root of ĥ with Re(λ0 − u) = z − u.
Now we choose u = v0(a, b, r). Then

v̂0 := max{Re λ : ĥ(λ) = 0} = 0

and by using Lemma 3.18 we get

x̂0(t) = Σ_{Re λk = 0} exp(λk t)[C0,k + · · · + C_{mk−1,k} t^{mk−1}] + o(1).

Now we use (3.39) with u = v0 and obtain, for t > 0 with x0(t) ≠ 0,

t^{−1} ln |x0(t)| = t^{−1} ln [exp(v0 t)|x̂0(t)|]    (3.40)
                 = v0 + t^{−1} ln |x̂0(t)|.

Therefore, lim_{t→∞} t^{−1} ln |x0(t)| = v0(a, b, r).
Thus we have (3.35).
If additionally x0 is strictly positive, then x̂0 is (up to an o(1) term) a strictly positive
polynomial, and thus (3.33) follows from (3.40). Consequently, it remains to show that
(3.32) implies the strict positivity of x0(·) and that (3.34) implies the oscillation of
x0(·) around zero.
But this follows from the following lemma

Lemma 3.20 Consider the equation

ẋ(t) = qx(t − 1), t ≥ 0.    (3.41)

Then it holds:
i) The fundamental solution of (3.41) is positive if and only if q ≥ −1/e.
ii) The fundamental solution of (3.41) oscillates around zero if and only if q <
−1/e.

Indeed, if x0 is the fundamental solution of (3.2), then

x̄(t) := exp(−art) x0(rt), t ≥ 0

is a solution of

x̄̇(t) = qx̄(t − 1), t ≥ 0

with q := br exp(−ar).
Lemma 3.20 is a very special case of a general property of certain functional-
differential equations proved in Morgenthal[27]. For this particular case we shall give the
following simple proof for the sake of completeness (see also Ladas[21]).
Let q ≥ 0. Then by the representation (3.3) we see that x0(t) > 0 for all t ≥ 0.
Now let q < 0. Then we write instead of (3.41)

ẋ(t) = −px(t − 1), p > 0    (3.42)

and prove:

a) There exist positive solutions of (3.42) if and only if p ≤ 1/e.

b) There exist positive solutions of (3.42) if and only if the fundamental solution
of (3.42) is positive.

From (a) it then follows that all solutions of (3.42) oscillate around zero if and
only if p > 1/e, and the lemma is proved.

Proof of (a) Assume there exists a positive solution g of (3.42). Then g(t) > 0 for
all t ≥ 0 and ġ(t) < 0 for t ≥ 1. Therefore there exists lim_{t→∞} g(t − 1)/g(t) =: α,
and it is easy to see that 1 ≤ α < ∞. Let ε > 0. Then there is a real number Q such
that

α − ε ≤ g(t − 1)/g(t), Q ≤ t < ∞.

With (3.42) we get that

ġ(t)/g(t) = −p(g(t − 1)/g(t)) ≤ −p(α − ε), t ≥ Q.

Integrating this inequality over the interval [t − 1, t] we find

ln g(t) − ln g(t − 1) ≤ −p(α − ε)

and therefore

ln(g(t − 1)/g(t)) ≥ p(α − ε), t ≥ Q.

Letting t → ∞ and then ε → 0 yields ln α ≥ pα, hence

ln α/α ≥ p.

Since ln α/α ≤ 1/e for α ≥ 1, it follows that p ≤ 1/e.
Assume now that p ≤ 1/e. If there is a solution x(t) = exp(−λt) with λ ∈ R of
the equation (3.42), then we have proved (a). Proceeding from x(t) := exp(−λt) we
find the condition λ = p exp(λ), which has a real root whenever p ≤ 1/e. Thus (a) is proved.
Proof of (b) To prove this we use the following.

Lemma 3.21 Let f and g be solutions of (3.42) with

f(t) ≤ g(t), −1 ≤ t ≤ 0,

f(0) = g(0),

g(t) > 0, −1 ≤ t < ∞.

Then f(t) ≥ g(t) for t ≥ 0.

The proof of this lemma is left to the reader (see also Kozakiewicz[18]).
Now we can show (b): Assume there exists a positive solution of (3.42). Then we
can take it such that it is positive for all t ≥ −1 (this can be done because
the equation (3.42) is autonomous). Then, with Lemma 3.21, we find x0(t) > 0 for all
t ≥ 0. ■
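The dichotomy of Lemma 3.20 (positivity for q ≥ −1/e, oscillation for q < −1/e) is easy to observe with a naive Euler scheme for ẋ(t) = qx(t − 1); the scheme and parameter values below are our own illustration:

```python
def fundamental_min(q, T=10.0, dt=0.01):
    """Euler scheme for x'(t) = q x(t - 1) with fundamental initial data
    (x(0) = 1, x = 0 on [-1, 0)); returns the minimum over [0, T]."""
    lag = int(round(1.0 / dt))
    n = int(round(T / dt))
    x = [1.0]
    for i in range(n):
        delayed = x[i - lag] if i >= lag else 0.0
        x.append(x[i] + q * delayed * dt)
    return min(x)

print(fundamental_min(-0.2) > 0.0)   # -0.2 >= -1/e: trajectory stays positive
print(fundamental_min(-2.0) < 0.0)   # -2.0 <  -1/e: trajectory crosses zero
```

For q = −2 the crossing is already visible on [1, 2], where the exact fundamental solution is the line 1 − 2(t − 1).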

3.7 The Function v0(a, b, r)

We have seen in Proposition 3.14 above that the number v0 = v0(a, b, r) is connected
with the asymptotic behaviour of the covariance function K(·) (more generally, with
the solutions of (3.2a)). Thus it is of some interest to study its behaviour in more
detail. We shall show that the function v0 is smooth outside the surface F :=
{(a, b, r) | br = −exp(ar − 1)} and that it has a certain singularity on this surface.
Lemma 3.22 Assume c ∈ R and define ĥ(·) and v̂0(c) by

ĥ(z) := z − c exp(−z), z ∈ C,
v̂0(c) := max{Re z : ĥ(z) = 0}.

Then v̂0(c) < ∞, c ∈ R, and we have

v0(a, b, r) = (1/r)[v̂0(br · exp(−ar)) + ar].
Proof We have h(λ) = 0 if and only if ĥ(z) = 0, where z = (λ − a)r and c = br exp(−ar). ■
Consequently, we have to study the function v̂0 of one real variable c only. This
has been done by several authors; see e.g. Wright[36].
We shall present here, without proof, some properties of v̂0, partially known from
Wright[36]. For details the reader is referred to Mensch[26]. Note that if a and r are
fixed, then v0 and v̂0 have very similar graphs as functions of br.

Proposition 3.23 The function

v̂0(c) := max{Re z | z = c exp(−z), z ∈ C}, c ∈ R,

has the following properties:

i) v̂0 is continuous on R,
ii) differentiable on R \ {−exp(−1)},
iii) strictly decreasing on (−∞, −exp(−1)) and strictly increasing on (−exp(−1), ∞),
iv) v̂0(−π/2) = v̂0(0) = 0,

v̂0(−exp(−1)) = −1, v̂0(x exp x) = x, x ≥ −1,

v) lim_{c↓−exp(−1)} v̂0′(c) = ∞, lim_{c↑−exp(−1)} v̂0′(c) = −e.

vi) If |c| < exp(−1) then it holds that

v̂0(c) = Σ_{n=1}^{∞} (−1)^{n−1} (n^{n−1}/n!) cⁿ    (3.43)

Figure 1: The function v̂0(c) (figure omitted). Figure 1 gives an impression of the function v̂0(c).
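For c ≥ −1/e the rightmost root of z = c exp(−z) is real, so v̂0(c) can be computed with a few Newton steps on z eᶻ = c (this is the principal branch of the Lambert W function); the series (3.43) agrees on |c| < 1/e. The helper names below are our own:

```python
import math

def v0_hat(c):
    """Real root of z*exp(z) = c, for c >= -1/e (Newton iteration)."""
    z = 0.0
    for _ in range(200):
        f = z * math.exp(z) - c
        if abs(f) < 1e-14:
            break
        z -= f / (math.exp(z) * (1.0 + z))
    return z

def v0_hat_series(c, terms=40):
    """Power series (3.43), valid for |c| < exp(-1)."""
    return sum((-1) ** (n - 1) * n ** (n - 1) / math.factorial(n) * c ** n
               for n in range(1, terms + 1))

print(v0_hat(0.2), v0_hat_series(0.2))   # both ≈ 0.1689
print(v0_hat(math.e))                    # ≈ 1, since v0_hat(x e^x) = x
```

This also gives a numerical handle on Lemma 3.22: v0(a, b, r) = (1/r)[v̂0(br e^{−ar}) + ar] whenever br e^{−ar} ≥ −1/e.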

Chapter 4

Examples and Applications

In this chapter we shall work through some examples to test the validity of the techniques
developed.

4.1 Test Examples For The Multidimensional
Langevin Stochastic Differential Equation
Extended By A Time-Delayed Term
Consider the following Langevin stochastic differential equations.

4.2 Problem 1
Solve the stochastic differential equation

dxt = a(t)dt + b(t)dWt , x0 = X0

where X0 is constant. Show that the solution xt is a Gaussian deviate and find its
mean and variance.

Solution
The formal solution to this problem is
x(t) = X0 + ∫₀ᵗ a(s) ds + ∫₀ᵗ b(s) dWs

where the second integral is to be interpreted as an Ito integral. The fact that x(t)
is a Gaussian deviate stems from the fact that the stochastic part of x is simply a

weighted combination of Gaussian deviates, and is therefore a Gaussian deviate. The
mean is

µ(t) = X0 + ∫₀ᵗ a(s) ds
and the variance is

E[(x − µ)²] = E[(∫₀ᵗ b(s) dWs)(∫₀ᵗ b(s) dWs)] = ∫₀ᵗ b²(s) ds.
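A quick Monte Carlo check of these formulas (the coefficients a(t) = 2t, b(t) = t are our own illustrative choice, giving mean X0 + t² and variance t³/3):

```python
import math, random

rng = random.Random(0)
X0, T, nsteps, npaths = 1.0, 1.0, 100, 5000
dt = T / nsteps
a = lambda t: 2.0 * t
b = lambda t: t

finals = []
for _ in range(npaths):
    x = X0
    for i in range(nsteps):
        t = i * dt
        x += a(t) * dt + b(t) * rng.gauss(0.0, math.sqrt(dt))
    finals.append(x)

mean = sum(finals) / npaths
var = sum((v - mean) ** 2 for v in finals) / (npaths - 1)
print(mean, var)   # ≈ X0 + T^2 = 2 and ≈ T^3/3 ≈ 0.333
```

The small residual deviations come from the left-point Riemann discretisation and the finite sample size.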

4.3 Problem 2
In an Ornstein-Uhlenbeck process, x(t), the state of a system at time t, satisfies the
stochastic differential equation

dx = −α(x − X)dt + σdWt

where α and σ are positive constants and X is the equilibrium state of the system
in the absence of system noise. Solve this SDE. Use the solution to explain why x(t)
is a Gaussian process, and deduce its mean and variance.

Solution
For any function y = y(x, t), Ito’s lemma gives

dy = [∂y/∂t − α(x − X)(∂y/∂x) + (σ²/2)(∂²y/∂x²)] dt + σ(∂y/∂x) dWt

Take y = (x − X)e^{αt}; then

dy = [α(x − X)e^{αt} − α(x − X)e^{αt}] dt + σe^{αt} dWt = σe^{αt} dWt.
Integration of this Ito equation yields


(x − X)e^{αt} = (x(0) − X) + σ ∫₀ᵗ e^{αs} dWs  ⟹  x = X + (x(0) − X)e^{−αt} + σ ∫₀ᵗ e^{−α(t−s)} dWs

We observe that if x(0) is constant or is itself a Gaussian deviate then x(t) is simply
a sum of Gaussian deviates and so is a Gaussian deviate. The mean value of x(t) is
µ(t) = E[X + (x(0) − X)e^{−αt} + σ ∫₀ᵗ e^{−α(t−s)} dWs] = X + (x̄(0) − X)e^{−αt}

The variance of x(t) is now computed from

E[(x − µ)²] = E[((x(0) − x̄(0))e^{−αt} + σ ∫₀ᵗ e^{−α(t−s)} dWs)²]
= E[(x(0) − x̄(0))² e^{−2αt} + 2(x(0) − x̄(0))e^{−αt} σ ∫₀ᵗ e^{−α(t−s)} dWs
  + σ² (∫₀ᵗ e^{−α(t−s)} dWs)(∫₀ᵗ e^{−α(t−u)} dWu)]
= σ0² e^{−2αt} + σ² ∫₀ᵗ e^{−2α(t−s)} ds,

where σ0² denotes the variance of x(0); the cross term vanishes because x(0) is independent of the Wiener increments.
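The last integral evaluates to the closed form σ²(1 − e^{−2αt})/(2α), which tends to the equilibrium variance σ²/(2α); a quick quadrature check (parameter values are our own):

```python
import math

alpha, sigma, t = 0.7, 1.3, 2.0
n = 20000
dt = t / n
quad = sigma ** 2 * dt * sum(math.exp(-2 * alpha * (t - i * dt)) for i in range(n))
closed = sigma ** 2 * (1.0 - math.exp(-2 * alpha * t)) / (2 * alpha)
print(quad, closed, sigma ** 2 / (2 * alpha))   # last value: the t -> infinity limit
```

The mean-reversion rate α thus controls both the speed at which the mean approaches X and the size of the stationary fluctuations.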

4.4 Problem 3
Let x = (x1, . . . , xn) be the solution of the system of Ito stochastic differential
equations
dxk = ak dt + bkα dWα
where the repeated Greek index indicates summation from α = 1 to α = m. Show
that x = (x1, . . . , xn) is the solution of the Stratonovich system

dxk = (ak − (1/2) bjα bkα,j) dt + bkα ◦ dWα.
Let ϕ = ϕ(t, x) be a suitably differentiable function of t and x. Show that ϕ is the
solution of the stochastic differential equation

dϕ = (∂ϕ/∂t + āk ∂ϕ/∂xk) dt + (∂ϕ/∂xk) bkα ◦ dWα
where

āk = ak − (1/2) bkα,j bjα.

Solution
The SDE dxi = ai dt + biα dW^α has the formal solution
xi(t) = xi(0) + ∫_{t0}^{t} ai(s, xs) ds + ∫_{t0}^{t} biα(s, xs) dWs^α

The task is to relate the Ito integral in this solution to the corresponding Stratonovich
integral. Each Wiener process behaves separately.

It follows directly from the stochastic differential equation that

x^{(j)}_{k−1/2} − x^{(j)}_{k−1} = aj(tk−1, xk−1)(tk−1/2 − tk−1) + bjβ(tk−1, xk−1)(W^β_{k−1/2} − W^β_{k−1}) + · · ·

Expanding the Stratonovich sums,

∫_{t0}^{t} biα(s, xs) ◦ dWs^α = lim_{n→∞} Σ_{k=1}^{n} biα(tk−1/2, xk−1/2)(W^α_k − W^α_{k−1})
= lim_{n→∞} Σ_{k=1}^{n} biα(tk−1, xk−1)(W^α_k − W^α_{k−1})
+ lim_{n→∞} Σ_{k=1}^{n} (∂biα(tk−1, xk−1)/∂t)(tk−1/2 − tk−1)(W^α_k − W^α_{k−1})
+ lim_{n→∞} Σ_{k=1}^{n} (∂biα(tk−1, xk−1)/∂x^{(j)})(x^{(j)}_{k−1/2} − x^{(j)}_{k−1})(W^α_k − W^α_{k−1}) + · · ·
= ∫_{t0}^{t} biα(s, xs) dWs^α
+ lim_{n→∞} Σ_{k=1}^{n} (∂biα(tk−1, xk−1)/∂x^{(j)})(x^{(j)}_{k−1/2} − x^{(j)}_{k−1})(W^α_k − W^α_{k−1}),
and therefore the value of the second contribution to the Stratonovich integral is
lim_{n→∞} Σ_{k=1}^{n} (∂biα(tk−1, xk−1)/∂x^{(j)}) bjβ(tk−1, xk−1)(W^β_{k−1/2} − W^β_{k−1})(W^α_k − W^α_{k−1})
= lim_{n→∞} Σ_{k=1}^{n} biα,j(tk−1, xk−1) bjβ(tk−1, xk−1) δαβ (tk−1/2 − tk−1)
= (1/2) ∫_{t0}^{t} biα,j(s, xs) bjα(s, xs) ds.

In conclusion,

∫_{t0}^{t} biα(s, xs) ◦ dWs^α = ∫_{t0}^{t} biα(s, xs) dWs^α + (1/2) ∫_{t0}^{t} biα,j(s, xs) bjα(s, xs) ds,

where repetition of α implies summation over the independent Wiener processes. The
formal solution of the SDE with the Ito integral replaced by the Stratonovich integral
is therefore
xi(t) = xi(0) + ∫_{t0}^{t} [ai(s, xs) − (1/2) biα,j(s, xs) bjα(s, xs)] ds + ∫_{t0}^{t} biα(s, xs) ◦ dWs^α.

Thus the Stratonovich form of the SDE is obtained from the Ito form by replacing
the drift component ai by the modified drift āi = ai − biα,j bjα/2.
Let ϕ = ϕ(t, x); then

dϕ = (∂ϕ/∂t) dt + ϕ,j dx^{(j)} + (1/2) ϕ,ij dx^{(i)} dx^{(j)} + · · ·
   = (∂ϕ/∂t) dt + ϕ,j (aj dt + bjα dWα) + (1/2) ϕ,ij biα dWα bjβ dWβ + · · ·
   = (∂ϕ/∂t) dt + ϕ,j (aj dt + bjα dWα) + (1/2) ϕ,ij biα bjα dt.
Now convert the stochastic term using the rule derived above, applied to the process ϕ,j bjα:

ϕ,j bjα dWα = ϕ,j bjα ◦ dWα − (1/2)(ϕ,j bjα),k bkα dt
            = ϕ,j bjα ◦ dWα − (1/2)(ϕ,jk bjα bkα + ϕ,j bjα,k bkα) dt

to obtain

dϕ = [∂ϕ/∂t + ϕ,j aj + (1/2) ϕ,ij biα bjα − (1/2) ϕ,jk bjα bkα − (1/2) ϕ,j bjα,k bkα] dt + ϕ,j bjα ◦ dWα
   = [∂ϕ/∂t + ϕ,j āj] dt + ϕ,j bjα ◦ dWα.
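The correction ā = a − b b′/2 can be seen numerically in one dimension: for the Ito equation dx = βx dW the Stratonovich drift is −β²x/2, and a Heun scheme (which is consistent with Stratonovich calculus) driven by the same increments reproduces the exact Ito solution. The scheme and values below are our own sketch:

```python
import math, random

beta, T, n = 0.5, 1.0, 10000
dt = T / n
rng = random.Random(7)
dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]

f = lambda x: -(beta ** 2 / 2.0) * x   # Stratonovich drift: a - b b'/2 with a = 0
g = lambda x: beta * x

x = 1.0
for i in range(n):                      # Heun (predictor-corrector) scheme
    xp = x + f(x) * dt + g(x) * dW[i]
    x = x + 0.5 * (f(x) + f(xp)) * dt + 0.5 * (g(x) + g(xp)) * dW[i]

exact = math.exp(-beta ** 2 * T / 2.0 + beta * sum(dW))   # exact Ito solution
print(x, exact)   # close; the gap shrinks as dt -> 0
```

Running the same Heun scheme with zero drift instead would converge to exp(βW(t)), i.e. the Stratonovich solution of dx = βx ∘ dW, illustrating why the drift shift is needed.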

4.5 Problem 4
The position x(t) of a particle executing a uniform random walk is the solution of
the stochastic differential equation

dxt = µdt + σdWt , x(0) = X

where µ and σ are constants. Find the density of x at time t > 0.

Solution: Intuitive approach
The SDE can be integrated immediately to get

xt = X + µt + σWt, x(0) = X.

Consequently xt − X − µt is a Gaussian random deviate with mean value zero and


variance σ²t. Thus

f(x, t) = (1/(σ√(2πt))) exp(−(x − X − µt)²/(2σ²t))

Solution: PDE approach
If x satisfies the SDE dxt = µdt + σdWt then the density f (x, t) of x at time t
satisfies the partial differential equation

∂f σ2 ∂ 2f ∂f
= 2
−µ .
∂t 2 ∂x ∂x
The task is to solve this equation with initial condition f (x, 0) = δ(x−X). In absence
of intuition, take either the Fourier transform of f with respect to x or the Laplace
transform of f with respect to t. For example, let
Z ∞
fb(t; ω) = f (x, t)eiωx dx
−∞

then
df̂/dt = (σ²/2) ∫_{−∞}^{∞} (∂²f/∂x²) e^{iωx} dx − µ ∫_{−∞}^{∞} (∂f/∂x) e^{iωx} dx
      = (−ω²σ²/2 + µiω) f̂
with initial condition f̂(0; ω) = e^{iωX}. Clearly the solution of this first-order ODE is

f̂(t; ω) = exp(−ω²σ²t/2 + (X + µt)iω)

However, this is the characteristic function of the Gaussian distribution with mean
X + µt and variance σ 2 t. Thus
f(x, t) = (1/(σ√(2πt))) e^{−(x−X−µt)²/(2σ²t)}
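The closed-form density can be checked against the forward (Fokker-Planck) equation by finite differences (the evaluation point and parameter values are arbitrary):

```python
import math

mu, sigma, X = 0.8, 1.2, 0.0
f = lambda x, t: math.exp(-(x - X - mu * t) ** 2 / (2 * sigma ** 2 * t)) \
                 / (sigma * math.sqrt(2 * math.pi * t))

x, t, h = 0.5, 1.0, 1e-4
ft  = (f(x, t + h) - f(x, t - h)) / (2 * h)
fx  = (f(x + h, t) - f(x - h, t)) / (2 * h)
fxx = (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h ** 2
residual = ft - (sigma ** 2 / 2) * fxx + mu * fx
print(residual)   # ≈ 0: f solves f_t = (sigma^2/2) f_xx - mu f_x
```

Both solution routes (the intuitive Gaussian argument and the PDE) thus agree with the same density.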

4.6 Problem 5
Solve the stochastic differential equation

dxt = a(t)dt + b(t)dWt , x0 = X0

where X0 is constant. Show that the solution xt is a Gaussian deviate and find its
mean and variance.

Solution
The formal solution to this problem is
x(t) = X0 + ∫₀ᵗ a(s) ds + ∫₀ᵗ b(s) dWs

where the second integral is to be interpreted as an Ito integral. The fact that x(t)
is a Gaussian deviate stems from the fact that the stochastic part of x is simply a
weighted combination of Gaussian deviates, and is therefore a Gaussian deviate. The
mean is

µ(t) = X0 + ∫₀ᵗ a(s) ds
and the variance is

E[(x − µ)²] = E[(∫₀ᵗ b(s) dWs)(∫₀ᵗ b(s) dWs)] = ∫₀ᵗ b²(s) ds.

4.7 Problem 6
It is given that the solution of the initial value problem

dx = −αx dt + σx dW, x(0) = x0

in which α and σ are positive constants is

x(t) = x(0) exp[−(α + σ²/2)t + σW(t)].

Show that

E[X] = x0 e^{−αt},  V[X] = e^{−2αt}(e^{σ²t} − 1) x0².

Solution
The expected value of X is

E[X] = E[x(0) exp(−(α + σ²/2)t + σW(t))] = x(0) e^{−(α+σ²/2)t} E[e^{σW(t)}].

The task is now to compute

E[e^{σW(t)}] = (1/√(2πt)) ∫_{−∞}^{∞} e^{σW} e^{−W²/2t} dW
            = (e^{σ²t/2}/√(2πt)) ∫_{−∞}^{∞} e^{−(W−σt)²/2t} dW = e^{σ²t/2}.

Consequently it follows that

E[X] = x(0) e^{−(α+σ²/2)t} e^{σ²t/2} = x(0) e^{−αt}.

The variance of X is

V[X] = E[(x(0) exp(−(α + σ²/2)t + σW(t)) − x(0)e^{−αt})²]
     = x²(0) e^{−2αt} E[(exp(−σ²t/2 + σW(t)) − 1)²]
     = x²(0) e^{−2αt−σ²t} E[(e^{σW(t)} − e^{σ²t/2})²]
     = x²(0) e^{−2αt−σ²t} E[e^{2σW(t)} − 2e^{σW(t)} e^{σ²t/2} + e^{σ²t}]
     = x²(0) e^{−2αt−σ²t} [e^{2σ²t} − 2e^{σ²t} + e^{σ²t}]
     = x²(0) e^{−2αt} [e^{σ²t} − 1].
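The key step E[e^{σW(t)}] = e^{σ²t/2} can be confirmed by direct quadrature against the N(0, t) density (illustrative values):

```python
import math

sigma, t = 0.6, 2.0
phi = lambda w: math.exp(-w * w / (2 * t)) / math.sqrt(2 * math.pi * t)

L, n = 40.0, 40000
h = 2 * L / n
vals = [math.exp(sigma * (-L + i * h)) * phi(-L + i * h) for i in range(n + 1)]
quad = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
print(quad, math.exp(sigma ** 2 * t / 2))   # both ≈ 1.4333
```

The same identity with 2σ in place of σ gives E[e^{2σW(t)}] = e^{2σ²t}, the term used in the variance computation.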

4.8 Problem 7
Benjamin Gompertz (1840) proposed a well-known law of mortality that had the
important property that financial products based on male and female mortality could
be priced from a single mortality table with an age decrement in the case of females.
Cell populations are also well-known to obey Gompertzian kinetics in which N (t), the
population of cells at time t, evolves according to the ordinary differential equation

dN/dt = αN log(M/N),

where M and α are constants in which M represents the maximum resource-limited
population of cells. Write down the stochastic form of this equation and deduce that
ψ = log N satisfies an OU process. Further deduce that mean reversion takes place
about a cell population that is smaller than M , and find this population.

Solution
The stochastic form of this SDE is
$$dN = \alpha N \log\Big(\frac{M}{N}\Big)dt + \sigma N\,dW.$$
Ito's lemma applied to $\psi = \log N$ gives
$$d\psi = \frac{dN}{N} - \frac{\sigma^2 N^2}{2N^2}\,dt = \alpha \log\Big(\frac{M}{N}\Big)dt + \sigma\,dW - \frac{\sigma^2}{2}\,dt = \Big(\alpha \log M - \alpha\psi - \frac{\sigma^2}{2}\Big)dt + \sigma\,dW.$$
Thus $\psi$ satisfies an OU equation with mean state $\bar\psi = \log M - \sigma^2/2\alpha$, which in turn translates into the population $N = M e^{-\sigma^2/2\alpha}$. The standard solution of the OU equation may be used to write down the general solution for $\psi$ and consequently the general solution for $N(t)$. The result of this calculation is that
$$\begin{aligned}
N(t) &= \exp\Big[\big(1 - e^{-\alpha t}\big)\bar\psi + e^{-\alpha t}\psi_0 + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big]\\
&= \exp\Big[\big(1 - e^{-\alpha t}\big)\log\big(M e^{-\sigma^2/2\alpha}\big) + e^{-\alpha t}\log N_0 + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big]\\
&= N_0^{\,e^{-\alpha t}}\big(M e^{-\sigma^2/2\alpha}\big)^{(1-e^{-\alpha t})}\exp\Big[\sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big]\\
&= M e^{-\sigma^2/2\alpha}\Big(\frac{N_0}{M}\Big)^{e^{-\alpha t}}\exp\Big[\frac{\sigma^2 e^{-\alpha t}}{2\alpha} + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s\Big].
\end{aligned}$$
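The mean-reversion level can be confirmed numerically by integrating the OU form of the equation. The sketch below is an illustrative check, not part of the text; the values of $\alpha$, $\sigma$, $M$, $N_0$ are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, sigma, M = 1.0, 0.4, 100.0
psi_bar = np.log(M) - sigma**2 / (2 * alpha)   # derived reversion level

# Euler-Maruyama on psi = log N (the OU form), run close to stationarity.
dt, steps, paths = 0.01, 2000, 5000
psi = np.full(paths, np.log(50.0))             # start populations at N0 = 50
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=paths)
    psi += alpha * (psi_bar - psi) * dt + sigma * dW

assert abs(psi.mean() - psi_bar) < 0.02                   # reversion level
assert abs(psi.var() - sigma**2 / (2 * alpha)) < 0.01     # OU stationary variance
```

Exponentiating `psi_bar` recovers the population $M e^{-\sigma^2/2\alpha}$, which is indeed smaller than $M$.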

4.9 Problem 8
The position $x(t)$ of a particle executing a uniform random walk is the solution of the stochastic differential equation
$$dx_t = \mu(t)\,dt + \sigma(t)\,dW_t, \qquad x(0) = X,$$
where $\mu$ and $\sigma$ are now prescribed functions of time. Find the density of $x$ at time $t > 0$.

Solution 1: Intuitive approach

This is a repeat of the previous example. The SDE can still be integrated immediately to get
$$x_t = X + \int_0^t \mu(s)\,ds + \int_0^t \sigma(s)\,dW_s.$$
In this case $x_t - X - \int_0^t \mu(s)\,ds$ is an $N\big(0, \int_0^t \sigma^2(s)\,ds\big)$ Gaussian random deviate. Thus
$$f(x,t) = \frac{1}{\sqrt{2\pi\int_0^t \sigma^2(s)\,ds}}\,\exp\left[-\frac{\big(x - X - \int_0^t \mu(s)\,ds\big)^2}{2\int_0^t \sigma^2(s)\,ds}\right].$$

Solution 2: PDE approach

If $x$ satisfies the SDE $dx_t = \mu(t)\,dt + \sigma(t)\,dW_t$ then the density $f(x,t)$ of $x$ at time $t$ satisfies the partial differential equation
$$\frac{\partial f}{\partial t} = \frac{\sigma^2(t)}{2}\frac{\partial^2 f}{\partial x^2} - \mu(t)\frac{\partial f}{\partial x}.$$
The task is to solve this equation with initial condition $f(x,0) = \delta(x - X)$. The procedure is precisely the same as in the previous question, except that the Fourier transform of $f$ is now the preferred approach. Let
$$\hat f(t;\omega) = \int_{-\infty}^{\infty} f(x,t)e^{i\omega x}\,dx,$$
then
$$\frac{d\hat f}{dt} = \frac{\sigma^2(t)}{2}\int_{-\infty}^{\infty}\frac{\partial^2 f}{\partial x^2}e^{i\omega x}\,dx - \mu(t)\int_{-\infty}^{\infty}\frac{\partial f}{\partial x}e^{i\omega x}\,dx = \left(\frac{-\omega^2\sigma^2(t)}{2} + \mu(t)\,i\omega\right)\hat f$$
with initial condition $\hat f(0;\omega) = e^{i\omega X}$. Clearly the solution of this first order ODE is
$$\hat f(t;\omega) = \exp\left[\frac{-\omega^2}{2}\int_0^t \sigma^2(u)\,du + \Big(X + \int_0^t \mu(u)\,du\Big)i\omega\right].$$
This function is the characteristic function of the Gaussian distribution with mean and variance
$$X + \int_0^t \mu(u)\,du, \qquad \int_0^t \sigma^2(u)\,du.$$
Thus
$$f(x,t) = \frac{1}{\sqrt{2\pi\int_0^t \sigma^2(u)\,du}}\,\exp\left[-\frac{\big(x - X - \int_0^t \mu(u)\,du\big)^2}{2\int_0^t \sigma^2(u)\,du}\right].$$
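The derived density can be sanity-checked by numerical quadrature: it should integrate to one with the stated mean and variance. The sketch below is illustrative and not from the text; the choices $\mu(t) = \cos t$, $\sigma(t) = 0.5 + t$ and $X = 1$ are assumptions.

```python
import numpy as np

T, X = 1.0, 1.0
m = X + np.sin(T)                    # X + int_0^T cos(s) ds
v = ((0.5 + T)**3 - 0.5**3) / 3.0    # int_0^T (0.5 + s)^2 ds

x = np.linspace(m - 10*np.sqrt(v), m + 10*np.sqrt(v), 200001)
dx = x[1] - x[0]
f = np.exp(-(x - m)**2 / (2*v)) / np.sqrt(2*np.pi*v)

assert abs(np.sum(f)*dx - 1.0) < 1e-5            # normalization
assert abs(np.sum(x*f)*dx - m) < 1e-5            # mean X + int mu
assert abs(np.sum((x - m)**2 * f)*dx - v) < 1e-5 # variance int sigma^2
```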

4.10 Problem 9
The state $x(t)$ of a particle satisfies the stochastic differential equation
$$dx_t = a\,dt + b\,dW_t, \qquad x(0) = X,$$
where $a$ is a constant vector of dimension $n$, $b$ is a constant $n\times m$ matrix and $dW$ is a vector of Wiener increments with $m\times m$ covariance matrix $Q$. Find the density of $x$ at time $t > 0$.

Solution 1: Intuitive approach

This is again a repeat of the previous example. In matrix notation this SDE can be integrated immediately to get
$$X_t = X_0 + At + BW_t,$$
where $X_t$, $X_0$ and $A$ are $n$ dimensional column vectors, $B$ is a constant $n\times m$ matrix and $W_t$ is an $m$ dimensional column vector of correlated Wiener processes. Clearly $X_t$ is an $n$ dimensional Gaussian deviate with expected value $X_0 + At$. The covariance of $X_t$ is therefore
$$E\big[(X_t - X_0 - At)(X_t - X_0 - At)^T\big] = E\big[BW_tW_t^TB^T\big] = BQB^Tt = Gt,$$
where $Q\,dt = E\big[dW_t\,dW_t^T\big]$. Thus
$$f(X,t) = \frac{1}{(2\pi t)^{n/2}|G|^{1/2}}\exp\left[-\frac{(X - X_0 - At)^TG^{-1}(X - X_0 - At)}{2t}\right].$$

Solution 2: PDE approach

If $x$ satisfies the SDE $dx_t = a\,dt + b\,dW_t$ then the density $f(x,t)$ of $x$ at time $t$ satisfies the partial differential equation
$$\frac{\partial f}{\partial t} = \frac{g_{jk}}{2}\frac{\partial^2 f}{\partial x_j\partial x_k} - a_k\frac{\partial f}{\partial x_k}, \qquad g_{jk} = \sum_{r,s=1}^m b_{jr}Q_{rs}b_{ks}.$$
The task is to solve this equation with initial condition $f(x,0) = \delta(x - X)$. The $n$-dimensional Fourier transform of $f$ with respect to $x$ is defined by the formula
$$\hat f(t;\omega) = \int_{\mathbb{R}^n} f(x,t)e^{i\langle\omega,x\rangle}\,dx,$$
in which $\omega$ is an $n$-dimensional vector. By taking the Fourier transform of the Kolmogorov equation satisfied by $f(x,t)$, it follows that
$$\frac{d\hat f}{dt} = \frac{g_{jk}}{2}\int_{\mathbb{R}^n}\frac{\partial^2 f}{\partial x_j\partial x_k}e^{i\langle\omega,x\rangle}\,dx - a_k\int_{\mathbb{R}^n}\frac{\partial f}{\partial x_k}e^{i\langle\omega,x\rangle}\,dx = \left(\frac{-g_{jk}\omega_j\omega_k}{2} + a_k\omega_k i\right)\hat f$$
with initial condition $\hat f(0;\omega) = e^{i\omega\cdot X}$. Clearly
$$\hat f(t;\omega) = \exp\left[\frac{-g_{jk}\omega_j\omega_k t}{2} + (X_k + a_kt)\,i\omega_k\right].$$
By way of variety, the probability density function $f(x,t)$ is computed by direct inversion of the Fourier transform using the identity
$$f(x,t) = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\hat f(t;\omega)e^{-i\omega\cdot x}\,d\omega = \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\left[-\frac{1}{2}\big(g_{jk}\omega_j\omega_kt + 2(x_k - X_k - a_kt)i\omega_k\big)\right]d\omega.$$
Let $v = x - X - at$ (treated as a row vector), then by observing that $G = [g_{jk}t]$ is a symmetric positive definite matrix, it is straightforward algebra to demonstrate that
$$g_{jk}\omega_j\omega_kt + 2(x_k - X_k - a_kt)i\omega_k = \big(\omega + ivG^{-1}\big)G\big(\omega + ivG^{-1}\big)^T + vG^{-1}v^T.$$
Therefore,
$$f(x,t) = \exp\left(-\frac{vG^{-1}v^T}{2}\right)\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\left[-\frac{1}{2}\big(\omega + ivG^{-1}\big)G\big(\omega + ivG^{-1}\big)^T\right]d\omega = \exp\left(-\frac{vG^{-1}v^T}{2}\right)\frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}\exp\left(-\frac{\xi G\xi^T}{2}\right)d\xi.$$
In order to evaluate this integral, observe that since $G$ is symmetric and positive definite there exists a non-singular matrix $F$ such that $G = FF^T$. Thus
$$\xi G\xi^T = \xi FF^T\xi^T = (\xi F)(\xi F)^T = \eta\eta^T, \qquad \eta = \xi F.$$
Changing variables from $\xi$ to $\eta = \xi F$ gives
$$f(x,t) = \exp\left(-\frac{vG^{-1}v^T}{2}\right)\frac{1}{(2\pi)^n\det F}\int_{\mathbb{R}^n}\exp\left(-\frac{\eta_1^2 + \eta_2^2 + \cdots + \eta_n^2}{2}\right)d\eta.$$
Now $|F|^2 = |G| = t^n|g|$ where $g = [g_{jk}]$, so $|F| = t^{n/2}|g|^{1/2}$. Furthermore,
$$\int_{\mathbb{R}^n}\exp\left(-\frac{\eta_1^2 + \eta_2^2 + \cdots + \eta_n^2}{2}\right)d\eta = \left[\int_{\mathbb{R}}\exp\big(-\eta^2/2\big)\,d\eta\right]^n = (2\pi)^{n/2}.$$
In conclusion, the probability density function of $x$ is
$$f(x,t) = \frac{1}{(2\pi t)^{n/2}|g|^{1/2}}\exp\left[-\frac{(x - X - at)\,g^{-1}(x - X - at)^T}{2t}\right], \qquad g = [g_{jk}].$$
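The key covariance identity $\mathrm{Cov}(X_t) = BQB^Tt$ can be confirmed by direct sampling. The matrices and vectors below are small illustrative examples ($n = 2$, $m = 3$) chosen for this check, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(3)
t = 2.0
a = np.array([0.3, -0.1])
B = np.array([[1.0, 0.5, 0.0],
              [0.2, 1.0, 0.3]])
# Covariance Q of the Wiener vector (symmetric positive definite).
Q = np.array([[1.0, 0.2, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.1, 1.0]])

L = np.linalg.cholesky(Q)
M = 200000
W = rng.standard_normal((M, 3)) @ L.T * np.sqrt(t)  # W_t with Cov = Q t
X = a * t + W @ B.T                                  # samples of X_t - X_0

G = B @ Q @ B.T
assert np.allclose(X.mean(axis=0), a * t, atol=0.03)
assert np.allclose(np.cov(X.T), G * t, atol=0.05)
```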

4.11 Problem 10
The state $x(t)$ of a system evolves in accordance with the stochastic differential equation
$$dx_t = \mu x\,dt + \sigma x\,dW_t, \qquad x(0) = X,$$
where $\mu$ and $\sigma$ are constants. Find the density of $x$ at time $t > 0$.

Solution 1: Intuitive approach

The SDE is the geometric random walk. Take $Y = \log X$ and use Ito's lemma to construct the SDE satisfied by $Y_t$. Ito's lemma yields
$$dY = \frac{dY}{dX}dX + \frac{\sigma^2X^2}{2}\frac{d^2Y}{dX^2}dt = \frac{1}{X}\big(\mu X\,dt + \sigma X\,dW_t\big) + \frac{\sigma^2X^2}{2}\Big(\frac{-1}{X^2}\Big)dt,$$
which simplifies to give
$$dY = \big(\mu - \sigma^2/2\big)dt + \sigma\,dW_t.$$
This equation has solution $Y = Y_0 + (\mu - \sigma^2/2)t + \sigma W_t$. Consequently $Y$ is a Gaussian deviate with mean value $Y_0 + (\mu - \sigma^2/2)t$ and variance $\sigma^2 t$. The density of $Y$ is thus
$$f_Y(Y,t) = \frac{1}{\sigma\sqrt{2\pi t}}\exp\left[-\frac{\big(Y - Y_0 - (\mu - \sigma^2/2)t\big)^2}{2\sigma^2t}\right].$$
The transformation $Y = \log X$ is now used to map $f_Y(Y,t)$ into
$$f_X(X,t) = \frac{1}{\sigma X\sqrt{2\pi t}}\exp\left[-\frac{\big(\log(X/X_0) - (\mu - \sigma^2/2)t\big)^2}{2\sigma^2t}\right].$$

Solution 2: PDE approach

If the state $x(t)$ of a system satisfies the stochastic differential equation $dx = \mu x\,dt + \sigma x\,dW_t$ with $\mu$ and $\sigma$ constants, then $f(x,t)$, the density of $x$ at time $t > 0$, satisfies the partial differential equation
$$\frac{\partial f(x,t)}{\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\big(\sigma^2x^2f(x,t)\big) - \frac{\partial}{\partial x}\big(\mu xf(x,t)\big).$$
Let $z = \log x$, then
$$\frac{\partial f}{\partial x} = \frac{1}{x}\frac{\partial f}{\partial z}, \qquad \frac{\partial^2 f}{\partial x^2} = \frac{1}{x^2}\frac{\partial^2 f}{\partial z^2} - \frac{1}{x^2}\frac{\partial f}{\partial z}.$$
Thus it is seen that $f$ satisfies the modified PDE
$$\begin{aligned}
\frac{\partial f}{\partial t} &= \frac{1}{2}\frac{\partial^2}{\partial x^2}\big(\sigma^2x^2f(x,t)\big) - \frac{\partial}{\partial x}\big(\mu xf(x,t)\big)\\
&= \frac{\sigma^2}{2}\frac{\partial}{\partial x}\Big(2xf(x,t) + x^2\frac{\partial f}{\partial x}\Big) - \frac{\partial}{\partial x}\big(\mu xf(x,t)\big)\\
&= \frac{\sigma^2}{2}\Big(2f(x,t) + 4x\frac{\partial f}{\partial x} + x^2\frac{\partial^2 f}{\partial x^2}\Big) - \mu\Big(x\frac{\partial f}{\partial x} + f\Big)\\
&= \frac{\sigma^2x^2}{2}\frac{\partial^2 f}{\partial x^2} + \big(2\sigma^2 - \mu\big)x\frac{\partial f}{\partial x} + \big(\sigma^2 - \mu\big)f\\
&= \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial z^2} + \Big(\frac{3}{2}\sigma^2 - \mu\Big)\frac{\partial f}{\partial z} + \big(\sigma^2 - \mu\big)f.
\end{aligned}$$
Now take the Fourier transform of this equation with respect to $z$ to deduce that
$$\hat f(t;\omega) = \int_{\mathbb{R}} f(z,t)e^{i\omega z}\,dz$$
satisfies the ordinary differential equation
$$\frac{d\hat f}{dt} = \left[-\frac{\sigma^2\omega^2}{2} - i\Big(\frac{3}{2}\sigma^2 - \mu\Big)\omega + \big(\sigma^2 - \mu\big)\right]\hat f$$
with initial condition $\hat f(0;\omega) = X^{-1}e^{i\omega\log X}$. Clearly,
$$\hat f(t;\omega) = X^{-1}\exp\left[-\frac{\sigma^2\omega^2t}{2} + i\Big(\log X - \frac{3t}{2}\sigma^2 + \mu t\Big)\omega + \big(\sigma^2 - \mu\big)t\right]$$

with inverse transform
$$f = \frac{e^{(\sigma^2-\mu)t}}{\sigma X\sqrt{2\pi t}}\exp\left[-\frac{\big(z - \log X + \big(\tfrac{3}{2}\sigma^2 - \mu\big)t\big)^2}{2\sigma^2t}\right] = \frac{e^{(\sigma^2-\mu)t}}{\sigma X\sqrt{2\pi t}}\exp\left[-\frac{\big(\log x - \log X + \big(\tfrac{3}{2}\sigma^2 - \mu\big)t\big)^2}{2\sigma^2t}\right].$$
Writing $\log x - \log X + \big(\tfrac{3}{2}\sigma^2 - \mu\big)t = A + \sigma^2t$ with $A = \log(x/X) + \big(\tfrac{1}{2}\sigma^2 - \mu\big)t$, the exponent expands as $-A^2/2\sigma^2t - A - \sigma^2t/2$; since $e^{-A} = (X/x)\,e^{(\mu-\sigma^2/2)t}$, the exponential prefactors cancel and
$$f(x,t) = \frac{1}{\sigma x\sqrt{2\pi t}}\exp\left[-\frac{\big(\log(x/X) + \big(\tfrac{1}{2}\sigma^2 - \mu\big)t\big)^2}{2\sigma^2t}\right],$$
in agreement with the density found by the intuitive approach.
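The lognormal conclusion can be checked directly against exact samples of the geometric random walk: the logarithms of the samples should have the Gaussian moments used above. The sketch below is illustrative; $\mu$, $\sigma$, $X$, $t$ are assumed values.

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, X, t = 0.2, 0.35, 1.5, 1.0

# Exact samples x(t) = X exp((mu - sigma^2/2) t + sigma W(t))
W = rng.normal(0.0, np.sqrt(t), size=200000)
x = X * np.exp((mu - sigma**2 / 2) * t + sigma * W)

logs = np.log(x / X)
assert abs(logs.mean() - (mu - sigma**2 / 2) * t) < 0.005
assert abs(logs.var() - sigma**2 * t) < 0.005
```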

4.12 Problem 11

The state $x(t)$ of a system evolves in accordance with the Ornstein-Uhlenbeck process
$$dx = -\alpha(x - \beta)dt + \sigma\,dW_t, \qquad x(0) = X,$$
where $\alpha$, $\beta$ and $\sigma$ are constants. Find the density of $x$ at time $t > 0$.

Solution 1: Intuitive approach

The SDE is reorganised into the form $dx + \alpha(x - \beta)dt = \sigma\,dW_t$. Ito's lemma is now applied to $(x - \beta)e^{\alpha t}$ to obtain
$$d\big[(x - \beta)e^{\alpha t}\big] = \sigma e^{\alpha t}\,dW_t \quad\rightarrow\quad (x - \beta)e^{\alpha t} = (X - \beta) + \sigma\int_0^t e^{\alpha s}\,dW_s,$$
which simplifies to give
$$x(t) = \beta + (X - \beta)e^{-\alpha t} + \sigma\int_0^t e^{-\alpha(t-s)}\,dW_s.$$
Thus $x(t)$ is a Gaussian deviate with mean value $\beta + (X - \beta)e^{-\alpha t}$ and variance
$$\sigma^2\int_0^t e^{-2\alpha(t-s)}\,ds = \frac{\sigma^2\big(1 - e^{-2\alpha t}\big)}{2\alpha}.$$

The final conclusion is that $x$ has probability density function
$$f(x,t) = \frac{1}{\sigma}\sqrt{\frac{\alpha}{\pi\big(1 - e^{-2\alpha t}\big)}}\,\exp\left[-\frac{\alpha\big(x - \beta - (X - \beta)e^{-\alpha t}\big)^2}{\sigma^2\big(1 - e^{-2\alpha t}\big)}\right].$$

Solution 2: PDE approach

If $x(t)$ evolves in accordance with the stochastic differential equation $dx = -\alpha(x - \beta)dt + \sigma\,dW_t$ then the density of $x$ at time $t > 0$ satisfies the partial differential equation
$$\frac{\partial f}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2 f}{\partial x^2} + \alpha\frac{\partial}{\partial x}\big((x - \beta)f\big)$$
and the initial condition $f(x,0) = \delta(x - X)$. Let
$$\hat f(t;\omega) = \int_{\mathbb{R}} f(x,t)e^{i\omega x}\,dx,$$
then by taking the Fourier transform of the partial differential equation, it follows that $\hat f(t;\omega)$ satisfies the ordinary differential equation
$$\begin{aligned}
\frac{\partial\hat f}{\partial t} &= -\frac{\sigma^2\omega^2}{2}\hat f + \alpha\int_{\mathbb{R}}\frac{\partial}{\partial x}\big((x - \beta)f(x,t)\big)e^{i\omega x}\,dx\\
&= -\frac{\sigma^2\omega^2}{2}\hat f - i\alpha\omega\int_{\mathbb{R}}(x - \beta)f(x,t)e^{i\omega x}\,dx\\
&= -\frac{\sigma^2\omega^2}{2}\hat f + i\alpha\beta\omega\hat f - \alpha\omega\frac{\partial\hat f}{\partial\omega}
\end{aligned}$$
with initial condition $\hat f(0;\omega) = e^{i\omega X}$. Clearly $\varphi(t,\omega) = \log\hat f$ satisfies the first order partial differential equation
$$\frac{\partial\varphi}{\partial t} + \alpha\omega\frac{\partial\varphi}{\partial\omega} = -\frac{\sigma^2\omega^2}{2} + i\alpha\beta\omega$$
with initial condition $\varphi(0,\omega) = i\omega X$. We may solve this equation by characteristic methods. Take $\eta = \log\omega - \alpha t$ and $\xi = \omega$, then
$$\frac{\partial\varphi}{\partial t} + \alpha\omega\frac{\partial\varphi}{\partial\omega} = -\alpha\frac{\partial\varphi}{\partial\eta} + \alpha\omega\Big(\frac{\partial\varphi}{\partial\xi} + \frac{1}{\omega}\frac{\partial\varphi}{\partial\eta}\Big) = \alpha\xi\frac{\partial\varphi}{\partial\xi} = -\frac{\sigma^2\xi^2}{2} + i\alpha\beta\xi.$$
This equation is now integrated to obtain
$$\varphi = -\frac{\sigma^2\xi^2}{4\alpha} + i\beta\xi + \psi(\eta),$$
where the initial condition yields
$$i\omega X = -\frac{\sigma^2\omega^2}{4\alpha} + i\beta\omega + \psi(\log\omega) \quad\rightarrow\quad \psi(\eta) = ie^{\eta}(X - \beta) + \frac{\sigma^2e^{2\eta}}{4\alpha}.$$
It therefore follows that
$$\begin{aligned}
\varphi &= -\frac{\sigma^2\omega^2}{4\alpha} + i\beta\omega + i(X - \beta)\omega e^{-\alpha t} + \frac{\sigma^2\omega^2e^{-2\alpha t}}{4\alpha}\\
&= -\frac{\sigma^2\omega^2}{4\alpha}\big(1 - e^{-2\alpha t}\big) + i\omega\big(\beta + (X - \beta)e^{-\alpha t}\big).
\end{aligned}$$
Thus $\hat f(t;\omega) = \exp\varphi$ corresponds to a Gaussian distribution with mean $\beta + (X - \beta)e^{-\alpha t}$ and variance $\sigma^2\big(1 - e^{-2\alpha t}\big)/2\alpha$, that is
$$f(x,t) = \frac{1}{\sigma}\sqrt{\frac{\alpha}{\pi\big(1 - e^{-2\alpha t}\big)}}\,\exp\left[-\frac{\alpha\big(x - \beta - (X - \beta)e^{-\alpha t}\big)^2}{\sigma^2\big(1 - e^{-2\alpha t}\big)}\right].$$
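Both approaches predict the same mean and variance at time $t$, which an Euler-Maruyama simulation can confirm. The sketch below is an illustrative check with assumed parameter values, not part of the text.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, beta, sigma, X, t = 1.2, 0.7, 0.5, 2.0, 1.0

# Euler-Maruyama for dx = -alpha (x - beta) dt + sigma dW on many paths.
N, M = 500, 20000
dt = t / N
x = np.full(M, X)
for _ in range(N):
    x += -alpha * (x - beta) * dt + sigma * rng.normal(0.0, np.sqrt(dt), M)

m = beta + (X - beta) * np.exp(-alpha * t)
v = sigma**2 * (1 - np.exp(-2 * alpha * t)) / (2 * alpha)
assert abs(x.mean() - m) < 0.01
assert abs(x.var() - v) < 0.01
```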

4.13 Problem 12
Cox, Ingersoll and Ross proposed that the instantaneous interest rate $r(t)$ should follow the stochastic differential equation
$$dr = \alpha(\theta - r)dt + \sigma\sqrt{r}\,dW, \qquad r(0) = r_0,$$
where $dW$ is the increment of a Wiener process and $\alpha$, $\theta$ and $\sigma$ are constant parameters. Show that this equation has associated transitional probability density function
$$f(t,r) = c\Big(\frac{v}{u}\Big)^{q/2}e^{-(\sqrt{u}-\sqrt{v})^2}e^{-2\sqrt{uv}}\,I_q\big(2\sqrt{uv}\big),$$
where $I_q(x)$ is the modified Bessel function of the first kind of order $q$ and the functions $c$, $u$, $v$ and the parameter $q$ are defined by
$$c = \frac{2\alpha}{\sigma^2\big(1 - e^{-\alpha(t-t_0)}\big)}, \qquad u = cr_0e^{-\alpha(t-t_0)}, \qquad v = cr, \qquad q = \frac{2\alpha\theta}{\sigma^2} - 1.$$

Solution
The transitional probability density function for the stochastic differential equation
$$dr = \alpha(\theta - r)dt + \sigma\sqrt{r}\,dW$$
satisfies the partial differential equation
$$\frac{\partial f}{\partial t} = \frac{\sigma^2}{2}\frac{\partial^2[rf]}{\partial r^2} - \alpha\frac{\partial[(\theta - r)f]}{\partial r}, \qquad f(r,0) = \delta(r - R).$$
Since the variance is proportional to $r$ (and cannot be negative), the sample space of $r$ is $(0,\infty)$. Let
$$\hat f(t;\omega) = \int_0^{\infty} f(r,t)e^{-\omega r}\,dr,$$
then the Laplace transform of the Kolmogorov equation gives
$$\begin{aligned}
\frac{\partial\hat f(t;\omega)}{\partial t} &= \int_0^{\infty}\frac{\partial}{\partial r}\left[\frac{\sigma^2}{2}\frac{\partial[rf]}{\partial r} - \alpha(\theta - r)f\right]e^{-\omega r}\,dr\\
&= \left[\Big(\frac{\sigma^2}{2}\frac{\partial[rf]}{\partial r} - \alpha(\theta - r)f\Big)e^{-\omega r}\right]_0^{\infty} + \omega\int_0^{\infty}\left[\frac{\sigma^2}{2}\frac{\partial[rf]}{\partial r} - \alpha(\theta - r)f\right]e^{-\omega r}\,dr\\
&= \frac{\sigma^2\omega}{2}\Big(\big[rfe^{-\omega r}\big]_0^{\infty} + \omega\int_0^{\infty}rfe^{-\omega r}\,dr\Big) - \omega\alpha\theta\hat f(t;\omega) + \omega\alpha\int_0^{\infty}rfe^{-\omega r}\,dr\\
&= -\frac{\sigma^2\omega^2}{2}\frac{\partial\hat f(t;\omega)}{\partial\omega} - \omega\alpha\theta\hat f(t;\omega) - \omega\alpha\frac{\partial\hat f(t;\omega)}{\partial\omega}\\
&= -\Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\frac{\partial\hat f(t;\omega)}{\partial\omega} - \omega\alpha\theta\hat f(t;\omega).
\end{aligned}$$
Let $\varphi(t,\omega) = \log\hat f(t;\omega)$, then clearly $\varphi$ is the solution of the partial differential equation
$$\frac{\partial\varphi}{\partial t} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\frac{\partial\varphi}{\partial\omega} = -\omega\alpha\theta$$
with initial condition $\varphi(0,\omega) = \log\hat f(0;\omega) = -\omega R$. Take $\xi = \omega$ and $\eta = \log\omega - \log(2\alpha + \sigma^2\omega) - \alpha t$, then
$$\frac{\partial\varphi}{\partial t} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\frac{\partial\varphi}{\partial\omega} = -\alpha\frac{\partial\varphi}{\partial\eta} + \Big(\frac{\sigma^2\omega^2}{2} + \omega\alpha\Big)\left[\frac{\partial\varphi}{\partial\xi} + \Big(\frac{1}{\omega} - \frac{\sigma^2}{2\alpha + \sigma^2\omega}\Big)\frac{\partial\varphi}{\partial\eta}\right] = \frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\frac{\partial\varphi}{\partial\xi}.$$
Therefore $\varphi$ satisfies the partial differential equation
$$\frac{\sigma^2\omega^2 + 2\omega\alpha}{2}\frac{\partial\varphi}{\partial\xi} = -\omega\alpha\theta \quad\rightarrow\quad \frac{\partial\varphi}{\partial\xi} = -\frac{2\alpha\theta}{\sigma^2\xi + 2\alpha}.$$
Thus the general solution for $\varphi$ is
$$\varphi = -\frac{2\alpha\theta}{\sigma^2}\log\big(\sigma^2\xi + 2\alpha\big) + \psi(\eta),$$
where the initial condition gives
$$-\omega R = -\frac{2\alpha\theta}{\sigma^2}\log\big(\sigma^2\omega + 2\alpha\big) + \psi\big(\log\omega - \log(2\alpha + \sigma^2\omega)\big).$$
Let $\lambda = \log\big(\omega/(2\alpha + \sigma^2\omega)\big)$, then
$$\frac{\omega}{2\alpha + \sigma^2\omega} = e^{\lambda} \quad\rightarrow\quad \omega = \frac{2\alpha}{e^{-\lambda} - \sigma^2}.$$
Thus
$$\psi(\lambda) = \frac{-2\alpha R}{e^{-\lambda} - \sigma^2} + \frac{2\alpha\theta}{\sigma^2}\log\left(\frac{2\alpha e^{-\lambda}}{e^{-\lambda} - \sigma^2}\right).$$
Bearing in mind that the task is to compute $\hat f(t;\omega) = e^{\varphi}$, it follows that
$$\hat f(t;\omega) = \big(\sigma^2\omega + 2\alpha\big)^{-(1+q)}e^{\psi(\eta)} = \left[\frac{2\alpha}{(\sigma^2\omega + 2\alpha)(1 - \sigma^2e^{\eta})}\right]^{q+1}\exp\left(\frac{-2\alpha Re^{\eta}}{1 - \sigma^2e^{\eta}}\right),$$
where the parameter $q = 2\alpha\theta/\sigma^2 - 1$. Now
$$e^{\eta} = \frac{\omega}{2\alpha + \sigma^2\omega}e^{-\alpha t},$$
which further simplifies $\hat f(t;\omega)$ to obtain
$$\hat f(t;\omega) = \left[\frac{2\alpha}{2\alpha + \sigma^2\omega\big(1 - e^{-\alpha t}\big)}\right]^{q+1}\exp\left(\frac{-2\alpha R\omega e^{-\alpha t}}{2\alpha + \sigma^2\omega\big(1 - e^{-\alpha t}\big)}\right).$$
Let the functions $c$, $u$, $v$ be defined by
$$c = \frac{2\alpha}{\sigma^2\big(1 - e^{-\alpha t}\big)}, \qquad u = cRe^{-\alpha t}, \qquad v = cr,$$
then
$$\hat f(t;\omega) = \Big(\frac{c}{c + \omega}\Big)^{q+1}\exp\left(\frac{-u\omega}{c + \omega}\right) = \Big(\frac{c}{c + \omega}\Big)^{q+1}e^{-u}\exp\left(\frac{cu}{c + \omega}\right).$$
Since $\hat f(t;\omega)$ is a function of $(c + \omega)$, it follows from the shift property of the Laplace transform that
$$f(r,t) = e^{-u}e^{-cr}\,\mathcal{L}^{-1}\Big[(c/\omega)^{q+1}\exp(cu/\omega);\ \omega\rightarrow r\Big].$$
To complete this calculation, we compute the Laplace transform of $r^{q/2}I_q\big(2\sqrt{ucr}\big)$, where $I_q(x)$ is the modified Bessel function of argument $x$. The result is
$$\begin{aligned}
\mathcal{L}\Big[r^{q/2}I_q\big(2\sqrt{ucr}\big);\ \omega\Big] &= \int_0^{\infty}r^{q/2}I_q\big(2\sqrt{ucr}\big)e^{-\omega r}\,dr\\
&= \int_0^{\infty}r^{q/2}\,\frac{\big(2\sqrt{ucr}\big)^q}{2^q\,\Gamma(q+1)}\left(\sum_{k=0}^{\infty}\frac{(ucr)^k}{k!\,(q+1)_k}\right)e^{-\omega r}\,dr\\
&= \frac{(uc)^{q/2}}{\Gamma(q+1)}\sum_{k=0}^{\infty}\frac{(uc)^k}{k!\,(q+1)_k}\int_0^{\infty}r^{q+k}e^{-\omega r}\,dr\\
&= \frac{(uc)^{q/2}}{\Gamma(q+1)}\sum_{k=0}^{\infty}\frac{(uc)^k\,\Gamma(k+q+1)}{k!\,(q+1)_k\,\omega^{q+k+1}}\\
&= \frac{1}{c}\Big(\frac{u}{c}\Big)^{q/2}(c/\omega)^{q+1}\sum_{k=0}^{\infty}\frac{(uc/\omega)^k}{k!}\\
&= \frac{1}{c}\Big(\frac{u}{c}\Big)^{q/2}(c/\omega)^{q+1}\exp(uc/\omega),
\end{aligned}$$
where $(q+1)_k = \Gamma(k+q+1)/\Gamma(q+1)$ is the Pochhammer symbol. Thus
$$f(r,t) = ce^{-u}e^{-cr}\Big(\frac{c}{u}\Big)^{q/2}r^{q/2}I_q\big(2\sqrt{ucr}\big) = ce^{-u}e^{-cr}\Big(\frac{cr}{u}\Big)^{q/2}I_q\big(2\sqrt{ucr}\big) = c\Big(\frac{v}{u}\Big)^{q/2}e^{-(\sqrt{u}-\sqrt{v})^2}e^{-2\sqrt{uv}}\,I_q\big(2\sqrt{uv}\big)$$
when $cr$ is replaced by $v$.
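As a numerical sanity check of this density (not part of the original solution), one can evaluate $I_q$ by its power series, integrate $f$ over a fine grid, and confirm that it has unit mass and reproduces the mean $E[r_t] = \theta + (r_0 - \theta)e^{-\alpha t}$, which follows from the SDE by taking expectations. The parameter values below are illustrative assumptions.

```python
import numpy as np
from math import lgamma

def bessel_iq(q, x, terms=80):
    """Modified Bessel I_q(x) via its power series, summed in logs."""
    x = np.asarray(x, dtype=float)
    k = np.arange(terms)
    log_coeff = np.array([lgamma(j + 1) + lgamma(j + q + 1) for j in k])
    log_terms = (2*k[:, None] + q) * np.log(x[None, :] / 2.0) - log_coeff[:, None]
    return np.exp(log_terms).sum(axis=0)

alpha, theta, sigma, r0, t = 1.0, 0.8, 0.4, 0.5, 1.0
c = 2*alpha / (sigma**2 * (1.0 - np.exp(-alpha*t)))
u = c * r0 * np.exp(-alpha*t)
q = 2*alpha*theta / sigma**2 - 1.0

r = np.linspace(1e-6, 6.0, 60001)
dr = r[1] - r[0]
v = c * r
f = c * (v/u)**(q/2) * np.exp(-(np.sqrt(u) - np.sqrt(v))**2) \
      * np.exp(-2*np.sqrt(u*v)) * bessel_iq(q, 2*np.sqrt(u*v))

mean_exact = theta + (r0 - theta) * np.exp(-alpha*t)
assert abs(np.sum(f)*dr - 1.0) < 1e-3        # unit mass
assert abs(np.sum(r*f)*dr - mean_exact) < 1e-3
```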

4.14 Problem 13
Consider the problem of numerically integrating the stochastic differential equation
$$dx = a(t,x)dt + b(t,x)dW, \qquad x(0) = X_0.$$
Develop an iterative scheme to integrate this equation over the interval $[0,T]$ using the Euler-Maruyama algorithm.
It is well-known that the Euler-Maruyama algorithm has strong order of convergence one half and weak order of convergence one. Explain what programming strategy one would use to demonstrate these claims.

Solution
Let $N$ be the number of steps to be taken in advancing the solution from $t = 0$ to $t = T$, then $\Delta t = T/N$. The standard deviation of each Wiener increment is therefore $\sqrt{\Delta t}$ and the iterative scheme is captured by the pseudo-code
1. Initialize $x$ at $X_0$.
2. Iterate $N$ times: $x \rightarrow x + a(k\Delta t, x)\Delta t + \sqrt{\Delta t}\,b(k\Delta t, x)\xi$, where $\xi \sim N(0,1)$.
3. The final value of $x$ is $x(T)$.
Note that there is no requirement to store intermediate values of $x$.

Strategy. First it is necessary to simulate the integration process a large number of times, say $M$ times, by which is meant that $x(T)$ will be simulated $M$ times with each simulation based on an underlying realisation of the Wiener process. Let $x_k^{\Delta t}(T)$ be the value of $x(T)$ returned by the $k$th simulation when using the step size $\Delta t$. In computing $x_k^{\Delta t}(T)$ we choose a very fine resolution of the interval $[0,T]$, say involving $N$ small time steps, and construct the corresponding series of $N$ Wiener increments. These increments, appropriately packaged to build the Wiener increments $\Delta W_{\Delta t}$ over intervals of duration $\Delta t$, define the realisation of the underlying Wiener process needed to compute $x_k^{\Delta t}(T)$; and, of course, by integrating at the finest resolution, i.e. with $\Delta t = T/N$, one computes numerically one's best estimate of the true value of $x_k(T)$, against which numerical error will be measured. Needless to say, if the SDE has an exact solution expressed in terms of $W(T)$, then this solution would be taken as the exact solution against which error is to be estimated. For each realisation the error $e_k = x_k^{\Delta t}(T) - x_k(T)$ is computed. To test for strong convergence one plots $\log\big(\sum_{k=1}^M |e_k|/M\big)$ against $\log\Delta t$, and to test for weak convergence one plots $\log\big|\sum_{k=1}^M e_k/M\big|$ against $\log\Delta t$. The former plot will be an approximate straight line with gradient one half and the latter plot will be an approximate straight line of gradient one.
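The scheme and the strong-convergence strategy above can be sketched as follows. The test problem, the geometric random walk $dx = \mu x\,dt + \sigma x\,dW$, is chosen because it has an exact solution in terms of $W(T)$; coarse-grid increments are built from the same fine-grid Brownian path, exactly as described. Parameter values are illustrative.

```python
import numpy as np

def euler_maruyama(a, b, x0, T, dW):
    """Advance x over the increments dW (shape (M, N)); return x(T)."""
    M, N = dW.shape
    dt = T / N
    x = np.full(M, x0, dtype=float)
    for k in range(N):
        x += a(k*dt, x)*dt + b(k*dt, x)*dW[:, k]
    return x

rng = np.random.default_rng(6)
mu, sigma, x0, T, M = 0.1, 0.5, 1.0, 1.0, 20000
N_fine = 2**10
dW_fine = rng.normal(0.0, np.sqrt(T/N_fine), size=(M, N_fine))
W_T = dW_fine.sum(axis=1)
x_exact = x0 * np.exp((mu - sigma**2/2)*T + sigma*W_T)   # exact solution

errs = []
for N in (2**4, 2**6, 2**8):
    dW = dW_fine.reshape(M, N, N_fine//N).sum(axis=2)    # same path, coarser
    x = euler_maruyama(lambda t, x: mu*x, lambda t, x: sigma*x, x0, T, dW)
    errs.append(np.mean(np.abs(x - x_exact)))            # strong error

# Strong error ~ C sqrt(dt): quadrupling N should roughly halve the error.
assert errs[0] > errs[1] > errs[2]
assert 1.3 < errs[0]/errs[1] < 3.2
```

Plotting `log(errs)` against `log(dt)` gives the approximate straight line of gradient one half described above.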

4.15 Problem 14
If the state of a system satisfies the stochastic differential equation
$$dx = a(x,t)dt + b(x,t)dW, \qquad x(0) = X,$$
write down the initial value problem satisfied by $f(x,t)$, the probability density function of $x$ at time $t > 0$. Determine the initial value problem satisfied by the cumulative distribution function of $x$.

Solution
Let $f(x,t)$ be the probability density function corresponding to the distribution of the states of the stochastic differential equation
$$dx = a(x,t)dt + b(x,t)dW, \qquad x(0) = X,$$
then $f(x,t)$ satisfies the partial differential equation
$$\frac{\partial f}{\partial t} = \frac{1}{2}\frac{\partial^2(b^2f)}{\partial x^2} - \frac{\partial(af)}{\partial x}, \qquad f(x,0) = \delta(x - X).$$
Let $F(x,t)$ be the cumulative distribution function of $f(x,t)$, then
$$F(x,t) = \int_{-\infty}^{x} f(u,t)\,du \quad\rightarrow\quad f(x,t) = \frac{\partial F}{\partial x}.$$
The equation satisfied by $F(x,t)$ is therefore
$$\frac{\partial^2F}{\partial x\partial t} = \frac{1}{2}\frac{\partial^2}{\partial x^2}\Big(b^2\frac{\partial F}{\partial x}\Big) - \frac{\partial}{\partial x}\Big(a\frac{\partial F}{\partial x}\Big) \quad\rightarrow\quad \frac{\partial}{\partial x}\left[\frac{\partial F}{\partial t} - \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) + a\frac{\partial F}{\partial x}\right] = 0.$$
Thus it follows that $F$ satisfies
$$\frac{\partial F}{\partial t} = \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) - a\frac{\partial F}{\partial x} + \psi(t).$$
However,
$$F(x,t) \rightarrow 1, \qquad \frac{\partial F}{\partial x} = f \rightarrow 0, \qquad \frac{\partial^2F}{\partial x^2} = \frac{\partial f}{\partial x} \rightarrow 0 \qquad \text{as } x \rightarrow \infty,$$
and therefore $\psi(t) = 0$. In conclusion, $F(x,t)$ satisfies the partial differential equation
$$\frac{\partial F}{\partial t} = \frac{1}{2}\frac{\partial}{\partial x}\Big(b^2\frac{\partial F}{\partial x}\Big) - a\frac{\partial F}{\partial x}$$
with initial condition
$$F(x,0) = \int_{-\infty}^{x} f(u,0)\,du = \begin{cases} 0 & x < X,\\ 1/2 & x = X,\\ 1 & x > X. \end{cases}$$
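The PDE for $F$ can be tested by finite differences in the constant-coefficient case, where the exact answer is the Gaussian CDF with mean $X + at$ and variance $b^2t$. The explicit scheme below is a minimal sketch with assumed values of $a$, $b$, $X$; to avoid discretising the jump initial condition, it starts from the exact CDF at a small time $t_0$ and integrates forward.

```python
import numpy as np
from math import erf, sqrt

a, b, X = 0.5, 1.0, 0.0
xs = np.linspace(-8.0, 8.0, 801)
dx = xs[1] - xs[0]

def exact_F(t):
    """Gaussian CDF with mean X + a t and variance b^2 t."""
    return np.array([0.5*(1.0 + erf((x - X - a*t)/(b*sqrt(2.0*t)))) for x in xs])

t0, T = 0.05, 0.5
dt = 0.2 * dx**2 / b**2                   # explicit-scheme stability limit
steps = int(round((T - t0) / dt))
dt = (T - t0) / steps

F = exact_F(t0)                            # smooth start (avoids the delta)
for _ in range(steps):
    Fx = (np.roll(F, -1) - np.roll(F, 1)) / (2*dx)       # central dF/dx
    Fxx = (np.roll(F, -1) - 2*F + np.roll(F, 1)) / dx**2 # central d2F/dx2
    F = F + dt * (0.5*b**2*Fxx - a*Fx)
    F[0], F[-1] = 0.0, 1.0                               # far-field values

err = np.max(np.abs(F - exact_F(T)))
assert err < 0.01
```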

4.16 Problem 15
Compute the stationary densities for the following stochastic differential equations.
(a) $dX = (\beta - \alpha X)dt + \sigma\sqrt{X}\,dW$
(b) $dX = -\alpha\tan X\,dt + \sigma\,dW$
(c) $dX = \big[(\theta_1 - \theta_2)\cosh(X/2) - (\theta_1 + \theta_2)\sinh(X/2)\big]\cosh(X/2)\,dt + 2\cosh(X/2)\,dW$
(d) $dX = \dfrac{\alpha}{X}\,dt + \sigma\,dW$
(e) $dX = \Big(\dfrac{\alpha}{X} - X\Big)dt + \sigma\,dW$

Solution
The stationary density $f(x)$ satisfies
$$\frac{1}{2}\frac{d(gf)}{dx} - \mu f = 0 \quad\rightarrow\quad \frac{d(gf)}{dx} = \frac{2\mu}{g}(gf) \quad\rightarrow\quad \frac{d\log(gf)}{dx} = \frac{2\mu}{g} \quad\rightarrow\quad f(x) = \frac{A}{g}\exp\left(\int\frac{2\mu}{g}\,dx\right),$$
where $\mu$ is the drift, $g$ is the squared diffusion coefficient and $A$ is a constant which takes the value which ensures that $f$ integrates to one.

(a) Here $\mu = \beta - \alpha x$ and $g(x) = \sigma^2x$. Thus
$$f(x) = \frac{A}{\sigma^2x}\exp\left(\int\frac{2(\beta - \alpha x)}{\sigma^2x}\,dx\right) = \frac{A}{\sigma^2x}\exp\left(\int\Big(\frac{2\beta}{\sigma^2x} - \frac{2\alpha}{\sigma^2}\Big)dx\right) = \frac{A}{\sigma^2}\,x^{2\beta/\sigma^2-1}e^{-2\alpha x/\sigma^2}.$$

(b) Here $\mu = -\alpha\tan x$ and $g(x) = \sigma^2$. Thus
$$f(x) = \frac{A}{\sigma^2}\exp\left(-\frac{2\alpha}{\sigma^2}\int\tan x\,dx\right) = \frac{A}{\sigma^2}\exp\left(\frac{2\alpha}{\sigma^2}\log|\cos x|\right) = \frac{A}{\sigma^2}\big(\cos^2x\big)^{\alpha/\sigma^2}.$$

(c) Here $\mu = \big[(\theta_1 - \theta_2)\cosh(x/2) - (\theta_1 + \theta_2)\sinh(x/2)\big]\cosh(x/2)$ and $g(x) = 4\cosh^2(x/2)$, so that
$$\frac{2\mu}{g} = \frac{\theta_1 - \theta_2}{2} - \frac{\theta_1 + \theta_2}{2}\tanh(x/2).$$
Thus
$$f(x) = \frac{A}{4\cosh^2(x/2)}\exp\left(\frac{(\theta_1 - \theta_2)x}{2} - (\theta_1 + \theta_2)\log\cosh(x/2)\right) = \frac{A}{4\cosh^{2+\theta_1+\theta_2}(x/2)}\exp\left(\frac{(\theta_1 - \theta_2)x}{2}\right).$$

(d) Here $\mu = \alpha/x$ and $g(x) = \sigma^2$. Thus
$$f(x) = \frac{A}{\sigma^2}\exp\left(\int\frac{2\alpha}{\sigma^2x}\,dx\right) = \frac{A}{\sigma^2}\exp\left(\frac{2\alpha}{\sigma^2}\log x\right) = \frac{A}{\sigma^2}\,x^{2\alpha/\sigma^2}.$$

(e) Here $\mu = \alpha/x - x$ and $g(x) = \sigma^2$. Thus
$$f(x) = \frac{A}{\sigma^2}\exp\left(\int\frac{2(\alpha/x - x)}{\sigma^2}\,dx\right) = \frac{A}{\sigma^2}\exp\left(\frac{2\alpha}{\sigma^2}\log x - \frac{x^2}{\sigma^2}\right) = \frac{A}{\sigma^2}\,x^{2\alpha/\sigma^2}e^{-x^2/\sigma^2}.$$
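Case (a) is easy to verify by simulation: the stationary law $f(x) \propto x^{2\beta/\sigma^2-1}e^{-2\alpha x/\sigma^2}$ is a Gamma density with shape $2\beta/\sigma^2$ and scale $\sigma^2/2\alpha$, hence mean $\beta/\alpha$ and variance $\beta\sigma^2/2\alpha^2$. The long-run Euler simulation below is an illustrative check with assumed parameter values.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, sigma = 1.0, 2.0, 0.5    # note 2*beta/sigma^2 >> 1: X stays positive
dt, steps, paths = 0.005, 4000, 4000

x = np.full(paths, beta / alpha)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    x += (beta - alpha*x)*dt + sigma*np.sqrt(np.maximum(x, 0.0))*dW

assert abs(x.mean() - beta/alpha) < 0.05                    # Gamma mean
assert abs(x.var() - beta*sigma**2/(2*alpha**2)) < 0.05     # Gamma variance
```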

4.17 Discussion of Results
The foregoing results, obtained while studying the multidimensional Langevin stochastic differential equation extended by a time-delayed term using Itô's formula and partial differential equations, lead to better mathematical models. It should be noted that stochastic differential equations have played a vital role in the construction of various models. Most SDEs, including Langevin equations, are converted into stochastic integrals, which facilitates modelling.

Chapter 5

Summary of Findings, Conclusion and Recommendations

5.1 Summary of Major Findings


The priority of this dissertation is the development of techniques for solving problems on multidimensional Langevin stochastic differential equations extended by a time-delayed term. In order to achieve our objectives, we developed techniques that solve these problems using the different methods listed in the discussion of results in Chapter 4. Furthermore, test examples were conducted to validate the techniques developed, and the results have shown a better approach to the mathematical models.

5.2 Conclusion
In this work, we developed the multidimensional Langevin stochastic differential equation extended by a time-delayed term. We have presented the techniques, and problems and solutions were obtained for different mathematical models. Different methods were used in the course of constructing the various models, and better approaches were merited.

5.3 Recommendations
This dissertation was based on the multidimensional Langevin stochastic differential equation extended by a time-delayed term. The techniques were well defined, the problems and solutions were well established, and the mathematical models were constructed following standard practice. It is recommended that future investigations consider the areas above as directions for research; in principle, this work may be extended to any other type of differential equation.

