Roger Mansuy · Marc Yor

Aspects of Brownian Motion

Springer
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Printed on acid-free paper

springer.com

ISBN 978-3-540-22347-4    e-ISBN 978-3-540-49966-4

Library of Congress Control Number: 2008930798

Mathematics Subject Classification (2000): 60-02, 60-01, 60J65, 60E05
An earlier version of this book was published by Birkhäuser, Basel, as Yor, Marc: Some aspects of Brownian Motion, Part I, 1992, and Yor, Marc: Some aspects of Brownian Motion, Part II, 1997.

© 2008 Springer-Verlag Berlin Heidelberg

Roger Mansuy
21, Boulevard Carnot
92340 Bourg-la-Reine
France

Marc Yor
Laboratoire de Probabilités et Modèles Aléatoires
Université Paris VI
4, place Jussieu
75252 Paris Cedex 5
France
deaproba@proba.jussieu.fr

Cover design: WMX Design GmbH, Heidelberg
The cover illustration is based on a simulation of BESQ processes provided by C. Umansky.
Introduction
This volume is the result of our efforts to update the first eleven chapters of the two previously published ETH Zürich Lecture Notes by the second author: Some Aspects of Brownian Motion, Part I (1992); Part II (1997). The original volumes have been out of print since, roughly, the year 2000.
We have already updated the remaining chapters of Part II in:
Random Times and Enlargements of Filtrations in a Brownian Setting, Lecture Notes in Mathematics, no. 1873, Springer (2006).
Coming back to the present volume, we have modified the original first eleven chapters only minimally, essentially by completing the Bibliography. Here is a detailed description of these eleven chapters; each of them is devoted to the study of some particular class of Brownian functionals, and these classes appear in increasing order of complexity.
In Chapter 1, various results about certain Gaussian subspaces of the Gaussian space generated by a one-dimensional Brownian motion are obtained; the derivation of these results is elementary in that it essentially uses Hilbert space isomorphisms between certain Gaussian spaces and some $L^2$ spaces of deterministic functions.
In Chapter 2, several results about Brownian quadratic functionals are obtained, with some particular emphasis on a change of probability method, which makes it possible to obtain a number of variants of Lévy's formula for the stochastic area of Brownian motion.
In Chapter 3, Ray-Knight theorems on Brownian local times are recalled and extended; the processes which appear there are squares of Bessel processes, which links Chapter 3 naturally with the study of Brownian quadratic functionals made in Chapter 2; in the second half of Chapter 3, some relations with Bessel meanders and bridges are discussed.
In Chapter 4, the relation between squares of Bessel processes and Brownian local times is further exploited, in order to explain and extend the Ciesielski-Taylor identities.
In Chapters 5 and 7, a number of results about Brownian windings are established; exact distributional computations are made in Chapter 5, whereas asymptotic studies are presented in Chapter 7.
Chapter 6 is devoted to the study of the integral, on a time interval, of the exponential of a Brownian motion with drift; this study is important in mathematical finance.
In Chapters 8 and 9, some extensions of Paul Lévy's arc sine law for Brownian motion are discussed, with particular emphasis on the time spent by Brownian motion below a multiple of its one-sided supremum.
Principal values of Brownian and Bessel local times, in particular their Hilbert transforms, are discussed in Chapter 10. Such principal values occur naturally in the Dirichlet decomposition of Bessel processes with dimension smaller than 1, as well as when considering certain signed measures which are absolutely continuous with respect to the Wiener measure.
The Riemann zeta function and Jacobi theta functions are shown, in Chapter 11, to be somewhat related to the Itô measure of Brownian excursions. Some generalizations to Bessel processes are also presented.
We are well aware that this particular selection of certain aspects of Brownian motion is at once quite incomplete and arbitrary, but in defense of our choice, let us say that:
a. We feel some confidence with these particular aspects...
b. Some other aspects are excellently treated in a number of lecture notes and books, the references to which are gathered at the end of this volume.
c. Between 2004 and 2006, we undertook an ambitious updating of the same ETH Lecture Notes, but were unable to complete this more demanding task. The interested reader may consult online (http://roger.mansuy.free.fr/Aspects/Aspects references.html) the extensive Bibliography we had gathered for this purpose.
Many thanks to Kathleen Qechar for juggling with the diﬀerent versions,
macros, and so on...
Brannay, May 4th, 2008.
Keywords, chapter by chapter

Chapter 1: Gaussian space, first Wiener chaos, filtration of Brownian bridges, ergodic property, space-time harmonic functions.
Chapter 2: Quadratic functionals, Lévy's area formula, Ornstein-Uhlenbeck process, Fubini-Wiener integration by parts.
Chapter 3: Ray-Knight theorems, transfer principle, additivity property, Lévy-Khintchine representation, generalized meanders, Bessel bridges.
Chapter 4: Ciesielski-Taylor (CT) identities, Biane's extensions.
Chapter 5: Winding number, Hartman-Watson distribution, Brownian lace.
Chapter 6: Asian options, confluent hypergeometric functions, beta and gamma variables.
Chapter 7: Kallianpur-Robbins ergodic theorem, Spitzer's theorem, Gauss linking number.
Chapter 8: P. Lévy's arc sine law, F. Petit's extensions, Walsh's Brownian motion, excursion theory master formulae, Feynman-Kac formula.
Chapter 9: Local time perturbation of Brownian motion, Bismut's identity, Knight's ratio formula.
Chapter 10: Hilbert transform, principal values, Yamada's formulae, Dirichlet processes, Bertoin's excursion theory for BES(d).
Chapter 11: Riemann zeta function, Jacobi theta function, convolution of hitting times, Chung's identity.
Contents

Introduction
Keywords, chapter by chapter

1 The Gaussian space of BM
  1.1 A realization of Brownian bridges
  1.2 The filtration of Brownian bridges
  1.3 An ergodic property
  1.4 A relationship with space-time harmonic functions
  1.5 Brownian motion and Hardy's inequality in $L^2$
  1.6 Fourier transform and Brownian motion

2 The laws of some quadratic functionals of BM
  2.1 Lévy's area formula and some variants
  2.2 Some identities in law and an explanation of them via Fubini's theorem
  2.3 The laws of squares of Bessel processes

3 Squares of Bessel processes and Ray-Knight theorems for Brownian local times
  3.1 The basic Ray-Knight theorems
  3.2 The Lévy-Khintchine representation of $Q^{\delta}_x$
  3.3 An extension of the Ray-Knight theorems
  3.4 The law of Brownian local times taken at an independent exponential time
  3.5 Squares of Bessel processes and squares of Bessel bridges
  3.6 Generalized meanders and squares of Bessel processes
  3.7 Generalized meanders and Bessel bridges

4 An explanation and some extensions of the Ciesielski-Taylor identities
  4.1 A pathwise explanation of (4.1) for $\delta = 1$
  4.2 A reduction of (4.1) to an identity in law between two Brownian quadratic functionals
  4.3 Some extensions of the Ciesielski-Taylor identities
  4.4 On a computation of Földes-Révész

5 On the winding number of planar BM
  5.1 Preliminaries
  5.2 Explicit computation of the winding number of planar Brownian motion

6 On some exponential functionals of Brownian motion and the problem of Asian options
  6.1 The integral moments of $A^{(\nu)}_t$
  6.2 A study in a general Markovian set-up
  6.3 The case of Lévy processes
  6.4 Application to Brownian motion
  6.5 A discussion of some identities

7 Some asymptotic laws for multidimensional BM
  7.1 Asymptotic windings of planar BM around $n$ points
  7.2 Windings of BM in $\mathbb{R}^3$
  7.3 Windings of independent planar BM's around each other
  7.4 A unified picture of windings
  7.5 The asymptotic distribution of the self-linking number of BM in $\mathbb{R}^3$

8 Some extensions of Paul Lévy's arc sine law for BM
  8.1 Some notation
  8.2 A list of results
  8.3 A discussion of methods; some proofs
  8.4 An excursion theory approach to F. Petit's results
  8.5 A stochastic calculus approach to F. Petit's results

9 Further results about reflecting Brownian motion perturbed by its local time at 0
  9.1 A Ray-Knight theorem for the local times of $X$, up to $\tau^{\mu}_s$, and some consequences
  9.2 Proof of the Ray-Knight theorem for the local times of $X$
  9.3 Generalisation of a computation of F. Knight
  9.4 Towards a pathwise decomposition of $(X_u;\ u \le \tau^{\mu}_s)$

10 On principal values of Brownian and Bessel local times
  10.1 Yamada's formulae
  10.2 A construction of stable processes, involving principal values of Brownian local times
  10.3 Distributions of principal values of Brownian local times, taken at an independent exponential time
  10.4 Bertoin's excursion theory for BES(d), $0 < d < 1$

11 Probabilistic representations of the Riemann zeta function and some generalisations related to Bessel processes
  11.1 The Riemann zeta function and the 3-dimensional Bessel process
  11.2 The right-hand side of (11.4), and the agreement formulae between laws of Bessel processes and Bessel bridges
  11.3 A discussion of the identity (11.8)
  11.4 A strengthening of Knight's identity, and its relation to the Riemann zeta function
  11.5 Another probabilistic representation of the Riemann zeta function
  11.6 Some generalizations related to Bessel processes
  11.7 Some relations between $X_\nu$ and $\Sigma_{\nu-1} \equiv \sigma_{\nu-1} + \sigma'_{\nu-1}$
  11.8 $\zeta_\nu(s)$ as a function of $\nu$

References
Further general references about BM and Related Processes
Chapter 1
The Gaussian space of BM
In this Chapter, a number of linear transformations of the Gaussian space associated to a linear Brownian motion $(B_t,\ t \ge 0)$ are studied. Recall that this Gaussian space is precisely equal to the first Wiener chaos of $B$, that is:
$$\Gamma(B) \stackrel{\text{def}}{=} \left\{ B_f \equiv \int_0^\infty f(s)\,dB_s\ ,\ f \in L^2(\mathbb{R}_+, ds) \right\}$$
In fact, the properties of the transformations being studied may be deduced from corresponding properties of associated transformations of $L^2(\mathbb{R}_+, ds)$, thanks to the Hilbert space isomorphism $B_f \leftrightarrow f$ between $\Gamma(B)$ and $L^2(\mathbb{R}_+, ds)$, which is expressed by the identity:
$$E\left[ (B_f)^2 \right] = \int_0^\infty dt\, f^2(t) \tag{1.1}$$
This chapter may be considered as a warm-up, and is intended to show that some interesting properties of Brownian motion may be deduced easily from the covariance identity (1.1).
1.1 A realization of Brownian bridges
Let $(B_u,\ u \ge 0)$ be a one-dimensional BM, starting from 0. Fix $t > 0$ for the moment, and remark that, for $u \le t$:
$$B_u = \frac{u}{t}\, B_t + \left( B_u - \frac{u}{t}\, B_t \right)$$
is the orthogonal decomposition of the Gaussian variable $B_u$ with respect to $B_t$.
Hence, since $(B_u,\ u \ge 0)$ is a Gaussian process, the process
$$\left( B_u - \frac{u}{t}\, B_t\ ,\ u \le t \right)$$
is independent of the variable $B_t$.
Let now $\Omega^*_{(t)} \equiv C([0,t]; \mathbb{R})$ be the space of continuous functions $\omega : [0,t] \to \mathbb{R}$; on $\Omega^*_{(t)}$, denote $X_u(\omega) = \omega(u)$, $u \le t$, and $\mathcal{F}_u = \sigma\{X_s,\ s \le u\}$. $\mathcal{F}_t$ is also the Borel $\sigma$-field when $\Omega^*_{(t)}$ is endowed with the topology of uniform convergence.
For any $x \in \mathbb{R}$, we define $P^{(t)}_x$ as the distribution on $(\Omega^*_{(t)}, \mathcal{F}_t)$ of the process:
$$\left( \frac{ux}{t} + B_u - \frac{u}{t}\, B_t\ ;\ u \le t \right).$$
Clearly, the family $(P^{(t)}_x;\ x \in \mathbb{R})$ is weakly continuous, and, by construction, it satisfies:
$$E\left[ F(B_u,\ u \le t) \mid B_t = x \right] = E^{(t)}_x\left[ F(X_u,\ u \le t) \right]\ , \quad dx\ \text{a.e.},$$
for every $(\mathcal{F}_t)$-measurable, bounded functional $F$. Hence, there is no ambiguity in defining, for any $x \in \mathbb{R}$, $P^{(t)}_x$ as the law of the Brownian bridge, of duration $t$, starting at 0, and ending at $x$.
We shall call $P^{(t)}_0$ the law of the standard Brownian bridge of duration $t$. Hence, a realization of this bridge is:
$$\left( B_u - \frac{u}{t}\, B_t\ ;\ u \le t \right).$$
1.2 The ﬁltration of Brownian bridges
If $G$ is a subset of the Gaussian space generated by $(B_u,\ u \ge 0)$, we denote by $\Gamma(G)$ the Gaussian space generated by $G$, and we use the script letter $\mathcal{G}$ for the $\sigma$-field $\sigma(G)$.
We now define $\Gamma_t = \Gamma(G_t)$, where $G_t = \left\{ B_u - \frac{u}{t}\, B_t\ ;\ u \le t \right\}$ and $\mathcal{G}_t = \sigma(G_t)$.
It is immediate that $\Gamma_t$ is the orthogonal complement of $\Gamma(B_t)$ in $\Gamma(B_u,\ u \le t)$, that is, we have:
$$\Gamma(B_u,\ u \le t) = \Gamma_t \oplus \Gamma(B_t)\ .$$
Remark that $\{\Gamma_t,\ t \ge 0\}$ is an increasing family, since, for $u \le t \le t+h$:
$$B_u - \frac{u}{t}\, B_t = \left( B_u - \frac{u}{t+h}\, B_{t+h} \right) - \frac{u}{t}\left( B_t - \frac{t}{t+h}\, B_{t+h} \right),$$
and that, moreover: $\Gamma_\infty \stackrel{\text{def}}{=} \lim_{t\uparrow\infty} \uparrow \Gamma_t \equiv \Gamma(B_u,\ u \ge 0)$, since:
$$B_u = \text{a.s.} \lim_{t\to\infty} \left( B_u - \frac{u}{t}\, B_t \right).$$
Hence, $(\mathcal{G}_t,\ t \ge 0)$ is a subfiltration of $(\mathcal{B}_t \equiv \sigma(B_u,\ u \le t),\ t \ge 0)$, and $\mathcal{G}_\infty = \mathcal{B}_\infty$.
Here are some more precise statements about $(\mathcal{G}_t,\ t \ge 0)$.
Theorem 1.1
1) For any $t > 0$, we have:
$$\Gamma_t = \left\{ \int_0^t f(u)\,dB_u\ ;\ f \in L^2([0,t], du),\ \text{and}\ \int_0^t du\, f(u) = 0 \right\}$$
2) For any $t > 0$, the process:
$$\gamma^{(t)}_u = B_u - \int_0^u ds\, \frac{B_t - B_s}{t - s}\ , \quad u \le t\ ,$$
is a Brownian motion, which is independent of the variable $B_t$. Moreover, we have: $\Gamma_t = \Gamma(\gamma^{(t)}_u,\ u \le t)$.
3) The process $\beta_t = B_t - \int_0^t \frac{ds}{s}\, B_s$, $t \ge 0$, is a Brownian motion, and we have:
$$\Gamma_t = \Gamma(\beta_s,\ s \le t)\ .$$
Consequently, $(\mathcal{G}_t,\ t \ge 0)$ is the natural filtration of the Brownian motion $(\beta_t,\ t \ge 0)$.
Proof:
1) The first assertion of the Theorem follows immediately from the Hilbert space isomorphism between $L^2([0,t], du)$ and $G_t$, which transfers a function $f$ into $\int_0^t f(u)\,dB_u$.
2) Before we prove precisely the second and third assertions of the Theorem, it is worth explaining how the processes $(\gamma^{(t)}_u,\ u \le t)$ and $(\beta_t,\ t \ge 0)$ arise naturally. It is not difficult to show that $(\gamma^{(t)}_u,\ u \le t)$ is the martingale part in the canonical decomposition of $(B_u,\ u \le t)$ as a semimartingale in the filtration $\left( \mathcal{B}^{(t)}_u \equiv \mathcal{B}_u \vee \sigma(B_t)\ ;\ u \le t \right)$, whereas the idea of considering $(\beta_u,\ u \ge 0)$ occurred by looking at the Brownian motion $(\gamma^{(t)}_u,\ u \le t)$, reversed from time $t$, that is:
$$\gamma^{(t)}_t - \gamma^{(t)}_{t-u} = (B_t - B_{t-u}) - \int_0^u ds\, \frac{B_t - B_{t-s}}{s}\ .$$
3) Now, let $(Z_u,\ u \le t)$ be a family of Gaussian variables which belong to $\Gamma_t$; in order to show that $\Gamma_t = \Gamma(Z_u,\ u \le t)$, it suffices, using the first assertion of the theorem, to prove that the only functions $f \in L^2([0,t], du)$ such that
$$E\left[ Z_u \left( \int_0^t f(v)\,dB_v \right) \right] = 0\ , \quad \text{for every } u \le t \tag{1.2}$$
are the constants.
When we apply this remark to $Z_u = \gamma^{(t)}_u$, $u \le t$, we find that $f$ satisfies (1.2) if and only if
$$\int_0^u dv\, f(v) - \int_0^u ds\, \frac{1}{t-s} \int_s^t dv\, f(v) = 0\ , \quad \text{for every } u \le t\ ,$$
hence:
$$f(v) = \frac{1}{t-v} \int_v^t du\, f(u)\ , \quad dv\ \text{a.s.},$$
from which we now easily conclude that $f(v) = c$, $dv$ a.s., for some constant $c$. A similar discussion applies with $Z_u = \beta_u$, $u \le t$. $\square$
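Part 3) of the Theorem lends itself to a quick numerical sanity check (our addition; the grid, sample size, and seed are arbitrary): discretizing $\beta_1 = B_1 - \int_0^1 \frac{ds}{s} B_s$ and estimating the variance of $\beta_1$ over many paths should give a value close to 1, as it must for a standard Brownian motion at time 1.

```python
import math
import random

def beta_at_one(rng, n_steps=400):
    """One sample of beta_1 = B_1 - int_0^1 (ds/s) B_s via a Riemann sum."""
    dt = 1.0 / n_steps
    b = 0.0
    integral = 0.0
    for k in range(1, n_steps + 1):
        b += rng.gauss(0.0, math.sqrt(dt))       # b = B_{k dt}
        integral += (b / (k * dt)) * dt          # B_s/s ~ s^{-1/2} near 0: integrable
    return b - integral

rng = random.Random(42)
samples = [beta_at_one(rng) for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean should be near 0 and var near 1 (up to discretization and sampling error)
```

The small downward bias of the estimated variance comes from the right-endpoint discretization of the integral; refining the grid drives it toward 1.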
Exercise 1.1:
Let $f : \mathbb{R}_+ \to \mathbb{R}$ be an absolutely continuous function which satisfies: $f(0) = 0$; for $t > 0$, $f(t) \ne 0$; and
$$\int_0^t \frac{du}{|f(u)|} \left( \int_0^u (f'(s))^2\, ds \right)^{1/2} < \infty\ .$$
1. Show that the process:
$$Y^{(f)}_t = B_t - \int_0^t \frac{du}{f(u)} \left( \int_0^u f'(s)\,dB_s \right)\ , \quad t \ge 0\ ,$$
admits $(\mathcal{G}_t)$ as its natural filtration.
2. Show that the canonical decomposition of $(Y^{(f)}_t,\ t \ge 0)$ in its natural filtration $(\mathcal{G}_t)$ is:
$$Y^{(f)}_t = \beta_t + \int_0^t \frac{du}{f(u)} \left( \int_0^u \left( \frac{f(s)}{s} - f'(s) \right) d\beta_s \right)\ .$$
1.3 An ergodic property
We may translate the third statement of Theorem 1.1 by saying that, if $(X_t,\ t \ge 0)$ denotes the coordinate process on the canonical space $\Omega^* \equiv \Omega^*_{(\infty)} \equiv C([0,\infty), \mathbb{R})$, then the transformation $T$ defined by:
$$T(X)_t = X_t - \int_0^t \frac{ds}{s}\, X_s \qquad (t \ge 0)$$
leaves the Wiener measure $W$ invariant.
Theorem 1.2 For any $t > 0$, $\bigcap_n (T^n)^{-1}(\mathcal{F}_t)$ is $W$-trivial. Moreover, for any $n \in \mathbb{N}$, we have: $(T^n)^{-1}(\mathcal{F}_\infty) = \mathcal{F}_\infty$, $W$ a.s. (in the language of ergodic theory, $T$ is a $K$-automorphism). Consequently, the transformation $T$ on $(\Omega^*, \mathcal{F}_\infty, W)$ is strongly mixing and, a fortiori, ergodic.
Proof:
a) The third statement follows classically from the first two.
b) We already remarked that $T^{-1}(\mathcal{F}_\infty) = \mathcal{F}_\infty$, $W$ a.s., since $\mathcal{G}_\infty = \mathcal{B}_\infty$, which proves the second statement.
c) The first statement shall be proved later on as a consequence of Proposition 1.1 below. $\square$
To state the next Proposition simply, we need to recall the definition of the classical Laguerre polynomials:
$$L_n(x) = \sum_{k=0}^n \binom{n}{k} \frac{1}{k!}\, (-x)^k\ , \quad n \in \mathbb{N}\ ;$$
this is the sequence of orthonormal polynomials for the measure $e^{-x}\,dx$ on $\mathbb{R}_+$ which is obtained from $(1, x, x^2, \ldots, x^n, \ldots)$ by the Gram-Schmidt procedure.
Proposition 1.1 Let $(X_t)_{t\le1}$ be a real-valued BM, starting from 0. Define $\gamma_n = T^n(X)_1$. Then, we have:
$$\gamma_n = \int_0^1 dX_s\, L_n\!\left( \log\frac{1}{s} \right).$$
$(\gamma_n,\ n \in \mathbb{N})$ is a sequence of independent centered Gaussian variables, with variance 1, from which $(X_t,\ t \le 1)$ may be represented as:
$$X_t = \sum_{n\in\mathbb{N}} \lambda_n\!\left( \log\frac{1}{t} \right) \gamma_n\ , \quad \text{where}\quad \lambda_n(a) = \int_a^\infty dx\, e^{-x}\, L_n(x)$$
(indeed, the coefficient of $\gamma_n$ is $E[X_t\,\gamma_n] = \int_0^t ds\, L_n(\log\frac1s)$, which the change of variables $x = \log\frac1s$ turns into $\int_{\log(1/t)}^\infty dx\, e^{-x} L_n(x)$).
Proof: The expression of $\gamma_n$ as a Wiener integral involving $L_n$ is obtained by iteration of the transformation $T$.
The identity $E[\gamma_n \gamma_m] = \delta_{nm}$ then appears as a consequence of the fact that the sequence $\{L_n,\ n \in \mathbb{N}\}$ constitutes an orthonormal basis of $L^2(\mathbb{R}_+, e^{-x}dx)$. Indeed, we have:
$$E[\gamma_n \gamma_m] = \int_0^1 ds\, L_n\!\left( \log\frac1s \right) L_m\!\left( \log\frac1s \right) = \int_0^\infty dx\, e^{-x}\, L_n(x)\, L_m(x) = \delta_{nm}\ .$$
More generally, the map
$$(f(x),\ x > 0) \longmapsto \left( f\!\left( \log\frac1s \right),\ 0 < s < 1 \right)$$
is an isomorphism of Hilbert spaces between $L^2(e^{-x}dx; \mathbb{R}_+)$ and $L^2(ds; [0,1])$, and the development of $(X_t)_{t\le1}$ along the $(\gamma_n)$ sequence corresponds to the development of $1_{[0,t]}(s)$ along the basis $\left( L_n\!\left( \log\frac1s \right) \right)_{n\in\mathbb{N}}$. $\square$
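The orthonormality of $\{L_n\}$ in $L^2(\mathbb{R}_+, e^{-x}dx)$, which drives the proof, can be checked by elementary arithmetic using $\int_0^\infty e^{-x} x^k\,dx = k!$ (a verification snippet of ours, not from the text): store each $L_n$ as a coefficient list and reduce each inner product to a finite sum of factorials.

```python
import math
from itertools import product

def laguerre_coeffs(n):
    """Coefficients of L_n(x) = sum_k C(n,k) (-x)^k / k!, indexed by the power of x."""
    return [math.comb(n, k) * (-1) ** k / math.factorial(k) for k in range(n + 1)]

def inner_product(n, m):
    """<L_n, L_m> in L^2(e^{-x} dx), using int_0^inf e^{-x} x^k dx = k!."""
    a, b = laguerre_coeffs(n), laguerre_coeffs(m)
    return sum(a[i] * b[j] * math.factorial(i + j)
               for i, j in product(range(len(a)), range(len(b))))

gram = [[inner_product(n, m) for m in range(4)] for n in range(4)]
# gram is (up to rounding) the 4x4 identity matrix
```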
1.4 A relationship with space-time harmonic functions
In this paragraph, we are interested in a question which in some sense is dual to the study of the transformation $T$ considered above. More precisely, we wish to give a description of the set $\mathcal{H}$ of all probabilities $P$ on $(\Omega^*, \mathcal{F}_\infty)$ such that:
i) $\left( \tilde X_t \equiv X_t - \int_0^t \frac{ds}{s}\, X_s\ ;\ t \ge 0 \right)$ is a real-valued BM; here, we only assume that the integral $\int_0^t \frac{ds}{s}\, X_s \equiv \text{a.s.} \lim_{\varepsilon\to0} \int_\varepsilon^t \frac{ds}{s}\, X_s$ exists a.s., but we do not assume a priori that it converges absolutely;
ii) for every $t \ge 0$, the variable $X_t$ is, under $P$, independent of $(\tilde X_s,\ s \le t)$.
We obtain the following characterization of the elements of $\mathcal{H}$.

Theorem 1.3 Let $W$ denote the Wiener measure on $(\Omega^*, \mathcal{F}_\infty)$ ($W$ is the law of the real-valued Brownian motion $B$ starting from 0). Let $P$ be a probability on $(\Omega^*, \mathcal{F}_\infty)$. The three following properties are equivalent:
1) $P \in \mathcal{H}$;
2) $P$ is the law of $(B_t + tY,\ t \ge 0)$, where $Y$ is an r.v. which is independent of $(B_t,\ t \ge 0)$;
3) there exists a function $h : \mathbb{R}_+ \times \mathbb{R} \to \mathbb{R}_+$, which is space-time harmonic, that is: such that $(h(t, X_t),\ t \ge 0)$ is a $(W, \mathcal{F}_t)$ martingale, with expectation 1, and $P = W^h$, where $W^h$ is the probability on $(\Omega^*, \mathcal{F}_\infty)$ defined by:
$$W^h\big|_{\mathcal{F}_t} = h(t, X_t)\, W\big|_{\mathcal{F}_t}\ .$$
We first describe all solutions of the equation
$$(*)\qquad X_t = \beta_t + \int_0^t \frac{ds}{s}\, X_s\ ,$$
where $(\beta_t,\ t \ge 0)$ is a real-valued BM, starting from 0.

Lemma 1.1 $(X_t)$ is a solution of $(*)$ iff there exists an r.v. $Y$ such that:
$$X_t = t\left( Y - \int_t^\infty \frac{d\beta_u}{u} \right).$$
Proof: From Itô's formula, we have, for $0 < s < t$:
$$\frac{1}{t}\, X_t = \frac{1}{s}\, X_s + \int_s^t \frac{d\beta_u}{u}\ .$$
As $t \to \infty$, the right-hand side converges, hence, so does the left-hand side; we call $Y$ the limit of $\frac{X_t}{t}$, as $t \to \infty$; we have
$$\frac{1}{s}\, X_s = Y - \int_s^\infty \frac{d\beta_u}{u}\ . \qquad \square$$
We may now give a proof of Theorem 1.3; the plan of the proof is: 1) ⇒ 2) ⇒ 3) ⇒ 1).

1) ⇒ 2): from Lemma 1.1, we have $\frac{X_t}{t} = Y - \int_t^\infty \frac{d\tilde X_u}{u}$, and we now remark that
$$B_t = -t \int_t^\infty \frac{d\tilde X_u}{u}\ , \quad t \ge 0,\ \text{is a BM.} \tag{1.3}$$
Hence, it remains to show that $Y$ is independent of $B$; in fact, we have $\sigma\{B_u,\ u \ge 0\} = \sigma\{\tilde X_u,\ u \ge 0\}$, up to negligible sets, since, from (1.3), it follows that:
$$d\left( \frac{B_t}{t} \right) = \frac{d\tilde X_t}{t}\ .$$
However, from our hypothesis, $X_t$ is independent of $(\tilde X_u,\ u \le t)$, so that $Y \equiv \lim_{t\to\infty} \frac{X_t}{t}$ is independent of $(\tilde X_u,\ u \ge 0)$.

2) ⇒ 3): We condition with respect to $Y$; indeed, let $\nu(dy) = P(Y \in dy)$, and define:
$$h(t,x) = \int \nu(dy) \exp\left( yx - \frac{y^2 t}{2} \right) \equiv \int \nu(dy)\, h_y(t,x)\ .$$
From Girsanov's theorem, we know that:
$$P\left\{ (B_u + yu,\ u \ge 0) \in \Gamma \right\} = W^{h_y}(\Gamma)\ ,$$
and therefore, here, we have: $P = W^h$.

3) ⇒ 1): If $P = W^h$, then we know that $(\tilde X_u,\ u \le t)$ is independent of $X_t$ under $W$, hence also under $P$, since the density $\frac{dP}{dW}\big|_{\mathcal{F}_t} = h(t, X_t)$ depends only on $X_t$. $\square$
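For a concrete instance of property 3) (our illustration, with an arbitrarily chosen two-point law for $Y$): taking $\nu = \frac12(\delta_1 + \delta_{-1})$ gives $h(t,x) = \cosh(x)\,e^{-t/2}$, and space-time harmonicity amounts to the backward heat equation $\partial_t h + \frac12 \partial_{xx} h = 0$, which a finite-difference check confirms:

```python
import math

def h(t, x):
    """h(t,x) = ∫ ν(dy) exp(yx - y²t/2) for ν = (δ_1 + δ_{-1})/2, i.e. cosh(x) e^{-t/2}."""
    return 0.5 * (math.exp(x - t / 2) + math.exp(-x - t / 2))

def heat_residual(t, x, eps=1e-4):
    """Finite-difference estimate of ∂_t h + ½ ∂_xx h (should be ~0)."""
    dt = (h(t + eps, x) - h(t - eps, x)) / (2 * eps)
    dxx = (h(t, x + eps) - 2 * h(t, x) + h(t, x - eps)) / eps ** 2
    return dt + 0.5 * dxx

residuals = [abs(heat_residual(t, x)) for t in (0.5, 1.0) for x in (-1.0, 0.0, 2.0)]
# all residuals are tiny: h is space-time harmonic
```

Analytically, $\partial_t h = -\frac12 h$ and $\frac12 \partial_{xx} h = +\frac12 h$, so the residual is exactly 0; the code only sees floating-point noise.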
Exercise 1.2: Let $\lambda \in \mathbb{R}$. Define $\beta^{(\lambda)}_t = B_t - \lambda \int_0^t \frac{ds}{s}\, B_s$ $(t \ge 0)$.
Let $\mathcal{F}^{(\lambda)}_t = \sigma\{\beta^{(\lambda)}_s\ ;\ s \le t\}$, $t \ge 0$, be the natural filtration of $(\beta^{(\lambda)}_t,\ t \ge 0)$, and $(\mathcal{F}_t,\ t \ge 0)$ be the natural filtration of $(B_t,\ t \ge 0)$.
1. Show that $(\mathcal{F}^{(\lambda)}_t,\ t \ge 0)$ is a strict subfiltration of $(\mathcal{F}_t,\ t \ge 0)$ if, and only if, $\lambda > \frac12$.
2. We now assume: $\lambda > \frac12$. Prove that the canonical decomposition of $(\beta^{(\lambda)}_t,\ t \ge 0)$ in its natural filtration $(\mathcal{F}^{(\lambda)}_t,\ t \ge 0)$ is:
$$\beta^{(\lambda)}_t = \gamma^{(\lambda)}_t - (1-\lambda) \int_0^t \frac{ds}{s}\, \gamma^{(\lambda)}_s\ , \quad t \ge 0\ ,$$
where $(\gamma^{(\lambda)}_t,\ t \ge 0)$ is an $(\mathcal{F}^{(\lambda)}_t,\ t \ge 0)$ Brownian motion.
3. Prove that the processes $B$, $\beta^{(\lambda)}$, and $\gamma^{(\lambda)}$ satisfy the following relations:
$$d\left( \frac{B_t}{t^\lambda} \right) = \frac{d\beta^{(\lambda)}_t}{t^\lambda} \qquad\text{and}\qquad d\left( \frac{\gamma^{(\lambda)}_t}{t^{1-\lambda}} \right) = \frac{d\beta^{(\lambda)}_t}{t^{1-\lambda}}\ .$$
Exercise 1.3: (We use the notation introduced in the statement or the proof of Theorem 1.3.)
Let $Y$ be a real-valued r.v. which is independent of $(B_t,\ t \ge 0)$; let $\nu(dy) = P(Y \in dy)$ and define $B^{(\nu)}_t = B_t + Yt$.
1. Prove that if $f : \mathbb{R} \to \mathbb{R}_+$ is a Borel function, then:
$$E\left[ f(Y) \mid B^{(\nu)}_s,\ s \le t \right] = \frac{\displaystyle\int \nu(dy)\, f(y) \exp\left( y B^{(\nu)}_t - \frac{y^2 t}{2} \right)}{\displaystyle\int \nu(dy) \exp\left( y B^{(\nu)}_t - \frac{y^2 t}{2} \right)}$$
2. With the help of the space-time harmonic function $h$ featured in property 3) of Theorem 1.3, write down the canonical decomposition of $(B^{(\nu)}_t,\ t \ge 0)$ in its own filtration.
1.5 Brownian motion and Hardy's inequality in $L^2$
(1.5.1) The transformation $T$ which we have been studying is closely related to the Hardy transform:
$$H : L^2([0,1]) \longrightarrow L^2([0,1])\ ,\qquad f \longmapsto Hf : x \mapsto \frac{1}{x} \int_0^x dy\, f(y)\ .$$
We remark that the adjoint of $H$, which we denote by $\tilde H$, satisfies:
$$\tilde H f(x) = \int_x^1 \frac{dy}{y}\, f(y)\ ,\quad f \in L^2([0,1])\ .$$
The operator $K = H$, or $\tilde H$, satisfies Hardy's $L^2$ inequality:
$$\int_0^1 dx\, (Kf)^2(x) \le 4 \int_0^1 dx\, f^2(x)\ ,$$
which may be proved by several simple methods, among which one is to consider martingales defined on $[0,1]$, fitted with Lebesgue measure, and the filtration $\{\mathcal{F}_t = \sigma(A,\ A \text{ a Borel subset of } [0,t])\ ;\ t \le 1\}$ (see, for example, Dellacherie-Meyer-Yor [29]). In this paragraph, we present another approach, which is clearly related to the Brownian motion $(\beta_t)$ introduced in Theorem 1.1. We first remark that if, to begin with, $f$ is bounded, we may write
$$(*)\qquad \int_0^1 f(u)\,dB_u = \int_0^1 f(u)\,d\beta_u + \int_0^1 \frac{du}{u}\, B_u\, f(u)\ ,$$
and then, we remark that
$$\int_0^1 \frac{du}{u}\, B_u\, f(u) = \int_0^1 dB_u\, (\tilde H f)(u)\ ;$$
hence, from $(*)$:
$$\int_0^1 dB_u\, (\tilde H f)(u) = \int_0^1 f(u)\,dB_u - \int_0^1 f(u)\,d\beta_u\ ,$$
from which we immediately deduce Hardy's $L^2$ inequality (the two stochastic integrals on the right-hand side each have $L^2(\Omega)$-norm $\|f\|_{L^2}$, while the left-hand side has $L^2(\Omega)$-norm $\|\tilde H f\|_{L^2}$).
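The inequality can also be checked numerically for a concrete $f$ (our illustration; the test function $f(x)=\sqrt x$ and the grid size are arbitrary choices):

```python
import math

def hardy_check(f, n=20000):
    """Compare int_0^1 (Hf)^2 with 4*int_0^1 f^2 by a midpoint rule,
    where Hf(x) = (1/x) * int_0^x f(y) dy."""
    dx = 1.0 / n
    running = 0.0   # running integral of f up to the current midpoint
    int_hf2 = 0.0
    int_f2 = 0.0
    for k in range(n):
        x = (k + 0.5) * dx
        running += f(x) * dx
        int_hf2 += (running / x) ** 2 * dx
        int_f2 += f(x) ** 2 * dx
    return int_hf2, 4.0 * int_f2

lhs, rhs = hardy_check(lambda x: math.sqrt(x))
# for f(x) = sqrt(x): Hf(x) = (2/3) sqrt(x), so lhs is about 2/9 while rhs is about 2
```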
We now go back to $(*)$ to remark that, for any $f \in L^2[0,1]$, or, in fact more generally, for any $(\mathcal{G}_u)_{u\le1}$-predictable process $(\varphi(u,\omega))$ such that:
$$\int_0^1 du\, \varphi^2(u,\omega) < \infty \quad \text{a.s.},$$
the limit $\lim_{\varepsilon\downarrow0} \int_\varepsilon^1 \frac{du}{u}\, B_u\, \varphi(u,\omega)$ exists, since both limits, as $\varepsilon \to 0$, of $\int_\varepsilon^1 dB_u\, \varphi(u,\omega)$ and $\int_\varepsilon^1 d\beta_u\, \varphi(u,\omega)$ exist. This general existence result should be contrasted with the following
Lemma 1.2 Let $(\varphi(u,\omega)\ ;\ u \le 1)$ be a $(\mathcal{G}_u)_{u\le1}$-predictable process such that $\int_0^1 du\, \varphi^2(u,\omega) < \infty$ a.s. Then, the following properties are equivalent:
(i) $\int_0^1 \frac{du}{\sqrt u}\, |\varphi(u,\omega)| < \infty$;
(ii) $\int_0^1 \frac{du}{u}\, |B_u|\, |\varphi(u,\omega)| < \infty$;
(iii) the process $\left( \int_0^t d\beta_u\, \varphi(u,\omega)\ ,\ t \le 1 \right)$ is a $(\mathcal{B}_t,\ t \le 1)$ semimartingale.

For a proof of this Lemma, we refer the reader to Jeulin-Yor ([53]); the equivalence between (i) and (ii) is a particular case of a useful lemma due to Jeulin ([51], p. 44).
(1.5.2) We now translate the above existence result, at least for $\varphi(u,\omega) = f(u)$, with $f$ in $L^2([0,1])$, in terms of a convergence result for certain integrals of the Ornstein-Uhlenbeck process.
Define the Ornstein-Uhlenbeck process with parameter $\mu \in \mathbb{R}$ as the unique solution of Langevin's equation:
$$X_t = x + B_t + \mu \int_0^t ds\, X_s\ ;$$
the method of variation of constants yields the formula:
$$X_t = e^{\mu t}\left( x + \int_0^t e^{-\mu s}\, dB_s \right).$$
When $\mu = -\lambda$, with $\lambda > 0$, and $x$ is replaced by a centered Gaussian variable $X_0$, with variance $\beta = \frac{1}{2\lambda}$, then the process:
$$Y_t = e^{-\lambda t}\left( X_0 + \int_0^t e^{\lambda s}\, dB_s \right)$$
is stationary, and may also be represented as:
$$Y_t = \frac{1}{\sqrt{2\lambda}}\, e^{-\lambda t}\, \tilde B_{e^{2\lambda t}} = \frac{1}{\sqrt{2\lambda}}\, e^{\lambda t}\, \hat B_{e^{-2\lambda t}}\ ,$$
where $(\tilde B_u)_{u\ge0}$ and $(\hat B_u)_{u\ge0}$ are two Brownian motions, which are linked by $\tilde B_u = u\, \hat B_{1/u}$.
We now have the following

Proposition 1.2 For any $g \in L^2([0,\infty))$, $\left( \int_0^t ds\, g(s)\, Y_s\ ,\ t \to \infty \right)$ converges a.s. and in $L^2$ (in fact, in every $L^p$, $p < \infty$).

Proof: Using the representation of $(Y_t,\ t \ge 0)$ in terms of $\hat B$, with the change of variables $u = e^{-2\lambda s}$, we have:
$$\int_0^t ds\, g(s)\, Y_s = \frac{1}{2\lambda} \int_{e^{-2\lambda t}}^1 \frac{du}{u}\, \hat B_u\, \frac{1}{\sqrt{2\lambda u}}\, g\!\left( \frac{1}{2\lambda} \log\frac{1}{u} \right).$$
Now, the map
$$g \longmapsto \left( \frac{1}{\sqrt{2\lambda u}}\, g\!\left( \frac{1}{2\lambda} \log\frac{1}{u} \right),\ 0 < u < 1 \right)$$
is an isomorphism of Hilbert spaces from $L^2([0,\infty))$ onto $L^2([0,1])$; the result follows. $\square$
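The stationary process $Y$ is easy to simulate with the exact Gaussian transition of the Ornstein-Uhlenbeck equation; starting from $X_0 \sim N(0, \frac{1}{2\lambda})$, the marginal variance should remain $\frac{1}{2\lambda}$ at all times. (A simulation sketch of ours; $\lambda$, the step count, the horizon, and the number of paths are arbitrary choices.)

```python
import math
import random

def stationary_ou_samples(lam, t=2.0, n_steps=100, n_paths=5000, seed=7):
    """Sample Y_t using the exact transition
    Y_{s+dt} = e^{-lam*dt} Y_s + N(0, (1 - e^{-2*lam*dt}) / (2*lam))."""
    rng = random.Random(seed)
    dt = t / n_steps
    decay = math.exp(-lam * dt)
    step_sd = math.sqrt((1 - decay ** 2) / (2 * lam))
    out = []
    for _ in range(n_paths):
        y = rng.gauss(0.0, math.sqrt(1 / (2 * lam)))  # stationary initial law
        for _ in range(n_steps):
            y = decay * y + rng.gauss(0.0, step_sd)
        out.append(y)
    return out

lam = 2.0
ys = stationary_ou_samples(lam)
var = sum(y * y for y in ys) / len(ys)
# var should be close to 1/(2*lam) = 0.25
```

Because the one-step transition is sampled exactly (no Euler discretization error), any deviation of `var` from $\frac{1}{2\lambda}$ is pure Monte Carlo noise.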
1.6 Fourier transform and Brownian motion
There has been, since Lévy's discovery of local times, a lot of interest in the occupation measure of Brownian motion, that is, for fixed $t$ and $\omega$, the measure $\lambda_{\omega,t}(dx)$ defined by:
$$\int \lambda_{\omega,t}(dx)\, f(x) = \int_0^t ds\, f(B_s(\omega))\ .$$
In particular, one may show that, a.s., the Fourier transform of $\lambda_{\omega,t}$, that is:
$$\hat\lambda_{\omega,t}(\mu) \equiv \int_0^t ds\, \exp(i\mu B_s(\omega))$$
is in $L^2(d\mu)$; therefore, $\lambda_{\omega,t}(dx)$ is absolutely continuous and its family of densities are the local times of $B$ up to time $t$.
Now, we are interested in a variant of this, namely we consider:
$$\int_0^t ds\, g(s) \exp(i\mu B_s)\ ,\quad \mu \ne 0\ ,$$
where $g$ satisfies $\int_0^t ds\, |g(s)| < \infty$, for every $t > 0$. We note the following
Proposition 1.3 Let $\mu \in \mathbb{R}$, $\mu \ne 0$, and define $\lambda = \frac{\mu^2}{2}$. Let $(Y_t,\ t \ge 0)$ be the stationary Ornstein-Uhlenbeck process, with parameter $\lambda$. Then, we have the following identities:
$$E\left[ \left| \int_0^t ds\, g(s) \exp(i\mu B_s) \right|^2 \right] = \mu^2\, E\left[ \left( \int_0^t ds\, g(s)\, Y_s \right)^2 \right] = \int_0^t ds \int_0^t du\, g(s)\, g(u)\, e^{-\lambda |u-s|}\ .$$
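For $g \equiv 1$ the right-hand side has the closed form $\int_0^t\!\int_0^t e^{-\lambda|u-s|}\,du\,ds = \frac{2}{\lambda^2}\left( \lambda t - 1 + e^{-\lambda t} \right)$, obtained by splitting the double integral at $u = s$; a short quadrature confirms this (a verification snippet of ours, with arbitrary $\lambda$ and $t$):

```python
import math

def double_integral(lam, t, n=400):
    """Midpoint quadrature of the double integral of e^{-lam*|u-s|} over [0,t]^2."""
    h = t / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        for j in range(n):
            u = (j + 0.5) * h
            total += math.exp(-lam * abs(u - s))
    return total * h * h

def closed_form(lam, t):
    return (2 / lam ** 2) * (lam * t - 1 + math.exp(-lam * t))

lam, t = 1.5, 2.0
approx = double_integral(lam, t)
exact = closed_form(lam, t)
# approx and exact agree to a few decimal places
```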
Corollary 1.3.1 For any $\mu \ne 0$, and for any function $g \in L^2([0,\infty))$, $\left( \int_0^t ds\, g(s) \exp(i\mu B_s)\ ,\ t \to \infty \right)$ converges a.s. and in $L^2$ (also in every $L^p$, $p < \infty$).

Proof: The $L^2$ convergence follows immediately from the Proposition and from the $L^2$ convergence of the corresponding quantity for $Y$. The a.s. convergence is obtained from the martingale convergence theorem. Indeed, if we define $\Gamma(\mu) = L^2\text{-}\lim_{t\to\infty} \int_0^t ds\, g(s)\, e^{i\mu B_s}$, we have
$$E\left[ \Gamma(\mu) \mid \mathcal{B}_t \right] = \int_0^t ds\, g(s)\, e^{i\mu B_s} + e^{i\mu B_t} \int_t^\infty ds\, g(s)\, e^{-\lambda(s-t)}\ .$$
The left-hand side converges a.s., hence, so does the right-hand side; but the second term on the right-hand side goes to 0, since:
$$\left| e^{i\mu B_t} \int_t^\infty ds\, g(s)\, e^{-\lambda(s-t)} \right| \le \left( \int_t^\infty ds\, g^2(s) \right)^{1/2} \frac{1}{\sqrt{2\lambda}} \xrightarrow[t\to\infty]{} 0\ . \qquad \square$$
From the above results, the r.v. $\Gamma(\mu) \equiv \int_0^\infty ds\, g(s) \exp(i\mu B_s)$ is well-defined; it admits the following representation as a stochastic integral:
$$\Gamma(\mu) = \int_0^\infty ds\, g(s) \exp(-\lambda s) + i\mu \int_0^\infty dB_s\, \exp(i\mu B_s)\, G_\lambda(s)\ ,$$
where:
$$G_\lambda(s) = \int_s^\infty du\, g(u) \exp\left( -\lambda(u-s) \right).$$
Hence, $\Gamma(\mu)$ is the terminal variable of a martingale in the Brownian filtration, the increasing process of which is uniformly bounded. Therefore, we have:
$$E\left[ \exp\left( \alpha\, |\Gamma(\mu)|^2 \right) \right] < \infty\ ,\quad \text{for } \alpha \text{ sufficiently small}.$$
Many properties of the variables $\Gamma(\mu)$ have been obtained by C. Donati-Martin [30].
Comments on Chapter 1
- In paragraph 1.1, some explicit and well-known realizations of the Brownian
bridges are presented, with the help of the Gaussian character of Brownian
motion.
- In paragraph 1.2, it is shown that the filtration of those Brownian bridges
is that of a Brownian motion, and in paragraph 1.3, the application which
transforms the original Brownian motion into the new one is shown to be
ergodic; these two paragraphs follow Jeulin-Yor [54] closely.
One may appreciate how much the Gaussian structure facilitates the proofs
by comparing the above development (Theorem 1.2, say) with the problem,
not yet completely solved, of proving that Lévy's transformation:

(B_t,\ t \ge 0) \;\longmapsto\; \left(\int_0^t \mathrm{sgn}(B_s)\, dB_s\,;\ t \ge 0\right)
is ergodic. Dubins and Smorodinsky [35] have made some important progress
on this question.
- Paragraph 1.4 is taken from Jeulin-Yor [54]; it is closely connected to works
of H. Föllmer [43] and O. Brockhaus [22]. Also, the discussion and the results
found in the same paragraph 1.4 look very similar to those in Carlen [23],
but we have not been able to establish a precise connection between these
two works.
- Paragraph 1.5 is taken mostly from Donati-Martin and Yor [32], whilst the
content of paragraph 1.6 has been the starting point of Donati-Martin [30].
Chapter 2
The laws of some quadratic functionals of BM

In Chapter 1, we studied a number of properties of the Gaussian space of
Brownian motion; this space may be seen as corresponding to the first level of
complexity of variables which are measurable with respect to \mathcal{F}_\infty \equiv \sigma\{B_s, s \ge 0\},
where (B_s, s ≥ 0) denotes Brownian motion. Indeed, recall that N. Wiener
proved that every L²(\mathcal{F}_\infty) variable X may be represented as:

X = E(X) + \sum_{n=1}^{\infty}\int_0^\infty dB_{t_1}\int_0^{t_1} dB_{t_2}\cdots\int_0^{t_{n-1}} dB_{t_n}\,\varphi_n(t_1, \ldots, t_n)

where \varphi_n is a deterministic Borel function which satisfies:

\int_0^\infty dt_1 \cdots \int_0^{t_{n-1}} dt_n\; \varphi_n^2(t_1, \ldots, t_n) < \infty .
In this Chapter, we shall study the laws of some of the variables X which
correspond to the second level of complexity, that is: which satisfy \varphi_n = 0,
for n ≥ 3. In particular, we shall obtain the Laplace transforms of certain
quadratic functionals of B, such as:

\alpha B_t^2 + \beta\int_0^t ds\, B_s^2 , \qquad \int_0^t d\mu(s)\, B_s^2 , \quad \text{and so on...}
2.1 Lévy's area formula and some variants
(2.1.1) We consider (B_t, t ≥ 0) a δ-dimensional BM starting from a ∈ IR^δ.
We write x = |a|², and we look for an explicit expression of the quantity:

I_{\alpha,b} \;\stackrel{\text{def}}{=}\; E\left[\exp\left(-\alpha|B_t|^2 - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\right] .
We now show that, as a consequence of Girsanov's transformation, we may
obtain the following formula¹ for I_{\alpha,b}:

I_{\alpha,b} = \left(\mathrm{ch}(bt) + \frac{2\alpha}{b}\,\mathrm{sh}(bt)\right)^{-\delta/2}
\exp\left(-\frac{xb}{2}\,\frac{1 + \frac{2\alpha}{b}\coth(bt)}{\coth(bt) + \frac{2\alpha}{b}}\right) \tag{2.1}
Proof: We may assume that b ≥ 0. We consider the new probability P^{(b)}
defined by:

P^{(b)}\big|_{\mathcal{F}_t} = \exp\left\{-\frac{b}{2}\left(|B_t|^2 - x - \delta t\right) - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right\}\cdot P\big|_{\mathcal{F}_t} .
Then, under P^{(b)}, (B_u, u ≤ t) satisfies the following equation,

B_u = a + \beta_u - b\int_0^u ds\, B_s , \qquad u \le t ,

where (β_u, u ≤ t) is a (P^{(b)}, (\mathcal{F}_t)) Brownian motion.
Hence, (B_u, u ≤ t) is an Ornstein-Uhlenbeck process with parameter −b,
starting from a. Consequently, (B_u, u ≤ t) may be expressed explicitly in
terms of β, as

B_u = e^{-bu}\left(a + \int_0^u e^{bs}\, d\beta_s\right) , \tag{2.2}

a formula from which we can immediately compute the mean and the variance
of the Gaussian variable B_u (considered under P^{(b)}). This clearly solves the
problem, since we have:

¹ Throughout the volume, we use the French abbreviations ch, sh, th for, respectively,
cosh, sinh, tanh,...
I_{\alpha,b} = E^{(b)}\left[\exp\left(-\alpha|B_t|^2 + \frac{b}{2}\left(|B_t|^2 - x - \delta t\right)\right)\right] ,

and formula (2.1) now follows from some straightforward, if tedious, computations. \square
Exercise 2.1: Show that

\exp\left\{\frac{b}{2}\left(|B_t|^2 - x - \delta t\right) - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right\}

is also a (P, (\mathcal{F}_t)) martingale, and that we might have considered this martingale as
a Radon-Nikodym density to arrive at the same formula (2.1).
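Formula (2.1) can also be checked by simulation. The following sketch (our illustration; δ = 1 and all parameter values are arbitrary choices) compares the closed form against a Monte Carlo estimate of E[exp(−αB_t² − (b²/2)∫_0^t B_s² ds)] for B started at a, with x = a².

```python
import numpy as np

def I_exact(alpha, b, x, t, delta=1):
    # right-hand side of formula (2.1)
    ch, sh = np.cosh(b * t), np.sinh(b * t)
    coth = ch / sh
    pref = (ch + (2 * alpha / b) * sh) ** (-delta / 2)
    expo = -(x * b / 2) * (1 + (2 * alpha / b) * coth) / (coth + 2 * alpha / b)
    return float(pref * np.exp(expo))

def I_monte_carlo(alpha, b, a, t, n_steps=500, n_paths=40000, seed=2):
    # delta = 1; B starts at a, so x = a^2
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = a + np.cumsum(dB, axis=1)
    # trapezoid-in-time approximation of int_0^t B_s^2 ds
    B2 = np.concatenate([np.full((n_paths, 1), a * a), B ** 2], axis=1)
    quad = (0.5 * (B2[:, :-1] + B2[:, 1:])).sum(axis=1) * dt
    return float(np.mean(np.exp(-alpha * B[:, -1] ** 2 - 0.5 * b ** 2 * quad)))
```

With α = 0.5, b = t = 1 and a = 0.7, both sides are close to 0.47.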
(2.1.2) The same method allows us to compute the joint Fourier-Laplace
transform of the pair:

\left(\int_0^t f(u)\, dB_u \,,\; \int_0^t du\, B_u^2\right)

where, for simplicity, we take here the dimension δ to be 1.
Indeed, to compute:

E\left[\exp\left(i\int_0^t f(u)\, dB_u - \frac{b^2}{2}\int_0^t du\, B_u^2\right)\right] , \tag{2.3}

all we need to know, via the above method, is the joint distribution of
\int_0^t f(u)\, dB_u and B_t, under P^{(b)}.
This is clearly equivalent to being able to compute the mean and variance of
\int_0^t g(u)\, dB_u, for any g ∈ L²([0, t], du).
However, thanks to the representation (2.2), we have:

\int_0^t g(u)\, dB_u
= \int_0^t g(u)\left\{-b\,e^{-bu}\,du\left(a + \int_0^u e^{bs}\, d\beta_s\right) + e^{-bu}\left(e^{bu}\, d\beta_u\right)\right\}

= -ba\int_0^t g(u)\,e^{-bu}\,du + \int_0^t d\beta_u\left(g(u) - b\,e^{bu}\int_u^t e^{-bs} g(s)\, ds\right) .

Hence, the mean of \int_0^t g(u)\, dB_u under P^{(b)} is: -ba\int_0^t g(u)\,e^{-bu}\,du, and its
variance is:

\int_0^t du\left(g(u) - b\,e^{bu}\int_u^t e^{-bs} g(s)\, ds\right)^{2} .
We shall not continue the discussion at this level of generality, but instead, we
indicate one example where the computations have been completely carried
out.
The next formulae will be simpler if we work in a two-dimensional setting;
therefore, we shall consider Z_u = X_u + iY_u, u ≥ 0, a C-valued BM starting
from 0, and we define G = \int_0^1 ds\, Z_s, the barycenter of Z over the time
interval [0, 1].
The above calculations lead to the following formula (taken with small enough
ρ, σ ≥ 0):
E\left[\exp-\frac{\lambda^2}{2}\left(\int_0^1 ds\,|Z_s|^2 - \rho|G|^2 - \sigma|Z_1|^2\right)\right]
= \left\{(1-\rho)\,\mathrm{ch}\,\lambda + \rho\,\frac{\mathrm{sh}\,\lambda}{\lambda}
+ \sigma\left[(\rho-1)\lambda\,\mathrm{sh}\,\lambda - 2\rho(\mathrm{ch}\,\lambda - 1)\right]\right\}^{-1} \tag{2.4}

which had been obtained by a different method by Chan-Dean-Jansons-Rogers [26].
(2.1.3) Before we continue with some consequences of formulae (2.1)
and (2.4), let us make some remarks about the above method:
it consists in changing probability so that the quadratic functional disappears,
and the remaining problem is to compute the mean and variance of a Gaussian
variable. Therefore, this method consists in transferring some computational
problem for a variable belonging to (the first and) the second Wiener chaos
to computations for a variable in the first chaos; in other words, it consists
in a linearization of the original problem.
In the last paragraph of this Chapter, we shall use this method again to deal
with the more general problem, when \int_0^t ds\,|B_s|^2 is replaced by \int_0^t d\mu(s)\,|B_s|^2.
(2.1.4) A number of computations found in the literature can be obtained
very easily from the formulae (2.1) and (2.4).

a) The following formula is easily deduced from formula (2.1):

E_a\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right) \,\Big|\; B_t = 0\right]
= E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right) \,\Big|\; B_t = a\right]
= \left(\frac{bt}{\mathrm{sh}(bt)}\right)^{\delta/2}\exp\left(-\frac{|a|^2}{2t}\left(bt\coth(bt) - 1\right)\right) \tag{2.5}
which, in the particular case a = 0, yields the formula:

E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right) \,\Big|\; B_t = 0\right]
= \left(\frac{bt}{\mathrm{sh}(bt)}\right)^{\delta/2} \tag{2.6}
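For δ = 1 and t = 1, formula (2.6) says that the Laplace transform of ∫_0^1 ds B̃_s² for a standard Brownian bridge B̃ equals (b/sh b)^{1/2}. The minimal sketch below (our illustration; the bridge is realized as B_s − sB_1, and b = 2 is an arbitrary choice) checks this by Monte Carlo.

```python
import numpy as np

def bridge_quadratic_mc(b, n_steps=500, n_paths=40000, seed=3):
    # Monte Carlo estimate of E[ exp( -(b^2/2) int_0^1 (bridge_s)^2 ds ) ]
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    s = dt * np.arange(1, n_steps + 1)
    bridge = B - s * B[:, -1:]          # B_s - s*B_1 is a standard Brownian bridge
    quad = (bridge ** 2).sum(axis=1) * dt
    return float(np.mean(np.exp(-0.5 * b ** 2 * quad)))

def bridge_quadratic_exact(b):
    # formula (2.6) with t = 1, delta = 1
    return float((b / np.sinh(b)) ** 0.5)
```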
Lévy's formula for the stochastic area

\mathcal{A}_t \;\stackrel{\text{def}}{=}\; \int_0^t \left(X_s\, dY_s - Y_s\, dX_s\right)

of planar Brownian motion B_t = (X_t, Y_t) may now be deduced from
formula (2.5); precisely, one has:

E_0\left[\exp(ib\mathcal{A}_t) \mid B_t = a\right]
= \left(\frac{bt}{\mathrm{sh}\,bt}\right)\exp\left(-\frac{|a|^2}{2t}\left(bt\coth bt - 1\right)\right) \tag{2.7}
To prove formula (2.7), first remark that, thanks to the rotational invariance
of the law of Brownian motion (starting from 0), we have:

E_0\left[\exp(ib\mathcal{A}_t) \mid B_t = a\right] = E_0\left[\exp(ib\mathcal{A}_t) \mid |B_t| = |a|\right] ,

and then, we can write:

\mathcal{A}_t = \int_0^t |B_s|\, d\gamma_s ,

where (γ_t, t ≥ 0) is a one-dimensional Brownian motion independent from
(|B_s|, s ≥ 0). Therefore, we obtain:

E_0\left[\exp(ib\mathcal{A}_t) \mid |B_t| = |a|\right]
= E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right) \,\Big|\; B_t = a\right]

and formula (2.7) is now deduced from formula (2.5).
b) Similarly, from formula (2.4), one deduces:

E\left[\exp\left(-\frac{\mu^2}{2}\int_0^1 ds\,|Z_s - G|^2\right) \,\Big|\; Z_1 = z\right]
= \left(\frac{\mu/2}{\mathrm{sh}(\mu/2)}\right)^{2}\exp\left(-\frac{|z|^2}{2}\left(\frac{\mu}{2}\coth\frac{\mu}{2} - 1\right)\right) \tag{2.8}
c) As yet another example of application of the method, we now derive the
following formula obtained by M. Wenocur [91] (see also, in the same vein, [92]):
consider (W(t), t ≥ 0) a 1-dimensional BM, starting from 0, and define:
X_t = W_t + µt + x, so that (X_t, t ≥ 0) is the Brownian motion with drift µ,
starting from x.
Then, M. Wenocur [91] obtained the following formula:

E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\, X_s^2\right)\right]
= \frac{1}{(\mathrm{ch}\,\lambda)^{1/2}}\exp\left(H(x, \mu, \lambda)\right) , \tag{2.9}

where

H(x, \mu, \lambda) = -\frac{\mu^2}{2}\left(1 - \frac{\mathrm{th}\,\lambda}{\lambda}\right)
- x\mu\left(1 - \frac{1}{\mathrm{ch}\,\lambda}\right) - \frac{x^2}{2}\,\lambda\,\mathrm{th}\,\lambda .
We shall now sketch a proof of this formula, by applying twice Girsanov's
theorem. First of all, we may "get rid of the drift µ", since:

E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\, X_s^2\right)\right]
= E_x\left[\exp\left(\mu(X_1 - x) - \frac{\mu^2}{2}\right)\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\, X_s^2\right)\right]
where P_x denotes the law of Brownian motion starting from x. We apply
Girsanov's theorem a second time, thereby replacing P_x by P^{(\lambda)}_x, the law
of the Ornstein-Uhlenbeck process, with parameter λ, starting from x. We
then obtain:

E_x\left[\exp\left(\mu X_1 - \frac{\lambda^2}{2}\int_0^1 ds\, X_s^2\right)\right]
= E^{(\lambda)}_x\left[\exp\left(\mu X_1 + \frac{\lambda}{2}X_1^2\right)\exp\left(-\frac{\lambda}{2}(x^2 + 1)\right)\right] ,

and it is now easy to finish the proof of (2.9), since, as shown at the
beginning of this paragraph, the mean and variance of X_1 under P^{(\lambda)}_x are
known.
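Wenocur's formula (2.9) is also easy to test by simulation. The sketch below is our own check (the values x = 0.5, µ = 0.8, λ = 1 are arbitrary), comparing the closed form against a Monte Carlo estimate over discretized drifting Brownian paths.

```python
import numpy as np

def wenocur_exact(x, mu, lam):
    # right-hand side of formula (2.9)
    th, ch = np.tanh(lam), np.cosh(lam)
    H = (-(mu ** 2) / 2 * (1 - th / lam)
         - x * mu * (1 - 1 / ch)
         - (x ** 2) / 2 * lam * th)
    return float(ch ** (-0.5) * np.exp(H))

def wenocur_mc(x, mu, lam, n_steps=500, n_paths=40000, seed=4):
    # Monte Carlo estimate of E[ exp( -(lam^2/2) int_0^1 X_s^2 ds ) ], X = x + mu*t + W
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    t = dt * np.arange(1, n_steps + 1)
    X = x + mu * t + np.cumsum(dW, axis=1)
    quad = (X ** 2).sum(axis=1) * dt
    return float(np.mean(np.exp(-0.5 * lam ** 2 * quad)))
```

Note that for µ = 0 the formula reduces to (2.1) with α = 0, δ = 1, which is a useful consistency check on the reconstruction of H.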
Exercise 2.2: 1) Extend formula (2.9) to a δ-dimensional Brownian motion
with constant drift.
2) Derive formula (2.1) from this extended formula (2.9).
Hint: Integrate both sides of the extended formula (2.9) with respect to
d\mu\,\exp\left(-c|\mu|^2\right) on IR^δ.
Exercise 2.3: Let (B_t, t ≥ 0) be a 3-dimensional Brownian motion starting
from 0.
1. Prove the following formula:
for every m ∈ IR³, ξ ∈ IR³ with |ξ| = 1, and λ ∈ IR*,

E\left[\exp\left(i\lambda\,\xi\cdot\int_0^1 B_s \wedge dB_s\right) \,\Big|\; B_1 = m\right]
= \left(\frac{\lambda}{\mathrm{sh}\,\lambda}\right)\exp\left(\frac{|m|^2 - (\xi\cdot m)^2}{2}\left(1 - \lambda\coth\lambda\right)\right) ,

where x · y, resp.: x ∧ y, denotes the scalar product, resp.: the vector
product, of x and y in IR³.
Hint: Express \xi\cdot\int_0^1 B_s \wedge dB_s in terms of the stochastic area of the
2-dimensional Brownian motion: (η · B_s; (ξ ∧ η) · B_s; s ≥ 0), where η is a
suitably chosen unit vector of IR³, which is orthogonal to ξ.
2. Prove that, for any λ ∈ IR*, z ∈ IR³, and ξ ∈ IR³, with |ξ| = 1, one has:

E\left[\exp i\left(z\cdot B_1 + \lambda\,\xi\cdot\int_0^1 B_s \wedge dB_s\right)\right]
= \frac{1}{\mathrm{ch}\,\lambda}\exp\left(-\frac{1}{2}\left\{|z|^2\,\frac{\mathrm{th}\,\lambda}{\lambda}
+ (z\cdot\xi)^2\left(1 - \frac{\mathrm{th}\,\lambda}{\lambda}\right)\right\}\right) .
2.2 Some identities in law and an explanation of them
via Fubini's theorem

(2.2.1) We consider again formula (2.4), in which we take ρ = 1, and σ = 0.
We then obtain:

E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\,|Z_s - G|^2\right)\right] = \frac{\lambda}{\mathrm{sh}\,\lambda} ,
but, from formula (2.6), we also know that, using the notation (\tilde{Z}_s, s ≤ 1) for
the complex Brownian bridge of length 1:

E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\,|\tilde{Z}_s|^2\right)\right] = \frac{\lambda}{\mathrm{sh}\,\lambda} ;

hence, the following identity in law holds:

\int_0^1 ds\,|Z_s - G|^2 \;\stackrel{\text{(law)}}{=}\; \int_0^1 ds\,|\tilde{Z}_s|^2 , \tag{2.10}

an identity which had been previously noticed by several authors (see, e.g., [33]).
Obviously, the fact that, in (2.10), Z, resp. \tilde{Z}, denotes a complex-valued BM,
resp. Brownian bridge, instead of a real-valued process, is of no importance,
and (2.10) is indeed equivalent to:

\int_0^1 dt\,(B_t - G)^2 \;\stackrel{\text{(law)}}{=}\; \int_0^1 dt\;\tilde{B}_t^2 , \tag{2.11}

where (B_t, t ≤ 1), resp. (\tilde{B}_t, t ≤ 1), now denotes a 1-dimensional BM, resp.
Brownian bridge, starting from 0.
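The identity in law (2.11) can be probed numerically by comparing empirical moments of both sides. The sketch below (our illustration; both functionals are computed from the same simulated paths, which only tightens the comparison) checks the first two moments.

```python
import numpy as np

def centred_vs_bridge(n_steps=500, n_paths=30000, seed=5):
    # samples of int_0^1 (B_t - G)^2 dt  and of  int_0^1 (bridge_t)^2 dt
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    B = np.cumsum(dB, axis=1)
    G = B.sum(axis=1, keepdims=True) * dt           # G = int_0^1 B_t dt
    lhs = ((B - G) ** 2).sum(axis=1) * dt           # int_0^1 (B_t - G)^2 dt
    t = dt * np.arange(1, n_steps + 1)
    bridge = B - t * B[:, -1:]                      # standard Brownian bridge
    rhs = (bridge ** 2).sum(axis=1) * dt            # int_0^1 bridge_t^2 dt
    return lhs, rhs
```

Both sides have mean 1/6, in agreement with E[∫_0^1 t(1−t) dt] for the bridge.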
(2.2.2) Our first aim in this paragraph is to give a simple explanation of
(2.11) via Fubini's theorem.
Indeed, if B and C denote two independent Brownian motions and
φ ∈ L²([0, 1]², du\, ds), we have:

\int_0^1 dB_u\int_0^1 dC_s\,\varphi(u, s) \;\stackrel{\text{a.s.}}{=}\; \int_0^1 dC_s\int_0^1 dB_u\,\varphi(u, s) ,

which, as a corollary, yields:

\int_0^1 du\left(\int_0^1 dC_s\,\varphi(u, s)\right)^2
\;\stackrel{\text{(law)}}{=}\; \int_0^1 du\left(\int_0^1 dC_s\,\varphi(s, u)\right)^2 \tag{2.12}

(in the sequel, we shall refer to this identity as the "Fubini-Wiener identity
in law").
The identity (2.11) is now a particular instance of (2.12), as the following
Proposition shows.
Proposition 2.1 Let f : [0, 1] → IR be a C¹ function such that f(1) = 1.
Then, we have:

\int_0^1 ds\left(B_s - \int_0^1 dt\, f'(t)\,B_t\right)^2
\;\stackrel{\text{(law)}}{=}\; \int_0^1 ds\,\left(B_s - f(s)\,B_1\right)^2 . \tag{2.13}

In particular, in the case f(s) = s, we obviously recover (2.11).

Proof: It follows from the identity in law (2.12), where we take:

\varphi(s, u) = \left(1_{(u \le s)} - (f(1) - f(u))\right)\,1_{((s,u)\in[0,1]^2)} . \qquad\square
Here is another variant, due to Shi Zhan, of the identity in law (2.13).

Exercise 2.4: Let µ(dt) be a probability on IR_+. Then, prove that:

\int_0^\infty \mu(dt)\left(B_t - \int_0^\infty \mu(ds)\,B_s\right)^2
\;\stackrel{\text{(law)}}{=}\; \int_0^\infty \tilde{B}^2_{\mu[0,t]}\, dt ,

where (\tilde{B}_u, u ≤ 1) is a standard Brownian bridge.
As a second application of (2.12), or rather of a discrete version of (2.12),
we prove a striking identity in law (2.14), which resembles the integration by
parts formula.

Theorem 2.1 Let (B_t, t ≥ 0) be a 1-dimensional BM starting from 0. Let
0 ≤ a ≤ b < ∞, and f, g : [a, b] → IR_+ be two continuous functions, with f
decreasing, and g increasing. Then:

\int_a^b \left(-df(x)\right) B^2_{g(x)} + f(b)\,B^2_{g(b)}
\;\stackrel{\text{(law)}}{=}\; g(a)\,B^2_{f(a)} + \int_a^b dg(x)\, B^2_{f(x)} . \tag{2.14}
In order to prove (2.14), it suffices to show that the identity in law:

-\sum_{i=1}^{n-1}\left(f(t_{i+1}) - f(t_i)\right) B^2_{g(t_i)} + f(t_n)\,B^2_{g(t_n)}
\;\stackrel{\text{(law)}}{=}\; g(t_1)\,B^2_{f(t_1)} + \sum_{i=2}^{n}\left(g(t_i) - g(t_{i-1})\right) B^2_{f(t_i)} , \tag{2.15}

where a = t_1 < t_2 < \cdots < t_n = b, holds, and then to let the mesh of the
subdivision tend to 0.
Now, (2.15) is a particular case of a discrete version of (2.12), which we now
state.
Theorem 2.2 Let X_n = (X_1, \ldots, X_n) be an n-dimensional Gaussian vector,
the components of which are independent, centered, with variance 1. Then,
for any n × n matrix A, we have:

|A X_n| \;\stackrel{\text{(law)}}{=}\; |A^{*} X_n| ,

where A^{*} is the transpose of A, and, if x_n = (x_1, \ldots, x_n) ∈ IR^n, we denote:

|x_n| = \left(\sum_{i=1}^{n} x_i^2\right)^{1/2} .
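One way to see Theorem 2.2 (our gloss, not the text's proof): |AX|² = XᵀAᵀAX and |AᵀX|² = XᵀAAᵀX, and the symmetric matrices AᵀA and AAᵀ have the same eigenvalues, hence the two quadratic forms have the same law. The sketch below checks both the eigenvalue statement and the resulting equality of empirical means for a random matrix.

```python
import numpy as np

def check_transpose_identity(n=6, n_paths=200000, seed=6):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, n))
    # A^T A and A A^T share their spectrum, which drives the identity in law
    ev1 = np.sort(np.linalg.eigvalsh(A.T @ A))
    ev2 = np.sort(np.linalg.eigvalsh(A @ A.T))
    # empirical comparison of E|A X| and E|A^T X| for standard Gaussian X
    X = rng.normal(size=(n_paths, n))
    m1 = float(np.mean(np.linalg.norm(X @ A.T, axis=1)))   # |A X|
    m2 = float(np.mean(np.linalg.norm(X @ A, axis=1)))     # |A^T X|
    return ev1, ev2, m1, m2
```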
Corollary 2.2.1 Let (Y_1, \ldots, Y_n) and (Z_1, \ldots, Z_n) be two n-dimensional
Gaussian vectors such that
i) Y_1, Y_2 - Y_1, \ldots, Y_n - Y_{n-1} are independent;
ii) Z_n, Z_n - Z_{n-1}, \ldots, Z_2 - Z_1 are independent.
Then, we have

-\sum_{i=1}^{n} Y_i^2\left(E(Z_{i+1}^2) - E(Z_i^2)\right)
\;\stackrel{\text{(law)}}{=}\; \sum_{i=1}^{n} Z_i^2\left(E(Y_i^2) - E(Y_{i-1}^2)\right) \tag{∗}

where we have used the convention: E(Z_{n+1}^2) = E(Y_0^2) = 0.

The identity in law (2.15) now follows as a particular case of (∗).
2.3 The laws of squares of Bessel processes

Consider (B_t, t ≥ 0) a δ-dimensional (δ ∈ IN, for the moment...) Brownian
motion starting from a, and define: X_t = |B_t|². Then, (X_t, t ≥ 0) satisfies
the following equation

X_t = x + 2\int_0^t \sqrt{X_s}\, d\beta_s + \delta t , \tag{2.16}

where x = |a|², and (β_t, t ≥ 0) is a 1-dimensional Brownian motion. More
generally, from the theory of 1-dimensional stochastic differential equations,
we know that for any pair x, δ ≥ 0, the equation (2.16) admits one strong
solution, hence, a fortiori, it enjoys the uniqueness in law property.
Therefore, we may define, on the canonical space Ω*_+ ≡ C(IR_+, IR_+), Q^δ_x as
the law of a process which satisfies (2.16).
The family (Q^δ_x, x ≥ 0, δ ≥ 0) possesses the following additivity property,
which is obvious for integer dimensions.
Theorem 2.3 (Shiga-Watanabe [83]) For any δ, δ', x, x' ≥ 0, the identity:

Q^\delta_x * Q^{\delta'}_{x'} = Q^{\delta+\delta'}_{x+x'}

holds, where * denotes the convolution of two probabilities on Ω*_+.
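Both the SDE (2.16) and the additivity of Theorem 2.3 can be illustrated by a crude Euler scheme (our sketch; the truncation at 0 and all parameter choices are ours, and the scheme is only a first-order approximation). We compare the sum of two independent BESQ samples with a direct sample of the combined law, using E[X_t] = x + δt as a reference.

```python
import numpy as np

def besq_euler(x, delta, t=1.0, n_steps=400, n_paths=20000, rng=None):
    # Euler scheme for dX = 2*sqrt(X) dbeta + delta dt, truncated at 0
    if rng is None:
        rng = np.random.default_rng(7)
    dt = t / n_steps
    X = np.full(n_paths, float(x))
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + 2.0 * np.sqrt(np.maximum(X, 0.0)) * dW + delta * dt
        X = np.maximum(X, 0.0)
    return X

rng = np.random.default_rng(7)
x1, d1, x2, d2, t = 0.5, 1.0, 1.0, 2.0, 1.0
S = besq_euler(x1, d1, t, rng=rng) + besq_euler(x2, d2, t, rng=rng)  # Q^d1_x1 * Q^d2_x2
D = besq_euler(x1 + x2, d1 + d2, t, rng=rng)                         # Q^{d1+d2}_{x1+x2}
```

At the fixed time t the two samples should have matching moments, up to Monte Carlo and Euler bias.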
Now, for any positive, σ-finite, measure µ on IR_+, we define:

I_\mu(\omega) = \int_0^\infty d\mu(s)\, X_s(\omega) ,

and we deduce from the theorem that there exist two positive constants A(µ)
and B(µ) such that:

Q^\delta_x\left(\exp-\frac{1}{2} I_\mu\right) = \left(A(\mu)\right)^{x}\left(B(\mu)\right)^{\delta} .

The next theorem allows us to compute A(µ) and B(µ).
Theorem 2.4 For any positive Radon measure µ on [0, ∞), one has:

Q^\delta_x\left(\exp-\frac{1}{2} I_\mu\right)
= \left(\varphi_\mu(\infty)\right)^{\delta/2}\exp\left(\frac{x}{2}\,\varphi'_\mu(0^+)\right) ,

where \varphi_\mu denotes the unique solution of:

\varphi'' = \mu\varphi \quad \text{on } (0, \infty) , \qquad \varphi_\mu(0) = 1, \quad 0 \le \varphi_\mu \le 1 ,

and \varphi'_\mu(0^+) is the right derivative of \varphi_\mu at 0.
Proof: For simplicity, we assume that µ is diffuse, and that its support is
contained in (0, 1).
Define:

F_\mu(t) = \frac{\varphi'_\mu(t)}{\varphi_\mu(t)} , \qquad
\hat{F}_\mu(t) = \int_0^t \frac{\varphi'_\mu(s)}{\varphi_\mu(s)}\, ds = \log\varphi_\mu(t) .

Then, remark that:

Z^\mu_t \;\stackrel{\text{def}}{=}\;
\exp\left\{\frac{1}{2}\left(F_\mu(t)\,X_t - F_\mu(0)\,x - \delta\,\hat{F}_\mu(t)\right)
- \frac{1}{2}\int_0^t X_s\, d\mu(s)\right\}

is a Q^\delta_x martingale, since it may be written as:

\exp\left\{\int_0^t F_\mu(s)\, dM_s - \frac{1}{2}\int_0^t F^2_\mu(s)\, d\langle M\rangle_s\right\} ,

where: M_t = \frac{1}{2}(X_t - \delta t), and \langle M\rangle_t = \int_0^t ds\, X_s .

It now remains to write: Q^\delta_x(Z^\mu_1) = 1, and to use the fact that F_\mu(1) = 0 to
obtain the result stated in the theorem. \square
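Theorem 2.4 can be cross-checked against formula (2.1) in a case where the Sturm-Liouville equation is solvable in closed form (our illustration). For µ(ds) = b²1_{[0,T]}(s) ds, the bounded solution is φ_µ(s) = ch(b(T−s))/ch(bT) on [0, T], constant afterwards, so that φ_µ(∞) = 1/ch(bT) and φ'_µ(0⁺) = −b th(bT); Theorem 2.4 then reproduces (2.1) with α = 0. The script verifies the ODE by finite differences and compares the two closed forms.

```python
import numpy as np

b, T, x, delta = 1.3, 1.0, 0.8, 2.0

def phi(s):
    # candidate solution of phi'' = mu*phi for mu(ds) = b^2 1_[0,T](s) ds
    s = np.asarray(s, dtype=float)
    return np.where(s <= T,
                    np.cosh(b * (T - s)) / np.cosh(b * T),
                    1.0 / np.cosh(b * T))

# check phi'' = b^2 phi on (0, T) by central differences
h = 1e-4
s = np.linspace(0.05, T - 0.05, 50)
second = (phi(s + h) - 2 * phi(s) + phi(s - h)) / h ** 2
ode_err = float(np.max(np.abs(second - b ** 2 * phi(s))))

# Theorem 2.4:  (phi(inf))^{delta/2} exp( (x/2) phi'(0+) )
phi_inf = 1.0 / np.cosh(b * T)
phi_prime_0 = float((phi(h) - phi(0.0)) / h)       # right derivative at 0
thm = phi_inf ** (delta / 2) * np.exp((x / 2) * phi_prime_0)

# formula (2.1) with alpha = 0:  ch(bT)^{-delta/2} exp(-(x b / 2) th(bT))
levy = np.cosh(b * T) ** (-delta / 2) * np.exp(-(x * b / 2) * np.tanh(b * T))
```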
Exercise 2.5: 1) Prove that the integration by parts formula (2.14) can
be extended as follows:

(∗) \quad \int_a^b \left(-df(x)\right) X_{g(x)} + f(b)\,X_{g(b)}
\;\stackrel{\text{(law)}}{=}\; g(a)\,X_{f(a)} + \int_a^b dg(x)\, X_{f(x)} ,

where X is a BESQ process, with any strictly positive dimension, starting
from 0.
2) Prove the following convergence in law result:

\left(\sqrt{n}\left(\frac{1}{n} X^{(n)}_t - t\right),\ t \ge 0\right)
\;\xrightarrow[n\to\infty]{\text{(law)}}\; \left(c\,\beta_{t^2}\,;\ t \ge 0\right) ,

for a certain constant c > 0, where (X^{(n)}_t, t ≥ 0) denotes a BESQ^n process,
starting from 0, and (β_t, t ≥ 0) denotes a real-valued BM, starting from 0.
3) Prove that the process (X_t ≡ \beta_{t^2}, t ≥ 0) satisfies (∗).
Comments on Chapter 2

For many reasons, a number of computations of the Laplace or Fourier
transform of the distribution of quadratic functionals of Brownian motion, or
related processes, are being published almost every year; the origins of the
interest in such functionals range from Bismut's proof of the Atiyah-Singer
theorem, to polymer studies (see Chan-Dean-Jansons-Rogers [26] for the
latter).
Duplantier [36] presents a good list of references to the literature.
The methods used by the authors to obtain closed formulae for the corre-
sponding characteristic functions or Laplace transforms fall essentially into
one of the three following categories:
i) P. Lévy's diagonalisation procedure, which has a strong functional analysis
flavor; this method may be applied very generally and is quite powerful;
however, the characteristic functions or Laplace transforms then appear as
infinite products, which have to be recognized in terms of, say, hyperbolic
functions...
ii) the change of probability method which, in effect, linearizes the problem,
i.e.: it allows one to transform the study of a quadratic functional into the
computation of the mean and variance of an adequate Gaussian variable;
paragraph 2.1 above gives an important example of this method.
iii) finally, the reduction method, which simply consists in trying to reduce
the computation for a certain quadratic functional to similar computations
which have already been done. Exercise 2.3, and indeed the whole
paragraph 2.2 above give some examples of application. The last formula
in Exercise 2.3 is due to Foschini and Shepp [44] and the whole exercise
is closely related to the work of Berthuet [6] on the stochastic volume of
(B_u, u ≤ 1).
Paragraph 2.3 is closely related to Pitman-Yor ([73], [74]).
Some extensions of the integration by parts formula (2.14) to stable processes
and some converse studies have been made by Donati-Martin, Song
and Yor [31].
Chapter 3
Squares of Bessel processes and Ray-Knight theorems for Brownian local times

Chapters 1 and 2 were devoted to the study of some properties of variables in
the first and second Wiener chaos. In the present Chapter, we are studying
variables which are definitely at a much higher level of complexity in the
Wiener chaos decomposition; in fact, they have infinitely many Wiener chaos
components.
More precisely, we shall study, in this Chapter, some properties of the
Brownian local times, which may be defined by the occupation times formula:

\int_0^t ds\, f(B_s) = \int_{-\infty}^{\infty} da\, f(a)\,\ell^a_t , \qquad f \in b\left(\mathcal{B}(IR)\right) ,

and, from Trotter's theorem, we may, and we shall, choose the family
(\ell^a_t;\ a ∈ IR,\ t ≥ 0) to be jointly continuous.
This occupation times formula transforms an integration in time into an
integration in space, and it may be asked: what becomes of the Markov property
through this change from time to space?
In fact, the Ray-Knight theorems presented below show precisely that there
is some Markov property in space, that is: at least for some suitably chosen
stopping times T, the process (\ell^a_T,\ a ∈ IR) is a strong Markov process, the
law of which can be described precisely.
More generally, we shall try to show some evidence, throughout this Chapter,
of a general transfer principle from time to space, which, in our opinion,
permeates the various developments made around the Ray-Knight theorems
on Brownian local times.
3.1 The basic Ray-Knight theorems

There are two such theorems, the first one being related to T ≡ τ_x = inf\{t ≥ 0 : \ell^0_t = x\},
and the second one to T ≡ T_1 = inf\{t : B_t = 1\}.

(RK1) The processes (\ell^a_{\tau_x};\ a ≥ 0) and (\ell^{-a}_{\tau_x};\ a ≥ 0) are two independent
squares, starting at x, of 0-dimensional Bessel processes, i.e.: their
common law is Q^0_x.

(RK2) The process (\ell^{1-a}_{T_1};\ 0 ≤ a ≤ 1) is the square of a 2-dimensional Bessel
process starting from 0, i.e.: its law is Q^2_0.

There are several important variants of (RK2), among which the two following
ones.

(RK2)(a) If (R_3(t), t ≥ 0) denotes the 3-dimensional Bessel process starting
from 0, then the law of (\ell^a_\infty(R_3),\ a ≥ 0) is Q^2_0.

(RK2)(b) The law of \left(\ell^a_\infty(|B| + \ell^0);\ a ≥ 0\right) is Q^2_0.
We recall that (RK2)(a) follows from (RK2), thanks to Williams' time
reversal result:

(B_t;\ t \le T_1) \;\stackrel{\text{(law)}}{=}\; (1 - R_3(L_1 - t);\ t \le L_1) ,

where L_1 = \sup\{t > 0 : R_3(t) = 1\}.
Then, (RK2)(b) follows from (RK2)(a) thanks to Pitman's representation
of R_3 (see [71]), which may be stated as

(R_3(t),\ t \ge 0) \;\stackrel{\text{(law)}}{=}\; (|B_t| + \ell^0_t;\ t \ge 0) .
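Pitman's representation can be checked numerically at a fixed time (our illustration). By Lévy's representation of reflecting Brownian motion (invoked again in paragraph (3.3.2) below), (|B_t| + ℓ^0_t) has the same law as 2S_t − B_t with S_t = sup_{s≤t} B_s, which avoids simulating local times; Pitman's theorem then gives E[(2S_1 − B_1)²] = E[R_3(1)²] = 3.

```python
import numpy as np

def pitman_moments_mc(n_steps=2000, n_paths=40000, seed=8):
    # simulates V = 2*S_1 - B_1, which has the law of |B_1| + l^0_1 = R_3(1)
    rng = np.random.default_rng(seed)
    dt = 1.0 / n_steps
    B = np.zeros(n_paths)
    S = np.zeros(n_paths)
    for _ in range(n_steps):
        B += rng.normal(0.0, np.sqrt(dt), size=n_paths)
        np.maximum(S, B, out=S)       # running supremum (S_0 = 0)
    V = 2.0 * S - B
    return float(np.mean(V)), float(np.mean(V ** 2))
```

The first moment should be close to E[R_3(1)] = 2√(2/π) ≈ 1.596; the discrete-time supremum slightly understates the true one, so moderate tolerances are needed.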
We now give a first example of the transfer principle from time to space
mentioned above. Consider, for µ ∈ IR, the solution of:

(∗) \quad X_t = B_t + \mu\int_0^t ds\, 1_{(X_s > 0)} ,

and call P^{\mu,+} the law of this process on the canonical space Ω*; in the
following, we simply write P for the standard Wiener measure.
Then, from Girsanov's theorem, we have:
P^{\mu,+}\big|_{\mathcal{F}_t}
= \exp\left\{\mu\int_0^t 1_{(X_s>0)}\, dX_s - \frac{\mu^2}{2}\int_0^t ds\, 1_{(X_s>0)}\right\} P\big|_{\mathcal{F}_t}
= \exp\left\{\mu\left(X_t^+ - \frac{1}{2}\,\ell^0_t\right) - \frac{\mu^2}{2}\int_0^t ds\, 1_{(X_s>0)}\right\} P\big|_{\mathcal{F}_t} ,

where (X_t)_{t\ge 0} denotes the canonical process on Ω*, and (\ell^0_t)_{t\ge 0} its local time
at 0 (which is well defined P a.s.).
It follows from the above Radon-Nikodym relationship that, for any positive
measurable functional F on Ω*_+, we have:

E^{\mu,+}\left[F\left(\ell^{1-a}_{T_1};\ 0 \le a \le 1\right)\right]
= E\left[F\left(\ell^{1-a}_{T_1};\ 0 \le a \le 1\right)
\exp\left\{-\frac{\mu}{2}\left(\ell^0_{T_1} - 2\right) - \frac{\mu^2}{2}\int_0^1 da\,\ell^a_{T_1}\right\}\right]

(†) \qquad = Q^2_0\left[F(Z_a;\ 0 \le a \le 1)
\exp\left\{-\frac{\mu}{2}\left(Z_1 - 2\right) - \frac{\mu^2}{2}\int_0^1 da\, Z_a\right\}\right]

where (Z_a, a ≥ 0) now denotes the canonical process on Ω*_+ (to avoid
confusion with X on Ω*). The last equality follows immediately from (RK2).
Now, the exponential which appears as a Radon-Nikodym density in (†)
transforms Q^2_0 into {}^{(-\mu)}Q^2_0, a probability which is defined in the statement of
Theorem 3.1 below (see paragraph 6 of Pitman-Yor [73] for details).
Hence, we have just proved the following

Theorem 3.1 If X^{(\mu)} denotes the solution of the equation (∗) above, then,
the law of \left(\ell^{1-a}_{T_1}(X^{(\mu)});\ 0 \le a \le 1\right) is {}^{(-\mu)}Q^2_0, where {}^{\beta}Q^\delta_x denotes the law of
the square, starting at x, of the norm of a δ-dimensional Ornstein-Uhlenbeck
process with parameter β, i.e.: a diffusion on IR_+ whose infinitesimal
generator is:

2y\,\frac{d^2}{dy^2} + (2\beta y + \delta)\,\frac{d}{dy} .
3.2 The Lévy-Khintchine representation of Q^δ_x

We have seen, in the previous Chapter, that for any x, δ ≥ 0, Q^δ_x is
infinitely divisible (Theorems 2.3 and 2.4). We are now able to express its
Lévy-Khintchine representation as follows.

Theorem 3.2 For any Borel function f : IR_+ → IR_+, and ω ∈ Ω*_+, we set

I_f(\omega) = \langle\omega, f\rangle = \int_0^\infty dt\;\omega(t)\,f(t)
\qquad \text{and} \qquad f_u(t) = f(u + t) .

Then, we have, for every x, δ ≥ 0:

Q^\delta_x\left(\exp - I_f\right)
= \exp\left(-\int M(d\omega)\left\{x\left[1 - \exp(-I_f(\omega))\right]
+ \delta\int_0^\infty du\left(1 - \exp(-I_{f_u}(\omega))\right)\right\}\right) ,

where M(dω) is the image of the Itô measure n_+ of positive excursions by the
application which associates to an excursion ε the process of its local times:

\varepsilon \longmapsto \left(\ell^x_R(\varepsilon);\ x \ge 0\right) .
Before we give the proof of the theorem, we make some comments about the
representations of Q^0_x and Q^δ_0 separately:
obviously, from the theorem, the representing measure of Q^0_x is x\,M(dω),
whereas the representing measure of Q^δ_0 is δ\,N(dω), where N(dω) is
characterized by:

\int N(d\omega)\left(1 - e^{-I_f(\omega)}\right)
= \int M(d\omega)\int_0^\infty du\left(1 - e^{-I_{f_u}(\omega)}\right)

and it is not difficult to see that this formula is equivalent to:

\int N(d\omega)\,F(\omega) = \int M(d\omega)\int_0^\infty du\; F\left(\omega((\cdot - u)^+)\right) ,

for any positive measurable functional F.
Now, in order to prove the theorem, all we need to do is to represent Q^0_x and
Q^δ_0, for some dimension δ; in fact, we shall use (RK1) to represent Q^0_x, and
(RK2)(b) to represent Q^2_0.
Our main tool will be (as is to be expected!) excursion theory. We first
state the following consequences of the master formulae of excursion theory
(see [81], Chapter XII, Propositions (1.10) and (1.12)).
Proposition 3.1 Let (M_t, t ≥ 0) be a bounded, continuous process with
bounded variation on compacts of IR_+, such that: 1_{(B_t = 0)}\, dM_t = 0.
Then, (i) if, moreover, (M_t, t ≥ 0) is a multiplicative functional, we have:

E[M_{\tau_x}] = \exp\left(-x\int n(d\varepsilon)\left(1 - M_R(\varepsilon)\right)\right) ,

where n(dε) denotes the Itô characteristic measure of excursions.
(ii) More generally, if the multiplicativity property assumption is replaced by:
(M_t, t ≥ 0) is a skew multiplicative functional, in the following sense:

M_{\tau_s} = M_{\tau_{s-}}\left(M^{(s)}_R\right)\circ\theta_{\tau_{s-}} \qquad (s \ge 0) ,

for some measurable family of r.v.'s (M^{(s)}_R;\ s ≥ 0), then the previous formula
should be modified as

E[M_{\tau_x}] = \exp\left(-\int_0^x ds\int n(d\varepsilon)\left(1 - M^{(s)}_R(\varepsilon)\right)\right) .
Taking M_t \equiv \exp\left(-\int_0^t ds\, f(B_s, \ell_s)\right), for f : IR \times IR_+ \to IR a Borel function,
we obtain, as an immediate consequence of the Proposition, the following
important formula:

(∗) \quad E\left[\exp\left(-\int_0^{\tau_x} ds\, f(B_s, \ell_s)\right)\right]
= \exp\left(-\int_0^x ds\int n(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, f(\varepsilon(u), s)\right)\right)\right) .
As an application, if we take f(y, \ell) \equiv 1_{(y \ge 0)}\, g(y), then the left-hand side of
(∗) becomes:

Q^0_x\left(\exp - I_g\right) , \quad \text{thanks to (RK1),}

while the right-hand side of (∗) becomes:

\exp\left(-x\int n_+(d\varepsilon)\left(1 - \exp\left(-\int_0^R du\, g(\varepsilon(u))\right)\right)\right)
= \exp\left(-x\int M(d\omega)\left(1 - e^{-I_g(\omega)}\right)\right)

from the definition of M.
Next, if we write formula (∗) with f(y, \ell) = g(|y| + \ell), and x = ∞, the
left-hand side becomes:

Q^2_0\left(\exp - I_g\right) , \quad \text{thanks to (RK2)(b),}

while the right-hand side becomes:

\exp\left(-\int_0^\infty ds\int n(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, g(|\varepsilon(u)| + s)\right)\right)\right)
= \exp\left(-2\int_0^\infty ds\int M(d\omega)\left(1 - \exp(-\langle\omega, g_s\rangle)\right)\right)
= \exp\left(-2\int N(d\omega)\left(1 - \exp(-\langle\omega, g\rangle)\right)\right)
\quad \text{from the definition of } N.

Thus, we have completely proved the theorem.
3.3 An extension of the Ray-Knight theorems

(3.3.1) Now that we have obtained the Lévy-Khintchine representation of Q^δ_x,
we may use the infinite divisibility property again to obtain some extensions
of the basic Ray-Knight theorems.
First of all, it may be of some interest to define squares of Bessel processes
with generalized dimensions, that is: some IR_+-valued processes which satisfy:

(∗) \quad X_t = x + 2\int_0^t \sqrt{X_s}\, d\beta_s + \Delta(t)

where ∆ : IR_+ → IR_+ is a strictly increasing, continuous C¹ function, with
∆(0) = 0 and ∆(∞) = ∞.
Then, it is not difficult to show, with the help of some weak convergence
argument, that the law Q^∆_x of the unique solution of (∗) satisfies:

Q^\Delta_x\left(e^{-I_f}\right)
= \exp\left(-\int M(d\omega)\left\{x\left(1 - \exp(-I_f(\omega))\right)
+ \int_0^\infty \Delta(ds)\left(1 - \exp(-I_{f_s}(\omega))\right)\right\}\right) .
Now, we have the following

Theorem 3.3 The family of local times of \left(|B_u| + \Delta^{-1}(2\ell_u);\ u \ge 0\right) is Q^\Delta_0.

In particular, the family of local times of \left(|B_u| + \frac{2}{\delta}\,\ell_u;\ u \ge 0\right) is Q^\delta_0.
Proof: We use Proposition 3.1 with

M_t = \exp\left(-\int_0^t ds\, f\left(|B_s| + \Delta^{-1}(2\ell_s)\right)\right) ,

and we obtain, for any x ≥ 0:

E[M_{\tau_x}]
= \exp\left(-\int_0^x ds\int n(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, f\left(|\varepsilon(u)| + \Delta^{-1}(2s)\right)\right)\right)\right)
= \exp\left(-2\int_0^x ds\int n_+(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, f\left(|\varepsilon(u)| + \Delta^{-1}(2s)\right)\right)\right)\right)
= \exp\left(-\int_0^{2x} dt\int n_+(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, f\left(|\varepsilon(u)| + \Delta^{-1}(t)\right)\right)\right)\right)
= \exp\left(-\int_0^{\Delta^{-1}(2x)} d\Delta(h)\int n_+(d\varepsilon)
\left(1 - \exp\left(-\int_0^R du\, f(\varepsilon(u) + h)\right)\right)\right) ,

and the result of the theorem now follows by letting x → ∞. \square
In fact, in the previous proof, we showed more than the final statement, since
we considered the local times of \left(|B_u| + \Delta^{-1}(2\ell_u) : u \le \tau_x\right). In particular,
the above proof shows the following

Theorem 3.4 Let x > 0, and consider τ_x ≡ inf\{t ≥ 0 : \ell_t > x\}. Then,
the processes \left(\ell^{\,a - 2x/\delta}_{\tau_x}\left(|B| - \frac{2}{\delta}\,\ell\right);\ a \ge 0\right) and
\left(\ell^{\,a}_{\tau_x}\left(|B| + \frac{2}{\delta}\,\ell\right);\ a \ge 0\right) have
the same law, namely that of an inhomogeneous Markov process (Y_a;\ a ≥ 0),
starting at 0, which is the square of a δ-dimensional Bessel process for a ≤ \frac{2x}{\delta},
and a square Bessel process of dimension 0, for a ≥ \frac{2x}{\delta}.
(3.3.2) These connections between Brownian occupation times and squares
of Bessel processes explain very well why, when computing quantities to do
with Brownian occupation times, we find formulae which also appeared in
relation with Lévy's formula (see Chapter 2). Here is an important example.
We consider a one-dimensional Brownian motion (B_t, t ≥ 0), starting from 0,
and we define σ = inf\{t : B_t = 1\}, and S_t = \sup_{s\le t} B_s (t ≥ 0). Let a < 1; we
are interested in the joint distribution of the triple:

A^-_\sigma(a) \;\stackrel{\text{def}}{=}\; \int_0^\sigma ds\, 1_{(B_s < aS_s)} \;;\qquad
\ell^{(a)}_\sigma \;\stackrel{\text{def}}{=}\; \ell^0_\sigma(B - aS) \;;\qquad
A^+_\sigma(a) \;\stackrel{\text{def}}{=}\; \int_0^\sigma ds\, 1_{(B_s > aS_s)}
Using standard stochastic calculus, we obtain: for every µ, λ, ν > 0,

E\left[\exp-\left(\frac{\mu^2}{2}\,A^-_\sigma(a) + \lambda\,\ell^{(a)}_\sigma
+ \frac{\nu^2}{2}\,A^+_\sigma(a)\right)\right]
= \left(\mathrm{ch}(\nu\bar{a}) + (\mu + 2\lambda)\,\frac{\mathrm{sh}(\nu\bar{a})}{\nu}\right)^{-1/\bar{a}}

where \bar{a} = 1 - a > 0.
On the other hand, we deduce from formula (2.1) and the additivity property,
presented in Theorem 2.3, of the family (Q^δ_x;\ δ ≥ 0,\ x ≥ 0) the following
formula: for every δ ≥ 0, and ν, λ, x ≥ 0,

Q^\delta_0\left(\exp-\left(\frac{\nu^2}{2}\int_0^x dy\, X_y + \lambda X_x\right)\right)
= \left(\mathrm{ch}(\nu x) + \frac{2\lambda}{\nu}\,\mathrm{sh}(\nu x)\right)^{-\delta/2}
Comparing the two previous expectations, we obtain the following identity
in law, for b > 0:

(∗) \quad \left(A^+_\sigma(1-b)\,;\; \ell^{(1-b)}_\sigma\right)
\;\stackrel{\text{(law)}}{=}\; \left(\int_0^b dy\, X^{(2/b)}_y \,;\; X^{(2/b)}_b\right) ,

where, on the right-hand side of (∗), we denote by (X^{(\delta)}_y,\ y \ge 0) a BESQ^δ
process, starting from 0.
Thanks to Lévy's representation of reflecting Brownian motion as
(S_t - B_t;\ t \ge 0), the left-hand side in (∗) is identical in law to:

\left(\int_0^b dy\;\ell^{\,y-b}_{\tau_1}\left(|B| - b\,\ell^0\right) \,;\;
\ell^{\,0}_{\tau_1}\left(|B| - b\,\ell^0\right)\right)

Until now in this subparagraph (3.3.2), we have not used any Ray-Knight
theorem; however, we now do so, as we remark that the identity in law
between the last written pair of r.v.'s and the right-hand side of (∗) follows
directly from Theorem 3.4.
3.4 The law of Brownian local times taken at an
independent exponential time

The basic Ray-Knight theorems (RK1) and (RK2) express the laws of
Brownian local times in the space variable up to some particular stopping times,
namely τ_x and T_1. It is a natural question to look for an identification of
the law of Brownian local times up to a fixed time t. One of the inherent
difficulties of this question is that now, the variable B_t is not a constant; one
way to circumvent this problem would be to condition with respect to the
variable B_t; however, even when this is done, the answer to the problem is not
particularly simple (see Perkins [68], and Jeulin [52]). In fact, if one considers
the same problem at an independent exponentially distributed time, it then
turns out that all that is needed is to combine the two basic RK theorems. This
shows up clearly in the next

Proposition 3.2 Let S_θ be an independent exponential time, with parameter
\frac{\theta^2}{2}, that is: P(S_\theta \in ds) = \frac{\theta^2}{2}\exp\left(-\frac{\theta^2 s}{2}\right) ds. Then
1) \ell_{S_\theta} and B_{S_\theta} are independent, and have respective distributions:

P(\ell_{S_\theta} \in d\ell) = \theta\,e^{-\theta\ell}\, d\ell \;;\qquad
P(B_{S_\theta} \in da) = \frac{\theta}{2}\,e^{-\theta|a|}\, da .

2) for any IR_+-valued, continuous additive functional A, the following formula
holds:

E\left[\exp(-A_{S_\theta}) \,\Big|\; \ell_{S_\theta} = \ell\,;\ B_{S_\theta} = a\right]
= E\left[\exp\left(-A_{\tau_\ell} - \frac{\theta^2}{2}\,\tau_\ell\right)\right] e^{\theta\ell}\;
E_a\left[\exp\left(-A_{T_0} - \frac{\theta^2}{2}\,T_0\right)\right] e^{\theta a}
Then, using the same sort of transfer principle arguments as we did at the end of paragraph (3.1), one obtains the following

Theorem 3.5 Conditionally on $\ell^0_{S_\theta}=\ell$ and $B_{S_\theta}=a>0$, the process $(\ell^x_{S_\theta};\ x\in\mathbb{R})$ is an inhomogeneous Markov process which may be described as follows:

i) $(\ell^{-x}_{S_\theta};\ x\ge 0)$ and $(\ell^{x}_{S_\theta};\ x\ge a)$ are diffusions with common infinitesimal generator:
$$2y\,\frac{d^2}{dy^2}-2\theta y\,\frac{d}{dy}$$

ii) $(\ell^{x}_{S_\theta};\ 0\le x\le a)$ is a diffusion with infinitesimal generator:
$$2y\,\frac{d^2}{dy^2}+(2-2\theta y)\,\frac{d}{dy}\,.$$
This theorem may be extended to describe the local times of $|B|+\frac{2}{\delta}\,\ell^0$, considered up to an independent exponential time (see Biane-Yor [19]).
Exercise 3.1 Extend the second statement of Proposition 3.2 by showing that, if $A^-$ and $A^+$ are two $\mathbb{R}_+$-valued continuous additive functionals, the following formula holds:
$$E\left[\exp-\left(A^-_{g_{S_\theta}}+A^+_{S_\theta}-A^+_{g_{S_\theta}}\right)\,\middle|\,\ell_{S_\theta}=\ell;\ B_{S_\theta}=a\right] = E\left[\exp-\left(A^-_{\tau_\ell}+\frac{\theta^2}{2}\tau_\ell\right)\right]e^{\theta\ell}\cdot E_a\left[\exp-\left(A^+_{T_0}+\frac{\theta^2}{2}T_0\right)\right]e^{\theta|a|}$$
3.5 Squares of Bessel processes and squares of Bessel bridges
From the preceding discussion, the reader might draw the conclusion that the extension of Ray-Knight theorems from Brownian (or Bessel) local times to the local times of the processes $\Sigma^\delta_t \equiv |B_t|+\frac{2}{\delta}\,\ell_t$ $(t\ge 0)$ is plain sailing. It will be shown, in this paragraph, that except for the case $\delta=2$, the non-Markovian character of $\Sigma^\delta$ creates some important, and thought-provoking, difficulties. On a more positive viewpoint, we present an additive decomposition of the square of a Bessel process of dimension $\delta$ as the sum of the square of a $\delta$-dimensional Bessel bridge and an interesting independent process, which we shall describe. In terms of convolution, we show:
$$(*)\qquad Q^\delta_0 = Q^\delta_{0\to 0} * R^\delta,$$
where $R^\delta$ is a probability on $\Omega^*_+$, which shall be identified. (We hope that the notation $R^\delta$ for this remainder, or residual, probability will not create any confusion with the notation for Bessel processes, often written as $(R_\delta(t),\ t\ge 0)$; the context should help...)
(3.5.1) The case δ = 2.
In this case, the decomposition (∗) is obtained by writing:
$$\ell^a_\infty(R_3) = \ell^a_{T_1}(R_3) + \left(\ell^a_\infty(R_3)-\ell^a_{T_1}(R_3)\right).$$
The process $\left(\ell^a_{T_1}(R_3);\ 0\le a\le 1\right)$ has the law $Q^2_{0\to 0}$, which may be seen by a Markovian argument:
$$\left(\ell^a_{T_1}(R_3);\ 0\le a\le 1\right) \overset{(law)}{=} \left(\left(\ell^a_\infty(R_3);\ 0\le a\le 1\right)\,\middle|\,\ell^1_\infty(R_3)=0\right)$$
but we shall also present a different argument in subparagraph (3.5.3).
We now define $R^2$ as the law of $\left(\ell^a_\infty(R_3)-\ell^a_{T_1}(R_3);\ 0\le a\le 1\right)$, which is also, thanks to the strong Markov property of $R_3$, the law of the local times $\left(\ell^a_\infty(R_3^{(1)});\ 0\le a\le 1\right)$, below level 1, of a 3-dimensional Bessel process starting from 1.
In the sequel, we shall use the notation $\hat P$ to denote the probability on $\Omega^*_+$ obtained by time reversal at time 1 of the probability $P$, that is:
$$\hat E\left[F(X_t;\ t\le 1)\right] = E\left[F(X_{1-t};\ t\le 1)\right].$$
We may now state two interesting representations of $R^2$. First, $R^2$ can be represented as:
$$R^2 = \mathcal{L}\left(r_4^2\bigl((a-U)^+\bigr);\ 0\le a\le 1\right), \tag{3.1}$$
where $\mathcal{L}(\gamma(a);\ 0\le a\le 1)$ denotes the law of the process $\gamma$, $(r_4(a);\ 0\le a\le 1)$ denotes a 4-dimensional Bessel process starting from 0, and $U$ is a uniform variable on $[0,1]$, independent of $r_4$. This representation follows from Williams' path decomposition of Brownian motion $(B_t;\ t\le\sigma)$, where $\sigma=\inf\{t: B_t=1\}$, and (RK2).
The following representation of $R^2$ is also interesting:
$$R^2 = \int_0^\infty \frac{dx}{2}\,e^{-x/2}\,\hat Q^0_{x\to 0} \tag{3.2}$$
This formula may be interpreted as:
the law of $\left(\ell^a_\infty(R_3^{(1)});\ 0\le a\le 1\right)$, given $\ell^1_\infty(R_3^{(1)})=x$, is $\hat Q^0_{x\to 0}$;
or, using Williams' time reversal result:
$$\hat R^2 = \int_0^\infty \frac{dx}{2}\,e^{-x/2}\,Q^0_{x\to 0}\quad\text{is the law of}\quad \left(\ell^a_{g_\sigma}(B);\ 0\le a\le 1\right), \tag{3.3}$$
where $g_\sigma = \sup\{t\le\sigma : B_t=0\}$.
To prove (3.2), we condition $Q^2_0$ with respect to $X_1$, and we use the additivity and time reversal properties of the squared Bessel bridges. More precisely, we have:
$$Q^2_0 = \int_0^\infty \frac{dx}{2}\,e^{-x/2}\,Q^2_{0\to x} = \int_0^\infty \frac{dx}{2}\,e^{-x/2}\,\hat Q^2_{x\to 0}\,.$$
However, we have: $Q^2_{x\to 0} = Q^2_{0\to 0} * Q^0_{x\to 0}$, hence: $\hat Q^2_{x\to 0} = Q^2_{0\to 0} * \hat Q^0_{x\to 0}$, so that we now obtain:
$$Q^2_0 = Q^2_{0\to 0} * \int_0^\infty \frac{dx}{2}\,e^{-x/2}\,\hat Q^0_{x\to 0}$$
Comparing this formula with the definition of $R^2$ given in (∗), we obtain (3.2).
(3.5.2) The general case δ > 0.
Again, we decompose $Q^\delta_0$ by conditioning with respect to $X_1$, and using the additivity and time reversal properties of the squared Bessel bridges. Thus, we have:
$$Q^\delta_0 = \int_0^\infty \gamma_\delta(dx)\,Q^\delta_{0\to x}\,,\quad\text{where}\quad \gamma_\delta(dx) = Q^\delta_0(X_1\in dx) = \frac{dx}{2}\,\frac{\left(\frac{x}{2}\right)^{\frac{\delta}{2}-1} e^{-x/2}}{\Gamma\!\left(\frac{\delta}{2}\right)}\,.$$
From the additivity property:
$$Q^\delta_{x\to 0} = Q^\delta_{0\to 0} * Q^0_{x\to 0}\,,$$
we deduce:
$$Q^\delta_{0\to x} = Q^\delta_{0\to 0} * \hat Q^0_{x\to 0}$$
and it follows that:
$$Q^\delta_0 = Q^\delta_{0\to 0} * \int_0^\infty \gamma_\delta(dx)\,\hat Q^0_{x\to 0}\,,$$
so that:
$$R^\delta = \int_0^\infty \gamma_\delta(dx)\,\hat Q^0_{x\to 0} \equiv \int_0^\infty \gamma_2(dx)\,g_\delta(x)\,\hat Q^0_{x\to 0}\,,$$
with:
$$g_\delta(x) = c_\delta\, x^{\frac{\delta}{2}-1}\,,\quad\text{where}\quad c_\delta = \frac{1}{\Gamma\!\left(\frac{\delta}{2}\right)\,2^{\frac{\delta}{2}-1}}\,.$$
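The factorization $\gamma_\delta(dx)=\gamma_2(dx)\,g_\delta(x)$ is an exact identity between gamma densities, and can be checked mechanically. The following sketch is not from the original text; it only uses the densities written above.

```python
import math

# Numerical check (sketch): gamma_delta(dx) = gamma_2(dx) * g_delta(x), with
# g_delta(x) = c_delta * x^(delta/2 - 1) and
# c_delta = 1 / (Gamma(delta/2) * 2^(delta/2 - 1)).
def gamma_density(delta, x):
    # density of Q^delta_0(X_1 in dx): (1/2)(x/2)^(delta/2-1) e^(-x/2) / Gamma(delta/2)
    return 0.5 * (x / 2.0) ** (delta / 2.0 - 1.0) * math.exp(-x / 2.0) / math.gamma(delta / 2.0)

def g(delta, x):
    c = 1.0 / (math.gamma(delta / 2.0) * 2.0 ** (delta / 2.0 - 1.0))
    return c * x ** (delta / 2.0 - 1.0)

checks = []
for delta in (1.0, 3.0, 4.7):
    for x in (0.3, 1.0, 2.5):
        checks.append(abs(gamma_density(delta, x) - gamma_density(2.0, x) * g(delta, x)))
max_err = max(checks)
print(max_err)
```

The difference vanishes up to floating-point error for all tested $(\delta, x)$.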
Hence, we have obtained the following relation:
$$R^\delta = c_\delta\,(X_1)^{\frac{\delta}{2}-1}\cdot R^2\,,$$
and we may state the following

Theorem 3.6 For any $\delta>0$, the additive decomposition:
$$Q^\delta_0 = Q^\delta_{0\to 0} * R^\delta$$
holds, where $R^\delta$ may be described as follows:
$R^\delta$ is the law of the local times, for levels $a\le 1$, of the 3-dimensional Bessel process, starting from 1, with weight $c_\delta\left(\ell^1_\infty(R_3)\right)^{\frac{\delta}{2}-1}$; or, equivalently:
$\hat R^\delta$ is the law of the local times process $\left(\ell^a_{g_\sigma}(B^{(\delta)});\ 0\le a\le 1\right)$, where $B^{(\delta)}$ has the law $W^\delta$ defined by:
$$W^\delta\big|_{\mathcal{F}_\sigma} = c_\delta\,\left(\ell^0_\sigma\right)^{\frac{\delta}{2}-1}\cdot W\big|_{\mathcal{F}_\sigma}$$
Before going any further, we remark that the family $(R^\delta,\ \delta>0)$ also possesses the additivity property:
$$R^{\delta+\delta'} = R^\delta * R^{\delta'}$$
and, with the help of the last written interpretation of $R^\delta$, we can now present the following interesting formula:
Theorem 3.7 Let $f:\mathbb{R}\to\mathbb{R}_+$ be any Borel function. Then we have:
$$W^\delta\left(\exp-\int_0^{g_\sigma} ds\, f(B_s)\right) = \left(W\left(\exp-\int_0^{g_\sigma} ds\, f(B_s)\right)\right)^{\delta/2}$$
(3.5.3) An interpretation of $Q^\delta_{0\to 0}$
The development presented in this subparagraph follows from the well-known fact that, if $(b(t);\ 0\le t\le 1)$ is a standard Brownian bridge, then:
$$(*)\qquad B_t = (t+1)\,b\!\left(\frac{t}{t+1}\right),\quad t\ge 0\,,$$
is a Brownian motion starting from 0, and, conversely, the formula (∗) allows us to define a Brownian bridge $b$ from a Brownian motion $B$.
Consequently, to any Borel function $\tilde f:[0,1]\to\mathbb{R}_+$, there corresponds a Borel function $f:\mathbb{R}_+\to\mathbb{R}_+$, and conversely, such that:
$$\int_0^\infty dt\, f(t)\,B_t^2 = \int_0^1 du\,\tilde f(u)\,b^2(u)\,.$$
This correspondence is expressed explicitly by the two formulae:
$$f(t) = \frac{1}{(1+t)^4}\,\tilde f\!\left(\frac{t}{t+1}\right)\quad\text{and}\quad \tilde f(u) = \frac{1}{(1-u)^4}\, f\!\left(\frac{u}{1-u}\right)$$
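Since the equality of the two integrals is a pure change of variables, it holds pathwise for any continuous function in place of the bridge. The sketch below is not from the original text; it verifies the correspondence by plain Riemann sums on an arbitrary smooth path $b(u)=\sin(\pi u)$ and the test function $\tilde f(u)=u(1-u)$, both chosen purely for illustration.

```python
import math

# Deterministic check (sketch) of the f <-> f~ correspondence:
#   integral_0^inf f(t) B_t^2 dt  =  integral_0^1 f~(u) b(u)^2 du,
# with B_t = (1+t) b(t/(1+t)) and f(t) = (1+t)^{-4} f~(t/(1+t)).
def b(u):            # arbitrary continuous "bridge-like" path, b(0) = b(1) = 0
    return math.sin(math.pi * u)

def f_tilde(u):
    return u * (1.0 - u)

def f(t):
    return f_tilde(t / (1.0 + t)) / (1.0 + t) ** 4

n = 200_000
# right-hand side: midpoint rule on [0, 1]
rhs = sum(f_tilde((k + 0.5) / n) * b((k + 0.5) / n) ** 2 for k in range(n)) / n
# left-hand side: midpoint rule on [0, T]; the integrand decays like t^{-5}
T = 2000.0
h = T / n
lhs = sum(
    f(t) * ((1.0 + t) * b(t / (1.0 + t))) ** 2
    for t in ((k + 0.5) * h for k in range(n))
) * h
print(lhs, rhs)
```

Both quadratures agree to several decimal places, as the exact change of variables predicts.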
These formulae, together with the additivity properties of $Q^\delta_0$ and $Q^\delta_{0\to 0}$, allow us to obtain the following
Theorem 3.8 Define $\left(D^\delta_t,\ t<\tilde T^\delta_1 \equiv \int_0^\infty \frac{ds}{(1+\Sigma^\delta_s)^4}\right)$ via the following space and time change formula:
$$\frac{\Sigma^\delta_t}{1+\Sigma^\delta_t} = D^\delta\!\left(\int_0^t \frac{ds}{(1+\Sigma^\delta_s)^4}\right)$$
Then, $Q^\delta_{0\to 0}$ is the law of the local times of $(D^\delta_t,\ t<\tilde T^\delta_1)$.
(Remark that $(D^\delta_t,\ t<\tilde T^\delta_1)$ may be extended by continuity to $t=\tilde T^\delta_1$, and then we have: $\tilde T^\delta_1 = \inf\{t : D^\delta_t = 1\}$.)
Proof: For any Borel function $\tilde f:[0,1]\to\mathbb{R}_+$, we have, thanks to the remarks made previously:
$$\begin{aligned}
Q^\delta_{0\to 0}\left(\exp-\langle\omega,\tilde f\rangle\right) &= Q^\delta_0\left(\exp-\langle\omega, f\rangle\right)\\
&= E\left[\exp\left(-\int_0^\infty du\, f\bigl(\Sigma^\delta_u\bigr)\right)\right] &&\text{(from Theorem 3.3)}\\
&= E\left[\exp-\int_0^\infty \frac{du}{(1+\Sigma^\delta_u)^4}\,\tilde f\!\left(\frac{\Sigma^\delta_u}{1+\Sigma^\delta_u}\right)\right] &&(\text{from the relation } f\leftrightarrow\tilde f)\\
&= E\left[\exp-\int_0^{\tilde T^\delta_1} dv\,\tilde f\bigl(D^\delta_v\bigr)\right] &&(\text{from the definition of } D^\delta)
\end{aligned}$$
The theorem is proven. $\square$
It is interesting to consider again the case $\delta=2$ since, as argued in (3.5.1), it is then known that $Q^2_{0\to 0}$ is the law of the local times of $(R_3(t),\ t\le T_1(R_3))$. This is perfectly coherent with the above theorem, since we then have:
$$(D^2_t;\ t\le \tilde T^2_1) \overset{(law)}{=} (R_3(t),\ t\le T_1(R_3))$$
Proof: If we define $X_t = \frac{R_3(t)}{1+R_3(t)}$ $(t\ge 0)$, we then have:
$$\frac{1}{X_t} = 1+\frac{1}{R_3(t)}\,;$$
therefore, $(X_t,\ t\ge 0)$, which is a diffusion (from its definition in terms of $R_3$), is also such that $\left(\frac{1}{X_t},\ t\ge 0\right)$ is a local martingale. Then, it follows easily that $X_t = \tilde R_3(\langle X\rangle_t)$, $t\ge 0$, where $(\tilde R_3(u),\ u\ge 0)$ is a 3-dimensional Bessel process, and, finally, since:
$$\langle X\rangle_t = \int_0^t \frac{ds}{(1+R_3(s))^4}\,,$$
we get the desired result.
Remark: We could have obtained this result more directly by applying Itô's formula to $g(r)=\frac{r}{1+r}$, and then time-changing. But, in our opinion, the above proof gives a better explanation of the ubiquity of $R_3$ in this question.
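The local-martingale step in the proof rests on the fact that $r\mapsto 1/r$ is annihilated by the BES(3) generator $\frac12\frac{d^2}{dr^2}+\frac1r\frac{d}{dr}$. A finite-difference sketch of this elementary fact (not from the original text):

```python
# Finite-difference check (illustrative) that h(r) = 1/r is annihilated by the
# BES(3) generator G = (1/2) d^2/dr^2 + (1/r) d/dr, which is why
# 1/X_t = 1 + 1/R_3(t) is a local martingale in the proof above.
def G_h(r, eps=1e-4):
    h = lambda x: 1.0 / x
    second = (h(r + eps) - 2.0 * h(r) + h(r - eps)) / eps**2
    first = (h(r + eps) - h(r - eps)) / (2.0 * eps)
    return 0.5 * second + first / r

vals = [abs(G_h(r)) for r in (0.5, 1.0, 2.0, 5.0)]
print(max(vals))
```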
Exercise 3.2: Let $a, b>0$, and $\delta>2$. Prove that, if $(R_\delta(t),\ t\ge 0)$ is a $\delta$-dimensional Bessel process, then $R_\delta(t)\big/\bigl(a+b\,(R_\delta(t))^{\delta-2}\bigr)^{1/(\delta-2)}$ may be obtained by time-changing a $\delta$-dimensional Bessel process, up to its first hitting time of $c=b^{-1/(\delta-2)}$.
This exercise may be generalised as follows:
Exercise 3.3: Let $(X_t,\ t\ge 0)$ be a real-valued diffusion, whose infinitesimal generator $L$ satisfies:
$$L\varphi(x) = \frac12\,\varphi''(x)+b(x)\,\varphi'(x)\,,\quad\text{for }\varphi\in C^2(\mathbb{R})\,.$$
Let $f:\mathbb{R}\to\mathbb{R}_+$ be a $C^2$ function, and, finally, let $\delta>1$.

1. Prove that, if $b$ and $f$ are related by:
$$b(x) = \frac{\delta-1}{2}\,\frac{f'(x)}{f(x)} - \frac12\,\frac{f''(x)}{f'(x)}$$
then there exists $(R_\delta(u),\ u\ge 0)$, a $\delta$-dimensional Bessel process, possibly defined on an enlarged probability space, such that:
$$f(X_t) = R_\delta\!\left(\int_0^t ds\,(f')^2(X_s)\right),\quad t\ge 0\,.$$

2. Compute $b(x)$ in the following cases: (i) $f(x)=x^\alpha$; (ii) $f(x)=\exp(ax)$.
3.6 Generalized meanders and squares of Bessel processes
(3.6.1) The Brownian meander, which plays an important role in a number of studies of Brownian motion, may be defined as follows:
$$m(u) = \frac{1}{\sqrt{1-g}}\,\bigl|B_{g+u(1-g)}\bigr|\,,\quad u\le 1\,,$$
where $g=\sup\{u\le 1: B_u=0\}$, and $(B_t,\ t\ge 0)$ denotes a real-valued Brownian motion starting from 0.
Imhof [49] proved the following absolute continuity relation:
$$M = \frac{c}{X_1}\cdot S\qquad\left(c=\sqrt{\frac{\pi}{2}}\right) \tag{3.4}$$
where $M$, resp. $S$, denotes the law of $(m(u),\ u\le 1)$, resp. the law of $(R(u),\ u\le 1)$, a BES(3) process starting from 0.
Other proofs of (3.4) have been given by Biane-Yor [18], using excursion theory, and by Azéma-Yor ([1], paragraph 4), using an extension of Girsanov's theorem.
It is not difficult, using the same kind of arguments, to prove the more general absolute continuity relationship:
$$M_\nu = \frac{c_\nu}{X_1^{2\nu}}\cdot S_\nu \tag{3.5}$$
where $\nu\in(0,1)$ and $M_\nu$, resp. $S_\nu$, denotes the law of
$$m_\nu(u) \equiv \frac{1}{\sqrt{1-g_\nu}}\,R_{-\nu}\bigl(g_\nu+u(1-g_\nu)\bigr)\quad(u\le 1)\,,$$
the Bessel meander associated with the Bessel process $R_{-\nu}$ of dimension $2(1-\nu)$, starting from 0, resp. the law of the Bessel process of dimension $2(1+\nu)$, starting from 0.
Exercise 3.4: Deduce from formula (3.5) that:
$$M_\nu\left[F(X_u;\ u\le 1)\mid X_1=x\right] = S_\nu\left[F(X_u;\ u\le 1)\mid X_1=x\right]$$
and that:
$$M_\nu(X_1\in dx) = x\exp\left(-\frac{x^2}{2}\right)dx\,.$$
In particular, the law of $m_\nu(1)$, the value at time 1 of the Bessel meander, does not depend on $\nu$, and is distributed as the 2-dimensional Bessel process at time 1 (see Corollary 3.9.1 for an explanation).
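The second assertion of the exercise can be checked directly from (3.5). The sketch below is not from the original text; it takes for granted the standard density of the BES$(2(1+\nu))$ process at time 1 started at 0, $p_\nu(x)=x^{2\nu+1}e^{-x^2/2}/(2^\nu\Gamma(1+\nu))$, and the normalization $c_\nu=2^\nu\Gamma(1+\nu)$ (an assumption consistent with Imhof's $c=\sqrt{\pi/2}$ at $\nu=1/2$), and verifies that $c_\nu x^{-2\nu}p_\nu(x)$ collapses to the Rayleigh density $x\,e^{-x^2/2}$ for every $\nu$.

```python
import math

# Check (sketch): under S_nu (the BES(2(1+nu)) law), X_1 has density
#   p_nu(x) = x^{2 nu + 1} e^{-x^2/2} / (2^nu Gamma(1 + nu));
# reweighting by c_nu x^{-2 nu}, c_nu = 2^nu Gamma(1 + nu), gives the
# Rayleigh density x e^{-x^2/2} regardless of nu, i.e. the law of m_nu(1)
# does not depend on nu.
def p(nu, x):
    return x ** (2 * nu + 1) * math.exp(-x * x / 2.0) / (2.0 ** nu * math.gamma(1.0 + nu))

def rayleigh(x):
    return x * math.exp(-x * x / 2.0)

errs = []
for nu in (0.25, 0.5, 0.75):
    c_nu = 2.0 ** nu * math.gamma(1.0 + nu)
    for x in (0.2, 1.0, 2.0, 3.0):
        errs.append(abs(c_nu * x ** (-2 * nu) * p(nu, x) - rayleigh(x)))
max_err = max(errs)
print(max_err)
```

At $\nu=1/2$ the constant $2^{1/2}\Gamma(3/2)$ indeed equals $\sqrt{\pi/2}$, recovering (3.4).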
(3.6.2) Biane-Le Gall-Yor [16] proved the following absolute continuity relation, which looks similar to (3.5): for every $\nu>0$,
$$N_\nu = \frac{2\nu}{X_1^{2}}\cdot S_\nu \tag{3.6}$$
where $N_\nu$ denotes the law on $C([0,1],\mathbb{R}_+)$ of the process:
$$n_\nu(u) = \frac{1}{\sqrt{L_\nu}}\,R_\nu(L_\nu u)\quad(u\le 1)\,,$$
with $L_\nu \equiv \sup\{t>0: R_\nu(t)=1\}$.
Exercise 3.5: Deduce from formula (3.6) that:
$$N_\nu\left[F(X_u;\ u\le 1)\mid X_1=x\right] = S_\nu\left[F(X_u;\ u\le 1)\mid X_1=x\right]$$
and that:
$$N_\nu(X_1\in dx) = S_{\nu-1}(X_1\in dx)\,.$$
(Corollary 3.9.2 gives an explanation of this fact.)
(3.6.3) In this subparagraph, we shall consider, more generally than the right-hand sides of (3.5) and (3.6), the law $S_\nu$ modified via a Radon-Nikodym density of the form $\frac{c_{\mu,\nu}}{X_1^{\mu}}$, and we shall represent the new probability in terms of the laws of Bessel processes and Bessel bridges.
Precisely, consider $(R_t,\ t\le 1)$ and $(R'_t,\ t\le 1)$ two independent Bessel processes, starting from 0, with respective dimensions $d$ and $d'$; condition $R$ by $R_1=0$, and define $M_{d,d'}$ to be the law of the process
$$\left(\bigl(R_t^2+(R'_t)^2\bigr)^{1/2},\ t\le 1\right)$$
obtained in this way; in other terms, the law of the square of this process, that is $\bigl(R_t^2+(R'_t)^2,\ t\le 1\bigr)$, is $Q^d_{0\to 0} * Q^{d'}_0$.
We may now state and prove the following

Theorem 3.9 Let $P^\delta_0$ be the law on $C([0,1];\mathbb{R}_+)$ of the Bessel process with dimension $\delta$, starting from 0. Then, we have:
$$M_{d,d'} = \frac{c_{d,d'}}{X_1^{d}}\cdot P^{d+d'}_0 \tag{3.7}$$
where $c_{d,d'} = M_{d,d'}\bigl(X_1^{d}\bigr) = 2^{d/2}\,\dfrac{\Gamma\!\left(\frac{d+d'}{2}\right)}{\Gamma\!\left(\frac{d'}{2}\right)}$.
Proof: From the additivity property of squares of Bessel processes, which in terms of the probabilities $(Q^\delta_x;\ \delta\ge 0, x\ge 0)$ is expressed by:
$$Q^d_x * Q^{d'}_{x'} = Q^{d+d'}_{x+x'}$$
(see Theorem 2.3 above), it is easily deduced that:
$$Q^{d+d'}_{x+x'\to 0} = Q^d_{x\to 0} * Q^{d'}_{x'\to 0}\,.$$
Hence, by reverting time from $t=1$, we obtain:
$$Q^{d+d'}_{0\to x+x'} = Q^d_{0\to x} * Q^{d'}_{0\to x'}\,,$$
and, in particular:
$$Q^{d+d'}_{0\to x} = Q^d_{0\to 0} * Q^{d'}_{0\to x}\,.$$
From this last formula, we deduce that, conditionally on $X_1=x$, both sides of (3.7) are equal, so that, to prove the identity completely, it remains to verify that the laws of $X_1$ relative to each side of (3.7) are the same, which is immediate. $\square$
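The constant $c_{d,d'}$ can be cross-checked against the two special cases already met: the sketch below (not from the original text) verifies that the formula reproduces Imhof's constant $\sqrt{\pi/2}$ for $(d,d')=(1,2)$, matching (3.4), and the constant $2\nu$ for $(d,d')=(2,2\nu)$, matching (3.6).

```python
import math

# Consistency check (sketch) of the constant in Theorem 3.9:
#   c_{d,d'} = 2^{d/2} Gamma((d+d')/2) / Gamma(d'/2)
def c(d, dp):
    return 2.0 ** (d / 2.0) * math.gamma((d + dp) / 2.0) / math.gamma(dp / 2.0)

imhof = c(1.0, 2.0)                   # should equal sqrt(pi/2), as in (3.4)
nu = 0.7
biane_le_gall_yor = c(2.0, 2.0 * nu)  # should equal 2*nu, as in (3.6)
print(imhof, biane_le_gall_yor)
```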
As a consequence of Theorem 3.9, and of the absolute continuity relations (3.5) and (3.6), we are now able to identify the laws $M_\nu$ $(0<\nu<1)$ and $N_\nu$ $(\nu>0)$ as particular cases of $M_{d,d'}$.
Corollary 3.9.1 Let $0<\nu<1$. Then, we have:
$$M_\nu = M_{2\nu,2}$$
In other words, the square of the Bessel meander of dimension $2(1-\nu)$ may be represented as the sum of the squares of a Bessel bridge of dimension $2\nu$ and of an independent two-dimensional Bessel process.
In the particular case $\nu=1/2$, the square of the Brownian meander is distributed as the sum of the squares of a Brownian bridge and of an independent two-dimensional Bessel process.
Corollary 3.9.2 Let $\nu>0$. Then, we have:
$$N_\nu = M_{2,2\nu}$$
In other words, the square of the normalized Bessel process $\left(\frac{1}{\sqrt{L_\nu}}R(L_\nu u);\ u\le 1\right)$ with dimension $d=2(1+\nu)$ is distributed as the sum of the squares of a two-dimensional Bessel bridge and of an independent Bessel process of dimension $2\nu$.
3.7 Generalized meanders and Bessel bridges
(3.7.1) As a complement to the previous paragraph 3.6, we now give a representation of the Bessel meander $m_\nu$ (defined just below formula (3.5)) in terms of the Bessel bridge of dimension $\delta^+_\nu \equiv 2(1+\nu)$.
We recall that this Bessel bridge may be realized as:
$$\rho_\nu(u) = \frac{1}{\sqrt{d_\nu-g_\nu}}\,R_{-\nu}\bigl(g_\nu+u(d_\nu-g_\nu)\bigr)\,,\quad u\le 1\,,$$
where $d_\nu = \inf\{u\ge 1: R_{-\nu}(u)=0\}$.
Comparing the formulae which define $m_\nu$ and $\rho_\nu$, we obtain

Theorem 3.10 The following equality holds:
$$m_\nu(u) = \frac{1}{\sqrt{V_\nu}}\,\rho_\nu(u V_\nu)\quad(u\le 1) \tag{3.8}$$
where $V_\nu = \dfrac{1-g_\nu}{d_\nu-g_\nu}$.
Furthermore, $V_\nu$ and the Bessel bridge $(\rho_\nu(u),\ u\le 1)$ are independent, and the law of $V_\nu$ is given by:
$$P(V_\nu\in dt) = \nu\, t^{\nu-1}\,dt\quad(0<t<1)\,.$$
Similarly, it is possible to present a realization of the process $n_\nu$ in terms of $\rho_\nu$.

Theorem 3.11 1) Define the process:
$$\tilde n_\nu(u) = \frac{1}{\sqrt{d_\nu-1}}\,R_{-\nu}\bigl(d_\nu-u(d_\nu-1)\bigr) \equiv \frac{1}{\sqrt{\hat V_\nu}}\,\hat\rho_\nu\bigl(u\hat V_\nu\bigr)\quad(u\le 1)$$
where $\hat\rho_\nu(u) = \rho_\nu(1-u)$ $(u\le 1)$ and $\hat V_\nu = 1-V_\nu = \dfrac{d_\nu-1}{d_\nu-g_\nu}$.
Then, the processes $n_\nu$ and $\tilde n_\nu$ have the same distribution.
2) Consequently, the identity in law
$$(n_\nu(u),\ u\le 1) \overset{(law)}{=} \left(\frac{1}{\sqrt{\hat V_\nu}}\,\hat\rho_\nu\bigl(u\hat V_\nu\bigr),\ u\le 1\right) \tag{3.9}$$
holds, where, on the right-hand side, $\hat\rho_\nu$ is a Bessel bridge of dimension $\delta^+_\nu\equiv 2(1+\nu)$, and $\hat V_\nu$ is independent of $\hat\rho_\nu$, with:
$$P(\hat V_\nu\in dt) = \nu\,(1-t)^{\nu-1}\,dt\quad(0<t<1)\,.$$
(3.7.2) Now, the representations of the processes $m_\nu$ and $n_\nu$ given in Theorems 3.10 and 3.11 may be generalized as follows to obtain a representation of a process whose distribution is $M_{d,d'}$ (see Theorem 3.9 above).
Theorem 3.12 Let $d, d'>0$, and define $(\rho_{d+d'}(u),\ u\le 1)$ to be the Bessel bridge with dimension $d+d'$.
Consider, moreover, a beta variable $V_{d,d'}$, with parameters $\left(\frac{d}{2},\frac{d'}{2}\right)$, i.e.:
$$P(V_{d,d'}\in dt) = \frac{t^{\frac{d}{2}-1}(1-t)^{\frac{d'}{2}-1}\,dt}{B\!\left(\frac{d}{2},\frac{d'}{2}\right)}\quad(0<t<1)$$
such that $V_{d,d'}$ is independent of $\rho_{d+d'}$.
Then, the distribution of the process:
$$\left(m_{d,d'}(u) \overset{def}{=} \frac{1}{\sqrt{V_{d,d'}}}\,\rho_{d+d'}\bigl(u V_{d,d'}\bigr),\ u\le 1\right)$$
is $M_{d,d'}$.
In order to prove Theorem 3.12, we shall use the following proposition, which relates the laws of the Bessel bridge and Bessel process, for any dimension.

Proposition 3.3 Let $\Pi_\mu$, resp. $S_\mu$, be the law of the standard Bessel bridge, resp. Bessel process, starting from 0, with dimension $\delta=2(\mu+1)$. Then, for any $t<1$ and every Borel functional $F: C([0,t],\mathbb{R}_+)\to\mathbb{R}_+$, we have:
$$\Pi_\mu\left[F(X_u,\ u\le t)\right] = S_\mu\left[F(X_u,\ u\le t)\,h_\mu(t,X_t)\right]$$
where:
$$h_\mu(t,x) = \frac{1}{(1-t)^{\mu+1}}\,\exp\left(-\frac{x^2}{2(1-t)}\right)$$
Proof: This is a special case of the partial absolute continuity relationship between the laws of a nice Markov process and its bridges (see, e.g., [41]). $\square$
We now prove Theorem 3.12.
In order to present the proof in a natural way, we look for $V$, a random variable taking its values in $(0,1)$, and such that:
i) $P(V\in dt)=\theta(t)\,dt$; ii) $V$ is independent of $\rho_{d+d'}$;
iii) the law of the process $\left(\frac{1}{\sqrt V}\,\rho_{d+d'}(uV);\ u\le 1\right)$ is $M_{d,d'}$.
We define the index $\mu$ by the formula $d+d'=2(\mu+1)$. Then, we have, for every Borel function $F: C([0,1],\mathbb{R}_+)\to\mathbb{R}_+$:
$$\begin{aligned}
E\left[F\left(\frac{1}{\sqrt V}\,\rho_{d+d'}(uV);\ u\le 1\right)\right] &= \int_0^1 dt\,\theta(t)\,\Pi_\mu\left[F\left(\frac{1}{\sqrt t}\,X_{ut};\ u\le 1\right)\right]\\
\text{(by Proposition 3.3)}\qquad &= \int_0^1 dt\,\theta(t)\,S_\mu\left[F\left(\frac{1}{\sqrt t}\,X_{ut};\ u\le 1\right)h_\mu(t,X_t)\right]\\
\text{(by scaling)}\qquad &= \int_0^1 dt\,\theta(t)\,S_\mu\left[F(X_u;\ u\le 1)\,h_\mu\bigl(t,\sqrt t\,X_1\bigr)\right]\\
&= S_\mu\left[F(X_u,\ u\le 1)\int_0^1 dt\,\theta(t)\,h_\mu\bigl(t,\sqrt t\,X_1\bigr)\right]
\end{aligned}$$
Hence, by Theorem 3.9, the problem is now reduced to finding a function $\theta$ such that:
$$\int_0^1 dt\,\theta(t)\,h_\mu\bigl(t,\sqrt t\,x\bigr) = \frac{c_{d,d'}}{x^{d}}$$
Using the explicit formula for $h_\mu$ given in Proposition 3.3, and making some elementary changes of variables, it is easily found, by injectivity of the Laplace transform, that:
$$\theta(t) = \frac{t^{\frac{d}{2}-1}(1-t)^{\frac{d'}{2}-1}}{B\!\left(\frac{d}{2},\frac{d'}{2}\right)}\quad(0<t<1)$$
which ends the proof of Theorem 3.12.
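The key identity of the proof can also be checked by direct quadrature. The sketch below is not from the original text; it integrates the beta density against $h_\mu(t,\sqrt t\,x)$ by a midpoint rule and compares the result with $c_{d,d'}/x^d$, where $c_{d,d'}$ is the constant of Theorem 3.9.

```python
import math

# Numerical check (sketch) of the identity used in the proof of Theorem 3.12:
# with theta the Beta(d/2, d'/2) density and h_mu from Proposition 3.3
# (mu + 1 = (d+d')/2), one should have
#   integral_0^1 theta(t) h_mu(t, sqrt(t) x) dt = c_{d,d'} / x^d.
def check(d, dp, x, n=400_000):
    mu = (d + dp) / 2.0 - 1.0
    beta = math.gamma(d / 2.0) * math.gamma(dp / 2.0) / math.gamma((d + dp) / 2.0)
    total = 0.0
    for k in range(n):                      # midpoint rule on (0, 1)
        t = (k + 0.5) / n
        theta = t ** (d / 2.0 - 1.0) * (1.0 - t) ** (dp / 2.0 - 1.0) / beta
        h = (1.0 - t) ** (-(mu + 1.0)) * math.exp(-t * x * x / (2.0 * (1.0 - t)))
        total += theta * h
    lhs = total / n
    rhs = 2.0 ** (d / 2.0) * math.gamma((d + dp) / 2.0) / math.gamma(dp / 2.0) / x ** d
    return lhs, rhs

lhs, rhs = check(2.0, 1.0, 1.3)
print(lhs, rhs)
```

The two values agree to three decimal places or better for the tested parameters.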
(3.7.3) We now end this chapter by giving the explicit semimartingale decomposition of the process $m_{d,d'}$, for $d+d'\ge 2$, which may be helpful, at least in particular cases, e.g., for the processes $m_\nu$ and $n_\nu$.
Exercise 3.6: (We retain the notation of Theorem 3.9.)
1) Define the process
$$D_u = E^{d+d'}_0\left[\frac{c_{d,d'}}{X_1^{d}}\,\middle|\,\mathcal{F}_u\right]\quad(u<1)\,.$$
Then, prove that:
$$D_u = \frac{1}{(1-u)^{d/2}}\,\Phi\!\left(\frac{d}{2},\frac{d+d'}{2};\ -\frac{X_u^2}{2(1-u)}\right),$$
where $\Phi(a,b;z)$ denotes the confluent hypergeometric function with parameters $(a,b)$ (see Lebedev [63], pp. 260-268).
Hint: Use the integral representation formula (with $d+d'=2(1+\mu)$)
$$\Phi\!\left(\frac{d}{2},\frac{d+d'}{2};\ -b\right) = c_{d,d'}\int_0^\infty \frac{dt}{\bigl(\sqrt{2t}\bigr)^{d}}\left(\frac{t}{b}\right)^{\mu/2}\exp\bigl(-(b+t)\bigr)\,I_\mu\bigl(2\sqrt{bt}\bigr) \tag{3.10}$$
(see Lebedev [63], p. 278) and prove that the right-hand side of formula (3.10) is equal to:
$$E^{d,d'}_a\left[\frac{c_{d,d'}}{X_1^{d}}\right],\quad\text{where } a=\sqrt{2b}\,.$$
2) Prove that, under $M_{d,d'}$, the canonical process $(X_u,\ u\le 1)$ admits the semimartingale decomposition:
$$X_u = \beta_u + \frac{(d+d')-1}{2}\int_0^u\frac{ds}{X_s} - \int_0^u \frac{ds\,X_s}{1-s}\,\frac{\Phi'}{\Phi}\!\left(\frac{d}{2},\frac{d+d'}{2};\ -\frac{X_s^2}{2(1-s)}\right)$$
where $(\beta_u,\ u\le 1)$ denotes a Brownian motion, and, to simplify the formula, we have written $\frac{\Phi'}{\Phi}(a,b;z)$ for $\frac{d}{dz}\bigl(\log\Phi(a,b;z)\bigr)$.
Comments on Chapter 3
The basic Ray-Knight theorems are recalled in paragraph 3.1, and an easy example of the transfer principle is given there.
Paragraph 3.2 is taken from Pitman-Yor [73], and, in turn, following Le Gall-Yor [60], some important extensions (Theorem 3.4) of the RK theorems are given. For more extensions of the RK theorems, see Eisenbaum [39] and Vallois [88].
An illustration of Theorem 3.4, which occurred naturally in the asymptotic study of the windings of the 3-dimensional Brownian motion around certain curves (Le Gall-Yor [62]), is developed in subparagraph (3.3.2).
There is no easy formulation of a Ray-Knight theorem for Brownian local times taken at a fixed time $t$ (see Perkins [68] and Jeulin [52], who have independently obtained a semimartingale decomposition of the local times in the space variable); the situation is much easier when the fixed time is replaced by an independent exponential time, as is explained briefly in paragraph 3.4, following Biane-Yor [19]; the original result is due to Ray [80], but it is presented there in a very different form from Theorem 3.5 in the present chapter.
In paragraphs 3.5, 3.6 and 3.7, some relations between Bessel processes, Bessel bridges and Bessel meanders are presented. In the literature, one will find this kind of study made essentially in relation with Brownian motion and the 3-dimensional Bessel process (see Biane-Yor [18] and Bertoin-Pitman [11] for an exposition of known and new results up to 1994). The extensions which are presented here seem very natural and in the spirit of the first half of Chapter 3, in which the laws of squares of Bessel processes of any dimension are obtained as the laws of certain local times processes.
The discussion in subparagraph (3.5.3), leading to Theorem 3.8, was inspired by Knight [58].
Chapter 4
An explanation and some extensions of the Ciesielski-Taylor identities
The Ciesielski-Taylor identities in law, which we shall study in this chapter, were published in 1962, that is, one year before the publication of the papers of Ray and Knight (1963; [80] and [57]) on Brownian local times; as we shall see below, this is more than a mere coincidence!
Here are these identities: if $(R_\delta(t),\ t\ge 0)$ denotes the Bessel process of dimension $\delta>0$, starting at 0, then:
$$\int_0^\infty ds\,\mathbf{1}_{(R_{\delta+2}(s)\le 1)} \overset{(law)}{=} T_1(R_\delta)\,, \tag{4.1}$$
where $T_1(R_\delta) = \inf\{t: R_\delta(t)=1\}$.
(More generally, throughout this chapter, the notation $H(R_\delta)$ shall indicate the quantity $H$ taken with respect to $R_\delta$.)
Except in the case $\delta=1$, there exists no path-decomposition explanation of (4.1); in this chapter, a spectral-type explanation shall be provided, which relies essentially upon the two following ingredients:
a) both sides of (4.1) may be written as integrals, with respect to the Lebesgue measure on $[0,1]$, of the total local times of $R_{\delta+2}$ for the left-hand side, and of $R_\delta$, up to time $T_1(R_\delta)$, for the right-hand side; moreover, the laws of the two local times processes can be deduced from (RK2)(a) (see Chapter 3, paragraph 1);
b) the use of the integration by parts formula obtained in Chapter 2, Theorem 2.1.
This method (the use of ingredient b) in particular) allows us to extend the identities (4.1) by considering the time spent in an annulus by a $(\delta+2)$-dimensional Brownian motion; they may also be extended to pairs of diffusions $(X,\hat X)$ which are much more general than the pairs $(R_{\delta+2}, R_\delta)$; this type of generalization was first obtained by Ph. Biane [12], who used the expression of the Laplace transforms of the occupation times in terms of differential equations, involving the speed measures and scale functions of the diffusions.
4.1 A pathwise explanation of (4.1) for δ = 1
Thanks to the time-reversal result of D. Williams:
$$(R_3(t),\ t\le L_1(R_3)) \overset{(law)}{=} (1-B_{\sigma-t},\ t\le\sigma)\,,$$
where $L_1(R_3) = \sup\{t: R_3(t)=1\}$ and $\sigma=\inf\{t: B_t=1\}$, the left-hand side of (4.1) may be written as $\int_0^\sigma ds\,\mathbf{1}_{(B_s>0)}$, so that, to explain (4.1) in this case, it now remains to show:
$$\int_0^\sigma ds\,\mathbf{1}_{(B_s>0)} \overset{(law)}{=} T_1(|B|)\,. \tag{4.2}$$
To do this, we use the fact that $(B^+_t,\ t\ge 0)$ may be written as:
$$B^+_t = |\beta|\left(\int_0^t ds\,\mathbf{1}_{(B_s>0)}\right),\quad t\ge 0\,,$$
where $(\beta_u,\ u\ge 0)$ is another one-dimensional Brownian motion starting from 0, so that, from this representation of $B^+$, we deduce:
$$\int_0^\sigma ds\,\mathbf{1}_{(B_s>0)} = T_1(|\beta|)\,;$$
this implies (4.2).
4.2 A reduction of (4.1) to an identity in law between two Brownian quadratic functionals
To explain the result for every $\delta>0$, we write the two members of the CT identity (4.1) as local times integrals, i.e.
$$\int_0^1 da\,\ell^a_\infty(R_{\delta+2}) \overset{(law)}{=} \int_0^1 da\,\ell^a_{T_1}(R_\delta)\,,$$
with the understanding that the local times $(\ell^a_t(R_\gamma);\ a>0, t\ge 0)$ satisfy the occupation density formula: for every positive measurable $f$,
$$\int_0^t ds\, f(R_\gamma(s)) = \int_0^\infty da\, f(a)\,\ell^a_t(R_\gamma)\,.$$
It is not difficult (e.g., see Yor [101]) to obtain the following representations of the local times processes of $R_\gamma$, taken at $t=\infty$ or $t=T_1(R_\gamma)$, with the help of the basic Ray-Knight theorems (see Chapter 3).

Theorem 4.1 Let $(B_t,\ t\ge 0)$, resp. $(\tilde B_t,\ t\ge 0)$, denote a planar BM starting at 0, resp. a standard complex Brownian bridge. Then, we have:
1) for $\gamma>0$,
$$\left(\ell^a_\infty(R_{2+\gamma});\ a>0\right) \overset{(law)}{=} \left(\frac{1}{\gamma\,a^{\gamma-1}}\,|B_{a^\gamma}|^2;\ a>0\right)$$
$$\left(\ell^a_{T_1}(R_{2+\gamma});\ 0<a\le 1\right) \overset{(law)}{=} \left(\frac{1}{\gamma\,a^{\gamma-1}}\,|\tilde B_{a^\gamma}|^2;\ 0<a\le 1\right)$$
2) for $\gamma=0$,
$$\left(\ell^a_{T_1}(R_2);\ 0<a\le 1\right) \overset{(law)}{=} \left(a\,\bigl|B_{\log(1/a)}\bigr|^2;\ 0<a\le 1\right)$$
3) for $0<\gamma\le 2$,
$$\left(\ell^a_{T_1}(R_{2-\gamma});\ 0<a\le 1\right) \overset{(law)}{=} \left(\frac{1}{\gamma\,a^{\gamma-1}}\,|B_{1-a^\gamma}|^2;\ 0<a\le 1\right)$$
With the help of this theorem, we remark that the CT identities (4.1) are equivalent to
$$\frac1\delta\int_0^1 \frac{da}{a^{\delta-1}}\,|B_{a^\delta}|^2 \overset{(law)}{=} \frac{1}{\delta-2}\int_0^1 \frac{da}{a^{\delta-3}}\,|\tilde B_{a^{\delta-2}}|^2 \quad(\delta>2) \tag{4.3}$$
$$\frac12\int_0^1 \frac{da}{a}\,|B_{a^2}|^2 \overset{(law)}{=} \int_0^1 da\;a\,\bigl|B_{\log(1/a)}\bigr|^2 \quad(\delta=2) \tag{4.4}$$
$$\frac1\delta\int_0^1 \frac{da}{a^{\delta-1}}\,|B_{a^\delta}|^2 \overset{(law)}{=} \frac{1}{2-\delta}\int_0^1 \frac{da}{a^{1-\delta}}\,|B_{1-a^{2-\delta}}|^2 \quad(\delta<2) \tag{4.5}$$
where, on both sides, $B$, resp. $\tilde B$, denotes a complex-valued Brownian motion, resp. Brownian bridge.
In order to prove these identities, it obviously suffices to take real-valued processes for $B$ and $\tilde B$, which is what we now assume.
It then suffices to remark that the identity in law (4.4) is a particular case of the integration by parts formula (2.14), considered with $f(a)=\log\frac1a$ and $g(a)=a^2$; the same argument applies to the identity in law (4.5), with $f(a)=1-a^{2-\delta}$ and $g(a)=a^\delta$; with a little more work, one also obtains the identity in law (4.3).
4.3 Some extensions of the Ciesielski-Taylor identities
(4.3.1) The proof of the CT identities which was just given in paragraph 4.2 uses, apart from the Ray-Knight theorems for (Bessel) local times, the integration by parts formula (obtained in Chapter 2), applied to some functions $f$ and $g$ which satisfy the boundary conditions $f(1)=0$ and $g(0)=0$.
In fact, it is possible to take some more advantage of the integration by parts formula, in which we shall now assume no boundary condition, in order to obtain the following extensions of the CT identities.

Theorem 4.2 Let $\delta>0$, and $a\le b\le c$. Then, if $R_\delta$ and $R_{\delta+2}$ denote two Bessel processes starting from 0, with respective dimensions $\delta$ and $\delta+2$, we have:
$$I^{(\delta)}_{a,b,c}:\qquad \int_0^\infty ds\,\mathbf{1}_{(a\le R_{\delta+2}(s)\le b)} + \left(b^{\delta-1}\int_b^c \frac{dx}{x^{\delta-1}}\right)\ell^b_\infty(R_{\delta+2}) \overset{(law)}{=} \frac{a}{\delta}\,\ell^a_{T_c}(R_\delta) + \int_0^{T_c} ds\,\mathbf{1}_{(a\le R_\delta(s)\le b)}\,.$$
(4.3.2) We shall now look at some particular cases of $I^{(\delta)}_{a,b,c}$.
1) $\delta>2$, $a=b$, $c=\infty$. The identity then reduces to:
$$\frac{1}{\delta-2}\,\ell^b_\infty(R_{\delta+2}) \overset{(law)}{=} \frac{1}{\delta}\,\ell^b_\infty(R_\delta)$$
In fact, both variables are exponentially distributed, with parameters which match the identity; moreover, this identity expresses precisely how the total local time at $b$ for $R_\delta$ explodes as $\delta\downarrow 2$.
2) $\delta>2$, $a=0$, $c=\infty$. The identity then becomes:
$$\int_0^\infty ds\,\mathbf{1}_{(R_{\delta+2}(s)\le b)} + \frac{b}{\delta-2}\,\ell^b_\infty(R_{\delta+2}) \overset{(law)}{=} \int_0^\infty ds\,\mathbf{1}_{(R_\delta(s)\le b)}$$
Considered together with the original CT identity (4.1), this gives a functional of $R_{\delta+2}$ which is distributed as $T_b(R_{\delta-2})$.
3) $\delta=2$. Taking $a=0$, we obtain:
$$\int_0^\infty ds\,\mathbf{1}_{(R_4(s)\le b)} + b\log\left(\frac cb\right)\ell^b_\infty(R_4) \overset{(law)}{=} \int_0^{T_c} ds\,\mathbf{1}_{(R_2(s)\le b)}\,,$$
whilst taking $a=b>0$, we obtain:
$$\log\left(\frac cb\right)\ell^b_\infty(R_4) \overset{(law)}{=} \frac12\,\ell^b_{T_c}(R_2)\,.$$
In particular, we deduce from these identities in law the following limit results:
$$\frac{1}{\log c}\int_0^{T_c} ds\,\mathbf{1}_{(R_2(s)\le b)} \xrightarrow[c\to\infty]{(law)} b\,\ell^b_\infty(R_4)\,,$$
and
$$\frac{1}{\log c}\,\ell^b_{T_c}(R_2) \xrightarrow[c\to\infty]{(law)} 2\,\ell^b_\infty(R_4)\,.$$
In fact, these limits in law may be seen as particular cases of the Kallianpur-Robbins asymptotic result for additive functionals of planar Brownian motion $(Z_t,\ t\ge 0)$, which states that:
i) If $f$ belongs to $L^1(\mathbb{C}, dx\,dy)$, and is locally bounded, and if
$$A^f_t \overset{def}{=} \int_0^t ds\, f(Z_s)\,,$$
then:
$$\frac{1}{\log t}\,A^f_t \xrightarrow[t\to\infty]{(law)} \left(\frac{1}{2\pi}\,\bar f\right)\mathbf{e}\,,$$
where $\bar f = \int_{\mathbb{C}} dx\,dy\, f(x,y)$, and $\mathbf{e}$ is a standard exponential variable. Moreover, one has:
$$\frac{1}{\log t}\left(A^f_t - A^f_{T_{\sqrt t}}\right) \xrightarrow[t\to\infty]{(P)} 0$$
ii) (Ergodic theorem) If $f$ and $g$ both satisfy the conditions stated in i), then:
$$A^f_t\big/A^g_t \xrightarrow[t\to\infty]{a.s.} \bar f/\bar g\,.$$
4) $\delta<2$, $a=0$. The identity in law then becomes:
$$\int_0^\infty ds\,\mathbf{1}_{(R_{\delta+2}(s)\le b)} + b^{\delta-1}\left(\frac{c^{2-\delta}-b^{2-\delta}}{2-\delta}\right)\ell^b_\infty(R_{\delta+2}) \overset{(law)}{=} \int_0^{T_c} ds\,\mathbf{1}_{(R_\delta(s)\le b)}$$
which, as a consequence, implies:
$$\frac{1}{c^{2-\delta}}\int_0^{T_c} ds\,\mathbf{1}_{(R_\delta(s)\le b)} \xrightarrow[c\to\infty]{(law)} \frac{b^{\delta-1}}{2-\delta}\,\ell^b_\infty(R_{\delta+2})\,. \tag{4.6}$$
In fact, this limit in law can be explained much more easily than the limit in law in the previous example. Here is such an explanation:
the local times $(\ell^a_t(R_\delta);\ a>0, t\ge 0)$ which, until now in this chapter, have been associated with the Bessel process $R_\delta$ are the semimartingale local times, i.e., they may be defined via the occupation density formula, with respect to Lebesgue measure on $\mathbb{R}_+$. However, at this point, it is more convenient to define the family $\{\lambda^x_t(R_\delta)\}$ of diffusion local times by the formula:
$$\int_0^t ds\, f(R_\delta(s)) = \int_0^\infty dx\; x^{\delta-1}\,\lambda^x_t(R_\delta)\,f(x) \tag{4.7}$$
for every Borel function $f:\mathbb{R}_+\to\mathbb{R}_+$.
(The advantage of this definition is that the diffusion local time $(\lambda^0_t(R_\delta);\ t>0)$ will be finite and strictly positive.) Now, we consider the left-hand side of (4.6); we have:
$$\begin{aligned}
\frac{1}{c^{2}}\int_0^{T_c} ds\,\mathbf{1}_{(R_\delta(s)\le b)} &\overset{(law)}{=} \int_0^{T_1} du\,\mathbf{1}_{\left(R_\delta(u)\le \frac bc\right)} &&\text{(by scaling)}\\
&\overset{(law)}{=} \int_0^{b/c} dx\; x^{\delta-1}\,\lambda^x_{T_1}(R_\delta) &&\text{(by formula (4.7))}\\
&\overset{(law)}{=} \int_0^{b} \frac{dy}{c}\left(\frac yc\right)^{\delta-1}\lambda^{y/c}_{T_1}(R_\delta) &&\text{(by change of variables)}
\end{aligned}$$
Hence, we have:
$$\frac{1}{c^{2-\delta}}\int_0^{T_c} ds\,\mathbf{1}_{(R_\delta(s)\le b)} \xrightarrow[c\to\infty]{(law)} \lambda^0_{T_1}(R_\delta)\,\frac{b^\delta}{\delta} \tag{4.8}$$
The convergence results (4.6) and (4.8) imply:
$$\lambda^0_{T_1}(R_\delta)\,\frac{b^\delta}{\delta} \overset{(law)}{=} \frac{b^{\delta-1}}{2-\delta}\,\ell^b_\infty(R_{\delta+2}) \tag{4.9}$$
It is not hard to convince oneself directly that the identity in law (4.9) holds; indeed, from the scaling property of $R_{\delta+2}$, we deduce that:
$$\ell^b_\infty(R_{\delta+2}) \overset{(law)}{=} b\,\ell^1_\infty(R_{\delta+2})\,,$$
so that formula (4.9) reduces to:
$$\lambda^0_{T_1}(R_\delta) \overset{(law)}{=} \frac{\delta}{2-\delta}\,\ell^1_\infty(R_{\delta+2}) \tag{4.10}$$
Exercise 4.1 Give a proof of the identity in law (4.10) as a consequence of the Ray-Knight theorems for $\left(\lambda^x_{T_1}(R_\delta);\ x\le 1\right)$ and $\left(\ell^x_\infty(R_{\delta+2});\ x\ge 0\right)$.

Exercise 4.2 Let $c>0$ be fixed. Prove that $\frac{1}{c-a}\,\ell^a_{T_c}(R_\delta)$ converges in law, as $a\uparrow c$, and identify the limit in law.
Hint: Either use the identity in law $I^{(\delta)}_{a,c,c}$ or a Ray-Knight theorem for $\left(\ell^a_{T_c}(R_\delta);\ a\le c\right)$.
4.4 On a computation of Földes-Révész
Földes-Révész [42] have obtained, as a consequence of formulae in Borodin [21] concerning computations of laws of Brownian local times, the following identity in law, for $r>q$:
$$\int_0^\infty dy\,\mathbf{1}_{\left(0<\ell^y_{\tau_r}<q\right)} \overset{(law)}{=} T_{\sqrt q}(R_2) \tag{4.11}$$
where, on the left-hand side, $\ell^y_{\tau_r}$ denotes the local time of Brownian motion taken at level $y$, and at time $\tau_r$, the first time the local time at 0 reaches $r$, and, on the right-hand side, $T_{\sqrt q}(R_2)$ denotes the first hitting time of $\sqrt q$ by $R_2$, a two-dimensional Bessel process starting from 0.
We now give an explanation of formula (4.11), using jointly the Ray-Knight theorem and the Ciesielski-Taylor identity in law (4.1).
From the Ray-Knight theorem on Brownian local times up to time $\tau_r$, we know that the left-hand side of formula (4.11) is equal, in law, to:
$$\int_0^{T_0} dy\,\mathbf{1}_{(Y_y<q)}\,,$$
where $(Y_y,\ y\ge 0)$ is a BESQ process, with dimension 0, starting from $r$. Since $r>q$, we may as well assume, using the strong Markov property, that $Y_0=q$, which explains why the law of the left-hand side of (4.11) does not depend on $r\ (\ge q)$.
Now, by time reversal, we have:
T
0
_
0
dy 1
(Y
y
<q)
(law)
=
L
q
_
0
dy 1
(
ˆ
Y
y
<q)
,
where (
ˆ
Y
y
, y ≥ 0) is a BESQ process, with dimension 4, starting from 0, and
L
q
= sup¦y :
ˆ
Y
y
= q¦. Moreover, we have, obviously:
$$\int_{0}^{L_{q}} dy\;1_{(\hat{Y}_{y}<q)} = \int_{0}^{\infty} dy\;1_{(\hat{Y}_{y}<q)} \qquad (4.12)$$
and we deduce from the original Ciesielski–Taylor identity in law (4.1), taken for $\delta = 2$, together with the scaling property of a BES process starting from 0, that the right-hand side of (4.12) is equal in law to $T_{\sqrt{q}}(R_{2})$.
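The $\delta = 2$ case just used lends itself to a quick numerical check. The sketch below (our addition, not part of the original text) simulates both sides of the Ciesielski–Taylor identity for $\delta = 2$: the total time spent in $[0,1]$ by a BES(4) process started at 0, and the first hitting time of 1 by a BES(2) process started at 0. Both sample means should be near $E[T_1(R_2)] = 1/2$; the step size, horizon and sample size are arbitrary choices.

```python
import math
import random

def occupation_below_one_bes4(dt=0.01, horizon=12.0, rng=random):
    # BES(4) from 0 = norm of a 4-dimensional Brownian motion; we accumulate
    # the time spent in [0, 1].  The process is transient, so truncating at
    # the horizon loses little occupation mass.
    x = [0.0] * 4
    sd = math.sqrt(dt)
    occ, t = 0.0, 0.0
    while t < horizon:
        for i in range(4):
            x[i] += sd * rng.gauss(0.0, 1.0)
        t += dt
        if sum(c * c for c in x) <= 1.0:
            occ += dt
    return occ

def hitting_time_bes2(dt=0.01, rng=random):
    # BES(2) from 0 = norm of a planar Brownian motion; first time it hits 1.
    x = y = 0.0
    sd = math.sqrt(dt)
    t = 0.0
    while x * x + y * y < 1.0:
        x += sd * rng.gauss(0.0, 1.0)
        y += sd * rng.gauss(0.0, 1.0)
        t += dt
    return t

random.seed(1)
n = 300
lhs = sum(hitting_time_bes2() for _ in range(n)) / n
rhs = sum(occupation_below_one_bes4() for _ in range(n)) / n
print(lhs, rhs)  # both should be near 1/2
```

The Euler step and the horizon truncation each introduce a small bias, so only rough agreement of the two means should be expected.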
Comments on Chapter 4

The proof, presented here, of the Ciesielski–Taylor identities follows Yor [101]; it combines the RK theorems with the integration by parts formula (2.14). More generally, Biane's extensions of the CT identities to a large class of diffusions ([12]) may also be obtained in the same way. It would be interesting to know whether another family of extensions of the CT identities, obtained by Carmona–Petit–Yor [25] for certain càdlàg Markov processes which are related to Bessel processes through some intertwining relationship, could also be derived from some adequate version of the integration by parts formula (2.14).

In paragraph 4.3, it seemed an amusing exercise to look at some particular cases of the identity in law $I^{(\delta)}_{a,b,c}$, and to relate these examples to some better known relations, possibly of an asymptotic kind.

Finally, paragraph 4.4 presents an interesting application of the CT identities.
Chapter 5
On the winding number of planar BM
The appearance in Chapter 3 of Bessel processes of various dimensions is very remarkable, despite the several proofs of the Ray–Knight theorems on Brownian local times which have now been published.

It is certainly less astonishing to see that the 2-dimensional Bessel process plays an important part in the study of the windings of planar Brownian motion $(Z_t, t \ge 0)$; indeed, one feels that, when $Z$ wanders far away from 0, or, on the contrary, when it gets close to 0, then it has a tendency to wind more than when it lies in the annulus, say $\{z : r \le |z| \le R\}$, for some given $0 < r < R < \infty$. However, some other remarkable feature occurs: the computation of the law, for a fixed time $t$, of the winding number $\theta_t$ of $(Z_u, u \le t)$ around 0 is closely related to the knowledge of the semigroups of all Bessel processes, with dimensions $\delta$ varying between 2 and $\infty$.

There have been, in the 1980's, a number of studies about the asymptotics of winding numbers of planar BM around a finite set of points (see, e.g., a short summary in [81], Chapter XII). It then seemed more interesting to develop here some exact computations for the law of the winding number up to a fixed time $t$, for which some open questions still remain.
5.1 Preliminaries

(5.1.1) Consider $Z_t = X_t + iY_t$, $t \ge 0$, a planar BM starting from $z_0 \neq 0$. We have
Proposition 5.1 1) With probability 1, $(Z_t, t \ge 0)$ does not visit 0.

2) A continuous determination of the logarithm along the trajectory $(Z_u(\omega), u \ge 0)$ is given by the stochastic integral:
$$\log_{\omega}(Z_t(\omega)) - \log_{\omega}(Z_0(\omega)) = \int_{0}^{t}\frac{dZ_u}{Z_u}\,,\quad t \ge 0\,. \qquad (5.1)$$
We postpone the proof of Proposition 5.1 for a moment, in order to write down the following pair of formulae, which are immediate consequences of (5.1):
$$\log|Z_t(\omega)| - \log|Z_0(\omega)| = \operatorname{Re}\int_{0}^{t}\frac{dZ_u}{Z_u} = \int_{0}^{t}\frac{X_u\,dX_u + Y_u\,dY_u}{|Z_u|^{2}} \qquad (5.2)$$
and
$$\theta_t(\omega) - \theta_0(\omega) = \operatorname{Im}\int_{0}^{t}\frac{dZ_u}{Z_u} = \int_{0}^{t}\frac{X_u\,dY_u - Y_u\,dX_u}{|Z_u|^{2}}\,, \qquad (5.3)$$
where $(\theta_t(\omega), t \ge 0)$ denotes a continuous determination of the argument of $(Z_u(\omega), u \le t)$ around 0.
We now note that:
$$\beta_u = \int_{0}^{u}\frac{X_s\,dX_s + Y_s\,dY_s}{|Z_s|}\quad (u \ge 0)\qquad\text{and}\qquad \gamma_u = \int_{0}^{u}\frac{X_s\,dY_s - Y_s\,dX_s}{|Z_s|}\quad (u \ge 0)$$
are two orthogonal martingales with increasing processes $\langle\beta\rangle_u = \langle\gamma\rangle_u \equiv u$; hence, they are two independent Brownian motions; moreover, it is not difficult to show that:
diﬃcult to show that:
¹
t
def
= σ ¦[Z
u
[, u ≤ t¦ ≡ σ ¦β
u
, u ≤ t¦ , up to negligible sets;
hence, we have the following:
for ν ∈ IR, E [exp (iν(θ
t
−θ
0
)) [ ¹
∞
] = exp
_
−
ν
2
2
H
t
_
, where H
t
def
=
t
_
0
ds
[Z
s
[
2
(5.4)
This formula shall be of great help, in the next paragraph, to compute the
law of θ
t
, for ﬁxed t.
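As a hedged illustration (our addition, not part of the original text), formula (5.4) with $\nu = 1$ implies the unconditional identity $E[\cos\theta_t] = E[\exp(-H_t/2)]$. The sketch below checks this by simulating planar BM from $z_0 = 1$, accumulating the winding angle increment by increment and $H_t$ as a Riemann sum; the choices $t = 1$, the step size and the sample size are arbitrary.

```python
import cmath
import math
import random

def winding_and_clock(t=1.0, dt=0.0025, rng=random):
    # One planar Brownian path from z0 = 1: returns (theta_t, H_t), where the
    # winding angle is accumulated step by step and H_t is the Riemann sum
    # of ds / |Z_s|^2.
    z = complex(1.0, 0.0)
    sd = math.sqrt(dt)
    theta, clock = 0.0, 0.0
    for _ in range(int(t / dt)):
        clock += dt / abs(z) ** 2
        dz = complex(rng.gauss(0.0, sd), rng.gauss(0.0, sd))
        theta += cmath.phase(1.0 + dz / z)  # increment of arg Z
        z += dz
    return theta, clock

random.seed(2)
n = 2000
samples = [winding_and_clock() for _ in range(n)]
lhs = sum(math.cos(th) for th, _ in samples) / n        # E[cos(theta_1)]
rhs = sum(math.exp(-h / 2.0) for _, h in samples) / n   # E[exp(-H_1 / 2)]
print(lhs, rhs)
```

Close approaches to 0 are smoothed by the discretisation, which affects both sides in the same way, so the two sample means should agree to Monte Carlo accuracy.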
(5.1.2) We now prove Proposition 5.1. The first statement of Proposition 5.1 follows from

Proposition 5.2 (B. Davis [28]) If $f : \mathbb{C} \to \mathbb{C}$ is holomorphic and not constant, then there exists a planar BM $(\hat{Z}_t, t \ge 0)$ such that:
$$f(Z_t) = \hat{Z}_{A^{f}_{t}}\,,\quad t \ge 0\,,\qquad\text{and}\qquad A^{f}_{\infty} = \infty\ \text{a.s.}$$
To prove Proposition 5.1, we apply Proposition 5.2 with $f(z) = \exp(z)$. Then,
$$\exp(Z_t) = \hat{Z}_{A_t}\,,\qquad\text{with: } A_t = \int_{0}^{t} ds\,\exp(2X_s)\,.$$
The "trick" is to consider, instead of $Z$, the planar BM $(\hat{Z}_u, u \ge 0)$, which starts from $\exp(Z_0) \neq 0$ at time $t = 0$, and shall never reach 0, since $\exp(z) \neq 0$ for every $z \in \mathbb{C}$.
Next, to prove formula (5.1), it suffices to show:
$$\exp\left(\int_{0}^{t}\frac{dZ_u}{Z_u}\right) = \frac{Z_t}{Z_0}\,,\quad\text{for all } t \ge 0\,. \qquad (5.5)$$
This follows from Itô's formula, from which we easily deduce:
$$d\left(\frac{1}{Z_t}\exp\left(\int_{0}^{t}\frac{dZ_u}{Z_u}\right)\right) = 0$$
Exercise 5.1 Give another proof of the identity (5.5), using the uniqueness of solutions of the stochastic equation:
$$U_t = Z_0 + \int_{0}^{t} U_s\,\frac{dZ_s}{Z_s}\,,\quad t \ge 0\,.$$
(5.1.3) In the sequel, we shall also need the two following formulae, which involve the modified Bessel functions $I_{\nu}$.

a) The semigroup $P^{(\nu)}_t(r, d\rho) = p^{(\nu)}_t(r, \rho)\,d\rho$ of the Bessel process of index $\nu > 0$ is given by the formula:
$$p^{(\nu)}_t(r,\rho) = \frac{1}{t}\left(\frac{\rho}{r}\right)^{\nu}\rho\,\exp\left(-\frac{r^{2}+\rho^{2}}{2t}\right)I_{\nu}\left(\frac{r\rho}{t}\right)\qquad (r,\rho,t>0)$$
(see, e.g., Revuz–Yor [81], p. 411).
b) For any $\lambda \in \mathbb{R}$, and $r > 0$, the modified Bessel function $I_{\lambda}(r)$ admits the following integral representation:
$$I_{\lambda}(r) = \frac{1}{\pi}\int_{0}^{\pi} d\theta\,(\exp(r\cos\theta))\cos(\lambda\theta) - \frac{\sin(\lambda\pi)}{\pi}\int_{0}^{\infty} du\;e^{-r\,\mathrm{ch}\,u-\lambda u}$$
(see, e.g., Watson [90], and Lebedev [63], formula (5.10.8), p. 115).
5.2 Explicit computation of the winding number of planar Brownian motion

(5.2.1) With the help of the preliminaries, we shall now prove the following

Theorem 5.1 For any $z_0 \neq 0$, $r, t > 0$, and $\nu \in \mathbb{R}$, we have:
$$E_{z_0}\left[\exp\left(i\nu(\theta_t-\theta_0)\right)\,\Big|\,|Z_t| = \rho\right] = \frac{I_{\nu}}{I_{0}}\left(\frac{|z_0|\rho}{t}\right) \qquad (5.6)$$
Before we prove formula (5.6), let us comment that this formula shows in particular that, for every given $r > 0$, the function $\nu \mapsto \dfrac{I_{\nu}}{I_{0}}(r)$ is the Fourier transform of a probability measure, which we shall denote by $\mu_r$; this distribution was discovered, by analytic means, by Hartman–Watson [48], hence we shall call $\mu_r$ the Hartman–Watson distribution with parameter $r$. Hence, $\mu_r$ is characterized by:
$$\frac{I_{\nu}}{I_{0}}(r) = \int_{-\infty}^{\infty}\exp(i\nu\theta)\,\mu_r(d\theta)\quad(\nu\in\mathbb{R})\,. \qquad (5.7)$$
The proof of Theorem 5.1 shall follow from

Proposition 5.3 Let $r > 0$. For any $\nu \ge 0$, define $P^{(\nu)}_{r}$ to be the law of the Bessel process, with index $\nu$, starting at $r$, on the canonical space $\Omega^{*}_{+} \equiv C(\mathbb{R}_{+}, \mathbb{R}_{+})$.
Then, we have:
$$P^{(\nu)}_{r}\Big|_{\mathcal{R}_t} = \left(\frac{R_t}{r}\right)^{\nu}\exp\left(-\frac{\nu^{2}}{2}H_t\right)\,P^{(0)}_{r}\Big|_{\mathcal{R}_t}\,. \qquad (5.8)$$
Proof: This is a simple consequence of Girsanov's theorem.

However, remark that the relation (5.8) may also be considered as a variant of the simpler Cameron–Martin relation:
$$W^{(\nu)}\Big|_{\mathcal{F}_t} = \exp\left(\nu X_t - \frac{\nu^{2}t}{2}\right)W\Big|_{\mathcal{F}_t} \qquad (5.9)$$
where $W^{(\nu)}$ denotes the law, on $C(\mathbb{R}_{+},\mathbb{R})$, of Brownian motion with drift $\nu$.
Formula (5.9) implies (5.8) after time-changing, since, under $P^{(\nu)}_{r}$, one has:
$$R_t = r\exp(B_u+\nu u)\Big|_{u=H_t}\,,\qquad\text{and}\qquad H_t = \inf\left\{u : \int_{0}^{u} ds\,\exp 2(B_s+\nu s) > t\right\}\,.\ \square$$
We now finish the proof of Theorem 5.1; from formulae (5.2) and (5.3), and the independence of $\beta$ and $\gamma$, we deduce, denoting $r = |z_0|$, that:
$$E_{z_0}\left[\exp\left(i\nu(\theta_t-\theta_0)\right)\,\Big|\,|Z_t| = \rho\right] = E_{z_0}\left[\exp\left(-\frac{\nu^{2}}{2}H_t\right)\Big|\,|Z_t| = \rho\right] = E^{(0)}_{r}\left[\exp\left(-\frac{\nu^{2}}{2}H_t\right)\Big|\,R_t = \rho\right]$$
Now, from formula (5.8), we deduce that for every Borel function $f : \mathbb{R}_{+} \to \mathbb{R}_{+}$, we have:
$$E^{(\nu)}_{r}\left[f(R_t)\right] = E^{(0)}_{r}\left[f(R_t)\left(\frac{R_t}{r}\right)^{\nu}\exp\left(-\frac{\nu^{2}}{2}H_t\right)\right],$$
which implies:
$$p^{(\nu)}_t(r,\rho) = p^{(0)}_t(r,\rho)\left(\frac{\rho}{r}\right)^{\nu}E^{(0)}_{r}\left[\exp\left(-\frac{\nu^{2}}{2}H_t\right)\Big|\,R_t = \rho\right],$$
and formula (5.6) now follows immediately from the explicit expressions of $p^{(\nu)}_t(r,\rho)$ and $p^{(0)}_t(r,\rho)$ given in (5.1.3). $\square$
With the help of the classical integral representation of $I_{\lambda}$, which was presented above in (5.1.3), we are able to give the following explicit additive decomposition of $\mu_r$.
Theorem 5.2 1) For any $r > 0$, we have
$$\mu_r(d\theta) = p_r(d\theta) + q_r(d\theta)\,,$$
where $p_r(d\theta) = \dfrac{1}{2\pi I_0(r)}\exp(r\cos\theta)\,1_{[-\pi,\pi[}(\theta)\,d\theta$ is the von Mises distribution with parameter $r$, and $q_r(d\theta)$ is a bounded signed measure, with total mass equal to 0.
2) $q_r$ admits the following representation:
$$q_r(d\theta) = \frac{1}{I_0(r)}\left\{-e^{-r}\,m + m * \int_{0}^{\infty}\pi_r(du)\,c_u\right\},$$
where:
$$m(d\theta) = \frac{1}{2\pi}\,1_{[-\pi,\pi[}(\theta)\,d\theta\,;\qquad \pi_r(du) = e^{-r\,\mathrm{ch}\,u}\,r(\mathrm{sh}\,u)\,du\,;\qquad c_u(d\theta) = \frac{u\,d\theta}{\pi(\theta^{2}+u^{2})}$$
3) $q_r$ may also be written as follows:
$$q_r(d\theta) = \frac{1}{2\pi I_0(r)}\left\{\Phi_r(\theta-\pi) - \Phi_r(\theta+\pi)\right\}d\theta\,,$$
where
$$\Phi_r(x) = \int_{0}^{\infty} dt\;e^{-r\,\mathrm{ch}\,t}\,\frac{x}{\pi(t^{2}+x^{2})} = \int_{0}^{\infty}\pi_r(dt)\,\frac{1}{\pi}\,\mathrm{Arc\,tg}\left(\frac{t}{x}\right). \qquad (5.10)$$
It is a tantalizing question to interpret precisely every ingredient in the above decomposition of $\mu_r$ in terms of the winding number of planar Brownian motion. This is simply solved for $p_r$, which is the law of the principal determination $\alpha_t$ (e.g.: with values in $[-\pi,\pi[$) of the argument of the random variable $Z_t$, given $R_t \equiv |Z_t|$, a question which does not involve the complicated manner in which Brownian motion $(Z_u, u \le t)$ has wound around 0 up to time $t$, but depends only on the distribution of the 2-dimensional random variable $Z_t$.
On the contrary, the quantities which appear in the decomposition of $q_r$ in the second statement of Theorem 5.2 are not so easy to interpret. However, the Cauchy distribution $c_1$ which appears there is closely related to Spitzer's asymptotic result, which we now recall.

Theorem 5.3 As $t \to \infty$, $\dfrac{2\theta_t}{\log t} \xrightarrow[t\to\infty]{\text{(law)}} C_1$, where $C_1$ is a Cauchy variable with parameter 1.
Proof: Following Itô–McKean ([50], p. 270) we remark that, from the convergence in law of $\dfrac{R_t}{\sqrt{t}}$, as $t \to \infty$, it is sufficient, to prove the theorem, to show that, for every $\nu \in \mathbb{R}$, we have:
$$E_{z_0}\left[\exp\left(\frac{2i\nu\theta_t}{\log t}\right)\,\Big|\,R_t = \rho\sqrt{t}\right] \xrightarrow[t\to\infty]{} \exp(-|\nu|)\,,$$
which, thanks to formula (5.6), amounts to showing:
$$\frac{I_{\lambda}}{I_{0}}\left(\frac{|z_0|\rho}{\sqrt{t}}\right) \xrightarrow[t\to\infty]{} e^{-|\nu|} \qquad (5.11)$$
with the notation: $\lambda = \dfrac{2|\nu|}{\log t}$.
Making an integration by parts in the integral representation of $I_{\lambda}(r)$ in (5.1.3), b), the proof of (5.11) shall be finished once we know that:
$$\int \pi_{p/\sqrt{t}}(du)\,\exp\left(-\frac{|\nu|u}{\log\sqrt{t}}\right) \xrightarrow[t\to\infty]{} e^{-|\nu|}\,,$$
where $p = \rho|z_0|$. However, if we consider the linear application $\ell_t : u \mapsto \dfrac{u}{\log t}$ $(u \in \mathbb{R}_{+})$, it is easily shown that $\ell_t(\pi_{p/t}) \xrightarrow[t\to\infty]{(w)} \varepsilon_1(du)$, i.e.: the image of $\pi_{p/t}(du)$ by $\ell_t$ converges weakly, as $t \to \infty$, to the Dirac measure at 1. $\square$
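A crude simulation (our addition, not part of the original text) illustrates Theorem 5.3: for a planar BM started at $z_0 = 1$, the normalized winding $2\theta_t/\log t$ should be roughly Cauchy for large $t$. The convergence is only logarithmic, and a discretised path cannot resolve the very fast windings during close approaches to 0, so the sketch below only exhibits the symmetry and spread of the limit law, not its exact shape; the horizon, step and sample size are arbitrary.

```python
import cmath
import math
import random

def winding(t, dt, rng=random):
    # Total winding angle of a discretised planar Brownian path from z0 = 1,
    # accumulated increment by increment.
    z = complex(1.0, 0.0)
    theta = 0.0
    sd = math.sqrt(dt)
    for _ in range(int(t / dt)):
        dz = complex(rng.gauss(0.0, sd), rng.gauss(0.0, sd))
        theta += cmath.phase(1.0 + dz / z)
        z += dz
    return theta

random.seed(3)
t = 100.0
vals = sorted(2.0 * winding(t, 0.05) / math.log(t) for _ in range(400))
median = vals[200]
spread = vals[300] - vals[100]  # interquartile range; equals 2 for a standard Cauchy
print(median, spread)
```

By symmetry the sample median should sit near 0, while the interquartile range stays of order 1; the coarse step systematically under-counts the windings, so the spread is expected to fall short of the Cauchy value 2.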
The finite measure $\pi_r(du)$ also appears naturally in the following representation of the law of the winding number around 0 of the "Brownian lace" (= complex Brownian bridge) with extremity $z_0 \neq 0$, and length $t$.
Theorem 5.4 Let $W = \dfrac{\theta_t}{2\pi}$ be the winding number of the Brownian lace of length $t$, starting and ending at $z_0$. Then, with the notation $r = \dfrac{|z_0|^{2}}{t}$, we have:
$$W \overset{\text{(law)}}{=} \varepsilon\left[\frac{C_T}{2\pi} + \frac{1}{2}\right] \qquad (5.12)$$
where $T$ is a random variable with values in $\mathbb{R}_{+}$, such that:
$$P(T \in du) = e^{r}\,\pi_r(du)\,,$$
$\varepsilon$ takes the values 0 and 1, with probabilities:
$$P(\varepsilon = 0) = 1 - e^{-2r}\,,\qquad P(\varepsilon = 1) = e^{-2r}\,,$$
$(C_u)_{u\ge 0}$ is a symmetric Cauchy process starting from 0; $T$, $\varepsilon$ and $(C_u)_{u\ge 0}$ are independent, and, finally, $[x]$ denotes the integer part of $x \in \mathbb{R}$.
For the sake of clarity, we shall now assume, while proving Theorem 5.4, that $z_0 = |z_0|$; there is no loss of generality, thanks to the conformal invariance of Brownian motion.

In particular, we may choose $\theta_0 = 0$, and it is then easy to deduce from the identity (5.6), and the representation of $\mu_r$ given in Theorem 5.2, that, for any Borel function $f : \mathbb{R} \to \mathbb{R}_{+}$, one has:
$$E_{z_0}\left[f(\theta_t)\,\big|\,Z_t = z\right] = f(\alpha_t) + e^{-\tilde{r}\cos(\alpha_t)}\sum_{n\in\mathbb{Z}} a_n(t,\tilde{r})\,f(\alpha_t + 2n\pi) \qquad (*)$$
where $\alpha_t$ is equal, as above, to the determination of the argument of the variable $Z_t$ in $]-\pi,\pi]$, $\tilde{r} = \dfrac{|z_0|\,|z|}{t}$, and
$$a_n(\tilde{r},t) = \Phi_{\tilde{r}}\left(\alpha_t + (2n-1)\pi\right) - \Phi_{\tilde{r}}\left(\alpha_t + (2n+1)\pi\right).$$
In particular, for $z = z_0$, one has: $r = \tilde{r}$, $\alpha_t = 0$, and the previous formula $(*)$ becomes, for $n \neq 0$:
$$\begin{aligned} P_{z_0}\left(\theta_t = 2n\pi \mid Z_t = z_0\right) &= \int_{0}^{\infty}\pi_r(du)\,e^{-r}\,\frac{1}{\pi}\left\{\mathrm{Arc\,tg}\left(\frac{u}{(2n-1)\pi}\right) - \mathrm{Arc\,tg}\left(\frac{u}{(2n+1)\pi}\right)\right\}\quad\text{(from (5.10))}\\ &= \int_{0}^{\infty}\pi_r(du)\,e^{-r}\int_{(2n-1)\pi}^{(2n+1)\pi}\frac{u\,dx}{\pi(u^{2}+x^{2})}\\ &= \int P(T\in du)\,e^{-2r}\,P\left((2n-1)\pi \le C_u \le (2n+1)\pi\right) \qquad (5.13)\end{aligned}$$
Likewise, for $n = 0$, one deduces from $(*)$ that:
$$P_{z_0}\left(\theta_t = 0 \mid Z_t = z_0\right) = \int P(T\in du)\left\{(1-e^{-2r}) + e^{-2r}\,P(-\pi \le C_u \le \pi)\right\} \qquad (5.14)$$
The representation (5.12) now follows from the two formulae (5.13) and (5.14).
From Theorem 5.4, we deduce the following interesting

Corollary 5.4.1 Let $\theta^{*}_{t}$ be the value at time $t$ of a continuous determination of the argument of the Brownian lace $(Z_u, u \le t)$, such that $Z_0 = Z_t = z_0$. Then, one has:
$$\frac{1}{\log t}\,\theta^{*}_{t} \xrightarrow[t\to\infty]{\text{(law)}} C_1 \qquad (5.15)$$
Remark: Note that, in contrast with the statement in Theorem 5.3, the asymptotic winding $\theta^{*}_{t}$ of the "long" Brownian lace $(Z_u, u \le t)$, as $t \to \infty$, may be thought of as the sum of the windings of two independent "free" Brownian motions considered on the interval $[0, t]$; it is indeed possible to justify this assertion directly. $\square$
Proof of the Corollary: From the representation (5.12), it suffices to show that:
$$\frac{1}{\log t}\,C_T \xrightarrow[t\to\infty]{\text{(law)}} C_1\,,$$
where $T$ is distributed as indicated in Theorem 5.4. Now, this convergence in law follows from
$$C_T \overset{\text{(law)}}{=} T\,C_1\,,$$
and the fact, already seen at the end of the proof of Theorem 5.3, that:
$$\frac{T}{\log t} \xrightarrow[t\to\infty]{(P)} 1\,.\ \square$$
(5.2.2) In order to understand better the representation of $W$ given by formula (5.12), we shall now replace the Brownian lace by a planar Brownian motion with drift, thanks to the invariance of Brownian motion by time inversion. From this invariance property, we first deduce the following easy

Lemma 5.1 Let $z_1, z_2 \in \mathbb{C}$, and let $P^{z_2}_{z_1}$ be the law of $(z_1 + \hat{Z}_u + uz_2\,;\ u \ge 0)$, where $(\hat{Z}_u, u \ge 0)$ is a planar BM starting from 0.
Then, the law of $\left(uZ_{\left(\frac{1}{u}\right)}\,,\ u > 0\right)$ under $P^{z_2}_{z_1}$ is $P^{z_1}_{z_2}$.

As a consequence, we obtain: for every positive functional $F$,
$$E_{z_0}\left[F(Z_u, u \le t) \mid Z_t = z\right] = E^{z_0}_{z/t}\left[F\left(uZ_{\left(\frac{1}{u}-\frac{1}{t}\right)}\,;\ u \le t\right)\right].$$
We may now state the following

Theorem 5.5 Let $Z_u = X_u + iY_u$, $u \ge 0$, be a $\mathbb{C}$-valued process, and define $T_t = \inf\{u \le t : X_u = 0\}$, with $T_t = t$ if $\{\ \}$ is empty, and $L = \sup\{u : X_u = 0\}$, with $L = 0$ if $\{\ \}$ is empty.
Then, we have:

1) for any Borel function $f : \mathbb{R}\times\mathbb{R}_{+} \to \mathbb{R}_{+}$,
$$E_{z_0}\left[f\left(\theta_t\,,\ \frac{1}{t}-\frac{1}{T_t}\right)\,\Big|\,Z_t = z\right] = E^{z_0}_{z/t}\left[f(\theta_{\infty}, L)\right];$$
2) moreover, when we take $z_0 = z$, we obtain, with the notation of Theorem 5.4:
$$E_{z_0}\left[f(\theta_t)\,1_{(T_t<t)} \mid Z_t = z_0\right] = E^{z_0}_{z_0/t}\left[f(\theta_{\infty})\,1_{(L>0)}\right] = E\left[f\left(2\pi\varepsilon\left[\frac{C_T}{2\pi}+\frac{1}{2}\right]\right)\varepsilon\right]$$
The proof of Theorem 5.5 follows easily from Lemma 5.1 and Theorem 5.4.
Comments on Chapter 5

The computations presented in paragraph 5.1 are, by now, well-known; the development in paragraph 5.2 is taken partly from Yor [98]; some related computations are found in Berger–Roberts [5].

It is very interesting to compare the proof of Theorem 5.3, which follows partly Itô–McKean ([50] and, in fact, the original proof of Spitzer [85]) and makes use of some asymptotics of the modified Bessel functions, with the "computation-free" arguments of Williams (1974; unpublished) and Durrett [37], discussed in detail in Messulam–Yor [65] and Pitman–Yor [75].

It would be interesting to obtain a better understanding of the identity in law (5.12), an attempt at which is presented in Theorem 5.5.
Chapter 6

On some exponential functionals of Brownian motion and the problem of Asian options
In the asymptotic study of the winding number of planar BM made in the second part of Chapter 5, we saw the important role played by the representation of $(R_t, t \ge 0)$, the 2-dimensional Bessel process, as:
$$R_t = \exp(B_{H_t})\,,\qquad\text{where } H_t = \int_{0}^{t}\frac{ds}{R_s^{2}}\,,$$
with $(B_u, u \ge 0)$ a real-valued Brownian motion.
In this chapter, we are interested in the law of the exponential functional:
$$\int_{0}^{t} ds\,\exp(aB_s + bs)\,,$$
where $a, b \in \mathbb{R}$, and $(B_s, s \ge 0)$ is a 1-dimensional Brownian motion. To compute this distribution, we can proceed in a manner which is similar to that used in the second part of Chapter 5, in that we also rely upon the exact knowledge of the semigroups of the Bessel processes.
The problem which motivated the development in this chapter is that of the so-called Asian options which, on the mathematical side, consists in computing as explicitly as possible the quantity:
$$C^{(\nu)}(t,k) = E\left[(A^{(\nu)}_t - k)^{+}\right], \qquad (6.1)$$
where $k, t \ge 0$, and:
$$A^{(\nu)}_t = \int_{0}^{t} ds\,\exp 2(B_s + \nu s)\,,$$
with $B$ a real-valued Brownian motion starting from 0.
The method alluded to above, and developed in detail in [102], yields an explicit formula for the law of $A^{(\nu)}_t$, and even for that of the pair $(A^{(\nu)}_t, B_t)$. However, the density of this law is then given in an integral form, and it seems difficult to use this result to obtain a "workable" formula for (6.1).
It is, in fact, easier to consider the Laplace transform in $t$ of $C^{(\nu)}(t,k)$, that is:
$$\lambda\int_{0}^{\infty} dt\;e^{-\lambda t}\,E\left[(A^{(\nu)}_t - k)^{+}\right] \equiv E\left[(A^{(\nu)}_{T_{\lambda}} - k)^{+}\right],$$
where $T_{\lambda}$ denotes an exponential variable with parameter $\lambda$, which is independent of $B$. It is no more difficult to obtain a closed form formula for $E\left[\left((A^{(\nu)}_{T_{\lambda}} - k)^{+}\right)^{n}\right]$ for any $n \ge 0$, and, therefore, we shall present the main result of this chapter in the following form.
Theorem 6.1 Consider $n \ge 0$ ($n$ is not necessarily an integer) and $\lambda > 0$. Define $\mu = \sqrt{2\lambda+\nu^{2}}$. We assume that $\lambda > 2n(n+\nu)$, which is equivalent to $\mu > \nu + 2n$. Then, we have, for every $x > 0$:
$$E\left[\left(\left(A^{(\nu)}_{T_{\lambda}} - \frac{1}{2x}\right)^{+}\right)^{n}\right] = \frac{E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right]}{\Gamma\left(\frac{\mu-\nu}{2}-n\right)}\int_{0}^{x} dt\;e^{-t}\,t^{\frac{\mu-\nu}{2}-n-1}\left(1-\frac{t}{x}\right)^{\frac{\mu+\nu}{2}+n} \qquad (6.2)$$
Moreover, we have:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right] = \frac{\Gamma(n+1)\,\Gamma\left(\frac{\mu+\nu}{2}+1\right)\Gamma\left(\frac{\mu-\nu}{2}-n\right)}{2^{n}\,\Gamma\left(\frac{\mu-\nu}{2}\right)\Gamma\left(n+\frac{\mu+\nu}{2}+1\right)}\,. \qquad (6.3)$$
In the particular case where $n$ is an integer, this formula simplifies into:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right] = \frac{n!}{\prod_{j=1}^{n}\left(\lambda - 2(j^{2}+j\nu)\right)}\,. \qquad (6.4)$$
Remarks:

1) It is easily verified, using dominated convergence, that, as $x \to \infty$, both sides of (6.2) converge towards $E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right]$.

2) It appears clearly from formula (6.2) that, in some sense, a first step in the computation of the left-hand side of this formula is the computation of the moments of $A^{(\nu)}_{T_{\lambda}}$. In fact, in paragraph (6.1), we shall first show how to obtain formula (6.4), independently from the method used in the sequel of the chapter.
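As a sanity check (our addition, not part of the original text), the Gamma-function expression (6.3) can be compared numerically with the product formula (6.4) for integer $n$; the two agree whenever $\lambda > 2n(n+\nu)$. The particular values of $n$, $\nu$ and $\lambda$ below are arbitrary.

```python
import math

def moment_gamma_form(n, nu, lam):
    # Formula (6.3): E[(A^(nu)_{T_lam})^n] via Gamma functions,
    # valid for lam > 2n(n + nu), i.e. mu > nu + 2n.
    mu = math.sqrt(2.0 * lam + nu * nu)
    a, b = (mu + nu) / 2.0, (mu - nu) / 2.0
    return (math.gamma(n + 1) * math.gamma(a + 1) * math.gamma(b - n)
            / (2.0 ** n * math.gamma(b) * math.gamma(n + a + 1)))

def moment_product_form(n, nu, lam):
    # Formula (6.4): the simplification for integer n.
    out = float(math.factorial(n))
    for j in range(1, n + 1):
        out /= lam - 2.0 * (j * j + j * nu)
    return out

for n in (1, 2, 3):
    for nu in (0.0, 0.5, 1.0):
        lam = 2.0 * n * (n + nu) + 5.0  # satisfies lam > 2n(n + nu)
        print(n, nu, moment_gamma_form(n, nu, lam), moment_product_form(n, nu, lam))
```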
6.1 The integral moments of $A^{(\nu)}_t$

In order to simplify the presentation, and to extend easily some of the computations made in the Brownian case to some other processes with independent increments, we shall write, for $\lambda \in \mathbb{R}$:
$$E\left[\exp(\lambda B_t)\right] = \exp\left(t\varphi(\lambda)\right),\qquad\text{where, here, }\varphi(\lambda) = \frac{\lambda^{2}}{2}\,. \qquad (6.5)$$
We then have the following

Theorem 6.2
1) Let $\mu \ge 0$, $n \in \mathbb{N}$, and $\alpha > \varphi(\mu+n)$. Then, the formula:
$$\int_{0}^{\infty} dt\,\exp(-\alpha t)\,E\left[\left(\int_{0}^{t} ds\,\exp(B_s)\right)^{n}\exp(\mu B_t)\right] = \frac{n!}{\prod_{j=0}^{n}\left(\alpha - \varphi(\mu+j)\right)} \qquad (6.6)$$
holds.
2) Let $\mu \ge 0$, $n \in \mathbb{N}$, and $t \ge 0$. Then, we have:
$$E\left[\left(\int_{0}^{t} ds\,\exp B_s\right)^{n}\exp(\mu B_t)\right] = E\left[P^{(\mu)}_{n}(\exp B_t)\exp(\mu B_t)\right], \qquad (6.7)$$
where $(P^{(\mu)}_{n},\ n \in \mathbb{N})$ is the following sequence of polynomials:
$$P^{(\mu)}_{n}(z) = n!\sum_{j=0}^{n} c^{(\mu)}_{j}\,z^{j}\,,\qquad\text{with } c^{(\mu)}_{j} = \prod_{\substack{k\neq j\\ 0\le k\le n}}\left(\varphi(\mu+j)-\varphi(\mu+k)\right)^{-1}.$$
Remark: With the following modifications, this theorem may be applied to a large class of processes with independent increments:

i) we assume that $(X_t)$ is a process with independent increments which admits exponential moments of all orders; under this condition alone, formula (6.6) is valid for $\alpha$ large enough;

ii) let $\varphi$ be the Lévy exponent of $X$, which is defined by:
$$E_{0}\left[\exp(mX_s)\right] = \exp\left(s\varphi(m)\right).$$
Then, formula (6.7) also extends to $(X_t)$, provided $\varphi\big|_{\mathbb{R}_{+}}$ is injective, which implies that the argument concerning the additive decomposition formula in the proof below still holds.
Proof of Theorem 6.2

1) We define
$$\phi_{n,t}(\mu) = E\left[\left(\int_{0}^{t} ds\,\exp(B_s)\right)^{n}\exp(\mu B_t)\right] = n!\,E\left[\int_{0}^{t} ds_1\int_{0}^{s_1} ds_2\cdots\int_{0}^{s_{n-1}} ds_n\,\exp\left(B_{s_1}+\cdots+B_{s_n}+\mu B_t\right)\right]$$
We then remark that
$$\begin{aligned} E\left[\exp(\mu B_t + B_{s_1}+\cdots+B_{s_n})\right] &= E\left[\exp\left\{\mu(B_t-B_{s_1}) + (\mu+1)(B_{s_1}-B_{s_2}) + \cdots + (\mu+n)B_{s_n}\right\}\right]\\ &= \exp\left\{\varphi(\mu)(t-s_1) + \varphi(\mu+1)(s_1-s_2) + \cdots + \varphi(\mu+n)s_n\right\}. \end{aligned}$$
Therefore, we have:
$$\begin{aligned} \int_{0}^{\infty} dt\,\exp(-\alpha t)\,\phi_{n,t}(\mu) &= n!\int_{0}^{\infty} dt\;e^{-\alpha t}\int_{0}^{t} ds_1\int_{0}^{s_1} ds_2\cdots\int_{0}^{s_{n-1}} ds_n\,\exp\{\varphi(\mu)(t-s_1)+\cdots+\varphi(\mu+n)s_n\}\\ &= n!\int_{0}^{\infty} ds_n\,\exp\left(-(\alpha-\varphi(\mu+n))s_n\right)\cdots\int_{s_n}^{\infty} ds_{n-1}\,\exp\left(-(\alpha-\varphi(\mu+n-1))(s_{n-1}-s_n)\right)\cdots\int_{s_1}^{\infty} dt\,\exp\left(-(\alpha-\varphi(\mu))(t-s_1)\right), \end{aligned}$$
so that, in the case $\alpha > \varphi(\mu+n)$, we obtain formula (6.6) by integrating successively the $(n+1)$ exponential functions.
2) Next, we use the additive decomposition formula:
$$\frac{1}{\prod_{j=0}^{n}\left(\alpha-\varphi(\mu+j)\right)} = \sum_{j=0}^{n} c^{(\mu)}_{j}\,\frac{1}{\alpha-\varphi(\mu+j)}$$
where $c^{(\mu)}_{j}$ is given as stated in the Theorem, and we obtain, for $\alpha > \varphi(\mu+n)$:
$$\int_{0}^{\infty} dt\;e^{-\alpha t}\,\phi_{n,t}(\mu) = n!\sum_{j=0}^{n} c^{(\mu)}_{j}\int_{0}^{\infty} dt\;e^{-\alpha t}\,e^{\varphi(\mu+j)t}$$
a formula from which we deduce:
$$\phi_{n,t}(\mu) = n!\sum_{j=0}^{n} c^{(\mu)}_{j}\,\exp(\varphi(\mu+j)t) = n!\sum_{j=0}^{n} c^{(\mu)}_{j}\,E\left[\exp(jB_t)\exp(\mu B_t)\right] = E\left[P^{(\mu)}_{n}(\exp B_t)\exp(\mu B_t)\right].$$
Hence, we have proved formula (6.7). $\square$
As a consequence of Theorem 6.2, we have the following

Corollary 6.2.1 For any $\lambda \in \mathbb{R}$, and any $n \in \mathbb{N}$, we have:
$$\lambda^{2n}\,E\left[\left(\int_{0}^{t} du\,\exp(\lambda B_u)\right)^{n}\right] = E\left[P_n(\exp\lambda B_t)\right] \qquad (6.8)$$
where
$$P_n(z) = 2^{n}(-1)^{n}\left\{\frac{1}{n!} + 2\sum_{j=1}^{n}\frac{n!\,(-z)^{j}}{(n-j)!\,(n+j)!}\right\}. \qquad (6.9)$$
Proof: Thanks to the scaling property of Brownian motion, it suffices to prove formula (6.8) for $\lambda = 1$, and any $t \ge 0$. In this case, we remark that formula (6.8) is then precisely formula (6.7) taken with $\mu = 0$, once the coefficients $c^{(0)}_{j}$ have been identified as:
$$c^{(0)}_{0} = (-1)^{n}\,\frac{2^{n}}{(n!)^{2}}\,;\qquad c^{(0)}_{j} = \frac{2^{n}(-1)^{n-j}\cdot 2}{(n-j)!\,(n+j)!}\quad(1\le j\le n)\,;$$
therefore, it now appears that the polynomial $P_n$ is precisely $P^{(0)}_{n}$, and this ends the proof. $\square$
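The identification of the coefficients $c^{(0)}_{j}$ used in this proof is easy to verify numerically; the sketch below (our addition) computes them from the defining product in Theorem 6.2, with $\varphi(\lambda) = \lambda^{2}/2$, and compares with the closed forms. The relation $\sum_j c^{(0)}_{j} = 0$ for $n \ge 1$ simply expresses that $\phi_{n,0}(\mu) = 0$.

```python
import math

def c0(j, n):
    # c_j^{(0)} from the definition in Theorem 6.2, with phi(x) = x^2 / 2.
    phi = lambda x: x * x / 2.0
    prod = 1.0
    for k in range(n + 1):
        if k != j:
            prod *= phi(j) - phi(k)
    return 1.0 / prod

def c0_closed(j, n):
    # The closed forms identified in the proof of Corollary 6.2.1.
    if j == 0:
        return (-1) ** n * 2 ** n / math.factorial(n) ** 2
    return (2 ** n * (-1) ** (n - j) * 2
            / (math.factorial(n - j) * math.factorial(n + j)))

n = 5
print([c0(j, n) for j in range(n + 1)])
print([c0_closed(j, n) for j in range(n + 1)])
```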
It may also be helpful to write down explicitly the moments of $A^{(\nu)}_t$.

Corollary 6.2.2 For any $\lambda \in \mathbb{R}^{*}$, $\mu \in \mathbb{R}$, and $n \in \mathbb{N}$, we have:
$$\lambda^{2n}\,E\left[\left(\int_{0}^{t} du\,\exp\lambda(B_u+\mu u)\right)^{n}\right] = n!\sum_{j=0}^{n} c^{(\mu/\lambda)}_{j}\exp\left(\left(\frac{\lambda^{2}j^{2}}{2}+\lambda j\mu\right)t\right). \qquad (6.10)$$
In particular, we have, for $\mu = 0$:
$$\lambda^{2n}\,E\left[\left(\int_{0}^{t} du\,\exp\lambda B_u\right)^{n}\right] = 2^{n}\,n!\left\{\frac{(-1)^{n}}{(n!)^{2}} + 2\sum_{j=1}^{n}\frac{(-1)^{n-j}}{(n-j)!\,(n+j)!}\exp\left(\frac{\lambda^{2}j^{2}t}{2}\right)\right\} \qquad (6.11)$$
6.2 A study in a general Markovian setup

It is interesting to give a theoretical solution to the problem of Asian options in a general Markovian setup, for the two following reasons, at least:

 on one hand, the general presentation allows one to understand simply the nature of the quantities which appear in the computations;

 on the other hand, this general approach may allow one to choose some other stochastic models than the geometric Brownian motion model.
Therefore, we consider $\{(X_t), (\theta_t), (P_x)_{x\in E}\}$ a strong Markov process, and $(A_t, t \ge 0)$ a continuous additive functional, which is strictly increasing, and such that $P_x(A_{\infty} = \infty) = 1$, for every $x \in E$.

Consider, moreover, $g : \mathbb{R} \to \mathbb{R}_{+}$, a Borel function such that $g(x) = 0$ if $x \le 0$. (In the applications, we shall take $g(x) = (x^{+})^{n}$.)
Then, define:
$$G_x(t) = E_x\left[g(A_t)\right],\qquad G_x(t,k) = E_x\left[g(A_t-k)\right]$$
and
$$G^{(\lambda)}_x(k) = E_x\left[\int_{0}^{\infty} dt\;e^{-\lambda t}\,g(A_t-k)\right].$$
We then have the important

Proposition 6.1 Define $\tau_k = \inf\{t : A_t \ge k\}$. The two following formulae hold:
$$G^{(\lambda)}_x(k) = \int_{0}^{\infty} dv\;e^{-\lambda v}\,E_x\left[e^{-\lambda\tau_k}\,G_{X_{\tau_k}}(v)\right] \qquad (6.12)$$
and, if $g$ is increasing, and absolutely continuous,
$$G^{(\lambda)}_x(k) = \frac{1}{\lambda}\int_{k}^{\infty} dv\;g'(v-k)\,E_x\left[e^{-\lambda\tau_v}\right]. \qquad (6.13)$$
Remark: In the application of these formulae to Brownian motion, we shall see that the equality between the right-hand sides of formulae (6.12) and (6.13) is the translation of a classical "intertwining" identity between confluent hypergeometric functions. This is one of the reasons why it seems important to insist upon this identity; in any case, this discussion shall be taken up in paragraph 6.5.
Proof of Proposition 6.1:

1) We first remark that, on the set $\{t \ge \tau_k\}$, the following relation holds:
$$A_t(\omega) = A_{\tau_k}(\omega) + A_{t-\tau_k}(\theta_{\tau_k}\omega) = k + A_{t-\tau_k}(\theta_{\tau_k}\omega)\,;$$
then, using the strong Markov property, we obtain:
$$G_x(t,k) \equiv E_x\left[g(A_t-k)\right] = E_x\left[E_{X_{\tau_k}(\omega)}\left[g\left(A_{t-\tau_k(\omega)}\right)\right]1_{(\tau_k(\omega)\le t)}\right];$$
hence: $G_x(t,k) = E_x\left[G_{X_{\tau_k}}(t-\tau_k)\,1_{(\tau_k\le t)}\right]$.
This implies, using Fubini's theorem:
$$G^{(\lambda)}_x(k) = E_x\left[\int_{\tau_k}^{\infty} dt\;e^{-\lambda t}\,G_{X_{\tau_k}}(t-\tau_k)\right],$$
and formula (6.12) follows.
2) Making the change of variables $t = v-k$ in the integral in (6.13) and using the strong Markov property, we may write the right-hand side of (6.13) as:
$$\frac{1}{\lambda}\,E_x\left[\int_{0}^{\infty} dt\;g'(t)\,e^{-\lambda\tau_k}\,E_{X_{\tau_k}}\left(e^{-\lambda\tau_t}\right)\right]$$
Therefore, in order to prove that the right-hand sides of (6.12) and (6.13) are equal, it suffices to prove the identity:
$$\int_{0}^{\infty} dv\;e^{-\lambda v}\,E_z\left[g(A_v)\right] = \frac{1}{\lambda}\int_{0}^{\infty} dt\;g'(t)\,E_z\left[e^{-\lambda\tau_t}\right]. \qquad (6.14)$$
(here, $z$ stands for $X_{\tau_k}(\omega)$ in the previous expressions).
In fact, we now show
$$\int_{0}^{\infty} dv\;e^{-\lambda v}\,g(A_v) = \frac{1}{\lambda}\int_{0}^{\infty} dt\;g'(t)\,e^{-\lambda\tau_t} \qquad (6.15)$$
which, a fortiori, implies (6.14).
which, a fortiori, implies (6.14).
Indeed, if we write: g(a) =
a
_
0
dt g
(t), we obtain:
∞
_
0
dv e
−λv
g(A
v
) =
∞
_
0
dv e
−λv
A
v
_
0
dt g
(t)
6.3 The case of L´evy processes 87
=
∞
_
0
dt g
(t)
∞
_
τ
t
dv e
−λv
=
1
λ
∞
_
0
dt g
(t)e
−λτ
t
which is precisely the identity (6.15). ¯.
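Identity (6.15) is purely deterministic: it holds path by path for any continuous, strictly increasing clock. The following sketch (our addition) checks it by quadrature for the deterministic clock $A_v = v^2$ (so $\tau_t = \sqrt{t}$) and $g(a) = a^3$, choices made only for illustration; both sides then equal $720/\lambda^7$.

```python
import math

def quad(f, a, b, n=20000):
    # plain trapezoidal rule on [a, b]
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

lam = 2.0
A = lambda v: v * v           # a deterministic, strictly increasing "clock"
tau = lambda t: math.sqrt(t)  # its inverse: tau_t = inf{v : A_v >= t}
g = lambda a: a ** 3
gprime = lambda a: 3.0 * a ** 2

# Left-hand side of (6.15): int_0^inf e^{-lam v} g(A_v) dv  (tail truncated)
lhs = quad(lambda v: math.exp(-lam * v) * g(A(v)), 0.0, 40.0)
# Right-hand side: (1/lam) int_0^inf g'(t) e^{-lam tau_t} dt
rhs = quad(lambda t: gprime(t) * math.exp(-lam * tau(t)), 0.0, 400.0, n=100000) / lam
print(lhs, rhs)  # both approximately 720 / lam**7 = 5.625
```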
Exercise 6.1 Let $(M_t, t \ge 0)$ be an $\mathbb{R}_{+}$-valued multiplicative functional of the process $X$; prove the following generalizations of formulae (6.12) and (6.13):
$$\int_{0}^{\infty} dt\;E_x\left[M_t\,g(A_t-k)\right] = \int_{0}^{\infty} dv\;E_x\left[M_{\tau_k}\,E_{X_{\tau_k}}\left(M_v\,g(A_v)\right)\right] = \int_{0}^{\infty} dt\;g'(t)\,E_x\left[M_{\tau_k}\,E_{X_{\tau_k}}\left(\int_{\tau_t}^{\infty} dv\;M_v\right)\right]$$
6.3 The case of Lévy processes

We now consider the particular case where $(X_t)$ is a Lévy process, that is a process with homogeneous, independent increments, and we take for $(A_t)$ and $g$ the following:
$$A_t = \int_{0}^{t} ds\,\exp(mX_s)\,,\qquad\text{and}\qquad g(x) = (x^{+})^{n}\,,$$
for some $m \in \mathbb{R}$, and $n > 0$.
We define $Y_k = \exp(X_{\tau_k})$, $y = \exp(x)$, and we denote by $(\tilde{P}_y)_{y\in\mathbb{R}_{+}}$ the family of laws of the strong Markov process $(Y_k;\ k \ge 0)$.
We now compute the quantities $G_x(t)$ and $G^{(\lambda)}_x(k)$ in this particular case; we find:
$$G_x(t) = \exp(mnx)\,e_n(t) = y^{mn}\,e_n(t)\,,$$
where:
$$e_n(t) = G_0(t) \equiv E_0\left[\left(\int_{0}^{t} ds\,\exp(mX_s)\right)^{n}\right].$$
On the other hand, we have:
$$\tau_k = \int_{0}^{k}\frac{dv}{(Y_v)^{m}}\,, \qquad (6.16)$$
and formula (6.12) now becomes:
$$G^{(\lambda)}_x(k) = \tilde{E}_y\left[(Y_k)^{mn}\exp(-\lambda\tau_k)\right]e^{(\lambda)}_n\,,\qquad\text{where: } e^{(\lambda)}_n = \int_{0}^{\infty} dt\;e^{-\lambda t}\,e_n(t)\,.$$
We may now write both formulae (6.12) and (6.13) as follows.

Proposition 6.2 With the above notation, we have:
$$G^{(\lambda)}_x(k) \overset{\text{(i)}}{=} \tilde{E}_y\left[(Y_k)^{mn}\exp(-\lambda\tau_k)\right]e^{(\lambda)}_n \overset{\text{(ii)}}{=} \frac{n}{\lambda}\int_{k}^{\infty} dv\,(v-k)^{n-1}\,\tilde{E}_y\left[e^{-\lambda\tau_v}\right] \qquad (6.17)$$
In the particular case $n = 1$, this double equality takes a simpler form: indeed, in this case, we have
$$e^{(\lambda)}_1 = \int_{0}^{\infty} dt\;e^{-\lambda t}\,e_1(t) = \int_{0}^{\infty} dt\;e^{-\lambda t}\int_{0}^{t} ds\,\exp(s\,\varphi(m))\,,$$
where $\varphi$ is the Lévy exponent of $X$. It is now elementary to obtain, for $\lambda > \varphi(m)$, the formula: $e^{(\lambda)}_1 = \dfrac{1}{\lambda(\lambda-\varphi(m))}$, and, therefore, for $n = 1$, the formulae (6.17) become
$$\lambda\,G^{(\lambda)}_x(k) = \tilde{E}_y\left[(Y_k)^{m}\exp(-\lambda\tau_k)\right]\frac{1}{(\lambda-\varphi(m))} = \int_{k}^{\infty} dv\;\tilde{E}_y\left[\exp(-\lambda\tau_v)\right]. \qquad (6.18)$$
6.4 Application to Brownian motion

We now assume that $X_t = B_t + \nu t$, $t \ge 0$, with $(B_t)$ a Brownian motion, and $\nu \ge 0$, and we take $m = 2$, which implies:
$$A_t = \int_{0}^{t} ds\,\exp(2X_s)\,.$$
In this particular situation, the process $(Y_k, k \ge 0)$ is now the Bessel process with index $\nu$, or dimension $\delta_{\nu} = 2(1+\nu)$. We denote by $P^{(\nu)}_y$ the law of this process, when starting at $y$, and we write simply $P^{(\nu)}$ for $P^{(\nu)}_1$. Hence, for example, $P^{(0)}$ denotes the law of the 2-dimensional Bessel process, starting from 1. We now recall the Girsanov relation, which was already used in Chapter 5, formula (5.8):
$$P^{(\nu)}_y\Big|_{\mathcal{R}_t} = \left(\frac{R_t}{y}\right)^{\nu}\exp\left(-\frac{\nu^{2}}{2}\tau_t\right)P^{(0)}_y\Big|_{\mathcal{R}_t}\,,\qquad\text{where } \tau_t = \int_{0}^{t}\frac{ds}{R_s^{2}}\,. \qquad (6.19)$$
In Chapter 5, we used the notation $H_t$ for $\tau_t$; $(R_t, t \ge 0)$ denotes, as usual, the coordinate process on $\Omega^{*}_{+}$, and $\mathcal{R}_t = \sigma\{R_s, s \le t\}$. The following Lemma is now an immediate consequence of formula (6.19).

Lemma 6.1 For every $\alpha \in \mathbb{R}$, for every $\nu \ge 0$, and $\lambda \ge 0$, we have, if we denote $\mu = \sqrt{2\lambda+\nu^{2}}$:
$$E^{(\nu)}\left[(R_k)^{\alpha}\exp(-\lambda\tau_k)\right] = E^{(0)}\left[(R_k)^{\alpha+\nu}\exp\left(-\frac{\mu^{2}}{2}\tau_k\right)\right] = E^{(\mu)}\left[R_k^{\alpha+\nu-\mu}\right] \qquad (6.20)$$
We are now able to write the formulae (6.17) in terms of the moments of Bessel processes.

Proposition 6.3 We now write simply $G^{(\lambda)}(k)$ for $G^{(\lambda)}_0(k)$, and we introduce the notation:
$$H_{\mu}(\alpha;\,s) = E^{(\mu)}\left((R_s)^{\alpha}\right). \qquad (6.21)$$
Then, we have:
$$G^{(\lambda)}(k) \overset{\text{(i)}}{=} H_{\mu}(2n+\nu-\mu;\,k)\,e^{(\lambda)}_n \overset{\text{(ii)}}{=} \frac{n}{\lambda}\int_{k}^{\infty} dv\,(v-k)^{n-1}\,H_{\mu}(\nu-\mu;\,v) \qquad (6.22)$$
which, in the particular case $n = 1$, simplifies, with the notation $\delta_{\nu} = 2(1+\nu)$, to:
$$\lambda\,G^{(\lambda)}(k) \overset{\text{(i)}}{=} H_{\mu}(2+\nu-\mu;\,k)\,\frac{1}{(\lambda-\delta_{\nu})} \overset{\text{(ii)}}{=} \int_{k}^{\infty} dv\,H_{\mu}(\nu-\mu;\,v)\,. \qquad (6.23)$$
It is now clear, from formula (6.22), that in order to obtain a closed form formula for $G^{(\lambda)}(k)$, it suffices to be able to compute explicitly $H_{\mu}(\alpha;\,k)$ and $e^{(\lambda)}_n$. In fact, once $H_{\mu}(\alpha;\,k)$ is computed for all admissible values of $\alpha$ and $k$, by taking $k = 0$ in formula (6.22)(ii), we obtain:
$$e^{(\lambda)}_n = \frac{n}{\lambda}\int_{0}^{\infty} dv\;v^{n-1}\,H_{\mu}(\nu-\mu;\,v)\,, \qquad (6.24)$$
from which we shall deduce formula (6.3) for $\lambda e^{(\lambda)}_n \equiv E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right]$.
We now present the quickest way, to our knowledge, to compute $H_{\mu}(\alpha;\,k)$. In order to compute this quantity, we find it interesting to introduce the laws $Q^{\delta}_z$ of the squared Bessel process $(\Sigma_u, u \ge 0)$ of dimension $\delta$, starting from $z$, for $\delta > 0$, and $z > 0$, because of the additivity property of this family (see Chapter 2, Theorem 2.3).
We then have the following

Proposition 6.4 For $z > 0$, and for every $\gamma$ such that $0 < \gamma < \mu+1$, we have:
$$\frac{1}{z^{\gamma}}\,H_{\mu}\left(-2\gamma;\,\frac{1}{2z}\right) \overset{\text{(i)}}{=} Q^{\delta_{\mu}}_{z}\left[\frac{1}{(\Sigma_{1/2})^{\gamma}}\right] \overset{\text{(ii)}}{=} \frac{1}{\Gamma(\gamma)}\int_{0}^{1} du\;e^{-zu}\,u^{\gamma-1}(1-u)^{\mu-\gamma} \qquad (6.25)$$
Proof:

a) Formula (6.25)(i) is a consequence of the invariance property of the laws of Bessel processes by time-inversion;

b) We now show how to deduce formula (6.25)(ii) from (6.25)(i). Using the elementary identity:
$$\frac{1}{r^{\gamma}} = \frac{1}{\Gamma(\gamma)}\int_{0}^{\infty} dt\;e^{-rt}\,t^{\gamma-1}\,,$$
Q
δ
µ
z
_
1
(Σ
1/2
)
γ
_
=
1
Γ(γ)
∞
_
0
dt t
γ−1
Q
δ
µ
z
(e
−tΣ
1/2
)
6.4 Application to Brownian motion 91
and the result now follows from the general formula:
$$Q^{\delta}_{z}\left(\exp(-\alpha\Sigma_s)\right) = \frac{1}{(1+2\alpha s)^{\delta/2}}\exp\left(-z\,\frac{\alpha}{1+2\alpha s}\right), \qquad (6.26)$$
which we use with $\alpha = t$, and $s = 1/2$. $\square$
Remark: Formula (6.26) is easily deduced from the additivity property of the family $(Q^{\delta}_z)$ (see Revuz–Yor [81], p. 411). $\square$
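Formula (6.26) can be checked by Monte Carlo in the simplest case $\delta = 2$, where $\Sigma_s$ is the squared norm of a two-dimensional Brownian motion started at a point of squared norm $z$. The sketch below is our addition; the values of $z$, $\alpha$, $s$ and the sample size are arbitrary.

```python
import math
import random

def besq2_laplace_mc(z, alpha, s, n=100000, rng=random):
    # Monte Carlo estimate of Q^2_z[exp(-alpha * Sigma_s)]: for delta = 2,
    # Sigma_s = (sqrt(z) + B1_s)^2 + (B2_s)^2 with two independent
    # one-dimensional Brownian motions.
    sd = math.sqrt(s)
    x0 = math.sqrt(z)
    total = 0.0
    for _ in range(n):
        a = x0 + sd * rng.gauss(0.0, 1.0)
        b = sd * rng.gauss(0.0, 1.0)
        total += math.exp(-alpha * (a * a + b * b))
    return total / n

def besq_laplace_exact(z, alpha, s, delta):
    # Formula (6.26).
    return (math.exp(-z * alpha / (1.0 + 2.0 * alpha * s))
            / (1.0 + 2.0 * alpha * s) ** (delta / 2.0))

random.seed(4)
z, alpha, s = 1.0, 0.7, 0.5
mc = besq2_laplace_mc(z, alpha, s)
exact = besq_laplace_exact(z, alpha, s, 2.0)
print(mc, exact)
```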
We now show how formulae (6.2) and (6.3) are consequences of formula (6.25):

 firstly, we apply formula (6.22)(i), together with formula (6.25)(ii), with $\gamma = \frac{\mu-\nu}{2}-n$, and $z = x$. Formula (6.2) then follows after making the change of variables $u = \frac{t}{x}$ in the integral in formula (6.25);

 secondly, we take formula (6.22)(ii) with $k = 0$, which implies:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right] = n\int_{0}^{\infty} dv\;v^{n-1}\,H_{\mu}(\nu-\mu;\,v)\,,$$
and we then obtain formula (6.3) by replacing in the above integral $H_{\mu}(\nu-\mu;\,v)$ by its value given by (6.25)(ii), with $\gamma = \frac{\mu-\nu}{2}$.
In fact, when we analyze the previous arguments in detail, we obtain a representation of the r.v. $A^{(\nu)}_{T_{\lambda}}$ as the ratio of a beta variable to a gamma variable, both variables being independent; such analysis also provides us with some very partial explanation of this independence property. Precisely, we have obtained the following result.
Theorem 6.3 1. The law of the r.v. $A^{(\nu)}_{T_{\lambda}}$ satisfies:
$$A^{(\nu)}_{T_{\lambda}} \overset{\text{(law)}}{=} \frac{Z_{1,a}}{2Z_{b}}\,,\qquad\text{where } a = \frac{\mu+\nu}{2}\ \text{ and }\ b = \frac{\mu-\nu}{2} \qquad (6.27)$$
and where $Z_{\alpha,\beta}$, resp. $Z_b$, denotes a beta variable with parameters $\alpha$ and $\beta$, resp. a gamma variable with parameter $b$, and both variables on the right-hand side of (6.27) are independent.
2. More generally, we obtain:
$$\left(A^{(\nu)}_{T_{\lambda}}\,;\ \frac{Z_a}{Z_b}\exp\left(2B^{(\nu)}_{T_{\lambda}}\right)\right) \overset{\text{(law)}}{=} \left(\frac{Z_1}{2(Z_1+Z_a)Z_b}\,;\ \frac{Z_a}{Z_b}\exp\left(2B^{(\nu)}_{T_{\lambda}}\right)\right) \qquad (6.28)$$
where $Z_1, Z_a, Z_b$ are three independent gamma variables, with respective parameters 1, $a$, $b$, and these variables are also assumed to be independent of $B$ and $T_{\lambda}$.
Remark: Our aim in establishing formula (6.28) was to try and understand
better the factorization which occurs in formula (6.27), but, at least at ﬁrst
glance, formula (6.28) does not seem to be very helpful.
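Identity (6.27) lends itself to a quick Monte Carlo sanity check. The sketch below (in Python; the drift, rate, step size and sample counts are ad hoc choices of ours, not taken from the text) simulates $A^{(\nu)}_{T_\lambda}$ for $\nu = 0$, $\lambda = 3$ by an Euler scheme and compares its empirical median with that of $Z_{1,a}/(2Z_b)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters (our choice): nu = 0, lambda = 3, so mu = sqrt(6) and a = b = sqrt(6)/2.
nu, lam = 0.0, 3.0
mu = np.sqrt(2 * lam + nu ** 2)
a, b = (mu + nu) / 2, (mu - nu) / 2

# Left-hand side of (6.27): A_{T_lam} = int_0^{T_lam} exp(2 B_s) ds, discretized.
n_paths, dt, horizon = 3000, 0.002, 3.0
n_steps = int(horizon / dt)
steps = rng.normal(nu * dt, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(steps, axis=1)
T = rng.exponential(1 / lam, size=n_paths)          # independent exp(lambda) time
alive = (np.arange(n_steps) * dt)[None, :] < T[:, None]
A = np.sum(np.exp(2 * B) * alive, axis=1) * dt

# Right-hand side: a beta(1,a) variable over twice an independent gamma(b) variable.
ratio = rng.beta(1.0, a, size=n_paths) / (2 * rng.gamma(b, size=n_paths))

med_lhs, med_rhs = np.median(A), np.median(ratio)
```

The medians (rather than means) are compared because, for these parameter values, the second moment of the right-hand side is infinite.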
Proof of the Theorem:
a) From formula (6.24), if we take $n$ sufficiently small, we obtain:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^n\right] = \int_0^{\infty} dv\, n v^{n-1}\, H_{\mu}(\nu-\mu; v)$$
$$= \int_0^{\infty} \frac{dy}{y}\, n \left(\frac{1}{2y}\right)^{n} y^{b} \left(\frac{1}{y^{b}}\, H_{\mu}\left(-2b; \frac{1}{2y}\right)\right), \quad \text{where } b = \frac{\mu-\nu}{2},$$
$$= \int_0^{\infty} \frac{dy}{y}\, n \left(\frac{1}{2y}\right)^{n} y^{b}\, Q^{\delta_{\mu}}_{y}\left(\frac{1}{\Sigma^{b}_{1/2}}\right), \quad \text{from (6.25)(i)},$$
$$= \int_0^{\infty} \frac{dy}{y}\, n \left(\frac{1}{2y}\right)^{n} y^{b}\, E\left[\exp\left(-y Z^{(b,a+1)}\right)\right] c_{\mu,\nu}, \quad \text{from (6.25)(ii)}.$$
In the sequel, the constant $c_{\mu,\nu}$ may vary, but shall never depend on $n$. For simplicity, we now write $Z$ instead of $Z^{(b,a+1)}$, and we obtain, after making the change of variables $y = z/Z$:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^n\right] = c_{\mu,\nu}\, E\left[\int_0^{\infty} \frac{dz}{z}\, n \left(\frac{Z}{2z}\right)^{n} \left(\frac{z}{Z}\right)^{b} \exp(-z)\right]$$
$$= c_{\mu,\nu}\, E\left[n Z^{n-1}\,\frac{1}{Z^{b-1}}\right] \int_0^{\infty} dz \left(\frac{1}{2z}\right)^{n} z^{b-1}\, e^{-z},$$
and, after performing an integration by parts in the first expectation, we obtain:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^n\right] = E\left[(Z_{1,a})^n\right]\, E\left[\left(\frac{1}{2Z_b}\right)^{n}\right] \tag{6.29}$$
which implies (6.27).
b) We take up the same method as above, that is, we consider
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{\alpha} \exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right], \quad \text{for small } \alpha \text{ and } \beta.$$
Applying the Cameron–Martin absolute continuity relationship between Brownian motion and Brownian motion with drift, we find:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{\alpha} \exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right] = \lambda \int_0^{\infty} dt\, \exp\left(-\left(\lambda + \frac{\nu^2}{2}\right)t\right) E\left[(A^{(0)}_{t})^{\alpha} \exp\left((\beta+\nu)B_t\right)\right]$$
$$= \lambda \int_0^{\infty} dt\, e^{-\theta t}\, E\left[(A^{(\beta+\nu)}_{t})^{\alpha}\right] = \frac{\lambda}{\theta}\, E\left[(A^{(\beta+\nu)}_{T_{\theta}})^{\alpha}\right],$$
where $\theta = \lambda + \frac{\nu^2}{2} - \frac{(\beta+\nu)^2}{2} = \lambda - \frac{\beta^2}{2} - \beta\nu$.
We now remark that $\sqrt{2\theta + (\beta+\nu)^2}$ is in fact equal to $\mu = \sqrt{2\lambda + \nu^2}$, so that we may write, with the help of formula (6.29):
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{\alpha} \exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right] = \frac{\lambda}{\theta}\, E\left[\left(Z_{1,a+\frac{\beta}{2}}\right)^{\alpha}\right] E\left[\left(\frac{1}{2Z_{b-\frac{\beta}{2}}}\right)^{\alpha}\right]. \tag{6.30}$$
Now, there exist constants $C_1$ and $C_2$ such that:
$$E\left[\left(Z_{1,a+\frac{\beta}{2}}\right)^{\alpha}\right] = E\left[(Z_{1,a})^{\alpha}\,(1-Z_{1,a})^{\beta/2}\right] C_1, \qquad E\left[\left(2Z_{b-\frac{\beta}{2}}\right)^{-\alpha}\right] = E\left[(2Z_b)^{-\alpha}\,(Z_b)^{-\beta/2}\right] C_2,$$
and it is easily found that $C_1 = \frac{a+\beta/2}{a}$ and $C_2 = \frac{\Gamma(b)}{\Gamma\left(b-\frac{\beta}{2}\right)}$. Furthermore,
we now remark that, by taking simply $\alpha = 0$ in formula (6.30):
$$E\left[\exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right] = \frac{\lambda}{\theta}.$$
Hence, we may write formula (6.30) as:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{\alpha} \exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right] \frac{1}{C_1 C_2} = E\left[\left(\frac{Z_{1,a}}{2Z_b}\right)^{\alpha} \left(\frac{1-Z_{1,a}}{Z_b}\right)^{\beta/2} \exp\left(\beta B^{(\nu)}_{T_{\lambda}}\right)\right].$$
Now, since $\frac{1}{C_1 C_2} = E\left[\left(\frac{Z_{a,1}}{Z_b}\right)^{\beta/2}\right]$, we deduce from the above identity that:
$$\left(A^{(\nu)}_{T_{\lambda}}\,;\ \left(\frac{Z_{a,1}}{Z_b}\right)^{1/2} \exp\left(B^{(\nu)}_{T_{\lambda}}\right)\right) \stackrel{(\text{law})}{=} \left(\frac{Z_{1,a}}{2Z_b}\,;\ \left(\frac{1-Z_{1,a}}{Z_b}\right)^{1/2} \exp\left(B^{(\nu)}_{T_{\lambda}}\right)\right)$$
$$\stackrel{(\text{law})}{=} \left(\frac{1-Z_{a,1}}{2Z_b}\,;\ \left(\frac{Z_{a,1}}{Z_b}\right)^{1/2} \exp\left(B^{(\nu)}_{T_{\lambda}}\right)\right),$$
from which we easily obtain (6.28), thanks to the beta-gamma relationships. □
As a verification, we now show that formula (6.2) may be recovered simply from formula (6.27); it is convenient to write formula (6.2) in the equivalent form:
$$\frac{1}{x^{b-n}}\, E\left[\left(\left(\frac{Z_{1,a}}{Z_b} - \frac{1}{x}\right)^{+}\right)^{n}\right] = \frac{E\left[\left(\frac{Z_{1,a}}{Z_b}\right)^{n}\right]}{\Gamma(b-n)} \int_0^1 dt\, e^{-xt}\, t^{b-n-1}\,(1-t)^{n+a} \tag{6.31}$$
for $x > 0$, $n < b$, and $a > 0$.
We now obtain the following more general formula.

Proposition 6.5 Let $Z_{\alpha,\beta}$ and $Z_{\gamma}$ be two independent random variables, which are, respectively, a beta variable with parameters $(\alpha, \beta)$ and a gamma variable with parameter $\gamma$. Then, we have, for every $x > 0$ and $n < \gamma$:
$$E\left[\left(\left(\frac{Z_{\alpha,\beta}}{Z_{\gamma}} - \frac{1}{x}\right)^{+}\right)^{n}\right] = \frac{x^{\gamma-n}}{\Gamma(\gamma)\,B(\alpha,\beta)} \int_0^1 du\, e^{-xu}\, u^{\gamma-n-1}(1-u)^{\beta+n} \int_0^1 dw\, \left(u + w(1-u)\right)^{\alpha-1} w^{n} (1-w)^{\beta-1}. \tag{6.32}$$
In the particular case $\alpha = 1$, formula (6.32) simplifies to:
$$E\left[\left(\left(\frac{Z_{1,\beta}}{Z_{\gamma}} - \frac{1}{x}\right)^{+}\right)^{n}\right] = \frac{x^{\gamma-n}}{\Gamma(\gamma)} \left(\int_0^1 du\, e^{-xu}\, u^{\gamma-n-1}(1-u)^{\beta+n}\right) \left(\beta\, B(n+1,\beta)\right) \tag{6.33}$$
which is precisely formula (6.31), taken with $a = \beta$ and $b = \gamma$.
Proof: We remark that we need only integrate upon the subset $\left\{1 \ge Z_{\alpha,\beta} \ge \frac{1}{x} Z_{\gamma}\right\}$ of the probability space and, after conditioning on $Z_{\gamma}$, we integrate with respect to the law of $Z_{\alpha,\beta}$ on the random interval $\left[\frac{1}{x} Z_{\gamma}\,;\, 1\right]$. This gives:
$$E\left[\left(\left(\frac{Z_{\alpha,\beta}}{Z_{\gamma}} - \frac{1}{x}\right)^{+}\right)^{n}\right] = E\left[\frac{1}{Z_{\gamma}^{n}} \left(\left(Z_{\alpha,\beta} - \frac{Z_{\gamma}}{x}\right)^{+}\right)^{n}\right] = E\left[\frac{1}{Z_{\gamma}^{n}}\, \frac{1}{B(\alpha,\beta)} \int_{Z_{\gamma}/x}^{1} du\, u^{\alpha-1} \left(u - \frac{Z_{\gamma}}{x}\right)^{n} (1-u)^{\beta-1}\right]$$
and the rest of the computation is routine. □
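Formula (6.33) can be checked numerically; the following sketch (in Python, with parameter values $\beta = 2$, $\gamma = 3$, $n = 1$, $x = 2$ chosen by us purely for illustration) compares a Monte Carlo estimate of the left-hand side with a quadrature of the right-hand side:

```python
import numpy as np
from math import gamma as Gamma

rng = np.random.default_rng(1)
beta_p, gam_p, n, x = 2.0, 3.0, 1, 2.0      # illustration only; requires n < gamma

# Left-hand side of (6.33): E[((Z_{1,beta}/Z_gamma - 1/x)^+)^n], by Monte Carlo.
Z = rng.beta(1.0, beta_p, size=500_000)
G = rng.gamma(gam_p, size=500_000)
lhs = np.mean(np.maximum(Z / G - 1 / x, 0.0) ** n)

# Right-hand side: x^(gamma-n)/Gamma(gamma) * (int_0^1 e^{-xu} u^{gamma-n-1}
# (1-u)^{beta+n} du) * beta * B(n+1,beta), via the trapezoidal rule.
u = np.linspace(0.0, 1.0, 200_001)
f = np.exp(-x * u) * u ** (gam_p - n - 1) * (1 - u) ** (beta_p + n)
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(u))
B_fn = lambda p, q: Gamma(p) * Gamma(q) / Gamma(p + q)
rhs = x ** (gam_p - n) / Gamma(gam_p) * integral * beta_p * B_fn(n + 1, beta_p)
```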
We now consider some particularly interesting subcases of formula (6.27).

Theorem 6.4 Let $U$ be a uniform variable on $[0,1]$, and $\sigma \stackrel{\text{def}}{=} \inf\{t : B_t = 1\}$.
1) For any $\nu \in [0,1[$, we have:
$$A^{(\nu)}_{T_{2(1-\nu)}} \stackrel{(\text{law})}{=} \frac{U}{2Z_{1-\nu}}. \tag{6.34}$$
In particular, we have, taking $\nu = 0$ and $\nu = \frac12$, respectively:
$$\int_0^{T_1} ds\, \exp\left(\sqrt{2}\,B_s\right) \stackrel{(\text{law})}{=} \frac{U}{2Z_1} \qquad \text{and} \qquad \int_0^{T_1} ds\, \exp\left(2B_s + s\right) \stackrel{(\text{law})}{=} U\sigma \tag{6.35}$$
where, as usual, the variables which appear on the right-hand sides of (6.34) and (6.35) are assumed to be independent.
2) For any $\nu \ge 0$,
$$A^{(\nu)}_{T_{\nu+\frac12}} \stackrel{(\text{law})}{=} Z_{1,\nu+\frac12}\,\sigma. \tag{6.36}$$
Proof: The different statements follow immediately from formula (6.27), once one has remarked that:
$$Z_{1,1} \stackrel{(\text{law})}{=} U \qquad \text{and} \qquad \frac{1}{2Z_{1/2}} \stackrel{(\text{law})}{=} \frac{1}{N^2} \stackrel{(\text{law})}{=} \sigma,$$
where $N$ is a centered Gaussian variable with variance 1. □
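The two elementary identities used in this proof are easy to check by simulation. The sketch below (Python; the test value of $\lambda$ is arbitrary) compares the empirical Laplace transforms of $\frac{1}{2Z_{1/2}}$ and of $\frac{1}{N^2}$ with the known transform $E[\exp(-\lambda\sigma)] = \exp(-\sqrt{2\lambda})$ of the hitting time $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n = 1.7, 1_000_000

sigma_1 = 1.0 / (2.0 * rng.gamma(0.5, size=n))   # 1/(2 Z_{1/2})
sigma_2 = 1.0 / rng.normal(size=n) ** 2          # 1/N^2

lt_1 = np.mean(np.exp(-lam * sigma_1))
lt_2 = np.mean(np.exp(-lam * sigma_2))
target = np.exp(-np.sqrt(2 * lam))               # Laplace transform of sigma
```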
6.5 A discussion of some identities
(6.5.1) Formula (6.25)(ii) might also have been obtained by using the explicit expression of the semigroup of the square of a Bessel process (see, for example, [81] p. 411, Corollary (1.4)). With this approach, one obtains the following formula:
$$\frac{1}{z^{\gamma}}\, H_{\mu}\left(-2\gamma; \frac{1}{2z}\right) \equiv Q^{\delta_{\mu}}_{z}\left(\frac{1}{(\Sigma_{1/2})^{\gamma}}\right) = \exp(-z)\, \frac{\Gamma(\alpha)}{\Gamma(\beta)}\, \Phi(\alpha,\beta;z) \tag{6.37}$$
where $\alpha = -\gamma + 1 + \mu$, $\beta = 1 + \mu$, and $\Phi(\alpha,\beta;z)$ denotes the confluent hypergeometric function with parameters $\alpha$ and $\beta$.
With the help of the following classical relations (see Lebedev [63], p. 266–267):
$$\begin{cases}
\text{(i)} & \Phi(\alpha,\beta;z) = e^{z}\, \Phi(\beta-\alpha,\beta;-z),\\[4pt]
\text{(ii)} & \Phi(\beta-\alpha,\beta;-z) = \dfrac{\Gamma(\beta)}{\Gamma(\alpha)\Gamma(\beta-\alpha)} \displaystyle\int_0^1 dt\, e^{-zt}\, t^{(\beta-\alpha)-1}(1-t)^{\alpha-1},
\end{cases} \tag{6.38}$$
one may obtain formula (6.25)(ii) as a consequence of (6.37).
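Both relations in (6.38) admit a direct numerical check. The sketch below (Python, with a naive series implementation of $\Phi$ and parameter values of our choosing, taken with $\beta > \alpha \ge 1$ so that the integrand stays bounded) verifies (i) and (ii):

```python
import numpy as np
from math import gamma as Gamma, exp

def phi(a, b, z, terms=200):
    # Confluent hypergeometric series: Phi(a,b;z) = sum_k (a)_k / ((b)_k k!) z^k.
    s, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) / (b + k) * z / (k + 1)
        s += term
    return s

alpha, beta, z = 1.3, 2.7, 0.9            # illustration only

# (i) Kummer's transformation.
kummer_gap = phi(alpha, beta, z) - exp(z) * phi(beta - alpha, beta, -z)

# (ii) integral representation, with the e^{-zt} kernel of (6.38)(ii).
t = np.linspace(0.0, 1.0, 100_001)
f = np.exp(-z * t) * t ** (beta - alpha - 1) * (1 - t) ** (alpha - 1)
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))
lhs_ii = phi(beta - alpha, beta, -z)
rhs_ii = Gamma(beta) / (Gamma(alpha) * Gamma(beta - alpha)) * integral
```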
(6.5.2) The recurrence formula (6.22)(ii) may be written, after some elementary transformations, in the form:
$$x^{\alpha}\, H_{\mu}\left(2\alpha; \frac{1}{2x}\right) e^{(\lambda)}_{n} = \frac{n}{\lambda 2^{n-1}} \int_0^1 dw\, w^{-\alpha-1}(1-w)^{n-1}\, (xw)^{\beta}\, H_{\mu}\left(2\beta; \frac{1}{2wx}\right) \tag{6.39}$$
where, now, we take $\alpha = n + \frac{\nu-\mu}{2}$ and $\beta = \frac{\nu-\mu}{2} \equiv \alpha - n$.
Assuming that formula (6.25)(ii) is known, the equality (6.39) is nothing but an analytic translation of the well-known algebraic relation between beta variables
$$Z_{a,b+c} \stackrel{(\text{law})}{=} Z_{a,b}\, Z_{a+b,c} \tag{6.40}$$
for the values of the parameters $a = -\alpha$, $b = n$, $c = 1 + \mu + \beta$; here $Z_{p,q}$ denotes a beta variable with parameters $(p,q)$, and the two variables on the right-hand side of (6.40) are assumed to be independent. In terms of confluent hypergeometric functions, the equality (6.39) translates into the identity:
$$\Phi(\alpha,\gamma;z) = \frac{\Gamma(\gamma)}{\Gamma(\beta)\Gamma(\gamma-\beta)} \int_0^1 dt\, t^{\beta-1}(1-t)^{\gamma-\beta-1}\, \Phi(\alpha,\beta;zt) \tag{6.41}$$
for $\gamma > \beta$ (see Lebedev [63], p. 278).
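Identity (6.41), too, can be verified numerically. In the sketch below (Python, with arbitrarily chosen parameters satisfying $\gamma > \beta$, and with exponents kept nonnegative so that the trapezoidal rule applies comfortably) the two sides are compared:

```python
import numpy as np
from math import gamma as Gamma

def phi(a, b, z, terms=200):
    # Vectorized confluent hypergeometric series Phi(a,b;z).
    z = np.asarray(z, dtype=float)
    s, term = np.ones_like(z), np.ones_like(z)
    for k in range(terms):
        term = term * (a + k) / (b + k) * z / (k + 1)
        s = s + term
    return s

alpha, beta, gam, z = 0.8, 1.4, 3.1, 1.2   # illustration only; gam > beta

lhs = float(phi(alpha, gam, z))

t = np.linspace(0.0, 1.0, 100_001)
f = t ** (beta - 1) * (1 - t) ** (gam - beta - 1) * phi(alpha, beta, z * t)
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(t))
rhs = Gamma(gam) / (Gamma(beta) * Gamma(gam - beta)) * integral
```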
(6.5.3) The relations (6.39) and (6.41) may also be understood in terms of the semigroups $(Q^{\delta}_{t}; t \ge 0)$ and $(Q^{\delta'}_{t}; t \ge 0)$ of squares of Bessel processes with respective dimensions $\delta$ and $\delta'$, via the intertwining relationship:
$$Q^{\delta}_{t}\, M_{k',k} = M_{k',k}\, Q^{\delta'}_{t}, \tag{6.42}$$
where $0 < \delta' < \delta$, $k' = \frac{\delta'}{2}$, $k = \frac{\delta-\delta'}{2}$, and $M_{a,b}$ is the "multiplication" Markov kernel which is defined by:
$$M_{a,b}f(x) = E\left[f(x Z_{a,b})\right], \quad \text{where } f \in b(\mathcal{B}(\mathbb{R}_+)) \tag{6.43}$$
(for a discussion of such intertwining relations, which are closely linked with beta and gamma variables, see Yor [99]).
(6.5.4) Finally, we close up this discussion with a remark which relates the recurrence formula (6.22)(ii), in which we assume $n$ to be an integer, to the uniform integrability property of the martingales
$$M^{(p)}_{k} \stackrel{\text{def}}{=} R^{2+p}_{k} - 1 - c^{(\mu)}_{p} \int_0^{k} ds\, R^{p}_{s}, \quad k \ge 0, \ \text{under } P^{(\mu)}, \tag{6.44}$$
for $-2\mu < 2+p < 0$, with $c^{(\mu)}_{p} = (2+p)\left(\mu + \frac{2+p}{2}\right)$.
Once this uniform integrability property, under the above restrictions for $p$, has been obtained, one gets, using the fact that $R^{2+p}_{\infty} = 0$, the following relation:
$$E^{(\mu)}\left(R^{2+p}_{k}\right) = -c^{(\mu)}_{p}\, E^{(\mu)}\left(\int_{k}^{\infty} ds\, R^{p}_{s}\right),$$
and, using this relation recurrently, one obtains formula (6.22)(ii) with the following expression for the constant $\lambda e^{(\lambda)}_{n} \equiv E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right]$:
$$E\left[(A^{(\nu)}_{T_{\lambda}})^{n}\right] = \frac{(-1)^{n}\, n!}{\prod_{j=0}^{n-1} c^{(\mu)}_{2j+\nu-\mu}} \tag{6.45}$$
and an immediate computation shows that formulae (6.45) and (6.4) are identical.
Comments on Chapter 6
Whereas, in Chapter 5, some studies of a continuous determination of the logarithm along the planar Brownian trajectory were made, we are interested here in the study of the laws of exponential functionals of Brownian motion, or of Brownian motion with drift.
The origin of the present study comes from Mathematical finance: the so-called Asian options take into account the past history of the market, hence the introduction of the arithmetic mean of geometric Brownian motion. A thorough discussion of the motivation from Mathematical finance is made in Geman–Yor [47]. The results in paragraph 6.1 are taken from Yor [102].
The developments made in paragraphs 6.2 and 6.3 show that there are potential extensions to exponential functionals of a large class of Lévy processes. However, the limitation of the method lies in the fact that, if $(X_t)$ is a Lévy process, and $(R(u), u \ge 0)$ is defined by:
$$\exp(X_t) = R\left(\int_0^t ds\, \exp(X_s)\right),$$
then the semigroup of $R$ is only known explicitly in some particular cases; a class of examples has, however, been studied in joint work with Ph. Carmona and F. Petit [24].
In paragraph 6.4, a simple description of the law of the variable $A^{(\nu)}_{T_{\lambda}}$ is obtained; it would be nice to be able to explain the origin of the beta variable, resp. gamma variable, in formula (6.27) from, possibly, a path decomposition. In paragraph 6.5, a discussion of the previously obtained formulae in terms of confluent hypergeometric functions is presented.
Had we chosen, for our computations, the differential equations approach, which is closely related to the Feynman–Kac formula, these functions would have appeared immediately. However, throughout this chapter, and in related publications (Geman–Yor [46], [47], and Yor [102]), we have preferred to use some adequate change of probability, together with Girsanov's theorem.
The methodology used in this chapter helped to unify certain computations for Asian, Parisian and barrier options (see [105]).
The Springer Finance volume [104] gathers ten papers dealing, in a broad sense, with Asian options.
Chapter 7
Some asymptotic laws for
multidimensional BM
In this chapter, we first build upon the knowledge gained in Chapter 5 about the asymptotic windings of planar BM around one point, together with the Kallianpur–Robbins ergodic theorem for planar BM, to extend Spitzer's theorem:
$$\frac{2\theta_t}{\log t} \xrightarrow[t\to\infty]{(\text{law})} C_1$$
into a multidimensional result for the winding numbers $(\theta^{1}_{t}, \theta^{2}_{t}, \ldots, \theta^{n}_{t})$ of planar BM around $n$ points (all notations which may be alluded to, but not defined, in this chapter are found in Pitman–Yor [75]).
This study in the plane may be extended one step further by considering BM in $\mathbb{R}^3$ and seeking asymptotic laws for its winding numbers around a finite number of oriented straight lines, or even certain unbounded curves (Le Gall–Yor [62]).
There is, again, a more general setup for which such asymptotic laws may be obtained, and which allows us to unify the previous studies: we consider a finite number $(B^1, B^2, \ldots, B^m)$ of jointly Gaussian, "linearly correlated", planar Brownian motions, and the winding numbers of each of them around $z_j$, where $\{z_j; 1 \le j \le n\}$ is a finite set of points.
In the last paragraph, some asymptotic results for Gauss linking numbers relative to one BM, or two independent BM's, with values in $\mathbb{R}^3$ are presented.
7.1 Asymptotic windings of planar BM around n points
In Chapter 5, we presented Spitzer's result:
$$\frac{2\theta_t}{\log t} \xrightarrow[t\to\infty]{(\text{law})} C_1. \tag{7.1}$$
This may be extended as follows:
$$\frac{2}{\log t}\left(\theta^{r,-}_{t},\ \theta^{r,+}_{t},\ \int_0^t ds\, f(Z_s)\right) \xrightarrow[t\to\infty]{(\text{law})} \left(\int_0^{\sigma} d\gamma_s\, 1_{(\beta_s \le 0)},\ \int_0^{\sigma} d\gamma_s\, 1_{(\beta_s \ge 0)},\ \frac{\bar f}{2\pi}\, \ell_{\sigma}\right) \tag{7.2}$$
where $\theta^{r,-}_{t} = \int_0^t d\theta_s\, 1_{(|Z_s| \le r)}$, $\theta^{r,+}_{t} = \int_0^t d\theta_s\, 1_{(|Z_s| \ge r)}$, $f : \mathbb{C} \to \mathbb{R}$ is integrable with respect to Lebesgue measure, $\bar f = \iint_{\mathbb{C}} dx\, dy\, f(z)$, $\beta$ and $\gamma$ are two independent real Brownian motions starting from 0, $\sigma = \inf\{t : \beta_t = 1\}$, and $\ell_{\sigma}$ is the local time of $\beta$ at level 0, up to time $\sigma$ (for a proof of (7.2), see Messulam–Yor [65] and Pitman–Yor [75]).
The result (7.2) shows in particular that Spitzer's law (7.1) takes place jointly with the Kallianpur–Robbins law (which is the convergence in law of the third component on the left-hand side of (7.2) towards an exponential variable; see, e.g., subparagraph (4.3.2), case 3)).
Proposition 7.1 Let $\varphi(z) = (f(z); g(z))$ be a function from $\mathbb{C}$ to $\mathbb{R}^2$ such that:
$$\iint dx\, dy\, |\varphi(z)|^2 \equiv \iint dx\, dy\, \left((f(z))^2 + (g(z))^2\right) < \infty.$$
Then, the following quantity:
$$\frac{1}{\sqrt{\log t}} \int_0^t \varphi(Z_s)\, dZ_s \equiv \frac{1}{\sqrt{\log t}} \int_0^t \left(dX_s\, f(Z_s) + dY_s\, g(Z_s)\right)$$
converges in law, as $t \to \infty$, towards:
$$\sqrt{k_{\varphi}}\ \Gamma_{\frac12 \ell_{\sigma}},$$
where $k_{\varphi} \equiv \frac{1}{2\pi} \iint dx\, dy\, |\varphi(z)|^2$, and $(\Gamma_t, t \ge 0)$ is a 1-dimensional BM. This convergence in law takes place jointly with (7.2), and $\Gamma$, $\beta$, $\gamma$ are independent.
For this Proposition, see Messulam–Yor [65] and Kasahara–Kotani [55].
Proposition 7.1 gives an explanation for the absence of the radius $r$ on the right-hand side of (7.2); more precisely, the winding number in the annulus
$$\{z : r \le |z| \le R\}, \quad \text{for } 0 < r < R < \infty,$$
is, roughly, of the order of $\sqrt{\log t}$, and therefore:
$$\frac{1}{\log t}\, \theta^{r,R}_{t} \equiv \frac{1}{\log t} \int_0^t d\theta_s\, 1_{(r \le |Z_s| \le R)} \xrightarrow[t\to\infty]{(P)} 0.$$
We now consider $\theta^{1}_{t}, \theta^{2}_{t}, \ldots, \theta^{n}_{t}$, the winding numbers of $(Z_u, u \le t)$ around each of the points $z_1, z_2, \ldots, z_n$. Just as before, we separate $\theta^{j}_{t}$ into $\theta^{j,-}_{t}$ and $\theta^{j,+}_{t}$, where, for some $r_j > 0$, we define:
$$\theta^{j,-}_{t} = \int_0^t d\theta^{j}_{s}\, 1_{(|Z_s - z_j| \le r_j)} \qquad \text{and} \qquad \theta^{j,+}_{t} = \int_0^t d\theta^{j}_{s}\, 1_{(|Z_s - z_j| \ge r_j)}.$$
Another application of Proposition 7.1 entails:
$$\frac{1}{\log t}\left|\theta^{i,+}_{t} - \theta^{j,+}_{t}\right| \xrightarrow[t\to\infty]{(P)} 0, \tag{7.3}$$
so that it is now quite plausible, and indeed it is true, that:
$$\frac{2}{\log t}\left(\theta^{1}_{t}, \ldots, \theta^{n}_{t}\right) \xrightarrow[t\to\infty]{(\text{law})} \left(W^{-}_{1} + W^{+},\ W^{-}_{2} + W^{+},\ \ldots,\ W^{-}_{n} + W^{+}\right). \tag{7.4}$$
Moreover, the asymptotic random vector $\left(W^{-}_{1}, W^{-}_{2}, \ldots, W^{-}_{n}, W^{+}\right)$ may be represented as
$$\left(L_T(U)\, C_k\ (1 \le k \le n)\,;\ V_T\right) \tag{7.5}$$
where $(U_t, t \ge 0)$ is a reflecting BM, $T = \inf\{t : U_t = 1\}$, $L_T(U)$ is the local time of $U$ at 0, up to time $T$, $(V_t, t \ge 0)$ is a one-dimensional BM, starting from 0, which is independent of $U$, and $(C_k; 1 \le k \le n)$ are independent Cauchy variables with parameter 1, which are also independent of $U$ and $V$.
The representation (7.5) agrees with (7.2), as one may show that:
$$\left(\int_0^{\sigma} d\gamma_s\, 1_{(\beta_s \le 0)}\,;\ \int_0^{\sigma} d\gamma_s\, 1_{(\beta_s \ge 0)}\,;\ \frac12\, \ell_{\sigma}\right) \stackrel{(\text{law})}{=} \left(L_T(U)\, C_1\,;\ V_T\,;\ L_T(U)\right),$$
essentially by using, as we already did in paragraph 4.1, the well-known representation:
$$\beta^{+}_{t} = U\left(\int_0^t ds\, 1_{(\beta_s \ge 0)}\right), \quad t \ge 0.$$
From the formula for the characteristic function:
$$E\left[\exp i\left(\alpha V_T + \beta L_T(U)\, C_1\right)\right] = \left(\operatorname{ch}\alpha + |\beta|\, \frac{\operatorname{sh}\alpha}{\alpha}\right)^{-1}$$
(which may be derived directly, or considered as a particular case of the first formula in subparagraph (3.3.2)), it is easy to obtain the explicit multidimensional formula:
$$E\left[\exp\left(i \sum_{k=1}^{n} \alpha_k W_k\right)\right] = \left(\operatorname{ch}\left(\sum_{k=1}^{n} \alpha_k\right) + \frac{\sum_{k=1}^{n} |\alpha_k|}{\sum_{k=1}^{n} \alpha_k}\, \operatorname{sh}\left(\sum_{k=1}^{n} \alpha_k\right)\right)^{-1}, \tag{7.6}$$
where we have denoted $W_k$ for $W^{-}_{k} + W^{+}$.
Formula (7.6) shows clearly, if needed, that each of the $W_k$'s is a Cauchy variable with parameter 1, and that these Cauchy variables are stochastically dependent, in an interesting manner, which is precisely described by the representation (7.5).
The following asymptotic residue theorem may now be understood as a global
summary of the preceding results.
Theorem 7.1
1) Let $f$ be holomorphic in $\mathbb{C} \setminus \{z_1, \ldots, z_n\}$, and let $\Gamma$ be an open, relatively compact set such that $\{z_1, \ldots, z_n\} \subset \Gamma$. Then, one has:
$$\frac{2}{\log t} \int_0^t f(Z_s)\, 1_{\Gamma}(Z_s)\, dZ_s \xrightarrow[t\to\infty]{(\text{law})} \sum_{j=1}^{n} \operatorname{Res}(f, z_j)\left(L_T(U) + i W^{-}_{j}\right).$$
2) If, moreover, $f$ is holomorphic at infinity, and $\lim_{z\to\infty} f(z) = 0$, then:
$$\frac{2}{\log t} \int_0^t f(Z_s)\, dZ_s \xrightarrow[t\to\infty]{(\text{law})} \sum_{j=1}^{n} \operatorname{Res}(f, z_j)\left(L_T(U) + i W^{-}_{j}\right) + \operatorname{Res}(f, \infty)\left(L_T(U) - 1 + i W^{+}\right).$$
7.2 Windings of BM in $\mathbb{R}^3$
We define the winding number $\theta^{D}_{t}$ of $(B_u, u \le t)$, a 3-dimensional BM, around an oriented straight line $D$ as the winding number of the projection of $B$ on a plane orthogonal to $D$. Consequently, if $D_1, \ldots, D_n$ are parallel, the preceding results apply. If $D$ and $D'$ are not parallel, then:
$$\frac{1}{\log t}\, \theta^{D}_{t} \qquad \text{and} \qquad \frac{1}{\log t}\, \theta^{D'}_{t}$$
are asymptotically independent, since both winding numbers are obtained, to the order of $\log t$, by only considering the amount of winding made by $(B_u, u \le t)$ as it wanders within cones of revolution with axes $D$, resp. $D'$, the apertures of which we can choose as small as we wish. Therefore, these cones may be taken to be disjoint (except possibly for a common vertex). This assertion is an easy consequence of the following more precise statement:
consider $B \equiv (X, Y, Z)$ a Brownian motion in $\mathbb{R}^3$, such that $B_0 \notin D^* \equiv \{x = y = 0\}$. To a given Borel function $f : \mathbb{R}_+ \to \mathbb{R}_+$, we associate the volume of revolution:
$$\Gamma_f \equiv \left\{(x, y, z) : (x^2 + y^2)^{1/2} \le f(|z|)\right\},$$
and we define:
$$\theta^{f}_{t} = \int_0^t d\theta_s\, 1_{(B_s \in \Gamma_f)}.$$
We have the following
Theorem 7.2 If $\dfrac{\log f(\lambda)}{\log \lambda} \xrightarrow[\lambda\to\infty]{} a$, then:
$$\frac{2\theta^{f}_{t}}{\log t} \xrightarrow[t\to\infty]{(\text{law})} \int_0^{\sigma} d\gamma_u\, 1_{(\beta_u \le a S_u)}$$
where $\beta$ and $\gamma$ are two independent real-valued Brownian motions, $S_u = \sup_{s \le u} \beta_s$, and $\sigma = \inf\{u : \beta_u = 1\}$.
More generally, if $f_1, f_2, \ldots, f_k$ are $k$ functions such that:
$$\frac{\log f_j(\lambda)}{\log \lambda} \xrightarrow[\lambda\to\infty]{} a_j, \quad 1 \le j \le k,$$
then the above convergences in law for the $\theta^{f_j}$ take place jointly, and the joint limit law is that of the vector:
$$\left(\int_0^{\sigma} d\gamma_s\, 1_{(\beta_s \le a_j S_s)}\,;\ 1 \le j \le k\right).$$
Now, the preceding assertion about cones may be understood as a particular case of the following consequence of Theorem 7.2: if, with the notation of Theorem 7.2, a function $f$ satisfies $a \ge 1$, then:
$$\frac{1}{\log t}\left(\theta_t - \theta^{f}_{t}\right) \xrightarrow[t\to\infty]{(P)} 0.$$
With the help of Theorem 7.2, we are now able to present a global statement for asymptotic results relative to certain functionals of Brownian motion in $\mathbb{R}^3$, in the form of the following
General principle: The limiting laws of winding numbers and, more generally, of Brownian functionals in different directions of $\mathbb{R}^3$ take place jointly and independently, and, in any direction, they are given by the study in the plane, as described in paragraph 7.1 above.
7.3 Windings of independent planar BM’s around
each other
The origin of the study presented in this paragraph is a question of Mitchell Berger concerning solar flares (for more details, see Berger–Roberts [5]).
Let $Z^1, Z^2, \ldots, Z^n$ be $n$ independent planar BM's, starting from $n$ different points $z_1, \ldots, z_n$. Then, for each $i \ne j$, we have:
$$P\left(\exists t \ge 0,\ Z^{i}_{t} = Z^{j}_{t}\right) = 0,$$
since $B^{i,j}_{t} = \frac{1}{\sqrt{2}}\left(Z^{i}_{t} - Z^{j}_{t}\right)$, $t \ge 0$, is a planar BM starting from $\frac{1}{\sqrt{2}}(z_i - z_j) \ne 0$, and which, therefore, shall almost surely never visit 0.
Thus, we may define $(\theta^{i,j}_{t}, t \ge 0)$ as the winding number of $B^{i,j}$ around 0, and ask for the asymptotic law of these different winding numbers, indexed by $(i,j)$, with $1 \le i < j \le n$. This is a situation dual to the one considered in paragraph 7.1 in that, now, we consider $n$ BM's and one point, instead of one BM and $n$ points.
We remark that, taken all together, the processes $(B^{i,j}; 1 \le i < j \le n)$ are not independent. Nonetheless, we may prove the following result:
$$\frac{2}{\log t}\left(\theta^{i,j}_{t}\,;\ 1 \le i < j \le n\right) \xrightarrow[t\to\infty]{(\text{law})} \left(C_{i,j}\,;\ 1 \le i < j \le n\right), \tag{7.7}$$
where the $C_{i,j}$'s are independent Cauchy variables, with parameter 1.
The asymptotic result (7.7) shall appear in the next paragraph as a particular case.
7.4 A uniﬁed picture of windings
The aim of this paragraph is to present a general setup for which the studies made in paragraphs 7.1, 7.2, and 7.3 may be understood as particular cases. Such a unification is made possible by considering $B^1, B^2, \ldots, B^m$, $m$ planar Brownian motions with respect to the same filtration, which are, moreover, linearly correlated in the following sense: for any $p, q \le m$, there exists a correlation matrix $A^{p,q}$ between $B^p$ and $B^q$ such that, for every $\vec{u}, \vec{v} \in \mathbb{R}^2$,
$$\left(\vec{u}, B^{p}_{t}\right)\left(\vec{v}, B^{q}_{t}\right) - \left(\vec{u}, A^{p,q}\vec{v}\right) t$$
is a martingale. (Here, $(\vec{x}, \vec{y})$ denotes the scalar product in $\mathbb{R}^2$.) The asymptotic result (7.7) may now be generalized as follows.
Theorem 7.3 Let $\theta^{p}_{t}$ be the winding number of $(B^{p}_{s}, s \le t)$ around $z_0$, where $B^{p}_{0} \ne z_0$ for every $p$.
If, for all $(p,q)$ with $p \ne q$, the matrix $A^{p,q}$ is not an orthogonal matrix, then:
$$\frac{2}{\log t}\left(\theta^{p}_{t}\,;\ p \le m\right) \xrightarrow[t\to\infty]{(\text{law})} \left(C_p\,;\ p \le m\right),$$
where the variables $(C_p; p \le m)$ are independent Cauchy variables, with parameter 1.
The asymptotic result (7.7) appears indeed as a particular case of Theorem 7.3 since, if $B^{p}_{t} = \frac{1}{\sqrt{2}}\left(Z^{k}_{t} - Z^{\ell}_{t}\right)$ and $B^{q}_{t} = \frac{1}{\sqrt{2}}\left(Z^{k}_{t} - Z^{j}_{t}\right)$, for $k \ne \ell \ne j$, then $A^{p,q} = \frac12\, \mathrm{Id}$, which is not an orthogonal matrix! In the other cases, $A^{p,q} = 0$.
It is natural to consider the more general situation, for which some of the matrices $A^{p,q}$ may be orthogonal. If a correlation matrix $A^{p,q}$ is orthogonal, then $B^p$ is obtained from $B^q$ by an orthogonal transformation and, possibly, a translation. This allows us to consider the asymptotic problem in the following form: again, we may assume that none of the $A^{p,q}$'s is orthogonal, but we now have to study the winding numbers of $m$ linearly correlated Brownian motions around $n$ points $(z_1, \ldots, z_n)$. We write:
$$\left(\theta^{p}_{t} = \left(\theta^{p,z_j}_{t}\,;\ j \le n\right)\,;\ p \le m\right).$$
We may now state the following general result.
Theorem 7.4 We assume that, for all $p \ne q$, $A^{p,q}$ is not orthogonal. Then,
$$\frac{2}{\log t}\left(\theta^{p}_{t}\,;\ p \le m\right) \xrightarrow[t\to\infty]{(\text{law})} \left(\xi^{p}\,;\ p \le m\right),$$
where the random vectors $(\xi^{p})_{p \le m}$ are independent and, for every $p$, $\xi^{p} \stackrel{(\text{law})}{=} (W_1, \ldots, W_n)$, the law of which has been described precisely in paragraph 7.1.
We now give a sketch of the main arguments in the proof of Theorem 7.4. The elementary, but nonetheless crucial, fact on which the proof relies is presented in the following
Lemma 7.1 Let $G$ and $G'$ be two jointly Gaussian, centered, variables in $\mathbb{R}^2$, such that, for every $u \in \mathbb{R}^2$ and every $v \in \mathbb{R}^2$:
$$E\left[(u, G)^2\right] = |u|^2 = E\left[(u, G')^2\right], \qquad \text{and} \qquad E\left[(u, G)(v, G')\right] = (u, Av),$$
where $A$ is non-orthogonal. Then, $E\left[\dfrac{1}{|G|^{p}\,|G'|^{q}}\right] < \infty$, as soon as $p < \frac32$, $q < \frac32$.
Remark: This integrability result should be compared with the fact that $E\left[\frac{1}{|G|^2}\right] = \infty$, which has a lot to do with the normalization of $\int_0^t \frac{ds}{|Z_s|^2}$ by $(\log t)^2$ (and not $\log t$, as in the Kallianpur–Robbins limit law) to obtain a limit in law.
7.5 The asymptotic distribution of the self-linking number of BM in $\mathbb{R}^3$
Gauss has defined the linking number of two closed curves in $\mathbb{R}^3$ which do not intersect each other. We should like to consider such a number for two Brownian curves, but two independent BM's in $\mathbb{R}^3$ almost surely intersect each other. However, we can define some approximation to Gauss' linking number by excluding the pairs of instants $(u, v)$ at which the two BM's are closer than $\frac{1}{n}$ to each other, and then letting $n$ go to infinity. It may be expected, and we shall show that this is indeed the case, that the asymptotic study shall involve some quantity related to the intersections of the two BM's.
We remark that it is also possible to define such linking number approximations for only one BM in $\mathbb{R}^3$. Thus, we consider:
$$I_n(t) \stackrel{\text{def}}{=} \int_0^t \left(dB_u\,,\ \int_0^u dB_s\,,\ \frac{B_u - B_s}{|B_u - B_s|^3}\right) 1_{\left(|B_u - B_s| \ge \frac{1}{n}\right)}$$
and
$$J_n(s, t) \stackrel{\text{def}}{=} \int_0^s \left(dB_u\,,\ \int_0^t dB'_v\,,\ \frac{B_u - B'_v}{|B_u - B'_v|^3}\right) 1_{\left(|B_u - B'_v| \ge \frac{1}{n}\right)},$$
where $(a, b, c) = a \cdot (b \wedge c)$ denotes the mixed product of the three vectors $a, b, c$ in $\mathbb{R}^3$.
We have to explain the meaning given to each of the integrals:
a) in the case of $J_n$, there is no difficulty, since $B$ and $B'$ are independent;
b) in the case of $I_n$, we first fix $u$, and then:
– we either use the fact that $(B_s, s \le u)$ is a semimartingale in the original filtration of $B$, enlarged with the variable $B_u$;
– or, we define the integral with respect to $dB_s$ for every $x(= B_u)$ and, having defined these integrals measurably in $x$, we replace $x$ by $B_u$.
Both operations give the same quantity.
We now state the asymptotic result for $I_n$.
Theorem 7.5 We have:
$$\left(B_t\,,\ \frac{1}{n} I_n(t)\,;\ t \ge 0\right) \xrightarrow[n\to\infty]{(\text{law})} \left(B_t\,,\ c\beta_t\,;\ t \ge 0\right)$$
where $(\beta_t)$ is a real-valued BM independent of $B$, and $c$ is a universal constant.
To state the asymptotic result for $J_n$, we need to present the notion of intersection local times: these consist in the a.s. unique family $\left\{\alpha(x; s, t)\,;\ x \in \mathbb{R}^3,\ s, t \ge 0\right\}$ of occupation densities, which is jointly continuous in $(x, s, t)$, such that, for every Borel function $f : \mathbb{R}^3 \to \mathbb{R}_+$:
$$\int_0^s du \int_0^t dv\, f(B_u - B'_v) = \int_{\mathbb{R}^3} dx\, f(x)\, \alpha(x; s, t)$$
($\alpha(x; du\, dv)$ is a random measure supported by $\{(u, v) : B_u - B'_v = x\}$).
The asymptotic result for $J_n$ is the following
Theorem 7.6 We have:
$$\left(B_s, B'_t\,;\ \frac{1}{\sqrt{n}} J_n(s, t)\,;\ s, t \ge 0\right) \xrightarrow[n\to\infty]{(\text{law})} \left(B_s, B'_t\,;\ c\, \mathbb{B}_{\alpha}(s, t)\,;\ s, t \ge 0\right)$$
where $c$ is a universal constant and, conditionally on $(B, B')$, the process $(\mathbb{B}_{\alpha}(s, t); s, t \ge 0)$ is a centered Gaussian process with covariance:
$$E\left[\mathbb{B}_{\alpha}(s, t)\, \mathbb{B}_{\alpha}(s', t') \mid B, B'\right] = \alpha(0; s \wedge s', t \wedge t').$$
We now end up this chapter by giving a sketch of the proof of Theorem 7.5:
– in a first step, we consider, for fixed $u$, the sequence:
$$\theta_n(u) = \int_0^u dB_s \wedge \frac{B_u - B_s}{|B_u - B_s|^3}\, 1_{\left(|B_u - B_s| \ge \frac{1}{n}\right)}.$$
It is then easy to show that:
$$\frac{1}{n}\, \theta_n(u) \xrightarrow[n\to\infty]{(\text{law})} \theta_{\infty} \stackrel{\text{def}}{=} \int_0^{\infty} dB_s \wedge \frac{B_s}{|B_s|^3}\, 1_{(|B_s| \ge 1)} \tag{7.8}$$
and the limit variable $\theta_{\infty}$ has moments of all orders, as follows from Exercise 7.1 below;
– in a second step, we remark that, for $u < v$:
$$\frac{1}{n}\left(\theta_n(u), \theta_n(v)\right) \xrightarrow[n\to\infty]{(\text{law})} \left(\theta_{\infty}, \hat\theta_{\infty}\right), \tag{7.9}$$
where $\theta_{\infty}$ and $\hat\theta_{\infty}$ are two independent copies.
To prove this result, we remark that, in the stochastic integral which defines $\theta_n(u)$, only times $s$ which may be chosen arbitrarily close to $u$, and smaller than $u$, will make some contribution to the limit in law (7.8); the convergence in law (7.9) then follows from the independence of the increments of $B$.
– in the final step, we write:
$$\frac{1}{n} I_n(t) = \gamma^{(n)}\left(\frac{1}{n^2} \int_0^t ds\, |\theta_n(s)|^2\right),$$
where $(\gamma^{(n)}_u, u \ge 0)$ is a one-dimensional Brownian motion, and it is then easy to show, thanks to the results obtained in the second step, that:
$$\frac{1}{n^2} \int_0^t ds\, |\theta_n(s)|^2 \xrightarrow[n\to\infty]{L^2} c^2 t.$$
This convergence in $L^2$ follows from the convergence of the first, resp. second, moment of the left-hand side to $c^2 t$, resp. $(c^2 t)^2$, as a consequence of (7.9). This allows us to complete the proof of the Theorem.
Exercise 7.1 Let $(B_t, t \ge 0)$ be a 3-dimensional Brownian motion starting from 0.
1. Prove that:
$$\int_0^{\infty} \frac{dt}{|B_t|^4}\, 1_{(|B_t| \ge 1)} \stackrel{(\text{law})}{=} T^{*}_{1} \stackrel{\text{def}}{=} \inf\{u : |\beta_u| = 1\} \tag{7.10}$$
where $(\beta_u, u \ge 0)$ is a one-dimensional BM starting from 0.
Hint: Show that one may assume $|B_0| = 1$; then, prove the existence of a real-valued Brownian motion $(\gamma(u), u \ge 0)$ starting from 1, such that:
$$\frac{1}{|B_t|} = \gamma\left(\int_0^t \frac{du}{|B_u|^4}\right), \quad t \ge 0.$$
2. Conclude that $\theta_{\infty}$ (defined in (7.8)) admits moments of all orders.
Hint: Apply the Burkholder–Gundy inequalities.
Exercise 7.2 (We use the same notation as in Exercise 7.1.)
Prove the identity in law (7.10) as a consequence of the Ray–Knight theorem (RK2), a), presented in paragraph 3.1:
$$\left(\ell^{a}_{\infty}(R_3),\ a \ge 0\right) \stackrel{(\text{law})}{=} \left(R^{2}_{2}(a),\ a \ge 0\right)$$
and of the invariance by time-inversion of the law of $(R_2(a), a \ge 0)$.
Exercise 7.3 Let $(\tilde B_t, t \ge 0)$ be a 2-dimensional BM starting from 0, and $(\beta_t; t \ge 0)$ be a one-dimensional BM starting from 0.
1. Prove the following identities in law:
$$\left(\int_0^1 \frac{ds}{|\tilde B_s|}\right)^2 \overset{(\text{law})}{\underset{(a)}{=}} 4\left(\int_0^1 ds\, |\tilde B_s|^2\right)^{-1} \overset{(\text{law})}{\underset{(b)}{=}} \frac{4}{T^{*}_{1}} \overset{(\text{law})}{\underset{(c)}{=}} 4\left(\sup_{s \le 1} |\beta_s|\right)^{-2}... $$
$$\left(\int_0^1 \frac{ds}{|\tilde B_s|}\right)^2 \overset{(\text{law})}{\underset{(a)}{=}} 4\left(\int_0^1 ds\, |\tilde B_s|^2\right)^{-1} \overset{(\text{law})}{\underset{(b)}{=}} \frac{4}{T^{*}_{1}} \overset{(\text{law})}{\underset{(c)}{=}} 4\left(\sup_{s \le 1} |\beta_s|\right)^{2}.$$
In particular, one has:
$$\int_0^1 \frac{ds}{|\tilde B_s|} \stackrel{(\text{law})}{=} 2 \sup_{s \le 1} |\beta_s|. \tag{7.11}$$
Hints: To prove (a), represent $(|\tilde B_s|^2, s \ge 0)$ as another 2-dimensional Bessel process, time-changed; to prove (b), use, e.g., the Ray–Knight theorem on Brownian local times; to prove (c), use the scaling property.
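The key fact behind hints (b) and (c), namely that $\int_0^1 ds\,|\tilde B_s|^2$ has Laplace transform $(\operatorname{ch}\lambda)^{-1}$ in the variable $\lambda^2/2$ (so that it is distributed as $T^*_1$), can be checked by a crude simulation; the step counts and tolerance in this Python sketch are our own ad hoc choices:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n_paths, n_steps = 1.0, 3000, 800
dt = 1.0 / n_steps

# Riemann-sum approximation of int_0^1 |B~_s|^2 ds for a planar BM from 0.
q = np.zeros(n_paths)
for _ in range(2):                                   # two independent coordinates
    path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)
    q += np.sum(path ** 2, axis=1) * dt

mc = np.mean(np.exp(-0.5 * lam ** 2 * q))
target = 1.0 / np.cosh(lam)      # Laplace transform of T*_1 at lam^2/2
```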
2. Define $S = \inf\left\{u : \int_0^u \frac{ds}{|\tilde B_s|} > 1\right\}$. Deduce from (7.11) that:
$$S \stackrel{(\text{law})}{=} \frac{T^{*}_{1}}{4}$$
and, consequently:
$$E\left[\exp\left(-\frac{\lambda^2}{2}\, S\right)\right] = \left(\operatorname{ch}\left(\frac{\lambda}{2}\right)\right)^{-1}. \tag{7.12}$$
Comments on Chapter 7
The proofs of the results presented in paragraph 7.1 are found in Pitman–Yor ([75], [76]); those in paragraph 7.2 are found in Le Gall–Yor [61], and the results in paragraphs 7.3 and 7.4 are taken from Yor [100].
The asymptotic study of windings for random walks has been made by Belisle [3] (see also Belisle–Faraway [4]); there are also many publications on this topic in the physics literature (see, e.g., Rudnick–Hu [82]).
The proof of Theorem 7.5 is a good example of how the asymptotic study of some double integrals with respect to BM may, in a number of cases, be reduced to a careful study of simple integrals (see, e.g., the reference to Stroock–Varadhan–Papanicolaou in Chapter XIII of Revuz–Yor [81]).
Chapter 8
Some extensions of Paul L´evy’s arc
sine law for BM
In his 1939 paper "Sur certains processus stochastiques homogènes", Paul Lévy [64] proves that both Brownian variables:
$$A^{+} \stackrel{\text{def}}{=} \int_0^1 ds\, 1_{(B_s > 0)} \qquad \text{and} \qquad g = \sup\{t < 1 : B_t = 0\}$$
are arcsine distributed.
Over the years, these results have been extended in many directions; for a review of extensions developed up to 1988, see Bingham–Doney [20].
In this Chapter, we present further results, which extend Lévy's computation in the three following directions, in which $(B_t, t \ge 0)$ is replaced respectively by:
i) a symmetrized Bessel process with dimension $0 < \delta < 2$,
ii) a Walsh Brownian motion, that is, a process $(X_t, t \ge 0)$ in the plane which takes values in a finite number of rays ($\equiv$ half-lines), all meeting at 0, and such that $(X_t, t \ge 0)$ behaves, while away from 0, as a Brownian motion and, when it meets 0, chooses a ray with equal probability,
iii) a singularly perturbed reflecting Brownian motion, that is, $(|B_t| - \mu \ell_t, t \ge 0)$, where $(\ell_t, t \ge 0)$ is the local time of $(B_t, t \ge 0)$ at 0.
An a posteriori justification of these extensions may be that the results which one obtains in each of these directions are particularly simple, this being due partly to the fact that, for each of the models, the strong Markov property and the scaling property are available; for example, in the setup of (iii), we rely upon the strong Markov property of the 2-dimensional process $\{|B_t|, \ell_t\,;\ t \ge 0\}$.
More importantly, these three models may be considered as testing grounds for the use and development of the main methods which have been successful in recent years in reproving Lévy's arc sine law, that is, essentially: excursion theory and stochastic calculus (more precisely, Tanaka's formula).
Finally, one remarkable feature of this study needs to be underlined: although the local time at 0 of, say, Brownian motion does not appear a priori in the problem studied here, that is, determining the law of $A^{+}$, it in fact plays an essential role, and a main purpose of this chapter is to clarify this role.
8.1 Some notation
Throughout this chapter, we shall use the following notation: $Z_a$, resp. $Z_{a,b}$, denotes a gamma variable with parameter $a$, resp. a beta variable with parameters $(a, b)$, so that
$$P(Z_a \in dt) = \frac{dt\, t^{a-1} e^{-t}}{\Gamma(a)} \quad (t > 0)$$
and
$$P(Z_{a,b} \in dt) = \frac{dt\, t^{a-1} (1-t)^{b-1}}{B(a, b)} \quad (0 < t < 1).$$
We recall the wellknown algebraic relations between the laws of the beta and
gamma variables:
Z
a
(law)
= Z
a,b
Z
a+b
and Z
a,b+c
(law)
= Z
a,b
Z
a+b,c
,
where, in both identities in law, the righthand sides feature independent
r.v.’s. We shall also use the notation T
(α)
, with 0 < α < 1, to denote a
onesided stable (α) random variable, the law of which may be characterized
by:
E
_
exp(−λT
(α)
)
¸
= exp(−λ
α
) , λ ≥ 0 .
(It may be worth noting that 2T
(1/2)
, and not T
(1/2)
, is distributed as the
ﬁrst hitting time of 1 by a onedimensional BM starting from 0).
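The beta-gamma algebra above is easy to test numerically. The following sketch (plain NumPy; the seed, sample size, parameter values and tolerances are assumptions of the illustration, not part of the text) compares the first two moments of $Z_a$ with those of the product $Z_{a,b} Z_{a+b}$ of independent factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, b = 0.7, 1.3

# Z_a versus the product Z_{a,b} * Z_{a+b} of independent factors
z_a = rng.gamma(a, size=n)
prod = rng.beta(a, b, size=n) * rng.gamma(a + b, size=n)

# first two empirical moments; the exact values are a and a(a+1)
m1_direct, m1_prod = z_a.mean(), prod.mean()
m2_direct, m2_prod = (z_a**2).mean(), (prod**2).mean()
```

The same pattern checks the second relation $Z_{a,b+c} \overset{(\mathrm{law})}{=} Z_{a,b} Z_{a+b,c}$ with `rng.beta` in both factors.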
8.2 A list of results

(8.2.1) As was already recalled, Lévy (1939) proved that $A^+$ and $g$ are arc sine distributed, that is: they have the same law as $\dfrac{N^2}{N^2 + N'^2}$, where $N$ and $N'$ are two centered, independent Gaussian variables with variance 1; or, since $N^2 \overset{(\mathrm{law})}{=} \dfrac{1}{2T^{(1/2)}}$, we see that $A^+$ and $g$ are distributed as:
$$\frac{T^{(1/2)}}{T^{(1/2)} + \widehat{T}^{(1/2)}} \qquad (8.1)$$
where $T^{(1/2)}$ and $\widehat{T}^{(1/2)}$ are two independent copies. In fact, in the next paragraph, we shall present some proofs which exhibit $A^+$ in the form (8.1).

For the moment, here is a quick proof that $g$ is arc sine distributed: let $u \le 1$; then $(g < u) = (d_u > 1)$, where:
$$d_u = \inf\{t \ge u;\ B_t = 0\} \equiv u + \inf\{v > 0 : B_{v+u} - B_u = -B_u\} \overset{(\mathrm{law})}{=} u + B_u^2\,\sigma \overset{(\mathrm{law})}{=} u\left(1 + B_1^2\,\sigma\right),$$
with $\sigma = \inf\{t : \beta_t = 1\}$, and $\beta$ a BM, independent of $B_u$. Hence, we have shown:
$$\frac{1}{g} \overset{(\mathrm{law})}{=} 1 + B_1^2\,\sigma \overset{(\mathrm{law})}{=} 1 + \frac{B_1^2}{\beta_1^2} \overset{(\mathrm{law})}{=} 1 + \frac{N^2}{N'^2},$$
which gives the result.
(8.2.2) If we replace Brownian motion by a symmetrized Bessel process of dimension $0 < \delta = 2(1-\alpha) < 2$, then the quantities $A^+_{(\alpha)}$ and $g_{(\alpha)}$, the meaning of which is self-evident, no longer have a common distribution if $\alpha \ne \frac12$. In fact, Dynkin [38] showed that: $g_{(\alpha)} \overset{(\mathrm{law})}{=} Z_{\alpha,1-\alpha}$, whereas Barlow-Pitman-Yor [2] proved that:
$$A^+_{(\alpha)} \overset{(\mathrm{law})}{=} \frac{T^{(\alpha)}}{T^{(\alpha)} + \widehat{T}^{(\alpha)}}, \qquad (8.2)$$
where $T^{(\alpha)}$ and $\widehat{T}^{(\alpha)}$ are two independent copies.
(8.2.3) In [2], it was also shown that Lévy's result for $A^+$ admits the following multivariate extension: if we consider (as described informally in the introduction to this chapter) a Walsh Brownian motion $(Z_s, s \ge 0)$ living on $n$ rays $(I_i;\ 1 \le i \le n)$, and we denote:
$$A^{(i)} = \int_0^1 ds\ 1_{(Z_s \in I_i)},$$
then:
$$\left(A^{(1)}, \dots, A^{(n)}\right) \overset{(\mathrm{law})}{=} \left(\frac{T^{(i)}}{\sum_{j=1}^n T^{(j)}};\ 1 \le i \le n\right) \qquad (8.3)$$
where $(T^{(i)};\ 1 \le i \le n)$ are $n$ independent one-sided stable$\left(\frac12\right)$ random variables. Furthermore, it is possible to give a common extension of (8.2) and (8.3), by considering a process $(Z_s, s \ge 0)$ which, on each of the rays, behaves like a Bessel process with dimension $\delta = 2(1-\alpha)$, and, when arriving at 0, chooses its ray with equal probability. Then, using a self-evident notation, we have:
$$\left(A^{(1)}_{(\alpha)}, \dots, A^{(n)}_{(\alpha)}\right) \overset{(\mathrm{law})}{=} \left(\frac{T^{(i)}_{(\alpha)}}{\sum_{j=1}^n T^{(j)}_{(\alpha)}};\ 1 \le i \le n\right). \qquad (8.4)$$
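The right-hand side of (8.3) is easy to sample, since, with the normalization of Section 8.1, a one-sided stable(1/2) variable may be represented as $1/(2N^2)$. The sketch below (seed, sample size and tolerances are assumptions of the illustration) checks, for $n = 3$ rays, the symmetry in the mean and one explicit marginal probability: since $T^{(2)} + T^{(3)} \overset{(\mathrm{law})}{=} 4\,T^{(1/2)}$, the first coordinate has the law of $M^2/(M^2 + 4N^2)$, whence $P(A^{(1)} \le \tfrac12) = \tfrac{2}{\pi}\arctan 2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rays, n_samples = 3, 200_000

# one-sided stable(1/2) variables, normalized as in Section 8.1: T^(1/2) ~ 1/(2 N^2)
T = 1.0 / (2.0 * rng.standard_normal((n_samples, n_rays)) ** 2)
occupation = T / T.sum(axis=1, keepdims=True)        # candidate law of (A^(1), ..., A^(n))

mean_first = occupation[:, 0].mean()                 # 1/n by symmetry
# marginal: A^(1) has the law of M^2/(M^2 + 4 N^2), so P(A^(1) <= 1/2) = (2/pi) arctan 2
p_half = (occupation[:, 0] <= 0.5).mean()
target = 2.0 / np.pi * np.arctan(2.0)
```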
(8.2.4) However, in this chapter, we shall be more concerned with yet another family of extensions of Lévy's results, which have been obtained by F. Petit in her thesis [70].

Theorem 8.1 For any $\mu > 0$, we have
$$\int_0^1 ds\ 1_{(|B_s| \le \mu \ell_s)} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\mu}}, \qquad (8.5)$$
and
$$\int_0^g ds\ 1_{(|B_s| \le \mu \ell_s)} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac12+\frac{1}{2\mu}}. \qquad (8.6)$$
In the sequel, we shall refer to the identities in law (8.5) and (8.6) as F. Petit's first, resp. second, result.
With the help of Lévy's identity in law:
$$(S_t - B_t,\ S_t;\ t \ge 0) \overset{(\mathrm{law})}{=} (|B_t|,\ \ell_t;\ t \ge 0),$$
and Pitman's theorem ([71]):
$$(2S_t - B_t,\ S_t;\ t \ge 0) \overset{(\mathrm{law})}{=} (R_t,\ J_t;\ t \ge 0),$$
where $(R_t, t \ge 0)$ is a 3-dimensional Bessel process starting from 0, and $J_t = \inf_{s \ge t} R_s$, we may translate, for example, (8.5) in the following terms:
$$\int_0^1 ds\ 1_{(B_s \ge (1-\mu)S_s)} \overset{(\mathrm{law})}{=} \int_0^1 ds\ 1_{(R_s \le (1+\mu)J_s)} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\mu}} \qquad (8.7)$$
which shows, in particular, that for $\mu = 1$, the result agrees with Lévy's arc sine law.
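Formula (8.7) is convenient for simulation, since it involves only the Brownian path and its one-sided supremum. The sketch below estimates the mean of $\int_0^1 1_{(B_s \ge (1-\mu)S_s)}\,ds$ and compares it with $E\left[Z_{\frac12,\frac{1}{2\mu}}\right] = \mu/(1+\mu)$; the discretization scheme, the choice $\mu = 2$, and the tolerance are assumptions of the illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, mu = 2_000, 2_000, 2.0

B = np.cumsum(rng.standard_normal((n_paths, n_steps)), axis=1) / np.sqrt(n_steps)
S = np.maximum.accumulate(B, axis=1)                 # one-sided running supremum

# occupation time of {B_s >= (1 - mu) S_s} on [0, 1], cf. (8.7)
occ = (B >= (1.0 - mu) * S).mean(axis=1)
mean_occ = occ.mean()                                # target: mu/(1 + mu) = 2/3
```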
Using the representation of the standard Brownian bridge $(b(u), u \le 1)$ as:
$$\left(\frac{1}{\sqrt{g}}\, B_{gu},\ u \le 1\right)$$
and the independence of this process from $g$, we may deduce from (8.6) the following

Corollary 8.1.1 Let $(b(u), u \le 1)$ be a standard Brownian bridge, and $(\lambda_u, u \le 1)$ be its local time at 0. Then, we have
$$\int_0^1 ds\ 1_{(|b(s)| \le \mu \lambda_s)} \overset{(\mathrm{law})}{=} Z_{1,\frac{1}{2\mu}} \overset{(\mathrm{law})}{=} 1 - U^{2\mu}, \qquad (8.8)$$
where $U$ is uniformly distributed on $[0,1]$.
In particular, in the case $\mu = \frac12$, we obtain:
$$\int_0^1 ds\ 1_{\left(|b(s)| \le \frac12 \lambda_s\right)} \overset{(\mathrm{law})}{=} \int_0^1 ds\ 1_{\left(|b(s)| + \frac12 \lambda(s) \le \frac12 \lambda(1)\right)} \overset{(\mathrm{law})}{=} U \qquad (8.9)$$
Using now the following identity in law (8.10) between the Brownian bridge $(b(u), u \le 1)$ and the Brownian meander
$$\left(m(u) \equiv \frac{1}{\sqrt{1-g}}\,\left|B_{g+u(1-g)}\right|,\ u \le 1\right):$$
$$\left(m(s),\ j(s) \equiv \inf_{s \le u \le 1} m(u);\ s \le 1\right) \overset{(\mathrm{law})}{=} \left(|b(s)| + \lambda(s),\ \lambda(s);\ s \le 1\right) \qquad (8.10)$$
which is found in Biane-Yor [18], and Bertoin-Pitman [11], we obtain the

Corollary 8.1.2 Let $(m(s), s \le 1)$ denote the Brownian meander. Then we have:
$$\int_0^1 ds\ 1_{(m(s) + (\mu-1)j_s \le \mu m_1)} \overset{(\mathrm{law})}{=} Z_{1,\frac{1}{2\mu}}$$
In particular, we obtain, by taking $\mu = \frac12$ and $\mu = 1$:
$$\int_0^1 ds\ 1_{\left(m(s) - \frac12 j(s) \le \frac12 m(1)\right)} \overset{(\mathrm{law})}{=} U,$$
and
$$P\left\{\int_0^1 ds\ 1_{(m(s) \ge m(1))} \in dt\right\} = \frac{dt}{2\sqrt{t}}.$$
Proof: Together with the identity in law (8.10), we use the invariance of the law of the Brownian bridge under time reversal, i.e.:
$$(b(u), u \le 1) \overset{(\mathrm{law})}{=} (b(1-u), u \le 1).$$
We then obtain:
$$\int_0^1 ds\ 1_{(|b(s)| \le \mu\lambda_s)} \overset{(\mathrm{law})}{=} \int_0^1 ds\ 1_{(|b(s)| \le \mu(\lambda_1 - \lambda_s))} \overset{(\mathrm{law})}{=} \int_0^1 ds\ 1_{(m(s) + (\mu-1)j_s < \mu m_1)},$$
and the desired results follow from Corollary 8.1.1. □
8.3 A discussion of methods - Some proofs

(8.3.1) We first show how to prove
$$A^+_1 \equiv \int_0^1 ds\ 1_{(B_s > 0)} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac12}$$
by using jointly the scaling property of Brownian motion, and excursion theory. Set $A^+_t = \int_0^t ds\ 1_{(B_s > 0)}$ and $A^-_t = \int_0^t ds\ 1_{(B_s < 0)}$ $(t \ge 0)$.
We have, for every $t$ and $u$: $(A^+_t > u) = (t > \alpha^+_u)$, where $\alpha^+_u \overset{\mathrm{def}}{=} \inf\{s;\ A^+_s > u\}$. We now deduce, by scaling, that:
$$A^+_1 \overset{(\mathrm{law})}{=} \frac{1}{\alpha^+_1} \qquad (8.11)$$
From the trivial identity $t = A^+_t + A^-_t$, it follows that $\alpha^+_u = u + A^-_{\alpha^+_u}$; then, we write: $A^-_{\alpha^+_u} = A^-_{\tau(\ell_{\alpha^+_u})}$, with $\tau(s) = \inf\{v;\ \ell_v > s\}$.
Now, it is a consequence of excursion theory that the two processes $(A^+_{\tau(t)}, t \ge 0)$ and $(A^-_{\tau(t)}, t \ge 0)$ are independent; hence, the two processes $(A^-_{\tau(t)}, t \ge 0)$ and $(\ell_{\alpha^+_u}, u \ge 0)$ are independent; consequently, we now deduce from the previous equalities that, for fixed $u$:
$$\alpha^+_u \overset{(\mathrm{law})}{=} u + \left(\ell_{\alpha^+_u}\right)^2 A^-_{\tau(1)} \overset{(\mathrm{law})}{=} u\left(1 + \frac{A^-_{\tau(1)}}{A^+_{\tau(1)}}\right), \quad \text{again by scaling} \qquad (8.12)$$
Putting together (8.11) and (8.12), we obtain:
$$A^+_1 \overset{(\mathrm{law})}{=} \frac{A^+_{\tau(1)}}{A^+_{\tau(1)} + A^-_{\tau(1)}}.$$
Now, from (RK1) in paragraph 3.1, we know that:
$$\left(A^+_{\tau(1)},\ A^-_{\tau(1)}\right) \overset{(\mathrm{law})}{=} \frac12\left(T^{(1/2)},\ \widehat{T}^{(1/2)}\right),$$
from which we obtain the representation (8.1) for $A^+_1$, hence:
$$A^+_1 \overset{(\mathrm{law})}{=} Z_{1/2,1/2}.$$
(8.3.2) It may also be interesting to avoid using the scaling property, and only depend on the excursion theory arguments, so that the method may be used for diffusions which do not possess the scaling property; see some extensions of the arc sine law to real-valued diffusions by A. Truman and D. Williams ([86], [87]).
Recall that, from the master formulae of excursion theory (see Proposition 3.2), we have, for every continuous, positive, additive functional $(A_t, t \ge 0)$:
$$E_0\left[\exp(-\lambda A_{S_\theta})\right] = \frac{\theta^2}{2}\int_0^\infty ds\ E_0\left[\exp\left(-\lambda A_{\tau_s} - \frac{\theta^2}{2}\tau_s\right)\right]\ \int_{-\infty}^\infty da\ E_a\left[\exp\left(-\lambda A_{T_0} - \frac{\theta^2}{2}T_0\right)\right].$$
Applying this formula to $A = A^+$, we remark that:
- on one hand,
$$E_0\left[\exp\left(-\lambda A^+_{\tau_s} - \frac{\theta^2}{2}\tau_s\right)\right] = E_0\left[\exp-\left(\lambda + \frac{\theta^2}{2}\right)A^+_{\tau_s}\right]\, E_0\left[\exp\left(-\frac{\theta^2}{2}A^-_{\tau_s}\right)\right] = \exp\left(-\frac{s}{2}\sqrt{2\lambda+\theta^2}\right)\exp\left(-\frac{s\theta}{2}\right);$$
- on the other hand:
$$E_a\left[\exp\left(-\lambda A^+_{T_0} - \frac{\theta^2}{2}T_0\right)\right] = \begin{cases} E_a\left[\exp-\left(\lambda+\frac{\theta^2}{2}\right)T_0\right] = \exp\left(-a\sqrt{2\lambda+\theta^2}\right), & \text{if } a > 0; \\[4pt] E_a\left[\exp\left(-\frac{\theta^2}{2}T_0\right)\right] = \exp(-|a|\theta), & \text{if } a < 0. \end{cases}$$
Consequently, we obtain:
$$E_0\left[\exp\left(-\lambda A^+_{S_\theta}\right)\right] = \frac{\theta^2}{\sqrt{2\lambda+\theta^2}+\theta}\left(\frac{1}{\sqrt{2\lambda+\theta^2}} + \frac{1}{\theta}\right),$$
from which, at least in theory, one is able to deduce, by inversion of the Laplace transform in $\theta$, that:
$$A^+_t \overset{(\mathrm{law})}{=} t\, Z_{\frac12,\frac12}.$$
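The closed form for $E_0[\exp(-\lambda A^+_{S_\theta})]$ can be checked directly against the arc sine law: since $S_\theta$ is an independent exponential time with parameter $\theta^2/2$, one has $E[\exp(-\lambda S_\theta Z)] = \int_0^1 \frac{\theta^2}{\theta^2 + 2\lambda z}\,\frac{dz}{\pi\sqrt{z(1-z)}}$ for $Z \overset{(\mathrm{law})}{=} Z_{1/2,1/2}$, which we evaluate by a midpoint rule. The parameter pairs and tolerance below are assumptions of this sketch.

```python
import math

def closed_form(lam, theta):
    # E_0[exp(-lam * A^+_{S_theta})] from the excursion-theory computation
    root = math.sqrt(2.0 * lam + theta**2)
    return theta**2 / (root + theta) * (1.0 / root + 1.0 / theta)

def from_arcsine_law(lam, theta, n=200_000):
    # E[exp(-lam * S_theta * Z)] with Z ~ Beta(1/2, 1/2): integrate S_theta out,
    # then integrate the arc sine density numerically (midpoint rule)
    total = 0.0
    for k in range(n):
        z = (k + 0.5) / n
        density = 1.0 / (math.pi * math.sqrt(z * (1.0 - z)))
        total += theta**2 / (theta**2 + 2.0 * lam * z) * density / n
    return total

check = [(closed_form(l, t), from_arcsine_law(l, t)) for (l, t) in [(0.5, 1.0), (2.0, 1.3)]]
```

(Both sides simplify to $\theta/\sqrt{2\lambda+\theta^2}$, which the comparison confirms numerically.)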
Remark: This approach is the excursion theory variant of the Feynman-Kac approach; see, for example, Itô-McKean ([50], p. 57-58).
(8.3.3) It is not difficult, with the help of the master formulae of excursion theory (see Proposition 3.2), to enlarge the scope of the above method and, using the scaling property again, Barlow-Pitman-Yor [2] arrived at the following identity in law: for every $t > 0$ and $s > 0$,
$$\frac{1}{2\ell_t^2}\left(A^+_t,\ A^-_t\right) \overset{(\mathrm{law})}{=} \frac{1}{2s^2}\left(A^+_{\tau_s},\ A^-_{\tau_s}\right)$$
(by scaling, the left-hand side is equal in law to $\frac{1}{2\ell^2_{S_\theta}}\left(A^+_{S_\theta}, A^-_{S_\theta}\right)$, for every $\theta > 0$, which enables one to use the master formula of excursion theory).
Hence, we have:
$$\frac{1}{2\ell_t^2}\left(A^+_t,\ A^-_t\right) \overset{(\mathrm{law})}{=} \frac14\left(T^{(\frac12)},\ \widehat{T}^{(\frac12)}\right),$$
which implies (8.1): $A^+_1 \overset{(\mathrm{law})}{=} \dfrac{T^{(\frac12)}}{T^{(\frac12)} + \widehat{T}^{(\frac12)}}$, i.e., $A^+_1$ is arc sine distributed.
Pitman-Yor [77] give a more complete explanation of the fact that
$$\frac{1}{2\ell_T^2}\left(A^+_T,\ A^-_T\right)$$
has a distribution which does not depend on $T$, for a certain class of random variables; besides the case $T = t$, another interesting example is:
$$T \equiv \alpha^+_t \equiv \inf\{u : A^+_u > t\}.$$
By analogy, F. Petit's original results (Theorem 8.1 above), together with the arithmetic of beta-gamma laws, led us to think that the four pairs of random variables:
$$\frac{1}{(\ell^\mu_t)^2}\left(A^{\mu,-}_t,\ A^{\mu,+}_t\right); \qquad (8.13)$$
$$\frac{1}{t^2}\left(A^{\mu,-}_{\tau^\mu_t},\ A^{\mu,+}_{\tau^\mu_t}\right); \qquad (8.14)$$
$$\frac{1}{\left(\ell^\mu_{\alpha^{\mu,-}_s}\right)^2}\left(s,\ A^{\mu,+}_{\alpha^{\mu,-}_s}\right); \qquad (8.15)$$
$$\frac18\left(\frac{1}{Z_{\frac{1}{2\mu}}},\ \frac{1}{Z_{\frac12}}\right) \qquad (8.16)$$
may have the same distribution. This is indeed true, as we shall see partly in the sequel. (Here, and in the following, $(\ell^\mu_t, t \ge 0)$ denotes the (semimartingale) local time at 0 of $(|B_t| - \mu\ell_t,\ t \ge 0)$, $(\tau^\mu_t, t \ge 0)$ is the inverse of $(\ell^\mu_t, t \ge 0)$, and $(\alpha^{\mu,-}_t, t \ge 0)$ is the inverse of $(A^{\mu,-}_t, t \ge 0)$.) It may be worth, to give a better understanding of the identity in law between (8.15) and (8.16), to present this identity in the following equivalent way:

Theorem 8.2 1) The identity in law
$$\frac18\left(\frac{\left(\ell^\mu_{\alpha_1}\right)^2}{\alpha_1 - 1},\ \left(\ell^\mu_{\alpha_1}\right)^2\right) \overset{(\mathrm{law})}{=} \left(Z_{\frac12},\ Z_{\frac{1}{2\mu}}\right) \qquad (8.17)$$
holds. (Here, we have written, for clarity, $\alpha_1$ for $\alpha^{\mu,-}_1$.)
2) Consequently, we have:
$$A^{\mu,-}_1 \overset{(\mathrm{law})}{=} \frac{1}{\alpha^{\mu,-}_1} \overset{(\mathrm{law})}{=} \frac{1}{1 + \frac{Z_{1/2\mu}}{Z_{1/2}}} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\mu}}.$$

Comment: The second statement of this Theorem is deduced immediately from the first one, using the scaling property; it gives an explanation of F. Petit's first result.
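The last step of statement 2) is the classical beta-gamma fact $1/(1+Z_b/Z_a) = Z_a/(Z_a+Z_b) \overset{(\mathrm{law})}{=} Z_{a,b}$ for independent gamma variables. The sketch below (seed, sample size, the choice $\mu = 2$, and tolerances are assumptions of the illustration) compares the first two moments of both representations.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300_000
a, b = 0.5, 0.25          # a = 1/2 and b = 1/(2 mu) with mu = 2

# 1/(1 + Z_b/Z_a) for independent gamma variables, versus a Beta(a, b) sample
ratio = 1.0 / (1.0 + rng.gamma(b, size=n) / rng.gamma(a, size=n))
beta = rng.beta(a, b, size=n)

m1 = a / (a + b)                                   # exact Beta(a, b) mean
m2 = a * (a + 1.0) / ((a + b) * (a + b + 1.0))     # exact second moment
```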
(8.3.4) To end our discussion of methods, we now mention that Knight's theorem about continuous orthogonal martingales may replace the excursion argument to prove the independence of the processes $(A^+_{\tau_t}, t \ge 0)$ and $(A^-_{\tau_t}, t \ge 0)$. To see this, we remark that Tanaka's formula and Knight's theorem, used jointly, imply:
$$B^+_t = -\beta^{(+)}_{A^+_t} + \frac12 \ell_t \qquad \text{and} \qquad B^-_t = -\beta^{(-)}_{A^-_t} + \frac12 \ell_t,$$
with $\beta^{(+)}$ and $\beta^{(-)}$ two independent BM's, and:
$$A^\pm_{\tau_t} = \inf\left\{u : \beta^{(\pm)}_u = \frac{t}{2}\right\}.$$
In the last paragraph 8.5 of this Chapter, we shall see how to modify this argument when $(B_t)$ is replaced by $(|B_t| - \mu\ell_t,\ t \ge 0)$.
8.4 An excursion theory approach to F. Petit's results

(8.4.1) As we remarked in paragraph 8.3, F. Petit's first result:
$$A^{\mu,-}_1 \overset{\mathrm{def}}{=} \int_0^1 ds\ 1_{(|B_s| \le \mu\ell_s)} \overset{(\mathrm{law})}{=} Z_{1/2,1/2\mu} \qquad (8.5)$$
is equivalent to (see formula (8.11)):
$$\frac{1}{\alpha^{\mu,-}_1} \overset{(\mathrm{law})}{=} Z_{1/2,1/2\mu} \qquad (8.18)$$
To simplify notation, we shall simply write, in the sequel, $A$, $Z$, and $\alpha$ for, respectively, $A^{\mu,-}_1$, $Z_{1/2,1/2\mu}$ and $\alpha^{\mu,-}_1$.
To prove (8.5) or, equivalently, (8.18), we shall compute the following quantity:
$$E\left[\exp\left(-\frac{\lambda^2}{2}A_{S_\theta}\right)\varphi\left(|B_{S_\theta}|,\ \ell_{S_\theta}\right)\right] \equiv \frac{\theta^2}{2}\int_0^\infty dt\ e^{-\frac{\theta^2 t}{2}}\ E\left[e^{-\frac{\lambda^2}{2}A_t}\ \varphi\left(|B_t|,\ \ell_t\right)\right] \qquad (8.19)$$
where $\varphi : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ is a Borel function, and $S_\theta$ denotes an independent exponential time with parameter $\frac{\theta^2}{2}$.
We are able to compute this quantity thanks to the extensions of the RK theorems obtained in Chapter 3 (to be more precise, see Theorem 3.4, and the computations made in subparagraph (3.3.2)); therefore, in some sense, we may envision F. Petit's results as consequences of the extended RK theorems.
However, before we embark precisely on this computation, it may be of some interest to play a little more with the scaling property; this leads us, at no cost, to the following reinforcement of (8.5).

Theorem 8.3 Let $Z \overset{(\mathrm{law})}{=} Z_{1/2,1/2\mu}$. Then, we have the following:
1) $P(|B_1| \le \mu\ell_1) = E(Z) = \dfrac{\mu}{1+\mu}$
2) Conditioned on the set $\Gamma_\mu \equiv (|B_1| \le \mu\ell_1)$, the variable $A_1$ is distributed as $Z_{3/2,1/2\mu}$.
3) Conditioned on $\Gamma^c_\mu$, $A_1$ is distributed as $Z_{1/2,1+1/2\mu}$.
4) $A_1$ is distributed as $Z$.
These four statements may also be presented in the equivalent form:
$$A_1 \overset{(\mathrm{law})}{=} Z \qquad \text{and} \qquad P(\Gamma_\mu \mid A_1 = a) = a.$$
Remark: In fact, using the identity in law between (8.13) and (8.15), it is not difficult to prove the more general identity:
$$P\left(\Gamma_\mu \mid A_1 = a,\ \ell^\mu_1\right) = a$$
Proof of the Theorem:
i) These four statements may be deduced in an elementary way from the two identities:
$$E\left[\Gamma_\mu;\ \exp(-\alpha A_1)\right] = E\left[Z\exp(-\alpha Z)\right] \qquad (8.20)$$
and
$$E\left[\exp(-\alpha A_1)\right] = E\left[\exp(-\alpha Z)\right] \qquad (8.21)$$
which are valid for every $\alpha \ge 0$.
The identity (8.21) is a rephrasing of F. Petit's result (8.5), so that, for the moment, it remains to prove (8.20).
ii) For this purpose, we shall consider the quantity (8.19), in which we take $\varphi(x,\ell) = 1_{(x \le \mu\ell)}$. We then obtain:
$$E\left[\exp\left(-\frac{\lambda^2}{2}A_{S_\theta}\right)1_{\left(|B_{S_\theta}| \le \mu\ell_{S_\theta}\right)}\right] = \frac{\theta^2}{2}\, E\left[\int_0^\infty dA_t\ \exp-\frac12\left(\theta^2 t + \lambda^2 A_t\right)\right]$$
$$= \frac{\theta^2}{2}\, E\left[\int_0^\infty ds\ \exp-\frac12\left(\theta^2 \alpha_s + \lambda^2 s\right)\right], \quad \text{by time changing}$$
$$= \frac{\theta^2}{2}\, E\left[\int_0^\infty ds\ \exp-\frac12\left(\theta^2 s\,\alpha_1 + \lambda^2 s\right)\right], \quad \text{by scaling}$$
$$= \frac{\theta^2}{2}\, E\left[\int_0^\infty ds\ \exp-\frac12\left(\frac{\theta^2 s}{A_1} + \lambda^2 s\right)\right], \quad \text{by scaling again}$$
$$= \frac{\theta^2}{2}\, E\left[A_1\int_0^\infty du\ \exp-\frac12\left(\theta^2 u + \lambda^2 u A_1\right)\right], \quad \text{by the change of variables } s = A_1 u$$
$$= E\left[A_1 \exp\left(-\frac{\lambda^2}{2}S_\theta A_1\right)\right].$$
Comparing now the two extreme terms of this sequence of equalities, we obtain, by using the scaling property once again:
$$E\left[\exp\left(-\frac{\lambda^2}{2}S_\theta A_1\right)1_{(|B_1| \le \mu\ell_1)}\right] = E\left[A_1\exp\left(-\frac{\lambda^2}{2}S_\theta A_1\right)\right] \qquad (8.22)$$
Since this relation is true for every $\theta > 0$, we have obtained, thanks to the injectivity of the Laplace transform, that, for every $\alpha \ge 0$:
$$E\left[\exp(-\alpha A_1)1_{(|B_1| \le \mu\ell_1)}\right] = E\left[A_1\exp(-\alpha A_1)\right], \qquad (8.23)$$
which proves (8.20), assuming F. Petit's result (8.5). □
Remarks:
1) The first statement of the theorem, namely:
$$P(|B_1| \le \mu\ell_1) = \frac{\mu}{1+\mu}$$
is an elementary consequence of the fact that, conditionally on $R \overset{\mathrm{def}}{=} |B_1| + \ell_1$, $|B_1|$ is uniformly distributed on $[0,R]$; hence, if $U$ denotes a uniform r.v. on $[0,1]$, independent from $R$, we have:
$$P(|B_1| \le \mu\ell_1) = P(RU \le \mu R(1-U)) = P(U \le \mu(1-U)) = \frac{\mu}{1+\mu}.$$
2) Perhaps we should emphasize the fact that the derivation of (8.23) in part (ii) of the proof of the Theorem used only the scaling property; in particular, for this result, no knowledge of F. Petit's results is needed whatsoever.
(8.4.2) We now engage properly in the proof of (8.5), by computing explicitly the quantity
$$\gamma_{\theta,\lambda} \overset{\mathrm{def}}{=} E\left[\exp\left(-\frac{\lambda^2}{2}A_{S_\theta}\right)1_{\left(|B_{S_\theta}| \le \mu\ell_{S_\theta}\right)}\right]. \qquad (8.24)$$
We first recall that, as a consequence of the master formulae of excursion theory, we have, if we write:
$$A_t = \check{A}_t + \hat{A}_t, \quad \text{where: } \check{A}_t = A_{g_t} \quad \text{and} \quad \hat{A}_t = A_t - A_{g_t},$$
$$E\left[\exp\left(-\frac{\lambda^2}{2}\check{A}_{S_\theta}\right)\,\Big|\ \ell_{S_\theta} = s,\ B_{S_\theta} = a\right] = E_0\left[\exp-\left(\frac{\lambda^2}{2}A_{\tau_s} + \frac{\theta^2}{2}\tau_s\right)\right] e^{\theta s} \qquad (8.25)$$
and
$$E\left[\exp\left(-\frac{\lambda^2}{2}\hat{A}_{S_\theta}\right)\,\Big|\ \ell_{S_\theta} = s,\ B_{S_\theta} = a\right] = E_a\left[\exp-\left(\frac{\lambda^2}{2}A^s_{T_0} + \frac{\theta^2}{2}T_0\right)\right] e^{\theta a} \qquad (8.26)$$
Moreover, from the extensions of the RK theorems obtained in Chapter 3 (see Theorem 3.4, and the computations made in subparagraph (3.3.2)), we have, denoting $b = \mu s$, $\nu = \sqrt{\lambda^2 + \theta^2}$, and $\xi = \frac{\theta}{\nu}$:
$$E_0\left[\exp-\left(\frac{\lambda^2}{2}A_{\tau_s} + \frac{\theta^2}{2}\tau_s\right)\right] = \left(\mathrm{ch}(\nu b) + \xi\,\mathrm{sh}(\nu b)\right)^{-1/\mu} \qquad (8.27)$$
$$E_a\left[\exp-\left(\frac{\lambda^2}{2}A^s_{T_0} + \frac{\theta^2}{2}T_0\right)\right] = \frac{\mathrm{ch}(\nu(b-a)) + \xi\,\mathrm{sh}(\nu(b-a))}{\mathrm{ch}(\nu b) + \xi\,\mathrm{sh}(\nu b)}, \quad \text{for } 0 \le a \le b \qquad (8.28)$$
Consequently, using moreover the fact that $\ell_{S_\theta}$ and $B_{S_\theta}$ are independent and distributed as:
$$P(\ell_{S_\theta} \in ds) = \theta e^{-\theta s}\,ds \qquad \text{and} \qquad P(B_{S_\theta} \in da) = \frac{\theta}{2}\, e^{-\theta|a|}\,da,$$
we obtain:
$$\gamma_{\theta,\lambda} = \frac{\theta^2}{\mu}\int_0^\infty db\int_0^b da\ \frac{\mathrm{ch}(\nu(b-a)) + \xi\,\mathrm{sh}(\nu(b-a))}{\left(\mathrm{ch}(\nu b) + \xi\,\mathrm{sh}(\nu b)\right)^{1+\frac{1}{\mu}}}$$
Integrating with respect to $da$, and making the change of variables $x = \nu b$, we obtain:
$$\gamma_{\theta,\lambda} = \frac{\xi^2}{\mu}\int_0^\infty dx\ \frac{\mathrm{sh}\,x + \xi(\mathrm{ch}\,x - 1)}{\left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{1+\frac{1}{\mu}}} = \xi^2\left(1 - \frac{\xi}{\mu}\int_0^\infty \frac{dx}{\left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{1+\frac{1}{\mu}}}\right).$$
On the other hand, we know, from (8.22), that the quantity $\gamma_{\theta,\lambda}$ is equal to:
$$E\left[A_1\exp\left(-\frac{\lambda^2}{2}S_\theta A_1\right)\right] = \xi^2\, E\left[\frac{A_1}{A_1 + \xi^2(1-A_1)}\right]$$
(the expression on the right-hand side is obtained after some elementary change of variables). Hence, the above computations have led us to the formula:
$$E\left[\frac{A_1}{A_1 + \xi^2(1-A_1)}\right] = 1 - \frac{\xi}{\mu}\int_0^\infty \frac{dx}{\left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{1+\frac{1}{\mu}}},$$
or, equivalently:
$$E\left[\frac{1-A_1}{A_1 + \xi^2(1-A_1)}\right] = \frac{1}{\xi\mu}\int_0^\infty \frac{dx}{\left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{1+\frac{1}{\mu}}} \qquad (8.29)$$
We now make the change of variables $u = (\mathrm{th}\,x)^2$, to obtain:
$$h(\xi) \overset{\mathrm{def}}{=} E\left[\frac{\xi(1-A_1)}{A_1 + \xi^2(1-A_1)}\right] = \frac{1}{2\mu}\int_0^1 du\ (1-u)^{\frac{1}{2\mu}-\frac12}\, u^{-\frac12}\left(1 + \xi\sqrt{u}\right)^{-\left(1+\frac{1}{\mu}\right)}$$
We define $r = \frac12 + \frac{1}{2\mu}$, and we use the elementary identity:
$$\frac{1}{(1+x)^p} = E\left[\exp(-x Z_p)\right]$$
to obtain:
$$h(\xi) = \frac{1}{2\mu}\int_0^1 du\ u^{-\frac12}(1-u)^{r-1}\, E\left[\exp\left(-\xi\sqrt{u}\,Z_{2r}\right)\right] = c_\mu\, E\left[\exp\left(-\xi Z_{2r}\sqrt{Z_{\frac12,r}}\right)\right] \qquad (8.30)$$
where $c_\mu$ is a constant depending only on $\mu$, and $Z_{2r}$ and $Z_{\frac12,r}$ are independent. The following lemma shall play a crucial role in the sequel of the proof.
Lemma 8.1 The following identities in law hold:
$$Z^2_{2r} \overset{(\mathrm{law})}{=} 4\, Z_{r+\frac12}\, Z_r \qquad (8.31)$$
$$Z_{2r}\sqrt{Z_{\frac12,r}} \overset{(\mathrm{law})}{=} 2\sqrt{Z_{\frac12}\, Z_r} \overset{(\mathrm{law})}{=} |N|\sqrt{2Z_r}. \qquad (8.32)$$
As usual, in all these identities in law, the pairs of random variables featured in the different products are independent.
Proof of the Lemma:
1) The duplication formula for the gamma function:
$$\sqrt{\pi}\,\Gamma(2z) = 2^{2z-1}\,\Gamma\left(z + \frac12\right)\Gamma(z)$$
implies, since for any $k > 0$ we have:
$$E\left[Z^k_p\right] = \frac{\Gamma(p+k)}{\Gamma(p)},$$
that:
$$E\left[Z^{2k}_{2r}\right] = 4^k\, E\left[Z^k_{r+\frac12}\right] E\left[Z^k_r\right].$$
2) The first identity in law in (8.32) follows from (8.31), and the fact that $Z_{\frac12,r}\, Z_{\frac12+r} \overset{(\mathrm{law})}{=} Z_{1/2}$; the second identity in law is immediate, since $|N| \overset{(\mathrm{law})}{=} \sqrt{2Z_{\frac12}}$. □
Apart from the identities in law (8.31) and (8.32), we shall also use the much easier identity in law:
$$C|N| \overset{(\mathrm{law})}{=} N|C|, \qquad (8.33)$$
where $C$ is a standard Cauchy variable, independent of $N$. We take up again the expression in (8.30), and we obtain:
$$E\left[\exp\left(-\xi Z_{2r}\sqrt{Z_{\frac12,r}}\right)\right] = E\left[\exp\left(-\xi|N|\sqrt{2Z_r}\right)\right], \quad \text{by (8.32)}$$
$$= E\left[\exp\left(i\xi C|N|\sqrt{2Z_r}\right)\right] = E\left[\exp\left(i\xi N|C|\sqrt{2Z_r}\right)\right], \quad \text{by (8.33)}$$
$$= E\left[\exp\left(-\xi^2 C^2 Z_r\right)\right] = E\left[\frac{1}{\left(1+\xi^2 C^2\right)^r}\right].$$
Thus, we obtain, with a constant $c_\mu$ which changes from line to line:
$$h(\xi) = c_\mu\int_0^\infty \frac{du}{(1+u^2)\left(1+\xi^2 u^2\right)^r} = c_\mu\int_0^\infty \frac{dv}{\sqrt{v}\,(1+v)\left(1+v\xi^2\right)^r} = c_\mu\,\xi\int_0^1 \frac{dz\ z^{-1/2}(1-z)^{r-1/2}}{z + \xi^2(1-z)},$$
with the change of variables $v\xi^2 = \dfrac{z}{1-z}$.
Hence, going back to the definition of $h(\xi)$, we remark that we have obtained the identity:
$$E\left[\frac{1-A_1}{A_1 + \xi^2(1-A_1)}\right] = E\left[\frac{1-Z}{Z + \xi^2(1-Z)}\right],$$
where $Z \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\mu}}$, which proves the desired result:
$$A_1 \overset{(\mathrm{law})}{=} Z.$$
(8.4.3) We now prove the second result of F. Petit, i.e.:
$$\check{A}_1 \equiv A_{g_1} \overset{(\mathrm{law})}{=} Z_{\frac12,\frac12+\frac{1}{2\mu}}$$
Using the identities (8.25) and (8.27), we are able to compute the following quantity:
$$\check\gamma_{\theta,\lambda} \overset{\mathrm{def}}{=} E\left[\exp\left(-\frac{\lambda^2}{2}\check{A}_{S_\theta}\right)\right].$$
We obtain:
$$\check\gamma_{\theta,\lambda} = \frac{\theta}{\mu}\int_0^\infty db\ \left(\mathrm{ch}(\nu b) + \xi\,\mathrm{sh}(\nu b)\right)^{-\frac{1}{\mu}} = \frac{\xi}{\mu}\int_0^\infty dx\ \left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{-\frac{1}{\mu}}$$
On the other hand, from the scaling property of $(\check{A}_t, t \ge 0)$, we also obtain:
$$\check\gamma_{\theta,\lambda} = E\left[\frac{\theta^2}{\theta^2 + \lambda^2\check{A}_1}\right] = E\left[\frac{\xi^2}{\check{A}_1 + \xi^2(1-\check{A}_1)}\right].$$
Hence, we have obtained the following formula:
$$E\left[\frac{1}{\check{A}_1 + \xi^2(1-\check{A}_1)}\right] = \frac{1}{\xi\mu}\int_0^\infty \frac{dx}{\left(\mathrm{ch}\,x + \xi\,\mathrm{sh}\,x\right)^{\frac{1}{\mu}}} \qquad (8.34)$$
In order to prove the desired result, we shall now use formula (8.29), which will enable us to make almost no computation.
In the case $\mu < 1$, we can define $\tilde\mu > 0$ by the formula $\dfrac{1}{\mu} = 1 + \dfrac{1}{\tilde\mu}$, and we write $\tilde{A}_1$ for $A^{\tilde\mu,-}_1$.
Hence, comparing formulae (8.29) and (8.34), we obtain:
$$E\left[\frac{1}{\check{A}_1 + \xi^2(1-\check{A}_1)}\right] = \frac{\tilde\mu}{\mu}\, E\left[\frac{1-\tilde{A}_1}{\tilde{A}_1 + \xi^2(1-\tilde{A}_1)}\right] \qquad (8.35)$$
Now, since $\tilde{A}_1 \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\tilde\mu}}$, it is easily deduced from (8.35) that:
$$\check{A}_1 \overset{(\mathrm{law})}{=} Z_{\frac12,\frac{1}{2\tilde\mu}+1},$$
and since $\dfrac{1}{2\tilde\mu} + 1 = \dfrac12 + \dfrac{1}{2\mu}$, we have shown, at least in the case $\mu < 1$:
$$\check{A}_1 \overset{(\mathrm{law})}{=} Z_{\frac12,\frac12+\frac{1}{2\mu}},$$
which is F. Petit's second result.
(8.4.4) With a very small amount of extra computation, it is possible to extend P. Lévy's result even further, by considering, for given $\alpha, \beta > 0$:
$$A_t \equiv A^{\alpha,\beta}_t = \int_0^t ds\ 1_{(-\alpha\ell_s \le B_s \le \beta\ell_s)}.$$
Indeed, taking up the above computation again, F. Petit has obtained the following extension of formula (8.29):
$$E\left[\frac{2\xi(1-A_1)}{A_1 + \xi^2(1-A_1)}\right] = \int_0^\infty ds\ \frac{\varphi_\alpha(s) + \varphi_\beta(s)}{\left(\varphi_\alpha(s)\right)^{1+\frac{1}{2\alpha}}\left(\varphi_\beta(s)\right)^{1+\frac{1}{2\beta}}}$$
where we denote by $\varphi_a(s)$ the following quantity (which depends on $\xi$):
$$\varphi_a(s) \equiv \varphi^{(\xi)}_a(s) = \mathrm{ch}(as) + \xi\,\mathrm{sh}(as).$$
8.5 A stochastic calculus approach to F. Petit's results

(8.5.1) The main aim of this paragraph is to show, with the help of some arguments taken from stochastic calculus, the independence of the process $\left(A^{\mu,-}_{\tau^\mu(t)},\ t \ge 0\right)$ and of the random variable $\ell^\mu_{\alpha^{\mu,+}_1}$, which, following the method discussed in subparagraph (8.3.1), allows us to reduce the computation of the law of $A^{\mu,-}_1$ to that of the pair $\left(A^{\mu,-}_{\tau^\mu_1},\ A^{\mu,+}_{\tau^\mu_1}\right)$, already presented in (8.14).
Since $\mu$ is fixed throughout the paragraph, we shall use the following simplified notation:
$$X_t = |B_t| - \mu\ell_t, \qquad X^+_t = \sup(X_t, 0), \qquad X^-_t = \sup(-X_t, 0), \qquad A^\pm_t = \int_0^t ds\ 1_{(\pm X_s > 0)},$$
$(\ell^\mu_t, t \ge 0)$ denotes the local time at 0 of $X$, and $(\tau^\mu_t, t \ge 0)$ its right-continuous inverse.
(8.5.2) We shall now adapt the stochastic calculus method developed by Pitman-Yor [75] to prove Lévy's arc sine law.
Tanaka's formula implies:
$$X^+_t = M^{(+)}_t + \frac12\ell^\mu_t, \quad \text{where } M^{(+)}_t = \int_0^t 1_{(X_s > 0)}\,\mathrm{sgn}(B_s)\,dB_s \qquad (8.36)$$
$$X^-_t = -M^{(-)}_t - (1-\mu)\ell_t + \frac12\ell^\mu_t, \quad \text{where } M^{(-)}_t = \int_0^t 1_{(X_s < 0)}\,\mathrm{sgn}(B_s)\,dB_s$$
Now, Knight's theorem about continuous orthogonal martingales allows us to write:
$$M^{(+)}_t = \delta^{(+)}\left(A^+_t\right) \qquad \text{and} \qquad M^{(-)}_t = \delta^{(-)}\left(A^-_t\right), \qquad t \ge 0,$$
where $\delta^{(+)}$ and $\delta^{(-)}$ denote two independent Brownian motions, and the rest of the proof shall rely in an essential manner upon this independence result.
Using the time changes $\alpha^+$ and $\alpha^-$, the relations (8.36) become:
$$\text{(i)}\quad X^+_{\alpha^+_t} = \delta^{(+)}_t + \frac12\ell^\mu_{\alpha^+_t}; \qquad \text{(ii)}\quad X^-_{\alpha^-_t} = -\delta^{(-)}_t - (1-\mu)\ell_{\alpha^-_t} + \frac12\ell^\mu_{\alpha^-_t} \qquad (8.37)$$
The identity (i) in (8.37) may be interpreted as Skorokhod's reflection equation for the process $\left(X^+_{\alpha^+_t}, t \ge 0\right)$; hence, it follows that, just as in the case $\mu = 1$:
$$\left(X^+_{\alpha^+_t}, t \ge 0\right) \text{ is a reflecting Brownian motion, and } \frac12\ell^\mu_{\alpha^+_t} = \sup_{s \le t}\left(-\delta^{(+)}_s\right) \qquad (8.38)$$
In particular, we have: $\frac12\ell^\mu_{\alpha^+_1} \overset{(\mathrm{law})}{=} |N|$.
We now consider the identity (ii) in (8.37), which we write as:
$$X^-_{\alpha^-_t} = -Y^\mu_t + \frac12\ell^\mu_{\alpha^-_t} \qquad (8.39)$$
where:
$$Y^\mu_t \overset{\mathrm{def}}{=} \delta^{(-)}_t + (1-\mu)\ell_{\alpha^-_t} \qquad (8.40)$$
and we deduce from (8.39) that:
$$\frac12\ell^\mu_{\alpha^-_t} = \sup_{s \le t}\left(Y^\mu_s\right) \overset{\mathrm{def}}{=} S^\mu_t \qquad (8.41)$$
Hence, we have: $A^-_{\tau^\mu_{2t}} = \inf\left\{s : \frac12\ell^\mu_{\alpha^-_s} > t\right\} = \inf\{s : Y^\mu_s > t\}$, from (8.39), and, in order to obtain the desired independence result, it suffices to prove that the process $(Y^\mu_t, t \ge 0)$ is measurable with respect to $\left(\delta^{(-)}_t, t \ge 0\right)$.
(8.5.3) To prove this measurability result, we shall first express the process $\left(\ell_{\alpha^-_t}, t \ge 0\right)$ in terms of $\delta^{(-)}$ and $Y^\mu$, which will enable us to transform the identity (8.40) into an equation, where $Y^\mu$ is the unknown process, and $\delta^{(-)}$ is the driving Brownian motion.
Indeed, if we consider again the identity (ii) in (8.37), we see that:
$$-\left(|B_{\alpha^-_t}| - \mu\ell_{\alpha^-_t}\right) = -X_{\alpha^-_t} = -\delta^{(-)}_t - (1-\mu)\ell_{\alpha^-_t} + \frac12\ell^\mu_{\alpha^-_t},$$
which gives:
$$|B_{\alpha^-_t}| = \delta^{(-)}_t - \frac12\ell^\mu_{\alpha^-_t} + \ell_{\alpha^-_t}, \qquad t \ge 0.$$
Again, this equality may be considered as an example of Skorokhod's reflection equation for the process $\left(|B_{\alpha^-_t}|, t \ge 0\right)$. Therefrom, we deduce:
$$\ell_{\alpha^-_t} = \sup_{s \le t}\left(-\delta^{(-)}_s + \frac12\ell^\mu_{\alpha^-_s}\right) = \sup_{s \le t}\left(-\delta^{(-)}_s + S^\mu_s\right), \quad \text{using (8.41)}.$$
Bringing the latter expression of $\ell_{\alpha^-_t}$ into (8.40), we obtain:
$$Y^\mu_t = \delta^{(-)}_t + (1-\mu)\sup_{s \le t}\left(-\delta^{(-)}_s + S^\mu_s\right) \qquad (8.42)$$
Now, in the case $\mu \in\ ]0,2[$, the fixed point theorem allows us to show that this equation admits one and only one solution $(Y^\mu_t, t \ge 0)$, and that this solution is adapted with respect to $\left(\delta^{(-)}_t, t \ge 0\right)$.
Indeed, the application:
$$\Phi : \Omega^*_{0,T} \equiv \{f \in C([0,T]; \mathbb{R});\ f(0) = 0\} \longrightarrow \Omega^*_{0,T}$$
$$g \longmapsto \left(\delta^{(-)}_t + (1-\mu)\sup_{s \le t}\left(-\delta^{(-)}_s + \sup_{u \le s}(g(u))\right);\ t \le T\right)$$
is Lipschitz, with coefficient $K = |1-\mu|$, i.e.:
$$\sup_{t \le T}\left|\Phi(g)(t) - \Phi(h)(t)\right| \le K \sup_{t \le T}|g(t) - h(t)|.$$
Hence, if $\mu \in\ ]0,2[$, $\Phi$ is strictly contracting, and Picard's iteration procedure converges, thereby proving at the same time the uniqueness of the solution of (8.42) and its measurability with respect to $\delta^{(-)}$.
Remark: The difficulty of solving (8.42) when $\mu$ does not belong to the interval $]0,2[$ was already noticed in Le Gall-Yor [62], and partly dealt with there.
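On a discrete time grid, Picard's iteration for equation (8.42) can be written in a few lines. The driving path, grid size, the choice $\mu = 1/2$, and the iteration count below are assumptions of this sketch, which only illustrates the contraction (successive gaps decay at least geometrically with ratio $K = |1-\mu| = 1/2$).

```python
import numpy as np

rng = np.random.default_rng(6)
n_steps, mu = 2_000, 0.5

# discretized driving Brownian motion delta^(-) on [0, 1], started at 0
delta = np.concatenate(([0.0], np.cumsum(rng.standard_normal(n_steps - 1)))) / np.sqrt(n_steps)

def Phi(g):
    # Phi(g)_t = delta_t + (1 - mu) sup_{s<=t} ( -delta_s + sup_{u<=s} g(u) ), cf. (8.42)
    return delta + (1.0 - mu) * np.maximum.accumulate(-delta + np.maximum.accumulate(g))

Y, gaps = np.zeros_like(delta), []
for _ in range(40):
    Y_next = Phi(Y)
    gaps.append(np.max(np.abs(Y_next - Y)))
    Y = Y_next
# Y now approximates the discrete fixed point; gaps records the sup-norm steps
```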
Comments on Chapter 8

A number of extensions of Lévy's arc sine law for Brownian motion have been presented in this chapter, with particular emphasis on F. Petit's results (8.5) and (8.6). The paragraph 8.4, and particularly the subparagraph (8.4.2), is an attempt to explain the results (8.5) and (8.6), using the extension of the Ray-Knight theorems proved in Chapter 3 for the process $\left(|B_t| - \mu\ell_t;\ t \le \tau_s\right)$. In the next Chapter, another explanation of (8.5) is presented.
Chapter 9
Further results about reflecting Brownian motion perturbed by its local time at 0

In this Chapter, we study more properties of the process $\left(X_t \equiv |B_t| - \mu\ell_t,\ t \ge 0\right)$ which played a central role in the preceding Chapter 8. One of the main aims of the present Chapter is to give a clear proof of the identity in law between the pairs (8.14) and (8.16), that is:
$$\frac{1}{t^2}\left(A^{\mu,-}_{\tau^\mu_t},\ A^{\mu,+}_{\tau^\mu_t}\right) \overset{(\mathrm{law})}{=} \frac18\left(\frac{1}{Z_{\frac{1}{2\mu}}},\ \frac{1}{Z_{\frac12}}\right) \qquad (9.1)$$
(recall that $(\tau^\mu_t, t \ge 0)$ is the inverse of the local time $(\ell^\mu_t, t \ge 0)$ at 0 for the process $X$).
9.1 A Ray-Knight theorem for the local times of X, up to $\tau^\mu_s$, and some consequences

The main result of this Chapter is the following

Theorem 9.1 Fix $s > 0$. The processes $\left(\ell^x_{\tau^\mu_s}(X);\ x \ge 0\right)$ and $\left(\ell^{-x}_{\tau^\mu_s}(X);\ x \ge 0\right)$ are independent, and their respective laws are $Q^0_s$ and $Q^{2-\frac{2}{\mu}}_s$, where $Q^{2-\frac{2}{\mu}}_s$ denotes the law of the square, starting from $s$, of the Bessel process with dimension $2 - \frac{2}{\mu}$, absorbed at 0.
Corollary 9.1.1 We have the following identities in law:
$$\text{a)}\ \mu\ell_{\tau^\mu_s} \equiv -\inf\{X_u;\ u \le \tau^\mu_s\} \overset{(\mathrm{law})}{=} \frac{s}{2Z_{\frac{1}{\mu}}}; \qquad \text{b)}\ \sup\{X_u;\ u \le \tau^\mu_s\} \overset{(\mathrm{law})}{=} \frac{s}{2Z_1};$$
$$\text{c)}\ A^{\mu,-}_{\tau^\mu_s} \overset{(\mathrm{law})}{=} \frac{s^2}{8Z_{\frac{1}{2\mu}}}; \qquad \text{d)}\ A^{\mu,+}_{\tau^\mu_s} \overset{(\mathrm{law})}{=} \frac{s^2}{8Z_{\frac12}}.$$
Moreover, the pairs $\left(\mu\ell_{\tau^\mu_s},\ A^{\mu,-}_{\tau^\mu_s}\right)$ and $\left(\sup\{X_u;\ u \le \tau^\mu_s\},\ A^{\mu,+}_{\tau^\mu_s}\right)$ are independent.
In particular, the identity in law (9.1) holds.
Proof of the Corollary:
1) The independence statement follows immediately from the independence of the local times indexed by $x \in \mathbb{R}_+$, and $x \in \mathbb{R}_-$, as stated in Theorem 9.1.
2) We prove a). Remark that:
$$-\mu\ell_{\tau^\mu_s} = \inf\{X_u;\ u \le \tau^\mu_s\} = \inf\left\{x \in \mathbb{R};\ \ell^x_{\tau^\mu_s}(X) > 0\right\};$$
hence, from Theorem 9.1, we know that the law of $\mu\ell_{\tau^\mu_s}$ is that of the first hitting time of 0 by a $BESQ^{2-\frac{2}{\mu}}_s$ process, which implies the result a), using time reversal. The same arguments, used with respect to the local times $\left(\ell^x_{\tau^\mu_s}(X);\ x \ge 0\right)$, give a proof of b).
3) In order to prove c), we first remark that, by scaling, we can take $s = 1$. Then, we have:
$$A^{\mu,-}_{\tau^\mu_1} = \int_0^\infty dy\ \ell^{-y}_{\tau^\mu_1}(X) \overset{(\mathrm{law})}{=} \int_0^{L_1} dy\ Y_y,$$
where $(Y_y;\ y \ge 0)$ is a $BESQ^{2+\frac{2}{\mu}}_0$ process, using Theorem 9.1 and time reversal, and $L_1 = \sup\{y : Y_y = 1\}$.
We now use the following result on powers of BES processes (see Biane-Yor [17]):
$$q\, R^{1/q}_\nu(t) = R_{\nu q}\left(\int_0^t ds\ R^{-2/p}_\nu(s)\right), \qquad t \ge 0 \qquad (9.2)$$
where $(R_\lambda)$ is a BES process with index $\lambda$, and $\frac1p + \frac1q = 1$. We take $p = -1$, and $q = \frac12$. We then deduce from (9.2) that:
$$\int_0^{L_1(R_\nu)} ds\ R^2_\nu(s) = L_{1/2}\left(R_{\nu/2}\right) \overset{(\mathrm{law})}{=} \frac14 L_1\left(R_{\nu/2}\right) \overset{(\mathrm{law})}{=} \frac18\,\frac{1}{Z_{\nu/2}}$$
and c) follows by taking $\nu = \frac{1}{\mu}$.
d) follows similarly, by considering $\left(\ell^x_{\tau^\mu_s}(X);\ x \ge 0\right)$ and $\nu = 1$. □
Using again Theorem 9.1 and the identity (9.2) in conjunction, we obtain, at no extra cost, the following extension of Corollary 9.1.1.

Corollary 9.1.2 Let $\alpha \ge 0$. We have:
$$\left\{\int_{-\infty}^0 dy\left(\ell^y_{\tau^\mu_s}(X)\right)^\alpha;\ \int_0^\infty dy\left(\ell^y_{\tau^\mu_s}(X)\right)^\alpha\right\} \overset{(\mathrm{law})}{=} \frac{s^{\alpha+1}}{2(1+\alpha)^2}\left(\frac{1}{Z_{\frac{1}{\mu(1+\alpha)}}};\ \frac{1}{Z_{\frac{1}{1+\alpha}}}\right) \qquad (9.3)$$
where, on the right-hand side, the two gamma variables are independent.
Remark: In order to understand better the meaning of the quantities on the left-hand side of (9.3), it may be interesting to write down the following equalities, which are immediate consequences of the occupation density formula for $X$; let $\varphi : \mathbb{R} \to \mathbb{R}_+$, and $h : \mathbb{R}_+ \to \mathbb{R}_+$, be two Borel functions; then, the following equalities hold:
$$\int_0^t du\ \varphi(X_u)\, h\left(\ell^{X_u}_t\right) = \int_{-\infty}^\infty dy\ \varphi(y)\, h\left(\ell^y_t\right)\ell^y_t$$
$$\int_0^t du\ \varphi(X_u)\, h\left(\ell^{X_u}_u\right) = \int_{-\infty}^\infty dy\ \varphi(y)\, H\left(\ell^y_t\right), \quad \text{where: } H(x) = \int_0^x dz\ h(z).$$
In particular, if we take $h(x) = x^{\alpha-1}$, for $\alpha > 0$, we obtain:
$$\int_{-\infty}^\infty dy\ \varphi(y)\left(\ell^y_t\right)^\alpha = \int_0^t du\ \varphi(X_u)\left(\ell^{X_u}_t\right)^{\alpha-1} = \alpha\int_0^t du\ \varphi(X_u)\left(\ell^{X_u}_u\right)^{\alpha-1}.$$
Exercise 9.1 Prove the following extension of the Földes-Révész identity in law (4.11): for $s \ge q$,
$$\int_0^\infty dy\ 1_{\left(0 < \ell^{-y}_{\tau^\mu_s}(X) < q\right)} \overset{(\mathrm{law})}{=} T_{\sqrt{q}}\left(R_{\frac{2}{\mu}}\right). \qquad (9.4)$$
9.2 Proof of the Ray-Knight theorem for the local times of X

(9.2.1) In order to prove Theorem 9.1, it is important to be able to compute expressions such as:
$$E\left[\exp\left(-H_{\tau^\mu_s}\right)\right], \quad \text{where: } H_t = \int_0^t ds\ h(X_s),$$
with $h : \mathbb{R} \to \mathbb{R}_+$ a Borel function. The fact that $(H_t, t \ge 0)$ is an additive functional of the Markov process $\{Z_t = (B_t, \ell_t);\ t \ge 0\}$ shall play an important role in the sequel.
To have access to the above quantity, we shall consider in fact:
$$\gamma = E\left[\int_0^\infty ds\ \exp\left(-\frac{\theta^2 s}{2}\right)\exp\left(-H_{\tau^\mu_s}\right)\right]$$
and then, after some transformations, we shall invert the Laplace transform in $\frac{\theta^2}{2}$.
(9.2.2) From now on, we shall use freely the notation and some of the results in Biane-Yor [17] and Biane [14], concerning Brownian path decomposition; in particular, we shall use Bismut's identity:
$$\int_0^\infty dt\ P^0_t = \int_0^\infty ds\ P^{\tau(s)} \circ \int_{-\infty}^\infty da\ \left(P^{T_0}_a\right)^\vee$$
which may be translated as:
$$\int_0^\infty dt\ E\left[F(B_u,\ u \le g_t)\,G(B_{t-v};\ v \le t - g_t)\right] = \int_0^\infty ds\ E\left[F(B_u;\ u \le \tau_s)\right]\int_{-\infty}^\infty da\ E_a\left[G(B_h,\ h \le T_0)\right] \qquad (9.5)$$
where $F$ and $G$ are two measurable, $\mathbb{R}_+$-valued, Brownian functionals. Here is an important application of formula (9.5): if we consider $C_t = \int_0^t du\ \varphi(B_u, \ell_u)$, where $\varphi$ is an $\mathbb{R}_+$-valued continuous function, and $f : \mathbb{R}\times\mathbb{R}_+ \to \mathbb{R}_+$ is another continuous function, then:
$$E\left[\int_0^\infty du\ f(B_u, \ell_u)\exp(-C_u)\right] = \int_0^\infty ds\int_{-\infty}^\infty da\ f(a,s)\, E_0\left[\exp(-C_{\tau_s})\right] E_a\left[\exp\left(-C^s_{T_0}\right)\right] \qquad (9.6)$$
where $C^s_t = \int_0^t du\ \varphi(B_u, s)$.
(9.2.3) We are now ready to transform $\gamma$. First, we write:
$$\gamma = E\left[\int_0^\infty d\ell^\mu_u\ \exp\left(-\frac{\theta^2}{2}\ell^\mu_u\right)\exp(-H_u)\right] = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\int_{-\infty}^\infty da\int_0^\infty ds\ 1_{(0 \le a - \mu s \le \varepsilon)}\ g(s)\,k(a,s) \qquad (9.7)$$
where:
$$g(s) = E\left[\exp\left(-\frac{\theta^2}{2}\ell^\mu_{\tau_s}\right)\exp\left(-H_{\tau_s}\right)\right]$$
$$k(a,s) = E_a\left[\exp\left(-\frac{\theta^2}{2}\ell^\mu_{T_0} - \int_0^{T_0} du\ h\left(|B_u| - \mu s\right)\right)\right],$$
with $\ell^\mu_{T_0}$ denoting the local time at 0 of $(|B_u| - \mu s;\ u \le T_0)$.
From (9.7), it easily follows that:
$$\gamma = \int_{-\infty}^\infty db\ g(|b|)\,k(b\mu, |b|) = 2\int_0^\infty db\ g(b)\,k(b\mu, b).$$
It is now natural to introduce $\varphi_b(x)dx$, the law of $\ell^\mu_{\tau_b}$, resp. $\psi_a(y)dy$, the law of $\ell^a_{T_0}$ under $P_a$ (where $\ell^a$ denotes the local time at 0 of $|B_u| - a$), as well as the conditional expectations:
$$e^{(1)}(b,x) = E\left[\exp\left(-H_{\tau_b}\right)\ \big|\ \ell^\mu_{\tau_b} = x\right]$$
$$e^{(2)}(a,y) = E_a\left[\exp\left(-\int_0^{T_0} du\ h\left(|B_u| - a\right)\right)\ \Big|\ \ell^a_{T_0} = y\right].$$
These notations enable us to write $\gamma$ as follows:
$$\gamma = 2\int_0^\infty db\int_0^\infty dx\ \varphi_b(x)\exp\left(-\frac{\theta^2 x}{2}\right)e^{(1)}(b,x)\int_0^\infty dy\ \psi_{b\mu}(y)\exp\left(-\frac{\theta^2 y}{2}\right)e^{(2)}(b\mu, y).$$
It is now easy to invert the Laplace transform, and we get:

E[ exp(−H_{τ^µ_s}) ] = 2 ∫_0^∞ db ∫_0^s dx ϕ_b(x) e^{(1)}(b, x) ψ_{bµ}(s−x) e^{(2)}(bµ, s−x).   (9.8)
Plainly, one would like to be able to disintegrate the above integral with respect to db dx, and, tracing our steps back, we arrive easily, with the help of Bismut's decomposition, at the following reinforcement of (9.8):

E[ exp(−H_{τ^µ_s}) | ℓ_{τ^µ_s} = b, ℓ^µ_{g_{τ^µ_s}} = x ] = e^{(1)}(b, x) e^{(2)}(bµ, s − x),

and:

P( ℓ_{τ^µ_s} ∈ db, ℓ^µ_{g_{τ^µ_s}} ∈ dx ) = 2 db dx ϕ_b(x) ψ_{µb}(s − x) 1_{(x ≤ s)}.
However, we know, from Chapter 3, the explicit expressions of ϕ_b(x) and ψ_a(y); this implies the following

Proposition 9.1 For fixed s, the variables ℓ_{τ^µ_s} and ℓ^µ_{g_{τ^µ_s}} are independent and they satisfy:

µ ℓ_{τ^µ_s} (law)= s/(2 Z_{1/µ}) (law)= s ℓ^µ_{τ_1};   ℓ^µ_{g_{τ^µ_s}} (law)= s Z_{1/µ, 1}
(9.2.4) We are now in a position to prove Theorem 9.1, as we know how to write e^{(1)}(b, x) and e^{(2)}(bµ, s − x) in terms of the laws of BESQ processes of different dimensions.
We first recall that, in Chapter 3, we proved the following RK theorem (Theorem 3.4):

{ ℓ^{a−µb}_{τ_b}(X); a ≥ 0 } is an inhomogeneous Markov process, which is BESQ^{2/µ}_0 for a ≤ µb, and BESQ^0 for a ≥ µb.
Hence, we may write:

e^{(1)}(b, x) = Q^0_x( exp −∫_0^∞ dz h(z) Y_z ) · Q^{2/µ}_0( exp −∫_0^{µb} dz h(z − µb) Y_z | Y_{µb} = x )

e^{(2)}(bµ, s−x) = Q^0_{s−x}( exp −∫_0^∞ dz h(z) Y_z ) · Q^2_0( exp −∫_0^{µb} dz h(z − µb) Y_z | Y_{µb} = s − x )
Therefore, the product of these two expressions is equal, thanks to the additivity properties of {Q^0_s} and {Q^δ_0}, to:

e(b, x, s) = Q^0_s( exp −∫_0^∞ dz h(z) Y_z ) Q^{2+2/µ}_0( exp −∫_0^{µb} dz h(z − µb) Y_z | Y_{µb} = s )

and we make the important remark that this expression no longer depends on x.
Putting together the different results we have obtained up to now, we can state the following

Theorem 9.2 1) The process { ℓ^x_{τ^µ_s}(X); x ∈ ℝ } is independent of the variable ℓ^µ_{g_{τ^µ_s}};
2) The processes { ℓ^x_{τ^µ_s}(X); x ≥ 0 } and { ℓ^{−x}_{τ^µ_s}(X); x ≥ 0 } are independent;
3) The law of { ℓ^x_{τ^µ_s}(X); x ≥ 0 } is Q^0_s;
4) The law of { ℓ^{y−µb}_{τ^µ_s}(X); 0 ≤ y ≤ µb } is Q^{2+2/µ}_{0→s}, the law of the BESQ^{2+2/µ} bridge from 0 to s over the time interval [0, µb].
(9.2.5) We now end the proof of Theorem 9.1, by remarking that, from Proposition 9.1, T_0 ≡ inf{ x : ℓ^{−x}_{τ^µ_s}(X) = 0 } = µ ℓ_{τ^µ_s} is distributed as T_0 under Q^{2−2/µ}_s, and that when we reverse the process (ℓ^{−y}_{τ^µ_s}; 0 ≤ y ≤ T_0) from T_0 ≡ µb, that is, when we consider:

{ ℓ^{−(µb−x)}_{τ^µ_s} ≡ ℓ^{x−µb}_{τ^µ_s}; 0 ≤ x ≤ µb }

conditioned on T_0 = µb, we find that the latter process is distributed as Q^{2+2/µ}_{0→s} (over the time interval [0, µb]).
Putting together these two results, we find that { ℓ^{−x}_{τ^µ_s}(X); x ≥ 0 } is distributed as BESQ^{2−2/µ}_s, since it is well-known that:

{ R^{(ν)}_0(L_s − u); u ≤ L_s } (law)= { R^{(−ν)}_s(u); u ≤ T_0 }

where (R^{(α)}_a(t); t ≥ 0) denotes here the Bessel process with index α, starting at a, L_s = sup{ t; R^{(ν)}_0(t) = s }, and T_0 = inf{ t; R^{(−ν)}_s(t) = 0 }.
9.3 Generalisation of a computation of F. Knight
(9.3.1) In his article in the Colloque Paul Lévy (1987), F. Knight [58] proved the following formula:

E[ exp( −(λ²/2) A^+_{τ_s}/M²_{τ_s} ) ] = 2λ/sh(2λ),   λ ∈ ℝ, s > 0,   (9.9)

where A^+_t = ∫_0^t ds 1_{(B_s > 0)}, M_t = sup_{u≤t} B_u, and (τ_s, s ≥ 0) is the inverse of the local time of B at 0.
Time-changing (B^+_t, t ≥ 0) into a reflecting Brownian motion with the help of the well-known representation, already used in paragraph 4.1:

B^+_t = β( ∫_0^t ds 1_{(B_s > 0)} ),

where (β(u), u ≥ 0) denotes a reflecting Brownian motion, formula (9.9) may also be written in the equivalent form:

E[ exp( −(λ²/2) τ_s/(M*_{τ_s})² ) ] = 2λ/sh(2λ),   (9.10)

where M*_t = sup_{u≤t} |B_u|.
Formulae (9.9) and (9.10) show that:

A^+_{τ_s}/M²_{τ_s} (law)= τ_s/(M*_{τ_s})² (law)= T^{(3)}_2 def= inf{ t : R_t = 2 },   (9.11)

where (R_t, t ≥ 0) is a 3-dimensional Bessel process starting from 0.
An explanation of the identity in law (9.11) has been given by Ph. Biane [13] and P. Vallois [88], with the help of a pathwise decomposition.
For the moment, we generalize formulae (9.9) and (9.10) to the µ-process X, considered up to τ^µ_s.
Theorem 9.3 (We use the notation of the above paragraphs.)
1) Define I^µ_u = inf_{v≤u} X_v. Then, we have:

E[ exp( −(λ²/2) A^{µ,−}_{τ^µ_s}/(I^µ_{τ^µ_s})² ) ] = (λ/sh λ)(1/ch λ)^{1/µ}.   (9.12)

2) Define X*_t = sup_{s≤t} |X_s|. Then, if we denote c = 1/µ, we have:

E[ exp( −(λ²/2) τ^µ_s/(X*_{τ^µ_s})² ) ] = (1/2^c)(λ/sh λ)(1/ch λ)^c + cλ (sh λ)^{c−1} ∫_λ^{2λ} du/(sh u)^{c+1}   (9.13)
Proof:
1) Recall that I^µ_{τ^µ_s} = −µ ℓ_{τ^µ_s}.
In order to prove formula (9.12), we first deduce from Theorem 9.2 that:

E[ exp( −(λ²/(2(µb)²)) ∫_0^{µb} dy ℓ^{y−µb}_{τ^µ_s}(X) ) | ℓ_{τ^µ_s} = b ]
= Q^{2+2/µ}_0( exp( −(λ²/(2(µb)²)) ∫_0^{µb} dy Y_y ) | Y_{µb} = s )

Using Lévy's generalized formula (2.5), this quantity is equal to:

(∗)  (λ/sh λ)^{1+1/µ} exp( −(s/(2µb))(λ coth λ − 1) ).

Then, integrating with respect to the law of ℓ_{τ^µ_s} in the variable b, we obtain that the left-hand side of (9.12) is equal to:

(λ/sh λ)^{1+1/µ} (th λ/λ)^{1/µ} = (λ/sh λ)(1/ch λ)^{1/µ}.   (9.14)

2) Formula (9.13) has been obtained by F. Petit and Ph. Carmona using the independence of { ℓ^x_{τ^µ_s}(X); x ≥ 0 } and { ℓ^{−x}_{τ^µ_s}(X); x ≥ 0 }, as asserted in Theorem 9.1, together with the same kind of arguments as used in the proof of formula (9.12). □
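The simplification recorded in (9.14) is easy to check numerically; here is a minimal sketch (the function names lhs_914 and rhs_914 are ours, not notation from the text):

```python
import math

def lhs_914(lam, mu):
    # (λ/sh λ)^{1+1/µ} (th λ/λ)^{1/µ}, the integrated expression (∗) of the proof
    return (lam/math.sinh(lam))**(1 + 1/mu) * (math.tanh(lam)/lam)**(1/mu)

def rhs_914(lam, mu):
    # (λ/sh λ)(1/ch λ)^{1/µ}, the right-hand side of (9.14)
    return (lam/math.sinh(lam)) * (1/math.cosh(lam))**(1/mu)

for lam in (0.3, 1.0, 2.5):
    for mu in (0.5, 1.0, 3.0):
        assert abs(lhs_914(lam, mu) - rhs_914(lam, mu)) < 1e-12
```

Both sides agree to machine precision, in line with the elementary identity (th λ/sh λ)^{1/µ} = (1/ch λ)^{1/µ}.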
The following exercise may help to understand better the law of τ^µ_s/(X*_{τ^µ_s})².
Exercise 9.2 Let c = 1/µ > 0. Consider a pair of random variables (T, H) which is distributed as follows:
(i) H takes its values in the interval [1, 2], with:

P(H = 1) = 1/2^c;   P(H ∈ dx) = c dx/x^{c+1}   (1 < x < 2)

(ii) For λ > 0, we have:

E[ exp(−(λ²/2) T) | H = 1 ] = (λ/sh λ)(1/ch λ)^c,

and, for 1 < x < 2:

E[ exp(−(λ²/2) T) | H = x ] =(a) (λx/sh λx)² (x sh λ/sh λx)^{(1/µ)−1} ≡(b) (λx/sh λx)^{(1/µ)+1} (λ/sh λ)^{1−(1/µ)}

(we present both formulae (a) and (b) since, in the case µ ≤ 1, the right-hand side of (a) clearly appears as a Laplace transform in λ²/2, whereas in the case µ ≥ 1, the right-hand side of (b) clearly appears as a Laplace transform in λ²/2).
Now, prove that:

τ^µ_s/(X*_{τ^µ_s})² (law)= T   (9.15)
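Exercise 9.2 lends itself to a numerical sketch (function names are ours; the integrals are approximated by a midpoint rule): the code below checks that formulae (a) and (b) agree, and that averaging the conditional Laplace transforms over the law of H reproduces the right-hand side of (9.13).

```python
import math

def cond_a(lam, x, mu):
    # formula (a): (λx/sh λx)² (x sh λ/sh λx)^{1/µ - 1}
    return (lam*x/math.sinh(lam*x))**2 * (x*math.sinh(lam)/math.sinh(lam*x))**(1/mu - 1)

def cond_b(lam, x, mu):
    # formula (b): (λx/sh λx)^{1/µ + 1} (λ/sh λ)^{1 - 1/µ}
    return (lam*x/math.sinh(lam*x))**(1/mu + 1) * (lam/math.sinh(lam))**(1 - 1/mu)

def mixture(lam, mu, n=20000):
    # E[exp(-λ²T/2)] = P(H=1) E[·|H=1] + ∫_1^2 (c dx/x^{c+1}) E[·|H=x], with c = 1/µ
    c = 1/mu
    total = (1/2**c)*(lam/math.sinh(lam))*(1/math.cosh(lam))**c
    h = 1.0/n
    for i in range(n):  # midpoint rule on (1, 2)
        x = 1 + (i + 0.5)*h
        total += h*c*x**(-(c + 1))*cond_a(lam, x, mu)
    return total

def rhs_913(lam, mu, n=20000):
    # right-hand side of (9.13), with ∫_λ^{2λ} du/(sh u)^{c+1} by midpoint rule
    c = 1/mu
    first = (1/2**c)*(lam/math.sinh(lam))*(1/math.cosh(lam))**c
    h = lam/n
    tail = sum(h/math.sinh(lam + (i + 0.5)*h)**(c + 1) for i in range(n))
    return first + c*lam*math.sinh(lam)**(c - 1)*tail

assert abs(cond_a(1.2, 1.5, 0.7) - cond_b(1.2, 1.5, 0.7)) < 1e-12
assert abs(mixture(1.2, 0.7) - rhs_913(1.2, 0.7)) < 1e-6
```

The agreement of mixture and rhs_913 reflects the substitution u = λx, which turns the dx-integral over (1, 2) into the du-integral over (λ, 2λ).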
We now look for some probabilistic explanation of the simplification which occurred in (9.14); or, put another way: what does the quantity (th λ/λ)^{1/µ} represent in the above computation?
With this in mind, let us recall that:

µ ℓ_{τ^µ_s} (law)= s ℓ^µ_{τ_1},  and  P( ℓ^µ_{τ_1} ∈ dy ) = Q^{2/µ}_0(Y_1 ∈ dy).

Thus, the integral with respect to (db) of the term featuring (λ coth λ − 1) in (∗), above (9.14), gives us:

R^δ( exp −(λ²/2) ∫_0^1 dy Y_y ),

where δ = 2/µ, and we have used the notation of Chapter 3 concerning the decomposition:

Q^δ_0 = Q^δ_{0→0} ∗ R^δ.
Developing the same arguments more thoroughly, we obtain a new Ray-Knight theorem which generalizes formula (9.12).

Theorem 9.4 For simplicity, we write I = I^µ_{τ^µ_s}. We denote by (λ^x_t(X); x ≥ 0) the process of local times defined by means of the formula:

(1/I²) ∫_0^t du f( 1 − X_u/I ) = ∫_0^∞ dx f(x) λ^x_t(X),   t ≤ τ^µ_s,

for every Borel function f : [0, 1] → ℝ_+.
Then, the law of (λ^x_{τ^µ_s}; 0 ≤ x ≤ 1) is Q^2_{0→0} ∗ Q^{2/µ}_0.
Proof: By scaling, we can take s = 1. Using again Theorem 9.2, it suffices, in order to compute:

E[ exp −(1/I²) ∫_0^{τ^µ_1} du f( 1 − X_u/I ) ],

to integrate with respect to the law of ℓ_{τ^µ_1} the quantity:

Q^{2+2/µ}_0( exp −(1/(µb)²) ∫_0^{µb} dy f(y/(µb)) Y_y | Y_{µb} = 1 )
= Q^{2+2/µ}_0( exp −∫_0^1 dz f(z) Y_z | Y_1 = 1/(µb) )
= (ℓ(f))^{1+1/µ} exp( −(1/(2µb)) h(f) ),

where ℓ(f) and h(f) are two constants depending only on f.
When we integrate with respect to the law of µ ℓ_{τ^µ_1}, which is that of 1/Y_1 under Q^{2/µ}_0, we find:

ℓ(f) ( ℓ(f)/(1 + h(f)) )^{1/µ},

which is equal to the expectation of exp( −∫_0^1 dy f(y) Y_y ) under Q^2_{0→0} ∗ Q^{2/µ}_0. □
Remark: A more direct proof of Theorem 9.4 may be obtained by using
jointly Theorem 9.1 together with Corollary 3.9.2, which expresses the law of
a Bessel process transformed by Brownian scaling relative to a last passage
time.
9.4 Towards a pathwise decomposition of (X_u; u ≤ τ^µ_s)
In order to obtain a more complete picture of (X_u; u ≤ τ^µ_s), we consider again the arguments developed in paragraph 9.2, but we now work at the level of Bismut's identity (9.5) itself, instead of remaining at the level of the local times of X, as was done from (9.7) onwards.
Hence, if we now define, with the notation introduced in (9.5):

γ = E[ ∫_0^∞ ds exp(−θ²s/2) Φ_{τ^µ_s} ],

where

Φ_t = F(B_u, u ≤ g_t) G(B_{t−v}; v ≤ t − g_t),
we obtain, with the same arguments as in paragraph 9.2:

E[ F(B_u; u ≤ g_{τ^µ_s}) G(B_{τ^µ_s −v}; v ≤ τ^µ_s − g_{τ^µ_s}) | ℓ_{τ^µ_s} = b, ℓ^µ_{g_{τ^µ_s}} = x ]   (9.16)
= E[ F(B_u; u ≤ τ_b) | ℓ^µ_{τ_b} = x ] E_{bµ}[ G(B_h; h ≤ T_0) | ℓ^{bµ}_{T_0} = s − x ]
which may be translated in the form of the integral representation:

P_{τ^µ_s} = 2 ∫_0^∞ db ∫_0^s dx ϕ_b(x) ψ_{bµ}(s−x) P_{τ_b}( · | ℓ^µ_{τ_b} = x ) ∘ (P^{bµ}_{T_0})^∨( · | ℓ^{bµ}_{T_0} = s − x )

or, even more compactly:

∫_0^∞ ds P_{τ^µ_s} = 2 ∫_0^∞ db P_{τ_b} ∘ (P^{bµ}_{T_0})^∨.   (9.17)
For the moment, we simply deduce from (9.16) that, conditionally on ℓ_{τ^µ_s} = b, the process:

(X_v; v ≤ τ^µ_s − g_{τ^µ_s}) ≡ (B_{τ^µ_s −v} − µ ℓ_{τ^µ_s}; v ≤ τ^µ_s − g_{τ^µ_s}) = (B_{τ^µ_s −v} − µb; v ≤ τ^µ_s − g_{τ^µ_s})

is distributed as Brownian motion starting from 0, considered up to its first hitting time of −µb.
It would now remain to study the pre-g_{τ^µ_s} process in the manner of Biane [13] and Vallois [88], but this is left to the diligent reader.
Comments on Chapter 9
The results presented in this chapter were obtained by the second author while teaching the course at ETH (Sept. 91–Feb. 92). In the end, Theorem 9.1 may be used to give, thanks to the scaling property, a quick proof of F. Petit's first result (8.5).
The main difference between Chapter 8 and the present chapter is that, in Chapter 8, the proof of F. Petit's first result (8.5) was derived from a Ray-Knight theorem for the local times of X, considered up to τ_s = inf{ t : ℓ_t = s }, whereas, in the present chapter, this result (8.5) is obtained as a consequence of Theorem 9.1, which is a RK theorem for the local times of X, up to τ^µ_s ≡ inf{ t : ℓ^µ_t = s }, a more intrinsic time for the study of X.
As a temporary conclusion on this topic, it may be worth emphasizing the simplicity (in the end!) of the proof of (8.5):
– it was shown in (8.3.1) that a proof of the arc sine law for Brownian motion may be given in a few moves, which use essentially two ingredients:
(i) the scaling property,
(ii) the independence, and the identification of the laws, of A^+_{τ(1)} and A^−_{τ(1)}, the latter being, possibly, deduced from excursion theory;
– to prove F. Petit's result, the use of the scaling property makes no problem, whilst the independence and the identification of the laws of A^{µ,−}_{τ^µ(1)} and A^{µ,+}_{τ^µ(1)} are dealt with in Theorem 9.1.
However, the analogy with the Brownian case is not quite complete, since we have not understood, most likely from excursion theory, the identity in law between the quantities (8.13) and (8.14), as done in Pitman-Yor [77] and Perman-Pitman-Yor [69] in the Brownian case.
Chapter 10
On principal values of Brownian and Bessel local times

In real and complex analysis, the Hilbert transform H, which may be defined, for any f ∈ L²(ℝ), as:

Hf(x) = (1/π) lim_{ε→0} ∫_{−∞}^∞ (dy f(y)/(y − x)) 1_{(|y−x| ≥ ε)}   (10.1)

(this limit exists dx a.s.) plays an important role, partly because of the fundamental identity between Fourier transforms:

(Hf)^∧(ξ) = i sgn(ξ) f̂(ξ)
If, in (10.1), f is assumed to be Hölder continuous with compact support, then the limit in ε exists for every x ∈ ℝ. This remark applies to f(y) = ℓ^y_t, y ∈ ℝ, the function, in the space variable y, of the local times of Brownian motion at time t.
We shall use the notation:

H̃_t(a) = lim_{ε→0} ∫_0^t (ds/(B_s − a)) 1_{(|B_s − a| ≥ ε)}   (10.2)
More generally, we can define, for α < 3/2:

H̃^{(α)}_t(a) = lim_{ε→0} ∫_0^t (ds/(B_s − a)^{~α}) 1_{(|B_s − a| ≥ ε)}   (10.3)

with x^{~α} def= |x|^α sgn(x).
We shall simply note H̃_t for H̃_t(0), and H̃^{(α)}_t for H̃^{(α)}_t(0).
These processes (in the variable t) are quite natural examples of processes with zero energy, which have been studied, in particular, by Fukushima [45]. They also inherit a scaling property from Brownian motion, which partly explains why they possess some interesting distributional properties, when taken at certain random times, as will be proved in this chapter.
Moreover, the one-sided version of H̃^{(α)} plays an essential role in the representation of Bessel processes with dimension d < 1, as shown recently by Bertoin ([7], [8]). In fact, an important part of this chapter shall be devoted to the description of a new kind of excursion theory for Bessel processes with dimension d < 1, developed by Bertoin, and to some of its applications.
To conclude this introduction, a few words about the origin of such studies are certainly in order: to our knowledge, they may be traced back to Itô-McKean ([50], Problem 1, p. 72) and Yamada's original papers ([94], [95], [96]).
10.1 Yamada's formulae

(10.1.1) To begin with, we remark that, if (ℓ^a_t; a ∈ ℝ, t ≥ 0) denotes the family of Brownian local times, then, for a given x ∈ ℝ, and ε > 0, we have:

∫_{x−ε}^{x+ε} (dy/|y − x|^γ) |ℓ^y_t − ℓ^x_t| < ∞,  as soon as γ < 3/2,

thanks to the following Hölder continuity property of Brownian local times:

for 0 < η < 1/2,  sup_{s≤t} |ℓ^a_s − ℓ^b_s|(ω) ≤ C_{t,η}(ω) |a − b|^{(1/2)−η},

for some (random) constant C_{t,η}(ω).
Consequently, the quantities (H̃^{(β)}_t(a); a ∈ ℝ, t ≥ 0) are well-defined for any β < 3/2.
Likewise, so are the quantities:

p.v. ∫_0^t 1_{(B_s − a > 0)} ds/(B_s − a)^{1+α} def= ∫_0^∞ (db/b^{1+α}) (ℓ^{a+b}_t − ℓ^a_t)

and

p.v. ∫_0^t ds/|B_s − a|^{1+α} def= ∫_{−∞}^∞ (db/|b|^{1+α}) (ℓ^{a+b}_t − ℓ^a_t),

for 0 < α < 1/2.
(10.1.2) The quantities we have just defined appear in fact as the zero quadratic variation parts in the canonical decompositions, as Dirichlet processes, of

(B_t − a)^{~(1−α)},  ((B_t − a)^+)^{1−α}  and  |B_t − a|^{1−α},  for 0 < α < 1/2.
For simplicity, we shall take a = 0; then, we have the following formulae:

(B_t)^{~(1−α)} = (1−α) ∫_0^t (B_s)^{~(−α)} dB_s + ((1−α)(−α)/2) p.v. ∫_0^t ds/(B_s)^{~(1+α)}   (10.4)

(B^+_t)^{1−α} = (1−α) ∫_0^t (B_s)^{−α} 1_{(B_s > 0)} dB_s + ((1−α)(−α)/2) p.v. ∫_0^t 1_{(B_s > 0)} ds/(B_s)^{1+α}   (10.5)

|B_t|^{1−α} = (1−α) ∫_0^t |B_s|^{−α} sgn(B_s) dB_s + ((1−α)(−α)/2) p.v. ∫_0^t ds/|B_s|^{1+α}   (10.6)
Exercise 10.1 In Revuz-Yor ([81], p. 230), the representation of the local time ℓ^y_t of Brownian motion, for fixed y and fixed t, as an Itô stochastic integral, is given in the following explicit form:

ℓ^y_t = ∫_0^t ds g_s(y) − (1/√(2π)) ∫_0^t sgn(B_s − y) q( (B_s − y)/√(t − s) ) dB_s,

where q(x) = 2 ∫_x^∞ du exp(−u²/2), and g_s(y) = (1/√(2πs)) exp(−y²/(2s)).
Derive from this formula the representation as an Itô integral of the different principal values we have just defined, in particular ∫_0^t ds/B_s.
(10.1.3) We shall now transform formula (10.6) into a formula which gives the canonical decomposition, as a Dirichlet process, of a Bessel process (R^{(δ)}_t, t ≥ 0) with dimension δ such that 0 < δ < 1. We first recall that a power of a Bessel process is another Bessel process time-changed; precisely, we have the formula:

q R^{1/q}_ν(t) = R_{νq}( ∫_0^t ds/R^{2/p}_ν(s) )   (10.7)

where (R_µ(t), t ≥ 0) denotes a Bessel process with index µ, and ν > −1/q, 1/p + 1/q = 1 (see, e.g., Revuz-Yor ([81], Proposition (1.11), p. 416); in fact, formula (10.7) was already presented and used in Chapter 9, as formula (9.2)).
Applying this formula with ν = −1/2 (so that (R_ν(t), t ≥ 0) is a reflecting Brownian motion, and R_{νq}(t) ≡ R^{(δ)}(t), t ≥ 0), we obtain the following consequence of formula (10.6):

R_t ≡ R^{(δ)}(t) = β_t + ((δ − 1)/2) K_t   (10.8)
where (β_t, t ≥ 0) is a Brownian motion, and:

K_t = p.v. ∫_0^t ds/R_s def= ∫_0^∞ a^{δ−2} da (L^a_t − L^0_t),

the family of local times (L^a_t, a ≥ 0) being defined with respect to the speed measure of R^{(δ)} as:

∫_0^t ds ϕ(R_s) = ∫_0^∞ da ϕ(a) L^a_t a^{δ−1}

for every Borel function ϕ : ℝ_+ → ℝ_+.
10.2 A construction of stable processes, involving principal values of Brownian local times

(10.2.1) Let α ∈ ]−∞, 3/2[. With the help of the scaling property of the process (H̃^{(α)}_t, t ≥ 0), and using the inverse τ_t ≡ inf{ u : ℓ^0_u > t } of the Brownian local time (ℓ^0_u, u ≥ 0), it is easy to construct symmetric stable processes from a 1-dimensional BM. Precisely, we have
Theorem 10.1 Let α ∈ ]−∞, 3/2[. Then, the process (H̃^{(α)}_{τ_t}, t ≥ 0) is a symmetric stable process of index ν_α = 1/(2 − α); in particular, we have:

E[ exp(iλ H̃^{(α)}_{τ_t}) ] = exp(−t c_α |λ|^{ν_α})   (λ ∈ ℝ)

for some constant c_α.
Remarks:
1) As α varies from −∞ to 3/2 (excluded), ν_α varies from 0 to 2, with extreme values excluded; hence, with this construction, we can obtain all symmetric stable processes, except Brownian motion!
2) In the particular case α = 1, (H̃_{τ_t}, t ≥ 0) is a multiple of the standard Cauchy process. In fact, as we shall see with the next theorem, ((1/π) H̃_{τ_t}, t ≥ 0) is a standard Cauchy process.
3) P. Fitzsimmons and R. Getoor [40] have extended the result concerning (H̃_{τ_t}, t ≥ 0) to a large class of symmetric Lévy processes in place of the Brownian motion. They were also intrigued by the presence of the constant π. The computations of Fitzsimmons and Getoor have been simplified and generalized by Bertoin [9], using stochastic calculus and Feynman-Kac arguments.
(10.2.2) It now seems natural to look for some relation between the results of Theorem 10.1 and a more classical construction of the symmetric stable processes, which may be obtained as time-changes of a Brownian motion by an independent unilateral stable process. More precisely, Spitzer [85] remarked that, if (γ_u, u ≥ 0) is another real-valued Brownian motion, which is independent of B, then:

(γ_{τ_t}, t ≥ 0) is a standard symmetric Cauchy process.   (10.9)

Molchanov-Ostrovski [67] replaced (τ_t, t ≥ 0) by any unilateral stable process to obtain all symmetric stable processes, except Brownian motion. J.F. Le Gall [59] presented yet another construction in the general case, which is closer to Spitzer's original idea, in that it involves complex Brownian motion.
In any case, coming back precisely to Theorem 10.1 (or, rather, to the second remark following it) and Spitzer's result (10.9), we see that ((1/π) H̃_u, u ≥ 0) and (γ_u, u ≥ 0), when restricted to the zero set of the Brownian motion (B_v, v ≥ 0), have the same law. Therefore, it now seems natural to consider their joint distribution for fixed time t.
Theorem 10.2 (We keep the previous notation concerning the independent Brownian motions B and γ.)
For every λ ∈ ℝ, and θ ≠ 0, we have:

E[ exp i( (λ/π) H̃_{τ_t} + θ γ_{τ_t} ) ] = E[ exp( i(λ/π) H̃_{τ_t} − (θ²/2) τ_t ) ] = exp( −t λ coth(λ/θ) ).

This formula is reminiscent of Lévy's stochastic area formula (2.7); it seems to call for some interpretation in terms of complex Brownian motion, which we shall attempt, with some partial success, in the next paragraph.
10.3 Distributions of principal values of Brownian local times, taken at an independent exponential time

We start again with the interesting case α = 1. It will be fruitful to decompose the process (H̃_t, t ≥ 0) into the sum of:

H̃^−_t = H̃_{g_t}  and  H̃^+_t = H̃_t − H̃_{g_t},  where g_t = sup{ s ≤ t : B_s = 0 }.

Theorem 10.3 Let T denote a r.v. with values in ℝ_+, which is exponentially distributed, with parameter 1/2; moreover, T is assumed to be independent of B. Then, we have the following:
i) H̃^−_T and H̃^+_T are independent;
ii) for every λ ∈ ℝ,

E[ exp( i(λ/π) H̃^−_T ) ] = th(λ)/λ  and  E[ exp( i(λ/π) H̃^+_T ) ] = λ/sh(λ).

Therefore, we have:

E[ exp( i(λ/π) H̃_T ) ] = 1/ch(λ)   (10.10)

iii) In fact, formula (10.10) may be completed as follows:

E[ exp( i(λ/π) H̃_T ) | ℓ^0_T = t ] = (λ/sh(λ)) exp(−t(λ coth λ − 1)).   (10.11)
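Formulae (10.10) and (10.11) are consistent with each other: under the exponential time T with parameter 1/2, the local time ℓ^0_T is a standard exponential variable (a classical fact, used here as an assumption of the check), and integrating (10.11) against its law returns (10.10). A small numerical sketch (function names are ours):

```python
import math

def rhs_1011(lam, t):
    # right-hand side of (10.11): (λ/sh λ) exp(-t(λ coth λ - 1))
    return (lam/math.sinh(lam))*math.exp(-t*(lam/math.tanh(lam) - 1))

def integrated(lam, n=50000, tmax=40.0):
    # ∫_0^∞ e^{-t} (10.11) dt by midpoint rule: averaging over ℓ⁰_T ~ Exp(1)
    h = tmax/n
    return sum(h*math.exp(-(i + 0.5)*h)*rhs_1011(lam, (i + 0.5)*h) for i in range(n))

for lam in (0.5, 1.0, 2.0):
    assert abs(integrated(lam) - 1/math.cosh(lam)) < 1e-4
```

In closed form, the integral equals (λ/sh λ) · 1/(λ coth λ) = 1/ch λ, which is exactly (10.10).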
10.4 Bertoin's excursion theory for BES(d), 0 < d < 1

In this paragraph, (R_t, t ≥ 0) denotes a BES(d) process, with 0 < d < 1, and (K_t, t ≥ 0) is the process with zero quadratic variation such that:

R_t = R_0 + B_t + (d − 1) K_t   (t ≥ 0),

a decomposition we already encountered in paragraph 10.1, formula (10.8), with the factor (1/2) deleted.
Bertoin [8] proved that (0, 0) is regular for itself, with respect to the Markov process (R, K); hence, it admits a local time; such a local time (δ(t), t ≥ 0) may be constructed explicitly from K as the limit of 2^{n(d−1)} d_n(t), where d_n(t) denotes the number of downcrossings of K from 0 to −2^{−n} during the time-interval [0, t].
Let σ(t) = inf{ s : δ(s) > t } be the right-continuous inverse of δ, and consider the Poisson point process e = (e_1, e_2) defined by:

e_1(t) = ( R_{σ(t−)+h} 1_{(h ≤ σ(t)−σ(t−))}; h ≥ 0 )
e_2(t) = ( K_{σ(t−)+h} 1_{(h ≤ σ(t)−σ(t−))}; h ≥ 0 )
Call m the (Itô) characteristic measure of this Poisson point process, which lives on Ω^{abs}_0, the set of continuous functions ε : ℝ_+ → ℝ_+ × ℝ such that ε(0) = (0, 0), and ε is absorbed at (0, 0) after its first return V(ε) to (0, 0).
For ε ∈ Ω^{abs}_0, we define furthermore U(ε) = inf{ t > 0 : ε_2(t) = 0 }. We may now state Bertoin's description of m.
Theorem 10.4 The σ-finite measure m is characterized by the following distributional properties:
1) m(dε) a.s., (ε_2(t), t ≤ U) takes values in ℝ_−, and (ε_2(t), U ≤ t ≤ V) takes values in ℝ_+;
2) m( ε_1(U) ∈ dx ) = ((1 − d)/Γ(d)) x^{d−2} dx   (x > 0)
3) Conditionally (with respect to m) on ε_1(U) = x, the processes:

( ε_1(U − h), −ε_2(U − h); h ≤ U )  and  ( ε_1(U + h), ε_2(U + h); h ≤ V − U )

are independent, and both have the same distribution as:

( R_x(t), K_x(t); t ≤ S_x ),

where (R_x(t), t ≥ 0) denotes a BES_x(d) process, with canonical (Dirichlet) decomposition:

R_x(t) = x + B_t + (d − 1) K_x(t),

and S_x = inf{ t : K_x(t) = 0 }.
Bertoin [8] deduced several distributional results from Theorem 10.4. In turn, we shall use Theorem 10.4 to characterize the law of

A^+_1 = ∫_0^1 ds 1_{(K_s > 0)}.

Recall that, from excursion theory, we have, for any continuous, increasing additive functional (A_t, t ≥ 0) of X ≡ (R, K) which does not charge { s : R_s = K_s = 0 }, the following formulae:
E[ ∫_0^∞ dt exp −(αt + A_t) ] = ( ∫ m(dε) ∫_0^V dt exp −(αt + A_t) ) / ( ∫ m(dε) (1 − exp −(αV + A_V)) )   (10.12)

E[ ∫_0^∞ dt exp −(αt + A_{g_t}) ] = (1/α) · ( ∫ m(dε) (1 − exp(−αV)) ) / ( ∫ m(dε) (1 − exp −(αV + A_V)) )
We now apply these formulae with A_t = β A^+_t + γ A^−_t, where A^−_t = t − A^+_t; the quantities to be computed are:

h(α, β, γ) def= ∫ m(dε) ( 1 − exp −(αV + β A^+_V + γ A^−_V) )
            = ∫ m(dε) ( 1 − exp −{ (α+γ)U + (α+β)(V−U) } )

and

k(α, β, γ) def= ∫ m(dε) ∫_0^V dt exp −(αt + β A^+_t + γ A^−_t)
            = ∫ m(dε) { ∫_0^U dt exp(−(α+γ)t) + ∫_U^V dt exp −(αt + β(t−U) + γU) }.

Hence, if we now define:

f(a, b) = ∫ m(dε) ( 1 − exp −(aU + b(V − U)) )

we obtain, with a little algebra: h(α, β, γ) = f(α+γ, α+β) and

k(α, β, γ) = (1/(α+β)) [ (β−γ) f(α+γ, 0)/(α+γ) + f(α+γ, α+β) ]   (10.13)
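The little algebra leading to (10.13) can be verified on a toy example, in which the excursion measure m is replaced by a finite weighted collection of pairs (U, V) (an artificial stand-in, chosen only to test the identity; all names below are ours):

```python
import math

# toy "excursion measure": finitely many excursions with times (U, V) and weights w
EXCURSIONS = [(0.3, 1.0, 2.0), (0.7, 1.5, 0.5), (0.2, 0.9, 1.3)]  # (U, V, w)

def f(a, b):
    # f(a, b) = ∫ m(dε)(1 - exp -(aU + b(V-U))) for the toy measure
    return sum(w*(1 - math.exp(-(a*U + b*(V - U)))) for U, V, w in EXCURSIONS)

def k_direct(alpha, beta, gamma):
    # ∫ m(dε){ ∫_0^U e^{-(α+γ)t} dt + ∫_U^V e^{-(αt + β(t-U) + γU)} dt }, both pieces in closed form
    a, b = alpha + gamma, alpha + beta
    return sum(w*((1 - math.exp(-a*U))/a
                  + math.exp(-a*U)*(1 - math.exp(-b*(V - U)))/b)
               for U, V, w in EXCURSIONS)

def k_formula(alpha, beta, gamma):
    # right-hand side of (10.13)
    a, b = alpha + gamma, alpha + beta
    return ((beta - gamma)*f(a, 0)/a + f(a, b))/b

for (alpha, beta, gamma) in [(1.0, 0.4, 0.2), (0.5, 1.3, 0.1)]:
    assert abs(k_direct(alpha, beta, gamma) - k_formula(alpha, beta, gamma)) < 1e-12
```

Since (10.13) is a pure identity in f, it holds for any choice of the toy data, which is what the assertions confirm.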
We are now in a position to state the following

Theorem 10.5 1) For every t ≥ 0, one has:

E[ exp −(a A^−_{σ(t)} + b A^+_{σ(t)}) ] = exp(−t f(a, b)),

where f(a, b) = ( √(2a) + √(2b) )^{1−d}.
2) The distributions of the variable A^+_1 and of the pair (A^+_{g_1}, A^−_{g_1}) are characterized by the formulae:

E[ 1/(1 + β A^+_1) ] = ( β + (1 + √(1+β))^{1−d} ) / ( (1+β)(1 + √(1+β))^{1−d} )   (10.14)

E[ 1/(1 + β A^+_{g_1} + γ A^−_{g_1}) ] = ( 2/( √(1+β) + √(1+γ) ) )^{1−d}

In particular, g_1 is distributed as Z_{(1−d)/2, (1+d)/2}, a beta variable with parameters ( (1−d)/2, (1+d)/2 ).
Proof: 1) Bertoin ([8], Theorem 4.2) proved that if (λ^a_t; a ∈ ℝ) denotes the family of occupation densities of K, which are defined by:

∫_0^t ds f(K_s) = ∫_{−∞}^∞ da f(a) λ^a_t,

then, conditionally on λ^0_{σ(t)} = x, the processes (λ^a_{σ(t)}, a ≥ 0) and (λ^{−a}_{σ(t)}, a ≥ 0) are two independent BESQ_x(0) processes.
Furthermore, the law of λ^0_{σ(t)} is characterized by:

E[ exp( −(k/2) λ^0_{σ(t)} ) ] = exp(−t k^{1−d})   (k ≥ 0).

Using this result, we obtain:

E[ exp −( a A^−_{σ(t)} + b A^+_{σ(t)} ) ] = E[ exp −( (λ^0_{σ(t)}/2)( √(2a) + √(2b) ) ) ] = exp −t ( √(2a) + √(2b) )^{1−d}.

2) It follows from formulae (10.12) and (10.13) that:

E[ ∫_0^∞ dt exp −(αt + β A^+_t) ] = k(α, β, 0)/h(α, β, 0) = ( β f(α, 0) + α f(α, α+β) ) / ( α(α+β) f(α, α+β) )

and

E[ ∫_0^∞ dt exp −(αt + β A^+_{g_t} + γ A^−_{g_t}) ] = h(α, 0, 0)/(α h(α, β, γ)) = f(α, α)/( α f(α+γ, α+β) ).

Now, the expectations on the left-hand sides of these equalities are respectively equal, using a scaling argument, to:

E[ 1/(α + β A^+_1) ]  and  E[ 1/(α + β A^+_{g_1} + γ A^−_{g_1}) ].

The proof is ended by replacing f(a, b) by ( √(2a) + √(2b) )^{1−d} in the above equalities. □
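With f(a, b) = (√(2a) + √(2b))^{1−d}, the chain of equalities in part 2) of the proof can be checked numerically. The sketch below (function names ours) verifies (10.14), and, taking β = γ in the pair formula, checks that it collapses to E[1/(1 + β g₁)] = (1+β)^{−(1−d)/2}, which is what the stated beta law of g₁ predicts (for Z_{a,b} with a + b = 1, E[1/(1+βZ_{a,b})] = (1+β)^{−a}, a known fact used here as an assumption):

```python
import math

def f(a, b, d):
    # f(a, b) = (√(2a) + √(2b))^{1-d}
    return (math.sqrt(2*a) + math.sqrt(2*b))**(1 - d)

def lhs_1014(beta, d):
    # (β f(1,0) + f(1,1+β)) / ((1+β) f(1,1+β)), from part 2) of the proof with α = 1
    return (beta*f(1, 0, d) + f(1, 1 + beta, d))/((1 + beta)*f(1, 1 + beta, d))

def rhs_1014(beta, d):
    # right-hand side of (10.14)
    r = (1 + math.sqrt(1 + beta))**(1 - d)
    return (beta + r)/((1 + beta)*r)

for d in (0.2, 0.5, 0.9):
    for beta in (0.5, 1.0, 4.0):
        assert abs(lhs_1014(beta, d) - rhs_1014(beta, d)) < 1e-12

# pair formula at β = γ: (2/(2√(1+β)))^{1-d} = (1+β)^{-(1-d)/2}
for d in (0.2, 0.5, 0.9):
    beta = 0.8
    pair = (2/(2*math.sqrt(1 + beta)))**(1 - d)
    assert abs(pair - (1 + beta)**(-(1 - d)/2)) < 1e-12
```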
Remark: It may be interesting to compare formula (10.14) with yet another distributional result:

for fixed t,  A^+_{σ(t)}/σ(t) (law)= A^−_{σ(t)}/σ(t) (law)= Z_{1/2, 1/2},   (10.15)

i.e., both ratios are arc sine distributed.
This follows immediately from the description of the law of (λ^a_{σ(t)}, a ∈ ℝ) already used in the above proof.
Comments on Chapter 10
The contents of this chapter consist mainly of results relating principal values for Bessel processes with small dimension and their excursion theory, as derived by Bertoin [8]. For a further discussion by Bertoin, see [10].
A more complete exposition of results pertaining to principal values of local times is given in Yamada [97], and also in the second half of the monograph [103], which centers around Alili's study of:

p.v. ∫_0^t ds coth(λ B_s),

and the, up to now little understood, striking identity:

λ² [ ( ∫_0^1 ds coth(λ r_s) )² − 1 ] (law)= ( ∫_0^1 ds/r_s )²

where (r_s, s ≤ 1) denotes the standard 3-dimensional Bessel bridge, and λ ∈ ℝ (thus, the law of the left-hand side does not depend on λ).
More studies of functionals of (r_s, s ≤ 1), including ∫_0^1 ds exp(±λ r_s), are also found in C. Donati-Martin and M. Yor [34].
Chapter 11
Probabilistic representations of the Riemann zeta function and some generalisations related to Bessel processes

To begin with, it may be wise to state immediately that the aim¹ of this chapter is not to discuss Riemann's hypothesis, but, much more modestly, to present some of the (well-known) relations between the heat equation, the zeta function, theta functions and Brownian motion.
11.1 The Riemann zeta function and the 3-dimensional Bessel process

(11.1.1) The Riemann zeta function is defined by:

ζ(s) = Σ_{n=1}^∞ 1/n^s,  for s ∈ ℂ, Re(s) > 1.

It extends analytically to the entire complex plane ℂ, as a meromorphic function with a unique pole at s = 1.
An essential property of ζ is that it satisfies the functional equation:

ξ(s) = ξ(1 − s)   (11.1)

¹ Research linking the Riemann zeta function and random matrix theory, in particular "the Keating-Snaith philosophy", which is closely related to the Lindelöf hypothesis, is beyond the scope of this book. However, see e.g. the Mezzadri-Snaith volume [66].
where:

ξ(s) def= (s(s−1)/2) Γ(s/2) π^{−s/2} ζ(s).   (11.2)

We recall that the classical gamma function, which is defined by:

Γ(s) = ∫_0^∞ dt t^{s−1} e^{−t},  for Re(s) > 0,

extends analytically to ℂ as a meromorphic function with simple poles at 0, −1, −2, ..., −m, ..., thanks to the relation:

Γ(1 + s) = s Γ(s).
(11.1.2) The functional equation (11.1) may be understood as a symmetry property of the distribution of the r.v.:

N def= (π/2) T^{(2)},  where T^{(2)} def= T^{(3)}_1 + T̃^{(3)}_1,

with T^{(3)}_1 and T̃^{(3)}_1 two independent copies of the first hitting time of 1 by a BES(3) process starting from 0.
Indeed, one has:

2 ξ(2s) = E[N^s]   (11.3)

Hence, if we assume that the functional equation (11.1) holds, we deduce from (11.3) that N satisfies:

E[N^s] = E[N^{(1/2)−s}],  for any s ∈ ℂ,

or, equivalently: for any Borel function f : ℝ_+ → ℝ_+,

E[f(N)] = E[ f(1/N) √N ].   (11.4)

In paragraphs 11.2 and 11.3, an explanation of this symmetry property of N is given.
(11.1.3) For the moment, we give a proof of (11.4), hence of (11.1), as a consequence of Jacobi's identity for the theta function:

Θ(1/t) = √t Θ(t),  where Θ(t) ≡ Σ_{n=−∞}^∞ e^{−πn²t}.   (11.5)

Indeed, the density of N, which we denote by ϕ(t), satisfies:

ϕ(t) = 2t Θ″(t) + 3 Θ′(t),

and it is easily deduced from this identity that:

ϕ(1/t) = t^{5/2} ϕ(t)   (11.6)

which is equivalent to (11.4).
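Both Jacobi's identity (11.5) and the symmetry (11.6) of the density ϕ(t) = 2tΘ″(t) + 3Θ′(t) are easy to verify numerically with a truncated theta series; the sketch below also checks, using (11.6) to fold [0, 1] onto [1, ∞), that ϕ integrates to 1 and that E[N] = 2ξ(2) = π/3, as (11.3) predicts (the truncation levels and function names are our choices):

```python
import math

def theta(t, N=60):
    # Θ(t) = Σ_{n=-∞}^{∞} e^{-πn²t}
    return 1 + 2*sum(math.exp(-math.pi*n*n*t) for n in range(1, N))

def theta_p(t, N=60):
    # Θ'(t), term-by-term differentiation
    return -2*sum(math.pi*n*n*math.exp(-math.pi*n*n*t) for n in range(1, N))

def theta_pp(t, N=60):
    # Θ''(t)
    return 2*sum((math.pi*n*n)**2*math.exp(-math.pi*n*n*t) for n in range(1, N))

def phi(t):
    # density of N: ϕ(t) = 2tΘ''(t) + 3Θ'(t)
    return 2*t*theta_pp(t) + 3*theta_p(t)

# Jacobi's identity (11.5) and the symmetry (11.6):
for t in (0.5, 1.0, 1.7):
    assert abs(theta(1/t) - math.sqrt(t)*theta(t)) < 1e-10
    assert abs(phi(1/t) - t**2.5*phi(t)) < 1e-9

# Using ∫_0^1 t^s ϕ(t)dt = ∫_1^∞ u^{1/2-s} ϕ(u)du (a consequence of (11.6)),
# check the total mass and the mean by a midpoint rule on [1, 30]:
n, tmax = 20000, 30.0
h = (tmax - 1)/n
mass = mean = 0.0
for i in range(n):
    u = 1 + (i + 0.5)*h
    p = phi(u)
    mass += h*(1 + math.sqrt(u))*p
    mean += h*(u + 1/math.sqrt(u))*p
assert abs(mass - 1.0) < 1e-3          # 2ξ(0) = 1
assert abs(mean - math.pi/3) < 1e-3    # E[N] = 2ξ(2) = π/3
```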
The following exercise should help to understand better the deep connections which exist between the Riemann zeta function and the distribution of T^{(3)}_1 (and its convolution powers).
Exercise 11.1 Let k > 0, and let T^{(k)} denote an ℝ_+-valued r.v. such that

E[ exp( −(λ²/2) T^{(k)} ) ] = (λ/sh λ)^k

(such a variable exists, thanks to the infinite divisibility of T^{(1)}; from formula (2.6), T^{(k)} may be represented as ∫_0^1 ds ρ²_{(k)}(s), where (ρ_{(k)}(s), s ≤ 1) denotes here the (2k)-dimensional Bessel bridge).

1. Prove that, for any m > 0, one has:

Γ(m) E[ 1/(T^{(k)})^m ] = (1/2^{m−k−1}) ∫_0^∞ dλ λ^{k+2m−1} e^{−λk}/(1 − e^{−2λ})^k

2. Assume k is an integer, k ≥ 1. Recall that 1/(1−x) = Σ_{n=0}^∞ x^n (x < 1) and, for k ≥ 2:

(k−1)!/(1−x)^k = Σ_{n=k−1}^∞ n(n−1)···(n−(k−2)) x^{n−(k−1)}

More generally, for any k > 0, we have

1/(1−x)^k = Σ_{p=0}^∞ α^{(k)}_p x^p,  with α^{(k)}_p = Γ(k+p)/( Γ(k) Γ(p+1) ).

Deduce, from the first question, that:
Γ(m) E[ 1/(T^{(k)})^m ] = ( Γ(k+2m)/2^{m−k−1} ) Σ_{p=0}^∞ α^{(k)}_p/(k+2p)^{k+2m}

3. Show the following formulae for E[ 1/(T^{(k)})^m ], with k = 1, 2, 3, 4, in terms of Γ and ζ.

E[ 1/(T^{(1)})^m ] = ( Γ(2m+1)/(2^{m−2} Γ(m)) ) ( Σ_{n=0}^∞ 1/(2n+1)^{2m+1} ) = ( Γ(2m+1)/(2^{m−2} Γ(m)) ) ( 1 − 1/2^{2m+1} ) ζ(2m+1).

E[ 1/(T^{(2)})^m ] = ( Γ(2m+2)/(2^{3m−1} Γ(m)) ) ζ(2m+1).

E[ 1/(T^{(3)})^m ] = ( Γ(2m+3)/(2^{m−1} Γ(m)) ) [ ( 1 − 1/2^{2m+1} ) ζ(2m+1) − ( 1 − 1/2^{2m+3} ) ζ(2m+3) ].

E[ 1/(T^{(4)})^m ] = ( Γ(2m+4)/(3 · 2^{3m−2} Γ(m)) ) { ζ(2m+1) − ζ(2m+3) }.

Prove that, for any integer k ≥ 1, it is possible to express E[ 1/(T^{(k)})^m ] in terms of the Γ and ζ functions.
4. Deduce, from the comparison of the expressions of E[ 1/(T^{(1)})^m ] and E[ 1/(T^{(2)})^m ], that:

(*)  U²/T^{(2)} (law)= Y²/T^{(1)}  ( (law)= Y² ( sup_{u≤1} R^{(3)}_u )² ),

where U denotes a uniform r.v., independent of T^{(2)}, and Y a discrete r.v. independent of T^{(1)} and such that P( Y = 1/2^p ) = 1/2^p, (p = 1, 2, ...).
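Questions 1, 3 and 4 lend themselves to numerical verification. The sketch below checks, for k = 2 and m = 1, that the integral of question 1 equals Γ(4)ζ(3)/2² = (3/2)ζ(3), as question 3 predicts, and that the closed forms of question 3 make the two sides of (*) have equal moments up to the common factor ζ(2m+1) (E[U^{2m}] = 1/(2m+1) and E[Y^{2m}] = 1/(2^{2m+1} − 1) are elementary computations; the names are ours):

```python
import math

# ζ(3) by direct summation (the tail beyond 2·10^5 is below 2e-11)
ZETA3 = sum(1/n**3 for n in range(1, 200001))

# question 1, k = 2, m = 1: Γ(1) E[1/T^{(2)}] = 2^{1-m} ∫_0^∞ λ³/(sh λ)² dλ
n, lmax = 200000, 40.0
h = lmax/n
integral = 0.0
for i in range(n):  # midpoint rule
    lam = (i + 0.5)*h
    integral += h*lam**3/math.sinh(lam)**2
assert abs(integral - 1.5*ZETA3) < 1e-6  # question 3 predicts (3/2)ζ(3)

# question 4: equal moments of the two sides of (*), divided by ζ(2m+1)
def moment_left(m):
    # E[U^{2m}] · (coefficient of ζ(2m+1) in E[1/(T^{(2)})^m])
    return (1/(2*m + 1)) * math.gamma(2*m + 2)/(2**(3*m - 1)*math.gamma(m))

def moment_right(m):
    # E[Y^{2m}] · (coefficient of ζ(2m+1) in E[1/(T^{(1)})^m])
    return (1/(2**(2*m + 1) - 1)) * math.gamma(2*m + 1)/(2**(m - 2)*math.gamma(m)) * (1 - 2**-(2*m + 1))

for m in (1.0, 1.5, 2.0):
    assert abs(moment_left(m) - moment_right(m)) < 1e-12
```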
11.2 The right-hand side of (11.4), and the agreement formulae between laws of Bessel processes and Bessel bridges

(11.2.1) Using (Brownian) excursion theory, we will show below that, for every Borel function f : ℝ_+ → ℝ_+, one has:

E[ f(m²_e) ] = √(π/2) E[ f( 1/T^{(2)} ) √(T^{(2)}) ]   (11.7)

where (e(u), u ≤ 1) denotes the normalized Brownian excursion, which is distributed as the 3-dimensional standard Bessel bridge, and m_e def= sup_{u≤1} e(u).
Assuming (11.7) holds, it will remain, in order to finish the proof of (11.4), to show:

m²_e (law)= (π²/4) T^{(2)}   (11.8)

which will be undertaken in paragraph 11.3.
which will be undertaken in paragraph 11.3.
(11.2.2) The identity (11.7) will appear below as a particular consequence of the following agreement formulae, which we now present as relationships, valid for any dimension $d > 0$, between the law of the standard $d$-dimensional Bessel bridge on one hand and, on the other hand, the law of two $d$-dimensional Bessel processes put back to back. Here is this relationship:
Theorem 11.1 Let $d > 0$, and define $\mu = \frac{d}{2} - 1$.
Consider $(R_u, u \geq 0)$ and $(R'_u, u \geq 0)$ two independent $\mathrm{BES}^{\mu} \equiv \mathrm{BES}(d)$ processes starting from 0; denote$^2$ by $\sigma_{\mu}$ and $\sigma'_{\mu}$ their respective first hitting times of 1.
Let
\[
\rho_u = \begin{cases} R_u\,, & \text{if } u \leq \sigma_{\mu} \\ R'_{\sigma_{\mu}+\sigma'_{\mu}-u}\,, & \text{if } \sigma_{\mu} \leq u \leq \sigma_{\mu}+\sigma'_{\mu}\,, \end{cases}
\]
and
\[
\tilde{\rho}_v = \frac{1}{\sqrt{\sigma_{\mu}+\sigma'_{\mu}}}\; \rho_{v(\sigma_{\mu}+\sigma'_{\mu})}\,, \quad v \leq 1.
\]
Then, if $(r_v, v \leq 1)$ denotes the standard Bessel bridge with dimension $d$, we have, for every measurable functional $F : C([0,1], \mathbb{R}_+) \to \mathbb{R}_+$:
\[
E[F(r_v, v \leq 1)] = 2^{\mu}\, \Gamma(\mu+1)\; E\big[F(\tilde{\rho}_v, v \leq 1)\,(\sigma_{\mu}+\sigma'_{\mu})^{\mu}\big]. \tag{11.9}
\]
$^2$ Thus, $\sigma_{\mu}$ is another (sometimes more convenient) notation for $T^{(d)}_1$.
We now remark that the identity (11.7) follows from the identity (11.10) below, in the case $\mu = 1/2$.
Corollary 11.1.1 Let $m_{\mu}$ be the supremum of the standard Bessel bridge with dimension $d = 2(1+\mu)$, and let $s_{\mu}$ be the unique time at which this supremum is attained. Then, we have, for every Borel function $f : \mathbb{R}_+^2 \to \mathbb{R}_+$:
\[
E[f(m_{\mu}^2, s_{\mu})] = 2^{\mu}\,\Gamma(\mu+1)\; E\left[f\left(\frac{1}{\sigma_{\mu}+\sigma'_{\mu}}\,,\; \frac{\sigma_{\mu}}{\sigma_{\mu}+\sigma'_{\mu}}\right)(\sigma_{\mu}+\sigma'_{\mu})^{\mu}\right]. \tag{11.10}
\]
Proof: This is immediate from the identity (11.9) above, since $m_{\mu}^2$, resp. $s_{\mu}$, considered on the left-hand side of (11.9), corresponds to $1/(\sigma_{\mu}+\sigma'_{\mu})$, resp. $\sigma_{\mu}/(\sigma_{\mu}+\sigma'_{\mu})$, considered on the right-hand side of (11.9). $\square$
It should be noted, although this is a digression from our main theme, that, in the particular case $\mu = 0$ (or $d = 2$), Theorem 11.1 yields a remarkable identity in law.

Theorem 11.2 We use the same notation as in Theorem 11.1, but now $d = 2$. Then, we have: $(r_v, v \leq 1) \overset{\text{(law)}}{=} (\tilde{\rho}_v, v \leq 1)$.
Corollary 11.2.1 We use the same notations as in Corollary 11.1.1, but now $\mu = 0$ (or, $d = 2$). Then, we have:
\[
(m_0^2, s_0) \overset{\text{(law)}}{=} \left(\frac{1}{\sigma_0+\sigma'_0}\,,\; \frac{\sigma_0}{\sigma_0+\sigma'_0}\right) \tag{11.11}
\]
and in particular:
\[
\frac{s_0}{m_0^2} \overset{\text{(law)}}{=} \sigma_0\,. \tag{11.12}
\]
(11.2.3) A family of excursion measures. We now give a proof of Theorem 11.1, for $\mu > 0$, which relies upon two different descriptions, both due to D. Williams, of a $\sigma$-finite measure $n_{\mu}$ already considered by Pitman-Yor ([73], p. 436-440) and Biane-Yor ([17], paragraph (3.2)). $n_{\mu}$ is defined on the canonical space $C(\mathbb{R}_+, \mathbb{R}_+)$, and is carried by the space $\Omega_{\mathrm{abs}}$ of the trajectories $\omega$ such that $\omega(0) = 0$ and $\omega$ is absorbed at 0 at the first (strictly positive) instant it reaches 0 again. $n_{\mu}$ may be characterized by either of the following descriptions. For these descriptions, we shall use the notation:
\[
e_u(\omega) = \omega(u)\;;\quad V(\omega) = \inf\{u > 0 : e_u(\omega) = 0\}\;;\quad M(\omega) = \sup_u e_u(\omega)\,.
\]
First description of $n_{\mu}$
(i) The distribution of $M$ under $n_{\mu}$ is given by:
\[
n_{\mu}(M \geq x) = x^{-2\mu} \quad (x > 0)\,.
\]
(ii) For every $x > 0$, conditionally on $M = x$, this maximum $M$ is attained at a unique time $R$ ($0 < R < V$, a.s.), and the two processes $(e_u, u \leq R)$ and $(e_{V-u}, u \leq V-R)$ are two independent $\mathrm{BES}^{\mu}_0$ processes, stopped at the first time they reach level $x$.
Second description of $n_{\mu}$
(i') The distribution of $V$ under $n_{\mu}$ is given by:
\[
n_{\mu}(V \in dv) = \frac{\alpha_{\mu}\, dv}{v^{\mu+1}}\,, \quad \text{where } \alpha_{\mu} = \frac{1}{2^{\mu}\,\Gamma(\mu)}\,.
\]
(ii') For every $v \in\; ]0, \infty[$, conditionally on $V = v$, the process $(e_u, u \leq v)$ is a Bessel bridge of index $\mu$, during the time interval $[0, v]$, starting and ending at 0.
11.3 A discussion of the identity (11.8)

(11.3.1) The identity:
\[
m_e^2 \overset{\text{(law)}}{=} \frac{\pi^2}{4}\, T^{(2)} \tag{11.8}
\]
is reminiscent of the very well-known Kolmogorov-Smirnov identity:
\[
m_b^2 \overset{\text{def}}{=} \sup_{u\leq 1}\,(b(u))^2 \overset{\text{(law)}}{=} \frac{\pi^2}{4}\, T^{(3)}_1 \quad\left(\overset{\text{(law)}}{=} T^{(3)}_{\pi/2}\right) \tag{11.13}
\]
where $(b(u), u \leq 1)$ denotes here the standard 1-dimensional Brownian bridge.
No satisfactory explanation has, until now, been given for the factor $(\pi/2)^2$ in either formula (11.8) or (11.13), but, putting them together, Chung [27] pointed out the puzzling identity in law:
\[
m_e^2 \overset{\text{(law)}}{=} m_b^2 + m_{\tilde{b}}^2 \tag{11.14}
\]
where, on the right-hand side of (11.14), $b$ and $\tilde{b}$ are two independent 1-dimensional Brownian bridges.
It follows from Vervaat's representation of the normalized Brownian excursion $(e(t), t \leq 1)$ (see Vervaat [89], and also Biane [12]), i.e.: the process $\tilde{e}(t) \overset{\text{def}}{=} b((\rho+t)\,[\mathrm{mod}\ 1]) - b(\rho)$, $t \leq 1$, where $\rho$ is the unique time at which $b$ attains its minimum, is a normalized Brownian excursion, that:
\[
m_e \overset{\text{(law)}}{=} \sup_{u\leq 1} b(u) - \inf_{u\leq 1} b(u)\,,
\]
and, therefore, the identity (11.14) may be written as:
\[
\Big(\sup_{u\leq 1} b(u) - \inf_{u\leq 1} b(u)\Big)^2 \overset{\text{(law)}}{=} m_b^2 + m_{\tilde{b}}^2\,. \tag{11.15}
\]
No pathwise explanation of the identities (11.14) or (11.15) has been found, and the explicit computation of the joint law of $(\sup_{u\leq 1} b(u), \inf_{u\leq 1} b(u))$ presented below in (11.3.2) rules out the possibility that (11.15) might be explained by the independence (which does not hold) of $\{(\sup_{u\leq 1} b(u) - \inf_{u\leq 1} b(u))^2 - m_b^2\}$ and $m_b^2$. To conclude with this series of identities, we use the well-known representation of Brownian motion $(B_t, t \geq 0)$ in terms of the Brownian bridge $(b(u), u \leq 1)$:
\[
B_t = (1+t)\, b\!\left(\frac{t}{1+t}\right), \quad t \geq 0,
\]
from which it is easily deduced (see, e.g., Revuz-Yor [81], Exercise (3.10), p. 37) that:
\[
\sup_{t\geq 0}\,(|B_t| - t) \overset{\text{(law)}}{=} \sup_{u\leq 1}\,(b(u))^2\,.
\]
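The identities (11.13)-(11.15) lend themselves to a quick Monte Carlo illustration (ours, with arbitrary grid sizes): by Vervaat's representation, $m_e$ has the law of the bridge amplitude, so $E[m_e^2] = \frac{\pi^2}{4}E[T^{(2)}] = \frac{\pi^2}{6}$ and $E[m_b^2] = \frac{\pi^2}{4}E[T^{(3)}_1] = \frac{\pi^2}{12}$ (using $E[T^{(3)}_1] = 1/3$):

```python
import math
import random

random.seed(2024)
n, paths = 1500, 2000
amp2 = []  # (sup b - inf b)^2, distributed as m_e^2 by Vervaat's representation
sup2 = []  # (sup |b(u)|)^2 = m_b^2
for _ in range(paths):
    step = math.sqrt(1.0 / n)
    walk, s = [0.0], 0.0
    for _ in range(n):
        s += random.gauss(0.0, step)
        walk.append(s)
    # pin the random walk into a Brownian bridge on [0, 1]
    bridge = [walk[i] - (i / n) * walk[n] for i in range(n + 1)]
    amp2.append((max(bridge) - min(bridge)) ** 2)
    sup2.append(max(abs(v) for v in bridge) ** 2)

mean_amp2 = sum(amp2) / paths  # should be near pi^2/6  ~ 1.645
mean_sup2 = sum(sup2) / paths  # should be near pi^2/12 ~ 0.822
```

The discrete grid slightly underestimates suprema, so the agreement is only within a few percent; the ratio of the two means is close to 2, as (11.14) predicts.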
Hence, we may write the identity in law (11.14) in the equivalent form:
\[
\begin{aligned}
\sup_{u\geq 0}\,(R(u) - u) &\overset{\text{(law)}}{=} \sup_{u\geq 0}\,(|B_u| - u) + \sup_{u\geq 0}\,(|\tilde{B}_u| - u)\\
&\overset{\text{(law)}}{=} \sup_{t\geq 0}\Big(B_t^+ - \int_0^t ds\, 1_{(B_s\geq 0)}\Big) + \sup_{t\geq 0}\Big(B_t^- - \int_0^t ds\, 1_{(B_s\leq 0)}\Big)
\end{aligned} \tag{11.16}
\]
where $B$ and $\tilde{B}$ denote two independent Brownian motions, and $(R(u), u \geq 0)$ is a 3-dimensional Bessel process. (The last identity in law in (11.16) is left to the reader as an exercise; Hint: use the representation of $B^{\pm}$ in terms of reflecting BM, given in Chapter 4, Paragraph 4.1.)
(11.3.2) From the theory of Brownian excursions, the joint law of $(s_b^+, s_b^-, \ell_b)$, where: $s_b^+ = \sup_{u\leq 1} b(u)$, $s_b^- = -\inf_{u\leq 1} b(u)$, and $\ell_b$ is the local time at level 0 of the standard Brownian bridge $(b(u), u \leq 1)$, may be characterized as follows:
\[
P\big(|G|\,s_b^+ \leq x;\ |G|\,s_b^- \leq y;\ |G|\,\ell_b \in d\lambda\big) = \exp\!\Big(-\frac{\lambda}{2}\,(\coth x + \coth y)\Big)\, d\lambda \tag{11.17}
\]
where $G$ denotes a Gaussian variable, centered, with variance 1, which is independent of $b$; consequently, one obtains, after integrating with respect to $\lambda$:
\[
P\big(|G|\,s_b^+ \leq x;\ |G|\,s_b^- \leq y\big) = \frac{2}{\coth x + \coth y}\,,
\]
and it is now easy to deduce from this identity, together with the obvious equality: $m_b = \max(s_b^+, s_b^-)$, that:
\[
E\left[\exp\left(-\frac{\alpha^2}{2}\, m_b^2\right)\right] = \frac{\big(\frac{\pi\alpha}{2}\big)}{\mathrm{sh}\big(\frac{\pi\alpha}{2}\big)}
\quad\text{and}\quad
E\left[\exp\left(-\frac{\alpha^2}{2}\,(s_b^+ + s_b^-)^2\right)\right] = \left(\frac{\big(\frac{\pi\alpha}{2}\big)}{\mathrm{sh}\big(\frac{\pi\alpha}{2}\big)}\right)^2.
\]
This proves both identities (11.13) and (11.15) (and, as we remarked above, (11.15) is equivalent to (11.14)).
We now remark, as an exercise, that the identity in law (11.15) may be translated into an identity in law between independent exponential and Bernoulli variables, the understanding of which does not seem obvious.
Exercise 11.2 (We keep the notation used in formula (11.17).)
1. Prove that:
\[
|G|\,(\ell_b,\ 2 s_b^+,\ 2 s_b^-) \overset{\text{(law)}}{=} \left(T,\ \log\!\Big(1 + \frac{T}{T'}\Big),\ \log\!\Big(1 + \frac{T}{T''}\Big)\right)
\]
and
\[
|G|\,(\ell_b,\ 2 m_b) \overset{\text{(law)}}{=} \left(T,\ \log\!\Big(1 + \frac{2T}{T^*}\Big)\right),
\]
where $(T, T', T'')$, resp. $(T, T^*)$, are three, respectively two, independent exponential variables with parameter 1.
2. Prove that the identity in law (11.15) is equivalent to:
\[
\left[\Big(1 + \frac{T}{T'}\Big)\Big(1 + \frac{T}{T''}\Big)\right]^{\varepsilon} \overset{\text{(law)}}{=} \Big(1 + \frac{2T_1}{T^*_1}\Big)^{\varepsilon} \Big(1 + \frac{2T_2}{T^*_2}\Big)^{\varepsilon'}
\]
where, on either side, the $T$'s indicate independent exponential variables, which are also independent of the i.i.d. Bernoulli variables $\varepsilon$ and $\varepsilon'$ $(P(\varepsilon = \pm 1) = 1/2)$.
Here is now a proof of the identity (11.17).
Recall that the standard Brownian bridge $(b(u), u \leq 1)$ may be represented as
\[
\left(b(u) \equiv \frac{1}{\sqrt{g_t}}\, B_{u g_t}\;;\ u \leq 1\right),
\]
where $g_t = \sup\{s < t;\ B_s = 0\}$.
Moreover, from excursion theory, we obtain the following equalities:
\[
P(\ell_T \in d\lambda) = \exp(-\lambda)\, d\lambda\,,
\]
and, for any measurable functional $F : C([0,\infty), \mathbb{R}) \to \mathbb{R}_+$,
\[
E[F(B_u, u \leq g_T)\ |\ \ell_T = \lambda] = \exp(\lambda)\; E\left[F(B_u, u \leq \tau_{\lambda}) \exp\!\Big(-\frac{\tau_{\lambda}}{2}\Big)\right] \tag{11.18}
\]
where $(\tau_{\lambda}, \lambda \geq 0)$ denotes the right-continuous inverse of $(\ell_t, t \geq 0)$, and $T$ denotes here an exponential time with parameter 1/2, which is assumed to be independent of $B$.
Thanks to the first description of $n_{1/2}$, which is given in 11.2, the following formula is easily obtained:
\[
E\left[S^+_{\tau_{\lambda}} \leq x;\ S^-_{\tau_{\lambda}} \leq y;\ \exp\!\Big(-\frac{\tau_{\lambda}}{2}\Big)\right] = \exp\!\Big(-\frac{\lambda}{2}\,(\coth x + \coth y)\Big) \tag{11.19}
\]
Furthermore, we remark that, by scaling:
\[
\text{(i)}\ \sqrt{g_T} \overset{\text{(law)}}{=} |G|\,, \qquad\text{and (ii)}\ \big(S^+_{g_T},\ S^-_{g_T},\ \ell_T\big) \overset{\text{(law)}}{=} |G|\,\big(s_b^+,\ s_b^-,\ \ell_b\big)\,,
\]
where we have used the notation introduced at the beginning of this sub-paragraph 11.3. Now, in order to obtain formula (11.17), it remains to put together (i) and (ii) on one hand, and (11.18) and (11.19) on the other hand.
11.4 A strengthening of Knight's identity, and its relation to the Riemann zeta function

(11.4.1) In Chapter 9 of these Notes, we have given a proof, and some extensions, of Knight's identity:
\[
\text{for } \alpha \in \mathbb{R}\,,\qquad E\left[\exp\left(-\frac{\alpha^2}{2}\; \frac{\tau}{M_{\tau}^2}\right)\right] = \frac{2\alpha}{\mathrm{sh}(2\alpha)} \tag{11.20}
\]
where, to simplify notations, we write $\tau$ instead of $\tau_1$.
This identity (11.20) may be presented in the equivalent form:
\[
\frac{\tau}{M_{\tau}^2} \overset{\text{(law)}}{=} T^{(3)}_2\ \big(:= \inf\{u;\ R_u = 2\}\big). \tag{11.21}
\]
We now remark that the identity (11.20), or (11.21), may be strengthened as follows.

Theorem 11.3 (Pitman-Yor [79]) Define $X = \dfrac{S^+_{\tau}}{S^+_{\tau} + S^-_{\tau}}$, where $S^+_t = \sup_{s\leq t} B_s$, $S^-_t = -\inf_{s\leq t} B_s$.
Then, $X$ is uniformly distributed on $[0,1]$, and independent of $\dfrac{\tau}{(S^+_{\tau}+S^-_{\tau})^2}$, which is distributed as $T^{(2)} \overset{\text{(law)}}{=} T^{(3)}_1 + \tilde{T}^{(3)}_1$.
Equivalently, one has:
\[
E\left[\exp\left(-\frac{\alpha^2}{2}\; \frac{\tau}{(S^+_{\tau}+S^-_{\tau})^2}\right)\right] = \left(\frac{\alpha}{\mathrm{sh}\,\alpha}\right)^2.
\]
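Since $T^{(2)} = T^{(3)}_1 + \tilde{T}^{(3)}_1$, the Laplace transform above can be probed by simulation; this sketch (ours) samples $T^{(3)}_1$ through the classical time-inversion identity $T^{(3)}_1 \overset{\text{(law)}}{=} 1/(\sup_{u\leq 1} R^{(3)}_u)^2$ mentioned in Exercise 11.1, with grid and sample sizes chosen arbitrarily:

```python
import math
import random

random.seed(7)
n, paths = 500, 1200
step = math.sqrt(1.0 / n)

def sample_T1():
    # T_1^(3), first hitting time of 1 by BES(3) from 0, has the law of
    # 1/(sup_{u<=1} R_u)^2 by time inversion, R being the norm of a 3-d BM
    x = y = z = 0.0
    best = 0.0
    for _ in range(n):
        x += random.gauss(0.0, step)
        y += random.gauss(0.0, step)
        z += random.gauss(0.0, step)
        r2 = x * x + y * y + z * z
        if r2 > best:
            best = r2
    return 1.0 / best

# E[exp(-(1/2) T^(2))] should equal (alpha/sh alpha)^2 at alpha = 1
acc = 0.0
for _ in range(paths):
    acc += math.exp(-0.5 * (sample_T1() + sample_T1()))
estimate = acc / paths
target = (1.0 / math.sinh(1.0)) ** 2
```

The discrete supremum is biased slightly low, so the match is only at the percent level.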
Theorem 11.3 constitutes indeed a strengthening of Knight's identity (11.20), since we can write:
\[
\frac{\tau}{M_{\tau}^2} = \frac{\tau}{(S^+_{\tau}+S^-_{\tau})^2}\; \big(\max(X, 1-X)\big)^{-2}
\]
and it is easily shown that:
\[
T^{(3)}_2 \overset{\text{(law)}}{=} T^{(2)}\, \big(\max(X, 1-X)\big)^{-2},
\]
where, on the right-hand side, $T^{(2)}$ and $X$ are assumed to be independent.
Exercise 11.3
1. Prove that, if $X$ is uniformly distributed on $[0,1]$, then $V = \max(X, 1-X)$ is uniformly distributed on $[1/2, 1]$.
2. Prove that the identity in law $T^{(3)}_2 \overset{\text{(law)}}{=} \dfrac{T^{(2)}}{V^2}$ we just encountered above agrees with the identity in law $(*)$ $\dfrac{U^2}{T^{(2)}} \overset{\text{(law)}}{=} \dfrac{Y^2}{T^{(3)}_1}$ derived in question 4 of Exercise 11.1.
Hint: Remark that $U \overset{\text{(law)}}{=} (2Y)\,V$, where, on the right-hand side, $Y$ and $V$ are independent.
A simple proof of Theorem 11.3 may be deduced from the identity (11.19), once we use the scaling property of BM to write the left-hand side of (11.19) as:
\[
E\left[\lambda S^+_{\tau} \leq x;\ \lambda S^-_{\tau} \leq y;\ \exp\!\Big(-\frac{\lambda^2}{2}\,\tau\Big)\right].
\]
However, a more complete explanation of Theorem 11.3 may be given, in terms of a Vervaat-type theorem for the pseudo-bridge $\left(\frac{1}{\sqrt{\tau}}\, B_{u\tau}\;;\ u \leq 1\right)$.
Theorem 11.4 ([79]; we keep the above notation.) Let $\rho$ be the (unique) instant in the interval $[0, \tau]$ at which $(B_u;\ u \leq \tau)$ attains its minimum. Define the process $\tilde{B}$ as
\[
(\tilde{B}(t);\ t \leq \tau) := \big(B((\rho + t)\,[\mathrm{mod}\ \tau]) - B(\rho);\ t \leq \tau\big)
\]
Then, denoting by $(e(u);\ u \leq 1)$ the normalized Brownian excursion, we have:
\[
E\left[F\Big(\frac{1}{\sqrt{\tau}}\,\tilde{B}(u\tau);\ u \leq 1\Big)\right] = \sqrt{\frac{2}{\pi}}\; E\big[m_e\; F(e(u);\ u \leq 1)\big]
\]
for any measurable $F : C([0,1]; \mathbb{R}_+) \to \mathbb{R}_+$.
(11.4.2) The above strengthening of Knight's identity enables us to present now a very concise discussion of the identity in law (11.4), which we write in the equivalent form:
\[
E[f(T^{(2)})] = E\left[f\left(\frac{1}{(\pi^2/4)\, T^{(2)}}\right) \sqrt{\frac{\pi}{2}\, T^{(2)}}\,\right]. \tag{11.22}
\]
Indeed, the left-hand side of (11.22) is, from Theorem 11.3, equal to
\[
E\left[f\left(\frac{\tau}{(S^+_{\tau}+S^-_{\tau})^2}\right)\right],
\]
but, now from Theorem 11.4, this expression is also equal to:
\[
\sqrt{\frac{2}{\pi}}\; E\left[f\left(\frac{1}{(s_b^+ + s_b^-)^2}\right)(s_b^+ + s_b^-)\right]. \tag{11.23}
\]
Moreover, we proved in 11.3 that:
\[
(s_b^+ + s_b^-)^2 \overset{\text{(law)}}{=} \frac{\pi^2}{4}\, T^{(2)},
\]
so that the quantity in (11.23) is equal to:
\[
E\left[f\left(\frac{1}{(\pi^2/4)\, T^{(2)}}\right) \sqrt{\frac{\pi}{2}\, T^{(2)}}\,\right],
\]
which is the right-hand side of (11.22).
11.5 Another probabilistic representation of the Riemann zeta function

Given the relations, discussed above, between the distributions of $m_e$ and $T^{(2)}$, the identity in law:
\[
h_e \overset{\text{(law)}}{=} 2\, m_e\,, \quad\text{where } h_e := \int_0^1 \frac{ds}{e(s)} \tag{11.24}
\]
obviously provides us with another probabilistic representation of the Riemann zeta function.
It will be shown below that (11.24) is a consequence of the following

Theorem 11.5 (Jeulin [52]) Let $(\ell^a_e;\ a \geq 0)$ be the family of local times of $(e(s), s \leq 1)$, and define:
\[
k(t) = \sup\Big\{y \geq 0;\ \int_y^{\infty} dx\; \ell^x_e > t\Big\}\,.
\]
Then, the process $\big(\tfrac{1}{2}\,\ell^{k(t)}_e;\ t \leq 1\big)$ is a normalized Brownian excursion.
We now prove (11.24). We deduce from Theorem 11.5 that:
\[
h_e \overset{\text{(law)}}{=} \int_0^1 \frac{dt}{\frac{1}{2}\,\ell^{k(t)}_e}\,,
\]
and the right-hand side of this identity in law is equal to $2\, m_e$, which is obtained by making the change of variables $y = k(t)$ (so that $dt = \ell^y_e\, dy$).
11.6 Some generalizations related to Bessel processes

In this paragraph, the sequence $\mathbb{N}^*$ of positive integers will be replaced by the sequence of the zeros of the Bessel function $J_{\mu}$.
Another important change from the previous paragraphs is that, instead of studying $m_{\mu}^2$, or $\sigma_{\mu}+\sigma'_{\mu}$ as in paragraph 11.2, in connection with the Riemann zeta function, it will be shown in this paragraph that the "Bessel zeta function" $\zeta_{\nu}$ which will be considered now has some close relationship with the time spent below 1 by a certain Bessel process.
(11.6.1) "Zeta functions" and probability.
It may be fruitless, for our purpose, to define which properties a "zeta function" should satisfy, e.g.: an Euler-product representation, or a functional equation, or ...; instead, we simply associate to a sequence $\lambda^* = (\lambda_n;\ n \geq 1)$ of strictly positive real numbers the "zeta function":
\[
\zeta_{\lambda^*}(s) = \sum_{n=1}^{\infty} \frac{1}{\lambda_n^s}\,, \quad s > 0\,.
\]
In the sequel, we shall assume that: $\zeta_{\lambda^*}(1) = \sum_{n=1}^{\infty} \frac{1}{\lambda_n} < \infty$. We then have the elementary
Proposition 11.1 Define the probability density:
\[
\theta_{\lambda^*}(t) = c_{\lambda^*} \sum_{n=1}^{\infty} e^{-\lambda_n t}\,, \quad\text{with } c_{\lambda^*} = \frac{1}{\zeta_{\lambda^*}(1)}\,.
\]
Then, if $X_{\lambda^*}$ is a random variable with distribution $\theta_{\lambda^*}(t)\,dt$, we have:
\[
\zeta_{\lambda^*}(s)\,\Gamma(s) = \zeta_{\lambda^*}(1)\; E\big[(X_{\lambda^*})^{s-1}\big]\,, \quad s > 0\,. \tag{11.25}
\]
Proof: This is an immediate consequence of the equality:
\[
\Gamma(s)\, \frac{1}{a^s} = \int_0^{\infty} dx\; x^{s-1}\, e^{-ax}\,, \quad a > 0\,,\ s > 0\,. \qquad\square
\]
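Proposition 11.1 can be illustrated with the simplest choice $\lambda_n = n^2$ (our example, not the book's), for which $\zeta_{\lambda^*}(s) = \zeta(2s)$ and $\zeta_{\lambda^*}(1) = \pi^2/6$; at $s = 2$, (11.25) predicts $E[X_{\lambda^*}] = \Gamma(2)\zeta(4)/\zeta(2) = \pi^2/15$:

```python
import math

# lambda_n = n^2: theta(t) = c * sum_n exp(-n^2 t), with c = 1/zeta(2)
c = 6.0 / math.pi ** 2

def theta(t, N=300):
    return c * sum(math.exp(-n * n * t) for n in range(1, N + 1))

# E[X^{s-1}] for s = 2 by trapezoidal quadrature; the integrand t*theta(t)
# vanishes at 0 and decays like exp(-t) at infinity
h, T = 0.005, 40.0
vals = [t * theta(t) for t in (i * h for i in range(1, int(T / h) + 1))]
moment = h * (sum(vals) - 0.5 * vals[-1])
predicted = math.pi ** 2 / 15
```

The quadrature reproduces the predicted first moment to roughly four digits, despite the $t^{-1/2}$ blow-up of the density itself at 0.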
(11.6.2) Some examples related to Bessel processes.
a) In this paragraph, we associate to any $\nu > 0$ the sequence:
\[
\nu^* = \big(j^2_{\nu-1,n}\;;\ n \geq 1\big) \tag{11.26}
\]
where $(j_{\mu,n};\ n \geq 1)$ denotes the increasing sequence of the simple, positive zeros of the Bessel function $J_{\mu}$ (see Watson [90], p. 498).
We shall write $\zeta_{\nu}(s)$ for $\zeta_{\nu^*}(s)$, and $\theta_{\nu}(t)$ for $\theta_{\nu^*}(t)$. The aim of this paragraph is to exhibit a random variable $X_{\nu} \equiv X_{\nu^*}$ which is distributed as $\theta_{\nu}(t)\,dt$.
The following series representation shall play an essential rôle:
\[
\frac{1}{x}\; \frac{I_{\nu}}{I_{\nu-1}}(x) = 2 \sum_{n=1}^{\infty} \frac{1}{x^2 + j^2_{\nu-1,n}}\,, \quad x > 0 \tag{11.27}
\]
(see Watson [90], p. 498).
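For $\nu = 3/2$ the representation (11.27) is elementary to check numerically, since $I_{3/2}(x)/I_{1/2}(x) = \coth x - 1/x$ and $j_{1/2,n} = n\pi$ (as recalled in (11.6.3) below); the truncation level here is our own choice:

```python
import math

# nu = 3/2 case of (11.27): (coth x - 1/x)/x = 2 sum_n 1/(x^2 + n^2 pi^2)
def lhs(x):
    return (1.0 / math.tanh(x) - 1.0 / x) / x

def rhs(x, N=200000):
    s = sum(1.0 / (x * x + (n * math.pi) ** 2) for n in range(1, N + 1))
    return 2.0 * s + 2.0 / (math.pi ** 2 * N)  # integral estimate of the tail

pairs = [(lhs(x), rhs(x)) for x in (0.5, 1.0, 3.0)]
```

With the tail correction, both sides agree to about eight digits at each test point.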
Now, we may prove the following

Theorem 11.6 1) Let $y > 0$, and $P^{\nu}_y$ the law of the Bessel process $(R_t, t \geq 0)$ with index $\nu$, starting from $y$ at time 0. Then, we have:
\[
E^{\nu}_y\left[\exp\Big(-\alpha \int_0^{\infty} du\; 1_{(R_u \leq y)}\Big)\right] = \frac{2\nu}{y\sqrt{2\alpha}}\; \frac{I_{\nu}}{I_{\nu-1}}\big(y\sqrt{2\alpha}\big) \tag{11.28}
\]
2) Consequently, under $P^{\nu}_y$, the distribution of the random variable:
\[
X_y = \int_0^{\infty} du\; 1_{(R_u \leq y)} \quad\text{is}\quad \frac{1}{2y^2}\; \theta_{\nu}\Big(\frac{t}{2y^2}\Big)\, dt\,,
\]
where:
\[
\theta_{\nu}(t) = (4\nu) \sum_{n=1}^{\infty} e^{-j^2_{\nu-1,n}\, t}\,, \quad t \geq 0 \qquad\Big(\text{since: } \zeta_{\nu}(1) = \frac{1}{4\nu}\Big) \tag{11.29}
\]
Corollary 11.6.1 For any $y > 0$, a candidate for the variable $X_{\nu}$ is
\[
\frac{1}{2y^2}\, X_y \equiv \frac{1}{2y^2} \int_0^{\infty} du\; 1_{(R_u \leq y)}\,, \quad\text{under } P^{\nu}_y\,.
\]
Consequently, the following probabilistic representation of $\zeta_{\nu}$ holds:
\[
\zeta_{\nu}(s)\,\Gamma(s) = \frac{\zeta_{\nu}(1)}{(2y^2)^{s-1}}\; E^{\nu}_y\left[\Big(\int_0^{\infty} du\; 1_{(R_u \leq y)}\Big)^{s-1}\right], \quad\text{with } \zeta_{\nu}(1) = \frac{1}{4\nu}\,. \tag{11.30}
\]
Proof of Theorem 11.6:
1) It may now be easier to use the following notation: $\big(R^{(\nu)}_y(u),\ u \geq 0\big)$ denotes the Bessel process with index $\nu$, starting at $y$ at time 0. Then, we have seen, and proved, in Chapter 4, the Ciesielski-Taylor identities:
\[
\int_0^{\infty} du\; 1_{(R^{(\nu)}_0(u) \leq y)} \overset{\text{(law)}}{=} T_y\big(R^{(\nu-1)}_0\big)
\]
Hence, with the help of this remark, and of the strong Markov property, we obtain:
\[
E^{\nu}_y\left[\exp\Big(-\alpha \int_0^{\infty} du\; 1_{(R_u \leq y)}\Big)\right] = \frac{E^{\nu-1}_0(\exp -\alpha T_y)}{E^{\nu}_0(\exp -\alpha T_y)}
\]
and, to deduce formula (11.28), it suffices to use the following identity:
\[
E^{\mu}_0(\exp -\alpha T_y) = \frac{(y\sqrt{2\alpha})^{\mu}}{2^{\mu}\,\Gamma(\mu+1)\, I_{\mu}(y\sqrt{2\alpha})}\,, \tag{11.31}
\]
for $\mu = \nu$, and $\mu = \nu - 1$ (see Kent [56], for example).
2) The proof of the second statement of the theorem now follows immediately from formulae (11.28) and (11.27). $\square$
We now recall (see Chapter 6, in particular) that, if $(B_t, t \geq 0)$ denotes Brownian motion starting from 0, then $(\exp(B_t + \nu t);\ t \geq 0)$ may be represented as:
\[
\exp(B_t + \nu t) = R^{(\nu)}\Big(\int_0^t du\; \exp 2(B_u + \nu u)\Big)\,, \tag{11.32}
\]
where $(R^{(\nu)}(t), t \geq 0)$ denotes here the Bessel process with index $\nu$, starting from 1 at time 0. Hence, time-changing $R^{(\nu)}$ into $(\exp(B_t + \nu t),\ t \geq 0)$ with the help of formula (11.32), we obtain the following representation of $\zeta_{\nu}(s)$.
Corollary 11.6.2 Let $(B_t, t \geq 0)$ be a real-valued Brownian motion starting from 0. Then, we have, for any $\nu > 0$:
\[
\zeta_{\nu}(s)\,\Gamma(s) = \frac{\zeta_{\nu}(1)}{2^{s-1}}\; E\left[\Big(\int_0^{\infty} du\; \exp 2(B_u + \nu u)\, 1_{(B_u + \nu u \leq 0)}\Big)^{s-1}\right] \tag{11.33}
\]
(11.6.3) The particular case $\nu = \frac{3}{2}$.
We then have: $\nu - 1 = \frac{1}{2}$, and we are interested, from the definition of $\nu^*$ given in (11.26), in the sequence of positive zeros of
\[
J_{1/2}(z) = \Big(\frac{2}{\pi z}\Big)^{1/2} \sin(z)\,.
\]
Therefore, we have: $j_{1/2,n} = n\pi$.
Consequently, in the particular case $\nu = 3/2$, we may now write down the main result contained in Theorem 11.6 and its Corollaries, in the following form
Proposition 11.2 We simply write $\zeta_R(s) = \sum_{n=1}^{\infty} \frac{1}{n^s}$. Then, we have
\[
3\cdot 2^{s/2}\; \frac{\Gamma\big(\frac{s}{2}\big)}{\pi^s}\; \zeta_R(s) = E^{3/2}_1\left[\Big(\int_0^{\infty} du\; 1_{(R_u \leq 1)}\Big)^{\frac{s}{2}-1}\right] \tag{11.34}
\]
\[
= E\left[\Big(\int_0^{\infty} dt\; \exp(2B_t + 3t)\, 1_{(2B_t + 3t \leq 0)}\Big)^{\frac{s}{2}-1}\right]
\]
11.7 Some relations between $X_{\nu}$ and $\Sigma_{\nu-1} \equiv \sigma_{\nu-1} + \sigma'_{\nu-1}$

(11.7.1) We begin with the most important case $\nu = \frac{3}{2}$, for which we simply write $X$ for $X_{\nu}$ and $\Sigma$ for $\Sigma_{\nu-1}$. Recall that, at the beginning of this Chapter, we used $T^{(2)}$ as a notation for $\Sigma$, which now becomes more convenient.
Theorem 11.7 Let $X = \int_0^{\infty} ds\; 1_{(R^{(5)}_s \leq 1)}$, where $(R^{(5)}_s,\ s \geq 0)$ denotes the Bessel process with dimension 5 (or index 3/2), starting from 1. Moreover, define: $\Sigma \overset{\text{(law)}}{=} \sigma + \sigma'$, where $\sigma$ and $\sigma'$ are two independent copies of the first hitting time of 1 by $\mathrm{BES}^{(3)}_0$.
Consider $\tilde{\Sigma}$, a random variable$^3$ which satisfies:
\[
\text{for every Borel function } f : \mathbb{R}_+ \to \mathbb{R}_+\,,\quad E\big[f(\tilde{\Sigma})\big] = \frac{3}{2}\; E[f(\Sigma)\,\Sigma] \tag{11.35}
\]
Then, we have:
\[
\text{a)}\qquad X \overset{\text{(law)}}{=} H\, \tilde{\Sigma} \tag{11.36}
\]
where $H$ and $\tilde{\Sigma}$ are independent, and
\[
P(H \in dh) = \Big(\frac{1}{\sqrt{h}} - 1\Big)\, dh \quad (0 < h < 1)
\]
or, equivalently:
\[
H \overset{\text{(law)}}{=} U V^2 \overset{\text{(law)}}{=} (1 - \sqrt{U})^2\,,
\]
where $U$ and $V$ denote two independent uniform r.v.'s;
\[
\text{b)}\qquad \tilde{\Sigma} \overset{\text{(law)}}{=} \Sigma + X \tag{11.37}
\]
where, on the right-hand side, $\Sigma$ and $X$ are assumed to be independent.
Remark: The identity in law $1 - \sqrt{U} \overset{\text{(law)}}{=} V\sqrt{U}$, which appears at the end of point a) above, is a particular case of the identity in law between beta variables:
\[
Z_{a,b+c} \overset{\text{(law)}}{=} Z_{a,b}\, Z_{a+b,c} \quad\text{(see paragraph (8.1))}
\]
with, here: $a = b = c = 1$.
Proof: a) Both identities in law (11.36) and (11.37) may be deduced from the explicit knowledge of the Laplace transforms of $X$ and $\tilde{\Sigma}$, which are given by:
\[
E[\exp(-\alpha X)] = 3\; \frac{\sqrt{2\alpha}\,\coth(\sqrt{2\alpha}) - 1}{2\alpha} \tag{11.38}
\]
(this is a particular case of formula (11.28)), and
\[
E\big[\exp(-\alpha \tilde{\Sigma})\big] = 3\; \frac{\sqrt{2\alpha}\,\coth(\sqrt{2\alpha}) - 1}{\mathrm{sh}^2(\sqrt{2\alpha})} \tag{11.39}
\]
The identity in law (11.37) follows immediately from (11.38) and (11.39).
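Formula (11.38), and the moment $E[X] = 2/15$ it implies by expansion at $\alpha = 0$, can be probed by a crude Euler scheme (ours; step, horizon and guard level are arbitrary) for the radial SDE $dR = dW + \frac{2}{R}\,dt$ of $\mathrm{BES}(5)$ started at 1:

```python
import math
import random

random.seed(11)
dt, horizon, paths = 0.005, 25.0, 600
steps = int(horizon / dt)
sq = math.sqrt(dt)

def occupation_below_one():
    # Euler scheme for BES(5) from 1 (index nu = 3/2); the process is
    # transient, so truncating at `horizon` loses very little occupation time
    r, occ = 1.0, 0.0
    for _ in range(steps):
        if r <= 1.0:
            occ += dt
        r += 2.0 / r * dt + random.gauss(0.0, sq)
        if r < 0.05:  # numerical guard; BES(5) a.s. stays away from 0
            r = 0.05
    return occ

acc_exp, acc_mean = 0.0, 0.0
for _ in range(paths):
    x = occupation_below_one()
    acc_exp += math.exp(-x)       # alpha = 1
    acc_mean += x
mc_laplace = acc_exp / paths
mc_mean = acc_mean / paths
s2 = math.sqrt(2.0)
target = 3.0 * (s2 / math.tanh(s2) - 1.0) / 2.0  # (11.38) at alpha = 1
```

Both the Laplace transform at $\alpha = 1$ and the mean come out within a couple of percent of the exact values.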
$^3$ That is: $\tilde{\Sigma}$ is obtained by size-biased sampling of $\Sigma$.

b) It may be interesting to give another proof of the identity in law (11.36). This second proof, which is in fact how the identity (11.36) was discovered, is obtained by comparing formula (11.34) with the definition of the function $\xi(s)$, or rather with formula (11.3). By doing so, we obtain:
\[
E\big[X^{\frac{s}{2}-1}\big] = \frac{3}{s(s-1)}\; E\big[\Sigma^{s/2}\big]\,,
\]
and, changing $s$ into $2k+2$, we get:
\[
E[X^k] = \frac{1}{(k+1)(2k+1)}\, \Big(\frac{3}{2}\, E\big[\Sigma^{k+1}\big]\Big) \quad (k \geq 0)\,.
\]
Now, we remark that
\[
E[H^k] = E[U^k]\; E[V^{2k}] \equiv \frac{1}{(k+1)(2k+1)}\,,
\]
so that
\[
E[X^k] = E\big[(H\tilde{\Sigma})^k\big] \quad (k \geq 0)\,,
\]
which implies (11.36). $\square$
Corollary 11.7.1 (We use the same notations as in Theorem 11.7.)
a) The random variable $\Sigma$ satisfies the identity in law
\[
\tilde{\Sigma} \overset{\text{(law)}}{=} \Sigma + H\, \tilde{\Sigma}_1 \tag{11.40}
\]
where, on the right-hand side, $\tilde{\Sigma}_1$ is independent of the pair $(\Sigma, H)$, and is distributed as $\tilde{\Sigma}$.
b) Equivalently, the function $g(\lambda) := E[\exp(-\lambda \Sigma)] \equiv \left(\dfrac{\sqrt{2\lambda}}{\mathrm{sh}(\sqrt{2\lambda})}\right)^2$ satisfies:
\[
-\sqrt{\lambda}\; \frac{g'(\lambda)}{g(\lambda)} = \frac{1}{2} \int_0^{\lambda} \frac{dx}{x^{3/2}}\; (1 - g(x))\,. \tag{11.41}
\]
Proof: The identity (11.40) follows immediately from (11.36) and (11.37). We then deduce from (11.40) the identity
\[
g'(\lambda) = g(\lambda) \int_0^1 dh\, \Big(\frac{1}{\sqrt{h}} - 1\Big)\, g'(\lambda h)\,,
\]
from which (11.41) follows, using integration by parts. $\square$
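The functional equation (11.41) is easy to verify numerically from the explicit form of $g$; this check (ours) uses a central difference for $g'$ and the substitution $x = u^2$ to remove the integrable singularity of the integrand at 0:

```python
import math

def g(lam):
    # g(lambda) = E[exp(-lambda Sigma)] = (sqrt(2 lam)/sh(sqrt(2 lam)))^2
    if lam == 0.0:
        return 1.0
    x = math.sqrt(2.0 * lam)
    return (x / math.sinh(x)) ** 2

lam, h = 1.0, 1e-5
lhs = -math.sqrt(lam) * (g(lam + h) - g(lam - h)) / (2.0 * h) / g(lam)

# RHS of (11.41): (1/2) int_0^lam x^(-3/2)(1-g(x)) dx = int_0^sqrt(lam) (1-g(u^2))/u^2 du,
# whose integrand tends to the finite limit 2/3 as u -> 0
M = 20000
du = math.sqrt(lam) / M
rhs = 0.0
for i in range(M):
    u = (i + 0.5) * du
    rhs += (1.0 - g(u * u)) / (u * u)
rhs *= du
```

At $\lambda = 1$ both sides evaluate to about $0.592$, in agreement to far better than the asserted tolerance.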
(11.7.2) We now present an extension, for any $\nu$, of the identity in law (11.37).

Proposition 11.3 Let $X_{\nu} = \int_0^{\infty} ds\; 1_{(R^{\nu}_s \leq 1)}$, where $(R^{\nu}_s,\ s \geq 0)$ denotes the Bessel process with index $\nu$, starting from 1, and define $\Sigma_{\nu-1} = \sigma_{\nu-1} + \sigma'_{\nu-1}$, where $\sigma_{\nu-1}$ and $\sigma'_{\nu-1}$ are two independent copies of the first hitting time of 1 by $\mathrm{BES}^{\nu-1}_0$, the Bessel process with index $\nu - 1$ starting from 0.
Consider finally $\tilde{\Sigma}_{\nu-1}$, a random variable which satisfies:
\[
\text{for every Borel function } f : \mathbb{R}_+ \to \mathbb{R}_+\,,\quad E\big[f(\tilde{\Sigma}_{\nu-1})\big] = \nu\, E\big[f(\Sigma_{\nu-1})\,\Sigma_{\nu-1}\big]
\]
Then, we have
\[
\tilde{\Sigma}_{\nu-1} \overset{\text{(law)}}{=} \Sigma_{\nu-1} + X_{\nu} \tag{11.42}
\]
where the random variables on the right-hand side are assumed to be independent.
Proof: From formula (11.31), we deduce:
\[
E\big[\exp(-\lambda \Sigma_{\nu-1})\big] = \left(\frac{(\sqrt{2\lambda})^{\nu-1}}{2^{\nu-1}\,\Gamma(\nu)\, I_{\nu-1}(\sqrt{2\lambda})}\right)^2,
\]
so that, taking derivatives with respect to $\lambda$ on both sides, we obtain:
\[
E\big[\Sigma_{\nu-1} \exp(-\lambda \Sigma_{\nu-1})\big] = \left(\frac{x^{\nu-1}}{2^{\nu-1}\,\Gamma(\nu)\, I_{\nu-1}(x)}\right)^2 \left(\frac{2}{x}\; \frac{I_{\nu}}{I_{\nu-1}}(x)\right) \tag{11.43}
\]
where $x = \sqrt{2\lambda}$, and we have used the recurrence formula:
\[
(\nu-1)\, I_{\nu-1}(x) - x\, I'_{\nu-1}(x) = -x\, I_{\nu}(x)\,.
\]
It now suffices to multiply both sides of (11.43) by $\nu$ and to use formula (11.28) to conclude. $\square$
Remark: The comparison of Theorem 11.7 and Proposition 11.3 suggests several questions, two of which are:
(i) is there an extension of the identity in law (11.36) for any $\nu$, in the form: $X_{\nu} \overset{\text{(law)}}{=} H_{\nu}\, \tilde{\Sigma}_{\nu-1}$, for some variable $H_{\nu}$, which would be independent of $\tilde{\Sigma}_{\nu-1}$?
(ii) is there any relation between the functional equation for $\zeta$ and the identity in law (11.40), or equivalently (11.41)?
11.8 $\zeta_{\nu}(s)$ as a function of $\nu$

In this paragraph, we show that the dependency in $\nu$ of the function $\zeta_{\nu}(s)$ may be understood as a consequence of the following Girsanov-type relationship between the probability measures $P^{\nu}_y$.

Theorem 11.8 Let $y > 0$. On the canonical space $\Omega = C(\mathbb{R}_+, \mathbb{R}_+)$, we define $R_t(\omega) = \omega(t)$ $(t \geq 0)$, and $L_y(\omega) = \sup\{t \geq 0 : R_t(\omega) = y\}$. Then, as $\nu > 0$ varies, the measures $P^{\nu}_y\big|_{\mathcal{F}_{L_y}}$ are all mutually absolutely continuous. More precisely, there exists a $\sigma$-finite measure $M_y$ on $(\Omega, \mathcal{F}_{L_y})$ such that, for every variable $Z \geq 0$, which is $\mathcal{F}_{L_y}$-measurable, and every $\nu > 0$, we have:
\[
M_y(Z) = \frac{1}{\nu}\; E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\right]. \tag{11.44}
\]
Proof: We consider the right-hand side of (11.44), and we disintegrate $P^{\nu}_y$ with respect to the law of $L_y$. We obtain:
\[
\frac{1}{\nu}\, E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\right] = \frac{1}{\nu} \int P^{\nu}_y(L_y \in dt)\; E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\ \Big|\ L_y = t\right].
\]
Now, it is well known that conditioning with respect to $L_y = t$ amounts to conditioning with respect to $R_t = y$ (see, for example, Revuz-Yor [81], Exercise (1.16), p. 378, or Fitzsimmons-Pitman-Yor [41]); therefore, we have:
\[
E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\ \Big|\ L_y = t\right] = E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{t} \frac{du}{R_u^2}\Big)\ \Big|\ R_t = y\right] \tag{11.45}
\]
Next, we use the absolute continuity relationship between $P^{\nu}_y$ and $P^0_y$:
\[
P^{\nu}_y\big|_{\mathcal{F}_t} = \Big(\frac{R_t}{y}\Big)^{\nu} \exp\Big(-\frac{\nu^2}{2} \int_0^t \frac{du}{R_u^2}\Big)\cdot P^0_y\big|_{\mathcal{F}_t}\,,
\]
so that the expression in (11.45) is in fact equal to:
\[
\frac{p^0_t(y,y)}{p^{\nu}_t(y,y)}\; E^0_y[Z\ |\ R_t = y]\,,
\]
where $\{p^{\nu}_t(x,y)\}$ is the family of densities of the semigroup $P^{\nu}_t(x; dy) \equiv p^{\nu}_t(x,y)\,dy$ associated to $\{P^{\nu}_x\}$.
Hence, the first expression we considered in the proof is equal to:
\[
\frac{1}{\nu}\, E^{\nu}_y\left[Z \exp\Big(\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\right] = \int_0^{\infty} \frac{P^{\nu}_y(L_y \in dt)}{\nu\, p^{\nu}_t(y,y)}\; p^0_t(y,y)\; E^0_y[Z\ |\ R_t = y]\,. \tag{11.46}
\]
However, it is known that:
\[
P^{\nu}_y(L_y \in dt) = \nu\, p^{\nu}_t(y,y)\, dt \quad\text{(see Pitman-Yor [72])}
\]
and finally, the expression in (11.46), which is equal to:
\[
\int_0^{\infty} dt\; p^0_t(y,y)\; E^0_y[Z\ |\ R_t = y]\,,
\]
does not depend on $\nu$.
Corollary 11.8.1 1) Let $\tilde{\theta}_0(t)\,dt$ be the distribution of $X_1$ under the $\sigma$-finite measure $M_1$. Then, the distribution of $X_y$ under $M_y$ is $\tilde{\theta}_0\Big(\dfrac{t}{y^2}\Big)\, \dfrac{dt}{y^2}$.
2) For every $y > 0$, and $t > 0$, we have:
\[
\frac{2}{\tilde{\theta}_0\big(\frac{t}{y^2}\big)}\; \sum_{n=1}^{\infty} e^{-(j^2_{\nu-1,n})\left(\frac{t}{2y^2}\right)} = M_y\left(\exp\Big(-\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\ \Big|\ X_y = t\right). \tag{11.47}
\]
3) For every $\nu > 0$, we have:
\[
\zeta_{\nu}(s)\,\Gamma(s) = \frac{1}{4}\; \frac{1}{(2y^2)^{s-1}}\; M_y\left(\Big(\int_0^{\infty} du\; 1_{(R_u \leq y)}\Big)^{s-1} \exp\Big(-\frac{\nu^2}{2} \int_0^{L_y} \frac{du}{R_u^2}\Big)\right). \tag{11.48}
\]
Consequently, the left-hand side of (11.47), i.e. the "theta function of index $\nu$", and the left-hand side of (11.48), i.e. the "zeta function of index $\nu$", are Laplace transforms in $\Big(\dfrac{\nu^2}{2}\Big)$.
The last statement of the previous Corollary is confirmed by the explicit formulae found in Watson ([90], p. 502) for $\zeta_{\nu}(n)$, for $n$ a small integer (Watson uses the notation $\sigma^{(s)}_{\nu-1}$ instead of our notation $\zeta_{\nu}(s)$).
In the following formulae, the function $\nu \to \zeta_{\sqrt{\nu}}(n)$ appears to be a completely monotonic function of $\nu$, as a sum (with positive coefficients) or a product of completely monotonic functions. Here are these formulae:
\[
\zeta_{\sqrt{\nu}}(1) = \frac{1}{2^2\,\sqrt{\nu}} \qquad\qquad
\zeta_{\sqrt{\nu}}(3) = \frac{1}{2^5\,\nu^{3/2}\,(\sqrt{\nu}+1)(\sqrt{\nu}+2)} \tag{11.49}
\]
\[
\zeta_{\sqrt{\nu}}(2) = \frac{1}{2^4\,\nu\,(\sqrt{\nu}+1)} \qquad\qquad
\zeta_{\sqrt{\nu}}(4) = \frac{5\sqrt{\nu}+6}{2^8\,\nu^2\,(\sqrt{\nu}+1)^2(\sqrt{\nu}+2)(\sqrt{\nu}+3)}
\]
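These formulae can be checked directly against numerically computed Bessel zeros; this sketch (ours, using the integral representation of $J_1$ and arbitrary truncation levels) takes $\sqrt{\nu} = 2$, i.e. $\zeta_2(2) = \sum_n j_{1,n}^{-4} = \frac{1}{2^4\cdot 4\cdot 3} = \frac{1}{192}$:

```python
import math

def J1(x, M=3000):
    # Bessel J_1 via (1/pi) int_0^pi cos(t - x sin t) dt (midpoint rule)
    h = math.pi / M
    s = 0.0
    for i in range(M):
        t = (i + 0.5) * h
        s += math.cos(t - x * math.sin(t))
    return s * h / math.pi

def zero_in(a, b):
    # bisection; the bracket below contains exactly one sign change of J_1
    fa = J1(a)
    for _ in range(40):
        m = 0.5 * (a + b)
        fm = J1(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

S = 0.0
for n in range(1, 21):
    c = (n + 0.25) * math.pi          # asymptotic location of j_{1,n}
    S += zero_in(c - 0.6, c + 0.4) ** -4
# tail, using j_{1,n} ~ (n + 1/4) pi
S += sum(((n + 0.25) * math.pi) ** -4 for n in range(21, 5000))
```

The twenty computed zeros plus the asymptotic tail reproduce $1/192 \approx 0.0052083$ to better than $10^{-6}$.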
Comments on Chapter 11

The origin of this chapter is found in Biane-Yor [17]. We also recommend the more developed discussion in Biane [15]. D. Williams [93] presents a closely related discussion. Smith-Diaconis [84] start from the standard random walk before passing to the Brownian limit to obtain the functional equation (11.1). A detailed discussion of the agreement formula (11.9) is found in Pitman-Yor [78].
References

1. J. Azéma and M. Yor. Sur les zéros des martingales continues. In Séminaire de Probabilités, XXVI, volume 1526 of Lecture Notes in Math., pages 248–306. Springer, Berlin, 1992.
2. M. Barlow, J. Pitman, and M. Yor. Une extension multidimensionnelle de la loi de l'arc sinus. In Séminaire de Probabilités, XXIII, volume 1372 of Lecture Notes in Math., pages 294–314. Springer, Berlin, 1989.
3. C. Bélisle. Windings of random walks. Ann. Probab., 17(4):1377–1402, 1989.
4. C. Bélisle and J. Faraway. Winding angle and maximum winding angle of the two-dimensional random walk. J. Appl. Probab., 28(4):717–726, 1991.
5. M. Berger and P. Roberts. On the winding number problem with finite steps. Adv. in Appl. Probab., 20(2):261–274, 1988.
6. R. Berthuet. Étude de processus généralisant l'aire de Lévy. Probab. Theory Related Fields, 73(3):463–480, 1986.
7. J. Bertoin. Complements on the Hilbert transform and the fractional derivative of Brownian local times. J. Math. Kyoto Univ., 30(4):651–670, 1990.
8. J. Bertoin. Excursions of a BES$_0$(d) and its drift term (0 < d < 1). Probab. Theory Related Fields, 84(2):231–250, 1990.
9. J. Bertoin. On the Hilbert transform of the local times of a Lévy process. Bull. Sci. Math., 119(2):147–156, 1995.
10. J. Bertoin. Lévy processes, volume 121 of Cambridge Tracts in Mathematics. Cambridge University Press, Cambridge, 1996.
11. J. Bertoin and J. Pitman. Path transformations connecting Brownian bridge, excursion and meander. Bull. Sci. Math., 118(2):147–166, 1994.
12. P. Biane. Comparaison entre temps d'atteinte et temps de séjour de certaines diffusions réelles. In Séminaire de probabilités, XIX, 1983/84, volume 1123 of Lecture Notes in Math., pages 291–296. Springer, Berlin, 1985.
13. P. Biane. Sur un calcul de F. Knight. In Séminaire de Probabilités, XXII, volume 1321 of Lecture Notes in Math., pages 190–196. Springer, Berlin, 1988.
14. P. Biane. Decomposition of Brownian trajectories and some applications. Notes from lectures given at the Probability Winter School of Wuhan, China, Fall 1990.
15. P. Biane. La fonction zêta de Riemann et les probabilités. In La fonction zêta, pages 165–193. Éd. Éc. Polytech., Palaiseau, 2003.
16. P. Biane, J.-F. Le Gall, and M. Yor. Un processus qui ressemble au pont brownien. In Séminaire de Probabilités, XXI, volume 1247 of Lecture Notes in Math., pages 270–275. Springer, Berlin, 1987.
17. P. Biane and M. Yor. Valeurs principales associées aux temps locaux browniens. Bull. Sci. Math. (2), 111(1):23–101, 1987.
18. P. Biane and M. Yor. Quelques pr´ecisions sur le m´eandre brownien. Bull. Sci. Math.
(2), 112(1):101–109, 1988.
19. P. Biane and M. Yor. Sur la loi des temps locaux browniens pris en un temps
exponentiel. In S´eminaire de Probabilit´es, XXII, volume 1321 of Lecture Notes in
Math., pages 454–466. Springer, Berlin, 1988.
20. N. Bingham and R. Doney. On higherdimensional analogues of the arcsine law. J.
Appl. Probab., 25(1):120–131, 1988.
21. A. N. Borodin. Brownian local time. Uspekhi Mat. Nauk, 44(2(266)):7–48, 1989.
22. O. Brockhaus. The Martin boundary of the Brownian sheet. In Stochastic partial
diﬀerential equations (Edinburgh, 1994), volume 216 of London Math. Soc. Lecture
Note Ser., pages 22–30. Cambridge Univ. Press, Cambridge, 1995.
23. E. A. Carlen. The pathwise description of quantum scattering in stochastic mechanics.
In Stochastic processes in classical and quantum systems (Ascona, 1985), volume 262
of Lecture Notes in Phys., pages 139–147. Springer, Berlin, 1986.
24. P. Carmona, F. Petit, and M. Yor. Sur les fonctionnelles exponentielles de certains
processus de L´evy. Stochastics Stochastics Rep., 47(12):71–101, 1994.
25. P. Carmona, F. Petit, and M. Yor. Betagamma random variables and intertwining
relations between certain Markov processes. Rev. Mat. Iberoamericana, 14(2):311–
367, 1998.
26. T. Chan, D. S. Dean, K. M. Jansons, and L. C. G. Rogers. On polymer conformations
in elongational ﬂows. Comm. Math. Phys., 160(2):239–257, 1994.
27. K. L. Chung. Excursions in Brownian motion. Ark. Mat., 14(2):155–177, 1976.
28. B. Davis. Brownian motion and analytic functions. Ann. Probab., 7(6):913–932, 1979.
29. C. Dellacherie, P.A. Meyer, and M. Yor. Sur certaines propri´et´es des espaces de
Banach H
1
et BMO. In S´eminaire de Probabilit´es, XII (Univ. Strasbourg, Strasbourg,
1976/1977), volume 649 of Lecture Notes in Math., pages 98–113. Springer, Berlin,
1978.
30. C. DonatiMartin. Transformation de Fourier et temps d’occupation browniens.
Probab. Theory Related Fields, 88(2):137–166, 1991.
31. C. DonatiMartin, S. Song, and M. Yor. Symmetric stable processes, Fubini’s the
orem, and some extensions of the CiesielskiTaylor identities in law. Stochastics
Stochastics Rep., 50(12):1–33, 1994.
32. C. DonatiMartin and M. Yor. Mouvement brownien et in´egalit´e de Hardy dans L
2
.
In S´eminaire de Probabilit´es, XXIII, volume 1372 of Lecture Notes in Math., pages
315–323. Springer, Berlin, 1989.
33. C. DonatiMartin and M. Yor. Fubini’s theorem for double Wiener integrals and the
variance of the Brownian path. Ann. Inst. H. Poincar´e Probab. Statist., 27(2):181–
200, 1991.
34. C. DonatiMartin and M. Yor. Some Brownian functionals and their laws. Ann.
Probab., 25(3):1011–1058, 1997.
35. L. E. Dubins and M. Smorodinsky. The modiﬁed, discrete, L´evytransformation is
Bernoulli. In S´eminaire de Probabilit´es, XXVI, volume 1526 of Lecture Notes in
Math., pages 157–161. Springer, Berlin, 1992.
36. B. Duplantier. Areas of planar Brownian curves. J. Phys. A, 22(15):3033–3048, 1989.
37. R. Durrett. A new proof of Spitzer's result on the winding of two-dimensional
Brownian motion. Ann. Probab., 10(1):244–246, 1982.
38. E. Dynkin. Some limit theorems for sums of independent random variables with
inﬁnite mathematical expectations. In Select. Transl. Math. Statist. and Probability,
Vol. 1, pages 171–189. Inst. Math. Statist. and Amer. Math. Soc., Providence, R.I.,
1961.
39. N. Eisenbaum. Un théorème de Ray-Knight lié au supremum des temps locaux
browniens. Probab. Theory Related Fields, 87(1):79–95, 1990.
40. P. Fitzsimmons and R. Getoor. On the distribution of the Hilbert transform of the
local time of a symmetric L´evy process. Ann. Probab., 20(3):1484–1497, 1992.
References 191
41. P. Fitzsimmons, J. Pitman, and M. Yor. Markovian bridges: construction, Palm
interpretation, and splicing. In Seminar on Stochastic Processes, 1992 (Seattle, WA,
1992), volume 33 of Progr. Probab., pages 101–134. Birkh¨auser Boston, Boston, MA,
1993.
42. A. F¨oldes and P. R´ev´esz. On hardly visited points of the Brownian motion. Probab.
Theory Related Fields, 91(1):71–80, 1992.
43. H. F¨ollmer. Martin boundaries on Wiener space. In Diﬀusion processes and related
problems in analysis, Vol. I (Evanston, IL, 1989), volume 22 of Progr. Probab., pages
3–16. Birkh¨auser Boston, Boston, MA, 1990.
44. G. J. Foschini and L. A. Shepp. Closed form characteristic functions for certain
random variables related to Brownian motion. In Stochastic analysis, pages 169–187.
Academic Press, Boston, MA, 1991. Liber amicorum for Moshe Zakai.
45. M. Fukushima. A decomposition of additive functionals of ﬁnite energy. Nagoya
Math. J., 74:137–168, 1979.
46. H. Geman and M. Yor. Quelques relations entre processus de Bessel, options
asiatiques et fonctions confluentes hypergéométriques. C. R. Acad. Sci. Paris Sér. I
Math., 314(6):471–474, 1992.
47. H. Geman and M. Yor. Bessel processes, Asian options and perpetuities. Math.
Finance, 3(4):349–375, 1993.
48. P. Hartman and G. Watson. “Normal” distribution functions on spheres and the
modiﬁed Bessel functions. Ann. Probability, 2:593–607, 1974.
49. J.-P. Imhof. Density factorizations for Brownian motion, meander and the
three-dimensional Bessel process, and applications. J. Appl. Probab., 21(3):500–510, 1984.
50. K. Itô and H. McKean. Diffusion processes and their sample paths. Springer-Verlag,
Berlin, 1974. Second printing, corrected, Die Grundlehren der mathematischen
Wissenschaften, Band 125.
51. T. Jeulin. Semimartingales et grossissement d’une ﬁltration, volume 833 of Lecture
Notes in Mathematics. Springer, Berlin, 1980.
52. T. Jeulin. Application de la théorie du grossissement à l'étude des temps locaux
browniens. In Grossissement de filtrations: Exemples et applications, volume 1118
of Lecture Notes in Math., pages 197–304. Springer, Berlin, 1985.
53. T. Jeulin and M. Yor. Inégalité de Hardy, semimartingales, et faux-amis. In Séminaire
de Probabilités, XIII (Univ. Strasbourg, Strasbourg, 1977/78), volume 721 of Lecture
Notes in Math., pages 332–359. Springer, Berlin, 1979.
54. T. Jeulin and M. Yor. Filtration des ponts browniens et ´equations diﬀ´erentielles
stochastiques lin´eaires. In S´eminaire de Probabilit´es, XXIV, 1988/89, volume 1426
of Lecture Notes in Math., pages 227–265. Springer, Berlin, 1990.
55. Y. Kasahara and S. Kotani. On limit processes for a class of additive functionals of
recurrent diﬀusion processes. Z. Wahrsch. Verw. Gebiete, 49(2):133–153, 1979.
56. J. Kent. Some probabilistic properties of Bessel functions. Ann. Probab., 6(5):760–
770, 1978.
57. F. B. Knight. Random walks and a sojourn density process of Brownian motion.
Trans. Amer. Math. Soc., 109:56–86, 1963.
58. F. B. Knight. Inverse local times, positive sojourns, and maxima for Brownian
motion. Astérisque, (157-158):233–247, 1988. Colloque Paul Lévy sur les Processus
Stochastiques (Palaiseau, 1987).
59. J.F. Le Gall. Mouvement brownien, cˆones et processus stables. Probab. Theory
Related Fields, 76(4):587–627, 1987.
60. J.F. Le Gall and M. Yor. Excursions browniennes et carr´es de processus de Bessel.
C. R. Acad. Sci. Paris S´er. I Math., 303(3):73–76, 1986.
61. J.-F. Le Gall and M. Yor. Étude asymptotique des enlacements du mouvement
brownien autour des droites de l'espace. Probab. Theory Related Fields, 74(4):617–635, 1987.
62. J.F. Le Gall and M. Yor. Enlacements du mouvement brownien autour des courbes
de l’espace. Trans. Amer. Math. Soc., 317(2):687–722, 1990.
63. N. N. Lebedev. Special functions and their applications. Dover Publications Inc.,
New York, 1972. Revised edition, translated from the Russian and edited by Richard
A. Silverman, Unabridged and corrected republication.
64. P. L´evy. Sur certains processus stochastiques homog`enes. Compositio Math., 7:283–
339, 1939.
65. P. Messulam and M. Yor. On D. Williams’ “pinching method” and some applications.
J. London Math. Soc. (2), 26(2):348–364, 1982.
66. F. Mezzadri and N. C. Snaith, editors. Recent perspectives in random matrix theory
and number theory, volume 322 of London Mathematical Society Lecture Note Series.
Cambridge University Press, Cambridge, 2005.
67. S. Molčanov and E. Ostrovskiĭ. Symmetric stable processes as traces of degenerate
diffusion processes. Teor. Verojatnost. i Primenen., 14:127–130, 1969.
68. E. Perkins. Local time is a semimartingale. Z. Wahrsch. Verw. Gebiete, 60(1):79–117,
1982.
69. M. Perman, J. Pitman, and M. Yor. Sizebiased sampling of Poisson point processes
and excursions. Probab. Theory Related Fields, 92(1):21–39, 1992.
70. F. Petit. Sur le temps passé par le mouvement brownien au-dessus d'un multiple de
son supremum, et quelques extensions de la loi de l'arc sinus. PhD thesis, Université
Paris VII, February 1992.
71. J. Pitman. One-dimensional Brownian motion and the three-dimensional Bessel
process. Advances in Appl. Probability, 7(3):511–526, 1975.
72. J. Pitman and M. Yor. Bessel processes and inﬁnitely divisible laws. In Stochastic
integrals (Proc. Sympos., Univ. Durham, Durham, 1980), volume 851 of Lecture
Notes in Math., pages 285–370. Springer, Berlin, 1981.
73. J. Pitman and M. Yor. A decomposition of Bessel bridges. Z. Wahrsch. Verw.
Gebiete, 59(4):425–457, 1982.
74. J. Pitman and M. Yor. Sur une d´ecomposition des ponts de Bessel. In Functional
analysis in Markov processes (Katata/Kyoto, 1981), volume 923 of Lecture Notes in
Math., pages 276–285. Springer, Berlin, 1982.
75. J. Pitman and M. Yor. Asymptotic laws of planar Brownian motion. Ann. Probab.,
14(3):733–779, 1986.
76. J. Pitman and M. Yor. Further asymptotic laws of planar Brownian motion. Ann.
Probab., 17(3):965–1011, 1989.
77. J. Pitman and M. Yor. Arcsine laws and interval partitions derived from a stable
subordinator. Proc. London Math. Soc. (3), 65(2):326–356, 1992.
78. J. Pitman and M. Yor. Decomposition at the maximum for excursions and bridges of
one-dimensional diffusions. In Itô's stochastic calculus and probability theory, pages
293–310. Springer, Tokyo, 1996.
79. J. Pitman and M. Yor. Dilatations d'espace-temps, réarrangements des trajectoires
browniennes, et quelques extensions d'une identité de Knight. C. R. Acad. Sci. Paris
Sér. I Math., 316(7):723–726, 1993.
80. D. Ray. Sojourn times of diﬀusion processes. Illinois J. Math., 7:615–630, 1963.
81. D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Math
ematical Sciences]. SpringerVerlag, Berlin, third edition, 1999.
82. J. Rudnick and Y. Hu. The winding angle distribution of an ordinary random walk.
J. Phys. A, 20(13):4421–4438, 1987.
83. T. Shiga and S. Watanabe. Bessel diﬀusions as a oneparameter family of diﬀusion
processes. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 27:37–46, 1973.
84. L. Smith and P. Diaconis. Honest Bernoulli excursions. J. Appl. Probab., 25(3):464–
477, 1988.
85. F. Spitzer. Some theorems concerning 2dimensional Brownian motion. Trans. Amer.
Math. Soc., 87:187–197, 1958.
86. A. Truman and D. Williams. A generalised arcsine law and Nelson’s stochastic
mechanics of onedimensional timehomogeneous diﬀusions. In Diﬀusion processes
and related problems in analysis, Vol. I (Evanston, IL, 1989), volume 22 of Progr.
Probab., pages 117–135. Birkh¨auser Boston, Boston, MA, 1990.
87. A. Truman and D. Williams. Excursions and Itˆo calculus in Nelson’s stochastic
mechanics. In Recent developments in quantum mechanics (Poiana Bra¸sov, 1989),
volume 12 of Math. Phys. Stud., pages 49–83. Kluwer Acad. Publ., Dordrecht, 1991.
88. P. Vallois. Sur la loi conjointe du maximum et de l'inverse du temps local du
mouvement brownien: application à un théorème de Knight. Stochastics Stochastics Rep.,
35(3):175–186, 1991.
89. W. Vervaat. A relation between Brownian bridge and Brownian excursion. Ann.
Probab., 7(1):143–149, 1979.
90. G. Watson. A treatise on the theory of Bessel functions. Cambridge Mathematical
Library. Cambridge University Press, Cambridge, 1995. Reprint of the second (1944)
edition.
91. M. L. Wenocur. Brownian motion with quadratic killing and some implications. J.
Appl. Probab., 23(4):893–903, 1986.
92. M. L. Wenocur. OrnsteinUhlenbeck process with quadratic killing. J. Appl. Probab.,
27(3):707–712, 1990.
93. D. Williams. Brownian motion and the Riemann zeta-function. In Disorder in
physical systems, Oxford Sci. Publ., pages 361–372. Oxford Univ. Press, New York, 1990.
94. T. Yamada. On the fractional derivative of Brownian local times. J. Math. Kyoto
Univ., 25(1):49–58, 1985.
95. T. Yamada. On some limit theorems for occupation times of one-dimensional
Brownian motion and its continuous additive functionals locally of zero energy. J. Math.
Kyoto Univ., 26(2):309–322, 1986.
96. T. Yamada. Representations of continuous additive functionals of zero energy via
convolution type transforms of Brownian local times and the Radon transform.
Stochastics Stochastics Rep., 48(1-2):1–15, 1994.
97. T. Yamada. Principal values of Brownian local times and their related topics. In Itô's
stochastic calculus and probability theory, pages 413–422. Springer, Tokyo, 1996.
98. M. Yor. Loi de l’indice du lacet brownien, et distribution de HartmanWatson. Z.
Wahrsch. Verw. Gebiete, 53(1):71–95, 1980.
99. M. Yor. Une extension markovienne de l'algèbre des lois béta-gamma. C. R. Acad.
Sci. Paris Sér. I Math., 308(8):257–260, 1989.
100. M. Yor. Étude asymptotique des nombres de tours de plusieurs mouvements
browniens complexes corrélés. In Random walks, Brownian motion, and interacting
particle systems, volume 28 of Progr. Probab., pages 441–455. Birkhäuser Boston,
Boston, MA, 1991.
101. M. Yor. Une explication du th´eor`eme de CiesielskiTaylor. Ann. Inst. H. Poincar´e
Probab. Statist., 27(2):201–213, 1991.
102. M. Yor. On some exponential functionals of Brownian motion. Adv. in Appl. Probab.,
24(3):509–531, 1992.
103. M. Yor, editor. Exponential functionals and principal values related to Brownian
motion. Biblioteca de la Revista Matem´atica Iberoamericana. [Library of the Revista
Matem´atica Iberoamericana]. Revista Matem´atica Iberoamericana, Madrid, 1997. A
collection of research papers.
104. M. Yor. Exponential functionals of Brownian motion and related processes. Springer
Finance. SpringerVerlag, Berlin, 2001. With an introductory chapter by H´elyette
Geman, Chapters 1, 3, 4, 8 translated from the French by Stephen S. Wilson.
105. M. Yor, M. Chesney, H. Geman, and M. JeanblancPicqu´e. Some combinations of
Asian, Parisian and barrier options. In Mathematics of derivative securities (Cam
bridge, 1995), volume 15 of Publ. Newton Inst., pages 61–87. Cambridge Univ. Press,
Cambridge, 1997.
Further general references about Brownian Motion and Related Processes
1. J. Bertoin. Random fragmentation and coagulation processes, volume 102 of Cam
bridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge,
2006.
2. A. N. Borodin and P. Salminen. Handbook of Brownian motion—facts and formulae.
Probability and its Applications. Birkh¨auser Verlag, Basel, second edition, 2002.
3. J. L. Doob. Classical potential theory and its probabilistic counterpart, volume 262 of
Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Math
ematical Sciences]. SpringerVerlag, New York, 1984.
4. R. Durrett. Brownian motion and martingales in analysis. Wadsworth Mathematics
Series. Wadsworth International Group, Belmont, CA, 1984.
5. F. B. Knight. Essentials of Brownian motion and diﬀusion, volume 18 of Mathematical
Surveys. American Mathematical Society, Providence, R.I., 1981.
6. G. F. Lawler. Conformally invariant processes in the plane, volume 114 of Mathe
matical Surveys and Monographs. American Mathematical Society, Providence, RI,
2005.
7. J.-F. Le Gall. Some properties of planar Brownian motion. In École d'Été de
Probabilités de Saint-Flour XX - 1990, volume 1527 of Lecture Notes in Math., pages
111–235. Springer, Berlin, 1992.
8. M. B. Marcus and J. Rosen. Markov processes, Gaussian processes, and local times,
volume 100 of Cambridge Studies in Advanced Mathematics. Cambridge University
Press, Cambridge, 2006.
9. J. Pitman. Combinatorial stochastic processes, volume 1875 of Lecture Notes in Math
ematics. SpringerVerlag, Berlin, 2006. Lectures from the 32nd Summer School on
Probability Theory held in SaintFlour, July 7–24, 2002, With a foreword by Jean
Picard.
10. M. Rao. Brownian motion and classical potential theory. Matematisk Institut, Aarhus
University, Aarhus, 1977. Lecture Notes Series, No. 47.
11. L. C. G. Rogers and D. Williams. Diﬀusions, Markov processes, and martingales. Vol.
1. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 2000.
Foundations, Reprint of the second (1994) edition.
12. L. C. G. Rogers and D. Williams. Diffusions, Markov processes, and martingales. Vol.
2. Cambridge Mathematical Library. Cambridge University Press, Cambridge, 2000.
Itô calculus, Reprint of the second (1994) edition.
13. W. Werner. Random planar curves and Schramm-Loewner evolutions. In Lectures
on probability theory and statistics, volume 1840 of Lecture Notes in Math., pages
107–195. Springer, Berlin, 2004.
Roger Mansuy · Marc Yor

Aspects of Brownian Motion

Springer

Roger Mansuy
21, Boulevard Carnot
92340 Bourg-la-Reine
France

Marc Yor
Université Paris VI
Laboratoire de Probabilités et Modèles Aléatoires
4, place Jussieu
75252 Paris Cedex 5
France
deaproba@proba.jussieu.fr

An earlier version of this book was published by Birkhäuser, Basel, as Yor, Marc: Some Aspects of Brownian Motion, Part I, 1992, and Yor, Marc: Some Aspects of Brownian Motion, Part II, 1997.

ISBN 978-3-540-22347-4        e-ISBN 978-3-540-49966-4

Library of Congress Control Number: 2008930798

Mathematics Subject Classification (2000): 60-02, 60-01, 60J65, 60E05

© 2008 Springer-Verlag Berlin Heidelberg

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: WMX Design GmbH, Heidelberg
The cover illustration is based on a simulation of BESQ processes provided by C. Umansky.

Printed on acid-free paper

springer.com
Introduction
This volume is the result of our efforts to update the eleven first chapters of the two previously published ETH Zürich Lecture Notes by the second author: Some Aspects of Brownian Motion, Part I (1992); Part II (1997). The original volumes have been out of print since, roughly, the year 2000. We have already updated the remaining chapters of Part II in: Random Times and Enlargements of Filtrations in a Brownian Setting, Lecture Notes in Maths, n° 1873, Springer (2006).

Coming back to the present volume, we modified quite minimally the old eleven first chapters, essentially by completing the Bibliography.

Here is a detailed description of these eleven chapters; each of them is devoted to the study of some particular class of Brownian functionals, and these classes appear in increasing order of complexity.

In Chapter 1, various results about certain Gaussian subspaces of the Gaussian space generated by a one-dimensional Brownian motion are obtained; the derivation of these results is elementary in that it uses essentially Hilbert space isomorphisms between certain Gaussian spaces and some L² spaces of deterministic functions.

In Chapter 2, several results about Brownian quadratic functionals are obtained, with some particular emphasis on a change of probability method, which enables one to obtain a number of variants of Lévy's formula for the stochastic area of Brownian motion.

In Chapter 3, Ray-Knight theorems on Brownian local times are recalled and extended; the processes which appear there are squares of Bessel processes, which links Chapter 3 naturally with the study of Brownian quadratic functionals made in Chapter 2; in the second half of Chapter 3, some relations with Bessel meanders and bridges are discussed.
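As a signpost for the reader, the formula of Lévy referred to above may be recalled here, for convenience, in its simplest (unconditional) classical form; the notation is standard and not taken verbatim from the chapters that follow. For a two-dimensional Brownian motion (X_t, Y_t) started at the origin, with stochastic area A_t = (1/2) ∫₀ᵗ (X_s dY_s − Y_s dX_s):

```latex
% Levy's area formula, classical unconditional form:
\mathbb{E}\Big[\exp\big(i\lambda \mathcal{A}_t\big)\Big]
  \;=\; \frac{1}{\cosh(\lambda t/2)}\,,
  \qquad \lambda \in \mathbb{R},
\quad \text{where } \mathcal{A}_t
  = \tfrac{1}{2}\int_0^t \big(X_s\,dY_s - Y_s\,dX_s\big).
```

Chapter 2 obtains this formula, together with conditional variants (given the endpoint (X_t, Y_t)), by the change of probability method mentioned above.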
In Chapter 4, the relation between squares of Bessel processes and Brownian local times is further exploited, in order to explain and extend the Ciesielski-Taylor identities.

In Chapters 5 and 7, a number of results about Brownian windings are established; exact distributional computations are made in Chapter 5, whereas asymptotic studies are presented in Chapter 7.

Chapter 6 is devoted to the study of the integral, on a time interval, of the exponential of a Brownian motion with drift; this study is important in mathematical finance.

In Chapters 8 and 9, some extensions of Paul Lévy's arc sine law for Brownian motion are discussed, with particular emphasis on the time spent by Brownian motion below a multiple of its one-sided supremum.

Principal values of Brownian and Bessel local times, in particular their Hilbert transforms, are discussed in Chapter 10. Such principal values occur naturally in the Dirichlet decomposition of Bessel processes with dimension smaller than 1, as well as when considering certain signed measures which are absolutely continuous with respect to the Wiener measure.

In Chapter 11, the Riemann zeta function and Jacobi theta functions are shown to be somewhat related with the Itô measure of Brownian excursions. Some generalizations to Bessel processes are also presented.

We are well aware that this particular selection of certain aspects of Brownian motion is, at the same time, quite incomplete and arbitrary, but in the defense of our choice, let us say that:

a. We feel some confidence with these particular aspects.

b. Some other aspects are excellently treated in a number of lecture notes and books, the references of which are gathered at the end of this volume.

c. Between 2004 and 2006, we had undertaken an ambitious updating of the same ETH Lecture Notes, but were unable to complete this more demanding task. The interested reader may consult online (http://roger.mansuy.free.fr/Aspects/Aspects references.html) the extensive Bibliography we had gathered for this purpose.

Many thanks to Kathleen Qechar for juggling with the different versions, macros, and so on.

Brannay, May 4th, 2008
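For orientation, the two classical identities around which Chapters 4 and 8 revolve may be quoted here for convenience; the statements below are the standard ones, in notation introduced only for this quotation (R^(δ) denotes a Bessel process of dimension δ started at 0, T_1 its first hitting time of level 1, and B a one-dimensional Brownian motion):

```latex
% Ciesielski--Taylor identity (the theme of Chapter 4),
% both processes started at 0:
\int_0^\infty \mathbf{1}_{\{R^{(\delta+2)}_s \le 1\}}\,ds
  \;\overset{(\mathrm{law})}{=}\;
  T_1\big(R^{(\delta)}\big)
  := \inf\{t \ge 0 : R^{(\delta)}_t = 1\},
  \qquad \delta > 0.

% P. Levy's arc sine law (the starting point of Chapter 8):
P\Big(\int_0^1 \mathbf{1}_{\{B_s > 0\}}\,ds \in du\Big)
  \;=\; \frac{du}{\pi\sqrt{u\,(1-u)}}\,,
  \qquad 0 < u < 1.
```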
L´vy’s area formula. e Chapter 4: CiesielskiTaylor (: CT) identities. Chapter 3: RayKnight theorems. Jacobi theta function. chapter by chapter Chapter 1: Gaussian space. F. Petit’s extensions. Knight’s ratio formula. Chapter 7: KallianpurRobbins ergodic theorem. Bessel bridges. Chapter 6: Asian options. Walsh’s Browne ian motion. Bertoin’s excursion theory for BES(d). Bismut’s identity. Convolution of Hitting times. ﬁrst Wiener chaos. Chapter 2: Quadratic functionals. FeynmanKac formula. L´vyKhintchine representation. vii . Chapter 10: Hilbert transform. HartmanWatson distribution. OrnsteinUhlenbeck e process. Biane’s extensions. Chapter 9: Local time perturbation of Brownian motion. Chung’s identity. Spitzer’s theorem. transfer principle. ergodic property. ﬁltration of Brownian bridges. Yamada’s formulae. Chapter 8: P.Keywords. spacetime harmonic functions. Brownian lace. Dirichlet processes. Conﬂuent hypergeometric functions. FubiniWiener integration by parts. L´vy’s arc sine law. Chapter 11: Riemann Zeta function. principal values. Gauss linking number. Chapter 5: Winding number. generalized meanders. Excursion theory Master formulae. beta and gamma variables. additivity property.
Contents

Introduction
Keywords, chapter by chapter

1 The Gaussian space of BM
1.1 A realization of Brownian bridges
1.2 The filtration of Brownian bridges
1.3 An ergodic property
1.4 A relationship with space-time harmonic functions
1.5 Brownian motion and Hardy's inequality in L2
1.6 Fourier transform and Brownian motion

2 The laws of some quadratic functionals of BM
2.1 Lévy's area formula and some variants
2.2 Some identities in law and an explanation of them via Fubini's theorem
2.3 The laws of squares of Bessel processes

3 Squares of Bessel processes and Ray-Knight theorems for Brownian local times
3.1 The basic Ray-Knight theorems
3.2 The Lévy-Khintchine representation of Q^δ
3.3 An extension of the Ray-Knight theorems
3.4 The law of Brownian local times taken at an independent exponential time
3.5 Squares of Bessel processes and squares of Bessel bridges
3.6 Generalized meanders and squares of Bessel processes
3.7 Generalized meanders and Bessel bridges

4 An explanation and some extensions of the Ciesielski-Taylor identities
4.1 A pathwise explanation of (4.1) for δ = 1
4.2 A reduction of (4.1) to an identity in law between two Brownian quadratic functionals
4.3 Some extensions of the Ciesielski-Taylor identities
4.4 On a computation of Földes-Révész

5 On the winding number of planar BM
5.1 Preliminaries
5.2 Explicit computation of the winding number of planar Brownian motion

6 On some exponential functionals of Brownian motion and the problem of Asian options
6.1 The integral moments of A_t^(ν)
6.2 A study in a general Markovian set-up
6.3 The case of Lévy processes
6.4 Application to Brownian motion
6.5 A discussion of some identities

7 Some asymptotic laws for multidimensional BM
7.1 Asymptotic windings of planar BM around n points
7.2 Windings of BM in IR³
7.3 Windings of independent planar BM's around each other
7.4 A unified picture of windings
7.5 The asymptotic distribution of the self-linking number of BM in IR³

8 Some extensions of Paul Lévy's arc sine law for BM
8.1 Some notation
8.2 A list of results
8.3 A discussion of methods
8.4 An excursion theory approach to F. Petit's results
8.5 A stochastic calculus approach to F. Petit's results

9 Further results about reflecting Brownian motion perturbed by its local time at 0
9.1 A Ray-Knight theorem for the local times of X, up to τ_s^µ, and some consequences
9.2 Proof of the Ray-Knight theorem for the local times of X^µ
9.3 Generalisation of a computation of F. Knight
9.4 Towards a pathwise decomposition of (X_u^µ, u ≤ τ_s^µ)

10 On principal values of Brownian and Bessel local times
10.1 Yamada's formulae
10.2 A construction of stable processes, involving principal values of Brownian local times
10.3 Distributions of principal values of Brownian local times, taken at an independent exponential time
10.4 Bertoin's excursion theory for BES(d), 0 < d < 1

11 Probabilistic representations of the Riemann zeta function and some generalisations related to Bessel processes
11.1 The Riemann zeta function and the 3-dimensional Bessel process
11.2 The right hand side of (11.8), and the agreement formulae between laws of Bessel processes and Bessel bridges
11.3 A discussion of the identity (11.4)
11.4 A strengthening of Knight's identity, and its relation to the Riemann zeta function
11.5 Another probabilistic representation of the Riemann zeta function
11.6 Some generalizations related to Bessel processes
11.7 Some relations between X^ν and Σ^(ν-1) ≡ σ_(ν-1) + σ'_(ν-1)
11.8 ζ^ν(s) as a function of ν

References
Further general references about BM and Related Processes
Chapter 1
The Gaussian space of BM

In this Chapter, a number of linear transformations of the Gaussian space associated to a linear Brownian motion $(B_t, t \ge 0)$ are studied. Recall that this Gaussian space is precisely equal to the first Wiener chaos of $B$, that is:
$$\Gamma(B) \overset{\mathrm{def}}{=} \left\{ B_f \equiv \int_0^\infty f(s)\,dB_s,\ f \in L^2(\mathrm{IR}_+, ds) \right\}.$$
In fact, thanks to the Hilbert spaces isomorphism $B_f \leftrightarrow f$ between $\Gamma(B)$ and $L^2(\mathrm{IR}_+, ds)$, the properties of the transformations being studied may be deduced from corresponding properties of associated transformations of $L^2(\mathrm{IR}_+, ds)$, which is expressed by the identity:
$$E\left(B_f^2\right) = \int_0^\infty dt\, f^2(t). \qquad (1.1)$$
This chapter may be considered as a warm-up, and is intended to show that some interesting properties of Brownian motion may be deduced easily from the covariance identity (1.1).
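As a purely illustrative numerical check of (1.1) (an addition to the text; the test function $f(s) = e^{-s}$ and all parameters below are arbitrary choices), one may discretize the Wiener integral and compare the sample variance with $\int_0^\infty f^2(t)\,dt$:

```python
import math
import random

def wiener_integral_variance(f, T=3.0, n_steps=300, n_paths=5000, seed=1):
    """Monte Carlo estimate of E[(int_0^T f(s) dB_s)^2] via Riemann sums."""
    rng = random.Random(seed)
    dt = T / n_steps
    sq = 0.0
    for _ in range(n_paths):
        integral = 0.0
        for k in range(n_steps):
            # Brownian increment B_{t_{k+1}} - B_{t_k} ~ N(0, dt)
            integral += f(k * dt) * rng.gauss(0.0, math.sqrt(dt))
        sq += integral * integral
    return sq / n_paths

f = lambda s: math.exp(-s)
estimate = wiener_integral_variance(f)
exact = (1.0 - math.exp(-6.0)) / 2.0   # int_0^3 e^{-2s} ds
```

The agreement (up to Monte Carlo noise) is exactly the content of the isometry (1.1) restricted to $[0, T]$.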
1.1 A realization of Brownian bridges

Let $(B_u, u \ge 0)$ be a 1-dimensional BM, starting from 0. Fix $t > 0$ for one moment, and remark that, since $(B_u, u \ge 0)$ is a Gaussian process, for $u \le t$:
$$B_u = \frac{u}{t}B_t + \left(B_u - \frac{u}{t}B_t\right)$$
is the orthogonal decomposition of the Gaussian variable $B_u$ with respect to $B_t$. Hence, the process $\left(B_u - \frac{u}{t}B_t,\ u \le t\right)$ is independent of the variable $B_t$.
Let now $\Omega^{(t)} \equiv C([0,t] \to \mathrm{IR})$ be the space of continuous functions $\omega: [0,t] \to \mathrm{IR}$; denote $X_u(\omega) = \omega(u)$, $u \le t$, and $\mathcal{F}_u = \sigma\{X_s, s \le u\}$. Clearly, $\mathcal{F}_t$ is also the Borel $\sigma$-field when $\Omega^{(t)}$ is endowed with the topology of uniform convergence. For any $x \in \mathrm{IR}$, we define $P_x^{(t)}$ as the distribution on $(\Omega^{(t)}, \mathcal{F}_t)$ of the process:
$$\left(\frac{u}{t}x + B_u - \frac{u}{t}B_t,\ u \le t\right).$$
We shall call $P_x^{(t)}$ the law of the Brownian bridge of duration $t$, starting from 0, and ending at $x$; $P_0^{(t)}$ is the law of the standard Brownian bridge of duration $t$. The family $(P_x^{(t)}, x \in \mathrm{IR})$ is weakly continuous, and, by construction, it satisfies, for every $(\mathcal{F}_t)$ measurable, bounded functional $F$:
$$E\left[F(B_u, u \le t) \mid B_t = x\right] = E_x^{(t)}\left[F(X_u, u \le t)\right], \quad dx \text{ a.e.}$$
Hence, there is no ambiguity in defining, for any $x \in \mathrm{IR}$, the law of this bridge; in particular, a realization of the standard bridge is:
$$\left(B_u - \frac{u}{t}B_t,\ u \le t\right).$$
1.2 The filtration of Brownian bridges

If $G$ is a subset of the Gaussian space generated by $(B_u, u \ge 0)$, we denote by $\Gamma(G)$ the Gaussian space generated by $G$, and we use the script letter $\mathcal{G}$ for the $\sigma$-field $\sigma(G)$. We now define $\Gamma_t = \Gamma(G_t)$, where
$$G_t = \left\{ B_u - \frac{u}{t}B_t,\ u \le t \right\}, \quad \text{and} \quad \mathcal{G}_t = \sigma(G_t).$$
It is immediate that $\Gamma_t$ is the orthogonal of $\Gamma(B_t)$ in $\Gamma(B_u, u \le t)$. Remark that $\{\Gamma_t, t \ge 0\}$ is an increasing family, since, for $u \le t \le t+h$:
$$B_u - \frac{u}{t}B_t = \left(B_u - \frac{u}{t+h}B_{t+h}\right) - \frac{u}{t}\left(B_t - \frac{t}{t+h}B_{t+h}\right).$$
Moreover:
$$\Gamma_\infty \overset{\mathrm{def}}{=} \lim_{t\uparrow\infty} \Gamma_t = \Gamma(B_u, u \ge 0), \quad \text{since} \quad B_u = \text{a.s.} \lim_{t\to\infty}\left(B_u - \frac{u}{t}B_t\right).$$
Hence, $(\mathcal{G}_t, t \ge 0)$ is a subfiltration of $(\mathcal{B}_t \equiv \sigma(B_u, u \le t),\ t \ge 0)$, and $\mathcal{G}_\infty = \mathcal{B}_\infty$. Here are some more precisions about $(\mathcal{G}_t, t \ge 0)$.

Theorem 1.1
1) For any $t > 0$, we have:
$$\Gamma(B_u, u \le t) = \Gamma_t \oplus \Gamma(B_t), \quad \text{and} \quad \Gamma_t = \left\{ \int_0^t f(u)\,dB_u,\ f \in L^2([0,t], du),\ \int_0^t du\, f(u) = 0 \right\}.$$
2) For any $t > 0$, the process:
$$\gamma_u^{(t)} = B_u - \int_0^u ds\, \frac{B_t - B_s}{t - s}, \quad u \le t,$$
is a Brownian motion, which is independent of the variable $B_t$.
3) The process:
$$\beta_t = B_t - \int_0^t \frac{ds}{s}\, B_s, \quad t \ge 0,$$
is a Brownian motion, $(\mathcal{G}_t, t \ge 0)$ is the natural filtration of $(\beta_t, t \ge 0)$, and we have: $\Gamma_t = \Gamma(\beta_s, s \le t)$.

Proof:
1) The first assertion of the Theorem follows immediately from the Hilbert spaces isomorphism between $L^2([0,t], du)$ and $\Gamma(B_u, u \le t)$, which transfers a function $f$ into $\int_0^t f(u)\,dB_u$.
2) Before we prove precisely the second and third assertions of the Theorem, it is worth explaining how the processes $(\gamma_u^{(t)}, u \le t)$ and $(\beta_t, t \ge 0)$ arise naturally: $(\gamma_u^{(t)}, u \le t)$ is the martingale part in the canonical decomposition of $(B_u, u \le t)$ as a semimartingale in the filtration $\mathcal{B}_u^{(t)} \equiv \mathcal{B}_u \vee \sigma(B_t)$, whereas the idea of considering $(\beta_t, t \ge 0)$ occurred by looking at the Brownian motion $(\gamma_u^{(t)}, u \le t)$, reversed from time $t$, that is:
$$\gamma_t^{(t)} - \gamma_{t-u}^{(t)} = (B_t - B_{t-u}) - \int_0^u \frac{ds}{s}\,(B_t - B_{t-s}).$$
3) Now, let $(Z_u, u \le t)$ be a family of Gaussian variables which belong to $\Gamma_t$. Using the first assertion of the theorem, in order to show that $\Gamma_t = \Gamma(Z_u, u \le t)$, it suffices to prove that the only functions $f \in L^2([0,t], du)$ such that:
$$E\left[Z_u\left(\int_0^t f(v)\,dB_v\right)\right] = 0, \quad \text{for every } u \le t, \qquad (1.2)$$
are the constants. It is not difficult to show that $(\gamma_u^{(t)}, u \le t)$ is a Brownian motion, and, when we apply this remark to $Z_u = \gamma_u^{(t)}$, we find that $f$ satisfies (1.2) if and only if:
$$\int_0^u dv\, f(v) - \int_0^u \frac{ds}{t-s}\int_s^t dv\, f(v) = 0, \quad \text{for every } u \le t;$$
hence:
$$f(v) = \frac{1}{t-v}\int_v^t du\, f(u), \quad dv \text{ a.e.},$$
from which we now easily conclude that $f(v) = c$, $dv$ a.e., for some constant $c$. A similar discussion applies with $Z_u = \beta_u$.

Exercise 1.1: Let $f: \mathrm{IR}_+ \to \mathrm{IR}$ be an absolutely continuous function which satisfies:
$$f(0) = 0, \qquad \int_0^t \frac{du}{f(u)}\left(\int_0^u (f'(s))^2\, ds\right)^{1/2} < \infty.$$
1. Show that the process:
$$Y_t^{(f)} = B_t - \int_0^t \frac{du}{f(u)}\int_0^u f'(s)\,dB_s, \quad t \ge 0,$$
admits $(\mathcal{G}_t)$ as its natural filtration.
2. Show that the canonical decomposition of $(Y_t^{(f)}, t \ge 0)$ in its natural filtration $(\mathcal{G}_t)$ is:
$$Y_t^{(f)} = \beta_t + \int_0^t \frac{du}{f(u)}\int_0^u \left(\frac{f(s)}{s} - f'(s)\right)d\beta_s, \quad t \ge 0.$$

1.3 An ergodic property

We may translate the third statement of Theorem 1.1 by saying that, if $(X_t, t \ge 0)$ denotes the process of coordinates on the canonical space $\Omega^* \equiv \Omega^{(\infty)} \equiv C([0,\infty), \mathrm{IR})$, then the transformation $T$ defined by:
$$T(X)_t = X_t - \int_0^t \frac{ds}{s}\, X_s \qquad (t \ge 0)$$
leaves the Wiener measure $W$ invariant.

Theorem 1.2  For any $t > 0$, we have: $(T^n)^{-1}(\mathcal{F}_\infty) = \mathcal{F}_\infty$, $W$-a.s., and $\bigcap_n (T^n)^{-1}(\mathcal{F}_t)$ is $W$-trivial. Consequently, the transformation $T$ on $(\Omega^*, \mathcal{F}_\infty, W)$ is strongly mixing and, a fortiori, ergodic (in the language of ergodic theory, $T$ is a $K$-automorphism).

Proof: a) The third statement follows classically from the first two. b) We already remarked that $T^{-1}(\mathcal{F}_\infty) = \mathcal{F}_\infty$, $W$-a.s., since $\mathcal{G}_\infty = \mathcal{B}_\infty$, which proves the second statement. c) The first statement shall be proved later on as a consequence of the next Proposition 1.1.

To state simply the next Proposition, we need to recall the definition of the classical Laguerre polynomials:
$$L_n(x) = \sum_{k=0}^n \binom{n}{k}\frac{(-x)^k}{k!}, \quad n \in \mathrm{IN};$$
$(L_n, n \in \mathrm{IN})$ is the sequence of orthonormal polynomials for the measure $e^{-x}dx$ on $\mathrm{IR}_+$ which is obtained from $(1, x, x^2, \ldots, x^n, \ldots)$ by the Gram-Schmidt procedure.

Proposition 1.1  Let $(X_t)_{t \le 1}$ be a real-valued BM, starting from 0. Define $\gamma_n = T^n(X)_1$. Then, $(\gamma_n, n \in \mathrm{IN})$ is a sequence of independent centered Gaussian variables, with variance 1; moreover, for any $n \in \mathrm{IN}$, we have:
$$\gamma_n = \int_0^1 dX_s\, L_n\left(\log\frac{1}{s}\right),$$
from which $(X_t, t \le 1)$ may be represented as:
$$X_t = \sum_{n \in \mathrm{IN}} \lambda_n\left(\log\frac{1}{t}\right)\gamma_n, \quad \text{where } \lambda_n(a) = \int_a^\infty dx\, e^{-x} L_n(x).$$
Proof: The expression of γn as a Wiener integral involving Ln is obtained by iteration of the transformation T . The identity: E[γn γm ] = δnm then appears as a consequence of the fact that the sequence {Ln , n ∈ IN} constitutes an orthonormal basis of L2 (IR+ , e−x dx). Indeed, we have:
$$E[\gamma_n\gamma_m] = \int_0^1 ds\, L_n\left(\log\frac{1}{s}\right) L_m\left(\log\frac{1}{s}\right) = \int_0^\infty dx\, e^{-x} L_n(x) L_m(x) = \delta_{nm}.$$
More generally, the application: $(f(x), x > 0) \longrightarrow \left(f\left(\log\frac{1}{s}\right),\ 0 < s < 1\right)$ is an isomorphism of Hilbert spaces between $L^2(e^{-x}dx; \mathrm{IR}_+)$ and $L^2(ds; [0,1])$, and the development of $(X_t)_{t \le 1}$ along the $(\gamma_n)$ sequence corresponds to the development of $1_{[0,t]}(s)$ along the basis $\left(L_n\left(\log\frac{1}{s}\right)\right)_{n \in \mathrm{IN}}$.
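The orthonormality of $(L_n)$ in $L^2(e^{-x}dx)$, on which the proof above rests, is easy to confirm by direct quadrature; a small numerical sketch (an addition to the text; the grid parameters are arbitrary):

```python
import math

def laguerre(n, x):
    """Classical Laguerre polynomial L_n(x) = sum_k C(n,k) (-x)^k / k!."""
    return sum(math.comb(n, k) * (-x) ** k / math.factorial(k) for k in range(n + 1))

def inner_product(n, m, upper=60.0, steps=60000):
    """Trapezoidal approximation of int_0^inf e^{-x} L_n(x) L_m(x) dx."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x) * laguerre(n, x) * laguerre(m, x)
    return total * h
```

After the change of variable $x = \log(1/s)$, the same quantities equal $\int_0^1 L_n(\log\frac{1}{s})L_m(\log\frac{1}{s})\,ds$, which is the form used in the proof.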
1.4 A relationship with space-time harmonic functions
In this paragraph, we are interested in a question which in some sense is dual to the study of the transformation T which we considered above. More precisely, we wish to give a description of the set J of all probabilities P on (Ω ∗ , F∞ ) such that:
i) $\left(\widetilde{X}_t \equiv X_t - \int_0^t \frac{ds}{s}\, X_s;\ t \ge 0\right)$ is a real-valued BM; here, we only assume that the integral $\int_0^t \frac{ds}{s}\, X_s \equiv \text{a.s.}\lim_{\varepsilon\to 0}\int_\varepsilon^t \frac{ds}{s}\, X_s$ exists a.s., but we do not assume a priori that it converges absolutely.
ii) for every $t \ge 0$, the variable $X_t$ is $P$-independent of $(\widetilde{X}_s, s \le t)$.

We obtain the following characterization of the elements of $\mathcal{J}$.

Theorem 1.3  Let $W$ denote the Wiener measure on $(\Omega^*, \mathcal{F}_\infty)$ ($W$ is the law of the real-valued Brownian motion $B$ starting from 0). Let $P$ be a probability on $(\Omega^*, \mathcal{F}_\infty)$. The three following properties are equivalent:
1) $P \in \mathcal{J}$;
2) $P$ is the law of $(B_t + tY, t \ge 0)$, where $Y$ is an r.v. which is independent of $(B_t, t \ge 0)$;
3) there exists a function $h: \mathrm{IR}_+ \times \mathrm{IR} \to \mathrm{IR}_+$, which is space-time harmonic, that is: such that $(h(t, X_t), t \ge 0)$ is a $(W, \mathcal{F}_t)$ martingale, with expectation 1, and $P = W^h$, where $W^h$ is the probability on $(\Omega^*, \mathcal{F}_\infty)$ defined by:
$$W^h\big|_{\mathcal{F}_t} = h(t, X_t)\cdot W\big|_{\mathcal{F}_t}.$$

We first describe all solutions of the equation
$$(*)\qquad X_t = \beta_t + \int_0^t \frac{ds}{s}\, X_s,$$
where $(\beta_t, t \ge 0)$ is a real-valued BM, starting from 0.

Lemma 1.1  $(X_t)$ is a solution of $(*)$ iff there exists an r.v. $Y$ such that:
$$X_t = t\left(Y - \int_t^\infty \frac{d\beta_u}{u}\right).$$
Proof: From Itô's formula, we have, for $0 < s < t$:
$$\frac{X_t}{t} = \frac{X_s}{s} + \int_s^t \frac{d\beta_u}{u}.$$
As $t \to \infty$, the right-hand side converges, hence, so does the left-hand side; we call $Y$ the limit of $\frac{X_t}{t}$, as $t \to \infty$; we have:
$$\frac{X_s}{s} = Y - \int_s^\infty \frac{d\beta_u}{u}.$$
We may now give a proof of Theorem 1.3; the rationale of the proof shall be: 1)⇒2)⇒3)⇒1).
1)⇒2): from Lemma 1.1, we have: $\frac{X_t}{t} = Y - \int_t^\infty \frac{d\widetilde{X}_u}{u}$, and we now remark that
$$B_t = -t\int_t^\infty \frac{d\widetilde{X}_u}{u}, \quad t \ge 0, \text{ is a } BM. \qquad (1.3)$$
Hence, it remains to show that $Y$ is independent from $B$; in fact, we have: $\sigma\{B_u, u \ge 0\} = \sigma\{\widetilde{X}_u, u \ge 0\}$, up to negligible sets, since, from (1.3), it follows that:
$$\frac{d\widetilde{X}_t}{t} = d\left(\frac{B_t}{t}\right).$$
However, from our hypothesis, $X_t$ is independent of $(\widetilde{X}_u, u \le t)$, so that $Y \equiv \lim_{t\to\infty}\frac{X_t}{t}$ is independent of $(\widetilde{X}_u, u \ge 0)$.
2)⇒3): We condition with respect to $Y$; indeed, let $\nu(dy) = P(Y \in dy)$, and define:
$$h(t,x) = \int \nu(dy)\,\exp\left(yx - \frac{y^2 t}{2}\right) \equiv \int \nu(dy)\, h_y(t,x).$$
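For completeness, the space-time harmonicity of each $h_y$ (and hence of $h$, by integration in $\nu(dy)$) can be checked directly on the heat equation; this one-line verification is an addition to the text:

```latex
h_y(t,x) = \exp\!\Bigl(yx - \frac{y^2 t}{2}\Bigr):
\qquad
\frac{\partial h_y}{\partial t} = -\frac{y^2}{2}\,h_y,
\qquad
\frac{1}{2}\,\frac{\partial^2 h_y}{\partial x^2} = \frac{y^2}{2}\,h_y,
\qquad\text{hence}\qquad
\Bigl(\frac{\partial}{\partial t} + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\Bigr) h_y = 0.
```

By Itô's formula, $(h_y(t, X_t), t \ge 0)$ is then a $(W, \mathcal{F}_t)$ local martingale; being positive with constant expectation 1, it is a true martingale.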
From Girsanov's theorem, we know that:
$$P\{(B_u + yu,\ u \ge 0) \in \Gamma\} = W^{h_y}(\Gamma),$$
and therefore, here, we have: $P = W^h$.
3)⇒1): If $P = W^h$, then we know that $(\widetilde{X}_u, u \le t)$ is independent of $X_t$ under $W$, hence also under $P$, since the density $\frac{dP}{dW}\Big|_{\mathcal{F}_t} = h(t, X_t)$ depends only on $X_t$.
Exercise 1.2: Let $\lambda \in \mathrm{IR}$. Define:
$$\beta_t^{(\lambda)} = B_t - \lambda\int_0^t \frac{ds}{s}\, B_s \qquad (t \ge 0).$$
Let $\mathcal{F}_t^{(\lambda)} = \sigma\{\beta_s^{(\lambda)};\ s \le t\}$, $t \ge 0$, be the natural filtration of $(\beta_t^{(\lambda)}, t \ge 0)$, and $(\mathcal{F}_t, t \ge 0)$ be the natural filtration of $(B_t, t \ge 0)$.
1. Show that $(\mathcal{F}_t^{(\lambda)}, t \ge 0)$ is a strict subfiltration of $(\mathcal{F}_t, t \ge 0)$ if, and only if, $\lambda > \frac{1}{2}$.
2. We now assume: $\lambda > \frac{1}{2}$. Prove that the canonical decomposition of $(\beta_t^{(\lambda)}, t \ge 0)$ in its natural filtration $(\mathcal{F}_t^{(\lambda)}, t \ge 0)$ is:
$$\beta_t^{(\lambda)} = \gamma_t^{(\lambda)} - (1-\lambda)\int_0^t \frac{ds}{s}\,\gamma_s^{(\lambda)}, \quad t \ge 0,$$
where $(\gamma_t^{(\lambda)}, t \ge 0)$ is an $(\mathcal{F}_t^{(\lambda)}, t \ge 0)$ Brownian motion.
3. Prove that the processes: $B$, $\beta^{(\lambda)}$, and $\gamma^{(\lambda)}$ satisfy the following relations:
$$d\left(\frac{B_t}{t^\lambda}\right) = \frac{d\beta_t^{(\lambda)}}{t^\lambda} \qquad \text{and} \qquad d\left(\frac{\gamma_t^{(\lambda)}}{t^{1-\lambda}}\right) = \frac{d\beta_t^{(\lambda)}}{t^{1-\lambda}}.$$
Exercise 1.3: (We use the notation introduced in the statement or the proof of Theorem 1.3.) Let $Y$ be a real-valued r.v. which is independent of $(B_t, t \ge 0)$; let $\nu(dy) = P(Y \in dy)$ and define: $B_t^{(\nu)} = B_t + Yt$.
1. Prove that if $f: \mathrm{IR} \to \mathrm{IR}_+$ is a Borel function, then:
$$E\left[f(Y) \mid B_s^{(\nu)},\ s \le t\right] = \frac{\displaystyle\int \nu(dy)\, f(y)\exp\left(yB_t^{(\nu)} - \frac{y^2 t}{2}\right)}{\displaystyle\int \nu(dy)\,\exp\left(yB_t^{(\nu)} - \frac{y^2 t}{2}\right)}.$$
2. With the help of the space-time harmonic function $h$ featured in property 3) of Theorem 1.3, write down the canonical decomposition of $(B_t^{(\nu)}, t \ge 0)$ in its own filtration.

1.5 Brownian motion and Hardy's inequality in L2

(1.5.1) The transformation $T$ which we have been studying is closely related to the Hardy transform:
$$H: L^2([0,1]) \longrightarrow L^2([0,1]), \qquad f \longmapsto Hf: x \mapsto \frac{1}{x}\int_0^x dy\, f(y).$$
We remark that the adjoint of $H$, which we denote by $\widetilde{H}$, satisfies:
$$\widetilde{H}f(x) = \int_x^1 \frac{dy}{y}\, f(y).$$
The operator $K = H$, or $\widetilde{H}$, satisfies Hardy's $L^2$ inequality:
$$\int_0^1 dx\,(Kf)^2(x) \le 4\int_0^1 dx\, f^2(x),$$
which may be proved by several simple methods, among which one is to consider martingales defined on $[0,1]$, fitted with Lebesgue measure, and the filtration $\{\mathcal{F}_t = \sigma(A;\ A \subset [0,t] \text{ Borel set}),\ t \le 1\}$ (see, for example, Dellacherie-Meyer-Yor [29]). In this paragraph, we present another approach, which is clearly related to the Brownian motion $(\beta_t)$ introduced in Theorem 1.1.
We first remark that, if, to begin with, $f$ is bounded, we may write:
$$(*)\qquad \int_0^1 f(u)\,dB_u = \int_0^1 f(u)\,d\beta_u + \int_0^1 du\,\frac{B_u}{u}\, f(u).$$
Then, we remark that:
$$\int_0^1 du\,\frac{B_u}{u}\, f(u) = \int_0^1 dB_u\,(\widetilde{H}f)(u),$$
hence, from $(*)$:
$$\int_0^1 dB_u\,(\widetilde{H}f)(u) = \int_0^1 f(u)\,dB_u - \int_0^1 f(u)\,d\beta_u,$$
from which we immediately deduce Hardy's $L^2$ inequality.
We now go back to $(*)$ to remark that, for any $f \in L^2[0,1]$, or, in fact more generally, for any $(\mathcal{G}_u)_{u\le 1}$ predictable process $(\varphi(u,\omega))$ such that $\int_0^1 du\,\varphi^2(u,\omega) < \infty$ a.s., the limit:
$$\lim_{\varepsilon\downarrow 0}\int_\varepsilon^1 du\,\frac{B_u}{u}\,\varphi(u,\omega) \quad \text{exists a.s.},$$
since both limits, as $\varepsilon \to 0$, of $\int_\varepsilon^1 dB_u\,\varphi(u,\omega)$ and $\int_\varepsilon^1 d\beta_u\,\varphi(u,\omega)$ exist. This general existence result should be contrasted with the following

Lemma 1.2  Let $(\varphi(u,\omega), u \le 1)$ be a $(\mathcal{G}_u)_{u\le 1}$ predictable process such that $\int_0^1 du\,\varphi^2(u,\omega) < \infty$ a.s. Then, the following properties are equivalent:
(i) $\int_0^1 \frac{du}{\sqrt{u}}\,|\varphi(u,\omega)| < \infty$, a.s.;
(ii) $\int_0^1 du\,\frac{|B_u|}{u}\,|\varphi(u,\omega)| < \infty$, a.s.;
(iii) the process $\left(\int_0^t d\beta_u\,\varphi(u,\omega),\ t \le 1\right)$ is a $(\mathcal{B}_t, t \le 1)$ semimartingale.
the Fourier transform of λw. that is: t ˆ λw. one may show that. with parameter λ. Then. for ﬁxed t and w.t (µ) ≡ 0 ds exp(iµBs (w)) is in L2 (dµ). 0 t µ=0. 1.6 Fourier transform and Brownian motion There has been..t (dx) is absolutely continuous and its family of densities are the local times of B up to time t.3 Let µ ∈ IR. where g satisﬁes: 0 dsg(s) < ∞. that is.t . ∞]) −→ L2 ([0. Now. therefore. the measure λw. namely we consider: t ds g(s) exp(iµBs ).14 t 1 1 The Gaussian space of BM ds g(s)Ys = 0 e−2λt du ˆ 1 g Bu √ u 2λu 1 1 log 2λ u . a.s. a lot of interest in e the occupation measure of Brownian motion. In particular. we have 2 .t (dx) deﬁned by: t λw. the result follows. λw. t ≥ 0) be 2 the stationary OrnsteinUhlenbeck process. We note the following Proposition 1. and deﬁne: λ = µ . since L´vy’s discovery of local times. Now. µ = 0. 1]) 1 1 log 2λ u is an isomorphism of Hilbert spaces. Let (Yt . for every t > 0.t (dx)f (x) = 0 ds f (Bs (w)) . we are interested in a variant of this. the application 1 g −→ √ g 2λu L2 ([0.
The lefthand side converges a. the second term on the righthand side goes to 0. but.s. convergence is obtained from the martingale convergence theorem. hence. ∞)]. . p < ∞).v.s. Γ (µ) ≡ 0 ds g(s) exp(iµBs ) is welldeﬁned. we have ∞ t E [Γ (µ)  Bt ] = 0 ds g(s)e iµBs +e iµBt t ds g(s)e−λ(s−t) . since: ∞ ⎛ ds g(s)e−λ(s−t) ≤ ⎝ ∞ ⎞1/2 ds g 2 (s)⎠ 1 √ 2λ t→∞ eiµBt t →0 . Indeed.6 Fourier transform and Brownian motion 15 the following identities: ⎡ ⎢ E⎣ 0 t ⎡⎛ ⎢ ⎥ ds g(s) exp(iµBs ) ⎦ = µ2 E ⎣⎝ 2 ⎤ t ⎞2 ⎤ ⎥ ds g(s)Ys ⎠ ⎦ 0 t t = 0 ds 0 du g(s)g(u)e−λu−s . ⎞ ⎛ t ⎝ 0 ds g(s) exp(iµBs ). t → ∞⎠ converges a.. if we t deﬁne: Γ (µ) = L .1 For any µ = 0.1.s. Proof: The L2 convergence follows immediately from the Proposition and from the L2 convergence of the corresponding quantity for Y . t ∞ From the above results. the r. it admits the following representation as a stochastic integral: ∞ ∞ Γ (µ) = 0 ds g(s) exp(−λs) + iµ 0 dBs exp(iµBs )Gλ (s) . The a. Corollary 1.3.lim 2 t→∞ 0 dsg(s)eiµBs . and in L2 (also in every Lp . so does the righthand side. and for any function g ∈ L2 ([0.
. s Hence.Paragraph 1.In paragraph 1. with the help of the Gaussian character of Brownian motion. it is closely connected to works of H. . Therefore.16 1 The Gaussian space of BM ∞ where: Gλ (s) = du g(u) exp −λ(u − s) . . DonatiMartin [30]. whilst the content of paragraph 1.Paragraph 1. Brockhaus [22]. Γ (µ) is the terminal variable of a martingale in the Brownian ﬁltration. t ≥ 0⎠ is ergodic. not yet completely solved.2. Also. these two paragraphs follow JeulinYor [54] closely.4 is taken from JeulinYor [54]. Many properties of the variables Γ (µ) have been obtained by C. but we have not been able to establish a precise connection between these two works. Comments on Chapter 1 . t ≥ 0) −→ ⎝ 0 sgn(Bs )dBs . it is shown that the ﬁltration of those Brownian bridges is that of a Brownian motion. One may appreciate how much the Gaussian structure facilitates the proofs in comparing the above development (Theorem 1. for α suﬃciently small.5 is taken mostly from DonatiMartin and Yor [32]. the application which transforms the original Brownian motion into the new one is shown to be ergodic. the increasing process of which is uniformly bounded.In paragraph 1. of proving that L´vy’s transformation: e ⎛ t ⎞ (Bt . and in paragraph 1. some explicit and wellknown realizations of the Brownian bridges are presented. . F¨llmer [43] and O. Dubins and Smorodinsky [35] have made some important progress on this question. say) with the problem.6 has been the starting point of DonatiMartin [30].2. the discussion and the results o found in the same paragraph 1. we have: E exp αΓ (µ)2 < ∞.3.4 look very similar to those in Carlen [23].1.
0 0 dtn ϕ2 (t1 . . . such as: t 2 αBt t +β 0 ds 2 Bs . 0 dBtn ϕn (t1 . Indeed. . where (Bs . 0 2 dµ(s)Bs . . . that is: which satisfy ϕn = 0.. In particular. . s ≥ 0) denotes Brownian motion. we studied a number of properties of the Gaussian space of Brownian motion. we shall study the laws of some of the variables X which correspond to the second level of complexity. and so on. tn ) < ∞ . recall that N. .. . . Wiener proved that every L2 (F∞ ) variable X may be represented as: ∞ ∞ t1 tn−1 X = E(X) + n=1 0 dBt1 0 dBt2 . n In this Chapter. .s ≥ 0}. . 17 .Chapter 2 The laws of some quadratic functionals of BM In Chapter 1. . we shall obtain the Laplace transforms of certain quadratic functionals of B. for n ≥ 3. this space may be seen as corresponding to the ﬁrst level of complexity of variables which are measurable with respect to F∞≡ σ{Bs . tn ) where ϕn is a deterministic Borel function which satisﬁes: ∞ tn−1 dt1 .
2.1 Lévy's area formula and some variants

(2.1.1) We consider $(B_t, t \ge 0)$ a $\delta$-dimensional BM starting from $a \in \mathrm{IR}^\delta$, and we look for an explicit expression of the quantity:
$$I_{\alpha,b} = E\left[\exp\left(-\alpha|B_t|^2 - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\right].$$
We now show that, as a consequence of Girsanov's transformation, we may obtain the following formula¹ for $I_{\alpha,b}$:
$$I_{\alpha,b} = \left(\mathrm{ch}(bt) + \frac{2\alpha}{b}\,\mathrm{sh}(bt)\right)^{-\delta/2}\exp\left(-\frac{xb}{2}\ \frac{1 + \frac{2\alpha}{b}\coth(bt)}{\coth(bt) + \frac{2\alpha}{b}}\right), \qquad (2.1)$$
where we write $x = |a|^2$.

Proof: We may assume that $b \ge 0$. We consider the new probability $P^{(b)}$ defined by:
$$P^{(b)}\big|_{\mathcal{F}_t} = \exp\left\{-\frac{b}{2}\left(|B_t|^2 - x - \delta t\right) - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right\}\cdot P\big|_{\mathcal{F}_t}.$$
Then, under $P^{(b)}$, $(B_u, u \le t)$ is an Ornstein-Uhlenbeck process with parameter $-b$, starting from $a$; it satisfies the following equation:
$$B_u = a + \beta_u - b\int_0^u ds\, B_s, \quad u \le t,$$
where $(\beta_u, u \le t)$ is a $(P^{(b)}, \mathcal{F}_t)$ Brownian motion. Hence, $(B_u, u \le t)$ may be expressed explicitly in terms of $\beta$, as:
$$B_u = e^{-bu}\left(a + \int_0^u e^{bs}\,d\beta_s\right), \quad u \le t.$$
This clearly solves the problem, since we have:
$$I_{\alpha,b} = E^{(b)}\left[\exp\left(-\alpha|B_t|^2 + \frac{b}{2}\left(|B_t|^2 - x - \delta t\right)\right)\right], \qquad (2.2)$$
a formula from which we can immediately compute the mean and the variance of the Gaussian variable $B_t$ (considered under $P^{(b)}$).

¹ Throughout the volume, we use the French abbreviations ch, sh, th for, respectively, cosh, sinh, tanh.
Formula (2.1) now follows from some straightforward, if tedious, computations.

Exercise 2.1: Show that:
$$\exp\left\{\frac{b}{2}\left(|B_t|^2 - x - \delta t\right) - \frac{b^2}{2}\int_0^t ds\,|B_s|^2\right\}$$
is also a $(P, (\mathcal{F}_t))$ martingale, and that we might have considered this martingale as a Radon-Nikodym density to arrive at the same formula (2.1).

(2.1.2) The same method allows to compute the joint Fourier-Laplace transform of the pair: $\left(\int_0^t f(u)\,dB_u,\ \int_0^t du\, B_u^2\right)$, where, for simplicity, we take here the dimension $\delta$ to be 1. However, to compute:
$$E\left[\exp\left(i\int_0^t f(u)\,dB_u - \frac{b^2}{2}\int_0^t du\, B_u^2\right)\right],$$
all we need to know, thanks to the representation (2.2), is the joint distribution of $\int_0^t f(u)\,dB_u$ and $B_t$, under $P^{(b)}$. This is clearly equivalent to being able to compute, under $P^{(b)}$, the mean and variance of $\int_0^t g(u)\,dB_u$, for any $g \in L^2([0,t], du)$. Indeed, under $P^{(b)}$, we have:
$$\int_0^t g(u)\,dB_u = \int_0^t g(u)\left\{-be^{-bu}\left(a + \int_0^u e^{bs}\,d\beta_s\right)du + e^{-bu}\left(e^{bu}\,d\beta_u\right)\right\}$$
$$= -ba\int_0^t g(u)e^{-bu}\,du + \int_0^t d\beta_u\left(g(u) - be^{bu}\int_u^t e^{-bs}g(s)\,ds\right).$$
Hence, the mean of $\int_0^t g(u)\,dB_u$ under $P^{(b)}$ is: $-ba\int_0^t g(u)e^{-bu}\,du$, and its variance is:
$$\int_0^t du\left(g(u) - be^{bu}\int_u^t e^{-bs}g(s)\,ds\right)^2.$$
We shall not continue the discussion at this level of generality, but, instead, we indicate one example where the computations have been completely carried out. The next formulae will be simpler if we work in a two-dimensional setting; therefore, we shall consider $Z_u = X_u + iY_u$, $u \ge 0$, a $\mathbb{C}$-valued BM starting from 0, and we define $G = \int_0^1 ds\, Z_s$, the barycenter of $Z$ over the time interval $[0,1]$. The above calculations lead to the following formula (taken with small enough $\rho, \sigma \ge 0$):
$$E\left[\exp\left(-\frac{\lambda^2}{2}\left(\int_0^1 ds\,|Z_s|^2 - \rho|G|^2 - \sigma|Z_1|^2\right)\right)\right] = \left((1-\rho)\,\mathrm{ch}\lambda + \rho\,\frac{\mathrm{sh}\lambda}{\lambda} + \sigma\left[(\rho-1)\lambda\,\mathrm{sh}\lambda - 2\rho(\mathrm{ch}\lambda - 1)\right]\right)^{-1} \qquad (2.4)$$
which had been obtained by a different method by Chan-Dean-Jansons-Rogers [26].
(2.1.3) Before we continue with some consequences of formulae (2.1) and (2.4), let us make some remarks about the above method: it consists in changing probability so that the quadratic functional disappears, and the remaining problem is to compute the mean and variance of a Gaussian variable. Therefore, this method consists in transferring some computational problem for a variable belonging to (the first and) the second Wiener chaos to computations for a variable in the first chaos; in other words, it consists in a linearization of the original problem. In the last paragraph of this Chapter, we shall use this method again to deal
with the more general problem, when $\int_0^t ds\,|B_s|^2$ is replaced by $\int_0^t d\mu(s)\,|B_s|^2$.
(2.1.4) A number of computations found in the literature can be obtained very easily from the formulae (2.1) and (2.4).
a) The following formula is easily deduced from formula (2.1):
$$E_a\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\ \Big|\ B_t = 0\right] = E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\ \Big|\ B_t = a\right]$$
$$= \left(\frac{bt}{\mathrm{sh}(bt)}\right)^{\delta/2}\exp\left(-\frac{|a|^2}{2t}\left(bt\coth(bt) - 1\right)\right), \qquad (2.5)$$
which, in the particular case $a = 0$, yields the formula:
$$E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\ \Big|\ B_t = 0\right] = \left(\frac{bt}{\mathrm{sh}(bt)}\right)^{\delta/2}. \qquad (2.6)$$
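Formula (2.6) lends itself to a quick Monte Carlo sanity check in dimension $\delta = 1$, $t = 1$, using the realization of the Brownian bridge from Chapter 1 ($B_u - uB_1$, $u \le 1$); this numerical sketch is an addition to the text, and all sample sizes below are arbitrary choices:

```python
import math
import random

def bridge_laplace_mc(b=1.0, n_steps=200, n_paths=4000, seed=3):
    """E[exp(-(b^2/2) int_0^1 bridge(s)^2 ds)] by Monte Carlo (delta = 1, t = 1)."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    acc = 0.0
    for _ in range(n_paths):
        # simulate B on the grid, then centre it into a bridge: B_s - s * B_1
        path = [0.0]
        for _ in range(n_steps):
            path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
        b1 = path[-1]
        integral = sum((path[k] - (k * dt) * b1) ** 2 for k in range(n_steps)) * dt
        acc += math.exp(-0.5 * b * b * integral)
    return acc / n_paths

estimate = bridge_laplace_mc()
exact = math.sqrt(1.0 / math.sinh(1.0))   # (bt / sh(bt))^{1/2} with b = t = 1
```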
Lévy's formula for the stochastic area:
$$A_t \overset{\mathrm{def}}{=} \int_0^t (X_s\,dY_s - Y_s\,dX_s)$$
of planar Brownian motion $B_t = (X_t, Y_t)$ may now be deduced from formula (2.5); precisely, one has:
$$E_0\left[\exp(ibA_t) \mid B_t = a\right] = \frac{bt}{\mathrm{sh}(bt)}\exp\left(-\frac{|a|^2}{2t}\left(bt\coth(bt) - 1\right)\right). \qquad (2.7)$$
To prove formula (2.7), first remark that, thanks to the rotational invariance of the law of Brownian motion (starting from 0), we have:
$$E_0\left[\exp(ibA_t) \mid B_t = a\right] = E_0\left[\exp(ibA_t) \mid |B_t| = |a|\right],$$
and then, we can write:
$$A_t = \int_0^t |B_s|\,d\gamma_s,$$
where $(\gamma_t, t \ge 0)$ is a one-dimensional Brownian motion independent from $(|B_s|, s \ge 0)$. Therefore, we obtain:
$$E_0\left[\exp(ibA_t) \mid |B_t| = |a|\right] = E_0\left[\exp\left(-\frac{b^2}{2}\int_0^t ds\,|B_s|^2\right)\ \Big|\ B_t = a\right],$$
and formula (2.7) is now deduced from formula (2.5).
b) Similarly, from formula (2.4), one deduces:
$$E\left[\exp\left(-\frac{\mu^2}{2}\int_0^1 ds\,|Z_s - G|^2\right)\ \Big|\ Z_1 = z\right] = \left(\frac{\mu/2}{\mathrm{sh}(\mu/2)}\right)^2\exp\left(-\frac{|z|^2}{2}\left(\frac{\mu}{2}\coth\frac{\mu}{2} - 1\right)\right). \qquad (2.8)$$
c) As yet another example of application of the method, we now derive the following formula obtained by M. Wenocur [91] (see also, in the same vein, [92]): consider $(W(t), t \ge 0)$ a 1-dimensional BM, starting from 0, and define: $X_t = W_t + \mu t + x$, so that $(X_t, t \ge 0)$ is the Brownian motion with drift $\mu$, starting from $x$. Then, M. Wenocur [91] obtained the following formula:
$$E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\, X_s^2\right)\right] = \frac{1}{(\mathrm{ch}\lambda)^{1/2}}\exp(H(x, \mu, \lambda)), \qquad (2.9)$$
where
$$H(x, \mu, \lambda) = -\frac{\mu^2}{2}\left(1 - \frac{\mathrm{th}\lambda}{\lambda}\right) - x\mu\left(1 - \frac{1}{\mathrm{ch}\lambda}\right) - \frac{x^2}{2}\,\lambda\,\mathrm{th}\lambda.$$
We shall now sketch a proof of this formula, by applying twice Girsanov’s theorem. First of all, we may “get rid of the drift µ”, since: ⎡ ⎛ ⎞⎤ 1 λ2 2 ds Xs ⎠⎦ E ⎣exp ⎝− 2 0 ⎡ ⎤ 1 λ2 µ2 2 exp − ds Xs ⎦ = Ex ⎣exp µ(X1 − x) − 2 2
0
where $P_x$ denotes the law of Brownian motion starting from $x$. We apply Girsanov's theorem a second time, thereby replacing $P_x$ by $P_x^{(\lambda)}$, the law of the Ornstein-Uhlenbeck process, with parameter $\lambda$, starting from $x$. We then obtain:

$$E_x\left[\exp\left(\mu X_1 - \frac{\lambda^2}{2}\int_0^1 ds\,X_s^2\right)\right] = E_x^{(\lambda)}\left[\exp\left(\mu X_1 + \frac{\lambda}{2}X_1^2\right)\right]\exp\left(-\frac{\lambda}{2}(x^2+1)\right),$$
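For the record (this computation is ours; the text only asserts that these quantities are known), under $P_x^{(\lambda)}$ the variable $X_1$ is Gaussian, and its mean and variance follow from the explicit solution of the Ornstein-Uhlenbeck equation:

```latex
% Under P_x^{(\lambda)}: dX_t = -\lambda X_t\,dt + d\beta_t, X_0 = x, whence
X_1 = x e^{-\lambda} + \int_0^1 e^{-\lambda(1-s)}\,d\beta_s\,,
\qquad
E_x^{(\lambda)}[X_1] = x e^{-\lambda}\,,
\qquad
\operatorname{Var}_x^{(\lambda)}(X_1) = \int_0^1 e^{-2\lambda(1-s)}\,ds = \frac{1-e^{-2\lambda}}{2\lambda}\,.
% The Gaussian computation of E_x^{(\lambda)}[\exp(\mu X_1 + \tfrac{\lambda}{2}X_1^2)]
% then yields formula (2.9).
```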
and it is now easy to finish the proof of (2.9), since, as shown at the beginning of this paragraph, the mean and variance of $X_1$ under $P_x^{(\lambda)}$ are known.

Exercise 2.2: 1) Extend formula (2.9) to a $\delta$-dimensional Brownian motion with constant drift.
2) Derive formula (2.1) from this extended formula (2.9).
Hint: Integrate both sides of the extended formula (2.9) with respect to $d\mu\,\exp(-c|\mu|^2)$ on $\mathbb{R}^\delta$.

Exercise 2.3: Let $(B_t, t \ge 0)$ be a 3-dimensional Brownian motion starting from 0.
1. Prove the following formula: for every $m \in \mathbb{R}^3$, $\xi \in \mathbb{R}^3$ with $|\xi| = 1$, and $\lambda \in \mathbb{R}^*$:

$$E\left[\exp\left(i\lambda\,\xi\cdot\int_0^1 B_s\times dB_s\right)\ \Big|\ B_1 = m\right] = \frac{\lambda}{\operatorname{sh}\lambda}\exp\left(\frac{|m|^2 - (\xi\cdot m)^2}{2}\,(1-\lambda\coth\lambda)\right),$$

where $x\cdot y$, resp.: $x\times y$, denotes the scalar product, resp.: the vector product, of $x$ and $y$ in $\mathbb{R}^3$.
Hint: Express $\xi\cdot\int_0^1 B_s\times dB_s$ in terms of the stochastic area of the 2-dimensional Brownian motion: $(\eta\cdot B_s\,;\ (\xi\times\eta)\cdot B_s\,;\ s\ge 0)$, where $\eta$ is a suitably chosen unit vector of $\mathbb{R}^3$, which is orthogonal to $\xi$.
2. Prove that, for $z \in \mathbb{R}^3$ and $\xi \in \mathbb{R}^3$ with $|\xi| = 1$, one has:

$$E\left[\exp i\left(z\cdot B_1 + \lambda\,\xi\cdot\int_0^1 B_s\times dB_s\right)\right] = \frac{1}{\operatorname{ch}\lambda}\exp\left(-\frac{1}{2}\left\{\frac{\operatorname{th}\lambda}{\lambda}\,|z|^2 + (z\cdot\xi)^2\left(1 - \frac{\operatorname{th}\lambda}{\lambda}\right)\right\}\right).$$

2.2 Some identities in law and an explanation of them via Fubini's theorem

(2.2.1) We consider again formula (2.4), in which we take $\rho = 1$ and $\sigma = 0$. We then obtain:

$$E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\,|Z_s - G|^2\right)\right] = \frac{\lambda}{\operatorname{sh}\lambda}\,;$$

but, from formula (2.6), we also know that, using the notation $(\tilde Z_s, s \le 1)$ for the complex Brownian bridge of length 1:

$$E\left[\exp\left(-\frac{\lambda^2}{2}\int_0^1 ds\,|\tilde Z_s|^2\right)\right] = \frac{\lambda}{\operatorname{sh}\lambda}\,;$$

hence, for any $\lambda \in \mathbb{R}^*$, the following identity in law holds:

$$\int_0^1 ds\,|Z_s - G|^2 \overset{(law)}{=} \int_0^1 ds\,|\tilde Z_s|^2\,, \qquad (2.10)$$

an identity which had been previously noticed by several authors (see, e.g., [33]). Obviously, the fact that $Z$, resp. $\tilde Z$, denotes a complex-valued BM, resp. Brownian bridge, instead of a real-valued process, is of no importance, and (2.10) is indeed equivalent to:

$$\int_0^1 dt\,(B_t - G)^2 \overset{(law)}{=} \int_0^1 dt\,\tilde B_t^2\,, \qquad (2.11)$$

where $(B_t, t \le 1)$, resp. $(\tilde B_t, t \le 1)$, now denotes a 1-dimensional BM, starting from 0, resp. a standard Brownian bridge.

(2.2.2) Our first aim in this paragraph is to give a simple explanation of (2.11) via Fubini's theorem. Indeed, if $B$ and $C$ denote two independent Brownian motions and $\varphi \in L^2([0,1]^2, du\,ds)$, we have:

$$\int_0^1 dB_u\int_0^1 dC_s\,\varphi(u,s) \overset{(law)}{=} \int_0^1 dC_s\int_0^1 dB_u\,\varphi(u,s) \qquad (2.12)$$

(in the sequel, we shall refer to this identity as to the "Fubini-Wiener identity in law"). Then, as a corollary, (2.12) yields:

$$\int_0^1 du\left(\int_0^1 dC_s\,\varphi(u,s)\right)^2 \overset{(law)}{=} \int_0^1 du\left(\int_0^1 dC_s\,\varphi(s,u)\right)^2. \qquad (2.13)$$

The identity (2.11) is now a particular instance of (2.13), as the following Proposition shows.

Proposition 2.1 Let $f : [0,1]\to\mathbb{R}$ be a $C^1$ function such that $f(1) = 1$. Then, we have:

$$\int_0^1 ds\left(B_s - \int_0^1 dt\,f'(t)B_t\right)^2 \overset{(law)}{=} \int_0^1 ds\,(B_s - f(s)B_1)^2\,;$$

in particular, in the case $f(s) = s$, we obviously recover (2.11).

Proof: It follows from the identity in law (2.13), where we take:

$$\varphi(s,u) = 1_{(u\le s)} - (f(1) - f(u))\,1_{((s,u)\in[0,1]^2)}\,.$$

Here is another variant, due to Shi Zhan.

Exercise 2.4: Let $\mu(dt)$ be a probability on $\mathbb{R}_+$. Prove that:

$$\int_0^\infty \mu(dt)\left(B_t - \int_0^\infty \mu(ds)B_s\right)^2 \overset{(law)}{=} \int_0^\infty dt\,\tilde B^2_{\mu[0,t]}\,,$$

where $(\tilde B_u, u \le 1)$ is a standard Brownian bridge, starting from 0.

As a second application of (2.12), or rather of a discrete version of (2.12), we prove a striking identity in law, which resembles the integration by parts formula.

Theorem 2.1 Let $(B_t, t \ge 0)$ be a 1-dimensional BM starting from 0. Let $0 \le a \le b < \infty$, and $f, g : [a,b]\to\mathbb{R}_+$ be two continuous functions, with $f$ decreasing, and $g$ increasing. Then:

$$\int_a^b\left(-df(x)\right)B^2_{g(x)} + f(b)B^2_{g(b)} \overset{(law)}{=} g(a)B^2_{f(a)} + \int_a^b dg(x)\,B^2_{f(x)}\,. \qquad (2.14)$$

In order to prove (2.14), it suffices to show that the identity in law:

$$-\sum_{i=1}^{n-1}\left(f(t_{i+1}) - f(t_i)\right)B^2_{g(t_i)} + f(t_n)B^2_{g(t_n)} \overset{(law)}{=} g(t_1)B^2_{f(t_1)} + \sum_{i=2}^{n}\left(g(t_i) - g(t_{i-1})\right)B^2_{f(t_i)} \qquad (2.15)$$

holds, where $a = t_1 < t_2 < \cdots < t_n = b$, and then to let the mesh of the subdivision tend to 0. Now, (2.15) is a particular case of a discrete version of (2.12), which we now state.

Theorem 2.2 Let $X_n = (X_1, \ldots, X_n)$ be an $n$-dimensional Gaussian vector, the components of which are independent, centered, with variance 1. Then, for any $n\times n$ matrix $A$, we have:

$$|AX_n| \overset{(law)}{=} |A^*X_n|\,,$$

where $A^*$ is the transpose of $A$, and, if $x_n = (x_1, \ldots, x_n)\in\mathbb{R}^n$, we denote: $|x_n| = \left(\sum_{i=1}^n x_i^2\right)^{1/2}$.

Corollary 2.1 Let $(Y_1, \ldots, Y_n)$ and $(Z_1, \ldots, Z_n)$ be two $n$-dimensional Gaussian vectors such that:
i) $Y_1, Y_2 - Y_1, \ldots, Y_n - Y_{n-1}$ are independent;
ii) $Z_n, Z_n - Z_{n-1}, \ldots, Z_2 - Z_1$ are independent.
Then, we have:

$$-\sum_{i=1}^{n} Y_i^2\left(E(Z_{i+1}^2) - E(Z_i^2)\right) \overset{(law)}{=} \sum_{i=1}^{n} Z_i^2\left(E(Y_i^2) - E(Y_{i-1}^2)\right), \qquad (*)$$

where we have used the convention: $E(Z_{n+1}^2) = E(Y_0^2) = 0$. The identity in law (2.15) now follows as a particular case of $(*)$.

2.3 The laws of squares of Bessel processes

Consider $(B_t, t \ge 0)$ a $\delta$-dimensional ($\delta\in\mathbb{N}$, for the moment) Brownian motion starting from $a$, and define: $X_t = |B_t|^2$. Then, $(X_t, t \ge 0)$ satisfies the following equation:

$$X_t = x + 2\int_0^t\sqrt{X_s}\,d\beta_s + \delta t\,, \qquad (2.16)$$

where $x = |a|^2$, and $(\beta_t, t \ge 0)$ is a 1-dimensional Brownian motion. More generally, from the theory of 1-dimensional stochastic differential equations, we know that for any pair $x, \delta \ge 0$, the equation (2.16) admits one strong solution; hence, a fortiori, it enjoys the uniqueness in law property. Therefore, we may define, on the canonical space $\Omega_+ \equiv C(\mathbb{R}_+, \mathbb{R}_+)$, $Q_x^\delta$ as the law of a process which satisfies (2.16). The family $(Q_x^\delta,\ \delta \ge 0,\ x \ge 0)$ possesses the following additivity property, which is obvious for integer dimensions.

Theorem 2.3 (Shiga-Watanabe [83]) For any $\delta, \delta' \ge 0$, $x, x' \ge 0$, the identity:

$$Q_x^\delta * Q_{x'}^{\delta'} = Q_{x+x'}^{\delta+\delta'}$$

holds, where $*$ denotes the convolution of two probabilities on $\Omega_+$.

Now, for any positive, $\sigma$-finite, Radon measure $\mu$ on $\mathbb{R}_+$, we define:

$$I_\mu(\omega) = \int_0^\infty d\mu(s)\,X_s(\omega)\,,$$

and we deduce from the theorem that there exist two positive constants $A(\mu)$ and $B(\mu)$ such that:

$$Q_x^\delta\left(\exp-\frac{1}{2}I_\mu\right) = (A(\mu))^x\,(B(\mu))^\delta\,.$$

The next theorem allows to compute $A(\mu)$ and $B(\mu)$.

Theorem 2.4 For any $\ge 0$ Radon measure $\mu$ on $[0,\infty)$, one has:

$$Q_x^\delta\left(\exp-\frac{1}{2}I_\mu\right) = \left(\varphi_\mu(\infty)\right)^{\delta/2}\exp\left(\frac{x}{2}\,\varphi_\mu'^{\,+}(0)\right),$$

where $\varphi_\mu$ denotes the unique solution of:

$$\varphi'' = \mu\varphi \ \text{ on } (0,\infty)\,; \qquad \varphi_\mu(0) = 1\,, \quad 0 \le \varphi \le 1\,,$$

and $\varphi_\mu'^{\,+}(0)$ is the right derivative of $\varphi_\mu$ at 0.

Proof: For simplicity, we assume that $\mu$ is diffuse, and that its support is contained in $(0,1)$. Define: $F_\mu(t) = \dfrac{\varphi_\mu'(t)}{\varphi_\mu(t)}$, and $\hat F_\mu(t) = \displaystyle\int_0^t F_\mu(s)\,ds = \log\varphi_\mu(t)$. Then, remark that:

$$Z_t^\mu \overset{def}{=} \exp\left\{\frac{1}{2}\left(F_\mu(t)X_t - F_\mu(0)x - \delta\,\hat F_\mu(t)\right) - \frac{1}{2}\int_0^t X_s\,d\mu(s)\right\}$$

is a $Q_x^\delta$-martingale, since it may be written as:

$$\exp\left\{\int_0^t F_\mu(s)\,dM_s - \frac{1}{2}\int_0^t F_\mu^2(s)\,d\langle M\rangle_s\right\},$$

where: $M_t = \frac{1}{2}(X_t - x - \delta t)$, and $\langle M\rangle_t = \int_0^t ds\,X_s$. It now remains to write: $Q_x^\delta(Z_1^\mu) = 1$, and to use the fact that $F_\mu(1) = 0$ to obtain the result stated in the theorem.

Exercise 2.5:
1) Prove that the integration by parts formula (2.14) can be extended as follows:

$$(*) \qquad \int_a^b\left(-df(x)\right)X_{g(x)} + f(b)X_{g(b)} \overset{(law)}{=} g(a)X_{f(a)} + \int_a^b dg(x)\,X_{f(x)}\,,$$

where $X$ is a BESQ process, with any strictly positive dimension, starting from 0.

2) Prove the following convergence in law result:

$$\left(\sqrt n\left(\frac{1}{n}X_t^{(n)} - t\right),\ t\ge 0\right)\ \overset{(law)}{\underset{n\to\infty}{\longrightarrow}}\ \left(c\,\beta_{t^2},\ t\ge 0\right),$$

for a certain constant $c > 0$, where $(X_t^{(n)}, t \ge 0)$ denotes a BESQ$^n$ process, starting from 0, and $(\beta_t, t \ge 0)$ denotes a real-valued BM, starting from 0.

3) Prove that the process $(X_t \equiv \beta_{t^2},\ t \ge 0)$ satisfies $(*)$.

Comments on Chapter 2

For many reasons, a number of computations of the Laplace or Fourier transform of the distribution of quadratic functionals of Brownian motion, or related processes, are being published almost every year; the origins of the interests in such functionals range from Bismut's proof of the Atiyah-Singer theorem to polymer studies (see Chan-Dean-Jansons-Rogers [26] for the latter). Duplantier [36] presents a good list of references to the literature. The methods used by the authors to obtain closed formulae for the corresponding characteristic functions or Laplace transforms fall essentially into one of the three following categories:

i) P. Lévy's diagonalisation procedure, which has a strong functional analysis flavor; the characteristic functions or Laplace transforms then appear as infinite products, which have to be recognized in terms of, say, hyperbolic functions;

ii) the change of probability method which, in effect, linearizes the problem, i.e.: it allows to transform the study of a quadratic functional into the computation of the mean and variance of an adequate Gaussian variable; paragraph 2.1 above gives an important example of this method, which may be applied very generally and is quite powerful;

iii) finally, the reduction method, which simply consists in trying to reduce the computation for a certain quadratic functional to similar computations which have already been done; Exercise 2.2, and indeed the whole paragraph 2.2 above, give some examples of application.

Some extensions of the integration by parts formula (2.14) to stable processes and some converse studies have been made by Donati-Martin, Song and Yor [31]. The last formula in Exercise 2.3 is due to Foschini and Shepp [44], and the whole exercise is closely related to the work of Berthuet [6] on the stochastic volume of $(B_u, u \le 1)$. Paragraph 2.3 is closely related to Pitman-Yor ([73], [74]).
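As a worked illustration of Theorem 2.4 (this example is ours; it is consistent with formula (2.1) and with Wenocur's formula (2.9) taken with drift $\mu = 0$), take $\mu(ds) = \lambda^2\,1_{[0,1]}(s)\,ds$:

```latex
% Solve \varphi'' = \lambda^2\varphi on (0,1), \varphi'' = 0 beyond 1, with
% \varphi_\mu(0) = 1 and 0 \le \varphi \le 1 (hence \varphi constant on [1,\infty)):
\varphi_\mu(s) = \frac{\operatorname{ch}(\lambda(1-s))}{\operatorname{ch}\lambda}\ (s\le 1),
\qquad
\varphi_\mu(s) = \frac{1}{\operatorname{ch}\lambda}\ (s\ge 1).
% Then \varphi_\mu(\infty) = 1/\operatorname{ch}\lambda and \varphi_\mu'^{\,+}(0) = -\lambda\operatorname{th}\lambda,
% so Theorem 2.4 gives:
Q_x^\delta\left(\exp-\frac{\lambda^2}{2}\int_0^1 ds\,X_s\right)
  = (\operatorname{ch}\lambda)^{-\delta/2}\exp\left(-\frac{x}{2}\,\lambda\operatorname{th}\lambda\right).
```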
Chapter 3
Squares of Bessel processes and Ray-Knight theorems for Brownian local times

Chapters 1 and 2 were devoted to the study of some properties of variables in the first and second Wiener chaos. In the present Chapter, we are studying variables which are definitely at a much higher level of complexity in the Wiener chaos decomposition; in fact, they have infinitely many Wiener chaos components. More precisely, we shall study, in this Chapter, some properties of the Brownian local times, which may be defined by the occupation times formula:

$$\int_0^t ds\,f(B_s) = \int_{-\infty}^{\infty} da\,f(a)\,\ell_t^a\,, \qquad f \in b(\mathcal B(\mathbb{R}))\,.$$

Moreover, from Trotter's theorem, we may, and we shall, throughout this Chapter, choose the family $(\ell_t^a\,;\ a\in\mathbb{R},\ t\ge 0)$ to be jointly continuous. This occupation times formula transforms an integration in time into an integration in space, and it may be asked: what becomes of the Markov property through this change from time to space? In fact, the Ray-Knight theorems presented below show precisely that there is some Markov property in space, that is: at least for some suitably chosen stopping times $T$, the process $(\ell_T^a,\ a\in\mathbb{R})$ is a strong Markov process, the law of which can be described precisely. More generally, we shall try to show some evidence, throughout this Chapter, of a general transfer principle from time to space, which, in our opinion, permeates the various developments made around the Ray-Knight theorems on Brownian local times.

3.1 The basic Ray-Knight theorems

There are two such theorems, the first one being related to $T \equiv \tau_x = \inf\{t \ge 0 : \ell_t^0 = x\}$, and the second one to $T \equiv T_1 = \inf\{t : B_t = 1\}$. In the following, we simply write $P$ for the standard Wiener measure.

(RK1) The processes $(\ell_{\tau_x}^a,\ a\ge 0)$ and $(\ell_{\tau_x}^{-a},\ a\ge 0)$ are two independent squares of 0-dimensional Bessel processes, starting at $x$, i.e.: their common law is $Q_x^0$.

(RK2) The process $(\ell_{T_1}^{1-a},\ 0\le a\le 1)$ is the square of a 2-dimensional Bessel process starting from 0, i.e.: its law is $Q_0^2$.

There are several important variants of (RK2), among which the two following ones.

(RK2)(a) If $(R_3(t),\ t\ge 0)$ denotes the 3-dimensional Bessel process starting from 0, then the law of $(\ell_\infty^a(R_3),\ a\ge 0)$ is $Q_0^2$.

(RK2)(b) The law of $(\ell_\infty^a(|B| + \ell^0),\ a\ge 0)$ is $Q_0^2$.

We recall that (RK2)(a) follows from (RK2), thanks to Williams' time reversal result:

$$(B_t,\ t\le T_1) \overset{(law)}{=} (1 - R_3(L_1 - t),\ t\le L_1)\,,$$

where $L_1 = \sup\{t > 0 : R_3(t) = 1\}$; (RK2)(b) follows from (RK2)(a), thanks to Pitman's representation of $R_3$ (see [71]), which may be stated as:

$$(R_3(t),\ t\ge 0) \overset{(law)}{=} (|B_t| + \ell_t^0,\ t\ge 0)\,.$$

We now give a first example of the transfer principle from time to space mentioned above. Consider, for $\mu\in\mathbb{R}$, the solution of:

$$(*) \qquad X_t = B_t + \mu\int_0^t ds\,1_{(X_s>0)}\,,$$

and call $P^{\mu,+}$ the law of this process on the canonical space $\Omega^*$. Then, from Girsanov's theorem, we have:

$$P^{\mu,+}\big|_{\mathcal F_t} = \exp\left\{\mu\int_0^t 1_{(X_s>0)}\,dX_s - \frac{\mu^2}{2}\int_0^t ds\,1_{(X_s>0)}\right\}\cdot P\big|_{\mathcal F_t} = \exp\left\{\mu\left(X_t^+ - \frac{1}{2}\ell_t^0\right) - \frac{\mu^2}{2}\int_0^t ds\,1_{(X_s>0)}\right\}\cdot P\big|_{\mathcal F_t}\,,$$

where $(X_t)_{t\ge0}$ denotes the canonical process on $\Omega^*$, and $(\ell_t^0)_{t\ge0}$ its local time at 0 (which is well defined $P$-a.s.). It follows from the above Radon-Nikodym relationship that, for any $\ge 0$ measurable functional $F$ on $\Omega_+^*$, we have:

$$(\dagger) \qquad E^{\mu,+}\left[F\left(\ell_{T_1}^{1-a},\ 0\le a\le 1\right)\right] = E\left[F\left(\ell_{T_1}^{1-a},\ 0\le a\le 1\right)\exp\left\{-\frac{\mu}{2}\left(\ell_{T_1}^0 - 2\right) - \frac{\mu^2}{2}\int_0^1 da\,\ell_{T_1}^a\right\}\right]$$
$$= Q_0^2\left[F(Z_a,\ 0\le a\le 1)\exp\left\{-\frac{\mu}{2}(Z_1 - 2) - \frac{\mu^2}{2}\int_0^1 da\,Z_a\right\}\right],$$

where $(Z_a,\ a\ge 0)$ now denotes the canonical process on $\Omega_+$ (to avoid confusion with $X$ on $\Omega^*$). The last equality follows immediately from (RK2). Now, the exponential which appears as a Radon-Nikodym density in $(\dagger)$ transforms $Q_0^2$ into ${}^{(-\mu)}Q_0^2$, a probability which is defined in the statement of Theorem 3.1 below (see paragraph 6 of Pitman-Yor [73] for details). Hence, we have just proved the following

Theorem 3.1 If $X^{(\mu)}$ denotes the solution of the equation $(*)$ above, then the law of $\left(\ell_{T_1}^{1-a}(X^{(\mu)}),\ 0\le a\le 1\right)$ is ${}^{(-\mu)}Q_0^2$, where ${}^{\beta}Q_x^\delta$ denotes the law of the square of the norm of a $\delta$-dimensional Ornstein-Uhlenbeck process with parameter $\beta$, starting at $x$, i.e.: a diffusion on $\mathbb{R}_+$ whose infinitesimal generator is:

$$2y\frac{d^2}{dy^2} + (2\beta y + \delta)\frac{d}{dy}\,.$$
3.2 The Lévy-Khintchine representation of $Q_x^\delta$

We have seen, in the previous Chapter, that for any $x, \delta \ge 0$, $Q_x^\delta$ is infinitely divisible (Theorems 2.3 and 2.4). We are now able to express its Lévy-Khintchine representation as follows.

Theorem 3.2 For any Borel function $f : \mathbb{R}_+\to\mathbb{R}_+$, and $\omega\in\Omega_+^*$, we set:

$$I_f(\omega) = \langle\omega, f\rangle = \int_0^\infty dt\,\omega(t)f(t) \qquad\text{and}\qquad f_u(t) = f(u+t)\,.$$

Then, for every $x, \delta \ge 0$:

$$Q_x^\delta\left(\exp-I_f\right) = \exp-\int M(d\omega)\left\{x\left[1 - \exp(-I_f(\omega))\right] + \delta\int_0^\infty du\left(1 - \exp-I_{f_u}(\omega)\right)\right\},$$

where $M(d\omega)$ is the image of the Itô measure $n_+$ of positive excursions by the application which associates to an excursion $\varepsilon$ the process of its local times:

$$\varepsilon \to \left(\ell_R^x(\varepsilon),\ x\ge 0\right).$$

Before we give the proof of the theorem, we make some comments about the representations of $Q_x^0$ and $Q_0^\delta$ separately: obviously, the representing measure of $Q_x^0$ is $xM(d\omega)$, whereas the representing measure of $Q_0^\delta$ is $\delta N(d\omega)$, where $N(d\omega)$ is characterized by:

$$\int N(d\omega)\left(1 - e^{-I_f(\omega)}\right) = \int M(d\omega)\int_0^\infty du\left(1 - e^{-I_{f_u}(\omega)}\right),$$

and it is not difficult to see that this formula is equivalent to:

$$\int N(d\omega)\,F(\omega) = \int M(d\omega)\int_0^\infty du\,F\left(\omega((\cdot - u)^+)\right),$$

for any measurable $\ge 0$ functional $F$.

Now, in order to prove the theorem, all we need to do is to represent $Q_x^0$, and $Q_0^\delta$ for some dimension $\delta$; we shall use (RK1) to represent $Q_x^0$, and (RK2)(b) to represent $Q_0^2$. Our main tool will be (as is to be expected!) excursion theory. We first state the following consequences of the master formulae of excursion theory (see [81], Chapter XII, Propositions (1.10) and (1.12)).

Proposition 3.1 Let $(M_t,\ t\ge 0)$ be a bounded, continuous process with bounded variation on compacts of $\mathbb{R}_+$, such that: $1_{(B_t\neq 0)}\,dM_t = 0$.
(i) If, in fact, $(M_t,\ t\ge 0)$ is a multiplicative functional, then we have:

$$E[M_{\tau_x}] = \exp-x\int n(d\varepsilon)\left(1 - M_R(\varepsilon)\right),$$

where $n(d\varepsilon)$ denotes the Itô characteristic measure of excursions.
(ii) More generally, if the multiplicativity property assumption is replaced by: $(M_t,\ t\ge 0)$ is a skew multiplicative functional, in the following sense:

$$M_{\tau_s} = M_{\tau_{s-}}\left(M_R^{(s)}\right)\circ\theta_{\tau_{s-}} \qquad (s\ge 0)\,,$$

for some measurable family of r.v.'s $(M_R^{(s)},\ s\ge 0)$, then the previous formula should be modified as:

$$E[M_{\tau_x}] = \exp\left(-\int_0^x ds\int n(d\varepsilon)\left(1 - M_R^{(s)}(\varepsilon)\right)\right).$$

Taking $M_t \equiv \exp-\int_0^t ds\,f(B_s, \ell_s^0)$, for $f : \mathbb{R}\times\mathbb{R}_+\to\mathbb{R}_+$ a Borel function, we obtain, as an immediate consequence of the Proposition, the following important formula:

$$(*) \qquad E\left[\exp\left(-\int_0^{\tau_x} ds\,f(B_s, \ell_s^0)\right)\right] = \exp-\int_0^x ds\int n(d\varepsilon)\left(1 - \exp-\int_0^R du\,f(\varepsilon(u), s)\right).$$

As an application, if we take $f(y,\ell) = 1_{(y\ge0)}\,g(y)$, then the left-hand side of $(*)$ becomes, thanks to (RK1):

$$Q_x^0\left(\exp-I_g\right),$$

while the right-hand side of $(*)$ becomes:

$$\exp-x\int n_+(d\varepsilon)\left(1 - \exp-\int_0^R du\,g(\varepsilon(u))\right) = \exp-x\int M(d\omega)\left(1 - e^{-I_g(\omega)}\right),$$

from the definition of $M$. Next, if we write formula $(*)$ with $f(y,\ell) = g(|y| + \ell)$, and $x = \infty$, then the left-hand side becomes, thanks to (RK2)(b):

$$Q_0^2\left(\exp-I_g\right),$$

while the right-hand side becomes:

$$\exp-2\int_0^\infty ds\int n_+(d\varepsilon)\left(1 - \exp-\int_0^R du\,g(\varepsilon(u) + s)\right) = \exp-2\int_0^\infty ds\int M(d\omega)\left(1 - \exp-\langle\omega, g_s\rangle\right)$$
$$= \exp-2\int N(d\omega)\left(1 - \exp-\langle\omega, g\rangle\right),$$

from the definition of $N$. Thus, we have completely proved the theorem.
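A quick remark (ours): the representation of Theorem 3.2 is visibly consistent with the Shiga-Watanabe additivity of Theorem 2.3, since the exponent is affine in $(x,\delta)$:

```latex
% Writing \Psi(f) = \int M(d\omega)(1 - e^{-I_f(\omega)}) and
% \Phi(f) = \int_0^\infty du\int M(d\omega)(1 - e^{-I_{f_u}(\omega)}), Theorem 3.2 reads:
Q_x^\delta\left(e^{-I_f}\right) = \exp\left(-x\,\Psi(f) - \delta\,\Phi(f)\right),
\quad\text{whence}\quad
Q_x^\delta\left(e^{-I_f}\right)Q_{x'}^{\delta'}\left(e^{-I_f}\right) = Q_{x+x'}^{\delta+\delta'}\left(e^{-I_f}\right),
% which is the Laplace-transform version of Q_x^\delta * Q_{x'}^{\delta'} = Q_{x+x'}^{\delta+\delta'},
% and also explains the product form (A(\mu))^x (B(\mu))^\delta obtained in Chapter 2.
```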
3.3 An extension of the Ray-Knight theorems

(3.3.1) Now that we have obtained the Lévy-Khintchine representation of $Q_x^\delta$, we may use the infinite divisibility property again to obtain some extensions of the basic Ray-Knight theorems. First of all, it may be of some interest to define squares of Bessel processes with generalized dimensions, that is: some $\mathbb{R}_+$-valued processes which satisfy:

$$(*) \qquad X_t = x + 2\int_0^t\sqrt{X_s}\,d\beta_s + \Delta(t)\,,$$

where $\Delta : \mathbb{R}_+\to\mathbb{R}_+$ is a strictly increasing, continuous $C^1$ function, with $\Delta(0) = 0$ and $\Delta(\infty) = \infty$. With the help of some weak convergence argument, it is not difficult to show that the law $Q_x^\Delta$ of the unique solution of $(*)$ satisfies:

$$Q_x^\Delta\left(e^{-I_f}\right) = \exp-\int M(d\omega)\left\{x\left(1 - \exp-I_f(\omega)\right) + \int_0^\infty\Delta(ds)\left(1 - \exp-I_{f_s}(\omega)\right)\right\}.$$

Then, we have the following

Theorem 3.3 The family of local times of $\left(|B_u| + \Delta^{-1}(2\ell_u),\ u\ge 0\right)$ is $Q_0^\Delta$. In particular, the family of local times of $\left(|B_u| + \frac{2}{\delta}\ell_u,\ u\ge 0\right)$ is $Q_0^\delta$.

Proof: We use Proposition 3.1 with

$$M_t = \exp-\int_0^t ds\,f\left(|B_s| + \Delta^{-1}(2\ell_s)\right),$$

and we obtain, for any $x\ge 0$:

$$E[M_{\tau_x}] = \exp-\int_0^x ds\int n(d\varepsilon)\left(1 - \exp-\int_0^R du\,f\left(|\varepsilon(u)| + \Delta^{-1}(2s)\right)\right)$$
$$= \exp-2\int_0^x ds\int n_+(d\varepsilon)\left(1 - \exp-\int_0^R du\,f\left(\varepsilon(u) + \Delta^{-1}(2s)\right)\right)$$
$$= \exp-\int_0^{2x} dt\int n_+(d\varepsilon)\left(1 - \exp-\int_0^R du\,f\left(\varepsilon(u) + \Delta^{-1}(t)\right)\right)$$
$$= \exp-\int_0^{\Delta^{-1}(2x)} d\Delta(h)\int n_+(d\varepsilon)\left(1 - \exp-\int_0^R du\,f(\varepsilon(u) + h)\right),$$

and the result of the theorem now follows by letting $x\to\infty$. In fact, in the previous proof, we showed more than the final statement, since we considered the local times of $\left(|B_u| + \Delta^{-1}(2\ell_u)\ ;\ u\le\tau_x\right)$. In particular, the above proof shows the following

Theorem 3.4 Let $x > 0$, and consider $\tau_x \equiv \inf\{t\ge 0 : \ell_t > x\}$. Then, the processes $\left(\ell_{\tau_x}^{a - 2x/\delta}\left(|B| - \frac{2}{\delta}\ell\right),\ a\ge\frac{2x}{\delta}\right)$ and $\left(\ell_{\tau_x}^{a}\left(|B| + \frac{2}{\delta}\ell\right),\ a\ge 0\right)$ have the same law, namely that of an inhomogeneous Markov process $(Y_a,\ a\ge 0)$, which is the square of a $\delta$-dimensional Bessel process, starting from 0, for $a\le\frac{2x}{\delta}$, and the square of a 0-dimensional Bessel process for $a\ge\frac{2x}{\delta}$.

(3.3.2) These connections between Brownian occupation times and squares of Bessel processes explain very well why, when computing quantities to do with Brownian occupation times, we find formulae which also appeared in relation with Lévy's formula (see Chapter 2). Here is an important example. We consider a one-dimensional Brownian motion $(B_t,\ t\ge 0)$, starting from 0, and $S_t = \sup_{s\le t} B_s$ ($t\ge 0$). Let $a < 1$, and define $\sigma = \inf\{t : B_t = 1\}$. We are interested in the joint distribution of the triple:

$$A_\sigma^-(a) \overset{def}{=} \int_0^\sigma ds\,1_{(B_s < aS_s)}\,, \qquad A_\sigma^+(a) \overset{def}{=} \int_0^\sigma ds\,1_{(B_s > aS_s)}\,, \qquad \ell_\sigma^{(a)} \overset{def}{=} \ell_\sigma(B - aS)\,.$$

Using standard stochastic calculus, we obtain: for every $\mu, \lambda, \nu > 0$:

$$E\left[\exp-\left(\frac{\mu^2}{2}A_\sigma^-(a) + \lambda\,\ell_\sigma^{(a)} + \frac{\nu^2}{2}A_\sigma^+(a)\right)\right] = \left(\operatorname{ch}(\nu\bar a) + \frac{\mu + 2\lambda}{\nu}\operatorname{sh}(\nu\bar a)\right)^{-1/\bar a},$$
where $\bar a = 1 - a > 0$. On the other hand, from the definition of the family $(Q_x^\delta,\ \delta\ge 0,\ x\ge 0)$, we deduce from formula (2.1) and the additivity property, presented in Theorem 2.3 above, the following formula: for every $\delta\ge 0$, and $\lambda, \nu > 0$:

$$Q_0^\delta\left(\exp-\left(\frac{\nu^2}{2}\int_0^x dy\,X_y + \lambda X_x\right)\right) = \left(\operatorname{ch}(\nu x) + \frac{2\lambda}{\nu}\operatorname{sh}(\nu x)\right)^{-\delta/2},$$

where we denote by $(X_y,\ y\ge 0)$ a BESQ$^\delta$ process, starting from 0. Comparing the two previous expectations, we obtain the following identity in law, for $b > 0$:

$$(*) \qquad \left(A_\sigma^+(1-b),\ \ell_\sigma^{(1-b)}\right) \overset{(law)}{=} \left(\int_0^b dy\,X_y^{(2/b)},\ X_b^{(2/b)}\right),$$

where, on the right-hand side of $(*)$, $(X_y^{(2/b)},\ y\ge 0)$ denotes a BESQ$^{(2/b)}$ process, starting from 0. Until now in this subparagraph (3.3.2), we have not used any Ray-Knight theorem; we now do so, as we remark that, thanks to Lévy's representation of reflecting Brownian motion as $(S_t - B_t,\ t\ge 0)$, the left-hand side of $(*)$ is identical in law to:

$$\left(\int_0^b dy\,\ell_{\tau_1}^{y-b}(B - b),\ \ell_{\tau_1}^{0}(B - b)\right),$$

and the identity in law between this last written pair of r.v.'s and the right-hand side of $(*)$ follows directly from Theorem 3.4.

3.4 The law of Brownian local times taken at an independent exponential time

The basic Ray-Knight theorems (RK1) and (RK2) express the laws of Brownian local times in the space variable up to some particular stopping times, namely $\tau_x$ and $T_1$. It is a natural question to look for an identification of the law of Brownian local times up to a fixed time $t$. One of the inherent difficulties of this question is that now, the variable $B_t$ is not a constant; one way to circumvent this problem would be to condition with respect to the variable $B_t$; however, even when this is done, the answer to the problem is not particularly simple (see Perkins [68], and Jeulin [52]). In fact, if one considers the same problem at an independent exponentially distributed time, using the same sort of transfer principle arguments as we did at the end of paragraph (3.1), the answer becomes much simpler. This shows up clearly in the next

Proposition 3.2 Let $S_\theta$ be an independent exponential time, with parameter $\frac{\theta^2}{2}$, that is: $P(S_\theta\in ds) = \frac{\theta^2}{2}\exp\left(-\frac{\theta^2 s}{2}\right)ds$. Then:

1) $\ell_{S_\theta}^0$ and $B_{S_\theta}$ are independent, and have respective distributions:

$$P\left(\ell_{S_\theta}^0\in d\ell\right) = \theta e^{-\theta\ell}\,d\ell\,, \qquad P(B_{S_\theta}\in da) = \frac{\theta}{2}e^{-\theta|a|}\,da\,;$$

2) for any $\mathbb{R}_+$-valued, continuous additive functional $A$, conditionally on $\ell_{S_\theta}^0 = \ell$, and $B_{S_\theta} = a > 0$, the following formula holds:

$$E\left[\exp(-A_{S_\theta})\ \Big|\ \ell_{S_\theta}^0 = \ell,\ B_{S_\theta} = a\right] = E\left[\exp\left(-A_{\tau_\ell} - \frac{\theta^2}{2}\tau_\ell\right)\right]e^{\theta\ell}\cdot E_a\left[\exp\left(-A_{T_0} - \frac{\theta^2}{2}T_0\right)\right]e^{\theta a}\,.$$

Then, using the same sort of arguments, one obtains the following

Theorem 3.5 Conditionally on $\ell_{S_\theta}^0 = \ell$, and $B_{S_\theta} = a > 0$, the process $\left(\ell_{S_\theta}^x,\ x\in\mathbb{R}\right)$ is an inhomogeneous Markov process which may be described as follows:

i) $\left(\ell_{S_\theta}^{-x},\ x\ge 0\right)$ and $\left(\ell_{S_\theta}^{x},\ x\ge a\right)$ are diffusions with common infinitesimal generator:

$$2y\frac{d^2}{dy^2} - 2\theta y\frac{d}{dy}\,;$$

ii) $\left(\ell_{S_\theta}^{x},\ 0\le x\le a\right)$ is a diffusion with infinitesimal generator:

$$2y\frac{d^2}{dy^2} + (2 - 2\theta y)\frac{d}{dy}\,.$$
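A quick sanity check (ours): letting $\theta\to 0^+$ in Theorem 3.5 formally recovers the generators appearing in the basic Ray-Knight theorems:

```latex
% As \theta \to 0^+:
2y\frac{d^2}{dy^2} - 2\theta y\frac{d}{dy} \ \longrightarrow\ 2y\frac{d^2}{dy^2}
  \quad\text{(generator of squared 0-dimensional Bessel, as in (RK1)),}
\qquad
2y\frac{d^2}{dy^2} + (2 - 2\theta y)\frac{d}{dy} \ \longrightarrow\ 2y\frac{d^2}{dy^2} + 2\frac{d}{dy}
  \quad\text{(generator of squared 2-dimensional Bessel, as in (RK2)).}
```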
This theorem may be extended to describe the local times of $|B| + \frac{2}{\delta}\ell$ considered up to an independent exponential time (see Biane-Yor [19]).

Exercise 3.1 Extend the second statement of Proposition 3.2 by showing that, if $A^-$ and $A^+$ are two $\mathbb{R}_+$-valued continuous additive functionals, the following formula holds:

$$E\left[\exp-\left(A_{g_{S_\theta}}^- + A_{S_\theta}^+ - A_{g_{S_\theta}}^+\right)\ \Big|\ \ell_{S_\theta}^0 = \ell,\ B_{S_\theta} = a\right] = E\left[\exp-\left(A_{\tau_\ell}^- + \frac{\theta^2}{2}\tau_\ell\right)\right]e^{\theta\ell}\cdot E_a\left[\exp-\left(A_{T_0}^+ + \frac{\theta^2}{2}T_0\right)\right]e^{\theta a}\,.$$

3.5 Squares of Bessel processes and squares of Bessel bridges

From the preceding discussion, the reader might draw the conclusion that the extension of Ray-Knight theorems from Brownian (or Bessel) local times to the local times of the processes: $\Sigma_t^\delta \equiv |B_t| + \frac{2}{\delta}\ell_t$ ($t\ge 0$) is plain-sailing. It will be shown that, except for the case $\delta = 2$, the non-Markovian character of $\Sigma^\delta$ creates some important, and thought provoking, difficulties, which we shall describe. On a more positive view point, in this paragraph, we present an additive decomposition of the square of a Bessel process of dimension $\delta$ as the sum of the square of a $\delta$-dimensional Bessel bridge, and an interesting independent process, which shall be identified. More precisely, we show:

$$(*) \qquad Q_0^\delta = Q_{0\to0}^\delta * R^\delta\,,$$

where $R^\delta$ is a probability on $\Omega_+^*$, which we shall describe. (We hope that the notation $R^\delta$ for this remainder, or residual, probability will not create any confusion with the notation for Bessel processes, often written as $(R_\delta(t),\ t\ge 0)$; the context should help.)

(3.5.1) The case $\delta = 2$. In this case, the decomposition $(*)$ is obtained by writing:

$$\ell_\infty^a(R_3) = \ell_{T_1}^a(R_3) + \left(\ell_\infty^a(R_3) - \ell_{T_1}^a(R_3)\right).$$

The process $\left(\ell_{T_1}^a(R_3),\ 0\le a\le 1\right)$ has the law $Q_{0\to0}^2$, which may be seen by a Markovian argument, and, thanks to the strong Markov property of $R_3$, it is independent of $\left(\ell_\infty^a(R_3) - \ell_{T_1}^a(R_3),\ 0\le a\le 1\right)$. We now define $R^2$ as the law of $\left(\ell_\infty^a(R_3) - \ell_{T_1}^a(R_3),\ 0\le a\le 1\right)$, which is also, again by the strong Markov property, the law of the local times $\left(\ell_\infty^a({}^{(1)}R_3),\ 0\le a\le 1\right)$, below level 1, of a 3-dimensional Bessel process ${}^{(1)}R_3$ starting from 1.

We may now state two interesting representations of $R^2$. First, $R^2$ can be represented as:

$$R^2 = \mathcal{L}\left(r_4^2\left((a-U)^+\right),\ 0\le a\le 1\right), \qquad (3.1)$$

where $\mathcal{L}(\gamma(a),\ 0\le a\le 1)$ denotes the law of the process $\gamma$, $(r_4(a),\ 0\le a\le 1)$ denotes a 4-dimensional Bessel process starting from 0, and $U$ is a uniform variable on $[0,1]$, independent of $r_4$. This representation follows from Williams' path decomposition of Brownian motion $(B_t,\ t\le\sigma)$, where $\sigma = \inf\{t : B_t = 1\}$.

The following representation of $R^2$ is also interesting:

$$R^2 = \int_0^\infty\frac{dx}{2}\,e^{-x/2}\,\hat Q_{x\to0}^0\,. \qquad (3.2)$$

In the sequel, we shall use the notation $\hat P$ to denote the probability on $\Omega_+^*$ obtained by time reversal at time 1 of the probability $P$, that is:

$$\hat E\left[F(X_t,\ t\le 1)\right] = E\left[F(X_{1-t},\ t\le 1)\right].$$

The formula (3.2) may be interpreted as: the law of $\left(\ell_\infty^a({}^{(1)}R_3),\ 0\le a\le 1\right)$, given $\ell_\infty^1({}^{(1)}R_3) = x$, is $\hat Q_{x\to0}^0$; or, using Williams' time reversal result:

$$\hat R^2 = \int_0^\infty\frac{dx}{2}\,e^{-x/2}\,Q_{x\to0}^0 \quad\text{is the law of}\quad \left(\ell_{g_\sigma}^a(B),\ 0\le a\le 1\right), \qquad (3.3)$$

where $g_\sigma = \sup\{t\le\sigma : B_t = 0\}$.

To prove (3.2), we condition $Q_0^2$ with respect to $X_1$, and we use the additivity and time reversal properties of the squared Bessel bridges. Thus, we have:

$$Q_0^2 = \int_0^\infty\frac{dx}{2}\,e^{-x/2}\,Q_{0\to x}^2 = \int_0^\infty\frac{dx}{2}\,e^{-x/2}\,\hat Q_{x\to0}^2\,.$$

However, from the additivity property, we have:

$$Q_{x\to0}^2 = Q_{0\to0}^2 * Q_{x\to0}^0\,,$$

hence:

$$\hat Q_{x\to0}^2 = \hat Q_{0\to0}^2 * \hat Q_{x\to0}^0 = Q_{0\to0}^2 * \hat Q_{x\to0}^0\,,$$

so that we now obtain:

$$Q_0^2 = Q_{0\to0}^2 * \int_0^\infty\frac{dx}{2}\,e^{-x/2}\,\hat Q_{x\to0}^0\,.$$

Comparing this formula with the definition of $R^2$ given in $(*)$, we obtain (3.2).

(3.5.2) The general case $\delta > 0$. Again, we decompose $Q_0^\delta$ by conditioning with respect to $X_1$, and we use the additivity and time reversal properties of the squared Bessel bridges. More precisely, we have:

$$Q_0^\delta = \int_0^\infty\gamma_\delta(dx)\,Q_{0\to x}^\delta\,, \qquad\text{where}\quad \gamma_\delta(dx) = Q_0^\delta(X_1\in dx) = \frac{dx}{2}\,\frac{\left(\frac{x}{2}\right)^{\frac{\delta}{2}-1}e^{-x/2}}{\Gamma\left(\frac{\delta}{2}\right)}\,.$$

From the additivity property: $Q_{x\to0}^\delta = Q_{0\to0}^\delta * Q_{x\to0}^0$, we deduce:

$$Q_{0\to x}^\delta = Q_{0\to0}^\delta * \hat Q_{x\to0}^0\,,$$

and it follows that:

$$Q_0^\delta = Q_{0\to0}^\delta * \int_0^\infty\gamma_\delta(dx)\,\hat Q_{x\to0}^0\,,$$

so that:

$$R^\delta = \int_0^\infty\gamma_\delta(dx)\,\hat Q_{x\to0}^0 \equiv \int_0^\infty\gamma_2(dx)\,g_\delta(x)\,\hat Q_{x\to0}^0\,,$$
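The two representations (3.1) and (3.2) of $R^2$ can be checked against each other at the level $a = 1$; this verification is ours:

```latex
% Under (3.1), the value at a = 1 is r_4^2(1-U), with r_4^2(s) \overset{(law)}{=} s\,\chi_4^2:
E\left[e^{-\alpha r_4^2(1-U)}\right]
  = \int_0^1 ds\,(1+2\alpha s)^{-2}
  = \frac{1}{2\alpha}\left(1 - \frac{1}{1+2\alpha}\right)
  = \frac{1}{1+2\alpha}\,.
% Under (3.2), the value at a = 1 is the starting point x of \hat Q^0_{x\to0},
% distributed as \frac{dx}{2}e^{-x/2}, i.e. exponential with mean 2:
\int_0^\infty\frac{dx}{2}\,e^{-x/2}\,e^{-\alpha x} = \frac{1}{1+2\alpha}\,.
% Both representations thus assign the same (exponential) law to the local time at level 1.
```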
with: $g_\delta(x) = c_\delta\,x^{\frac{\delta}{2}-1}$, where $c_\delta = \dfrac{1}{\Gamma\left(\frac{\delta}{2}\right)}\left(\dfrac{1}{2}\right)^{\frac{\delta}{2}-1}$. Hence, we have obtained the following relation:

$$R^\delta = c_\delta\,(X_1)^{\frac{\delta}{2}-1}\cdot R^2\,,$$

and we may state the following

Theorem 3.6 For any $\delta > 0$, the additive decomposition:

$$Q_0^\delta = Q_{0\to0}^\delta * R^\delta$$

holds, where $R^\delta$ may be described as follows: $R^\delta$ is the law of the local times, for levels $a\le 1$, of the 3-dimensional Bessel process starting from 1, with weight: $c_\delta\left(\ell_\infty^1(R_3)\right)^{\frac{\delta}{2}-1}$; or, equivalently: $\hat R^\delta$ is the law of the local times process $\left(\ell_{g_\sigma}^a(B^{(\delta)}),\ 0\le a\le 1\right)$, where $B^{(\delta)}$ has the law $W^\delta$ defined by:

$$W^\delta\big|_{\mathcal F_\sigma} = c_\delta\left(\ell_\sigma^0\right)^{\frac{\delta}{2}-1}\cdot W\big|_{\mathcal F_\sigma}\,.$$

Before going any further, we remark that the family $(R^\delta,\ \delta > 0)$ also possesses the additivity property:

$$R^{\delta+\delta'} = R^\delta * R^{\delta'}\,,$$

and, with the help of the last written interpretation of $R^\delta$, we can now present the following interesting formula:

Theorem 3.7 Let $f : \mathbb{R}\to\mathbb{R}_+$ be any Borel function. Then we have:

$$W^\delta\left(\exp-\int_0^{g_\sigma}ds\,f(B_s)\right) = \left(W\left(\exp-\int_0^{g_\sigma}ds\,f(B_s)\right)\right)^{\delta/2}.$$

(3.5.3) An interpretation of $Q_{0\to0}^\delta$

The development presented in this subparagraph follows from the well-known fact that, if $(b(t),\ 0\le t\le 1)$ is a standard Brownian bridge, then:

$$(*) \qquad B_t = (t+1)\,b\left(\frac{t}{t+1}\right), \qquad t\ge 0\,,$$

is a Brownian motion starting from 0, and, conversely, the formula $(*)$ allows to define a Brownian bridge $b$ from a Brownian motion $B$. Consequently, to any Borel function $\tilde f : [0,1]\to\mathbb{R}_+$, there corresponds a Borel function $f : \mathbb{R}_+\to\mathbb{R}_+$, such that:

$$\int_0^\infty dt\,f(t)\,B_t^2 = \int_0^1 du\,\tilde f(u)\,b^2(u)\,,$$

and conversely. This correspondence is expressed explicitly by the two formulae:

$$f(t) = \frac{1}{(1+t)^4}\,\tilde f\left(\frac{t}{t+1}\right) \qquad\text{and}\qquad \tilde f(u) = \frac{1}{(1-u)^4}\,f\left(\frac{u}{1-u}\right).$$

These formulae, together with the additivity properties of $Q_0^\delta$ and $Q_{0\to0}^\delta$, allow us to obtain the following

Theorem 3.8 Define $\left(D_t^\delta,\ t < \tilde T_1^\delta \equiv \int_0^\infty\frac{ds}{(1+\Sigma_s^\delta)^4}\right)$ via the following space and time change formula:

$$\frac{\Sigma_t^\delta}{1+\Sigma_t^\delta} = D^\delta\left(\int_0^t\frac{ds}{(1+\Sigma_s^\delta)^4}\right).$$

Then, $Q_{0\to0}^\delta$ is the law of the local times of $\left(D_t^\delta,\ t < \tilde T_1^\delta\right)$. (Remark that $\left(D_t^\delta,\ t < \tilde T_1^\delta\right)$ may be extended by continuity to $t = \tilde T_1^\delta$, and we then have: $\tilde T_1^\delta = \inf\{t : D_t^\delta = 1\}$.)

Proof: For any Borel function $\tilde f : [0,1]\to\mathbb{R}_+$, we have, thanks to the remarks made previously:

$$Q_{0\to0}^\delta\left(\exp-\langle\omega,\tilde f\rangle\right) = Q_0^\delta\left(\exp-\langle\omega,f\rangle\right) \qquad\text{(from the relation } f\leftrightarrow\tilde f\text{)}$$
$$= E\left[\exp\left(-\int_0^\infty du\,f\left(\Sigma_u^\delta\right)\right)\right] \qquad\text{(from Theorem 3.3)}$$
$$= E\left[\exp-\int_0^\infty\frac{du}{(1+\Sigma_u^\delta)^4}\,\tilde f\left(\frac{\Sigma_u^\delta}{1+\Sigma_u^\delta}\right)\right] \qquad\text{(from the relation } f\leftrightarrow\tilde f\text{)}$$
$$= E\left[\exp-\int_0^{\tilde T_1^\delta}dv\,\tilde f\left(D_v^\delta\right)\right] \qquad\text{(from the definition of } D^\delta\text{)}.$$

The theorem is proven.

It is interesting to consider again the case $\delta = 2$ since, as argued in (3.5.1), $Q_{0\to0}^2$ is the law of the local times of $(R_3(t),\ t\le T_1(R_3))$. This is perfectly coherent with the above theorem, since we then have:

$$\left(D_t^2,\ t\le\tilde T_1^2\right) \overset{(law)}{=} \left(R_3(t),\ t\le T_1(R_3)\right).$$

Proof: If we define: $X_t = \dfrac{R_3(t)}{1+R_3(t)}$ ($t\ge 0$), we then have:

$$\frac{1}{X_t} = \frac{1+R_3(t)}{R_3(t)} = 1 + \frac{1}{R_3(t)}\,;$$

therefore, $(X_t,\ t\ge 0)$, which is a diffusion (from its definition in terms of $R_3$), is also such that $\left(\frac{1}{X_t},\ t\ge 0\right)$ is a local martingale. From this, it follows easily that $X_t = \tilde R_3(\langle X\rangle_t)$, where $(\tilde R_3(u),\ u\ge 0)$ is a 3-dimensional Bessel process, and, since:

$$\langle X\rangle_t = \int_0^t\frac{ds}{(1+R_3(s))^4}\,,$$

we get, finally, the desired result.

Remark: We could have obtained this result more directly by applying Itô's formula to $g(r) = \frac{r}{1+r}$; but, in our opinion, the above proof gives a better explanation of the ubiquity of $R_3$ in this question.

Exercise 3.2: Let $a, b > 0$, and $\delta > 2$. Prove that, if $(R_\delta(t),\ t\ge 0)$ is a $\delta$-dimensional Bessel process, then:

$$\frac{R_\delta(t)}{\left(a + b\,(R_\delta(t))^{\delta-2}\right)^{1/(\delta-2)}}$$

may be obtained by time changing a $\delta$-dimensional Bessel process, up to its first hitting time of $c = b^{-1/(\delta-2)}$.

This exercise may be generalised as follows:
Let f : IR → IR+ be a C 2 function.6 Generalized meanders and squares of Bessel processes 47 Exercise 3. t≥0 . t ≥ 0) be a realvalued diﬀusion. 2. and (Bt .6. t ≥ 0) denotes a realvalued Brownian motion starting from 0. 1. ﬁnally. where g = sup{u ≤ 1 : Bu = 0}. (ii) f (x) = exp(ax) . u ≥ 0) a δdimensional Bessel process. m(u) = √ 1−g u≤1 . there exists (Rδ (u). Prove that. 2 for ϕ ∈ C 2 (IR) .6 Generalized meanders and squares of Bessel processes (3.3: Let (Xt . Imhof [49] proved the following absolute continuity relation: M= c ·S X1 c= π 2 (3. let δ > 1.3.4) . if b and f are related by: b(x) = δ − 1 f (x) 1 f (x) − 2 f (x) 2 f (x) then.1) The Brownian meander. possibly deﬁned on an enlarged probability space. such that: ⎞ ⎛ t f (Xt ) = Rδ ⎝ 0 ds(f )2 (Xs )⎠ . 3. which plays an important role in a number of studies of Brownian motion. whose inﬁnitesimal generator L satisﬁes: Lϕ(x) = 1 ϕ (x) + b(x)ϕ (x) . Compute b(x) in the following cases: (i) f (x) = xα . may be deﬁned as follows: 1 Bg+u(1−g)  . and.
where M, resp.: S, denotes the law of (m(u), u ≤ 1), resp.: of (R(u), u ≤ 1), a BES(3) process starting from 0. Other proofs of (3.4) have been given by BianeYor [18], using excursion theory, and by AzémaYor ([1], paragraph 4), using an extension of Girsanov's theorem.

(3.6.2) BianeLe GallYor [16] proved the following absolute continuity relationship, which looks similar to (3.4):

  Mν = (cν / X_1^{2ν}) · Sν ,   (3.5)

where ν ∈ (0, 1), and Mν, resp.: Sν, denotes the law of

  mν(u) ≡ (1/√(1−gν)) R_{−ν}(gν + u(1−gν))   (u ≤ 1) ,

the Bessel meander associated to the Bessel process R_{−ν} of dimension 2(1−ν), starting from 0, resp.: the law of the Bessel process of dimension 2(1+ν), starting from 0. In particular, the law of mν(1), the value at time 1 of the Bessel meander, does not depend on ν, and is distributed as the 2dimensional Bessel process at time 1 (see Corollary 3.9.1 for an explanation).

Exercise 3.4: Deduce from formula (3.5) that:

  Mν[F(X_u, u ≤ 1) | X_1 = x] = Sν[F(X_u, u ≤ 1) | X_1 = x]

and that:

  Mν(X_1 ∈ dx) = x exp(−x²/2) dx .

It is not difficult, using the same kind of arguments, to prove the more general absolute continuity relationship:

  Nν = (2ν / X_1²) · Sν ,   (3.6)

where Nν denotes the law on C([0, 1], IR_+) of the process:

  nν(u) = (1/√Lν) Rν(Lν u)   (u ≤ 1) ,

with Lν ≡ sup{t > 0 : Rν(t) = 1}.
Exercise 3.5: Deduce from formula (3.6) that:

  Nν[F(X_u, u ≤ 1) | X_1 = x] = Sν[F(X_u, u ≤ 1) | X_1 = x]

and that:

  Nν(X_1 ∈ dx) = S_{ν−1}(X_1 ∈ dx) .

(Corollary 3.9.2 gives an explanation of this fact.)

(3.6.3) In this subparagraph, we shall consider, more generally than the righthand sides of (3.5) and (3.6), the law Sν modified via a RadonNikodym density of the form: c_{µ,ν}/X_1^µ, and we shall represent the new probability in terms of the laws of Bessel processes and Bessel bridges.

Precisely, consider (R_t, t ≤ 1) and (R'_t, t ≤ 1) two independent Bessel processes, starting from 0, with respective dimensions d and d'; condition R by R_1 = 0, and define M^{d,d'} to be the law of the process ((R_t)² + (R'_t)²)^{1/2}, t ≤ 1, obtained in this way; in other terms, the law of the square of this process, that is: ((R_t)² + (R'_t)², t ≤ 1), is Q^d_{0→0} ∗ Q^{d'}_0.

We may now state and prove the following

Theorem 3.9 Let P^δ_0 be the law on C([0, 1], IR_+) of the Bessel process with dimension δ, starting from 0. Then:

  M^{d,d'} = (c_{d,d'} / X_1^d) · P^{d+d'}_0 ,  where c_{d,d'} = 2^{d/2} Γ((d+d')/2) / Γ(d'/2) .   (3.7)

Proof: From the additivity property of squares of Bessel processes, which, in terms of the probabilities (Q^δ_x, δ ≥ 0, x ≥ 0), is expressed by:

  Q^d_x ∗ Q^{d'}_{x'} = Q^{d+d'}_{x+x'}

(see Theorem 2.3 above), it is easily deduced that:

  Q^{d+d'}_{x→0} = Q^d_{x→0} ∗ Q^{d'}_{0→0} .
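The additivity property used in this proof can be sanity-checked numerically at the level of one-dimensional marginals. The sketch below is not from the book (sample sizes and the test level 8 are ad hoc choices); it uses the standard fact that, starting from 0, a squared Bessel process of dimension delta at time 1 is distributed as 2·Gamma(delta/2):

```python
import numpy as np

# Numerical illustration (not from the book): the additivity property
# Q^d * Q^{d'} = Q^{d+d'} implies, at the one-dimensional marginal level,
# that BESQ_d(1) + BESQ_{d'}(1), from 0, has the BESQ_{d+d'}(1) law.
# From 0, BESQ_delta(1) ~ 2 * Gamma(delta / 2).
rng = np.random.default_rng(0)
d, d_prime, n = 3.0, 2.0, 200_000

x = 2.0 * rng.gamma(d / 2.0, size=n)               # BESQ_d at time 1
y = 2.0 * rng.gamma(d_prime / 2.0, size=n)         # independent BESQ_{d'} at time 1
z = 2.0 * rng.gamma((d + d_prime) / 2.0, size=n)   # BESQ_{d+d'} at time 1

s = x + y
print(s.mean(), z.mean())                  # both close to d + d' = 5
print((s > 8).mean(), (z > 8).mean())      # tail probabilities agree
```

The same check with other (d, d') pairs probes the full two-parameter family of the convolution identity.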
Hence, by reverting time from t = 1, we obtain:

  Q^{d+d'}_{0→x+x'} = Q^d_{0→x} ∗ Q^{d'}_{0→x'} ,

and, in particular:

  Q^{d+d'}_{0→x} = Q^d_{0→0} ∗ Q^{d'}_{0→x} .

From this last formula, we deduce that, conditionally on X_1 = x, both sides of (3.7) are the same; so that, to prove the identity completely, it remains to verify that the laws of X_1 relatively to each side of (3.7) are equal, which is immediate.

As a consequence of Theorem 3.9, and of the absolute continuity relations (3.5) and (3.6), we are now able to identify the laws Mν (0 < ν < 1), and Nν (ν > 0), as particular cases of M^{d,d'}.

Corollary 3.9.1 Let 0 < ν < 1. Then, we have:

  Mν = M^{2ν,2} .

In other words, the square of the Bessel meander of dimension 2(1 − ν) may be represented as the sum of the squares of a Bessel bridge of dimension 2ν and of an independent twodimensional Bessel process. In the particular case ν = 1/2, the square of the Brownian meander is distributed as the sum of the squares of a Brownian bridge and of an independent twodimensional Bessel process.

Corollary 3.9.2 Let ν > 0. Then, we have:

  Nν = M^{2,2ν} .

In other words, the square of the normalized Bessel process (1/√Lν) R(Lν u), u ≤ 1, with dimension d = 2(1 + ν), is distributed as the sum of the squares of a twodimensional Bessel bridge and of an independent Bessel process of dimension 2ν.
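A crude numerical probe of the case ν = 1/2 is possible: since the Brownian bridge vanishes at time 1, the meander value m(1) should be distributed as a 2dimensional Bessel process at time 1 (a Rayleigh variable), so that E[m(1)²] = 2. The sketch below is not from the book; the path count, the time grid, and the discrete detection of the last zero g are all ad hoc:

```python
import numpy as np

# Monte Carlo sanity check (not from the book): with nu = 1/2, the meander
# value at time 1 satisfies m(1)^2 = B_1^2 / (1 - g), with g the last zero
# of B before 1, and should have mean 2 (square of a Rayleigh variable).
rng = np.random.default_rng(1)
n_paths, n_steps = 8_000, 1_000
dt = 1.0 / n_steps

b = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# detect the last sign change of the discretized path: it approximates g
prod = b[:, :-1] * b[:, 1:]
cross = prod <= 0.0
rev_idx = np.argmax(cross[:, ::-1], axis=1)
j = cross.shape[1] - 1 - rev_idx                 # index of last crossing pair
g = np.where(cross.any(axis=1), (j + 1) * dt, 0.0)

m1_sq = b[:, -1] ** 2 / (1.0 - g)                # m(1)^2
print(m1_sq.mean())                              # close to 2, up to grid bias
```

The discrete grid slightly underestimates g, so the empirical mean sits a little below 2; refining the grid shrinks that bias.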
3.7 Generalized meanders and Bessel bridges

(3.7.1) As a complement to the previous paragraph 3.6, we now give a representation of the Bessel meander mν (defined just below formula (3.5)) in terms of the Bessel bridge of dimension δν⁺ ≡ 2(1 + ν). We recall that this Bessel bridge may be realized as:

  ρν(u) = (1/√(dν − gν)) R_{−ν}(gν + u(dν − gν))   (u ≤ 1) ,

where dν = inf{u ≥ 1 : R_{−ν}(u) = 0}. Comparing the formulae which define mν and ρν, we obtain

Theorem 3.10 The following equality holds:

  mν(u) = (1/√Vν) ρν(uVν)   (u ≤ 1) ,  where Vν = (1 − gν)/(dν − gν) .   (3.8)

Furthermore, Vν and the Bessel bridge (ρν(u), u ≤ 1) are independent, and the law of Vν is given by:

  P(Vν ∈ dt) = ν t^{ν−1} dt   (0 < t < 1) .

Similarly, it is possible to present a realization of the process nν in terms of ρν; we obtain

Theorem 3.11 1) Define the process:

  ñν(u) = (1/√(dν − 1)) R_{−ν}(dν − u(dν − 1)) ≡ (1/√V̂ν) ρ̂ν(uV̂ν)   (u ≤ 1) ,

where ρ̂ν(u) = ρν(1 − u), and V̂ν = 1 − Vν = (dν − 1)/(dν − gν). Then, the processes nν and ñν have the same distribution,
with:

  P(V̂ν ∈ dt) = ν (1 − t)^{ν−1} dt   (0 < t < 1) ;

moreover, ρ̂ν is a Bessel bridge of dimension δν⁺ ≡ 2(1 + ν), and V̂ν is independent of ρ̂ν.
2) Consequently, the identity in law

  (nν(u), u ≤ 1)  =(law)  ( (1/√V̂ν) ρ̂ν(uV̂ν), u ≤ 1 )   (3.9)

holds, where, on the righthand side, V̂ν is independent of ρ̂ν.

(3.7.2) The representations of the processes mν and nν given in Theorems 3.10 and 3.11 may be generalized as follows to obtain a representation of a process whose distribution is M^{d,d'}, for any dimensions d, d' > 0.

Theorem 3.12 Let d, d' > 0, and define (ρ_{d+d'}(u), u ≤ 1) to be the Bessel bridge with dimension d + d'. Consider, moreover, a beta variable V_{d,d'}, with parameters (d/2, d'/2), i.e.:

  P(V_{d,d'} ∈ dt) = t^{d/2−1} (1 − t)^{d'/2−1} dt / B(d/2, d'/2)   (0 < t < 1) ,

such that V_{d,d'} is independent of ρ_{d+d'}. Then, the distribution of the process:

  m_{d,d'}(u) := (1/√V_{d,d'}) ρ_{d+d'}(u V_{d,d'}) ,  u ≤ 1 ,

is M^{d,d'} (see Theorem 3.9 above).

In order to prove Theorem 3.12, we shall use the following Proposition, which relates the laws of the Bessel bridge and of the Bessel process.

Proposition 3.3 Let Πµ, resp.: Sµ, be the law of the standard Bessel bridge, resp.: Bessel process, with dimension δ = 2(µ + 1), starting from 0. Then, for any t < 1 and every Borel functional F : C([0, t], IR_+) → IR_+, we have:
  Πµ[F(X_u, u ≤ t)] = Sµ[F(X_u, u ≤ t) hµ(t, X_t)] ,

where:

  hµ(t, x) = (1/(1−t)^{µ+1}) exp( −x²/(2(1−t)) ) .

Proof: This is a special case of the partial absolute continuity relationship between the laws of a nice Markov process and its bridges (see, e.g., [41]).

We now prove Theorem 3.12. We define the index µ by the formula: d + d' = 2(µ + 1). In order to present the proof in a natural way, we look for V, a random variable taking its values in (0, 1), and such that:
i) P(V ∈ dt) = θ(t)dt;
ii) V is independent of ρ_{d+d'};
iii) the law of the process ( (1/√V) ρ_{d+d'}(uV), u ≤ 1 ) is M^{d,d'}.
We have, for every Borel functional F : C([0, 1], IR_+) → IR_+:

  E[ F( (1/√V) ρ_{d+d'}(uV), u ≤ 1 ) ]
   = ∫_0^1 dt θ(t) Πµ[ F( (1/√t) X_{ut}, u ≤ 1 ) ]
   = ∫_0^1 dt θ(t) Sµ[ F( (1/√t) X_{ut}, u ≤ 1 ) hµ(t, X_t) ]   (by Proposition 3.3)
   = ∫_0^1 dt θ(t) Sµ[ F(X_u, u ≤ 1) hµ(t, √t X_1) ]   (by scaling)
   = Sµ[ F(X_u, u ≤ 1) ∫_0^1 dt θ(t) hµ(t, √t X_1) ] .

Hence, by Theorem 3.9, the problem is now reduced to finding a function θ such that:

  ∫_0^1 dt θ(t) hµ(t, √t x) = c_{d,d'}/x^d .

Using the explicit formula for hµ given in Proposition 3.3, and making some elementary changes of variables, it is easily found, e.g., by injectivity of the Laplace transform, that:
  θ(t) = t^{d/2−1} (1 − t)^{d'/2−1} / B(d/2, d'/2)   (0 < t < 1) ,

which ends the proof of Theorem 3.12.

(3.7.3) We now end up this Chapter by giving the explicit semimartingale decomposition of the process m_{d,d'}, which may be helpful, at least in particular cases (e.g.: for the processes mν and nν).

Exercise 3.6: (We retain the notation of Theorem 3.9.)
1) Define the process

  D_u = E^{d+d'}_0 [ c_{d,d'}/X_1^d | F_u ]   (u < 1) .

Prove that:

  D_u = (1/(1−u)^{d/2}) Φ( d/2, (d+d')/2, −X_u²/(2(1−u)) ) ,

where Φ(a, b, z) denotes the confluent hypergeometric function with parameters (a, b).
2) Prove that, under M^{d,d'}, the canonical process (X_u, u ≤ 1) admits the semimartingale decomposition:

Hint: Use the integral representation formula (with d + d' = 2(1 + µ)):

  Φ( d/2, (d+d')/2, −b ) = c_{d,d'} ∫_0^∞ (dt/(2t)^{d/2}) (t/b)^{µ/2} exp(−(b + t)) Iµ(2√(bt))   (3.10)

(see Lebedev [63], p. 278), and prove that the righthand side of formula (3.10) is equal to:

  c_{d,d'} E^{d+d'}_a [ 1/X_1^d ] ,  where a = √(2b) ,

which may be helpful.
  X_u = β_u + ((d + d') − 1)/2 ∫_0^u ds/X_s − ∫_0^u (ds X_s/(1−s)) (Φ'/Φ)( d/2, (d+d')/2, −X_s²/(2(1−s)) ) ,

where (β_u, u ≤ 1) denotes a Brownian motion, and where, to simplify the formula, we have written (Φ'/Φ)(a, b, z) for (d/dz) log Φ(a, b, z).

Comments on Chapter 3

The basic RayKnight theorems are recalled in paragraph 3.1; the original result is due to Ray [80], but it is presented in a very different form than Theorem 3.1, and an easy example of the transfer principle is given there. Paragraph 3.2 is taken from PitmanYor [73], in which the laws of squares of Bessel processes of any dimension are obtained as the laws of certain local times processes. Following Le GallYor [60], some important extensions (Theorem 3.4) of the RK theorems are given, leading to Theorem 3.5; the extensions which are presented here seem very natural and in the spirit of the first half of Chapter 3. For more extensions of the RK theorems, see Eisenbaum [39] and Vallois [88]. An illustration of Theorem 3.2, which occurred naturally in the asymptotic study of the windings of the 3dimensional Brownian motion around certain curves (Le GallYor [62]), is developed in the subparagraph (3.5); this discussion was inspired by Knight [58], as is explained briefly in paragraph 3.4.

There is no easy formulation of a RayKnight theorem for Brownian local times taken at a fixed time t (see Perkins [68] and Jeulin [52], who have independently obtained a semimartingale decomposition of the local times in the space variable); the situation is much easier when the fixed time is replaced by an independent exponential time.

In paragraphs 3.6 and 3.7, some relations between Bessel processes, Bessel bridges and Bessel meanders are presented, following BianeYor [19]. In the literature, one will find this kind of study made essentially in relation with Brownian motion and the 3dimensional Bessel process (see BianeYor [18] and BertoinPitman [11] for an exposition of known and new results up to 1994).
Chapter 4
An explanation and some extensions of the CiesielskiTaylor identities

The CiesielskiTaylor identities in law, which we shall study in this Chapter, were published in 1962, that is one year before the publication of the papers of Ray and Knight (1963; [80] and [57]) on Brownian local times; moreover, as we shall see below, this is more than a mere coincidence! Here are these identities: if (R_δ(t), t ≥ 0) denotes the Bessel process of dimension δ > 0, starting at 0, then:

  ∫_0^∞ ds 1(R_{δ+2}(s) ≤ 1)  =(law)  T_1(R_δ) ,   (4.1)

where T_1(R_δ) = inf{t : R_δ(t) = 1}. (More generally, throughout this chapter, the notation H(R_δ) shall indicate the quantity H taken with respect to R_δ.)

Except in the case δ = 1, there exists no path decomposition explanation of (4.1); a spectral type explanation shall be provided in this Chapter, which relies essentially upon the two following ingredients:
a) both sides of (4.1) may be written as integrals, with respect to the Lebesgue measure on [0, 1], of the total local times of R_{δ+2}, for the lefthand side, and of R_δ, up to time T_1(R_δ), for the righthand side; the laws of the two local times processes can be deduced from (RK2)(a) (see Chapter 3, paragraph 1);
b) the use of the integration by parts formula obtained in Chapter 2 (Theorem 2.1).
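The identity (4.1) can be probed by an Euler-scheme simulation of the squared processes; the sketch below is not from the book (the dimension δ = 3, step size, truncation level and time cap are ad hoc choices). It compares the empirical means of the two sides, which should both be close to E[T_1(R_3)] = 1/3:

```python
import numpy as np

# Monte Carlo sanity check (not from the book) of the CiesielskiTaylor
# identity (4.1), delta = 3: the time spent below 1 by BES(5) should be
# distributed as T_1(R_3).  We simulate X = R^2 via the BESQ SDE
# dX = delta dt + 2 sqrt(X) dW with an Euler scheme.
rng = np.random.default_rng(2)
n, dt = 4_000, 2e-3
sq = np.sqrt(dt)

def besq_occupation(delta, stop):
    """Time spent with X <= 1 by BESQ(delta) from 0, until stop(x) holds."""
    x = np.zeros(n)
    occ = np.zeros(n)
    alive = np.ones(n, dtype=bool)
    t = 0.0
    while alive.any() and t < 50.0:
        occ[alive] += dt * (x[alive] <= 1.0)
        dw = rng.normal(scale=sq, size=alive.sum())
        x[alive] = np.maximum(x[alive] + delta * dt
                              + 2.0 * np.sqrt(x[alive]) * dw, 0.0)
        alive &= ~stop(x)
        t += dt
    return occ

# left side: BES(5) is transient, so stop once X is large and ignore the
# (tiny) chance of a later return below level 1
lhs = besq_occupation(5.0, lambda x: x > 30.0)
# right side: T_1(R_3) equals the time spent below 1 before hitting 1
rhs = besq_occupation(3.0, lambda x: x >= 1.0)

print(lhs.mean(), rhs.mean())   # both close to 1/3
```

Matching full distributions (not just means) would require comparing empirical quantiles, but the mean check already exercises both sides of (4.1).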
This method (the use of ingredient b) in particular) allows to extend the identities (4.1) by considering the time spent in an annulus by a (δ + 2)dimensional Brownian motion; this type of generalization was first obtained by Ph. Biane [12], who used the expression of the Laplace transforms of the occupation times in terms of differential equations, involving the speed measures and scale functions of the diffusions; the identities may also be extended to pairs of diffusions (X, X̂) which are much more general than the pairs (R_{δ+2}, R_δ).

4.1 A pathwise explanation of (4.1) for δ = 1

Thanks to the timereversal result of D. Williams:

  (R_3(t), t ≤ L_1(R_3))  =(law)  (1 − B_{σ−t}, t ≤ σ) ,

where L_1(R_3) = sup{t : R_3(t) = 1}, and σ = inf{t : B_t = 1}, the lefthand side of (4.1) may be written as: ∫_0^σ ds 1(B_s > 0); so that, to explain (4.1) in this case, it now remains to show:

  ∫_0^σ ds 1(B_s > 0)  =(law)  T_1(B) .

To do this, we use the fact that (B_t⁺, t ≥ 0) may be written as:

  B_t⁺ = β( ∫_0^t ds 1(B_s > 0) ) ,  t ≥ 0 ,   (4.2)

where (β_u, u ≥ 0) is another onedimensional Brownian motion starting from 0; from this representation of B⁺, we deduce:

  ∫_0^σ ds 1(B_s > 0) = T_1(β) ;

this implies (4.1).
4.2 A reduction of (4.1) to an identity in law between two Brownian quadratic functionals

To explain the result for every δ > 0, we write the two members of the CT identity (4.1) as local times integrals:

  ∫_0^1 da ℓ^a_∞(R_{δ+2})  =(law)  ∫_0^1 da ℓ^a_{T_1(R_δ)} ,

with the understanding that the local times (ℓ^a_t(R_γ), a > 0, t ≥ 0) satisfy the occupation density formula: for every positive measurable f,

  ∫_0^t ds f(R_γ(s)) = ∫_0^∞ da f(a) ℓ^a_t(R_γ) .

It is not difficult (e.g.: see Yor [101]), with the help of the basic RayKnight theorems (see Chapter 3), to obtain the following representations of the local times processes of R_γ, taken at t = ∞, resp.: at t = T_1(R_γ).

Theorem 4.1 Let (B_t, t ≥ 0) denote a planar BM starting at 0, i.e.: a standard complexvalued Brownian motion, and let (B̃_t, t ≤ 1) denote a standard complex Brownian bridge. Then, we have:

1) for γ > 0:
  ( ℓ^a_∞(R_{2+γ}), a > 0 )  =(law)  ( (1/(γ a^{γ−1})) |B_{a^γ}|², a > 0 ) ,
  ( ℓ^a_{T_1}(R_{2+γ}), 0 < a ≤ 1 )  =(law)  ( (1/(γ a^{γ−1})) |B̃_{1−a^γ}|², 0 < a ≤ 1 ) ;

2) for γ = 0:
  ( ℓ^a_{T_1}(R_2), 0 < a ≤ 1 )  =(law)  ( a |B_{log(1/a)}|², 0 < a ≤ 1 ) ;

3) for 0 < γ ≤ 2:
  ( ℓ^a_{T_1}(R_{2−γ}), 0 < a ≤ 1 )  =(law)  ( (1/(γ a^{γ−1})) |B_{1−a^γ}|², 0 < a ≤ 1 ) .

With the help of this theorem, we remark that the CT identities (4.1) are equivalent to:

  (1/δ) ∫_0^1 (da/a^{δ−1}) |B_{a^δ}|²  =(law)  (1/(δ−2)) ∫_0^1 (da/a^{δ−3}) |B̃_{1−a^{δ−2}}|²   (δ > 2)   (4.3)
  (1/2) ∫_0^1 (da/a) |B_{a²}|²  =(law)  ∫_0^1 da a |B_{log(1/a)}|²   (δ = 2)   (4.4)

  (1/δ) ∫_0^1 (da/a^{δ−1}) |B_{a^δ}|²  =(law)  (1/(2−δ)) ∫_0^1 da a^{δ−1} |B_{1−a^{2−δ}}|²   (δ < 2)   (4.5)
where, on both sides, B, resp. B̃, denotes a complex valued Brownian motion, resp.: Brownian bridge. In order to prove these identities, it obviously suffices to take realvalued processes for B and B̃, which is what we now assume. It then suffices to remark that the identity in law (4.4) is a particular case of the integration by parts formula (2.14), considered with f(a) = log(1/a) and g(a) = a²; the same argument applies to the identity in law (4.5), with f(a) = 1 − a^{2−δ} and g(a) = a^δ; with a little more work, one also obtains the identity in law (4.3).
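The identity (4.4) can also be checked directly by simulation, since both sides are explicit functionals of a complex Brownian motion. The sketch below is not from the book (grid sizes, path counts and the small-a truncation implicit in the grid are ad hoc); it compares empirical means (both equal 1/2) and standard deviations of the two sides:

```python
import numpy as np

# Monte Carlo check (not from the book) of the identity in law (4.4):
#   (1/2) int_0^1 (da/a) |B_{a^2}|^2  =(law)  int_0^1 da a |B_{log(1/a)}|^2
# with B a complex Brownian motion on each side (independent copies here).
rng = np.random.default_rng(5)
n_paths, m = 4_000, 600

def complex_bm(times):
    """Values of a complex BM at the given increasing times, n_paths paths."""
    dt = np.diff(np.concatenate(([0.0], times)))
    steps = rng.normal(size=(n_paths, m)) + 1j * rng.normal(size=(n_paths, m))
    return np.cumsum(steps * np.sqrt(dt), axis=1)

a = (np.arange(m) + 0.5) / m          # midpoint grid on (0, 1)
da = 1.0 / m

# left side: the times a^2 are increasing in a
left = 0.5 * (np.abs(complex_bm(a ** 2)) ** 2 * (da / a)).sum(axis=1)

# right side: times log(1/a) decrease in a, so evaluate on the reversed grid
t_right = np.log(1.0 / a)[::-1]       # increasing times
b_right = np.abs(complex_bm(t_right)) ** 2
right = (b_right[:, ::-1] * a * da).sum(axis=1)

print(left.mean(), right.mean())      # both close to 1/2
print(left.std(), right.std())        # matching spreads, if (4.4) holds
```

Agreement of the first two empirical moments is of course only necessary, not sufficient, but it already distinguishes (4.4) from most wrong candidates for the right-hand side.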
4.3 Some extensions of the CiesielskiTaylor identities
(4.3.1) The proof of the CT identities which was just given in paragraph 4.2 uses, apart from the RayKnight theorems for (Bessel) local times, the integration by parts formula (obtained in Chapter 2) applied to some functions f and g which satisfy the boundary conditions: f (1) = 0 and g(0) = 0. In fact, it is possible to take some more advantage of the integration by parts formula, in which we shall now assume no boundary condition, in order to obtain the following extensions of the CT identities. Theorem 4.2 Let δ > 0, and a ≤ b ≤ c. Then, if Rδ and Rδ+2 denote two Bessel processes starting from 0, with respective dimensions δ and δ + 2, we have:
  I^{(δ)}_{a,b,c} :   ∫_0^∞ ds 1(a ≤ R_{δ+2}(s) ≤ b) + ( b^{δ−1} ∫_b^c dx/x^{δ−1} ) ℓ^b_∞(R_{δ+2})
    =(law)  (a/δ) ℓ^a_{T_c}(R_δ) + ∫_0^{T_c} ds 1(a ≤ R_δ(s) ≤ b) .
(4.3.2) We shall now look at some particular cases of I^{(δ)}_{a,b,c}.

1) δ > 2, a = b, c = ∞. The identity then reduces to:

  (1/(δ−2)) ℓ^b_∞(R_{δ+2})  =(law)  (1/δ) ℓ^b_∞(R_δ) .

In fact, both variables are exponentially distributed, with parameters which match the identity; moreover, this identity expresses precisely how the total local time at b for R_δ explodes as δ ↓ 2.

2) δ > 2, a = 0, c = ∞. The identity then becomes:
  ∫_0^∞ ds 1(R_{δ+2}(s) ≤ b) + (b/(δ−2)) ℓ^b_∞(R_{δ+2})  =(law)  ∫_0^∞ ds 1(R_δ(s) ≤ b) .
Considered together with the original CT identity (4.1), this gives a functional of R_{δ+2} which is distributed as T_b(R_{δ−2}).

3) δ = 2. Taking a = 0, we obtain:

  ∫_0^∞ ds 1(R_4(s) ≤ b) + b log(c/b) ℓ^b_∞(R_4)  =(law)  ∫_0^{T_c} ds 1(R_2(s) ≤ b) ,

whilst taking a = b > 0, we obtain:

  log(c/b) ℓ^b_∞(R_4)  =(law)  (1/2) ℓ^b_{T_c}(R_2) .
In particular, we deduce from these identities in law the following limit results:

  (1/log c) ∫_0^{T_c} ds 1(R_2(s) ≤ b)  −−(law)−→ (c → ∞)  b ℓ^b_∞(R_4) ,

and

  (1/log c) ℓ^b_{T_c}(R_2)  −−(law)−→ (c → ∞)  2 ℓ^b_∞(R_4) .
In fact, these limits in law may be seen as particular cases of the KallianpurRobbins asymptotic result for additive functionals of planar Brownian motion (Z_t, t ≥ 0), which states that:

i) If f belongs to L¹(C, dx dy), and is locally bounded, and if

  A^f_t := ∫_0^t ds f(Z_s) ,

then:

  (1/log t) A^f_t  −−(law)−→ (t → ∞)  (1/2π) f̄ e ,

where f̄ = ∫_C dx dy f(x, y), and e is a standard exponential variable. Moreover, one has:

  (1/log t) ( A^f_t − A^f_{T_{√t}} )  −−(P)−→ (t → ∞)  0 .

ii) (Ergodic theorem) If f and g both satisfy the conditions stated in (i), then:

  A^f_t / A^g_t  −−(a.s.)−→ (t → ∞)  f̄ / ḡ .

4) δ < 2, a = 0. The identity in law then becomes:
  ∫_0^∞ ds 1(R_{δ+2}(s) ≤ b) + b^{δ−1} ( (c^{2−δ} − b^{2−δ})/(2−δ) ) ℓ^b_∞(R_{δ+2})  =(law)  ∫_0^{T_c} ds 1(R_δ(s) ≤ b) ,

which, as a consequence, implies:

  (1/c^{2−δ}) ∫_0^{T_c} ds 1(R_δ(s) ≤ b)  −−(law)−→ (c → ∞)  (b^{δ−1}/(2−δ)) ℓ^b_∞(R_{δ+2}) .   (4.6)
In fact, this limit in law can be explained much more easily than the limit in law in the previous example. Here is such an explanation: the local times (ℓ^a_t(R_δ); a > 0, t ≥ 0) which, until now in this Chapter, have been associated to the Bessel process R_δ, are the semimartingale local times, i.e.: they may be defined via the occupation density formula, with respect to Lebesgue measure on IR_+. However, at this point, it is more convenient to define the family {λ^x_t(R_δ)} of diffusion local times by the formula:

  ∫_0^t ds f(R_δ(s)) = ∫_0^∞ dx x^{δ−1} λ^x_t(R_δ) f(x)   (4.7)

for every Borel function f : IR_+ → IR_+. (The advantage of this definition is that the diffusion local time (λ^0_t(R_δ); t > 0) will be finite and strictly positive.)

Now, we consider the lefthand side of (4.6); we have:

  (1/c²) ∫_0^{T_c} ds 1(R_δ(s) ≤ b)
   =(law)  ∫_0^{T_1} du 1(R_δ(u) ≤ b/c)   (by scaling)
   = ∫_0^{b/c} dx x^{δ−1} λ^x_{T_1}(R_δ)   (by formula (4.7))
   = ∫_0^b (dy/c) (y/c)^{δ−1} λ^{y/c}_{T_1}(R_δ)   (by change of variables)

Hence, we have:

  (1/c^{2−δ}) ∫_0^{T_c} ds 1(R_δ(s) ≤ b)  −−(law)−→ (c → ∞)  (b^δ/δ) λ^0_{T_1}(R_δ) .   (4.8)

The convergence results (4.6) and (4.8) imply:

  (b^δ/δ) λ^0_{T_1}(R_δ)  =(law)  (b^{δ−1}/(2−δ)) ℓ^b_∞(R_{δ+2}) .   (4.9)
It is not hard to convince oneself directly that the identity in law (4.9) holds; indeed, from the scaling property of R_{δ+2}, we deduce that:

  ℓ^b_∞(R_{δ+2})  =(law)  b ℓ^1_∞(R_{δ+2}) ,
so that formula (4.9) reduces to:

  λ^0_{T_1}(R_δ)  =(law)  (δ/(2−δ)) ℓ^1_∞(R_{δ+2}) .   (4.10)

Exercise 4.1: Give a proof of the identity in law (4.10) as a consequence of the RayKnight theorems for (λ^x_{T_1}(R_δ), x ≤ 1) and (ℓ^x_∞(R_{δ+2}), x ≥ 0).

Exercise 4.2: Let c > 0 be fixed. Prove that (1/(c−a)) ℓ^a_{T_c}(R_δ) converges in law, as a ↑ c, and identify the limit in law.
Hint: Either use the identity in law I^{(δ)}_{a,b,c}, or a RayKnight theorem for ℓ^a_{T_c}(R_δ).

4.4 On a computation of FöldesRévész

FöldesRévész [42] have obtained, as a consequence of formulae in Borodin [21] concerning computations of laws of Brownian local times, the following identity in law, for r > q:

  ∫_0^∞ dy 1(0 < ℓ^y_{τ_r} < q)  =(law)  T_{√q}(R_2) ,   (4.11)

where, on the lefthand side, ℓ^y_{τ_r} denotes the local time of Brownian motion taken at level y, and at time τ_r, the first time local time at 0 reaches r; and, on the righthand side, T_{√q}(R_2) denotes the first hitting time of √q by R_2, a twodimensional Bessel process starting from 0.

We now give an explanation of formula (4.11), using jointly the RayKnight theorem and the CiesielskiTaylor identity in law (4.1). From the RayKnight theorem on Brownian local times up to time τ_r, we know that the lefthand side of formula (4.11) is equal, in law, to:

  ∫_0^{T_0} dy 1(Y_y < q) ,
where (Y_y, y ≥ 0) is a BESQ process, with dimension 0, starting from r. Since r > q, we may as well assume, using the strong Markov property, that Y_0 = q, which explains why the law of the lefthand side of (4.11) does not depend on r (≥ q). Moreover, we have, by time reversal:

  ∫_0^{T_0} dy 1(Y_y < q)  =(law)  ∫_0^{L̂_q} dy 1(Ŷ_y < q) ,

where (Ŷ_y, y ≥ 0) is a BESQ process, with dimension 4, starting from 0, and L̂_q = sup{y : Ŷ_y = q}. Now, obviously:

  ∫_0^{L̂_q} dy 1(Ŷ_y < q) = ∫_0^∞ dy 1(Ŷ_y < q) ,   (4.12)

and we deduce from the original CiesielskiTaylor identity in law (4.1), taken for δ = 2, together with the scaling property of a BES process starting from 0, that the righthand side of (4.12) is equal in law to T_{√q}(R_2).

Comments on Chapter 4

The proof, presented here, of the CiesielskiTaylor identities follows Yor [101]; it combines the RK theorems with the integration by parts formula (2.14). Biane's extensions of the CT identities to a large class of diffusions ([12]) may also be obtained in the same way. More generally, it would be interesting to know whether another family of extensions of the CT identities, obtained by CarmonaPetitYor [25] for certain càdlàg Markov processes which are related to Bessel processes through some intertwining relationship, could also be derived from some adequate version of the integration by parts formula (2.14). In paragraph 4.3, it seemed an amusing exercise to look at some particular cases of the identity in law I^{(δ)}_{a,b,c}, and to relate these examples to some better known relations, possibly of an asymptotic kind. Finally, paragraph 4.4 presents an interesting application of the CT identities.
Chapter 5
On the winding number of planar BM

The appearance in Chapter 3 of Bessel processes of various dimensions is very remarkable, despite the several proofs of the RayKnight theorems on Brownian local times which have now been published. It is certainly less astonishing to see that the 2dimensional Bessel process plays an important part in the study of the windings of planar Brownian motion (Z_t, t ≥ 0). However, some other remarkable feature occurs: the computation of the law, for a fixed time t, of the winding number θ_t of (Z_u, u ≤ t) around 0 is closely related to the knowledge of the semigroups of all Bessel processes, with dimensions δ varying between 2 and ∞. Indeed, one feels that, when Z gets close to 0, it has a tendency to wind more than when it lies in the annulus, say {z : r ≤ |z| ≤ R}, for some given 0 < r < R < ∞, and, on the contrary, to wind less when it wanders far away from 0.

There have been, in the 1980's, a number of studies about the asymptotics of winding numbers of planar BM around a finite set of points (see, e.g., a short summary in [81], Chapter XII). It then seemed more interesting to develop here some exact computations for the law of the winding number up to a fixed time t, for which some open questions still remain.

5.1 Preliminaries

(5.1.1) Consider Z_t = X_t + iY_t, t ≥ 0, a planar BM starting from z_0 ≠ 0. We have
Proposition 5.1 1) With probability 1, (Z_t, t ≥ 0) does not visit 0.
2) A continuous determination of the logarithm along the trajectory (Z_u(ω), u ≥ 0) is given by the stochastic integral:

  log_ω(Z_t(ω)) − log_ω(Z_0(ω)) = ∫_0^t dZ_u/Z_u .   (5.1)

We postpone the proof of Proposition 5.1 for a moment, in order to write down the following pair of formulae, which are immediate consequences of (5.1):

  log |Z_t(ω)| − log |Z_0(ω)| = Re ∫_0^t dZ_u/Z_u = ∫_0^t (X_u dX_u + Y_u dY_u)/|Z_u|²   (5.2)

and

  θ_t(ω) − θ_0(ω) = Im ∫_0^t dZ_u/Z_u = ∫_0^t (X_u dY_u − Y_u dX_u)/|Z_u|² ,   (5.3)

where (θ_t(ω), t ≥ 0) denotes a continuous determination of the argument of (Z_u(ω), u ≤ t) around 0.

We now note that:

  β_u = ∫_0^u (X_s dX_s + Y_s dY_s)/|Z_s|   (u ≥ 0)  and  γ_u = ∫_0^u (X_s dY_s − Y_s dX_s)/|Z_s|   (u ≥ 0)

are two orthogonal martingales with increasing processes ⟨β⟩_u = ⟨γ⟩_u ≡ u; hence, they are two independent Brownian motions. Moreover, it is not difficult to show that:

  R_t ≡ σ{|Z_u|, u ≤ t} = σ{β_u, u ≤ t} ,

up to negligible sets; hence, we have the following:

  for ν ∈ IR,  E[exp(iν(θ_t − θ_0)) | R_∞] = exp(−(ν²/2) H_t) ,  where H_t := ∫_0^t ds/|Z_s|² .   (5.4)

This formula shall be of great help, in the next paragraph, to compute the law of θ_t.
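Taking expectations in (5.4) and using the tower property gives E[cos(ν(θ_t − θ_0))] = E[exp(−(ν²/2) H_t)], which is easy to probe by simulation. The sketch below is not from the book (the starting point z_0 = 1, horizon t = 1, step count and ν = 1 are ad hoc choices); the winding θ_t is computed by accumulating angle increments of the discretized path:

```python
import numpy as np

# Numerical check (not from the book) of a consequence of formula (5.4):
#   E[cos(nu * theta_t)] = E[exp(-(nu^2 / 2) * H_t)],
# with H_t = int_0^t ds / |Z_s|^2, for planar BM from z0 = 1 on [0, 1].
rng = np.random.default_rng(3)
n_paths, n_steps, nu = 20_000, 1_000, 1.0
dt = 1.0 / n_steps

z = np.ones(n_paths, dtype=complex)     # z0 = 1, so theta_0 = 0
theta = np.zeros(n_paths)
h = np.zeros(n_paths)
for _ in range(n_steps):
    h += dt / np.abs(z) ** 2            # Riemann sum for H_t
    dz = (rng.normal(size=n_paths)
          + 1j * rng.normal(size=n_paths)) * np.sqrt(dt)
    theta += np.angle((z + dz) / z)     # winding increment in (-pi, pi]
    z += dz

lhs = np.cos(nu * theta).mean()
rhs = np.exp(-(nu ** 2 / 2.0) * h).mean()
print(lhs, rhs)                          # the two averages should agree
```

Paths passing very near 0 are the delicate case (H_t spikes, angle increments grow), but they also contribute almost nothing to either average, so the comparison is robust at this step size.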
(5.1.2) We now prove Proposition 5.1. The “trick” is to consider, instead of Z, the planar BM (Ẑ_t, t ≥ 0) obtained from the following

Proposition 5.2 (B. Davis [28]) If f : C → C is holomorphic and not constant, then there exists a planar BM (Ẑ_t, t ≥ 0) such that:

  f(Z_t) = Ẑ_{A^f_t} ,  t ≥ 0 ,  where A^f_t = ∫_0^t ds |f'(Z_s)|² .

We apply Proposition 5.2 with f(z) = exp(z); then:

  exp(Z_t) = Ẑ_{A_t} ,  t ≥ 0 ,  with: A_t = ∫_0^t ds exp(2X_s) ,  and A_∞ = ∞ a.s.

Since exp(z) ≠ 0, for every z ∈ C, the planar BM (Ẑ_u, u ≥ 0), which starts from exp(Z_0) ≠ 0, shall never reach 0; the first statement of Proposition 5.1 then follows from Proposition 5.2.

Next, to prove formula (5.1), it suffices to show:

  Z_t = Z_0 exp( ∫_0^t dZ_u/Z_u ) ,  for all t ≥ 0 .   (5.5)

This follows from Itô's formula, from which we easily deduce:

  d( (1/Z_t) exp( ∫_0^t dZ_u/Z_u ) ) = 0 .

Exercise 5.1: Give another proof of the identity (5.5), using the uniqueness of solutions of the stochastic equation:

  U_t = Z_0 + ∫_0^t U_s dZ_s .

(5.1.3) In the sequel, we shall also need the two following formulae, which involve the modified Bessel functions Iν.
a) The semigroup P^{(ν)}_t(r, dρ) = p^{(ν)}_t(r, ρ)dρ of the Bessel process of index ν > 0 is given by the formula:
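Further on in this chapter (Theorem 5.1), the ratio ν → Iν(r)/I0(r) is identified as the Fourier transform of a probability measure µ_r, the HartmanWatson distribution. As a numerical aside (not from the book; the grid bounds and step sizes are ad hoc), one can invert this Fourier transform with scipy and check that the recovered density is nonnegative, with total mass close to 1 (the HartmanWatson law has Cauchy-type tails, so a few percent of the mass lies outside any finite window):

```python
import numpy as np
from scipy.special import iv

# Numerical aside (not from the book): invert the characteristic function
# nu -> I_nu(r)/I_0(r) of the HartmanWatson measure mu_r and check that the
# recovered density is nonnegative with mass close to 1.
r = 1.0
nus = np.linspace(0.0, 25.0, 5001)      # I_nu(1) decays extremely fast in nu
dnu = nus[1] - nus[0]
phi = iv(nus, r) / iv(0.0, r)           # even in nu, so integrate nu >= 0

thetas = np.linspace(-8.0, 8.0, 801)
w = np.full(nus.size, dnu)
w[0] = w[-1] = dnu / 2.0                # trapezoidal weights
# density(theta) = (1/pi) * int_0^inf cos(nu * theta) * phi(nu) d nu
dens = (np.cos(np.outer(thetas, nus)) * (phi * w)).sum(axis=1) / np.pi
mass = dens.sum() * (thetas[1] - thetas[0])
print(dens.min(), mass)
```

Nonnegativity of the inverse transform is exactly what makes Iν/I0 the Fourier transform of a bona fide probability measure, which is the nontrivial content of the HartmanWatson result.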
  p^{(ν)}_t(r, ρ) = (1/t) (ρ/r)^ν ρ exp( −(r² + ρ²)/(2t) ) Iν(rρ/t)   (r, ρ, t > 0)

(see, e.g., RevuzYor [81], p. 411).
b) For any λ ∈ IR, and r > 0, the modified Bessel function Iλ(r) admits the following integral representation:

  Iλ(r) = (1/π) ∫_0^π dθ exp(r cos θ) cos(λθ) − (sin(λπ)/π) ∫_0^∞ du e^{−r ch u − λu}

(see, e.g., Watson [90], and Lebedev [63], p. 115).

5.2 Explicit computation of the winding number of planar Brownian motion

(5.2.1) With the help of the preliminaries, we shall now prove the following

Theorem 5.1 For any z_0 ≠ 0, t > 0, and ν ∈ IR, we have:

  E_{z_0}[ exp(iν(θ_t − θ_0)) | |Z_t| = ρ ] = (Iν/I0)( |z_0|ρ/t ) .   (5.6)

Before we prove formula (5.6), let us comment that this formula shows in particular that, for every given r > 0, the function: ν → (Iν/I0)(r) is the Fourier transform of a probability measure, which we shall denote by µ_r; hence, µ_r is characterized by:

  (Iν/I0)(r) = ∫_{−∞}^∞ exp(iνθ) µ_r(dθ)   (ν ∈ IR) .

This distribution was discovered, by analytic means, by HartmanWatson [48]; hence, we shall call µ_r the HartmanWatson distribution with parameter r.

The proof of Theorem 5.1 shall follow from

Proposition 5.3 Let r > 0. For any ν ≥ 0, define P^{(ν)}_r to be the law of the Bessel process, with index ν, starting at r, on the canonical space Ω_+ ≡ C(IR_+, IR_+). Then, we have:

  P^{(ν)}_r |_{R_t} = (R_t/r)^ν exp( −(ν²/2) H_t ) · P^{(0)}_r |_{R_t} ,   (5.8)

where H_t = ∫_0^t ds/R_s².

Proof: This is a simple consequence of Girsanov's theorem, since, under P^{(ν)}_r, one has:

  R_t = r exp(B_u + νu) |_{u = H_t} ,  with H_t = inf{ u : ∫_0^u ds exp 2(B_s + νs) > t } .

Remark that the relation (5.8) may also be considered as a variant of the simpler CameronMartin relation:

  W^{(ν)} |_{F_t} = exp( νX_t − ν²t/2 ) · W |_{F_t} ,   (5.9)

where W^{(ν)} denotes the law, on C(IR_+, IR), of Brownian motion with drift ν, since (5.8) follows from (5.9) after timechanging.

We now finish the proof of Theorem 5.1: from formulae (5.2) and (5.3), and the independence of β and γ, we deduce, denoting r = |z_0|, that:

  E_{z_0}[ exp(iν(θ_t − θ_0)) | |Z_t| = ρ ] = E_{z_0}[ exp(−(ν²/2) H_t) | |Z_t| = ρ ] = E^{(0)}_r[ exp(−(ν²/2) H_t) | R_t = ρ ] .

Now, from (5.8), we deduce that, for every Borel function f : IR_+ → IR_+, we have:

  E^{(ν)}_r[f(R_t)] = E^{(0)}_r[ f(R_t) (R_t/r)^ν exp(−(ν²/2) H_t) ] ,

which implies:

  p^{(ν)}_t(r, ρ) = p^{(0)}_t(r, ρ) (ρ/r)^ν E^{(0)}_r[ exp(−(ν²/2) H_t) | R_t = ρ ] ,

and formula (5.6) now follows immediately from the explicit expressions of p^{(ν)}_t(r, ρ) and p^{(0)}_t(r, ρ) given in paragraph (5.1.3).
which was presented above in (5. 2πI0 (r) Φr (x) = 0 dt e−r cht x = π(t2 + x2 ) ∞ 1 πr (dt) Arc tg π t x . 2) qr admits the following representation: ⎧ 1 ⎨ −r qr (dθ) = −e m + m ∗ I0 (r) ⎩ where: m(dθ) = 1 1[−π. a question which does not involve the complicated manner in which Brownian motion (Zu .3). π[). where: pr (dθ) = 2πI1 (r) exp(r cos θ)1[−π.72 5 On the winding number of planar BM and formula (5. ρ) given in (5. of the argument of the random variable Zt . we have µr (dθ) = pr (dθ) + qr (dθ) . πr (du) = e−r 2π chu ∞ ⎫ ⎬ πr (du)cu ⎭ .π[ (θ)dθ. u ≤ t) has wound around 0 up . This is simply solved for pr . and qr (dθ) is a bounded signed measure. which is the law of the principal determination αt (e.1. ρ) and pt (r. With the help of the classical integral representation of Iλ . we are able to give the following explicit additive decomposition of µr . (5.: with values in [−π.10) 0 It is a tantalizing question to interpret precisely every ingredient in the above decomposition of µr in terms of the winding number of planar Brownian motion.3).π[ (θ)dθ is the Von Mises distribu0 tion with parameter r. given Rt ≡ Zt .6) now follows immediately from the explicit expressions of (ν) (0) pt (r.1. with total mass equal to 0. cu (dθ) = π(θ2 udθ + u2 ) 3) qr may also be written as follows: qr (dθ) = where ∞ 1 {Φr (θ − π) − Φr (θ + π)} dθ .2 1) For any r > 0.g. Theorem 5. 0 r(shu)du.
e.11) shall be ﬁnished once we know that: πp/√t (du) exp − νu √ log t −− e −−→ − t→∞ −ν . to t show that.2 The winding number of planar Brownian motion 73 to time t. However. the Cauchy distribution c1 which appears there is closely related to Spitzer’s asymptotic result. 2θt (law) −− C1 . −−→ − t→∞ which. to the Dirac measure at 1.5. as t → ∞. the proof of (5. thanks to formula (5. to prove the theorem. log t z0 ρ √ t −− e −−→ − t→∞ −ν (5. Proof: Following ItˆMcKean ([50].3). if we consider the linear application (u ∈ IR+ ). but depends only on the distribution of the 2dimensional random variable Zt . amounts to showing: Iλ I0 with the notation: λ = 2ν .: the image of converges weakly. it is easily shown that: πp/t (du) by t (w) −−→ ε1 (du). b). we have: Ez0 exp 2iνθt log t √  Rt = ρ t −− exp (−ν) . it is suﬃcient. − t (πp/t ) −− t→∞ :u→ u log t i. for every ν ∈ IR. p.2 are not so easy to interpret. However. t where p = ρz0 .6).11) Making an integration by parts in the integral representation of Iλ (r) in (5.3 As t → ∞. On the contrary. The ﬁnite measure πr (du) appears also naturally in the following representation of the law of the winding number around 0 of the “Brownian lace” (= complex Brownian bridge) with extremity z0 = 0. where C1 is a Cauchy variable −−→ − log t t→∞ Theorem 5.1. from the cono Rt vergence in law of √ . and length t. the quantities which appear in the decomposition of qr in the second statement of Theorem 5. 270) we remark that. . with parameter 1. which we now recall. as t → ∞.
as above. and the previous formula (∗) ˜ becomes. r = z0 t z .6). and the representation of µr given in Theorem 5. T. αt = 0. for z = z0 . we t have: 1 CT (law) + (5. starting and ending at z0 . that z0 = z0 . we may choose θ0 = 0. one has: r Ez0 [f (θt )  Zt = z] = f (αt ) + e−˜ cos(αt ) n∈Z Z an (t. π]. and. r )f (αt + 2nπ) ˜ (∗) where αt is equal. while proving Theorem 5. for any Borel function f : IR → IR+ . for n = 0: . ε and (Cu )u≥0 are independent. [x] denotes the integer part of x ∈ IR. (Cu )u≥0 is a symmetric Cauchy process starting from 0. to the determination of the argument of the ˜ variable Zt in ] − π. and it is then easy to deduce from the identity (5. P (ε = 1) = e−2r . t) = Φr (αt + (2n − 1)π) − Φr (αt + (2n + 1)π) . and an (˜. In particular.12) W = ε 2π 2 Theorem 5.2 that. one has: r = r .4 Let W = where T is a random variable with values in IR+ .74 5 On the winding number of planar BM θt be the winding number of the Brownian lace of 2π 2 length t. ε takes the values 0 and 1. r ˜ ˜ In particular.4. such that: P (T ∈ du) = er πr (du) . with the notation: r = z0  . thanks to the conformal invariance of Brownian motion. Then. with probabilities: P (ε = 0) = 1 − e−2r . there is no loss of generality. For the sake of clarity. ﬁnally. we shall now assume.
Indeed, one deduces from (∗) that:

  P_{z_0}( θ_t = 0 | Z_t = z_0 ) = ∫ P(T ∈ du) [ (1 − e^{−2r}) + e^{−2r} P(−π ≤ C_u ≤ π) ].   (5.13)

Likewise, for n ≠ 0:

  P_{z_0}( θ_t = 2nπ | Z_t = z_0 )
   = ∫_0^∞ π_r(du) e^{−r} (1/π) [ Arc tg( (2n+1)π / u ) − Arc tg( (2n−1)π / u ) ]
   = ∫_0^∞ π_r(du) e^{−r} ∫_{(2n−1)π}^{(2n+1)π} dx u / (π(u² + x²))   (from (5.10))
   = ∫ P(T ∈ du) e^{−2r} P( (2n−1)π ≤ C_u ≤ (2n+1)π ).   (5.14)

The representation (5.12) now follows from the two formulae (5.13) and (5.14).

From Theorem 5.4, we deduce the following interesting

Corollary 5.1  Let θ*_t be the value at time t of a continuous determination of the argument of the Brownian lace (Z_u, u ≤ t), such that Z_0 = Z_t = z_0. Then, one has:

  (1/log t) θ*_t --(law)--> C_1, as t → ∞.   (5.15)

Remark: Note that, in contrast with the statement in Theorem 5.4, it is indeed possible to justify this assertion directly: the asymptotic winding θ*_t of the "long" Brownian lace (Z_u, u ≤ t) may be thought of as the sum of the windings of two independent "free" Brownian motions considered on the interval [0, t].

Proof of the Corollary: From the representation (5.12), it suffices to show that (1/log t) C_T --(law)--> C_1, as t → ∞. Since C_T =(law) T C_1, this convergence in law follows from the fact, already seen at the end of the proof of Theorem 5.3, that:
  T / log t --(P)--> 1, as t → ∞.   (5.16)

In order to understand better the representation of W given by formula (5.12), we shall now replace the Brownian lace by a planar Brownian motion with drift. Thanks to the invariance of Brownian motion by time-inversion, we first deduce the following easy

Lemma 5.1  Let z_1, z_2 ∈ C, and let P_{z_1,z_2} be the law of (z_1 + Ẑ_u + u z_2, u ≥ 0), where (Ẑ_u, u ≥ 0) is a planar BM starting from 0. Then, the law of (u Ẑ_{1/u}, u > 0) under P_{z_1,z_2} is P_{z_2,z_1}.

From this invariance property, we obtain: for every positive functional F,

  E_{z_0}[ F(Z_u, u ≤ t) | Z_t = z ] = E_{z/t}[ F( u Z_{(1/u − 1/t)}, u ≤ t ) ].

We may now state the following

Theorem 5.5  Let Z_u = X_u + iY_u, u ≥ 0, be a C-valued process, and define T_t = inf{u ≤ t : X_u = 0}, with T_t = t if { } is empty, and L = sup{u : X_u = 0}, with L = 0 if { } is empty. Then, with the notation of Theorem 5.4, we have:
1) for any Borel function f : IR × IR+ → IR+,

  E_{z_0}[ f( θ_t, 1/T_t − 1/t ) | Z_t = z ] = E_{z/t}[ f(θ_∞, L) ];

2) moreover, when we take z_0 = z, we obtain:

  E_{z_0}[ f(θ_t) 1_{(T_t < t)} | Z_t = z_0 ] = E_{z_0/t}[ f(θ_∞) 1_{(L > 0)} ] = E[ f( 2πε ( C_T/(2π) + 1/2 ) ) ].

The proof of Theorem 5.5 follows easily from Lemma 5.1 and Theorem 5.4.
Comments on Chapter 5

The computations presented in paragraph 5.1 are, by now, well-known. It is very interesting to compare the proof of Theorem 5.3, which follows partly Itô–McKean ([50] and, in fact, the original proof of Spitzer [85]) and makes use of some asymptotics of the modified Bessel functions, with the "computation-free" arguments of Williams (1974, unpublished) and Durrett [37], discussed in detail in Messulam–Yor [65] and Pitman–Yor [75]. The development in paragraph 5.2 is taken partly from Yor [98]; some related computations are found in Berger–Roberts [5]. It would be interesting to obtain a better understanding of the identity in law (5.12), an attempt at which is presented in Theorem 5.5.
Chapter 6
On some exponential functionals of Brownian motion and the problem of Asian options

In the asymptotic study of the winding number of planar BM made in the second part of Chapter 5, we saw the important role played by the representation of (R_t, t ≥ 0), the 2-dimensional Bessel process, as:

  R_t = exp(B_{H_t}), t ≥ 0,  where H_t = ∫_0^t ds / R_s²,

with (B_u, u ≥ 0) a real-valued Brownian motion.

In this chapter, we are interested in the law of the exponential functional:

  ∫_0^t ds exp(a B_s + b s), t ≥ 0,

where a, b ∈ IR, and (B_s, s ≥ 0) is a 1-dimensional Brownian motion. To compute this distribution, we can proceed in a manner which is similar to that used in the second part of Chapter 5, in that we also rely upon the exact knowledge of the semigroups of the Bessel processes.

The problem which motivated the development in this chapter is that of the so-called Asian options which, on the mathematical side, consists in computing as explicitly as possible the quantity:

  C^{(ν)}(t, k) = E[ (A_t^{(ν)} − k)^+ ],   (6.1)

where k, t ≥ 0, and:
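The quantity C^{(ν)}(t, k) in (6.1) is also easy to estimate by simulation. The following sketch (plain Python, with illustrative parameter values of my choosing; the time discretization uses a simple left-point Euler rule) estimates C^{(ν)}(t, k) by Monte Carlo; it can be sanity-checked against the elementary bounds (E[A_t^{(ν)}] − k)^+ ≤ C^{(ν)}(t, k) ≤ E[A_t^{(ν)}], where E[A_t^{(ν)}] = (e^{(2+2ν)t} − 1)/(2 + 2ν).

```python
import math
import random

def asian_call_mc(t, k, nu, n_paths=5000, n_steps=200, seed=1):
    """Monte Carlo estimate of C(t, k) = E[(A_t - k)^+], where
    A_t = int_0^t exp(2(B_s + nu*s)) ds.  Parameters are illustrative."""
    rng = random.Random(seed)
    dt = t / n_steps
    sq = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = 0.0          # current value of B_s + nu*s
        a = 0.0          # running left-point approximation of A_t
        for _ in range(n_steps):
            a += math.exp(2.0 * x) * dt
            x += nu * dt + sq * rng.gauss(0.0, 1.0)
        total += max(a - k, 0.0)
    return total / n_paths
```

With a common seed, the estimate is decreasing in k path by path, as the payoff itself is.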
  A_t^{(ν)} = ∫_0^t ds exp( 2(B_s + νs) ),

with B a real-valued Brownian motion starting from 0.

The method alluded to above, and developed in detail in [102], yields an explicit formula for the law of A_t^{(ν)}, and even for that of the pair (A_t^{(ν)}, B_t). However, the density of this law is given in an integral form, and it seems difficult to use this result to obtain a "workable" formula for (6.1). It is, in fact, easier to consider the Laplace transform in t of C^{(ν)}(t, k), that is:

  λ ∫_0^∞ dt e^{−λt} E[ (A_t^{(ν)} − k)^+ ] ≡ E[ (A_{T_λ}^{(ν)} − k)^+ ],

where T_λ denotes an exponential variable with parameter λ, which is independent of B. It is no more difficult to obtain a closed form formula for E[ (A_{T_λ}^{(ν)})^n ] for any n ≥ 0; therefore, we shall present the main result of this chapter in the following form.

Theorem 6.1  Consider n ≥ 0 (n is not necessarily an integer) and λ > 0. We assume that λ > 2n(n + ν), which is equivalent to μ > ν + 2n, where μ = √(2λ + ν²). Then, for every x > 0:

  E[ ( (A_{T_λ}^{(ν)} − 1/(2x))^+ )^n ] = ( E[(A_{T_λ}^{(ν)})^n] / Γ((μ−ν)/2 − n) ) ∫_0^x dt e^{−t} t^{(μ−ν)/2 − n − 1} (1 − t/x)^{(μ+ν)/2 + n}.   (6.2)

Moreover, we have:

  E[ (A_{T_λ}^{(ν)})^n ] = Γ(n+1) Γ((μ−ν)/2 − n) Γ((μ+ν)/2 + 1) / ( 2^n Γ((μ−ν)/2) Γ(n + (μ+ν)/2 + 1) ).   (6.3)

In the particular case where n is an integer, this formula simplifies into:

  E[ (A_{T_λ}^{(ν)})^n ] = n! / Π_{j=1}^n ( λ − 2(j² + jν) ).   (6.4)
Remarks: 1) It is easily verified, using dominated convergence, that, as x → ∞, both sides of (6.2) converge towards E[(A_{T_λ}^{(ν)})^n].
2) It appears clearly from formula (6.2) that, in some sense, a first step in the computation of the left-hand side of this formula is the computation of the moments of A_{T_λ}^{(ν)}. In fact, independently from the method used in the sequel of the chapter, we shall first show, in paragraph 6.1, how to obtain formula (6.4).

6.1 The integral moments of A_t^{(ν)}

In order to simplify the presentation, we shall write, here, for λ ∈ IR:

  E[ exp(λB_t) ] = exp( t φ(λ) ), where φ(λ) = λ²/2.   (6.5)

This notation will allow to extend easily some of the computations made in the Brownian case to some other processes with independent increments. We then have the following

Theorem 6.2  1) Let μ ≥ 0, n ∈ IN, and α > φ(μ + n). Then:

  ∫_0^∞ dt exp(−αt) E[ ( ∫_0^t ds exp(B_s) )^n exp(μB_t) ] = n! / Π_{j=0}^n ( α − φ(μ + j) ).   (6.6)

2) Let μ ≥ 0, n ∈ IN, and t ≥ 0. Then:

  E[ ( ∫_0^t ds exp(B_s) )^n exp(μB_t) ] = E[ P_n^{(μ)}(exp B_t) exp(μB_t) ],   (6.7)

where (P_n^{(μ)}, n ∈ IN) is the following sequence of polynomials:
  P_n^{(μ)}(z) = n! Σ_{j=0}^n c_j^{(μ)} z^j,  with c_j^{(μ)} = Π_{0 ≤ k ≤ n, k ≠ j} ( φ(μ+j) − φ(μ+k) )^{−1}.

Remark: With the following modifications, this theorem may be applied to a large class of processes with independent increments:
i) we assume that (X_t) is a process with independent increments which admits exponential moments of all orders;
ii) let φ be the Lévy exponent of X, which is defined by: E_0[ exp(mX_s) ] = exp( s φ(m) ).
Then, formula (6.6) is valid for α large enough; moreover, provided φ restricted to IR+ is injective — which implies that the argument concerning the additive decomposition formula in the proof below still holds — formula (6.7) also extends to (X_t), under this only condition.

Proof of Theorem 6.2: 1) We define:

  φ_{n,t}(μ) = E[ ( ∫_0^t ds exp(B_s) )^n exp(μB_t) ]
    = n! E[ ∫_0^t ds_1 ∫_0^{s_1} ds_2 … ∫_0^{s_{n−1}} ds_n exp( B_{s_1} + … + B_{s_n} + μB_t ) ].

We then remark that:

  E[ exp( μB_t + B_{s_1} + … + B_{s_n} ) ]
    = E[ exp{ μ(B_t − B_{s_1}) + (μ+1)(B_{s_1} − B_{s_2}) + … + (μ+n) B_{s_n} } ]
    = exp{ φ(μ)(t − s_1) + φ(μ+1)(s_1 − s_2) + … + φ(μ+n) s_n }.
Therefore, for α > φ(μ+n):

  ∫_0^∞ dt e^{−αt} φ_{n,t}(μ)
    = n! ∫_0^∞ dt e^{−αt} ∫_0^t ds_1 ∫_0^{s_1} ds_2 … ∫_0^{s_{n−1}} ds_n exp{ φ(μ)(t−s_1) + … + φ(μ+n)s_n }
    = n! ∫_0^∞ ds_n exp( −(α − φ(μ+n)) s_n ) … ∫_{s_n}^∞ ds_{n−1} exp( −(α − φ(μ+n−1))(s_{n−1} − s_n) ) ∫_{s_1}^∞ dt exp( −(α − φ(μ))(t − s_1) ),

so that, in the case α > φ(μ+n), we have proved formula (6.6), by integrating successively the (n+1) exponential functions.

2) Next, we use the additive decomposition formula:

  1 / Π_{j=0}^n (α − φ(μ+j)) = Σ_{j=0}^n c_j^{(μ)} / (α − φ(μ+j)),

where c_j^{(μ)} is given as stated in the Theorem. Hence, we have, for any n ∈ IN:

  ∫_0^∞ dt e^{−αt} φ_{n,t}(μ) = n! Σ_{j=0}^n c_j^{(μ)} ∫_0^∞ dt e^{−αt} e^{φ(μ+j)t},

a formula from which we deduce:

  φ_{n,t}(μ) = n! Σ_{j=0}^n c_j^{(μ)} exp( φ(μ+j) t ) = n! Σ_{j=0}^n c_j^{(μ)} E[ exp(jB_t) exp(μB_t) ] = E[ P_n^{(μ)}(exp B_t) exp(μB_t) ],

and we have proved formula (6.7).

As a consequence of Theorem 6.2, we have the following

Corollary 6.1  For any λ ∈ IR, any n ∈ IN, and any t ≥ 0, we have:

  λ^{2n} E[ ( ∫_0^t du exp(λB_u) )^n ] = E[ P_n(exp λB_t) ],   (6.8)
where:

  P_n(z) = 2^n (−1)^n { 1/n! + 2 Σ_{j=1}^n n! (−z)^j / ((n−j)!(n+j)!) }.   (6.9)

Proof: Thanks to the scaling property of Brownian motion, it suffices to prove formula (6.8) for λ = 1. In this case, we remark that formula (6.8) is then precisely formula (6.7) taken with μ = 0, once the coefficients c_j^{(0)} have been identified as:

  c_0^{(0)} = (−1)^n 2^n / (n!)²,   c_j^{(0)} = 2^{n+1} (−1)^{n−j} / ((n−j)!(n+j)!)   (1 ≤ j ≤ n);

therefore, it now appears that the polynomial P_n is precisely P_n^{(0)}, and this ends the proof.

It may also be helpful to write down explicitly the moments of A_t^{(ν)}.

Corollary 6.2  For any λ ∈ IR*, μ ∈ IR, n ∈ IN, and any t ≥ 0, we have:

  λ^{2n} E[ ( ∫_0^t du exp λ(B_u + μu) )^n ] = n! Σ_{j=0}^n c_j^{(μ/λ)} exp( (λ²j²/2 + λjμ) t ).   (6.10)

In particular, for μ = 0:

  λ^{2n} E[ ( ∫_0^t du exp λB_u )^n ] = n! { (−1)^n 2^n/(n!)² + 2^{n+1} Σ_{j=1}^n ( (−1)^{n−j} / ((n−j)!(n+j)!) ) exp( λ²j²t/2 ) }.   (6.11)

6.2 A study in a general Markovian setup

It is interesting to give a theoretical solution to the problem of Asian options in a general Markovian setup, for the two following reasons, at least:
— on one hand, the general presentation allows to understand simply the nature of the quantities which appear in the computations;
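Before moving on, the moment formula of Corollary 6.2 (case μ = 0, λ = 1) can be checked numerically, as a small added sketch: for n = 2, integrating E[e^{B_{s1} + B_{s2}}] = exp((s1 − s2)/2 + 2 s2) over the ordered simplex {0 ≤ s2 ≤ s1 ≤ t} by hand gives (4/3)[(e^{2t} − 1)/2 − 2(e^{t/2} − 1)], which must agree with the polynomial formula.

```python
import math

def moment_closed_form(n, t, lam=1.0):
    """E[(int_0^t exp(lam*B_u) du)^n] via the polynomial P_n:
    lam^{2n} E[...] = n! * sum_j c_j^{(0)} exp(lam^2 j^2 t / 2)."""
    c = [((-1) ** n) * (2 ** n) / math.factorial(n) ** 2]   # c_0
    for j in range(1, n + 1):                                # c_j, 1 <= j <= n
        c.append(2 ** (n + 1) * (-1) ** (n - j)
                 / (math.factorial(n - j) * math.factorial(n + j)))
    s = sum(cj * math.exp(lam * lam * j * j * t / 2.0)
            for j, cj in enumerate(c))
    return math.factorial(n) * s / lam ** (2 * n)
```

For n = 1 this reduces to 2(e^{t/2} − 1), the elementary value of E[∫_0^t e^{B_u} du].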
— on the other hand, this general approach may allow to choose some other stochastic models than the geometric Brownian motion model; this discussion shall be taken up in paragraph 6.5.

In any case, we consider {(X_t), (θ_t), (P_x)_{x∈E}} a strong Markov process, and (A_t, t ≥ 0) a continuous additive functional, which is strictly increasing, and such that P_x(A_∞ = ∞) = 1, for every x ∈ E. Consider, moreover, g : IR → IR+, a Borel function such that g(x) = 0 if x ≤ 0. (In the applications, we shall take g(x) = (x^+)^n.) Define:

  G_x(t) = E_x[ g(A_t) ],  G_x(t, k) = E_x[ g(A_t − k) ],  and  G_x^{(λ)}(k) = E_x[ ∫_0^∞ dt e^{−λt} g(A_t − k) ].

We then have the important

Proposition 6.1  Define τ_k = inf{t : A_t ≥ k}. The two following formulae hold:

  G_x^{(λ)}(k) = ∫_0^∞ dv e^{−λv} E_x[ e^{−λτ_k} G_{X_{τ_k}}(v) ],   (6.12)

and, if g is, moreover, increasing and absolutely continuous:

  G_x^{(λ)}(k) = (1/λ) ∫_0^∞ dv g'(v − k) E_x[ e^{−λτ_v} ].   (6.13)

Remark: In the application of these formulae to Brownian motion, we shall see, in paragraph 6.5, that the equality between the right-hand sides of formulae (6.12) and (6.13) is the translation of a classical "intertwining" identity between confluent hypergeometric functions. This is one of the reasons why it seems important to insist upon this identity.

Proof of Proposition 6.1: 1) We first remark that, on the set {t ≥ τ_k}, the following relation holds:

  A_t(ω) = A_{τ_k}(ω) + A_{t−τ_k}(θ_{τ_k} ω) = k + A_{t−τ_k}(θ_{τ_k} ω).
This implies, using the strong Markov property:

  G_x(t, k) ≡ E_x[ g(A_t − k) ] = E_x[ E_{X_{τ_k}(ω)}[ g(A_{t−τ_k(ω)}) ] 1_{(τ_k(ω) ≤ t)} ],

hence:

  G_x(t, k) = E_x[ G_{X_{τ_k}}(t − τ_k) 1_{(τ_k ≤ t)} ].

Therefore, using Fubini's theorem:

  G_x^{(λ)}(k) = E_x[ ∫_{τ_k}^∞ dt e^{−λt} G_{X_{τ_k}}(t − τ_k) ],

and formula (6.12) follows.

2) Making the change of variables t = v − k in the integral in (6.13), and using the strong Markov property, we may write the right-hand side of (6.13) as:

  (1/λ) E_x[ ∫_0^∞ dt g'(t) e^{−λτ_k} E_{X_{τ_k}}( e^{−λτ_t} ) ].

Therefore, in order to prove that the right-hand sides of (6.12) and (6.13) are equal, it suffices to prove the identity:

  ∫_0^∞ dv e^{−λv} E_z[ g(A_v) ] = (1/λ) ∫_0^∞ dt g'(t) E_z[ e^{−λτ_t} ]   (6.14)

(here, z stands for X_{τ_k}(ω) in the previous expressions). In fact, we now show:

  ∫_0^∞ dv e^{−λv} g(A_v) = (1/λ) ∫_0^∞ dt g'(t) e^{−λτ_t},   (6.15)

which, a fortiori, implies (6.14). Indeed, if we write g(a) = ∫_0^a dt g'(t), we obtain:

  ∫_0^∞ dv e^{−λv} g(A_v) = ∫_0^∞ dv e^{−λv} ∫_0^{A_v} dt g'(t)
we ﬁnd: Gx (t) = exp(mnx)en (t) = y mn en (t) .15). where: ⎡⎛ en (t) = G0 (t) ≡ E0 ⎣⎝ 0 t (λ) ⎞n ⎤ ds exp(mXs )⎠ ⎦ . independent increments. . y = exp(x).3 The case of L´vy processes e ∞ ∞ −λv 87 = 0 dt g (t) τt dv e 1 = λ ∞ dt g (t)e−λτt 0 which is precisely the identity (6. t ≥ 0) be an IR+ valued multiplicative functional of the process X. and we take for (At ) and g the following: t At = 0 ds exp(mXs ) . and n > 0. ˜ We deﬁne Yk = exp(Xτk ). prove the following generalizations of formulae (6. and g(x) = (x+ )n . and we denote by (Py )y∈IR+ the family of laws of the strong Markov process (Yk .3 The case of L´vy processes e We now consider the particular case where (Xt ) is a L´vy process. for some m ∈ IR.12) and (6. We now compute the quantities Gx (t) and Gx (k) in this particular case.6. k ≥ 0).13): ∞ ∞ dt Ex [Mt g(At − k)] = 0 0 ∞ dv Ex Mτk EXτk (Mv g(Av )) ⎡ ⎛ ∞ ⎞⎤ dv Mv ⎠⎦ = 0 dt g (t)Ex ⎣Mτk EXτk ⎝ τt 6.1 Let (Mt . Exercise 6. that is e a process with homogeneous.
If we write y = exp(x), we find:

  G_x(t) = exp(mnx) e_n(t) = y^{mn} e_n(t),

where:

  e_n(t) = G_0(t) ≡ E_0[ ( ∫_0^t ds exp(m X_s) )^n ],  and  e_n^{(λ)} = ∫_0^∞ dt e^{−λt} e_n(t).   (6.16)

On the other hand, in this case, we have:

  τ_k = ∫_0^k dv (Ỹ_v)^{−m}.

We may now write both formulae (6.12) and (6.13) as follows.

Proposition 6.2  With the above notation, we have:

  G_x^{(λ)}(k) =(i) Ẽ_y[ (Ỹ_k)^{mn} exp(−λτ_k) ] e_n^{(λ)} =(ii) (n/λ) ∫_k^∞ dv (v − k)^{n−1} Ẽ_y[ e^{−λτ_v} ].   (6.17)

In the particular case n = 1, this double equality takes a simpler form: indeed, for n = 1, we have:

  e_1^{(λ)} = ∫_0^∞ dt e^{−λt} e_1(t) = ∫_0^∞ dt e^{−λt} ∫_0^t ds exp( s φ(m) ),

where φ is the Lévy exponent of X, and therefore, for λ > φ(m), the formula:

  e_1^{(λ)} = 1 / ( λ(λ − φ(m)) ),

so that formulae (6.17) become:

  λ G_x^{(λ)}(k) = (1/(λ − φ(m))) Ẽ_y[ (Ỹ_k)^m exp(−λτ_k) ] = ∫_k^∞ dv Ẽ_y[ exp(−λτ_v) ].   (6.18)

6.4 Application to Brownian motion

We now assume that X_t = B_t + νt, t ≥ 0, with (B_t) a Brownian motion, and ν ≥ 0, and we take m = 2, which implies:
  A_t = ∫_0^t ds exp(2 X_s).
In this particular situation, the process (Ỹ_k, k ≥ 0) is now the Bessel process with index ν, or dimension δ_ν = 2(1 + ν). We denote by P_y^{(ν)} the law of this process, when starting at y, and we write simply P^{(ν)} for P_1^{(ν)}. Hence, for example, P^{(0)} denotes the law of the 2-dimensional Bessel process, starting from 1. We now recall the Girsanov relation, which was already used in Chapter 5, formula (5.8):

  P_y^{(ν)} |_{R_t} = (R_t / y)^ν exp( −(ν²/2) τ_t ) · P_y^{(0)} |_{R_t},  where τ_t = ∫_0^t ds / R_s².   (6.19)
In Chapter 5, we used the notation H_t for τ_t; (R_t, t ≥ 0) denotes, as usual, the coordinate process on Ω*_+, and R_t = σ{R_s, s ≤ t}. The following Lemma is now an immediate consequence of formula (6.19).

Lemma 6.1  For every α ∈ IR, for every ν ≥ 0, and λ ≥ 0, we have, if we denote μ = √(2λ + ν²):

  E^{(ν)}[ (R_k)^α exp(−λτ_k) ] = E^{(0)}[ (R_k)^{α+ν} exp( −(μ²/2) τ_k ) ] = E^{(μ)}[ R_k^{α+ν−μ} ].   (6.20)

We are now able to write the formulae (6.17) in terms of the moments of Bessel processes.

Proposition 6.3  We now write simply G^{(λ)}(k) for G_0^{(λ)}(k), and we introduce the notation:

  H_μ(α; s) = E^{(μ)}( (R_s)^α ).   (6.21)

Then, we have:

  G^{(λ)}(k) =(i) H_μ(2n + ν − μ; k) e_n^{(λ)} =(ii) (n/λ) ∫_k^∞ dv (v − k)^{n−1} H_μ(ν − μ; v),   (6.22)

which, in the particular case n = 1, simplifies, with the notation δ_ν = 2(1 + ν), to:

  λ G^{(λ)}(k) =(i) (1/(λ − δ_ν)) H_μ(2 + ν − μ; k) =(ii) ∫_k^∞ dv H_μ(ν − μ; v).   (6.23)
It is now clear, from formula (6.22), that in order to obtain a closed form formula for G^{(λ)}(k), it suffices to be able to compute explicitly H_μ(α; k) and e_n^{(λ)}. In fact, once H_μ(α; k) is computed for all admissible values of α and k, by taking k = 0 in formula (6.22)(ii), we obtain:

  e_n^{(λ)} = (n/λ) ∫_0^∞ dv v^{n−1} H_μ(ν − μ; v),   (6.24)
from which we shall deduce formula (6.3) for λ e_n^{(λ)} ≡ E[(A_{T_λ}^{(ν)})^n].

We now present the quickest way, to our knowledge, to compute H_μ(α; k). In order to compute this quantity, we find it interesting to introduce the laws Q_z^δ of the square Bessel process (Σ_u, u ≥ 0) of dimension δ, starting from z, for δ > 0 and z > 0, because of the additivity property of this family (see Chapter 2, Theorem 2.3). We then have the following

Proposition 6.4  For z > 0, and for every γ such that 0 < γ < μ + 1, we have:

  (1/z^γ) H_μ( −2γ; 1/(2z) ) =(i) Q_z^{δ_μ}[ 1/(Σ_{1/2})^γ ] =(ii) (1/Γ(γ)) ∫_0^1 du e^{−zu} u^{γ−1} (1 − u)^{μ−γ}.   (6.25)

Proof: a) Formula (6.25)(i) is a consequence of the invariance property of the laws of Bessel processes by time-inversion.
b) We now show how to deduce formula (6.25)(ii) from (6.25)(i). Using the elementary identity:

  1/r^γ = (1/Γ(γ)) ∫_0^∞ dt e^{−rt} t^{γ−1},

we obtain:

  Q_z^{δ_μ}[ 1/(Σ_{1/2})^γ ] = (1/Γ(γ)) ∫_0^∞ dt t^{γ−1} Q_z^{δ_μ}( e^{−tΣ_{1/2}} ),

and the result now follows from the general formula:

  Q_z^δ( exp(−αΣ_s) ) = ( 1/(1 + 2αs)^{δ/2} ) exp( −z α/(1 + 2αs) ),   (6.26)
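Formula (6.26) is easy to test numerically; the sketch below is an added illustration (parameter values are arbitrary). For integer dimension δ, it builds Σ_s as a sum of δ squared Gaussian coordinates whose squared starting points add up to z, computes E[exp(−αΣ_s)] coordinate by coordinate by quadrature against the standard normal density, and compares with the closed form.

```python
import math

def phi_coord(x, s, alpha, n_grid=4000, lim=10.0):
    """E[exp(-alpha*(x + sqrt(s)*N)^2)] for N ~ N(0,1), midpoint quadrature."""
    h = 2.0 * lim / n_grid
    tot = 0.0
    for i in range(n_grid):
        u = -lim + (i + 0.5) * h
        y = x + math.sqrt(s) * u
        tot += math.exp(-alpha * y * y) * math.exp(-u * u / 2.0) * h
    return tot / math.sqrt(2.0 * math.pi)

def besq_laplace(z, delta, s, alpha):
    """E under Q^delta_z of exp(-alpha*Sigma_s), integer delta, writing
    Sigma_s as a sum of delta independent squared BM coordinates."""
    x = math.sqrt(z / delta)      # spread the starting point over coordinates
    return phi_coord(x, s, alpha) ** delta

def besq_laplace_closed(z, delta, s, alpha):
    """Right-hand side of (6.26)."""
    return (math.exp(-z * alpha / (1.0 + 2.0 * alpha * s))
            / (1.0 + 2.0 * alpha * s) ** (delta / 2.0))
```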
which we use with α = t, and s = 1/2.

Remark: Formula (6.26) is easily deduced from the additivity property of the family (Q_z^δ) (see Revuz–Yor [81], p. 411).

We now show how formulae (6.2) and (6.3) are consequences of formula (6.25):
— firstly, we apply formula (6.22)(i), together with formula (6.25)(ii), with γ = (μ−ν)/2 − n, and z = x; formula (6.2) then follows after making the change of variables u = t/x in the integral in formula (6.25);
— secondly, we take formula (6.22)(ii) with k = 0, which implies:

  E[ (A_{T_λ}^{(ν)})^n ] = n ∫_0^∞ dv v^{n−1} H_μ(ν − μ; v),
and we then obtain formula (6.3) by replacing, in the above integral, H_μ(ν − μ; v) by its value given by (6.25)(ii), with γ = (μ−ν)/2.

In fact, when we analyze the previous arguments in detail, we obtain a representation of the r.v. A_{T_λ}^{(ν)} as the ratio of a beta variable to a gamma variable, both variables being independent; such analysis also provides us with some very partial explanation of this independence property. Precisely, we have obtained the following result.

Theorem 6.3  1. The law of the r.v. A_{T_λ}^{(ν)} satisfies:

  A_{T_λ}^{(ν)} =(law) Z_{1,a} / (2 Z_b),  where a = (μ+ν)/2 and b = (μ−ν)/2,   (6.27)
and where Z_{α,β}, resp. Z_b, denotes a beta variable with parameters (α, β), resp. a gamma variable with parameter b, and both variables on the right-hand side of (6.27) are independent.

2. More generally, we obtain:

  ( A_{T_λ}^{(ν)} ; exp(2B_{T_λ}^{(ν)}) Z_a/Z_b )  =(law)  ( Z_1 / (2(Z_1 + Z_a) Z_b) ; exp(2B_{T_λ}^{(ν)}) Z_a/Z_b ),   (6.28)
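The factorization (6.27) can be verified moment by moment, entirely in closed form: for integer n with λ > 2n(n + ν), the moment given by (6.4) must coincide with E[(Z_{1,a})^n] E[(2Z_b)^{−n}]. The following short check is an added sketch using the gamma function.

```python
import math

def moment_from_6_4(n, lam, nu):
    """E[(A_{T_lam})^n] = n! / prod_{j=1}^n (lam - 2(j^2 + j*nu)), cf. (6.4)."""
    p = 1.0
    for j in range(1, n + 1):
        p *= lam - 2.0 * (j * j + j * nu)
    return math.factorial(n) / p

def moment_from_6_27(n, lam, nu):
    """Same moment via (6.27): A =law Z_{1,a}/(2 Z_b), a=(mu+nu)/2, b=(mu-nu)/2."""
    mu = math.sqrt(2.0 * lam + nu * nu)
    a, b = (mu + nu) / 2.0, (mu - nu) / 2.0
    beta_mom = math.factorial(n) * math.gamma(1 + a) / math.gamma(1 + a + n)
    inv_gamma_mom = math.gamma(b - n) / (2.0 ** n * math.gamma(b))
    return beta_mom * inv_gamma_mom
```

Since λ − 2(j² + jν) = 2(a + j)(b − j), the two expressions agree identically, which the assertions below confirm on a few admissible triples.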
where Z_1, Z_a, Z_b are three independent gamma variables, with respective parameters 1, a, b, and these variables are also assumed to be independent of B and T_λ.

Remark: Our aim in establishing formula (6.28) was to try and understand better the factorization which occurs in formula (6.27), but, at least at first glance, formula (6.28) does not seem to be very helpful.

Proof of the Theorem: a) From formula (6.24), if we take n sufficiently small, we obtain:
  E[ (A_{T_λ}^{(ν)})^n ] = ∫_0^∞ dv n v^{n−1} H_μ(ν − μ; v)
   = ∫_0^∞ (dy/y) n (1/(2y))^n H_μ( −2b; 1/(2y) ),  where b = (μ−ν)/2
   = ∫_0^∞ (dy/y) n (1/(2y))^n y^b Q_y^{δ_μ}[ 1/(Σ_{1/2})^b ],  from (6.25)(i)
   = ∫_0^∞ (dy/y) n (1/(2y))^n y^b E[ exp(−y Z_{(b,a+1)}) ] c_{μ,ν},  from (6.25)(ii).

In the sequel, the constant c_{μ,ν} may vary, but shall never depend on n. For simplicity, we now write Z instead of Z_{(b,a+1)}, and we obtain, after making the change of variables y = z/Z:

  E[ (A_{T_λ}^{(ν)})^n ] = c_{μ,ν} E[ ∫_0^∞ (dz/z) n (Z/(2z))^n (z/Z)^b exp(−z) ]
   = c_{μ,ν} E[ n Z^{n−1} (1/Z)^{b−1} ] ∫_0^∞ dz (1/(2z))^n z^{b−1} e^{−z},

and, after performing an integration by parts in the first expectation, we obtain:

  E[ (A_{T_λ}^{(ν)})^n ] = E[ (Z_{1,a})^n ] E[ (1/(2Z_b))^n ],   (6.29)

which implies (6.27).

b) We take up the same method as above, that is: we consider
E[ (A_{T_λ}^{(ν)})^α exp(βB_{T_λ}^{(ν)}) ]. Applying Cameron–Martin's absolute continuity relationship between Brownian motion and Brownian motion with drift, we find:

  E[ (A_{T_λ}^{(ν)})^α exp(βB_{T_λ}^{(ν)}) ] = λ ∫_0^∞ dt exp( −(λ + ν²/2) t ) E[ (A_t)^α exp((β+ν)B_t) ]
   = λ ∫_0^∞ dt e^{−θt} E[ (A_t^{(β+ν)})^α ] = (λ/θ) E[ (A_{T_θ}^{(β+ν)})^α ],

where θ = λ + ν²/2 − (β+ν)²/2 = λ − β²/2 − βν. We now remark that √(2θ + (β+ν)²) is in fact equal to μ = √(2λ + ν²). Hence, with the help of formula (6.29), we may write formula (6.30) as:

  E[ (A_{T_λ}^{(ν)})^α exp(βB_{T_λ}^{(ν)}) ] = (λ/θ) E[ (Z_{1,a+β/2})^α ] E[ (1/(2Z_{b−β/2}))^α ].   (6.30)

Now, there exist constants C_1 and C_2 such that:

  E[ (Z_{1,a+β/2})^α ] = (1/C_1) E[ (Z_{1,a})^α (1 − Z_{1,a})^{β/2} ],
  E[ (2Z_{b−β/2})^{−α} ] = C_2 E[ (2Z_b)^{−α} (Z_b)^{−β/2} ],

and it is easily found that C_1 = a/(a + β/2) and C_2 = Γ(b)/Γ(b − β/2). Furthermore, by taking simply α = 0 in formula (6.30), we obtain:

  E[ exp(βB_{T_λ}^{(ν)}) ] = λ/θ.

Hence, we deduce from the above identity
that:

  ( A_{T_λ}^{(ν)} ; exp(B_{T_λ}^{(ν)}) ) =(law) ( Z_{1,a}/(2Z_b) ; (Z_{a,1}/Z_{b,1})^{1/2} ),   (6.31)

from which we easily obtain (6.28), thanks to the beta–gamma relationships, once one has remarked that Z_{1,a} =(law) Z_1/(Z_1 + Z_a), the ratio being independent of Z_1 + Z_a.

We now obtain the more general formula

Proposition 6.5  Let Z_{α,β} and Z_γ be two independent random variables, which are, respectively, a beta variable with parameters (α, β) and a gamma variable with parameter γ. Then, for every x > 0, and n < γ:

  E[ ( (1/Z_{α,β}) − (Z_γ/x) )^+{}^n ] = ( x^{γ−n} / (Γ(γ) B(α, β)) ) ( β B(n+1, β) ) ∫_0^1 du e^{−xu} u^{γ−n−1} (1 − u)^{β+n}.   (6.32)

Proof: We remark that we need only integrate upon the subset {1 ≥ Z_{α,β} ≥ x/Z_γ} of the probability space; after conditioning on Z_γ, we integrate with respect to the law of Z_{α,β} on the random interval [x/Z_γ, 1], which brings in the quantity:

  ∫_0^1 dw ( u + w(1 − u) )^{α−1} w^n (1 − w)^{β−1},

and the rest of the computation is routine.

In the particular case α = 1, formula (6.32) simplifies to:

  E[ ( (1/Z_{1,β}) − (Z_γ/x) )^+{}^n ] = ( x^{γ−n} / Γ(γ) ) ( β B(n+1, β) ) ∫_0^1 du e^{−xu} u^{γ−n−1} (1 − u)^{β+n},   (6.33)

which is precisely formula (6.2), taken with a = β, and b = γ. As a verification, we now show that formula (6.2) may be recovered simply from formula (6.32): for x > 0, and a > 0, n < b, it is convenient to write formula (6.2) in the equivalent form involving (1/x^{b−n}) E[ ((1/Z_{1,a}) − (Z_b/x))^+{}^n ], and the computation is then routine.
We now consider some particularly interesting subcases of formula (6.27).

Theorem 6.4  Let U be a uniform variable on [0, 1], and σ = inf{t : B_t = 1}.
1) For any ν ∈ [0, 1[, we have:

  A_{T_{2(1−ν)}}^{(ν)} =(law) U / (2 Z_{1−ν}).   (6.34)

In particular, taking ν = 0, and ν = 1/2, we have, respectively:

  ∫_0^{T_1} ds exp(√2 B_s) =(law) U / Z_1   and   ∫_0^{T_1} ds exp(2B_s + s) =(law) U σ,   (6.35)

where, as usual, the variables which appear on the right-hand sides of (6.34) and (6.35) are assumed to be independent.
2) For any ν ≥ 0, we have:

  A_{T_{ν+1/2}}^{(ν)} =(law) Z_{1,ν+1/2} · σ.   (6.36)

Proof: The different statements follow immediately from formula (6.27), once one has remarked that:

  Z_{1,1} =(law) U,  and  1/(2Z_{1/2}) =(law) σ =(law) 1/N²,

where N is a centered Gaussian variable with variance 1.
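The identification 1/(2Z_{1/2}) =(law) σ used in the proof can itself be checked through Laplace transforms: E[e^{−λσ}] = e^{−√(2λ)} for the hitting time σ = inf{t : B_t = 1}, while the left-hand side is an explicit gamma integral. A small numerical sketch (added here as an illustration, with quadrature parameters chosen for convenience):

```python
import math

def laplace_half_gamma(lam, n_grid=100000, upper=60.0):
    """E[exp(-lam/(2*Z))] for Z ~ Gamma(1/2), by midpoint quadrature of
    the density x^{-1/2} e^{-x} / Gamma(1/2) on (0, upper)."""
    h = upper / n_grid
    tot = 0.0
    for i in range(n_grid):
        x = (i + 0.5) * h
        tot += math.exp(-lam / (2.0 * x)) * x ** -0.5 * math.exp(-x) * h
    return tot / math.gamma(0.5)
```

The agreement with e^{−√(2λ)} reflects the classical integral ∫_0^∞ x^{−1/2} e^{−x − c/x} dx = √π e^{−2√c}.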
6.5 A discussion of some identities

(6.5.1) Formula (6.25)(ii) might also have been obtained by using the explicit expression of the semigroup of the square of a Bessel process (see, for example, [81], p. 411, Corollary (1.4)). With this approach, one obtains the following formula:

  (1/z^γ) H_μ( −2γ; 1/(2z) ) ≡ Q_z^{δ_μ}[ 1/(Σ_{1/2})^γ ] = exp(−z) ( Γ(α)/Γ(β) ) Φ(α, β, z),   (6.37)

where α = −γ + 1 + μ, β = 1 + μ, and Φ(α, β, z) denotes the confluent hypergeometric function with parameters α and β. With the help of the following classical relations (see Lebedev [63], pp. 266–267):

  (i) Φ(α, β, z) = e^z Φ(β − α, β, −z),
  (ii) Φ(β − α, β, −z) = ( Γ(β) / (Γ(α)Γ(β − α)) ) ∫_0^1 dt e^{−zt} t^{(β−α)−1} (1 − t)^{α−1},   (6.38)

one may obtain formula (6.25)(ii) as a consequence of (6.37).

(6.5.2) The recurrence formula (6.22)(ii) may be written, assuming that formula (6.25)(ii) is known, and after some elementary transformations, in the form:

  (1/x^α) H_μ( 2α; 1/(2x) ) e_n^{(λ)} = (n / (λ 2^{n−1})) ∫_0^1 dw w^{−α−1} (1 − w)^{n−1} (xw)^β H_μ( 2β; 1/(2wx) ),   (6.39)

where, now, we take α = n + (ν−μ)/2 and β = (ν−μ)/2 ≡ α − n. The equality (6.39) is nothing but an analytic translation of the well-known algebraic relation between beta variables:

  Z_{a,b} Z_{a+b,c} =(law) Z_{a,b+c},   (6.40)

where Z_{p,q} denotes a beta variable with parameters (p, q), and the two variables on the left-hand side of (6.40) are assumed to be independent. In terms of confluent hypergeometric functions, taken with the values of the parameters a = −α, b = n, c = 1 + μ + β, the equality (6.39) translates into the identity:
  Φ(α, γ, z) = ( Γ(γ) / (Γ(β)Γ(γ − β)) ) ∫_0^1 dt t^{β−1} (1 − t)^{γ−β−1} Φ(α, β, zt),   (6.41)

for γ > β (see Lebedev [63], p. 278).

(6.5.3) The relations (6.39) and (6.41) may also be understood in terms of the semigroups (Q_t^δ, t ≥ 0) and (Q_t^{δ'}, t ≥ 0) of squares of Bessel processes with respective dimensions δ and δ', where 0 < δ' < δ, via the intertwining relationship:

  Q_t^{δ'} M_{k,k'} = M_{k,k'} Q_t^δ,  with k = δ'/2, k' = (δ − δ')/2,   (6.42)

where M_{a,b} is the "multiplication" Markov kernel which is defined by:

  M_{a,b} f(x) = E[ f(x Z_{a,b}) ],  f ∈ b(B(IR+)).   (6.43)

(For a discussion of such intertwining relations, which are closely linked with beta and gamma variables, see Yor [99].)

(6.5.4) Finally, we close up this discussion with a remark which relates the recurrence formula (6.22)(ii), in which we assume n to be an integer, to the uniform integrability property, under P^{(μ)}, of the martingales:

  M_k^{(p)} := R_k^{2+p} − 1 − c_p^{(μ)} ∫_0^k ds R_s^p,  with c_p^{(μ)} = (2 + p) ( μ + (2+p)/2 ),

for −2μ < 2 + p < 0. Once this uniform integrability property has been obtained, using the fact that R_∞^{2+p} = 0, one gets, under the above restrictions for p, the following relation:

  E^{(μ)}[ (R_k)^{2+p} ] = −c_p^{(μ)} E^{(μ)}[ ∫_k^∞ ds R_s^p ],   (6.44)

and, using this relation recurrently, one obtains formula (6.22)(ii) with the following expression for the constant λ e_n^{(λ)} ≡ E[(A_{T_λ}^{(ν)})^n]:
  E[ (A_{T_λ}^{(ν)})^n ] = (−1)^n n! / Π_{j=0}^{n−1} c^{(μ)}_{2j+ν−μ},   (6.45)

and an immediate computation shows that formulae (6.45) and (6.4) are identical.

Comments on Chapter 6

Whereas, in Chapter 5, some studies of a continuous determination of the logarithm along the planar Brownian trajectory have been made, we are interested here in the study of the laws of exponential functionals of Brownian motion, or Brownian motion with drift. The origin of the present study comes from Mathematical finance: the so-called financial Asian options take into account the past history of the market; hence the introduction of the arithmetic mean of the geometric Brownian motion. A thorough discussion of the motivation from Mathematical finance is made in Geman–Yor [47].

The results in paragraph 6.1 are taken from Yor [102]. The developments made in paragraphs 6.2 and 6.3 show that there are potential extensions to exponential functionals of a large class of Lévy processes; a class of examples has been studied, in joint work with Ph. Carmona and F. Petit [24], possibly from a path decomposition: if (X_t) is a Lévy process, and (R(u), u ≥ 0) is defined by:

  exp(X_t) = R( ∫_0^t ds exp(X_s) ),

then the limitation of the method lies in the fact that the semigroup of R is only known explicitly in some particular cases.

In paragraph 6.4, a simple description of the law of the variable A_{T_λ}^{(ν)} is obtained; it would be nice to be able to explain the origin of the beta variable, resp. gamma variable, in formula (6.27). In paragraph 6.5, a discussion of the previously obtained formulae in terms of confluent hypergeometric functions is presented.
Had we chosen, for our computations, the differential equations approach, which is closely related to the Feynman–Kac formula, these functions would have immediately appeared. However, throughout this chapter, we have preferred to use some adequate changes of probability, and Girsanov's theorem. The methodology used in this Chapter helped to unify certain computations for Asian, Parisian and barrier options (see [105]), and in related publications (Geman–Yor [46], [47], and Yor [102]). The Springer-Finance volume [104] gathers ten papers dealing, in a broad sense, with Asian options.
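As a final check on the beta-variable algebra used in paragraph 6.5: relation (6.40), Z_{a,b} Z_{a+b,c} =(law) Z_{a,b+c}, can be verified entirely in closed form, since a [0,1]-valued law is determined by its moments; it suffices that E[(Z_{a,b} Z_{a+b,c})^n] = E[(Z_{a,b+c})^n] for all integers n. A short added sketch:

```python
import math

def beta_moment(p, q, n):
    """E[Z_{p,q}^n] = Gamma(p+n) Gamma(p+q) / (Gamma(p) Gamma(p+q+n))."""
    return (math.gamma(p + n) * math.gamma(p + q)
            / (math.gamma(p) * math.gamma(p + q + n)))
```

The telescoping of the gamma factors makes the two sides cancel exactly, for any parameters.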
Chapter 7
Some asymptotic laws for multidimensional BM

In this chapter, we first build upon the knowledge gained in Chapter 5 about the asymptotic windings of planar BM around one point, together with the Kallianpur–Robbins ergodic theorem for planar BM, to extend Spitzer's theorem:

  2θ_t / log t --(law)--> C_1, as t → ∞,

into a multidimensional result for the winding numbers (θ_t^1, θ_t^2, …, θ_t^n) of planar BM around n points, where {z^j, 1 ≤ j ≤ n} is a finite set of points (all notations which may be alluded to, but not defined, in this chapter are found in Pitman–Yor [75]).

This study in the plane may be extended one step further by considering BM in IR³ and seeking asymptotic laws for its winding numbers around a finite number of oriented straight lines. There is, again, a more general setup for which such asymptotic laws may be obtained, and which allows to unify the previous studies: we consider a finite number (B^1, B^2, …, B^m) of jointly Gaussian, "linearly correlated", planar Brownian motions, and the winding numbers of each of them around the points z^j.

In the last paragraph, some asymptotic results for Gauss linking numbers relative to one BM, or two independent BM's, with values in IR³, or, even, certain unbounded curves (Le Gall–Yor [62]), are presented.
7.1 Asymptotic windings of planar BM around n points

In Chapter 5, we presented Spitzer's result:

  2θ_t / log t --(law)--> C_1, as t → ∞.   (7.1)

This may be extended as follows:

  (2/log t) ( θ_t^{r,−}, θ_t^{r,+}, ∫_0^t ds f(Z_s) ) --(law)--> ( ∫_0^σ dγ_s 1_{(β_s ≤ 0)}, ∫_0^σ dγ_s 1_{(β_s ≥ 0)}, f̄ L_σ ),   (7.2)

where:

  θ_t^{r,−} = ∫_0^t dθ_s 1_{(|Z_s| ≤ r)},  θ_t^{r,+} = ∫_0^t dθ_s 1_{(|Z_s| ≥ r)};

f : C → IR is integrable with respect to Lebesgue measure, and f̄ = (1/2π) ∫∫ dx dy f(z); β and γ are two independent real Brownian motions, starting from 0, σ = inf{t : β_t = 1}, and L_σ is the local time at level 0, up to time σ, of β (for a proof of (7.2), see Messulam–Yor [65] and Pitman–Yor [75]).

The result (7.2) shows in particular that Spitzer's law (7.1) takes place jointly with the Kallianpur–Robbins law, which is the convergence in law of the third component on the left-hand side of (7.2) towards an exponential variable (see, e.g., subparagraph (4.3.2), case 3)). A remarkable feature in (7.2) is that the right-hand side does not depend on r. The following Proposition, which will be a key for the extension of (7.1) to the asymptotic study of the winding numbers with respect to a finite number of points, provides an explanation.

Proposition 7.1  Let φ(z) = (f(z), g(z)) be a function from C to IR² such that:

  ∫∫ dx dy ( (f(z))² + (g(z))² ) < ∞.

Then, the following quantity:

  (1/√log t) ∫_0^t φ(Z_s) · dZ_s ≡ (1/√log t) ∫_0^t ( dX_s f(Z_s) + dY_s g(Z_s) )

converges in law, as t → ∞, towards:
  √(k_φ) Γ_{L_σ},  where k_φ ≡ (1/2π) ∫∫ dx dy |φ(z)|²,

and (Γ_t, t ≥ 0) is a one-dimensional BM, starting from 0, such that Γ, β, γ are independent.

Proposition 7.1 gives an explanation for the absence of the radius r on the right-hand side of (7.2), as one may show that, for 0 < r < R < ∞:

  (1/log t) θ_t^{r,R} ≡ (1/log t) ∫_0^t dθ_s 1_{(r ≤ |Z_s| ≤ R)} --(P)--> 0,

the winding number in the annulus {z : r ≤ |z| ≤ R} being negligible to the order of log t.

We now consider θ_t^1, θ_t^2, …, θ_t^n, the winding numbers of (Z_u, u ≤ t) around each of the points z^1, z^2, …, z^n. Just as before, we separate θ_t^j into θ_t^{j,−} and θ_t^{j,+}: for some r_j > 0, we define:

  θ_t^{j,−} = ∫_0^t dθ_s^j 1_{(|Z_s − z^j| ≤ r_j)}  and  θ_t^{j,+} = ∫_0^t dθ_s^j 1_{(|Z_s − z^j| ≥ r_j)}.   (7.4)

Another application of Proposition 7.1 entails:

  (1/log t) ( θ_t^{j,+} − θ_t^{i,+} ) --(P)--> 0,

so that it is now quite plausible, and indeed it is true, that:

  (2/log t) ( θ_t^1, …, θ_t^n ) --(law)--> ( W_1^− + W^+, W_2^− + W^+, …, W_n^− + W^+ ).   (7.3)

Moreover, the asymptotic random vector ( W_1^−, W_2^−, …, W_n^−, W^+ ) may be represented as:

  ( L_T(U) C_k (1 ≤ k ≤ n), V_T ),   (7.5)

where (U_t, t ≥ 0) is a reflecting BM, (V_t, t ≥ 0) is a one-dimensional BM, which is independent of U, T = inf{t : U_t = 1}, L_T(U) is the local time of U at 0, up to time T, and (C_k, 1 ≤ k ≤ n) are independent Cauchy variables with parameter 1, which are also independent of U and V. This convergence in law takes place jointly with (7.2); more precisely, the representation (7.5) agrees with (7.2),
Indeed, as we already did in paragraph 4.3, one may show that:

(∫_0^σ dγ_s 1_{(β_s ≥ 0)}, ∫_0^σ dγ_s 1_{(β_s ≤ 0)}, (1/2) ℓ_σ) =(law) (V_T, L_T(U) C_1, L_T(U)),

essentially by using, if needed, the well-known representation:

β_t^+ = U(∫_0^t ds 1_{(β_s ≥ 0)}), t ≥ 0.

From the formula for the characteristic function:

E[exp(i(α V_T + β L_T(U) C_1))] = (ch α + β (sh α)/α)^{−1},

it is easy to obtain the multidimensional explicit formula:

E[exp(i Σ_{k=1}^n α_k W_k)] = (ch(Σ_{k=1}^n α_k) + (Σ_{k=1}^n |α_k|) (sh(Σ_{k=1}^n α_k))/(Σ_{k=1}^n α_k))^{−1}.   (7.6)

Formula (7.6) shows clearly, in an interesting manner, that each of the W_k's is a Cauchy variable with parameter 1 (which may be derived directly, or considered as a particular case of the first formula in Chapter 3), and that these Cauchy variables are stochastically dependent.

The following asymptotic residue theorem may now be understood as a global summary of the preceding results.

Theorem 7.1 1) Let f be holomorphic in C \ {z^1, ..., z^n}, and let Γ be an open, relatively compact set such that: {z^1, ..., z^n} ⊂ Γ. Then, one has:

(2/log t) ∫_0^t f(Z_s) 1_Γ(Z_s) dZ_s →(law), t→∞ Σ_{j=1}^n Res(f, z^j)(L_T(U) + i W_j^−).
2) If, moreover, f is holomorphic at infinity, and lim_{z→∞} f(z) = 0, then:

(2/log t) ∫_0^t f(Z_s) dZ_s →(law), t→∞ Σ_{j=1}^n Res(f, z^j)(L_T(U) + i W_j^−) + Res(f, ∞)(L_T(U) − 1 + i W^+).

7.2 Windings of BM in IR³

We define the winding number θ_t^D of (B_u, u ≤ t), a 3-dimensional BM, around an oriented straight line D, as the winding number of the projection of B on a plane orthogonal to D. Consider B ≡ (X, Y, Z) a Brownian motion in IR³, such that B_0 ∉ D* ≡ {x = y = 0}; then B shall almost surely never visit D*, so that the preceding results apply. Consequently, if D_1, ..., D_n are parallel, the preceding results apply as well, to the projection of B on a common orthogonal plane.

If D and D' are not parallel, then:

(1/log t) θ_t^D  and  (1/log t) θ_t^{D'}

are asymptotically independent, since both winding numbers are obtained, to the order of log t, by only considering the amount of winding made by (B_u, u ≤ t) as it wanders within cones of revolution with axes D, resp. D', the aperture of which we can choose as small as we wish; if D and D' are not parallel, these cones may be taken to be disjoint (except possibly for a common vertex).

This assertion is an easy consequence of the more precise following statement. To a given Borel function f : IR+ → IR+, we associate the volume of revolution:

Γ^f ≡ {(x, y, z) : (x² + y²)^{1/2} ≤ f(z)},

and we define:

θ_t^f = ∫_0^t dθ_s 1_{(B_s ∈ Γ^f)}.
We have the following

Theorem 7.2 If (log f(λ))/(log λ) →(λ→∞) a, then:

(2 θ_t^f)/(log t) →(law), t→∞ ∫_0^σ dγ_u 1_{(β_u ≤ a S_u)},

where β and γ are two independent real-valued Brownian motions, S_u = sup_{s≤u} β_s, and σ = inf{u : β_u = 1}.

More generally, if f_1, f_2, ..., f_k are k functions such that:

(log f_j(λ))/(log λ) →(λ→∞) a_j,  1 ≤ j ≤ k,

then the above convergences in law for the θ^{f_j} take place jointly, and the joint limit law is that of the vector:

(∫_0^σ dγ_s 1_{(β_s ≤ a_j S_s)}, 1 ≤ j ≤ k).

In particular, the preceding assertion about cones may be understood as a particular case of the following consequence of Theorem 7.2: if, with the notation of Theorem 7.2, a function f satisfies: a ≥ 1, then:

(1/log t)(θ_t^f − θ_t) →(P) 0.

Now, with the help of Theorem 7.2, we are able to present a global statement for asymptotic results relative to certain functionals of Brownian motion in IR³, in the form of the following

General principle: The limiting laws of winding numbers and, more generally, of Brownian functionals in different directions of IR³, take place jointly and independently, and, in any direction, they are given by the study in the plane, as described in the above paragraph 7.1.
7.3 Windings of independent planar BM's around each other

The origin of the study presented in this paragraph is a question of Mitchell Berger concerning solar flares (for more details, see Berger–Roberts [5]). Let Z^1, Z^2, ..., Z^n be n independent planar BM's, starting from n different points z_1, ..., z_n. Then, for each i ≠ j, B^{i,j}_t ≡ (1/√2)(Z^i_t − Z^j_t) is a planar BM, starting from (1/√2)(z_i − z_j) ≠ 0, which, therefore, shall almost surely never visit 0; in particular, we have:

P(∃ t ≥ 0 : Z^i_t = Z^j_t) = 0, for i ≠ j.

Thus, we may define (θ^{i,j}_t, t ≥ 0) as the winding number of B^{i,j} around 0, with 1 ≤ i < j ≤ n, and ask for the asymptotic law of these different winding numbers, taken all together. This is a dual situation to the situation considered in paragraph 7.1 in that, instead of one BM and n points, we consider n BM's and one point. We remark that, now, the processes (B^{i,j}, 1 ≤ i < j ≤ n) are not independent. Nonetheless, we may prove the following result:

(2/log t)(θ^{i,j}_t, 1 ≤ i < j ≤ n) →(law), t→∞ (C^{i,j}, 1 ≤ i < j ≤ n),   (7.7)

where the C^{i,j}'s are independent Cauchy variables, with parameter 1. The asymptotic result (7.7) shall appear in the next paragraph as a particular case.

7.4 A unified picture of windings

The aim of this paragraph is to present a general set-up for which the studies made in paragraphs 7.1, 7.2 and 7.3 may be understood as particular cases. Such a unification is made possible by considering: B^1, B^2, ..., B^m,
m planar Brownian motions with respect to the same filtration, linearly correlated, in the following sense: for any p, q ≤ m, there exists a correlation matrix A^{p,q} between B^p and B^q such that: for every u, v ∈ IR²,

((u, B^p_t)(v, B^q_t) − (u, A^{p,q} v) t ; t ≥ 0)

is a martingale. (Here, (·, ·) denotes the scalar product in IR².)

If a correlation matrix A^{p,q} is orthogonal, then B^p is obtained from B^q by an orthogonal transformation and, possibly, a translation; therefore, we may assume that none of the A^{p,q}'s is orthogonal. We may now state the following general result.

Theorem 7.3 Let θ^p_t be the winding number of (B^p_s, s ≤ t) around z_0, where B^p_0 ≠ z_0, for every p. We assume that, for all p ≠ q, A^{p,q} is not an orthogonal matrix. Then:

(2/log t)(θ^p_t, p ≤ m) →(law), t→∞ (C^p, p ≤ m),

where the variables (C^p, p ≤ m) are independent Cauchy variables, with parameter 1.

The asymptotic result (7.7) appears indeed as a particular case of Theorem 7.3, since, if: B_t = (1/√2)(Z^1_t − Z^j_t) and B'_t = (1/√2)(Z^1_t − Z^k_t), for k ≠ j, then the corresponding correlation matrix is A = (1/2) Id, which is not an orthogonal matrix! In other cases, when the two pairs of indices are disjoint, A^{p,q} = 0.

It is natural to consider the more general situation in which we study the winding numbers of m linearly correlated Brownian motions around n points (z^1, ..., z^n); we write: θ^p = (θ_t^{p,z^j}, j ≤ n). This allows to consider the asymptotic problem in the following form.

Theorem 7.4 We assume that, for all (p, q), p ≠ q, A^{p,q} is not an orthogonal matrix. Then:

(2/log t)(θ^p, p ≤ m) →(law), t→∞ (ξ^p, p ≤ m),

where the random vectors (ξ^p)_{p≤m} are independent, and, for every p: ξ^p =(law) (W_1, ..., W_n), the law of which has been described precisely in paragraph 7.1.
We now give a sketch of the main arguments in the proof of Theorem 7.4, which has a lot to do with the normalization of ∫_0^t ds/|Z_s|² by (log t)² (and not (log t), as in the Kallianpur–Robbins limit law) to obtain a limit in law. The elementary, but nonetheless crucial, fact on which the proof relies is presented in the following

Lemma 7.1 Let G and G' be two jointly Gaussian, centered, variables in IR², such that: for every u ∈ IR², and every v ∈ IR²,

E[(u, G)²] = |u|² = E[(u, G')²], and E[(u, G)(v, G')] = (u, Av),

where A is non-orthogonal. Then:

E[1/(|G|^p |G'|^q)] < ∞, as soon as: p < 3/2, q < 3/2.

Remark: This integrability result should be compared with the fact that E[1/|G|²] = ∞.

7.5 The asymptotic distribution of the self-linking number of BM in IR³

Gauss has defined the linking number of two closed curves in IR³ which do not intersect each other. We should like to consider such a number for two Brownian curves; but two independent BM's in IR³ almost surely intersect each other. Nonetheless, we can define some approximation to Gauss' linking number by excluding the pairs of instants (u, v) at which the two BM's are closer than 1/n to each other, and then let n go to infinity. It may be expected that the asymptotic study shall involve some quantity related to the intersections of the two BM's, and we shall show that this is indeed the case.

We remark that it is also possible to define such linking number approximations for only one BM in IR³. Thus, we consider:

I_n(t) = ∫_0^t ∫_0^u (dB_u, dB_s, (B_u − B_s)/|B_u − B_s|³) 1_{(|B_u − B_s| ≥ 1/n)}
and:

J_n(s, t) = ∫_0^s ∫_0^t (dB_u, dB'_v, (B_u − B'_v)/|B_u − B'_v|³) 1_{(|B_u − B'_v| ≥ 1/n)},

where (a, b, c) = a · (b × c) denotes the mixed product of the three vectors a, b, c in IR³.

We have to explain the meaning given to each of the integrals:

a) in the case of J_n, since B and B' are independent, there is no difficulty;

b) in the case of I_n, we either use the fact that (B_s, s ≤ u) is a semimartingale in the original filtration of B, enlarged by the variable B_u; or, we first fix x ∈ IR³, we define the integral with respect to dB_s for every x (= B_u), and, having defined these integrals measurably in x, we replace x by B_u, and then integrate with respect to dB_u. Both operations give the same quantity.

We now state the asymptotic result for I_n.

Theorem 7.5 We have:

((B_t, (1/n) I_n(t)), t ≥ 0) →(law), n→∞ ((B_t, c β_t), t ≥ 0),

where (β_t) is a real-valued BM independent of B, and c is a universal constant.

To state the asymptotic result for J_n, we need to present the notion of intersection local times: these consist in the a.s. unique family (α(x; s, t); x ∈ IR³, s, t ≥ 0) of occupation densities, which is jointly continuous in (x, s, t), such that: for every Borel function f : IR³ → IR+,

∫_0^s du ∫_0^t dv f(B_u − B'_v) = ∫_{IR³} dx f(x) α(x; s, t).
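The occupation-density identity above has a one-dimensional analogue that is easy to check numerically: for a walk approximation of BM, the time average of f along the path equals the space integral of f against the histogram-estimated occupation density. The sketch below is ours (names and parameters are illustrative, not from the text).

```python
import numpy as np

def occupation_density_check(n_steps=200_000, bins=400, seed=0):
    """1-d analogue of the occupation-density identity:
    int_0^1 f(B_u) du  =  int f(x) alpha(x) dx,
    with alpha estimated by a histogram of the discretized path."""
    rng = np.random.default_rng(seed)
    b = np.cumsum(rng.standard_normal(n_steps)) / np.sqrt(n_steps)
    f = lambda x: np.exp(-x ** 2)        # any bounded test function
    lhs = f(b).mean()                    # time average ~ int_0^1 f(B_u) du
    hist, edges = np.histogram(b, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    alpha = hist / n_steps               # occupation measure of each bin
    rhs = float(np.sum(f(centers) * alpha))
    return lhs, rhs
```

The two numbers agree up to the histogram binning error, which shrinks as the number of bins grows.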
(The kernel α(x; du, dv) is then a random measure supported by {(u, v) : B_u − B'_v = x}.)

The asymptotic result for J_n is the following

Theorem 7.6 We have:

((B_s, B'_t, (1/n) J_n(s, t)); s, t ≥ 0) →(law), n→∞ ((B_s, B'_t, c IB_α(s, t)); s, t ≥ 0),

where c is a universal constant, and, conditionally on (B, B'), the process (IB_α(s, t); s, t ≥ 0) is a centered Gaussian process with covariance:

E[IB_α(s, t) IB_α(s', t') | B, B'] = α(0; s ∧ s', t ∧ t').

We now end up this chapter by giving a sketch of the proof of Theorem 7.5.

In a first step, we consider, for fixed u, the sequence:

θ_n(u) = ∫_0^u dB_s × (B_u − B_s)/|B_u − B_s|³ 1_{(|B_u − B_s| ≥ 1/n)}.

It is then easy to show that:

(1/n) θ_n(u) →(law), n→∞ θ_∞ ≡ ∫_0^∞ dB_s × B_s/|B_s|³ 1_{(|B_s| ≥ 1)},   (7.8)

and the limit variable θ_∞ has moments of all orders, as follows from Exercise 7.1 below.

In a second step, we remark that, for u < v:

(1/n)(θ_n(u), θ_n(v)) →(law), n→∞ (θ_∞, θ̂_∞),   (7.9)

where θ_∞ and θ̂_∞ are two independent copies. The convergence in law (7.9) follows from the independence of the increments of B, since, in the stochastic integral which defines θ_n(u), only times s which may be chosen arbitrarily close to u, and smaller than u, will make some contribution to the limit in law (7.8).
In the final step, we prove the existence of a real-valued Brownian motion (γ^{(n)}(u), u ≥ 0), such that:

(1/n) I_n(t) = γ^{(n)}((1/n²) ∫_0^t ds |θ_n(s)|²),

and it is then easy to show, thanks to the results obtained in the second step, that:

(1/n²) ∫_0^t ds |θ_n(s)|² →(L²), n→∞ c² t.

This convergence in L² follows from the convergence of the first, resp. second, moment of the left-hand side to: c² t, resp. (c² t)². This allows to end up the proof of the Theorem.

Exercise 7.1 Let (B_t, t ≥ 0) be a 3-dimensional Brownian motion starting from 0.

1. Prove that:

∫_0^∞ dt/|B_t|⁴ 1_{(|B_t| ≥ 1)} =(law) T*_1 ≡ inf{u : |β_u| = 1},   (7.10)

where (β_u, u ≥ 0) is a one-dimensional BM starting from 0.

Hint: Show that one may assume: |B_0| = 1; then, prove the existence of a one-dimensional Brownian motion (γ_u, u ≥ 0), starting from 1, such that:

1/|B_t| = γ(∫_0^t du/|B_u|⁴), t ≥ 0.

2. Conclude that θ_∞ (defined in (7.8)) admits moments of all orders.

Hint: Apply the Burkholder–Gundy inequalities.

Exercise 7.2 (We use the same notation as in Exercise 7.1.) Prove the identity in law (7.10) as a consequence of the Ray–Knight theorem (RK2), presented in paragraph 3.1, in the form:

(ℓ^a_∞(R₃), a ≥ 0) =(law) (R₂²(a), a ≥ 0),
and of the invariance by time-inversion of the law of (R₂(a), a ≥ 0).

Exercise 7.3 Let (B̃_t, t ≥ 0) be a 2-dimensional BM starting from 0, and (β_t, t ≥ 0) be a one-dimensional BM starting from 0.

1. Prove the following identities in law:

(∫_0^1 ds/|B̃_s|)² =(law) 4 (∫_0^1 ds |B̃_s|²)^{−1}   (a)
=(law) 4/T*_1   (b)
=(law) 4 (sup_{s≤1} |β_s|)².   (c)

In particular, one has:

∫_0^1 ds/|B̃_s| =(law) 2 sup_{s≤1} |β_s|.   (7.11)

2. Define S = inf{u : ∫_0^u ds/|B̃_s| > 1}. Deduce from (7.11) that:

S =(law) T*_1/4

and, consequently:

E[exp(−(λ²/2) S)] = (ch(λ/2))^{−1}.   (7.12)

Hints: To prove (a), use the scaling property; to prove (b), represent (|B̃_s|², s ≥ 0) as another 2-dimensional Bessel process, time-changed; to prove (c), use, e.g., the Ray–Knight theorem on Brownian local times.

Comments on Chapter 7

The proofs of the results presented in paragraph 7.1 are found in Pitman–Yor ([75], [76]), those in paragraph 7.2 are found in Le Gall–Yor [61], and the results in paragraphs 7.3 and 7.4 are taken from Yor [100].
The asymptotic study of windings for random walks has been made by Belisle [3] (see also Belisle–Faraway [4]); there are also many publications on this topic in the physics literature (see, e.g., Rudnick–Hu [82], and the reference to Stroock–Varadhan–Papanicolaou in Chapter XIII of Revuz–Yor [81]).

The proof of Theorem 7.5 constitutes a good example that the asymptotic study of some double integrals with respect to BM may, in a number of cases, be reduced to a careful study of simple integrals.
Chapter 8
Some extensions of Paul Lévy's arc sine law for BM

In his 1939 paper: "Sur certains processus stochastiques homogènes", Paul Lévy [64] proves that both Brownian variables:

A⁺ ≡ ∫_0^1 ds 1_{(B_s > 0)}  and  g = sup{t < 1 : B_t = 0}

are arcsine distributed. Over the years, these results have been extended in many directions; for a review of extensions developed up to 1988, see Bingham–Doney [20]. In this Chapter, we present further results, which extend Lévy's computation in the three following directions, in which (B_t, t ≥ 0) is replaced respectively by:

i) a symmetrized Bessel process with dimension 0 < δ < 2;

ii) a Walsh Brownian motion, that is a process (X_t, t ≥ 0) in the plane which takes values in a finite number of rays (≡ half-lines), all meeting at 0, which, while away from 0, behaves as a Brownian motion, and, when it meets 0, chooses a ray with equal probability;

iii) a singularly perturbed reflecting Brownian motion, that is (|B_t| − µ ℓ_t, t ≥ 0), where (ℓ_t, t ≥ 0) is the local time of (B_t, t ≥ 0) at 0.

A posteriori, a justification of these extensions may be that the results which one obtains in each of these directions are particularly simple, this being
due partly to the fact that, for each of the models, the strong Markov property and the scaling property are available. More importantly, these three models may be considered as testing grounds for the use and development of the main methods which have been successful in recent years in reproving Lévy's arc sine law, essentially: excursion theory and stochastic calculus (more precisely, Tanaka's formula). In the setup of (iii), we rely upon the strong Markov property of the 2-dimensional process: {B_t, ℓ_t ; t ≥ 0}.

Finally, one remarkable feature in this study needs to be underlined: although the local time at 0 of, say, Brownian motion, does not appear a priori in the problem studied here, that is: determining the law of A⁺, in fact, it plays an essential role, and a main purpose of this chapter is to clarify this role.

8.1 Some notation

Throughout this chapter, we shall use the following notation: Z_a, resp. Z_{a,b}, denotes a gamma variable with parameter a, resp. a beta variable with parameters (a, b), so that:

P(Z_a ∈ dt) = dt t^{a−1} e^{−t}/Γ(a)   (t > 0)

and

P(Z_{a,b} ∈ dt) = dt t^{a−1}(1 − t)^{b−1}/B(a, b)   (0 < t < 1).

We recall the well-known algebraic relations between the laws of the beta and gamma variables:

Z_a =(law) Z_{a,b} Z_{a+b}  and  Z_{a,b+c} =(law) Z_{a,b} Z_{a+b,c},

where, in both identities in law, the right-hand sides feature independent r.v.'s.

We shall also use the notation T_{(α)}, with 0 < α < 1, to denote a one-sided stable (α) random variable, the law of which may be characterized by:

E[exp(−λ T_{(α)})] = exp(−λ^α),   λ ≥ 0.

(It may be worth noting that 2 T_{(1/2)}, and not T_{(1/2)}, is distributed as the first hitting time of 1 by a one-dimensional BM starting from 0.)
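The beta-gamma algebra above can be verified exactly through moments, since E[Z_p^k] = Γ(p+k)/Γ(p) and E[Z_{a,b}^k] = B(a+k, b)/B(a, b), and a beta or gamma law on a bounded setting is determined by its moments. The following sketch (function names are ours) checks both identities this way:

```python
from math import gamma

def beta_fn(x, y):
    # Euler beta function B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

def gamma_moment(p, k):
    # E[Z_p^k] for a gamma variable with parameter p
    return gamma(p + k) / gamma(p)

def beta_moment(a, b, k):
    # E[Z_{a,b}^k] for a beta variable with parameters (a, b)
    return beta_fn(a + k, b) / beta_fn(a, b)

a, b, c = 0.5, 1.25, 0.75
for k in range(1, 6):
    # Z_a = Z_{a,b} * Z_{a+b}, with independent factors
    assert abs(gamma_moment(a, k) - beta_moment(a, b, k) * gamma_moment(a + b, k)) < 1e-12
    # Z_{a,b+c} = Z_{a,b} * Z_{a+b,c}, with independent factors
    assert abs(beta_moment(a, b + c, k) - beta_moment(a, b, k) * beta_moment(a + b, c, k)) < 1e-12
```

Independence of the factors is what turns the moment of the product into the product of moments, so the check is a faithful restatement of the two identities.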
8.2 A list of results

(8.2.1) As was already recalled, Lévy (1939) proved that A⁺ and g are arcsine distributed, that is: they have the same law as N²/(N² + N'²), where N and N' are two centered, independent Gaussian variables with variance 1. Hence, we see that A⁺ and g are distributed as:

T_{(1/2)}/(T_{(1/2)} + T'_{(1/2)}),   (8.1)

where T_{(1/2)} and T'_{(1/2)} are two independent copies. In the next paragraph, we shall present some proofs which exhibit A⁺ in the form (8.1).

For the moment, here is a quick proof that g is arcsine distributed: let u ≤ 1; then: (g < u) = (d_u > 1), where:

d_u = inf{t ≥ u : B_t = 0} ≡ u + inf{v > 0 : B_{v+u} − B_u = −B_u} =(law) u + B_u² σ =(law) u(1 + B_1² σ),

with: σ = inf{t : β_t = 1}, and β is a BM, independent of B_u. Hence, we have shown:

g =(law) 1/(1 + B_1² σ) =(law) 1/(1 + B_1²/N²) = N²/(N² + B_1²), since: σ =(law) 1/N²,

which gives the result.

(8.2.2) If we replace Brownian motion by a symmetrized Bessel process of dimension 0 < δ = 2(1 − α) < 2, then the quantities A⁺_{(α)} and g_{(α)}, the meaning of which is self-evident, no longer have a common distribution if α ≠ 1/2. In fact, Dynkin [38] showed that:

g_{(α)} =(law) Z_{α,1−α},

whereas Barlow–Pitman–Yor [2] proved that:

A⁺_{(α)} =(law) T_{(α)}/(T_{(α)} + T'_{(α)}),   (8.2)

where T_{(α)} and T'_{(α)} are two independent copies.

(8.2.3) In [2], it was also shown that Lévy's result for A⁺ admits the following multivariate extension: if we consider (as described informally in the introduction to this chapter) a Walsh Brownian motion (Z_s, s ≥ 0) living on n rays (I_i, 1 ≤ i ≤ n), and we denote:
A^{(i)} = ∫_0^1 ds 1_{(Z_s ∈ I_i)},

then:

(A^{(1)}, ..., A^{(n)}) =(law) (T^{(i)}/Σ_{j=1}^n T^{(j)}, 1 ≤ i ≤ n),   (8.3)

where (T^{(i)}, 1 ≤ i ≤ n) are n independent one-sided stable (1/2) random variables.

However, it is possible to give a common extension of (8.2) and (8.3), by considering a process (Z_s, s ≥ 0) which, on each of the rays, behaves like a Bessel process with dimension δ = 2(1 − α), and, when arriving at 0, chooses its ray with equal probability. Then, using a self-evident notation, we have:

(A^{(1)}_{(α)}, ..., A^{(n)}_{(α)}) =(law) (T^{(i)}_{(α)}/Σ_{j=1}^n T^{(j)}_{(α)}, 1 ≤ i ≤ n).   (8.4)

In the sequel, we shall be more concerned with yet another family of extensions of Lévy's results, which have been obtained by F. Petit in her thesis [70].

Theorem 8.1 For any µ > 0, we have:

∫_0^1 ds 1_{(|B_s| ≤ µ ℓ_s)} =(law) Z_{1/2, 1/2µ}   (8.5)

and

∫_0^g ds 1_{(|B_s| ≤ µ ℓ_s)} =(law) Z_{1/2, 1/2 + 1/2µ}.   (8.6)

In the sequel, we shall refer to the identities in law (8.5) and (8.6) as to F. Petit's first, resp. second, result.

With the help of Lévy's identity in law:

(S_t − B_t, S_t ; t ≥ 0) =(law) (|B_t|, ℓ_t ; t ≥ 0)

and Pitman's theorem ([71]):

(2S_t − B_t, S_t ; t ≥ 0) =(law) (R_t, J_t ; t ≥ 0),

where (R_t, t ≥ 0) is a 3-dimensional Bessel process starting from 0, and J_t = inf_{s≥t} R_s, we may translate (8.5) in the following terms:

∫_0^1 ds 1_{(B_s ≥ (1−µ) S_s)} =(law) ∫_0^1 ds 1_{(R_s ≤ (1+µ) J_s)} =(law) Z_{1/2, 1/2µ},   (8.7)

which shows, in particular, that for µ = 1, the result agrees with Lévy's arc sine law.

Using the representation of the standard Brownian bridge (b(u), u ≤ 1) as:

((1/√g) B_{gu}, u ≤ 1),

and the independence of this process from g, we may deduce from (8.6) the following

Corollary 8.1 Let (b(u), u ≤ 1) be a standard Brownian bridge, and (λ_u, u ≤ 1) be its local time at 0. Then, we have:

∫_0^1 ds 1_{(|b(s)| ≤ µ λ_s)} =(law) Z_{1, 1/2µ}.   (8.8)

In particular, in the case µ = 1/2, we obtain:

∫_0^1 ds 1_{(|b(s)| ≤ (1/2) λ_s)} =(law) ∫_0^1 ds 1_{(b(s) + (1/2)λ(s) ≤ (1/2)λ(1))} =(law) U,   (8.9)

where U is uniformly distributed on [0, 1].

Using now the following identity in law (8.10) between the Brownian bridge (b(u), u ≤ 1) and the Brownian meander:

m(u) ≡ (1/√(1 − g)) |B_{g+u(1−g)}|, u ≤ 1,
namely:

((m(s), j(s) ≡ inf_{s≤u≤1} m(u)) ; s ≤ 1) =(law) ((|b(s)| + λ(s), λ(s)) ; s ≤ 1),   (8.10)

which is found in Biane–Yor [18], and Bertoin–Pitman [11], we obtain the

Corollary 8.2 Let (m(s), s ≤ 1) denote the Brownian meander. Then we have:

∫_0^1 ds 1_{(m(s) + (µ−1) j_s ≤ µ m_1)} =(law) Z_{1, 1/2µ}.

In particular, by taking µ = 1/2 and µ = 1, we obtain:

∫_0^1 ds 1_{(m(s) − (1/2) j(s) ≤ (1/2) m(1))} =(law) U,

and

P(∫_0^1 ds 1_{(m(s) ≥ m(1))} ∈ dt) = dt/(2√t).

Proof: Together with the identity in law (8.10), we use the symmetry of the law of the Brownian bridge by time reversal, i.e.:

(b(u), u ≤ 1) =(law) (b(1 − u), u ≤ 1).

We then obtain:

∫_0^1 ds 1_{(|b(s)| ≤ µ λ_s)} =(law) ∫_0^1 ds 1_{(|b(s)| ≤ µ(λ_1 − λ_s))} =(law) ∫_0^1 ds 1_{(m(s) + (µ−1) j_s < µ m_1)},

and the desired results follow from Corollary 8.1.

8.3 A discussion of methods; some proofs

(8.3.1) We first show how to prove:
A⁺_1 ≡ ∫_0^1 ds 1_{(B_s > 0)} =(law) Z_{1/2, 1/2},

by using jointly the scaling property of Brownian motion, and excursion theory. Set:

A⁺_t = ∫_0^t ds 1_{(B_s > 0)}  and  A⁻_t = ∫_0^t ds 1_{(B_s < 0)}  (t ≥ 0).

We have, for every t, and u: (A⁺_t > u) = (t > α⁺_u), where α⁺_u = inf{s : A⁺_s > u}; consequently, by scaling, we obtain:

A⁺_1 =(law) u/α⁺_u, for every fixed u.   (8.11)

From the trivial identity: t = A⁺_t + A⁻_t, it follows: α⁺_u = u + A⁻_{α⁺_u}. Now, we write: A⁻_{α⁺_u} = A⁻_{τ(ℓ_{α⁺_u})}, with τ(s) = inf{v : ℓ_v > s}, and it is a consequence of excursion theory that the two processes (A⁺_{τ(t)}, t ≥ 0) and (A⁻_{τ(t)}, t ≥ 0) are independent. Hence, again by scaling, we deduce from the previous equalities that, for fixed u:

α⁺_u =(law) u (1 + A⁻_{τ(1)}/A⁺_{τ(1)}).   (8.12)

Putting together (8.11) and (8.12), we obtain the representation:

A⁺_1 =(law) A⁺_{τ(1)}/(A⁺_{τ(1)} + A⁻_{τ(1)}).

Now, from (RK1) in paragraph 3.1, we know that:

(A⁺_{τ(1)}, A⁻_{τ(1)}) =(law) (1/2)(T_{(1/2)}, T'_{(1/2)}),
where T_{(1/2)} and T'_{(1/2)} are two independent copies; hence: A⁺_1 =(law) Z_{1/2, 1/2}.

(8.3.2) It may also be interesting to avoid using the scaling property, and to rely only on excursion theory arguments, so that the method may be used for diffusions which do not possess the scaling property; in this direction, see some extensions of the arc sine law to real-valued diffusions by A. Truman and D. Williams ([86], [87]).

Recall that, from the master formulae of excursion theory (see Proposition 3.2), we have, for every continuous, positive, additive functional (A_t, t ≥ 0):

E_0[exp(−λ A_{S_θ})] = (θ²/2) (∫_0^∞ ds E_0[exp(−λ A_{τ_s} − (θ²/2) τ_s)]) (∫_{−∞}^{+∞} da E_a[exp(−λ A_{T_0} − (θ²/2) T_0)]).

Applying this formula to A = A⁺, we remark that:

on one hand:

E_0[exp(−λ A⁺_{τ_s} − (θ²/2) τ_s)] = E_0[exp(−(λ + θ²/2) A⁺_{τ_s})] E_0[exp(−(θ²/2) A⁻_{τ_s})] = exp(−(s/2)(√(2λ + θ²) + θ));

on the other hand:

E_a[exp(−λ A⁺_{T_0} − (θ²/2) T_0)] = E_a[exp(−(λ + θ²/2) T_0)] = exp(−a √(2λ + θ²)), if a > 0,

E_a[exp(−λ A⁺_{T_0} − (θ²/2) T_0)] = E_a[exp(−(θ²/2) T_0)] = exp(−|a| θ), if a < 0.

Consequently, we obtain:

E_0[exp(−λ A⁺_{S_θ})] = (θ²/(√(2λ + θ²) + θ)) (1/θ + 1/√(2λ + θ²)),

from which, by inversion of the Laplace transform in θ, one is able to deduce, at least in theory, that:

A⁺_t =(law) t Z_{1/2, 1/2}.
Remark: This approach is the excursion theory variant of the Feynman–Kac approach; see, for example, Itô–McKean ([50], p. 57–58).

(8.3.3) It is not difficult, with the help of the master formulae of excursion theory (see Proposition 3.2), to enlarge the scope of the above method and, using the scaling property again, Barlow–Pitman–Yor [2] arrived at the following identity in law: for every t > 0 and s > 0:

(1/t)(A⁺_t, A⁻_t) =(law) (1/τ_s)(A⁺_{τ_s}, A⁻_{τ_s}).

Hence, for a certain class of random variables T, the pair (1/T)(A⁺_T, A⁻_T) has a distribution which does not depend on T; besides the cases T = t and T = τ_s, other interesting examples are: T ≡ α⁺_t ≡ inf{u : A⁺_u > t}, and T = S_θ (by scaling, which enables to use the master formula of excursion theory). Pitman–Yor [77] give a more complete explanation of this fact.

By analogy, F. Petit's original results (Theorem 8.1 above), together with the arithmetic of beta-gamma laws, led us to think that the four pairs of random variables:

(1/t)(A^{µ,−}_t, A^{µ,+}_t),   (8.13)

(1/S_θ)(A^{µ,−}_{S_θ}, A^{µ,+}_{S_θ}),   (8.14)

(1/α^{µ,−}_s)(A^{µ,−}_{α^{µ,−}_s}, A^{µ,+}_{α^{µ,−}_s}),   (8.15)

(1/τ^µ_t)(A^{µ,−}_{τ^µ_t}, A^{µ,+}_{τ^µ_t}),   (8.16)

may have the same distribution. This is indeed true, as we shall see partly in the sequel. It may be worth, to give a better understanding of the identity in law between (8.15) and (8.16), to present this identity in the following equivalent way.

Theorem 8.2 1) The identity in law:

((1/8)(ℓ^µ_{α_1})², α_1 − 1) =(law) (Z_{1/2}, Z_{1/2µ}/Z_{1/2})   (8.17)

holds, where, on the right-hand side, Z_{1/2} and Z_{1/2µ} are independent. (Here, and in the following, we have written, for clarity, α_1 for α^{µ,−}_1.)

2) Consequently, using the scaling property, we have:

A^{µ,−}_1 =(law) 1/α_1 =(law) 1/(1 + Z_{1/2µ}/Z_{1/2}) =(law) Z_{1/2, 1/2µ}.

Comment: The second statement of this Theorem is deduced immediately from the first one.

(8.3.4) To end up our discussion of methods, we now mention that Knight's theorem about continuous orthogonal martingales may replace the excursion argument to prove the independence of the processes (A⁺_{τ_t}, t ≥ 0) and (A⁻_{τ_t}, t ≥ 0). To see this, we remark that Tanaka's formula and Knight's theorem, used jointly, imply:

B⁺_t = −β^{(+)}_{A⁺_t} + (1/2) ℓ_t  and  B⁻_t = −β^{(−)}_{A⁻_t} + (1/2) ℓ_t,

with β^{(+)} and β^{(−)} two independent BM's, and:

A^±_{τ_t} = inf{u : β^{(±)}_u = (1/2) t}.

In the last paragraph 8.5 of this Chapter, we shall see how to modify this argument when (B_t) is replaced by (|B_t| − µ ℓ_t, t ≥ 0). (Here, and in the following, (ℓ^µ_t, t ≥ 0) denotes the (semimartingale) local time at 0 of (|B_t| − µ ℓ_t, t ≥ 0), (τ^µ_t, t ≥ 0) is the inverse of (ℓ^µ_t, t ≥ 0), and (α^{µ,−}_t, t ≥ 0) is the inverse of (A^{µ,−}_t, t ≥ 0).)
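F. Petit's first result (8.5) lends itself to a quick Monte Carlo check through the translated form (8.7), which only involves a BM and its running maximum (the function name, parameters and tolerances below are ours; for µ = 1 the expectation reduces to Lévy's arcsine mean 1/2):

```python
import numpy as np

def petit_occupation(mu, n_paths=2000, n_steps=2000, seed=0):
    """Walk approximation of the occupation time in (8.7):
    int_0^1 1_(B_s >= (1 - mu) S_s) ds, one value per simulated path.
    By Levy's identity this has the law of int_0^1 1_(|B_s| <= mu l_s) ds,
    whose mean should be E[Z_{1/2, 1/(2 mu)}] = mu / (1 + mu)."""
    rng = np.random.default_rng(seed)
    steps = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
    b = np.cumsum(steps, axis=1)
    s = np.maximum.accumulate(b, axis=1)   # running maximum S
    return (b >= (1.0 - mu) * s).mean(axis=1)
```

For moderate path counts the empirical mean of `petit_occupation(mu)` sits close to µ/(1+µ), up to Monte Carlo noise and discretization bias.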
8.4 An excursion theory approach to F. Petit's results

In this paragraph, we may envision F. Petit's results as consequences of the extended RK theorems.

(8.4.1) As we remarked in paragraph 8.3, F. Petit's first result:

A^{µ,−}_1 ≡ ∫_0^1 ds 1_{(|B_s| ≤ µ ℓ_s)} =(law) Z_{1/2, 1/2µ}

is equivalent to (see formula (8.11)):

1/α^{µ,−}_1 =(law) Z_{1/2, 1/2µ}.   (8.18)

To simplify notation, we shall simply write, in the sequel, A and α for, respectively: A^{µ,−} and α^{µ,−}. To prove (8.5), or equivalently (8.18), we shall compute the following quantity:

E[exp(−(λ²/2) A_{S_θ}) φ(|B_{S_θ}|, ℓ_{S_θ})] ≡ (θ²/2) ∫_0^∞ dt e^{−θ²t/2} E[exp(−(λ²/2) A_t) φ(|B_t|, ℓ_t)],   (8.19)

where φ : IR+ × IR+ → IR+ is a Borel function, and S_θ denotes an independent exponential time with parameter θ²/2. We are able to compute this quantity thanks to the extensions of the RK theorems obtained in Chapter 3 (to be more precise, see the computations made in subparagraph (3.4.2)).

However, before we embark precisely in this computation, it may be of some interest to play a little more with the scaling property; this leads us, at no cost, to the following reinforcement of (8.5).

Theorem 8.3 Let Z = Z_{1/2, 1/2µ}. Then, we have the following:

1) P(|B_1| ≤ µ ℓ_1) = E(Z) = µ/(1 + µ).

2) Conditioned on the set Γ_µ ≡ (|B_1| ≤ µ ℓ_1), the variable A_1 is distributed as Z_{3/2, 1/2µ}.

3) Conditioned on Γ_µ^c, A_1 is distributed as: Z_{1/2, 1 + 1/2µ}.
4) A_1 is distributed as Z.

These four statements may also be presented in the equivalent form:

A_1 =(law) Z  and  P(Γ_µ | A_1 = a) = a.

Remark: In fact, it is not difficult to prove the more general identity:

P(Γ_µ | A_1 = a, ℓ_1) = a.

Proof of the Theorem:

i) These four statements may be deduced in an elementary way from the two identities:

E[1_{Γ_µ} exp(−α A_1)] = E[Z exp(−α Z)]   (8.20)

and

E[exp(−α A_1)] = E[exp(−α Z)],   (8.21)

which are valid for every α ≥ 0. The identity (8.21) is rephrasing F. Petit's result (8.5); so that, for the moment, it remains to prove (8.20).

ii) For this purpose, we shall consider the quantity (8.19), in which we take: φ(x, ℓ) = 1_{(x ≤ µℓ)}, using the identity in law between (8.13) and (8.15). We then obtain:

E[exp(−(λ²/2) A_{S_θ}) 1_{(|B_{S_θ}| ≤ µ ℓ_{S_θ})}]
= E[(θ²/2) ∫_0^∞ dA_t exp(−(1/2)(θ² t + λ² A_t))]
= (θ²/2) E[∫_0^∞ ds exp(−(1/2)(θ² α_s + λ² s))]   (by time changing)
= (θ²/2) E[∫_0^∞ ds exp(−(1/2)(θ² s α_1 + λ² s))]   (by scaling)
= (θ²/2) E[A_1 ∫_0^∞ du exp(−(1/2)(θ² u + λ² u A_1))]   (by scaling again, after the change of variables: s = A_1 u)
= E[A_1 exp(−(λ²/2) S_θ A_1)].

On the other hand, by using the scaling property once again:

E[exp(−(λ²/2) A_{S_θ}) 1_{(|B_{S_θ}| ≤ µ ℓ_{S_θ})}] = E[exp(−(λ²/2) S_θ A_1) 1_{(|B_1| ≤ µ ℓ_1)}].   (8.22)

Comparing now the two extreme terms of this sequence of equalities, and remarking that the resulting relation is true for every θ > 0, we obtain, thanks to the injectivity of the Laplace transform, that, for every α ≥ 0:

E[exp(−α A_1) 1_{(|B_1| ≤ µ ℓ_1)}] = E[A_1 exp(−α A_1)],   (8.23)

which proves (8.20), assuming F. Petit's result (8.5).

Remarks: 1) The first statement of the theorem, namely:
P(|B_1| ≤ µ ℓ_1) = µ/(1 + µ)

is an elementary consequence of the fact that, conditionally on R ≡ |B_1| + ℓ_1, the variable |B_1| is uniformly distributed on [0, R]. Hence, if U denotes a uniform r.v. on [0, 1], which is independent from R, we have:

P(|B_1| ≤ µ ℓ_1) = P(RU ≤ µ R(1 − U)) = P(U ≤ µ(1 − U)) = µ/(1 + µ).

2) Perhaps we should emphasize the fact that the obtention of (8.23) in part (ii) of the proof of the Theorem was done with the only use of the scaling property; in particular, no knowledge of F. Petit's results is needed whatsoever for this result.

(8.4.2) We now engage properly into the proof of (8.18), by computing explicitly the quantity:

γ_{θ,λ} ≡ E[exp(−(λ²/2) A_{S_θ}) 1_{(|B_{S_θ}| ≤ µ ℓ_{S_θ})}].   (8.24)

We first recall that, if we write: A_t = Ǎ_t + Â_t, where: Ǎ_t = A_{g_t} and Â_t = A_t − A_{g_t}, we have, as a consequence of the master formulae of excursion theory:

E[exp(−(λ²/2) Ǎ_{S_θ}) | ℓ_{S_θ} = s] = E_0[exp(−(λ²/2) A_{τ_s} − (θ²/2) τ_s)] e^{θs}   (8.25)

and

E[exp(−(λ²/2) Â_{S_θ}) | ℓ_{S_θ} = s, |B_{S_θ}| = a] = E_a[exp(−(λ²/2) A_{T_0} − (θ²/2) T_0)] e^{θa}.   (8.26)

Moreover, by denoting: b = µs, ν = √(λ² + θ²), and ξ = ν/θ, we have, from the extensions of the RK theorems obtained in Chapter 3 (see the computations made in subparagraph (3.4.2)):

E_0[exp(−(λ²/2) A_{τ_s} − (θ²/2) τ_s)] = (ch(νb) + ξ sh(νb))^{−1/µ}   (8.27)

and

E_a[exp(−(λ²/2) A_{T_0} − (θ²/2) T_0)] = (ch(ν(b − a)) + ξ sh(ν(b − a)))/(ch(νb) + ξ sh(νb)), for 0 ≤ a ≤ b.   (8.28)

Consequently, using moreover the fact that ℓ_{S_θ} and |B_{S_θ}| are independent and distributed as:

P(ℓ_{S_θ} ∈ ds) = θ e^{−θs} ds  and  P(|B_{S_θ}| ∈ da) = θ e^{−θa} da,

we obtain:

γ_{θ,λ} = (θ²/µ) ∫_0^∞ db ∫_0^b da (ch(ν(b − a)) + ξ sh(ν(b − a)))/(ch(νb) + ξ sh(νb))^{1 + 1/µ}.

Integrating with respect to da, and making the change of variables x = νb, this reduces, after some elementary computations, to:

1 − (ξ/µ) ∫_0^∞ dx/(ch x + ξ sh x)^{1 + 1/µ}.

On the other hand, we know, from (8.23), that the quantity γ_{θ,λ} is expressed in terms of E[A_1 exp(−(λ²/2) S_θ A_1)], the expression of which is obtained after some elementary change of variables. Hence, the above computations have led us to the formula:

E[(1 − A_1)/(A_1 + ξ²(1 − A_1))] = (1/ξµ) ∫_0^∞ dx/(ch x + ξ sh x)^{1 + 1/µ}.   (8.29)

We now make the change of variables: u = (th x)², in the right-hand side of (8.29), to obtain:

h(ξ) ≡ E[ξ(1 − A_1)/(A_1 + ξ²(1 − A_1))] = (1/2µ) ∫_0^1 du (1 − u)^{1/2µ − 1/2} u^{−1/2} (1 + ξ√u)^{−(1 + 1/µ)}.

We define r = (1/2) + (1/2µ), and we use the elementary identity:

1/(1 + x)^p = E[exp(−x Z_p)]

to obtain:
130 1 8 Some extensions of Paul L´vy’s arc sine law e 1 h(ξ) = 2µ 0 √ 1 du u− 2 (1 − u)r−1 E exp(−ξ uZ2r ) = cµ E exp −ξZ2r Z 1 .r 2 (8. we shall also use the much easier identity in law: . Proof of the Lemma: 1) The duplication formula for the gamma function: √ πΓ (2z) = 22z−1 Γ z+ 1 2 Γ (z) implies that.r Z 2 +r = Z1/2 . the pairs of random variables featured in the diﬀerent products are independent.1 The following identities in law hold: 2 Z2r = 4Zr+ 1 Zr 2 (law) (8. since for any k > 0.32) follows from (8. in all these identities in law. and Z2r and Z 1 .r = 2 Z 1 Zr = N  2Zr .30) where cµ is a constant depending only on µ. Lemma 8.r are inde2 pendent.32). we have: k E[Zp ] = Γ (p + k) . The following lemma shall play a crucial role in the sequel of the proof.31) and (8. Γ (p) then: 2k k k E[Z2r ] = 4k E[Zr+ 1 ]E[Zr ] .32) Z2r 1 Z 2 . 2 2) The ﬁrst identity in law in (8.31) (8. 2 (law) (law) As usual. and the fact that: 1 Z 1 .31). and the second identity in law is immediate since: 2 (law) N  = (law) 2Z 1 2 Apart from the identities in law (8.
25) and (8.4 An excursion theory approach to F.λ = exp − ASθ 2 We obtain: .33) where C is a standard Cauchy variable. with the change of variables: vξ 2 = z . we are able to compute the following quantity: λ2 def . 1 (1 + ξ 2 C 2 )r by (8. we remark that we have obtained the identity: E (law) 1 − A1 1−Z =E A1 + ξ 2 (1 − A1 ) Z + ξ 2 (1 − Z) . We take up again the expression in (8.e. with a constant cµ which changes from line to line: ∞ h(ξ) = cµ 0 du = cµ (1 + u2 )(1 + ξ 2 u2 )r dz z (1 − z) z + ξ 2 (1 − z) −1/2 r− 1/2 ∞ dv √ v(1 + v)(1 + vξ 2 )r 0 1 = cµ ξ 0 .3) We now prove the second result of F.4. Petit’s results 131 CN  = N C . and we obtain E exp −ξZ2r Z 1 .27). γθ. by (8.: 1 1 A1 ≡ Ag1 = Z 2 . independent of N . Petit. 1−z Hence. (law) (8. 1 + 2µ 2 (law) Using the identities (8. we obtain.33) .r 2 = E exp −ξN  2Zr = E exp iξCN  2Zr = E exp iξN C 2Zr = E exp(−ξ 2 C 2 Zr ) = E . . (law) (8. 1 where Z = Z 1 . going back to the deﬁnition of h(ξ).30). which proves the desired result: 2 A1 = Z . 2µ .32) Thus.8. i.
comparing formulae (8. it is easily deduced from (8.29) and (8. we obtain: E 1 A1 + ξ 2 (1 − A1 ) = ˜ µ ˜ 1 − A1 E ˜ ˜ µ A1 + ξ 2 (1 − A1 ) (8.34) 0 In order to prove the desired result. Petit’s second result. we also obtain: γθ. from the scaling property of (At . it is possible to extend P.− . since A1 = Z 1 .β t = 0 ds 1(−α s ≤Bs ≤β s ) . Hence.λ θ = µ db (ch(νb) + ξsh(νb)) 0 1 −µ ξ = µ dx(chx + ξshx)− µ 0 1 On the other hand.4) With a very small amount of extra computation. 1 + 2µ . (8. we shall now use formula (8. 2 2 (law) which is F. for given α. β > 0: e t At ≡ Aα. for Aµ. by considering. we have shown.29). .λ = E θ2 ξ2 θ2 =E 2A 2 (1 − A ) +λ 1 A1 + ξ 1 .4. 21˜ . 21˜ +1 . we have obtained the following formula: 1 1 E = A1 + ξ 2 (1 − A1 ) ξµ ∞ dx (chx + ξshx) µ 1 (8. and we ˜ Hence. t ≥ 0).35) that: 2 µ A1 = Z 1 . at least in the case µ < 1: 1 A1 = Z 1 . 2 µ and since: 1 2˜ µ (law) +1= 1 2 + 1 2µ . write A 1 1 µ 1 = 1 + µ . we can deﬁne µ > 0 by the formula: ˜ ˜ ˜1 . In the case µ < 1.132 ∞ 8 Some extensions of Paul L´vy’s arc sine law e ∞ γθ.35) ˜ (law) Now.34). which will enable us to make almost no computation. L´vy’s result even further.
14). the independence of the process (Aµ.5.8. t ≥ 0) its rightt continuous inverse.1).+ .− to that of the pair (Aµ. where (+) Mt = 0 1(Xs >0) sgn(Bs )dBs (8. F.2) We shall now adapt the stochastic calculus method developed by PitmanYor [75] to prove L´vy’s arc sine law. (8.5. following the method τ µ (t) α discussed in the subparagraph (8.5 A stochastic calculus approach to F. which.3. already presented in (8. a 8. Petit’s results (8. Aµ. 0) . 0) . we shall use the following simpliﬁed notation: Xt = Bt  − µ − Xt t . Petit has obtained the following extension of formula (8. t ≥ 0) denotes the local time at 0 of X. taking up the above computation again.− . e Tanaka’s formula implies: t + Xt = (+) Mt 1 + 2 µ t .29): 2ξ(1 − A1 ) = E A1 + ξ 2 (1 − A1 ) ∞ ds 0 ϕα (s) + ϕβ (s) (ϕα (s))1+ 2α (ϕβ (s))1+ 2β 1 1 where we denote by ϕa (s) the following quantity (which depends on ξ): ϕa (s) ≡ ϕ(ξ) (s) = ch(as) + ξsh(as) .1) The main aim of this paragraph is to show. A± t = 0 ds 1(±Xs >0) . t ≥ 0) and of the random variable µ µ. t = sup(−Xt . + Xt = sup(Xt . allows to reduce the computation of the law of Aµ. Petit’s results 133 Indeed. with the help of some arguments taken from stochastic calculus. 1 τµ τµ 1 1 1 Since µ is ﬁxed throughout the paragraph. and (τtµ .36) . ( µ .− .+ ).5 A stochastic calculus approach to F.
. and 1 µ (law) 2 α+ = 1 1 2 µ α+ t (+) = sup(−δs ) s≤t (8. Knight’s theorem about continuous orthogonal martingales allows to write: Mt (+) = δ (+) (A+ ) t and Mt (−) = δ (−) (A− ) .37). α+ t (8. t ≥ 0). where (−) Mt = 0 1(Xs <0) sgn(Bs )dBs Now.39).39) that: 1 2 µ α− t def (−) + (1 − µ) α− t (8. t ≥ 0). which we write as: − Xα− = −Ytµ + t 1 2 µ α− t (8.39) where: Ytµ = δt and we deduce from (8. and the rest of the proof shall rely in an essential manner upon this independence result. the relations (8. 2 αs τ2t and.41) Hence. just as in the case t µ = 1. we have: A−µ = inf s : 1 µ − > t = inf {s : Ysµ > t}.40) µ = sup(Ysµ ) = St s≤t def (8. + (Xα+ . it follows that. from (8. where δ (+) and δ (−) denote two independent Brownian motions. hence. t ≥ 0) t is a reﬂecting Brownian motion.38) In particular. it suﬃces to prove (−) that the process (Ytµ .36) become: (i) + Xα+ = δt t (+) + 1 2 µ . we have: N . t ≥ 0) is measurable with respect to (δt .37) α− t (ii) − Xα− = −δt t (−) − (1 − µ) + 1 2 µ α− t The identity (i) in (8. Using the time changes α+ and α− .134 8 Some extensions of Paul L´vy’s arc sine law e t − Xt = (−) −Mt 1 − (1 − µ) t + 2 µ t . We now consider the identity (ii) in (8. t t≥0 . in order to obtain the desired independence result.37) may be interpreted as Skorokhod’s reﬂection equa+ tion for the process (Xα+ .
the ﬁxed point theorem allows to show that this equation admits one and only one solution (Ytµ .e. t ≥ 0) in terms of δ (−) and Y µ . we see that: − Bα−  − µ t α− t = −Xα− = −δt t (−) − (1 − µ) α− t + 1 2 µ α− t . IR).37). Indeed. where Y µ is the unknown process. Therefrom. t ≤ T s≤t u≤s is Lipschitz. T ]. 2[.8.42) (−) µ + (1 − µ) sup(−δs + Ss ) s≤t Now. Indeed. the application: ∗ ∗ Φ : Ω0.3) To prove this measurability result. this equality may be considered as an example of Skorokhod’s reﬂection equation for the process (Bα− . which gives: 1 µ t≥0 . we shall ﬁrst express the process ( α− .5 A stochastic calculus approach to F. t ≥ 0). and δ (−) is the driving Brownian motion. f (0) = 0} −→ Ω0. t ≥ 0). which will enable us to transform the t identity (8. if we consider again the identity (ii) in (8.: sup Φ(g)(t) − Φ(h)(t) ≤ K sup g(t) − h(t) . t ≥ 0). Petit’s results 135 (8.T g −→ δt (−) (−) + (1 − µ) sup −δs + sup(g(u)) . t 2 αt Again. and that this solution (−) is adapted with respect to (δt .40). Bringing the latter expression of Ytµ = δt (−) α− t into (8. with coeﬃcient K = 1 − µ. using (8.5.41). we obtain: (8. we deduce: Bα−  = δt t (−) − t α− t (−) = sup −δs + s≤t 1 2 µ α− s (−) µ = sup −δs + Ss s≤t .40) into an equation. in the case µ ∈]0. i. t≤T t≤T . − + α− .T ≡ {f ∈ C([0.
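The contraction just described is easy to observe numerically. The sketch below (the time grid and the driving path are arbitrary choices) iterates a discretized version of the map Φ(g)_t = δ_t + (1 − μ) sup_{s≤t}(−δ_s + sup_{u≤s} g(u)) starting from g ≡ 0; the sup-norm gaps between successive Picard iterates decay at rate K = |1 − μ| < 1, and the limit is a fixed point:

```python
import numpy as np

rng = np.random.default_rng(3)
mu = 0.5                      # contraction coefficient K = |1 - mu| = 0.5
n = 2000
# discretized driving Brownian path delta on [0, 1], delta_0 = 0
delta = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n - 1)) / np.sqrt(n)])

def Phi(g):
    # Phi(g)_t = delta_t + (1 - mu) * sup_{s<=t} ( -delta_s + sup_{u<=s} g_u )
    inner = np.maximum.accumulate(g)
    return delta + (1.0 - mu) * np.maximum.accumulate(-delta + inner)

g = np.zeros(n)
gaps = []                     # sup-norm distances between successive iterates
for _ in range(40):
    g_next = Phi(g)
    gaps.append(float(np.max(np.abs(g_next - g))))
    g = g_next
print("first gaps:", [round(x, 4) for x in gaps[:5]], "final gap:", gaps[-1])
```

Each gap is at most K times the previous one, as the Lipschitz estimate above guarantees, so forty iterations bring the iterate within rounding error of the unique fixed point.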
5) and (8.42) when µ does not belong to the interval ]0. The paragraph 8. Comments on Chapter 8 A number of extensions of L´vy’s arc sine law for Brownian motion have been e presented in this chapter. and Picard’s iteration procedure converges. and particularly the subparagraph (8.5) and (8. t ≤ τs ).6).5) is presented. using the extension of the RayKnight theorems proved in Chapter 3 for the process (Bt  − µ t .4. is an attempt to explain the results (8. Φ is strictly contracting. therefore proving at the same time the uniqueness of the solution of (8.136 8 Some extensions of Paul L´vy’s arc sine law e Hence.4. if µ ∈]0.2).6). Remark: The diﬃculty to solve (8. and partly dealt with there. In the next Chapter. 2[. another explanation of (8.42) and its measurability with respect to δ (−) . Petit’s results (8. 2[ was already noticed in Le GallYor [62]. with particular emphasis on F. .
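As a numerical aside to these comments: Lévy's arc sine law itself — the statement that A⁺_1 = ∫₀¹ 1_{(B_s>0)} ds satisfies P(A⁺_1 ≤ x) = (2/π) arcsin √x — is easy to check by direct simulation. A short sketch (sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 40_000, 500
increments = rng.standard_normal((n_paths, n_steps)) / np.sqrt(n_steps)
paths = np.cumsum(increments, axis=1)

# A+_1: fraction of time spent on the positive side of each path
A_plus = np.mean(paths > 0, axis=1)

def arcsine_cdf(x):
    # P(A+_1 <= x) = (2/pi) * arcsin(sqrt(x))
    return (2.0 / np.pi) * np.arcsin(np.sqrt(x))

cdf_est = {x: float(np.mean(A_plus <= x)) for x in (0.1, 0.25, 0.5, 0.9)}
for x, est in cdf_est.items():
    print(f"P(A+ <= {x}): simulated {est:.3f}, arc sine {arcsine_cdf(x):.3f}")
```

The histogram of A⁺_1 accumulates near 0 and 1, and the empirical distribution function matches the arc sine law; for instance P(A⁺_1 ≤ 1/4) = 1/3.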
Chapter 9

Further results about reflecting Brownian motion perturbed by its local time at 0

In this Chapter, we study more properties of the process (X_t ≡ |B_t| − μ ℓ_t, t ≥ 0) which played a central role in the preceding Chapter 8. One of the main aims of the present Chapter is to give a clear proof of the identity in law between the pairs (8.14) and (8.16), that is:

(1/t²) (A^{μ,−}_{τ^μ_t}, A^{μ,+}_{τ^μ_t}) (law) = (1/8) (1/Z_{1/(2μ)}, 1/Z_{1/2})   (9.1)

(recall that (τ^μ_t, t ≥ 0) is the inverse of the local time (ℓ^μ_t, t ≥ 0) at 0 for the process X).

9.1 A Ray–Knight theorem for the local times of X, and some consequences

The main result of this Chapter is the following

Theorem 9.1 Fix s > 0. The processes (ℓ^x_{τ^μ_s}(X), x ≥ 0) and (ℓ^{−x}_{τ^μ_s}(X), x ≥ 0) are independent, and their respective laws are Q^s_0 and Q^s_{2−(2/μ)}, where Q^s_{2−(2/μ)} denotes the law of the square, starting from s, of the Bessel process with dimension 2 − (2/μ), absorbed at 0.
1 8Z 2 Moreover. the pairs (µ dent.1. and timereversal. t≥0 (9. which implies the result a).1. and x ∈ IR− . µ and sup{Xu . the identity in law (9. 2) We prove a). give a proof of b).1) holds. where (Yy . using Theorem 9. using time reversal. Proof of the Corollary: 1) The independence statement follows immediately from the independence of the local times indexed by x ∈ IR+ .+ = µ τs (law) Aµ. and L1 = sup{y : Yy = 1}.− µ τ1 = 0 dy 2+ 2 (law) −y = µ (X) τ1 0 dy Yy . Aτs ) (law) s2 . x ≥ 0 .138 9 Further results about perturbed reﬂecting Brownian motion Corollary 9.− = µ τs µ.2) . x µ τs (X) >0 . we ﬁrst remark that. b) sup{Xu . by scaling. We now use the following result on powers of BESprocesses (see BianeYor [17]): ⎞ ⎛ t 1/q qRν (t) = Rνq ⎝ 0 −2/p ds Rν (s)⎠ . Then. from Theorem 9. τµ 3) In order to prove c). Remark that: −µ µ τs µ = inf{Xu . The same arguments. we have: ∞ L1 Aµ. we know that the law of µ 2− 2 BESQs µ µ τs is that of the ﬁrst hitting time of 0. u ≤ τs } = 1 2Z µ 2Z1 s2 .1. Aµ. hence. by a process. u ≤ τs }. as stated in Theorem 9. we can take s = 1.1 We have the following identities in law: a) µ c) µ τs µ ≡ − inf{Xu . y ≥ 0) is a BESQ0 µ process. d) 1 8Z 2µ Aµ.1.− µ µ τs .+ µ τs are indepen In particular. u ≤ τs } = (law) s µ (law) s . used with respect to the local times xs (X). u ≤ τs } = inf x ∈ IR.
the following equalities hold: t ∞ du ϕ(Xu )h( 0 t Xu t ) = −∞ ∞ dy ϕ(y)h( y ) t y t) y t x du ϕ(Xu )h( 0 Xu u ) = −∞ dy ϕ(y)H( . let ϕ : IR → IR+ . and h : IR+ → IR+ . Using again Theorem 9.3) where. the following extension of Corollary 9. then. at no extra cost. Remark: In order to understand better the meaning of the quantities on the lefthand side of (9. on the righthand side.1. be two Borel functions.1. we obtain. if we take: h(x) = xα−1 .1 and the identity (9.3).2) in conjunction. . the two gamma variables are independent. dy ys (X) = µ µ τ τ ⎩ ⎭ 2(1 + α)2 −∞ 0 1 Z 1 µ(1+α) . it may be interesting to write down the following equalities.2) that: 2 L1 (Rν ) 2 ds Rν (s) = L1/2 (Rν/2 ) = 0 (law) 1 (law) 1 1 L1 (Rν/2 ) = 4 8 Zν/2 and c) follows by taking ν = 1 . which are immediate consequences of the occupation density formula for X. µ x µ τs (X). 1 1 Z 1+α (9.1. We have: ⎧ 0 ⎫ ∞ ⎨ α α ⎬ (law) sα+1 dy ys (X) . Corollary 9.2 Let α ≥ 0.9. x d) follows similarly. We take p = −1. where: H(x) = 0 dz h(z) . p q and q = 1 . In particular. and 1 + 1 = 1. by considering ≥ 0 and ν = 1. we obtain: ∞ t t y α t) Xu α−1 t ) Xu α−1 u ) dy ϕ(y)( −∞ = 0 du ϕ(Xu )( =α 0 du ϕ(Xu )( . for α > 0. We then deduce from (9.1 A RayKnight theorem for the local times of X 139 where (Rλ ) is a BES process with index λ.
2. t ≥ 0) is an additive functional of the Markov process {Zt = (Bt . we shall use freely the notation and some of the results in BianeYor [17] and Biane [14]. it is important to be able to compute expressions such as: t µ E exp(−Hτs ) .1) In order to prove Theorem 9. in 2 (9. we shall invert the Laplace transform θ2 . where: Ht = ds h(Xs ). we shall consider in fact: ⎡∞ ⎤ θ2 s µ exp(−Hτs )⎦ γ = E ⎣ ds exp − 2 0 and then. t ). concerning Brownian path decomposition. To have access to the above quantity.1 Prove the following extension of the F¨ldesR´v´sz identity o e e in law (4.1. (9.2.4) 9. t ≥ 0} shall play an important role in the sequel. ∞ dy 1(0< 0 (law) −y µ (X)<q) τs 2 = T√q R µ .11): for s ≥ q. The fact that (Ht . after some transformations.2 Proof of the RayKnight theorem for the local times of X (9.2) From now on. we shall use Bismut’s identity: ∞ ∞ ∞ dt 0 t P0 = 0 ds P τ (s) ◦ −∞ T da ∨ (Pa 0 ) which may be translated as: . with h : IR → IR+ a Borel func0 tion. in particular.140 9 Further results about perturbed reﬂecting Brownian motion Exercise 9.
7) θ2 µ g(s) = E exp − τs exp(−Hτs ) ⎞⎤ ⎡ ⎛ 2 T0 θ2 µ − du h(Bu  − µs)⎠⎦ .5): t if we consider Ct = 0 duϕ(Bu . then: ⎤ ⎡∞ E⎣ 0 ∞ ∞ du f (Bu . where ϕ is an IR+ valued continuous function.9. we write: ⎡ γ = E⎣ ∞ ⎤ θ2 d µ exp − u 2 ∞ ∞ µ u exp(−Hu )⎦ 0 1 = lim ε→0 ε where: da −∞ 0 ds 1(0≤a−µs≤ε) g(s)k(a. v ≤ t − gt )] 0 ∞ ∞ (9. and f : IR × IR+ → IR+ is another continuous function.3) We are now ready to transform γ. u ). s) = Ea ⎣exp ⎝− 2 T0 0 . k(a. h ≤ T0 )] = 0 ds E [F (Bu . Brownian functionals. u ≤ gt )G(Bt−v . u ≤ τs )] −∞ where F and G are two measurable. s).2.5) da Ea [G(Bh . u ) exp(−Cu ) ⎦ = 0 t ds −∞ s da f (a. IR+ valued.6) where s Ct = 0 duϕ(Bu . (9. s)E0 [exp(−Cτs )] Ea exp −CT0 (9. Here is an important application of formula (9. s) (9. First.2 Proof of the RayKnight theorem for the local times of X ∞ 141 dt E [F (Bu .
(9. tracing our steps back. x) = E exp(−Hτb )  ⎡ ⎛ T e(2) (a. and: P µ τs ∈ db.142 9 Further results about perturbed reﬂecting Brownian motion µ T0 with denoting the local time at 0 of (Bu  − µs. with the help of Bismut’s decomposition to the following reinforcement of (9. µ gτ µ s ∈ dx = 2db dxϕb (x)ψµb (s − x)1(x≤s) . y) = Ea ⎣exp ⎝− 0 µ τb =x 0 ⎞ a T0 ⎤ = y⎦ . and. the law of µb . µ gτ µ s = x = e(1) (b. the law τ of a 0 under Pa .8): µ E exp(−Hτs )  µ τs = b. this implies the following Proposition 9. It is now easy to invert the Laplace transform.7). s−x) . resp. b) . x)e(2) (bµ. u ≤ T0 ). we know. (law) µ gτ µ = s 1 s Z µ . from Chapter 3. one would like to be able to disintegrate the above integral with respect to db dx. it easily follows that: ∞ ∞ γ= −∞ db g(b)k(bµ. du h(Bu  − a)⎠  These notations enable us to write γ as follows: θ2 x γ = 2 db dxϕb (x) exp − 2 0 0 ∞ ∞ ∞ e (1) (b.8) Plainly. and we get: ∞ µ E exp(−Hτs ) = 2 s db 0 0 dxϕb (x)e(1) (b. the explicit expressions of ϕb (x) and ψa (y). It is now natural to introduce ϕb (x)dx. y) . s − x) .: ψa (y)dy. x) dyψbµ (y) exp − 0 θ2 y 2 e(2) (bµ. the variables they satisfy: µ (law) µ τs µ τs and µ gτ µ s are independent and = s 1 2Z µ (law) = s µ τ1 .1 For ﬁxed s.1 . as well as the conditional expectations: T e(1) (b. From (9. x)ψbµ (s−x)e(2) (bµ. b) = 2 0 db g(b)k(bµ. However. we arrive easily.
9. We ﬁrst recall that.. we can state the following x µ τs (X)..4): 2/µ a−µb (X).. s − x) in terms of the laws of BESQ processes of diﬀerent dimensions. thanks to the additivity properties of {Q0 } and {Qδ }... which is BESQ0 . s) ⎛ = Q0 ⎝exp − s 0 ∞ ⎞ dz 2+ 2 h(z)Yz ⎠ Q0 µ ⎛ ⎝exp − µb ⎞ dz h(z − µb)Yz  Yµb = s⎠ 0 and we make the important remark that this expression no longer depends on x. Putting together the diﬀerent results we have obtained up to now.Q2 ⎝exp − 0 0 0 µb ∞ ⎞ ⎞ ⎞ ⎞ e(1) (b.1.Q0 ⎝exp − 0 ∞ dz h(z − µb)Yz  Yµb = x⎠ dz h(z)Yz ⎠ . s − x) = Q0 ⎝exp − s−x ⎛ . x) and e(2) (bµ..2.. for a ≥ µb. we proved the following RK theorem (Theorem 3.2 1) The process able µτ µ . x Theorem 9. τb for a ≤ µb. Hence. a ≥ 0 is an inhomogeneous Markov process. x 0 µb 2/µ . in Chapter 3. as we know how to write e(1) (b.. the product of these two expressions is equal. x.4) We are now in a position to prove Theorem 9. x) = Q0 ⎝exp − dz h(z)Yz ⎠ . we may write: ⎛ ⎛ ⎛ e(2) (bµ. and BESQ0 . dz h(z − µb)Yz  Yµb = s − x⎠ Therefore.2 Proof of the RayKnight theorem for the local times of X 143 (9. g s ∈ IR is independent of the vari . to: s 0 e(b.
and that when we reverse the process: ( −y .1) In his article in the Colloque Paul L´vy (1987). 0 ≤ x ≤ µb conditioned on T0 = µ µ τs τs 2 2+ µ latter process is distributed as Q0 −→ s .1. that is. s > 0 . since it is wellknown that: R0 (Ls − u). t ≥ 0) denotes here the Bessel process with index α. 0 ≤ y ≤ T0 ) from T0 ≡ µb. s µ is Q0 −→ s . x 2) The processes 3) The law of ≥0 and −x µ (X). F. u ≤ T0 where (Ra (t).9) .144 9 Further results about perturbed reﬂecting Brownian motion x µ τs (X). Ls = sup t. Rs (t) = 0 9. (µb) µb.2. 0 µ τs ≤ y ≤ µb 2+ 2 (9.3 Generalisation of a computation of F. we ﬁnd that the Putting together these two results. R0 (t) = s (ν) . Knight (9. T0 ≡ inf x : =0 =µ µ τs is distributed as T0 under 2− 2 Qs µ . we ﬁnd that −x µ (X). we consider: µ τs −(µb−x) ≡ x−µb . by remarking that. (µb) 4) The law of y−µb (X). starting at a. from −x µ (X) τs Proposition 9.1.5) We now end the proof of Theorem 9. u ≤ Ls (α) (ν) (law) = (−ν) Rs (u). x µ τs (X). sh(2λ) λ ∈ IR . (9. and (−ν) T0 = inf t. x τs ≥0 are independent. Knight [58] proved e the following formula: E exp − λ2 A+ τs 2 2 Mτs = 2λ .3. x ≥0 is Q0 . x τs ≥0 is distributed as BESQs 2 2− µ .
10) Formulae (9.9) and (9. t ≥ 0) into a reﬂecting Brownian motion with the help of the wellknown representation. we have: . Vallois [88].9. Then.− µ λ2 Aτs µ 2 2 (Iτs ) µ 1/µ (9. we generalize formulae (9. Theorem 9.10) show that: A+ (law) τs (law) (3) def τs = = T2 = inf{t : Rt = 2} .11) E exp − = λ shλ 1 chλ .12) ∗ 2) Deﬁne: Xt = sups≤t Xs .9) may also be written in the equivalent form: E exp − where: Mt∗ = sup Bu . already used in paragraph 4. if we denote c = 1/µ.10) to the µprocess X.9) and (9. formula (9. For the moment. s ≥ 0) is the inverse of the u≤t local time of B at 0. t ≥ 0) is a 3dimensional Bessel process starting from 0. 2 ∗ Mτs (Mτs )2 where (Rt . Mt = sup Bu . (9. sh(2λ) (9. and (τs . with the help of a pathwise decomposition.3 (We use the notation in the above paragraphs) µ 1) Deﬁne Iu = inf Xv . An explanation of the identity in law (9. Then. Biane [13] and P. we have: v≤u µ. u ≥ 0) denotes a reﬂecting Brownian motion. Knight t 145 where A+ t = 0 ds 1(Bs >0) .11) has been given by Ph.1: ⎛ t ⎞ + Bt = β ⎝ 0 ds 1(Bs >0) ⎠ .3 Generalisation of a computation of F. where (β(u). + Time changing (Bt . u≤t λ2 τs ∗ 2 (Mτs )2 = 2λ . µ considered up to τs .
1.5). we ﬁrst deduce from Theorem (9.2) that: ⎡ ⎛ λ2 E ⎣exp⎝− 2(µb)2 ⎛ ⎛ = 2+ 2 Q0 µ µb ⎞ dy y−µb µ (X)⎠  τs µ τs µb ⎤ = b⎦ ⎞ 0 ⎞ 2 ⎝exp ⎝− λ 2(µb)2 dy Yy ⎠  Yµb = s⎠ 0 Using L´vy’s generalized formula (2. H) which is distributed as follows: . integrating with respect to the law of that the lefthand side of (9.2 Let c = µ > 0. together with the same kind of arguments as used in the proof of formula (9. In order to prove formula (9. µ τs ∗ )2 (Xτs µ The following exercise may help to understand better the law of 1 Exercise 9.12).13) Proof: µ 1) Recall that: Iτs = −µ µ µ τs . Consider a pair of random variables (T.12) is equal to: λ shλ 1 1+ µ in the variable b. as asserted in µ τµ τs Theorem 9. we obtain 1 µ thλ λ 1 µ = λ shλ 1 chλ .12). x ≥ 0 and −x (X). Then. this quantity is equal to: e λ shλ 1 1+ µ (∗) exp − s (λ coth λ − 1) 2µb µ τs .146 9 Further results about perturbed reﬂecting Brownian motion 2λ µ λ2 τs E exp − ∗ )2 2 (Xτs µ 1 = c 2 λ sh λ 1 +cλ(sh λ)c−1 ch λ λ du (sh u)c+1 (9. Petit and Ph. (9.13) has been obtained by F. x ≥ 0 . Carmona using the independence of xs (X).14) 2) Formula (9.
the righthand side of (b) clearly appears as a Laplace 2 transform in λ ).9.3 Generalisation of a computation of F.15) We now look for some probabilistic explanation of the simpliﬁcation which 1 thλ µ occurred in (9. 2c P (H ∈ dx) = c dx xc+1 (1 < x < 2) (ii) For λ > 0. we have: E exp − and. whereas in 2 the case µ ≥ 1. in the case µ ≤ 1. put another way. above (9. 2/µ Thus. the right2 hand side of (a) clearly appears as a Laplace transform in λ .14). Knight 147 (i) H takes its values in the interval [1. with: P (H = 1) = 1 .14). what does the quantity λ represent in the above computation? With this in mind. prove that: µ τs (law) ∗ )2 = T (Xτs µ (9. 2 Now. (a) x sh λ sh λx λ sh λ 1 µ −1 (b) ≡ 1 µ +1 1 1− µ (we present both formulae (a) and (b). gives us: ⎛ ⎞ 1 2 λ Rδ ⎝exp − dy Yy ⎠ . for 1 < x < 2: λ2 E exp − T 2 H =x = λx sh λx λx sh λx 2 λ2 T 2 H =1 = λ sh λ 1 ch λ c .2]. 2 0 . let us recall that: µ (law) µ τs = s µ τ1 . the integral with respect to (db) of the term featuring (λ coth λ − 1) in (∗) . since. and P µ τ1 ∈ dy = Q0 (Y1 ∈ dy) . or.
the law of (λxs . for every Borel function f : [0. = ( (f )) exp − where (f ) and h(f ) are two constants depending only on f . 0 ≤ x ≤ 1) is Q2 ∗ Q0 .2.4 For simplicity.148 9 Further results about perturbed reﬂecting Brownian motion 2 where δ = µ . . in order to compute: ⎡ ⎤ µ 1 ⎢ E ⎣exp − 2 I τ1 2/µ ds f 0 1− Xu ⎥ ⎦ . 0 0→0 Developing the same arguments more thoroughly. we write I = Iτs . Then. we ﬁnd: (f ) (f ) 1 + h(f ) 2/µ 1 µ µ τ1 . 1] → IR+ . We denote by (λx (X). t µ t ≤ τs . 0→0 τµ Proof: By scaling. we can take s = 1. which is that of 1 under Y1 . Using again Theorem 9. we obtain a new RayKnight theorem which generalizes formula (9. and we have used the notation in Chapter 3. When we integrate with respect to the law of µ Q0 .12). µ Theorem 9. x ≥ 0) µ t the process of local times deﬁned by means of the formula: t ∞ 1 I2 0 du f Xu 1− I = 0 dx f (x)λx (X) . it suﬃces. concerning the decomposition: Qδ = Qδ ∗ R δ . I to integrate with respect to the law of ⎛ Q0 2 2+ µ µ τ1 the quantity: ⎞ y µb Yy  Yµb = 1⎠ µb ⎝exp − 1 (µb)2 ⎛ 1 ⎝exp − 0 1 1+ µ dy f 0 = Q0 2 2+ µ ⎞ 1⎠ dz f (z)Yz  Y1 = µb 1 h(f ) 2µb .
u ≤ τs ) µ In order to obtain a more complete picture of (Xu .9.5) itself.2.16) = E F (Bu .5): ⎤ ⎡∞ θ2 s µ Φτs ⎦ . instead of remaining at the level of the local times of X. we obtain.2. with the same arguments as in paragraph 9. with the notation introduced in (9.4 Towards a pathwise decomposition of (Xu . bµ T0 µ gτ µ s =x (9. even more compactly: . v ≤ τs − gτs )  µ τs = b.2: µ µ µ µ E F (Bu : u ≤ gτs )G(Bτs −v . 0→0 Remark: A more direct proof of Theorem 9.1 together with Corollary 3. but we now work at the level of Bismut’s identity (9.µ 9. u ≤ τs ) 149 ⎛ which is equal to the expectation of exp ⎝− 0 1 ⎞ dy f (y)Yy ⎠ under Q2 ∗ Q0 . u ≤ gt )G(Bt−v : v ≤ t − gt ) . 2/µ µ 9. we consider again the arguments developed in paragraph 9. h ≤ T0 )  =s−x which may be translated in the form of the integral representation: ∞ s P µ τs =2 0 db 0 dxϕb (x)ψbµ (s − x)P τb ·  µ τb T0 = x ◦∨ Pbµ · bµ T0 =s−x or. if we now deﬁne. Hence. as was done from (9.4 may be obtained by using jointly Theorem 9. which expresses the law of a Bessel process transformed by Brownian scaling relative to a last passage time. u ≤ τs ).4 Towards a pathwise decomposition of (Xu. γ = E ⎣ ds exp − 2 0 where Φt = F (Bu .7) onwards. u ≤ τb )  µ τb = x Ebµ G(Bh .
150 ∞ 9 Further results about perturbed reﬂecting Brownian motion ∞ µ τs ds P 0 =2 0 db P τb ◦ ∨ T0 Pbµ .3. this result (8.5) was derived from a RayKnight theorem for the local times of X. but this is left to the diligent reader.it was shown in (8. v ≤ τs − gτs µ τs = b.1) that a proof of the arc sine law for Brownian motion may be given in a few moves. conditionally on the process: µ µ µ µ µ µ (Xv . v ≤ τs − gτs ) ≡ (Bτs −v − µ τs . Petit’s ﬁrst result (8. The main diﬀerence between Chapter 8 and the present Chapter is that. considered up to τs = inf{t : t = s}. In the end. v ≤ τs − gτs ) µ µ µ) = (Bτs −v − µb. 92). thanks to the scaling property. which use essentially two ingredients: (i) the scaling property.5) is obtained as a consequence of Theorem 9. µ It would now remain to study the pregτs process in the manner of Biane [13] and Vallois [88].5). in Chapter 8. a more intrinsic time for the study of X. (9. Theorem 9. we simply deduce from (9.16) that. . whereas. 91–Feb.1 may be used to give. Petit’s ﬁrst result (8. Comments on Chapter 9 The results presented in this Chapter were obtained by the second author while teaching the course at ETH (Sept. which is a RK theorem for the local times of X. it may be worth to emphasize the simplicity (in the end!) of the proof of (8.1. the proof of F. t As a temporary conclusion on this topic. is distributed as Brownian motion starting from 0. in the present chapter.17) For the moment. a quick proof of F. considered up to its ﬁrst hitting time of −µb. up to µ τs ≡ inf{t : µ = s}.5): .
4 Towards a pathwise decomposition of (Xu . deduced from excursion theory.14).− τ µ (1) and Aµ. the identity in law between the quantities (8. u ≤ τs ) 151 (ii) the independence. .to prove F. as done in PitmanYor [77] and PermanPitmanYor [69] in the Brownian case. the use of the scaling property makes no problem.+ are dealt with in Theorem 9. and the identiﬁcation of the laws.1.13) and (8. τ µ (1) However. τ τ the latter being. . most likely from excursion theory.µ 9. Petit’s result. possibly. since we have not understood. whilst the independence and the identiﬁcation of the laws of Aµ. of A+(1) and A−(1) . the analogy with the Brownian case is not quite complete.
1). 153 . of the local times of Brownian motion at time t. o then the limit in ε exists for every x ∈ IR. This remark applies to f (y) = y t . the function. and has compact support. in the space variable y.1) ˜ Ht (a) = lim ε→0 0 ds 1(Bs −a≥ε) (Bs − a) (10. we can deﬁne.3) ˜ with xα = xα sgn(x). partly because of the fundamental identity between Fourier transforms: ˆ Hf (ξ) = i sgn(ξ)f (ξ) If. as: 1 Hf (x) = lim π ε→0 (this limit exists dx a. the Hilbert transform H. in (10. for any f ∈ L2 (IR). y ∈ IR.) plays an important role. which may be deﬁned. f is assumed to be H¨lder continuous. for α < 3/2: t ˜ (α) Ht (a) = lim def ε→0 0 ds 1 ˜ (Bs −a≥ε) (Bs − a)α (10. We shall use the notation: t ∞ −∞ dy f (y) 1(y−x≥ε) y−x (10.2) More generally.s.Chapter 10 On principal values of Brownian and Bessel local times In real and complex analysis.
1.η (ω)a − b 2 −η .1 Yamada’s formulae (10. the quantities: (Ht (a). by Fukushima [45]. if ( a . then. and to some of its applications. ˜ Moreover. as will be proved in this chapter. 72) and Yamada’s original papers ([94]. [95]. we have: x+ε x−ε dy  y − xγ y t − x t <∞ . they may be traced back to ItˆMc o Kean ([50]. t ≥ 0) are welldeﬁned for any β < 3.154 10 On principal values of Brownian and Bessel local times ˜ ˜ ˜ (α) ˜ (α) We shall simply note Ht for Ht (0). which partly explains why they possess some interesting distributional properties. as soon as: γ < 3 2 . ˜ (β) Consequently. so are the quantities: . 10. for a given x ∈ IR. which have been studied. [96]). a ∈ IR. [8]). a few words about the origin of such studies is certainly in order: to our knowledge. an important part of this chapter shall be devoted to the description of a new kind of excursion theory for Bessel processes with dimension d < 1. in particular. To conclude this introduction. In fact. the onesided version of H (α) plays an essential role in the representation of Bessel processes with dimension d < 1. and Ht for Ht (0). thanks to the following H¨lder continuity property of Brownian local times: o for 0 < η < 1 . as shown recently by Bertoin ([7]. 2 sup  s≤t a s − b s (ω) ≤ Ct. and ε > 0. developed by Bertoin. 2 Likewise. t ≥ 0) denotes the t family of Brownian local times. when taken at certain random times. Problem 1.η (ω). we remark that. a ∈ IR. 1 for some (random) constant Ct. They also inherit a scaling property from Brownian motion. p.1) To begin with. These processes (in the variable t) are quite natural examples of processes with zero energy.
5) Bt 1−α = (1 − α) 0 Bs −α sgn(Bs ) dBs + (1 − α)(−α) p. (Bt − a)+ 1−α and Bt − a1−α . we have the following formulae: t (Bt )1−α = (1 − α) 0 t + (Bt )1−α ˜ (Bs )−α dBs + (1 − α)(−α) p.v.10. 2 0 t ds Bs 1+α (10. 0 t ds def = Bs − a1+α ∞ −∞ db ( b1+α a+b t − a t) . we shall take a = 0. and ﬁxed t. as an Itˆ stochastic o t integral.v. for ﬁxed y. for 0 < α < 1 2.2) The quantities we have just deﬁned appear in fact as the zero quadratic variation parts in the canonical decompositions as Dirichlet processes of (Bt − a)1−α .1.6) Exercise 10.v.4) = (1 − α) 0 t (Bs ) −α (1 − α)(−α) p. 2 0 t ds 1+α Bs t (10. for 0 < α < 1 2 . the representation of the local time y of Brownian motion. where: q(x) = 2 x du exp − y2 u2 1 .1 In RevuzYor ([81].1 Yamada’s formulae t 155 p. (10. p. For simplicity. 1(Bs >0) dBs + 2 0 1(Bs >0) ds 1+α Bs (10. then. and gs (y) = √ . exp − 2 2s 2πs . is given in the following explicit form: t y t t = 0 1 ds gs (y) − √ 2π 0 ∞ sgn(Bs − y)q Bs − y √ t−s dBs .v.v. 230). 0 1(Bs −a>0) ds def = (Bs − a)1+α ∞ db ( b1+α a+b t − a t) 0 and p.
precisely. Bs (10. 0 ds def = Rs ∞ aδ−2 da(La − L0 ) . t ≥ 0) denotes a Bessel process with index µ.156 10 On principal values of Brownian and Bessel local times Derive from this formula the representation as an Itˆ integral of the diﬀerent o t principal values we have just deﬁned. t t 0 the family of local times (La . We ﬁrst recall that a power of a Bessel process is another Bessel process timechanged. Applying this formula with ν = − 1 (so that (Rν (t). t ≥ 0) is a reﬂecting 2 Brownian motion. a ≥ 0) being deﬁned with respect to the speed t measure of R(δ) as: t ∞ dsϕ(Rs ) = 0 0 daϕ(a)La aδ−1 t for every Borel function ϕ : IR+ → IR+ .7) 2/p Rν (s) 0 1 where (Rµ (t).11). .8) where (βt .7) was already presented and used in Chapter 9.2)). and: t Kt = p. such that: 0 < δ < 1. as formula (9. 1 1 p + q = 1 (see. t ≥ 0). and ν > − q . as a Dirichlet process. p. in particular: 0 ds .1. t ≥ 0) is a Brownian motion. with dimension δ.: RevuzYor ([81].3) We shall now transform formula (10.v. we obtain the following consequence of formula (10. Proposition (1. and Rνq (t) ≡ R(δ) (t).6): Rt ≡ R(δ) (t) = βt + δ−1 Kt 2 (10. t ≥ 0).g. formula (10. in fact. of a Bessel process (δ) (Rt . 416).6) into a formula which gives the canonical decomposition. e. we have the formula: ⎞ ⎛ t ds ⎠ 1/q qRν (t) = Rνq ⎝ (10.
t ≥ 0 is a standard Cauchy process. we have ˜ (α) Theorem 10. They were also intrigued by the presence of the constant π. if (γu . as we shall see with the next theorem. involving principal values of Brownian local times (10. Fitzsimmons and R. Remarks: 1) As α varies from −∞ to 3 (excluded). which may be obtained as timechanges of a Brownian motion by an independent unilateral stable process.2 A construction of stable processes. Spitzer [85] remarked that. More precisely. it is easy to construct symmetric stable processes from a u 1dimensional BM. hence. Precisely.2 A construction of stable processes 157 10. t ≥ 0 to a large class of symmetric L´vy processes in place of the e Brownian motion.2.2.10. 3 [. in particular. u ≥ 0) is another realvalued Brownian motion. (Hτt . 3 [. t ≥ 0) is a sym2 1 . να varies from 0 to 2.1 Let α ∈] − ∞. we can obtain all symmetric stable processes. u ≥ 0). t ≥ 0). with this construction. The computations of Fitzsimmons and Getoor have been simpliﬁed and generalized by Bertoin [9]. With the help of the scaling property of the process 2 ˜ (α) (Ht . with extreme 2 values excluded. then: (λ ∈ IR) . and using the inverse τt ≡ inf{u : 0 > t} of the Brownian local u time ( 0 . using stochastic calculus and FeynmanKac arguments.1) Let α ∈]−∞. Then. In fact. t ≥ 0) is a multiple of the standard Cauchy process.1 and a more classical construction of the stable symmetric processes. except Brownian motion! ˜ 2) In the particular case α = 1. we have: metric stable process of index να = 2−α ˜ (α) E exp(iλHτt ) = exp(−t cα λνα ) for some constant cα . π t 3) P. (10.2) It now seems natural to look for some relation between the results of Theorem 10. the process (Hτt . 1 ˜ Hτ . Getoor [40] have extended the result concerning ˜ Hτt . which is independent of B.
it now seems natural to consider their joint distribution for ﬁxed time t.9). and θ = 0. to the second 1 ˜ remark following it) and Spitzer’s result (10. have the same law. moreover. in the next paragraph.3 Let T denote a r.7). T is assumed to be independent of B.3 Distributions of principal values of Brownian local times.158 10 On principal values of Brownian and Bessel local times (γτt . coming back precisely to Theorem 10. J. it seems e to call for some interpretation in terms of complex Brownian motion. in that it involves complex Brownian motion. 10. except Brownian motion. taken at an independent exponential time We start again with the interesting case α = 1.v. which is exponentially distributed. t ≥ 0) is a standard symmetric Cauchy process (10. where gt = sup{s ≤ t : Bs = 0} Theorem 10. rather.2 (We keep the previous notation concerning the independent Brownian motions B and γ). u ≥ 0 π and (γu . Theorem 10. which is closer to Spitzer’s original idea. we have: E exp i λ˜ Hτ + θγτt π t λ˜ θ2 = E exp i Hτt − τt π 2 = exp −tλ coth λ θ . t ≥ 0) into the sum of: ˜− ˜ Ht = Hgt and ˜+ ˜ ˜ Ht = Ht − Hgt . Therefore. u ≥ 0). For every λ ∈ IR.1 (or. This formula is reminiscent of L´vy’s stochastic area formula (2. with some partial success. Le Gall [59] presented yet another construction in the general case. we have the following: . 2 Then. we see that Hu . t ≥ 0) by any unilateral stable process to obtain all symmetric stable processes. with parameter 1 . In any case.F. It will be fruitful to decompose ˜ the process (Ht .9) MolchanovOstrovski [67] replaced (τt . v ≥ 0). which we shall attempt. with values in IR+ . when restricted to the zero set of the Brownian motion (Bv .
a decomposition we already encountered in paragraph 10. formula (10.11) 10. h ≥ 0 .8).10. t ≥ 0) denotes a BES(d) process. ii) for every λ ∈ IR. e2 ) deﬁned by: e1 (t) = Rσ(t−)+h 1(h≤σ(t)−σ(t−)) . where dn (t) denotes the number of downcrossings of K from 0 to −2−n during the timeinterval [0.4 Bertoin’s excursion theory for BES(d). 2 Bertoin [8] proved that (0. it admits a local time. and consider the Poisson point process: e = (e1 .1.10) may be completed as follows: λ˜ E exp i HT π  0 T =t = λ exp −t(λ coth λ − 1) .10) = th(λ) λ and λ ˜+ E exp i HT π = λ sh(λ) . 0 < d < 1 In this paragraph. t]. with respect to the Markov process (R. with 0 < d < 1. iii) In fact. and (Kt . t ≥ 0) may be constructed explicitly from K as the limit of 2n(d−1) dn (t). t ≥ 0) is the process with zero quadratic variation such that: Rt = R0 + Bt + (d − 1)Kt (t ≥ 0) . (Rt . with the factor ( 1 ) deleted. Let σ(t) = inf{s : δ(s) > t} be the rightcontinuous inverse of δ. formula (10.4 Bertoin’s excursion theory for BES(d). we have: λ˜ E exp i HT π = 1 ch(λ) (10. hence. 0) is regular for itself. sh(λ) (10. 0 < d < 1 159 ˜− ˜+ i) HT and HT are independent. h ≥ 0 e2 (t) = Kσ(t−)+h 1(h≤σ(t)−σ(t−)) . such a local time (δ(t). λ ˜− E exp i HT π Therefore. K).
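The formulae (10.10) and (10.11) above fit together with the independence asserted in i): reading the three Fourier transforms as th(λ)/λ for H̃⁻_T, λ/sh(λ) for H̃⁺_T and 1/ch(λ) for H̃_T (this reading of the garbled display is an assumption of ours), the first two must multiply to the third. A quick numerical consistency check:

```python
import math

# The three Fourier transforms appearing in (10.10)-(10.11), read as
# E[exp(i(λ/π)H⁻_T)] = tanh(λ)/λ, E[exp(i(λ/π)H⁺_T)] = λ/sinh(λ),
# E[exp(i(λ/π)H_T)]  = 1/cosh(λ).
minus = lambda l: math.tanh(l) / l
plus = lambda l: l / math.sinh(l)
total = lambda l: 1.0 / math.cosh(l)

# independence of H⁻_T and H⁺_T  ==>  product of the two transforms
for l in (0.1, 0.5, 1.0, 2.0, 7.5):
    assert abs(minus(l) * plus(l) - total(l)) < 1e-12
```

The product identity holds exactly, since (th λ/λ)(λ/sh λ) = th λ/sh λ = 1/ch λ.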
160 10 On principal values of Brownian and Bessel local times Call m the (Itˆ) characteristic measure of this Poisson point process. K).4 to characterize the law of 1 A+ 1 = 0 ds 1(Ks >0) . we have. the processes: ε1 (U − h). ε2 (U + h). In turn.4. from excursion theory. U ≤ t ≤ V takes values in IR− . t ≤ Sx ) . the following formulae: . −ε2 (U − h). where (Rx (t). h ≤ V − U are independent. with canonical (Dirichlet) decomposition: Rx (t) = x + Bt + (d − 1)Kx (t) . and Sx = inf {t : Kx (t) = 0}. Recall that. which o abs lives on Ω0 . and ε is absorbed at (0. Kx (t). increasing additive functional (At . for any continuous. t ≤ U takes values in IR+ . we shall use Theorem 10.s. such that ε(0) = (0. 2) m ε1 (U ) ∈ dx = ε2 (t). h ≤ U and ε1 (U + h). t ≥ 0) of X ≡ (R. we deﬁne furthermore: U (ε) = inf t > 0 : ε2 (t) = 0 . Bertoin [8] deduced several distributional results from Theorem 10. t ≥ 0) denotes a BESx (d) process. which does not charge {s : Rs = Ks = 0}. 0).. ε2 (t). Theorem 10. We may now state Bertoin’s description of m. 0) after its ﬁrst return V (ε) to (0. and have both the same distribution as: (Rx (t). abs For ε ∈ Ω0 .4 The σﬁnite measure m is characterized by the following distributional properties: 1) m(dε) a. and 1 − d d−2 x dx Γ (d) (x > 0) 3) Conditionally (with respect to m) on ε1 (U ) = x. the set of continuous functions ε : IR+ → IR+ × IR. 0).
β.4 Bertoin’s excursion theory for BES(d). t t t t the quantities to be computed are: h(α. b) . b) = √ 2a + √ 2b 1−d . α + β) k(α.12) 1 α ⎡ E⎣ ∞ ⎤ dt exp −(αt + Agt )⎦ = m(dε)(1 − exp(−αV )) 0 m(dε)(1 − exp −(αV + AV )) We now apply these formulae with: At = βA+ + γA− . where A− = t − A+ . 0) 1 + f (α + γ. γ) V def def m(dε) 1 − exp −(αV + βA+ + γA− ) V V m(dε) (1 − exp − {(α + γ)U + (α + β)(V − U )}) = m(dε) 0 ⎧ ⎨ dt exp −(αt + βA+ + γA− ) t t U V = m(dε) ⎩ dt exp (−(α + γ)t) + 0 U ⎫ ⎬ dt exp − (αt + β(t − U ) + γU ) . γ) = (β − γ)f (α + γ. ⎭ Hence. one has: E exp −(aA− + bA+ ) = exp −tf (a. γ) = = and k(α. if we now deﬁne: f (a.10. 0 < d < 1 V 161 ⎡ E⎣ ∞ ⎤ dt exp −(αt + At )⎦ = m(dε) 0 dt exp −(αt + At ) 0 m(dε) (1 − exp −(αV + AV )) (10. β. α + β) α+β α+γ We are now in a position to state the following Theorem 10. with a little algebra: h(α. γ) = f (α + γ.13) we obtain.5 1) For every t ≥ 0. β. σ(t) σ(t) where: f (a. b) = m(dε) (1 − exp −(aU + b(V − U ))) and (10. β. .
a ≥ 0) and (λ−a .12) and (10. 1+d . 0) α(α + β)f (α. α + β) k(α. Theorem 4. g1 is distributed as: Z 1−d . conditionally on λ0 = x. 2 . which are deﬁned by: t ∞ ds f (Ks ) = 0 −∞ da f (a)λa . 0) + αf (α.162 10 On principal values of Brownian and Bessel local times 2) The distributions of the variable A+ and of the pair (A+ . 0) = E ⎣ dt exp −(αt + βA+ )⎦ = t h(α.14) E = 2 √ √ 1+β+ 1+γ 1−d In particular. a ≥ 0) σ(t) σ(t) σ(t) are two independent BESQx (0) processes. t then. the law of λ0 is characterized by: σ(t) k E exp − λ0 2 σ(t) Using this result. Proof: 1) Bertoin ([8]. = exp −t = exp(−tk 1−d ) (k ≥ 0) . a beta variable with parameters 2 2 1−d 1+d 2 . 2) It follows from formulae (10. A− ) are charg1 g1 1 acterized by the formulae: E 1 1 + βA+ 1 1 1 + βA+ + γA− g1 g1 = √ 1−d 1+β √ 1−d (1 + β) 1 + 1 + β β+ 1+ (10. the processes (λa .2) proved that if (λa . Furthermore. β.13) that: ⎤ ⎡∞ βf (α. we obtain: E exp − aA− + bA+ σ(t) σ(t) = E exp − √ λ0 σ(t) √ 2a + 2b 2 1−d √ √ 2a + 2b . β. α + β) 0 and . a ∈ IR) denotes the t family of occupation densities of K.
0 < d < 1 163 ⎡ E⎣ ∞ ⎤ dt exp −(αt + βA+ + γA− )⎦ = gt gt f (α. 0) = .14) with yet another distributional result: for ﬁxed t. α + β) 0 Now. and λ ∈ IR (thus. A more complete exposition of results pertaining to principal values of local times is given in Yamada [97]. as derived by Bertoin [8]. A+ σ(t) σ(t) (law) = A− σ(t) σ(t) (law) = Z1.: both ratios are arc sine distributed. striking identity: 1 2 λ 2 0 ds coth(λrs ) −1 (law) 1 0 = ds rs 2 where (rs . α h(α. the law of the lefthand side does not depend on λ). .4 Bertoin’s excursion theory for BES(d). β. α) h(α. which centers around Alili’s study of: t p. to: E 1 α + βA+ 1 and E 1 α + βA+ + γA− g1 g1 √ 2a + √ 2b . and their excursion theory.10. using a scaling argument. the expectations on the lefthand sides of these equalities are respectively equal. Comments on Chapter 10 The contents of this chapter consist mainly of results relating principal values for Bessel processes with small dimension. up to now little understood . For a further discussion by Bertoin. and also in the second half of the Monograph [103]. s ≤ 1) denotes the standard 3dimensional Bessel bridge. 2 2 (10. and the. 0 ds coth(λBs ) . a ∈ IR) σ(t) already used in the above proof.e.v. γ) αf (α + γ.1 . This follows immediately from the description of the law of (λa . b) by equalities. 0. see [10]. 1−d The proof is ended by replacing f (a.15) i. in the above Remark: It may be interesting to compare formula (10.
More studies of functionals of (r_s, s ≤ 1), including ∫₀¹ ds exp(±λ r_s), are also to be found in C. Donati-Martin and M. Yor [34].
Chapter 11

Probabilistic representations of the Riemann zeta function and some generalisations related to Bessel processes

To begin with, it may be wise to state immediately that the aim¹ of this chapter is not to discuss Riemann's hypothesis! but, much more modestly, to present some of the (well-known) relations between the heat equation, theta functions, the zeta function and Brownian motion.

¹ Researches linking the Riemann zeta function and random matrix theory, in particular "the Keating–Snaith philosophy", which is closely related to the Lindelöf hypothesis, are beyond the scope of this book. However, see e.g. the Mezzadri–Snaith volume [66].

11.1 The Riemann zeta function and the 3-dimensional Bessel process

(11.1.1) The Riemann zeta function is defined by:

ζ(s) = Σ_{n=1}^∞ 1/n^s ,   for s ∈ C, Re(s) > 1.

It extends analytically to the entire complex plane C, as a meromorphic function with a unique pole at s = 1. An essential property of ζ is that it satisfies the functional equation:

ξ(s) = ξ(1 − s) ,   (11.1)
where ξ is defined by:

ξ(s) := (s(s − 1)/2) Γ(s/2) π^{−s/2} ζ(s) ,   for any s ∈ C.   (11.2)

We recall that the classical gamma function:

Γ(s) = ∫₀^∞ dt t^{s−1} e^{−t} ,   for Re(s) > 0,

extends analytically to C as a meromorphic function with simple poles at 0, −1, −2, …, −m, …, thanks to the relation: Γ(1 + s) = s Γ(s).

(11.1.2) The functional equation (11.1) may be understood as a symmetry property of the distribution of the r.v.:

N := (π/2) T(2) ,   where T(2) = T₁^(3) + T̃₁^(3),

with T₁^(3) and T̃₁^(3) two independent copies of the first hitting time of 1 by a BES(3) process starting from 0. Indeed, one has:

2 ξ(2s) = E[N^s] .   (11.3)

Hence, if we assume that the functional equation (11.1) holds, we deduce from (11.3) that N satisfies:

E[N^s] = E[N^{(1/2) − s}] ,   for any s ∈ C,   (11.4)

or, equivalently: for any Borel function f : IR₊ → IR₊,

E[f(N)] = E[f(1/N) √N] .

(11.1.3) For the moment, we give a proof of (11.4), hence of (11.1), as a consequence of Jacobi's identity for the theta function:

Θ(1/t) = √t Θ(t) ,   where Θ(t) ≡ Σ_{n=−∞}^{+∞} e^{−π n² t}.   (11.5)

In paragraphs 11.2 and 11.3, an explanation of this symmetry property of N is given. Indeed, the density of N, which we denote by ϕ(t), satisfies:

ϕ(t) = 2 t Θ″(t) + 3 Θ′(t) ,

and it is easily deduced from Jacobi's identity (11.5) that:

ϕ(1/t) = t^{5/2} ϕ(t) ,

which is equivalent to (11.4).

The following exercise should help to understand better the deep connections which exist between the Riemann zeta function and the distribution of T₁^(3) (and its powers of convolution).

Exercise 11.1 Let k > 0, and let T(k) denote an IR₊-valued r.v. such that:

E[exp(−(λ²/2) T(k))] = (λ/sh λ)^k   (11.6)

(such a variable exists, thanks to the infinite divisibility of T(1); in fact, T(k) may be represented as ∫₀¹ ds ρ²_(k)(s), where (ρ_(k)(s), s ≤ 1) denotes here the (2k)-dimensional Bessel bridge).

1. Prove that, for any k > 0 and any m > 0, one has:

Γ(m) E[1/(T(k))^m] = (1/2^{m−k−1}) ∫₀^∞ dλ λ^{k+2m−1} e^{−λk}/(1 − e^{−2λ})^k .

2. Assume k is an integer, k ≥ 1. Recall that:

1/(1 − x) = Σ_{n=0}^∞ x^n ,   and, for k ≥ 2:   (k−1)!/(1 − x)^k = Σ_{n=k−1}^∞ n(n−1)⋯(n−(k−2)) x^{n−(k−1)}   (x < 1);

more generally, for any k > 0, we have:

1/(1 − x)^k = Σ_{p=0}^∞ α_p^(k) x^p ,   with α_p^(k) = Γ(k+p)/(Γ(k) Γ(p+1)) .

Deduce, from the first question, that:
Γ(m) E[1/(T(k))^m] = (Γ(k + 2m)/2^{m−k−1}) Σ_{p=0}^∞ α_p^(k)/(k + 2p)^{k+2m} .

3. Prove that, for any integer k ≥ 1, it is possible to express E[1/(T(k))^m] in terms of the Γ and ζ functions. Show the following formulae, for k = 1, 2, 3, 4:

E[1/(T(1))^m] = (Γ(2m+1)/(2^{m−2} Γ(m))) Σ_{n=0}^∞ 1/(2n+1)^{2m+1} = (Γ(2m+1)/(2^{m−2} Γ(m))) (1 − 1/2^{2m+1}) ζ(2m+1) ;

E[1/(T(2))^m] = (Γ(2m+2)/(2^{3m−1} Γ(m))) ζ(2m+1) ;

E[1/(T(3))^m] = (Γ(2m+3)/(2^{m−1} Γ(m))) { (1 − 1/2^{2m+1}) ζ(2m+1) − (1 − 1/2^{2m+3}) ζ(2m+3) } ;

E[1/(T(4))^m] = (Γ(2m+4)/(3 · 2^{3m} Γ(m))) { ζ(2m+1) − ζ(2m+3) } .

4. Deduce, from the comparison of the expressions of E[1/(T(1))^m] and E[1/(T(2))^m], that:

U²/T(2) =(law) Y²/T(1) =(law) Y² (sup_{u≤1} R_u^(3))² ,   (∗)

where U denotes a uniform r.v. on [0,1], independent of T(2), Y a discrete r.v., independent of T(1), such that P(Y = 1/2^p) = 1/2^p (p = 1, 2, …), and (R_u^(3), u ≥ 0) denotes a BES(3) process starting from 0, whose supremum over [0,1] satisfies, by scaling, 1/T(1) =(law) (sup_{u≤1} R_u^(3))².
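Two of the statements above lend themselves to immediate numerical verification: Jacobi's identity (11.5), and the k = m = 1 case of question 3, which reads E[1/T(1)] = ∫₀^∞ dλ λ²/sh(λ) = (7/2) ζ(3). The truncation levels and grid sizes below are arbitrary choices for a sketch, not part of the text:

```python
import math

def theta(t, nmax=60):
    # Jacobi theta function: Θ(t) = Σ_{n=-∞}^{∞} exp(-π n² t)
    return 1.0 + 2.0 * sum(math.exp(-math.pi * n * n * t) for n in range(1, nmax + 1))

# Jacobi's identity (11.5): Θ(1/t) = √t · Θ(t)
for t in (0.5, 1.0, 2.3, 5.0):
    assert abs(theta(1.0 / t) - math.sqrt(t) * theta(t)) < 1e-12

# k = m = 1 case of question 3: E[1/T(1)] = ∫₀^∞ λ²/sinh(λ) dλ = (7/2)·ζ(3)
n_grid, lam_max = 200000, 60.0
h = lam_max / n_grid
integral = sum(((i + 0.5) * h) ** 2 / math.sinh((i + 0.5) * h) for i in range(n_grid)) * h
zeta3 = sum(1.0 / n ** 3 for n in range(1, 200001))
assert abs(integral - 3.5 * zeta3) < 1e-5
```

The integral form comes from question 1: with k = m = 1 the right-hand side is 2 ∫₀^∞ dλ λ² e^{−λ}/(1 − e^{−2λ}) = ∫₀^∞ dλ λ²/sh(λ).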
of the law of two ddimensional Bessel processes put back to back.8) m2 = e 4 which will be undertaken in paragraph 11.7) holds. which is distributed as the 3dimensional standard Bessel bridge.2 The agreement formulae 169 11. IR+ ) → IR+ : 2 Thus. and. (d) . and me = sup e(u). denote2 by σµ and σµ their respective ﬁrst hitting times of 1.4) to show: 2 (law) π T(2) (11. and deﬁne µ = d 2 − 1.3.11.2. Let ρu = and ρv = ˜ . if u ≤ σµ Ru Rσµ +σµ −u . it will remain. for every measurable functional F : C([0. and the agreement formulae between laws of Bessel processes and Bessel bridges (11.4). one has: E[f (m2 )] = e π E f 2 1 T(2) T(2) (11.2) The identity (11. between the law of the standard ddimensional Bessel bridge on one hand.7) where (e(u).1) Using (Brownian) excursion theory.2 The right hand side of (11. σµ is another (sometimes more convenient) notation for T1 . on the other hand.2. v ≤ 1) denotes the standard Bessel bridge with dimension d. (11. Consider (Ru . for any dimension d > 0. 1 ρv(σµ +σµ ) . if σµ ≤ u ≤ σµ + σµ . u≤1 def Assuming (11. v ≤ 1. we have. u ≥ 0) two independent BESµ ≡ BES(d)processes starting from 0. u ≥ 0) and (Ru . if (rv .7) will appear below as a particular consequence of the following agreement formulae which are now presented as relationships.1 Let d > 0. u ≤ 1) denotes the normalized Brownian excursion. σµ + σµ Then. in order to ﬁnish the proof of (11. Here is this relationship: Theorem 11. 1]. for every Borel function f : IR+ → IR+ . we will show below that.
170 11 Probabilistic representations of the Riemann zeta function E[F (rv .9) We now remark that the identity (11.1. and let sµ be the unique time at which this supremum is attained.2. s0 ) = 0 and in particular: s0 (law) = σ0 . v ≤ 1) = (˜v .2 We use the same notation as in Theorem 11. (11. both due to D.1 Let mµ be the supremum of the standard Bessel bridge with dimension d = 2(1+µ).9). we have: ρ (rv . nµ is deﬁned on the .2.12) (law) σ0 1 . Theorem 11.10) Proof: This is immediate from the identity (11.9) above. resp.2)).1.1. for µ > 0. that. resp. µ considered on the left hand side of (11. but now d = 2. which relies upon two diﬀerent descriptions. + E[f (m2 . we have: (m2 . but now µ = 0 (or. corresponds to 1/(σµ + σµ ). Then.9). although this is a digression from our main theme.10) below. of a σ−ﬁnite measure nµ already considered by PitmanYor ([73]. Williams. for every Borel function f : IR2 → IR+ . v ≤ 1)(σµ + σµ )µ ].1.436440) and BianeYor ([17]. v ≤ 1)] = 2µ Γ (µ + 1) E[F (˜v . considered on the right hand side of (11. m2 0 (11. It should be noted. σ0 + σ0 σ0 + σ0 (11. sµ . in the particular case µ = 0 (or d = 2).11) (11. Then. Then. v ≤ 1). σµ + σµ σµ + σµ (σµ + σµ )µ . ρ (11. σµ /(σµ + σµ ). paragraph (3.1 We use the same notations as in Corollary 11.1. sµ )] = 2µ Γ (µ + 1) E f µ σµ 1 . Theorem 11. Corollary 11. in the case µ = 1/2.7) follows from the identity (11. we have. p. since m2 . (law) Corollary 11.1 yields a remarkable identity in law. d = 2).3) A family of excursion measures We now give a proof of Theorem 11.
starting and ending at 0. (ii)For every x > 0. nµ may be characterized by either of the following descriptions. a. Second description of nµ (i’)The distribution of V under nµ is given by: nµ (V ∈ dv) = αµ dv . 11. conditionally on V = v. u First description of nµ (i) The distribution of M under nµ is given by: nµ (M ≥ x) = x−2µ (x > 0) . such that ω(0) = 0 and ω is absorbed at 0 at the ﬁrst (strictly positive) instant it reaches 0 again. this maximum M is attained at a unique time R (0 < R < V . ∞[. during the time interval [0. v]. u ≤ v) is a Bessel bridge of index µ.8) is reminiscent of the very wellknown KolmogorovSmirnov identity: . and is carried by the space Ω abs of the trajectories ω. the process (eu .8) 171 canonical space C(IR+ . conditionally on M = x. v µ+1 where αµ = 1 2µ Γ (µ) (ii’)For every v ∈]0. stopped at 0 the ﬁrst time they reach level x. For these descriptions.s.11.1) The identity: m2 = e (law) π2 T(2) 4 (11. u ≤ R) and (eV −u . M (ω) = sup eu (ω) .3 A discussion of the identity (11.). we shall use the notation: eu (ω) = ω(u) .3 A discussion of the identity (11. u ≤ V − R) are two independent BES µ processes. IR+ ). and the two processes (eu .8) (11. V (ω) = inf{u > 0 : eu (ω) = 0} .3.
No satisfactory explanation has.14) may be written as: 2 (sup b(u) − inf b(u))2 = m2 + m˜ . t ≥ 0) in terms of the Brownian bridge (b(u). from which it is easily deduced (see. is a normalized Brownian excursion. but. t≥0 u≤1 (law) Hence.15) might be explained by the independence (which does not hold) of {(supu≤1 b(u) − inf u≤1 b(u))2 − m2 } and m2 .14). therefore. t ≤ 1) (see Vervaat [89]. and also Biane [12]). that: me = sup b(u) − inf b(u) .14) or (11. until now. 37) that: sup(Bt  − t) = sup(b(u))2 . where ρ is the unique time ˜ at which b attains its minimum. e. RevuzYor [81]. putting them together. It follows from Vervaat’s representation of the normalized Brownian excursion (e(t). u ≤ 1) denotes here the standard 1dimensional Brownian bridge. inf u≤1 b(u)) presented below in (11. To conclude with this series of identities. on the righthand side of (11.172 def 11 Probabilistic representations of the Riemann zeta function m2 = sup(b(u))2 = b u≤1 (law) π 2 (3) T 4 1 (law) = Tπ/2 (3) (11. the identity (11. we may write the identity in law (11. t ≥ 0.13). u≤1 u≤1 (law) def and. p.2) rules out the possibility that (11.13) where (b(u). b and ˜ are two independent 1b dimensional Brownian bridges.10).14) where. i. u ≤ 1): Bt = (1 + t)b t 1+t .e. Chung [27] pointed out the puzzling identity in law: 2 m2 = m2 + m˜ e b b (law) (11.3.g. and the explicit computation of the joint law of (supu≤1 b(u).: the process e(t) = b((ρ + t) [mod 1]) − b(ρ).15) has been found. t ≤ 1..8) or (11. b b u≤1 u≤1 (law) (11. we use b b the wellknown representation of brownian motion (Bt . Exercise (3.15) No pathwise explanation of the identities (11.14) in the equivalent form: . been given for the factor (π/2)2 in either formula (11.
b ). u ≥ 0) is a 3dimensional Bessel process. and (R(u). with variance 1. s− .3 A discussion of the identity (11.3. that: b b E exp − and α2 + (s + s− )2 E exp − b 2 b = πα 2 sh πα 2 α2 2 m 2 b = πα 2 sh πα 2 2 . Gs− ≤ y) = .17).11.15) may be translated into an identity in law between independent exponential and Bernoulli variables. Paragraph 4.15) (and as we remarked above.15) is equivalent to (11. We now remark.1). consequently. as an exercise. (11. (11. b b where: s+ = supu≤1 b(u). and b is the local time at b b level 0 of the standard Brownian bridge (b(u). the understanding of which does not seem obvious. u ≤ 1) may be characterized as follows: P (Gs+ ≤ x. together with the obvious equality: mb = max(s+ . after integrating with respect to λ: 2 P (Gs+ ≤ x. This proves both identities (11. G b b b λ ∈ dλ) = exp(− (coth x + coth y)) dλ (11. which is independent of b. the joint law of (s+ .16) ˜ where B and B denote two independent Brownian motions. Hint : Use the representation of B ± in terms of reﬂecting BM. one obtains. (The last identity in law in (11.8) 173 ˜ sup(R(u) − u) = sup(Bu  − u) + sup(Bu  − u) u≥0 (law) u≥0 t t≥0 0 + = sup(Bt − u≥0 − ds 1(Bs ≥0) ) + sup(Bt − t≥0 0 t (law) ds 1(Bs ≤0) ) (11.16) is left to the reader as an exercise.) . centered.2) From the theory of Brownian excursions.14)). that the identity in law (11.2 (We keep the notation used in formula (11. Exercise 11. given in Chapter 4.17) 2 where G denotes a gaussian variable. b b coth x + coth y and it is now easy to deduce from this identity. Gs− ≤ y. s− = − inf u≤1 b(u).13) and (11. s− ).
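The identity m_b² =(law) (π²/4) T₁^(3) of (11.13) can be tested against the classical Kolmogorov–Smirnov law P(m_b ≤ x) = 1 − 2 Σ_{k≥1} (−1)^{k−1} e^{−2k²x²}: integrating e^{−α²x²/2} against the corresponding density must reproduce λ/sh(λ) with λ = πα/2, the value of E[exp(−λ² T₁^(3)/2)] given by (11.6) with k = 1. A numerical sketch (the lower cutoff x = 0.25, below which the density is negligible, and all grid sizes are arbitrary choices):

```python
import math

def ks_density(x):
    # density of m_b = sup_{u<=1}|b(u)|, differentiated from the
    # Kolmogorov-Smirnov law P(m_b <= x) = 1 - 2 Σ (-1)^{k-1} exp(-2 k² x²)
    kmax = min(200, int(7.0 / x) + 3)
    return 8.0 * x * sum((-1) ** (k - 1) * k * k * math.exp(-2.0 * k * k * x * x)
                         for k in range(1, kmax + 1))

def laplace_mb2(alpha, n=4000, a=0.25, b=6.0):
    # E[exp(-alpha² m_b²/2)] by midpoint quadrature; mass below x = a is negligible
    h = (b - a) / n
    return sum(math.exp(-0.5 * (alpha * (a + (i + 0.5) * h)) ** 2)
               * ks_density(a + (i + 0.5) * h) * h for i in range(n))

assert abs(laplace_mb2(0.0) - 1.0) < 1e-4          # total mass of the density
# m_b² = (π²/4)·T₁ in law, and E[exp(-λ² T₁/2)] = λ/sinh λ with λ = πα/2
for alpha in (0.5, 1.0, 2.0):
    lam = math.pi * alpha / 2.0
    assert abs(laplace_mb2(alpha) - lam / math.sinh(lam)) < 1e-4
```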
gt where gt = sup{s < t. resp. 2 s+ .2. respectively two. (P ( = ±1) = 1/2). from excursion theory.17). log(1 + T T ). Moreover.15) is equivalent to: 1+ T T 1+ T T (law) = 1+ 2T1 T 1+ 2T2 T where.18) 2 where (τλ . Bs = 0}. u ≤ 1) may be represented as 1 b(u) ≡ √ Bugt . Prove that: G ( b . on either side. Thanks to the ﬁrst description of n1/2 . u ≤ τλ ) exp(− τλ )] (11. u ≤ gT )  T = λ] = exp(λ)E[F (Bu .i. Here is now a proof of the identity (11. Recall that the standard Brownian bridge (b(u). T∗ ). ∞). IR) → IR+ . u ≤ 1 . 2 s− ) = b b and G ( b . log(1 + ) T T 2T ) T∗ T. λ ≥ 0) denotes the right continuous inverse of ( t . which is given in 11. where (T. T ).d. t ≥ 0) and T denotes here an exponential time with parameter 1/2. are three. we obtain the following equalities: P( T ∈ dλ) = exp(− λ) dλ. (T. T . Bernoulli variables . E[F (Bu . and for any measurable functional F : C([0. 2mb ) = (law) (law) T. log(1 + .174 11 Probabilistic representations of the Riemann zeta function 1. the following formula is easily obtained: . 2. independent exponential variables with parameter 1. which is assumed to be independent of B. Prove that the identity in law (11. the T ’s indicate independent exponential variables. and which are also independent of the i.
E exp − α2 τ 2 2 Mτ = 2α sh(2 α) (11. or (11. we remark that. s≤t s≤t Deﬁne X = + Sτ Then. 11. and (11. may be strengthened as follows. T) (law) = G (s+ . by scaling (i) √ (law) + − gT = G. and its relation to the Riemann zeta function (11.4 A strengthening of Knight’s identity. b b where we have used the notation introduced at the beginning of this subparagraph 11. + Sτ −. SgT .19) Furthermore. (3) .11. to simplify notations. we write τ instead of τ1 . b ) . and some extensions of Knight’s identity: for α ∈ IR. Now.1) In Chapter 9 of these Notes. exp − 175 τλ 2 λ = exp − (coth x + coth y) 2 (11.17). and (ii) (SgT . and independent of which is distributed as T(2) = T1 (law) (3) τ + − .18) and (11. s− . in order to obtain formula (11.4 A strengthening of Knight’s identity + − E Sτλ ≤ x. 2 Mτ (11. + Sτ Theorem 11. St = − inf Bs .20).20) may be presented in the equivalent form: τ (law) (3) = T2 (:= inf{u. X is uniformly distributed on [0.3 (PitmanYor [79]) + − where St = sup Bs . we have given a proof. 1]. This identity (11. Sτλ ≤ y.21).20) where.19) on the other hand. (Sτ + Sτ )2 ˜ + T1 .4.21) We now remark that the identity (11. it remains to put together (i) and (ii) on one hand. Ru = 2}).3.
P λSτ ≤ x. since we can write: τ τ = + (max (X. Prove that the identity in law: T2 (3) (law) = agrees with the identity in law (∗) of Exercise 11. 1]. u ≤ 1 .3 may be deduced from the identity (11. A simple proof of Theorem 11.19). on the righthand side. Exercise 11.19) as: λ2 + − . where. 1].20).3 1. on the right hand side Y and V are independent.4 ([79]. (law) U2 T(2) T(2) we just encountered above V2 2 (law) Y = derived in question 3 (3) T1 Hint : Remark that U = (2Y ) V . exp − τ 2 However. T(2) and X are assumed to be independent. 1 − X))−2 . 1 − X))−2 − 2 Mτ (Sτ + Sτ )2 and it is easily shown that: T2 (3) (law) = T(2) (max (X. once we use the scaling property of BM to write the lefthand side of (11.3 may be given.176 11 Probabilistic representations of the Riemann zeta function Equivalently. 2. in terms of a Vervaattype theorem for the pseudobridge 1 √ Buτ . Prove that. if X is uniformly distributed on [0. one has: E exp − α2 τ + − 2 (Sτ + Sτ )2 = α shα 2 . τ Theorem 11. We keep the above notation) . where.1. λSτ ≤ y. 1 − X) is uniformly distributed on [1/2. then V = max (X. a more complete explanation of Theorem 11.3 constitutes indeed a strengthening of Knight’s identity (11. Theorem 11.
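The reduction (11.21) can be checked on Laplace transforms: conditioning on V = max(X, 1−X), which is uniform on [1/2, 1], the identity τ/M_τ² =(law) T(2) (max(X, 1−X))^{−2} requires ∫_{1/2}^1 2 dv (λ/(v sh(λ/v)))² = 2λ/sh(2λ), the transform of T₂^(3) = 4 T₁^(3) in law. A numerical sketch (grid size arbitrary):

```python
import math

def lhs(lam, n=100000):
    # E over V uniform on [1/2,1] of E[exp(-λ² T(2)/(2 V²))], using
    # E[exp(-μ² T(2)/2)] = (μ/sinh μ)² for T(2) = T₁ + T̃₁ (midpoint rule)
    h = 0.5 / n
    total = 0.0
    for i in range(n):
        u = lam / (0.5 + (i + 0.5) * h)
        total += 2.0 * (u / math.sinh(u)) ** 2 * h
    return total

# must match E[exp(-λ² T₂/2)] = 2λ/sinh(2λ), the transform of the BES(3)
# hitting time of level 2 (by scaling, T₂ = 4·T₁ in law)
for lam in (0.3, 1.0, 2.5):
    assert abs(lhs(lam) - 2.0 * lam / math.sinh(2.0 * lam)) < 1e-8
```

In fact the substitution u = λ/v reduces the integral to 2λ(coth λ − coth 2λ) = 2λ/sh(2λ), so the check is exact up to quadrature error.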
u ≤ τ ) attains ˜ its minimum. τ ] at which (Bu .4).2) The above strengthening of Knight’s identity enables us to present now a very concise discussion of the identity in law (11. we have: E F 1 ˜ √ B(uτ ). u ≤ 1)] π for any measurable F : C([0. .3 that: (s+ + s− )2 = b b (law) π2 T(2) . t ≤ τ ) := (B((ρ + t)[mod τ ]) − B(ρ). we proved in 11.23) (s+ b Moreover. from Theorem 11. equal to E f τ + − (Sτ + Sτ )2 . 2 (11. which we write in the equivalent form: E[f (T(2) )] = E f 1 (π 2 /4) T(2) π T(2) .23) is equal to: E f 1 (π 2 /4) T(2) π T(2) . (11.3. but. IR+ ) → IR+ . 4 so that the quantity in (11. this expression is also equal to: 2 E f π 1 + s− )2 b (s+ + s− ) .4 A strengthening of Knight’s identity 177 Let ρ be the (unique) instant in the interval [0.4.4.22). 2 which is the righthand side of (11. u ≤ 1) the normalized Brownian excursion. Deﬁne the process B as ˜ (B(t).22) Indeed. t ≤ τ ) Then.11. 1]. u ≤ 1 τ = 2 E [me F (e(u). the lefthand side of (11. b b (11. now from Theorem 11.22) is. denoting by (e(u).
24) is a consequence of the following Theorem 11. s ≤ 1).5 (Jeulin [52]) of (e(s). instead of studying m2 . 11.2. Another important change with previous paragraphs is that.24).5 that: he = (law) 0 1 dt (1/2) k(t) e . the sequence IN∗ of positive integers will be replaced by the sequence of the zeros of the Bessel function Jµ . which is obtained by making the change of variables y = k(t). Then. and deﬁne: Let ( a . and the righthand side of this identity in law is equal to 2 me . in connection with the Riemann zeta µ function. y dx x e > t} . (law) 1 where he := 0 ds e(s) (11. or σµ +σµ as in paragraph 11. t ≤ 1) is a normalized Brownian excursion. We now prove (11. It will be shown below that (11. discussed above. a ≥ 0) be the family of local times e ∞ k(t) = sup{y ≥ 0.24) obviously provides us with another probabilistic representation of the Riemann zeta function.178 11 Probabilistic representations of the Riemann zeta function 11. between the distributions of me and T(2) . We deduce from Theorem 11.6 Some generalizations related to Bessel processes In this paragraph. it will be shown in this paragraph that the “Bessel zeta function” . the process ((1/2) k(t) e . the identity in law: he = 2 me .5 Another probabilistic representation of the Riemann zeta function Given the relations.
The aim of this paragraph is to exhibit a random variable X ν ≡ Xν ∗ which is distributed as θν (t)dt.n . . a > 0 . p. the sequence: 2 ν ∗ = jν−1. (11. for our purpose. the “zeta function”: ζλ∗ (s) = 1 . we associate to any ν > 0. s>0 . n ≥ 1 (11. we simply associate to a sequence λ∗ = (λn . instead. It may be fruitless.1) “Zeta functions” and probability. or a functional equation. We shall write ζ ν (s) for ζν ∗ (s). zeros of the Bessel function Jµ (see Watson [90]. λs n=1 n ∞ s>0 . a) In this paragraph.2) Some examples related to Bessel processes.g.6 Some generalizations related to Bessel processes 179 ζ ν which will be considered now has some close relationship with the time spent below 1 by a certain Bessel process. n ≥ 1) of strictly positive real numbers. . if Xλ∗ is a random variable e−λn t with cλ∗ = θλ∗ (t) = cλ∗ ζλ∗ (1) n=1 with distribution θλ∗ (t)dt. s > 0. e.6. We then have the λ n=1 n ∞ In the sequel. we shall assume that: ζλ∗ (1) = elementary Proposition 11. 1 < ∞. n ≥ 1) denotes the increasing sequence of the simple.: an Eulerproduct representation.26) where (jµ. to deﬁne which properties a “zeta function” should satisfy.11. The following series representation shall play an essential rˆle: o . we have: ζλ∗ (s)Γ (s) = ζλ∗ (1)E (Xλ∗ )s−1 . or .n . (11. (11.1 Deﬁne the probability density: ∞ 1 . positive. 498). . and θν (t) for θν ∗ (t). Then.6.25) Proof: This is an immediate consequence of the equality: ∞ 1 Γ (s) as = 0 dx xs−1 e−ax .
Now.28) Ey ⎣exp −α du 1(Ru ≤y) ⎦ = √ y 2α Iν−1 0 ν 2) Consequently.29) Corollary 11.n t .6 1) Let y > 0. starting from y at time 0. the following probabilistic representation of ζ ν holds: ⎡⎛ ⎞s−1 ⎤ ∞ ν ζ (1) 1 ⎢ ⎥ . and Py the law of the Bessel process (Rt .6: 1) It may now be easier to use the following notation: (ν) Ry (u). E ν ⎣⎝ du 1(Ru ≤y) ⎠ ⎦ .1 For any y > 0. under Py . 0 Consequently. 2 t≥0 since: ζ ν (1) = 1 4ν (11. a candidate for the variable X ν is 1 1 X ≡ 2 2 y 2y 2y ∞ ν du 1(Ru ≤y) . the distribution of the random variable: ∞ Xy = 0 du 1(Ru ≤y) ∞ is 1 ν θ 2y 2 t 2y 2 dt . under Py . 2 + j2 x Iν−1 x ν−1.6.30) Proof of Theorem 11. we have: ⎡ ⎤ ∞ √ 2ν Iν ν (y 2α) (11. u ≥ 0 denotes the Bessel process with index ν.n n=1 (see Watson [90]. p. where: θν (t) = (4ν) n=1 e−jν−1. t ≥ 0).180 11 Probabilistic representations of the Riemann zeta function 1 Iν 1 (x) = 2 . we may prove the following ∞ x>0 (11. with ζ ν (1) = ζ ν (s)Γ (s) = (2y 2 )s−1 y 4ν 0 (11.27) ν Theorem 11. with index ν. Then. 498). starting at y at .
the CiesielskiTaylor identities: ∞ du 1(R(ν) (u)≤y) = Ty (R0 0 (law) (ν−1) ) 0 Hence. t ≥ 0) may be represented as: ⎞ ⎛ t exp(Bt + νt) = R (ν) (ν) ⎝ 0 du exp 2(Bu + νu)⎠ .32).28). for example). (11.33) 2 0 (11. for any ν > 0: ⎡⎛ ⎞s−1 ⎤ ∞ ν ζ (1) ⎢ ⎥ ζ ν (s)Γ (s) = s−1 E ⎣⎝ du exp 2(Bu + νu)1(Bu +νu≤0) ⎠ ⎦ (11. we obtain: ⎡ ⎛ ⎞⎤ ∞ E ν−1 (exp −αTy ) ν Ey ⎣exp ⎝−α du 1(Ru ≤y) ⎠⎦ = 0 ν E0 (exp −αTy ) 0 and. t ≥ 0) be a real valued Brownian motion starting from 0. t ≥ 0) with the help of formula (11. t ≥ 0) denotes Brownian motion starting from 0. timechanging R(ν) into (exp(Bt + νt). and µ = ν − 1 (see Kent [56]. if (Bt .31) 2µ Γ (µ + 1)Iµ (y 2α) for µ = ν. it suﬃces to use the following identity: √ (y 2α)µ µ √ E0 (exp −αTy ) = . we have seen. with the help of this remark. 2) The proof of the second statement of the proposition now follows immediately from formulae (11. Then. then (exp(Bt + νt). starting from 1 at time 0.6.6. (11. Hence.11.6 Some generalizations related to Bessel processes 181 time 0. We now recall (see Chapter 6. to deduce formula (11. Corollary 11. in particular) that. and proved. in Chapter 4.32) where (R (t). we have.3) The particular case ν = 3 . Then. we obtain the following representation of ζ ν (s). and of the strong Markov property.2 Let (Bt . 2 . t ≥ 0) denotes here the Bessel process with index ν.28) and (11.27).
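Before specialising, note that (11.30) can be tested numerically in the case ν = 3/2 treated next: since J_{1/2}(z) = (2/(πz))^{1/2} sin(z), its positive zeros are j_{1/2,n} = nπ, and ζ^{3/2}(1) = Σ_n (nπ)^{−2} should equal 1/(4ν) = 1/6. A short check (truncation level arbitrary):

```python
import math

nu = 1.5
# the zeros of J_{1/2} are exactly nπ, so ζ^{3/2}(1) = Σ_n 1/(nπ)²
zsum = sum(1.0 / (n * math.pi) ** 2 for n in range(1, 2000001))
# formula (11.30): ζ^ν(1) = 1/(4ν)
assert abs(zsum - 1.0 / (4.0 * nu)) < 1e-6
```

This is just π^{−2} ζ(2) = π^{−2} (π²/6) = 1/6, consistent with (11.30).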
we used T(2) as a notation for Σ.34) ⎞ s −1 ⎤ ∞ 2 ⎥ ⎠ dt exp(2Bt + 3t)1(2Bt +3t≤0) ⎦ 0 11.1) We begin with the most important case ν = 3 .n = nπ Consequently. we have ns n=1 ∞ 2 πz 1/2 sin(z) .182 11 Probabilistic representations of the Riemann zeta function We then have: ν − 1 = 1 . deﬁne: Σ = σ + σ .7. we may now write down the main result contained in Theorem 11.26). Then.2 We simply write ζR (s) = ⎛⎛ 3· s Γ 2 2s/2 s π ∞ ζR (s) = E1 3/2 ⎜⎝ ⎝ ⎞ s −1 ⎞ 2 ⎟ ⎠ du 1(Ru ≤1) ⎠ 0 ⎡⎛ ⎢ = E ⎣⎝ (11. from the deﬁnition of ν ∗ 2 given in (11. we have: j1/2. and we are interested. s ≥ 0) denotes the s (5) Bessel process with dimension 5 (or index 3/2). ∞ Theorem 11. in the sequence of positive zeros of J 1 (z) = 2 Therefore. in the particular case ν = 3/2. where (Rs . Moreover. for which we simply 2 write X for X ν and Σ for Σ ν−1 .6 and its Corollaries.7 Some relations between X ν and Σ ν−1 ≡ σν−1 + σν−1 (11. at the beginning of this Chapter. where σ and σ are two independent copies of the ﬁrst (3) hitting time of 1 by BES 0 . (law) . Recall that. in the following form 1 . Proposition 11. which now becomes more convenient.7 Let X = 0 ds 1(R(5) ≤1) . starting from 1.
36). and √ √ ˜ = 3 2α coth( 2α) − 1 √ E exp(−αΣ) (11.39). where U and V denote two independent uniform r. is obtained by comparing formula (11.38) and (11.37) follows immediately from (11.1)) with.36) ˜ where H and Σ are independent. equivalently: H = U V 2 = (1 − (law) (law) 1 √ − 1 dh h (0 < h < 1) √ 2 U) . a random variable3 which satisﬁes: 3 ˜ for every Borel function f : IR+ → IR+ .b Za+b.34) with the deﬁnition of the function 3 ˜ That is: Σ is obtained by sizebiased sampling of Σ.11.35) 2 Then. Σ and X are assumed to be independent.v’s. .b+c = Za. √ (law) √ Remark: The identity in law: 1 − U = V U which appears at the end of point a) above is a particular case of the identity in law between beta variables: (law) Za. E f (Σ) = E[f (Σ)Σ] (11. b) ˜ (law) Σ = Σ+X (11.38) 2α (this is a particular case of formula (11. which are given by: √ √ 2α coth( 2α) − 1 E [exp(−αX)] = 3 (11.37) may be deduced from ˜ the explicit knowledge of the Laplace transforms of X and Σ.36) was discovered. we have: a) (law) ˜ X = HΣ (11.37) where.7 Some relations between X ν and Σ ν−1 ≡ σν−1 + σν−1 183 ˜ Consider Σ. which is in fact how the identity (11. and P (H ∈ dh) = or. on the righthand side.28). here: a = b = c = 1 Proof: a) Both identities in law (11. This second proof.36) and (11.c (see paragraph (8. b) It may be interesting to give another proof of the identity in law (11.39) sh2 ( 2α) The identity in law (11.
By doing so, we obtain:
$$E\!\left[X^{\frac{s}{2}-1}\right] = \frac{6}{s(s-1)}\left(\frac{2}{\pi}\right)^{s/2}\xi(s),$$
and, changing $s$ into $2k + 2$, we get:
$$E[X^k] = \frac{3}{2}\,\frac{E[\Sigma^{k+1}]}{(k+1)(2k+1)} \quad (k \geq 0).$$
Now, we remark that
$$E[H^k] = E[U^k]\,E[V^{2k}] \equiv \frac{1}{(k+1)(2k+1)} \quad (k \geq 0),$$
so that $E[X^k] = E[(H\tilde{\Sigma})^k]$, which implies (11.36).

Corollary 11.7.1 (We use the same notations as in Theorem 11.7.)
a) The random variable $\tilde{\Sigma}$ satisfies the identity in law
$$\tilde{\Sigma} \overset{\text{(law)}}{=} \Sigma + H\,\tilde{\Sigma}_1, \quad (11.40)$$
where, on the right-hand side, $\tilde{\Sigma}_1$ is independent of the pair $(\Sigma, H)$, and $\tilde{\Sigma}_1$ is distributed as $\tilde{\Sigma}$.
b) Equivalently, the function $g(\lambda) := E[\exp(-\lambda\Sigma)] \equiv \left(\frac{\sqrt{2\lambda}}{\mathrm{sh}(\sqrt{2\lambda})}\right)^2$ satisfies:
$$\frac{g'(\lambda)}{g(\lambda)} = -\frac{1}{2\sqrt{\lambda}}\int_0^{\lambda}\frac{dx}{x^{3/2}}\,(1 - g(x)). \quad (11.41)$$

Proof: The identity (11.40) follows immediately from (11.36) and (11.37). We then deduce from (11.40) the identity
$$\frac{g'(\lambda)}{g(\lambda)} = \int_0^1 dh\left(\frac{1}{\sqrt{h}} - 1\right)g'(h\lambda),$$
from which, using integration by parts, (11.41) follows.

(11.7.2) We now present an extension, for any $\nu$, of the identity in law (11.37).
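Equation (11.41) lends itself to a direct numerical check. In the sketch below (mpmath assumed), the substitution $x = t^2$ tames the $x^{-3/2}$ singularity, and a short series is used for $1 - g(x)$ near $0$ to avoid cancellation:

```python
import mpmath as mp
mp.mp.dps = 30

def g(lam):  # g(lambda) = E[exp(-lambda*Sigma)] = (sqrt(2*lam)/sinh(sqrt(2*lam)))^2
    u = mp.sqrt(2*lam)
    return (u/mp.sinh(u))**2

def one_minus_g(x):
    # stable near 0: 1 - g(x) = (2/3)x - (4/15)x^2 + O(x^3)
    if x < mp.mpf('1e-8'):
        return 2*x/3 - 4*x**2/15
    return 1 - g(x)

lam = mp.mpf(1)
lhs = mp.diff(g, lam)/g(lam)

# substituting x = t^2 makes the integrand of (11.41) bounded (it tends to 4/3 at 0)
rhs = -1/(2*mp.sqrt(lam)) * mp.quad(lambda t: 2*one_minus_g(t**2)/t**2,
                                    [0, mp.sqrt(lam)])
assert abs(lhs - rhs) < mp.mpf('1e-12')
```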
Proposition 11.7.3 Let $X^{\nu} = \int_0^{\infty} ds\,1_{(R_s^{\nu} \leq 1)}$, where $(R_s^{\nu},\, s \geq 0)$ denotes the Bessel process with index $\nu$, starting from 1, and define $\Sigma^{\nu-1} = \sigma_{\nu-1} + \sigma'_{\nu-1}$, where $\sigma_{\nu-1}$ and $\sigma'_{\nu-1}$ are two independent copies of the first hitting time of 1 by $\mathrm{BES}_0^{\nu-1}$, the Bessel process with index $\nu - 1$ starting from 0. Consider finally $\tilde{\Sigma}^{\nu-1}$, a random variable which satisfies:
$$E[f(\tilde{\Sigma}^{\nu-1})] = \nu\,E[f(\Sigma^{\nu-1})\,\Sigma^{\nu-1}] \quad \text{for every Borel function } f : \mathbb{R}_+ \to \mathbb{R}_+.$$
Then, we have:
$$\tilde{\Sigma}^{\nu-1} \overset{\text{(law)}}{=} \Sigma^{\nu-1} + X^{\nu}, \quad (11.42)$$
where the random variables on the right-hand side are assumed to be independent.

Proof: From formula (11.31), we deduce:
$$E\left[\exp\left(-\lambda\Sigma^{\nu-1}\right)\right] = \left(\frac{(\sqrt{2\lambda})^{\nu-1}}{2^{\nu-1}\,\Gamma(\nu)\,I_{\nu-1}(\sqrt{2\lambda})}\right)^2,$$
and, taking derivatives with respect to $\lambda$ on both sides, we obtain:
$$E\left[\Sigma^{\nu-1}\exp\left(-\lambda\Sigma^{\nu-1}\right)\right] = \left(\frac{x^{\nu-1}}{2^{\nu-1}\,\Gamma(\nu)\,I_{\nu-1}(x)}\right)^2\,\frac{2}{x}\,\frac{I_{\nu}(x)}{I_{\nu-1}(x)}, \quad (11.43)$$
where $x = \sqrt{2\lambda}$, and we have used the recurrence formula:
$$(\nu - 1)\,I_{\nu-1}(x) - x\,I'_{\nu-1}(x) = -x\,I_{\nu}(x).$$
It now suffices to multiply both sides of (11.43) by $\nu$ and to use formula (11.28) to conclude.

Remark: The comparison of Theorem 11.7 and Proposition 11.7.3 suggests several questions, two of which are:
(i) is there an extension of the identity in law (11.36) for any $\nu$, in the form: $X^{\nu} \overset{\text{(law)}}{=} H_{\nu}\,\tilde{\Sigma}^{\nu-1}$, for some variable $H_{\nu}$ which would be independent of $\tilde{\Sigma}^{\nu-1}$?
(ii) is there any relation between the functional equation for $\zeta$ and the identity in law (11.42), or equivalently (11.41)?
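Both the recurrence formula and the outcome of the proof can be tested numerically: by (11.42), dividing $\nu$ times (11.43) by the Laplace transform of $\Sigma^{\nu-1}$ gives $E[\exp(-\lambda X^{\nu})] = \frac{2\nu}{x}\frac{I_{\nu}(x)}{I_{\nu-1}(x)}$, $x = \sqrt{2\lambda}$, which at $\nu = 3/2$ must agree with the explicit formula (11.38). The sketch assumes mpmath:

```python
import mpmath as mp

nu = mp.mpf('1.5')

# recurrence used in the proof: (nu-1) I_{nu-1}(x) - x I'_{nu-1}(x) = -x I_nu(x)
x0 = mp.mpf('0.8')
rec = ((nu - 1)*mp.besseli(nu - 1, x0)
       - x0*mp.diff(lambda t: mp.besseli(nu - 1, t), x0)
       + x0*mp.besseli(nu, x0))

# E[exp(-lam*X^nu)] = (2*nu/x) I_nu(x)/I_{nu-1}(x), x = sqrt(2*lam),
# must agree with (11.38) at nu = 3/2
err = max(abs((2*nu/mp.sqrt(2*lam))*mp.besseli(nu, mp.sqrt(2*lam))
              / mp.besseli(nu - 1, mp.sqrt(2*lam))
              - 3*(mp.sqrt(2*lam)*mp.coth(mp.sqrt(2*lam)) - 1)/(2*lam))
          for lam in [0.5, 1.0, 3.0])

assert abs(rec) < 1e-12 and err < 1e-12
```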
. Theorem 11. RevuzYor [81].16). Exercise (1. FLy ) such that. therefore.44). and Ly (ω) = sup{t ≥ 0 : Rt (ω) = y}. and we disintegrate Py with respect to the law of Ly . for example. We obtain: ⎡ ⎛ 1 ν⎣ ν2 Ey Z exp ⎝ ν 2 1 = ν Ly 0 ⎞⎤ du ⎠⎦ 2 Ru ⎛ Ly ⎡ ν2 ν ν Py (Ly ∈ dt)Ey ⎣Z exp ⎝ 2 0 ⎤ ⎞ du ⎠  Ly = t⎦ .8 Let y > 0.378) or FitzsimmonsPitmanYor [41]). there exists a σﬁnite measure My on (Ω. More precisely. the measures Py F are all mutually absolutely continuous. p. we use the absolute continuity relationship between Py and Py : ν Py Ft = Rt y ν ⎛ ν2 exp ⎝− 2 0 t ⎞ du ⎠ 0 · Py 2 Ru Ft . ν as ν > 0 varies. 2 Ru (11. we have: ⎡ ⎛ 1 ν⎣ ν2 My (Z) = Ey Z exp ⎝ ν 2 Ly Ly 0 ⎞⎤ du ⎠⎦ . IR+ ). for every variable Z ≥ 0.44) ν Proof: We consider the righthand side of (11. we deﬁne Rt (ω) = ω(t) (t ≥ 0).8 ζ ν (s) as a function of ν In this paragraph. and every ν > 0. which is FLy measurable. it is wellknown that conditioning with respect to Ly = t amounts to condition with respect to Rt = y (see. we show that the dependency in ν of the function ζ ν (s) may be understood as a consequence of the following Girsanov type relationship ν between the probability measures Py .186 11 Probabilistic representations of the Riemann zeta function 11. Then. On the canonical space Ω = C(IR+ .45) ν 0 Next. 2 Ru Now. we have: ⎡ ⎛ ν2 ν Ey ⎣Z exp ⎝ 2 Ly 0 ⎞ ⎡ ⎤ ⎛ du ⎠ ν2 ν  Ly = t⎦ = Ey ⎣Z exp ⎝ 2 Ru 2 t 0 ⎞ ⎤ du ⎠  Rt = y ⎦ 2 Ru (11.
so that the expression in (11.45) is in fact equal to:
$$\frac{p_t^0(y, y)}{p_t^{\nu}(y, y)}\,E_y^0\left[Z \mid R_t = y\right], \quad (11.46)$$
where $\{p_t^{\nu}(x, y)\}$ is the family of densities of the semigroup $P_t^{\nu}(x, dy) \equiv p_t^{\nu}(x, y)\,dy$ associated to $\{P_x^{\nu}\}$. Moreover, it is known that:
$$P_y^{\nu}(L_y \in dt) = \frac{\nu\,p_t^{\nu}(y, y)}{y}\,dt$$
(see Pitman-Yor [72]). Finally, the first expression we considered in the proof is equal to:
$$\frac{1}{\nu}\,E_y^{\nu}\!\left[Z\exp\!\left(\frac{\nu^2}{2}\int_0^{L_y}\frac{du}{R_u^2}\right)\right] = \frac{1}{\nu}\int_0^{\infty}P_y^{\nu}(L_y \in dt)\,\frac{p_t^0(y, y)}{p_t^{\nu}(y, y)}\,E_y^0[Z \mid R_t = y],$$
which is equal to:
$$\frac{1}{y}\int_0^{\infty}dt\,p_t^0(y, y)\,E_y^0[Z \mid R_t = y].$$
Hence, this expression does not depend on $\nu$.

Corollary 11.8.1
1) Let $\tilde{\theta}_0(t)\,dt$ be the distribution of $X_1$ under the $\sigma$-finite measure $M_1$, with $X_y = \int_0^{\infty}du\,1_{(R_u \leq y)}$.
2) For every $y > 0$, the distribution of $X_y$ under $M_y$ is $\tilde{\theta}_0\!\left(\frac{t}{y^2}\right)\frac{dt}{y^2}$, and we have:
$$\zeta^{\nu}(s)\,\Gamma(s) = \frac{1}{4\,(2y^2)^{s-1}}\,M_y\!\left[\left(\int_0^{\infty}du\,1_{(R_u \leq y)}\right)^{s-1}\exp\!\left(-\frac{\nu^2}{2}\int_0^{L_y}\frac{du}{R_u^2}\right)\right]. \quad (11.47)$$
3) For every $\nu > 0$ and $t > 0$, we have:
$$\frac{2}{\tilde{\theta}_0\!\left(\frac{t}{y^2}\right)}\sum_{n=1}^{\infty}\exp\!\left(-\frac{(j_{\nu-1,n})^2\,t}{2y^2}\right) = M_y\!\left[\exp\!\left(-\frac{\nu^2}{2}\int_0^{L_y}\frac{du}{R_u^2}\right)\,\Big|\,X_y = t\right]. \quad (11.48)$$
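The last exit formula quoted from Pitman-Yor [72] integrates to 1, as it must. With the standard density $p_t^{\nu}(y, y) = \frac{y}{t}e^{-y^2/t}I_{\nu}(y^2/t)$ (not restated in the text; a known closed form), the substitution $s = y^2/t$ turns the total mass into $\nu\int_0^{\infty}e^{-s}I_{\nu}(s)\,\frac{ds}{s}$, and the classical Laplace transform $\int_0^{\infty}e^{-ps}I_{\nu}(s)\frac{ds}{s} = \frac{(p - \sqrt{p^2 - 1})^{\nu}}{\nu}$ ($p \geq 1$) gives exactly 1 as $p \to 1$. A numerical check of that transform (mpmath assumed):

```python
import mpmath as mp

# p_t^nu(y,y) = (y/t) exp(-y^2/t) I_nu(y^2/t); with s = y^2/t the total
# mass of (nu/y) p_t^nu(y,y) dt becomes nu * int_0^oo exp(-p*s) I_nu(s) ds/s
# at p = 1; we check the closed form (p - sqrt(p^2-1))^nu / nu for p > 1
nu = mp.mpf('1.5')
err = mp.mpf(0)
for p in [mp.mpf('1.2'), mp.mpf(2), mp.mpf(5)]:
    num = mp.quad(lambda s: mp.exp(-p*s)*mp.besseli(nu, s)/s, [0, mp.inf])
    err = max(err, abs(num - (p - mp.sqrt(p**2 - 1))**nu/nu))

# letting p -> 1+ in the closed form gives 1/nu, hence total mass nu*(1/nu) = 1
assert err < 1e-10
```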
Consequently, the left-hand side of (11.48), i.e.: the "theta-function of index $\nu$", and the left-hand side of (11.47), i.e.: the "zeta-function of index $\nu$", are Laplace transforms in $\frac{\nu^2}{2}$. The last statement of the previous Corollary is confirmed by the explicit formulae found in Watson ([90], p. 502) for $\zeta^{\nu}(n)$, for $n$ a small integer (Watson uses the notation $\sigma_{\nu-1}^{(s)}$ instead of our notation $\zeta^{\nu}(s)$). In the following formulae, the function $\nu \to \zeta^{\sqrt{\nu}}(n)$ appears to be a completely monotonic function of $\nu$, i.e.: a sum (with positive coefficients) or a product of completely monotonic functions. Here are these formulae:
$$\zeta^{\sqrt{\nu}}(1) = \frac{1}{2^2\,\sqrt{\nu}}, \qquad \zeta^{\sqrt{\nu}}(2) = \frac{1}{2^4\,\nu\,(\sqrt{\nu} + 1)}, \qquad \zeta^{\sqrt{\nu}}(3) = \frac{1}{2^5\,\nu^{3/2}\,(\sqrt{\nu} + 1)(\sqrt{\nu} + 2)},$$
$$\zeta^{\sqrt{\nu}}(4) = \frac{5\sqrt{\nu} + 6}{2^8\,\nu^2\,(\sqrt{\nu} + 1)^2\,(\sqrt{\nu} + 2)(\sqrt{\nu} + 3)}. \quad (11.49)$$

Comments on Chapter 11

The origin of this chapter is found in Biane-Yor [17]. A detailed discussion of the agreement formula (11.9) is found in Pitman-Yor [78]. Smith-Diaconis [84] start from the standard random walk before passing to the Brownian limit to obtain the functional equation (11.1). Williams [93] presents a closely related discussion. We also recommend the more developed discussion in Biane [15].
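Since Watson's $\sigma_{\nu-1}^{(s)}$ is the Rayleigh sum $\sum_n j_{\nu-1,n}^{-2s}$, the formulae (11.49) can be checked directly by summing over Bessel zeros. The sketch below (mpmath's `besseljzero` assumed) tests the cases $n = 2, 3$ at $\sqrt{\nu} = 2$:

```python
import mpmath as mp

# zeta^mu(s) = sum_n j_{mu-1,n}^{-2s}; check the closed forms (11.49)
# at mu = sqrt(nu) = 2 (i.e. nu = 4), for s = 2 and s = 3
mu = mp.mpf(2)
zeros = [mp.besseljzero(mu - 1, n) for n in range(1, 201)]

z2 = sum(j**-4 for j in zeros)   # truncated zeta^mu(2); tail is O(1e-10)
z3 = sum(j**-6 for j in zeros)   # truncated zeta^mu(3); tail is negligible

nu = mu**2
assert abs(z2 - 1/(2**4*nu*(mu + 1))) < 1e-7
assert abs(z3 - 1/(2**5*nu**mp.mpf('1.5')*(mu + 1)*(mu + 2))) < 1e-10
```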
1994. P. 17. Biane. R. XXI. volume 1372 of Lecture Notes in e e Math.. China. In S´minaire de e e e Probabilit´s. 1990. P. Windings of random walks. Sci. Ann. C. 1987. C. 1991. Roberts. XXIII. 84(2):231–250. pages 248–306. Decomposition of brownian trajectories and some applications. P. Probab. L´vy processes. 111(1):23–101.. Adv. 14. volume 1526 of Lecture Notes in Math. Le Gall.. Springer.F. excursion and meander. Barlow. 1990. Knight. Palaiseau. volume 1247 of Lecture Notes in Math. Winding angle and maximum winding angle of the twoe dimensional random walk. pages e e e ´ 165–193. 11. 3. Bull. Pitman. 17(4):1377–1402... 10. Bertoin. Faraway. 5. 1995. 73(3):463–480. 1989. On the winding number problem with ﬁnite steps. Yor. Theory Related Fields. P. Probab. Came bridge University Press. and M. Bertoin. 30(4):651–670. e Sci. Bertoin and J. Cambridge. 16. J. pages 294–314. J. XXVI.. Ed. Berger and P. Springer. 2003. 1985. Kyoto Univ. Berthuet. e Berlin. 1986. Probab. 13. Math. J. Berlin. J. J. Berlin. In S´minaire de Probabilit´s. in Appl. Excursions of a BES0 (d) and its drift term (0 < d < 1). 119(2):147–156. Yor. e 4. Appl. 28(4):717–726. Sci. Biane. 9. Math. 1992. and M. ´ 6. 118(2):147–166. In S´minaire de probabilit´s. P. 15. P. e Math. Yor. pages e e 270–275. Bertoin.References 1. 1988. J.. M. On the Hilbert transform of the local times of a L´vy process. Math. Une extension multidimensionnelle de la loi de l’arc sinus. XIX. Ec. volume 121 of Cambridge Tracts in Mathematics. Etude de processus g´n´ralisant l’aire de L´vy. 1983/84. Berlin. 8. Theory Related e e e Fields. 12. 1989. In La fonction zˆta. Sur un calcul de F. Biane. 189 . Probab. Bull. J. Probab. Comparaison entre temps d’atteinte et temps de s´jour de certaines dife fusions r´elles. Polytech. Bertoin. volume 1123 of Lecture e e e Notes in Math. 2. Sur les z´ros des martingales continues. Springer. volume e e 1321 of Lecture Notes in Math. 7.. J. pages 190–196. 1987. Biane and M. Pitman. 
In S´minaire de Probabilit´s. 20(2):261–274. Biane. J. Valeurs principales associ´es aux temps locaux browniens. J. Yor. Un processus qui ressemble au pont brownien.. Path transformations connecting Brownian bridge. Bull. La fonction zˆta de Riemann et les probabilit´s. Biane. M. 1996. In S´minaire de Probabilit´s. 1988. Springer. B´lisle.. (2). Complements on the Hilbert transform and the fractional derivative of Brownian local times. Berlin. Fall 1990.. Az´ma and M. B´lisle and J. Springer. XXII. pages 291–296.. Notes from lectures given at the Probability Winter School of Wuhan.
Yor. Yor. XXIII. 1978. L. 24. Strasbourg. Some Brownian functionals and their laws. Berlin. Strasbourg.. 29. Phys. Fubini’s theorem for double Wiener integrals and the variance of the Brownian path. 27. Springer. Probab. Comm. P. Mouvement brownien et in´galit´ de Hardy dans L2 . On polymer conformations in elongational ﬂows. Carmona. e e (2). Probab. Yor. Berlin. E. 1989. M. Stochastics Stochastics Rep. 1. 1979.I. 40. XXII. The modiﬁed. In Select. The Martin boundary of the Brownian sheet. E. Getoor. B. 33.. 39. volume 1372 of Lecture Notes in Math. 1989. Yor. N. Symmetric stable processes. Stochastics Stochastics Rep. 10(1):244–246. 112(1):101–109. pages 139–147. 160(2):239–257. C.. 1991. Math. Math. K. Transl. Carmona. 50(12):1–33. Iberoamericana. and M. pages 171–189. 25(1):120–131. Excursions in Brownian motion. Some limit theorems for sums of independent random variables with inﬁnite mathematical expectations. 1988. Statist. Sci. Berlin. Petit. e . P. D. Borodin. 1982. 7(6):913–932. 20(3):1484–1497. Brownian local time. Berlin. S.A.. Jansons. Soc. and M. P. 1995.. pages 22–30. Probab. and Amer.. On the distribution of the Hilbert transform of the local time of a symmetric L´vy process. volume 649 of Lecture Notes in Math. pages 454–466.. Phys. Durrett. Probab. Brownian motion and analytic functions. Vol. Yor. and M.. pages 157–161. 23. Poincar´ Probab. 14(2):155–177. 20. 1994). F. A. C. Ark. J. Eisenbaum. F. Dubins and M. 1976. 28. 88(2):137–166. 26. In Stochastic partial diﬀerential equations (Edinburgh. 87(1):79–95. 25(3):1011–1058. Biane and M. Springer. T. DonatiMartin and M. P. A. C. 22(15):3033–3048. C. Ann. Bull. Ann. Theory Related Fields. Probab. volume 1526 of Lecture Notes in e e Math. N. Cambridge Univ.. Carlen. Math. Transformation de Fourier et temps d’occupation browniens. Math. Springer. Rev. Lecture Note Ser.. Quelques pr´cisions sur le m´andre brownien. N. 36. discrete. Doney. Biane and M. Duplantier. R. Dellacherie. Statist. Probab. L. 
and L. C. and M. O. Uspekhi Mat. e e In S´minaire de Probabilit´s. 1994. 1991. 30. and some extensions of the CiesielskiTaylor identities in law. C. L´vytransformation is e Bernoulli. In Stochastic processes in classical and quantum systems (Ascona. The pathwise description of quantum scattering in stochastic mechanics. Soc. G. Yor. Appl. 37. Cambridge. On higherdimensional analogues of the arcsine law. 14(2):311– 367. Springer. R. P. Chan. 35. 1997. E. A new proof of Spitzer’s result on the winding of twodimensional Brownian motion. pages 98–113.. Nauk. and Probability. pages e e 315–323.190 References 18. 1994.. Smorodinsky. 1990. Brockhaus. S. Fubini’s theorem. 27(2):181– e 200.. Sur les fonctionnelles exponentielles de certains processus de L´vy. 21. volume 262 of Lecture Notes in Phys. Sur la loi des temps locaux browniens pris en un temps exponentiel. volume 216 of London Math. Berlin. DonatiMartin. Inst. Song. Press. 1998. Ann. Meyer. XXVI. Dynkin. Bingham and R. Ann. Rogers. Inst. 1992. Fitzsimmons and R.. H. 47(12):71–101. In S´minaire de Probabilit´s.. Mat. Probab. DonatiMartin and M. 1989. volume 1321 of Lecture Notes in e e Math. 1986. Chung. 1994. Ann. Areas of planar Brownian curves. 1976/1977). 44(2(266)):7–48. 22. Providence. Yor. P. Yor. In S´minaire de Probabilit´s.. B. 19. Math. Petit. Betagamma random variables and intertwining relations between certain Markov processes. 38. Statist. 1985). 1988. XII (Univ. A. 1992.. 1988. Mat. Davis. DonatiMartin. C. DonatiMartin and M. 34.. Theory Related Fields. J. Yor. K. Dean. 32. Un th´or`me de RayKnight li´ au supremum des temps locaux e e e browniens. 1961. Springer. 31. In S´minaire de Probabilit´s. Sur certaines propri´t´s des espaces de e e e e Banach H 1 et BMO. e 25.
o Berlin. 42. 1992. pages 169–187. 51. 62. In Seminar on Stochastic Processes.F. Boston. XIII (Univ. corrected. Yor. J. 1963. In´galit´ de Hardy. Y. Soc. 1979. Ann. Bessel processes. T. 52. Strasbourg. J. Watson. 1987. pages 227–265. R. and splicing. Acad. Probab.. 43. J. 49. Liber amicorum for Moshe Zakai. 48. Soc. Sci.. Sci. o e e Theory Related Fields. In Stochastic analysis. McKean. 1979. 91(1):71–80. G. Ast´risque. Probab. In S´minaire e e e de Probabilit´s. Yor. (157158):233–247. Jeulin. Appl. Die Grundlehren der mathematischen Wissenschaften. pages 101–134. 1987. Shepp. Berlin. Theory Related Fields. 74(4):617– 635. T. meander and the threedimensional Bessel process. P. 317(2):687–722. Jeulin and M. Semimartingales et grossissement d’une ﬁltration. 57. J. Boston. B. 1985. Diﬀusion processes and their sample paths. I e e e Math. B. 303(3):73–76. J. Probab. et fauxamis. Strasbourg. Some probabilistic properties of Bessel functions. J. 1992). volume 721 of Lecture e Notes in Math. “Normal” distribution functions on spheres and the modiﬁed Bessel functions. J.. J. Nagoya Math. 2:593–607. Ann. 3 (4):349–375. Enlacements du mouvement brownien autour des courbes de l’espace. 1974. On limit processes for a class of additive functionals of recurrent diﬀusion processes. F¨ldes and P. 1990. Excursions browniennes et carr´s de processus de Bessel. Z. Yor. 1988/89. Jeulin and M. and applications. Band 125. I (Evanston. and M.. Gebiete. Imhof. K. pages 197–304. Acad. H. A decomposition of additive functionals of ﬁnite energy. 1987). 49(2):133–153. Probab. and maxima for Brownian motion. Yor. Closed form characteristic functions for certain random variables related to Brownian motion. volume 1426 e e e of Lecture Notes in Math.. Itˆ and H. e C. Mouvement brownien. Verw. I Math. 1992 (Seattle. R´v´sz.. volume 1118 of Lecture Notes in Math. Trans. Theory o Related Fields. Math.F. XXIV. MA. M. Academic Press. Springer. Springer.. Geman and M. A. 53. 
cˆnes et processus stables. . 1990. 76(4):587–627. SpringerVerlag. Knight. Birkh¨user Boston. J. H. Kent. Kasahara and S. 74:137–168. Finance. volume 833 of Lecture Notes in Mathematics. MA. 46. Kotani.F... Knight. pages 332–359. Le Gall and M. T. Amer. F. F. J. 1980. semimartingales. Math. Random walks and a sojourn density process of Brownian motion. Paris S´r. Yor. In Grossissement de ﬁltrations: Exemples et applications. 60. volume 22 of Progr. 56. Hartman and G.References 191 41. 55. Filtration des ponts browniens et ´quations diﬀ´rentielles e e stochastiques lin´aires. T.P. Le Gall and M. Colloque Paul L´vy sur les Processus e e Stochastiques (Palaiseau. Second printing. Vol. Yor. 1974.. a 44. volume 33 of Progr. Foschini and L. 1990. Probab. On hardly visited points of the Brownian motion. Trans. Etude asymptotique des enlacements du mouvement brownien autour des droites de l’espace. Berlin. Berlin.. MA. Birkh¨user Boston. Springer. Math. 1988.. Yor. Yor. positive sojourns. Le Gall. 109:56–86. Probab. 54. Palm interpretation. 47. Fukushima. Probab. F¨llmer. R. 21(3):500–510. 58. 1986. pages 3–16. 6(5):760– 770. C. A. Inverse local times. e ´ 61. 1991. 1992. Probability. Amer. In Diﬀusion processes and related o problems in analysis. 45. Martin boundaries on Wiener space. 59. 1989). Springer. 1979. 50. Jeulin. Pitman. Markovian bridges: construction. options asiatiques et fonctions conﬂuentes hyperg´om´triques. 1977/78). Application de la th´orie du grossissement ` l’´tude des temps locaux e a e browniens. H. Density factorizations for Brownian motion. Paris S´r. Fitzsimmons. Geman and M. P. IL. Berlin. asian options and perpetuities. Quelques relations entre processus de Bessel. a 1993. 1978. Le Gall and M. In S´minaire de Probabilit´s. 314(6):471–474. 1984. Wahrsch. 1993..F. WA. Boston.
Gebiete.. Pitman and M. Mezzadri and N. Local time is a semimartingale. 1969. L´vy. 75. J. J. 1939. 25(3):464– 477. Compositio Math. Dover Publications Inc. Ann. Pitman and Marc Yor. 1982. Wahrsch. Z. Pitman. 1992. Messulam and M. 1958. M. 1992. J. In Stochastic integrals (Proc. Revuz and M. 65. Durham. 1989. et quelques extensions d’une identit´ de Knight.. Acad. Sojourn times of diﬀusion processes. Pitman and M. Wahrscheinlichkeitstheorie und Verw. Some theorems concerning 2dimensional Brownian motion. Berlin. translated from the Russian and edited by Richard A. Honest Bernoulli excursions. Sci. Shiga and S. Yor. Gebiete. D. 316(7):723–726. C. Appl. Yor. et quelques extensions de la loi de l’arc sinus. 72.. Silverman. J. volume 851 of Lecture Notes in Math. PhD thesis. Ann. Proc. 59(4):425–457. volume 293 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Pitman and M. I Math. 82. Amer. J. (2). J. volume 322 of London Mathematical Society Lecture Note Series. Ostrovski˘ Symmetric stable processes as traces of degenerate c ı. 1981. Perman. Sur une d´composition des ponts de Bessel. Yor. 78. 1981). J. Yor. A decomposition of Bessel bridges. Sur le temps pass´ par le mouvement brownien audessus d’un multiple de e son supremum. SpringerVerlag.. In Itˆ’s stochastic calculus and probability theory. 70. Yor.. editors. 1999. 7:615–630. 71. 14:127–130. Diaconis. Smith and P. Special functions and their applications. Verw. Paris e S´r. 74. 27:37–46. P. 79. J. 20(13):4421–4438. Revised edition. February 1992. 81. F. Cambridge. e 80. Snaith. Molˇanov and S. Pitman and M. 1963. 69. 1982. 14(3):733–779. Soc.. 7(3):511–526. volume 923 of Lecture Notes in Math. 1972. Probability. Perkins. and M. Pitman and M. Wahrsch. F. J. J. Z.. Yor. Soc. Rudnick and Y. Hu. Verw. Pitman and M. Theory Related Fields. D.. 1986. Pitman and M. London Math. Recent perspectives in random matrix theory and number theory. i Primenen. 64. 83. Yor. P.. 
65(2):326–356.. 2005. Sizebiased sampling of Poisson point processes and excursions. Berlin. Math. Probab. R. Yor. In Functional e analysis in Markov processes (Katata/Kyoto. Unabridged and corrected republication. r´arrangements des trajectoires e browniennes. Advances in Appl. pages o 293–310. 76. 1982. 77. 84. 1987. J. Sur certains processus stochastiques homog`nes. Cambridge University Press. Dilatations d’espacetemps. 60(1):79–117. Probab. Teor. Springer.192 References 63. N. Ray. Phys. E.. E. 1996. Watanabe. Decomposition at the maximum for excursions and bridges of onedimensional diﬀusions. Durham.. Univ. F. 67. 92(1):21–39. Probab. Gebiete. Continuous martingales and Brownian motion. J. Universit´ e Paris VII. London Math. Verojatnost. 1980). Arcsine laws and interval partitions derived from a stable subordinator. Williams’ “pinching method” and some applications. A. Tokyo. Yor. Lebedev. On D. Springer. Soc. third edition. T. Berlin. Sympos. Petit. Further asymptotic laws of planar Brownian motion. 68. Yor. Trans. pages 276–285. 17(3):965–1011. 85. Spitzer. Z. pages 285–370. 66. New York. 1988. Bessel diﬀusions as a oneparameter family of diﬀusion processes. Probab. Springer. Onedimensional Brownian motion and the threedimensional Bessel process. 1982. J. Math. 1993. diﬀusion processes. . (3). 1973. C. 7:283– e e 339. 73. Illinois J. 87:187–197. The winding angle distribution of an ordinary random walk. N. J. 26(2):348–364. Bessel processes and inﬁnitely divisible laws. Asymptotic laws of planar Brownian motion. Pitman. L. 1975.
. Excursions and Itˆ calculus in Nelson’s stochastic o mechanics. Yor. Exponential functionals of Brownian motion and related processes. 27(3):707–712.. 27(2):201–213. Inst. Vol. Adv. Yor. Cambridge. 91.. 1995. Kluwer Acad. Geman. 92. Newton Inst. M. 98. 88. Une explication du th´or`me de CiesielskiTaylor. 1996.. 48(12):1–15. 93. 1986. Reprint of the second (1944) edition. Williams. 2001. Etude asymptotique des nombres de tours de plusieurs mouvements browniens complexes corr´l´s. OrnsteinUhlenbeck process with quadratic killing. Poincar´ e e e Probab. L. Springer Finance. 1990. a 87. Probab. Kyoto Univ.References 193 86. M. Yamada. Springer. Dordrecht. 308(8):257–260. M. 96. New York. M. 94. 1991. Berlin. Cambridge Univ. H. 1997. 23(4):893–903. M. P. Brownian motion. Z.. Probab. Appl. Stud. Statist. Loi de l’indice du lacet brownien. 53(1):71–95. pages 361–372. 101. I Math. 7(1):143–149. Exponential functionals and principal values related to Brownian motion. Some combinations of e Asian. T. M. 103. a e e 35(3):175–186. s volume 12 of Math. L. 90. 26(2):309–322. Gebiete. A treatise on the theory of Bessel functions. Ann. Probab. J. 8 translated from the French by Stephen S. Oxford Sci. Williams. e e Sci. On some limit theorems for occupation times of onedimensional Brownian motion and its continuous additive functionals locally of zero energy. Wilson. M. pages 61–87. 24(3):509–531. Parisian and barrier options. Stochastics Stochastics Rep. Yamada. Oxford Univ. 1992. 1989. Truman and D. Yor. SpringerVerlag. Appl. 105. Representations of continuous additive functionals of zero energy via convolution type transforms of Brownian local times and the Radon transform. Chesney. in Appl. In Recent developments in quantum mechanics (Poiana Bra¸ov. 104. T.. Press. 1980. Yor. IL. M. On the fractional derivative of Brownian local times. Probab. Brownian motion and the Riemann zetafunction. H. 1997. In Random walks. Wahrsch. G. editor. 89. A. W.. Madrid. 3. Boston. e ´ 100. 
Truman and D. A relation between Brownian bridge and Brownian excursion. M. Press. 1990.. JeanblancPicqu´. Une extension markovienne de l’alg`bre des lois b´tagamma. pages 49–83. and interacting partiee cle systems. Vallois. Cambridge. Ann. 1991. Sur la loi conjointe du maximum et de l’inverse du temps local du mouvement brownien: application ` un th´or`me de Knight. Math. 1979. J. C. 1991. Yor. 99. Yamada. . Biblioteca de la Revista Matem´tica Iberoamericana.. Phys. M. 1995). Yor. Brownian motion with quadratic killing and some implications.. Paris S´r. In Mathematics of derivative securities (Cambridge. pages 117–135. Yor. volume 28 of Progr. a MA.. Cambridge Mathematical Library. Principal values of Brownian local times and their related topics. MA. Tokyo. 1994. Kyoto Univ. 97.. Stochastics Stochastics Rep. 1991. 1989). 95. pages 413–422. 4. Revista Matem´tica Iberoamericana.. A generalised arcsine law and Nelson’s stochastic mechanics of onedimensional timehomogeneous diﬀusions. Boston. 102. D. 25(1):49–58. Probab. R. Yor. In Disorder in physical systems.. Chapters 1. T. On some exponential functionals of Brownian motion. With an introductory chapter by H´lyette e Geman. and M. 1990. Wenocur.. In Itˆ’s o stochastic calculus and probability theory. 1985. Verw. et distribution de HartmanWatson. Math. Publ. 1989). [Library of the Revista a Matem´tica Iberoamericana]. Wenocur. volume 15 of Publ. A. Probab. Publ.. Yamada. M. J. Watson. 1986. T. Birkh¨user Boston. Birkh¨user Boston. In Diﬀusion processes and related problems in analysis. volume 22 of Progr. A a a collection of research papers. I (Evanston. Williams. Vervaat. J. Cambridge University Press. pages 441–455. Acad.
J.: Entire and Meromorphic Functions RuizTolosa.: Geometries and Groups Oden.: Nonlinear Differential Equations and Dynamical Systems Weintraub.: Stochastic Differential Equations Øksendal.: A Taste of Topology Rybakowski. V. P. I.: Composition Operators and Classical Function Theory Simonnet. C.: Sphere Packings Zong. A. P. V. V.: Partial Differential Equations I Sauvigny. a¨ aa Traves. S. L. M. W. An n Introduction Srivastava: A Course on Mathematical Logic Stichtenoth.: Natural Function Algebras Rotman. H. W. Woods. R. Castillo E.: Weyl Transforms Xamb´ Descamps. V. J.: The Homotopy Index and Partial Differential Equations Sagan. L. C. R. C.: Introduction to Hyperbolic Geometry Rautenberg. Integration and Fourier Theory Zhang. C.Nicolaescu.: Power Series from a Computational Point of View Smorynski. E. J..: Nonsmooth Analysis Sengupta. T.: An Invitation to Algebraic Geometry Smith. F.: Matrix Theory Zong.