(Cambridge Tracts in Mathematics 206) Uwe Franz, Nicolas Privault - Probability On Real Lie Algebras-Cambridge University Press (2016)
GENERAL EDITORS
B. BOLLOB ÁS, W. FULTON, A. KATOK, F. KIRWAN, P. SARNAK,
B. SIMON, B. TOTARO
UWE FRANZ
Université de Franche-Comté
NICOLAS PRIVAULT
Nanyang Technological University, Singapore
32 Avenue of the Americas, New York, NY 10013-2473, USA
www.cambridge.org
Information on this title: www.cambridge.org/9781107128651
© Uwe Franz and Nicolas Privault 2016
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2016
Printed in the United States of America
A catalogue record for this publication is available from the British Library
Library of Congress Cataloging in Publication Data
Franz, Uwe.
Probability on real Lie algebras / Uwe Franz, Université de Franche-Comté, Nicolas
Privault, Nanyang Technological University, Singapore.
pages cm. – (Cambridge tracts in mathematics)
Includes bibliographical references and index.
ISBN 978-1-107-12865-1 (hardback : alk. paper)
1. Lie algebras. 2. Probabilities. I. Privault, Nicolas. II. Title.
QA252.3.F72 2015
512′.482–dc23    2015028912
ISBN 978-1-107-12865-1 Hardback
Notation page xi
Preface xiii
Introduction xv
1 Boson Fock space 1
1.1 Annihilation and creation operators 1
1.2 Lie algebras on the boson Fock space 4
1.3 Fock space over a Hilbert space 6
Exercises 9
2 Real Lie algebras 10
2.1 Real Lie algebras 10
2.2 Heisenberg–Weyl Lie algebra hw 12
2.3 Oscillator Lie algebra osc 13
2.4 Lie algebra sl2 (R) 14
2.5 Affine Lie algebra 20
2.6 Special orthogonal Lie algebras 21
Exercises 26
3 Basic probability distributions on Lie algebras 27
3.1 Gaussian distribution on hw 27
3.2 Poisson distribution on osc 31
3.3 Gamma distribution on sl2 (R) 36
Exercises 44
4 Noncommutative random variables 47
4.1 Classical probability spaces 47
4.2 Noncommutative probability spaces 48
4.3 Noncommutative random variables 54
4.4 Functional calculus for Hermitian matrices 57
Chapter 10 269
Chapter 11 270
Chapter 12 270
References 271
Index 279
Notation

|φ⟩⟨ψ| with φ, ψ ∈ h denotes the rank one operator on the Hilbert space h, defined by |φ⟩⟨ψ|(v) = ⟨ψ, v⟩φ for v ∈ h.
[·, ·] denotes the commutator [X, Y] = XY − YX.
{·, ·} denotes the anti-commutator {X, Y} = XY + YX.
Ad, resp. ad X, denote the adjoint action on a Lie group, resp. Lie algebra.
S(R) denotes the Schwartz space of rapidly decreasing smooth functions.
C_0(R) denotes the set of continuous functions on R, vanishing at infinity.
C_b^∞(R) denotes the set of infinitely differentiable functions on R which are bounded together with all their derivatives.
H^{p,κ}(R²) denotes the Sobolev space of orders κ ∈ N and p ∈ [2, ∞].
Γ(x) := ∫₀^∞ t^{x−1} e^{−t} dt denotes the standard gamma function.
J_m(x) denotes the Bessel function of the first kind of order m ≥ 0.
Introduction
on the other hand. This approach is exemplified in Chapters 1–3. In this respect
our point of view is consistent with the description of quantum probability by
P.A. Meyer in [80] as a set of prescriptions to extract probability from algebra,
based on various choices for the algebra A.
For example, it is a well-known fact that the Gaussian distribution arises
from the Heisenberg–Weyl algebra which is generated by three elements
{P, Q, I} linked by the commutation relation
[P, Q] = PQ − QP = 2iI.
densities have various applications in, e.g., time-frequency analysis, see, e.g.,
the references given in [29] and [7].
Overall, this monograph puts more emphasis on noncommutative “problems
with fixed time” as compared with “problems in moving time”; see, e.g.,
[31] and [32] for a related organisation of topics in classical probability and
stochastic calculus. Nevertheless, we also include a discussion of noncommu-
tative stochastic processes via quantum Lévy processes in Chapter 8. Lévy
processes, or stochastic processes with independent and stationary increments,
are used as models for random fluctuations, e.g., in physics and finance. In
quantum physics the so-called quantum noises or quantum Lévy processes
occur, e.g., in the description of quantum systems coupled to a heat bath
[47] or in the theory of continuous measurement [53]. See also [122] for a
model motivated by lasers, and [2, 106] for the theory of Lévy processes on
involutive bialgebras. Those contributions extend, in a sense, the theory of
factorisable representations of current groups and current algebras as well as
the theory of classical Lévy processes with values in Euclidean space or, more
generally, semigroups. For a historical survey on the theory of factorisable
representations and its relation to quantum stochastic calculus, see [109,
Section 5]. In addition, many interesting classical stochastic processes can be shown to
arise as components of quantum Lévy processes, cf. e.g., [1, 18, 42, 105].
We also intend to connect noncommutative probability with the Malliavin
calculus, which was originally designed by P. Malliavin, cf. [75], as a tool
to provide sufficient conditions for the smoothness of partial differential
equation solutions using probabilistic arguments, see Chapter 9 for a review
of its construction. Over the years, the Malliavin calculus has developed into
many directions, including anticipating stochastic calculus and extensions of
stochastic calculus to fractional Brownian motion, cf. [84] and references
therein.
The Girsanov theorem is an important tool in stochastic analysis and the
Malliavin calculus, and we derive its noncommutative, or algebraic version
in Chapter 10, starting with the case of noncommutative Gaussian processes.
By differentiation, Girsanov-type identities can be used to derive integration
by parts formulas for the Wigner densities associated to the noncommutative
processes, by following Bismut’s argument, cf. [22]. In Chapter 10 we
will demonstrate on several examples how quasi-invariance formulas can be
obtained in such a situation. This includes the Girsanov formula for Brownian
motion, as well as a quasi-invariance result of the gamma processes [111, 112],
which actually appeared first in the context of factorisable representations of
current groups [114], and a quasi-invariance formula for the Meixner process.
treated in Chapter 7 under the angle of Weyl calculus, and Lévy processes
on real Lie algebras are considered in Chapter 8. The classical, commutative
Malliavin calculus is introduced in Chapter 9, and an introduction to quasi-
invariance and the Girsanov theorem for noncommutative Lévy processes
is given in Chapter 10. The noncommutative counterparts of the Malliavin
calculus for Gaussian distributions, and then for gamma and other related
probability densities are treated in Chapters 11 and 12, respectively, including
the case of so(3).
1
Boson Fock space
You don’t know who he was? Half the particles in the universe obey
him!
(Reply by a physics professor when a student asked who Bose was.)
We start by introducing the elementary boson Fock space together with
its canonically associated creation and annihilation operators on a space of
square-summable sequences, and in the more general setting of Hilbert spaces.
The boson Fock space is a simple and fundamental quantum model which will
be used in preliminary calculations of Gaussian moments on the boson Fock
space, based on the commutation and duality relations satisfied by the creation
and annihilation operators. Those calculations will also serve as a motivation
for the general framework of the subsequent chapters.
Definition 1.1.1 Let σ > 0. The annihilation and creation operators are the linear operators a⁻ and a⁺ implemented on ℓ² by letting

  a⁺eₙ := σ√(n+1) e_{n+1},   a⁻eₙ := σ√n e_{n−1},   n ∈ N.

Note that the above definition means that a⁻e₀ = 0.
The sequence space ℓ² endowed with the annihilation and creation operators a⁻ and a⁺ is called the boson (or bosonic) Fock space. In the physical
For f ∈ ℓ² we have

  a⁺f = Σ_{n=0}^∞ f(n) a⁺eₙ = Σ_{n=1}^∞ σ√n f(n−1) eₙ,

and

  a⁻f = Σ_{n=0}^∞ f(n) a⁻eₙ = Σ_{n=1}^∞ σ√n f(n) e_{n−1} = Σ_{n=0}^∞ σ√(n+1) f(n+1) eₙ,

hence we have

  (a⁺f)(n) = σ√n f(n−1),  and  (a⁻f)(n) = σ√(n+1) f(n+1).   (1.2)

These operators satisfy the duality relation

  ⟨a⁻f, g⟩_{ℓ²} = ⟨f, a⁺g⟩_{ℓ²}.

The operators

  Q := a⁻ + a⁺  and  P := i(a⁺ − a⁻)

then satisfy the commutation relation

  [P, Q] = PQ − QP = −2iσ² Id.
  ⟨a⁻u, v⟩_{ℓ²} = ⟨u, a⁺v⟩_{ℓ²},

and this relation will also be written as (a⁺)* = a⁻, with respect to the inner product ⟨·, ·⟩_{ℓ²}.
b) the operators a⁻ and a⁺ satisfy the commutation relation

  [a⁻, a⁺] = a⁻a⁺ − a⁺a⁻ = σ² Id,

where Id is the identity operator.
from which the sequence (Pₙ)_{n∈N} can be uniquely determined based on the initial conditions P₋₁ = 0, P₀ = 1.

  ⟨eₙ, Yⁿe₀⟩_{ℓ²} = γ₁ ··· γₙ,   n ∈ N,

  1 = ⟨eₙ, eₙ⟩_{ℓ²}
    = ⟨eₙ, Pₙ(Y)e₀⟩_{ℓ²}
    = Σ_{k=0}^n α_{k,n} ⟨eₙ, Yᵏe₀⟩_{ℓ²}
    = α_{n,n} ⟨eₙ, Yⁿe₀⟩_{ℓ²}
    = α_{n,n} γ₁ ··· γₙ,   n ∈ N.

  eₙ = Pₙ(Q)e₀,   n ∈ N,

for n ∈ N, with initial conditions P₋₁ = 0, P₀ = 1, hence (Pₙ)_{n∈N} is the family
of normalised Hermite polynomials, cf. Section 12.1.
  t −→ ⟨e₀, e^{tY} e₀⟩_{ℓ²}

and the conjugation

  ‾ : h_C −→ h_C

on the complexification of h, defined by letting

  (h₁ + ih₂)‾ := h₁ − ih₂,   h₁, h₂ ∈ h.
Definition 1.3.1 The symmetric Fock space over h_C is defined by the direct sum

  Γ_s(h) = ⊕_{n∈N} h_C^{∘n}.
  E(f) := Σ_{n=0}^∞ f^{⊗n} / √(n!),   f ∈ h_C,

which satisfy

  ⟨E(k₁), E(k₂)⟩ = e^{⟨k₁, k₂⟩_{h_C}}.
The operators a⁻(h), a⁺(h), Q(h), P(h) are unbounded, but their domains contain the exponential vectors E(f), f ∈ h_C. We will need to compose them with bounded operators on Γ_s(h), and in order to do so we will adopt the following convention: we let L(E(h_C), Γ_s(h)) denote the space of linear operators that are defined on the exponential vectors and that have an "adjoint" that is also defined on the exponential vectors. Obviously the operators a⁻(h), a⁺(h), Q(h), P(h), U(h₁, h₂) belong to L(E(h_C), Γ_s(h)). We will say that an expression of the form

  Σ_{j=1}^n Xⱼ Bⱼ Yⱼ,

with X₁, ..., Xₙ, Y₁, ..., Yₙ ∈ L(E(h_C), Γ_s(h)) and B₁, ..., Bₙ ∈ B(Γ_s(h)), defines a bounded operator on Γ_s(h) if there exists a bounded operator M ∈ B(Γ_s(h)) such that

  ⟨E(f), M E(g)⟩ = Σ_{j=1}^n ⟨Xⱼ* E(f), Bⱼ Yⱼ E(g)⟩

holds for all f, g ∈ h_C. If it exists, this operator is unique because the exponential vectors are total in Γ_s(h), and we will then write

  M = Σ_{j=1}^n Xⱼ Bⱼ Yⱼ.
  U(h₁, h₂) = exp(iP(h₁) + iQ(h₂)) = exp(i(a⁻(h₂ − ih₁) + a⁺(h₂ − ih₁))),

which acts on the vacuum vector E(0) as

  U(h₁, h₂)E(0) = exp(−(⟨h₁, h₁⟩_{h_C} + ⟨h₂, h₂⟩_{h_C})/2) E(h₁ + ih₂)

and on the exponential vectors E(f) as

  U(h₁, h₂)E(f) = exp(−⟨f, h₁ + ih₂⟩_{h_C} − (⟨h₁, h₁⟩_{h_C} + ⟨h₂, h₂⟩_{h_C})/2) E(f + h₁ + ih₂).
Exercises
Exercise 1.1 Moments of the normal distribution.
In this exercise we consider an example in which the noncommutativity
property of a− and a+ naturally gives rise to a fundamental example of
probability distribution, i.e., the normal distribution.
In addition to that we will assume the existence of a unit vector 𝟙 ∈ h (fundamental or empty state) such that a⁻𝟙 = 0 and ⟨𝟙, 𝟙⟩_h = 1. In particular, this yields the rule

  ⟨a⁺u, 𝟙⟩_h = ⟨u, a⁻𝟙⟩_h = 0.

Based on this rule, check by an elementary computation that the first four moments of the centered N(0, σ²) distribution can be recovered from ⟨Qⁿ𝟙, 𝟙⟩_h with n = 1, 2, 3, 4.
In the following chapters this problem will be addressed in a systematic
way by considering other algebras and probability distributions as well as the
problem of joint distributions such as the distribution of the couple (P, Q).
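The moment computation in Exercise 1.1 can be checked numerically. The following sketch (our illustration, not from the book) represents a⁻ and a⁺ by truncated matrices on span{e₀, ..., e_{N−1}}, following Definition 1.1.1, and computes ⟨Qⁿe₀, e₀⟩ for Q = a⁻ + a⁺; for n ≤ 4 the truncation is exact.

```python
import numpy as np

# Truncated matrix representation of a^- and a^+ on span{e_0, ..., e_{N-1}},
# with a^+ e_n = sigma*sqrt(n+1) e_{n+1} and a^- e_n = sigma*sqrt(n) e_{n-1}.
sigma, N = 1.3, 30
n = np.arange(1, N)
a_minus = np.diag(sigma * np.sqrt(n), k=1)   # superdiagonal: a^- lowers the index
a_plus = a_minus.T                           # adjoint of a^-: raises the index
Q = a_minus + a_plus
e0 = np.zeros(N); e0[0] = 1.0                # the vacuum (empty) state

moments = [e0 @ np.linalg.matrix_power(Q, k) @ e0 for k in range(1, 5)]
# Centered N(0, sigma^2) moments are 0, sigma^2, 0, 3*sigma^4.
print(moments)
```

The odd moments vanish because Q is tridiagonal with zero diagonal, and the fourth moment 3σ⁴ counts the paths 0→1→0→1→0 and 0→1→2→1→0 on the index ladder.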
2
Real Lie algebras
i) (X ∗ )∗ = X for all X ∈ g,
ii) [X, Y]∗ = −[X ∗ , Y ∗ ] for all X, Y ∈ g.
In the sequel we will only consider real Lie algebras, i.e., Lie algebras over
either the field K = R of real numbers, or involutive Lie algebras over the field
K = C of complex numbers.
Remark 2.1.3 Let g be a real Lie algebra. Then the complex vector space
gC := C ⊗R g = g ⊕ ig
is a complex Lie algebra with the Lie bracket
  [X + iY, X′ + iY′] := [X, X′] − [Y, Y′] + i([X, Y′] + [Y, X′]),

for X, X′, Y, Y′ ∈ g. In addition,
1. the conjugate linear map

  ∗ : g_C −→ g_C,  Z = X + iY −→ Z* = −X + iY

defines an involution on g_C, i.e., it satisfies

  (Z*)* = Z  and  [Z₁, Z₂]* = [Z₂*, Z₁*]

for all Z, Z₁, Z₂ ∈ g_C;
2. the functor1 g −→ (gC , ∗) is an isomorphism between the category of real
Lie algebras and the category of involutive complex Lie algebras. The
inverse functor associates to an involutive complex Lie algebra (g, ∗) the
real Lie algebra
gR = {X ∈ g : X ∗ = −X},
where the Lie bracket on gR is the restriction of the Lie bracket of g. Note
that [·, ·] leaves gR invariant, since, if X ∗ = −X, Y ∗ = −Y, then
[X, Y]∗ = −[X ∗ , Y ∗ ] = −[(−X), (−Y)] = −[X, Y].
1 This functor is used for the equivalence of real Lie algebras and complex Lie algebras with an
involution by associating a complex Lie algebra with an involution to every real Lie algebra, and
vice versa. Categories are outside the scope of this book.
satisfy

  a⁺ = (Q − iP)/2  and  a⁻ = (Q + iP)/2.
The Lie bracket [·, ·] satisfies
  X = B⁺ + B⁻ + M,

we will check that X has a gamma distribution with parameter β > 0; however, this matrix representation is not compatible with the involution of the Lie algebra. On the other hand, taking

  B⁻ = [ 0 0 ; 1 0 ],   B⁺ = [ 0 1 ; 0 0 ],   M = [ −1 0 ; 0 1 ],

satisfies the correct involution, but with the different commutation relation [M, B±] = ∓2B±.
where

  γ(β, t) = (β cosh(t) + sinh(t)) / (cosh(t) + β sinh(t)),   t ∈ R₊.
See Section 4.4 of [46] for a proof of Lemma 2.4.1.
  γ_β(τ) = 1_{{τ ≥ 0}} τ^{β−1} e^{−τ} / Γ(β),   τ ∈ R,

the gamma probability density function on R with shape parameter β > 0, a representation {M, B⁻, B⁺} of sl₂(R) can be constructed by letting
as in [93], [95], [97]. The adjoint ã⁺ of ã⁻ with respect to the gamma density γ_β(τ) satisfies

  ∫₀^∞ g(τ) ã⁻f(τ) γ_β(τ) dτ = ∫₀^∞ f(τ) ã⁺g(τ) γ_β(τ) dτ,   f, g ∈ C_c^∞(R),   (2.1)
The operator

  ã° = ã⁺ ∂/∂τ = −(β − τ) ∂/∂τ − τ ∂²/∂τ²

has the Laguerre polynomials Lₙ^β with parameter β as eigenfunctions:

  ã° Lₙ^β(τ) = n Lₙ^β(τ),   n ∈ N.
i.e.,

  Q̃ = τ − β + 2(β − τ) ∂/∂τ + 2τ ∂²/∂τ²,   (2.2a)
  P̃ = 2iτ ∂/∂τ − i(τ − β),
  M = β − 2(β − τ) ∂/∂τ − 2τ ∂²/∂τ²,   (2.2b)

we have

  Q̃ + M = τ,

hence Q̃ + M has the gamma law with parameter β in the vacuum state Ω = 1_{R₊} in L²_C(R₊, γ_β(τ)dτ).
We will show in Chapter 6 that when |α| < 1, the law (or spectral measure)
of αM + Q̃ is absolutely continuous with respect to the Lebesgue measure on
R. In particular, for α = 0, Q̃ and P̃ have continuous binomial distributions and
M + Q̃ and M − Q̃ are gamma distributed when α = ±1. On the other hand,
Q̃ + αM has a geometric distribution, when |α| > 1, cf. [1], and Exercise 6.3.
and

  P̂ := i(B⁻ − B⁺) = (i/2)((α_x⁻)² − (α_x⁺)²) = (PQ + QP)/4,

we have the commutation relations

  [M, P̂] = −2iQ̂,   [M, Q̂] = 2iP̂,   [P̂, Q̂] = 2iM,
and

  Q̂ + αM = ((α+1)/2) · P²/2 + ((α−1)/2) · Q²/2,

and

  M + αQ̂ = ((α+1)/2) · P²/2 + ((1−α)/2) · Q²/2.
and

  ((α_x⁻)² + (α_y⁻)²) f((x² + y²)/2) = (∂²/∂x² + ∂²/∂y²) f((x² + y²)/2)
    = 2f′(τ) + (x² + y²) f″(τ)
    = −2(−τ f″(τ) − (1 − τ)f′(τ) − τ f′(τ))
    = −2(ã⁻ + ã°) f(τ).
  [X₁, X₂] = X₂,

and the affine group can be constructed as the group of 2 × 2 matrices of the form

  g = e^{x₁X₁ + x₂X₂} = [ a b ; 0 1 ] = [ e^{x₁}  x₂ e^{x₁/2} sinch(x₁/2) ; 0  1 ],

a > 0, b ∈ R, where

  sinch x = sinh(x)/x,   x ∈ R.
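The closed form of this matrix exponential can be sanity-checked numerically. The sketch below is our illustration: the matrix realization X₁ = E₁₁, X₂ = E₁₂ is an assumption consistent with [X₁, X₂] = X₂ and with the group form above.

```python
import numpy as np

# Exponentiate x1*X1 + x2*X2 by a truncated Taylor series and compare with the
# closed form using sinch(x) = sinh(x)/x (assumed realization X1 = E11, X2 = E12).
x1, x2 = 0.7, -1.9
A = np.array([[x1, x2], [0.0, 0.0]])

expA = np.zeros((2, 2)); term = np.eye(2)
for k in range(1, 40):                 # Taylor series of exp(A)
    expA += term
    term = term @ A / k

sinch = lambda x: np.sinh(x)/x if x != 0 else 1.0
closed = np.array([[np.exp(x1), x2*np.exp(x1/2)*sinch(x1/2)],
                   [0.0, 1.0]])
print(np.allclose(expA, closed))
```

The off-diagonal entry x₂ e^{x₁/2} sinch(x₁/2) is just x₂(e^{x₁} − 1)/x₁ rewritten symmetrically.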
  ξ(x) = x₁ξ₁ + x₂ξ₂ + x₃ξ₃

  g = e^{xξ₁ + yξ₂ + zξ₃} = e^{a(z, −y, x)} = Id + sin(φ) a(u₁, u₂, u₃) + (1 − cos φ) a(u₁, u₂, u₃)²,

where

  (u₁, u₂, u₃) := (z, −y, x) / √(x² + y² + z²) = (cos α, sin α cos θ, sin α sin θ) ∈ S².
where we used the Leibniz formula for the commutator, i.e., the fact that we always have

  [X, YZ] = [X, Y]Z + Y[X, Z],

and where x × y denotes the cross product or vector product of two vectors x and y in three-dimensional space,

  x × y = ( x₂y₃ − x₃y₂ , x₃y₁ − x₁y₃ , x₁y₂ − x₂y₁ ).
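The correspondence between the matrix bracket and the cross product can be spot-checked numerically. This is our sketch using the standard skew-symmetric "hat" map A(x)v = x × v; the book's ξ may differ from A by normalization conventions.

```python
import numpy as np

# Check of the hat-map identity [A(x), A(y)] = A(x cross y) for so(3).
def hat(x):
    return np.array([[0, -x[2], x[1]],
                     [x[2], 0, -x[0]],
                     [-x[1], x[0], 0.0]])

x = np.array([0.3, -1.2, 0.8]); y = np.array([1.0, 0.4, -0.5])
lhs = hat(x) @ hat(y) - hat(y) @ hat(x)
rhs = hat(np.cross(x, y))
print(np.allclose(lhs, rhs))
```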
This shows that the element exp ξ(x) of the Lie group SO(3) acts on so(3) as
a rotation. More precisely, we have the following result.
Proof: Recall that the adjoint action Ad of a Lie group of matrices on its Lie algebra is related to the adjoint action

  ad(X)Y = [X, Y]

of the Lie algebra on itself by

  Ad(exp(X))(Y) = exp(ad(X))(Y).
  e₃ = e₁ × e₂.

Then we have

  x × eⱼ = 0 if j = 1,   ‖x‖e₃ if j = 2,   −‖x‖e₂ if j = 3.
We check that the action of Ad(exp ξ(x)) on this basis is given by

  Ad(exp ξ(x)) ξ(e₁) = Σ_{n=0}^∞ (1/n!) (ad ξ(x))ⁿ ξ(e₁)
    = ξ(e₁) + ξ(x × e₁) + ½ ξ(x × (x × e₁)) + ···      (all bracket terms vanish, since x × e₁ = 0)
    = ξ(Rₓ(e₁)),

  Ad(exp ξ(x)) ξ(e₂) = ξ(e₂) + ξ(x × e₂) + ½ ξ(x × (x × e₂)) + ···      (with x × e₂ = ‖x‖e₃, x × (x × e₂) = −‖x‖²e₂)
    = ξ(cos(‖x‖) e₂ + sin(‖x‖) e₃)
    = ξ(Rₓ(e₂)),

  Ad(exp ξ(x)) ξ(e₃) = ξ(e₃) + ξ(x × e₃) + ½ ξ(x × (x × e₃)) + ···      (with x × e₃ = −‖x‖e₂, x × (x × e₃) = −‖x‖²e₃)
    = ξ(cos(‖x‖) e₃ − sin(‖x‖) e₂)
    = ξ(Rₓ(e₃)).
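The rotation action just computed can be verified numerically in the concrete case x = θe₁. This is a sketch using the standard hat map and a series-expanded matrix exponential (our assumptions about the realization of ξ).

```python
import numpy as np

# Verify exp(hat(theta*e1)) e2 = cos(theta) e2 + sin(theta) e3.
def hat(x):
    return np.array([[0, -x[2], x[1]], [x[2], 0, -x[0]], [-x[1], x[0], 0.0]])

def expm_series(A, terms=40):
    out, term = np.zeros_like(A), np.eye(A.shape[0])
    for k in range(1, terms):          # truncated Taylor series of exp(A)
        out += term
        term = term @ A / k
    return out

theta = 0.9
R = expm_series(hat(np.array([theta, 0.0, 0.0])))
e2, e3 = np.eye(3)[1], np.eye(3)[2]
print(np.allclose(R @ e2, np.cos(theta)*e2 + np.sin(theta)*e3))
```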
Notes
Relation (2.3a) has been used in [92] to study the relationship between
the stochastic calculus of variations on the Wiener and Poisson spaces, cf.
also [64].
Exercises
Exercise 2.1 Consider the Weyl type representation, defined as follows for a subgroup of SL(2, R). Given z = u + iv ∈ C, u < 1/2, define the operator W_z as

  W_z f(x) = (1/√(1 − 2u)) f(x/(1 − 2u)) exp(−ux/(1 − 2u) + iv(1 − x)).
1. Show that the operator W_z is unitary on L² with W₀ = Id and that for any λ = κ + iζ, λ′ = κ′ + iζ′ ∈ C we have

  W_λ W_{λ′} = W_{κ + κ′ − 2κκ′ + i(ζ + ζ′/(1 − 2κ))},

  dW_{tλ}/dt |_{t=0} = λ ã⁺ − λ̄ ã⁻,   λ = κ + iζ ∈ C,

  W_{is} W_u = exp(2iusτ/(1 − 2u)) W_u W_{is},   u < 1/2, s ∈ R.
Conclude that W_λ can be extended to L²(R; C), provided |κ| < 1/2, and that

  [ 1/a  b ; 0  a ] −→ W_{(1−a²)/2 + ib/a},   a ∈ R∖{0}, b ∈ R,

is a representation of the subgroup of SL(2, R) made of upper-triangular matrices.
2. Show that the representation (W_λ)_λ contains the commutation relations between ã⁺ and ã⁻, i.e., we have

  P̃Q̃ = −(d/dt)(d/ds) W_t W_{is} |_{t=s=0}  and  Q̃P̃ = −(d/dt)(d/ds) W_{is} W_t |_{t=s=0}.
3
Basic probability distributions on Lie algebras
  ⟨a⁻u, v⟩ = ⟨u, a⁺v⟩,   u, v ∈ h,   (3.1)

  [a⁻, a⁺] = a⁻a⁺ − a⁺a⁻ = E,   (3.2)

  a⁻e₀ = 0  and  Ee₀ = σ²e₀,
by letting

  a⁻ := σ² ∂/∂x  and  a⁺ := x − σ² ∂/∂x,

and by defining e₀ to be the constant function equal to one, i.e., e₀(x) = 1, x ∈ R, which satisfies the conditions ⟨e₀, e₀⟩_h = 1 and a⁻e₀ = 0. A standard integration by parts shows that

  ⟨a⁻u, v⟩_h = (σ²/√(2πσ²)) ∫_{−∞}^∞ ū′(x) v(x) e^{−x²/(2σ²)} dx
    = (1/√(2πσ²)) ∫_{−∞}^∞ ū(x)(x v(x) − σ² v′(x)) e^{−x²/(2σ²)} dx
    = ⟨u, a⁺v⟩_h,

i.e., (3.1) is satisfied, and

  [a⁻, a⁺]u(x) = a⁻a⁺u(x) − a⁺a⁻u(x)
    = a⁻(x u(x) − σ² u′(x)) − σ² a⁺u′(x)
    = σ² (∂/∂x)(x u(x) − σ² u′(x)) − σ² x u′(x) + σ⁴ u″(x)
    = σ² u(x),

hence (3.2) is satisfied.
In this representation, we easily check that the position and momentum operators Q and P are written as

  Q = a⁻ + a⁺ = x I_h  and  P = i(a⁺ − a⁻) = i(x I_h − 2σ² ∂/∂x),

and that

  ⟨e₀, Qⁿe₀⟩_h = (1/√(2πσ²)) ∫_{−∞}^∞ xⁿ e^{−x²/(2σ²)} dx

is indeed the centered Gaussian moment of order n ∈ N, which recovers in particular the first four Gaussian moments computed in Exercise 1.1. In addition, the moment generating function of Q in the state e₀, defined by

  ⟨e₀, e^{tQ} e₀⟩_h = Σ_{n=0}^∞ (tⁿ/n!) ⟨e₀, Qⁿe₀⟩_h,

satisfies

  ⟨e₀, e^{tQ} e₀⟩_h = (1/√(2πσ²)) ∫_{−∞}^∞ e^{tx} e^{−x²/(2σ²)} dx = exp(σ²t²/2).
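The Gaussian moment generating function above can be approximated directly by quadrature; this quick numerical check is our sketch, not part of the book's argument.

```python
import numpy as np

# Approximate <e0, exp(tQ) e0> by a Riemann sum of the Gaussian integral and
# compare with the closed form exp(sigma^2 t^2 / 2).
sigma, t = 0.8, 1.1
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]
f = np.exp(t*x - x**2/(2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
mgf = np.sum(f) * dx                       # tails beyond |x|=30 are negligible
print(mgf, np.exp(sigma**2 * t**2 / 2))
```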
  ⟨eₙ, eₘ⟩_h = δ_{n,m},   n, m ∈ N,

with

  a⁻e₀ = 0,

i.e.,

  eₙ = (1/(σⁿ√n!)) Hₙ(Q)e₀,   n ∈ N,

which is Condition (1.4) of Proposition 1.2.1.
  X_{α,ζ,β} = αN + ζa⁺ + ζ̄a⁻ + βE,

  X := X_{1,1,1} = N + a⁺ + a⁻ + E

  ⟨Xⁿe₀, e₀⟩ = ⟨X^{n−1}e₀, Xe₀⟩
    = ⟨X^{n−1}e₀, a⁺e₀⟩ + λ⟨X^{n−1}e₀, e₀⟩
    = ⟨a⁻X^{n−1}e₀, e₀⟩ + λ⟨X^{n−1}e₀, e₀⟩,

which implies

  ⟨X^{n+1}e₀, e₀⟩ = ⟨Xⁿe₀, Xe₀⟩
    = ⟨Xⁿe₀, a⁺e₀⟩ + λ⟨Xⁿe₀, e₀⟩
    = ⟨a⁻Xⁿe₀, e₀⟩ + λ⟨Xⁿe₀, e₀⟩
    = ⟨Xa⁻X^{n−1}e₀, e₀⟩ + ⟨a⁻X^{n−1}e₀, e₀⟩ + λ⟨X^{n−1}e₀, e₀⟩ + λ⟨Xⁿe₀, e₀⟩,

where N := a°/λ = a⁺a⁻/λ is the number operator.
  p_λ(k) = e^{−λ} λᵏ/k!,   k ∈ N,

is the Poisson distribution, with the inner product

  ⟨f, g⟩ := Σ_{k=0}^∞ f(k)g(k) p_λ(k) = e^{−λ} Σ_{k=0}^∞ f(k)g(k) λᵏ/k!,

and

  [a⁻, a⁺] = λ I_h
Proof: The commutation relation follows from (A.3b) and (A.3c). Next, by the Abel transformation of sums we have

  ⟨a⁻f, g⟩ = λe^{−λ} Σ_{k=0}^∞ (f(k+1) − f(k)) g(k) λᵏ/k!
    = λe^{−λ} Σ_{k=0}^∞ f(k+1)g(k) λᵏ/k! − λe^{−λ} Σ_{k=0}^∞ f(k)g(k) λᵏ/k!
    = e^{−λ} Σ_{k=1}^∞ f(k)g(k−1) λᵏ/(k−1)! − λe^{−λ} Σ_{k=0}^∞ f(k)g(k) λᵏ/k!
    = e^{−λ} Σ_{k=0}^∞ f(k)(k g(k−1) − λg(k)) λᵏ/k!
    = ⟨f, a⁺g⟩.
  eₙ(k) := Cₙ(k; λ) / (λ^{n/2} √n!),   n ∈ N,

and hence we have

  a⁻eₙ := √(λn) e_{n−1},  and  a⁺eₙ := √(λ(n+1)) e_{n+1},

together with the duality relation

  ⟨a⁻f, g⟩ = ⟨f, a⁺g⟩
  ⟨a⁻f, Cₙ(·, λ)⟩ = λe^{−λ} Σ_{k=0}^∞ (f(k+1) − f(k)) Cₙ(k, λ) λᵏ/k!
    = −λe^{−λ} f(0)Cₙ(0, λ) + e^{−λ} Σ_{k=1}^∞ f(k)(k Cₙ(k−1, λ) − λCₙ(k, λ)) λᵏ/k!
    = e^{−λ} f(0)Cₙ₊₁(0, λ) + e^{−λ} Σ_{k=1}^∞ f(k)Cₙ₊₁(k, λ) λᵏ/k!
    = e^{−λ} Σ_{k=0}^∞ f(k)Cₙ₊₁(k, λ) λᵏ/k!
    = ⟨f, a⁺Cₙ(·, λ)⟩,

with Cₙ(0, λ) = (−λ)ⁿ. We also check that (3.6) can equivalently be recovered
as
  (N + a⁺ + a⁻ + E)Cₙ(k, λ)
    = k(Cₙ(k, λ) − Cₙ(k−1, λ)) − λ(Cₙ(k+1, λ) − Cₙ(k, λ))
      + λ(Cₙ(k+1, λ) − Cₙ(k, λ)) + k Cₙ(k−1, λ) − λCₙ(k, λ) + λCₙ(k, λ)
    = k Cₙ(k, λ),
  X_{α,ζ,β} = αN + ζa⁺ + ζ̄a⁻ + βE

is given by

    = ∫₀^∞ f(x)(x g(x) − βg(x) − x g′(x)) γ_β(x) dx
    = ⟨f, ã⁺g⟩_h,   f, g ∈ C_b^∞(R),

hence we have

  ã⁺ = x − β − ã⁻,

i.e.,

  ã⁺f(x) = (x − β)f(x) − x f′(x) = (x − β)f(x) − ã⁻f(x).

In other words, the multiplication operator ã⁻ + ã⁺ = x − β has a compensated gamma distribution in the vacuum state e₀ in L²_C(R₊, γ_β(x)dx).
The operator ã° defined as

  ã° = ã⁺ ∂/∂x = −(β − x) ∂/∂x − x ∂²/∂x²

has the Laguerre polynomials Lₙ^β with parameter β as eigenfunctions:

  ã° Lₙ^β(x) = n Lₙ^β(x),   n ∈ N.   (3.8)
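The eigenrelation can be checked symbolically with polynomial arithmetic. In the sketch below (our illustration) we identify Lₙ^β, orthogonal for the gamma weight x^{β−1}e^{−x}, with the standard generalized Laguerre polynomial Lₙ^{(α)} for α = β − 1 — an assumption about the book's normalization — and build them by the three-term recurrence.

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Check -(beta - x) p' - x p'' = n p for Laguerre polynomials with alpha = beta - 1.
beta = 2.7
alpha = beta - 1.0
x = P([0.0, 1.0])
L = [P([1.0]), P([1.0 + alpha, -1.0])]          # L_0, L_1
for n in range(1, 6):                            # (n+1)L_{n+1} = (2n+1+a-x)L_n - (n+a)L_{n-1}
    L.append(((2*n + 1 + alpha - x)*L[n] - (n + alpha)*L[n-1]) / (n + 1))

ok = True
for n, p in enumerate(L):
    lhs = -(beta - x)*p.deriv() - x*p.deriv(2)   # the operator defined above
    ok &= np.allclose((lhs - n*p).coef, 0.0, atol=1e-8)
print(ok)
```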
Recall that the basis {M, B⁻, B⁺} of sl₂(R), which satisfies

  [B⁻, B⁺] = M,   [M, B⁻] = −2B⁻,   [M, B⁺] = 2B⁺,

can be constructed as

  M = β + 2ã°,   B⁻ = ã⁻ − ã°,   B⁺ = ã⁺ − ã°.

For example, for the commutation relation [M, B⁻] = −2B⁻ we note that, writing ∂ = ∂/∂x,

  [M, B⁻] = 2[ã°, ã⁻]
    = −2[(β − x)∂ + x∂², x∂]
    = −2(β − x)∂(x∂) − 2x∂²(x∂) + 2x∂((β − x)∂ + x∂²)
    = −2(β − x)∂ − 2(β − x)x∂² − 2x∂(∂ + x∂²) − 2x∂ + 2x(β − x)∂² + 2x∂² + 2x²∂³
    = −2(β − x)∂ − 2(β − x)x∂² − 2x∂² − 2x²∂³ − 2x∂² − 2x∂ + 2x(β − x)∂² + 2x∂² + 2x²∂³
    = −2β∂ − 2x∂²
    = −2(x∂ + (β − x)∂ + x∂²)
    = −2B⁻.
We check that

  B⁻ + B⁺ = ã⁻ + ã⁺ − 2ã° = x − β + 2(β − x) ∂/∂x + 2x ∂²/∂x²

and

  i(B⁻ − B⁺) = i(ã⁻ − ã⁺) = 2ix ∂/∂x − i(x − β),

hence

  Q + M = x,

hence Q + M has the gamma distribution with parameter β in the vacuum state e₀ in L²_C(R₊, γ_β(x)dx). In this way we can also recover the moment generating function

  ⟨e₀, e^{t(B⁻ + B⁺ + M)} e₀⟩ = ∫₀^∞ e^{tx} γ_β(x) dx = 1/(1 − t)^β,   t < 1,

which is the moment generating function of the gamma distribution with parameter β > 0.
More generally, the distribution (or spectral measure) of αM + Q has been
completely determined in [1], depending on the value of α ∈ R:
It follows that there exists an inner product on span{vₙ : n ∈ N} such that the lowest weight representation with

  Me₀ = λe₀,   B⁻e₀ = 0,

  B⁺_{(0)} e₀ = B⁻_{(0)} e₀ = M_{(0)} e₀ = 0

  Y_{(λ)} := B⁺_{(λ)} + B⁻_{(λ)} + βM_{(λ)},   λ ∈ R,

Our objective in the sequel is to determine the Lévy measure of Y_{(λ)}, i.e., to determine the measure μ on R for which we have

  ψ(u) = ∫_{−∞}^∞ (e^{iux} − 1) μ(dx).
en = pn (Y(λ) )e0 , n ∈ N,
= δnm ,
for n, m ∈ N. Looking at (3.11a)-(3.11b) and the definition of Y(λ) , we can
easily identify the three-term recurrence relation satisfied by the pn as
  Y_{(λ)} eₙ = √((n+1)(n+λ)) e_{n+1} + β(2n+λ) eₙ + √(n(n+λ−1)) e_{n−1},

n ∈ N. Therefore, Proposition 1.2.1 shows that the rescaled polynomials

  Pₙ := (∏_{k=1}^n k/(k+λ)) pₙ,   n ∈ N,
For the definition of these polynomials see, e.g., [63, Equation (1.7.1)]. For
the measure μ we get
  μ(dx) = C exp((π − 2 arccos β) x / (2√(1−β²))) |Γ(λ/2 + ix/(2√(1−β²)))|² dx,

where

  c = |β| − √(β² − 1).
where

  xₙ = (n + λ/2)(1/c − c) sgn β,   for n ∈ N,

and

  1/C = Σ_{n=0}^∞ ((λ)ₙ/n!) c^{2n} = (1 − c²)^{−λ}.
The relation

  B⁻ Lₙ^{λ−1}(x) = x (∂/∂x) Lₙ^{λ−1}(x) − n Lₙ^{λ−1}(x) = −(n+λ−1) L_{n−1}^{λ−1}(x),

n ≥ 1, shows that

  B⁻eₙ = (−1)ⁿ √(n! Γ(λ)/(n+λ−1)!) B⁻Lₙ^{λ−1}(x)
    = −(n+λ−1)(−1)ⁿ √(n! Γ(λ)/(n+λ−1)!) L_{n−1}^{λ−1}(x)
    = −√(n(n+λ−1)) (−1)ⁿ √((n−1)! Γ(λ)/(n+λ−2)!) L_{n−1}^{λ−1}(x)
    = √(n(n+λ−1)) e_{n−1}(x),

hence

  B⁺eₙ(x) = √((n+1)(n+λ)) e_{n+1}(x).
  ã⁺ + ã⁻ = 1 − x,

  [ã⁺, ã⁻] = −x,   [ã°, ã⁺] = ã° + ã⁺,   [ã⁻, ã°] = ã° + ã⁻.
We have noted earlier that i(ã− − ã+ ) has a continuous binomial distribution
(or spectral measure) in the vacuum state 1, with hyperbolic cosine density
(2 cosh πξ/2)−1 , in relation to a representation of the subgroup of sl2 (R) made
of upper-triangular matrices.
Next, we also notice that although this type of distribution can be studied
for every value of β > 0 in the above framework, the construction can also
be specialised based on Lemma 2.4.2 for half-integer values of β using the
annihilation and creation operators α_x⁻, α_y⁻, α_x⁺, α_y⁺ on the two-dimensional boson Fock space Γ_s(Ce₁ ⊕ Ce₂).
Defining the operator L as L = −Q̃ − 2ã° with

  Q̃ = 1 − x,   P̃ = −i(2x∂ₓ + 1 − x),

we find that the operators L, −(i/2)P̃ and (i/2)M satisfy the commutation relations of sl₂(R), and hence define a representation of sl₂(R),
44 Basic probability distributions on Lie algebras
also called the Segal–Shale–Weil representation of sl₂(R). Indeed, the above relations can be proved by ordinary differential calculus, as

  (1 − x + x∂ₓ)(−x∂ₓ) − (−x∂ₓ)(1 − x + x∂ₓ)
    = −(1 − x)x∂ₓ − x∂ₓ − x²∂ₓ² + (1 − x)x∂ₓ − x + x²∂ₓ² + x∂ₓ
    = −x,
where

  γ(β, t) = (β cosh(t) + sinh(t)) / (cosh(t) + β sinh(t)).

See Section 4.4 of [45] for a proof of Lemma 3.3.1.
See Section 4.4 of [45] for a proof of Lemma 3.3.1.
Exercises
Exercise 3.1 Define the operators b− and b+ by
b− = −ia− , b+ = ia+ .
The goal of this question is to show that the first three moments of B⁻ + B⁺ + M in the state e₀ coincide with the moments (3.14) of a gamma distribution with shape parameter α > 0, i.e.,
1. for n = 1, show that

  ⟨e₀, (B⁻ + B⁺ + M)e₀⟩ = IE[X],

2. for n = 2, show that

  ⟨e₀, (B⁻ + B⁺ + M)²e₀⟩ = IE[X²],
In these days the angel of topology and the devil of abstract algebra
fight for the soul of each individual mathematical domain.
(H. Weyl, “Invariants”, Duke Mathematical Journal, 1939.)
Starting with this chapter we move from particular examples to the more gen-
eral framework of noncommutative random variables, with an introduction to
the basic concept of noncommutative probability space. In comparison with the
previous chapters which were mostly concerned with distinguished families
of distributions, we will see here how to construct arbitrary distributions in a
noncommutative setting.
for B ∈ B(R), or

  ∫_R f dP_X = ∫_Ω f ∘ X dP

for f : R −→ R a bounded measurable function. The probability measure P_X is called the distribution of X with respect to P.
This construction is not limited to single random variables, as we can also define the joint distribution of an n-tuple X = (X₁, ..., Xₙ) of real random variables by
for all a, b ∈ A, λ, μ ∈ C.
First, we note that the “classical” probability spaces described in Section 4.1
can be viewed as special cases of quantum probability spaces.
Example 4.2.2 (Classical ⊆ Quantum) To a classical probability space (Ω, F, P) we can associate a quantum probability space (A, Φ) by taking
• A := L^∞(Ω, F, P), the algebra of bounded measurable functions f : Ω −→ C, called the algebra of random variables. The involution is given by pointwise complex conjugation, f* = f̄, where f̄(ω) = conj(f(ω)) for ω ∈ Ω.³
• Φ : A ∋ f −→ E(f) = ∫_Ω f dP, which assigns to each random variable its expected value.
i.e.,
⎡ ⎤⎡ ⎤ ⎡ ⎤
x11 x12 ... x1n v1 x11 v1 + x12 v2 + · · · + x1n vn
⎢ x21 x22 ... x2n ⎥⎢ v2 ⎥ ⎢ x21 v1 + x22 v2 + · · · + x2n vn ⎥
⎢ ⎥⎢ ⎥ ⎢ ⎥
⎢ .. .. .. .. ⎥⎢ .. ⎥=⎢ .. ⎥.
⎣ . . . . ⎦⎣ . ⎦ ⎣ . ⎦
xn1 xn2 ... xnn vn xn1 v1 + xn2 v2 + · · · + xnn vn
The involution on A = B(Cⁿ) = Mₙ(C) is defined by complex conjugation and transposition, i.e., X* = X̄ᵀ, or equivalently

  (X*)ᵢⱼ = x̄ⱼᵢ,   1 ≤ i, j ≤ n,

where x̄ᵢⱼ denotes the complex conjugate of xᵢⱼ. For any unit vector ψ ∈ h we can define a state Φ : Mₙ(C) −→ C by

  Φ(X) = ⟨ψ, Xψ⟩,   X ∈ Mₙ(C).
where

  a₁ = (a − c + √((a−c)² + 4|b|²)) / (2√((a−c)² + 4|b|²)),   a₂ = (c − a + √((a−c)² + 4|b|²)) / (2√((a−c)² + 4|b|²)),   (4.4)

and

  λ₁ = (a + c + √((a−c)² + 4|b|²)) / 2,   λ₂ = (a + c − √((a−c)² + 4|b|²)) / 2.   (4.5)

Proof. The characteristic polynomial P_X(z) of X is given by

  P_X(z) = det(zI − X) = det [ z−a  −b ; −b̄  z−c ] = (z − a)(z − c) − |b|² = z² − (a + c)z + ca − |b|²,   z ∈ C.
We note that the zeroes λ₁, λ₂ of the characteristic polynomial P_X of X are given by (4.5), and they are real. Hence for any z ∈ C with Im z ≠ 0, we have det(zI − X) ≠ 0 and we can compute the inverse

  R_X(z) = (zI − X)⁻¹

of zI − X, also called the resolvent of X, as

  (zI − X)⁻¹ = (1/(z² − (a + c)z + ca − |b|²)) [ z−c  b ; b̄  z−a ].

The expectation

  Φ(R_X(z)) = ⟨(1, 0), R_X(z)(1, 0)⟩ = (z − c)/(z² − (a + c)z + ca − |b|²)

of the resolvent in the state Φ can be written by partial fraction decomposition as follows:

  Φ(R_X(z)) = (z − c)/((z − λ₁)(z − λ₂)) = a₁/(z − λ₁) + a₂/(z − λ₂)

with

  a₁ = lim_{z→λ₁} (z − λ₁) Φ(R_X(z)) = (λ₁ − c)/(λ₁ − λ₂) = (a − c + √((a−c)² + 4|b|²)) / (2√((a−c)² + 4|b|²)),
and

  a₂ = lim_{z→λ₂} (z − λ₂) Φ(R_X(z)) = (c − a + √((a−c)² + 4|b|²)) / (2√((a−c)² + 4|b|²)),

as in (4.4).
Note that we have 0 ≤ a₁, a₂ and a₁ + a₂ = 1, so that μ_X(dx) defined in (4.3) is indeed a probability measure on R.
We have shown that the expectation of R_X(z) satisfies

  Φ(R_X(z)) = (z − c)/(z² − (a + c)z + ca − |b|²) = a₁/(z − λ₁) + a₂/(z − λ₂) = ∫_{−∞}^∞ (1/(z − x)) μ_X(dx)

for z ∈ C∖R. From the geometric series

  R_X(z) = (zI − X)⁻¹ = (1/z)(I − X/z)⁻¹ = Σ_{n=0}^∞ Xⁿ/z^{n+1},

and

  1/(z − x) = Σ_{n=0}^∞ xⁿ/z^{n+1},

for |z| sufficiently large, and finally we check that (4.2) holds for all n ∈ N.
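The closed-form eigenvalues and spectral weights derived above can be compared against a numerical Hermitian eigensolver. This is our sketch, in the state given by the first basis vector e₁ = (1, 0).

```python
import numpy as np

# X = [[a, b], [conj(b), c]]; compare closed-form eigenvalues and weights
# with numpy.linalg.eigh (eigenvalues ascending, eigenvectors as columns).
a, c, b = 2.0, -0.5, 1.0 + 1.0j
X = np.array([[a, b], [np.conj(b), c]])
d = np.sqrt((a - c)**2 + 4*abs(b)**2)
lam1, lam2 = (a + c + d)/2, (a + c - d)/2      # closed-form eigenvalues
a1, a2 = (a - c + d)/(2*d), (c - a + d)/(2*d)  # closed-form spectral weights

w, v = np.linalg.eigh(X)                        # w = [lam2, lam1] ascending
print(np.allclose([lam2, lam1], w),
      np.allclose([a2, a1], np.abs(v[0, :])**2))
```

The weight of each eigenvalue in the state e₁ is the squared modulus of the first component of the corresponding unit eigenvector.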
Note that the law of a quantum random variable X is defined with respect to the state Φ. If μ is the law of X in the state Φ, we shall write

  L_Φ(X) = μ.
V(X, λ) = {v ∈ Cn : Xv = λv}.
X : (Ω, F, P) −→ (M, M).
jX : L∞ (M) −→ L∞ ()
by the composition
jX (f ) = f ◦ X, f ∈ L∞ (M).
j : B −→ (A, Φ).
i) linear: we have j(λa + μb) = λj(a) + μj(b), a, b ∈ B,
λ, μ ∈ C;
ii) multiplicative: we have
j(ab) = j(a)j(b), a, b ∈ B;
Note that this definition extends the construction of quantum random variable
given earlier when (A, ) is based on a classical probability space. If X ∈ A
is Hermitian, then we can define a quantum random variable jX on the algebra
C[x] of polynomials in a Hermitian variable x by setting
  j_X(P(x)) = P(X),   P ∈ C[x].
j : B −→ (A, Φ)
j : g −→ (A, Φ).
i) Linearity: we have j(λX + μY) = λj(X) + μj(Y), X, Y ∈ g,
λ, μ ∈ R;
ii) Lie algebra homomorphism: we have
j [X, Y] = j(X)j(Y) − j(Y)j(X), X, Y ∈ g;
j : B −→ (A, Φ)
j : g −→ (A, Φ)
P2 = P = P∗ .
Example 4.4.3
a) If P is a non-trivial orthogonal projection, then σ(P) = {0, 1} and we find

  L_Φ(P) = Φ(P)δ₁ + Φ(I − P)δ₀.
Since in this sense orthogonal projections can only take the values 0 and
1, they can be considered as the quantum probabilistic analogue of events,
i.e., random experiments that have only two possible outcomes – “yes” and
“no” (or “true” and “false”).
b) Consider now the case where Φ is a vector state, i.e.,

  Φ(B) = ⟨ψ, Bψ⟩,   B ∈ Mₙ(C),

for some unit vector ψ ∈ Cⁿ. Let

  X = Σ_{λ∈σ(X)} λ E_λ

be a quantum random variable in (Mₙ(C), Φ), then the weights Φ(E_λ) in the law of X with respect to Φ,

  L_Φ(X) = Σ_{λ∈σ(X)} Φ(E_λ) δ_λ,
are given by

  Φ(E_λ) = ⟨ψ, E_λψ⟩ = ‖E_λψ‖²,

i.e., the probability with which X takes a value λ with respect to the state associated to ψ is exactly the square of the length of the projection of ψ onto the eigenspace V(X, λ):

  L_Φ(X) = Σ_{λ∈σ(X)} ‖E_λψ‖² δ_λ.
We note that two vectors ψ and λψ that differ only by a complex factor λ with modulus |λ| = 1 define the same state, since we have

  ⟨Xλψ, λψ⟩ = |λ|² ⟨Xψ, ψ⟩ = ⟨Xψ, ψ⟩.
ξ(x)^k =
  (−1)^ℓ (||x||²/4)^ℓ ξ(x),   if k = 2ℓ + 1 is odd,
  (−1)^ℓ (||x||²/4)^ℓ I,      if k = 2ℓ is even.
Therefore we have
exp(tξ(x)) = I + tξ(x) − (t²/2!)(||x||²/4) I − (t³/3!)(||x||²/4) ξ(x) + (t⁴/4!)(||x||⁴/16) I ± · · ·
= cos(t||x||/2) I + (2/||x||) sin(t||x||/2) ξ(x).   (4.6)
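Formula (4.6) is easy to test numerically. The following sketch (an illustration, not part of the text; it assumes NumPy and SciPy are available) compares the matrix exponential of tξ(x), with ξ(x) = −(i/2)[[x3, x1 − ix2], [x1 + ix2, −x3]] as below, against the closed form:

```python
import numpy as np
from scipy.linalg import expm

def xi(x):
    # xi(x) = -(i/2) [[x3, x1 - i x2], [x1 + i x2, -x3]]
    x1, x2, x3 = x
    return -0.5j * np.array([[x3, x1 - 1j * x2],
                             [x1 + 1j * x2, -x3]])

x = np.array([1.0, 2.0, 2.0])
r = np.linalg.norm(x)          # ||x|| = 3
t = 0.7

lhs = expm(t * xi(x))
rhs = np.cos(t * r / 2) * np.eye(2) + (2 / r) * np.sin(t * r / 2) * xi(x)
assert np.allclose(lhs, rhs)   # formula (4.6)
```

The identity holds because ξ(x)² = −(||x||²/4) I, so the exponential series splits into cosine and sine parts.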
For the Fourier transform of the distribution of the quantum random variable
J(x) with respect to the state given by ψ, this yields
⟨ψ, exp(itJ(x)) ψ⟩ = ⟨ψ, exp(−tξ(x)) ψ⟩
= cos(t||x||/2) ⟨ψ, Iψ⟩ − (2/||x||) sin(t||x||/2) ⟨ψ, ξ(x)ψ⟩.
But
⟨ψ, ξ(x)ψ⟩ = ⟨(cos(θ/2), e^{iφ} sin(θ/2)), −(i/2) [[x1? x3, x1 − ix2], [x1 + ix2, −x3]] (cos(θ/2), e^{iφ} sin(θ/2))⟩
= −(i/2) ( x1 (e^{iφ} + e^{−iφ}) sin(θ/2) cos(θ/2) − ix2 (e^{iφ} − e^{−iφ}) sin(θ/2) cos(θ/2) + x3 (cos²(θ/2) − sin²(θ/2)) )
= −(i/2) ( x1 sin θ cos φ + x2 sin θ sin φ + x3 cos θ )
= −(i/2) ⟨(cos φ sin θ, sin φ sin θ, cos θ), (x1, x2, x3)⟩
= −(i/2) ⟨B(ψ), x⟩,
where the vector ψ = e1 cos(θ/2) + e−1 e^{iφ} sin(θ/2) is visualised as the point
B(ψ) = (cos φ sin θ, sin φ sin θ, cos θ)
on the unit sphere¹ with polar coordinates (θ, φ) in R³. Let us now denote by
γ := ⟨B(ψ), x/||x||⟩ ∈ [−1, 1]
the cosine of the angle between B(ψ) and x. We have
⟨ψ, exp(itJ(x)) ψ⟩ = cos(t||x||/2) + iγ sin(t||x||/2),
which shows that the distribution of the Hermitian element J(x) in the state
associated to the vector ψ is given by
L_ψ(J(x)) = ((1 − γ)/2) δ_{−||x||/2} + ((1 + γ)/2) δ_{||x||/2}.
1 The unit sphere is also called the Bloch sphere in this case.
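The weights (1 ± γ)/2 can be checked numerically as the squared projections of ψ onto the eigenspaces of J(x) = (1/2)(x · σ); the following sketch (an illustration assuming NumPy, not part of the text) does this for generic parameters:

```python
import numpy as np

def J(x):
    # J(x) = (1/2) [[x3, x1 - i x2], [x1 + i x2, -x3]]
    x1, x2, x3 = x
    return 0.5 * np.array([[x3, x1 - 1j * x2],
                           [x1 + 1j * x2, -x3]])

theta, phi_ang = 0.8, 1.3
psi = np.array([np.cos(theta / 2), np.exp(1j * phi_ang) * np.sin(theta / 2)])
B = np.array([np.cos(phi_ang) * np.sin(theta),
              np.sin(phi_ang) * np.sin(theta),
              np.cos(theta)])                 # Bloch vector of psi

x = np.array([0.3, -1.1, 0.7])
r = np.linalg.norm(x)
gamma = B @ x / r                             # cosine of the angle

vals, vecs = np.linalg.eigh(J(x))             # eigenvalues -r/2, r/2
w = np.abs(vecs.conj().T @ psi) ** 2          # weights ||E_lambda psi||^2
assert np.allclose(vals, [-r / 2, r / 2])
assert np.allclose(w, [(1 - gamma) / 2, (1 + gamma) / 2])
```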
ξ(x) = −i [[x3, (x1 − ix2)/√2, 0], [(x1 + ix2)/√2, 0, (x1 − ix2)/√2], [0, (x1 + ix2)/√2, −x3]]
and
J(x) = [[x3, (x1 − ix2)/√2, 0], [(x1 + ix2)/√2, 0, (x1 − ix2)/√2], [0, (x1 + ix2)/√2, −x3]].
Therefore, we have
ξ(x)² = − [[x3² + (x1² + x2²)/2, x3(x1 − ix2)/√2, (x1 − ix2)²/2],
          [x3(x1 + ix2)/√2, x1² + x2², −x3(x1 − ix2)/√2],
          [(x1 + ix2)²/2, −x3(x1 + ix2)/√2, x3² + (x1² + x2²)/2]],
and
⎧
⎪
⎪ I if n = 0,
⎨
ξ(x) =
n
(−||x||2 )m ξ if n is odd, n = 2m + 1,
⎪
⎪
⎩ (−||x||2 )m ξ 2 if n ≥ 2 is even, n = 2m + 2.
⟨ψ, exp(itJ(x)) ψ⟩ = 1 + ix3 sin(t||x||)/||x|| − (x1² + x2² + 2x3²)(1 − cos(t||x||))/(2||x||²),
which shows that J(x) has distribution
L_ψ(J(x)) = ((1 − γ)²/4) δ_{−||x||} + ((1 − γ²)/2) δ0 + ((1 + γ)²/4) δ_{||x||},
where γ = x3/||x|| is the cosine of the angle between B(ψ) and x. This is a
binomial distribution with parameters n = 2 and p = (1 + γ)/2.
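The spin-1 weights ((1 − γ)²/4, (1 − γ²)/2, (1 + γ)²/4) can likewise be recovered from the eigendecomposition of the 3 × 3 matrix J(x) in the state given by the first basis vector; a small numerical illustration (not from the text, assuming NumPy):

```python
import numpy as np

def J1(x):
    # spin-1 matrix J(x) as displayed above
    x1, x2, x3 = x
    b = (x1 - 1j * x2) / np.sqrt(2)
    return np.array([[x3, b, 0],
                     [np.conj(b), 0, b],
                     [0, np.conj(b), -x3]])

x = np.array([0.6, -0.3, 1.2])
r = np.linalg.norm(x)
gamma = x[2] / r                          # cosine of the angle with the north pole

vals, vecs = np.linalg.eigh(J1(x))        # eigenvalues -r, 0, r
psi = np.array([1.0, 0.0, 0.0])           # the vector e1
w = np.abs(vecs.conj().T @ psi) ** 2      # weights ||E_lambda psi||^2
assert np.allclose(vals, [-r, 0.0, r])
assert np.allclose(w, [(1 - gamma) ** 2 / 4, (1 - gamma ** 2) / 2, (1 + gamma) ** 2 / 4])
```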
b_p = pδ+1 + qδ−1,
ξ0 1{x} = x 1{x},
ξ+ 1{x} =
  0,              if x = +1,
  √(q/p) 1{+1},   if x = −1,
ξ− 1{x} =
  √(p/q) 1{−1},   if x = +1,
  0,              if x = −1,
for x ∈ {−1, +1}. Clearly, ξ0 is Bernoulli distributed in the state given by the
constant function 1, i.e., L1 (ξ0 ) = bp . More generally, let us consider the
elements
Xθ = cos(θ) ξ0 + sin(θ)(ξ+ + ξ−),
which corresponds to the rotation
Rθ = [[cos(θ), 0, sin(θ)], [0, 1, 0], [−sin(θ), 0, cos(θ)]].
Therefore, we have
⟨1, exp(itXθ) 1⟩ = ⟨1, e^{θξ2} exp(itX0) e^{−θξ2} 1⟩ = ⟨gθ, exp(itX0) gθ⟩,
with
gθ = e^{−θξ2} 1
= cos(θ/2) 1 − 2 sin(θ/2) ξ2 1
= (cos(θ/2) + √(q/p) sin(θ/2)) 1{+1} + (cos(θ/2) − √(p/q) sin(θ/2)) 1{−1},
where we could use Equation (4.6) to compute the exponential of
−θξ2 = (θ/2) [[0, −√(q/p)], [√(p/q), 0]].
4.6 Trace and density matrix 65
We see that the law of Xθ has density |gθ|² with respect to the law of X0, which
gives
L1(Xθ) = p (cos(θ/2) + √(q/p) sin(θ/2))² δ+1 + q (cos(θ/2) − √(p/q) sin(θ/2))² δ−1
= (1/2)(1 + (2p − 1) cos(θ) + 2√(pq) sin(θ)) δ+1
+ (1/2)(1 − (2p − 1) cos(θ) − 2√(pq) sin(θ)) δ−1.
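A quick matrix check of this Bernoulli interpolation (an illustration, not from the text, assuming NumPy): in the basis (1{+1}, 1{−1}) the operators ξ0, ξ± are 2 × 2 matrices, Xθ squares to the identity, and the weight of δ+1 is (1 + Φ(Xθ))/2 with Φ the state given by the constant function 1 under P(+1) = p:

```python
import numpy as np

p, q = 0.3, 0.7
theta = 0.9
a = np.sqrt(q / p)

xi0 = np.diag([1.0, -1.0])
xip = np.array([[0.0, a], [0.0, 0.0]])        # xi_+ maps 1_{-1} to sqrt(q/p) 1_{+1}
xim = np.array([[0.0, 0.0], [1.0 / a, 0.0]])  # xi_- maps 1_{+1} to sqrt(p/q) 1_{-1}

X = np.cos(theta) * xi0 + np.sin(theta) * (xip + xim)
assert np.allclose(X @ X, np.eye(2))          # spectrum {-1, +1}

one = np.ones(2)
mean = p * (X @ one)[0] + q * (X @ one)[1]    # Phi(X_theta)
w_plus = (1 + mean) / 2
target = 0.5 * (1 + (2 * p - 1) * np.cos(theta) + 2 * np.sqrt(p * q) * np.sin(theta))
assert np.isclose(w_plus, target)
```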
tr(I) = 1,   (4.7)
tr(AB) = tr(BA),   (4.8)
and it is given by
tr(A) = (1/n) Σ_{j=1}^n a_{jj}   (4.9)
for A = (a_{jk}) ∈ Mn(C). The trace is a state. We can compute the trace of a
matrix A ∈ Mn(C) also as
tr(A) = (1/n) Σ_{j=1}^n ⟨ej, Aej⟩.   (4.10)
Indeed, we have
tr(I) = tr((δ_{jk})_{1≤j,k≤n}) = (1/n) Σ_{j=1}^n δ_{jj} = 1
and
tr(AB) = tr((Σ_{k=1}^n a_{jk} b_{kℓ})_{1≤j,ℓ≤n})
= (1/n) Σ_{j,ℓ=1}^n a_{jℓ} b_{ℓj}
= (1/n) Σ_{j,ℓ=1}^n b_{ℓj} a_{jℓ} = tr(BA).
b) Uniqueness. Denote by e_{jk}, 1 ≤ j, k ≤ n, the matrix units, i.e., e_{jk} is the
matrix with all coefficients equal to zero except for the coefficient in the j-th
row and k-th column, which is equal to 1,
e_{jk} = (δ_{jr} δ_{ks})_{1≤r,s≤n}.
The n² matrix units {e11, . . . , e1n, e21, . . . , enn} form a basis of Mn(C). Therefore
two linear functionals coincide if they have the same values on all matrix
units. For the trace tr we have
tr(e_{jk}) =
  0,     if j ≠ k,
  1/n,   if j = k.
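The defining properties (4.7)–(4.9) of the normalised trace, and its values on matrix units, are easy to confirm numerically; a small sketch (an illustration assuming NumPy, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

tr = lambda M: np.trace(M) / n                 # normalised trace (4.9)
assert np.isclose(tr(np.eye(n)), 1)            # (4.7)
assert np.isclose(tr(A @ B), tr(B @ A))        # (4.8)

e12 = np.zeros((n, n)); e12[0, 1] = 1          # matrix unit e_{12}
e22 = np.zeros((n, n)); e22[1, 1] = 1          # matrix unit e_{22}
assert tr(e12) == 0 and np.isclose(tr(e22), 1 / n)
```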
Note that we have the following formula for the multiplication of the matrix
units,
e_{jk} e_{ℓm} = δ_{kℓ} e_{jm}.
Let now f be a linear functional satisfying (4.7) and (4.8). For j ≠ k we get
f(e_{jk}) = f(e_{jk} e_{kk}) = f(e_{kk} e_{jk}) = 0,
since f satisfies (4.8). We also have e_{jk} e_{kj} = e_{jj} and e_{kj} e_{jk} = e_{kk}, so (4.8) implies
f(e_{jj}) = f(e_{kk})
for any j, k ∈ {1, . . . , n}. This means that there exists a constant c ∈ C such that
f(e_{jj}) = c
for all j; together with (4.7) this yields 1 = f(I) = Σ_{j=1}^n f(e_{jj}) = nc, i.e., c = 1/n and f = tr.
Conversely, the linear functional
f : Mn(C) −→ C
defined by
f(A) = (1/n) Σ_{j=1}^n ⟨ej, Aej⟩
satisfies Equations (4.7) and (4.8). The first is obvious: we clearly have
f(I) = (1/n) Σ_{j=1}^n ⟨ej, Iej⟩ = (1/n) Σ_{j=1}^n ||ej||² = 1.
Writing v ∈ Cn in the orthonormal basis (ej) as
v = Σ_{j=1}^n ⟨ej, v⟩ ej,
we have
f(AB) = (1/n) Σ_{j=1}^n ⟨ej, ABej⟩
= (1/n) Σ_{j=1}^n ⟨ej, A Σ_{ℓ=1}^n ⟨eℓ, Bej⟩ eℓ⟩
= (1/n) Σ_{j,ℓ=1}^n ⟨ej, Aeℓ⟩ ⟨eℓ, Bej⟩
= (1/n) Σ_{j,ℓ=1}^n ⟨eℓ, Bej⟩ ⟨ej, Aeℓ⟩
= (1/n) Σ_{ℓ=1}^n ⟨eℓ, B Σ_{j=1}^n ⟨ej, Aeℓ⟩ ej⟩
= (1/n) Σ_{ℓ=1}^n ⟨eℓ, BAeℓ⟩
= f(BA).
This formula shows that the trace is a state: we have
tr(I) = (1/n) Σ_{j=1}^n ⟨ej, ej⟩ = 1,
and tr(A∗A) = (1/n) Σ_{j=1}^n ||Aej||² ≥ 0 for all A ∈ Mn(C).
Let ρ ∈ Mn(C) be a positive matrix with trace one. Then we can define a
state Φ on Mn(C) by
Φ(A) = tr(ρA)
for A ∈ Mn(C). Indeed, since tr(ρ) = 1 we have Φ(I) = tr(ρI) = 1, and
since ρ is positive, there exists a matrix B ∈ Mn(C) such that ρ = B∗B and
therefore
Φ(A) = tr(ρA) = tr(B∗BA) = tr(BAB∗) ≥ 0
for any positive matrix A ∈ Mn(C). Here we used the fact that A is of the
form A = C∗C, since it is positive, and therefore BAB∗ = (CB∗)∗(CB∗) is also
positive.
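A density-matrix state is easy to build and test numerically; the sketch below (an illustration, not part of the text, assuming NumPy and using the unnormalised trace for the normalisation of ρ) constructs ρ = B∗B/Tr(B∗B) and checks positivity and normalisation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
Bm = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = Bm.conj().T @ Bm
rho /= np.trace(rho)                     # positive with trace one

Phi = lambda A: np.trace(rho @ A)        # the state Phi(A) = tr(rho A)
assert np.isclose(Phi(np.eye(n)), 1)     # Phi(I) = 1

C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Apos = C.conj().T @ C                    # a positive matrix A = C*C
val = Phi(Apos)
assert val.real >= 0 and abs(val.imag) < 1e-12   # positivity of Phi
```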
All states on Mn(C) are of this form: for any state Φ on Mn(C) there exists a
positive matrix ρ with trace equal to one such that
Φ(A) = tr(ρA)
for all A ∈ Mn(C). Its coefficients can be calculated as
ρ_{jk} = n Φ(e_{kj}),
where
e_{kj} := (δ_{kr} δ_{js})_{1≤r,s≤n}
denotes the matrix with a 1 in the k-th row and j-th column and all other
coefficients equal to zero.
The theorem can be deduced from the fact that Mn(C) is a Hilbert space
with the inner product
⟨A, B⟩ = tr(A∗B),   A, B ∈ Mn(C),
[Table 4.1: classical versus quantum probabilistic terminology.]
We found that the law of J(x) in the state with state vector ψ is given by
L_ψ(J(x)) = ((1 − γ)/2) δ_{−1/2} + ((1 + γ)/2) δ_{1/2},
where γ is the cosine of the angle between B(ψ) and x,
γ = ⟨B(ψ), x/||x||⟩.
So the measurement of the component of the spin in direction x of a spin-1/2
particle whose spin points in the direction B(ψ) will give +1/2 with
probability (1 + γ)/2, and −1/2 with probability (1 − γ)/2.
The other representations correspond to particles with higher spin; the (n + 1)-
dimensional representation describes a particle with spin n/2. In particular, for
n = 2, we have spin-1 particles, cf. Reference [41, volume 3, chapter 5 "Spin
one"].
Notes
On so(3), see [20, 21] for the Rotation Group SO(3), its Lie algebra so(3),
and their applications to physics. See, e.g., [36, 37, 100] for Krawtchouk
polynomials and their relation to the binomial process (or Bernoulli random
walk).
Table 4.1 presents an overview of the terminology used in classical and
quantum probability, as in e.g., [88].
Exercises
Exercise 4.1 Let n ∈ N, A ∈ Mn(C) be a Hermitian matrix, with spectral
decomposition A = Σ_{λ∈σ(A)} λEλ. Let f : R → C be a function.
1. Find a polynomial p(x) = Σ_{k=0}^m p_k x^k with
p(λ) = f(λ) for all λ ∈ σ(A).
2. Show that for this polynomial we have
f(A) := p(A) = Σ_{λ∈σ(A)} f(λ) Eλ.
Exercise 4.2 In the framework of the Examples of Section 4.4, define further
n0 = Σ_{j=1}^n ξ0^{(j)},   n+ = Σ_{j=1}^n ξ+^{(j)},   n− = Σ_{j=1}^n ξ−^{(j)}.
1. Show that
(n0)∗ = n0 and (n+)∗ = n−.
2. Show that the indicator functions 1{x} are eigenvectors of n0, with
eigenvalues equal to the difference of the number of +1s and −1s in x.
3. Show that n0 has a binomial distribution on the set
{−n, −n + 2, . . . , n − 2, n} and compute the density of this distribution
with respect to the constant function.
4. Compute the law of
nθ = n0 + θ(n+ + n−)
and discuss possible connections with the Krawtchouk polynomials.
Exercise 4.3 Calculate Φ(X³) in (4.1).
5
Noncommutative stochastic integration
Σ_{j,k=1}^n λ̄_j λ_k K(x_j, x_k) ≥ 0.
Example 5.1.2
The following theorem shows that all positive definite kernels are in a sense of
the form of the examples above.
Recall that a subset M of a Hilbert space H is total if any vector x ∈ H
satisfying
⟨y, x⟩ = 0 for all y ∈ M
is necessarily the zero vector. This is equivalent to the linear span of M being
dense in H.
There are several constructions that allow us to produce positive definite
kernels. For example, if K, L : X × X −→ C are positive definite kernels,
then K · L : X × X −→ C with (K · L)(x, y) = K(x, y)L(x, y) is again positive definite,
where
Tr(A) = Σ_{j=1}^n a_{jj},   A = (a_{jk}) ∈ Mn(C),
Proposition 5.1.5 Let H be a Hilbert space. Then there exists a Hilbert space,
denoted exp(H), spanned by the set
{E(h) : h ∈ H},
whose inner product satisfies
⟨E(h1), E(h2)⟩ = exp⟨h1, h2⟩
for h1, h2 ∈ H.
Proof: Since the inner product of a Hilbert space is a positive definite kernel,
the preceding discussion shows that
K(h1, h2) = exp⟨h1, h2⟩
defines a positive definite kernel on H. Theorem 5.1.3 then gives the existence
of the Hilbert space exp H.
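Positive definiteness of the kernel exp⟨h1, h2⟩ can be illustrated numerically by checking that a Gram matrix built from it has positive eigenvalues (a sketch assuming NumPy, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
pts = rng.standard_normal((6, 3))      # vectors h_1, ..., h_6 in R^3
G = np.exp(pts @ pts.T)                # Gram matrix K(h_j, h_k) = exp<h_j, h_k>
eigs = np.linalg.eigvalsh(G)
assert eigs.min() > 0                  # positive definite kernel
```

This reflects the Schur product theorem: each entrywise power of a positive semidefinite Gram matrix is positive semidefinite, hence so is the entrywise exponential.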
Remark 5.1.6 The Hilbert space exp H is called the Fock space over H,
or the symmetric or boson Fock space over H. We will also use the notation
Γs(H). We will briefly see another kind of Fock space, the full or free Fock
space F(H) over a given Hilbert space H, in the next paragraph. But otherwise
we will only use the symmetric Fock space and call it simply Fock space when
there is no danger of confusion.
H⊗n := H ⊗ H ⊗ · · · ⊗ H   (n times),   n ≥ 2.
The algebraic direct sum Falg(H) carries the inner product
⟨(hn)_{n∈N}, (kn)_{n∈N}⟩ = Σ_{n=0}^∞ ⟨hn, kn⟩_{H⊗n}
for (hn)_{n∈N}, (kn)_{n∈N} ∈ Falg(H), i.e., with hn, kn ∈ H⊗n for n ≥ 0. The inner
product on H⊗0 = C is of course simply ⟨z1, z2⟩_C = z̄1 z2. Upon completion
we get the Hilbert space
F(H) = ⊕_{n=0}^∞ H⊗n = {(hn)_{n∈N} : hn ∈ H⊗n, Σ_{n=0}^∞ ||hn||²_{H⊗n} < ∞},
H◦n = Sn(H⊗n),
and the symmetric Fock space as the completed direct sum of the symmetric
tensor powers,
Γs(H) = ⊕_{n=0}^∞ H◦n.
If we denote by S the direct sum of the symmetrisation operators, i.e.,
S((hn)_{n=0}^∞) = (Sn(hn))_{n=0}^∞
for (hn)_{n=0}^∞ ∈ F(H), then we have
Γs(H) = S(F(H)).
v⊗n = v ⊗ · · · ⊗ v   (n times)
are total in the symmetric tensor power H◦n, see Exercise 5.1. Let us now use
the exponential vectors
E(h) = (h⊗n/√n!)_{n=0}^∞
to show that the symmetric Fock space which we just constructed is the same
as the space exp H that we obtained previously from the theory of positive
definite kernels. We have
Σ_{n=0}^∞ ||h⊗n/√n!||²_{H⊗n} = Σ_{n=0}^∞ ||h||^{2n}/n! = exp ||h||² < ∞,
and therefore E(h) ∈ F(H). Since each term is a product vector, we have
furthermore
S(E(h)) = E(h),
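The exponential-vector inner product ⟨E(h1), E(h2)⟩ = exp⟨h1, h2⟩ can be verified on a finite-dimensional H by building truncated tensor powers explicitly (an illustration assuming NumPy, not part of the text; the truncation level N is a numerical convenience):

```python
import math
import numpy as np

h1 = np.array([0.3, 0.4j])
h2 = np.array([0.2, -0.1 + 0.05j])

N = 12          # truncation of the exponential series
s = 0j
t1 = np.array([1.0 + 0j])   # h1^{⊗0}
t2 = np.array([1.0 + 0j])   # h2^{⊗0}
for n in range(N):
    s += np.vdot(t1, t2) / math.factorial(n)   # <h1^{⊗n}, h2^{⊗n}> / n!
    t1 = np.kron(t1, h1)                       # next tensor power
    t2 = np.kron(t2, h2)

assert np.isclose(s, np.exp(np.vdot(h1, h2)))
```

Here np.vdot conjugates its first argument, matching the convention that the inner product is antilinear in the first variable.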
by setting
an−(h) v⊗n := √n ⟨h, v⟩ v⊗(n−1),
an+(h) v⊗n := √(n+1) S_{n+1}(h ⊗ v⊗n) = (1/√(n+1)) Σ_{k=0}^n v⊗k ⊗ h ⊗ v⊗(n−k),
an◦(T) v⊗n := Σ_{k=1}^n v⊗(k−1) ⊗ Tv ⊗ v⊗(n−k),
on tensor powers v⊗n ∈ H◦n. These operators and their extensions to Γs(H) are
called the annihilation operator, the creation operator, and the conservation
operator, respectively. The conservation operator with T = I is also called the
number operator, since it acts as
an◦(I) v◦n = n v◦n,
i.e., it has the symmetric tensor powers as eigenspaces and the eigenvalues give
exactly the order of the tensor power.
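For H = C these operators reduce to the familiar ladder matrices, which makes the definitions easy to test numerically on a truncated Fock space (an illustration assuming NumPy, not part of the text; the truncation dimension d is arbitrary):

```python
import numpy as np

d = 8
# annihilation a^- on span{e_0, ..., e_{d-1}}: a^- e_n = sqrt(n) e_{n-1}
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)
ad = a.T                                   # creation a^+ (real matrix, .T = adjoint)

comm = a @ ad - ad @ a
# CCR [a^-, a^+] = I holds away from the truncation edge
assert np.allclose(comm[:d - 1, :d - 1], np.eye(d - 1))

Nop = ad @ a                               # number operator a^+ a^-
assert np.allclose(np.diag(Nop), np.arange(d))
```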
We set H◦(−1) = {0}; then the 0-th order annihilation operator a0−(h) :
H◦0 = C −→ H◦(−1) = {0} must clearly be the zero operator, i.e., a0−(h)(z) = 0
for any h ∈ H and z ∈ C. The direct sums
a−(h) = ⊕_{n=0}^∞ an−(h),   a+(h) = ⊕_{n=0}^∞ an+(h),   and   a◦(T) = ⊕_{n=0}^∞ an◦(T)
are well-defined on the algebraic direct sum of the tensor powers. We have
(a−(h))∗ = a+(h)   and   (a◦(T))∗ = a◦(T∗),
so these operators have adjoints and therefore are closable. They extend to
densely defined, closable, (in general) unbounded operators on Γs(H).
There is another way to let operators T ∈ B(H) act on the Fock spaces F(H)
and Γs(H), namely by setting
Γ(T)(v1 ⊗ · · · ⊗ vn) := (Tv1) ⊗ · · · ⊗ (Tvn)
for v1, . . . , vn ∈ H. The operator Γ(T) is called the second quantisation of T. It
is easy to see that Γ(T) leaves the symmetric Fock space invariant. The second
quantisation operator Γ(T) is bounded if and only if T is a contraction, i.e.,
||T|| ≤ 1. The conservation operator a◦(T) of an operator T ∈ B(H) can be
recovered from the operators (Γ(e^{tT}))_{t∈R} via
a◦(T) = d/dt|_{t=0} Γ(e^{tT})
a−(h) E(k) = ⟨h, k⟩ E(k),
a+(h) E(k) = d/dt|_{t=0} E(k + th),
a◦(T) E(k) = d/dt|_{t=0} E(e^{tT} k),
with h, k ∈ H, T ∈ B(H). The creation, annihilation, and conservation operators
satisfy the commutation relations
[a−(h), a−(k)] = [a+(h), a+(k)] = 0,
[a−(h), a+(k)] = ⟨h, k⟩ I,
[a◦(T), a◦(S)] = a◦([T, S]),   (5.1)
[a◦(T), a−(h)] = −a−(T∗h),
[a◦(T), a+(h)] = a+(Th),
for h, k ∈ H, S, T ∈ B(H), cf. [87, Proposition 20.12]. Since the operators are
unbounded, these relations can only hold on some appropriate domain. One
can take, e.g., the algebraic direct sum of the symmetric tensor powers of H,
since this is a common invariant domain for these operators. Another common
way to give a meaning to products is to evaluate them between exponential
vectors. The condition
⟨a+(h)E(ℓ1), a+(k)E(ℓ2)⟩ − ⟨a−(k)E(ℓ1), a−(h)E(ℓ2)⟩ = ⟨h, k⟩ ⟨E(ℓ1), E(ℓ2)⟩
Proof: It is easy to check that U preserves the inner product between exponential
vectors. The theorem therefore follows from the totality of these
vectors.
H = L²(R₊, h) ≅ L²(R₊) ⊗ h
with some Hilbert space h. Since we can write H as a direct sum
H = L²([0, t], h) ⊕ L²([t, ∞), h),   t ∈ R₊,
the operators
a−t(h) := a−(1_{[0,t]} h),   a+t(h) := a+(1_{[0,t]} h),   a◦t(T) := a◦(1_{[0,t]} T),
satisfy
⟨E(k1), a−t(h) E(k2)⟩ = ∫₀ᵗ ⟨h, k2(s)⟩_h ds ⟨E(k1), E(k2)⟩,
⟨E(k1), a+t(h) E(k2)⟩ = ∫₀ᵗ ⟨k1(s), h⟩_h ds ⟨E(k1), E(k2)⟩,
⟨E(k1), a◦t(T) E(k2)⟩ = ∫₀ᵗ ⟨k1(s), T k2(s)⟩_h ds ⟨E(k1), E(k2)⟩,
T ∈ B(h), h ∈ h, k1, k2 ∈ H. An important notion in stochastic calculus is
adaptedness. In our setting this is defined in the following way.
Definition 5.3.1 Let h be a Hilbert space and set H = L²(R₊, h). Let t ∈ R₊.
Let π = {0 = t0 < t1 < · · · < tn = t}
be a partition of the interval [0, t]; then the corresponding approximation of the
stochastic integral
I(t) = ∫₀ᵗ Xs daεs(h),   t ∈ R₊,
with ε ∈ {−, ◦, +} and h a vector in h (if ε ∈ {−, +}) or an operator on h (if
ε = ◦), is defined by
Iπ(t) := Σ_{k=1}^n X_{t_{k−1}} (aε_{t_k}(h) − aε_{t_{k−1}}(h)),   t ∈ R₊.
One then lets the mesh of the partition π go to zero, and defines the quantum
stochastic integral ∫₀ᵗ Xs daεs(h) as the limit. Evaluating a Riemann–Stieltjes
sum over two exponential vectors yields
⟨E(k1), Iπ(t) E(k2)⟩ =
  Σ_{k=1}^n ⟨E(k1), X_{t_{k−1}} E(k2)⟩ ∫_{t_{k−1}}^{t_k} ⟨k1(s), h⟩_h ds,     if ε = +,
  Σ_{k=1}^n ⟨E(k1), X_{t_{k−1}} E(k2)⟩ ∫_{t_{k−1}}^{t_k} ⟨h, k2(s)⟩_h ds,     if ε = −,
  Σ_{k=1}^n ⟨E(k1), X_{t_{k−1}} E(k2)⟩ ∫_{t_{k−1}}^{t_k} ⟨k1(s), h k2(s)⟩_h ds,  if ε = ◦.
In the limit one has, for ℓ1, ℓ2 ∈ H,
⟨daεs(h) E(ℓ1), E(ℓ2)⟩ = m1 ds ⟨E(ℓ1), E(ℓ2)⟩ and ⟨E(ℓ1), daδs(k) E(ℓ2)⟩ = m2 ds ⟨E(ℓ1), E(ℓ2)⟩,
with
ε:    −                 ◦                      +
m1:   ⟨ℓ1(s), h⟩_h      ⟨hℓ1(s), ℓ2(s)⟩_h     ⟨h, ℓ2(s)⟩_h
δ:    −                 ◦                      +
m2:   ⟨k, ℓ2(s)⟩_h      ⟨ℓ1(s), kℓ2(s)⟩_h     ⟨ℓ1(s), k⟩_h
while the product ⟨daεs(h) E(ℓ1), daδs(k) E(ℓ2)⟩ contains, up to the factor ds ⟨E(ℓ1), E(ℓ2)⟩, the additional correction term
ε\δ   −    ◦                       +
−     0    0                       0
◦     0    ⟨hℓ1(s), kℓ2(s)⟩_h      ⟨hℓ1(s), k⟩_h
+     0    ⟨h, kℓ2(s)⟩_h           ⟨h, k⟩_h
A stronger form of the Itô formula, which holds on an appropriate domain and
under appropriate conditions on the integrands, is
I_t J_t = ∫₀ᵗ I_s dJ_s + ∫₀ᵗ dI_s J_s + ∫₀ᵗ (dI • dJ)_s,
where the product in the last term is computed according to the rule
Xt daεt(h) • Yt daδt(k) = Xt Yt (daεt(h) • daδt(k))
•            da−t(k)    da◦t(k)       da+t(k)
da+t(h)      0          0             0
da◦t(h)      0          da◦t(hk)      da+t(hk)
da−t(h)      0          da−t(k∗h)     ⟨h, k⟩ dt
If one adds the differential dt and sets all products involving dt equal to zero,
then
span {da+ (h) : h ∈ h} ∪ {da◦ (T) : T ∈ B(h)} ∪ {da− (h) : h ∈ h} ∪ {dt}
becomes an associative algebra with the Itô product • called the Itô algebra
over h. If dim h = n, then the Itô algebra over h has dimension (n + 1)2 .
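Associativity of the Itô product can be checked mechanically for dim h = 1 by encoding the table above as structure constants on the basis (da+, da◦, da−, dt); the sketch below (an illustration assuming NumPy, not part of the text) verifies (x • y) • z = x • (y • z):

```python
import numpy as np

# structure constants C[i, j, l]: coefficient of basis element l in e_i • e_j,
# with basis order 0 = da+, 1 = da°, 2 = da-, 3 = dt (h = C, so hk is a scalar)
C = np.zeros((4, 4, 4))
C[2, 1, 2] = 1   # da- • da° = da-
C[2, 0, 3] = 1   # da- • da+ = dt
C[1, 1, 1] = 1   # da° • da° = da°
C[1, 0, 0] = 1   # da° • da+ = da+
# all other products, including anything involving dt, vanish

lhs = np.einsum('ijm,mkl->ijkl', C, C)   # (e_i • e_j) • e_k
rhs = np.einsum('jkm,iml->ijkl', C, C)   # e_i • (e_j • e_k)
assert np.allclose(lhs, rhs)             # associativity of the Itô algebra
```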
Example 5.4.2
To realise classical Brownian motion on a Fock space, we can take h = C and
set
B_t := ∫₀ᵗ da−_s + ∫₀ᵗ da+_s,
where we wrote a±_s for a±_s(1). Then the quantum stochastic Itô formula given
earlier shows that
B_t² = 2∫₀ᵗ B_s da−_s + 2∫₀ᵗ B_s da+_s + ∫₀ᵗ ds = 2∫₀ᵗ B_s dB_s + t,
i.e., we recover the well-known result from classical Itô calculus. The integral
for B_t can of course be computed explicitly; we get
B_t = a−_t + a+_t = a−(1_{[0,t]}) + a+(1_{[0,t]}).
We have already shown in Section 3.1 that the sum of the creation and the
annihilation operators is Gauss distributed.
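The Gaussian character of a− + a+ in the vacuum state can be seen numerically from its moments, which for a single mode should be the standard Gaussian moments 0, 1, 0, 3, 15, … (an illustration assuming NumPy, not part of the text; d is a truncation level):

```python
import numpy as np

d = 40
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)   # annihilation operator
Q = a + a.T                                    # a^- + a^+ for one mode

e0 = np.zeros(d)
e0[0] = 1.0                                    # vacuum vector

moments = [e0 @ np.linalg.matrix_power(Q, k) @ e0 for k in (1, 2, 3, 4, 6)]
# vacuum moments of a standard Gaussian: 0, 1, 0, 3, 15
assert np.allclose(moments, [0, 1, 0, 3, 15])
```

The even moment (2n − 1)!! counts the pairings of 2n factors, which is exactly the combinatorics behind the Gaussian distribution of a− + a+.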
Notes
See [16] for more information on positive definite functions and the proofs of
the results quoted in Section 5.1. Guichardet [50] has given another construc-
tion of symmetric Fock space for the case where the Hilbert space H is the
space of square integrable functions on a measurable space (M, M, m). This
representation is used for another approach to quantum stochastic calculus, the
so-called kernel calculus [69, 73].
P.-A. Meyer’s book [79] gives an introduction to quantum probability and
quantum stochastic calculus for readers who already have some familiarity
with classical stochastic calculus. Other introductions to quantum stochastic
calculus on the symmetric Fock space are [17, 70, 87]. For an abstract approach
to Itô algebras, we refer to [15]. Noncausal quantum stochastic integrals, i.e.,
integrals with integrands that are not adapted, were defined and studied by
Belavkin and Lindsay, see [13, 14, 68]. The recent book by M.-H. Chang [26]
focusses on the theory of quantum Markov processes.
We have not treated here the stochastic calculus on the free Fock space
which was introduced by Speicher and Biane, see [19] and the references
therein. Free probability is intimately related to random matrix theory, cf. [9].
More information on the methods and applications of free probability can also
be found in [81, 120, 121].
Exercises
Exercise 5.1 We want to show that the tensor powers v⊗n do indeed span
the symmetric tensor powers H◦n. For this purpose, prove the following
polarisation formulas:
Sn(v1 ⊗ · · · ⊗ vn) = (1/(n! 2ⁿ)) Σ_{ε∈{±1}ⁿ} (Π_{k=1}^n εk) (ε1 v1 + · · · + εn vn)⊗n
= (1/n!) Σ_{k=1}^n (−1)^{n−k} Σ_{l1<···<lk} (v_{l1} + · · · + v_{lk})⊗n,
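For n = 2 the first polarisation formula can be verified directly with explicit tensor (Kronecker) products; a numerical sketch (an illustration assuming NumPy, not part of the text):

```python
import math
import numpy as np
from itertools import product

v1 = np.array([1.0, 2.0])
v2 = np.array([-0.5, 0.3])

# S_2(v1 ⊗ v2) = (v1 ⊗ v2 + v2 ⊗ v1) / 2
sym = 0.5 * (np.kron(v1, v2) + np.kron(v2, v1))

rhs = np.zeros(4)
for e1, e2 in product([1, -1], repeat=2):
    w = e1 * v1 + e2 * v2
    rhs += e1 * e2 * np.kron(w, w)
rhs /= math.factorial(2) * 2 ** 2          # 1 / (n! 2^n) with n = 2

assert np.allclose(sym, rhs)
```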
6
Random variables on real Lie algebras

Any good theorem should have several proofs, the more the better.
For two reasons: usually, different proofs have different strengths and
weaknesses, and they generalise in different directions – they are not
just repetitions of each other.
(M. Atiyah, in Interview with M. Atiyah and I. Singer.)

In Chapter 3 we have considered the distribution of random variables on real
Lie algebras by specifying ad hoc Hilbert space representations. In this chapter
we revisit this construction via a more systematic approach based on the
framework of Chapter 4 and the splitting Lemma 6.1.3. We start as previously
from the Heisenberg–Weyl and oscillator Lie algebras, and then move on to
the Lie algebra sl2(R).
and
(a+)∗ = a−,   a−e0 = 0,   Ne0 = 0.
Xα,ζ,β := αN + ζa+ + ζ̄a− + βE.
6.1 Gaussian and Poisson random variables on osc 91
⟨e0, e^{iXα,ζ,β} e0⟩ =
  e^{iβ−|ζ|²/2},                      for α = 0,
  e^{iβ+|ζ|²(e^{iα}−iα−1)/α²},        for α ≠ 0.
Proof: This can be deduced from the formula for the adjoint action,
Ad_{e^X} Y := e^X Y e^{−X}
= Σ_{n,m=0}^∞ ((−1)^m/(n! m!)) Xⁿ Y X^m
= Σ_{k=0}^∞ (1/k!) Σ_{m=0}^k (k choose m) (−1)^m X^{k−m} Y X^m
= Y + [X, Y] + (1/2)[X, [X, Y]] + · · ·
= e^{ad X} Y,
The following formula, also known as the splitting lemma, cf. Proposition 4.2.1
in Chapter 1 of [38], provides the normally ordered form of the Weyl operators
and is a key tool to calculate characteristic functions of elements of the
oscillator algebra.
Lemma 6.1.3 (Splitting lemma) Let x, u, v, α ∈ C. We have
exp(xN + ua+ + va− + αE)
= exp((u/x)(e^x − 1) a+) e^{αE+xN} exp((v/x)(e^x − 1) a−) exp((uv/x²)(e^x − x − 1) E).
In particular, when x = 0 we have
exp(ua+ + va− + αE) = e^{ua+} e^{va−} e^{(α+uv/2)E},   u, v, α ∈ C.
Proof: We will show that
exp(xN + ua+ + va− + αE) = e^{α̃E} e^{ũa+} e^{xN} e^{ṽa−}
on the boson Fock space, where
ũ = Σ_{n=1}^∞ (x^{n−1}/n!) u = (u/x)(e^x − 1),
ṽ = Σ_{n=1}^∞ (x^{n−1}/n!) v = (v/x)(e^x − 1),
α̃ = α + Σ_{n=2}^∞ (x^{n−2}/n!) uv = α + (uv/x²)(e^x − x − 1).
Set
ω1(t) = exp(t(xN + ua+ + va− + αE))
and
ω2(t) = e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E},
where ũ(t) = (u/x)(e^{tx} − 1), ṽ(t) = (v/x)(e^{tx} − 1), and α̃(t) = αt + (uv/x²)(e^{tx} − tx − 1).
We have
ω1′(t) = (xN + ua+ + va− + αE) exp(t(xN + ua+ + va− + αE)),
and
ω2′(t) = ũ′(t) a+ e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E} + x e^{ũ(t)a+} N e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ ṽ′(t) e^{ũ(t)a+} e^{txN} a− e^{ṽ(t)a−} e^{α̃(t)E} + α̃′(t) e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} E e^{α̃(t)E}
= u e^{tx} a+ e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E} + x e^{ũ(t)a+} N e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ v e^{tx} e^{ũ(t)a+} e^{txN} a− e^{ṽ(t)a−} e^{α̃(t)E}
+ (α + (uv/x)(e^{tx} − 1)) e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} E e^{α̃(t)E}
= u e^{tx} a+ e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ x (N − ũ(t)a+) e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ v e^{ũ(t)a+} a− e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ (α + (uv/x)(e^{tx} − 1)) E e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
= u e^{tx} a+ e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ x (N − ũ(t)a+) e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ v (a− − ũ(t)E) e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ (α + (uv/x)(e^{tx} − 1)) E e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
= u a+ e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E} + x N e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
+ v a− e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E} + α E e^{ũ(t)a+} e^{txN} e^{ṽ(t)a−} e^{α̃(t)E}
= (xN + ua+ + va− + αE) ω2(t),
where we used Lemma 6.1.2. Hence ω1 and ω2 satisfy the same linear
differential equation, and since ω1(0) = ω2(0) = 1 we have ω1(t) = ω2(t),
t ∈ R₊, which yields the conclusion.
We find in particular
exp(uQ) = exp(ua− + ua+) = e^{ua+} e^{ua−} e^{u²E/2}.
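The x = 0 case of the splitting lemma can be tested numerically on a truncated Fock space: with E = I the vacuum expectation of exp(ua+ + va− + αE) should equal e^{α+uv/2} (an illustration assuming NumPy and SciPy, not part of the text; d is a truncation level and u, v are kept small so truncation effects are negligible):

```python
import numpy as np
from scipy.linalg import expm

d = 30
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)   # a^-
ad = a.T                                        # a^+

u, v, alpha = 0.3, 0.2, 0.1
M = expm(u * ad + v * a + alpha * np.eye(d))    # E = I here
vac = M[0, 0]                                   # <e0, exp(u a^+ + v a^- + alpha E) e0>
assert np.isclose(vac, np.exp(alpha + u * v / 2))
```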
the span of {Xα,ζ,β^k e0 : k = 0, 1, . . .} is dense, if ζ ≠ 0.
Using the above splitting Lemma 6.2.1 for sl2(R) we write e^{Xβ} with
Xβ = B+ + B− + βM,   β ∈ R,
as a product
e^{Xβ} = e^{ν+B+} e^{ν0 M} e^{ν−B−}.
For β = 0 we find the Fourier transform (cosh t)^{−λ}, which corresponds to the
hyperbolic secant distribution.
More generally, when |β| < 1, the above distribution is called the Meixner
distribution. It is absolutely continuous with respect to the Lebesgue measure
and its density is given by
C exp((π − 2 arccos β) x/(2√(1 − β²))) |Γ(λ/2 + ix/(2√(1 − β²)))|²,
Lemma 6.2.3 The lowest weight vector e0 is cyclic for ρλ(Xβ), for all β ∈ R,
λ > 0.
Proof: On e0, we get
ρλ(Xβ)^k e0 = √(k! λ(λ+1) · · · (λ+k−1)) e_k + Σ_{ℓ=0}^{k−1} c_ℓ e_ℓ
⟨(a, b), exp(θ [[0, i], [−i, 0]]) (a, b)⟩ = ⟨(a, b), [[cosh θ, i sinh θ], [−i sinh θ, cosh θ]] (a, b)⟩
= (|a|² + |b|²) cosh θ + i(āb − b̄a) sinh θ
= (p + q) cosh θ + (p − q) sinh θ
= p e^θ + q e^{−θ},
with p − q = i(āb − b̄a) and |a|² + |b|² = p + q = 1. This yields the transform
p e^θ + q e^{−θ} of the Bernoulli distribution
qδ−1 + pδ1
with
p = (1/2)(|a|² + |b|² + i(āb − b̄a))   and   q = (1/2)(|a|² + |b|² − i(āb − b̄a)),
which is supported by {−1, 1}.
By the version (6.1) of the splitting lemma proved in Exercise 6.1 it is easy
to compute the moment generating function of the law of Eα,ζ in the state given
by the vector e0. We have
exp(zE−) e0 = Σ_{k=0}^∞ (z^k/k!) e_{−k}
and
⟨e0, exp(λEα,ζ) e0⟩
= ⟨exp((ζ/2α)(e^{2λα} − 1) E−) e0, exp(λαM) exp((ζ/2α)(e^{2λα} − 1) E−) e0⟩
= Σ_{k=0}^∞ (|ζ|^{2k}/((2α)^{2k} (k!)²)) e^{−2αλk} (e^{2λα} − 1)^{2k}
= Σ_{k=0}^∞ (|ζ|^{2k}/(α^{2k} (k!)²)) sinh(αλ)^{2k}
= J0(2|ζ| sinh(αλ)/α),
where J0 denotes the modified (or hyperbolic) Bessel function of the
first kind,
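The series above is precisely the modified Bessel function I0(z) = Σ_k (z/2)^{2k}/(k!)², evaluated at z = 2|ζ| sinh(αλ)/α; this can be confirmed numerically (an illustration assuming SciPy, not part of the text, using SciPy's notation iv(0, ·) for the modified Bessel function that the text denotes J0):

```python
import math
from scipy.special import iv

zeta, alpha, lam = 0.7, 0.4, 0.9
s = sum(zeta ** (2 * k) * math.sinh(alpha * lam) ** (2 * k)
        / (alpha ** (2 * k) * math.factorial(k) ** 2)
        for k in range(30))                       # the series from the text
z = 2 * zeta * math.sinh(alpha * lam) / alpha
assert abs(s - iv(0, z)) < 1e-12                  # matches I0(z)
```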
See also Section 5.V “e2 and Lommel polynomials” in Feinsilver and Schott’s,
Algebraic Structures and Operator Calculus, Volume III: Representations of
Lie Groups.
⟨e0, E+ⁿ E−ᵐ e0⟩ = δ_{n,m}.
L_{e0}(E+ + E−) = (1/(π√(4 − x²))) 1_{(−2,2)}(x) dx.
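Since E+ and E− commute here, the relation ⟨e0, E+ⁿE−ᵐe0⟩ = δ_{n,m} gives the even moments ⟨e0, (E+ + E−)^{2n} e0⟩ = C(2n, n), the central binomial coefficients, which are exactly the moments of the arcsine density on (−2, 2); this can be checked numerically (an illustration assuming SciPy, not part of the text, using the substitution x = 2 sin t to avoid the endpoint singularity):

```python
import math
from scipy.integrate import quad

for n in range(1, 5):
    # arcsine moments on (-2, 2): substitute x = 2 sin(t)
    val, _ = quad(lambda t, n=n: (2 * math.sin(t)) ** (2 * n) / math.pi,
                  -math.pi / 2, math.pi / 2)
    assert abs(val - math.comb(2 * n, n)) < 1e-8
```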
Notes
We also refer the reader to [118], [115], [117], [116] for additional noncom-
mutative relations on real Lie algebras, and to [10] and references therein for
further discussion of “noncommutative (or quantum) mathematics”.
Exercises
Exercise 6.1 Splitting lemma for the two-dimensional Euclidean group.
The goal of this exercise is to prove the following version
exp(xM + yE+ + zE−)   (6.1)
= exp((y/2x)(e^{2x} − 1) E+) exp(xM) exp((z/2x)(e^{2x} − 1) E−)
of the splitting lemma, for x, y, z ∈ C, where M, E+, E− denote a basis of
the Lie algebra of the group of rigid motions in two dimensions (i.e., the
Euclidean Lie algebra).
Euclidean Lie algebra). Denote by e(2) the Lie algebra with basis R, Tx , Ty
and the relations
[Tx , Ty ] = 0, [R, Tx ] = Ty , [R, Ty ] = −Tx ,
and R∗ = −R, Tx∗ = −Tx , Ty∗ = −Ty .
1. Show that
R̃ = [[0, −1, 0], [1, 0, 0], [0, 0, 0]],   T̃x = [[0, 0, 1], [0, 0, 0], [0, 0, 0]],   T̃y = [[0, 0, 0], [0, 0, 1], [0, 0, 0]],
satisfy the same commutation relations as R, Tx , Ty .
2. Consider the affine subspace
K2 = {(x, y, 1)ᵀ : x, y ∈ R} ⊆ R³
and show that
exp(θ R̃), exp(vT̃x ), exp(wT̃y )
act as rigid motions on K2 (i.e., maps that preserve distances).
3. Show that we can find a basis M, E+ , E− for e(2) that satisfies the relations
M ∗ = M, (E+ )∗ = E− ,
and
[E+ , E− ] = 0, [M, E± ] = ±2E± .
4. Show that we have
exp(uE+) M = (M − 2uE+) exp(uE+),
exp(uM) E− = e^{−2u} E− exp(uM),
exp(uE+) E− = E− exp(uE+).
5. Define
ω1(t) = exp(t(xM + yE+ + zE−)),
ω2(t) = exp(ỹ(t)E+) exp(x̃(t)M) exp(z̃(t)E−),
with
x̃(t) = tx,   ỹ(t) = (y/2x)(e^{2tx} − 1),   z̃(t) = (z/2x)(e^{2tx} − 1).
6.4 The Lie algebra e(2) 101
Show that we have ω1(0) = ω2(0) and that ω1(t) and ω2(t) satisfy the
same differential equation
ωj′(t) = (xM + yE+ + zE−) ωj(t),   j = 1, 2.
6. Conclude that Equation (6.1) holds.
Exercise 6.2 Splitting lemma on the Heisenberg–Weyl algebra hw.
Consider the Heisenberg–Weyl algebra hw generated by a− , a+ , E with the
commutation relations
[a− , a+ ] = E, [E, a− ] = [E, a+ ] = 0.
The goal of this question is to prove the splitting lemma
exp(ua+ + va− + wE) = e^{ua+} e^{va−} e^{(w+uv/2)E},   u, v, w ∈ C.
1. Using the relation
e^{zX} Y e^{−zX} = Y + z[X, Y] + (z²/2)[X, [X, Y]] + (z³/3!)[X, [X, [X, Y]]] + · · ·
= Y + Σ_{n=1}^∞ (zⁿ/n!) [X, [X, . . . [X, Y] · · · ]]   (n-fold bracket),   (6.2)
+ v e^{uta+} a− e^{vta−} e^{(tw+t²uv/2)E} + (w + tuv) e^{uta+} e^{vta−} E e^{(tw+t²uv/2)E}.   (6.4)
102 Random variables on real Lie algebras
4. Using Relations (6.3) and (6.4) and the result of Question (1), show that
ω2′(t) = (ua+ + va− + wE) ω2(t),   t ∈ R₊.
Show from (6.3) that, as a consequence, we have ω1(t) = ω2(t), t ∈ R₊,
and
e^{ua+ + va− + wE} = e^{ua+} e^{va−} e^{(w+uv/2)E}.   (6.5)
5. Using the splitting lemma Relation (6.5), show that when E = σ²I we have
⟨e0, e^{u(a−+a+)} e0⟩ = ⟨e0, e^{iu(a−−a+)} e0⟩ = e^{u²σ²/2},
where e0 is a unit vector in a Hilbert space H with inner product ⟨·, ·⟩, such
that ⟨e0, e0⟩ = 1 and a−e0 = 0.
6. From the result of Question (5) show that a− + a+ and i(a− − a+) have
centered Gaussian distribution with variance σ².
Exercise 6.3 Consider the differential operators ã+ , ã− , ã◦ defined in Section
3.3 by ã− = −τ ∂τ , ã+ = τ − 1 − τ ∂τ , and ã◦ = −(1 − τ )∂τ − τ ∂τ2 .
1. Show that for s ∈ R we have
" # " #
exp −isQ̃ ã◦ exp isQ̃ = ã◦ + isã+ − isã− + s2 τ ,
and
1
exp isP̃ ã◦ exp −isP̃ = e−2s ã◦ − sinh(2s) ã+ + ã− + sinh2 (s).
2
(6.6)
2. Show that for any s ∈ R the operator
ã◦ + isã+ − isã− + s²τ
has a geometric distribution with parameter s²/(1 + s²) in the vacuum
state 1, and that the operator
e^{−2s} ã◦ − (1/2) sinh(2s)(ã+ + ã−) + sinh²(s)
has a geometric distribution with parameter tanh²(s) in the vacuum state.
3. Conclude that the distribution of P̃ has the Fourier transform
IE[exp(itP̃)] = 1/cosh(t),   t ∈ R.
7
Weyl calculus on real Lie algebras
Couples are wholes and not wholes, what agrees disagrees, the
concordant is discordant. From all things one and from one all things.
(Heraclitus, On the Universe 59.)
This chapter introduces the notion of joint (Wigner) density of random
variables, for future use in quantum Malliavin calculus. For this we will rely
on functional calculus on general Lie algebras, starting with the Heisenberg–
Weyl algebra. We also consider some applications to quantum optics and time-
frequency analysis.
104 Weyl calculus on real Lie algebras
φ, PQψ
= φ, QPψ
On the other hand, the exponentials e^{ibQ} and e^{iaP} do not commute, and
expanding the exponential series we get
e^{ibQ+iaP} = Σ_{n=0}^∞ (1/n!)(ibQ + iaP)ⁿ
= I + ibQ + iaP + (ibQ + iaP)²/2 + (ibQ + iaP)³/3! + Σ_{n=4}^∞ (ibQ + iaP)ⁿ/n!
= I + ibQ + iaP − (b²Q² + a²P² + abQP + abPQ)/2
− (i/3!)(b³Q³ + a³P³ + a²b(QP² + P²Q + PQP) + ab²(Q²P + PQ² + QPQ))
+ Σ_{n=4}^∞ (iⁿ/n!)(bQ + aP)ⁿ,
and by identifying the above with the exponential series of e^{iby+iax} we get
y x²/2! ←→ (QP² + P²Q + PQP)/3!
and
y² x/2! ←→ (Q²P + PQ² + QPQ)/3!.
More generally, by identifying the terms in a^m b^k, we map
(y^k x^m)/(k! m!)
to the coefficient of order a^m b^k in
(1/n!)(bQ + aP)ⁿ = (1/n!) Σ_{A⊂{1,...,n}} b^{|A|} a^{n−|A|} Π_{l=1}^n Q^{1{l∈A}} P^{1{l∉A}},   n = k + m,
hence we map
y^k x^m   to   (k+m choose k)^{−1} Σ_{A⊂{1,...,k+m}, |A|=k} Π_{l=1}^{k+m} Q^{1{l∈A}} P^{1{l∉A}},
i.e.,
y^k x^m ←→ (k! m!/(k + m)!) Σ_{A⊂{1,...,k+m}, |A|=k} Π_{l=1}^{k+m} Q^{1{l∈A}} P^{1{l∉A}}.
|A|=k
Lemma 7.2.1
(X1 + · · · + Xk)ⁿ = Σ_{j1+···+jk=n} Σ_{X∈W(X1,...,Xk;j1,...,jk)} X.
x1^{j1} · · · xk^{jk} ←→ (n choose j1, . . . , jk)^{−1} Σ_{X∈W(X1,...,Xk;j1,...,jk)} X.
P(α1 x1 + · · · + αk xk) ←→ P(α1 X1 + · · · + αk Xk)
⟨ρ1, ρ2⟩_{B2(h)} := Tr[ρ1∗ρ2],   ρ1, ρ2 ∈ B2(h).
∫_{R²} W_{|φ⟩⟨ψ|}(u, v) ϕ(u, v) du dv = ⟨φ, O(ϕ)ψ⟩
and
∫_{R²} W_{|φ⟩⟨ψ|}(u, v) e^{iau+ibv} du dv
= (1/(2π)²) ∫_{R²} ∫_{R²} e^{−ixu−iyv} ⟨ψ, e^{ixP/2−iyQ} φ⟩_h dx dy e^{iau+ibv} du dv
= (1/(2π)²) ∫_{R²} ∫_{R²} e^{−i(x−a)u−i(y−b)v} ⟨ψ, e^{ixP/2−iyQ} φ⟩_h dx dy du dv
= ⟨ψ, e^{iaP/2−ibQ} φ⟩_h,
⟨f, Pg⟩_h = ∫_{−∞}^∞ f̄(t) (Pg)(t) dt
= −2i ∫_{−∞}^∞ f̄(t) g′(t) dt
= 2i ∫_{−∞}^∞ f̄′(t) g(t) dt
= ∫_{−∞}^∞ \overline{(−2i f′)(t)} g(t) dt
= ⟨Pf, g⟩_h.
On the other hand, by the following version
exp(−ix P/2 + iyQ) = e^{−ixy/2} exp(iyQ) exp(−ix P/2)
of the splitting lemma on the Heisenberg–Weyl algebra hw, cf. Exercise 6.2,
we have
(exp(−ix P/2 + iyQ) ψ)(t) = e^{−ixy/2} (exp(iyQ) exp(−ix P/2) ψ)(t)
= e^{iyt−ixy/2} Σ_{n=0}^∞ ((−1)ⁿ xⁿ/n!) (∂ⁿψ/∂tⁿ)(t)
= e^{iyt−ixy/2} ψ(t − x),   (7.4)
x, y, t ∈ R, ψ ∈ S(R). As a consequence, by (7.2) we can write
W_{|φ⟩⟨ψ|}(u, v) = (1/(2π)²) ∫_{R²} e^{−ixu−iyv} ⟨ψ, e^{ixP/2−iyQ} φ⟩_h dx dy
= (1/(2π)²) ∫_{R²} e^{−ixu−iyv} ⟨e^{−ixP/2+iyQ} ψ, φ⟩_h dx dy
= (1/(2π)²) ∫_{R²} ∫_{−∞}^∞ e^{−ixu−iyv} e^{iyt−ixy/2} \overline{ψ(t − x)} φ(t) dt dx dy
= (1/(2π)²) ∫_{R²} ∫_{−∞}^∞ e^{iy(−2v+2t−x)/2−ixu} \overline{ψ(t − x)} φ(t) dt dx dy
= (2/(2π)²) ∫_{R²} ∫_{−∞}^∞ e^{iy(−2v+2t−x)−ixu} \overline{ψ(t − x)} φ(t) dt dx dy
= (1/π) ∫_{−∞}^∞ e^{−2i(t−v)u} \overline{ψ(2v − t)} φ(t) dt
= (1/π) ∫_{−∞}^∞ e^{−2itu} \overline{ψ(v − t)} φ(t + v) dt
= (1/2π) ∫_{−∞}^∞ e^{−itu} \overline{ψ(v − t/2)} φ(v + t/2) dt.
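The last expression is the standard Wigner transform and can be evaluated numerically; for the Gaussian state φ = ψ = π^{−1/4} e^{−t²/2} the result should be the two-dimensional Gaussian (1/π) e^{−u²−v²} (an illustration assuming NumPy, not part of the text, approximating the t-integral by a Riemann sum over a wide grid):

```python
import numpy as np

phi = lambda t: np.pi ** -0.25 * np.exp(-t ** 2 / 2)   # normalised Gaussian state

ts = np.linspace(-20, 20, 4001)
dt = ts[1] - ts[0]

def W(u, v):
    # (1/2pi) \int e^{-itu} conj(psi(v - t/2)) phi(v + t/2) dt
    vals = np.exp(-1j * ts * u) * np.conj(phi(v - ts / 2)) * phi(v + ts / 2)
    return (vals.sum() * dt).real / (2 * np.pi)

for u, v in [(0.0, 0.0), (0.5, -0.3), (1.0, 0.7)]:
    assert abs(W(u, v) - np.exp(-u ** 2 - v ** 2) / np.pi) < 1e-6
```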
drawn with the QuTiP package, cf. [61], [62].
Figure 7.2 represents the colour map of the aforementioned Wigner function,
also drawn using QuTiP.
The following proposition extends the definition of Wρ in L²_C(G∗; dξ/σ(ξ)) to
ρ ∈ B2(h).
h × Dom C⁻¹ −→ L²_C(G∗; dξ/σ(ξ))
ρ −→ Wρ
⟨Wρ1, Wρ2⟩_{L²_C(G∗;dξ/σ(ξ))} = ⟨ρ1, ρ2⟩_{B2(h)},   ρ1, ρ2 ∈ B2(h).
ρ1 = |φ1⟩⟨ψ1|   and   ρ2 = |φ2⟩⟨ψ2|,
with (φ1, ψ1), (φ2, ψ2) ∈ h × Dom C⁻¹. From the identity (7.5), and since
(1/(2π)ⁿ) ∫ e^{i⟨ξ, x−x′⟩_{G∗,G}} dξ dx′ = δ_x(dx′),   (7.6)
we have
⟨Wρ1, Wρ2⟩_{L²_C(G∗;dξ/σ(ξ))}
= ∫_{G∗} (1/(2π)ⁿ) ∫_{N0} e^{−i⟨ξ,x⟩_{G∗,G}} Tr[U(e^{−(x1X1+···+xnXn)}) ρ1 C⁻¹] √m(x) dx
× \overline{∫_{N0} e^{−i⟨ξ,x′⟩_{G∗,G}} Tr[U(e^{−(x′1X1+···+x′nXn)}) ρ2 C⁻¹] √m(x′) dx′} dξ
= ∫_{N0} Tr[U(e^{−(x1X1+···+xnXn)}) ρ1 C⁻¹] \overline{Tr[U(e^{−(x1X1+···+xnXn)}) ρ2 C⁻¹]} m(x) dx
= ∫_{N0} ⟨U(e^{x1X1+···+xnXn}) C⁻¹ψ1, φ1⟩_h \overline{⟨U(e^{x1X1+···+xnXn}) C⁻¹ψ2, φ2⟩_h} m(x) dx
= ∫_G ⟨U(g) C⁻¹ψ1, φ1⟩_h \overline{⟨U(g) C⁻¹ψ2, φ2⟩_h} dμ(g)
= ⟨ψ2, ψ1⟩_h ⟨φ1, φ2⟩_h
= ⟨ρ2, ρ1⟩_{B2(h)}.
⟨ρ, O(f)⟩_{B2(h)} = Tr[(|φ⟩⟨ψ|)∗ O(f)]
= ⟨φ, O(f)ψ⟩_h
= ⟨W_{|φ⟩⟨ψ|}, f⟩_{L²_C(G∗;dξ/σ(ξ))}
= ∫_{G∗} \overline{W_{|φ⟩⟨ψ|}(ξ)} f(ξ) dξ/σ(ξ).
Proof: We have
|⟨O(f), ρ⟩_{B2(h)}| = |⟨f, Wρ⟩_{L²_C(G∗;dξ/σ(ξ))}|
≤ ||f||_{L²_C(G∗;dξ/σ(ξ))} ||Wρ||_{L²_C(G∗;dξ/σ(ξ))}
≤ ||f||_{L²_C(G∗;dξ/σ(ξ))} ||ρ||_{B2(h)},
7.5 Functional calculus on the affine algebra 117
and
⟨φ, O(f)ψ⟩_h = Tr[(|φ⟩⟨ψ|)∗ O(f)] = ∫_{G∗} \overline{W_{|φ⟩⟨ψ|}(ξ)} f(ξ) dξ/σ(ξ)
= ∫_{G∗} ∫_{N0} (1/(2π)^{n/2}) e^{i⟨ξ,x⟩_{G∗,G}} Tr[U(e^{−(x1X1+···+xnXn)}) |φ⟩⟨ψ| C⁻¹] √m(x) dx f(ξ) dξ/σ(ξ)
= ∫_{N0} F(f/√σ)(x) ⟨φ, U(e^{x1X1+···+xnXn}) C⁻¹ψ⟩_h √m(x) dx
= ⟨φ, [∫_{N0} F(f/√σ)(x) U(e^{x1X1+···+xnXn}) C⁻¹ √m(x) dx] ψ⟩_h.
and
O(f√σ) = (1/(2π)^{n/2}) ∫_{N0} O(e^{−i⟨·,x⟩_{G∗,G}} √σ) (Ff)(x) dx,
f ∈ L²_C(G∗; dξ).
[X1 , X2 ] = X2 .
σ (ξ1 , ξ2 ) = 2π|ξ2 |, ξ1 , ξ2 ∈ R,
In order to construct the Malliavin calculus on the affine algebra we will have
to use the functional calculus presented in Section 11.1. Letting B2 (h) denote
the space of Hilbert–Schmidt operators on h, the results of Section 11.1 allow
us to define a continuous map
O : L^2_C\Big(\mathbb{R}^2;\ \frac{d\xi_1\, d\xi_2}{2\pi|\xi_2|}\Big) \longrightarrow B_2(h)

as

O(f) := \int_{\mathbb{R}^2} (\mathcal{F}f)(x_1, x_2)\, e^{-ix_1 P/2 + ix_2(Q+M)}\, dx_1\, dx_2.
shows that

e^{-iuP/2 + iv(Q+M)} = \frac{1}{\sqrt{2\pi}}\Big(e^{-u/2}\,\mathrm{sinch}\frac{u}{2}\Big)^{-1/2} O\big(e^{-iu\xi_1 - iv\xi_2}\sqrt{|\xi_2|}\big)\, C.
The next proposition shows that these relations can be simplified, and that the
Wigner function is directly related to the density of the couple (P, Q+M), with
the property (7.9).
\langle \phi, e^{-iuP/2 + iv(Q+M)}\psi\rangle_h
= \frac{\big(e^{-u/2}\,\mathrm{sinch}\frac{u}{2}\big)^{-1/2}}{\sqrt{2\pi}}\, \big\langle \phi,\, O\big(e^{-iu\xi_1 - iv\xi_2}\sqrt{|\xi_2|}\big)\, C\psi\big\rangle_h
= \frac{\big(e^{-u/2}\,\mathrm{sinch}\frac{u}{2}\big)^{-1/2}}{\sqrt{2\pi}}\, \Big\langle W_{|\phi\rangle\langle C\psi|}(\xi_1, \xi_2),\, e^{-iu\xi_1 - iv\xi_2}\sqrt{|\xi_2|}\Big\rangle_{L^2_C(G^*;\, \frac{d\xi_1 d\xi_2}{2\pi|\xi_2|})}.
= \frac{1}{2\pi}\int_{\mathbb{R}^3} e^{-iu\xi_1 - iv\xi_2}\, \overline{\phi\bigg(\frac{\xi_2 e^{-x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)}\, \frac{e^{ix\xi_1}}{\mathrm{sinch}\frac{x}{2}}\, \bigg(\frac{e^{-x/2}\,\mathrm{sinch}\frac{x}{2}}{e^{-u/2}\,\mathrm{sinch}\frac{u}{2}}\bigg)^{1/2} \psi\bigg(\frac{\xi_2 e^{x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)\, e^{-|\xi_2|\frac{\cosh(x/2)}{\mathrm{sinch}(x/2)}}\, \frac{|\xi_2|^{\beta-1}}{\mathrm{sinch}\frac{x}{2}}\, \frac{dx}{\Gamma(\beta)}\, d\xi_1\, d\xi_2

= \frac{1}{2\pi}\int_{\mathbb{R}^3} e^{-iu\xi_1 - iv\xi_2}\, \overline{\phi\bigg(\frac{\xi_2 e^{x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)}\, \frac{e^{ix\xi_1}}{\mathrm{sinch}\frac{x}{2}}\, \psi\bigg(\frac{\xi_2 e^{x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)\, e^{-|\xi_2|\frac{\cosh(x/2)}{\mathrm{sinch}(x/2)}}\, \frac{|\xi_2|^{\beta-1}}{\mathrm{sinch}\frac{x}{2}}\, \frac{dx}{\Gamma(\beta)}\, d\xi_1\, d\xi_2

= \Big\langle W_{|\phi\rangle\langle\psi|},\, e^{-iu\xi_1 - iv\xi_2}\Big\rangle_{L^2_C(G^*;\, \frac{d\xi_1 d\xi_2}{2\pi|\xi_2|})}
= \langle \phi,\, O\big(e^{-iu\xi_1 - iv\xi_2}\big)\psi\rangle_h.
\|O(f)\|_{B_2(h)} \le \|f\|_{L^2_C(G^*;\, \frac{d\xi_1 d\xi_2}{2\pi|\xi_2|})}.
and

\langle \psi, e^{iuP/2 - iv(Q+M)}\phi\rangle_h = \frac{1}{2\pi}\int_{G^*} e^{iu\xi_1 + iv\xi_2}\, W_{|\phi\rangle\langle\psi|}(\xi_1, \xi_2)\, \frac{d\xi_1\, d\xi_2}{|\xi_2|},
\tilde{W}_{|\phi\rangle\langle\psi|}(\xi_1, \xi_2) = \frac{1}{2\pi|\xi_2|}\, W_{|\phi\rangle\langle\psi|}(\xi_1, \xi_2) \tag{7.10}
= \frac{1}{2\pi}\int_{\mathbb{R}} \overline{\phi\bigg(\frac{\xi_2 e^{-x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)}\, \frac{e^{-ix\xi_1}}{\mathrm{sinch}\frac{x}{2}}\, \psi\bigg(\frac{\xi_2 e^{x/2}}{\mathrm{sinch}\frac{x}{2}}\bigg)\, e^{-|\xi_2|\frac{\cosh(x/2)}{\mathrm{sinch}(x/2)}}\, \frac{|\xi_2|^{\beta-1}}{\mathrm{sinch}\frac{x}{2}}\, \frac{dx}{\Gamma(\beta)},
and

\frac{1}{2\pi}\int_{\mathbb{R}} W_{|\phi\rangle\langle\psi|}(\xi_1, \xi_2)\, \frac{d\xi_2}{|\xi_2|}
= \frac{1}{2\pi}\int_{\mathbb{R}^2} e^{-i\xi_1 x}\, \overline{\phi(\omega e^{x/2})}\, \psi(\omega e^{-x/2})\, e^{-|\omega|\cosh\frac{x}{2}}\, \frac{|\omega|^{\beta-1}}{\Gamma(\beta)}\, dx\, d\omega.
\xi_1 \longmapsto \frac{1}{2\cosh(\pi\xi_1/2)}.
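The map above is a bona fide probability density on the real line; a minimal numerical sketch (plain numpy, values chosen only for illustration):

```python
import numpy as np

# Check numerically that xi_1 -> 1/(2*cosh(pi*xi_1/2)) integrates to 1 over R.
xi = np.linspace(-40.0, 40.0, 200001)
dxi = xi[1] - xi[0]
density = 1.0 / (2.0 * np.cosh(np.pi * xi / 2.0))
mass = np.sum(density) * dxi
print(abs(mass - 1.0) < 1e-6)  # -> True
```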
Proposition 7.5.2 The characteristic function of (P, Q + M) in the state |\phi\rangle\langle\psi| is given by

\langle \psi, e^{iuP + iv(Q+M)}\phi\rangle_h = \int_{\mathbb{R}} e^{iv\omega\,\mathrm{sinch}\,u}\, \overline{\psi(\omega e^{u})}\, \phi(\omega e^{-u})\, e^{-|\omega|\cosh u}\, \frac{|\omega|^{\beta-1}}{\Gamma(\beta)}\, d\omega.
In the vacuum state \Omega = 1_{\mathbb{R}_+} we find

\langle \Omega, e^{iuP + iv(Q+M)}\Omega\rangle_h = \frac{1}{(\cosh u - iv\,\mathrm{sinch}\,u)^{\beta}}, \qquad u, v \in \mathbb{R}.
In particular, we have

\langle \psi, e^{iv(Q+M)}\phi\rangle_h = \frac{1}{\Gamma(\beta)}\int_{\mathbb{R}} e^{iv\omega}\, \overline{\psi(\omega)}\, \phi(\omega)\, e^{-|\omega|}\, |\omega|^{\beta-1}\, d\omega
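In the vacuum state the last identity reduces to the Gamma(\beta) characteristic function (1 - iv)^{-\beta}, consistent with the formula \cosh u - iv\,\mathrm{sinch}\,u at u = 0. A numerical sketch, with \beta and v chosen arbitrarily:

```python
import numpy as np
from math import gamma

# int_0^infty e^{ivw} e^{-w} w^{beta-1} dw / Gamma(beta)  vs  (1 - iv)^{-beta}
beta, v = 2.5, 1.3
w = np.linspace(1e-8, 60.0, 600001)
dw = w[1] - w[0]
lhs = np.sum(np.exp(1j * v * w - w) * w ** (beta - 1.0)) * dw / gamma(beta)
rhs = (1.0 - 1j * v) ** (-beta)
print(abs(lhs - rhs) < 1e-4)  # -> True
```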
or
\mathrm{ad}(X) = \begin{pmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{pmatrix}

for X = x_1X_1 + x_2X_2 + x_3X_3. The dual adjoint action

\mathrm{ad}^* : \mathfrak{g} \longrightarrow \mathrm{Lin}(\mathfrak{g}^*)

is given by

\mathrm{ad}^*(X) = -\mathrm{ad}(X)^T = \begin{pmatrix} 0 & -x_3 & x_2 \\ x_3 & 0 & -x_1 \\ -x_2 & x_1 & 0 \end{pmatrix}
if we use the dual basis e_1, e_2, e_3 of \mathfrak{g}^*, defined by \langle e_j, X_k\rangle = \delta_{jk}. Similarly for \mathrm{ad}^*(X_1), \mathrm{ad}^*(X_2), \mathrm{ad}^*(X_3). By exponentiation we get
\mathrm{Ad}^*(e^{tX_1}) = e^{t\,\mathrm{ad}(X_1)} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos t & -\sin t \\ 0 & \sin t & \cos t \end{pmatrix},

\mathrm{Ad}^*(e^{tX_2}) = e^{t\,\mathrm{ad}(X_2)} = \begin{pmatrix} \cos t & 0 & \sin t \\ 0 & 1 & 0 \\ -\sin t & 0 & \cos t \end{pmatrix},

\mathrm{Ad}^*(e^{tX_3}) = e^{t\,\mathrm{ad}(X_3)} = \begin{pmatrix} \cos t & -\sin t & 0 \\ \sin t & \cos t & 0 \\ 0 & 0 & 1 \end{pmatrix},
and we see that \mathfrak{g} acts on its dual by rotations. The orbits are therefore the spheres

O_r = \{\xi = \xi_1 e_1 + \xi_2 e_2 + \xi_3 e_3 : \xi_1^2 + \xi_2^2 + \xi_3^2 = r^2\}, \qquad r \ge 0.
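The rotation action can be checked numerically; a small sketch with a hand-rolled truncated exponential series (numpy only):

```python
import numpy as np

# Exponentiate ad(X3) via the matrix exponential series and compare with the
# closed-form rotation about the third axis.
def expm_series(A, terms=60):
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        out = out + term
    return out

t = 0.7
ad_X3 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
R = expm_series(t * ad_X3)
expected = np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])
print(np.allclose(R, expected))  # -> True
```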
The invariant measure on these orbits is just the uniform distribution, in polar
coordinates sin ϑdϑdφ. The Lebesgue measure on g∗ can be written as
r2 dr sin ϑr dϑr dϕr
so that we get σr (ϑr , ϕr ) = 1, cf. [7, Equation (19)]. The transfer of the Haar
measure μ on the group
dμ(eX ) = m(X)dX
gives
m(X) = \bigg|\det \sum_{n=0}^{\infty} \frac{\mathrm{ad}(X)^n}{(n+1)!}\bigg|,

see Equation (27) in [8]. Since \mathrm{ad}(X) is normal and has the simple eigenvalues \pm i\sqrt{x_1^2+x_2^2+x_3^2} and 0, we can use the spectral decomposition to get

m(X) = \frac{2 - 2\cos\sqrt{x_1^2+x_2^2+x_3^2}}{x_1^2+x_2^2+x_3^2} = 4\,\frac{\sin^2(t/2)}{t^2},
where t = \sqrt{x_1^2+x_2^2+x_3^2}. For N_0 we take the ball, and obtain

W_\rho(\xi) = \frac{1}{(2\pi)^{3/2}}\int_{N_0} e^{-i\langle\xi, X\rangle}\, \rho(U(e^{X}))\, \frac{2\sin\big(\sqrt{x_1^2+x_2^2+x_3^2}\,/2\big)}{\sqrt{x_1^2+x_2^2+x_3^2}}\, dX,
see [7, Equation (48)]. We compute this for the irreducible (n+1)-dimensional representations D_{n/2} = \mathrm{span}\{e_n, e_{n-2}, \ldots, e_{-n}\}, given by

U(X_+)e_k = \begin{cases} 0 & \text{if } k = n, \\ \frac{i}{2}\sqrt{(n-k)(n+k+2)}\, e_{k+2} & \text{else,} \end{cases}

U(X_-)e_k = \begin{cases} 0 & \text{if } k = -n, \\ \frac{i}{2}\sqrt{(n+k)(n-k+2)}\, e_{k-2} & \text{else,} \end{cases}

U(X_3)e_k = \frac{ik}{2}\, e_k,
7.6 Wigner functions on so(3) 125
W_\rho(\xi)
= \frac{2}{(2n+1)(2\pi)^{3/2}} \sum_{k=-n/2}^{n/2} \int_{N_0} e^{-i\langle\xi, X\rangle}\, e^{ik\sqrt{x_1^2+x_2^2+x_3^2}}\, \frac{\sin\big(\sqrt{x_1^2+x_2^2+x_3^2}\,/2\big)}{\sqrt{x_1^2+x_2^2+x_3^2}}\, dX
= \frac{2}{(2n+1)(2\pi)^{3/2}} \sum_{k=-n/2}^{n/2} \int_0^{2\pi}\!\!\int_0^{\pi}\!\!\int_0^{2\pi} e^{-i\langle\xi, X\rangle}\, e^{ikt}\, t\, \sin(t/2)\, \sin\theta\, dt\, d\theta\, d\phi,
cf. also [12, 28]. Note that even for the trivial representation with n = 0 and
U(eX ) = 1, this doesn’t give the Dirac measure at the origin.
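The Jacobian density m(X) entering the Wigner integral above can itself be checked numerically on a sample point of \mathfrak{so}(3); a sketch:

```python
import numpy as np

# Check m(X) = |det sum_n ad(X)^n/(n+1)!| = 4 sin^2(t/2)/t^2 numerically,
# where t = sqrt(x1^2 + x2^2 + x3^2).
x1, x2, x3 = 0.3, -0.5, 0.8
ad_X = np.array([[0.0, -x3, x2], [x3, 0.0, -x1], [-x2, x1, 0.0]])

S = np.zeros((3, 3))
term = np.eye(3)
fact = 1.0
for n in range(40):
    fact *= (n + 1)          # (n+1)! built incrementally
    S += term / fact
    term = term @ ad_X

t = np.sqrt(x1**2 + x2**2 + x3**2)
print(np.isclose(abs(np.linalg.det(S)), 4.0 * np.sin(t / 2.0)**2 / t**2))  # -> True
```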
and this will always have the right marginals. It is again rotationally invariant
for the representation Dn/2 and the state ρ = (2n + 1)−1 tr, and we get
W_\rho^{pr}(\xi_1, \xi_2, \xi_3) = \frac{1}{(2n+1)(2\pi)^{3/2}}\int_{\mathfrak{g}} e^{-i\langle\xi, X\rangle} \sum_{k=-n/2}^{n/2} e^{ik\|X\|}\, dX,
so that we get

W_\rho^{pr}(\xi) = \frac{1}{2\|\xi\|(2n+1)(2\pi)^{1/2}} \sum_{k=-n/2}^{n/2} \delta_{\|\xi\|-k},
= -2\pi\|\xi\|^2 \int_0^R 2\, \frac{\sin(r\|\xi\|)}{r\|\xi\|}\, r\, dr
= -4\pi\|\xi\| \int_0^R \sin(r\|\xi\|)\, dr
= 4\pi\big(\cos(R\|\xi\|) - 1\big),
ρ(U(eX )) = cos(||X||/2)
we can rewrite W_\rho^{pr} also as

W_\rho^{pr}(f) = f(0) + \frac{1}{4\pi}\int_{\|x\|\le 1/2} \bigg(\frac{f}{r^2} + \frac{2}{r^2}\frac{\partial f}{\partial r}\bigg)\, dX.

Using Gauss' integral theorem, we can transform the first part of the integral into a surface integral,

W_\rho^{pr}(f) = f(0) + \frac{1}{4\pi}\oint_{\|x\|=1/2} \nabla\cdot\Big(\frac{f}{r}\Big)\, d\vec{n} + \frac{1}{2\pi}\int_{\|x\|\le 1/2} \frac{1}{r^2}\frac{\partial f}{\partial r}\, dX.
and this system is quantised using the operators (Q, P) and the harmonic
oscillator Hamiltonian
H = \frac{1}{2}\big(P^2 + \omega^2 Q^2\big),
where ω = kc is the frequency and c is the speed of light. Here the operator
N = a+ a− is called the photon number operator.
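As a sketch, using the convention a\,e_n = \sqrt{n}\, e_{n-1} (assumed here), the number operator can be realised on a truncated matrix space and its spectrum read off directly:

```python
import numpy as np

# Truncated annihilation/creation matrices; N = a^dagger a is diagonal with
# eigenvalues 0, 1, 2, ..., and the CCR holds up to the truncation edge.
d = 8
a = np.diag(np.sqrt(np.arange(1, d)), k=1)    # annihilation: a e_n = sqrt(n) e_{n-1}
a_dag = a.T                                   # creation
N = a_dag @ a                                 # photon number operator
print(np.allclose(np.diag(N), np.arange(d)))  # -> True

comm = a @ a_dag - a_dag @ a                  # [a, a^dagger]
print(np.allclose(comm[:-1, :-1], np.eye(d - 1)))  # -> True
```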
Each number eigenstate e_n is mapped to the wave function x \mapsto e^{-x^2/2} H_n(x)/(\sqrt{n!}\, 2^{n/2}), so that the coherent state \Phi(\alpha) = e^{-|\alpha|^2/2}\sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, e_n corresponds to

e^{-|\alpha|^2/2}\, e^{-x^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n H_n(x)}{n!\, 2^{n/2}}
= e^{-|\alpha|^2/2}\, e^{-x^2/2}\, e^{x\alpha/\sqrt{2} - \alpha^2/4}
= \exp\bigg(\!-\frac{|\alpha|^2}{2} - \frac{1}{2}\Big(x - \frac{\alpha}{\sqrt{2}}\Big)^{2}\bigg).
The Wigner phase-space (quasi-)probability density function in the quasi-state |\phi\rangle\langle\psi| is then given by

W_{|\phi\rangle\langle\psi|}(x, y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \overline{\phi(x-t)}\, \psi(x+t)\, e^{iyt}\, dt,

while the probability density in a pure state |\phi\rangle\langle\phi| is

W_{|\phi\rangle\langle\phi|}(x, y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \overline{\phi(x-t)}\, \phi(x+t)\, e^{iyt}\, dt.
While the Wigner function, cf. Section 7.4, does not have a clear interpretation as a joint probability distribution since it can take negative values, it can give approximate information on which frequency was present in the signal at what time. The use of Wigner
functions in time-frequency analysis, where they are often called Wigner–Ville
functions after Ville [119], is carefully explained in Cohen’s book [29]. In [85],
Wigner functions are used to analyse the sound of a gearbox in order to predict
when it will break down.
Note
We also refer the reader to [3] for more background on Wigner functions and
their use in quantum optics.
Exercises
Exercise 7.1 Quantum optics.
1. Compute the distribution of the photon number operator N = a+ a− in the
coherent state
\Phi(\alpha) = e^{-|\alpha|^2/2} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\, e_n.
2. Show that in the quasi-state |φ
φ| with
\phi(z) = \frac{1}{(2\pi)^{1/4}}\, e^{-z^2/4},
the Wigner phase-space (quasi)-probability density function is a standard
two-dimensional joint Gaussian density.
8

Lévy processes on real Lie algebras
I really have long been of the mind that the quantity of noise that
anyone can bear unconcernedly is inversely proportional to his mental
powers, and can therefore be considered as an approximate measure
of the same.
(A. Schopenhauer, in The World as Will and Representation.)
In this chapter we present the definition and basic theory of Lévy processes
on real Lie algebras with several examples. We use the theories of factorisable
current representations of Lie algebras and Lévy processes on ∗-bialgebras to
provide an elegant and efficient formalism for defining and studying quantum
stochastic calculi with respect to additive operator processes satisfying Lie
algebraic relations. The theory of Lévy processes on ∗-bialgebras can also
handle processes whose increments are not simply additive, but are composed
by more complicated formulas, the main restriction is that they are independent
(in the tensor sense).
8.1 Definition
Lévy processes, i.e., stochastic processes with independent and stationary
increments, are used as models for random fluctuations, in physics, finance,
etc. In quantum physics so-called quantum noises or quantum Lévy processes
occur, e.g., in the description of quantum systems coupled to a heat bath
[47] or in the theory of continuous measurement [53]. Motivated by a model
introduced for lasers [122], Schürmann et al. [2, 106] have developed the
theory of Lévy processes on involutive bialgebras. This theory generalises,
in a sense, the theory of factorisable representations of current groups and
current algebras as well as the theory of classical Lévy processes with values
132 Lévy processes on real Lie algebras
in Euclidean space or, more generally, semigroups. Note that many interesting
classical stochastic processes arise as components of these quantum Lévy
processes, cf. [1, 18, 42, 105].
Let D be a complex pre-Hilbert space with inner product \langle\cdot,\cdot\rangle. We denote by L(D) the algebra of linear operators on D having an adjoint defined everywhere on D, i.e.,

L(D) := \big\{A : D \longrightarrow D \text{ linear} : \exists\, A^* : D \longrightarrow D \text{ linear such that } \langle x, Ay\rangle = \langle A^*x, y\rangle \text{ for all } x, y \in D\big\}. \tag{8.1}

By L_{AH}(D) we mean the anti-Hermitian linear operators on D, i.e.,

L_{AH}(D) = \{A : D \longrightarrow D \text{ linear} : \langle x, Ay\rangle = -\langle Ax, y\rangle \text{ for all } x, y \in D\}.
In the sequel, \mathfrak{g} denotes a Lie algebra over \mathbb{R}, D is a complex pre-Hilbert space, and \Omega \in D is a unit vector.
Definition 8.1.1 Any family

(j_{st} : \mathfrak{g} \longrightarrow L_{AH}(D))_{0\le s\le t}

of representations of \mathfrak{g} is called a Lévy process on \mathfrak{g} over D (with respect to the unit vector \Omega \in D) provided the following conditions are satisfied:

i) (Increment property). We have

j_{st}(X) + j_{tu}(X) = j_{su}(X)

for all 0 \le s \le t \le u and all X \in \mathfrak{g}.
ii) (Boson independence). We have

[j_{st}(X), j_{s't'}(Y)] = 0, \qquad X, Y \in \mathfrak{g},

0 \le s \le t \le s' \le t', and

\langle \Omega, j_{s_1t_1}(X_1)^{k_1} \cdots j_{s_nt_n}(X_n)^{k_n}\, \Omega\rangle
defines a positive Hermitian linear functional on U0 (g). In fact, one can prove
that the family (ϕt )t∈R+ is a convolution semigroup of states on U0 (g). The
functional L is also called the generating functional of the process. It satisfies
the conditions of the following definition.
Definition 8.1.2 A linear functional L : U0 −→ C on a (non-unital)
*-algebra U0 is called a generating functional if
i) L is Hermitian, i.e., L(u^*) = \overline{L(u)} for all u \in U_0;
ii) L is positive, i.e., L(u∗ u) ≥ 0 for all u ∈ U0 .
Schürmann has shown that there exists indeed a Lévy process for any generating functional on U_0(\mathfrak{g}), cf. [106]. Let

\big(j^{(1)}_{st} : \mathfrak{g} \longrightarrow L_{AH}(D^{(1)})\big)_{0\le s\le t} \quad\text{and}\quad \big(j^{(2)}_{st} : \mathfrak{g} \longrightarrow L_{AH}(D^{(2)})\big)_{0\le s\le t}

be two Lévy processes on \mathfrak{g} with respect to the state vectors \Omega^{(1)} and \Omega^{(2)}, respectively. We call them equivalent if all their moments agree, i.e., if

\langle \Omega^{(1)}, j^{(1)}_{st}(X)^k\, \Omega^{(1)}\rangle = \langle \Omega^{(2)}, j^{(2)}_{st}(X)^k\, \Omega^{(2)}\rangle,
for all k \in \mathbb{N}, 0 \le s \le t, X \in \mathfrak{g}. This implies that all joint moments also agree on U(\mathfrak{g}), i.e.,

\langle \Omega^{(1)}, j^{(1)}_{s_1t_1}(u_1) \cdots j^{(1)}_{s_nt_n}(u_n)\, \Omega^{(1)}\rangle = \langle \Omega^{(2)}, j^{(2)}_{s_1t_1}(u_1) \cdots j^{(2)}_{s_nt_n}(u_n)\, \Omega^{(2)}\rangle,
for all X, Y \in \mathfrak{g},

b) \eta : \mathfrak{g} \longrightarrow D is a \rho-1-cocycle, i.e., it satisfies

\eta([X, Y]) = \rho(X)\eta(Y) - \rho(Y)\eta(X), \qquad X, Y \in \mathfrak{g},

and

c) \psi : \mathfrak{g} \longrightarrow \mathbb{C} is a linear functional with imaginary values such that the bilinear map (X, Y) \longmapsto \langle \eta(X), \eta(Y)\rangle is the 2-coboundary of \psi (with respect to the trivial representation), i.e.,

\psi([X, Y]) = \langle \eta(Y), \eta(X)\rangle - \langle \eta(X), \eta(Y)\rangle, \qquad X, Y \in \mathfrak{g}.
L : U0 (g) −→ C
8.2 Schürmann triples 135
by setting

j_{st}(X) = a^{\circ}_{st}(\rho(X)) + a^{+}_{st}(\eta(X)) - a^{-}_{st}(\eta(X)) + \psi(X)(t-s)\,\mathrm{Id}, \tag{8.2}
and the fact that \Omega \cong \Omega \otimes \Omega holds for the vacuum vector with respect to this factorization, one can show that the increments of (j_{st})_{0\le s\le t} are boson independent. The family

\big(j_{st} : \mathfrak{g} \longrightarrow L_{AH}(\Gamma(L^2(\mathbb{R}_+, D)))\big)_{0\le s\le t}
The following theorem can be traced back to the works of Araki and Streater.
In the form given here it is a special case of Schürmann’s representation
theorem for Lévy processes on involutive bialgebras, cf. [106].
Theorem 8.2.2 Let g be a real Lie algebra. Then there is a one-to-one
correspondence (modulo equivalence) between Lévy processes on g and
Schürmann triples on g. Precisely, given (ρ, η, L) a Schürmann triple on g
over D,
j_{st}(X) := a^{\circ}_{st}(\rho(X)) + a^{+}_{st}(\eta(X)) - a^{-}_{st}(\eta(X)) + (t-s)L(X)\,\mathrm{Id}, \tag{8.3}

0 \le s \le t, X \in \mathfrak{g}, defines a Lévy process on \mathfrak{g} over a dense subspace H \subseteq \Gamma(L^2(\mathbb{R}_+, D)), with respect to the vacuum vector \Omega.
The correspondence between (equivalence classes of) Lévy processes and
Schürmann triples is one-to-one and the representation (8.2) is universal.
Theorem 8.2.3 [106]

i) Two Lévy processes on \mathfrak{g} are equivalent if and only if their Schürmann triples are unitarily equivalent on the subspace \rho(U(\mathfrak{g}))\eta(\mathfrak{g}).
ii) A Lévy process (kst )0≤s≤t with generating functional L and Schürmann
triple (ρ, η, ψ) is equivalent to the Lévy process (jst )0≤s≤t associated to
(ρ, η, L) defined in Equation (8.2).
Due to Theorem 8.2.3, the problem of characterising and constructing all Lévy
processes on a given real Lie algebra can be decomposed into the following
steps.
a) First, classify all representations of g by anti-Hermitian operators (modulo
unitary equivalence). This gives the possible choices for the representation
ρ in the Schürmann triple.
b) Next, determine all ρ-1-cocycles. We distinguish between trivial cocycles,
i.e., cocycles which are of the form
η(X) = ρ(X)ω, X∈g
for some vector ω ∈ D in the representation space of ρ, and non-trivial
cocycles, i.e., cocycles, which cannot be written in this form.
c) Finally, determine all functionals L that turn a pair (ρ, η) into a Schürmann
triple (ρ, η, L).
The last step can also be viewed as a cohomological problem. If \eta is a \rho-1-cocycle then the bilinear map

(X, Y) \longmapsto \langle \eta(Y), \eta(X)\rangle - \langle \eta(X), \eta(Y)\rangle
for all X, Y, Z ∈ g. For L we can take any functional that has the map
Proposition 8.2.4 Let \mathfrak{g} be a real Lie algebra, (j_{st})_{0\le s\le t} a Lévy process on \mathfrak{g} with Schürmann triple (\rho, \eta, \psi) over the pre-Hilbert space D, B a unitary operator on D, and \omega \in D. Then (\tilde\rho, \tilde\eta, \tilde\psi) with

\tilde\rho(X) := B^*\rho(X)B,
\tilde\eta(X) := B^*\eta(X) - B^*\rho(X)B\omega,
\tilde\psi(X) := \psi(X) - \langle B\omega, \eta(X)\rangle + \langle \eta(X), B\omega\rangle + \langle B\omega, \rho(X)B\omega\rangle
\qquad\quad\; = \psi(X) - \langle \omega, \tilde\eta(X)\rangle + \langle \tilde\eta(X), \omega\rangle - \langle \omega, \tilde\rho(X)\omega\rangle, \qquad X \in \mathfrak{g},

is also a Schürmann triple on \mathfrak{g}.
Us∗ Ut = Id ⊗ ust ⊗ Id
L : U_0(\mathfrak{g}) \longrightarrow \mathbb{C}, \qquad L(u) = \langle \omega, \rho(u)\omega\rangle,
Remark 8.2.6 If the generating functional of a Lévy process (j_{st}) can be written in the form L(u) = \langle \omega, \rho(u)\omega\rangle for all u \in U_0(\mathfrak{g}), then we call (j_{st}) a compound Poisson process.
In the next examples we will work with the complexification gC of a real Lie
algebra g and the involution
(X + iY)∗ = −X + iY, X, Y ∈ g.
of antisymmetric elements in gC .
is a ∗-homomorphism from U_0(\mathfrak{g}) to the Itô algebra over D, see [45, Proposition 4.4.2]. It follows that the dimension of the Itô algebra generated by \{dL_X : X \in \mathfrak{g}\} is at least the dimension of D (since \eta is assumed surjective) and not bigger than (\dim D + 1)^2. If D is infinite-dimensional, then its dimension is also infinite. Note that it depends on the choice of the Lévy process.
Proposition 8.2.7 The Lévy process of (ρ̃, η̃, ψ̃) in Proposition 8.2.4 is
equivalent to the Lévy process defined by
Proof: Using the quantum Itô table, one can show that (\tilde{j}_{st})_{0\le s\le t} is of the form

\tilde{j}_{st}(X) = a^{\circ}_{st}\big(B^*\rho(X)B\big) + a^{+}_{st}\big(B^*\eta(X) - B^*\rho(X)B\omega\big) - a^{-}_{st}\big(B^*\eta(X) - B^*\rho(X)B\omega\big)
\qquad + (t-s)\big(\psi(X) - \langle B\omega, \eta(X)\rangle + \langle \eta(X), B\omega\rangle + \langle B\omega, \rho(X)B\omega\rangle\big)\mathrm{Id}.
We now consider the oscillator Lie algebra osc which is obtained by addition
of a Hermitian element N with commutation relations
with v ∈ C, b ∈ R.
\eta(N) = v_1, \quad \eta(A^+) = v_2, \quad \eta(A^-) = \eta(E) = 0,
L(N) = b, \quad L(E) = \|v_2\|^2, \quad L(A^+) = \overline{L(A^-)} = \langle v_1, v_2\rangle,
\bullet          | dL_{A^+}  | dL_N                           | dL_{A^-} | dL_E
dL_{A^+}  | 0             | 0                                    | 0             | 0
dL_N        | dL_{A^+} | dL_N + (\|v_1\|^2 - b)\,dt | 0             | 0
dL_{A^-}  | dL_E        | dL_{A^-}                          | 0             | 0
dL_E        | 0             | 0                                    | 0             | 0

Note that for \|v_1\|^2 = b, this is the usual Itô table of the four fundamental noises of Hudson–Parthasarathy calculus.
For n \ge 2 we may also consider the real Lie algebra with basis X_0, X_1, \ldots, X_n and the commutation relations

[X_0, X_k] = \begin{cases} X_{k+1} & \text{if } 1 \le k < n, \\ 0 & \text{otherwise,} \end{cases} \tag{8.5}

and

[X_k, X_\ell] = 0, \qquad 1 \le k, \ell \le n.

For n = 2 this algebra coincides with the Heisenberg–Weyl Lie algebra hw, while for n > 2 it is an (n-1)-step nilpotent Lie algebra. Its irreducible unitary representations can be described and constructed using the "orbit method" (i.e., there exists exactly one irreducible unitary representation for each orbit of the coadjoint representation), see, e.g., [101, 102].
\mathfrak{g}^{\mathbb{R}_+} := \bigg\{\sum_{k=1}^n X_k 1_{[s_k,t_k)} : 0 \le s_1 \le t_1 \le s_2 \le \cdots \le t_n < \infty,\ X_1, \ldots, X_n \in \mathfrak{g}\bigg\}.

Then \mathfrak{g}^{\mathbb{R}_+} is a real Lie algebra with the pointwise Lie bracket, and the Lévy process (j_{st})_{0\le s\le t} on \mathfrak{g} defines a representation \pi of \mathfrak{g}^{\mathbb{R}_+} via

\pi(X) = \sum_{k=1}^n j_{s_kt_k}(X_k), \qquad \text{for } X = \sum_{k=1}^n X_k 1_{[s_k,t_k)} \in \mathfrak{g}^{\mathbb{R}_+}. \tag{8.6}
8.4 Classical processes 143
Since the expectations of (jst )0≤s≤t factorise, we can choose (Ỹt )t∈R+ to be a
Lévy process, and if jst (Y) is even essentially self-adjoint then the marginal
distributions of (Ỹt )t∈R+ are uniquely determined.
In order to characterize the process (Ỹt )t∈R+ in Theorem 8.4.3 below, we will
need the following analogues of the splitting Lemma 6.1.2 in the framework
of quantum Lévy processes.
Lemma 8.4.1 Let X \in L(D), u, v \in D, and suppose further that the series

\sum_{n=0}^{\infty} \frac{t^n}{n!}\, X^n w \quad\text{and}\quad \sum_{n=0}^{\infty} \frac{t^n}{n!}\, (X^*)^n w \tag{8.7}
\mathrm{Ad}_{e^X} Y = e^X Y e^{-X} = Y + [X, Y] + \frac{1}{2}\big[X, [X, Y]\big] + \cdots = e^{\mathrm{ad}\,X}\, Y
The following lemma, which is the Lévy process analogue of Lemma 6.1.3,
provides the normally ordered form of the generalised Weyl operators, and it
is a key tool to calculate the characteristic functions of classical subprocesses
of Lévy processes on real Lie algebras.
Lemma 8.4.2 Let X \in L(D) and u, v \in D and suppose further that the series (8.7) converge in D for all w \in D. Then we have

\exp\big(\alpha + a^{\circ}(X) + a^{+}(u) + a^{-}(v)\big) = e^{\tilde\alpha} \exp\big(a^{+}(\tilde u)\big) \exp\big(a^{\circ}(X)\big) \exp\big(a^{-}(\tilde v)\big)

with

\tilde u = \sum_{n=1}^{\infty} \frac{X^{n-1}}{n!}\, u, \qquad \tilde v = \sum_{n=1}^{\infty} \frac{(X^*)^{n-1}}{n!}\, v, \qquad \tilde\alpha = \alpha + \sum_{n=2}^{\infty} \frac{1}{n!}\, \langle v, X^{n-2} u\rangle.
Proof: Let \omega \in D and set \omega_1(t) = \exp\big(t(\alpha + a^{\circ}(X) + a^{+}(u) + a^{-}(v))\big)\omega and

\omega_2(t) = e^{\tilde\alpha(t)} \exp\big(a^{+}(\tilde u(t))\big) \exp\big(ta^{\circ}(X)\big) \exp\big(a^{-}(\tilde v(t))\big)\omega,

and

\frac{d}{dt}\omega_2(t) = e^{\tilde\alpha(t)}\, a^{+}\Big(\frac{d\tilde u}{dt}(t)\Big) \exp\big(a^{+}(\tilde u(t))\big) \exp\big(ta^{\circ}(X)\big) \exp\big(a^{-}(\tilde v(t))\big)\omega
+ e^{\tilde\alpha(t)} \exp\big(a^{+}(\tilde u(t))\big)\, a^{\circ}(X) \exp\big(ta^{\circ}(X)\big) \exp\big(a^{-}(\tilde v(t))\big)\omega
+ e^{\tilde\alpha(t)} \exp\big(a^{+}(\tilde u(t))\big) \exp\big(ta^{\circ}(X)\big)\, a^{-}\Big(\frac{d\tilde v}{dt}(t)\Big) \exp\big(a^{-}(\tilde v(t))\big)\omega
+ e^{\tilde\alpha(t)} \exp\big(a^{+}(\tilde u(t))\big) \exp\big(ta^{\circ}(X)\big) \exp\big(a^{-}(\tilde v(t))\big)\, \frac{d\tilde\alpha}{dt}(t)\,\omega,

and one checks that the derivatives of \omega_1 and \omega_2 coincide for all t \in [0, 1]. Therefore we have \omega_1(1) = \omega_2(1).
In the next theorem we compute the characteristic exponent of (\tilde Y_t)_{t\in\mathbb{R}_+} by application of the splitting Lemma 8.4.2.
Theorem 8.4.3 Let (jst )0≤s≤t be a Lévy process on a real Lie algebra gR with
Schürmann triple (ρ, η, L). Then for any Hermitian element Y of gR such that
η(Y) is analytic for ρ(Y), the associated classical Lévy process (Ỹt )t∈R+ has
characteristic exponent

\Psi(\lambda) = i\lambda L(Y) + \sum_{n=2}^{\infty} \frac{\lambda^n}{n!}\, i^n\, \langle \eta(Y^*), \rho(Y)^{n-2}\eta(Y)\rangle,
A more direct proof of the theorem is also possible using the convolution of
functionals on U (g) instead of the boson Fock space realisation of (jst )0≤s≤t .
We note that \Psi(\lambda) also coincides with

\Psi(\lambda) = \sum_{n=1}^{\infty} \frac{i^n\lambda^n}{n!}\, L(Y^n).
Next, we give two corollaries of Theorem 8.4.3; the first of them justifies our
definition of Gaussian generating functionals.
where (Bt )t∈R+ is a standard Brownian motion. The next corollary deals with
the case where L is the restriction to U0 (g) of a positive functional on U(g).
L(u) = \langle \omega, \rho(u)\omega\rangle, \qquad u \in U_0(\mathfrak{g}).
The above corollary suggests calling a Lévy process on \mathfrak{g} with trivial cocycle \eta(u) = \rho(u)\omega and generating functional L(u) = \langle \omega, \rho(u)\omega\rangle for u \in U_0(\mathfrak{g}) a Poisson process on \mathfrak{g}. Note that in case the operator \rho(Y) is (essentially) self-adjoint, the Lévy measure of (\tilde Y_t)_{t\in\mathbb{R}_+} can be obtained by evaluating its spectral measure

\mu(d\lambda) = \langle \omega, dP_\lambda\, \omega\rangle

in the state \omega, where \rho(Y) = \int \lambda\, dP_\lambda is the spectral resolution of (the closure of) \rho(Y).
Theorem 8.4.6 Let (jst )0≤s≤t be a Lévy process on a real Lie algebra g and
let π be as in Equation (8.6). Choose X ∈ gR+ , and define
Then there exists a classical stochastic process (X̂t )t∈R+ with independent
increments that has the same finite distributions as X, i.e.,
\langle \Omega, g_1(X(f_1)) \cdots g_n(X(f_n))\, \Omega\rangle = \mathrm{IE}\big[g_1(\hat X(f_1)) \cdots g_n(\hat X(f_n))\big]

for step functions of the form f = \sum_{k=1}^n \alpha_k 1_{[s_k,t_k)} on \mathbb{R}_+.
k=1
can be defined by the usual functional calculus for the (essentially) self-adjoint
operators X(f1 ), . . . , X(fn ).
Notes
Lévy processes on real Lie algebras form a special case of Lévy processes
on involutive bialgebras, see [106], [79, chapter VII], [45]. They have already
been studied under the name factorisable representations of current algebras in
the sixties and seventies, see [109] for a historical survey and for references.
They are at the origin of the theory of quantum stochastic differential calculus.
See Section 5 of [109] for more references and a historical survey on the theory
of factorisable representations of current groups and algebras and its relation
to quantum stochastic calculus. Among future problems we can mention
the study of the cohomology of representations and the classification of all
Lévy processes on Lie algebras. We refer to [51] for the cohomology of Lie
algebras and Lie groups. It is known that the cohomology groups of all simple
nontrivial representations of the Lie algebra defined in (8.5) are trivial, see [51,
Proposition II.6.2].
Exercises
Exercise 8.1 Example of classical Lévy process. Let Y = B+ + B− + βM with
β ∈ R and Me0 = m0 e0 . This exercise aims at characterising the classical
Lévy process (Ỹt )t∈R+ associated to Y and (jst )0≤s≤t in the manner described
earlier. Corollary 8.4.5 tells us that (Ỹt )t∈R+ is a compound Poisson process
with characteristic exponent
\Psi(u) = \big\langle e_0, \big(e^{iuX} - 1\big)\, e_0\big\rangle.
We want to determine the Lévy measure of (Ỹt )t∈R+ , i.e., we want to determine
the measure μ on R, for which
\Psi(u) = \int_{-\infty}^{\infty} \big(e^{iux} - 1\big)\, \mu(dx).
This is the spectral measure of X evaluated in the state \langle e_0, \cdot\; e_0\rangle. Consider the
polynomials pn (x) ∈ R[x] defined by the condition
en = pn (X)e0 , n ∈ N.
1. Show that the polynomials pn (x) are orthogonal with respect to μ, i.e.,
\int_{-\infty}^{\infty} p_n(x)\, p_m(x)\, \mu(dx) = \delta_{nm}, \qquad n, m \in \mathbb{N}.
2. Find the three-term recurrence relation satisfied by the polynomials pn (x).
3. Determine the polynomials pn (x) according to the value of β.
4. Determine the density μ with respect to which the polynomials pn (x) are
orthogonal.
9

A guide to the Malliavin calculus
I do not think that 150 years from now, people will photocopy pages
from Bourbaki to rhapsodize on them. Some lines in this memoir by
Poisson, on the other hand, are beaming with life . . .
(P. Malliavin, in Dialogues Autour de la Création
Mathématique, 1997.)
This chapter is an introduction to the Malliavin calculus, as a preparation for
the noncommutative setting of Chapters 11 and 12. We adopt the point of view
of normal martingales in a general framework that encompasses Brownian
motion and the Poisson process as particular cases, as in [98]. The Malliavin
calculus originally requires a heavy functional analysis apparatus; here we
assume a basic knowledge of stochastic calculus, proofs are only outlined, and
the reader is referred to the literature for details.
Note that a martingale (M_t)_{t\in\mathbb{R}_+} is normal if and only if (M_t^2 - t)_{t\in\mathbb{R}_+} is a martingale, i.e.,

\mathrm{E}\big[M_t^2 - t \mid \mathcal{F}_s\big] = M_s^2 - s, \qquad 0 \le s < t.

I_0(f_0) = f_0, \qquad f_0 \in L^2(\mathbb{R}_+)^{\circ 0} \simeq \mathbb{R}.
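Both Brownian motion and the compensated Poisson process satisfy the normality condition, since in each case E[(M_t - M_s)^2] = t - s; a Monte Carlo sketch (seeded, numpy only):

```python
import numpy as np

# E[(M_t - M_s)^2] = t - s for Brownian motion and the compensated
# Poisson process, checked on simulated increments.
rng = np.random.default_rng(0)
n, s, t = 200_000, 1.0, 2.5

bm_inc = rng.normal(0.0, np.sqrt(t - s), size=n)      # B_t - B_s
pois_inc = rng.poisson(t - s, size=n) - (t - s)       # (N_t - t) - (N_s - s)

print(np.isclose(bm_inc.var(), t - s, rtol=0.05))     # -> True
print(np.isclose(pois_inc.var(), t - s, rtol=0.05))   # -> True
```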
Proof: Since the indefinite Itô integral is a martingale, from (9.2) we have

\mathrm{E}[I_n(f_n) \mid \mathcal{F}_t] = n!\, \mathrm{E}\bigg[\int_0^{\infty}\!\!\int_0^{t_n}\!\!\cdots\!\int_0^{t_2} f_n(t_1, \ldots, t_n)\, dM_{t_1} \cdots dM_{t_n}\, \Big|\, \mathcal{F}_t\bigg]
= n! \int_0^{t}\!\!\int_0^{t_n}\!\!\cdots\!\int_0^{t_2} f_n(t_1, \ldots, t_n)\, dM_{t_1} \cdots dM_{t_n}
= I_n\big(f_n 1_{[0,t]^n}\big).
and

\mathcal{U} = \bigg\{\sum_{i=1}^n 1_{[t_{i-1},t_i)} F_i : F_i \in \mathcal{S},\ 0 = t_0 \le t_1 < \cdots < t_n,\ n \ge 1\bigg\},

which is contained in

\tilde{\mathcal{U}} := \bigg\{\sum_{k=0}^n I_k(g_k(*, \cdot)) : g_k \in L^2(\mathbb{R}_+)^{\circ k} \otimes L^2(\mathbb{R}_+),\ k = 0, \ldots, n,\ n \in \mathbb{N}\bigg\},
k=0
where the symmetric tensor product ◦ is defined in the Appendix A.8. Next
we state the definition of the operators D and δ on multiple stochastic integrals
(random variables and processes), whose linear combinations span S and U .
f_n \in L^2(\mathbb{R}_+)^{\circ n}.

and

D_t F = \sum_{k=1}^{\infty} k\, I_{k-1}(f_k(*, t)) = f_1(t) + \sum_{k=2}^{\infty} k\, I_{k-1}(f_k(*, t)), \qquad dt\,dP\text{-a.e.}

\xi_t(u) := \sum_{n=0}^{\infty} \frac{1}{n!}\, I_n\big((u 1_{[0,t]})^{\otimes n}\big), \qquad t \in \mathbb{R}_+,
\delta : \tilde{\mathcal{U}} \longrightarrow L^2(\Omega)

\delta\big(I_n(f_{n+1}(*, \cdot))\big) = I_{n+1}(\tilde f_{n+1}), \qquad f_{n+1} \in L^2(\mathbb{R}_+)^{\circ n} \otimes L^2(\mathbb{R}_+),

\tilde f_{n+1}(t_1, \ldots, t_{n+1}) = \frac{1}{n+1} \sum_{k=1}^{n+1} f_{n+1}(t_1, \ldots, t_{k-1}, t_{k+1}, \ldots, t_{n+1}, t_k).
9.1 Creation and annihilation operators 153
In particular we have

(f \circ g_n)(t_1, \ldots, t_{n+1}) = \frac{1}{n+1} \sum_{k=1}^{n+1} f(t_k)\, g_n(t_1, \ldots, t_{k-1}, t_{k+1}, \ldots, t_{n+1}),
and, in particular,

\delta(u I_n(f_n)) = n \int_0^{\infty} I_n\big(f_n(*, s) \circ u_{\cdot} 1_{[0,s]}(*, \cdot)\big)\, dM_s + \int_0^{\infty} u_s\, I_n\big(f_n 1_{[0,s]^n}\big)\, dM_s,
= n!\, 1_{\{n-1=m\}} \int_0^{\infty}\!\!\cdots\!\int_0^{\infty} f_n(s_1, \ldots, s_{n-1}, t)\, g_n(s_1, \ldots, s_{n-1}, t)\, ds_1 \cdots ds_{n-1}\, dt
= n\, 1_{\{n-1=m\}} \int_0^{\infty} \mathrm{IE}\big[I_{n-1}(f_n(*, t))\, I_{n-1}(g_n(*, t))\big]\, dt
= \mathrm{IE}\big[\langle DF, u\rangle_{L^2(\mathbb{R}_+)}\big].
Remark 9.1.7 By construction, the operator D satisfies the stability assumption, thus we have D_s F = 0, s > t, for any \mathcal{F}_t-measurable F \in \mathcal{S}, t \in \mathbb{R}_+.
Proposition 9.1.9 If (Mt )t∈R+ has the chaos representation property then it
has the predictable representation property.
= \|u\|_{L^2(\Omega\times\mathbb{R}_+)}, \qquad u \in L^2_{\mathrm{ad}}(\Omega \times \mathbb{R}_+),

as follows from Remark 9.1.7 since D_t u_s = 0, 0 \le s \le t.
separable Hilbert space h, such that the W(h) are centered Gaussian random variables with covariances given by

\mathrm{IE}[W(h)W(k)] = \langle h, k\rangle, \qquad h, k \in h,

on a probability space (\Omega, \mathcal{F}, P). Setting H_1 = \{W(h) : h \in h\} yields a closed Gaussian subspace of L^2(\Omega), and W : h \longrightarrow H_1 \subseteq L^2(\Omega) is an isometry, and we will assume that the \sigma-algebra \mathcal{F} is generated by the elements of H_1.
Let

\varphi^{\sigma}_d(s_1, \ldots, s_d) = \frac{1}{(2\pi\sigma^2)^{d/2}}\, e^{-(s_1^2 + \cdots + s_d^2)/(2\sigma^2)}, \qquad (s_1, \ldots, s_d) \in \mathbb{R}^d,

denote the standard Gaussian density function with covariance \sigma^2\,\mathrm{Id} on \mathbb{R}^d.
The multiple stochastic integrals I_n(f_n) of f_n \in L^2(\mathbb{R}_+)^{\circ n} with respect to (B_t)_{t\in\mathbb{R}_+} satisfy the multiplication formula

I_n(f_n)\, I_m(g_m) = \sum_{k=0}^{n\wedge m} k! \binom{n}{k} \binom{m}{k}\, I_{n+m-2k}(f_n \otimes_k g_m).
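Specialised to f_n = 1_{[0,t]}^{\otimes n} and g_m = 1_{[0,t]}^{\otimes m}, each contraction \otimes_k contributes a factor t^k and the multiplication formula becomes a pointwise identity for the Hermite polynomials H_n(x; t) appearing below; a sketch, using the recurrence H_{k+1}(x; t) = x H_k(x; t) - k t H_{k-1}(x; t):

```python
from math import comb, factorial

# H_n(x;t) H_m(x;t) = sum_k k! C(n,k) C(m,k) t^k H_{n+m-2k}(x;t),
# checked pointwise at arbitrary sample values.
def hermite(n, x, t):
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * t * h_prev
    return h

n, m, t, x = 3, 2, 1.7, 0.9
lhs = hermite(n, x, t) * hermite(m, x, t)
rhs = sum(factorial(k) * comb(n, k) * comb(m, k) * t ** k * hermite(n + m - 2 * k, x, t)
          for k in range(min(n, m) + 1))
print(abs(lhs - rhs) < 1e-9)  # -> True
```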
I_n\big(u_1^{\otimes n_1} \circ \cdots \circ u_d^{\otimes n_d}\big) = \prod_{k=1}^d H_{n_k}\big(I_1(u_k);\, \|u_k\|_2^2\big),

where n = n_1 + \cdots + n_d.
Proof: We have

H_0\big(I_1(u); \|u\|_2^2\big) = I_0(u^{\otimes 0}) = 1 \quad\text{and}\quad H_1\big(I_1(u); \|u\|_2^2\big) = I_1(u),
9.2 Wiener space 157
In particular we have

I_n\big(1_{[0,t]}^{\otimes n}\big) = n! \int_0^t\!\!\int_0^{s_n}\!\!\cdots\!\int_0^{s_2} dB_{s_1} \cdots dB_{s_n} = H_n(B_t; t),

and

I_n\big(1_{[t_0,t_1]}^{\otimes n_1} \circ \cdots \circ 1_{[t_{d-1},t_d]}^{\otimes n_d}\big) = \prod_{k=1}^d I_{n_k}\big(1_{[t_{k-1},t_k]}^{\otimes n_k}\big) = \prod_{k=1}^d H_{n_k}\big(B_{t_k} - B_{t_{k-1}};\, t_k - t_{k-1}\big).
= 1_{\{n=m\}}\, n!\, t^n.
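The orthogonality relation E[H_n(B_t; t) H_m(B_t; t)] = 1_{\{n=m\}}\, n!\, t^n can be checked by numerical integration against the N(0, t) density; a sketch:

```python
import numpy as np

# Hermite polynomials with variance parameter t, via the recurrence
# H_{k+1}(x; t) = x H_k(x; t) - k t H_{k-1}(x; t).
def hermite(n, x, t):
    h_prev, h = np.ones_like(x), x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * t * h_prev
    return h

t = 1.3
x = np.linspace(-12.0, 12.0, 40001)
dx = x[1] - x[0]
gauss = np.exp(-x ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

def moment(n, m):
    return np.sum(hermite(n, x, t) * hermite(m, x, t) * gauss) * dx

print(np.isclose(moment(3, 3), 6.0 * t ** 3, rtol=1e-4))  # -> True (3! t^3)
print(np.isclose(moment(3, 2), 0.0, atol=1e-8))           # -> True
```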
Proposition 9.2.3 The Brownian motion (B_t)_{t\in\mathbb{R}_+} has the chaos representation property, i.e., any F \in L^2(\Omega) admits a chaos decomposition

F = \sum_{k=0}^{\infty} I_k(g_k).
Assume that F has the form F = g(I_1(e_1), \ldots, I_1(e_k)) for some

g \in L^2\bigg(\mathbb{R}^k,\ \frac{1}{(2\pi)^{k/2}}\, e^{-|x|^2/2}\, dx\bigg),
and admits the chaos expansion F = \sum_{n=0}^{\infty} I_n(f_n). Then for all n \ge 1 there exists a (multivariate) Hermite polynomial P_n of degree n such that
D : \mathcal{S} \longrightarrow L^2(\Omega) \otimes h \cong L^2(\Omega; h)

is given by

DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\big(W(h_1), \ldots, W(h_n)\big) \otimes h_i
for F = f W(h1 ), . . . , W(hn ) ∈ S. In particular, D is a derivation with respect
to the natural L∞ ()-bimodule structure of L2 (; h), i.e.,
D : L2 () −→ L2 (; h)
δ : L2 (; h) −→ L2 ().
160 A guide to the Malliavin calculus
Denoting by

\mathcal{S}_h = \bigg\{u = \sum_{j=1}^n F_j \otimes h_j : F_1, \ldots, F_n \in \mathcal{S},\ h_1, \ldots, h_n \in h,\ n \in \mathbb{N}\bigg\}
D_t F = \sum_{i=1}^n u_i(t)\, \frac{\partial f}{\partial x_i}\big(I_1(u_1), \ldots, I_1(u_n)\big), \qquad t \in \mathbb{R}_+. \tag{9.8}

D_t f(B_{t_1}, \ldots, B_{t_n}) = \sum_{i=1}^n 1_{[0,t_i]}(t)\, \frac{\partial f}{\partial x_i}(B_{t_1}, \ldots, B_{t_n}),
\langle DF, h\rangle_{L^2(\mathbb{R}_+)}
= \frac{d}{d\varepsilon} f\bigg(\int_0^{\infty} u_1(t)(dB(t) + \varepsilon h(t)dt), \ldots, \int_0^{\infty} u_n(t)(dB(t) + \varepsilon h(t)dt)\bigg)\bigg|_{\varepsilon=0}
= \frac{d}{d\varepsilon} F(\omega + \varepsilon h)\Big|_{\varepsilon=0},
h \in L^2(\mathbb{R}_+), where the limit exists in L^2(\Omega). We refer to the above identity as
In other words the scalar product h, DF
L2 (R+ ) coincides with the Fréchet
derivative
D_h F = \frac{d}{d\varepsilon}\bigg|_{\varepsilon=0} f\big(W(h_1) + \varepsilon\langle h, h_1\rangle, \ldots, W(h_n) + \varepsilon\langle h, h_n\rangle\big)
for all F = f W(h1 ), . . . , W(hn ) ∈ S and all h ∈ h. We also have the integration
by parts formulas
\mathrm{IE}[F\, W(h)] = \mathrm{IE}\big[\langle h, DF\rangle_{L^2(\mathbb{R}_+)}\big], \tag{9.9}

and

\mathrm{IE}[FG\, W(h)] = \mathrm{IE}\big[\langle h, DF\rangle_{L^2(\mathbb{R}_+)}\, G + F\, \langle h, DG\rangle_{L^2(\mathbb{R}_+)}\big], \tag{9.10}

D_h \delta(u) = \langle h, u\rangle_{L^2(\mathbb{R}_+)} + \delta(D_h u), \tag{9.11}

\mathrm{IE}[\delta(u)\delta(v)] = \mathrm{IE}\big[\langle u, v\rangle_{L^2(\mathbb{R}_+)}\big] + \mathrm{IE}\big[\mathrm{Tr}(Du \circ Dv)\big],
From Proposition 9.1.13, the Skorohod integral \delta(u) coincides with the Itô integral of u \in L^2(W; H) with respect to Brownian motion, i.e.,

\delta(u) = \int_0^{\infty} u_t\, dB_t,
ω −→ (ω(A1 ), . . . , ω(An ))
where fn ∈ L1 (X n , σ ⊗n )
is symmetric in n variables, n ≥ 1.
Recall that the Fourier transform of \pi^X_\sigma via the Poisson stochastic integral

\int_X f(x)\, \omega(dx) = \sum_{x \in \omega} f(x), \qquad f \in L^1(X, \sigma),

is given by

\mathrm{IE}_{\pi_\sigma}\bigg[\exp\bigg(i \int_X f(x)\, \omega(dx)\bigg)\bigg] = \exp\bigg(\int_X \big(e^{if(x)} - 1\big)\, \sigma(dx)\bigg), \tag{9.14}
and

\mathrm{IE}\bigg[\bigg(\int_X f(x)\big(\omega(dx) - \sigma(dx)\big)\bigg)^2\bigg] = \int_X |f(x)|^2\, \sigma(dx), \qquad f \in L^2(X, \sigma).
9.3 Poisson space 163
The standard Poisson process (Nt )t∈R+ with intensity λ > 0 can be constructed
as
with
Letting
we have
I_n^X(f_n)(\omega) = \int_{X^n} f_n(x_1, \ldots, x_n)\, \big(\omega(dx_1) - \sigma(dx_1)\big) \cdots \big(\omega(dx_n) - \sigma(dx_n)\big).
The integral I_n^X(f_n) extends to symmetric functions f_n \in L^2(X)^{\circ n} via the isometry formula

\mathrm{IE}_{\pi_\sigma}\big[I_n^X(f_n)\, I_m^X(g_m)\big] = n!\, 1_{\{n=m\}}\, \langle f_n, g_m\rangle_{L^2(X,\sigma)^{\circ n}},

= I_{n+1}^X\big(v^{\otimes n} \circ u\big) + n\, I_n^X\big((uv) \circ v^{\otimes(n-1)}\big) + n\, \langle u, v\rangle_{L^2(X,\sigma)}\, I_{n-1}^X\big(v^{\otimes(n-1)}\big).
I_n^X(f_n)\, I_m^X(g_m) = \sum_{s=0}^{2(n\wedge m)} I_{n+m-s}^X(h_{n,m,s}),

where h_{n,m,s} is a sum of terms of the form

(x_{l+1}, \ldots, x_n, y_{k+1}, \ldots, y_m) \longmapsto \int_{X^l} f_n(x_1, \ldots, x_n)\, g_m(x_1, \ldots, x_k, y_{k+1}, \ldots, y_m)\, \sigma(dx_1) \cdots \sigma(dx_l),
I_n^X\big(1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d}\big)(\omega) = \prod_{i=1}^d C_{k_i}\big(\omega(A_i), \sigma(A_i)\big),
u \in L^2(X). We note that the Poisson measure has the chaos representation property, i.e., every square-integrable functional F \in L^2(\Omega^X, \pi_\sigma) admits the orthogonal Wiener–Poisson decomposition

F = \sum_{n=0}^{\infty} I_n^X(f_n)
and

\mathcal{U} = \bigg\{\sum_{k=0}^n I_k^X(g_k(*, \cdot)) : g_k \in L^2(X)^{\circ k} \otimes L^2(X),\ k = 0, \ldots, n,\ n \in \mathbb{N}\bigg\}.
D^X : L^2(\Omega^X, \pi_\sigma) \longrightarrow L^2(\Omega^X \times X, P \otimes \sigma)

and

\delta^X : L^2(\Omega^X \times X, P \otimes \sigma) \longrightarrow L^2(\Omega^X, P).

In particular we have

\delta^X(f) = I_1^X(f) = \int_X f(x)\big(\omega(dx) - \sigma(dx)\big), \qquad f \in L^2(X, \sigma),

and

\mathrm{IE}\big[\langle D^X F, u\rangle_{L^2(X,\sigma)}\big] = \mathrm{IE}\big[F\, \delta^X(u)\big], \qquad F \in \mathrm{Dom}(D^X),\ u \in \mathrm{Dom}(\delta^X).
x \in X, be defined by

D^X_x F = \varepsilon_x^+ F - F, \qquad x \in X.
On the other hand, the result of Lemma 9.3.5 is clearly verified on simple
functionals. For instance when F = I1X (u) is a single Poisson stochastic integral,
we have
As in [126], the law of the mapping (x, \omega) \longmapsto \omega \cup \{x\} under 1_A(x)\sigma(dx)\pi_\sigma(d\omega) is absolutely continuous with respect to \pi_\sigma. In particular, (\omega, x) \longmapsto F(\omega \cup \{x\}) is well-defined, \pi_\sigma \otimes \sigma-a.e., and this justifies the extension of Lemma 9.3.5 in the next proposition.
Proof : There exists a sequence (Fn )n∈N of functionals of the form (9.15),
such that (DX Fn )n∈N converges everywhere to DX F on a set AF such that
(πσ ⊗ σ )(AcF ) = 0. For each n ∈ N, there exists a measurable set Bn ⊂ X × X
such that (πσ ⊗ σ )(Bcn ) = 0 and
Taking the limit as n goes to infinity on (\omega, x) \in A_F \cap \bigcap_{n=0}^{\infty} B_n, we get
F\, \delta^X(u) = \delta^X(uF) + \langle u, D^X F\rangle_{L^2(X,\sigma)} + \delta^X(u\, D^X F).
The relation also holds if the series and integrals converge, or if F ∈ Dom(DX )
and u ∈ Dom(δ X ) is such that uDX F ∈ Dom(δ X ).
On the other hand, the standard Brownian motion indexed by t \in [0, 1] can be constructed as the Paley–Wiener series

W(t) = t\tau_0^0 + \frac{1}{\pi\sqrt{2}} \sum_{n=1}^{\infty} \frac{\tau_n^0}{n} \sin(2n\pi t), \qquad t \in [0, 1],

with

\tau_n^0 = \sqrt{2} \int_0^1 \sin(2\pi n t)\, dW(t), \quad n \ge 1, \qquad \tau_0^0 = \int_0^1 dW(t) = W(1).
Let also
E_i = \begin{cases} \mathbb{R}^{d+2}, & i = 0, \\ \big\{(y_0, \ldots, y_{d+1}) \in \mathbb{R}^{d+2} : y_1 = 0\big\}, & i = 1, \\ \big\{(y_0, \ldots, y_{d+1}) \in \mathbb{R}^{d+2} : y_i \in \{-1, 1\}\big\}, & i = 2, \ldots, d+1, \end{cases}

and

\Omega_k^i = \{\omega \in \Omega : \omega_k \in E_i\}, \qquad k \in \mathbb{N},\ i = 1, \ldots, d+1,

and let

\mathcal{U}(X) := \big\{u \in \mathcal{S}(H \otimes X) : u_k^i = 0 \text{ on } \Omega_k^i,\ k \in \mathbb{N},\ i = 0, 1, \ldots, d+1\big\},

\mathrm{IE}_P\big[\langle DF, u\rangle_{H\otimes X}\big] = \mathrm{IE}\big[\langle \delta(u), F\rangle_X\big], \qquad u \in \mathcal{U}(X),\ F \in \mathcal{S}(X),
where \delta is defined as

\delta(u) = \sum_{k\in\mathbb{N}} \big(\tau_k^0 u_k^0 + u_k^1 - \mathrm{trace}\, D_k u_k\big), \qquad u \in \mathcal{U}(X),
Definition 9.4.4 For p ≥ 1, we call IDp,1 (X) the completion of S(X) with
respect to the norm
In particular, \mathrm{ID}^{\mathcal{U}}_{p,1}(H) is the completion of \mathcal{U}(\mathbb{R}) with respect to the norm \|\cdot\|_{\mathrm{ID}_{p,1}(H)}. For p = 2, let \mathrm{Dom}(\delta; X) denote the domain of the closed extension of \delta. As shown in the following proposition, \mathrm{ID}^{\mathcal{U}}_{2,1}(H) is a Hilbert space contained in \mathrm{Dom}(\delta; X).
\|\delta(F)\|_{L^2(\Omega)} \le (d+2)\, \|F\|_{\mathrm{ID}^{\mathcal{U}}_{2,1}(H)}, \qquad F \in \mathrm{ID}^{\mathcal{U}}_{2,1}(H).
\delta(F) = \sum_{k=0}^{\infty} \bigg(\tau_k^0 F(k, 0) + F(k, 1) - \sum_{i=0}^{d+1} D_k^i F(k, i)\bigg),
and
(\delta(F))^2 \le (d+2) \bigg(\sum_{k=0}^{\infty} \tau_k^0 F(k, 0) - D_k^0 F(k, 0)\bigg)^2
+ (d+2) \bigg(\sum_{k=0}^{\infty} F(k, 1) - D_k^1 F(k, 1)\bigg)^2 + (d+2) \sum_{i=2}^{d+1} \bigg(\sum_{k=0}^{\infty} D_k^i F(k, i)\bigg)^2,
9.4 Sequence models 173
hence from the Gaussian, exponential and uniform cases, cf. [103], [94], [96], we have

\|\delta(F)\|^2_{L^2(\Omega)} \le (d+2)\, \mathrm{IE}_P\bigg[\sum_{k=0}^{\infty} (F(k, 0))^2\bigg]
+ (d+2)\, \mathrm{IE}_P\bigg[\sum_{k,l=0}^{\infty} (D_k^0 F(l, 0))^2 + (D_k^1 F(l, 1))^2 + \sum_{i=2}^{d+1} (D_k^i F(l, i))^2\bigg]
\le (d+2)\, \pi^0\, \|F\|^2_{\mathrm{ID}^{\mathcal{U}}_{2,1}(H)}.
Based on the duality relation between D and δ and on the density of U (X)
in L2 (; H ⊗ X), it can be shown that the operators D and δ are local, i.e.,
for F ∈ ID2,1 (X), resp. F ∈ Dom(δ; X), we have DF = 0 almost surely on
{F = 0}, resp. δ(F) = 0 almost surely on {F = 0}.
Notes
Infinite-dimensional analysis has a long history: it began in the sixties with the work of Gross [49], Hida, Elworthy, Krée, and others, but it was Malliavin [75] who applied it to diffusions in order to give a probabilistic proof of Hörmander's theorem.
Proposition 9.2.4 is usually taken as a definition of the Malliavin derivative
D, see, e.g., [84]. The relation between multiple Wiener integrals and Hermite
polynomials originates in [107]. Finding the probabilistic interpretation of D
for normal martingales other than the Brownian motion or the Poisson process,
e.g., for the Azéma martingales, is still an open problem.
Exercises
Exercise 9.1 Let (Bt)t∈R+ and (Nt)t∈R+ be an independent standard Brownian motion and a standard Poisson process. Compute the mean and variance of the following stochastic integrals:
$$\int_0^T e^{B_t}\,dB_t, \quad \int_0^T B_t\,dB_t, \quad \int_0^T (N_t-t)\,d(N_t-t), \quad \int_0^T B_t\,d(N_t-t), \quad \int_0^T (N_t-t)\,dB_t.$$
Exercise 9.2 Let (Bt)t∈[0,T] denote a standard Brownian motion. Compute the expectation
$$\mathrm{IE}\Big[\exp\Big(\beta\int_0^T B_t\,dB_t\Big)\Big].$$
Exercise 9.3 Let (Bt)t∈[0,T] denote a standard Brownian motion generating the filtration (Ft)t∈[0,T] and let f ∈ L²([0, T]). Compute the conditional expectation
$$\mathrm{IE}\Big[e^{\int_0^T f(s)\,dB_s}\,\Big|\,\mathcal F_t\Big], \qquad 0\le t\le T.$$
Exercise 9.4 Let (Bt )t∈[0,T] denote a standard Brownian motion and let α ∈ R.
Solve the stochastic differential equation
dXt = αXt dt + dBt , 0 ≤ t ≤ T.
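For Exercise 9.4, the solution is the Ornstein–Uhlenbeck-type process X_t = e^{αt}X_0 + ∫₀ᵗ e^{α(t−s)} dB_s, whose variance ∫₀ᵗ e^{2α(t−s)} ds = (e^{2αt} − 1)/(2α) follows from the Itô isometry. A small numerical sketch checking that closed form (helper names are ours):

```python
import math

def ou_variance_numeric(alpha, t, n=100000):
    # Ito isometry: Var X_t = int_0^t exp(2*alpha*(t-s)) ds, midpoint rule
    h = t / n
    return sum(math.exp(2 * alpha * (t - (k + 0.5) * h)) for k in range(n)) * h

def ou_variance_closed(alpha, t):
    return (math.exp(2 * alpha * t) - 1) / (2 * alpha)

for alpha, t in [(0.5, 1.0), (-1.3, 2.0)]:
    assert abs(ou_variance_numeric(alpha, t) - ou_variance_closed(alpha, t)) < 1e-6
```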
Exercise 9.5 Consider (Bt )t∈R+ a standard Brownian motion generating the
filtration (Ft )t∈R+ , and let (St )t∈R+ denote the solution of the stochastic
differential equation
dSt = rSt dt + σ St dBt . (9.18)
1. Solve the stochastic differential equation (9.18).
2. Find the function f (t, x) such that
f (t, St ) = IE[(ST )2 | Ft ], 0 ≤ t ≤ T.
3. Show that the process t −→ f (t, St ) is a martingale.
4. Using the Itô formula, compute the process (ζt )t∈[0,T] in the predictable
representation
$$f(t, S_t) = \mathrm{IE}[\phi(S_T)] + \int_0^t \zeta_s\,dB_s.$$
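In Exercise 9.5, solving (9.18) gives S_t = S₀ exp((r − σ²/2)t + σB_t), and a natural candidate for part 2 is f(t, x) = x² e^{(2r+σ²)(T−t)}, since IE[S_T²] = S₀² e^{(2r+σ²)T} by the lognormal moment formula IE[e^{θB_T}] = e^{θ²T/2}. A numerical sketch of that moment identity, using deterministic Gaussian quadrature rather than simulation (helper names are ours):

```python
import math

def gaussian_expectation(f, n=24000, a=12.0):
    # midpoint-rule approximation of E[f(Z)], Z standard normal
    h = 2 * a / n
    total = 0.0
    for k in range(n):
        z = -a + (k + 0.5) * h
        total += f(z) * math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return total * h

r, sigma, T, S0 = 0.05, 0.2, 1.0, 100.0

# E[S_T^2] computed through E[exp(2*sigma*sqrt(T)*Z)]
second_moment = (S0 ** 2) * math.exp(2 * (r - sigma ** 2 / 2) * T) \
    * gaussian_expectation(lambda z: math.exp(2 * sigma * math.sqrt(T) * z))

exact = (S0 ** 2) * math.exp((2 * r + sigma ** 2) * T)
assert abs(second_moment / exact - 1) < 1e-5
```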
Exercise 9.6 Consider the stochastic integral representation
$$F = \mathrm{IE}[F] + \int_0^{\infty} u_t\,dM_t \tag{9.19}$$
with respect to the normal martingale (Mt )t∈R+ .
1. Show that the process u in (9.19) is unique in L2 ( × R+ ).
2. Using the Clark–Ocone formula (cf. e.g., § 5.5 of [98]), find the process
(ut )t∈R+ in the following cases:
analytic vector for all jst (X), 0 ≤ s ≤ t, X ∈ g, and we will assume that η(g)
consists of analytic vectors.
Denote by g = eX an element of the simply connected Lie group G
associated to g. Our assumptions guarantee that η(g) and L(g) can be defined
for X in a sufficiently small neighborhood of 0. For an explicit expression for
the action of Ust (g) on exponential vectors, see also [106, Proposition 4.1.2].
In order to get a quasi-invariance formula for (X̂t)t∈R+ we choose an element Y ∈ g_{R+} that does not commute with X and let the unitary operator U = e^{π(Y)} act on the algebra:
$$\big\langle\Omega,\ g\big(UX(f)U^*\big)\Omega\big\rangle = \big\langle\Omega,\ U\,g\big(X(f)\big)U^*\Omega\big\rangle = \big\langle U^*\Omega,\ g\big(X(f)\big)\,U^*\Omega\big\rangle = \big\langle G\Omega,\ g\big(X(f)\big)\,G\Omega\big\rangle = \mathrm{IE}\Big[g\big(X(f)\big)\,|\hat G|^2\Big].$$
$$X_{\alpha,\zeta,\beta} = \alpha N + \zeta A^+ + \bar\zeta A^- + \beta E$$
with α, β ∈ R, ζ ∈ C, as
$$X(t) = \alpha N + (\zeta - i\alpha \bar w t)A^+ + (\bar\zeta + i\alpha w t)A^- + \big(\beta + 2t\,\Im(\bar w\zeta) + \alpha|w|^2 t^2\big)E,$$
where ℑ(z) denotes the imaginary part of z. Recall that by Proposition 6.1.1, the distribution of ρ(X_{α,ζ,β}) in the vacuum vector e₀ is either a Gaussian random variable with variance |ζ|² and mean β, or a Poisson random variable with "jump size" α, intensity |ζ|²/α², and drift β − |ζ|²/α. We interpret the result of the next proposition as
$$\mathrm{IE}\big[g\big(X(t)\big)\big] = \mathrm{IE}\Big[g(X_{\alpha,\zeta,\beta})\,\big|G(X_{\alpha,\zeta,\beta},t)\big|^2\Big] = \big\langle e_0,\ g(X_{\alpha,\zeta,\beta})\,\big|G(X_{\alpha,\zeta,\beta},t)\big|^2\, e_0\big\rangle, \qquad g\in C_0(\mathbb R).$$
As a consequence of Lemma 6.1.4, the function
$$v(t) := e^{-tY} e_0$$
can be written in the form
$$v(t) = \sum_{k=0}^{\infty} c_k(t)\,X_{\alpha,\zeta,\beta}^k\, e_0 = G(X_{\alpha,\zeta,\beta}, t)\,e_0,$$
with
When α = 0, this identity gives the relative density of two Gaussian random variables with the same variance, but different means. For α ≠ 0, it gives the relative density of two Poisson random variables with different intensities.
Note that the classical analogue of this limiting procedure is
$$= \big\langle e_0,\ g(X_\beta)\,\big|G(X_\beta,t)\big|^2\, e_0\big\rangle.$$
By Lemma 6.2.3 the lowest weight vector e0 is cyclic for ρc (Xβ ) for all β ∈ R,
c > 0, therefore the function
where
$$(\beta,t) = \frac{1}{\sqrt{1-\beta^2}}\left(\arctan\Big(\sqrt{\tfrac{1+\beta}{1-\beta}}\;e^{t}\Big) - \arctan\sqrt{\tfrac{1+\beta}{1-\beta}}\right), \tag{10.3}$$
and
$$(\beta,t) = \frac{t}{2} + \frac12\log\frac{1+\beta+e^{-2t}(1-\beta)}{2}. \tag{10.4}$$
10.4 Quasi-invariance on hw
In this section we use the Weyl operators and notation of Section 1.3. Recall that in Chapter 7, a continuous map O from L^p(R²), 1 ≤ p ≤ 2, into the space of bounded operators on h has been defined via
$$O(f) = \int_{\mathbb R^2} (\mathcal F^{-1} f)(x,y)\, e^{ixP+iyQ}\,dx\,dy,$$
$$= \int_{\mathbb C^d} \mathcal F^{-1}\big(T_{(k_1,h_1,k_2,h_2)}\varphi\big)(u,v)\, e^{i(uP(h_1)+vQ(h_2))}\,du\,dv = O_h\big(T_{(k_1,h_1,k_2,h_2)}\varphi\big).$$
$$\begin{cases} \rho(N) = 1, & \rho(A^\pm) = \rho(E) = 0,\\ \eta(A^+) = 1, & \eta(N) = \eta(A^-) = \eta(E) = 0,\\ L(N) = L(A^\pm) = 0, & L(E) = 1.\end{cases}$$
$$X(f) = a^+(f) + a^-(f)$$
and the associated classical process (X̂t)t∈R+ is Brownian motion. We choose Y = h·(A⁺ − A⁻), with h ∈ S(R+). A calculation similar to that of the previous subsection yields
$$X^Y(1_{[0,t]}) = e^{Y}\, X(1_{[0,t]})\, e^{-Y} = X(1_{[0,t]}) - 2\int_0^t h(s)\,ds,$$
i.e., A_X is invariant under e^Y and the classical process associated with X^Y is obtained from (X̂t)t∈R+ by adding a drift. Now e^{π(Y)} is a Weyl operator and gives an exponential vector when it acts on the vacuum, i.e., we have
$$e^{\pi(Y)}\Omega = e^{-\|h\|^2/2}\,\mathcal E(h),$$
see, e.g., [79, 87]. But, up to normalisation, we can create the same exponential vector by acting on Ω with e^{X(h)}. Therefore we get G = exp(X(h) − ‖h‖²) and the well-known Girsanov formula for Brownian motion
$$\big\langle e_0,\ g\big(\hat X^Y(f)\big)\, e_0\big\rangle = \Big\langle e_0,\ g\big(\hat X(f)\big) \exp\Big(2X(h) - 2\int_0^\infty h^2(s)\,ds\Big)\, e_0\Big\rangle. \tag{10.5}$$
constant we get
$$X(f) = a^{\circ}(f) + \nu a^+(f) + \nu a^-(f) + \nu^2 \int_0^{\infty} f(s)\,ds.$$
On the other hand, using the tensor product structure of the Fock space, we can calculate
$$e^{-\pi(Y)}\Omega = \exp\Big(-\sum_{k=1}^n h_k\, j_{s_k t_k}(Y)\Big)\Omega = e^{-h_1 j_{s_1 t_1}(Y)}\Omega \otimes\cdots\otimes e^{-h_n j_{s_n t_n}(Y)}\Omega$$
$$= \exp\Big(X\Big(\frac{1-e^{-2h_1}}{2}\,1_{[s_1,t_1)}\Big) - (t_1-s_1)\int_0^\infty h(s)\,ds\Big)\Omega \otimes\cdots\otimes \exp\Big(X\Big(\frac{1-e^{-2h_n}}{2}\,1_{[s_n,t_n)}\Big) - (t_n-s_n)\int_0^\infty h(s)\,ds\Big)\Omega$$
$$= \exp\Big(X\Big(\frac{1-e^{-2h}}{2}\Big) - \int_0^\infty h(s)\,ds\Big)\Omega,$$
i.e., X^Y = X_{φ′,β′} with
$$\varphi'(t) = \varphi(t)\cosh\big(2h(t)\big) + \beta(t)\sinh\big(2h(t)\big), \qquad \beta'(t) = \gamma\big(\beta(t),\ 2h(t)\big).$$
Notes
The Girsanov formula for Brownian motion and gamma process appeared
first in the context of factorisable representations of current groups [114], cf.
[111, 112] for the gamma process. The quasi-invariance results of Section 10.2
for the Poisson, gamma, and Meixner processes have been proved for finite
joint distributions. They can be extended to the distribution of the processes
using continuity arguments for the states and endomorphisms on our operator
algebras, or by the use of standard tightness arguments coming from classical
probability. The general idea also applies to classical processes obtained by a
different choice of the commutative subalgebra, cf. e.g., [18]. The classical Girsanov theorem was used by Bismut [22] to propose a simpler approach to the Malliavin calculus: differentiating related quasi-invariance formulas yields integration by parts formulas for diffusion processes, which were obtained by Malliavin in a different way.
Exercises
Exercise 10.1 Girsanov theorem for gamma random variables. Take β = 1 in the framework of Proposition 10.3.1. Show that we have
$$G(x,t) = \exp\Big(-\frac12\big(x(e^{-t}-1) + ct\big)\Big),$$
and that this recovers the change of variable identity
$$\mathrm{IE}\big[g(e^t Z)\big] = \mathrm{IE}\Big[g(Z)\exp\big(Z(1-e^{-t}) - ct\big)\Big]$$
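The identity can be checked directly: substituting z = e^t u in the right-hand side integral against the gamma density z^{c−1}e^{−z}/Γ(c) reproduces the left-hand side. A numerical sketch of this check (test function and helper names are ours):

```python
import math

def gamma_density(z, c):
    return z ** (c - 1) * math.exp(-z) / math.gamma(c)

def integrate(f, a, b, n=200000):
    # midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

c, t = 2.0, 0.5
g = lambda x: 1.0 / (1.0 + x)  # an arbitrary bounded test function

lhs = integrate(lambda z: g(math.exp(t) * z) * gamma_density(z, c), 0.0, 80.0)
rhs = integrate(lambda z: g(z) * math.exp(z * (1 - math.exp(-t)) - c * t)
                * gamma_density(z, c), 0.0, 80.0)
assert abs(lhs - rhs) < 1e-5
```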
$$\big\langle \mathrm{Ad}^*_g\,\xi,\ x\big\rangle_{\mathcal G^*,\mathcal G} = \big\langle \xi,\ \mathrm{Ad}_{g^{-1}}\,x\big\rangle_{\mathcal G^*,\mathcal G}, \qquad x\in\mathcal G.$$
We also let $\widetilde{\mathrm{Ad}}_g$, g ∈ G, be defined for f : 𝒢* → C as
$$\widetilde{\mathrm{Ad}}_g f = f\circ \mathrm{Ad}_{g^{-1}},$$
$$U(g)^*\,C^{-1}\,U(g) = \frac{C^{-1}}{\sqrt{(g^{-1})}}, \qquad g\in G,$$
By duality we have
$$\begin{aligned}
\big\langle U(g)O(f)U(g)^*\,\big|\,\rho\big\rangle_{B_2(h)} &= \mathrm{Tr}\big[(U(g)O(f)U(g)^*)^*\rho\big]\\
&= \mathrm{Tr}\big[U(g)O(f)^*U(g)^*\rho\big]\\
&= \mathrm{Tr}\big[O(f)^*U(g)^*\rho\,U(g)\big]\\
&= \big\langle O(f)\,\big|\,U(g)^*\rho\,U(g)\big\rangle_{B_2(h)}\\
&= \big\langle f\,\big|\,W_{U(g)^*\rho U(g)}\big\rangle_{L^2_{\mathbb C}(\mathcal G^*;\,d\xi/\sigma(\xi))}\\
&= \big\langle f\,\big|\,W_\rho\circ\widetilde{\mathrm{Ad}}_g\big\rangle_{L^2_{\mathbb C}(\mathcal G^*;\,d\xi/\sigma(\xi))}\\
&= \big\langle f\circ\mathrm{Ad}_{g^{-1}}\,\big|\,W_\rho\big\rangle_{L^2_{\mathbb C}(\mathcal G^*;\,d\xi/\sigma(\xi))}\\
&= \big\langle O\big(f\circ\mathrm{Ad}_{g^{-1}}\big)\,\big|\,\rho\big\rangle_{B_2(h)},
\end{aligned}$$
which implies
$$U(g)\,O(f)\,U(g)^* = O\big(\widetilde{\mathrm{Ad}}_g f\big),$$
hence B = 0.
The following is the affine algebra analogue of the integration by parts formula
(2.1).
Proposition 11.2.3 For any x = (x₁, x₂) ∈ R² and f ∈ Dom O, we have
$$[x_1 U(X_1) + x_2 U(X_2),\ O(f)] = O\big(x_1\xi_2\partial_1 f(\xi_1,\xi_2) - x_2\xi_2\partial_2 f(\xi_1,\xi_2)\big).$$
Proof: This is a consequence of the covariance property, since from (7.8) the co-adjoint action is represented by the matrix
$$\begin{pmatrix} 1 & ba^{-1}\\ 0 & a^{-1}\end{pmatrix},$$
i.e.,
$$\widetilde{\mathrm{Ad}}_g f(\xi_1,\xi_2) = f\circ\mathrm{Ad}_{g^{-1}}(\xi_1,\xi_2) = f\big(\xi_1 + ba^{-1}\xi_2,\ a^{-1}\xi_2\big),$$
hence
$$\widetilde{\mathrm{ad}}_x f(\xi_1,\xi_2) = x_1\xi_2\,\partial_1 f(\xi_1,\xi_2) - x_2\xi_2\,\partial_2 f(\xi_1,\xi_2).$$
and
$$\mathrm{IE}\big[[iQ, F]\big] = \langle\Omega, iQF\Omega\rangle_h - \langle\Omega, iFQ\Omega\rangle_h = \langle -iQ\Omega, F\Omega\rangle_h + \langle\Omega, FP\Omega\rangle_h = \langle P\Omega, F\Omega\rangle_h + \langle\Omega, FP\Omega\rangle_h = \mathrm{IE}\big[\{P, F\}\big],$$
and we note that IE[[M, F]] = 0.
In the sequel we fix a value of α ∈ R.
Definition 11.2.5 For any x = (x₁, x₂) ∈ R² and F ∈ S_h, let
$$\delta(F\otimes x) := \frac{x_1}{2}\{Q + \alpha(M-\beta),\ F\} + \frac{x_2}{2}\{P,\ F\} - D_x F.$$
Note also that
$$\begin{aligned}
\delta(F\otimes x) &= \frac{1}{2}\big(x_1(Q+iP+\alpha(M-\beta)) + x_2(P - i(Q+\kappa M))\big)F\\
&\quad + F\,\frac{1}{2}\big(x_1(Q-iP+\alpha(M-\beta)) + x_2(P + i(Q+\kappa M))\big)\\
&= x_1(B^+F + FB^-) - ix_2(B^+F + FB^-) + \frac{x_1}{2}\alpha\{M-\beta, F\} - \frac{i}{2}x_2\kappa[M, F]\\
&= (x_1 - ix_2)(B^+F + FB^-) + \frac{x_1}{2}\alpha\{M-\beta, F\} - \frac{i}{2}x_2\kappa[M, F].
\end{aligned}$$
The following lemma shows that the divergence operator has expectation zero.
Lemma 11.2.6 For any x = (x₁, x₂) ∈ R² we have
$$\mathrm{IE}\big[\delta(F\otimes x)\big] = 0, \qquad F\in\mathcal S_h.$$
Proof: It suffices to apply Lemma 11.2.4 and to note that ⟨Ω, MΩ⟩_h = β.
$$\begin{aligned}
\mathrm{IE}[U\,\delta(F\otimes x)\,V] &= \frac12\,\mathrm{IE}\big[U\big(\{x_1(Q+\alpha(M-\beta)) + x_2P,\ F\} + ix_1[P,F] - ix_2[Q+\kappa M, F]\big)V\big]\\
&= \frac12\,\mathrm{IE}\big[\{x_1(Q+\alpha(M-\beta)) + x_2P,\ UFV\} + ix_1U[P,F]V - ix_2U[Q+\kappa M,F]V\big]\\
&= \frac12\,\mathrm{IE}\big[\{x_1(Q+\alpha(M-\beta)) + x_2P,\ UFV\} + ix_1[P,UFV] - ix_1[P,U]FV\big]\\
&\quad + \mathrm{IE}\big[-ix_1UF[P,V] - ix_2[Q+\kappa M, UFV] + ix_2[Q+\kappa M,U]FV + ix_2UF[Q+\kappa M,V]\big]\\
&= \mathrm{IE}[\delta(UFV\otimes x)] + \frac12\,\mathrm{IE}\big[-ix_1[P,U]FV - ix_1UF[P,V] + ix_2[Q+\kappa M,U]FV + ix_2UF[Q+\kappa M,V]\big]\\
&= \mathrm{IE}\big[U\,\overleftrightarrow{D}F\,x\,V\big].
\end{aligned}$$
$$D_x\,\delta(F\otimes y) - \delta(D_xF\otimes y) = \frac{y_1 - iy_2}{2}\big(x_1\{M,F\} + ix_2[M,F]\big) + \alpha\,\frac{y_1}{2}\big(x_1\{Q,F\} + x_2\{P,F\}\big), \qquad F\in\mathcal S_h.$$
Proof: We have
$$\begin{aligned}
D_x\,\delta(F\otimes y) &= -\frac{i}{2}x_1\big[P,\ \delta(F\otimes y)\big] + \frac{i}{2}x_2\big[Q+\kappa M,\ \delta(F\otimes y)\big]\\
&= -\frac{i}{2}x_1\Big[P,\ y_1(B^+F+FB^-) - iy_2(B^+F+FB^-) + \frac{y_1}{2}\alpha\{M-\beta,F\}\Big]\\
&\quad + \frac{i}{2}x_2\Big[Q+\kappa M,\ y_1(B^+F+FB^-) - iy_2(B^+F+FB^-) + \frac{y_1}{2}\alpha\{M-\beta,F\}\Big]\\
&= \delta(D_xF\otimes y) - \frac{i}{2}x_1\Big(y_1[P,B^+]F + y_1F[P,B^-] - iy_2[P,B^+]F - iy_2F[P,B^-]\\
&\qquad + \frac{y_1}{2}\alpha[P,M]F + \frac{y_1}{2}\alpha F[P,M]\Big) + \frac{i}{2}x_2\Big(y_1[Q+\kappa M,B^+]F + y_1F[Q+\kappa M,B^-]\\
&\qquad - iy_2[Q+\kappa M,B^+]F - iy_2F[Q+\kappa M,B^-] + \frac{y_1}{2}\alpha[Q,M]F + \frac{y_1}{2}\alpha F[Q,M]\Big)\\
&= \delta(D_xF\otimes y) - \frac{i}{2}x_1\Big(y_1\{iM,F\} - iy_2\{iM,F\} + \frac{y_1}{2}\alpha\{2iQ,F\}\Big)\\
&\quad + \frac{i}{2}x_2\big(y_1[M,F] - iy_2[M,F] + iy_1\alpha\{P,F\}\big)\\
&= \delta(D_xF\otimes y) + \frac12 x_1y_1\{M+\alpha Q,F\} + \frac{i}{2}x_2y_1[M,F] + \frac12 x_2y_1\alpha\{P,F\}\\
&\quad - \frac{i}{2}x_1y_2\{M,F\} + \frac12 x_2y_2[M,F].
\end{aligned}$$
Similarly we have
$$\begin{aligned}
\delta(FG\otimes x) &= \frac{x_1}{2}(Q+iP+\alpha(M-\beta))FG + \frac{x_1}{2}FG(Q-iP+\alpha(M-\beta))\\
&\quad + \frac{x_2}{2}(P-iQ)FG + \frac{x_2}{2}FG(P+iQ)\\
&= \frac{x_1}{2}(Q+iP+\alpha(M-\beta))FG + \frac{x_1}{2}F(Q-iP+\alpha M-\alpha/2)G\\
&\quad + \frac{x_2}{2}(P-iQ)FG + \frac{x_2}{2}F(P+iQ)G\\
&\quad + \frac{i}{2}x_1F[P,G] - \frac{i}{2}x_2F[Q,G] - \frac{x_1}{2}F[Q+\alpha M, G] - \frac{x_2}{2}F[P,G].
\end{aligned}$$
Example 11.3.2
$$[Q(k_1) - P(k_2),\ P_\psi]\,\varphi = \langle\psi,\varphi\rangle\,(Q(k_1)-P(k_2))\psi - \big\langle\psi,\ (Q(k_1)-P(k_2))\varphi\big\rangle\,\psi = \langle\psi,\varphi\rangle\,(Q(k_1)-P(k_2))\psi - \big\langle (Q(k_1)-P(k_2))\psi,\ \varphi\big\rangle\,\psi.$$
Proof: Let (Bı)ı∈I ⊆ Dom D_k ⊆ B(h) be any net such that Bı → 0 and D_kBı → β for some β ∈ B(h) in the weak topology. To show that D_k is closable, we have to show that this implies β = 0. Evaluating β between two exponential vectors E(h₁), E(h₂), h₁, h₂ ∈ h_C, we get
$$\begin{aligned}
\langle \mathcal E(h_1),\ \beta\,\mathcal E(h_2)\rangle &= \lim_{\imath\in I}\langle \mathcal E(h_1),\ D_kB_\imath\,\mathcal E(h_2)\rangle\\
&= \frac{i}{2}\lim_{\imath\in I}\big\langle (Q(k_1)-P(k_2))\mathcal E(h_1),\ B_\imath\,\mathcal E(h_2)\big\rangle - \frac{i}{2}\lim_{\imath\in I}\big\langle \mathcal E(h_1),\ B_\imath\,(Q(k_1)-P(k_2))\mathcal E(h_2)\big\rangle\\
&= 0,
\end{aligned}$$
which implies β = 0, as desired.
Note that S is weakly dense in B(h), i.e., its weak closure satisfies S̄ = B(h), since S contains the Weyl operators U(h₁, h₂) with h₁, h₂ ∈ h. Next, we define
$$D : \mathcal S \longrightarrow B(h)\otimes h_{\mathbb C}\otimes\mathbb C^2,$$
where the tensor product is the algebraic tensor product over C, by setting
$$D\,O_h(\varphi) = \begin{pmatrix} O_h\big(\frac{\partial\varphi}{\partial x}\big)\otimes h_1\\[4pt] O_h\big(\frac{\partial\varphi}{\partial y}\big)\otimes h_2\end{pmatrix}$$
and extending it as a derivation with respect to the B(h)-bimodule structure of B(h) ⊗ h_C ⊗ C² defined by
$$O\cdot\begin{pmatrix}O_1\otimes k_1\\ O_2\otimes k_2\end{pmatrix} = \begin{pmatrix}OO_1\otimes k_1\\ OO_2\otimes k_2\end{pmatrix}, \qquad \begin{pmatrix}O_1\otimes k_1\\ O_2\otimes k_2\end{pmatrix}\cdot O = \begin{pmatrix}O_1O\otimes k_1\\ O_2O\otimes k_2\end{pmatrix}.$$
This turns B(h) ⊗ h_C ⊗ C² into a pre-Hilbert module over B(h), and by mapping O ⊗ k ∈ B(h) ⊗ h_C ⊗ C² to the linear map
$$h\ni v\longmapsto Ov\otimes k\in h\otimes h_{\mathbb C}\otimes\mathbb C^2,$$
and
$$h_{\mathbb C}\ni k\longmapsto \mathrm{id}_h\otimes k\in\mathcal M.$$
$$D_k O = \langle k,\ DO\rangle = \langle DO,\ k\rangle.$$
Proof: For h ∈ h ⊗ R² and φ ∈ Dom O_h such that also ∂φ/∂x, ∂φ/∂y ∈ Dom O_h, we get
$$\begin{aligned}
\langle k,\ D\,O_h(\varphi)\rangle &= \left\langle \begin{pmatrix}k_1\\ k_2\end{pmatrix},\ \begin{pmatrix} O_h\big(\frac{\partial\varphi}{\partial x}\big)\otimes h_1\\[4pt] O_h\big(\frac{\partial\varphi}{\partial y}\big)\otimes h_2\end{pmatrix}\right\rangle\\
&= O_h\Big(\langle k_1, h_1\rangle\frac{\partial\varphi}{\partial x} + \langle k_2, h_2\rangle\frac{\partial\varphi}{\partial y}\Big)\\
&= \frac{i}{2}\big[Q(k_1) - P(k_2),\ O_h(\varphi)\big] = D_k O,
\end{aligned}$$
where we used Proposition 11.3.4. The first equality of the proposition now follows, since both
$$O\longmapsto D_kO = \frac{i}{2}\big[Q(k_1)-P(k_2),\ O\big] \qquad\text{and}\qquad O\longmapsto\langle k,\ DO\rangle$$
are derivation operators. The second equality follows immediately.
$$Q(h)\,\Omega = h = iP(h)\,\Omega, \qquad h\in h_{\mathbb C},$$
which implies
$$\begin{aligned}
\mathrm{IE}\big[\langle k,\ DO\rangle\big] &= \frac{i}{2}\Big(\big\langle (Q(k_1)-P(k_2))\Omega,\ O\Omega\big\rangle - \big\langle \Omega,\ O\,(Q(k_1)-P(k_2))\Omega\big\rangle\Big)\\
&= \frac{i}{2}\big(\langle k_1+ik_2,\ O\Omega\rangle - \langle\Omega,\ O(k_1+ik_2)\rangle\big)\\
&= \frac12\Big(\big\langle (P(k_1)+Q(k_2))\Omega,\ O\Omega\big\rangle + \big\langle\Omega,\ O\,(P(k_1)+Q(k_2))\Omega\big\rangle\Big)\\
&= \frac12\,\mathrm{IE}\big[\{P(k_1)+Q(k_2),\ O\}\big],
\end{aligned}$$
where the products are ordered such that the indices increase from the left to
the right.
Proof: This is obvious, since O ↦ ⟨k, DO⟩ is a derivation.
11.3.3 Closability
Corollary 11.3.8 can be used for n = 3 to show the closability of D from B(h) to M. This also implies that D is closable in stronger topologies, such as, e.g., the norm topology and the strong topology. We will denote the closure of D again by the same symbol.
Corollary 11.3.9 The derivation operator D is a closable operator from B(h)
to the B(h)-Hilbert module M = B(h, h ⊗ hC ⊗ C2 ) with respect to the weak
topologies.
Proof: We have to show that for any net (Aı)ı∈I in S with Aı → 0 and DAı → α ∈ M, we get α = 0. Let f, g ∈ h_C. Set
$$f_1 = \frac{f+\bar f}{2}, \quad f_2 = \frac{f-\bar f}{2i}, \quad g_1 = \frac{g+\bar g}{2}, \quad\text{and}\quad g_2 = \frac{g-\bar g}{2i}.$$
Then we have
$$U(f_1, f_2)\,\Omega = e^{-\|f\|^2/2}\,\mathcal E(f) \qquad\text{and}\qquad U(g_1, g_2)\,\Omega = e^{-\|g\|^2/2}\,\mathcal E(g).$$
Thus we get
$$\begin{aligned}
e^{(\|f\|^2+\|g\|^2)/2}\,\langle \mathcal E(f)\otimes h,\ \alpha\,\mathcal E(g)\rangle &= \lim_{\imath\in I}\mathrm{IE}\big[\langle U(-f_1,-f_2)h,\ DA_\imath\rangle\,U(g_1,g_2)\big]\\
&= \lim_{\imath\in I}\mathrm{IE}\Big[\frac12\big\{P(h_1)+Q(h_2),\ U(-f_1,-f_2)A_\imath\,U(g_1,g_2)\big\}\\
&\qquad - \langle h,\ DU(-f_1,-f_2)\rangle A_\imath\,U(g_1,g_2) - U(-f_1,-f_2)A_\imath\,\langle h,\ DU(g_1,g_2)\rangle\Big]\\
&= \lim_{\imath\in I}\big(\langle\psi_1, A_\imath\psi_2\rangle + \langle\psi_3, A_\imath\psi_4\rangle - \langle\psi_5, A_\imath\psi_6\rangle - \langle\psi_7, A_\imath\psi_8\rangle\big)\\
&= 0
\end{aligned}$$
DO∗ = DO.
Finally, we show how the operator D can be iterated. Given h a complex Hilbert
space we can define the derivation operator
D : S ⊗ h −→ B(h) ⊗ hC ⊗ C2 ⊗ h
by setting
D(O ⊗ h) = DO ⊗ h, O ∈ S, h ∈ h.
$$\|O\|_n^2 := \|O^*O\| + \sum_{j=1}^n \big\|\langle D^jO,\ D^jO\rangle\big\|,$$
and
$$\|O\|_{\psi,n}^2 := \|O\psi\|^2 + \sum_{j=1}^n \big\|\langle\psi,\ \langle D^jO,\ D^jO\rangle\,\psi\rangle\big\|,$$
$$\begin{cases}
O\,\overleftarrow{D}(uX) = \big(O\,\overleftarrow{D}u\big)X,\\[4pt]
(O_1O_2)\,\overleftarrow{D}u = O_1\big(\overleftarrow{D}O_2\,u\big) + O_1O_2\big(\overleftarrow{D}u\big).
\end{cases}$$
Proof: These properties can be deduced easily from the definition of the gradient and the properties of the derivation operator D and the inner product ⟨·,·⟩.
For k ∈ h_C ⊗ C² we have
$$O_1\,\overleftrightarrow{D}_{\mathrm{id}_h\otimes k}\,O_2 = D_k(O_1O_2).$$
The algebra B(h) of bounded operators on the symmetric Fock space h and
the Hilbert module M are not Hilbert spaces with respect to the expectation in
the vacuum vector . Therefore, we cannot define the divergence operator or
Skorohod integral δ as the adjoint of the derivation D. It might be tempting to
try to define δX as an operator such that the condition
try to define δX as an operator such that the condition
$$\mathrm{IE}\big[(\delta X)B\big] = \mathrm{IE}\big[\big(\overrightarrow{D}X\big)B\big] \tag{11.1}$$
is satisfied for all B ∈ Dom $\overrightarrow{D}X$. However, this is not sufficient to characterise δX. In addition, the following Proposition 11.3.13 shows that this is not possible without imposing additional commutativity conditions; see also Proposition 11.3.15.
$$\mathrm{IE}[MB] = \mathrm{IE}\big[D_kB\big]$$
Proof: We assume that such an operator M exists and show that this leads to a contradiction. Letting B ∈ B(h) be the operator defined by
$$h\ni\psi\longmapsto B\psi := \langle k_1+ik_2,\ \psi\rangle\,\Omega,$$
we get
$$0 = \langle\Omega,\ MB\,\Omega\rangle = \mathrm{IE}[MB] = \mathrm{IE}[D_kB] = \langle\Omega,\ (D_kB)\Omega\rangle = -\frac{i}{2}\langle k_1+ik_2,\ k_1+ik_2\rangle,$$
which is clearly impossible.
Given A, B ∈ B(h) and u ∈ S_h of the form u = Σ_{j=1}^n F_j ⊗ h^{(j)}, we define $A\,\overleftrightarrow{\delta}_u\,B$ by
$$A\,\overleftrightarrow{\delta}_u\,B := \sum_{j=1}^n \frac12\big\{P\big(h_1^{(j)}\big) + Q\big(h_2^{(j)}\big),\ AF_jB\big\} - \sum_{j=1}^n A\,\big(D_{h^{(j)}}F_j\big)\,B,$$
for u = Σ_{j=1}^n F_j ⊗ h^{(j)} ∈ S_{h,δ}; then we have
$$\mathrm{IE}\big[A\,\delta(u)\,B\big] = \mathrm{IE}\big[A\,\overleftrightarrow{D}u\,B\big].$$
Remark 11.3.16 Note that δ : S_{h,δ} → B(h) is the only linear map with this property, since for one single element h ∈ h_C ⊗ C² the sets
$$\big\{A^*\Omega : A\in\mathrm{Dom}\,D\cap\{P(h_1)+Q(h_2)\}\big\} \qquad\text{and}\qquad \big\{B\Omega : B\in\mathrm{Dom}\,D\cap\{P(h_1)+Q(h_2)\}\big\}$$
= IE[Aδ(u)B].
We now give an explicit formula for the matrix elements between two
exponential vectors of the divergence of a smooth elementary element u ∈
Sh,δ . This is the analogue of the first fundamental lemma in the Hudson–
Parthasarathy calculus, see Theorem 5.3.2 or [87, Proposition 25.1].
Theorem 11.3.17 Let u ∈ S_{h,δ}. Then we have the formula
$$\langle \mathcal E(k_1),\ \delta(u)\,\mathcal E(k_2)\rangle = \left\langle \mathcal E(k_1)\otimes\begin{pmatrix} ik_1 - ik_2\\ k_1+k_2\end{pmatrix},\ u\,\mathcal E(k_2)\right\rangle$$
for the evaluation of the divergence δ(u) of u between two exponential vectors E(k₁), E(k₂), for k₁, k₂ ∈ h_C.
Remark 11.3.18 This suggests extending the definition of δ in the following way: set
$$\mathrm{Dom}\,\delta = \left\{u\in\mathcal M : \exists M\in B(h)\ \text{such that}\ \forall k_1,k_2\in h_{\mathbb C},\ \langle \mathcal E(k_1),\ M\,\mathcal E(k_2)\rangle = \left\langle \mathcal E(k_1)\otimes\begin{pmatrix}ik_1-ik_2\\ k_1+k_2\end{pmatrix},\ u\,\mathcal E(k_2)\right\rangle\right\} \tag{11.3}$$
and define δ(u) for u ∈ Dom δ to be the unique operator M that satisfies the condition in Equation (11.3).
Proof: Let u = Σ_{j=1}^n F_j ⊗ h^{(j)}. Recalling the definition of D_h, we get
$$\begin{aligned}
\langle \mathcal E(k_1),\ \delta(u)\,\mathcal E(k_2)\rangle &= \sum_{j=1}^n\Big(\big\langle k_1,\ h_2^{(j)} - ih_1^{(j)}\big\rangle + \big\langle k_2,\ h_2^{(j)} + ih_1^{(j)}\big\rangle\Big)\,\langle \mathcal E(k_1),\ F_j\,\mathcal E(k_2)\rangle\\
&= \left\langle \mathcal E(k_1)\otimes\begin{pmatrix}ik_1-ik_2\\ k_1+k_2\end{pmatrix},\ u\,\mathcal E(k_2)\right\rangle.
\end{aligned}$$
$$\langle \mathcal E(k_1),\ \beta\,\mathcal E(k_2)\rangle = \lim_{\imath\in I}\langle \mathcal E(k_1),\ \delta(u_\imath)\,\mathcal E(k_2)\rangle = \lim_{\imath\in I}\left\langle \mathcal E(k_1)\otimes\begin{pmatrix}ik_1-ik_2\\ k_1+k_2\end{pmatrix},\ u_\imath\,\mathcal E(k_2)\right\rangle = 0,$$
and therefore
$$D_h\,\delta(u) = \sum_{j=1}^n\Big(Y(X_j-Y_j)F_j + YF_j(X_j+Y_j) - (X_j-Y_j)F_jY - F_j(X_j+Y_j)Y\Big).$$
On the other hand, we have $D_h(u) = \sum_{j=1}^n (YF_j - F_jY)\otimes h^{(j)}$, and
$$\delta\big(D_h(u)\big) = \sum_{j=1}^n\Big((X_j-Y_j)YF_j - (X_j-Y_j)F_jY + YF_j(X_j+Y_j) - F_jY(X_j+Y_j)\Big).$$
$$= F\,\delta(u) - F\,\overleftarrow{D}u + \sum_{j=1}^n [X_j, F]\,F_j,$$
where we used that $[X_j, F] = \Big\langle i\begin{pmatrix}-h_2\\ h_1\end{pmatrix},\ DF\Big\rangle$ defines a bounded operator, since F ∈ S ⊆ Dom D. Equation (11.5a) can be shown similarly.
then we have
$$\delta(Fu) = F\,\delta(u) - F\,\overleftarrow{D}u, \qquad\text{and}\qquad \delta(uF) = \delta(u)\,F - \overrightarrow{D}u\,F.$$
This implies that we also have an analogous result for the divergence.
$$\Big(\int_{\mathbb R_+} F_t\,da_t^-\Big)^* \supseteq \int_{\mathbb R_+} F_t^*\,da_t^+, \qquad\text{and}\qquad \Big(\int_{\mathbb R_+} F_t\,da_t^+\Big)^* \supseteq \int_{\mathbb R_+} F_t^*\,da_t^-.$$
$$\langle \tilde D\,\mathcal E(k_1)\,\cdot\,,\ F_\cdot\,\mathcal E(k_2)\rangle = \int_{\mathbb R_+} k_1(t)\,\langle \mathcal E(k_1),\ F_t\,\mathcal E(k_2)\rangle\,dt.$$
Let (T, B, μ) = (R₊, B(R₊), dx), i.e., the positive half-line with the Lebesgue measure, and let X = (X¹, X²) ∈ Dom δ. Then we have
$$\int_{\mathbb R_+} X_t^1\,dP(t) + \int_{\mathbb R_+} X_t^2\,dQ(t) = \int_{\mathbb R_+} (X_t^2 - iX_t^1)\,da_t^+ + \int_{\mathbb R_+} (X_t^2 + iX_t^1)\,da_t^-,$$
and
$$\delta(X) = \int_T X_t^1\,dP(t) + \int_T X_t^2\,dQ(t)$$
coincides with the Hudson–Parthasarathy quantum stochastic integral defined in [54].
coincides with the Hudson–Parthasarathy quantum stochastic integral defined
in [54].
Notes
Another definition of D and δ on noncommutative operator algebra has been
considered by Biane and Speicher in the free case [19], where the operator
algebra is isomorphic to the full Fock space. In [74], Mai, Speicher, and Weber
study the regularity of distributions in free probability. Due to the lack of
commutativity, it seems impossible in their approach to use an integration
by parts formula, so that they were compelled to find alternative methods. It
would be interesting to apply these methods to quantum stochastic differential
equations.
Our approach to quantum white noise calculus is too restrictive so far
since we require the derivatives DX to be bounded operators. Dealing with
unbounded operators is necessary for applications of quantum Malliavin
calculus to more realistic physical models. Ji and Obata [59, 60] have defined
a creation-derivative and an annihilation-derivative in the setting of quantum
white noise theory. Up to a basis change (they differentiate with respect to a_t⁻ and a_t⁺, while we differentiate with respect to P and Q), these are the same as our derivation operator. But, working in the setting of white noise theory, they can differentiate much more general (i.e., unbounded) operators.
Exercises
Exercise 11.1 In the framework of Proposition 12.1.4, assume in addition that X ∈ Dom D_k^n, (D_kX)^{-1} ∈ Dom D_k^n, and
$$\omega\in\bigcap_{1\le\kappa\le n}\mathrm{Dom}\big(Q(k_1)-P(k_2)\big)^\kappa\ \cap\ \bigcap_{1\le\kappa\le n}\mathrm{Dom}\big(Q(k_1)-P(k_2)\big)^\kappa.$$
Show that the density of the distribution $\mu_{X,\cdot}$ of X ∈ B(h) in the state is n − 1 times differentiable for all n ≥ 2.
12 Smoothness of densities on real Lie algebras
$$(X) = \langle\omega,\ X\omega\rangle, \qquad X\in B(h).$$
and
$$\langle h_1, k_1\rangle \ne 0 \qquad\text{and}\qquad \langle h_2, k_2\rangle \ne 0,$$
then we have $w_{h,\cdot}\in\bigcap_{2\le p\le\infty} H^{p,\kappa}(\mathbb R^2)$.
Proof: We will show the result for κ = 1; the general case can be shown similarly (see also the proof of Theorem 12.1.2). Let φ ∈ S(R) be a Schwartz function, and let p ∈ [1, 2]. Then we have
$$\Big|\int \frac{\partial\varphi}{\partial x}\,dW_{h,\cdot}\Big| = \Big|\Big\langle\omega,\ O_h\Big(\frac{\partial\varphi}{\partial x}\Big)\omega\Big\rangle\Big| = \frac{1}{2|\langle k_1, h_1\rangle|}\Big|\big\langle\omega,\ \big[Q(k_1),\ O_h(\varphi)\big]\,\omega\big\rangle\Big| \le \frac{C_{h,p}\,\big(\|Q(k_1)\omega\| + \|Q(k_1)\omega\|\big)}{2|\langle k_1,h_1\rangle|}\,\|\varphi\|_p.$$
Similarly, we get
$$\Big|\int\frac{\partial\varphi}{\partial y}\,dW_{h,\cdot}\Big| \le \frac{C_{h,p}\,\big(\|P(k_2)\omega\| + \|P(k_2)\omega\|\big)}{2|\langle k_2,h_2\rangle|}\,\|\varphi\|_p,$$
and together these two inequalities imply $w_{h,\cdot}\in H^{p',1}(\mathbb R^2)$ for p′ = p/(p − 1).
We will give a more general result of this type in Theorem 12.1.2 below. Namely, we show that the derivation operator can be used to obtain sufficient conditions for the regularity of the joint Wigner densities of noncommuting random variables; Theorem 12.1.2 generalises Proposition 12.1.1 to arbitrary states.
$$\det\begin{pmatrix}\langle h_1,k_1\rangle & \langle h_2,k_2\rangle\\ \langle h_1,\ell_1\rangle & \langle h_2,\ell_2\rangle\end{pmatrix} \ne 0,$$
and $\rho\in\bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,D_k^{\kappa_1}D_\ell^{\kappa_2}$, and
$$\mathrm{tr}\,\big|D_k^{\kappa_1}D_\ell^{\kappa_2}\rho\big| < \infty, \qquad \kappa_1+\kappa_2\le\kappa,$$
then we have $w_{h,\cdot}\in\bigcap_{2\le p\le\infty}H^{p,\kappa}(\mathbb R^2)$.
The absolute value of a normal operator is well-defined via functional calculus.
For a non-normal operator X we set |X| = (X ∗ X)1/2 . The square root is well-
defined via functional calculus, since X ∗ X is positive and therefore normal.
Proof: Let
$$A := \begin{pmatrix}\langle h_1,k_1\rangle & \langle h_2,k_2\rangle\\ \langle h_1,\ell_1\rangle & \langle h_2,\ell_2\rangle\end{pmatrix} \qquad\text{and}\qquad \begin{pmatrix}X_1\\ X_2\end{pmatrix} := \frac{i}{2}\,A^{-1}\begin{pmatrix}Q(k_1)-P(k_2)\\ Q(\ell_1)-P(\ell_2)\end{pmatrix};$$
then we have
$$[X_1,\ O_h(\varphi)] = \frac{1}{\det A}\Big(\langle h_2,\ell_2\rangle\,D_kO_h(\varphi) - \langle h_2,k_2\rangle\,D_\ell O_h(\varphi)\Big) = O_h\Big(\frac{\partial\varphi}{\partial x}\Big)$$
and
$$[X_2,\ O_h(\varphi)] = \frac{1}{\det A}\Big(-\langle h_1,\ell_1\rangle\,D_kO_h(\varphi) + \langle h_1,k_1\rangle\,D_\ell O_h(\varphi)\Big) = O_h\Big(\frac{\partial\varphi}{\partial y}\Big),$$
for all Schwartz functions φ ∈ S(R). Therefore, we have
$$\begin{aligned}
\Big|\int\frac{\partial^{\kappa_1+\kappa_2}\varphi}{\partial x^{\kappa_1}\partial y^{\kappa_2}}\,dW_{h,\cdot}\Big| &= \Big|\mathrm{tr}\Big(\rho\,O_h\Big(\frac{\partial^{\kappa_1+\kappa_2}\varphi}{\partial x^{\kappa_1}\partial y^{\kappa_2}}\Big)\Big)\Big|\\
&= \Big|\mathrm{tr}\Big(\rho\,\big[\underbrace{X_1,\dots[X_1}_{\kappa_1\ \text{times}},\big[\underbrace{X_2,\dots[X_2}_{\kappa_2\ \text{times}},\ O_h(\varphi)\big]\big]\big]\Big)\Big|\\
&= \Big|\mathrm{tr}\Big(\big[X_2,\dots[X_2,[X_1,\dots[X_1,\rho]]]\big]\,O_h(\varphi)\Big)\Big|\\
&\le C_{\rho,\kappa_1,\kappa_2}\,\|O_h(\varphi)\| \le C_{\rho,\kappa_1,\kappa_2}\,C_{h,p}\,\|\varphi\|_p,
\end{aligned}$$
for all p ∈ [1, 2], since $\rho\in\bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,D_k^{\kappa_1}D_\ell^{\kappa_2}$ and $\mathrm{tr}(|D_k^{\kappa_1}D_\ell^{\kappa_2}\rho|)<\infty$ for all κ₁ + κ₂ ≤ κ, and thus
$$C_{\rho,\kappa_1,\kappa_2} = \mathrm{tr}\,\big|[X_2,\dots[X_2,[X_1,\dots[X_1,\rho]]]]\big| < \infty.$$
But this implies that the density of $dW_{h,\cdot}$ is contained in the Sobolev spaces H^{p,κ}(R²) for all 2 ≤ p ≤ ∞.
$$T_t : h_{\mathbb C}\longrightarrow h_{\mathbb C}, \qquad T_t\,e_j = e^{-t\lambda_j}\, e_j, \quad j\in\mathbb N,\ t\in\mathbb R_+,$$
with generator $A = \sum_{j\in\mathbb N}\lambda_j P_j$. If the sequence increases fast enough to ensure that $\sum_{j=1}^\infty e^{-t\lambda_j}<\infty$, i.e., if tr T_t < ∞ for t > 0, then the second quantisation ρ_t = Γ(T_t) : h → h is a trace class operator with trace
$$Z_t = \mathrm{tr}\,\rho_t = \sum_{n\in\mathbb N_f^\infty}\langle e_n,\ \rho_t\, e_n\rangle,$$
where
$$e_n = e_1^{\circ n_1}\circ\cdots\circ e_r^{\circ n_r}, \qquad n = (n_1,\dots,n_r)\in\mathbb N_f^\infty,$$
and therefore $\rho_t\,a^\ell(e_j)$ defines a bounded operator with finite trace for all j, ℓ ∈ N and t > 0. Similarly, we get
$$\mathrm{tr}\,\big|a^\ell(e_j)\,\rho_t\big| < \infty, \qquad \mathrm{tr}\,\big|\rho_t\,a^+(e_j)^\ell\big| < \infty, \quad\text{etc.},$$
and
$$\mathrm{tr}\,\big|P^{\ell_1}(e_{j_1})\,Q^{\ell_2}(e_{j_2})\,\rho_t\big| < \infty, \qquad \mathrm{tr}\,\big|P^{\ell_1}(e_{j_1})\,\rho_t\,Q^{\ell_2}(e_{j_2})\big| < \infty,$$
for t > 0 and j₁, j₂, ℓ₁, ℓ₂ ∈ N.
and
$$\big|\langle\omega,\ p(X)\,D_k\big((D_kX)^{-1}\big)\,\omega\rangle\big| \le \big\|D_k\big((D_kX)^{-1}\big)\big\|\ \|p(X)\| \le C_2\sup_{x\in[-\|X\|,\|X\|]}|p(x)|,$$
for all polynomials p. But this implies that $\mu_{X,\cdot}$ admits a bounded density.
$$W_\rho(\xi) = \frac{|\xi_2|^{1/2}}{\sqrt{2\pi}}\int_{\mathbb R^2} e^{-i\xi_1x_1 - i\xi_2x_2}\,\mathrm{Tr}\big[e^{-x_1X_1-x_2X_2}\rho\,C^{-1}\big]\,e^{-x_1/2}\,\mathrm{sinch}\Big(\frac{x_1}{2}\Big)\,dx_1\,dx_2,$$
and for ρ = |φ⟩⟨ψ|,
$$W_{|\varphi\rangle\langle\psi|}(\xi) = \frac{|\xi_2|^{1/2}}{\sqrt{2\pi}}\int_{\mathbb R^2} e^{-i\xi_1x_1-i\xi_2x_2}\,\big\langle \hat U\big(e^{x_1X_1+x_2X_2}\big)C^{-1}\psi\,\big|\,\varphi\big\rangle_h\, e^{-x_1/2}\,\mathrm{sinch}\Big(\frac{x_1}{2}\Big)\,dx_1\,dx_2$$
$$= \frac{1}{2\pi}\int_{\mathbb R^3} e^{-i\xi_1x_1-i\xi_2x_2}\,\varphi(e^{-x_1}\tau)\,\psi(\tau)\,e^{-i\tau x_2}\,e^{-x_1/2}\,\mathrm{sinch}(x_1/2)\;e^{-(e^{-x_1}-1)|\tau|}\,e^{-\beta x_1/2}\,|\tau|^{\beta-1/2}\,e^{-x_1/2}\,\mathrm{sinch}(x_1/2)\,\frac{d\tau}{\Gamma(\beta)}\,dx_1\,dx_2$$
$$= \int_{\mathbb R} \varphi\Big(\frac{\xi_2\, e^{-x/2}}{\mathrm{sinch}\frac x2}\Big)\,\psi\Big(\frac{\xi_2\, e^{x/2}}{\mathrm{sinch}\frac x2}\Big)\ \frac{|\xi_2|\,e^{-ix\xi_1}}{\mathrm{sinch}\frac x2}\; e^{-|\xi_2|\frac{\cosh\frac x2}{\mathrm{sinch}\frac x2}}\,\Big(\frac{|\xi_2|}{\mathrm{sinch}\frac x2}\Big)^{\beta-1}\frac{dx}{\Gamma(\beta)}.$$
Note that W_ρ takes real values when ρ is self-adjoint. Next, we turn to proving the smoothness of the Wigner function W_{|φ⟩⟨ψ|}. Let now H^{1,2}_σ(R × (0, ∞))
$$1_{\mathbb R\times(0,\infty)}\,W_{|\varphi\rangle\langle\psi|}\in H^{1,2}_\sigma\big(\mathbb R\times(0,\infty)\big).$$
and for x₁, x₂ ∈ R:
$$\begin{aligned}
\Big|\int_{\mathbb R^2}\big(x_1\partial_1 f(\xi_1,\xi_2) + x_2\partial_2 f(\xi_1,\xi_2)\big)\,W_{|\varphi\rangle\langle\psi|}(\xi_1,\xi_2)\,d\xi_1\,d\xi_2\Big| &= 2\pi\,\big|\langle\varphi\,|\,O\big(x_1\xi_2\partial_1 f(\xi_1,\xi_2) - x_2\xi_2\partial_2 f(\xi_1,\xi_2)\big)\psi\rangle_h\big|\\
&= 2\pi\,\big|\langle\varphi\,|\,[x_1U(X_1)+x_2U(X_2),\ O(f)]\,\psi\rangle_h\big|\\
&\le \sqrt{2\pi}\,\|\varphi\|_h\,\big\|(x_1U(X_1)+x_2U(X_2))\psi\big\|\,\|f\|_{L^2(\mathcal G^*;\,d\xi_1 d\xi_2/|\xi_2|)}.
\end{aligned}$$
Note that the above result and the presence of σ(ξ₁, ξ₂) = 2π|ξ₂| are consistent with the integrability properties of the gamma law, i.e., if
$$f(\xi_1,\xi_2) = g(\xi_1)\,\gamma_\beta(\xi_2), \qquad \xi_1\in\mathbb R,\ \xi_2>0,\ g\ne 0,$$
then f ∈ H^{1,2}_σ(R × (0, ∞)) if and only if β > 0.
where H is a Hermitian operator, see, e.g., [87, Theorem 26.3]. The operators
Qt and Pt defined by
Combining these formulas, we get the following expressions for the derivatives of quantum stochastic integrals:
$$D_h M_t = \frac{i}{2}\big[a^-(h_1-ih_2) + a^+(h_1+ih_2),\ M_t\big] = \int_0^t D_hF_s\,da_s^- - \frac{i}{2}\int_0^t\big(h_1(s)+ih_2(s)\big)F_s\,ds,$$
and
$$D_h N_t = \frac{i}{2}\big[a^-(h_1-ih_2) + a^+(h_1+ih_2),\ N_t\big] = \int_0^t D_hG_s\,da_s^+ + \frac{i}{2}\int_0^t\big(h_1(s)-ih_2(s)\big)G_s\,ds.$$
Time integrals commute with the derivation operator, i.e., we have
$$D_h\int_0^t M_s\,ds = \int_0^t D_hM_s\,ds,$$
where
$$\tilde R_s = \frac{i}{2}\big(h_1(s)+ih_2(s)\big)R^* + \frac{i}{2}\big(h_1(s)-ih_2(s)\big)R.$$
Similarly, we have
$$\begin{aligned}
D_h U_t^* &= \int_0^t D_hU_s^*\Big(R^*\,da_s^- - R\,da_s^+ - \frac12 R^*R\,ds\Big)\\
&\quad - \frac{i}{2}\int_0^t U_s^*R^*\big(h_1(s)+ih_2(s)\big)\,ds - \frac{i}{2}\int_0^t U_s^*R\big(h_1(s)-ih_2(s)\big)\,ds\\
&= \int_0^t D_hU_s^*\Big(R^*\,da_s^- - R\,da_s^+ - \frac12 R^*R\,ds\Big) - \int_0^t U_s^*\tilde R_s\,ds.
\end{aligned}$$
$$\begin{aligned}
&\int_0^t U_s^*\Big(R^*XR - \frac12 R^*RX - \frac12 XR^*R\Big)U_s\,ds + \int_0^t \big([X,R]\,da_s^+ + [R^*,X]\,da_s^-\big)\,(D_hU_s)\\
&\qquad + \frac{i}{2}\int_0^t \big(h_1(s)-ih_2(s)\big)\,U_s^*[X,R]U_s\,ds - \frac{i}{2}\int_0^t \big(h_1(s)+ih_2(s)\big)\,U_s^*[R^*,X]U_s\,ds\\
&= \int_0^t D_h\big(j_s(L(X))\big)\,ds + \int_0^t D_h\big(j_s(R(X))\big)\,da_s^+ + \int_0^t D_h\big(j_s(R(X^*))^*\big)\,da_s^- - \int_0^t j_s\big([\tilde R_s,\ X]\big)\,ds,
\end{aligned}$$
i.e., the "flow" D_h ∘ j_t satisfies an equation similar to that of j_t, but with an additional (inhomogeneous) term $\int_0^t j_s([\tilde R_s, X])\,ds$. The map j_t is homomorphic, but D_h ∘ j_t will not be homomorphic in general.
$$+ \frac{i}{2}\int_0^t\Big(\big(h_1(s)-ih_2(s)\big)[Y_s,R] - \big(h_1(s)+ih_2(s)\big)[R^*,Y_s]\Big)\,ds.$$
The last term is $\int_0^t [\tilde R_s,\ Y_s]\,ds$, where
$$\tilde R_s = \frac{i}{2}h_1(s)\big(R-R^*\big) + \frac12 h_2(s)\big(R+R^*\big).$$
We see that Dh Yt satisfies an inhomogeneous quantum stochastic differential
equation, where the inhomogeneity is a function of Yt . The homogeneous part
is the same as for Yt . We try a variation of constants, i.e., we assume that the
solution has the form
$$D_h Y_t = U_t\, Z_t\, U_t^*,$$
since the solutions of the homogeneous equation are of the form U_tZU_t* (at least for initial conditions acting only on the initial space). For Z_t we make the Ansatz
$$Z_t = \int_0^t F_s\,da_s^+ + \int_0^t G_s\,da_s^- + \int_0^t H_s\,ds$$
with some adapted coefficients F_t, G_t, and H_t. Then the Itô formula yields
$$\begin{aligned}
D_hY_t &= \int_0^t\Big(R^*U_sZ_sU_s^*R - \frac12 R^*RU_sZ_sU_s^* - \frac12 U_sZ_sU_s^*R^*R\Big)\,ds\\
&\quad + \int_0^t\big[U_sZ_sU_s^*,\ R\big]\,da_s^+ + \int_0^t\big[R^*,\ U_sZ_sU_s^*\big]\,da_s^-\\
&\quad + \int_0^t U_s\,dZ_s\,U_s^* - \int_0^t U_sG_sU_s^*R\,ds - \int_0^t R^*U_sF_sU_s^*\,ds.
\end{aligned}$$
Comparing this equation with the previous equation for D_hY_t, we get
$$\int_0^t U_s\,dZ_s\,U_s^* - \int_0^t U_sG_sU_s^*R\,ds - \int_0^t R^*U_sF_sU_s^*\,ds = \int_0^t [\tilde R_s,\ Y_s]\,ds.$$
Exercises
Exercise 12.1 Relation to the commutative case. Let
$$\begin{cases}
Q = B^- + B^+ = \dfrac12\big((a_x^-)^2 + (a_x^+)^2\big) = \dfrac{P^2 - Q^2}{4},\\[8pt]
P = i(B^- - B^+) = \dfrac{i}{2}\big((a_x^-)^2 - (a_x^+)^2\big) = \dfrac{PQ + QP}{4}.
\end{cases}$$
1. Show that we have
[P, Q] = 2iM, [P, M] = 2iQ, [Q, M] = −2iP.
2. Show that
$$Q + M = B^- + B^+ + M = \frac{P^2}{2}, \qquad Q - M = B^- + B^+ - M = -\frac{Q^2}{2},$$
i.e., Q + M and M − Q have gamma laws.
3. Give the probability law of Q + M and Q − M.
4. Give the probability law of Q + αM when |α| < 1 and |α| > 1.
5. Find the classical analogues of the integration by parts formula (2.1) written as
$$\mathrm{IE}\big[D_{(1,0)}F\big] = \mathrm{IE}\Big[\frac12\Big\{\frac{P}{2},\ F\Big\} - F\Big]$$
for α = 1, and
$$\mathrm{IE}\big[D_{(1,0)}F\big] = \mathrm{IE}\Big[F - \frac12\Big\{\frac{Q}{2},\ F\Big\}\Big]$$
for α = −1.
Appendix
I was born not knowing and have had only a little time to change that
here and there.
(R.P. Feynman)
This appendix gathers some background and complements on orthogonal
polynomials, moments and cumulants, the Fourier transform, adjoint action
on Lie algebras, nets, closability of linear operators, and tensor products.
A.1 Polynomials
A.1.1 General idea
Consider a family (P_n)_{n∈N} of polynomials satisfying the orthogonality relation
$$\int_{-\infty}^{\infty} P_n(x)\,P_k(x)\,f(x)\,\mu(dx) = 0, \qquad n\ne k,$$
where ⟨·,·⟩ denotes the inner product of L²_C(R, μ), i.e.,
$$\langle f, g\rangle = \int_{\mathbb R} f(x)\,g(x)\,\mu(dx).$$
Since L²_C(R, μ) has dimension n, it follows that the monomials x^m with m ≥ n are linear combinations of P₀, ..., P_{n−1}. Therefore, we get
$$P_k = 0, \qquad k\ge n.$$
Example A.1.1 Consider μ = pδ_{x₁} + qδ_{x₂} with p, q > 0 such that p + q = 1 and x₁, x₂ ∈ R. Here we get P₀ = 1 and
$$\tilde P_1 = x - \langle 1, x\rangle = x - px_1 - qx_2, \qquad \langle\tilde P_1,\ \tilde P_1\rangle = \int_{-\infty}^{\infty}(x - px_1 - qx_2)^2\,\mu(dx).$$
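The computation in Example A.1.1 is easy to check numerically: after the first Gram–Schmidt step, P̃₁ = x − (px₁ + qx₂) is orthogonal to P₀ = 1 under μ = pδ_{x₁} + qδ_{x₂}, and ⟨P̃₁, P̃₁⟩ is the variance pq(x₁ − x₂)² of the two-point law. A minimal sketch (helper names are ours):

```python
# Two-point measure mu = p*delta_{x1} + q*delta_{x2}
p, q = 0.3, 0.7
x1, x2 = -1.0, 2.0

def inner(f, g):
    # inner product in L^2(R, mu)
    return p * f(x1) * g(x1) + q * f(x2) * g(x2)

P0 = lambda x: 1.0
m = p * x1 + q * x2            # first moment <1, x>
P1 = lambda x: x - m           # Gram-Schmidt step

assert abs(inner(P0, P1)) < 1e-12                             # orthogonality
assert abs(inner(P1, P1) - p * q * (x1 - x2) ** 2) < 1e-12    # variance of the two-point law
```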
cf. Theorems 4.1 and 4.4 in [27]: a family (P_n)_{n∈N} of polynomials such that deg(P_n) = n is orthogonal with respect to some measure μ on R if and only if there exist sequences (α_n)_{n∈N}, (β_n)_{n∈N} such that (P_n)_{n∈N} satisfies a three-term recurrence relation of the form
$$x\,P_n(x) = P_{n+1}(x) + (\alpha+\beta)n\,P_n(x) + n\big(t + \alpha\beta(n-1)\big)P_{n-1}(x),$$
The Legendre polynomials are associated with the uniform distribution μ, and this generalises to the family of Gegenbauer polynomials (or ultraspherical polynomials) when μ is the measure with density (1 − x²)^{α−1/2} 1_{[−1,1]} with respect to the Lebesgue measure, α > −1/2. Important special cases include the arcsine, uniform, and Wigner semicircle distributions. The Jacobi polynomials, associated with the beta distribution, constitute another generalisation.
Next we review in detail some important particular cases.
In particular we have
$$H_n(x; 0) = x^n, \qquad n\in\mathbb N.$$
i) Generating function:
$$\psi_\lambda(x, \sigma^2) = e^{\lambda x - \lambda^2\sigma^2/2}, \qquad x, \lambda\in\mathbb R.$$
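The generating function identity $e^{\lambda x-\lambda^2\sigma^2/2}=\sum_{n\ge0}\frac{\lambda^n}{n!}H_n(x;\sigma^2)$ can be checked numerically using the standard three-term recurrence H_{n+1}(x; σ²) = x H_n(x; σ²) − nσ² H_{n−1}(x; σ²), obtained by differentiating the generating function in λ. A minimal sketch:

```python
import math

def hermite(n, x, s2):
    # H_n(x; sigma^2) via the recurrence H_{n+1} = x*H_n - n*sigma^2*H_{n-1}
    h_prev, h = 1.0, x  # H_0, H_1
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * s2 * h_prev
    return h

x, s2, lam = 1.3, 0.7, 0.5
series = sum(lam ** n / math.factorial(n) * hermite(n, x, s2) for n in range(40))
assert abs(series - math.exp(lam * x - lam ** 2 * s2 / 2)) < 1e-10
assert abs(hermite(5, x, 0.0) - x ** 5) < 1e-12  # H_n(x; 0) = x^n
```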
$$C_0(k,\lambda) = 1, \qquad C_1(k,\lambda) = k - \lambda, \qquad k\in\mathbb R,\ \lambda\in\mathbb R_+.$$
Let
$$p_k(\lambda) = \frac{\lambda^k}{k!}\,e^{-\lambda}, \qquad k\in\mathbb N,\ \lambda\in\mathbb R_+,$$
denote the Poisson probability density, which satisfies the finite difference differential equation
$$\frac{\partial p_k}{\partial\lambda}(\lambda) = -\Delta p_k(\lambda), \tag{A.2}$$
where Δ is the difference operator Δp_k(λ) := p_k(λ) − p_{k−1}(λ). Let also
$$\psi_\lambda(k, t) = \sum_{n=0}^{\infty}\frac{t^n}{n!}\,C_n(k,\lambda), \qquad t\in(-1,1),$$
λ > 0, k ∈ N.
$$\begin{aligned}
C_{n+1}(k,\lambda) &= \frac{\lambda^{n+1}}{p_k(\lambda)}\frac{\partial^{n+1} p_k}{\partial\lambda^{n+1}}(\lambda)\\
&= -\frac{\lambda^{n+1}}{p_k(\lambda)}\frac{\partial^n p_k}{\partial\lambda^n}(\lambda) + \frac{\lambda^{n+1}}{p_k(\lambda)}\frac{\partial^n p_{k-1}}{\partial\lambda^n}(\lambda)\\
&= -\lambda\,\frac{\lambda^n}{p_k(\lambda)}\frac{\partial^n p_k}{\partial\lambda^n}(\lambda) + k\,\frac{\lambda^n}{p_{k-1}(\lambda)}\frac{\partial^n p_{k-1}}{\partial\lambda^n}(\lambda)\\
&= -\lambda\,C_n(k,\lambda) + k\,C_n(k-1,\lambda),
\end{aligned}$$
λ ∈ (−1, 1), hence the generating function ψ_λ(k, t) satisfies the differential equation
$$\frac{\partial\psi_\lambda}{\partial t}(k,t) = -\lambda\,\psi_\lambda(k,t) + k\,\psi_\lambda(k-1,t), \qquad \psi_0(k,t) = 1,\ k\ge 1,$$
which yields (A.4) by induction on k.
We also have
$$\frac{\partial^k p_k}{\partial\lambda^k}(\lambda) = (-\Delta)^k\, p_k(\lambda).$$
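The polynomials C_n generated by the recurrence C_{n+1}(k, λ) = −λC_n(k, λ) + kC_n(k−1, λ) just derived are (a normalisation of) the Charlier polynomials, orthogonal with respect to the Poisson distribution p_k(λ). A numerical sketch of this orthogonality, truncating the sum over k where the Poisson tail is negligible (helper names are ours):

```python
import math

def charlier(n, k, lam):
    # C_n(k, lam) via C_{n+1}(k) = -lam*C_n(k) + k*C_n(k-1), C_0 = 1
    if n == 0:
        return 1.0
    return -lam * charlier(n - 1, k, lam) + k * charlier(n - 1, k - 1, lam)

def poisson(k, lam):
    return lam ** k * math.exp(-lam) / math.factorial(k)

lam = 1.0
def inner(n, m):
    # <C_n, C_m> under the Poisson(lam) distribution, truncated tail
    return sum(charlier(n, k, lam) * charlier(m, k, lam) * poisson(k, lam)
               for k in range(80))

assert abs(inner(0, 1)) < 1e-8
assert abs(inner(1, 2)) < 1e-8
assert abs(inner(2, 3)) < 1e-8
assert inner(2, 2) > 0.0
```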
$$\mu(dx) = \frac{|x|^{m_0-1}}{\Gamma(m_0)}\,e^{-\beta x}\,1_{\beta\mathbb R_+}\,dx.$$
If β = +1, then this measure is, up to a normalisation parameter, the usual χ²-distribution (with parameter m₀) of probability theory.
For the definition of these polynomials see, e.g., [63, Equation (1.7.1)]. For the measure μ we get
$$\mu(dx) = C\,\exp\Big(\frac{(\pi - 2\arccos\beta)\,x}{2\sqrt{1-\beta^2}}\Big)\,\Big|\Gamma\Big(\frac{m_0}{2} + \frac{ix}{2\sqrt{1-\beta^2}}\Big)\Big|^2\,dx,$$
where
$$x_n = \Big(n + \frac{m_0}{2}\Big)\Big(\frac1c - c\Big)\,\mathrm{sgn}\,\beta, \qquad n\in\mathbb N,$$
and
$$C^{-1} = \sum_{n=0}^{\infty}\frac{c^{2n}}{n!}\,(m_0)_n = \frac{1}{(1-c^2)^{m_0}}.$$
Here, (A.6), based on the Faà di Bruno formula, links the moments IE[Xⁿ] of a random variable X with its cumulants (κₙˣ)_{n≥1}, cf. e.g., Theorem 1 of [71], and also [67] or § 2.4 and Relation (2.4.4), page 27 of [72]. In (A.6) and in the analogous inversion formula, the sums run over the partitions P₁ⁿ, ..., P_aⁿ of {1, ..., n} with cardinalities |P_iⁿ|, cf. Theorem 1 of [71], [67], and § 2.4 and Relation (2.4.3), page 27 of [72].
Example A.2.1
a) Gaussian cumulants. When X is centered we have κ₁ˣ = 0 and κ₂ˣ = IE[X²] = Var[X], and X is Gaussian if and only if κₙˣ = 0 for n ≥ 3, i.e., κₙˣ = 1_{n=2}σ², n ≥ 1.
b) Poisson cumulants. We have κₙᶻ = λ, n ≥ 1, and
$$\mathrm{IE}_\lambda[Z^n] = T_n(\lambda), \tag{A.8}$$
i.e., the n-th Poisson moment with intensity parameter λ > 0 is given by T_n(λ), where T_n is the Touchard polynomial of degree n used in Section 3.2. In particular, the moment generating function of the Poisson distribution with parameter λ > 0 and jump size α is given by
$$t\longmapsto e^{\lambda(e^{\alpha t}-1)} = \sum_{n=0}^{\infty}\frac{(\alpha t)^n}{n!}\,\mathrm{IE}_\lambda[Z^n] = \sum_{n=0}^{\infty}\frac{(\alpha t)^n}{n!}\,T_n(\lambda).$$
i.e.,
∫_{−∞}^∞ ∫_{−∞}^∞ e^{iξ(x−y)} ϕ(y) dξ dy = 2π ϕ(x),
G_μ : C\R −→ C
by
G_μ(z) = ∫_R (1/(z − t)) μ(dt).
The function Gμ is called the Cauchy transform or Stieltjes transform of μ.
We have
1/|z − x| = 1/√((ℜ(z) − x)² + ℑ(z)²) ≤ 1/|ℑ(z)|
G : C⁺ −→ C⁻ = {z ∈ C : ℑ(z) < 0}
If the measure μ has compact support, say in the interval [−M, M] for some M > 0, then we can express G_μ in terms of the moments of μ,
m_n(μ) = ∫ x^n μ(dx),
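For a concrete compactly supported example (ours, not from the text), the uniform measure on [−1, 1] has moments m_n = 1/(n + 1) for even n and 0 for odd n, and the moment series Σ_n m_n/z^{n+1} can be compared against the closed form of the Cauchy transform for |z| > 1:

```python
import cmath

# moments of the uniform probability measure on [-1, 1]
def m(n):
    return 1.0 / (n + 1) if n % 2 == 0 else 0.0

def G_series(z, N=200):
    # G_mu(z) = Σ_{n≥0} m_n / z^{n+1}, valid for |z| > 1
    return sum(m(n) / z**(n + 1) for n in range(N))

def G_closed(z):
    # direct evaluation: ∫ dt/(2(z - t)) over [-1, 1] = (1/2) log((z+1)/(z-1))
    return 0.5 * cmath.log((z + 1) / (z - 1))

z = 2.0 + 1.0j
assert abs(G_series(z) - G_closed(z)) < 1e-10
# the Cauchy transform maps the upper half-plane into the lower half-plane
assert G_closed(z).imag < 0
```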
In particular we have
Ad e^X Y = e^{ad X} Y,
and
Ad e^X Y := e^X Y e^{−X}
= Σ_{n,m=0}^∞ ((−1)^m/(n! m!)) X^n Y X^m
= Σ_{k=0}^∞ (1/k!) Σ_{m=0}^k (k choose m) (−1)^m X^{k−m} Y X^m
= Y + [X, Y] + (1/2)[X, [X, Y]] + · · ·
= e^{ad X} Y.
The identity
Σ_{m=0}^k (k choose m) (−1)^m X^{k−m} Y X^m = [X, [X, [· · · [X, [X, Y]] · · · ]]] (k times)
[X, [X, [· · · [X, [X, Y]] · · · ]]] ((k + 1) times)
= [X, [X, [X, [· · · [X, [X, Y]] · · · ]]]] (k times inside the outer bracket)
= [X, Σ_{m=0}^k (k choose m) (−1)^m X^{k−m} Y X^m]
= Σ_{m=0}^k (k choose m) (−1)^m [X, X^{k−m} Y X^m]
= Σ_{m=0}^k (k choose m) (−1)^m (X^{k+1−m} Y X^m − X^{k−m} Y X^{m+1})
= Σ_{m=0}^k (k choose m) (−1)^m X^{k+1−m} Y X^m − Σ_{m=1}^{k+1} (k choose m−1) (−1)^{m−1} X^{k+1−m} Y X^m
= Σ_{m=0}^{k+1} ((k choose m) + (k choose m−1)) (−1)^m X^{k+1−m} Y X^m
= Σ_{m=0}^{k+1} (k+1 choose m) (−1)^m X^{k+1−m} Y X^m,
where in the last step we used the Pascal recurrence relation for the binomial coefficients.
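Both sides of Ad e^X Y = e^{ad X} Y can be compared numerically for random matrices; the following sketch (an illustration, not part of the text) truncates both exponential series:

```python
import numpy as np

rng = np.random.default_rng(0)
X = 0.5 * rng.standard_normal((4, 4))
Y = 0.5 * rng.standard_normal((4, 4))

def expm(A, terms=60):
    # truncated power series for the matrix exponential
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

lhs = expm(X) @ Y @ expm(-X)             # Ad e^X Y = e^X Y e^{-X}

# e^{ad X} Y = Y + [X, Y] + [X, [X, Y]]/2! + ... summed directly
rhs, term = np.zeros_like(Y), Y.copy()
for k in range(60):
    rhs = rhs + term
    term = (X @ term - term @ X) / (k + 1)   # (ad X)^{k+1} Y / (k+1)!

assert np.allclose(lhs, rhs)
```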
A.6 Nets
In a metric space (X, d) a point x ∈ X is called an adherent point (also called
point of closure or contact point) of a set A ⊆ X if and only if there exists
a sequence (xn )n∈N ⊂ A that converges to x. This characterisation cannot be
formulated in general topological spaces unless we replace sequences by nets,
which are a generalisation of sequences in which the index set N is replaced
by more general sets.
A partially ordered set (I, ≤) is called a directed set if for any j, k ∈ I there exists an element ℓ ∈ I such that j ≤ ℓ and k ≤ ℓ. A net in a set A is a family of elements (x_i)_{i∈I} ⊆ A indexed by a directed set. A net (x_i)_{i∈I} in a topological space X is said to converge to a point x ∈ X if, for any neighborhood U_x of x in X, there exists an element i ∈ I such that x_j ∈ U_x for all j ∈ I with i ≤ j.
In a topological space X, a point x ∈ X is an adherent point of a set A ⊆ X if and only if there exists a net (x_i)_{i∈I} in A that converges to x. A map f : X −→ Y between topological spaces is continuous if and only if, for any point x ∈ X and any net in X converging to x, the composition of f with this net converges to f(x).
where (Fn )n∈N denotes any sequence converging to F and such that (TFn )n∈N
converges in H.
⟨Q1, 1⟩ = ⟨a+1, 1⟩ = ⟨1, a−1⟩ = 0.
⟨Q²1, 1⟩ = ⟨(a+ + a−)²1, 1⟩
= ⟨((a+)² + (a−)² + a+a− + a−a+)1, 1⟩
= ⟨a−a+1, 1⟩
= ⟨(a+a− + σ²)1, 1⟩
= σ²⟨1, 1⟩
= σ².
⟨Q³1, 1⟩ = ⟨(a+ + a−)³1, 1⟩
= ⟨(a+ + a−)²a+1, 1⟩
= ⟨((a+)² + (a−)² + a+a− + a−a+)a+1, 1⟩
= ⟨((a−)²a+ + a−(a+)²)1, 1⟩
= 0.
⟨Q⁴1, 1⟩ = ⟨(a+ + a−)⁴1, 1⟩
= ⟨((a−)²(a+)² + a−a+a−a+)1, 1⟩
= ⟨(σ²a−a+ + σ²a−a+ + σ²a−a+)1, 1⟩
= 3σ²⟨(a+a− + σ²I)1, 1⟩
= 3σ⁴⟨1, 1⟩
= 3σ⁴,
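These vacuum moments can be reproduced with truncated matrices for a+ and a− satisfying [a−, a+] = σ²I below the truncation level (a numerical illustration, ours, not part of the text):

```python
import numpy as np

sigma2 = 2.0
dim = 12
n = np.arange(1, dim)
# truncated annihilation/creation operators: a− e_n = σ√n e_{n−1}
a_minus = np.diag(np.sqrt(sigma2 * n), k=1)
a_plus = a_minus.T
Q = a_plus + a_minus

e0 = np.zeros(dim); e0[0] = 1.0
moments = [e0 @ np.linalg.matrix_power(Q, k) @ e0 for k in range(1, 5)]

# vacuum moments of the Gaussian field: 0, σ², 0, 3σ⁴
assert abs(moments[0]) < 1e-12
assert abs(moments[1] - sigma2) < 1e-12
assert abs(moments[2]) < 1e-12
assert abs(moments[3] - 3 * sigma2**2) < 1e-12
```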
and
∂/∂t ∂/∂s W_t W_{is} f(τ)|_{t=s=0} = −2iτ f(τ) + i(1 − τ) ∂/∂t W_t f(τ)|_{t=0}
= −2iτ f(τ) + i(1 − τ)(2τ ∂_τ + (1 − τ))f(τ)
= i(2τ ∂_τ + (1 − τ))((1 − τ)f)(τ) = −P̃Q̃f(τ).
∂/∂t ∂/∂s W_{is} W_t f(τ)|_{t=s=0} = ∂/∂s W_{is}1|_{s=0} ∂/∂t W_t f(τ)|_{t=0} = −Q̃P̃f(τ).
Remarks.
Relation (3.13) can be proved using the operator W_z, as a consequence of the aforementioned proposition. We have from (1)
−Q̃P̃f(τ) = ∂/∂s ∂/∂t W_{is} W_t f(τ)|_{t=s=0}
= ∂/∂s ∂/∂t (exp(2istτ/(1 − 2t)) W_t W_{is} f(τ))|_{t=s=0}
= (4itτ/(1 − 2t)² + 2iτ/(1 − 2t))|_{t=s=0} W_t W_{is} f(τ)|_{t=s=0}
+ exp(2isτ/(1 − 2t)) ∂/∂t ∂/∂s W_t W_{is} f(τ)|_{t=s=0}
= 2iτ f(τ) + ∂/∂t ∂/∂s W_t W_{is} f(τ)|_{t=s=0}
= 2iτ f(τ) − P̃Q̃f(τ).
b− = −ia− , b+ = ia+ .
b− e0 = −ia− e0 = 0.
2. We have
⟨b−u, v⟩_H = ⟨−ia−u, v⟩_H = i⟨a−u, v⟩_H = i⟨u, a+v⟩_H = ⟨u, ia+v⟩_H = ⟨u, b+v⟩_H.
a− → −ia+, a+ → ia−
⟨(−ia+)u, v⟩ = i⟨a+u, v⟩ = i⟨u, a−v⟩ = ⟨u, ia−v⟩,
⟨Xe_0, e_0⟩ = ⟨(N + a+ + a− + E)e_0, e_0⟩ = λ⟨Ee_0, e_0⟩ = λ⟨e_0, e_0⟩ = λ.
b) Similarly we have
⟨X²e_0, e_0⟩ = λ⟨e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ = λ + λ².
c) We have
⟨X³e_0, e_0⟩ = ⟨Xa−Xe_0, e_0⟩ + ⟨a−Xe_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + λ⟨X²e_0, e_0⟩
= ⟨X²a−e_0, e_0⟩ + ⟨Xa−e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + ⟨Xa−e_0, e_0⟩ + ⟨a−e_0, e_0⟩ + λ⟨e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + λ⟨X²e_0, e_0⟩
= λ⟨Xe_0, e_0⟩ + λ⟨e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + λ⟨X²e_0, e_0⟩
= λ + 3λ² + λ³.
hence
IE[X^n] = ∂^n/∂t^n IE[e^{tX}]|_{t=0} = α(α + 1) · · · (α + n − 1)(1 − t)^{−α−n}|_{t=0} = α(α + 1) · · · (α + n − 1).
Exercise 3.4
1. For n = 1 we have
⟨e_0, (B+ + B− + M)e_0⟩ = ⟨B−e_0, e_0⟩ + ⟨e_0, Me_0⟩ = α⟨e_0, e_0⟩ = α,
since ⟨e_0, e_0⟩ = 1.
2. For n = 2 we have
⟨e_0, (B+ + B− + M)²e_0⟩ = ⟨e_0, (B+ + B− + M)(B+ + B− + M)e_0⟩
= ⟨e_0, M²e_0⟩ + ⟨e_0, B−Me_0⟩ + ⟨e_0, B−B+e_0⟩ + ⟨e_0, MB+e_0⟩
= α²⟨e_0, e_0⟩ + α⟨e_0, B−e_0⟩ + ⟨e_0, B−B+e_0⟩ + ⟨e_0, MB+e_0⟩
= α²⟨e_0, e_0⟩ + ⟨e_0, [B−, B+]e_0⟩ + ⟨e_0, B+B−e_0⟩ + ⟨e_0, [M, B+]e_0⟩ + α⟨e_0, B−e_0⟩
= α²⟨e_0, e_0⟩ + ⟨e_0, Me_0⟩ + 2⟨e_0, B+e_0⟩
= α(α + 1).
3. For n = 3 we have
⟨e_0, (B+ + B− + M)³e_0⟩ = ⟨e_0, (B− + M)(B+ + B− + M)(B+ + M)e_0⟩
= ⟨e_0, B−B+(B+ + M)e_0⟩ + ⟨e_0, B−B−B+e_0⟩ + ⟨e_0, B−MB+e_0⟩
+ ⟨e_0, B−B+Me_0⟩ + ⟨e_0, MB+B+e_0⟩ + ⟨e_0, MB+Me_0⟩ + ⟨e_0, MB−B+e_0⟩ + ⟨e_0, MMMe_0⟩
= ⟨e_0, M²e_0⟩ + 2⟨e_0, Me_0⟩ + ⟨e_0, M²e_0⟩ + ⟨e_0, MMe_0⟩ + ⟨e_0, MMMe_0⟩
= α³ + 3α² + 2α
= α(α + 1)(α + 2).
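The pattern α, α(α + 1), α(α + 1)(α + 2) matches the moments α(α + 1) · · · (α + n − 1) of the gamma distribution with shape parameter α and unit rate. A quick check via Gauss–Laguerre quadrature (our illustration, with an integer α so the quadrature is exact):

```python
import numpy as np
from math import gamma

alpha = 3.0
# Gauss–Laguerre quadrature: ∫_0^∞ g(x) e^{-x} dx ≈ Σ w_i g(x_i)
x, w = np.polynomial.laguerre.laggauss(60)

def gamma_moment(n):
    # IE[X^n] for X with density x^{α-1} e^{-x} / Γ(α) on (0, ∞)
    return float(np.sum(w * x**(n + alpha - 1))) / gamma(alpha)

for n in range(1, 5):
    rising = 1.0
    for j in range(n):
        rising *= alpha + j            # α(α+1)···(α+n−1)
    assert abs(gamma_moment(n) - rising) < 1e-8
```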
Define now U to be the n × n matrix with entries
U_kk = 1, k ∉ {i, j}, U_ii = U_jj = u, U_ij = v, U_ji = −v,
where all the coefficients that are not listed vanish. In other words, we embed V into the unitary group U(n) such that it acts non-trivially only on the ith and the jth components. Then conjugation of the matrix B with U will change only the coefficients of the ith and the jth row and column of B; more precisely, we get
U*BU with entries
(U*BU)_{kℓ} = b_{kℓ}, k, ℓ ∉ {i, j},
(U*BU)_{ki} = u b_{ki} − v b_{kj}, (U*BU)_{kj} = v b_{ki} + u b_{kj}, k ∉ {i, j},
(U*BU)_{iℓ} = u b_{iℓ} − v b_{jℓ}, (U*BU)_{jℓ} = v b_{iℓ} + u b_{jℓ}, ℓ ∉ {i, j},
and (U*BU)_{ij} = (U*BU)_{ji} = 0. The remaining coefficients are unchanged, and the values of the diagonal entries (U*BU)_{ii} and (U*BU)_{jj} do not matter for our calculations, since they do not occur in the sum defining off(U*BU).
We will now prove that off(U*BU) = off(B) − |b_ij|² < off(B).
We have
off(U*BU) = (1/2) Σ_{k,ℓ=1,...,n, k≠ℓ} |(U*BU)_{kℓ}|² = Σ_{1≤k<ℓ≤n} |(U*BU)_{kℓ}|²,
since U*BU is Hermitian.
If we take the sum over a row different from the ith or the jth row, say the kth row, then we have
|(U*BU)_{ki}|² + |(U*BU)_{kj}|² = |b_{ki}|² + |b_{kj}|²
for the coefficients in the ith and jth columns, and, since the other coefficients are not changed by the conjugation with U, we have
Σ_{ℓ=1,...,n, ℓ≠k} |(U*BU)_{kℓ}|² = Σ_{ℓ=1,...,n, ℓ≠k} |b_{kℓ}|².
For the sum over the ith and the jth rows, we observe
Σ_{ℓ≠i,j} |(U*BU)_{iℓ}|² + Σ_{ℓ≠i,j} |(U*BU)_{jℓ}|² = Σ_{ℓ≠i,j} |b_{iℓ}|² + Σ_{ℓ≠i,j} |b_{jℓ}|²
and
Σ_{ℓ≠i} |(U*BU)_{iℓ}|² + Σ_{ℓ≠j} |(U*BU)_{jℓ}|² = Σ_{ℓ≠i,j} |b_{iℓ}|² + Σ_{ℓ≠i,j} |b_{jℓ}|²
= Σ_{ℓ≠i} |b_{iℓ}|² + Σ_{ℓ≠j} |b_{jℓ}|² − |b_{ij}|² − |b_{ji}|²
and finally,
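This strict decrease of the off-diagonal norm under a suitably chosen rotation in the (i, j) plane is the engine of the Jacobi eigenvalue algorithm. A minimal real symmetric sketch (the angle formula and helper names are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = (A + A.T) / 2                       # real symmetric test matrix

def off(B):
    # off(B) = Σ_{k<ℓ} |b_{kℓ}|²
    return sum(abs(B[k, l])**2
               for k in range(B.shape[0]) for l in range(k + 1, B.shape[0]))

def jacobi_rotate(B, i, j):
    # rotation in the (i, j) plane chosen so that (UᵀBU)_{ij} = 0
    theta = 0.5 * np.arctan2(2 * B[i, j], B[j, j] - B[i, i])
    u, v = np.cos(theta), np.sin(theta)
    U = np.eye(B.shape[0])
    U[i, i], U[i, j], U[j, i], U[j, j] = u, v, -v, u
    return U.T @ B @ U

i, j = 1, 3
B2 = jacobi_rotate(B, i, j)
assert abs(B2[i, j]) < 1e-12                       # entry annihilated
assert abs(off(B2) - (off(B) - B[i, j]**2)) < 1e-12  # off decreases by b_ij²
```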
Exercise 4.2
1. This follows by direct computation.
2. We find
n_0 1_x = (#{j : x_j = +1} − #{j : x_j = −1}) 1_x
for x ∈ {−1, +1}^n, and n_0 has a binomial distribution on the set {−n, −n + 2, . . . , n − 2, n}, with density
L(n_0) = Σ_{k=0}^n (n choose k) p^k q^{n−k} δ_{2k−n}
with respect to the constant function.
3. The law of
n_θ = n_0 + θ(n_+ + n_−)
can be computed from exp(θ n_2)1.
Next, one checks that the terms with repeated indices vanish. Since an n-tuple of distinct indices (j_1, . . . , j_n) defines a permutation by σ(k) = j_k for k ∈ {1, . . . , n}, the sum becomes
(1/(n! 2^n)) Σ_{ε∈{±1}^n} Σ_{σ∈S_n} v_{σ(1)} ⊗ · · · ⊗ v_{σ(n)}.
The terms in the sum no longer depend on ε, and we get the desired result. Note that we can write this polarisation formula equivalently as an expectation
Σ_{σ∈S_n} v_{σ(1)} ⊗ · · · ⊗ v_{σ(n)} = IE[Z_1 · · · Z_n (Z_1v_1 + · · · + Z_nv_n)^{⊗n}],
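The expectation form of the polarisation identity can be checked exactly for n = 2 by averaging over all Rademacher sign vectors (our illustration with concrete vectors, not from the text):

```python
import numpy as np
from itertools import permutations, product

# two vectors in R^3 (n = 2)
v = [np.array([1.0, 2.0, 0.5]), np.array([-1.0, 0.0, 3.0])]
n = len(v)

# left side: Σ_σ v_{σ(1)} ⊗ v_{σ(2)}
lhs = sum(np.multiply.outer(v[s[0]], v[s[1]]) for s in permutations(range(n)))

# right side: exact expectation over the 2^n Rademacher sign vectors
rhs = np.zeros((3, 3))
for z in product([-1.0, 1.0], repeat=n):
    w = sum(z[i] * v[i] for i in range(n))
    rhs += np.prod(z) * np.multiply.outer(w, w)
rhs /= 2**n

assert np.allclose(lhs, rhs)
```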
Using (iv) we can show that ω_2(t) satisfies the same differential equation as ω_1(t):
dω_2/dt(t) = y e^{2xt} E+ exp(ỹ(t)E+) exp(x̃(t)M) exp(z̃(t)E−)
+ exp(ỹ(t)E+) xM exp(x̃(t)M) exp(z̃(t)E−)
+ exp(ỹ(t)E+) exp(x̃(t)M) z e^{2xt} E− exp(z̃(t)E−)
= y e^{2xt} E+ exp(ỹ(t)E+) exp(x̃(t)M) exp(z̃(t)E−)
+ x(M − 2ỹ(t)E+) exp(ỹ(t)E+) exp(x̃(t)M) exp(z̃(t)E−)
+ z e^{2xt} e^{−2xt} E− exp(ỹ(t)E+) exp(x̃(t)M) exp(z̃(t)E−)
= (xM + yE+ + zE−) ω_2(t).
6. The functions ω1 (t) and ω2 (t) have the same initial value for t = 0 and
satisfy the same differential equation, therefore they agree for all values
of t. Taking t = 1 we get the desired formula.
Exercise 6.2
1. a) We have
e^{za+} a− e^{−za+} = a− + z[a+, a−] + (z²/2)[a+, [a+, a−]] + (z³/3!)[a+, [a+, [a+, a−]]] + · · ·
= a− − zE − (z²/2)[a+, E] − (z³/3!)[a+, [a+, E]] − · · ·
= a− − zE,
since E is central.
3. We have
(d/dt) ω_2(t) = (d/dt)(e^{uta+} e^{vta−} e^{(tw+t²uv/2)E})
= ((d/dt) e^{uta+}) e^{vta−} e^{(tw+t²uv/2)E} + e^{uta+} ((d/dt) e^{vta−}) e^{(tw+t²uv/2)E} + e^{uta+} e^{vta−} (d/dt) e^{(tw+t²uv/2)E}
= u a+ e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + v e^{uta+} a− e^{vta−} e^{(tw+t²uv/2)E} + e^{uta+} e^{vta−} (w + tuv) E e^{(tw+t²uv/2)E}, t ∈ R+.
4. We have
ω_2′(t) = u a+ e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + v e^{uta+} a− e^{vta−} e^{(tw+t²uv/2)E} + e^{uta+} e^{vta−} (w + tuv) E e^{(tw+t²uv/2)E}
= u a+ e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + v (a− − utE) e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + (w + tuv) E e^{uta+} e^{vta−} e^{(tw+t²uv/2)E}
= u a+ e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + v a− e^{uta+} e^{vta−} e^{(tw+t²uv/2)E} + w e^{uta+} e^{vta−} E e^{(tw+t²uv/2)E}
= (u a+ + v a− + wE) e^{uta+} e^{vta−} e^{(tw+t²uv/2)E},
and
= e^{w+uv/2} ⟨e_0, e^{ua+} e^{va−} e_0⟩
= e^{w+uv/2} ⟨e_0, e^{ua+} e_0⟩
= e^{w+uv/2} ⟨e^{ua−} e_0, e_0⟩
= e^{w+uv/2} ⟨e_0, e_0⟩
= e^{w+uv/2}.
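The splitting formula and the vacuum expectation e^{w+uv/2} can be checked with truncated harmonic-oscillator matrices, in which E = I and [a−, a+] = E holds below the cutoff (a numerical sketch of ours, not from the text):

```python
import numpy as np

def expm(A, terms=80):
    # truncated power series for the matrix exponential
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

dim = 40
n = np.arange(1, dim)
a_minus = np.diag(np.sqrt(n), k=1)       # a− e_n = √n e_{n−1}
a_plus = a_minus.T                        # a+ e_n = √(n+1) e_{n+1}
E = np.eye(dim)                           # central element

u, v, w = 0.3, 0.2, 0.1
lhs = expm(u * a_plus + v * a_minus + w * E)
rhs = expm(u * a_plus) @ expm(v * a_minus) @ expm((w + u * v / 2) * E)

e0 = np.zeros(dim); e0[0] = 1.0
# both sides agree on low levels, and both have vacuum expectation e^{w+uv/2}
assert np.allclose(lhs[:10, :10], rhs[:10, :10], atol=1e-8)
assert abs(e0 @ lhs @ e0 - np.exp(w + u * v / 2)) < 1e-8
```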
hence
show that¹
(1/2) e^{−it((α_x^+)² − (α_x^−)²)/2} (α_x^+ α_x^− + α_x^− α_x^+) e^{it((α_x^+)² − (α_x^−)²)/2},
hence
⟨L_n(τ) exp(itP̃)1, 1⟩
= (1/2π) ∫_{−∞}^∞ ∫_{−∞}^∞ L_n((x² + y²)/2) e^{t((α_x^+)² − (α_x^−)²)/2} e^{t((α_y^+)² − (α_y^−)²)/2} e^{−(x²+y²)/2} dx dy
= ((−1)^n/(2^n √(2π))) Σ_{k=0}^n (√((2k)!) √((2n − 2k)!)/(k!(n − k)!)) ∫_{−∞}^∞ H_k(x) e^{(t/2)((α_x^+)² − (α_x^−)²)} e^{−x²/2} dx
× (1/√(2π)) ∫_{−∞}^∞ H_{n−k}(y) e^{t((α_y^+)² − (α_y^−)²)/2} e^{−y²/2} dy
= ((−1)^n tanh(t)^n/(4^n cosh(t))) Σ_{k=0}^n (2k)!(2n − 2k)!/(k!²(n − k)!²)
= (−1)^n √(1 − tanh(t)²) tanh(t)^n, (6.2)
since
Σ_{k=0}^n (2k)!(2n − 2k)!/(k!²(n − k)!²) = 2^{2n}, n ∈ N.
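The last identity is the classical convolution of central binomial coefficients, Σ_k C(2k, k) C(2n−2k, n−k) = 4^n, which can be verified directly:

```python
from math import comb

# Σ_{k=0}^n (2k)!(2n−2k)!/(k!²(n−k)!²) = Σ_k C(2k, k) C(2n−2k, n−k) = 4^n
for n in range(10):
    s = sum(comb(2 * k, k) * comb(2 * (n - k), n - k) for k in range(n + 1))
    assert s == 4**n
```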
In other words, this result follows from the fact that the random variables α_x^+ α_x^− and α_y^+ α_y^− are independent and have negative binomial distributions in the states
exp(t((α_x^+)² − (α_x^−)²)/2) and exp(t((α_y^+)² − (α_y^−)²)/2),
hence their half sum a° has a geometric distribution in the state exp(itP̃), cf. [93], [97].
3. Applying (6.2) with n = 0 we find
IE[exp(itP̃)] = √(1 − tanh(t)²) = 1/cosh(t), t ∈ R.
The Wigner function of |φ⟩⟨φ| is given by
W_{|φ⟩⟨φ|}(x, y) = (1/2π) ∫_{−∞}^∞ φ̄(x − t) φ(x + t) e^{iyt} dt
= (1/(2π)^{3/2}) ∫_{−∞}^∞ e^{−(x−t)²/4 − (x+t)²/4} e^{iyt} dt
= (1/(2π)^{3/2}) ∫_{−∞}^∞ e^{−x²/2 − t²/2 + ity} dt
= (1/2π) e^{−(x²+y²)/2}, x, y ∈ R.
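The closed form (1/2π) e^{−(x²+y²)/2} can be compared with a direct numerical evaluation of the defining integral for the Gaussian wave function φ(x) = (2π)^{−1/4} e^{−x²/4} (an illustration of ours, not from the text):

```python
import numpy as np

# normalized Gaussian wave function φ(x) = (2π)^{-1/4} e^{-x²/4}
phi = lambda x: (2 * np.pi)**(-0.25) * np.exp(-x**2 / 4)

def wigner(x, y, T=25.0, N=200001):
    # W(x, y) = (1/2π) ∫ φ̄(x−t) φ(x+t) e^{iyt} dt, by a Riemann sum on [−T, T]
    t = np.linspace(-T, T, N)
    integrand = np.conj(phi(x - t)) * phi(x + t) * np.exp(1j * y * t)
    return float(np.sum(integrand).real) * (t[1] - t[0]) / (2 * np.pi)

for x, y in [(0.0, 0.0), (0.7, -1.2), (1.5, 0.3)]:
    closed = np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)
    assert abs(wigner(x, y) - closed) < 1e-10
```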
= ⟨p_n(X)e_0, p_m(X)e_0⟩ = δ_{nm}, n, m ∈ N.
2. From Section 3.3.2 we get
X e_n = √((n + 1)(n + m_0)) e_{n+1} + β(2n + m_0) e_n + √(n(n + m_0 − 1)) e_{n−1},
n ∈ N, and
(n + 1)P_{n+1} + (2βn + βm_0 − x)P_n + (n + m_0 − 1)P_{n−1} = 0,
However, this stochastic integral is not defined, as the process B_{e^t} is not adapted since e^t > t, t ∈ R+.
= e^{−βT/2} ∫_{−∞}^∞ exp(−(1 − βT) y²/2) dy/√(2π)
= e^{−βT/2}/√(1 − βT), β < 1/T.
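The Gaussian integral used in the last step, ∫ exp(−(1 − βT)y²/2) dy/√(2π) = 1/√(1 − βT) for βT < 1, can be confirmed numerically (a sketch of ours, not from the text):

```python
from math import exp, sqrt, pi

beta, T = 0.3, 2.0                      # requires β < 1/T
a = 1 - beta * T                        # here a = 0.4 > 0

# ∫ exp(−a y²/2) dy/√(2π) = 1/√a, by a Riemann sum on a wide grid
h, L = 0.01, 40.0
s = h / sqrt(2 * pi) * sum(exp(-a * (k * h)**2 / 2)
                           for k in range(-int(L / h), int(L / h) + 1))
assert abs(s - 1 / sqrt(a)) < 1e-12

# hence the expectation equals e^{−βT/2}/√(1 − βT)
val = exp(-beta * T / 2) * s
assert abs(val - exp(-beta * T / 2) / sqrt(1 - beta * T)) < 1e-12
```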
Exercise 9.5
1. We have S_t = S_0 e^{rt + σB_t − σ²t/2}, t ∈ R+.
2. We have
f(t, S_t) = IE[(S_T)² | F_t]
= S_t² IE[e^{2r(T−t) + 2σ(B_T − B_t) − σ²(T−t)} | F_t]
= S_t² e^{2r(T−t) − σ²(T−t)} IE[e^{2σ(B_T − B_t)} | F_t]
= S_t² e^{2r(T−t) − σ²(T−t) + 2σ²(T−t)} = S_t² e^{(2r+σ²)(T−t)},
ζ_t = σ S_t ∂f/∂x(t, S_t) = 2σ S_t² e^{(2r+σ²)(T−t)}, t ∈ [0, T],
and f satisfies
rx ∂f/∂x(t, x) + (σ²/2) x² ∂²f/∂x²(t, x) + ∂f/∂t(t, x) = 0
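That f(t, x) = x² e^{(2r+σ²)(T−t)} solves this PDE, and that ζ_t has the stated form, can be verified symbolically (a sketch of ours using sympy, not from the text):

```python
import sympy as sp

t, x, r, sigma, T = sp.symbols('t x r sigma T', positive=True)
f = x**2 * sp.exp((2 * r + sigma**2) * (T - t))   # f(t, x) = IE[(S_T)² | S_t = x]

# rx f_x + (σ²/2) x² f_xx + f_t = 0
pde = (r * x * sp.diff(f, x)
       + sp.Rational(1, 2) * sigma**2 * x**2 * sp.diff(f, x, 2)
       + sp.diff(f, t))
assert sp.simplify(pde) == 0

# the hedging integrand ζ_t = σ x ∂f/∂x = 2σ x² e^{(2r+σ²)(T−t)}
zeta = sigma * x * sp.diff(f, x)
assert sp.simplify(zeta - 2 * sigma * x**2 * sp.exp((2 * r + sigma**2) * (T - t))) == 0
```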
by induction on n ≥ 2.
[1] L. Accardi, U. Franz, and M. Skeide. Renormalized squares of white noise and
other non-Gaussian noises as Lévy processes on real Lie algebras. Comm. Math.
Phys., 228(1):123–150, 2002. (Cited on pages xvii, 17, 38, 40, 96, 132, 147,
and 187).
[2] L. Accardi, M. Schürmann, and W.v. Waldenfels. Quantum independent incre-
ment processes on superalgebras. Math. Z., 198:451–477, 1988. (Cited on
pages xvii and 131).
[3] G.S. Agarwal. Quantum Optics. Cambridge University Press, Cambridge, 2013.
(Cited on page 130).
[4] N.I. Akhiezer. The Classical Moment Problem and Some Related Questions in
Analysis. Translated by N. Kemmer. Hafner Publishing Co., New York, 1965.
(Cited on page 54).
[5] N.I. Akhiezer and I.M. Glazman. Theory of Linear Operators in Hilbert Space.
Dover Publications Inc., New York, 1993. (Cited on page 243).
[6] S. Albeverio, Yu. G. Kondratiev, and M. Röckner. Analysis and geometry
on configuration spaces. J. Funct. Anal., 154(2):444–500, 1998. (Cited on
page 162).
[7] S.T. Ali, N.M. Atakishiyev, S.M. Chumakov, and K.B. Wolf. The Wigner
function for general Lie groups and the wavelet transform. Ann. Henri Poincaré,
1(4):685–714, 2000. (Cited on pages xvi, xvii, 114, 115, 118, 120, 122, 123,
124, and 191).
[8] S.T. Ali, H. Führ, and A.E. Krasowska. Plancherel inversion as unified approach
to wavelet transforms and Wigner functions. Ann. Henri Poincaré, 4(6):1015–
1050, 2003. (Cited on pages xvi and 124).
[9] G.W. Anderson, A. Guionnet, and O. Zeitouni. An Introduction to Random
Matrices. Cambridge: Cambridge University Press, 2010. (Cited on page 88).
[10] D. Applebaum. Probability on Compact Lie Groups, volume 70 of Probability Theory and Stochastic Modelling. Springer, 2014. (Cited on pages xvi and 99).
[11] M. Anshelevich. Orthogonal polynomials and counting permutations. www
.math.tamu.edu/∼manshel/papers/OP-counting-permutations.pdf, 2014. (Cited
on page 233).
[12] N.M. Atakishiyev, S.M. Chumakov, and K.B. Wolf. Wigner distribution function
for finite systems. J. Math. Phys., 39(12):6247–6261, 1998. (Cited on page 125).
[13] V.P. Belavkin. A quantum nonadapted Itô formula and stochastic analysis in
Fock scale. J. Funct. Anal., 102:414–447, 1991. (Cited on pages 88, 190, 212,
and 213).
[14] V.P. Belavkin. A quantum nonadapted stochastic calculus and nonstationary
evolution in Fock scale. In Quantum Probability and Related Topics VI, pages
137–179. World Sci. Publishing, River Edge, NJ, 1991. (Cited on pages 88, 190,
212, and 213).
[15] V.P. Belavkin. On quantum Itô algebras. Math. Phys. Lett., 7:1–16, 1998. (Cited
on page 88).
[16] C. Berg, J.P.R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups,
volume 100 of Graduate Texts in Mathematics. Springer-Verlag, New York,
1984. Theory of positive definite and related functions. (Cited on page 88).
[17] Ph. Biane. Calcul stochastique non-commutatif. In Ecole d’Eté de Probabilités
de Saint-Flour, volume 1608 of Lecture Notes in Mathematics. Springer-Verlag,
Berlin, 1993. (Cited on pages 7, 88, 211, and 264).
[18] Ph. Biane. Quantum Markov processes and group representations. In Quantum
Probability Communications, QP-PQ, X, pages 53–72. World Sci. Publishing,
River Edge, NJ, 1998. (Cited on pages xvii, 132, and 189).
[19] Ph. Biane and R. Speicher. Stochastic calculus with respect to free Brow-
nian motion and analysis on Wigner space. Probab. Theory Related Fields,
112(3):373–409, 1998. (Cited on pages 88 and 216).
[20] L.C. Biedenharn and J.D. Louck. Angular Momentum in Quantum Physics.
Theory and Application. With a foreword by P.A. Carruthers. Cambridge:
Cambridge University Press, reprint of the 1981 hardback edition edition, 2009.
(Cited on page 72).
[21] L.C. Biedenharn and J.D. Louck. The Racah-Wigner Algebra in Quantum
Theory. With a foreword by P.A. Carruthers. Introduction by G.W. Mackey.
Cambridge: Cambridge University Press, reprint of the 1984 hardback ed.
edition, 2009. (Cited on page 72).
[22] J.-M. Bismut. Martingales, the Malliavin calculus and hypoellipticity under
general Hörmander’s conditions. Z. Wahrsch. Verw. Gebiete, 56(4):469–505,
1981. (Cited on pages xvii and 189).
[23] F. Bornemann. Teacher’s corner - kurze Beweise mit langer Wirkung. Mitteilun-
gen der Deutschen Mathematiker-Vereinigung, 10:55–55, July 2002. (Cited on
page 258).
[24] N. Bouleau, editor. Dialogues autour de la création mathématique. Association
Laplace-Gauss, Paris, 1997.
[25] T. Carleman. Les Fonctions Quasi Analytiques. Paris: Gauthier-Villars, Éditeur,
Paris, 1926. (Cited on page 221).
[26] M.H. Chang. Quantum Stochastics. Cambridge Series in Statistical and Proba-
bilistic Mathematics. Cambridge University Press, Cambridge, 2015. (Cited on
pages xviii and 88).
[27] T.S. Chihara. An Introduction to Orthogonal Polynomials. Gordon and Breach
Science Publishers, New York-London-Paris, 1978. Mathematics and Its Appli-
cations, Vol. 13. (Cited on page 233).
[28] S.M. Chumakov, A.B. Klimov, and K.B. Wolf. Connection between two Wigner functions for spin systems. Physical Review A, 61(3):034101, 2000. (Cited on page 125).
[29] L. Cohen. Time-Frequency Analysis: Theory and Applications. Prentice-Hall,
New Jersey, 1995. (Cited on pages xvii and 130).
[102] L. Pukanszky. Unitary representations of solvable Lie groups. Ann. scient. Éc.
Norm. Sup., 4:457–608, 1971. (Cited on page 142).
[103] R. Ramer. On nonlinear transformations of Gaussian measures. J. Funct. Anal.,
15:166–187, 1974. (Cited on page 173).
[104] S. Sakai. C∗ -Algebras and W ∗ -Algebras. Springer-Verlag, New York-
Heidelberg, 1971. (Cited on page 179).
[105] M. Schürmann. The Azéma martingales as components of quantum independent
increment processes. In J. Azéma, P.A. Meyer, and M. Yor, editors, Séminaire
de Probabilités XXV, volume 1485 of Lecture Notes in Math. Springer-Verlag,
Berlin, 1991. (Cited on pages xvii and 132).
[106] M. Schürmann. White Noise on Bialgebras. Springer-Verlag, Berlin, 1993.
(Cited on pages xvii, 131, 134, 136, 147, and 179).
[107] I. Shigekawa. Derivatives of Wiener functionals and absolute continuity of
induced measures. J. Math. Kyoto Univ., 20(2):263–289, 1980. (Cited on
page 173).
[108] K.B. Sinha and D. Goswami. Quantum stochastic processes and noncommu-
tative geometry, volume 169 of Cambridge Tracts in Mathematics. Cambridge
University Press, Cambridge, 2007. (Cited on page xviii).
[109] R. F. Streater. Classical and quantum probability. J. Math. Phys., 41(6):3556–
3603, 2000. (Cited on pages xvii and 147).
[110] T.N. Thiele. On semi invariants in the theory of observations. Kjöbenhavn
Overs., pages 135–141, 1899. (Cited on page 239).
[111] N. Tsilevich, A.M. Vershik, and M. Yor. Distinguished properties of the gamma
process and related topics. math.PR/0005287, 2000. (Cited on pages xvii, 180,
and 189).
[112] N. Tsilevich, A.M. Vershik, and M. Yor. An infinite-dimensional analogue of the
Lebesgue measure and distinguished properties of the gamma process. J. Funct.
Anal., 185(1):274–296, 2001. (Cited on pages xvii, 180, and 189).
[113] A.S. Üstünel. An introduction to analysis on Wiener space, volume 1610
of Lecture Notes in Mathematics. Springer Verlag, Berlin, 1995. (Cited on
page 155).
[114] A.M. Vershik, I.M. Gelfand, and M.I. Graev. A commutative model of the group
of currents SL(2, R)^X connected with a unipotent subgroup. Funct. Anal. Appl.,
17(2):137–139, 1983. (Cited on pages xvii and 189).
[115] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 1, volume 72 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1991. (Cited on page 99).
[116] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 3, volume 75 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1992. (Cited on page 99).
[117] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 2, volume 74 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1993. (Cited on page 99).
[118] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions, volume 316 of Mathematics and its Applications. Kluwer Academic
Publishers Group, Dordrecht, 1995. (Cited on page 99).
[119] J. Ville. Théorie et applications de la notion de signal analytique. Câbles et
Transmission, 2:61–74, 1948. (Cited on page 130).
[120] D. Voiculescu. Lectures on free probability theory. In Lectures on probability
theory and statistics (Saint-Flour, 1998), volume 1738 of Lecture Notes in Math.,
pages 279–349. Berlin: Springer, 2000. (Cited on page 88).