
CAMBRIDGE TRACTS IN MATHEMATICS

General Editors
B. BOLLOBÁS, W. FULTON, A. KATOK, F. KIRWAN,
P. SARNAK, B. SIMON, B. TOTARO

206 Probability on Real Lie Algebras

A complete list of books in the series can be found at www.cambridge.org/mathematics.


Recent titles include the following:
172. Rigid Cohomology. By B. Le Stum
173. Enumeration of Finite Groups. By S. R. Blackburn, P. M. Neumann, and
G. Venkataraman
174. Forcing Idealized. By J. Zapletal
175. The Large Sieve and its Applications. By E. Kowalski
176. The Monster Group and Majorana Involutions. By A. A. Ivanov
177. A Higher-Dimensional Sieve Method. By H. G. Diamond, H. Halberstam, and
W. F. Galway
178. Analysis in Positive Characteristic. By A. N. Kochubei
179. Dynamics of Linear Operators. By F. Bayart and É. Matheron
180. Synthetic Geometry of Manifolds. By A. Kock
181. Totally Positive Matrices. By A. Pinkus
182. Nonlinear Markov Processes and Kinetic Equations. By V. N. Kolokoltsov
183. Period Domains over Finite and p-adic Fields. By J.-F. Dat, S. Orlik, and M. Rapoport
184. Algebraic Theories. By J. Adámek, J. Rosický, and E.M. Vitale
185. Rigidity in Higher Rank Abelian Group Actions I: Introduction and Cocycle Problem. By
A. Katok and V. Niţică
186. Dimensions, Embeddings, and Attractors. By J. C. Robinson
187. Convexity: An Analytic Viewpoint. By B. Simon
188. Modern Approaches to the Invariant Subspace Problem. By I. Chalendar and
J. R. Partington
189. Nonlinear Perron–Frobenius Theory. By B. Lemmens and R. Nussbaum
190. Jordan Structures in Geometry and Analysis. By C.-H. Chu
191. Malliavin Calculus for Lévy Processes and Infinite-Dimensional Brownian Motion. By
H. Osswald
192. Normal Approximations with Malliavin Calculus. By I. Nourdin and G. Peccati
193. Distribution Modulo One and Diophantine Approximation. By Y. Bugeaud
194. Mathematics of Two-Dimensional Turbulence. By S. Kuksin and A. Shirikyan
195. A Universal Construction for Groups Acting Freely on Real Trees. By I. Chiswell and
T. Müller
196. The Theory of Hardy’s Z-Function. By A. Ivić
197. Induced Representations of Locally Compact Groups. By E. Kaniuth and K. F. Taylor
198. Topics in Critical Point Theory. By K. Perera and M. Schechter
199. Combinatorics of Minuscule Representations. By R. M. Green
200. Singularities of the Minimal Model Program. By J. Kollár
201. Coherence in Three-Dimensional Category Theory. By N. Gurski
202. Canonical Ramsey Theory on Polish Spaces. By V. Kanovei, M. Sabok, and J. Zapletal
203. A Primer on the Dirichlet Space. By O. El-Fallah, K. Kellay, J. Mashreghi, and
T. Ransford
204. Group Cohomology and Algebraic Cycles. By B. Totaro
205. Ridge Functions. By A. Pinkus
206. Probability on Real Lie Algebras. By U. Franz and N. Privault
207. Auxiliary Polynomials in Number Theory. By D. Masser
Probability on Real Lie Algebras

UWE FRANZ
Université de Franche-Comté

NICOLAS PRIVAULT
Nanyang Technological University, Singapore
32 Avenue of the Americas, New York, NY 10013-2473, USA

Cambridge University Press is part of the University of Cambridge.


It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107128651
© Uwe Franz and Nicolas Privault 2016
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2016
Printed in the United States of America
A catalogue record for this publication is available from the British Library
Library of Congress Cataloging in Publication Data
Franz, Uwe.
Probability on real Lie algebras / Uwe Franz, Université de Franche-Comté, Nicolas
Privault, Nanyang Technological University, Singapore.
pages cm. – (Cambridge tracts in mathematics)
Includes bibliographical references and index.
ISBN 978-1-107-12865-1 (hardback : alk. paper)
1. Lie algebras. 2. Probabilities. I. Privault, Nicolas. II. Title.
QA252.3.F72 2015
512′.482–dc23    2015028912
ISBN 978-1-107-12865-1 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy


of URLs for external or third-party internet websites referred to in this publication,
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
To Aline Mio.
To Christine Jing and Sophie Wanlu.
Contents

Notation page xi
Preface xiii
Introduction xv
1 Boson Fock space 1
1.1 Annihilation and creation operators 1
1.2 Lie algebras on the boson Fock space 4
1.3 Fock space over a Hilbert space 6
Exercises 9
2 Real Lie algebras 10
2.1 Real Lie algebras 10
2.2 Heisenberg–Weyl Lie algebra hw 12
2.3 Oscillator Lie algebra osc 13
2.4 Lie algebra sl2 (R) 14
2.5 Affine Lie algebra 20
2.6 Special orthogonal Lie algebras 21
Exercises 26
3 Basic probability distributions on Lie algebras 27
3.1 Gaussian distribution on hw 27
3.2 Poisson distribution on osc 31
3.3 Gamma distribution on sl2 (R) 36
Exercises 44
4 Noncommutative random variables 47
4.1 Classical probability spaces 47
4.2 Noncommutative probability spaces 48
4.3 Noncommutative random variables 54
4.4 Functional calculus for Hermitian matrices 57


4.5 The Lie algebra so(3) 59


4.6 Trace and density matrix 65
4.7 Spin measurement and the Lie algebra so(3) 70
Exercises 72
5 Noncommutative stochastic integration 75
5.1 Construction of the Fock space 75
5.2 Creation, annihilation, and conservation operators 80
5.3 Quantum stochastic integrals 83
5.4 Quantum Itô table 86
Exercises 88
6 Random variables on real Lie algebras 90
6.1 Gaussian and Poisson random variables on osc 90
6.2 Meixner, gamma, and Pascal random variables on sl2 (R) 94
6.3 Discrete distributions on so(2) and so(3) 96
6.4 The Lie algebra e(2) 97
Exercises 99
7 Weyl calculus on real Lie algebras 103
7.1 Joint moments of noncommuting random variables 103
7.2 Combinatorial Weyl calculus 106
7.3 Heisenberg–Weyl algebra 107
7.4 Functional calculus on real Lie algebras 114
7.5 Functional calculus on the affine algebra 117
7.6 Wigner functions on so(3) 122
7.7 Some applications 128
Exercises 130
8 Lévy processes on real Lie algebras 131
8.1 Definition 131
8.2 Schürmann triples 134
8.3 Lévy processes on hw and osc 140
8.4 Classical processes 142
Exercises 148
9 A guide to the Malliavin calculus 149
9.1 Creation and annihilation operators 149
9.2 Wiener space 155
9.3 Poisson space 162
9.4 Sequence models 168
Exercises 173

10 Noncommutative Girsanov theorem 178


10.1 General method 178
10.2 Quasi-invariance on osc 180
10.3 Quasi-invariance on sl2 (R) 183
10.4 Quasi-invariance on hw 184
10.5 Quasi-invariance for Lévy processes 185
Exercises 189
11 Noncommutative integration by parts 190
11.1 Noncommutative gradient operators 190
11.2 Affine algebra 192
11.3 Noncommutative Wiener space 197
11.4 The white noise case 212
Exercises 216
12 Smoothness of densities on real Lie algebras 217
12.1 Noncommutative Wiener space 217
12.2 Affine algebra 222
12.3 Towards a Hörmander-type theorem 224
Exercises 230
Appendix 231
A.1 Polynomials 231
A.2 Moments and cumulants 239
A.3 Fourier transform 241
A.4 Cauchy–Stieltjes transform 243
A.5 Adjoint action 244
A.6 Nets 245
A.7 Closability of linear operators 246
A.8 Tensor products 247

Exercise solutions 249


Chapter 1 249
Chapter 2 250
Chapter 3 253
Chapter 4 256
Chapter 5 259
Chapter 6 260
Chapter 7 266
Chapter 8 266
Chapter 9 267

Chapter 10 269
Chapter 11 270
Chapter 12 270
References 271
Index 279
Notation

K = R, resp. K = C, denote the fields of real, resp. complex, numbers.


B(R) denotes the Borel σ-algebra, i.e., the σ-algebra generated by the open subsets of R.
z̄ denotes the complex conjugate of z ∈ C.
i = √−1 denotes the complex square root of −1.
ℑ(z) denotes the imaginary part of z ∈ C.
ℜ(z) denotes the real part of z ∈ C.
sgn x ∈ {−1, 0, 1} denotes the sign of x ∈ R.
δ_{n,m} = 1_{n=m} = 1 if n = m and 0 if n ≠ m is the Kronecker symbol.
δ_x denotes the Dirac measure at the point x.
ℓ² denotes the space of complex-valued square-summable sequences.
h denotes a (complex) separable Hilbert space and h_C denotes its complexification when h is real.
“◦” denotes the symmetric tensor product in Hilbert spaces.
Γ_s(h) denotes the symmetric Fock space over the real (resp. complexified) Hilbert space h (resp. h_C).
B(h) denotes the algebra of bounded operators over a Hilbert space h.
tr ρ denotes the trace of the operator ρ.
|X| denotes the absolute value of a normal operator X, with |X| := (X*X)^{1/2} when X is not normal.



|φ⟩⟨ψ| with φ, ψ ∈ h denotes the rank one operator on the Hilbert space h, defined by |φ⟩⟨ψ|(v) = ⟨ψ, v⟩φ for v ∈ h.
[·, ·] denotes the commutator [X, Y] = XY − YX.
{·, ·} denotes the anti-commutator {X, Y} = XY + YX.
Ad X, resp. ad X, denote the adjoint action on a Lie group, resp. Lie algebra.
S(R) denotes the Schwartz space of rapidly decreasing smooth functions.
C0 (R) denotes the set of continuous functions on R, vanishing at infinity.
Cb∞ (R) denotes the set of infinitely differentiable functions on R which are
bounded together with all their derivatives.
H^{p,κ}(R²) denotes the Sobolev space of orders κ ∈ N and p ∈ [2, ∞].
Γ(x) := ∫₀^∞ t^{x−1} e^{−t} dt denotes the standard gamma function.
Jm (x) denotes the Bessel function of the first kind of order m ≥ 0.
Preface

This monograph develops a pedagogical approach to the role of noncommu-


tativity in probability theory, starting in the first chapter at a level suitable for
graduate and advanced undergraduate students. The contents also aim at being
relevant to the physics student and to the algebraist interested in connections
with probability and statistics.
Our presentation of noncommutativity in probability revolves around concrete examples of relations between algebraic structures and probability distributions, especially via recursive relations among moments and their generating functions. In this way, basic Lie algebras such as the Heisenberg–Weyl algebra
hw, the oscillator algebra osc, the special linear algebra sl(2, R), and other Lie
algebras such as so(2) and so(3), can be connected with classical probability
distributions, notably the Gaussian, Poisson, and gamma distributions, as well
as some other infinitely divisible distributions.
Based on this framework, Chapters 1–3 allow the reader to directly manipulate examples, and as such they remain accessible to advanced undergraduates seeking an introduction to noncommutative probability. This setting
also allows the reader to become familiar with more advanced topics, including
the notion of couples of noncommutative random variables via the use of
Wigner densities, in relation with quantum optics.
The following chapters are more advanced in nature, and are targeted to
the graduate and research levels. They include the results of recent research on
quantum Lévy processes and the noncommutative Malliavin [75] calculus. The
Malliavin calculus is introduced in both the commutative and noncommutative
settings and contributes to a better understanding of the smoothness properties
of Wigner densities.
While this text is predominantly based on research literature, part of the
material has been developed for teaching in the course “Special topics in


statistics” at the Nanyang Technological University, Singapore, in the second


semester of academic year 2013–2014. We thank the students and participants
for useful questions and suggestions.
We thank Souleiman Omar Hoche and Michaël Ulrich for their comments,
suggestions, and corrections of an earlier version of these notes. During the
writing of this book, UF was supported by the ANR Project OSQPI (ANR-11-
BS01-0008) and by the Alfried Krupp Wissenschaftskolleg in Greifswald. NP
acknowledges the support of NTU MOE Tier 2 Grant No. M4020140.
Uwe Franz
Nicolas Privault
Introduction

Mathematics is the tool specially suited for dealing with abstract


concepts of any kind and there is no limit to its power in this field.
(P.A.M. Dirac, in The Principles of Quantum Mechanics.)
Quantum probability addresses the challenge of merging the apparently dis-
tinct domains of algebra and probability, in view of physical applications.
Those fields typically involve radically different types of thinking, which are
not often mastered simultaneously. Indeed, the framework of algebra is often
abstract and noncommutative while probability addresses the “fluctuating”
but classical notion of a random variable, and requires a good amount of
statistical intuition. On the other hand, those two fields combined yield natural applications, e.g., to quantum mechanics. On a more general level, the
noncommutativity of operations is a common real life phenomenon which
can be connected to classical probability via quantum mechanics. Algebraic
approaches to probability also have applications in theoretical computer
science, cf. e.g., [39].
In the framework of this noncommutative (or algebraic) approach to prob-
ability, often referred to as quantum probability, real-valued random variables
on a classical probability space become special examples of noncommutative
(or quantum) random variables. For this, a real-valued random variable on a probability space (Ω, F, P) is viewed as an (unbounded) self-adjoint multiplication operator acting on the Hilbert space L²(Ω, P). This has led to the
suggestion by several authors to develop a theory of quantum probability
within the framework of operators and group representations in a Hilbert space,
cf. [87] and references therein.
In this monograph, our approach is to focus on the links between the
commutation relations within a given noncommutative algebra A on the one
hand, and the combinatorics of the moments of a given probability distribution


on the other hand. This approach is exemplified in Chapters 1–3. In this respect
our point of view is consistent with the description of quantum probability by
P.A. Meyer in [80] as a set of prescriptions to extract probability from algebra,
based on various choices for the algebra A.
For example, it is a well-known fact that the Gaussian distribution arises
from the Heisenberg–Weyl algebra which is generated by three elements
{P, Q, I} linked by the commutation relation

[P, Q] = PQ − QP = 2iI.

It turns out similarly that other infinitely divisible distributions such as


the gamma and continuous binomial distributions can be constructed via
noncommutative random variables using representations of the special linear
algebra sl2 (R), or more simply on the affine algebra viewed as a sub-algebra
of sl2 (R). Other (joint) probability laws can be deduced in this setting; e.g.,
one can construct noncommutative couples of random variables with gamma
and continuous binomial marginals. Similarly, the Poisson distribution can be
obtained in relation with the oscillator algebra. In Chapters 4 and 6, those
basic examples are revisited and extended in the more general framework
of quantum random variables on real Lie algebras. We often work on real
Lie algebras given by complex Lie algebras with an involution, because
calculations are more convenient on complexifications. The real Lie algebras
can then be recovered as real subspaces of anti-Hermitian elements.
Since the elements of a Lie algebra g can be regarded as functions on its
dual g∗ it might be more precise to view a random variable j : g → A as
taking values in g∗ . In this sense, this book deals with “probability on duals
of real Lie algebras”, which would better reflect the implicit dualisation in the
definition of quantum probability spaces and quantum random variables. For
simplicity of exposition we nonetheless prefer to work with the less precise
terminology “probability on real Lie algebras”. We refer to [10] and the
references therein for further discussion and motivation of “noncommutative
(or quantum) mathematics”.
The notion of joint distribution for random vectors is of capital importance
in classical probability theory. It also has an analog for couples of noncommu-
tative random variables, through the definition of the (not necessarily positive)
Wigner [124] density functions. In Chapter 7 we present a construction of joint
densities for noncommutative random variables, based on functional calculus
on real Lie algebras using the general framework of [7] and [8], in particular
on the affine algebra. In that sense our presentation is also connected to the
framework of standard quantum mechanics and quantum optics, where Wigner

densities have various applications in, e.g., time-frequency analysis, see, e.g.,
the references given in [29] and [7].
Overall, this monograph puts more emphasis on noncommutative “problems
with fixed time” as compared with “problems in moving time”; see, e.g.,
[31] and [32] for a related organisation of topics in classical probability and
stochastic calculus. Nevertheless, we also include a discussion of noncommu-
tative stochastic processes via quantum Lévy processes in Chapter 8. Lévy
processes, or stochastic processes with independent and stationary increments,
are used as models for random fluctuations, e.g., in physics and finance. In
quantum physics the so-called quantum noises or quantum Lévy processes
occur, e.g., in the description of quantum systems coupled to a heat bath
[47] or in the theory of continuous measurement [53]. See also [122] for a
model motivated by lasers, and [2, 106] for the theory of Lévy processes on
involutive bialgebras. Those contributions extend, in a sense, the theory of
factorisable representations of current groups and current algebras as well as
the theory of classical Lévy processes with values in Euclidean space or, more
generally, semigroups. For a historical survey on the theory of factorisable
representations and its relation to quantum stochastic calculus, see [109, section 5]. In addition, many interesting classical stochastic processes can be shown to
arise as components of quantum Lévy processes, cf. e.g., [1, 18, 42, 105].
We also intend to connect noncommutative probability with the Malliavin
calculus, which was originally designed by P. Malliavin, cf. [75], as a tool
to provide sufficient conditions for the smoothness of partial differential
equation solutions using probabilistic arguments, see Chapter 9 for a review
of its construction. Over the years, the Malliavin calculus has developed into
many directions, including anticipating stochastic calculus and extensions of
stochastic calculus to fractional Brownian motion, cf. [84] and references
therein.
The Girsanov theorem is an important tool in stochastic analysis and the
Malliavin calculus, and we derive its noncommutative, or algebraic version
in Chapter 10, starting with the case of noncommutative Gaussian processes.
By differentiation, Girsanov-type identities can be used to derive integration
by parts formulas for the Wigner densities associated to the noncommutative
processes, by following Bismut’s argument, cf. [22]. In Chapter 10 we
will demonstrate on several examples how quasi-invariance formulas can be
obtained in such a situation. This includes the Girsanov formula for Brownian
motion, as well as a quasi-invariance result of the gamma processes [111, 112],
which actually appeared first in the context of factorisable representations of
current groups [114], and a quasi-invariance formula for the Meixner process.

In Chapter 11 we present the construction of noncommutative Malliavin


calculus on the Heisenberg–Weyl algebra [43], [44], which generalises the
Gaussian Malliavin calculus to Wigner densities, and allows one to prove
the smoothness of joint Wigner distributions with Gaussian marginals using
Sobolev spaces over R2 . Here, noncommutative Gaussian processes can be
built as the couple of the position and momentum Brownian motions on the
Fock space. We also provide a treatment of other probability laws, including
noncommutative couples of random variables with gamma and continuous
binomial marginals based on the affine algebra. More generally, the long term
goal in this field is to extend the hypoellipticity results of the Malliavin calculus
to noncommutative quantum processes. In this chapter, we also point out the
relationship between noncommutative and commutative differential calculi. In
the white noise case, i.e., if the underlying Hilbert space is the L2 -space of
some measure space, the classical divergence operator defines an anticipating
stochastic integral, known as the Hitsuda–Skorohod integral.
Several books on other extensions of the Malliavin calculus have been
recently published, such as [86] which deals with infinitesimal (nonstan-
dard) analysis and [56] which deals with Lévy processes. See [108] for
a recent introduction to quantum stochastic calculus with connections to
noncommutative geometry. See also [26] and [123] for recent introductions
to quantum stochastics based on quantum stochastic calculus and quantum
Markov semigroups.
The outline of the book is as follows (we refer the reader to [55] for an
introduction to the basic concepts of quantum theory used in this book). In
Chapter 1 we introduce the boson Fock space and we show how the first
moments of the associated normal distribution can be computed using basic
noncommutative calculus. Chapter 2 collects the background material on real
Lie algebras and their representations. In Chapter 3 we consider fundamental
examples of probability distributions (Gaussian, Poisson, gamma), and their
connections with the Heisenberg–Weyl and oscillator algebras, as well as
with the special linear algebra sl2 (R), generated by the annihilation and
creation operators on the boson Fock space. This will also be the occasion to
introduce other representations based on polynomials. After those introductory
sections, the construction of noncommutative random variables as operators
acting on Lie algebras will be formalised in Chapter 4, based in particular on
the notion of spectral measure. Quantum stochastic integrals are introduced
in Chapter 5. In Chapter 6 we revisit the approaches of Chapters 3 and 4,
relating Lie algebraic relations and probability distributions, in the unified
framework of the splitting lemma, see chapter 1 of [38]. The problem of
defining joint densities of couples of noncommutative random variables is

treated in Chapter 7 under the angle of Weyl calculus, and Lévy processes
on real Lie algebras are considered in Chapter 8. The classical, commutative
Malliavin calculus is introduced in Chapter 9, and an introduction to quasi-
invariance and the Girsanov theorem for noncommutative Lévy processes
is given in Chapter 10. The noncommutative counterparts of the Malliavin
calculus for Gaussian distributions, and then for gamma and other related
probability densities are treated in Chapters 11 and 12, respectively, including
the case of so(3).
1

Boson Fock space

You don’t know who he was? Half the particles in the universe obey
him!
(Reply by a physics professor when a student asked who Bose was.)
We start by introducing the elementary boson Fock space together with
its canonically associated creation and annihilation operators on a space of
square-summable sequences, and in the more general setting of Hilbert spaces.
The boson Fock space is a simple and fundamental quantum model which will
be used in preliminary calculations of Gaussian moments on the boson Fock
space, based on the commutation and duality relations satisfied by the creation
and annihilation operators. Those calculations will also serve as a motivation
for the general framework of the subsequent chapters.

1.1 Annihilation and creation operators


Consider the space of square-summable sequences

    ℓ² := ℓ²(C) = { f : N → C : ∑_{k=0}^∞ |f(k)|² < ∞ }

with the inner product

    ⟨f, g⟩_ℓ² := ∑_{k=0}^∞ f̄(k) g(k),   f, g ∈ ℓ²,

and orthonormal basis (e_n)_{n∈N} given by the Kronecker symbols

    e_n(k) := δ_{k,n} = 1 if k = n, 0 if k ≠ n,   k, n ∈ N.


Definition 1.1.1 Let σ > 0. The annihilation and creation operators are the linear operators a⁻ and a⁺ implemented on ℓ² by letting

    a⁺e_n := σ√(n+1) e_{n+1},   a⁻e_n := σ√n e_{n−1},   n ∈ N.

Note that the above definition means that a⁻e₀ = 0.
The sequence space ℓ² endowed with the annihilation and creation operators a⁻ and a⁺ is called the boson (or bosonic) Fock space. In the physical interpretation of the boson Fock space, the vector e_n represents a physical


n-particle state. The term “boson” refers to the Bose–Einstein statistics and in
particular to the possibility for n particles to share the same state en , and Fock
spaces are generally used to model the quantum states of identical particles in
variable number.
As a consequence of Definition 1.1.1 the number operator a° defined as a° := a⁺a⁻ has eigenvalues given by

    a°e_n = a⁺a⁻e_n = σ√n a⁺e_{n−1} = nσ² e_n,   n ∈ N.   (1.1)

Noting the relation

    a⁻a⁺e_n = σ√(n+1) a⁻e_{n+1} = σ²(n+1) e_n,

in addition to (1.1), we deduce the next proposition.
Proposition 1.1.2 We have the commutation relation

    [a⁻, a⁺]e_n = σ² e_n,   n ∈ N.
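The ladder relations above are easy to check numerically. The following sketch (an illustration, not part of the book) represents a⁺ and a⁻ as matrices on the truncated space span(e₀, …, e_{N−1}), with an assumed cutoff N, and verifies the eigenvalues of the number operator and the commutation relation away from the truncation corner.

```python
import numpy as np

def creation(N, sigma=1.0):
    """Matrix of a^+ on span(e_0, ..., e_{N-1}): a^+ e_n = sigma*sqrt(n+1) e_{n+1}."""
    A = np.zeros((N, N))
    for n in range(N - 1):
        A[n + 1, n] = sigma * np.sqrt(n + 1)
    return A

def annihilation(N, sigma=1.0):
    """Matrix of a^-: a^- e_n = sigma*sqrt(n) e_{n-1}, with a^- e_0 = 0."""
    return creation(N, sigma).T

N, sigma = 6, 1.0
ap, am = creation(N, sigma), annihilation(N, sigma)

# The number operator a° = a^+ a^- is diagonal with eigenvalues n*sigma^2, cf. (1.1).
number = ap @ am
print(np.diag(number))  # [0. 1. 2. 3. 4. 5.]

# [a^-, a^+] e_n = sigma^2 e_n holds except at n = N-1,
# where the truncation of the ladder is felt.
comm = am @ ap - ap @ am
print(np.allclose(comm[: N - 1, : N - 1], sigma**2 * np.eye(N - 1)))  # True
```

The truncation defect in the last row and column is the standard price of a finite-dimensional picture of the ladder operators; all checks below restrict to indices well below the cutoff.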
Quantum physics provides a natural framework for the use of the noncommutative operators a⁻ and a⁺, by connecting them with the statistical intuition
of probability. Indeed, the notion of physical measurement is noncommutative
in nature; think, e.g., of measuring the depth of a pool vs. measuring water
temperature: each measurement will perturb the next one in a certain way, thus
naturally inducing noncommutativity. In addition, noncommutativity gives rise
to the impossibility of making measurements with infinite precision, and the
physical interpretation of quantum mechanics is essentially probabilistic as a
given particle only has a probability density of being in a given state/location.
In the sequel we take σ = 1.
Given f = (f(n))_{n∈N} and g = (g(n))_{n∈N} written as

    f = ∑_{n=0}^∞ f(n) e_n   and   g = ∑_{n=0}^∞ g(n) e_n,

we have

    a⁺f = ∑_{n=0}^∞ f(n) a⁺e_n = ∑_{n=0}^∞ √(n+1) f(n) e_{n+1} = ∑_{n=1}^∞ √n f(n−1) e_n

and

    a⁻f = ∑_{n=0}^∞ f(n) a⁻e_n = ∑_{n=1}^∞ √n f(n) e_{n−1} = ∑_{n=0}^∞ √(n+1) f(n+1) e_n,

hence we have

    (a⁺f)(n) = √n f(n−1)   and   (a⁻f)(n) = √(n+1) f(n+1).   (1.2)

This shows the following duality relation between a− and a+ .

Proposition 1.1.3 For all f, g ∈ ℓ² with finite support in N we have

    ⟨a⁻f, g⟩_ℓ² = ⟨f, a⁺g⟩_ℓ².

Proof: By (1.2) we have

    ⟨a⁻f, g⟩_ℓ² = ∑_{n=0}^∞ √(n+1) f̄(n+1) g(n)
                = ∑_{n=1}^∞ √n f̄(n) g(n−1)
                = ∑_{n=1}^∞ f̄(n) (a⁺g)(n)
                = ⟨f, a⁺g⟩_ℓ².
2 .
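The duality of Proposition 1.1.3 and the component formulas (1.2) can be spot-checked on the truncated matrices; the sketch below (illustrative, with σ = 1 and an assumed cutoff N) takes the truncated a⁻ as the conjugate transpose of a⁺, so the duality holds exactly in finite dimension. Note that `np.vdot` conjugates its first argument, matching the ℓ² inner product.

```python
import numpy as np

N = 8
ap = np.diag(np.sqrt(np.arange(1.0, N)), k=-1)  # a^+ e_n = sqrt(n+1) e_{n+1}
am = ap.conj().T                                # a^- e_n = sqrt(n) e_{n-1}

rng = np.random.default_rng(0)
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Duality <a^- f, g> = <f, a^+ g>; np.vdot conjugates its first argument.
lhs = np.vdot(am @ f, g)
rhs = np.vdot(f, ap @ g)
print(np.isclose(lhs, rhs))  # True

# Component formulas (1.2): (a^+ f)(n) = sqrt(n) f(n-1), (a^- f)(n) = sqrt(n+1) f(n+1).
n = 3
print(np.isclose((ap @ f)[n], np.sqrt(n) * f[n - 1]))      # True
print(np.isclose((am @ f)[n], np.sqrt(n + 1) * f[n + 1]))  # True
```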

We also define the position and momentum operators

    Q := a⁻ + a⁺   and   P := i(a⁺ − a⁻),

which satisfy the commutation relation

    [P, Q] = PQ − QP = −2iσ² Id.

To summarise the results of this section, the Hilbert space H = ℓ² with inner product ⟨·, ·⟩_ℓ² has been equipped with two operators a⁻ and a⁺, called annihilation and creation operators and acting on the elements of H such that

a) a⁻ and a⁺ are dual of each other in the sense that

       ⟨a⁻u, v⟩_ℓ² = ⟨u, a⁺v⟩_ℓ²,

   and this relation will also be written as (a⁺)* = a⁻, with respect to the inner product ⟨·, ·⟩_ℓ².

b) the operators a⁻ and a⁺ satisfy the commutation relation

       [a⁻, a⁺] = a⁻a⁺ − a⁺a⁻ = σ² Id,

   where Id is the identity operator.

1.2 Lie algebras on the boson Fock space


In this section we characterise the Lie algebras made of linear mappings Y : ℓ² → ℓ², written on the orthonormal basis (e_n)_{n∈N} of the boson Fock space ℓ² as

    Y e_n = γ_n e_{n+1} + λ_n e_n + η_n e_{n−1},   n ∈ N,   (1.3)

where γ_n, λ_n, η_n ∈ C, with η₀ = 0 and γ_n ≠ 0, n ∈ N. We assume that Y is Hermitian, i.e., Y* = Y, or equivalently

    γ̄_n = η_{n+1}   and   λ_n ∈ R,   n ∈ N.
For example, the position and momentum operators

    Q := a⁻ + a⁺   and   P := i(a⁺ − a⁻)

can be written as

    Q e_n = a⁻e_n + a⁺e_n = √n e_{n−1} + √(n+1) e_{n+1},

i.e., γ_n = √(n+1), λ_n = 0, and η_n = √n, while

    P e_n = i(a⁺e_n − a⁻e_n) = i√(n+1) e_{n+1} − i√n e_{n−1},

i.e., γ_n = i√(n+1), λ_n = 0, and η_n = −i√n.
In the sequel we consider the sequence (P_n)_{n∈N} of polynomials given by

    P_n(Y) := ∑_{k=0}^n α_{k,n} Y^k,   n ∈ N.
Proposition 1.2.1 The condition

    e_n = P_n(Y) e₀,   n ∈ N,   (1.4)

defines a unique sequence (P_n)_{n∈N} of polynomials that satisfy the three-term recurrence relation

    x P_n(x) = γ_n P_{n+1}(x) + λ_n P_n(x) + η_n P_{n−1}(x),   n ∈ N,   (1.5)

from which the sequence (P_n)_{n∈N} can be uniquely determined based on the initial condition P_{−1} = 0, P₀ = 1.

Proof: The relation (1.3) and the condition (1.4) show that

    Y P_n(Y) e₀ = γ_n P_{n+1}(Y) e₀ + λ_n P_n(Y) e₀ + η_n P_{n−1}(Y) e₀
                = γ_n e_{n+1} + λ_n e_n + η_n e_{n−1},

which implies the recurrence relation (1.5).

For example, the monomial Y^n satisfies

    ⟨e_n, Y^n e₀⟩_ℓ² = γ₀ · · · γ_{n−1},   n ∈ N,

hence since γ_n ≠ 0, n ∈ N, we have in particular

    1 = ⟨e_n, e_n⟩_ℓ²
      = ⟨e_n, P_n(Y) e₀⟩_ℓ²
      = ∑_{k=0}^n α_{k,n} ⟨e_n, Y^k e₀⟩_ℓ²
      = α_{n,n} ⟨e_n, Y^n e₀⟩_ℓ²
      = α_{n,n} γ₀ · · · γ_{n−1},   n ∈ N.

In the case where Y = Q is the position operator, imposing the relation

    e_n = P_n(Q) e₀,   n ∈ N,

i.e., (1.4), shows that

    Q P_n(Q) e₀ = √(n+1) P_{n+1}(Q) e₀ + √n P_{n−1}(Q) e₀,

hence the three-term recurrence relation (1.5) reads

    x P_n(x) = √(n+1) P_{n+1}(x) + √n P_{n−1}(x),

for n ∈ N, with initial condition P_{−1} = 0, P₀ = 1, hence (P_n)_{n∈N} is the family of normalised Hermite polynomials, cf. Section 12.1.
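Both the recurrence and the identity e_n = P_n(Q)e₀ can be verified numerically. In the sketch below (an illustration, not from the book; σ = 1 and the cutoff N are assumptions) the coefficients of P_n are generated from (1.5) in the case Y = Q, and P_n(Q)e₀ is compared against e_n on the truncated space; P₃ indeed comes out as the normalised Hermite polynomial (x³ − 3x)/√6.

```python
import numpy as np

N = 12
ap = np.diag(np.sqrt(np.arange(1.0, N)), k=-1)
Q = ap + ap.T                                   # position operator (sigma = 1)

# Three-term recurrence x P_n = sqrt(n+1) P_{n+1} + sqrt(n) P_{n-1},
# solved for P_{n+1}; coefficients are stored lowest degree first.
P = [np.array([1.0])]                           # P_0 = 1
for n in range(N - 1):
    xPn = np.concatenate(([0.0], P[n]))         # multiplication by x
    prev = np.concatenate((P[n - 1], [0.0, 0.0])) if n >= 1 else np.zeros(n + 2)
    P.append((xPn - np.sqrt(n) * prev) / np.sqrt(n + 1))

# P_3(x) = (x^3 - 3x)/sqrt(6), the degree-3 normalised Hermite polynomial.
print(np.allclose(P[3] * np.sqrt(6), [0.0, -3.0, 0.0, 1.0]))  # True

def apply_poly(coeffs, M, v):
    """Compute (sum_k c_k M^k) v."""
    out, power = np.zeros_like(v), v.copy()
    for c in coeffs:
        out = out + c * power
        power = M @ power
    return out

e0 = np.zeros(N); e0[0] = 1.0
ok = all(np.allclose(apply_poly(P[n], Q, e0), np.eye(N)[n]) for n in range(6))
print(ok)  # True
```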

Definition 1.2.2 By a probability law of Y in the fundamental state e₀ we will mean a probability measure μ on R such that

    ∫_R x^n μ(dx) = ⟨e₀, Y^n e₀⟩_ℓ²,   n ∈ N,

which is also called the spectral measure of Y evaluated in the state Y ↦ ⟨e₀, Y e₀⟩_ℓ².
2 .

In this setting the moment generating function defined as

    t ↦ ⟨e₀, e^{tY} e₀⟩_ℓ²

will be used to determine the probability law μ of Y in the state e₀.
We note that in this case the polynomials P_n(x) are orthogonal with respect to μ(dx), since

    ∫_{−∞}^∞ P_n(x) P_m(x) μ(dx) = ⟨e₀, P_n(Y) P_m(Y) e₀⟩_ℓ²
                                  = ⟨P_n(Y) e₀, P_m(Y) e₀⟩_ℓ²
                                  = ⟨e_n, e_m⟩_ℓ²
                                  = δ_{n,m},   n, m ∈ N.
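For Y = Q in the state e₀ the moments ⟨e₀, Qⁿe₀⟩ can be computed directly from the truncated matrix; they match the moments 0, 1, 0, 3, 0, 15, … of the standard Gaussian distribution, anticipating the Gaussian law on hw of Chapter 3. A numerical sketch (illustrative only; the cutoff N is an assumption, chosen large enough that truncation does not affect the listed moments):

```python
import numpy as np

N = 24                                   # cutoff; moments up to order 8 need N > 8
ap = np.diag(np.sqrt(np.arange(1.0, N)), k=-1)
Q = ap + ap.T                            # position operator, sigma = 1
e0 = np.zeros(N); e0[0] = 1.0

def moment(n):
    """n-th moment <e_0, Q^n e_0> of Q in the state e_0."""
    return e0 @ np.linalg.matrix_power(Q, n) @ e0

# Odd moments vanish; even moments are (n-1)!! = 1, 3, 15, 105, ...
print([round(moment(n)) for n in range(1, 9)])  # [0, 1, 0, 3, 0, 15, 0, 105]
```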

1.3 Fock space over a Hilbert space


More generally, the boson Fock space also admits a construction upon any real
separable Hilbert space h with complexification hC , and in this more general
framework it will simply be called the Fock space.
The basic structure and operators of the Fock space over h are similar to those of the simple boson Fock space, however it allows for more degrees of freedom. The boson Fock space ℓ² defined earlier corresponds to the symmetric Fock space over the one-dimensional real Hilbert space h = R.
We will use the conjugation operator h ↦ h̄ on the complexification

    h_C := h ⊕ ih = {h₁ + ih₂ : h₁, h₂ ∈ h}

of h, defined by letting

    h̄ := h₁ − ih₂   for h = h₁ + ih₂,   h₁, h₂ ∈ h.

This conjugation satisfies

    ⟨h̄, k̄⟩_{h_C} = ⟨k, h⟩_{h_C},   h, k ∈ h_C,

i.e., it conjugates the inner product. The elements of h are characterised by the property h̄ = h, and we will call them real. The next definition uses the notion of the symmetric tensor product “◦” in Hilbert spaces.

Definition 1.3.1 The symmetric Fock space over h_C is defined by the direct sum

    Γ_s(h) = ⊕_{n∈N} h_C^{◦n}.

We denote by Ω := 1 ⊕ 0 ⊕ 0 ⊕ · · · the vacuum vector in Γ_s(h). The symmetric Fock space is isomorphic to the complexification of the Wiener space L²(Ω) associated to h in Section 9.2.
The exponential vectors

    E(f) := ∑_{n=0}^∞ f^{⊗n}/√(n!),   f ∈ h_C,

are total in Γ_s(h), and their scalar product in Γ_s(h) is given by

    ⟨E(k₁), E(k₂)⟩ = e^{⟨k₁, k₂⟩_{h_C}}.
hC = e k1 ,k2
hC .

1.3.1 Creation and annihilation operators on Γ_s(h)


The annihilation, creation, position, and momentum operators a⁻(h), a⁺(h), Q(h), P(h), h ∈ h, can be defined as unbounded and closed operators on the Fock space over h, see, e.g., [17, 79, 87]. The creation and annihilation operators a⁺(h) and a⁻(h) are mutually adjoint, and the position and momentum operators

    Q(h) = a⁻(h) + a⁺(h)   and   P(h) = i(a⁻(h) − a⁺(h))

are self-adjoint if h ∈ h is real. The commutation relations of creation, annihilation, position, and momentum are

    [a⁻(h), a⁺(k)] = ⟨h, k⟩_{h_C},
    [a⁻(h), a⁻(k)] = [a⁺(h), a⁺(k)] = 0,
    [Q(h), Q(k)] = [P(h), P(k)] = 0,
    [P(h), Q(k)] = 2i ⟨h, k⟩_{h_C}.

The operators a−(h), a+(h), Q(h), P(h) are unbounded, but their domains
contain the exponential vectors E(f), f ∈ hC. We will need to compose them
with bounded operators on Γs(h), and in order to do so we will adopt the
following convention. Let

    L(E(hC), Γs(h))
      = {B ∈ Lin(span(E(hC)), Γs(h)) : ∃ B∗ ∈ Lin(span(E(hC)), Γs(h))
         such that ⟨E(f), BE(g)⟩ = ⟨B∗E(f), E(g)⟩ for all f, g ∈ hC}

denote the space of linear operators that are defined on the exponential
vectors and that have an "adjoint" that is also defined on the exponential
vectors. Obviously the operators a−(h), a+(h), Q(h), P(h), U(h1, h2) belong to
L(E(hC), Γs(h)). We will say that an expression of the form

    Σj=1ⁿ Xj Bj Yj,

with X1, ..., Xn, Y1, ..., Yn ∈ L(E(hC), Γs(h)) and B1, ..., Bn ∈ B(Γs(h)),
defines a bounded operator on Γs(h), if there exists a bounded operator
M ∈ B(Γs(h)) such that

    ⟨E(f), ME(g)⟩ = Σj=1ⁿ ⟨Xj∗ E(f), Bj Yj E(g)⟩

holds for all f, g ∈ hC. If it exists, this operator is unique because the exponen-
tial vectors are total in Γs(h), and we will then write

    M = Σj=1ⁿ Xj Bj Yj.

1.3.2 Weyl operators

The Weyl operators U(h1, h2) are defined by

    U(h1, h2) = exp(iP(h1) + iQ(h2)) = exp(i(a−(h2 − ih1) + a+(h2 − ih1))),

and they satisfy

    U(h1, h2)U(k1, k2) = exp(i(⟨h2, k1⟩hC − ⟨h1, k2⟩hC)) U(h1 + k1, h2 + k2).

Furthermore, we have U(h1, h2)∗ = U(−h1, −h2) and U(h1, h2)−1 =
U(−h1, −h2). We see that U(h1, h2) is unitary if h1 and h2 are real. These
operators act on the vacuum state Ω = E(0) as

    U(h1, h2)Ω = exp(−(⟨h1, h1⟩hC + ⟨h2, h2⟩hC)/2) E(h1 + ih2)

and on the exponential vectors E(f) as

    U(h1, h2)E(f)
      = exp(−⟨f, h1 + ih2⟩hC − (⟨h1, h1⟩hC + ⟨h2, h2⟩hC)/2) E(f + h1 + ih2).

Exercises
Exercise 1.1 Moments of the normal distribution.
In this exercise we consider an example in which the noncommutativity of
a− and a+ naturally gives rise to a fundamental example of a probability
distribution, namely the normal distribution.
In addition, we will assume the existence of a unit vector 1 ∈ h
(fundamental or empty state) such that a−1 = 0 and ⟨1, 1⟩h = 1. In particular,
this yields the rule

    ⟨a+u, 1⟩h = ⟨u, a−1⟩h = 0.

Based on this rule, check by an elementary computation that the first four
moments of the centered N(0, σ²) distribution can be recovered from ⟨Qⁿ1, 1⟩h
with n = 1, 2, 3, 4.
In the following chapters this problem will be addressed in a systematic
way by considering other algebras and probability distributions, as well as the
problem of joint distributions such as that of the couple (P, Q).
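The moment computation of Exercise 1.1 can be checked numerically by truncating the ladder operators to finite matrices. The sketch below is an editorial addition, not part of the text; the value of σ and the truncation level N are arbitrary, and the truncation is exact for moments of order below N.

```python
import numpy as np

sigma = 1.7          # assumed value, for illustration only
N = 8                # truncation level

# Truncated annihilation operator on span{e_0, ..., e_{N-1}}:
# a^- e_n = sigma*sqrt(n) e_{n-1}, and a^+ is its transpose.
a_minus = np.diag(sigma * np.sqrt(np.arange(1, N)), k=1)
a_plus = a_minus.T
Q = a_minus + a_plus

# <Q^n e_0, e_0> = (Q^n)[0, 0] gives the N(0, sigma^2) moments.
moments = [np.linalg.matrix_power(Q, n)[0, 0] for n in range(1, 5)]
print(moments)  # approximately [0, sigma^2, 0, 3*sigma^4]
```

The odd moments vanish, while the second and fourth moments recover σ² and 3σ⁴, as in the direct computation requested by the exercise.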
2

Real Lie algebras

Algebra is the offer made by the devil to the mathematician. The


devil says: “I will give you this powerful machine, it will answer any
question you like. All you need to do is give me your soul: give up
geometry and you will have this marvelous machine”.
(M. Atiyah, Collected works.)
In this chapter we collect the definitions and properties of the real Lie
algebras that will be needed in the sequel. We consider in particular the
Heisenberg–Weyl Lie algebra hw, the oscillator Lie algebra osc, and the Lie
algebras sl2 (R), so(2), and so(3) as particular cases. Those examples and their
relationships with classical probability distributions will be revisited in more
detail in the subsequent chapters.

2.1 Real Lie algebras


Definition 2.1.1 A Lie algebra g over a field K is a K-vector space with a
bilinear map [·, ·] : g × g −→ g, called the Lie bracket, that satisfies the
following two properties.
1. Anti-symmetry: for all X, Y ∈ g, we have
[X, Y] = −[Y, X].
2. Jacobi identity: for all X, Y, Z ∈ g, we have
     
X, [Y, Z] + Y, [Z, X] + Z, [X, Y] = 0.
For K = R, we call g a real Lie algebra, for K = C a complex Lie algebra.
Definition 2.1.2 Let g be a complex Lie algebra. An involution on g is a
conjugate linear map ∗ : g −→ g such that


i) (X ∗ )∗ = X for all X ∈ g,
ii) [X, Y]∗ = −[X ∗ , Y ∗ ] for all X, Y ∈ g.
In the sequel we will only consider real Lie algebras, i.e., Lie algebras over
either the field K = R of real numbers, or involutive Lie algebras over the field
K = C of complex numbers.
Remark 2.1.3 Let g be a real Lie algebra. Then the complex vector space

    gC := C ⊗R g = g ⊕ ig

is a complex Lie algebra with the Lie bracket

    [X + iY, X′ + iY′] := [X, X′] − [Y, Y′] + i([X, Y′] + [Y, X′]),

for X, X′, Y, Y′ ∈ g. In addition,
1. the conjugate linear map

    ∗ : gC −→ gC,
    Z = X + iY −→ Z∗ = −X + iY,

defines an involution on gC, i.e., it satisfies

    (Z∗)∗ = Z   and   [Z1, Z2]∗ = [Z2∗, Z1∗]

for all Z, Z1, Z2 ∈ gC;
2. the functor1 g  −→ (gC , ∗) is an isomorphism between the category of real
Lie algebras and the category of involutive complex Lie algebras. The
inverse functor associates to an involutive complex Lie algebra (g, ∗) the
real Lie algebra
gR = {X ∈ g : X ∗ = −X},
where the Lie bracket on gR is the restriction of the Lie bracket of g. Note
that [·, ·] leaves gR invariant, since, if X ∗ = −X, Y ∗ = −Y, then
[X, Y]∗ = −[X ∗ , Y ∗ ] = −[(−X), (−Y)] = −[X, Y].

2.1.1 Adjoint action


In addition to the Lie algebra g we will consider the Lie group generated by all
exponentials of the form gt := etY/2 , Y ∈ g.

1 This functor is used for the equivalence of real Lie algebras and complex Lie algebras with an
involution by associating a complex Lie algebra with an involution to every real Lie algebra, and
vice versa. Categories are outside the scope of this book.

The adjoint action of gt := etY/2 on X is defined by


X(t) = Adgt (X) := etY/2 Xe−tY/2 , t ∈ R.

2.2 Heisenberg–Weyl Lie algebra hw


The Heisenberg–Weyl Lie algebra hw is the three-dimensional Lie algebra
with basis {P, Q, E} satisfying the involution
P∗ = P, Q∗ = Q, E∗ = E,
and the commutation relations
[P, Q] = −2iE, [P, E] = [Q, E] = 0.

2.2.1 Boson Fock space representation


As in Chapter 1, the Heisenberg–Weyl Lie algebra can be implemented using
a− and a+ through the position and momentum operators

    Q = a− + a+   and   P = i(a+ − a−),

which satisfy

    [P, Q] = PQ − QP = −2iE,

with E = σ²I, and both Q and P have Gaussian laws.

2.2.2 Matrix representation


Under the relations Q = a− + a+ and P = i(a+ − a− ), the Heisenberg–Weyl
Lie algebra hw has the matrix representation
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
0 σ 0 0 0 0 0 0 σ2
a− = ⎣ 0 0 0 ⎦ , a+ = ⎣ 0 0 σ ⎦ , E = ⎣ 0 0 0 ⎦ .
0 0 0 0 0 0 0 0 0
Here the exponential exp(αa− + βa+ + γ E) can be computed as
⎛⎡ ⎤⎞ ⎡ ⎤
0 ασ γ σ 2 1 ασ γ σ 2 + αβσ 2 /2
exp ⎝⎣ 0 0 βσ ⎦⎠ = ⎣ 0 1 βσ ⎦,
0 0 0 0 0 1
which, however, is not sufficient to recover the Gaussian moment
generating function of Q hinted at in Exercise 1.1.
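Since the matrix αa− + βa+ + γE above is nilpotent, the exponential series terminates and the stated closed form can be verified symbolically. The following sketch is an editorial addition (assuming SymPy; symbols as in the display above):

```python
from sympy import Matrix, symbols

alpha, beta, gamma, sigma = symbols('alpha beta gamma sigma')

# alpha*a^- + beta*a^+ + gamma*E in the 3x3 representation above
X = Matrix([[0, alpha*sigma, gamma*sigma**2],
            [0, 0, beta*sigma],
            [0, 0, 0]])

# X^3 = 0, so exp(X) = I + X + X^2/2 exactly
expX = Matrix.eye(3) + X + X*X/2

expected = Matrix([[1, alpha*sigma, gamma*sigma**2 + alpha*beta*sigma**2/2],
                   [0, 1, beta*sigma],
                   [0, 0, 1]])
print((expX - expected).expand())  # zero matrix
```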

2.2.3 Representation on h = L2 (R, dt)


A representation of hw on h = L²(R, dt) can also be constructed by letting

    Pφ(t) := (2/i)φ′(t)   and   Qφ(t) := tφ(t),   t ∈ R, φ ∈ S(R).

2.3 Oscillator Lie algebra osc


In addition to P, Q, and E we consider a fourth symmetric basis element N to
the Heisenberg–Weyl Lie algebra hw, and we impose the relations
[N, P] = −iQ, [N, Q] = iP, [N, E] = 0,
and the involution N ∗ = N. This yields the oscillator Lie algebra
osc = span {N, P, Q, E}.

2.3.1 Matrix representation


The oscillator Lie algebra osc has the matrix representation

    a− = ⎡ 0 σ 0 ⎤     a+ = ⎡ 0 0 0 ⎤
         ⎢ 0 0 0 ⎥          ⎢ 0 0 σ ⎥
         ⎣ 0 0 0 ⎦          ⎣ 0 0 0 ⎦

    E = ⎡ 0 0 σ² ⎤     N = ⎡ 0 0 0 ⎤
        ⎢ 0 0 0  ⎥         ⎢ 0 1 0 ⎥
        ⎣ 0 0 0  ⎦         ⎣ 0 0 0 ⎦

2.3.2 Boson Fock space representation


The oscillator Lie algebra osc can be written as the four-dimensional Lie
algebra with basis

    {N, a+, a−, E},

where N and E are given on the boson Fock space ℓ² by

    N en = n en   and   E en = λ en,   n ∈ N,

where N = λ−1 a◦, λ > 0, and a◦ is the number operator. Recall that here the
creation and annihilation operators

    a− en = √n en−1   and   a+ en = √(n + 1) en+1

satisfy

    a+ = (Q + iP)/2   and   a− = (Q − iP)/2.
The Lie bracket [·, ·] satisfies

[N, a± ] = ±a± , [a− , a+ ] = E, [E, N] = [E, a± ] = 0.
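These bracket relations can be verified directly on the 3 × 3 matrix representation of Section 2.3.1. The following sketch is an editorial check (the value of σ is arbitrary):

```python
import numpy as np

sigma = 2.0  # arbitrary
E12 = np.zeros((3, 3)); E12[0, 1] = 1.0
E23 = np.zeros((3, 3)); E23[1, 2] = 1.0
E13 = np.zeros((3, 3)); E13[0, 2] = 1.0
E22 = np.zeros((3, 3)); E22[1, 1] = 1.0

# matrix representation of Section 2.3.1
a_minus, a_plus, E, N = sigma*E12, sigma*E23, sigma**2*E13, E22

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(N, a_plus), a_plus)     # [N, a^+] = a^+
assert np.allclose(comm(N, a_minus), -a_minus)  # [N, a^-] = -a^-
assert np.allclose(comm(a_minus, a_plus), E)    # [a^-, a^+] = E
assert np.allclose(comm(E, N), 0*E)             # E is central
```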

2.3.3 The harmonic oscillator


Due to the relation

    N = (1/λ) a+a− = (P² + Q² − 2σ²I)/(4λ),

the operator N is also known as the Hamiltonian of the harmonic oscillator.
This is by analogy with classical mechanics, where the Hamiltonian of the
harmonic oscillator is given by

    H = (m/2)|ẋ|² + (k/2)x² = (1/(2m))p² + (k/2)x² = T + U,
with x the position of the particle (= elongation of the spring from its rest
position), ẋ its velocity, p = mẋ its momentum, and the two terms T and U are
respectively the kinetic energy

    T = (1/2)m|ẋ|² = (1/(2m))p²

and the energy stored in the spring

    U = (k/2)x²,
with m the mass of the particle and k Hooke’s constant, a characteristic of the
spring.

2.4 Lie algebra sl2 (R)


Consider the three-dimensional real Lie algebra

sl2 (R) = span {B+ , B− , M}

with basis B+ , B− , M, Lie bracket

[M, B± ] = ±2B± , [B− , B+ ] = M,



and the involution (B+ )∗ = B− , M ∗ = M. Letting

X = B+ + B− + M,

we will check that X has a gamma distribution with parameter β > 0, provided

Me0 = βe0 and B− e0 = 0.

2.4.1 Boson Fock space representation


For any c > 0 we can define a representation of sl2(R) on ℓ² by

    ρc(B+)ek = √((k + c)(k + 1)) ek+1,
    ρc(M)ek = (2k + c)ek,
    ρc(B−)ek = √(k(k + c − 1)) ek−1,

where e0, e1, ... is an orthonormal basis of ℓ².
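A quick editorial check: the ρc relations satisfy the sl2(R) brackets, as can be seen on truncated matrices (the parameter c and the truncation level K below are arbitrary; the check is restricted to an interior block to avoid truncation edge effects):

```python
import numpy as np

c, K = 0.75, 12
k = np.arange(K)

Bp = np.diag(np.sqrt((k[:-1] + c)*(k[:-1] + 1)), k=-1)  # rho_c(B+): e_k -> e_{k+1}
Bm = Bp.T                                                # rho_c(B-): e_k -> e_{k-1}
M = np.diag(2*k + c)

def comm(A, B):
    return A @ B - B @ A

s = slice(0, K - 1)  # interior block, away from the truncation edge
assert np.allclose(comm(Bm, Bp)[s, s], M[s, s])    # [B-, B+] = M
assert np.allclose(comm(M, Bp)[s, s], 2*Bp[s, s])  # [M, B+] = 2B+
assert np.allclose(comm(M, Bm)[s, s], -2*Bm[s, s]) # [M, B-] = -2B-
```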


Letting

    M := 1/2 + a◦,   B− := (1/2)(a−)²,   B+ := (1/2)(a+)²,

generates the representation

    [B−, B+] = M,   [M, B−] = −2B−,   [M, B+] = 2B+,

of sl2(R). On the other hand, by defining

    a+ := (1/2)(a+)² + (1/2)a◦   and   a− := (1/2)(a−)² + (1/2)a◦,

we have

    a− + a+ = B− + B+ + M − 1/2   and   i(a− − a+) = i(B− − B+),

and a− + a+ + 1/2 has a gamma law, while i(a− − a+) has a continuous
binomial law with parameter 1/2, under the conditions

    Me0 = βe0   and   B−e0 = 0.

2.4.2 Matrix representation


The Lie algebra sl2(R) can be represented by 2 × 2 matrices with trace zero,
i.e.,

    B− = ⎡ 0 i ⎤     B+ = ⎡ 0 0 ⎤     M = ⎡ −1 0 ⎤
         ⎣ 0 0 ⎦          ⎣ i 0 ⎦         ⎣  0 1 ⎦

however this matrix representation is not compatible with the involution of the
Lie algebra. On the other hand, taking

    B− = ⎡ 0 0 ⎤     B+ = ⎡ 0 1 ⎤     M = ⎡ −1 0 ⎤
         ⎣ 1 0 ⎦          ⎣ 0 0 ⎦         ⎣  0 1 ⎦

satisfies the correct involution, but with the different commutation relation
[M, B±] = ∓2B±.

2.4.3 Adjoint action


Lemma 2.4.1 Letting Y = B− − B+, the adjoint action of gt := etY/2 on Xβ
is given by

    etY/2 Xβ e−tY/2 = et(ad Y)/2 Xβ = (cosh(t) + β sinh(t)) Xγ(β,t),

where

    γ(β, t) = (β cosh(t) + sinh(t))/(cosh(t) + β sinh(t)),   t ∈ R+.

See Section 4.4 of [46] for a proof of Lemma 2.4.1.

2.4.4 Representation of sl2(R) on L²C(R+, γβ(τ)dτ)

Denoting by

    γβ(τ) = 1{τ≥0} (τ^(β−1)/Γ(β)) e^(−τ),   τ ∈ R,

the gamma probability density function on R with shape parameter β > 0, a
representation {M, B−, B+} of sl2(R) can be constructed by letting

    M := β + 2ã◦,   B− := ã− − ã◦,   B+ := ã+ − ã◦,

where ã− = τ ∂/∂τ, i.e.,

    ã− f(τ) = τ f′(τ),   f ∈ C∞c(R),

as in [93], [95], [97]. The adjoint ã+ of ã− with respect to the gamma density
γβ(τ) satisfies

    ∫0∞ g(τ)ã−f(τ)γβ(τ)dτ = ∫0∞ f(τ)ã+g(τ)γβ(τ)dτ,   f, g ∈ C∞c(R),   (2.1)

and is given by ã+ = (τ − β) − ã−, i.e.,

    ã+ f(τ) = (τ − β)f(τ) − τ f′(τ) = (τ − β)f(τ) − ã− f(τ).

The operator ã◦ defined as

    ã◦ = ã+ ∘ (∂/∂τ) = −(β − τ) ∂/∂τ − τ ∂²/∂τ²

has the Laguerre polynomials Lβn with parameter β as eigenfunctions:

    ã◦ Lβn(τ) = nLβn(τ),   n ∈ N,
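As an editorial check, this eigenvalue relation can be verified symbolically, assuming the book's Lβn corresponds to the classical generalized Laguerre polynomial L_n^(β−1) (SymPy's `assoc_laguerre(n, beta - 1, tau)`):

```python
from sympy import symbols, assoc_laguerre, simplify, diff

tau, beta = symbols('tau beta', positive=True)
n = 4  # arbitrary degree

def a_circ(f):
    # a~o f = -(beta - tau) f' - tau f''
    return -(beta - tau)*diff(f, tau) - tau*diff(f, tau, 2)

# assumed identification of the book's Laguerre convention
L = assoc_laguerre(n, beta - 1, tau)
print(simplify(a_circ(L) - n*L))  # 0
```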

and the multiplication operator ã− + ã+ = τ − β has a compensated gamma
law in the vacuum state Ω = 1R+ in L²C(R+, γβ(τ)dτ). Letting

    Q̃ = B− + B+ = ã− + ã+ − 2ã◦,
    P̃ = i(B− − B+) = i(ã− − ã+),
    M = τ − Q̃ = τ − B− − B+ = τ − ã− − ã+ + 2ã◦,

i.e.,

    Q̃ = τ − β + 2(β − τ) ∂/∂τ + 2τ ∂²/∂τ²,     (2.2a)
    P̃ = 2iτ ∂/∂τ − i(τ − β),
    M = β − 2(β − τ) ∂/∂τ − 2τ ∂²/∂τ²,          (2.2b)

we have

    [P̃, Q̃] = 2iM,   [P̃, M] = 2iQ̃,   [Q̃, M] = −2iP̃,

and Q̃ + M is the multiplication operator

    Q̃ + M = τ,

hence Q̃ + M has the gamma law with parameter β in the vacuum state
Ω = 1R+ in L²C(R+, γβ(τ)dτ).
+ β
We will show in Chapter 6 that when |α| < 1, the law (or spectral measure)
of αM + Q̃ is absolutely continuous with respect to the Lebesgue measure on
R. In particular, for α = 0, Q̃ and P̃ have continuous binomial distributions,
while M + Q̃ and M − Q̃ are gamma distributed (α = ±1). On the other hand,
Q̃ + αM has a geometric distribution when |α| > 1, cf. [1] and Exercise 6.3.
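The commutation relation [P̃, Q̃] = 2iM can be checked symbolically on a generic smooth function, using the differential-operator expressions of ã−, ã+, ã◦ above. This sketch is an editorial addition (assuming SymPy):

```python
from sympy import Function, I, symbols, expand, diff

tau, beta = symbols('tau beta', positive=True)
f = Function('f')(tau)

am = lambda g: tau*diff(g, tau)                                   # a~-
ap = lambda g: (tau - beta)*g - tau*diff(g, tau)                  # a~+
ao = lambda g: -(beta - tau)*diff(g, tau) - tau*diff(g, tau, 2)   # a~o

Qt = lambda g: am(g) + ap(g) - 2*ao(g)       # Q~
Pt = lambda g: I*(am(g) - ap(g))             # P~
M = lambda g: beta*g + 2*ao(g)               # M

# [P~, Q~] f = 2i M f on a generic f
lhs = Pt(Qt(f)) - Qt(Pt(f))
print(expand(lhs - 2*I*M(f)))  # 0
```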

2.4.5 Construction on the one-dimensional Gaussian space - β = 1/2

When β = 1/2, writing τ = x²/2, the operators ã−, ã+, ã◦ are identified to
the operators

    ã◦τ = (1/2) αx+ αx−,   ã−τ = (1/2) Q αx−,   ã+τ = (1/2) αx+ Q,

acting on the variable x, where

    Q = αx− + αx+   and   P = i(αx− − αx+),

and

    αx− = ∂/∂x   and   αx+ = x − ∂/∂x,

i.e., Q is multiplication by x and P = −ix + 2i ∂/∂x, with [P, Q] = 2iI, and we
have

    ã◦τ f(τ) = (1/2) αx+ αx− f(x²/2),
    ã−τ f(τ) = (1/2) Q αx− f(x²/2),
    ã+τ f(τ) = (1/2) αx+ Q f(x²/2).
The above relations have been exploited in various contexts, see, e.g., [64],
[66], [91]. In [91], these relations have been used to construct a Malliavin
calculus on Poisson space directly from the Gaussian case. In [66] they are
used to prove logarithmic Sobolev inequalities for the exponential measure.
Taking β = 1/2, a representation {M, B−, B+} of sl2(R) can be con-
structed as

    M = 1/2 + 2ã◦τ = (αx− αx+ + αx+ αx−)/2 = (P² + Q²)/4,
    B− = ã−τ − ã◦τ = (1/2)(αx−)²,
    B+ = ã+τ − ã◦τ = (1/2)(αx+)².
In fact, letting

    Q̂ := B− + B+ = (1/2)((αx−)² + (αx+)²) = (P² − Q²)/4

and

    P̂ := i(B− − B+) = (i/2)((αx−)² − (αx+)²) = (PQ + QP)/4,

we have the commutation relations

    [M, P̂] = −2iQ̂,   [M, Q̂] = 2iP̂,   [P̂, Q̂] = 2iM,

and

    Q̂ + αM = ((α + 1)/2)(P²/2) + ((α − 1)/2)(Q²/2),

and

    M + αQ̂ = ((α + 1)/2)(P²/2) + ((1 − α)/2)(Q²/2).

2.4.6 Construction on the two-dimensional Gaussian space - β = 1

When β = 1 and γ1(τ) is the exponential probability density, we let

    αx− := ∂/∂x,   αy− := ∂/∂y,   αx+ := x − ∂/∂x,   αy+ := y − ∂/∂y

denote the partial annihilation and creation operators on the two-variable
boson Fock space

    Γ(Ce1 ⊕ Ce2) ≃ L²C(R²; (1/(2π)) e^(−(x²+y²)/2) dxdy).

The next lemma is valid when β = 1, in which case the exponential random
variable τ = (x² + y²)/2 can be represented as

    τ = (1/2)((αx+ + αx−)² + (αy+ + αy−)²).
Lemma 2.4.2 The operators ã−, ã+, ã◦ are identified to operators on
Γ(Ce1 ⊕ Ce2), acting on the variable τ = (x² + y²)/2, by the relations

    ã◦ = (1/2)(αx+ αx− + αy+ αy−),           (2.3a)
    ã+ = −(1/2)((αx+)² + (αy+)²) − ã◦,       (2.3b)
    ã− = −(1/2)((αx−)² + (αy−)²) − ã◦,       (2.3c)

and we have

    P̃ = i(ã− − ã+) = (i/2)((αx+)² + (αy+)² − (αx−)² − (αy−)²).

Proof: From (2.3b) and (2.3c) we have

    ((αx+)² + (αy+)²) f((x² + y²)/2)
      = ((x − ∂/∂x)(x − ∂/∂x) + (y − ∂/∂y)(y − ∂/∂y)) f((x² + y²)/2)
      = (x² + y² − 2)f(τ) − 2(x² + y² − 1)f′(τ) + (x² + y²)f″(τ)
      = −2((1 − τ)f(τ) + τ f′(τ) − (1 − τ)f′(τ) − τ f″(τ))
      = −2(ã+ + ã◦)f(τ),

and

    ((αx−)² + (αy−)²) f((x² + y²)/2)
      = ((∂/∂x)(∂/∂x) + (∂/∂y)(∂/∂y)) f((x² + y²)/2)
      = 2f′(τ) + (x² + y²)f″(τ)
      = −2(−τ f′(τ) − (1 − τ)f′(τ) − τ f″(τ))
      = −2(ã− + ã◦)f(τ).

2.5 Affine Lie algebra


The affine algebra can be viewed as the sub-algebra of sl2 (R) generated by
   
1 0 0 1
X1 = , X2 = ,
0 0 0 0
with the commutation relation

[X1 , X2 ] = X2 ,

and the affine group can be constructed as the group of 2 × 2 matrices of the
form

    g = e^(x1 X1 + x2 X2) = ⎡ a b ⎤ = ⎡ e^(x1)  x2 e^(x1/2) sinch(x1/2) ⎤
                            ⎣ 0 1 ⎦   ⎣   0                1            ⎦

a > 0, b ∈ R, where

    sinch x = (sinh x)/x,   x ∈ R.
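The closed form of this matrix exponential can be verified numerically; note that x2 e^(x1/2) sinch(x1/2) = x2(e^(x1) − 1)/x1. The sketch below is an editorial addition with arbitrary values of x1, x2, and a plain truncated-series exponential to stay self-contained:

```python
import numpy as np

def expm(A, terms=40):
    # matrix exponential via truncated power series (adequate for small A)
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

x1, x2 = 0.8, -1.3   # arbitrary
A = np.array([[x1, x2], [0.0, 0.0]])  # x1*X1 + x2*X2

sinch = lambda x: np.sinh(x)/x
expected = np.array([[np.exp(x1), x2*np.exp(x1/2)*sinch(x1/2)],
                     [0.0, 1.0]])
assert np.allclose(expm(A), expected)
```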
x

The affine group also admits a classical representation on L²(R) given by

    (U(g)φ)(t) = a^(−1/2) φ((t − b)/a),   φ ∈ L²(R),

where

    g = ⎡ a b ⎤ ,   a > 0, b ∈ R,
        ⎣ 0 1 ⎦

and the modified representation on h = L²C(R, γβ(|τ|)dτ) defined by

    (Û(g)φ)(τ) = φ(aτ) e^(ibτ) e^(−(a−1)|τ|/2) a^(β/2),   φ ∈ L²C(R, γβ(|τ|)dτ),   (2.4)

obtained by Fourier transformation and a change of measure. We have

    Û(X1)φ(τ) = (d/dt)|t=0 Û(e^(itX1))φ(τ) = −i(P/2)φ(τ),

with P = i(β − |τ|) + 2iτ ∂/∂τ, and

    Û(X2)φ(τ) = (d/dt)|t=0 Û(e^(itX2))φ(τ) = iτφ(τ) = i(Q + M)φ(τ),

τ ∈ R, where P, Q, and M are defined in (2.2a)-(2.2b). In other words we have

    Û(X1) = −(i/2)P   and   Û(X2) = i(Q + M),

hence we have

    Û(e^(x1 X1 + x2 X2)) = exp(−ix1 P/2 + ix2(Q + M)),

under the identification

    X1 = (1/2)(B− + B+)   and   X2 = i(B− + B+ + 2M − 1/2).

2.6 Special orthogonal Lie algebras


In this section we focus on special orthogonal Lie algebras so(2) and so(3).

2.6.1 Lie algebra so(2)


The Lie algebra so(2) of SO(2) is commutative and generated by
 
0 −1
ξ1 = .
1 0

By direct exponentiation we have

    gt = exp ⎡ 0 −θ ⎤ = ⎡ cos θ  −sin θ ⎤ ,   θ ∈ R+.
             ⎣ θ  0 ⎦   ⎣ sin θ   cos θ ⎦

2.6.2 Lie algebra so(3)


The Lie algebra so(3) of SO(3) is noncommutative and has a basis consisting
of the three anti-Hermitian elements ξ1 , ξ2 , ξ3 , with the relations

[ξ1 , ξ2 ] = ξ3 , [ξ2 , ξ3 ] = ξ1 , [ξ3 , ξ1 ] = ξ2 .


Let x = (x1, x2, x3) ∈ R³; then

    ξ(x) = x1ξ1 + x2ξ2 + x3ξ3

defines a general element of so(3), and it is anti-Hermitian, i.e., ξ(x)∗ = −ξ(x).


We can take e.g.,
⎡ ⎤ ⎡ ⎤ ⎡ ⎤
0 1 0 0 0 −1 0 0 0
ξ1 = ⎣ −1 0 ⎦
0 , ξ2 = ⎣ 0 0 0 ⎦ , ξ3 = ⎣ 0 0 1 ⎦.
0 0 0 1 0 0 0 −1 0

We note that by Rodrigues' rotation formula, every g ∈ SO(3) can be parame-
terised as

    g = e^(x ξ1 + y ξ2 + z ξ3)
      = e^(a(z,−y,x))
      = Id + sin(φ) a(u1, u2, u3) + (1 − cos φ) a(u1, u2, u3)²,

for some x, y, z ∈ R, where

    a(u1, u2, u3) = ⎡ 0  −u3 −u2 ⎤
                    ⎢ u3   0 −u1 ⎥
                    ⎣ u2  u1   0 ⎦

and φ = √(x² + y² + z²) is the angle of rotation about the axis

    (u1, u2, u3) := (1/√(x² + y² + z²)) (z, −y, x)
                  = (cos α, sin α cos θ, sin α sin θ) ∈ S².
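Rodrigues' formula can be checked numerically: for any unit vector u, the skew-symmetric matrix a(u) above satisfies a(u)³ = −a(u), so the exponential series collapses to the stated closed form. This sketch is an editorial addition (arbitrary axis and angle, series exponential for self-containedness):

```python
import numpy as np

def a(u1, u2, u3):
    # skew-symmetric matrix in the convention above
    return np.array([[0.0, -u3, -u2],
                     [u3, 0.0, -u1],
                     [u2, u1, 0.0]])

def expm(A, terms=60):
    out, term = np.eye(3), np.eye(3)
    for n in range(1, terms):
        term = term @ A / n
        out += term
    return out

u = np.array([0.6, 0.48, 0.64])   # arbitrary direction
u /= np.linalg.norm(u)
phi = 1.234                        # arbitrary angle
A = a(*u)

rodrigues = np.eye(3) + np.sin(phi)*A + (1 - np.cos(phi))*(A @ A)
assert np.allclose(expm(phi*A), rodrigues)
assert np.allclose(rodrigues @ rodrigues.T, np.eye(3))  # indeed a rotation
```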

2.6.3 Finite-dimensional representations of so(3)


We consider a family of finite-dimensional representations of so(3) in terms of
the basis ξ0 , ξ+ , ξ− of so(3) defined by
ξ0 = 2iξ3 , ξ+ = i(ξ1 + iξ2 ), ξ− = i(ξ1 − iξ2 ).
In this basis the commutation relations of so(3) take the form
[ξ0 , ξ± ] = ±2ξ± and [ξ+ , ξ− ] = ξ0 ,
and ξ0∗ = ξ0 , ξ+∗ = ξ− , ξ−∗ = ξ+ . This is close to a representation of sl2 (R),
although not with the correct involution. Letting n ∈ N be a positive integer and
given e−n , e−n−2 , . . . , en−2 , en an orthonormal basis of an n + 1-dimensional
Hilbert space, we define a representation of so(3) by

    ξ0 ek = k ek,                                          (2.5a)

    ξ+ ek = 0                                  if k = n,
    ξ+ ek = (1/2)√((n − k)(n + k + 2)) ek+2    otherwise,  (2.5b)

    ξ− ek = 0                                  if k = −n,
    ξ− ek = (1/2)√((n + k)(n − k + 2)) ek−2    otherwise.  (2.5c)

In order to get back a representation in terms of the basis ξ1, ξ2, ξ3, we have

    ξ1 = −(i/2)(ξ− + ξ+),   ξ2 = (1/2)(ξ− − ξ+),   ξ3 = −(i/2)ξ0.
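As an editorial check, the matrices defined by (2.5a)-(2.5c) satisfy the stated so(3) brackets exactly for any n; below n = 3 is chosen arbitrarily:

```python
import numpy as np

n = 3
ks = np.arange(-n, n + 1, 2)         # basis labels -n, -n+2, ..., n
dim = len(ks)                         # n + 1
idx = {k: i for i, k in enumerate(ks)}

xi0 = np.diag(ks.astype(float))
xip = np.zeros((dim, dim))
xim = np.zeros((dim, dim))
for k in ks:
    if k != n:
        xip[idx[k + 2], idx[k]] = 0.5*np.sqrt((n - k)*(n + k + 2))
    if k != -n:
        xim[idx[k - 2], idx[k]] = 0.5*np.sqrt((n + k)*(n - k + 2))

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(xi0, xip), 2*xip)   # [xi0, xi+] = 2 xi+
assert np.allclose(comm(xi0, xim), -2*xim)  # [xi0, xi-] = -2 xi-
assert np.allclose(comm(xip, xim), xi0)     # [xi+, xi-] = xi0
```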

2.6.4 Two-dimensional representation of so(3)


For n = 1, we get the two-dimensional representation

    ξ0 = ⎡ 1  0 ⎤     ξ+ = ⎡ 0 1 ⎤     ξ− = ⎡ 0 0 ⎤
         ⎣ 0 −1 ⎦          ⎣ 0 0 ⎦          ⎣ 1 0 ⎦

with respect to the basis {e1, e−1}, or

    ξ1 = −(i/2) ⎡ 0 1 ⎤     ξ2 = −(1/2) ⎡ 0 1 ⎤     ξ3 = −(i/2) ⎡ 1  0 ⎤
                ⎣ 1 0 ⎦                 ⎣ −1 0 ⎦                ⎣ 0 −1 ⎦

In this representation we get

    ξ(x) = −(i/2) ⎡ x3        x1 − ix2 ⎤
                  ⎣ x1 + ix2  −x3      ⎦

2.6.5 Adjoint action


We note that

    C = ξ1² + ξ2² + ξ3²

commutes with the basis elements ξ1, ξ2, ξ3, e.g.,

    [ξ1, C] = [ξ1, ξ1²] + [ξ1, ξ2²] + [ξ1, ξ3²]
            = 0 + ξ3ξ2 + ξ2ξ3 − ξ2ξ3 − ξ3ξ2
            = 0,

where we used the Leibniz formula for the commutator, i.e., the fact that we
always have

    [a, bc] = [a, b]c + b[a, c].

The element C is called the Casimir operator.


Let us now study the commutator of two general elements ξ(x), ξ(y) of
so(3), with x, y ∈ R³. We have

    [ξ(x), ξ(y)] = [x1ξ1 + x2ξ2 + x3ξ3, y1ξ1 + y2ξ2 + y3ξ3]
                 = (x2y3 − x3y2)ξ1 + (x3y1 − x1y3)ξ2 + (x1y2 − x2y1)ξ3
                 = ξ(x × y),

where x × y denotes the cross product or vector product of two vectors x and y
in three-dimensional space,

    x × y = (x2y3 − x3y2, x3y1 − x1y3, x1y2 − x2y1).
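This cross-product identity can be checked numerically on the explicit matrices ξ1, ξ2, ξ3 given above, with arbitrary vectors x and y (an editorial addition):

```python
import numpy as np

xi1 = np.array([[0, 1, 0], [-1, 0, 0], [0, 0, 0.0]])
xi2 = np.array([[0, 0, -1], [0, 0, 0], [1, 0, 0.0]])
xi3 = np.array([[0, 0, 0], [0, 0, 1], [0, -1, 0.0]])

def xi(v):
    return v[0]*xi1 + v[1]*xi2 + v[2]*xi3

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

bracket = xi(x) @ xi(y) - xi(y) @ xi(x)
assert np.allclose(bracket, xi(np.cross(x, y)))  # [xi(x), xi(y)] = xi(x x y)
```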


This shows that the element exp ξ(x) of the Lie group SO(3) acts on so(3) as
a rotation. More precisely, we have the following result.

Lemma 2.6.1 Let x, y ∈ R³; then we have

    Ad(exp ξ(x))ξ(y) = ξ(Rx(y)),

where Rx denotes the rotation around the axis given by x, by the angle ||x||.

Proof: Recall that the adjoint action of a Lie group of matrices on its Lie
algebra is defined by

    Ad(exp(X))(Y) = exp(X) Y exp(−X).


It is related to the adjoint action of the Lie algebra on itself,

    ad(X)Y = [X, Y],

by

    Ad(exp(X))(Y) = exp(ad(X))(Y).

We already checked that

    ad(ξ(x))ξ(y) = [ξ(x), ξ(y)] = ξ(x × y).


We now have to compute the action of the exponential of ad(ξ(x)), and we will
choose a convenient basis for this purpose. Let

    e1 = x/||x||,

choose for e2 any unit vector orthogonal to e1, and set e3 = e1 × e2. Then we
have

    x × ej = 0          if j = 1,
           = ||x|| e3   if j = 2,
           = −||x|| e2  if j = 3.


We check that the action of Ad(exp ξ(x)) on this basis is given by

    Ad(exp ξ(x))ξ(e1) = Σn=0∞ (1/n!)(ad ξ(x))ⁿ ξ(e1)
                      = ξ(e1) + ξ(x × e1) + (1/2)ξ(x × (x × e1)) + · · ·
                      = ξ(e1) = ξ(Rx(e1)),

since x × e1 = 0,

    Ad(exp ξ(x))ξ(e2) = ξ(e2) + ξ(x × e2) + (1/2)ξ(x × (x × e2)) + · · ·
                      = ξ(e2) + ||x||ξ(e3) − (||x||²/2)ξ(e2) + · · ·
                      = ξ(cos(||x||)e2 + sin(||x||)e3)
                      = ξ(Rx(e2)),

and

    Ad(exp ξ(x))ξ(e3) = ξ(e3) + ξ(x × e3) + (1/2)ξ(x × (x × e3)) + · · ·
                      = ξ(e3) − ||x||ξ(e2) − (||x||²/2)ξ(e3) + · · ·
                      = ξ(cos(||x||)e3 − sin(||x||)e2)
                      = ξ(Rx(e3)).

Notes
Relation (2.3a) has been used in [92] to study the relationship between
the stochastic calculus of variations on the Wiener and Poisson spaces, cf.
also [64].

Exercises
Exercise 2.1 Consider the Weyl type representation, defined as follows for a
subgroup of sl2(R). Given z = u + iv ∈ C, u < 1/2, define the operator Wz as

    Wz f(x) = (1/√(1 − 2u)) f(x/(1 − 2u)) exp(−ux/(1 − 2u) + iv(1 − x)).

1. Show that the operator Wz is unitary on L² with W0 = Id, and that for any
λ = κ + iζ, λ′ = κ′ + iζ′ ∈ C we have

    Wλ Wλ′ = W(κ+κ′−2κκ′)+i(ζ+ζ′/(1−2κ)),

    (d/dt) Wtλ |t=0 = λã+ − λ̄ã−,   λ = κ + iζ ∈ C,

    Wis Wu = exp(2ius/(1 − 2u)) Wu Wis,   u < 1/2, s ∈ R.

Conclude that Wλ can be extended to L²(R; C), provided |κ| < 1/2, and that

    ⎡ 1/a b ⎤ ↦ W(1−a²)/2+ib/a,   a ∈ R \ {0}, b ∈ R,
    ⎣ 0   a ⎦

is a representation of the subgroup of SL(2, R) made of upper-triangular
matrices.
2. Show that the representation (Wλ)λ contains the commutation relations
between ã+ and ã−, i.e., we have

    P̃Q̃ = −(d/dt)(d/ds) Wt Wis |t=s=0   and   Q̃P̃ = −(d/dt)(d/ds) Wis Wt |t=s=0.
3

Basic probability distributions on Lie algebras

The theory of probabilities is at bottom nothing but common sense


reduced to calculus.
(P.S. de Laplace, in Théorie Analytique des Probabilités.)
In this chapter we show how basic examples of continuous and discrete
probability distributions can be constructed from real Lie algebras, based on
the annihilation and creation operators a− , a+ , completed by the number
operator a◦ = a+ a− . In particular, we study in detail the relationship between
the Gaussian and Poisson distributions and the Heisenberg–Weyl and oscillator
Lie algebras hw and osc, which generalises the introduction given in Chapter 1.
We work on the Fock space over a real separable Hilbert space h, and we also
examine a situation where the gamma and continuous binomial distributions
appear naturally on sl2 (R), in relation with integration by parts with respect to
the gamma distribution.

3.1 Gaussian distribution on hw


Since the Heisenberg–Weyl Lie algebra hw is based on the creation and
annihilation operators a− and a+ introduced on the boson Fock space 2 in
Chapter 1, we start with an extension of those operators to an arbitrary complex
Hilbert space.
Namely, we consider

a) a complex Hilbert space h equipped with a sesquilinear inner product ⟨·, ·⟩
such that

    ⟨zu, v⟩ = z̄⟨u, v⟩,   z ∈ C,

and ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩,


b) two operators a− and a+, called annihilation and creation operators, acting
on the elements of h, such that

i) a− and a+ are dual to each other in the sense that

    ⟨a−u, v⟩ = ⟨u, a+v⟩,   u, v ∈ h,   (3.1)

which will also be written (a+)∗ = a−, for the scalar product ⟨·, ·⟩, and

ii) the operators a− and a+ satisfy the commutation relation

    [a−, a+] = a−a+ − a+a− = E,   (3.2)

where E commutes with a− and a+,

c) a unit vector e0 ∈ h (fundamental or empty state) such that a−e0 = 0 and
⟨e0, e0⟩ = 1.

We will show that under the conditions

    a−e0 = 0   and   Ee0 = σ²e0

(e.g., when E = σ²Ih, where Ih is the identity of h), the operator Q = a− + a+
has a Gaussian law in the sense that it yields the moment generating function

    t ↦ ⟨e0, e^(tQ) e0⟩ = e^(t²σ²/2),   t ∈ R+,

which extends in particular the example of Exercise 1.1 to moments of all
orders. Similarly, we could show that P = i(a− − a+) also has a Gaussian law
in the state e0, see Exercise 3.1.
Next, we will consider several representations of the aforementioned noncom-
mutative framework.

3.1.1 Gaussian Hilbert space representation


A way to implement the Heisenberg–Weyl algebra and the above operators a−
and a+ is to take

    h := L²C(R; (1/√(2πσ²)) e^(−x²/(2σ²)) dx)
       = { f : R → C : ∫−∞∞ |f(x)|² e^(−x²/(2σ²)) dx < ∞ }

under the inner product

    ⟨u, v⟩h := (1/√(2πσ²)) ∫−∞∞ ū(x)v(x) e^(−x²/(2σ²)) dx,

by letting

    a− := σ² ∂/∂x   and   a+ := x − σ² ∂/∂x

and by defining e0 to be the constant function equal to one, i.e., e0(x) = 1,
x ∈ R, which satisfies the conditions ⟨e0, e0⟩h = 1 and a−e0 = 0. A standard
integration by parts shows that

    ⟨a−u, v⟩h = (σ²/√(2πσ²)) ∫−∞∞ ū′(x)v(x) e^(−x²/(2σ²)) dx
              = (1/√(2πσ²)) ∫−∞∞ ū(x)(xv(x) − σ²v′(x)) e^(−x²/(2σ²)) dx
              = ⟨u, a+v⟩h,

i.e., (3.1) is satisfied, and

    [a−, a+]u(x) = a−a+u(x) − a+a−u(x)
                 = a−(xu(x) − σ²u′(x)) − σ²a+u′(x)
                 = σ² (∂/∂x)(xu(x) − σ²u′(x)) − σ²xu′(x) + σ⁴u″(x)
                 = σ²u(x),

hence (3.2) is satisfied.
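The commutator computation above can be confirmed symbolically on a generic function; the following sketch is an editorial addition (assuming SymPy):

```python
from sympy import symbols, Function, diff, simplify

x, sigma = symbols('x sigma', positive=True)
u = Function('u')(x)

a_minus = lambda f: sigma**2*diff(f, x)       # a^- = sigma^2 d/dx
a_plus = lambda f: x*f - sigma**2*diff(f, x)  # a^+ = x - sigma^2 d/dx

commutator = a_minus(a_plus(u)) - a_plus(a_minus(u))
print(simplify(commutator))  # sigma**2*u(x)
```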
In this representation, we easily check that the position and momentum
operators Q and P are written as

    Q = a− + a+ = xIh   and   P = i(a+ − a−) = i(xIh − 2σ² ∂/∂x),

and that

    ⟨e0, Qⁿe0⟩h = (1/√(2πσ²)) ∫−∞∞ xⁿ e^(−x²/(2σ²)) dx

is indeed the centered Gaussian moment of order n ∈ N, which recovers in
particular the first four Gaussian moments computed in Exercise 1.1. In
addition, the moment generating function of Q in the state e0, defined by

    ⟨e0, e^(tQ) e0⟩h = Σn=0∞ (tⁿ/n!) ⟨e0, Qⁿe0⟩h,

satisfies

    ⟨e0, e^(tQ) e0⟩h = (1/√(2πσ²)) ∫−∞∞ e^(tx) e^(−x²/(2σ²)) dx = exp(σ²t²/2).

3.1.2 Hermite representation


In this section we implement the representation of the Heisenberg–Weyl
algebra constructed on the boson Fock space in Section 1.2 using the Hermite
polynomials Hn(x; σ²) with parameter σ² > 0, which define the orthonormal
sequence

    en(x) := (1/(σⁿ√(n!))) Hn(x; σ²)

in h := L²C(R; (1/√(2πσ²)) e^(−x²/(2σ²)) dx), i.e. we have

    ⟨en, em⟩h = δn,m,   n, m ∈ N.

In addition, the Hermite polynomials are known to satisfy the relations

    a− Hn(x; σ²) = σ² (∂Hn/∂x)(x; σ²) = nσ² Hn−1(x; σ²)

and

    a+ Hn(x; σ²) = (x − σ² ∂/∂x) Hn(x; σ²) = Hn+1(x; σ²),

i.e.,

    a− en = σ√n en−1,   a+ en = σ√(n + 1) en+1,   n ∈ N,

with

    a− e0 = 0.

We also note that the relation

    en = (1/(σⁿ√(n!))) (a+)ⁿ e0

reads

    Hn(x) = (a+)ⁿ H0(x) = (a+)ⁿ e0 = σⁿ√(n!) en,

i.e.,

    en = (1/(σⁿ√(n!))) Hn(Q)e0,   n ∈ N,

which is Condition (1.4) of Proposition 1.2.1.
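The two Hermite relations are consistent with each other: building Hn(x; σ²) from H0 = 1 by the creation recursion Hn+1 = xHn − σ²Hn′, one recovers Hn′ = nHn−1. A small editorial check (σ² is given an arbitrary integer value for exact arithmetic):

```python
from sympy import symbols, diff, expand

x = symbols('x')
sigma2 = 3  # arbitrary integer value of sigma^2, for exact arithmetic

# H_{n+1} = a+ H_n = x H_n - sigma^2 H_n', starting from H_0 = 1
H = [expand(x*0 + 1)]
for n in range(6):
    H.append(expand(x*H[n] - sigma2*diff(H[n], x)))

# Check a- H_n = sigma^2 H_n' = n sigma^2 H_{n-1}
for n in range(1, 6):
    assert expand(sigma2*diff(H[n], x) - n*sigma2*H[n-1]) == 0
```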

3.2 Poisson distribution on osc


The generic Hermitian element of the oscillator Lie algebra

    osc = span{N, P, Q, E}

can be written in the form

    Xα,ζ,β = αN + ζa+ + ζ̄a− + βE,

with α, β ∈ R and ζ ∈ C. We will show that

    X := X1,1,1 = N + a+ + a− + E

has a Poisson distribution with parameter λ > 0 under the conditions

    a−e0 = 0   and   Ee0 = λe0,

i.e., we take σ = √λ. We start by checking this fact on the first moments. We
have

    ⟨Xⁿe0, e0⟩ = ⟨Xⁿ⁻¹e0, Xe0⟩
               = ⟨Xⁿ⁻¹e0, a+e0⟩ + λ⟨Xⁿ⁻¹e0, e0⟩
               = ⟨a−Xⁿ⁻¹e0, e0⟩ + λ⟨Xⁿ⁻¹e0, e0⟩.

On the other hand, we note the commutation relation

    [a−, X] = [a−, N] + [a−, a+] = a− + λIh,

which implies

    ⟨Xⁿ⁺¹e0, e0⟩ = ⟨Xⁿe0, Xe0⟩
                = ⟨Xⁿe0, a+e0⟩ + λ⟨Xⁿe0, e0⟩
                = ⟨a−Xⁿe0, e0⟩ + λ⟨Xⁿe0, e0⟩
                = ⟨Xa−Xⁿ⁻¹e0, e0⟩ + ⟨a−Xⁿ⁻¹e0, e0⟩ + λ⟨Xⁿ⁻¹e0, e0⟩ + λ⟨Xⁿe0, e0⟩,

and recovers by induction the Poisson moments given by the Touchard
polynomials Tn(λ) as

    IEλ[Zⁿ] = Tn(λ) = Σk=0ⁿ λᵏ S(n, k),   n ∈ N,

cf. Relation (A.8) in the Appendix A.2.



The representation of osc on the boson Fock space ℓ² is given by

    N en = n en,
    a+ en = √((n + 1)λ) en+1,
    a− en = √(nλ) en−1,

where N := a◦/λ = a+a−/λ is the number operator.

3.2.1 Poisson Hilbert space representation


We choose h = ℓ²(N, pλ), where

    pλ(k) = e^(−λ) λᵏ/k!,   k ∈ N,

is the Poisson distribution, with the inner product

    ⟨f, g⟩ := Σk=0∞ f(k)g(k)pλ(k) = e^(−λ) Σk=0∞ f(k)g(k) λᵏ/k!.

In the sequel we will use the finite difference operator Δ defined as

    Δf(k) = f(k + 1) − f(k),   k ∈ N,   (3.3)

cf. Section 9.3 for its generalisation to spaces of configurations under a
Poisson random measure.
Let λ > 0. In the next proposition we show that the operators a− and a+
defined by

    a− f(k) := λΔf(k) = λ(f(k + 1) − f(k))   (3.4)

and

    a+ f(k) := kf(k − 1) − λf(k)   (3.5)

satisfy the Conditions (3.1)-(3.2) above with E = λIh.

Proposition 3.2.1 The operators a− and a+ defined in (3.4)-(3.5) satisfy the
commutation relation

    [a−, a+] = λIh

and the involution (a−)∗ = a+.



Proof: The commutation relation follows from (A.3b) and (A.3c). Next, by
the Abel transformation of sums we have

    ⟨a−f, g⟩ = λe^(−λ) Σk=0∞ (f(k + 1) − f(k))g(k) λᵏ/k!
             = λe^(−λ) Σk=0∞ f(k + 1)g(k) λᵏ/k! − λe^(−λ) Σk=0∞ f(k)g(k) λᵏ/k!
             = e^(−λ) Σk=1∞ f(k)g(k − 1) λᵏ/(k − 1)! − λe^(−λ) Σk=0∞ f(k)g(k) λᵏ/k!
             = e^(−λ) Σk=0∞ f(k)(kg(k − 1) − λg(k)) λᵏ/k!
             = ⟨f, a+g⟩.
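Both the duality and the commutation relation of Proposition 3.2.1 can be verified in exact rational arithmetic, using finitely supported test functions so that all sums are finite (the common factor e^(−λ) is dropped on both sides). An editorial sketch, with an arbitrary rational λ:

```python
from fractions import Fraction
from math import factorial

lam = Fraction(7, 3)  # arbitrary rational intensity

def a_minus(f):
    return lambda k: lam*(f(k + 1) - f(k))

def a_plus(f):
    return lambda k: k*f(k - 1) - lam*f(k)

def inner(f, g, K=60):
    # <f, g> up to the common factor e^{-lambda}
    return sum(f(k)*g(k)*lam**k/factorial(k) for k in range(K))

# finitely supported test functions on N
f = lambda k: Fraction(1) if k in (0, 2, 5) else Fraction(0)
g = lambda k: Fraction(k*k) if 0 <= k <= 6 else Fraction(0)

assert inner(a_minus(f), g) == inner(f, a_plus(g))      # <a- f, g> = <f, a+ g>
assert all(a_minus(a_plus(h))(k) - a_plus(a_minus(h))(k) == lam*h(k)
           for h in (f, g) for k in range(10))          # [a-, a+] = lam
```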

We also note that the number operator N := a+a−/λ satisfies

    Nf(k) = (1/λ) a+a− f(k)
          = a+(f(k + 1) − f(k))
          = kf(k) − λf(k + 1) − (kf(k − 1) − λf(k))
          = −λf(k + 1) + (k + λ)f(k) − kf(k − 1),   k ∈ N.

This shows that

    (N + a+ + a− + E)f(k) = −λf(k + 1) + (k + λ)f(k) − kf(k − 1)
                            + λ(f(k + 1) − f(k)) + kf(k − 1) − λf(k) + λf(k)
                          = kf(k),   (3.6)

hence

    N + a+ + a− + E

has a Poisson distribution with parameter λ > 0 in the vacuum state 1.
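Identity (3.6) states that N + a+ + a− + E acts as multiplication by k; a quick editorial check on arbitrary rational values (negative indices never contribute, as they always appear multiplied by k = 0):

```python
import random
from fractions import Fraction

lam = Fraction(5, 2)  # arbitrary
K = 20
random.seed(1)
f = [Fraction(random.randint(-9, 9)) for _ in range(K + 2)]  # arbitrary values

def Nf(k):  return -lam*f[k+1] + (k + lam)*f[k] - k*f[k-1]
def apf(k): return k*f[k-1] - lam*f[k]
def amf(k): return lam*(f[k+1] - f[k])

# (N + a+ + a- + E) acts as multiplication by k, cf. (3.6)
assert all(Nf(k) + apf(k) + amf(k) + lam*f[k] == k*f[k] for k in range(K))
```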

3.2.2 Poisson-Charlier representation on the boson Fock space


In this section we use the Lie algebra representation based on the boson Fock
space in Section 1.2, together with the Charlier polynomials Cn(k; λ) defined
in Section A.1.3.2 in appendix. First, we note that the functions

    en(k) := (1/(λ^(n/2)√(n!))) Cn(k; λ),   n ∈ N,

form an orthonormal sequence in h = ℓ²(N, pλ). Next, we note that the
annihilation and creation operators a− and a+ defined in (3.4)-(3.5) satisfy

    a− Cn(k; λ) = λ(Cn(k + 1; λ) − Cn(k; λ)) = nλCn−1(k; λ)

and

    a+ Cn(k; λ) = kCn(k − 1; λ) − λCn(k; λ) = Cn+1(k; λ),

hence we have

    a− en = √(λn) en−1   and   a+ en = √(λ(n + 1)) en+1,

as in the ℓ² representation of the boson Fock space, and this yields

    a+a− Cn(k; λ) = nλ a+Cn−1(k; λ) = nλCn(k; λ).

In addition, the commutation relation of Proposition 3.2.1 can be recovered as
follows:

    [a−, a+]f(k) = a−a+f(k) − a+a−f(k)
                 = a−(kf(k − 1) − λf(k)) − λa+(f(k + 1) − f(k))
                 = λ((k + 1)f(k) − λf(k + 1)) − λ(kf(k − 1) − λf(k))
                   − λk(f(k) − f(k − 1)) + λ²(f(k + 1) − f(k))
                 = λf(k),

showing that [a−, a+] = λIh.


Similarly, the duality ⟨a⁻f, g⟩ = ⟨f, a⁺g⟩ of Proposition 3.2.1 for the inner
product ⟨f, g⟩ₕ can be recovered by the similar Abel transformation of sums

\begin{align*}
\langle a^- f, C_n(\cdot,\lambda)\rangle
&= \lambda \sum_{k=0}^\infty \frac{\lambda^k}{k!}\,( f(k+1) - f(k))\,C_n(k,\lambda) \\
&= -\lambda f(0)C_n(0,\lambda) + \sum_{k=1}^\infty \frac{\lambda^k}{k!}\, f(k)\,(k\,C_n(k-1,\lambda) - \lambda C_n(k,\lambda)) \\
&= f(0)C_{n+1}(0,\lambda) + \sum_{k=1}^\infty \frac{\lambda^k}{k!}\, f(k)\,C_{n+1}(k,\lambda) \\
&= \sum_{k=0}^\infty \frac{\lambda^k}{k!}\, f(k)\,C_{n+1}(k,\lambda) \\
&= \langle f, a^+ C_n(\cdot,\lambda)\rangle,
\end{align*}

with Cn(0, λ) = (−λ)ⁿ. We also check that (3.6) can equivalently be recovered
as

\begin{align*}
(N + a^+ + a^- + E)\,C_n(k,\lambda)
&= k(C_n(k,\lambda) - C_n(k-1,\lambda)) - \lambda(C_n(k+1,\lambda) - C_n(k,\lambda)) \\
&\quad + \lambda(C_n(k+1,\lambda) - C_n(k,\lambda)) \\
&\quad + k\,C_n(k-1,\lambda) - \lambda C_n(k,\lambda) + \lambda C_n(k,\lambda) \\
&= k\,C_n(k,\lambda),
\end{align*}

hence N + a⁺ + a⁻ + E has a Poisson distribution with parameter λ > 0.
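The Charlier ladder relations above are easy to verify numerically: generating the polynomials through the creation relation C_{n+1} = a⁺C_n, the annihilation relation a⁻C_n = nλC_{n−1} and the value C_n(0, λ) = (−λ)ⁿ follow. A minimal sketch (λ arbitrary):

```python
# Charlier polynomials generated by C_{n+1}(k) = k*C_n(k-1) - lam*C_n(k),
# C_0 = 1; check a- C_n = lam*(C_n(k+1)-C_n(k)) = n*lam*C_{n-1}(k)
# and C_n(0) = (-lam)^n.
from functools import lru_cache

lam = 0.5

@lru_cache(maxsize=None)
def C(n, k):
    if n == 0:
        return 1.0
    return k * C(n - 1, k - 1) - lam * C(n - 1, k)

for n in range(1, 8):
    for k in range(8):
        lhs = lam * (C(n, k + 1) - C(n, k))
        rhs = n * lam * C(n - 1, k)
        assert abs(lhs - rhs) < 1e-6 * (1 + abs(rhs))
for n in range(8):
    assert abs(C(n, 0) - (-lam) ** n) < 1e-9
print("Charlier ladder relations verified")
```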

3.2.3 Adjoint action


The next lemma will be used for the Girsanov theorem in Chapter 10.

Lemma 3.2.2 Letting Y = i(wa⁺ + w̄a⁻), the adjoint action of g_t := e^{tY} on

$$X_{\alpha,\zeta,\beta} = \alpha N + \zeta a^+ + \bar\zeta a^- + \beta E$$

is given by

$$e^{tY} X_{\alpha,\zeta,\beta}\, e^{-tY} = \alpha N + (\zeta - i\alpha w t)\,a^+ + (\bar\zeta + i\alpha \bar w t)\,a^- + \big(\beta + 2t\,\Im(w\bar\zeta) + \alpha|w|^2 t^2\big)E, \tag{3.7}$$

t ∈ R₊, where ℑ(z) denotes the imaginary part of z.

Proof: The adjoint action

$$X(t) := e^{tY} X_{\alpha,\zeta,\beta}\, e^{-tY}$$

of g_t := e^{tY} on X_{α,ζ,β} solves the differential equation

$$\dot X(t) = \frac{d}{dt}\,\mathrm{Ad}_{g_t}(X_{\alpha,\zeta,\beta}) = [Y, X(t)].$$

Looking for a solution of the form

$$X(t) = a(t)N + z(t)a^+ + \bar z(t)a^- + b(t)E,$$

we get the system

$$\begin{cases} \dot a(t) = 0, \\ \dot z(t) = -i\alpha w, \\ \dot b(t) = i(\bar w z(t) - w \bar z(t)), \end{cases}$$

of ordinary differential equations with initial conditions

$$a(0) = \alpha, \qquad z(0) = \zeta, \qquad b(0) = \beta,$$

whose solution yields (3.7).
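As a consistency check on the signs above, the equation for b(t) can be integrated numerically and compared with the closed form appearing in (3.7); all parameter values below are arbitrary.

```python
# Integrate b'(t) = i*(conj(w)*z(t) - w*conj(z(t))) with z(t) = zeta - i*alpha*w*t
# by the Euler scheme, and compare with
#   b(T) = beta + 2*T*Im(w*conj(zeta)) + alpha*|w|^2*T^2.
alpha, beta = 1.3, 0.4
w = 0.8 - 0.5j
zeta = -0.3 + 1.1j

T, n = 2.0, 200000
h = T / n
b = beta
for i in range(n):
    t = i * h
    z = zeta - 1j * alpha * w * t
    db = 1j * (w.conjugate() * z - w * z.conjugate())
    b += h * db.real  # db is real, equal to 2*Im(w*conj(z))

closed = beta + 2 * T * (w * zeta.conjugate()).imag + alpha * abs(w) ** 2 * T ** 2
assert abs(b - closed) < 1e-3
print("b(T) matches the closed form in (3.7)")
```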

3.3 Gamma distribution on sl2 (R)


In this section we revisit the representation of sl₂(R) on h = L²_C(R₊, γ_β(x)dx)
with the inner product

$$\langle f, g\rangle := \int_0^\infty f(x)\overline{g(x)}\,\gamma_\beta(x)\,dx,$$

introduced in Section 2.4, in connection with the gamma probability density
function

$$\gamma_\beta(x) = \frac{x^{\beta-1}}{\Gamma(\beta)}\, e^{-x}\, \mathbf{1}_{\{x\ge 0\}}$$

on R, with shape parameter β > 0. We have ã⁻ = x ∂/∂x, i.e.,

$$\tilde a^- f(x) = x f'(x), \qquad f \in C_b^\infty(\mathbb{R}).$$
The adjoint ã⁺ of ã⁻ with respect to the gamma density γ_β(x) on R satisfies

\begin{align*}
\langle \tilde a^- f, g\rangle_h &= \int_0^\infty g(x)\,\tilde a^- f(x)\,\gamma_\beta(x)\,dx \\
&= \int_0^\infty x\,g(x) f'(x)\,\gamma_\beta(x)\,dx \\
&= \int_0^\infty f(x)\,(x g(x) - \beta g(x) - x g'(x))\,\gamma_\beta(x)\,dx \\
&= \langle f, \tilde a^+ g\rangle_h, \qquad f, g \in C_b^\infty(\mathbb{R}),
\end{align*}

hence we have

$$\tilde a^+ = x - \beta - \tilde a^-,$$

i.e.,

$$\tilde a^+ f(x) = (x - \beta) f(x) - x\frac{\partial}{\partial x} f(x) = (x-\beta) f(x) - \tilde a^- f(x).$$
∂x
In other words, the multiplication operator ã⁻ + ã⁺ = τ − β has a compensated
gamma distribution in the vacuum state e₀ in L²_C(R₊, γ_β(τ)dτ).
The operator ã◦ defined as

$$\tilde a^\circ = \tilde a^+ \frac{\partial}{\partial x} = -(\beta - x)\frac{\partial}{\partial x} - x\frac{\partial^2}{\partial x^2}$$

has the Laguerre polynomials L_n^{β−1} with parameter β as eigenfunctions:

$$\tilde a^\circ L_n^{\beta-1}(x) = n L_n^{\beta-1}(x), \qquad n \in \mathbb{N}. \tag{3.8}$$
Recall that the basis {M, B− , B+ } of sl2 (R), which satisfies
[B− , B+ ] = M, [M, B− ] = −2B− , [M, B+ ] = 2B+ ,
can be constructed as
M = β + 2ã◦ , B− = ã− − ã◦ , B+ = ã+ − ã◦ .
For example, for the commutation relation [M, B− ] = −2B− we note that
[M, B− ] = 2[ã◦ , ã− ]
= −2[(β − x)∂ + x∂ 2 , x∂]
= −2(β − x)∂(x∂) − 2x∂ 2 (x∂) + 2x∂((β − x)∂ + x∂ 2 )
= −2(β − x)∂ − 2(β − x)x∂ 2 − 2x∂(∂ + x∂ 2 )
− 2x∂ + 2x(β − x)∂ 2 + 2x∂ 2 + 2x2 ∂ 3
= −2(β − x)∂ − 2(β − x)x∂ 2 − 2x∂ 2 − 2x2 ∂ 3 − 2x∂ 2
− 2x∂ + 2x(β − x)∂ 2 + 2x∂ 2 + 2x2 ∂ 3
= −2β∂ − 2x∂ 2
= −2(x∂ + (β − x)∂ + x∂ 2 )
= −2B− .
We check that

$$B^- + B^+ = \tilde a^- + \tilde a^+ - 2\tilde a^\circ = x - \beta + 2(\beta - x)\frac{\partial}{\partial x} + 2x\frac{\partial^2}{\partial x^2}$$

and

$$i(B^- - B^+) = i(\tilde a^- - \tilde a^+) = 2ix\frac{\partial}{\partial x} - i(x - \beta),$$

hence

$$B^- + B^+ + M = \beta + \tilde a^- + \tilde a^+ = x \tag{3.9}$$

identifies with the multiplication by x, therefore it has a gamma distribution
with parameter β.

3.3.1 Probability distributions


As a consequence of (3.9) we find that Q + M identifies to the multiplication
operator

Q + M = τ,

hence Q + M has the gamma distribution with parameter β in the vacuum state
e₀ in L²_C(R₊, γ_β(τ)dτ). In this way we can also recover the moment generating
function

$$\langle e_0, e^{t(B^- + B^+ + M)} e_0\rangle = \int_0^\infty e^{tx}\gamma_\beta(x)\,dx = \frac{1}{(1-t)^\beta}, \qquad t < 1,$$

which is the moment generating function of the gamma distribution with
parameter β > 0.
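The integral above is easy to confirm by quadrature; the following sketch uses a crude midpoint rule and arbitrary parameter values β ≥ 1.

```python
# Check that  int_0^inf e^{t x} gamma_beta(x) dx = (1-t)^{-beta}  for t < 1.
import math

def gamma_density(x, beta):
    return x ** (beta - 1) * math.exp(-x) / math.gamma(beta)

def mgf(t, beta, upper=100.0, n=100000):
    h = upper / n
    return sum(math.exp(t * (i + 0.5) * h) * gamma_density((i + 0.5) * h, beta) * h
               for i in range(n))

for beta in (1.0, 2.5, 4.0):
    for t in (-1.0, 0.0, 0.5):
        assert abs(mgf(t, beta) - (1 - t) ** (-beta)) < 1e-4
print("gamma MGF matches (1-t)^(-beta)")
```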
More generally, the distribution (or spectral measure) of αM + Q has been
completely determined in [1], depending on the value of α ∈ R:

– When α = ±1, M + Q and M − Q have gamma distributions.


– For |α| < 1, Q + αM has an absolutely continuous distribution and in
particular for α = 0, Q and P have continuous binomial distributions.
– When |α| > 1, Q + αM has a Pascal distribution, cf. Case (iii) on page 41.

In order to define an inner product on span{vₙ : n ∈ N} such that M* = M and
(B⁻)* = B⁺, the vₙ have to be mutually orthogonal, and their norms have to
satisfy the recurrence relation

$$\|v_{n+1}\|^2 = \langle B^+ v_n, v_{n+1}\rangle = \langle v_n, B^- v_{n+1}\rangle = (n+1)(n+\lambda)\,\|v_n\|^2. \tag{3.10}$$
It follows that there exists an inner product on span{vₙ : n ∈ N} such that the
lowest weight representation with

Me₀ = λe₀,  B⁻e₀ = 0,

is a *-representation, if and only if the coefficients (n + 1)(n + λ) in Equation
(3.10) are non-negative for all n ∈ N, i.e., if and only if λ ≥ 0.
For λ = 0 we get the trivial one-dimensional representation

$$B^+_{(0)} e_0 = B^-_{(0)} e_0 = M_{(0)} e_0 = 0$$

since ||v₁||² = 0, and for λ > 0 we get

$$\begin{cases} B^+_{(\lambda)} e_n = \sqrt{(n+1)(n+\lambda)}\; e_{n+1}, & \text{(3.11a)} \\[1mm] M_{(\lambda)} e_n = (2n+\lambda)\, e_n, & \\[1mm] B^-_{(\lambda)} e_n = \sqrt{n(n+\lambda-1)}\; e_{n-1}, & \text{(3.11b)} \end{cases}$$
where (eₙ)ₙ∈N is an orthonormal basis of ℓ². Letting

$$Y_{(\lambda)} := B^+_{(\lambda)} + B^-_{(\lambda)} + \beta M_{(\lambda)}, \qquad \beta \in \mathbb{R},$$

defines an essentially self-adjoint operator, and Y₍λ₎ is a compound Poisson
random variable with characteristic exponent

$$\Psi(u) = \big\langle e_0, \big(e^{iuY_{(\lambda)}} - 1\big)\, e_0\big\rangle.$$

Our objective in the sequel is to determine the Lévy measure of Y₍λ₎, i.e., to
determine the measure μ on R for which we have

$$\Psi(u) = \int_{-\infty}^\infty \big(e^{iux} - 1\big)\,\mu(dx).$$

This is the spectral measure of Y₍λ₎ evaluated in the state Y ↦ ⟨e₀, Ye₀⟩.

3.3.2 Laguerre polynomial representation


Recall that the polynomial representation defined in Section 1.2 relies on the
condition

eₙ = pₙ(Y₍λ₎)e₀,  n ∈ N,

which yields a sequence of orthogonal polynomials with respect to μ, since

\begin{align*}
\int_{-\infty}^\infty p_n(x)p_m(x)\,\mu(dx) &= \langle e_0, p_n(Y_{(\lambda)})\, p_m(Y_{(\lambda)})\, e_0\rangle \\
&= \langle p_n(Y_{(\lambda)})\, e_0, p_m(Y_{(\lambda)})\, e_0\rangle \\
&= \delta_{nm},
\end{align*}

for n, m ∈ N. Looking at (3.11a)-(3.11b) and the definition of Y₍λ₎, we can
easily identify the three-term recurrence relation satisfied by the pₙ as

$$Y_{(\lambda)} e_n = \sqrt{(n+1)(n+\lambda)}\; e_{n+1} + \beta(2n+\lambda)\, e_n + \sqrt{n(n+\lambda-1)}\; e_{n-1},$$

n ∈ N. Therefore, Proposition 1.2.1 shows that the rescaled polynomials

$$P_n := \prod_{k=1}^{n} \sqrt{\frac{k+\lambda-1}{k}}\;\, p_n, \qquad n \in \mathbb{N},$$

satisfy the recurrence relation

$$(n+1)P_{n+1} + (2\beta n + \beta\lambda - x)P_n + (n+\lambda-1)P_{n-1} = 0, \tag{3.12}$$

with initial conditions P₋₁ = 0, P₀ = 1.
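For β = 1, the solution of (3.12) can be compared directly with the classical Laguerre three-term recurrence, confirming the identification Pₙ(x) = (−1)ⁿLₙ^{(λ−1)}(x) of case (i) below; a sketch with an arbitrary value of λ:

```python
# Compare the solution of (3.12) with beta = 1 against the Laguerre
# recurrence (n+1) L_{n+1} = (2n+1+alpha-x) L_n - (n+alpha) L_{n-1}.
lam = 2.3

def P(n, x, beta=1.0):
    pm, p = 0.0, 1.0  # P_{-1}, P_0
    for k in range(n):
        pm, p = p, (-(2 * beta * k + beta * lam - x) * p - (k + lam - 1) * pm) / (k + 1)
    return p

def laguerre(n, alpha, x):
    lm, l = 0.0, 1.0
    for k in range(n):
        lm, l = l, ((2 * k + 1 + alpha - x) * l - (k + alpha) * lm) / (k + 1)
    return l

for n in range(8):
    for x in (0.0, 0.7, 3.1):
        assert abs(P(n, x) - (-1) ** n * laguerre(n, lam - 1, x)) < 1e-9
print("(3.12) with beta = 1 gives (-1)^n L_n^{(lam-1)}(x)")
```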
We can distinguish three cases according to the value of β, cf. [1].
i) |β| = 1: In this case we have, up to rescaling, Laguerre polynomials, i.e.,

$$P_n(x) = (-\beta)^n L_n^{(\lambda-1)}(\beta x)$$

where the Laguerre polynomials L_n^{(α)} are defined as in [63, Equation
(1.11.1)], with in particular

$$L_n^{(0)}(x) = \sum_{k=0}^{n} \binom{n}{k} (-1)^k \frac{x^k}{k!}, \qquad x \in \mathbb{R}_+.$$

The measure μ can be obtained by normalising the measure of orthogo-
nality of the Laguerre polynomials, it is equal to

$$\mu(dx) = \frac{|x|^{\lambda-1}}{\Gamma(\lambda)}\, e^{-\beta x}\, \mathbf{1}_{\beta\mathbb{R}_+}\, dx.$$

If β = +1, then this measure is, up to a normalisation parameter, the usual
gamma distribution (with parameter λ) of probability theory.
ii) |β| < 1: In this case we find the Meixner-Pollaczek polynomials after
rescaling,

$$P_n(x) = P_n^{(\lambda/2)}\!\left(\frac{x}{2\sqrt{1-\beta^2}};\, \pi - \arccos\beta\right).$$

For the definition of these polynomials see, e.g., [63, Equation (1.7.1)]. For
the measure μ we get

$$\mu(dx) = C \exp\!\left(\frac{(\pi - 2\arccos\beta)\,x}{2\sqrt{1-\beta^2}}\right) \left|\Gamma\!\left(\frac{\lambda}{2} + \frac{ix}{2\sqrt{1-\beta^2}}\right)\right|^2 dx,$$

where C has to be chosen such that μ is a probability measure.


iii) |β| > 1: In this case we get the Meixner polynomials after rescaling,

$$P_n(x) = (-c\,\mathrm{sgn}\,\beta)^n \prod_{k=1}^{n} \frac{k+\lambda-1}{k}\; M_n\!\left(\frac{x\,\mathrm{sgn}\,\beta}{1/c - c} - \frac{\lambda}{2};\, \lambda;\, c^2\right)$$

where

$$c = |\beta| - \sqrt{\beta^2 - 1}.$$

The definition of these polynomials can be found, e.g., in [63, Equation
(1.9.1)]. The density μ is again the measure of orthogonality of the
polynomials Pₙ (normalised to a probability measure). We therefore find
the probability distribution

$$\mu = C \sum_{n=0}^\infty \frac{(\lambda)_n}{n!}\, c^{2n}\, \delta_{x_n}$$

where

$$x_n = \left(n + \frac{\lambda}{2}\right)\left(\frac{1}{c} - c\right)\mathrm{sgn}\,\beta, \qquad n \in \mathbb{N},$$

and

$$\frac{1}{C} = \sum_{n=0}^\infty \frac{(\lambda)_n}{n!}\, c^{2n} = (1 - c^2)^{-\lambda}.$$

Here, (λ)ₙ denotes the Pochhammer symbol

$$(\lambda)_n := \lambda(\lambda+1)\cdots(\lambda+n-1), \qquad n \in \mathbb{N}.$$
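The normalisation constant above is the negative binomial series, which can be checked numerically for arbitrary λ > 0 and |c| < 1:

```python
# Check that sum_n (lam)_n c^{2n} / n! = (1 - c^2)^{-lam}.
lam, c = 1.7, 0.4

s, term = 0.0, 1.0  # term = (lam)_n c^{2n} / n!, starting at n = 0
for n in range(200):
    s += term
    term *= (lam + n) * c * c / (n + 1)

assert abs(s - (1 - c * c) ** (-lam)) < 1e-12
print("sum (lam)_n c^{2n}/n! = (1 - c^2)^{-lam} verified")
```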

The representation (3.11a)-(3.11b) of osc on ℓ² can be built by defining an
orthonormal basis (eₙ)ₙ∈N of L²(R₊, γ_λ(τ)dτ) using the Laguerre polynomi-
als, as

$$e_n(x) = (-1)^n \sqrt{\frac{n!\,\Gamma(\lambda)}{(n+\lambda-1)!}}\; L_n^{\lambda-1}(x), \qquad n \in \mathbb{N}.$$
The relation

$$B^- L_n^{\lambda-1}(x) = x\frac{\partial}{\partial x} L_n^{\lambda-1}(x) - n L_n^{\lambda-1}(x) = -(n+\lambda-1)\, L_{n-1}^{\lambda-1}(x),$$

n ≥ 1, shows that

\begin{align*}
B^- e_n &= (-1)^n \sqrt{\frac{n!\,\Gamma(\lambda)}{(n+\lambda-1)!}}\; B^- L_n^{\lambda-1}(x) \\
&= -(n+\lambda-1)\,(-1)^n \sqrt{\frac{n!\,\Gamma(\lambda)}{(n+\lambda-1)!}}\; L_{n-1}^{\lambda-1}(x) \\
&= -\sqrt{n(n+\lambda-1)}\,(-1)^n \sqrt{\frac{(n-1)!\,\Gamma(\lambda)}{(n+\lambda-2)!}}\; L_{n-1}^{\lambda-1}(x) \\
&= \sqrt{n(n+\lambda-1)}\; e_{n-1}(x),
\end{align*}

n ≥ 1, and similarly by the recurrence relation

$$(x - \lambda - 2n)\, L_n^{\lambda-1}(x) + (n+\lambda-1)\, L_{n-1}^{\lambda-1}(x) + (n+1)\, L_{n+1}^{\lambda-1}(x) = 0,$$

(see (3.12)) we have

\begin{align*}
B^+ L_n^{\beta-1}(x) &= (\tilde a^+ - \tilde a^\circ)\, L_n^{\beta-1}(x) \\
&= (x-\beta)\, L_n^{\beta-1}(x) - x\frac{\partial}{\partial x} L_n^{\beta-1}(x) - n L_n^{\beta-1}(x) \\
&= (x-\beta)\, L_n^{\beta-1}(x) - n L_n^{\beta-1}(x) + (n+\beta-1)\, L_{n-1}^{\beta-1}(x) - n L_n^{\beta-1}(x) \\
&= (x-\beta-2n)\, L_n^{\beta-1}(x) + (n+\beta-1)\, L_{n-1}^{\beta-1}(x) \\
&= -(n+1)\, L_{n+1}^{\beta-1}(x) \\
&= (n+1)(-1)^n \sqrt{\frac{(n+\beta)!}{(n+1)!\,\Gamma(\beta)}}\; e_{n+1}(x) \\
&= (-1)^n \sqrt{(n+\beta)(n+1)} \sqrt{\frac{(n+\beta-1)!}{n!\,\Gamma(\beta)}}\; e_{n+1}(x),
\end{align*}

hence

$$B^+ e_n(x) = \sqrt{(n+\beta)(n+1)}\; e_{n+1}(x).$$
3.3.3 The case β = 1

When β = 1 the operators ã◦, ã⁻ and ã⁺ satisfy

$$\begin{cases} \tilde a^- = -x\dfrac{\partial}{\partial x}, \\[2mm] \tilde a^+ = 1 - x + x\dfrac{\partial}{\partial x}, \\[2mm] \tilde a^\circ = (x-1)\dfrac{\partial}{\partial x} - x\dfrac{\partial^2}{\partial x^2}, \end{cases} \quad \text{i.e.} \quad \begin{cases} \tilde a^- L_n(x) = x\displaystyle\sum_{k=0}^{n-1} L_k(x), \\[2mm] \tilde a^+ L_n(x) = (n+1)L_{n+1}(x) - n L_n(x), \\[2mm] \tilde a^\circ L_n(x) = n L_n(x), \end{cases}$$

with

$$\tilde a^+ + \tilde a^- = 1 - x,$$

and the commutation relations

$$[\tilde a^+, \tilde a^-] = -x, \qquad [\tilde a^\circ, \tilde a^+] = \tilde a^\circ + \tilde a^+, \qquad [\tilde a^-, \tilde a^\circ] = \tilde a^\circ + \tilde a^-.$$

We have noted earlier that i(ã− − ã+ ) has a continuous binomial distribution
(or spectral measure) in the vacuum state 1, with hyperbolic cosine density
(2 cosh(πξ/2))⁻¹, in relation to a representation of the subgroup of sl2 (R) made
of upper-triangular matrices.
Next, we also notice that although this type of distribution can be studied
for every value of β > 0 in the above framework, the construction can also
be specialised based on Lemma 2.4.2 for half-integer values of β using the
annihilation and creation operators αx− , αy− , αx+ , αy+ on the two-dimensional
boson Fock space (Ce1 ⊕ Ce2 ).
Defining the operator L as L = −Q̃ − 2ã◦ with

$$\tilde Q = 1 - x, \qquad \tilde P = -i(2x\partial_x + 1 - x),$$

we find that

$$[\tilde a^\circ, \tilde P] = iL, \qquad [\tilde a^\circ, L] = i\tilde P, \qquad [L, \tilde P] = 2iM,$$

and

$$[\tilde P, \tilde Q] = 2ix, \qquad [\tilde a^\circ, \tilde Q] = -i\tilde P, \tag{3.13}$$

hence

$$\left\{ \frac{i}{2}L,\; -\frac{i}{2}\tilde P,\; \frac{i}{2}M \right\}$$

generates the unitary representation

$$\left[-\frac{\tilde P}{2}, \frac{M}{2}\right] = i\,\frac{L}{2}, \qquad \left[\frac{L}{2}, \frac{M}{2}\right] = i\,\frac{\tilde P}{2}, \qquad \left[\frac{L}{2}, -\frac{\tilde P}{2}\right] = -i\,\frac{M}{2},$$

also called the Segal–Shale–Weil representation of sl2 (R). Indeed, the above
relations can be proved by ordinary differential calculus as
(1 − x + x∂x )(−x∂x ) − (−x∂x )(1 − x + x∂x )
= −(1 − x)x∂x − x∂x − x2 ∂x2 + (1 − x)x∂x − x + x2 ∂x2 + x∂x
= −x,

and

(−(1 − x)∂x − x∂x2 )(1 − x + x∂x ) − (1 − x + x∂x )(−(1 − x)∂x − x∂x2 )


= −(1 − x)2 ∂x + 1 − x − (1 − x)x∂x2 − (1 − x)∂x − x(−2∂x + (1 − x)∂x2 )
− x(2∂x2 + x∂x3 ) − (−(1 − x)2 ∂x − x(1 − x)∂x2 − x(1 − x)∂x2 + x∂x − x2 ∂x3 − x∂x2 )
= −(1 − x)∂x − x∂x2 + (1 − x) + x∂x ,

and

− x∂x (−(1 − x)∂x − x∂x2 ) − (−(1 − x)∂x − x∂x2 )(−x∂x )


= x(1 − x)∂x2 − x∂x + x∂x2 + x2 ∂x3 − ((1 − x)∂x + (1 − x)x∂x2 + 2x∂x2 + x2 ∂x3 )
= −(1 − x)∂x − x∂x2 − x∂x .

3.3.4 Adjoint action


The next lemma will be used for the Girsanov theorem in Chapter 10.

Lemma 3.3.1 Letting Y = B⁻ − B⁺, the adjoint action of g_t := e^{tY/2} on X_β is
given by

$$e^{tY/2} X_\beta\, e^{-tY/2} = e^{t(\mathrm{ad}\,Y)/2} X_\beta = \big(\cosh(t) + \beta\sinh(t)\big)\, X_{\gamma(\beta,t)},$$

where

$$\gamma(\beta, t) = \frac{\beta\cosh(t) + \sinh(t)}{\cosh(t) + \beta\sinh(t)}.$$

See Section 4.4 of [45] for a proof of Lemma 3.3.1.

Exercises
Exercise 3.1 Define the operators b− and b+ by

b− = −ia− , b+ = ia+ .
1. Show that b⁻ and b⁺ satisfy the same commutation relation

$$[b^-, b^+] = [-ia^-, ia^+] = [a^-, a^+] = \sigma^2 I_h$$

as a⁻ and a⁺, with the condition b⁻e₀ = 0.
2. Show that we have the duality relation ⟨b⁻u, v⟩ₕ = ⟨u, b⁺v⟩ₕ, u, v ∈ h.
3. Show that P = i(a⁺ − a⁻) also has a Gaussian distribution in the fundamental
state e₀.
Exercise 3.2 Moments of the Poisson distribution.
The goal of this exercise is to recover the first moments of the Poisson distribution from
the commutation relations of the oscillator algebra and the relation Ee0 = λe0 .
In particular, show that ⟨Xe₀, e₀⟩ = λ, ⟨X²e₀, e₀⟩ = λ + λ², and ⟨X³e₀, e₀⟩ =
λ + 3λ² + λ³.
Exercise 3.3 Classical gamma moments.
Consider a (classical) random variable X having the gamma distribution with shape
parameter α > 0, probability density function

$$\varphi_X(x) := \frac{x^{\alpha-1}}{\Gamma(\alpha)}\, e^{-x}, \qquad x > 0,$$

and moment generating function

$$\mathrm{IE}[e^{tX}] = (1-t)^{-\alpha}, \qquad t < 1.$$

Show that the moment of order n ∈ N of X is given by

$$\mathrm{IE}[X^n] = \underbrace{\alpha(\alpha+1)\cdots(\alpha+n-1)}_{n \text{ factors}}. \tag{3.14}$$

Hint: you may use the relation

$$\mathrm{IE}[X^n] = \frac{\partial^n}{\partial t^n}\, \mathrm{IE}[e^{tX}]\Big|_{t=0}.$$
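Relation (3.14) can also be confirmed numerically by quadrature against the rising factorial (a sketch; the value of α is arbitrary):

```python
# Check E[X^n] = alpha*(alpha+1)*...*(alpha+n-1) for X ~ Gamma(alpha), rate 1,
# by midpoint quadrature of x^n * x^{alpha-1} e^{-x} / Gamma(alpha).
import math

alpha = 1.6

def moment(n, upper=80.0, steps=100000):
    h = upper / steps
    return sum(((i + 0.5) * h) ** (n + alpha - 1) * math.exp(-(i + 0.5) * h) * h
               for i in range(steps)) / math.gamma(alpha)

rising = 1.0
for n in range(1, 5):
    rising *= alpha + n - 1
    assert abs(moment(n) - rising) < 1e-3 * rising
print("E[X^n] matches the rising factorial for n = 1..4")
```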

Exercise 3.4 Gamma moments on sl2 (R).


Consider the Lie algebra sl2 (R) with basis B− , B+ , M, and the commutation relations
[B− , B+ ] = M, [M, B− ] = −2B− , [M, B+ ] = 2B+ ,
under the involution
(B− )∗ = B+ , M ∗ = M. (3.15)
Next, consider a (Hilbert) space h with inner product ⟨·, ·⟩ and a representation of
B⁻, B⁺, M on h such that

B⁻e₀ = 0,  Me₀ = αe₀,

for a certain unit vector e₀ ∈ h such that ⟨e₀, e₀⟩ = 1. Recall that the involution (3.15)
reads

⟨B⁻u, v⟩ = ⟨u, B⁺v⟩ and ⟨Mu, v⟩ = ⟨u, Mv⟩,  u, v ∈ h.
The goal of this question is to show that the first three moments of B⁻ + B⁺ + M in
the state e₀ coincide with the moments (3.14) of a gamma distribution with shape
parameter α > 0, i.e.,
1. for n = 1, show that ⟨e₀, (B⁻ + B⁺ + M)e₀⟩ = IE[X],
2. for n = 2, show that ⟨e₀, (B⁻ + B⁺ + M)²e₀⟩ = IE[X²],
3. for n = 3, show that ⟨e₀, (B⁻ + B⁺ + M)³e₀⟩ = IE[X³],
i.e., show that we have

⟨e₀, (B⁻ + B⁺ + M)ⁿe₀⟩ = IE[Xⁿ],  n = 0, 1, 2, 3,

where IE[Xⁿ] is given by the relation (3.14) of Exercise 3.3.
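The claim can be checked in a finite-dimensional sketch: realising B⁻, B⁺, M on a truncated basis e₀, ..., e_N through the lowest weight representation (3.11a)-(3.11b) with λ = α, the truncation does not affect the first three moments, since (B⁻ + B⁺ + M)³e₀ only involves e₀, ..., e₃.

```python
# Moments <e_0, (B^- + B^+ + M)^n e_0> in the truncated lowest weight
# representation, compared against alpha*(alpha+1)*...*(alpha+n-1).
import math

alpha, N = 1.25, 8

def matvec(v):
    w = [0.0] * (N + 1)
    for n in range(N + 1):
        w[n] += (2 * n + alpha) * v[n]                          # M
        if n + 1 <= N:
            w[n + 1] += math.sqrt((n + 1) * (n + alpha)) * v[n]  # B^+
        if n >= 1:
            w[n - 1] += math.sqrt(n * (n + alpha - 1)) * v[n]    # B^-
    return w

u = [0.0] * (N + 1)
u[0] = 1.0
moments = []
for n in range(4):
    moments.append(u[0])
    u = matvec(u)

expected = [1.0, alpha, alpha * (alpha + 1), alpha * (alpha + 1) * (alpha + 2)]
for m, e in zip(moments, expected):
    assert abs(m - e) < 1e-10
print("moments", moments, "match (alpha)_n for n = 0..3")
```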


4

Noncommutative random variables

In these days the angel of topology and the devil of abstract algebra
fight for the soul of each individual mathematical domain.
(H. Weyl, “Invariants”, Duke Mathematical Journal, 1939.)
Starting with this chapter we move from particular examples to the more gen-
eral framework of noncommutative random variables, with an introduction to
the basic concept of noncommutative probability space. In comparison with the
previous chapters which were mostly concerned with distinguished families
of distributions, we will see here how to construct arbitrary distributions in a
noncommutative setting.

4.1 Classical probability spaces


Following the description given by K.R. Parthasarathy in Reference [87], the
notion of a real-valued observable has three faces: (i) a spectral measure on
the line, (ii) a self-adjoint operator in a Hilbert space, and (iii) a unitary
representation of the real line as an additive group. The equivalence of these
three descriptions is a consequence of von Neumann’s spectral theorem for
a, not necessarily bounded, self-adjoint operator and Stone’s theorem on the
infinitesimal generator of a one-parameter unitary group in a Hilbert space.
Before switching to this noncommutative picture we recall the framework
of classical probability.
Definition 4.1.1 A “classical” probability space is a triple (Ω, F, P) where
• Ω is a set, the sample space, the set of all possible outcomes.
• F ⊆ P(Ω) is the σ-algebra of events.
• P : F −→ [0, 1] is a probability measure that assigns to each event its
probability.

This description of randomness is based on the idea that randomness is
due to a lack of information: if we knew which ω ∈ Ω is realised, then the
randomness disappears.
Recall that to any real-valued random variable X : (Ω, F, P) −→ (R, B(R)),
we can associate a probability measure P_X on R by

$$P_X(B) := P\big(X^{-1}(B)\big)$$

for B ∈ B(R), or

$$\int_{\mathbb{R}} f\, dP_X = \int_\Omega f \circ X\, dP$$

for f : R −→ R a bounded measurable function. The probability measure P_X
is called the distribution of X with respect to P.
This construction is not limited to single random variables, as we can also
define the joint distribution of an n-tuple X = (X₁, ..., Xₙ) of real random
variables by

$$P_{(X_1,\dots,X_n)}(B) := P\big(X^{-1}(B)\big)$$

for B ∈ B(Rn ). We shall see that the distribution of a single noncommutative


random variable can be defined similarly as in the “classical” (or commutative)
case, but that the joint distribution of noncommuting random variables requires
a more careful discussion.
Still quoting Reference [87], “real valued random variables on a clas-
sical probability space (Ω, F, P) when viewed as selfadjoint multiplication
operators in the Hilbert space L²(Ω) are special examples in the quantum
description. This suggests the possibility of developing a theory of quantum
probability within the framework of operators and group representations in a
Hilbert space.”

4.2 Noncommutative probability spaces


Next is the most fundamental definition in quantum (or noncommutative)
probability.

Definition 4.2.1 A quantum probability space is a pair (A, Φ) consisting of
a unital associative *-algebra A and a positive normalised functional (called
a state) Φ : A −→ C.
By “unital associative *-algebra” we mean that A is a vector space


over the field of complex numbers C, equipped with an associative bilinear
multiplication
m : A × A −→ A,
(a, b)  −→ m(a, b) = ab,
with an element IA (called the unit of A ) such that aIA = IA a = a for all
a ∈ A, and a map
∗ : A −→ A,
a  −→ a∗ ,
(called an involution) such that

$$\begin{cases} (a^*)^* = a & (*\ \text{is involutive}), \\ (\lambda a + \mu b)^* = \bar\lambda a^* + \bar\mu b^* & (*\ \text{is conjugate linear}), \\ (ab)^* = b^* a^* & (*\ \text{is anti-multiplicative}), \end{cases}$$
for all a, b ∈ A, λ, μ ∈ C. From now on, by “algebra” we will mean a unital
associative *-algebra. By a positive normalised functional or state on an
algebra we mean a map

Φ : A −→ C,  a ↦ Φ(a),

such that

$$\begin{cases} \Phi(\lambda a + \mu b) = \lambda\Phi(a) + \mu\Phi(b) & (\Phi\ \text{is linear}), \\ \Phi(a^* a) \ge 0 & (\Phi\ \text{is positive}), \\ \Phi(I_A) = 1 & (\Phi\ \text{is normalised}), \end{cases}$$

for all a, b ∈ A, λ, μ ∈ C.
First, we note that the “classical” probability spaces described in Section 4.1
can be viewed as special cases of quantum probability spaces.
Example 4.2.2 (Classical ⊆ Quantum) To a classical probability space
(Ω, F, P) we can associate a quantum probability space (A, Φ) by taking
• A := L∞(Ω, F, P), the algebra of bounded measurable functions
f : Ω −→ C, called the algebra of random variables. The involution is
given by pointwise complex conjugation, f* = f̄, where f̄(ω) = $\overline{f(\omega)}$ for
ω ∈ Ω.
• Φ : A ∋ f ↦ E(f) = ∫_Ω f dP, which assigns to each random variable its
expected value.
4.2.1 Noncommutative examples


Other, genuinely noncommutative quantum probability spaces are motivated
by quantum mechanics.
Example 4.2.3 (Quantum mechanics) Let h be a Hilbert space, with a unit
vector ψ. Then the quantum probability space associated to (h, ψ) is given by

• A = B(h), the algebra of bounded linear operators X : h −→ h. Self-adjoint
(or normal) operators are called quantum random variables.
• Φ : B(h) ∋ X ↦ Φ(X) = ⟨ψ, Xψ⟩.
Suppose now that h is a finite dimensional complex Hilbert space, i.e.,
h = Cⁿ with the inner product

$$\langle x, y\rangle := \sum_{k=1}^n \overline{x_k}\, y_k$$

and the norm

$$\|x\| = \sqrt{\langle x, x\rangle}, \qquad x, y \in \mathbb{C}^n.$$
A linear operator X ∈ B(Cⁿ) is simply a linear map

X : Cⁿ −→ Cⁿ,

or equivalently a matrix X = (x_{jk})_{1≤j,k≤n} ∈ Mₙ(C) that acts on a vector v =
(v_k)_{1≤k≤n} ∈ Cⁿ by matrix multiplication,

$$Xv = \left(\sum_{k=1}^n x_{jk} v_k\right)_{j=1,\dots,n},$$

i.e.,

$$\begin{bmatrix} x_{11} & x_{12} & \dots & x_{1n} \\ x_{21} & x_{22} & \dots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \dots & x_{nn} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} x_{11}v_1 + x_{12}v_2 + \cdots + x_{1n}v_n \\ x_{21}v_1 + x_{22}v_2 + \cdots + x_{2n}v_n \\ \vdots \\ x_{n1}v_1 + x_{n2}v_2 + \cdots + x_{nn}v_n \end{bmatrix}.$$

The involution on A = B(Cⁿ) = Mₙ(C) is defined by complex conjugation
and transposition, i.e., X* = X̄ᵀ, or equivalently by

$$\begin{bmatrix} x_{11} & x_{12} & \dots & x_{1n} \\ x_{21} & x_{22} & \dots & x_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \dots & x_{nn} \end{bmatrix}^* = \begin{bmatrix} \overline{x_{11}} & \overline{x_{21}} & \dots & \overline{x_{n1}} \\ \overline{x_{12}} & \overline{x_{22}} & \dots & \overline{x_{n2}} \\ \vdots & \vdots & \ddots & \vdots \\ \overline{x_{1n}} & \overline{x_{2n}} & \dots & \overline{x_{nn}} \end{bmatrix},$$
where x̄ᵢⱼ denotes the complex conjugate of xᵢⱼ, 1 ≤ i, j ≤ n. For any unit vector
ψ ∈ h we can define a state Φ : Mₙ(C) −→ C by

$$\Phi(X) = \langle \psi, X\psi\rangle, \qquad X \in M_n(\mathbb{C}).$$

The following example shows how to construct any Bernoulli distribution on
the algebra A = M₂(C) of 2 × 2 complex matrices.
Example 4.2.4 (M₂(C)) Let us consider A = M₂(C) with the state

$$\Phi(B) = \left\langle \begin{bmatrix}1\\0\end{bmatrix}, B\begin{bmatrix}1\\0\end{bmatrix} \right\rangle$$

for B ∈ M₂(C), and the quantum random variable

$$X = \begin{bmatrix} a & b \\ \bar b & c \end{bmatrix}$$

with a, c ∈ R and b ∈ C. Then the first three moments can be computed as
follows:

\begin{align*}
\Phi(X) &= \left\langle \begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix} a & b \\ \bar b & c\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\right\rangle = a, \\
\Phi(X^2) &= \left\langle \begin{bmatrix}1\\0\end{bmatrix}, \begin{bmatrix} a^2 + |b|^2 & b(a+c) \\ \bar b(a+c) & c^2 + |b|^2 \end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}\right\rangle = a^2 + |b|^2, \\
\Phi(X^3) &= a^3 + (2a+c)|b|^2, \tag{4.1}
\end{align*}

and so on.

A natural question is then: can we find a general formula for the moments
Φ(Xᵏ) of X? The answer is given by

$$\Phi(X^n) = \int x^n\, \mu_X(dx), \qquad n \in \mathbb{N}, \tag{4.2}$$

where μ_X is the probability measure on R defined by

$$\mu_X(dx) := a_1\delta_{\lambda_1}(dx) + a_2\delta_{\lambda_2}(dx), \tag{4.3}$$

where

$$a_1 = \frac{a - c + \sqrt{(a-c)^2 + 4|b|^2}}{2\sqrt{(a-c)^2 + 4|b|^2}}, \qquad a_2 = \frac{c - a + \sqrt{(a-c)^2 + 4|b|^2}}{2\sqrt{(a-c)^2 + 4|b|^2}}, \tag{4.4}$$

and

$$\lambda_1 = \frac{a + c + \sqrt{(a-c)^2 + 4|b|^2}}{2}, \qquad \lambda_2 = \frac{a + c - \sqrt{(a-c)^2 + 4|b|^2}}{2}. \tag{4.5}$$
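Formulas (4.2)-(4.5) can be verified numerically for a sample Hermitian 2 × 2 matrix by comparing matrix powers against the two-point measure; parameter values below are arbitrary.

```python
# Check that <e_1, X^n e_1> = a1*lam1^n + a2*lam2^n for X = [[a, b], [conj(b), c]].
import math

a, c = 1.2, -0.4
b = 0.3 + 0.7j

d = math.sqrt((a - c) ** 2 + 4 * abs(b) ** 2)
lam1, lam2 = (a + c + d) / 2, (a + c - d) / 2
a1 = (a - c + d) / (2 * d)
a2 = (c - a + d) / (2 * d)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X = [[a, b], [b.conjugate(), c]]
P = [[1, 0], [0, 1]]
for n in range(1, 6):
    P = matmul(P, X)
    phi = P[0][0]                       # <e_1, X^n e_1>
    mu_moment = a1 * lam1 ** n + a2 * lam2 ** n
    assert abs(phi - mu_moment) < 1e-9
print("moments of X match the spectral measure for n = 1..5")
```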
Proof. The characteristic polynomial P_X(z) of X is given by

\begin{align*}
P_X(z) &= \det(zI - X) \\
&= \det\begin{bmatrix} z - a & -b \\ -\bar b & z - c \end{bmatrix} \\
&= (z-a)(z-c) - |b|^2 \\
&= z^2 - (a+c)z + ca - |b|^2, \qquad z \in \mathbb{C}.
\end{align*}

We note that the zeroes λ₁, λ₂ of the characteristic polynomial P_X of X are
given by (4.5), and they are real. Hence for any z ∈ C with ℑ(z) ≠ 0, we have
det(zI − X) ≠ 0 and we can compute the inverse

$$R_X(z) = (zI - X)^{-1}$$

of zI − X, also called the resolvent of X, as

$$(zI - X)^{-1} = \frac{1}{z^2 - (a+c)z + ca - |b|^2} \begin{bmatrix} z - c & b \\ \bar b & z - a \end{bmatrix}.$$
The expectation

$$\Phi\big(R_X(z)\big) = \left\langle \begin{bmatrix}1\\0\end{bmatrix}, R_X(z)\begin{bmatrix}1\\0\end{bmatrix}\right\rangle = \frac{z - c}{z^2 - (a+c)z + ca - |b|^2}$$

of the resolvent in the state Φ can be written by partial fraction decomposition
as follows:

$$\Phi\big(R_X(z)\big) = \frac{z - c}{(z - \lambda_1)(z - \lambda_2)} = \frac{a_1}{z - \lambda_1} + \frac{a_2}{z - \lambda_2}$$

with

$$a_1 = \lim_{z\to\lambda_1} (z - \lambda_1)\,\Phi\big(R_X(z)\big) = \frac{\lambda_1 - c}{\lambda_1 - \lambda_2} = \frac{a - c + \sqrt{(a-c)^2 + 4|b|^2}}{2\sqrt{(a-c)^2 + 4|b|^2}},$$

and

$$a_2 = \lim_{z\to\lambda_2} (z - \lambda_2)\,\Phi\big(R_X(z)\big) = \frac{c - a + \sqrt{(a-c)^2 + 4|b|^2}}{2\sqrt{(a-c)^2 + 4|b|^2}},$$

as in (4.4).
Note that we have 0 ≤ a₁, a₂ and a₁ + a₂ = 1, so that μ_X(dx) defined in
(4.3) is indeed a probability measure on R.
We have shown that the expectation of R_X(z) satisfies

$$\Phi\big(R_X(z)\big) = \frac{z - c}{z^2 - (a+c)z + ca - |b|^2} = \frac{a_1}{z - \lambda_1} + \frac{a_2}{z - \lambda_2} = \int_{-\infty}^\infty \frac{1}{z - x}\,\mu_X(dx)$$

for z ∈ C\R. From the geometric series

$$R_X(z) = (zI - X)^{-1} = \frac{1}{z}\left(I - \frac{X}{z}\right)^{-1} = \sum_{n=0}^\infty \frac{X^n}{z^{n+1}},$$

and

$$\frac{1}{z - x} = \sum_{n=0}^\infty \frac{x^n}{z^{n+1}},$$

which converge uniformly for |z| sufficiently large, we get

$$\sum_{n=0}^\infty \Phi(X^n)\, z^{-n-1} = \sum_{n=0}^\infty z^{-n-1} \int_{-\infty}^\infty x^n\, \mu_X(dx)$$

for |z| sufficiently large, and finally we check that (4.2) holds for all n ∈ N. □

Remark 4.2.5 The function G_μ : C\R −→ C defined by

$$G_\mu(z) = \int_I \frac{1}{z - x}\,\mu(dx),$$

where I is an interval of the real line R, is called the Cauchy–Stieltjes
transform of μ, cf. the appendix Section A.4.
Definition 4.2.6 Let (A, Φ) be a quantum probability space and X ∈ A be a
self-adjoint quantum random variable. Then we call a probability measure μ
on R the law (or distribution) of X with respect to the state Φ if

$$\Phi(X^k) = \int_{-\infty}^\infty x^k\,\mu(dx)$$

for all k ∈ N.
Note that the law of a quantum variable X is defined with respect to the
state Φ. If μ is the law of X in the state Φ, we shall write

$$\mathcal{L}_\Phi(X) = \mu.$$

In general, the law of a quantum random variable might not be unique
(this is related to the moment uniqueness (or determinacy) problem, see
Reference [4]).

In the previous example we have determined the law of $X = \begin{bmatrix} a & b \\ \bar b & c\end{bmatrix}$
with respect to the state Φ given by the vector $\begin{bmatrix}1\\0\end{bmatrix}$; we found

$$\mathcal{L}_\Phi\!\left(\begin{bmatrix} a & b \\ \bar b & c\end{bmatrix}\right) = a_1\delta_{\lambda_1} + a_2\delta_{\lambda_2},$$

where a₁ and a₂ are given by (4.4).


We will now study the more general case of the distribution of Hermitian
matrices. For this we shall use the spectral theorem and the functional calculus
presented in Section 4.3.

Theorem 4.2.7 (Spectral theorem) A Hermitian linear map X ∈ B(Cⁿ) can
be written in the form

$$X = \sum_{\lambda\in\sigma(X)} \lambda E_\lambda$$

where σ(X) denotes the spectrum of X (= set of eigenvalues) and E_λ the
orthogonal projection onto the eigenspace of X associated to the eigenvalue λ,

$$V(X, \lambda) = \{v \in \mathbb{C}^n : Xv = \lambda v\}.$$

4.3 Noncommutative random variables


Random variables over a quantum probability space (A, Φ) can be defined in
several ways. Hermitian elements X ∈ A can be considered noncommutative
real-valued random variables, since the distribution of X in the state Φ defines
a probability measure on R. This is true in particular when (A, Φ) is based on
a classical probability space.
In general, we will need a more flexible notion that generalises the afore-
mentioned setting. Recall that a random variable on a classical probability
space (Ω, F, P) with values in a measurable space (M, M) is a measur-
able map

X : (Ω, F, P) −→ (M, M).

Such a map X induces a *-algebra homomorphism

j_X : L∞(M) −→ L∞(Ω)

by the composition

j_X(f) = f ∘ X,  f ∈ L∞(M).

In classical probability the composition f ∘ X is usually denoted by f(X) by
letting the function f act on the “variable” X. In quantum probability, on
the other hand, we opt for the opposite (or dual) point of view, by letting
the random variable X act on the function algebra L∞(M). This leads to the
following definition.

Definition 4.3.1 A quantum (or noncommutative) random variable on an
algebra B over a quantum probability space (A, Φ) is a unital *-algebra
homomorphism:

j : B −→ (A, Φ).

Note that by an “algebra” we actually refer to a “unital *-algebra”. By unital


*-algebra homomorphism we mean that j preserves the algebraic structure of
B, i.e., j is

i) linear: we have

j(λa + μb) = λj(a) + μj(b), a, b ∈ B,

λ, μ ∈ C;
ii) multiplicative: we have

j(ab) = j(a)j(b), a, b ∈ B;

iii) unit-preserving: we have j(IB ) = IA ;


iv) involutive: we have j(b∗ ) = j(b)∗ for b ∈ B.

Note that this definition extends the construction of quantum random variable
given earlier when (A, Φ) is based on a classical probability space. If X ∈ A
is Hermitian, then we can define a quantum random variable j_X on the algebra
C[x] of polynomials in a Hermitian variable x by setting

j_X(P(x)) = P(X),  P ∈ C[x].
56 Noncommutative random variables

Definition 4.3.2 The state Φ_j := Φ ∘ j induced on B by

j : B −→ (A, Φ)

is called the distribution (or law) of j with respect to Φ.

When B is replaced by a real Lie algebra g, we have to modify the definition


of a random variable.

Definition 4.3.3 A quantum (or noncommutative) random variable on a real
Lie algebra g over a quantum probability space (A, Φ) is a Hermitian Lie
algebra homomorphism

j : g −→ (A, Φ).

By a “Hermitian Lie algebra homomorphism” we mean that j has the


following properties:

i) Linearity: we have

j(λX + μY) = λj(X) + μj(Y), X, Y ∈ g,

λ, μ ∈ R;
ii) Lie algebra homomorphism: we have


j [X, Y] = j(X)j(Y) − j(Y)j(X), X, Y ∈ g;

iii) Hermitianity: we have j(X)∗ = −j(X), X ∈ g.

4.3.1 Where are noncommutative random variables valued?


We have now constructed various random variables and probability distribu-
tions from real Lie algebras. When restricted to a single Hermitian element,
or to the commutative algebra it generates, noncommutative random variables
are distributed over the real line, so we could think of these restrictions as real-
valued random variables. But where do the random variables themselves really
take their values?
In Definition 4.3.1 we have seen that the notion of quantum random variable

j : B −→ (A, Φ)

extends the notion of classical X-valued random variable constructed on a
classical probability space underlying (A, Φ) and taking values in a space
X, with B an algebra of functions on X. Since most of our Lie algebras
are noncommutative, they cannot be genuine function algebras. However, the
elements of a Lie algebra g can be regarded as functions on its dual g*, so that
a random variable of the form

j : g −→ (A, Φ)

can be viewed as taking values in g*. In that sense, the terminology “probability
on duals of real Lie algebras” better reflects the dualisation which is implicit in
the definition of quantum probability spaces and quantum random variables.
For simplicity and convenience we nonetheless work with the less precise
terminology of “probability on real Lie algebras”.

4.4 Functional calculus for Hermitian matrices


Let n ∈ N and let A ∈ Mₙ(C) be a Hermitian matrix, i.e., we have A* = A, and
let f : R −→ R be any function. Then there are several equivalent methods to
define f(A), cf. e.g., Exercise 4.1.
Example 4.4.1 Let P ∈ B(Cn ) be an orthogonal projection, i.e., P satisfies

P2 = P = P∗ .

If P is a non-trivial orthogonal projection, i.e., P ≠ 0 and P ≠ I, then P has
two eigenvalues λ₁ = 0 and λ₂ = 1, with the eigenspaces

V(P, 0) = ker(P) = {v ∈ Cn : Pv = 0},
V(P, 1) = range(P) = {v ∈ Cn : ∃ w ∈ Cn such that v = Pw}.

The operator f (P) depends only on the values of f at λ1 = 0 and λ2 = 1, and


we have

f (P) = f (0)(I − P) + f (1)P,

since P is the orthogonal projection onto V(P, 1) = range(P) and I − P is the


orthogonal projection onto V(P, 0) = ker(P).
Let us now describe the law of a Hermitian matrix with respect to an
arbitrary state.

Theorem 4.4.2 Let Φ be a state on Mₙ(C) and let X ∈ Mₙ(C) be a Hermitian
matrix with spectral decomposition

$$X = \sum_{\lambda\in\sigma(X)} \lambda E_\lambda.$$

Then the law of X with respect to Φ is given by

$$\mathcal{L}_\Phi(X) = \sum_{\lambda\in\sigma(X)} \Phi(E_\lambda)\,\delta_\lambda,$$

where σ(X) is the spectrum of X.

Proof: For any function f : R −→ R we have

$$f(X) = \sum_{\lambda\in\sigma(X)} f(\lambda)\, E_\lambda,$$

and therefore, by linearity of Φ : Mₙ(C) −→ C,

$$\Phi\big( f(X)\big) = \sum_{\lambda\in\sigma(X)} f(\lambda)\,\Phi(E_\lambda) = \int f(x)\,\mu(dx)$$

with μ = Σ_{λ∈σ(X)} Φ(E_λ)δ_λ. Since this is true in particular for the functions
f(x) = xᵏ with k ∈ N, we can conclude that the law L_Φ(X) of X in the state Φ
is given by

$$\mathcal{L}_\Phi(X) = \sum_{\lambda\in\sigma(X)} \Phi(E_\lambda)\,\delta_\lambda.$$

Example 4.4.3
a) If P is a non-trivial orthogonal projection, then σ(P) = {0, 1} and we find

$$\mathcal{L}_\Phi(P) = \Phi(P)\,\delta_1 + \Phi(I - P)\,\delta_0.$$

Since in this sense orthogonal projections can only take the values 0 and
1, they can be considered as the quantum probabilistic analogue of events,
i.e., random experiments that have only two possible outcomes – “yes” and
“no” (or “true” and “false”).
b) Consider now the case where Φ is a vector state, i.e.,

$$\Phi(B) = \langle \psi, B\psi\rangle, \qquad B \in M_n(\mathbb{C}),$$

for some unit vector ψ ∈ Cⁿ. Let

$$X = \sum_{\lambda\in\sigma(X)} \lambda E_\lambda$$

be a quantum random variable in (Mₙ(C), Φ), then the weights Φ(E_λ) in
the law of X with respect to Φ,

$$\mathcal{L}_\Phi(X) = \sum_{\lambda\in\sigma(X)} \Phi(E_\lambda)\,\delta_\lambda,$$

are given by

$$\Phi(E_\lambda) = \langle \psi, E_\lambda\psi\rangle = \|E_\lambda\psi\|^2,$$

i.e., the probability with which X takes a value λ with respect to the state
associated to ψ is exactly the square of the length of the projection of ψ
onto the eigenspace V(X, λ),

$$\mathcal{L}_\Phi(X) = \sum_{\lambda\in\sigma(X)} \|E_\lambda\psi\|^2\,\delta_\lambda.$$
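The vector-state weights ||E_λψ||² can be checked explicitly in the 2 × 2 case, where the eigenvectors are available in closed form; all numerical values below are arbitrary.

```python
# For X = [[a, b], [conj(b), c]] with b != 0, v_lam = (b, lam - a) is an
# eigenvector for eigenvalue lam; check that the weights |<v_lam, psi>|^2
# sum to 1 and reproduce the first moment <psi, X psi>.
import math

a, c = 0.8, -1.1
b = 0.5 - 0.2j
d = math.sqrt((a - c) ** 2 + 4 * abs(b) ** 2)
lams = [(a + c + d) / 2, (a + c - d) / 2]

def eigvec(lam):
    v = (b, lam - a)
    n = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / n, v[1] / n)

psi = (0.6, 0.8j)  # a unit vector
weights = []
for lam in lams:
    v = eigvec(lam)
    ip = v[0].conjugate() * psi[0] + v[1].conjugate() * psi[1]  # <v, psi>
    weights.append(abs(ip) ** 2)

assert abs(sum(weights) - 1) < 1e-10
Xpsi = (a * psi[0] + b * psi[1], b.conjugate() * psi[0] + c * psi[1])
m1 = (psi[0].conjugate() * Xpsi[0] + psi[1].conjugate() * Xpsi[1]).real
assert abs(sum(w * l for w, l in zip(weights, lams)) - m1) < 1e-10
print("vector-state law weights ||E_lam psi||^2 verified")
```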

4.5 The Lie algebra so(3)


In this section we consider the real Lie algebra so(3) with basis consisting of
the three anti-Hermitian elements ξ₁, ξ₂, ξ₃ defined as

$$\xi_1 = \begin{bmatrix} 0 & 1 & 0 \\ -1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \xi_2 = \begin{bmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad \xi_3 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & -1 & 0 \end{bmatrix},$$

with the commutation relations

$$[\xi_1, \xi_2] = \xi_3, \qquad [\xi_2, \xi_3] = \xi_1, \qquad [\xi_3, \xi_1] = \xi_2.$$

Given $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix} \in \mathbb{R}^3$, we let ξ(x) = x₁ξ₁ + x₂ξ₂ + x₃ξ₃ define an anti-
Hermitian general element of so(3), i.e., ξ(x)* = −ξ(x), cf. Section 2.6.

4.5.1 Two-dimensional representation of so(3)


In the two-dimensional representation,

$$\xi_0 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \qquad \xi_+ = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \qquad \xi_- = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix},$$

of so(3) on h = C² with respect to the basis {e₁, e₋₁}, i.e.,

$$\xi_1 = -\frac{i}{2}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad \xi_2 = -\frac{1}{2}\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}, \qquad \xi_3 = -\frac{i}{2}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},$$

we get

$$\xi(x) = -\frac{i}{2}\begin{bmatrix} x_3 & x_1 - ix_2 \\ x_1 + ix_2 & -x_3 \end{bmatrix} \qquad \text{and} \qquad J(x) = \frac{1}{2}\begin{bmatrix} x_3 & x_1 - ix_2 \\ x_1 + ix_2 & -x_3 \end{bmatrix}.$$

We note that two vectors $\psi$ and $\lambda\psi$ that differ only by a complex factor $\lambda$ with modulus $|\lambda| = 1$ define the same state, since we have
$$\langle X\lambda\psi, \lambda\psi \rangle = |\lambda|^2 \langle X\psi, \psi \rangle = \langle X\psi, \psi \rangle.$$
As a consequence, up to the aforementioned equivalence, any vector state on $\mathbb{C}^2$ can be characterised by a vector of the form
$$\psi = \cos\frac{\theta}{2}\,e_1 + e^{i\phi}\sin\frac{\theta}{2}\,e_{-1} = \begin{bmatrix} \cos\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2} \end{bmatrix},$$
with $\theta \in [0, \pi)$, $\phi \in [0, 2\pi)$. In order to determine the distribution of $J(x)$ with respect to the state given by the vector $\psi$, we first compute the exponential of $t\xi(x)$. Note that we have
$$\xi(x)^2 = -\frac{1}{4}\begin{bmatrix} x_1^2 + x_2^2 + x_3^2 & 0 \\ 0 & x_1^2 + x_2^2 + x_3^2 \end{bmatrix} = -\frac{\|x\|^2}{4}\,I,$$
where $\|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2}$ denotes the norm of $x$. By induction we get
$$\xi(x)^k = \begin{cases} (-1)^{\ell}\,\dfrac{\|x\|^{2\ell}}{2^{2\ell}}\,\xi(x), & \text{if } k = 2\ell + 1 \text{ is odd}, \\[2ex] (-1)^{\ell}\,\dfrac{\|x\|^{2\ell}}{2^{2\ell}}\,I, & \text{if } k = 2\ell \text{ is even}. \end{cases}$$
Therefore we have
$$\exp t\xi(x) = I + t\xi(x) - \frac{t^2}{2!}\frac{\|x\|^2}{4}\,I - \frac{t^3}{3!}\frac{\|x\|^2}{4}\,\xi(x) + \frac{t^4}{4!}\frac{\|x\|^4}{16}\,I \pm \cdots$$
$$= \cos\Big(\frac{t}{2}\|x\|\Big)\,I + \frac{2}{\|x\|}\sin\Big(\frac{t}{2}\|x\|\Big)\,\xi(x). \tag{4.6}$$

For the Fourier transform of the distribution of the quantum random variable $J(x)$ with respect to the state given by $\psi$, this yields
$$\langle \psi, \exp\big(itJ(x)\big)\psi \rangle = \langle \psi, \exp\big(-t\xi(x)\big)\psi \rangle = \cos\Big(\frac{t}{2}\|x\|\Big)\langle \psi, I\psi \rangle - \frac{2}{\|x\|}\sin\Big(\frac{t}{2}\|x\|\Big)\langle \psi, \xi(x)\psi \rangle.$$

But
$$\langle \psi, \xi(x)\psi \rangle = \Big\langle \begin{bmatrix} \cos\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2} \end{bmatrix}, -\frac{i}{2}\begin{bmatrix} x_3 & x_1 - ix_2 \\ x_1 + ix_2 & -x_3 \end{bmatrix}\begin{bmatrix} \cos\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2} \end{bmatrix} \Big\rangle$$
$$= -\frac{i}{2}\Big( x_1\big(e^{i\phi} + e^{-i\phi}\big)\sin\frac{\theta}{2}\cos\frac{\theta}{2} + ix_2\big(e^{-i\phi} - e^{i\phi}\big)\sin\frac{\theta}{2}\cos\frac{\theta}{2} + x_3\Big(\cos^2\frac{\theta}{2} - \sin^2\frac{\theta}{2}\Big) \Big)$$
$$= -\frac{i}{2}\big( x_1\sin\theta\cos\phi + x_2\sin\theta\sin\phi + x_3\cos\theta \big) = -\frac{i}{2}\Big\langle \begin{bmatrix} \cos\phi\sin\theta \\ \sin\phi\sin\theta \\ \cos\theta \end{bmatrix}, \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} \Big\rangle = -\frac{i}{2}\langle B(\psi), x \rangle,$$
where the vector $\psi = e_1\cos\frac{\theta}{2} + e_{-1}e^{i\phi}\sin\frac{\theta}{2}$ is visualised as the point
$$B(\psi) = \begin{bmatrix} \cos\phi\sin\theta \\ \sin\phi\sin\theta \\ \cos\theta \end{bmatrix}$$
on the unit sphere¹ with polar coordinates $(\theta, \phi)$ in $\mathbb{R}^3$. Let us now denote by
$$\gamma := \Big\langle B(\psi), \frac{x}{\|x\|} \Big\rangle \in [-1, 1]$$
the cosine of the angle between $B(\psi)$ and $x$. We have
$$\langle \psi, \exp\big(itJ(x)\big)\psi \rangle = \cos\Big(\frac{t\|x\|}{2}\Big) + i\gamma\sin\Big(\frac{t\|x\|}{2}\Big),$$
which shows that the distribution of the Hermitian element $J(x)$ in the state associated to the vector $\psi$ is given by
$$\mathcal{L}\big(J(x)\big) = \frac{1-\gamma}{2}\,\delta_{-\|x\|/2} + \frac{1+\gamma}{2}\,\delta_{\|x\|/2}.$$
We find (again) a Bernoulli distribution with parameters $p = (1+\gamma)/2$ and $q = (1-\gamma)/2$.
More generally, a state defined from an $n$-dimensional representation will yield a discrete distribution supported on $n$ equally spaced points, with spacing $\|x\|$, placed symmetrically around 0.

1 The unit sphere is also called the Bloch sphere in this case.
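The Bernoulli law $(1\mp\gamma)/2$ at $\pm\|x\|/2$ can be confirmed numerically. In the sketch below (our own illustration), the weights obtained by diagonalising $J(x)$ are compared with the predicted ones:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])            # direction, ||x|| = 3
theta, phi = 0.7, 1.3                    # Bloch angles of the state vector

# J(x) in the two-dimensional representation
x1, x2, x3 = x
J = 0.5 * np.array([[x3, x1 - 1j * x2],
                    [x1 + 1j * x2, -x3]])

# psi = cos(theta/2) e_1 + e^{i phi} sin(theta/2) e_{-1}
psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

vals, vecs = np.linalg.eigh(J)
law = {round(float(v), 10): abs(np.vdot(w, psi)) ** 2
       for v, w in zip(vals, vecs.T)}

# gamma = <B(psi), x/||x||>, the cosine of the angle between B(psi) and x
B = np.array([np.cos(phi) * np.sin(theta),
              np.sin(phi) * np.sin(theta),
              np.cos(theta)])
gamma = float(B @ x / np.linalg.norm(x))
```

The dictionary `law` then carries the weights $(1-\gamma)/2$ and $(1+\gamma)/2$ at the eigenvalues $\mp\|x\|/2$.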

4.5.2 Three-dimensional representation of so(3) on h = C^3

For $n = 2$, i.e., in the three-dimensional representation of so(3), Equations (2.5a), (2.5b), and (2.5c) give
$$\xi(x) = -i\begin{bmatrix} x_3 & \frac{x_1 - ix_2}{\sqrt{2}} & 0 \\[1ex] \frac{x_1 + ix_2}{\sqrt{2}} & 0 & \frac{x_1 - ix_2}{\sqrt{2}} \\[1ex] 0 & \frac{x_1 + ix_2}{\sqrt{2}} & -x_3 \end{bmatrix}$$
and
$$J(x) = \begin{bmatrix} x_3 & \frac{x_1 - ix_2}{\sqrt{2}} & 0 \\[1ex] \frac{x_1 + ix_2}{\sqrt{2}} & 0 & \frac{x_1 - ix_2}{\sqrt{2}} \\[1ex] 0 & \frac{x_1 + ix_2}{\sqrt{2}} & -x_3 \end{bmatrix}.$$

Therefore, we have
$$\xi(x)^2 = -\begin{bmatrix} x_3^2 + \frac{x_1^2 + x_2^2}{2} & \frac{(x_1 - ix_2)x_3}{\sqrt{2}} & \frac{(x_1 - ix_2)^2}{2} \\[1ex] \frac{(x_1 + ix_2)x_3}{\sqrt{2}} & x_1^2 + x_2^2 & -\frac{(x_1 - ix_2)x_3}{\sqrt{2}} \\[1ex] \frac{(x_1 + ix_2)^2}{2} & -\frac{(x_1 + ix_2)x_3}{\sqrt{2}} & x_3^2 + \frac{x_1^2 + x_2^2}{2} \end{bmatrix}$$
and
$$\xi(x)^3 = -(x_1^2 + x_2^2 + x_3^2)\,\xi(x),$$
which implies by induction
$$\xi(x)^n = \begin{cases} I & \text{if } n = 0, \\ (-\|x\|^2)^m\,\xi(x) & \text{if } n \text{ is odd, } n = 2m + 1, \\ (-\|x\|^2)^m\,\xi(x)^2 & \text{if } n \geq 2 \text{ is even, } n = 2m + 2. \end{cases}$$

The exponential of $\xi(x)$ is given by
$$\exp t\xi(x) = \sum_{n=0}^{\infty} \frac{t^n}{n!}\,\xi(x)^n$$
$$= I + \frac{1}{\|x\|}\sum_{m=0}^{\infty} (-1)^m \frac{(t\|x\|)^{2m+1}}{(2m+1)!}\,\xi(x) + \frac{1}{\|x\|^2}\sum_{m=0}^{\infty} (-1)^m \frac{(t\|x\|)^{2m+2}}{(2m+2)!}\,\xi(x)^2$$
$$= I + \frac{\sin(t\|x\|)}{\|x\|}\,\xi(x) + \frac{1 - \cos(t\|x\|)}{\|x\|^2}\,\xi(x)^2.$$
This formula is known as Rodrigues’ rotation formula. We want to determine the law of $J(x)$ in the state given by
$$\psi = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix};$$
for this we have to calculate the first two moments
$$\langle \psi, \xi(x)\psi \rangle = -ix_3, \qquad \langle \psi, \xi(x)^2\psi \rangle = -\frac{x_1^2 + x_2^2 + 2x_3^2}{2}.$$
Thus we have
$$\langle \psi, \exp\big(itJ(x)\big)\psi \rangle = \langle \psi, \exp\big(-t\xi(x)\big)\psi \rangle = 1 + ix_3\,\frac{\sin(t\|x\|)}{\|x\|} - \frac{(x_1^2 + x_2^2 + 2x_3^2)\big(1 - \cos(t\|x\|)\big)}{2\|x\|^2},$$
which shows that $J(x)$ has distribution
$$\mathcal{L}\big(J(x)\big) = \frac{(1-\gamma)^2}{4}\,\delta_{-\|x\|} + \frac{1-\gamma^2}{2}\,\delta_0 + \frac{(1+\gamma)^2}{4}\,\delta_{\|x\|},$$
where $\gamma = x_3/\|x\|$ is the cosine of the angle between $\psi$ and $x$. This is a binomial distribution with parameters $n = 2$ and $p = (1+\gamma)/2$.
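As in the two-dimensional case, the binomial weights can be checked numerically (an illustrative sketch of our own, not part of the text):

```python
import numpy as np

x1, x2, x3 = 2.0, 1.0, 2.0               # ||x|| = 3
r = np.sqrt(x1 ** 2 + x2 ** 2 + x3 ** 2)

# J(x) in the three-dimensional representation
a = (x1 - 1j * x2) / np.sqrt(2.0)
J = np.array([[x3, a, 0.0],
              [np.conj(a), 0.0, a],
              [0.0, np.conj(a), -x3]])

psi = np.array([1.0, 0.0, 0.0])          # the state vector used in the text

vals, vecs = np.linalg.eigh(J)
# law in units of ||x||: keys are -1, 0, +1
law = {round(float(v) / r, 10): abs(np.vdot(w, psi)) ** 2
      for v, w in zip(vals, vecs.T)}

gamma = x3 / r
predicted = {-1.0: (1 - gamma) ** 2 / 4,
             0.0: (1 - gamma ** 2) / 2,
             1.0: (1 + gamma) ** 2 / 4}
```

Here `law` and `predicted` agree: the spectrum is $\{-\|x\|, 0, \|x\|\}$ and the weights are the binomial probabilities with $p = (1+\gamma)/2$.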

4.5.3 Two-dimensional representation of so(3)


Let us start by defining another model for the two-dimensional representation of so(3). We take the two-dimensional Hilbert space $L^2(\{-1, 1\}, b_p)$, where $b_p$ denotes the Bernoulli distribution
$$b_p = p\,\delta_{+1} + q\,\delta_{-1},$$
with $0 < p < 1$, $q = 1 - p$. We define the representation on the basis vectors $\mathbf{1}_{\{+1\}}$ and $\mathbf{1}_{\{-1\}}$ by
$$\xi_0 \mathbf{1}_{\{x\}} = x\,\mathbf{1}_{\{x\}},$$
$$\xi_+ \mathbf{1}_{\{x\}} = \begin{cases} 0 & \text{if } x = +1, \\ \sqrt{\dfrac{q}{p}}\,\mathbf{1}_{\{+1\}} & \text{if } x = -1, \end{cases}
\qquad
\xi_- \mathbf{1}_{\{x\}} = \begin{cases} \sqrt{\dfrac{p}{q}}\,\mathbf{1}_{\{-1\}} & \text{if } x = +1, \\ 0 & \text{if } x = -1, \end{cases}$$

for $x \in \{-1, +1\}$. Clearly, $\xi_0$ is Bernoulli distributed in the state given by the constant function $\mathbf{1}$, i.e., $\mathcal{L}_{\mathbf{1}}(\xi_0) = b_p$. More generally, let us consider the elements
$$X_\theta = \cos(\theta)\,\xi_0 + \sin(\theta)(\xi_+ + \xi_-) = 2i\big(\cos(\theta)\,\xi_3 + \sin(\theta)\,\xi_1\big)$$
with $\theta \in [0, 2\pi)$. By Lemma 2.6.1, $X_\theta$ can be obtained from $X_0 = 2i\xi_3$ by a rotation around the second axis; more precisely,
$$X_\theta = 2i\,\xi\big(R_\theta(e_3)\big) = 2i\exp\big(\mathrm{ad}(\theta\xi_2)\big)\xi_3 = 2i\,e^{\theta\xi_2}\xi_3 e^{-\theta\xi_2},$$
where
$$R_\theta = \begin{bmatrix} \cos(\theta) & 0 & \sin(\theta) \\ 0 & 1 & 0 \\ -\sin(\theta) & 0 & \cos(\theta) \end{bmatrix}.$$
Therefore, we have
$$\langle \mathbf{1}, \exp(itX_\theta)\mathbf{1} \rangle = \langle \mathbf{1}, e^{\theta\xi_2}\exp(itX_0)e^{-\theta\xi_2}\mathbf{1} \rangle = \langle g_\theta, \exp(itX_0)g_\theta \rangle,$$
with
$$g_\theta = e^{-\theta\xi_2}\mathbf{1} = \cos\Big(\frac{\theta}{2}\Big)\mathbf{1} - 2\sin\Big(\frac{\theta}{2}\Big)\xi_2\mathbf{1}
= \Big(\cos\frac{\theta}{2} + \sqrt{\frac{q}{p}}\sin\frac{\theta}{2}\Big)\mathbf{1}_{\{+1\}} + \Big(\cos\frac{\theta}{2} - \sqrt{\frac{p}{q}}\sin\frac{\theta}{2}\Big)\mathbf{1}_{\{-1\}},$$
where we could use Equation (4.6) to compute the exponential of $-\theta\xi_2$, with
$$\xi_2 = \frac{1}{2}\begin{bmatrix} 0 & -\sqrt{q/p} \\ \sqrt{p/q} & 0 \end{bmatrix}$$
in the basis $\{\mathbf{1}_{\{+1\}}, \mathbf{1}_{\{-1\}}\}$.

We see that the law of $X_\theta$ has density $|g_\theta|^2$ with respect to the law of $X_0$, which gives
$$\mathcal{L}_{\mathbf{1}}(X_\theta) = p\Big(\cos\frac{\theta}{2} + \sqrt{\frac{q}{p}}\sin\frac{\theta}{2}\Big)^2 \delta_{+1} + q\Big(\cos\frac{\theta}{2} - \sqrt{\frac{p}{q}}\sin\frac{\theta}{2}\Big)^2 \delta_{-1}$$
$$= \frac{1}{2}\Big(1 + (2p-1)\cos(\theta) + 2\sqrt{pq}\sin(\theta)\Big)\delta_{+1} + \frac{1}{2}\Big(1 - (2p-1)\cos(\theta) - 2\sqrt{pq}\sin(\theta)\Big)\delta_{-1}.$$
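This law can be double-checked by a direct matrix computation. In the basis $\{\mathbf{1}_{\{+1\}}, \mathbf{1}_{\{-1\}}\}$ the operator $X_\theta$ is not symmetric, but conjugating with $D = \mathrm{diag}(\sqrt{p}, \sqrt{q})$ converts the weighted inner product of $L^2(\{-1,1\}, b_p)$ into the standard one (a sketch of our own, with parameters chosen for illustration):

```python
import numpy as np

p, theta = 0.3, 0.9
q = 1.0 - p

# X_theta = cos(theta) xi_0 + sin(theta)(xi_+ + xi_-) in the basis {1_{+1}, 1_{-1}}
X = np.array([[np.cos(theta), np.sqrt(q / p) * np.sin(theta)],
              [np.sqrt(p / q) * np.sin(theta), -np.cos(theta)]])

# D X D^{-1} is symmetric; the constant function 1 becomes the unit
# vector (sqrt(p), sqrt(q)) in the standard inner product
D = np.diag([np.sqrt(p), np.sqrt(q)])
Y = D @ X @ np.linalg.inv(D)
one = np.array([np.sqrt(p), np.sqrt(q)])

vals, vecs = np.linalg.eigh(Y)
law = {round(float(v), 10): abs(np.vdot(w, one)) ** 2 for v, w in zip(vals, vecs.T)}

# weights claimed in the text
w_plus = (1 + (2 * p - 1) * np.cos(theta) + 2 * np.sqrt(p * q) * np.sin(theta)) / 2
w_minus = (1 - (2 * p - 1) * np.cos(theta) - 2 * np.sqrt(p * q) * np.sin(theta)) / 2
```

Since $X_\theta^2 = I$, the spectrum is $\{-1, +1\}$, and the computed weights coincide with $w_\pm$ above.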

4.6 Trace and density matrix


We now describe an important state called the trace on the algebra Mn (C) of
complex n × n matrices. As we shall see, the trace can be used to give a useful
expression for arbitrary states.

Theorem 4.6.1 There exists a unique linear functional on $M_n(\mathbb{C})$ that satisfies the two conditions
$$\mathrm{tr}(I) = 1, \tag{4.7}$$
where $I$ denotes the identity matrix, and
$$\mathrm{tr}(AB) = \mathrm{tr}(BA), \qquad A, B \in M_n(\mathbb{C}). \tag{4.8}$$
This unique functional $\mathrm{tr} : M_n(\mathbb{C}) \to \mathbb{C}$ is called the trace (or normalised trace) on $M_n(\mathbb{C})$ and is given by
$$\mathrm{tr}(A) = \frac{1}{n}\sum_{j=1}^{n} a_{jj} \tag{4.9}$$
for $A = (a_{jk}) \in M_n(\mathbb{C})$. The trace is a state. We can also compute the trace of a matrix $A \in M_n(\mathbb{C})$ as
$$\mathrm{tr}(A) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, Ae_j \rangle, \tag{4.10}$$
where $\{e_1, \ldots, e_n\}$ is any orthonormal basis of $\mathbb{C}^n$.



Proof: a) Existence. It is straightforward to check that the functional defined in (4.9) does indeed satisfy the conditions (4.7) and (4.8). We have
$$\mathrm{tr}(I) = \mathrm{tr}\big((\delta_{jk})_{1\leq j,k\leq n}\big) = \frac{1}{n}\sum_{j=1}^{n} \delta_{jj} = 1$$
and
$$\mathrm{tr}(AB) = \mathrm{tr}\Big(\Big(\sum_{k=1}^{n} a_{jk}b_{k\ell}\Big)_{1\leq j,\ell\leq n}\Big) = \frac{1}{n}\sum_{j,\ell=1}^{n} a_{j\ell}b_{\ell j} = \frac{1}{n}\sum_{j,\ell=1}^{n} b_{\ell j}a_{j\ell} = \mathrm{tr}(BA).$$

b) Uniqueness. Denote by $e_{jk}$ with $1 \leq j, k \leq n$ the matrix units, i.e., $e_{jk}$ is the matrix with all coefficients equal to zero except for the coefficient in the $j$-th row and $k$-th column, which is equal to 1,
$$e_{jk} = (\delta_{jr}\delta_{ks})_{1\leq r,s\leq n}.$$
The $n^2$ matrix units $\{e_{11}, \ldots, e_{1n}, e_{21}, \ldots, e_{nn}\}$ form a basis of $M_n(\mathbb{C})$. Therefore two linear functionals coincide if they have the same values on all matrix units. For the trace tr we have
$$\mathrm{tr}(e_{jk}) = \begin{cases} 0 & \text{if } j \neq k, \\ 1/n & \text{if } j = k. \end{cases}$$
Note that we have the following formula for the multiplication of the matrix units,
$$e_{jk}e_{\ell m} = \delta_{k\ell}\,e_{jm}.$$



Let $f : M_n(\mathbb{C}) \to \mathbb{C}$ be a linear functional that satisfies conditions (4.7) and (4.8). We will show that $f$ takes the same values as tr on the matrix units, which then implies $f = \mathrm{tr}$ and therefore establishes uniqueness.
For $j \neq k$, we have $e_{jk} = e_{j1}e_{1k}$, $e_{1k}e_{j1} = 0$ and therefore
$$f(e_{jk}) = f(e_{j1}e_{1k}) = f(e_{1k}e_{j1}) = f(0) = 0,$$
since $f$ satisfies (4.8). We also have $e_{jk}e_{kj} = e_{jj}$, so (4.8) implies
$$f(e_{jj}) = f(e_{jk}e_{kj}) = f(e_{kj}e_{jk}) = f(e_{kk}),$$
for any $j, k \in \{1, \ldots, n\}$. This means that there exists a constant $c \in \mathbb{C}$ such that
$$f(e_{jj}) = c$$
for $j = 1, \ldots, n$. But it is easy to see that (4.7) implies $c = \frac{1}{n}$, and we have shown
$$f(e_{jk}) = \frac{\delta_{jk}}{n} = \mathrm{tr}(e_{jk})$$
for $1 \leq j, k \leq n$.

c) Proof of (4.10). Let $e_1, \ldots, e_n$ be an orthonormal basis of $\mathbb{C}^n$. To prove formula (4.10) it is sufficient to prove that the functional $f : M_n(\mathbb{C}) \to \mathbb{C}$ defined by
$$f(A) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, Ae_j \rangle$$
satisfies Equations (4.7) and (4.8). The first is obvious; we clearly have
$$f(I) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, Ie_j \rangle = \frac{1}{n}\sum_{j=1}^{n} \|e_j\|^2 = 1.$$
For (4.8) we use the identity
$$v = \sum_{j=1}^{n} \langle e_j, v \rangle\,e_j, \qquad v \in \mathbb{C}^n,$$
which develops a vector $v \in \mathbb{C}^n$ with respect to the basis $e_1, \ldots, e_n$. Let $A, B \in M_n(\mathbb{C})$; applying the formula to $Be_j$, we get
$$f(AB) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, ABe_j \rangle = \frac{1}{n}\sum_{j=1}^{n} \Big\langle e_j, A\sum_{\ell=1}^{n} \langle e_\ell, Be_j \rangle e_\ell \Big\rangle = \frac{1}{n}\sum_{j,\ell=1}^{n} \langle e_j, Ae_\ell \rangle \langle e_\ell, Be_j \rangle$$
$$= \frac{1}{n}\sum_{j,\ell=1}^{n} \langle e_\ell, Be_j \rangle \langle e_j, Ae_\ell \rangle = \frac{1}{n}\sum_{\ell=1}^{n} \Big\langle e_\ell, B\Big(\sum_{j=1}^{n} \langle e_j, Ae_\ell \rangle e_j\Big) \Big\rangle = \frac{1}{n}\sum_{\ell=1}^{n} \langle e_\ell, BAe_\ell \rangle = f(BA).$$
This formula shows that the trace is a state. We have
$$\mathrm{tr}(I) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, e_j \rangle = 1,$$
and, if $A \in M_n(\mathbb{C})$ is a positive matrix, then there exists a matrix $B \in M_n(\mathbb{C})$ such that $A = B^*B$, and we have
$$\mathrm{tr}(A) = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, Ae_j \rangle = \frac{1}{n}\sum_{j=1}^{n} \langle e_j, B^*Be_j \rangle = \frac{1}{n}\sum_{j=1}^{n} \|Be_j\|^2 \geq 0.$$

Let $\rho \in M_n(\mathbb{C})$ be a positive matrix with trace one. Then we can define a state on $M_n(\mathbb{C})$ by
$$\phi(A) = \mathrm{tr}(\rho A)$$
for $A \in M_n(\mathbb{C})$. Indeed, since $\mathrm{tr}(\rho) = 1$ we have $\phi(I) = \mathrm{tr}(\rho I) = 1$, and since $\rho$ is positive, there exists a matrix $B \in M_n(\mathbb{C})$ such that $\rho = B^*B$ and therefore
$$\phi(A) = \mathrm{tr}(\rho A) = \mathrm{tr}(B^*BA) = \mathrm{tr}(BAB^*) \geq 0$$
for any positive matrix $A \in M_n(\mathbb{C})$. Here we used the fact that $A$ is of the form $A = C^*C$, since it is positive, and therefore $BAB^* = (CB^*)^*(CB^*)$ is also positive.
All states on $M_n(\mathbb{C})$ are of this form.

Theorem 4.6.2 Let $\phi : M_n(\mathbb{C}) \to \mathbb{C}$ be a state. Then there exists a unique matrix $\rho = (\rho_{jk}) \in M_n(\mathbb{C})$ such that
$$\phi(A) = \mathrm{tr}(\rho A)$$
for all $A \in M_n(\mathbb{C})$. The matrix $\rho$ is positive and has trace equal to one. Its coefficients can be calculated as
$$\rho_{jk} = n\,\phi(e_{kj})$$
for $1 \leq j, k \leq n$, where
$$e_{kj} := (\delta_{kr}\delta_{js})_{1\leq r,s\leq n}$$
denotes the matrix unit whose only non-zero entry is a 1 in row $k$ and column $j$.
The theorem can be deduced from the fact that $M_n(\mathbb{C})$ is a Hilbert space with the inner product
$$\langle A, B \rangle = \mathrm{tr}(A^*B) \quad\text{for } A, B \in M_n(\mathbb{C}),$$
and from the observation that the matrices
$$\eta_{jk} = \sqrt{n}\,e_{jk}, \qquad j, k = 1, \ldots, n,$$
form an orthonormal basis for $M_n(\mathbb{C})$.
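With the normalised trace used here, the recovery formula $\rho_{jk} = n\,\phi(e_{kj})$ can be tested directly; for a vector state the resulting density matrix is $n$ times the rank-one projection onto $\psi$ (our own illustrative sketch, following the book's normalised-trace convention):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)

# a vector state phi(A) = <psi, A psi>
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi = psi / np.linalg.norm(psi)
phi = lambda A: np.vdot(psi, A @ psi)

def unit(k, j):
    """Matrix unit e_{kj}: a single 1 in row k, column j."""
    E = np.zeros((n, n), dtype=complex)
    E[k, j] = 1.0
    return E

# recovery formula: rho_{jk} = n * phi(e_{kj})
rho = np.array([[n * phi(unit(k, j)) for k in range(n)] for j in range(n)])

tr = lambda A: np.trace(A) / n           # normalised trace
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
```

One checks that $\mathrm{tr}(\rho) = 1$ (normalised trace) and that $\phi(A) = \mathrm{tr}(\rho A)$ for arbitrary $A$.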

Definition 4.6.3 A positive matrix ρ ∈ Mn (C) with tr(ρ) = 1 is called a


density matrix.

The expression “density matrix” is motivated by the observation that in


quantum probability such matrices play the same role as probability densities
do in classical probability.

4.7 Spin measurement and the Lie algebra so(3)


Since Stern and Gerlach’s experiments in 1921–1922, it is known that many atoms and particles have a magnetic moment and that the measurement of this moment, in a chosen direction, will always produce half-integer values. See, e.g., [41, volume 3, chapters 5 and 6] or [82, 1.5.1 “The Stern–Gerlach experiment”] for an introduction to these experiments and their correct description in quantum physics.
By constructing the Stern–Gerlach device appropriately, one can cause the particle or atom to be deflected by an amount that depends upon the component of the particle or atom’s magnetic dipole moment in a chosen direction. When the particle or atom hits a screen, this deflection can be measured and allows one to deduce the component of the particle or atom’s magnetic dipole moment. In the most elementary case, when we observe, e.g., the spin of an electron, we will obtain only two possible outcomes, +1/2 or −1/2, in appropriately chosen units. Similar experiments can also be conducted with photons, using their polarisation. We will here describe how such experiments can be modelled using our so(3)-quantum probability space.
In Section 2.6, we considered the random variable
$$J(x) = i(x_1\xi_1 + x_2\xi_2 + x_3\xi_3)$$
in several representations of so(3). The random variable $J(x)$ corresponds to the measurement of the component of the spin of our particle in the direction given by $x = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$. Let us assume that $x$ is a unit vector, i.e.,
$$\|x\| = \sqrt{x_1^2 + x_2^2 + x_3^2} = 1.$$
Let us begin with the two-dimensional representation, i.e., with the representation with parameter $n = 1$. This corresponds to a spin 1/2-particle, as in Reference [41, volume 3, chapter 6 “Spin one-half”]. A general vector state in this representation is given by a vector of the form
$$\psi = \cos\frac{\theta}{2}\,e_1 + e^{i\phi}\sin\frac{\theta}{2}\,e_{-1}$$
with $\theta \in [0, \pi]$, $\phi \in [0, 2\pi)$. This corresponds to a particle whose spin points in the direction
$$B(\psi) = \begin{bmatrix} \cos\phi\sin\theta \\ \sin\phi\sin\theta \\ \cos\theta \end{bmatrix}.$$

Table 4.1 Dictionary “Classical ↔ Quantum”

Sample space. Classical: a set $\Omega = \{\omega_1, \ldots, \omega_n\}$. Quantum: a Hilbert space $h = \mathbb{C}^n$.

Events. Classical: subsets of $\Omega$ that form a $\sigma$-algebra (also a Boolean algebra). Quantum: the orthogonal projections in $h$, which form a lattice that is not Boolean (or distributive); e.g., in general $E \wedge (F_1 \vee F_2) \neq (E \wedge F_1) \vee (E \wedge F_2)$.

Random variables / observables. Classical: measurable functions $f : \Omega \to \mathbb{R}$, forming a commutative (von Neumann) algebra; to each event $E \in \mathcal{F}$ corresponds the r.v. $I_E$ with values in $\{0, 1\}$. Quantum: self-adjoint operators $X : h \to h$, $X^* = X$, spanning a noncommutative (von Neumann) algebra; events are observables with values in $\{0, 1\}$. Note that $E_\lambda = I_{\{\lambda\}}(X)$.

Probability distribution / state. Classical: a countably additive function $P : \mathcal{F} \to [0, 1]$, determined by $n$ positive real numbers $p_k = P(\{\omega_k\})$ such that $\sum_{k=1}^n p_k = 1$; $P(E) = \sum_{\omega \in E} P(\{\omega\})$. Quantum: a density matrix, i.e., a positive operator $\rho$ with $\mathrm{tr}(\rho) = 1$; $P(X = \lambda) = \mathrm{tr}(\rho E_\lambda)$ and $P(X \in E) = \mathrm{tr}\big(\rho I_E(X)\big)$, with $I_E(X) = \sum_{\lambda \in E \cap \sigma(X)} E_\lambda$.

Expectation. Classical: $\mathbb{E}[f] = \int f\,dP = \sum_{k=1}^n f(\omega_k)P(\{\omega_k\})$. Quantum: $\mathbb{E}[X] = \mathrm{tr}(\rho X)$.

Variance. Classical: $\mathrm{Var}[f] = \mathbb{E}[f^2] - (\mathbb{E}[f])^2$. Quantum: $\mathrm{Var}_\rho[X] = \mathrm{tr}(\rho X^2) - \big(\mathrm{tr}(\rho X)\big)^2$.

Extreme points. Classical: the set of all probability distributions on $\Omega$ is a compact convex set with exactly $n$ extreme points $\delta_{\omega_k}$, $k = 1, \ldots, n$; if $P = \delta_{\omega_k}$, then the distribution of any r.v. $f$ is concentrated at one point (namely $f(\omega_k)$). Quantum: the extreme points of the set $S(h)$ of states on $h$ are exactly the one-dimensional projections onto the rays $\mathbb{C}u$, $u \in h$ a unit vector; if $\rho = P_u$ then $\mathrm{Var}[X] = \|(X - \langle u, Xu \rangle)u\|^2$, thus $\mathrm{Var}[X] = 0$ if and only if $u$ is an eigenvector of $X$. Degeneracy of the state does not kill the uncertainty of the observables!

Product spaces. Classical: given two systems described by $(\Omega_i, \mathcal{F}_i, P_i)$, $i = 1, 2$, then $(\Omega_1 \times \Omega_2, \mathcal{F}_1 \otimes \mathcal{F}_2, P_1 \otimes P_2)$ describes both independent systems as a single system → independence. Quantum: given two systems described by $(h_i, \rho_i)$, $i = 1, 2$, then $(h_1 \otimes h_2, \rho_1 \otimes \rho_2)$ describes both independent systems as a single system → entanglement.

We found that the law of $J(x)$ in the state with state vector $\psi$ is given by
$$\mathcal{L}_\psi\big(J(x)\big) = \frac{1-\gamma}{2}\,\delta_{-1/2} + \frac{1+\gamma}{2}\,\delta_{1/2},$$
where $\gamma$ is the cosine of the angle between $B(\psi)$ and $x$,
$$\gamma = \Big\langle B(\psi), \frac{x}{\|x\|} \Big\rangle.$$
So the measurement of the component of the spin in direction $x$ of a spin 1/2-particle whose spin points in the direction $B(\psi)$ will give +1/2 with probability $(1+\gamma)/2$, and −1/2 with probability $(1-\gamma)/2$.
The other representations correspond to particles with higher spin; the $(n+1)$-dimensional representation describes a particle with spin $\frac{n}{2}$. In particular, for $n = 2$, we have spin 1-particles, cf. Reference [41, volume 3, chapter 5 “Spin one”].

Notes
On so(3), see [20, 21] for the Rotation Group SO(3), its Lie algebra so(3),
and their applications to physics. See, e.g., [36, 37, 100] for Krawtchouk
polynomials and their relation to the binomial process (or Bernoulli random
walk).
Table 4.1 presents an overview of the terminology used in classical and
quantum probability, as in e.g., [88].

Exercises
Exercise 4.1 Let $n \in \mathbb{N}$ and let $A \in M_n(\mathbb{C})$ be a Hermitian matrix. Let $f : \mathbb{R} \to \mathbb{C}$ be a function.

1. Find a polynomial $p(x) = \sum_{k=0}^{m} p_k x^k$ with
$$p(\lambda_i) = f(\lambda_i)$$
for all eigenvalues $\lambda_i$ of $A$ and set
$$f(A) = p(A) = \sum_{k=0}^{m} p_k A^k.$$
If $A$ has $m$ distinct eigenvalues $\lambda_1, \ldots, \lambda_m$ (counted without their multiplicities), then we can use Lagrange interpolation to find $p$ as
$$p(x) = \sum_{k=1}^{m} f(\lambda_k) \prod_{i \neq k} \frac{x - \lambda_i}{\lambda_k - \lambda_i}.$$

2. Find an invertible matrix $U \in M_n(\mathbb{C})$ that diagonalises $A$, i.e., such that
$$U^{-1}AU = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}$$
is a diagonal matrix, and let
$$f(A) = U \begin{bmatrix} f(\lambda_1) & 0 & \cdots & 0 \\ 0 & f(\lambda_2) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & f(\lambda_n) \end{bmatrix} U^{-1}.$$
The numbers $\lambda_1, \ldots, \lambda_n$ are the eigenvalues counted with their multiplicities. By the spectral theorem we know that we can choose $U$ unitary, so that $U^{-1} = U^*$.
3. Show that, using the spectral theorem in the form stated earlier and writing $X$ as
$$X = \sum_{\lambda \in \sigma(X)} \lambda E_\lambda,$$
we have
$$f(X) = \sum_{\lambda \in \sigma(X)} f(\lambda) E_\lambda.$$
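For a concrete Hermitian matrix, the two constructions of $f(A)$ in this exercise agree, as the following sketch (our own, with $f = \exp$) illustrates:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # Hermitian, eigenvalues 1 and 3
f = np.exp

# construction 2: diagonalise with a unitary U
vals, U = np.linalg.eigh(A)
fA_diag = U @ np.diag(f(vals)) @ U.conj().T

# construction 1: Lagrange interpolation through the distinct eigenvalues
distinct = np.unique(np.round(vals, 10))
fA_poly = np.zeros_like(A)
for k, lk in enumerate(distinct):
    term = f(lk) * np.eye(2)
    for i, li in enumerate(distinct):
        if i != k:
            term = term @ (A - li * np.eye(2)) / (lk - li)
    fA_poly = fA_poly + term
```

Each Lagrange factor $(A - \lambda_i I)/(\lambda_k - \lambda_i)$ builds exactly the spectral projection $E_{\lambda_k}$, so both routes produce the same matrix $\exp(A)$.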

Exercise 4.2 In the framework of the examples of Section 4.4, define further
$$n_0 = \sum_{j=1}^{n} \xi_0^{(j)}, \qquad n_+ = \sum_{j=1}^{n} \xi_+^{(j)}, \qquad n_- = \sum_{j=1}^{n} \xi_-^{(j)}.$$

1. Show that these operators define a representation of so(3), i.e., we have
$$[n_+, n_-] = n_0, \qquad [n_0, n_\pm] = \pm 2n_\pm,$$
and
$$(n_0)^* = n_0, \qquad (n_+)^* = n_-.$$
2. Show that the indicator functions $\mathbf{1}_{\{x\}}$ are eigenvectors of $n_0$, with eigenvalues equal to the difference of the number of +1s and −1s in $x$.
3. Show that $n_0$ has a binomial distribution on the set $\{-n, -n+2, \ldots, n-2, n\}$ and compute the density of this distribution with respect to the constant function.
4. Compute the law of
$$n_\theta = n_0 + \theta(n_+ + n_-)$$
and discuss possible connections with the Krawtchouk polynomials.

Exercise 4.3 Calculate $\phi(X^3)$ in (4.1).
5

Noncommutative stochastic integration

To invent, one must think aside.


(Paul Souriau, cited by Jacques Hadamard in “The Psychology
of Invention in the Mathematical Field,” 1954.)
In this chapter we revisit the construction of a Fock space over a given Hilbert
space, which we already encountered in Section 1.3. In this book we will
consider only symmetric (or boson) Fock spaces. In the previous chapters,
we have already seen in many examples how they can be used to construct
representations of Lie algebras and realisations of quantum probability spaces.
In Sections 5.2 and 5.3 we will define the fundamental noise processes of
creation, annihilation, and conservation operators on symmetric Fock space
and define and study stochastic integrals for these processes. They provide a
powerful and flexible tool for the construction of many other, more general
processes.

5.1 Construction of the Fock space


We will present two constructions of the Fock space, one based on the theory of
positive definite functions and one based on tensor products of Hilbert spaces.

5.1.1 Construction from a positive definite function


The following definition is fundamental for this paragraph.

Definition 5.1.1 Let $X$ be a set. A function $K : X \times X \to \mathbb{C}$ is called a positive definite kernel on $X$ if for all $n \in \mathbb{N}$ and all choices of $x_1, \ldots, x_n \in X$ and $\lambda_1, \ldots, \lambda_n \in \mathbb{C}$ we have
$$\sum_{j,k=1}^{n} \overline{\lambda_j}\lambda_k K(x_j, x_k) \geq 0.$$

Example 5.1.2
a) The inner product of a complex Hilbert space is a positive definite kernel. If $H$ is a complex Hilbert space, $n \in \mathbb{N}$, $x_1, \ldots, x_n \in H$ and $\lambda_1, \ldots, \lambda_n \in \mathbb{C}$, then we clearly have
$$\sum_{j,k=1}^{n} \overline{\lambda_j}\lambda_k \langle x_j, x_k \rangle = \Big\langle \sum_{j=1}^{n} \lambda_j x_j, \sum_{k=1}^{n} \lambda_k x_k \Big\rangle = \Big\| \sum_{j=1}^{n} \lambda_j x_j \Big\|^2 \geq 0.$$
b) By the same argument the inner product of a complex Hilbert space $H$ is also a positive definite kernel on any subset of $H$.
c) The inner product of a real Hilbert space is also a positive definite kernel, since a real Hilbert space can be viewed as a subset of its complexification.
The following theorem shows that all positive definite kernels are in a sense of the form of the examples above.

Theorem 5.1.3 Let $K : X \times X \to \mathbb{C}$ be a positive definite kernel. Then there exists a complex Hilbert space $H_K$ and a map $\varphi : X \to H_K$ such that
i) $\varphi(X)$ is total in $H_K$,
ii) we have $K(x, y) = \langle \varphi(x), \varphi(y) \rangle$ for all $x, y \in X$.
The space $H_K$ is unique up to unitary equivalence.

Recall that a subset $M$ of a Hilbert space $H$ is called total if any vector $x \in H$ with
$$\langle y, x \rangle = 0 \quad \text{for all } y \in M$$
is necessarily the zero vector. This is equivalent to the linear span of $M$ being dense in $H$.
There are several constructions that produce positive definite kernels. For example, if $K, L : X \times X \to \mathbb{C}$ are positive definite kernels, then $K \cdot L : X \times X \to \mathbb{C}$ with
$$(K \cdot L)(x, y) = K(x, y)L(x, y)$$
for $x, y \in X$ is again a positive definite kernel on $X$.



Lemma 5.1.4 Let $X$ be a set and $K, L : X \times X \to \mathbb{C}$ two positive definite kernels on $X$. Then their pointwise product
$$K \cdot L : X \times X \to \mathbb{C}, \qquad (x, y) \mapsto K(x, y)L(x, y)$$
is positive definite.
Proof: Let $n \in \mathbb{N}$, $x_1, \ldots, x_n \in X$. We have to show that the entrywise product (also called the Hadamard product or Schur product)
$$K^x \circ L^x = \big(K^x_{jk} L^x_{jk}\big)_{1\leq j,k\leq n} = \big(K(x_j, x_k)L(x_j, x_k)\big)_{1\leq j,k\leq n}$$
of the matrices
$$K^x = \big(K(x_j, x_k)\big)_{1\leq j,k\leq n} \quad\text{and}\quad L^x = \big(L(x_j, x_k)\big)_{1\leq j,k\leq n}$$
is positive. Let $\lambda_1, \ldots, \lambda_n \in \mathbb{C}$. We denote by $\mathrm{diag}(\lambda)$ the diagonal matrix $\mathrm{diag}(\lambda) = (\lambda_j\delta_{jk})_{1\leq j,k\leq n}$. We can write
$$\sum_{j,k=1}^{n} \overline{\lambda_j}\lambda_k (K \cdot L)(x_j, x_k) = \mathrm{Tr}\big(\mathrm{diag}(\lambda)^* K^x\,\mathrm{diag}(\lambda)(L^x)^t\big),$$
where
$$\mathrm{Tr}(A) = \sum_{j=1}^{n} a_{jj}, \qquad A = (a_{jk}) \in M_n(\mathbb{C}),$$
denotes the (non-normalised) trace on $M_n(\mathbb{C})$. Since $L^x$ is a positive matrix, we can write it in the form $L^x = A^*A$ with a square matrix $A \in M_n(\mathbb{C})$. Substituting this expression into the aforementioned formula and using the trace property, we get
$$\sum_{j,k=1}^{n} \overline{\lambda_j}\lambda_k (K \cdot L)(x_j, x_k) = \mathrm{Tr}\big((A^t)^*\,\mathrm{diag}(\lambda)^* K^x\,\mathrm{diag}(\lambda)A^t\big).$$
Since $K^x$ is positive and since conjugation with another matrix preserves positivity, we see that this expression is positive.
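Schur's product theorem, which this proof establishes, is easy to probe numerically (an illustrative sketch of our own with random Gram matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5

def random_gram(n):
    """A random positive matrix K_{jk} = <v_j, v_k>, i.e. the inner-product
    kernel of Example 5.1.2 evaluated on n vectors (columns of V)."""
    V = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return V.conj().T @ V

K = random_gram(n)
L = random_gram(n)
schur = K * L                            # entrywise (Schur/Hadamard) product

min_eig = np.linalg.eigvalsh(schur).min()
```

The smallest eigenvalue of the entrywise product stays non-negative (up to floating-point noise), exactly as the lemma predicts.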
Positive multiples and sums of positive definite kernels are clearly positive definite kernels. Since the constant function with a positive value is also a positive definite kernel, it follows that for a positive definite kernel $K$ on $X$
$$\exp K = \sum_{n=0}^{\infty} \frac{K^n}{n!}$$
is also a positive definite kernel on $X$. Therefore, we have the following result.



Proposition 5.1.5 Let $H$ be a Hilbert space. Then there exists a Hilbert space, denoted $\exp(H)$, spanned by the set
$$\{E(h) : h \in H\},$$
with the inner product determined by
$$\langle E(h_1), E(h_2) \rangle = \exp\langle h_1, h_2 \rangle,$$
for $h_1, h_2 \in H$.

Proof: Since the inner product of a Hilbert space is a positive definite kernel, the preceding discussion shows that
$$K(h_1, h_2) = \exp\langle h_1, h_2 \rangle$$
defines a positive definite kernel on $H$. Theorem 5.1.3 then gives the existence of the Hilbert space $\exp H$.

Remark 5.1.6 The Hilbert space $\exp H$ is called the Fock space over $H$, or the symmetric or boson Fock space over $H$. We will also use the notation $\Gamma_s(H)$. We will briefly see another kind of Fock space, the full or free Fock space $\mathcal{F}(H)$ over a given Hilbert space $H$, in the next paragraph. But otherwise we will only use the symmetric Fock space and call it simply Fock space when there is no danger of confusion.

5.1.2 Construction via tensor products

We now consider another construction of the Fock space $\Gamma_s(H)$ over a given Hilbert space $H$, and refer to the appendix Section A.8 for background on the tensor products of Hilbert spaces. We set $H^{\otimes 0} = \mathbb{C}$, $H^{\otimes 1} = H$, and
$$H^{\otimes n} := \underbrace{H \otimes H \otimes \cdots \otimes H}_{n \text{ times}}, \qquad n \geq 2.$$
The (algebraic) direct sum
$$\mathcal{F}_{\mathrm{alg}}(H) = \bigoplus_{n=0}^{\infty} H^{\otimes n}$$
is a pre-Hilbert space with the “term-wise” inner product defined by
$$\langle (h_n)_{n\in\mathbb{N}}, (k_n)_{n\in\mathbb{N}} \rangle_{\mathcal{F}_{\mathrm{alg}}(H)} := \sum_{n=0}^{\infty} \langle h_n, k_n \rangle_{H^{\otimes n}}$$
for $(h_n)_{n\in\mathbb{N}}, (k_n)_{n\in\mathbb{N}} \in \mathcal{F}_{\mathrm{alg}}(H)$, i.e., with $h_n, k_n \in H^{\otimes n}$ for $n \geq 0$. The inner product on $H^{\otimes 0} = \mathbb{C}$ is of course simply $\langle z_1, z_2 \rangle_{\mathbb{C}} = \overline{z_1}z_2$. Upon completion we get the Hilbert space
$$\mathcal{F}(H) = \bigoplus_{n=0}^{\infty} H^{\otimes n} = \Big\{ (h_n)_{n\in\mathbb{N}} : h_n \in H^{\otimes n},\ \sum_{n=0}^{\infty} \|h_n\|^2_{H^{\otimes n}} < \infty \Big\},$$
which is the space of all sequences of tensors of increasing order whose squared norms are summable. This space is called the free or full Fock space over $H$ and denoted by $\mathcal{F}(H)$, and it plays an important role in free probability theory.
We now turn to the definition of the symmetric Fock space, and for this purpose we start with the symmetrisation of the tensor powers of a Hilbert space. On $H^{\otimes n}$ we can define a symmetrisation operator $S_n$ by
$$S_n(h_1 \otimes \cdots \otimes h_n) = \frac{1}{n!}\sum_{\sigma \in \mathfrak{S}_n} h_{\sigma(1)} \otimes \cdots \otimes h_{\sigma(n)}$$
for $h_1, \ldots, h_n \in H$, where $\mathfrak{S}_n$ denotes the set of permutations of $\{1, \ldots, n\}$. One can check by direct calculation that we have $S_n = (S_n)^* = (S_n)^2$, so the symmetrisation operator is an orthogonal projection. We define the symmetric tensor power of order $n$ as the range of this projection,
$$H^{\circ n} = S_n(H^{\otimes n}),$$
and the symmetric Fock space as the completed direct sum of the symmetric tensor powers,
$$\Gamma_s(H) = \bigoplus_{n=0}^{\infty} H^{\circ n}.$$
If we denote by $S$ the direct sum of the symmetrisation operators, i.e.,
$$S\big((h_n)_{n=0}^{\infty}\big) = \big(S_n(h_n)\big)_{n=0}^{\infty}$$
for $(h_n)_{n=0}^{\infty} \in \mathcal{F}(H)$, then we have
$$\Gamma_s(H) = S\big(\mathcal{F}(H)\big).$$
One can show that the tensor powers
$$v^{\otimes n} = \underbrace{v \otimes \cdots \otimes v}_{n \text{ times}}$$
are total in the symmetric tensor power $H^{\circ n}$, see Exercise 5.1. Let us now use the exponential vectors

$$E(h) = \Big( \frac{h^{\otimes n}}{\sqrt{n!}} \Big)_{n=0}^{\infty}$$
to show that the symmetric Fock space which we just constructed is the same as the space $\exp H$ that we obtained previously from the theory of positive definite kernels. We have
$$\sum_{n=0}^{\infty} \Big\| \frac{h^{\otimes n}}{\sqrt{n!}} \Big\|^2_{H^{\otimes n}} = \sum_{n=0}^{\infty} \frac{\|h\|^{2n}}{n!} = \exp\|h\|^2 < \infty,$$
and therefore $E(h) \in \mathcal{F}(H)$. Since each term is a product vector, we have furthermore
$$S\big(E(h)\big) = E(h),$$
and therefore $E(h) \in \Gamma_s(H)$. Another computation gives
$$\langle E(h), E(k) \rangle = \sum_{n=0}^{\infty} \frac{1}{n!}\langle h, k \rangle^n = \exp\langle h, k \rangle, \qquad h, k \in H.$$
The totality of the exponential vectors $\{E(h) : h \in H\}$ in the symmetric Fock space $\Gamma_s(H)$ follows from the totality of the tensor powers $\{v^{\otimes n} : v \in H\}$ in the symmetric tensor power $H^{\circ n}$, since
$$\frac{d^n}{dt^n}\Big|_{t=0} \big\langle E(th), (k_n)_{n\in\mathbb{N}} \big\rangle = \sqrt{n!}\,\big\langle h^{\otimes n}, k_n \big\rangle,$$
for $n \geq 0$, $h \in H$, $(k_n)_{n\in\mathbb{N}} \in \Gamma_s(H)$.
We have checked that the map $H \ni h \mapsto E(h) \in \Gamma_s(H)$ satisfies all the conditions of Theorem 5.1.3 with respect to the positive definite kernel $\exp\langle \cdot, \cdot \rangle_H$; therefore $\Gamma_s(H)$ is indeed isomorphic to $\exp H$.

5.2 Creation, annihilation, and conservation operators


There is a special family of operators acting on the symmetric Fock space. We can use the symmetric tensor powers to define them. For $h \in H$ and $T \in B(H)$, we can define
$$a^-_n(h) : H^{\circ n} \to H^{\circ(n-1)}, \qquad a^+_n(h) : H^{\circ n} \to H^{\circ(n+1)}, \qquad a^\circ_n(T) : H^{\circ n} \to H^{\circ n},$$
by setting
$$a^-_n(h)v^{\otimes n} := \sqrt{n}\,\langle h, v \rangle\,v^{\otimes(n-1)},$$
$$a^+_n(h)v^{\otimes n} := \sqrt{n+1}\,S_{n+1}(h \otimes v^{\otimes n}) = \frac{1}{\sqrt{n+1}}\sum_{k=0}^{n} v^{\otimes k} \otimes h \otimes v^{\otimes(n-k)},$$
$$a^\circ_n(T)v^{\otimes n} := \sum_{k=1}^{n} v^{\otimes(k-1)} \otimes Tv \otimes v^{\otimes(n-k)},$$
on tensor powers $v^{\otimes n} \in H^{\circ n}$. These operators and their extensions to $\Gamma_s(H)$ are called the annihilation operator, the creation operator, and the conservation operator, respectively. The conservation operator with $T = I$ is also called the number operator, since it acts as
$$a^\circ_n(I)v^{\circ n} = n\,v^{\circ n},$$
i.e., it has the symmetric tensor powers as eigenspaces and the eigenvalues give exactly the order of the tensor power.
We set $H^{\circ(-1)} = \{0\}$; then the 0-th order annihilation operator $a^-_0(h) : H^{\circ 0} = \mathbb{C} \to H^{\circ(-1)} = \{0\}$ must clearly be the zero operator, i.e., $a^-_0(h)(z) = 0$ for any $h \in H$ and $z \in \mathbb{C}$. The direct sums
$$a^-(h) = \bigoplus_{n=0}^{\infty} a^-_n(h), \qquad a^+(h) = \bigoplus_{n=0}^{\infty} a^+_n(h), \qquad a^\circ(T) = \bigoplus_{n=0}^{\infty} a^\circ_n(T)$$
are well-defined on the algebraic direct sum of the tensor powers. We have
$$\big(a^-(h)\big)^* = a^+(h) \quad\text{and}\quad \big(a^\circ(T)\big)^* = a^\circ(T^*),$$
so these operators have adjoints and therefore are closable. They extend to densely defined, closable, (in general) unbounded operators on $\Gamma_s(H)$.
There is another way to let operators $T \in B(H)$ act on the Fock spaces $\mathcal{F}(H)$ and $\Gamma_s(H)$, namely by setting
$$\Gamma(T)(v_1 \otimes \cdots \otimes v_n) := (Tv_1) \otimes \cdots \otimes (Tv_n)$$
for $v_1, \ldots, v_n \in H$. The operator $\Gamma(T)$ is called the second quantisation of $T$. It is easy to see that $\Gamma(T)$ leaves the symmetric Fock space invariant. The second quantisation operator $\Gamma(T)$ is bounded if and only if $T$ is a contraction, i.e., $\|T\| \leq 1$. The conservation operator $a^\circ(T)$ of an operator $T \in B(H)$ can be recovered from the operators $\big(\Gamma(e^{tT})\big)_{t\in\mathbb{R}}$ via
$$a^\circ(T) = \frac{d}{dt}\Big|_{t=0} \Gamma(e^{tT})$$

on some appropriate domain. On exponential vectors we have the following formulas for the annihilation, creation, and conservation operators:
$$a^-(h)E(k) = \langle h, k \rangle E(k),$$
$$a^+(h)E(k) = \frac{d}{dt}\Big|_{t=0} E(k + th),$$
$$a^\circ(T)E(k) = \frac{d}{dt}\Big|_{t=0} E\big(e^{tT}k\big),$$
with $h, k \in H$, $T \in B(H)$. The creation, annihilation, and conservation operators satisfy the commutation relations
$$\begin{cases} [a^-(h), a^-(k)] = [a^+(h), a^+(k)] = 0, \\ [a^-(h), a^+(k)] = \langle h, k \rangle I, \\ [a^\circ(T), a^\circ(S)] = a^\circ\big([T, S]\big), \\ [a^\circ(T), a^-(h)] = -a^-(T^*h), \\ [a^\circ(T), a^+(h)] = a^+(Th), \end{cases} \tag{5.1}$$
for $h, k \in H$, $S, T \in B(H)$, cf. [87, Proposition 20.12]. Since the operators are unbounded, these relations can only hold on some appropriate domain. One can take, e.g., the algebraic direct sum of the symmetric tensor powers of $H$, since this is a common invariant domain for these operators. Another common way to give a meaning to products is to evaluate them between exponential vectors. The condition
$$\big\langle a^+(h)E(\ell_1), a^+(k)E(\ell_2) \big\rangle - \big\langle a^-(k)E(\ell_1), a^-(h)E(\ell_2) \big\rangle = \langle h, k \rangle \big\langle E(\ell_1), E(\ell_2) \big\rangle$$
for all $h, k, \ell_1, \ell_2 \in H$ can be viewed as an alternative formulation of the second relation mentioned earlier. Instead of actually multiplying two unbounded operators, we take adjoints to let the left factor of the product act on the left vector of the scalar product. This technique will also be used in Section 5.4 to give a meaning to the Itô formula for products of quantum stochastic integrals.
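For the simplest case $H = \mathbb{C}$ (a single “mode”), the operators become familiar tridiagonal matrices on a truncated Fock space with one basis vector per level $n$, and the second commutation relation in (5.1) can be checked directly up to the truncation level (our own illustrative sketch):

```python
import numpy as np

N = 8                                    # truncation level

# annihilation: a|n> = sqrt(n) |n-1>; creation is its transpose (adjoint)
a_minus = np.diag(np.sqrt(np.arange(1.0, N + 1)), k=1)
a_plus = a_minus.T
number = a_plus @ a_minus                # a^o(I), the number operator

# [a^-, a^+] should equal I; truncation spoils only the last diagonal entry
ccr = a_minus @ a_plus - a_plus @ a_minus
```

On the first $N$ levels the commutator is exactly the identity; the single defective entry $-N$ in the last corner is the price of truncating the infinite-dimensional space.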
An important property of the symmetric Fock space is its factorisation.

Theorem 5.2.1 Let $H_1$ and $H_2$ be two Hilbert spaces. The map
$$U : \{E(h_1 + h_2) : h_1 \in H_1, h_2 \in H_2\} \to \Gamma_s(H_1) \otimes \Gamma_s(H_2), \qquad E(h_1 + h_2) \mapsto E(h_1) \otimes E(h_2),$$
$h_1 \in H_1$, $h_2 \in H_2$, extends to a unique unitary isomorphism between $\Gamma_s(H_1 \oplus H_2)$ and $\Gamma_s(H_1) \otimes \Gamma_s(H_2)$.
Proof: It is easy to check that $U$ preserves the inner product between exponential vectors. The theorem therefore follows from the totality of these vectors.

5.3 Quantum stochastic integrals


For stochastic integrals and stochastic calculus we need a time parameter, so we choose
$$H = L^2(\mathbb{R}_+, h) \cong L^2(\mathbb{R}_+) \otimes h$$
with some Hilbert space $h$. Since we can write $H$ as a direct sum
$$L^2(\mathbb{R}_+, h) = L^2([0, t], h) \oplus L^2([t, +\infty), h)$$
of
$$H_{t]} = L^2([0, t], h) \quad\text{and}\quad H_{[t} = L^2([t, +\infty), h),$$
by decomposing functions on $\mathbb{R}_+$ as $f = f\mathbf{1}_{[0,t]} + f\mathbf{1}_{[t,+\infty)}$, we get from Theorem 5.2.1 an isomorphism
$$U_t : \Gamma_s(H) \to \Gamma_s(H_{t]}) \otimes \Gamma_s(H_{[t}) \tag{5.2}$$
which acts on exponential vectors as
$$U_t E(k) = E(k\mathbf{1}_{[0,t]}) \otimes E(k\mathbf{1}_{[t,+\infty)})$$
for $k \in L^2(\mathbb{R}_+, h)$, with its adjoint given by
$$U_t^*\big(E(k_1) \otimes E(k_2)\big) = E(k_1 + k_2)$$
for $k_1 \in L^2([0, t], h)$, $k_2 \in L^2([t, +\infty), h)$. Let $m_t \in B\big(L^2(\mathbb{R}_+)\big)$ denote multiplication by the indicator function $\mathbf{1}_{[0,t]}$, i.e.,
$$m_t(f)(s) = \begin{cases} f(s) & \text{if } s \leq t, \\ 0 & \text{else.} \end{cases}$$
Then the tensor product $m_t \otimes T$ of multiplication by the indicator function $\mathbf{1}_{[0,t]}$ on $L^2(\mathbb{R}_+)$ and an operator $T \in B(h)$ acts as
$$\big((m_t \otimes T)(f)\big)(s) = \begin{cases} Tf(s) & \text{if } s \leq t, \\ 0 & \text{else,} \end{cases}$$
for $f \in L^2(\mathbb{R}_+, h)$. We introduce the notation




$$a^-_t(h) = a^-(\mathbf{1}_{[0,t]} \otimes h), \qquad a^+_t(h) = a^+(\mathbf{1}_{[0,t]} \otimes h), \qquad a^\circ_t(T) = a^\circ(m_t \otimes T),$$
for $t \in \mathbb{R}_+$, $h \in h$, $T \in B(h)$. Note that the evaluation of these operators between a pair of exponential vectors is given by
$$\big\langle E(k_1), a^-_t(h)E(k_2) \big\rangle = \int_0^t \langle h, k_2(s) \rangle_{h}\,ds\ \big\langle E(k_1), E(k_2) \big\rangle,$$
$$\big\langle E(k_1), a^+_t(h)E(k_2) \big\rangle = \int_0^t \langle k_1(s), h \rangle_{h}\,ds\ \big\langle E(k_1), E(k_2) \big\rangle,$$
$$\big\langle E(k_1), a^\circ_t(T)E(k_2) \big\rangle = \int_0^t \langle k_1(s), Tk_2(s) \rangle_{h}\,ds\ \big\langle E(k_1), E(k_2) \big\rangle,$$
$T \in B(h)$, $h \in h$, $k_1, k_2 \in H$. An important notion in stochastic calculus is adaptedness. In our setting this is defined in the following way.


Definition 5.3.1 Let $h$ be a Hilbert space and set $H = L^2(\mathbb{R}_+, h)$. Let $t \in \mathbb{R}_+$.
An operator $X \in B\bigl(\Gamma_s(H)\bigr)$ is called $t$-adapted if it can be written in the form

$$X = X_{t]} \otimes I$$

with respect to the factorisation given in Equation (5.2) and with some
operator $X_{t]} \in B\bigl(\Gamma_s(H_{t]})\bigr)$. A stochastic process $(X_t)_{t\in\mathbb{R}_+}$, i.e., a family $(X_t)_{t\in\mathbb{R}_+}$
of operators in $B\bigl(\Gamma_s(H)\bigr)$, is called adapted if $X_t$ is $t$-adapted for all $t \in \mathbb{R}_+$.
We will further assume in the sequel that our processes are smooth, i.e., they
are piece-wise continuous in the strong operator topology. This means that the
maps
$$\mathbb{R}_+ \ni t \longmapsto X_t v \in \Gamma_s(H)$$
are piece-wise continuous for all v ∈ s (H). For applications it is necessary
to extend the notion of adaptedness to unbounded operators and processes, see
the references mentioned at the end of this chapter.
Quantum stochastic integrals of smooth adapted processes with respect to
the creation, annihilation, or conservation process can be defined as limits of
Riemann–Stieltjes sums. Let
$$\pi := \{0 = t_0 < t_1 < \cdots < t_n = t\}$$

be a partition of the interval [0, t], then the corresponding approximation of the
stochastic integral

$$I(t) = \int_0^t X_s\, da^\varepsilon_s(h), \qquad t \in \mathbb{R}_+,$$
with ε ∈ {−, ◦, +} and h a vector in h (if ε ∈ {−, +}) or an operator on h (if
$\varepsilon = \circ$), is defined by

$$I_\pi(t) := \sum_{k=1}^n X_{t_{k-1}}\bigl(a^\varepsilon_{t_k}(h) - a^\varepsilon_{t_{k-1}}(h)\bigr), \qquad t \in \mathbb{R}_+.$$

We introduce the notation

$$a^\varepsilon_{st}(h) = a^\varepsilon_t(h) - a^\varepsilon_s(h), \qquad 0 \le s \le t,$$

for $\varepsilon \in \{-, \circ, +\}$. Since the creation and conservation operators are linear in
$h$ and the annihilation operator is conjugate linear in $h$, we can also write the
increments as

$$a^\varepsilon_{st}(h) = a^\varepsilon(1_{[s,t]} \otimes h),$$

where, in the case $\varepsilon = \circ$, $1_{[s,t]}$ is to be interpreted as an operator on $L^2(\mathbb{R}_+)$, acting
by multiplication.
Under appropriate natural conditions one can show that these approximations
do indeed converge on appropriate domains (e.g., in the strong operator
topology) when the mesh

$$|\pi| = \max\{t_k - t_{k-1} : k = 1, \ldots, n\}$$

of the partition $\pi$ goes to zero, and one defines the quantum stochastic integral
$\int_0^t X_s\, da^\varepsilon_s(h)$ as the limit. Evaluating a Riemann–Stieltjes sum between two
exponential vectors yields
 
$$\langle \mathcal{E}(k_1), I_\pi(t)\mathcal{E}(k_2)\rangle = \begin{cases}
\displaystyle\sum_{k=1}^n \langle \mathcal{E}(k_1), X_{t_{k-1}}\mathcal{E}(k_2)\rangle \int_{t_{k-1}}^{t_k} \langle k_1(s), h\rangle_h\, ds, & \text{if } \varepsilon = +, \\[3mm]
\displaystyle\sum_{k=1}^n \langle \mathcal{E}(k_1), X_{t_{k-1}}\mathcal{E}(k_2)\rangle \int_{t_{k-1}}^{t_k} \langle h, k_2(s)\rangle_h\, ds, & \text{if } \varepsilon = -, \\[3mm]
\displaystyle\sum_{k=1}^n \langle \mathcal{E}(k_1), X_{t_{k-1}}\mathcal{E}(k_2)\rangle \int_{t_{k-1}}^{t_k} \langle k_1(s), h\, k_2(s)\rangle_h\, ds, & \text{if } \varepsilon = \circ.
\end{cases}$$

Note that for ε ∈ {−, +}, h is a vector in h, whereas for ε = ◦, it is an


operator on h. These expressions can be derived from the action of the
creation, annihilation, and conservation operators on exponential vectors. Note
that the adaptedness condition ensures that $X_t$ commutes with increments of
the creation, annihilation, and conservation processes, i.e., with operators of the
form $a^\varepsilon_{t+s}(h) - a^\varepsilon_t(h)$ for $s > 0$.
The values of quantum stochastic integrals between exponential vectors are
given by the following First Fundamental Lemma, cf. [87, Proposition 25.9].
Theorem 5.3.2 Let (Xt )t≥0 be an adapted smooth quantum stochastic
process, ε ∈ {−, ◦, +}, h ∈ h if ε ∈ {−, +} and h ∈ B(h) if ε = ◦, and k1 , k2 ∈ H.
Then the evaluation of

$$I(t) = \int_0^t X_s\, da^\varepsilon_s(h)$$

between exponential vectors is given by

$$\langle \mathcal{E}(k_1), I(t)\mathcal{E}(k_2)\rangle = \begin{cases}
\displaystyle\int_0^t \langle \mathcal{E}(k_1), X_s\mathcal{E}(k_2)\rangle\, \langle k_1(s), h\rangle_h\, ds, & \text{if } \varepsilon = +, \\[3mm]
\displaystyle\int_0^t \langle \mathcal{E}(k_1), X_s\mathcal{E}(k_2)\rangle\, \langle h, k_2(s)\rangle_h\, ds, & \text{if } \varepsilon = -, \\[3mm]
\displaystyle\int_0^t \langle \mathcal{E}(k_1), X_s\mathcal{E}(k_2)\rangle\, \langle k_1(s), h\, k_2(s)\rangle_h\, ds, & \text{if } \varepsilon = \circ.
\end{cases}$$

5.4 Quantum Itô table


A weak form of the analogue of Itô’s formula for products of quantum
stochastic integrals is given by the Second Fundamental Lemma.
Theorem 5.4.1 (Second Fundamental Lemma, [87, Proposition 25.10]) Let
$(X_t)_{t\ge 0}$ and $(Y_t)_{t\ge 0}$ be adapted smooth quantum stochastic processes, $\varepsilon, \delta \in \{-, \circ, +\}$,
$h \in h$ if $\varepsilon \in \{-, +\}$, $h \in B(h)$ if $\varepsilon = \circ$, $k \in h$ if $\delta \in \{-, +\}$, and
$k \in B(h)$ if $\delta = \circ$. Set

$$I_t = \int_0^t X_s\, da^\varepsilon_s(h) \qquad \text{and} \qquad J_t = \int_0^t Y_s\, da^\delta_s(k), \qquad t \in \mathbb{R}_+.$$

Then we have

$$\langle I_t\mathcal{E}(\ell_1), J_t\mathcal{E}(\ell_2)\rangle = \int_0^t \langle X_s\mathcal{E}(\ell_1), J_s\mathcal{E}(\ell_2)\rangle\, m_1(s)\, ds$$
$$\qquad + \int_0^t \langle I_s\mathcal{E}(\ell_1), Y_s\mathcal{E}(\ell_2)\rangle\, m_2(s)\, ds + \int_0^t \langle X_s\mathcal{E}(\ell_1), Y_s\mathcal{E}(\ell_2)\rangle\, m_{12}(s)\, ds,$$

for all $\ell_1, \ell_2 \in H$, where the functions $m_1$, $m_2$, and $m_{12}$ are given by the
following tables:
$$\begin{array}{c|ccc}
\varepsilon & - & \circ & + \\ \hline
m_1(s) & \langle \ell_1(s), h\rangle_h & \langle h\ell_1(s), \ell_2(s)\rangle_h & \langle h, \ell_2(s)\rangle_h
\end{array}$$

$$\begin{array}{c|ccc}
\delta & - & \circ & + \\ \hline
m_2(s) & \langle k, \ell_2(s)\rangle_h & \langle \ell_1(s), k\ell_2(s)\rangle_h & \langle \ell_1(s), k\rangle_h
\end{array}$$

$$\begin{array}{c|ccc}
\varepsilon \backslash \delta & - & \circ & + \\ \hline
- & 0 & 0 & 0 \\
\circ & 0 & \langle h\ell_1(s), k\ell_2(s)\rangle_h & \langle h\ell_1(s), k\rangle_h \\
+ & 0 & \langle h, k\ell_2(s)\rangle_h & \langle h, k\rangle_h
\end{array}$$

A stronger form of the Itô formula, which holds on an appropriate domain and
under appropriate conditions on the integrands, is

$$I_t J_t = \int_0^t I_s\, dJ_s + \int_0^t dI_s\, J_s + \int_0^t (dI \bullet dJ)_s,$$

where the product in the last term is computed according to the rule

$$X_t\, da^\varepsilon_t(h) \bullet Y_t\, da^\delta_t(k) = X_t Y_t\, \bigl(da^\varepsilon_t(h) \bullet da^\delta_t(k)\bigr)$$

and the Itô table

$$\begin{array}{c|ccc}
\bullet & da^-_t(k) & da^\circ_t(k) & da^+_t(k) \\ \hline
da^+_t(h) & 0 & 0 & 0 \\
da^\circ_t(h) & 0 & da^\circ(hk) & da^+(hk) \\
da^-_t(h) & 0 & da^-(k^* h) & \langle h, k\rangle\, dt
\end{array}$$

If one adds the differential dt and sets all products involving dt equal to zero,
then


$$\mathrm{span}\Bigl(\{da^+(h) : h \in h\} \cup \{da^\circ(T) : T \in B(h)\} \cup \{da^-(h) : h \in h\} \cup \{dt\}\Bigr)$$

becomes an associative algebra with the Itô product $\bullet$, called the Itô algebra
over $h$. If $\dim h = n$, then the Itô algebra over $h$ has dimension $(n+1)^2$.
Example 5.4.2
To realise classical Brownian motion on a Fock space, we can take h = C and
set

$$B_t := \int_0^t da^-_s + \int_0^t da^+_s,$$

where we write $a^\pm_s$ for $a^\pm_s(1)$. Then the quantum stochastic Itô formula given
earlier shows that

$$B_t^2 = 2\int_0^t B_s\, da^-_s + 2\int_0^t B_s\, da^+_s + \int_0^t ds = 2\int_0^t B_s\, dB_s + t,$$

i.e., we recover the well-known result from classical Itô calculus. The integral
for $B_t$ can of course be computed explicitly; we get

$$B_t = a^-_t + a^+_t = a^-(1_{[0,t]}) + a^+(1_{[0,t]}).$$

We have already shown in Section 3.1 that the sum of the creation and the
annihilation operator is Gaussian distributed.
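On the classical side, the identity $B_t^2 = 2\int_0^t B_s\,dB_s + t$ can be observed path by path in a discretisation, where the quadratic variation $\sum(\Delta B)^2$ produces the extra $t$; a minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 100_000
dB = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))     # Brownian path B_0, ..., B_T

ito_integral = np.sum(B[:-1] * dB)             # left-point (Ito) sums of B dB
quad_var = np.sum(dB**2)                       # discrete quadratic variation

# discrete identity: B_T^2 = 2 * sum B_{t_k} dB_k + sum (dB_k)^2, exact by telescoping
assert abs(B[-1]**2 - (2 * ito_integral + quad_var)) < 1e-8
# the quadratic variation converges to t = T as the mesh goes to zero
assert abs(quad_var - T) < 0.05
```

The first assertion holds exactly for every path; only the identification of the quadratic variation with $t$ is a limit statement.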

Notes
See [16] for more information on positive definite functions and the proofs of
the results quoted in Section 5.1. Guichardet [50] has given another construction
of symmetric Fock space for the case where the Hilbert space H is the
space of square integrable functions on a measurable space (M, M, m). This
representation is used for another approach to quantum stochastic calculus, the
so-called kernel calculus [69, 73].
P.-A. Meyer’s book [79] gives an introduction to quantum probability and
quantum stochastic calculus for readers who already have some familiarity
with classical stochastic calculus. Other introductions to quantum stochastic
calculus on the symmetric Fock space are [17, 70, 87]. For an abstract approach
to Itô algebras, we refer to [15]. Noncausal quantum stochastic integrals, i.e.,
integrals with integrands that are not adapted, were defined and studied by
Belavkin and Lindsay, see [13, 14, 68]. The recent book by M.-H. Chang [26]
focusses on the theory of quantum Markov processes.
We have not treated here the stochastic calculus on the free Fock space,
which was introduced by Speicher and Biane, see [19] and the references
therein. Free probability is intimately related to random matrix theory, cf. [9].
More information on the methods and applications of free probability can also
be found in [81, 120, 121].

Exercises
Exercise 5.1 We want to show that the tensor powers $v^{\otimes n}$ do indeed span
the symmetric tensor power $H^{\circ n}$. For this purpose, prove the following
polarisation formulas:

$$S_n(v_1 \otimes \cdots \otimes v_n) = \frac{1}{n!\,2^n} \sum_{\epsilon\in\{\pm1\}^n} \Bigl(\prod_{k=1}^n \epsilon_k\Bigr)\, (\epsilon_1 v_1 + \cdots + \epsilon_n v_n)^{\otimes n}$$
$$= \frac{1}{n!} \sum_{k=1}^n (-1)^{n-k} \sum_{l_1 < \cdots < l_k} (v_{l_1} + \cdots + v_{l_k})^{\otimes n},$$

for $n \ge 1$ and $v_1, \ldots, v_n \in H$, where the first summation runs over $n$-tuples
$\epsilon = (\epsilon_1, \ldots, \epsilon_n)$ with coefficients $-1$ or $+1$.
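For small $n$ both polarisation formulas can be checked numerically against the symmetrisation $S_n$ computed directly as an average over permutations (a sketch with vectors in $\mathbb{R}^2$ and $n = 3$):

```python
import itertools, math
import numpy as np

def tpow(vs):
    # ordered tensor product v_1 x ... x v_n as an n-dimensional array
    out = np.array(1.0)
    for v in vs:
        out = np.multiply.outer(out, v)
    return out

rng = np.random.default_rng(2)
n = 3
vs = [rng.normal(size=2) for _ in range(n)]

# symmetrisation S_n(v_1 x ... x v_n) as an average over permutations
sym = sum(tpow([vs[i] for i in p]) for p in itertools.permutations(range(n)))
sym = sym / math.factorial(n)

# first polarisation formula: signed sum over sign patterns eps in {+-1}^n
first = sum(
    np.prod(eps) * tpow([sum(e * v for e, v in zip(eps, vs))] * n)
    for eps in itertools.product([1, -1], repeat=n)
) / (math.factorial(n) * 2**n)

# second polarisation formula: inclusion-exclusion over subsets {l_1 < ... < l_k}
second = sum(
    (-1) ** (n - k) * tpow([sum(vs[l] for l in subset)] * n)
    for k in range(1, n + 1)
    for subset in itertools.combinations(range(n), k)
) / math.factorial(n)

assert np.allclose(sym, first) and np.allclose(sym, second)
```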
6

Random variables on real Lie algebras

Any good theorem should have several proofs, the more the better.
For two reasons: usually, different proofs have different strengths and
weaknesses, and they generalise in different directions – they are not
just repetitions of each other.
(M. Atiyah, in Interview with M. Atiyah and I. Singer.)
In Chapter 3 we have considered the distribution of random variables on real
Lie algebras by specifying ad hoc Hilbert space representations. In this chapter
we revisit this construction via a more systematic approach based on the
framework of Chapter 4 and the splitting Lemma 6.1.3. We start as previously
from the Heisenberg–Weyl and oscillator Lie algebras, and then move on to
the Lie algebra sl2 (R).

6.1 Gaussian and Poisson random variables on osc


Consider the oscillator algebra osc with the relations

$$[N, a^-] = -a^-, \qquad [N, a^+] = a^+, \qquad [a^-, a^+] = E,$$

and

$$(a^+)^* = a^-, \qquad a^- e_0 = 0, \qquad N e_0 = 0.$$

For $\alpha, \beta \in \mathbb{R}$ and $\zeta \in \mathbb{C}$, let $X_{\alpha,\zeta,\beta}$ denote the random variable

$$X_{\alpha,\zeta,\beta} := \alpha N + \zeta a^+ + \bar\zeta a^- + \beta E.$$

Proposition 6.1.1 Let $\alpha, \beta \in \mathbb{R}$ and $\zeta \in \mathbb{C}$. The distribution of $X_{\alpha,\zeta,\beta}$ in the
vacuum vector $e_0$ has the characteristic function

$$\langle e_0, e^{iX_{\alpha,\zeta,\beta}}\, e_0\rangle = \begin{cases}
e^{i\beta - |\zeta|^2/2}, & \text{for } \alpha = 0, \\[1mm]
e^{i\beta + |\zeta|^2(e^{i\alpha} - i\alpha - 1)/\alpha^2}, & \text{for } \alpha \ne 0.
\end{cases}$$

As a consequence of Proposition 6.1.1, $X_{0,\zeta,\beta}$ is a Gaussian random variable
with variance $|\zeta|^2$ and mean $\beta$, while when $\alpha \ne 0$, $X_{\alpha,\zeta,\beta}$ is a Poisson random
variable with "jump size" $\alpha$, intensity $|\zeta|^2/\alpha^2$, and drift $\beta - |\zeta|^2/\alpha$. The proof
of Proposition 6.1.1 is a direct consequence of the splitting Lemma 6.1.3, which
itself follows from the next Lemma 6.1.2 that gives the normally ordered form
of the generalised Weyl operators.

Lemma 6.1.2 Let $z \in \mathbb{C}$. We have the commutation relations

$$e^{zN} a^- = e^{-z}\, a^-\, e^{zN}, \qquad e^{za^+} a^- = (a^- - zE)\, e^{za^+}, \qquad e^{za^+} N = (N - za^+)\, e^{za^+},$$

on the boson Fock space.

Proof: This can be deduced from the formula for the adjoint action,

$$\mathrm{Ad}_{e^X} Y := e^X Y e^{-X} = \sum_{n,m=0}^\infty \frac{(-1)^m}{n!\,m!}\, X^n Y X^m
= \sum_{k=0}^\infty \frac{1}{k!} \sum_{m=0}^k (-1)^m \binom{k}{m} X^{k-m} Y X^m$$
$$= Y + [X, Y] + \frac{1}{2}\bigl[X, [X, Y]\bigr] + \cdots = e^{\mathrm{ad}\,X}\, Y,$$

cf. Section A.5 in the appendix. For the first relation we get

$$\mathrm{Ad}_{e^{zN}}\, a^- = e^{zN} a^- e^{-zN} = e^{z\,\mathrm{ad}\,N}\, a^-
= a^- + z[N, a^-] + \frac{z^2}{2}[N, [N, a^-]] + \frac{z^3}{3!}[N, [N, [N, a^-]]] + \cdots$$
$$= a^- - z a^- + \frac{z^2}{2}\, a^- - \frac{z^3}{3!}\, a^- + \cdots = \sum_{n=0}^\infty \frac{(-z)^n}{n!}\, a^- = e^{-z}\, a^-.$$

For the second relation we have

$$\mathrm{Ad}_{e^{za^+}}\, a^- = e^{za^+} a^- e^{-za^+} = e^{z\,\mathrm{ad}\,a^+}\, a^-
= a^- + z[a^+, a^-] + \frac{z^2}{2}[a^+, [a^+, a^-]] + \cdots
= a^- - z[a^-, a^+] = a^- - zE.$$

For the last relation we find

$$\mathrm{Ad}_{e^{za^+}}\, N = e^{za^+} N e^{-za^+} = e^{z\,\mathrm{ad}\,a^+}\, N
= N + z[a^+, N] + \frac{z^2}{2}[a^+, [a^+, N]] + \cdots
= N + z[a^+, N] = N - za^+.$$

The following formula, also known as the splitting lemma, cf. Proposition 4.2.1
in Chapter 1 of [38], provides the normally ordered form of the Weyl operators
and is a key tool to calculate characteristic functions of elements of the
oscillator algebra.

Lemma 6.1.3 (Splitting lemma) Let $x, u, v, \alpha \in \mathbb{C}$. We have

$$\exp\bigl(xN + ua^+ + va^- + \alpha E\bigr)
= \exp\Bigl(\frac{u}{x}(e^x - 1)\,a^+\Bigr)\, e^{\alpha E + xN}\, \exp\Bigl(\frac{v}{x}(e^x - 1)\,a^-\Bigr)\, \exp\Bigl(\frac{uv}{x^2}(e^x - x - 1)\,E\Bigr).$$

In particular, when $x = 0$ we have

$$\exp\bigl(ua^+ + va^- + \alpha E\bigr) = e^{ua^+}\, e^{va^-}\, e^{(\alpha + uv/2)E}, \qquad u, v, \alpha \in \mathbb{C}.$$
Proof: We will show that

$$\exp(xN + ua^+ + va^- + \alpha E) = e^{\tilde u a^+}\, e^{xN}\, e^{\tilde v a^-}\, e^{\tilde\alpha E}$$

on the boson Fock space, where

$$\tilde u = u \sum_{n=1}^\infty \frac{x^{n-1}}{n!} = \frac{u}{x}(e^x - 1), \qquad
\tilde v = v \sum_{n=1}^\infty \frac{x^{n-1}}{n!} = \frac{v}{x}(e^x - 1),$$
$$\tilde\alpha = \alpha + uv \sum_{n=2}^\infty \frac{x^{n-2}}{n!} = \alpha + \frac{uv}{x^2}(e^x - x - 1).$$

Set

$$\omega_1(t) = \exp\bigl(t(xN + ua^+ + va^- + \alpha E)\bigr)$$

and

$$\omega_2(t) = e^{\tilde u(t) a^+}\, e^{txN}\, e^{\tilde v(t) a^-}\, e^{\tilde\alpha(t) E},$$

$t \in [0, 1]$, where

$$\tilde u(t) := u \sum_{n=1}^\infty \frac{t^n x^{n-1}}{n!} = \frac{u}{x}(e^{tx} - 1), \qquad
\tilde v(t) := v \sum_{n=1}^\infty \frac{t^n x^{n-1}}{n!} = \frac{v}{x}(e^{tx} - 1),$$
$$\tilde\alpha(t) := t\alpha + uv \sum_{n=2}^\infty \frac{t^n x^{n-2}}{n!} = t\alpha + \frac{uv}{x^2}(e^{tx} - tx - 1).$$

We have

$$\omega_1'(t) = \bigl(xN + ua^+ + va^- + \alpha E\bigr)\exp\bigl(t(xN + ua^+ + va^- + \alpha E)\bigr)$$

and, since $\tilde u'(t) = u e^{tx}$, $\tilde v'(t) = v e^{tx}$, and $\tilde\alpha'(t) = \alpha + \frac{uv}{x}(e^{tx}-1)$,

$$\omega_2'(t) = \tilde u'(t)\, a^+\, e^{\tilde u(t)a^+} e^{txN} e^{\tilde v(t)a^-} e^{\tilde\alpha(t)E}
+ x\, e^{\tilde u(t)a^+} N\, e^{txN} e^{\tilde v(t)a^-} e^{\tilde\alpha(t)E}$$
$$+ \tilde v'(t)\, e^{\tilde u(t)a^+} e^{txN} a^-\, e^{\tilde v(t)a^-} e^{\tilde\alpha(t)E}
+ \tilde\alpha'(t)\, e^{\tilde u(t)a^+} e^{txN} e^{\tilde v(t)a^-} E\, e^{\tilde\alpha(t)E}.$$

Moving $N$, $a^-$, and $E$ to the left with the help of Lemma 6.1.2, i.e., using
$e^{\tilde u(t)a^+} N = \bigl(N - \tilde u(t)a^+\bigr)e^{\tilde u(t)a^+}$, $e^{txN} a^- = e^{-tx}\, a^-\, e^{txN}$, and
$e^{\tilde u(t)a^+} a^- = \bigl(a^- - \tilde u(t)E\bigr)e^{\tilde u(t)a^+}$, we obtain

$$\omega_2'(t) = u e^{tx}\, a^+\, \omega_2(t) + x\bigl(N - \tilde u(t)a^+\bigr)\omega_2(t)
+ v\bigl(a^- - \tilde u(t)E\bigr)\omega_2(t) + \Bigl(\alpha + \frac{uv}{x}\bigl(e^{tx}-1\bigr)\Bigr)E\,\omega_2(t)$$
$$= \bigl(xN + ua^+ + va^- + \alpha E\bigr)\omega_2(t),$$

since $u e^{tx} - x\tilde u(t) = u$ and $\frac{uv}{x}(e^{tx}-1) - v\,\tilde u(t) = 0$. Hence $\omega_1$ and $\omega_2$ satisfy the
same linear differential equation with $\omega_1(0) = \omega_2(0) = 1$. Therefore we have
$\omega_1(t) = \omega_2(t)$, $t \in [0, 1]$, which yields the conclusion. $\square$
We find in particular

$$\exp(uQ) = \exp(u a^- + u a^+) = e^{u a^+}\, e^{u a^-}\, e^{u^2 E/2}.$$

Proof of Proposition 6.1.1: This is now a consequence of the splitting Lemma


6.1.3, using the relations (a+ )∗ = a− and a− e0 = Ne0 = 0. 
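Proposition 6.1.1 lends itself to a direct numerical sanity check: truncating the Fock space representation $a^- e_k = \sqrt{k}\, e_{k-1}$, $a^+ e_k = \sqrt{k+1}\, e_{k+1}$, $N = a^+a^-$, $E = I$ to a finite dimension makes $\langle e_0, e^{iX}e_0\rangle$ computable with a matrix exponential (a sketch, not part of the proof; truncation errors are negligible for moderate parameters):

```python
import numpy as np
from scipy.linalg import expm

d = 80                                        # truncation dimension of the Fock space
ap = np.diag(np.sqrt(np.arange(1, d)), -1)    # creation operator a^+
am = ap.conj().T                              # annihilation operator a^-
N = ap @ am                                   # number operator
E = np.eye(d)

def charfun(alpha, zeta, beta):
    X = alpha * N + zeta * ap + np.conj(zeta) * am + beta * E
    return expm(1j * X)[0, 0]                 # <e_0, exp(iX) e_0>

alpha, zeta, beta = 0.7, 0.5 + 0.3j, 0.2
gauss = np.exp(1j * beta - abs(zeta) ** 2 / 2)                        # alpha = 0 case
poisson = np.exp(1j * beta + abs(zeta) ** 2
                 * (np.exp(1j * alpha) - 1j * alpha - 1) / alpha**2)  # alpha != 0 case

assert abs(charfun(0.0, zeta, beta) - gauss) < 1e-8
assert abs(charfun(alpha, zeta, beta) - poisson) < 1e-8
```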
We close this section with a lemma on the cyclicity of the vacuum vector $e_0$,
which will be used for the Girsanov theorem in Chapter 10.

Lemma 6.1.4 For any $\zeta \ne 0$, the vacuum vector $e_0$ is cyclic for $X_{\alpha,\zeta,\beta}$, i.e.,

$$\overline{\mathrm{span}}\,\{X_{\alpha,\zeta,\beta}^k\, e_0 : k = 0, 1, \ldots\} = \ell^2(\mathbb{N}).$$

Proof: Due to the creation operator $a^+$ in the definition of $X_{\alpha,\zeta,\beta}$ we have

$$X_{\alpha,\zeta,\beta}^k\, e_0 = \zeta^k \sqrt{k!}\, e_k + \sum_{\ell=0}^{k-1} c_\ell\, e_\ell,$$

for some coefficients $c_\ell \in \mathbb{C}$, therefore we have

$$\mathrm{span}\,\{e_0, X_{\alpha,\zeta,\beta}\, e_0, \ldots, X_{\alpha,\zeta,\beta}^k\, e_0\} = \mathrm{span}\,\{e_0, \ldots, e_k\},$$

for all $k \in \mathbb{N}$, if $\zeta \ne 0$. $\square$

6.2 Meixner, gamma, and Pascal random variables on sl2 (R)


As in the case of the oscillator algebra osc we will need the following version
of the splitting lemma, cf. [38, Chapter 1, Proposition 4.3.1].

Lemma 6.2.1 ([38]) For any $x, u, v \in \mathbb{R}$ we have

$$\exp(uB^+ + xM + vB^-) = \exp(\tilde u B^+)\, \exp(\tilde x M)\, \exp(\tilde v B^-)$$

on the boson Fock space, where

$$\tilde u = \frac{\tanh(\delta u)}{\delta - (x/u)\tanh(\delta u)}, \qquad
\tilde x = \log\biggl(\frac{\delta\,\mathrm{sech}(\delta u)}{\delta - (x/u)\tanh(\delta u)}\biggr), \qquad
\tilde v = \frac{\tilde u}{u}\, v = \frac{v\,\tanh(\delta u)}{u\bigl(\delta - (x/u)\tanh(\delta u)\bigr)},$$

with $\delta = \sqrt{x^2/u^2 - v/u}$ and $\mathrm{sech}\, x = 1/\cosh x$.
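Because the identity is purely Lie-algebraic, it can be sanity-checked in any matrix realisation of the relations $[B^-, B^+] = M$, $[M, B^\pm] = \pm 2B^\pm$; the sketch below uses the illustrative (non-$*$-preserving) $2\times 2$ choice $M = \mathrm{diag}(1, -1)$, $B^+ = E_{12}$, $B^- = -E_{21}$, which is an assumption made only for this check:

```python
import numpy as np
from scipy.linalg import expm

# illustrative 2x2 realisation of [B-,B+] = M, [M,B+-] = +-2 B+-
M = np.diag([1.0, -1.0])
Bp = np.array([[0.0, 1.0], [0.0, 0.0]])
Bm = np.array([[0.0, 0.0], [-1.0, 0.0]])

u, x, v = 0.7, 1.2, 0.5                          # x^2 > uv, so delta is real
delta = np.sqrt(x**2 / u**2 - v / u)

ut = np.tanh(delta * u) / (delta - (x / u) * np.tanh(delta * u))
xt = np.log((delta / np.cosh(delta * u)) / (delta - (x / u) * np.tanh(delta * u)))
vt = ut * v / u

lhs = expm(u * Bp + x * M + v * Bm)
rhs = expm(ut * Bp) @ expm(xt * M) @ expm(vt * Bm)
assert np.allclose(lhs, rhs)
```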

Using the above splitting Lemma 6.2.1 for sl2(R), we write $e^{X_\beta}$ with

$$X_\beta = B^+ + B^- + \beta M, \qquad \beta \in \mathbb{R},$$

as a product

$$e^{X_\beta} = e^{\nu_+ B^+}\, e^{\nu_0 M}\, e^{\nu_- B^-}.$$

We consider the representation $\rho_\lambda$ of $X_\beta$, which satisfies

$$\rho_\lambda(M)e_0 = \lambda e_0 \qquad \text{and} \qquad \rho_\lambda(B^-)e_0 = 0.$$

Using this decomposition, the distribution of $\rho_\lambda(X_\beta)$ in the state vector $e_0$ is
computed in the next proposition, which will also be used for the Girsanov
theorem in Chapter 10.

Proposition 6.2.2 The Fourier–Laplace transform of the distribution of
$\rho_\lambda(X_\beta)$ with respect to $e_0$ is given by

$$\langle e_0, e^{t\rho_\lambda(X_\beta)}\, e_0\rangle = \left(\frac{\sqrt{\beta^2 - 1}}{\sqrt{\beta^2 - 1}\,\cosh\bigl(t\sqrt{\beta^2 - 1}\bigr) - \beta\,\sinh\bigl(t\sqrt{\beta^2 - 1}\bigr)}\right)^{\lambda}.$$

For $\beta = 0$ we find the Fourier transform $t \mapsto (\cosh t)^{-\lambda}$, which corresponds to the
hyperbolic secant distribution.
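The $\beta = 0$ statement can be probed numerically in a truncated lowest weight representation, using the matrix elements $B^+ e_n = \sqrt{(n+1)(\lambda+n)}\, e_{n+1}$, $B^- = (B^+)^*$, which satisfy $[B^-, B^+] = M$ with $M e_n = (\lambda + 2n)e_n$ (a sketch; these concrete matrix elements are an assumption consistent with $\rho_\lambda(M)e_0 = \lambda e_0$ and $\rho_\lambda(B^-)e_0 = 0$):

```python
import numpy as np
from scipy.linalg import expm

d, lam = 200, 1.5                        # truncation size and lowest weight lambda
n = np.arange(d - 1)
Bp = np.diag(np.sqrt((n + 1) * (lam + n)), -1)   # creation-type operator B^+
Bm = Bp.T                                        # B^- = (B^+)^*
X0 = Bp + Bm                                     # X_0 = B^+ + B^-  (beta = 0)

for t in (0.4, 0.8):
    fourier = expm(1j * t * X0)[0, 0]            # <e_0, exp(it rho_lambda(X_0)) e_0>
    assert abs(fourier - np.cosh(t) ** (-lam)) < 1e-6
```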
More generally, when $|\beta| < 1$, the above distribution is called the Meixner
distribution. It is absolutely continuous with respect to the Lebesgue measure
and its density is given by

$$C\, \exp\left(\frac{(\pi - 2\arccos\beta)\,x}{2\sqrt{1-\beta^2}}\right) \left|\Gamma\!\left(\frac{\lambda}{2} + \frac{ix}{2\sqrt{1-\beta^2}}\right)\right|^2,$$

where $C$ is a normalisation constant, see also [1]. When $\beta = 0$ and $\lambda = 1$ we
find the density

$$\xi_1 \longmapsto C\left|\Gamma\!\left(\frac{1}{2} + \frac{i\xi_1}{2}\right)\right|^2 = \frac{1}{2\cosh(\pi\xi_1/2)}$$

of the continuous binomial distribution, with $C = 1/(2\pi)$. For $\beta = \pm 1$ we get
the gamma distribution, which has the density

$$\frac{1}{\Gamma(\lambda)}\, |x|^{\lambda-1}\, e^{-\beta x}\, 1_{\beta\mathbb{R}_+}(x).$$

Finally, for $|\beta| > 1$, we get a discrete measure, the negative binomial
distribution (also called the Pascal distribution). The next cyclicity lemma will
also be used for the Girsanov theorem in Chapter 10.

Lemma 6.2.3 The lowest weight vector e0 is cyclic for ρλ (Xβ ) for all β ∈ R,
λ > 0.

Proof: On $e_0$, we get

$$\rho_\lambda(X_\beta)^k\, e_0 = \sqrt{k!\,\lambda(\lambda+1)\cdots(\lambda+k-1)}\; e_k + \sum_{\ell=0}^{k-1} c_\ell\, e_\ell$$

for some coefficients $c_\ell \in \mathbb{C}$. Therefore,

$$\mathrm{span}\,\{e_0, \rho_\lambda(X_\beta)e_0, \ldots, \rho_\lambda(X_\beta)^k e_0\} = \mathrm{span}\,\{e_0, \ldots, e_k\},$$

for all $k \in \mathbb{N}$, if $\lambda > 0$. $\square$

6.3 Discrete distributions on so(2) and so(3)


6.3.1 The Bernoulli distribution on so(2)
Recall that the Lie algebra so(2) is generated by

$$\xi_1 = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix}.$$
We can check that the Bernoulli distribution is generated by so(2). In other
words, taking $a, b \in \mathbb{C}$ such that $|a|^2 + |b|^2 = 1$, we can compute the
characteristic function

$$\Bigl\langle \begin{pmatrix} a \\ b \end{pmatrix},\, \exp\Bigl(\theta\begin{pmatrix} 0 & i \\ -i & 0\end{pmatrix}\Bigr)\begin{pmatrix} a \\ b \end{pmatrix}\Bigr\rangle
= \Bigl\langle \begin{pmatrix} a \\ b \end{pmatrix},\, \begin{pmatrix} \cosh\theta & i\sinh\theta \\ -i\sinh\theta & \cosh\theta\end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix}\Bigr\rangle$$
$$= (|a|^2 + |b|^2)\cosh\theta + i(\bar a b - \bar b a)\sinh\theta
= (p + q)\cosh\theta + (p - q)\sinh\theta
= p\,e^{\theta} + q\,e^{-\theta},$$

with $\bar a b - \bar b a = 2i\,\mathrm{Im}(\bar a b)$ and

$$p = \frac{1}{2} - \mathrm{Im}(\bar a b), \qquad q = \frac{1}{2} + \mathrm{Im}(\bar a b),$$

so that $p + q = |a|^2 + |b|^2 = 1$ and $p - q = i(\bar a b - \bar b a)$. This yields the Bernoulli
distribution

$$q\,\delta_{-1} + p\,\delta_1,$$

which is supported by $\{-1, 1\}$.
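This two-by-two computation is easy to confirm numerically (a sketch; the convention $\langle u, v\rangle = \sum_i \bar u_i v_i$ is assumed):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
v = rng.normal(size=2) + 1j * rng.normal(size=2)
v /= np.linalg.norm(v)                  # |a|^2 + |b|^2 = 1
a, b = v

xi1 = np.array([[0.0, 1j], [-1j, 0.0]])
p = 0.5 - np.imag(np.conj(a) * b)       # weight of the eigenvalue +1
q = 0.5 + np.imag(np.conj(a) * b)       # weight of the eigenvalue -1

for theta in (0.3, 1.1):
    lhs = np.vdot(v, expm(theta * xi1) @ v)
    assert abs(lhs - (p * np.exp(theta) + q * np.exp(-theta))) < 1e-10
```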

6.3.2 The three-point distribution on so(3)


Recall that the Lie algebra so(3) is generated by

$$\xi_1 = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\xi_2 = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \qquad
\xi_3 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}.$$
The probability distribution associated to so(3) is supported by three points.
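Indeed, every nonzero element $X = c_1\xi_1 + c_2\xi_2 + c_3\xi_3$ of so(3) is a real antisymmetric $3\times 3$ matrix, so $iX$ is Hermitian with spectrum $\{-\|c\|, 0, \|c\|\}$; in any state the distribution is therefore carried by at most three points. A quick check:

```python
import numpy as np

xi1 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], float)
xi2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], float)
xi3 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], float)

c = np.array([0.3, -1.2, 0.5])
X = c[0] * xi1 + c[1] * xi2 + c[2] * xi3
eig = np.linalg.eigvalsh(1j * X)        # iX is Hermitian
norm_c = np.linalg.norm(c)
assert np.allclose(np.sort(eig), [-norm_c, 0.0, norm_c])
```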

6.4 The Lie algebra e(2)


Consider the representation of the Euclidean Lie algebra e(2) on $\ell^2(\mathbb{Z})$
given by

$$M e_k = 2k\, e_k, \qquad E^+ e_k = e_{k+1}, \qquad E^- e_k = e_{k-1},$$

where $(e_k)_{k\in\mathbb{Z}}$ denotes the standard basis of $\ell^2(\mathbb{Z})$. Let us now give a prob-
abilistic interpretation of e(2), i.e., compute the distribution of the quantum
random variable

$$E_{\alpha,\zeta} = \alpha M + \zeta E^+ + \bar\zeta E^-$$

with $\alpha \in \mathbb{R}$, $\zeta \in \mathbb{C}$.

By the version (6.1) of the splitting lemma proved in Exercise 6.1 it is easy
to compute the moment generating function of the law of Eα,ζ in the state given
by the vector e0 . We have
∞ k
 z
exp(zE− )e0 = e−k
k!
k=0

and

e0 , exp(λEα,ζ )e0

8   9
ζ " 2λα # ! ζ " #
= exp e − 1 E− e0 , exp(λαM) exp e2λα − 1 E− e0
2α 2α
|ζ |2k " 2λα #2k


= e−2αλk e − 1
(2αk!)2
k=0
∞
|ζ |2k
= sinh(αλ)2k
α 2 k!2
k=0
!
2|ζ |
= J0 sinh(αλ) ,
α
where J0 denotes the modified Bessel (or hyperbolic) Bessel function of the
first kind,

(−1)m " x #2m




J0 (x) = .
(m!)2 2
m=0

See also Section 5.V “e2 and Lommel polynomials” in Feinsilver and Schott’s,
Algebraic Structures and Operator Calculus, Volume III: Representations of
Lie Groups.
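The computation can be sanity-checked numerically on a truncation of $\ell^2(\mathbb{Z})$ to the indices $|k| \le d$, comparing the matrix-exponential vacuum expectation with the modified Bessel function $I_0$ (a sketch using `scipy.special.iv`):

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import iv

d = 40                                   # keep basis vectors e_{-d}, ..., e_d
k = np.arange(-d, d + 1)
dim = len(k)
M = np.diag(2.0 * k)                     # M e_k = 2k e_k
Ep = np.diag(np.ones(dim - 1), -1)       # E^+ e_k = e_{k+1}
Em = Ep.T                                # E^- e_k = e_{k-1}
e0 = d                                   # index of the basis vector e_0

alpha, zeta, lam = 0.5, 0.8, 0.3
E = alpha * M + zeta * Ep + np.conj(zeta) * Em
mgf = expm(lam * E)[e0, e0]              # <e_0, exp(lambda E_{alpha,zeta}) e_0>
assert abs(mgf - iv(0, 2 * abs(zeta) * np.sinh(alpha * lam) / alpha)) < 1e-8
```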

6.4.1 The case α = 0


The operator $E^+$ is the shift on $\mathbb{Z}$ in our representation; note that it is normal,
since $E^+$ and $E^- = (E^+)^*$ commute, and even unitary. In the state given by the
vector $e_0$ the distribution of $E^+$ is the uniform distribution on the unit circle,

$$\langle e_0, (E^+)^n (E^-)^m e_0\rangle = \delta_{n,m}.$$

Therefore, the distribution of $E_{0,1} = E^+ + E^-$ is the image measure of the
uniform distribution on the unit circle under the map $S^1 \ni z \mapsto z + \bar z \in [-2, 2]$,
which is the arcsine distribution,

$$\mathcal{L}_{e_0}(E^+ + E^-) \sim \frac{1}{\pi\sqrt{4 - x^2}}\, 1_{(-2,2)}(x)\, dx.$$
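The even moments of the arcsine law on $[-2, 2]$ are the central binomial coefficients $\binom{2n}{n}$, and these coincide with the vacuum moments $\langle e_0, (E^+ + E^-)^{2n} e_0\rangle$, which count paths on $\mathbb{Z}$ returning to the origin; a short check on a truncated shift:

```python
import math
import numpy as np

d = 20
dim = 2 * d + 1
Ep = np.diag(np.ones(dim - 1), -1)       # shift E^+ on a truncated copy of l^2(Z)
X = Ep + Ep.T                            # E^+ + E^-
e0 = np.zeros(dim); e0[d] = 1.0          # basis vector e_0 in the middle

for n in range(1, 6):
    moment = e0 @ np.linalg.matrix_power(X, 2 * n) @ e0
    assert abs(moment - math.comb(2 * n, n)) < 1e-9
    # odd moments vanish, as they do for the arcsine density on (-2, 2)
    assert abs(e0 @ np.linalg.matrix_power(X, 2 * n - 1) @ e0) < 1e-9
```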

6.4.2 The case α ≠ 0

For $\alpha \ne 0$, $E_{\alpha,\zeta}$ has the moment generating function

$$\langle e_0, \exp(\lambda E_{\alpha,\zeta})\, e_0\rangle = I_0\Bigl(\frac{2|\zeta|\sinh(\alpha\lambda)}{\alpha}\Bigr)$$

and the characteristic function

$$\langle e_0, \exp(i\lambda E_{\alpha,\zeta})\, e_0\rangle = J_0\Bigl(\frac{2|\zeta|\sin(\alpha\lambda)}{\alpha}\Bigr).$$
The distribution of $E_{\alpha,\zeta}$ is a discrete measure supported on $2\alpha\mathbb{Z}$; we have

$$\mathcal{L}_{e_0}(E_{\alpha,\zeta}) = \sum_{m\in\mathbb{Z}} J_m\Bigl(\frac{|\zeta|}{\alpha}\Bigr)^{\!2}\, \delta_{2m\alpha},$$

cf. Theorem 5.9.2 in [40]. Here,

$$J_m(x) = \Bigl(\frac{x}{2}\Bigr)^{m} \sum_{k=0}^\infty \frac{(-1)^k}{k!\,(k+m)!}\Bigl(\frac{x}{2}\Bigr)^{2k},$$

are the Bessel functions of the first kind of order $m \ge 0$, extended to negative
orders by $J_{-m} = (-1)^m J_m$.
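The consistency between this discrete measure and the characteristic function rests on the classical identity $\sum_{m\in\mathbb{Z}} J_m(z)^2 e^{im\theta} = J_0(2z\sin(\theta/2))$ (a consequence of Neumann's addition theorem), together with the normalisation $\sum_m J_m(z)^2 = 1$; both are easy to check numerically:

```python
import numpy as np
from scipy.special import jv

z, theta = 1.3, 0.7
m = np.arange(-60, 61)
w = jv(m, z) ** 2                        # weights J_m(z)^2 of the discrete measure
assert abs(w.sum() - 1.0) < 1e-12        # the weights sum to one
lhs = np.sum(w * np.exp(1j * m * theta))
assert abs(lhs - jv(0, 2 * z * np.sin(theta / 2))) < 1e-10
```

With $z = |\zeta|/\alpha$ and $\theta = 2\alpha\lambda$ this reproduces the characteristic function $J_0(2|\zeta|\sin(\alpha\lambda)/\alpha)$ above.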

Notes
We also refer the reader to [118], [115], [117], [116] for additional noncommutative
relations on real Lie algebras, and to [10] and references therein for
further discussion of “noncommutative (or quantum) mathematics”.

Exercises
Exercise 6.1 Splitting lemma for the two-dimensional Euclidean group.
The goal of this exercise is to prove the following version

$$\exp\bigl(xM + yE^+ + zE^-\bigr)
= \exp\Bigl(\frac{y}{2x}\bigl(e^{2x}-1\bigr)E^+\Bigr)\exp(xM)\exp\Bigl(\frac{z}{2x}\bigl(e^{2x}-1\bigr)E^-\Bigr) \qquad (6.1)$$

of the splitting lemma, for $x, y, z \in \mathbb{C}$, where $M, E^+, E^-$ denote a basis of
the Lie algebra of the group of rigid motions in two dimensions (i.e., the
Euclidean Lie algebra). Denote by e(2) the Lie algebra with basis $R, T_x, T_y$
and the relations

$$[T_x, T_y] = 0, \qquad [R, T_x] = T_y, \qquad [R, T_y] = -T_x,$$

and $R^* = -R$, $T_x^* = -T_x$, $T_y^* = -T_y$.
1. Show that
$$\tilde R = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\tilde T_x = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad
\tilde T_y = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix},$$
satisfy the same commutation relations as R, Tx , Ty .
2. Consider the affine subspace

$$K_2 = \left\{\begin{pmatrix} x \\ y \\ 1 \end{pmatrix} : x, y \in \mathbb{R}\right\} \subseteq \mathbb{R}^3$$

and show that

$$\exp(\theta\tilde R), \qquad \exp(v\tilde T_x), \qquad \exp(w\tilde T_y)$$

act as rigid motions on $K_2$ (i.e., as maps that preserve distances).
3. Show that we can find a basis M, E+ , E− for e(2) that satisfies the relations
$$M^* = M, \qquad (E^+)^* = E^-,$$

and

$$[E^+, E^-] = 0, \qquad [M, E^\pm] = \pm 2E^\pm.$$
4. Show that we have


$$\exp(uE^+)\,M = (M - 2uE^+)\exp(uE^+), \qquad
\exp(uM)\,E^- = e^{-2u}\, E^- \exp(uM),$$
$$\exp(uE^+)\,E^- = E^- \exp(uE^+).$$

5. Define

$$\omega_1(t) = \exp\bigl(t(xM + yE^+ + zE^-)\bigr), \qquad
\omega_2(t) = \exp\bigl(\tilde y(t)E^+\bigr)\exp\bigl(\tilde x(t)M\bigr)\exp\bigl(\tilde z(t)E^-\bigr),$$

with

$$\tilde x(t) = tx, \qquad \tilde y(t) = \frac{y}{2x}\bigl(e^{2tx}-1\bigr), \qquad \tilde z(t) = \frac{z}{2x}\bigl(e^{2tx}-1\bigr).$$
Show that we have $\omega_1(0) = \omega_2(0)$ and that $\omega_1(t)$ and $\omega_2(t)$ satisfy the
same differential equation

$$\omega_j'(t) = (xM + yE^+ + zE^-)\,\omega_j(t), \qquad j = 1, 2.$$
6. Conclude that Equation (6.1) holds.
Exercise 6.2 Splitting lemma on the Heisenberg–Weyl algebra hw.
Consider the Heisenberg–Weyl algebra hw generated by a− , a+ , E with the
commutation relations
[a− , a+ ] = E, [E, a− ] = [E, a+ ] = 0.
The goal of this question is to prove the splitting lemma

$$\exp\bigl(ua^+ + va^- + wE\bigr) = e^{ua^+}\, e^{va^-}\, e^{(w+uv/2)E}, \qquad u, v, w \in \mathbb{C}.$$
1. Using the relation

$$e^{zX} Y e^{-zX} = Y + z[X, Y] + \frac{z^2}{2}[X, [X, Y]] + \frac{z^3}{3!}[X, [X, [X, Y]]] + \cdots$$
$$= Y + \sum_{n=1}^\infty \frac{z^n}{n!}\,\underbrace{[X, [X, \ldots [X}_{n \text{ times}}, Y]\cdots]], \qquad (6.2)$$

show that for all $z \in \mathbb{C}$ we have

i) $e^{za^+} a^- e^{-za^+} = a^- - zE$ and $e^{za^+} a^- = (a^- - zE)\,e^{za^+}$,

ii) $e^{za^+} E\, e^{-za^+} = E$ and $e^{za^+} E = E\, e^{za^+}$,

iii) $e^{za^-} E\, e^{-za^-} = E$ and $e^{za^-} E = E\, e^{za^-}$.
2. Given $u, v, w \in \mathbb{C}$, let

$$\omega_1(t) := \exp\bigl(t(ua^+ + va^- + wE)\bigr), \qquad t \in \mathbb{R}_+.$$

Show that

$$\omega_1'(t) = (ua^+ + va^- + wE)\,\omega_1(t), \qquad t \in \mathbb{R}_+. \qquad (6.3)$$
3. Given $u, v, w \in \mathbb{C}$, let now

$$\omega_2(t) := e^{uta^+}\, e^{vta^-}\, e^{(tw + t^2uv/2)E}, \qquad t \in \mathbb{R}_+.$$

Show that

$$\omega_2'(t) = u\,a^+ e^{uta^+} e^{vta^-} e^{(tw+t^2uv/2)E}
+ v\, e^{uta^+} a^- e^{vta^-} e^{(tw+t^2uv/2)E}$$
$$+ (w + tuv)\, e^{uta^+} e^{vta^-} E\, e^{(tw+t^2uv/2)E}. \qquad (6.4)$$
4. Using Relations (6.3) and (6.4) and the result of Question 1, show that

$$\omega_2'(t) = (ua^+ + va^- + wE)\,\omega_2(t), \qquad t \in \mathbb{R}_+.$$

Show from (6.3) that, as a consequence, we have $\omega_1(t) = \omega_2(t)$, $t \in \mathbb{R}_+$, and

$$e^{ua^+ + va^- + wE} = e^{ua^+}\, e^{va^-}\, e^{(w+uv/2)E}. \qquad (6.5)$$
5. Using the splitting lemma Relation (6.5), show that when $E = \sigma^2 I$ we have

$$\langle e_0, e^{u(a^- + a^+)} e_0\rangle = \langle e_0, e^{iu(a^- - a^+)} e_0\rangle = e^{u^2\sigma^2/2},$$

where $e_0$ is a unit vector in a Hilbert space $H$ with inner product $\langle\cdot,\cdot\rangle$, such
that $\langle e_0, e_0\rangle = 1$ and $a^- e_0 = 0$.
6. From the result of Question (5) show that a− + a+ and i(a− − a+ ) have
centered Gaussian distribution with variance σ 2 .
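A numerical sanity check of (6.5) in the Fock representation (with $E = I$): taking vacuum expectations of both sides gives $\langle e_0, e^{ua^+ + va^- + wE} e_0\rangle = e^{w + uv/2}$, since $e^{va^-}e_0 = e_0$ and $\langle e_0, e^{ua^+} e_0\rangle = 1$ (a sketch on a truncated Fock space):

```python
import numpy as np
from scipy.linalg import expm

d = 60
ap = np.diag(np.sqrt(np.arange(1, d)), -1)   # a^+ on the truncated Fock space
am = ap.T                                    # a^-
I = np.eye(d)

u, v, w = 0.6, 0.3, 0.1
lhs = expm(u * ap + v * am + w * I)[0, 0]    # <e_0, exp(u a^+ + v a^- + w E) e_0>
assert abs(lhs - np.exp(w + u * v / 2)) < 1e-8
```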
Exercise 6.3 Consider the differential operators $\tilde a^+, \tilde a^-, \tilde a^\circ$ defined in Section
3.3 by $\tilde a^- = -\tau\partial_\tau$, $\tilde a^+ = \tau - 1 - \tau\partial_\tau$, and $\tilde a^\circ = -(1-\tau)\partial_\tau - \tau\partial_\tau^2$.
1. Show that for $s \in \mathbb{R}$ we have

$$\exp\bigl(-is\tilde Q\bigr)\,\tilde a^\circ\,\exp\bigl(is\tilde Q\bigr) = \tilde a^\circ + is\,\tilde a^+ - is\,\tilde a^- + s^2\tau,$$

and

$$\exp\bigl(is\tilde P\bigr)\,\tilde a^\circ\,\exp\bigl(-is\tilde P\bigr) = e^{-2s}\,\tilde a^\circ - \frac{1}{2}\sinh(2s)\bigl(\tilde a^+ + \tilde a^-\bigr) + \sinh^2(s). \qquad (6.6)$$
2. Show that for any $s \in \mathbb{R}$ the operator

$$\tilde a^\circ + is\,\tilde a^+ - is\,\tilde a^- + s^2\tau$$

has a geometric distribution with parameter $s^2/(1+s^2)$ in the vacuum
state $1$, and that the operator

$$e^{-2s}\,\tilde a^\circ - \frac{1}{2}\sinh(2s)\bigl(\tilde a^+ + \tilde a^-\bigr) + \sinh^2(s)$$

has a geometric distribution with parameter $\tanh^2(s)$ in the vacuum state.
3. Conclude that the distribution of $\tilde P$ has the Fourier transform

$$\mathrm{IE}[\exp(it\tilde P)] = \frac{1}{\cosh(t)}, \qquad t \in \mathbb{R}.$$
7

Weyl calculus on real Lie algebras

Couples are wholes and not wholes, what agrees disagrees, the
concordant is discordant. From all things one and from one all things.
(Heraclitus, On the Universe 59.)
This chapter introduces the notion of joint (Wigner) density of random
variables, for future use in quantum Malliavin calculus. For this we will rely
on functional calculus on general Lie algebras, starting with the Heisenberg–
Weyl algebra. We also consider some applications to quantum optics and time-
frequency analysis.

7.1 Joint moments of noncommuting random variables


The notion of the moment $\langle\varphi, P^n\psi\rangle$ of order $n$ of $P$ in the state $\langle\varphi,\cdot\,\psi\rangle$ is well
understood as the noncommutative analogue of $\mathrm{IE}[X^n]$. Indeed, in the single
variable case, various functional calculi are available for elements of an
understood as the noncommutative analogue of IE[X n ]. Indeed, in the single
variable case, various functional calculi are available for elements of an
algebra. In particular,

• polynomials can be applied to any element in an algebra;


• holomorphic functions can be applied to elements of a Banach algebra
provided the function is holomorphic on a neighborhood of the spectrum of
the element;
• continuous functions can be applied to normal elements of C*-algebras
provided they are continuous on the spectrum of the element;
• measurable (i.e., Borel) functions can be applied to normal elements in a
von Neumann algebra (e.g., B(h)), provided they are measurable on the
spectrum of the operator.


In addition, the Borel functional calculus can be extended to unbounded


normal operators (where normal means that XX ∗ and X ∗ X have the same
domain and coincide on this domain).
All the above functional calculi work due to a restriction to a commutative
subalgebra. In that sense, Weyl calculus is fundamentally different because it
applies to functions of noncommuting operators.
We now need to focus on the meaning of joint moments. For example,
which is the noncommutative analogue of $\mathrm{IE}[XY]$? A possibility is to choose
$\langle\varphi, PQ\psi\rangle$ as the noncommutative analogue of $\mathrm{IE}[XY]$; however, this would
lead to $\langle\varphi, QP\psi\rangle$ as the noncommutative analogue of $\mathrm{IE}[YX]$, and we have
$\mathrm{IE}[XY] = \mathrm{IE}[YX]$ while

$$\langle\varphi, PQ\psi\rangle \ne \langle\varphi, QP\psi\rangle$$
due to noncommutativity. A way to resolve this contradiction is to define
the first joint moment as

$$\frac{1}{2}\bigl(\langle\varphi, QP\psi\rangle + \langle\varphi, PQ\psi\rangle\bigr).$$

Next, how can we define the analogue of $\mathrm{IE}[XY^2]$? Proceeding similarly, a
natural extension could be

$$\frac{1}{3}\Bigl(\langle\varphi, PQ^2\psi\rangle + \langle\varphi, QPQ\psi\rangle + \langle\varphi, Q^2P\psi\rangle\Bigr).$$
Then what is the correct way to construct IE[X 2 Y 3 ]? Clearly there is a
combinatorial pattern which should be made explicit.
We note that the scalar exponentials commute, $e^{iby+iax} = e^{iby}e^{iax} = e^{iax}e^{iby}$, hence

$$e^{iby+iax} = \sum_{n,m=0}^\infty \frac{i^{n+m}}{n!\,m!}\, a^n b^m\, x^n y^m$$
$$= 1 + iax + iby - \frac{(ax+by)^2}{2} - i\,\frac{(ax+by)^3}{3!} + \sum_{n=4}^\infty \frac{i^n}{n!}(ax+by)^n$$
$$= 1 + iax + iby - \frac{1}{2}\bigl(a^2x^2 + b^2y^2 + 2abxy\bigr)
- \frac{i}{3!}\bigl(a^3x^3 + b^3y^3 + 3ab^2xy^2 + 3a^2bx^2y\bigr) + \sum_{n=4}^\infty \frac{i^n}{n!}(ax+by)^n.$$

On the other hand, the exponentials $e^{ibQ}$ and $e^{iaP}$ do not commute, and
expanding the exponential series we get

$$e^{ibQ+iaP} = \sum_{n=0}^\infty \frac{1}{n!}(ibQ + iaP)^n
= I + ibQ + iaP + \frac{(ibQ+iaP)^2}{2} + \frac{(ibQ+iaP)^3}{3!} + \sum_{n=4}^\infty \frac{(ibQ+iaP)^n}{n!}$$
$$= I + ibQ + iaP - \frac{b^2Q^2 + a^2P^2 + abQP + abPQ}{2}$$
$$\quad - \frac{i}{3!}\Bigl(b^3Q^3 + a^3P^3 + a^2b\,(QP^2 + P^2Q + PQP) + ab^2\,(Q^2P + PQ^2 + QPQ)\Bigr)
+ \sum_{n=4}^\infty \frac{i^n}{n!}(bQ + aP)^n,$$

and by identifying the above with the exponential series of $e^{iby+iax}$ we get:

i) from the terms in $ab$ (i.e., $k = l = 1$),

$$xy \longleftrightarrow \frac{QP + PQ}{2};$$

ii) from the terms in $a^2b$ and $ab^2$ we recover

$$\frac{x^2}{2!}\,y \longleftrightarrow \frac{QP^2 + P^2Q + PQP}{3!} \qquad\text{and}\qquad \frac{y^2}{2!}\,x \longleftrightarrow \frac{Q^2P + PQ^2 + QPQ}{3!}.$$

More generally, by identifying the terms in $a^m b^k$, we map $\dfrac{y^k}{k!}\,\dfrac{x^m}{m!}$ to the
coefficient of order $a^m b^k$ in

$$\frac{1}{(k+m)!}\,(bQ + aP)^{k+m} = \frac{1}{(k+m)!}\sum_{A\subset\{1,\ldots,k+m\}} b^{|A|}\, a^{k+m-|A|} \prod_{l=1}^{k+m} Q^{1_{\{l\in A\}}}\, P^{1_{\{l\notin A\}}},$$

hence we map

$$\binom{k+m}{k}\, y^k x^m \qquad \text{to} \qquad \sum_{\substack{A\subset\{1,\ldots,k+m\} \\ |A|=k}} \prod_{l=1}^{k+m} Q^{1_{\{l\in A\}}}\, P^{1_{\{l\notin A\}}},$$

with the correspondence

$$y^k x^m \longleftrightarrow \frac{k!\,m!}{(k+m)!} \sum_{\substack{A\subset\{1,\ldots,k+m\} \\ |A|=k}} \prod_{l=1}^{k+m} Q^{1_{\{l\in A\}}}\, P^{1_{\{l\notin A\}}}.$$

7.2 Combinatorial Weyl calculus


The above arguments can be generalised as a combinatorial Weyl calculus.
Denote by $W(X_1, \ldots, X_k; j_1, \ldots, j_k)$ the set of all words in the letters
$X_1, \ldots, X_k$ such that $X_\ell$ occurs $j_\ell$ times for $\ell = 1, \ldots, k$. For example, denoting
the empty word by $I$, we have

$$W(X_1, \ldots, X_k; 0, \ldots, 0) = \{I\}, \qquad W(X_1, X_2; 1, 1) = \{X_1X_2,\, X_2X_1\},$$
$$W(X_1, X_2; 2, 1) = \{X_1^2X_2,\, X_1X_2X_1,\, X_2X_1^2\}.$$

The next lemma is a noncommutative extension of the multinomial identity
(A.5).

Lemma 7.2.1

$$(X_1 + \cdots + X_k)^n = \sum_{\substack{j_1, \ldots, j_k \ge 0 \\ j_1 + \cdots + j_k = n}}\; \sum_{X \in W(X_1, \ldots, X_k;\, j_1, \ldots, j_k)} X.$$

More generally we can build a combinatorial Weyl calculus as in the next
definition.

Definition 7.2.2 Let $X_1, \ldots, X_k$ be elements in some algebra. We define a
linear map from the algebra of polynomials in $k$ variables $x_1, \ldots, x_k$ to the
algebra generated by $X_1, \ldots, X_k$ by

$$x_1^{j_1}\cdots x_k^{j_k} \longleftrightarrow \binom{n}{j_1, \ldots, j_k}^{-1} \sum_{X \in W(X_1, \ldots, X_k;\, j_1, \ldots, j_k)} X, \qquad n = j_1 + \cdots + j_k.$$

We can also summarise the above definition as the one-to-one correspondence

$$e^{t_1x_1 + \cdots + t_kx_k} \longleftrightarrow \exp\bigl(t_1X_1 + \cdots + t_kX_k\bigr),$$
where we interpret the exponentials as formal power series. As an example,
the above combinatorial Weyl calculus recovers the identifications

$$x_1x_2 \longleftrightarrow \frac{1}{2}\bigl(X_1X_2 + X_2X_1\bigr)$$

and

$$x_1^2x_2 \longleftrightarrow \frac{1}{3}\bigl(X_1^2X_2 + X_1X_2X_1 + X_2X_1^2\bigr).$$

The combinatorial Weyl calculus is a homomorphism when restricted to a
subalgebra generated by a linear combination $\alpha_1x_1 + \cdots + \alpha_kx_k$, i.e., we have

$$P(\alpha_1x_1 + \cdots + \alpha_kx_k) \longleftrightarrow P(\alpha_1X_1 + \cdots + \alpha_kX_k)$$

for all $\alpha_1, \ldots, \alpha_k \in \mathbb{C}$ and all polynomials $P$.
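Both Lemma 7.2.1 and the symmetrised correspondence can be verified mechanically with matrices, enumerating $W(X_1, X_2; j_1, j_2)$ as the distinct orderings of a multiset (a small sketch, not tied to any particular operator pair):

```python
import itertools, math
from functools import reduce
import numpy as np

rng = np.random.default_rng(4)
X1 = rng.normal(size=(3, 3))
X2 = rng.normal(size=(3, 3))

def words(j1, j2):
    # all distinct orderings of j1 copies of X1 and j2 copies of X2
    letters = set(itertools.permutations([0] * j1 + [1] * j2))
    mats = (X1, X2)
    return [reduce(np.matmul, (mats[i] for i in word)) for word in letters]

# Lemma 7.2.1 for n = 3: (X1 + X2)^3 = sum over j1 + j2 = 3 of all words
n = 3
lhs = np.linalg.matrix_power(X1 + X2, n)
rhs = sum(sum(words(j1, n - j1)) for j1 in range(n + 1))
assert np.allclose(lhs, rhs)

# Weyl image of x1^2 x2: the average (X1^2 X2 + X1 X2 X1 + X2 X1^2) / 3
weyl = sum(words(2, 1)) / math.comb(3, 1)
assert np.allclose(weyl, (X1 @ X1 @ X2 + X1 @ X2 @ X1 + X2 @ X1 @ X1) / 3)
```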

7.2.1 Lie-theoretic Weyl calculus


In the “Lie-theoretic” Weyl calculus the definition of the “combinatorial”
Weyl calculus is modified (condition of square integrability, appearance of
the modular function σ and the Duflo-Moore operator C, see Theorem 7.4.1)
to achieve some “nice” properties, such as invariance or isometry. These
modifications are rather moderate in the unimodular case, but they lead to more
complicated formulas in the non-unimodular case. They also destroy the nice
relation we have for the marginal distributions of the Wigner function.

7.3 Heisenberg–Weyl algebra


7.3.1 Functional calculus on the Heisenberg–Weyl algebra
Let $B_2(h)$ denote the space of Hilbert–Schmidt operators equipped with the
scalar product

$$\langle\rho_1, \rho_2\rangle_{B_2} := \mathrm{Tr}[\rho_1^*\rho_2], \qquad \rho_1, \rho_2 \in B_2(h).$$

The operator $f(P, Q)$ is defined by

$$f(P, Q) := \int_{\mathbb{R}^2} e^{ixP + iyQ}\, (\mathcal{F}f)(x, y)\, dx\, dy,$$

and is also denoted by $O(f) = f(P, Q)$, with

$$O\bigl(e^{-iux - ivy}\bigr) = \exp(iuP + ivQ), \qquad u, v \in \mathbb{R},$$

and the bound

$$\|f(P, Q)\|_{B(h)} = \|O(f)\|_{B(h)} \le C_p\, \|f\|_{L^p(\mathbb{R}^2)}$$

for $1 \le p \le 2$, see Lemma 7.3.2.


Definition 7.3.1 The domain $\mathrm{Dom}(O)$ of the mapping $O$ can be extended to
the set of measurable functions $\varphi : \mathbb{R}^2 \longrightarrow \mathbb{C}$ for which there exists a bounded
operator $M \in B(L^2(\mathbb{R}))$ on $L^2(\mathbb{R})$ such that

$$\langle f, Mg\rangle = \frac{1}{2\pi}\int_{\mathbb{R}^2} \langle f, e^{iuP + ivQ}g\rangle\, \mathcal{F}^{-1}\varphi(u, v)\, du\, dv, \qquad (7.1)$$

for all $f, g \in \mathcal{S}(\mathbb{R})$, and for $\varphi \in \mathrm{Dom}(O)$ we define $O(\varphi)$ to be the bounded
operator $M$ appearing in Equation (7.1); it is uniquely determined due to the
totality of $\mathcal{S}(\mathbb{R})$.
If $\varphi \in \mathcal{S}(\mathbb{R}^2)$ is a Schwartz function on $\mathbb{R}^2$ one can check that

$$\varphi(P, Q) = O(\varphi) = \frac{1}{2\pi}\int_{\mathbb{R}^2} e^{iuP + ivQ}\, \mathcal{F}^{-1}\varphi(u, v)\, du\, dv$$

defines a bounded operator, and the map

$$O : \mathcal{S}(\mathbb{R}^2) \longrightarrow B(h)$$

defined in this way extends to a continuous map from $L^p(\mathbb{R}^2)$ to $B(h)$ for all
$p \in [1, 2]$, as shown in the next lemma.
Lemma 7.3.2 Let $1 \le p \le 2$. Then we have $L^p(\mathbb{R}^2) \subseteq \mathrm{Dom}(O)$ and there
exists a constant $C_p$ such that

$$\|O(\varphi)\| \le C_p\, \|\varphi\|_p$$

for all $\varphi \in L^p(\mathbb{R}^2)$.
Proof : This follows immediately from [125, Theorem 11.1], where it is
stated for the irreducible unitary representation with parameter h̄ = 1 of the
Heisenberg–Weyl group.
In case $p > 2$ there exist functions in $L^p(\mathbb{R}^2)$ for which we cannot define
a bounded operator in this way, see, e.g., [125] and references therein.
Nevertheless, since

$$\frac{1}{2\pi}\,\mathcal{F}^{-1}e^{ix_0u + iy_0v} = \delta_{(x_0, y_0)},$$

the map $O$ can be extended to exponential functions, with the relation

$$O\bigl(e^{ix_0u + iy_0v}\bigr) = \exp(ix_0P + iy_0Q).$$

7.3.2 Wigner functions on the Heisenberg–Weyl algebra


Our goal is to define a noncommutative analogue of the joint probability density of a couple of noncommutative random variables. Here we are concerned with the notion of joint distribution for the couple $(P,Q)$. Namely, we define a noncommutative analogue of the joint probability density of the couple $(P,Q)$ on the Heisenberg–Weyl algebra hw with basis $\{P,Q,I\}$ and $[P,Q]=-2iI$. The "joint density" of the pair $(P,Q)$ will be represented by its Wigner distribution.
Definition 7.3.3 Let $\omega$ be a state on $B(h)$. We will call $W_\omega(dx)$ the Wigner distribution of $(P,Q)$ in the state $\omega$ if the relation
$$\int_{\mathbb R^2}\varphi(x)\,W_\omega(dx)=\omega\big(O(\varphi)\big)$$
is satisfied for all Schwartz functions $\varphi\in S(\mathbb R^2)$.

In general, $W_\omega(dx)$ may not be positive and it is a signed measure, since $O$ does not map positive functions to positive operators. However, we show in the next proposition that it admits a density.
Proposition 7.3.4 Let $\omega$ be a state on $B(h)$. Then there exists a function $w_\omega\in\bigcap_{2\le p\le\infty}L^p(\mathbb R^2)$ such that
$$W_\omega(dx,dy)=w_\omega(x,y)\,dx\,dy.$$
We call $w_\omega$ the Wigner function of $(P,Q)$ in the state $\omega$.

Proof: It is sufficient to observe that Lemma 7.3.2 implies that the map $\varphi\mapsto\omega\big(O(\varphi)\big)$ defines a continuous linear functional on $L^p(\mathbb R^2)$ for $p\in[1,2]$. $\square$
By analogy with (A.10) we let
$$W_{|\phi\rangle\langle\psi|}(u,v):=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\qquad(7.2)$$
denote the Wigner density of $(P/2,-Q)$, defined by the Fourier inversion of the characteristic function
$$(x,y)\longmapsto\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h.$$
Here, $|\phi\rangle\langle\psi|$, $\phi,\psi\in h$, denotes the state defined by
$$\rho\longmapsto\langle\psi,\rho\phi\rangle_h,\qquad\rho\in B(h).$$
In other words we have
$$\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h=\int_{\mathbb R^2}e^{iux+ivy}\,W_{|\phi\rangle\langle\psi|}(u,v)\,du\,dv,\qquad(7.3)$$
$x,y\in\mathbb R$, i.e., $W_{|\phi\rangle\langle\psi|}(u,v)$ represents the Wigner density of $(P/2,-Q)$ in the state $|\phi\rangle\langle\psi|$, and we can also write
$$\int_{\mathbb R^2}W_{|\phi\rangle\langle\psi|}(u,v)\,\varphi(u,v)\,du\,dv=\langle\phi,O(\varphi)\psi\rangle_h$$

for all Schwartz functions $\varphi$, or
$$\int_{\mathbb R^2}W_{|\phi\rangle\langle\psi|}(u,v)\,\varphi(u,v)\,du\,dv=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\;\varphi(u,v)\,du\,dv,$$
and
$$\begin{aligned}
\int_{\mathbb R^2}W_{|\phi\rangle\langle\psi|}(u,v)\,e^{iau+ibv}\,du\,dv&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\;e^{iau+ibv}\,du\,dv\\
&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}\int_{\mathbb R^2}e^{-i(x-a)u-i(y-b)v}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\,du\,dv\\
&=\langle\psi,e^{iaP/2-ibQ}\phi\rangle_h,
\end{aligned}$$
where we applied (A.9), and this recovers (7.3). In particular we have
$$\int_{\mathbb R^2}uv\,W_{|\phi\rangle\langle\psi|}(u,v)\,du\,dv=-\Big\langle\phi,\ \frac{Q\frac P2+\frac P2Q}{2}\,\psi\Big\rangle_h.$$

Next, by identification of the terms in $xy^2$ in (7.3) we recover
$$\int_{\mathbb R^2}\frac{uv^2}{2!}\,W_{|\phi\rangle\langle\psi|}(u,v)\,du\,dv=\Big\langle\phi,\ \frac{1}{3!}\Big(\frac P2\,Q^2+Q\,\frac P2\,Q+Q^2\,\frac P2\Big)\psi\Big\rangle_h,$$
while the identification of the terms in $x^2y$ yields
$$\frac{1}{2!}\int_{\mathbb R^2}u^2v\,W_{|\phi\rangle\langle\psi|}(u,v)\,du\,dv=-\Big\langle\phi,\ \frac{1}{3!}\Big(\Big(\frac P2\Big)^2Q+\frac P2\,Q\,\frac P2+Q\Big(\frac P2\Big)^2\Big)\psi\Big\rangle_h.$$

Note that letting
$$P\phi(t)=-2i\phi'(t)\quad\text{and}\quad Q\phi(t)=t\phi(t),\qquad\phi\in S(\mathbb R),$$
also defines a representation of $(P,Q)$ on hw, based on $h=L^2(\mathbb R;\mathbb C,dx)$, with the involutions $P^*=P$ and $Q^*=Q$, as follows from the integration by parts
$$\begin{aligned}
\langle f,Pg\rangle_h&=\int_{-\infty}^\infty\bar f(t)\,Pg(t)\,dt=-2i\int_{-\infty}^\infty\bar f(t)\,g'(t)\,dt\\
&=2i\int_{-\infty}^\infty\bar f'(t)\,g(t)\,dt=\int_{-\infty}^\infty\overline{(-2if')(t)}\,g(t)\,dt=\langle Pf,g\rangle_h.
\end{aligned}$$
On the other hand, by the following version
$$\exp\Big(-ix\frac P2+iyQ\Big)=e^{-ixy/2}\exp(iyQ)\exp\Big(-ix\frac P2\Big)$$
of the splitting lemma on the Heisenberg–Weyl algebra hw, cf. Exercise 6.2, we have
$$\begin{aligned}
\exp\Big(-ix\frac P2+iyQ\Big)\psi(t)&=e^{-ixy/2}\exp(iyQ)\exp\Big(-ix\frac P2\Big)\psi(t)\\
&=e^{iyt-ixy/2}\sum_{n=0}^\infty(-1)^n\frac{x^n}{n!}\frac{\partial^n\psi}{\partial t^n}(t)\\
&=e^{iyt-ixy/2}\,\psi(t-x),\qquad(7.4)
\end{aligned}$$
$x,y,t\in\mathbb R$, $\psi\in S(\mathbb R)$. As a consequence, by (7.2) we can write
$$\begin{aligned}
W_{|\phi\rangle\langle\psi|}(u,v)&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\\
&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle e^{-ixP/2+iyQ}\psi,\phi\rangle_h\,dx\,dy\\
&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}\int_{-\infty}^\infty e^{-ixu-iyv}\,e^{iyt-ixy/2}\,\overline{\psi(t-x)}\,\phi(t)\,dt\,dx\,dy\\
&=\frac{1}{(2\pi)^2}\int_{\mathbb R^2}\int_{-\infty}^\infty e^{iy(-2v+2t-x)/2-ixu}\,\overline{\psi(t-x)}\,\phi(t)\,dt\,dx\,dy\\
&=\frac{2}{(2\pi)^2}\int_{\mathbb R^2}\int_{-\infty}^\infty e^{iy(-2v+2t-x)-ixu}\,\overline{\psi(t-x)}\,\phi(t)\,dt\,dx\,dy\\
&=\frac1\pi\int_{-\infty}^\infty e^{-2i(t-v)u}\,\overline{\psi(2v-t)}\,\phi(t)\,dt\\
&=\frac1\pi\int_{-\infty}^\infty e^{-2itu}\,\overline{\psi(v-t)}\,\phi(t+v)\,dt\\
&=\frac{1}{2\pi}\int_{-\infty}^\infty e^{-itu}\,\overline{\psi(v-t/2)}\,\phi(v+t/2)\,dt.
\end{aligned}$$

Consequently the second marginal is given by
$$\int_{-\infty}^\infty W_{|\phi\rangle\langle\psi|}(u,v)\,du=\frac{1}{2\pi}\int_{-\infty}^\infty\int_{-\infty}^\infty e^{-itu}\,\overline{\psi(v-t/2)}\,\phi(v+t/2)\,dt\,du=\phi(v)\,\overline{\psi(v)},\qquad v\in\mathbb R,$$
cf. (A.9). For the first marginal we have
$$\int_{-\infty}^\infty W_{|\phi\rangle\langle\psi|}(u,v)\,dv=(\mathcal F\phi)(u)\,\overline{(\mathcal F\psi)(u)},\qquad u\in\mathbb R,$$
since
$$\begin{aligned}
\mathcal F\big(e^{-ixP/2+iyQ}\phi\big)(t)&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{izt}\,e^{iyz-ixy/2}\,\phi(z-x)\,dz\\
&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{it(x+z)}\,e^{iy(x+z)-ixy/2}\,\phi(z)\,dz\\
&=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{i(x+z)t}\,e^{iyz+ixy/2}\,\phi(z)\,dz\\
&=\frac{1}{\sqrt{2\pi}}\,e^{ixy/2+ixt}\int_{-\infty}^\infty e^{iz(y+t)}\,\phi(z)\,dz\\
&=e^{ixy/2+ixt}\,\mathcal F\phi(t+y)\\
&=\exp\Big(ixQ+iy\frac P2\Big)\mathcal F\phi(t),
\end{aligned}$$
$x,y,t\in\mathbb R$, by the splitting lemma (7.4), which shows that
$$\begin{aligned}
(\mathcal F\phi)(v)\,\overline{(\mathcal F\psi)(v)}&=\int_{-\infty}^\infty W_{|\mathcal F\phi\rangle\langle\mathcal F\psi|}(u,v)\,du\\
&=\frac{1}{(2\pi)^2}\int_{-\infty}^\infty\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\mathcal F\psi,e^{ixP/2-iyQ}\mathcal F\phi\rangle_h\,dx\,dy\,du\\
&=\frac{1}{(2\pi)^2}\int_{-\infty}^\infty\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\mathcal F\psi,\mathcal Fe^{ixQ+iyP/2}\phi\rangle_h\,dx\,dy\,du\\
&=\frac{1}{(2\pi)^2}\int_{-\infty}^\infty\int_{\mathbb R^2}e^{-ixu-iyv}\,\langle\psi,e^{ixQ+iyP/2}\phi\rangle_h\,dx\,dy\,du\\
&=\frac{1}{(2\pi)^2}\int_{-\infty}^\infty\int_{\mathbb R^2}e^{-iyu-ixv}\,\langle\psi,e^{ixP/2+iyQ}\phi\rangle_h\,dx\,dy\,du\\
&=-\frac{1}{(2\pi)^2}\int_{-\infty}^\infty\int_{\mathbb R^2}e^{-ixv+iyu}\,\langle\psi,e^{ixP/2-iyQ}\phi\rangle_h\,dx\,dy\,du\\
&=-\int_{-\infty}^\infty W_{|\phi\rangle\langle\psi|}(v,-u)\,du\\
&=\int_{-\infty}^\infty W_{|\phi\rangle\langle\psi|}(v,u)\,du.
\end{aligned}$$

Figure 7.1 presents a graph of $W_{|\phi\rangle\langle\psi|}$ with
$$\phi(x)=\psi(x)=\frac{1}{\sqrt2}\,\big(H_0(x)+H_1(x)\big)\,e^{-x^2/4}=\frac{1}{\sqrt2}\,(1+x)\,e^{-x^2/4},$$
drawn with the QuTiP package, cf. [61], [62]. Figure 7.2 represents the colour map of the aforementioned Wigner function, also drawn using QuTiP.

Figure 7.1 Wigner function.

Figure 7.2 Wigner function colour map.
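The marginal relations above can also be checked numerically. The following sketch (plain NumPy rather than QuTiP; the grid sizes and the sample point are illustrative choices) evaluates $W_{|\phi\rangle\langle\phi|}$ for the function $\phi$ of Figure 7.1 and verifies that integrating out $u$ recovers $|\phi(v)|^2$:

```python
import numpy as np

# phi from Figure 7.1 (not normalised; the marginal identity holds regardless)
phi = lambda x: (1.0 + x) * np.exp(-x**2 / 4) / np.sqrt(2.0)

# W(u,v) = (1/2pi) * int e^{-itu} conj(phi(v - t/2)) phi(v + t/2) dt
t = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]
u = np.linspace(-20, 20, 2001)
du = u[1] - u[0]
v = 0.7  # an arbitrary sample point

f = np.conj(phi(v - t / 2)) * phi(v + t / 2)
W = np.array([np.sum(np.exp(-1j * t * uu) * f) * dt for uu in u]) / (2 * np.pi)

# second marginal: int W(u,v) du should equal |phi(v)|^2
marginal = np.sum(W).real * du
assert abs(marginal - abs(phi(v))**2) < 1e-6
```

Since $\phi$ and its product $\overline{\phi(v-t/2)}\,\phi(v+t/2)$ decay like Gaussians, the truncated Riemann sums converge rapidly and the assertion holds well within the stated tolerance.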



7.4 Functional calculus on real Lie algebras


In this section we extend the aforementioned functional calculus to more general Lie algebras via the approach of [7], and we also include some results and proofs not explicitly stated therein. For this we start by introducing Wigner functions for square-integrable representations of general Lie algebras.

Let $\langle\cdot,\cdot\rangle_{\mathcal G^*\!,\mathcal G}$ denote the pairing between the Lie algebra $\mathcal G$ and its dual $\mathcal G^*$. Let $G$ be a Lie group with Lie algebra $\mathcal G$ and let $U$ be a unitary representation of $G$ on some Hilbert space $h$ with inner product $\langle\cdot,\cdot\rangle_h$. We assume that $U$ is irreducible and square integrable, i.e., there exists a non-zero vector $\psi\in h$ such that
$$\int_G|\langle\psi,U(g)\psi\rangle_h|^2\,d\mu(g)<\infty,$$
where $\mu$ denotes the left Haar measure on $G$. The following theorem defines the Duflo–Moore operator $C$.

Theorem 7.4.1 ([34]) There exists a positive self-adjoint operator $C$ on $h$ such that
$$\int_G\langle\psi_1,U(g)\phi_1\rangle_h\,\overline{\langle\psi_2,U(g)\phi_2\rangle_h}\,d\mu(g)=\langle C\psi_2,C\psi_1\rangle_h\,\langle\phi_1,\phi_2\rangle_h.\qquad(7.5)$$
G
Moreover, the operator $C$ is the identity if and only if $G$ is unimodular, and $\mathrm{Dom}\,C^{-1}$ is dense in $h$. We assume the existence of an open subset $N_0$ of $\mathcal G$, symmetric around the origin, whose image $\exp(N_0)$ under $\exp:\mathcal G\to G$ is dense in $G$ with $\mu(G\setminus\exp(N_0))=0$. The image measure of $\mu$ on $N_0$ under $\exp^{-1}:\exp(N_0)\to N_0$ is called the Haar measure on $\mathcal G$, and we denote by $m(x)$ its density with respect to the Lebesgue measure $dx$ on $\mathcal G$.

Let now $\sigma(\xi)$ denote the density in the decomposition of the Lebesgue measure $d\xi$ on $\mathcal G^*$:
$$d\xi=dk(\lambda)\,\sigma(\xi)\,d\lambda(\xi),$$
where $dk(\lambda)$ is a measure on the parameter space of the co-adjoint orbits in $\mathcal G^*$ and $d\lambda(\xi)$ is the invariant measure on the orbit $\mathcal O_\lambda^*$. Let also $(X_1,\ldots,X_n)$, resp. $(X_1^*,\ldots,X_n^*)$, denote a basis of $\mathcal G$, resp. $\mathcal G^*$.

Definition 7.4.2 ([7]) Given $(\phi,\psi)\in h\times\mathrm{Dom}\,C^{-1}$, the Wigner function $W_{|\phi\rangle\langle\psi|}$ is defined as
$$W_{|\phi\rangle\langle\psi|}(\xi)=\frac{\sqrt{\sigma(\xi)}}{(2\pi)^{n/2}}\int_{N_0}e^{-i\langle\xi,x\rangle_{\mathcal G^*\!,\mathcal G}}\,\overline{\langle U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\psi,\phi\rangle_h}\,\sqrt{m(x)}\,dx,\qquad\xi\in\mathcal G^*.$$

The following proposition extends the definition of $W_\rho$ in $L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))$ to $\rho\in B_2(h)$.

Proposition 7.4.3 ([7]) The mapping
$$h\times\mathrm{Dom}\,C^{-1}\ni(\phi,\psi)\longmapsto W_{|\phi\rangle\langle\psi|}\in L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))$$
extends to an isometry $\rho\mapsto W_\rho$ on $B_2(h)$:
$$\langle W_{\rho_1},W_{\rho_2}\rangle_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}=\langle\rho_1,\rho_2\rangle_{B_2(h)},\qquad\rho_1,\rho_2\in B_2(h).$$

Proof: By a density argument it suffices to consider
$$\rho_1=|\phi_1\rangle\langle\psi_1|\quad\text{and}\quad\rho_2=|\phi_2\rangle\langle\psi_2|,$$
with $(\phi_1,\psi_1),(\phi_2,\psi_2)\in h\times\mathrm{Dom}\,C^{-1}$. From the identity (7.5) and since
$$\frac{1}{(2\pi)^n}\int_{\mathcal G^*}\int_{-\infty}^\infty e^{i\langle\xi,\,x-x'\rangle_{\mathcal G^*\!,\mathcal G}}\,d\xi\,dx'=\delta_x(dx'),\qquad(7.6)$$
we have
$$\begin{aligned}
&\langle W_{\rho_1},W_{\rho_2}\rangle_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}\\
&=\frac{1}{(2\pi)^n}\int_{\mathcal G^*}\Big(\int_{N_0}e^{i\langle\xi,x\rangle_{\mathcal G^*\!,\mathcal G}}\,\mathrm{Tr}\big[U\big(e^{-(x_1X_1+\cdots+x_nX_n)}\big)\rho_1C^{-1}\big]\,\sqrt{m(x)}\,dx\Big)\\
&\qquad\times\Big(\int_{N_0}e^{-i\langle\xi,x'\rangle_{\mathcal G^*\!,\mathcal G}}\,\overline{\mathrm{Tr}\big[U\big(e^{-(x_1'X_1+\cdots+x_n'X_n)}\big)\rho_2C^{-1}\big]}\,\sqrt{m(x')}\,dx'\Big)\,d\xi\\
&=\int_{N_0}\mathrm{Tr}\big[U\big(e^{-(x_1X_1+\cdots+x_nX_n)}\big)\rho_1C^{-1}\big]\,\overline{\mathrm{Tr}\big[U\big(e^{-(x_1X_1+\cdots+x_nX_n)}\big)\rho_2C^{-1}\big]}\,m(x)\,dx\\
&=\int_{N_0}\langle U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\psi_1,\phi_1\rangle_h\,\overline{\langle U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\psi_2,\phi_2\rangle_h}\,m(x)\,dx\\
&=\int_G\langle U(g)C^{-1}\psi_1,\phi_1\rangle_h\,\overline{\langle U(g)C^{-1}\psi_2,\phi_2\rangle_h}\,d\mu(g)\\
&=\langle\psi_2,\psi_1\rangle_h\,\langle\phi_1,\phi_2\rangle_h\\
&=\langle\rho_1,\rho_2\rangle_{B_2(h)},
\end{aligned}$$

where we used the relation
$$\mathrm{Tr}\big(U(g)^*\rho C^{-1}\big)=\mathrm{Tr}\big(U(g)^*|\phi\rangle\langle\psi|C^{-1}\big)=\mathrm{Tr}\big(C^{-1}U(g)^*|\phi\rangle\langle\psi|\big)=\langle\psi,C^{-1}U(g)^*\phi\rangle_h=\langle U(g)C^{-1}\psi,\phi\rangle_h.\qquad\square$$

As a result, the definition of $W_\rho(\xi)$ extends to $\rho\in B_2(h)$ as
$$W_\rho(\xi)=\frac{\sqrt{\sigma(\xi)}}{(2\pi)^{n/2}}\int_{N_0}e^{-i\langle\xi,x\rangle_{\mathcal G^*\!,\mathcal G}}\,\overline{\mathrm{Tr}\big[U\big(e^{-(x_1X_1+\cdots+x_nX_n)}\big)\rho C^{-1}\big]}\,\sqrt{m(x)}\,dx,\qquad\xi\in\mathcal G^*.$$

Definition 7.4.4 Let $O:L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))\to B_2(h)$ denote the dual of $\rho\mapsto W_\rho$, i.e.,
$$\langle\rho|O(f)\rangle_{B_2(h)}=\int_{\mathcal G^*}\overline{W_\rho(\xi)}\,f(\xi)\,\frac{d\xi}{\sigma(\xi)},$$
$f\in L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))$, $\rho\in B_2(h)$.

Note that for $\rho=|\phi\rangle\langle\psi|$ we have
$$\langle\rho|O(f)\rangle_{B_2(h)}=\mathrm{Tr}\big[|\phi\rangle\langle\psi|^*O(f)\big]=\langle\phi|O(f)\psi\rangle_h=\langle W_{|\phi\rangle\langle\psi|},f\rangle_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}=\int_{\mathcal G^*}\overline{W_{|\phi\rangle\langle\psi|}(\xi)}\,f(\xi)\,\frac{d\xi}{\sigma(\xi)}.$$

The next proposition allows us to extend $O$ to a bounded operator from $L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))$ to $B_2(h)$.

Proposition 7.4.5 We have the bound
$$\|O(f)\|_{B_2(h)}\le\|f\|_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))},\qquad f\in L^2_{\mathbb C}\big(\mathcal G^*;d\xi/\sigma(\xi)\big),$$
and the expression
$$O(f)=\int_{N_0}\sqrt{m(x)}\,\mathcal F\Big(\frac{f}{\sqrt\sigma}\Big)(x)\,U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\,dx.$$

Proof: We have
$$|\langle O(f),\rho\rangle_{B_2(h)}|=|\langle f,W_\rho\rangle_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}|\le\|f\|_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}\,\|W_\rho\|_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}\le\|f\|_{L^2_{\mathbb C}(\mathcal G^*;d\xi/\sigma(\xi))}\,\|\rho\|_{B_2(h)},$$

and
$$\begin{aligned}
\langle\phi,O(f)\psi\rangle_h&=\mathrm{Tr}\big[|\phi\rangle\langle\psi|^*O(f)\big]=\int_{\mathcal G^*}\overline{W_{|\phi\rangle\langle\psi|}(\xi)}\,f(\xi)\,d\xi/\sigma(\xi)\\
&=\frac{1}{(2\pi)^{n/2}}\int_{\mathcal G^*}\int_{N_0}e^{i\langle\xi,x\rangle_{\mathcal G^*\!,\mathcal G}}\,\mathrm{Tr}\big[U\big(e^{-(x_1X_1+\cdots+x_nX_n)}\big)|\phi\rangle\langle\psi|C^{-1}\big]\,\sqrt{m(x)}\,dx\;f(\xi)\,\frac{d\xi}{\sqrt{\sigma(\xi)}}\\
&=\int_{N_0}\mathcal F\Big(\frac{f}{\sqrt\sigma}\Big)(x)\,\langle\phi|U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\psi\rangle_h\,\sqrt{m(x)}\,dx\\
&=\Big\langle\phi\,\Big|\,\int_{N_0}\sqrt{m(x)}\,\mathcal F\Big(\frac{f}{\sqrt\sigma}\Big)(x)\,U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1}\,dx\,\psi\Big\rangle_h.\qquad\square
\end{aligned}$$

In other words, we have
$$O\big(e^{-i\langle\cdot,x\rangle_{\mathcal G^*\!,\mathcal G}}\sqrt{\sigma(\cdot)}\big)=(2\pi)^{n/2}\,\sqrt{m(x)}\,U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1},\qquad(7.7)$$
and
$$O\big(f\sqrt\sigma\big)=\frac{1}{(2\pi)^{n/2}}\int_{N_0}O\big(e^{-i\langle\cdot,x\rangle_{\mathcal G^*\!,\mathcal G}}\sqrt\sigma\big)\,(\mathcal Ff)(x)\,dx,\qquad f\in L^2_{\mathbb C}(\mathcal G^*;d\xi).$$

7.5 Functional calculus on the affine algebra


In this section we study in detail the particular case of the affine algebra generated by
$$X_1=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad X_2=\begin{pmatrix}0&1\\0&0\end{pmatrix},$$
with $[X_1,X_2]=X_2$. The affine group can be constructed as the group of $2\times2$ matrices of the form
$$g=\begin{pmatrix}a&b\\0&1\end{pmatrix}=\begin{pmatrix}e^{x_1}&x_2\,e^{x_1/2}\,\mathrm{sinch}\,\frac{x_1}{2}\\0&1\end{pmatrix}=\exp(x_1X_1+x_2X_2),\qquad(7.8)$$
$a>0$, $b\in\mathbb R$, where
$$\mathrm{sinch}\,x=\frac{\sinh x}{x},\qquad x\in\mathbb R.$$
We will work with
$$X_1=-i\,\frac P2\quad\text{and}\quad X_2=i(Q+M),$$
which form a representation of the affine algebra, as
$$[X_1,X_2]=X_2.$$

Here, $N_0=\mathcal G$ is identified with $\mathbb R^2$, and
$$m(x_1,x_2)=e^{-x_1/2}\,\mathrm{sinch}\,\frac{x_1}{2},\qquad x_1,x_2\in\mathbb R;$$
moreover, from (92) in [7],
$$d_\pm(\xi_1,\xi_2)=\frac{1}{2\pi|\xi_2|}\,d\xi_1\,d\xi_2,$$
hence
$$\sigma(\xi_1,\xi_2)=2\pi|\xi_2|,\qquad\xi_1,\xi_2\in\mathbb R,$$
and the operator $C$ is given by
$$Cf(\tau)=\sqrt{\frac{2\pi}{|\tau|}}\,f(\tau),\qquad\tau\in\mathbb R.$$

In order to construct the Malliavin calculus on the affine algebra we will have to use the functional calculus presented in Section 11.1. Letting $B_2(h)$ denote the space of Hilbert–Schmidt operators on $h$, the results of Section 11.1 allow us to define a continuous map
$$O:L^2_{\mathbb C}\Big(\mathbb R^2,\ \frac{1}{2\pi|\xi_2|}\,d\xi_1\,d\xi_2\Big)\longrightarrow B_2(h)$$
as
$$O(f):=\int_{\mathbb R^2}(\mathcal Ff)(x_1,x_2)\,e^{-ix_1P/2+ix_2(Q+M)}\,dx_1\,dx_2.$$

Relation (7.7), i.e.,
$$O\big(e^{-i\langle\cdot,x\rangle_{\mathcal G^*\!,\mathcal G}}\sqrt{\sigma(\cdot)}\big)=(2\pi)^{n/2}\,\sqrt{m(x)}\,U\big(e^{x_1X_1+\cdots+x_nX_n}\big)C^{-1},$$
shows that
$$e^{-iuP/2+iv(Q+M)}=\frac{1}{\sqrt{2\pi}}\Big(e^{-u/2}\,\mathrm{sinch}\,\frac u2\Big)^{-1/2}\,O\big(e^{-iu\xi_1-iv\xi_2}\sqrt{|\xi_2|}\big)\,C.$$
The next proposition shows that these relations can be simplified, and that the Wigner function is directly related to the density of the couple $(P,Q+M)$, with the property (7.9).

Proposition 7.5.1 We have
$$O\big(e^{iu\xi_1+iv\xi_2}\big)=\exp\Big(iu\,\frac P2-iv(Q+M)\Big),\qquad(7.9)$$
$u,v\in\mathbb R$.

Proof: For all $\phi,\psi\in h$ we have
$$\begin{aligned}
&\langle\phi,e^{-iuP/2+iv(Q+M)}\psi\rangle_h\\
&=\frac{\big(e^{-u/2}\,\mathrm{sinch}\,\frac u2\big)^{-1/2}}{\sqrt{2\pi}}\,\big\langle\phi,O\big(e^{-iu\xi_1-iv\xi_2}\sqrt{|\xi_2|}\big)\,C\psi\big\rangle_h\\
&=\frac{\big(e^{-u/2}\,\mathrm{sinch}\,\frac u2\big)^{-1/2}}{\sqrt{2\pi}}\,\Big\langle W_{|\phi\rangle\langle C\psi|}(\xi_1,\xi_2),\ e^{-iu\xi_1-iv\xi_2}\sqrt{|\xi_2|}\Big\rangle_{L^2_{\mathbb C}\left(\mathcal G^*;\,\frac{d\xi_1d\xi_2}{2\pi|\xi_2|}\right)}\\
&=\frac{1}{2\pi}\int_{\mathbb R^3}e^{-iu\xi_1-iv\xi_2}\,\overline{\phi\Big(\frac{\xi_2e^{-x/2}}{\mathrm{sinch}\,\frac x2}\Big)}\,\frac{e^{ix\xi_1}}{\mathrm{sinch}\,\frac x2}\,\sqrt{\frac{e^{-x/2}\,\mathrm{sinch}\,\frac x2}{e^{-u/2}\,\mathrm{sinch}\,\frac u2}}\,\psi\Big(\frac{\xi_2e^{x/2}}{\mathrm{sinch}\,\frac x2}\Big)\,e^{-|\xi_2|\frac{\cosh\frac x2}{\mathrm{sinch}\,\frac x2}}\,\Big(\frac{|\xi_2|}{\mathrm{sinch}\,\frac x2}\Big)^{\beta-1}\frac{dx}{\Gamma(\beta)}\,d\xi_1\,d\xi_2\\
&=\frac{1}{2\pi}\int_{\mathbb R^3}e^{-iu\xi_1-iv\xi_2}\,\overline{\phi\Big(\frac{\xi_2e^{-x/2}}{\mathrm{sinch}\,\frac x2}\Big)}\,\frac{e^{ix\xi_1}}{\mathrm{sinch}\,\frac x2}\,\psi\Big(\frac{\xi_2e^{x/2}}{\mathrm{sinch}\,\frac x2}\Big)\,e^{-|\xi_2|\frac{\cosh\frac x2}{\mathrm{sinch}\,\frac x2}}\,\Big(\frac{|\xi_2|}{\mathrm{sinch}\,\frac x2}\Big)^{\beta-1}\frac{dx}{\Gamma(\beta)}\,d\xi_1\,d\xi_2\\
&=\Big\langle W_{|\phi\rangle\langle\psi|},\ e^{-iu\xi_1-iv\xi_2}\Big\rangle_{L^2_{\mathbb C}\left(\mathcal G^*;\,\frac{d\xi_1d\xi_2}{2\pi|\xi_2|}\right)}\\
&=\big\langle\phi,O\big(e^{-iu\xi_1-iv\xi_2}\big)\psi\big\rangle_h.\qquad\square
\end{aligned}$$

As a consequence of (7.9), the operator $O(f)$ has the natural expression
$$\begin{aligned}
O(f)&=O\Big(\int_{\mathbb R^2}(\mathcal Ff)(x_1,x_2)\,e^{-ix_1\xi_1-ix_2\xi_2}\,dx_1\,dx_2\Big)\\
&=\int_{\mathbb R^2}(\mathcal Ff)(x_1,x_2)\,O\big(e^{-ix_1\xi_1-ix_2\xi_2}\big)\,dx_1\,dx_2\\
&=\int_{\mathbb R^2}(\mathcal Ff)(x_1,x_2)\,e^{-ix_1P/2+ix_2(Q+M)}\,dx_1\,dx_2.
\end{aligned}$$
As a consequence of Proposition 7.4.5 we find the bound
$$\|O(f)\|_{B_2(h)}\le\|f\|_{L^2_{\mathbb C}\left(\mathcal G^*;\,\frac{d\xi_1d\xi_2}{2\pi|\xi_2|}\right)}.$$

This allows us to define the Wigner density $\widetilde W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)$ as the joint density of $(-P/2,Q+M)$,
$$\langle\psi|e^{iuP/2-iv(Q+M)}\phi\rangle_h=\int_{\mathbb R^2}e^{iu\xi_1+iv\xi_2}\,\widetilde W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,d\xi_1\,d\xi_2,$$
$\phi,\psi\in h$, and this density has continuous binomial and gamma laws as marginals. Using a noncommutative integration by parts formula, we will be able to prove the smoothness of the joint density of $(P,Q+M)$. We also have the relations
$$\langle\psi|O(f)\phi\rangle_h=\int_{\mathcal G^*}\overline{W_{|\psi\rangle\langle\phi|}(\xi_1,\xi_2)}\,f(\xi_1,\xi_2)\,\frac{d\xi_1\,d\xi_2}{2\pi|\xi_2|}=\int_{\mathcal G^*}W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,f(\xi_1,\xi_2)\,\frac{d\xi_1\,d\xi_2}{2\pi|\xi_2|},$$
and
$$\langle\psi|e^{iuP/2-iv(Q+M)}\phi\rangle_h=\frac{1}{2\pi}\int_{\mathcal G^*}e^{iu\xi_1+iv\xi_2}\,W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,\frac{d\xi_1\,d\xi_2}{|\xi_2|},$$

which show that the density $\widetilde W_{|\phi\rangle\langle\psi|}$ of $(P/2,-(Q+M))$ in the state $|\phi\rangle\langle\psi|$ has the expression
$$\begin{aligned}
\widetilde W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)&=\frac{1}{2\pi|\xi_2|}\,W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\qquad(7.10)\\
&=\frac{1}{2\pi}\int_{\mathbb R}\phi\Big(\frac{\xi_2e^{-x/2}}{\mathrm{sinch}\,\frac x2}\Big)\,\frac{e^{-ix\xi_1}}{\mathrm{sinch}\,\frac x2}\,\overline{\psi\Big(\frac{\xi_2e^{x/2}}{\mathrm{sinch}\,\frac x2}\Big)}\,e^{-|\xi_2|\frac{\cosh\frac x2}{\mathrm{sinch}\,\frac x2}}\,\Big(\frac{|\xi_2|}{\mathrm{sinch}\,\frac x2}\Big)^{\beta-1}\frac{dx}{\Gamma(\beta)},
\end{aligned}$$
as in Relation (102) of [7]. Note that $\widetilde W_{|\phi\rangle\langle\psi|}$ has the correct marginals: integrating (7.10) in $d\xi_1$ and using (7.6), we have
$$\frac{1}{2\pi|\xi_2|}\int_{\mathbb R}W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,d\xi_1=\gamma_\beta(|\xi_2|)\,\phi(\xi_2)\,\overline{\psi(\xi_2)},$$
and
$$\frac{1}{2\pi}\int_{\mathbb R}W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,\frac{d\xi_2}{|\xi_2|}=\frac{1}{2\pi}\int_{\mathbb R^2}e^{-i\xi_1x}\,\phi(\omega e^{x/2})\,\overline{\psi(\omega e^{-x/2})}\,e^{-|\omega|\cosh\frac x2}\,\frac{|\omega|^{\beta-1}}{\Gamma(\beta)}\,dx\,d\omega.$$

In the vacuum state, i.e., for $\phi=\psi=\Omega=1_{\mathbb R_+}$, we have
$$\frac{1}{2\pi}\int_{\mathbb R}W_{|\Omega\rangle\langle\Omega|}(\xi_1,\xi_2)\,\frac{d\xi_2}{|\xi_2|}=\frac{1}{2\pi}\int_{\mathbb R}e^{-i\xi_1x}\int_0^\infty\frac{\tau^{\beta-1}}{\Gamma(\beta)}\,e^{-\tau\cosh\frac x2}\,d\tau\,dx=\frac{1}{2\pi}\int_{\mathbb R}\frac{e^{-i\xi_1x}}{\big(\cosh\frac x2\big)^\beta}\,dx=c\,\Big|\Gamma\Big(\frac\beta2+i\xi_1\Big)\Big|^2,$$
where $c$ is a normalisation constant and $\Gamma$ is the gamma function. When $\beta=1$ we have $c=1/\pi$, and $P$ has the hyperbolic cosine density
$$\xi_1\longmapsto\frac{1}{2\cosh(\pi\xi_1/2)}$$
in the vacuum state $\Omega=1_{\mathbb R_+}$.
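As a quick numerical sanity check (an illustrative sketch, not part of [7]), the hyperbolic cosine density does integrate to one:

```python
import numpy as np

# density of P in the vacuum state for beta = 1
density = lambda x: 1.0 / (2.0 * np.cosh(np.pi * x / 2.0))

x = np.linspace(-60, 60, 600001)
total = np.sum(density(x)) * (x[1] - x[0])
assert abs(total - 1.0) < 1e-8
```

The tails decay like $e^{-\pi|x|/2}$, so truncating the integral at $|x|=60$ introduces an error far below the stated tolerance.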
Proposition 7.5.2 The characteristic function of $(P,Q+M)$ in the state $|\phi\rangle\langle\psi|$ is given by
$$\langle\psi,e^{iuP+iv(Q+M)}\phi\rangle_h=\int_{\mathbb R}e^{iv\omega\,\mathrm{sinch}\,u}\,\overline{\psi(\omega e^u)}\,\phi(\omega e^{-u})\,e^{-|\omega|\cosh u}\,\frac{|\omega|^{\beta-1}}{\Gamma(\beta)}\,d\omega.$$
In the vacuum state $\Omega=1_{\mathbb R_+}$ we find
$$\langle\Omega,e^{iuP+iv(Q+M)}\Omega\rangle_h=\frac{1}{(\cosh u-iv\,\mathrm{sinch}\,u)^\beta},\qquad u,v\in\mathbb R.$$

Proof: Using the modified representation $\hat U$ defined in (2.4) on $h=L^2_{\mathbb C}(\mathbb R,\gamma_\beta(|\tau|)\,d\tau)$, we have
$$\begin{aligned}
\langle\psi,e^{-iuP/2+iv(Q+M)}\phi\rangle_h&=\Big\langle\psi,\hat U\Big(e^u,\ ve^{u/2}\,\mathrm{sinch}\,\frac u2\Big)\phi\Big\rangle_h\\
&=\int_{\mathbb R}\overline{\psi(\tau)}\,\phi(\tau e^u)\,e^{iv\tau e^{u/2}\mathrm{sinch}\frac u2}\,e^{-(e^u-1)|\tau|/2}\,e^{\beta u/2}\,\frac{|\tau|^{\beta-1}}{\Gamma(\beta)}\,e^{-|\tau|}\,d\tau\\
&=\int_{\mathbb R}e^{iv\omega\,\mathrm{sinch}\frac u2}\,\overline{\psi(\omega e^{-u/2})}\,\phi(\omega e^{u/2})\,e^{-|\omega|\cosh\frac u2}\,\frac{|\omega|^{\beta-1}}{\Gamma(\beta)}\,d\omega.
\end{aligned}$$
In the vacuum state $|\Omega\rangle\langle\Omega|$ we have
$$\langle\Omega,e^{-iuP/2+iv(Q+M)}\Omega\rangle_h=\int_0^\infty e^{iv\omega\,\mathrm{sinch}\frac u2-\omega\cosh\frac u2}\,\frac{\omega^{\beta-1}}{\Gamma(\beta)}\,d\omega=\frac{1}{\big(\cosh\frac u2-iv\,\mathrm{sinch}\,\frac u2\big)^\beta}.\qquad\square$$

In particular, we have
$$\langle\psi,e^{iv(Q+M)}\phi\rangle_h=\frac{1}{\Gamma(\beta)}\int_{\mathbb R}e^{iv\omega}\,\overline{\psi(\omega)}\,\phi(\omega)\,e^{-|\omega|}\,|\omega|^{\beta-1}\,d\omega,$$
hence, as expected, $Q+M$ has density $\overline{\psi(\omega)}\,\phi(\omega)\,\gamma_\beta(|\omega|)$, in particular a gamma law in the vacuum state. On the other hand we have
$$\langle\psi,e^{iuP}\phi\rangle_h=\frac{1}{\Gamma(\beta)}\int_{\mathbb R}\overline{\psi(\omega e^u)}\,\phi(\omega e^{-u})\,e^{-|\omega|\cosh u}\,|\omega|^{\beta-1}\,d\omega,$$
which recovers the density of $P$:
$$\xi_1\longmapsto\frac{1}{2\pi\,\Gamma(\beta)}\int_{\mathbb R^2}e^{-i\xi_1x}\,\overline{\psi(\omega e^x)}\,\phi(\omega e^{-x})\,e^{-|\omega|\cosh x}\,|\omega|^{\beta-1}\,dx\,d\omega.$$
In the vacuum state we find
$$\langle\Omega,e^{iuP}\Omega\rangle_h=\frac{1}{(\cosh u)^\beta},\qquad u\in\mathbb R.$$
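The vacuum characteristic function of Proposition 7.5.2 reduces to a gamma-type integral, $\int_0^\infty e^{(iv\,\mathrm{sinch}\,u-\cosh u)\omega}\,\omega^{\beta-1}\,d\omega=\Gamma(\beta)\,(\cosh u-iv\,\mathrm{sinch}\,u)^{-\beta}$, which can be confirmed numerically. A short sketch with illustrative parameter values:

```python
import numpy as np
from math import gamma, cosh, sinh

beta, u, v = 1.5, 0.3, 0.8
sinch_u = sinh(u) / u

# left-hand side: midpoint rule on (0, 60] for the gamma-type integral
omega = (np.arange(60000) + 0.5) * 1e-3
integrand = np.exp((1j * v * sinch_u - cosh(u)) * omega) * omega**(beta - 1)
lhs = np.sum(integrand) * 1e-3 / gamma(beta)

# right-hand side: closed form (principal branch of the complex power)
rhs = (cosh(u) - 1j * v * sinch_u) ** (-beta)
assert abs(lhs - rhs) < 1e-4
```

The real part of the exponent is $-\cosh u<0$, so the truncated integral converges, and the principal-branch power applies since $\cosh u>0$.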

7.6 Wigner functions on so(3)


In this section we consider Wigner functions on the Lie algebra $su(2)\cong so(3)$. We consider the Lie group $SU(2)$ of unitary $2\times2$ matrices with determinant one. This example illustrates the importance of the choice of the set $N_0$ in the definition of the Wigner function given by [7]. We use the notation and results of Section 4.5.

A basis of the Lie algebra $g=su(2)\cong so(3)$ of $SU(2)$ is given by $X_1,X_2,X_3$ with the relations
$$[X_1,X_2]=X_3\quad\text{and cyclic permutations},$$
and $X_j^*=-X_j$. For the adjoint action $\mathrm{ad}:g\to\mathrm{Lin}(g)$, $\mathrm{ad}(X)Z=[X,Z]$, we have
$$\mathrm{ad}(X_1)=\begin{pmatrix}0&0&0\\0&0&-1\\0&1&0\end{pmatrix},\qquad\mathrm{ad}(X_2)=\begin{pmatrix}0&0&1\\0&0&0\\-1&0&0\end{pmatrix},\qquad\mathrm{ad}(X_3)=\begin{pmatrix}0&-1&0\\1&0&0\\0&0&0\end{pmatrix},$$
or
$$\mathrm{ad}(X)=\begin{pmatrix}0&-x_3&x_2\\x_3&0&-x_1\\-x_2&x_1&0\end{pmatrix}$$
for $X=x_1X_1+x_2X_2+x_3X_3$. The dual adjoint action
$$\mathrm{ad}^*:g\longrightarrow\mathrm{Lin}(g^*)$$
is given by
$$\mathrm{ad}^*(X)=-\mathrm{ad}(X)^T=\begin{pmatrix}0&-x_3&x_2\\x_3&0&-x_1\\-x_2&x_1&0\end{pmatrix}$$
if we use the dual basis $e_1,e_2,e_3$,
$$\langle e_j,X_k\rangle=\delta_{jk},$$
for $g^*$, and similarly for $\mathrm{ad}^*(X_1)$, $\mathrm{ad}^*(X_2)$, $\mathrm{ad}^*(X_3)$. By exponentiation we get
$$\mathrm{Ad}^*(e^{tX_1})=e^{t\,\mathrm{ad}(X_1)}=\begin{pmatrix}1&0&0\\0&\cos t&-\sin t\\0&\sin t&\cos t\end{pmatrix},\qquad\mathrm{Ad}^*(e^{tX_2})=e^{t\,\mathrm{ad}(X_2)}=\begin{pmatrix}\cos t&0&\sin t\\0&1&0\\-\sin t&0&\cos t\end{pmatrix},\qquad\mathrm{Ad}^*(e^{tX_3})=e^{t\,\mathrm{ad}(X_3)}=\begin{pmatrix}\cos t&-\sin t&0\\\sin t&\cos t&0\\0&0&1\end{pmatrix},$$
and we see that $g$ acts on its dual as rotations. The orbits are therefore the spheres
$$O_r=\{\xi=\xi_1e_1+\xi_2e_2+\xi_3e_3:\xi_1^2+\xi_2^2+\xi_3^2=r^2\},\qquad r\ge0.$$
The invariant measure on these orbits is just the uniform distribution, in polar coordinates $\sin\vartheta\,d\vartheta\,d\varphi$. The Lebesgue measure on $g^*$ can be written as
$$r^2\,dr\,\sin\vartheta_r\,d\vartheta_r\,d\varphi_r,$$
so that we get $\sigma_r(\vartheta_r,\varphi_r)=1$, cf. [7, Equation (19)]. The transfer of the Haar measure $\mu$ on the group,
$$d\mu(e^X)=m(X)\,dX,$$
gives
$$m(X)=\Big|\det\Big(\sum_{n=0}^\infty\frac{\mathrm{ad}(X)^n}{(n+1)!}\Big)\Big|,$$
see Equation (27) in [8]. Since $\mathrm{ad}(X)$ is normal and has simple eigenvalues $\pm i\sqrt{x_1^2+x_2^2+x_3^2}$ and $0$, we can use the spectral decomposition to get
$$m(X)=\frac{2-2\cos\sqrt{x_1^2+x_2^2+x_3^2}}{x_1^2+x_2^2+x_3^2}=4\,\frac{\sin^2(t/2)}{t^2},$$
where $t=\sqrt{x_1^2+x_2^2+x_3^2}$. For $N_0$ we take the ball
$$N_0=\big\{X=(x_1,x_2,x_3):\sqrt{x_1^2+x_2^2+x_3^2}\le2\pi\big\}.$$

7.6.1 Group-theoretical Wigner function


If $U:SU(2)\to\mathcal U(h)$ is a unitary representation of $SU(2)$ on some Hilbert space $h$ and $\rho:B(h)\to\mathbb C$ a state, then the associated Wigner function is given by
$$W_\rho(\xi)=\frac{2}{(2\pi)^{3/2}}\int_{N_0}e^{-i\langle\xi,X\rangle}\,\rho\big(U(e^X)\big)\,\frac{\sin\big(\sqrt{x_1^2+x_2^2+x_3^2}\,/2\big)}{\sqrt{x_1^2+x_2^2+x_3^2}}\,dX,$$
see [7, Equation (48)]. We compute this for the irreducible $(n+1)$-dimensional representations $D_{n/2}=\mathrm{span}\{e_n,e_{n-2},\ldots,e_{-n}\}$, given by
$$U(X_+)e_k=\begin{cases}0,&\text{if }k=n,\\[2pt]\dfrac i2\sqrt{(n-k)(n+k+2)}\,e_{k+2},&\text{else},\end{cases}\qquad U(X_-)e_k=\begin{cases}0,&\text{if }k=-n,\\[2pt]\dfrac i2\sqrt{(n+k)(n-k+2)}\,e_{k-2},&\text{else},\end{cases}\qquad U(X_3)e_k=\frac{ik}2\,e_k,$$

in terms of $X_+=X_1+iX_2$, $X_-=X_1-iX_2$, and $X_3$, cf. Equation (2.5), and $\rho=(n+1)^{-1}\mathrm{tr}$. The operator $U(X)$ has the eigenvalues
$$-\frac{in}2\sqrt{x_1^2+x_2^2+x_3^2},\ -\frac{i(n-2)}2\sqrt{x_1^2+x_2^2+x_3^2},\ \ldots,\ \frac{in}2\sqrt{x_1^2+x_2^2+x_3^2},$$
which yields
$$W_\rho(\xi)=\frac{2}{(2n+1)(2\pi)^{3/2}}\int_{N_0}e^{-i\langle\xi,X\rangle}\sum_{k=-n/2}^{n/2}e^{ik\sqrt{x_1^2+x_2^2+x_3^2}}\,\frac{\sin\big(\sqrt{x_1^2+x_2^2+x_3^2}\,/2\big)}{\sqrt{x_1^2+x_2^2+x_3^2}}\,dX=\frac{2}{(2n+1)(2\pi)^{3/2}}\sum_{k=-n/2}^{n/2}\int_0^{2\pi}\!\!\int_0^\pi\!\!\int_0^{2\pi}e^{-i\langle\xi,X\rangle}\,e^{ikt}\,t\sin(t/2)\sin\theta\,dt\,d\theta\,d\phi$$
by using polar coordinates on $N_0$. Since the integrand is rotationally invariant in the variable $X$, the Wigner function will again be rotationally invariant and it is sufficient to consider $\xi=(0,0,z)$:
$$\begin{aligned}
W_\rho(0,0,z)&=\frac{2}{(2n+1)(2\pi)^{3/2}}\sum_{k=-n/2}^{n/2}\int_0^{2\pi}\!\!\int_0^\pi\!\!\int_0^{2\pi}e^{-itz\cos\theta}\,e^{ikt}\,t\sin(t/2)\sin\theta\,dt\,d\theta\,d\phi\\
&=\frac{2}{(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_0^{2\pi}\!\!\int_0^\pi e^{-itz\cos\theta}\,e^{ikt}\,t\sin(t/2)\sin\theta\,dt\,d\theta\\
&=\frac{4}{z(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_0^{2\pi}e^{ikt}\sin(zt)\sin(t/2)\,dt\\
&=-\frac{1}{z(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_0^{2\pi}\big(e^{it(k+z+1/2)}-e^{it(k-z+1/2)}-e^{it(k+z-1/2)}+e^{it(k-z-1/2)}\big)\,dt\\
&=\frac{2(e^{2\pi iz}-1)}{iz(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}e^{2\pi i(k-1/2)}\Big(\frac{e^{2\pi iz}}{2k+2z+1}-\frac{e^{-2\pi iz}}{2k-2z-1}\Big)\\
&=\frac{4(e^{2\pi iz}-1)}{z(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}e^{2\pi i(k-1/2)}\,\frac{2k\sin(2\pi z)+(2z+1)i\cos(2\pi z)}{4k^2-(2z+1)^2},
\end{aligned}$$
cf. also [12, 28]. Note that even for the trivial representation, with $n=0$ and $U(e^X)=1$, this does not give the Dirac measure at the origin.

7.6.2 Probabilistic Wigner density


We define the function $W_\rho^{\mathrm{pr}}$ as
$$W_\rho^{\mathrm{pr}}(\xi)=\frac{1}{(2\pi)^{3/2}}\int_ge^{-i\langle\xi,X\rangle}\,\rho\big(U(e^X)\big)\,dX,$$
and this will always have the right marginals. It is again rotationally invariant for the representation $D_{n/2}$ and the state $\rho=(2n+1)^{-1}\mathrm{tr}$, and we get
$$W_\rho^{\mathrm{pr}}(\xi_1,\xi_2,\xi_3)=\frac{1}{(2n+1)(2\pi)^{3/2}}\int_ge^{-i\langle\xi,X\rangle}\sum_{k=-n/2}^{n/2}e^{ik\|X\|}\,dX.$$

Using polar coordinates $(t,\theta,\phi)$ in $g$, with the north pole in direction $\xi=(\xi_1,\xi_2,\xi_3)$, we get
$$\begin{aligned}
W_\rho^{\mathrm{pr}}(\xi)&=\frac{1}{(2n+1)(2\pi)^{3/2}}\sum_{k=-n/2}^{n/2}\int_0^\infty\!\!\int_0^\pi\!\!\int_0^{2\pi}e^{-it\|\xi\|\cos\theta}\,e^{ikt}\,t^2\sin\theta\,dt\,d\theta\,d\phi\\
&=\frac{1}{(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_0^\infty\!\!\int_0^\pi e^{-it\|\xi\|\cos\theta}\,e^{ikt}\,t^2\sin\theta\,dt\,d\theta\\
&=\frac{1}{\|\xi\|(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_0^\infty t\sin(t\|\xi\|)\,e^{ikt}\,dt\\
&=\frac{1}{2i\|\xi\|(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\int_{-\infty}^\infty t\,e^{it\|\xi\|}\cos(kt)\,dt,
\end{aligned}$$
so that we get
$$W_\rho^{\mathrm{pr}}(\xi)=\frac{1}{2\|\xi\|(2n+1)(2\pi)^{1/2}}\sum_{k=-n/2}^{n/2}\delta_{\|\xi\|-k},$$
which is clearly not a measure, but a distribution. However, there should be a better description of $W_\rho^{\mathrm{pr}}$. Since
$$\begin{aligned}
\int_{\|X\|\le R}e^{-i\langle\xi,X\rangle}\,\frac{-\|\xi\|^2}{\|X\|}\,dX&=\int_0^R\!\!\int_0^\pi\!\!\int_0^{2\pi}e^{-i\|\xi\|r\cos\theta}\,\frac{-\|\xi\|^2}{r}\,r^2\sin\theta\,dr\,d\theta\,d\phi\\
&=-2\pi\|\xi\|^2\int_0^R\!\!\int_0^\pi e^{-i\|\xi\|r\cos\theta}\,r\sin\theta\,dr\,d\theta\\
&=-2\pi\|\xi\|^2\int_0^R\!\!\int_{-1}^1e^{i\|\xi\|rz}\,r\,dz\,dr\\
&=-2\pi\|\xi\|^2\int_0^R\frac{2\sin(r\|\xi\|)}{r\|\xi\|}\,r\,dr\\
&=-4\pi\|\xi\|\int_0^R\sin(r\|\xi\|)\,dr\\
&=4\pi\big(\cos(R\|\xi\|)-1\big),
\end{aligned}$$
the Fourier transform of the distribution
$$f\longmapsto\int_{\|X\|\le R}\frac{\Delta f}{\|X\|}\,dX$$
is almost what we are looking for. E.g., for the spin-$\frac12$ representation we have
$$\rho\big(U(e^X)\big)=\cos(\|X\|/2),$$
and therefore the associated "probabilistic" Wigner distribution is the distribution
$$W_\rho^{\mathrm{pr}}:f\longmapsto\frac{1}{4\pi}\int_{\|X\|\le1/2}\frac{\Delta f}{\|X\|}\,dX+f(0).$$
Assuming that $f$ depends only on $z$, we can check that we get the right marginals, as
$$\begin{aligned}
\frac{1}{4\pi}\int_{\|X\|\le R}\frac{\Delta f}{\|X\|}\,dX+f(0)&=\frac{1}{4\pi}\int_0^R\!\!\int_0^\pi\!\!\int_0^{2\pi}\frac{f''(r\cos\theta)}{r}\,r^2\sin\theta\,dr\,d\theta\,d\phi+f(0)\\
&=\frac12\int_0^R\!\!\int_0^\pi f''(r\cos\theta)\,r\sin\theta\,dr\,d\theta+f(0)\\
&=\frac12\int_0^R\!\!\int_{-r}^rf''(z)\,dz\,dr+f(0)\\
&=\frac12\int_0^R\big(f'(r)-f'(-r)\big)\,dr+f(0)\\
&=\frac12\big(f(R)-f(0)+f(-R)-f(0)\big)+f(0)\\
&=\frac12\big(f(R)+f(-R)\big),
\end{aligned}$$
which gives the correct marginal distribution $\frac12(\delta_{-1/2}+\delta_{1/2})$ if we set $R=1/2$. Since $\Delta(fg)=f\Delta g+2\nabla f\cdot\nabla g+g\Delta f$, and therefore
$$\Delta\Big(\frac fr\Big)=\frac{\Delta f}{r}-\frac{2}{r^2}\frac{\partial f}{\partial r},$$

we can rewrite $W_\rho^{\mathrm{pr}}$ also as
$$W_\rho^{\mathrm{pr}}(f)=f(0)+\frac{1}{4\pi}\int_{\|x\|\le1/2}\Big(\Delta\Big(\frac fr\Big)+\frac{2}{r^2}\frac{\partial f}{\partial r}\Big)\,dX.$$
Using Gauss' integral theorem, we can transform the first part of the integral into a surface integral,
$$W_\rho^{\mathrm{pr}}(f)=f(0)+\frac{1}{4\pi}\int_{\|x\|=1/2}\nabla\Big(\frac fr\Big)\cdot d\vec n+\frac{1}{2\pi}\int_{\|x\|\le1/2}\frac{1}{r^2}\frac{\partial f}{\partial r}\,dX.$$
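The marginal computation above can be verified numerically for a concrete $f$ depending only on $z$: interchanging the order of integration gives $\frac12\int_0^R\int_{-r}^rf''(z)\,dz\,dr=\frac12\int_{-R}^Rf''(z)(R-|z|)\,dz$, and the following sketch (with an arbitrary test function) compares this with $\frac12(f(R)+f(-R))$:

```python
import numpy as np

R = 0.5
f   = lambda z: np.cos(2 * z) + z**3
fpp = lambda z: -4 * np.cos(2 * z) + 6 * z   # second derivative of f

z = np.linspace(-R, R, 200001)
dz = z[1] - z[0]
# (1/2) int_{-R}^{R} f''(z) (R - |z|) dz + f(0)
lhs = 0.5 * np.sum(fpp(z) * (R - np.abs(z))) * dz + f(0)
rhs = 0.5 * (f(R) + f(-R))
assert abs(lhs - rhs) < 1e-8
```

The odd part of $f$ (here $z^3$) drops out of both sides, exactly as expected for the symmetric marginal $\frac12(\delta_{-1/2}+\delta_{1/2})$.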

7.7 Some applications


7.7.1 Quantum optics
We refer to [48] for details on the material presented in this section. In optics one may consider the Maxwell equations for an electric field of the form
$$E(x,t)=C_E\,q(t)\sin(kx),$$
polarised along the $z$-axis, where $C_E$ is a normalisation constant, and the magnetic field
$$B(x,t)=C_B\,q'(t)\cos(kx),$$
polarised along the $y$-axis, where $C_B$ is a normalisation constant.

The classical field energy at time $t$ is given by
$$\frac12\int\big(E^2(x,t)+B^2(x,t)\big)\,dx,$$
and this system is quantised using the operators $(Q,P)$ and the harmonic oscillator Hamiltonian
$$H=\frac12\big(P^2+\omega^2Q^2\big),$$
where $\omega=kc$ is the frequency and $c$ is the speed of light. Here the operator $N=a^+a^-$ is called the photon number operator.

Each number eigenstate $e_n$ is mapped to the wave function
$$x\longmapsto(2^nn!)^{-1/2}\,e^{-x^2/2}\,H_n(x),\qquad x\in\mathbb R,$$

while the coherent state with parameter $\alpha$ corresponds to the wave function
$$\begin{aligned}
x&\longmapsto e^{-|\alpha|^2/2}\,e^{-x^2/2}\sum_{n=0}^\infty(2^nn!)^{-1/2}\,\frac{\alpha^n}{\sqrt{n!}}\,H_n(x)\\
&=e^{-|\alpha|^2/2}\,e^{-x^2/2}\sum_{n=0}^\infty\frac{\alpha^n}{2^{n/2}\,n!}\,H_n(x)\\
&=e^{-|\alpha|^2/2}\,e^{-x^2/2}\,e^{x\alpha/\sqrt2-\alpha^2/4}\\
&=\exp\Big(-\frac{|\alpha|^2}2-\frac12\Big(x-\frac{\alpha}{\sqrt2}\Big)^2\Big).
\end{aligned}$$
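The closed form above follows from the generating function of the (probabilists') Hermite polynomials, $\sum_nH_n(x)t^n/n!=e^{xt-t^2/2}$ with $t=\alpha/\sqrt2$. A numerical sketch of this identity, using NumPy's HermiteE (probabilists') polynomials with illustrative values of $\alpha$ and $x$:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, exp

alpha, x = 0.8, 1.3
coeffs = [alpha**n / (2**(n / 2) * factorial(n)) for n in range(40)]
series = He.hermeval(x, coeffs)    # sum_n alpha^n H_n(x) / (2^{n/2} n!)
closed = exp(x * alpha / sqrt(2) - alpha**2 / 4)
assert abs(series - closed) < 1e-10
```

Forty terms are far more than needed; the series converges factorially for any fixed $\alpha$ and $x$.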
The Wigner phase-space (quasi-)probability density function in the quasi-state $|\phi\rangle\langle\psi|$ is then given by
$$W_{|\phi\rangle\langle\psi|}(x,y)=\frac{1}{2\pi}\int_{-\infty}^\infty\bar\phi(x-t)\,\psi(x+t)\,e^{iyt}\,dt,$$
while the probability density in a pure state $|\phi\rangle\langle\phi|$ is
$$W_{|\phi\rangle\langle\phi|}(x,y)=\frac{1}{2\pi}\int_{-\infty}^\infty\bar\phi(x-t)\,\phi(x+t)\,e^{iyt}\,dt.$$

7.7.2 Time-frequency analysis


The aim of time-frequency analysis is to understand signals with changing
frequencies. Say we have recorded a signal, e.g., an opera by Mozart or the
noise of a gearbox, which can be represented as a function f : R −→ C that
depends on time. Looking at the square of the modulus | f (t)|2 , we can see at
what time the signal was strong or noisy, but it is more difficult to determine
the frequency. An analysis of the frequencies present in the signal can be done
by passing to the Fourier transform,
 +∞
1
f̃ (ω) = √ f (t)eitω dt.
2π −∞
The square of the modulus $|\tilde f(\omega)|^2$ of the Fourier transform gives us a good idea to what extent the frequency $\omega$ is present in the signal. But it will not tell us at what time the frequency $\omega$ was played; the important information about how the frequency changed with time is not visible in $|\tilde f(\omega)|^2$: e.g., if we play Mozart's opera backwards we still get the same function $|\tilde f(\omega)|^2$.

On the other hand, although the Wigner function
$$W_f(\omega,t)=\frac1\pi\int_{-\infty}^\infty\overline{f(t+s)}\,f(t-s)\,e^{2is\omega}\,ds,$$

cf. Section 7.4, does not have a clear interpretation as a joint probability dis-
tribution since it can take negative values, it can give approximate information
on which frequency was present in the signal at what time. The use of Wigner
functions in time-frequency analysis, where they are often called Wigner–Ville
functions after Ville [119], is carefully explained in Cohen’s book [29]. In [85],
Wigner functions are used to analyse the sound of a gearbox in order to predict
when it will break down.
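For a Gaussian signal the Wigner–Ville function has a closed form that makes a convenient test: with $f(t)=e^{-t^2/2}$ one finds $W_f(\omega,t)=e^{-t^2-\omega^2}/\sqrt\pi$. A numerical sketch (plain NumPy, with an arbitrary sample point in the time-frequency plane):

```python
import numpy as np

f = lambda t: np.exp(-t**2 / 2)

t0, w0 = 0.4, -0.9   # sample point (time, frequency)
s = np.linspace(-15, 15, 300001)
ds = s[1] - s[0]
integrand = np.conj(f(t0 + s)) * f(t0 - s) * np.exp(2j * s * w0)
W = np.sum(integrand).real * ds / np.pi
assert abs(W - np.exp(-t0**2 - w0**2) / np.sqrt(np.pi)) < 1e-8
```

For this stationary Gaussian the Wigner function is a positive bump centred at the origin of the time-frequency plane; for a chirp it would instead concentrate along the instantaneous-frequency line.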

Note
We also refer the reader to [3] for more background on Wigner functions and
their use in quantum optics.

Exercises
Exercise 7.1 Quantum optics.
1. Compute the distribution of the photon number operator $N=a^+a^-$ in the coherent state
$$e^{-|\alpha|^2/2}\sum_{n=0}^\infty\frac{\alpha^n}{\sqrt{n!}}\,e_n.$$
2. Show that in the quasi-state $|\phi\rangle\langle\phi|$ with
$$\phi(z)=\frac{1}{(2\pi)^{1/4}}\,e^{-z^2/4},$$
the Wigner phase-space (quasi-)probability density function is a standard two-dimensional joint Gaussian density.
8

Lévy processes on real Lie algebras

I really have long been of the mind that the quantity of noise that
anyone can carefreely tolerate is inversely proportional to his mental
powers, and can therefore be considered as an approximate measure
of the same.
(A. Schopenhauer, in The World as Will and Representation.)
In this chapter we present the definition and basic theory of Lévy processes on real Lie algebras, with several examples. We use the theories of factorisable current representations of Lie algebras and of Lévy processes on ∗-bialgebras to provide an elegant and efficient formalism for defining and studying quantum stochastic calculi with respect to additive operator processes satisfying Lie algebraic relations. The theory of Lévy processes on ∗-bialgebras can also handle processes whose increments are not simply additive but are composed by more complicated formulas; the main restriction is that they are independent (in the tensor sense).

8.1 Definition
Lévy processes, i.e., stochastic processes with independent and stationary
increments, are used as models for random fluctuations, in physics, finance,
etc. In quantum physics so-called quantum noises or quantum Lévy processes
occur, e.g., in the description of quantum systems coupled to a heat bath
[47] or in the theory of continuous measurement [53]. Motivated by a model
introduced for lasers [122], Schürmann et al. [2, 106] have developed the
theory of Lévy processes on involutive bialgebras. This theory generalises,
in a sense, the theory of factorisable representations of current groups and
current algebras as well as the theory of classical Lévy processes with values


in Euclidean space or, more generally, semigroups. Note that many interesting
classical stochastic processes arise as components of these quantum Lévy
processes, cf. [1, 18, 42, 105].
Let $D$ be a complex pre-Hilbert space with inner product $\langle\cdot,\cdot\rangle$. We denote by $L(D)$ the algebra of linear operators on $D$ having an adjoint defined everywhere on $D$, i.e.,
$$L(D):=\big\{A:D\to D\ \text{linear}:\exists\,A^*:D\to D\ \text{linear such that}\ \langle x,Ay\rangle=\langle A^*x,y\rangle\ \text{for all}\ x,y\in D\big\}.\qquad(8.1)$$
By $L_{AH}(D)$ we mean the anti-Hermitian linear operators on $D$, i.e.,
$$L_{AH}(D)=\{A:D\to D\ \text{linear}:\langle x,Ay\rangle=-\langle Ax,y\rangle\ \text{for all}\ x,y\in D\}.$$
In the sequel, $g$ denotes a Lie algebra over $\mathbb R$, $D$ is a complex pre-Hilbert space, and $\Omega\in D$ is a unit vector.

Definition 8.1.1 Any family
$$\big(j_{st}:g\to L_{AH}(D)\big)_{0\le s\le t}$$
of representations of $g$ is called a Lévy process on $g$ over $D$ (with respect to the unit vector $\Omega\in D$) provided the following conditions are satisfied:
i) (Increment property) We have
$$j_{st}(X)+j_{tu}(X)=j_{su}(X)$$
for all $0\le s\le t\le u$ and all $X\in g$.
ii) (Boson independence) We have $[j_{st}(X),j_{s't'}(Y)]=0$, $X,Y\in g$, $0\le s\le t\le s'\le t'$, and
$$\langle\Omega,j_{s_1t_1}(X_1)^{k_1}\cdots j_{s_nt_n}(X_n)^{k_n}\Omega\rangle=\langle\Omega,j_{s_1t_1}(X_1)^{k_1}\Omega\rangle\cdots\langle\Omega,j_{s_nt_n}(X_n)^{k_n}\Omega\rangle,$$
for all $n,k_1,\ldots,k_n\in\mathbb N$, $0\le s_1\le t_1\le s_2\le\cdots\le t_n$, $X_1,\ldots,X_n\in g$.
iii) (Stationarity) For all $n\in\mathbb N$ and all $X\in g$, the moments
$$m_n(X;s,t)=\langle\Omega,j_{st}(X)^n\Omega\rangle$$
depend only on the difference $t-s$.
iv) (Weak continuity) We have
$$\lim_{t\searrow s}\langle\Omega,j_{st}(X)^n\Omega\rangle=0,\qquad n\in\mathbb N,\ X\in g.$$

Such a process extends to a family of $*$-representations of the complexification $g_{\mathbb C}=g\oplus ig$ with the involution
$$(X+iY)^*=-X+iY,\qquad X,Y\in g,$$
by setting
$$j_{st}(X+iY)=j_{st}(X)+ij_{st}(Y).$$
We denote by $\mathcal U(g)$ the universal enveloping algebra of $g_{\mathbb C}$, i.e., the unital associative algebra over $\mathbb C$ generated by the elements of $g_{\mathbb C}$ with the relations $XY-YX=[X,Y]$ for $X,Y\in g$. If $X_1,\ldots,X_d$ is a basis of $g$, then
$$\{X_1^{n_1}\cdots X_d^{n_d}:n_1,\ldots,n_d\in\mathbb N\}$$
is a basis of $\mathcal U(g)$. The set
$$\{X_1^{n_1}\cdots X_d^{n_d}:n_1,\ldots,n_d\in\mathbb N,\ n_1+\cdots+n_d\ge1\}$$
is a basis of the non-unital subalgebra $\mathcal U_0(g)$ of $\mathcal U(g)$ generated by $g$, and we have $\mathcal U(g)=\mathbb C1\oplus\mathcal U_0(g)$.

Furthermore, we extend the involution on $g_{\mathbb C}$ defined in Remark 2.1.3 as an antilinear antihomomorphism to $\mathcal U(g)$ and $\mathcal U_0(g)$. It acts on the basis given earlier by
$$\big(X_1^{n_1}\cdots X_d^{n_d}\big)^*=(-1)^{n_1+\cdots+n_d}\,X_d^{n_d}\cdots X_1^{n_1}.$$
A Lévy process $(j_{st})_{0\le s\le t}$ on $g$ extends to a family of $*$-representations of $\mathcal U(g)$, and the functionals
$$\varphi_t=\langle\Omega,j_{0t}(\cdot)\Omega\rangle:\mathcal U(g)\longrightarrow\mathbb C$$
are states, i.e., unital positive linear functionals. Furthermore, they are differentiable with respect to $t$, and
$$L(u)=\lim_{t\searrow0}\frac1t\,\varphi_t(u),\qquad u\in\mathcal U_0(g),$$
defines a positive Hermitian linear functional on $\mathcal U_0(g)$. In fact, one can prove that the family $(\varphi_t)_{t\in\mathbb R_+}$ is a convolution semigroup of states on $\mathcal U_0(g)$. The functional $L$ is also called the generating functional of the process. It satisfies the conditions of the following definition.

Definition 8.1.2 A linear functional $L:\mathcal U_0\to\mathbb C$ on a (non-unital) $*$-algebra $\mathcal U_0$ is called a generating functional if
i) $L$ is Hermitian, i.e., $L(u^*)=\overline{L(u)}$ for all $u\in\mathcal U_0$;
ii) $L$ is positive, i.e., $L(u^*u)\ge0$ for all $u\in\mathcal U_0$.

Schürmann has shown that there indeed exists a Lévy process for any generating functional on $\mathcal U_0(g)$, cf. [106]. Let
$$\big(j^{(1)}_{st}:g\to L_{AH}(D^{(1)})\big)_{0\le s\le t}\quad\text{and}\quad\big(j^{(2)}_{st}:g\to L_{AH}(D^{(2)})\big)_{0\le s\le t}$$
be two Lévy processes on $g$ with respect to the state vectors $\Omega^{(1)}$ and $\Omega^{(2)}$, respectively. We call them equivalent if all their moments agree, i.e., if
$$\langle\Omega^{(1)},j^{(1)}_{st}(X)^k\Omega^{(1)}\rangle=\langle\Omega^{(2)},j^{(2)}_{st}(X)^k\Omega^{(2)}\rangle,$$
for all $k\in\mathbb N$, $0\le s\le t$, $X\in g$. This implies that all joint moments also agree on $\mathcal U(g)$, i.e.,
$$\langle\Omega^{(1)},j^{(1)}_{s_1t_1}(u_1)\cdots j^{(1)}_{s_nt_n}(u_n)\Omega^{(1)}\rangle=\langle\Omega^{(2)},j^{(2)}_{s_1t_1}(u_1)\cdots j^{(2)}_{s_nt_n}(u_n)\Omega^{(2)}\rangle,$$
for all $0\le s_1\le t_1\le s_2\le\cdots\le t_n$, $u_1,\ldots,u_n\in\mathcal U(g)$, $n\ge1$.

8.2 Schürmann triples


By a Gelfand–Naimark–Segal (GNS)-type construction, one can associate to
every generating functional a Schürmann triple.

Definition 8.2.1 A Schürmann triple on g is a triple (ρ, η, ψ), where

a) ρ : g −→ LAH(D) is a representation on some pre-Hilbert space D, i.e.,

ρ([X, Y]) = ρ(X)ρ(Y) − ρ(Y)ρ(X) and ρ(X)∗ = −ρ(X),

for all X, Y ∈ g,
b) η : g −→ D is a ρ-1-cocycle, i.e., it satisfies

η([X, Y]) = ρ(X)η(Y) − ρ(Y)η(X), X, Y ∈ g,

and
c) ψ : g −→ C is a linear functional with imaginary values such that the
bilinear map (X, Y) ↦ ⟨η(X), η(Y)⟩ is the 2-coboundary of ψ (with
respect to the trivial representation), i.e.,

ψ([X, Y]) = ⟨η(Y), η(X)⟩ − ⟨η(X), η(Y)⟩, X, Y ∈ g.

The functional ψ in the Schürmann triple associated to a generating functional

L : U0 (g) −→ C

is the restriction of the generating functional L to g. Conversely, given
a Schürmann triple (ρ, η, ψ), we can reconstruct a generating functional
L : U0(g) −→ C from it by setting

L(X1) = ψ(X1), n = 1,
L(X1 X2) = −⟨η(X1), η(X2)⟩, n = 2,
L(X1 · · · Xn) = −⟨η(X1), ρ(X2) · · · ρ(Xn−1) η(Xn)⟩, n ≥ 3,

for X1, . . . , Xn ∈ g. We will now see how a Lévy process can be recon-
structed from its Schürmann triple.
Let (ρ, η, ψ) be a Schürmann triple on g, acting on a pre-Hilbert space D.
We can define a Lévy process on the symmetric Fock space

Γ(L2(R+, D)) = ⊕_{n=0}^∞ L2(R+, D)◦n

by setting

jst(X) = a◦st(ρ(X)) + a+st(η(X)) − a−st(η(X)) + ψ(X)(t − s) Id, (8.2)

for X ∈ g, where a◦st, a+st, and a−st denote the increments of the conservation,
creation, and annihilation processes on Γ(L2(R+, D)) defined in Chapter 5,
see also [79, 87]. Using the commutation relations (5.1) satisfied by the
conservation, creation, and annihilation operators, it is straightforward to check
that we have

[jst(X), jst(Y)] = jst([X, Y]) and jst(X)∗ = −jst(X),

0 ≤ s ≤ t, X, Y ∈ g. One also checks that the moments of jst satisfy the
stationarity and continuity properties of Definition 8.1.1. Furthermore, using
the identification

Γ(L2(R+, D)) ≅ Γ(L2([0, t[, D)) ⊗ Γ(L2([t, +∞), D))

and the fact that Ω ≅ Ω ⊗ Ω holds for the vacuum vector with respect to
this factorization, one can show that the increments of (jst)0≤s≤t are boson
independent. The family

(jst : g −→ LAH(Γ(L2(R+, D))))0≤s≤t

extends to a unique family of unital ∗-representations of U(g), since the
elements of g generate U(g). We denote this family again by (jst)0≤s≤t.

The following theorem can be traced back to the works of Araki and Streater.
In the form given here it is a special case of Schürmann’s representation
theorem for Lévy processes on involutive bialgebras, cf. [106].
Theorem 8.2.2 Let g be a real Lie algebra. Then there is a one-to-one
correspondence (modulo equivalence) between Lévy processes on g and
Schürmann triples on g. Precisely, given a Schürmann triple (ρ, η, L) on g
over D,

jst(X) := a◦st(ρ(X)) + a+st(η(X)) − a−st(η(X)) + (t − s)L(X) Id, (8.3)

0 ≤ s ≤ t, X ∈ g, defines a Lévy process on g over a dense subspace H ⊆
Γ(L2(R+, D)), with respect to the vacuum vector Ω.
The correspondence between (equivalence classes of) Lévy processes and
Schürmann triples is one-to-one and the representation (8.2) is universal.
Theorem 8.2.3 [106]
i) Two Lévy processes on g are equivalent if and only if their Schürmann
triples are unitarily equivalent on the subspace ρ(U(g))η(g).
ii) A Lévy process (kst)0≤s≤t with generating functional L and Schürmann
triple (ρ, η, ψ) is equivalent to the Lévy process (jst)0≤s≤t associated to
(ρ, η, L) defined in Equation (8.2).
Due to Theorem 8.2.3, the problem of characterising and constructing all Lévy
processes on a given real Lie algebra can be decomposed into the following
steps.
a) First, classify all representations of g by anti-Hermitian operators (modulo
unitary equivalence). This gives the possible choices for the representation
ρ in the Schürmann triple.
b) Next, determine all ρ-1-cocycles. We distinguish between trivial cocycles,
i.e., cocycles of the form
η(X) = ρ(X)ω, X ∈ g,
for some vector ω ∈ D in the representation space of ρ, and non-trivial
cocycles, i.e., cocycles that cannot be written in this form.
c) Finally, determine all functionals L that turn a pair (ρ, η) into a Schürmann
triple (ρ, η, L).
The last step can also be viewed as a cohomological problem. If η is a ρ-1-
cocycle, then the bilinear map

(X, Y) ↦ ⟨η(X), η(Y)⟩ − ⟨η(Y), η(X)⟩


is a 2-cocycle for the trivial representation, i.e., it is antisymmetric and it
satisfies

⟨η([X, Y]), η(Z)⟩ − ⟨η(Z), η([X, Y])⟩ − ⟨η(X), η([Y, Z])⟩ + ⟨η([Y, Z]), η(X)⟩ = 0,

for all X, Y, Z ∈ g. For L we can take any functional that has the map

(X, Y) ↦ ⟨η(X), η(Y)⟩ − ⟨η(Y), η(X)⟩

as coboundary, i.e., L(X) is imaginary and

L([X, Y]) = ⟨η(Y), η(X)⟩ − ⟨η(X), η(Y)⟩
for all X, Y ∈ g. If η is trivial then such a functional always exists, as we
can take L(X) = ⟨ω, ρ(X)ω⟩. But for a nontrivial cocycle such a functional
may not exist. For a given pair (ρ, η), L is determined only up to a Hermitian
0-1-cocycle, i.e., a Hermitian functional ℓ that satisfies ℓ([X, Y]) = 0 for all
X, Y ∈ g.
Trivial changes to the cocycle are equivalent to a conjugation of the
corresponding Lévy process by a unitary cocycle of the time shift on the Fock
space. More generally, we have the following proposition.

Proposition 8.2.4 Let g be a real Lie algebra, (jst)0≤s≤t a Lévy process on
g with Schürmann triple (ρ, η, ψ) over the pre-Hilbert space D, B a unitary
operator on D, and ω ∈ D. Then (ρ̃, η̃, ψ̃) with

ρ̃(X) := B∗ρ(X)B,
η̃(X) := B∗η(X) − B∗ρ(X)Bω,
ψ̃(X) := ψ(X) − ⟨Bω, η(X)⟩ + ⟨η(X), Bω⟩ + ⟨Bω, ρ(X)Bω⟩
       = ψ(X) − ⟨ω, η̃(X)⟩ + ⟨η̃(X), ω⟩ − ⟨ω, ρ̃(X)ω⟩,  X ∈ g,

is also a Schürmann triple on g.

Proof : The operator process (Ut)t∈R+ is given by

Ut = e^{t(ih − ||ω||²/2)} exp(−a+t(Bω)) Γt(B) exp(a−t(ω)),

where Γt(B) denotes the second quantisation of B. It is unitary and adapted to
the Fock space. Furthermore, for 0 ≤ s ≤ t, we can write Us∗Ut as

Us∗Ut = Id ⊗ ust ⊗ Id

on the Fock space with respect to the factorisation

Γ(L2([0, s[, D)) ⊗ Γ(L2([s, t[, D)) ⊗ Γ(L2([t, +∞), D)).

This allows us to show that (j̃st)0≤s≤t is again a Lévy process on g.
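The two expressions given for ψ̃ in Proposition 8.2.4 can be checked against each other numerically. The sketch below does this on a made-up 2×2 example (a rotation B, an anti-Hermitian ρ(X), arbitrary η(X) and ω); it is only a consistency check of the formulas, not part of the proof.

```python
import math

# Consistency check of the two expressions for psi~ in Proposition 8.2.4,
# on a made-up 2x2 example: B a rotation (unitary), R = rho(X) anti-Hermitian.

def inner(v, w):
    # antilinear in the first argument
    return sum(a.conjugate() * b for a, b in zip(v, w))

def mv(m, v):
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def adj(m):
    # conjugate transpose
    return [[m[j][i].conjugate() for j in range(2)] for i in range(2)]

t = 0.7
B = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]   # unitary
R = [[1j, 2.0 + 1j], [-2.0 + 1j, -3j]]                          # R* = -R
eta = [1.0 + 2j, -0.5]
omega = [0.3, -1.1 + 0.2j]
psi = 0.25j                                                     # purely imaginary

Bw = mv(B, omega)
rho_t = mm(adj(B), mm(R, B))                                    # B* rho(X) B
eta_t = [x - y for x, y in zip(mv(adj(B), eta), mv(rho_t, omega))]

first = psi - inner(Bw, eta) + inner(eta, Bw) + inner(Bw, mv(R, Bw))
second = psi - inner(omega, eta_t) + inner(eta_t, omega) - inner(omega, mv(rho_t, omega))
print(abs(first - second))  # numerically ~0
```

The agreement relies on ρ̃(X) being anti-Hermitian, which is why the last inner product changes sign between the two forms.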

Corollary 8.2.5 If the generating functional

L : U0(g) −→ C

of (jst) can be extended to a positive functional L̃ on U(g), then (jst) is
equivalent to the cocycle conjugate

Ut∗ Γst(ρ(·)) Ut

of the second quantisation of ρ.

Proof : Applying the standard GNS construction to L̃ on U(g), we obtain a
pre-Hilbert space D, a projection η : U(g) −→ D, and a representation ρ of
U(g) on D such that

L(u) = ⟨ω, ρ(u)ω⟩,

for all u ∈ U(g), where ω = η(1)/L̃(1). It is not difficult to check that
(ρ|g, η|g, L̃|g) is a Schürmann triple for (jst). Taking

Ut = exp(At(ω) − A∗t(ω))

and applying Proposition 8.2.4, we obtain the desired result.

Remark 8.2.6 If the generating functional of a Lévy process (jst) can be written
in the form L(u) = ⟨ω, ρ(u)ω⟩ for all u ∈ U0(g), then we call (jst) a compound
Poisson process.

In the next examples we will work with the complexification gC of a real Lie
algebra g and the involution

(X + iY)∗ = −X + iY, X, Y ∈ g.

One can always recover g as the real subspace

g = {X ∈ gC : X∗ = −X}

of antisymmetric elements in gC.

8.2.1 Quantum stochastic differentials

Since we can define quantum stochastic integrals and know the Itô table for
the four Hudson–Parthasarathy integrators,

•         da+(u)       da◦(F)       da−(u)    dt
da+(v)    0            0            0         0
da◦(G)    da+(Gu)      da◦(GF)      0         0
da−(v)    ⟨v, u⟩dt     da−(F∗v)     0         0
dt        0            0            0         0

for all F, G ∈ L(D), u, v ∈ D, we get a quantum stochastic calculus for
Lévy processes on g, too. The map dL, which associates to an element u of the
universal enveloping algebra the quantum stochastic differential dL u
defined by

dL u = da◦(ρ(u)) + da+(η(u)) + da−(η(u∗)) + ψ(u) dt, (8.4)

is a ∗-homomorphism from U0(g) to the Itô algebra over D, see [45, Propo-
sition 4.4.2]. It follows that the dimension of the Itô algebra generated by
{dL X : X ∈ g} is at least the dimension of D (since η is assumed to be surjective)
and not bigger than (dim D + 1)². If D is infinite-dimensional, then its dimension
is also infinite. Note that it depends on the choice of the Lévy process.

Proposition 8.2.7 The Lévy process of (ρ̃, η̃, ψ̃) in Proposition 8.2.4 is
equivalent to the Lévy process defined by

j̃st(X) = Ut∗ jst(X) Ut, 0 ≤ s ≤ t, X ∈ g,

where (Ut)t∈R+ is the solution of

Ut = Id + ∫_0^t Us (da−s(ω) − da+s(Bω) + da◦s(B − Id) + (ih − ||ω||²/2) ds).

Proof : Using the quantum Itô table, one can show that (j̃st)0≤s≤t is of the
form

j̃st(X) = a◦st(B∗ρ(X)B) + a+st(B∗η(X) − B∗ρ(X)Bω)
        − a−st(B∗η(X) − B∗ρ(X)Bω)
        + (t − s)(ψ(X) − ⟨Bω, η(X)⟩ + ⟨η(X), Bω⟩ + ⟨Bω, ρ(X)Bω⟩) Id.

8.3 Lévy processes on hw and osc




A Lévy process (jst : g −→ LAH(D))0≤s≤t on g with generating functional
L : U0(g) −→ C and Schürmann triple (ρ, η, ψ) is called Gaussian if ρ is the
trivial representation of g, i.e., if

ρ(X) = 0, X ∈ g.
Recall that the Heisenberg–Weyl Lie algebra hw is the three-dimensional Lie
algebra with basis {A+ , A− , E}, commutation relations
[A− , A+ ] = E, [A± , E] = 0,
and the involution (A− )∗ = A+ , E∗ = E. We begin with the classification of
all Gaussian generating functionals on the hw algebra.
Proposition 8.3.1 Let v1, v2 ∈ C² and z ∈ C. Then

ρ(A+) = ρ(A−) = ρ(E) = 0,
η(A+) = v1, η(A−) = v2, η(E) = 0,
L(A+) = z, L(A−) = z̄, L(E) = ||v1||² − ||v2||²,

defines the Schürmann triple on D = span{v1, v2} of a Gaussian generating
functional on U0(hw).
Proof : One checks that for all these cocycles there do indeed exist generating
functionals and computes their general form.
Therefore from (8.4) we get, for an arbitrary Gaussian Lévy process on hw:

dL A+ = da+(v1) + da−(v2) + z dt,
dL A− = da+(v2) + da−(v1) + z̄ dt,
dL E = (||v1||² − ||v2||²) dt,

and the Itô table

•        dL A+        dL A−        dL E
dL A+    ⟨v2, v1⟩dt   ⟨v2, v2⟩dt   0
dL A−    ⟨v1, v1⟩dt   ⟨v1, v2⟩dt   0
dL E     0            0            0

For ||v1||² = 1 and v2 = 0, this is the usual Itô table for the creation and
annihilation process in Hudson–Parthasarathy calculus.

We now consider the oscillator Lie algebra osc, which is obtained by adding
a Hermitian element N with commutation relations

[N, A±] = ±A±, [N, E] = 0.

The elements E and NE − A+A− generate the center of U0(osc). For an
irreducible representation of U(osc) to admit nontrivial cocycles, these central
elements have to be represented by zero. Since we are only interested in
∗-representations, this also implies ρ(A+) = ρ(A−) = 0, as in the next
proposition, whose proof is similar to that of Proposition 8.3.1.

Proposition 8.3.2 The Schürmann triples of Gaussian generating functionals
on U0(osc) are all of the form

ρ(N) = ρ(A+) = ρ(A−) = ρ(E) = 0,
η(N) = v, η(A+) = η(A−) = η(E) = 0,
L(N) = b, L(A+) = L(A−) = L(E) = 0,

with v ∈ C, b ∈ R.

In particular, letting v1, v2 ∈ C² and b ∈ R, ρ = ρ1, and

η(N) = v1, η(A+) = v2, η(A−) = η(E) = 0,
L(N) = b, L(E) = ||v2||², L(A+) = ⟨v1, v2⟩, L(A−) = ⟨v2, v1⟩,

also defines a Schürmann triple on osc acting on D = span{v1, v2}. The
corresponding quantum stochastic differentials are

dL N = da◦(Id) + da+(v1) + da−(v1) + b dt,
dL A+ = da+(v2) + ⟨v1, v2⟩ dt,
dL A− = da−(v2) + ⟨v2, v1⟩ dt,
dL E = ||v2||² dt,

and they satisfy the Itô table

•        dL A+    dL N                       dL A−    dL E
dL A+    0        0                          0        0
dL N     dL A+    dL N + (||v1||² − b) dt    0        0
dL A−    dL E     dL A−                      0        0
dL E     0        0                          0        0

Note that for ||v1||² = b, this is the usual Itô table of the four fundamental
noises of Hudson–Parthasarathy calculus.
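The entries of such tables can be reproduced mechanically from the Hudson–Parthasarathy Itô table of Section 8.2.1. A minimal sketch, encoding a differential da◦(F) + da+(u) + da−(v) + c dt as a tuple (F, u, v, c); the vectors v1, v2 and constant b below are made-up data.

```python
# Multiplying quantum stochastic differentials (F, u, v, c), standing for
#   da(F) + da+(u) + da-(v) + c dt   (da(F) abbreviates the conservation term),
# with the Hudson-Parthasarathy Ito table of Section 8.2.1:
#   da(G)da(F) = da(GF),  da(G)da+(u) = da+(Gu),
#   da-(v)da(F) = da-(F*v),  da-(v)da+(u) = <v,u> dt.
# The vectors v1, v2 and the constant b are made-up data.

def inner(v, w):
    return sum(a.conjugate() * b for a, b in zip(v, w))

def mv(m, v):
    n = len(v)
    return [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]

def mm(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def adj(m):
    n = len(m)
    return [[m[j][i].conjugate() for j in range(n)] for i in range(n)]

def ito_product(d1, d2):
    G, gu, gv, _ = d1
    F, fu, fv, _ = d2
    return (mm(G, F), mv(G, fu), mv(adj(F), gv), inner(gv, fu))

I2 = [[1.0, 0.0], [0.0, 1.0]]
Z2 = [[0.0, 0.0], [0.0, 0.0]]
z = [0.0, 0.0]
v1, v2, b = [1.0, 0.5], [0.0, 2.0], 0.8

dN  = (I2, v1, v1, b)              # dL N  = da(Id) + da+(v1) + da-(v1) + b dt
dAp = (Z2, v2, z, inner(v1, v2))   # dL A+ = da+(v2) + <v1,v2> dt
dAm = (Z2, z, v2, inner(v2, v1))   # dL A- = da-(v2) + <v2,v1> dt

prod = ito_product(dN, dN)         # dL N . dL N: dt coefficient becomes |v1|^2
print(prod[1], prod[2], prod[3])
```

The products dL N · dL N, dL A− · dL A+, and dL A− · dL N reproduce the corresponding rows of the table above.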
For n ≥ 2 we may also consider the real Lie algebra with basis X0, X1, . . . , Xn
and the commutation relations

[X0, Xk] = Xk+1 if 1 ≤ k < n, and [X0, Xk] = 0 otherwise, (8.5)

and

[Xk, Xℓ] = 0, 1 ≤ k, ℓ ≤ n.

For n = 2 this algebra coincides with the Heisenberg–Weyl Lie algebra hw,
while for n > 2 it is an (n − 1)-step nilpotent Lie algebra. Its irreducible unitary
representations can be described and constructed using the "orbit method" (i.e.,
there exists exactly one irreducible unitary representation for each orbit of the
coadjoint representation), see, e.g., [101, 102].
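As a quick sanity check, the bracket (8.5) can be encoded by its structure constants and the Jacobi identity verified on all basis triples; the sketch below does this for the made-up choice n = 4.

```python
from itertools import product

# Verify that the bracket (8.5) satisfies the Jacobi identity, via structure
# constants over the basis X0, ..., Xn; n = 4 is a made-up choice.
n = 4

def bracket_basis(i, j):
    # coordinate vector of [Xi, Xj] over the basis X0..Xn
    c = [0] * (n + 1)
    if i == 0 and 1 <= j < n:
        c[j + 1] = 1
    elif j == 0 and 1 <= i < n:
        c[i + 1] = -1
    return c

def lie(u, w):
    # bilinear extension of the bracket to coordinate vectors
    out = [0] * (n + 1)
    for i, ci in enumerate(u):
        for j, cj in enumerate(w):
            if ci and cj:
                out = [o + ci * cj * bi for o, bi in zip(out, bracket_basis(i, j))]
    return out

def e(i):
    v = [0] * (n + 1)
    v[i] = 1
    return v

jacobi_ok = all(
    [x + y + zc for x, y, zc in zip(lie(e(i), lie(e(j), e(k))),
                                    lie(e(j), lie(e(k), e(i))),
                                    lie(e(k), lie(e(i), e(j))))] == [0] * (n + 1)
    for i, j, k in product(range(n + 1), repeat=3)
)
print(jacobi_ok)  # True
```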

8.4 Classical processes



Let (jst)0≤s≤t be a Lévy process on a real Lie algebra gR over Γ =
Γ(L2(R+, D)). Denote by gR+ the space of g-valued simple step functions

gR+ := { Σ_{k=1}^n Xk 1_{[sk,tk)} : 0 ≤ s1 ≤ t1 ≤ s2 ≤ · · · ≤ tn < ∞, X1, . . . , Xn ∈ g }.

Then gR+ is a real Lie algebra with the pointwise Lie bracket, and the Lévy
process (jst)0≤s≤t on g defines a representation π of gR+ via

π(X) = Σ_{k=1}^n j_{sk tk}(Xk), for X = Σ_{k=1}^n Xk 1_{[sk,tk)} ∈ gR+. (8.6)

Denote also by 𝒮(R+) the space of real-valued simple step functions

𝒮(R+) := { Σ_{k=1}^n φk 1_{[sk,tk)} : 0 ≤ s1 ≤ t1 ≤ s2 ≤ · · · ≤ tn < ∞, φ1, . . . , φn ∈ R }.

Given a Hermitian element Y of gR, i.e., Y∗ = Y, we define a map

y : 𝒮(R+) −→ L(Γ),

where L(Γ) is defined in (8.1), by

yφ := Σ_{k=1}^n φk j_{sk tk}(Y), for φ = Σ_{k=1}^n φk 1_{[sk,tk)} ∈ 𝒮(R+).

Clearly, the operators {yφ : φ ∈ 𝒮(R+)} commute, since y is the restriction of

π : gR+ ∋ φY = Σ_{k=1}^n φk 1_{[sk,tk)} Y ↦ Σ_{k=1}^n j_{sk tk}(φk Y) ∈ L(Γ)

to the abelian current algebra CY^{R+} over CY. Furthermore, if φ is real-valued,
then yφ is Hermitian, since Y is Hermitian. Therefore, there exists a classical
stochastic process (Ỹt)t∈R+ whose moments are given by

IE[Ỹt1 · · · Ỹtn] = ⟨Ω, y_{1[0,t1)} · · · y_{1[0,tn)} Ω⟩, t1, . . . , tn ∈ R+.

Since the expectations of (jst)0≤s≤t factorise, we can choose (Ỹt)t∈R+ to be a
Lévy process, and if jst(Y) is even essentially self-adjoint then the marginal
distributions of (Ỹt)t∈R+ are uniquely determined.
In order to characterize the process (Ỹt)t∈R+ in Theorem 8.4.3 below, we will
need the following analogues of the splitting Lemma 6.1.2 in the framework
of quantum Lévy processes.

Lemma 8.4.1 Let X ∈ L(D), u, v ∈ D, and suppose further that the series

Σ_{n=0}^∞ (tⁿ/n!) Xⁿ w and Σ_{n=0}^∞ (tⁿ/n!) (X∗)ⁿ w (8.7)

converge in D for all w ∈ D. Then we have

e^{a◦(X)} a−(v) = a−(e^{−X∗} v) e^{a◦(X)},
e^{a+(u)} a−(v) = (a−(v) − ⟨v, u⟩) e^{a+(u)},
e^{a+(u)} a◦(X) = (a◦(X) − a+(Xu)) e^{a+(u)},

on the algebraic boson Fock space over D.



Proof : This can be deduced from the formula

Ad_{e^X} Y = e^X Y e^{−X} = Y + [X, Y] + (1/2)[X, [X, Y]] + · · · = e^{ad_X} Y

for the adjoint actions.
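For matrices this adjoint-action formula can be tested directly. The sketch below compares e^X Y e^{−X} with the partial sums of e^{ad_X} Y for made-up 2×2 matrices, computing matrix exponentials by truncated Taylor series.

```python
# Numerical check of Ad_{e^X} Y = e^{ad_X} Y for made-up 2x2 matrices:
# the left side uses truncated Taylor series for e^{+-X}, the right side
# sums iterated commutators Y + [X,Y] + [X,[X,Y]]/2! + ...

def mm(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(a, b, s=1.0):
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def expm(x, terms=40):
    out = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mm(term, x)]
        out = madd(out, term)
    return out

X = [[0.0, 0.3], [-0.3, 0.1]]
Y = [[1.0, 0.5], [0.0, -1.0]]

negX = [[-v for v in row] for row in X]
left = mm(expm(X), mm(Y, expm(negX)))

right = [[0.0, 0.0], [0.0, 0.0]]
term, fact = Y, 1.0
for k in range(40):
    right = madd(right, [[v / fact for v in row] for row in term])
    term = madd(mm(X, term), mm(term, X), s=-1.0)   # ad_X(term) = X term - term X
    fact *= k + 1

err = max(abs(left[i][j] - right[i][j]) for i in range(2) for j in range(2))
print(err)  # numerically ~0
```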

The following lemma, which is the Lévy process analogue of Lemma 6.1.3,
provides the normally ordered form of the generalised Weyl operators, and it
is a key tool to calculate the characteristic functions of classical subprocesses
of Lévy processes on real Lie algebras.

Lemma 8.4.2 Let X ∈ L(D) and u, v ∈ D, and suppose further that the series
(8.7) converge in D for all w ∈ D. Then we have

exp(α + a◦(X) + a+(u) + a−(v)) = e^{α̃} exp(a+(ũ)) exp(a◦(X)) exp(a−(ṽ))

on the algebraic boson Fock space over D, where α ∈ C and

ũ = Σ_{n=1}^∞ (X^{n−1}/n!) u, ṽ = Σ_{n=1}^∞ ((X∗)^{n−1}/n!) v, α̃ = α + Σ_{n=2}^∞ (1/n!) ⟨v, X^{n−2} u⟩.



Proof : Let ω ∈ D and set ω1(t) = exp(t(α + a◦(X) + a+(u) + a−(v))) ω and

ω2(t) = e^{α̃(t)} exp(a+(ũ(t))) exp(t a◦(X)) exp(a−(ṽ(t))) ω

for t ∈ [0, 1], where

ũ(t) = Σ_{n=1}^∞ (tⁿ/n!) X^{n−1} u,
ṽ(t) = Σ_{n=1}^∞ (tⁿ/n!) (X∗)^{n−1} v,
α̃(t) = tα + Σ_{n=2}^∞ (tⁿ/n!) ⟨v, X^{n−2} u⟩,

with ω1(0) = ω = ω2(0). Using Lemma 6.1.2 we also check that the derivatives

ω1′(t) = (α + a◦(X) + a+(u) + a−(v)) exp(t(α + a◦(X) + a+(u) + a−(v))) ω

and

ω2′(t) = e^{α̃(t)} a+(ũ′(t)) exp(a+(ũ(t))) exp(t a◦(X)) exp(a−(ṽ(t))) ω
       + e^{α̃(t)} exp(a+(ũ(t))) a◦(X) exp(t a◦(X)) exp(a−(ṽ(t))) ω
       + e^{α̃(t)} exp(a+(ũ(t))) exp(t a◦(X)) a−(ṽ′(t)) exp(a−(ṽ(t))) ω
       + e^{α̃(t)} exp(a+(ũ(t))) exp(t a◦(X)) exp(a−(ṽ(t))) α̃′(t) ω

coincide for all t ∈ [0, 1]. Therefore we have ω1(1) = ω2(1).
In the next theorem we compute the characteristic exponent of (Ỹt)t∈R+ by
application of the splitting Lemma 8.4.2.

Theorem 8.4.3 Let (jst)0≤s≤t be a Lévy process on a real Lie algebra gR with
Schürmann triple (ρ, η, L). Then for any Hermitian element Y of gR such that
η(Y) is analytic for ρ(Y), the associated classical Lévy process (Ỹt)t∈R+ has
characteristic exponent

Ψ(λ) = iλL(Y) + Σ_{n=2}^∞ (λⁿ/n!) iⁿ ⟨η(Y∗), ρ(Y)^{n−2} η(Y)⟩,

for λ in a neighborhood of zero, with ρ(Y)⁰ = Id.

Proof : From Lemma 6.1.3 we have

IE[e^{iλỸt}] = exp( itλL(Y) + t Σ_{n=2}^∞ (λⁿ/n!) iⁿ ⟨η(Y∗), ρ(Y)^{n−2} η(Y)⟩ ),

which yields the characteristic exponent

Ψ(λ) = (1/t) log IE[e^{iλỸt}] = (1/t) log ⟨Ω, e^{iλ j0t(Y)} Ω⟩, λ ∈ R,

for

j0t(Y) = a◦0t(ρ(Y)) + a+0t(η(Y)) + a−0t(η(Y)) + tL(Y).

A more direct proof of the theorem is also possible using the convolution of
functionals on U(g) instead of the boson Fock space realisation of (jst)0≤s≤t.
We note that Ψ(λ) also coincides with

Ψ(λ) = Σ_{n=1}^∞ (iⁿ λⁿ/n!) L(Yⁿ).

Next, we give two corollaries of Theorem 8.4.3; the first of them justifies our
definition of Gaussian generating functionals.

Corollary 8.4.4 Let L be a Gaussian generating functional on gR with corre-
sponding Lévy process (jst)0≤s≤t. For any Hermitian element Y, the associated
classical Lévy process (Ỹt)t∈R+ is Gaussian with mean and variance

IE[Ỹt] = tL(Y), Var[Ỹt] = ||η(Y)||² t, t ∈ R+.

We see that in this case we can take

(Ỹt)t∈R+ = (||η(Y)|| Bt + L(Y) t)_{t∈R+},

where (Bt)t∈R+ is a standard Brownian motion. The next corollary deals with
the case where L is the restriction to U0(g) of a positive functional on U(g).

Corollary 8.4.5 Let (ρ, η, L) be a Schürmann triple on gR whose cocycle is
trivial, i.e., there exists a vector ω ∈ D such that

η(u) = ρ(u)ω, u ∈ U0(g),

with generating functional of the form

L(u) = ⟨ω, ρ(u)ω⟩, u ∈ U0(g).

Suppose further that the vector ω is analytic for ρ(Y), i.e.,

exp(uρ(Y)) ω := Σ_{n=0}^∞ (uⁿ/n!) ρ(Y)ⁿ ω

converges for sufficiently small u. Then the classical stochastic process
(Ỹt)t∈R+ associated to (jst)0≤s≤t and to Y is a compound Poisson process with
characteristic exponent given by

Ψ(u) = ⟨ω, (e^{iuρ(Y)} − 1) ω⟩.

The above corollary suggests calling a Lévy process on g with trivial cocycle
η(u) = ρ(u)ω and generating functional L(u) = ⟨ω, ρ(u)ω⟩ for u ∈ U0(g)
a Poisson process on g. Note that in case the operator ρ(Y) is (essentially)
self-adjoint, the Lévy measure of (Ỹt)t∈R+ can be obtained by evaluating its
spectral measure

μ(dλ) = ⟨ω, dPλ ω⟩

in the state ω, where ρ(Y) = ∫ λ dPλ is the spectral resolution of (the closure
of) ρ(Y).
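In finite dimensions this spectral description of the Lévy measure can be made concrete. The sketch below takes a made-up 2×2 symmetric matrix for ρ(Y), diagonalises it by the closed 2×2 formula, and checks that ⟨ω, (e^{iuρ(Y)} − 1)ω⟩ agrees with Σ_j (e^{iuλj} − 1)|⟨ej, ω⟩|²; all numbers are illustrative assumptions.

```python
import cmath, math

# Made-up finite-dimensional example: rho(Y) = [[a, b], [b, c]] symmetric,
# omega the state vector, mu({lambda_j}) = |<e_j, omega>|^2 for eigenpairs.
a, b, c = 1.0, 0.7, -0.5
omega = [0.6, 0.8]

disc = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
lam = [(a + c) / 2 + disc, (a + c) / 2 - disc]
vecs = []
for l in lam:
    v = [b, l - a]                      # eigenvector of [[a, b], [b, c]] (b != 0 here)
    nrm = math.hypot(*v)
    vecs.append([v[0] / nrm, v[1] / nrm])

def psi_spectral(u):
    # integral of (e^{iux} - 1) against the spectral measure mu
    return sum((cmath.exp(1j * u * l) - 1) * (e[0] * omega[0] + e[1] * omega[1]) ** 2
               for l, e in zip(lam, vecs))

def mexp(m, terms=60):
    # 2x2 matrix exponential by truncated Taylor series
    out = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    term = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]
    for k in range(1, terms):
        term = [[sum(term[i][p] * m[p][j] for p in range(2)) / k for j in range(2)]
                for i in range(2)]
        out = [[out[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return out

def psi_direct(u):
    # <omega, (e^{iu rho(Y)} - 1) omega>
    E = mexp([[1j * u * a, 1j * u * b], [1j * u * b, 1j * u * c]])
    w = [E[0][0] * omega[0] + E[0][1] * omega[1],
         E[1][0] * omega[0] + E[1][1] * omega[1]]
    return omega[0] * w[0] + omega[1] * w[1] - (omega[0] ** 2 + omega[1] ** 2)

print(abs(psi_direct(1.3) - psi_spectral(1.3)))  # numerically ~0
```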

By choosing a commutative subalgebra of π(gR+) we can also obtain a
classical process, using the fact that the product f X of an element X ∈ gR+
with a function f ∈ 𝒮(R+) is again in gR+.

Theorem 8.4.6 Let (jst)0≤s≤t be a Lévy process on a real Lie algebra g and
let π be as in Equation (8.6). Choose X ∈ gR+, and define

X(f) := iπ(f X), f ∈ 𝒮(R+).

Then there exists a classical stochastic process (X̂t)t∈R+ with independent
increments that has the same finite-dimensional distributions as X, i.e.,

⟨Ω, g1(X(f1)) · · · gn(X(fn)) Ω⟩ = IE[g1(X̂(f1)) · · · gn(X̂(fn))]

for all n ∈ N, f1, . . . , fn ∈ 𝒮(R+), g1, . . . , gn ∈ C0(R), where

X̂(f) = ∫_{R+} f(t) dX̂t = Σ_{k=1}^n αk (X̂tk − X̂sk),

for f = Σ_{k=1}^n αk 1_{[sk,tk)} ∈ 𝒮(R+).

The existence of (X̂t)t∈R+ follows as in [1, Section 4], and

g1(X(f1)), . . . , gn(X(fn))

can be defined by the usual functional calculus for the (essentially) self-adjoint
operators X(f1), . . . , X(fn).

Notes
Lévy processes on real Lie algebras form a special case of Lévy processes
on involutive bialgebras, see [106], [79, Chapter VII], [45]. They were already
studied under the name of factorisable representations of current algebras in
the sixties and seventies; see [109] for a historical survey and references.
They are at the origin of the theory of quantum stochastic differential calculus.
See Section 5 of [109] for more references and a historical survey on the theory
of factorisable representations of current groups and algebras and its relation
to quantum stochastic calculus. Among future problems we can mention
the study of the cohomology of representations and the classification of all
Lévy processes on Lie algebras. We refer to [51] for the cohomology of Lie
algebras and Lie groups. It is known that the cohomology groups of all simple
nontrivial representations of the Lie algebra defined in (8.5) are trivial, see [51,
Proposition II.6.2].

Exercises
Exercise 8.1 Example of a classical Lévy process. Let Y = B+ + B− + βM with
β ∈ R and Me0 = m0 e0. This exercise aims at characterising the classical
Lévy process (Ỹt)t∈R+ associated to Y and (jst)0≤s≤t in the manner described
earlier. Corollary 8.4.5 tells us that (Ỹt)t∈R+ is a compound Poisson process
with characteristic exponent

Ψ(u) = ⟨e0, (e^{iuX} − 1) e0⟩.

We want to determine the Lévy measure of (Ỹt)t∈R+, i.e., the measure μ on R
for which

Ψ(u) = ∫_{−∞}^{∞} (e^{iux} − 1) μ(dx).

This is the spectral measure of X evaluated in the state ⟨e0, · e0⟩. Consider the
polynomials pn(x) ∈ R[x] defined by the condition

en = pn(X) e0, n ∈ N.

1. Show that the polynomials pn(x) are orthogonal with respect to μ, i.e.,

∫_{−∞}^{∞} pn(x) pm(x) μ(dx) = δnm, n, m ∈ N.

2. Find the three-term recurrence relation satisfied by the polynomials pn(x).
3. Determine the polynomials pn(x) according to the value of β.
4. Determine the density of μ with respect to which the polynomials pn(x) are
orthogonal.
9

A guide to the Malliavin calculus

I do not think that 150 years from now, people will photocopy pages
from Bourbaki to rhapsodize on them. Some lines in this memoir by
Poisson, on the other hand, are beaming with life . . .
(P. Malliavin, in Dialogues Autour de la Création
Mathématique, 1997.)
This chapter is an introduction to the Malliavin calculus, as a preparation for
the noncommutative setting of Chapters 11 and 12. We adopt the point of view
of normal martingales in a general framework that encompasses Brownian
motion and the Poisson process as particular cases, as in [98]. The Malliavin
calculus originally requires a heavy functional-analytic apparatus; here we
assume only a basic knowledge of stochastic calculus. Proofs are only outlined,
and the reader is referred to the literature for details.

9.1 Creation and annihilation operators


Let (Ω, F, P) be a probability space equipped with a right-continuous filtration
(Ft)t∈R+, i.e., an increasing family of sub-σ-algebras of F such that

Ft = ∩_{s>t} Fs, t ∈ R+.

In our presentation of stochastic integration we work in the framework of
normal martingales, which are square-integrable martingales (Mt)t∈R+ such
that

IE[(Mt − Ms)² | Fs] = t − s, 0 ≤ s < t. (9.1)

As will be seen in the next sections, the family of normal martingales contains
Brownian motion and the compensated standard Poisson process as particular
cases.

Every square-integrable process (Mt )t∈R+ with centered independent incre-


ments and generating the filtration (Ft )t∈R+ satisfies
1 $ 2 1 2
$
E (Mt − Ms )2 $Fs = E (Mt − Ms )2 , 0 ≤ s ≤ t.

In particular, a square-integrable process (Mt )t∈R+ with centered independent


increments is a normal martingale if and only if
1 2
E (Mt − Ms )2 = t − s, 0 ≤ s ≤ t.

Note that a martingale (Mt )t∈R+ is normal if and only if (Mt2 − t)t∈R+ is a
martingale, i.e.,
1 2
E Mt2 − t | Fs = Ms2 − s, 0 ≤ s < t.

9.1.1 Multiple stochastic integrals


Let L2(R+)◦n denote the subspace of L2(R+)⊗n = L2(R+ⁿ) consisting of the
symmetric functions fn in n variables (see Appendix A.8 for a review of
tensor products). The multiple stochastic integral of a symmetric function
fn ∈ L2(R+)◦n is defined as an iterated integral. First we let

I1(f) = ∫_0^∞ f(t) dMt, f ∈ L2(R+).

As a convention we identify L2(R+)◦⁰ with R and let

I0(f0) = f0, f0 ∈ L2(R+)◦⁰ ≃ R.

As a consequence of (9.1) we can prove the following.

Proposition 9.1.1 The multiple stochastic integral

In(fn) := n! ∫_0^∞ ∫_0^{tn} · · · ∫_0^{t2} fn(t1, . . . , tn) dMt1 · · · dMtn (9.2)

of fn ∈ L2(R+)◦n satisfies the isometry formula

IE[In(fn) Im(gm)] = n! 1_{n=m} ⟨fn, gm⟩_{L2(R+ⁿ)},

fn ∈ L2(R+)◦n, gm ∈ L2(R+)◦m, n, m ∈ N.

In particular we have IE[In(fn)] = 0 for all n ≥ 1. As a consequence of Propo-
sition 9.1.1, the multiple stochastic integral operator In induces an isometric
isomorphism between L2(Ω) and the Fock space over L2(R+).

Lemma 9.1.2 For all fn ∈ L2(R+)◦n, n ≥ 1, we have

IE[In(fn) | Ft] = In(fn 1_{[0,t]}ⁿ), t ∈ R+.

Proof : Since the indefinite Itô integral is a martingale, from (9.2) we have

IE[In(fn) | Ft] = n! IE[ ∫_0^∞ ∫_0^{tn} · · · ∫_0^{t2} fn(t1, . . . , tn) dMt1 · · · dMtn | Ft ]
= n! ∫_0^t ∫_0^{tn} · · · ∫_0^{t2} fn(t1, . . . , tn) dMt1 · · · dMtn
= In(fn 1_{[0,t]}ⁿ).
9.1.2 Annihilation operator


Consider the spaces S and U defined by

S = { Σ_{k=0}^n Ik(fk) : fk ∈ L4(R+)◦k, k = 0, . . . , n, n ∈ N }, (9.3)

and

U = { Σ_{i=1}^n 1_{[ti−1,ti)} Fi : Fi ∈ S, 0 = t0 ≤ t1 < · · · < tn, n ≥ 1 },

which is contained in

Ũ := { Σ_{k=0}^n Ik(gk(∗, ·)) : gk ∈ L2(R+)◦k ⊗ L2(R+), k = 0, . . . , n, n ∈ N },

where the symmetric tensor product ◦ is defined in Appendix A.8. Next
we state the definition of the operators D and δ on multiple stochastic integrals
(random variables and processes), whose linear combinations span S and U.

Definition 9.1.3 Let D : S −→ L2(Ω × R+) be the linear operator defined by

Dt In(fn) = n In−1(fn(∗, t)), dP × dt-a.e.,

fn ∈ L2(R+)◦n.

Due to its role as a lowering operator on the degree of multiple stochastic
integrals, the operator D identifies with an annihilation operator on the boson
Fock space over L2(R+).

Proposition 9.1.4 The domain Dom(D) = ID([0, ∞)) of D consists of the
space of square-integrable random variables with chaos expansion

F = Σ_{n=0}^∞ In(fn), (9.4)

such that the series

Σ_{k=1}^n k Ik−1(fk(∗, ·))

converges in L2(Ω × R+) as n goes to infinity.

Given F ∈ Dom(D) with the expansion (9.4) we have

IE[ ||DF||²_{L2(R+)} ] = Σ_{k=1}^∞ k k! ||fk||²_{L2(R+ᵏ)} < ∞,

and

Dt F = f1(t) + Σ_{k=2}^∞ k Ik−1(fk(∗, t)), dt dP-a.e.

In particular, the exponential vector

ξt(u) := Σ_{n=0}^∞ (1/n!) In((u 1_{[0,t]})⊗n), t ∈ R+,

belongs to Dom(D) for all u ∈ L2(R+), and we have

Ds ξt(u) = 1_{[0,t]}(s) u(s) ξt(u), s, t ∈ R+.

Since S defined by (9.3) is assumed to be dense in L2(Ω), (Mt)t∈R+ has the
chaos representation property.

Definition 9.1.5 Let

δ : Ũ −→ L2(Ω)

be the linear operator defined by

δ(In(fn+1(∗, ·))) = In+1(f̃n+1), fn+1 ∈ L2(R+)◦n ⊗ L2(R+),

where f̃n+1 is the symmetrisation of fn+1 in n + 1 variables, defined as

f̃n+1(t1, . . . , tn+1) = (1/(n+1)) Σ_{k=1}^{n+1} fn+1(t1, . . . , tk−1, tk+1, . . . , tn+1, tk).

In particular we have

f ◦ gn(t1, . . . , tn+1) = (1/(n+1)) Σ_{k=1}^{n+1} f(tk) gn(t1, . . . , tk−1, tk+1, . . . , tn+1),

i.e., f ◦ gn is the symmetrisation of f ⊗ gn in n + 1 variables. Similarly, the
operator δ is usually referred to as a creation operator, due to the fact that it
raises the degree of multiple stochastic integrals. The operator δ is also called
the Skorohod integral. Note that we have

δ(f) = I1(f) = ∫_0^∞ f(t) dMt, f ∈ L2(R+),

and, in particular,

δ(u In(fn)) = n ∫_0^∞ In(fn(∗, s) ◦ u· 1_{[0,s]}(∗, ·)) dMs + ∫_0^∞ us In(fn 1_{[0,s]}ⁿ) dMs,

u ∈ L2(R+), fn ∈ L2(R+)◦n, where as a convention "∗" denotes the n − 1 first
variables and "·" denotes the last integration variable in In.

By the isomorphism between L2(Ω) and Γ(L2(R+)) we can deduce the
canonical commutation relation satisfied by the operators D and δ, i.e., for any
u ∈ Ũ we have

Dt δ(u) = ut + δ(Dt u), t ∈ R+.

9.1.3 Duality relation


The next proposition states the duality relation satisfied by D and δ.

Proposition 9.1.6 The operators D and δ satisfy the duality relation

IE[F δ(u)] = IE[⟨DF, u⟩_{L2(R+)}], F ∈ S, u ∈ U.

Proof : We consider F = In(fn) and ut = Im(gm+1(∗, t)), t ∈ R+, with fn ∈ L2(R+)◦n
and gm+1 ∈ L2(R+)◦m ⊗ L2(R+). We have

IE[F δ(u)] = IE[Im+1(g̃m+1) In(fn)]
= n! 1_{n=m+1} ⟨fn, g̃n⟩_{L2(R+ⁿ)} = n! 1_{n=m+1} ⟨fn, gn⟩_{L2(R+ⁿ)}
= n! 1_{n−1=m} ∫_0^∞ · · · ∫_0^∞ fn(s1, . . . , sn−1, t) gn(s1, . . . , sn−1, t) ds1 · · · dsn−1 dt
= n 1_{n−1=m} ∫_0^∞ IE[In−1(fn(∗, t)) In−1(gn(∗, t))] dt
= IE[⟨D· In(fn), Im(gm+1(∗, ·))⟩_{L2(R+)}]
= IE[⟨DF, u⟩_{L2(R+)}].
Remark 9.1.7 By construction, the operator D satisfies the stability assump-
tion; thus we have Ds F = 0, s > t, for any Ft-measurable F ∈ S, t ∈ R+.

From now on we will assume that S is dense in L2(Ω), which is equivalent to
saying that (Mt)t∈R+ has the chaos representation property. As a consequence
of Proposition 9.1.6 we have the following.

Proposition 9.1.8 The operators D and δ are closable on L2(Ω) and L2(Ω ×
R+), respectively.

It also follows from the density of S in L2(Ω) that U is dense in L2(Ω ×
R+). More generally, the following proposition follows from the fact that the
denseness of S is equivalent to the chaos representation property.

Proposition 9.1.9 If (Mt)t∈R+ has the chaos representation property, then it
has the predictable representation property.

The domain Dom(δ) of δ is the space of processes (ut)t∈R+ ∈ L2(Ω × R+)
with

ut = Σ_{n=0}^∞ In(fn+1(∗, t)),

and such that

IE[|δ(u)|²] = Σ_{n=0}^∞ (n + 1)! ||f̃n+1||²_{L2(R+^{n+1})} < ∞.

The creation operator δ satisfies the following Itô–Skorohod type isometry.

Proposition 9.1.10 Let u ∈ Dom(δ) be such that ut ∈ Dom(D), dt-a.e., and
(Ds ut)s,t∈R+ ∈ L2(Ω × R+²). We have

IE[|δ(u)|²] = IE[ ||u||²_{L2(R+)} ] + IE[ ∫_0^∞ ∫_0^∞ Ds ut Dt us ds dt ]. (9.5)

By bilinearity, we also have

⟨δ(u), δ(v)⟩_{L2(Ω)} = ⟨u, v⟩_{L2(Ω×R+)} + ∫_0^∞ ∫_0^∞ ⟨Ds ut, Dt vs⟩_{L2(Ω)} ds dt,

for u and v satisfying the conditions of Proposition 9.1.10.

Definition 9.1.11 Let ILp,1 denote the space of stochastic processes (ut)t∈R+
such that ut ∈ Dom(D), dt-a.e., and

||u||^p_{p,1} := IE[ ||u||^p_{L2(R+)} ] + IE[ ∫_0^∞ ∫_0^∞ |Ds ut|^p ds dt ] < ∞.

The next result is a direct consequence of Proposition 9.1.10 and Definition
9.1.11 for p = 2.

Proposition 9.1.12 We have IL2,1 ⊂ Dom(δ).
As a consequence of Proposition 9.1.6, the operator δ coincides with the Itô
integral with respect to (Mt)t∈R+ on the square-integrable adapted processes,
as stated in the next proposition.

Proposition 9.1.13 Let (ut)t∈R+ ∈ L²_ad(Ω × R+) be a square-integrable
adapted process. We have

δ(u) = ∫_0^∞ ut dMt.

Note that when (ut)t∈R+ ∈ L²_ad(Ω × R+) is a square-integrable adapted process,
Relation (9.5) becomes the Itô isometry as a consequence of Proposition
9.1.13, i.e., we have

||δ(u)||_{L2(Ω)} = || ∫_0^∞ ut dMt ||_{L2(Ω)} = ||u||_{L2(Ω×R+)}, u ∈ L²_ad(Ω × R+),

as follows from Remark 9.1.7, since Dt us = 0, 0 ≤ s ≤ t.

9.2 Wiener space


In this section we focus on the case where (Mt)t∈R+ is a standard Brownian
motion, i.e., (Mt)t∈R+ is a normal martingale that solves the structure equation

[M, M]t = t, t ∈ R+.

The reader is referred to [58, 76, 83, 84, 98, 113] for more details. Brownian
motion can also be defined from a linear map W : h −→ L2(Ω) on a real
separable Hilbert space h, such that the W(h) are centered Gaussian random
variables with covariances given by

IE[W(h)W(k)] = ⟨h, k⟩, h, k ∈ h,

on a probability space (Ω, F, P).

Setting H1 = W(h) yields a closed Gaussian subspace of L2(Ω), and
W : h −→ H1 ⊆ L2(Ω) is an isometry; we will assume that the σ-algebra
F is generated by the elements of H1.

Let

φ^σ_d(s1, . . . , sd) = (2πσ²)^{−d/2} e^{−(s1²+···+sd²)/(2σ²)}, (s1, . . . , sd) ∈ R^d,

denote the standard Gaussian density function with covariance σ²Id on R^d.
The multiple stochastic integrals In (fn ) of fn ∈ L2 (R+ )◦n with respect to
(Bt )t∈R+ satisfy the multiplication formula

    I_n(f_n) I_m(g_m) = \sum_{k=0}^{n \wedge m} k! \binom{n}{k} \binom{m}{k} I_{n+m-2k}(f_n \otimes_k g_m),

where fn ⊗k gm is the contraction

    (t_{k+1}, \ldots, t_n, s_{k+1}, \ldots, s_m) \longmapsto
    \int_0^\infty \cdots \int_0^\infty f_n(t_1, \ldots, t_n) \, g_m(t_1, \ldots, t_k, s_{k+1}, \ldots, s_m) \, dt_1 \cdots dt_k,

tk+1 , . . . , tn , sk+1 , . . . , sm ∈ R+ . In particular, we have

    I_1(u) I_n(v^{\otimes n}) = I_{n+1}(v^{\otimes n} \circ u) + n \langle u, v \rangle_{L^2(\mathbb{R}_+)} I_{n-1}(v^{\otimes(n-1)})   (9.6)

for n ≥ 1, and

    I_1(u) I_1(v) = I_2(v \circ u) + \langle u, v \rangle_{L^2(\mathbb{R}_+)}
for n = 1. The Hermite polynomials, cf. Section A.1 in the appendix, will be
used to represent the multiple Wiener integrals.

Proposition 9.2.1 For any orthogonal family {u1 , . . . , ud } in L2 (R+ ) we have

    I_n\big( u_1^{\otimes n_1} \circ \cdots \circ u_d^{\otimes n_d} \big) = \prod_{k=1}^d H_{n_k}\big( I_1(u_k); \| u_k \|_2^2 \big),

where n = n1 + · · · + nd .
Proof : We have

    H_0\big( I_1(u); \| u \|_2^2 \big) = I_0(u^{\otimes 0}) = 1 \quad \text{and} \quad H_1\big( I_1(u); \| u \|_2^2 \big) = I_1(u),

hence the proof follows by induction on n ≥ 1, by comparison of the recurrence
formula (A.1) with the multiplication formula (9.6).

In particular we have

    I_n\big( 1_{[0,t]}^{\otimes n} \big) = n! \int_0^t \int_0^{s_n} \cdots \int_0^{s_2} dB_{s_1} \cdots dB_{s_n} = H_n(B_t; t),

and

    I_n\big( 1_{[t_0,t_1]}^{\otimes n_1} \circ \cdots \circ 1_{[t_{d-1},t_d]}^{\otimes n_d} \big)
    = \prod_{k=1}^d I_{n_k}\big( 1_{[t_{k-1},t_k]}^{\otimes n_k} \big)
    = \prod_{k=1}^d H_{n_k}(B_{t_k} - B_{t_{k-1}}; t_k - t_{k-1}).

From this we recover the orthogonality properties of the Hermite polynomials
with respect to the Gaussian density:

    \int_{-\infty}^\infty H_n(x; t) H_m(x; t) \, e^{-x^2/(2t)} \frac{dx}{\sqrt{2\pi t}}
    = IE[H_n(B_t; t) H_m(B_t; t)]
    = IE\big[ I_n\big( 1_{[0,t]}^{\otimes n} \big) I_m\big( 1_{[0,t]}^{\otimes m} \big) \big]
    = 1_{\{n=m\}} \, n! \, t^n.
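As an illustrative numerical check (a Python sketch, not part of the text), the orthogonality relation above can be verified by quadrature, generating H_n(x; t) from the three-term recurrence H_{n+1}(x; t) = x H_n(x; t) − n t H_{n−1}(x; t) implied by (9.6) with u = v:

```python
import math

def hermite(n, x, t):
    # H_n(x; t) via the recurrence H_{n+1} = x H_n - n t H_{n-1},
    # with H_0 = 1 and H_1 = x.
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * t * h_prev
    return h

def gauss_pair(n, m, t, lo=-12.0, hi=12.0, steps=200_000):
    # Trapezoidal approximation of
    #   int H_n(x;t) H_m(x;t) e^{-x^2/(2t)} dx / sqrt(2 pi t),
    # which should equal 1_{n=m} n! t^n.
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        w = math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)
        total += hermite(n, x, t) * hermite(m, x, t) * w * (0.5 if i in (0, steps) else 1.0)
    return total * dx

t = 1.0
print(gauss_pair(3, 3, t))  # ≈ 3! t^3 = 6
print(gauss_pair(3, 2, t))  # ≈ 0
```

The same recurrence reproduces H_2(x; t) = x² − t and H_3(x; t) = x³ − 3tx, consistent with the chaos expansions above.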

In addition, by Lemma 9.1.2 we have that

    H_n(B_t; t) = I_n\big( 1_{[0,t]}^{\otimes n} \big) = IE\big[ I_n\big( 1_{[0,T]}^{\otimes n} \big) \,\big|\, \mathcal{F}_t \big], \qquad 0 \le t \le T,

is a martingale which, from Itô's formula, can be written as

    H_n(B_t; t) = I_n\big( 1_{[0,t]}^{\otimes n} \big)
    = H_n(0; 0) + \int_0^t \frac{\partial H_n}{\partial x}(B_s; s) \, dB_s
      + \frac{1}{2} \int_0^t \frac{\partial^2 H_n}{\partial x^2}(B_s; s) \, ds
      + \int_0^t \frac{\partial H_n}{\partial s}(B_s; s) \, ds
    = n \int_0^t I_{n-1}\big( 1_{[0,s]}^{\otimes(n-1)} \big) \, dB_s
    = n \int_0^t H_{n-1}(B_s; s) \, dB_s.
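The two identities used in this computation, namely \partial H_n/\partial x = n H_{n-1} and the heat equation \partial H_n/\partial s + (1/2) \partial^2 H_n/\partial x^2 = 0 (which cancels the two ds terms), can be checked by finite differences; the sketch below is illustrative only and uses the same recurrence as before:

```python
import math

def hermite(n, x, t):
    # H_n(x; t) from the recurrence H_{n+1} = x H_n - n t H_{n-1}, H_0 = 1, H_1 = x.
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * t * h_prev
    return h

n, x, t, eps = 4, 0.7, 0.9, 1e-5

# dH_n/dx = n H_{n-1}: the identity behind the dB_s term above.
dx = (hermite(n, x + eps, t) - hermite(n, x - eps, t)) / (2 * eps)
print(dx, n * hermite(n - 1, x, t))

# dH_n/dt + (1/2) d^2 H_n/dx^2 = 0: the ds terms above cancel.
dt = (hermite(n, x, t + eps) - hermite(n, x, t - eps)) / (2 * eps)
dxx = (hermite(n, x + eps, t) - 2 * hermite(n, x, t) + hermite(n, x - eps, t)) / eps**2
print(dt + 0.5 * dxx)  # ≈ 0
```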

Given fn ∈ L2 (R+ )⊗n with orthogonal expansion

    f_n = \sum_{\substack{n_1 + \cdots + n_d = n \\ k_1, \ldots, k_d \ge 0}} a^{n_1, \ldots, n_d}_{k_1, \ldots, k_d} \, e_{k_1}^{\otimes n_1} \circ \cdots \circ e_{k_d}^{\otimes n_d}

in an orthonormal basis (en )n∈N of L2 (R+ ), we have

    I_n(f_n) = \sum_{\substack{n_1 + \cdots + n_d = n \\ k_1, \ldots, k_d \ge 0}} a^{n_1, \ldots, n_d}_{k_1, \ldots, k_d} \, H_{n_1}(I_1(e_{k_1}); 1) \cdots H_{n_d}(I_1(e_{k_d}); 1),

where the coefficients a^{n_1, \ldots, n_d}_{k_1, \ldots, k_d} are given by

    a^{n_1, \ldots, n_d}_{k_1, \ldots, k_d}
    = \frac{1}{n_1! \cdots n_d!} \big\langle I_n(f_n), I_n\big( e_{k_1}^{\otimes n_1} \circ \cdots \circ e_{k_d}^{\otimes n_d} \big) \big\rangle_{L^2(\Omega)}
    = \frac{n!}{n_1! \cdots n_d!} \big\langle f_n, e_{k_1}^{\otimes n_1} \circ \cdots \circ e_{k_d}^{\otimes n_d} \big\rangle_{L^2(\mathbb{R}_+^n)}.

The following relation for exponential vectors can be recovered independently
using the Hermite polynomials.

Proposition 9.2.2 We have

    \xi(u) = \sum_{n=0}^\infty \frac{1}{n!} I_n(u^{\otimes n}) = \exp\Big( I_1(u) - \frac{1}{2} \| u \|^2_{L^2(\mathbb{R}_+)} \Big).   (9.7)

Proof : Relation (9.7) follows from Proposition A.1.3-i) and Proposition
9.2.1, which reads I_n(u^{\otimes n}) = H_n\big( I_1(u); \| u \|^2_{L^2(\mathbb{R}_+)} \big), n ≥ 1.
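At the level of the Hermite polynomials, (9.7) with the identification x = I_1(u) and t = ‖u‖² reads Σ_n H_n(x; t)/n! = exp(x − t/2); the following illustrative sketch confirms this numerically:

```python
import math

def hermite(n, x, t):
    # H_n(x; t) from H_{n+1} = x H_n - n t H_{n-1}, H_0 = 1, H_1 = x.
    h_prev, h = 1.0, x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, x * h - k * t * h_prev
    return h

# With x standing for I_1(u) and t for ||u||^2, (9.7) reads
#   sum_n H_n(x; t)/n!  =  exp(x - t/2).
x, t = 1.3, 0.8
series = sum(hermite(n, x, t) / math.factorial(n) for n in range(60))
print(series, math.exp(x - t / 2))
```

The series converges rapidly, since the generating function exp(λx − λ²t/2) is entire in λ.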

The following property can be proved by a Fourier transform argument and
the density in L2 (Ω) of the linear space spanned by the exponential
vectors

    \Big\{ \exp\Big( I_1(u) - \frac{1}{2} \| u \|^2_{L^2(\mathbb{R}_+)} \Big) : u \in L^2(\mathbb{R}_+) \Big\},

cf. e.g., Theorem 4.1, p. 134 of [52].

Proposition 9.2.3 The Brownian motion (Bt )t∈R+ has the chaos representation
property, i.e., any F ∈ L2 (Ω) admits a chaos decomposition

    F = \sum_{k=0}^\infty I_k(g_k).

Assume that F has the form F = g(I1 (e1 ), . . . , I1 (ek )) for some

    g \in L^2\Big( \mathbb{R}^k, \frac{1}{(2\pi)^{k/2}} e^{-|x|^2/2} \, dx \Big),

and admits the chaos expansion F = \sum_{n=0}^\infty I_n(f_n). Then for all n ≥ 1 there exists
a (multivariate) Hermite polynomial Pn of degree n such that

    I_n(f_n) = P_n\big( I_1(e_1), \ldots, I_1(e_k) \big).

9.2.1 Gradient and divergence operators

In the Brownian case, the operator D has the derivation property, i.e.,

    D_t(FG) = F D_t G + G D_t F, \qquad F, G \in \mathcal{S}.

More precisely, introducing the algebra of bounded smooth functionals

    \mathcal{S} = \big\{ F = f\big( W(h_1), \ldots, W(h_n) \big) : n \in \mathbb{N}, \ f \in C_b^\infty(\mathbb{R}^n), \ h_1, \ldots, h_n \in h \big\},

the derivation operator

    D : \mathcal{S} \longrightarrow L^2(\Omega) \otimes h \cong L^2(\Omega; h)

is given by

    DF = \sum_{i=1}^n \frac{\partial f}{\partial x_i}\big( W(h_1), \ldots, W(h_n) \big) \otimes h_i

for F = f (W(h1 ), . . . , W(hn )) ∈ S. In particular, D is a derivation with respect
to the natural L∞ (Ω)-bimodule structure of L2 (Ω; h), i.e.,

    D(FG) = F(DG) + (DF)G, \qquad F, G \in \mathcal{S}.

We can also define the gradient Du F = \langle u, DF \rangle with respect to h-valued
random variables u ∈ L2 (Ω; h); this is L∞ (Ω)-linear in the first argument and
a derivation in the second, i.e.,

    D_{Fu} G = F D_u G \quad \text{and} \quad D_u(FG) = F(D_u G) + (D_u F)G.

The derivation operator D is a closable operator from Lp (Ω) to Lp (Ω; h) for
1 ≤ p ≤ ∞. We will denote its closure again by D. Given that L2 (Ω) and
L2 (Ω; h) are Hilbert spaces (with the obvious inner products), the closability
of D implies that it has an adjoint. We will call the adjoint of

    D : L^2(\Omega) \longrightarrow L^2(\Omega; h)

the divergence operator and denote it by

    \delta : L^2(\Omega; h) \longrightarrow L^2(\Omega).

Denoting by

    \mathcal{S}_h = \Big\{ u = \sum_{j=1}^n F_j \otimes h_j : F_1, \ldots, F_n \in \mathcal{S}, \ h_1, \ldots, h_n \in h, \ n \in \mathbb{N} \Big\}

the smooth elementary h-valued random variables, δ(u) is then given by

    \delta(u) = \sum_{j=1}^n F_j W(h_j) - \sum_{j=1}^n \langle h_j, DF_j \rangle_h

for u = \sum_{j=1}^n F_j ⊗ hj ∈ Sh . If we take, e.g., h = L2 (R+ ), then Bt = W(1[0,t] )
is a standard Brownian motion, and the h-valued random variables can also be
interpreted as stochastic processes indexed by R+ .

Proposition 9.2.4 Let u1 , . . . , un ∈ L2 (R+ ) and

    F = f\big( I_1(u_1), \ldots, I_1(u_n) \big),

where f is a polynomial or f ∈ Cb1 (Rn ). We have

    D_t F = \sum_{i=1}^n u_i(t) \frac{\partial f}{\partial x_i}\big( I_1(u_1), \ldots, I_1(u_n) \big), \qquad t \in \mathbb{R}_+.   (9.8)

In particular, for f polynomial or f ∈ Cb1 (Rn ) we have

    D_t f(B_{t_1}, \ldots, B_{t_n}) = \sum_{i=1}^n 1_{[0,t_i]}(t) \frac{\partial f}{\partial x_i}(B_{t_1}, \ldots, B_{t_n}),

0 ≤ t1 < · · · < tn , and (9.8) can also be written as

    \langle DF, h \rangle_{L^2(\mathbb{R}_+)}
    = \frac{d}{d\varepsilon} f\Big( \int_0^\infty u_1(t)(dB_t + \varepsilon h(t) dt), \ldots, \int_0^\infty u_n(t)(dB_t + \varepsilon h(t) dt) \Big)_{\big| \varepsilon = 0}
    = \frac{d}{d\varepsilon} F(\omega + \varepsilon h)_{\big| \varepsilon = 0},

h ∈ L2 (R+ ), where the limit exists in L2 (Ω). We refer to the above identity as
the probabilistic interpretation of the gradient operator D on the Wiener space.
In other words, the scalar product \langle h, DF \rangle_{L^2(\mathbb{R}_+)} coincides with the Fréchet
derivative

    D_h F = \frac{d}{d\varepsilon}_{\big| \varepsilon = 0} f\big( W(h_1) + \varepsilon \langle h, h_1 \rangle, \ldots, W(h_n) + \varepsilon \langle h, h_n \rangle \big)



for all F = f (W(h1 ), . . . , W(hn )) ∈ S and all h ∈ h. We also have the integration
by parts formulas

    IE[F W(h)] = IE\big[ \langle h, DF \rangle_{L^2(\mathbb{R}_+)} \big],   (9.9)

and

    IE[F G W(h)] = IE\big[ \langle h, DF \rangle_{L^2(\mathbb{R}_+)} G + F \langle h, DG \rangle_{L^2(\mathbb{R}_+)} \big],   (9.10)

for all F, G ∈ S, h ∈ h. The derivation operator D and the divergence operator
δ satisfy the commutation relation

    D_h \delta(u) = \langle h, u \rangle_{L^2(\mathbb{R}_+)} + \delta(D_h u),   (9.11)

and the Skorohod isometry

    IE[\delta(u)\delta(v)] = IE\big[ \langle u, v \rangle_{L^2(\mathbb{R}_+)} \big] + IE[\mathrm{Tr}(Du \circ Dv)],

for h ∈ h, u, v ∈ Sh , F ∈ S. In addition, we have the divergence formula of the
next proposition.

Proposition 9.2.5 For all u ∈ U and F ∈ S we have

    \delta(u) F = \delta(uF) + \langle DF, u \rangle_{L^2(\mathbb{R}_+)}.   (9.12)
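Reduced to a single Gaussian coordinate ξ = W(h) with ‖h‖ = 1 and F = f(ξ), the integration by parts formula (9.9) becomes IE[ξ f(ξ)] = IE[f′(ξ)]; the following quadrature sketch is illustrative, with an arbitrary test function f:

```python
import math

def gauss_expect(g, lo=-12.0, hi=12.0, steps=200_000):
    # E[g(xi)] for xi ~ N(0,1) by trapezoidal quadrature.
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        w = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
        total += g(x) * w * (0.5 if i in (0, steps) else 1.0)
    return total * dx

f = lambda x: x**3 + math.sin(x)       # hypothetical test function
fprime = lambda x: 3 * x**2 + math.cos(x)

lhs = gauss_expect(lambda x: f(x) * x)   # IE[F W(h)]
rhs = gauss_expect(fprime)               # IE[<h, DF>]
print(lhs, rhs)  # both ≈ 3 + e^{-1/2}
```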

From Proposition 9.1.13, the Skorohod integral δ(u) coincides with the Itô
integral of u ∈ L2 (W; H) with respect to Brownian motion, i.e.,

    \delta(u) = \int_0^\infty u_t \, dB_t,

when u is square-integrable and adapted with respect to the Brownian filtration
(Ft )t∈R+ . In this case the divergence operator is also called the Hitsuda–
Skorohod integral.
The operator D can be extended in the obvious way to h-valued random
variables, i.e., as D ⊗ Idh . Thus Du is an h ⊗ h-valued random variable and can
also be interpreted as a random variable whose values are (Hilbert–Schmidt)
operators on h. If {ej : j ∈ N} is a complete orthonormal system on h, then
Tr(Du ◦ Dv) can be computed as

    \mathrm{Tr}(Du \circ Dv) = \sum_{i,j \in \mathbb{N}} \langle D_{e_i} u, e_j \rangle_{L^2(\mathbb{R}_+)} \langle D_{e_j} v, e_i \rangle_{L^2(\mathbb{R}_+)}.

9.3 Poisson space

Let X be a σ-compact metric space with a diffuse Radon measure σ. The space
of configurations of X is the set of Radon measures

    \Omega^X := \Big\{ \omega = \sum_{k=0}^n \epsilon_{x_k} : (x_k)_{k=0}^n \subset X, \ n \in \mathbb{N} \cup \{\infty\} \Big\},   (9.13)

where \epsilon_x denotes the Dirac measure at x ∈ X, i.e.,

    \epsilon_x(A) = 1_A(x), \qquad A \in \mathcal{B}(X),

and Ω^X defined in (9.13) is restricted to locally finite configurations.
The configuration space Ω^X is endowed with the vague topology and its
associated σ-algebra denoted by F^X , cf. [6]. Under the measure \pi_\sigma^X on
(Ω^X , F^X ), the N^n-valued vector

    \omega \longmapsto \big( \omega(A_1), \ldots, \omega(A_n) \big)

has independent components with Poisson distributions of respective parameters
σ (A1 ), . . . , σ (An ), whenever A1 , . . . , An are compact disjoint subsets of X.
When X is compact we will consider Poisson functionals of the form

    F(\omega) = f_0 \, 1_{\{\omega(X) = 0\}} + \sum_{n=1}^\infty 1_{\{\omega(X) = n\}} f_n(x_1, \ldots, x_n),

where fn ∈ L1 (X n , σ ⊗n ) is symmetric in n variables, n ≥ 1.
Recall that the Fourier transform of \pi_\sigma^X via the Poisson stochastic integral

    \int_X f(x) \, \omega(dx) = \sum_{x \in \omega} f(x), \qquad f \in L^1(X, \sigma),

is given by

    IE_{\pi_\sigma}\Big[ \exp\Big( i \int_X f(x) \, \omega(dx) \Big) \Big] = \exp\Big( \int_X (e^{if(x)} - 1) \, \sigma(dx) \Big),   (9.14)

f ∈ L1 (X, σ), which shows that

    IE\Big[ \int_X f(x) \, \omega(dx) \Big] = \int_X f(x) \, \sigma(dx), \qquad f \in L^1(X, \sigma),

and

    IE\Big[ \Big( \int_X f(x)(\omega(dx) - \sigma(dx)) \Big)^2 \Big] = \int_X |f(x)|^2 \, \sigma(dx), \qquad f \in L^2(X, \sigma).
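For the simplest test function f = c 1_A, the left-hand side of (9.14) is the characteristic function of the Poisson random variable N = ω(A) with parameter λ = σ(A), so the identity can be checked directly from the Poisson probability mass function (an illustrative sketch):

```python
import cmath
import math

def poisson_char(c, lam, nmax=200):
    # IE[exp(i c N)] with N ~ Poisson(lam), pmf accumulated iteratively.
    p, total = math.exp(-lam), 0j
    for n in range(nmax):
        total += cmath.exp(1j * c * n) * p
        p *= lam / (n + 1)   # next pmf value
    return total

c, lam = 0.7, 2.5                                 # f = c 1_A with sigma(A) = lam
lhs = poisson_char(c, lam)
rhs = cmath.exp(lam * (cmath.exp(1j * c) - 1))    # right-hand side of (9.14)
print(lhs, rhs)
```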

When f ∈ L2 (X, σ), Relation (9.14) extends as

    IE_{\pi_\sigma}\Big[ \exp\Big( i \int_X f(x)(\omega(dx) - \sigma(dx)) \Big) \Big]
    = \exp\Big( \int_X (e^{if(x)} - if(x) - 1) \, \sigma(dx) \Big).

The standard Poisson process (Nt )t∈R+ with intensity λ > 0 can be constructed
as

    N_t(\omega) = \omega([0, t]), \qquad t \in \mathbb{R}_+,

on the Poisson space

    \Omega = \Big\{ \omega = \sum_{k=1}^n \epsilon_{t_k} : 0 \le t_1 < \cdots < t_n, \ n \in \mathbb{N} \cup \{\infty\} \Big\}

over X = R+ , with the intensity measure

    \nu(dx) = \lambda \, dx, \qquad \lambda > 0.

In this setting, every configuration ω ∈ Ω can be viewed as the ordered
sequence ω = (Tk )k≥1 of jump times of (Nt )t∈R+ on R+ .

Proposition 9.3.1 Let fn : Rn+ −→ R be continuous with compact support in
Rn+ . Then we have the P(dω)-almost sure equality

    I_n(f_n)(\omega) = n! \int_0^\infty \int_0^{t_n^-} \cdots \int_0^{t_2^-} f_n(t_1, \ldots, t_n)(\omega(dt_1) - dt_1) \cdots (\omega(dt_n) - dt_n).

The above formula can also be written as

    I_n(f_n) = n! \int_0^\infty \int_0^{t_n^-} \cdots \int_0^{t_2^-} f_n(t_1, \ldots, t_n) \, d(N_{t_1} - t_1) \cdots d(N_{t_n} - t_n),

and by symmetry of fn in n variables we have

    I_n(f_n) = \int_{\Delta_n} f_n(t_1, \ldots, t_n)(\omega(dt_1) - dt_1) \cdots (\omega(dt_n) - dt_n),

with

    \Delta_n = \{ (t_1, \ldots, t_n) \in \mathbb{R}_+^n : t_i \ne t_j, \ \forall i \ne j \}.

Letting

    \Delta_n^X = \{ (x_1, \ldots, x_n) \in X^n : x_i \ne x_j, \ \forall i \ne j \},

we have

    I_n^X(f_n)(\omega) = \int_{\Delta_n^X} f_n(x_1, \ldots, x_n)(\omega(dx_1) - \sigma(dx_1)) \cdots (\omega(dx_n) - \sigma(dx_n)).

The integral I_n^X(f_n) extends to symmetric functions fn ∈ L2 (X, σ)◦n via the
isometry formula

    IE_{\pi_\sigma}\big[ I_n^X(f_n) I_m^X(g_m) \big] = n! \, 1_{\{n=m\}} \langle f_n, g_m \rangle_{L^2(X,\sigma)^{\circ n}},

for all symmetric functions fn ∈ L2 (X, σ)◦n , gm ∈ L2 (X, σ)◦m .

Proposition 9.3.2 For u, v ∈ L2 (X, σ) such that uv ∈ L2 (X, σ) we have

    I_1^X(u) I_n^X(v^{\otimes n})
    = I_{n+1}^X(v^{\otimes n} \circ u) + n I_n^X\big( (uv) \circ v^{\otimes(n-1)} \big) + n \langle u, v \rangle_{L^2(X,\sigma)} I_{n-1}^X(v^{\otimes(n-1)}).

We have the multiplication formula

    I_n^X(f_n) I_m^X(g_m) = \sum_{s=0}^{2(n \wedge m)} I_{n+m-s}^X(h_{n,m,s}),

fn ∈ L2 (X, σ)◦n , gm ∈ L2 (X, σ)◦m , where

    h_{n,m,s} = \sum_{s \le 2i \le 2(s \wedge n \wedge m)} i! \binom{n}{i} \binom{m}{i} \binom{i}{s-i} f_n \circ_i^{s-i} g_m,

and f_n \circ_k^l g_m , 0 ≤ l ≤ k, is the symmetrisation of

    (x_{l+1}, \ldots, x_n, y_{k+1}, \ldots, y_m) \longmapsto
    \int_{X^l} f_n(x_1, \ldots, x_n) g_m(x_1, \ldots, x_k, y_{k+1}, \ldots, y_m) \, \sigma(dx_1) \cdots \sigma(dx_l)

in n + m − k − l variables. The multiple Poisson stochastic integral of the
function

    1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d}

is linked to the Charlier polynomials by the relation

    I_n^X\big( 1_{A_1}^{\otimes k_1} \circ \cdots \circ 1_{A_d}^{\otimes k_d} \big)(\omega) = \prod_{i=1}^d C_{k_i}\big( \omega(A_i), \sigma(A_i) \big),

provided A1 , . . . , Ad are mutually disjoint compact subsets of X and n =
k1 + · · · + kd . The following expression of the exponential vector

    \xi(u) = \sum_{n=0}^\infty \frac{1}{n!} I_n^X(u^{\otimes n})

is referred to as the Doléans exponential and satisfies

    \xi(u) = \exp\Big( \int_X u(x)(\omega(dx) - \sigma(dx)) \Big) \prod_{x \in \omega} \big( (1 + u(x)) e^{-u(x)} \big),

u ∈ L2 (X). We note that the Poisson measure has the chaos representation
property, i.e., every square-integrable functional F ∈ L2 (Ω^X , πσ ) admits the
orthogonal Wiener–Poisson decomposition

    F = \sum_{n=0}^\infty I_n^X(f_n)

in series of multiple stochastic integrals.
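For u = c 1_A the Doléans exponential reduces to ξ(u) = e^{−cσ(A)} (1 + c)^{ω(A)}, and since I_n^X(u^{⊗n}) is centred for n ≥ 1, its expectation must equal 1; an illustrative check from the Poisson probability mass function:

```python
import math

def expect_doleans(c, lam, nmax=200):
    # For u = c 1_A with sigma(A) = lam, the Doleans exponential reduces to
    #   xi(u) = exp(-c lam) (1 + c)^N,  N = omega(A) ~ Poisson(lam),
    # and E[xi(u)] = 1 since the multiple integrals of order n >= 1 are centred.
    p, total = math.exp(-lam), 0.0
    for n in range(nmax):
        total += math.exp(-c * lam) * (1 + c)**n * p
        p *= lam / (n + 1)
    return total

print(expect_doleans(0.4, 3.0))  # ≈ 1
```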

9.3.1 Finite difference gradient

In this section we study the probabilistic interpretation and the extension to the
Poisson space on X of the operators D and δ defined in Definitions 9.1.3 and
9.1.5. Consider the spaces S and U of random variables and processes given by

    \mathcal{S} = \Big\{ \sum_{k=0}^n I_k^X(f_k) : f_k \in L^4(X)^{\circ k}, \ k = 0, \ldots, n, \ n \in \mathbb{N} \Big\},

and

    \mathcal{U} = \Big\{ \sum_{k=0}^n I_k^X(g_k(*, \cdot)) : g_k \in L^2(X)^{\circ k} \otimes L^2(X), \ k = 0, \ldots, n, \ n \in \mathbb{N} \Big\}.

Definition 9.3.3 Let the linear, unbounded, closable operators

    D^X : L^2(\Omega^X, \pi_\sigma) \to L^2(\Omega^X \times X, P \otimes \sigma)

and

    \delta^X : L^2(\Omega^X \times X, P \otimes \sigma) \to L^2(\Omega^X, P)

be defined on S and U respectively by

    D_x^X I_n^X(f_n) := n I_{n-1}^X(f_n(*, x)), \qquad \pi_\sigma(d\omega) \otimes \sigma(dx)\text{-a.e.},

n ∈ N, fn ∈ L2 (X, σ)◦n , and

    \delta^X\big( I_n^X(f_{n+1}(*, \cdot)) \big) := I_{n+1}^X(\tilde{f}_{n+1}), \qquad \pi_\sigma(d\omega)\text{-a.s.},

n ∈ N, fn+1 ∈ L2 (X, σ)◦n ⊗ L2 (X, σ).



In particular we have

    \delta^X(f) = I_1^X(f) = \int_X f(x)(\omega(dx) - \sigma(dx)), \qquad f \in L^2(X, \sigma),

and

    \delta^X(1_A) = \omega(A) - \sigma(A), \qquad A \in \mathcal{B}(X),

and the Skorohod integral has zero expectation:

    IE[\delta^X(u)] = 0, \qquad u \in \mathrm{Dom}(\delta^X).

In case X = R+ we simply write D and δ instead of D^{R+} and δ^{R+} . The
commutation relation between D^X and δ^X is given by

    D_x^X \delta^X(u) = u(x) + \delta^X(D_x^X u), \qquad u \in \mathcal{U}.

Let Dom(D^X ) denote the set of functionals F : Ω^X −→ R with the expansion

    F = \sum_{n=0}^\infty I_n^X(f_n), \quad \text{such that} \quad \sum_{n=1}^\infty n! \, n \, \| f_n \|^2_{L^2(X^n, \sigma^{\otimes n})} < \infty,

and let Dom(δ^X ) denote the set of processes u : Ω^X × X −→ R with the
expansion

    u(x) = \sum_{n=0}^\infty I_n^X(f_{n+1}(*, x)), \quad x \in X, \quad \text{such that} \quad \sum_{n=1}^\infty n! \, \| \tilde{f}_n \|^2_{L^2(X^n, \sigma^{\otimes n})} < \infty.

The following duality relation can be obtained by transfer from Proposition
9.1.6.

Proposition 9.3.4 The operators D^X and δ^X satisfy the duality relation

    IE\big[ \langle D^X F, u \rangle_{L^2(X,\sigma)} \big] = IE[F \delta^X(u)],

F ∈ Dom(D^X ), u ∈ Dom(δ^X ).

The next lemma gives the probabilistic interpretation of the gradient D^X ,
as an extension of the finite difference operator (3.3) to spaces of random
configurations.

Lemma 9.3.5 For any F of the form

    F = f\big( I_1^X(u_1), \ldots, I_1^X(u_n) \big),   (9.15)



with u1 , . . . , un ∈ Cc (X), where f is a bounded continuous function or a
polynomial on Rn , we have F ∈ Dom(D^X ) and

    D_x^X F(\omega) = F(\omega \cup \{x\}) - F(\omega), \qquad P \otimes \sigma(d\omega, dx)\text{-a.e.},   (9.16)

where as a convention we identify ω ∈ Ω^X with its support.

Definition 9.3.6 Given a mapping F : Ω^X −→ R, let

    \epsilon_x^+ F : \Omega^X \longrightarrow \mathbb{R} \quad \text{and} \quad \epsilon_x^- F : \Omega^X \longrightarrow \mathbb{R},

x ∈ X, be defined by

    (\epsilon_x^- F)(\omega) = F(\omega \setminus \{x\}) \quad \text{and} \quad (\epsilon_x^+ F)(\omega) = F(\omega \cup \{x\}), \qquad \omega \in \Omega^X.

Note that Relation (9.16) can be written as

    D_x^X F = \epsilon_x^+ F - F, \qquad x \in X.

On the other hand, the result of Lemma 9.3.5 is clearly verified on simple
functionals. For instance, when F = I_1^X(u) is a single Poisson stochastic integral,
we have

    D_x^X I_1^X(u)(\omega) = I_1^X(u)(\omega \cup \{x\}) - I_1^X(u)(\omega)
    = \int_X u(y)(\omega(dy) + \epsilon_x(dy) - \sigma(dy)) - \int_X u(y)(\omega(dy) - \sigma(dy))
    = \int_X u(y) \, \epsilon_x(dy)
    = u(x), \qquad x \in X.

As in [126], the law of the mapping (x, ω) \longmapsto ω ∪ {x} under 1A (x)σ(dx)πσ (dω)
is absolutely continuous with respect to πσ . In particular, (ω, x) \longmapsto F(ω ∪ {x})
is well defined πσ ⊗ σ-a.e., and this justifies the extension of Lemma 9.3.5 in the
next proposition.

Proposition 9.3.7 For any F ∈ Dom(D^X ) we have

    D_x^X F(\omega) = F(\omega \cup \{x\}) - F(\omega), \qquad \pi_\sigma(d\omega)\sigma(dx)\text{-a.e.}

Proof : There exists a sequence (Fn )n∈N of functionals of the form (9.15)
such that (D^X Fn )n∈N converges everywhere to D^X F on a set AF such that
(πσ ⊗ σ)(A_F^c) = 0. For each n ∈ N, there exists a measurable set Bn ⊂ Ω^X × X
such that (πσ ⊗ σ)(B_n^c) = 0 and

    D_x^X F_n(\omega) = F_n(\omega \cup \{x\}) - F_n(\omega), \qquad (\omega, x) \in B_n.

Taking the limit as n goes to infinity on (ω, x) ∈ A_F \cap \bigcap_{n=0}^\infty B_n , we get

    D_x^X F(\omega) = F(\omega \cup \{x\}) - F(\omega), \qquad \pi_\sigma(d\omega)\sigma(dx)\text{-a.e.}

Proposition 9.3.7 implies that D^X satisfies the following finite difference
product rule.

Proposition 9.3.8 We have, for F, G ∈ S:

    D_x^X(FG) = F D_x^X G + G D_x^X F + (D_x^X F)(D_x^X G), \qquad \pi_\sigma(d\omega)\sigma(dx)\text{-a.e.}
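The product rule of Proposition 9.3.8 is a pointwise consequence of D_x^X F = ε_x^+ F − F and can be checked on explicit functionals; in the following sketch the configuration, the set A and the functionals F, G are all hypothetical choices made for illustration:

```python
# A configuration omega is modelled as a finite set of points of X = [0, 1],
# and epsilon_x^+ adds the point x to the configuration.

def D(F, omega, x):
    # finite difference gradient D_x F = (epsilon_x^+ F) - F
    return F(omega | {x}) - F(omega)

A = (0.2, 0.6)                                            # hypothetical window
F = lambda w: sum(1 for p in w if A[0] <= p < A[1]) ** 2  # squared count in A
G = lambda w: sum(w) + 1.0                                # shifted sum statistic

omega = {0.1, 0.3, 0.5, 0.9}
x = 0.4  # a point not already in omega

FG = lambda w: F(w) * G(w)
lhs = D(FG, omega, x)
rhs = F(omega) * D(G, omega, x) + G(omega) * D(F, omega, x) + D(F, omega, x) * D(G, omega, x)
print(lhs, rhs)  # equal
```

The extra term (D_x^X F)(D_x^X G) is exactly what distinguishes this finite difference rule from the Brownian derivation property of Section 9.2.1.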

9.3.2 Divergence operator

The adjoint δ^X of D^X satisfies the following divergence formula.

Proposition 9.3.9 Let u : Ω^X × X −→ R and F : Ω^X −→ R be such that u(·, ω),
D_·^X F(ω), and u(·, ω) D_·^X F(ω) ∈ L1 (X, σ), ω ∈ Ω^X . We have

    F \delta^X(u) = \delta^X(uF) + \langle u, D^X F \rangle_{L^2(X,\sigma)} + \delta^X(u D^X F).

The relation also holds if the series and integrals converge, or if F ∈ Dom(D^X )
and u ∈ Dom(δ^X ) is such that u D^X F ∈ Dom(δ^X ).

In the next proposition, Relation (9.17) can be seen as a generalisation of
(A.3c) in Proposition A.1.5:

    C_{n+1}(k, t) = k C_n(k - 1, t) - t C_n(k, t),

which is recovered by taking u = 1A and t = σ(A). The following statement
provides a connection between the Skorohod integral and the Poisson stochastic
integral.

Proposition 9.3.10 For all u ∈ Dom(δ^X ) we have

    \delta^X(u) = \int_X u_x(\omega \setminus \{x\})(\omega(dx) - \sigma(dx)).   (9.17)
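The recurrence (A.3c) quoted above determines the Charlier polynomials from C_0 = 1 and C_1(k, t) = k − t, and, mirroring the isometry formula for the multiple Poisson integrals, they satisfy IE[C_n(N, t) C_m(N, t)] = 1_{n=m} n! t^n for N Poisson with parameter t; an illustrative check:

```python
import math

def charlier(n, k, t):
    # C_n(k, t) from the recurrence (A.3c): C_{n+1}(k,t) = k C_n(k-1,t) - t C_n(k,t),
    # with C_0 = 1 and C_1(k,t) = k - t (the value of I_1^X(1_A) at omega(A) = k).
    if n == 0:
        return 1.0
    if n == 1:
        return k - t
    return k * charlier(n - 1, k - 1, t) - t * charlier(n - 1, k, t)

def poisson_expect(g, t, nmax=300):
    # E[g(N)] for N ~ Poisson(t), pmf accumulated iteratively.
    p, total = math.exp(-t), 0.0
    for k in range(nmax):
        total += g(k) * p
        p *= t / (k + 1)
    return total

t = 2.0
print(poisson_expect(lambda k: charlier(3, k, t) ** 2, t))                 # ≈ 3! t^3 = 48
print(poisson_expect(lambda k: charlier(3, k, t) * charlier(2, k, t), t))  # ≈ 0
```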

9.4 Sequence models

In this section we describe a construction of differential operators and integration
by parts formulas based on sequence models, in which

    \Omega = \big\{ \omega = (\omega_k)_{k \in \mathbb{N}} : \omega_k \in \mathbb{R}^{d+2} \big\}

is a linear space of sequences, where ωk = (ωk^0 , . . . , ωk^{d+1} ) ∈ R^{d+2} , k ∈ N,
d ≥ 1, with the norm

    \| \omega \| = \sup_{k \in \mathbb{N}} \frac{\| \omega_k \|_{\mathbb{R}^{d+2}}}{k + 1},

and associated Borel σ-algebra F. This is in connection with the notion of
numerical model in § I-4.3 of [77], in which Brownian motion is built
from a sequence of independent standard Gaussian random variables. In that
spirit, the Malliavin calculus on real sequences is also developed in § I-6.2
of [86].
Consider the finite measure λ on R^{d+2} with density

    d\lambda(t_0, t_1, \ldots, t_{d+1})
    = \frac{1}{2^d \sqrt{2\pi}} e^{-t_0^2/2} e^{-t_1} 1_{[0,\infty)}(t_1) 1_{[-1,1]^d}(t_2, \ldots, t_{d+1}) \, dt_0 \cdots dt_{d+1}.
Definition 9.4.1 We denote by P the probability measure defined on (Ω, F) via

    P\big( \{ \omega = (\omega_k)_{k \in \mathbb{N}} \in \Omega : (\omega_0, \ldots, \omega_n) \in A \} \big) = \lambda^{\otimes(n+1)}(A)

on cylinder sets of the form

    \{ \omega = (\omega_k)_{k \in \mathbb{N}} \in \Omega : (\omega_0, \ldots, \omega_n) \in A \},

where A is a Borel set in (R^{d+2} )^{n+1} and n ∈ N.
We denote by

    \tau_k = (\tau_k^0, \ldots, \tau_k^{d+1}) : \Omega \longrightarrow \mathbb{R}^{d+2}, \qquad k \in \mathbb{N},

the coordinate functionals defined as

    \tau_k(\omega) = (\tau_k^0(\omega), \ldots, \tau_k^{d+1}(\omega)) = (\omega_k^0, \ldots, \omega_k^{d+1}) = \omega_k.

The sequences (τk^0 )k∈N , (τk^1 )k∈N , (τk^i )k∈N , i = 2, . . . , d + 1, are independent
and respectively Gaussian, exponential, and uniform on [−1, 1]. Letting

    E = \mathbb{R} \times (0, \infty) \times (-1, 1)^d, \qquad \bar{E} = \mathbb{R} \times [0, \infty) \times [-1, 1]^d,

we construct a random point process γ as the sequence

    \gamma = \{ T_k : k \ge 1 \} \subset \mathbb{R}_+ \times [-1, 1]^d

of random points defined by

    T_k(\omega) = \Big( \sum_{i=0}^{k-1} \tau_i^1(\omega), \tau_k^2(\omega), \ldots, \tau_k^{d+1}(\omega) \Big), \qquad \omega \in \Omega, \ k \ge 1.

On the other hand the standard Brownian motion indexed by t ∈ [0, 1] can be
constructed as the Paley–Wiener series

1  τn0
W(t) = tτ00 + √ sin(2nπt), t ∈ [0, 1],
π 2 n=1 n

with
√  1  1
τn0 = 2 sin(2πnt)dW(t), n ≥ 1, τ00 = dW(t) = W(1),
0 0

and if (z(t))t∈[0,1] is an adapted process given as


√ ∞
z(t) = F(0, 0) + 2 F(n, 0) cos(2nπt), t ∈ [0, 1],
n=1

then the stochastic integral of (z(t))t∈[0,1] with respect to (W(t))t∈[0,1] is


written as
 1 ∞
z(t)dW(t) = F(n, 0)τn0 .
0 n=0

In this framework, the shift of Brownian motion by a process (ψ(s))s∈[0,1] and
of the point process γ by a random diffeomorphism

    \phi : \mathbb{R}_+ \times [-1, 1]^d \longrightarrow \mathbb{R}_+ \times [-1, 1]^d

will be replaced by a random variable F : Ω −→ H whose components are
denoted by (F(k, i))k∈N, i=0,1,...,d+1 . The link between F and ψ, φ is the
following:

    F(k, 0) = \sqrt{2} \int_0^1 \sin(2\pi k t) \psi(t) \, dt, \quad k \ge 1,
    \qquad F(0, 0) = \int_0^1 \psi(t) \, dt,

    \tau_k^1 + F(k, 1) = \phi^1(T_{k+1}) - \phi^1(T_k), \qquad k \ge 0,

    \tau_k^i + F(k, i) = \phi^i(T_k), \qquad k \ge 0, \ i = 2, \ldots, d + 1.
We now introduce a gradient and a divergence operator in the sequence model.
Given X a real separable Hilbert space with orthonormal basis (hi )i∈N , let H ⊗ X
denote the completed Hilbert–Schmidt tensor product of H with X. Let S be the
set of functionals on Ω of the form f (τk1 , . . . , τkn ), where n ∈ N, k1 , . . . , kn ∈ N,
and f is a polynomial or f ∈ Cc∞ (E^n ). We define a set of smooth vector-valued
functionals as

    \mathcal{S}(X) = \Big\{ \sum_{i=0}^n F_i h_i : F_0, \ldots, F_n \in \mathcal{S}, \ h_0, \ldots, h_n \in X, \ n \in \mathbb{N} \Big\},

which is dense in L2 (Ω, P; X).

Definition 9.4.2 Let D : S(X) → L2 (Ω, H ⊗ X) be defined via

    \langle DF(\omega), h \rangle_H = \lim_{\varepsilon \to 0} \frac{F(\omega + \varepsilon h) - F(\omega)}{\varepsilon}, \qquad \omega \in \Omega, \ h \in H.

Let (ek )k≥0 denote the canonical basis of

    H = \ell^2(\mathbb{N}, \mathbb{R}^{d+2}) = \ell^2(\mathbb{N}) \otimes \mathbb{R}^{d+2}, \quad \text{with} \quad e_k = (e_k^0, \ldots, e_k^{d+1}), \ k \in \mathbb{N}.

We denote DF = (D_k^i F)_{(k,i) \in \mathbb{N} \times \{0,1,\ldots,d+1\}} ∈ L2 (Ω; H ⊗ X), and for u ∈
S(H ⊗ X) we write

    u = \sum_{k=0}^\infty \sum_{i=0}^{d+1} u_k^i e_k^i, \qquad u_k^i \in \mathcal{S}(X), \ k \in \mathbb{N}.

Let also

    E_i = \begin{cases}
    \mathbb{R}^{d+2}, & i = 0, \\
    \big\{ (y^0, \ldots, y^{d+1}) \in \mathbb{R}^{d+2} : y^1 = 0 \big\}, & i = 1, \\
    \big\{ (y^0, \ldots, y^{d+1}) \in \mathbb{R}^{d+2} : y^i \in \{-1, 1\} \big\}, & i = 2, \ldots, d + 1,
    \end{cases}

and

    \Gamma_k^i = \{ \omega \in \Omega : \omega_k \in E_i \}, \qquad k \in \mathbb{N}, \ i = 1, \ldots, d + 1,

and let

    \mathcal{U}(X) := \big\{ u \in \mathcal{S}(H \otimes X) : u_k^i = 0 \text{ on } \Gamma_k^i, \ k \in \mathbb{N}, \ i = 1, \ldots, d + 1 \big\},

which is dense in L2 (Ω; H ⊗ X).

Proposition 9.4.3 The operator D : L2 (Ω; X) → L2 (Ω; H ⊗ X) is closable
and has an adjoint operator δ : U(X) → L2 (Ω; X), with

    IE_P\big[ \langle DF, u \rangle_{H \otimes X} \big] = IE_P\big[ \langle \delta(u), F \rangle_X \big], \qquad u \in \mathcal{U}(X), \ F \in \mathcal{S}(X),

where δ is defined as

    \delta(u) = \sum_{k \in \mathbb{N}} \big( \tau_k^0 u_k^0 + u_k^1 - \mathrm{trace} \, D_k u_k \big), \qquad u \in \mathcal{U}(X),

with

    \mathrm{trace} \, D_k u_k := D_k^0 u_k^0 + \cdots + D_k^{d+1} u_k^{d+1}, \qquad u \in \mathcal{U}(X).

Proof : This result is proved by finite-dimensional integration by parts with
respect to λ, under the boundary conditions imposed on elements of U(X).
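The one-dimensional integration by parts formulas behind this proof are IE[τ f(τ)] = IE[f′(τ)] for the Gaussian coordinate and IE[f′(τ)] = IE[f(τ)] − f(0) for the exponential coordinate, so the boundary term vanishes exactly when f(0) = 0, as imposed on elements of U(X); a quadrature sketch (illustrative only):

```python
import math

def expect(g, density, lo, hi, steps=200_000):
    # Trapezoidal quadrature of the expectation of g under the given density.
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * dx
        total += g(x) * density(x) * (0.5 if i in (0, steps) else 1.0)
    return total * dx

gauss = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
expo = lambda x: math.exp(-x)

f, fp = lambda x: math.sin(x), lambda x: math.cos(x)   # test function with f(0) = 0

# Gaussian coordinate: E[tau f(tau)] = E[f'(tau)], the source of the tau_k^0 u_k^0 term.
print(expect(lambda x: x * f(x), gauss, -12, 12), expect(fp, gauss, -12, 12))

# Exponential coordinate: E[f'(tau)] = E[f(tau)] - f(0), the source of the u_k^1 term.
print(expect(fp, expo, 0, 40), expect(f, expo, 0, 40))
```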

Definition 9.4.4 For p ≥ 1, we call ID_{p,1}(X) the completion of S(X) with
respect to the norm

    \| F \|_{ID_{p,1}(X)} = \big\| \| F \|_X \big\|_{L^p(\Omega)} + \big\| \| DF \|_{H \otimes X} \big\|_{L^p(\Omega)}.

In particular, ID^U_{p,1}(H) is the completion of U(R) with respect to the norm
\| \cdot \|_{ID_{p,1}(H)} . For p = 2, let Dom(δ; X) denote the domain of the closed
extension of δ. As shown in the following proposition, ID^U_{2,1}(H) is a Hilbert
space contained in Dom(δ; X).

Proposition 9.4.5 The operator δ is continuous from ID^U_{2,1}(H) into L2 (Ω),
with

    \| \delta(F) \|^2_{L^2(\Omega)} \le (d + 2) \, \| F \|^2_{ID^U_{2,1}(H)}, \qquad F \in ID^U_{2,1}(H).

Proof : Let F ∈ U(R). We have

    \delta(F) = \sum_{k=0}^\infty \Big( \tau_k^0 F(k, 0) + F(k, 1) - \sum_{i=0}^{d+1} D_k^i F(k, i) \Big),

and

    (\delta(F))^2 \le (d + 2) \Big( \sum_{k=0}^\infty \big( \tau_k^0 F(k, 0) - D_k^0 F(k, 0) \big) \Big)^2
    + (d + 2) \Big( \sum_{k=0}^\infty \big( F(k, 1) - D_k^1 F(k, 1) \big) \Big)^2
    + (d + 2) \sum_{i=2}^{d+1} \Big( \sum_{k=0}^\infty D_k^i F(k, i) \Big)^2,

hence from the Gaussian, exponential and uniform cases, cf. [103], [94], [96],
we have

    \| \delta(F) \|^2_{L^2(\Omega)} \le (d + 2) \, IE_P\Big[ \sum_{k=0}^\infty (F(k, 0))^2 \Big]
    + (d + 2) \sum_{k,l=0}^\infty IE_P\Big[ (D_k^0 F(l, 0))^2 + (D_k^1 F(l, 1))^2 + \sum_{i=2}^{d+1} (D_k^i F(l, i))^2 \Big]
    \le (d + 2) \, \| F \|^2_{ID^U_{2,1}(H)}.

Based on the duality relation between D and δ and on the density of U (X)
in L2 ( ; H ⊗ X), it can be shown that the operators D and δ are local, i.e.,
for F ∈ ID2,1 (X), resp. F ∈ Dom(δ; X), we have DF = 0 almost surely on
{F = 0}, resp. δ(F) = 0 almost surely on {F = 0}.

Notes
Infinite-dimensional analysis has a long history: it began in the sixties (work of
Gross [49], Hida, Elworthy, Krée, . . .), but it is Malliavin [75] who applied
it to diffusions in order to give a probabilistic proof of Hörmander's theorem.
Proposition 9.2.4 is usually taken as a definition of the Malliavin derivative
D, see, e.g., [84]. The relation between multiple Wiener integrals and Hermite
polynomials originates in [107]. Finding the probabilistic interpretation of D
for normal martingales other than the Brownian motion or the Poisson process,
e.g., for the Azéma martingales, is still an open problem.

Exercises
Exercise 9.1 Let (Bt )t∈R+ and (Nt )t∈R+ be an independent standard
Brownian motion and standard Poisson process. Compute the mean and variance of
the following stochastic integrals:

    \int_0^T e^{B_t} \, dB_t, \quad \int_0^T B_t \, dB_t, \quad \int_0^T (N_t - t) \, d(N_t - t), \quad \int_0^T B_t \, d(N_t - t), \quad \int_0^T (N_t - t) \, dB_t.

Exercise 9.2 Let (Bt )t∈[0,T] denote a standard Brownian motion. Compute the
expectation

    IE\Big[ \exp\Big( \beta \int_0^T B_t \, dB_t \Big) \Big]

for all β < 1/T. Hint: expand (BT )^2 by Itô's calculus.

Exercise 9.3 Let (Bt )t∈[0,T] denote a standard Brownian motion generating the
filtration (Ft )t∈[0,T] and let f ∈ L2 ([0, T]). Compute the conditional expectation

    IE\Big[ e^{\int_0^T f(s) \, dB_s} \, \Big| \, \mathcal{F}_t \Big], \qquad 0 \le t \le T.

Exercise 9.4 Let (Bt )t∈[0,T] denote a standard Brownian motion and let α ∈ R.
Solve the stochastic differential equation
dXt = αXt dt + dBt , 0 ≤ t ≤ T.

Exercise 9.5 Consider (Bt )t∈R+ a standard Brownian motion generating the
filtration (Ft )t∈R+ , and let (St )t∈R+ denote the solution of the stochastic
differential equation
dSt = rSt dt + σ St dBt . (9.18)
1. Solve the stochastic differential equation (9.18).
2. Find the function f (t, x) such that
f (t, St ) = IE[(ST )2 | Ft ], 0 ≤ t ≤ T.
3. Show that the process t  −→ f (t, St ) is a martingale.
4. Using the Itô formula, compute the process (ζt )t∈[0,T] in the predictable
representation

    f(t, S_t) = IE[(S_T)^2] + \int_0^t \zeta_s \, dB_s.
Exercise 9.6 Consider the stochastic integral representation

    F = IE[F] + \int_0^\infty u_t \, dM_t   (9.19)
with respect to the normal martingale (Mt )t∈R+ .
1. Show that the process u in (9.19) is unique in L2 (Ω × R+ ).
2. Using the Clark–Ocone formula (cf. e.g., § 5.5 of [98]), find the process
(ut )t∈R+ in the following cases:

a) Mt = Bt is a standard Brownian motion, and


(i) F = (BT )3 , (ii) F = eaBT for some a ∈ R.
b) Mt = Nt − t is a standard (compensated) Poisson process, and
(i) F = (NT )3 , (ii) F = (1 + a)NT for some a > −1.
Exercise 9.7 Consider the multiple stochastic integral expansion

    F = IE[F] + \sum_{n=1}^\infty I_n(f_n)   (9.20)

with respect to the normal martingale (Mt )t∈R+ .
1. Show that the decomposition (9.20) is unique.
2. Find the sequence (fn )n≥1 in the following cases:
a) Mt = Bt is a standard Brownian motion, and
(i) F = (BT )3 , (ii) F = eaBT for some a ∈ R.
b) Mt = Nt − t is a standard (compensated) Poisson process, and
(i) F = (NT )3 , (ii) F = (1 + a)NT for some a > −1.
Exercise 9.8 Consider (Bt )t∈R+ a standard Brownian motion generating the
filtration (Ft )t∈R+ , and let (St )t∈R+ denote the solution of the stochastic
differential equation
dSt = rSt dt + σ St dBt . (9.21)
1. Solve the stochastic differential equation (9.21).
2. Given φ a C 1 bounded function on R, show that there exists a function
f (t, x) such that
f (t, St ) = IE[φ(ST ) | Ft ], 0 ≤ t ≤ T,
with f (T, x) = φ(x), x ∈ R.
3. Show that the process t  −→ f (t, St ) is a martingale.
4. Using the Itô formula, compute the process (ζt )t∈[0,T] in the predictable
representation

    f(t, S_t) = IE[\phi(S_T)] + \int_0^t \zeta_s \, dB_s.
5. What is the partial differential equation satisfied by f (t, x)?
Exercise 9.9 Consider (Nt )t∈R+ a standard Poisson process generating the
filtration (Ft )t∈R+ , and let (St )t∈R+ denote the solution of the stochastic
differential equation
dSt = rSt dt + σ St− d(Nt − t). (9.22)

1. Solve the stochastic differential equation (9.22).


2. Given φ a C 1 bounded function on R, show that there exists a function
f (t, x) such that
f (t, St ) = IE[φ(ST ) | Ft ], 0 ≤ t ≤ T.
3. Show that the process t  −→ f (t, St ) is a martingale.
4. Using the Itô formula, compute the process (ζt )t∈[0,T] in the predictable
representation

    f(T, S_T) = IE[\phi(S_T)] + \int_0^T \zeta_{t^-} (dN_t - dt).
5. What is the difference-differential equation satisfied by f (t, x)?
Exercise 9.10 Let (Nt )t∈R+ denote a standard Poisson process on R+ . Given
f ∈ L1 (R+ ) and bounded, we let

    \int_0^\infty f(y)(dN_y - dy)

denote the compensated Poisson stochastic integral of f , and

    L(s) := IE\Big[ \exp\Big( s \int_0^\infty f(y)(dN_y - dy) \Big) \Big], \qquad s \in \mathbb{R}_+.
1. Show that we have

    L'(s) = \Big( \int_0^\infty f(y)(e^{sf(y)} - 1) \, dy \Big) IE\Big[ \exp\Big( s \int_0^\infty f(y)(dN_y - dy) \Big) \Big].

2. Show that we have

    \frac{L'(s)}{L(s)} \le h(s) := \alpha^2 \frac{e^{sK} - 1}{K}, \qquad s \in \mathbb{R}_+,

provided f (t) ≤ K, dt-a.e., for some K > 0.
3. Show that

    L(t) \le \exp\Big( \int_0^t h(s) \, ds \Big) = \exp\Big( \alpha^2 \int_0^t \frac{e^{sK} - 1}{K} \, ds \Big), \qquad t \in \mathbb{R}_+,

provided in addition that \int_0^\infty |f(y)|^2 \, dy \le \alpha^2 , for some α > 0.
4. Show, using Chebyshev's inequality, that

    P\Big( \int_0^\infty f(y)(dN_y - dy) \ge x \Big) \le e^{-tx} \, IE\Big[ \exp\Big( t \int_0^\infty f(y)(dN_y - dy) \Big) \Big],

and that

    P\Big( \int_0^\infty f(y)(dN_y - dy) \ge x \Big) \le \exp\Big( -tx + \alpha^2 \int_0^t \frac{e^{sK} - 1}{K} \, ds \Big).

5. By minimisation in t, show that

    P\Big( \int_0^\infty f(y) \, dN_y - \int_0^\infty f(y) \, dy \ge x \Big)
    \le e^{x/K} \Big( 1 + \frac{xK}{\alpha^2} \Big)^{-x/K - \alpha^2/K^2},

for all x > 0, and that

    P\Big( \int_0^\infty f(y) \, dN_y - \int_0^\infty f(y) \, dy \ge x \Big)
    \le \Big( 1 + \frac{xK}{\alpha^2} \Big)^{-x/(2K)},

for all x > 0.
10

Noncommutative Girsanov theorem

Be not astonished at new ideas; for it is well-known to you that a thing


does not therefore cease to be true because it is not accepted by many.
(B. Spinoza.)
In this chapter we derive quasi-invariance results and Girsanov density formu-
las for classical stochastic processes with independent increments, which are
obtained as components of Lévy processes on real Lie algebras. The examples
include Brownian motion as well as the Poisson process, the gamma process,
and the Meixner process. By restricting ourselves to commutative subalgebras
of the current algebra that have dimension one at every point, we can use
techniques from the representation theory of Lie algebras in order to get
explicit expressions on both sides of our quasi-invariance formulas.

10.1 General method


We will use results from Chapter 8 on Lévy processes on real Lie algebras
and their associated classical increment processes. Let ( jst )0≤s≤t be a Lévy
process on a real Lie algebra g, defined as in (8.3) and fix X ∈ gR+ with
classical version (X̂t )t∈R+ . In addition to the conditions of Definition 8.1.1, we
assume that the representation ρ in the Schürmann triple can be exponentiated
to a continuous unitary representation of the Lie group associated to g. These
assumptions guarantee that jst can also be exponentiated to a continuous
unitary group representation. By Nelson’s theorem, this implies that D contains
a dense subspace whose elements are analytic vectors for all ρ(X), X ∈ g, and
any finite set of operators of the form jst (X), 0 ≤ s ≤ t, X ∈ g, is essentially
selfadjoint on some common domain. Furthermore, the vacuum vector Ω is an


analytic vector for all jst (X), 0 ≤ s ≤ t, X ∈ g, and we will assume that η(g)
consists of analytic vectors.
Denote by g = eX an element of the simply connected Lie group G
associated to g. Our assumptions guarantee that η(g) and L(g) can be defined
for X in a sufficiently small neighborhood of 0. For an explicit expression for
the action of Ust (g) on exponential vectors, see also [106, Proposition 4.1.2].
In order to get a quasi-invariance formula for (X̂t )t∈R+ we choose an element
Y ∈ gR+ that does not commute with X and let the unitary operator U = eπ(Y)
act on the algebra

AX = alg {X( f ) : f ∈ S(R+ )}

generated by X. By letting U^* act on the vacuum state Ω, we obtain a new
state vector Ω' = U^* Ω. If Ω is cyclic for A_X , then Ω' can be approximated by
elements of the form GΩ with G ∈ A_X . It is actually possible to find an element
G which is affiliated to the von Neumann algebra generated by A_X such that
GΩ = Ω' , as follows from the BT theorem, see [104, Theorem 2.7.14].
The following calculation then shows that the finite marginal distributions
of (X̂'_t)_{t∈R+} are absolutely continuous with respect to those of (X̂_t)_{t∈R+}:

    IE\big[ g( \hat{X}'(f) ) \big] = \langle \Omega, g( X'(f) ) \, \Omega \rangle
    = \langle \Omega, g( U X(f) U^* ) \, \Omega \rangle
    = \langle \Omega, U g( X(f) ) U^* \Omega \rangle
    = \langle U^* \Omega, g( X(f) ) \, U^* \Omega \rangle
    = \langle \Omega', g( X(f) ) \, \Omega' \rangle
    = \langle G\Omega, g( X(f) ) \, G\Omega \rangle
    = IE\big[ g( \hat{X}(f) ) \, |\hat{G}|^2 \big].

Here, G is a "function" of X and Ĝ is obtained from G by replacing X by
X̂. This is possible because A_X is commutative, and requires only standard
functional calculus.
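The chain of equalities above can be reproduced in a finite-dimensional toy model (the dimension, spectrum, rotation angle and test function below are all hypothetical, chosen for illustration): take X diagonal, so that the algebra it generates is commutative, a unitary U, Ω′ = U*Ω, and G the function of X with GΩ = Ω′; then ⟨Ω′, g(X)Ω′⟩ = ⟨Ω, g(X)|G(X)|²Ω⟩:

```python
import math

lam = [0.0, 1.0, 2.0]                    # spectrum of the diagonal operator X
Omega = [0.5, 0.5, 1 / math.sqrt(2)]     # unit reference vector, nonzero entries

theta = 0.6                              # U = rotation in the (0,1) plane; U* = U^T
c, s = math.cos(theta), math.sin(theta)
Ustar = lambda v: [c * v[0] + s * v[1], -s * v[0] + c * v[1], v[2]]
Omega2 = Ustar(Omega)                    # the transformed state Omega'

# G is the "function of X" with G Omega = Omega': G(lam_i) = Omega'_i / Omega_i.
G = [Omega2[i] / Omega[i] for i in range(3)]

g = lambda x: x**2 + 1                   # hypothetical test function

lhs = sum(Omega2[i]**2 * g(lam[i]) for i in range(3))           # <Omega', g(X) Omega'>
rhs = sum(Omega[i]**2 * g(lam[i]) * G[i]**2 for i in range(3))  # <Omega, g(X) |G(X)|^2 Omega>
print(lhs, rhs)  # equal: under Omega', the law of X has density |G|^2
```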
The density relating the law of (X̂'_t)_{t∈R+} to that of (X̂_t)_{t∈R+} is therefore given
by |Ĝ|^2 . The same calculation also applies to finite joint distributions, i.e., we
also have

    IE\big[ g_1( \hat{X}'(f_1) ) \cdots g_n( \hat{X}'(f_n) ) \big]
    = IE\big[ g_1( \hat{X}(f_1) ) \cdots g_n( \hat{X}(f_n) ) \, |\hat{G}|^2 \big],

for all n ∈ N, f1 , . . . , fn ∈ S(R+ ), g1 , . . . , gn ∈ C0 (R).


In the following section we will show several examples how quasi-
invariance formulas for Brownian motion, the Poisson process, the gamma
180 Noncommutative Girsanov theorem

process [111, 112], and the Meixner process can be obtained in a


noncommutative framework. We present explicit calculations for several
classical increment processes related to the oscillator algebra and the Lie
algebra sl2 (R) of real 2 × 2 matrices with trace zero.
Note that by letting U act on A_X directly we obtain a different algebra

    A_{X'} = \mathrm{alg}\, \{ U X(f) U^* : f \in \mathcal{S}(\mathbb{R}_+) \},

generated by X'(f) = U X(f) U^* , f ∈ S(R+ ). Since this algebra is again
commutative, there exists a classical process (X̂'_t)_{t∈R+} that has the same
expectation values as X' with respect to Ω, i.e.,

    \langle \Omega, g_1( X'(f_1) ) \cdots g_n( X'(f_n) ) \, \Omega \rangle
    = IE\big[ g_1( \hat{X}'(f_1) ) \cdots g_n( \hat{X}'(f_n) ) \big]

for all n ∈ N, g1 , . . . , gn ∈ C0 (R), f1 , . . . , fn ∈ S(R+ ), where

    \hat{X}'(f) = \int_0^\infty f(t) \, d\hat{X}'_t, \qquad \text{for } f = \sum_{k=1}^n f_k 1_{[s_k, t_k)} \in \mathcal{S}(\mathbb{R}_+).

If X'(f) is a function of X(f), then A_X is invariant under the action of U. In
this case the classical process (X̂'_t)_{t∈R+} can be obtained from (X̂_t)_{t∈R+} by a
pathwise transformation, see (10.5) and (10.6). But even if this is not the case,
we can still get a quasi-invariance formula stating that the law of (X̂'_t)_{t∈R+}
is absolutely continuous with respect to the law of (X̂_t)_{t∈R+}.
10.2 Quasi-invariance on osc


In this section we explicitly compute the density |G|2 on several examples
based on Gaussian and Poisson random variables.
The oscillator Lie algebra is the four dimensional Lie algebra osc with basis
{N, A+ , A− , E} and the Lie bracket given by

[N, A± ] = ±A± , [A− , A+ ] = E, [E, N] = [E, A± ] = 0,

with the involution N ∗ = N, (A+ )∗ = A− , and E∗ = E.


Letting Y = i(wA+ + wA− ), by Lemma 3.2.2 we can compute the adjoint
action of gt = etY on a general Hermitian element Xα,ζ ,β of osc written as

Xα,ζ ,β = αN + ζ A+ + ζ A− + βE

with α, β ∈ R, ζ ∈ C, as


X(t) = αN + (ζ − iαwt)A+ + (ζ + iαwt)A− + β + 2t(wζ ) + α|w|2 t2 E,
10.2 Quasi-invariance on osc 181

where (z) denotes the imaginary part of z. Recall that by Proposition 6.1.1, the
distribution of ρ(Xα,ζ ,β ) in the vacuum vector e0 is either a Gaussian random
variable with variance |ζ |2 and mean β or Poisson random variable with “jump
size” α, intensity |ζ |2 /α 2 , and drift β − |ζ |2 /α. We interpret the result of the
next proposition as
1
2 1 $ $2 2
IE g X(t) = IE g(Xα,ζ ,β )$G(Xα,ζ ,β , t)$ , g ∈ C0 (R).

Proposition 10.2.1 Letting


Xα,ζ ,β = αN + ζ a+ + ζ a− + βE,
Y = i(wa+ + wa− ) and
Xt := etY Xα,ζ ,β e−tY ,
we have
$ $2
e0 , g(Xt )e0
= e0 , g(Xα,ζ ,β )$G(Xα,ζ ,β , t)$ e0

for all g ∈ C0 (R), with


! 2 !(x−β)/α+|ζ | /α
2 2
w 2 2 |w|
|G(x, t)| =
2
1 + 2tα +t α
ζ |ζ |2
! 2 !!
|ζ |2 w 2 2 |w|
× exp 2tα +t α . (10.1)
α2 ζ |ζ |2
Proof : We have Y ∗ = −Y and


e0 , g(Xt )e0
= e0 , g etY Xα,ζ ,β e−tY e0

= e0 , etY g(Xα,ζ ,β )e−tY e0

= v(t), g(Xα,ζ ,β )v(t)

= G(Xα,ζ ,β , t)e0 , g(Xα,ζ ,β )G(Xα,ζ ,β , t)e0

$ $2
= e0 , g(Xα,ζ ,β )$G(Xα,ζ ,β , t)$ e0
.
As a consequence of Lemma 6.1.4, the function
v(t) := e−tY e0
can be written in the form


v(t) = k
ck (t)Xα,ζ ,β e0 = G(Xα,ζ ,β , t)e0 .
k=0

In order to compute the function G we consider





v (t) = − exp − tρ(Y) ρ(Y)e0 = −i exp − tρ(Y) we1 ,
182 Noncommutative Girsanov theorem

with

v (t) = −e−tY Ye0


= −iwe−tY e1
iw −tY " #
=− e Xα,ζ̃ (t),β̃(t) e0 − β̃(t)e0
ζ̃ (t)
iw " #
=− Xα,ζ ,β − β̃(t) e−tY e0
ζ̃ (t)
iw " #
=− Xα,ζ ,β − β̃(t) v(t),
ζ̃ (t)
where

ζ̃ (t) = ζ − iαwt, and β̃(t) = β + 2t(wζ ) + α|w|2 t2 .

This is satisfied provided G(x, t) satisfies the differential equation


∂G iw " #
(x, t) = − x − β̃(t) G(x, t)
∂t ζ̃ (t)
with initial condition G(x, 0) = 1, which shows that
  t  
x − β̃(s)
G(x, t) = exp −iw ds
0 ζ̃ (s)
!  
αwt (x−β)/α+|ζ | /α
2 2
wζ t2 2
= 1−i exp i − |w| ,
ζ α 2

and yields (10.1).

After letting α go to 0 we get


 !2 
w t2 w
|G(x, t)| = exp 2t(x − β)i − |ζ |2 2i
2
.
ζ 2 ζ

When α = 0, this identity gives the relative density of two Gaussian random
variables with the same variance, but different means. For α  = 0, it gives
the relative density of two Poisson random variables with different intensities.
Note that the classical analogue of this limiting procedure is

lim (1 + α)αλ(Nα −λ/α


2 )+λ2 /α 2 /2
= eλX−λ ,
α→0

where Nα is a Poisson random variable with intensity λ > 0 and λ(Nα −


λ/α 2 ) converges in distribution to a standard Gaussian variable X. No such
normalisation is needed in the quantum case.
10.3 Quasi-invariance on sl2 (R) 183

10.3 Quasi-invariance on sl2 (R)


Let us now consider the three-dimensional Lie algebra sl2 (R), with basis
B+ , B− , M, Lie bracket

[M, B± ] = ±2B± , [B− , B+ ] = M,

and the involution (B+ )∗ = B− , M ∗ = M. Letting Y = B− − B+ , β ∈ R, and


Xβ = B+ + B− + βM in sl2 (R), we can compute

[Y, Xβ ] = 2βB+ + 2βB− + 2M = 2βX1/β

and by Lemma 3.3.1 the adjoint action of gt = etY on Xβ = B+ + B− + βM is


given by
t

etY/2 Xβ e−tY/2 = e 2 adY Xβ = cosh(t) + β sinh(t) Xγ (β,t) ,
β cosh(t) + sinh(t)
where γ (β, t) = . By Proposition 6.2.2 the distribution of
cosh(t) + β sinh(t)
ρc (Xβ ) in the state vector e0 is given by its Fourier–Laplace transform
 % c
β2 − 1
e0 , e λρc (Xβ )
e0
= %
%
% .
β 2 − 1 cosh λ β 2 − 1 − β sinh λ β 2 − 1

Proposition 10.3.1 Letting Y = B− − B+ , we have


$ $2
e0 , g(etY Xβ e−tY )e0
= e0 , g(Xβ )$G(Xβ , t)$ e0

for all g ∈ C0 (R), with


 

1 t x − c β cosh(s) + sinh(s)
G(x, t) = exp ds . (10.2)
2 0 cosh(s) + β sinh(s)

Proof : We have Y ∗ = −Y and




e0 , g(etρc (Y)/2 Xβ e−tρc (Y)/2 )e0
= e0 , g e−tρc (Y)/2 Xβ e−tρc (Y)/2 e0

= e0 , e−tρc (Y)/2 g(Xβ )e−tρc (Y)/2 e0

= v(t), g(Xβ )v(t)

= G(Xβ , t)e0 , g(Xβ )G(Xβ , t)e0

$ $2
= e0 , g(Xβ )$G(Xβ , t)$ e0
.

By Lemma 6.2.3 the lowest weight vector e0 is cyclic for ρc (Xβ ) for all β ∈ R,
c > 0, therefore the function

v(t) := e−tρc (Y)/2 e0


184 Noncommutative Girsanov theorem

can be written in the form




v(t) = ck (t)ρc (Xβ )k e0 = G(Xβ , t)e0 .
k=0

In order to compute the function G, we consider


1 1 √
v (t) = − e−tρc (Y)/2 ρc (Y)e0 = e−tρc (Y) ce1 .
2 2
As shown earlier we introduce Xβ into this equation to get
1 −tρc (Y)/2

v (t) = e ρc (Xγ (β,t) ) − cγ (β, t) e0
2

ρc (Xβ ) − c β cosh(t) + sinh(t) −tρc (Y)/2
= e e0 ,
2 cosh(t) + 2β sinh(t)
which is satisfied under the ordinary differential equation


∂G x − c β cosh(t) + sinh(t)
(x, t) = G(x, t),
∂t 2 cosh(t) + 2β sinh(t)
for G(x, t) with initial condition G(x, 0) = 1. We check that the solution of this
ODE is given by (10.2).

If |β| < 1, then we can write G in the form




G(x, t) = exp (β, t)x − c(β, t) ,

where
  .  . 
1 1+β 1+β
(β, t) = % arctan e t
− arctan , (10.3)
1 − β2 1−β 1−β

and
t 1 1 + β + e−2t (1 − β)
(β, t) = + log . (10.4)
2 2 2

10.4 Quasi-invariance on hw
In this section we use the Weyl operators and notation of Section 1.3. Recall
that in Chapter 7, a continuous map O from Lp (R2 ), 1 ≤ p ≤ 2, into the space
of bounded operators on h has been defined via

O( f ) = (F −1 f )(x, y)eixP+iyQ dxdy,
R2
10.5 Quasi-invariance for Lévy processes 185

where F denotes the Fourier transform, with the bound


O( f ) ≤ Cp f Lp (R2 ) ,

and the relation


O(eiux+ivy ) = exp (iuP + ivQ) , u, v ∈ R.
In the next proposition we state the Girsanov theorem on the noncommutative
Wiener space, in which we conjugate Oh (ϕ) with U(−k2 /2, k1 /2), k ∈ h ⊗ R2 ,
which amounts to a translation of the argument of ϕ by ( k1 , h1
, k2 , h2
).
Proposition 10.4.1 Let h, k ∈ h ⊗ R2 and ϕ ∈ Dom Oh . Then we have


U(−k2 /2, k1 /2)Oh (ϕ)U(−k2 /2, k1 /2)∗ = Oh T( k1 ,h1
, k2 ,h2
) ϕ
where T(x0 ,y0 ) ϕ(x, y) = ϕ(x + x0 , y + y0 ).
Proof : For (u, v) ∈ R2 , we have
U(−k2 /2, k1 /2)ei(uP(h1 )+vQ(h2 )) U(−k2 /2, k1 /2)∗
= U(−k2 /2, k1 /2)U(uh1 , vh2 )U(−k2 /2, k1 /2)∗



= exp −i u k1 , h1
+ v k2 , h2
U(uh1 , vh2 )
and therefore
U(−k2 /2, k1 /2)Oh (ϕ)U(−k2 /2, k1 /2)∗
 ∞ ∞
= F −1 ϕ(u, v)e−i(u k1 ,h1
+v k2 ,h2
) ei(uP(h1 )+vQ(h2 )) dudv
0 0

= F −1 T( k1 ,h1
, k2 ,h2
) ϕ(u, v)ei(uP(h1 )+vQ(h2 )) dudv
Cd


= Oh T( k1 ,h1
, k2 ,h2
) ϕ .

10.5 Quasi-invariance for Lévy processes


In this section we derive quasi-invariance or Girsanov formulas for Lévy
processes on Lie algebras, such as Brownian motion, the Poisson process, the
gamma process, and the Meixner process.

10.5.1 Brownian motion


Let now ( jst )0≤s≤t be the Lévy process on osc with the Schürmann triple
defined by D = C,
186 Noncommutative Girsanov theorem



⎪ ρ(N) = 1, ρ(A± ) = ρ(E) = 0,




η(A+ ) = 1, η(N) = η(A− ) = η(E) = 0,





⎩ L(N) = L(A± ) = 0, L(E) = 1.

Taking X = −i(A+ + A− ) constant we get

X( f ) = a+ ( f ) + a− ( f )

and the associated classical process (X̂t )t∈R+ is Brownian motion. We choose
for Y = h(A+ − A− ), with h ∈ S(R+ ). A similar calculation as in the previous
subsection yields
 t
 −Y
X (1[0,t] ) = e X(1[0,t] )e = X(1[0,t] ) − 2
Y
h(s)ds
0

i.e., AX is invariant under eY and (X̂t )t∈R+ is obtained from (X̂t )t∈R+ by adding
a drift. Now eπ(Y) is a Weyl operator and gives an exponential vector when it
acts on the vacuum, i.e., we have

eπ(Y)  = e−||h||
2 /2
E(h)

see, e.g., [79, 87]. But – up to the normalisation – we can create the same
exponential vector also by acting on  with eX(h) ,

eX(h)  = e||h|| /2 E(h).


2



Therefore, we get G = exp X(h) − ||h||2 and the well-known Girsanov
formula for Brownian motion
6  ∞ !! 7



e0 , g X̂  ( f ) e0
= e0 , g X̂( f ) exp 2X(h) − 2 h2 (s)ds e0 .
0
(10.5)

10.5.2 The Poisson process


Taking

X = −i(N + νA+ + νA− + ν 2 E)

constant we get
 ∞
X( f ) = a◦ ( f ) + νa+ ( f ) + νa− ( f ) + ν 2 f (s)ds
0
10.5 Quasi-invariance for Lévy processes 187

and the associated classical process (X̂t )t∈R+ is a noncompensated Poisson


process with intensity ν 2 and jump size 1. Given h ∈ S(R+ ) of the form

n
h(t) = hk 1[sk ,tk ) (t),
k=1

with hk > −ν 2 , let


%
w(t) = i( ν 2 + h(t) − ν),

and Y = w(A+ − A− ). The aforementioned calculations show that

X  (1[0,t] ) = eY X(1[0,t] )e−Y

is a non-compensated Poisson process with intensity ν 2 + h(t). We have the


Girsanov formula
D
E
e0 , g X̂  ( f ) e0
8 ! 9

+ n
hk X̂(1[sk ,tk ) ) −ν 2 (tk −sk )hk
= e0 , g X̂( f ) 1+ 2 e e0
ν
k=1
8  !  9

−ν 2 3 ∞ h(s)ds +n
hk X̂(1[sk ,tk ) )
= e0 , g X̂( f ) e 0 1+ 2 e0
ν
k=1
6 !!  ∞ !! 7

h
= e0 , g X̂( f ) exp X̂ log 1 + 2 − ν2 h(s)ds e0 .
ν 0

10.5.3 The gamma process


Let now ( jst )0≤s≤t be the Lévy process on sl2 (R) with Schürmann triple D =
2 , ρ = ρ2 , and

η(B+ ) = e0 , η(B− ) = η(M) = 0, L(M) = 1, L(B± ) = 0,

cf. [1, Example 3.1]. Taking X = −i(B+ + B+ + M) constant, the random


variables



X(1[s,t] ) = a◦st ρ(X) + a+ −
st e0 + ast (e0 ) + (t − s)id

are gamma distributed in the vacuum vector . We also let Y = h(B− − B+ )


where h is the simple function

r
h= hk 1[sk ,tk ) ∈ S(R+ ), 0 ≤ s1 ≤ t1 ≤ s2 ≤ · · · ≤ tn ,
k=1
188 Noncommutative Girsanov theorem

and as in the previous subsection we get

X  (1[s,t] ) = eπ(Y) X(1[s,t] )e−π(Y) = X(e2h 1[s,t] ).

On the other hand, using the tensor product structure of the Fock space, we can
calculate
 n 

−π(Y)
e  = exp − hk jsk tk (Y) 
k=1
= e−h1 js1 t1 (Y)  ⊗ · · · ⊗ e−hn jsn tn (Y) 
"X
 ∞ #
−2h1

= exp 1−e 1[s1 ,t1 ) − (t1 − s1 ) h(s)ds  ⊗ · · ·
2 0
"X
 ∞ #
−2hn

· · · ⊗ exp 1−e 1[sn ,tn ) − (tn − sn ) h(s)ds 
2 0
 ∞ !
X −2h
= exp (1 − e ) − h(s)ds ,
2 0

since jst is equivalent to ρt−s .

Proposition 10.5.1 Let n ∈ N, f1 , . . . , fn ∈ S(R+ ), g1 , . . . , gn ∈ C0 (R), then


we have the Girsanov formula
"

#
e0 , g1 X  ( f1 ) · · · gn X  ( fn ) e0
(10.6)
6  ∞ !! 7



= e0 , g1 X( f1 ) · · · gn X( fn ) exp X(1 − e−2h ) − 2 h(s)ds e0 .
0

10.5.4 The Meixner process


We consider again the same Lévy process on sl2 (R) as in the previous
subsection. Let ϕ, β ∈ S(R+ ) with |β(t)| < 1 for all t ∈ R+ , and set

Xϕ,β = ϕ(B+ + B− + βM) ∈ sl2 (R)R+ .

Let Y again be given by Y = h(B− − B+ ), h ∈ S(R+ ). Then we get

X  (t) = eY(t) X(t)e−Y(t)



"
#
= ϕ(t) cosh(2h) + β(t) sinh(2h) B+ + B− + γ β(t), 2h M ,

i.e., X  = Xϕ  ,β  with
"

#

ϕ  (t) = ϕ(t) cosh 2h(t) + β(t) sinh 2h(t) , and β  (t) = γ β(t), 2h(t) .
10.5 Quasi-invariance for Lévy processes 189

As in the previous subsection, we can also calculate the function G,


 ∞ !!
1

eπ(Y)  = exp X(β,2h),β −  β(t), h(t) ,
2 0
where ,  are defined as in Equations (10.3) and (10.4). As a consequence
we get the following proposition.
Proposition 10.5.2 The finite joint distributions of Xϕ  ,β  are absolutely
continuous with respect to those of Xϕ,β , and the mutual density is given by
 ∞ !


exp X(β,2h),β −  β(t), 2h(t) .
0

Notes
The Girsanov formula for Brownian motion and gamma process appeared
first in the context of factorisable representations of current groups [114], cf.
[111, 112] for the gamma process. The quasi-invariance results of Section 10.2
for the Poisson, gamma, and Meixner processes have been proved for finite
joint distributions. They can be extended to the distribution of the processes
using continuity arguments for the states and endomorphisms on our operator
algebras, or by the use of standard tightness arguments coming from classical
probability. The general idea also applies to classical processes obtained by a
different choice of the commutative subalgebra, cf. e.g., [18]. The classical
Girsanov theorem has been used by Bismut [22] in order to propose a
simpler approach to the Malliavin calculus by the differentiation of related
quasi-invariance formulas in order to obtain integration by parts formulas for
diffusion processes, which where obtained by Malliavin in a different way.

Exercises
Exercise 10.1 Girsanov theorem for gamma random variables. Take β = 1 in
the framework of Proposition 10.3.1. Show that we have
!
1 −t
G(x, t) = exp − (x(e − 1) + ct) ,
2
and that this recover the change of variable identity
  1
2
IE g(et Z) = IE g(Z) exp Z(1 − e−t ) − ct

for a gamma distributed random variable Z with parameter c > 0.


11

Noncommutative integration by parts

Mathematical thinking is logical and rational thinking. It’s not like


writing poetry.
(J. Nash, in Mathematicians: An Outer View of the Inner World.)
In this chapter we develop a calculus on noncommutative probability spaces.
Our goal is to give a meaning to an integration by parts formula via a suitable
gradient operator acting on noncommutative random variables. We focus in
particular on the noncommutative Wiener space over a Hilbert space h. In this
case, the integration by parts formula will show the closability of the gradient
operator as in classical infinite-dimensional analysis. We also compute the
matrix elements between exponential vectors for our divergence operator and
use them to show that the divergence operator coincides with the noncausal
creation and annihilation integrals defined by Belavkin [13, 14] and Lindsay
[68] for integrable processes, and therefore with the Hudson–Parthasarathy
[54] integral for adapted processes.

11.1 Noncommutative gradient operators


We use the notation of Section 7.4 in order to construct noncommutative
gradient operators. Given a Lie group G with Lie algebra G whose dual is
denoted by G ∗ , we let Ad g ξ , ξ ∈ G ∗ , denote the co-adjoint action:
"

Ad "g ξ , x
G ∗ ,G = ξ , Ad g−1 x
G ∗ ,G , x ∈ G.

We also let G
Ad g , g ∈ G, be defined for f : G ∗ → C as

G "
Ad g f = f ◦ Ad g−1 ,

190
11.1 Noncommutative gradient operators 191

and let Had x be the differential of g −→ G Ad g . The following proposition,


called covariance property, will provide an analogue of the integration by parts
formula.

Proposition 11.1.1 For any x = (x1 , . . . , xn ) ∈ G and f ∈ DomO we have


" #
[x1 U(X1 ) + · · · + xn U(Xn ), O(f )] = O H ad(x)f .

Proof : Using the covariance condition

C−1
U(g)∗ C−1 U(g) = % , g ∈ G,
(g−1 )

cf. Relations (34), (44), (56) in [7], we have


WU(g)ρU(g)∗ (ξ )
√  %
σ (ξ )
= m(x)e−i ξ ,x
G ∗ ,G Tr[U(e−(x1 X1 +···+xn Xn ) )U(g)ρU(g)∗ C−1 ]dx
(2π )n/2 N0
√  .
σ (ξ ) −i ξ ,x
∗ ,G −1 −(x X +···+x X ) −1 m(x)
= e G TrU(g )U(e 1 1 n n )U(g)ρC dx
(2π ) n/2 N0 (g−1 )
√ 
σ (ξ ) − Ad g−1 x %
= n/2
e−i ξ ,x
G ∗ ,G Tre ρC−1 m(x)(g)dx
(2π ) N0
√ 
σ (ξ )
= e−i ξ , Ad g x
G ∗ ,G TrU(e−(x1 X1 +···+xn Xn ) )
(2π )n/2 N0
-
× ρC−1 det( Ad g ) m( Ad g x)(g)dx
,
"
σ ( Ad −1 ξ )  " %
g −i Ad −1 ξ ,x
G ∗ ,G
= e g TrU(e−(x1 X1 +···+xn Xn ) )ρC−1 m(x)dx
(2π )n/2 N0
"
= Wρ ( Ad ξ ).
g−1

We proved the covariance property


"
WU(g)ρU(g)∗ (ξ ) = Wρ ( Ad g−1 ξ ).

By duality we have

U(g)O(f )U(g)∗ |ρ
B2 (h) = Tr[(U(g)O(f )U(g)∗ )∗ ρ]
= Tr[U(g)O(f )∗ U(g)∗ ρ]
= Tr[O(f )∗ U(g)∗ ρU(g)]
= O(f )|U(g)∗ ρU(g)
B2 (h)
192 Noncommutative integration by parts

= f |WU(g)∗ ρU(g)
B2 (h)
= f |Wρ ◦ Ad "g
L2 (G ∗ ;dξ/σ (ξ ))
C
"
= f ◦ Ad g−1 |Wρ
L2 (G ∗ ;dξ/σ (ξ ))
C
"
= O(f ◦ Ad g−1 )|ρ
B2 (h) ,

which implies
" #
U(g)O(f )U(g)∗ = O GAd g f ,

and the conclusion follows by differentiation.

11.2 Affine algebra


Here we turn to the case of the affine algebra generated by
P
X1 = −i and X2 = i(Q + M),
2
which form a representation of the affine algebra with [X1 , X2 ] = X2 . Next
we define the gradient operator that will be used to show the smoothness of
Wigner densities. For this we fix κ ∈ R and we let Sh denote the algebra of
operators on h that leave the Schwartz space S(R) invariant.

Definition 11.2.1 For any x = (x1 , x2 ) ∈ R2 , the gradient operator D is


defined as
i i
Dx F := − x1 [P, F] + x2 [Q + κM, F], F ∈ Sh .
2 2
Proposition 11.2.2 For any x = (x1 , x2 ) ∈ R2 , the operator Dx is closable
for the weak topology on the space B(h) of bounded operators on h.
Proof : Let φ, ψ ∈ S(R). Let (Bı )ı∈I be a net of operators in Sh ∩ B(h) such
ı∈I ı∈I
that Bı −→ 0 and Dx Bı −→ B ∈ B(h) in the weak topology. We have
ψ|Bφ
h = lim ψ|Dx Bı φ
h
ı∈I
i i
= lim ψ| − x1 (PBı φ − Bı Pφ) + x2 ((Q + κM)Bı φ − Bı (Q + κM)φ)
h
ı∈I 2 2
i
= lim − x1 ( Pψ|PBı φ
h − ψ|Bı Pφ
h )
ı∈I 2
i
+ lim − x2 ( (Q + κM)ψ|Bı φ
h − ψ|Bı (Q + κM)φ
h ) = 0,
ı∈I 2

hence B = 0.
11.2 Affine algebra 193

The following is the affine algebra analogue of the integration by parts formula
(2.1).
Proposition 11.2.3 For any x = (x1 , x2 ) ∈ R2 and f ∈ DomO, we have
[x1 U(X1 ) + x2 U(X2 ), O(f )] = O(x1 ξ2 ∂1 f (ξ1 , ξ2 ) − x2 ξ2 ∂2 f (ξ1 , ξ2 )).
Proof : This is a consequence of the covariance property since from (7.8), the
co-adjoint action is represented by the matrix
 
1 ba−1
,
0 a−1
i.e.,
G
Ad g f (ξ1 , ξ2 ) = f ◦ Ad g−1 (ξ1 , ξ2 ) = f (ξ1 + ba−1 ξ2 , a−1 ξ2 ),
"

hence
H
ad x f (ξ1 , ξ2 ) = x1 ξ2 ∂1 f (ξ1 , ξ2 ) − x2 ξ2 ∂2 f (ξ1 , ξ2 ).

For κ = 1, the integration by parts formula can also be written as


D(x1 ,2x2 ) O(f ) = O(x1 ξ2 ∂1 f − x2 ξ2 ∂2 f ).
The noncommutative integration by parts formulas on the affine algebra given
in this section generalises the classical integration by parts formula (2.1) with
respect to the gamma density on R. We define the expectation of X as
IE[X] = , X
h ,
where  = 1R+ is the vacuum state in h. The results of this section are in fact
valid for any representation {M, B− , B+ } of sl2 (R) and any vector  ∈ h such
that iP = Q and M = β.
Lemma 11.2.4 Let x = (x1 , x2 ) ∈ R2 . We have
1
IE[Dx F] = IE [x1 {Q, F} + x2 {P, F}] , F ∈ Sh .
2
Proof : We use the relation iP = Q:
− IE[[iP, F]] = , −iPF
h − , −iFP
h
= iP, F
h + , FQ
h
= Q, F
h + FQ
h
= Q, F
h + , FQ
h
= IE [{Q, F}] ,
194 Noncommutative integration by parts

and
IE[[iQ, F]] = , iQF
h − , iFQ
h
= − iQ, F
h + , FP
h
= P, F
h + , FP
h
= IE [{P, F}] ,
and we note that IE[[M, F]] = 0.
In the sequel we fix a value of α ∈ R.
Definition 11.2.5 For any x = (x1 , x2 ) ∈ R2 and F ∈ Sh , let
x1 x2
δ(F ⊗ x) := {Q + α(M − β), F} + {P, F} − Dx F.
2 2
Note also that
1
δ(F ⊗ x) = (x1 (Q + iP + α(M − β)) + x2 (P − i(Q + κM))) F
2
F
+ (x1 (Q − iP + α(M − β)) + x2 (P + i(Q + κM)))
2
= x1 (B+ F + FB− ) − ix2 (B+ F + FB− )
x1 i
+ α {M − β, F} − x2 κ[M, F]
2 2
x1 i
= (x1 − ix2 )(B+ F + FB− ) + α {M − β, F} − x2 κ[M, F].
2 2
The following Lemma shows that the divergence operator has expectation zero.
Lemma 11.2.6 For any x = (x1 , x2 ) ∈ R2 we have
IE [δ(F ⊗ x)] = 0, F ∈ Sh .
Proof : It suffices to apply Lemma 11.2.4 and to note that , M
h = β.

Compared to a classical commutative setup, the noncommutative case brings


additional conceptual difficulties by requiring the definition of both a right-
sided and a left-sided gradient, which can be combined to a two-sided
symmetric gradient.
For F, U, V ∈ Sh and x = (x1 , x2 ) ∈ R2 we let
←− i i
U D Fx = (Dx U)F = − x1 [P, U]F + x2 [Q + κM, U]F,
2 2
and

→F i i
D x V = FDx V = − x1 F[P, V] + x2 F[Q + κM, V],
2 2
11.2 Affine algebra 195

and we define a two-sided gradient by


←→ ←− −

U D Fx V := U D Fx V + U D Fx V
i i
= − x1 [P, U]FV − x1 UF[P, V]
2 2
i i
+ x2 [Q + κM, U]FV + x2 UF[Q + κM, V].
2 2

Proposition 11.2.7 Let x = (x1 , x2 ) ∈ R2 and U, V ∈ Sh . Assume that


x1 (Q + αM) + x2 P commutes with U and with V. We have
1 ←→ 2  
IE U D Fx V = IE Uδ(F ⊗ x)V , F ∈ Sh .

Proof : By Lemma 11.2.6 we have

IE[Uδ(F ⊗ x)V]
1
= IE [U ({x1 (Q + α(M − β)) + x2 P, F} + ix1 [P, F] − ix2 [Q + κM, F]) V]
2
1
= IE[{x1 (Q + α(M − β)) + x2 P, UFV} + ix1 U[P, F]V − ix2 U[Q + κM, F]V]
2
1
= IE[{x1 (Q + α(M − β)) + x2 P, UFV} + ix1 [P, UFV]
2
− ix1 [P, U]FV] + IE[−ix1 UF[P, V] − ix2 [Q + κM, UFV]
+ ix2 [Q + κM, U]FV + ix2 UF[Q + κM, V]]
1
= IE[δ(UFV ⊗ x)] + IE[−ix1 [P, U]FV − ix1 UF[P, V]
2
+ ix2 [Q + κM, U]FV + ix2 UF[Q + κM, V]]
←→
= IE[U D F x V].

The closability of δ can be proved using the same argument as in Proposition


11.2.2. Next is a commutation relation between D and δ.

Proposition 11.2.8 For all κ = 0 and x = (x1 , x2 ), y = (y1 , y2 ) ∈ R2 we


have

Dx δ(F ⊗ y) − δ(Dx F ⊗ y)
y − iy2 y
= 1 (x1 {M, F} + ix2 [M, F]) + α 1 (x1 {Q, F} + x2 {P, F}),
2 2

F ∈ Sh .
196 Noncommutative integration by parts

Proof : We have

Dx δ(F ⊗ y)
i i
= − x1 [P, δ(F ⊗ y)] + x2 [Q + κM, δ(F ⊗ y)]
2 2
i 1 y 2
= − x1 P, y1 (B+ F + FB− ) − iy2 (B+ F + FB− ) + 1 α{M − β, F}
2 2
i 1 y 2
+ x2 Q + κM, y1 (B+ F + FB− ) − iy2 (B+ F + FB− ) + 1 α{M − β, F}
2 2
i
= δ(Dx F ⊗ y) − x1 (y1 [P, B ]F + y1 F[P, B ] − iy2 [P, B ]F − iy2 F[P, B− ]
+ − +
2
y1 y i
+ α[P, M]F + 1 αF[P, M]) + x2 (y1 [Q + κM, B+ ]F + y1 F[Q + κM, B− ]
2 2 2
y y
− iy2 [Q + κM, B+ ]F − iy2 F[Q + κM, B− ] + 1 α[Q, M]F + 1 αF[Q, M])
2 2
i y1
= δ(Dx F ⊗ y) − x1 (y1 {iM, F} − iy2 {iM, F} + α{2iQ, F})
2 2
i
+ x2 (y1 [M, F] − iy2 [M, F] + iy1 α{P, F})
2
1 i 1
= δ(Dx F ⊗ y) + x1 y1 {M + αQ, F} + x2 y1 [M, F] + x2 y1 α{P, F}
2 2 2
i 1
− x1 y2 {M, F} + x2 y2 [M, F].
2 2

Proposition 11.2.9 For all F, G ∈ Sh we have


←− x1 x2
δ(GF ⊗ x) = Gδ(F) − G D F − [Q + αM, G]F − [P, G]F,
2 2
and

→ x1 x2
δ(FG ⊗ x) = δ(F)G − D F G − F[Q + αM, G] − F[P, G].
2 2
Proof : We have
x1 x1
δ(GF ⊗ x) = (Q + iP + α(M − β))GF + GF(Q − iP + α(M − β))
2 2
x2 x2
+ (P − iQ)GF + GF(P + iQ)
2 2
x1 x1
= G(Q + iP + α(M − β))F + GF(Q − iP + αM − α/2)
2 2
x2 x2
+ G(P − iQ)F + GF(P + iQ)
2 2
i i x1 x2
+ x1 [P, G]F − x2 [Q, G]F − [Q + αM, G]F − [P, G]F.
2 2 2 2
11.3 Noncommutative Wiener space 197

Similarly we have
x1 x1
δ(FG ⊗ x) = (Q + iP + α(M − β))FG + FG(Q − iP + α(M − β))
2 2
x2 x2
+ (P − iQ)FG + FG(P + iQ)
2 2
x1 x1
= (Q + iP + α(M − β))FG + F(Q − iP + αM − α/2)G
2 2
x2 x2
+ (P − iQ)FG + F(P + iQ)G
2 2
i i x1 x2
+ x1 F[P, G] − x2 F[Q, G] − F[Q + αM, G] − F[P, G].
2 2 2 2

11.3 Noncommutative Wiener space


In this section we define derivation and divergence operators which have
properties similar to their commutative analogues defined in Chapter 9 in
the classical Malliavin calculus on the Wiener space. The derivation operator
will be used in Chapter 12 to provide sufficient conditions for the existence
of smooth Wigner densities for pairs of operators satisfying the canonical
commutation relations. However, since the vacuum state does not define a
Hilbert space, the extension of the divergence to (noncommutative) vector
fields will become more difficult.

11.3.1 Noncommutative gradient


On the Heisenberg–Weyl Lie algebra the statement of Proposition 11.1.1 reads
i
[uQ − vP, O(f )] = O (u∂1 f + v∂2 f ) .
2
In this section we extend this relation to the noncommutative Wiener space of
Section 1.3. To emphasise the analogy with the analysis on the Wiener space
of Chapter 9, we call (B(h), IE) the noncommutative Wiener space over h, and
denote by IE the state defined by
IE[X] = , X
, X ∈ B(h).

We now define a derivation operator D on B(h) and a divergence operator δ


on B(h, h ⊗ hC ⊗ C2 ), as the adjoint of the two-sided gradient for cylindrical
(noncommutative) vector fields on the algebra of bounded operators on the
symmetric Fock space over the complexification of the real Hilbert space h.
198 Noncommutative integration by parts

Definition 11.3.1 Let k ∈ hC ⊗ C2 . We set



Dom Dk = B ∈ B(h) :
*
i
[Q(k1 ) − P(k2 ), B] defines a bounded operator on h
2
and for B ∈ Dom Dk we let
i
Dk B : = [Q(k1 ) − P(k2 ), B].
2
Note that B ∈ Dom Dk for some k ∈ hC ⊗ C2 implies

B∗ ∈ Dom Dk and Dk B∗ = (Dk B)∗ .

Example 11.3.2

a) Let k ∈ hC ⊗ C2 and consider a unit vector

ψ ∈ Dom P(k2 ) ∩ Dom Q(k1 ) ∩ Dom P(k2 ) ∩ Dom Q(k1 ).

We denote by Pψ the orthogonal projection onto the one-dimensional


subspace spanned by ψ. Evaluating the commutator [Q(k1 ) − P(k2 ), Pψ ]
on a vector φ ∈ Dom P(k2 ) ∩ Dom Q(k1 ), we get

[Q(k1 ) − P(k2 ), Pψ ]φ



= ψ, φ
Q(k1 ) − P(k2 ) (ψ) − ψ, Q(k1 ) − P(k2 ) (φ)
ψ



= ψ, φ
Q(k1 ) − P(k2 ) (ψ) − Q(k1 ) − P(k2 ) ψ, φ
ψ.

We see that the range of [Q(k1 ) − P(k2 ), Pψ ] is two-dimensional, so it can


be extended to a bounded operator on h. Therefore Pψ ∈ Dom Dk , and
we get
i"

#
(Dk Pψ )φ = ψ, φ
Q(k1 ) − P(k2 ) (ψ) − Q(k1 ) − P(k2 ) ψ, φ
ψ ,
2
φ ∈ h.
b) Let h ∈ h ⊗ R2 , k ∈ hC ⊗ C2 . Then
i
[Q(k1 ) − P(k2 ), U(h1 , h2 )]
2
defines a bounded operator on h, and we get


Dk U(h1 , h2 ) = i k1 , h1
+ k2 , h2
U(h1 , h2 ).

Proposition 11.3.3 Let k ∈ hC ⊗ C2 . The operator Dk is a closable operator


from B(h) to B(h) with respect to the weak topology.
11.3 Noncommutative Wiener space 199

ı∈I
Proof : Let (Bı )ı∈I ⊆ Dom Dk ⊆ B(h) be any net such that Bı −→ 0 and
ı∈I
Dk Bı −→ β for some β ∈ B(h) in the weak topology. To show that Dk is
closable, we have to show that this implies β = 0. Let us evaluate β between
two exponential vectors E(h1 ), E(h2 ), h1 , h2 ∈ hC , then we get
E(h1 ), βE(h2 )
= lim E(h1 ), Dk Bı E(h2 )

ı∈I
i 

= lim Q(k1 ) − P(k2 ) E(h1 ), Bı E(h2 )
2 ı∈I
i 

− lim E(h1 ), Bı Q(k1 ) − P(k2 ) E(h2 )
2 ı∈I
= 0,
which implies β = 0 as desired.

11.3.2 Integration by parts


From the Girsanov theorem Proposition 10.4.1 we can derive an integration by
parts formula that can be used to get the estimates that show the differentiabil-
ity of the Wigner densities. In particular we interpret the expression in the next
integration by parts formula as a directional or Fréchet derivative.
Proposition 11.3.4 Let h ∈ h ⊗ R2 , k ∈ hC ⊗ C2 , and ϕ such that
∂ϕ ∂ϕ
ϕ, , ∈ Dom Oh .
∂x ∂y
Then [Q(k1 ) − P(k2 ), Oh (ϕ)] defines a bounded operator on h and we have
!
i ∂ϕ ∂ϕ
[Q(k1 ) − P(k2 ), Oh (ϕ)] = Oh k1 , h1
+ k2 , h2
.
2 ∂x ∂y
Proof : For real k this is the infinitesimal version of the previous proposition,
so we only need to differentiate
! !
k2 k1 k2 k1 ∗

U ε ,ε Oh (ϕ)U ε , ε = Oh T(ε k1 ,h1
,ε k2 ,h2
) ϕ
2 2 2 2
with respect to ε and to set ε = 0. The conclusion for complex k follows by
linearity.
We define a S of the smooth functionals as

S = alg Oh (ϕ) : h ∈ h ⊗ R; ϕ ∈ C∞ (R2 ) satisfy
*
∂ κ1 +κ2 ϕ
∈ Dom Oh , κ1 , κ2 ≥ 0 .
∂xκ1 ∂yκ2
200 Noncommutative integration by parts

Note that S is weakly dense in B(h), i.e., S  = B(h), since S contains the
Weyl operators U(h1 , h2 ) with h1 , h2 ∈ h. Next, we define

D : S −→ B(h) ⊗ hC ⊗ C2

where the tensor product is the algebraic tensor product over C, by setting
DOh (ϕ) equal to
⎛ ! ⎞
∂ϕ
O
⎜ h ∂x ⊗ h1 ⎟
DOh (ϕ) = ⎜⎝ ∂ϕ
! ⎟

Oh ⊗ h2
∂y
and extending it as a derivation with respect to the B(h)-bimodule structure of
B(h) ⊗ hC ⊗ C2 defined by
! ! ! !
O1 ⊗ k1 OO1 ⊗ k1 O1 ⊗ k1 O1 O ⊗ k1
O· = , ·O=
O2 ⊗ k2 OO2 ⊗ k2 O2 ⊗ k2 O2 O ⊗ k2

for O, O1 , O2 ∈ B(h) and k ∈ hC ⊗ C2 .


For example, when h ∈ h ⊗ R2 , we get
" # !
U(h1 , h2 ) ⊗ h1
DU(h1 , h2 ) = DOh exp i(x + y) = i
U(h1 , h2 ) ⊗ h2
= iU(h1 , h2 ) ⊗ h.

Definition 11.3.5 We define a B(h)-valued inner product on B(h) ⊗ hC ⊗ C2


by ·, ·
: B(h) ⊗ hC ⊗ C2 × B(h) ⊗ hC ⊗ C2 −→ B(h) by
6 ! !7
O1 ⊗ h1 O1 ⊗ k1
, = O∗1 O1 h1 , k1
+ O∗2 O2 h2 , k2

O2 ⊗ h2 O2 ⊗ k2

For all A, B ∈ B(h) ⊗ hC ⊗ C2 and all O ∈ B(h) we have




⎪ B, A
= A, B
∗ ,






⎪ ∗
⎨ O A, B
= AO, B
,



⎪ A, B
O = A, BO
,






O∗ A, B
= A, OB
.

This turns B(h)⊗hC ⊗C2 into a pre-Hilbert module over B(h), and by mapping
O ⊗ k ∈ B(h) ⊗ hC ⊗ C2 to the linear map

h  v −→ Ov ⊗ k ∈ h ⊗ hC ⊗ C2 ,
11.3 Noncommutative Wiener space 201

we can embed B(h) ⊗ hC ⊗ C2 in the Hilbert module M = B(h, h ⊗ hC ⊗ C2 ).


We will regard hC ⊗ C2 as a subspace of M via the embedding

hC  k  −→ idh ⊗ k ∈ M.

Note that we have O · k = k · O = O ⊗ k and A, k


= k, A
for all k ∈ hC ⊗ C2 ,
O ∈ B(h), A ∈ M, where the conjugation in M is defined by O ⊗ k = O∗ ⊗ k.

Proposition 11.3.6 Let O ∈ S and k ∈ hC ⊗ C2 . Then O ∈ Dom Dk and

Dk O = k, DO
= DO, k
.

∂ϕ ∂ϕ
Proof : For h ∈ h ⊗ R2 and ϕ ∈ Dom Oh such that also ∂x , ∂y ∈ Dom Oh ,
we get
⎛ ! ⎞
8 ∂ϕ
! Oh ⊗ h1 9
k1 ⎜ ∂x ⎟
k, DOh (ϕ)
= ,⎜
⎝ ! ⎟

k2 ∂ϕ
Oh ⊗ h2
∂y
!
∂ϕ ∂ϕ
= Oh k1 , h1
+ k2 , h2

∂x ∂y
i
= [Q(k1 ) − P(k2 ), Oh (ϕ)] = Dk O,
2
where we used Proposition 11.3.4. The first equality of the proposition now
follows, since both
i
O  −→ Dk O = [Q(k1 ) − P(k2 ), O] and O  −→ k, DO

2
are derivation operators. The second equality follows immediately.

The next result is the noncommutative analogue of Equation (9.9).

Theorem 11.3.7 For any k ∈ hC ⊗ C2 and O ∈ S we have


  1  
IE k, DO
= IE {P(k1 ) + Q(k2 ), O}
2
where {·, ·} denotes the anti commutator {X, Y} = XY + YX.

Proof : This formula is a consequence of the fact that

Q(h) = h = iP(h), h ∈ hC ,
202 Noncommutative integration by parts

which implies
  i"

#
IE k, DO
= Q(k1 ) − P(k2 ) , O
− , O Q(k1 ) − P(k2 ) 

2
i

= k1 + ik2 , O
− , O(k1 + ik2 )

2
1"

#
= P(k1 ) + Q(k2 ) , O
+ , O P(k1 ) + Q(k2 ) 

2
1
= IE [{P(k1 ) + Q(k2 ), O}] .
2

We can also derive an analogue of the commutative integration by parts


formula (9.10).
Corollary 11.3.8 Let k ∈ hC ⊗ C2 , and O1 , . . . , On ∈ S, then
/ 0 ⎡ ⎤
1 +
n n m−1 + +
n
IE P(k1 ) + Q(k2 ), Om = IE ⎣ Oj k, DOm
Oj ⎦ ,
2
m=1 m=1 j=1 j=m+1

where the products are ordered such that the indices increase from the left to
the right.
Proof : This is obvious, since O  −→ k, DO
is a derivation.

11.3.3 Closability
Corollary 11.3.8 can be used for n = 3 to show the closability of D from B(h)
to M. This also implies that D is also closable in stronger topologies, such as,
e.g., the norm topology and the strong topology. We will denote the closure of
D again by the same symbol.
Corollary 11.3.9 The derivation operator D is a closable operator from B(h)
to the B(h)-Hilbert module M = B(h, h ⊗ hC ⊗ C2 ) with respect to the weak
topologies.
ı∈I
Proof : We have to show that for any net (Aı )ı∈I in S with Aı −→ 0 and
ı∈I
DAı −→ α ∈ M, we get α = 0. Let f , g ∈ hC . Set
f +f f −f g+g g−g
f1 = , f2 = , g1 = , and g2 = .
2 2i 2 2i
Then we have
U(f1 , f2 ) = e−||f ||/2 E(f ) and U(g1 , g2 ) = e−||g||/2 E(g).
11.3 Noncommutative Wiener space 203

Thus we get

e(||f ||
2 +||g||2 )/2
E(f ) ⊗ h, αE(g)

(||f ||2 +||g||2 )/2


=e E(f ), h, α
E(g)

 
= lim IE U(−f1 , −f2 ) h, DAı
U(g1 , g2 )
ı∈I
11> ?
= lim IE P(h1 ) + Q(h2 ), U(−f1 , −f2 )Aı U(g1 , g2 )
ı∈I 2
   2
− h, DU(−f1 , −f2 ) Aı U(g1 , g2 ) − U(−f1 , −f2 )Aı h, DU(g1 , g2 )


= lim ψ1 , Aı ψ2
+ ψ3 , Aı ψ4
− ψ5 , Aı ψ6
− ψ7 , Aı ψ8

ı∈I
=0

for all h ∈ hC ⊗ C2 , where


⎧ 1


⎪ ψ1 = U(f1 , f2 ) P(h1 ) + Q(h2 ) , ψ2 = U(g1 , g2 ),




2






⎨ ψ3 = U(f1 , f2 ), 1
ψ4 = U(g1 , g2 ) P(h1 ) + Q(h2 ) ,
2






⎪ ψ5 = Dh U(−f1 , −f2 ) , ψ6 = U(g1 , g2 ),






ψ7 = U(f1 , f2 ), ψ8 = Dh U(g1 , g2 ).

But this implies α = 0, since {E(f ) ⊗ h|f ∈ hC , h ∈ hC ⊗ C2 } is dense in


h ⊗ hC ⊗ C2 .

Since D is a derivation, the next proposition implies that Dom D is a


∗-subalgebra of B(h).

Proposition 11.3.10 Let O ∈ Dom D. Then O∗ ∈ Dom D and

DO∗ = DO.

Proof : It is not difficult to check this directly on the Weyl operators


U(h1 , h2 ), h ∈ h ⊗ R2 . We get U(h1 , h2 )∗ = U(−h1 , −h2 ) and


D U(h1 , h2 )∗ = DU(−h1 , −h2 ) = −iU(−h1 , −h2 ) ⊗ h
= U(h1 , h2 )∗ ⊗ (ih) = DU(h1 , h2 ).

By linearity and continuity it therefore extends to all of Dom D.


204 Noncommutative integration by parts

Finally, we show how the operator D can be iterated. Given h a complex Hilbert
space we can define the derivation operator

D : S ⊗ h −→ B(h) ⊗ hC ⊗ C2 ⊗ h

by setting

D(O ⊗ h) = DO ⊗ h, O ∈ S, h ∈ h.

By closure of D we get an unbounded derivation from the Hilbert module


B(h, h ⊗ h) to M(h) = B(h ⊗ h, h ⊗ hC ⊗ C2 ⊗ h), which allows us to iterate
D. It is easy to see that D maps S ⊗ h to S ⊗ hC ⊗ C2 ⊗ h and so we have
" #⊗n
Dn (S ⊗ h) ⊆ S ⊗ hC ⊗ C2 ⊗ h.

In particular, S ⊆ Dom Dn for all n ∈ N, and we can define Sobolev-type


norms || · ||n and semi norms || · ||ψ,n , on S by


n
||O||2n := ||O∗ O|| + || Dn O, Dn O
||,
j=1

and

n
||O||2ψ,n := ||Oψ||2 + || ψ, Dn O, Dn O
ψ
||,
j=1

ψ ∈ h. In this way we can define Sobolev-type topologies on Dom Dn .

11.3.4 Divergence operator


We now extend the definition of the "Fréchet derivation" $D_k$ to the case where k is replaced by an element of M. It now becomes important to distinguish between a right and a left "derivation operator". Furthermore, the resulting operator is no longer a derivation.

Definition 11.3.11 Let $u\in M$ and $O\in\mathrm{Dom}\,D$. Then we define the right gradient $\overrightarrow D_uO$ and the left gradient $O\overleftarrow D_u$ of O with respect to u by
$$
\overrightarrow D_uO = \langle u, DO\rangle, \qquad O\overleftarrow D_u = \langle DO, u\rangle.
$$

We list several properties of the gradient.


11.3 Noncommutative Wiener space 205

Proposition 11.3.12 (i) Let $X\in B(h)$, $O, O_1, O_2\in\mathrm{Dom}\,D$, and $u\in M$. We have
$$
\begin{cases}
\overrightarrow D_{Xu}O = X\,\overrightarrow D_uO,\\[1mm]
\overrightarrow D_u(O_1O_2) = \big(\overrightarrow D_uO_1\big)O_2 + \overrightarrow D_{uO_1}O_2,\\[1mm]
O\overleftarrow D_{uX} = \big(O\overleftarrow D_u\big)X,\\[1mm]
(O_1O_2)\overleftarrow D_u = O_1\overleftarrow D_{O_2u} + O_1\big(O_2\overleftarrow D_u\big).
\end{cases}
$$
(ii) For any $k\in h_{\mathbb C}\otimes\mathbb C^2$ and $O\in\mathrm{Dom}\,D$, we have
$$
D_kO = \overrightarrow D_{\mathrm{id}_h\otimes k}\,O = O\,\overleftarrow D_{\mathrm{id}_h\otimes k}.
$$

Proof: These properties can be deduced easily from the definition of the gradient and the properties of the derivation operator D and the inner product $\langle\cdot,\cdot\rangle$.

We may also define a two-sided gradient
$$
\overleftrightarrow D_u : \mathrm{Dom}\,D\times\mathrm{Dom}\,D\longrightarrow B(h),\qquad
(O_1, O_2)\longmapsto O_1\overleftrightarrow D_uO_2 = O_1\big(\overrightarrow D_uO_2\big) + \big(O_1\overleftarrow D_u\big)O_2.
$$
For $k\in h_{\mathbb C}\otimes\mathbb C^2$ we have $O_1\overleftrightarrow D_{\mathrm{id}_h\otimes k}O_2 = D_k(O_1O_2)$.
The algebra B(h) of bounded operators on the symmetric Fock space h and the Hilbert module M are not Hilbert spaces with respect to the expectation in the vacuum vector Ω. Therefore, we cannot define the divergence operator or Skorohod integral δ as the adjoint of the derivation D. It might be tempting to try to define δX as an operator such that the condition
$$
\mathrm{IE}[(\delta X)B] = \mathrm{IE}\big[\overrightarrow D_XB\big] \tag{11.1}
$$
is satisfied for all $B\in\mathrm{Dom}\,\overrightarrow D_X$. However, this is not sufficient to characterise δX. In addition, the following Proposition 11.3.13 shows that this is not possible without imposing additional commutativity conditions; see also Proposition 11.3.15.

Proposition 11.3.13 Let $k\in h_{\mathbb C}\otimes\mathbb C^2$ with $k_1+ik_2\neq 0$. There exists no (possibly unbounded) operator M whose domain contains the vacuum vector such that
$$
\mathrm{IE}[MB] = \mathrm{IE}[D_kB]
$$
holds for all $B\in\mathrm{Dom}\,D_k$.

Proof: We assume that such an operator M exists and show that this leads to a contradiction. Letting $B\in B(h)$ be the operator defined by
$$
h\ni\psi\longmapsto B\psi := \langle k_1+ik_2, \psi\rangle\,\Omega,
$$
it is easy to see that $B\in\mathrm{Dom}\,D_k$ and that $D_kB$ is given by
$$
(D_kB)\psi = \frac{i}{2}\,\langle k_1+ik_2,\psi\rangle\,(k_1+ik_2) - \frac{i}{2}\,\big\langle\big(Q(k_1)-P(k_2)\big)(k_1+ik_2),\psi\big\rangle\,\Omega,
$$
$\psi\in h$. Therefore, if M existed, we would have, since $B\Omega = 0$,
$$
0 = \langle\Omega, MB\,\Omega\rangle = \mathrm{IE}[MB] = \mathrm{IE}[D_kB] = \langle\Omega, (D_kB)\Omega\rangle = -\frac{i}{2}\,\langle k_1+ik_2, k_1+ik_2\rangle,
$$
which is clearly impossible.

We now introduce the analogue of smooth elementary h-valued random variables, as
$$
S_h = \Big\{\sum_{j=1}^n F_j\otimes h^{(j)} \;:\; F_1,\dots,F_n\in S,\ h^{(1)},\dots,h^{(n)}\in h_{\mathbb C}\otimes\mathbb C^2,\ n\in\mathbb N\Big\}.
$$


Given $A, B\in B(h)$ and $u\in S_h$ of the form $u = \sum_{j=1}^nF_j\otimes h^{(j)}$, defining $A\overleftrightarrow\delta_uB$ by
$$
A\overleftrightarrow\delta_uB := \frac12\sum_{j=1}^n\Big\{P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big),\ AF_jB\Big\} - \sum_{j=1}^n A\big(D_{h^{(j)}}F_j\big)B,
$$
Corollary 11.3.8 shows that
$$
\mathrm{IE}\big[A\overleftrightarrow\delta_uB\big] = \mathrm{IE}\big[A\overleftrightarrow D_uB\big]. \tag{11.2}
$$
However, $A\overleftrightarrow\delta_uB$ can be written as a product AXB for some operator X only if A and B commute with $P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)$. In fact, relations such as (11.1) or (11.2) cannot be satisfied for all $A, B\in\mathrm{Dom}\,D$ unless we impose some commutativity conditions on A and B.
For this reason we now define a divergence operator that satisfies a weakened version of (11.2); see Proposition 11.3.15. This definition will be extended to a larger domain in Remark 11.3.18.

Definition 11.3.14 We set
$$
S_{h,\delta} = \Big\{\,u = \sum_{j=1}^nF_j\otimes h^{(j)}\in S_h \;:\; \frac12\sum_{j=1}^n\Big\{P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big), F_j\Big\} - \sum_{j=1}^nD_{h^{(j)}}F_j \ \text{defines a bounded operator on } h\Big\}\subset S_h,
$$
and define the divergence operator $\delta: S_{h,\delta}\longrightarrow B(h)$ by
$$
\delta(u) = \frac12\sum_{j=1}^n\Big\{P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big), F_j\Big\} - \sum_{j=1}^nD_{h^{(j)}}F_j,
$$
for $u = \sum_{j=1}^nF_j\otimes h^{(j)}\in S_{h,\delta}$.

In case $h = L^2(\mathbb R_+)$, the divergence operator coincides with the Hudson–Parthasarathy quantum stochastic integral for adapted integrable processes, and with the noncausal quantum stochastic integrals defined by Lindsay and Belavkin for integrable processes; see Section 11.4.
It is now easily checked that the relation $\delta(u)^* = \delta(u^*)$ holds for all $u\in S_{h,\delta}$.
Proposition 11.3.15 Let $u = \sum_{j=1}^nF_j\otimes h^{(j)}\in S_{h,\delta}$ and
$$
A, B\in\mathrm{Dom}\,D\cap\Big\{P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)\Big\}',
$$
i.e., A and B are in the commutant of
$$
\Big\{P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)\Big\};
$$
then we have
$$
\mathrm{IE}\big[A\,\delta(u)\,B\big] = \mathrm{IE}\big[A\overleftrightarrow D_uB\big].
$$

Remark 11.3.16 Note that $\delta: S_{h,\delta}\longrightarrow B(h)$ is the only linear map with this property, since already for one single element $h\in h_{\mathbb C}\otimes\mathbb C^2$, the sets
$$
\big\{A^*\Omega \;:\; A\in\mathrm{Dom}\,D\cap\{P(h_1)+Q(h_2)\}'\big\}
$$
and
$$
\big\{B\,\Omega \;:\; B\in\mathrm{Dom}\,D\cap\{P(h_1)+Q(h_2)\}'\big\}
$$
are still total in h.



Proof: From Corollary 11.3.8 we get
$$
\mathrm{IE}\big[A\overleftrightarrow D_uB\big] = \mathrm{IE}\big[A\langle u, DB\rangle + \langle DA, u\rangle B\big]
= \mathrm{IE}\Big[\sum_{j=1}^n AF_j\big(D_{h^{(j)}}B\big) + \sum_{j=1}^n\big(D_{h^{(j)}}A\big)F_jB\Big]
$$
$$
= \mathrm{IE}\Big[\frac12\sum_{j=1}^n\Big\{P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big),\ AF_jB\Big\} - \sum_{j=1}^n A\big(D_{h^{(j)}}F_j\big)B\Big].
$$
But since A and B commute with $P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)$, we can pull them out of the anticommutator, and we get
$$
\mathrm{IE}\big[A\overleftrightarrow D_uB\big]
= \mathrm{IE}\Big[\frac12\sum_{j=1}^n A\Big\{P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big), F_j\Big\}B - \sum_{j=1}^n A\big(D_{h^{(j)}}F_j\big)B\Big]
= \mathrm{IE}\big[A\,\delta(u)\,B\big].
$$
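The key step of this proof, pulling operators A and B out of the anticommutator when they commute with $P(h_1^{(j)})+Q(h_2^{(j)})$, can be illustrated numerically in a finite-dimensional toy model. The following sketch (our own illustration, not part of the text) takes a random Hermitian matrix P and chooses A, B as polynomials in P, so that they commute with P:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# Random Hermitian P, with A and B chosen as polynomials in P,
# so that A and B lie in the commutant of P.
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = P + P.conj().T
A = P @ P + 2 * P + np.eye(n)      # a polynomial in P
B = 3 * (P @ P) - P                # another polynomial in P
F = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def anti(X, Y):
    """Anticommutator {X, Y} = XY + YX."""
    return X @ Y + Y @ X

# {P, AFB} = A {P, F} B when A and B commute with P
lhs = anti(P, A @ F @ B)
rhs = A @ anti(P, F) @ B
assert np.allclose(lhs, rhs)
```

For generic A, B that do not commute with P the two sides differ, which is why the commutant condition in Proposition 11.3.15 cannot be dropped.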

We now give an explicit formula for the matrix elements between two exponential vectors of the divergence of a smooth elementary element $u\in S_{h,\delta}$. This is the analogue of the first fundamental lemma in the Hudson–Parthasarathy calculus; see Theorem 5.3.2 or [87, Proposition 25.1].

Theorem 11.3.17 Let $u\in S_{h,\delta}$. Then we have the formula
$$
\langle E(k_1), \delta(u)E(k_2)\rangle = \Big\langle E(k_1)\otimes\binom{ik_1-ik_2}{k_1+k_2},\ u\,E(k_2)\Big\rangle
$$
for the evaluation of the divergence δ(u) of u between two exponential vectors $E(k_1)$, $E(k_2)$, for $k_1, k_2\in h_{\mathbb C}$.

Remark 11.3.18 This suggests extending the definition of δ in the following way: set
$$
\mathrm{Dom}\,\delta = \Big\{u\in M \;:\; \exists M\in B(h)\ \text{such that}\ \forall k_1, k_2\in h_{\mathbb C},\quad
\langle E(k_1), ME(k_2)\rangle = \Big\langle E(k_1)\otimes\binom{ik_1-ik_2}{k_1+k_2},\ u\,E(k_2)\Big\rangle\Big\}, \tag{11.3}
$$
and define δ(u) for $u\in\mathrm{Dom}\,\delta$ to be the unique operator M that satisfies the condition in Equation (11.3).

Proof: Let $u = \sum_{j=1}^nF_j\otimes h^{(j)}$. Recalling the definition of $D_h$, we get the following alternative expression for δ(u):
$$
\delta(u) = \frac12\sum_{j=1}^n\Big(P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big)-iQ\big(h_1^{(j)}\big)+iP\big(h_2^{(j)}\big)\Big)F_j
+ \frac12\sum_{j=1}^nF_j\Big(P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big)+iQ\big(h_1^{(j)}\big)-iP\big(h_2^{(j)}\big)\Big)
$$
$$
= \sum_{j=1}^n\Big(a^+\big(h_2^{(j)}-ih_1^{(j)}\big)F_j + F_j\,a^-\big(h_2^{(j)}+ih_1^{(j)}\big)\Big). \tag{11.4}
$$
Evaluating this between two exponential vectors, we obtain
$$
\langle E(k_1), \delta(u)E(k_2)\rangle
= \sum_{j=1}^n\big\langle a^-\big(h_2^{(j)}-ih_1^{(j)}\big)E(k_1),\ F_jE(k_2)\big\rangle
+ \sum_{j=1}^n\big\langle E(k_1),\ F_j\,a^-\big(h_2^{(j)}+ih_1^{(j)}\big)E(k_2)\big\rangle
$$
$$
= \sum_{j=1}^n\Big(\big\langle k_1, h_2^{(j)}-ih_1^{(j)}\big\rangle + \big\langle k_2, h_2^{(j)}+ih_1^{(j)}\big\rangle\Big)\langle E(k_1), F_jE(k_2)\rangle
= \Big\langle E(k_1)\otimes\binom{ik_1-ik_2}{k_1+k_2},\ u\,E(k_2)\Big\rangle.
$$

Corollary 11.3.19 The divergence operator δ is closable in the weak topology.

Proof: Let $(u_\imath)_{\imath\in I}$ be a net such that $u_\imath\to 0$ and $\delta(u_\imath)\to\beta\in B(h)$ in the weak topology. Then we get
$$
\langle E(k_1), \beta E(k_2)\rangle = \lim_{\imath\in I}\langle E(k_1), \delta(u_\imath)E(k_2)\rangle
= \lim_{\imath\in I}\Big\langle E(k_1)\otimes\binom{ik_1-ik_2}{k_1+k_2},\ u_\imath E(k_2)\Big\rangle = 0,
$$
for all $k_1, k_2\in h_{\mathbb C}$, and thus $\beta = 0$.



We have the following analogues of Equations (9.11) and (9.12).

Proposition 11.3.20 Let $u, v\in S_{h,\delta}$, $F\in S$, $h\in h_{\mathbb C}\otimes\mathbb C^2$; then we have
$$
\begin{cases}
D_h\circ\delta(u) = \langle h, u\rangle + \delta\circ D_h(u),\\[2mm]
\delta(Fu) = F\delta(u) - F\overleftarrow D_u + \dfrac12\displaystyle\sum_{j=1}^n\Big[P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big), F\Big]F_j,\\[2mm]
\delta(uF) = \delta(u)F - \overrightarrow D_uF + \dfrac12\displaystyle\sum_{j=1}^n\Big[F, P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big)\Big]F_j.
\end{cases} \tag{11.5a}
$$
:n
Proof: a) Let $u = \sum_{j=1}^nF_j\otimes h^{(j)}$. Setting
$$
X_j = \frac12\Big(P\big(h_1^{(j)}\big)+Q\big(h_2^{(j)}\big)\Big),\qquad
Y_j = \frac{i}{2}\Big(Q\big(h_1^{(j)}\big)-P\big(h_2^{(j)}\big)\Big),\qquad
Y = \frac{i}{2}\big(Q(h_1)-P(h_2)\big),
$$
we have
$$
\delta(u) = \sum_{j=1}^n\big((X_j-Y_j)F_j + F_j(X_j+Y_j)\big),
$$
and therefore
$$
D_h\delta(u) = \sum_{j=1}^n\big(Y(X_j-Y_j)F_j + YF_j(X_j+Y_j) - (X_j-Y_j)F_jY - F_j(X_j+Y_j)Y\big).
$$
On the other hand, we have $D_h(u) = \sum_{j=1}^n(YF_j - F_jY)\otimes h^{(j)}$, and
$$
\delta\big(D_h(u)\big) = \sum_{j=1}^n\big((X_j-Y_j)YF_j - (X_j-Y_j)F_jY + YF_j(X_j+Y_j) - F_jY(X_j+Y_j)\big).
$$
Taking the difference of these two expressions, we get
$$
D_h\delta(u) - \delta\big(D_h(u)\big) = \sum_{j=1}^n\big(\big[Y, X_j-Y_j\big]F_j + F_j\big[Y, X_j+Y_j\big]\big)
= \sum_{j=1}^n\Big(\big\langle h_1, h_1^{(j)}\big\rangle + \big\langle h_2, h_2^{(j)}\big\rangle\Big)F_j = \langle h, u\rangle.
$$
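The cancellation in part a) is purely algebraic: before the commutators are evaluated, the identity holds for arbitrary noncommuting symbols. This can be verified symbolically, one summand j at a time (a sketch in our own notation, with $X = X_j$, $Z = Y_j$):

```python
import sympy as sp

# Noncommuting symbols standing for X_j, Y_j, Y, and F_j
X, Z, Y, F = sp.symbols('X Z Y F', commutative=False)

# summand of D_h δ(u):
t1 = Y*(X - Z)*F + Y*F*(X + Z) - (X - Z)*F*Y - F*(X + Z)*Y
# summand of δ(D_h(u)):
t2 = (X - Z)*Y*F - (X - Z)*F*Y + Y*F*(X + Z) - F*Y*(X + Z)

comm = lambda a, b: a*b - b*a
# claimed difference: [Y, X - Z] F + F [Y, X + Z]
diff = sp.expand(t1 - t2 - (comm(Y, X - Z)*F + F*comm(Y, X + Z)))
assert diff == 0
```

The remaining step of the proof, evaluating $[Y, X_j\mp Y_j]$ via the canonical commutation relations, then produces the scalar coefficients $\langle h_1, h_1^{(j)}\rangle + \langle h_2, h_2^{(j)}\rangle$.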

b) A straightforward computation gives
$$
\delta(Fu) = \sum_{j=1}^n\big((X_j-Y_j)FF_j + FF_j(X_j+Y_j)\big)
= F\sum_{j=1}^n\big((X_j-Y_j)F_j + F_j(X_j+Y_j)\big) - \sum_{j=1}^n\big[F, X_j-Y_j\big]F_j
$$
$$
= F\delta(u) - \sum_{j=1}^n\big[Y_j, F\big]F_j + \sum_{j=1}^n\big[X_j, F\big]F_j
= F\delta(u) - \sum_{j=1}^n\big(D_{h^{(j)}}F\big)F_j + \sum_{j=1}^n\big[X_j, F\big]F_j
= F\delta(u) - F\overleftarrow D_u + \sum_{j=1}^n\big[X_j, F\big]F_j,
$$
where we used that
$$
[X_j, F] = i\,\bigg\langle\binom{-h_2^{(j)}}{h_1^{(j)}},\ DF\bigg\rangle
$$
defines a bounded operator, since $F\in S\subseteq\mathrm{Dom}\,D$. Equation (11.5a) can be shown similarly.

If we impose additional commutativity conditions, which are always satisfied in the commutative case, then we get simpler formulas that are closer to the classical ones.

Corollary 11.3.21 If $u = \sum_{j=1}^nF_j\otimes h^{(j)}\in S_{h,\delta}$ and
$$
F\in\mathrm{Dom}\,D\cap\Big\{P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)\Big\}',
$$
then we have
$$
\delta(Fu) = F\delta(u) - F\overleftarrow D_u,\qquad \delta(uF) = \delta(u)F - \overrightarrow D_uF.
$$

11.3.5 Relation to the commutative case

Here we show that the noncommutative calculus contains the commutative calculus as a particular case, at least in the case of bounded functionals. It is well known that the symmetric Fock space $\Gamma(h_{\mathbb C})$ is isomorphic to the complexification $L^2(\Omega;\mathbb C)$ of the Wiener space $L^2(\Omega)$ over h, cf. [17, 58, 79]. Such an isomorphism
$$
I : L^2(\Omega;\mathbb C)\ \overset{\cong}{\longrightarrow}\ \Gamma(h_{\mathbb C})
$$
can be defined by extending the map
$$
I : e^{iW(h)}\longmapsto I\big(e^{iW(h)}\big) = e^{iQ(h)}\Omega = e^{-\|h\|^2/2}E(ih),\qquad h\in h.
$$

Using this isomorphism, a bounded functional $F\in L^\infty(\Omega;\mathbb C)$ becomes a bounded operator M(F) on $\Gamma(h_{\mathbb C})$, acting simply by multiplication,
$$
M(F)\psi = I\big(F\,I^{-1}(\psi)\big),\qquad \psi\in\Gamma(h_{\mathbb C}).
$$
In particular, we get $M\big(e^{iW(h)}\big) = U(0,h)$ for $h\in h$. We can show that the derivation of a bounded differentiable functional coincides with its derivation as a bounded operator.

Proposition 11.3.22 Let $k\in h$ and $F\in L^\infty(\Omega;\mathbb C)\cap\mathrm{Dom}\,\tilde D_k$ be such that $\tilde D_kF\in L^\infty(\Omega;\mathbb C)$. Then we have $M(F)\in\mathrm{Dom}\,D_{k_0}$, where $k_0 = \binom0k$, and
$$
M\big(\tilde D_kF\big) = D_{k_0}\big(M(F)\big).
$$
Proof: It is sufficient to check this for functionals of the form $F = e^{iW(h)}$, $h\in h$. We get
$$
M\big(\tilde D_ke^{iW(h)}\big) = M\big(i\langle k, h\rangle e^{iW(h)}\big)
= i\langle k, h\rangle\,U(0,h)
= i\,\bigg\langle\binom0k, \binom0h\bigg\rangle\,U(0,h)
= D_{k_0}U(0,h) = D_{k_0}\big(M(e^{iW(h)})\big).
$$
This implies that we also have an analogous result for the divergence.

11.4 The white noise case


Belavkin [13, 14] and Lindsay [68] have defined noncausal quantum stochastic integrals with respect to the creation, annihilation, and conservation processes on the boson Fock space over $L^2(\mathbb R_+)$, using the classical derivation and divergence operators. Our divergence operator coincides with their noncausal creation and annihilation integrals for integrable processes, up to a coordinate transformation. This immediately implies that for integrable adapted processes our integral coincides with the quantum stochastic creation and annihilation integrals defined by Hudson and Parthasarathy, cf. [54, 87].
Let now $h = L^2(T,\mathcal B,\mu)$, where $(T,\mathcal B,\mu)$ is a measure space such that $\mathcal B$ is countably generated. In this case we can apply the divergence operator to processes indexed by T, i.e., B(h)-valued measurable functions on T, since they can be interpreted as elements of the Hilbert module if they are square-integrable. Let $L^2\big(T, B(h)\big)$ denote the set of all B(h)-valued measurable functions $t\longmapsto X_t$ on T with $\int_T\|X_t\|^2\,dt < \infty$. Then the definition of the divergence operator becomes


$$
\mathrm{Dom}\,\delta = \Big\{X = (X^1, X^2)\in L^2\big(T, B(h)\big)\oplus L^2\big(T, B(h)\big) \;:\; \exists M\in B(h)\ \text{such that}
$$
$$
\langle E(k_1), ME(k_2)\rangle = \int_T\Big(i(k_2-k_1)(t)\,\big\langle E(k_1), X_t^1E(k_2)\big\rangle + (k_1+k_2)(t)\,\big\langle E(k_1), X_t^2E(k_2)\big\rangle\Big)\,d\mu(t),\quad k_1, k_2\in h_{\mathbb C}\Big\},
$$
and δ(X) is equal to the unique operator satisfying this condition.
The definitions of the noncausal quantum stochastic integrals of [13, 14] and [68] with respect to the creation, annihilation, and conservation processes on the boson Fock space over $L^2(\mathbb R_+)$ use the classical derivation and divergence operators $\tilde D$ and $\tilde\delta$ from the Malliavin calculus on the Wiener space $L^2(\Omega)$. Recall that $\tilde D$ and $\tilde\delta$ are defined using the isomorphism between $L^2(\Omega)$ and the Fock space $\Gamma\big(L^2(\mathbb R_+;\mathbb C)\big)$ over $L^2(\mathbb R_+;\mathbb C) = L^2(\mathbb R_+)_{\mathbb C}$, cf. Chapter 4. Namely, $\tilde D$ acts on the exponential vectors as
$$
\tilde DE(k) = E(k)\otimes k,\qquad k\in L^2(\mathbb R_+;\mathbb C),
$$
and $\tilde\delta$ is the adjoint of $\tilde D$. Note that, due to the isomorphism between $\Gamma\big(L^2(\mathbb R_+;\mathbb C)\big)\otimes L^2(\mathbb R_+;\mathbb C)$ and $L^2\big(\mathbb R_+;\Gamma(L^2(\mathbb R_+;\mathbb C))\big)$, the elements of $\Gamma\big(L^2(\mathbb R_+;\mathbb C)\big)\otimes L^2(\mathbb R_+;\mathbb C)$ can be interpreted as functions on $\mathbb R_+$, which allows us to write $\big(\tilde DE(k)\big)_t = k(t)E(k)$ almost surely.
The action of the annihilation integral $\int F_t\,da_t^-$ on a vector $\psi\in\Gamma\big(L^2(\mathbb R_+;\mathbb C)\big)$ is then defined as the Bochner integral
$$
\int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^-\,\psi := \int_{\mathbb R_+}F_t\,(\tilde D\psi)_t\,dt,
$$
and that of the creation integral as
$$
\int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^+\,\psi := \tilde\delta(F_\cdot\psi).
$$
We will also use the notation
$$
\delta(X) = \int_T X_t^1\,dP(t) + \int_T X_t^2\,dQ(t),
$$
and call δ(X) the Hitsuda–Skorohod integral of X. These definitions satisfy the adjoint relations

$$
\bigg(\int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^-\bigg)^* \supset \int_{\mathbb R_+}^{\mathrm{BL}} F_t^*\,da_t^+,
\qquad\text{and}\qquad
\bigg(\int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^+\bigg)^* \supset \int_{\mathbb R_+}^{\mathrm{BL}} F_t^*\,da_t^-.
$$
It turns out that our Hitsuda–Skorohod integral operator δ coincides, up to a coordinate transformation, with the above creation and annihilation integrals. This immediately implies that for adapted, integrable processes our integral also coincides with the quantum stochastic creation and annihilation integrals defined by Hudson and Parthasarathy, cf. [54, 87].
Proposition 11.4.1 Let $(T,\mathcal B,\mu) = \big(\mathbb R_+, \mathcal B(\mathbb R_+), dx\big)$, i.e., the positive half-line with the Lebesgue measure, and let $X = (X^1, X^2)\in\mathrm{Dom}\,\delta$. Then we have
$$
\int_{\mathbb R_+}X_t^1\,dP(t) + \int_{\mathbb R_+}X_t^2\,dQ(t)
= \int_{\mathbb R_+}^{\mathrm{BL}}\big(X_t^2-iX_t^1\big)\,da_t^+ + \int_{\mathbb R_+}^{\mathrm{BL}}\big(X_t^2+iX_t^1\big)\,da_t^-.
$$

Proof: To prove this, we show that the Belavkin–Lindsay integrals satisfy the same formula for the matrix elements between exponential vectors. Let $(F_t)_{t\in\mathbb R_+}\in L^2\big(\mathbb R_+, B(h)\big)$ be such that its creation integral in the sense of Belavkin and Lindsay is defined with a domain containing the exponential vectors. Then we get
$$
\Big\langle E(k_1), \int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^+\,E(k_2)\Big\rangle
= \big\langle E(k_1), \tilde\delta\big(F_\cdot E(k_2)\big)\big\rangle
= \big\langle\big(\tilde DE(k_1)\big)_\cdot,\ F_\cdot E(k_2)\big\rangle
= \int_{\mathbb R_+}\overline{k_1(t)}\,\langle E(k_1), F_tE(k_2)\rangle\,dt.
$$
For the annihilation integral we deduce the formula
$$
\Big\langle E(k_1), \int_{\mathbb R_+}^{\mathrm{BL}} F_t\,da_t^-\,E(k_2)\Big\rangle
= \Big\langle\int_{\mathbb R_+}^{\mathrm{BL}} F_t^*\,da_t^+\,E(k_1),\ E(k_2)\Big\rangle
= \overline{\int_{\mathbb R_+}\overline{k_2(t)}\,\langle E(k_2), F_t^*E(k_1)\rangle\,dt}
= \int_{\mathbb R_+}k_2(t)\,\langle E(k_1), F_tE(k_2)\rangle\,dt.
$$

Comparing these matrix elements with the defining relation for δ(X) on $\mathrm{Dom}\,\delta$ above shows that both sides of the stated identity have the same matrix elements between exponential vectors, which proves the proposition.

The integrals defined by Belavkin and Lindsay are an extension of those defined by Hudson and Parthasarathy.

Corollary 11.4.2 For adapted processes $X\in\mathrm{Dom}\,\delta$, the Hitsuda–Skorohod integral
$$
\delta(X) = \int_T X_t^1\,dP(t) + \int_T X_t^2\,dQ(t)
$$
coincides with the Hudson–Parthasarathy quantum stochastic integral defined in [54].

11.4.1 Iterated integrals

Here we informally discuss the iterated integrals of deterministic functions, showing a close relation between these iterated integrals and the so-called Wick product or normal-ordered product. Although this involves unbounded operators, on which the divergence operator δ has not been formally defined, the construction can be made rigorous by choosing an appropriate common invariant domain for these operators, using, e.g., vectors with a finite chaos decomposition. Namely, in order to iterate the operator, we start by defining δ on $B(h)\otimes h_{\mathbb C}\otimes\mathbb C^2\otimes\mathsf h$, where $\mathsf h$ is some Hilbert space, as $\delta\otimes\mathrm{id}_{\mathsf h}$. Next, using Equation (11.4), one can show by induction that
$$
\delta^n\bigg(\binom{h_1^{(1)}}{h_2^{(1)}}\otimes\cdots\otimes\binom{h_1^{(n)}}{h_2^{(n)}}\bigg)
= \sum_{I\subseteq\{1,\dots,n\}}\ \prod_{j\in I}a^+\big(h_2^{(j)}-ih_1^{(j)}\big)\ \prod_{j\in\{1,\dots,n\}\setminus I}a^-\big(h_2^{(j)}+ih_1^{(j)}\big),
$$
for $h^{(1)} = \binom{h_1^{(1)}}{h_2^{(1)}},\dots,h^{(n)} = \binom{h_1^{(n)}}{h_2^{(n)}}\in h_{\mathbb C}\otimes\mathbb C^2$. This is just the Wick product of $P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big),\dots,P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)$, i.e.,
$$
\delta^n\bigg(\binom{h_1^{(1)}}{h_2^{(1)}}\otimes\cdots\otimes\binom{h_1^{(n)}}{h_2^{(n)}}\bigg)
= \Big(P\big(h_1^{(1)}\big)+Q\big(h_2^{(1)}\big)\Big)\diamond\cdots\diamond\Big(P\big(h_1^{(n)}\big)+Q\big(h_2^{(n)}\big)\Big),
$$
where the Wick product ⋄ is defined in terms of the momentum and position operators on the algebra generated by $\{P(k), Q(k) : k\in h_{\mathbb C}\}$ by
$$
\begin{cases}
P(h)\diamond X = X\diamond P(h) = -ia^+(h)X + iXa^-(h),\\[1mm]
Q(h)\diamond X = X\diamond Q(h) = a^+(h)X + Xa^-(h),
\end{cases}
$$
$X\in\mathrm{alg}\,\{P(k), Q(k) : k\in h_{\mathbb C}\}$, $h\in h_{\mathbb C}$. Equivalently, in terms of creation and annihilation operators we have
$$
\begin{cases}
a^+(h)\diamond X = X\diamond a^+(h) = a^+(h)X,\\[1mm]
a^-(h)\diamond X = X\diamond a^-(h) = Xa^-(h).
\end{cases}
$$
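The normal-ordering effect of the Wick product can be checked numerically for one mode with $h = 1$: applying the rules $a^+\diamond X = a^+X$ and $a^-\diamond X = Xa^-$ to $Q = a^- + a^+$ gives $Q\diamond Q = Q^2 - 1$, up to effects at the truncation cut-off (a sketch under our own single-mode conventions, not the book's general setting):

```python
import numpy as np

N = 25
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # truncated annihilation
ad = a.conj().T                              # truncated creation
Q = a + ad                                   # Q(h) with h = 1, one mode

# Wick rules: a^+ ⋄ X = a^+ X and a^- ⋄ X = X a^-,
# so Q ⋄ Q = ad·Q + Q·a  (creation to the left, annihilation to the right)
wick_QQ = ad @ Q + Q @ a

# normal ordering removes the contraction [a, ad] = 1:  Q ⋄ Q = Q² - 1
diff = Q @ Q - wick_QQ
# away from the truncation edge the difference is exactly the identity
assert np.allclose(diff[:-1, :-1], np.eye(N - 1))
```

The residual entry in the bottom-right corner of `diff` is the usual artifact of truncating $[a, a^\dagger] = 1$ to a finite matrix.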

Notes
Another definition of D and δ on noncommutative operator algebras has been considered by Biane and Speicher in the free case [19], where the operator algebra is isomorphic to the full Fock space. In [74], Mai, Speicher, and Weber study the regularity of distributions in free probability. Due to the lack of commutativity, it seems impossible in their approach to use an integration by parts formula, so that they were compelled to find alternative methods. It would be interesting to apply these methods to quantum stochastic differential equations.
Our approach to quantum white noise calculus is so far rather restrictive, since we require the derivatives DX to be bounded operators. Dealing with unbounded operators is necessary for applications of quantum Malliavin calculus to more realistic physical models. Ji and Obata [59, 60] have defined a creation-derivative and an annihilation-derivative in the setting of quantum white noise theory. Up to a basis change (they derive with respect to $a_t^-$ and $a_t^+$, while we derive with respect to P and Q), these are the same as our derivation operator. But working in the setting of white noise theory, they can derive much more general (i.e., unbounded) operators.

Exercises
Exercise 11.1 In the framework of Proposition 12.1.4, assume in addition that $X\in\mathrm{Dom}\,D_k^n$, $(D_kX)^{-1}\in\mathrm{Dom}\,D_k^n$, and
$$
\omega\in\bigcap_{1\le\kappa\le n}\mathrm{Dom}\,\big(Q(k_1)-P(k_2)\big)^\kappa\ \cap\ \bigcap_{1\le\kappa\le n}\mathrm{Dom}\,\big(Q(\bar k_1)-P(\bar k_2)\big)^\kappa.
$$
Show that the density of the distribution $\mu_{X,\phi}$ of $X\in B(h)$ in the state φ is $n-1$ times differentiable for all $n\ge 2$.
12

Smoothness of densities on real Lie algebras

How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?
(A. Einstein, in Geometry and Experience.)

In this chapter the noncommutative Malliavin calculus on the Heisenberg–Weyl algebra is extended to the affine algebra via a differential calculus and a noncommutative integration by parts. As previously, we obtain sufficient conditions for the smoothness of Wigner-type laws of noncommutative random variables, this time with gamma or continuous binomial marginals. The Malliavin calculus on the Heisenberg–Weyl algebra {P, Q, I} of Chapter 11 relies on the composition of a function with a couple of noncommutative random variables introduced via the Weyl calculus, and on a covariance identity which plays the role of an integration by parts formula.

12.1 Noncommutative Wiener space

In this section we use the operator D in the framework of Chapter 11 to give sufficient conditions for the existence and smoothness of densities for operators on h. The domain of the operator D is rather small, because we require δ(u) to be a bounded operator and "deterministic" non-zero elements $h\in h_{\mathbb C}\otimes\mathbb C^2$ cannot be integrable. As in the classical Malliavin calculus we will rely on a Girsanov transformation, here given by Proposition 10.4.1, which will also be used to derive sufficient conditions for the existence of smooth densities. In the sequel we let $H^{p,\kappa}(\mathbb R^2)$ denote the classical Sobolev space of orders $\kappa\in\mathbb N$ and $p\in[2,\infty]$.

Proposition 12.1.1 Let $\kappa\in\mathbb N$, $h\in h\otimes\mathbb R^2$ with $\langle h_1, h_2\rangle\neq 0$, and let φ be a vector state, i.e., there exists a unit vector $\omega\in h$ such that
$$
\phi(X) = \langle\omega, X\omega\rangle,\qquad X\in B(h).
$$
If there exists $k\in h_{\mathbb C}\otimes\mathbb C^2$ such that
$$
\omega\in\bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,\big(Q(k_1)^{\kappa_1}P(k_2)^{\kappa_2}\big)\ \cap\ \bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,\big(Q(\bar k_1)^{\kappa_1}P(\bar k_2)^{\kappa_2}\big)
$$
and
$$
\langle h_1, k_1\rangle\neq 0\quad\text{and}\quad\langle h_2, k_2\rangle\neq 0,
$$
then we have $w_{h,\phi}\in\bigcap_{2\le p\le\infty}H^{p,\kappa}(\mathbb R^2)$.

Proof: We will show the result for $\kappa = 1$; the general case can be shown similarly (see also the proof of Theorem 12.1.2). Let $\varphi\in\mathcal S(\mathbb R)$ be a Schwartz function, and let $p\in[1,2]$. Then we have
$$
\bigg|\int\frac{\partial\varphi}{\partial x}\,dW_{h,\phi}\bigg|
= \Big|\Big\langle\omega, O_h\Big(\frac{\partial\varphi}{\partial x}\Big)\omega\Big\rangle\Big|
= \frac{1}{2|\langle k_1, h_1\rangle|}\,\big|\big\langle\omega, i\big[Q(k_1), O_h(\varphi)\big]\omega\big\rangle\big|
\le C_{h,p}\,\frac{\|Q(k_1)\omega\| + \|Q(\bar k_1)\omega\|}{2|\langle k_1, h_1\rangle|}\,\|\varphi\|_p.
$$
Similarly, we get
$$
\bigg|\int\frac{\partial\varphi}{\partial y}\,dW_{h,\phi}\bigg|
\le C_{h,p}\,\frac{\|P(k_2)\omega\| + \|P(\bar k_2)\omega\|}{2|\langle k_2, h_2\rangle|}\,\|\varphi\|_p,
$$
and together these two inequalities imply $w_{h,\phi}\in H^{p',1}(\mathbb R^2)$ for $p' = p/(p-1)$.
We will give a more general result of this type in Theorem 12.1.2 below. Namely, we show that the derivation operator can be used to obtain sufficient conditions for the regularity of the joint Wigner densities of noncommuting random variables; Theorem 12.1.2 generalises Proposition 12.1.1 to arbitrary states.

Theorem 12.1.2 Let $\kappa\in\mathbb N$, $h\in h\otimes\mathbb R^2$ with $\langle h_1, h_2\rangle\neq 0$, and suppose that φ is of the form
$$
\phi(X) = \mathrm{tr}(\rho X),\qquad X\in B(h),
$$
for some density matrix ρ. If there exist $k, \ell\in h_{\mathbb C}\otimes\mathbb C^2$ such that
$$
\det\begin{pmatrix}\langle h_1, k_1\rangle & \langle h_2, k_2\rangle\\ \langle h_1, \ell_1\rangle & \langle h_2, \ell_2\rangle\end{pmatrix}\neq 0,
$$
$\rho\in\bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,\big(D_k^{\kappa_1}D_\ell^{\kappa_2}\big)$, and
$$
\mathrm{tr}\,\big|D_k^{\kappa_1}D_\ell^{\kappa_2}\rho\big| < \infty,\qquad \kappa_1+\kappa_2\le\kappa,
$$
then we have $w_{h,\phi}\in\bigcap_{2\le p\le\infty}H^{p,\kappa}(\mathbb R^2)$.

The absolute value of a normal operator is well defined via functional calculus. For a non-normal operator X we set $|X| = (X^*X)^{1/2}$; the square root is well defined via functional calculus, since $X^*X$ is positive and therefore normal.
Proof: Let
$$
A := \begin{pmatrix}\langle h_1, k_1\rangle & \langle h_2, k_2\rangle\\ \langle h_1, \ell_1\rangle & \langle h_2, \ell_2\rangle\end{pmatrix}
\qquad\text{and}\qquad
\begin{pmatrix}X_1\\ X_2\end{pmatrix} := \frac{i}{2}\,A^{-1}\begin{pmatrix}Q(k_1)-P(k_2)\\ Q(\ell_1)-P(\ell_2)\end{pmatrix};
$$
then we have
$$
[X_1, O_h(\varphi)] = \frac{1}{\det A}\Big(\langle h_2, \ell_2\rangle\,D_kO_h(\varphi) - \langle h_2, k_2\rangle\,D_\ell O_h(\varphi)\Big) = O_h\Big(\frac{\partial\varphi}{\partial x}\Big)
$$
and
$$
[X_2, O_h(\varphi)] = \frac{1}{\det A}\Big({-\langle h_1, \ell_1\rangle}\,D_kO_h(\varphi) + \langle h_1, k_1\rangle\,D_\ell O_h(\varphi)\Big) = O_h\Big(\frac{\partial\varphi}{\partial y}\Big),
$$
for all Schwartz functions $\varphi\in\mathcal S(\mathbb R)$. Therefore, we have
$$
\bigg|\int\frac{\partial^{\kappa_1+\kappa_2}\varphi}{\partial x^{\kappa_1}\partial y^{\kappa_2}}\,dW_{h,\phi}\bigg|
= \bigg|\mathrm{tr}\Big(\rho\,O_h\Big(\frac{\partial^{\kappa_1+\kappa_2}\varphi}{\partial x^{\kappa_1}\partial y^{\kappa_2}}\Big)\Big)\bigg|
= \Big|\mathrm{tr}\Big(\rho\,\big[X_1,\dots\big[X_1,\big[X_2,\dots\big[X_2, O_h(\varphi)\big]\cdots\big]\Big)\Big|
$$
(with $X_1$ appearing $\kappa_1$ times and $X_2$ appearing $\kappa_2$ times)
$$
= \Big|\mathrm{tr}\Big(\big[X_2,\dots\big[X_2,\big[X_1,\dots\big[X_1,\rho\big]\cdots\big]\,O_h(\varphi)\Big)\Big|
\le C_{\rho,\kappa_1,\kappa_2}\,\|O_h(\varphi)\| \le C_{\rho,\kappa_1,\kappa_2}\,C_{h,p}\,\|\varphi\|_p,
$$
for all $p\in[1,2]$, since $\rho\in\bigcap_{\kappa_1+\kappa_2\le\kappa}\mathrm{Dom}\,\big(D_k^{\kappa_1}D_\ell^{\kappa_2}\big)$ and $\mathrm{tr}\big(|D_k^{\kappa_1}D_\ell^{\kappa_2}\rho|\big) < \infty$ for all $\kappa_1+\kappa_2\le\kappa$, and thus
$$
C_{\rho,\kappa_1,\kappa_2} = \mathrm{tr}\,\big|\big[X_2,\dots\big[X_2,\big[X_1,\dots\big[X_1,\rho\big]\cdots\big]\big| < \infty.
$$
But this implies that the density of $dW_{h,\phi}$ is contained in the Sobolev spaces $H^{p,\kappa}(\mathbb R^2)$ for all $2\le p\le\infty$.

Example 12.1.3 Let $0 < \lambda_1\le\lambda_2\le\cdots$ be an increasing sequence of positive numbers and $\{e_j : j\in\mathbb N\}$ a complete orthonormal system for $h_{\mathbb C}$. Let
$$
T_t : h_{\mathbb C}\longrightarrow h_{\mathbb C}
$$
be the contraction semigroup defined by
$$
T_te_j = e^{-t\lambda_j}e_j,\qquad j\in\mathbb N,\ t\in\mathbb R_+,
$$
with generator $A = \sum_{j\in\mathbb N}\lambda_jP_j$. If the sequence increases fast enough to ensure that $\sum_{j=1}^\infty e^{-t\lambda_j} < \infty$, i.e., if $\mathrm{tr}\,T_t < \infty$ for $t > 0$, then the second quantisation $\rho_t = \Gamma(T_t) : h\longrightarrow h$ is a trace class operator with trace
$$
Z_t = \mathrm{tr}\,\rho_t = \sum_{n\in\mathbb N_f^\infty}\langle e_n, \rho_te_n\rangle,
$$
where we use $\mathbb N_f^\infty$ to denote the finite sequences of non-negative integers and $\{e_n : n\in\mathbb N_f^\infty\}$ is the complete orthonormal system of h consisting of the vectors
$$
e_n = e_1^{\circ n_1}\circ\cdots\circ e_r^{\circ n_r},\qquad n = (n_1,\dots,n_r)\in\mathbb N_f^\infty,
$$
i.e., the symmetrisation of the tensor $e_1\otimes\cdots\otimes e_1\otimes\cdots\otimes e_r\otimes\cdots\otimes e_r$ in which each vector $e_j$ appears $n_j$ times. We get

$$
Z_t = \sum_{n\in\mathbb N_f^\infty}\prod_{k=1}^\infty e^{-n_kt\lambda_k} = \prod_{k=1}^\infty\frac{1}{1-e^{-t\lambda_k}}
$$
for the trace of $\rho_t$. We shall be interested in the state defined by
$$
\phi(X) = \frac{1}{Z_t}\,\mathrm{tr}(\rho_tX),\qquad X\in B(h).
$$
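The evaluation of $Z_t$ as a product of geometric series can be illustrated numerically with a toy spectrum (the values of $\lambda_k$ and t below are our own, purely illustrative choices):

```python
import numpy as np
from itertools import product

lam = [1.0, 1.5, 2.2]   # a few eigenvalues lambda_k (illustrative)
t = 0.8
M = 60                  # occupation numbers truncated at M

# direct sum over occupation numbers n = (n_1, n_2, n_3)
Z_sum = sum(np.exp(-t * sum(n * l for n, l in zip(ns, lam)))
            for ns in product(range(M), repeat=3))
# closed form: product of geometric series, one per mode
Z_prod = np.prod([1.0 / (1.0 - np.exp(-t * l)) for l in lam])
assert abs(Z_sum - Z_prod) < 1e-8
```

For a genuine trace-class second quantisation the same factorisation runs over infinitely many modes, with convergence guaranteed by $\sum_j e^{-t\lambda_j} < \infty$.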
We get
$$
\sum_{n\in\mathbb N_f^\infty}\big|\big\langle e_n, |\rho_{t/2}\,a^\ell(e_j)|^2e_n\big\rangle\big|
= \sum_{n\in\mathbb N_f^\infty}\big\|\rho_{t/2}\,a^\ell(e_j)\,e_n\big\|^2
= \sum_{n\in\mathbb N_f^\infty}n_j(n_j-1)\cdots(n_j-\ell+1)\,e^{-(n_j-\ell)t\lambda_j}\prod_{k\neq j}e^{-n_kt\lambda_k}
$$
$$
\le \sum_{n=0}^\infty(n+\ell)^\ell\,e^{-nt\lambda_j}\,\prod_{k\neq j}\frac{1}{1-e^{-t\lambda_k}} < \infty,
$$

and therefore $\rho_t\,a^\ell(e_j)$ defines a bounded operator with finite trace for all $j, \ell\in\mathbb N$ and $t > 0$. Similarly, we get
$$
\mathrm{tr}\,\big|a^\ell(e_j)\rho_t\big| < \infty,\qquad \mathrm{tr}\,\big|\rho_t\,a^+(e_j)^\ell\big| < \infty,\quad\text{etc.},
$$
and
$$
\mathrm{tr}\,\big|P^{\ell_1}(e_{j_1})Q^{\ell_2}(e_{j_2})\rho_t\big| < \infty,\qquad \mathrm{tr}\,\big|P^{\ell_1}(e_{j_1})\rho_tQ^{\ell_2}(e_{j_2})\big| < \infty,
$$
$t > 0$, $j_1, j_2, \ell_1, \ell_2\in\mathbb N$.

For a given $h\in h\otimes\mathbb R^2$ with $\langle h_1, h_2\rangle\neq 0$ (and thus in particular $h_1\neq 0$ and $h_2\neq 0$), we can always find indices $j_1$ and $j_2$ such that $\langle h_1, e_{j_1}\rangle\neq 0$ and $\langle h_2, e_{j_2}\rangle\neq 0$. Therefore, we can check that, for all $\kappa\in\mathbb N$, all assumptions of Theorem 12.1.2 are satisfied with $k = \binom{e_{j_1}}{0}$ and $\ell = \binom{0}{e_{j_2}}$. Finally, we check that the Wigner density $w_{h,\phi}$ of $\big(P(h_1), Q(h_2)\big)$ with $\langle h_1, h_2\rangle\neq 0$ in the state $\phi(\cdot) = Z_t^{-1}\mathrm{tr}(\rho_t\,\cdot)$ belongs to $\bigcap_{\kappa\in\mathbb N}\bigcap_{2\le p\le\infty}H^{p,\kappa}(\mathbb R^2)$; in particular, its derivatives of all orders exist, and are bounded and square-integrable.
We now show that the aforementioned approach also applies to derive sufficient conditions for the regularity of the distribution of a single bounded self-adjoint operator. Recall that the distribution of a bounded self-adjoint operator X in the state φ is the unique measure $\mu_{X,\phi}$ on the real line such that
$$
\phi(X^n) = \int_{\mathbb R}x^n\,d\mu_{X,\phi}(x),\qquad n\in\mathbb N.
$$
Such a measure $\mu_{X,\phi}$ always exists, is supported on the interval $\big[-\|X\|, \|X\|\big]$, and is unique by the Carleman moment growth condition [25]. In the next proposition, for simplicity, we consider only vector states $\phi(\cdot) = \langle\omega, \cdot\,\omega\rangle$ associated to a unit vector $\omega\in h$.

Proposition 12.1.4 Let $X\in B(h)$ and assume that there exists $k\in h_{\mathbb C}\otimes\mathbb C^2$ such that
$$
\omega\in\mathrm{Dom}\,\big(Q(k_1)-P(k_2)\big)\ \cap\ \mathrm{Dom}\,\big(Q(\bar k_1)-P(\bar k_2)\big),
$$
$X\in\mathrm{Dom}\,D_k$, $X\cdot D_kX = D_kX\cdot X$, $D_kX$ is invertible, and $(D_kX)^{-1}\in\mathrm{Dom}\,D_k$. Then the distribution $\mu_{X,\phi}$ of $X\in B(h)$ in the state φ has a bounded density.

Proof: Since $X\cdot D_kX = D_kX\cdot X$, we have $D_kp(X) = (D_kX)\,p'(X)$ for all polynomials p. We therefore get
$$
D_k\big((D_kX)^{-1}p(X)\big) = p(X)\,D_k\big((D_kX)^{-1}\big) + p'(X).
$$

The hypotheses of the proposition ensure that
$$
\big|\big\langle\omega, D_k\big((D_kX)^{-1}p(X)\big)\omega\big\rangle\big|
\le \big\|(D_kX)^{-1}\big\|\,\|p(X)\|\,\frac{\big\|\big(Q(k_1)-P(k_2)\big)\omega\big\| + \big\|\big(Q(\bar k_1)-P(\bar k_2)\big)\omega\big\|}{2}
\le C_1\sup_{x\in[-\|X\|,\|X\|]}|p(x)|,
$$
and
$$
\big|\big\langle\omega, p(X)\,D_k\big((D_kX)^{-1}\big)\omega\big\rangle\big|
\le \big\|D_k\big((D_kX)^{-1}\big)\big\|\,\|p(X)\|
\le C_2\sup_{x\in[-\|X\|,\|X\|]}|p(x)|,
$$
and therefore allow us to get the estimate
$$
\bigg|\int_{-\|X\|}^{\|X\|}p'(x)\,d\mu_{X,\phi}(x)\bigg| = \big|\langle\omega, p'(X)\omega\rangle\big|
= \Big|\Big\langle\omega, \Big(D_k\big((D_kX)^{-1}p(X)\big) - p(X)\,D_k\big((D_kX)^{-1}\big)\Big)\omega\Big\rangle\Big|
\le (C_1+C_2)\sup_{x\in[-\|X\|,\|X\|]}|p(x)|
$$
for all polynomials p. But this implies that $\mu_{X,\phi}$ admits a bounded density.

12.2 Affine algebra

Recall from Section 7.5 that the Wigner density
$$
\widetilde W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2) = \frac{1}{2\pi|\xi_2|}\,W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)
$$
exists, and writing $\xi = \xi_1X_1^* + \xi_2X_2^*\in\mathcal G^*$ we have
$$
W_\rho(\xi) = \frac{|\xi_2|^{1/2}}{\sqrt{2\pi}}\int_{\mathbb R^2}e^{-i\xi_1x_1-i\xi_2x_2}\,\mathrm{Tr}\big[e^{-x_1X_1-x_2X_2}\rho\,C^{-1}\big]\,e^{-x_1/2}\operatorname{sinch}\frac{x_1}{2}\,dx_1\,dx_2,
$$
and for $\rho = |\phi\rangle\langle\psi|$,
$$
W_{|\phi\rangle\langle\psi|}(\xi)
= \frac{|\xi_2|^{1/2}}{\sqrt{2\pi}}\int_{\mathbb R^2}e^{-i\xi_1x_1-i\xi_2x_2}\,\big\langle\hat U\big(e^{x_1X_1+x_2X_2}\big)C^{-1}\psi\,\big|\,\phi\big\rangle_h\,e^{-x_1/2}\operatorname{sinch}\frac{x_1}{2}\,dx_1\,dx_2
$$
$$
= \frac{1}{2\pi}\int_{\mathbb R^3}e^{-i\xi_1x_1-i\xi_2x_2}\,\phi(e^{-x_1}\tau)\,\overline{\psi(\tau)}\,e^{-i\tau x_2}\,e^{-x_1/2}\operatorname{sinch}\frac{x_1}{2}\,e^{-(e^{-x_1}-1)|\tau|}\,e^{-\beta x_1/2}\,|\tau|^{\beta-1}\,e^{-x_1/2}\operatorname{sinch}\frac{x_1}{2}\,\frac{d\tau}{\Gamma(\beta)}\,dx_1\,dx_2
$$
$$
= \int_{\mathbb R}\phi\bigg(\frac{\xi_2e^{-x/2}}{\operatorname{sinch}\frac x2}\bigg)\,\overline\psi\bigg(\frac{\xi_2e^{x/2}}{\operatorname{sinch}\frac x2}\bigg)\,\frac{|\xi_2|\,e^{-ix\xi_1}}{\operatorname{sinch}\frac x2}\,e^{-|\xi_2|\frac{\cosh\frac x2}{\operatorname{sinch}\frac x2}}\,\bigg(\frac{|\xi_2|}{\operatorname{sinch}\frac x2}\bigg)^{\beta-1}\frac{dx}{\Gamma(\beta)}.
$$
Note that Wρ takes real values when ρ is self-adjoint. Next, we turn to proving
the smoothness of the Wigner function W|φ
ψ| . Let now H1,2 σ (R × (0, ∞))

denote the Sobolev space with respect to the norm


 ∞ 
1
f 2H σ (R×(0,∞)) = | f (ξ1 , ξ2 )|2 dξ1 dξ2 (12.1)
1,2
0 ξ2 R
 ∞ 
+ ξ2 (|∂1 f (ξ1 , ξ2 )|2 + |∂2 f (ξ1 , ξ2 )|2 )dξ1 dξ2 .
0 R

Note that if φ, ψ have supports in $\mathbb R_+$, then $W_{|\phi\rangle\langle\psi|}$ has support in $\mathbb R\times(0,\infty)$, and the conclusion of Theorem 12.2.1 reads $W_{|\phi\rangle\langle\psi|}\in H_{1,2}^\sigma\big(\mathbb R\times(0,\infty)\big)$.

Theorem 12.2.1 Let $\phi, \psi\in\mathrm{Dom}\,X_1\cap\mathrm{Dom}\,X_2$. Then
$$
\mathbf 1_{\mathbb R\times(0,\infty)}\,W_{|\phi\rangle\langle\psi|}\in H_{1,2}^\sigma\big(\mathbb R\times(0,\infty)\big).
$$

Proof: For $f\in C_c^\infty\big(\mathbb R\times(0,\infty)\big)$ we have
$$
\bigg|\int_{\mathbb R^2}f(\xi_1,\xi_2)\,W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,d\xi_1\,d\xi_2\bigg|
= 2\pi\,\big|\big\langle\phi\,\big|\,O\big(\xi_2f(\xi_1,\xi_2)\big)\psi\big\rangle_h\big|
\le 2\pi\,\|\phi\|_h\,\|\psi\|_h\,\big\|O\big(\xi_2f(\xi_1,\xi_2)\big)\big\|_{B_2(h)}
$$
$$
\le 2\pi\,\|\phi\|_h\,\|\psi\|_h\,\big\|\xi_2f(\xi_1,\xi_2)\big\|_{L^2_{\mathbb C}(\mathcal G^*;\,d\xi_1d\xi_2/|\xi_2|)}
\le 2\pi\,\|\phi\|_h\,\|\psi\|_h\,\|f\|_{L^2_{\mathbb C}(\mathcal G^*;\,\xi_2d\xi_1d\xi_2)},
$$
and for $x_1, x_2\in\mathbb R$:
$$
\bigg|\int_{\mathbb R^2}\big(x_1\partial_1f(\xi_1,\xi_2) + x_2\partial_2f(\xi_1,\xi_2)\big)\,W_{|\phi\rangle\langle\psi|}(\xi_1,\xi_2)\,d\xi_1\,d\xi_2\bigg|
= 2\pi\,\big|\big\langle\phi\,\big|\,O\big(x_1\xi_2\partial_1f(\xi_1,\xi_2) - x_2\xi_2\partial_2f(\xi_1,\xi_2)\big)\psi\big\rangle_h\big|
$$
$$
= 2\pi\,\big|\big\langle\phi\,\big|\,\big[x_1U(X_1) + x_2U(X_2),\ O(f)\big]\psi\big\rangle_h\big|
\le 2\pi\,\|\phi\|_h\,\big\|\big(x_1U(X_1) + x_2U(X_2)\big)\psi\big\|\,\|f\|_{L^2_{\mathbb C}(\mathcal G^*;\,d\xi_1d\xi_2/|\xi_2|)}.
$$

Under the same hypotheses we can show that $\mathbf 1_{\mathbb R\times(-\infty,0)}\,W_{|\phi\rangle\langle\psi|}$ belongs to the Sobolev space $H_{1,2}^\sigma\big(\mathbb R\times(-\infty,0)\big)$, which is defined similarly to (12.1). Note that the above result and the presence of $\sigma(\xi_1,\xi_2) = 2\pi|\xi_2|$ are consistent with the integrability properties of the gamma law, i.e., if
$$
f(\xi_1,\xi_2) = g(\xi_1)\gamma_\beta(\xi_2),\qquad \xi_1\in\mathbb R,\ \xi_2 > 0,\ g\neq 0,
$$
then $f\in H_{1,2}^\sigma\big(\mathbb R\times(0,\infty)\big)$ if and only if $\beta > 0$.

12.3 Towards a Hörmander-type theorem

In order to further develop the Malliavin calculus for quantum stochastic processes we need to apply the derivative operator $D_h$ to solutions of quantum stochastic differential equations, with the aim of finding sufficient conditions for their regularity. The goal, which remains to be achieved, would be to prove a Hörmander-type theorem for quantum stochastic processes. In this section we sketch an approach that could lead towards such a result.
Let h be a Hilbert space carrying a representation {P, Q} of the canonical commutation relations and let φ be a state on B(h). Recall that the Wigner function W of (P, Q) in the state φ satisfies
$$
\int_{\mathbb R^2}f(u,v)\,W(u,v)\,du\,dv = \phi\big(O_{P,Q}(f)\big);
$$
see Definition 7.3.3. Consider the Fock space $\Gamma_s\big(L^2(\mathbb R_+, \ell^2)\big)$ and a unitary cocycle $(U_t)_{t\ge0}$ on $h\otimes\Gamma_s\big(L^2(\mathbb R_+, \ell^2)\big)$, given as the solution of the quantum stochastic differential equation
$$
dU_t = \bigg(\sum_{k\in\mathbb N}R_k\,da_t^+(e_k) + \sum_{k,l\in\mathbb N}(S_{k,l}-\delta_{k,l})\,da_t^\circ(E_{k,l})
- \sum_{k,l\in\mathbb N}R_k^*S_{k,l}\,da_t^-(e_l) + K\,dt\bigg)U_t,
$$
with initial condition $U_0 = 1$, where $(e_n)_{n\in\mathbb N}$ is an orthonormal basis of $\ell^2$ and $E_{k,l}\in B(\ell^2)$ denotes the operator given by
$$
E_{k,l}\,e_j = \begin{cases}e_k, & \text{if } l = j,\\ 0, & \text{otherwise.}\end{cases}
$$
For $(U_t)_{t\ge0}$ to be unitary, the coefficients $(S_{k,l})_{k,l\in\mathbb N}$, $(R_k)_{k\in\mathbb N}$, which are operators on h, should be such that $(S_{k,l})_{k,l\in\mathbb N}\in B(h\otimes\ell^2)$ is unitary and K can be written as
$$
K = -iH - \frac12\sum_{k\in\mathbb N}R_k^*R_k,
$$
where H is a Hermitian operator; see, e.g., [87, Theorem 26.3]. The operators
$Q_t$ and $P_t$, defined by
$$
P_t := U_t^*\,P\otimes 1\,U_t,\qquad Q_t := U_t^*\,Q\otimes 1\,U_t,
$$
satisfy a quantum stochastic differential equation of the form
$$
dX_t = L(X)_t\,dt + \sum_{k\in\mathbb N}R_k(X)_t\,da_t^+(e_k)
+ \sum_{k\in\mathbb N}R_k(X^*)^*_t\,da_t^-(e_k) + \sum_{k,l\in\mathbb N}S_{k,l}(X)_t\,da_t^\circ(E_{k,l}),
$$
with initial condition $X_0 = X\otimes 1$, where
$$
\begin{cases}
L(X) = i[H, X] - \dfrac12\displaystyle\sum_k\big(R_k^*R_kX + XR_k^*R_k - 2R_k^*XR_k\big),\\[2mm]
L(X)_t = j_t\big(L(X)\big) = U_t^*\,L(X)\,U_t,\\[2mm]
R_k(X) = \displaystyle\sum_lS_{lk}^*\,[X, R_l],\qquad R_k(X)_t = U_t^*\,R_k(X)\,U_t,\\[2mm]
S_{kl}(X) = \displaystyle\sum_jS_{jk}^*\,X\,S_{jl} - \delta_{kl}X,\qquad S_{kl}(X)_t = U_t^*\,S_{kl}(X)\,U_t.
\end{cases}
$$
The operators $O_{P_t,Q_t}(f)$ obtained from $P_t$ and $Q_t$ by the Weyl calculus satisfy the same type of quantum stochastic differential equation. To begin, we consider the simpler case
$$
dU_t = \Big(R\,da_t^+ - R^*\,da_t^- - \frac12R^*R\,dt\Big)U_t, \tag{12.2}
$$
without the conservation part and with only one degree of freedom, i.e., $h = \mathbb C$. Then we have
$$
dX_t = U_t^*\Big(R^*XR - \frac12R^*RX - \frac12XR^*R\Big)U_t\,dt + U_t^*[X, R]U_t\,da_t^+ + U_t^*[R^*, X]U_t\,da_t^-
$$
$$
= U_t^*\Big(\Big(R^*XR - \frac12R^*RX - \frac12XR^*R\Big)dt + [X, R]\,da_t^+ + [R^*, X]\,da_t^-\Big)U_t. \tag{12.3}
$$
Next we investigate the differentiation of the solution of a QSDE, in the following two steps.

12.3.1 Derivative of a quantum stochastic integral

In Definition 11.3.1 we introduced the derivation operator $D_h$ for $h = (h_1, h_2)\in L^2(\mathbb R_+, \mathbb C^2)$ by
$$
D_hM = \frac{i}{2}\,\big[a^-(h_1-ih_2) + a^+(h_1+ih_2),\ M\big].
$$
Let
$$
M_t = \int_0^tF_s\,da_s^-,\qquad N_t = \int_0^tG_s\,da_s^+,
$$
and suppose that $M_t$, $N_t$, $F_s$, and $G_s$ are in the domain of $D_h$, with $F_s$, $G_s$ furthermore adapted. By the quantum Itô formula we get
a^−(h1_{[0,t]}) M_t = ∫_0^t h(s) M_s da_s^− + ∫_0^t a^−(h1_{[0,s]}) F_s da_s^−,
a^+(h1_{[0,t]}) M_t = ∫_0^t h(s) M_s da_s^+ + ∫_0^t a^+(h1_{[0,s]}) F_s da_s^−,
a^−(h1_{[0,t]}) N_t = ∫_0^t h(s) N_s da_s^− + ∫_0^t a^−(h1_{[0,s]}) G_s da_s^+ + ∫_0^t h(s) G_s ds,
a^+(h1_{[0,t]}) N_t = ∫_0^t h(s) N_s da_s^+ + ∫_0^t a^+(h1_{[0,s]}) G_s da_s^+,

and similar formulas hold for the products M_t a^−(h1_{[0,t]}), M_t a^+(h1_{[0,t]}), N_t a^−(h1_{[0,t]}), and N_t a^+(h1_{[0,t]}). Therefore, we have
[a^−(h), M_t] = ∫_0^t [a^−(h1_{[0,s]}), F_s] da_s^− = ∫_0^t [a^−(h), F_s] da_s^−,
[a^+(h), M_t] = ∫_0^t [a^+(h), F_s] da_s^− − ∫_0^t h(s) F_s ds,
[a^−(h), N_t] = ∫_0^t [a^−(h), G_s] da_s^+ + ∫_0^t h(s) G_s ds,
[a^+(h), N_t] = ∫_0^t [a^+(h), G_s] da_s^+.

Combining these formulas, we get the following expressions for the derivatives of quantum stochastic integrals,

D_h M_t = (i/2) [a^−(h_1 − ih_2) + a^+(h_1 + ih_2), M_t]
        = ∫_0^t D_h F_s da_s^− − (i/2) ∫_0^t (h_1(s) + ih_2(s)) F_s ds,
and

D_h N_t = (i/2) [a^−(h_1 − ih_2) + a^+(h_1 + ih_2), N_t]
        = ∫_0^t D_h G_s da_s^+ + (i/2) ∫_0^t (h_1(s) − ih_2(s)) G_s ds.
Time integrals commute with the derivation operator, i.e., we have

D_h ∫_0^t M_s ds = ∫_0^t D_h M_s ds.

12.3.2 Derivative of the solution


Let (U_t)_{t∈R_+} be a solution of Equation (12.2), then we get

D_h U_t = ∫_0^t (R da_s^+ − R^* da_s^− − (1/2) R^* R ds) D_h U_s
        + (i/2) ∫_0^t (h_1(s) + ih_2(s)) R^* U_s ds + (i/2) ∫_0^t (h_1(s) − ih_2(s)) R U_s ds
        = ∫_0^t (R da_s^+ − R^* da_s^− − (1/2) R^* R ds) D_h U_s + ∫_0^t R̃_s U_s ds,

where

R̃_s = (i/2)(h_1(s) + ih_2(s)) R^* + (i/2)(h_1(s) − ih_2(s)) R.
Similarly, we have

D_h U_t^* = ∫_0^t D_h U_s^* (R^* da_s^− − R da_s^+ − (1/2) R^* R ds)
          − (i/2) ∫_0^t U_s^* R^* (h_1(s) + ih_2(s)) ds − (i/2) ∫_0^t U_s^* R (h_1(s) − ih_2(s)) ds
          = ∫_0^t D_h U_s^* (R^* da_s^− − R da_s^+ − (1/2) R^* R ds) − ∫_0^t U_s^* R̃_s ds.

Finally, using (12.3), we get

D_h j_t(X) = D_h(U_t^* (X ⊗ 1) U_t) = (D_h U_t^*)(X ⊗ 1) U_t + U_t^* (X ⊗ 1)(D_h U_t)

= ∫_0^t (D_h U_s^*) ((R^* X R − (1/2) R^* R X − (1/2) X R^* R) ds + [X, R] da_s^+ + [R^*, X] da_s^−) U_s

+ ∫_0^t U_s^* ((R^* X R − (1/2) R^* R X − (1/2) X R^* R) ds + [X, R] da_s^+ + [R^*, X] da_s^−) (D_h U_s)

+ (i/2) ∫_0^t (h_1(s) − ih_2(s)) U_s^* [X, R] U_s ds − (i/2) ∫_0^t (h_1(s) + ih_2(s)) U_s^* [R^*, X] U_s ds

= ∫_0^t D_h j_s(L(X)) ds + ∫_0^t D_h j_s(R(X)) da_s^+ + ∫_0^t D_h j_s(R(X^*)^*) da_s^− − ∫_0^t j_s([R̃_s, X]) ds,

i.e., the "flow" D_h ∘ j_t satisfies an equation similar to that of j_t, but with an additional (inhomogeneous) term −∫_0^t j_s([R̃_s, X]) ds. The map j_t is a homomorphism, but D_h ∘ j_t will not be a homomorphism in general.

12.3.3 The other flow

Let us also define¹

k_t(X) = U_t (X ⊗ 1) U_t^*,

which satisfies the quantum stochastic differential equation

k_t(X) = X ⊗ 1 + ∫_0^t [R, k_s(X)] da_s^+ + ∫_0^t [k_s(X), R^*] da_s^− + ∫_0^t (R^* k_s(X) R − (1/2) R^* R k_s(X) − (1/2) k_s(X) R^* R) ds,

as can be shown using the quantum Itô formula and the quantum stochastic differential equations satisfied by U_t and U_t^*. Similarly, D_h k_t(X) satisfies the quantum stochastic differential equation


D_h k_t(X) = (D_h U_t)(X ⊗ 1) U_t^* + U_t (X ⊗ 1)(D_h U_t^*)

= ∫_0^t [R, D_h k_s(X)] da_s^+ + ∫_0^t [D_h k_s(X), R^*] da_s^−

+ ∫_0^t (R^* (D_h k_s(X)) R − (1/2) R^* R (D_h k_s(X)) − (1/2) (D_h k_s(X)) R^* R) ds

+ ∫_0^t [R̃_s, k_s(X)] ds.


¹ X ↦ U_t^* X U_t defines an automorphism of B(h ⊗ Γ(L²(R_+, ℓ²))) with inverse X ↦ U_t X U_t^*. j_t is the restriction of this map to B(h) ⊗ 1, and k_t is the restriction of the inverse.
We introduce the shorter notation Y_t = k_t(X), then we have D_h Y_0 = 0 and

D_h Y_t = ∫_0^t (R^* (D_h Y_s) R − (1/2) R^* R (D_h Y_s) − (1/2) (D_h Y_s) R^* R) ds
        + ∫_0^t [R, D_h Y_s] da_s^+ + ∫_0^t [D_h Y_s, R^*] da_s^−
        + (i/2) ∫_0^t ((h_1(s) − ih_2(s)) [Y_s, R] − (h_1(s) + ih_2(s)) [R^*, Y_s]) ds.

The last term is ∫_0^t [R̃_s, Y_s] ds, where

R̃_s = (i/2) h_1(s)(R − R^*) + (1/2) h_2(s)(R + R^*).
We see that D_h Y_t satisfies an inhomogeneous quantum stochastic differential equation, where the inhomogeneity is a function of Y_t. The homogeneous part is the same as for Y_t. We try a variation of constants, i.e., we assume that the solution has the form

D_h Y_t = U_t Z_t U_t^*,

since the solutions of the homogeneous equation are of the form U_t Z U_t^* (at least for initial conditions acting only on the initial space). For Z_t we make the Ansatz

Z_t = ∫_0^t F_s da_s^+ + ∫_0^t G_s da_s^− + ∫_0^t H_s ds,
with some adapted coefficients F_t, G_t, and H_t. Then the Itô formula yields

D_h Y_t = ∫_0^t (R^* U_s Z_s U_s^* R − (1/2) R^* R U_s Z_s U_s^* − (1/2) U_s Z_s U_s^* R^* R) ds
        + ∫_0^t [U_s Z_s U_s^*, R] da_s^+ + ∫_0^t [R^*, U_s Z_s U_s^*] da_s^−
        + ∫_0^t U_s dZ_s U_s^* − ∫_0^t U_s G_s U_s^* R ds − ∫_0^t R^* U_s F_s U_s^* ds.
Comparing this equation with the previous equation for D_h Y_t, we get

∫_0^t U_s dZ_s U_s^* − ∫_0^t U_s G_s U_s^* R ds − ∫_0^t R^* U_s F_s U_s^* ds = ∫_0^t [R̃_s, Y_s] ds.

Uniqueness of the integral representation of Z implies F_s = G_s = 0 (since ∫_0^t U_s dZ_s U_s^* = ∫_0^t U_s F_s da_s^+ U_s^* + ∫_0^t U_s G_s da_s^− U_s^* + ∫_0^t U_s H_s ds U_s^*, but there are no creation or annihilation integrals on the right-hand side) and

∫_0^t U_s H_s U_s^* ds = ∫_0^t [R̃_s, Y_s] ds,    i.e.,    H_s = U_s^* [R̃_s, Y_s] U_s,
0 ≤ s ≤ t. Recalling Y_t = U_t (X ⊗ 1) U_t^*, we can also rewrite the above as

H_s = [R_s, X ⊗ 1],    with    R_s = U_s^* R̃_s U_s.

Thus we have

Z_t = ∫_0^t [R_s, X ⊗ 1] ds,    and    D_h Y_t = U_t (∫_0^t [R_s, X ⊗ 1] ds) U_t^*.

The next step is to take Y_t = U_t^* O_{P,Q}(f) U_t = O_{P_t,Q_t}(f) and to find an expression for D_h Y_t involving derivatives of f.
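The variation-of-constants step above has a familiar classical analogue, which the following sketch illustrates for a scalar inhomogeneous linear ODE (the coefficient a and the inhomogeneity f are arbitrary illustrative choices, not taken from the text): writing y(t) = e^{at} z(t) reduces y′ = a y + f(t) to z′(s) = e^{−as} f(s), mirroring the Ansatz D_h Y_t = U_t Z_t U_t^*.

```python
import math

# Scalar analogue: solve y'(t) = a*y(t) + f(t), y(0) = 0, in two ways.
a = 0.5                       # illustrative coefficient
f = lambda s: math.sin(s)     # illustrative inhomogeneity

def solve_by_variation(t, n=20000):
    """Variation of constants: y = e^{a t} z with z'(s) = e^{-a s} f(s)."""
    h = t / n
    z = 0.0
    for k in range(n):        # trapezoidal rule for z(t)
        s0, s1 = k * h, (k + 1) * h
        z += 0.5 * h * (math.exp(-a * s0) * f(s0) + math.exp(-a * s1) * f(s1))
    return math.exp(a * t) * z

def solve_euler(t, n=200000):
    """Direct Euler integration of the same ODE, for comparison."""
    h = t / n
    y = 0.0
    for k in range(n):
        y += h * (a * y + f(k * h))
    return y

print(abs(solve_by_variation(2.0) - solve_euler(2.0)) < 1e-3)  # True
```

The same structure appears above: the homogeneous flow U_t · U_t^* plays the role of e^{at}, and Z_t collects the integrated inhomogeneity.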

Exercises

Exercise 12.1 Relation to the commutative case. Let

Q = B^− + B^+ = (1/2)((a_x^−)² + (a_x^+)²) = (P² − Q²)/4,
P = i(B^− − B^+) = (i/2)((a_x^−)² − (a_x^+)²) = (PQ + QP)/4.

1. Show that we have

[P, Q] = 2iM,    [P, M] = 2iQ,    [Q, M] = −2iP.

2. Show that

Q + M = B^− + B^+ + M = P²/2,    Q − M = B^− + B^+ − M = −Q²/2,

i.e., Q + M and M − Q have gamma laws.

3. Give the probability law of Q + M and Q − M.
4. Give the probability law of Q + αM when |α| < 1 and |α| > 1.
5. Find the classical analogues of the integration by parts formula (2.1) written as

IE[D_{(1,0)} F] = IE[(1/2)[P²/2, F] − F],

for α = 1, and

IE[D_{(1,0)} F] = IE[F − (1/2)[Q²/2, F]],

for α = −1.
Appendix

I was born not knowing and have had only a little time to change that
here and there.
(R.P. Feynman)
This appendix gathers some background and complements on orthogonal
polynomials, moments and cumulants, the Fourier transform, adjoint action
on Lie algebras, nets, closability of linear operators, and tensor products.

A.1 Polynomials
A.1.1 General idea
Consider a family (P_n)_{n∈N} of polynomials satisfying the orthogonality relation

∫_{−∞}^{∞} P_n(x) P_k(x) μ(dx) = 0,    n ≠ k,

with respect to a measure μ on R.

A.1.2 Finite support


We first consider the case where μ is supported on a finite set. If μ is supported on n points x_1, . . . , x_n, then L²_C(R, μ) has dimension n, and the monomials 1, x, . . . , x^{n−1} correspond to the vectors

(1, 1, . . . , 1),  (x_1, x_2, . . . , x_n),  (x_1², x_2², . . . , x_n²),  . . . ,  (x_1^{n−1}, x_2^{n−1}, . . . , x_n^{n−1}),


and the Vandermonde determinant formula

det ⎡ 1  x_1  x_1²  ···  x_1^{n−1} ⎤
    ⎢ 1  x_2  x_2²  ···  x_2^{n−1} ⎥
    ⎢ ⋮   ⋮    ⋮    ⋱     ⋮      ⎥  =  ∏_{1≤j<k≤n} (x_k − x_j)  ≠  0
    ⎣ 1  x_n  x_n²  ···  x_n^{n−1} ⎦
shows that these vectors are linearly independent. In this case the monomials 1, x, . . . , x^{n−1} are linearly independent and they form a basis of L²_C(R, μ). By Gram–Schmidt orthogonalisation we can transform the basis 1, x, . . . , x^{n−1} into an orthonormal basis P_0, . . . , P_{n−1} by letting P_0 = 1, and then recursively

P̃_n := x^n − ∑_{k=0}^{n−1} ⟨P_k(x), x^n⟩ P_k,

for n ≥ 1, with the normalisation

P_n = P̃_n / ⟨P̃_n, P̃_n⟩^{1/2},

where ⟨·, ·⟩ denotes the inner product of L²_C(R, μ), i.e.,

⟨f, g⟩ = ∫_R f(x) g(x) μ(dx).

Since L²_C(R, μ) has dimension n, it follows that the monomials x^m with m ≥ n are linear combinations of P_0, . . . , P_{n−1}. Therefore, we get

P̃_k = 0,    k ≥ n.
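As a numerical illustration of the finite-support case, the following sketch (with an arbitrary three-point measure, not taken from the text) runs the Gram–Schmidt recursion above and checks that the resulting polynomials are orthonormal in L²(R, μ):

```python
import numpy as np

# Gram-Schmidt orthonormalisation of 1, x, ..., x^{n-1} in L^2(R, mu)
# for a measure mu supported on n points (illustrative data).
x = np.array([-1.0, 0.0, 2.0])          # support points x_1, ..., x_n
w = np.array([0.2, 0.5, 0.3])           # masses, summing to 1

def inner(f, g):
    """<f, g> = sum_k w_k f(x_k) g(x_k), functions stored as value vectors."""
    return np.sum(w * f * g)

monomials = [x**k for k in range(len(x))]
P = []
for m in monomials:
    p = m.copy()
    for q in P:                          # subtract projections on earlier P_k
        p = p - inner(q, m) * q
    P.append(p / np.sqrt(inner(p, p)))   # normalise

gram = np.array([[inner(p, q) for q in P] for p in P])
print(np.allclose(gram, np.eye(len(x))))  # True: orthonormal basis
```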
Example A.1.1 Consider μ = pδ_{x_1} + qδ_{x_2} with p, q > 0 such that p + q = 1 and x_1, x_2 ∈ R. Here we get P_0 = 1 and

P̃_1 = x − ⟨1, x⟩ = x − px_1 − qx_2,
⟨P̃_1, P̃_1⟩ = ∫_{−∞}^{∞} (x − px_1 − qx_2)² μ(dx),

so that P_0 = 1 and P_1 form a basis of L²_C(R, μ). This recovers the example of 2 × 2 matrices.

A.1.3 Infinite support

Let us now suppose that μ is not supported on a finite set, in which case we build an orthonormal family (P_n)_{n∈N} of polynomials. Assuming that the nth polynomial P_n has degree n, n ∈ N, the polynomials P_0, . . . , P_n form a basis for the space of polynomials of degree not higher than n. It can be shown, cf. Theorems 4.1 and 4.4 in [27], that a family (P_n)_{n∈N} of polynomials such that deg(P_n) = n is orthogonal with respect to some measure μ on R if and only if there exist sequences (α_n)_{n∈N}, (β_n)_{n∈N} such that (P_n)_{n∈N} satisfies a three-term recurrence relation of the form

xP_n(x) = P_{n+1}(x) + α_n P_n(x) + β_n P_{n−1}(x),    n ≥ 1.

As an important particular case we have

xPn (x) = Pn+1 (x) + (α + β)nPn (x) + n(t + αβ(n − 1))Pn−1 (x),

for the Meixner polynomials (1934). When α = β = t = 1 we have

xP_n(x) = P_{n+1}(x) + 2nP_n(x) + n² P_{n−1}(x),

and we find the Laguerre polynomials (1879). When β = 0 and α = t = 1 we


have

xPn (x) = Pn+1 (x) + nPn (x) + nPn−1 (x),

and we find the Charlier polynomials that can be associated to numbers of


partitions without singletons, cf. [11]. When α = 0, β = 0, t = 1 we find

xPn (x) = Pn+1 (x) + nPn−1 (x),

which yields the Hermite polynomials that correspond to numbers of pairings,


cf. [11]. Let now μ be a probability measure on R whose moments of all orders
are finite, i.e.,
 ∞
|x|n μ(dx) < ∞, n ∈ N.
−∞

The Legendre polynomials are associated with μ the uniform distribution, and this generalises to the family of Gegenbauer polynomials (or ultraspherical polynomials) in case μ is the measure with density (1 − x²)^{α−1/2} 1_{[−1,1]} with respect to the Lebesgue measure, α > −1/2. Important special cases include the arcsine, uniform, and Wigner semicircle distributions. The Jacobi polynomials and the beta distribution constitute another generalisation.
Next we review in detail some important particular cases.

A.1.3.1 Hermite polynomials


Definition A.1.2 The Hermite polynomial H_n(x; σ²) of degree n ∈ N and parameter σ² > 0 is defined by

H0 (x; σ 2 ) = 1, H1 (x; σ 2 ) = x, H2 (x; σ 2 ) = x2 − σ 2 ,



and more generally from the recurrence relation

Hn+1 (x; σ 2 ) = xHn (x; σ 2 ) − nσ 2 Hn−1 (x; σ 2 ), n ≥ 1. (A.1)

In particular we have

Hn (x; 0) = xn , n ∈ N.

The generating function of Hermite polynomials is defined as

ψ_λ(x, σ²) = ∑_{n=0}^{∞} (λⁿ/n!) H_n(x; σ²),    λ ∈ (−1, 1).

Proposition A.1.3 The following statements hold on the Hermite polynomials:

i) Generating function:

ψ_λ(x, σ²) = e^{λx − λ²σ²/2},    x, λ ∈ R.

ii) Derivation rule:

∂H_n/∂x (x; σ²) = n H_{n−1}(x; σ²).

iii) Creation rule:

H_{n+1}(x; σ²) = (x − σ² ∂/∂x) H_n(x; σ²).
∂x
Proof: The recurrence relation (A.1) shows that the generating function ψ_λ satisfies the differential equation

∂ψ_λ/∂λ (x, σ²) = (x − λσ²) ψ_λ(x, σ²),    ψ_0(x, σ²) = 1,

which proves (i). From the expression of the generating function we deduce (ii), and by rewriting (A.1) we obtain (iii).
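The recurrence (A.1) and the rules of Proposition A.1.3 can be cross-checked symbolically; in the following sketch the helper H and the truncation order are illustrative choices:

```python
import sympy as sp

x, lam, sig2 = sp.symbols('x lambda sigma2')

# Hermite polynomials H_n(x; sigma^2) built from the recurrence (A.1).
def H(n):
    if n == 0:
        return sp.Integer(1)
    if n == 1:
        return x
    return sp.expand(x * H(n - 1) - (n - 1) * sig2 * H(n - 2))

# Derivation rule: dH_n/dx = n H_{n-1}
assert all(sp.expand(sp.diff(H(n), x) - n * H(n - 1)) == 0 for n in range(1, 7))

# Creation rule: H_{n+1} = (x - sigma^2 d/dx) H_n
assert all(sp.expand(H(n + 1) - (x * H(n) - sig2 * sp.diff(H(n), x))) == 0
           for n in range(6))

# Generating function: sum λ^n/n! H_n = exp(λx − λ²σ²/2), checked to order 6
series = sum(lam**n / sp.factorial(n) * H(n) for n in range(7))
target = sp.series(sp.exp(lam * x - lam**2 * sig2 / 2), lam, 0, 7).removeO()
print(sp.expand(series - target) == 0)  # True
```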
Next we state the orthogonality properties of the Hermite polynomials with respect to the Gaussian density:

∫_{−∞}^{∞} H_n(x; σ²) H_m(x; σ²) e^{−x²/(2σ²)} dx/√(2πσ²) = 1_{n=m} n! σ^{2n}.

We have

∂H_n/∂x (x; σ²) = n H_{n−1}(x; σ²),

and the partial differential equation

∂H_n/∂s (x; s) = −(1/2) ∂²H_n/∂x² (x; s),

i.e., the heat equation with initial condition

H_n(x; 0) = xⁿ,    x ∈ R,    n ∈ N.

A.1.3.2 Poisson–Charlier polynomials


Definition A.1.4 Let the Charlier polynomial of order n ∈ N and parameter
λ ≥ 0 be defined by

C0 (k, λ) = 1, C1 (k, λ) = k − λ, k ∈ R, λ ∈ R+ ,

and the recurrence relation

Cn+1 (k, λ) = (k − n − λ)Cn (k, λ) − nλCn−1 (k, λ), n ≥ 1.

Let

p_k(λ) = e^{−λ} λ^k/k!,    k ∈ N,    λ ∈ R_+,

denote the Poisson probability density, which satisfies the finite difference-differential equation

∂p_k/∂λ (λ) = −Δp_k(λ),    (A.2)

where Δ is the difference operator

Δf(k) := f(k) − f(k − 1),    k ∈ N.

Let also

ψ_λ(k, t) = ∑_{n=0}^{∞} (tⁿ/n!) C_n(k, λ),    t ∈ (−1, 1),

denote the generating function of Charlier polynomials.

Proposition A.1.5 For all k ∈ Z and λ ∈ R_+ we have the relations

C_n(k, λ) = (λⁿ/p_k(λ)) ∂ⁿp_k/∂λⁿ (λ),    (A.3a)

C_n(k + 1, λ) − C_n(k, λ) = n C_{n−1}(k, λ),    (A.3b)

C_{n+1}(k, λ) = k C_n(k − 1, λ) − λ C_n(k, λ),    (A.3c)
and the generating function ψ_λ(k, t) satisfies

ψ_λ(k, t) = e^{−λt} (1 + t)^k,    (A.4)

λ, t > 0, k ∈ N.

Proof: Relation (A.3c) follows from (A.3a) and (A.2) as

C_{n+1}(k, λ) = (λ^{n+1}/p_k(λ)) ∂^{n+1}p_k/∂λ^{n+1} (λ)
= −(λ^{n+1}/p_k(λ)) ∂ⁿp_k/∂λⁿ (λ) + (λ^{n+1}/p_k(λ)) ∂ⁿp_{k−1}/∂λⁿ (λ)
= −λ (λⁿ/p_k(λ)) ∂ⁿp_k/∂λⁿ (λ) + k (λⁿ/p_{k−1}(λ)) ∂ⁿp_{k−1}/∂λⁿ (λ)
= −λ C_n(k, λ) + k C_n(k − 1, λ).

Finally, using Relation (A.3c) we have

∂ψ_λ/∂t (k, t) = ∑_{n=1}^{∞} (t^{n−1}/(n − 1)!) C_n(k, λ)
= ∑_{n=0}^{∞} (tⁿ/n!) C_{n+1}(k, λ)
= −λ ∑_{n=0}^{∞} (tⁿ/n!) C_n(k, λ) + k ∑_{n=0}^{∞} (tⁿ/n!) C_n(k − 1, λ)
= −λ ψ_λ(k, t) + k ψ_λ(k − 1, t),

t ∈ (−1, 1), hence the generating function ψ_λ(k, t) satisfies the differential equation

∂ψ_λ/∂t (k, t) = −λ ψ_λ(k, t) + k ψ_λ(k − 1, t),    ψ_λ(k, 0) = 1,    k ≥ 1,

which yields (A.4) by induction on k.
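The recurrence of Definition A.1.4 and the generating function (A.4) can be checked symbolically; in the following sketch the helper C and the truncation order are illustrative choices:

```python
import sympy as sp

k, lam, t = sp.symbols('k lambda t')

# Charlier polynomials from the recurrence in Definition A.1.4.
def C(n):
    if n == 0:
        return sp.Integer(1)
    if n == 1:
        return k - lam
    return sp.expand((k - (n - 1) - lam) * C(n - 1) - (n - 1) * lam * C(n - 2))

# (A.4): sum t^n/n! C_n(k, λ) = e^{-λt} (1 + t)^k, checked for small integer k.
N = 6
for kk in range(4):
    series = sum(t**n * C(n).subs(k, kk) / sp.factorial(n) for n in range(N + 1))
    target = sp.series(sp.exp(-lam * t) * (1 + t)**kk, t, 0, N + 1).removeO()
    assert sp.expand(series - target) == 0

# (A.3b): C_n(k+1, λ) − C_n(k, λ) = n C_{n−1}(k, λ)
print(all(sp.expand(C(n).subs(k, k + 1) - C(n) - n * C(n - 1)) == 0
          for n in range(1, 6)))  # True
```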

We also have

∂^k p_k/∂λ^k (λ) = (−Δ)^k p_k(λ).
We also have the orthogonality properties of the Charlier polynomials with respect to the Poisson distribution, for the inner product

⟨u, v⟩ := ∑_{k=0}^{∞} p_k(λ) u(k) v(k) = ∑_{k=0}^{∞} e^{−λ} (λ^k/k!) u(k) v(k),

with λ = σ(A), i.e.,

⟨C_n(·, λ), C_m(·, λ)⟩_{ℓ²(N, p·(λ))} = e^{−λ} ∑_{k=0}^{∞} (λ^k/k!) C_n(k, λ) C_m(k, λ) = n! λⁿ δ_{n,m}.

The exponential vector satisfies

∑_{n=0}^{∞} (λⁿ/n!) C_n(ω(A), σ(A)) = e^{−λσ(A)} (1 + λ)^{ω(A)} = ψ_{σ(A)}(ω(A), λ).

A.1.3.3 Meixner polynomials

In this case we have the recurrence relation

(n + 1)P_{n+1} + (2βn + βm_0 − x)P_n + (n + m_0 − 1)P_{n−1} = 0,

with initial conditions P_{−1} = 0, P_0 = 1, for the rescaled polynomials

P_n = (∏_{k=1}^{n} k/(k + m_0))^{1/2} p_n.

According to the value of β we have to distinguish three cases.

1. |β| = 1: In this case we have, up to rescaling, Laguerre polynomials, i.e.,

P_n(x) = (−β)ⁿ L_n^{(m_0−1)}(βx),

where the Laguerre polynomials L_n^{(α)} are defined as in [63, Equation (1.11.1)]. The measure μ can be obtained by normalising the measure of orthogonality of the Laguerre polynomials, and it is equal to

μ(dx) = (|x|^{m_0−1}/Γ(m_0)) e^{−βx} 1_{βR_+}(x) dx.

If β = +1, then this measure is, up to a normalisation parameter, the usual χ²-distribution (with parameter m_0) of probability theory.
2. |β| < 1: In this case we find the Meixner–Pollaczek polynomials after rescaling,

P_n(x) = P_n^{(m_0/2)}(x/(2√(1 − β²)); π − arccos β).

For the definition of these polynomials see, e.g., [63, Equation (1.7.1)]. For the measure μ we get

μ(dx) = C exp((π − 2 arccos β) x/(2√(1 − β²))) |Γ(m_0/2 + ix/(2√(1 − β²)))|² dx,

where C has to be chosen such that μ is a probability measure.


3. |β| > 1: In this case we get Meixner polynomials

P_n(x) = (−c sgn β)ⁿ (∏_{k=1}^{n} (k + m_0 − 1)/k)^{1/2} M_n(x sgn β/(1/c − c) − m_0/2; m_0; c²)

after rescaling, where c = |β| − √(β² − 1). The definition of these polynomials can be found, e.g., in [63, Equation (1.9.1)]. The measure μ is again the measure of orthogonality of the polynomials P_n (normalised to a probability measure). We therefore get

μ = C ∑_{n=0}^{∞} (c^{2n}/n!) (m_0)_n δ_{x_n},

where

x_n = (n + m_0/2)(1/c − c) sgn β,    n ∈ N,

and

C^{−1} = ∑_{n=0}^{∞} (c^{2n}/n!) (m_0)_n = 1/(1 − c²)^{m_0}.

Here,

(m_0)_n = m_0(m_0 + 1) ··· (m_0 + n − 1)

denotes the Pochhammer symbol.


A.2 Moments and cumulants


In this section we provide some combinatorial background on the relationships
between the moments and cumulants of random variables and we refer the
reader to [89] and [90] for more information.
The cumulants (κnX )n≥1 of a random variable X have been defined in [110]
and were originally called the “semi-invariants” of X due to the property
κnX+Y = κnX + κnY , n ≥ 1, when X and Y are independent random variables.
Precisely, given the moment generating function

IE[e^{tX}] = ∑_{n=0}^{∞} (tⁿ/n!) IE[Xⁿ],    t ∈ R,

of a random variable X, the cumulants of X are the coefficients (κ_n^X)_{n≥1} appearing in the series expansion of the logarithmic moment generating function of X, i.e., we have

log(IE[e^{tX}]) = ∑_{n=1}^{∞} κ_n^X tⁿ/n!,    t ∈ R.

Given j_1, . . . , j_k, n ∈ N such that j_1 + ··· + j_k = n, recall the definition of the multinomial coefficient

\binom{n}{j_1, . . . , j_k} = n!/(j_1! ··· j_k!).

In addition to the multinomial identity

(x_1 + ··· + x_k)ⁿ = ∑_{j_1,...,j_k≥0, j_1+···+j_k=n} \binom{n}{j_1, . . . , j_k} x_1^{j_1} ··· x_k^{j_k},    n ∈ N,    (A.5)

we note the combinatorial identity

(∑_{n=1}^{∞} x_n)^k = ∑_{n=k}^{∞} ∑_{d_1+···+d_k=n, d_1≥1,...,d_k≥1} x_{d_1} ··· x_{d_k}.

This expression translates into the classical identity

IE[Xⁿ] = ∑_{a=1}^{n} ∑_{P_1∪···∪P_a={1,...,n}} κ^X_{|P_1|} ··· κ^X_{|P_a|},    (A.6)

which, based on the Faà di Bruno formula, links the moments IE[Xⁿ] of a random variable X with its cumulants (κ_n^X)_{n≥1}, cf. e.g., Theorem 1 of [71], and also [67] or § 2.4 and Relation (2.4.4) page 27 of [72]. In (A.6), the sum runs
over the partitions P_1, . . . , P_a of {1, . . . , n} with cardinals |P_i|. The cumulant formula (A.6) can be inverted to compute the cumulant κ_n^X from the moments μ_n^X of X as

κ_n^X = ∑_{a=1}^{n} (a − 1)! (−1)^{a−1} ∑_{P^n_1∪···∪P^n_a={1,...,n}} μ^X_{|P^n_1|} ··· μ^X_{|P^n_a|},

n ≥ 1, where the sum runs over the partitions P^n_1, . . . , P^n_a of {1, . . . , n} with cardinals |P^n_i|, by the Faà di Bruno formula, cf. Theorem 1 of [71], and also [67] or § 2.4 and Relation (2.4.3) page 27 of [72].
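Identity (A.6) can be verified symbolically by enumerating set partitions; in the following sketch the symbols k1, . . . , k6 stand for the cumulants and the truncation order n ≤ 6 is an arbitrary choice:

```python
from sympy import symbols, factorial, expand
from sympy.utilities.iterables import multiset_partitions

t = symbols('t')
kap = symbols('k1:7')                    # symbolic cumulants kappa_1..kappa_6
kappa = {i + 1: kap[i] for i in range(6)}

def moment_from_cumulants(n, kappa):
    """E[X^n] as a sum over set partitions of {1,...,n}, as in (A.6)."""
    total = 0
    for parts in multiset_partitions(list(range(n))):
        prod = 1
        for block in parts:
            prod *= kappa[len(block)]
        total += prod
    return expand(total)

# Reference: exp(sum kappa_n t^n/n!) generates the moments as t^n-coefficients;
# truncating the exponential series at order 6 is exact for these coefficients.
S = sum(kappa[n] * t**n / factorial(n) for n in range(1, 7))
ref = expand(sum(S**j / factorial(j) for j in range(7)))

for n in range(1, 7):
    assert expand(ref.coeff(t, n) * factorial(n) - moment_from_cumulants(n, kappa)) == 0
print("moment-cumulant relation (A.6) verified up to order 6")
```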
Example A.2.1

a) Gaussian cumulants. When X is centered we have κ_1^X = 0 and κ_2^X = IE[X²] = Var[X], and X is Gaussian if and only if κ_n^X = 0, n ≥ 3, i.e., κ_n^X = 1_{n=2} σ², n ≥ 1, or

(κ_1^X, κ_2^X, κ_3^X, κ_4^X, . . .) = (0, σ², 0, 0, . . .).

In addition, when X is centered Gaussian we have κ_n^X = 0, n ≠ 2, and (A.6) can be read as Wick's theorem for the computation of Gaussian moments of X ≃ N(0, σ²) by counting the pair partitions of {1, . . . , n}, cf. [57], as

IE[Xⁿ] = ∑_{k=1}^{n} ∑_{P^n_1∪···∪P^n_k={1,...,n}, |P^n_1|=2,...,|P^n_k|=2} κ^X_{|P^n_1|} ··· κ^X_{|P^n_k|}
       = σⁿ (n − 1)!!  if n is even,  and  0  if n is odd,    (A.7)

where the double factorial

(n − 1)!! = ∏_{1≤2k≤n} (2k − 1) = 2^{−n/2} n!/(n/2)!

counts the number of pair partitions of {1, . . . , n} when n is even.


b) Poisson cumulants. In the particular case of a Poisson random variable
Z  P(λ) with intensity λ > 0 we have

 ∞
 (λet )n
ent P(Z = n) = e−λ = eλ(e −1) ,
t
IE[etZ ] = t ∈ R+ ,
n!
n=0 n=0
hence κ_n^Z = λ, n ≥ 1, or

(κ_1^Z, κ_2^Z, κ_3^Z, κ_4^Z, . . .) = (λ, λ, λ, λ, . . .),

and by (A.6) we have

IE_λ[Zⁿ] = A_n(λ, . . . , λ) = ∑_{k=0}^{n} B_{n,k}(λ, . . . , λ)
         = ∑_{k=1}^{n} λ^k ∑_{P^n_1∪···∪P^n_k={1,...,n}} 1 = ∑_{k=0}^{n} λ^k S(n, k)
         = T_n(λ),    (A.8)

i.e., the n-th Poisson moment with intensity parameter λ > 0 is given by T_n(λ), where T_n is the Touchard polynomial of degree n used in Section 3.2, and S(n, k) denotes the number of partitions of {1, . . . , n} into k non-empty subsets. In particular the moment generating function of the Poisson distribution with parameter λ > 0 and jump size α is given by

t ↦ e^{λ(e^{αt} − 1)} = ∑_{n=0}^{∞} IE_λ[Zⁿ] (αt)ⁿ/n! = ∑_{n=0}^{∞} T_n(λ) (αt)ⁿ/n!.

In the case of centered Poisson random variables we note that Z and Z − IE[Z] have the same cumulants of order k ≥ 2, hence for Z − IE[Z], a centered Poisson random variable with intensity λ > 0, we have

IE[(Z − IE[Z])ⁿ] = ∑_{a=1}^{n} λ^a ∑_{P^n_1∪···∪P^n_a={1,...,n}, |P^n_1|≥2,...,|P^n_a|≥2} 1 = ∑_{k=0}^{n} λ^k S_2(n, k),

n ∈ N, where S_2(n, k) is the number of ways to partition a set of n objects into k non-empty subsets of size at least 2, cf. [99].
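Formula (A.8) can be checked numerically; the following sketch compares a direct evaluation of IE_λ[Zⁿ] with the Stirling-number expression (the intensity λ = 1.5 and the truncation at 100 terms are arbitrary choices):

```python
from math import comb, exp, factorial

def stirling2(n, k):
    """Stirling numbers of the second kind S(n, k), by inclusion-exclusion."""
    return sum((-1)**j * comb(k, j) * (k - j)**n for j in range(k + 1)) // factorial(k)

def poisson_moment(n, lam, terms=100):
    """Direct evaluation of E[Z^n] for Z ~ Poisson(lam)."""
    total, p = 0.0, exp(-lam)        # p = P(Z = m), starting at m = 0
    for m in range(terms):
        total += m**n * p
        p *= lam / (m + 1)
    return total

lam = 1.5
for n in range(1, 8):
    touchard = sum(stirling2(n, k) * lam**k for k in range(n + 1))  # T_n(lam)
    assert abs(poisson_moment(n, lam) - touchard) < 1e-8
print("Poisson moments match T_n(1.5) for n = 1, ..., 7")
```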

A.3 Fourier transform


The Fourier transform Fϕ of an integrable function ϕ ∈ L¹(Rⁿ) ∩ L²(Rⁿ) is defined as

(Fϕ)(x) := (2π)^{−n/2} ∫_{Rⁿ} e^{i⟨ξ,x⟩} ϕ(ξ) dξ,    x ∈ Rⁿ.

The inverse Fourier transform F^{−1} is given by

(F^{−1}ϕ)(ξ) = (2π)^{−n/2} ∫_{Rⁿ} e^{−i⟨ξ,x⟩} ϕ(x) dx,    ξ ∈ Rⁿ,
with the property F^{−1}(Fϕ) = ϕ. In particular when n = 2 we have

Fϕ(u, v) = (1/2π) ∫_{R²} ϕ(x, y) e^{iux+ivy} dx dy

and the inverse

F^{−1}ϕ(x, y) = (1/2π) ∫_{R²} ϕ(u, v) e^{−iux−ivy} du dv.

We also note the relation

∫_{−∞}^{∞} e^{iξ(x−y)} dξ dy = 2π δ_x(dy),    (A.9)

i.e.,

∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{iξ(x−y)} ϕ(y) dξ dy = 2π ϕ(x),

for ϕ a sufficiently smooth function in S(R).


When n = 1, given a real-valued random variable X with characteristic
function
 
(u) = IE eiuX , u ∈ R,

and probability density function ϕX (x), the inverse Fourier transform



1
F −1 ϕ(x) = √ ϕ(u)e−iux du,
2π R2
yields the relation

1
ϕX (x) = (F −1 )(x) = IE[eiuX ]e−iux du,
2π R

for the probability density function ϕX of X, provided the characteristic


function u  −→ IE[eiuX ] is integrable on R.
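The density-recovery relation above can be illustrated numerically; the following sketch (with the standard Gaussian characteristic function and ad hoc quadrature parameters, both illustrative choices) approximates the inversion integral by a Riemann sum:

```python
import numpy as np

# Recover a density from its characteristic function via
# phi_X(x) = (1/2π) ∫ E[e^{iuX}] e^{-iux} du,
# illustrated for the standard Gaussian, Phi(u) = e^{-u²/2}.
Phi = lambda u: np.exp(-u**2 / 2)

def density(x, U=40.0, N=400001):
    """Riemann-sum approximation of the inversion integral over [-U, U]."""
    u = np.linspace(-U, U, N)
    du = u[1] - u[0]
    vals = Phi(u) * np.exp(-1j * u * x)
    return float((vals.sum() * du).real / (2 * np.pi))

x = 0.7
exact = float(np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
print(abs(density(x) - exact) < 1e-6)  # True: the N(0,1) density is recovered
```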
When n = 2, given a couple (X, Y) of classical random variables with characteristic function

Φ(u, v) = IE[e^{iuX+ivY}],    u, v ∈ R,

such that Φ is integrable on R², the couple (X, Y) admits a joint probability density function ϕ_{(X,Y)} given by

ϕ_{(X,Y)}(x, y) = (1/2π)(F^{−1}Φ)(x, y) = (1/(2π)²) ∫_{R²} Φ(u, v) e^{−iux−ivy} du dv.    (A.10)
A.4 Cauchy–Stieltjes transform

Let μ be a probability measure on R, then we define a function

G_μ : C∖R −→ C

by

G_μ(z) = ∫_R 1/(z − t) μ(dt).

The function G_μ is called the Cauchy transform or Stieltjes transform of μ. We have

1/|z − x| = 1/√((ℜ(z) − x)² + ℑ(z)²) ≤ 1/|ℑ(z)|,

so the integral is well-defined for all z ∈ C with ℑ(z) ≠ 0 and defines a holomorphic function on C∖R. Furthermore, since

1/(z − x) = (z̄ − x)/|z − x|²,

we have ℑ G_μ(z) < 0 if ℑ(z) > 0, and G_μ(z̄) is the complex conjugate of G_μ(z). Therefore, it is enough to know G_μ on C⁺ = {z ∈ C : ℑ(z) > 0}.

Theorem A.4.1 [5, Section VI, Theorem 3] Let

G : C⁺ −→ C⁻ = {z ∈ C : ℑ(z) < 0}

be a holomorphic function. Then there exists a probability measure μ on R such that

G(z) = ∫_R 1/(z − x) μ(dx)

for z ∈ C⁺ if and only if

lim sup_{y→∞} y|G(iy)| = 1.

The measure μ is uniquely determined by G, and it can be recovered by the Stieltjes inversion formula

μ(B) = −(1/π) lim_{ε↘0} ∫_B ℑ G(x + iε) dx

for B ⊆ R a Borel set such that μ(∂B) = 0.
If the measure μ has compact support, say in the interval [−M, M] for some M > 0, then we can express G_μ in terms of the moments of μ,

m_n(μ) = ∫ xⁿ μ(dx),

for n ∈ N, as a power series

G_μ(z) = ∫ 1/(z − x) μ(dx) = ∑_{n=0}^{∞} ∫ xⁿ/z^{n+1} μ(dx) = ∑_{n=0}^{∞} m_n(μ)/z^{n+1},

which converges for |z| > M.
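Both the Stieltjes inversion formula and the moment expansion can be illustrated on the semicircle law, whose Cauchy transform G(z) = (z − √(z² − 4))/2 on C⁺ is a standard closed form (the evaluation points and tolerances below are arbitrary choices):

```python
import numpy as np

# Cauchy transform of the semicircle law on [-2, 2].
def G(z):
    return (z - np.sqrt(z * z - 4 + 0j)) / 2

# Stieltjes inversion: density(x) = -(1/π) lim_{ε↓0} Im G(x + iε).
x, eps = 0.8, 1e-6
recovered = -G(x + 1j * eps).imag / np.pi
exact = np.sqrt(4 - x**2) / (2 * np.pi)
print(abs(recovered - exact) < 1e-5)  # True

# Moment expansion G(z) = sum m_n / z^{n+1}: here m_0 = 1, m_2 = 1, m_4 = 2.
z = 50.0
print(abs(G(z) - (1 / z + 1 / z**3 + 2 / z**5)) < 1e-9)  # True
```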

A.5 Adjoint action

Given two elements X and Y of a Lie algebra, the adjoint actions ad_X and Ad_{e^X} are defined by

ad_X Y := [X, Y]    and    Ad_{e^X} Y := e^X Y e^{−X}.

In particular we have

Ad_{e^X} Y = e^{ad_X} Y,

and

Ad_{e^X} Y := e^X Y e^{−X}
= ∑_{n,m=0}^{∞} ((−1)^m/(n! m!)) Xⁿ Y X^m
= ∑_{k=0}^{∞} (1/k!) ∑_{m=0}^{k} (−1)^m \binom{k}{m} X^{k−m} Y X^m
= Y + [X, Y] + (1/2)[X, [X, Y]] + ···
= e^{ad_X} Y.

The identity

∑_{m=0}^{k} (−1)^m \binom{k}{m} X^{k−m} Y X^m = [X, [X, [··· [X, [X, Y]] ···]]]    (k times)

clearly holds for k = 0, 1, and can be extended by induction to all k ≥ 2, as follows:

[X, [X, [··· [X, [X, Y]] ···]]]    (k + 1 times)
= [X, ∑_{m=0}^{k} (−1)^m \binom{k}{m} X^{k−m} Y X^m]
= ∑_{m=0}^{k} (−1)^m \binom{k}{m} [X, X^{k−m} Y X^m]
= ∑_{m=0}^{k} (−1)^m \binom{k}{m} (X^{k+1−m} Y X^m − X^{k−m} Y X^{m+1})
= ∑_{m=0}^{k} (−1)^m \binom{k}{m} X^{k+1−m} Y X^m − ∑_{m=1}^{k+1} (−1)^{m−1} \binom{k}{m−1} X^{k+1−m} Y X^m
= ∑_{m=0}^{k+1} (−1)^m (\binom{k}{m} + \binom{k}{m−1}) X^{k+1−m} Y X^m
= ∑_{m=0}^{k+1} (−1)^m \binom{k+1}{m} X^{k+1−m} Y X^m,

where on the last step we used the Pascal recurrence relation for the binomial coefficients.

A.6 Nets
In a metric space (X, d) a point x ∈ X is called an adherent point (also called
point of closure or contact point) of a set A ⊆ X if and only if there exists
a sequence (xn )n∈N ⊂ A that converges to x. This characterisation cannot be
formulated in general topological spaces unless we replace sequences by nets,
which are a generalisation of sequences in which the index set N is replaced
by more general sets.
A partially ordered set (I, ≤) is called a directed set, if for any j, k ∈ I there
exists an element  ∈ I such that j ≤  and k ≤ . A net in a set A is a family
of elements (xi )i∈I ⊆ A indexed by a directed set. A net (xi )i∈I in a topological
space X is said to converge to a point x ∈ X if, for any neighborhood Ux of x
in X, there exists an element i ∈ I such that xj ∈ Ux for all j ∈ I with i ≤ j.
In a topological space X a point x ∈ X is said to be an adherent point of a set A ⊆ X if and only if there exists a net (x_i)_{i∈I} in A that converges to x. A map
f : X −→ Y between topological spaces is continuous, if and only if for any
point x ∈ X and any net in X converging to x, the composition of f with this
net converges to f (x).

A.7 Closability of linear operators


The notion of closability of operators on a normed linear space H consists
in minimal hypotheses ensuring that the extension of a densely defined linear
operator is consistently defined.
Definition A.7.1 A linear operator T : S −→ H from a normed linear space
S into H is said to be closable on H if for every sequence (Fn )n∈N ⊂ S such
that Fn −→ 0 and TFn −→ U in H, one has U = 0.
Remark A.7.2 For linear operators between general topological vector
spaces one has to replace sequences by nets.
For any two sequences (F_n)_{n∈N} and (G_n)_{n∈N} both converging to F ∈ H and such that (TF_n)_{n∈N} and (TG_n)_{n∈N} converge respectively to U and V in H, the closability of T shows that (T(F_n − G_n))_{n∈N} converges to U − V, hence U = V.
Letting Dom(T) denote the space of functionals F for which there exists a
sequence (Fn )n∈N converging to F such that (TFn )n∈N converges to G ∈ H, we
can extend a closable operator T : S −→ H to Dom(T) as in the following
definition.
Definition A.7.3 Given T : S −→ H a closable operator and F ∈ Dom(T),
we let
TF = lim TFn ,
n→∞

where (Fn )n∈N denotes any sequence converging to F and such that (TFn )n∈N
converges in H.
A.8 Tensor products


A.8.1 Tensor products of Hilbert spaces
The algebraic tensor product V ⊗ W of two vector spaces V and W is the vector
space spanned by vectors of the form v ⊗ w subject to the linearity relations


(v_1 + v_2) ⊗ w = v_1 ⊗ w + v_2 ⊗ w,
v ⊗ (w_1 + w_2) = v ⊗ w_1 + v ⊗ w_2,
(λv) ⊗ w = λ(v ⊗ w) = v ⊗ (λw),

λ ∈ C, v, v_1, v_2 ∈ V, w, w_1, w_2 ∈ W. Given two Hilbert spaces H_1 and H_2, we
can consider the sesquilinear map ·, ·
H1 ⊗H2 : (H1 ⊗ H2 ) × (H1 ⊗ H2 ) −→ C
defined by
h1 ⊗ h2 , k1 ⊗ k2
H1 ⊗H2 := h1 , k1
H1 h2 , k2
H2
on product vectors and extended to H1 ⊗H2 by sesquilinearity. It is not difficult
to show that this map is Hermitian and positive, i.e., it is an inner product, and
therefore it turns H_1 ⊗ H_2 into a pre-Hilbert space. Completing H_1 ⊗ H_2 with respect to the norm induced by the inner product, we get the Hilbert space H_1 ⊗̄ H_2, the closure of H_1 ⊗ H_2, which is the Hilbert space tensor product of H_1 and H_2, with the continuous extension of ⟨·, ·⟩_{H_1⊗H_2}. This construction is associative and can be iterated to
H1 ⊗H2 . This construction is associative and can be iterated to
define higher-order tensor products. In the sequel we will denote the Hilbert
space tensor product simply by ⊗, when there is no danger of confusion with
the algebraic tensor product.
The tensor product T1 ⊗ T2 of two bounded operators
T1 : H1 −→ K1 and T2 : H2 −→ K2 ,
is defined on product vectors h1 ⊗ h2 ∈ H1 ⊗ H2 by
(T1 ⊗ T2 )(h1 ⊗ h2 ) := (T1 h1 ) ⊗ (T2 h2 )
and extended by linearity to arbitrary vectors in the algebraic tensor product
H1 ⊗ H2 . One can show that T1 ⊗ T2 has norm
‖T_1 ⊗ T_2‖ = ‖T_1‖ ‖T_2‖,
therefore T1 ⊗ T2 extends to a bounded linear operator between the Hilbert
space tensor products H1 ⊗H2 and K1 ⊗K2 , which we denote again by T1 ⊗ T2 .
Tensor products of more than two bounded operators can be defined in the
same way.
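Both the product rule for inner products and the norm identity have finite-dimensional illustrations via the Kronecker product; in the following sketch the dimensions and random data are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Product vectors: <h1 ⊗ h2, k1 ⊗ k2> = <h1, k1> <h2, k2>.
h1 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
k1 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
h2, k2 = rng.standard_normal(2), rng.standard_normal(2)

inner = lambda u, v: np.vdot(v, u)   # <u, v>, conjugate-linear in the second slot

lhs = inner(np.kron(h1, h2), np.kron(k1, k2))
rhs = inner(h1, k1) * inner(h2, k2)
print(np.isclose(lhs, rhs))  # True

# Operator norms multiply: ||T1 ⊗ T2|| = ||T1|| ||T2||.
T1 = rng.standard_normal((3, 3))
T2 = rng.standard_normal((2, 2))
opnorm = lambda T: np.linalg.norm(T, 2)   # largest singular value
print(np.isclose(opnorm(np.kron(T1, T2)), opnorm(T1) * opnorm(T2)))  # True
```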
A.8.2 Tensor products of L2 spaces


Let (X, μ) and (Y, ν) denote measure spaces. Given f ∈ L2 (X, μ) and g ∈
L2 (Y, ν), the tensor product f ⊗ g of f by g is the function in L2 (X × Y, μ ⊗ ν)
defined by
( f ⊗ g)(x, y) = f (x)g(y).
In particular, the tensor product fn ⊗ gm of two functions fn ∈ L2 (X, σ )⊗n ,
gm ∈ L2 (X, σ )⊗m , satisfies
fn ⊗ gm (x1 , . . . , xn , y1 , . . . , ym ) = fn (x1 , . . . , xn )gm (y1 , . . . , ym ),
(x_1, . . . , x_n, y_1, . . . , y_m) ∈ X^{n+m}. Given f_1, . . . , f_n ∈ L²(X, μ), the symmetric tensor product f_1 ∘ ··· ∘ f_n is defined as the symmetrisation of f_1 ⊗ ··· ⊗ f_n, i.e.,

(f_1 ∘ ··· ∘ f_n)(t_1, . . . , t_n) = (1/n!) ∑_{σ∈𝔖_n} f_1(t_{σ(1)}) ··· f_n(t_{σ(n)}),    (A.11)

t_1, . . . , t_n ∈ X, where 𝔖_n denotes the set of permutations of {1, . . . , n}. Let now L²(X)^{∘n} denote the subspace of L²(X)^{⊗n} = L²(Xⁿ) made of symmetric functions f_n in n variables. As a convention, L²(X)^{∘0} is identified with R. From (A.11), the symmetric tensor product can be extended as an associative operation on L²(X)^{∘n}.
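The symmetrisation (A.11) can be illustrated on discretised functions, where each f_i becomes a vector of point values and f_1 ∘ ··· ∘ f_n an n-way array; the helper sym_tensor below is an illustrative implementation, not from the text:

```python
import numpy as np
from itertools import permutations
from math import factorial

def sym_tensor(*fs):
    """Symmetrisation (A.11) of f_1 ⊗ ... ⊗ f_n for vectors of point values."""
    n = len(fs)
    out = np.zeros((len(fs[0]),) * n)
    for sigma in permutations(range(n)):
        term = fs[sigma[0]]
        for i in sigma[1:]:
            term = np.multiply.outer(term, fs[i])
        out += term
    return out / factorial(n)

f1, f2 = np.array([1.0, 2.0]), np.array([0.5, -1.0])
S = sym_tensor(f1, f2)
print(np.allclose(S, S.T))                                        # symmetric
print(np.allclose(S, (np.outer(f1, f2) + np.outer(f2, f1)) / 2))  # matches (A.11)
```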
The tensor power of order n of L²([0, T], R^d), n ∈ N, d ∈ N*, is

L²([0, T], R^d)^{⊗n} ≃ L²([0, T]ⁿ, (R^d)^{⊗n}).

For n = 2 we have (R^d)^{⊗2} = R^d ⊗ R^d ≃ M_{d,d}(R) (the linear space of square d × d matrices), hence

L²([0, T], R^d)^{⊗2} ≃ L²([0, T]², M_{d,d}(R)).

More generally, the tensor product (R^d)^{⊗n} is isomorphic to R^{dⁿ}. The generic element of L²([0, T], R^d)^{⊗n} is denoted by

f = (f^{(i_1,...,i_n)})_{1≤i_1,...,i_n≤d},

with f^{(i_1,...,i_n)} ∈ L²([0, T]ⁿ).
Exercise solutions

Weary of Seeking had I grown,


So taught myself the way to Find.
(F. Nietzsche, in Die fröhliche Wissenschaft.)

Chapter 1 - Boson Fock space


Exercise 1.1 Moments of the normal distribution.
a) First moment. We note that

⟨Q1, 1⟩ = ⟨a⁺1, 1⟩ = ⟨1, a⁻1⟩ = 0.

b) Second moment. Next we have

⟨Q²1, 1⟩ = ⟨(a⁺ + a⁻)²1, 1⟩
= ⟨((a⁺)² + (a⁻)² + a⁺a⁻ + a⁻a⁺)1, 1⟩
= ⟨a⁻a⁺1, 1⟩
= ⟨(a⁺a⁻ + σ²)1, 1⟩
= σ²⟨1, 1⟩
= σ².

c) Third moment. We have

⟨Q³1, 1⟩ = ⟨(a⁺ + a⁻)³1, 1⟩
= ⟨(a⁺ + a⁻)² a⁺1, 1⟩
= ⟨((a⁺)² + (a⁻)² + a⁺a⁻ + a⁻a⁺) a⁺1, 1⟩
= ⟨((a⁻)²a⁺ + a⁻(a⁺)²)1, 1⟩
= ⟨(a⁻(a⁺a⁻ + σ²) + (a⁺a⁻ + σ²)a⁺)1, 1⟩
= 0.

d) Fourth moment. Finally we have

⟨Q⁴1, 1⟩ = ⟨(a⁺ + a⁻)⁴1, 1⟩
= ⟨((a⁺)² + (a⁻)² + a⁺a⁻ + a⁻a⁺)((a⁺)² + (a⁻)² + a⁺a⁻ + a⁻a⁺)1, 1⟩
= ⟨((a⁻)² + a⁻a⁺)((a⁺)² + a⁻a⁺)1, 1⟩
= ⟨((a⁻)²(a⁺)² + (a⁻)³a⁺ + a⁻(a⁺)³ + a⁻a⁺a⁻a⁺)1, 1⟩
= ⟨((a⁻)²(a⁺)² + a⁻a⁺a⁻a⁺)1, 1⟩
= ⟨(a⁻(a⁺a⁻ + σ²)a⁺ + a⁻a⁺a⁻a⁺)1, 1⟩
= ⟨(a⁻a⁺a⁻a⁺ + σ²a⁻a⁺ + a⁻a⁺a⁻a⁺)1, 1⟩
= ⟨(σ²a⁻a⁺ + σ²a⁻a⁺ + σ²a⁻a⁺)1, 1⟩
= 3σ²⟨(a⁺a⁻ + σ²I)1, 1⟩
= 3σ⁴⟨1, 1⟩
= 3σ⁴,

which is the fourth moment of the centered normal distribution N(0, σ²) with variance σ².

We could continue and show more generally that ⟨Qⁿ1, 1⟩ coincides with the n-th moment of the centered Gaussian distribution N(0, σ²) as given in (A.7).
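The vacuum moments ⟨Qⁿ1, 1⟩ can also be computed numerically by truncating a⁺ and a⁻ to a finite number basis (the truncation level N and the value of σ² below are arbitrary choices):

```python
import numpy as np

# Vacuum moments <Q^n 1, 1> with Q = a^+ + a^-, [a^-, a^+] = σ² I,
# in a truncated number basis e_0, ..., e_{N-1} where
# a^- e_n = σ sqrt(n) e_{n-1} and a^+ e_n = σ sqrt(n+1) e_{n+1}.
N, sigma2 = 30, 2.0
s = np.sqrt(sigma2)
a_minus = np.diag(s * np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
a_plus = a_minus.T                                      # creation operator
Q = a_plus + a_minus

e0 = np.zeros(N); e0[0] = 1.0                           # vacuum vector
moments = [float(e0 @ np.linalg.matrix_power(Q, n) @ e0) for n in range(5)]
print(moments)  # [1.0, 0.0, sigma2, 0.0, 3*sigma2**2], exact below the cutoff
```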

Chapter 2 - Real Lie algebras


Exercise 2.1
1. The operator Wλ acts on the square roots of exponential vectors as
% %
Wλ ξ(2f ) = ξ((2κ + 2f − 4κf )) exp(iI1 (ζ )), |f | < 1/2.

Given that W0 = Id , by independence we only need to prove that


 ∞
Wu+iv f (τ )Wu+iv g(τ )e−τ dτ
0
 ∞ ! !
1 τ uτ
= √ f exp − + is(1 − τ )
0 1 − 2u 1 − 2u 1 − 2u
! !
1 τ uτ
×√ g exp − − is(1 − τ ) e−τ dτ
1 − 2u 1 − 2u 1 − 2u
 ∞ ! ! !
1 τ τ τ
= f g exp − dτ
0 1 − 2u 1 − 2u 1 − 2u 1 − 2u
 ∞
= f (τ )g(τ )e−τ dτ .
0

Now, for u′ + iv′, u + iv ∈ C with |u|, |u′| < 1/2,

W_{u′+iv′} W_{u+iv} f(τ)


! !
1 τ uτ
= Wz √ f exp − + iv(1 − τ )
1 − 2u 1 − 2u 1 − 2u
!
1 τ
=√ f
(1 − 2u)(1 − 2u ) (1 − 2u)(1 − 2u )
uτ u τ
× exp − −
(1 − 2u)(1 − 2u ) 1 − 2u

! !
τ 
− iv − 1 + iv (1 − τ )
1 − 2u
!
1 τ
=√ f
1 − 2(u + u − 2uu ) 1 − 2(u + u − 2uu )
! !
(u + u − 2uu )τ v 
× exp − + i + v (1 − τ ) .
1 − 2(u + u − 2uu )u 1 − 2u

Let z = u + iv ∈ C. For t ∈ R close enough to 0 we have

∂/∂t W_{tz} f(τ) = ∂/∂t [ (1/√(1-2tu)) f(τ/(1-2tu)) exp(-utτ/(1-2ut) + ivt(1-τ)) ]

= (2uτ/(1-2ut)^2) (1/√(1-2tu)) f'(τ/(1-2tu)) exp(-utτ/(1-2ut) + ivt(1-τ))

+ ( u/(1-2ut) - uτ/(1-2ut)^2 + iv(1-τ) ) (1/√(1-2tu)) f(τ/(1-2tu)) exp(-utτ/(1-2ut) + ivt(1-τ)).

Evaluating this expression at t = 0 yields

∂W_{tz}f/∂t (τ)|_{t=0} = 2uτ f'(τ) + u f(τ) + f(τ)(-uτ + iv(1-τ))
= (u-iv)τ f'(τ) + f(τ)((u+iv) - (u+iv)τ) + (u+iv)τ f'(τ)
= (z a^+ - z̄ a^-) f(τ).

For the second part, we compute

W_u W_{is} f(τ) = (1/√(1-2u)) f(τ/(1-2u)) exp(is(1 - τ/(1-2u)) - uτ/(1-2u))

= (1/√(1-2u)) f(τ/(1-2u)) exp(-uτ/(1-2u) + is(1-τ) - 2isuτ/(1-2u))

= exp(-2isuτ/(1-2u)) W_{is} W_u f(τ).

For the exponential vector (1/√(1-2α)) exp(-ατ/(1-2α)) we have

W_z [ (1/√(1-2α)) exp(-ατ/(1-2α)) ]

= (1/√((1-2α)(1-2u))) exp(-ατ/((1-2α)(1-2u)) - uτ/(1-2u) + is(1-τ))

= (1/√(1-2(u+α-2uα))) exp(-τ(2α + 2u - 4αu)/(2(1-2α)(1-2u)) + is(1-τ)),

α ∈ (-1/2, 1/2). The semi-group property holds for

(W_{isζ})_{s∈R_+} = (exp(is I_1(ζ)))_{s∈R_+},

but not for (W_{tκ})_{t∈R_+}, which is different from (exp(itP̃))_{t∈R_+}.


2. We have

W_t W_{is} f(τ) = (1/√(1-2t)) f(τ/(1-2t)) exp(-is(τ/(1-2t) - 1) - tτ/(1-2t)),

hence

∂/∂s W_t W_{is} f(τ) = -i(τ/(1-2t) - 1) W_t W_{is} f(τ).

Now we have

∂/∂t ∂/∂s W_t W_{is} f(τ) = -(2iτ/(1-2t)^2) W_t W_{is} f(τ) - i(τ/(1-2t) - 1) ∂/∂t W_t W_{is} f(τ),

and

∂/∂t ∂/∂s W_t W_{is} f(τ)|_{t=s=0} = -2iτ f(τ) + i(1-τ) ∂/∂t W_t f(τ)|_{t=0}
= -2iτ f(τ) + i(1-τ)(2τ∂_τ + (1-τ))f(τ)
= i(2τ∂_τ + (1-τ))((1-τ)f)(τ) = -P̃Q̃ f(τ).

On the other hand, we have

∂/∂t ∂/∂s W_{is} W_t f(τ)|_{t=s=0} = ∂/∂s W_{is}1|_{s=0} ∂/∂t W_t f(τ)|_{t=0} = -Q̃P̃ f(τ).

Remarks.
Relation (3.13) can be proved using the operator W_z, as a consequence
of the aforementioned proposition. We have from part 1

-Q̃P̃ f(τ) = ∂/∂s ∂/∂t W_{is} W_t f(τ)|_{t=s=0}

= ∂/∂s ∂/∂t [ exp(2istτ/(1-2t)) W_t W_{is} f(τ) ]|_{t=s=0}

= ∂/∂s [ (2isτ/(1-2t)^2) exp(2istτ/(1-2t)) W_t W_{is} f(τ)
  + exp(2istτ/(1-2t)) ∂/∂t W_t W_{is} f(τ) ]|_{t=s=0}

= 2iτ f(τ) + ∂/∂t ∂/∂s W_t W_{is} f(τ)|_{t=s=0}

= 2iτ f(τ) - P̃Q̃ f(τ).

Chapter 3 - Basic probability distributions on Lie algebras


Exercise 3.1 Define the operators b^- and b^+ by

b^- = -ia^-,   b^+ = ia^+.

1. The commutation relations follow from those of a^- and a^+, and we clearly have

b^- e_0 = -ia^- e_0 = 0.

2. We have

⟨b^- u, v⟩_H = ⟨-ia^- u, v⟩_H
= i ⟨a^- u, v⟩_H
= i ⟨u, a^+ v⟩_H
= ⟨u, ia^+ v⟩_H
= ⟨u, b^+ v⟩_H.

3. It suffices to rewrite P as P = b^- + b^+ and to note that {b^-, b^+} satisfy the same properties as {a^-, a^+}. In other words we check that the transformation

a^- ↦ -ia^+,   a^+ ↦ ia^-

maps P to Q and satisfies the commutation relation

[ia^-, -ia^+] = (ia^-)(-ia^+) - (-ia^+)(ia^-) = a^- a^+ - a^+ a^- = σ^2 I,

and the duality relation

⟨(-ia^+)u, v⟩ = i ⟨a^+ u, v⟩ = i ⟨u, a^- v⟩ = ⟨u, ia^- v⟩,

hence P = i(a^- - a^+) also has a Gaussian law in the state e_0.


Exercise 3.2 Moments of the Poisson distribution.
a) First moment. We note that

⟨Xe_0, e_0⟩ = ⟨(N + a^+ + a^- + λE)e_0, e_0⟩ = λ⟨Ee_0, e_0⟩ = λ⟨e_0, e_0⟩ = λ.

b) Similarly we have

⟨X^2 e_0, e_0⟩ = λ⟨e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ = λ + λ^2.

c) We have

⟨X^3 e_0, e_0⟩ = ⟨Xa^+Xe_0, e_0⟩ + ⟨Xa^+e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + λ⟨X^2 e_0, e_0⟩

= λ⟨Xe_0, e_0⟩ + λ⟨e_0, e_0⟩ + λ⟨Xe_0, e_0⟩ + λ⟨X^2 e_0, e_0⟩

= λ + 3λ^2 + λ^3.
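The three values above can be checked directly against the Poisson(λ) law (a quick numerical cross-check, not part of the original solution; the choice λ = 0.7 is arbitrary):

```python
import math

# m-th moment of the Poisson(lam) law, summed from its probability mass function
def poisson_moment(lam, m, terms=100):
    return sum(k**m * math.exp(-lam) * lam**k / math.factorial(k) for k in range(terms))

lam = 0.7
assert abs(poisson_moment(lam, 1) - lam) < 1e-12
assert abs(poisson_moment(lam, 2) - (lam + lam**2)) < 1e-12
assert abs(poisson_moment(lam, 3) - (lam + 3 * lam**2 + lam**3)) < 1e-12
```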

Exercise 3.3 We have

∂^n/∂t^n IE[e^{tX}] = ∂^n/∂t^n (1-t)^{-α} = α(α+1) ··· (α+n-1)(1-t)^{-α-n},   t < 1,

hence

IE[X^n] = ∂^n/∂t^n IE[e^{tX}]|_{t=0} = α(α+1) ··· (α+n-1).
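The corrected formula can be verified numerically through the Taylor expansion of (1-t)^{-α} (a sketch with arbitrary parameter values, not part of the original solution):

```python
import math

def rising(a, n):
    # a(a+1)...(a+n-1), the n-th moment of the Gamma(a) law
    out = 1.0
    for k in range(n):
        out *= a + k
    return out

alpha = 2.5
# E[X^n] = Gamma(alpha + n)/Gamma(alpha)
for n in range(6):
    assert abs(math.gamma(alpha + n) / math.gamma(alpha) - rising(alpha, n)) < 1e-9
# the n-th Taylor coefficient of (1-t)^{-alpha} at 0 is rising(alpha, n)/n!
t = 0.1
series = sum(rising(alpha, n) / math.factorial(n) * t**n for n in range(80))
assert abs(series - (1 - t)**(-alpha)) < 1e-12
```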
Exercise 3.4
1. For n = 1 we have

⟨e_0, (B^+ + B^- + M)e_0⟩ = ⟨B^-e_0, e_0⟩ + ⟨e_0, Me_0⟩ = α⟨e_0, e_0⟩ = α,

since ⟨e_0, e_0⟩ = 1.
2. For n = 2 we have

⟨e_0, (B^+ + B^- + M)^2 e_0⟩ = ⟨e_0, (B^+ + B^- + M)(B^+ + B^- + M)e_0⟩
= ⟨e_0, (B^- + M)(B^+ + M)e_0⟩
= ⟨e_0, M^2 e_0⟩ + ⟨e_0, B^-Me_0⟩ + ⟨e_0, B^-B^+e_0⟩ + ⟨e_0, MB^+e_0⟩
= α^2 ⟨e_0, e_0⟩ + α⟨e_0, B^-e_0⟩ + ⟨e_0, B^-B^+e_0⟩ + ⟨e_0, MB^+e_0⟩
= α^2 ⟨e_0, e_0⟩ + ⟨e_0, [B^-, B^+]e_0⟩ + ⟨e_0, B^+B^-e_0⟩ + ⟨e_0, [M, B^+]e_0⟩ + ⟨e_0, B^+Me_0⟩
= α^2 ⟨e_0, e_0⟩ + ⟨e_0, Me_0⟩ + (2 + α)⟨e_0, B^+e_0⟩
= α(α + 1).

3. For n = 3 we have

⟨e_0, (B^+ + B^- + M)^3 e_0⟩ = ⟨e_0, (B^- + M)(B^+ + B^- + M)(B^+ + M)e_0⟩.

Using B^-B^+e_0 = Me_0 + B^+B^-e_0 = αe_0 and MB^+e_0 = B^+Me_0 + 2B^+e_0 = (α+2)B^+e_0, we find

(B^+ + B^- + M)(B^+ + M)e_0 = (B^+)^2 e_0 + (2α+2)B^+e_0 + α(α+1)e_0,

and since the terms containing B^+e_0 or (B^+)^2 e_0 pair to zero against e_0, this yields

⟨e_0, (B^+ + B^- + M)^3 e_0⟩ = (2α+2)⟨e_0, B^-B^+e_0⟩ + α(α+1)⟨e_0, Me_0⟩
= 2α(α+1) + α^2(α+1)
= α^3 + 3α^2 + 2α
= α(α + 1)(α + 2).
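These vacuum moments can be checked in a matrix model (a sketch, not from the book; the lowest-weight representation below is an assumption chosen so that [B^-, B^+] = M, [M, B^+] = 2B^+ and Me_0 = αe_0 hold):

```python
import numpy as np

alpha, N = 1.7, 10
# Assumed representation: B^+ e_n = sqrt((n+1)(n+alpha)) e_{n+1},
# B^- = (B^+)^T, M e_n = (2n + alpha) e_n
Bp = np.zeros((N, N))
for n in range(N - 1):
    Bp[n + 1, n] = np.sqrt((n + 1) * (n + alpha))
Bm, M = Bp.T, np.diag([2 * n + alpha for n in range(N)])
X = Bp + Bm + M
assert abs(X[0, 0] - alpha) < 1e-9                                     # n = 1
assert abs((X @ X)[0, 0] - alpha * (alpha + 1)) < 1e-9                 # n = 2
assert abs((X @ X @ X)[0, 0] - alpha * (alpha + 1) * (alpha + 2)) < 1e-9  # n = 3
```

The (0, 0) entries reproduce the first three moments of the Gamma(α) law.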

Chapter 4 - Noncommutative random variables


Exercise 4.1
1. We will reproduce here Jacobi’s proof of the diagonalisability of
Hermitian matrices.
2. Let A = (a_{ij})_{1≤i,j≤n} ∈ M_n(C) be a Hermitian n × n matrix. We define Δ : M_n(C) → R,

Δ(B) = Σ_{i=1}^n Σ_{j=i+1}^n |b_{ij}|^2

for B = (b_{ij})_{1≤i,j≤n} ∈ M_n(C), and f_A : U(n) → R,

f_A(U) = Δ(U^* A U).

Since fA is continuous and U(n) is compact, the extreme value theorem


implies that fA has a minimum value, i.e., there exists a unitary matrix W
such that

f_A(W) = Δ(W^* A W) ≤ f_A(U) for all U ∈ U(n).

We will show by contradiction that the matrix B := W ∗ AW is diagonal.


We will show that if B = W ∗ AW is not diagonal, then there exists a
matrix U such that

f_A(WU) = Δ(U^* W^* A W U) = Δ(U^* B U) < Δ(B) = f_A(W),

which contradicts the minimality of fA (W).


Suppose that B = (b_{ij})_{1≤i,j≤n} has an off-diagonal non-zero coefficient b_{ij} ≠ 0 with i ≠ j. Then, there exists a unitary 2 × 2 matrix

V = [  u   v ]
    [ -v̄   ū ]

which diagonalises the Hermitian 2 × 2 matrix

[ b_{ii}  b_{ij} ]
[ b_{ji}  b_{jj} ].

Note that we have |u|^2 + |v|^2 = 1, since V is unitary, and the inverse of V is given by

V^* = [ ū  -v ]
      [ v̄   u ].

Define now U ∈ U(n) to be the identity matrix with the block V inserted in the rows and columns i and j, i.e.,

U_{ii} = u,   U_{ij} = v,   U_{ji} = -v̄,   U_{jj} = ū,

U_{kk} = 1 for k ∉ {i, j}, and all other coefficients vanish. In other words, we
embed V into the unitary group U(n) such that it acts non-trivially only on
the ith and the jth component. Then conjugation of the matrix B with U
will change only the coefficients of the ith and the jth row and column of
B, more precisely we get
the conjugated matrix U^* B U agrees with B outside these rows and columns, while for k ∉ {i, j}

(U^* B U)_{ki} = u b_{ki} - v̄ b_{kj},   (U^* B U)_{kj} = v b_{ki} + ū b_{kj},

(U^* B U)_{ik} = ū b_{ik} - v b_{jk},   (U^* B U)_{jk} = v̄ b_{ik} + u b_{jk},

and (U^* B U)_{ij} = (U^* B U)_{ji} = 0 by the choice of V. The diagonal values (U^* B U)_{ii} and (U^* B U)_{jj} do not matter for our calculations, since they do not occur in the sum defining Δ(U^* B U).
We will now prove that Δ(U^* B U) = Δ(B) - |b_{ij}|^2 < Δ(B).
We have

Δ(U^* B U) = Σ_{1≤k<l≤n} |(U^* B U)_{kl}|^2 = (1/2) Σ_{k,l=1,...,n; k≠l} |(U^* B U)_{kl}|^2

since U^* B U is Hermitian.

If we take the sum over a row different from the ith or jth row, say the kth row, then we have

|u b_{ki} - v̄ b_{kj}|^2 + |v b_{ki} + ū b_{kj}|^2
= |u|^2 |b_{ki}|^2 - uv b_{ki} b̄_{kj} - ūv̄ b̄_{ki} b_{kj} + |v|^2 |b_{kj}|^2
+ |v|^2 |b_{ki}|^2 + uv b_{ki} b̄_{kj} + ūv̄ b̄_{ki} b_{kj} + |u|^2 |b_{kj}|^2
= |b_{ki}|^2 + |b_{kj}|^2

for the coefficients in the ith and jth column, and, since the other coefficients are not changed by the conjugation with U, we have

Σ_{l=1,...,n; l≠k} |(U^* B U)_{kl}|^2 = Σ_{l=1,...,n; l≠k} |b_{kl}|^2.

For the sum over the ith and jth row, we observe

|ū b_{il} - v b_{jl}|^2 + |v̄ b_{il} + u b_{jl}|^2 = |b_{il}|^2 + |b_{jl}|^2

and

Σ_{l≠i,j} |(U^* B U)_{il}|^2 + Σ_{l≠i,j} |(U^* B U)_{jl}|^2 = Σ_{l≠i,j} |b_{il}|^2 + Σ_{l≠i,j} |b_{jl}|^2.

Since we chose U such that (U^* B U)_{ij} = (U^* B U)_{ji} = 0, we get

Σ_{l≠i} |(U^* B U)_{il}|^2 + Σ_{l≠j} |(U^* B U)_{jl}|^2 = Σ_{l≠i,j} |b_{il}|^2 + Σ_{l≠i,j} |b_{jl}|^2
= Σ_{l≠i} |b_{il}|^2 + Σ_{l≠j} |b_{jl}|^2 - |b_{ij}|^2 - |b_{ji}|^2

and finally,

Δ(U^* B U) = Δ(B) - |b_{ij}|^2 < Δ(B),

as desired. This completes the proof, see also [23].
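The argument above also drives the classical Jacobi eigenvalue algorithm, which can be sketched as follows (a numerical illustration, not part of the original solution; the 4 × 4 size, seed, and iteration count are arbitrary choices):

```python
import numpy as np

def offdiag(B):
    # Delta(B): squared moduli of the strictly upper-triangular entries
    n = B.shape[0]
    return sum(abs(B[i, j])**2 for i in range(n) for j in range(i + 1, n))

def jacobi_step(B):
    n = B.shape[0]
    i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda p: abs(B[p]))
    sub = B[np.ix_([i, j], [i, j])]
    _, V = np.linalg.eigh(sub)           # 2x2 unitary diagonalising the (i, j) block
    U = np.eye(n, dtype=complex)
    U[np.ix_([i, j], [i, j])] = V
    return U.conj().T @ B @ U, abs(B[i, j])**2

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = (A + A.conj().T) / 2                 # Hermitian test matrix
for _ in range(100):
    B2, drop = jacobi_step(B)
    # each rotation removes exactly |b_ij|^2 from Delta
    assert abs(offdiag(B2) - (offdiag(B) - drop)) < 1e-8
    B = B2
assert np.allclose(np.sort(B.diagonal().real),
                   np.linalg.eigvalsh((A + A.conj().T) / 2), atol=1e-6)
```

Each sweep confirms the identity Δ(U^* B U) = Δ(B) - |b_{ij}|^2, and the iteration converges to the spectrum.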



Exercise 4.2
1. This follows by direct computation.
2. We find

n_0 1_x = (#{j : x_j = +1} - #{j : x_j = -1}) 1_x

for x ∈ {-1, +1}^n, and n_0 has a binomial distribution on the set
{-n, -n+2, ..., n-2, n}, with density

L(n_0) = Σ_{k=0}^n C(n, k) p^k q^{n-k} δ_{2k-n}

with respect to the constant function.
3. The law of

n_θ = n_0 + θ(n_+ + n_-)

can be computed from exp(θ n_2)1.

Chapter 5 - Noncommutative stochastic integration


Exercise 5.1 For the first equality, expanding the right-hand side gives

(1/(n! 2^n)) Σ_{ε∈{±1}^n} (Π_{k=1}^n ε_k) (ε_1 v_1 + ··· + ε_n v_n)^{⊗n}

= (1/(n! 2^n)) Σ_{ε∈{±1}^n} (Π_{k=1}^n ε_k) Σ_{j_1,...,j_n=1}^n ε_{j_1} v_{j_1} ⊗ ··· ⊗ ε_{j_n} v_{j_n}.

Next, one checks that the terms with repeated indices vanish. Since an n-tuple of distinct indices (j_1, ..., j_n) defines a permutation by σ(k) = j_k for k ∈ {1, ..., n}, the sum becomes

(1/(n! 2^n)) Σ_{ε∈{±1}^n} Σ_{σ∈Σ_n} v_{σ(1)} ⊗ ··· ⊗ v_{σ(n)}.

The terms in the sum no longer depend on ε and we get the desired result. Note that we can write this polarisation formula equivalently as an expectation

Σ_{σ∈Σ_n} v_{σ(1)} ⊗ ··· ⊗ v_{σ(n)} = IE[Z_1 ··· Z_n (Z_1 v_1 + ··· + Z_n v_n)^{⊗n}],

where Z_1, ..., Z_n are independent Bernoulli random variables with

P(Z_k = ±1) = 1/2,   k = 1, ..., n.
We refer to Relation (22) of [78] for the second equality.
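The polarisation identity in expectation form can be tested numerically for small n (a sketch, not part of the original solution; n = 3 and d = 2 are arbitrary choices):

```python
import numpy as np
from itertools import permutations, product

n, d = 3, 2
rng = np.random.default_rng(1)
v = rng.normal(size=(n, d))

def tensor(vectors):
    # v_1 (x) v_2 (x) ... as an n-fold outer product
    out = np.array(1.0)
    for w in vectors:
        out = np.tensordot(out, w, axes=0)
    return out

lhs = sum(tensor([v[s] for s in perm]) for perm in permutations(range(n)))
# E[Z_1...Z_n (Z_1 v_1 + ... + Z_n v_n)^{(x) n}] over independent signs Z_k = +-1
rhs = sum(np.prod(eps) * tensor([sum(e * w for e, w in zip(eps, v))] * n)
          for eps in product([-1, 1], repeat=n)) / 2**n
assert np.allclose(lhs, rhs)
```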

Chapter 6 - Random variables on real Lie algebras


Exercise 6.1
1. This question is easy.
2. We have
exp(θR̃) = [ cos θ  -sin θ  0 ]
           [ sin θ   cos θ  0 ]
           [   0       0    1 ],

exp(vT̃_x) = [ 1  0  v ]
            [ 0  1  0 ]
            [ 0  0  1 ],

exp(wT̃_y) = [ 1  0  0 ]
            [ 0  1  w ]
            [ 0  0  1 ].
exp(θR̃) acts as a rotation on K^2,

exp(θR̃) (x, y, 1)^T = (cos(θ)x - sin(θ)y, sin(θ)x + cos(θ)y, 1)^T,

and exp(vT̃_x) and exp(wT̃_y) act as translations, i.e.,

exp(vT̃_x) (x, y, 1)^T = (x + v, y, 1)^T,   exp(wT̃_y) (x, y, 1)^T = (x, y + w, 1)^T.

3. We have M = 2iR, E+ = Tx + iTy , E− = Tx − iTy .


4. The answer follows from the equalities

e^{uE^+} M e^{-uE^+} = exp(u ad(E^+)) M = M - 2uE^+,

e^{uM} E^- e^{-uM} = exp(u ad(M)) E^-
= E^- - 2uE^- + (4u^2/2!)E^- - (8u^3/3!)E^- + ···
= e^{-2u} E^-,

e^{uE^+} E^- e^{-uE^+} = exp(u ad(E^+)) E^- = E^-.
5. We have

dω_1/dt (t) = (xM + yE^+ + zE^-) exp(t(xM + yE^+ + zE^-)) = (xM + yE^+ + zE^-) ω_1(t).

We compute the derivatives of x̃(t), ỹ(t), and z̃(t),

dx̃/dt (t) = x,   dỹ/dt (t) = y e^{2xt},   dz̃/dt (t) = z e^{2xt}.

Using (iv) we can show that ω_2(t) satisfies the same differential equation as ω_1(t),

dω_2/dt (t) = y e^{2xt} E^+ exp(ỹ(t)E^+) exp(x̃(t)M) exp(z̃(t)E^-)
+ exp(ỹ(t)E^+) xM exp(x̃(t)M) exp(z̃(t)E^-)
+ exp(ỹ(t)E^+) exp(x̃(t)M) z e^{2xt} E^- exp(z̃(t)E^-)

= y e^{2xt} E^+ exp(ỹ(t)E^+) exp(x̃(t)M) exp(z̃(t)E^-)
+ x(M - 2ỹ(t)E^+) exp(ỹ(t)E^+) exp(x̃(t)M) exp(z̃(t)E^-)
+ z e^{2xt} e^{-2xt} E^- exp(ỹ(t)E^+) exp(x̃(t)M) exp(z̃(t)E^-)

= (xM + yE^+ + zE^-) ω_2(t).

6. The functions ω1 (t) and ω2 (t) have the same initial value for t = 0 and
satisfy the same differential equation, therefore they agree for all values
of t. Taking t = 1 we get the desired formula.
Exercise 6.2
1. a) We have
+ +
eza a− e−za
z2 + + − z3
= a− + z[a+ , a− ] +[a , [a , a ]] + [a+ [a+ , [a+ , a− ]]] + · · ·
2 3!
z2 z 3
= a− − zE + [a+ , E] + [a+ [a+ , E]] + · · ·
2 3!
= a− − zE.

b) Since [E, a+ ] = 0 we clearly have the commutation relation


+ + + +
eza Ee−za = Eeza e−za = E by (6.2).
c) Similarly, since [E, a− ] = 0, Relation (6.2) yields the commutation
− − − −
relation eza Ee−za = Eeza e−za = E.
2. We have

dω_1/dt (t) = d/dt e^{t(ua^+ + va^- + wE)} = (ua^+ + va^- + wE) e^{t(ua^+ + va^- + wE)},   t ∈ R_+.

3. We have

dω_2/dt (t) = d/dt [ e^{tua^+} e^{tva^-} e^{(tw + t^2 uv/2)E} ]

= ua^+ e^{tua^+} e^{tva^-} e^{(tw + t^2 uv/2)E}
+ v e^{tua^+} a^- e^{tva^-} e^{(tw + t^2 uv/2)E}
+ (w + tuv) e^{tua^+} e^{tva^-} E e^{(tw + t^2 uv/2)E},   t ∈ R_+.

4. We have

dω_2/dt (t) = ua^+ e^{tua^+} e^{tva^-} e^{(tw + t^2 uv/2)E}
+ v e^{tua^+} a^- e^{tva^-} e^{(tw + t^2 uv/2)E}
+ (w + tuv) e^{tua^+} e^{tva^-} E e^{(tw + t^2 uv/2)E}

= ua^+ ω_2(t) + v(a^- - utE) e^{tua^+} e^{tva^-} e^{(tw + t^2 uv/2)E} + (w + tuv)E ω_2(t)

= ua^+ ω_2(t) + va^- ω_2(t) - tuvE ω_2(t) + (w + tuv)E ω_2(t)

= (ua^+ + va^- + wE) ω_2(t),   t ∈ R_+.

Consequently, ω_1(t) and ω_2(t) satisfy the same differential equation

ω'(t) = (ua^+ + va^- + wE) ω(t),   t ∈ R_+,

with the same initial condition ω_1(0) = ω_2(0) = I, and this yields
ω_1(t) = ω_2(t), t ∈ R_+, which shows (6.5) for t = 1.
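The splitting formula (6.5) also holds exactly in the faithful 3 × 3 nilpotent representation of the Heisenberg algebra, which gives a quick machine check (a sketch, not part of the original solution):

```python
import numpy as np

# Nilpotent 3x3 representation: [a^-, a^+] = E with E central, and all cubes vanish
aminus = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
aplus  = np.array([[0., 0, 0], [0, 0, 1], [0, 0, 0]])
E = aminus @ aplus - aplus @ aminus

def expm3(M):
    # exact exponential, since M^3 = 0 for these strictly upper-triangular matrices
    return np.eye(3) + M + M @ M / 2

u, v, w = 0.4, -1.1, 0.25
lhs = expm3(u * aplus + v * aminus + w * E)
rhs = expm3(u * aplus) @ expm3(v * aminus) @ expm3((w + u * v / 2) * E)
assert np.allclose(lhs, rhs)
```

Here the exponentials are polynomials, so the Baker-Campbell-Hausdorff correction w + uv/2 is verified without any truncation error.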
5. We have

⟨e_0, e^{ua^+ + va^- + wE} e_0⟩ = ⟨e_0, e^{ua^+} e^{va^-} e^{(w + uvσ^2/2)E} e_0⟩
= e^{w + uvσ^2/2} ⟨e_0, e^{ua^+} e^{va^-} e_0⟩
= e^{w + uvσ^2/2} ⟨e_0, e^{ua^+} e_0⟩
= e^{w + uvσ^2/2} ⟨e^{ūa^-} e_0, e_0⟩
= e^{w + uvσ^2/2} ⟨e_0, e_0⟩
= e^{w + uvσ^2/2}.

When u = v = t ∈ R and w = 0 this yields

⟨e_0, e^{t(a^- + a^+)} e_0⟩ = e^{t^2 σ^2/2},

which is the moment generating function of a centered Gaussian random
variable with variance σ^2. Similarly, when u = -it, v = it and w = 0 we
find

⟨e_0, e^{it(a^- - a^+)} e_0⟩ = ⟨e_0, e^{ua^+ + va^-} e_0⟩ = e^{(-it)(it)σ^2/2} = e^{t^2 σ^2/2}.   (6.1)

6. It suffices to note that (6.1) is the moment generating function of the


centered Gaussian distribution with variance σ 2 > 0. More generally, one
can check that za− + z̄a+ has a centered Gaussian distribution with
variance σ 2 |z|2 for all z ∈ C.
Exercise 6.3
1. A direct calculation can be used for the first part, namely for f polynomial
we have

ã° [exp(is(1-τ)) f(τ)] = (-(1-τ)∂_τ - τ∂_τ^2)[f(τ) e^{is(1-τ)}]
= -(1-τ)(f'(τ) e^{is(1-τ)} - is f(τ) e^{is(1-τ)})
- τ(f''(τ) - 2is f'(τ) - s^2 f(τ)) e^{is(1-τ)},

hence

exp(-is(1-τ)) ã° exp(is(1-τ)) f(τ)
= (-(1-τ)∂_τ - τ∂_τ^2) f(τ) + is(τ∂_τ + (1-τ)) f(τ) + isτ∂_τ f(τ) + s^2 τ f(τ)
= ã° f(τ) + is ã^+ f(τ) - is ã^- f(τ) + s^2 τ f(τ).

Denoting by α_x^- = ∂_x, α_y^- = ∂_y, α_x^+ = x - ∂_x, α_y^+ = y - ∂_y the annihilation and creation operators on the two-dimensional boson Fock space

Γ(Ce_1 ⊕ Ce_2) ≃ L^2(R^2, (1/2π) e^{-(x^2+y^2)/2} dxdy),


show that¹

2 e^{-t((α_x^+)^2 - (α_x^-)^2)/2} (α_x^+ α_x^- + α_x^- α_x^+) e^{t((α_x^+)^2 - (α_x^-)^2)/2}

= e^{2t}((α_x^+)^2 + (α_x^-)^2 + α_x^+ α_x^- + α_x^- α_x^+)
- e^{-2t}((α_x^+)^2 + (α_x^-)^2 - α_x^+ α_x^- - α_x^- α_x^+)

= 2cosh(2t)(α_x^+ α_x^- + α_x^- α_x^+) + 2sinh(2t)((α_x^+)^2 + (α_x^-)^2),

hence (6.6) follows, since α_u^- α_u^+ = α_u^+ α_u^- + 1, u = x, y, and
α_x^+ α_x^- + α_x^- α_x^+ + α_y^+ α_y^- + α_y^- α_y^+ = 4ã° + 2 by Lemma 2.4.2.



2. From Question 1 the distribution of exp(-isQ̃) ã° exp(isQ̃) in the
vacuum state Ω = 1 is the same as the distribution of ã° in the state
exp(isQ̃)Ω, cf. [17]. In addition, by (3.8) the spectrum of ã° is N and the
Laguerre polynomial L_n is its eigenvector of eigenvalue n ∈ N. In order to
determine the distribution of ã° in the state e^{is(1-x)}, it is necessary and
sufficient to decompose e^{is(1-x)} into a series of Laguerre polynomials:

exp(is(1-x)) = (e^{is}/(1+is)) Σ_{n=0}^∞ (is/(1+is))^n L_n(x),

which implies that the distribution of ã° in this state is the geometric
distribution μ on N with parameter s^2/(1+s^2), i.e.,

μ({n}) = |e^{is}/(1+is)|^2 |is/(1+is)|^{2n} = (1/(1+s^2)) (s^2/(1+s^2))^n,   n ∈ N.
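The Laguerre expansion of exp(is(1-x)) can be tested numerically with the three-term recurrence (a sketch, not part of the original solution; the parameter values are arbitrary):

```python
import cmath

def laguerre(n, x):
    # L_n via the recurrence (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}
    p0, p1 = 1.0, 1.0 - x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1 - x) * p1 - k * p0) / (k + 1)
    return p1

s, x = 0.7, 1.9
t = 1j * s / (1 + 1j * s)    # ratio of the geometric series, |t| < 1
series = (cmath.exp(1j * s) / (1 + 1j * s)) * sum(t**n * laguerre(n, x)
                                                  for n in range(200))
assert abs(series - cmath.exp(1j * s * (1 - x))) < 1e-8
```

This is the generating function Σ_n L_n(x) t^n = (1-t)^{-1} e^{-xt/(1-t)} evaluated at t = is/(1+is).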
For the second part, we notice that as shown on page 40 of [17],

∫_{-∞}^∞ H_{2n}(x) exp(t((α_x^+)^2 - (α_x^-)^2)/2) Ω (e^{-x^2/2}/√(2π)) dx
= √((2n)!) tanh(t)^n / (n! 2^n √(cosh(t))),

n ∈ N, where (H_n)_{n∈N} is the sequence of Hermite polynomials which are
orthonormal with respect to the standard Gaussian density. From [35], we
have the relation

L_n((x^2+y^2)/2) = ((-1)^n/2^n) Σ_{k=0}^n (√((2k)!) √((2n-2k)!)/(k!(n-k)!)) H_{2k}(x) H_{2n-2k}(y),

¹ cf. page 40 of [17]



hence

⟨L_n(τ) exp(itP̃)1, 1⟩

= (1/2π) ∫_{-∞}^∞ ∫_{-∞}^∞ L_n((x^2+y^2)/2) e^{t((α_x^+)^2 - (α_x^-)^2)/2} e^{t((α_y^+)^2 - (α_y^-)^2)/2} Ω e^{-(x^2+y^2)/2} dxdy

= ((-1)^n/2^n) Σ_{k=0}^n (√((2k)!) √((2n-2k)!)/(k!(n-k)!))
  × (1/√(2π)) ∫_{-∞}^∞ H_{2k}(x) e^{t((α_x^+)^2 - (α_x^-)^2)/2} Ω e^{-x^2/2} dx
  × (1/√(2π)) ∫_{-∞}^∞ H_{2n-2k}(y) e^{t((α_y^+)^2 - (α_y^-)^2)/2} Ω e^{-y^2/2} dy

= ((-1)^n tanh(t)^n/(4^n cosh(t))) Σ_{k=0}^n (2k)!(2n-2k)!/(k!^2 (n-k)!^2)

= (-1)^n √(1 - tanh(t)^2) tanh(t)^n,   (6.2)

since

Σ_{k=0}^n (2k)!(2n-2k)!/(k!^2 (n-k)!^2) = 2^{2n},   n ∈ N.

Consequently, the distribution of ã° in the state exp(isP̃) is the geometric
distribution ν on N with parameter tanh^2(s), given by

ν({n}) = (1 - tanh^2(s)) tanh^{2n}(s),   n ∈ N.

In other words, this result follows from the fact that the random variables
α_x^+ α_x^-, α_y^+ α_y^- are independent and have negative binomial distributions in
the states

exp((t/2)((α_x^+)^2 - (α_x^-)^2)) Ω   and   exp((t/2)((α_y^+)^2 - (α_y^-)^2)) Ω,

hence their half sum ã° has a geometric distribution in the state exp(itP̃),
cf. [93], [97].
3. Applying (6.2) with n = 0 we find

IE[exp(itP̃)] = √(1 - tanh(t)^2) = 1/cosh(t),   t ∈ R.
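The identities used in this exercise are elementary to verify by machine (a sketch, not part of the original solution; s = 0.8 is an arbitrary choice):

```python
import math

s = 0.8
p = math.tanh(s)**2
nu = [(1 - p) * p**n for n in range(400)]
assert abs(sum(nu) - 1) < 1e-12                          # nu is a probability law on N
assert abs(math.sqrt(1 - p) - 1 / math.cosh(s)) < 1e-12  # sqrt(1 - tanh^2) = 1/cosh

# the combinatorial identity used to simplify (6.2)
for n in range(8):
    assert sum(math.comb(2 * k, k) * math.comb(2 * n - 2 * k, n - k)
               for k in range(n + 1)) == 4**n
```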

Chapter 7 - Weyl calculus on real Lie algebras


Exercise 7.1 Quantum optics.
1. The distribution of N = a^+ a^- in the state

Ψ(α) = e^{-|α|^2/2} Σ_{n=0}^∞ (α^n/√(n!)) e_n

is given by its moment generating function

⟨Ψ(α), e^{tN} Ψ(α)⟩ = e^{-|α|^2} Σ_{n=0}^∞ (|α|^{2n}/n!) e^{tn} = exp((e^t - 1)|α|^2),

i.e., it is a Poisson distribution with parameter |α|^2.


2. When φ(z) = e^{-z^2/4}/(2π)^{1/4}, the probability density in the pure state
|φ⟩⟨φ| is given by

W_{|φ⟩⟨φ|}(x, y) = (1/2π) ∫_{-∞}^∞ φ̄(x-t) φ(x+t) e^{iyt} dt

= (1/(2π)^{3/2}) ∫_{-∞}^∞ e^{-(x-t)^2/4 - (x+t)^2/4} e^{iyt} dt

= (1/(2π)^{3/2}) ∫_{-∞}^∞ e^{-x^2/2 - t^2/2 + ity} dt

= (1/2π) e^{-(x^2+y^2)/2},   x, y ∈ R.

Chapter 8 - Lévy processes on real Lie algebras


Exercise 8.1
1. We have

∫ p_n(x) p_m(x) μ(dx) = ⟨e_0, p_n(X) p_m(X) e_0⟩ = ⟨p_n(X) e_0, p_m(X) e_0⟩ = δ_{nm},   n, m ∈ N.
2. From Section 3.3.2 we get

X e_n = √((n+1)(n+m_0)) e_{n+1} + β(2n+m_0) e_n + √(n(n+m_0-1)) e_{n-1},

n ∈ N, and

(n+1) P_{n+1} + (2βn + βm_0 - x) P_n + (n+m_0-1) P_{n-1} = 0,

with initial condition P_{-1} = 0, P_0 = 1, for the rescaled polynomials

P_n = (Π_{k=1}^n √((k+m_0-1)/k)) p_n.
3. It follows from the results of Section 3.3.2 that when |β| = 1 we have

P_n(x) = (-β)^n L_n^{(m_0-1)}(βx),

where L_n^{(α)} is the Laguerre polynomial; if |β| < 1 we find the
Meixner-Pollaczek polynomials, and if |β| > 1 we get the Meixner
polynomials.
4. The three measures μ can be found from the results of Section 3.3.2. In
particular, if β = +1 then μ is, up to a normalisation parameter, the usual
χ^2-distribution with parameter m_0.

Chapter 9 - A guide to the Malliavin calculus


Exercise 9.1 First we note that all stochastic integrals with respect to a
martingale have expectation equal to zero. Next, if M_t is a normal martingale
and u_t is either an adapted process or an independent process such that
IE[u_t^2] = t, the Itô isometry shows that

Var[∫_0^T u_t dM_t] = IE[(∫_0^T u_t dM_t)^2] = IE[∫_0^T |u_t|^2 dt] = ∫_0^T IE[|u_t|^2] dt = ∫_0^T t dt = T^2/2,

which is the case in questions (b)-(c)-(d)-(e) since both B_t and N_t - t
are normal martingales. For question (a) we note that formally we have

Var[∫_0^T B_{e^t} dB_t] = ∫_0^T IE[|B_{e^t}|^2] dt = ∫_0^T e^t dt = e^T - 1.

However, this stochastic integral is not defined as the process B_{e^t} is not adapted
since e^t > t, t ∈ R_+.
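The T^2/2 variance in case (b) can be illustrated by Monte Carlo simulation (a sketch, not part of the original solution; the discretisation, sample size, and tolerances are arbitrary and deliberately loose):

```python
import numpy as np

rng = np.random.default_rng(42)
T, n, paths = 1.0, 200, 40_000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
B = np.cumsum(dB, axis=1) - dB              # left endpoints B_{t_i} (adapted integrand)
ito = (B * dB).sum(axis=1)                  # approximates the integral of B_t dB_t
assert abs(ito.mean()) < 0.02               # martingale: zero expectation
assert abs(ito.var() - T**2 / 2) < 0.05     # Ito isometry gives T^2/2
```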

Exercise 9.2 We have

IE[exp(β ∫_0^T B_t dB_t)] = IE[exp(β(B_T^2 - T)/2)]

= e^{-βT/2} ∫_{-∞}^∞ exp(βx^2/2 - x^2/(2T)) dx/√(2πT)

= e^{-βT/2} ∫_{-∞}^∞ exp(-(1-βT)y^2/2) dy/√(2π)

= e^{-βT/2}/√(1-βT),   β < 1/T.
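The closed form can be checked against a direct numerical Gaussian integral (a sketch, not part of the original solution; the grid parameters are arbitrary):

```python
import math

def lhs(beta, T, N=200_000, L=40.0):
    # E[exp(beta (B_T^2 - T)/2)] with B_T ~ N(0, T), by a midpoint Riemann sum
    h = 2 * L / N
    s = 0.0
    for i in range(N):
        x = -L + (i + 0.5) * h
        s += math.exp(beta * (x * x - T) / 2 - x * x / (2 * T)) * h
    return s / math.sqrt(2 * math.pi * T)

beta, T = 0.5, 1.0
assert abs(lhs(beta, T) - math.exp(-beta * T / 2) / math.sqrt(1 - beta * T)) < 1e-6
```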

Exercise 9.3 We have

IE[e^{∫_0^T f(s)dB_s} | F_t] = IE[e^{∫_0^t f(s)dB_s} e^{∫_t^T f(s)dB_s} | F_t]

= e^{∫_0^t f(s)dB_s} IE[e^{∫_t^T f(s)dB_s} | F_t]

= e^{∫_0^t f(s)dB_s} IE[e^{∫_t^T f(s)dB_s}]

= exp(∫_0^t f(s)dB_s + (1/2)∫_t^T |f(s)|^2 ds),

0 ≤ t ≤ T, since

∫_t^T f(s)dB_s ≃ N(0, ∫_t^T |f(s)|^2 ds).

Exercise 9.4 Letting Y_t = e^{-αt} X_t we find dY_t = e^{-αt} dB_t, hence

Y_t = Y_0 + ∫_0^t e^{-αs} dB_s,

and

X_t = e^{αt} Y_t = e^{αt} Y_0 + e^{αt} ∫_0^t e^{-αs} dB_s = e^{αt} X_0 + ∫_0^t e^{α(t-s)} dB_s,

0 ≤ t ≤ T.

Exercise 9.5
1. We have S_t = S_0 e^{rt + σB_t - σ^2 t/2}, t ∈ R_+.
2. We have

f(t, S_t) = IE[(S_T)^2 | F_t]
= S_t^2 IE[e^{2r(T-t) + 2σ(B_T - B_t) - σ^2(T-t)} | F_t]
= S_t^2 e^{2r(T-t) - σ^2(T-t)} IE[e^{2σ(B_T - B_t)} | F_t]
= S_t^2 e^{2r(T-t) - σ^2(T-t) + 2σ^2(T-t)},

0 ≤ t ≤ T, hence f(t, x) = x^2 e^{(2r+σ^2)(T-t)}, 0 ≤ t ≤ T.



3. By the tower property of conditional expectations we have

IE[f(t, S_t) | F_u] = IE[IE[S_T^2 | F_t] | F_u]
= IE[S_T^2 | F_u]
= f(u, S_u),   0 ≤ u ≤ t ≤ T,

hence the process t ↦ f(t, S_t) is a martingale.
4. By the Itô formula we have

f(t, S_t) = f(0, S_0) + σ ∫_0^t S_u (∂f/∂x)(u, S_u) dB_u
+ r ∫_0^t S_u (∂f/∂x)(u, S_u) du + (σ^2/2) ∫_0^t S_u^2 (∂^2 f/∂x^2)(u, S_u) du + ∫_0^t (∂f/∂u)(u, S_u) du

= f(0, S_0) + σ ∫_0^t S_u (∂f/∂x)(u, S_u) dB_u

because the process f(t, S_t) is a martingale. This yields

ζ_t = σ S_t (∂f/∂x)(t, S_t) = 2σ S_t^2 e^{(2r+σ^2)(T-t)},   t ∈ [0, T].

We also check that f(t, x) satisfies the PDE

rx (∂f/∂x)(t, x) + (σ^2/2) x^2 (∂^2 f/∂x^2)(t, x) + (∂f/∂t)(t, x) = 0

with terminal condition f(T, x) = x^2.
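The PDE and terminal condition can be checked numerically by finite differences (a sketch, not part of the original solution; the parameter values and sample points are arbitrary):

```python
import math

r, sigma, T = 0.03, 0.2, 2.0
c = 2 * r + sigma**2
f = lambda t, x: x**2 * math.exp(c * (T - t))

for t, x in [(0.5, 1.2), (1.5, 0.7)]:
    h = 1e-5
    fx = (f(t, x + h) - f(t, x - h)) / (2 * h)
    fxx = (f(t, x + 1e-4) - 2 * f(t, x) + f(t, x - 1e-4)) / 1e-8
    ft = (f(t + h, x) - f(t - h, x)) / (2 * h)
    # r x f_x + (sigma^2/2) x^2 f_xx + f_t = 0
    assert abs(r * x * fx + 0.5 * sigma**2 * x**2 * fxx + ft) < 1e-5
assert f(T, 1.3) == 1.3**2      # terminal condition f(T, x) = x^2
```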

Chapter 10 - Noncommutative Girsanov theorem


Exercise 10.1 We note that

⟨e_0, g(e^t Z) e_0⟩ = ⟨e_0, g(Z) e^{Z(1-e^{-t}) - ct} e_0⟩,

which reads as the classical change of variable formula

∫_0^∞ g(e^t z) e^{-z} z^{c-1} dz = ∫_0^∞ g(z) e^{z(1-e^{-t}) - ct} e^{-z} z^{c-1} dz,

for a gamma distributed random variable Z with parameter c > 0.
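The change of variable formula can be confirmed by numerical integration (a sketch, not part of the original solution; the test function g, the parameters, and the grid are arbitrary choices):

```python
import math

def integral(fun, c, N=200_000, L=60.0):
    # midpoint Riemann sum of fun(z) e^{-z} z^{c-1} over (0, L)
    h = L / N
    total = 0.0
    for i in range(N):
        z = (i + 0.5) * h
        total += fun(z) * math.exp(-z) * z**(c - 1) * h
    return total

c, t = 2.3, 0.4
g = lambda z: math.cos(z) * math.exp(-z / 2)
lhs = integral(lambda z: g(math.exp(t) * z), c)
rhs = integral(lambda z: g(z) * math.exp(z * (1 - math.exp(-t)) - c * t), c)
assert abs(lhs - rhs) < 1e-6
```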



Chapter 11 - Noncommutative integration by parts


Exercise 11.1 The proof is similar to that of Proposition 12.1.4, now using
the formula

p^{(n)}(X) = D_k^n (D_k X)^{-n} p(X) - Σ_{k=0}^{n-1} A_k p^{(k)}(X),

where A_0, ..., A_{n-1} are bounded operators, to get the necessary estimate

| ∫_{-‖X‖}^{‖X‖} p^{(n)}(x) dμ_X(x) | ≤ C sup_{x∈[-‖X‖,‖X‖]} |p(x)|

by induction over n ≥ 2.

Chapter 12 - Smoothness of densities on real Lie algebras


Exercise 12.1
1. This follows from the computations of Section 2.4.
2. We have

Q + M = B^- + B^+ + M = P^2/2,

and

Q - M = B^- + B^+ - M = -Q^2/2.

3. The random variables Q + M and Q - M both have gamma laws.
4. The law of Q + αM can be found in Chapter 3, depending on the value of
α. For |α| < 1, Q + αM has an absolutely continuous law, and when
|α| > 1, Q + αM has a geometric law with parameter c^2 supported by

{-1/2 - sgn(α)(c - 1/c)k : k ∈ N},

with c = α sgn(α) - √(α^2 - 1).
5. The classical versions of those identities are given by the integration by
parts formulas (2.1) for the gamma density.
References

[1] L. Accardi, U. Franz, and M. Skeide. Renormalized squares of white noise and
other non-Gaussian noises as Lévy processes on real Lie algebras. Comm. Math.
Phys., 228(1):123–150, 2002. (Cited on pages xvii, 17, 38, 40, 96, 132, 147,
and 187).
[2] L. Accardi, M. Schürmann, and W.v. Waldenfels. Quantum independent incre-
ment processes on superalgebras. Math. Z., 198:451–477, 1988. (Cited on
pages xvii and 131).
[3] G.S. Agarwal. Quantum Optics. Cambridge University Press, Cambridge, 2013.
(Cited on page 130).
[4] N.I. Akhiezer. The Classical Moment Problem and Some Related Questions in
Analysis. Translated by N. Kemmer. Hafner Publishing Co., New York, 1965.
(Cited on page 54).
[5] N.I. Akhiezer and I.M. Glazman. Theory of Linear Operators in Hilbert Space.
Dover Publications Inc., New York, 1993. (Cited on page 243).
[6] S. Albeverio, Yu. G. Kondratiev, and M. Röckner. Analysis and geometry
on configuration spaces. J. Funct. Anal., 154(2):444–500, 1998. (Cited on
page 162).
[7] S.T. Ali, N.M. Atakishiyev, S.M. Chumakov, and K.B. Wolf. The Wigner
function for general Lie groups and the wavelet transform. Ann. Henri Poincaré,
1(4):685–714, 2000. (Cited on pages xvi, xvii, 114, 115, 118, 120, 122, 123,
124, and 191).
[8] S.T. Ali, H. Führ, and A.E. Krasowska. Plancherel inversion as unified approach
to wavelet transforms and Wigner functions. Ann. Henri Poincaré, 4(6):1015–
1050, 2003. (Cited on pages xvi and 124).
[9] G.W. Anderson, A. Guionnet, and O. Zeitouni. An Introduction to Random
Matrices. Cambridge: Cambridge University Press, 2010. (Cited on page 88).
[10] U. Franz and A. Skalski. Noncommutative Mathematics for Quantum Systems.
Cambridge IISc Series, 2015; and D. Applebaum. Probability on Compact Lie
Groups, volume 70 of Probability Theory and Stochastic Modelling. Springer,
2014. (Cited on pages xvi and 99).
[11] M. Anshelevich. Orthogonal polynomials and counting permutations. www
.math.tamu.edu/∼manshel/papers/OP-counting-permutations.pdf, 2014. (Cited
on page 233).
[12] N.M. Atakishiyev, S.M. Chumakov, and K.B. Wolf. Wigner distribution function
for finite systems. J. Math. Phys., 39(12):6247–6261, 1998. (Cited on page 125).


[13] V.P. Belavkin. A quantum nonadapted Itô formula and stochastic analysis in
Fock scale. J. Funct. Anal., 102:414–447, 1991. (Cited on pages 88, 190, 212,
and 213).
[14] V.P. Belavkin. A quantum nonadapted stochastic calculus and nonstationary
evolution in Fock scale. In Quantum Probability and Related Topics VI, pages
137–179. World Sci. Publishing, River Edge, NJ, 1991. (Cited on pages 88, 190,
212, and 213).
[15] V.P. Belavkin. On quantum Itô algebras. Math. Phys. Lett., 7:1–16, 1998. (Cited
on page 88).
[16] C. Berg, J.P.R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups,
volume 100 of Graduate Texts in Mathematics. Springer-Verlag, New York,
1984. Theory of positive definite and related functions. (Cited on page 88).
[17] Ph. Biane. Calcul stochastique non-commutatif. In Ecole d’Eté de Probabilités
de Saint-Flour, volume 1608 of Lecture Notes in Mathematics. Springer-Verlag,
Berlin, 1993. (Cited on pages 7, 88, 211, and 264).
[18] Ph. Biane. Quantum Markov processes and group representations. In Quantum
Probability Communications, QP-PQ, X, pages 53–72. World Sci. Publishing,
River Edge, NJ, 1998. (Cited on pages xvii, 132, and 189).
[19] Ph. Biane and R. Speicher. Stochastic calculus with respect to free Brow-
nian motion and analysis on Wigner space. Probab. Theory Related Fields,
112(3):373–409, 1998. (Cited on pages 88 and 216).
[20] L.C. Biedenharn and J.D. Louck. Angular Momentum in Quantum Physics.
Theory and Application. With a foreword by P.A. Carruthers. Cambridge:
Cambridge University Press, reprint of the 1981 hardback edition edition, 2009.
(Cited on page 72).
[21] L.C. Biedenharn and J.D. Louck. The Racah-Wigner Algebra in Quantum
Theory. With a foreword by P.A. Carruthers. Introduction by G.W. Mackey.
Cambridge: Cambridge University Press, reprint of the 1984 hardback ed.
edition, 2009. (Cited on page 72).
[22] J.-M. Bismut. Martingales, the Malliavin calculus and hypoellipticity under
general Hörmander’s conditions. Z. Wahrsch. Verw. Gebiete, 56(4):469–505,
1981. (Cited on pages xvii and 189).
[23] F. Bornemann. Teacher’s corner - kurze Beweise mit langer Wirkung. Mitteilun-
gen der Deutschen Mathematiker-Vereinigung, 10:55–55, July 2002. (Cited on
page 258).
[24] N. Bouleau, editor. Dialogues autour de la création mathématique. Association
Laplace-Gauss, Paris, 1997.
[25] T. Carleman. Les Fonctions Quasi Analytiques. Paris: Gauthier-Villars, Éditeur,
Paris, 1926. (Cited on page 221).
[26] M.H. Chang. Quantum Stochastics. Cambridge Series in Statistical and Proba-
bilistic Mathematics. Cambridge University Press, Cambridge, 2015. (Cited on
pages xviii and 88).
[27] T.S. Chihara. An Introduction to Orthogonal Polynomials. Gordon and Breach
Science Publishers, New York-London-Paris, 1978. Mathematics and Its Appli-
cations, Vol. 13. (Cited on page 233).
[28] S.M. Chumakov, A.B. Klimov, and K.B. Wolf. Connection between two wigner
functions for spin systems. Physical Review A, 61(3):034101, 2000. (Cited on
page 125).
[29] L. Cohen. Time-Frequency Analysis: Theory and Applications. Prentice-Hall,
New Jersey, 1995. (Cited on pages xvii and 130).

[30] M. Cook. Mathematicians: An Outer View of the Inner World. Princeton


University Press, USA, 2009. With an introduction by R. C. Gunning.
[31] D. Dacunha-Castelle and M. Duflo. Probability and Statistics. Vol. I. Springer-
Verlag, New York, 1986. (Cited on page xvii).
[32] D. Dacunha-Castelle and M. Duflo. Probability and Statistics. Vol. II. Springer-
Verlag, New York, 1986. (Cited on page xvii).
[33] P.A.M. Dirac. The Principles of Quantum Mechanics. Oxford, at the Clarendon
Press, 1947. 3rd ed.
[34] M. Duflo and C.C. Moore. On the regular representation of a nonunimod-
ular locally compact group. J. Funct. Anal., 21(2):209–243, 1976. (Cited on
page 114).
[35] A. Erdélyi, W. Magnus, F. Oberhettinger, and F.G. Tricomi. Higher Transcen-
dental Functions, volume 2. McGraw Hill, New York, 1953. (Cited on page 264).
[36] P. Feinsilver and J. Kocik. Krawtchouk matrices from classical and quantum
random walks. In Algebraic methods in statistics and probability. AMS special
session on algebraic methods in statistics, Univ. of Notre Dame, IN, USA, April
8–9, 2000, pages 83–96. Providence, RI: AMS, American Mathematical Society,
2001. (Cited on page 72).
[37] P. Feinsilver and R. Schott. Krawtchouk polynomials and finite probability
theory. In Probability Measures on groups X. Proceedings of the Tenth Oberwol-
fach conference, held November 4-10, 1990 in Oberwolfach, Germany, pages
129–135. New York, NY: Plenum Publishing Corporation, 1991. (Cited on
page 72).
[38] P. Feinsilver and R. Schott. Algebraic structures and operator calculus, Vol. I:
Representations and Probability Theory, volume 241 of Mathematics and Its
Applications. Kluwer Academic Publishers Group, Dordrecht, 1993. (Cited on
pages xviii, 92, 94, and 95).
[39] P. Feinsilver and R. Schott. Algebraic structures and operator calculus. Vol. II,
volume 292 of Mathematics and its Applications. Kluwer Academic Publishers
Group, Dordrecht, 1994. Special functions and computer science. (Cited on
page xv).
[40] P. Feinsilver and R. Schott. Algebraic structures and operator calculus. Vol. III,
volume 347 of Mathematics and its Applications. Kluwer Academic Publishers
Group, Dordrecht, 1996. Representations of Lie groups. (Cited on page 99).
[41] R.P. Feynman, R. Leighton, and M. Sands. The Feynman Lectures on Physics.
Vols. 1-3. Addison-Wesley Publishing Co., Inc., Reading, Mass.-London, 1964-
1966. www.feynmanlectures.info (Cited on pages 70 and 72).
[42] U. Franz. Classical Markov processes from quantum Lévy processes. Inf. Dim.
Anal., Quantum Prob., and Rel. Topics, 2(1):105–129, 1999. (Cited on pages xvii
and 132).
[43] U. Franz, R. Léandre, and R. Schott. Malliavin calculus for quantum stochastic
processes. C. R. Acad. Sci. Paris Sér. I Math., 328(11):1061–1066, 1999. (Cited
on page xviii).
[44] U. Franz, R. Léandre, and R. Schott. Malliavin calculus and Skorohod integra-
tion for quantum stochastic processes. Infin. Dimens. Anal. Quantum Probab.
Relat. Top., 4(1):11–38, 2001. (Cited on page xviii).
[45] U. Franz and R. Schott. Stochastic Processes and Operator Calculus on Quan-
tum Groups. Kluwer Academic Publishers Group, Dordrecht, 1999. (Cited on
pages 139 and 147).

[46] U. Franz and N. Privault. Quasi-invariance formulas for components of quantum


Lévy processes. Infin. Dimens. Anal. Quantum Probab. Relat. Top., 7(1):131–
145, 2004. (Cited on page 16).
[47] C.W. Gardiner and P. Zoller. Quantum Noise. Springer Series in Synergetics.
Springer-Verlag, Berlin, second edition, 2000. A handbook of Markovian and
non-Markovian quantum stochastic methods with applications to quantum
optics. (Cited on pages xvii and 131).
[48] C. Gerry and P. Knight. Introductory Quantum Optics. Cambridge University
Press, Cambridge, 2004. (Cited on page 128).
[49] L. Gross. Abstract Wiener spaces. In Proceedings of the Fifth Berkeley Sym-
posium on Mathematical Statistics and Probability, Berkeley, 1967. Univ. of
California Press. (Cited on page 173).
[50] A. Guichardet. Symmetric Hilbert spaces and related topics, volume 261 of
Lecture Notes in Mathematics. Springer Verlag, Berlin, Heidelberg, New York,
1972. (Cited on page 88).
[51] A. Guichardet. Cohomologie des groupes topologiques et des algèbres de Lie.,
volume 2 of Textes Mathematiques. CEDIC/Fernand Nathan, Paris, 1980. (Cited
on pages 147 and 148).
[52] T. Hida. Brownian Motion. Springer Verlag, Berlin, 1981. (Cited on page 158).
[53] A.S. Holevo. Statistical structure of quantum theory, volume 67 of Lecture Notes
in Physics. Monographs. Springer-Verlag, Berlin, 2001. (Cited on pages xvii
and 131).
[54] R.L. Hudson and K.R. Parthasarathy. Quantum Itô’s formula and stochastic
evolutions. Comm. Math. Phys., 93(3):301–323, 1984. (Cited on pages 190, 212,
214, and 215).
[55] C.J. Isham. Lectures on quantum theory. Imperial College Press, London, 1995.
Mathematical and structural foundations. (Cited on page xviii).
[56] Y. Ishikawa. Stochastic Calculus of Variations for Jump Processes. de Gruyter,
Berlin, 2013. (Cited on page xviii).
[57] L. Isserlis. On a formula for the product-moment coefficient of any order of a
normal frequency distribution in any number of variables. Biometrika, 12(1-2):
134–139, 1918. (Cited on page 240).
[58] S. Janson. Gaussian Hilbert spaces, volume 129 of Cambridge Tracts in Math-
ematics. Cambridge University Press, Cambridge, 1997. (Cited on pages 155
and 211).
[59] U.C. Ji and N. Obata. Annihilation-derivative, creation-derivative and represen-
tation of quantum martingales. Commun. Math. Phys., 286(2):751–775, 2009.
(Cited on page 216).
[60] U.C. Ji and N. Obata. Calculating normal-ordered forms in Fock space by
quantum white noise derivatives. Interdiscip. Inf. Sci., 19(2):201–211, 2013.
(Cited on page 216).
[61] J.R. Johansson, P.D. Nation, and F. Nori. QuTiP: an open-source Python
framework for the dynamics of open quantum systems. Computer Physics
Communications, 183(8):1760–1772, 2012. (Cited on page 113).
[62] J.R. Johansson, P.D. Nation, and F. Nori. QuTiP 2: a Python framework for
the dynamics of open quantum systems. Computer Physics Communications,
184(4):1234–1240, 2013. (Cited on page 113).
[63] R. Koekoek and R.F. Swarttouw. The Askey-scheme of hypergeometric orthog-
onal polynomials and its q-analogue. Delft University of Technology, Report
98–17, 1998. (Cited on pages 40, 41, 237, and 238).
[64] A. Korzeniowski and D. Stroock. An example in the theory of hypercontractive
semigroups. Proc. Amer. Math. Soc., 94:87–90, 1985. (Cited on pages 18
and 26).
[65] P.S. de Laplace. Théorie Analytique des Probabilités. V. Courcier, Imprimeur, 57
Quai des Augustins, Paris, 1814.
[66] M. Ledoux. Concentration of measure and logarithmic Sobolev inequalities. In
Séminaire de Probabilités XXXIII, volume 1709 of Lecture Notes in Math., pages
120–216. Springer, Berlin, 1999. (Cited on page 18).
[67] V.P. Leonov and A.N. Shiryaev. On a method of calculation of semi-invariants.
Theory Probab. Appl., 4:319–329, 1959. (Cited on pages 239 and 240).
[68] J.M. Lindsay. Quantum and non-causal stochastic calculus. Probab. Theory
Related Fields, 97:65–80, 1993. (Cited on pages 88, 190, 212, and 213).
[69] J.M. Lindsay. Integral-sum kernel operators. In Quantum Probability Commu-
nications (Grenoble, 1998), volume XII, page 121. World Scientific, Singapore,
2003. (Cited on page 88).
[70] J.M. Lindsay. Quantum stochastic analysis – an introduction. In Quantum
independent increment processes. I, volume 1865 of Lecture Notes in Math.,
pages 181–271. Springer, Berlin, 2005. (Cited on page 88).
[71] E. Lukacs. Applications of Faà di Bruno’s formula in mathematical statistics.
Am. Math. Mon., 62:340–348, 1955. (Cited on pages 239 and 240).
[72] E. Lukacs. Characteristic Functions. Hafner Publishing Co., New York, 1970.
Second edition, revised and enlarged. (Cited on pages 239 and 240).
[73] H. Maassen. Quantum Markov processes on Fock space described by integral
kernels. In Quantum probability and applications II (Heidelberg 1984), volume
1136 of Lecture Notes in Math., pages 361–374. Springer, Berlin, 1985. (Cited
on page 88).
[74] T. Mai, R. Speicher, and M. Weber. Absence of algebraic relations and of zero
divisors under the assumption of full non-microstates free entropy dimension.
Preprint arXiv:1502.06357, 2015. (Cited on page 216).
[75] P. Malliavin. Stochastic calculus of variations and hypoelliptic operators. In
Intern. Symp. SDE. Kyoto, pages 195–253, Tokyo, 1976. Kinokumiya. (Cited
on pages xiii, xvii, and 173).
[76] P. Malliavin. Stochastic analysis, volume 313 of Grundlehren der Mathematis-
chen Wissenschaften. Springer-Verlag, Berlin, 1997. (Cited on page 155).
[77] P. Malliavin. Stochastic analysis, volume 313 of Grundlehren der Mathematis-
chen Wissenschaften. Springer-Verlag, Berlin, 1997. (Cited on page 169).
[78] S. Mazur and W. Orlicz. Grundlegende Eigenschaften der polynomischen
Operationen. I. Stud. Math., 5:50–68, 1934. (Cited on page 259).
[79] P.A. Meyer. Quantum probability for probabilists, volume 1538 of Lecture Notes
in Math. Springer-Verlag, Berlin, 2nd edition, 1995. (Cited on pages 7, 88, 135,
147, 186, and 211).
[80] P.A. Meyer. Quantum probability seen by a classical probabilist. In Probability
towards 2000 (New York, 1995), volume 128 of Lecture Notes in Statist., pages
235–248. Springer, New York, 1998. (Cited on page xvi).
[81] A. Nica and R. Speicher. Lectures on the combinatorics of free probability,
volume 335 of London Mathematical Society Lecture Note Series. Cambridge
University Press, Cambridge, 2006. (Cited on page 88).
[82] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Informa-
tion. Cambridge University Press, Cambridge, 2000. (Cited on page 70).
[83] D. Nualart. Analysis on Wiener space and anticipating stochastic calculus.
In École d'été de Probabilités de Saint-Flour XXV, volume 1690 of Lecture
Notes in Mathematics, pages 123–227. Springer-Verlag, Berlin, 1998. (Cited on
page 155).
[84] D. Nualart. The Malliavin calculus and related topics. Probability and its Appli-
cations. Springer-Verlag, Berlin, second edition, 2006. (Cited on pages xvii, 155,
and 173).
[85] H. Oehlmann. Analyse temps-fréquence de signaux vibratoires de boîtes de
vitesses. PhD thesis, Université Henri Poincaré Nancy I, 1996. (Cited on
page 130).
[86] H. Osswald. Malliavin calculus for Lévy processes and infinite-dimensional
Brownian motion, volume 191 of Cambridge Tracts in Mathematics. Cambridge
University Press, Cambridge, 2012. (Cited on pages xviii and 169).
[87] K.R. Parthasarathy. An Introduction to Quantum Stochastic Calculus. Birkhäuser,
1992. (Cited on pages xv, 7, 47, 48, 82, 86, 88, 135, 186, 208, 212, 214, and 225).
[88] K.R. Parthasarathy. Lectures on quantum computation, quantum error-
correcting codes and information theory. Tata Institute of Fundamental
Research, Mumbai, 2003. Notes by Amitava Bhattacharyya. (Cited on page 72).
[89] G. Peccati and M. Taqqu. Wiener Chaos: Moments, Cumulants and Diagrams: A
Survey with Computer Implementation. Bocconi and Springer Series. Springer,
Milan, 2011. (Cited on page 239).
[90] J. Pitman. Combinatorial stochastic processes, volume 1875 of Lecture Notes
in Mathematics. Springer-Verlag, Berlin, 2006. Lectures from the 32nd Summer
School on Probability Theory held in Saint-Flour, July 7–24, 2002. (Cited on
page 239).
[91] N. Privault. Inégalités de Meyer sur l’espace de Poisson. C. R. Acad. Sci. Paris
Sér. I Math., 318:559–562, 1994. (Cited on page 18).
[92] N. Privault. A transfer principle from Wiener to Poisson space and applications.
J. Funct. Anal., 132:335–360, 1995. (Cited on page 26).
[93] N. Privault. A different quantum stochastic calculus for the Poisson process.
Probab. Theory Related Fields, 105:255–278, 1996. (Cited on pages 16 and 265).
[94] N. Privault. Girsanov theorem for anticipative shifts on Poisson space. Probab.
Theory Related Fields, 104:61–76, 1996. (Cited on page 173).
[95] N. Privault. Une nouvelle représentation non-commutative du mouvement
brownien et du processus de Poisson. C. R. Acad. Sci. Paris Sér. I Math.,
322:959–964, 1996. (Cited on page 16).
[96] N. Privault. Absolute continuity in infinite dimensions and anticipating stochas-
tic calculus. Potential Analysis, 8(4):325–343, 1998. (Cited on page 173).
[97] N. Privault. Splitting of Poisson noise and Lévy processes on real Lie algebras.
Infin. Dimens. Anal. Quantum Probab. Relat. Top., 5(1):21–40, 2002. (Cited on
pages 16 and 265).
[98] N. Privault. Stochastic analysis in discrete and continuous settings with normal
martingales, volume 1982 of Lecture Notes in Mathematics. Springer-Verlag,
Berlin, 2009. (Cited on pages 149, 155, and 174).
[99] N. Privault. Generalized Bell polynomials and the combinatorics of Poisson
central moments. Electron. J. Combin., 18(1):Research Paper 54, 10, 2011.
(Cited on page 241).
[100] N. Privault and W. Schoutens. Discrete chaotic calculus and covariance identi-
ties. Stochastics Stochastics Rep., 72(3-4):289–315, 2002. (Cited on page 72).
[101] L. Pukanszky. Leçons sur les représentations des groupes. Dunod, Paris, 1967.
(Cited on page 142).
[102] L. Pukanszky. Unitary representations of solvable Lie groups. Ann. scient. Éc.
Norm. Sup., 4:457–608, 1971. (Cited on page 142).
[103] R. Ramer. On nonlinear transformations of Gaussian measures. J. Funct. Anal.,
15:166–187, 1974. (Cited on page 173).
[104] S. Sakai. C*-Algebras and W*-Algebras. Springer-Verlag, New York–Heidelberg,
1971. (Cited on page 179).
[105] M. Schürmann. The Azéma martingales as components of quantum independent
increment processes. In J. Azéma, P.A. Meyer, and M. Yor, editors, Séminaire
de Probabilités XXV, volume 1485 of Lecture Notes in Math. Springer-Verlag,
Berlin, 1991. (Cited on pages xvii and 132).
[106] M. Schürmann. White Noise on Bialgebras. Springer-Verlag, Berlin, 1993.
(Cited on pages xvii, 131, 134, 136, 147, and 179).
[107] I. Shigekawa. Derivatives of Wiener functionals and absolute continuity of
induced measures. J. Math. Kyoto Univ., 20(2):263–289, 1980. (Cited on
page 173).
[108] K.B. Sinha and D. Goswami. Quantum stochastic processes and noncommu-
tative geometry, volume 169 of Cambridge Tracts in Mathematics. Cambridge
University Press, Cambridge, 2007. (Cited on page xviii).
[109] R. F. Streater. Classical and quantum probability. J. Math. Phys., 41(6):3556–
3603, 2000. (Cited on pages xvii and 147).
[110] T.N. Thiele. On semi invariants in the theory of observations. Kjöbenhavn
Overs., pages 135–141, 1899. (Cited on page 239).
[111] N. Tsilevich, A.M. Vershik, and M. Yor. Distinguished properties of the gamma
process and related topics. Preprint arXiv:math.PR/0005287, 2000. (Cited on
pages xvii, 180, and 189).
[112] N. Tsilevich, A.M. Vershik, and M. Yor. An infinite-dimensional analogue of the
Lebesgue measure and distinguished properties of the gamma process. J. Funct.
Anal., 185(1):274–296, 2001. (Cited on pages xvii, 180, and 189).
[113] A.S. Üstünel. An introduction to analysis on Wiener space, volume 1610
of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1995. (Cited on
page 155).
[114] A.M. Vershik, I.M. Gelfand, and M.I. Graev. A commutative model of the group
of currents SL(2, R)^X connected with a unipotent subgroup. Funct. Anal. Appl.,
17(2):137–139, 1983. (Cited on pages xvii and 189).
[115] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 1, volume 72 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1991. (Cited on page 99).
[116] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 3, volume 75 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1992. (Cited on page 99).
[117] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions. Vol. 2, volume 74 of Mathematics and its Applications (Soviet Series).
Kluwer Academic Publishers Group, Dordrecht, 1993. (Cited on page 99).
[118] N.J. Vilenkin and A.U. Klimyk. Representation of Lie groups and special
functions, volume 316 of Mathematics and its Applications. Kluwer Academic
Publishers Group, Dordrecht, 1995. (Cited on page 99).
[119] J. Ville. Théorie et applications de la notion de signal analytique. Câbles et
Transmission, 2:61–74, 1948. (Cited on page 130).
[120] D. Voiculescu. Lectures on free probability theory. In Lectures on probability
theory and statistics (Saint-Flour, 1998), volume 1738 of Lecture Notes in Math.,
pages 279–349. Berlin: Springer, 2000. (Cited on page 88).
[121] D. Voiculescu, K. Dykema, and A. Nica. Free random variables. A noncommutative
probability approach to free products with applications to random
matrices, operator algebras and harmonic analysis on free groups, volume 1
of CRM Monograph Series. American Mathematical Society, Providence, RI,
1992. (Cited on page 88).
[122] W. von Waldenfels. Itô solution of the linear quantum stochastic differential
equation describing light emission and absorption. In Quantum probability
and applications to the quantum theory of irreversible processes, Proc. int.
Workshop, Villa Mondragone/Italy 1982, volume 1055 of Lecture Notes in
Math., pages 384–411. Springer-Verlag, Berlin, 1984. (Cited on pages xvii
and 131).
[123] W. von Waldenfels. A measure theoretical approach to quantum stochastic
processes, volume 878 of Lecture Notes in Physics. Monographs. Springer-
Verlag, Berlin, 2014. (Cited on page xviii).
[124] E.P. Wigner. On the quantum correction for thermodynamic equilibrium. Phys.
Rev., 40:749–759, 1932. (Cited on page xvi).
[125] M.W. Wong. Weyl Transforms. Universitext. Springer-Verlag, Berlin, 1998.
(Cited on page 108).
[126] L.M. Wu. L1 and modified logarithmic Sobolev inequalities and deviation
inequalities for Poisson point processes. Preprint, 1998. (Cited on page 167).
Index

hw, 12, 140
osc, 13, 90, 140, 180
sl2(R), 14, 36, 94, 183, 184
e(2), 97, 100
so(2), 21, 96
so(3), 21, 59, 70, 96, 122

Abel transformation, 33, 34
adapted stochastic process, 84
adherent point, 245
adjoint action, 11, 16, 35, 44, 244
affine Lie algebra, 20, 117
algebra, 49
annihilation operator, 1, 2, 80, 149, 151
anti commutator, 201

Bernoulli distribution, 96
Bessel function (first kind), 98
Bloch sphere, 61
Borel σ-algebra, xi, 48
boson Fock space, 2, 75, 78
Boson independence, 132
Brownian motion, 155

Carleman growth condition, 221
Casimir operator, 24
Cauchy-Stieltjes transform, 243
chaos representation property
  Poisson case, 164
  Wiener case, 158
Charlier polynomial, 233, 235
classical probability space, 47
classical processes, 142
closability, 246
cocycle, 134, 136, 138, 146
complex Lie algebra, 10, 11
compound Poisson process, 138, 148
conservation operator, 80
creation operator, 1, 2, 80, 149, 154
cumulant, 239

density matrix, 65
dictionary "classical ↔ quantum", 72
directed set, 246
distribution, 53
  Bernoulli, 96
  Gamma, 36
  Pascal, 38
divergence formula
  Poisson case, 168
  Wiener case, 161
divergence operator
  on the affine algebra, 194
  noncommutative, 204
  Poisson case, 168
  Wiener case, 159
duality relation, 3, 153
Duflo-Moore operator, 114

enveloping algebra, 133
equivalence
  of Lévy processes, 134
exponential vector, 79

finite difference gradient
  continuous case, 165
first fundamental lemma, 86
Fock space, 6, 7, 75
  boson or symmetric, 2, 7, 75, 78
  free, full, 78
  symmetric, 79
functional calculus, 57, 107, 114, 117
fundamental lemma
  first, 86
  second, 86
fundamental state, 5

gamma distribution, 36, 94
Gaussian
  generating functional, 140
  Lévy process, 140
  Schürmann triple, 140
Gegenbauer polynomial, 233
generating functional, 133
  Gaussian, 140
Girsanov theorem, 178
  Brownian case, 186
  gamma case, 188, 189
  Meixner case, 189
  Poisson case, 187
  Wiener case, 185
GNS construction, 134
gradient operator
  on the affine algebra, 192
  noncommutative, 190, 197
  Wiener case, 159

Hörmander theorem, 224
Hadamard product, 77
harmonic oscillator, 14
Heisenberg–Weyl algebra, 12, 27, 107
Hermite polynomial, 30, 233, 234
Hermitian matrix, 72

involution
  of an algebra, 49
  on a Lie algebra, 10
Itô algebra, 87
Itô table, 139

Jacobi polynomial, 233
joint moments, 103

Lévy measure, 39
Lévy process, 140, 185
  Gaussian, 140
  on a real Lie algebra, 132
Lagrange interpolation, 73
Laguerre polynomial, 233, 237
Lie
  algebra, 10
  bracket, 10, 14
  group, 11
Lie algebra
  hw, 12, 140
  osc, 13, 90, 140, 180
  sl2(R), 14, 36, 94, 183, 184
  e(2), 97, 100
  affine, 20
  oscillator, 13
  real, complex, 11
  special orthogonal, 21

Meixner distribution, 94
Meixner polynomial, 233, 237, 238
Meixner-Pollaczek polynomial, 238
moment generating function, 6
moment problem, 54
momentum operator, 3
multinomial coefficient, 239
multiple stochastic integrals
  continuous time, 150
  Poisson case, 162
  Wiener case, 155

net, 245, 246
noncommutative gradient operator, 190
normal martingale
  continuous time, 149
normal operator, 219
number operator, 2, 13, 32, 81

operator
  annihilation, 1, 2, 80, 149
  conservation, 80
  creation, 1, 2, 80, 149
  divergence
    on the affine algebra, 194
    noncommutative, 204
  Duflo-Moore, 114
  gradient
    on the affine algebra, 192
    noncommutative, 190, 197
  momentum, 3
  number, 2, 13, 81
  position, 3
  symmetrisation, 79
  Weyl, 8
oscillator algebra, 13, 31, 180

Pascal distribution, 38, 94
Pochhammer symbol, 41
Poisson process, 146
Poisson space, 162
polarisation formula, 88
polynomial
  Charlier, 233, 235
  Gegenbauer, 233
  Hermite, 30, 233
  Jacobi, 233
  Laguerre, 233, 237
  Meixner, 233, 237, 238
  Meixner-Pollaczek, 238
  Touchard, 241
  ultraspherical, 233
position operator, 3
positive definite kernel, 75
probability law, 53
probability space
  classical, 47
  quantum, 48
product
  Hadamard, 77
  Schur, 77

quantum
  Itô table, 86
  optics, 128
  probability space, 48
  random variable, 50, 55
  stochastic calculus, 75
  stochastic differential equation, 224
  stochastic integral, 83
  white noise calculus, 216
quasi-invariance, 183, 184

real Lie algebra, 11
resolvent, 52
Rodrigues' formula, 63

Schürmann triple, 134
  Gaussian, 140
Schur product, 77
second fundamental lemma, 86
second quantisation, 81
sequence model, 168, 169
Skorohod
  integral, 153, 192
  isometry, 154
smoothness of densities
  affine algebra, 222
  Wiener space, 217
Sobolev space, 217
special orthogonal Lie algebra, 21
spectral
  measure, 5
  theorem, 54
speed of light, 128
spin, 70
splitting lemma, 92, 95, 99, 101, 111
state, 49
Stieltjes inversion formula, 243
structure equation, 156
symmetric
  Fock space, 7, 78, 79
  tensor product, 6
symmetrisation operator, 79

tensor product, 247
total, 76
Touchard polynomial, 241
trace, 65, 77

ultraspherical polynomial, 233
universal enveloping algebra, 133

vacuum vector, 7
Vandermonde determinant, 232

Weyl calculus, 104, 106
  combinatorial, 106
  Lie-theoretic, 107
Weyl operator, 8
white noise, 212, 216
Wiener space, 155, 190
Wigner
  density, xvi
  distribution, 109
  function, 109, 114, 122
    group-theoretical, 124
Wigner–Ville function, 130
