
Lecture Notes in Mathematics 1538

Editors:
A. Dold, Heidelberg
F. Takens, Groningen
Springer
Berlin
Heidelberg
New York
Barcelona
Budapest
Hong Kong
London
Milan
Paris
Santa Clara
Singapore
Tokyo
Paul-André Meyer

Quantum Probability
for Probabilists
Second Edition

Springer
Author
Paul-André Meyer
Institut de Recherche Mathématique Avancée
Université Louis Pasteur
7, rue René Descartes
67084 Strasbourg-Cedex, France

Library of Congress Cataloging-in-Publication Data

Meyer, Paul André.
  Quantum probability for probabilists / Paul André Meyer. -- 2nd ed.
    p. cm. -- (Lecture notes in mathematics ; 1538)
  Includes bibliographical references and index.
  ISBN 3-540-60270-4 (pbk.)
  1. Probabilities. 2. Quantum theory. I. Title. II. Series:
Lecture notes in mathematics (Springer-Verlag) ; 1538.
QA3.L28 no. 1538 1995
[QC174.17.P68]
510 s--dc20
[530.1'2'015192]  95-37123
CIP

The present volume is a corrected second edition of the previously published first
edition (ISBN 3-540-56476-4).

Mathematics Subject Classification (1991):


Primary: 81S25
Secondary: 60H05, 60H10, 60J55, 60J60, 46LXX

ISBN 3-540-60270-4 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, re-use
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other
way, and storage in data banks. Duplication of this publication or parts thereof is
permitted only under the provisions of the German Copyright Law of September 9,
1965, in its current version, and permission for use must always be obtained from
Springer-Verlag. Violations are liable for prosecution under the German Copyright
Law.
© Springer-Verlag Berlin Heidelberg 1995
Printed in Germany
Typesetting: Camera-ready TEX output by the author
SPIN: 10479489 46/3142-543210 - Printed on acid-free paper
Introduction

These notes contain all the material accumulated over six years in Strasbourg to
teach "Quantum Probability" to myself and to an audience of commutative probabilists.
The text, a first version of which appeared in successive volumes of the Séminaire
de Probabilités, has been augmented and carefully rewritten, and translated into
international English. Still, it remains true "Lecture Notes" material, and I have resisted
suggestions to publish it as a monograph. Being a non-specialist, I find it important
to keep the moderate right to error one has in lectures. The origin of the text also
explains the addition "for probabilists" in the title : though much of the material is
accessible to the general public, I did not care to redefine Brownian motion or the Ito
integral.

More precisely than "Quantum Probability", the main topic is "Quantum Stochastic
Calculus", a field which has recently got official recognition as 81S25 in the Math.
Reviews classification. I find it attractive for two reasons. First, it uses the same language
as quantum physics, and does have some relations with it, though possibly not with the
kind of fundamental physics mathematicians are fond of. Secondly, it is a domain where
one's experience with classical stochastic calculus is really profitable. I use as much of
the classical theory as I can in the motivations and the proofs. I have tried to prepare
the reader to make his way into a literature which is often hermetic, as it uses physics
language and a variety of notations. I have therefore devoted much care to comparing the
different notation systems (adding possibly to the general confusion with my personal
habits).
It is often enlightening to interpret standard probability in a non commutative
language, and the interaction has already produced some interesting commutative
benefits, among which a better understanding of classical stochastic flows, of some
parts of Wiener space analysis, and above all of Wiener chaos expansions, a difficult
and puzzling topic which has been renewed through Emery's discovery of the chaotic
representation property of the Azéma martingales, and Biane's similar proof for finite
Markov chains.
Anyone wishing to work in this field should consult the excellent book on Quantum
Probability by K.R. Parthasarathy, An Introduction to Quantum Stochastic Calculus, as
well as the seven volumes of seminars on QP edited by L. Accardi and W. von Waldenfels.
It has been impossible to avoid a large overlap with Parthasarathy's monograph, all the
more so since I have myself learnt the subject from Hudson and Parthasarathy. However,
I have stressed different topics, for example multiplication formulas and Maassen's kernel
approach.
Our main concern being stochastic calculus on Fock space, we could not include the
independent fermion approach of Barnett, Streater and Wilde, or the abstract theory
of stochastic integration with respect to general "quantum martingales" (Barnett and
Wilde; Accardi, Fagnola and Quaegebeur). This is unfair for historical reasons and
unfortunate, since much of this parallel material is very attractive, and in need of
systematic exposition. Other notable omissions are stopping times (Barnett and Wilde,
Parthasarathy and Sinha), and the recent developments on "free noise" (Speicher).
But entire fields are also absent from these notes : the functional analytic aspects of
the dilation problem, non-commutative ergodic theory, and the discussion of concrete
Langevin equations from quantum physics.

These notes also appear at a crucial time : in less than one year, there has been
impressive progress in the understanding of the analytic background of QP, and the
non-commutative methods for the construction of stochastic processes are no longer
pale copies of their probabilistic analogues. This progress has been taken into account,
but only in part, as it would have been unreasonable to include a large quantity of still
undigested (and unpublished) results.
A good part of my pleasure with QP I owe to the openmindedness of my colleagues,
who behaved with patience and kindness towards a beginner. Let me mention with
special gratitude, among many others, the names of L. Accardi, R.L. Hudson, J. Lewis,
R. Streater, K.R. Parthasarathy, W. von Waldenfels. I owe also special thanks to
S. Attal, P.D.F. Ion, Y.Z. Hu, R.L. Hudson, S. Paycha for their comments on the
manuscript, which led to the correction of a large number of errors and obscurities.

P.A. Meyer, October 1992

Note on the second printing. A few misprints have been corrected, as well as an
important omission in the statement of Dynkin's formula (Appendix 5). Above all, I have
made some additions to Chapter I (developing more completely the point of view that
a random variable is a homomorphism between algebras), to Appendix 2 (including the
recent probabilistic interpretation, due to Bhat and Parthasarathy, of the Stinespring
construction as the quantum analogue of Markov dependence), and I have added a new
Appendix containing recent results on stochastic integration.
December 1994
Table of Contents
CHAPTER I : NON-COMMUTATIVE PROBABILITY

1. Basic definitions ...................................................................... 1
Random variables and states in a complex algebra (1). An example (2). Unitary groups (3). Von
Neumann's axiomatization (4). Selfadjoint operators and Stone's theorem (5). Criteria for selfadjointness :
Nelson's analytic vectors theorem (6). Spectral measures and integrals (7). Spectral representation
of s.a. operators (8). Elementary quantum mechanics (9). Quantization of a probabilistic
situation (10). Back to algebras (11).

CHAPTER II : SPIN

1. Two level systems ..................................................................... 13
Two dimensional Hilbert space (1). Description of all states and observables (2). Creation and
annihilation operators (3). Elementary Weyl operators (4).
2. Commuting spin systems ................................................................ 16
Commuting spins and toy Fock space (1). Creation and annihilation operators (2). Representation
of all operators on toy Fock space (3). Adapted operators (4). The three basic "martingales" and
predictable representation (5). Discrete coherent vectors (6).
3. A de Moivre-Laplace theorem ........................................................... 21
Preliminary computations (1). Convergence to the harmonic oscillator creation/annihilation pair
(2). Including the number operator (3).
4. Angular momentum ...................................................................... 27
Angular momentum commutation relations (1). Structure of irreducible representations (2). Additional
results (3). Addition of angular momenta (4). Structure of the Bernoulli case (5). A commutative
process of spins (6).
5. Anticommuting spin systems ............................................................ 36
Fermion creation/annihilation operators (1). Jordan-Wigner transformation on toy Fock space (2).
Some computations (3). Clifford algebras (4-5). Basis free definition (6). Grassmann and Wick
product (7). Linear spaces of Bernoulli r.v.'s (8). Automorphisms of the Clifford structure (9).
Integrals (10).

CHAPTER III: THE HARMONIC OSCILLATOR

1. Schrödinger's model ................................................................... 43
The canonical commutation relation (1). Schrödinger's model (2). Weyl's commutation relations
(3). Gaussian states of the canonical pair (4). Uncertainty relations and minimal uncertainty (5).
Uniqueness of the Weyl commutation relation : the Stone-von Neumann theorem (6-7). Application
to the Plancherel theorem (8). Application to reducible canonical pairs and general gaussian laws
(9).

2. Creation and annihilation operators ................................................... 51
Dirac's approach for the harmonic oscillator (1). The real gaussian model of the canonical pair (2).
Eigenvectors of the harmonic oscillator (3). Weyl operators and coherent vectors (4). The complex
gaussian model (5). Classical states (6).

CHAPTER IV : FOCK SPACE (1)

1. Basic definitions ..................................................................... 57
Symmetric and antisymmetric tensor products, norms, bases (1). Boson and fermion Fock spaces (2).
Exponential or coherent vectors (3). Creation and annihilation operators (4). Canonical commutation
(CCR) and anticommutation (CAR) relations (5). Second quantization of an operator (6). Some
useful formulas (7). Creation/annihilation operators on the full Fock space (8). Weyl operators (9).
A computation of generators (10). Exponentials of simple operators (11).

2. Fock space and multiple integrals ..................................................... 68
Wiener-Ito multiple integrals (1). Extensions : square integrable martingales and the chaotic
representation property (2 a)), the symmetric description (2 b)) and the "shorthand notation" (2
c)). Interpretation of exponential vectors (3) and of the operators Pt, Qt. The number operator and
the Ornstein-Uhlenbeck semigroup (4). Compensated Poisson processes as processes of operators
on Fock space (5). The Ito integral in Fock space language (6).

3. Multiplication formulas ............................................................... 77
The Wiener multiplication formula : classical form (1) and shorthand form (2). Maassen's
sum/integral lemma, associativity, Wick and other products (3). Gaussian computations (4-5). The
Poisson multiplication formula (6). Relation with toy Fock space (7). Grassmann and Clifford multiplications
(8). Fermion brownian motion (9).

4. Maassen's kernels ..................................................................... 89
Kernels with two and three arguments (1). Examples of kernels (2). Composition of kernels (3).
Kernel computation on Gaussian spaces (4). The algebra of regular kernels : Maassen's theorem (5).
Fermion kernels (6). The Jordan-Wigner transformation (7). Other estimates on kernels (8).

CHAPTER V. MULTIPLE FOCK SPACES

1. Multidimensional stochastic integrals ................................................ 103
Fock space over L²(ℝ₊, K), multiple integrals, shorthand notation (1). Some product formulas (2).
Some non Fock representations of the CCR (3), their Weyl operators (4), Ito table and multiplication
formula (5). A remark on complex Brownian motion (6).

2. Number and exchange operators ........................................................ 112
Exchange operators, Evans' notation (1). Kernels on multiple Fock space (2). Composition of
kernels (3). Uniqueness of kernel representations (4). Operator valued kernels (5). Parthasarathy's
construction of Lévy processes (6).

3. Basis free notation in multiple Fock space ........................................... 121
Imitating simple Fock space (1). Belavkin's notation (2). Another approach to kernels (3).

CHAPTER VI. STOCHASTIC CALCULUS ON FOCK SPACE

1. Operator stochastic integrals ........................................................ 125
Initial space, adapted operators (1). Integration of an adapted process, informal description, simple
Fock space (2-4). Computation on exponential vectors (5). Basic inequalities (6-7). Extension to
multiple Fock spaces (8). The general formulas (9). Uniqueness of the representation as stochastic
integrals (10). Shift operators on Fock space (11). Cocycles (12). Some examples (13).

2. Stochastic calculus with kernels ..................................................... 145
Algebraic stochastic integration : the Skorohod integral (1). Extension to kernels : regular and
smooth processes (2). Another definition of the stochastic integral (3).

3. Stochastic differential equations and flows .......................................... 149
Formal study : Right and left exponential equations (1). The Picard method (2). Conditions for
isometry and unitarity (3). Generator of the evolution of operators (4). Translation of the classical
s.d.e. theory into algebraic language (5). Discrete flows (6). The algebra of structure equations (7).
Discrete Markov chains as quantum flows (8-10). Continuous finite Markov chains as quantum flows
(11). Classical s.d.e.'s as quantum flows (12-13). Introduction to Azéma's martingales and their
flows (14).

4. S.d.e.'s and flows : rigorous results ................................................ 171
The right equation with infinite multiplicity and bounded coefficients (1). The Mohari-Sinha
conditions (2). Introduction to the unbounded case (3). Contractivity condition (4). A functional
analytic lemma (5). The right equation as a multiplicative integral (6-8). Time reversal (9) and
relation between the right and left equations (10). Existence of quantum flows : infinite multiplicity
and bounded coefficients (11-12). The homomorphism property (13). Contractivity (14). A simple
uniqueness result (15). The case of an abelian initial algebra (16). Mohari's uniqueness theorem (17).
A sketch of Fagnola's existence theorem (18).

CHAPTER VII. INDEPENDENT INCREMENTS


§1. Coalgebras and bialgebras ........................................................... 195
Definition of coalgebras and bialgebras (1). Examples (2). Convolution of linear functionals (3). The
fundamental theorem on coalgebras (4). Processes with independent increments (5). Generators of
convolution semigroups and Schürmann triples (6). The Schürmann triple of a Lévy process (7).
Relation with Belavkin's notation (8).

§2. Construction of the process ......................................................... 204
Construction on a coalgebra (1). The bialgebra case (2). Application to Azéma martingales (3).
Conditionally positive functions on semi-groups (4-5).
APPENDIX 1 : FUNCTIONAL ANALYSIS ........................................................ 209
Hilbert-Schmidt operators (1). Trace class operators (2). Duality properties (3). Weak convergence
properties (4). Weak topologies for operators (5). Tensor products of Hilbert spaces (6-7).

APPENDIX 2 : CONDITIONING AND KERNELS ................................................... 218
Conditioning : discrete case (1). Conditioning : continuous case (2). Example of the canonical pair
(3). Multiplicity theory (4). Transition kernels and completely positive mappings (5-7). Quantum
Markov processes (8).
APPENDIX 3 : TWO EVENTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231

APPENDIX 4 : C*-ALGEBRAS

1. Elementary theory .................................................................... 234
Definition of C*-algebras (1). Application of spectral theory (2). Some elementary properties (3).
Positive elements (4). Symbolic calculus for s.a. elements (5). Applications (6). Characterization of
positive elements (7). A few inequalities (8).

2. States on C*-algebras ................................................................ 239
Existence of many states (1). Representations and the GNS theorem (2-3). Examples from toy Fock
space theory (4). Quotient algebras and approximate units (5).

3. Von Neumann algebras ................................................................. 243
Weak topologies and normal states (1). Von Neumann's bicommutant theorem (2-3). Kaplansky's
density theorem (4). The predual (5). Normality and order continuity (6). About integration
theory (7). Measures with bounded density (8). The linear Radon-Nikodym theorem (9). The KMS
condition (10). Entire vectors (11).

4. The Tomita-Takesaki theory ........................................................... 254
Elementary geometric properties (1). The main operators (2-3). Interpretation of the adjoint (4).
The modular property (5). Using the linear RN theorem (6). The main computation (7). The three
main theorems (8). Additional results (9). Examples (10).
APPENDIX 5 : LOCAL TIMES AND FOCK SPACE

1. Dynkin's formula ..................................................................... 264
Symmetric Markov semigroups and processes (1). Dynkin's formula (2). Sketch of the Marcus-Rosen
approach to the continuity of local times (3).

2. Le Jan's "supersymmetric" approach ................................................... 269
Notations of complex Brownian motion (1). Computing the Wiener product (2). Extension to other
bilinear forms (~-products) (3). Stratonovich integral and trace (4). Expectation of the exponential
of an element of the second chaos (5). Return to Clifford algebra : antisymmetric ~-products (6).
Exponential formula in the antisymmetric case (7). Supersymmetric Fock space : the Wick and
Wiener products (8). Properties of the Wiener product (9). Applications to local times (sketch)
(10). Proof of associativity of the ~-product (11).
APPENDIX 6 : MORE ON STOCHASTIC INTEGRATION

1. Everywhere defined stochastic integrals .............................................. 286
Semimartingales of vectors and operators (1). An algebra of operator semimartingales (2).

2. Regular semimartingales .............................................................. 290
Non-commutative regular semimartingales (1). A lifting theorem (2). Proof of the representation
theorem (3-5).

3. Representable interpretations of Fock space .......................................... 296
Probabilistic interpretations of Fock space (1-2).
REFERENCES .............................................................................. 300

INDEX ....................................................................................... 308

INDEX OF NOTATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310


Chapter I

Non-Commutative Probability

These notes are a revised version of notes in French published in successive volumes
(XX to XXII) of the Séminaire de Probabilités, and we will not offer again a slow
introduction to this topic, with detailed justifications of the basic definitions. Non-commutative
probability is the kind of probability that goes with the non-commutative
world of quantum physics, and as such it is a natural domain of interest for the
mathematician. We are eager to get to the heart of the subject as soon as possible.
The reader who thinks we are too quick may be referred to the original French version,
or (better) to the book of Parthasarathy [Par1].

§1. BASIC DEFINITIONS

1 The shortest way into quantum probability is to use algebras. By this word we always
understand complex algebras A with involution * and unit I. Again we offer no apology
for the role of complex numbers in a probabilistic setup. We recall that (ab)* = b*a*,
I* = I, and set a few elementary definitions.

An element a ∈ A being given, we call a* its adjoint, and we say that a is hermitian
or real if a = a* (we avoid at the present time the word "selfadjoint"). For any b ∈ A,
b*b and bb* are real (as well as b + b* and i(b - b*)).

A probability law or a state on A is a linear mapping p from A to ℂ, which is real
in the sense that

    p(a*) = \overline{p(a)}  for all a ∈ A,

is positive in the sense that p(b*b) ≥ 0 for all b ∈ A, and has mass 1 (p(I) = 1).
One can easily see that positivity implies reality in *-algebras with unit : we do not
try to minimize the number of axioms.

Real elements a will represent real random variables, and we call p(a) the expectation
of a in the state p -- but arbitrary non-real elements do not represent complex
random variables : these correspond to elements a which commute with their adjoint
(normal elements).

Given a real element a, a^k is also real, and we put r_k = p(a^k). We have the following
easy lemma.

LEMMA. The sequence (r_k) is a moment sequence.

PROOF. Let b = \sum_n \lambda_n a^n, where the \lambda_n are arbitrary complex numbers, only a finite
number of which are different from 0. Writing that p(b*b) ≥ 0, we find that

    \sum_{m,n} \overline{\lambda}_m \lambda_n \, r_{m+n} ≥ 0.

According to Hamburger's answer to the classical moment problem on the line (see
[BCR] p. 186), there exists at least one probability measure π on the line such that

    r_k = \int_{-\infty}^{\infty} x^k \, π(dx).

If the moment problem is determined (the classical sufficient condition implying determinacy
is the analyticity near 0 of \sum_k r_k z^k / k!), we call π the law of the random
variable a. At the present time, this is merely a name.
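The positivity in the proof can be checked numerically : it says that every finite Hankel matrix H[m, n] = r_{m+n} built from the moments is positive semidefinite, which is exactly Hamburger's criterion. The sketch below is my own illustration (the helper names and the Gaussian test sequence are not from the text):

```python
import numpy as np
from math import factorial

# Hamburger's condition: the Hankel matrix H[m, n] = r_{m+n} is positive
# semidefinite for every finite truncation of the moment sequence.
def hankel_psd(r, tol=1e-9):
    """Check PSD of the Hankel matrix built from moments r_0, ..., r_{2K}."""
    K = (len(r) - 1) // 2
    H = np.array([[r[m + n] for n in range(K + 1)] for m in range(K + 1)],
                 dtype=float)
    return np.linalg.eigvalsh(H).min() >= -tol

# A genuine moment sequence: r_k = E[X^k] for X standard gaussian,
# i.e. 0 for odd k and (k-1)!! = k! / (2^{k/2} (k/2)!) for even k.
def gauss_moment(k):
    return 0.0 if k % 2 else factorial(k) / (2 ** (k // 2) * factorial(k // 2))

assert hankel_psd([gauss_moment(k) for k in range(9)])

# A sequence with r_0 = 1 but r_2 < 0 cannot be a moment sequence.
assert not hankel_psd([1.0, 0.0, -1.0, 0.0, 1.0])
```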
2 Let us give at once an interesting example. We consider the Hilbert space H = ℓ²
of all square summable sequences (a_0, a_1, ...), the shift operator S on this space, and
its adjoint S*

    S(a_0, a_1, ...) = (0, a_0, a_1, ...),   S*(a_0, a_1, ...) = (a_1, a_2, ...),

and take for A the complex algebra generated by S and S*. Since S*S = I, the identity
need not be added. This relation implies

    S^{*p} S^q = S^{*(p-q)} if p ≥ q,   = S^{q-p} if p ≤ q.

Then it is easily seen that A has a basis, consisting of all operators S^p S^{*q}. The
multiplication table is left to the reader, and the involution is given by (S^p S^{*q})* =
S^q S^{*p}. Thus we have described A as an abstract *-algebra.

Let 1, the vacuum vector, be the sequence (1, 0, 0, ...) ∈ H. In the language that will
be ours from the following chapter on, we may call S = A⁺ the creation operator, and
S* = A⁻ the annihilation operator. The basis elements are then "normally ordered",
i.e. they have the creators to the left of the annihilators.

It is well known, and easy to see, that the mapping p(A) = < 1, A1 > is a state on
the algebra of all bounded operators on ℓ². Its restriction to A is given by

    p(S^p S^{*q}) = 1 if p = q = 0,   0 otherwise.

We call p the vacuum state on A. We ask for the law under p of the hermitian element
Q = (S + S*)/2.
Let us compute the moments p((S + S*)^n). Expanding this product we find a sum
of monomials, which may be coded by words of length n of symbols + (for S) and
- (for S*). The n-th moment of S + S* is the number of such words that code the
identity. Consider the effect of such a monomial on a sequence (a_i) ∈ ℓ². Each time a
+ occurs the sequence is moved to the right and a zero is added to its left, and for a -
the sequence is moved to the left, its first element being discarded -- and if it was one of
the original a_i's, its value is lost forever. Since at the end we must get back the original
sequence, the balance of pluses and minuses must remain ≥ 0 all the time, and return
to 0 at the last step. It now appears that we are just counting Bernoulli paths which
return to 0 at time n after spending all their time on the positive side. The number of
such paths is well known to be 0 for odd n, and \frac{1}{k+1} \binom{2k}{k} for even n = 2k. To get the
moments of Q we must divide by 2^n.

The Arcsine law on [-1, 1], i.e. the projection of the uniform distribution on the
unit half-circle, has similar moments, namely 2^{-2k} \binom{2k}{k} for n = 2k. To insert the factor
\frac{1}{k+1} we must project the uniform distribution on the half-disk instead of the half-circle.
This law is called (somewhat misleadingly) "Wigner's semicircle law", and has been
found (Voiculescu [Voi], Speicher [Spe]) to play the role of the Gauss law in "maximally
non-commutative" situations.
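The path-counting computation above is easy to verify numerically. The sketch below (mine, not from the text) truncates S to a finite matrix; this changes nothing for low-order moments, since a word of length n ≤ 8 never raises the vacuum vector above level 8. It then checks p(Q^n) against the semicircle moments C_k/4^k:

```python
import numpy as np
from math import comb

# Truncated shift operator on l^2: S e_i = e_{i+1}; exact for moments of
# order up to 8 since paths starting at level 0 stay below N = 12.
N = 12
S = np.zeros((N, N))
for i in range(N - 1):
    S[i + 1, i] = 1.0

Q = (S + S.T) / 2                     # the hermitian element Q = (S + S*)/2

def vacuum_moment(n):
    """p(Q^n) = <1, Q^n 1> with 1 = e_0 the vacuum vector."""
    return np.linalg.matrix_power(Q, n)[0, 0]

# Semicircle moments: 0 for odd n, and C_k / 4^k for n = 2k,
# where C_k = binom(2k, k) / (k + 1) is the Catalan number.
for n in range(9):
    if n % 2:
        expected = 0.0
    else:
        k = n // 2
        expected = comb(2 * k, k) / (k + 1) / 4 ** k
    assert abs(vacuum_moment(n) - expected) < 1e-12
```

The matrix entry (Q^n)[0, 0] is precisely the sum over words of +/- that bring e_0 back to e_0, i.e. the Dyck-path count of the text.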
3 This approach using moments is important, but it has considerable shortcomings
when one needs to consider more than one "random variable" at a time.
It is a general principle in non-commutative probability that only commuting random
variables have joint distributions in arbitrary states. For simplicity we restrict our
discussion to the case of two commuting elements a, b of A. We copy the preceding
definitions and put r_{kl} = p(a^k b^l). Then setting c = \sum_{k,l} \lambda_{kl} a^k b^l, the condition
p(c*c) ≥ 0 again translates into a positive definiteness property of the sequence (r_{kl}).
However, our hope that it will lead us to a measure π (possibly not unique) on the
plane such that r_{kl} = \int x^k y^l \, π(dx, dy) is frustrated : in more than one dimension,
moment sequences are not characterized by the natural (or any other known) positive
definiteness condition. For details on this subject, see Berg-Christensen-Ressel [BCR]
and Nussbaum [Nus]. Another interesting reference is Woronowicz [Wor].

This does not prevent the moment method from being widely used in non-commutative
situations. Whenever the sequence (r_{kl}) happens to be the moment sequence of a unique
law on ℝ², it can safely be called the joint law of (a, b). The moment method has been
used efficiently to investigate central limit problems.
The way out of this general difficulty consists in replacing moments by Fourier
analysis. Then Bochner's theorem provides a positive definiteness condition that implies
existence and uniqueness of a probability law, and extends to several dimensions. Thus
we try to associate with a, or with a, b (or any number of commuting real elements of
A) functions of real variables s, t ...

    φ(s) = p(e^{isa}),   φ(s, t) = p(e^{i(sa+tb)}) ...

and to check that they are positive definite in the sense of Fourier analysis. However, defining
these exponentials requires summing the exponential series, while in the moment
approach no limits in the algebra were involved. Assuming A is a Banach algebra
would make things easy -- but, alas, far too easy, and too restrictive.

For simplicity let us deal with one single element a. If we can define in any way
U_s = e^{isa}, we expect these elements of A to satisfy the properties

    U_s U_t = U_{s+t},   U_0 = I,   (U_s)* = U_{-s},

implying that for every s we have U_s* U_s = I = U_s U_s*. Elements of A satisfying
this property are called unitary, and a family (U_s) of unitaries satisfying the first two
properties is called a (one parameter) unitary group.

Putting φ(t) = p(U_t), c = \sum_i \lambda_i U_{t_i} and writing that p(c*c) ≥ 0 yields the positive
type condition \sum_{ij} \overline{\lambda}_i \lambda_j \, φ(t_j - t_i) ≥ 0. If φ is continuous, Bochner's theorem shows
there is a unique probability law μ on ℝ such that φ(t) = \int e^{itx} μ(dx). If the semigroup
U_t is given as e^{ita}, we may call μ the law of a, but in most cases the generator a of
the semigroup does not belong to the algebra itself, but to a kind of closure of A. This
is the place where the concrete interpretation of A as an algebra of operators becomes
essential.
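As a small finite-dimensional illustration of the positive type condition (my own sketch, with an arbitrary hermitian matrix standing in for a and names of my choosing), one can compute φ(t) = < ω, e^{ita} ω > by diagonalization and verify the group, adjoint and positive definiteness properties that Bochner's theorem requires:

```python
import numpy as np

# Finite-dimensional stand-in: a hermitian matrix a and a unit vector omega
# define a state p(A) = <omega, A omega>; U_t = e^{ita} via diagonalization.
rng = np.random.default_rng(0)
n = 6
a = rng.normal(size=(n, n))
a = (a + a.T) / 2                       # hermitian element
omega = np.zeros(n)
omega[0] = 1.0                          # the state vector

lam, V = np.linalg.eigh(a)              # a = V diag(lam) V*
weights = np.abs(V.T @ omega) ** 2      # spectral weights |<v_k, omega>|^2

def phi(t):
    """phi(t) = p(U_t) = <omega, e^{ita} omega>."""
    return np.sum(weights * np.exp(1j * t * lam))

# Unit and adjoint: U_0 = I and (U_s)* = U_{-s}.
assert abs(phi(0.0) - 1.0) < 1e-12
assert abs(phi(-0.7) - np.conj(phi(0.7))) < 1e-12

# Positive type: the matrix (phi(t_j - t_i)) is positive semidefinite, so by
# Bochner's theorem phi is the characteristic function of a probability law
# (here the discrete law sum_k weights[k] * delta_{lam[k]}).
t = np.linspace(-2.0, 2.0, 15)
M = np.array([[phi(tj - ti) for tj in t] for ti in t])
assert np.linalg.eigvalsh(M).min() > -1e-10
```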

Von Neumann's axiomatization

4 We are going now to present the essentials of the standard language of quantum
probability, as it has been used since the times of von Neumann. We refer for proofs
to Parthasarathy [Par1] or to standard books on operator theory, the most convenient
being possibly Reed and Simon [ReS].

We use the physicists' convention that the hermitian scalar product < , > of a complex
Hilbert space is linear with respect to its second variable.

We start with the algebra L of all bounded operators on a complex (separable)
Hilbert space H. The real elements of L are called bounded hermitian, symmetric
or selfadjoint operators, or we may call them (real) bounded random variables, and
physicists call them bounded observables. The algebra A of the preceding sections may
be a subalgebra of L, but for the moment we do not insist on restricting the algebra.
Complex (bounded) random variables correspond to normal operators in L, i.e. to
operators a which commute with their adjoint a*.
Intuitively speaking, the range of values of a random variable a, whether real or
complex, is the spectrum of a, i.e. the set of all ~ E (U such that a - h i is not a
1-1 mapping of ~ onto itself. For instance, hermitian elements assume real values,
tmitaries take their values in the unit circle, and projections, i.e. selfadjoint elements
p such that p2 __ p, assume only the values 0 and 1. Thus the projection on a closed
subspace E corresponds to the indicator function of E in classical probability (the
events of classical probability may thus be identified with the closed subspaces of 7-/).
The classical boolean operations on events have their analogues here : the intersection
of a family Ei of "events" is their usual intersection as subspaces, their "union" is
the closed subspace generated by UiEi, and the "complement" of an event E is its
orthogonal subspace E l . However, these operations are of limited interest unless they
are performed on events whose associated projections all commute.
Most of classical probability deals with completely additive measures. Here too,
the interesting states (called normal states) are defined by a continuity property
along monotone sequences of events. However, it is not necessary to go into abstract
discussions, because all normal states p can be explicitly described as

(4.1)   p(A) = Tr(AW) = Tr(WA)   (A ∈ L)

where Tr is the trace and W is a positive operator such that Tr(W) = 1, sometimes
called the density operator of the state (see Appendix 1). We shall have little use for
such general states, which are most important in quantum statistical mechanics. Our
concrete examples of states are mostly pure states of the form

(4.2)   p(A) = <φ, Aφ>

Basic definitions 5

where φ is a given unit vector, which is determined by p up to a factor of modulus 1
("phase factor"). This corresponds in (4.1) to the case where W is the orthogonal
projection on the one-dimensional subspace generated by φ. Non-pure states are often
called mixed states.
The pure state associated with the unit vector φ is simply called "the state φ"
unless some confusion can arise. The set of all states is convex, compact in a suitable
topology, and the pure states are its extreme points. Hence they correspond to the point
masses ε_x in classical probability theory, and are the most "deterministic" of all states.
However, and contrary to the classical situation, an observable which does not admit φ
as an eigenvector has a non-degenerate law in the pure state φ.

Selfadjoint operators

5 The algebra L is a (non-separable) Banach algebra in the operator norm, but the
introduction of the Hilbert space H allows us to use also the more useful strong and
weak operator topologies on L. In particular, the most interesting unitary groups (U_t)
are the strongly continuous ones, which generally have unbounded generators (in fact,
the generator is bounded if and only if the group is norm-continuous). Let us recall the
precise definition of the generator

A = (1/i) (d/dt) U_t |_{t=0} .

The domain of A consists exactly of those f ∈ H such that lim_{t→0} (U_t f − f)/t = iAf
exists in the (norm) topology of H. This turns out to be a dense subspace of H, and A
on this domain is a closed operator, which uniquely determines the one-parameter group
(U_t). These results are rather easy, and can be deduced from the general Hille-Yosida
theory for semigroups. One formally writes U_t = e^{itA}.
A densely defined operator A, closed or simply closable, has a densely defined and
closed adjoint A*, whose domain consists exactly of all f ∈ H such that g ↦ <f, Ag>
on D(A) extends to a continuous linear functional on H, and whose value is then defined
uniquely by the relation

(5.1)   <A*f, g> = <f, Ag>   for g ∈ D(A).

The following result is Stone's theorem. It describes completely the "random variables"
in the unitary group approach of subsection 3.
THEOREM 1. An operator A is the generator of a strongly continuous unitary group if
and only if it is densely defined, closed and selfadjoint, i.e. A = A*, including equality
of domains.
The classical examples on H = L²(ℝ) are the unitary group of translations,
U_t f(x) = f(x − t), whose generator is i d/dx, and the unitary group U_t f(x) = e^{itx} f(x),
whose generator is multiplication by x.
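A discretized sketch of the second classical example, not from the text (Python with numpy assumed): on a grid replacing L²(ℝ), the group U_t f(x) = e^{itx} f(x) has difference quotients converging to i x f, i.e. the generator is multiplication by x.

```python
import numpy as np

# Grid approximation of L^2(R); f is a smooth vector in the domain of the generator.
x = np.linspace(-5, 5, 2001)
f = np.exp(-x ** 2)

def U(t):
    """The multiplication unitary group U_t f(x) = e^{itx} f(x)."""
    return np.exp(1j * t * x) * f

t = 1e-6
diff_quot = (U(t) - f) / t       # should be close to i x f for small t
```

The error of the difference quotient is of order t·x², so it is uniformly small on the bounded grid; on all of ℝ the generator is genuinely unbounded.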
It is usually rather difficult to prove that a given operator is selfadjoint, except in the
case of an everywhere defined, bounded operator A, in which case selfadjointness reduces
to the symmetry condition <f, Ag> = <Af, g>. The following happy situation
sometimes occurs in probability, in relation with Markov processes :

THEOREM 2. Let (P_t) be a strongly continuous semigroup of positive, symmetric
contractions of H, and let A be its generator (Hille-Yosida theory). Then −A is
selfadjoint and positive.
The proof of this result is rather easy. The classical example is −A = −Δ, also
called the "free Hamiltonian", which corresponds to a Brownian motion semigroup.
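Not part of the text: a finite sketch of Theorem 2 (Python with numpy and scipy assumed), replacing the Laplacian by the discrete Laplacian L of a cycle, so that P_t = e^{tL} is a symmetric Markov semigroup and −L is selfadjoint and positive.

```python
import numpy as np
from scipy.linalg import expm

# Discrete Laplacian on a cycle of n points (a symmetric Markov generator).
n = 8
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1          # periodic boundary conditions

Pt = expm(0.5 * L)               # the semigroup at time t = 0.5
```

The assertions below check the three properties named in Theorem 2: P_t is symmetric, positivity-preserving (all entries nonnegative), a contraction, and −L has nonnegative spectrum.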

Criteria for selfadjointness

6 The domain of a selfadjoint operator generally cannot be given in an explicit way.
The operator is usually defined as an extension of some symmetric, non-closed operator
A on a dense domain D consisting of "nice" vectors. Symmetry easily implies that A
is closable. Then two problems arise :
- Is the closure Ā of A selfadjoint ? (If the answer is yes, A is said to be essentially
selfadjoint on D, and D is called a core for Ā).
- If not, has A any selfadjoint extensions ?
We shall deal here with the first problem only. The most important practical criterion
is Nelson's theorem on analytic vectors ([ReS], Theorem X.39). It is implicit that D is
dense and A is defined on D and symmetric.
THEOREM 3. Assume the domain D is stable under A, and there exists a dense set of
vectors f ∈ D such that the exponential series

(6.1)   Σ_n (z^n / n!) A^n f

has a non-zero radius of convergence. Then A is essentially selfadjoint on D. Besides
that, whenever the series (6.1) converges for z = it, its sum gives the correct value for
the unitary operator e^{itA} acting on f.
Conversely, it can be shown that any strongly continuous unitary group has a dense
set of entire vectors, i.e. of vectors for which the exponential series is convergent for all
values of z.
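Not part of the text: in finite dimension every vector is analytic for a symmetric matrix, so the series (6.1) with z = it can be summed directly and compared with e^{itA} f. A sketch (Python with numpy and scipy assumed):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2                 # a bounded symmetric operator
f = rng.standard_normal(5)
t = 0.9

# Partial sums of the exponential series sum_n (it)^n/n! A^n f.
series = np.zeros(5, dtype=complex)
term = f.astype(complex)
for k in range(60):
    series += term
    term = (1j * t / (k + 1)) * (A @ term)
```

Sixty terms are far more than needed here; the point is only that the analytic-vector series reproduces the action of the unitary group.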
The second result is the spectral criterion for selfadjointness. It is very important
for quantum mechanics, but we will not use it. See [ReS], Theorem X.1.
THEOREM 4. The operator A is essentially self-adjoint on D if and only if there exist
two numbers whose imaginary parts are of opposite signs (in the loose sense : one real
number is enough), and which are not eigenvalues of A*.

Spectral measures and integrals

7 Let (E, ℰ) be a measurable space. By a spectral measure on E we mean a mapping J
from ℰ to the set of projections on the Hilbert space H, such that J(∅) = 0, J(E) = I,
and which is completely additive; as usual, this property splits into additivity and
continuity. The first means that J(A ∪ B) = J(A) + J(B) if A ∩ B = ∅, with the
important remark that the left side is a projection by definition, and the sum of two
projections is a projection if and only if they are orthogonal, and in particular commute.
As for continuity, it can be expressed by the requirement that A_n ↓ ∅ should imply
J(A_n) → 0 in the strong topology.

Given a real (finite valued) measurable function f on E, one can define a spectral
integral ∫_E f(x) J(dx), which is a selfadjoint operator on H, bounded if f is bounded.
Instead of trying to develop an abstract theory, let us assume that (E, ℰ) is a very nice
(= Lusin) measurable space 1. It is known that all such spaces are isomorphic either to
a finite set, or a countable one, or to the real line ℝ. Thus we may assume that E is
the real line, with its Borel σ-field ℰ. Then to describe the spectral measure we need
only know the operators E_t = J(]−∞, t]), a right-continuous and increasing family
of projections, such that E_∞ = I, E_{−∞} = 0. Such a family is called a resolution of
identity.
Denote by H_t the range of E_t ; then H_t is an increasing, right continuous family of
closed spaces in H, which plays here the role of the filtration F_t in martingale theory.
For any x ∈ H, the curve x(t) = E_t x plays the role of the martingale E[x | F_t] : we
have x(t) ∈ H_t for all t, and for s < t the increment x(t) − x(s) is orthogonal to H_s.
The bounded increasing function <x>_t = || x(t) ||² plays the role of the angle bracket in
martingale theory, in the sense that

<x>_t − <x>_s = || x(t) − x(s) ||²   for s < t.

And then we may define a "stochastic integral" ∫ f(s) dx(s) under the hypothesis
∫ |f(s)|² d<x>_s < ∞. The construction is so close to (and simpler than) the Ito
construction of stochastic integrals that no details are necessary for probabilists. We
note the useful formula

< ∫ f(s) dx(s), ∫ g(s) dy(s) > = ∫ f(s) g(s) d<x(s), y(s)> .

The right side has the following meaning : the bracket (ordinary scalar product in H) is
a function of s of bounded variation, w.r.t. which f(s)g(s) is integrable. This formula
corresponds in stochastic integration to the isometry property of the Ito integral.
These were vector integrals. To define F = ∫ f(s) dE_s as an operator stochastic
integral, we take the space of all x ∈ H such that ∫ |f(s)|² d<x>_s < ∞ as domain of
F, and on this domain F x = ∫ f(s) dx(s). Then we have :
THEOREM 5. For any real (finite valued) function f on the line, the spectral integral
F = ∫ f(s) dE_s is a densely defined selfadjoint operator. If |f| is bounded by M, so is
the operator norm of F.
Returning now to the original space E, it is not difficult to show that the spectral
integral does not depend on the choice of the isomorphism between E and ℝ, and then
the theory is extended to (almost) arbitrary state spaces. Spectral measures on E are
the non-commutative analogues of classical random variables taking values in E.
For instance, let Z be a classical r.v. on a probability space (Ω, F, P), taking values
in E. We may associate with Z a spectral measure J on H = L²(Ω), such that
for A ∈ ℰ

J(A) is the operator of multiplication by I_{Z^{-1}(A)} ,

obviously a projection. Given a real valued r.v. f on E, the spectral integral
∫ f(x) J(dx) is multiplication by f ∘ Z.
1 All complete separable metric spaces are Lusin spaces. See Chapter III of [DeM1].
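Not part of the text: a finite sketch (Python with numpy assumed) of the spectral measure attached to a classical random variable Z, where J(A) is multiplication by the indicator of Z^{-1}(A) on L²(Ω).

```python
import numpy as np

# Hypothetical finite example: Omega = {0,...,5} with uniform P,
# Z(omega) = omega mod 3, taking values in E = {0, 1, 2}.
omega = np.arange(6)
Z = omega % 3

def J(A):
    """J(A) = multiplication by the indicator of Z^{-1}(A), as a 6x6 matrix."""
    return np.diag(np.isin(Z, list(A)).astype(float))

P = J({0, 2})

# The spectral integral of f is multiplication by f(Z).
f = lambda s: s ** 2 + 1.0
F = sum(f(s) * J({s}) for s in range(3))
```

The assertions check that each J(A) is a projection, that J is additive with J(E) = I, and that the spectral integral is indeed multiplication by f ∘ Z.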

Given a state p on H and a spectral measure J, the mapping A ↦ p(J(A)) is
an ordinary probability measure π on E, called the law of J in the state p. Spectral
measures behave much like ordinary random variables. For instance, given a measurable
mapping h : E → F, we define a spectral measure h ∘ J on F as B ↦ J(h^{-1}(B)),
whose law in the state p is the direct image h(π) in the classical sense.
In order to construct joint laws of random variables in this sense, the following
result is essential : given two (Lusin) measurable spaces E_1, E_2 and two commuting
spectral measures J_1, J_2, there is a unique spectral measure J on E_1 × E_2 such that
J(A × E_2) = J_1(A) for A ∈ ℰ_1 and J(E_1 × A) = J_2(A) for A ∈ ℰ_2. This is closely
related to the bimeasures theorem of classical probability (see [DeM1], III.74). The
theorem also applies to an infinite number of spectral measures, as a substitute of
Kolmogorov's theorem on the construction of stochastic processes.

S p e c t r a l r e p r e s e n t a t i o n of selfadjoint operators

8 We have mentioned two interpretations of real valued random variables : as self-
adjoint operators and as spectral measures on ℝ, the first one being more natural in
physics, and the second closer to probabilistic intuition. Here is von Neumann's theorem,
which draws a bridge between them.
THEOREM 6. For every selfadjoint operator A, there exists a unique resolution of identity
(E_t) on the line, such that

A = ∫ s dE_s .

Then, given any (real or complex) Borel function f on the line, the spectral integral
∫ f(s) dE_s is denoted by f(A).
An example of such integrals in the complex case is the unitary group

U_t = e^{itA} = ∫ e^{its} dE_s .

Then the formula

<φ, U_t φ> = ∫ e^{its} d<φ, E_s φ>

shows there is no contradiction between the two definitions of the law of A in the pure
state φ, by means of unitary groups and spectral measures.
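Not part of the text: a finite-dimensional sketch of Theorem 6 and of the displayed formula (Python with numpy assumed). For a Hermitian matrix, E_t is the sum of the eigenprojections with eigenvalue at most t, and ⟨φ, E_s φ⟩ jumps by |φ_k|² at each eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2
lam, V = np.linalg.eigh(A)
proj = [np.outer(V[:, k], V[:, k].conj()) for k in range(4)]

def E(t):
    """The resolution of identity: sum of eigenprojections with eigenvalue <= t."""
    return sum((proj[k] for k in range(4) if lam[k] <= t),
               np.zeros((4, 4), complex))

phi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
phi /= np.linalg.norm(phi)
t = 1.3
Ut_phi = V @ (np.exp(1j * t * lam) * (V.conj().T @ phi))
jumps = np.abs(V.conj().T @ phi) ** 2    # the jumps of s -> <phi, E_s phi>
```

The assertions verify A = ∫ s dE_s (here a finite sum over the jumps of E_s), E_t ↑ I, and the characteristic function identity ⟨φ, U_t φ⟩ = Σ_k e^{itλ_k} |φ_k|².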
Given any two real valued Borel functions f, g, g(A) is a selfadjoint operator, and
we have f(g(A)) = (f ∘ g)(A). For a complex function f, the adjoint of f(A) is f̄(A).
WARNING. The issue of defining the joint law of several commuting operators, which has
plagued the moment approach, has not become a trivial one. Except in the bounded case,
it is not easy to decide whether the spectral measures associated with several selfadjoint
operators do commute. In particular, it is not sufficient to check that they commute on
a dense common stable domain. The best general result seems to be an extension of the
analytic vectors theorem to the Lie algebras setup, also due to Nelson. However, the
commutation of the associated unitary groups is a (necessary and) sufficient condition,
and in all practical cases in these notes the unitary groups will be explicitly computable.

Elementary quantum mechanics

9 Let us show how the above language applies to the elementary interpretation of
quantum mechanics, in the case of a selfadjoint operator with discrete spectrum.
Physicists like the Dirac notation, in which a vector x ∈ H is denoted by |x>
(ket vector), the linear functional <y, ·> on H is denoted by <y| (bra vector), and
the value of the linear functional (bra) on the vector (ket) is the bracket <y | x> or
<y, x>. However, this explanation does not convey the peculiar kind of intuition carried
by Dirac's notation, which uses bras and kets "labelled" in a concrete way, and interprets
the scalar product <y | x> between labelled vectors as a "transition amplitude" from
the "state" labelled x, to the state labelled y, whose squared modulus is a "transition
probability". We shall see later examples of such a "labelling" (see the first pages
of Chapter II). Taking an orthonormal basis (e_k), and expanding the vector f as
Σ_k f_k e_k, we may also consider the ket vector |f> as a column matrix (f_k), the bra
vector <g| as a row matrix g* = (ḡ_k), and the bracket <g, f> is
the usual matrix product g*f. In this context the linearity of the scalar product w.r.t.
its second variable is very natural. A slightly unusual feature of this notation is its
tendency to write scalars on the right, as in the following formula for the representation
of a vector f in the o.n. basis (e_k)

|f> = Σ_k |e_k><e_k | f>   or   I = Σ_k |e_k><e_k| .
Let A be a selfadjoint operator. If the basis vectors e_k belong to the domain of A,
the brackets <e_k, A e_j> (Dirac : <e_k | A | e_j>) are called the matrix elements of A.
Assume now that A has discrete spectrum, every e_k being an eigenvector of A with
eigenvalue λ_k. Then the unitary group associated with A is

U_t = Σ_k e^{itλ_k} |e_k><e_k| .

Let p be the pure state associated with φ, a unit vector. Then we have, putting
<e_k, φ> = φ_k,

<φ, U_t φ> = Σ_k e^{itλ_k} |φ_k|² .

Inverting the Fourier transform, we identify the law of A in the state p as the discrete
law Σ_k |φ_k|² ε_{λ_k}, in conformity with the rules of elementary quantum mechanics. On
the other hand, the spectral subspace E_t for A is the closed subspace generated by all
e_k such that λ_k ≤ t.

About quantization

10 The following comments are not of a mathematical nature, but provide some hints
on what will be done in these notes.
Each time we want to provide a quantum analogue to some given classical prob-
abilistic situation (Ω, F, P), be it simple random walk, Brownian motion, or Poisson
processes, we take as Hilbert space H the space L²(Ω), and interpret the classical ran-
dom variables as multiplication operators on this space. The classical probability law

also provides a natural state on H, the pure state associated with the function 1. Then
the interest of the quantum probabilistic situation depends on finding natural sets of
random variables that do not commute with the classical ones; they often occur as
generators of groups of symmetries operating on the space. These two non-commuting
sets then are "mixed together" to produce new observables or sets of observables, whose
probabilistic structure is investigated. Non-commutative probability has the same pur-
pose as classical probability, namely to compute laws of random variables and stochastic
processes. The difference lies in a richer space of random variables, and the fact that
seemingly innocuous operations, like addition of non-commuting r.v.'s, sometimes pro-
duce unexpected results.
We are going to investigate successively the analogues of Bernoulli space and the
finite random walk (Chapter II), of finite dimensional Gaussian random variables
(Chapter III), of Brownian motion and Poisson processes (Chapter IV). Then we shall
study the simplest non-commutative extension of the classical Ito calculus.
Back to algebras 1
11 Let us now return to the general problem of defining a random variable in quantum
probability, trying to get a broader view. We started from the simple idea that a r.v. is
a hermitian element of some algebra A (with unit and involution). Then we described
the traditional approach of von Neumann, which specifies A to be L(H), the algebra
of all bounded operators on some separable Hilbert space, and defines a r.v. to be a
spectral measure on H over ℝ, and more generally a r.v. taking values in a measurable
space (E, ℰ) to be a spectral measure on H over E. Neither of these two points of view
is the modern definition. Roughly speaking, the modern idea of a r.v. contains both an
arbitrary algebra A and a Hilbert space H, as follows :
A r.v. X is a homomorphism from some algebra A into L(H) (preserving the invo-
lution and the identity 2), and possibly satisfying some additional continuity property.
This new point of view on random variables and stochastic processes was introduced
in quantum probability by an influential paper of Accardi, Frigerio and Lewis. In these
notes it is used only in the section on Evans-Hudson flows.
First of all, let us consider commutative situations. Let X be a classical random
variable defined on a probability space (Ω, F, P) and taking values in some measurable
space (E, ℰ). Then we may take for A the algebra of bounded measurable functions f
on E, for H the Hilbert space L²(Ω), and then mapping f ∈ A to the multiplication
operator by f ∘ X defines a homomorphism from A to L(H). Our new definition
amounts to saying that this mapping, which uniquely determines X, is the random
variable. Note also that this mapping is continuous w.r.t. monotone convergence (if a
bounded sequence f_n increases to f, then f_n ∘ X converges to f ∘ X in the strong
convergence of operators).
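Not part of the text: a finite sketch (Python with numpy assumed) of this homomorphism point of view, checking that f ↦ multiplication by f ∘ X preserves products, the involution, and the unit.

```python
import numpy as np

# Hypothetical finite example: X defined on Omega = {0,...,5}, with values in E = {0,1,2}.
Xvals = np.array([0, 1, 2, 2, 1, 0])

def op(f):
    """The random variable as a homomorphism: op(f) = multiplication by f(X)."""
    return np.diag([f(x) for x in Xvals])

f = lambda s: np.exp(1j * s)     # a bounded complex function on E
g = lambda s: s + 1.0
fg = lambda s: f(s) * g(s)
fbar = lambda s: np.conj(f(s))
```

The assertions state the *-homomorphism properties: op(fg) = op(f) op(g), op(f̄) = op(f)*, op(1) = I.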
Next, consider a spectral measure X on a Hilbert space H over some measurable
space (E, ℰ). Then the spectral integral

f ↦ X(f) = ∫_E f(s) dX(s)

1 This subsection is an addition, and may be omitted.


2 Sometimes identity preservation is omitted, but never without a warning.

defines a homomorphism from the algebra A of bounded measurable functions into
L(H). If E is a compact metric space, we may restrict A to be the smaller algebra
C(E); knowing this restriction uniquely determines X, since we recover all measurable
functions by monotone convergence. But restricting the algebra in this way leads to an
interesting converse :
THEOREM. If E is compact metric 1, any homomorphism X from C(E) to L(H) is
associated as above with a spectral measure.
PROOF (sketch). For u ∈ H, the mapping

f ↦ <u, X(f)u>

is a linear functional μ_u on C(E) such that μ_u(f*f) ≥ 0, hence a positive measure
on E. It follows by polarization that for u, v ∈ H, f ↦ <v, X(f)u> is a complex
measure μ_{u,v}. Then it is easy to extend the mapping X(f) to Borel bounded functions
on E in such a way that <v, X(f)u> = ∫ f(x) μ_{u,v}(dx). Then taking for f the
indicator function I_A of a Borel set, we define a spectral measure, and it is easy to check
that X is the corresponding spectral integral.
Let us turn now to the non-commutative case : there are two interesting situations,
that of a C*-algebra A, which is the non-commutative analogue of C(E) for a "non-
commutative compact space E" (which however has no independent existence : it is
entirely subsumed in its algebra), and that of a von Neumann algebra A, which is the
non-commutative analogue of a "non-commutative space L∞(μ)". In the latter case,
the homomorphism X should also satisfy a continuity property called normality (we
will not discuss it here).
The case of an arbitrary hermitian element a in an algebra, which was our starting
point in this chapter, can also be translated using the language of homomorphisms, but
we leave it aside.
COMMENT. As the preceding theorem concerns only compact spaces, it doesn't apply
to the most common example, that of real valued random variables! The obvious way
out of this difficulty consists in using the compact space ℝ̄ instead of ℝ, that is,
in allowing the values ±∞ in our definition of random variables. This requires only
trivial modifications, except that only finite random variables correspond to self-adjoint
operators.

The additional introductory material has been given in appendices at the
end of the volume, so that our reader is not tempted to read it at once, but
goes directly to the heart of the matter. Chapter II in particular contains
non-trivial finite dimensional examples.

1 We assume metrizability only for definiteness.


Chapter II. Spin
This chapter has little to do with the geometric property of elementary particles
called spin. Historically spin 1/2 provided the first basic example of a "two-level
quantum system", so that till now those simplest of all quantum systems are introduced
to students of physics with the help of "fictitious spins". The truth is that two-level
systems (or spins) play in quantum probability the role of Bernoulli random variables.
As in the classical case, they may be considered the cornerstone of probability theory.
The chapter is organized as follows : we describe the elementary (Bernoulli) case,
and study finite systems of such "quantum Bernoulli" random variables, commutative
in section 2, and anticommutative in section 5. We prove in §3 a quantum version of
the most elementary central limit theorem of probability theory, namely the de Moivre-
Laplace theorem; it slightly anticipates Chapter III. Section 4 studies "spin n " (angular
momentum), one of the possible quantum generalizations of a random variable assuming
n > 2 values, with an application to Biane's example of a commutative random walk
arising naturally in a non-commutative setup.

§1. T W O - L E V E L QUANTUM SYSTEMS

Two-dimensional Hilbert space

1 The simplest non-trivial Hilbert space H has dimension 2. Let us denote by (e_0, e_1)
an orthonormal basis. As usual we identify these basis elements with the column matrices
e_0 = (1, 0)^t and e_1 = (0, 1)^t.
These two vectors are usually "labelled" as follows (the general idea being that the first
vector is somehow "below" the second one)

e_0 = |down> ;   e_1 = |up>

(for a spin), or

e_0 = |empty> ;   e_1 = |full>

(for an electronic layer), or

e_0 = |fundamental> ;   e_1 = |excited>

(for a two-level atom), etc.


Of course there is nothing mathematical about "up" and "down" and the basis vec-
tors can be labelled as well |slit 1> and |slit 2> in the well known two-slit experiment.
Another useful interpretation of the Hilbert space H is the following. Consider a (clas-
sical) symmetric Bernoulli random variable x on some probability space (for instance
the line with the symmetric Bernoulli measure carried by {−1, 1}, x being the identity
mapping). Then we may identify e_0 with the r.v. 1, and e_1 with x.
2 In the two-dimensional case, it is possible to visualize all states and all observables
(random variables). Let us spend a little time doing this, though we do not need it for
the sequel.
A normalized vector ω = u e_0 + v e_1 can be considered as an element of the 3-sphere
S_3 = {|u|² + |v|² = 1}. To describe the pure states, i.e. the normalized vectors up to a
phase factor, we put

|u| = cos(θ/2) ,   |v| = sin(θ/2)   (0 ≤ θ ≤ π) .

Then u = |u| e^{ia}, v = |v| e^{ib} and we may add the same real number to a and b without
changing the state. We fix this number by taking a + b = 0. Thus we don't lose pure
states if we take

(2.1)   ω = ω_{θ,φ} = cos(θ/2) e^{−iφ/2} e_0 + sin(θ/2) e^{iφ/2} e_1

with 0 ≤ θ ≤ π, 0 ≤ φ < 2π. It is easy to check that in this way, taking θ to be the
latitude and φ to be the longitude, we define a 1-1 continuous mapping of the 2-sphere
S_2 onto the set of all pure states (then it is natural to expect the set of all states to be
described by the convex hull of the sphere, i.e. the ball; we shall see this later).
Let us return to the probabilistic interpretation using the Bernoulli r.v. x. As
explained in Chapter I, it may be interpreted in the operator sense as multiplication
by x. Since x·1 = 1·x = x, x² = 1, the matrix representing this observable in the basis
(1, x) is ( 0 1 ; 1 0 ), commonly denoted by σ_x or σ_1.
The most general s.a. operator on H is given by a matrix

(2.2)   A = ( t+z  x−iy ; x+iy  t−z )

with real coefficients x, y, z, t. It can be written as tI + xσ_x + yσ_y + zσ_z, a linear
combination of I and the three Pauli matrices given by

(2.3)   σ_x = ( 0 1 ; 1 0 ) ;   σ_y = ( 0 −i ; i 0 ) ;   σ_z = ( 1 0 ; 0 −1 )

(also denoted σ_1, σ_2, σ_3). Among the s.a. operators, the density matrices of states D
are characterized by the property of being positive and having unit trace. This implies
t = 1/2, so it is more convenient to set (z being real and r = x + iy complex, contrary
to all good manners)

(2.4)   D = ½ ( 1+z  r̄ ; r  1−z ) .

The positivity property <w, Dw> ≥ 0 amounts to r r̄ ≤ 1 − z², or x² + y² + z² ≤ 1.
This parametrizes the set of all states, in an affine way, by the points of the unit ball
of ℝ³, and of course the extreme (= pure) states by points of the unit sphere. We leave
it to the reader to check that indeed, if we take in (2.4)

(2.5)   x = sin θ cos φ ;   y = sin θ sin φ ;   z = cos θ

then the density matrix

(2.6)   D_{θ,φ} = ½ (I + σ_{θ,φ}) ;   σ_{θ,φ} = ( cos θ  sin θ e^{−iφ} ; sin θ e^{iφ}  −cos θ )

represents the projection on the one dimensional subspace generated by the vector given
by (2.1). The matrices σ_{θ,φ} represent all possible "Bernoulli r.v.'s", i.e. all observables
admitting the eigenvalues ±1.
In the concrete interpretation of a spin-1/2 particle, σ_{θ,φ} describes the outcome
of a spin measurement in ℝ³ (in units of ℏ) along the unit vector (2.5). Assume the
system is found in the pure state φ = e_1 (in the representation (2.1) this corresponds
to θ = 0 and φ arbitrary). Then a spin measurement along Oz is described by σ_z
and gives the probability 1 for "up" and 0 for "down", an apparently deterministic
situation, physically realizable by means of a filter along Oz which lets through only
objects with spin up. In classical probability, this would be the end of the matter. Here,
a measurement along (2.5) will give a "probability amplitude" <ω, σ_{θ,φ} ω> = cos θ of
finding the spin along the axis, corresponding to a probability cos² θ, and there is a
probability sin² θ to find it in the reverse direction. Thus we see at work the new kind
of indeterminism due to non-commutativity.
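Not part of the text: a numerical sketch (Python with numpy assumed) of the uncontroversial algebra behind this subsection: σ_{θ,φ} built from the Pauli matrices and the unit vector (2.5) has eigenvalues ±1, and D_{θ,φ} = (I + σ_{θ,φ})/2 is the rank-one projection on the vector (2.1).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)

theta, phi = 0.8, 2.1
n = np.array([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])                       # the unit vector (2.5)
sigma = n[0] * sx + n[1] * sy + n[2] * sz           # sigma_{theta,phi}

D = (np.eye(2) + sigma) / 2                         # the density matrix (2.6)
omega = np.array([np.cos(theta / 2) * np.exp(-1j * phi / 2),
                  np.sin(theta / 2) * np.exp(1j * phi / 2)])   # the vector (2.1)
```

The last assertion below also records the classical two-state fact that the transition probability between e_0 and ω_{θ,φ} is cos²(θ/2).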

Creation and annihilation

3 We are going to introduce new sets of matrices, which will play a fundamental role
in the sequel.
We define the vacuum vector as the first basis vector e_0. It corresponds to the
function 1 in the standard probabilistic interpretation using the Bernoulli measure, and
we therefore denote it by 1 (a notation general among physicists is |0>). The second
basis vector e_1 was identified, in the probabilistic interpretation, with the Bernoulli
r.v. x. Next we define the three matrices

(3.1)   b⁻ = ( 0 1 ; 0 0 ) ;   b⁺ = ( 0 0 ; 1 0 ) ;   b° = ( 0 0 ; 0 1 ) ,

called respectively the annihilation, creation and number operators (in the beautiful
Hindu interpretation of Parthasarathy, the last one is the conservation operator).
These are not the standard notations (physicists use rather b*, b, n). The (number and)
annihilation operators kill the vacuum vector 1, while the creation operator transforms
it into the occupied state (it "creates a particle"). On the other hand, the creation
operator transforms into 0 the occupied state (a state cannot be doubly occupied : this
model obeys Pauli's exclusion principle). These unsociable particles are toy models for

real particles of the fermion family. Note that b⁺ and b⁻ are mutually adjoint, and
b° = b⁺b⁻ is selfadjoint. Note also the relations

(3.2)   {b⁻, b⁺} = b⁻b⁺ + b⁺b⁻ = I ,

(3.3)   [b⁻, b⁺] = b⁻b⁺ − b⁺b⁻ = I − 2b° .

Relation (3.2) is the simplest example of the so called canonical anticommutation
relations (CAR); (3.3) has no special name.
The matrices I and b^ε (ε = −, °, +) form a basis for all (2,2) matrices. In particular,
the Pauli matrices are represented as follows

(3.4)   b⁺ + b⁻ = σ_x ;   i(b⁺ − b⁻) = σ_y ;   [b⁻, b⁺] = σ_z = I − 2b° .
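Not part of the text: a direct check (Python with numpy assumed) of the 2×2 matrices (3.1), the CAR (3.2), the commutator (3.3), and the Pauli relations (3.4).

```python
import numpy as np

bm = np.array([[0, 1], [0, 0]], complex)   # b^-  (annihilation)
bp = np.array([[0, 0], [1, 0]], complex)   # b^+  (creation)
b0 = bp @ bm                               # b^o  (number) = diag(0, 1)

e0 = np.array([1, 0], complex)             # vacuum vector 1
e1 = np.array([0, 1], complex)             # occupied state

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)
```

The assertions verify: the annihilation and number operators kill the vacuum is created into e_1, the Pauli exclusion b⁺e_1 = 0, the CAR, and (3.4).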

4 It is easy to give examples of unitary groups in this situation. In fact, for complex
z = x + iy = re^{iφ}, the operator zb⁺ − z̄b⁻ is −i times the s.a. operator xσ_y − yσ_x,
hence its exponential is unitary. Let us call it a discrete Weyl operator

(4.1)   W(z) = exp(zb⁺ − z̄b⁻) = ( cos r  −sin r e^{−iφ} ; sin r e^{iφ}  cos r ) .

For every z, the family W(tz), (t ∈ ℝ) is a unitary group. On the other hand, let us
call discrete coherent vectors the vectors

ψ_z = W(z)1 = ( cos r , sin r e^{iφ} )^t .

Taking r in the interval [0, π/2] and φ in the interval [0, 2π[ this realizes a
parametrization of unit vectors modulo a phase factor, slightly different from that in
(2.1).
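Not part of the text: a numerical check (Python with numpy and scipy assumed) of the closed form for the discrete Weyl operator, obtained from (zb⁺ − z̄b⁻)² = −r² I, and of the coherent vector W(z)1.

```python
import numpy as np
from scipy.linalg import expm

bm = np.array([[0, 1], [0, 0]], complex)   # annihilation
bp = np.array([[0, 0], [1, 0]], complex)   # creation

r, phi = 0.7, 1.9
z = r * np.exp(1j * phi)
W = expm(z * bp - np.conj(z) * bm)         # the discrete Weyl operator W(z)

closed = np.array([[np.cos(r), -np.sin(r) * np.exp(-1j * phi)],
                   [np.sin(r) * np.exp(1j * phi), np.cos(r)]])
coherent = W @ np.array([1, 0], complex)   # the discrete coherent vector W(z)1
```

Since the generator squares to −r²I, the exponential series collapses to cos r · I + (sin r / r)(zb⁺ − z̄b⁻), which is the matrix `closed`.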

§2. C O M M U T I N G SPIN SYSTEMS

In this section, the L² space over classical Bernoulli space becomes a finite dimen-
sional model ("toy Fock space") which approximates boson Fock space (the L² space
over Wiener space). We underline the analogies between the role played by the Bernoulli
process for vectors, and that played for operators by the non-commutative triple of the
creation/number/annihilation processes. Later, this analogy will be carried over to con-
tinuous time.

Commuting spins and toy Fock space
1 Let u_i, i = 1,...,N be a finite family of independent, identically distributed
Bernoulli random variables, each one taking the value 1 with probability p, the value
0 with probability q = 1 - p. As usual, we define

(1.1)   x_i = (u_i - p)/√(pq)

to get r.v.'s with zero expectation and unit variance. It is convenient to consider the
r.v.'s x_i as the coordinate mappings on the product space E = {-1, 1}^N, on which the
measure is denoted by μ_p (by μ if p = 1/2, the most usual situation).
We denote by 𝒫_N or simply 𝒫 the set of all subsets of {1,...,N}, and for A ∈ 𝒫
we put

(1.2)   x_A = ∏_{i∈A} x_i .

These r.v.'s constitute an orthonormal basis of the Hilbert space Φ = L^2(μ_p).
When p = 1/2, it is interesting to give an interpretation of this basis as follows : E
is a finite group under multiplication, the Haar measure and the characters of which
are exactly the measure μ and the mappings x_A for A ∈ 𝒫 (the dual group of E is
isomorphic with 𝒫 equipped with the symmetric difference operation Δ). Then the
expansion in the basis (x_A) is simply a finite Fourier-Plancherel representation.
Let Φ_n be the subspace of L^2(μ_p) generated by all x_A such that |A| = n, |A|
denoting here the number of elements of A. The space Φ_0 is one dimensional, generated
by x_∅ = 1. It is convenient to give names to these objects : we call Φ_n the n-th
(discrete) chaos, and x_∅ the vacuum vector, also denoted by 1. Note that all these
Hilbert spaces are naturally isomorphic, no matter the value of p. Given the chaotic
expansion of a random variable Y = Σ_A c_A x_A, we may compute the scalar product of
Y with any other r.v. defined in the same way, and in particular the expectation of Y,
equal to < 1, Y >. What makes the distinction between the different values of p is not
the Hilbert space structure, but the multiplication of random variables. Taking it for
granted that the product is associative, commutative, and that 1 acts as unit element,
it is entirely determined by the formula giving x_i^2

(1.3)   x_i^2 = 1 + c x_i   with   c = (1 - 2p)/√(pq) .

In particular if p = 1/2, this reduces to x_i^2 = 1 and we have the explicit formula

(1.4)   x_A x_B = x_{A Δ B}

where Δ is the symmetric difference operation. From the above almost trivial example,
one may get the feeling that different probabilistic interpretations can be described on
the same Hilbert space by different multiplicative structures. This idea will not be given
a precise form, but still it pervades most of these notes.
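The symmetric multiplication (1.4) is easy to experiment with. Here is a hedged sketch (our own encoding: chaos expansions as dictionaries indexed by frozensets) of the p = 1/2 product:

```python
# The p = 1/2 toy Fock product: x_A * x_B = x_{A symmetric-difference B}.
from itertools import combinations

N = 3

def product(f, g):
    # f, g : dict {subset: coefficient}; multiply using x_A x_B = x_{A ^ B}
    h = {}
    for A, cA in f.items():
        for B, cB in g.items():
            C = A ^ B                    # symmetric difference of frozensets
            h[C] = h.get(C, 0) + cA * cB
    return h

x1 = {frozenset({1}): 1}
# x_1 * x_1 = 1, the vacuum vector x_emptyset
assert product(x1, x1) == {frozenset(): 1}

# associativity on a sample pair of chaos expansions
f = {frozenset({1}): 2, frozenset({2, 3}): -1}
g = {frozenset({1, 2}): 1}
assert product(product(f, g), x1) == product(f, product(g, x1))
```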

Operators on toy Fock space

2 We are going to define for each k ∈ {1,...,N} creation and annihilation operators
a_k^+ and a_k^-. We put

a_k^+ (x_A) = x_{A∪{k}} if k ∉ A, = 0 otherwise   (creation)

a_k^- (x_A) = x_{A\{k}} if k ∈ A, = 0 otherwise   (annihilation)



It is easy to check that < x_B, a_k^- x_A > = < a_k^+ x_B, x_A >, i.e. these operators are mutually
adjoint. From a physical point of view, they are creation and annihilation operators "at
the site k" for a distribution of spins located at the "sites" 1,...,N. Let us introduce
also the corresponding number operators a_k^o = n_k = a_k^+ a_k^-, so that

a_k^o x_A = x_A if k ∈ A , = 0 otherwise.

Denoting by greek letters ε, η,... symbols taken from the set {-, o, +}, we see that
operators a_j^ε and a_k^η relative to different sites always commute, while at one given site
k we have

(2.1)   [a_k^-, a_k^+] = I - 2a_k^o


These relations may be considered in a very precise way (which will be described later
on) as discrete analogues of the CCR. This is why we shall call (symmetric) toy Fock
space the finite model we have been describing. The idea of considering seriously toy
Fock space as a discrete approximation of symmetric Fock space (to be described in
Chapter IV) is due to J.-L. Journé, under the French name of bébé Fock. The formal
analogies between the discrete and continuous situations have been noticed much earlier
(see for instance the paper by Arecchi et al. [ACGT], and other papers mentioned in
the book [K1S]).
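For the reader who wants to see (2.1) concretely, the following sketch (our own encoding, N = 3, so the space has dimension 8) builds the matrices of a_k^+, a_k^-, a_k^o and checks the commutation relations:

```python
# Check (2.1) and commutation between different sites on toy Fock space, N = 3.
from itertools import combinations

N = 3
basis = [frozenset(c) for r in range(N + 1)
         for c in combinations(range(1, N + 1), r)]
idx = {A: i for i, A in enumerate(basis)}
dim = len(basis)

def op(rule):
    # rule(A) -> image subset or None; builds the 0/1 matrix of the operator
    M = [[0] * dim for _ in range(dim)]
    for A in basis:
        B = rule(A)
        if B is not None:
            M[idx[B]][idx[A]] = 1
    return M

def a_plus(k):  return op(lambda A: A | {k} if k not in A else None)
def a_minus(k): return op(lambda A: A - {k} if k in A else None)
def a_num(k):   return op(lambda A: A if k in A else None)

def mul(M, P):
    return [[sum(M[i][m] * P[m][j] for m in range(dim)) for j in range(dim)]
            for i in range(dim)]

def comm(M, P):
    MP, PM = mul(M, P), mul(P, M)
    return [[MP[i][j] - PM[i][j] for j in range(dim)] for i in range(dim)]

zero = [[0] * dim for _ in range(dim)]
for k in (1, 2, 3):
    nk = a_num(k)
    rhs = [[(1 if i == j else 0) - 2 * nk[i][j] for j in range(dim)]
           for i in range(dim)]
    assert comm(a_minus(k), a_plus(k)) == rhs     # (2.1)
assert comm(a_minus(1), a_plus(2)) == zero        # different sites commute
assert mul(a_plus(1), a_minus(1)) == a_num(1)     # a_k^o = a_k^+ a_k^-
```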
At each site, we consider the self-adjoint operators

(2.2)   q_k = a_k^+ + a_k^- ,   p_k = i(a_k^+ - a_k^-) .

The way q_k operates is easily made explicit :

q_k x_A = x_{A Δ {k}} ;

comparing this with (1.4), we see that q_k is the operator of multiplication by x_k in the
symmetric Bernoulli case. The operator of multiplication by x_k in the non-symmetric
cases turns out to be equal to q_k + c n_k, the constant c being the same as in (1.3).
What about p_k ? Let us denote by ℱ the unitary operator on toy Fock space
defined by ℱ x_A = i^{|A|} x_A. This operator preserves 1, and we have

(2.3)   ℱ^{-1} a_k^+ ℱ = -i a_k^+ ;   ℱ^{-1} a_k^- ℱ = i a_k^- ;   ℱ^{-1} a_k^o ℱ = a_k^o

(2.4)   ℱ^{-1} q_k ℱ = -p_k ;   ℱ^{-1} p_k ℱ = q_k .

The letter ℱ is meant to recall a Fourier transform, though it should not be confused
with the Fourier transform for the group structure mentioned above.
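Since everything factorizes over sites, (2.3)-(2.4) reduce to a one-site computation; the sketch below (ours) checks it on C^2 with F = diag(1, i):

```python
# One-site check of (2.3)-(2.4): conjugation by F = diag(1, i) sends q to -p
# and p to q (a discrete Fourier transform).
F    = [[1, 0], [0, 1j]]
Finv = [[1, 0], [0, -1j]]
q    = [[0, 1], [1, 0]]        # a^+ + a^-    (= sigma_x)
p    = [[0, -1j], [1j, 0]]     # i(a^+ - a^-) (= sigma_y)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert mul(Finv, mul(q, F)) == [[0, 1j], [-1j, 0]]   # = -p
assert mul(Finv, mul(p, F)) == q
```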
3 As we have expanded all vectors in Φ in the basis (x_A), we look for a similar
expansion of all operators. To this end, we put a_A^ε = ∏_{i∈A} a_i^ε for ε = -, o, +, and
A ∈ 𝒫. Then we have

(3.1)   a_A^+ x_B = x_{A+B} ,   a_A^- x_B = x_{B-A}

where x_{A+B} (x_{B-A}) means x_{A∪B} (x_{B\A}) if A, B are disjoint (if A ⊂ B) and 0
otherwise. Similarly, a_A^o x_B = x_B if A ⊂ B and 0 otherwise.

Let 𝒜 be the algebra generated by the creation and annihilation operators (note that
it contains the number operators too). Every non-zero vector x ∈ Φ has some non-zero
coefficient c_A in its chaotic expansion; choosing |A| maximal, x is transformed into
c_A 1 by the pure annihilator a_A^-. On the other hand, 1 can be transformed into x_B
by the pure creator a_B^+, and hence can be transformed into any non-zero vector by a
linear combination of creators. Consequently, there is no subspace of Φ invariant under
the algebra 𝒜 except the trivial ones {0} and Φ, and a simple theorem in algebra
implies that 𝒜 contains all operators on the finite dimensional space Φ ¹. Using the
commutation properties of the operators a_k^±, it is easy to show that 𝒜 consists of all
linear combinations of products a_A^+ a_B^-, i.e. products for which creators stand to the
left of annihilators (such products are said to be normally ordered). On the other hand,
the number of all such products is (2^N)^2, which is the dimension of the space of all
(2^N, 2^N) matrices, i.e. of all operators on Φ. Otherwise stated, we have constructed a
basis for this last space.
There is another interesting basis : in a product a_A^+ a_B^-, we may use the commutation
at different sites to bring successively into contact all pairs of operators a_k^+ and a_k^-,
where k belongs to the intersection A ∩ B. Having done this, we see that our algebra is
also generated by products a_U^+ a_V^o a_W^-, where now U, V, W constitute a disjoint triple.
Counting these operators, we see again that their number is 2^{2N}, hence they form a
basis too, though it is less familiar to physicists than the preceding one.

Adapted operators

4 The preceding description of all operators did not depend on the order structure
of the set { 1 , . . . , N } . We are going to use this structure to describe operators as
"stochastic integrals", as random variables are in classical probability.
Let us return to the probabilistic interpretation. The Bernoulli random variables
x_1,...,x_N are independent. They generate σ-fields, for which we introduce some
notation : ℱ_{≤k} is the past σ-field, generated by x_1,...,x_k, and one may consider
also the strict past σ-field ℱ_{<k} = ℱ_{≤k-1} (trivial for k = 1). There are also future and
strict future σ-fields, denoted respectively by ℱ_{≥k} and ℱ_{>k}. Since the past σ-fields
occur more often than the future ones, and heavy notation is not convenient, we write
ℱ_k and ℱ_{k-} for the past and strict past, whenever the context excludes any ambiguity.
For the corresponding L^2 spaces we use similarly the notations Φ_{≤k}, Φ_{<k}, Φ_{≥k}, Φ_{>k}.
We denote by E_k the orthogonal projection on the past Φ_{≤k} (from the probabilistic
point of view, this is a conditional expectation).
Independence implies that Φ is isomorphic to the tensor product Φ_{≤k} ⊗ Φ_{>k}, the
operation ⊗ being read as the ordinary product of random variables, restricted to pairs
of r.v.'s which depend only on pre-k and post-k co-ordinates. Why restrict it at all ?
Some attention should be given to this point, which will be important in continuous
time. Toy Fock space is a quantum probabilistic object, which admits several classical

¹ This may be considered as an elementary case of von Neumann's bicommutant
theorem.

probabilistic interpretations, and the full product of random variables depends on the
interpretation (in a way, it even is the interpretation itself!). However, for pairs of r.v.'s
depending on different sets of coordinates as above, it reflects the algebraic operation
⊗ and has an intrinsic meaning.
Traditionally, a r.v. measurable w.r. to ℱ_k (ℱ_{k-}) is said to be k-adapted (k-
previsible) and families of r.v.'s indexed by time which are adapted (previsible) at each
time are called adapted (previsible) processes ; the word "predictable" is commonly
used too. We are going to extend these notions to operators on Φ.
An operator A on Φ is said to be k-adapted if, in the tensor product decomposition
Φ_{≤k} ⊗ Φ_{>k}, it is read as the tensor product B ⊗ I of an operator on the first factor and
the identity on the second one. Otherwise stated, if f and g are functions depending
respectively on the first k and the last N - k co-ordinates, then 1) Af still depends
only on the first k coordinates, 2) A(fg) = (Af)g. For example, the multiplication
operator by a random variable h satisfies 2), and it satisfies 1) iff h is ℱ_k-measurable.
The conditional expectation operator E_k satisfies 1), but not 2). The creation, counting
(number) and annihilation operators a_A^ε are k-adapted iff A ⊂ {1,...,k}.
How can we transform an operator A into a k-adapted operator ? It is very natural
to define (with the same notations as above)

A_k(fg) = (E_k(Af)) g

or in more algebraic notation A_k = (E_k A E_k) ⊗ I_{>k}. Then it is easy to see that A_k
plays the role of a conditional expectation of the operator A in the vacuum state. This
means that given any two k-adapted operators G, H we have

(4.1)   < 1, G*AH 1 > = < 1, G*A_k H 1 > .

This is a nice example of a case where quantum conditional expectations do exist : in
general, they do not.
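Formula (4.1) can be tested numerically. The following sketch (our own indexing conventions, N = 2, k = 1) builds an arbitrary operator A, its vacuum conditional expectation A_1 = (E_1 A E_1) ⊗ I, and two 1-adapted operators G, H:

```python
# Numerical check of (4.1) for N = 2, k = 1; basis: subsets of {1,2}, vacuum x_Ø.
subs = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
idx = {A: i for i, A in enumerate(subs)}
small = [frozenset(), frozenset({1})]        # basis of the "past" factor

def lift(g):
    # turn a 2x2 matrix g on span(x_Ø, x_{1}) into the 1-adapted operator g ⊗ I
    M = [[0j] * 4 for _ in range(4)]
    for B in subs:
        for C in subs:
            if B & {2} == C & {2}:
                M[idx[B]][idx[C]] = g[small.index(B & {1})][small.index(C & {1})]
    return M

def cond_exp(A):
    # A_1 = (E_1 A E_1) ⊗ I: compress A to the past, extend by the identity
    M = [[0j] * 4 for _ in range(4)]
    for B in subs:
        for C in subs:
            if B & {2} == C & {2}:
                M[idx[B]][idx[C]] = A[idx[B & {1}]][idx[C & {1}]]
    return M

def mul(M, P):
    return [[sum(M[i][m] * P[m][j] for m in range(4)) for j in range(4)]
            for i in range(4)]

def dag(M):
    return [[M[j][i].conjugate() for j in range(4)] for i in range(4)]

A = [[complex(r + 1, c - r) for c in range(4)] for r in range(4)]   # arbitrary
G = lift([[1, 2j], [0.5, -1]])
H = lift([[0.3, -1j], [2, 1j]])

lhs = mul(mul(dag(G), A), H)[0][0]            # <1, G*AH 1>, since 1 = e_0
rhs = mul(mul(dag(G), cond_exp(A)), H)[0][0]  # <1, G*A_1H 1>
assert abs(lhs - rhs) < 1e-12
```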

The three basic "martingales"

5 It is well known that any martingale M_k on Bernoulli space has a predictable
representation

(5.1)   M_k = m + Σ_{j=1,...,k} u_j x_j

where m is a constant (the expectation of the martingale) and u_j is a j-previsible
random variable.


vector. It is interesting to remark that the product which appears here is precisely of
the kind considered in the preceding section, i.e. has an intrinsic meaning, independent
of the probabilistic interpretation of toy Fock space.
The predictable representation theorem for operators can be stated as follows : Every
operator A on toy Fock space has a unique representation as

(5.2)   A = mI + Σ_{j=1,...,N} Σ_{ε=+,o,-} u_j^ε a_j^ε

where the operators u_j^ε are j-predictable. This theorem amounts to the choice of a
third basis for the space of operators : introduce, besides the three symbols +, o, -, a
fourth symbol • and make the convention that for every k, a_k^• is the identity operator.
For every sequence ξ = (ε_1,...,ε_N) of these four symbols, set a^ξ = a_1^{ε_1} ... a_N^{ε_N}. Then
these 4^N operators form a basis, and the predictable representation theorem amounts
to arranging this basis according to the last index k such that ε_k ≠ • .

Discrete coherent vectors

6 Given a martingale M of expectation 0, with its predictable representation (5.1),
probabilists often use the (discrete) stochastic exponential of M, a martingale of
expectation 1 whose value at time N is the r.v.

(6.1)   Z = ∏_{k=1,...,N} (1 + u_k x_k) .

Most important is the case where the r.v.'s u_k are constants, i.e. M_k = E[u | ℱ_k]
where u = M_N = Σ_k u_k x_k belongs to the first chaos. Then we put Z = ℰ(u) ; vectors
of this form are often called (discrete) exponential vectors (or discrete coherent vectors
after normalization). We have

(6.2)   < ℰ(u), ℰ(v) > = ∏_k (1 + ū_k v_k) .
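Formula (6.2) follows from expanding ℰ(u) = Σ_A u_A x_A with u_A = ∏_{k∈A} u_k over the orthonormal basis (x_A); a short numerical check (ours):

```python
# Check (6.2): sum over subsets of conj(u_A) v_A equals prod_k (1 + conj(u_k) v_k).
from itertools import combinations

u = [0.3 + 0.1j, -0.5j, 1.2]
v = [0.2, 0.7 - 0.3j, -0.4 + 0.1j]
N = 3

lhs = 0j
for r in range(N + 1):
    for A in combinations(range(N), r):
        uA, vA = 1 + 0j, 1 + 0j
        for k in A:
            uA *= u[k]
            vA *= v[k]
        lhs += uA.conjugate() * vA       # <x_A, x_B> = delta_{AB}

rhs = 1 + 0j
for k in range(N):
    rhs *= 1 + u[k].conjugate() * v[k]

assert abs(lhs - rhs) < 1e-12
```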

The word "coherent vectors" has many uses, and the whole book [K1S] has been devoted
to its different meanings. We have already met it in §1, for the elementary situation of
the two-dimensional spin space. These two meanings are closely related : the toy Fock
space Φ is isomorphic to the tensor product of N copies of C^2, and the new coherent
(or exponential) vectors are tensor products ψ(z_1) ⊗ ... ⊗ ψ(z_N) of old coherent (or
exponential) vectors.
We find it convenient to use the word exponential vectors to denote vectors of
expectation 1 in the vacuum state, and coherent vectors for vectors of norm 1. This
use is not standard in the literature.

§3. A DE MOIVRE-LAPLACE THEOREM

In this section we prove, by elementary methods, a theorem which corresponds to
the classical de Moivre-Laplace theorem on the convergence of sums of independent
Bernoulli r.v.'s to a Gaussian r.v. The Bernoulli r.v.'s are replaced by spin creation,
annihilation and number operators (commuting at different sites), and the analogues
of Gaussian variables are the harmonic oscillator creation, annihilation and number
operators, to be defined in the next chapter. The reader may prefer to become acquainted
first with the harmonic oscillator, and postpone this section. Elementary results in this
direction have been known since Louisell's book and the paper [ACGT] of Arecchi et
al. (1972). We follow the modern version of Accardi-Bach [AcB1].

In classical probability, the de Moivre-Laplace theorem is the first step into a vast
category of general problems : weak convergence of measures, central limit theorems.
Only special cases of these problems have been considered yet in quantum probability.
The method generally used is convergence of moments (see Giri and von Waldenfels
[GvW], von Waldenfels [vW1] for the fermion case, and Accardi-Bach [AcB3]).

Preliminary computations

1 We first recall, for the reader's convenience, the notations for the elementary spin
operators. First of all, the three basic creation, number and annihilation operators on
the two dimensional Hilbert space ℋ = C^2 (Bernoulli space, with its basis consisting of
1 and x). To avoid confusion with the corresponding harmonic oscillator notation, we
put (as in subsection 5)

b^- = ( 0 1 )     b^o = ( 0 0 )     b^+ = ( 0 0 )
      ( 0 0 ) ;         ( 0 1 ) ;         ( 1 0 ) .

The most general skewadjoint operator g such that < 1, g1 > = 0 can be written as

(1.1)   g = ( 0    -ζ  )
            ( z    2iλ )   with ζ = z̄ and λ real.

We begin with the computation of the unitary operator e^g : when λ = 0, this operator
is the elementary Weyl operator w(z) = exp(zb^+ - z̄b^-). For λ ≠ 0 we denote it by
w(z, λ). The vectors ψ(z) = w(z)1 are the elementary coherent vectors.
We take n independent copies of the preceding space, i.e. build the tensor product
ℋ_n = ℋ^{⊗n}, with its vacuum vector 1^{⊗n} denoted 1_n or simply 1. Given an operator h
on ℋ, we may copy it on ℋ_n in the 1-st,...,n-th position : thus h_1 denotes h ⊗ I ⊗ ... ⊗ I
and h_n  I ⊗ ... ⊗ I ⊗ h. Classical probability suggests that a √n normalization is
convenient for a central limit theorem, and we therefore put

(1.2)   B_n^± = (b_1^± + ... + b_n^±)/√n

(1.3)   W_n(z) = w(z/√n)_1 ⊗ ... ⊗ w(z/√n)_n = exp(zB_n^+ - z̄B_n^-)

(1.4)   ψ_n(z) = W_n(z) 1 = ψ(z/√n)^{⊗n} .

These last operators and vectors are called respectively the discrete Weyl operators and
discrete coherent vectors. The set of coherent vectors is the same physicists use, but the
mapping z → ψ_n(z) is not the same, since we have included the normalization in our
definition. Nothing has been said yet about B_n^o.
We also introduce the notations for the harmonic oscillator. It suffices to say that
we are given a Hilbert space generated (through linear combinations and closure) by a
family of coherent vectors ψ(u) = W(u)1 satisfying the relation

(1.5)   < ψ(u), ψ(v) > = e^{-(|u|^2+|v|^2)/2} e^{ūv} ,

(these vectors differ by a normalization constant from the exponential vectors ℰ(u)
which are used in the harmonic oscillator section). The vacuum vector is 1 = ψ(0).

The annihilation operator is defined by a^- ψ(u) = u ψ(u), the creation operator
a^+ is its adjoint, and a^o = a^+ a^-. The Weyl operators W(z) are given formally by
W(z) = exp(za^+ - z̄a^-). We only need to know how they operate on the coherent
vectors, namely

(1.6)   W(z) ψ(u) = e^{i Im(ūz)} ψ(u + z) .

The basic computation we use is that of the matrix elements

(1.7)   < ψ(u), W(z) ψ(v) > = e^{i Im(v̄z)} < ψ(u), ψ(z + v) > ,

a somewhat lengthy expression if (1.5) is taken into account. Indeed, the result we want
to prove is : matrix elements of the discrete Weyl operators W_n(z) between discrete
coherent states ψ_n(u) and ψ_n(v) tend to the matrix elements (1.7) with the same
values of z, u, v. In the last part we shall include the number operator too.
2 We start from the following relation, where g is given by (1.1)

g^2 = ( -zζ        -2iλζ      )
      ( 2iλz       -zζ - 4λ^2 )  = -zζ I + 2iλ g

(this is of course a trivial case of the Hamilton-Cayley theorem). We change variables,
assuming at the start that the denominator does not vanish

j = (g - iλ)/√(λ^2 + zζ) ;   g = iλ + j √(λ^2 + zζ) .

Since j2 = - I , Euler's formula gives

e g = ei2~(cos V / ~ + z~ + j sin X / ~ + z~ )

and we have, replacing j by its value

sin ~,~2 + z(
(2.1) eg = eii~(cosvZ-~ + z~ + V/-~ + z~ ( g - i~) )

which is correct even in the case ~2 = - z ~ if sin0/0 = 1. The interest of taking a


general ~ will be clear later on. There is no harm in using complex square roots : the
functions cos v/~ and sin v G / v ~ are single valued and entire.
We first apply this result to Weyl operators, taking λ = 0, ζ = z̄ and |z| = r. We
get

(2.2)   w(z) = cos r + (sin r / r) Z   with   Z = ( 0  -z̄ )
                                                  ( z   0 ) .

Therefore

ψ(z) = w(z)1 = cos r 1 + (sin r / r) z x ,

which gives the matrix element

< ψ(u), ψ(v) > = cos |u| cos |v| + ūv (sin |u| / |u|)(sin |v| / |v|)

and if we replace z by z/√n we get (the symbol ≈ meaning that only the leading term
is kept)

(2.3)   ψ(z/√n) ≈ (1 - |z|^2/2n) 1 + (z/√n) x ,

with the trivial consequence

lim_n < ψ_n(u), ψ_n(v) > = < ψ(u), ψ(v) > ,

which is already an indication of convergence to the harmonic oscillator.
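This convergence is easy to check numerically from the explicit form of the matrix element; in the sketch below (our own helper names) n is large but finite and the tolerance is generous:

```python
# Check that <psi(u/sqrt n), psi(v/sqrt n)>^n approaches (1.5) for large n.
import cmath, math

def inner_2d(a, b):
    # <psi(a), psi(b)> on C^2, with psi(z) = cos|z| 1 + (sin|z|/|z|) z x
    ra, rb = abs(a), abs(b)
    sa = math.sin(ra) / ra if ra else 1.0
    sb = math.sin(rb) / rb if rb else 1.0
    return math.cos(ra) * math.cos(rb) + (a.conjugate() * b) * sa * sb

u, v = 0.6 + 0.2j, -0.3 + 0.5j
target = cmath.exp(-abs(u)**2 / 2 - abs(v)**2 / 2 + u.conjugate() * v)
n = 200000
approx = inner_2d(u / math.sqrt(n), v / math.sqrt(n)) ** n
assert abs(approx - target) < 1e-3
```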


Let us now state (and sketch a proof of) an important result of Accardi-Bach : for
every finite family of complex numbers u, z_1,...,z_k, v, the matrix element

< ψ_n(u), W_n(z_1) ... W_n(z_k) ψ_n(v) >

tends to the corresponding matrix element for the harmonic oscillator,

< ψ(u), W(z_1) ... W(z_k) ψ(v) > .

In this general form, no generality is lost by computing the matrix elements relative to
the vacuum vector 1_n. Indeed, it is sufficient to add W_n(-u) = W_n^*(u) and W_n(v) to
the set of our Weyl operators to get the general case.
On the other hand, it is interesting to consider more general matrix elements, and
to replace the Weyl operators w(z) = exp(zb^+ - z̄b^-) by the operators w(z, ζ) =
exp(zb^+ - ζb^-), and similarly W_n(z), W(z) by the corresponding (non-unitary)
operators Ω_n(z, ζ) = exp(zB_n^+ - ζB_n^-), Ω(z, ζ) = exp(za^+ - ζa^-). The interest of
doing so comes from the fact that such matrix elements

< 1, Ω_n(z_1, ζ_1) ... Ω_n(z_k, ζ_k) 1 > =

< 1, w(z_1/√n, ζ_1/√n) ... w(z_k/√n, ζ_k/√n) 1 >^n

are entire functions of the complex variables z_1, ζ_1,..., z_k, ζ_k and (if we are careful to
find locally uniform bounds for these entire functions) convergence of the functions will
imply convergence of all their partial derivatives of arbitrary orders. This means that
matrix elements (relative to 1 or to arbitrary coherent vectors) of arbitrary polynomials
in the operators B_n^ε converge to the matrix elements of the corresponding polynomials
in the operators a^ε.
The principle of Accardi-Bach's proof is simple : we expand the exponentials and
get

Σ_p n^{-p/2} Σ_{m_1+...+m_k=p} (1/(m_1! ... m_k!)) < 1, (z_1 b^+ - ζ_1 b^-)^{m_1} ... (z_k b^+ - ζ_k b^-)^{m_k} 1 >

to be raised to the power n. The term corresponding to p = 0 is 1, and the following
term is 0 since < 1, b^± 1 > = 0. For p = 2 we get

(2.4)   -(1/n) ( (1/2) Σ_i z_i ζ_i + Σ_{i<j} ζ_i z_j )

which we denote by -S/n. Finally, the sum of the remaining terms is the product of
n^{-3/2} by a factor which remains uniformly bounded as the complex variables z_i, ζ_i
range over a compact set. The limit of (1 + (-S + o(1))/n)^n is e^{-S}, and it remains to
check that this limit is equal to

(2.5)   < 1, e^{z_1 a^+ - ζ_1 a^-} ... e^{z_k a^+ - ζ_k a^-} 1 > ,

to which a meaning should be given first. In order to do this, we anticipate Chapter III
and remark that e^{za^+ - ζa^-} may be given a definition in analogy with (and in analytic
continuation of) that of the Weyl operators, on the stable domain ℰ generated by the
exponential vectors :

(2.6)   e^{za^+ - ζa^-} ℰ(u) = e^{-ζu - zζ/2} ℰ(u + z)

and that on this domain it satisfies an extension of the Weyl commutation relation
allowing the computation of the matrix element (2.5), which is found indeed to be e^{-S}
(or just check it for the Weyl operators and deduce the general equality by analytic
continuation).

Including the number operator

3 In this section we are going to discuss the same topic, but including the number
operator. The first problem is the following : how to choose a constant c_n such that the
matrix elements

< ψ_n(u), e^{iλ B_n^o} ψ_n(v) >

relative to the operator B_n^o = (b_1^o + ... + b_n^o)/c_n converge to a finite and non-trivial
limit. The surprising answer is that we may take c_n = 1. From a classical point of view,
this is paradoxical : should not a r.v. with zero expectation and finite variance σ^2 be
normalized by σ√n ? Precisely, this is the point : in the vacuum state, the number
operator has variance zero, and therefore does not need to be divided by something
which tends to infinity.
To see that this choice is convenient, let us compute the matrix element

< ψ_n(u), e^{iλ B_n^o} ψ_n(v) > = μ^n

where μ = < ψ(u/√n), e^{iλ b^o} ψ(v/√n) >. Using the relation (b^o)^2 = b^o, we find that
e^{iλ b^o} = I + (e^{iλ} - 1) b^o, hence

μ ≈ 1 - |u|^2/2n - |v|^2/2n + e^{iλ} ūv/n

and the limit of the matrix element is seen to be < ψ(u), ψ(e^{iλ} v) >, while e^{iλ a^o} ψ(v) =
ψ(e^{iλ} v). Thus our "normalization" works.
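The same computation can be checked numerically, again with c_n = 1 (our own sketch; e^{iλ b^o} acts as the identity on 1 and as multiplication by e^{iλ} on x):

```python
# Check lim <psi_n(u), e^{i la B_n^o} psi_n(v)> = <psi(u), psi(e^{i la} v)>.
import cmath, math

u, v, la = 0.4 - 0.3j, 0.5 + 0.2j, 0.9

def psi(z):
    # components of psi(z) on the basis (1, x)
    r = abs(z)
    s = math.sin(r) / r if r else 1.0
    return (math.cos(r), s * z)

def mu(n):
    a, b = psi(u / math.sqrt(n)), psi(v / math.sqrt(n))
    # e^{i la b^o}: identity on 1, multiplication by e^{i la} on x
    return a[0].conjugate() * b[0] + a[1].conjugate() * (cmath.exp(1j * la) * b[1])

target = cmath.exp(-abs(u)**2 / 2 - abs(v)**2 / 2
                   + u.conjugate() * cmath.exp(1j * la) * v)
n = 200000
assert abs(mu(n) ** n - target) < 1e-3
```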

It is therefore natural to use this normalization to define the operators B_n^o and the
discrete "Weyl-like" operators ¹ Ω_n(z, 2iλ) = exp(zB_n^+ - z̄B_n^- + 2iλB_n^o) involving the
number operator, and to discuss the convergence of their matrix elements between
discrete coherent vectors. More generally, it would be interesting 1) to study the
corresponding operators Ω_n(z, ζ, 2iλ) which are entire functions of three complex
variables, and reduce to the preceding ones for ζ = z̄ and λ real; this would justify
as above the convergence of all partial derivatives of the matrix elements ; 2) to discuss
the convergence of matrix elements for finite products of such operators. This program,
however, will be "left to the interested reader", and we consider only the simplest case
(see Accardi-Bach [AcB2] for a detailed discussion).
The limit we have to compute is

(3.1)   lim_n ( < ψ(u/√n), exp(2iλ b^o + k/√n) ψ(v/√n) > )^n

where we have put

k = z b^+ - ζ b^- .

We shall deduce this limit from formula (2.1). To abbreviate the writing, we also put

α = √(λ^2 + zζ/n) .

The matrix exp(2iλ b^o + k/√n) is equal to

e^{iλ} ( cos α - iλ sin α / α       -(ζ/√n) sin α / α    )
       ( (z/√n) sin α / α           cos α + iλ sin α / α ) .

Next, cos α ≈ cos λ - sin λ (zζ/2λn) and

λ sin α / α ≈ sin λ + (cos λ - sin λ/λ)(zζ/2λn) .

The leading term is therefore

(3.2)   w(z/√n, ζ/√n, 2iλ) ≈
        e^{iλ} ( e^{-iλ} - (izζ/2λn)(e^{-iλ} - sin λ/λ)      -(ζ/√n)(sin λ/λ)                    )
               ( (z/√n)(sin λ/λ)                              e^{iλ} + (izζ/2λn)(e^{iλ} - sin λ/λ) ) .

It remains to compute < ψ(u/√n), w(z/√n, ζ/√n, 2iλ) ψ(v/√n) >, to raise it to the
power n and take a limit. Neglecting second order terms in 1/n we have for the matrix
element (before raising to the power n)

1 - |v|^2/2n - |u|^2/2n + (zζ/2iλn)(1 - e^{iλ} sin λ/λ)
  + (ūz - ζv)(e^{iλ} sin λ/λ)/n + e^{2iλ} ūv/n .

¹ They are not exactly Weyl operators, and for this reason we avoid the notation W_n.

After raising to the power n and passing to the limit, the matrix element is the
exponential of

- |v|^2/2 - |u|^2/2 + (zζ/2iλ)(1 - e^{iλ} sin λ/λ) + (ūz - ζv) e^{iλ} sin λ/λ + ūv e^{2iλ} .

This looks complicated, and needs some comments. First of all, what we are supposed
to have computed is

< ψ(u), Ω(z, ζ, 2iλ) ψ(v) >   where   Ω(z, ζ, 2iλ) = e^{za^+ - ζa^- + 2iλa^o} .

The language of future chapters will replace the coherent vectors ψ(u), ψ(v) by exponential
vectors ℰ(u), ℰ(v) proportional to the preceding ones, but with a different normalization
so that < ℰ(u), ℰ(v) > = e^{< u,v >}. This replacement eliminates -(|u|^2 + |v|^2)/2 on the
left. This being done, let us put e_1(z) = (e^z - 1)/z and e_2(z) = (e^z - 1 - z)/z^2. The
preceding expression can be written

< ℰ(u), Ω(z, ζ, 2iλ) ℰ(v) >

where the right side is equal to

C ℰ(e^{2iλ} v + e_1(2iλ) z)   and   C = exp( -ζ (v e_1(2iλ) + z e_2(2iλ)) ) .

That is, the operators we have constructed act in a rather simple way on exponential
(or coherent) vectors : the argument undergoes an affine transformation, and a scalar
factor is applied. They are simply related to the general Weyl operators to be introduced
in Chapter IV, and we shall also see in Chapter V that the preceding formula can be
extended to infinite dimensional situations.
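The bookkeeping above can be double-checked numerically: the key identity is e^{iλ} sin λ/λ = e_1(2iλ), and with it the limiting exponent matches ū(e^{2iλ}v + e_1(2iλ)z) - ζ(v e_1(2iλ) + z e_2(2iλ)). A sketch (ours, arbitrary sample values):

```python
# Consistency check of the final formula via e1(z) = (e^z-1)/z, e2(z) = (e^z-1-z)/z^2.
import cmath

def e1(w): return (cmath.exp(w) - 1) / w
def e2(w): return (cmath.exp(w) - 1 - w) / w**2

u, v, z, zeta, la = 0.3 + 0.2j, -0.1 + 0.4j, 0.7 - 0.5j, 0.2 + 0.6j, 1.1
w = 2j * la
s = cmath.exp(1j * la) * cmath.sin(la) / la      # equals e1(2i la)
assert abs(s - e1(w)) < 1e-12

# exponent from the limit computation ...
lhs = (z * zeta / w) * (1 - s) + (u.conjugate() * z - zeta * v) * s \
      + u.conjugate() * v * cmath.exp(w)
# ... versus the exponent read off from C E(e^{2i la} v + e1(2i la) z)
rhs = u.conjugate() * (cmath.exp(w) * v + e1(w) * z) \
      - zeta * (v * e1(w) + z * e2(w))
assert abs(lhs - rhs) < 1e-12
```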
This concludes the discussion of the de Moivre-Laplace theorem for commuting
spins.

§4. ANGULAR MOMENTUM

As every reader of Feller's famous book knows, the most interesting object of study
in elementary probability theory is Bernoulli random walk, its zeros, maxima and
fluctuations. The subject has not yet been studied in a systematic way in the framework
of von Neumann probability theory, and the following discussion is a first step in this
direction. I am grateful to W. von Waldenfels for discussions which led to considerable
improvements.

Bernoulli random walk

1 As we have seen in §1, a non-commutative "Bernoulli random variable" is a triple of
elementary creation/annihilation/number operators a^+, a^-, a^o, or equivalently of Pauli
matrices x = a^+ + a^-, y = i(a^+ - a^-), z = I - 2a^o. Taking independent commuting
copies of these operators as in the preceding section, we define

(1.1)   X_v = x_1 + ... + x_v ,   Y_v = y_1 + ... + y_v ,   Z_v = z_1 + ... + z_v .

These three processes of selfadjoint operators satisfy at each time v the following angular
momentum commutation relations

(1.2)   [X_v, Y_v] = 2iZ_v ,   [Y_v, Z_v] = 2iX_v ,   [Z_v, X_v] = 2iY_v

(the last two relations will usually be replaced by the mention "circ. perm."). The
factor 2 appears here because Bernoulli random variables assume the values ±1 and
not ±1/2 ; it may be considered as another case of the probabilists' normalization ℏ = 2.
More generally

(1.3)   [X_m, Y_n] = 2iZ_{m∧n}   (circ. perm.)

The relation between these three processes is symmetric. In this section, we are going
to study these commutation relations for their own sake, and thus get new examples of
quantum probability spaces.
REMARK. We discussed in §3 the central limit behaviour of (1.1) in the vacuum state
(or in states close to it, due to our normalization of the state) : the symmetry between
the three processes disappears in the limit. In the vacuum state X and Y are Bernoulli
random walks, but Z_v is a.s. equal to v. Thus if we divide X_v and Y_v by √v and pass
to the limit formally in (1.2) we get (see the preceding section for a rigorous discussion)
two selfadjoint operators X and Y which satisfy a classical Heisenberg commutation
relation, while Z degenerates. Privileging other pure states amounts to the same up to a
rotation, and the symmetry again breaks. If we try to take a perfectly symmetric state,
namely on each C^2 factor the state with density matrix I/2, each one of the random
variables x_v, y_v, z_v then is a symmetric Bernoulli r.v. and the symmetric normalization
by √v works for all three, but then the commutation relation (1.2) degenerates and
the limit r.v.'s commute. On the other hand, physicists do consider, under the name of
"Heisenberg field", a continuous one dimensional system of spins in which the symmetry
is preserved.

Ladder spaces

2 We are going to present (with notations appropriate to our probabilistic applications)
the elementary theory of angular momentum. The subject can be found in all books on
quantum mechanics, with slightly different notations.
Consider three selfadjoint operators on some Hilbert space ℋ satisfying the relations

(2.1)   [X, Y] = 2iZ   (circ. perm.)

(the Lie algebra commutation relations for the group SU(2)). For simplicity, we assume
that ℋ is finite dimensional. Hence all operators are bounded, and no domain problems
arise.
The basic operator which classifies the representations is the spin operator J. Its
standard definition becomes in our notation

J(J + I) = (X^2 + Y^2 + Z^2)/4 ,

the factor 4 coming from the fact that our operators are twice those of the physicists.
Taking this factor to the left, and adding I to get a perfect square, it is clear that in our

situation the natural operator to consider is S = 2J + I (which has integer eigenvalues,
as we shall see). It is defined by

(2.2)   S^2 = X^2 + Y^2 + Z^2 + I ;

S will then be the positive square root of S^2. For instance, in the elementary case of §1,
we had X^2 = Y^2 = Z^2 = I, hence S = 2I. Expressing [Y^2, X] as Y[Y, X] + [Y, X]Y,
one sees that Y^2 + Z^2 commutes with X, and by symmetry S commutes with all three
operators. The same is then true of all projections in the spectral resolution of S.
Therefore, in an irreducible situation, i.e. one in which every subspace invariant under
X, Y, Z is trivial, a condition of the form S = σI must be fulfilled; S being a positive
operator, σ is a positive number.
In the next step we imitate the elementary spin case, introducing creation, annihi-
lation and number operators. Our definition is not exactly that of the physicists, our
apology for this (hateful) change of notation being coherence with the elementary spin
case, and with the harmonic oscillator in the limit. We define A^± and N by

(2.3)   X = A^+ + A^- ,   Y = i(A^+ - A^-) ,   Z = (S - I) - 2N .

Our creation/annihilation pair A^± corresponds to the physicists' J_±, and our Z is
2J_z in their notation. We call N the "number operator" for reasons to appear later (we
avoid here the notation A^o for the number operator). Using (2.3) a direct computation
gives

X^2 + Y^2 = S^2 - Z^2 - I = 2(A^- A^+ + A^+ A^-) ,

and combining this with 2Z = 2(A^- A^+ - A^+ A^-), we get formulas which simplify
nicely

(2.4)   A^+ A^- = N(S - N) ,   A^- A^+ = (N + I)(S - N - I) .

On the other hand, the basic commutation relations become (since S commutes with
the other operators)

(2.5)   [N, A^-] = -A^- ,   [N, A^+] = A^+ ,   [A^-, A^+] = Z = S - I - 2N .

A fundamental consequence of (2.5) is the following : if x is an eigenvector of N with
eigenvalue λ, A^± x is either 0, or an eigenvector of N with eigenvalue λ ± 1. Therefore,
the sequence of vectors A^{+m} x consists of mutually orthogonal vectors, and since the
space is finite dimensional it gets to 0 after finitely many steps. The same is true for
the downward sequence A^{-m} x.
Assume now that x also is an eigenvector of S with eigenvalue σ (such joint
eigenvectors exist since S and N commute). Starting from x and applying the powers
(A^+)^m upwards and the powers (A^-)^m downwards we get the finite spin ladder of x,
which generates linearly a ladder space. Since A^± and S commute it remains in the σ-
eigenspace of S. It follows from (2.4) that applying A^± to ladder vectors yields linear
combinations of ladder vectors. More precisely, the vectors x and A^+ x generate the
same ladder space, and we may therefore take as generator the bottom vector of the
ladder. We normalize it and call it x_0 : it plays the role of the vacuum vector 1. Then we
30 II. Spin

put x_m = (A^+)^m x_0. Calling x_k the last non-zero vector, the dimension of the ladder
space is k + 1.
Here is the information we already possess on the structure of the ladder space

N x_0 = λ x_0 ,   A^- x_0 = 0 ,   A^+ x_0 = x_1 ,

N x_1 = (λ + 1) x_1 ,   A^- x_1 = ? ,   A^+ x_1 = x_2 ,

N x_k = (λ + k) x_k ,   A^- x_k = ? ,   A^+ x_k = 0 .

We add to this the relations S x_i = a x_i. Applying (2.4) one may answer the question
marks

A^- x_i = A^- A^+ x_{i-1} = (N + I)(S - N - I) x_{i-1} = (λ + i)(a - λ - i) x_{i-1} .

Also, applying (2.4) to the relations A^+ A^- x_0 = 0 and A^- A^+ x_k = 0 we get

λ (a - λ) x_0 = 0 = (λ + k + 1)(a - λ - k - 1) x_k .

The choice λ = a in the first relation cannot satisfy the second one since λ ≥ 0.
Therefore we must choose λ = 0 and a = k + 1. Therefore a is a strictly positive
integer (the total number of ladder rungs), and the spin parameter j = (a - 1)/2
can take integer and half-integer values starting with 0, 1/2, .... Since λ = 0 we have
N x_j = j x_j, and N is a genuine number operator.
Note that Z = S - I - 2N takes integer values, odd for S even (half-integer spin),
even for S odd.
To determine the structure of the ladder space we must also know the scalar product :
the basis vectors are mutually orthogonal, and to compute their norms we use for the
first time the fact that A^+ and A^- are mutually adjoint. Then we have, using (2.4),

< x_{i+1}, x_{i+1} > = < A^+ x_i , A^+ x_i > = < x_i , A^- A^+ x_i > = (i + 1)(k - i) < x_i , x_i >

from which the relation < x_i , x_i > = i! k!/(k - i)! follows. The simplest formulas occur if we
take as non-normalized basis elements the vectors y_i = x_i / i!. Then we have y_i = 0 for
i > k, and for i ≤ k

(2.6)   N y_i = i y_i ,   A^- y_i = (k + 1 - i) y_{i-1} ,   A^+ y_i = (i + 1) y_{i+1} ,   < y_i , y_i > = (k choose i) .

Formulas (2.6) describe completely the structure of the ladder space, and it is a
matter of routine to check that a representation of the angular momentum relations
is indeed defined in this way. It is called the spin j = k/2 representation and commonly
denoted by D_j. It is easily shown to be irreducible. Indeed, any non-zero vector can be
transformed into a multiple of x_0 by a convenient annihilator, and then transformed
into any other non-zero vector by a convenient linear combination of creators. Hence
there is no proper invariant subspace.
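These relations are easy to test numerically. The sketch below (Python; the helper name `ladder_ops` is ours, not the text's) builds the matrices of N, A^+, A^- in the basis (y_i) of (2.6) and checks (2.4) and (2.5). The basis is not orthonormal, so adjointness is not visible here, but the algebraic identities are.

```python
import numpy as np

def ladder_ops(k):
    """Matrices of N, A+, A- in the basis y_0, ..., y_k of formulas (2.6)."""
    d = k + 1
    N = np.diag(np.arange(d, dtype=float))
    Ap, Am = np.zeros((d, d)), np.zeros((d, d))
    for i in range(k):
        Ap[i + 1, i] = i + 1          # A+ y_i = (i+1) y_{i+1}
        Am[i, i + 1] = k - i          # A- y_{i+1} = (k-i) y_i
    return N, Ap, Am

k = 3
N, Ap, Am = ladder_ops(k)
I = np.eye(k + 1)
S = (k + 1) * I                        # S = a I with a = k + 1
Z = S - I - 2 * N
# commutation relations (2.5)
assert np.allclose(N @ Am - Am @ N, -Am)
assert np.allclose(N @ Ap - Ap @ N, Ap)
assert np.allclose(Am @ Ap - Ap @ Am, Z)
# formulas (2.4)
assert np.allclose(Ap @ Am, N @ (S - N))
assert np.allclose(Am @ Ap, (N + I) @ (S - N - I))
print("relations (2.4) and (2.5) hold for the spin", k, "/ 2 ladder")
```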
Conversely, given any (finite-dimensional) representation of the angular momentum
commutation relations by selfadjoint operators X, Y, Z, one may choose a common
eigenvector of the commuting selfadjoint operators S and N, and construct the
4. Angular momentum 31

corresponding stable ladder space. It is easy to see that its orthogonal subspace is stable
too, so that we extract from it a new ladder space, etc. In this way the representation
is decomposed into a direct sum of ladder representations.
Each ladder space of dimension 2j + 1 carries a spin label j (integer or half-integer)
and a label a = 1, ... telling how many spaces of that dimension occur in the
representation. Within the ladder space itself it is traditional to label the basis by
the 2j + 1 values of J_z = Z/2 (magnetic quantum number m), which increase by 1
through -j, -j+1, ..., j-1, j. This leads to an orthonormal basis of the Hilbert space
H labelled | a, j, m > in Dirac's notation.
It is true on any ladder space of spin j that the greatest eigenvalue of Z is k = 2j.
Thus it is true for the whole representation that the greatest eigenvalue of Z and the
greatest eigenvalue of J are related by

(2.7)   2 J_max = Z_max ,

and in fact the common eigenvectors for Z, J with eigenvalue Z_max for Z must have
the eigenvalue J_max for J.

Some comments

3 We point out here some analogies between the angular m o m e n t u m space and the
harmonic oscillator (anticipating on chapter III) and a few additional remarks. This
subsection may be skipped without harm.
a) The central limit theorem in §3 suggests to study operators of the form

e^{p^+ A^+ + p^- A^- + qZ} ,

which for p^+ = z, p^- = -z̄ and q purely imaginary will be unitary "Weyl operators".
These exponentials can be explicitly computed as a product

e^{r^+ A^+} e^{sZ} e^{r^- A^-} .

This is called the disentangling formula for angular momentum operators. It is essentially
the same as in the elementary case since operators at different sites commute, and
one may find it in Appendix A to the paper [ACGT] of Arecchi et al., p. 703 of the
book of Klauder and Skagerstam [KlS] (remembering that J_± = A^∓, 2J_z = Z).
These Weyl operators (with q = 0) also appear on p. 690 of [KlS], formula (3.7a).
The same trend of ideas leads to a family of coherent vectors (called Bloch coherent
states). The corresponding unnormalized exponential vectors are given by the formula
(cf. [KlS] p. 691, formula (3.12))

E(z) = e^{z A^+} 1 = Σ_{j=0}^{k} (z^j / j!) x_j .

Using (2.6) it is easy to find that

< E(z), E(z') > = (1 + z̄ z')^k .



These vectors are also studied in a paper by Radcliffe in the same book. There are other
sets of coherent and exponential vectors of physical interest.
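The last identity reduces to the binomial theorem, since E(z) = Σ_j z^j y_j and < y_j , y_j > = (k choose j). A quick numeric check (Python; variable names are ours):

```python
from math import comb

k = 5
z, w = 0.3 + 0.2j, -0.1 + 0.4j
# <E(z), E(w)> = sum_j conj(z)^j w^j <y_j, y_j>, with <y_j, y_j> = C(k, j)
lhs = sum(comb(k, j) * (z.conjugate() * w) ** j for j in range(k + 1))
rhs = (1 + z.conjugate() * w) ** k
assert abs(lhs - rhs) < 1e-12
print("binomial identity behind <E(z), E(z')> = (1 + conj(z) z')^k checked")
```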
b) For large values of S the ladder representation is close to the quantum harmonic
oscillator. To see this let us perform a change of operators and basis vectors : we put
S = k + 1 and

a^+ = A^+ / √k ,   a^- = A^- / √k ,

and replace x_j by h_j = (a^+)^j x_0, so that [a^-, a^+] = I - 2N/k. The operators and the
Hilbert space structure are described as follows in the new representation

a^+ h_j = h_{j+1} ,   N h_j = j h_j ,   a^- h_j = j (1 - (j-1)/k) h_{j-1} ,
‖h_j‖^2 = j! (1 - 1/k)(1 - 2/k) ... (1 - (j-1)/k) .

For fixed j and large k this tends to the description of the harmonic oscillator.
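The convergence of the norms is easy to watch numerically; a small sketch (Python, with a hypothetical helper `h_norm_sq` implementing the displayed product):

```python
from math import factorial

def h_norm_sq(j, k):
    """||h_j||^2 = j! (1 - 1/k)(1 - 2/k) ... (1 - (j-1)/k)."""
    out = float(factorial(j))
    for i in range(1, j):
        out *= 1 - i / k
    return out

j = 4
for k in (10, 100, 10_000):
    print(k, round(h_norm_sq(j, k), 4))   # tends to j! = 24 as k grows
assert abs(h_norm_sq(j, 10_000) - factorial(j)) < 0.02 * factorial(j)
```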
c) The case k = 1 was the analogue of simple Bernoulli space in quantum probability.
One can consider the angular momentum space with spin j as the quantum probabilistic
analogue of the measurable space with ν = 2j + 1 points, but this is not the most natural
identification. The natural one consists in taking as Hilbert space the L^2 space of the
uniform distribution on ν points, i.e. the Haar measure on the group G = ℤ/νℤ. This
L^2 space carries two simple unitary operators

U f(m) = f(m - 1) ,   V f(m) = ζ^m f(m) ,

where ζ is a primitive ν-th root of unity. They satisfy a kind of Weyl commutation
relation

V^p U^q = ζ^{pq} U^q V^p ,

and one may develop a discrete theory which closely parallels that of the canonical pair
(Chapter III). See the paper [CCS] by Cohendet et al. in the references.
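The pair U, V is easy to realize as ν × ν matrices, and the commutation relation can be checked directly; a minimal sketch (Python, assuming the shift/multiplication definitions above):

```python
import numpy as np

nu = 5
zeta = np.exp(2j * np.pi / nu)            # a primitive nu-th root of unity
U = np.roll(np.eye(nu), 1, axis=0)        # U f(m) = f(m - 1): cyclic shift
V = np.diag(zeta ** np.arange(nu))        # V f(m) = zeta^m f(m)
p, q = 2, 3
Vp = np.linalg.matrix_power(V, p)
Uq = np.linalg.matrix_power(U, q)
assert np.allclose(Vp @ Uq, zeta ** (p * q) * (Uq @ Vp))
print("discrete Weyl relation V^p U^q = zeta^{pq} U^q V^p checked")
```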
d) We are careful to work on a finite tensor product Φ_ν = (ℂ^2)^{⊗ν}, while the natural
tendency in probability theory is to jump at once to the L^2 space of an infinite random
walk. There is a difficulty here in von Neumann probability, of which our reader should
be aware, since it is the first place where the elementary Hilbert space probability
becomes insufficient, and the need for C*-algebraic probability is felt. We can take an
infinite product of measures, but not of Hilbert spaces; or rather, there exist infinite
tensor products depending on the choice of a state in each Hilbert space, playing the
role of a measure. For details, see the basic article [vN2] of von Neumann.

Addition of angular momenta

4 Consider independent copies of two representations of the angular momentum
commutation relations, X_i, Y_i, Z_i (i = 1, 2), on Hilbert spaces H_i. This means, as
in classical probability, that we form the tensor product H = H_1 ⊗ H_2, and identify X_1
with X_1 ⊗ I_2, X_2 with I_1 ⊗ X_2, etc. Then taking X = X_1 + X_2, etc., we get a new
representation X, Y, Z, whose structure we want to describe, assuming that the factor
representations are irreducible, with spins j_1, j_2. Note that in this case the values of
Z = Z_1 + Z_2 either are all even or all odd : hence all the spin ladders occurring in the
decomposition are of the same kind, all with half-integer or integer spin.

The addition theorem for angular momenta asserts that the possible values j for the
spin of the sum are given by the inequalities

| j_1 - j_2 | ≤ j ≤ j_1 + j_2 ,


and that each one of these values occurs exactly once in the new representation. We
shall prove it in detail in the case j_2 = 1/2 only. We may consider Z_1 and Z_2 (two
commuting selfadjoint operators) as two ordinary functions, and thus the multiplicity of
an eigenvalue z of Z is the number of decompositions z = z_1 + z_2 of z into admissible
values z_1, z_2 (-2j_i ≤ z_i ≤ 2j_i). Since z_2 = ±1, this number is at most 2. Looking at
the multiplicity of the eigenvalue 0 or 1, which appears in all spin ladders, it is easy to
see that there are at most 2 spin ladders, and their sizes are deduced from the relation

2 (2j_1 + 1) = 2j_1 + (2j_1 + 2) .

The trivial case j_1 = 0 was implicitly excluded. The general addition theorem is proved
essentially in the same way, assuming j_1 ≥ j_2 for instance : there are at most 2 j_2 + 1
spin ladders, and their sizes are deduced from the relation

(2j_1 + 1)(2j_2 + 1) = Σ_{j = |j_1 - j_2|}^{j_1 + j_2} (2j + 1) .

The consequence of interest for us is that, if we add a spin 1/2 representation to any
representation H, every spin ladder in H of spin j gives birth to two spin ladders of
spins j ± 1/2, except for j = 0 which generates one single ladder of spin 1/2.
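The case j_2 = 1/2 can also be verified by brute force: build the coupled operators on the tensor product, form S^2 = X^2 + Y^2 + Z^2 + I, and read off the eigenvalues (2j+1)^2 with their multiplicities. A sketch (Python; the basis (2.6) is not orthonormal, but the spectrum of S^2 does not care):

```python
import numpy as np
from collections import Counter

def rep(k):
    """A+, A-, Z for the spin j = k/2 ladder, in the basis of (2.6)."""
    d = k + 1
    Ap, Am = np.zeros((d, d)), np.zeros((d, d))
    for i in range(k):
        Ap[i + 1, i] = i + 1          # A+ y_i = (i+1) y_{i+1}
        Am[i, i + 1] = k - i          # A- y_{i+1} = (k-i) y_i
    Z = k * np.eye(d) - 2 * np.diag(np.arange(d, dtype=float))
    return Ap, Am, Z

k1, k2 = 4, 1                          # j1 = 2 coupled with j2 = 1/2
(A1, B1, Z1), (A2, B2, Z2) = rep(k1), rep(k2)
I1, I2 = np.eye(k1 + 1), np.eye(k2 + 1)
Ap = np.kron(A1, I2) + np.kron(I1, A2)
Am = np.kron(B1, I2) + np.kron(I1, B2)
Z = np.kron(Z1, I2) + np.kron(I1, Z2)
X, Y = Ap + Am, 1j * (Ap - Am)
S2 = X @ X + Y @ Y + Z @ Z + np.eye((k1 + 1) * (k2 + 1))
eig = Counter(int(round(v)) for v in np.linalg.eigvals(S2).real)
# spins j1 +/- 1/2 = 5/2 and 3/2: eigenvalues (2j+1)^2 = 36 and 16, each spin once
assert eig == Counter({36: 6, 16: 4})
print("spin 2 (x) spin 1/2 = spin 5/2 (+) spin 3/2")
```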
Another topic which may be mentioned here (though it is not necessary for the
sequel) is that of Clebsch-Gordan coefficients for the addition of two spins j_1 and j_2.
First of all let us rewrite the structure of the spin j ladder space using the standard
physicists' orthonormal basis | j, m >

A^+ | j, m > = ρ(j, m) | j, m+1 > ,   A^- | j, m > = ρ(j, m-1) | j, m-1 > ,
Z | j, m > = 2m | j, m > .

The value of ρ(j, m) is √((j - m)(j + m + 1)). For each representation H_i we have a
basis for H_i consisting of the vectors | j_i, m_i >, with m_i ranging from -j_i to j_i.
Then we have for the tensor product H a basis | j_1, m_1 > ⊗ | j_2, m_2 >, which is simply
denoted by | j_1, m_1, j_2, m_2 >. On the other hand, H has its own spin basis | j, m > with
j ranging from | j_1 - j_2 | to j_1 + j_2. The Clebsch-Gordan coefficients C(j_1 m_1, j_2 m_2; j m)
give the decomposition of this spin basis over the tensor product basis.
give the decomposition of this spin basis over the tensor product basis.
They can be computed as follows. We express A^+ | j, m > in two ways, first as

ρ(j, m) | j, m+1 > = ρ(j, m) Σ_{k_1 + k_2 = m+1} C(j_1 k_1, j_2 k_2; j m+1) | j_1 k_1, j_2 k_2 > ,

and on the other hand as

Σ_{m_1 + m_2 = m} C(j_1 m_1, j_2 m_2; j m) A^+ | j_1 m_1, j_2 m_2 > .

Then the last vector is replaced by its value

ρ(j_1, m_1) | j_1 m_1+1, j_2 m_2 > + ρ(j_2, m_2) | j_1 m_1, j_2 m_2+1 > ;

from which we easily deduce the following induction formula : if k_1 + k_2 = m + 1

(4.1)   C(j_1 k_1, j_2 k_2; j m+1) = (ρ(j_1, k_1 - 1)/ρ(j, m)) C(j_1 k_1-1, j_2 k_2; j m)
                                   + (ρ(j_2, k_2 - 1)/ρ(j, m)) C(j_1 k_1, j_2 k_2-1; j m) .

If one gets down to k_1 = -j_1 or k_2 = -j_2, the right hand side contains only
one term, and the downward induction continues in the same way to the last step
C(j_1 -j_1, j_2 -j_2; j -j) = 1. Thus all the coefficients can be computed (more efficient
methods have been devised for practical applications). Using A^- instead of A^+ would
give a similar upward induction formula.
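The value of ρ can be recovered from (2.6): in the orthonormalized basis e_m = y_{j+m} / √(2j choose j+m) one finds A^+ e_m = √((j-m)(j+m+1)) e_{m+1}. A numeric check (Python, with doubled integer arguments to avoid half-integers):

```python
from math import comb, sqrt

def rho(j2, m2):
    """rho(j, m) = sqrt((j - m)(j + m + 1)); arguments are doubled: j2 = 2j."""
    return sqrt((j2 - m2) * (j2 + m2 + 2)) / 2

j2 = 3                                  # j = 3/2, so k = 2j = 3
for m2 in range(-j2, j2, 2):            # m = -j, ..., j - 1
    i = (j2 + m2) // 2                  # y-basis index, m = i - j
    # A+ y_i = (i+1) y_{i+1}; renormalize by sqrt(<y_i, y_i>) = sqrt(C(2j, i))
    coeff = (i + 1) * sqrt(comb(j2, i + 1) / comb(j2, i))
    assert abs(coeff - rho(j2, m2)) < 1e-12
print("rho(j, m) = sqrt((j-m)(j+m+1)) agrees with (2.6)")
```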

Structure of the Bernoulli case

5 We are going to apply the preceding general results to the quantum Bernoulli random
walk. Let H be the L^2 space of the elementary symmetric Bernoulli measure, with
its basis 1, x. The representation space is Φ_ν = H^{⊗ν}, identified with the L^2 space
of the classical Bernoulli random walk up to time ν (toy Fock space). At each time
ν the angular momentum representation is provided by the three random variables
X_ν, Y_ν, Z_ν,

(5.1)   X_ν = x_1 + ... + x_ν ,   etc.

and the corresponding creation and annihilation operators A_ν^+, A_ν^- are just the sums
of the elementary creation and annihilation operators at times i ≤ ν. The Pauli matrix
σ_z keeps 1 fixed and maps x into -x. Hence on the usual o.n. basis x_A (z_A would
be better here !) of toy Fock space, Z_ν acts as follows

(5.2)   Z_ν x_A = ( |A^c| - |A| ) x_A = ( ν - 2|A| ) x_A .

So the eigenspaces of Z_ν are the chaos of toy Fock space, the eigenvalue being ν - 2p on
the p-th chaos C_p. Since S_ν commutes with Z_ν it keeps C_p stable, and decomposing
the representation amounts to finding the eigenvectors of S in each C_p. Recalling that
Z = S - I - 2N = 2J - 2N, we see that for ν even (odd) all components have integer
(half-integer) spin. We have
A_ν^+ x_A = Σ_{i ∉ A} x_{A∪{i}} ,   A_ν^- x_A = Σ_{i ∈ A} x_{A\{i}} .
By (2.7) the greatest allowed value of spin is exactly ν/2, and corresponds to the chaos
C_0 of order 0. The vacuum vector 1 ∈ C_0 generates a ladder of spin ν/2 (i.e. of size
ν + 1) : the vector (A^+)^i 1 of this ladder is the symmetric polynomial of degree i in the
Bernoulli r.v.'s, conveniently normalized. On the first chaos C_1, the eigenvalue of Z
is ν - 2, and therefore any ladder built on an eigenvector of J in C_1 has at least
ν - 1 rungs. Thus the ladder size can only be ν - 1 or ν + 1, and to have ν + 1 steps it is
necessary to push the foot of the ladder down to C_0. This corresponds to the ladder

of the vacuum vector, and therefore we have (ν choose 1) - 1 ladders of size ν - 1. Similarly in the
second chaos we find (ν choose 2) - (ν choose 1) ladders of size ν - 3, etc. Finally, the decomposition of
the representation can be described by the following drawing

[Drawing: the spin ladders for ν = 2, 3, 4. For ν = 2 : one ladder of size 3 and one of
size 1; for ν = 3 : one ladder of size 4 and two of size 2; for ν = 4 : one ladder of size 5,
three of size 3 and two of size 1.]
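The ladder counts just derived can be tabulated: the p-th chaos contributes (ν choose p) - (ν choose p-1) ladders of size ν - 2p + 1, and the dimensions must add up to 2^ν. A sketch (Python; the helper name is ours):

```python
from math import comb

def ladders(nu):
    """(size, multiplicity) of the spin ladders in the time-nu Bernoulli rep."""
    out = []
    for p in range(nu // 2 + 1):          # foot of the ladder in the p-th chaos
        size = nu - 2 * p + 1             # spin j = nu/2 - p
        mult = 1 if p == 0 else comb(nu, p) - comb(nu, p - 1)
        out.append((size, mult))
    return out

for nu in (2, 3, 4):
    L = ladders(nu)
    # dimensions of all ladders add up to dim (C^2)^{(x) nu} = 2^nu
    assert sum(size * mult for size, mult in L) == 2 ** nu
    print(nu, L)
```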

A commutative process of spins

6 At each time ν let us define in the standard way

(5.3)   S_ν^2 = X_ν^2 + Y_ν^2 + Z_ν^2 + I .

S_ν commutes with X_ν, Y_ν, Z_ν ; on the other hand it commutes with all x_k, y_k, z_k for
k > ν, hence also with X_k, Y_k, Z_k, and finally with S_k. Then it is very easy to see that
(S_ν) is a commutative process. However, the two processes (Z_ν) and (S_ν) commute at
each fixed time, but not between different times.
The representation at time ν + 1 is the tensor product of the representations at
time ν and a spin 1/2 representation, and by the addition theorem each spin ladder
of size s = 2j + 1 at time ν generates two ladders of sizes s ± 1 at time ν + 1,
or one exceptionally if s = 1. Thus we may describe the time evolution of spin by
discrete paths ω from the integers 0, ..., ν to the integers ≥ 1, which start from 1
at time 0 and increase by ±1 between successive integers; each path at time ν with
ω(ν) > 1 can be extended in two ways to time ν + 1, and in just one way if ω(ν) = 1.
Thus the irreducible components in the Bernoulli representation at time ν are in 1-1
correspondence with the paths ω, the value of S_ν = 2J_ν + I being ω(ν).
Each path corresponds to a spin ladder of the representation, and knowing the path
provides us with the two labels a, j in the standard description. Thus to completely
describe the basis we only need the label m, which corresponds to knowing Z_ν.
Therefore the path (S_k)_{k≤ν} and the operator Z_ν form a complete observable.
If we choose a state, we may talk about the probability law of the process (S_k).
It is trivial in the vacuum state, since then Z_k = k, the maximum allowed value, and
therefore S_k = k + 1. Ph. Biane computed this law in the tracial state, under which the
probability distribution of a self-adjoint operator A on ℂ^{2^ν} with eigenvalues a_i and
eigenspaces E_i is given by

P{A = a_i} = 2^{-ν} dim(E_i) .

Since the dimension of the ladder space corresponding to a path {S_1 = s_1, ..., S_ν = s_ν}
is equal to s_ν, the corresponding probability is 2^{-ν} s_ν. Then it is very easy to see that
the process (S_k) is a Markov chain on the integers, with S_1 = 2 and for k > 1

P{S_{ν+1} = k+1 | S_ν = k} = (k+1)/2k ,   P{S_{ν+1} = k-1 | S_ν = k} = (k-1)/2k ,

while if S_ν = 1 one a.s. has S_{ν+1} = 2. This Markov chain is the discrete analogue of a
Bessel(3) process.
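One can check by enumeration that these transition probabilities reproduce the tracial weights: every admissible path of length ν gets probability 2^{-ν} S_ν, and the total mass is 1. A sketch (Python):

```python
from itertools import product

def step_prob(k, kn):
    """One-step transition probability of the spin chain (S_k)."""
    if k == 1:
        return 1.0 if kn == 2 else 0.0
    if kn == k + 1:
        return (k + 1) / (2 * k)
    if kn == k - 1:
        return (k - 1) / (2 * k)
    return 0.0

nu = 6
total = 0.0
for steps in product((1, -1), repeat=nu - 1):
    path = [2]                                  # S_1 = 2
    for d in steps:
        path.append(path[-1] + d)
    if min(path) < 1:
        continue
    p = 1.0
    for k, kn in zip(path, path[1:]):
        p *= step_prob(k, kn)
    assert abs(p - 2.0 ** (-nu) * path[-1]) < 1e-12   # tracial weight 2^{-nu} S_nu
    total += p
assert abs(total - 1.0) < 1e-12
print("tracial path weights recovered, total mass 1, for nu =", nu)
```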
For a computation concerning general product states, see the papers of Biane [Bia]
and von Waldenfels [vW2]. It would be very interesting to know more about non
commutative conditioning in the Bernoulli setup.

§5. ANTICOMMUTING SPIN SYSTEMS

This section provides an example of a truly non-commutative situation, of great


physical interest. It also constitutes an elementary introduction to some aspects of
the theory of Clifford algebras. Its starting point was the appendix in the book by
D. Kastler [Kas].

Fermion creation and annihilation operators

1 Let us recall the basic notations. We start with a family of N independent,
identically distributed, symmetric Bernoulli random variables x_k. The canonical sample
space to realize them is {-1, 1}^N equipped with the measure μ which ascribes the
weight 2^{-N} to every point, and the corresponding L^2 space is the toy Fock space Φ.
We denote by T the "time set" {1, ..., N}, and by P the set of all subsets of T. The
space Φ has an o.n. basis consisting of all products x_A = Π_{k∈A} x_k, A ranging over P.
We introduce some combinatorial definitions : given a subset A and an index k ∈ T,
n(k, A) is the number of elements of A which are strictly smaller than k (that is, the
number of strictly decreasing pairs one gets by writing first k, then the elements of A
in increasing order). Similarly, n'(k, A) is the number of elements of A strictly larger
than k, so that n(k, A) + n'(k, A) = |A| - I_A(k). Given now two subsets A and B, we
define n(A, B) = Σ_{k∈A} n(k, B) (the total number of inversions when one writes first
A, then B), and similarly n'(A, B) = n(B, A). It is clear that

(1.1)   n(A, B) + n(B, A) = |A| |B| - |A ∩ B| .

In particular, n(A, A) = |A| (|A| - 1)/2.


We define new creation and annihilation operators, which differ from those of the §2
subsection 2 by the presence of alternating signs

b_k^+ (x_A) = (-1)^{n(k,A)} x_{A∪{k}} if k ∉ A, = 0 otherwise (creation)

b_k^- (x_A) = (-1)^{n(k,A)} x_{A\{k}} if k ∈ A, = 0 otherwise (annihilation)

These operators are mutually adjoint for every k. It is easy to verify that (denoting
anticommutators by {, })

(1.2)   {b_j^+, b_k^+} = 0 = {b_j^-, b_k^-} ;   {b_j^-, b_k^+} = δ_{jk} I ;



the meaning of these formulas being that each pair (b_k^+, b_k^-) is a spin, and all these
spins anticommute. These relations are the CAR (canonical anticommutation relations)
in their exact form, unlike the CCR in §2. For this reason, there is no "toy Fock space"
in the antisymmetric case, only a finite dimensional Fock space.
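The relations (1.2) can be verified by brute force for small N, representing each basis vector x_A by the subset A (Python sketch; helper names are ours):

```python
import numpy as np
from itertools import combinations

N = 3
subsets = [frozenset(c) for r in range(N + 1)
           for c in combinations(range(1, N + 1), r)]
idx = {A: i for i, A in enumerate(subsets)}
dim = len(subsets)                       # 2^N

def n(k, A):
    """Number of elements of A strictly smaller than k."""
    return sum(1 for a in A if a < k)

def b(k, sign):
    """Matrix of b_k^+ (sign=+1) or b_k^- (sign=-1) in the basis (x_A)."""
    M = np.zeros((dim, dim))
    for A in subsets:
        if sign > 0 and k not in A:
            M[idx[A | {k}], idx[A]] = (-1) ** n(k, A)
        if sign < 0 and k in A:
            M[idx[A - {k}], idx[A]] = (-1) ** n(k, A)
    return M

I = np.eye(dim)
for j in range(1, N + 1):
    for k in range(1, N + 1):
        bpj, bmj, bpk, bmk = b(j, 1), b(j, -1), b(k, 1), b(k, -1)
        assert np.allclose(bpj @ bpk + bpk @ bpj, 0)             # {b_j^+, b_k^+} = 0
        assert np.allclose(bmj @ bmk + bmk @ bmj, 0)             # {b_j^-, b_k^-} = 0
        assert np.allclose(bmj @ bpk + bpk @ bmj, (j == k) * I)  # {b_j^-, b_k^+}
        assert np.allclose(bpj.T, bmj)                           # mutual adjointness
print("CAR relations (1.2) hold for N =", N)
```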
The operator b_k^+ b_k^- is the same as the operator a_k^° of toy Fock space, but we denote
it by b_k^° for consistency; it is called as before the number operator.
For each k we introduce, as in the case of toy Fock space, self-adjoint linear
combinations of creation and annihilation operators

(1.3)   r_k = b_k^+ + b_k^- ;   s_k = i (b_k^+ - b_k^-) .


These operators may be considered as a pair of Pauli matrices σ_x, σ_y, the third matrix
being σ_z = I - 2 b_k^°. Spins relative to different sites all anticommute. Given a subset
A = {i_1 < i_2 < ... < i_n} ∈ P we define r_A = r_{i_1} ... r_{i_n} (r_∅ = I by convention), and
we define similarly s_A and b_A^ε where ε = -, °, +. The ordering of the time set is
important because of anticommutativity : for instance, the adjoint of b_A^+ is not b_A^-, but
(-1)^{n(A,A)} b_A^-. The algebra of operators generated by all the b_k^± and I consists of all
operators on Φ, the proof being the same as in §2. Using the anticommutation relations
one can show that the operators b_A^+ b_B^- define a basis for all operators. Another useful
basis consists of all operators r_A s_B (the proof is easy, and left to the reader).
2 The fermion operators b_k^± can be constructed from the toy boson operators a_k^±
through the Jordan-Wigner transformation. The measure μ is a product of N Bernoulli
measures, one for each coordinate, and thus L^2(μ) is a tensor product of N elementary
Bernoulli spaces, each site i carrying its three Pauli matrices σ_x(i), σ_y(i), σ_z(i) as in
§1. In particular, the operators σ_x(i), σ_y(i) are the same as the toy boson selfadjoint
operators q_i, p_i (commuting between different sites) from §2. Then we have

r_1 = σ_x ⊗ I ⊗ I ... ,   s_1 = σ_y ⊗ I ⊗ I ... ,
r_2 = σ_z ⊗ σ_x ⊗ I ⊗ I ... ,   s_2 = σ_z ⊗ σ_y ⊗ I ⊗ I ... ,
r_3 = σ_z ⊗ σ_z ⊗ σ_x ⊗ I ⊗ I ... ,   s_3 = σ_z ⊗ σ_z ⊗ σ_y ⊗ I ⊗ I ... ,

and so on. For N = 2 we get in this way four 4 x 4 matrices, which are (one of the
possible representations of) the famous Dirac matrices. This explicit construction of
anticommuting matrices from tensor products, i.e. from matrices which commute, can
be interpreted as a stochastic integral representation of the "fermion martingales"
be interpreted as a s t o c h a s t i c integral r e p r e s e n t a t i o n of the "fermion martingales"

R_k = r_1 + ... + r_k ,   S_k = s_1 + ... + s_k ,

by means of the "toy boson" martingales

Q_k = q_1 + ... + q_k ,   P_k = p_1 + ... + p_k .

This representation can be inverted, and leads to a stochastic integral construction of


"toy bosons" from fermions.
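The Jordan-Wigner formulas can be written down and tested directly as Kronecker products (Python sketch; `jw` is our helper name):

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sy = np.array([[0., -1j], [1j, 0.]])
sz = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def jw(N, k, tail):
    """sigma_z on sites 1..k-1, `tail` at site k, identity on sites k+1..N."""
    M = np.eye(1)
    for site in range(1, N + 1):
        M = np.kron(M, sz if site < k else (tail if site == k else I2))
    return M

N = 3
ops = [jw(N, k, sx) for k in range(1, N + 1)] + \
      [jw(N, k, sy) for k in range(1, N + 1)]
for a, A in enumerate(ops):
    for b, B in enumerate(ops):
        anti = A @ B + B @ A
        target = 2 * np.eye(2 ** N) if a == b else np.zeros_like(anti)
        assert np.allclose(anti, target)
print("2N Jordan-Wigner operators pairwise anticommute, each squares to I")
```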
This construction has been used by Onsager to compute the partition function of the
two-dimensional Ising model by means of spin matrices. Thus anticommutative algebra
may be useful in commutative problems.

3 Let us write down a few formulas concerning the fermion operators.

(3.1)   r_k x_B = (-1)^{n(k,B)} x_{B Δ {k}} ,   hence   r_A x_B = (-1)^{n(A,B)} x_{A Δ B} .

One may also compute b_A^ε x_B as in §2 (3.1). We have

(3.2)   r_A r_B = (-1)^{n(A,B)} r_{A Δ B} = (-1)^{n(A,B) + n'(A,B)} r_B r_A .

In particular, we have r_A^2 = (-1)^{n(A,A)} I. On the other hand, direct computation from
(1.3) gives

s_k x_B = i (-1)^{n(k,B)} ( I_{B^c}(k) - I_B(k) ) x_{B Δ {k}} .

The middle difference has the value 1 for k ∉ B, -1 for k ∈ B, and can be written
(-1)^{n(k,B) + n'(k,B) + |B|}. It follows that

(3.3)   s_k x_B = i (-1)^{n'(k,B)} (-1)^{|B|} x_{B Δ {k}} .

It does not seem necessary to write explicitly the formula for s_A x_B, but we have the
same relation as (3.2) with s instead of r. On the other hand, we have

(3.4)   s_B r_A = (-1)^{|A| |B|} r_A s_B .

We shall summarize these results in the next section, as follows : the algebra of all
operators on Bernoulli space can be considered as a Clifford algebra, generated by
all r_i, s_i together, and the r_i and s_i taken separately also generate two (isomorphic)
Clifford algebras.
All these results have parallels in continuous time.

Clifford algebras

4 In toy Fock space theory, q_k was the multiplication operator by the Bernoulli r.v. x_k.
We are going to interpret similarly the operators r_k as multiplication by x_k in a non-commutative
algebra structure.
The space of all operators on toy Fock space has a basis consisting of the operators
r_A s_B. A look at (3.2) shows that linear combinations of the operators r_A alone
(including I) form a subalgebra. The composition of operators being associative, and
the operators r_A being linearly independent, (3.2) is the complete multiplication table
for an associative algebra. Since the vectors x_A form a basis of Φ, we may carry this
multiplication back to Φ itself : x_A is read as a product x_{i_1} ... x_{i_n}, indexed by the
elements of A in increasing order, and then we have (in close analogy with formula (1.4)
of §2)

(4.1)   x_A x_B = (-1)^{n(A,B)} x_{A Δ B} .

The formula

x_k x_B = (-1)^{n(k,B)} x_{B Δ {k}}

then shows that r_k is the left multiplication operator by x_k. We computed above the
adjoint of the operator r_A, which belongs to the same subalgebra. Thus we may also
provide Φ with an involution

(4.2)   (x_A)* = (-1)^{n(A,A)} x_A .

If we also carry back to Φ the usual norm for operators, Φ becomes a normed algebra
(a C*-algebra).
The concrete *-algebra we have thus constructed will be called the complex Clifford
algebra C(N) with N generators. We are going to widen this definition, characterizing
this algebra up to isomorphism, first using a system of generators, and then in an
intrinsic way.
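The sign rule (4.1) does define an associative product; for small N this can be confirmed by checking all triples (Python sketch; the pair (sign, subset) stands for ± x_A):

```python
from itertools import combinations, product

N = 3
subsets = [frozenset(c) for r in range(N + 1)
           for c in combinations(range(1, N + 1), r)]

def n(A, B):
    """Inversions when writing the elements of A before those of B."""
    return sum(1 for a in A for b in B if a > b)

def cliff(u, v):
    """Product of signed basis vectors (sign, A) under the table (4.1)."""
    sa, A = u
    sb, B = v
    return (sa * sb * (-1) ** n(A, B), A ^ B)   # A ^ B = symmetric difference

for A, B, C in product(subsets, repeat=3):
    left = cliff(cliff((1, A), (1, B)), (1, C))
    right = cliff((1, A), cliff((1, B), (1, C)))
    assert left == right
print("the table (4.1) gives an associative product on", len(subsets), "basis vectors")
```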
5 Let ν be an integer, and let A be an algebra over ℂ, generated (as an algebra) by
its unit 1 and by generators γ_k (k = 1, ..., ν) satisfying the following relations

(5.1)   γ_k^2 = 1 ;   γ_j γ_k + γ_k γ_j = 0   (j ≠ k) .

Given a subset A = {j_1 < ... < j_n} ⊂ {1, ..., ν} we define as above γ_A = γ_{j_1} ... γ_{j_n},
and it is easy to see that

(5.2)   γ_A γ_B = (-1)^{n(A,B)} γ_{A Δ B}

as in (4.1). Hence linear combinations of the elements γ_A form a subalgebra, which
obviously contains the γ_k and 1, and therefore is the whole algebra. If the elements γ_A
are linearly free, a situation which arises in the concrete case considered above, (5.2) is
the complete multiplication table of the algebra, which is then completely described up
to isomorphism, and called a Clifford algebra C(ν)^1, of dimension 2^ν. The last result
in subsection 3 can now be stated as follows.
The operators r_k and s_k taken separately generate an algebra C(N); taken together
they generate an algebra C(2N).
One can prove that the italicized condition about the elements γ_A being linearly free
is unnecessary for even ν. Indeed, assume ν = 2N and consider the mapping which
assigns to the operators r_1, ..., r_N, s_1, ..., s_N the elements γ_1, ..., γ_ν. Since the two
algebras have the same multiplication rules, it can be extended to a homomorphism of
the algebra generated by all r_j, s_k, which is the algebra of all linear operators on Φ, to
the algebra A. The kernel of a homomorphism is a two-sided ideal, and it is a classical
(and not difficult) result that a full matrix algebra is simple, i.e. has no two-sided ideals
except zero and the algebra itself. So the homomorphism is an isomorphism, and we see
that for even ν, C(ν) is isomorphic to the full matrix algebra over a space of dimension
2^{ν/2}, and is a simple algebra.
Let us anticipate a little to describe the case of odd ν. We shall construct in
subsection 9 a linear operator σ on Φ such that σ^2 = I, and which anticommutes with all r_j, s_k.
Since the r_A s_B are sufficient to generate the full matrix algebra, the algebra generated

1 We have learnt much about Clifford algebras from excellent mimeographed notes by
J. Helmstetter, Algèbres de Clifford et Algèbres de Weyl, Cahiers Math. Université de
Montpellier n° 25, 1982. We are far from exhausting their contents.

by the r_i, s_j and σ cannot be the Clifford algebra C(2N + 1), which algebra thus
admits a non-trivial homomorphic image, and therefore is not simple.
EXAMPLES. Let us describe the low dimensional Clifford algebras. We begin with a
remark : let γ_1, ..., γ_ν be the generators of C(ν). For 1 < j < k we have γ_{1j} γ_{1k} = -γ_{jk},
and therefore the ν - 1 elements i γ_{12}, ..., i γ_{1ν} of the second chaos satisfy the
anticommutation relations (5.1). Since the linear independence hypothesis is trivially
satisfied, they generate a copy of C(ν - 1), which is called the even subalgebra since as
a linear space it is the sum of the even numbered "chaos".
The Clifford algebra C(1) is commutative. The case ν = 2 may be identified with the
algebra of all 2 x 2 matrices (of all operators on the elementary Bernoulli space, with its
basis consisting of 1 and one single Bernoulli variable x). The (linear) basis consisting
of the vectors r_A s_B has four elements 1, r, s, rs such that r^2 = 1 = s^2, rs + sr = 0.
If we take as basis 1, r, s, irs the corresponding 2 x 2 matrices are I, σ_x, σ_y, σ_z. If we
choose instead the basis 1, ir, is, -rs, the multiplication table is that of quaternions
(with complex coefficients). The even subalgebra, generated by 1 and irs = σ_z, is
isomorphic to the algebra of diagonal matrices, i.e. to ℂ^2.
The case of C(4) is fundamental in physics : it is generated by the four 4 x 4 Dirac
matrices, and has dimension 16. Its even subalgebra is C(3) of dimension 8 (instead of
4 for the algebra generated by the three Pauli matrices).

Basis-free definition of Clifford algebras

6 We now give a definition of Clifford algebras which does not depend on an explicit
choice of generators. Consider a complex linear space H of finite dimension ν, and a
non-degenerate symmetric bilinear form B on H; in our case, H is the first chaos
over Bernoulli space, i.e. the space of complex linear combinations of the Bernoulli
random variables x_i, and B(u, v) = E[uv]. We identify Φ, as a vector space, with
the exterior (Grassmann) algebra Λ(H) over H. Indeed, the k-th exterior power of H
has a basis consisting of the vectors x_{i_1} ∧ ... ∧ x_{i_k} with i_1 < ... < i_k, hence it can be
identified with the k-th chaos of toy Fock space (with ℂ for k = 0). Then the Clifford
product is characterized as the unique associative multiplication on Φ with 1 as unit,
such that

(6.1)   uv + vu = 2 B(u, v) 1   for u, v ∈ H .

Indeed this says the same thing as (5.1) for the basis (x_i), and the linear independence
statement in subsection 5 has been hidden in the assumption that the underlying space
of the algebra is Λ(H).
This description can be extended to possibly degenerate bilinear forms (when the
bilinear form is 0, one gets the exterior multiplication). We give additional details in
Appendix 5.
To develop non-commutative probability, an involution is necessary. It requires an
additional element of structure : H must be a complex Hilbert space with a conjugation
u ↦ ū. In the Bernoulli case it is the standard conjugation in the space of complex
linear combinations of the real r.v.'s x_i. This conjugation is used to construct the
bilinear form B(u, v) from the complex scalar product, as < ū, v >. It is then easy to

show there is a unique involution of the Clifford algebra such that x* = x̄ on the first
chaos H.
For algebraists, exterior multiplication is more elementary than Clifford multiplication,
but for us the "elementary" thing was Bernoulli space, and we did not mention at
all exterior algebra. The advantage of our apparently crooked path is its easy extension
to infinitely many dimensions, replacing Bernoulli space by Wiener space.
7 Looking back on this discussion, exterior multiplication appears as another interest-
ing multiplication on Bernoulli space, with multiplication table

(7.1) x A x B = (-1) n(A's) xAu B if A Yl B = 0 , = 0 otherwise.

This illustrates the vague philosophy that many interesting algebra structures, commu-
tative or not, can grow on the same space. The exterior and Clifford multiplications are
related on 7-I by the equality u A v = u v - B ( u , v) 1, and there are interesting formulas
(some of which will appear in the course of these notes) relating the two products for
higher order monomials.
Returning now to Bernoulli, we are led by analogy with (7.1) to a new commutative
multiplication on toy Fock space, whose table is given by

(7.2)   x_A x_B = x_{A∪B} if A ∩ B = ∅ ,  = 0 otherwise.

This multiplication is called the Wick product on Bernoulli space.
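On a finite toy Fock space the two multiplication tables (7.1) and (7.2) can be checked mechanically. The sketch below is our own illustration, not part of the text; in particular the convention n(A, B) = #{(a, b) ∈ A × B : a > b} for the sign in (7.1) is an assumption. It verifies anticommutativity of the exterior product on the first chaos, commutativity of the Wick product, and consistency of the signs with associativity:

```python
from itertools import combinations

# Basis elements x_A are indexed by subsets A of {0, ..., N-1},
# represented as frozensets.  n(A, B) counts pairs (a, b) in A x B
# with a > b; it gives the sign in the exterior table (7.1).
def n(A, B):
    return sum(1 for a in A for b in B if a > b)

def wedge(A, B):
    """Exterior product (7.1) of basis elements: (sign, A ∪ B) or None."""
    if A & B:
        return None                     # x_A ∧ x_B = 0 when A ∩ B ≠ ∅
    return ((-1) ** n(A, B), A | B)

def wick(A, B):
    """Wick product (7.2): x_A x_B = x_{A∪B} if A ∩ B = ∅, else 0."""
    if A & B:
        return None
    return (1, A | B)

# Wick is commutative; the exterior product anticommutes on the first chaos.
i, j = frozenset([0]), frozenset([1])
assert wick(i, j) == wick(j, i) == (1, frozenset([0, 1]))
si, _ = wedge(i, j)
sj, _ = wedge(j, i)
assert si == -sj

# The sign convention is associative on disjoint basis sets:
sets = [frozenset(c) for k in range(3) for c in combinations(range(6), k)]
for A in sets:
    for B in sets:
        for C in sets:
            if A & B or B & C or A & C:
                continue
            assert (-1) ** (n(A, B) + n(A | B, C)) == (-1) ** (n(B, C) + n(A, B | C))
```

The associativity assertion uses the identity n(A, B) + n(A∪B, C) = n(B, C) + n(A, B∪C) for disjoint sets, which is what makes (7.1) a well defined algebra multiplication.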

Linear spaces of Bernoulli random variables

8 Probabilists are familiar with Gaussian spaces, i.e. linear spaces of random variables,
every one of which has a Gaussian law. If ℋ is a real Gaussian space, we have for h ∈ ℋ

E[e^{iuh}] = e^{−u²q(h)/2}   with   q(h) = E[h²] ,

and therefore q is a positive quadratic form, which determines the probabilistic structure
of the space. Then the Wiener chaos theory (chapter IV) describes the full algebra
generated by the space ℋ.
The probabilistic idea behind the preceding description of Clifford algebras is that of
a linear space ℋ of Bernoulli random variables, "Bernoulli" meaning that the support
of the r.v. has at most two points. This cannot be realized in commutative probability,
but here every real linear combination y = Σ_i a_i x_i of the basic Bernoulli random
variables x_i has the property that y² (Clifford square) is a constant, and therefore y
assumes only two values. In the vacuum state y has a symmetric Bernoulli law, but in
other states it may not be symmetric. An additional term a_0 1 may be allowed in the
sum, thus allowing degeneracy at points other than 0. Conversely, Parthasarathy has
shown in [Par2] that linear spaces of two-valued random variables lead necessarily to
Clifford algebras. The analogy between Gauss in the commutative case and symmetric
Bernoulli in the anticommutative case is important, and should be kept in mind.
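The key algebraic fact, that a real linear combination of anticommuting generators has constant Clifford square, can be tested numerically in any concrete matrix representation. The following sketch is ours; the Jordan-Wigner ordering of Pauli matrices is an assumed realization, not the text's construction:

```python
import numpy as np

# An assumed concrete representation of anticommuting Bernoulli variables:
# the Jordan-Wigner construction on (C^2)^{⊗N}, with
#   x_i = sigma_z ⊗ ... ⊗ sigma_z ⊗ sigma_x ⊗ I ⊗ ... ⊗ I.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

def jw_generator(i, N):
    """x_i as a 2^N x 2^N matrix (Jordan-Wigner)."""
    factors = [sz] * i + [sx] + [Id] * (N - i - 1)
    M = factors[0]
    for F in factors[1:]:
        M = np.kron(M, F)
    return M

N = 3
x = [jw_generator(i, N) for i in range(N)]

# The generators anticommute and square to the identity:
for i in range(N):
    for j in range(N):
        anti = x[i] @ x[j] + x[j] @ x[i]
        assert np.allclose(anti, 2 * np.eye(2 ** N) * (i == j))

# Hence y = sum_i a_i x_i has Clifford square (sum_i a_i^2) * I,
# so y assumes only the two values ± sqrt(sum_i a_i^2).
a = [0.5, -1.0, 2.0]
y = sum(ai * xi for ai, xi in zip(a, x))
assert np.allclose(y @ y, sum(ai ** 2 for ai in a) * np.eye(2 ** N))
```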
There is a commutative analogue of Clifford algebras, namely the Gaussian algebras
defined by Dynkin [Dyn], but it is far less important from the algebraic point of view.

Automorphisms of the Clifford structure
9 The Hilbert space Φ (anticommutative Fock space) carries three distinguished
operators, which behave in a definite way with respect to the Clifford multiplication
on Φ.
Given an integer n, let us define

ρ(n) = (−1)^{n(n+1)/2} ,  σ(n) = (−1)^n ,  τ(n) = (−1)^{n(n−1)/2} .

Given an element x of the n-th chaos in Φ, define σ(x) = σ(n) x (and similarly
for ρ, τ) and extend to Φ by linearity. Then it turns out that τ = ρσ = σρ, that
σ is an automorphism of the exterior and Clifford algebra structures, and ρ, τ are
antiautomorphisms (i.e. reverse the order of products). We have ρ² = σ² = τ² = I.
Sometimes σ is called the main automorphism and τ the main antiautomorphism of
the structure. The involution on the Clifford algebra is the composition of ρ and the
conjugation.
One may add to this list the involution *, and the "Fourier transform" ℱ we introduced
in the commutative case, which exchanges the operators r_i and s_i.
The operator σ is also called the parity: it has the even and odd subalgebras as
eigenspaces with eigenvalues ±1 respectively. It follows easily that it anticommutes with
every operator r_k or s_k, since these exchange the two subalgebras.
In the tensor product representation of subsection 2, σ is written as σ_z^{⊗N}. Looking
at the computation of the operators r_i, s_i in the same tensor product representation,
and using the relation σ_x σ_y = iσ_z, we see that the product r_1 s_1 ⋯ r_N s_N is equal to
i^N σ_z^{⊗N}. However, σ has square I and anticommutes with the 2N operators r_i, s_j.
In particular, for N = 2, r_1, s_1, r_2, s_2 are the four Dirac matrices γ_1, …, γ_4, and their
product is called −γ_5 and anticommutes with all of them.
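These identities are easy to verify in matrices. Below is a small numerical check of ours; the particular tensor-product realization of the r_i, s_i is an assumption consistent with σ_x σ_y = iσ_z, and is not necessarily identical to the one of subsection 2:

```python
import numpy as np

# An assumed concrete realization of the 2N Clifford generators on (C^2)^{⊗N}:
#   r_i = sz ⊗ ... ⊗ sz ⊗ sx ⊗ I ⊗ ... ,  s_i likewise with sy in slot i.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Id = np.eye(2, dtype=complex)

def chain(factors):
    M = factors[0]
    for F in factors[1:]:
        M = np.kron(M, F)
    return M

def r(i, N): return chain([sz] * i + [sx] + [Id] * (N - i - 1))
def s(i, N): return chain([sz] * i + [sy] + [Id] * (N - i - 1))

N = 2
parity = chain([sz] * N)                 # sigma = sz^{⊗N}

# sigma anticommutes with every r_i and s_i ...
for i in range(N):
    for g in (r(i, N), s(i, N)):
        assert np.allclose(parity @ g + g @ parity, 0)

# ... has square I, and r_1 s_1 ... r_N s_N = i^N sz^{⊗N}:
assert np.allclose(parity @ parity, np.eye(2 ** N))
prod = np.eye(2 ** N, dtype=complex)
for i in range(N):
    prod = prod @ r(i, N) @ s(i, N)
assert np.allclose(prod, (1j) ** N * parity)
```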

Integrals
10 The concrete Clifford algebra carries a natural probability law, which is the state
associated with the vacuum vector. It is given by

p(x_A) = <1, x_A 1> = 1 if A = ∅ ,  0 otherwise.

It is important to note that p(x_A x_B) = p(x_B x_A): the vacuum vector defines a tracial
state. On the other hand, the vacuum vector defines also a state on the full matrix
algebra (which is also the Clifford algebra generated by the r_i and s_i taken together),
on which it is clearly not a trace.
The value of the vacuum state on an operator expanded as Σ_A c_A x_A is the
coefficient c_∅ in the lowest chaos. If we take the coefficient in the highest chaos, we
get a linear functional (which is not a state for the ordinary involution) called the
Berezin integral. It has many interesting properties, and no obvious extension to the
infinite dimensional case.
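Both functionals are one-line operations once Clifford elements are stored by their chaos expansion. The following minimal sketch is ours; it assumes the orthonormal multiplication table x_A x_B = (−1)^{n(A,B)} x_{A△B} (symmetric difference, with n(A, B) = #{(a, b) : a ∈ A, b ∈ B, a > b}) for the concrete Clifford algebra:

```python
from itertools import combinations

# Clifford elements as dicts {frozenset A: coefficient c_A}.
# Assumed multiplication table (orthonormal case):
#   x_A x_B = (-1)^{n(A,B)} x_{A Δ B}.
def n(A, B):
    return sum(1 for a in A for b in B if a > b)

def mult(u, v):
    w = {}
    for A, cA in u.items():
        for B, cB in v.items():
            S = A ^ B                          # symmetric difference
            w[S] = w.get(S, 0) + (-1) ** n(A, B) * cA * cB
    return w

N = 4
full = frozenset(range(N))
vacuum = lambda u: u.get(frozenset(), 0)       # coefficient c_∅: vacuum state
berezin = lambda u: u.get(full, 0)             # top coefficient: Berezin integral

# The vacuum state is tracial: p(x_A x_B) = p(x_B x_A) for all A, B.
basis = [frozenset(c) for k in range(N + 1) for c in combinations(range(N), k)]
for A in basis:
    for B in basis:
        assert vacuum(mult({A: 1}, {B: 1})) == vacuum(mult({B: 1}, {A: 1}))

# The Berezin integral picks the highest chaos:
u = {frozenset(): 3, full: 7, frozenset([0, 2]): -1}
assert berezin(u) == 7 and vacuum(u) == 3
```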
REMARK. Let ℋ be a real or complex linear space with a bilinear scalar product. The
elements of the associated Clifford algebra (and modules over this algebra) are called
spinors. They appear in physics in two ways: the "quantum probabilistic" way described
in this chapter, and also as a way of handling the orthogonal group of ℋ (the case of
the Lorentz group being particularly important). This is a topic we leave aside.
Chapter III

The Harmonic Oscillator


This chapter is the closest in these notes to what is usually called "Quantum
Mechanics". The present version is considerably shorter than the original French. It thus
becomes more obvious that its main topic is not really elementary quantum mechanics,
but rather elementary Fock space, and the quantum analogue of finite dimensional
Gaussian random variables.

§1. SCHRÖDINGER'S MODEL

The canonical pair

1 Everyone has heard about the "quantization procedure" which leads from a classical
Hamiltonian

(1.1)   H(p_1, q_1, …, p_n, q_n) ,

on the 2n dimensional phase space ℝ^{2n}, to the quantum mechanical Hamiltonian

(1.2)   H(P_1, Q_1, …, P_n, Q_n) ,

Q_j denoting the multiplication operator on L²(ℝⁿ) corresponding to the coordinate
mapping x_j, while P_j = −iħD_j is a partial derivative operator. For instance, if (1.1) is
the Hamiltonian for a system of n particles of mass m interacting through a potential V,

H = Σ_j p_j²/2m + V(q_1, …, q_n) ,

then the quantum mechanical Hamiltonian is the well known Schrödinger operator
−(ħ²/2m)Δ + V. Except in special cases like this one, the non-commutativity of the operators
P_j and Q_j makes it a difficult problem to give a definite meaning to the operator (1.2),
and much work has been devoted to this "quantization problem", which does not concern
us here.
Let us deal with the case n = 1 for simplicity. A canonical pair is informally described
as a pair of self-adjoint operators P, Q on a Hilbert space ℋ, satisfying (Heisenberg's)
canonical commutation relation (CCR)

(1.3)   [P, Q] = −iħI .

However, this definition is not satisfactory: even if one postulates that (1.3) holds on a
dense stable domain D which is a core for both operators, pathological examples can be
given (see Reed and Simon [ReS], end of VIII.5). The standard way out of this difficulty
consists in translating the CCR into a statement concerning bounded operators, the
Weyl form of the CCR, described in the next subsection.
2 Schrödinger's model of the canonical pair is given by

(2.1)   ℋ = L²(ℝ) ,  Qf(x) = x f(x) ,  Pf(x) = −iħ Df(x) .

In the commutation relation (1.3) position Q and momentum P play nearly symmetric
roles, but the explicit realization (2.1) privileges (diagonalizes) position. The corre-
sponding "momentum representation" is deduced from this one using a conveniently
normalized Fourier–Plancherel transform,

(2.2)   ĝ(y) = (2πħ)^{−1/2} ∫ e^{ixy/ħ} g(x) dx ,

which maps Q_x g into P_y ĝ and P_x g into −Q_y ĝ, so that P gets diagonalized.
Domains can be exactly described: the domain of Q consists of those "wave
functions" f such that xf ∈ L², and the domain of P of those f whose derivative
in the distribution sense belongs to L². The commutation relation (1.3) holds on the
Schwartz space S, which is a core for both operators. The unitary groups U_r = e^{−irP}
and V_s = e^{−isQ} can be made explicit:

(2.3)   U_r f(x) = f(x − ħr) ,  V_s f(x) = e^{−isx} f(x) ;

and it is easy to verify that

(2.4)   U_r V_s = e^{iħrs} V_s U_r .

This is the most elementary version of the Weyl form of the CCR, and what we precisely
call a (one dimensional) canonical pair in this chapter will be a Hilbert space ℋ provided
with two unitary groups (U_r), (V_s) satisfying (2.4). The canonical pair is said to be
irreducible if the only closed subspaces invariant under both groups are the trivial ones
{0} and ℋ. The interest of this definition lies in the fact that every irreducible canonical
pair is isomorphic to Schrödinger's model (the Stone–von Neumann theorem, subs. 6).
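Relation (2.4) becomes a pointwise identity once (2.3) is taken as the definition of the two groups, so it can be checked mechanically. A small sketch of ours, with an arbitrary test function:

```python
import cmath, math

hbar = 1.0   # any value of hbar works here

def U(r, f):
    """(U_r f)(x) = f(x - hbar*r), the explicit form (2.3)."""
    return lambda x: f(x - hbar * r)

def V(s, f):
    """(V_s f)(x) = exp(-i s x) f(x), the explicit form (2.3)."""
    return lambda x: cmath.exp(-1j * s * x) * f(x)

f = lambda x: math.exp(-x * x / 2)        # arbitrary test function
r, s = 0.7, -1.3
phase = cmath.exp(1j * hbar * r * s)

# Check the Weyl relation (2.4): U_r V_s = e^{i hbar r s} V_s U_r, pointwise.
for x in [-2.0, -0.3, 0.0, 1.1, 2.5]:
    lhs = U(r, V(s, f))(x)
    rhs = phase * V(s, U(r, f))(x)
    assert abs(lhs - rhs) < 1e-12
```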
One can also express (2.4) as a relation between one of the unitary groups (say, U_r)
and the spectral family (E_t) of the other one. In the explicit Schrödinger representation,
we have

E_t f(x) = 1_{]−∞,t]}(x) f(x)

and (2.4) is equivalent to

(2.5)   U_r E_t = E_{t+ħr} U_r .

This can be interpreted as a covariance property of the spectral measure under the
group G of translations of the line: let g ∈ G be a translation x ↦ x + u, acting on
functions by τ_g f(x) = f(g^{−1}x) = f(x − u). Then we have, for A = ]−∞, t] and then for
every Borel set A of the line

(2.6)   τ_g E_A τ_g^{−1} = E_{gA} .
This is a simple example of an imprimitivity system in the sense of Mackey, the general
form of which concerns the covariance of a spectral measure on a space E under a
locally compact group G of symmetries of E .

Weyl operators

3 Weyl suggested the following approach to the quantization problem of associating
a quantum observable F(P, Q) with a function F(p, q) on classical phase space. First
define the unitary operators e^{−i(rP+sQ)} for real r, s. Then take the Fourier transform
of the classical observable,

F̂(r, s) = ∫ e^{−i(rp+sq)} F(p, q) dp dq ,

and transform it backwards into the operator,

F(P, Q) = ∫ e^{i(rP+sQ)} F̂(r, s) dr ds .

For a discussion of this method, which is extremely interesting as a version of pseudo-
differential calculus, see Folland [Fol]. Here we shall be concerned only with the definition
of the Weyl operators

(3.1)   W_{r,s} = e^{−i(rP−sQ)}

and, by differentiation, of the selfadjoint unbounded operators rP − sQ. To conform
with "symplectic" tradition, we have inserted a minus sign in (3.1).
A reasonable meaning for (3.1) is suggested by the elementary Campbell–Hausdorff
formula for a pair of finite dimensional matrices A and B which commute with their
commutator [A, B]:

(3.2)   e^{A+B} = e^A e^B e^{−[A,B]/2} = e^B e^A e^{[A,B]/2} .

We try this on A = −irP, B = isQ, [A, B] = rs [P, Q] = −iħrs I, and we have on the
model

(3.3)   W_{r,s} f(x) = e^{−iħrs/2} e^{isx} f(x − ħr) ,

which is obviously unitary.

which is obviously unitary. It is interesting to define (just for a few lines) Wr,s, t =
e-itWr,s, in order to check on (3.3) that

(3.4) W~,s,t W~, 8,,t, = W + ~ , s + ~ ' , t + t ' - n ( r s ' - , / ) / 2 "

The composition law of the parameters defines the so called Heisenberg group multipli-
cation on ]R 3 . In particular, taking t = 0 we get the Weyl form of tLe CCR

(3.5) %,~ w , , ~ , = e -~h ( ~ ' - ~ ' ) / 2 Wr+r,,~+~, ,

which shows in particular that the operators Wr, s constitute a projective unitary
representation (i.e. a unitary representation up to a phase factor) of the additive
group ]R2 .
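With the explicit action (3.3) in hand, the Weyl relation (3.5) reduces to bookkeeping of phases, which a few lines of code can confirm. A sketch of ours, with ħ set to 1 and an arbitrary test function:

```python
import cmath, math

hbar = 1.0

def W(r, s, f):
    """Weyl operator (3.3): e^{-i hbar r s/2} e^{i s x} f(x - hbar r)."""
    return lambda x: (cmath.exp(-1j * hbar * r * s / 2)
                      * cmath.exp(1j * s * x) * f(x - hbar * r))

f = lambda x: math.exp(-x * x / 4)        # arbitrary test function
r, s, r2, s2 = 0.4, 1.1, -0.8, 0.6
phase = cmath.exp(-1j * hbar * (r * s2 - r2 * s) / 2)

# Check (3.5): W_{r,s} W_{r',s'} = e^{-i hbar (rs'-r's)/2} W_{r+r',s+s'}.
for x in [-1.5, 0.0, 0.3, 2.2]:
    lhs = W(r, s, W(r2, s2, f))(x)
    rhs = phase * W(r + r2, s + s2, f)(x)
    assert abs(lhs - rhs) < 1e-12
```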

If we put W_z = W_{r,s} for z = r + is, we get the complex form of the CCR, which is
particularly useful:

(3.6)   W_z W_{z'} = e^{−iħ Im(z̄z')/2} W_{z+z'} .

It becomes meaningful for any complex Hilbert space, if z̄z' is understood as <z, z'>.
We shall return to this general form of the CCR in chapter IV.
As for the operator rP − sQ itself, we may now define it as the selfadjoint operator
i (d/dt) W_{tr,ts} |_{t=0}. One can prove that on the Schwartz space S this operator coincides with
the linear combination rP − sQ, and that it admits S as a core.
REMARKS. a) In classical mechanics, much importance is given to canonical transforma-
tions, which preserve the symplectic structure. Here, a linear transformation

P' = aP + bQ ,  Q' = cP + dQ

preserves the Heisenberg commutation relation, i.e. satisfies [P', Q'] = −iħI, if and
only if ad − bc = 1. It is safer not to deal with unbounded operators, and to remark that
the operators W'_{r,s} = W_{ar+cs, br+ds} satisfy the Weyl CCR.
b) Given any probability law ρ for the canonical pair (i.e. any positive operator ρ of
trace 1 on L²(ℝ)), we may define its (quantum) characteristic function by the formula

(3.7)   F(r, s) = E[e^{−i(rP−sQ)}] = Tr(ρ W_{r,s}) .

Note that, for r = 0 and s = 0, the quantum c.f. gives the ordinary characteristic
functions of the observables Q, P. For instance, if ρ is the pure state associated with
a normalized "wave function" f, we have according to (3.3)

F(r, s) = <f, W_{r,s} f> = e^{−iħrs/2} ∫ f̄(y) f(y − ħr) e^{isy} dy

(we find it convenient here to use the Hilbert space L²(dx)). The
properties of F(r, s) are similar to those of a characteristic function in classical
probability: it is continuous and bounded by 1 in modulus, and satisfies a suitable
"positive type" property. See for instance [Moy], [Fan], [Poo], [CuH].
Gaussian states

4 As in classical probability theory, a state ρ for the canonical pair is called (centered)
Gaussian if its characteristic function is the exponential of a quadratic form. This form
must be negative since F is bounded, hence the quantum c.f. is of positive type in the
usual sense. For example, let us take ħ = 1 and consider the pure state corresponding
to the square root of the Gaussian density w.r.t. the measure dx, f = (2πc)^{−1/4} e^{−x²/4c}.
Then the characteristic function is easily seen to be the exponential of the quadratic
form −(cs²/2 + r²/8c), which at first sight seems not to be symmetric. However, we
have been working in the "position representation", which privileges the operator Q.
To reestablish the balance between P and Q let us set c = c_q and define c_p by the
(uncertainty) relation c_p c_q = 1/4. Then the quadratic form can be written symmetrically
as −½(c_p r² + c_q s²). We shall see later other examples of quantum Gaussian laws.
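This computation is easy to confirm numerically from the explicit Weyl action (3.3); the following sketch is ours, and the integration window and step count are arbitrary choices:

```python
import cmath, math

# Check (numerically) that for f(x) = (2 pi c)^{-1/4} exp(-x^2/4c), hbar = 1,
# the quantum c.f. <f, W_{r,s} f> equals exp(-(c s^2/2 + r^2/8c)).
c = 0.7
f = lambda x: (2 * math.pi * c) ** (-0.25) * math.exp(-x * x / (4 * c))

def qcf(r, s, L=12.0, n=4000):
    """F(r,s) = e^{-i r s/2} * integral f(y) f(y-r) e^{i s y} dy (trapezoid)."""
    h = 2 * L / n
    total = 0j
    for k in range(n + 1):
        y = -L + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * f(y) * f(y - r) * cmath.exp(1j * s * y)
    return cmath.exp(-1j * r * s / 2) * total * h

for r, s in [(0.0, 0.0), (0.4, 0.3), (-1.0, 0.8)]:
    exact = math.exp(-(c * s * s / 2 + r * r / (8 * c)))
    assert abs(qcf(r, s) - exact) < 1e-8
```

Note that F(r, s) comes out real and positive here, as the formula −(cs²/2 + r²/8c) predicts.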

Minimal uncertainty

5 The minimal uncertainty states of the canonical pair are Gaussian. In order to find
them, we first reproduce the classical proof (by H. Weyl) of the Heisenberg uncertainty
relation. In this section we assume ħ = 1.
This is not the best normalization for probability theory. The product of variances ħ²/4
in the Heisenberg uncertainty relation should be made equal to 1, and thus ħ = 2, the
standard choice in these notes. On the other hand, having the momentum operator
equal to −2iD is hardly acceptable for physics.
A normalized element ω of ℋ such that E_ω[Q²] < ∞ and E_ω[P²] < ∞ is a
function ω(x), with weak derivative ω'(x) (defined a.e.) such that

∫ |ω(x)|² dx = 1 ,  ∫ |ω'(x)|² dx < ∞ ,  ∫ |x|² |ω(x)|² dx < ∞ .

The existence of a derivative in the L² sense implies that ω has a continuous
representative. We are going to prove

(5.1)   E_ω[Q²] E_ω[P²] ≥ 1/4 .

This will follow from ∫ |x ω(x) ω'(x)| dx ≥ 1/2 and the Schwarz inequality. On every
finite interval (a, b) we have, using integration by parts,

2 ∫_a^b x ω(x) ω'(x) dx = − ∫_a^b ω²(x) dx + [x ω²(x)]_a^b .

Since the continuous function x ω(x) belongs to L², it cannot remain bounded away
from 0 at ±∞, and we can let b ↑ +∞ and a ↓ −∞ in such a way that the last term
on the right tends to 0, thus getting

2 ∫ x ω ω' dx = − ∫ ω² dx .

If ω is real, the right side is equal to −1 and we are finished. If ω is complex, we remark
that |ω| still is derivable in the L² sense, with a derivative whose modulus is smaller than
|ω'|; applying the preceding inequality to |ω| then gives the required result.
This proof allows us to find the normalized functions ω which achieve the minimum
in the preceding inequality. First of all, if ω achieves the minimum so does |ω|, therefore
we may assume that ω is real. Then we have equality in the Cauchy–Schwarz inequality
(∫ xωω' dx)² ≤ (∫ x²ω² dx)(∫ ω'² dx), hence ω'(x) = c xω(x) for some constant c (which
cannot be ≥ 0 if ω has to be in L²). Finally we get a Gaussian wave function
(normalized here with respect to dx)

(5.2)   ω(x) = (2πc)^{−1/4} e^{−x²/4c} .

There might exist complex functions with the same modulus, also achieving the
minimum, so let us set ω = θ|ω| with |θ| = 1; since |ω| is C^∞ and does not vanish, θ
is locally derivable in the L² sense. We may write ω' = θ|ω|' + θ'|ω|, from which it follows easily
that |ω'| > ||ω|'| unless θ' = 0, and equality in (5.1) is achieved only if θ is constant.

From these states we get all wave functions of minimal uncertainty as not necessarily
centered Gaussian wave functions, i.e. wave functions (5.2) shifted in position and
momentum space by some constant.
We shall see later the relation of these minimal uncertainty wave functions with the
coherent states of the harmonic oscillator.
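The variance computation can also be tested numerically: for a Gaussian wave function the product of the variances of Q and P is exactly 1/4 (with ħ = 1), and any other shape does worse. A sketch of ours, using a crude trapezoid rule:

```python
import math

def moments(w, dw, L=20.0, n=8000):
    """Return (norm, E[Q^2], E[P^2]) for a real wave function w with w' = dw."""
    h = 2 * L / n
    norm = q2 = p2 = 0.0
    for k in range(n + 1):
        x = -L + k * h
        weight = (0.5 if k in (0, n) else 1.0) * h
        norm += weight * w(x) ** 2
        q2 += weight * x * x * w(x) ** 2       # E[Q^2] integrand
        p2 += weight * dw(x) ** 2              # E[P^2] integrand (hbar = 1)
    return norm, q2 / norm, p2 / norm

# Gaussian wave function (5.2): the product of variances is exactly 1/4.
c = 0.9
g = lambda x: math.exp(-x * x / (4 * c))
dg = lambda x: -x / (2 * c) * g(x)
_, q2, p2 = moments(g, dg)
assert abs(q2 * p2 - 0.25) < 1e-6

# A non-Gaussian competitor stays strictly above the minimum.
w = lambda x: 1.0 / math.cosh(x)
dw = lambda x: -math.sinh(x) / math.cosh(x) ** 2
_, q2, p2 = moments(w, dw)
assert q2 * p2 > 0.25 + 1e-3
```

(For the sech profile the product is π²/36 ≈ 0.274, safely above 1/4.)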

The Stone-von Neumann theorem

6 The Stone–von Neumann theorem shows that any irreducible realization of the Weyl
commutation relations is isomorphic to Schrödinger's model. This result is particularly
attractive for probabilists, since it has an interpretation in the theory of flows: in
discrete time it has been thus rediscovered by O. Hanner (1950), and in continuous time
by Kallianpur and Mandrekar [KaM]. See also J. de Sam Lazaro and P.A. Meyer [LaM].
We are going to sketch the probabilistic proof, translating it into pure Hilbert space
language.
On the other hand, the theorem has a more general interpretation in Mackey's theory
of imprimitivity systems.
We are given a Hilbert space ℋ, provided with an increasing and right continuous
family of subspaces (ℋ_t)_{t∈ℝ}. It is convenient to assume that ℋ_{+∞} = ℋ and ℋ_{−∞} = {0},
but this last hypothesis will be used only at the end of the proof. We denote by E_t the
orthogonal projection on ℋ_t. In the probabilistic interpretation, ℋ and ℋ_t are spaces
L²(Ω, F, P) and L²(Ω, F_t, P) relative to an increasing and right continuous family of
σ-fields F_t, and E_t is the conditional expectation E[· | F_t].
We are also given a unitary group (U_t), which in the probabilistic situation is a group
of automorphisms θ_t of the measure space Ω. The covariance property (2.5) means
that for all s, t

(6.1)   f ∈ ℋ_s  ⟹  U_t f ∈ ℋ_{s+t} .

A screw line (or helix) is a curve x(t) in ℋ with the following properties. First,
x(0) = 0, the curve is adapted (i.e. x(t) ∈ ℋ_t for t > 0) and has orthogonal
increments, i.e. for 0 < s < t, x(t) − x(s) is orthogonal to ℋ_s; in the probabilistic
context, this is the martingale property. Next, the curve has stationary increments:
U_h(x(t) − x(s)) = x(t+h) − x(s+h) for all s, t, h (in probabilistic language, it is an
"additive functional"). The heart of the Stone–von Neumann theorem is the following
result:

THEOREM. If the filtration is not deterministic (i.e. if ℋ_{−∞} ≠ ℋ) there exists a (non-
zero) screw line.
PROOF. Due to the stationarity, the hypothesis that the filtration is not deterministic
means that the Hilbert space ℰ = ℋ_0 is not equal to ℋ. Given now f ∈ ℰ, put for t ≥ 0

P_t f = E_0 U_t f ∈ ℰ .

We leave it to the reader to check that (P_t)_{t≥0} is a strongly continuous semigroup
of contractions of ℰ. We denote by A its generator, with domain D(A). We also put
f∘X_t = U_t f, a notation which looks strange, but has an intuitive content since, in the
probabilistic case, X_t may be interpreted as a mapping from (Ω, F) to the "state space"
E = (Ω, F_0), and these mappings constitute a Markov process with semigroup (P_t).
Our next point (almost trivial) is to check that for f ∈ D(A) the following curve,
defined for t ≥ 0,

(6.2)   x(t) = f∘X_t − f∘X_0 − ∫_0^t Af∘X_s ds ,

is a screw line, or rather can be extended to the whole of ℝ so as to become a screw
line. This is also left to the reader. So finally we must only show that if all these screw
lines are equal to zero, then the filtration is deterministic, i.e. ℰ = ℋ.
lines are equal to zero, then the filtration is deterministic, i.e. E = ~ .
The assumption that all the screw lines (6.2) are equal to zero can be stated as
follows: if f ∈ D(A), then we have (recalling that f∘X_t = U_t f for f ∈ ℰ)

U_t f = U_0 f + ∫_0^t U_s Af ds .

Otherwise stated, if we denote by (B, D(B)) the generator of the semigroup (U_t)_{t≥0},
then f ∈ D(A) ⟹ f ∈ D(B) and Af = Bf. Now let R_p = ∫_0^∞ e^{−ps} P_s ds be (for p > 0)
the resolvent of (P_t) and similarly S_p be the resolvent of (U_t). For u ∈ ℰ the vector
f = R_p u belongs to D(A) and Af = pf − u, therefore it also belongs to D(B) and
Bf = pf − u, which implies S_p u = f. Since this is true for all p > 0, inverting the Laplace
transform gives U_t u = P_t u on ℰ, implying that for t > 0, U_t maps ℰ into itself. Then it
follows from (6.1) that ℋ_0 = ℋ, which implies easily that the filtration is deterministic.
It is worthwhile to describe the screw line x(t) which generates the model itself: for
t > 0, x(t) is the indicator function 1_{]0,t]}, and for t < 0 it is equal to −1_{]t,0]}.
The non-trivial part of the proof is finished. We consider a non-zero screw line, and
remark that ||x(t) − x(s)||² = c |t − s| by the stationarity of increments. We may assume
the screw line is normalized so that c = 1. Then we may define for f ∈ L²(ℝ) the
"stochastic integral" I(f) = ∫ f(s) dx(s), which carries L²(ℝ) isometrically onto a
closed subspace M(x) of ℋ. It is trivial to check that M(x) is invariant under the
operators E_t and U_t, the first one corresponding to the multiplication of f by 1_{]−∞,t]}
and the second one to a shift of f by t. Otherwise stated, the situation induced on
M(x) is isomorphic to Schrödinger's model. Now we restrict ourselves to the stable
space M(x)^⊥; if the induced filtration is not deterministic we can extract again a
screw line and a copy of the model, and so on until we reach a residual stable space on
which the induced filtration is deterministic (the "and so on" can be replaced, as usual in
mathematics, by a rigorous argument beginning with the sentence "consider a maximal
family of mutually orthogonal normalized screw lines…"). If we have ℋ_{−∞} = {0},
this residual space is also equal to {0}, and we have decomposed ℋ into a direct sum
of copies of the model (at most countably infinite, since ℋ is always assumed to be
separable).
7 If an irreducible canonical pair exists at all (i.e. one for which the only closed stable
subspaces are the trivial ones), then by the above proof it must consist of one single
copy of the model. Let us prove that the model itself is irreducible. According to the
Stone–von Neumann theorem, if the model were reducible into two orthogonal stable
subspaces, it would contain two orthogonal screw lines; thus it is sufficient to prove there
is only one normalized screw line ξ(t). Let f(x) be the function ξ(1) ∈ L²; by
adaptation, we have f(x) = 0 (a.e.) for x > 1. The orthogonality of increments implies
that ξ(1) = ξ(1) − ξ(0) is orthogonal to E_0ℋ, i.e. f(x) = 0 a.e. for x < 0. Writing that
f(x) − f(x − t) = 0 a.e. on ]−∞, t] for 0 < t < 1, we find that f is a.e. constant
on [0,1], and by normalization ξ(t) is the fundamental screw line 1_{]0,t]} (t > 0),
whence the conclusion.
8 The Stone–von Neumann theorem is a powerful result. For instance, set U'_t =
V_{−t}, V'_t = U_t; this is a second pair of unitary groups which satisfies the Weyl relation,
and which is evidently irreducible; since the model is unique there must exist a unitary
operator on L²(ℝ) which transforms the first pair into the second one. We have proved
by "abstract nonsense" that the Fourier–Plancherel transform is unitary! Also, consider
a minimal uncertainty Gaussian wave function in the position representation. Minimal
uncertainty is a property of the state itself, and does not depend on the representation
of the canonical pair we use; hence the Fourier transform of a minimal uncertainty wave
function is a minimal uncertainty wave function in the momentum representation, and
we get the well known result that the Fourier transform of a Gaussian is Gaussian¹.
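That last statement can of course also be seen by direct computation; here is a numerical sketch of ours, ignoring the (2πħ)^{−1/2} normalization of (2.2):

```python
import cmath, math

# The Fourier transform of g(x) = exp(-x^2/4c) is again Gaussian:
#   integral e^{i x y} g(x) dx = sqrt(4 pi c) * exp(-c y^2).
c = 0.6
g = lambda x: math.exp(-x * x / (4 * c))

def ft(y, L=15.0, n=6000):
    """Unnormalized integral of e^{i x y} g(x) dx by the trapezoid rule."""
    h = 2 * L / n
    total = 0j
    for k in range(n + 1):
        x = -L + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * cmath.exp(1j * x * y) * g(x)
    return total * h

for y in [0.0, 0.5, 1.3, -2.0]:
    exact = math.sqrt(4 * math.pi * c) * math.exp(-c * y * y)
    assert abs(ft(y) - exact) < 1e-8
```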
9 Here is another interesting consequence of the Stone–von Neumann theorem. Let us
consider on some (separable) Hilbert space K a family of operators W_{r,s} which satisfy
the Weyl commutation relations. According to the Stone–von Neumann theorem, K
is isomorphic to a countable sum of copies ℋ_n of the model. Let f be a normalized
vector, and let f = Σ_n c_n f_n be its decomposition along the spaces ℋ_n, the vectors f_n
being normalized. Then we have

<f, W_{r,s} f> = Σ_n |c_n|² <f_n, W_{r,s} f_n> .

On the right side, we have done as if f_n were a vector in the model ℋ, not in its
copy ℋ_n, and what we get is the characteristic function of the mixture Σ_n |c_n|² ε_{f_n},
denoting by ε_h the pure state corresponding to the vector h. This is an equality between
two quantum characteristic functions, computed on the left side on a pure state of a
reducible canonical pair, and on the right side on a mixture in the irreducible model.
Thus, every law that can be realized on any canonical pair can also be realized on the
model itself. The idea is somewhat the same as that of "canonical processes" in classical
probability theory: one starts realizing the law of the process on some huge probability
space, using possibly auxiliary processes, and then the law may be mapped to a reduced
space (consisting generally of a minimal set of sample functions) from which all the
auxiliary information has been erased.
We apply this idea as follows: we take a tensor product of two copies (P_1, Q_1) and
(P_2, Q_2) of the Schrödinger model. More concretely, the basic Hilbert space is L²(ℝ²),
the first pair consists of the multiplication operator by the coordinate x_1 and the partial
derivative operator −iħD_1, and similarly for the second pair using the coordinate x_2.
Then we set

Q = aQ_1 + bQ_2 ;  P = aP_1 − bP_2
¹ Another wonderful application concerns the structure of the endomorphisms of a *-algebra ℒ(ℋ).
See Parthasarathy's book, Prop. 29.4, p. 253.

where a, b are real and such that a² − b² = 1: a formal computation shows that
[P, Q] = −iħI, so we have built a new (reducible) canonical pair. It is more rigorous
to define the Weyl operators

W_{r,s} f(x_1, x_2) = e^{−iħrs/2} e^{is(ax_1+bx_2)} f(x_1 − aħr, x_2 + bħr)

and check that they satisfy the CCR. Let now f_1 and f_2 be two normalized vectors in
the model, and f be their tensor product. Then the quantum characteristic function of
the state f is

<f, W_{r,s} f> = <f_1, W_{ar,as} f_1> <f_2, W_{−br,bs} f_2> .

In particular, if we take for f_1, f_2 two identical real Gaussian wave functions, we get a
characteristic function of the form

exp(−½(λr² + μs²))   with   λ = (a² + b²) c_p ,  μ = (a² + b²) c_q .

In this way we have constructed quantum Gaussian laws of arbitrary uncertainty product
≥ ħ²/4. We shall use the same construction in the infinite dimensional situation, using
the tensor product of two copies of the simple Fock model. But there one cannot get
back to the model (the Stone–von Neumann theorem holds only for a finite number of
canonical pairs), and some "non-Fock" representations of the CCR appear.
Up to now, we have seen only Gaussian quantum characteristic functions correspond-
ing to diagonal quadratic forms. It is easy to deduce from them Gaussian characteristic
functions which possess an off-diagonal term. Keep the same state (pure or mixed),
but replace the Weyl family by a new one, using a linear canonical transformation (§1,
subsection 3). Then the quadratic form gets "rotated" and is no longer diagonal, its dis-
criminant remaining invariant in this transformation because of the relation ad − bc = 1.
These general quantum Gaussian laws constitute the class of all possible limit laws in
the quantum central limit theorem of Cushen–Hudson [CuH].

§2. CREATION AND ANNIHILATION OPERATORS

In this section, attention shifts from the canonical pair Q, P towards a pair of non-
selfadjoint operators, the creation and annihilation pair, soon to be completed by the
number operator, which is closely related to the harmonic oscillator Hamiltonian.
From now on, we adopt systematically the convention that ħ = 2. This will eliminate
many unwanted factors of 2 in probabilistic formulas. However, there exists no system
of units such that 2 = 1, and unwanted factors of 2 will appear elsewhere.

The harmonic oscillator, or elementary Fock space

1 We first review quickly Dirac's approach to the theory of the quantum harmonic
oscillator. We start with the canonical pair in Schrödinger's form

(1.1)   Q = x ,  P = −2iD ,

and define new operators a^± on the common stable domain S for P and Q as follows:

(1.2)   Q = a^+ + a^− ,  P = i(a^+ − a^−) .

They are called respectively the creation and annihilation operator. The standard
notation in physics books is a = a^−, a* = a^+. They satisfy on S the commutation
relation [a^−, a^+] = I, as well as the relation <a^+f, g> = <f, a^−g>. Therefore
each of them has a densely defined adjoint, and hence is closable. Their closures are
denoted by the same notation, and one can prove they are exactly adjoint to each other
(we shall never use this fact).
The operator a° = a^+a^− is well defined on S, and called the number operator. The
notation a° is not standard, the usual one being N. On S the number operator is
positive symmetric, and one can prove that it is essentially selfadjoint.
The explicit expression of the three operators on S is

(1.3)   a^+ = ½(x − 2D) ,  a^− = ½(x + 2D) ,  a° = ¼(x² − 4D² − 2I) .

All unpleasant factors of 2 will be eliminated in (2.2).


The general form of the Hamiltonians describing quantum harmonic oscillators (with
one degree of freedom) is H = aQ² + bP² with a, b > 0. All elementary textbooks on
quantum mechanics explain how they can be reduced to a standard form, for instance
(P² + Q²)/4 = (x² − 4D²)/4, which by (1.3) can be expressed as a° + ½. We are
going to compute all eigenvalues and eigenvectors of H (or equivalently of N).
Knowing these eigenvalues and eigenvectors would allow us to compute the unitary
group e^{−itH} which describes the quantum evolution of a harmonic oscillator. However,
we are not really interested in this dynamical problem. What we are doing will remain
at a kinematical level, namely, will amount to another description of the canonical pair.
2 The wave function

(2.1)   φ(x) = (2π)^{−1/4} e^{−x²/4}

is normalized in ℋ = L²(ℝ, dx) and belongs to S. It is killed by a^−, and therefore by
a° = a^+a^−. Its square is the density γ(x) of the standard Gaussian measure γ(dx).
Since φ is strictly positive, the mapping f ↦ f/φ is an isomorphism of L²(ℝ) onto
L²(γ), and we may carry the whole structure to this last Hilbert space. We thus get a
new model for the canonical pair, called the real Gaussian model:

(2.2)   ℋ = L²(γ) ,  a^+ = x − D ,  a^− = D ,  a° = N = −D² + xD .

The operator Q = a^+ + a^− is still equal to x, while the operator D² − xD = −N is well
known to probabilists as the generator L of a Markov semigroup (P_t), symmetric with
2. Creation and annihilation operators 53

respect to the measure 7, which is called the Ornstein Uhlenbeck semigroup. Therefore
L is negative selfadjoint and N itself has a natural positive selfadjoint extension. It is
true (we shall neither prove nor need it) that this extension is simply the closure of N
on S , i.e. N is essentially s.a. on S . Finally, the vector ~ has become the constant
function 1. We call it the vacuum vector and denote it by 1.
In the Gaussian model it is clear that the space 𝒫 of all polynomials is stable under
a+, a-. On the other hand, it is dense in H : this is a classical result, which can be
deduced from the fact that trigonometric polynomials are dense, and the exponential
series for e^{iux} is norm convergent. One can show that P and Q are essentially
selfadjoint on 𝒫.
3   We are going to find all the eigenvalues and eigenfunctions of N = a°. The
computation rests on the simple formulas, valid on the stable domain 𝒫,

N a+ = a+ (N + I) ,   N a- = a- (N - I) .

The first one implies that, given an eigenvector f ∈ 𝒫 of N with eigenvalue λ, a+f is
either 0, or an eigenvector of N with eigenvalue λ + 1. Starting with f = 1, λ = 0, we
construct vectors h_n = (a+)^n 1 such that N h_n = n h_n, and

< h_{n+1}, h_{n+1} > = < a+ h_n, a+ h_n > = < h_n, a- a+ h_n >
                     = < h_n, (N + I) h_n > = (n+1) < h_n, h_n > ,

from which one deduces that none of them is 0, and

(3.1)   < h_n, h_n > = n!

The vectors h_n are read in the Gaussian model as the polynomials h_n(x) = (x - D)^n 1 :
they are the Hermite polynomials with the normalization convenient for probabilists (not
for analysts). The first ones are

1 ,  x ,  x^2 - 1 ,  x^3 - 3x , ...
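These polynomials, and the norms (3.1), can be checked by iterating a+ = x - D on coefficient lists; the short computation below is our own illustration (pure Python, no dependencies), not part of the original text.

```python
import math

# A polynomial is a coefficient list [c0, c1, ...] meaning sum_k c_k x^k.
def a_plus(p):
    """Apply a+ = x - D to a polynomial, as in h_n = (x - D)^n 1."""
    xp = [0.0] + list(p)                                   # multiplication by x
    dp = [k * p[k] for k in range(1, len(p))] + [0.0, 0.0] # differentiation, padded
    return [a - b for a, b in zip(xp, dp[:len(xp)])]

h = [[1.0]]                       # h_0 = 1
for _ in range(5):
    h.append(a_plus(h[-1]))

assert h[1] == [0.0, 1.0]               # x
assert h[2] == [-1.0, 0.0, 1.0]         # x^2 - 1
assert h[3] == [0.0, -3.0, 0.0, 1.0]    # x^3 - 3x

def moment(k):                    # E[x^k] for the standard Gaussian: (k-1)!! for even k
    return 0.0 if k % 2 else float(math.prod(range(1, k, 2)))

def dot(p, q):                    # scalar product in L^2(gamma)
    return sum(p[i] * q[j] * moment(i + j) for i in range(len(p)) for j in range(len(q)))

for n in range(5):
    assert abs(dot(h[n], h[n]) - math.factorial(n)) < 1e-9   # (3.1): <h_n, h_n> = n!
assert abs(dot(h[2], h[3])) < 1e-9                           # orthogonality
```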

Since there is one of them for each degree, they form a basis of the space of all
polynomials. Since polynomials are dense in H, the vectors φ_n = (n!)^{-1/2} h_n constitute
an orthonormal basis of H which completely diagonalizes the number operator. Here
are the expressions of the three operators a^ε in this basis :

(3.2)   a+ φ_n = √(n+1) φ_{n+1} ,   a° φ_n = n φ_n ,   a- φ_n = √n φ_{n-1} .


From this we can deduce the matrices of P and Q, which were at the root of
Heisenberg's (1925) "matrix mechanics".
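These matrices are easy to reproduce numerically. The sketch below (our own illustration, assuming NumPy; the truncation level M is an arbitrary choice) builds the truncated matrices of the three operators and checks the relations (3.2), together with the commutation relations that hold away from the cut-off.

```python
import numpy as np

M = 8                                            # truncation: keep phi_0 .. phi_{M-1}

# Matrices of (3.2): a+ phi_n = sqrt(n+1) phi_{n+1}, a- phi_n = sqrt(n) phi_{n-1}
ap = np.diag(np.sqrt(np.arange(1.0, M)), k=-1)   # creation
am = ap.T                                        # annihilation (adjoint; real entries)
N = ap @ am                                      # number operator a° = a+ a-
Q = ap + am                                      # Heisenberg's tridiagonal matrix
P = 1j * (ap - am)

assert np.allclose(np.diag(N), np.arange(M))     # a° phi_n = n phi_n

comm = am @ ap - ap @ am                         # [a-, a+] = I away from the cut-off
assert np.allclose(comm[:M-1, :M-1], np.eye(M - 1))

pq = P @ Q - Q @ P                               # [P, Q] = -2i I (probabilistic normalization)
assert np.allclose(pq[:M-1, :M-1], -2j * np.eye(M - 1))
```

The top-left corner of Q is exactly the tridiagonal matrix with entries √n on the two off-diagonals that appears in matrix mechanics.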
In fact this orthonormal basis is less useful than the orthogonal basis (h_n). Here is
the formula to be memorized : every element of H has an expansion

(3.3)   f = Σ_n (c_n/n!) h_n   with   ||f||^2 = Σ_n |c_n|^2/n! .

The Hilbert space consisting of all such expansions, and provided with the three
operators a^ε, is the most elementary example of a Fock space. More precisely, it will be
interpreted in Chapter IV as the (boson) Fock space over the one-dimensional Hilbert
space ℂ.
54 III. The harmonic oscillator

Weyl operators and coherent vectors

4   The Weyl operator W_z for z = r + is (§1, subs. 3) was written formally as
e^{-i(rP-sQ)} ; replacing P, Q by their expressions above we can write

(4.1)   W_z = e^{z a+ - z̄ a-} = e^{z a+} e^{-z̄ a-} e^{z z̄ [a+, a-]/2} = e^{-|z|^2/2} e^{z a+} e^{-z̄ a-} .

According to (4.1) and the fact that a- kills the vacuum vector, we have

(4.2)   W_z 1 = e^{-|z|^2/2} e^{z a+} 1 = e^{-|z|^2/2} Σ_k (z^k/k!) (a+)^k 1 = e^{-|z|^2/2} Σ_k (z^k/k!) h_k .

Since W_z is unitary, these vectors have norm 1. They are called coherent vectors and
denoted by ψ(z). Let us show that their linear span is dense in H. To this end, it is
more convenient to normalize them by the condition of having expectation 1 in the
vacuum state instead of having norm 1. The vectors one gets in this way are called
exponential vectors, and denoted by

(4.3)   E(z) = Σ_k (z^k/k!) h_k .

Given (3.1), we have

(4.4)   E(0) = 1 ,   < E(u), E(v) > = e^{< u, v >} .

To see that the space ℰ generated by all exponential (or coherent) vectors is dense
in H, it suffices to note that the n-th derivative at 0 of E(z) with respect to z (which
clearly belongs to the closure of ℰ) is equal to h_n. The space ℰ is used as a convenient
domain for many operators.
Let us return for a short while to the Schrödinger representation. The vacuum vector
then is read as C e^{-x^2/4} (2.1), and if z = r + is, formula (3.3) in §1 gives us for the
coherent vector ψ(z) the formula

ψ(z) = W_{r,s} 1 = C e^{-irs} e^{isx} e^{-(x-2r)^2/4} = C e^{-r^2 - irs} e^{-x^2/4} e^{zx}

(4.5)        = C e^{-(|z|^2 + z^2)/2} e^{-x^2/4} e^{zx} .

These are Gaussian states, closely related to the minimal uncertainty states of §1,
subsection 4. The exponential vectors have the slightly simpler form C e^{zx - z^2/2} e^{-x^2/4}.
Returning to the Gaussian representation, we see that the exponential vector E(z)
is read as the function

(4.6)   E(z) = Σ_k (z^k/k!) h_k = e^{zx - z^2/2} .

We get in this way the well-known generating function for Hermite polynomials. Also,
we have an explicit formula for the action of Weyl operators on exponential vectors

(4.7)   W_z E(u) = e^{-z̄u - |z|^2/2} E(z + u) ,



which will become the basis for the extension of the theory to several (possibly infinitely
many) dimensions, since the product z̄u can be interpreted as a complex scalar product,
and formula (4.7) thus depends only on the complex Hilbert space structure.
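The generating function (4.6) is easy to test numerically, using the classical recursion h_{k+1}(x) = x h_k(x) - k h_{k-1}(x) for probabilists' Hermite polynomials; the check below is our own illustration, with arbitrary sample values of x and z.

```python
from math import exp, factorial

def hermites(x, K):
    """Probabilists' Hermite polynomials h_0 .. h_{K-1} at the point x,
    via the recursion h_{k+1} = x h_k - k h_{k-1}."""
    vals = [1.0, x]
    for k in range(1, K - 1):
        vals.append(x * vals[-1] - k * vals[-2])
    return vals

x, z = 1.3, 0.45
K = 40
h = hermites(x, K)

# (4.6): sum_k z^k h_k(x) / k! = e^{zx - z^2/2}
series = sum(z ** k / factorial(k) * h[k] for k in range(K))
assert abs(series - exp(z * x - z ** 2 / 2)) < 1e-10
```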
We are going to describe in Chapter IV, in a very general situation, a more complete
family of unitary "Weyl operators", which in our case act as follows (λ being a real
parameter)

(4.8)   W(z, λ) E(u) = exp(-z̄ e^{iλ} u - |z|^2/2) E(e^{iλ} u + z) .

These operators are related to the limits of discrete Weyl operators in the de Moivre-
Laplace theory of Chapter II, §3. We shall return to this subject in Chapter V.

The complex Gaussian model

5   Let us associate with every h ∈ H the function Φ_h : z ↦ < h, E(z) > on ℂ. The
mapping Φ is antilinear and injective, since the exponential vectors are total in H. The
image of the vector Σ_n (c_n/n!) h_n is the entire function Σ_n (c̄_n/n!) z^n, and in fact H is mapped
onto a Hilbert space of entire functions, the exponential vector E(u) being transformed
into the exponential function e^{ūz}, and the creation and annihilation operators being
read respectively on this space as the operators of multiplication by z and of derivation.
The scalar product is given by the formula

if  F(z) = Σ_n a_n z^n/n!  and  G(z) = Σ_n b_n z^n/n! ,  then  < F, G > = Σ_n ā_n b_n/n! .

This means that < z^m, z^n > = δ_{mn} n!, and this property is realized by the ordinary
L^2 scalar product with respect to the Gaussian measure ν with density (1/π) e^{-|z|^2}
relative to the Lebesgue measure dx dy on ℂ. Therefore instead of mapping H onto all
of L^2(γ) for a real Gaussian measure, we have mapped it onto a subspace of L^2(ν) for a
complex Gaussian measure. This interpretation is due to Bargmann in finite dimensions,
and to Segal in infinitely many dimensions (Segal prefers to deal with a linear mapping on
a space of antiholomorphic functions, but the idea is the same).
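The relation < z^m, z^n > = δ_{mn} n! can be verified by quadrature; in the sketch below (ours, assuming NumPy) the angular integral has been carried out by hand, leaving only the radial one.

```python
import numpy as np
from math import factorial

# (1/pi) * integral of conj(z)^m z^n e^{-|z|^2} dx dy over C: in polar coordinates
# the angular integral vanishes unless m = n, and for m = n the pairing reduces
# (after the substitution t = r^2) to the radial integral of t^n e^{-t}, i.e. n!.
t = np.linspace(0.0, 60.0, 200001)

def pairing(m, n):
    if m != n:
        return 0.0
    f = t ** n * np.exp(-t)
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))   # trapezoid rule

for n in range(6):
    assert abs(pairing(n, n) - factorial(n)) < 1e-5
assert pairing(2, 3) == 0.0
```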
Thus we end up with four interpretations of the canonical pair Hilbert space :
Schrödinger's L^2(ℝ), the real Gaussian L^2(γ), the discrete Hilbert space ℓ^2 with
the Heisenberg matrices, and the subspace of entire functions in the complex Gaussian
space L^2(ν).

Classical states

6   This subsection is borrowed mostly from Davies' book [Dav]. Our aim is to compute
some explicit laws, and to give still another construction of quantum Gaussian laws.
First of all, let us denote by ℙ_w the probability law (pure state) corresponding to the
coherent vector ψ(w), with w = u + iv. Setting z = r + is we compute the characteristic
function

𝔼_w [e^{-i(rP - sQ)}] = < ψ(w), W_{r,s} ψ(w) > = e^{-|w|^2} < E(w), W_z E(w) >
(6.1)   = e^{-|w|^2} e^{-z̄w - |z|^2/2} < E(w), E(z + w) > = e^{-2irv + 2isu - |z|^2/2} .

This is a Gaussian ch.f. of minimal uncertainty, and we can deduce from it the classical
ch.f.'s of the individual variables P, Q

𝔼[e^{isQ}] = e^{2isu - s^2/2} ,   𝔼[e^{-irP}] = e^{-2irv - r^2/2} ,

which means that replacing the vacuum by a coherent state has shifted the means of
P and Q by 2v and 2u respectively. Let us compute also the distribution of the
number operator N, using the expansion of ψ(w) in the orthonormal basis which
diagonalizes N :

(6.2)   ℙ_w{N = k} = < ψ(w), I_{N=k} ψ(w) > = e^{-|w|^2} |w|^{2k}/k! ,

which is a Poisson law with mean |w|^2.


Classical states are mixtures of coherent states, i.e. are integrals ∫ ℙ_z μ(dz) for
some probability measure μ on ℂ. The corresponding density operator ρ is given by

< f, ρ g > = ∫ < f, ψ(z) > < ψ(z), g > μ(dz)

and the matrix elements of ρ in the Hermite polynomial basis are easily computed.
Taking for μ a Gaussian measure, we may construct in this way all possible Gaussian
laws for the canonical pair.
To describe the law of the observable N in a classical state, we use the generating
function 𝔼[λ^N]. Under the law ℙ_z this function is e^{(λ-1)|z|^2}, which we integrate
w.r.t. μ(dz). In particular, if μ is the complex Gaussian law with density (1/πa) e^{-|z|^2/a},
ρ is diagonal in the Hermite polynomial basis (i.e. is a function of N) and we have

𝔼[λ^N] = 1/(1 + a - aλ) = (1 - b)/(1 - λb)   with   b = a/(1 + a) .

Then ℙ{N = n} = (1 - b) b^n, a geometric law, and therefore

(6.3)   ρ = e^{-cN}/Tr(e^{-cN})   with   b = e^{-c} .
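As a check (ours, assuming NumPy) of the geometric law: mixing the Poisson generating function e^{(λ-1)|z|^2} over an exponential variable |z|^2 of mean a must reproduce (1-b)/(1-λb).

```python
import numpy as np

a, lam = 0.8, 0.35
s = np.linspace(0.0, 80.0, 400001)

# if z is complex Gaussian with density (1/(pi a)) e^{-|z|^2/a}, then |z|^2 is
# exponential with mean a; mix the Poisson generating function over it
integrand = np.exp((lam - 1.0) * s) * np.exp(-s / a) / a
gf = float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(s)))  # trapezoid rule

b = a / (1.0 + a)
assert abs(gf - (1 - b) / (1 - lam * b)) < 1e-6     # geometric generating function
assert abs(gf - 1.0 / (1.0 + a - a * lam)) < 1e-6   # same thing, unsimplified
```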

The Ornstein-Uhlenbeck semigroup (P_t) has -N as its infinitesimal generator, i.e.
we have P_t = e^{-tN}. The density operator ρ thus appears to be an operator of this
semigroup, normalized to have unit trace. This has a physical interpretation as follows.
Consider a quantum system with a hamiltonian H bounded from below ; then the density
operator e^{-H/kT}, normalized (if possible) to have unit trace, represents a statistical
equilibrium of the system at absolute temperature T, k denoting the Boltzmann
constant. Since the harmonic oscillator hamiltonian differs from N by a constant multiple
of the identity, which disappears by normalization, the law (6.3) appears as an equilibrium
state of the harmonic oscillator at a strictly positive temperature, while minimal
uncertainty corresponds to zero temperature. These features appear again in the infinite
dimensional case; however, the Ornstein-Uhlenbeck semigroup operators no longer have
a finite trace, and the temperature states cannot be represented as mixtures of coherent
states : they must be described by different representations of the CCR. This is similar
to the fact that dilations of a Gaussian measure give rise to mutually singular measures
if the space is infinite dimensional.
Chapter IV

Fock Space (1)


The preceding chapters dealt with the non-commutative analogues of discrete r.v.'s,
then of real valued r.v.'s, and we now begin to discuss stochastic processes. We start
with the description of Fock space (symmetric and antisymmetric) as it is usually
given in physics books. Then we show that boson Fock space is isomorphic to the L^2
space of Wiener measure, and interpret on Wiener space the creation, annihilation and
number operators. We proceed with the Poisson interpretation of Fock space, and the
operator interpretation of the Poisson multiplication. We conclude with multiplication
formulas, and the useful analogy with "toy Fock space" in Chapter II, which leads to
the antisymmetric (Clifford) multiplications. All these operations are special cases of
Maassen's kernel calculus (§4).

§1. BASIC DEFINITIONS

Tensor product spaces

1   Let H be a complex Hilbert space. We consider its n-fold Hilbert space tensor
power H^⊗n, and define

(1.1)   u_1 ∘ ... ∘ u_n = (1/n!) Σ_{σ∈G_n} u_{σ(1)} ⊗ ... ⊗ u_{σ(n)} ,

(1.2)   u_1 ∧ ... ∧ u_n = (1/n!) Σ_{σ∈G_n} ε_σ u_{σ(1)} ⊗ ... ⊗ u_{σ(n)} ,

G_n denoting the group of permutations σ of {1,...,n}, with signature ε_σ. The symbol
∧ is the exterior product, and ∘ the symmetric product. The closed subspace of H^⊗n
generated by all vectors (1.1) (resp. (1.2)) is called the n-th symmetric (antisymmetric)
power of H, and denoted by H^{∘n} (H^{∧n}). Usually, we denote it simply by H_n, adding ∘
or ∧ only in case of necessity. We sometimes borrow Wiener's probabilistic terminology
and call it the n-th chaos over H. We make the convention that H_0 = ℂ, and the
element 1 ∈ ℂ is called the vacuum vector and denoted by 1.
Given some subspace U of H (possibly H itself) we also define the incomplete n-th
chaos over U to be the subspace of H_n consisting of linear combinations of products
(1.1) or (1.2) with u_i ∈ U. It does not seem necessary to have a general notation for it.
In the physicists' use, if H is the Hilbert space describing the state of one single
particle, H_n describes a system of n particles of the same kind and is naturally called
the n-particle space. The fact that these objects are identical is expressed in quantum
mechanics by symmetry properties of their joint wave function w.r.t. permutations,
symmetry and antisymmetry being the simplest possibilities. Mathematics allows other
"statistics", but they have not (yet ?) been observed in nature. We tend to favour bosons
over fermions in these notes, since probabilistic interpretations are easier for bosons.
One can find in the literature two useful norms or scalar products on H_n, differing
by a scalar factor. The first one is that induced by H^⊗n,

(1.3)   < u_1 ⊗ ... ⊗ u_n , v_1 ⊗ ... ⊗ v_n > = < u_1, v_1 > ... < u_n, v_n > .

According to (1.2) we then have, in the antisymmetric case for instance (with σ and τ
ranging over the permutation group G_n)

< u_1 ∧ ... ∧ u_n , v_1 ∧ ... ∧ v_n > = (1/n!)^2 Σ_{σ,τ} ε_σ ε_τ < u_{σ(1)}, v_{τ(1)} > ... < u_{σ(n)}, v_{τ(n)} > .

Summing over τ cancels one of the factors 1/n!, but it is more convenient to cancel
both of them, and to define

(1.4)   < u_1 ∧ ... ∧ u_n , v_1 ∧ ... ∧ v_n >_∧ = det < u_i, v_j > .

Similarly we have, replacing the determinant by a permanent (which has the same
definition as a determinant, except that the alternating factor ε_σ is omitted)

(1.5)   < u_1 ∘ ... ∘ u_n , v_1 ∘ ... ∘ v_n >_∘ = per < u_i, v_j > .

In particular, the squared norm of a symmetric or antisymmetric tensor of order n is n! times
its squared norm as an ordinary tensor; differential geometers often make this convention
too. In cases of ambiguity the two norms will be called explicitly the tensor norm and
the (anti)symmetric norm, and a convenient symbol like || ||_⊗ will be added.
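Formulas (1.4) and (1.5) can be tested against the raw definitions (1.1)-(1.2); the brute-force computation below (our own illustration, assuming NumPy) compares the tensor-norm pairing of (anti)symmetrized tensors with the determinant and the permanent of the Gram matrix.

```python
import numpy as np
from itertools import permutations
from math import factorial

rng = np.random.default_rng(0)
n, d = 3, 4
U = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
V = rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))
G = np.conj(U) @ V.T                       # Gram matrix <u_i, v_j>

def parity(p):
    """Signature of a permutation, by sorting with transpositions."""
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

# tensor-norm pairings of the (anti)symmetrized tensors (1.1), (1.2)
anti = sum(parity(s) * parity(t) * np.prod([G[s[i], t[i]] for i in range(n)])
           for s in permutations(range(n)) for t in permutations(range(n))) / factorial(n) ** 2
symm = sum(np.prod([G[s[i], t[i]] for i in range(n)])
           for s in permutations(range(n)) for t in permutations(range(n))) / factorial(n) ** 2

per = sum(np.prod([G[i, t[i]] for i in range(n)]) for t in permutations(range(n)))

assert np.allclose(anti * factorial(n), np.linalg.det(G))   # (1.4): n! * tensor pairing = det
assert np.allclose(symm * factorial(n), per)                # (1.5): n! * tensor pairing = per
```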
As an illustration, let us start with an o.n. basis (e_i)_{i∈I} for H and construct bases
for H_n. First of all, in the antisymmetric case, vectors of the form e_{i_1} ∧ ... ∧ e_{i_n} with
i_1 < ... < i_n in some arbitrary ordering of the index set I constitute an o.n. basis of
the n-th chaos space in the antisymmetric norm. Once the ordering has been chosen,
such a vector can be uniquely denoted by e_A where A is the finite subset {i_1,...,i_n}
of I, a notation we used in Chapter II, and which is now extended to infinitely many
dimensions. In the Dirac notation, vectors e_i are states labelled |i>, e_A is labelled
|i_1,...,i_n>, and the indicator function of A appears as a set of "occupation numbers"
equal to 0 or 1 ("Pauli's exclusion principle"). The vacuum (no-particle state) is |0>
in this notation.
In the symmetric case, we get an orthogonal, but not orthonormal, basis consisting
of the vectors e_{i_1}^{∘α_1} ∘ ... ∘ e_{i_n}^{∘α_n}, with i_1 < ... < i_n as above, α_1,...,α_n being strictly
positive integers. Ascribing the occupation number 0 to the remaining indices, we
define an "occupation function" α : i ↦ α(i) on the whole of I, and the basis
vectors can be unambiguously denoted by e_α. We denote by |α| the sum of the
occupation numbers and set α! = Π_i α(i)! (with the usual convention 0! = 1).
Then the squared tensor norm of e_α is α!/|α|!, and the squared symmetric norm is
simply α!. It is sometimes convenient to consider the ket |e_i> as a coordinate mapping
X_i (an antilinear coordinate mapping : if we had used the linear bra mapping, the
multiplication of a mapping by a complex scalar would not be the usual one). Then
e_α is also interpreted as a mapping, the (anti)monomial X^α, and the elements of the
n-th chaos appear as homogeneous antipolynomials of total degree n. The symmetric
norm on the space of polynomials, given by ||X^α||^2 = α!, and the corresponding scalar
product, appear in classical harmonic analysis on ℝ^n. They are usually given the form
< P, Q > = P̄(D) Q(X) |_{X=0}, P(D) being understood as a differential operator with
constant coefficients.

Fock spaces

2   To describe systems of an arbitrary (maybe variable, maybe random) number of
particles, we take the direct sum of the spaces H_n. More precisely, the symmetric
(antisymmetric) Fock space Γ(H) over H is the Hilbert space direct sum of all the
symmetric (antisymmetric) chaos, with the corresponding symmetric (antisymmetric)
scalar product. However, we prefer the following representation, which we already used
in the harmonic oscillator case : an element of Fock space is a series

(2.1)   F = Σ_n f_n/n!

of elements f_n ∈ H_n such that

(2.2)   ||F||^2 = Σ_n ||f_n||^2/n! < ∞ .

In this formula the tensor norm is used, the factorials taking care of the change in
norm : the n-th chaos subspace with the induced norm then is isometric to the n-th
symmetric (antisymmetric) tensor product with its symmetric (antisymmetric) norm.
REMARK. In the probabilistic interpretations of Fock space, F is interpreted as a random
variable, and (f_n) as its representing sequence in the chaos expansion (comparable
to a sequence of Fourier coefficients). The "=" sign then has a non-trivial meaning,
namely the Wiener-Itô multiple integral representation. Many results can thus be
expressed in two slightly different ways, considering either the r.v. F or its representing
sequence (f_n).
As we defined in subsection 1 the incomplete chaos spaces over a prehilbert space
U, we may define the incomplete Fock space, consisting of finite sums (2.1) where
f_n belongs to the incomplete n-th chaos. A convenient notation for it is Γ_0(H)
(Slowikowski [Slo1]). It is useful as a common domain for many unbounded operators
on Fock space.
In this chapter, we deal essentially with the case of the Fock space Φ over H =
L^2(ℝ_+). An element of H_n then is a function (class) f(s_1,...,s_n) of n variables, and
since it is either symmetric or antisymmetric, it is determined by its restriction to the
increasing simplex of ℝ_+^n, i.e. the set Σ_n = {s_1 < ... < s_n}. Therefore, the symmetric
and antisymmetric Fock spaces over H are naturally isomorphic. In strong contrast
with this, the antisymmetric Fock space over a finite dimensional Hilbert space is finite
dimensional, while a symmetric Fock space is always infinite dimensional.

Since all separable infinite dimensional Hilbert spaces H are isomorphic, our
restriction to L^2(ℝ_+) is inessential in many respects. One may object that using such an
isomorphism is artificial, but there is a ready answer : in the case of a finite dimensional
space E, no one finds it artificial to order a basis of E to construct a basis of the
exterior algebra ΛE : what we are doing is the same thing for a continuous basis
instead of a discrete one. Though the level of "artificiality" is the same, it requires a
deeper theorem to perform this operation : every reasonable non-atomic measure space
(E, ℰ, μ) is isomorphic with some interval of the line provided with Lebesgue measure,
and this is what we call "ordering the basis".
REMARK. The set of all sums Σ_n f_n/n! with f_n ∈ H^⊗n and Σ_n ||f_n||^2/n! < ∞
is called the full Fock space over H. As it has recently played an interesting role in
non-commutative probability, we will devote to it occasional comments.
We begin with a remark from Parthasarathy-Sinha [PaS6]. If H = L^2(ℝ_+), the
mapping (s_1, s_2, ..., s_n) → (s_1, s_1+s_2, ..., s_1+...+s_n) is a measure preserving isomorphism
between ℝ_+^n and the increasing simplex Σ_n, and leads to an isomorphism between
the Hilbert space direct sums ⊕_n L^2(ℝ_+)^⊗n (the full Fock space) and ⊕_n L^2(Σ_n)
(the symmetric/antisymmetric Fock space). However, this isomorphism is less simple
(uses more of the structure of ℝ_+) than the isomorphism between the symmetric and
antisymmetric spaces.

Exponential vectors

3   This subsection concerns symmetric Fock space only. Given h ∈ H, we define the
exponential vector E(h)

(3.1)   E(h) = Σ_n h^{⊗n}/n!   (h^{∘n} = h^{⊗n}) .

The representing sequence of E(h) is thus (h^{⊗n}). In particular, the vacuum vector 1
is the exponential vector E(0).
Let us compute the scalar product of two exponential vectors E(g) and E(h). Since
g^{∘n} = g^{⊗n}, we have < g^{∘n}, h^{∘n} > = < g, h >^n, hence using (2.2)

(3.2)   < E(g), E(h) > = e^{< g, h >} .
As in the case of the harmonic oscillator, we call coherent vectors the normalized
exponential vectors, which also define coherent states.
Let us indicate some useful properties of exponential vectors. First of all, differen-
tiating n times E(th) at t = 0 we get h^{∘n}. Linear combinations of such symmetric
powers generate H_n, by a polarization formula like

(3.3)   u_1 ∘ ... ∘ u_n = (1/(2^n n!)) Σ_ε ε_1 ... ε_n (ε_1 u_1 + ... + ε_n u_n)^{∘n} ,

the sum being over all choices ε_i = ±1. Thus the subspace generated by exponential
vectors is dense in Γ(H). We call it the exponential domain and denote it by ℰ.
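The polarization formula (3.3) lends itself to a direct numerical test on C^d; the code below (our own illustration, assuming NumPy) compares both sides as flat arrays in the tensor power.

```python
import numpy as np
from itertools import permutations, product
from math import factorial

rng = np.random.default_rng(1)
d, n = 3, 3
u = [rng.normal(size=d) for _ in range(n)]

def tensor(vectors):
    t = np.array([1.0])
    for w in vectors:
        t = np.kron(t, w)
    return t

def sym(vectors):
    """The symmetrized tensor product (1.1), as a flat array."""
    return sum(tensor(p) for p in permutations(vectors)) / factorial(len(vectors))

lhs = sym(u)

# right side of (3.3): average of signed n-th powers over all sign choices
rhs = np.zeros(d ** n)
for eps in product([1, -1], repeat=n):
    s = sum(e * w for e, w in zip(eps, u))
    rhs += np.prod(eps) * tensor([s] * n)
rhs /= 2 ** n * factorial(n)

assert np.allclose(lhs, rhs)
```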
On the other hand, any finite system E(f_i) of (different) exponential vectors is
linearly independent. Let us reproduce the proof of this result from Guichardet's notes
[Gui]. We assume a linear relation Σ_i λ_i E(f_i) = 0 and prove that its coefficients
λ_i are all equal to 0. For every g ∈ H the relation Σ_i λ_i < E(f_i), E(g) > = 0 gives
Σ_i λ_i e^{< f_i, g >} = 0. Replacing g by g + th and differentiating k times at t = 0, we get

Σ_i λ_i e^{< f_i, g >} < f_i, h >^p = 0   (p = 0, 1, ..., k-1) .

On the other hand, det ( < f_i, · >^p ) = Π_{i<j} < f_i - f_j, · > (Vandermonde determinant).
For every pair (i,j) the set { < f_i - f_j, · > ≠ 0 } is the complement of a hyperplane,
hence the intersection of these sets is not empty, and the determinant cannot vanish
identically. Choosing a point where it does not vanish, we see that all λ_i must be equal
to zero.
Symmetric Fock space over a Hilbert space can be described entirely in terms of
exponential vectors. Consider two Hilbert spaces H and Φ and a mapping φ : H → Φ
such that < φ(f), φ(g) > = e^{< f, g >} for f, g ∈ H, and such that the image φ(H) generates Φ.
Then it is easy to construct an isomorphism between Φ and the symmetric Fock space
over H which transforms φ into the exponential mapping.

Creation and annihilation operators

4   The creation operators a+_h (b+_h) are indexed by a vector h ∈ H : they climb one
step on the ladder of chaos spaces, mapping H_n into H_{n+1} as follows

(4.1)   a+_h (u_1 ∘ ... ∘ u_n) = h ∘ u_1 ∘ ... ∘ u_n ;   b+_h (u_1 ∧ ... ∧ u_n) = h ∧ u_1 ∧ ... ∧ u_n .

In particular, a+_h 1 = h, b+_h 1 = h. In the symmetric case, and using the symmetric norm
on both spaces, a+_h has norm √(n+1) ||h|| on H_n. Indeed, we may assume that ||h|| = 1, then
take h as the vector e_1 in some o.n. basis (e_n) of H, and the result follows from the
computation of the squared norm of e_α at the end of subsection 1. The same reasoning
in the antisymmetric case gives ||b+_h|| = ||h||.
The annihilation operator a_{h'}, or b_{h'}, is indexed by an element h' of the dual space
H' of H, and goes one step down the ladder : a hat on a vector meaning that it is
omitted, we have

(4.2)   a_{h'} (u_1 ∘ ... ∘ u_n) = Σ_i h'(u_i) u_1 ∘ ... ∘ û_i ∘ ... ∘ u_n ,

(4.3)   b_{h'} (u_1 ∧ ... ∧ u_n) = Σ_i (-1)^{i-1} h'(u_i) u_1 ∧ ... ∧ û_i ∧ ... ∧ u_n .

In particular, a_{h'} 1 = 0, b_{h'} 1 = 0. The adjoint of a+_{|h>} then is a-_{<h|}, a bra vector
belonging to H' (same property for b±). This notation, due to Parthasarathy, is by far
the best, but it is more usual to omit the Dirac brackets and to write a+_h, a-_h with
h ∈ H, annihilation operators being then antilinear in h.
Creation and annihilation operators are extended by linearity to the incomplete Fock
space, and then by closure to the largest possible domain. In the antisymmetric case,
the extension consists of bounded operators. In the symmetric case, the domain of a±
consists of those elements Σ_n f_n/n! of Fock space such that Σ_n ||a± f_n||^2/n! < ∞,
the value of the operator on this domain being the obvious one. In particular, creation
and annihilation operators are defined on the exponential domain, and we have the very
useful properties

(4.4)   a-_h E(f) = < h, f > E(f) ;   a+_h E(f) = (d/dε) E(f + εh) |_{ε=0} .
REMARKS. 1) The exponential domain ℰ is stable by annihilation operators, but not
by creation operators. One sometimes uses the enlarged exponential domain Γ_1(H),
generated by products u ∘ E(v) where u belongs to the incomplete Fock space Γ_0(H)
and v to H. For details, see for instance Slowikowski [Slo1]. This domain is clearly
stable by all creation operators, since a+_h is symmetric multiplication by h. It is stable
by annihilation operators, because a-_h acts as a derivation w.r.t. the product ∘.
2) If instead of representing an element of Fock space as a sum Σ_n f_n/n! one uses
the representing sequence (f_n), then the description of a± becomes slightly simpler.
The representing sequence (g_n) of g± = a±_h f is (in the symmetric case)

(4.5)   g-_n = < h, f_{n+1} > ,   g+_n = n h ∘ f_{n-1} ,

where on the left side we have a contraction with h.
3) A useful subspace included in the domain of all creation and annihilation operators
consists of the sums Σ_n f_n/n! such that Σ_n n ||f_n||^2/n! < ∞ (the domain of the square
root of the number operator, as we shall see later).
A slightly more delicate point consists in checking that each operator a± as we
have defined it is exactly the adjoint of its mate. We reproduce the proof from Bratteli-
Robinson [BrR2], but for the sake of completeness only : the result will never be applied.
The relation < x, a-_h y > = < a+_h x, y > is obvious for x, y ∈ Γ_0 ; keeping y ∈ Γ_0 fixed we
extend it to x ∈ D(a+_h). Then it is clear that the mapping < x, a-_h · > is continuous on Γ_0,
meaning that x ∈ D((a-_h)*) and (a-_h)* x = a+_h x. Conversely, assume that x ∈ D((a-_h)*) and
put (a-_h)* x = z ; expand along the chaos spaces, x = Σ_n x_n and z = Σ_n z_n. Then the
relation < x, a-_h y > = < z, y > for y ∈ Γ_0 gives z_n = a+_h x_{n-1}, hence Σ_n ||a+_h x_n||^2/n! < ∞,
and finally x ∈ D(a+_h), a+_h x = z.

Commutation and anticommutation relations

5   An obvious computation on the explicit form (4.1) of the operators a±, b±
shows that on the algebraic sum of the (uncompleted) chaos spaces we have the
commutation/anticommutation relations

(5.1)   [a-_h, a-_k] = 0 = [a+_h, a+_k] ;   [a-_h, a+_k] = < h, k > I ;

(5.2)   {b-_h, b-_k} = 0 = {b+_h, b+_k} ;   {b-_h, b+_k} = < h, k > I .

It is thus tempting to define in the symmetric case the following field operators with
domain Γ_0, which are good candidates for essential selfadjointness (there are similar
definitions in the antisymmetric case)

(5.3)   Q_h = a+_h + a-_h ;   P_h = i (a+_h - a-_h) .

If h is allowed to range over all of H, there is no essential difference between the
two families of operators : we have P_h = Q_{ih}. On the other hand, if H = K ⊕ iK
is the complexification of some real Hilbert space K, and h, k range over K, the
commutation relations then take an (infinite dimensional) Heisenberg-like form with
ℏ = 2 (probabilistic normalization !)

(5.4)   [P_h, P_k] = 0 = [Q_h, Q_k] ;   [P_h, Q_k] = -2i < h, k > I .

We shall not discuss essential selfadjointness of the field operators on the uncompleted
sum Γ_0 (this can be done using analytic vectors). As in the case of the harmonic
oscillator, we shall directly construct unitary Weyl operators, and deduce from them
(as generators of unitary groups) selfadjoint operators extending the above operators
on Γ_0.

We shall describe later on a probabilistic interpretation of the operators Q_h for h
real as multiplication operators, which makes much easier (if necessary at all !) the
discussion of their essential selfadjointness. The antisymmetric case does not require
such a discussion, the creation and annihilation operators being then bounded.
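Since the antisymmetric Fock space over a finite dimensional space is finite dimensional, the relations (5.2) can be verified by brute force; the sketch below (ours, assuming NumPy) realizes b±_{e_i} on the basis (e_A) of subsection 1, with the standard sign convention for wedging on the left.

```python
import numpy as np
from itertools import combinations

d = 3
subsets = [frozenset(c) for n in range(d + 1) for c in combinations(range(d), n)]
idx = {A: i for i, A in enumerate(subsets)}
dim = len(subsets)                       # 2**d

def b_plus(i):
    """Creation b+_{e_i} on the basis (e_A): e_i wedge e_A, with its sign."""
    M = np.zeros((dim, dim))
    for A in subsets:
        if i not in A:
            sign = (-1) ** sum(1 for j in A if j < i)  # move e_i past earlier e_j
            M[idx[frozenset(A | {i})], idx[A]] = sign
    return M

bp = [b_plus(i) for i in range(d)]
bm = [M.T for M in bp]                   # annihilation = adjoint (real matrices)

for i in range(d):
    for j in range(d):
        # {b-_{e_i}, b+_{e_j}} = <e_i, e_j> I  and  {b+, b+} = 0, as in (5.2)
        assert np.allclose(bm[i] @ bp[j] + bp[j] @ bm[i],
                           (1.0 if i == j else 0.0) * np.eye(dim))
        assert np.allclose(bp[i] @ bp[j] + bp[j] @ bp[i], 0)
```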

Second quantization

6   Let A : H → K be a bounded linear mapping between two Hilbert spaces ;
then A^⊗n is a bounded linear mapping from H^⊗n to K^⊗n ; if A is contractive or is
an isomorphism, the same is true for A^⊗n. Since this mapping obviously preserves the
symmetric and antisymmetric subspaces, we see by taking direct sums that a contraction
A of H induces a contraction between the Fock spaces (symmetric and antisymmetric)
over H and K, which we shall in both cases denote by Γ(A). It is usually called the
second quantization of A, and operates as follows

(6.1)   Γ(A) (u_1 ∘ ... ∘ u_n) = Au_1 ∘ ... ∘ Au_n

(in the antisymmetric case replace ∘ by ∧). For operators A of norm greater than 1,
Γ(A) cannot be extended as a bounded operator between Fock spaces. Given three
Hilbert spaces connected by two contractions A and B, it is trivial to check that
Γ(AB) = Γ(A)Γ(B). If A is an isomorphism, the same is true for Γ(A). Note that
Γ(A) preserves the symmetric (antisymmetric) product.
The most important case is that of H = K and unitary A. Since we obviously have
Γ(A*) = Γ(A)*, and Γ(I) = I, we see that Γ(A) is unitary on Fock space.
In the symmetric case the following very useful formula tells how Γ(A) operates on
exponential vectors

(6.2)   Γ(A) E(u) = E(Au) .

Consider now a unitary group U_t = e^{-itH} on H ; the operators Γ(U_t) constitute a
unitary group on Γ(H), whose selfadjoint generator is sometimes called the differential
second quantization of H, and is denoted by dΓ(H). Hudson-Parthasarathy [HuP1]
use the lighter notation λ(H), but the notation we will prefer later is a°(H). Thus a+
is indexed by a ket vector, a- by a bra vector, and a° by an operator.

Differentiating (6.1), it is easy to see that

(6.3)   λ(H) (u_1 ∘ ... ∘ u_n) =
        = (Hu_1) ∘ u_2 ∘ ... ∘ u_n + u_1 ∘ (Hu_2) ∘ ... ∘ u_n + ... + u_1 ∘ u_2 ∘ ... ∘ (Hu_n)

provided all u_i belong to the domain of H. The antisymmetric case is similar. This
formula is then used to extend the definition of λ(H) to more or less arbitrary operators.
A useful formula defines λ(H) on the exponential domain

(6.4)   λ(H) E(u) = a+_{Hu} E(u) .

On the other hand, λ(H) acts as a derivation w.r.t. the symmetric product. Therefore,
the enlarged exponential domain Γ_1(H) of subs. 4 is stable by the operators λ(H), at
least for H bounded.
The second quantization Γ(cI) (|c| ≤ 1) is the multiplication operator by c^n on the
n-th chaos H_n, while λ(cI) is the multiplication operator by cn on H_n. In particular,
λ(I) is multiplication by n on the n-th chaos, and is called the number operator. More
generally, whenever the basic Hilbert space H is interpreted as a space L^2(μ), we denote
by a°_h (b°_h in the antisymmetric case) the differential second quantization λ(M_h)
corresponding to the multiplication operator M_h by a real valued function h, in which
case it is clear that U_t is multiplication by exp(-ith), Γ(U_t) acts on the n-th chaos
as multiplication by exp(-it (h(s_1) + ... + h(s_n))), and a°_h then is multiplication
by h(s_1) + ... + h(s_n). The same is true in the antisymmetric case, the product of an
antisymmetric function by a symmetric one being antisymmetric.
7 The following useful formulas concerning symmetric Fock space are left to the reader
as exercises :

(7.1) < a-_h E(f), a-_k E(g) > = < f, h > < k, g > < E(f), E(g) >

(7.2) < a+_h E(f), a-_k E(g) > = < h, g > < k, g > < E(f), E(g) >

(7.3) < a+_h E(f), a+_k E(g) > = [ < f, k > < h, g > + < h, k > ] < E(f), E(g) > .

About second quantization operators, we recall that

(7.4) λ(A) E(f) = a+_{Af} E(f) .

From this and the preceding relations, we deduce

(7.5) < λ(A) E(f), λ(B) E(g) > = [ < Af, g > < f, Bg > + < Af, Bg > ] < E(f), E(g) >

(7.6) < a-_h E(f), λ(B) E(g) > = < f, h > < f, Bg > < E(f), E(g) >

(7.7) < a+_h E(f), λ(B) E(g) > = [ < h, g > < f, Bg > + < h, Bg > ] < E(f), E(g) > .

8 This subsection can be omitted without harm, since it concerns the full Fock space,
neither symmetric nor antisymmetric. In this case, "particles" belonging to n-particle
systems are individually numbered, and we may in principle annihilate the i-th particle,
1. Basic definitions 65

or create a new particle with a given rank in the system's numbering. Among these many
possibilities, we decide to call c-_h and c+_h the operators on the algebraic (full) Fock space
that kill or create at the lowest rank, that is for n > 0

(8.1) c-_h (h_1 ⊗ ... ⊗ h_n) = < h, h_1 > h_2 ⊗ ... ⊗ h_n

(8.2) c+_h (h_1 ⊗ ... ⊗ h_n) = h ⊗ h_1 ⊗ ... ⊗ h_n .

For n = 0, we put c-_h 1 = 0, c+_h 1 = h. It is very easy to see that c-_h and c+_h are


mutually adjoint on their domain, and that

(8.3) c-_h c+_k = < h, k > I .

If h = k and ‖h‖ = 1 we see that c+_h is an isometry, hence c+_h has a bounded extension
for every h, and taking adjoints the same is true for c-_h. The C*-algebra generated by
all operators c+_h is the simple Cuntz algebra O_∞, which has elicited considerable interest
among C*-theorists. On the other hand, when the basic Hilbert space is L²(ℝ₊), it
allows the development of a theory of stochastic integration with respect to "free noise"
(see Voiculescu [Voi1], Speicher [Spe1]).
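A toy implementation of the full Fock space over ℂ² (truncated at three particles; dimensions and vectors below are arbitrary choices) makes relation (8.3) and the isometry of c+_h for unit h concrete:

```python
import numpy as np
from itertools import product

d, Npart = 2, 3                              # one-particle space C^d, at most Npart particles
basis = [t for m in range(Npart + 1) for t in product(range(d), repeat=m)]
idx = {t: i for i, t in enumerate(basis)}
dim = len(basis)

def c_plus(h):                               # create at the lowest rank: h (x) h1 (x) ... (x) hm
    M = np.zeros((dim, dim), dtype=complex)
    for t in basis:
        if len(t) < Npart:                   # creation out of the top level is truncated away
            for j in range(d):
                M[idx[(j,) + t], idx[t]] += h[j]
    return M

h = np.array([0.6, 0.8j])
k = np.array([0.3, 0.4 - 0.5j])
Cp_k = c_plus(k)
Cm_h = c_plus(h).conj().T                    # c-_h is the adjoint: kills <h, h1> at the lowest rank

small = [i for i, t in enumerate(basis) if len(t) < Npart]   # states unaffected by truncation
S = np.ix_(small, small)

# (8.3): c-_h c+_k = <h, k> I
assert np.allclose((Cm_h @ Cp_k)[S], np.vdot(h, k) * np.eye(len(small)))

# for ||h|| = 1, c+_h is an isometry
hn = h / np.linalg.norm(h)
Cp = c_plus(hn)
assert np.allclose((Cp.conj().T @ Cp)[S], np.eye(len(small)))
print("full Fock space relations verified on", dim, "basis vectors")
```

The restriction to `small` simply avoids the states for which creation leaves the truncated space.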

Weyl operators

9 Let G be the group of rigid motions of the Hilbert space H : an element of G can
be described as a pair α = (u, U) consisting of a vector u ∈ H and a unitary operator U,
acting on h ∈ H by αh = Uh + u. If β = (v, V), we have

(9.1) βα = (v, V)(u, U) = (v + Vu, VU) .

We define an action of G on exponential vectors of Fock space as follows

(9.2) W_α E(h) = e^{-C_α(h)} E(αh) where C_α(h) = < u, Uh > + ‖u‖²/2 .

Since different exponential vectors are linearly independent, we may extend W_α by
linearity to the space ℰ of linear combinations of exponential vectors, and it is clear
that ℰ is stable. Weyl operators W_α corresponding to pure translations α = (u, I) are
usually denoted simply by W_u.
Let us prove the following relation, which may remind probabilists of the definition
A_{s+t} = A_s + A_t ∘ θ_s of additive functionals, up to the addition of a purely imaginary
term

(9.3) C_{αβ}(h) = C_β(h) + C_α(βh) - i Im < u, Uv > ,
        (a)          (b)        (c)

where (a), (b), (c) denote the terms C_{αβ}(h), C_β(h), C_α(βh). Indeed we have
a = < u + Uv, UVh > + ‖u + Uv‖²/2, b = < v, Vh > + ‖v‖²/2,
c = < u, U(Vh + v) > + ‖u‖²/2, so that a - b - c is equal to -< u, Uv > + ½ (< u, Uv > +
< Uv, u >) = -i Im < u, Uv > . From (9.2) we get the basic Weyl commutation relation
in general form
in general form

(9.4) W_α W_β = e^{-i Im < u, Uv >} W_{αβ} .



This is the same relation as the harmonic oscillator's formula (2.4) in Chapter III, §2,
subsection 2. We have according to (9.2)

< W_α E(k), W_α E(h) > = e^{-C̄_α(k)} e^{-C_α(h)} < E(αk), E(αh) > ;

the last scalar product being equal to e^{< αk, αh >}, there remains the exponential of

-< Uk, u > - ‖u‖²/2 - < u, Uh > - ‖u‖²/2 + < Uk + u, Uh + u > = < Uk, Uh > = < k, h > .

Therefore < W_α E(k), W_α E(h) > = < E(k), E(h) >, and W_α can be ex-
tended linearly as an isometry of the space ℰ. It can then be extended by continuity as
an isometry in Fock space, and the Weyl commutation relation also extends, showing
that W_α is invertible, hence unitary.
The mapping α ↦ W_α is a projective unitary representation -- i.e. a unitary
representation up to a phase factor -- of the group G of rigid motions of H. We
use it to define one-parameter unitary groups, and then (taking generators) selfadjoint
operators, i.e. random variables.
First of all, the operators Z_t = W_{(th, I)} constitute a unitary group according to (9.4).
Then on exponential vectors we have from (9.2)

(9.5) Z_t E(k) = e^{-t < h, k > - t²‖h‖²/2} E(k + th) ,

and the derivative at t = 0 is (a+_h - a-_h) E(k) according to (4.4). Otherwise stated,
Z_t = exp(-it P_h), where the selfadjoint generator P_h is an extension of the operator
(5.3).
Next, consider the operators Z_t = W_{(0, U_t)}, where (U_t) = (e^{itH}) is a unitary group on H.
Then according to (9.2) the unitary group (Z_t) is the second quantization of (U_t), and
its generator is the differential second quantization λ(H) (cf. subs. 4).
REMARK. The C*-algebra of operators on Fock space generated by pure translation
Weyl operators W_u is called the C*-algebra of the CCR, and it is studied in detail in
the second volume of Bratteli-Robinson [BrR2]. Its representations are called represen-
tations of the CCR. The main point about it is that, in infinite dimensions, there is
nothing like the Stone-von Neumann theorem, and there exists a confusing variety of
inequivalent representations of the CCR. We shall see in Chapter V examples of useful
(reducible) representations, similar to Gaussian laws of non-minimal uncertainty for the
canonical pair (positive temperature states of the harmonic oscillator, see the end of
Chapter III). However, these laws do not live on Fock space, and the stress is shifted
from the Hilbert space on which the Weyl operators act to the CCR C*-algebra itself,
which is the same in all cases. On the other hand, it isn't a very attractive object.
For instance, any two Weyl operators corresponding to different translations are at the
maximum distance (that is, 2) allowed between unitaries, so that the CCR algebra is
never separable.
10 We shall meet several times unitary groups of operators Z_t represented in the form

(10.1) Z_t = e^{iα(t)} W_{(u_t, U_t)}

where α(t), u_t, U_t are respectively real, vector and operator valued, with time-0 values
0, 0, I. We assume they are continuous and differentiable (strongly in the case of U_t)
2. Multiple integrals 67

with derivatives α′, u′ and iH at 0. It will be useful to compute the generator (1/i) Z′_0
on the exponential domain. We start from the formula

W_{(u_t, U_t)} E(h) = e^{-‖u_t‖²/2 - < u_t, U_t h >} E(u_t + U_t h)

from which we deduce the value of Z′_0 E(h)

Z′_0 E(h) = iα′ E(h) - < u′, h > E(h) + (d/dt) E(u_t + U_t h)|_{t=0}
= ( iα′ I + a+(| u′ >) + a°(iH) - a-(< u′ |) ) E(h) ,

and therefore the generator is, on the exponential domain,

(10.2) α′ I - P(u′) + a°(H) .

11 We have seen that pure translation Weyl operators can be represented as
exp(-iP_h). We are going to define exponentials of simple operators on Fock space.
Though these results belong to the "Focklore", we refer the reader to Holevo's paper
[Hol2] which contains an interesting extension to exponentials of stochastic integrals of
deterministic processes, under suitable integrability conditions. Precisely, we are going
to compute

(11.1) e^M E(h) where M = a+(| u >) + a°(ρ) + a-(< v |) ,

where ρ is a bounded operator. We use the notations

(11.2) e(z) = e^z , e_1(z) = (e^z - 1)/z , e_2(z) = (e^z - 1 - z)/z² .

Holevo proves that the exponential series for e^M converges strongly on the exponential
domain ℰ, and we have

(11.3) e^M E(h) = exp( < v, e_1(ρ) h + e_2(ρ) u > ) E( e(ρ) h + e_1(ρ) u ) .

For ρ = 0, we get in particular

(11.4) e^{a+(|u>) + a-(<v|)} E(h) = e^{< v, h + u/2 >} E(h + u) ,

and for v = -u we get a pure translation Weyl operator.


We only give a sketch of the proof. Replace M by zM (z ∈ ℂ) and consider the
right side of (11.3) as a mapping P(z) from ℂ to operators on Fock space. It is clearly
holomorphic and we have

P(z_1 + z_2) = P(z_1) P(z_2) .

On the other hand, one may check that

D^n P(0) E(h) = M^n E(h) .

The proof is not obvious, because ℰ is not stable under M : one must work on the enlarged
domain Γ_1. Then the exponential series is the Taylor series of P(z) at 0, and its strong
convergence becomes clear.
If ρ = iH where H is selfadjoint, then e^ρ = U is unitary, and for v = -u
the operator e^M is the product of the Weyl operator W_{(e_1(ρ)u, U)} by the scalar
exp < u, (½ - e_2(ρ)) u > .

§2. FOCK SPACE AND MULTIPLE INTEGRALS

Multiple Wiener-Ito integrals

1 We have tried to make the essentials of this section easy to understand, without
diving too much into the literature (see Chapter XXI in [DMM] for details). In this
chapter we present the one dimensional theory. The case of several dimensions requires
a more complete system of notation, and is given in the next chapter.
Multiple stochastic integrals were first defined by N. Wiener in [Wie] (1938), from
which the name "Wiener chaos" comes. But Wiener's definition was not the modern
one : it was rather related to the idea of multiple integrals in the Stratonovich sense. The
commonly used definition of the so called "multiple Wiener integrals" is due to K. Ito
[Ito], dealing with the case of Gaussian random measures. The relation with symmetric
Fock space was first underlined by I. Segal ([Segl], 1956). The literature on this subject
is boundless. We may recommend the recent Lecture Notes volume by Nielsen [Nie].
Let us take for granted Wiener's (1923) theorem on the existence of Brownian
motion : let Ω be the set of all continuous functions from ℝ₊ to ℝ and let X_s
be the evaluation mapping ω ↦ ω(s). Let us provide Ω with the σ-fields ℱ, ℱ_t
generated by all mappings X_s, with s arbitrary in the first case, s ≤ t in the second
case. One can show that ℱ is also the topological Borel field on Ω for local uniform
convergence. The Wiener measure is the only probability law on Ω such that X_0 = 0
a.s., and the process (X_t) has centered Gaussian independent increments with variance
E[(X_t - X_s)²] = t - s. Paley and Wiener defined the elementary stochastic integral
∫_0^∞ f(s) dX_s of a (deterministic) L² function f(s) in 1934, a concrete example of
integration w.r.t. a spectral measure over the half line. Their method, using an isometry
property, was generalized by Ito to multiple integrals as follows.
Let E_n be the increasing (open) simplex of ℝ_+^n, i.e. the set of all n-tuples
{s_1 < s_2 < ... < s_n}. Let H be a rectangle H_1 × ... × H_n contained in E_n -- otherwise
stated, putting H_i = ]a_i, b_i] we have b_{i-1} < a_i. We define the multiple integral of X
over H as the r.v. ∏_i (X_{b_i} - X_{a_i}). This is extended to a linear map J_n from (complex)
linear combinations f_n(s_1, ..., s_n) of indicators of rectangles, to random variables, and
the isometry property

(1.1) ‖J_n(f_n)‖² = ∫_{E_n} |f_n(s_1, ..., s_n)|² ds_1 ... ds_n

shows that J_n can be extended as an isometric mapping from L²(E_n) to L²(Ω), the
image of which is called the n-th Wiener chaos and denoted by C_n. One makes the
convention that E_0 is the one point set {∅}, and that J_0(f_0) (f_0 being a constant) is
the constant r.v. f_0. Finally, an easy computation in the case of rectangles leads to the
orthogonality of J_n(f_n) and J_m(g_m) for n ≠ m, and therefore chaos spaces of different
orders are orthogonal. All this is very simple, only the following result has a not quite
trivial proof (see [DMM], Chap. XXI n° 11).
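The isometry (1.1) and the orthogonality of chaoses of different orders can be illustrated by direct Monte Carlo simulation: for a rectangle H = ]0,1] × ]2,3] ⊂ E_2, the multiple integral is just a product of two independent Gaussian increments. (Sample size, seed and tolerances below are arbitrary choices of this sketch.)

```python
import numpy as np

rng = np.random.default_rng(0)
m = 400_000
# rectangle H = ]0,1] x ]2,3], contained in the increasing simplex since b_1 = 1 <= a_2 = 2
Z1 = rng.normal(0.0, 1.0, m)     # X_1 - X_0 ~ N(0, 1)
Z2 = rng.normal(0.0, 1.0, m)     # X_3 - X_2 ~ N(0, 1), independent increment
J2 = Z1 * Z2                     # J_2(indicator of H)
J1 = Z1                          # J_1(indicator of ]0,1])

assert abs(np.mean(J2**2) - 1.0) < 0.05   # isometry (1.1): E[J_2^2] = Lebesgue measure of H = 1
assert abs(np.mean(J1 * J2)) < 0.05       # chaoses of different orders are orthogonal
assert abs(np.mean(J2)) < 0.05            # multiple integrals of order >= 1 are centered
print("Monte Carlo isometry check passed")
```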

THEOREM. The completed direct sum ⊕_n C_n is the whole of L²(Ω).


Otherwise stated, every (real or complex) r.v. f ∈ L²(Ω) has a representation¹

(1.2) f = Σ_n ∫_{E_n} f_n(s_1, ..., s_n) dX_{s_n} ... dX_{s_1} = Σ_n J_n(f_n) ,

where f_0 is the expectation of f, and the sequence of functions f_n (uniquely determined
up to null sets) satisfies the isometry property

(1.3) ‖f‖² = Σ_n ‖f_n‖² .

This will be interpreted in subsection 3 as an isomorphism between L²(Ω) and Fock
space.
2 Let us comment on the preceding construction.
a) The first and very important point is that, in the construction of multiple
integrals, the Gaussian character of the process never appears. One only makes use of
the martingale property E[X_t - X_s | ℱ_s] = 0 for s < t and of the martingale property
of X_t² - t, i.e. E[(X_t - X_s)² | ℱ_s] = t - s -- in standard probabilistic jargon,
"the angle bracket of X_t is t". There are many examples of such martingales : the
simplest after Brownian motion is the compensated Poisson process X_t = (N_t - λt)/√λ,
where (N_t) is a Poisson process with (unit jumps and) intensity λ. It turns out that
Poisson processes too have the chaotic representation property, i.e. all the L² functionals
of the Poisson process (N_t) can be expanded in multiple stochastic integrals w.r.t.
the martingale X. This was already known to Wiener, who studied Poisson chaotic
expansions under the name of "discrete chaos", but Wiener's work was partly forgotten,
and the Poisson chaotic representation property was rediscovered several times. Wiener
and Poisson are essentially the only processes with stationary independent increments
which possess the chaotic representation property. If stationarity is not required, there
is the additional possibility of choosing a variable intensity λ(t) at each point t for
the Poisson "differential" dX_t, the value λ = ∞ being allowed, and understood as a
Gaussian "differential". See for instance He and Wang [HeW].
The first example (the "Azéma martingale") of a martingale possessing the chaotic
representation property and whose increments are not independent was discovered by
Émery [Eme] in 1988. We present it in Chapter VI, end of §3.
b) Let us return to the Wiener and Poisson cases. It seems that the order structure
of ℝ₊ plays a basic role in the definition of multiple integrals, but it is not really so.
Ito had the idea of defining, for a symmetric L² function f_n(s_1, ..., s_n) of n variables,

(2.1) I_n(f_n) = ∫_{ℝ_+^n} f_n(s_1, ..., s_n) dX_{s_1} ... dX_{s_n} = n! ∫_{E_n} f_n(s_1, ..., s_n) dX_{s_1} ... dX_{s_n} = n! J_n(f_n) .

¹ The order of the differential elements dX_{s_n} ... dX_{s_1}, with the smaller time on the
right, is the natural one when iterated integrals are computed. Since it does not change
the mathematics (at least for bosons), we generally do not insist on it.

Using this notation, we may write (1.2) and (1.3) as follows, using symmetric functions
f_n :

(2.2) f = Σ_n I_n(f_n)/n! , with ‖f‖² = Σ_n ‖f_n‖²/n! .
Ito's aim was to extend the chaotic representation to functionals of arbitrary Gaussian
random measures with independent increments, and the use of symmetric functions
allows one to do it without order structure. This raises a number of interesting questions :
the integration domain E_n is most naturally an open simplex, and ℝ_+^n is not a union of
open simplexes : it seems that we are forgetting the diagonals. This is related to Wiener's
original error, to the theory of the Stratonovich integral, and to some elementary
"renormalizations" in physics.
Since the symmetric integral gives an order-free definition of the multiple integrals
w.r.t. Brownian motion, we can extend it without any work to all Lusin measurable
spaces (E, ℰ) with a non-atomic σ-finite measure μ. It suffices to use a measure
isomorphism of E onto the interval [0, μ(E)[ provided with Lebesgue measure. This
will be one of our leitmotifs in this and the following chapter.
c) Our third comment introduces the "shorthand notation" for multiple integrals,
due to Guichardet [Gui] (though Guichardet discusses Fock spaces and not stochastic
integrals, and does not use any ordering). Since ℝ₊ is totally ordered, there is a 1-1
correspondence between the increasing n-simplex E_n and the set 𝒫_n of all finite
subsets of ℝ₊ of cardinality n -- a finite subset A ∈ 𝒫_n being represented as the list
s_1 < s_2 < ... < s_n of its elements in increasing order. Then 𝒫_n gets imbedded in ℝ_+^n,
and acquires a natural σ-field and a natural measure dA, the n-dimensional volume
element ds_1 ... ds_n. We denote by 𝒫 the sum of all 𝒫_n, considered as a single measure
space. Then the n-dimensional stochastic differential dX_{s_1} ... dX_{s_n} is considered as the
restriction to 𝒫_n of a single differential dX_A. The family of functions f_n(s_1, s_2, ..., s_n)
is considered as a single measurable function f̂(A) on 𝒫, belonging to L²(𝒫) relative
to the measure dA. In this way, (1.2) and (1.3) are shortened to

(2.3) f = ∫_𝒫 f̂(A) dX_A , ‖f‖² = ∫_𝒫 |f̂(A)|² dA .

The notation f̂ is meant to suggest a kind of Fourier analysis of f, but we often omit
the hat. This notation for continuous multiple integrals is strikingly similar to that of
discrete multiple integrals in Chapter II (symmetric Bernoulli r.v.'s; toy Fock space).
See subsection 7 for a heuristic formalism to handle this analogy.
Like the symmetric notation (2.2), the shorthand notation (2.3) is not order depen-
dent : given a Lusin space (E, ℰ) with a σ-finite measure μ (non-atomic for simplicity),
we can denote by 𝒫(E) the set of all finite subsets of E, endow it with a Lusin σ-field
and a natural measure dA, and use the Guichardet notation without fear. A measurable
ordering of the space is used only to prove the existence of these objects.
J. Kupsch remarked that Guichardet's notation can be used in special relativity, since
almost all n-tuples of points in Minkowski space have different time components, which
can be used to order the n-tuple.

Wiener space and Fock space

3 The preceding remark a) about symmetric functions shows that boson Fock space
over L²(ℝ₊) is isomorphic with L²(Ω), where Ω denotes Wiener space. Indeed, this
is the reason for our choice of norms on Fock space (see (2.2)). The same is true for the
L² space of a Poisson process, and on the other hand, it is true also for antisymmetric
Fock space, a fact whose significance is not generally appreciated.
The Fock space over L²(ℝ₊) will be called simple Fock space in these notes. It has
special properties because of the order structure of the line. For example, considering
a±_h for h = I_{[0,t]} we get creation and annihilation "processes" a+_t and a-_t, and we
will see later that a+_h and a-_h are best interpreted as "operator stochastic integrals"
∫ h_s da+_s and ∫ h̄_s da-_s. In Chapter VI we describe the Hudson-Parthasarathy theory of
operator stochastic integrals. Before this, we present in Chapter V the theory of multiple
Fock spaces, which are constructed over a Hilbert space L²(ℝ₊) ⊗ 𝒦 which also has a
time structure.
We are going now to review the basic facts of boson Fock space theory, putting on
the probabilistic spectacles of the Wiener interpretation.
Symmetric Fock space is L²(Ω) and the vacuum vector 1 is the function 1. The
identification of Fock space with the L² space of a real measure provides it with a
natural conjugation f ↦ f̄. Given h ∈ H = L²(ℝ₊), let us consider the stochastic
differential equation

(3.1) Y_t = 1 + ∫_0^t Y_s h(s) dX_s ,

whose solution is the stochastic exponential of the martingale ∫_0^t h(s) dX_s. The standard
approximation of the solution by the Picard method gives the chaos expansion of Y_t,
which is

(3.2) Y_t = Σ_n ∫_{s_1 < ... < s_n ≤ t} h(s_1) ... h(s_n) dX_{s_1} ... dX_{s_n} = Σ_n (1/n!) I_n((h I_{]0,t]})^{∘n}) ,

and in particular the limit Y_∞ belongs to L² and is the exponential vector E(h).
This interpretation of exponential vectors as stochastic exponentials is entirely general,
since the Picard approximation holds for all stochastic differential equations w.r.t.
martingales and semimartingales : for instance, in the Poisson interpretation of Fock
space exponential vectors will correspond to Poisson stochastic exponentials. Let us
return to Wiener space, however.
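For the constant function h = θ I_{]0,t]}, (3.2) is the classical Hermite expansion of the stochastic exponential: I_n((θ I_{]0,t]})^{∘n}) = θ^n t^{n/2} He_n(X_t/√t), with He_n the probabilists' Hermite polynomials. This is a deterministic identity in the value x taken by X_t, which the sketch below verifies (θ, t and the truncation order are arbitrary choices):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

theta, t, N = 0.7, 2.0, 30
x = np.linspace(-3 * np.sqrt(t), 3 * np.sqrt(t), 101)      # typical values of X_t

# partial chaos sum of (3.2): sum_n theta^n t^{n/2} He_n(x / sqrt(t)) / n!
coef = [theta**n * t**(n / 2) / factorial(n) for n in range(N)]
chaos_sum = hermeval(x / np.sqrt(t), coef)

# stochastic exponential of theta X at time t: exp(theta x - theta^2 t / 2)
exact = np.exp(theta * x - theta**2 * t / 2)

assert np.allclose(chaos_sum, exact, rtol=1e-6)
print("chaos expansion of the stochastic exponential verified")
```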
It is convenient to embed the real part of H = L²(ℝ₊) into Wiener space Ω,
identifying h ∈ L² with the function h̃(t) = ∫_0^t h(s) ds. Functions of this form (continuous, equal to
0 at time 0, admitting a weak derivative in L²) are called Cameron-Martin (C-M)
functions. Of course, only real C-M functions can be considered as elements of Ω,
but complex C-M functions exist as well and constitute a complex Hilbert space with
scalar product < x, y > = ∫ x̄′(s) y′(s) ds.
Taking again h ∈ L² (real), denote by ĥ the stochastic integral ∫_0^∞ h(s) dX_s. Ac-
cording to the Cameron-Martin-Girsanov formula, we have for every positive function

f on Ω

(3.3) E[f(· + h̃)] = E[f e^{ĥ - ‖h‖²/2}] = E[f E(h)] .

Therefore if we map the positive r.v. f on Ω to the positive r.v.

(3.4) G_h f = f(· + h̃) e^{-ĥ - ‖h‖²/2} ,

the integral is preserved -- to see why, rewrite -ĥ as -ĥ(· + h̃) + ‖h‖², and
apply (3.3). Once (3.4) is proved we may associate with h a unitary group on L²(Ω),
which maps f (no longer positive) into

(3.5) T_t f = f(· - 2t h̃) e^{t ĥ - t²‖h‖²} ,

and an explicit computation shows that

T_t E(g) = exp( -t ∫ g_s h_s ds - t²‖h‖²/2 ) E(g + th) ,

which identifies T_t with the Weyl operator W_{th} for real h (see §1, subs. 9, (9.5)).
Then it is easy to identify the generator of this unitary group, which by definition is the
selfadjoint operator P_h : denote by ∇_h the derivative operator on Wiener space along
the C-M vector h̃ (i.e. ∇_h f = lim_{t↓0} (f(· + t h̃) - f(·))/t) and as usual identify the
r.v. ĥ with the corresponding multiplication operator. Then we have

(3.6) P_h = i(ĥ - 2∇_h) for real h ∈ H .

On the other hand, a direct computation on the stochastic exponential E(g) shows
that, for real h, we have ∇_h E(g) = < h, g > E(g). This is the same as the action of a-_h
on E(g). Since we know P_h = i(a+_h - a-_h), we deduce from it that Q_h is simply the
operator of multiplication by the stochastic integral ĥ on the linear space ℰ generated by
the exponential vectors. It is not difficult to see that they are exactly the same on their
maximal domain as s.a. operators. The Weyl operator W_{ih} corresponding to a purely
imaginary vector ih is equal to e^{iQ_h}, and Q_h has been computed above; therefore
W_{ih} is the operator of multiplication by e^{iĥ}. Finally, the creation operator a+_h is ĥ - ∇_h
on ℰ, and it is extended to its natural domain by closure.
Let us compute the probability distribution of P_h in the vacuum state (recalling
that 1 = E(0)) :

E[e^{-itP_h}] = < 1, W_{th} 1 > = < 1, T_t 1 > = e^{-t²‖h‖²/2} ,

from which we deduce that the r.v.'s P_h and Q_h have the same distribution. Otherwise
stated, Wiener space does not only contain the "obvious" Brownian motion (X_t), rep-
resented by the multiplication operators Q_t ; from the point of view of von Neumann's
quantum probability, it also contains an "upside down" Brownian motion represented
by the family of operators P_t.
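On one mode this "two Brownian motions" statement reads: Q = a+ + a- and P = i(a+ - a-) have the same N(0,1) law in the vacuum, although they do not commute. A truncated-matrix check (the truncation level and the hand-rolled matrix exponential are arbitrary choices of this sketch):

```python
import numpy as np

D = 40
A  = np.diag(np.sqrt(np.arange(1.0, D)), 1).astype(complex)   # annihilation
Ad = A.conj().T                                               # creation
I  = np.eye(D, dtype=complex)

def expm(M):   # scaling-and-squaring Taylor exponential
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))) + 1)
    Ms, X, T = M / 2**s, I.copy(), I.copy()
    for j in range(1, 30):
        T = T @ Ms / j
        X = X + T
    for _ in range(s):
        X = X @ X
    return X

Q = A + Ad
P = 1j * (Ad - A)
vac = np.zeros(D, dtype=complex); vac[0] = 1.0   # vacuum = E(0)

for t in [0.5, 1.0, 1.7]:
    cQ = np.vdot(vac, expm(1j * t * Q) @ vac)    # vacuum characteristic function of Q
    cP = np.vdot(vac, expm(1j * t * P) @ vac)    # same for P
    assert abs(cQ - np.exp(-t**2 / 2)) < 1e-8    # both are N(0,1)
    assert abs(cP - np.exp(-t**2 / 2)) < 1e-8
print("P and Q have the same Gaussian vacuum law")
```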
In the harmonic oscillator case (Schrödinger's representation) the classical Fourier-
Plancherel transform ℱ exchanges the operators P and Q. On the orthonormal basis

hn consisting of the H e r m i t e - W e b e r functions hn(x)e -z~/¢, Y: operates simply by


.T(hn) = inhn. If we use the Gaussian representation the exponential factor gets
incorporated into the measure and we get a simple unitary map, diagonal in the Hermite
polynomial basis, which is called the Fourier-Wiener transform on Gauss space. This
transform exists as well in infinite dimensions (see Hida, Brownian motion, p. 182-184).
Here jr- acts by .7"(~ n In (fn))/n! = ~ n inln(fn)/n! on the representation (2.2), or
. T £ ( h ) = g ( i h ) , and it transforms Qh into Ph, Ph into - Q h as in the oscillator case.
All these results are very classical in Fock space theory. Those of the next section
are much more recent, and due to Hudson and P a r t h a s a r a t h y [HuP1].

Number operator and Poisson processes

4 We have seen in Chapter II that the scalar Bernoulli process is replaced on toy Fock
space by three basic operator martingales. We have given the analogue of the creation
and annihilation martingales, and it remains to introduce the third one, the number
operator process.
The number operator N is not unknown to probabilists, since -N is the generator
of the infinite dimensional Ornstein-Uhlenbeck semigroup on Wiener space, one of the
cornerstones of the "Malliavin Calculus" (Malliavin [Mal], Stroock [Str1][Str2]). The
operator N is positive selfadjoint, and acts on the chaos representation f = Σ_n f_n by
Nf = Σ_n n f_n (its domain D consisting exactly of those f for which Σ_n n² ‖f_n‖² < ∞);
N may also be represented as Σ_i a+_{e_i} a-_{e_i} for an o.n.b. (e_i). The O-U semigroup
itself acts on exponential vectors by e^{-tN} E(h) = E(e^{-t} h), so that e^{-tN} is the
second quantization of the contraction e^{-t} I, and N appears as the differential second
quantization λ(I).
More generally, let α be a real Borel function on ℝ₊, and let χ_α be the unitary
operator on H = L²(ℝ₊)

χ_α : f ↦ e^{iα} f .

This corresponds to an arbitrary change of phase at each point, and may be considered
as a "gauge transformation", whence the name "gauge process" used by Hudson-
Parthasarathy. The second-quantized operators Γ(χ_α) are unitary, and are Weyl
operators without translation term. For fixed α and u ∈ ℝ the operators Γ(χ_{uα})
constitute a unitary group which we denote by e^{iuN_α}, the operator N_α acting on the
n-th chaos as the operator of multiplication by the symmetric function α(s_1) + ... + α(s_n).
Note that the operators corresponding to different functions α all commute. We use a
"stochastic integral" notation, writing N_α = ∫ α(s) dN_s. The notations ∫_0^t α(s) dN_s, and
in particular N_t = ∫_0^t dN_s (the number operator up to t), then have obvious meanings.
The following theorem is (for a probabilist) the most striking result of the Hudson-
Parthasarathy paper [HuP1], since it shows how to construct a Poisson process from a
Wiener process by adding to it an operator... which is a.s. equal to 0 (in the vacuum
state). For a quantum physicist, on the other hand, this is no more surprising than
-Δ + V having a discrete spectrum, while the multiplication operator by V has by
itself a continuous spectrum, and Δ is a.s. equal to 0 in the vacuum state.

THEOREM. The operators

(4.1) X_t = Q_t + cN_t

have commuting selfadjoint extensions, whose joint probability law in the vacuum state
is that of a compensated Poisson process with jump size c and intensity 1/c² (so that
X_t² - t is a martingale).
The limiting case c = 0 corresponds to the Brownian motion Q_t.

PROOF. The statement is, as usual, a little incomplete : we should have said that the
operators are essentially selfadjoint on the exponential domain (or some other explicit
domain). But our proof (borrowed from Hudson-Parthasarathy) does not require such
domain specifications, since it directly constructs the corresponding unitaries as Weyl
operators, and the generators themselves arise by differentiation.
More precisely, we are going to construct the unitaries exp(i ∫ α(s) dX_s) for α
bounded with compact support. To achieve this, we consider the family of rigid motions
in L²(ℝ₊) -- since α has compact support, e^{icα} - 1 belongs to L² --

(4.2) Θ(α) · h = e^{icα} h + (e^{icα} - 1)/c .

We denote by W(α) the corresponding Weyl operator. According to §1, formula (9.2),
we have

W(α) E(h) = e^{(1/c) ∫ (e^{icα} - 1) h_s ds - (1/c²) ∫ (1 - cos cα) ds} E(Θ(α) h) .

The operators W(uα) do not constitute a unitary group in u, but they do after being
multiplied by a phase factor exp( i(1/c²) ∫ (sin ucα(s) - ucα(s)) ds ). Forgetting again u, we define

Z(α) = e^{i(1/c²) ∫ (sin cα - cα) ds} W(α)

and an easy computation using the Weyl commutation relation proves that Z(α) Z(β) =
Z(α + β). Thus we have a family of commuting unitaries, and we know there exists a
family of commuting selfadjoint operators X_α such that Z_α = exp(iX_α), which may
be interpreted as ordinary random variables such that X_α + X_β = X_{α+β}. The law of
X_α in the vacuum state is given by its Fourier transform

E[e^{iuX_α}] = < 1, Z(uα) 1 > = exp( (1/c²) ∫ (e^{iucα} - 1 - iucα) ds ) .

In particular, if α is an indicator function I_A, this is equal to exp( (1/c²) |A| (e^{iuc} - 1 - iuc) ),
|A| denoting the Lebesgue measure of A, i.e. we get the characteristic function of a
compensated Poisson distribution of variance |A|. This applies in particular to the
r.v.'s X_t corresponding to A = [0, t], to the differences X_t - X_s, and taking now for
α a step function, we find that the process (X_t) has independent increments.
It remains to identify X_t as an operator with Q_t + cN_t. This will be left to the reader, as
an application of the computation of the generator in §1, subs. 10, or of the exponentials
in subs. 11.

It is important to note that the martingales (X_t) corresponding to different values of
c do not commute. Their commutators are easily deduced from the relations

[N_α, Q_h] = -i ∫ α_s h_s dP_s ;  [N_α, P_h] = i ∫ α_s h_s dQ_s .

The unitary Fourier-Wiener transform maps Q_h into P_h, P_h into -Q_h, and leaves N_t
and the vacuum vector invariant. Therefore it maps the compensated Poisson processes
Q_t + cN_t into the similar family P_t + cN_t.
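The theorem can be probed numerically on a single mode: for t = 1, the operator Q + cN on the truncated one-mode Fock space should have, in the vacuum, the compensated Poisson characteristic function exp((e^{iuc} - 1 - iuc)/c²) appearing in the proof. (The single-mode reduction Q = a + a†, N = a†a, the truncation level and the values of c, u are choices of this sketch.)

```python
import numpy as np

D = 45
A  = np.diag(np.sqrt(np.arange(1.0, D)), 1).astype(complex)   # annihilation
Ad = A.conj().T                                               # creation
Nm = np.diag(np.arange(D)).astype(complex)                    # number operator
I  = np.eye(D, dtype=complex)

def expm(M):   # scaling-and-squaring Taylor exponential
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))) + 1)
    Ms, X, T = M / 2**s, I.copy(), I.copy()
    for j in range(1, 30):
        T = T @ Ms / j
        X = X + T
    for _ in range(s):
        X = X @ X
    return X

c = 0.8
X1 = (A + Ad) + c * Nm                 # X_1 = Q_1 + c N_1 reduced to a single mode
vac = np.zeros(D, dtype=complex); vac[0] = 1.0

for u in np.linspace(-2.0, 2.0, 9):
    cf = np.vdot(vac, expm(1j * u * X1) @ vac)
    target = np.exp((np.exp(1j * u * c) - 1 - 1j * u * c) / c**2)
    assert abs(cf - target) < 1e-6     # compensated Poisson, jump size c, intensity 1/c^2
print("vacuum law of Q + cN is compensated Poisson")
```

The algebraic reason the check passes: Q + cN = c(a† + 1/c)(a + 1/c) - 1/c, and the vacuum is a coherent state for the displaced annihilation operator a + 1/c.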

The Poisson interpretation of Fock space

5 Let us now proceed in the reverse direction, starting from a compensated Poisson
process (X_t) of jump size c and intensity 1/c², generating a σ-field ℱ. We know that
it has the chaotic representation property, i.e. L²(ℱ) is isomorphic with Γ via multiple
stochastic integration w.r.t. (X_t). Can we show that the operator of multiplication by X_t
is equal to Q_t + cN_t, at least on exponential vectors? Instead of using X_t we prefer to
consider the unitary operator e^{iuX_t}. We have

e^{iuX_t} = e^{-iut/c} ∏_{ΔX_s ≠ 0, s ≤ t} e^{iuc} ,

the product being extended to the random set of jump times s ≤ t. We mentioned
before that the exponential vector E(f) is interpreted as a stochastic exponential in all
probabilistic situations, and the stochastic exponential of ∫ f_s dX_s for a compensated
Poisson process of jump size c and intensity 1/c² is

E(h) = e^{-∫ h_s ds / c} ∏_s (1 + c h_s) ,

the product again being over the jump times.
On the other hand, it follows from the computations of subsection 4 (taking for simplicity
h = 0 outside of ]0, t]) that

e^{iu(Q_t + cN_t)} E(h) = exp( ((e^{iuc} - 1)/c) ∫_0^t h_s ds + t (e^{iuc} - 1 - iuc)/c² ) E( e^{iuc} h + (e^{iuc} - 1) I_{]0,t]} / c ) .

Computing explicitly E(e^{iuc} h + (e^{iuc} - 1) I_{]0,t]} / c) as we did above for E(h), we get
the equality of the two operators.
6 In this subsection, we interpret in Fock space language some ideas from classical
stochastic calculus, including the Ito stochastic integral itself.
Let (Ω, ℱ, ℙ) be the Wiener space, and X_t denote Brownian motion (the coordinate
process). For every ω ∈ Ω let τ_t ω be the mapping s ↦ X_{s+t}(ω) - X_t(ω). Denote
by ℱ_{t]} (or more often simply ℱ_t) the past at time t, i.e. the σ-field generated
by the random variables X_s, s ≤ t, and by ℱ_{[t} the future at time t, generated
by the increments X_s - X_t, s ≥ t. The corresponding L² spaces will be denoted by
Φ_{t]} (or simply Φ_t) and Φ_{[t}, and it is clear that they are respectively isomorphic to
the Fock spaces over L²(]0, t]) and L²([t, ∞[). On the other hand, the mapping
Θ_t : f ↦ f ∘ τ_t is an isomorphism from the whole Fock space Γ(L²(ℝ₊)) = Φ onto its
subspace Φ_{[t}.

Brownian motion is a process with independent increments, and this is expressed as
follows : for f ∈ L²(F_t]) = Φ_t] and g ∈ L²(F_[t) = Φ_[t, the product fg belongs to L² and
we have E[fg] = E[f] E[g]. Since products fg of this form generate L²(F) = Φ,
the bilinear mapping (f, g) ↦ fg gives rise in the usual way to a mapping of Φ_t] ⊗ Φ_[t
into Φ, which is an isomorphism. This is a very particular case of the general fact that,
each time a Hilbert space H is split into a direct sum H₁ ⊕ H₂, as here L²(ℝ₊) into
L²(]0,t]) and L²([t,∞[), the corresponding Fock space Γ(H) appears as a tensor
product Γ(H₁) ⊗ Γ(H₂). Thus the "continuous sum" L²(ℝ₊) gives rise to a "continuous
tensor product". This property is purely algebraic, and has nothing to do with Brownian
motion, or any other probabilistic interpretation.
Things can be stated in a different way : consider f ∈ Φ_t] and g ∈ Φ_[t. Then we
may define their "product" fg independently of any probabilistic interpretation (in
fact, if f and g belong to the sum of finitely many chaos, fg is simply a symmetric
product f ∘ g), and we have with obvious notation ⟨fg, f′g′⟩ = ⟨f, f′⟩ ⟨g, g′⟩.
On the other hand, if f, g, h belong to the three Fock spaces Φ_s], Φ_[s,t], Φ_[t (also
with obvious notations), then fg belongs to Φ_t] and gh to Φ_[s, and the product is
associative. This product is the tensor product above, written without the ⊗ sign.
Consider a curve t ↦ x(t) in Fock space, with the following properties : x(0) = 0 ;
the curve is adapted : x(t) ∈ Φ_t] for all t ; its increments are adapted to the future :
for s < t, x(t) − x(s) ∈ Φ_[s ; finally, ‖x(t) − x(s)‖² = t − s for s < t. Then it is
possible to define a stochastic integral I(y) = ∫ y(s) dx(s) for an adapted curve y(t) in
Φ such that ∫ ‖y(s)‖² ds < ∞, with the isometric property ‖I(y)‖² = ∫ ‖y(s)‖² ds ;
otherwise stated, we have a purely Fock space theoretic version of the Ito integral (in
a special case). The way to achieve this is standard, proving the isometric property for
piecewise constant adapted curves, and then extending the integral. This being done,
it is very easy to define also multiple stochastic integrals with respect to x(·), which
have the same isometric property as Wiener's multiple integrals.
In the Wiener interpretation of Fock space, Brownian motion appears in two ways : as
a curve in Hilbert space (the curve (X_t)) and as a family of multiplication operators Q_t,
the vector X_t being the result of the multiplication operator Q_t = a⁺_t + a⁻_t applied to
1, the function 1. Similarly, in the Poisson interpretations, the Poisson process is a
process of operators Q_t + cN_t and a curve X_t = (Q_t + cN_t) 1. Note that as curves they
are the same, while if we consider the second Brownian motion P_t, the corresponding
curve is just iX_t. Thus stochastic integrals are also the same in different probabilistic
interpretations, and even some stochastic differential equations are meaningful in this
setup of curves in Fock space. An example of this is the linear s.d.e. satisfied by
exponential vectors.

§3. MULTIPLICATION FORMULAS

The Wiener multiplication formula

1 We have seen in the preceding section that Fock space has several probabilistic
interpretations - - and those we have mentioned are but the simplest ones. Given
two different probabilistic interpretations, the property of having the same chaotic
expansions sets up a 1 - 1 correspondence between their respective (classes of) square
integrable random variables. The difference between the two interpretations arises from
the way random variables are multiplied. Thus a probabilistic interpretation of Fock
space appears as an associative, not everywhere defined, multiplication. In this section,
we will have to distinguish carefully, on the same Fock space, the Wiener product, the
Poisson products, etc.
Let us start with the classical form of the multiplication formula for (Wiener)
stochastic integrals. We recall the notation of the preceding section, formula (2.1)

(1.1) I_n(f_n) = ∫ f_n(s₁,…,s_n) dX_{s₁} … dX_{s_n}

where f_n is a symmetric L² function of n variables. It will be convenient to extend
this notation (for this subsection only) to a possibly non-symmetric function, through
the convention that I_n(f) is the same as I_n(f̃), where f̃ is the symmetrized function of f (which
also belongs to L², with a norm at most equal to ‖f‖).
Given two functions f and g of m and n variables respectively, we define their
contraction of order p (where p is an integer ≤ m∧n) to be the function of m+n−2p
variables given by

    f ⌣_p g(s₁,…,s_{m−p}, t₁,…,t_{n−p})
(1.2)    = ∫ f(s₁,…,s_{m−p}, u₁,…,u_p) g(u_p,…,u₁, t₁,…,t_{n−p}) du₁ … du_p .

Since our functions below are symmetric, the order of indices does not really matter :
we have chosen that which fits the antisymmetric case as well. Note that the contracted
function is not symmetric. One easily proves, using Schwarz's inequality, that contraction
has norm 1 as a bilinear mapping from L²(ℝ^m) × L²(ℝ^n) to L²(ℝ^{m+n−2p}).
REMARK. Contraction does not depend only on the abstract Fock space structure. It
can be described using the bilinear scalar product (f, g) = ⟨f̄, g⟩, which depends on
the choice of a conjugation on the basic Hilbert space.

Here is the Wiener multiplication formula. We have no integrability problem, since
it can be proved that the chaos spaces are contained in L^p for all p < ∞ (Surgailis
showed that this pleasant feature of Wiener measure is not true for Poisson measures).

THEOREM. With the above notations, we have

(1.3) I_m(f) I_n(g) = Σ_{p=0}^{m∧n} [m! n!/((m−p)! p! (n−p)!)] I_{m+n−2p}(f ⌣_p g) .

If n = 1, I₁(f) is the stochastic integral ∫ f(s) dX_s. On
the other hand, m+n−2p assumes only the two values m±1, and the formula means that
multiplication by f is the operator a⁺_f + a⁻_f.
For a proof of this relation, see [DMM] Chapter XXI. There is a less pleasant looking
formula (see for instance Shigekawa, J. Math. Kyoto Univ., 20, 1980, p. 276) whose
combinatorial meaning however is clearer. To describe it, we keep the above notations
f_m and g_n, but assume no symmetry. We denote by α and β two injective mappings
of {1,…,p} into {1,…,m} and {1,…,n} respectively, and define the contraction
f ⌣_{α,β} g as we did in (1.2), but replacing s_{α(i)} and t_{β(i)} by u_i. Then the formula reads
as follows

(1.4) I_m(f) I_n(g) = Σ_p (1/p!) Σ_{α,β} I_{m+n−2p}(f ⌣_{α,β} g) .

To understand the meaning of this formula, let us consider first a measurable space
E, two measures ρ on E^m and σ on E^n, and their product measure τ = ρ ⊗ σ on
F = E^{m+n}. Since all factors of the product are equal to E, let us define diagonals in F
as follows, using different letters x, y for the two sets of coordinates. The (simple) 1-
diagonals are defined by one equation x_i = y_j and inequalities x_k ≠ y_l for all other
pairings, (simple) 2-diagonals by two equations x_i = y_j, x_k = y_l with i ≠ k and
j ≠ l, and inequations for all other pairings, etc. There are also higher diagonals, in
which bunches of x_i's are set equal together, and equated to bunches of y_j's. Then the
measure τ can be decomposed into an off-diagonal term, the contribution of the simple
1-diagonals, 2-diagonals, etc. If the measures ρ and σ themselves do not charge the
diagonals in their own factor space, as will be the case here, we need to consider simple
diagonals only. Then we denote by Δ_k the union of all (simple) diagonals of order k,
including E^{m+n} for k = 0, and the measure τ can be decomposed as the sum of all
integrals of the function f ⊗ g on the disjoint sets Δ_k \ Δ_{k−1}, the first term being the
off-diagonal integral. For instance, for m = n = 1 there are just two terms, the
off-diagonal one and the contribution of the single diagonal x₁ = y₁.

Then formula (1.4) arises when this rigorous relation for measures is applied formally to
the stochastic integrals I_m(f) and I_n(g). Since they are built out of integrals on open
simplexes, they "forget the diagonals" in their factor spaces, and only simple diagonals
occur. Then to compute the diagonal terms, we replace (dX_s)² by ds whenever it occurs.
For instance, for m = n = 1 the off-diagonal contribution is I₂(f ⊗ g), and
we have

    I₁(f) I₁(g) = I₂(f ⊗ g) + ∫ f(s) g(s) ds .

This is not a proof, but "explains away" the formula, and the same idea works in
other cases. For instance, for stochastic integrals corresponding to the compensated
Poisson processes of §2 subs. 5 the rule is (dX_s)² = ds + c dX_s, and leads to the correct
multiplication formula for Poisson stochastic integrals, due to Surgailis [Sur1] [Sur2].
2 We are going to express the multiplication formula in the shorthand notation of
section 2, formula (2.3). We recall that the representation h = Σ_n I_n(h_n)/n! is replaced
by a single formula h = ∫_P h(A) dX_A over the set P of all finite subsets of ℝ₊ (we use the
same letter for a r.v. and its "chaotic transform", but the hat may be omitted). On P
we have a natural σ-field and a measure dA, and we can use set-theoretic operations, in
particular the sum A+B, which is understood to be A ∪ B if A ∩ B = ∅, and undefined
otherwise (all functions taking the value 0 on undefined sets). Formula (1.3) must be
transformed as follows : instead of I_n(f), we are interested in J_m(f) = I_m(f)/m!, so
that we rewrite it as

    J_m(f) J_n(g) = Σ_p [(m+n−2p)!/(p! (m−p)! (n−p)!)] J_{m+n−2p}(h_p) ,

where h_p is the symmetrized form of f ⌣_p g. Let us put

    A = {s₁,…,s_{m−p}} ∈ P_{m−p} ,  B = {t₁,…,t_{n−p}} ∈ P_{n−p} ,  U = {u₁,…,u_p} ∈ P_p ,

    C = {s₁,…,s_{m−p}, t₁,…,t_{n−p}} ∈ P_{m+n−2p} .

The unsymmetrized function depends on the two sets of variables A, B and (1.2) can
be written

    f ⌣_p g(A, B) = p! ∫_{P_p} f(A+U) g(U+B) dU .

We may use a symbol like f(A+U) because 1) f is symmetric, so it does not matter
which variables come first or last, 2) U is a.s. disjoint from the given finite set A. To
symmetrize this function, we mix the points s_i ∈ A with the points t_j ∈ B by means of
permutations σ of the elements of C, as follows

    h_p(C) = (1/(m+n−2p)!) Σ_{K+L=C, |K|=m−p, |L|=n−p}  Σ_{σ : σ(A)=K, σ(B)=L} f ⌣_p g(σ(A), σ(B))

           = Σ_{K+L=C} [(m−p)! (n−p)!/(m+n−2p)!] f ⌣_p g(K, L) .

Finally, we get

    J_m(f) J_n(g) = Σ_{p=0}^{m∧n} J_{m+n−2p}(h_p) ,

where

    h_p(C) = Σ_{K+L=C, |K|=m−p, |L|=n−p} ∫_{P_p} f(K+U) g(L+U) dU .

This formula simplifies considerably, because the sum over p gets absorbed into an
integration over P, and we get the following result (on toy Fock space, U would be
disjoint from C, and the formula would express the convolution h = f ∗ g).
THEOREM. Let two random variables

    f = ∫_P f(A) dX_A ,   g = ∫_P g(B) dX_B ,

belong to a finite sum of chaos spaces, and their product h = fg be expanded as
∫_P h(C) dX_C . Then we have

(2.1) h(C) = ∫_P Σ_{K+L=C} f(K+U) g(L+U) dU .

The simplicity of this formula, in contrast with the combinatorial complication
of (1.3), shows the interest of the shorthand notation. As an easy exercise, compute
E[fg] = h(∅).
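On toy Fock space the analogue of (2.1) can be checked by brute force : with symmetric Bernoulli coordinates x_i = ±1 one has e_A e_B = e_{AΔB}, and the coefficients of a product are given by the same formula, the "integration variable" U now running over the sets disjoint from C. A minimal sketch in Python (the 4-site space and the random coefficients are arbitrary choices made for the test) :

```python
import random
from itertools import combinations

SITES = range(4)  # a small toy Fock space

def subsets(base):
    base = list(base)
    for r in range(len(base) + 1):
        for cc in combinations(base, r):
            yield frozenset(cc)

random.seed(0)
f = {A: random.uniform(-1, 1) for A in subsets(SITES)}
g = {B: random.uniform(-1, 1) for B in subsets(SITES)}

# Multiply the Walsh expansions directly: e_A e_B = e_{A delta B},
# since each Bernoulli coordinate satisfies x_i^2 = 1.
h = {}
for A, fa in f.items():
    for B, gb in g.items():
        C = A ^ B
        h[C] = h.get(C, 0.0) + fa * gb

# Toy analogue of (2.1): h(C) = sum over splits K+L=C and over sets U
# disjoint from C of f(K+U) g(L+U).
def h_formula(C):
    comp = frozenset(SITES) - C
    return sum(f[K | U] * g[(C - K) | U]
               for K in subsets(C) for U in subsets(comp))

for C in subsets(SITES):
    assert abs(h.get(C, 0.0) - h_formula(C)) < 1e-9
```

In particular h(∅) = Σ_A f(A) g(A), the toy version of the exercise E[fg] = h(∅).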
3 Now that we have a convenient notation, let us see whether the associativity of the
Wiener multiplication can be seen on formula (2.1). This is not a mere play, since
understanding this point will lead us later to the construction of other multiplications, and
to Maassen's kernel calculus. To prove associativity, we need the following fundamental
property of the measure dA on P ("Maassen's Σ-lemma").
THEOREM. Let f(A, B) be a positive measurable function on P × P, and let F(A) be
the (positive, measurable) function on P defined by

(3.1) F(A) = Σ_{H+K=A} f(H, K) .

Then we have

(3.2) ∫_P F(A) dA = ∫_{P×P} f(A, B) dA dB .

PROOF. In the "shorthand notation" of Fock spaces, the exponential vectors 𝓔(u) are
read as the functions

    𝓔(u)(A) = Π_{s∈A} u(s) .

The integral over P of such a function is e^{∫ u(s) ds}, provided u is integrable on ℝ₊. By
a density argument, it is sufficient to prove (3.2) assuming that f(A, B) is a product
g(A) h(B), and then, assuming that g = 𝓔(u), h = 𝓔(v), u and v being bounded with
compact support in ℝ₊. Then the function F(A) given by (3.1) is 𝓔(u + v)(A), and (3.2)
reduces to the multiplicative property of the exponential.
This property is shared by all measures "dA" on P arising from a non-atomic
measure on ℝ₊, and in particular by the measure λ^{|A|} dA on P corresponding to
the replacement of dt by λ dt.
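The mechanism of the proof can be illustrated on the toy analogue, where dA becomes counting measure on the subsets of a finite site set and a pair (H, K) with H + K = A is automatically disjoint : the "integral" of an exponential vector is Π_s (1 + u(s)), the function F built from 𝓔(u), 𝓔(v) is again an exponential vector, and its total mass factorizes. A sketch (sites and coefficients are arbitrary test data) :

```python
import math
import random
from itertools import combinations

random.seed(1)
SITES = list(range(5))
u = {s: random.uniform(-0.5, 0.5) for s in SITES}
v = {s: random.uniform(-0.5, 0.5) for s in SITES}

def subsets(base):
    base = list(base)
    for r in range(len(base) + 1):
        for cc in combinations(base, r):
            yield frozenset(cc)

def exp_vec(w, A):
    """E(w)(A) = product of w(s) over s in A."""
    return math.prod(w[s] for s in A)

# The discrete "integral over P" maps E(w) to prod_s (1 + w(s)),
# the analogue of exp(int w ds).
total_u = sum(exp_vec(u, A) for A in subsets(SITES))
assert abs(total_u - math.prod(1 + u[s] for s in SITES)) < 1e-9

# (3.1) with f(H, K) = E(u)(H) E(v)(K): F is the exponential vector of u+v.
F = {A: sum(exp_vec(u, H) * exp_vec(v, A - H) for H in subsets(A))
     for A in subsets(SITES)}
for A, FA in F.items():
    assert abs(FA - math.prod(u[s] + v[s] for s in A)) < 1e-9

# (3.2), restricted to disjoint pairs (the only ones charged here).
lhs = sum(F.values())
rhs = sum(exp_vec(u, H) * exp_vec(v, K)
          for H in subsets(SITES) for K in subsets(SITES) if not (H & K))
assert abs(lhs - rhs) < 1e-9
```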
We may now return to associativity. Consider three functions f, g, h on P and set
p = f·g. Then

    p·h(C) = ∫ dZ ( Σ_{H+W=C} p(H+Z) h(W+Z) )

    p(H+Z) = ∫ dN ( Σ_{A+B=H+Z} f(A+N) g(B+N) )

A partition A+B = H+Z can be represented as

    Z = L+M ,  H = U+V ,  A = U+L ,  B = V+M ,

and the meaning of the preceding theorem is that, if Z is an integration variable over P,
we get the same integral by considering L, M as independent integration variables. The
triple product can therefore be written

    ∫_{P³} dL dM dN Σ_{U+V+W=C} f(U+L+N) g(V+M+N) h(W+L+M) ,

which is perfectly symmetric. Associativity is proved.
We should perhaps repeat here one of the leitmotivs of these notes : we are working
on Wiener space, but results like (2.1) are meaningful for all Guichardet-Fock spaces,
i.e. for the calculus on the space of finite subsets (configurations) of any reasonable
space (E, E, μ) without atoms. If the space is Lusin, no new proof is necessary : one
simply uses a measurable isomorphism with Brownian motion.
REMARK. This multiplication formula is a consequence of the infinitesimal multiplica-
tion rule (dX_s)² = ds, which expresses that the variance increase of standard Brownian
motion is 1 per unit time. If we replace ds by σ² ds we get the infinitesimal multipli-
cation rule for a Brownian motion of different variance, and the product formula (2.1)
is replaced (with the same notation) by

(3.3) h(C) = ∫_P Σ_{K+L=C} f(K+U) g(L+U) σ^{2|U|} dU ,
an element of Fock space at least in the case f, g belong to a finite sum of chaos. On the
other hand, we are not restricted to σ² > 0. For σ² = 0 we get the symmetric product
h = f ∘ g, also called Wick product and denoted by f : g, which is read in the shorthand
notation as

(3.4) h(A) = Σ_{H+K=A} f(H) g(K) .
Thus the preceding theorem implies in particular that the integral over P is a
multiplicative linear functional w.r.t. the Wick product (a statement to take with
caution, since domain considerations are involved). Note that 𝓔(u+v) = 𝓔(u) : 𝓔(v)
(the exponential vectors are "Wick exponentials").
The name "Wick product" is not standard : physicists use it for a commutative
multiplication for operators, arising from "Wick ordering" or "normal ordering" of
the creation/annihilation operators, and this is the similar notion for vectors. On the
other hand, physicists have the strange custom of writing a Wick product of, say, three
operators as :ABC: instead of A:B:C, the natural notation for an associative product.
For σ² = i and σ² = −1 we get interesting structures, which have occasionally
appeared in physics. The first one is related to a multiplication formula for Feynman
integrals, and the second one to Gupta's negative energy harmonic oscillator.

Gaussian computations

4 In subsections 4-6 (whose reading is not necessary to understand the sequel), we
will show that formula (2.1) implies some classical Gaussian computations, and also
perform a passage from the continuous to the discrete, to get the Gaussian multiplication
formula for the harmonic oscillator. This formula is slightly more complicated than (2.1),
because on a discrete state space many particles sit exactly at the same point, while in
the continuous case they can only be arbitrarily close.
Our first step will be to extend formula (2.1) to a Wiener product h = f₁ … f_n.
Using (3.2) it is not difficult to prove that one needs n(n−1)/2 "integration variables"
U_{ij}, i < j. It is convenient to set U_{ji} = U_{ij}, and then we have (exchanging Σ and ∫
for convenience)

(4.1) h(C) = ∫ Σ_{H₁+⋯+H_n=C} Π_i f_i(H_i + Σ_{j≠i} U_{ij}) Π_{i<j} dU_{ij} .
For instance, if n = 3 we get the same formula as in the proof of associativity. The
most interesting case of (4.1) concerns the product of n elements from the first chaos :
f_i(A) = 0 unless |A| = 1. Then in (4.1) either |H_i| = 1 and all the corresponding U_{ij}
are empty, or |H_i| = 0 and there is exactly one U_{ij} such that |U_{ij}| = 1, the others being
empty. It follows that h(C) = 0 if |C| (which is also the number of nonempty H_i's) is
not of the form n−2k. We assume it is, and set C = {s₁ < … < s_{n−2k}}. Then the
nonempty H_i can be written in the following way, c denoting an injective mapping
from {1,…,n−2k} into {1,…,n}

    H_{c(1)} = {s₁} , … , H_{c(n−2k)} = {s_{n−2k}} .

The nonempty U_{ij}'s (i < j) are of the form U_{a(i)b(i)} = U_{b(i)a(i)}, a and b denoting two
injective mappings from {1,…,k} into the complement of the range of c in {1,…,n},
such that a(i) < b(i). In formula (4.1) the empty integration variables yield a product
Π_i f_{c(i)}(s_i) and the nonempty ones a product Π_j (f_{a(j)}, f_{b(j)}) (the bilinear scalar product,
equal to ⟨f̄_{a(j)}, f_{b(j)}⟩). Finally

(4.2) h(C) = Σ_{a,b,c} Π_{i=1}^{n−2k} f_{c(i)}(s_i) Π_{j=1}^{k} (f_{a(j)}, f_{b(j)}) .

In particular, the expectation of h = f₁ … f_n corresponds to C = ∅. Its value is 0
unless n is even, n = 2k, in which case

(4.3) E[f₁ … f_{2k}] = Σ_{a,b} Π_{j=1}^{k} (f_{a(j)}, f_{b(j)}) ,

a, b denoting injections from {1,…,k} to {1,…,2k} such that a(i) < b(i) (they can
also be described as "pairings" in {1,…,2k}). This formula has been proved for the
first chaos of Wiener space, but it is also a well known universal Gaussian formula (all
Gaussian spaces of countably infinite dimension being isomorphic).
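Formula (4.3) is the classical Gaussian pairing (Wick/Isserlis) formula, and it can be exercised numerically. The helper below (an illustration with our own names, not Meyer's notation) enumerates pairings and checks two known special cases : for a single standard Gaussian the sum counts the pairings of {1,…,2k}, i.e. E[X^{2k}] = (2k)!/(2^k k!), and for f_i = aX + bY with X, Y independent it gives E[(aX+bY)⁴] = 3(a²+b²)² :

```python
from math import factorial

def pairings(items):
    """All partitions of a list into unordered pairs."""
    items = list(items)
    if not items:
        yield []
        return
    first = items[0]
    for j in range(1, len(items)):
        rest = items[1:j] + items[j + 1:]
        for tail in pairings(rest):
            yield [(first, items[j])] + tail

def wiener_expectation(cov):
    """Right side of (4.3): sum over pairings of prod cov[i][j]."""
    n = len(cov)
    if n % 2:
        return 0.0
    total = 0.0
    for pairing in pairings(range(n)):
        term = 1.0
        for i, j in pairing:
            term *= cov[i][j]
        total += term
    return total

# One standard Gaussian: the sum counts pairings, E[X^{2k}] = (2k)!/(2^k k!).
for k in range(1, 5):
    n = 2 * k
    cov = [[1.0] * n for _ in range(n)]
    assert wiener_expectation(cov) == factorial(n) / (2**k * factorial(k))

# f_i = a X + b Y with X, Y independent: (f_i, f_j) = a^2 + b^2,
# and E[(aX + bY)^4] = 3 (a^2 + b^2)^2.
a, b = 0.6, 0.8
c2 = a * a + b * b
assert abs(wiener_expectation([[c2] * 4 for _ in range(4)]) - 3 * c2**2) < 1e-12
```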
REMARK. The Wiener product of two exponential vectors 𝓔(u), 𝓔(v) is e^{(u,v)} 𝓔(u+v),
where (u, v) is the bilinear scalar product ∫ u(s) v(s) ds = ⟨ū, v⟩. This shows that
the Wiener product is not "intrinsic" : it requires the choice of a conjugation u ↦ ū
to define the bilinear scalar product.
Every bilinear map (·, ·) leads to the definition of an associative multiplication by the
above formula, non-commutative if the map is not symmetric (see App. 5).
In this abstract setup, formula (4.2) has a universal meaning, which we make explicit
as follows. Consider Γ₀(H) = Φ as an algebra with the symmetric product ∘ (i.e. the
Wick product of subs. 3). It also admits a Wiener, or Gaussian multiplication, with 1
as unit element, and such that for f ∈ H

(4.4) f² = f ∘ f + (f, f) 1 .

This is the commutative analogue of Clifford multiplication. Formula (4.2) then ex-
presses the Wiener product in terms of Wick products : h = f₁ … f_n is given by

(4.5) h = Σ_k Σ_{a,b,c} Π_{j=1}^{k} (f_{a(j)}, f_{b(j)}) f_{c(1)} ∘ ⋯ ∘ f_{c(n−2k)} .

It is also possible to invert this formula, and to express the Wick product in terms of
the Wiener product. Surgailis has done in [Sur2] the same work for the Poisson product.
Two remarks may be in order here. The first is that, from the algebraic point of
view, everything can be done on a real linear space with an arbitrary bilinear form (not
necessarily positive). The second one is that Dynkin [Dyn1] [Dyn2] had the idea of doing
the same computations, considering the coefficients (f, g) as new "scalars", aiming
at a rigorous handling of Gaussian laws with singular covariances.

5 Let e_i be an orthonormal basis of the first chaos. Let A be the set of all finite
occupation vectors on ℕ ; an element α of A can be written n₁·i₁ + … + n_k·i_k, with
which we associate the vector e_α = e_{i₁}^{∘n₁} ∘ … ∘ e_{i_k}^{∘n_k} of Fock space. We have shown in
section 1, end of subs. 1, that the norm² of e_α is α! = n₁! … n_k!. Any element of Fock
space can be expanded in the orthogonal basis e_α as

(5.1) f = Σ_α (f̂(α)/α!) e_α   with   ‖f‖² = Σ_α |f̂(α)|²/α! .

The Wick product e_α : e_β is equal to e_{α+β} by construction, α + β being the sum of the
two occupation vectors. Then for a Wick product h = f : g we have the uninteresting
formula

(5.2) ĥ(α) = Σ_{ρ+σ=α} (α!/(ρ! σ!)) f̂(ρ) ĝ(σ) ,

which must be compared with the corresponding formula for a Wiener product (we must
then assume that e_α is a real basis) : for h = fg, we have

(5.3) ĥ(α) = Σ_γ Σ_{ρ+σ=α} (α!/(ρ! σ! γ!)) f̂(ρ+γ) ĝ(σ+γ) .

This amounts to the Wiener multiplication rule for the basis vectors e_α themselves

(5.4) e_ρ e_σ = Σ_{γ≤ρ∧σ} [ρ! σ!/((ρ−γ)! γ! (σ−γ)!)] e_{ρ+σ−2γ} .

There is an interesting way to deduce this from the continuous formula (2.1), which will
be extended later to operators. Let X_t be Brownian motion, and let ξ_i (i = 1, 2, …)
be the increment X_i − X_{i−1}. These vectors are orthonormal in the first chaos ; they
do not form a basis of it, but this does not prevent us from using them to get a
multiplication formula in the space they generate. Linear combinations of these vectors
can be expressed as stochastic integrals ∫ f(s) dX_s with f constant on each interval
[k, k+1[, and more generally, a vector ∫ f(A) dX_A in Fock space is measurable with
respect to the σ-field generated by these increments if and only if f(A) depends only on
|A ∩ [0, 1[|, …, |A ∩ [k, k+1[|, … For
instance, if f is the indicator function of {A ⊂ [0, 1[, |A| = m}, then ∫ f(A) dX_A is
the elementary iterated integral

    J_m = ∫_{0<s₁<⋯<s_m<1} dX_{s₁} … dX_{s_m} ,

which is known to have the value h_m(ξ₁)/m! where h_m is the Hermite polynomial of
order m (one way to prove this is to note that Σ_m t^m J_m is an exponential vector).
Then formula (2.1) gives easily the multiplication formula for Hermite polynomials in
one variable

(5.5) h_m h_n = Σ_{p≤m∧n} [m! n!/((m−p)! p! (n−p)!)] h_{m+n−2p} .

Formula (5.4) is a trivial extension of (5.5) to Hermite polynomials in several variables.
Note also that the coefficients in (5.5) are the same as in (5.4) : this is not surprising,
since both amount to expanding the multiplication formula for exponential vectors.
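Formula (5.5) is easy to test symbolically, representing the probabilists' Hermite polynomials by their coefficient lists via the recurrence h_{n+1}(x) = x h_n(x) − n h_{n−1}(x). A sketch (the helper names are ours) :

```python
from math import factorial

def hermite(n):
    """Coefficients (ascending powers) of the probabilists' Hermite h_n,
    from h_{n+1}(x) = x h_n(x) - n h_{n-1}(x)."""
    polys = [[1], [0, 1]]
    for m in range(1, n):
        nxt = [0] + polys[m]                     # x h_m
        for i, coef in enumerate(polys[m - 1]):  # - m h_{m-1}
            nxt[i] -= m * coef
        polys.append(nxt)
    return polys[n]

def polymul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, pa in enumerate(p):
        for j, qb in enumerate(q):
            out[i + j] += pa * qb
    return out

def rhs(m, n):
    """Right side of (5.5), as a coefficient list."""
    out = [0] * (m + n + 1)
    for p in range(min(m, n) + 1):
        coef = factorial(m) * factorial(n) // (
            factorial(m - p) * factorial(p) * factorial(n - p))
        for i, c in enumerate(hermite(m + n - 2 * p)):
            out[i] += coef * c
    return out

# Compare h_m h_n with the right side of (5.5) for small m, n.
for m in range(6):
    for n in range(6):
        lhs = polymul(hermite(m), hermite(n))
        lhs = lhs + [0] * (m + n + 1 - len(lhs))
        assert lhs == rhs(m, n), (m, n)
```

For instance h₁² = x² = h₂ + h₀, the p = 0 and p = 1 terms of (5.5).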

Poisson multiplication
6 Poisson products are far from being as useful as Wiener's. Thus we reduce our
discussion to a few results, without complete proofs.

We work on the probabilistic Fock space associated with a compensated Poisson
process of jump size c and intensity 1/c² . We refer the reader to §2, subsection 5 :
the operator of multiplication by X_t is Q_t + cN_t, and the exponential vector 𝓔(f) is
interpreted as the random variable

(6.1) 𝓔(f) = e^{-∫ f_s ds/c} Π_s (1 + c f_s) .

First of all, there is a simple formula for the Poisson product of two exponential vectors.
Denoting by (f, g) the bilinear scalar product ∫ f_s g_s ds = ⟨f̄, g⟩, we have

(6.2) 𝓔(f) 𝓔(g) = e^{(f,g)} 𝓔(f + g + c f g) .

For c = 0 we get the Wiener product. This relation is correct as far as random variables
are concerned, but to be interpreted in Fock space it requires fg ∈ L², an indication
that Poisson multiplication requires integrability conditions. Note also that a⁻_h does
not act as a derivation on Poisson products, as it did on Wick and Wiener products.
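Relation (6.2) can be checked on a time-discretized model, where the integrals become sums over cells of length dt and the Poisson path is replaced by an arbitrary fixed set of jump cells : both sides then agree exactly, the scalar factor e^{(f,g)} compensating the cross term in the exponentials. A sketch (all names and the discretization are ours) :

```python
import math
import random

random.seed(2)
c = 0.7          # jump size
dt = 0.01        # cell length
ncells = 50
f = [random.uniform(-1, 1) for _ in range(ncells)]
g = [random.uniform(-1, 1) for _ in range(ncells)]

def pairing(u, v):
    """Discretized bilinear product (u, v) = int u_s v_s ds."""
    return sum(a * b for a, b in zip(u, v)) * dt

def exp_vec(h, jumps):
    """Discretized (6.1): e^{-int h ds / c} * prod over jump cells (1 + c h_s)."""
    val = math.exp(-sum(h) * dt / c)
    for s in jumps:
        val *= 1 + c * h[s]
    return val

jumps = [s for s in range(ncells) if random.random() < 0.3]
fg = [a + b + c * a * b for a, b in zip(f, g)]   # f + g + c f g
lhs = exp_vec(f, jumps) * exp_vec(g, jumps)
rhs = math.exp(pairing(f, g)) * exp_vec(fg, jumps)
assert abs(lhs - rhs) < 1e-9
```

The jump products match because (1 + c f_s)(1 + c g_s) = 1 + c (f + g + c f g)_s, which is where the third argument of 𝓔 in (6.2) comes from.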
The infinitesimal multiplication formula for Poisson products is

(6.3) dX_t² = dt + c dX_t .

Its probabilistic meaning is the following : we have dX_t = c (dN_t − dt/c²), where (N_t)
is a Poisson process with unit jumps and intensity 1/c². Then the square bracket
d[X, X]_t (not to be confused with a commutator !) is equal to c² d[N, N]_t = c² dN_t =
c (dX_t + dt/c). Using this formula, we get the following result for a Poisson product
h = fg (the notations are the same as in (2.1))

(6.4) h(C) = ∫_P Σ_{K+Z+L=C} f(K+Z+U) g(L+Z+U) c^{|Z|} dU .

For c = 0 only the terms with |Z| = 0 contribute, and we get the formula for Wiener
multiplication.

Relation with toy Fock space

7 We are now ready to describe in a heuristic way the relation between Fock space and
finite spin systems. According to T. Lindstrøm, a rigorous discussion is possible using
non-standard analysis, but I do not think there is anything published on this subject.
This section is not meant as serious mathematics, and pretends only to make formal
computations easier.
The idea is the following. In the discrete case, we had a unit vector e_i at each
site i, and given a subset A = {i₁ < … < i_n} we had a corresponding unit vector
e_A in the discrete n-th chaos. One possible probabilistic interpretation was, that
e_i is a symmetric Bernoulli random variable x_i, and e_A the product of the x_i for
i ∈ A (a Walsh monomial). Now in Fock space, we set formally e_i = dX_t/√dt, and
e_A = dX_A/√dA. In the case of Brownian motion, therefore, dX_t/√dt is considered to
be a Bernoulli r.v., not a Gaussian one. This is nothing but the central limit theorem,
but it is unusual for probabilists not to think of dX_t as something Gaussian. It is

also unusual for physicists, since they are familiar with the idea that quantum fields
are systems of harmonic oscillators, rather than spins. Operators on Fock space will
be considered in another section, but since we are doing heuristics let us mention now
that a⁻_t corresponds to da⁻_t/√dt, but a°_t simply to da°_t, a fact that should not
surprise us, knowing the de Moivre-Laplace theorem and the different normalization for
the number operator. Then for instance the commutation relation [a⁻_i, a⁺_i] = I − 2a°_i
becomes

    [da⁻_t, da⁺_t] = I dt − 2 da°_t dt ,

which is the Fock space CCR up to a second order term.
The Bernoulli product formula in the discrete case is e_A e_B = e_{AΔB}. In continuous
time it becomes

(7.1) dX_A dX_B = dX_{AΔB} d(A∩B)

with the following meaning : A and B are ordered subsets, say {s₁ < ⋯ < s_m},
{t₁ < ⋯ < t_n} respectively, and A∩B is a subset {u₁ < ⋯ < u_p}. Then the obscure
looking d(A∩B) at the right is just du₁ … du_p, while dX_{AΔB} is a differential element
at the remaining s_i's and t_j's, arranged in increasing order. This has exactly the same
meaning as the multiplication formula (2.1).
Grassmann and Clifford multiplications

8 We are going to deal now with the antisymmetric Fock space (we refer the reader to
section 1 for notation). As before, we work on the increasing open simplexes Σ_n. The
meaning of a stochastic integral over Σ_n (cf. §2, subsection 1)

    J_n(f) = ∫_{Σ_n} f(s₁,…,s_n) dX_{s₁} … dX_{s_n} = ∫_{P_n} f(A) dX_A

remains unchanged, but the stochastic integral over ℝⁿ₊, I_n(f) = n! J_n(f), is inter-
preted as that of the antisymmetric extension of f. The two products we are going
to consider are the Grassmann (exterior) product (corresponding to the Wick product)
and the Clifford product (corresponding to Wiener's). They are completely defined by
associativity, the anticommutativity dX_s dX_t + dX_t dX_s = 0 for s ≠ t, and the rules for
s = t

    dX_s² = 0 (Grassmann) ,   dX_s² = ds (Clifford) .
From these infinitesimal rules one deduces finite formulas, giving the chaos expansion
of a product h = fg. We recall from the chapter on spin (§5, subsection 1) the notation
n(A, B) for the total number of inversions between two finite subsets A, B of ℝ₊.
We have not much to say on the Grassmann product, which is given on the continuous
basis elements by

(8.1) dX_A ∧ dX_B = (−1)^{n(A,B)} dX_{A+B}

and in closed form by

(8.2) h(C) = Σ_{K+L=C} (−1)^{n(K,L)} f(K) g(L) .

Remember our leitmotiv : from the point of view of exterior algebra, mapping any
infinite dimensional separable Hilbert space onto L²(ℝ₊), by an arbitrary isomorphism,
corresponds exactly to ordering a basis in a finite dimensional space, except that a
continuous basis is used instead of a discrete one. Thus what we are describing here
is the space of exterior differential forms with constant coefficients on any (infinite
dimensional, separable) Hilbert space. The usual forms with non-constant coefficients
require (at least) a tensor product of two Fock spaces, an antisymmetric one to provide
the differential elements and a symmetric one for the coefficients.
Multiplying on the left (in the Grassmann sense) by an element h of the first chaos is
the same thing as applying the antisymmetric creation operator b⁺_h (§1, formula (4.1)).
The antisymmetric annihilation operator b⁻_h is an operation of contraction with h (§1,
formula (4.3)).
In the discrete case, we had for the Clifford product

    e_A e_B = (−1)^{n(A,B)} e_{AΔB} .

The associativity (e_A e_B) e_C = e_A (e_B e_C) of Clifford multiplication amounts to the
following relation, where (−1)^{n(A,B)} has been abbreviated to ε(A, B)

(8.3) ε(A, BΔC) ε(B, C) = ε(A, B) ε(AΔB, C) .

This relation is valid for all ordered sets, even for partially ordered ones if we make
the convention that incomparable elements do not create inversions. Indeed, we have
n(A, BΔC) = n(A, B) + n(A, C) (mod 2), and a similar relation on the right side, which
trivially gives the result.
A relation of the form (8.3) is called a 2-cocycle identity w.r.t. the operation Δ. See
the remark at the end of this subsection.
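The identity (8.3) is easy to verify by brute force for the sign ε(A, B) = (−1)^{n(A,B)}, where n(A, B) counts the pairs a ∈ A, b ∈ B with a > b ; it holds for all subsets, disjoint or not, since n(A, BΔC) ≡ n(A, B) + n(A, C) (mod 2). A sketch :

```python
from itertools import combinations

def inversions(A, B):
    """n(A, B): number of pairs a in A, b in B with a > b."""
    return sum(1 for a in A for b in B if a > b)

def eps(A, B):
    return -1 if inversions(A, B) % 2 else 1

def subsets(base):
    base = list(base)
    for r in range(len(base) + 1):
        for cc in combinations(base, r):
            yield frozenset(cc)

SITES = range(4)
for A in subsets(SITES):
    for B in subsets(SITES):
        for C in subsets(SITES):
            # (8.3), with Delta written as symmetric difference (^):
            assert eps(A, B ^ C) * eps(B, C) == eps(A, B) * eps(A ^ B, C)
```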
We now describe the Clifford product, which is deduced from the infinitesimal
relations dX_s² = dt and dX_s dX_t + dX_t dX_s = 0 for s ≠ t. On the continuous basis
elements dX_A, we have

(8.4) dX_A dX_B = (−1)^{n(A,B)} dX_{AΔB} d(A∩B)

and the corresponding finite formula is

(8.5) h(C) = ∫_P Σ_{K+L=C} (−1)^{n(K+U, L+U)} f(K+U) g(L+U) dU .

Assuming that f(A) and g(A) vanish for |A| large enough, the same is true for h(A),
and a trivial domination argument by the corresponding terms of the Wiener product
for |f| and |g| proves that h is square integrable. Then associativity is proved exactly
as for Wiener's product, using formula (8.3).
There is also a product formula for multiple integrals, which is formally identical
to that for symmetric stochastic integrals, because we have defined the contraction
operation in the way which minimizes the inversion numbers. We copy it for the reader's
convenience

    f ⌣_p g(s₁,…,s_{m−p}, t₁,…,t_{n−p})
(8.6)    = ∫ f(s₁,…,s_{m−p}, u₁,…,u_p) g(u_p,…,u₁, t₁,…,t_{n−p}) du₁ … du_p ,

(8.7) I_m(f) I_n(g) = Σ_{p=0}^{m∧n} [m! n!/((m−p)! p! (n−p)!)] I_{m+n−2p}(f ⌣_p g) .

In the last integral, f ⌣_p g is not antisymmetric, and the multiple integral is understood
to be that of its antisymmetrized function. It is easy to check on (8.7) that the Clif-
ford product by an element h of the first chaos (acting to the left on the uncompleted
chaos sum) is the operator R_h = b⁺_h + b⁻_h. In particular, it is a bounded operator (§1,
subsection 5).
The operators R_h satisfy the basic Clifford algebra property

(8.8) {R_h, R_k} = 2 ⟨h, k⟩ 1 .

Let e_i be an orthonormal basis of the first chaos, which we take real so that it is also
an orthonormal basis for the bilinear scalar product (f, g) = ⟨f̄, g⟩. Given a finite set
of indices H = {i₁ < … < i_m}, we define

    e_H = e_{i₁} ∧ … ∧ e_{i_m} ,

an element of the m-th chaos (because of orthogonality, it is easy to prove that
Grassmann and Clifford products coincide). These vectors constitute an orthonormal
basis of Fock space, and by the associativity of the Clifford product and the rule e_i² = 1,
we get the familiar expression of a Clifford multiplication

(8.9) e_H e_K = (−1)^{n(H,K)} e_{HΔK} .

An important consequence : the multiplication operator by e₁ is isometric, hence has
norm 1. Since any real element of norm 1 in the first chaos can be taken for e₁, we
see that for h real in the first chaos, the Clifford (left) multiplication operator by h in
Fock space has norm ‖h‖. An exact formula for the norm of this operator when h is
complex is given in the paper by Araki, [Ara].
REMARK. Lindsay and Parthasarathy [LiP] study associative products given as per-
turbations of the Wiener product formula

(8.10) h(C) = ∫_P Σ_{K+L=C} f(K+U) g(L+U) ε(U, K, L) dU ,

where the multiplier ε(U, K, L) satisfies a "cocycle identity" w.r.t. the set operation +,
slightly more complicated than (8.3). Many interesting products appear, some of which
are intermediate between Wiener and Clifford.

Fermion Brownian motion

9 For every real element h of the first chaos, the Clifford multiplication by h is
a selfadjoint bounded operator, hence a random variable. Since the square of this
operator is ||h||² I, as a r.v. it assumes the values ±||h||. Since in the vacuum state
it has expectation 0, its distribution must be symmetric. Thus the first chaos appears
as a linear space of two-valued random variables, the anticommutative analogue of the
Gaussian space it was in the commutative case.
In particular, if we take as element of the first chaos the ordinary Brownian motion
random variable X_t, the norm of which is √t, the corresponding Clifford multiplication
operators constitute a remarkable non-commutative stochastic process, called fermion
Brownian motion. This process appears as a central limit of sums of anticommuting
spins, and its stochastic integration properties have been studied in a series of papers
by Barnett, Streater and Wilde [BSW].
We shall see later other examples of multiplication formulas, concerning operators
instead of vectors (but from an algebraic point of view there is no difference).

§4. MAASSEN'S KERNELS

Operators defined by kernels

1 After vectors and products of vectors, we describe operators and products of
operators in close analogy with the representation of toy Fock space operators in
chapter II. Just as the discrete decomposition f = Σ_A \hat f(A) x_A of vectors in toy
Fock space has become a stochastic integral ∫_𝒫 \hat f(A) dX_A, the discrete decomposition
K = Σ_{A,B} k(A,B) a^+_A a^−_B of operators leads to "stochastic integrals" involving the
creation and annihilation operators in normal ordering, i.e. integrals

∫_{s_1<…<s_m, t_1<…<t_n} k(s_1,…,s_m ; t_1,…,t_n) da^+_{s_1} … da^+_{s_m} da^−_{t_1} … da^−_{t_n} .

Just as for vectors, there is also a form of this integral which does not depend on the
ordering of ℝ_+

I_{m,n}(k) = (1/m! n!) ∫_{ℝ_+^m × ℝ_+^n} k(s_1,…,s_m ; t_1,…,t_n) da^+_{s_1} … da^+_{s_m} da^−_{t_1} … da^−_{t_n} ,

where now the kernel has been symmetrized separately in each group of variables,
or antisymmetrized in the case of fermion operators. Such integrals, known as Wick
monomials, are standard in quantum field theory (see for instance Berezin's book
[Ber]), with slightly different notation and a considerably different spirit. Instead of
da^+_s the differential element is written a^+_s ds, the integrand being an "operator valued
distribution"; and since distributions have entered the picture, why not assume that k
itself is a distribution? In this way, the theory is well launched into its heavenly orbit,
never to come down. In the case of vectors instead of operators the distribution point of

view (the white noise approach to Brownian motion) is less popular among probabilists
than Ito's stochastic integration. Though the interest of a distribution approach (in
both cases) is not to be denied, this seems to indicate that much can be done with
more down-to-earth methods. We shall describe later the stochastic integral calculus
of Hudson-Parthasarathy; here we follow Maassen [Maa], who develops a theory with
plain bounded functions, a modest and reasonable first step. Still, the use of the number
operator allows one to deal with kernels which in Berezin's description would be singular on
diagonals. Contrary to Hudson-Parthasarathy's stochastic calculus (given in Chapter
VI), Maassen's approach depends only apparently on the order of ℝ_+, and can be
extended to Fock spaces over a general space L²(μ).
We consider first operators defined, in Guichardet's shorthand notation, by a "kernel"
k with two arguments (including a term k(∅,∅) I)

(1.1)    K = ∫_{𝒫×𝒫} k(A,C) da^+_A da^−_C

and then, more generally, by a kernel with three arguments

(1.1')   K = ∫_{𝒫×𝒫×𝒫} k(A,B,C) da^+_A da^∘_B da^−_C

We start with formal algebraic computations, and prove only later (in subsection 5)
that under some restrictions kernels really define closable operators on a stable dense
domain.
To give a meaning to K as an operator, we describe how it acts on a vector
f, defining both f and g = Kf by their chaos expansions f = ∫ \hat f(L) dX_L and
g = ∫ \hat g(L) dX_L. First of all, let us say how a "differential element" in (1.1) acts on
an element of the "continuous basis" dX_H. Such formulas are meant only as heuristic
justifications, so that if our reader dislikes them, he may go directly to the finite formula
(1.5). They derive from the definition of the creation, annihilation and number operators,
and correspond to those concerning toy Fock space (Chap. II, §2, (3.1)), if x_A is replaced
by dX_A/√dA, a^±_A by da^±_A/√dA, and a^∘_A by da^∘_A

(1.2)   da^+_A dX_L = dX_{L+A}   (dX_{L∪A} if L ∩ A = ∅, 0 otherwise) ,
        da^−_A dX_L = dX_{L−A} dA   (dX_{L\A} dA if A ⊂ L, 0 otherwise) ,
        da^∘_A dX_L = dX_L if A ⊂ L, 0 otherwise.

Then by composition

(1.3)   da^+_A da^−_C dX_L = dX_{A+(L−C)} dC .

Recall that on toy Fock space we had two bases for operators, one consisting of all
a^+_A a^−_C with A and C not necessarily disjoint, and one consisting of all a^+_A a^∘_B a^−_C with
disjoint A, B, C. Thus it is natural to assume that they are also disjoint in the following
definition

(1.4)   da^+_A da^∘_B da^−_C dX_L = dX_{A+B+(L−(B+C))} dC .



Now we have (still computing formally)

( ∫ k(A,C) da^+_A da^−_C ) ( ∫ \hat f(L) dX_L ) = ∫ k(A,C) \hat f(L) dX_{A+(L−C)} dC ,

and to find the value of \hat g(H) we express the solutions of the equation A+(L−C) = H in
the form H = U+V, A = U, C = M, L = V+M. Thus U, V realize a partition of the
given set H and can be chosen in finitely many ways, while M is a parameter, disjoint
from V -- but since we integrate over the simplex, this condition holds a.s. and need
not be expressed. In this way we get the basic formula of Maassen for g = Kf

(1.5)   \hat g(H) = ∫_𝒫 Σ_{U+V=H} k(U,M) \hat f(V+M) dM

(the reader should write the corresponding formula on toy Fock space). On the other
hand, M is also a.s. disjoint from U ⊂ H. Therefore, only the value of the kernel k(A,C)
on disjoint subsets matters in this computation, and we are unable to represent the
number operator as a kernel, contrary to the discrete case where we had a_i^+ a_i^− = a_i^∘.
This shows the advantage of using the second basis of the discrete case, which contains
explicitly the number operator, and in which nothing of the kind happens. We then get
the formula

(1.5')  \hat g(H) = ∫_𝒫 Σ_{U+V+W=H} k(U,V,M) \hat f(V+W+M) dM .

The combinatorics of kernels with three arguments is more complicated, but they are
well worth the additional trouble, as we shall see.
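On toy Fock space the heuristic rules above become finite sums, and the analogue of (1.5) can be checked directly against the operator K = Σ_{A,M} k(A,M) a^+_A a^−_M. The following sketch is illustrative only; in the discrete formula, g(H) = Σ_{U+V=H} Σ_{M disjoint from V} k(U,M) f(V+M), the set M may meet U, which is exactly how a^+ a^− recovers the number operator in the discrete case.

```python
import numpy as np

n = 3                      # sites of toy Fock space; basis x_S, S a bitmask
dim = 2 ** n

def creation(i):
    """a_i^+ : x_S -> x_{S+i} if i not in S, else 0."""
    m = np.zeros((dim, dim))
    for S in range(dim):
        if not S & (1 << i):
            m[S | (1 << i), S] = 1.0
    return m

def annihilation(i):
    return creation(i).T    # a_i^- is the adjoint of a_i^+

def prod_over(op, A):
    """Product of op(i) over the bits of A (factors at distinct sites commute)."""
    m = np.eye(dim)
    for i in range(n):
        if A & (1 << i):
            m = m @ op(i)
    return m

rng = np.random.default_rng(0)
k = rng.standard_normal((dim, dim))   # discrete kernel k(A, M)
f = rng.standard_normal(dim)

# K = sum_{A,M} k(A,M) a_A^+ a_M^-  in normal order
K = sum(k[A, M] * prod_over(creation, A) @ prod_over(annihilation, M)
        for A in range(dim) for M in range(dim))
g1 = K @ f

# discrete analogue of formula (1.5)
g2 = np.zeros(dim)
for H in range(dim):
    for U in range(dim):
        if U & H == U:                 # U + V = H
            V = H ^ U
            for M in range(dim):
                if M & V == 0:         # M disjoint from V (but M may meet U)
                    g2[H] += k[U, M] * f[V | M]

assert np.allclose(g1, g2)
```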
2 Let us give some examples of kernels.
1) The creation, number and annihilation operators are represented as follows by
three-argument kernels k(A,B,C) (it is understood that k has the value 0 if the
triple A, B, C is not of the form indicated)

(2.1)   a^+_h : k({t},∅,∅) = h(t) ;   a^−_h : k(∅,∅,{t}) = \bar h(t) ;
        a^∘_h : k(∅,{t},∅) = h(t) .

For creation and annihilation we may also use two-argument kernels

(2.2)   a^+_h : k({t},∅) = h(t) ;   a^−_h : k(∅,{t}) = \bar h(t) .


Let us also make explicit, for future use, the action of such operators (writing t instead
of {t} to abbreviate notation)

(2.3)   (a^+_h f)^(L) = Σ_{t∈L} h(t) \hat f(L−t) ;   (a^−_h f)^(L) = ∫ \bar h(t) \hat f(L+t) dt ;
        (a^∘_h f)^(L) = ( Σ_{t∈L} h(t) ) \hat f(L) .

2) Multiplication operators can be represented by kernels. For instance, the kernel

k(A, ∅, C) = σ^{2|C|} \hat f(A+C)

represents the operator of Wick multiplication by f for σ = 0, of Wiener multiplication
by f for σ = 1, and a whole family of associative products for arbitrary complex values
of σ. The kernel

k(A,B,C) = c^{|B|} \hat f(A+B+C)

represents a Poisson multiplication by f. There are many other associative multipli-
cations which can be represented by kernels (including Clifford multiplications). See
Lindsay and Parthasarathy [LiP1].
The Wick product with f is represented by the kernel k(A,∅,∅) = \hat f(A), which is
a generalized creation operator. Kernels k(∅,∅,C) = \hat f(C) are generalized annihilation
operators, and are called contractions.
3) Kernels depending on the middle variable. A kernel k(A,B,C) which is equal to
0 unless A = C = ∅ (and then is equal to k(B)) acts as a multiplier transformation in
harmonic analysis (recall that, in the discrete case, \hat f could be interpreted as a kind of
Fourier transform of the r.v. f)

(2.6)   (Kf)^(A) = j(A) \hat f(A)   with   j(A) = Σ_{B⊂A} k(B) .

For instance, the multiplier j(A) = 1 if |A| = m, 0 otherwise, represents the projection
operator on the chaos of order m. The kernel k corresponding to a multiplier j may
be computed by means of the Möbius inversion formula

(2.7)   k(A) = Σ_{B⊂A} (−1)^{|A−B|} j(B) .
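As an illustration (not in the text), the pair (2.6)-(2.7) is easy to verify on small sets; here j is the multiplier of the projection on the first chaos.

```python
from itertools import combinations

def subsets(A):
    A = tuple(sorted(A))
    for r in range(len(A) + 1):
        for c in combinations(A, r):
            yield frozenset(c)

def moebius(j, A):
    """Formula (2.7): k(A) = sum_{B subset A} (-1)^{|A - B|} j(B)."""
    return sum((-1) ** (len(A) - len(B)) * j(B) for B in subsets(A))

j = lambda A: 1 if len(A) == 1 else 0   # projection on the chaos of order 1
ground = frozenset({1, 2, 3})
k = {B: moebius(j, B) for B in subsets(ground)}

# (2.6): summing k over subsets recovers the multiplier j
for B in subsets(ground):
    assert sum(k[C] for C in subsets(B)) == j(B)
```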
A remarkable family of such operators is that of the multipliers

(2.8)   j_t(A) = (−1)^{|A ∩ ]0,t]|} .

The corresponding operators J_t = (−1)^{N_t} were first introduced by Jordan and Wigner
in the discrete case. As we shall see later, they can be used to define fermion creation
and annihilation operators from boson ones, and conversely.
4) Weyl operators. For u ∈ L²(ℝ_+), we consider the kernel with two arguments

(2.9)   k(A,C) = e^{−||u||²/2} Π_{s∈A} u(s) Π_{t∈C} ( −\bar u(t) )

(note the analogy with exponential vectors). Let us compute g = K ℰ(h) using formula
(1.5) for kernels with two arguments

ℰ(h)^(A) = Π_{r∈A} h(r) ;   \hat g(H) = ∫ dM Σ_{U+V=H} k(U,M) Π_{r∈V+M} h(r) .

First comes the constant exp(−||u||²/2), and a factor which does not contain the
integration variable M

Σ_{U+V=H} Π_{s∈U} u(s) Π_{r∈V} h(r) = Π_{t∈H} ( u(t) + h(t) )

and then the integral over 𝒫

∫ Π_{r∈M} ( −\bar u(r) h(r) ) dM .

The function a = −\bar u h is integrable on ℝ_+, and the integral on 𝒫 is equal to
exp( ∫ a(r) dr ). Finally we have

K ℰ(h) = e^{−<u,h> − ||u||²/2} ℰ(u+h)


and we recognize the action of Weyl operators W_u on exponential vectors. A similar
reasoning, with slightly more complicated notations, shows that the following kernel
with three arguments, where λ denotes a real valued function on ℝ_+

k(A,B,C) = e^{−||u||²/2} Π_{r∈A} u(r) Π_{s∈B} ( e^{iλ(s)} − 1 ) Π_{t∈C} ( −e^{iλ(t)} \bar u(t) )

represents the Weyl operator W_{u,λ}.
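The key analytic step in this computation is the exponential identity ∫_𝒫 Π_{r∈M} a(r) dM = exp(∫ a(r) dr) for the measure on 𝒫. Its discrete counterpart, Σ_{M⊂S} Π_{r∈M} a(r) = Π_{r∈S} (1 + a(r)), can be checked in a few lines (a sketch, not part of the text):

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
S = range(6)
a = rng.standard_normal(6)

# sum over all subsets M of S of the product of a over M
# (np.prod of an empty list is 1, the term M = empty set)
total = sum(np.prod([a[i] for i in M])
            for r in range(7) for M in combinations(S, r))

assert np.isclose(total, np.prod(1.0 + a))
```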
5) Differential second quantization operators may be represented by kernels, at
least in some cases. Consider an operator U on L²(ℝ_+) which is represented by a
Hilbert-Schmidt kernel u(s,t). Then it is easy to compute the action of the kernel with
two arguments ∫ u(s,t) da^+_s da^−_t, and to check that it represents the differential second
quantization λ(U) = dΓ(U).
6) We end with a remarkable example, discovered independently by S. Attal and
J.M. Lindsay. It shows most evidently the importance of kernels with three arguments.
Consider a Hilbert-Schmidt operator H on Fock space. It is given by a kernel (in
the ordinary sense)

(2.10)  Hf(A) = ∫_𝒫 h(A,M) f(M) dM .

Then it is also given as a Maassen-like operator with the Maassen kernel

(2.11)  k(U,V,W) = (−1)^{|V|} h(U,W) .

Indeed we have, before any integration is performed

Σ_{U+V+W=A} k(U,V,M) f(V+W+M) = h(A,M) f(M) .

The computation is as follows : we rewrite the left hand side as

Σ_{U⊂A} Σ_{V⊂A−U} (−1)^{|V|} h(U,M) f(A−U+M) = Σ_{U⊂A} h(U,M) f(A−U+M) Σ_{V⊂A−U} (−1)^{|V|} ,

and this last sum is 0 unless A−U = ∅, in which case it is 1. Therefore what we get
is simply h(A,M) f(M).
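The telescoping argument rests on the elementary identity Σ_{V⊂S} (−1)^{|V|} = 0 unless S = ∅. A small numerical check of the whole cancellation, with arbitrary illustrative test functions h and f (they are placeholders, not the operator kernels of the text):

```python
from itertools import product

def tripartitions(A):
    """All ordered splittings A = U + V + W into three disjoint blocks."""
    A = tuple(A)
    for labels in product(range(3), repeat=len(A)):
        blocks = [set(), set(), set()]
        for x, lab in zip(A, labels):
            blocks[lab].add(x)
        yield tuple(frozenset(b) for b in blocks)

# arbitrary "kernel" h(.,.) and "vector" f(.) on finite sets of integers
h = lambda A, M: (1 + sum(A)) * (2 + sum(M))
f = lambda S: 3 + sum(x * x for x in S)

A, M = frozenset({0, 1, 2}), frozenset({7})
lhs = sum((-1) ** len(V) * h(U, M) * f(V | W | M)
          for U, V, W in tripartitions(A))
assert lhs == h(A, M) * f(M)    # only the term U = A survives
```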

Composition of kernels

3 Kernels represent operators in normal form, i.e. with the creation operators to the
left of annihilation operators. If we are to multiply two operators J and K given by
kernels, we find in the middle a set of creation and annihilation operators which are not
normally ordered; the possibility of reordering them and the practical rules to achieve
this are popular in physics under the name of Wick's theorem. The shorthand notation
for Fock space allows us to give a closed formula for the product L = JK.
We begin with the composition formula for kernels with two arguments (there is a
corresponding, more complicated, discrete formula). Denoting by ℓ, j, k the kernels for
L, J, K we have

(3.1)   ℓ(A,B) = ∫ Σ_{R+S=A, T+U=B} j(R, T+M) k(S+M, U) dM .

Indeed, applying this kernel ℓ to a vector f (and omitting the hats for simplicity) we
have

Lf(H) = ∫ Σ_{U+T=H} ℓ(U,Q) f(T+Q) dQ
      = ∫ Σ_{U+T=H, R+S=U, N+P=Q} j(R, N+M) k(S+M, P) f(T+Q) dQ dM ;

since we are splitting the integration variable Q into N+P, we may consider N, P as
two independent integration variables (§3, formula (3.1))

      = ∫ Σ_{R+S+T=H} j(R, N+M) k(S+M, P) f(T+N+P) dM dN dP
      = ∫ Σ_{R+V=H, S+T=V, M+N=L} j(R,L) k(S+M, P) f(T+N+P) dL dP ,

where the same formula (3.1) of §3 has been used again. On the last line we may
recognize J(Kf).
The formula relative to kernels with three arguments is much more complicated,
and we write it without proof. In the course of these notes we will rewrite it in several
different fashions (Chap. V, §2, 3 and §3, 2)

(3.2)   ℓ(U,V,W) =
        = ∫ Σ_{A_1+A_2+A_3=U, B_1+B_2+B_3=V, C_1+C_2+C_3=W} j(A_1, A_2+B_1+B_2, C_1+C_2+M) k(M+A_2+A_3, B_2+B_3+C_2, C_3) dM .

Though these formulas look rather forbidding, we recall that they proceed from simple
differential rules, just as the Wiener, Wick, Poisson, Clifford, Grassmann multiplication
formulas follow only from associativity and (anti)commutativity, and from one single
rule giving the square of the differential element dX_s. Here we have three basic
differential elements, and we must multiply them two by two. All products are 0, except
those in increasing order ( − < ∘ < + ), which are

(3.3)   da^−_t da^∘_t = da^−_t ,   da^−_t da^+_t = dt ,   da^∘_t da^∘_t = da^∘_t ,   da^∘_t da^+_t = da^+_t .

To these rules should be added the commutation of all operators at different times.
After the work of Hudson-Parthasarathy, this table has been known as the Ito table --
a slightly misleading name, since Ito was concerned with adapted stochastic calculus,
and the composition of kernels does not require adaptation. Forgetting for one instant
the number operator, what we are doing is really constructing an associative algebra
which realizes the CCR or CAR as its commutators or anticommutators, and in the
next chapter we shall see other ways of realizing this.

4 In section 3, subsection 5, we introduced discrete chaotic expansions, with a method
that reduces them to the (simpler) continuous case. The same method works for kernels
too, and we indicate the corresponding formulas.
We start with the representation formula for vectors (formula (5.1) of §3)

(4.1)   f = Σ_α ( \hat f(α) / α! ) e_α   with   ||f||² = Σ_α |\hat f(α)|² / α! .

We describe kernels using a similar notation

(4.2)   K = Σ_{ρ,μ} ( k(ρ,μ) / ρ! μ! ) a^{+ρ} a^{−μ} ,

where for an occupation vector α = (n_{i_1}, …, n_{i_k}) we write α! = n_{i_1}! … n_{i_k}! and a^{±α}
abbreviates (a^±_{i_1})^{n_{i_1}} … (a^±_{i_k})^{n_{i_k}}. The vector g = Kf is given by

(4.3)   \hat g(α) = Σ_μ Σ_{ρ+σ=α} ( α! / ρ! σ! μ! ) k(ρ,μ) \hat f(σ+μ) .

In the discrete case, there is no need of three-argument kernels, since the number
operator can be expressed as a^+ a^−. The composition formula for a product L = JK
which corresponds to (3.1) is

(4.4)   ℓ(α,β) = Σ_μ Σ_{ρ+σ=α, τ+υ=β} ( α! β! / ρ! σ! τ! υ! μ! ) j(ρ, τ+μ) k(σ+μ, υ) .

It expresses Wick's theorem in closed form, though the diagrammatic version physicists
use is probably just as efficient and more suggestive.

Maassen's theorem

5 We return to the continuous case. We have been doing algebra, but it is necessary to
transform this into analysis, i.e. to find some simple conditions under which the action
of a kernel on a vector really defines a vector in Fock space.
Maassen's test vectors are vectors f of Fock space which satisfy the following
regularity assumptions : 1) \hat f(A) vanishes unless A is contained in some bounded
interval ]0,T] (if the results must be extended from ℝ_+ to an arbitrary measure
space, the bounded interval is replaced by any set of finite measure) ; 2) |\hat f(A)| is
dominated by c M^{|A|}, where c and M are constants (M > 1 is allowed). The space 𝒯
of test vectors contains all exponential vectors ℰ(h) such that h is bounded and has
compact support in ℝ_+. In particular, it is dense in Fock space.
The word "test vector" is convenient here, but it has several meanings in Wiener space
analysis, and this is not the most common one.
Similarly, one defines regular kernels with two or three arguments as follows : 1)
k(A,B,C) vanishes unless A, B, C are all contained in some bounded interval ]0,T] ;
2) |k(A,B,C)| is dominated by c M^{|A|+|B|+|C|}.
The main result of Maassen is the following :
THEOREM. Applying a regular kernel to a test vector gives again a test vector. Thus
regular kernels define operators on the common stable domain 𝒯, and on this domain
they form a *-algebra. In particular, since they have densely defined adjoints they are
closable.
The proof of this theorem is not difficult, and we only sketch it. We denote by k
a regular kernel, with two arguments for simplicity, and by f a test vector. We may
assume that the constants c, T and M are the same for both. Then for the vector
g = Kf we have \hat g(H) = 0 if H is not contained in ]0,T], and if it is

|\hat g(H)| ≤ ∫ Σ_{A+B=H} c M^{|A|+|U|} c M^{|B|+|U|} dU .

A factor c² M^{|A|+|B|} = c² M^{|H|} comes out, and there remains the integral of M^{2|U|} over the subsets
U of ]0,T], which is equal to e^{M²T}. Thus we have again a bound of the same type.
For kernels with three arguments, the computation is only slightly more complicated :
we split H into A+B+C instead of A+B, and therefore we have an additional factor
M^{|B|} ≤ M^{|H|}. The proof that the composition of two regular kernels is again regular
is also essentially the same.
As for adjoints, it is easy to check that, given a regular kernel k with two resp. three
arguments, the kernel k* defined by

(5.1)   k*(A,C) = \bar k(C,A)   resp.   k*(A,B,C) = \bar k(C,B,A)

is obviously regular, and defines an operator K* which is adjoint to K on test vectors.
To see this, we compute the form Φ(g,f) = < g, Kf > as follows (still omitting hats)

Φ(g,f) = ∫ Σ_{A+B+C=H} g(H) k(A,B,U) f(B+C+U) dU dH
       = ∫ g(A+B+C) k(A,B,U) f(B+C+U) dU dA dB dC

where we have used the fundamental property of the measure dH, which allows treating
A, B, C as independent integration variables. In the last integral it is clear that we can
exchange the roles of A and U to find the adjoint kernel (5.1). But it is also interesting
to transform it as follows, grouping B, C into one single integration variable V

Φ(g,f) = ∫ Σ_{B+C=V} g(A+V) k(A,B,U) f(V+U) dU dA dV
(5.2)  = ∫ g(A+V) \tilde k(A,V,U) f(V+U) dA dV dU

where we have defined

(5.3)   \tilde k(A,V,U) = Σ_{B⊂V} k(A,B,U) .

Note that k may be reconstructed from \tilde k using the Möbius inversion formula. Formula
(5.2) is a convenient symmetric way to define forms by kernels, and one may remark
that it is equivalent to demand Maassen-like domination conditions on k and on \tilde k.
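On toy Fock space, where a two-argument kernel acts through the finite analogue of (1.5), the adjoint rule (5.1) can be verified numerically. A sketch under the discrete conventions of Chapter II (illustrative only):

```python
import numpy as np

n = 3
dim = 2 ** n

def action(k, f):
    """Discrete analogue of (1.5):
       g(H) = sum_{U+V=H} sum_{M disjoint from V} k(U,M) f(V+M)."""
    g = np.zeros(dim, dtype=complex)
    for H in range(dim):
        for U in range(dim):
            if U & H == U:
                V = H ^ U
                for M in range(dim):
                    if M & V == 0:
                        g[H] += k[U, M] * f[V | M]
    return g

rng = np.random.default_rng(2)
k = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
f = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
g = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)

k_star = np.conj(k.T)                        # (5.1): k*(A,C) = conj k(C,A)
assert np.isclose(np.vdot(g, action(k, f)),       # <g, K f>
                  np.vdot(action(k_star, g), f))  # <K* g, f>
```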
REMARKS. a) There are some minor points to add to this theorem. The fact that the
composition of operators is associative does not quite imply the same property for the
composition of regular kernels, since an operator does not uniquely determine its kernel.
Maassen has given a detailed proof that it is associative indeed, using the same method
as for the associativity of the Wiener product.
b) The second point is precisely that of uniqueness : if two regular kernels define
the same operator on test vectors, they are a.e. equal as functions on 𝒫 × 𝒫 × 𝒫. We
postpone this problem to the next chapter, §2, subsection 6.
c) The third point concerns the failure of this approach to construct exponentials,
while the first step in a real physical problem is to construct a quantum evolution e^{−itH}
from a hamiltonian H. The exponential series for a regular kernel usually diverges.

Fermion kernels

6 We are going to review summarily the representation of operators on antisymmetric
Fock space. The fermion annihilation, number and creation differentials will be denoted
by db^ε with ε = −, ∘, +.
As operators were defined, in the symmetric case, by kernels with two and three
arguments (1.1) and (1.1'), we may define operators in the antisymmetric case by

(6.1)   K = ∫ k(A,C) (−1)^{n(A,C)} db^+_A db^−_C

(6.1')  K = ∫ k(A,B,C) (−1)^{n(A,B+C)} db^+_A db^∘_B db^−_C


The sign factors are meant to simplify the expressions to follow. The formulas in
subsection 1 concerning boson operators are replaced now by

db^+_A dX_L = (−1)^{n(A,L)} dX_{L+A} ,   db^−_C dX_L = (−1)^{n(C,L)} dX_{L−C} dC ,
db^∘_B dX_L = dX_{B+(L−B)}   (0 by convention if B ⊄ L)

from which we deduce the action Kf = g of the operator K on a vector f = ∫ \hat f(M) dX_M

(6.2)   \hat g(H) = ∫ Σ_{U+V=H} k(U,M) \hat f(M+V) (−1)^{n(U+M, M+V)} dM ,
(6.2')  \hat g(H) = ∫ Σ_{U+V+W=H} k(U,V,M) \hat f(M+V+W) (−1)^{n(U+M, M+V+W)} dM .
With the conventions (6.1), it then appears that the kernel for left Clifford multiplication
by f is k(A,C) = \hat f(A+C), as for Wiener's multiplication. Let us also indicate the
formula for the composition of two kernels L = JK (cf. (3.1))

(6.3)   ℓ(A,B) = ∫ Σ_{R+S=A, T+U=B} j(R, T+M) k(M+S, U) (−1)^{n(R+T+M, M+S+U)} dM .

This is not too ugly, but we do not attempt to give the formula for three-argument
kernels !

The continuous Jordan-Wigner transformation

7 As a consequence of their work on non-commutative stochastic integration, Hudson
and Parthasarathy [HuP2] could extend to continuous time the Jordan-Wigner trans-
formation relating the boson and fermion creation and annihilation operators in the
discrete case (toy Fock space, Chap. II, §5 subsection 2). We shall explain the essential
point in a heuristic way, leaving the details to the reader. See also Parthasarathy and
Sinha [PaS1].
As usual, we work on Fock space over L²(ℝ_+), which can naturally be considered
as symmetric or antisymmetric. Let N_t be the number operator up to time t, and
J_t be the operator (−1)^{N_t}. In discrete time, we would not use N_i, but rather its
predictable analogue N_{i−1}. Anticipating the continuous stochastic calculus, the result
of Hudson-Parthasarathy says that (J_t) is an adapted operator process, and that the
boson creation and annihilation operators a^ε_t are related to their fermion analogues b^ε_t
by the relations

(7.1)   b^ε_t = ∫_0^t J_s da^ε_s ;   a^ε_t = ∫_0^t J_s db^ε_s ;

which involve operator stochastic integrals (finite sums in the discrete case). To make
this formula transparent, let us look at its infinitesimal version : da^+_t transforms a
continuous basis element dX_A into dX_{A+t}, and then J_t adds a factor (−1)^n, where
n = |(A+t) ∩ ]0,t[| (t is not counted : the finite sets A containing a given t can be
neglected), and n is also the number of inversions n(t,A), i.e. the number of elements of
A (or A+t) strictly smaller than t. And therefore what we get is db^+_t dX_A. The reasoning
is the same for annihilation operators.
The preceding derivation is heuristic, but the integral formula can be rigorously
verified using the rules of stochastic calculus.
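In discrete time the transformation is elementary to check : with J_{i−1} = (−1)^{N_{i−1}} acting on the sites before i, the operators b_i^± = J_{i−1} a_i^± satisfy the CAR, although the a_i^± at distinct sites commute. A sketch on a three-site toy Fock space (illustrative only):

```python
import numpy as np

n = 3
dim = 2 ** n

def site_op(m2, i):
    """Embed a 2x2 matrix at site i of the n-site toy Fock space."""
    out = np.eye(1)
    for j in range(n):
        out = np.kron(out, m2 if j == i else np.eye(2))
    return out

ap2 = np.array([[0.0, 0.0], [1.0, 0.0]])   # a^+ on one site
am2 = ap2.T                                # a^-
Z2 = np.diag([1.0, -1.0])                  # (-1)^{number} on one site

def b(i, eps):
    """Discrete Jordan-Wigner: b_i^eps = (prod_{j<i} (-1)^{N_j}) a_i^eps."""
    J = np.eye(dim)
    for j in range(i):
        J = J @ site_op(Z2, j)
    return J @ site_op(ap2 if eps == '+' else am2, i)

I = np.eye(dim)
for i in range(n):
    for j in range(n):
        # CAR: {b_i^-, b_j^+} = delta_ij I  and  {b_i^-, b_j^-} = 0
        assert np.allclose(b(i, '-') @ b(j, '+') + b(j, '+') @ b(i, '-'),
                           (i == j) * I)
        assert np.allclose(b(i, '-') @ b(j, '-') + b(j, '-') @ b(i, '-'), 0 * I)
```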
As an exercise, let us give to (7.1) a formulation in terms of kernels. First of all, J_s is a
multiplier transformation : if f = ∫ \hat f(A) dX_A, then J_s multiplies \hat f by (−1)^{|A ∩ ]0,s[|}.
Using the Möbius inversion formula (subs. 2, formula (2.7)) we see that J_s is given by
a kernel

j_s(∅, B, ∅) = (−2)^{|B ∩ ]0,s[|} .

Given a family K_s of operators, given by kernels k_s(A,B,C), adapted in the sense
that k_s vanishes unless all three arguments are contained in ]0,s], the formal rules
defining the stochastic integral L^ε = ∫ K_s da^ε_s are the following. Given a subset A, let
us call ∨A its last element and put A^− = A \ {∨A}. Then the kernel of L^ε is

(7.2)   ℓ^+(A,B,C) = k_{∨A}(A^−, B, C) ;   ℓ^∘(A,B,C) = k_{∨B}(A, B^−, C) ;
        ℓ^−(A,B,C) = k_{∨C}(A, B, C^−) .

We explain this in detail in Chapter VI, §2 subs. 2. Thus we can compute the bosonic
kernel of a fermion creation operator (the annihilation operator is similar, with the first
variable empty instead of the third)

b^+_t : k({s}, B, ∅) = (−2)^{|B ∩ ]0,s[|}   if s < t, B ⊂ ]0,t] , 0 otherwise.

The validity of this relation can be checked explicitly on a continuous basis vector dX_A.

Other estimates on kernels

8¹ Maassen's theorem is a simple and useful result, but it is too restrictive to deal
only with uniform estimates on the kernel. We will give now a more recent estimate,
due to Belavkin-Lindsay ([BeL]), which uses a mixture of uniform norm in the number
operator variables, and L² (Hilbert-Schmidt) norm in the creation and annihilation
variables.
First, we introduce the idea of a scale of Fock spaces, denoting by Φ_p for p > 0 the
Hilbert space of functions f(A) on 𝒫 such that

(8.1)   ||f||²_{(p)} = ∫ |f(A)|² p^{|A|} dA < ∞ .

Then we have Φ_1 = Φ ; for p > 1, Φ_p is a dense subspace of Fock space, which
may be interpreted as a space of "smooth" random variables (in fact, ∩_p Φ_p is used as
a space of test-functions on Wiener space) while for p < 1, Φ_p is a space of generalized
functions or distributions.

1 This subsection is an addition.



If g, f are elements of Fock space (random variables) belonging respectively to Φ_{1/p}
and Φ_p, then g(A) p^{−|A|/2} and f(A) p^{|A|/2} belong to Φ, and their scalar product is the
standard scalar product < g, f >. This can be extended to a duality functional between
Φ_{1/p} and Φ_p, similar to the duality between (Hilbert) Sobolev spaces of functions and
distributions.
Next, given a kernel k, let us recall the computation of the bilinear functional
< g, Kf > where K is the operator associated with k. For simplicity of notation
we use the same letter (omitting hats) for a vector and its chaotic expansion. Then we
have

(8.2)   < g, Kf > = ∫ g(A+B) \tilde k(A,B,C) f(B+C) dA dB dC ,

where \tilde k is defined by

(8.3)   \tilde k(A,B,C) = Σ_{V⊂B} k(A,V,C) .
For the proof, see subsection 5, formulas (5.2)-(5.3).
Let us now introduce the Belavkin-Lindsay norms. We first put for b > 0

(8.4)   K_b(A,C) = sup_B |\tilde k(A,B,C)| / b^{|B|} ,

and then, for a, c > 0

(8.5)   ||k||²_{a,b,c} = ∫ ( K_b²(A,C) / a^{|A|} c^{|C|} ) dA dC .

It should be noted that our b would be called b² in the notation of Belavkin-Lindsay.
The limiting case b = 0 is interesting too, since it means that \tilde k(A,B,C) vanishes
unless B = ∅ (see the example after the proof).
Here comes the main estimate.
THEOREM. Assume ||k||_{a,b,c} < ∞, p > a, q > c and b ≤ (p−a)(q−c). Then for
||g||_{(p)} < ∞, ||f||_{(q)} < ∞ the integral (8.2) is absolutely convergent, and dominated
by ||g||_{(p)} ||k||_{a,b,c} ||f||_{(q)}.
PROOF. We may assume g, f, k ≥ 0. Our aim is to dominate

∫ g(A+B) K_b(A,C) b^{|B|} f(B+C) dA dB dC .

We put K_b(A,C) = S(A,C) √(a^{|A|}) √(c^{|C|}), where S(A,C) is square integrable by
hypothesis, we replace b by (p−a)(q−c), and we use the inequality x^θ y^{1−θ} ≤ x + y,
which gives us

a^{|A|} (p−a)^{|B|} ≤ p^{|A|+|B|} ,   c^{|C|} (q−c)^{|B|} ≤ q^{|C|+|B|} .



Then there remains

∫ g(A+B) √(p^{|A|+|B|}) S(A,C) f(B+C) √(q^{|C|+|B|}) dA dB dC
  = ∫ dC √(q^{|C|}) ∫ g(A+B) √(p^{|A|+|B|}) S(A,C) f(B+C) √(q^{|B|}) dA dB .

We apply the Schwarz inequality to the inner integral, thus transforming it into

( ∫ g²(A+B) p^{|A|+|B|} dA dB )^{1/2} ( ∫ S²(A,C) f²(B+C) q^{|B|} dA dB )^{1/2} .

The first factor is equal to ||g||_{(p)} and goes out of the integral. Let us set ∫ S²(A,C) dA =
σ(C). We must dominate

∫ dC √(q^{|C|}) ( ∫ f²(B+C) q^{|B|} σ(C) dB )^{1/2} .

Again a factor σ(C)^{1/2} comes out. Applying again the Schwarz inequality for the
measure dC yields two factors. First ( ∫ σ(C) dC )^{1/2}, which is the kernel's norm, and
next ( ∫ f²(B+C) q^{|B|+|C|} dB dC )^{1/2}, equal to ||f||_{(q)}. □
EXAMPLES. The Maassen kernel of a Hilbert-Schmidt operator is of the form (see formula
(2.11))

k(A,B,C) = (−1)^{|B|} h(A,C)   with   ∫ |h(A,C)|² dA dC < ∞ .

Therefore we have \tilde k(A,B,C) = h(A,C) if B = ∅ and 0 otherwise. Then we are
in the limiting case b = 0 with K_b(A,C) = |h(A,C)|. The natural choice for a, c is
a = c = 1, and the condition (p−1)(q−1) ≥ b is satisfied for p = 1, q = 1. We thus
recover the boundedness of the operator defined by h. This is a rather exceptional case,
however, and doesn't mean the estimate is very sharp.
Another example is that of kernels satisfying the Maassen conditions, compact
time support and |k(A,B,C)| ≤ M^{|A|+|B|+|C|}. Then we have a similar property
for \tilde k (at the cost of increasing M), and we may take b = M. Then we have
K_b(A,C) ≤ M^{|A|+|C|}, and since we are working on a bounded simplex we may take
a, c arbitrarily close to 0. Accordingly, the condition on p, q becomes pq ≥ M, meaning
that K maps the "Sobolev space" Φ_p into the dual of the "Sobolev space" Φ_{M/p}, that
is, Φ_{p/M}. Assume now f is a test function in a sense different from Maassen's, namely
that f ∈ ∩_p Φ_p. Then the same is true for Kf, and we get a result similar to Maassen's
theorem.
REMARK. The Belavkin-Lindsay estimates suggest conditions which should probably be
imposed on any reasonable kernel with three arguments k(A,B,C). Then a minimal
assumption¹ should be that, for all integers i, j, k, the function of (A,C)

sup_{|B|=j} |k(A,B,C)|   ( = f_j(A,C) )

should be square integrable over the set { |A| = i, |C| = k }. The similar condition on \tilde k
instead of k is equivalent to it. Such an assumption means that the operator associated
with k maps a vector with a finite chaos expansion into a "distribution" whose formal
chaos expansion (not necessarily convergent) is given by L² coefficients. If we want a
true vector in Fock space, the condition becomes square integrability with fixed j and k,
but summation over all i.

¹ It could still be weakened by localization to subsets of [0,t] for finite t.

Chapter V
Fock Space (2)
Multiple Fock Spaces
From the algebraic point of view, a multiple Fock space over ℋ is nothing but a
standard (symmetric or antisymmetric) Fock space over a direct sum of copies of ℋ, and
new definitions are not necessary in principle. However, this chapter contains important
new notation, and a few rather interesting constructions, like that of the "finite-
temperature" (= extremal universally invariant) representations of the CCR. In classical
probability, it corresponds to the passage from one-dimensional to several-dimensional
Brownian motion, necessary for Ito's theory of stochastic differential equations.
We start the theory in a somewhat naive way, replacing one-dimensional by finite-
dimensional Brownian motion; we discuss multiple integrals and creation-annihilation
operators in §1, number and exchange operators in §2. Then we discuss multiple Fock
space in a basis-free notation, including infinite multiplicity. Additional results on the
complex Brownian motion Fock space can be found in Appendix 5.

§1. MULTIDIMENSIONAL STOCHASTIC INTEGRALS

Multiple integral representations


1 From the probabilistic point of view, Fock space over L2(]R+, C d) call be interpreted
as the L 2 space of a d-dimensional standard Brownian motion, and the case d = 2 has
special interest because of the additional structure of complex Brownian motion. The
integer d is called the multiplicity. The case of infinite (countable) multiplicity is better
described as a Fock space over L2(IR+,~), where ]C is some separable Hilbert space,
which we call the multiplicity space. We mostly consider the case d < oo in sections
1-2. The notations are the same in the infinite dimensional case, once an orthonormal
basis has been chosen.
Though it has an additional structure, multiple Fock space is a Fock space, and in
particular it has a vacuum vector 1, which spans the chaos of order 0. If the multiplicity
d is finite, multiple Fock space is the tensor product of d copies of simple Fock space,
and 1 is the tensor product of the individual vacuum vectors.
Fock space of finite multiplicity $d$ can also be interpreted as Fock space over $L^2(E,\mathcal{E},\mu)$, where $\mu$ is the product measure on $E=\mathbb{R}_+\times\{1,\dots,d\}$ of Lebesgue measure on $\mathbb{R}_+$ and the counting measure. Our favourite interpretation of Fock space
104 V. Multiple Fock spaces

is that of Guichardet, in which $\Gamma(\mathcal{H})$ is identified with the $L^2$ space of $\mathcal{P}$, the space of all finite subsets of $E$. A finite set $B\subset E$ with $k$ elements consists of $k$ pairs $(s_i,\alpha_i)$ ($s_i\in\mathbb{R}_+$, $\alpha_i\in\{1,\dots,d\}$) and, since the measure is non-atomic, we may assume the $s_i$ are all different. Then we may describe $B$ in two ways :
1) We can give ourselves a subset of $\mathbb{R}_+$ with $k$ elements, $A=\{s_1<\dots<s_k\}$, and a mapping $\alpha$ from $\{1,\dots,k\}$ to $\{1,\dots,d\}$. This gives rise on Wiener space to stochastic integrals of the following form

(1.1) $\displaystyle J_\alpha(f)=\int_{s_1<\dots<s_k} f(s_1,\alpha_1,\dots,s_k,\alpha_k)\,dX^{\alpha_1}_{s_1}\dots dX^{\alpha_k}_{s_k}$

with respect to $d$-dimensional Brownian motion. The $k$-th chaos contains $d^k$ types of stochastic integrals, corresponding to all possible mappings $\alpha$.
2) The set $B$ is completely described by its projection $A$, and the partition of $A$ into $d$ (possibly empty) subsets $A_1=\{\alpha=1\},\dots,A_d=\{\alpha=d\}$. For simplicity we assume here that $d=2$. Then we are led to stochastic integrals of the following kind, setting $A_1=\{s_1,\dots,s_m\}$ and $A_2=\{t_1,\dots,t_n\}$

(1.2) $\displaystyle J_{m,n}(f)=\int_{s_1<\dots<s_m,\;t_1<\dots<t_n} f(s_1,\dots,s_m,t_1,\dots,t_n)\,dX^1_{s_1}\dots dX^1_{s_m}\,dX^2_{t_1}\dots dX^2_{t_n}\;.$

This is not a strictly adapted integral like (1.1). To give it a precise meaning, a probabilist would decompose it into a sum of adapted integrals (1.1) corresponding to all the possible mutual orderings of the $s_i$'s and $t_j$'s. Thus the representation (1.2) is more economical than (1.1), as it groups all the integrals (1.1) which contain $m$ times $X^1$ and $n$ times $X^2$. Even in the antisymmetric case, it is always assumed that anticommutation takes place only between components $dX^\alpha_s$ and $dX^\alpha_t$ of any given Brownian motion, different components $dX^\alpha_s$ and $dX^\beta_t$ commuting. Then one can still move $dX^1_s$ components across $dX^2_t$ components to perform the grouping (1.2).
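The grouping performed by (1.2) is easy to count (our illustration, not in the text): for $d=2$ the $k$-th chaos contains $2^k$ ordered integrals (1.1), and the family with $m$ occurrences of $X^1$ and $n=k-m$ of $X^2$ collects $\binom{k}{m}$ interleavings of the $s_i$ and $t_j$.

```python
from math import comb

def ordered_types(k, d=2):
    # number of ordered stochastic integrals (1.1) in the k-th chaos:
    # one for each mapping alpha from {1,...,k} to {1,...,d}
    return d ** k

def grouped_types(k):
    # representation (1.2) groups them by (m, n) with m + n = k;
    # each group (m, n) collects comb(k, m) interleavings of the s_i and t_j
    return sum(comb(k, m) for m in range(k + 1))

for k in range(8):
    assert ordered_types(k) == grouped_types(k)
```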
The multiple integral (1.2) has also an unordered version, with a function $f$ symmetric (or antisymmetric) in each group of variables, and a factor $1/m!\,n!$ in front. We shall give to (1.2) a preference over (1.1), though both are necessary (to compute the chaos expansion of solutions of a stochastic differential equation, for instance, only the form (1.1) can be used).
WARNING. The usual notation of probability theory leads us to denote the components of Brownian motion as $dX^\alpha_t$ with an upper index. Because of the diagonal summation convention, the general use of indexes gets reversed. We do not interpret this as vectors becoming lines instead of columns, but as a modification of the usual convention that the line index is the upper one. In matrix products, the summation thus takes place on the descending diagonal, etc. Each time this happens, we will recall this "warning".
Let us now give the shorthand notation for expansions (1.2). For typographical simplicity we take $d=2$ and write $X,Y$ instead of $X^1,X^2$. Then a random variable $f$ is expanded in the representation (1.2) as

(1.3) $\displaystyle f=\int_{\mathcal{P}\times\mathcal{P}} \widehat f(A,B)\,dX_A\,dY_B$
1. Multidimensional integrals 105

with a norm given by

(1.4) $\displaystyle \|f\|^2=\int_{\mathcal{P}\times\mathcal{P}} |\widehat f(A,B)|^2\,dA\,dB\;.$

As usual, we are loose in our use of hats to distinguish $f$ from its chaotic transform $\widehat f$. For each component $X^\alpha$ there is a creation-annihilation pair $a^{+\alpha}$, $a^{-\alpha}$ ($b^{\pm\alpha}$ in the antisymmetric case). This notation is clear enough : an operator on simple Fock space can be cloned into multiple Fock space so that it acts only on the component of order $\alpha$, and then an upper index $\alpha$ is ascribed to it.
We will use in section 3 the remark (which we learnt from a paper of Belavkin) that Fock space of possibly infinite multiplicity can be described as a continuous sum of Hilbert spaces over $\mathcal{P}$, the usual space of all finite subsets of $\mathbb{R}_+$

(1.5) $\displaystyle \Phi=\int_{\mathcal{P}} \mathcal{K}_A\,dA$

where the "fibre" $\mathcal{K}_A$ is equal to $\mathcal{K}^{\otimes n}$ for $|A|=n$ (and $\mathcal{K}$ is the multiplicity space). The general notion of continuous sum is not necessary here : since $\mathcal{K}_A$ is constant for $A\in\mathcal{P}_n$, what we have is simply $\bigoplus_n L^2(\mathcal{P}_n)\otimes\mathcal{K}^{\otimes n}$. To see why, choose an orthonormal basis of $\mathcal{K}$ and check that one gets exactly the appropriate stochastic integrals.
Some product formulas

2 The general discussion of operators on multiple Fock spaces will be postponed to the next section. To get used to the shorthand notation (1.4), let us give some multiplication formulas for two dimensional Brownian motion. In this case we also have a very useful complex Brownian motion representation, $Z=(X+iY)/\sqrt2$, $\overline Z=(X-iY)/\sqrt2$, so that

(2.1) $\displaystyle f=\int_{\mathcal{P}\times\mathcal{P}} \widehat f(A,B)\,d\overline Z_A\,dZ_B$

with the same isometry property as (1.4) (without the $\sqrt2$ factors, the measure would be $2^{|A|+|B|}\,dA\,dB$). Note that if we restrict ourselves to stochastic integrals $\widehat f(A,\emptyset)$ or $\widehat f(\emptyset,B)$ we get two copies of one-dimensional Fock space.
For the product $h=fg$ of two random variables in the representation (1.3), we have

(2.2) $\displaystyle \widehat h(A,B)=\int \sum_{R+S=A,\;T+U=B} \widehat f(R+M,T+N)\,\widehat g(S+M,U+N)\,dM\,dN\;.$
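Formula (2.2) can be sanity-checked on a discrete "toy Fock space" (our illustration, not in the text): replace the two Brownian motions by finite families of independent symmetric Bernoulli variables $X_i,Y_j$ with $X_i^2=Y_j^2=1$, so that the integrals $dM\,dN$ become sums over finite subsets and the chaotic multiplication reduces to symmetric differences of index sets.

```python
import itertools, random

X_SITES = (0, 1, 2)   # toy sites for the X Brownian motion
Y_SITES = (0, 1)      # toy sites for Y

def subsets(sites):
    return [frozenset(c) for r in range(len(sites) + 1)
            for c in itertools.combinations(sites, r)]

def random_vector():
    # chaos coefficients f^(A, B), as in (1.3), on the toy model
    return {(A, B): random.uniform(-1, 1)
            for A in subsets(X_SITES) for B in subsets(Y_SITES)}

def product_direct(f, g):
    # X_A X_A' = X_{A symdiff A'}, since X_i^2 = 1 in the toy model
    h = {}
    for (A1, B1), c1 in f.items():
        for (A2, B2), c2 in g.items():
            key = (A1 ^ A2, B1 ^ B2)
            h[key] = h.get(key, 0.0) + c1 * c2
    return h

def product_formula(f, g):
    # discrete version of (2.2): the integrals over M, N become sums
    h = {}
    for A in subsets(X_SITES):
        for B in subsets(Y_SITES):
            total = 0.0
            for R in subsets(tuple(A)):
                S = A - R
                for T in subsets(tuple(B)):
                    U = B - T
                    for M in subsets(tuple(set(X_SITES) - A)):
                        for N in subsets(tuple(set(Y_SITES) - B)):
                            total += f[(R | M, T | N)] * g[(S | M, U | N)]
            h[(A, B)] = total
    return h

random.seed(0)
f, g = random_vector(), random_vector()
h1, h2 = product_direct(f, g), product_formula(f, g)
assert all(abs(h1.get(k, 0.0) - h2[k]) < 1e-9 for k in h2)
```

The bijection behind the check is the one used in the text: writing $A_1=R+M$, $A_2=S+M$ with $M=A_1\cap A_2$ identifies each pair of chaos terms with one term of (2.2).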

On the other hand, the infinitesimal multiplication rule for the Wiener product in the complex representation is given by

(2.3) $dX_s^2=dY_s^2=ds\;,\quad dX_s\,dY_s=0\;;\qquad d\overline Z_s^2=dZ_s^2=0\;,\quad d\overline Z_s\,dZ_s=ds\;,$

and therefore in the representation (2.1) the formula gets twisted

(2.4) $\displaystyle \widehat h(A,B)=\int \sum_{R+S=A,\;T+U=B} \widehat f(R+M,T+N)\,\widehat g(S+N,U+M)\,dM\,dN\;.$

As in the case of simple Fock space, we may get a whole family of associative multiplications by means of a weight $\sigma^{2(|M|+|N|)}$, in either one of the two formulas, obtaining in particular the Wick product for $\sigma=0$. But here we can do something more general, inserting a weight of the form $a^{|M|}\,b^{|N|}$. If we do this in formula (2.2), we get for $a=1$, $b=0$ a Wiener-Wick product which corresponds to a classical differential geometric construction : the symmetric multiplication of fields of covariant symmetric tensors on a manifold (see [DMM] Chap. XXI, n° 26). Doing the same in formula (2.4), the products corresponding to $a\neq b$ are non-commutative, and for $a=0$, $b=1$ we get exactly the composition formula for kernels with two arguments.
On the other hand, there also exist interesting products which are, for instance, Wiener products in the variable $X$, and Grassmann or Clifford in the variable $Y$. The Wiener-Grassmann product is an infinite dimensional analogue of the exterior product of differential forms on a manifold (some Brownian motions providing the coefficients of the form, and some the anticommuting differential elements). Let us describe rapidly this situation on double Fock space.
A functional in the double Wiener space generated by $X,Y$ has a representation

$\displaystyle \theta(\omega,\omega')=\int_{t_1<\dots<t_p} \widehat\theta(\omega,t_1,\dots,t_p)\,dY_{t_1}(\omega')\wedge\dots\wedge dY_{t_p}(\omega')$  (a)

$\displaystyle \phantom{\theta(\omega,\omega')}=\frac1{p!}\int \widehat\theta(\omega,t_1,\dots,t_p)\,dY_{t_1}(\omega')\wedge\dots\wedge dY_{t_p}(\omega')$  (b)

In version (a), we integrate over the increasing simplex, according to the standard usage in finite dimensional exterior algebra. In version (b), the coefficient $\widehat\theta(\,\cdot\,,t_1,\dots,t_p)$ is assumed to be antisymmetric in its last variables. Since $\theta$ itself has a chaotic expansion, the differential form appears as a multiprocess $\Theta=\int \widehat\theta(A,B)\,dX_A\,dY_B$, with $\widehat\theta(A,B)=0$ unless $|B|=p$. This restriction is unimportant, and one often considers differential forms which are not homogeneous. There is a natural Hilbert space structure on the space of all differential forms, with squared norm $\int |\widehat\theta(A,B)|^2\,dA\,dB$.
As long as we integrate on the simplexes we cannot distinguish whether the second Fock space is symmetric or antisymmetric, and our multiprocesses thus are the same as in [DMM] Chap. XXI, n° 26 : the difference lies in the multiplications and differential operators we use. On ordinary double Fock space we had Wiener-Wick and Wiener-Wiener products, iterated gradients and divergences. Here we have Wiener-Grassmann and Wiener-Clifford products, exterior differential $d$ and codifferential $\delta$.
Differential forms and the corresponding products are very interesting objects, even
for probabilists : see in Appendix 5 the use Le Jan [LeJ1] makes of "supersymmetric"
computations in the theory of local times.

Some "non-Fock" representations of the CCR

The end of this section has a great philosophical interest, since it exhibits a continuous family of inequivalent representations of the (infinite dimensional) CCR, a phenomenon which was felt very puzzling when it was first discovered. On the other hand, it is used nowhere in these notes and can be omitted altogether. The concrete representations we construct are the extremal universally invariant representations of Segal [Seg2]. They correspond in the elementary Fock space case to Gaussian laws of non-minimal uncertainty (Chap. III, §1 subs. 9 and §3 subs. 6). By analogy with the last mentioned construction, they are called finite temperature representations by Hudson-Lindsay, whose papers [HuL1] [HuL2] we are going to follow.
3 We discuss the case of dimension $d=2$, and use the following picturesque language (without physical content) to help intuition : we call $X$ the particle and $Y$ the antiparticle Brownian motions, and denote by $a'^{\pm}_t$ the particle creation and annihilation operators, by $a''^{\pm}_t$ the antiparticle ones. We put

(3.1) $a^+_t=\xi\,a'^+_t+\overline\eta\,a''^-_t\;;\qquad a^-_t=\overline\xi\,a'^-_t+\eta\,a''^+_t$

where $\xi$ and $\eta$ are two complex numbers (the creation of an antiparticle counts as an annihilation). The operators $a^+_t$ and $a^-_t$ are adjoint to each other on the dense domain consisting of the algebraic sum of the chaos spaces (one could also use coherent vectors : this will be discussed later). The usual Ito table gives us the formal multiplication rules

(3.2) $(da^+_t)^2=0=(da^-_t)^2\;;\quad da^+_t\,da^-_t=|\eta|^2\,dt\;,\quad da^-_t\,da^+_t=|\xi|^2\,dt\;.$

In particular, $[\,da^-_t,da^+_t\,]=(|\xi|^2-|\eta|^2)\,dt$, and if $|\xi|^2-|\eta|^2=1$ we get a representation of the CCR — this formal computation will be justified in the next subsection. If we had started with fermion creation and annihilation operators, we would have a representation of the CAR provided $|\xi|^2+|\eta|^2=1$.
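The formal rules (3.1)-(3.2) can be tested in finite dimensions (a sketch under our own conventions, not from the text): truncating each oscillator at level $n$, the Bogoliubov-type combination below satisfies $[b^-,b^+]=(|\xi|^2-|\eta|^2)\,I$ away from the truncation edge; the CCR normalization then corresponds to the choice $|\xi|^2-|\eta|^2=1$.

```python
import numpy as np

n = 6                                 # truncation level of each oscillator
xi, eta = 1.3 + 0.4j, 0.7 - 0.2j      # any values work; here |xi|^2-|eta|^2 = 1.32

# truncated creation/annihilation matrices: a_minus e_k = sqrt(k) e_{k-1}
a_minus = np.diag(np.sqrt(np.arange(1, n)), k=1)
a_plus = a_minus.conj().T
I = np.eye(n)

# (3.1): the particle factor carries a', the antiparticle factor carries a''
b_plus = xi * np.kron(a_plus, I) + np.conj(eta) * np.kron(I, a_minus)
b_minus = np.conj(xi) * np.kron(a_minus, I) + eta * np.kron(I, a_plus)

comm = b_minus @ b_plus - b_plus @ b_minus
expected = (abs(xi) ** 2 - abs(eta) ** 2) * np.eye(n * n)

# the truncation spoils the CCR only at the edge states; compare away from them
keep = [i1 * n + i2 for i1 in range(n - 1) for i2 in range(n - 1)]
assert np.allclose(comm[np.ix_(keep, keep)], expected[np.ix_(keep, keep)])
```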
If instead of using creation and annihilation operators we use the "field variables" (for $\varepsilon={}'\,,{}''$)

$dQ^\varepsilon_t=da^{\varepsilon+}_t+da^{\varepsilon-}_t\;,\qquad dP^\varepsilon_t=i\,(da^{\varepsilon+}_t-da^{\varepsilon-}_t)$

and define similarly $dQ_t=da^+_t+da^-_t$, we have a representation with real coefficients

$dQ_t=\alpha\,dQ'_t+\beta\,dQ''_t\;,\qquad dP_t=\gamma\,dP'_t+\delta\,dP''_t\;,$

with $\alpha\gamma-\beta\delta=1$. This corresponds exactly to what we did in Chap. III, §2 subs. 3, with the difference that here we haven't a Stone-von Neumann theorem to return from double Fock space to simple Fock space, or even to a direct sum of simple Fock spaces. On double Fock space, the operators $Q_t$ correspond to multiplication by the random variables $(\xi+\overline\xi)X_t+(\eta+\overline\eta)Y_t$, which in the vacuum state constitute a Brownian motion (not a standard one in general).
4 Let us define rigorously the above field operators, as we did in the simple Fock space case, with the help of Weyl operators. Everything we are doing here applies, not only to Fock space over $L^2(\mathbb{R}_+)$, but to any Fock space over a Hilbert space $\mathcal{H}$ equipped with a complex conjugation $h\mapsto\overline h$.

We take two copies $\Gamma^+$ and $\Gamma^-$ of simple Fock space, and denote by $\Gamma'$ their tensor product, which is double Fock space. We define its vacuum vector $\mathbf 1$ as $1^+\otimes 1^-$. If $U$ is an operator on simple Fock space, we can have it act on double Fock space $\Gamma'$ either as $U^+=U\otimes I$, or as $U^-=I\otimes U$. In particular, we define on double Fock space Weyl operators

(4.1) $W_h=W(\xi h)\otimes W(-\overline\eta\,\overline h)=W^+(\xi h)\,W^-(-\overline\eta\,\overline h)\;.$

where $W(h)$ ($h\in L^2(\mathbb{R}_+)$) is a standard Weyl operator on $\Gamma$. These operators are unitary, and satisfy the Weyl commutation relation

$W_h\,W_k=e^{-i\,\mathrm{Im}\,<h,k>}\,W_{h+k}\;.$

Thus we have a representation of the CCR on double Fock space. Then we may define rigorously creation and annihilation operators, by the same rule we used in the simple Fock space representation : differentiation of the formula

$W_{zh}=e^{\,z\,a^+_h-\overline z\,a^-_h}$

(on a domain which may consist of the finite sums of chaos spaces, or of tensor products of Maassen test functions...), and (4.1) has been chosen so that the CCR creation-annihilation pair is the same as (3.1) — this explains in particular the sign $-\overline\eta$ in the second factor of (4.1).
To study this representation, we define exponential vectors by

(4.2) $\mathcal{E}(f)=\varepsilon(\xi f)\otimes\varepsilon(-\overline\eta\,\overline f)$

so that we have, after an easy computation

(4.3) $<\mathcal{E}(f),\mathcal{E}(g)>=e^{\,a<f,g>+b<g,f>}$

(4.4) $W_h\,\mathcal{E}(f)=e^{-c\|h\|^2/2-a<h,f>-b<f,h>}\,\mathcal{E}(f+h)$

where we have put $a=|\xi|^2$, $b=|\eta|^2$, $c=a+b$. Note that $c\geq1$ since we assumed $a-b=1$. If we take $f=0$ in (4.4), we get the normalized coherent vectors (Weyl operators acting on the vacuum).
Let us prove that the new exponential vectors $\mathcal{E}(f)$ generate a dense subspace. If we consider $\mathcal{E}(f)$ as a random variable, we may interpret it as the stochastic exponential of a complex martingale

$\displaystyle \int_0^\infty \xi f_s\,dX_s-\overline\eta\,\overline f_s\,dY_s\;.$

Let us choose functions $f$ of the form $e^{i\theta}g$, $g$ being real, and $\theta$ such that the complex numbers $\xi e^{i\theta}=u e^{ik}$ and $-\overline\eta e^{-i\theta}=v e^{ik}$ have the same argument. Then the exponential vectors of this form can be written as a constant times $\exp(e^{ik}\int g_s\,dB_s)$, where $B=uX+vY$ is a non-normalized Brownian motion, and taking (complex) linear combinations of such exponentials we can approximate in $L^4$ any given bounded random variable $\Phi$ measurable w.r.t. the $\sigma$-field generated by $B$. Similarly, we choose $\theta'$ so that the two complex numbers above now have arguments differing by $\pi$, and exponential vectors of this form correspond to stochastic integrals $\exp(e^{ik'}\int g_s\,dB'_s)$, where $B'=uX-vY$, and we can approximate in $L^4$ bounded r.v.'s $\Phi'$ in the $\sigma$-field of $B'$. Since products of exponential vectors are exponential vectors up to a multiplicative constant, we see that complex linear combinations of exponential vectors approximate $\Phi\Phi'$ in $L^2$, and therefore exponential vectors are total in $L^2$. As for the density result we have used, it is proved as follows : it is sufficient to approximate r.v.'s of the form

$h_1(B_{t_1})\,h_2(B_{t_2}-B_{t_1})\dots h_n(B_{t_n}-B_{t_{n-1}})\;,$

and then, because of the independence of increments, we are reduced to proving that given a Gaussian r.v. $B$ generating a $\sigma$-field $\mathcal{B}$, exponentials $\exp(\lambda e^{ik}B)$ with $\lambda$ real are dense in $L^4(\mathcal{B})$. This is deduced from the fact that $e^{zB}$ is an entire function in every $L^p$ space, for $p<\infty$ : a random variable in the conjugate $L^q$ space, which is orthogonal to the given family of exponentials, must be orthogonal to all exponentials, and therefore is zero by the uniqueness of Fourier transforms.
Thus double Fock space over a Hilbert space $\mathcal{H}$ has a description very similar to that of simple Fock space in Chap. IV, §1, subs. 3 : it is a Hilbert space $\Gamma$ equipped with an exponential mapping $\mathcal{E}$ from $\mathcal{H}$ to $\Gamma$, whose image generates $\Gamma$, and which satisfies (4.3) (simple Fock space is the limiting case $a=1$, $b=0$). There are several important differences, however : in the case of simple Fock space, the Weyl representation can be shown to be irreducible. Here it is obviously reducible, because the operators

(4.5) $W'_f=W(-\eta\,\overline f)\otimes W(\overline\xi f)=W^+(-\eta\,\overline f)\,W^-(\overline\xi f)\;,$

commute with all Weyl operators (4.1). Note that one transforms (4.1) into (4.5) via the conjugation mapping $J(u\otimes v)=\overline v\otimes\overline u$ on $\Gamma$.
Let us denote by $\mathcal{W}$ ($\widetilde{\mathcal{W}}$) the von Neumann algebra generated by the Weyl operators (4.1) (resp. (4.5)). We proved above that the exponential vectors generate a dense space : in von Neumann algebra language one says that the vacuum vector $\mathbf 1$ is cyclic for $\mathcal{W}$ (and for $\widetilde{\mathcal{W}}$ by the same reasons). It is also cyclic for the commutant $\mathcal{W}'$ of $\mathcal{W}$, which obviously contains $\widetilde{\mathcal{W}}$ (one can prove that $\mathcal{W}'=\widetilde{\mathcal{W}}$ but we do not need this result). Now it is easy to prove that if a vector $\mathbf 1$ is cyclic for the commutant $\mathcal{W}'$ of a von Neumann algebra $\mathcal{W}$, then it is separating for $\mathcal{W}$, meaning that an operator $a\in\mathcal{W}$ satisfying $a\mathbf 1=0$ must be $0$ itself. This is very different from the situation of simple Fock space, in which many operators killed the vacuum (note that the new annihilation operators here do not kill the vacuum — however, this does not come within the preceding discussion, which concerns bounded operators). Having a cyclic and separating vacuum makes it easier to work with these "non-Fock" representations of the CCR than with simple Fock space. See the papers of Hudson-Lindsay [HuL].
As in the case of simple Fock space, one may define a Weyl representation of the whole group of rigid motions of $\mathcal{H}$. Apparently this has no special applications, but it is natural to wonder about the particular case of the number operator. As in the simple Fock space case, we define $\widetilde a_h$, for $h$ real, by

(4.6) $\exp(i\,\widetilde a_h)\,\mathcal{E}(f)=\mathcal{E}(e^{ih}f)=\varepsilon(\xi e^{ih}f)\otimes\varepsilon(-\overline\eta\,e^{-ih}\overline f)\;.$

Then it is easy to prove that $\widetilde a_h=a^{\circ+}_h-a^{\circ-}_h$, at least on a dense domain (as usual we leave aside the problem of essential selfadjointness on this domain). Thus the natural extension of the number operator is not positive : it rather represents a total charge. Also, the bounded operators (4.6) leave the vacuum state invariant, and therefore they cannot belong to the von Neumann algebra generated by the Weyl operators $W_h$, which admits the vacuum as a separating vector. In fact, several results show that a fully satisfactory number operator exists only for (direct sums of copies of) the simple Fock representation of the CCR. See Chaiken [Cha] and the historical discussion p. 231 in Bratteli-Robinson [BrR2].

5 Let us return to the Ito table (3.2), which we are going to transform into a finite multiplication formula.

First of all, we change slightly the representation of vectors as follows : instead of the two standard Brownian motions $(X_t)$ and $(Y_t)$, we use two non-normalized Brownian motions, more closely related to the creation and annihilation operators

(5.1) $Z^+_t=a^+_t\mathbf 1=\xi X_t\;,\qquad Z^-_t=a^-_t\mathbf 1=\eta Y_t\;.$

(the use of the letter $Z$ should create no confusion with the complex Brownian motion of subsection 2). Then a vector is represented as

(5.2) $\displaystyle f=\int \widehat f(A,B)\,dZ^+_A\,dZ^-_B$

with $a$, $b$ as in (4.4). Then we have

$da^+_t\,(dZ^+_A\,dZ^-_B)=dZ^+_{A+t}\,dZ^-_B+b\,dZ^+_A\,dZ^-_{B-t}\,dt$

$da^-_t\,(dZ^+_A\,dZ^-_B)=a\,dZ^+_{A-t}\,dZ^-_B\,dt+dZ^+_A\,dZ^-_{B+t}\;.$

The next step is to compute the effect of $da^+_S\,da^-_T$ on $dZ^+_A\,dZ^-_B$. Assuming that $S$ and $T$ are disjoint, we have

$\displaystyle da^+_S\,da^-_T\,(dZ^+_A\,dZ^-_B)=\sum_{S_1+S_2=S,\;T_1+T_2=T} dZ^+_{(A-T_1)+S_1}\,dZ^-_{(B+T_2)-S_2}\;a^{|T_1|}\,b^{|S_2|}\,dT_1\,dS_2\;.$

Then we consider an operator given by a kernel with two arguments

(5.3) $\displaystyle K=\int_{\mathcal{P}\times\mathcal{P}} K(S,T)\,da^+_S\,da^-_T$

(we assume that $K(S,T)$ vanishes if $S$ and $T$ are not disjoint), and we compute formally the action $g=Kf$ of $K$ on a vector $f$ given by (5.2). The result is (after some rearranging of terms, which by now must have become familiar to the reader)

(5.4) $\displaystyle \widehat g(A,B)=\int \sum_{R+S=A,\;T+U=B} K(R+M,T+N)\,\widehat f(S+N,U+M)\,a^{|M|}\,b^{|N|}\,dM\,dN\;,$
and the composition formula for two kernels $H=FG$ is

(5.5) $\displaystyle H(A,B)=\int \sum_{R+S=A,\;T+U=B} F(R+M,T+N)\,G(S+N,U+M)\,a^{|M|}\,b^{|N|}\,dM\,dN\;.$

If in formula (5.4) we take for $f$ the vacuum vector, we see that $\widehat g(A,B)=K(A,B)$. Otherwise stated, a kernel with two arguments is uniquely determined by its action on the vacuum. Then (5.4) and (5.5) appear as equivalent formulas, and (5.5) can also be interpreted as a multiplication formula between vectors, which is exactly the kind of associative product suggested after (2.4). As in the case of simple Fock space, these formal computations become rigorous under Maassen-like growth conditions on (test) vectors and (regular) kernels. It is clearly possible to construct an algebra of kernels with four arguments, containing the two commuting Weyl systems of double Fock space. We stop here the discussion, and refer the reader to the literature.

2. Exchange operators 111

6¹ The following remark is due to Parthasarathy [Par7] as an explanation to constructions of P. Major [Maj]. It somehow unifies the real and complex Brownian motions, and their corresponding Fock spaces.
Consider a nice measure space $(E,\mathcal{E},\mu)$ provided with a measurable and measure preserving involution $x\mapsto x'$. Then $E$ can be decomposed as $G+F+G'$, where $F$ is the set of fixed points of the involution, and $G$ contains exactly one element from each pair $(x,x')$ of conjugate points — all three sets being measurable. For each function $f$ we define $\widetilde f$ by $\widetilde f(x)=f(x')$.
Our purpose is to construct a complex valued Gaussian field (not necessarily complex-Gaussian : simply two-dimensional Gaussian) $f\mapsto\xi(f)$ for $f\in L^2(\mu)$ with the following properties :

(6.1) $\displaystyle \xi(\widetilde f)=\overline{\xi(f)}\;,\qquad \mathbb{E}\,[\,\xi(g)\,\overline{\xi(f)}\,]=\int g(x)\,\overline{f(x)}\,\mu(dx)\;.$

It is clearly sufficient to perform the construction when $f$ is real.


Uniqueness. Consider

$\xi_\pm(f)=\varepsilon_\pm\,(\xi(f)\pm\xi(\widetilde f))$

with $\varepsilon_+=1/2$, $\varepsilon_-=i/2$. Then $\xi_+$ and $\xi_-$ are complex linear combinations of complex valued Gaussian r.v.'s, and therefore are jointly complex valued Gaussian. On the other hand they are real valued. Thus they are jointly real Gaussian, and it is sufficient to know their real covariance, for which $\xi_+$ and $\xi_-$ are orthogonal.
Existence. We consider two elementary cases. 1) The involution is trivial, then $\xi(f)$ is real, and we have the model

$\displaystyle \xi(f)=\int_E f(s)\,dX_s$

(stochastic integral). 2) $E=G+G'$ (a copy of $G$) and the involution is $(x,y')\mapsto(y,x')$. Then we take a complex-Gaussian process $Z_s$ over $G$ and put for $f=g\oplus h'$

$\displaystyle \xi(f)=\int_G g(s)\,dZ_s+\int_G h(s)\,d\overline Z_s\;.$
The general case then follows using the decomposition $E=G+F+G'$.

In most cases, the set of fixed points has measure $0$, and we are reduced to the complex-Gaussian case, with a subtle difference : the splitting $E=G+G'$ isn't intrinsic, and amounts to an arbitrary choice of $dZ_x$ and $d\overline Z_x$ for every pair $(x,x')$.
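The existence argument can be made concrete on a finite toy measure space (our illustration; the sets `PAIRS` and `FIXED` below are arbitrary choices): representing $\xi(f)$ by its coefficients over iid real standard normals, both properties (6.1) reduce to plain linear algebra.

```python
import numpy as np

# toy space E = {0,...,5} with counting measure; the involution fixes
# F = {4, 5} and swaps the pairs (0, 1) and (2, 3), so G = {0, 2}, G' = {1, 3}
PAIRS = [(0, 1), (2, 3)]
FIXED = [4, 5]
E = 6

def involution(x):
    for a, b in PAIRS:
        if x == a: return b
        if x == b: return a
    return x

# represent xi(f) by its coefficients over iid real standard normals:
# one normal per fixed point, two (U_x, V_x) per pair, Z_x = (U_x + iV_x)/sqrt(2)
def coefficients(f):
    c = []
    for a, b in PAIRS:
        z, zbar = f[a], f[b]   # f = g on G plus h' on G', as in the text
        # z*Z + zbar*conj(Z) = ((z+zbar)/sqrt2)*U + (i(z-zbar)/sqrt2)*V
        c.append((z + zbar) / np.sqrt(2))
        c.append(1j * (z - zbar) / np.sqrt(2))
    for x in FIXED:
        c.append(f[x])         # real Brownian part on the fixed points
    return np.array(c)

def covariance(g, f):
    # E[xi(g) conj(xi(f))] for xi = sum_i c_i N_i with iid real standard N_i
    return np.sum(coefficients(g) * np.conj(coefficients(f)))

rng = np.random.default_rng(0)
f = rng.normal(size=E) + 1j * rng.normal(size=E)
g = rng.normal(size=E) + 1j * rng.normal(size=E)
assert np.isclose(covariance(g, f), np.sum(g * np.conj(f)))

# for real f, the conjugation property in (6.1) holds coefficient-wise
f_real = rng.normal(size=E)
f_tilde = np.array([f_real[involution(x)] for x in range(E)])
assert np.allclose(coefficients(f_tilde), np.conj(coefficients(f_real)))
```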

¹ This subsection is an addition.



§2. NUMBER AND EXCHANGE OPERATORS

1 It was not realized until recently that the number operator of simple Fock space comes on the same footing as the ordinary creation-annihilation operators. The situation is even more interesting with the rich family of "number operators" of a Fock space of (finite) multiplicity $d$. In this section, we are going to discuss these operators, following the work of Evans and Hudson [EvH1], Evans [Eva].

Consider first a general Fock space over $\mathcal{H}$. We recall that a bounded operator $A$ on $\mathcal{H}$ has an extension $\lambda(A)$ to the Fock space $\Gamma(\mathcal{H})$, as a differential second quantization. In particular, if $\mathcal{H}=L^2(\mathbb{R}_+)\otimes\mathcal{K}$ and $M$ is a bounded operator on $\mathcal{K}$, we put $M_t=I_{[0,t]}\otimes M$ (where the indicator is interpreted as a multiplication operator on $L^2(\mathbb{R}_+)$) and define

$a^\circ_t(M)=\lambda(M_t)\;.$

The following operators will play a particularly important role : we choose an orthonormal basis $e^\alpha$ of the multiplicity space $\mathcal{K}$ (the upper index corresponds to the "warning" in §1 subs. 1) and denote by $e_\alpha$ the dual basis of "bras". Then we denote by $I^\beta_\alpha$ the operator $I^\beta_\alpha=|e^\beta><e_\alpha|$, and we put

$a^\circ_t(I^\beta_\alpha)=a^\beta_\alpha(t)\;.$

The effect of $a^\beta_\alpha(t)$ on a continuous basis element $dX^{\alpha_1}_{s_1}\dots dX^{\alpha_k}_{s_k}$ produces the finite sum

$\displaystyle \sum_{s_i<t,\;\alpha_i=\alpha} dX^{\alpha_1}_{s_1}\dots dX^{\beta}_{s_i}\dots dX^{\alpha_k}_{s_k}\;.$

It is simpler to describe the effect on the continuous basis element of the differential $da^\beta_\alpha(t)$ : if $t$ occurs among the $s_i$ and the corresponding $\alpha_i$ is $\alpha$, the factor $dX^\alpha_t$ is replaced by $dX^\beta_t$. Otherwise the basis element is mapped to $0$.

The diagonal operator $da^\alpha_\alpha(t)$ is the number operator corresponding to the component $dX^\alpha$ (the "trace" $\sum_\alpha da^\alpha_\alpha(t)$ is the total number operator) while the operators $da^\beta_\alpha(t)$ with $\alpha\neq\beta$ are exchange operators. In the one dimensional case, there was only one operator $da^1_1(t)=da^\circ_t$.

The content of the above computation can be subsumed as $da^\beta_\alpha(t)\,dX^\gamma_t=\delta^\gamma_\alpha\,dX^\beta_t$, from which we deduce a part of the "Ito table" for multiple Fock space

(1.1) $da^\beta_\alpha(t)\,da^\delta_\gamma(t)=\delta^\delta_\alpha\,da^\beta_\gamma(t)\;.$

On the other hand, let us follow the notation of Evans and put $dX^0_t=dt$ (or rather $dt\,\mathbf 1$, an element of the chaos of order $0$). Since the creation operators transform the vacuum into an element of the first chaos, it seems fit to denote them by $da^\alpha_0(t)$, and in the same way $da^0_\alpha(t)$ are annihilation operators. Then the rules for annihilation operators become

$da^0_\alpha(t)\,dX^\gamma_t=\delta^\gamma_\alpha\,dX^0_t\;,\qquad da^0_\alpha(t)\,dX^0_t=0\;.$

For creation operators, they become

$da^\alpha_0(t)\,\mathbf 1=dX^\alpha_t\;,\qquad da^\alpha_0(t)\,dX^\gamma_t=0\;.$

The last equation is also valid for $\gamma=0$, since $dX^\gamma_t\,dt$ is counted as $0$. Let us put $da^0_0(t)=I\,dt$. Then we have from the usual Ito table for creation and annihilation operators

$da^0_\alpha(t)\,da^\beta_0(t)=\delta^\beta_\alpha\,da^0_0(t)$

and it turns out that, if we mix creation, annihilation and number/exchange operators, relation (1.1) is true for all indices $\alpha,\beta$ including $0$, except for one relation :

$da^\beta_0(t)\,da^0_\alpha(t)=0\neq\delta^0_0\,da^\beta_\alpha(t)\;.$

To get the correct formula, we introduce the Evans delta $\hat\delta^\sigma_\rho=\delta^\sigma_\rho$ unless $\rho=\sigma=0$, in which case $\hat\delta^0_0=0$, and we have the complete Ito table for multiple Fock space¹

(1.2) $da^\nu_\mu(t)\,da^\sigma_\rho(t)=\hat\delta^\sigma_\mu\,da^\nu_\rho(t)\;.$

If we compare this notation with that of simple Fock space, we see that the four basic differential operators $I\,dt$, $da^+_t$, $da^-_t$, $da^\circ_t$ have been replaced by four sets of operators : again $da^0_0(t)=I\,dt$ (scalar), the two dual vectors $da^\alpha_0(t)$, $da^0_\alpha(t)$ of creation and annihilation operators, and the matrix of the number/exchange operators $da^\beta_\alpha(t)$.

From now on we adopt the Evans notation, with the convention that indexes $\alpha,\beta,\gamma$ from the beginning of the Greek alphabet take only the values $1,\dots,d$, while indexes $\rho,\sigma,\dots$ are allowed the additional value $0$.
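The table (1.2) can be verified mechanically by encoding each $da^\rho_\sigma$ as a matrix unit in $M_{d+2}$, with rows and columns indexed by $-$, $1,\dots,d$, $+$ (a standard device in this circle of ideas; the particular index maps $g,h$ below are our own choice for the check, not notation from the text).

```python
import numpy as np

d = 3
dim = d + 2            # index set: "-" (position 0), 1..d, "+" (position d+1)

def unit(i, j):
    m = np.zeros((dim, dim))
    m[i, j] = 1.0
    return m

# encode da^rho_sigma as the matrix unit E[h(rho), g(sigma)], where the
# column map g sends 0 to "+" and the row map h sends 0 to "-"
def g(sigma): return d + 1 if sigma == 0 else sigma
def h(rho):   return 0 if rho == 0 else rho

def da(rho, sigma):
    return unit(h(rho), g(sigma))

def evans_delta(sigma, mu):
    # the Evans delta agrees with the Kronecker delta except hat-delta^0_0 = 0
    return 0 if mu == sigma == 0 else int(mu == sigma)

# (1.2): da^nu_mu da^sigma_rho = hat-delta^sigma_mu da^nu_rho, all index values
for nu in range(d + 1):
    for mu in range(d + 1):
        for sigma in range(d + 1):
            for rho in range(d + 1):
                lhs = da(nu, mu) @ da(sigma, rho)
                rhs = evans_delta(sigma, mu) * da(nu, rho)
                assert np.array_equal(lhs, rhs)
```

The single exceptional relation (creation followed by annihilation) corresponds to the only pair of mapped indices that fails to match: $g(0)=+$ while $h(0)=-$.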
EXAMPLE. The main application of the preceding discussion concerns the Evans-Hudson theory of quantum diffusions, but let us illustrate it by the description of a continuous spin field over the line, an object often mentioned in the physics literature, but of which I have never seen a complete mathematical definition. We take $d=2$, and define

(1.3) $da_x(t)=da^2_1(t)+da^1_2(t)\;,\quad da_y(t)=i\,(da^2_1(t)-da^1_2(t))\;,\quad da_z(t)=da^1_1(t)-da^2_2(t)\;.$

Then we have $d[\,a_x,a_y\,]=2i\,da_z$, etc. Then "spin kernels" may be constructed as operators acting on a domain of test functions, etc. As usual, the order structure of $\mathbb{R}_+$ is not essential, and the construction can be extended to any space $L^2(\mu)$ of a non-atomic measure.
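Under the matrix-unit reading $da^\beta_\alpha\leftrightarrow|e^\beta><e_\alpha|$ suggested by (1.1), the combinations (1.3) become the Pauli matrices, and the commutation rule can be checked directly (our illustration; zero-based indices stand for the basis $e^1,e^2$).

```python
import numpy as np

# matrix unit |e_i><e_j|, i.e. 1 at row i, column j
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]

# (1.3) under the matrix-unit picture of the exchange differentials
ax = E(1, 0) + E(0, 1)          # da_x ~ da^2_1 + da^1_2   (sigma_x)
ay = 1j * (E(1, 0) - E(0, 1))   # da_y ~ i(da^2_1 - da^1_2)  (sigma_y)
az = E(0, 0) - E(1, 1)          # da_z ~ da^1_1 - da^2_2   (sigma_z)

comm = ax @ ay - ay @ ax
assert np.allclose(comm, 2j * az)
```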

¹ Because of our conventions (see the "warning" in §1, subs. 1) the index pair affected by $\hat\delta$ is not the same as for matrix units in linear algebra.

Kernel calculus on multiple Fock space

2 The following subsections may be omitted at a first reading. We present in section 3 a less explicit, but also less cumbersome, version of kernel calculus, which has the advantage of allowing infinite multiplicity.

From a purely algebraic point of view, kernels are sums of "multiple integrals"

$\displaystyle \int_{s_1<\dots<s_n} K(s_1,\dots,s_n)\,da^{\varepsilon_1}(s_1)\dots da^{\varepsilon_n}(s_n)$

over all possible integers $n$ and $n$-tuples of Evans indexes $\varepsilon_i=\binom{\rho_i}{\sigma_i}$, except that usually the index $\binom{0}{0}$ is not allowed — if it were, it could be removed by integration, modifying the coefficients. On the other hand, it is interesting to have a variant of the definition of kernels in which $da^0_0$ is allowed, because such extended "kernels" appear naturally when solving stochastic differential equations. The price to pay is the loss of uniqueness in the kernel representation, but the formulas are only trivially modified.
As usual, one tries to define a shorthand notation, and to give closed formulas describing how kernels act on vectors, and how adjoints and products are computed. We are going to extend the $K(A,B,C)$ notation of simple Fock space, though it becomes very heavy.

Kernels then appear as sums of multiple integrals

(2.1) $\displaystyle K=\int K\big((A^\alpha_0)\,;\,(A^\beta_\alpha)\,;\,(A^0_\alpha)\big)\;\prod_\alpha da^\alpha_0(A^\alpha_0)\,\prod_{\alpha,\beta} da^\beta_\alpha(A^\beta_\alpha)\,\prod_\alpha da^0_\alpha(A^0_\alpha)\;.$

This is an illustration of the Evans notation. First of all, $\alpha,\beta$ are indexes which assume the values $1,\dots,d$, $0$ being excluded. The arguments $A^\alpha_0$ coming first correspond to the creators, those $A^0_\alpha$ coming last correspond to the annihilators, and in the middle we have the number and exchange operators. We may imagine all arguments arranged in a matrix instead of a line, with an empty set $A^0_0$ at the upper left corner. All the arguments are disjoint.

If the time differential $da^0_0(t)$ is included, formula (2.1) contains an additional $da^0_0(A^0_0)$, and the corresponding subset $A^0_0$ is written as the first argument of $K$.

The notation can be brought closer to that of simple Fock space, writing the kernel as

(2.1') $K\big((A^\alpha)\,;\,(B^\beta_\alpha)\,;\,(C_\alpha)\big)\;.$
Let us describe the way a kernel $K$ acts on a function $f$, first in the simple case where neither $K$ nor $f$ depend on the time variable $t$. Here is the formula, denoting by the same letter a function and its chaotic expansion

(2.2) $\displaystyle Kf(A_1,\dots,A_d)=\int \sum K(U_\alpha,V_{\alpha\beta},M_\alpha)\,f(M_\alpha+{\textstyle\sum_\beta}V_{\beta\alpha}+W_\alpha)\,\prod_\alpha dM_\alpha\;,$

the sum bearing on all decompositions $A_\alpha=U_\alpha+\sum_\beta V_{\alpha\beta}+W_\alpha$.

Before we sketch a justification of this formula, let us also give the useful formula that corresponds to formula (5.2) in the last section of Chapter IV. That is, the computation of the functional $\Phi(g,f)=<g,Kf>$ : it is first seen to have the value

$\displaystyle \int \overline g\big(W_\alpha+{\textstyle\sum_\beta}V_{\alpha\beta}+U_\alpha\big)\,K(U_\alpha,V_{\alpha\beta},M_\alpha)\,f\big(M_\alpha+{\textstyle\sum_\beta}V_{\beta\alpha}+W_\alpha\big)\times \prod dU_\alpha\,\prod dV_{\alpha\beta}\,\prod dM_\alpha\,\prod dW_\alpha\;.$

According to the main property of the measure, we may consider as a single variable each subset $W_\alpha+V_{\alpha\alpha}$. We first denote it by $V'_{\alpha\alpha}$, then omit the $'$. In this way, we get

(2.3) $\displaystyle <g,Kf>=\int \overline g\big({\textstyle\sum_\beta}V_{\alpha\beta}+U_\alpha\big)\,\widetilde K(U_\alpha,V_{\alpha\beta},M_\alpha)\,f\big(M_\alpha+{\textstyle\sum_\beta}V_{\beta\alpha}\big)\times \prod dU_\alpha\,\prod dV_{\alpha\beta}\,\prod dM_\alpha\;,$

where $\widetilde K$ is a partial Moebius transform of $K$ on the diagonal variables $V_{\alpha\alpha}$ only : i.e. one performs a summation on all subsets of $V_{\alpha\alpha}$.
one performs a summation on all subsets of Vaa.
We now justify formally (2.2). We allow the argument $A^0_0$ in $K$, and for the sake of symmetry, we allow also an additional differential $dX^0(U^0)$ and the corresponding argument $U^0$ in the definition of a vector, though in practice this is unusual

(2.4) $\displaystyle f=\int f((U^\rho))\,\prod_\rho dX^\rho(U^\rho)=\int f(U^0,U^1,\dots,U^d)\,dX^0(U^0)\,dX^1(U^1)\dots dX^d(U^d)\;.$

To compute the effect of a kernel on a vector, we begin with the effect of an operator differential $\prod da^\rho_\sigma(S^\rho_\sigma)$ on a vector differential $\prod dX^\tau(U^\tau)$. We put

$S^\rho_\sigma\cap U^\tau=B^{\rho\tau}_\sigma\;,\qquad S^\rho_\sigma\cap U=A^\rho_\sigma\;,\qquad S\cap U^\tau=C^\tau\;,$

where $S$ is the complement of $\bigcup S^\rho_\sigma$, and $U$ the complement of $\bigcup U^\tau$. All these sets are disjoint. Then it is easily seen that the product is $0$ unless the only non-empty sets in these decompositions are the $C^\tau$, $A^\rho_0$ and $B^{\rho\gamma}_\gamma$ (written $B^\rho_\gamma$ below), and in this case

$dX^0$ is produced by $V^0=A^0_0+{\textstyle\sum_\gamma}B^0_\gamma+C^0\;,$

$dX^\alpha$ is produced by $V^\alpha=A^\alpha_0+{\textstyle\sum_\gamma}B^\alpha_\gamma+C^\alpha\;.$

Then we have

$S^\rho_0=A^\rho_0\;,\qquad S^\rho_\gamma=B^\rho_\gamma\;\;(\gamma\neq0)\;,\qquad U^\tau={\textstyle\sum_\rho}B^\rho_\tau+C^\tau\;.$

Then it is easy to find the expression of the vector $Kf=g$ : the coefficient $g((V^\alpha))$ (with $V^0$ empty : what we get is a standard chaos expansion) is given by a sum over all decompositions

$V^\alpha=A^\alpha+{\textstyle\sum_\gamma}B^\alpha_\gamma+C^\alpha$

of the following integrals, where the sets $A^0_0$, $B^0_\gamma$ and $C^0$ appearing in the coefficient of $dX^0$ are treated as integration variables $M$, $N_\gamma$, $P$ due to the combinatorial properties of the measure

(2.5) $\displaystyle \int K\big(M,(A^\alpha)\,;\,(B^\alpha_\gamma)\,;\,(N_\gamma)\big)\,f\big(P,(N_\gamma+{\textstyle\sum_\alpha}B^\alpha_\gamma+C^\gamma)\big)\,dM\,\prod dN_\gamma\,dP\;.$

Usually, $f(P,\,\cdot\,)=0$ unless $P=\emptyset$ and the integration variable $P$ can be omitted.


Our next step is to give an analytical meaning to (2.2). Regular kernels in the sense of Maassen are defined by the two properties which generalize the simple Fock space situation (Chap. IV, §4 subs. 5) : 1) a compact support in time ($K((A^\rho_\sigma))=0$ unless all its arguments are contained in some bounded interval $[0,T]$), 2) a domination inequality of the form

(2.6) $|K((A^\rho_\sigma))|\leq C\,M^{\sum|A^\rho_\sigma|}\;.$

It is easy to see that, if the additional variable $da^0_0$ is removed by integration, then the true kernel we get is still regular.

Here also, we may define test vectors (regular vectors) by a condition of compact time support, and a domination property

(2.7) $|f((U^\rho))|\leq C\,M^{\sum_\rho|U^\rho|}\;.$

Integrating out the additional variable preserves regularity. Then it is tedious, but not
difficult to check that the effect of a regular kernel on a test vector is again a test vector.
This is the beginning of the extension of Maassen's theorem to multiple Fock space
kernels. We leave the discussion of adjoints to the reader, and deal with composition in
subsection 3.
The Belavkin-Lindsay type conditions can also be extended to Fock spaces of finite
multiplicity, as shown by Attal.
REMARKS. a) Kernels decompose operators on Fock space into "elementary processes" (which physicists would represent by diagrams), the formalism allowing one single operation at a given "site" (time or place). An exchange operator is more singular in this respect than a creation or annihilation operator, since the operation it describes can be understood as the annihilation of a particle of type α, followed immediately by the creation at the same site of a particle of type β. Physicists consider still more singular processes (pair annihilations, simultaneous creation of several particles). A "kernel formalism" including all these processes seems to require distributions (see Krée [Kre], Krée and Raczka [KrR]), and doesn't concern us here.
b) Let us be more specific. Fock space of multiplicity d over L²(ℝ₊) is also ordinary Fock space over L²(ℝ₊ × {1, …, d}). We may therefore consider Maassen kernels with three arguments involving operators a^ε_A, where ε = −, ∘, + and A is a finite subset of ℝ₊ × {1, …, d} ; they depend on 3d subsets of ℝ₊. On the other hand, kernels involving the exchange operators depend on d² + 2d subsets. Thus the second description is more refined than the first one. It is intuitively clear that it is not possible to express the exchange operators by means of a Maassen kernel involving only creation, annihilation and number. On the other hand, ℝ₊ × {1, …, d} is isomorphic with ℝ₊ from the measure theoretic point of view, and we expect there are second quantization operators on ordinary Fock space, which cannot be expressed by Maassen three argument kernels. An explicit example of this kind is due to J.L. Journé, Sém. Prob. XX, p. 313-316.
c) More striking still : we have mentioned in Chapter IV that the full Fock space over L²(ℝ₊) is isomorphic with symmetric Fock space as a Hilbert space. Therefore its bounded creation/annihilation operators can be considered as operators on symmetric Fock space, and Parthasarathy-Sinha [PaS6] describe explicitly the way they act on the standard continuous basis. It can be analyzed as annihilation of particles at a given place and recreation at a different place. There is little hope of describing such operators by a kernel formalism.
3 Let us now study the composition J = KL of two kernels. The formula is obviously unpractical but its existence is important, as it implies the existence of a "Maassen algebra" of kernels. The complete formula is due to A. Dermoune [Der].
Let us follow the same method as in subsection 2, and compute the composition of two operator differentials

da^0_0(R^0_0) ∏_α da^0_α(R^0_α) … da^α_β(R^α_β) … da^α_0(R^α_0) ,
da^0_0(S^0_0) ∏_α da^0_α(S^0_α) … da^α_β(S^α_β) … da^α_0(S^α_0) .

In the case of true kernels, R^0_0 and S^0_0 are not used in this representation, and are interpreted as empty. We denote by R̄ the complement of ∪_{ρσ} R^ρ_σ and similarly S̄. We define

R^ρ_σ ∩ S̄ = A^ρ_σ ;   R^ρ_σ ∩ S^σ_τ = B^{ρσ}_τ ;   R̄ ∩ S^ρ_σ = C^ρ_σ .

All these sets are pairwise disjoint, and we use the relation da^ε(U+V) = da^ε(U) da^ε(V) for disjoint U, V to write each one of the two operator differentials as a product involving the elementary pieces A^ρ_σ, B^{ρσ}_τ, and C^ρ_σ. Then we multiply these products using the Evans rules, and we get that the product is 0 unless the only non-empty intersections are those of the form A^ρ_σ, B^{ρσ}_τ (with matching middle index σ), and C^ρ_σ. If these conditions are realized, the product is equal to

∏_{ρσ} da^ρ_σ(T^ρ_σ) = da^0_0(T^0_0) ∏_α da^0_α(T^0_α) … da^α_β(T^α_β) … da^α_0(T^α_0)

with

T^ρ_σ = A^ρ_σ + Σ_τ B^{ρτ}_σ + C^ρ_σ

and in particular

T^0_0 = A^0_0 + Σ_τ B^{0τ}_0 + C^0_0 .

It is now easy to get the final Maassen-like formula : we have for the true kernel KL((T^α_0), (T^α_β), (T^0_β)) the following expression, replacing the sets A^0_0, B^{0α}_0, C^0_0 by integration variables M, N_α, P

(3.1)  ∫ Σ K( M , A^α_0 + Σ_{τ≥1} B^{α0}_τ , A^α_β + Σ_τ B^{αβ}_τ , A^0_β + Σ_{τ≥1} B^{0β}_τ + N_β ) ×
          L( P , N_α + Σ_{ρ≥1} B^{ρα}_0 + C^α_0 , Σ_ρ B^{ρα}_β + C^α_β , Σ_{ρ≥1} B^{ρ0}_β + C^0_β ) dM dP ∏_α dN_α ,

the sum bearing on all decompositions T^ρ_σ = A^ρ_σ + Σ_τ B^{ρτ}_σ + C^ρ_σ .
It is relatively easy now to check that the composition of two regular kernels is a regular
kernel.

Uniqueness of kernel representations

4 In the case of simple Fock space, the uniqueness of kernel representations was discussed by Maassen for two-argument kernels. The case of kernels involving the number operator appears in Sém. Prob. XXI, p. 46-47. On the other hand, the proof that is given there does not apply to kernels on a multiple Fock space : the first proof valid in full generality is due to Attal [Att]. Since the notation of multiple kernels is very cumbersome, we shall first give the idea of his proof in the case of simple Fock space, and then extend it to higher multiplicities.
First of all, we associate with a kernel k(A, B, C) a bilinear form (we do not care about making it hermitian). From Chapter IV, §4 (1.1') we have (using the basic property of the measure)

(4.1)  ⟨ḡ, Kf⟩ = ∫ ḡ(U+V+W) k(U, V, M) f(V+W+M) dU dV dW dM .

We take as integration variables U, V+W, M = A, B, C, to get

∫ ( Σ_{V⊂B} k(A, V, C) ) ḡ(A+B) f(B+C) dA dB dC
 = ∫ k̃(A, B, C) h(A, B, C) dA dB dC   where h(A, B, C) = ḡ(A+B) f(B+C) .

Since the mapping k → k̃ is 1-1 by the Moebius inversion formula, it suffices to prove that k̃ = 0. On the other hand, if K is a regular kernel, k̃ belongs to L²(𝒫³). Finally, the uniqueness problem is reduced to proving that functions of the form h(A, B, C) above, where f, g are test functions, are total in L²(𝒫³).
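The Moebius inversion invoked here is inclusion-exclusion over finite subsets. The following sketch (ours, not Meyer's; subsets of a 3-point base set encoded as bitmasks) checks that the partial transform k → k̃ on the middle argument is indeed invertible:

```python
from itertools import product
import random

N = 3  # subsets of {0,1,2} encoded as bitmasks 0..7

def submasks(b):
    """All subsets V of the bitmask b."""
    v = b
    while True:
        yield v
        if v == 0:
            break
        v = (v - 1) & b

def moebius_transform(k):
    """k~(A,B,C) = sum over V subset of B of k(A,V,C)."""
    return {(a, b, c): sum(k[(a, v, c)] for v in submasks(b))
            for a, b, c in product(range(1 << N), repeat=3)}

def moebius_inverse(kt):
    """k(A,B,C) = sum over V subset of B of (-1)^|B\\V| k~(A,V,C)."""
    return {(a, b, c): sum((-1) ** bin(b & ~v).count("1") * kt[(a, v, c)]
                           for v in submasks(b))
            for a, b, c in product(range(1 << N), repeat=3)}

random.seed(0)
k = {key: random.randint(-5, 5) for key in product(range(1 << N), repeat=3)}
assert moebius_inverse(moebius_transform(k)) == k  # k -> k~ is one-to-one
```

In particular k̃ = 0 forces k = 0, which is how the uniqueness problem reduces to the totality statement above.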
We consider the algebra 𝒜 of linear combinations of functions h(A, B, C) = ḡ(A+B) f(B+C), where f(U), g(U) are bounded Borel functions on 𝒫, and are equal to 0 if either |U| > n or U ⊄ [0, t] for some n, t. Let ℬ denote the σ-field generated by 𝒜. According to the monotone class theorem, 𝒜 is dense in L²(ℬ), and we only have to show that ℬ is the Borel σ-field of 𝒫³. Since ℬ is separable and 𝒫³ is a Polish space, we only need to show that 𝒜 is separating. This amounts to showing that if A, B, C, A′, B′, C′ are triples of disjoint sets, the relations A+B = A′+B′, B+C = B′+C′ imply A = A′, B = B′, C = C′. Now putting H = A+B+C = A′+B′+C′ we have A = H \ (B+C) = A′ etc.
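As a brute-force sanity check of this separation argument (our illustration), one can verify that on disjoint triples the map (A, B, C) ↦ (A+B, B+C) is injective, and that the recovery recipe H = A+B+C, A = H \ (B+C), etc. works:

```python
from itertools import product

# Disjoint triples (A,B,C) of subsets of {0,...,5}: each point is assigned to
# A, B, C or to none of them; bit i of a mask means i belongs to the set.
seen = {}
for assignment in product(range(4), repeat=6):
    A = B = C = 0
    for i, where in enumerate(assignment):
        if where == 1: A |= 1 << i
        elif where == 2: B |= 1 << i
        elif where == 3: C |= 1 << i
    key = (A | B, B | C)          # the data (A+B, B+C) seen by the functions h
    assert seen.setdefault(key, (A, B, C)) == (A, B, C)  # no collisions
    # recovery as in the text: H = A+B+C, then A = H \ (B+C), etc.
    H = A | B | C
    assert A == H & ~(B | C) and C == H & ~(A | B) and B == (A | B) & ~A
```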
In the case of multiplicity d, the idea remains the same but the number of subsets to handle becomes much larger : operators are represented by kernels depending on d² + 2d arguments B^ρ_σ (B^0_0 = ∅), while vectors depend on d arguments (recalling the convention that α, β cannot assume the value 0, contrary to ρ, σ, τ). The functions h((B^ρ_σ)) which should generate a dense subspace are of the form

ḡ(( Σ_σ B^α_σ )) f(( Σ_ρ B^ρ_α )) .

Again we are reduced to proving that such functions separate points. Otherwise stated, given systems B^ρ_σ and B′^ρ_σ of disjoint subsets (empty for ρ = σ = 0), such that for all α

Σ_σ B^α_σ = Σ_σ B′^α_σ ,   Σ_ρ B^ρ_α = Σ_ρ B′^ρ_α ,

are the two systems identical ? The proof is very simple : since all the B^ρ_σ are disjoint, we have B^α_β = ( Σ_σ B^α_σ ) ∩ ( Σ_ρ B^ρ_β ), and therefore B^α_β = B′^α_β. Then the equalities B^α_0 = B′^α_0, B^0_α = B′^0_α follow by difference.

O p e r a t o r valued kernels

5 As we will see in the next chapter, the multiple Fock space Φ is "coupled" in most applications to another Hilbert space 𝒥 (the initial space) to produce the space Ψ = 𝒥 ⊗ Φ. The initial space is used to describe some physical system, and is the real object of interest. Therefore, one is naturally led to considering kernels which, instead of being complex valued, take their values in the space of operators on the initial space 𝒥. Rather than using at once the cumbersome notation with families of sets, we may write such kernels as sums, over all n and all n-tuples (ρ) = (ρ₁ … ρₙ), (σ) = (σ₁ … σₙ) of Evans indexes, of multiple integrals

∫_{s₁<…<sₙ} K^{(ρ)}_{(σ)}(s₁, …, sₙ) da^{ρ₁}_{σ₁}(s₁) … da^{ρₙ}_{σₙ}(sₙ) .

Here K^{(ρ)}_{(σ)} is a mapping from the increasing n-simplex to the space ℒ(𝒥) of bounded operators on 𝒥 -- the case of unbounded operators will be considered later. Note that (ρ) and A have the same number of elements, (ρ) telling which indexes are ascribed to the successive elements of A = {s₁ < … < sₙ}. This is equivalent to defining the partition A^ρ_σ of A into the subsets where the ascribed index is (ρ, σ). Thus the kernels are exactly the same as those we considered above, except that now K is an operator instead of a scalar. The index (0, 0) is often excluded, since it may always be integrated out first.
Such kernels operate on vectors f ∈ Ψ, which admit 𝒥-valued chaos expansions

f = Σ_{(α)} ∫ f^{(α)}(s₁, …, sₙ) dX^{α₁}(s₁) … dX^{αₙ}(sₙ) .

It is not necessary to write the way K acts on f : the formula is the same as (2.5) with a slightly different notation, the product Kf being understood as an operator acting on a vector, and producing a vector. Similarly, the terrible composition formula for two kernels remains the same, except that in (3.1) the product KL is understood as composition of operators. Test vectors (regular kernels) are defined as in the scalar case, replacing the absolute value by the Hilbert space norm (operator norm), and Maassen's theorem that regular kernels form an algebra operating on test vectors remains true.
Given two vectors u, v ∈ Φ and a kernel K on Ψ, we may formally define an operator K̃ on the initial space by the formula

⟨g, K̃j⟩ = ⟨g ⊗ v, K(j ⊗ u)⟩ .

A useful remark due to Holevo is the following : let us say the operator valued kernel is dominated by a scalar kernel k if we have

‖ K·(A) ‖ ≤ k·(A) .

Let us denote by |u|, |v| the vectors in Φ whose chaos expansions are |u|_α(A) = ‖u_α(A)‖ and similarly for |v|. Then we have

(5.1)  |⟨g, K̃j⟩| ≤ ‖g‖ ‖j‖ ⟨|v|, k|u|⟩ .

This follows immediately from the fact that the computation of a bilinear form ⟨g, Kf⟩ only involves additions and multiplications.

Parthasarathy's construction of Lévy processes

6 We give now a beautiful application of Fock spaces with infinite multiplicity : Parthasarathy's construction of Lévy processes, which extends to general processes with independent increments the construction of Brownian motion and Poisson processes on simple Fock space. In the last chapter of these notes, we will extend it to a very general algebraic situation.
We consider a Lévy function on ℝ

(6.1)  ψ(u) = −iau + c²u²/2 + ∫ ( 1 − e^{iux} + iux I_{|x|<1} ) ν(dx)

where ν is the Lévy measure : e^{−ψ(u)} is the Fourier transform of an infinitely divisible law π. We have

(6.2)  ψ(u) + ψ(−v) − ψ(u−v) = c²uv + ∫ ( 1 − e^{iux} )( 1 − e^{−ivx} ) ν(dx) .

Let 𝒦 be the Hilbert space ℂ ⊕ L²(ν). We define a unitary representation of ℝ in 𝒦

(6.3)  ρ(u) = 1 ⊕ e^{iux}

(as a multiplication operator). We put

(6.4)  η(u) = icu ⊕ (1 − e^{iux}) .

This time, the function is understood as a vector : the function 1 − e^{iux} is square integrable w.r.t. ν. Then η satisfies a cocycle property

(6.5)  η(u + v) = η(u) + ρ(u) η(v) .

We note that

(6.6)  ⟨η(v), η(u)⟩ = ψ(u) + ψ(−v) − ψ(u − v) .
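Both (6.5) and (6.6) can be checked numerically when ν is a finite sum of point masses, so that all the integrals become finite sums. A sketch of ours (the drift a, the coefficient c and the atoms of ν are arbitrary choices):

```python
import cmath

a, c = 0.3, 0.7                       # drift and Gaussian coefficient in psi
atoms = [(-0.5, 0.4), (1.5, 0.2)]     # (jump size x, mass nu({x}))

def psi(u):
    # psi(u) = -iau + c^2 u^2/2 + integral (1 - e^{iux} + iux 1_{|x|<1}) nu(dx)
    s = -1j * a * u + c**2 * u**2 / 2
    for x, w in atoms:
        s += w * (1 - cmath.exp(1j * u * x) + (1j * u * x if abs(x) < 1 else 0))
    return s

def eta(u):
    # eta(u) = icu (+) (1 - e^{iux}), an element of C (+) L^2(nu)
    return (1j * c * u, [1 - cmath.exp(1j * u * x) for x, _ in atoms])

def rho_apply(u, vec):
    # rho(u) = 1 (+) e^{iux}, acting by multiplication
    z, f = vec
    return (z, [cmath.exp(1j * u * x) * fx for (x, _), fx in zip(atoms, f)])

def inner(v, w):
    # <v, w>, antilinear in the first argument, nu-weighted on the L^2 part
    (zv, fv), (zw, fw) = v, w
    return zv.conjugate() * zw + sum(wt * fv[i].conjugate() * fw[i]
                                     for i, (x, wt) in enumerate(atoms))

u, v = 0.8, -1.3
# (6.5): eta(u+v) = eta(u) + rho(u) eta(v)
lhs, rhs = eta(u + v), rho_apply(u, eta(v))
assert abs(lhs[0] - (eta(u)[0] + rhs[0])) < 1e-12
assert all(abs(l - (e + r)) < 1e-12 for l, e, r in zip(lhs[1], eta(u)[1], rhs[1]))
# (6.6): <eta(v), eta(u)> = psi(u) + psi(-v) - psi(u-v)
assert abs(inner(eta(v), eta(u)) - (psi(u) + psi(-v) - psi(u - v))) < 1e-12
```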

Next, we apply the Weyl commutation relations

W(η(u), ρ(u)) W(η(v), ρ(v)) = W(η(u) + ρ(u)η(v), ρ(u)ρ(v)) e^{−i Im ⟨η(u), ρ(u)η(v)⟩}
 = W(η(u+v), ρ(u+v)) e^{−i Im ⟨η(u), η(u+v) − η(u)⟩}
 = W(η(u+v), ρ(u+v)) e^{−i Im ( ψ(u+v) − ψ(u) − ψ(v) )} .

Accordingly, the operators

(6.7)  R(u) = e^{−i Im ψ(u)} W(η(u), ρ(u))

constitute a unitary group. In the vacuum state, the law of its generator is π, as shows the following Fourier transform computation, starting from the formula

⟨1, W(η(u), ρ(u)) 1⟩ = e^{−‖η(u)‖²/2}

whose value here is e^{−Re ψ(u)}. Taking into account the phase factor (6.7) the Fourier transform is e^{−ψ(u)} as announced.
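In the simplest special case (our toy check, not in the text), take ν = λδ₁ with no drift and no Gaussian part ; since |x| = 1 the compensator term drops, ψ(u) = λ(1 − e^{iu}), and e^{−ψ(u)} must be the characteristic function of the Poisson law with mean λ :

```python
import cmath, math

lam = 1.7            # nu = lam * delta_1, no drift, no Gaussian part
def psi(u):          # psi(u) = lam * (1 - e^{iu}); |x| = 1, so no compensator
    return lam * (1 - cmath.exp(1j * u))

for u in (0.0, 0.5, 1.0, 2.3):
    # characteristic function of the Poisson(lam) law, summed directly
    cf = sum(math.exp(-lam) * lam**k / math.factorial(k) * cmath.exp(1j * u * k)
             for k in range(80))
    assert abs(cmath.exp(-psi(u)) - cf) < 1e-10
```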
We compute the generator itself, assuming first that ν has a bounded support. Then the function x belongs to L²(ν), and the multiplication operator M_x is bounded. Also, ψ is differentiable at 0 and we have η′(0) = ic ⊕ (−ix) (considered as a vector), ρ′(0) = 0 ⊕ iM_x, and

X = −Im ψ′(0) + Q(c ⊕ x) + a°(M_x) ,

Q denoting as usual a sum of creators and annihilators. Under the same hypothesis, it is easy to describe the whole process, working on L²(ℝ₊) ⊗ 𝒦 instead of 𝒦 :

X_t = −t Im ψ′(0) + Q_t(c ⊕ x) + a°_t(M_x) .

If the Lévy measure is concentrated at one single point, we fall back on the representation of compensated Poisson processes on simple Fock space.
We do not try to describe the generator in the case of a Lévy process with unbounded jumps, since our purpose was only to show the interest of infinite multiplicity. More details are given in the book [Par1], Chapter 2, 21, p. 152.

§3. BASIS-FREE NOTATION ON MULTIPLE FOCK SPACES

The way we followed led us from Fock space to Brownian motion, first one-
dimensional, and then of arbitrary dimension. It is also possible to forget the Brownian
motions and to use a condensed notation, the same for all multiplicities. The idea of
such "intrinsic" notation is due to Belavkin, though we do not follow him to the end,
and are content with a notation which we learnt from lectures of Parthasarathy and is
used in his book.
1 We start with a Fock space of arbitrary multiplicity 𝒦, but with a trivial initial space for simplicity. Consider a linear combination of differential elements at time t -- in the theory of stochastic integration of Chapter VI, the coefficients of this linear combination will become adapted operators at time t, but in all other respects the notation will remain the same. We group the operators and imitate simple Fock space :
-- The creation term Σ_α λ_α da⁺_α(t) may be written as da⁺_t(λ), where λ = (λ_α) ∈ 𝒦 is a column vector (in spite of the lower index).
-- The number/exchange term Σ_{αβ} Λ^α_β da^α_β(t) is written da°_t(Λ), Λ = (Λ^α_β) being a matrix (an operator on 𝒦).
-- The annihilation term is da⁻_t(μ), where μ ∈ 𝒦′ is a line vector (μ_α) (in many cases it appears as a bra ⟨μ| associated with an element of 𝒦).
-- Finally we have the scalar term L dt.
Then the standard differential element can be written as

(1.1)  da_t(IL) = L dt + da⁺_t(λ) + da°_t(Λ) + da⁻_t(μ) .

We may represent IL as a (2, 2) matrix

IL = ( L  μ
       λ  Λ )

and consider IL as an operator from 𝒦̂ = ℂ ⊕ 𝒦 into itself. If we add to this setup an initial space 𝒥, the notation remains the same, except that L instead of a scalar becomes an operator on 𝒥, λ becomes a column of operators on 𝒥 (a mapping from 𝒥 to 𝒦 ⊗ 𝒥), μ is a line of operators on 𝒥 (a mapping from 𝒦 ⊗ 𝒥 to 𝒥) and Λ is a matrix of operators (an operator on 𝒦 ⊗ 𝒥). Thus IL itself becomes an operator on 𝒦̂ ⊗ 𝒥.
Finally, in stochastic integration the coefficients are operators on Fock space Ψ, and therefore IL is an operator on 𝒦̂ ⊗ Ψ.
The product of two differential elements of the form (1.1) (the "square bracket" or "Ito correction") is given by the following (associative) product

(1.3)  ( L  μ     ( L′  μ′     ( μλ′  μΛ′
        λ  Λ ) ∘   λ′  Λ′ ) =   Λλ′  ΛΛ′ ) .

Here μλ′ is the matrix product of a line of operators by a column of operators, i.e. an operator, and similarly μΛ′ is a line.
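With 𝒦 finite dimensional, the Ito product (1.3) can be checked by machine. A sketch of ours with numpy, representing IL as a tuple (L, λ, Λ, μ) of a scalar, a column, a matrix and a line, which in particular confirms associativity:

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def rand_IL():
    # IL = (L, lambda, Lambda, mu): scalar, column in K, operator on K, line in K'
    return (rng.standard_normal(),          # L
            rng.standard_normal((d, 1)),    # lambda (column)
            rng.standard_normal((d, d)),    # Lambda (matrix)
            rng.standard_normal((1, d)))    # mu (line)

def ito(IL, ILp):
    # (1.3): scalar mu lambda', creation Lambda lambda', number Lambda Lambda',
    # annihilation mu Lambda' -- the dt coefficients L, L' drop out.
    L, lam, Lam, mu = IL
    Lp, lamp, Lamp, mup = ILp
    return ((mu @ lamp).item(), Lam @ lamp, Lam @ Lamp, mu @ Lamp)

A, B, C = rand_IL(), rand_IL(), rand_IL()
x, y = ito(ito(A, B), C), ito(A, ito(B, C))
assert all(np.allclose(p, q) for p, q in zip(x, y))  # associativity
```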

2 Though the concise notation above is adapted from Belavkin, the name of Belavkin's notation concerns more specially the representation of the set of four coefficients (1.1) as a square matrix of order 3 with many zeroes 1

(2.1)  IL = ( 0  μ  L
              0  Λ  λ
              0  0  0 )

which represents as above an operator on 𝒦̂̂ ⊗ 𝒥 with an enlarged 𝒦̂̂ = ℂ ⊕ 𝒦 ⊕ ℂ. This apparent complication is justified by the formula for products corresponding to (1.3)

(2.2)  ( 0  μ  L     ( 0  μ′  L′     ( 0  μΛ′  μλ′
         0  Λ  λ       0  Λ′  λ′  =    0  ΛΛ′  Λλ′
         0  0  0 )     0  0   0 )      0  0    0 )

which is a standard product of matrices. On the other hand, taking adjoints in Belavkin's notation corresponds to a non-standard involution : the following matrices represent adjoint pairs

(2.3)  IL = ( 0  ⟨b|  D        IL′ = ( 0  ⟨c|  D*
              0  A    |c⟩              0  A*   |b⟩
              0  0    0 ) ,            0  0    0 ) .
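That (2.2) is nothing but ordinary matrix multiplication is easy to confirm numerically. The sketch below (ours, with 𝒦 = ℝ^d for simplicity) embeds (L, λ, Λ, μ) into the 3×3 block matrix (2.1) and checks that the blocks of the product reproduce the Ito product (1.3):

```python
import numpy as np

d = 3
rng = np.random.default_rng(1)

def belavkin(L, lam, Lam, mu):
    # (2.1): rows/columns indexed by C (+) K (+) C, with many zeroes
    M = np.zeros((d + 2, d + 2))
    M[0, 1:d+1] = mu          # line mu
    M[0, d+1] = L             # scalar L
    M[1:d+1, 1:d+1] = Lam     # matrix Lambda
    M[1:d+1, d+1] = lam       # column lambda
    return M

L1, lam1, Lam1, mu1 = rng.standard_normal(), rng.standard_normal(d), \
    rng.standard_normal((d, d)), rng.standard_normal(d)
L2, lam2, Lam2, mu2 = rng.standard_normal(), rng.standard_normal(d), \
    rng.standard_normal((d, d)), rng.standard_normal(d)

P = belavkin(L1, lam1, Lam1, mu1) @ belavkin(L2, lam2, Lam2, mu2)
# blocks of the product = Ito product (1.3): (mu lam', Lam lam', Lam Lam', mu Lam')
assert np.allclose(P, belavkin(mu1 @ lam2, Lam1 @ lam2, Lam1 @ Lam2, mu1 @ Lam2))
```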

In a form of Belavkin's notation that is spreading among quantum probabilists, elements of 𝒦̂̂ are written as a e_{−∞} + e + c e_{+∞} with a, c scalar and e ∈ 𝒦, and therefore elements of 𝒦̂̂ ⊗ 𝒥 or 𝒦̂̂ ⊗ Ψ have similar representations with a, c ∈ 𝒥 or Ψ, and e replaced by a finite sum Σᵢ eᵢ ⊗ bᵢ with eᵢ ∈ 𝒦, bᵢ ∈ 𝒥 or Ψ. Then the action of Belavkin's (3, 3) matrix is read as follows

(2.4)  IL( e_{−∞} ⊗ c + Σᵢ eᵢ ⊗ bᵢ + e_{+∞} ⊗ a )
        = e_{−∞} ⊗ ( La + Σᵢ μ(eᵢ ⊗ bᵢ) ) + λ(a) + Σᵢ Λ(eᵢ ⊗ bᵢ) .

Belavkin's notation is far more transparent when the (3, 3) matrix can be displayed ! We suspect QP has a mild fit of the differential geometers' "intrinsic fever" (to be fought by small inoculations of the "index disease" virus).

Kernels in basis-free notation

3 The following discussion is borrowed from Schürmann's article [Sch3], though the idea of (3.2) can be found in Belavkin's work. For simplicity, we do not include an initial space.
We recall (cf. §1, (1.5)) that multiple Fock space with multiplicity space 𝒦 (possibly infinite dimensional) can be interpreted as an integral of Hilbert spaces over the simplex 𝒫

(3.1)  Φ = ∫_𝒫 𝒦_A dA   with 𝒦_A = 𝒦^{⊗|A|} .

1 Such a representation was also used by Holevo [Hol4].



This can be written in a symbolic way as a "chaotic representation"

(3.2)  f = ∫ f(A) dX^⊗_A

where f(A) is 𝒦_A-valued. If 𝒦 is finite dimensional, then taking an orthonormal basis of 𝒦 and expanding f(A) in the corresponding tensor basis of 𝒦_A will produce the standard multiple integral coefficients (but dX^⊗_A is just a notation).
The kernels we consider are three-arguments kernels K(U, V, W) with a notation very close to that of simple Fock space, but K(U, V, W) instead of being a scalar is a (bounded) operator from 𝒦_{V+W} to 𝒦_{U+W} -- tensoring with the identity on 𝒦_Z, we may consider it also as an operator from 𝒦_{V+W+Z} to 𝒦_{U+W+Z} for an arbitrary Z disjoint from U+V+W : changing notation, from 𝒦_T to 𝒦_{T−V+U} provided T contains V+W and is disjoint from U. Then we may define the action of the kernel on a vector by a formula identical to that of simple Fock space :

(3.3)  Kf(H) = Σ_{A+B+C=H} ∫_𝒫 K(A, B, M) f(B + C + M) dM

where K(A, B, M) acts as an operator on f(B + C + M) ∈ 𝒦_{B+C+M} and maps it to 𝒦_{(B+C+M)−M+A} = 𝒦_H. The rule for composition of kernels and Maassen's theorem are exactly the same as in the scalar case, with the only change that the absolute value of K is replaced by the operator norm of K.
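On a toy "Fock space" over a finite base set, where the integral dM becomes a plain sum over subsets, formula (3.3) in the scalar case can be implemented directly. A sketch of ours, checking that the kernel supported on A = B = M = ∅ with value 1 acts as the identity:

```python
N = 4  # base set {0,...,3}; "sets" are bitmasks, the integral dM is a sum

def submasks(b):
    v = b
    while True:
        yield v
        if v == 0:
            break
        v = (v - 1) & b

def apply_kernel(K, f):
    """Toy version of (3.3): Kf(H) = sum_{A+B+C=H} sum_M K(A,B,M) f(B+C+M),
    with M running over the sets disjoint from H."""
    full = (1 << N) - 1
    g = {}
    for H in range(1 << N):
        total = 0.0
        for A in submasks(H):
            rest = H & ~A
            for B in submasks(rest):
                C = rest & ~B
                for M in submasks(full & ~H):
                    total += K.get((A, B, M), 0.0) * f[B | C | M]
        g[H] = total
    return g

f = {H: float(H * H + 1) for H in range(1 << N)}
identity = {(0, 0, 0): 1.0}
assert apply_kernel(identity, f) == f
```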
Chapter VI
Stochastic Calculus in Fock Space
In this chapter, we reach the main topic of these notes, non-commutative stochastic calculus for adapted families of operators on Fock space, with respect to the basic operator martingales. This calculus is a direct generalization of the classical Ito integration of adapted stochastic processes w.r.t. Brownian motion, or other martingales. Its physical motivation is quantum mechanical evolution in the presence of a "quantum noise". Stochastic calculus has also been developed for fermions in a series of papers by Barnett, Streater and Wilde [BSW], and in abstract versions by Accardi, Fagnola, Quaegebeur.

§1. STOCHASTIC INTEGRATION OF OPERATORS

In this section, we define stochastic integration for operators with respect to the (boson) basic processes, following Hudson-Parthasarathy [HuP1]. We proceed slowly, beginning with simple Fock space, for which no difficulty of notation arises. The theory will then be extended in two directions : the replacement of simple by multiple Fock space, and the inclusion of an initial space.

Adapted processes of operators

1 We denote by Φ the usual (boson) Fock space over L²(ℝ₊), the standard source of "non-commutative noise" which replaces here the driving Brownian motions of stochastic differential equations, and the time parameter t of classical mechanics.
To give an intuitive meaning to Φ, we may use a Wiener probabilistic interpretation, i.e. Φ = L²(Ω), the sample space for a Brownian motion (X_t). Replacing Φ by a multiple Fock space amounts to considering a d-dimensional Brownian motion, and we deal later with the necessary notational changes.
Let us recall some notation from Chap. IV, §2, subsection 6. To each time t corresponds a tensor product decomposition Φ = Φ_t] ⊗ Φ_[t of Fock space (the "past" space Φ_t] usually is simplified to Φ_t). More generally, if s < t, we have a decomposition Φ = Φ_s] ⊗ Φ_[s,t] ⊗ Φ_[t. If we interpret Fock space as Wiener space, the space Φ_[s,t], for instance, is L²(ℱ_[s,t]), where ℱ_[s,t] is the σ-field generated by the increments X_u − X_s, s < u ≤ t, and the decomposition arises from the independence of Brownian increments. On the other hand, this interpretation suggests considering these spaces as contained in Φ, f ∈ Φ_t] being identified with f ⊗ 1 ∈ Φ, the vacuum vector 1 belonging to every Φ_[s,t]. Instead of an external tensor product Φ_t] ⊗ Φ_[t, we then omit the symbol ⊗ and think of an internal multiplication, defined only for vectors which are well separated in time, and for which 1 acts as unit element.

Given a Hilbert space 𝒥, called the initial space, which may represent some concrete physical system we wish to "couple" to the noise source, we form the tensor product Ψ = 𝒥 ⊗ Φ. If 𝒥 has finite dimension ν and a fixed o.n.b. (e_n) has been chosen, Ψ appears as a mere sum of ν copies of Φ.
In the decomposition Φ = Φ_t] ⊗ Φ_[t, it is often convenient to identify Φ_[t with Φ using the shift operator, and to consider Φ_t] as an initial space. Thus initial spaces occur naturally even in the study of simple Fock space.
The space Ψ also has tensor product decompositions. First we have Ψ = Ψ_t ⊗ Φ_[t, where Ψ_t = 𝒥 ⊗ Φ_t]. On the other hand, there are decompositions Ψ = Φ_t] ⊗ Ψ_[t where Ψ_[t = 𝒥 ⊗ Φ_[t.
We generally omit the tensor product sign in these decompositions, writing simply jφ (j ∈ 𝒥, φ ∈ Φ) or even f_t h_[t (f_t ∈ Ψ_t, h_[t ∈ Φ_[t) as if initial vectors really did commute with elements of Fock space. We do not distinguish j1 from j, so that the initial space 𝒥 is identified with Ψ_0. Similarly, given an operator A (bounded for simplicity) on the initial space (resp. on Φ), we do not distinguish it from its ampliation A ⊗ I (resp. I ⊗ A) on Ψ. For instance, the operators a^ε_t on Φ are extended to Ψ in this way.
unbounded and closed, because the algebraic ampliation A ® I is not closed in general.
An operator B on Ψ (bounded for simplicity) is said to be s-adapted or s-measurable 1 if B maps Ψ_s into itself, and acts as follows on decomposed vectors u_s v_[s ( u_s ∈ Ψ_s, v_[s ∈ Φ_[s, ⊗ omitted )

B( u_s v_[s ) = ( B u_s ) v_[s .

In tensor product language, this means that B is the ampliation B = b ⊗ I_[s of some operator b on 𝒥 ⊗ Φ_s]. A family (B_s) of s-adapted operators is called an adapted process.
The projection (conditional expectation) E_t on Ψ_t isn't t-measurable, but the future projection on Ψ_[t is.
In this chapter, the natural domain for operators will be ℰ, generated by exponential vectors j ℰ(u), with j ∈ 𝒥 and u ∈ L²(ℝ₊). To spare parentheses when an operator is applied, we take j inside and write ℰ(ju), but note that ℰ((tj)u) ≠ ℰ(j(tu)) ! These vectors are decomposable, since we have ℰ(ju) = ℰ(j k_t u) ℰ(k′_t u), with k_t u = u I_[0,t] and k′_t u = u I_]t,∞[ -- whether we open or close the interval at t is unimportant. Then B is t-measurable iff B ℰ(j k_t u) belongs to Ψ_t, and B ℰ(ju) = B ℰ(j k_t u) ⊗ ℰ(k′_t u). For instance, the three basic operator processes (a^ε_t) are adapted on the exponential domain.
In practice, j may be restricted to a dense subspace D of the initial space, and u to a dense subspace S of L²(ℝ₊) (see subsections 8 and 10).
We denote by E_s the projection on the closed subspace Ψ_s of Ψ. In particular, E_0 may be considered as projecting on the initial space. Given an operator A on Ψ, bounded for simplicity, we denote by 𝔼_s(A) the s-adapted operator mapping ℰ(ju) to ( E_s A ℰ(j k_s u) ) ⊗ ℰ(k′_s u). In particular, 𝔼_0(A) may be identified with the
1 This is unusual, but fits better the analogy with random variables.



operator E_0 A E_0 on the initial space. For any vector f ∈ Ψ_s we have ⟨f, Af⟩ = ⟨f, 𝔼_s(A) f⟩, and 𝔼_s(A) can be interpreted as a vacuum conditional expectation of A in the operator sense : for any s-adapted bounded operators B, C, we have

⟨1, C A B 1⟩ = ⟨1, C 𝔼_s(A) B 1⟩ .

This interpretation of 𝔼_s as a conditional expectation allows the definition of operator martingales. For example, the three basic operator processes are martingales. However, at the present time there is no general "martingale theory" for operators, corresponding to the rich classical theory of martingales.
This description of the structure of Fock space continues in subsection 11.
REMARK. In these notes we develop quantum stochastic calculus in Fock space over L²(ℝ₊), or more generally L²(ℝ₊, 𝒦). In Parthasarathy's book the reader will find an interesting extension to Fock space over ℋ, where ℋ is provided with an increasing family (ℋ_t) of Hilbert spaces. This applies in particular to ℋ = L²(Ω) where Ω is a filtered probability space.

Stochastic integrals of operator processes

2 Given an adapted operator process H = (H_t), we are going to define new operator processes I^ε_t(H) = ∫₀ᵗ H_s da^ε_s relative to the three basic processes on simple Fock space. Up to subsection 6, we discuss the subject informally. The usual approach uses approximation by elementary step processes, but we prefer to justify the formulas without approximation, through an "integration by parts formula" for stochastic integrals of operators acting on a vector stochastic integral. For simplicity, we temporarily forget about the initial space.
To apply the operator I^ε(H) to a vector F of Fock space we consider the martingale F_t = 𝔼[ F | ℱ_t ], which has a stochastic integral representation

(2.1)  F_t = c + ∫₀ᵗ f_s dX_s ,   c = 𝔼[F] ,

where (f_s) is a predictable process. We may consider (2.1), independently of any concrete probabilistic interpretation, as a stochastic integral of the adapted curve (f_t) (the delicate point of predictability can now be forgotten) w.r.t. the curve X_t = a⁺_t 1, which has orthogonal increments, and is well related to the continuous tensor product structure of Fock space, see Chap. IV §2 subs. 6. We imitate what is known for vectors and try to compute I^ε_t(H) F_t for t ≤ ∞ as the integral of d( I^ε_s(H) F_s ), using an integration by parts formula consisting of three differentials, I^ε_s(H) dF_s, (dI^ε_s) F_s, and dI^ε_s dF_s. In martingale theory, the last term is the square bracket or Ito correction ; in the operator case, it will be read from the "Ito table" of Chapter IV, §3 subs. 5.
The first term is the s-adapted operator I^ε_s(H) applied to the decomposed vector f_s dX_s, which after integration gives

(2.2)  ∫₀ᵗ I^ε_s(H) f_s dX_s ,

an ordinary stochastic integral provided the required integrability condition is satisfied, but still containing the operator I^ε(H) we are trying to define. The second differential is H_s da^ε_s F_s, and here it is da^ε_s that "sticks into the future". Thus we write F_s = F_s ⊗ 1 and, since da^ε_s 1 = dX_s for ε = + and 0 otherwise, the second term has the value

(2.3)  ∫₀ᵗ H_s F_s dX_s if ε = + ,   0 otherwise.

As for the third differential, the decomposed operator H_s da^ε_s being applied to the decomposed vector f_s dX_s, we get a decomposed vector (H_s f_s)(da^ε_s dX_s), and remember that da^ε_s dX_s = da^ε_s da⁺_s 1 has the value 0 for ε = + , ds for ε = − , dX_s for ε = ∘ . Thus the third term is

(2.4)  0 if ε = + ,   ∫₀ᵗ H_s f_s dX_s if ε = ∘ ,   ∫₀ᵗ H_s f_s ds if ε = − .

The processes (2.3) and (2.4) are known, but (2.2) contains the still undefined operator I^ε(H) ; however, the above computation allows a definition by induction since, so to speak, (f_s) is one chaos lower than (F_s). This induction starts with the chaos of order 0 : the stochastic integral I^ε_t(H)1 is equal to 0 if ε = −, ∘ , and one sees easily that for ε = + we have

(2.5)  I⁺_t(H)1 = ∫₀ᵗ H_s 1 dX_s .

Putting everything together, we have the formula

(2.6)  I^ε_t(H) F_t = ∫₀ᵗ I^ε_s(H) f_s dX_s + ∫₀ᵗ H_s F_s dX_s   if ε = + ,
       I^ε_t(H) F_t = ∫₀ᵗ I^ε_s(H) f_s dX_s + ∫₀ᵗ H_s f_s dX_s   if ε = ∘ ,
       I^ε_t(H) F_t = ∫₀ᵗ I^ε_s(H) f_s dX_s + ∫₀ᵗ H_s f_s ds     if ε = − .

With such a definition in closed form, discrete computations on "Riemann sums" are replaced by classical stochastic calculus. Let us give a few examples.
replaced b y classical stochastic calculus. Let us give a few examples.

3 We are going to prove that, if H_s = u(s) I, where u is bounded with compact support (for instance), the preceding definition of ∫₀ᵗ u(s) da^ε_s leads to the correct value a^ε_u. To this end, we climb on the chaos ladder. It suffices to prove the two processes have the same value on 1, and both satisfy property (2.6). We take for instance ε = + , leaving the other two cases to the reader.
We consider a stochastic integral F = ∫₀^∞ f_s dX_s, and check that G = a⁺_u F satisfies the first line of (2.6) :

(3.1)  G = ∫₀^∞ u_s F_s dX_s + ∫₀^∞ a⁺_u f_s dX_s .

We make use of the chaos expansions of these random variables (Chap. IV, §4, subs. 2),

(3.2)  G(A) = Σ_{s∈A} u(s) F(A − s) .

On the other hand, if the r.v. f_s is given by its chaos expansion ∫ f̂(s, A) dX_A, with f̂(s, A) = 0 unless A ⊂ [0, s[ , the chaos expansion of F is given by

(3.3)  F(A) = f̂(∨A, A−)

where ∨A is the last element of A, and A− = A \ {∨A} (consider the case where f_s belongs to a chaos of a given order). Also F_s = ∫₀ˢ f_r dX_r has a chaos expansion given by (3.3) if ∨A < s, 0 otherwise. Then the right side of (3.2) can be written as

u(∨A) F(A−) + Σ_{s∈A−} u(s) F(A − s) .

The first term is the chaos expansion of ∫ u_s F_s dX_s, while the second term represents ∫ a⁺_u f_s dX_s (2.2). The cases ε = −, ∘ are similar.
REMARK. Remaining in the case where H_s = u(s) I, we can give another example of stochastic integration. Let us introduce the classical selfadjoint operators Q_t = a⁺_t + a⁻_t, P_t = i(a⁺_t − a⁻_t), N_t = a°_t. Then for arbitrary c we may compute from section 2

( ∫₀^∞ u_s d(Q_s + c N_s) ) ( ∫₀^∞ f_s dX_s )

(an operator acting on a vector). On the other hand, we may interpret it as a product of random variables in the probabilistic interpretation of Fock space where X_t is a compensated Poisson process with jump size c : putting F_t = ∫₀ᵗ f_s dX_s, U_t = ∫₀ᵗ u_s dX_s, we apply to this product the usual integration by parts formula, given that d[F, U]_s = u_s f_s d[X, X]_s = u_s f_s (ds + c dX_s), and we get

∫ U_s f_s dX_s + ∫ F_s u_s dX_s + ∫ u_s f_s (ds + c dX_s) .

This is the same result one gets from a convenient linear combination of the formulas (2.6).
4 In the preceding subsection, we took $H_s = u(s)I$ where $u$ was bounded with compact support, and showed that $\int_0^\infty H(s)\,da_s^+ = a_u^+$. Assume now $u$ has support in $[t,\infty[$, and take $H_s = u(s)L$, where $L$ is a $t$-measurable operator, bounded for simplicity. Then we have $\int_0^\infty H_s\,da_s^+ = La_u^+$. This amounts to the remark that, for classical stochastic integrals

$\int L\,f_s\,dX_s = L\int f_s\,dX_s\,.$

From this result we deduce that, for an elementary (previsible) step process $(H_s)$, i.e. a process of the form

$H_s = \sum_i L_i\,I_{\{s_i < s\le s_{i+1}\}}$

relative to a finite subdivision $0 = s_0 < \dots < s_n < s_{n+1} = \infty$, with $L_i$ $s_i$-adapted and $L_n = 0$, $I^\varepsilon(H)$ is the elementary integral $\sum_i L_i\,(a^\varepsilon_{s_{i+1}} - a^\varepsilon_{s_i})$. Thus the definition of operator stochastic integrals we are using coincides with the more usual definition using elementary step processes and approximation.
5 The next easy computation using (2.6) plays a basic role in the approach of Hudson--Parthasarathy. It deals with exponential vectors $F = \mathcal E(u)$. Let us put $F_t = \mathcal E_t(u) = \mathcal E(uI_{]0,t]})$, this process satisfying the differential equation

$\mathcal E_t(u) = 1 + \int_0^t u(s)\,\mathcal E_s(u)\,dX_s\,,$

so that in (2.1) $f_s$ is equal to $u(s)F_s$. We also put

$H_s\,\mathcal E_s(u) = \eta_s\,,\qquad I_t^\varepsilon(H)\,\mathcal E_t(u) = I_t^\varepsilon\,.$

We take for instance $\varepsilon = +$, and apply the computation in subsection 2, thus getting a linear non-homogeneous stochastic differential equation for $I_t^+$

(5.1) $I_t^+ = \int_0^t \eta_s\,dX_s + \int_0^t u_s\,I_s^+\,dX_s$

of a classical type ([DMM], Chap. XXIII, n° 72) ; we give in subsection 6 below precise conditions for the existence and uniqueness of a solution. For the other values of $\varepsilon$ we have similarly

(5.2) $I_t^- = \int_0^t u_s\,\eta_s\,ds + \int_0^t u_s\,I_s^-\,dX_s\,,$

(5.3) $I_t^\circ = \int_0^t u_s\,\eta_s\,dX_s + \int_0^t u_s\,I_s^\circ\,dX_s\,.$

In the case of the annihilation integral (5.2), the equation has a simple explicit solution

(5.4) $I_t^-(H)\,\mathcal E_t(u) = \int_0^t u_s\,H_s\,\mathcal E_t(u)\,ds\,.$

One can check directly that this process satisfies (5.2) -- also (5.4) is almost obvious when $(H_t)$ is an elementary step process, and the formula then is extended by a passage to the limit (we discuss this point below).

Stochastic integrals on the exponential domain

6 The time has come to replace formal computations by a rigorous analysis. We follow Hudson--Parthasarathy, and work with exponential vectors (the method of induction on the chaos will not be discussed here, though it is useful too).

We are going to prove the existence and uniqueness of the stochastic integral process $I_t^+(H)$ on the domain $\mathcal E$ linearly generated by exponential vectors $\mathcal E(u)$.

It is sometimes convenient to restrict this domain by conditions on $u$, like local boundedness, having compact support, etc. This implies only trivial modifications.

We make the following assumptions on the adapted operator process $(H_t)$ : the domain of $H_t$ contains $\mathcal E$, and the $\Phi$-valued mapping $t\mapsto H_t\,\mathcal E(u)$ is measurable, and square integrable on bounded time intervals. Since $H_t\,\mathcal E(u) = H_t\,\mathcal E_t(u)\,\mathcal E(k_tu)$, this is equivalent to saying that $t\mapsto \eta_t = H_t\,\mathcal E_t(u)$ belongs to $L^2_{loc}$.

For our argument, we need first an existence and uniqueness result for the classical (vector) stochastic differential equation

(6.1) $A_t = U_t + \int_0^t u_s\,A_s\,dX_s\,,\qquad (u\in L^2(\mathbb R_+))$

under the assumption that the given curve $(U_t)$ in $\Phi$ is adapted, and satisfies the integrability condition

(6.2) $\int_0^\infty \|U_s\|^2\,|u(s)|^2\,ds < \infty\,,$

while the unknown curve $(A_t)$ in $\Phi$ is required to be adapted and such that the stochastic integral exists. This is a typical case of the Picard iteration method : it leads to a series (whose terms are meaningful according to (6.2))

(6.3) $A_t = U_t + \int_{t>s_1} u_{s_1}\,U_{s_1}\,dX_{s_1} + \int_{t>s_1>s_2} u_{s_1}\,u_{s_2}\,U_{s_2}\,dX_{s_2}\,dX_{s_1} + \cdots$
To study its convergence, we construct (according to Feyel [Fey]) an auxiliary norm of $L^2$ type for adapted stochastic processes $A = (A_t)$ on a bounded interval $[0,T]$, associated with a function $p(t) > 0$

$\|A\| = \Big(\int_0^T \|A_s\|^2\,p(s)\,ds\Big)^{1/2}$

such that the mapping $(A_t)\mapsto\big(\int_0^t A_s\,u_s\,dX_s\big)$ has a norm $K < 1$, and the process $(U_t)$ has a finite norm. Then the Picard approximation procedure will converge to a process $(A_t)$, also of finite norm. The above mapping has a norm smaller than $K$ if $p(t)$ satisfies the inequality

$p(t) \ge \frac{1}{K^2}\,|u(t)|^2\int_t^T p(s)\,ds\,,$

and we take $p$ so that this holds, i.e.

$p(t) = |u(t)|^2\,e^{\frac{1}{K^2}\int_t^T |u(s)|^2\,ds}\,.$

With this choice, $u$ being square integrable, the exponential is a function bounded above and below, and existence and uniqueness follow. Then $p(t)$ has played its role, and we return to our original problem.
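The contraction property of this weight can be illustrated numerically. The following sketch is not from the text; the sample function $u(t) = 1+t$ and the constants $K$, $T$ are arbitrary choices.

```python
import math

# Illustration of the Feyel weight: with
#   p(t) = |u(t)|^2 * exp((1/K^2) * int_t^T |u(s)|^2 ds)
# one has p(t) >= (1/K^2) |u(t)|^2 * int_t^T p(s) ds, which is what
# makes A -> int_0^. A_s u_s dX_s a contraction of norm < K.

K, T, n = 0.5, 1.0, 4000
h = T / n
u2 = [(1.0 + i * h) ** 2 for i in range(n + 1)]       # |u(t_i)|^2

tail_u2 = [0.0] * (n + 1)            # int_{t_i}^T |u|^2 ds (trapezoid)
for i in range(n - 1, -1, -1):
    tail_u2[i] = tail_u2[i + 1] + 0.5 * (u2[i] + u2[i + 1]) * h

p = [u2[i] * math.exp(tail_u2[i] / K**2) for i in range(n + 1)]

tail_p = [0.0] * (n + 1)             # int_{t_i}^T p ds
for i in range(n - 1, -1, -1):
    tail_p[i] = tail_p[i + 1] + 0.5 * (p[i] + p[i + 1]) * h

# the defect p(t) - (1/K^2)|u(t)|^2 int_t^T p ds equals |u(t)|^2 > 0
defect = min(p[i] - u2[i] * tail_p[i] / K**2 for i in range(n + 1))
print(defect > 0.0)    # True: the inequality holds at every grid point
```

Analytically the defect equals $|u(t)|^2$, so it stays bounded away from zero on the whole grid.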
We construct the process $A_t = I_t^+(H)\,\mathcal E_t(u)$ as the solution of the differential equation (6.1) corresponding to $U_t = \int_0^t H_s\,\mathcal E_s(u)\,dX_s = \int_0^t \eta_s\,dX_s$. We have in this case $\|U_t\|^2 = \int_0^t \|\eta_s\|^2\,ds$, which according to our assumptions is a bounded function of $t$. Therefore the series (6.3) is norm convergent on $\mathbb R_+$, and the solution exists.
We introduce a second process of operators $K$ satisfying the same assumptions as $H$, and put similarly $B_t = I_t^+(K)\,\mathcal E_t(v)$, $V_t = \int_0^t K_s\,\mathcal E_s(v)\,dX_s = \int_0^t \chi_s\,dX_s$. Using (5.1) we compute the scalar product $\langle B_t, A_t\rangle$

(6.4) $\frac{d}{dt}\,\langle B_t, A_t\rangle = \bar v_t\,u_t\,\langle B_t, A_t\rangle + u_t\,\langle \chi_t, A_t\rangle + \bar v_t\,\langle B_t, \eta_t\rangle + \langle \chi_t, \eta_t\rangle\,.$

We move the first term on the right to the left side, and multiply both sides by $e^{\int_t^\infty \bar v_s u_s\,ds}$. Then the differential of $\langle B_t, A_t\rangle\,e^{\int_t^\infty \bar v_s u_s\,ds}$, with $t$ as lower limit, appears, and we are now computing the derivative of $\langle I_t^+(K)\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle$. If we perform the easy computations, we get the basic quantum Ito formula of Hudson--Parthasarathy relative to the creation stochastic integral :

(6.5) $\frac{d}{dt}\,\langle I_t^+(K)\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle = \bar v_t\,\langle I_t^+(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\rangle + \langle K_t\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle\,u_t + \langle K_t\,\mathcal E(v),\,H_t\,\mathcal E(u)\rangle\,.$
Why is this an Ito formula (or rather, an integration by parts formula)? If we could move the operator $I_t^+(K)$ to the right of the scalar product on the left, and similarly on the right, we would compute the matrix element between two exponential vectors of the composition $d\big(I_t^+(K)^*\,I_t^+(H)\big)$ on the left, and on the right three terms, the last of which comes from the relation $da_t^-\,da_t^+ = dt$ in the Ito table. On the other hand, this composition of operators is not well defined because of domain problems. For operators given by Maassen kernels, which have a large common stable domain, a true Ito formula becomes meaningful.
The following simpler relation can be proved in the same way (since the space $\mathcal E$ is dense, this formula completely characterizes the stochastic integral)

(6.6) $\langle \mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle = \int_0^t \langle v_s\,\mathcal E(v),\,H_s\,\mathcal E(u)\rangle\,ds\,.$

From (6.5) we may deduce an interesting estimate (due to J.L. Journé) for the norm of a stochastic integral (the most useful form of such estimates will be given later in (9.7)). Let us put for simplicity $I_t^+(H)\,\mathcal E(u) = I_t^+$ and $H_t\,\mathcal E(u) = h_t$. Then we have from (6.5) with $H = K$

$\|I_t^+\|^2 = 2\int_0^t \Re e\,\langle u_s\,I_s^+,\,h_s\rangle\,ds + \int_0^t \|h_s\|^2\,ds\,.$

The first integral on the right side is dominated by $2\,\|u\|\,\big(\int_0^t |\langle I_s^+, h_s\rangle|^2\,ds\big)^{1/2}$. Apply the Schwarz inequality to this scalar product, and bound $\|I_s^+\|$ by its supremum $W_t$ on $s\le t$. Then putting $\int_0^t \|h_s\|^2\,ds = Y_t^2$ we get an inequality

$\|I_t^+\|^2 \le 2\,\|u\|\,W_t\,Y_t + Y_t^2\,.$

The right side is an increasing function of $t$, thus we may replace the left side by its sup and get

$W_t^2 \le 2\,\|u\|\,W_t\,Y_t + Y_t^2\,,$

from which we deduce the following inequality for $W_t$

(6.7) $\|I_t^+(H)\,\mathcal E(u)\| \le \big(\|u\| + \sqrt{\|u\|^2 + 1}\,\big)\,\Big(\int_0^t \|H_s\,\mathcal E(u)\|^2\,ds\Big)^{1/2}\,.$
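The elementary step from the quadratic inequality to the constant in (6.7) can be sanity-checked numerically. This is a sketch, not from the text; the ranges for $a = \|u\|$ and $Y$ are arbitrary.

```python
import math
import random

# If W, Y >= 0 satisfy W^2 <= 2 a W Y + Y^2 with a = ||u||, then
# W <= (a + sqrt(a^2 + 1)) Y, the larger root of the quadratic.

random.seed(0)
for _ in range(1000):
    a, Y = random.uniform(0, 5), random.uniform(0.1, 5)
    bound = (a + math.sqrt(a * a + 1)) * Y
    # the bound itself saturates the inequality ...
    assert abs(bound**2 - (2 * a * bound * Y + Y * Y)) < 1e-8 * max(1.0, bound**2)
    # ... and anything slightly larger violates it
    W = bound * 1.001
    assert W**2 > 2 * a * W * Y + Y * Y
print("ok")
```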
As an application, we get the continuous dependence of $I_t^+(H)\,\mathcal E(u)$ on the process $(H_t)$ for fixed $u$. Therefore, if this process is approximated by elementary step processes in the topology given by the right side, the corresponding stochastic integrals converge strongly on exponential vectors, thus showing the agreement of our definition with the usual one.

A technical question that concerns us little here is the following : how can we approximate an operator process $(H_t)$ by elementary step processes? A partial, but practically sufficient answer is given in [Par1], Prop. III.25.7, p. 190.
7 We have discussed the stochastic integral $I_t^+(H)$. Let us now discuss $I_t^-(H)$ and $I_t^\circ(H)$, without giving all the details.

The case of $I_t^-$ is simpler. The formula similar to (6.6) is

(7.1) $\langle \mathcal E(v),\,I_t^-(H)\,\mathcal E(u)\rangle = \int_0^t \langle \mathcal E(v),\,H_s\,\mathcal E(u)\,u_s\rangle\,ds\,.$

Comparing it with (6.6) we see that $I_t^+(H)$ and $I_t^-(H^*)$ are mutually adjoint on exponential vectors (provided the adjoint process $(H_t^*)$ exists and satisfies the same hypotheses as $(H_t)$). In the formula similar to (6.5), only the two terms of a classical integration by parts formula are present on the right side :

(7.2) $\frac{d}{dt}\,\langle I_t^-(K)\,\mathcal E(v),\,I_t^-(H)\,\mathcal E(u)\rangle = \langle I_t^-(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\,u_t\rangle + \langle v_t\,K_t\,\mathcal E(v),\,I_t^-(H)\,\mathcal E(u)\rangle\,.$

There is also a formula involving two different stochastic integrals (the similar formula with $+$ and $-$ interchanged is deduced by taking adjoints)

(7.3) $\frac{d}{dt}\,\langle I_t^-(K)\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle = \langle v_t\,I_t^-(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\rangle + \langle v_t\,K_t\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle\,.$

The estimate similar to (6.7) is slightly simpler, and is deduced immediately from the explicit formula (5.4)

(7.4) $\|I_t^-(H)\,\mathcal E(u)\| \le \|u\|\,\Big(\int_0^t \|H_s\,\mathcal E(u)\|^2\,ds\Big)^{1/2}\,.$

This formula can be improved as follows : we decompose $\mathcal E(u) = \mathcal E_t(u)\,\mathcal E(k_tu)$, take the second factor out of the adapted operator, apply the formula to $\mathcal E_t(u)$, and then reintroduce the second factor. We thus see that $\|u\|$ can be replaced by $\|uI_{]0,t]}\|$, which tends to 0 with $t$. This cannot be done with (6.7).
Let us discuss finally the case of $I_t^\circ(H)$. A look at (5.1) and (5.3) shows that (for a fixed exponential vector) the discussion is essentially the same as for $I_t^+(H)$, with $u_s\,H_s\,\mathcal E(u)$ instead of $H_s\,\mathcal E(u)$. Thus we need a stronger integrability condition on the process $(H_t)$, namely

(7.5) $\int_0^t \|u_s\,H_s\,\mathcal E(u)\|^2\,ds < \infty\,.$

To make sure that this condition is fulfilled, one sometimes restricts the domain to exponential vectors $\mathcal E(u)$ such that $u$ is (locally) bounded. The formula which replaces (6.6) and (7.1) is

(7.6) $\langle \mathcal E(v),\,I_t^\circ(H)\,\mathcal E(u)\rangle = \int_0^t \langle v_s\,\mathcal E(v),\,H_s\,\mathcal E(u)\,u_s\rangle\,ds\,.$

That which replaces (6.5) and (7.2) is

(7.7) $\frac{d}{dt}\,\langle I_t^\circ(K)\,\mathcal E(v),\,I_t^\circ(H)\,\mathcal E(u)\rangle = \langle v_t\,I_t^\circ(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\,u_t\rangle + \langle v_t\,K_t\,\mathcal E(v),\,I_t^\circ(H)\,\mathcal E(u)\,u_t\rangle + \langle v_t\,K_t\,\mathcal E(v),\,H_t\,\mathcal E(u)\,u_t\rangle\,.$

And the inequality (6.7) becomes

(7.8) $\|I_t^\circ(H)\,\mathcal E(u)\| \le \big(\|u\| + \sqrt{\|u\|^2 + 1}\,\big)\,\Big(\int_0^t \|u_s\,H_s\,\mathcal E(u)\|^2\,ds\Big)^{1/2}\,.$

Note that formula (7.7) contains a "square bracket" term, since $da_t^\circ\,da_t^\circ$ is a non-trivial term in the Ito table. It remains to write down the two formulas involving mixed stochastic integrals, one of which also has an Ito correction

(7.9) $\frac{d}{dt}\,\langle I_t^\circ(K)\,\mathcal E(v),\,I_t^-(H)\,\mathcal E(u)\rangle = \langle I_t^\circ(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\,u_t\rangle + \langle v_t\,K_t\,\mathcal E(v),\,I_t^-(H)\,\mathcal E(u)\,u_t\rangle\,.$

Here comes the last Ito formula. The reader should not worry : the case of multiple Fock space will give us a unified form, much easier to remember.

(7.10) $\frac{d}{dt}\,\langle I_t^\circ(K)\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\rangle = \langle v_t\,I_t^\circ(K)\,\mathcal E(v),\,H_t\,\mathcal E(u)\rangle + \langle v_t\,K_t\,\mathcal E(v),\,I_t^+(H)\,\mathcal E(u)\,u_t\rangle + \langle v_t\,K_t\,\mathcal E(v),\,H_t\,\mathcal E(u)\rangle\,.$

REMARKS. 1) Adding an initial space $\mathcal J$ would lead to the same formulas, except that $\mathcal E(u)$ would be replaced everywhere by $\mathcal E(ju)$, $j\in\mathcal J$.

2) In the theory of stochastic differential equations, $dt$ also appears as an integrand. The corresponding formula is a trivial application of the Schwarz inequality. Putting $I_t(H) = \int_0^t H_s\,ds$, we have

(7.11) $\|I_t(H)\,\mathcal E(u)\| \le \sqrt t\,\Big(\int_0^t \|H_s\,\mathcal E(u)\|^2\,ds\Big)^{1/2}$

and here, as in (7.4), the constant in front of the integral tends to 0 with $t$.



Multiple Fock spaces

8 In the case of a Fock space of finite multiplicity $d$, the basic curve $(X_t)$ becomes a vector curve $(\mathbf X_t)$ with components $X_t^\alpha$, $\alpha = 1,\dots,d$ (in the Wiener interpretation, the components of a $d$-dimensional Brownian motion). Exponential vectors have a vector argument $\mathbf u$, with components $u_\alpha$, and the linear differential equation they satisfy becomes

(8.1) $\mathcal E_t(j\mathbf u) = j + \int_0^t \mathcal E_s(j\mathbf u)\,\mathbf u_s\cdot d\mathbf X_s\,.$

The boldface letters $\mathbf u$, $\mathbf X$ are here for clarity only, and we are under no obligation to use them forever.

The case of infinite multiplicity is not more difficult to handle, following Mohari--Sinha [MoS]. The argument $u$ of an exponential vector then is assumed to have only finitely many components $u_\alpha$ different from 0. The corresponding indexes constitute the finite set $S(u)$ (the "support" of $u$). Since we make a convention $u_0 = 1$ below, we also include 0 in $S(u)$.
As in Chap. V subs. 2, the index $\varepsilon$ for the basic operators becomes a matrix index $(^\alpha_\beta)$. Then $da^\alpha_\beta(t)$ is a number operator for $\alpha = \beta$, an exchange operator if $\alpha\neq\beta$, and we represent the creation and annihilation operators by means of the additional index 0 : $(^\alpha_0)$ indicates a creation operator, $(^0_\alpha)$ an annihilation operator, and $da^0_0(t)$ means $I\,dt$. This differs from the conventions of other authors (see the "warning" in Chapter V, §2, subsection 1).

Then we have

$da^\alpha_\beta(t)\,dX_t^\gamma = \delta^\gamma_\beta\,dX_t^\alpha\,,\qquad da^0_\alpha(t)\,dX_t^\gamma = \delta^\gamma_\alpha\,dt\,,\qquad da^\alpha_0(t)\,\mathbf 1 = dX_t^\alpha\,.$

The stochastic integral operator $I_t^\varepsilon(H) = \int_0^t H_s\,da_s^\varepsilon$ acts as follows on exponential vectors : $A_t = I_t^\varepsilon(H)\,\mathcal E_t(ju)$ satisfies a linear differential equation

(8.2) $A_t = \int_0^t dh_s^\varepsilon + \int_0^t A_s\,\mathbf u_s\cdot d\mathbf X_s$

with

(8.3) $dh_s^\varepsilon = H_s\,\mathcal E_s(ju)\,dX_s^\alpha$ if $\varepsilon = (^\alpha_0)$ ; $dh_s^\varepsilon = H_s\,\mathcal E_s(ju)\,u_\alpha(s)\,ds$ if $\varepsilon = (^0_\alpha)$ ; $dh_s^\varepsilon = H_s\,\mathcal E_s(ju)\,u_\beta(s)\,dX_s^\alpha$ if $\varepsilon = (^\alpha_\beta)$.

The first two lines take a form similar to the last one if we make the convention that $X_s^0 = s$, $u_0 = 1$. As in Chapter V, using the letters $\rho,\sigma,\tau\dots$ instead of $\alpha,\beta\dots$ indicates that the index 0 is allowed, and we use the (Evans) modified Kronecker symbol $\hat\delta^\rho_\sigma$, which differs from the usual one by the property $\hat\delta^0_0 = 0$. Though we do not omit summation signs, summing on diagonal indexes provides a useful control on formulas, and therefore we allow for raised indexes $u^\rho(t) = u_\rho(t)$, in particular $u^0(t) = 1$. Then

the scalar product $\langle u, v\rangle$ (which does not involve the additional index 0) is equal to $\sum_\alpha \int \bar u_\alpha(s)\,v_\alpha(s)\,ds$.

With these conventions, formulas (8.2) and (8.3) are true also for $\varepsilon = (^0_0)$, provided the appropriate integrability condition $\int_0^t \|H_s\,\mathcal E(ju)\|\,ds < \infty$ is satisfied. In the same way, if $\varepsilon = (^\sigma_\rho)$, $I_t^\varepsilon(H) = \int_0^t H_s\,da^\sigma_\rho(s)$, the formula

(8.4) $\langle \mathcal E(\ell v),\,I_t^\varepsilon(H)\,\mathcal E(ju)\rangle = \int_0^t \langle \mathcal E(\ell v),\,H_s\,\mathcal E(ju)\rangle\,\bar v^\sigma(s)\,u_\rho(s)\,ds\,,$

which includes (6.6), (7.1), (7.6), is true for all indexes, the case of $\varepsilon = (^0_0)$ being trivial. Similarly, if $\varepsilon = (^\sigma_\rho)$ and $\eta = (^\tau_\lambda)$, we may generalize (6.5), (7.2)... to all indexes. The same formula will be written again as (9.3), in a slightly better notation.

(8.5) $\langle I_t^\eta(K)\,\mathcal E(\ell v),\,I_t^\varepsilon(H)\,\mathcal E(ju)\rangle = \int_0^t \langle I_s^\eta(K)\,\mathcal E(\ell v),\,\bar v^\sigma(s)\,u_\rho(s)\,H_s\,\mathcal E(ju)\rangle\,ds$
$\qquad + \int_0^t \langle v^\lambda(s)\,\bar u_\tau(s)\,K_s\,\mathcal E(\ell v),\,I_s^\varepsilon(H)\,\mathcal E(ju)\rangle\,ds$
$\qquad + \hat\delta^\sigma_\tau \int_0^t \langle v^\lambda(s)\,K_s\,\mathcal E(\ell v),\,H_s\,\mathcal E(ju)\,u_\rho(s)\rangle\,ds\,.$
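The contraction rule behind the $\hat\delta$ term can be encoded and checked mechanically. The sketch below is not from the text; the finite index range $0..2$ and the encoding of a basis differential $da^a_b$ as a pair $(a,b)$ are our choices.

```python
# Evans-notation Ito table:  da^a_b da^c_d = hdelta(c, b) * da^a_d ,
# where hdelta is the modified Kronecker symbol with hdelta(0, 0) = 0
# (the index 0 standing for the "time" differential da^0_0 = I dt).

def hdelta(i, j):
    return 1 if i == j and i != 0 else 0

def compose(x, y):
    """Product of basis differentials da^a_b and da^c_d; returns a
    (coefficient, basis element) pair."""
    (a, b), (c, d) = x, y
    return hdelta(c, b), (a, d)

# special cases of the simple Fock space table:
assert compose((0, 1), (1, 0)) == (1, (0, 0))   # da^- da^+ = dt
assert compose((1, 0), (0, 1)) == (0, (1, 1))   # da^+ da^- = 0
assert compose((1, 1), (1, 1)) == (1, (1, 1))   # da^o da^o = da^o

# the table is associative, as a check over all index triples shows
idx = [(i, j) for i in range(3) for j in range(3)]
for x in idx:
    for y in idx:
        for z in idx:
            cl, w = compose(x, y)
            cl2, wl = compose(w, z)
            cr, w2 = compose(y, z)
            cr2, wr = compose(x, w2)
            assert cl * cl2 == cr * cr2
            assert cl * cl2 == 0 or wl == wr
print("Evans Ito table is associative")
```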

The proof is essentially the same as for (6.4--5) : one first introduces the vector processes $A_t = I_t^\varepsilon(H)\,\mathcal E_t(ju)$, $B_t = I_t^\eta(K)\,\mathcal E_t(\ell v)$, which satisfy equations of the form (8.2)

$A_t = \int_0^t A_s\,\mathbf u_s\cdot d\mathbf X_s + \int_0^t H_s\,\mathcal E_s(ju)\,u_\rho(s)\,dX_s^\sigma\,,$

$B_t = \int_0^t B_s\,\mathbf v_s\cdot d\mathbf X_s + \int_0^t K_s\,\mathcal E_s(\ell v)\,v_\lambda(s)\,dX_s^\tau\,.$

One then computes $\frac{d}{dt}\,\langle B_t, A_t\rangle$ as

$\langle \mathbf v_t,\mathbf u_t\rangle\,\langle B_t, A_t\rangle + \langle v^\lambda(t)\,\bar u_\tau(t)\,K_t\,\mathcal E_t(\ell v),\,A_t\rangle + \langle B_t,\,H_t\,\mathcal E_t(ju)\,\bar v^\sigma(t)\,u_\rho(t)\rangle + \hat\delta^\sigma_\tau\,\langle v^\lambda(t)\,K_t\,\mathcal E_t(\ell v),\,H_t\,\mathcal E_t(ju)\,u_\rho(t)\rangle$

and the result is simplified by the same method we used after (6.4).

If initial operators $\eta_0$, $\chi_0$ acting on $\Psi_0$ are included in $I^\eta(K)$, $I^\varepsilon(H)$, the term $\langle \eta_0\,\mathcal E(\ell v),\,\chi_0\,\mathcal E(ju)\rangle$ must be added to the right hand side of (8.5).
9 Instead of individual stochastic integrals, we now consider processes of operators, acting on the exponential domain, which admit a stochastic integral representation involving all the basic processes together (also $da^0_0(t) = I\,dt$)

(9.1) $I_t(H) = \eta + \sum_{\rho,\sigma}\int_0^t H^\rho_\sigma(s)\,da^\sigma_\rho(s)\,.$

Such processes may be considered as substitutes for "operator semimartingales". The initial operator $\eta$ acting on $\Psi_0$ will often be omitted for the sake of simplicity.

The operator $I_t(H)$ will be applied to an exponential martingale $\mathcal E_t(ju)$. Of course the conditions of existence of the individual stochastic integrals (which are easy extensions of the simple Fock space case) must be fulfilled, but we should keep in mind the case of infinite multiplicity, and seek a simple, global integrability condition implying the existence of (9.1) as a whole. We recall that in this case, the argument $u$ has finitely many non-zero components, whose indexes (and 0) constitute the "support" $S(u)$. Only the operators $H^\rho_\sigma$ with $\rho\in S(u)$ contribute to $I_t(H)\,\mathcal E(ju)$.

Assuming first we have a finite number of $H^\rho_\sigma\neq 0$ in (9.1), we have for arbitrary exponential vectors $\mathcal E(ju)$ and $\mathcal E(\ell v)$

(9.2) $\langle \mathcal E(\ell v),\,I_t(H)\,\mathcal E(ju)\rangle = \sum_{\rho,\sigma}\int_0^t \langle \mathcal E(\ell v),\,H^\rho_\sigma(s)\,\mathcal E(ju)\rangle\,\bar v^\sigma(s)\,u_\rho(s)\,ds\,.$

Note that an index has been raised to straighten up the book-keeping of indexes. The initial operator would contribute a term $\langle \ell,\,\eta j\rangle\,e^{\langle v,u\rangle}$.
The next formula requires a second "semimartingale"

$I_t(K) = \chi + \sum_{\lambda,\mu}\int_0^t K^\lambda_\mu(s)\,da^\mu_\lambda(s)\,.$

Then we have the fundamental Quantum Ito Formula. If initial operators were included, they would contribute an additional term $\langle \chi\ell,\,\eta j\rangle\,e^{\langle v,u\rangle}$.

(9.3) $\langle I_t(K)\,\mathcal E(\ell v),\,I_t(H)\,\mathcal E(ju)\rangle = \sum_{\rho,\sigma}\int_0^t \langle I_s(K)\,\mathcal E(\ell v),\,H^\rho_\sigma(s)\,\mathcal E(ju)\,\bar v^\sigma(s)\,u_\rho(s)\rangle\,ds$
$\qquad + \sum_{\lambda,\mu}\int_0^t \langle v^\lambda(s)\,\bar u_\mu(s)\,K^\lambda_\mu(s)\,\mathcal E(\ell v),\,I_s(H)\,\mathcal E(ju)\rangle\,ds$
$\qquad + \sum_{\rho,\sigma,\lambda,\mu}\hat\delta^\sigma_\mu\int_0^t \langle v^\lambda(s)\,K^\lambda_\mu(s)\,\mathcal E(\ell v),\,H^\rho_\sigma(s)\,\mathcal E(ju)\,u_\rho(s)\rangle\,ds\,.$

Note how the balance of indexes is realized.


Inequalities. We now extend to multiple Fock space the basic estimates concerning the $L^2$ norm of the vector

(9.4) $A(t) = I_t(H)\,\mathcal E(ju)\,.$

In the case of infinite multiplicity these estimates may be used to extend the scope of the representation (9.1) to infinitely many summands -- details are left to the reader. To abbreviate notation, let us put

$h^\rho_\sigma(t) = H^\rho_\sigma(t)\,\mathcal E(ju)\,,\quad h_\sigma(t) = \sum_\rho u_\rho(t)\,h^\rho_\sigma(t)\,,\quad h(t) = \sum_\sigma \bar u^\sigma(t)\,h_\sigma(t)\,,$

$c^2(t) = \sum_\alpha \|h_\alpha(t)\|^2\,,\qquad a_0 = \eta\,\mathcal E(ju)\,.$

It will be important below to include the initial term. We rewrite (9.3) as follows

(9.5) $\|A(t)\|^2 = \|a_0\|^2 + 2\,\Re e\int_0^t \langle A(s),\,h(s)\rangle\,ds + \int_0^t c^2(s)\,ds\,.$

One could apply the same method as for (6.7), and this was initially the road taken by most authors. It turns out that an inequality due to the Delhi school is less cumbersome and more efficient. We introduce the measure $\nu(u,dt) = (1 + \|u(t)\|^2)\,dt$ (this last norm involving only the components $u_\alpha$), so that for instance

(9.6) $\int_0^t \|h_\sigma(s)\|^2\,ds \le \sum_\rho \int_0^t \|h^\rho_\sigma(s)\|^2\,\nu(u,ds)\,.$

Then the basic Mohari--Sinha inequality is, putting $\nu(u,t) = \int_0^t \nu(u,ds)$ and including the initial term $a_0$

(9.7) $\|A(t)\|^2 \le e^{\nu(u,t)}\,\Big(\|a_0\|^2 + 2\sum_{\rho,\sigma}\int_0^t \|h^\rho_\sigma(s)\|^2\,\nu(u,ds)\Big)\,.$

In the case of infinite multiplicity, the summation over $\rho$ concerns only the finitely many indices in the "support" $S(u)$. Inequality (9.7) is deduced as follows from (9.5). The scalar product $2\,\Re e\,\langle A, h\rangle$ is dominated by

$2\,\|A\|\,\|\sum_\sigma \bar u^\sigma h_\sigma\| \le 2\,\|A\|\sum_\sigma |u^\sigma|\,\|h_\sigma\|\,,$

and then the inequality $2xy\le x^2+y^2$ is used to produce $\|A\|^2\,(\sum_\sigma |u^\sigma|^2) + \sum_\sigma \|h_\sigma\|^2$. That is,

$\|A(t)\|^2 \le \|a_0\|^2 + \int_0^t \|A(s)\|^2\,\nu(u,ds) + 2\int_0^t \sum_{\rho,\sigma}\|h^\rho_\sigma(s)\|^2\,\nu(u,ds)\,.$

Then (9.7) follows from the classical Gronwall lemma ([Par1], Prop. 25.5, p. 187).
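The Gronwall step can be illustrated by a discrete computation. This is a sketch, not from the text; the sample density of $\nu$ and the constants are arbitrary choices. A function saturating the hypothesis $g(t)\le a + \int_0^t g\,d\nu$ stays below $a\,e^{\nu(t)}$.

```python
import math

# Discrete Gronwall lemma: if g(t) <= a + int_0^t g(s) nu(ds) with
# nu(ds) = (1 + |u(s)|^2) ds, then g(t) <= a * exp(nu(t)).  We run the
# extremal (saturating) case by Euler's scheme.

a, T, n = 2.0, 1.0, 100000
h = T / n
u2 = lambda s: (1.0 + s) ** 2          # |u(s)|^2, an arbitrary sample

integral, nu = 0.0, 0.0
for i in range(n):
    s = i * h
    w = 1.0 + u2(s)                    # density of nu(ds)
    g = a + integral                   # saturate the hypothesis
    integral += g * w * h
    nu += w * h

g = a + integral
print(g, a * math.exp(nu))             # extremal g stays below the bound
```

The Euler product $\prod_i(1 + w_ih)$ is bounded by $e^{\sum_i w_ih}$, so the discrete inequality is exact, not just approximate.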
Iteration. It is necessary, for the theory of stochastic differential equations, to have estimates of the same kind for iterated stochastic integrals. We use abbreviated notation as follows. First of all, one single letter $\varepsilon_i$ may denote an Evans pair $(^\sigma_\rho)$, and one single letter $\mu$ may denote an $n$-tuple $(\varepsilon_1,\dots,\varepsilon_n)$ (the length $n$ of $\mu$ being denoted by $|\mu|$). We consider a family of initial operators $H_\mu(s_1,\dots,s_n)$, and put

$I_\mu(t,H) = \int_{\{t>s_1>\dots>s_n\}} H_\mu(s_1,\dots,s_n)\,da^{\varepsilon_1}_{s_1}\dots da^{\varepsilon_n}_{s_n}$

(integration on the decreasing simplex makes the proof by induction easier). If an index $\varepsilon_i$ appears as $(^\rho_\sigma)$ in $H_\mu$, it appears as $(^\sigma_\rho)$ in the differential element to keep the balance. We will need an estimate of a sum of iterated integrals

(9.8) $\|\sum_\mu I_\mu(t,H)\,\mathcal E(ju)\|^2 \le \sum_n \big(2e^{\nu(u,t)}\big)^n \sum_{|\mu|=n}\int_{t>s_1>\dots>s_n} \|H_\mu(s_1,\dots,s_n)\,\mathcal E(ju)\|^2\,\nu(ds_1)\dots\nu(ds_n)$

This formula is a substitute for the isometry property of the standard chaos expansion. It is deduced from (9.7) by an easy induction, isolating the largest time $s_1$ -- the initial term in (9.7) plays an essential role here.

Let us give another useful estimate of an individual iterated integral

$I_\mu = \int_{s_1<\dots<s_n<t} da^{\varepsilon_1}_{s_1}\dots da^{\varepsilon_n}_{s_n}$

which takes into account the number of times the special index $\varepsilon_0 = (^0_0)$ appears in $\mu$. Let us denote by $\lambda$ the multi-index of length $k$ obtained by suppressing from $\mu$ the special index. Then $\mu$ is constructed from $\lambda$ by inserting $p_i$ times $\varepsilon_0$ after each index $\varepsilon_i$ of $\lambda$ (the value $p_i = 0$ being allowed), and also $p_0$ times $\varepsilon_0$ in front of $\lambda$. We denote by $p = \sum_i p_i = n-k$ the number of times $\varepsilon_0$ appears. Then we have, imitating a result of Ben Arous presented in Sém. Prob. XXV, p. 425,

(9.9) $\|I_\mu\,\mathcal E(ju)\|^2 \le \|\mathcal E(ju)\|^2\,\big(2e^{\nu(u,t)}\big)^k\,\frac{\nu(u,t)^{n+p}}{(n+p)!}\,\prod_i \frac{(2p_i)!}{(p_i!)^2}\,.$

Indeed, the corresponding integral is also equal to

$\int_{s_1<\dots<s_k<t} \frac{s_1^{p_0}}{p_0!}\,\frac{(s_2-s_1)^{p_1}}{p_1!}\dots\frac{(t-s_k)^{p_k}}{p_k!}\,da^{\eta_1}_{s_1}\dots da^{\eta_k}_{s_k}$

where $\eta_1,\dots,\eta_k$ are the indexes of $\lambda$. Then we apply the estimate (9.8), leading to the product of $\|\mathcal E(ju)\|^2$ by the scalar

$\frac{(2e^{\nu(u,t)})^k}{(p_0!)^2\dots(p_k!)^2}\int s_1^{2p_0}\,(s_2-s_1)^{2p_1}\dots(t-s_k)^{2p_k}\,\nu(ds_1)\dots\nu(ds_k)\,.$

We bound $(s_{i+1}-s_i)^{2p_i}$ by

$\nu(\,]s_i,s_{i+1}]\,)^{2p_i} = (2p_i)!\int_{s_i<u_1<\dots<u_{2p_i}<s_{i+1}} \nu(du_1)\dots\nu(du_{2p_i})$

and integrating on all the variables together we get the estimate (9.9).
Recall that $p = \sum_i p_i$. It is easy to bound the product on the right side of (9.9). Putting $c(m) = (2m)!/(m!)^2$ we have $c(m+1) \le 4\,c(m)$, therefore $c(m) \le 4^m$, and the product itself is at most $4^p$.
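This combinatorial bound is easy to verify directly (a quick check, not from the text):

```python
import math

# Bound used after (9.9): with c(m) = (2m)!/(m!)^2 one has
# c(m+1) <= 4 c(m), hence c(m) <= 4^m, so the product over i of
# c(p_i) is at most 4^(sum of the p_i) = 4^p.

def c(m):
    return math.factorial(2 * m) // math.factorial(m) ** 2

for m in range(0, 50):
    assert c(m + 1) <= 4 * c(m)
    assert c(m) <= 4 ** m
print("c(m) <= 4^m verified for m <= 50")
```

The step $c(m+1)/c(m) = 2(2m+1)/(m+1) \le 4$ is what the loop checks term by term.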
10 Let us discuss the uniqueness of a stochastic integral representation (9.1). We assume for simplicity that the space $S$ of arguments $u$ of exponential vectors consists of all square integrable functions, with finitely many components different from 0, right continuous with left limits. Assume the stochastic integral $I_t(H)$ is equal to 0 on this exponential domain, for every $t > 0$. According to (9.2), the following expression is equal to 0 for arbitrary exponential vectors $\mathcal E(\ell v)$ and $\mathcal E(ju)$

(10.1) $\langle \mathcal E(\ell v),\,I_t(H)\,\mathcal E(ju)\rangle = \sum_{\rho,\sigma}\int_0^t \langle \mathcal E(\ell v),\,H^\rho_\sigma(s)\,\mathcal E(ju)\rangle\,\bar v^\sigma(s)\,u_\rho(s)\,ds\,.$

It follows by differentiation in $t$ that for given $u$, $v$

(10.2) $\sum_{\rho,\sigma}\langle \mathcal E(\ell v),\,H^\rho_\sigma(t)\,\mathcal E(ju)\rangle\,\bar v^\sigma(t)\,u_\rho(t) = 0\quad$ for a.e. $t$,

the exceptional set depending possibly on $u$, $v$. Let us assume for a moment that $H^\rho_\sigma(t)\,\mathcal E(ju)$ is right continuous in $t$. Since $u$, $v$ are right continuous, it follows that (10.2) is true for every $t$. We may now replace $u$, $v$ by $u'$, $v'$, right continuous, equal to $u$, $v$ in the interval $]0,t[$ but arbitrary (except for the regularity conditions) on $[t,\infty[$. By adaptation (10.2) becomes

(10.3) $\sum_{\rho,\sigma}\langle \mathcal E_t(\ell v),\,H^\rho_\sigma(t)\,\mathcal E_t(ju)\rangle\,e^{\int_t^\infty \langle v'_s,u'_s\rangle\,ds}\,\bar v'^\sigma(t)\,u'_\rho(t) = 0\,.$

Since the values $u'_\rho(t)$, $v'_\sigma(t)$ are arbitrary (except for $\rho = 0$ or $\sigma = 0$) and the exponential is strictly positive, we find that the scalar product must be 0. Since $v$ is arbitrary, we have $H^\rho_\sigma(t)\,\mathcal E_t(ju) = 0$, and by adaptation the same result for $H^\rho_\sigma(t)\,\mathcal E(ju)$.
This settles the problem under our hypotheses on $u$. The above proof requires stability of the arguments of exponential vectors under "piecing together", and would need to be modified to deal, say, with smooth or merely continuous functions. Also, if the operators $H^\rho_\sigma(s)$ are closed, the fact they are 0 on a dense domain implies they are 0 everywhere.

A minor technical point on this domain enlargement : differences of unbounded closed operators are not necessarily closed, and the proof of uniqueness must take a slightly different form. See Attal [Att1].
To deal with functions which are not right continuous, one uses for the first time a strong measurability assumption on the processes $H^\rho_\sigma(s)$ : as strong limits of sequences of step processes, they satisfy Lusin's property. Namely there exists a partition of $\mathbb R_+$ into a null set and a sequence of disjoint closed sets $F_n$, on each of which $H^\rho_\sigma(\cdot)$ is continuous. Discarding again a countable set, we may assume $F_n$ is right closed without right isolated points. Then the same reasoning as above can be performed on each $F_n$, with the conclusion that $H^\rho_\sigma(t) = 0$ for a.e. $t$.

If no closedness assumption is made, simple counterexamples to uniqueness can be given. See Lindsay [Lin1] and Attal [Att1].

REMARK. We do not discuss in these notes the general problem of the representation of operator martingales by stochastic integrals with respect to the processes $da^\sigma_\rho(t)$ ($da^0_0(t)$ not included), corresponding to the predictable representation property of classical Brownian motion or Poisson processes. The discrete case indicates that a result of this sort could be expected, and one is known to hold in several interesting cases ([HLP], [PaS1]), but a counterexample due to J.L. Journé (Sém. Prob. XX, p. 313 ; see also the note by Parthasarathy in the same volume p. 317) excludes the possibility of a naive general theorem.

Shift operators on Fock space

The following two subsections are not directly related to stochastic calculus, and can be omitted until they are needed. They introduce some additional material necessary for section 3. The reader should probably skip them for the moment, and read the examples of stochastic calculus in subsection 13.
11 There is an "abstract nonsense" part of the theory of classical Markov processes, which investigates the filtration and the shift operators on the canonical sample space. We are going to discuss similar definitions on the Hilbert space $\Psi = \mathcal J\otimes\Phi$ (a Fock space of arbitrary multiplicity). Two different shift operators occur in classical probability

$X_t(\Theta_s\omega) = X_{s+t}(\omega)\,,\qquad X_t(\Theta_s\omega) = X_{s+t}(\omega) - X_s(\omega)\,,$

the first one being used for Markov processes, and the second one requiring more structure, and being used for processes with independent increments. The shift we investigate here corresponds to the second one, and acts trivially on initial vectors (for a discussion of this point, see Meyer [Mey3] and the comments by L. Accardi in the same volume). A theory extending the first shift does not seem to exist at the present time.
We refer to subsection 1 for the general notation, the definition of the conditional expectations $E_t$ for vectors (those for operators will be denoted by $\mathbb E_t$), the decompositions of $\Psi$ as $\Psi_t\otimes\Phi_{[t}$ and $\Phi_t\otimes\Phi_{[t}$.

The Hilbert space $L^2(\mathbb R_+)$ carries shift operators $\tau_t$ and $\tau_t^*$, adjoint to each other, and acting as follows (we do not use boldface letters for vector arguments)

$\tau_tu(s) = u(s-t)$ for $s\ge t$, $= 0$ otherwise ; $\tau_t^*u(s) = u(s+t)\,.$
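In a discretized picture these two shifts are just right- and left-shifts of a sequence, and their adjointness can be checked directly. This is a sketch, not from the text; the grid and sample vectors are arbitrary, and $u$ is kept zero near the right end of the window so that the finite truncation is exact.

```python
# Discrete sketch of tau_t (right shift, padding with 0) and tau_t^*
# (left shift); the two are mutually adjoint and tau_t^* tau_t = I.

def tau(u, k):               # right shift by k grid points
    return [0.0] * k + u[:len(u) - k]

def tau_star(u, k):          # left shift by k grid points
    return u[k:] + [0.0] * k

def inner(u, w):
    return sum(x * y for x, y in zip(u, w))

k = 3
u = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 0.0, 0.0, 0.0]  # room at the end
w = [float(i * i % 7) for i in range(10)]

assert abs(inner(tau(u, k), w) - inner(u, tau_star(w, k))) < 1e-12
assert tau_star(tau(u, k), k) == u      # tau_t is isometric
print("adjointness and isometry check out")
```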

By second quantization of these operators, we construct operators $\Theta_t$, $\Theta_t^*$ on $\Phi$. The shift operators $\Theta_t$ on $\Phi$ are isometric, and satisfy the relation

(11.2) $\Theta_t\,X_s^\alpha = X_{s+t}^\alpha - X_t^\alpha$

(here $X_t^\alpha$ is considered as a vector, not an operator). Tensoring with $I_{\mathcal J}$ we define on $\Psi$ a semigroup of isometric shift operators, still denoted by $\Theta_t$

(11.3) $\Theta_t(j\otimes f) = j\otimes\Theta_tf\,.$

Between the operators $E_t$ and $\Theta_t$ we have the relation

(11.4) $\Theta_s\,E_t = E_{s+t}\,\Theta_s\,.$

To check (11.4) it is sufficient to apply both sides to a vector $\varphi = j\,f_t\,\Theta_tg$ ($\otimes$ signs omitted) with $j\in\mathcal J$, $f_t\in\Phi_t$, $g\in\Phi$. In this case

$\Theta_s\varphi = j\,\Theta_sf_t\,\Theta_{s+t}g\,,\qquad E_t\varphi = j\,f_t\,\langle\mathbf 1, g\rangle\,,$

$\Theta_sE_t\varphi = j\,\Theta_sf_t\,\langle\mathbf 1, g\rangle = E_{s+t}\,\Theta_s\varphi\,.$

On the same vector, the adjoint shift $\Theta_t^*$ acts by $\Theta_t^*(j\,f_t\,\Theta_tg) = j\,\langle\mathbf 1, f_t\rangle\,g$.

The shift acts as follows on the representation $f = \int \hat f(A)\,dX_A$ of a vector ($\hat f$ taking values in the initial space) :

(11.5) $\Theta_tf = \int \hat f(\tau_t^*(A))\,dX_A$

where $\tau_t(A)$ moves $A$ to the right by $t$, while $\tau_t^*(A)$ moves it to the left by $t$ if $A\subset[t,\infty[$, and is undefined otherwise (and then $\hat f(\tau_t^*(A)) = 0$). Note that exponential martingales $f_t = \mathcal E(uI_{]0,t]})$ satisfy a "multiplicative functional" property $f_{s+t} = f_s\,\Theta_sf_t$.
Since $\Theta_t$ is isometric, it defines an isomorphism of $\Psi$ onto the closed space $\Theta_t\Psi = \Psi_{[t}$, the future space at time $t$.
Our purpose is now to define a shift $\Theta_t(A)$ for operators on $\Psi$, and we consider mostly bounded operators. The first idea is to take $\Theta_t(A) = \Theta_tA\Theta_t^*$, but we use rather Journé's definition in [Jou] which yields an operator "adapted to the future" (J.'s notation for $\Theta_t(A)$ is $\Theta_tA\Theta_t^*$) :

(11.6) $\Theta_t(A)\,(j\,f_t\,\Theta_tg) = f_t\otimes\Theta_tA(jg)\,.$

The notation is the same as after (11.4), and we have kept the $\otimes$ sign on the right as the decomposition $\Psi = \Phi_t\otimes\Psi_{[t}$ is somewhat unusual. On the same vector, $\Theta_tA\Theta_t^*$ would produce $\langle\mathbf 1, f_t\rangle\,\Theta_tA(jg)$, and would not be unitary for unitary $A$. As an example, we have the "additive functional" equality

$a^\varepsilon_{t+s} - a^\varepsilon_s = \Theta_s(a^\varepsilon_t)\,;$

these operators are unbounded, and the equality holds on the exponential domain.
Let us return to bounded operators (for additional results, see Bradshaw [Bra]) :

LEMMA 1. We have

$\Theta_t(I) = I\,;\quad \Theta_t(AB) = \Theta_t(A)\,\Theta_t(B)\,;\quad \Theta_t(A^*) = \Theta_t(A)^*\,;$

$\Theta_s\circ\Theta_t = \Theta_{s+t}\,,\quad \Theta_s\,\mathbb E_t = \mathbb E_{s+t}\,\Theta_s\,;$

$\Theta_t(A)\,\Theta_t = \Theta_t\,A\,,\quad \Theta_t^*\,\Theta_t(A) = A\,\Theta_t^*\,.$

The first line is nearly obvious, except the last equality, for which we compute

$\langle j\,f_t\,\Theta_tg,\,\Theta_t(A)\,(\ell\,h_t\,\Theta_tk)\rangle = \langle j\,f_t\,\Theta_tg,\,h_t\otimes\Theta_tA(\ell k)\rangle = \langle f_t, h_t\rangle\,\langle \Theta_t(jg),\,\Theta_tA(\ell k)\rangle\,.$

In the last scalar product we remove $\Theta_t$ because of the isometry property, take $A$ to the left side, and reverse our steps. The relation $\Theta_s(\Theta_t(A)) = \Theta_{s+t}(A)$ is proved similarly on a vector of the form $j\,f_s\,\Theta_s(h_t\,\Theta_tk)$, and the same vectors are used to check the next relation. Finally, we have $\Theta_t(A)\,(\Theta_t(jg)) = \Theta_tA(jg)$.

If $A$ is $s$-adapted, it is easy to check that $\Theta_t(A)$ is $(s+t)$-adapted.
12 We now reach an important definition : a left cocycle (or in probabilistic language a multiplicative functional) is an adapted process $(U_t)$ of operators -- here, bounded for simplicity -- such that $U_0 = I$ and we have for all $s, t$

(12.1) $U_{t+s} = U_t\,\Theta_t(U_s)\,.$

The adapted process $U_t^*$ then is a right cocycle, $U_t^*$ standing to the right in the product. This is but one of the possible definitions of cocycles : that of Accardi [Acc1], for instance, is slightly different. A fundamental example of cocycles will be provided in the next section by the solutions of stochastic differential equations with time-independent coefficients.

The left cocycle property and the last property of Lemma 1 imply together

(12.2) $U_{t+s}\,\Theta_t = U_t\,\Theta_t\,U_s\,,$

which may be called a weak cocycle property. Given a left cocycle $(U_t)$ -- consisting of bounded operators for simplicity -- let us define on the initial space

(12.3) $P_t = \mathbb E_0(U_t)$ or $P_tj = E_0\,U_tj$ for $j\in\mathcal J\,;$
otherwise stated, $P_t = E_0\,U_t\,E_0$. Let us check the semigroup property $P_{t+s} = P_tP_s$. Since $\Theta_t$ acts trivially on initial vectors, we may replace $j$ by $\Theta_tj$ in (12.3). Then using the weak cocycle property (12.2) we have

(12.4) $P_{t+s}j = E_0\,U_{t+s}\,\Theta_tj = E_0\,U_t\,(\Theta_t\,U_sj)\,.$

On the other hand, using (11.4)

(12.5) $P_tP_sj = E_0\,U_t\,\Theta_t(E_0\,U_sj) = E_0\,U_t\,(\Theta_tE_0)\,U_sj = E_0\,U_t\,E_t\,(\Theta_t\,U_sj)\,.$

Comparing (12.4)--(12.5) we are reduced to proving that $E_0\,U_t = E_0\,U_tE_t$, and it is better if we replace $E_0$ by $E_t$. Then using the $t$-adaptation of $U_t$, when either side is applied to a vector $j\,f_t\,\Theta_tg$ one gets the same result $U_t(jf_t)\,\langle\mathbf 1, g\rangle$. The adjoint operators $P_t^* = \mathbb E_0(U_t^*)$ also form a semigroup, associated with the right cocycle $(U_t^*)$.
We prove now a similar result for operators ([Acc1], [HLP]). We define as follows a mapping from the algebra of bounded operators on $\mathcal J$ (identified to 0-adapted operators on $\Psi$) into itself

(12.6) $\mathcal P_t(A) = \mathbb E_0(U_t^*\,A\,U_t)\,,$

where $\mathbb E_0$ is the time-0 conditional expectation for operators. More generally, given a bounded operator $H$ on $\Psi$, put $\overline H(A) = \mathbb E_0(H^*AH)$, so that $\mathcal P_t = \overline{U_t}$. Then the semigroup property $\mathcal P_{s+t} = \mathcal P_s\,\mathcal P_t$ is a consequence of

LEMMA 2. If $H$ is $s$-adapted, $K$ arbitrary and $L = H\,\Theta_s(K)$, we have $\overline L(A) = \overline H(\overline K(A))$.

When $H = |h\otimes x\rangle\langle h'\otimes x'|\otimes I_{[s}$ ($h, h'\in\mathcal J$, $x, x'\in\Phi_s$) and $K = |k\otimes y\rangle\langle k'\otimes y'|$ ($k, k'\in\mathcal J$, $y, y'\in\Phi$), the proof is computational (we omit it). Then the result is extended to operators of finite rank by linearity, and to arbitrary bounded operators by a strong convergence argument (Kaplansky's density theorem is the adequate functional analytic tool). The second part of the lemma leads to a second semigroup $\mathcal P_t'(A) = \mathbb E_0(U_tAU_t^*)$.

A few examples

13 Many examples of processes with interesting stochastic integral representations appear in [HP1], and still more are given in [Par1]. The standard way to find the coefficients is to compute matrix elements between exponential vectors and to use (6.6) or its multiple Fock space version (8.4).

1) Let us deal first with simple Fock space, and consider the process X_t = exp(p a_t^+ + q a_t^- + rt) (p, q, r being three complex constants). The exponential can be
144 VI. Stochastic calculus

"disentangled" using the elementary Campbell-Hausdorff formula of Chapter III, §1


subs.3 (or Chapter IV, §3, subs. 11 which would apply to the number operator). We
have
X t = eva+ e qa'~ e (v+~pq) t

from which it is easy to compute

< ¢~(V), X t g(tt) > ---- e(r+½ vq)t e p Jo u(s) ds eq fo v(s) ds e < v,u > ,

and then, differentiating with respect to t,


d
--~< g ( v ) , X t g ( u ) > = (pu(t) + qV(t) + (r + ½pq)) < g ( v ) , X t g ( u ) > .

Using (6.6), we find that X t solves the stochastic differential equation

(13.1) Xt = I + X s (pda + + qda-~ + (r + ½pq) ds ) .

This result can be extended to multiple Fock space. For q = -p̄, r = ih purely imaginary, the solution of (13.1) is a unitary process.
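The disentangled form of X_t can be checked numerically at t = 1 on a truncated harmonic oscillator (an illustration, not from the text; numpy assumed, the truncation level N and the values of p, q, r are arbitrary). Truncation breaks the commutation relation at the top level, so we compare only low-lying matrix elements, where its effect is negligible.

```python
import numpy as np

N = 60                                      # truncation level
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)  # annihilation, a|n> = sqrt(n)|n-1>
ad = a.T                                    # creation (real matrix here)

def expm(M, terms=90):
    # plain Taylor series, adequate for the moderate norms used below
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out += term
    return out

p, q, r = 0.2, 0.3, 0.1 + 0.4j              # arbitrary small constants, t = 1

lhs = np.exp(r) * expm(p * ad + q * a)      # exp(p a+ + q a- + r)
# disentangled product e^{p a+} e^{q a-} e^{r + pq/2}
rhs = expm(p * ad) @ expm(q * a) * np.exp(r + 0.5 * p * q)

# the identity is exact in infinite dimension; compare low matrix elements
assert np.allclose(lhs[:4, :4], rhs[:4, :4], atol=1e-8)
```

The factor e^{pq/2} is the central commutator contribution of the Campbell-Hausdorff formula, matching the drift correction ½pq in (13.1).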
If r = 0, what we computed is f(a_t^+, a_t^-) for f(x, y) = e^{px+qy}. Taking p = iu, q = iv and integrating, we can handle more general Fourier transforms, as in the Weyl quantization procedure of Chapter III. Now it turns out that the same Ito formula as in the classical case remains correct

(13.2) df(a_t^+, a_t^-) = D_x f(a_t^+, a_t^-) da_t^+ + D_y f(a_t^+, a_t^-) da_t^-
+ ½ ( D_{xx} f(a_t^+, a_t^-) da_t^+ da_t^+ + D_{xy} f(a_t^+, a_t^-) (da_t^+ da_t^- + da_t^- da_t^+) + D_{yy} f(a_t^+, a_t^-) da_t^- da_t^- ) ,

where the value of the "square brackets" is read from the Ito table. It would be interesting
to see what happens when the number operator is included.
2) Let 1_t be the indicator of [0, t[ and 1^t that of [t, ∞[. Given an operator U on the multiplicity space K, we may extend it in a trivial way (and without changing notation) to H = L²(ℝ_+, K) = L²(ℝ_+) ⊗ K and consider the following process of second quantized operators

(13.3) Y_t = Γ(1_t U + 1^t)

-- since Γ(U) acts as U^{⊗n} on the n-th chaos H^{∘n}, Y_t could be denoted U^{N_t} as Belavkin does, N_t being the total number operator up to time t. Then it is easy to compute the matrix element of Y_t between ℰ(v) and ℰ(u), and to deduce from (6.6) that

(13.4) Y_t = I + ∫_0^t Y_s (U - I) dN_s .

An important special case is the Jordan-Wigner operator J_t = (-1)^{N_t} on simple Fock space, corresponding to U = -I. See [Par1] p. 198-201 for a study of the Jordan-Wigner transform operator using quantum stochastic calculus.

§2. STOCHASTIC CALCULUS WITH KERNELS

This short section may be omitted at a first reading, since it slightly deviates from
the main line of this chapter.
1 We begin in the case of multiplicity 1, with a trivial initial space for simplicity. Let (f_t) be a (square integrable) adapted process, and let F = ∫ f_s dX_s. We will use below the following relation on Wiener space, an easy exercise on stochastic calculus (or a triviality if F belongs to one single Wiener chaos)

(1.1) E[ℰ(h) F] = ∫ h_s E[ℰ(h) f_s] ds .

On the other hand, knowing the chaotic expansion of every f_s, that of F is given by formula (3.3) of §1

(1.2) F(A) = f_{∨A}(A-) ,

with the following notation : for any non-empty finite subset A, ∨A is the last element of A, and A- is A \ {∨A}. If A is empty, these symbols are undefined, and scalars depending on undefined symbols are equal to 0 by convention. To convince yourself of this trivial formula, look at what it means when f_t belongs to a chaos of a given order. Formula (1.2) may also be written as follows

(1.2') F(A) = Σ_{s∈A} f_s(A - s) .

Indeed, since f_s is s-adapted, f_s(A - s) vanishes unless A - s is contained in [0, s], i.e. unless s is the last element of A. Thus (1.2') is a natural extension of the stochastic integral to non-adapted processes, called the Skorohod integral, whose study has become very fashionable.
2 Consider now an adapted operator process (K_t) on simple Fock space, given by a (measurable) family of Maassen kernels

K_t = ∫ K_t(A, B, C) da_A^+ da_B^∘ da_C^- .

We are going to show that (under appropriate restrictions on the size of the kernels K_t) the stochastic integral L^ε = I^ε(K) is associated with a kernel, given by a formula similar to (1.2)

(2.1) L^+(A, B, C) = K_{∨A}(A-, B, C) ; L^∘(A, B, C) = K_{∨B}(A, B-, C) ;
L^-(A, B, C) = K_{∨C}(A, B, C-) .

We already used this formula in Chapter IV, §4, formula (7.2). A convenient way to prove it is to show that, taking (2.1) as the definition of L^ε, the scalar product

(2.2) < ℰ(v), L^ε ℰ(u) > ,

where u and v belong to L² and are bounded, has the correct value, (6.6), (7.1) or (7.6) according to the choice of ε. We choose ε = ∘ (the other cases are similar), in which case the value of (2.2) should be

(2.3) ∫ v̄(s) < ℰ(v), K_s ℰ(u) > u(s) ds .

The integral (2.2) can be computed by formula (5.2) in Chapter IV, §4,

< b, Ka > = ∫ b̄(U + V) K(U, V, W) a(V + W) dU dV dW ,

with b = ℰ(v), a = ℰ(u), and since the transformation (2.1) affects only the middle variable, we may "freeze" U and W, and put

f_s(V) = K_s(U, V, W) , F(V) = K_{∨V}(U, V-, W) .

Interpreting now F as a random variable, the stochastic integral of the adapted process (f_s), this is nothing but formula (1.1) applied with h = v̄u. This proof is not complete, as we did not make the analytical hypotheses required to apply (1.1) or (2.3), but there is no problem if the kernels K_t(U, V, W) are 0 for t > T and satisfy a Maassen-like inequality uniform in t ≤ T

(2.4) |K_t(U, V, W)| ≤ C M^{|U|+|V|+|W|} ,

in which case the stochastic integrals I^ε(K) given by (2.1) satisfy similar properties. Thus we can apply Maassen's theory, and such stochastic integrals define operators which have adjoints of the same type, and can be composed. Then in relations like (6.5), (7.2), (7.3), (7.7) we can move the two operators to the same side, and prove true integration by parts formulas.
We discuss now the situation of a Fock space of finite multiplicity, with a trivial initial space (the general case would not be more difficult, but would require 𝓛(J)-valued kernels, see Chapter V, §2, subs. 5). We are going to use the general kernel calculus of Chap. V, §2, subs. 4-5, allowing the use of the differential da^0_0(t) = dt. The advantage of the kernel point of view is the possibility of directly composing operators. We are loose in distinguishing kernels from the operators they define, and in particular we use the same letter for both.

We say that a family (K_t) of kernels is a regular process if 1) each K_t is t-adapted, i.e. vanishes unless all its arguments are subsets of [0, t], 2) the kernels K_t depend measurably on t, and are regular in the sense of Maassen, with uniform regularity constants on compact intervals, as in (2.5).

Stochastic integration of regular processes preserves regularity : let us say that a process of operators U_t is smooth if it can be represented as

(2.6) U_t = U_0 + Σ_{ρσ} ∫ K^ρ_σ(s) da^ρ_σ(s) ,

where each process K^ρ_σ(s) is regular (and U_0 is a constant initial operator).



Then a smooth process is again regular. Indeed, a stochastic integral process V_t = ∫_0^t K_s da^ρ_σ(s) has a kernel given by

(2.7) V_t((A^ρ_σ)) = K_{∨A^ρ_σ}( (A^0_0), ..., (A^ρ_σ)-, ..., (A^0_d) )

(the arguments other than A^ρ_σ being unchanged) if ∪ A^ρ_σ is contained in [0, t], and 0 otherwise (see formula (2.1)). Then regularity is easy to prove.
Using Maassen's theorem, one can check also that the composition of two regular (smooth) processes of operators is again regular (smooth), and that in the smooth case there is a rigorous Ito formula for the computation of the product. Similarly, one can define regular (smooth) processes of vectors, and check that applying a smooth process of operators to a smooth process of vectors yields a smooth process of vectors, whose stochastic integral representation is given by the natural Ito formula. We leave to the reader all details, and the case of a non-trivial initial space.
Another approach to stochastic integration

3 Belavkin [1] (and independently Lindsay [2]) give a definition of stochastic integration which unifies the point of view of Hudson-Parthasarathy and that of Maassen. It also applies to non-adapted processes, though it is not clear whether such processes will ever play an important role. We present the main idea first in the case of simple Fock space Φ.

Let us first recall some classical results, popularized among probabilists by the "Malliavin Calculus". We refer to Chapter XXI of [DMM] for details. Given an element f = ∫ f(A) dX_A of Φ and s > 0, we define formally f̂_s ∈ Φ by its chaos expansion

(3.1) f̂_s(A) = f(s + A) .

This is not rigorously defined since f(·) is a class of functions on 𝒫, but one can prove that f̂_s is defined for a.a. s, and that ∫ ||f̂_s||² ds is finite if and only if

(3.2) ∫∫ |f(s + A)|² ds dA < ∞ ,

and we then have

a^-_h(f) = ∫ h̄(s) f̂_s ds ,

which defines f̂_s for a.e. s. The mapping f ↦ f̂_s appears in white noise theory as a partial derivative in a continuous coordinate system. Exponentials f = ℰ(u) are eigenvectors of these "derivatives" since we have f̂_s = u(s) ℰ(u).
On the other hand, given a family f_s of random variables (satisfying suitable measurability and integrability conditions) and its Skorohod integral F (see (1.2')), we have a duality formula

(3.3) < g, F > = ∫ < ĝ_s, f_s > ds .

This is a special case of a general formula involving iterated gradients and divergences ([DMM], XXI.28).
We consider a measurable family of kernels K_t(A, B, C), not necessarily adapted. For the moment we perform algebraic computations, without caring for analytical details. In analogy with the definition (1.2') of the Skorohod integral, we define the stochastic integrals K^ε = ∫ K_s da^ε_s for ε = +, ∘, -

K^+(A, B, C) = Σ_{s∈A} K_s(A - s, B, C) , K^∘(A, B, C) = Σ_{s∈B} K_s(A, B - s, C) ,
(3.4) K^-(A, B, C) = Σ_{s∈C} K_s(A, B, C - s) .

If the kernel process is adapted, we get the usual definition. To integrate over (0, t), one sums only over s ≤ t. Given a vector f = ∫ f(H) dX_H, we compute K^ε f(A) using Maassen's formula VI.1, (1.5'), and we remark that

K^+ f(A) = Σ_{s∈A} (K_s f)(A - s) , K^- f(A) = ∫ (K_s f̂_s)(A) ds ,
(3.5) K^∘ f(A) = Σ_{s∈A} (K_s f̂_s)(A - s) ,

where, on the right side, K_s acts as an operator on the vector f or f̂_s. This no longer requires that the operators K_s be given by kernels.
A remarkable feature of this definition is the elegant way it allows one to write the Ito formula. Consider a stochastic integral

(3.6) I_t(ℍ) = ∫_0^t ( h(s) ds + η(s) da_s^+ + H(s) da_s^∘ + η'(s) da_s^- )

(the letter H on the right side is meant to be a capital η !). Then the scalar product < g, I_t(ℍ) f > is equal to the sum of four integrals ∫_0^t ... ds with integrands

< g, h(s) f > , < ĝ_s, η(s) f > , < ĝ_s, H(s) f̂_s > , < g, η'(s) f̂_s > .

We may write this sum as

< g, I_t(ℍ) f > = ∫_0^t < g̃_s, ℍ_s f̃_s > ds ,

with g̃_s = (g, ĝ_s), f̃_s = (f, f̂_s), and ℍ_s the matrix of the four integrands.
This can be extended to multiple Fock spaces as follows. First of all, in (3.6) h(s) becomes an operator on Φ, η(s) a column (η^α(s)) of operators on Φ, i.e. a mapping from Φ to K ⊗ Φ, H a matrix (H^α_β(s)) of operators on Φ, i.e. an operator on K ⊗ Φ, and finally η'(s) a mapping from K ⊗ Φ to Φ. Putting everything together, and defining K̂ = ℂ ⊕ K, the interpretation of ℍ is a mapping from K̂ ⊗ Φ to itself, as described by the matrix (1.2) in Chapter V, §3. On the other hand, given f ∈ Φ, f̂_s now has become a vector in K ⊗ Φ with components f̂^α(s), and we add one component f̃^0(s) = f to the vector f̂_s, getting a vector f̃_s ∈ K̂ ⊗ Φ. Then we have the remarkable formula

(3.7) < g, I_t(ℍ) f > = ∫_0^t < g̃_s, ℍ_s f̃_s > ds .
If f = ℰ(ju), g = ℰ(kv), what we get is formula (9.2) in §1. The Ito formula itself requires a second stochastic integral I_t(𝕂), and the integrand becomes (in the adapted case)

(3.8) < 𝕂_s g̃_s, F̃_s > + < G̃_s, ℍ_s f̃_s > + < P 𝕂_s g̃_s, P ℍ_s f̃_s > ,

where F_s = I_s(ℍ) f, G_s = I_s(𝕂) g, and P is the projection from K̂ ⊗ Φ onto K ⊗ Φ : a considerable progress over the cumbersome (8.5) in §1.

We have not discussed the conditions under which these formulas can be rigorously applied outside of the exponential domain.

§3. STOCHASTIC DIFFERENTIAL EQUATIONS AND FLOWS

This section is a general survey of the subject, which leaves aside most technicalities,
but contains the main ideas. We postpone to section 4 the rigorous proof of some selected
results on this rapidly growing subject.
1 The general scheme for stochastic differential equations is the following. We consider a family of operators L_ε = L^ρ_σ on the initial space, and we wish to solve an equation of the following form (ε abbreviates an Evans index (^ρ_σ), and we usually write L_ε instead of L_ε ⊗ I)

(1.1) U_t = I + Σ_ε ∫_0^t U_s (L_ε ⊗ I) da^ε_s .

The solution (U_t) should be an adapted process of operators on Ψ, each U_t being defined at least on the subspace generated by all exponential vectors ℰ(ju), where u is square integrable and locally bounded to avoid the difficulties relative to number and exchange operators, and j belongs to some dense domain in J. Of course, regularity conditions will be assumed so that the equation is meaningful. We are specially interested in the study of s.d.e.'s (1.1) whose solution (U_t) is a strongly continuous unitary process. Equation (1.1) will be called the left s.d.e. with coefficients L_ε. Its solution will be called the left exponential of the process Σ_ε L_ε a^ε_t, in analogy with the classical "Doléans exponential" of a semimartingale, a very special case of the theory of classical s.d.e.'s -- the analogue of non-linear s.d.e.'s being given by Evans-Hudson flows (subs. 5).
One may also consider time-dependent coefficients L_ε(s), either acting on the initial space, or acting in an adapted way on Fock space. The theory is similar, but the solutions
are called multiplicative integrals instead of exponentials. Most of our attention will be devoted to the time-independent case.
There is a right equation similar to (1.1),

(1.1') V_t = I + Σ_ε ∫_0^t (L_ε ⊗ I) V_s da^ε_s ,

in which the unknown process stands to the right of the coefficient L_ε ⊗ I. Its solution is called a right exponential. The notations (U_t) and (V_t) will be systematically used to distinguish them. Note that V_s commutes with da^ε_s in (1.1').
In both equations, a basis-free notation may be used : restricting ourselves to time-independent coefficients, we may write

(1.2) Σ_ε L_ε da^ε_t = L dt + da_t^+(λ) + da_t^-(μ) + da_t^∘(Λ) ,

where λ = (L^α_0) is the column ("warning" in V.1.1) of operators on J consisting of the creation coefficients, and μ = (L^0_α) is a line consisting of the annihilation coefficients. Under the best boundedness conditions, L, λ, μ and Λ are interpreted as bounded mappings from J to J, J to J ⊗ K, J ⊗ K to J, and J ⊗ K to J ⊗ K respectively.

Formally, the adjoint U_t^* of the solution to the left equation solves the right equation with coefficients L^{*ρ}_σ = (L^σ_ρ)^*. However, the two equations are not handled in the same way, and sometimes one of the two forms is more convenient than the other. For instance, if the solution to the left equation can be proved to be bounded, but the coefficients are unbounded, U_s L^ρ_σ raises less domain problems than L^ρ_σ V_s. We will also prove that the two equations are related by time reversal.
The discrete analogues of both equations are given by the inductive schemes

U_{n+1} - U_n = U_n ΔM_n ; V_{n+1} - V_n = (ΔM_n) V_n ,

where (M_n) is a given operator process. It is easy to compute explicitly

U_n = U_0 (I + ΔM_0) ... (I + ΔM_{n-1}) ; V_n = (I + ΔM_{n-1}) ... (I + ΔM_0) V_0 .

In the first induction procedure, the last operator must go down to the bottom of the pile; thus it is more difficult to compute than the second one.
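A few lines of numpy (an illustration, not part of the text; the increments ΔM_k are arbitrary random matrices) confirm that the two inductive schemes are solved by the two ordered products:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 4                                          # five steps, 4x4 matrices
dM = [rng.normal(size=(d, d)) for _ in range(n)]     # the increments Delta M_k
U0 = V0 = np.eye(d)

# run the two inductive schemes
U, V = U0, V0
for k in range(n):
    U = U + U @ dM[k]            # U_{k+1} - U_k = U_k dM_k
    V = V + dM[k] @ V            # V_{k+1} - V_k = dM_k V_k

# closed forms: U_n = U_0 (I + dM_0)...(I + dM_{n-1}),
#               V_n = (I + dM_{n-1})...(I + dM_0) V_0
Uc, Vc = U0, V0
for k in range(n):
    Uc = Uc @ (np.eye(d) + dM[k])   # new factor goes to the bottom of the pile
    Vc = (np.eye(d) + dM[k]) @ Vc   # new factor simply goes on top

assert np.allclose(U, Uc) and np.allclose(V, Vc)
```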
Unbounded coefficients L_ε do not raise major conceptual problems provided they all act on the same stable domain, as well as their adjoints -- in another language, we may restrict ourselves to this domain, which is a prehilbert initial space. If all coefficients are equal to 0 except L^0_0 = iH, with a selfadjoint Hamiltonian H, the equations have the same unitary solution U_t = e^{itH} = V_t, and we are facing a standard quantum evolution problem (with all its difficulties if H is unbounded !). Thus what we are doing is to perturb a quantum evolution by non-commutative noise terms. From the physicist's point of view, the interesting object is the initial Hilbert space J : the standard way of getting back to J is to apply the (vacuum) expectation operator E_0. Then if we put

(1.3) P_t = E_0 U_t and Q_t = E_0 V_t ,
these operators satisfy formally the equations

P_t = I + ∫_0^t P_s L^0_0 ds , Q_t = I + ∫_0^t L^0_0 Q_s ds ,

therefore we should have P_t = e^{t L^0_0} = Q_t. This explains why we assume in section 4 that L^0_0 is the generator of a strongly continuous contraction semigroup on J.
It seems, however, that "Schrödinger's picture" (1.3) is less important than "Heisenberg's picture" describing an evolution of initial operators (§1, subs. 12)

(1.4) 𝒫_t(A) = E_0(U_t^* A U_t) , 𝒬_t(A) = E_0(V_t^* A V_t) .

If we consider (U_t) as a multiplicative functional of the quantum noise, these formulas are analogues of the Feynman-Kac or Girsanov formulas of classical probability.

Our discussion follows mostly Mohari's work, whose main subject is the study of s.d.e.'s with infinite multiplicity and bounded coefficients. Therefore, unless otherwise stated, the coefficients L_ε are bounded and time-independent, but the multiplicity may be infinite.
2 The classical way of solving an equation like (1.1') is the Picard iteration method. Let us have a look at the successive approximations it leads to. We start with V_t^0 = I. Then the first iteration gives

V_t^1 = I + Σ_ε L_ε a^ε_t .

The second approximation V_t^2 is given by

I + Σ_{ε_1} ∫_{s_1<t} L_{ε_1} da^{ε_1}(s_1) + Σ_{ε_1 ε_2} L_{ε_2} L_{ε_1} ∫_{s_1<s_2<t} da^{ε_2}(s_2) da^{ε_1}(s_1) .

For the left equation we would have L_{ε_1} L_{ε_2}. It is clear how to continue : given a multi-index μ = (ε_1, ..., ε_n) we define the iterated integral (over the increasing simplex) and the initial operator

(2.1) I^μ_t = ∫_{s_1<...<s_n<t} da^{ε_n}(s_n) ... da^{ε_1}(s_1) ; L_μ = L_{ε_n} ... L_{ε_1} ,

and the solution of (1.1') appears as the sum of the series

(2.2) V_t = Σ_μ L_μ I^μ_t ,

the n-th approximation V_t^n being the sum over multi-indexes of length smaller than n. In the case of a non-random evolution, this is the exponential series for exp(t L^0_0).

The formula for the left exponential is the same, but in the definition of L_μ the product is reversed, the operators corresponding to the larger times being applied first to the vector.
Formula (2.2) is not satisfactory, as it contains many integrals of the same type, due to the special role of the Evans index ε_0 = (^0_0). The following computation is formal, but it can be justified in many cases and has important consequences.
Given a multi-index λ = (ε_1 ... ε_k) that does not contain ε_0, one constructs all the multi-indexes μ that are reduced to λ by the integration procedure of §1, subs. 9, inserting p_0 times ε_0 in front of ε_1, then p_i times ε_0 after ε_i. Then, abbreviating L^0_0 into L, the sum over all these multi-indexes has the following value

Σ_{p_0,...,p_k} L^{p_k} L_{ε_k} ... L^{p_1} L_{ε_1} L^{p_0} ∫_{s_1<...<s_k<t} ((t - s_k)^{p_k}/p_k!) ... ((s_2 - s_1)^{p_1}/p_1!) (s_1^{p_0}/p_0!) da^{ε_k}(s_k) ... da^{ε_1}(s_1)

(2.3) = ∫_{s_1<...<s_k<t} e^{(t-s_k)L} L_{ε_k} ... e^{(s_2-s_1)L} L_{ε_1} e^{s_1 L} da^{ε_k}(s_k) ... da^{ε_1}(s_1) .

The first line has the same meaning as (2.2), except that one must be careful about changing the order of summation of series which are only summable by blocks. The second line involves the formal exponentials P_t = e^{tL} : it is strikingly similar to the Isobe-Sato (or Krylov-Veretennikov) formula from the theory of Markov processes ([DMM], Chapter XXI, n° 57).
3 Let us discuss the conditions on the coefficients L^ρ_σ that express the unitarity of the operators U_t. The rigorous proof that they are necessary (due to Mohari in the case of infinite multiplicity) rests on the uniqueness of stochastic integral representations. We omit this point here and refer the reader to the book [Par1], p. 227.

We use formula (9.3) from §1 to compute < U_t ℰ(kv), U_t ℰ(ju) >, which has to be constant if (U_t) is a process of isometries. The derivative of this scalar product is equal to 0 a.e. in t. We assume that u, v are taken right continuous. Then we must have for every t, because of the assumed strong continuity of U_t

Σ_{ρσ} ( < U_t ℰ(kv), U_t L^ρ_σ ℰ(ju) > + < U_t L^σ_ρ ℰ(kv), U_t ℰ(ju) > ) v̄_ρ(t) u^σ(t)
+ Σ_{ρσ} Σ_α < U_t L^α_ρ ℰ(kv), U_t L^α_σ ℰ(ju) > v̄_ρ(t) u^σ(t) = 0 .

We may suppress U_t on both sides inside the scalar products, because of the assumed isometry property. Since the L_ε are initial operators, the scalar product < ℰ(v), ℰ(u) > factorizes, and the following expression must be equal to 0

Σ_{ρσ} ( < k, L^ρ_σ j > + < L^σ_ρ k, j > + Σ_α < L^α_ρ k, L^α_σ j > ) v̄_ρ(t) u^σ(t) .

Since j, k are arbitrary, we get (3.1) below as the formal condition for isometry of (U_t). It is the same condition that implies formally (by time reversal, see §4, subs. 9) the isometry property of (V_t), and therefore we may call it simply the isometry condition

(3.1) L^ρ_σ + (L^σ_ρ)^* + Σ_α (L^α_ρ)^* L^α_σ = 0 , or L^ρ_σ + L^{*ρ}_σ + Σ_α L^{*ρ}_α L^α_σ = 0 .

Since the isometry condition is the same for (U_t) and (V_t), the coisometry condition follows by a formal passage to the adjoint (i.e. replacing L^ρ_σ by L^{*ρ}_σ)

(3.1') L^ρ_σ + L^{*ρ}_σ + Σ_α L^ρ_α L^{*α}_σ = 0 ,
and the two conditions (3.1)-(3.1') together constitute the formal unitarity condition. One of the purposes of section 4 consists in proving that, under suitable analytical assumptions, it really does imply that the solution is unitary. This is similar to classical non-explosion results for diffusions or Markov chains.

We rewrite the isometry condition as follows in the "basis free" notation of Chapter V, §3. Assuming convenient boundedness properties are satisfied, we may introduce the matrix 𝕃 of an operator on J ⊗ (ℂ ⊕ K)

𝕃 = ( L + L^* + λ^*λ     μ + λ^* + λ^*Λ
      μ^* + λ + Λ^*λ     Λ^* + Λ + Λ^*Λ ) ,

and the isometry condition means that 𝕃 = 0 -- more generally, we will see in §4, subsection 4, that the condition 𝕃 ≤ 0 is Mohari's contractivity condition. Similarly, the coisometry condition can be written 𝕃̄ = 0, taking

𝕃̄ = ( L + L^* + μμ^*     λ^* + μ + μΛ^*
      λ + μ^* + Λμ^*     Λ^* + Λ + ΛΛ^* ) .
The unitarity condition 𝕃 = 0 = 𝕃̄ is analyzed as follows. First of all, the lower diagonal relations

Λ^* + Λ + Λ^*Λ = 0 = Λ^* + Λ + ΛΛ^*

mean that W = I + Λ is unitary. Then the lower left relations

μ^* + λ + Λ^*λ = 0 = λ + μ^* + Λμ^*

can be written

(3.3) μ^* + W^*λ = 0 = λ + Wμ^* ,

while the upper right relations give an equivalent result. Finally, the equations

L + L^* + λ^*λ = 0 = L + L^* + μμ^*

can be written

(3.4) L = iH - ½ λ^*λ = iH - ½ μμ^* ,

where H is selfadjoint (here, bounded). This form of the conditions extends that of [HuP1] for simple Fock space. We refer to this article for a more detailed discussion, and in particular for the use of s.d.e.'s to construct quantum dynamical semigroups with a given generator.

Another exercise in translation consists in taking a basis of the initial space instead of the multiplicity space. For simplicity we assume J ≅ ℂ^ν is finite dimensional. The left equation takes the following explicit form, indexing the annihilation operators by bra vectors instead of elements of K

U^i_j(t) = δ^i_j I + Σ_k ∫_0^t U^i_k(s) ( L^k_j ds + da^+_s(λ^k_j) + da^-_s(μ^k_j) + da^∘_s(Λ^k_j) ) ,

where (L^i_j) is a matrix of scalars, (Λ^i_j) a matrix of operators on K, and (λ^i_j) and (μ^i_j) are two matrices of elements of K. Then the unitarity conditions become : 1) Λ^i_j = W^i_j - δ^i_j I, where W is unitary ; 2) μ = -W^*λ ; 3) L = iH - ½ <λ, λ>, this last notation meaning the scalar matrix with coefficients Σ_k < λ^k_i, λ^k_j >.
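The parametrization can be tested numerically: starting from an arbitrary unitary W, a selfadjoint H and arbitrary creation coefficients λ_α, build all the Evans coefficients and verify the isometry condition (3.1) index by index. This is an illustration only (numpy assumed, one consistent choice of index conventions, which may differ in inessential ways from the text's).

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 3, 2                               # dim of initial space J, multiplicity

def dag(M):
    return M.conj().T

G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
W, _ = np.linalg.qr(G)                    # unitary d x d (scalar entries)
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + dag(A)) / 2                      # selfadjoint Hamiltonian
lam = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(d)]

# Evans coefficients L[rho][sigma], rho, sigma = 0..d
L = [[None] * (d + 1) for _ in range(d + 1)]
L[0][0] = 1j * H - 0.5 * sum(dag(l) @ l for l in lam)        # (3.4)
for a in range(1, d + 1):
    L[a][0] = lam[a - 1]                                     # creation column
    L[0][a] = -sum(dag(lam[b - 1]) * W[b - 1, a - 1] for b in range(1, d + 1))
    for b in range(1, d + 1):
        L[a][b] = (W[a - 1, b - 1] - (a == b)) * np.eye(n)   # W - I

# isometry condition (3.1): L^r_s + (L^s_r)* + sum_a (L^a_r)* L^a_s = 0
for r in range(d + 1):
    for s in range(d + 1):
        M = L[r][s] + dag(L[s][r]) + sum(dag(L[a][r]) @ L[a][s] for a in range(1, d + 1))
        assert np.allclose(M, 0, atol=1e-10)
```

Every Evans index pair cancels exactly, just as the algebraic analysis of 𝕃 = 0 predicts.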
4 Let us define, for a given (bounded) operator A on J

(4.1) ξ_t(A) = U_t^* A U_t .

Then the Ito formula shows that A_t = ξ_t(A), as an operator process, has a stochastic integral representation

(4.2) A_t = A + Σ_{ρσ} ∫_0^t ξ_s(𝐋^ρ_σ A) da^ρ_σ(s) ,

where 𝐋^ρ_σ is the linear map in the space of bounded operators

(4.3) 𝐋^ρ_σ(A) = (L^σ_ρ)^* A + A L^ρ_σ + Σ_α (L^α_ρ)^* A L^α_σ .

This formula will have an important interpretation in the language of flows (subsection 5).

Under reasonable conditions, the operator process (U_t) is a unitary left cocycle, and therefore the mapping A ↦ 𝒫_t(A) = E_0(U_t^* A U_t) is a semigroup. It is easy to compute its generator from (4.3), at least formally

(4.4) 𝐋^0_0(A) = (L^0_0)^* A + A L^0_0 + Σ_α (L^α_0)^* A L^α_0 .

On the other hand, the simpler semigroup P_t j = E_0 U_t j acting on J has the generator L^0_0. Thus (𝒫_t) depends on more of the coefficients of the s.d.e. than (P_t), though it still does not involve the number/exchange coefficients at all. This phenomenon does not occur with ordinary quantum mechanics, in which the Schrödinger and Heisenberg pictures are completely equivalent.

If the unitarity conditions on the coefficients are taken into account, (4.4) becomes the standard (Lindblad) form of the generators of quantum dynamical semigroups, which replace the classical Markov semigroups in the theory of irreversible quantum evolutions. We do not discuss this subject, but our reader may consult Alicki-Lendi [AlL] and Parthasarathy [Par1].
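One consequence of the Lindblad form is easy to check numerically: with the drift in the form (3.4), the generator (4.4) kills the identity, the quantum analogue of a Markov generator annihilating constants, and it commutes with the adjoint. A small sketch (an illustration, numpy assumed, coefficients random):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 3, 2

def dag(M):
    return M.conj().T

Ls = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(d)]
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + dag(A)) / 2
L00 = 1j * H - 0.5 * sum(dag(l) @ l for l in Ls)   # drift in the form (3.4)

def gen(X):
    # formula (4.4): L(X) = L00* X + X L00 + sum_a La* X La
    return dag(L00) @ X + X @ L00 + sum(dag(l) @ X @ l for l in Ls)

# unitality: L(I) = 0
assert np.allclose(gen(np.eye(n)), 0, atol=1e-12)
# L(X)* = L(X*): the semigroup preserves selfadjointness
X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
assert np.allclose(dag(gen(X)), gen(dag(X)), atol=1e-12)
```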

Non-commutative flows

5 According to our program, we remain at a non-rigorous level, to give motivations and an elementary description of the theory of non-commutative flows, created by Evans-Hudson. We begin with a translation of the classical theory of stochastic differential equations on E = ℝ^ν into a more algebraic language.

We are given a probability space (Ω, 𝓕, ℙ) with a filtration (𝓕_t), and a d-dimensional Brownian motion (B^α_t) w.r.t. this filtration (we add a deterministic component B^0_t = t).
Our aim is to construct an adapted and continuous stochastic process X = (X_t) taking values in E, such that for i = 1, ..., ν we have (X_0 being given, usually X_0 = x ∈ E)

(5.1) X^i_t = X^i_0 + Σ_ρ ∫_0^t c^i_ρ(X_s) dB^ρ_s .

We apply a C^∞ function F to both sides, using the classical Ito formula

(5.2) F(X_t) = F(X_0) + Σ_ρ ∫_0^t L_ρ F ∘ X_s dB^ρ_s ,

where L_0 is the second order operator Σ_i c^i_0 D_i + (1/2) Σ_{ijα} c^i_α c^j_α D_{ij} and L_α is the vector field Σ_i c^i_α D_i. As before, indexes labelled ρ may assume the value 0, while those labelled α may not. This formula no longer contains the coordinates x^i, and can be extended immediately to a manifold E.
We now submit (5.2) to an abstraction process : we retain only the fact that C^∞(E) is an algebra, and that X_s defines a homomorphic mapping F ↦ F ∘ X_s from C^∞(E) to random variables on Ω. We also notice that C^∞(E) somehow describes the "curved" manifold E, while the driving Brownian motions (B^ρ_t) live in a "flat" space, the s.d.e. itself being a machinery that "rolls up" the flat paths into the manifold. The flat space and the algebra are "coupled" together by the homomorphism X_0, which usually maps F into a constant r.v. F(x).

Making use of the fact that X_t is a homomorphism, the relation (FG) ∘ X_t = (F ∘ X_t)(G ∘ X_t) and the Ito formula for continuous semimartingales lead to the relations

L_α(FG) - F L_α(G) - L_α(F) G = 0    (L_α is a vector field) ,

L_0(FG) - F L_0(G) - L_0(F) G = Σ_α L_α(F) L_α(G)    (L_0 is a second order operator) .
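In dimension 1 and with polynomial data, both relations can be verified mechanically (an illustration, using numpy's Polynomial class; the coefficients c_0, c_1 and the test functions F, G are arbitrary polynomials):

```python
import numpy as np
from numpy.polynomial import Polynomial as P

c0 = P([1.0, 2.0, -1.0])        # drift coefficient  c^1_0(x)
c1 = P([0.5, -1.0, 3.0])        # diffusion coefficient  c^1_1(x)

def L1(F):                      # the vector field  L_1 F = c1 F'
    return c1 * F.deriv()

def L0(F):                      # second order operator  L_0 F = c0 F' + (1/2) c1^2 F''
    return c0 * F.deriv() + 0.5 * c1 * c1 * F.deriv(2)

F = P([2.0, 0.0, 1.0, -1.0])
G = P([1.0, 3.0, 0.0, 2.0])

# L_1 is a derivation, and L_0 satisfies the second order ("Ito") identity
zero = L1(F * G) - F * L1(G) - L1(F) * G
ito = L0(F * G) - F * L0(G) - L0(F) * G - L1(F) * L1(G)
assert np.allclose(zero.coef, 0) and np.allclose(ito.coef, 0)
```

The first identity is the Leibniz rule; in the second, the defect F'G'c_1² is exactly the Ito correction L_1(F)L_1(G).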

The algebra C^∞ is replaced now by an arbitrary complex *-algebra 𝒜. One usually assumes it has a unit I. The "flat" probability space and the driving Brownian motion are replaced by their non-commutative analogues, a boson Fock space Φ of multiplicity d and its basic processes a^ρ_σ. The "coupling" between 𝒜 and Φ is realized as follows : we have 𝒜 act as a *-algebra of bounded operators on a Hilbert space J, and construct the familiar space Ψ = J ⊗ Φ ; then X_0 is understood as the mapping F ↦ F ⊗ I from 𝒜 to operators on Ψ. To enhance clarity, we try to use lower case letters for elements of J or Ψ, capital letters for elements of 𝒜 or operators on J or Ψ, boldface letters for the third level (mappings from 𝒜 to 𝒜, or from operators to operators).

The interpretation of the process X_t itself is an adapted process of *-homomorphisms from 𝒜 to 𝓛(Ψ) : for every F ∈ 𝒜, X_t(F) is a bounded operator on Ψ which acts only on the first factor of the decomposition Ψ_t ⊗ Φ_[t. The technical assumption is made that these homomorphisms are norm contractive, and we also assume that X_t(I) = I -- though there are good reasons from classical probability to study "explosive" flows for which this condition is not satisfied.

To keep the probabilistic flavour, we use the very strange notation F ∘ X_t or F(X_t) instead of X_t(F). In particular, F ∘ X_0 = F ⊗ I.
Finally, we give ourselves a family of mappings 𝐋^ρ_σ from 𝒜 to 𝒜 and demand that, for F ∈ 𝒜

(5.3) F ∘ X_t = F ∘ X_0 + Σ_{ρσ} ∫_0^t 𝐋^ρ_σ F ∘ X_s da^ρ_σ(s) .

The requirement that X_t be a unit preserving *-homomorphism is now translated as the so-called structure equations of a non-commutative flow :

(5.4) 𝐋^ρ_σ(I) = 0 ,
(5.5) (𝐋^ρ_σ(F))^* = 𝐋^σ_ρ(F^*) ,
(5.6) 𝐋^ρ_σ(FG) - F 𝐋^ρ_σ(G) - 𝐋^ρ_σ(F) G = Σ_α 𝐋^α_σ(F) 𝐋^ρ_α(G) .

The third relation is deduced from Ito's formula as in the classical case above. We do not pretend to prove these conditions are necessary, only to show they are natural, as we did for the s.d.e. unitarity conditions. On the other hand, we shall prove in the next section that these conditions are sufficient to construct a flow, provided the coefficients are sufficiently regular.
are sufficiently regular.
EXAMPLES. Consider a non-commutative s.d.e. with coefficients L^ρ_σ, which has a unique unitary solution (U_t). Take for 𝒜 the algebra 𝓛(J) of all bounded operators on the initial space, and for F ∈ 𝒜 define X_t(F) = U_t^* (F ⊗ I) U_t. Then we get a flow whose coefficients are given by (4.3). The corresponding structure equations follow from the coisometry conditions (3.1') on the s.d.e. coefficients, as a somewhat tedious computation shows.

There is another interesting example, which illustrates the relations between commutative and non-commutative probability : classical finite Markov chains in continuous time can be interpreted as non-commutative flows, thus giving a precise mathematical content to Kolmogorov's remarks on the analogy between continuous time Markov chains and diffusions. Since we do not want to interrupt the discussion, however, we postpone this elementary example to the end of this section.

Discrete flows

6 The continuous situation we are studying has a simple discrete analogue : denote by ℳ the algebra of complex (d+1, d+1) matrices, by a^ρ_σ its standard basis, with multiplication table¹

a^μ_λ a^ρ_σ = δ^μ_σ a^ρ_λ .

Consider an algebra 𝒜 with unit, and denote by M(𝒜) the algebra of square matrices of order d+1 with entries in 𝒜, which is also the tensor product algebra 𝒜 ⊗ ℳ. Then we put 𝒜_0 = 𝒜, 𝒜_{n+1} = 𝒜_n ⊗ ℳ. We "clone" a^ρ_σ into a^ρ_σ(n) = I_𝒜 ⊗ I ⊗ ... ⊗ a^ρ_σ ⊗ I ⊗ ...

¹ Because of the "warning" in Chap. V, §1, subs. 1, the multiplication rule for matrix units is slightly different from the usual one.
with the non-trivial matrix in the n-th ℳ factor. Then a discrete flow is a family of homomorphisms ξ_n from 𝒜 to 𝒜_n such that for F ∈ 𝒜

(6.1) ξ_{n+1}(F) = ξ_n(F) ⊗ I_{n+1} + Σ_{ρσ} ξ_n(𝐋^ρ_σ(F)) a^ρ_σ(n+1) ,

where the mappings 𝐋^ρ_σ from 𝒜 to 𝒜 are given. As in the continuous case, it is more picturesque to write F ∘ X_n instead of ξ_n(F). The construction by induction is obvious, and it is easy to see that multiplicativity of X_n is translated as a "discrete structure equation"

(6.2) 𝐋^ρ_σ(FG) - F 𝐋^ρ_σ(G) - 𝐋^ρ_σ(F) G = Σ_{τ≥0} 𝐋^τ_σ(F) 𝐋^ρ_τ(G) ,

which differs from the equation we are familiar with by the fact that summation on the right side takes place over all τ ≥ 0 instead of α > 0. Relation (6.2) means that, if we denote by 𝚲(F) the matrix (𝐋^ρ_σ(F)) with entries in 𝒜, by F𝐈 the matrix with F along the diagonal and 0 elsewhere, then the mapping F ↦ F𝐈 + 𝚲(F) = 𝐄(F) is a homomorphism from 𝒜 to M(𝒜). A detailed discussion of discrete flows is given by Parthasarathy [Par4], [Par1] Chapter II §18 (p. 111).

The algebra of structure equations

7  We return to continuous time flows, and formulate the structure equation in a different language, introduced by Hudson.
First, we consider a left action and a right action of $F\in\mathcal A$ on $G\in\mathcal A$, given respectively by $FG$ and $GF$. The associativity of multiplication expresses that the left and right actions commute, i.e. $\mathcal A$ is a bimodule over $\mathcal A$. We extend this bimodule structure trivially to $\mathcal A^d$; elements of $\mathcal A^d$ are denoted by $\Phi=(F_\alpha)$, the components carrying lower indexes because of our unusual conventions. We define a "hermitian scalar product" on $\mathcal A^d$, with values in $\mathcal A$, by putting for $\Phi=(F_\alpha)$, $\Psi=(G_\alpha)$

$$\langle\,\Phi,\Psi\,\rangle \;=\; \sum_\alpha F_\alpha^*\,G_\alpha\,.$$

We also consider the algebra $\mathcal M=\mathcal M_d(\mathcal A)$ of $(d,d)$-matrices with coefficients in $\mathcal A$; because of the unusual conventions, the matrix product summation bears on the descending diagonal. Then $\mathcal M$ is also a bimodule over $\mathcal A$, with a left and right action of $F\in\mathcal A$ which multiplies every coefficient of the matrix by $F$ to the left or to the right (it can also be represented as a left (right) matrix product by the diagonal matrix $FI$).
We denote by $\Lambda(F)$ the matrix with coefficients $L^\beta_\alpha(F)$, and by $E(F)$ the matrix $FI+\Lambda(F)$. For non-zero indexes, the relation

(7.1) $\qquad L^\beta_\alpha(FG) - F\,L^\beta_\alpha(G) - L^\beta_\alpha(F)\,G \;=\; \sum_\gamma L^\gamma_\alpha(F)\,L^\beta_\gamma(G)$

is read as $E(FG)=E(F)\,E(G)$. It can be interpreted as follows: on $\mathcal A^d$, keep the right action of $F\in\mathcal A$ and redefine the left action as the effect of the matrix $E(F)$. Then we get a twisted bimodule structure on $\mathcal A^d$.
Denote by $\lambda(F)$ the mapping $F\mapsto \big(L^0_\alpha(F)\big)$ from $\mathcal A$ to $\mathcal A^d$. Then the relation

$$L^0_\alpha(FG) - F\,L^0_\alpha(G) - L^0_\alpha(F)\,G \;=\; \sum_\beta L^\beta_\alpha(F)\,L^0_\beta(G)$$
158 vI. Stochastic calculus

has the interpretation

$$\lambda(FG) \;=\; E(F)\,\lambda(G) + \lambda(F)\,G\,,$$

so the vector $\lambda(F)=\big(L^0_\alpha(F)\big)$ satisfies the property

(7.2) $\qquad \lambda(FG) \;=\; \lambda(F)\,G + E(F)\,\lambda(G)\,,$

which in the twisted bimodule structure is read as

(7.2′) $\qquad \lambda(FG) \;=\; \lambda(F)\,G + F\,\lambda(G)\,,$

and means that $\lambda$ is a (twisted) derivation. Finally, the relation concerning the generator $L(F)=L^0_0(F)$ of the semigroup $(P_t)$, a mapping from $\mathcal A$ to $\mathcal A$, can be rewritten as

(7.3) $\qquad L(F^*G) - F^*L(G) - L(F^*)\,G \;=\; \sum_\alpha L^\alpha_0(F^*)\,L^0_\alpha(G) \;=\; \langle\,\lambda(F),\lambda(G)\,\rangle\,.$

This corresponds to a familiar idea in probability, the "squared field operator" (opérateur carré du champ).
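In the classical, commutative case of Brownian motion on the line, where $L=\frac12\,d^2/dx^2$ and the involution is trivial, (7.3) reduces to the familiar carré du champ $\Gamma(f,g)=f'g'$. A quick symbolic sketch of this special case (not part of the text's formalism):

```python
import sympy as sp

x = sp.symbols('x')
L = lambda h: sp.diff(h, x, 2) / 2   # generator of Brownian motion

f = sp.sin(x)
g = sp.exp(x)

# squared field operator (operateur carre du champ)
gamma = sp.simplify(L(f * g) - f * L(g) - L(f) * g)
assert sp.simplify(gamma - sp.diff(f, x) * sp.diff(g, x)) == 0
```

The same bilinear expression measures the defect of $L$ from being a derivation, which is what (7.3) quantifies in the operator setting.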
Evans and Hudson make in their papers a (limited) use of the language of Hochschild cohomology: if $\mathcal A$ is an algebra and $\mathcal B$ is a bimodule over $\mathcal A$, an $n$-cochain with values in $\mathcal B$ is by definition a mapping $\varphi$ from $\mathcal A^n$ to $\mathcal B$ (for $n=0$, an element of $\mathcal B$). The coboundary of $\varphi$ is defined as the $(n+1)$-cochain

(7.5) $\qquad d\varphi(v,u_1,\ldots,u_n) \;=\; v\,\varphi(u_1,\ldots,u_n) - \varphi(vu_1,u_2,\ldots,u_n) + \varphi(v,u_1u_2,\ldots,u_n) - \cdots + (-1)^{n+1}\,\varphi(v,u_1,\ldots,u_{n-1})\,u_n\,.$

Therefore, a 1-cochain $\varphi(u)$ is a cocycle (i.e. its coboundary is 0) if and only if $v\,\varphi(u) - \varphi(vu) + \varphi(v)\,u = 0$, otherwise stated if $\varphi$ is a derivation. To say that $\varphi$ is the coboundary of a 0-cochain $g$ means that $\varphi(u) = ug - gu$, i.e. $\varphi$ is an inner derivation. Then the theory of perturbation of quantum flows, for instance, can be expressed elegantly in cohomological language, though only the first cohomology groups appear, and no deep algebraic results are used.

Markov chains as quantum flows

8  No processes (except possibly Bernoulli trials) are simpler and more useful than Markov chains with finite state spaces. It is one of the unexpected results of "quantum probability" that it has led to new remarks on a subject of which every detail seemed to be known.
We first consider the case of a Markov chain in discrete time $n=0,\ldots,T$, taking its values in a finite set $E$ consisting of $\nu+1$ points numbered from 0 to $\nu$. We denote by $\mathcal A$ the (finite dimensional) algebra of all complex valued functions on $E$.
The transition matrix of the chain is denoted by $P=(p(i,j))$, and we assume for the moment that its entries are all $>0$ (this hypothesis is for simplicity only, and we discuss it later). The (discrete) generator of the chain is $A = P - I$. We denote by $\Omega$ the finite set $E^{\{0,\ldots,T\}}$ of all conceivable sample paths: our hypothesis implies that if the initial

measure is concentrated at $x$, all paths from $x$ have strictly positive measure, and we do not need to worry about sets of measure 0. The coordinate mappings on $\Omega$ are called $X_n$.
The additive functionals of the chain can be represented as follows, by means of functions $z$ on $E\times E$:

$$Z_k \;=\; \sum_{i=1}^{k} z\,(X_{i-1},X_i)\,.$$

Under our hypothesis this representation is unique. The additive functionals which are martingales of the chain (for every starting point) correspond to functions $z(\cdot,\cdot)$ such that $Pz(\cdot) = \sum_j p(\cdot,j)\,z(\cdot,j) = 0$. Let us try to construct a (real) "orthonormal basis" $Z^\alpha$ of the space of additive functionals, taking as first basis element $Z^0_k = k$, in the following sense:

(8.1) $\qquad \mathbb E\,\big[\,\Delta Z^\alpha_k\,\Delta Z^\beta_k \;\big|\; \mathcal F_{k-1}\,\big] \;=\; \delta^{\alpha\beta}\,.$

The functionals $Z^\alpha$ ($\alpha>0$) then are martingales (as usual, $\alpha,\beta,\ldots$ range from 1 to $\nu$, while $\rho,\sigma,\ldots$ range from 0 to $\nu$). On the corresponding functions $z^\alpha$, these relations become

$$\forall i\,,\qquad \sum_j p(i,j)\,z^\alpha(i,j)\,z^\beta(i,j) \;=\; \delta^{\alpha\beta}\,.$$

This amounts to choosing an orthonormal basis of (real) functions of two variables for a "scalar product" which takes its values in the algebra $\mathcal A$:

(8.2) $\qquad \langle\, y, z\,\rangle \;=\; P(yz) \;=\; \sum_j p(\cdot,j)\,y(\cdot,j)\,z(\cdot,j)\,,$

taking the function 1 as its first vector. The construction is easy: for every fixed $i$ we are reduced to finding an o.n.b. for a standard bilinear form, and we may paste the values together under the restriction that the rank of this form should not depend on $i$. On the other hand, this rank is the number $N_i$ of points $j$ that can be reached from $i$ ($p(i,j)>0$). The strict positivity assumption means that $N_i=\nu+1$ for all $i$, and the preceding condition is fulfilled.
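Concretely, the pasting construction is one weighted Gram–Schmidt orthonormalization per row $p(i,\cdot)$, started from the constant function 1. A small numerical sketch (the transition matrix is a made-up example):

```python
import numpy as np

# made-up transition matrix with all p(i,j) > 0  (nu = 2)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
d = P.shape[0]                        # d = nu + 1

# for each i, Gram-Schmidt w.r.t. <y,z>(i) = sum_j p(i,j) y(j) z(j),
# starting from the constant function z^0 = 1  (formula (8.2))
z = np.zeros((d, d, d))               # z[rho, i, j]
for i in range(d):
    basis = [np.ones(d)]
    for v in np.eye(d):
        w = v - sum(np.sum(P[i] * v * b) * b for b in basis)
        n2 = np.sum(P[i] * w * w)
        if n2 > 1e-12:
            basis.append(w / np.sqrt(n2))
    z[:, i, :] = np.array(basis[:d])

# orthonormality: sum_j p(i,j) z^rho(i,j) z^sigma(i,j) = delta^{rho sigma}
for i in range(d):
    G = np.einsum('j,rj,sj->rs', P[i], z[:, i], z[:, i])
    assert np.allclose(G, np.eye(d))

# each z^alpha (alpha > 0) generates a martingale: P z^alpha = 0
assert np.allclose(np.einsum('ij,aij->ai', P, z[1:]), 0.0)
```

The rank condition of the text appears here as the requirement that each row yield the same number of non-degenerate Gram–Schmidt vectors.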
Starting from such a martingale basis, we define discrete multiple stochastic integrals, which are finite sums of the form

(8.3) $\qquad f \;=\; \sum_{p=1}^{T}\ \sum_{i_1<\cdots<i_p}\ \sum_{\alpha_1\ldots\alpha_p} a_p(X_0,\,i_1,\alpha_1,\ldots,i_p,\alpha_p)\;\Delta Z^{\alpha_1}_{i_1}\cdots\Delta Z^{\alpha_p}_{i_p}\,.$

There is an isometry formula

(8.4) $\qquad \mathbb E\,\big[\,|f|^2 \;\big|\; X_0\,\big] \;=\; \sum_p\ \sum\,\big|a_p(X_0,\,i_1,\alpha_1,\ldots,i_p,\alpha_p)\big|^2\,.$

The coefficients of a given r.v. in this chaos expansion depend on the initial point $X_0$: from an algebraic point of view, we may describe this as a "chaos expansion with coefficients in the algebra $\mathcal A$". If the initial point is fixed, the Hilbert space generated by these multiple stochastic integrals is isomorphic to a toy Fock space of multiplicity $\nu$.

As remarked by Emery, it is easy to prove that every r.v. can be represented by a chaos expansion (8.3): keeping $X_0=x$ fixed, $\Omega$ consists of $(\nu+1)^T$ points, all of which have strictly positive measure, and this gives us the dimension of $L^2(\Omega)$. On the other hand, the number of subsets of $\{1,\ldots,T\}$ with $p$ elements is $\binom{T}{p}$, and for each subset the choice of indices $\alpha$ gives $\nu^p$ possibilities, whence the same dimension $(\nu+1)^T$.
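The dimension count is just the binomial theorem; for instance:

```python
from math import comb

T, nu = 6, 3   # horizon T, nu + 1 states (arbitrary small values)
# p-fold integrals: choose p instants i_1 < ... < i_p, then nu indices each
dim = sum(comb(T, p) * nu**p for p in range(T + 1))
assert dim == (nu + 1)**T             # binomial theorem
```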
As always, chaotic representation implies previsible representation w.r.t. the martingales $Z^\alpha$. Therefore the martingale additive functional $f(X_n) - f(X_0) - \sum_{1\le k\le n} Af(X_{k-1})$ has a stochastic integral representation, and the corresponding predictable processes may be computed as usual by means of angle brackets. This leads to a formula

(8.5) $\qquad f(X_n) - f(X_0) - \sum_{1\le k\le n} Af(X_{k-1}) \;=\; \sum_\alpha\ \sum_{1\le k\le n} L_\alpha f(X_{k-1})\;\Delta Z^\alpha_k\,,$

with operators $L_\alpha$ on $\mathcal A$ given by

(8.6) $\qquad L_\alpha f(i) \;=\; \sum_j p(i,j)\,z^\alpha(i,j)\,f(j)\,.$

This can be interpreted as an "Ito formula" in which the $Z^\alpha$ replace the components $B^\alpha$ of Brownian motion, $A$ replaces the laplacian $\frac12\Delta$, and $L_\alpha$ replaces the derivative operator $D_\alpha$. It is convenient to use the additional index 0, with $Z^0_k = k$, $L_0 = A$.
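Formula (8.5) is in fact an exact pathwise identity: over one step it reads $f(j)-f(i)=Af(i)+\sum_\alpha L_\alpha f(i)\,z^\alpha(i,j)$, which is just the expansion of $f(j)$ in the orthonormal basis $z^\rho(i,\cdot)$. A numerical sketch (made-up chain again):

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])       # made-up chain, all entries > 0
d = P.shape[0]

def mart_basis(p):
    """z^0 = 1, z^1..z^nu, orthonormal for <y,z> = sum_j p_j y_j z_j."""
    basis = [np.ones(len(p))]
    for v in np.eye(len(p)):
        w = v - sum(np.sum(p * v * b) * b for b in basis)
        n2 = np.sum(p * w * w)
        if n2 > 1e-12:
            basis.append(w / np.sqrt(n2))
    return basis

f = np.array([1.0, -2.0, 0.5])        # arbitrary function on E
ok = True
for i in range(d):
    z = mart_basis(P[i])
    Lf = [np.sum(P[i] * z[a] * f) for a in range(1, d)]   # (8.6)
    lam = P[i] @ f - f[i]                                 # Af(i), A = P - I
    for j in range(d):
        ok &= np.isclose(f[j] - f[i],
                         lam + sum(Lf[a] * z[a + 1][j] for a in range(d - 1)))
assert ok
```

Summing this one-step identity along a sample path gives (8.5) for every path, not just in $L^2$.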
REMARKS. 1) The restriction that the transition matrix be strictly positive can be replaced, as said above, by the condition that the number $N_i$ of states $j$ such that $p(i,j)>0$ is a constant $N$ independent of $i$, and then the number of necessary martingales is $N-1$. On the other hand, Parthasarathy remarked that every finite Markov chain can be considered as the image of a Markov chain $(X'_n)$ possessing this property, which we are going to describe now¹.
First, let $N$ be the largest of the numbers $N_i$. The state space $E'$ consists of $E$ ("true" states) and of "ghost" states $x^k_{ij}$, where $i$ and $j$ are true states such that $p(i,j)>0$, and $k$ is an integer (we say that $x^k_{ij}$ "begins" at $i$ and "ends" at $j$). We adopt the following rules: a) The total number of ghost states beginning at $i$ is $N-N_i$. b) The new transition probability $p'(i,\cdot)$ is such that for each state $j$ such that $p(i,j)>0$, the total mass $p(i,j)$ is shared between $j$ and all the ghost states beginning at $i$ and ending at $j$, each one receiving a strictly positive mass. Thus $i$ leads to $N$ states in $E'$. c) If $x$ is a ghost state ending at $j$, $p'(x,\cdot) = p'(j,\cdot)$; therefore each ghost state also leads to $N$ states.
On the other hand, let $\pi$ be the mapping from $E'$ to $E$ which induces the identity on $E$ and maps any ghost state to its end. Then it is clear that the chain $(X'_n)$ is mapped by $\pi$ into the chain $(X_n)$, and therefore every r.v. of the chain $(X_n)$ can be expanded by means of the $N-1$ martingales of the larger chain.
2) The discussion above can be extended to discrete filtrations $(\mathcal F_n)$ in which $\mathcal F_0$ is trivial, $\mathcal F_1$ consists of exactly $\nu+1$ atoms (of strictly positive measure), and similarly each atom of $\mathcal F_n$ is subdivided into exactly $\nu+1$ atoms.

¹ This discussion may be omitted without harm.



9  The functions $z^\rho(i,j)$ constitute a basis for functions on $E\times E$ w.r.t. the algebra of (real) functions on $E$ acting by multiplication, as functions of the variable $i$². Therefore, there must exist a formula of the following kind

(9.1) $\qquad z^\rho(i,j)\,z^\sigma(i,j) \;=\; \sum_\tau C^{\rho\sigma}_\tau(i)\,z^\tau(i,j)\,,$

where the coefficients $C^{\rho\sigma}_\tau = P(z^\rho z^\sigma z^\tau)$ are functions on $E$, which satisfy relations expressing the associativity of multiplication (and in this case also its commutativity)

(9.2) $\qquad \sum_\varepsilon C^{\alpha\beta}_\varepsilon\, C^{\varepsilon\gamma}_\tau \;=\; \sum_\varepsilon C^{\beta\gamma}_\varepsilon\, C^{\alpha\varepsilon}_\tau\,.$

Also, the particular role of the unit element $1 = z^0$ gives the relation $C^{\rho 0}_\tau = C^{0\rho}_\tau = \delta^\rho_\tau$, and the choice of the functions $z^\alpha$ gives $C^{\alpha\beta}_0 = \delta^{\alpha\beta}$ (the fact that our functions are real is used here). Since the mapping $f\mapsto f(X_1)$ is an algebra homomorphism, we deduce from the Ito formula above

(9.3) $\qquad L_0(fg) - f\,L_0 g - (L_0 f)\,g \;=\; \sum_\rho L_\rho f\; L_\rho g\,,$
$\qquad\phantom{(9.3)} L_\gamma(fg) - f\,L_\gamma g - (L_\gamma f)\,g \;=\; \sum_{\rho\sigma} C^{\rho\sigma}_\gamma\; L_\rho f\; L_\sigma g\,.$
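The coefficients $C^{\rho\sigma}_\tau = P(z^\rho z^\sigma z^\tau)$ and the relations (9.1)–(9.2), together with the unit and orthonormality identities just stated, can be checked numerically (made-up chain once more):

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])       # made-up chain, all entries > 0
d = P.shape[0]

def mart_basis(p):
    basis = [np.ones(len(p))]
    for v in np.eye(len(p)):
        w = v - sum(np.sum(p * v * b) * b for b in basis)
        n2 = np.sum(p * w * w)
        if n2 > 1e-12:
            basis.append(w / np.sqrt(n2))
    return np.array(basis)

for i in range(d):
    z = mart_basis(P[i])                               # z[rho, j]
    C = np.einsum('j,rj,sj,tj->rst', P[i], z, z, z)    # P(z^r z^s z^t)
    # (9.1): z^rho z^sigma = sum_tau C^{rho sigma}_tau z^tau, pointwise in j
    assert np.allclose(z[:, None, :] * z[None, :, :],
                       np.einsum('rst,tj->rsj', C, z))
    # (9.2): associativity of the multiplication
    assert np.allclose(np.einsum('abe,egt->abgt', C, C),
                       np.einsum('bge,aet->abgt', C, C))
    # unit element and orthonormality relations
    assert np.allclose(C[:, 0, :], np.eye(d))
    assert np.allclose(C[:, :, 0], np.eye(d))
```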

10  We are going to consider the chain from the point of view of non-commutative (discrete) stochastic calculus; that is, instead of the random variable $f\circ X_k$ we are going to compute the corresponding multiplication operator, which we denote by $\xi_k(f)$. We use induction on $k$, starting from $\xi_0(f)$, which simply multiplies the coefficients of the chaos expansion by $f\circ X_0$.
We first investigate how multiplication by $\Delta Z^\alpha_k$ acts on a monomial $\Delta Z^{\alpha_1}_{i_1}\cdots\Delta Z^{\alpha_p}_{i_p}$. If this monomial does not contain any $\Delta Z^\beta_k$ relative to time $k$, the multiplication simply adds a factor $\Delta Z^\alpha_k$, and this corresponds to a creation operator $a^\alpha_0(k)$ (discrete Evans notation). If the monomial contains $\Delta Z^\beta_k$ with $\beta\ne\alpha$, the multiplication formula replaces the term $\Delta Z^\beta_k$ by the sum of all terms $C^{\alpha\beta}_\gamma(X_{k-1})\,\Delta Z^\gamma_k$, the replacement of $\Delta Z^\beta_k$ by $\Delta Z^\gamma_k$ being interpreted as a number or exchange operator $a^\gamma_\beta(k)$. Finally, if the term $\Delta Z^\alpha_k$ itself appears in the monomial, the term $C^{\alpha\alpha}_0 = 1$ in formula (9.1) produces a monomial from which $\Delta Z^\alpha_k$ has disappeared, and this corresponds to an annihilation operator $a^0_\alpha(k)$. Taking (8.4) into account, we have

(10.1) $\qquad \xi_k(f) \;=\; \xi_{k-1}(f) + \sum_{\rho\sigma}\,\xi_{k-1}\big(L^\rho_\sigma(f)\big)\;a^\rho_\sigma(k)\,,$

with the notation $L^\rho_\sigma(f) = \sum_\tau L_\tau f\;C^{\tau\sigma}_\rho$. Let us denote by $\Lambda(f)$ the matrix $\big(L^\rho_\sigma(f)\big)$ with coefficients in $\mathcal A$. Performing a short explicit computation using the associativity property (9.2), formula (10.1) becomes a universal formula no longer containing the coefficients $C^{\rho\sigma}_\tau$, which is a discrete structure equation

(10.2) $\qquad L^\rho_\sigma(fg) - f\,L^\rho_\sigma(g) - L^\rho_\sigma(f)\,g \;=\; \sum_\tau L^\tau_\sigma(f)\,L^\rho_\tau(g)\,.$
² The fact that $i$ is written first does not make it necessarily a "left" action: the algebra $\mathcal A$ is commutative.

Note that on the right side, the summation concerns all indices, 0 included. This relation can also be written (with the same convention on the matrix product as above) $\Lambda(fg) - f\,\Lambda(g) - \Lambda(f)\,g = \Lambda(f)\,\Lambda(g)$, and means that $f\mapsto E(f) = fI + \Lambda(f)$ is a homomorphism from $\mathcal A$ into the algebra of matrices with coefficients in $\mathcal A$. This is the discrete analogue of the Evans–Hudson twisted left multiplication, and since the index 0 plays no particular role, the story ends there. The meaning is very clear: the algebra of functions of two variables $g(i,j)$ admits two multiplications by functions of one variable, multiplication by $f(i)$ and multiplication by $f(j)$; the $z^\rho$ constitute a basis of this algebra considered as an $\mathcal A$-module for the first multiplication, while $E(f)$ is the matrix in this basis of the second multiplication by $f$.
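The last remark can be tested directly: with ordinary matrix conventions (rather than the "descending diagonal" convention of the text), the matrix of "multiplication by $f(j)$" in the basis $z^\rho$ is multiplicative, and subtracting $f(i)\,I$ gives a solution of the discrete structure equation (10.2) in matrix form. A sketch:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])       # made-up chain
d = P.shape[0]

def mart_basis(p):
    basis = [np.ones(len(p))]
    for v in np.eye(len(p)):
        w = v - sum(np.sum(p * v * b) * b for b in basis)
        n2 = np.sum(p * w * w)
        if n2 > 1e-12:
            basis.append(w / np.sqrt(n2))
    return np.array(basis)

f = np.array([1.0, -2.0, 0.5])
g = np.array([0.3, 2.0, -1.0])

for i in range(d):
    z = mart_basis(P[i])
    # E(h)(i): matrix of "multiplication by h(j)" in the basis z^rho
    E = lambda h: np.einsum('j,sj,rj->rs', P[i] * h, z, z)
    assert np.allclose(E(f * g), E(f) @ E(g))          # homomorphism
    Lf = E(f) - f[i] * np.eye(d)                       # Lambda(f)(i)
    Lg = E(g) - g[i] * np.eye(d)
    Lfg = E(f * g) - f[i] * g[i] * np.eye(d)
    # discrete structure equation (10.2), all indices included
    assert np.allclose(Lfg - f[i] * Lg - g[i] * Lf, Lf @ Lg)
```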
COMMENT. In the theory of classical s.d.e.'s, Brownian motion appears as a universal noise, by means of which all diffusions with reasonable generators can be constructed. In the case of Markov chains, Brownian motion can be replaced in the Ito formula or chaos expansions by the martingales $Z^\alpha$, but the situation is artificial, since they are ad hoc constructs from the chain itself. It is only when the setup is extended to include operators ("quantization") that we find again universal objects: the creation, annihilation and number processes. On the other hand, we have no obvious analogue of the flow of diffeomorphisms associated with a classical s.d.e.

The continuous time case

11  We now consider a continuous time Markov chain on the finite state space $E$. The standard terminology from martingale theory (angle and square brackets) and from the theory of Markov processes will be used without explanation. We denote by $p_t(i,j)$ the transition matrix, by $P_t$ the transition semigroup. The generator is called $A$ (later $L_0$, as in the discrete case), with matrix $a(i,j) = p'_0(i,j)$. We assume for simplicity, as we did in the discrete case, that all $a(i,j)$ are different from 0. One also defines $n(i,j) = a(i,j)$ for $i\ne j$, $n(i,i)=0$; given a function $g(i,j)$ of two variables we define $Ng(i) = \sum_j n(i,j)\,g(i,j)$; $N$ is called the Lévy kernel of the chain (it allows compensating sums over jumps, as in the case of processes with independent increments).
The martingale additive functionals of the chain are compensated sums over jumps

(11.1) $\qquad M_z(t) \;=\; \sum_{s\le t} z\,(X_{s-},X_s) \;-\; \int_0^t Nz(X_s)\,ds\,,$

where $z(i,j)$ is a function of two variables, equal to zero on the diagonal. Our hypothesis implies that the mapping $z\mapsto M_z$ is injective, and we may identify completely the space $\mathcal B$ of functions of two variables equal to 0 on the diagonal with the space of martingale additive functionals.
We can map the space $\mathcal A$ of functions $f$ of one variable into $\mathcal B$, defining $\tilde f(i,j) = f(j)-f(i)$. The corresponding martingale is

(11.2) $\qquad M_{\tilde f}(t) \;=\; \sum_{s\le t,\;\Delta X_s\ne 0}\big(f(X_s)-f(X_{s-})\big) \;-\; \int_0^t N\tilde f(X_s)\,ds\,.$

This is the same as the usual martingale constructed from the generator

(11.3) $\qquad M_{\tilde f}(t) \;=\; f(X_t) - f(X_0) - \int_0^t Af\circ X_s\,ds\,.$

The space $\mathcal B$ is a ring under pointwise multiplication (on the corresponding additive functionals, this "product" of $M_z$ and $M_w$ is the compensated square bracket $[\,M_z, M_w\,] - \langle\, M_z, M_w\,\rangle$). On the other hand, $\mathcal B$ is an $\mathcal A$-bimodule with left and right actions

(11.4) $\qquad fz(i,j) \;=\; f(j)\,z(i,j)\,,\qquad zf(i,j) \;=\; z(i,j)\,f(i)\,.$

The fact that the variable corresponding to $X_{s-}$ is written as the first one has no special meaning, and the algebra $\mathcal A$ being commutative, we are completely free to call either action right or left; our choice will be explained shortly. From the point of view of additive functionals, the right action of $f$ is the previsible stochastic integral of the process $f(X_{s-})$, and the left one is the optional (compensated) stochastic integral of $f(X_s)$. We are going to show that it can be interpreted as the Evans–Hudson twisted left action. Note that $N$ is linear with respect to multiplication by functions $f(i)$, but not by functions $f(j)$.
The angle bracket of two martingales (11.1) is given by

(11.5) $\qquad \langle\, M_z, M_w\,\rangle_t \;=\; \int_0^t N(zw)\circ X_s\,ds\,,$

and we put

(11.6) $\qquad \langle\, z, w\,\rangle \;=\; N(zw)\,,$

which will turn out to be the "scalar product on $\mathcal A^d$ with values in $\mathcal A$" from the Evans–Hudson theory. Note that for functions of one variable we have $\Gamma(f,g) = N(\tilde f\tilde g)$, where the left hand side is the usual squared-field operator.
As in the discrete case, let us try to construct (real) martingales $Z^\alpha$ of the form (11.1), such that

$$\langle\, Z^\alpha, Z^\beta\,\rangle_t \;=\; \delta^{\alpha\beta}\,t\,.$$

On the corresponding functions $z^\alpha$, this is expressed as $\sum_j n(i,j)\,z^\alpha(i,j)\,z^\beta(i,j) = \delta^{\alpha\beta}$, and the problem is similar to the discrete case, but now concerns functions equal to 0 on the diagonal. Since all $a(i,j)$ are different from 0, it is again possible to find $d=\nu$ such functions, and therefore $d$ orthogonal martingales with respect to which multiple integrals can be defined, with the same properties as those relative to $d$-dimensional Brownian motion. Biane has proved that the chaotic representation property holds for finite Markov chains; otherwise stated, he has given a new probabilistic interpretation of multiple Fock space. We refer the reader to [Bia2], or to Chapter XXI in [DMM]. For the moment we do not make an explicit choice of the basis.
Since we have a martingale basis, the compensated martingale of the square bracket ($[\;]$ is not a commutator!) has a representation of the form

(11.7) $\qquad d\,[\,Z^\alpha, Z^\beta\,]_t - \delta^{\alpha\beta}\,dt \;=\; \sum_\gamma C^{\alpha\beta}_\gamma\circ X_{t-}\;dZ^\gamma_t\,.$

Then $C^{\alpha\beta}_\gamma\circ X_{t-}$ can be computed as the density of the angle bracket with $Z^\gamma$, i.e. as $N(z^\alpha z^\beta z^\gamma)$ (our martingales are real). On the other hand, we have

(11.8) $\qquad \sum_\gamma C^{\alpha\beta}_\gamma(i)\;z^\gamma(i,j) \;=\; z^\alpha(i,j)\;z^\beta(i,j)\,.$

Since multiplication of functions of two variables is associative (and commutative), we have as in the discrete case

(11.9) $\qquad \sum_\varepsilon C^{\alpha\beta}_\varepsilon\, C^{\varepsilon\gamma}_\tau \;=\; \sum_\varepsilon C^{\beta\gamma}_\varepsilon\, C^{\alpha\varepsilon}_\tau\,.$

We extend this notation to a complete set of coefficients $C^{\rho\sigma}_\tau$, putting $C^{\alpha\beta}_0 = \delta^{\alpha\beta}$, and $C^{\rho\sigma}_\tau = 0$ if one of the indices $\rho,\sigma$ is equal to 0. Then, putting $Z^0_t = t$ as usual, we have in full generality

$$dZ^\rho_t\; dZ^\sigma_t \;=\; \sum_\tau C^{\rho\sigma}_\tau\circ X_{t-}\;dZ^\tau_t\,.$$

This can be translated, as for (10.1) in the discrete case, into a representation of the multiplication operator by $dZ^\rho_t$. However, the multiplication operator by $dZ^0_t$ is not 0, and we must take into account the relation $dZ^0_t = dt$ by means of a new set of coefficients $\widehat C^{\rho\sigma}_\tau$, differing from the preceding one only by the coefficient $\widehat C^{00}_0 = 1$ instead of 0:

(11.10) $\qquad dZ^\rho_t \;=\; \sum_{\sigma\tau} \widehat C^{\rho\sigma}_\tau\circ X_t\; da^\sigma_\tau(t)\,.$

The case $\rho=0$ is uninteresting, but the formula for $\rho\ne 0$ is worth rewriting:

(11.10′) $\qquad dZ^\alpha_t \;=\; dQ^\alpha_t + \sum_{\beta\gamma} C^{\alpha\beta}_\gamma\circ X_t\; da^\beta_\gamma(t)\,,$

where $dQ^\alpha$ denotes, as usual, the sum of the creator and the annihilator of index $\alpha$.
The basis property also leads to an "Ito formula"

(11.11) $\qquad f\circ X_t \;=\; f\circ X_0 + \sum_\rho \int_0^t D_\rho(f)\circ X_{s-}\;dZ^\rho_s\,.$

We first compute the operators $D_\rho$. We have

(11.12) $\qquad D_0(f) \;=\; Af \;=\; -qf + Nf\,,\qquad D_\alpha(f) \;=\; N(\tilde f z^\alpha)\,,$

where $q$ is the positive function $q(x) = -a(x,x)$. Again from Ito's formula, and comparing jumps, we deduce

(11.13) $\qquad \sum_\alpha D_\alpha f(i)\;z^\alpha(i,j) \;=\; f(j) - f(i)\,,$

and on the other hand

(11.14) $\qquad D_\rho(fg) - f\,D_\rho(g) - (D_\rho f)\,g \;=\; \sum_{\sigma\tau} C^{\sigma\tau}_\rho\; D_\sigma f\; D_\tau g\,,$

which splits in two:

$$D_\alpha(fg) - f\,D_\alpha(g) - (D_\alpha f)\,g \;=\; \sum_{\beta\gamma} C^{\beta\gamma}_\alpha\; D_\beta f\; D_\gamma g\,,$$
$$D_0(fg) - f\,D_0(g) - (D_0 f)\,g \;=\; \sum_\alpha D_\alpha f\; D_\alpha g\,.$$

The multiplication operator by $f\circ X_t$ can be computed from (11.10) and (11.11), and what we get is a quantum flow equation

(11.15) $\qquad f\circ X_t \;=\; f\circ X_0 + \sum_{\rho\sigma\tau} \int_0^t D_\rho(f)\circ X_s\;\;\widehat C^{\rho\sigma}_\tau\circ X_s\;\; da^\sigma_\tau(s)\,.$

The structure maps are given by

(11.16) $\qquad L^\sigma_\tau(f) \;=\; \sum_\rho D_\rho(f)\;\widehat C^{\rho\sigma}_\tau\,.$

In particular $L^0_0 = D_0$, and for all the other coefficients we may replace $\widehat C$ by $C$. We now compute the matrix $E^\beta_\alpha(f) = \delta^\beta_\alpha\,f + L^\beta_\alpha(f)$. Using (11.16), then (11.13), we have

$$\sum_\alpha L^\beta_\alpha(f)\,z^\alpha(i,j) \;=\; \sum_{\gamma\alpha} D_\gamma f(i)\;C^{\gamma\beta}_\alpha(i)\;z^\alpha(i,j) \;=\; \sum_\gamma D_\gamma f(i)\;z^\gamma(i,j)\,z^\beta(i,j) \;=\; \big(f(j)-f(i)\big)\,z^\beta(i,j)\,.$$

Adding $\sum_\alpha \delta^\beta_\alpha\,f(i)\,z^\alpha(i,j) = f(i)\,z^\beta(i,j)$, we get the identification of $E(f)$ with multiplication by $f(j)$.
As in the discrete case, there is a simple explicit choice of the martingale basis. We denote by $e_u(i)$ the indicator function of the point $u$, and by $F^{uv}$ the martingale (11.1) corresponding to $z(i,j) = e_u(i)\,e_v(j)$ with $u\ne v$. The martingales relative to different pairs $(u,v)$ have no common jumps, and therefore a square bracket equal to 0, while we have

$$d\,[\,F^{uv},F^{uv}\,]_t \;=\; n(u,v)\,e_u\circ X_t\,dt + dF^{uv}_t\;;\qquad d\,\langle\, F^{uv},F^{uv}\,\rangle_t \;=\; n(u,v)\,e_u\circ X_t\,dt\,.$$

Then we put $M^{uv} = F^{uv}\big/\sqrt{n(u,v)}$, so that the angle bracket of $M^{uv}$ with itself gets simplified as $e_u\circ X_t\,dt$, and its square bracket is

$$d\,[\,M^{uv},M^{uv}\,]_t \;=\; e_u\circ X_t\,dt + dM^{uv}_t\big/\sqrt{n(u,v)}\,.$$

Finally, we identify $E$ with the group of integers $\operatorname{mod}(\nu+1)$ and put

(11.17) $\qquad Z^\alpha \;=\; \sum_i M^{\,i,\,i+\alpha}\,.$

This constitutes the required basis of the space of martingales. In this case, the only non-zero coefficients are $C^{\alpha\alpha}_\alpha(i) = 1\big/\sqrt{n(i,i+\alpha)}$.
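For this explicit basis, orthonormality $N(z^\alpha z^\beta)=\delta^{\alpha\beta}$ and the reconstruction formula (11.13) can be verified numerically (the jump rates below are random made-up values):

```python
import numpy as np

d = 4                                  # states = Z/4Z, nu = 3
rng = np.random.default_rng(1)
n = rng.uniform(0.5, 2.0, (d, d))      # made-up jump rates n(i,j)
np.fill_diagonal(n, 0.0)

# (11.17): z^alpha(i,j) = 1_{j = i+alpha} / sqrt(n(i, i+alpha))
z = np.zeros((d - 1, d, d))
for a in range(1, d):
    for i in range(d):
        z[a - 1, i, (i + a) % d] = 1.0 / np.sqrt(n[i, (i + a) % d])

N = lambda g: (n * g).sum(axis=1)      # Levy kernel Ng(i) = sum_j n(i,j) g(i,j)

# orthonormality: N(z^alpha z^beta) = delta^{alpha beta}
for a in range(d - 1):
    for b in range(d - 1):
        assert np.allclose(N(z[a] * z[b]), 1.0 if a == b else 0.0)

# (11.13): sum_alpha D_alpha f(i) z^alpha(i,j) = f(j) - f(i),
# with D_alpha f = N(f~ z^alpha) and f~(i,j) = f(j) - f(i)
f = rng.standard_normal(d)
ft = f[None, :] - f[:, None]
D = np.array([N(ft * z[a]) for a in range(d - 1)])
assert np.allclose(np.einsum('ai,aij->ij', D, z), ft)
```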
The case of finite Markov chains is simple, but as soon as one tries to extend the results to infinite Markov chains, non-explosion conditions appear. The subject is now beginning to be well understood, thanks to the work of Chebotarev, Mohari, Fagnola (references [Che], [ChF], [Fag], [Moh], [MoP], [MoS]).

Classical s.d.e.'s as quantum flows

12  We introduced quantum flows by analogy with classical s.d.e.'s in subs. 5. We return to Ito equations in a more detailed way, following Applebaum [App3]. It should be noted that Applebaum's articles concern a more general definition of flows than Evans–Hudson's, as the coefficients $L^\rho_\sigma$ do not map the algebra into itself. This feature does not occur in the simple example below.
We consider a $C^\infty$ manifold $E$ of dimension $n$, compact for simplicity, and a classical s.d.e. in Stratonovich form

(12.1) $\qquad X_t \;=\; X_0 + \sum_\rho \int_0^t \xi_\rho(X_s)\ast dB^\rho_s\,,$

where $\ast$ denotes a Stratonovich differential, the $\xi_\rho$ are vector fields on the manifold, the $dB^\alpha$ ($\alpha>0$) are independent standard Brownian motions and $dB^0 = dt$. Its Ito form is

(12.2) $\qquad f\circ X_t \;=\; f\circ X_0 + \sum_\alpha \int_0^t \xi_\alpha f\circ X_s\,dB^\alpha_s + \int_0^t Lf\circ X_s\,ds\,,$

where the generator $L$ is $\xi_0 + \frac12\sum_\alpha \xi_\alpha\xi_\alpha$. Our purpose is to interpret this classical flow as a quantum flow $U^*_t\,f\,U_t$ (Applebaum considers $U_t\,f\,U^*_t$) associated with a left unitary evolution. We take as initial space $\mathcal J = L^2(\mu)$, where $\mu$ is a "Lebesgue measure" on $E$, i.e. has a smooth, strictly positive density in every local chart (if the manifold is orientable this amounts to choosing a smooth $n$-form that vanishes nowhere, but $n$-forms are not natural in this problem). We take as a convenient domain $\mathcal D$ the space of complex $C^\infty$ functions, and we also denote by $\mathcal A$ the algebra of multiplication operators associated with $C^\infty$ functions. We denote by $\mathcal L$ the family of operators of the form $K = \xi + k$, where $\xi$ is a smooth real vector field on $E$ and $k$ is a smooth real function (considered as a multiplication operator). These operators preserve $\mathcal D$. We note the following facts:
1) $\mathcal L$ is a real Lie algebra, with $[\,\xi+k,\,\eta+\ell\,] = [\,\xi,\eta\,] + (\xi\ell - \eta k)$.
2) $\mathcal L$ is stable under $\ast$, since for a real field $\xi$ we have $\xi + \xi^* = -\operatorname{div}_\mu\xi$, where $\operatorname{div}_\mu$ is the mapping from vector fields to functions given in local coordinates $x^i$ by $\operatorname{div}_\mu\xi = \sum_i D_i\xi^i + \xi(\log\mu)$, $\mu$ being identified here with its local density w.r.t. $\prod_i dx^i$.
3) Therefore, if $\xi$ is a smooth vector field, $K = \xi + \frac12\operatorname{div}_\mu\xi$ defines a (formally) skew-adjoint operator in $L^2(\mu)$, and in fact $iK$ can be shown to be essentially self-adjoint on $\mathcal D$.
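In one dimension, fact 3) rests on the algebraic identity $(K\phi)\psi\,\mu + \phi\,(K\psi)\,\mu = (\xi\mu\,\phi\psi)'$, whose integral over a circle vanishes. A symbolic sketch on the circle (the density, field and test functions are arbitrary periodic choices):

```python
import sympy as sp

x = sp.symbols('x', real=True)
mu = 2 + sp.sin(x)                     # arbitrary smooth positive density
xi = sp.cos(x)                         # arbitrary vector field xi * d/dx

div_mu = sp.diff(xi, x) + xi * sp.diff(sp.log(mu), x)
K = lambda h: xi * sp.diff(h, x) + sp.Rational(1, 2) * div_mu * h

phi, psi = sp.sin(x), sp.cos(2 * x)    # periodic test functions
total = (K(phi) * psi + phi * K(psi)) * mu
# the symmetrized form is an exact derivative, so its integral over the
# circle vanishes: K is formally skew-adjoint in L^2(mu)
assert sp.simplify(total - sp.diff(xi * mu * phi * psi, x)) == 0
```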
We consider a left s.d.e. satisfying a formal unitarity condition

(12.3) $\qquad dU_t \;=\; U_t\Big( \sum_\alpha K_\alpha\, da^\alpha_0(t) - K^\alpha\, da^0_\alpha(t) + \big(iH - \tfrac12\sum_\alpha K^\alpha K_\alpha\big)\,dt \Big)\,,$

where the operators $K_\alpha$ belong to $\mathcal L$, and $K^\alpha = K^*_\alpha$, while $H$ is a self-adjoint operator mapping $\mathcal D$ to $\mathcal D$. Note that the coefficients of the equation do not belong to $\mathcal A$, but we do have $[\,K_\alpha,\mathcal A\,]\subset\mathcal A$ (and the same with $K^\alpha$), and we demand the same property from $H$. We assume that the equation (12.3) has a unique unitary solution; at the time this report is being written, the theory of s.d.e.'s with unbounded coefficients is beginning to offer concrete conditions implying this assumption.

Let us compute the coefficients of the flow $F\circ X_t = U^*_t\,F\,U_t$ on bounded operators. According to (4.3), they are given (in terms of the coefficients of (12.3), denoted on the right by the same letters) by

$$L^\rho_\sigma(F) \;=\; L^{\rho*}_\sigma\,F + F\,L^\rho_\sigma + \sum_\alpha L^{\alpha*}_\sigma\,F\,L^\rho_\alpha\,.$$

The only non-zero coefficients are

(12.4) $\qquad L^0_\alpha(F) \;=\; [\,K_\alpha, F\,]\,,\qquad L^\alpha_0(F) \;=\; [\,F, K^\alpha\,]\,,$

$$L^0_0(F) \;=\; -i\,[\,H,F\,] + \tfrac12\sum_\alpha\big(2\,K^\alpha F K_\alpha - K^\alpha K_\alpha F - F K^\alpha K_\alpha\big) \;=\; -i\,[\,H,F\,] + \tfrac12\sum_\alpha\big(K^\alpha\,[\,F,K_\alpha\,] + [\,K^\alpha,F\,]\,K_\alpha\big)\,.$$

These mappings preserve $\mathcal A$. Indeed, it suffices to check that for $f\in\mathcal A$ and $M = \xi + k\in\mathcal L$ we have $M^*\,[\,f,M\,] + [\,M^*,f\,]\,M \in\mathcal A$. Now we have a relation of the form $M^* = -M + m$ with $m\in\mathcal A$, and then the above operator is equal to $[\,M,[\,M,f\,]\,] - m\,[\,M,f\,]$, which belongs to $\mathcal A$.
Assuming the general theorems are applicable, the flow itself will preserve the commutative algebra $\mathcal A$, and we try to identify the corresponding flow on $\mathcal A$ with the classical flow (12.2). Of course $dB^\alpha$ corresponds to $dQ^\alpha$, and $\xi_\alpha f$ should correspond to $[\,K_\alpha, f\,]$ (where $f$ is interpreted as a multiplication operator), which according to property 1) above implies $K_\alpha = \xi_\alpha + k_\alpha$ and leaves $k_\alpha$ unrestricted. The operator $L^0_0(f)$ is explicitly given by

$$-i\,[\,H,f\,] + \tfrac12\sum_\alpha \xi_\alpha\xi_\alpha f + \sum_\alpha\big(\tfrac12\operatorname{div}_\mu\xi_\alpha - k_\alpha\big)\,\xi_\alpha f\,,$$

where the last two terms are interpreted as multiplication operators. It seems natural to choose $k_\alpha = \frac12\operatorname{div}_\mu\xi_\alpha$, so that the last sum vanishes, and this choice has an intrinsic meaning, since it is precisely that which turns $K_\alpha$ into a skew-adjoint operator on $L^2(\mu)$. Then we do the same with the additional vector field $\xi_0$: we associate with it the operator $K_0 = \xi_0 + \frac12\operatorname{div}_\mu\xi_0$ and take for $H$ the self-adjoint operator $H = iK_0$; then $-i\,[\,H,f\,]$ is the multiplication operator by $\xi_0 f$, and $L^0_0$ can be identified with $L$. The exponential equation becomes

$$dU_t \;=\; U_t\Big( \sum_\alpha K_\alpha\, dQ^\alpha_t - \big(K_0 + \tfrac12\sum_\alpha K^\alpha K_\alpha\big)\,dt \Big)\,,$$

which is very similar to (12.2). Thus the coefficients $K_\alpha$ depend (in a simple, intrinsic way) on the arbitrary "Lebesgue measure" $\mu$.
13  One can avoid this arbitrary "choice of gauge" if one takes as initial space the intrinsic Hilbert space of the manifold $E$, also called the space of (square integrable) half-densities, or wave functions on $E$. For the significance of half-densities in quantum mechanics, see Accardi [Acc2].
A half-density $\phi$ is a geometric object on a manifold which at every point $x$ and in a local chart $V$ around $x$ has one single complex component $\phi_V(x)$; in a change of charts this component gets transformed according to the formula

$$\phi_W \;=\; \phi_V\,\big( dx_V\big/dx_W \big)^{1/2}\,,$$

this ratio denoting the absolute value of the jacobian determinant. Otherwise stated, $\phi_V\sqrt{dx_V}$ is invariant, and one may define globally, for a measurable half-density, using a partition of unity,

$$\|\phi\|^2 \;=\; \int_E |\phi(x)|^2\,dx\,.$$

The half-densities for which this integral is finite are called wave functions and constitute the intrinsic Hilbert space $\mathcal J$ of the manifold.
Wave functions can be described globally: given a Lebesgue measure $\mu$ with density $m_V$ in a local chart $V$ (with respect to $dx_V$), the reading of a wave function $\phi$ w.r.t. $\mu$ at $x$ is defined to be

$$\phi_\mu(x) \;=\; \phi_V(x)\big/\sqrt{m_V(x)}\,,$$

which does not depend on $V$. Then $\phi_\mu$ becomes an ordinary element of $L^2(\mu)$, while the wave function itself may be denoted symbolically by $\phi_\mu\sqrt{\mu}$. Otherwise stated, the intrinsic Hilbert space is $L^2(\mu)$ with a rule saying how the description changes when $\mu$ is changed:

(13.1) $\qquad \phi_\lambda \;=\; \phi_\mu\,(\mu/\lambda)^{1/2}\,.$

Wave functions can be multiplied by bounded Borel functions, and the initial algebra $C^\infty$ can thus be realized as an algebra of multiplication operators on the intrinsic Hilbert space $\mathcal J$.
Let $S$ be a diffeomorphism of the manifold $E$, let $\Phi$ be a half-density with reading $\Phi_\mu$ with respect to $\mu$, and let $\lambda$ be the image measure $\mu S$. We define the half-density $\Psi = \Phi\circ S$ by its reading w.r.t. $\mu$

(13.2) $\qquad \Psi_\mu \;=\; \Phi_\lambda\circ S\,.$

This is a unitary mapping $U$, since

$$\int_E |\Psi_\mu|^2\,d\mu \;=\; \int_E |\Phi_\lambda|^2\,d\lambda \;=\; \|\Phi\|^2\,.$$

It is easy to see that, if $M(f)$ denotes the multiplication operator by a bounded function $f$, then $U\,M(f)\,U^* = M(f\circ S)$.
A complete vector field $\xi$ defines a flow of diffeomorphisms $S_t = e^{t\xi}$, which acts on functions by $S_t f = f\circ S_t$, on measures $\lambda$ by taking images $\lambda S_t$, and the corresponding derivatives at 0 are given by $D\,S_tf\,|_0 = \xi f$, and, for a measure $\alpha$ with smooth density $a$ in local charts, $D(\alpha S_t)|_0$ has the smooth density $\xi a + \sum_i a\,D_i\xi^i$. In particular, if $\alpha$ is a Lebesgue measure, the density of $D(\alpha S_t)|_0$ with respect to $\alpha$ is $\operatorname{div}_\alpha\xi$. Then, if we let the group act on half-densities, we find that its generator is read on $L^2(\mu)$ as the skew-adjoint operator

(13.3) $\qquad (K\phi)_\mu \;=\; \xi\,\phi_\mu + \tfrac12\operatorname{div}_\mu\xi\;\phi_\mu\,.$

This explains the additional term in the preceding subsection.



Introduction to Azéma martingales and their flows

14  As a last example in this introductory section, we describe a recently discovered class of classical processes, which have non-commutative interpretations on Fock space.
The Azéma martingale appeared first in [Azé] as a particular case of martingales naturally associated with random sets. Let $(B_t)$ be a Brownian motion starting at 0, and let $H$ be the Cantor-like set $\{B=0\}$. For any $t$, we put

(14.1) $\qquad g_t \;=\; \sup\{\,s\le t : B_s = 0\,\}\,,\qquad X_t \;=\; \operatorname{sgn}(B_t)\,\sqrt{2\,(t-g_t)}\,.$

If we make the convention that $\operatorname{sgn}(0)=0$, this process is right continuous, with sample functions which in each zero-free interval have a deterministic absolute value, and a random sign equal to that of the Brownian excursion. Thus the only information needed to reconstruct the process is the knowledge of $H$, and of a sign for each component of $H^c$ (interval contiguous to $H$). Otherwise stated, $X$ may also describe a one dimensional spin field, with parallel spins in each interval contiguous to $H$. Azéma proved:
THEOREM. The processes $X_t$ and $X_t^2 - t$ are martingales (in the filtration $(\mathcal F_t)$ generated by $(X_t)$, not in the Brownian filtration).

In probabilistic language, the angle bracket $\langle X,X\rangle_t$ is equal to $t$. The same holds for the martingale $|X_t| - L_t$ (where $L$ is the local time), which has the predictable representation property; but whether it has the chaotic expansion property is an open problem.
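A Monte-Carlo sketch of the theorem, simulating (14.1) from a random-walk approximation of $B$ (step size, horizon and tolerances are ad hoc choices):

```python
import numpy as np

rng = np.random.default_rng(0)
T, nsteps, npaths = 1.0, 1000, 4000    # ad hoc discretization
dt = T / nsteps
B = np.cumsum(rng.choice([-1.0, 1.0], (npaths, nsteps)) * np.sqrt(dt), axis=1)

sgn = np.sign(B)
X = np.empty(npaths)
for p in range(npaths):
    changes = np.nonzero(np.diff(sgn[p]) != 0)[0]
    g = (changes[-1] + 1) * dt if len(changes) else 0.0   # last zero before T
    X[p] = sgn[p, -1] * np.sqrt(2 * (T - g))              # formula (14.1)

assert abs(X.mean()) < 0.1                   # E[X_T] = 0
assert abs(np.mean(X**2) - T) < 0.1          # E[X_T^2] = <X,X>_T = T
```

The factor $\sqrt2$ in (14.1) is precisely what makes $E[X_t^2]=t$, since $g_t/t$ follows the arcsine law with mean $\frac12$.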
This allows one to define multiple integrals and chaos spaces with respect to $X$, and Emery proved:
THEOREM. The Hilbert sum of the chaos spaces of $X$ is the whole of $L^2(\mathcal F_\infty)$.
For a proof of these results, we refer to [Eme1], [Eme2], or to Protter [Pro]. For the point of view of quantum probability, see Parthasarathy [Par3].
Otherwise stated, Emery discovered a new probabilistic interpretation of simple Fock space. To prove chaotic representation, Emery made use of the following fact, which can be directly verified on (14.1): the martingale $[\,X,X\,]_t - t$ (involving the square bracket from martingale theory, not a commutator!) has a simple stochastic integral representation with respect to $X$:

(14.2) $\qquad d\,[\,X,X\,]_t \;=\; dt - X_{t-}\,dX_t\,,\qquad X_0 = 0\,.$

Such a relation, expressing the square bracket of a martingale in a "predictable" way,


is called a (martingale) structure equation. The simplest structure equations are those
of Brownian motion and compensated Poisson processes,

d[X,X]_t = dt ,   X_0 = 0 ,
d[X,X]_t = dt + c dX_t ,   X_0 = 0 ,

and Emery defined a whole family of "Azéma martingales" containing all martingales
with structure equations of the next degree of complexity

(14.3)   d[X,X]_t = dt + (α + β X_{t−}) dX_t ,   X_0 = x .
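The compensated Poisson case can be checked by hand: for X_t = N_t − t every jump has size 1, so [X,X]_t = N_t, and the structure equation with c = 1 integrates to N_t = t + X_t. A small Python sketch (our own illustration, not part of the text) verifies this identity on a simulated path.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10.0
# Jump times of a rate-1 Poisson process on [0, T] (rate 1, so the compensator is t)
jumps = np.cumsum(rng.exponential(1.0, 100))
jumps = jumps[jumps <= T]

t = np.linspace(0.0, T, 1001)
N = np.searchsorted(jumps, t, side="right")  # N_t = number of jumps <= t
X = N - t                                    # compensated Poisson martingale

# [X, X]_t = sum of squared jumps = N_t (each jump has size 1), and the
# structure equation d[X,X]_t = dt + dX_t integrates to [X,X]_t = t + X_t.
bracket = N.astype(float)
assert np.allclose(bracket, t + X)
```

The identity holds pathwise and exactly, which is why this case serves as the simplest non-Brownian structure equation.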


VI. Stochastic calculus

It has become standard to put β = c − 1, but we do not use this notation here.
These equations are themselves particular cases of structure equations of "homogeneous
Markov type"

(14.4)   d[X,X]_t = dt + φ(X_{t−}) dX_t ,   X_0 = x ,

which can be shown to have solutions if the function φ is continuous. If the solution
starting at x is unique in law, the process (X_t) is a homogeneous Markov process.
Let us return to the Azéma family with α = 0, x = 0. The interval β ∈ [−2, 0] is
the "good" interval: the case β = 0 is that of Brownian motion, and for −2 ≤ β < 0
the r.v. X_t can be shown to be bounded. The standard Azéma martingale (β = −1)
belongs to the good interval, and the limiting case β = −2 also has a simple description
([Eme1], p. 79). The chaotic representation property holds in the "good interval", with
the same proof for all values of β. On the other hand, it rests on the use of moments
(or equivalently of analytic vectors), and for β outside the good interval, the moments
of X_t are too large and very little is known.
15  For a moment, let us consider the case of martingale structure equations (14.4),
with a continuous function φ(x), assuming the solution constitutes a homogeneous
Markov process. Then the process Z_t = X_t − X_0 is an additive functional of this
process. Given a smooth function f, we have an "Ito formula" as follows

(15.1)   f(X_t) − f(X_0) − ∫_0^t Af∘X_s ds = ∫_0^t Hf∘X_{s−} dZ_s ,

where the operators A (the generator) and H are defined as follows

(15.2)   Af(x) = ( f(x + φ(x)) − f(x) − f′(x) φ(x) ) / φ(x)²   if φ(x) ≠ 0 ,
         = ½ f″(x)   if φ(x) = 0 ,

(15.3)   Hf(x) = ( f(x + φ(x)) − f(x) ) / φ(x)   if φ(x) ≠ 0 ,   = f′(x)   if φ(x) = 0 .
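The two-case formulas (15.2)-(15.3) are easy to implement and test. The sketch below (our own illustration; the derivatives are passed in by hand) computes Af and Hf for a given φ and checks that f(x) = x² gives Af ≡ 1 for every φ, consistent with the angle bracket ⟨X,X⟩_t = t.

```python
# A and H from (15.2)-(15.3); f is smooth, fprime and fsecond are its
# first and second derivatives (supplied explicitly in this sketch).
def A(f, fprime, fsecond, phi):
    def Af(x):
        p = phi(x)
        if p != 0.0:
            return (f(x + p) - f(x) - fprime(x) * p) / p**2
        return 0.5 * fsecond(x)   # the diffusion case phi(x) = 0
    return Af

def H(f, fprime, phi):
    def Hf(x):
        p = phi(x)
        if p != 0.0:
            return (f(x + p) - f(x)) / p
        return fprime(x)          # the limit of the difference quotient
    return Hf

# Example: the Azema case phi(x) = beta * x with beta = -1,
# so a jump sends x to x + phi(x) = 0.
beta = -1.0
phi = lambda x: beta * x
f = lambda x: x * x
Af = A(f, lambda x: 2 * x, lambda x: 2.0, phi)
Hf = H(f, lambda x: 2 * x, phi)
# For f(x) = x^2 one gets Af = 1 identically, whatever phi:
assert all(abs(Af(x) - 1.0) < 1e-9 for x in (-2.0, 0.0, 3.5))
```

The φ(x) = 0 branches are exactly the limits of the difference quotients, so A and H interpolate continuously between the jump and diffusion behaviours.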

Let us also assume the chaotic representation property holds for the martingale (Z_t).
Then the structure equation allows us to compute the operator of multiplication by
X_t : the formula d[Z,Z]_t = dt + φ(X_0 + Z_{t−}) dZ_t can be read as

(15.4)   dZ_t = dQ_t + φ(X_t) dN_t .

And then we may compute the multiplication operator by f∘X_t, in the notation of
quantum flows

(15.5)   f∘X_t = f∘X_0 + ∫_0^t Af∘X_s ds + ∫_0^t Hf∘X_s dQ_s + ∫_0^t Hf∘X_s φ(X_s) dN_s .

In the case of the Azéma martingales, the multiplication operator by X_t on simple Fock
space with initial space IR satisfies a linear s.d.e.

(15.6)   X_t = X_0 + Q_t + β ∫_0^t X_s dN_s .



§4. S . D . E . ' S A N D FLOWS : SOME RIGOROUS RESULTS

The existence and uniqueness theorem for classical (i.e. commutative) stochastic
differential equations in IR^ν under a Lipschitz condition includes the standard existence
theorem for ordinary differential equations. One can dream of a "stochastic Stone
theorem" which would similarly include the standard "deterministic" theorem that e^{itH}
is well defined and unitary, provided H is symmetric and has a dense set of analytic
vectors, and on the other hand would include the existence and uniqueness theorem
for stochastic differential equations with C^∞ coefficients on a compact manifold. The
theory of quantum s.d.e.'s and flows is reaching this stage after decisive contributions
of Frigerio, Chebotarev, Mohari and Fagnola, though the stochastic case depends more
than expected on the deterministic case. It would not be reasonable to include here
results from just distributed preprints, but we may at least sketch the main ideas in the
last subsections of this chapter.
The development of the theory has gone through three stages. The first one, that of
s.d.e.'s or flows with bounded coefficients on a Fock space of finite multiplicity, was
settled by Hudson-Parthasarathy [HuP1] and by Evans [Eva1]. These fundamental results
have been generalized in two directions: unbounded coefficients and finite multiplicity
([HuP1], Applebaum [App2], Vincent-Smith [ViS], Fagnola [Fag1] for s.d.e.'s, Fagnola-
Sinha [FaS] for flows), and bounded coefficients with infinite multiplicity (Mohari-Sinha
[MoS]). This last setup (not much studied in classical probability) will be our main
subject here. Then the last turn in the theory reaches the case of unbounded coefficients
and infinite multiplicity by an approximation argument.

The right exponential

1  We deal here with a Fock space of possibly infinite multiplicity. Thus our exponential
domain E is generated by vectors E(ju) such that only finitely many components of
u are different from 0 (we recall the notation S(u) for the set of indexes of the non-
trivial components of u, including 0). We also assume that the components u_α are
locally bounded.

We begin with the study of the right exponential equation with (adapted) coefficients
L^ρ_σ(t) and "initial condition" (H_t) (usually H_t ≡ I)

(1.1)   V_t = H_t + Σ_{ρσ} ∫_0^t L^ρ_σ(s) V_s da^σ_ρ(s) = H_t + K_t .

It may be worthwhile to deal once with the general time-dependent case. From subsection
2 on, we restrict ourselves to coefficients L^ρ_σ which act only on the initial space, with
an independent proof -- therefore, the reader may skip subs. 1 if he wishes to.

We assume that the coefficients L^ρ_σ(s) are bounded operators, and that L^ρ_σ(·) f is
measurable for given f (and similar conditions for H_t). Our first step consists
in describing the natural integrability conditions under which the equation (1.1) has
a solution. To this end, we will reduce (1.1) to an ordinary s.d.e. driven by (infinitely
many) semimartingales.
Given an exponential vector E(ju), we put v_t = V_t E_t(ju), h_t = H_t E_t(ju),
k_t = K_t E_t(ju). By the definition of the operator stochastic integral, the vector curve
k_t satisfies the equation

k_t = Σ_{ρσ} ∫_0^t u_ρ(s) L^ρ_σ(s) v_s dX^σ_s .

Therefore we have

(1.2)   v_t = η_t + Σ_σ ∫_0^t L_σ(u,s) v_s dX^σ_s ,

where L_σ(u,·) = Σ_ρ u_ρ (δ^ρ_σ I + L^ρ_σ) (Evans symbol) and η_t collects the terms
involving h. We omit the mention of u, which now remains fixed.

We begin with a discussion of the case H_t ≡ I, in which we also have η_t ≡ j. We
first solve the deterministic operator differential equations

(1.3)   A_t = I + ∫_0^t L_0(s) A_s ds ,   B_t = I − ∫_0^t B_s L_0(s) ds ,

under the natural assumption that, for finite t,

(1.4)   ∫_0^t || L_0(s) || ds < ∞ .

The solution satisfies the properties

B_t A_t = I ,   || A_t ||, || B_t || ≤ e^{∫_0^t || L_0(s) || ds} .

Then, putting v_t = A_t z_t (hence z_t = B_t v_t) and L̂_σ(t) = B_t L_σ(t) A_t, we have

(1.5)   z_t = j + Σ_σ ∫_0^t L̂_σ(s) z_s dX^σ_s ,

a familiar Ito equation, solvable under the assumptions

(1.6)   for every vector f ,   Σ_σ || L̂_σ(s) f ||² ≤ C²(s) || f ||² ,   ∫_0^t C²(s) ds < ∞ ,

which are in fact equivalent to the same conditions without hats. Then the solution
satisfies the inequalities

(1.7)   || z_t ||² ≤ || j ||² e^{∫_0^t C²(s) ds} ,   || z_t − j ||² ≤ || j ||² ( e^{∫_0^t C²(s) ds} − 1 ) ,

from which similar inequalities follow for v_t. Returning to our initial problem, let us
prove that conditions (1.4) and (1.6) are implied by the following ones (for all ρ)

(1.8)   Σ_σ || L^ρ_σ(s) f ||² ≤ C²_ρ(s) || f ||² ,   with ∫_0^t C²_ρ(s) ds < ∞ .



Indeed, we have

Σ_σ || Σ_{ρ∈S(u)} u_ρ(s) L^ρ_σ(s) f ||² ≤ Σ_σ ( Σ_{ρ∈S(u)} |u_ρ(s)|² ) ( Σ_{ρ∈S(u)} || L^ρ_σ(s) f ||² )

(1.9)        ≤ ( Σ_{ρ∈S(u)} |u_ρ(s)|² ) ( Σ_{ρ∈S(u)} C²_ρ(s) ) || f ||² ,

and we may conclude, since the sum is finite and the first factor is a locally bounded
function.
The case of general "initial data" (H_t) is not more difficult. We use a coarser method,
assuming t ≤ T and using the Schwarz inequality

|| v_t ||² ≤ 3 ( || η_t ||² + || ∫_0^t L_0(s) v_s ds ||² + || Σ_σ ∫_0^t L_σ(s) v_s dX^σ_s ||² )

        ≤ 3 ( || η_t ||² + T ∫_0^t || L_0(s) v_s ||² ds + Σ_σ ∫_0^t || L_σ(s) v_s ||² ds ) ,

from which we may deduce an inequality

(1.10)   || v(t) ||² ≤ 3 || η(t) ||² + ∫_0^t C²(s) || v(s) ||² ds ,

where the function C²(s) is locally integrable. Applying Gronwall's lemma we see that,
whenever η(t) tends to 0 uniformly on bounded intervals (for instance), the same is
true for || v(t) || .
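The Gronwall step can be illustrated in discrete time: if y_k ≤ a_k + Σ_{i<k} c_i y_i, then y_k is dominated by an explicit bound built from the a_i and the products Π(1 + c_j), so y inherits the smallness of a. A sketch (our own, with arbitrary sample data):

```python
# Discrete Gronwall lemma:  y_k <= a_k + sum_{i<k} c_i y_i  implies
#   y_k <= a_k + sum_{i<k} a_i c_i * prod_{i<j<k} (1 + c_j).
# We verify the bound on the extremal case where the hypothesis is an equality.
def gronwall_bound(a, c):
    n = len(a)
    y = []
    for k in range(n):                       # worst case: equality at each step
        y.append(a[k] + sum(c[i] * y[i] for i in range(k)))
    bound = []
    for k in range(n):                       # the explicit Gronwall bound
        s = a[k]
        for i in range(k):
            prod = 1.0
            for j in range(i + 1, k):
                prod *= 1.0 + c[j]
            s += a[i] * c[i] * prod
        bound.append(s)
    return y, bound

a = [0.1, 0.05, 0.02, 0.01, 0.0]   # plays the role of ||eta(t)||^2
c = [0.3, 0.2, 0.5, 0.1, 0.4]      # plays the role of C^2(s) ds
y, bound = gronwall_bound(a, c)
assert all(yk <= bk + 1e-12 for yk, bk in zip(y, bound))
```

Since the bound is a fixed linear functional of the a_i, a → 0 forces y → 0, which is exactly how the uniform convergence of η(t) transfers to v(t) above.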
REMARK. The seminal paper [HuP1] on non-commutative s.d.e.'s dealt with the left
equation. In the time-dependent situation it would raise domain problems, because U_s
is generally defined on the exponential domain E while L^ρ_σ(s) does not preserve E.
2  The most important particular case is that of a s.d.e. whose coefficients L^ρ_σ are time
independent and are ampliations to Ψ_s of bounded operators on the initial space J. In
this case, the above conditions for the existence of the solution of the equation

(2.1)   V_t = H_t + Σ_{ρσ} ∫_0^t L^ρ_σ V_s da^σ_ρ(s)

take the following form:
1) The adapted operator process (H_t) is defined on E, and for arbitrary j, u the
mapping H_t E(ju) is locally square integrable w.r.t. the measure ν(u, dt).
2) For every ρ we have the Mohari-Sinha conditions

(2.2)   for j ∈ J ,   Σ_σ || L^ρ_σ j ||² ≤ C²_ρ || j ||² .


Such a condition is trivially satisfied in the case of finite multiplicity and bounded
coefficients. We shall give it another interpretation after (2.8) at the end of this
subsection, and see in subs. 5 that it gives a meaning to the infinite sums which occur
in the expression for the generator.

Property (2.2) may be considered as an estimate of the norm of a mapping L_ρ from
J to J ⊗ K (the multiplicity space). Since tensoring with I does not increase the
norm, we also have for f ∈ Ψ

(2.2')   Σ_σ || (L^ρ_σ ⊗ I) f ||² ≤ C²_ρ || f ||² ,

which implies the main assumption (1.8) of the preceding subsection. Instead of relying
upon subs. 1, let us give a quick but complete proof in this case. We start the Picard
method with V_0(t) = H_t, and put

(2.3)   V_{n+1}(t) = H_t + Σ_{ρσ} ∫_0^t L^ρ_σ V_n(s) da^σ_ρ(s) .

Therefore V_1(t) − V_0(t) = ∫_0^t Σ_{ρσ} L^ρ_σ H_s da^σ_ρ(s), and then

V_{n+1}(t) − V_n(t) = Σ_{ρσ} ∫_0^t L^ρ_σ ( V_n(s) − V_{n−1}(s) ) da^σ_ρ(s) .

Denoting V_n(t) E(ju) as v_n(t) and writing ν for ν(u, ·), we have for t ≤ T, using the
domination inequalities (9.7) in §1,

|| v_1(t) − v_0(t) ||² ≤ 2 e^{ν(T)} Σ_{ρ∈S(u), σ} ∫_0^t || L^ρ_σ H_s E(ju) ||² ν(ds)

        ≤ 2 e^{ν(T)} ( Σ_{ρ∈S(u)} C²_ρ ) ∫_0^t || H_s E(ju) ||² ν(ds) .

Let us put K = 2 e^{ν(T)} Σ_{ρ∈S(u)} C²_ρ and c = ∫_0^T || H_s E(ju) ||² ν(ds), two quantities
depending on u and T. Then the right side is dominated by cK, and an easy induction
proof following the same lines gives the result

|| v_{n+1}(t) − v_n(t) ||² ≤ c K^n ν^n(t) / n! .

Therefore the left side of (2.3) converges strongly on the exponential domain to a limit
V_t E(ju). To prove that this process solves the right equation, we must check that

Σ_{ρσ} ∫_0^t L^ρ_σ V_n(s) da^σ_ρ(s) → Σ_{ρσ} ∫_0^t L^ρ_σ V_s da^σ_ρ(s) ,

or that

Σ_{ρ∈S(u), σ} ∫_0^t || L^ρ_σ ( V_n(s) − V_s ) E(ju) ||² ν(ds) → 0 ,

which follows from the inequality

Σ_{ρ∈S(u), σ} || L^ρ_σ ( V_n(s) − V_s ) E(ju) ||² ≤ Σ_{ρ∈S(u)} C²_ρ || ( V_n(s) − V_s ) E(ju) ||² .
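The factorial estimate behind the Picard scheme is the same as in the deterministic case. As an illustration (ours, not from the text), iterating x_{n+1}(t) = 1 + K ∫_0^t x_n(s) ds produces the partial sums of e^{Kt}, and the successive differences have sup norm (KT)^n / n! on [0, T]:

```python
import math

K, T, N = 2.0, 1.0, 1000
dt = T / N

def picard_step(x):
    # x_{n+1}(t) = 1 + K * integral_0^t x_n(s) ds  (trapezoid rule on the grid)
    out = [1.0]
    acc = 0.0
    for i in range(N):
        acc += 0.5 * (x[i] + x[i + 1]) * dt
        out.append(1.0 + K * acc)
    return out

x = [1.0] * (N + 1)          # x_0 == 1
for n in range(1, 6):
    x_new = picard_step(x)
    sup_diff = max(abs(a - b) for a, b in zip(x_new, x))
    # the Picard bound, here an equality: sup_t |x_n - x_{n-1}| = (K T)^n / n!
    assert abs(sup_diff - K**n * T**n / math.factorial(n)) < 1e-3
    x = x_new
```

The quantum estimate above replaces (Kt)^n / n! by c K^n ν^n(t) / n!, with ν(u, ·) playing the role of Lebesgue measure.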



The way uniqueness on the exponential domain is proved is standard (consider the
difference W of two solutions, and iterate to prove that || W(t) ||² ≤ K^n / n! for some
K and all n). This uniqueness implies that, if H_t ≡ I and the coefficients L^ρ_σ are time
independent, V(t) is a right cocycle, the idea being that for fixed s, the process

V′_t = V_t   for t ≤ s ,   V′_t = Θ_s(V_{t−s}) V_s   for t > s ,

also is a solution of (2.1). Otherwise stated, to construct V_{s+t}, we perform the same
construction as for V_t, taking Φ_s as the new initial space and V_s as the new initial
operator. One easily sees that V_t E(ju) is continuous in t.
The Picard approximation of order n, in this particular case, has an explicit form
as a sum over all multi-indexes μ = (ε_1, …, ε_m) of order m ≤ n, where ε_1, …, ε_m are
Evans indexes including (0,0) (see §2, formula (2.2))

(2.4)   V_n(t) = Σ_{|μ|≤n} L_μ I_μ(t) ,

where I_μ(t) is an iterated integral, and the order of indices in L_μ thus corresponds to the
higher times standing on the left

I_μ(t) = ∫_{s_1<⋯<s_m<t} da^{ε_m}(s_m) … da^{ε_1}(s_1) ;   L_μ = L_{ε_m} … L_{ε_1} .


REMARKS. 1) Assume that H_t ≡ I and the coefficients are time independent. There
is no difficulty in proving the existence and uniqueness results for solutions U_t of the
left equation, under the Mohari-Sinha condition (2.2). Indeed, introducing the Picard
approximation scheme, we have

(2.5)   U_n(t) = Σ_{|μ|≤n} L′_μ I_μ(t) ,

the ′ in L′_μ indicating that the operators are composed in the reverse ordering. Then we
have exactly the same domination properties for U_n(t) as for V_n(t), and the proof that
U_t satisfies the left equation is even a little easier, since L^ρ_σ preserves the exponential
domain.
2) Consider the case of a finite-dimensional initial space with o.n. basis (e_i), and
equations in matrix form

(2.6)   U^i_j(t) = δ^i_j + Σ_k ∫_0^t U^k_j(s) ( L^i_k ds + da^+_s(λ^i_k) + da^∘_s(Λ^i_k) + da^−_s(⟨μ^i_k|) )

with the following assumptions: the multiplicity space K is the completion of a
prehilbert space K_0; λ^i_k and μ^i_k are elements of K, while the operators Λ^i_k map
K_0 to K, as do their adjoints. We take an orthonormal basis in K_0, denoted by e_α
because of our perverse notation system, denote by ⟨e_α| the corresponding "bra", and
rewrite the s.d.e. in standard form, with coefficients L^ρ_σ given by

(2.7)   L^0_0 e_i = Σ_k L^k_i e_k ,   L^α_0 e_i = Σ_k ⟨e_α, λ^k_i⟩ e_k ,
        L^α_β e_i = Σ_k ⟨e_α, Λ^k_i e_β⟩ e_k ,   L^0_α e_i = Σ_k ⟨μ^k_i, e_α⟩ e_k .



Since J is finite dimensional, L^ρ_σ is always bounded and the M-S conditions are
satisfied: they reduce to the fact that λ^k_i and Λ^k_i e_ρ belong to K, while the dual
M-S conditions are also satisfied, as μ^k_i and Λ^k_i* e_α belong to K.

We may choose an arbitrary o.n.b. in K_0, its first element being any normalized
vector. Therefore the domain of U_t includes all exponential vectors E(ju) with u ∈ K_0.

If J is arbitrary, we also deduce from the M-S theorem that a s.d.e. in the basis-
free form, for instance the left equation

(2.8)   dU_t = U_t ( L dt + da^+_t(λ) + da^∘_t(Λ) + da^−_t(μ) ) ,

is solvable whenever its coefficients are bounded (L from J to J, λ from J to J ⊗ K,
etc.), the exponential domain including all vectors E(ju) with u locally bounded (no
"finite support" restriction).

Introduction to the unbounded case

3  We give now an idea of the hypotheses that lead to the simplest results on unbounded
coefficients. In view of the recent results of Fagnola, it is not clear whether this approach
remains interesting or not.

Instead of assuming norm boundedness conditions on the operators L^ρ_σ, we assume
they are defined on a dense stable domain D ⊂ J, and satisfy for j ∈ D the following
inequalities. Given a finite set U of indexes, we write μ ∈ U to mean that all the upper
indexes ρ_i of μ belong to U. We choose K_n(j, U), for j ∈ D, such that

(3.1)   Σ_{|μ|=n, μ∈U} || L_μ j ||² ≤ K²_n(j, U) ,

and we assume that for all j ∈ D and finite U

(3.2)   Σ_n K_n(j, U) r^n / √(n!) < ∞   for all r > 0 .

We also assume that the operators L^ρ_σ are closable. If the multiplicity is d < ∞,
the number of multi-indices of order n is (d+1)^{2n}, which gets absorbed into the
coefficient r^n, and assumption (3.1) may be replaced by a similar estimate on each
coefficient L_μ j, instead of on their sum.
The same estimates as in the bounded case on the squared norm of

( V_{n+1}(t) − V_n(t) ) E(ju) = Σ_{|μ|=n+1, μ∈S(u)} L_μ I_μ(t) E(ju)

then prove that the operators V_n(t) = Σ_{|μ|≤n} L_μ I_μ(t) have a strong limit V_t on
exponential vectors. We have simple boundedness properties for V_t E(ju) as in the
bounded case.
It is also easy to prove that the relation

V_{n+1}(t) = I + Σ_{ρσ} ∫_0^t L^ρ_σ V_n(s) da^σ_ρ(s)

becomes in the limit the right equation. One remarks that L^ρ_σ (V_{n+1} − V_n) is also a sum
over multi-indices of order n + 2, and the series is normally convergent on the exponential
domain, following which one can define L^ρ_σ V_t using the assumption that L^ρ_σ is closable.
There is no serious difficulty and details are left to the reader.
Similar results hold for the left equation, even without a closability assumption.
REMARK. It is intuitive that the result could be improved using the following rough
idea: since we are defining an "exponential", it should be sufficient to construct it
on some fixed interval [0, a], and to extend it using its algebraic properties. Thus,
instead of demanding convergence for all r > 0, we could assume convergence for r < r_0
independent of j. Such an idea is made rigorous in Fagnola [Fag1] and Fagnola-Sinha
[FaS] for flows.

On the other hand, if the only non-zero coefficient is L^0_0 (skew-adjoint under the
unitarity condition), we would like to recover Nelson's analytic vectors theorem, i.e. in
this case the coefficient √(n!) in (3.2) should be replaced by n!. This may be partially
realized using a finer estimate for || I_μ(t) E(ju) ||, taking into account the number of times
(0,0) appears in μ, and a correspondingly finer assumption in (3.2). Such a domination
property has been given in §1, subs. 9. In fact one can do slightly better, taking also
into account the number of annihilation indexes (see [Fag1]). Such inequalities have also
been used by Fagnola-Sinha for flows on multiple Fock space.

The contractivity condition

4  In this subsection we give (following Mohari) a natural condition implying that the
solution V_t to the right equation is contractive on the exponential domain.

Instead of one single exponential vector, we must consider a finite linear combination
f = Σ_k E(j_k u_k), 1 ≤ k ≤ p, the components of each u_k being denoted u_{kρ}. We recall
the notation in §3, subs. 4: we define an operator 𝐋 on B = J ⊗ (ℂ ⊕ K) (algebraic
tensor product) with matrix 𝐋^ρ_σ given by

(4.1)   𝐋^ρ_σ = L^ρ_σ + (L^σ_ρ)* + Σ_α (L^α_ρ)* L^α_σ .

Since only the individual coefficients L^ρ_σ are assumed to be bounded, the infinite sum
requires a discussion; see the next subsection. Then the contractivity condition is

(4.2)   ⟨ b, 𝐋 b ⟩ ≤ 0   for b ∈ B .
To prove that it implies contractivity, we rewrite Ito's formula as

(d/dt) ⟨ V_t E(ℓv), V_t E(ju) ⟩ = Σ_{ρσ} v̄_σ(t) u_ρ(t) ⟨ V_t E(ℓv), 𝐋^ρ_σ V_t E(ju) ⟩ ,

from which we deduce (f being defined as above)

(d/dt) || V_t f ||² = Σ_{km} Σ_{ρσ} ū_{mσ}(t) u_{kρ}(t) ⟨ V_t E(j_m u_m), 𝐋^ρ_σ V_t E(j_k u_k) ⟩ .

We put b_ρ(t) = Σ_k u_{kρ}(t) V_t E(j_k u_k), and consider these vectors as the components (all
but finitely many of which are = 0) of a vector b. Then the above derivative can be
written as ⟨ b(t), 𝐋 b(t) ⟩, and contractivity follows from (4.2).

If the contractivity condition is satisfied, the fact that V_t → V_0 strongly on the
exponential domain implies the same result on all vectors. Then using the right cocycle
property we find that the mapping V is strongly continuous. Note also that the
contractivity condition implies that I + Λ is a contraction; therefore Λ can be extended
as a bounded operator.

The same reasoning applies to the case of unbounded coefficients (subs. 3).
Under the isometry condition 𝐋 vanishes, and the reasoning proves that V_t is isometric.

Let us anticipate a little the time reversal principle (subsections 6-9) to prove that,
under (2.2), the formal unitarity condition on the coefficients does imply that V_t and U_t
are unitary. If the coefficients L^ρ_σ and their adjoints satisfy the Mohari-Sinha condition
(2.2) and the isometry condition, then the solutions V_t and V′_t of the corresponding right
equations are isometric. By time reversal, U_t is isometric too. On the other hand, it is easy
to see that the solution U_t of the left equation with coefficients L^ρ_σ is adjoint to (V′_t) on
the exponential domain, and therefore its bounded extension is coisometric. Therefore U_t
is unitary, and so is V_t by time reversal.
5  Let us return to the justification of the infinite sum in (4.1) -- it is precisely this
point that makes (2.2) such a natural condition. We use a lemma from Mohari-Sinha
[MoS].

LEMMA. Let (A_n), (B_n) be two sequences of bounded operators on a Hilbert space
H, such that Σ_n A*_n A_n, Σ_n B*_n B_n are strongly convergent. Then Σ_n A*_n B_n is also
strongly convergent.

PROOF. The sequence of positive operators P_n = Σ_{k≤n} A*_k A_k is increasing, and to
say that it converges strongly is equivalent to saying that || P_n || is bounded, or that
|| √P_n || is bounded, or (thanks to the uniform boundedness principle) that for every j

|| √P_n j ||² = ⟨ j, P_n j ⟩ = Σ_{k≤n} || A_k j ||²   is bounded.

Assuming these properties are satisfied, let us prove that Σ_{k≤n} A*_k B_k j is a Cauchy
sequence in norm. Defining C_{mn} = Σ_{k=m}^n A*_k B_k, we have

| ⟨ h, C_{mn} j ⟩ | ≤ Σ_{k=m}^n || A_k h || || B_k j || ≤ ( Σ_{k≥m} || A_k h ||² )^{1/2} ( Σ_{k≥m} || B_k j ||² )^{1/2}

        ≤ C || h || ( Σ_{k≥m} || B_k j ||² )^{1/2} .

It only remains to take a supremum over h on the unit ball. The same lemma gives a
meaning to infinite sums with a bounded operator inserted between A*_k and B_k (§3,
(4.3)).

The right equation as a multiplicative integral

6  Our aim in the next two subsections is to express the solution to the right equation
as a limit of "Riemann products". Our inspiration in the proof comes from a paper by
Holevo [Hol2], which however deals with a much more difficult problem. Then in a third
subsection we apply this result to "Journé's time reversal principle", following Mohari.
For simplicity, we deal only with the case of time independent coefficients.

Our first result will concern the boundedness of "Riemann products". We work on
a fixed interval (0, T), and its finite subdivisions 0 = t_0 < ⋯ < t_n = T. We put

A_i = Σ_{ρσ} M^ρ_σ(i) ( a^σ_ρ(t_{i+1}) − a^σ_ρ(t_i) ) ,

where the operators M^ρ_σ(i) are assumed to be bounded on Φ_{t_i} (abbreviated notation
for Φ_{[0,t_i]}). We are going to estimate

|| Q_n … Q_1 E(ju) ||²   with   Q_i = I + A_i .

We begin with the first interval, omitting the index 1 in t_1 etc. Using the iterated
integral inequality (§1, (9.8)) we have

|| ( I + Σ_{ρσ} M^ρ_σ ∫_0^t da^σ_ρ(s) ) E(ju) ||² ≤ e^{ν(t)} || E(u) ||² ( || j ||² + 2 Σ_{ρσ} || M^ρ_σ j ||² ν(t) )

(the summation over ρ is restricted to the finite set S(u)). Let us assume the operators
M^ρ_σ satisfy the Mohari-Sinha hypothesis (2.2)

Σ_σ || M^ρ_σ j ||² ≤ C || j ||² ,

where C depends only on S(u). If u = 0 outside the interval (0, t), we first have
|| E(u) ||² ≤ e^{ν(t)}, and the right side is bounded by

|| j ||² e^{2ν(t)} ( 1 + 2C ν(t) ) ≤ || j ||² e^{C ν(t)}


(the meaning of C has changed). We extend this result by induction on i as follows.
The part of u outside the large interval [0, T] is uninteresting because of adaptation,
and we may assume it is equal to 0. Then we decompose u into pieces u_i corresponding
to the subdivision intervals (t_i, t_{i+1}). Assume we have computed the vector

j_i = Q_{i−1} … Q_1 E( j (u_1 + ⋯ + u_{i−1}) ) ,

which belongs to Φ_{t_i}. We take this space as our new initial space (note that the space
of exponential vectors changes too), and we compute on (t_i, t_{i+1})

|| ( I + A_i ) E(j_i u_i) ||² .

To this end, we apply the above reasoning, which gives us a bound of the form
|| j_i ||² e^{C(ν(t_{i+1}) − ν(t_i))}, with the same C as above if the Mohari-Sinha condition
holds uniformly in i. These bounds multiply well, and we get a global estimate by
|| j ||² e^{C ν(T)} .
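The deterministic analogue of such a multiplicative integral is elementary and worth keeping in mind: for a single bounded operator M, the Riemann products Π_i (I + M Δt_i), higher times standing on the left, converge to e^{MT}. A Python sketch with a 2×2 matrix (our own illustration, not part of the text):

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew-symmetric: the limit is a rotation
T, n = 1.0, 20000
dt = T / n

P = np.eye(2)
for _ in range(n):
    P = (np.eye(2) + M * dt) @ P   # higher "times" multiply on the left

# limit: exp(MT) = cos(T) I + sin(T) M, a rotation by angle T
expMT = np.array([[np.cos(T), np.sin(T)], [-np.sin(T), np.cos(T)]])
assert np.allclose(P, expMT, atol=1e-3)
```

The norm estimate of this subsection plays the role, in the quantum setting, of the elementary bound ||Π(I + MΔ)|| ≤ e^{||M||T} used to control such deterministic products.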

7  We return to the right exponential equation, with constant coefficients for simplicity,
and under the Mohari-Sinha hypotheses,

V_t = I + Σ_{ρσ} ∫_0^t L^ρ_σ V_s da^σ_ρ(s) .

We take as before a subdivision of [0, T] and put

Q_i = I + Σ_{ρσ} L^ρ_σ ( a^σ_ρ(t_i) − a^σ_ρ(t_{i−1}) ) .

We are going to show that the Riemann products Q_n … Q_1 E(ju) tend weakly to
V_T E(ju), along the sequence of dyadic subdivisions for instance. Since we have just
shown these vectors are uniformly norm bounded, it is sufficient to test weak convergence
on exponential vectors E(ℓv). The arguments u, v of both exponential vectors are
assumed to be bounded on [0, T].

We denote by V_{st} the value at time t of the solution to the right equation on the
interval (s, t) with initial value I at time s. For the interval (t_i, t_{i+1}) we simplify this
to V_i. Then we have V_T = V_n … V_1, and the difference V_T − Q_n … Q_1 can be written
as follows (the abbreviated notation on the last line is clear enough)

V_n V_{n−1} … V_1 − Q_n Q_{n−1} … Q_1 =
( V_n − Q_n ) Q_{n−1} … Q_1 + V_n ( V_{n−1} − Q_{n−1} ) Q_{n−2} … Q_1 + ⋯ + V_n … V_2 ( V_1 − Q_1 )

(7.1)        = D_n Q′_{n−1} + V′_n D_{n−1} Q′_{n−2} + ⋯ + V′_2 D_1 .

We are going to show that

(7.2)   ⟨ E(ℓv), V′_{i+1} D_i Q′_{i−1} E(ju) ⟩ = o(t_{i+1} − t_i) ,

with sufficient uniformity in i to sum these inequalities. A similar argument (see
8 below) is used by Holevo to prove strong convergence of much more complicated
Riemann products, in a finite dimensional situation. Since now we are working on a fixed
subdivision interval, let us simplify our notation, putting t_i = a, t_{i+1} = a + s, T = a + t.
We take Φ_a as our new initial space, with new initial vectors

j′ = Q′_{i−1} ( j ⊗ E(u I_{[0,a]}) ) ,   ℓ′ = ℓ ⊗ E(v I_{[0,a]}) .

From the preceding subsection we shall use the norm boundedness of j′, independently
of the interval we work on (within [0, T]). Then our scalar product can be written as
follows

⟨ ℓ′ ⊗ E(v′_1) ⊗ E(v′_2) , V_{st} ( D E(j′ u′_1) ⊗ E(u′_2) ) ⟩ ,

where the arguments u′_1, v′_1 are carried by (0, s), u′_2, v′_2 by (s, t), and the notation V_{st}
is the solution on (s, t) described above.

We expand everything using multi-index notation:

D E(j′ u′_1) = Σ_{|λ|≥2} L_λ j′ ⊗ I^s_λ E(u′_1) ,

V_{st}( D E(j′ u′_1) ) = Σ_{|λ|≥2, |μ|≥0} L_μ L_λ j′ ⊗ I^s_λ E(u′_1) ⊗ I^{t−s}_μ E(u′_2) ,

and the scalar product we have to compute appears as

Σ_{|λ|≥2, |μ|≥0} ⟨ ℓ′, L_μ L_λ j′ ⟩ ⟨ E(v′_1), I^s_λ E(u′_1) ⟩ ⟨ E(v′_2), I^{t−s}_μ E(u′_2) ⟩ .

This sum can be estimated along the following lines

Σ_{m≥0, n≥2} Σ_{|μ|=m, |λ|=n} || ℓ′ || || L_μ L_λ j′ || a_μ b_λ ≤
|| ℓ′ || Σ_{m,n} ( Σ_{|μ|=m, |λ|=n} || L_μ L_λ j′ ||² )^{1/2} ( Σ_{|μ|=m} a_μ ) ( Σ_{|λ|=n} b_λ ) .

The first inner sum is of order C^{m+n}, using the Mohari-Sinha assumptions. Next, if
the lower and upper indices of μ are respectively σ_1, …, σ_m and ρ_1, …, ρ_m, we have
(omitting indexes appended to u, v)

a_μ = | ⟨ E(v), I^{t−s}_μ E(u) ⟩ | ≤ (1/m!) ⟨ |u_{ρ_1}| ⊗ ⋯ ⊗ |u_{ρ_m}| , |v_{σ_1}| ⊗ ⋯ ⊗ |v_{σ_m}| ⟩ ,

and therefore, the finitely many non-zero components u_ρ, v_σ being bounded on [0, T],

Σ_{|μ|=m} a_μ ≤ C^m (t − s)^m / m! .

For the sum on λ, we assume we work with subdivisions of step smaller than 1. Since
the sum is only over |λ| = n ≥ 2, we then bound s^n by s²

Σ_{|λ|=n} b_λ ≤ s² C^n / n! .

The coefficient of s² being uniformly bounded, (7.2) is proved.


8  ADDITIONAL RESULTS. 1) We are going to sketch a proof, due to Holevo, that leads
to strong convergence in many cases. This discussion will not be used in the sequel.
We return to formula (7.1), and compute

(8.1)   || ( V_T − Q_n … Q_1 ) E(ju) ||² = Σ_{ik} ⟨ V′_{i+1} D_i Q′_{i−1} E(ju), V′_{k+1} D_k Q′_{k−1} E(ju) ⟩ .

It is sufficient to prove that the terms with i = k are o(t_{i+1} − t_i), while those with i ≠ k
are o((t_{i+1} − t_i)(t_{k+1} − t_k)). One first considers the case of a multiple Fock space (possibly
infinite dimensional) without initial space -- therefore the coefficients L^ρ_σ are scalars,
and the initial vector j disappears from (8.1). Then V′_{i+1} D_i Q′_{i−1} is a tensor product
of three operators acting on different pieces of Fock space, and the scalar products on
the right side decompose. The scalar product with i = k appears as the product of
|| D_i E(u_i) ||² by a bounded factor, and we leave it to the reader, using Ito's formula, to
prove this is O((t_{i+1} − t_i)²). On the other hand, the scalar product with i < k is the
product of a bounded factor and two factors

⟨ D_i E(u_i), Q_i E(u_i) ⟩ ,   ⟨ V_k E(u_k), D_k E(u_k) ⟩ .


Since V_k = Q_k + D_k and the case of equal indexes has been considered above, only the
first scalar product needs to be studied, which we again leave to the reader.

The case of a non-trivial initial space then is reduced to the preceding one using
Holevo's domination principle for kernels (Chap. V, §2, subs. 5), as follows. We associate
with the operators L^ρ_σ on J the scalars L̂^ρ_σ = || L^ρ_σ ||, and solve the corresponding s.d.e.
with trivial initial Fock space, marking with hats V̂_i, D̂_i … the corresponding operators.
We also denote by û the element of L²(IR_+, K) with components û_α = |u_α|. Then the
absolute value of any scalar product

⟨ V′_{i+1} D_i Q′_{i−1} E(ju) , V′_{k+1} D_k Q′_{k−1} E(ju) ⟩

is bounded by the corresponding scalar product

|| j ||² ⟨ V̂′_{i+1} D̂_i Q̂′_{i−1} E(û) , V̂′_{k+1} D̂_k Q̂′_{k−1} E(û) ⟩ ,

and therefore strong convergence for the scalar equation implies the same result for
the vector one -- however, if the multiplicity is infinite, the Mohari-Sinha conditions
for the scalar equation are stronger than for the vector equation, as they demand that
Σ_σ || L^ρ_σ ||² < ∞.
This method can be extended to s.d.e.'s whose coefficients depend on time but act
only on the initial space, while the method we have used for weak convergence can be
used for s.d.e.'s with coefficients L^ρ_σ(s) acting on Φ_s.

2) Formally, what we did was to define a right multiplicative integral

V_t = Π_{s<t} ( I + Σ_{ρσ} L^ρ_σ(s) da^σ_ρ(s) )

(the higher times standing to the left of the product), and to identify V_t as the solution
of a right s.d.e. Holevo does the same thing with the product

W_t = Π_{s<t} exp( Σ_{ρσ} L^ρ_σ(s) da^σ_ρ(s) ) .

Both the convergence problem and the identification of the limit are much more difficult,
and we can only refer to the article [Hol2].

Time reversal

9  We fix an interval (a, b) and consider the following unitary operator on L²(IR_+, K)

(9.1)   û(s) = u(s)   for s < a or s > b ,   = u(a + b − s)   for a < s < b .

We extend it as a unitary operator on Φ by second quantization, and then to Ψ by
tensoring with I_J. Since the "hat" notation may be ambiguous, we also introduce a
notation R or R_{ab} (simply R_b if a = 0) for this unitary operator, called the time
reversal operator on (a, b). Note that R² = I. As usual, we have it act on operators
by R(A) = R A R. This preserves b-adapted operators.
We have ∫_0^∞ û_s · dX_s = ∫_0^∞ u_s · dX̂_s , where

(9.2)   X̂_t = X_t   for t ≤ a or t ≥ b ,   = X_a + X_b − X_{a+b−t}   for a < t < b ,

is a new Brownian motion, which generates the same Fock space (but not the same
filtration on Fock space). Note that the same transformation applied to X⁰_t = t preserves
this function.
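At the level of L²(IR_+, K), the operator (9.1) is just a reflection of the argument inside (a, b); on a grid it reverses a block of samples. The sketch below (our own illustration; the reflection is only resolved up to the grid step) checks the two properties used here: R is an involution and preserves the L² norm.

```python
import numpy as np

def reverse_ab(u, t, a, b):
    # (9.1): u_hat(s) = u(s) off (a, b), u(a + b - s) inside; on a uniform
    # grid this amounts to reversing the block of samples carried by (a, b)
    u_hat = u.copy()
    inside = (t > a) & (t < b)
    u_hat[inside] = u[inside][::-1]
    return u_hat

t = np.linspace(0.0, 4.0, 4001)
u = np.sin(3.0 * t) * np.exp(-t)
a, b = 1.0, 3.0

u_hat = reverse_ab(u, t, a, b)
# R^2 = I, and R is unitary (it permutes the samples, so norms are preserved)
assert np.allclose(reverse_ab(u_hat, t, a, b), u)
assert np.isclose(np.sum(u**2), np.sum(u_hat**2))
```

Second quantization then carries this unitary of L²(IR_+, K) to the Fock space, which is the operator R of the text.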
It is then very simple to describe how R acts on vectors and operators. First,
it transforms any multiple integral w.r.t. X into the same integral w.r.t. X̂. It
preserves the exponential domain, and transforms the Evans operators a^σ_ρ(t) into the
corresponding operators â^σ_ρ(t) w.r.t. X̂. Finally, any operator given by a kernel w.r.t.
the family a^σ_ρ is transformed into the operator with the same kernel w.r.t. the family â^σ_ρ.

We will need the following elementary lemma: if the operator V is a-adapted, then

(9.3)   R_{a+b}( R_a(V) ) = Θ_b(V) .

PROOF. We assume for simplicity that the initial space is trivial, and denote by W an
a-adapted operator. It will be sufficient to test the equality on vectors k = f ⊗ θ_b g ⊗ θ_{a+b} h,
with f ∈ Φ_b, g ∈ Φ_a, h ∈ Φ. Then we have R_{a+b} k = ĝ ⊗ θ_a f̂ ⊗ θ_{a+b} h, where f̂, ĝ denote f
and g reversed respectively at b and a. Applying W gives W(ĝ) ⊗ θ_a f̂ ⊗ θ_{a+b} h, and
applying R_{a+b} gives

R_{a+b} W R_{a+b} k = f ⊗ θ_b ( R_a W R_a g ) ⊗ θ_{a+b} h .

If we now take W = R_a(V), what we find is R_{a+b}( R_a(V) ) k = Θ_b(V) k.

Applying R_{a+b} to (9.3) gives, with the same notation

(9.4)   R_{a+b}( Θ_b(V) ) = R_a(V) .

Consider now a right cocycle (V_t):

V_{s+t} = Θ_t(V_s) V_t ,

and define U_t = R_t(V_t). Applying R_{s+t} to this relation and using (9.4) and (9.3), we
have

U_{s+t} = ( R_{s+t}( Θ_t( R_s(U_s) ) ) ) ( R_{s+t}( R_t(U_t) ) ) = U_s Θ_s(U_t) ,

and we see that a right cocycle has been transformed into a left cocycle by time reversal.

If there is a non-trivial initial space, properties (9.3) and (9.4) extend immediately
to operators of the form A ⊗ V, where A acts on the initial space and V is a-
adapted on Φ. Then they are extended to general a-adapted operators by taking linear
combinations and weak closure. The proof then is finished in the same way.
10 We take for our right cocycle the solution $(V_t)$ to a right exponential equation and try to identify the corresponding left cocycle. We know that $V_t$ is a weak limit (on the exponential domain) of products

(10.1)   $V_t = \lim\,\prod_i\,\Big(\,I + \sum_{\rho\sigma} L^\rho_\sigma\,\big(a^\rho_\sigma(t_{i+1}) - a^\rho_\sigma(t_i)\big)\,\Big)\;.$

Applying $R_t$ preserves weak convergence on the exponential domain, and therefore we have

(10.2)   $U_t = \lim\,\prod_i\,\Big(\,I + \sum_{\rho\sigma} L^\rho_\sigma\,\big(\widetilde a^\rho_\sigma(t_{i+1}) - \widetilde a^\rho_\sigma(t_i)\big)\,\Big)\;.$
184 VI. Stochastic calculus

On the other hand, these increments of the process $\widetilde a^\rho_\sigma$ are also increments of $a^\rho_\sigma$ relative to a reversed subdivision, i.e. the order of factors is now reversed in time. Therefore $U_t$ is a good candidate to be a solution to the left equation. That it is indeed so is "Journé's time reversal principle".
We are able to prove the "principle" (following Mohari) whenever the Mohari-Sinha regularity condition is satisfied by the adjoint coefficients. Indeed, in this case we may apply the Riemann product approximation to the right equation with adjoint coefficients, then take adjoints, and find a Riemann product approximation for the left equation, which is the same as (10.2).
If the adjoint regularity condition is not satisfied, it is not clear in which sense $(U_t)$ constructed by time reversal can be considered a solution to the left equation.
REMARK. Our proof of unitarity in subs. 4 used time reversal, which has been established only in the case of bounded coefficients. For a proof which applies to unbounded coefficients (and finite multiplicity), see Applebaum [App2], Theorem 4.2.
Existence of quantum flows

11 It will be useful for applications to generalize the setup of quantum flows, replacing the algebra $\mathcal A$ of operators on the initial space $\mathcal J$ by a mere linear space, the coefficients $L^\rho_\sigma$ being linear mappings from $\mathcal A$ to $\mathcal A$, and the role of "random variables" $X_s$ being played by linear mappings from $\mathcal A$ to $s$-adapted operators on $\Psi$. It is only assumed that a linear mapping $X_0$ is given, from $\mathcal A$ to operators on $\mathcal J$ (or equivalently $0$-adapted operators on $\Psi$). Then we are going to discuss an equation

(11.1)   $F\circ X_t = F\circ X_0 + \int_0^t \sum_{\rho\sigma} L^\rho_\sigma(F)\circ X_s\;da^\rho_\sigma(s)\;,$
pc,
where we have kept the notation $F\circ X_t$ for $X_t(F)$.
In many cases, $\mathcal A$ has an involution, the mappings $L^\rho_\sigma$ satisfy the condition $L^\rho_\sigma(F^*) = (L^\sigma_\rho(F))^*$, and $X_0$ the condition $F^*\circ X_0 = (F\circ X_0)^*$. Then we expect also to have $F^*\circ X_t = (F\circ X_t)^*$. However, this will not play an important role in the discussion.
Equation (11.1) includes the standard equation of quantum flows, and also the usual right equation (1.1), taking $\mathcal A = \mathcal L(\mathcal J)$, $L^\rho_\sigma(F) = L^\rho_\sigma F$ and $F\circ X_t = V_t F$, while the left equation corresponds to $L^\rho_\sigma(F) = F L^\rho_\sigma$, $F\circ X_t = F U_t$. In these cases, the cumbersome $F\circ X_0$ is simply $F$. The reader may wish to consider only this situation at a first reading, though we will mention later an application of (11.1) where $\mathcal J = \mathbb C$ and $F\circ X_0$ is of the form $\delta(F)\,I$, $\delta$ being a linear functional on $\mathcal A$.
To avoid lengthy expressions, we use multi-index notation : $\mu$ is a multi-index of length $|\mu| = n$, with components $\mu_i = \binom{\rho_i}{\sigma_i}$, and $I_\mu$ is the corresponding multiple integral $\int_{s_1<\dots<s_n<t} da^{\rho_n}_{\sigma_n}(s_n)\ldots da^{\rho_1}_{\sigma_1}(s_1)$ ; $L_\mu(F)$ is $L^{\sigma_1}_{\rho_1}(\ldots(L^{\sigma_n}_{\rho_n}(F))\ldots)$ (upper and lower indices have been exchanged). Finally, we denote by $U_n$ the set of multi-indexes $\mu$ of length $n$ whose indexes $\rho_i$ belong to the "support" $S(u)$.
Given any fixed $F\in\mathcal A$, $j\in\mathcal J$ and $u\in L^2(\mathbb R_+,\mathcal K)$ with finite "support", we put

(11.2)   $S_n^2(F,j,u) = \sum_{\mu\in U_n} \|\,L_\mu(F)\circ X_0\,j\,\|^2\;.$

Explicitly, the sum bears on multi-indexes of length $n$ with all $\rho_i$ belonging to $S(u)$, but the $\sigma_i$ arbitrary. In a somewhat inaccurate way, we will call Mohari-Sinha hypothesis the following assumption (see remark 3 below) : there exists a constant $K$ (depending on $F, j, u$) such that

(11.3)   $S_n(F,j,u) \le K^n\;.$

We will call weak Mohari-Sinha hypothesis the following assumption

(11.3')   $S_n(F,j,u) \le K\,\sqrt{n!}\;R^{-n}$ for all $R>0$.

The constants $K$ appearing in these formulas are examples of "uninteresting constants", and we use for them dummy notations $K, C, a\ldots$, without caring to change the letter from place to place.
REMARKS. 1) In the case of flows, the finiteness of $S_1(F,j,u)$ for all $j$ means that for given $\rho$ and $F$, the sum $\sum_\sigma L^\rho_\sigma(F)^* L^\rho_\sigma(F)$ is strongly convergent ; using the lemma from subsection 5, this gives a meaning to the infinite sums $\sum_\alpha L^\rho_\alpha(G)\,L^\alpha_\sigma(F)$ appearing in the structure equations.
2) If $\mathcal A$ has an involution satisfying the hypotheses mentioned above, replacing $F$ by $F^*$ and taking adjoints we may reverse the roles of upper and lower indexes.
3) A sufficient condition for (11.3), involving the mappings $L^\rho_\sigma$ and not their products, is taken by Mohari-Sinha as their main assumption :

(11.4)   $\sum_\sigma \|\,L^\rho_\sigma(F)\circ X_0\,j\,\|^2 \le \sum_\sigma \|\,F\circ X_0\,D^\rho_\sigma j\,\|^2$ with $\sum_\sigma \|\,D^\rho_\sigma j\,\|^2 \le C_\rho\,\|j\|^2$

(in particular, the operators $D^\rho_\sigma$ on $\mathcal J$ are bounded). However, the operators $D^\rho_\sigma$ themselves are never used, and only (11.3) appears in the proof given in [MoS]. We prefer to emphasize the really basic property, at the cost of historical accuracy.
4) In the case of flows induced by unitary solutions of s.d.e.'s, §3 subs. 4, the mappings $L^\rho_\sigma$ being given by §3, (4.3), it is not trivial to relate the two systems of "Mohari-Sinha conditions".

12 We are going to prove that equation (11.1) has a unique solution under (11.3), using the Picard method.
The standard iteration gives an approximation of $X_s$ by a sequence of mappings $X^n_s$ from $\mathcal A$ to $s$-adapted operators on $\Psi$. We use again the notation $F\circ X^n_s$ for $X^n_s(F)$ (though in the case of flows it should not suggest that $X^n_s$ is a random variable, i.e. an algebra homomorphism). The value of $F\circ X_0$ is known, while we have

(12.1)   $F\circ X^{n+1}_t = F\circ X_0 + \int_0^t \sum_{\rho\sigma} L^\rho_\sigma(F)\circ X^n_s\;da^\rho_\sigma(s)\;.$

In multi-index notation, we have an explicit form for $F\circ X_t$,

(12.2)   $F\circ X_t = \sum_\mu L_\mu(F)\circ X_0\otimes I_\mu\;,$



this series being summed by blocks according to the multi-indexes' lengths, $F\circ X^n_t$ being its partial sum corresponding to $|\mu|\le n$. We apply this series to an exponential vector $\mathcal E(ju)$ and prove it is strongly convergent. We have

$(F\circ X^n_t - F\circ X^{n-1}_t)\,\mathcal E(ju) = \sum_{\mu\in U_n} L_\mu(F)\circ X_0\,j\otimes I_\mu\,\mathcal E(u)\;,$

whose squared norm (according to §1, formula (9.8)) is bounded for $t\le T$ by the following quantity

(12.3)   $a\;\frac{(C\,\nu_u(t))^n}{n!}\;\sum_{\mu\in U_n}\|\,L_\mu(F)\circ X_0\,j\,\|^2$ with $a = \|\mathcal E(u)\|^2$, $C = 2\,e^{\nu_u(T)}\;.$

The coefficient $\nu_u(t)$ is significant, since it tends to $0$ with $t$. However, our discussion here is not refined, and we pack it with the other ones. Using the notation (11.2) we get the inequality

(12.4)   $\|\,(F\circ X^n_t - F\circ X^{n-1}_t)\,\mathcal E(ju)\,\| \le a\,\frac{C^n}{\sqrt{n!}}\;S_n(F,j,u)\;,$

which under (11.3) or (11.3') has a finite sum over $n$. Therefore we may pass to the limit and define $F\circ X_t\,\mathcal E(ju)$. We also get a bound for $\|\,F\circ X^n_t\,\mathcal E(ju)\,\|$, uniform in $n$ and $t\le T$,

(12.5)   $\|\,F\circ X^n_t\,\mathcal E(ju)\,\| \le a\,\sum_n \frac{C^n}{\sqrt{n!}}\;S_n(F,j,u)$

with a different constant $C$ and $a = \|\mathcal E(u)\|$. We denote by $T_0(F,j,u)$ the right side of this equation.
The weak Mohari-Sinha assumption (11.3') leads to an estimate that will be useful for the sequel. Applying the Schwarz inequality to

$T_0(F,j,u) = a\,\sum_n (C/R)^n\;S_n(F,j,u)\,R^n/\sqrt{n!}$

we get that

(12.6)   $T_0^2(F,j,u) \le a\,\sum_n \frac{C^n}{n!}\;S_n^2(F,j,u)\;.$

Let us return to the mappings $X_t$, and prove they satisfy the equation (11.1) — and first, that the stochastic integral $\int_0^t \sum_{\rho\sigma} L^\rho_\sigma(F)\circ X_s\;da^\rho_\sigma(s)$ has a meaning on the exponential domain. According to §1, (9.8) this amounts to saying that

$\sum_{\rho\sigma}\int_0^t \|\,(L^\rho_\sigma(F)\circ X_s)\,\mathcal E(ju)\,\|^2\;\nu_u(ds)$

is finite. On the other hand, we have by definition $L^\rho_\sigma(F)\circ X_t = \lim_n L^\rho_\sigma(F)\circ X^n_t$, which can be explicitly computed and is easy to dominate. Then one checks that

$\int_0^t \sum_{\rho\sigma} L^\rho_\sigma(F)\circ X_s\;da^\rho_\sigma(s) = \lim_n \int_0^t \sum_{\rho\sigma} L^\rho_\sigma(F)\circ X^n_s\;da^\rho_\sigma(s)\;,$

which follows from the fact that

$\sum_{\rho\sigma}\int_0^t \|\,(L^\rho_\sigma(F)\circ X_s - L^\rho_\sigma(F)\circ X^n_s)\,\mathcal E(ju)\,\|^2\;\nu_u(ds)$

tends to $0$ (similar reasoning). This being done, the equation is deduced from the induction relation defining $F\circ X^n_t$ by a passage to the limit.
We leave to the reader a few easy details, like the strong continuity in $t$ of $F\circ X_t$ on the exponential domain, or the relation $F^*\circ X_t = (F\circ X_t)^*$.
It is interesting to note that instead of (11.3) we may assume only the weak hypothesis (11.3'), i.e. an inequality $S_n(F,j,u) \le C\,\sqrt{n!}\;R^{-n}$ for all $R>0$, similar to that we used for s.d.e.'s with unbounded coefficients.
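The convergence mechanism in (12.4) is the classical factorial gain of the Picard method. The following Python sketch illustrates it on the scalar equation $x(t) = 1 + c\int_0^t x(s)\,ds$, a classical analogue chosen only for illustration (there the $n$-th Picard increment is exactly $(ct)^n/n!$):

```python
import math

def picard_increments(c, t, n_max):
    # For x(t) = 1 + c * integral_0^t x(s) ds, the increment of order n
    # of the Picard scheme is exactly (c t)^n / n!.
    return [(c * t) ** n / math.factorial(n) for n in range(n_max + 1)]

incs = picard_increments(c=3.0, t=1.0, n_max=30)
assert abs(sum(incs) - math.exp(3.0)) < 1e-9   # the series sums to e^{ct}
assert incs[25] < 1e-10                        # factorial decay beats c^n
```

Under (11.3) the quantum increments are bounded by $a\,C^nK^n/\sqrt{n!}$, and the same comparison with a convergent series applies.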

The homomorphism property

13 Here we deal with the specific case of Evans-Hudson flows, and discuss the multiplicative property of the flow, $(GF)\circ X_t = (G\circ X_t)\,(F\circ X_t)$. This is the main algebraic difficulty of the theory. The first proof (due to Evans) was transformed by Mohari-Sinha in a very remarkable way to adapt to infinite dimensional situations. Since now $\mathcal A\subset\mathcal L(\mathcal J)$ and $F\circ X_0 = F\otimes I$, we can write $Fj$, $F\,\mathcal E(ju)$ without mentioning $X_0$.
In view of later applications, the reader should read this proof carefully and convince himself that, in the final estimate, the product $GF$ disappears altogether, and the result amounts to proving that some complicated sums of scalar products tend to $0$, involving only vectors of the form $L_\mu(F)\,\mathcal E(ju)$, $L_\nu(G)\,\mathcal E(\ell v)$.
We want to prove that

$B_t(G,F) = \langle\,\mathcal E(\ell v),\,(G\circ X_t\;F\circ X_t)\,\mathcal E(ju)\,\rangle - \langle\,\mathcal E(\ell v),\,(GF)\circ X_t\,\mathcal E(ju)\,\rangle = 0\;.$

The first scalar product can be written $\langle\,G^*\circ X_t\,\mathcal E(\ell v),\,F\circ X_t\,\mathcal E(ju)\,\rangle$. Since strong convergence holds on the exponential domain, it suffices to show that

(13.1)   $B^n_t(G,F) = \langle\,G^*\circ X^n_t\,\mathcal E(\ell v),\,F\circ X^n_t\,\mathcal E(ju)\,\rangle - \langle\,\mathcal E(\ell v),\,(GF)\circ X^n_t\,\mathcal E(ju)\,\rangle$

tends to $0$ as $n\to\infty$.
Using Ito's formula, we transform this expression into $\sum_{\rho\sigma}\int_0^t \bar v_\sigma(s)\,u_\rho(s)\,C^\rho_\sigma(s)\,ds$, putting

$C^\rho_\sigma(s) = \langle\,G^*\circ X^n_s\,\mathcal E(\ell v),\,L^\rho_\sigma F\circ X^{n-1}_s\,\mathcal E(ju)\,\rangle + \langle\,L^\sigma_\rho G^*\circ X^{n-1}_s\,\mathcal E(\ell v),\,F\circ X^n_s\,\mathcal E(ju)\,\rangle$

$\qquad + \sum_\alpha \langle\,L^\alpha_\rho G^*\circ X^{n-1}_s\,\mathcal E(\ell v),\,L^\alpha_\sigma F\circ X^{n-1}_s\,\mathcal E(ju)\,\rangle - \langle\,\mathcal E(\ell v),\,L^\rho_\sigma(GF)\circ X^{n-1}_s\,\mathcal E(ju)\,\rangle\;.$

Note that indexes are not balanced! If $X^{n-1}$ were a homomorphism, we could use the structure equation to transform $B^n_t(G,F)$ into the following expression (the sum is over $\sigma\in S(v)$ and $\rho\in S(u)$)

(13.2)   $R^n_t(G,F) = \sum_{\rho\sigma}\int_0^t \bar v_\sigma(s)\,u_\rho(s)\,\Big(\,\langle\,(G^*\circ X^n_s - G^*\circ X^{n-1}_s)\,\mathcal E(\ell v),\,L^\rho_\sigma F\circ X^{n-1}_s\,\mathcal E(ju)\,\rangle$

$\qquad\qquad + \langle\,L^\sigma_\rho(G^*)\circ X^{n-1}_s\,\mathcal E(\ell v),\,(F\circ X^n_s - F\circ X^{n-1}_s)\,\mathcal E(ju)\,\rangle\,\Big)\,ds\;;$

but since $X^{n-1}$ is not multiplicative, a second term appears

(13.3)   $Z^n_t(G,F) = \sum_{\rho\sigma}\int_0^t \bar v_\sigma(s)\,u_\rho(s)\,\Big(\,B^{n-1}_s(G,\,L^\rho_\sigma F) + B^{n-1}_s(L^\sigma_\rho G,\,F) + \sum_\alpha B^{n-1}_s(L^\rho_\alpha G,\,L^\alpha_\sigma F)\,\Big)\,ds\;.$
For possible extensions, it may be useful to note that the structure equation is used only within a scalar product, i.e. the infinite sum appearing in it need not be defined as an operator, only as a form.
We start with the domination of $R^n_t(G,F)$, which consists of two similar terms with $G$, $F$ interchanged, of which we study the first. We recall the notation $\nu_v(ds) = (1+\|v(s)\|^2)\,ds$. Several measures of this kind occur in the proof, and for simplicity we use only the largest one $\nu(ds)$, with density $(1+\|v(s)\|^2)(1+\|u(s)\|^2) = \sum_{\rho\sigma}|\,\bar v_\sigma(s)\,u_\rho(s)\,|^2\;.$
We dominate the scalar product

$\langle\,(G^*\circ X^n_s - G^*\circ X^{n-1}_s)\,\mathcal E(\ell v)\,,\;\sum_{\rho\sigma}\bar v_\sigma(s)\,u_\rho(s)\,L^\rho_\sigma F\circ X^{n-1}_s\,\mathcal E(ju)\,\rangle = \langle\,A(s)\,,\,B(s)\,\rangle$

by $\|A(s)\|\,\|B(s)\|$, and the integral $R^n_t(G,F)$ then is bounded using the Schwarz inequality. We first apply the Mohari-Sinha inequality extended to multi-indexes (§1, (9.8)) : since $A(s) = \sum_{\mu\in U_n} L_\mu(G^*)\,\ell\otimes I_\mu(s)\,\mathcal E(v)$ we have

$\|A(s)\|^2 \le \sum_{\mu\in U_n}\int_{s_1<\dots<s_n<s} \|\,L_\mu(G^*)\,\ell\,\|^2\,\|\mathcal E(v)\|^2\;\nu(ds_1)\ldots\nu(ds_n) = \sum_{\mu\in U_n}\|\,L_\mu(G^*)\,\ell\,\|^2\;\frac{\nu(s)^n}{n!}\;\|\mathcal E(v)\|^2\;,$

still to be integrated from $0$ to $t$. Otherwise stated

(13.4)   $\int_0^t \|A(s)\|^2\,ds \le a\;\frac{\nu(t)^{n+1}}{(n+1)!}\;S_n^2(G^*)\;.$

Again $a$ denotes an uninteresting quantity, and $S_n(G)$ abbreviates $S_n(G,\ell,v)$, since the last two data always go hand in hand with $G$ and no ambiguity occurs. Similarly

(13.5)   $\|B(s)\|^2\,ds \le \Big(\,\sum_{\rho\sigma}\|\,L^\rho_\sigma(F)\circ X^{n-1}_s\,\mathcal E(ju)\,\|^2\,\Big)\,\Big(\,\sum_{\rho\sigma}|\,\bar v_\sigma(s)\,u_\rho(s)\,|^2\,\Big)\,ds \le a\;T_1^2(F)\;\nu(ds)\;,$

where we put generally

(13.6)   $T_n^2(F) = T_n^2(F,j,u) = \sum_{\mu\in U_n} T_0^2(L_\mu(F),j,u)\;.$

Integrating from $0$ to $t$ has no interesting effect, and contributes only to the constant $a$. Therefore, the first term in $R^n_t(G,F)$ is bounded by

(13.7)   $a\;\frac{\nu(t)^{(n+1)/2}}{\sqrt{(n+1)!}}\;S_n(G^*)\;T_1(F)$

and the second term is similar, with $T_1(G^*)$ and $S_n(F)$.


We decompose each one of the three $B^{n-1}$ which appear in the expression of $Z^n_t(G,F)$ into the corresponding $R^{n-1}_s + Z^{n-1}_s$ and apply the preceding method to each $R^{n-1}$. Repeating this reasoning, each $Z^{n-1}_s$ generates iterated integrals of order $2$ acting on $R^{n-2}_s$, etc. Finally, we will have to evaluate a large number of iterated integrals of order $k\le n$, bearing on functions $R^{n-k}_s(\cdot\,,\cdot)$ where the operators involved are suitable $L_\nu(G)$ on the left and $L_\mu(F)$ on the right.
Putting everything together, the quantity $B^n_t(G,F)$ that should tend to $0$ is estimated as the product of a factor $C^n/\sqrt{(n+1)!}$ by the following sum

$\sum_{k\le n}\,\sum_\theta\,C^k\,\Big(\,\sum_{\nu\in U_{q+r}} S^2_{n-k}(L_\nu G^*)\,\Big)^{1/2}\,\Big(\,\sum_{\mu\in U_{p+r}} T_1^2(L_\mu F)\,\Big)^{1/2}$

and a similar sum we have forgotten. The internal sum depends on $\theta$ only through $p,q,r$. We are going to use the weak M-S estimate (11.3'). The first sum on the right is equal to $S^2_{n-p}(G^*)$, whose square root can be bounded by $a\,\sqrt{(n-p)!}\;S^{-(n-p)}$ with $S$ arbitrarily large. In the second sum, we have from (12.6) that $T_0^2(F) \le a\,\sum_m (C^m/m!)\,S^2_m(F)$. Therefore we have an estimate

$\sum_m \frac{C^m}{m!}\;S^2_{m+1+p+r}(F)\;\le\; a\,\sum_m \frac{C^m}{m!}\;(m+1+p+r)!\;R^{-2(m+1+p+r)}\;.$

We bound the ratio $(m+1+p+r)!/(m+1)!$ by $2^{m+1+p+r}\,(p+r)!$, and we include $2^m$ in $C^m$ and $2^{p+r}$ in $R^{-(p+r)}$. There remains something of order $a\,R^{-2(p+r)}\,(p+r)!\,.$
Multiplying the two sums and taking a square root, we get

$a\,\sum_{k\le n}\,\sum_\theta\,C^k\,\sqrt{(n-p)!\,(p+r)!}\;\;S^{-(n-p)}\,R^{-(p+r)}\;.$

The ratio $(p+r)!/k!$ can be replaced by $1$, we take $R = S > 1$, we include the $C^k$ in the $C^n$ that comes in front, and get finally

$\frac{C^n}{\sqrt{(n+1)!}}\;\sum_\theta\,a\,\sqrt{(n-p)!}\;S^{-n}\;.$

We again replace $(n-p)!/(n+1)!$ by $1$, and since the total number of mappings from a subset of $\{1,\ldots,n\}$ to $\{1,2,3\}$ is $4^n$, our sum is bounded by $a\,(4C/S)^n$, which is kind enough to tend to $0$ if we have taken $S$ large enough.
REMARK. In the case of a finite dimensional initial space, the solution $F\circ X_t$ can be expressed as an explicit kernel $K(F)$ taking values in the initial algebra

$F\circ X_t = \sum_n \sum_{\epsilon_1\ldots\epsilon_n}\int_{s_1<\dots<s_n<t} K_{\epsilon_1\ldots\epsilon_n}(s_1,\ldots,s_n;F)\;da^{\epsilon_1}_{s_1}\ldots da^{\epsilon_n}_{s_n}\;,$

$K_{\epsilon_1\ldots\epsilon_n}(s_1,\ldots,s_n;F) = L_{\epsilon_n}\ldots L_{\epsilon_1}(F)\;,$

the index $\epsilon_n$ corresponding to the larger time standing to the left in the product. One may wonder whether the formula for composition of kernels can be combined with the structure equation to give a purely combinatorial proof of the homomorphism property.
14 Let us prove that $\xi = X_t$ is a contractive homomorphism. The reasoning comes from C*-algebraic folklore, arranged by Evans. We consider a vector $h = \sum_i j_i\otimes\mathcal E(u_i)$ in the exponential domain and compute

$\|\,\xi(F)\,h\,\|^2 = |\,\langle\,h\,,\,\xi(F^*F)\,h\,\rangle\,| \le \|h\|\;\|\,\xi(F^*F)\,h\,\|\;.$

Since now $F^*F = G$ is selfadjoint, we have $G^*G = G^2$ and we iterate

$\|\,\xi(F)\,h\,\|^2 \le \|h\|^{\,1+\frac12+\cdots+\frac1{2^n}}\;\|\,\xi(G^{2^n})\,h\,\|^{1/2^n}\;.$

This last norm is bounded by

$\sum_i \|\,\xi(G^{2^n})\,(j_i\otimes\mathcal E(u_i))\,\| \le \sum_i C(u_i)\;\|\,G^{2^n}\,\|\;\|\,j_i\,\|$

and since $\|\,G^{2^n}\,\| = \|G\|^{2^n}$, we get, after raising the inequality to the power $1/2^n$, the product of $\|G\| = \|F\|^2$ by a factor which tends to $1$. Thus $\|\,\xi(F)\,h\,\|^2 \le \|h\|^2\,\|F\|^2$ and contractivity is proved.
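The key identity $\|G^{2^n}\| = \|G\|^{2^n}$ for selfadjoint $G$ can be checked numerically; a minimal numpy sketch on an arbitrary Hermitian matrix of our own choosing (the operator norm is the largest singular value):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
G = (A + A.conj().T) / 2                  # an arbitrary selfadjoint matrix

norm = lambda M: np.linalg.norm(M, 2)     # operator norm
P = G.copy()
for n in range(1, 5):
    P = P @ P                             # P = G^{2^n}
    # ||G^{2^n}||^{1/2^n} equals ||G|| for selfadjoint G
    assert abs(norm(P) ** (1.0 / 2 ** n) - norm(G)) < 1e-8
```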
We mention a useful result of Mohari, according to which $F\circ X_t$ belongs to the von Neumann algebra generated by $\mathcal A\otimes\mathcal F_t$ (the algebra of operators on $\Psi$ adapted at time $t$).
15 We sketch, still following Mohari, the proof of a simple uniqueness result (a much deeper one will be given later). Consider two contractive versions of the flow — here the notation $F\circ X_t$ becomes very misleading, and we call $\xi_t(F)$ and $\eta_t(F)$ the two versions, and $\zeta_t(F)$ their difference. We have

$\zeta_t(F) = \sum_{\rho\sigma}\int_0^t \zeta_s(L^\rho_\sigma F)\;da^\rho_\sigma(s)\;,\qquad \|\,\zeta_t(F)\,\| \le C\,\|F\|\;,$

this last property (with $C=2$) being a consequence of contractivity. Then we consider the scalar product $\langle\,\mathcal E(\ell v),\,\zeta_t(F)\,\mathcal E(ju)\,\rangle$ relative to two exponential vectors with bounded arguments $u, v$, and introduce the operator (depending on $s$)

$L_s(F) = \sum_{\rho\sigma} \bar v_\sigma(s)\,u_\rho(s)\;L^\rho_\sigma(F)\;.$

Then we apply Ito's formula

$\langle\,\mathcal E(\ell v),\,\zeta_t(F)\,\mathcal E(ju)\,\rangle = \int_0^t \langle\,\mathcal E(\ell v),\,\zeta_{s_1}(L_{s_1}F)\,\mathcal E(ju)\,\rangle\;ds_1\;.$

We replace inside the integral

$\langle\,\mathcal E(\ell v),\,\zeta_{s_1}(L_{s_1}F)\,\mathcal E(ju)\,\rangle = \int_0^{s_1} \langle\,\mathcal E(\ell v),\,\zeta_{s_2}(L_{s_2}L_{s_1}F)\,\mathcal E(ju)\,\rangle\;ds_2$

and iterate $n$ times. Then using contractivity and (11.3') it is easy to show that the integral tends to $0$.
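The iteration rests on the classical fact that a bounded quantity dominated by its own $n$-fold iterated integral picks up a factor $(Ct)^n/n!$. A small Python illustration of this bound (all constants are arbitrary choices of ours):

```python
import math

def iterated_bound(M, C, t, n):
    # After n substitutions, |phi(t)| <= M (C t)^n / n! for a bounded phi
    # satisfying phi(t) = int_0^t c(s) phi(s) ds with |phi| <= M, |c| <= C.
    return M * (C * t) ** n / math.factorial(n)

bounds = [iterated_bound(M=2.0, C=3.0, t=1.0, n=n) for n in range(40)]
assert all(b2 <= b1 for b1, b2 in zip(bounds[10:], bounds[11:]))  # eventually decreasing
assert bounds[-1] < 1e-15                                         # and tending to 0
```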
16 Let us prove a beautiful theorem of Parthasarathy-Sinha, using which one may construct classical stochastic processes by means of non-commutative flows : if the initial algebra $\mathcal A$ is commutative, then $F\circ X_s$ and $G\circ X_{s+t}$ commute for $t\ge 0$, thus leading to a commutative process of operators. To see this, we take $\Psi_s$ as initial space, considering $\mathcal A$ as an operator algebra on $\Psi_s$. We also put

$\zeta_t(G) = \xi_{s+t}(G)\,\xi_s(F) - \xi_s(F)\,\xi_{s+t}(G)\;,$

which for $t = 0$ is equal to $\xi_s(GF - FG) = 0$. Then it is easy to check that

$\zeta_t(G) = \sum_{\rho\sigma}\int_0^t \zeta_r(L^\rho_\sigma G)\;da^\rho_\sigma(r)$

and the same reasoning as above proves that $\zeta_t(G) = 0$.

Mohari's uniqueness theorem

17 The following result of Mohari is so elegant and simple that it may represent the
definitive result on uniqueness. We will present in the next subsection the very recent
results of Fagnola which provide the corresponding existence theorem. Thus the dream
of a "stochastic Stone theorem" is becoming a reality.
Consider a left exponential equation with coefficients $L^\rho_\sigma$ (possibly unbounded, admitting a common domain $\mathcal D$ dense in $\mathcal J$), and assume it has a contractive solution, strongly continuous in $t$ on $\mathcal D\otimes\mathcal E$ :

(17.1)   $U_t = I + \sum_{\rho\sigma}\int_0^t U_s\,L^\rho_\sigma\;da^\rho_\sigma(s)\;.$

Then Mohari proves the remarkable uniqueness result :

THEOREM. If the closure of the operator $L^0_0 = L$ on $\mathcal D$ is the generator of a strongly continuous semigroup of contractions, then the solution is unique.
PROOF. Taking a difference of two solutions, we are reduced to proving that a strongly continuous family of uniformly bounded operators $W_t$ satisfying on $\mathcal D$ the relation

$W_t = \sum_{\rho\sigma}\int_0^t W_s\,L^\rho_\sigma\;da^\rho_\sigma(s)$

must be identically $0$. For fixed $k, u, v$ we define linear functionals on $\mathcal D$ by

$h^{m,p}_t(j) = \langle\,k\,v^{\otimes p},\,W_t(j\,u^{\otimes m})\,\rangle\;,\qquad h^m_t(j) = \langle\,k\,\mathcal E(v),\,W_t(j\,u^{\otimes m})\,\rangle\;,$

and we will prove that $h^m_t = 0$ for all $t$ and $m$. It would be misleading to put $k$ or $j$ inside the exponentials here. We introduce a complex parameter $z$ and apply the simple Ito formula 1.(8.4) for exponential vectors

$\sum_m \frac{z^m}{m!}\;\langle\,k\,\mathcal E(v),\,W_t(j\,u^{\otimes m})\,\rangle = \langle\,k\,\mathcal E(v),\,W_t(j\,\mathcal E(zu))\,\rangle\;,$

where $(\rho) = 1$ if $\rho\ne 0$, $= 0$ if $\rho = 0$. Expanding the right side gives

$\sum_n\sum_{\rho\sigma}\int_0^t \frac{z^{n+(\rho)}}{n!}\;\bar v_\sigma(s)\,u_\rho(s)\;\langle\,k\,\mathcal E(v),\,W_s\,L^\rho_\sigma(j\,u^{\otimes n})\,\rangle\;ds\;.$

We identify coefficients and get for $m > 0$

$\langle\,k\,\mathcal E(v),\,W_t(j\,u^{\otimes m})\,\rangle = \sum_\sigma\int_0^t \bar v_\sigma(s)\;\langle\,k\,\mathcal E(v),\,W_s\,L^0_\sigma(j\,u^{\otimes m})\,\rangle\;ds$

$\qquad + m\,\sum_{\rho\ne 0,\,\sigma}\int_0^t \bar v_\sigma(s)\,u_\rho(s)\;\langle\,k\,\mathcal E(v),\,W_s\,L^\rho_\sigma(j\,u^{\otimes(m-1)})\,\rangle\;ds\;.$

For $m = 0$ the second term on the right disappears. We start with $m = 0$ :

$\langle\,k\,\mathcal E(v),\,W_t\,j\,\rangle = \sum_\sigma\int_0^t \bar v_\sigma(s)\;\langle\,k\,\mathcal E(v),\,W_s\,L^0_\sigma\,j\,\rangle\;ds\;.$

We again replace $v$ by $\zeta v$ and expand in powers $\zeta^n$. For $n = 0$ we have

$\langle\,k,\,W_t\,j\,\rangle = \int_0^t \langle\,k,\,W_s\,L^0_0\,j\,\rangle\;ds\;,$

or $h^{00}_t(j) = \int_0^t h^{00}_s(Lj)\,ds$. We put $k_p = \int_0^\infty e^{-ps}\,h^{00}_s\;ds$, a bounded linear functional, and we have $p\,k_p = k_p\,L$, or $k_p\,(pI - L) = 0$ on $\mathcal D$. Now the image $(pI - L)\,\mathcal D$ is dense, hence $k_p = 0$, and then inverting the Laplace transform $h^{00}_t = 0$. Then we proceed to $n = 1$,

$\langle\,k\,v,\,W_t(j)\,\rangle = \sum_\sigma\int_0^t \bar v_\sigma(s)\;\langle\,k\,v,\,W_s\,L^0_\sigma\,j\,\rangle\;ds = 0\;,$

from the preceding result. An easy induction shows that $h^{0,n}_t = 0$ for all $n$. Then we pass to $m = 1$, etc.

Fagnola's existence theorem

18 We give now a sketch of the results contained in very recent preprints of Fagnola [Fag4] [Fag5] [Fag6], one of which is the existence result corresponding to Mohari's uniqueness theorem.
Knowing Mohari's result, it is natural to make the following assumption on the operator $L = L^0_0$ with domain $\mathcal D$ : the closure of $L$ is the generator of a strongly continuous contraction semigroup $(P_t)$. We denote by $W_p = \int_0^\infty e^{-ps}\,P_s\,ds$ its resolvent, whose range $\mathcal R$ is (according to the Hille-Yosida theory) the generator's domain, and we put $R_p = p\,W_p$, a strong approximation of identity as $p\to\infty$.
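In finite dimensions $W_p = (pI - L)^{-1}$, and the convergence $R_p = pW_p\to I$ is easy to visualise; a hedged numerical sketch (the stable diagonal matrix below is an arbitrary toy generator of ours, not the operator of the text):

```python
import numpy as np

L = np.diag([-1.0, -2.0, -5.0])   # toy generator of a contraction semigroup
I = np.eye(3)

def R(p):
    # R_p = p W_p with W_p = (pI - L)^{-1}
    return p * np.linalg.inv(p * I - L)

assert np.linalg.norm(R(1.0), 2) <= 1.0 + 1e-12   # each R_p is a contraction
assert np.linalg.norm(R(1e6) - I, 2) < 1e-5       # R_p -> I as p -> infinity
```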
Secondly, we assume that the mappings $\lambda$ and $\hat\lambda^*$ are defined on $\mathcal D$ and controlled by $L$

(18.1)   $\|\,\lambda j\,\|\,,\ \|\,\hat\lambda^* j\,\| \le C\,\big(\,\|j\| + \|Lj\|\,\big)$

(the norm on the left hand side is that of $\mathcal J\otimes\mathcal K$), with the consequence that their closures are defined on $\mathcal R$, and $\lambda R_p$, $\hat\lambda^* R_p$ are bounded mappings which converge to $\lambda$, $\hat\lambda^*$ as $p\to\infty$.
Finally, we assume that the contractivity condition

(18.2)   $\Big\langle\,\binom{j}{h}\,,\;\mathbb L\,\binom{j}{h}\,\Big\rangle \le 0$

is satisfied for $j\in\mathcal D$ and $h\in\mathcal D\otimes\mathcal K$ (uncompleted), defining as in §3, subs. 3

$\mathbb L = \begin{pmatrix} L^* + L + \lambda^*\lambda & \lambda^* + \hat\lambda + \lambda^*\Lambda \\ \hat\lambda^* + \lambda + \Lambda^*\lambda & \Lambda^* + \Lambda + \Lambda^*\Lambda \end{pmatrix}\;.$

Property (18.2) as we have written it must be properly interpreted, since the operators and their products are not everywhere defined. First of all, $\Lambda$ must be a bounded operator on $\mathcal J\otimes\mathcal K$. For the other operators, we use an interpretation in the form sense :

$\langle\,j\,,\,(L^* + L + \lambda^*\lambda)\,j\,\rangle$ means $2\,\mathcal Re\,\langle\,j,\,Lj\,\rangle + \langle\,\lambda j,\,\lambda j\,\rangle\;,$

$\langle\,j\,,\,(\lambda^* + \hat\lambda + \lambda^*\Lambda)\,h\,\rangle$ means $\langle\,(\lambda + \hat\lambda^*)\,j,\,h\,\rangle + \langle\,\lambda j,\,\Lambda h\,\rangle\;.$
The meaning of the two other terms is clear. The main remark of Fagnola [Fag5] is the following : if we define

(18.3)   $L_p = R_p^*\,L\,R_p\;,\quad \lambda_p = \lambda\,R_p\;,\quad \hat\lambda_p = R_p^*\,\hat\lambda\;,\quad \Lambda_p = \Lambda\;,$

then the corresponding matrix $\mathbb L_p$ defines a bounded operator, and still satisfies the contractivity condition — simply because, for $j\in\mathcal J$ and $h\in\mathcal J\otimes\mathcal K$, the quadratic expression (18.2) for $\mathbb L_p$ at $(j,h)$ reduces to the expression for $\mathbb L$ at $(R_p\,j,\,h)$.
Using then the results of Mohari, one solves the left and right equations with these bounded coefficients, thus constructing a family of contractive left cocycles $U^{(p)}_t$ and right cocycles $V^{(p)}_t$ (transformed into each other by time reversal). And now, a weak compactness argument, adapted from a paper of Frigerio, allows us to take weak limits as $p\to\infty$, which are contractive cocycles $U_t$ and $V_t$ solving the original s.d.e.'s. Mohari's uniqueness theorem in subs. 17 plays an essential role in proving convergence. There are of course many technical details to check, but the proof after (18.3) is very natural.
It should be mentioned in this connection that Fagnola [Fag6], completing the work of Journé [Jou] and Accardi-Journé-Lindsay [AJL], proves that all cocycles satisfying a weak differentiability property are given by stochastic differential equations with unbounded coefficients of the preceding form.
There remain several questions to be treated.
1) Assuming the coefficients satisfy not only the contractivity condition, but the formal unitarity condition, can one prove that the solution is unitary? An abstract necessary and sufficient condition is given, and a more practical one deriving from results of Chebotarev.
2) An application of this last result is given to the construction of solutions of Ito stochastic differential equations on $\mathbb R^n$ as quantum flows on the algebra $C^\infty$, which applies to a large class of elliptic diffusions. This is much better than all previous results, though not yet completely satisfactory, since ellipticity plays no role in Ito's theory.
Chapter VII
Independent Increments
We present here an introduction to some recent work of the Heidelberg school, due to W. von Waldenfels, P. Glockner, and specially M. Schürmann, on processes with independent increments in a non-commutative setup. This chapter is reduced to the bare essentials, and its purpose is only to lead the reader to the much richer original articles. Some very recent and interesting remarks from Belavkin [6] could be inserted into the chapter just before departure to the publisher's office. I am in great debt to lectures by Sauvageot in Paris, and Parthasarathy in Strasbourg, which made clear for me how natural coalgebras are in this setup.
Another definition of conditionally positive functions in a non-commutative setup was developed by Holevo [Hol3], and seems disjoint from Schürmann's : it concerns maps from a group to the space of non-commutative "kernels" (bounded linear maps from a C*-algebra into itself).

§1 C O A L G E B R A S AND BIALGEBRAS

1 We have been long familiar with the idea that a "non-commutative state space $E$" is represented by a unital *-algebra $\mathcal A$, whose elements are "functions $F(x)$" on $E$. Unless it is a C*-algebra, $\mathcal A$ is meant to represent smooth test functions rather than continuous functions, and therefore linear functionals on $\mathcal A$ represent distributions rather than measures — though positive functionals play the role of measures.
A non-commutative probability space $\Omega$ is represented as usual by a concrete operator *-algebra $\mathcal B$ on some Hilbert space, with a chosen state (we use the probabilistic notation $\mathbb E$ for the corresponding expectation). Then a random variable $X$ on $\Omega$ with values in $E$ is defined as a *-homomorphism from $\mathcal A$ to $\mathcal B$. Instead of the algebraists' notation $X(F)$ we like to use the probabilists' $F\circ X$ or even $F(X)$. In most cases $\mathcal B$ will be realized on Fock space, and the state will be the vacuum.
From now on, we say algebra instead of *-algebra with unit, homomorphism instead of *-homomorphism, etc.
An algebraic structure on $E$, like an internal multiplication $(x,y)\mapsto xy$, is described through its reflection on functions, namely, the mapping (homomorphism) $F(x)\mapsto F(xy)$ from the algebra of "functions of one variable" to the algebra of "functions of two variables". This is called the coproduct, and the function $F(xy)$ is denoted by $\Delta F$. For the algebra of functions of two variables, the only reasonable choice when we have no topological structure on $\mathcal A$ is the algebraic tensor product $\mathcal A\otimes\mathcal A$, though it represents only finite sums of functions $F(x)\,G(y)$. Besides that, we would
196 VII. Independent increments

like to have a unit element in $E$, meaning that we have a mapping (homomorphism) $F(x)\mapsto F(e)$ from $\mathcal A$ to $\mathbb C$, called the co-unit $\delta$. The mappings $\Delta$ and $\delta$ should be homomorphisms, the product and involution on $\mathcal A\otimes\mathcal A$ being the standard ones 1

$(x\otimes y)\,(x'\otimes y') = (xx')\otimes(yy')\;,\qquad (x\otimes y)^* = x^*\otimes y^*\;.$

We demand that

$F((xy)z) = F(x(yz))$ (coassociativity) ,  $F(xe) = F(ex) = F(x)\;.$

The first property reads in algebraic notation $(Id\otimes\Delta)\,\Delta = (\Delta\otimes Id)\,\Delta$ as mappings from $\mathcal A$ to $\mathcal A\otimes\mathcal A\otimes\mathcal A$, and the second one means that $(Id\otimes\delta)\,\Delta = (\delta\otimes Id)\,\Delta = Id$ as mappings from $\mathcal A$ to $\mathcal A$. The structure we have thus described is called a bialgebra, more precisely a *-bialgebra. Note that the co-unit, as a *-homomorphism from $\mathcal A$ to $\mathbb C$, is also a state on the algebra $\mathcal A$.
It turns out, particularly when dealing with stochastic calculus, that the bialgebra axioms are very conveniently split into the standard algebra axioms and new coalgebra axioms involving only the coproduct and co-unit. If the axioms are dissociated in this way, a "random variable" taking values in a coalgebra is simply a linear mapping $X$ from $\mathcal A$ to the operator algebra $\mathcal B$ such that $F^*\circ X = (F\circ X)^*$. We will keep using the functional notation with variables $F(x)$, $F(xy)$, which is easily translatable into tensor notation, suggestive, and rarely misleading. The main idea from these axioms is that of a representation as a finite sum

$F(xy) = \sum_i F_i'(x)\,F_i''(y)$ translating $\Delta F = \sum_i F_i'\otimes F_i''\;.$

To represent a group, we would also need a mapping $x\mapsto x^{-1}$, and therefore a linear mapping (homomorphism) $F\mapsto SF$ representing $F(x^{-1})$. It is called the antipode, and a bialgebra with antipode is called a Hopf algebra. The antipode axiom expresses $F(xx^{-1}) = F(x^{-1}x) = F(e)$ and reads

$M\,(Id\otimes S)\,\Delta F = M\,(S\otimes Id)\,\Delta F = \delta(F)\,1\;.$

Here $M$ is the multiplication in $\mathcal A$, interpreted as a mapping from $\mathcal A\otimes\mathcal A$ to $\mathcal A$. One doesn't demand (and it is not always true) that $S^2 = Id$ — thus the functional notation $F(x^{-1})$ is misleading in the case of Hopf algebras. We never use an antipode in what follows.
We return to probabilistic ideas : the coalgebra structure allows one to form products of random variables, as we multiply (or add) group-valued random variables in classical probability. Given two random variables $X, Y$ (mappings from $\mathcal A$ to $\mathcal B$) we define $XY = Z$ as follows : taking a representation $\Delta F = \sum_i F_i'\otimes F_i''$, we put

$F\circ Z = \sum_i (F_i'\circ X)\,(F_i''\circ Y)$ (product in $\mathcal B$).

The algebraic theory of tensor products implies that the left side is independent of the choice of the representation. The involution axiom is trivially satisfied, but

1 There are other possibilities for graded algebras.


1. Bialgebras and Schürmann triples 197

in the bialgebra setup, $X$ and $Y$ are homomorphisms, and we should see whether $Z$ is also. Taking a second element $G$ of $\mathcal A$, with $\Delta G = \sum_k G_k'\otimes G_k''$, we have $\Delta(FG) = \sum_{ik} F_i'G_k'\otimes F_i''G_k''$, and therefore, putting $F_i'\circ X = f_i'\in\mathcal B$, etc.,

$FG\circ Z = \sum_{ik} (f_i'\,g_k')\,(f_i''\,g_k'')\;,\qquad (F\circ Z)\,(G\circ Z) = \sum_{ik} (f_i'\,f_i'')\,(g_k'\,g_k'')\;,$

and there is no reason why these products should be equal. They are equal in two important cases : 1) if the operators $F\circ X$ and $G\circ Y$ take their values in two commuting von Neumann subalgebras (this will often be the case with our processes on Fock space) ; 2) if $\Delta$ takes its values in the symmetric subspace of $\mathcal A\otimes\mathcal A$ (verification left to the reader), in which case $\mathcal A$ is said to be co-commutative.
2 EXAMPLES. 1) The addition on $\mathbb R^n$ is reflected by bialgebra structures on two useful sets of functions. The problem is to interpret $f(x+y)$ as a sum of products of functions of the variables $x, y$. This is clear for the coordinate mappings themselves : $X_i + Y_i$ is read as $X_i\otimes 1 + 1\otimes X_i$, and we have a bialgebra structure on polynomials arising from the rules $\Delta 1 = 1\otimes 1$ (simply denoted $1$) and

(2.1)   $\Delta X_i = X_i\otimes 1 + 1\otimes X_i\;,$

extended by multiplication. Since the coordinates take the value $0$ at the origin, the co-unit is given by extension of the rules $\delta(1) = 1$, $\delta(X_i) = 0$. Note that the space of affine functions is a coalgebra.
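Extended multiplicatively, rule (2.1) gives $\Delta(X^n) = \sum_k \binom nk\,X^k\otimes X^{n-k}$ in one variable, and coassociativity can be verified degree by degree; a minimal Python sketch (the dictionary encoding of tensors is our own):

```python
from math import comb, factorial

# Delta(x^n) = sum_k C(n,k) x^k (x) x^{n-k}; a tensor x^a (x) x^b is
# represented by the key (a, b).
def coproduct(n):
    return {(k, n - k): comb(n, k) for k in range(n + 1)}

def iterate(n):
    """(Id (x) Delta) Delta(x^n), represented on keys (a, b1, b2)."""
    out = {}
    for (a, b), c in coproduct(n).items():
        for (b1, b2), c2 in coproduct(b).items():
            out[(a, b1, b2)] = out.get((a, b1, b2), 0) + c * c2
    return out

# Coassociativity check: both iterated coproducts give trinomial coefficients.
n = 5
lhs = iterate(n)
rhs = {(i, j, k): factorial(n) // (factorial(i) * factorial(j) * factorial(k))
       for (i, j, k) in lhs}
assert lhs == rhs
```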
The coproduct is also clear for the exponentials $e_u(x) = e^{iu\cdot x}$ : the property $e_u(x+y) = e_u(x)\,e_u(y)$ leads to a bialgebra structure on trigonometric polynomials, arising from the rule

(2.2)   $\Delta e_u = e_u\otimes e_u\;.$

Since $e_u(0) = 1$, the co-unit is given by extension of $\delta(e_u) = 1$. The involution is given by $e_u^* = e_{-u}$.
2) Let $\mathcal G$ be a semi-group (the multiplication is written $xy$ without a product sign) with unit $e$. The matrix elements $m^i_j(x)$ of a finite dimensional representation of $\mathcal G$ and their complex conjugates generate a commutative *-algebra of functions on $\mathcal G$, and the relation $m^i_j(xy) = \sum_k m^i_k(x)\,m^k_j(y)$ — extended naturally to products — turns it into a bialgebra. Of course, $\mathcal G$ may be itself a semi-group of matrices, and the representation the identity mapping. To take a simple example related to the interpretation of Azéma martingales, consider the matrix group (affine group)

(2.3)   $\begin{pmatrix} a & 0\\ b & 1\end{pmatrix}\begin{pmatrix} a' & 0\\ b' & 1\end{pmatrix} = \begin{pmatrix} aa' & 0\\ ba'+b' & 1\end{pmatrix}\;.$

We take as the basic algebra of functions on the affine group the complex polynomials in the two variables $a, b$, the involution being complex conjugation (otherwise stated, $a^* = a$, $b^* = b$). We have a coalgebra structure on the linear space generated by $1, a, b$

(2.4)   $\Delta 1 = 1\otimes 1\;,\quad \Delta a = a\otimes a\;,\quad \Delta b = b\otimes a + 1\otimes b\;,$


and the co-unit corresponds to the matrix of the unit element : δ(1) = 1 = δ(a),
δ(b) = 0. We extend these rules to polynomials by multiplication, to get a bialgebra
structure.
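The rules (2.4) simply transcribe the matrix product : for lower-triangular matrices ( a 0 ; b 1 ), the coordinate functions satisfy a(xy) = a(x) a(y) and b(xy) = b(x) a(y) + b(y). A small numerical check (our own illustration):

```python
# Check (ours) that the coalgebra rules  Delta a = a (x) a,
# Delta b = b (x) a + 1 (x) b  match multiplication of matrices (a 0; b 1).
def mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def aff(a, b):          # the matrix (a 0; b 1)
    return [[a, 0], [b, 1]]

for (a1, b1), (a2, b2) in [((2.0, 3.0), (-1.0, 0.5)), ((0.3, 7.0), (4.0, -2.0))]:
    x, y = aff(a1, b1), aff(a2, b2)
    p = mul(x, y)
    # coordinate functions of the product:
    assert abs(p[0][0] - a1 * a2) < 1e-12           # a(xy) = a(x) a(y)
    assert abs(p[1][0] - (b1 * a2 + b2)) < 1e-12    # b(xy) = b(x) a(y) + b(y)
    assert p[0][1] == 0 and p[1][1] == 1            # the group is preserved
print("coproduct rules match the matrix product")
```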
The simplest and most useful example is the coalgebra structure on a space of
dimension n² with basis E^j_i, given by the matrix product itself

(2.5)  ΔE^j_i = Σ_k E^j_k ⊗ E^k_i ,   δ(E^j_i) = δ^j_i .
Interesting bialgebras, related to the theory of quantum groups, arise when we take
the matrix bialgebras above, and consider the coefficients m^j_i not as numbers, but as
non-commutative indeterminates. We refer the reader to Schürmann [11].
3) We return to a semi-group G with unit and involution * (which reverses
products), and call M the space of measures with finite support Σ_i c_i ε_{x_i} on G, which
becomes a bialgebra as follows

(2.6)  ε_x ε_y = ε_{xy} (convolution) ,   (ε_x)* = ε_{x*} ,

       δ(ε_x) = 1 ,   Δ(ε_x) = ε_x ⊗ ε_x .

The co-unit is the total mass functional. The linear functionals on M are arbitrary
functions on G, states being functions of positive type normalized to have the value
1 at e. The choice of the coproduct means that "convolution" of states is ordinary
multiplication of functions.
3 The (algebraic) dual space A' of a coalgebra is an algebra, under convolution, defined
as follows

(3.1)  ( F, λ*μ ) = ( ΔF, λ ⊗ μ ) .

The coalgebra is co-commutative if and only if convolution is commutative. This requires
only the coalgebra structure, but a useful lemma says that the convolution λ*μ of two
positive linear functionals on a bialgebra is positive. Indeed, if Δa = Σ_i b_i ⊗ c_i we have
λ*μ(a) = Σ_i λ(b_i) μ(c_i). Applying this to Δ(a*a) = Σ_{ij} (b_i* b_j) ⊗ (c_i* c_j), we have to
show that

       Σ_{ij} λ(b_i* b_j) μ(c_i* c_j) ≥ 0 .

The quadratic form Σ_{ij} ū_i u_j λ(b_i* b_j) is positive, as is the similar one with μ, and it
is classical that the product coefficient by coefficient of two positive quadratic forms is
positive, and the sum of all the coefficients of a positive quadratic form is a positive
number.
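On the one-variable polynomial coalgebra, formula (3.1) reads (λ*μ)(X^n) = Σ_k C(n,k) λ(X^k) μ(X^{n−k}); when λ and μ are the moment functionals of two probability laws, this is the moment functional of a sum of independent random variables. A sketch (our own illustration, with two Bernoulli laws):

```python
# Illustration (ours) of convolution of linear functionals, formula (3.1),
# on the one-variable polynomial coalgebra:
#   (lam * mu)(X^n) = sum_k C(n,k) lam(X^k) mu(X^{n-k}).
# For moment functionals this is the moment functional of an independent sum.
from math import comb
from itertools import product

def moments(law, nmax):
    """law: list of (point, probability); returns [E[Z^0], ..., E[Z^nmax]]."""
    return [sum(p * z**n for z, p in law) for n in range(nmax + 1)]

def convolve(lam, mu):
    n = min(len(lam), len(mu)) - 1
    return [sum(comb(m, k) * lam[k] * mu[m - k] for k in range(m + 1))
            for m in range(n + 1)]

bern = [(0, 0.5), (1, 0.5)]                     # fair Bernoulli law
two_coins = [(z1 + z2, p1 * p2) for (z1, p1), (z2, p2) in product(bern, bern)]

lhs = convolve(moments(bern, 4), moments(bern, 4))
rhs = moments(two_coins, 4)
for a, b in zip(lhs, rhs):
    assert abs(a - b) < 1e-12
print("convolution of moment functionals = moments of the independent sum")
```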
4 We conclude this section on algebraic preliminaries with the main algebraic tool for
Schürmann's theory, the so called fundamental theorem on coalgebras :
THEOREM. In a coalgebra A, every finite subset is contained in a finite dimensional
sub-coalgebra.
A sub-coalgebra of A is a linear subspace B such that for b ∈ B, Δb lies in B ⊗ B.
It is clear that sums of sub-coalgebras are sub-coalgebras, but the case of intersections
is not obvious (though the result is true). This is why we do not mention the coalgebra
1. Bialgebras and Schürmann triples

generated by finitely many elements, and give a weaker statement. The involution plays
no role : to make a coalgebra stable, we add its adjoint to it.
PROOF. Instead of using the tensor notation, we reason as if elements of A were functions
of one variable, elements of A ⊗ A functions of two variables, the coproduct and co-
unit came from a true product with neutral element e, and linear functionals were
measures. Then the proof (which we owe to J.-C. Sauvageot) becomes very transparent,
and translating it back into tensor notation is a good exercise.
We consider an element c of A, and the representation of Δc

(4.1)  Δc = Σ_i a_i ⊗ b_i ,   i.e.   c(xy) = Σ_i a_i(x) b_i(y) .

This is of course not unique. Taking y = e (i.e. applying I ⊗ δ) we see that c is a linear
combination of the a_i, and also of the b_i. If the a_i are not linearly free, we express them
as linear combinations of a free subsystem and get a representation with fewer indices.
Therefore, if one chooses a representation such that the number of indices is minimal,
both the a_i and the b_i are linearly free. Then one can find linear functionals λ_i, μ_i on
A, such that λ_i(a_j) = δ_ij and μ_i(b_j) = δ_ij.
We write the coassociativity property in A ⊗ A ⊗ A

(4.2)  c(xyz) = Σ_i a_i(xy) b_i(z) = Σ_i a_i(x) b_i(yz) ,

and we get down to A ⊗ A by means of λ_k ⊗ I ⊗ I, i.e.

(4.3)  b_k(yz) = ∫ λ_k(dx) c(xyz) = Σ_i ( ∫ λ_k(dx) a_i(xy) ) b_i(z) .

Similarly

(4.3')  a_j(xy) = ∫ c(xyz) μ_j(dz) = Σ_i a_i(x) ( ∫ b_i(yz) μ_j(dz) ) .
"Integrating" (4.2), we introduce the elements

(4.4)  c_{ij}(y) = ∫∫ λ_i(dx) c(xyz) μ_j(dz) .

The relations (4.3) (4.3') then give us

(4.5)  b_k(yz) = Σ_i c_{ki}(y) b_i(z) ,   a_j(xy) = Σ_i a_i(x) c_{ij}(y) .


In the first formula, we replace y by xy and use coassociativity

       Σ_i c_{ki}(xy) b_i(z) = b_k((xy)z) = b_k(x(yz)) =
       Σ_i c_{ki}(x) b_i(yz) = Σ_{ij} c_{ki}(x) c_{ij}(y) b_j(z) ,

and "integrating" with respect to μ_j(dz) we get

(4.6)  c_{kj}(xy) = Σ_i c_{ki}(x) c_{ij}(y) .



Therefore the linear space B generated by the c_{ij} is a sub-coalgebra. According to
(4.5) it contains the b_k = Σ_i c_{ki} δ(b_i) (as well as the a_k), and finally c itself.
REMARK. Relation (4.6) is the coproduct arising from the law of composition of matrices.
Here it appears as linked with one particular proof, but it comes naturally whenever
we have a finite dimensional coalgebra A, as follows. Let e_i be a basis of A. Then we
have a coproduct representation

(4.7)  Δe_i = Σ_{jk} c_i^{jk} e_j ⊗ e_k .

Let us put e^j_i = Σ_k c_i^{jk} e_k, so that we have, putting δ_i = δ(e_i),

(4.8)  Δe^j_i = Σ_k e^j_k ⊗ e^k_i ,   δ(e^j_i) = δ^j_i ,   e_i = Σ_j e^j_i δ_j .

The first two properties show that the coalgebra formulas are the same as for the
coalgebra of matrices. Of course the e^j_i are not linearly free in A since there are too
many of them, but they generate A by the last formula. Then computations on matrices
become universal tools. This will be a good method to handle stochastic differential
equations in coalgebras without needing a new general theory.

Processes with independent increments

5 We give the axioms for a process with (multiplicative) independent increments,
separating the structure necessary for each property.
We consider a linear space A and a family of "random variables" X_{rs} (0 ≤ r ≤ s), i.e.
mere mappings from A to an operator algebra B provided with a state. They represent
the (multiplicative) increments of the process X_t = X_{0t}.
Independence means that for s_1 < t_1 ≤ ... ≤ s_n < t_n and F_1, ..., F_n ∈ A, the operators
F_1 ∘ X_{s_1 t_1}, ..., F_n ∘ X_{s_n t_n} commute in B and

(5.1)  E[ F_1 ∘ X_{s_1 t_1} ... F_n ∘ X_{s_n t_n} ] = E[ F_1 ∘ X_{s_1 t_1} ] ... E[ F_n ∘ X_{s_n t_n} ] .

This involves no structure on A, and will be almost automatically realized in our
constructions, as the operators X_{st} will act between s and t on a Fock space, and
E will be a vacuum expectation.
Another property that does not require a structure on A is the stationarity of
increments : for arbitrary F ∈ A, h ≥ 0

(5.2)  E[ F ∘ X_{st} ] = E[ F ∘ X_{s+h, t+h} ] .

Multiplicativity of increments is the following property, for which a coalgebra structure
on A is needed to define the "product" involved :

(5.3)  X_{rt} = X_{rs} * X_{st}   for r < s < t .

Using (5.2) we define φ_{t−s} to be the law of X_{st}, i.e. the linear functional
F ↦ E[ F ∘ X_{st} ] on A. Then the three preceding axioms imply that φ_s * φ_t = φ_{s+t},

i.e. we have a convolution semigroup. We add a continuity condition involving the co-
unit δ

(5.4)  lim_{t↓s} φ_{st} = φ_{ss} = δ .

Finally, the last axiom is the only one that involves the algebra structure of A : it
consists in requiring that the X_{st} be true random variables, i.e. algebra homomorphisms.

Schürmann triples

6 We consider a bialgebra A and a convolution semigroup of states φ_t on A, tending
to the co-unit δ as t → 0. This is the first part of Schürmann's main theorem.
THEOREM. For every F ∈ A, the generator

(6.1)  ψ(F) = lim_{t→0} ( φ_t(F) − δ(F) ) / t

exists, and φ_t is the convolution exponential of tψ

(6.2)  φ_t = δ + t ψ + (t²/2) ψ*ψ + ···

We have ψ(1) = 0, and ψ is conditionally positive : on the subspace K = Ker δ (which
is an ideal in A) it satisfies the properties

(6.3)  ψ(F*) = ψ(F)* ,   ψ(F*F) ≥ 0 .

PROOF. The main point is (6.1) and follows from 4 : let A_0 be a finite dimensional
coalgebra containing F. Then the restrictions to A_0 of the functionals φ_t constitute a
semigroup in the finite dimensional convolution algebra A_0', continuous at 0, therefore it
is differentiable at 0 and is the convolution exponential of its generator. The remainder
is almost obvious. Note that (6.3) depends on the co-unit, but not on the coproduct.
The next step is a construction for conditionally positive functionals which parallels
the GNS construction for positive ones (Appendix 4, §2, 2-3). We mix with Schürmann's
results some remarks due to Belavkin [Bel6].
We consider a *-algebra A with unit 1 and a "co-unit" or "mass functional" δ, i.e.
a homomorphism from A to C. Let A_0 be the kernel of δ. On A, we consider a linear
functional ψ such that ψ(1) = 0, which is conditionally of positive type. We provide
A_0 with the scalar product

(6.4)  < G, F > = ψ(G*F) ,

denote by N_0 the null subspace of this positive hermitian form, and by K the prehilbert
space A_0/N_0. We do not complete K unless we say so explicitly.
If F belongs to A_0 so does GF for G ∈ A, and this defines a left action of A on
A_0. If ψ(F*F) = 0 we have ψ(G*F) = 0 for G ∈ A_0 by the Schwarz inequality, and
since the case of G = 1 is trivial, N_0 is stable under the action of A on A_0. Therefore
A acts on the left on K. We denote by ρ(G) the corresponding operator. It may be
unbounded, but it preserves the prehilbert space K, as does its adjoint ρ(G*).

For F ∈ A we define F_0 = F − δ(F) 1 ∈ A_0, and denote by η(F) ∈ K the class
of F_0. Then we have by an immediate computation

(6.5)  η(GF) = ρ(G) η(F) + η(G) δ(F) .

In algebraic language, K is considered as an A-bimodule with the left action ρ and
the right action δ, and η is a cocycle. Note that the range of η is the whole of K.
Finally, a simple computation gives the identity

(6.6)  ψ(G*F) − δ(G*) ψ(F) − ψ(G*) δ(F) = < η(G), η(F) > .

A system (ρ, η, ψ) consisting of a *-representation ρ of A in a prehilbert space K,
of a cocycle η with values in K, and a scalar valued mapping ψ satisfying (6.5-6) will
be called a Schürmann triple in this chapter (maybe δ should be asked to sit for the
picture too, and we then have a Schürmann quadruple). One may reduce the prehilbert
space K to the range of η, but this plays no important role.
7 EXAMPLE. Which is the Schürmann triple corresponding to a convolution semigroup
(π_t) of probability measures on R^n ? We use the bialgebra A of trigonometric
polynomials, and denote by φ_t the restriction of π_t to A : otherwise stated, we
take Fourier transforms. Another interpretation uses the dual group G of R^n, so that
a trigonometric polynomial appears as a measure of finite support on G, as described
in subsection 2, example 3. Then φ_t(u) is a normalized function of positive type on
G, and "convolution" of states on A is plain multiplication. Introducing the generator
ψ (the opposite of the standard Lévy function), we may equivalently write ψ(μ) for μ
a measure of finite support on G, or ψ(F) for F a trigonometric polynomial. However,
writing the Lévy-Khinchin formula requires the second notation :

(7.1)  ψ(F) = i Σ_k m^k D_k F(0) − ½ Σ_{ij} a^{ij} D_{ij} F(0)
              + ∫ ( F(x) − F(0) − h(x) Σ_i x^i D_i F(0) ) ν(dx) ,

where ν is the Lévy measure, h is a bounded function with compact support, equal
to 1 in a neighbourhood of 0, and a^{ij} has a representation as Σ_k a^i_k a^j_k. Then the
completed space K̄ is

(7.2)  K̄ = C^n ⊕ L²(ν) .

The mapping η is given by

(7.3)  η(F) = A F'(0) ⊕ ( F − F(0) ) ,

the notation A F'(0) denoting the vector with components Σ_k a^i_k D_k F(0). Then the
incomplete space K is the range of η. The representation ρ is given by

(7.4)  ρ(F) = F(0) I ⊕ F ,

where F is interpreted as a multiplication operator on L²(ν). Then it becomes an easy
computation to check that

       ψ(GF) − G(0) ψ(F) − ψ(G) F(0)
         = Σ_{ij} a^{ij} D_i G(0) D_j F(0) + ∫ (G(x) − G(0)) (F(x) − F(0)) ν(dx) = ( η(G), η(F) ) ,

where the right side is a bilinear inner product, since we used G instead of G*.
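In the purely Gaussian, one-dimensional case (m = 0, ν = 0, a = σ²) the triple can be written down explicitly, and the hermitian identity (6.6) checked numerically. The sketch below uses our own normalization (ψ(e_u) = −σ²u²/2, η(e_u) = iσu, with K = C), which is an assumption on conventions, not quoted from the text:

```python
# Numerical check (ours) of identity (6.6) for the Brownian (purely Gaussian,
# one-dimensional) convolution semigroup, in the convention
#   psi(e_u) = -sigma^2 u^2 / 2,   eta(e_u) = i sigma u,   delta(e_u) = 1,
# where e_u is the exponential x -> exp(i u x), so that e_u* = e_{-u}.
sigma = 1.3

def psi(u):            # generator evaluated on the exponential e_u
    return -0.5 * sigma**2 * u**2

def eta(u):            # cocycle, with values in K = C
    return 1j * sigma * u

def delta(u):          # co-unit: value of e_u at 0
    return 1.0

for u in (0.7, -1.1):
    for v in (0.3, 2.0):
        # G = e_u, F = e_v, so G* = e_{-u} and G*F = e_{v-u}
        lhs = psi(v - u) - delta(-u) * psi(v) - psi(-u) * delta(v)
        rhs = eta(u).conjugate() * eta(v)     # hermitian scalar product on K
        assert abs(lhs - rhs) < 1e-12
print("identity (6.6) verified on exponentials")
```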

8 We return to the general case, keeping the notation from 6. Belavkin had the idea
of introducing on A, instead of A_0, a non-positive definite hermitian scalar product

(8.1)  [G | F] = ψ(G*F) .

Note that [G* | F*] = [G | F], hence the kernel N of this hermitian form is stable
under *. Note the relation

(8.2)  [G | F] = < G_0, F_0 > + δ(G*) ψ(F) + ψ(G*) δ(F) .

Using this remark, it is easy to prove that the mapping F ↦ (δ(F), η(F), ψ(F)) from
A to K̂ = C ⊕ K ⊕ C is an isometric isomorphism from A/N into K̂ provided with the
non-positive definite scalar product

(8.3)  [u + k + v | u' + k' + v'] = ū v' + < k, k' > + v̄ u' .

Next, we define a representation of A on K̂, which on the image of A by the above
mapping corresponds to A acting on itself by left multiplication. An element of K̂ being
a column (!) (u, k, v), we consider the following (3,3) matrix acting on K̂

(8.4)  R(H) = ( δ(H)       0         0
               |η(H)>     ρ(H)       0
                ψ(H)    <η(H*)|    δ(H) )

We then have

(8.5)  ( δ(HF) )            ( δ(F) )
       ( η(HF) )  =  R(H)   ( η(F) )
       ( ψ(HF) )            ( ψ(F) )

Then it is very easy to see that R(H) R(K) = R(HK) (this is the definition of a
Schürmann triple), while R(H*) is Belavkin's "twisted adjoint", which involves a
symmetry w.r.t. the upward diagonal. If δ(H) = 0, (8.4) corresponds to Belavkin's
notation for coefficients of a quantum differential : otherwise stated, whenever
conditionally positive functions appear, the way is open to quantum stochastic calculus.
A striking application arises when we consider a *-algebra D without unit, and a
function of positive type ψ on it. Then we take for A the unital extension of D, and
put for c ∈ C, d ∈ D

       δ(c1 + d) = c ,   ψ(c1 + d) = ψ(d) .

Then ψ has become conditionally positive, the preceding theory applies and leads to a
Schürmann triple. For example, D may have the four generators (dt, da^ε (ε = −, o, +))
with multiplication given by the Ito table and the usual involution, ψ taking the value
1 on dt, 0 on the other generators. This is why Belavkin uses the name of Ito algebras
to describe the general situation.
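For concreteness, the quantum Ito table mentioned here, in the Hudson-Parthasarathy conventions we assume (da^o da^o = da^o, da^o da^+ = da^+, da^− da^o = da^−, da^− da^+ = dt, all other products, including those involving dt, equal to 0), can be encoded and its associativity checked exhaustively:

```python
# The Ito table (Hudson-Parthasarathy conventions; our own transcription)
# as a finite-dimensional algebra without unit: products of the generators
# dt, da-, dao, da+ are given below, and associativity is checked exhaustively.
from itertools import product

GEN = ["dt", "da-", "dao", "da+"]
TABLE = {("dao", "dao"): "dao", ("dao", "da+"): "da+",
         ("da-", "dao"): "da-", ("da-", "da+"): "dt"}   # all others vanish

def mul(x, y):
    """Product of two generators; None stands for 0."""
    return TABLE.get((x, y))

def mul3(x, y, z, left_first):
    """(xy)z if left_first, else x(yz); None propagates as 0."""
    w = mul(x, y) if left_first else mul(y, z)
    if w is None:
        return None
    return mul(w, z) if left_first else mul(x, w)

for x, y, z in product(GEN, repeat=3):
    assert mul3(x, y, z, True) == mul3(x, y, z, False)
print("Ito table is associative")
```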

§2. CONSTRUCTION OF THE PROCESS

In this section we use stochastic calculus to construct a process with independent
increments from a Schürmann triple on a bialgebra.

Construction on a coalgebra

1 The construction becomes very clear if we separate the coalgebraic and the algebraic
structure. We consider here a coalgebra A with an involution, and a "Schürmann triple"
consisting simply of a prehilbert space K, of a linear mapping ρ from A to operators
on K satisfying only the property ρ(F*) = ρ(F)*, of a linear mapping η from A to
K, and a linear mapping ψ from A to C. Since we have no algebraic structure on A,
we cannot write the other conditions for a Schürmann triple.
We consider a Fock space Φ of multiplicity K, with initial space C. We are going to
give a meaning to Schürmann's stochastic differential equation for mappings X_s from
A to s-adapted operators on Φ (writing as usual F ∘ X_s for X_s(F))

(1.1)  F ∘ X_t = δ(F) I +
       ∫_0^t F ∘ X_s * ( da_s^+(|η(F)>) + da_s^o(ρ(F) − δ(F) I) + da_s^−(<η(F*)|) + ψ(F) ds ) ,

and more generally, for 0 ≤ s ≤ t

(1.2)  F ∘ X_{st} = δ(F) I +
       ∫_s^t F ∘ X_{sr} * ( da_r^+(|η(F)>) + da_r^o(ρ(F) − δ(F) I) + da_r^−(<η(F*)|) + ψ(F) dr ) .

We generally deal with (1.1), leaving (1.2) to the reader. The explicit meaning of the
convolution symbol in (1.1) is the following : if ΔF = Σ_i F'_i ⊗ F''_i, we have

       F ∘ X_t = δ(F) I +
       Σ_i ∫_0^t F'_i ∘ X_s ( da_s^+(|η(F''_i)>) + da_s^o(ρ(F''_i) − δ(F''_i) I) + da_s^−(<η(F''_i*)|) + ψ(F''_i) ds ) .

This is still very abstract : choose a finite-dimensional coalgebra A_0 containing all the
elements of A involved, take a basis e_i of it, and define the coproduct and co-unit in
this basis as Δe_i = Σ_{jk} c_i^{jk} e_j ⊗ e_k, δ(e_i) = δ_i. Then we are reduced to a linear equation
in a finite dimensional space

(1.3)  X_i(t) = δ_i I +
       Σ_{jk} ∫_0^t c_i^{jk} X_j(s) ( da_s^+(|η_k>) + da_s^o(ρ_k − δ_k I) + da_s^−(<η̃_k|) + ψ_k ds ) ,

where ρ_k = ρ(e_k), η_k = η(e_k), η̃_k = η(e_k*), etc. We take an o.n.b. e^α of the prehilbert
space K (the upper index corresponds to the perverse notation system of Chapter V), and we are

reduced to a standard linear s.d.e. with finite dimensional initial space, and therefore
bounded coefficients, given by

(1.5)  L^α(e_i) = Σ_{jk} c_i^{jk} < e^α, η_k > e_j ,   L^α_β(e_i) = Σ_{jk} c_i^{jk} < e^α, (ρ_k − δ_k I) e^β > e_j ,   etc.
This system with bounded coefficients and infinite multiplicity satisfies the Mohari-
Sinha conditions VI.4.(11.3), as well as the dual Mohari-Sinha conditions for the adjoint
equation. Indeed, the sums Σ_α || L^α(e_i) ||² are finite, as all the vectors η_k, η̃_k and
ρ_k(e^α) belong to K, and this is sufficient to imply the conditions in a finite dimensional
space. Therefore the equation has a unique solution on the exponential domain. This
exponential domain, as described by the Mohari-Sinha theorem, depends on the choice
of the basis e^α, and consists of vectors which have finitely many non-zero components in
this basis, but changing the basis one reaches general exponential vectors with argument
in K (uncompleted).
Once uniqueness is known, we may prove easily the relation X_t(F*) = (X_t(F))*.
Let us also put

       X^j_i(t) = Σ_k c_i^{jk} X_k(t) ,   ψ^j_i = Σ_k c_i^{jk} ψ_k ,   etc. ,

from which X_i(t) can be recovered as Σ_j δ_j X^j_i(t). Then the operator process (X^j_i(t))
is the unique solution of the standard matrix equation

(1.6)  X^j_i(t) = δ^j_i + Σ_k ∫_0^t X^j_k(s) ( da_s^+(|η^k_i>) + da_s^o(ρ^k_i − δ^k_i I) + da_s^−(<η̃^k_i|) + ψ^k_i ds ) .

To prove that for r < s < t the increments are multiplicative,

       X_i(r,t) = Σ_{jk} c_i^{jk} X_j(r,s) X_k(s,t) ,

it suffices to prove the easy cocycle property

       X^j_i(r,t) = Σ_k X^j_k(r,s) X^k_i(s,t) ,

and to multiply by δ_j and sum over j. Though the operators in (1.6) are unbounded, and
do not preserve the exponential domain, there is no serious domain problem, since they
act on different parts of Fock space : we may consider that the domain for X^j_i(s,t)
consists of linear combinations of vectors a ⊗ E(u) ⊗ b, where a belongs to the past of
s, b to the future of t, and u vanishes outside [s,t].
2 Now comes the main result :
THEOREM. If A is a bialgebra, and if the hypotheses of Schürmann triples are satisfied
by ρ, η, ψ, then the mappings X_{st} are algebra homomorphisms, i.e. random variables.
This statement is proved as follows, considering X_t = X_{0t} for simplicity : first of
all, the properties of Schürmann triples are expressed on the coefficients L^α_β(F) as the
Evans-Hudson structure equations. Then a careful analysis of the Mohari-Sinha proof

(prepared in the preceding chapter) shows that we may prove the weak homomorphism
property

(2.1)  < E(v), X_t(GF) E(u) > = < X_t(G*) E(v), X_t(F) E(u) >

in a finite-dimensional coalgebra containing F and G, without need of finite-
dimensional bialgebras. However, in the theory of Evans-Hudson flows we dealt with
an algebra A of bounded operators, and we had no difficulty in defining X_t(G) X_t(F)
and in deducing the homomorphism property from (2.1). Here the situation is more
delicate, and the way Schürmann solves this problem consists in showing that X_t(F)
has a Maassen kernel, operating on a stable domain of "test-functions". This requires a
new definition of Maassen kernels since we deal with a Fock space of infinite multiplicity.
We give no details.
Unboundedness also creates a difficulty in the probabilistic interpretation of the
results. Indeed, to construct classical stochastic processes by quantum probabilistic
methods one would need a theorem allowing to extend commuting families of symmetric
operators into commuting families of selfadjoint operators.

Example : Azéma martingales

3 We consider first a general situation : let A be the (non-commutative) free algebra
generated by n indeterminates X_1 ... X_n ; we define the involution by X_i* = X_i and
trivial extension to products, and we give it a coalgebra structure by a formula

       ΔX_i = Σ_{jk} c_i^{jk} X_j ⊗ X_k ,

with coefficients subject to a few conditions expressing coassociativity, etc. Then we
extend these operations uniquely as algebra homomorphisms (in particular, we have
Δ1 = 1 ⊗ 1).
Then we choose a finite dimensional Hilbert space K, arbitrary hermitian operators
ρ_i on it, arbitrary vectors η_i, and real scalars c_i. There exists a unique extension of
these mappings as a Schürmann triple on the bialgebra A.
The case of the Azéma martingale arises when one takes two generators, and deduces
the coalgebra structure from example (2.4) in section 1 :

(6.2)  ΔX_1 = X_1 ⊗ X_1 ,  ΔX_2 = X_2 ⊗ X_1 + 1 ⊗ X_2 ,  δ(X_1) = 1 ,  δ(X_2) = 0 .

The only non-zero coefficients are c_1^{11}, c_2^{21}, c_2^{02}. The space K is taken one-dimensional,
so that everything is scalar. Then the Schürmann equations can be written as

       dX_1(t) = X_1(t) ( m_1 dt + dQ_t(η_1) + da_t^o(ρ_1 − I) ) ,

       dX_2(t) = X_2(t) ( m_1 dt + dQ_t(η_1) + da_t^o(ρ_1 − I) ) + X_0(t) ( m_2 dt + dQ_t(η_2) + da_t^o(ρ_2 − I) ) .

If we take m_1 = m_2 = 0, η_1 = 0, η_2 = 1, and finally ρ_2 = I, ρ_1 = c I, the system
separates into a pair of independent equations

       X_1(t) = 1 + (c − 1) ∫_0^t X_1(s) da_s^o ,   X_2(t) = Q_t + (c − 1) ∫_0^t X_2(s) da_s^o .

The first one represents a (commutative) process, which can be explicitly computed and
is a.s. equal to 1 in the vacuum state. The second equation is the quantum s.d.e. which
defines the family of Azéma martingales.
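The coproduct rules (6.2) can be checked mechanically : the following sketch (our own illustration) verifies coassociativity and the co-unit axioms on the basis 1, X_1, X_2.

```python
# Sanity check (ours) that the Azéma coproduct rules
#   Δ1 = 1⊗1,  ΔX1 = X1⊗X1,  ΔX2 = X2⊗X1 + 1⊗X2,  δ(1)=δ(X1)=1, δ(X2)=0
# define a coalgebra: coassociativity and the co-unit axioms hold.
from collections import Counter

# basis indices: 0 -> 1, 1 -> X1, 2 -> X2; Δ as a list of (coeff, (left, right))
delta_map = {0: [(1, (0, 0))], 1: [(1, (1, 1))], 2: [(1, (2, 1)), (1, (0, 2))]}
counit = {0: 1, 1: 1, 2: 0}

def coprod_left(i):
    """(Δ⊗I)Δ applied to basis element i, as a Counter on basis triples."""
    out = Counter()
    for c, (a, b) in delta_map[i]:
        for c2, (p, q) in delta_map[a]:
            out[(p, q, b)] += c * c2
    return out

def coprod_right(i):
    """(I⊗Δ)Δ applied to basis element i."""
    out = Counter()
    for c, (a, b) in delta_map[i]:
        for c2, (p, q) in delta_map[b]:
            out[(a, p, q)] += c * c2
    return out

for i in delta_map:
    assert coprod_left(i) == coprod_right(i)          # coassociativity
    left, right = Counter(), Counter()                # co-unit axioms
    for c, (a, b) in delta_map[i]:
        left[b] += c * counit[a]
        right[a] += c * counit[b]
    # unary + drops the zero entries before comparing
    assert +left == Counter({i: 1}) and +right == Counter({i: 1})
print("coalgebra axioms verified")
```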

Example : Conditionally positive functions on semi-groups

4 We consider the case of a unital *-algebra A on which are defined a "mass
functional" δ and a conditionally positive function ψ vanishing at 1, from which we
construct as before the representation ρ operating on K, and the cocycle η. On the
other hand, e^{tψ} = φ_t is a state for every t > 0, and we are going to show how
the GNS representation of φ_t can be constructed by "exponentiation" (Fock second
quantization). An outstanding example is the algebra M of measures of finite support
on a semi-group G with unit and involution : if G is a locally compact abelian group,
this corresponds to processes with independent increments on the dual of G.
We will make use of two Fock spaces : Γ(K) over K, and Γ(K̂) over K̂ = C ⊕ K ⊕ C,
the hat indicating that the Fock space is the usual one, but provided with an unusual
scalar product corresponding to the non-positive scalar product (8.3) on K̂. We begin
by explaining this.
We identify Fock space over K̂ with the space of sequences (f) = (f_{km}) of
elements of Γ(K) such that Σ_{km} || f_{km} ||² / k! m! < ∞, the scalar product being
Σ_{km} < f_{km}, f'_{km} > / k! m!. Then the exponential vector E(u + h + v) is read as the
sequence u^k v^m E(h). We then twist the scalar product on this space, putting

       [ (f) | (g) ] = Σ_{km} < f_{km}, g_{mk} > / k! m! .

The twisted scalar product of two (standard) exponential vectors E(u + h + v) and
E(u' + h' + v') then is exp [ u + h + v | u' + h' + v' ] as it should be.
The operators R(H) (§1 subs. 8) on K̂ are then extended by second quantization
to the exponential domain of Γ(K̂), with the same notation. In this way we get a
representation of A, which is a *-representation w.r.t. the twisted adjoint. We want to
deduce from it a true *-representation on Γ(K).
To accomplish that, Belavkin defines for every real p an isometry J from Γ(K) to
Γ(K̂) as follows : given f ∈ Γ(K), he puts

       (Jf)_{km} = 0 if m ≠ 0 ,   (Jf)_{k0} = p^k f .

Thus for h ∈ K we have J E(h) = E(p + h + 0). The (twisted) adjoint J* of J maps
the sequence (g) = (g_{km}) to

       J*(g) = Σ_m p^m g_{0m} / m! .

Indeed, we have

       [ Jf | (g) ] = Σ_k p^k < f, g_{0k} > / k! = < f, J*(g) > .


Hence J* maps E(u + h + v) to e^{pv} E(h). It is clear that J*J = I since J is an isometry,
while J J* E(u + h + v) = e^{pv} E(p + h + 0). In particular, J J* = I on exponential vectors
E(p + h + 0). We then let A operate on Γ(K) by

(4.1)  S(G) = J* R(G) J .

It is clear that S(G*) = S(G)* (standard adjoint), and a small computation gives, for
F of mass 0 and arbitrary G

(4.2)  S(G) E(F) = e^{p² ψ(G) + p < η(G*), F >} E( p η(G) + ρ(G) F ) .

If G has mass 1, we then have S(H) S(G) = S(HG), and we get a *-representation
of the multiplicative semi-group of elements of mass 1 in A in the Fock space Γ(K),
which is a generalized Weyl representation. In particular, if F = 0 we have

       < S(H) 1, S(G) 1 > = < e^{p² ψ(H)} E(p η(H)), e^{p² ψ(G)} E(p η(G)) >
                          = e^{p² ( ψ(H*) + ψ(G) + < η(H), η(G) > )} = e^{p² [H | G]} .

Thus what we constructed is the GNS representation associated with the function of
positive type e^{ψ(H*G)}, on the space A_p of all elements G, H of mass p.
EXAMPLE. Take for semi-group G the additive group of a prehilbert space, put x* = −x,
x* y = y − x, and ψ(x) = − || x ||² / 2. Then M_0 is generated by the measures ε_x − ε_0,
the corresponding scalar product on K being equal to < x, y >. The semi-group of
measures of mass 1 is generated by the measures ε_x, operating on G by translation.
The representation we get is exactly Weyl's.
5 Let us assume A = M, which has a simple bialgebra structure, and compare the
preceding construction with Schürmann's general theorem. The prehilbert space K is
spanned by the vectors ε_x − ε_e which we write simply as x − e. Since the coproduct is
Δε_x = ε_x ⊗ ε_x, the stochastic differential equations relative to different points can be
handled separately, and become

       X_t(x) = I + ∫_0^t X_s(x) ( da_s^+(x − e) + da_s^o(ρ(x) − I) + da_s^−(x* − e) + ψ(x) ds ) .

This equation may be solved explicitly. Denoting by 1_t the indicator of [0, t[, we have

       X_t(x) = e^{t ψ(x)} exp( a^+((x − e) ⊗ 1_t) ) Γ( ρ(x) 1_t + I (1 − 1_t) ) exp( a^−((x* − e) ⊗ 1_t) ) .

Indeed, to check that these operators satisfy the appropriate s.d.e. we remark that the
three terms can be differentiated without Ito correction since this product is normally
ordered. To compute the differential of the middle term we use the second example of
Chapter VI, §1 subs. 13 : for every operator U the process Y_t = Γ( U 1_t + I (1 − 1_t) )
solves the s.d.e. Y_t = I + ∫_0^t Y_s (U − I) dN_s. Finally, one sees that X_t on the Fock space
up to time t is the same as that of the above representation, with p = √t.
Appendix 1

Functional Analysis
Hilbert space functional analysis plays for quantum probability the same role that
measure theory plays for classical probability. Many papers in quantum probability are
unreadable by a non-specialist, because of their heavy load of references to advanced
functional analysis, and in particular to von Neumann algebras. However, one can do
a lot (not everything, but still a great deal) with a few simple tools. Such tools will be
summarily presented in these Appendices, essentially in three parts : here, elementary
results of functional analysis in Hilbert space ; later, the basic theory of C*-algebras,
and finally, the essentials of von Neumann algebras. For most of the results quoted, a
proof will be presented, sometimes in a sketchy way.
These sections on functional analysis contain nothing original : my main contribution
has consisted in choosing the omitted material (this is almost as important as
choosing what one includes!). The material itself comes from two books which I have
found excellent in their different ways : Bratteli-Robinson [BrR1], Operator Algebras
and Quantum Statistical Mechanics I, and Pedersen [Ped], C*-algebras and their
Automorphism Groups. The second book includes very beautiful mathematics, but may
be too complete. The first one has greatly helped us by its useful selection of topics.
For the results in this section, we also recommend Reed and Simon [ReS], and especially
Parthasarathy's recent book [Par1], which is specifically intended for quantum probability.

Hilbert-Schmidt operators

1 We denote by H the basic Hilbert space. Given an orthonormal basis (e_n) and an
operator a, we put

(1.1)  || a ||_2 = ( Σ_n || a e_n ||² )^{1/2} ≤ +∞ .

Apparently this depends on the basis, but let (e'_m) be a second o.n.b. ; we have

       || a ||_2² = Σ_{nm} |< a e_n, e'_m >|² = Σ_{nm} |< e_n, a* e'_m >|² = || a* ||_2'²

where the ' on the extreme right indicates that the "norm" is computed in the new
basis. Taking first the two bases to be the same we get that || a ||_2 = || a* ||_2, and then
one sees that || a ||_2 does not depend on the choice of the basis. It is called the Hilbert-
Schmidt norm of a, and the space of all operators of finite HS norm is denoted by
HS or by L². In contrast to this, the space L(H) of all bounded operators on H will
sometimes be denoted by L^∞ and its norm by || · ||_∞. Since every unit vector can be
included in some o.n.b., we have || a ||_∞ ≤ || a ||_2. According to (1.1) we have for every
bounded operator b

(1.2)  || b a ||_2² = Σ_n || b a e_n ||² ≤ || b ||_∞² || a ||_2²



and taking adjoints

(1.3)  || a b ||_2 ≤ || b ||_∞ || a ||_2 .

Note in particular that || a b ||_2 ≤ || a ||_2 || b ||_2.
It is clear from (1.1) that a (hermitian) scalar product between HS operators can
be defined by < a, b >_HS = Σ_n < a e_n, b e_n >. It is easily proved that the space HS is
complete.
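A quick numerical illustration (our own) of the basis-independence just proved, on C²:

```python
# Numerical illustration (ours) of the basis-independence of the
# Hilbert-Schmidt norm (1.1) on C^2: the sum Σ_n ||a e_n||^2 is computed in
# the standard basis and in a rotated orthonormal basis, with equal results.
import math

def mat_vec(a, v):
    return [sum(a[i][j] * v[j] for j in range(2)) for i in range(2)]

def norm_sq(v):
    return sum(abs(z)**2 for z in v)

def hs_norm_sq(a, basis):
    return sum(norm_sq(mat_vec(a, e)) for e in basis)

a = [[1+2j, 0.5], [-1j, 3.0]]
std = [[1, 0], [0, 1]]
t = 0.7  # any rotation angle gives another o.n.b.
rot = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]

n1, n2 = hs_norm_sq(a, std), hs_norm_sq(a, rot)
assert abs(n1 - n2) < 1e-12
# and ||a||_2 = ||a*||_2 : the adjoint (conjugate transpose) has the same norm
a_star = [[a[j][i].conjugate() for j in range(2)] for i in range(2)]
assert abs(hs_norm_sq(a_star, std) - n1) < 1e-12
print("HS norm squared:", n1)   # here 15.25
```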

Trace class operators

2 Let first a be bounded and positive, and let b = √a be its positive square root (if
a has the spectral representation ∫_0^∞ t dE_t, its square root is given by b = ∫ √t dE_t).
We have in any o.n. basis (e_n)

(2.1)  Σ_n < e_n, a e_n > = Σ_n < b e_n, b e_n > = || b ||_2² ≤ +∞ .

Thus the left hand side does not depend on the basis : it is called the trace of a and
denoted by Tr(a). For a positive operator, the trace (finite or not) is always defined.
Since it does not depend on the basis, it is unitarily invariant ( Tr(u* a u) = Tr(a) if u
is unitary).
Consider now a product a = b c of two HS operators. We have

(2.2)  Σ_n |< e_n, a e_n >| = Σ_n |< b* e_n, c e_n >| ≤ || b ||_2 || c ||_2 < ∞

and

(2.3)  Σ_n < e_n, a e_n > = Σ_n < b* e_n, c e_n > = < b*, c >_HS .

Since the right hand side does not depend on the basis, the same is true of the left
hand side. On the other hand, the left hand side does not depend on the decomposition
a = b c, so the right side does not depend on it either. Operators a which can be
represented in this way as a product of two HS operators are called trace class operators
(sometimes also nuclear operators), and the complex number (2.3) is denoted by Tr(a)
and called the trace of a. One sees easily that, for positive operators, this definition of
the trace is compatible with the preceding one. Note also that, given two arbitrary HS
operators b and c, their HS scalar product < b, c >_HS is equal to Tr(b* c).
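The identity < b, c >_HS = Tr(b* c) can be checked by hand on small matrices; a sketch (our own illustration):

```python
# Small numerical check (ours) that for 2x2 matrices the HS scalar product
# <b, c>_HS = Σ_n <b e_n, c e_n> equals the trace of b* c, as stated above.
def mult(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def adjoint(x):
    return [[x[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace(x):
    return x[0][0] + x[1][1]

b = [[1+1j, 2], [0, -1j]]
c = [[0.5, -1], [3j, 1+2j]]

# <b, c>_HS computed columnwise: Σ_n <b e_n, c e_n> = Σ_{ij} conj(b_ij) c_ij
hs = sum(b[i][j].conjugate() * c[i][j] for i in range(2) for j in range(2))
assert abs(hs - trace(mult(adjoint(b), c))) < 1e-12
print("<b,c>_HS = Tr(b*c) =", hs)
```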
Intuitively speaking, HS operators correspond to "square integrable functions", and
a product of two square integrable functions is just an "integrable function", the trace
corresponding to the integral. This is why the space of trace class operators is sometimes
denoted by L¹. In other contexts it may be denoted by M(H), a notation suggesting a
space of bounded measures. The same situation occurs in classical probability theory on
a discrete countable space like N, on which all measures are absolutely continuous w.r.t.
the counting measure. Indeed, non-commutative probability on L(H) is an extension
of such a situation, the trace playing the role of the counting integral. More general
σ-fields are represented in non-commutative probability by arbitrary von Neumann
algebras, which offer much more variety than their commutative counterparts.

Given a bounded operator a, we denote by |a| the (positive) square root of a*a;
this is usually not the same as the square root of aa*, and this "absolute value" mapping has some
pathological properties: for instance it is not subadditive. An elementary result called
the polar decomposition of bounded operators asserts that a = u|a|, |a| = u*a, where u
is a unique partial isometry, i.e. is an isometry when restricted to (Ker u)^⊥. We will
not need the details, only the fact that u always has a norm ≤ 1, and is unitary if a
is invertible. For all this, see Reed-Simon, theorem VI.10.
THEOREM. The operator a is a product of two HS operators (i.e. belongs to the trace
class) if and only if Tr(|a|) is finite.
PROOF. Let us assume Tr(|a|) < ∞, and put b = √|a|, a HS operator. Then
a = u|a| = (ub)b is a product of two HS operators. Conversely, let a = hk be a
product of two HS operators. Then |a| = u*a = (u*h)k and the same reasoning as
in (2.1) gives (the trace being meaningful since |a| is positive)

    Tr(|a|) ≤ ||u*h||_2 ||k||_2 ≤ ||h||_2 ||k||_2

since ||u|| ≤ 1.
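In finite dimension the content of the theorem can be checked directly: Tr(|a|) is the sum of the singular values of a, and the polar and HS factorizations can be computed explicitly. A numerical sketch (the random real matrix is an arbitrary illustrative choice, invertible with this seed so that u comes out orthogonal):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((5, 5))

# |a| = sqrt(a* a); its trace is the sum of the singular values of a
w, v = np.linalg.eigh(a.T @ a)                 # a real, so a* = a.T
abs_a = v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T
trace_norm = np.trace(abs_a)
assert np.allclose(trace_norm, np.linalg.svd(a, compute_uv=False).sum())

# Polar decomposition a = u |a|, with u unitary since a is invertible here
u = a @ np.linalg.inv(abs_a)
assert np.allclose(u @ u.T, np.eye(5))
assert np.allclose(u @ abs_a, a)

# b = sqrt(|a|) is HS, and a = (ub) b is a product of two HS operators,
# exactly as in the proof above.
b = v @ np.diag(np.clip(w, 0, None) ** 0.25) @ v.T
assert np.allclose((u @ b) @ b, a)
```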
The same kind of proof leads to other useful consequences. Before we state them, we
define the trace norm ||a||_1 of the operator a as Tr(|a|). We shall see later that this
is indeed a norm, under which L^1 is complete.
a) If a is a trace class operator, we have |Tr(a)| ≤ ||a||_1.
Indeed, putting b = √|a| as above, we have from the polar decomposition a = (ub)b

    |Tr(a)| = |<b*u*, b>_HS| ≤ ||b*u*||_2 ||b||_2 ≤ ||b||_2^2 = ||a||_1 .
b) For a ∈ L^1, h ∈ L^∞, we have ||ah||_1 ≤ ||a||_1 ||h||_∞.
Indeed, we have with the same notation a = ubb, ah = (ub)(bh), then

    ||ah||_1 ≤ ||ub||_2 ||bh||_2 ≤ ||b||_2 ||b||_2 ||h||_∞ = ||a||_1 ||h||_∞ .


c) If a ∈ L^1 we have a* ∈ L^1, and ||a||_1 = ||a*||_1.
Indeed a = u|a| gives a* = |a|u*, and the preceding property implies ||a*||_1 ≤
||a||_1, from which equality follows. Knowing this, one may take adjoints in b) and get
the same property with h to the left of a.
d) For a ∈ L^1, h ∈ L^∞, we have Tr(ah) = Tr(ha).
This fundamental property has little to do with the above method of proof: if h is
unitary, it reduces to the unitary invariance of the trace class and of the trace itself.
The result extends to all bounded operators, since they are linear combinations of (four)
unitaries.
The last property is a little less easy:
e) If a ∈ L^1, b ∈ L^1, we have a + b ∈ L^1 and ||a + b||_1 ≤ ||a||_1 + ||b||_1.
To see this, write the three polar decompositions a = u|a|, b = v|b|, a + b = w|a + b|.
Then |a + b| = w*(a + b) = w*u|a| + w*v|b|. Let (e_n) be a finite o.n. system; we
have

    Σ_n <e_n, |a + b| e_n> = Σ_n <e_n, w*u|a| e_n> + Σ_n <e_n, w*v|b| e_n> ≤ ||a||_1 + ||b||_1 .

212 Appendix 1

Then we let (e_n) increase to an orthonormal basis, etc.


EXAMPLE. Let a be a selfadjoint operator. Then it is easy to prove that a belongs to
the trace class if and only if a has a discrete spectrum (λ_i), and Σ_i |λ_i| is finite; then
this sum is ||a||_1, and Tr(a) = Σ_i λ_i. It follows easily that the two operators a^+, a^-
belong to L^1 if a does, and that ||a||_1 = ||a^+||_1 + ||a^-||_1, a result which corresponds
to the Jordan decomposition of bounded measures in classical measure theory.
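For a finite Hermitian matrix this example reads as follows (a random symmetric matrix chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
m = rng.standard_normal((4, 4))
a = (m + m.T) / 2                      # selfadjoint

lam, v = np.linalg.eigh(a)
trace_norm = np.abs(lam).sum()         # ||a||_1 = sum_i |lambda_i|
assert np.allclose(np.trace(a), lam.sum())

# Jordan decomposition a = a+ - a- with a+, a- positive,
# and ||a||_1 = ||a+||_1 + ||a-||_1
a_plus = v @ np.diag(np.clip(lam, 0, None)) @ v.T
a_minus = v @ np.diag(np.clip(-lam, 0, None)) @ v.T
assert np.allclose(a_plus - a_minus, a)
assert np.allclose(np.trace(a_plus) + np.trace(a_minus), trace_norm)
```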

Duality properties
3 The results of this subsection are essential for the theory of von Neumann algebras,
and have some pleasant probabilistic interpretations.
Let us denote by E_xy the operator of rank one

    E_xy z = <y, z> x    ( |x><y| in Dirac's notation).

Then we have (E_xy)* = E_yx, E_yx E_xy = ||x||^2 E_yy, ||E_xy||_1 = ||x|| ||y||. The space
F generated by these operators consists of all operators of finite rank. One can show
that its closure in operator norm consists of all compact operators. Our aim is to prove
THEOREM. The dual space of F is L^1, and the dual of L^1 is the space L^∞ of all
bounded operators.
The proof also makes explicit the duality functional between these spaces, in both
cases the bilinear functional (a, b) ↦ Tr(ab). We leave it to the reader to check that
the theorem remains true if all three spaces are restricted to their selfadjoint elements.
Note that the theorem implies that L^1 is complete.
PROOF. 1) We remark first that for a positive we have Tr(a) = sup_{h ∈ F_1} Tr(ah), where
F_1 denotes the unit ball of F. To see this, if a has discrete spectrum, diagonalize it
and choose for h a diagonal matrix whose diagonal coefficients are zeroes and finitely
many ones, tending to the identity matrix. If a has some continuous spectrum, then
the right side of the relation is +∞, and on the other hand a dominates ck where
c > 0 is a constant and k is the projector on some infinite dimensional subspace. Then
taking h to be almost the identity matrix of this subspace we see that the right side is
also equal to +∞.
We extend this relation to arbitrary a using the polar decomposition a = u|a|, |a| =
u*a. Then we have from the above discussion

    ||a||_1 = Tr(|a|) = sup_{k ∈ F_1} Tr(|a| k) = sup_{k ∈ F_1} Tr(u*ak) = sup_{k ∈ F_1} Tr(aku*) .

We now put ku* = h, which belongs to F_1, and we get ||a||_1 ≤ sup_{h ∈ F_1} Tr(ah); the
reverse inequality is obvious.
2) Let φ be a continuous linear functional on F. Then it is continuous for the
topology induced by HS, which is stronger than the operator norm topology. Hence
it can be written φ(·) = <a, ·>_HS for some HS operator a. Using the preceding
computation, one sees that ||a||_1 = ||φ||, and in particular that a belongs to L^1.

3) Let x, y be two normalized vectors, and let (e_n) be an o.n. basis whose first
element is y. Then for every operator h we have h E_xy(e_n) = h(x) for n = 1, 0
otherwise, hence Tr(h E_xy) = <y, h(x)>. Since E_xy belongs to the unit ball of L^1, we
have

    ||h|| ≤ sup_{||a||_1 ≤ 1} Tr(ha) ,

whence the equality. If h is selfadjoint one can take y = x in this argument, and
therefore a can be taken to be selfadjoint.
4) Finally, let φ be a continuous linear functional on L^1. Then (y, x) ↦ φ(E_xy) is
a continuous hermitian bilinear functional, which can be written as <y, hx> for some
bounded operator h. The computation above shows that the operator norm of h is
equal to the norm of φ. The two functionals φ(·) and Tr(h ·) are equal on operators
of finite rank, hence on all of L^1. The proof is concluded.
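The duality can be illustrated in finite dimension: pairing a bounded operator h against the rank one operators E_xy in the unit ball of L^1 recovers the operator norm of h, the sup being attained at the top singular pair. A sketch (real matrices and vectors, chosen at random for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.standard_normal((4, 4))

# E_xy = |x><y| has trace norm ||x|| ||y||, and Tr(h E_xy) = <y, h x>.
x = rng.standard_normal(4); x /= np.linalg.norm(x)
y = rng.standard_normal(4); y /= np.linalg.norm(y)
E = np.outer(x, y)                     # real case of |x><y|
assert np.allclose(np.trace(h @ E), y @ h @ x)

# sup over unit vectors of |<y, h x>| is the operator norm of h,
# attained at the leading singular vectors.
u, s, vt = np.linalg.svd(h)
assert np.allclose(abs(u[:, 0] @ h @ vt[0]), s[0])
assert np.allclose(np.linalg.norm(h, 2), s[0])
```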

Weak convergence properties

4 We are going now to study weak convergence properties for "measures" and
"functions". The results concerning measures are pleasant, but not very important,
and we do not prove them in detail.
We stop using boldface letters for operators.
We start with measures, recalling first Bourbaki's terminology for weak convergence
on a locally compact space E. One says that a sequence (we leave it to the reader to
rewrite the statements for filters, if he cares to) of probability measures (μ_n) converges
vaguely to μ if μ_n(f) converges to μ(f) for every continuous function f with compact
support. Then the limit is a positive measure with total mass ≤ 1. If μ is a true
probability measure, then the sequence is said to converge narrowly, and then μ_n(f)
converges to μ(f) for every continuous bounded function f. Convergence of sequences
in the narrow topology implies Prohorov's condition: for every ε > 0 there exists a
compact set K whose complement has measure < ε for every measure μ_n. Conversely,
every sequence μ_n which satisfies Prohorov's condition is relatively compact in the
narrow topology. In fact, we do not need any sophisticated theory: quantum probability
in the situation of this chapter is similar to measure theory on ℕ.
What corresponds to vague convergence for a sequence ρ_n of density matrices is the
convergence of Tr(ρ_n a) to Tr(ρa) for every operator a of finite rank, while narrow
convergence holds if in addition Tr(ρ) = 1. Since the closure in norm of the space of
finite rank operators is the space of all compact operators, whose dual is L^1, the unit
ball of L^1 is compact in the vague topology, just as in classical probability. If (ρ_n)
converges narrowly to ρ, then Tr(ρ_n a) converges to Tr(ρa) for every bounded operator a,
just as in classical probability. Sketch of proof: we reduce to a selfadjoint, then
we approximate a in norm by operators with discrete spectrum: using the spectral
representation of a this amounts to approximating uniformly the function t by a step
function f(t) on a compact interval. Then a can be diagonalized in some o.n.b. (e_k)
and we have Tr(ρ_n a) = Σ_k (ρ_n)_{kk} λ_k, where λ_k is the eigenvalue of a corresponding to
the eigenvector e_k and (ρ_n)_{kk} is the k-th diagonal matrix element of ρ_n. Then we are
reduced to a trivial problem of narrow convergence on ℕ.
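The failure of vague convergence to preserve mass has a simple finite caricature: in a d-dimensional truncation, the pure states |e_n><e_n| pair to zero against any fixed finite rank operator once n is large enough, the "mass escaping to infinity". A toy sketch (not from the text; the dimension and the rank-3 projector are arbitrary illustrative choices):

```python
import numpy as np

d = 50
a = np.zeros((d, d)); a[:3, :3] = np.eye(3)   # a fixed finite rank operator

pairings = []
for n in range(d):
    rho = np.zeros((d, d)); rho[n, n] = 1.0   # pure state |e_n><e_n|
    pairings.append(np.trace(rho @ a))

# Each rho_n is a true density matrix, yet Tr(rho_n a) -> 0:
assert np.trace(rho) == 1.0
assert pairings[0] == 1.0 and pairings[10] == 0.0
```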
214 Appendix 1

"Prohorov's condition" takes the following form : for every e > 0 there exists a
projector P of finite rank such that [IP~ - Pp,~PII~ <<- c f o r a l l n . If (Pn) converges
narrowly to p, t h e n Prohorov's condition in this sense holds ([Day], p. 291, l e m m a 4.3),
a n d in fact pn tends to p in trace norm. This is entirely similar to the fact t h a t weak
a n d strong convergence are the same for sequences in e I ( D u n f o r d - S c h w a r t z , Linear
Operators I, p. 296, Cor. 14). This theorem is wrong for states on a general yon N e u m a n n
algebra (see D e l l ' a n t o n i o , [D'AI ).

The ~normal" t o p o l o g y for o p e r a t o r s

5 The results in this subsection are fundamental for the theory of von Neumann
algebras. They describe several weak topologies on the space L^∞ of all bounded
operators on H.
The best known among them are the weak and the strong topologies. As usual with
locally convex topologies, they are defined by a family of seminorms (p_λ)_{λ ∈ Λ}, operators
(a_i)_{i ∈ I} converging to 0 (along some filter on I) if and only if for every λ the numbers
p_λ(a_i) converge to 0. In the case of the weak topology, Λ is the set of all pairs (x, y)
of vectors, and p_{x,y}(a) = |<y, ax>|. In the case of the strong topology, Λ is the set of
all vectors, and p_x(a) = ||ax||.
There is a third basic topology, which arises from the fact that L^∞ is the dual of
L^1: in this case, Λ is the set of all trace class operators (or, what amounts to the
same, the set of all density matrices ρ), and p_ρ(a) = |Tr(ρa)|. It turns out that it is
the most important one, and that it has no satisfactory name. Its old name "ultraweak
topology" is bad, since it seems to mean "still weaker than the weak topology", while
it is stronger than the weak topology! Many people call it σ-weak topology (σ has the
same meaning here as in σ-finite, the explanation being given below). My own tendency,
given its importance, and the very frequent use of the adjective normal in the context
of von Neumann algebras, to mean continuous in this topology, consists in calling it the
normal topology.
The usual description of the normal topology is as follows: note that every trace
class operator can be written as a product bc of two Hilbert-Schmidt operators, and
that in an o.n.b. (e_n) one can write

    Tr(bca) = Tr(cab) = Σ_n <c* e_n, a b(e_n)> = Σ_n <x_n, a y_n>

with x_n = c* e_n, y_n = b e_n, so that Σ_n ||x_n||^2 < ∞, Σ_n ||y_n||^2 < ∞. Consider now a direct sum K of countably
many copies of H; an element of this space is a sequence x = (x_n) of elements of H
such that ||x||^2 = Σ_n ||x_n||^2 < ∞. On the other hand, associate with every bounded
operator a on H the operator ã on K which maps (x_n) into (ax_n). Then note that
for every such sequence y there is exactly one HS operator b which transforms (e_n) into (y_n).
So the above displayed expression can be written <x, ã y>, and we see that convergence
of a in the normal topology of H amounts to the convergence of ã in the usual weak
topology of K.
One can define in the same way the "σ-strong topology", which is interesting but will
not be considered here.

Since L^∞ is the dual of L^1, its unit ball is compact in the normal topology, hence on
the unit ball there cannot be a different weaker Hausdorff topology. Otherwise stated, as
far as one restricts oneself to some norm bounded set of operators, there is no distinction
between the normal and the weak topologies.
We end this section with a simple and well known result.
THEOREM. Let φ be a linear functional on L^∞(H), continuous for the strong topology
of operators. Then φ is a finite linear combination of functionals of the type a ↦
<y, ax> (in particular, φ is also continuous for the weak topology).
PROOF. By definition of the strong topology, there exist finitely many vectors x_i such
that sup_i ||ax_i|| ≤ 1 implies |φ(a)| ≤ 1. Let p be the projector on the subspace K generated
by these vectors. The relation ap = 0 implies ax_i = 0 for all i, hence φ(a) = 0, so
we must have φ(a) = φ(ap) for all a. Choose an o.n.b. (e_n) whose first k elements
generate K, and express φ(a) as a linear combination of matrix elements of a in this
basis, etc.
Since the dual space of L^∞ in the normal topology is L^1, we see that the two
topologies are rather different. On the other hand, a classical theorem of Banach (of
which a very simple proof has been given by Mokobodzki: see for instance [DeM3], chap.
X, n° 52) asserts that a linear functional on a dual Banach space is continuous in the
weak* topology if and only if its restriction to the unit ball is continuous. Here, a linear
functional on L^∞ is normally continuous (or, as stated briefly, is normal) if and only
if its restriction to the unit ball is normally continuous. But on the unit ball the weak
and normal topologies coincide. This will be useful as a convenient characterization of
normal functionals without looking at the normal topology itself.

Tensor products of Hilbert spaces

6 The tensor product of two Hilbert spaces plays in quantum probability the role of
the ordinary product of measure spaces in classical probability. This subsection is very
important, and nearly trivial.
Given two Hilbert spaces A and B, we define their Hilbert space tensor product to
consist of 1) a Hilbert space C and 2) a bilinear mapping (f, g) ↦ f ⊗ g from A × B
to C such that

(6.1)    <f ⊗ g, h ⊗ k>_C = <f, h>_A <g, k>_B .

On the other hand, the set of all vectors f ⊗ g should span C.
From these two properties it is easy to deduce that, given two o.n. bases (f_α) of A
and (g_β) of B, the family (f_α ⊗ g_β) is an o.n.b. of C. It follows immediately that the
space C and the bilinear mapping ⊗ are defined up to isomorphism. It is possible also
to define uniquely a canonical tensor product, i.e. A ⊗ B and all the mappings involved
are constructed from A and B in the language of set theory, but the need for such a
precise definition is never felt.
What about existence? Assume A and B are given as concrete Hilbert spaces
L^2(E, 𝓔, λ) and L^2(F, 𝓕, μ) (for instance, o.n.b.'s (e_α) and (f_β) have been chosen,
and λ, μ are the counting measures on the index sets). Then C is nothing but
L^2(E × F, λ ⊗ μ), the product measure, f ⊗ g being interpreted as the function of

two variables f(x)g(y) on E × F (which is quite often denoted f ⊗ g in a more general
context). The verification of the two axioms is very easy. Besides that, the concrete case
of L^2 spaces covers all practical situations.
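In the concrete picture with counting measures, axiom (6.1) is just the statement that inner products of Kronecker products factorize; a one-line numerical check (random vectors chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
f, h = rng.standard_normal((2, 3))
g, k = rng.standard_normal((2, 4))

# <f(x)g, h(x)k> = <f, h> <g, k>, with (x) realized as np.kron
assert np.allclose(np.dot(np.kron(f, g), np.kron(h, k)),
                   np.dot(f, h) * np.dot(g, k))
```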
Returning to abstract Hilbert spaces, it is convenient to have a name for the linear
span (no closure operation) of all vectors f ⊗ g, f and g ranging respectively over two
subspaces A_0 and B_0 of A and B: we call it the algebraic tensor product of A_0 and
B_0. We introduce no specific notation for it (usually, one puts some mark like ∘ on the
tensor sign).
Let H and K be two Hilbert spaces, U and V be two continuous linear mappings
from A to H, B to K. There exists a unique continuous mapping W = U ⊗ V from
the algebraic tensor product of A and B into H ⊗ K such that

(6.2)    W(f ⊗ g) = (Uf) ⊗ (Vg),

and we are going to prove that

(6.3)    ||W|| ≤ ||U|| ||V|| .

Then it will extend by continuity to the full tensor product. To prove (6.3) we may
proceed in two steps, assuming one of the two mappings to be the identity, for instance
B = K, V = I. Let (f_α), (g_β) be o.n. bases for A and B, and let A_0, B_0 be their
linear spans (without completion). Since the vectors f_α ⊗ g_β are linearly independent,
there is a unique operator W with domain A_0 ⊗ B_0 which maps f_α ⊗ g_β to U(f_α) ⊗ g_β.
Every vector in the domain can be written as a finite sum z = Σ_β x_β ⊗ g_β, and its
image is a sum z' = Wz = Σ_β U(x_β) ⊗ g_β of orthogonal vectors. Then we have

    ||z'||^2 = Σ_β ||U(x_β) ⊗ g_β||^2 ≤ ||U||^2 Σ_β ||x_β||^2 ||g_β||^2 = ||U||^2 ||z||^2 .

Relation (6.2) is proved on the domain, which is dense, and then the extension of W
to an everywhere defined operator satisfying (6.2) and (6.3) is straightforward.
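In finite dimension U ⊗ V is the Kronecker product, and both (6.2) and (6.3) can be verified numerically (random matrices for illustration; for the Kronecker product, (6.3) in fact holds with equality, since singular values multiply):

```python
import numpy as np

rng = np.random.default_rng(4)
U = rng.standard_normal((3, 3))
V = rng.standard_normal((4, 4))
f = rng.standard_normal(3)
g = rng.standard_normal(4)

# (6.2): (U (x) V)(f (x) g) = (U f) (x) (V g)
W = np.kron(U, V)
assert np.allclose(W @ np.kron(f, g), np.kron(U @ f, V @ g))

# (6.3): ||U (x) V|| <= ||U|| ||V||, with equality for the Kronecker product
def opnorm(m):
    return np.linalg.norm(m, 2)        # spectral (operator) norm

assert opnorm(W) <= opnorm(U) * opnorm(V) + 1e-12
assert np.allclose(opnorm(W), opnorm(U) * opnorm(V))
```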
In classical probability, when we want to consider simultaneously two probabilistic
objects, i.e. spaces (E, 𝓔, P) and (F, 𝓕, Q), we consider the product (E × F, 𝓔 × 𝓕),
and then a joint law on it which, if the two objects are physically unrelated, is the
product law P ⊗ Q (classical probabilistic independence), and is some other law if
there is a non-trivial correlation between them. The quantum probabilistic analogue
consists in forming the tensor product C = A ⊗ B of the Hilbert spaces describing
the two quantum objects, the state of the pair being the tensor product ρ ⊗ σ of the
individual states if there is no interaction. This explains the basic character of the
tensor product operation. On the other hand, we met in Chapter IV a new feature of
quantum probability: systems of indistinguishable objects are not adequately described
by the ordinary tensor product, but rather by subspaces of the tensor product subject
to symmetry rules.
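The analogy with classical independence can be made concrete: under a product state ρ ⊗ σ, expectations of product observables factorize, exactly as E[fg] = E[f]E[g] for independent random variables. A numerical sketch (the random_state helper is an ad hoc way to produce a density matrix, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

def random_state(d):
    """An illustrative random density matrix: positive, trace one."""
    m = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    s = m @ m.conj().T
    return s / np.trace(s)

rho, sigma = random_state(2), random_state(3)
a = rng.standard_normal((2, 2))
b = rng.standard_normal((3, 3))

# Expectations factorize under the product state rho (x) sigma:
joint = np.trace(np.kron(rho, sigma) @ np.kron(a, b))
assert np.allclose(joint, np.trace(rho @ a) * np.trace(sigma @ b))
assert np.allclose(np.trace(np.kron(rho, sigma)), 1.0)
```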
7 There are some relations between tensor products and HS operators. We describe
them briefly.
First of all, we consider a Hilbert space H and its dual space H', i.e. the space of
complex linear functionals on H. Mapping a ket x = |x> to the corresponding bra

x* = < x I provides an antilinear 1-1 m a p p i n g from 7-/ onto ~ r . Since ~ l is a space of


complex functions on 7-/, the tensor product u ® v of two elements of 7-/~ has a n a t u r a l
i n t e r p r e t a t i o n as the f u n c t i o n u ( x ) v ( y ) on ~ x ~ , which is bilinear. If we denote by
(ec~) a n o.n.b, for 7-g, by e a its dual basis, an o.n.b, for 7-/t ® 7"/I is provided by the
bilinear functionals e c*¢~ = e c~®e/3 , and the elements of ~ l ® ~ l are bilinear forms which
m a y be e x p a n d e d as ~ cc~oe c~¢~ with a square s u m m a b l e family of complex coefficients
ca~. This is much smaller t h a n the space of all continuous bilinear forms on ~ x 7-g, a n d
is called the space of ttilbert-Schraidt forms. One may define similarly m u l t i p l e tensor
w o d u c t s , a n d H i l b e r t - S e h m i d t n - l i n e a r forms.
W h y Hilbert Schmidt ? I n s t e a d of considering 7-¢I ® ~ r let us consider 7-l' @ ~ , a
space of bilinear functionals on 7-[ × 7-/t . Any operator A on 7-/ provides such a bilinear
form, n a m e l y (x, yt) , , (y~,Ax) everything here is bilinear a n d the h e r m i t i a n
scalar p r o d u c t is not used - - a n d the basis elements e a ® e3 are read in this way as
the operators ] e/~ > < e~ I (Dirac's notation) which constitute an o.n.b, for the space of
H i l b e r t - S c h m i d t operators.
More generally, the tensor product x* ®y can be interpreted as the r a n k one o p e r a t o r
l y > < x l , a n d the algebraic tensor product ~ ' Q ~ as the space of all operators of finite
rank.
Appendix 2

Conditioning and Kernels


Conditioning is one of the basic ideas of classical probability, and the greatest success
of the Kolmogorov system of axioms was its inclusion of conditioning as a derived
notion, without need of special axioms. Therefore classical probabilists will expect a
discussion of conditioning in quantum probability. On the other hand, the subject is
very delicate: in quantum physics, "knowing" a random variable does not concern only
the mind of the physicist. It may be impossible to "know" without destroying some
features of the system. Thus conditional expectations do not exist in general, and the
simplest constructions of classical probability (like elementary Markov chains with given
transition probabilities) run into serious difficulties. The end of this Appendix has been
modified to include an important idea of Bhat and Parthasarathy, which extends a
classical construction of Stinespring to give a clear (though restricted) meaning to the
quantum Markov property.
This Appendix is not necessary to read the main body of these notes. It can also
be read independently, except that its last section uses a little of the language of C*-
algebras and von Neumann algebras (Appendix 4).

Conditioning : discrete case

1 The discussion in this and the following subsection is inspired from Davies' book
[Dav], Quantum Theory of Open Systems, p. 15-17.
Consider first a classical probability space (Ω, 𝓕, P) and a random variable X taking
values in some nice measurable space E. Then the space Ω is decomposed into the
"slices" Ω_x = X^{-1}(x), x ∈ E, and the measure P can be disintegrated according to
the observed value of X. Namely, for every x ∈ E, there exists a law P_x on Ω, carried
by Ω_x, such that for A ∈ 𝓕 (μ denoting the law of X under P)

    P(A) = ∫ P_x(A) μ(dx) .

Intuitively speaking, if we observe that X = x, then the absolute law P is reduced
to the conditional law P_x. What about getting a similar disintegration in quantum
probability?
We now denote by Ω, as we did previously, the basic Hilbert space, and by X a
r.v. on Ω, taking values in E (a spectral measure over E). We begin with the case
of a countable space E: then for each i ∈ E we have an "event" (subspace) A_i and
its "indicator" (spectral projector) P_i. Let Z denote a real valued random variable
(selfadjoint operator), bounded for simplicity.
The easiest thing to describe is the decomposition of Ω in slices: these are simply
the subspaces A_i = {X = i}. For continuous random variables, this will be non-trivial,
and we deal with this problem in subsection 4.
Conditioning 219

In quantum physics, a measurement is a model for the concrete physical process of
installing a macroscopic apparatus which filters the population of particles according to
some property, here the value of X. When this is done (physically: when the power is
turned on) the state of the system changes. If it was described by the density operator
ρ, it is now described by

(1.1)    ρ̃ = Σ_i P_i ρ P_i .

Note that ρ̃ is a density operator, which commutes with all the spectral projections
P_i of X (we simply say it commutes with X). Also note that, if ρ originally did
commute with X, no change has occurred. This change of state has nothing to do
with our looking at the result of the experiment. Indeed, if we do, that is if we filter
the population which goes out of the apparatus and select those particles for which X
takes the value i, a further change takes place, which this time is the same familiar one
as in classical probability: ρ̃ is replaced by the conditional density operator

(1.2)    ρ_i = P_i ρ P_i / Tr(ρ P_i) .

Starting from these assumptions about the effect of a measurement of X on the state
of the system, we are going to compute the expectation of Z and "joint distributions".
First of all, the expectation of Z under the new law ρ̃ is equal to

    Ẽ[Z] = Tr(ρ̃ Z) = Σ_i Tr(P_i ρ P_i Z) = Σ_i Tr(ρ P_i Z P_i)

(the property that Tr(AB) = Tr(BA) has been used). This is not the same as the
original expectation of Z: otherwise stated, contrary to the classical probability case,
conditioning changes expectation values. The difference is

    E[Z] − Ẽ[Z] = Σ_{i,k} Tr(ρ P_i Z P_k) − Σ_i Tr(ρ P_i Z P_i) = Σ_{i ≠ k} Tr(ρ P_i Z P_k)
(note that the difference vanishes if either Z or ρ commutes with X). Instead of
deciding that the state has changed (Schrödinger's picture) we might have decided that
the random variable Z has been replaced by Z̃ = Σ_i P_i Z P_i, the state remaining
unchanged (Heisenberg's picture). Though both points of view are equivalent for
Hamiltonian evolutions, here the first point of view seems more satisfactory. Indeed,
a basic property of Z like being integer valued (having a spectrum contained in ℕ,
with the possible physical meaning of being the output of a counter) is not respected
by the above operation on r.v.'s, while one has no objection to a shift from discrete
spectrum to continuous spectrum in a change of density matrix.
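A two-dimensional toy example (not from the text) makes the change of expectation explicit: with a state and an observable that do not commute with X, E[Z] and Ẽ[Z] differ exactly by Σ_{i≠k} Tr(ρ P_i Z P_k):

```python
import numpy as np

# Qubit toy model: X has spectral projectors P0, P1 (standard basis),
# rho a pure state and Z an observable, neither commuting with X.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)                     # density matrix |psi><psi|
Z = np.array([[0.0, 1.0], [1.0, 0.0]])

rho_tilde = sum(Pi @ rho @ Pi for Pi in P)   # state after measuring X, (1.1)
assert np.allclose(np.trace(rho_tilde), 1.0)

# Conditioning changes the expectation here:
before, after = np.trace(rho @ Z), np.trace(rho_tilde @ Z)
assert not np.isclose(before, after)

# The difference is sum over i != k of Tr(rho P_i Z P_k), as computed above.
diff = sum(np.trace(rho @ P[i] @ Z @ P[k])
           for i in range(2) for k in range(2) if i != k)
assert np.allclose(before - after, diff)
```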
Let us consider a second r.v. Y taking values in a countable space F (points denoted
j, events B_j, spectral projections Q_j), and let us compute a "joint distribution"
for X and Y according to the preceding rules. If it has been observed that X = i, then

the probability that Y = j is Tr(~iQj) = Tr(pPiQjPI)/Tr(pPi), and the probability


that first the measure of X yields i and then the measure of Y yields j is

pij = Tr(pPiqff'i)

These coefficients define a probability law, but they depend on the order in which the
measurements are performed. Note also that the mapping (i,j) ~ PiQjPi does not
define an observable on E x F , since the selfadjoint operators PiQjPi generally are not
projectors (unless X and Y commute). This obviously calls for a generalization of the
notion of observable (see subs. 4.1).
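The order dependence of the "joint distribution" p_{ij} is already visible for a qubit. The following sketch (the bases and the state are arbitrary illustrative choices) computes the two laws obtained by measuring X first or Y first:

```python
import numpy as np

# Two non-commuting yes/no observables on a qubit: X in the standard
# basis, Y in a rotated basis.
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
v = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
Q = [np.outer(v[:, j], v[:, j]) for j in range(2)]

psi = np.array([np.cos(0.3), np.sin(0.3)])
rho = np.outer(psi, psi)

# p_ij = Tr(rho P_i Q_j P_i): X measured first, then Y (and vice versa)
p_xy = np.array([[np.trace(rho @ Pi @ Qj @ Pi) for Qj in Q] for Pi in P])
p_yx = np.array([[np.trace(rho @ Qj @ Pi @ Qj) for Pi in P] for Qj in Q])

# Both are probability laws, but they differ: the order of the two
# measurements matters.
assert np.allclose(p_xy.sum(), 1.0) and np.allclose(p_yx.sum(), 1.0)
assert not np.allclose(p_xy, p_yx.T)
```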

Conditioning : continuous case

2 We are going now to deal with the case of a continuous r.v. X, taking its values in
a measurable space (E, 𝓔) with a countably generated σ-field. Consider an increasing
family of finite σ-fields 𝓔_n whose union generates 𝓔. For notational simplicity, let us
assume that the first σ-field 𝓔_0 is trivial, and that every atom H_i of 𝓔_n is divided in two
atoms of 𝓔_{n+1}, so that the atoms of 𝓔_n are indexed by the set W(n) of dyadic words
with n letters. Let X_n be the random variable X considered as a spectral measure on
(E, 𝓔_n). Since X_n is a discrete random variable, the conditional expectation of Z given
X_n is the following function η_n on E ( 1_{H_i} here is an indicator function in the usual
sense, not a projector, and our conditional expectation is a r.v. in the usual sense)

    η_n = Σ_{i ∈ W(n)} ( Tr(ρ P_i Z P_i) / Tr(ρ P_i) ) 1_{H_i} .
The analogy with the theory of derivation leads us to compute η_n − E[η_{n+1} | 𝓔_n]
(ordinary conditional expectation of the r.v. η_{n+1}, with respect to the law P). To
help intuition, let us set for every "event" (subspace) A with associated projector P,
E[A ; Z] = Tr(ρPZP). Then the function we want to compute is equal on H_i to

    (1/P(H_i)) ( E[{X ∈ H_i} ; Z] − E[{X ∈ H_{i0}} ; Z] − E[{X ∈ H_{i1}} ; Z] )

and we have

    E[ |η_n − E[η_{n+1} | 𝓔_n]| ] = Σ_{i ∈ W(n)} |Tr(ρ P_{i0} Z P_{i1}) + Tr(ρ P_{i1} Z P_{i0})| .

Let H belong to 𝓔 and P_H denote the projection on the subspace {X ∈ H}. Then
the function

(2.1)    (H, K) ↦ Tr(ρ P_H Z P_K)

is a complex bimeasure ν, and we will assume it has bounded variation, i.e. can be
extended into a bounded complex measure on E × E, still denoted by ν (we return
to the discussion of ν at the end of this subsection). The above expectation then is
bounded by

    Σ_{i ∈ W(n)} ( |ν|(H_{i0} × H_{i1}) + |ν|(H_{i1} × H_{i0}) )

and finally, summing over n, we get that

    Σ_n E[ |η_n − E[η_{n+1} | 𝓔_n]| ] ≤ |ν|(E × E \ Δ) ,

Δ denoting the diagonal. Thus the (ordinary) random variables η_n constitute a
quasimartingale. Since we have E[|η_n|] = Σ_{i ∈ W(n)} |Tr(ρ P_i Z P_i)| ≤ Σ_i |ν|(H_i × H_i) ≤
|ν|(E × E), this quasimartingale is bounded in L^1, and finally η_n converges P-a.s..
Let us end with a remark on the bimeasure ν(H, K) = Tr(ρ P_H Z P_K). We have used
it to estimate the change in expectation due to conditioning, that is an expression of
the following form, where (H_i) is some partition of E

    E[ Σ_i P_{H_i} Z P_{H_i} − Z ] = (1/2) E[ Σ_i ( 2 P_{H_i} Z P_{H_i} − P_{H_i} Z − Z P_{H_i} ) ]
                                   = −(1/2) E[ Σ_i [P_{H_i}, [P_{H_i}, Z]] ] .

Hence instead of studying the complex bimeasure ν, we may as well study the real
bimeasure θ(H, K) = Tr(ρ [P_H, [Z, P_K]]).
3 We illustrate the preceding computations by the basic example of the canonical
pair (studied in detail in Chapter 3). Explicitly, Ω is the space L^2(ℝ) (Lebesgue
measure); E is the line, the initial partition is given by the dyadic integers, and then
we proceed by cutting each interval in two equal halves; X is the identity mapping
from Ω to ℝ, i.e. for a Borel set A of the line P_A is the operator of multiplication
by 1_A. For ρ we choose the pure state ε_ω, corresponding to the "wave function" ω.
Finally, the operator we choose for Z will not be a bounded selfadjoint operator, but
rather the unitary operator Zf(x) = f(x − u): since Z is a complex linear combination
of two (commuting) selfadjoint operators, the preceding theory can be applied without
problem. It is clear that Z is the worst possible kind of operator from the point of view
of the "slicing": instead of operating along the slices it interchanges them. We shall
see later that Z = e^{−iuY}, where Y = −iD is the momentum operator of the canonical
pair.
The bimeasure we have to consider in this case is

    ν(H, K) = Tr(ρ P_H Z P_K) = <ω, P_H Z P_K ω>
            = ∫ ω̄(s) 1_H(s) 1_K(s − u) ω(s − u) ds .

Let λ(ds) be the complex measure on the line with density ω̄(s)ω(s − u); since ω is
normalized, the total mass of λ is at most 1. Let κ be the image of λ under the mapping
s ↦ (s, s − u); then ν(H, K) = κ(H × K), and therefore the basic assumption for
the convergence of the conditional expectations is satisfied; we may forget the notation
κ and use ν for both objects, bimeasure and measure.
In the case we are considering, the limit of E[Z | 𝓔_n] is a.s. equal to 0 when u ≠ 0.
Indeed, Tr(ρ P_i Z P_i) = 0 when the partition is fine enough, simply because then the
square H_i × H_i does not meet the line y = x − u which carries ν. Otherwise stated, we
have

    E[e^{−iuY} | X] = 0  if u ≠ 0,    = 1  if u = 0.



In weak convergence problems, this degenerate characteristic function is typical of cases
where all the mass escapes to infinity.

Multiplicity theory
4 We are going now to extend the idea of "slicing" or "combing" the basic Hilbert
space Ω to the case of a continuous r.v. X. This is also called "multiplicity theory",
and applies just as well to real Hilbert spaces (for an application of the real case, see
Sém. Prob. IX, LN 465, p. 73-88, which also contains a proof of the theorem itself). A
complete proof is also given in [Par1].
Note first that, a nice measurable space being isomorphic to a Borel subset of ℝ,
we lose no generality by assuming that X is real valued. Hence X is associated with an
orthogonal resolution of identity, with spectral projectors E_t. Let us call martingale
any curve x = x(·) such that x(s) = E_s x(t) for s < t, and denote by η the "bracket"
of the martingale, i.e. the measure on ℝ such that η(]s, t]) = ||x(t) − x(s)||^2. Let us
call stable subspace of Ω any closed subspace which is stable under all projectors E_t.
An example of such a subspace is given by S(x), the set of all "stochastic integrals"
∫ f(s) dx(s) with f ∈ L^2(η); its orthogonal space S(x)^⊥ is a stable subspace too.
The mapping f ↦ ∫ f(s) dx(s) is an isomorphism from the Hilbert space M(η) =
L^2(ℝ, η), which we call the model space, onto S(x); it is slightly more than that: the
model space carries a natural spectral family, corresponding to functions supported
by the half-line ]−∞, t] (the selfadjoint operator it generates is multiplication by
the function x), and this spectral family is carried by the isomorphism into the resolution of identity
induced on S(x) by our original spectral family. Thus Ω contains a stable
subspace which is a copy of the model.
We now replace Ω by Ω_1 = S(x)^⊥ and (if it is not reduced to 0) extract from it a
second copy of the model, possibly with a different measure η_1. Iterating transfinitely
this procedure, it is very intuitive that Ω can be decomposed into a direct sum of copies
of model spaces. This intuition can be made rigorous using Zorn's lemma. Since Ω is
always assumed to be separable, this direct sum decomposition is necessarily countable.
Rearranging the indexes into a single sequence, denote by S(x_n) the spaces and by
η_n the corresponding measures, choose a measure θ such that every η_n is absolutely
continuous w.r. to θ (a probability measure if you wish), and denote by h_n a density
of η_n w.r. to θ.
Every ω ∈ Ω can be represented uniquely in the form

ω = Σ_n ∫ f_n(s) dx_n(s)   with   Σ_n ∫ |f_n(s)|^2 η_n(ds) = ||ω||^2 < ∞ .

Associate with every s ∈ IR the Hilbert space H_s consisting of the sequences (x_n) of
complex numbers such that Σ_n |x_n|^2 h_n(s) < ∞ (for simplicity we assume h_n(s) > 0
for all n ; if this condition is not fulfilled, consider only those n such that h_n(s) > 0).
Then for θ-a.e. s the sequence (f_n(s)) belongs to H_s, and the above isomorphism
realizes the "slicing" of Ω we were looking for. Such a "slicing" has an official name :
in Hilbert space language, one says that Ω is isomorphic to the continuous sum of the
measurable family of Hilbert spaces H_s over the measure space (IR, θ). We do not need
to make here an axiomatic theory of continuous sums of Hilbert spaces, since the above
Conditioning 223

description was entirely explicit. For a general discussion, see Dixmier [Dix], part II,
Chapter 1.
Up to now, we have not risen much above the level of triviality. Things become
more interesting when we try to get some kind of uniqueness. Here we shall explain the
results, without even sketching a proof.
First of all, the "models" we use are not uniquely determined. If η and η' are two
equivalent measures on IR, and j is a density of η' w.r. to η, the mapping f ↦ f√j is
an isomorphism of M(η') onto M(η) which preserves the given resolutions of identity.
Thus it is the equivalence class of η (also called the spectral type of the model) which
matters. Next, if η is decomposed into a sum of two mutually singular measures λ
and μ, the model space M(η) gets decomposed into a direct sum M(λ) ⊕ M(μ),
compatible with the given resolutions of identity. Hence if we want to add as many
"models" as we can in a single operation, our interest is to choose a spectral type as
strong as we can. Then the above construction can be refined : one starts with a measure
η which has maximal spectral type. At the following step, when one restricts oneself to the
orthogonal Ω_1 of the first model, one again chooses the maximal spectral type allowed
in Ω_1, and so on. Then one can show that transfinite induction is unnecessary, and
(more important) the spectral types of η_1, η_2, ..., which become weaker and weaker, are
uniquely determined. This is the well known spectral multiplicity theorem (Hellinger-
Hahn theorem). For a detailed proof, see [ReS], Chap. VII. On the other hand, the
"slicing" itself (the decomposition of Ω into a continuous sum of Hilbert spaces) can be
shown to be unique once the measure θ is chosen.
Finally, let us show briefly the relation of the above slicing with the definition of a
complete observable, as given above in this chapter (§2, subs. 3) : X is complete if and
only if one single copy of the "model" is sufficient to exhaust Ω, i.e. if the slices are
one-dimensional. Given that operators which operate slice by slice commute with X,
it is easy to prove that the observable X is complete if and only if all operators which
commute with X are of the form f(X).
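In finite dimensions this last equivalence is easy to verify numerically. The following sketch (Python/numpy, with arbitrary illustrative data ; "complete" here means that the selfadjoint matrix X has simple spectrum) checks both directions : every operator of the form f(X) commutes with X, and the commutant of X has dimension n only, one dimension per eigenvalue, so it consists exactly of the functions of X.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4
# A "complete observable" : a selfadjoint matrix X with simple spectrum.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
X = U @ np.diag([0.0, 1.0, 2.5, 4.0]) @ U.conj().T

# Any operator of the form f(X) commutes with X :
Y = U @ np.diag(rng.normal(size=n)) @ U.conj().T   # f(X) for arbitrary values of f
assert np.allclose(X @ Y, Y @ X)

# Conversely, the commutant of X is only n-dimensional (not n^2-dimensional),
# i.e. it is exhausted by the operators f(X).  We compute the kernel dimension
# of the map Y -> XY - YX acting on vectorized (row-major) matrices :
ad = np.kron(X, np.eye(n)) - np.kron(np.eye(n), X.T)
dim_commutant = n * n - np.linalg.matrix_rank(ad, tol=1e-8)
assert dim_commutant == n
```

With a degenerate eigenvalue the kernel dimension would exceed n, and the commutant would contain operators which are not functions of X.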

Transition kernels and completely positive mappings 1

5 Let us first recall the definition of a (transition) kernel in classical probability.


Consider two measurable spaces (E_0, E_0) and (E_1, E_1). Then a kernel P from E_0
to E_1 is a mapping which associates with every point x of E_0 a probability measure
P(x, dy) on E_1, depending measurably on x. Thus for every bounded measurable
function f on E_1 (note the reverse order), the function Pf : x ↦ ∫ P(x, dy) f(y) is
measurable on E_0, and P may be described as a mapping from (measurable) functions
on E_1 to (measurable) functions on E_0, which is positive, transforms 1 into 1, and
satisfies a countable additivity property.
On the other hand, let P_0 be the convex set of probability measures on E_0, and
similarly P_1. The kernel P defines an affine mapping from P_0 to P_1

λ ↦ λP = ∫ λ(dx) P(x, .)

1 This subsection has been entirely rewritten.



However, one cannot characterize simply, by a property like countable additivity, those
affine mappings which arise in this way from transition kernels.
Every measurable mapping h from E_0 to E_1 defines a kernel H, such that
H(x, dy) = ε_{h(x)}(dy). Such a kernel is deterministic, in the following sense : if one
imagines a kernel P(x, dy) as the probability distribution of bullets fired on E_1 from a
gun located at x ∈ E_0, then the gun corresponding to the kernel H has no spreading : all
bullets from x fall at the same point h(x). This is reflected in the algebraic property
that H(fg) = H(f)H(g) for any two (bounded measurable) functions f, g on E_1.
Let P be a kernel from E_0 to E_1, and let μ be a probability law (a state) on E_0.
Then we may build a probability space (Ω, F, IP), a pair X_0, X_1 of random variables
on Ω, taking values in E_0, E_1, such that the law of X_0 is μ and the conditional law
of X_1 given that X_0 = x is P(x, dy). The construction is well known : we take Ω to
be E_0 × E_1 with the product σ-field, we define X_0 and X_1 to be the two coordinate
mappings on E_0 × E_1, and we define the law IP as the only measure on Ω such that --
as usual, the tensor product a_0 ⊗ a_1 of two bounded measurable functions a_0(x), a_1(y)
on E_0, E_1 respectively denotes the function a_0(x) a_1(y) on E_0 × E_1 --

(5.1) IE[a_0 ⊗ a_1] = ∫_{E_0} μ(dx) a_0(x) Pa_1(x) .
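On finite state spaces the construction of IP and formula (5.1) can be checked directly ; a minimal numerical sketch, with the law μ and the kernel P chosen at random :

```python
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 3, 4                                # finite state spaces E_0, E_1
mu = rng.random(n0); mu /= mu.sum()          # initial law mu on E_0
P = rng.random((n0, n1))
P /= P.sum(axis=1, keepdims=True)            # kernel: P[x] is a probability on E_1

# Joint law IP on E_0 x E_1 : IP(x, y) = mu(x) P(x, y)
IP = mu[:, None] * P
assert np.allclose(IP.sum(axis=1), mu)       # marginal of X_0 is mu

a0 = rng.random(n0)                          # bounded function on E_0
a1 = rng.random(n1)                          # bounded function on E_1

lhs = np.einsum('xy,x,y->', IP, a0, a1)      # IE[a_0 (x) a_1] under IP
rhs = np.dot(mu, a0 * (P @ a1))              # integral mu(dx) a_0(x) Pa_1(x)
assert np.isclose(lhs, rhs)                  # this is (5.1)
```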

We will now translate this construction into non-commutative language. What we get is
a well known construction due to Stinespring, which will lead us to the proper definition
of non-commutative kernels.
6 It is a fundamental principle of non-commutative geometry that a "non-commutative
space" E is an abstract algebra A, whose elements intuitively represent "functions" on
E ; additional structure on A will tell whether they must be understood as measurable,
continuous, differentiable... functions.
Thus the two spaces E_0, E_1 become now two algebras A_0, A_1, our "kernel from E_0
to E_1" will be a linear mapping P from A_1 to A_0, and our measure μ will be a state
on A_0 1. As a minor technical assumption, we will assume A_0, A_1 are C*-algebras. The
intuitive idea is that in this case we are dealing with the non-commutative analogue
of two compact spaces and their algebras of continuous functions, and of Feller kernels
which preserve continuity. Then we need not worry about the countable additivity of
measures and kernels. If the reader does not know yet what a C*-algebra is, let him
simply omit the corresponding details.
It is clear that we must assume P maps I_1 to I_0, and is a positive mapping. However,
and this will be our main point, positivity is not sufficient.
We are going to construct a pair of "random variables X_0, X_1 with values in E_0, E_1"
such that the law of X_0 is μ, and P somehow describes the conditioning of X_1 by X_0.
The idea is to extract, from the commutative case, the way the Hilbert space L^2(IP) is
constructed from L^2(μ) and the transition kernel P. To achieve this, we first construct
the analogue of L^2(μ). That is, we give ourselves a Hilbert space K_0, a representation
X_0 from A_0 on K_0, and a unit vector ψ implementing the state μ

μ(a_0) = < ψ, (a_0 o X_0) ψ >   (a_0 ∈ A_0) .

1 Since E_0 and E_1 do not exist as spaces, one generally says P is a kernel from A_1 to A_0.

We recall that I_0 o X_0 is assumed to be the identity operator on K_0. We may for
instance construct K_0 using the GNS construction (Appendix 4, §2), but it is better
to keep some freedom here : changing the initial vector will then give different states μ
-- though a simultaneous construction for all possible initial states (as in the classical
case) could not be achieved with a separable K_0.
Our purpose is to construct a Hilbert space K_1 containing K_0, a representation X_1
from A_1 on K_1 (I_1 o X_1 being the identity of K_1), such that the following extension of
(5.1) holds (the notation a_0 o X_0 means X_0(a_0), in order to stress the intuitive meaning
of a_0 as a function on E_0 ; similarly for a_1 o X_1).

(6.1) < ψ, a_1 o X_1 a_0 o X_0 ψ > = < ψ, ((Pa_1) a_0) o X_0 ψ > .

Though a_0 o X_0 is an operator on K_0, not on K_1, still it can be applied to ψ ∈ K_0 1.
On the other hand a_0 o X_0 a_1 o X_1 ψ would be meaningless. Thus we are constructing
something coarser than a "joint law" for X_0 and X_1 -- intuitively speaking, P predicts
the value of X_1 while X_0 is under observation.
A necessary condition for the existence of such a Hilbert space K_1 is that, given any
finite families of elements a_i^0 of A_0 and a_i^1 of A_1, and putting A = Σ_i a_i^1 o X_1 a_i^0 o X_0,
< ψ, A*A ψ > should be positive, that is,

μ( Σ_ij a_j^0* P(a_j^1* a_i^1) a_i^0 ) ≥ 0 .

It should be a property of the mapping P that the preceding assumption is satisfied
for an arbitrary state μ on A_0, and thus we are led naturally to the basic definition of
complete positivity

(6.2) Σ_ij a_j^0* P(a_j^1* a_i^1) a_i^0 ≥ 0 for arbitrary a_i^1 ∈ A_1, a_i^0 ∈ A_0 .
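For matrix algebras, condition (6.2) can be tested through the Choi matrix : by Choi's criterion, a linear map between matrix algebras is completely positive if and only if its Choi matrix Σ_ij E_ij ⊗ P(E_ij) is positive semidefinite. The sketch below (numpy) exhibits the standard example : the transpose map, which is positive but not completely positive, fails the test, while a conjugation a ↦ V* a V passes it.

```python
import numpy as np

n = 2
def choi(Phi):
    """Choi matrix C = sum_ij E_ij (x) Phi(E_ij); Phi is CP iff C >= 0."""
    C = np.zeros((n * n, n * n), dtype=complex)
    for i in range(n):
        for j in range(n):
            E = np.zeros((n, n), dtype=complex); E[i, j] = 1.0
            C[i*n:(i+1)*n, j*n:(j+1)*n] = Phi(E)
    return C

transpose = lambda a: a.T                     # positive, but NOT completely positive
V = np.array([[1, 2], [0, 1j]])               # arbitrary illustrative matrix
conj_by_V = lambda a: V.conj().T @ a @ V      # a |-> V* a V : completely positive

eig_t = np.linalg.eigvalsh(choi(transpose))
eig_c = np.linalg.eigvalsh(choi(conj_by_V))
assert eig_t.min() < -1e-12                   # the transpose map violates (6.2)
assert eig_c.min() > -1e-12                   # the conjugation satisfies it
```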

A trivial case where this condition is satisfied is that of an algebra homomorphism P
from A_1 to A_0 (corresponding to the "no spreading" case in the commutative situation).
On the other hand, if this condition is satisfied, we can construct the Hilbert space
K_1 and the representation X_1, using the following procedure, which generalizes the
GNS method :
1) Provide the algebraic tensor product K_0 ⊗ A_1 with the only hermitian form such
that, for k, h ∈ K_0, b_1, a_1 ∈ A_1

(6.3) < k ⊗ b_1, h ⊗ a_1 > = < k, P(b_1* a_1) h > .

Let us deduce from (6.2) that we get in this way a positive (possibly degenerate) scalar
product. Otherwise stated, we have for finite families k_i, h_i ∈ K_0, b_i, a_i ∈ A_1

Σ_ij < k_j, P(b_j* a_i) h_i > ≥ 0 .

This follows from the complete positivity assumption on P if h_i = (a_i^0 o X_0) ψ, k_i =
(b_i^0 o X_0) ψ for some vector ψ ∈ K_0 and elements a_i^0, b_i^0 of A_0. Then we get the

1 In fact, nothing depends on the particular choice of ψ ∈ K_0.



result by density for arbitrary h_i, k_i ∈ K_0 if the representation K_0 is cyclic with
generating vector ψ (GNS case). Finally, we decompose K_0 into a direct sum of cyclic
representations to reach the general case.
2) Knowing this scalar product is positive, we construct the Hilbert space K_1 by
killing the null-space and completing. If we map h ∈ K_0 to h ⊗ I_1 ∈ K_0 ⊗ A_1, we get
an isometry : its image does not intersect the null space, and we may identify h with
h ⊗ I_1, thus imbedding K_0 into K_1. In particular, we identify ψ with ψ ⊗ I_1.
We define a representation X_1 of A_1 in K_1 as follows. Given a ∈ A_1, we put

(6.4) (a o X_1)(h ⊗ b) = h ⊗ (ab) .

It is clear that this operation behaves well with respect to sums and products, a little
less obvious that (a o X_1)* = a* o X_1, and that the action of a is bounded with norm at
most ||a|| -- to prove this last point, assume ||a|| ≤ 1 and remark that a*a ≤ I may
be written as a*a = I - c*c. Then

|| Σ_i h_i ⊗ a b_i ||^2 = Σ_ij < h_j ⊗ a b_j, h_i ⊗ a b_i > = Σ_ij < h_j, P(b_j* a* a b_i) h_i >
= || Σ_i h_i ⊗ b_i ||^2 - Σ_ij < h_j, P(b_j* c* c b_i) h_i >

and the last term to the right is positive. Then the operators are extended to K_1 by
continuity and our representation of A_1 is constructed.
In contrast with this, one generally cannot define a representation of a ∈ A_0 in K_1,
extending its action a o X_0 on K_0, by the formula

(6.5) (a o X_0)(h ⊗ b) = ((a o X_0) h) ⊗ b .

This operation behaves well with respect to sums and products, but not with respect to
adjoints, and generally it is not continuous. We may still consider X_0 as a representation
of A_0 in K_1, but not quite in the usual sense : I_0 o X_0 is not the identity operator, but the
orthogonal projection on K_0. If we really want to have a representation respecting the
unit, then we assume A_0 has a multiplicative state δ and put, for x = y + z ∈ K_0 ⊕ K_0^⊥

(a o X_0) x = (a o X_0) y + δ(a) z .

Uniqueness. The GNS construction is unique up to isomorphism, under an assumption
of cyclicity. Here the assumption that ensures uniqueness is the following : the (closed)
invariant subspace for the representation X_1 generated by K_0 is the whole of K_1.
Otherwise stated, the vectors (a_1 o X_1) h (a_1 ∈ A_1, h ∈ K_0) form a total set in K_1.
This property is satisfied here from the construction, and it is easily seen that any two
situations for which it is satisfied are unitarily equivalent. This assumption is called
minimality.
In particular, if K_0 is the GNS space relative to a state μ, with generating vector
ψ, the vectors (a_1 o X_1)(a_0 o X_0) ψ are total in K_1, and the matrix elements between
such vectors are known.

COMMENT. This construction describes the structure of completely positive mappings
P : A_1 → A_0. Assume A_0 acts faithfully 1 on K_0, so that we may consider A_0
as a sub-algebra of L(K_0), and P as a mapping from A_1 to L(K_0), whose image
lies in A_0. Then P appears as a composition of two elementary models of completely
positive mappings :
1) a homomorphism from A_1 to L(K_1),
2) a mapping A ↦ V*AV from L(K_1) to L(K_0), V denoting a mapping from K_0
to K_1 -- here, V is an isometry.
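This factorization can be illustrated numerically in finite dimension. In the sketch below the completely positive map is specified in Kraus form P(a) = Σ_k L_k* a L_k (a form the text does not use, but which every completely positive map between matrix algebras admits), normalized so that P(I) = I ; the dilation space is K_1 = C^r ⊗ C^n, the homomorphism is a ↦ I ⊗ a, and V stacks the Kraus operators.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 2, 3, 2                        # dim K_0 = m, A_1 = M_n, r Kraus operators

Ls = [rng.normal(size=(n, m)) + 1j * rng.normal(size=(n, m)) for _ in range(r)]
S = sum(L.conj().T @ L for L in Ls)      # = P(I) before normalization
w, U = np.linalg.eigh(S)
Sih = U @ np.diag(w ** -0.5) @ U.conj().T
Ls = [L @ Sih for L in Ls]               # renormalize so that P(I) = I

def P(a):                                # the completely positive map, Kraus form
    return sum(L.conj().T @ a @ L for L in Ls)

V = np.vstack(Ls)                        # V : K_0 -> K_1 = C^r (x) C^n
pi = lambda a: np.kron(np.eye(r), a)     # homomorphism of A_1 acting on K_1

assert np.allclose(V.conj().T @ V, np.eye(m))      # V is an isometry (P unital)
a = rng.normal(size=(n, n)); a = a + a.T           # arbitrary selfadjoint element
assert np.allclose(P(a), V.conj().T @ pi(a) @ V)   # P(a) = V* pi(a) V
```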
7 REMARKS. 1) Though (6.5) does not work in general, it does work when a belongs
to the center Z_0 of A_0, and this representation of Z_0 commutes with the representation
of A_1. More generally (anticipating on definitions concerning von Neumann algebras),
we may define a representation of the commutant A_0' of (the image of) A_0 acting on
K_0, defined for a ∈ A_0' by a (h ⊗ a_1) = ah ⊗ a_1 as in (6.5), and this extension clearly
commutes with all operators b_1 o X_1. Since a (h ⊗ I) = (ah) ⊗ I it preserves K_0, hence
also K_0^⊥. Then it commutes with all the operators a_0 o X_0 as extended in Remark 3 below
to the whole of K_1, and therefore with the whole algebra generated by the "random
variables" X_0 and X_1.
2) Let us return to the commutative case of two random variables X_i taking values
in compact state spaces E_i (thus A_i = C(E_i)). Then K_0 is L^2(E_0, μ) on which A_0
operates by multiplication, and K_1 is L^2(E_0 × E_1, IP), where IP is the joint law of
(X_0, X_1). This situation satisfies the minimality assumption, in the sense that vectors
of the form f_1(x_1) f_0(x_0) 1 are total in K_1. Thus the quantum situation is a true
extension of the commutative one. On the other hand, in the classical situation a product
a_0 o X_0 a_1 o X_1 is well defined, while it is not defined in the quantum situation.
3) If we insist on dealing with operators defined on the whole of K_1, the simplest
way to extend a_0 o X_0, and more generally operators on K_0, to operators on K_1, consists
in deciding that their extension kills K_0^⊥. Thus L(K_0) gets identified with the "σ-algebra"
F_0 of all operators E_0 A E_0 on K_1 -- not a true von Neumann algebra on K_1 since its
unit is E_0, not I. Extending H ∈ L(K_0) in this way, let us compute a matrix element
like < k, b_1 o X_1 H a_1 o X_1 h >, with h, k ∈ K_0. Since H a_1 o X_1 h belongs to K_0 we
may replace b_1 o X_1 by Pb_1 o X_0. Then we move H Pb_1 o X_0 to the left side, taking
adjoints. Again we may replace a_1 o X_1 h by its projection Pa_1 o X_0 h on K_0, and then
move all operators to the right. In probabilistic notation, we have a "Markov property"

IE[b_1 o X_1 H a_1 o X_1 | F_0] = Pb_1 o X_0 H Pa_1 o X_0 .

This computation is easily extended by induction to a product of an arbitrary number
of a_i o X_1 and elements of F_0. This simple extension remains non commutative even
in the classical set-up. More generally, if a belongs to the center Z_0, the extension of
a o X_0 killing K_0^⊥ does not coincide with the action of the center as defined in Remark
1.
The computation above concerns matrix elements only : if we are to compute the
operator a_0 o X_0 a_1 o X_1 (say), it is not sufficient to tell how it acts on h ∈ K_0 ; we must

1 This is not a serious restriction : any C*-algebra has a faithful representation, except that the
Hilbert space involved may not be separable.

compute it on a vector c_1 o X_1 h, and what we get is (a_0 P(a_1 c_1)) o X_0 h. General rules
can be found in the original article of Bhat and Parthasarathy.
4) We again anticipate on the theory of von Neumann algebras. Assume A_0 and A_1
are weakly closed (= von Neumann) algebras, X_0 is a normal homomorphism, and P
is normal -- in both cases this means they behave well with respect to strong limits
of bounded increasing families f_α ↑ f of positive operators. Then X_1 is also a
normal homomorphism. Indeed, let f_α ↑ f be a bounded increasing family of positive
elements of A_1, and let F = sup_α f_α o X_1. To check that F = f o X_1, it suffices to check
equality of diagonal matrix elements on a total set, here by the minimality assumption,
for a_1 ∈ A_1, h_0 ∈ K_0

sup_α < a_1 o X_1 h_0, f_α a_1 o X_1 h_0 > = < a_1 o X_1 h_0, f a_1 o X_1 h_0 > .

We apply the Markov property and are reduced to

sup_α P(a_1* f_α a_1) o X_0 = P(a_1* f a_1) o X_0 ,

that is, to the normality of X_0 and P.


5) A problem of practical importance is : how to make sure that K_1 is separable ? In
classical situations, we would assume that K_0 is separable and A_1 is norm-separable,
from which the separability of K_1 would follow at once. But (as the example of the Weyl
algebra on Boson Fock space shows) norm separability is not a welcome assumption in
quantum probability. Instead, let us assume the following properties, taking A_0 = A_1 = A
for simplicity.
i) K_0 is separable, A is an operator algebra on K_0 and X_0 is the identity
homomorphism ; hence L(K_0) is separable in all the usual weak operator topologies,
and so is A.
ii) P : A → A is strong/strong continuous (or even strong/weak continuous) on the
unit ball (this is meant to be easy to check in practical cases).
Then K_1 is separable.
To prove this, we will prove first that c ↦ c o X_1 is continuous from the unit ball of
A with the strong topology, to L(K_1) given the weak topology. It suffices to check weak
convergence on a total set, and therefore to prove that < b_1 o X_1 k_0, c o X_1 a_1 o X_1 h_0 > is
continuous in c. We rewrite this as < k_0, P(b_1* c a_1) o X_0 h_0 >. Then if c_α → c strongly,
b_1* c_α a_1 → b_1* c a_1 strongly, thus applying P we get weak convergence, which is preserved
under X_0.
This being proved, consider the total set of vectors a_1 o X_1 h_0 where a_1 may be
chosen in the unit ball of A. Choose a countable strongly dense set D in the unit ball
of A, a countable norm dense set K in K_0, and take a_α ∈ D tending to a_1 strongly,
h_n ∈ K tending to h_0 in norm. Then a_α o X_1 tends weakly to a_1 o X_1 from the above,
with norm bounded by 1, and therefore (a_α o X_1) h_n tends weakly to (a_1 o X_1) h_0. That's
it.
If A is given as a von Neumann algebra, it will be convenient instead of ii) to assume
that P is normal.

Quantum Markov processes

8 As in the commutative case, once the case of two algebras has been fully worked out,
the case of a finite number of algebras A_0, ..., A_n and completely positive "kernels"
P_{i,i+1} : A_{i+1} → A_i follows by an easy induction 1, leading to the construction of
an increasing family of Hilbert spaces K_0 ⊂ K_1 ⊂ ... ⊂ K_n and of homomorphisms
X_i : A_i → L(K_i) such that, for h_i, k_i ∈ K_i and a_{i+1}, b_{i+1} ∈ A_{i+1} (and writing
a_{i+1} o X_{i+1} for X_{i+1}(a_{i+1}))

(8.1) < (b_{i+1} o X_{i+1}) k_i, (a_{i+1} o X_{i+1}) h_i > = < k_i, (P_{i,i+1}(b_{i+1}* a_{i+1})) o X_i h_i > ,

and in particular, given elements h_0, k_0 ∈ K_0, we may compute by an easy induction
matrix elements of the form

(8.2) < b_n o X_n ... b_1 o X_1 k_0, a_n o X_n ... a_1 o X_1 h_0 >

by a formula which closely resembles the computation of probabilities relative to a
classical Markov process. Taking h_0 = k_0 = ψ we are computing expectations relative
to the initial state μ. On the other hand, our "operators" are not everywhere defined,
and thus cannot be composed in a different order.
The corresponding minimality assumption, which ensures uniqueness, is the density
of vectors of the form a_n o X_n ... a_1 o X_1 h_0 with h_0 ∈ K_0, a_i ∈ A_i.
As in classical probability, the stationary case is the most important : A_i = A,
P_i = P do not depend on i. There is no difficulty in extending the above construction
to that of an infinite "quantum Markov chain (X_n) with transition probability P". A
very important additional element of structure is the central process : as we could define
in (6.5) an action of the center of A_0 on K_1, extending the given action of the center
on K_0, so in the discrete case we may define homomorphisms Y_n from the center Z
of A to L(K), such that for a ∈ Z, a o Y_n extends a o X_n. All these homomorphisms
commute, and therefore constitute together a classical process -- though not necessarily
a classical Markov process, unless the transition kernel also maps Z into itself.
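In the commutative stationary case the matrix elements (8.2) reduce to the classical nested-kernel formula IE[a_1(X_1) ... a_n(X_n)] = μ P(a_1 . P(a_2 . P(...))) ; a quick numerical check on a finite state space :

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
mu = rng.random(n); mu /= mu.sum()                          # law of X_0
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)   # transition kernel
a1, a2, a3 = (rng.random(n) for _ in range(3))              # functions of X_1..X_3

# Joint expectation IE[a1(X_1) a2(X_2) a3(X_3)] by brute force over all paths
IE = sum(mu[x] * P[x, y] * a1[y] * P[y, z] * a2[z] * P[z, w] * a3[w]
         for x in range(n) for y in range(n)
         for z in range(n) for w in range(n))

# The same expectation computed "from the inside out", nesting the kernel :
nested = mu @ (P @ (a1 * (P @ (a2 * (P @ a3)))))
assert np.isclose(IE, nested)
```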
A similar construction applies in continuous time. Let A be a C*-algebra. A
quantum dynamical semigroup or simply a Markov semigroup on A is a family (P_t)_{t≥0}
of completely positive mappings, such that P_t 1 = 1, possessing the semigroup property
P_{s+t} = P_s P_t, and continuous in some reasonable topology. One generally assumes it
is strongly continuous in the sense that for any fixed element a of A, ||P_t a - a|| → 0
as t → 0, which implies (by the general theory of semigroups on Banach spaces) that
t ↦ P_t a is norm-continuous. Since there are non-pathological algebras A which are
not norm-separable (this occurs most often with von Neumann algebras), continuity in
weak topologies must be considered too. Let us just mention that in the case of von
Neumann algebras, the mappings P_t are assumed to be normal, and the natural kind
of continuity involves the "ultraweak" topology.
Let us now describe the "quantum Markov process" associated with a quantum
dynamical semigroup (P_t) and an initial state μ on A. As usual, we assume that A

1 Here one sees the interest of starting from an arbitrary Hilbert space K_0, since in the induction
the space K_n already constructed takes the place of K_0.

acts on an initial Hilbert space K_0 and μ corresponds to a unit vector ψ in this space.
Then it is possible to construct :
a Hilbert space K containing K_0, and an increasing family (K_t) of intermediate
Hilbert spaces,
a family (X_t) of homomorphisms from A to L(K_t) (not to L(K)) such that for
s < t, for h, k ∈ K_s, for a, b ∈ A, we have a "Markov property"

< b o X_t k, a o X_t h > = < k, P_{t-s}(b* a) o X_s h > .

Besides that, uniqueness is ensured by a minimality property : vectors of the form

a_n o X_{t_n} ... a_1 o X_{t_1} h_0 ,  (t_1 < ... < t_n, a_1, ..., a_n ∈ A, h_0 ∈ K_0)

are dense in K.
We refer to the original article for this construction, though the essential ideas
have been given, and it only remains to take the limit of an increasing system of
Hilbert spaces. The central process too can be defined in continuous time. These results
correspond in the classical case to the construction of a coarse Markov process, while
the non-trivial part of the classical theory (starting with the strong Markov property)
requires choosing a better version, usually right continuous. What corresponds to such
a choice in quantum probability is not yet known 1.
Quantum dynamical semigroups are interesting objects by themselves, and much
work has been devoted to understanding their structure. It is natural to look for an
"infinitesimal Stinespring theorem" giving the structure of their generators, but the
search for such a theorem has been successful only in the case of a bounded generator,
or equivalently, of the generator of a norm-continuous quantum dynamical semigroup.
We will mention a partial result without proof. For a complete account see Parthasa-
rathy's book (p. 267-268, the Gorini-Kossakowski-Sudarshan theorem) and the original
paper by Gorini et al [GKS]. For comments and applications, see also the Lecture Notes
[AlL] by Alicki and Lendi.
The (bounded) generator G of a uniformly continuous quantum dynamical semi-
group on the C*-algebra L(ℋ) of a Hilbert space ℋ has the following form :

G(X) = i [H, X] - (1/2) Σ_i ( L_i* L_i X + X L_i* L_i - 2 L_i* X L_i )

where H is a bounded self-adjoint operator on ℋ, and (L_i) is a sequence of bounded
operators such that Σ_i L_i* L_i is strongly convergent to a bounded operator.
This is often called, for historical reasons, the Lindblad form ([Lin]) of the generator.
It is similar to the Hörmander form G = X_0 + Σ_i X_i^2 of the generator of a classical
diffusion on a manifold, where the X_i are vector fields on the manifold, and as with the
Hörmander form, it lends itself to a construction of the semigroup and the process by
means of stochastic differential equations.
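The Lindblad form is easy to explore numerically in low dimension. The sketch below builds the generator G as a matrix acting on vectorized 2×2 observables, from a randomly chosen self-adjoint H and two operators L_i, and checks that G(I) = 0 (so the semigroup satisfies P_t 1 = 1) and that e^{tG} is completely positive, via positivity of its Choi matrix.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 2
I = np.eye(n)
H = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)); H = H + H.conj().T
Ls = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(2)]

# Matrix of G on row-major vec(X), using vec(AXB) = (A (x) B^T) vec(X)
M = 1j * (np.kron(H, I) - np.kron(I, H.T))
for L in Ls:
    LL = L.conj().T @ L
    M += np.kron(L.conj().T, L.T) - 0.5 * (np.kron(LL, I) + np.kron(I, LL.T))

assert np.allclose(M @ I.flatten(), 0)        # G(I) = 0 : the semigroup is unital

Pt = expm(0.7 * M)                            # P_t for an arbitrary t = 0.7
# Choi matrix of P_t ; positivity <=> complete positivity (Choi's criterion)
C = np.zeros((n * n, n * n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n), dtype=complex); E[i, j] = 1
        C[i*n:(i+1)*n, j*n:(j+1)*n] = (Pt @ E.flatten()).reshape(n, n)
assert np.linalg.eigvalsh(C).min() > -1e-10   # e^{tG} is completely positive
```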

1 A quantum probabilistic interpretation of the strong Markov property has recently (1995) been
given by Attal and Parthasarathy.
Appendix 3

Two Events
This Appendix illustrates a strange feature of quantum probability : the "σ-field"
generated by two non commuting events contains uncountably many projectors. The
results are borrowed from M.A. Rieffel and A. van Daele, [RvD1] (1977), and through
them from Halmos [Hal] (1969). As one may guess from the title of [RvD1], the results
are useful for the proof of the main theorem of Tomita-Takesaki, which will be given in
Appendix 4.
The operators denoted here with lower case or greek letters p, q, j, d, σ, γ... are denoted
P, Q, J, D, S, C in the section on Tomita's theory.

1 Let ℋ be a Hilbert space, complex or real (the applications to Tomita-Takesaki
essentially use the real case) and let A and B be two "events" (= closed subspaces)
and I_A, I_B be the corresponding projectors. These projectors leave the four subspaces
A ∩ B, A ∩ B^⊥, A^⊥ ∩ B, A^⊥ ∩ B^⊥ invariant. On the sum K of these four subspaces,
the projectors I_A and I_B commute, and nothing interesting happens. If K = {0} we
say that the two subspaces are in general position. This property can be realized by
restriction to the invariant space K^⊥. Thus we assume

(1.1) A ∩ B = A^⊥ ∩ B^⊥ = {0} (= A ∩ B^⊥ = A^⊥ ∩ B) .

However, in the application to the Tomita-Takesaki theory only the first two conditions
are realized, and we indicate carefully the places where "general position" assumptions
come into play.
To simplify notation we set I_A = p, I_B = q. Then we put

(1.2) σ = p - q ; γ = p + q - I .

These operators are bounded and selfadjoint, and we have the following relations
(depending only on p and q being projectors : p^2 = p, q^2 = q ; none of the conditions
(1.1) is used)

(1.3) σ^2 + γ^2 = I ; σγ + γσ = 0 .

The notations σ and γ suggest a "sine" and a "cosine". The first relation implies
that < σx, σx > + < γx, γx > = < x, x >, so the spectrum of each operator lies in the
interval [-1, 1], and we may write the spectral representations (valid also in the real
case !)

(1.4) σ = ∫_{-1}^{1} λ dE_λ ; γ = ∫_{-1}^{1} λ dF_λ .
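The relations (1.3) and the spectral bound can be verified numerically for a randomly chosen pair of subspaces (which are then, almost surely, in general position) ; anticipating the definition j = sgn(σ) of (1.5) below, the sketch also checks j^2 = I and the anticommutation jγ = -γj of (1.7).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6
def proj(cols):                       # orthogonal projection onto a column span
    Q, _ = np.linalg.qr(cols)
    return Q @ Q.conj().T

p = proj(rng.normal(size=(n, 3)))     # I_A : a generic 3-dimensional subspace A
q = proj(rng.normal(size=(n, 3)))     # I_B : generically A, B in general position
I = np.eye(n)
s, g = p - q, p + q - I               # the "sine" sigma and "cosine" gamma

assert np.allclose(s @ s + g @ g, I)          # (1.3): sigma^2 + gamma^2 = I
assert np.allclose(s @ g + g @ s, 0)          # (1.3): sigma gamma + gamma sigma = 0
assert np.abs(np.linalg.eigvalsh(s)).max() <= 1 + 1e-12   # spectrum in [-1, 1]

w, U = np.linalg.eigh(s)
j = U @ np.diag(np.sign(w)) @ U.conj().T      # main symmetry j = sgn(sigma)
assert np.allclose(j @ j, I)                  # j is a symmetry
assert np.allclose(j @ g + g @ j, 0)          # (1.7): j gamma = - gamma j
```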

The relation σx = 0 means px = qx, which implies px = 0 = qx since A ∩ B = {0} ;
then x ∈ A^⊥ ∩ B^⊥, which in turn implies x = 0. Otherwise stated, σ is injective.
Similarly, one can prove that I - γ is injective. Using the remaining relations (1.1),
one may prove that γ, I - σ, I + σ are injective. In the application to Tomita-Takesaki
theory, only the left side of (1.1) is true, so these last three operators will not be injective.
Note that the injectivity of σ means that the spectral measure dE_λ has no jump at 0,
i.e. E_0 = E_{0-}.
We now define

(1.5) j = sgn(σ) = ∫_{-1}^{1} sgn(λ) dE_λ ; d = |σ| = ∫_{-1}^{1} |λ| dE_λ .

Since the spectral measure has no jump at 0, it is not necessary to define the sign of 0,
and we have j^2 = I : j is a symmetry, which we call the main symmetry. On the other
hand, d is self-adjoint positive. Since we have d^2 = σ^2, d is the only positive square
root of I - γ^2. Then it commutes with γ and, since it already commutes with σ, it
commutes with all the operators we are considering. Finally, we have

d j p = σ p = (p - q) p = (I - q)(p - q) = (I - q) d j = d (I - q) j .

Since d is injective, this implies

(1.6) jp = (I - q) j , whence taking adjoints jq = (I - p) j .

Adding these relations, we get

(1.7) jγ = -γj ; jσ = σj

(the second equality is not new, we recall it for symmetry). The fact that the spaces are
in general position has not been fully used yet. If we do use it, operating on F_λ as we did on
E_λ gives a second symmetry k = sgn(γ) and a second positive operator e = |γ| such
that

γ = ke = ek , e = |γ| = (I - σ^2)^{1/2} , kσ = -σk , kγ = γk

and e commutes with all other operators. We call k the second symmetry. It turns out
that the two symmetries k and j anticommute. Indeed, k anticommutes with every
odd function of σ, and in particular with sgn(σ) = j.
Let us add a few words on the application of the above results to the Tomita-
Takesaki theory : in this situation, ℋ is a complex Hilbert space (also endowed with
the real Hilbert space structure given by the real part of its scalar product), and the
two subspaces A and B are real, and such that B = iA. Another way to state this is
the relation ip = qi. Then the "cosine" operator γ is complex linear, but the "sine"
operator σ and the main symmetry j are conjugate linear. We shall return to this
situation in due time.

The σ-field generated by two events

2 Assuming that the two subspaces are in general position, denote by M the
eigenspace {jx = x} of the main symmetry. Then M^⊥ is the eigenspace {jx = -x},
and it is not difficult to see that k is an isomorphism of M onto M^⊥. Note that
since ℋ = M ⊕ M^⊥ it must have even or infinite dimension. If we identify M to
M^⊥ by means of k, we identify ℋ with M ⊕ M (every vector in ℋ having a unique
representation as x + ky, x, y ∈ M) and every bounded operator on ℋ is represented by
a (2,2) matrix of operators on M. In particular, j(x + ky) = x - ky, k(x + ky) = y + kx,
and therefore

j = ( I 0 ; 0 -I ) ,  k = ( 0 I ; I 0 ) .

It is now clear that "!-I ,~ ./M®C 2 with j corresponding to I ® o z and K to I ® a x • If we


assume our Hilbert space is complex and put l = - i j k (where i denotes the operator
of multiplication by the complex scalar i ! ) this operator is represented by the third
Pauli matrix

:)
The operator c commutes with j and k and leaves M invariant. Consider now the
family A of all operators

x I + y j + z k + t l = ( x + y   z − it ; z + it   x − y )

where x, y, z, t are not scalars, but are restrictions to M of functions of c. Then it is
very easy to see that A is closed under multiplication and passage to the adjoint, and
one can show that A is exactly the von Neumann algebra generated by p and q. So
this family contains uncountably many projectors, i.e. events in the probabilistic sense.
An interesting by-product of the preceding discussion is the fact that two anti-
commuting symmetries j and k on H necessarily look like two of the Pauli matrices,
and in fact (taking an o.n.b. of M) the space decomposes into a direct sum of copies
of C² equipped with the Pauli matrices. On the other hand, consider a Hilbert space
H and two mutually adjoint bounded operators b⁺ and b⁻ such that b⁺² = b⁻² = 0
and b⁺b⁻ + b⁻b⁺ = I. Then it is easy to see that b⁺ + b⁻ and i(b⁺ − b⁻)
are anticommuting symmetries. Thus the Hilbert space splits into a countable sum of
copies of C² equipped with the standard creation and annihilation operators. This is the
rather trivial analogue, for the canonical anticommutation relations, of the Stone-von
Neumann uniqueness theorem for the canonical commutation relations.
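The last assertions are easy to verify in the smallest case. The following numerical
sketch (an illustration added to these notes, not part of the original text; it assumes
Python with NumPy) checks, for the standard 2×2 creation and annihilation matrices,
that b⁺ + b⁻ and i(b⁺ − b⁻) are indeed anticommuting symmetries.

```python
# Illustration (assumes NumPy): the standard 2x2 creation/annihilation pair
# satisfies the CAR, and yields two anticommuting symmetries as claimed.
import numpy as np

bp = np.array([[0, 0], [1, 0]], dtype=complex)  # creation b+
bm = bp.conj().T                                # annihilation b- = (b+)*

# CAR relations: b+^2 = b-^2 = 0, b+ b- + b- b+ = I
assert np.allclose(bp @ bp, 0) and np.allclose(bm @ bm, 0)
assert np.allclose(bp @ bm + bm @ bp, np.eye(2))

s1 = bp + bm                                    # first symmetry
s2 = 1j * (bp - bm)                             # second symmetry

for s in (s1, s2):
    assert np.allclose(s, s.conj().T)           # self-adjoint
    assert np.allclose(s @ s, np.eye(2))        # square = I
assert np.allclose(s1 @ s2 + s2 @ s1, 0)        # they anticommute
```

The same check applies verbatim on each summand of a direct sum of copies of C².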
Appendix 4
C*-Algebras
This appendix contains a minimal course on C* and von Neumann algebras, with
emphasis on probabilistic topics, given independently of the course on Fock spaces. Von
Neumann algebras play in non-commutative probability the role of measure theory in
classical probability, though in practice (as in the classical case!) deep results on von
Neumann algebras are rarely used, and a superficial knowledge of the language is all
one needs to read the literature. Our presentation does not claim to be original, except
possibly by the choice of the material it excludes.

§1. ELEMENTARY THEORY

Definition of C*-algebras
The proofs in this section are very classical and beautiful, and can be traced back
to Gelfand and Naimark, with several substantial later improvements. For details
of presentation we are specially indebted to the (much more complete) books by
Bratteli-Robinson and Pedersen. We assume that our reader has some knowledge of the
elementary theory of Banach algebras, which is available in many first year functional
analysis courses.
1 By a C*-algebra we mean a complex algebra A with a norm ‖·‖, an involution *,
and also a unit I (the existence of a unit is not always assumed in standard courses,
but we are supposed to do probability), which is complete with respect to its norm, and
satisfies the basic axiom

(1.1)   ‖a*a‖ = ‖a‖² .

The most familiar *-algebra of elementary harmonic analysis, the convolution algebra
L 1(G) of a locally compact group, is not a C*-algebra.
C*-algebras are non-commutative analogues of the algebras C(K) of complex,
continuous functions over a compact space K (whence the C ) with the uniform norm,
the complex conjugation as involution, and the function 1 as unit. Thus bounded linear
functionals on C*-algebras are the non-commutative analogues of (bounded) complex
measures on a compact space K.
The relation (ab)* = b*a* implies that I* is a unit, thus I* = I, and (1.1)
then implies ‖I‖ = 1. On the other hand, to prove (1.1) it suffices to check that
‖a*a‖ ≥ ‖a‖². Indeed, this property implies (since the inequality ‖ab‖ ≤ ‖a‖ ‖b‖ is
an axiom of normed algebras) ‖a*‖ ‖a‖ ≥ ‖a‖², hence ‖a*‖ ≥ ‖a‖, hence equality.
Finally ‖a*a‖ ≤ ‖a*‖ ‖a‖ = ‖a‖², and equality obtains.
This allows us to give the fundamental example of a non-commutative C*-algebra :
the algebra L(H) of all bounded linear operators on a complex Hilbert space H, with the
operator norm as ‖·‖, the adjoint mapping as involution, and the identity operator I as
unit. To see this, we remark that for an operator A

‖A‖² = sup_{‖x‖≤1} <Ax, Ax> = sup_{‖x‖≤1} <A*Ax, x> ≤ ‖A*A‖ .

One can show that every C*-algebra is isomorphic to a closed subalgebra of some
L(H).
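As a numerical illustration (added here, assuming NumPy; not part of the original
text), the C* identity (1.1) can be checked for the operator norm on a finite-dimensional
H = C⁵:

```python
# Illustration (assumes NumPy): the C* identity ||A*A|| = ||A||^2 for the
# operator norm on L(C^5), the fundamental non-commutative example.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

op_norm = lambda m: np.linalg.norm(m, 2)   # operator norm = top singular value
assert np.isclose(op_norm(A.conj().T @ A), op_norm(A) ** 2)
assert np.isclose(op_norm(A.conj().T), op_norm(A))   # ||a*|| = ||a||
```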
By analogy with the case of L(H), one says that a in some C*-algebra A is
selfadjoint or hermitian if a = a*, is normal if a and a* commute, and is unitary if
aa* = a*a = I. The word selfadjoint is so often used that we abbreviate it into s.a..

Application of spectral theory

2 Let us recall a few facts from the elementary theory of Banach algebras. First of all,
the spectrum Sp(a) of an element a of a Banach algebra A with unit I is the set of
all λ ∈ C such that λI − a is not invertible in A. It is a non-empty compact set of
the complex plane, contained in the disk with radius ‖a‖, but the spectral radius ρ(a)
of a, i.e. the radius of the smallest disk containing Sp(a), may be smaller than ‖a‖.
The complement of Sp(a) is called the resolvent set Res(a). The spectrum, and hence
the spectral radius, are defined without reference to the norm. On the other hand, the
spectral radius is given explicitly by

(2.1)   ρ(a) = lim sup_n ‖aⁿ‖^{1/n} .
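A numerical sketch of (2.1) (an added illustration, assuming NumPy), on a triangular
matrix whose spectral radius is strictly smaller than its norm:

```python
# Illustration (assumes NumPy): Gelfand's formula (2.1) for a concrete
# non-normal matrix; here rho(a) < ||a|| strictly.
import numpy as np

a = np.array([[0.5, 2.0],
              [0.0, 0.3]])                    # Sp(a) = {0.5, 0.3}

rho = max(abs(np.linalg.eigvals(a)))          # spectral radius = 0.5
assert rho < np.linalg.norm(a, 2)             # rho(a) < ||a|| here

n = 400                                       # a large power
gelfand = np.linalg.norm(np.linalg.matrix_power(a, n), 2) ** (1.0 / n)
assert abs(gelfand - rho) < 1e-2              # ||a^n||^{1/n} is close to rho
```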

We use many times the spectral mapping theorem, which in its simplest form asserts that
if F(z) is an entire function Σ_n c_n zⁿ on C, and F(a) denotes the sum Σ_n c_n aⁿ ∈ A,
then Sp(F(a)) is the image F(Sp(a)). This result is true more generally for functions
which are not holomorphic in the whole plane, but only on a neighbourhood of the
spectrum. From this theorem we only need here the following particular case

(2.2)   If a is invertible, then Sp(a⁻¹) = (Sp(a))⁻¹ .

Another property we will use is the following

(2.3)   Sp(ab) and Sp(ba) differ at most by {0}.

The proof goes as follows : assume λI − ba has an inverse c. Then the easy equality

(λI − ab)(I + acb) = (I + acb)(λI − ab) = λI

implies, for λ ≠ 0, that λI − ab is invertible too.
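Property (2.3) is easy to test numerically (an added illustration, assuming NumPy);
for square matrices the spectra of ab and ba in fact coincide, the value 0 included:

```python
# Illustration (assumes NumPy): the eigenvalues of ab and ba agree, cf. (2.3).
import numpy as np

rng = np.random.default_rng(2)
a = rng.standard_normal((3, 3))
b = rng.standard_normal((3, 3))

ev1 = np.sort_complex(np.linalg.eigvals(a @ b))
ev2 = np.sort_complex(np.linalg.eigvals(b @ a))
assert np.allclose(ev1, ev2)
```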

3 The following properties are specific to C*-algebras. They are very simple and have
important consequences.

a) If a is normal, we have ρ(a) = ‖a‖. This follows from the computation (2.1) of
ρ(a), and the following consequence of (1.1) (note that b = a*a is s.a.)

‖a^{2^n}‖² = ‖(a^{2^n})* a^{2^n}‖ = ‖(a*a)^{2^n}‖ = ‖(a*a)^{2^{n−1}}‖² = ⋯ = ‖a‖^{2^{n+1}} .

As a corollary, note that a Banach algebra A has at most one C* norm, given by
‖a‖ = (ρ(a*a))^{1/2}. Also, if A and B are two C*-algebras, and f is a morphism from A to
B (i.e. f preserves the algebraic operations, involutions and units), it is clear that f(a)
has a smaller spectral radius than a, hence also a smaller norm.
b) The spectrum of a unitary element a is contained in the unit circle. Indeed, the
fact that a*a = I implies ρ(a*a) = 1. Therefore the spectrum of a lies in the unit disk.
The same applies to a⁻¹, and using (2.2) we find their spectra are on the unit circle.

c) A s.a. element a has a real spectrum, and at least one of the points ±‖a‖ belongs
to Sp(a). One first remarks that e^{ia} is unitary, hence has its spectrum on the unit circle.
The first statement then follows from the spectral mapping theorem. On the other hand,
Sp(a) is compact, and therefore contains a point λ such that |λ| = ρ(a). Since the
spectrum is real and ρ(a) = ‖a‖, we have λ = ±‖a‖.
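A quick numerical check of c) (an added illustration, assuming NumPy):

```python
# Illustration (assumes NumPy): a s.a. matrix has real spectrum, and one of
# the points +||a|| or -||a|| belongs to its spectrum.
import numpy as np

rng = np.random.default_rng(3)
m = rng.standard_normal((4, 4))
a = m + m.T                                   # a s.a. element of L(C^4)

ev = np.linalg.eigvals(a)
assert np.allclose(ev.imag, 0)                # real spectrum
norm_a = np.linalg.norm(a, 2)
assert min(abs(ev - norm_a).min(), abs(ev + norm_a).min()) < 1e-8
```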

Positive elements

4 We adopt the following definition of a positive element a in the C*-algebra A : a
is s.a. and its spectrum is contained in R₊. For instance, since every s.a. element has
a real spectrum, its square is positive by the spectral mapping theorem. In subsection 7
we prove that positive elements are exactly those of the form b*b. We start with a few
elementary results which have extremely clever and elegant proofs.

a) If a finite sum of positive elements is equal to 0, each of them is 0. It is sufficient
to deal with a sum a + b = 0. Since Sp(a) = −Sp(b), and both are in R₊, we must have
Sp(a) = {0}, hence ρ(a) = 0 = ‖a‖.
b) Let a be s.a. with ‖a‖ ≤ 1. Then a ≥ 0 ⟺ ‖I − a‖ ≤ 1. Indeed, if a ≥ 0 and
‖a‖ ≤ 1 then Sp(I − a) ⊂ [0,1], hence ρ(I − a) = ‖I − a‖ ≤ 1. Conversely, from
‖a‖ ≤ 1 and ‖I − a‖ ≤ 1 we deduce that Sp(a) lies in the intersection of the real
axis and of the two disks of radius 1 with centers at 0 and 1. That is, in the interval
[0,1] ⊂ R₊, and a is positive.

Let A₊ be the set of positive elements. It is clearly a cone in A. The preceding
property easily implies that its intersection with the unit ball is convex and closed, from
which it easily follows that A₊ is closed, and closed under sums (i.e. is a closed convex
cone).
The following result is extremely important :

c) Every positive element a has a unique positive square root b ; it belongs to the closed
algebra generated by a (it is not necessary to add the unit element).

We may assume ‖a‖ ≤ 1, and then I − a has norm ≤ 1. We recall
that (1 − z)^{1/2} = 1 − Σ_{n≥1} c_n zⁿ with positive coefficients c_n of sum 1. We define
b = I − Σ_n c_n (I − a)ⁿ = Σ_n c_n (I − (I − a)ⁿ). Then it is very easy to see that b is a positive
square root of a. It appears as a limit of polynomials in a without constant term,
whence the last statement.
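The series construction can be run numerically (an added sketch, assuming NumPy;
the coefficients c_n are generated here by the recursion c_1 = 1/2,
c_{n+1} = c_n (2n − 1)/(2n + 2), which reproduces 1/2, 1/8, 1/16, 5/128, ...):

```python
# Illustration (assumes NumPy): b = sum_n c_n (I - (I-a)^n) is the positive
# square root of a positive a with ||a|| <= 1; compare with the eigenvalue sqrt.
import numpy as np

rng = np.random.default_rng(4)
q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
a = q @ np.diag([0.2, 0.5, 0.8]) @ q.T        # positive, ||a|| <= 1

z = np.eye(3) - a                             # ||z|| = 0.8 < 1
b = np.eye(3)
c, zn = 0.5, z.copy()                         # c_1 = 1/2
for n in range(1, 200):
    b -= c * zn                               # subtract c_n z^n
    zn = zn @ z
    c *= (2 * n - 1) / (2 * n + 2)            # c_{n+1}

assert np.allclose(b @ b, a)                  # b is a square root of a
w, v = np.linalg.eigh(a)                      # the spectral square root
assert np.allclose(b, v @ np.diag(np.sqrt(w)) @ v.T)
```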
1. C* -algebras 237

Before we prove uniqueness, we note that a product of two commuting positive
elements a, a′ is positive. Indeed, their (just constructed) square roots b, b′ commute
as limits of polynomials in a, a′, hence bb′ is s.a. with square aa′.

To prove uniqueness, consider any positive c such that c² = a. Then a = c² belongs
to the closed algebra generated by c, and then the same is true for the square root b
considered above. Thus all three elements commute and we have 0 = (b² − c²)(b − c) =
(b − c)²(b + c). Now (b − c)²b and (b − c)²c are positive from the preceding remark and
their sum is 0. By a) above, they are themselves equal to 0, and so is their difference
(b − c)³. This implies ρ(b − c) = 0, and finally ‖b − c‖ = 0.

Symbolic calculus for s.a. elements

5 Symbolic calculus in an algebra A consists in giving a reasonable meaning to the
symbol F(u), where u ranges over some class of elements of A and F over some
algebra of functions on C. Notations like e^a or √a are examples of symbolic calculus
in algebras. It is clear that polynomials operate on any algebra with unit, and entire
functions on every Banach algebra; more generally, the symbolic calculus in Banach
algebras defines F(u) whenever F is analytic in a neighbourhood of Sp(u). We are
going to show that continuous functions on R operate on s.a. elements of C*-algebras.
Later, we will see that in von Neumann algebras this can be extended to bounded Borel
functions.
Let u be a s.a. element of a C*-algebra, with (compact) spectrum Sp(u) = K.
Let 𝒫 be the *-algebra of all complex polynomials on R, with the usual complex
conjugation as involution. Then the mapping P ↦ P(u) is a morphism from 𝒫 to
the subalgebra of A generated by u and I (a *-subalgebra since u is s.a.). By the
spectral mapping theorem, ρ(P(u)) (which is equal to ‖P(u)‖ since this element is
normal) is exactly the uniform norm of P on Sp(u) = K. Polynomials being dense
in C(K) by Weierstrass' theorem, we extend the mapping to C(K), by continuity, to
define an isomorphism between C(K) and the closed *-algebra generated by u and I.
This isomorphism is the continuous symbolic calculus. Essentially the same reasoning
can be extended to normal u instead of s.a., starting from polynomials P(z, z̄) on C,
and more generally one may define continuous functions of several commuting s.a. or
normal elements.
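The isometry at the heart of the construction can be observed numerically (an added
illustration, assuming NumPy):

```python
# Illustration (assumes NumPy): for a s.a. matrix u, ||P(u)|| equals the
# uniform norm of P on Sp(u) -- the key fact behind the symbolic calculus.
import numpy as np

rng = np.random.default_rng(5)
m = rng.standard_normal((4, 4))
u = (m + m.T) / 2                             # s.a., spectrum K = Sp(u)
K = np.linalg.eigvalsh(u)

P = lambda x: x**3 - 2 * x + 1                # a real polynomial
Pu = np.linalg.matrix_power(u, 3) - 2 * u + np.eye(4)
assert np.isclose(np.linalg.norm(Pu, 2), np.max(np.abs(P(K))))
```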

6 Let us give some applications of the symbolic calculus.

a) The decomposition f = f⁺ − f⁻ of a continuous function gives rise to a
decomposition u = u⁺ − u⁻ of any s.a. element of A into a difference of two
(commuting) positive elements. Also we may define |u| and extend to the closed real
algebra generated by u the order properties of continuous functions.

b) Given a state μ of A (see §2 below), μ induces a state of the closed subalgebra
generated by u, which translates into a positive linear functional of mass 1 on C(Sp(u)),
called the law of u in the state μ. More generally, one could define in this way the joint
law of a finite number of commuting s.a. elements of A.

c) Let u be s.a. (or more generally, normal) and invertible in A. Then its spectrum
K does not contain 0, and the function 1/x thus belongs to C(K). Therefore u⁻¹
belongs to the closed algebra generated by u. More generally, for v not necessarily s.a.,
if v is invertible then so is u = v*v, with v⁻¹ = u⁻¹v*, and therefore if v is invertible in A,
it is invertible in any C*-subalgebra B of A containing v. It follows that Sp(v) also
is the same in A and B.

d) Every element a in a C*-algebra is a linear combination of (four) unitaries. We
first decompose a into p + iq with p = (a + a*)/2, q = i(a* − a)/2 selfadjoint. Then
we remark that if the norms of p, q are ≤ 1, the elements p ± i(I − p²)^{1/2}, q ± i(I − q²)^{1/2}
are unitary.
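A numerical sketch of d) (an added illustration, assuming NumPy):

```python
# Illustration (assumes NumPy): a = p + iq with p, q s.a., and p + i(I-p^2)^{1/2}
# is unitary when ||p|| <= 1, so a is a combination of four unitaries.
import numpy as np

rng = np.random.default_rng(6)
a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
a /= 2 * np.linalg.norm(a, 2)                 # scale so that ||p||, ||q|| <= 1

p = (a + a.conj().T) / 2                      # a = p + iq
q = 1j * (a.conj().T - a) / 2
assert np.allclose(p + 1j * q, a)

def sqrt_psd(m):                              # square root of a positive matrix
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

s = sqrt_psd(np.eye(3) - p @ p)
for u in (p + 1j * s, p - 1j * s):
    assert np.allclose(u @ u.conj().T, np.eye(3))   # unitary
assert np.allclose(((p + 1j * s) + (p - 1j * s)) / 2, p)   # p = (u+ + u-)/2
```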

Characterization of positive elements

7 Every positive element in a C*-algebra A has a s.a. square root, and therefore
can be written in the form a*a. Conversely, in the operator algebra L(H) it is trivial
that A*A is positive in the operator sense, hence has a positive spectrum, and finally
satisfies the algebraic definition of positivity. So it was suspected from the beginning
that in an arbitrary C*-algebra every element of the form a*a is positive, but Gelfand
and Naimark could not prove it : it was finally proved by Kelley and Vaught.

We put b = a*a. Since b is s.a., we may decompose it into b⁺ − b⁻ and we want
to prove that b⁻ = 0. We put ab⁻ = s + it with s, t s.a. and define u = (ab⁻)*(ab⁻),
v = (ab⁻)(ab⁻)*. We make two remarks.

1) u = (ab⁻)*(ab⁻) = b⁻ a*a b⁻ = b⁻ b b⁻ = −(b⁻)³ is s.a. ≤ 0. On the other
hand u = (s − it)(s + it) = s² + t² + i(st − ts), thus i(st − ts) ≤ 0.

2) Then v = (ab⁻)(ab⁻)* = s² + t² − i(st − ts) is s.a. ≥ 0 as a sum of two such
elements.

According to (2.3), u and v have the same spectrum, except possibly the value 0.
Since u ≤ 0 and v ≥ 0, this spectrum is {0}. Since u, v are s.a., this implies u = v = 0.
Then the relation u = −(b⁻)³ implies ρ(b⁻) = 0, and b⁻ = 0 since b⁻ is s.a..

Non-commutative inequalities

8 Many inequalities that are used everyday in commutative life become non-
commutative traps. For instance, it is not true that 0 ≤ a ≤ b ⟹ a² ≤ b². If we
try to define |a| as (a*a)^{1/2}, it is not true that |a + b| ≤ |a| + |b|, etc. Thus it is worth
making a list of true inequalities.

a) Given one single positive element a, we have a ≤ ‖a‖ I, and a² ≤ ‖a‖ a : this
amounts to the positivity of the polynomials ‖a‖ − x and ‖a‖ x − x² on Sp(a).

b) a ≥ 0 implies c*ac ≥ 0 for arbitrary c (write a as b*b). By difference,
a ≥ b ⟹ c*ac ≥ c*bc.

c) a ≥ b ≥ 0 ⟹ ‖a‖ ≥ ‖b‖. Indeed, ‖a‖ I ≥ a ≥ b, hence Sp(b) ⊂ [0, ‖a‖].

d) a ≥ b ≥ 0 ⟹ (λI + a)⁻¹ ≤ (λI + b)⁻¹ for λ > 0. This is less trivial : one starts
from 0 ≤ λI + b ≤ λI + a, then applies b) to get

I ≤ (λI + b)^{−1/2} (λI + a) (λI + b)^{−1/2} .

Then the function x⁻¹ reverses inequalities on positive s.a. operators, and we deduce

I ≥ (λI + b)^{1/2} (λI + a)⁻¹ (λI + b)^{1/2} ,

and we apply again b) to get the final result.


e) T h e preceding result gives an e x a m p l e of increasing functions F ( x ) on ~ such
that 0 < a < b ~ F ( a ) _< F(b), n a m e l y t h e f u n c t i o n 1 - (A + x) - 1 ' = x / ( x + ~) for
_> 0. We get o t h e i e x a m p l e s by integration w.r.t, a positive m e a s u r e #(dA), t h e m o s t
i m p o r t a n t of which are the powers x a for 0 < a < 1, and in p a r t i c u l a r the f u n c t i o n
,z.
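Both the trap and the true inequality can be seen on 2×2 matrices (an added sketch,
assuming NumPy): a classical pair 0 ≤ a ≤ b for which a² ≤ b² fails, while the
square-root inequality of e) holds.

```python
# Illustration (assumes NumPy): 0 <= a <= b does not imply a^2 <= b^2,
# but it does imply sqrt(a) <= sqrt(b) (x^{1/2} is operator monotone).
import numpy as np

a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[2.0, 1.0], [1.0, 1.0]])        # b - a = [[1,1],[1,1]] >= 0

psd = lambda m: np.linalg.eigvalsh((m + m.T) / 2).min() >= -1e-12
assert psd(a) and psd(b) and psd(b - a)       # 0 <= a <= b

assert not psd(b @ b - a @ a)                 # a^2 <= b^2 FAILS

def sqrt_psd(m):
    w, v = np.linalg.eigh(m)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.T

assert psd(sqrt_psd(b) - sqrt_psd(a))         # sqrt(a) <= sqrt(b) holds
```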

§2. STATES ON C*-ALGEBRAS

1 We gave in Chapter I the general definition of a probability law, or state, on a
complex unital *-algebra A, and C*-algebras are no exception : a state is a linear
functional μ such that μ(I) = 1 and μ(a*a) ≥ 0 for arbitrary a. According to subsection 7,
this means exactly that μ is positive on positive elements, hence real on s.a. elements.
Since every element of A can be written as a = p + iq, a* = p − iq with p, q s.a., we
see that μ(a*) is the complex conjugate of μ(a), and one of the general axioms becomes
unnecessary.

The same familiar reasoning that leads to the Schwarz inequality gives here

(1.1)   |μ(b*a)|² ≤ μ(b*b) μ(a*a) .

In particular, we have |μ(a)|² ≤ μ(a*a) μ(I). On the other hand, we have a*a ≤ ‖a‖² I,
and since μ(I) = 1 we find that |μ(a)| ≤ ‖a‖ -- otherwise stated, μ is bounded and
has norm 1.
Conversely, a bounded linear functional μ on A such that μ(I) = ‖μ‖ = 1 is a state.
Consider indeed a positive element a with spectrum K ; define a linear functional m
on C(K) by m(f) = μ(f(a)). Since μ is bounded, m is a complex measure, of norm
‖m‖ ≤ ‖μ‖ = μ(I) = m(1). Then m is positive from standard measure theory, and
we have μ(a) = m(x) ≥ 0 (x denoting the identity function on K).
C*-algebras have many states : let us prove that for every s.a. a ∈ A, there exists a
state μ such that |μ(a)| = ‖a‖. We put K = Sp(a), ‖a‖ = α, and recall (§1, subs. 3,
c)) that at least one of the two points ±α belongs to K ; for definiteness we assume
α does. The linear functional f(a) ↦ f(α) on the C*-algebra generated by a is
bounded with norm 1, and by the Hahn-Banach theorem it can be extended to a linear
functional μ of norm 1 on A. The equality μ(I) = 1 = ‖μ‖ implies that μ is a state
on A such that μ(a) = α.

REMARK. The set of all bounded linear functionals on A is a Banach space, the unit ball
of which is compact in its weak* topology. The subset of all positive linear functionals
is weakly closed and hence compact, and finally the set of all states itself is a weakly
compact convex set. According to the Krein-Milman theorem, it is the closed convex
hull of the set of its extreme points, also called pure states. For instance, the state
μ(A) = <x, Ax> of the algebra L(H), associated with a unit vector x, can be shown
to be pure in this sense. It is not difficult to show that the state μ in the preceding
proof can be chosen to be pure.
The space of all states of a C* -algebra has received considerable attention, specially
because of the (unsuccessful) attempts to classify all states of the CCR and CAR
algebras.

Representations and the GNS theorem

2 A representation of a C*-algebra A in a Hilbert space H is a morphism φ
from A into the C*-algebra L(H). Then every unit vector x ∈ H gives rise to a
state μ(a) = <x, φ(a)x> of A (generally not a pure one). The celebrated theorem
of Gelfand-Naimark-Segal, which we now prove, asserts that every state of a C*-
algebra A arises in this way -- and pure states arise from irreducible representations,
an important point we do not need here.

From an analyst's point of view, the main point of the GNS theorem is to show that
a C*-algebra has many (irreducible) representations. For us, its interest lies in the
construction itself, which is a basic step in the integration theory w.r.t. a state.
The proof is very easy. For a, b ∈ A one defines <b, a> = μ(b*a). Then A becomes
a prehilbert space. Let H denote the associated Hilbert space, which means that first
we neglect the space N of all a ∈ A such that μ(a*a) = 0, and then complete A/N.
We denote by ȧ, for a few lines only, the class mod N of a ∈ A.

We have μ((ba)* ba) = μ(a*(b*b)a), and since b*b ≤ ‖b‖² I this is bounded by
‖b‖² μ(a*a). Thus the operator ℓ_b of left multiplication by b is bounded on H. We
have ℓ_u ℓ_v = ℓ_{uv}, etc., so that ℓ. is a representation of A in H. Denoting by 1 the
vector in H corresponding to I, we have μ(a) = <1, ℓ_a 1>.

The relation aI = a for all a ∈ A implies that ℓ_a 1 = ȧ. The set of all ℓ_a 1 is thus
dense in H : 1 is said to be a cyclic vector for the representation, and the representation
itself is said to be cyclic.

Assume some other representation φ is given on a Hilbert space K, with a cyclic
vector x such that μ(a) = <x, φ(a)x> for a ∈ A. Then the mapping a ↦ φ(a)x
is an isometry from the prehilbert space A into K, with a dense image. Then it can
be transferred to A/N, extended to H by continuity, and becomes an isomorphism
between H and K. Thus the pair (H, 1) is independent, up to isomorphism, of the
particular way it was constructed, and we may speak loosely of the GNS representation
associated with the state μ.
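The construction is concrete enough to run in a small case (an added numerical
sketch, assuming NumPy), here for A = M₂(C) with the faithful state μ(a) = Tr(ρa),
ρ an invertible density matrix, so that N = {0}:

```python
# Illustration (assumes NumPy): the GNS scalar product <b,a> = mu(b*a) for
# A = M_2(C) with mu(a) = Tr(rho a). The Gram matrix of a basis is positive
# definite (N = {0}), and left multiplication is bounded as in the proof.
import numpy as np

rho = np.diag([0.7, 0.3])                         # faithful density matrix
mu = lambda a: np.trace(rho @ a)
ip = lambda b, a: mu(b.conj().T @ a)              # <b, a> = mu(b* a)

# matrix units E_ij, a basis of A = M_2(C)
basis = [np.eye(2)[:, [i]] @ np.eye(2)[[j], :] for i in range(2) for j in range(2)]
gram = np.array([[ip(e, f) for f in basis] for e in basis])
assert np.linalg.eigvalsh((gram + gram.conj().T) / 2).min() > 0   # N = {0}

rng = np.random.default_rng(7)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
norm2 = lambda x: ip(x, x).real                   # squared GNS norm
# ||ba||_mu^2 = mu(a* b*b a) <= ||b||^2 ||a||_mu^2
assert norm2(b @ a) <= np.linalg.norm(b, 2) ** 2 * norm2(a) + 1e-10

one = np.eye(2)                                   # the cyclic vector 1
assert np.isclose(mu(a), ip(one, a))              # mu(a) = <1, l_a 1>
```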

COMMENTS. a) Assume we are in the commutative case, and A is one of the three
usual C*-algebras of measure theory : the algebra C(K) of continuous functions on a
compact set K (with an arbitrary state μ), the algebra B(ℰ) of all bounded functions
for a σ-field ℰ (in this case, to develop integration theory we need a countable additivity
assumption on μ), and the algebra L∞(μ) (this one is a von Neumann algebra, of which
μ is a "normal state"). In all three cases, H is the space L²(μ), on which A acts by
multiplication. Thus it seems that H is a kind of non-commutative L² space. It is indeed,
but the GNS construction leads only to a "left" L² space, on which right multiplication
generally defines unbounded operators. We return to this subject later.
b) From the GNS construction, one can understand how every C*-algebra can be
realized as a C*-algebra of operators on some Hilbert space -- a very large one, possibly
the direct sum of the Hilbert spaces associated with all irreducible representations of
the algebra. We do not insist on this point.

c) The state μ is said to be faithful if

(3.1)   ∀ a ∈ A   (μ(a*a) = 0) ⟹ (a = 0) .


This is equivalent to saying that for a ≥ 0, μ(a) = 0 ⟹ a = 0. From the point
of view of the representation, we have μ(a*a) = ‖ℓ_a 1‖², and (3.1) means that
ℓ_a 1 = 0 ⟹ a = 0 : the vector 1 then is said to be separating. Note that the
state being faithful is a stronger property than the GNS representation being faithful
((ℓ_a = 0) ⟹ (a = 0)).

In classical measure theory, μ is a faithful state on C(K) iff the support of μ is the
whole of K ; μ is practically never faithful on the algebra of Borel bounded functions,
and is always (by construction) faithful on L∞(μ). In the non-commutative case, we
cannot generally turn a state into a faithful one by taking a quotient algebra, because
N usually is only a left ideal, not a two-sided one. This difficulty does not occur for
tracial states, defined by the property μ(ab) = μ(ba).
As a conclusion, while in classical measure theory all states on C(K) give rise to
countably additive measures on the Borel field of K with an excellent integration theory,
it seems that not all states on non-commutative C*-algebras A are "good". Traces
certainly are good states, as well as states which are normal and faithful on some von
Neumann algebra containing A. Up to now, the most efficient definition of a class of
"good" states has been the so-called KMS condition. We state it in the next section,
subsection 10.
4 EXAMPLES. As in Chapter II, consider the probability space Ω generated by N
independent Bernoulli random variables x_k, and the corresponding "toy Fock space"
Γ = L²(Ω) with basis x_A (A ranging over the subsets of {1, ..., N}). We turn Γ into
a Clifford algebra with the product

x_A x_B = (−1)^{n(A,B)} x_{AΔB} ,

and we embed Γ into L(Γ), each element being interpreted as the corresponding left
multiplication operator. Then the vacuum state on L(Γ)

μ(U) = <1, U1>

induces a state on the Clifford algebra, completely defined by

μ(x_A) = 0 if A ≠ ∅ ,  μ(1) = 1 .

Then we have μ(x_A x_B) = (−1)^{n(A,A)} if A = B, 0 otherwise, and this state is tracial.

The C*-algebra generated by all creation and annihilation operators a_k^± is the
whole of L(Γ). The vacuum state μ is read on the basis a_A^+ a_B^- of normally ordered
products as μ(a_A^+ a_B^-) = 0 unless A = B = ∅, μ(I) = 1. It is not tracial, since for
A ≠ ∅ we have μ(a_A^+ a_A^-) = 0, μ(a_A^- a_A^+) ≠ 0 -- anyhow, L(Γ) has no other
tracial state than the standard one U ↦ 2^{−N} Tr(U). It is also not faithful (in
particular, annihilation operators kill the vacuum).
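For N = 1 these assertions reduce to 2×2 matrix computations (an added sketch,
assuming NumPy):

```python
# Illustration (assumes NumPy), one mode: the vacuum state on L(Gamma) is
# neither tracial nor faithful, while U -> 2^{-N} Tr(U) is tracial.
import numpy as np

ap = np.array([[0, 0], [1, 0]], dtype=complex)    # creation a+
am = ap.conj().T                                  # annihilation a-
vac = np.array([1, 0], dtype=complex)             # the vacuum vector 1
mu = lambda u: vac.conj() @ u @ vac               # vacuum state

assert mu(ap @ am) == 0 and mu(am @ ap) == 1      # not tracial
assert np.allclose(am @ vac, 0)                   # a- kills the vacuum
assert mu(am.conj().T @ am) == 0                  # mu(a*a) = 0, a != 0: not faithful

tau = lambda u: np.trace(u) / 2                   # the normalized trace
assert tau(ap @ am) == tau(am @ ap)               # tracial
```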
5 We mentioned a,bove several times the construction of a quotient algebra , A / J ,
where J is a two sided ideal. This raises a few interesting problems. For instance,
whether the quotiert algebra will be a C * - a l g e b r a . We mention the classical answers,
mostly due to Segal It is not necessary to study this subsection at a first reading.
Let first ,,7 be a left ideal in ,4, not necessarily closed or stable under the involution.
We assume J ¢ ,4, or equivalently I ~ ,:7. Let J + denote the set of all positive
242 C*-~gebras

elements in ,7. We have seen in §1, subs. 8, d) t h a t the m a p p i n g j ~ ej = j ( I + j ) -a


on ,7+ is increasing. It m a p s ,7+ into the "interval [0, I] " in ,7. The symbol lirnj is
u n d e r s t o o d as a limit along the directed set ,7+.
The following important result explains why e_j is called an approximate unit for J.

THEOREM. For a ∈ J we have lim_j ‖a − a e_j‖ = 0.

PROOF. For n ∈ N and j ≥ n a*a, we have (§1, 8 b))

(a − a e_j)*(a − a e_j) = (I − e_j) a*a (I − e_j) ≤ (1/n)(I − e_j) j (I − e_j) .

We must show that the norm of the right side tends to 0, and taking n large it suffices
to find a universal bound for the norm of j(I − e_j)². Using symbolic calculus this reduces
to the trivial remark that x/(1 + x)² is bounded for x ∈ R₊.
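The bound invoked at the end can be checked numerically (an added illustration,
assuming NumPy): for positive j one has j(I − e_j)² = j(I + j)⁻², of norm at most
sup_{x≥0} x/(1 + x)² = 1/4.

```python
# Illustration (assumes NumPy): for positive j, e_j = j(I+j)^{-1} satisfies
# ||j (I - e_j)^2|| <= 1/4 and ||I - e_j|| <= 1.
import numpy as np

rng = np.random.default_rng(8)
m = rng.standard_normal((4, 4))
j = m @ m.T * 10.0                            # a "large" positive element

inv = np.linalg.inv(np.eye(4) + j)            # (I + j)^{-1} = I - e_j
ej = j @ inv                                  # the approximate unit e_j
assert np.linalg.norm(j @ inv @ inv, 2) <= 0.25 + 1e-12
assert np.linalg.norm(np.eye(4) - ej, 2) <= 1 + 1e-12
```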
APPLICATIONS. a) This reasoning does not really use the fact that a ∈ J : since J is a
left ideal, a e_j belongs to J for a ∈ A : it applies whenever a*a is dominated by some
element of J₊, with the consequence that a then belongs to the closure of J.

b) If a = lim a e_j, we have a* = lim e_j a*. Thus if J is a closed two-sided ideal, J
is also stable under the involution.

c) Let J be a closed two-sided ideal, and ȧ denote the class mod J of a ∈ A. The
general definition of ‖ȧ‖ in the Banach space A/J is inf_{b∈J} ‖a + b‖, and therefore
we have ‖a − a e_j‖ ≥ ‖ȧ‖. Let us prove that

(5.1)   ‖ȧ‖ = lim_j ‖a − a e_j‖ .

Since ‖I − e_j‖ ≤ 1 we have

‖a + b‖ ≥ ‖(a + b)(I − e_j)‖ = ‖(a − a e_j) + (b − b e_j)‖ .

For b ∈ J, b − b e_j tends to 0 in norm, and therefore ‖a + b‖ ≥ lim sup_j ‖a − a e_j‖,
implying (5.1).
d) It follows that A/J is a C*-algebra :

‖ȧ‖² = lim_j ‖a − a e_j‖² = lim_j ‖(a* − e_j a*)(a − a e_j)‖
     = lim_j ‖(I − e_j)(a*a + b)(I − e_j)‖  for every b ∈ J
     ≤ ‖a*a + b‖  for every b ∈ J .

Taking the inf over b, we get ‖ȧ‖² ≤ ‖ȧ* ȧ‖, hence equality (§1, subs. 1).
e) An interesting consequence : let f be a morphism from a C*-algebra A into a
second one B, and let J be its kernel. We already know that f is continuous (norm
decreasing), hence J is closed. Then let f̃ be the algebraic isomorphism from the
quotient C*-algebra A/J into B ; from subs. 3 b), f̃ is also a norm isomorphism, and
therefore its image f(A) is closed in B.

§3. VON NEUMANN ALGEBRAS

This section is a modest one : more than ever its proofs are borrowed from expository
books and papers, and we have left aside the harder results (and specially all of the
classification theory).

Weak topologies for operators

1 Though von Neumann (vN) algebras can be defined as a class of abstract C*-
algebras, all the vN algebras we discuss below will be concrete *-algebras of operators
on some Hilbert space H. As operator algebras, they are characterized by the property
of being closed under the weak operator topologies instead of the uniform topology, and
our first task consists in making this precise.
The most striking characterization of vN algebras as abstract C*-algebras is possibly
that of Sakai : a C*-algebra A is a vN algebra if and only if the underlying Banach
space is a dual. Also, vN algebras are characterized by the fact that bounded increasing
families of s.a. elements possess upper bounds. On the other hand, Pedersen has studied
natural classes of algebras satisfying weaker properties, like that of admitting a Borel
symbolic calculus for individual s.a. elements, or of admitting upper bounds of bounded
increasing sequences of s.a. elements. He has shown that, in all practical cases, they
turn out to be the same as vN algebras.
There are three really useful weak topologies for bounded operators. The first one is
the weak operator topology : a_i → a iff <y, a_i x> → <y, ax> for fixed x, y -- here and
below, the use of i rather than n means that convergence is meant for filters or directed
sets, not just for sequences. The second is the strong operator topology : a_i x → ax in
norm for fixed x. However, the most important for us is the third one, usually called
the σ-weak or ultraweak topology. We have seen in Appendix 1 that the space L = L^∞
of all bounded operators is the dual of the space L¹ of trace class operators, and the
ultraweak topology is the weak topology on L relative to this duality. Otherwise stated,
convergence of a_i to a means that Tr(a_i b) → Tr(ab) for every b ∈ L¹. To emphasize
the importance of this topology, and also because states which are continuous in this
topology are called normal states, we call it the normal topology (see App. 1, subs. 5).

More precisely, the word normal usually refers to a kind of order continuity, described
later on, which will be proved to be equivalent to continuity in the "normal" topology
(see subs. 6). Thus our language doesn't create any dangerous confusion.
It is good to keep in mind the following facts from Appendix 1:
a) The normal topology is stronger than the weak topology, and not comparable to the strong one. The strong topology behaves reasonably well w.r.t. products: if a_i → a and b_i → b strongly, ||b_i|| (or ||a_i||) remaining bounded, then a_i b_i → ab and b_i a_i → ba strongly. On the other hand, it behaves badly w.r.t. adjoints, while if a_i → a weakly, then a_i* → a* weakly.
b) The dual of L(ℋ) is the same for the weak and the strong topologies, and consists of all linear functionals f(a) = Σ_j <y_j, a x_j> where x_j, y_j are (finitely many) elements
244 C* -algebras

of ℋ. The dual for the normal topology consists of all linear functionals f(a) = Tr(aw) where w is a trace class operator.
c) One can give a simple characterization of the normal linear functionals on L(ℋ), i.e. those which are continuous in the normal topology: a linear functional μ on L(ℋ) is normal iff its restriction to the unit ball is continuous in the weak topology.
As an application, since every weakly or strongly convergent sequence is norm bounded, it also converges in the normal topology.
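In finite dimensions the duality behind the normal topology can be checked by hand. The following sketch (plain Python; the particular operator a and vectors x, y are arbitrary choices for illustration) verifies that the weak-topology functional a ↦ <y, ax> of b) is the normal functional Tr(aw) for the rank one trace class operator w = |x><y|.

```python
# Finite-dimensional check of the trace duality: the weak-topology
# functional a -> <y, a x> equals Tr(a w) for the rank one trace class
# operator w = |x><y| (that is, w_{jk} = x_j * conj(y_k)).

def matvec(a, x):
    return [sum(a[j][k] * x[k] for k in range(len(x))) for j in range(len(a))]

def inner(y, z):                       # <y, z>, antilinear in the first slot
    return sum(y[j].conjugate() * z[j] for j in range(len(y)))

def matmul(a, b):
    n = len(a)
    return [[sum(a[j][k] * b[k][m] for k in range(n)) for m in range(n)]
            for j in range(n)]

def trace(a):
    return sum(a[j][j] for j in range(len(a)))

a = [[1 + 2j, 3j], [0.5 + 0j, -1 + 0j]]   # an arbitrary bounded operator
x = [1 + 1j, 2 - 1j]
y = [0.5j, 1 + 0j]

w = [[x[j] * y[k].conjugate() for k in range(2)] for j in range(2)]

lhs = inner(y, matvec(a, x))           # weak-topology functional at a
rhs = trace(matmul(a, w))              # normal functional Tr(a w)
print(abs(lhs - rhs))                  # -> 0.0 up to rounding
```

In infinite dimensions the two classes of functionals differ, of course: only finite rank w arise from b), while the normal functionals allow general trace class w.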

Von Neumann algebras

2 Since the weak and strong topologies on L(ℋ) have the same dual, the Hahn-Banach theorem implies that the weakly and strongly closed subspaces of L(ℋ) are the same.
DEFINITION. A (concrete) von Neumann algebra (vNa) is a *-subalgebra of L(ℋ), containing I, closed in the strong or weak topology.
The commutant J' of a subset J ⊂ L(ℋ) is the set of all bounded operators a such that ab = ba for every b ∈ J. It is clear that J' is an algebra, closed in the strong topology, and that the bicommutant J'' of J contains J. If J is stable under the involution, so is J' (which therefore is a vNa), and so is J''.
Here is the first, and most important, result in this theory.
VON NEUMANN'S BICOMMUTANT THEOREM. Let A be a sub-*-algebra of L(ℋ) containing I. Then the bicommutant A'' is exactly the strong (= weak) closure of A. In particular, A is a von Neumann algebra iff it is equal to its bicommutant.
Note that it is not quite obvious that the strong closure of an algebra is an algebra!
PROOF. It is clear that the strong closure of A is contained in A''. We must prove conversely that, for every a ∈ A'', every ε > 0 and every finite family x_1, ..., x_n of elements of ℋ, there exists b ∈ A such that ||bx_i − ax_i|| ≤ ε for all i.
We begin with the case of one single vector x. Let K be the closed subspace of ℋ generated by all bx, b ∈ A; K is closed under the action of A, and the same is true for K⊥ (if <y, bx> = 0 for every b ∈ A, we also have for c ∈ A <cy, bx> = <y, c*bx> = 0). This means that the orthogonal projection p on K commutes with every b ∈ A; otherwise stated, p ∈ A'. Then a ∈ A'' commutes with p, so that ax ∈ K. This means there exist a_n ∈ A such that a_n x → ax.
Extension to n vectors: let ℋ^n be the direct sum of n copies of ℋ. A linear operator on ℋ^n can be considered as an (n, n) matrix of operators on ℋ, and we call Ã the algebra consisting of all operators ã repeating a ∈ A along the diagonal (ã(y_1 ⊕ ... ⊕ y_n) = ay_1 ⊕ ... ⊕ ay_n). It is easy to see that Ã is a vNa whose commutant consists of all matrices with coefficients in A', and whose bicommutant consists of all diagonals c̃ with c ∈ A''. Then the approximation result for n vectors (x_1, ..., x_n) in ℋ follows from the preceding result applied to x_1 ⊕ ... ⊕ x_n.
COROLLARY. The von Neumann algebra generated by a subset J of L(ℋ) is (J ∪ J*)''.
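Commutants become completely computable in finite dimensions. A minimal sketch (plain Python; the choice J = {d} with d = diag(1, 2, 3) is an arbitrary illustration): since (xd − dx)_{jk} = (d_k − d_j) x_{jk}, a matrix commutes with d iff it is diagonal, so here J' is exactly the diagonal algebra, which is its own bicommutant, in agreement with the theorem.

```python
# Commutant computation for J = {d}, d = diag(1, 2, 3): since
# (xd - dx)_{jk} = (d_k - d_j) x_{jk}, a matrix commutes with d
# iff it is diagonal when the d_j are distinct.

def commutator(x, d):
    n = len(x)
    xd = [[sum(x[j][k] * d[k][m] for k in range(n)) for m in range(n)]
          for j in range(n)]
    dx = [[sum(d[j][k] * x[k][m] for k in range(n)) for m in range(n)]
          for j in range(n)]
    return [[xd[j][m] - dx[j][m] for m in range(n)] for j in range(n)]

d = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]          # distinct diagonal entries

x_diag = [[5, 0, 0], [0, -1, 0], [0, 0, 2]]    # lies in the commutant J'
x_off  = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]     # does not

flat = lambda m: [v for row in m for v in row]
print(max(abs(v) for v in flat(commutator(x_diag, d))))   # -> 0
print(max(abs(v) for v in flat(commutator(x_off, d))))    # -> 1
```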

3 We give other useful corollaries as comments, rather than formal statements (the reader is referred also to the beautiful exposition of Nelson [Nel2]). The general idea is that every "intrinsic" construction of operator theory with a uniquely defined result, when performed on elements of a vNa A, also leads to elements of A. We illustrate this by an example.
First of all, let us say that an operator with domain D, possibly unbounded, is affiliated to the vNa A if D is stable under every b ∈ A' and abx = bax for x ∈ D (thus a bounded everywhere defined operator is affiliated to A iff it belongs to A: this is a restatement of von Neumann's theorem).
Consider now an unbounded selfadjoint operator S on ℋ, and its spectral decomposition S = ∫ λ dE_λ. This decomposition is unique. More precisely, if u : ℋ → K is an isomorphism between Hilbert spaces, and T is the operator uSu⁻¹ on K, then the spectral projections of T are given by F_t = uE_t u⁻¹. Using this trivial remark, we prove by "abstract nonsense" that if S is affiliated to a vNa A, then its spectral projections E_t belong to A.
Indeed, it suffices to prove that for every b ∈ A' we have E_t b = bE_t. Since b is a linear combination of unitaries within A', we may assume b = u is unitary. Then we apply the preceding remark with K = ℋ, T = S, and get that E_t = F_t = uE_t u⁻¹, the desired result.
More generally, this applies to the so-called polar decomposition of every closed operator affiliated with A.
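The "trivial remark" on transported spectral projections is easy to test numerically. A sketch in plain Python (2×2, with an arbitrary rotation u and S = diag(2, 5)), checking that F = uEu⁻¹ is a projection satisfying TF = 2F:

```python
import math

# If T = u S u^{-1} with u unitary, the spectral projections of T are
# F_t = u E_t u^{-1}.  Here E is the spectral projection of S for the
# eigenvalue 2, and u is a rotation by the arbitrary angle 0.3.

def matmul(a, b):
    return [[sum(a[j][k] * b[k][m] for k in range(2)) for m in range(2)]
            for j in range(2)]

c, s = math.cos(0.3), math.sin(0.3)
u  = [[c, -s], [s, c]]
ut = [[c, s], [-s, c]]                 # u^{-1} = transpose for a rotation

S = [[2, 0], [0, 5]]
E = [[1, 0], [0, 0]]

T = matmul(matmul(u, S), ut)
F = matmul(matmul(u, E), ut)

dev_proj = max(abs(matmul(F, F)[j][k] - F[j][k])
               for j in range(2) for k in range(2))      # F is a projection
dev_eig = max(abs(matmul(T, F)[j][k] - 2 * F[j][k])
              for j in range(2) for k in range(2))       # T F = 2 F
print(dev_proj, dev_eig)               # -> both 0 up to rounding
```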

Kaplansky's density theorem

4 Let A be a *-algebra of operators. Since strong convergence without boundedness is not very useful, von Neumann's theorem is not powerful enough as an approximation result of elements in A'' by elements of A. The Kaplansky theorem settles this problem completely, and shows that elements in A'' of one given kind can be approximated boundedly by elements of the same kind from A.
THEOREM. For every element a from the unit ball of A'', there exists a filter a_i on the unit ball of A (not necessarily a sequence) that converges strongly to a. If a is s.a., positive, unitary, the a_i may be chosen with the same property.
PROOF (from Pedersen [Ped]). Let f be a real valued, continuous function on ℝ, so that we may define f(a) for every s.a. operator a. We say that f is strong if the mapping f(·) is continuous in the strong operator topology. Obvious examples of strong functions are f(t) = 1, f(t) = t. The set S of all strong functions is a linear space, closed in the uniform topology. On the other hand, the product of two elements of S, one of which is bounded, belongs to S. The main remark is
LEMMA. Every continuous and bounded function on ℝ is strong.
We first put h(t) = 2t/(1 + t²) and prove that it is strong. Let a, b be s.a. and put A = (1 + a²)⁻¹, B = (1 + b²)⁻¹. Then we compute as follows the difference h(b) − h(a) (forgetting the factor 2)

Bb − Aa = B[b(1 + a²) − (1 + b²)a]A = B(b − a)A + Bb(a − b)aA .

If b_i → a strongly, B_i and B_i b_i remain bounded in norm, and therefore h(b_i) → h(a).
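The algebraic identity behind this estimate can be checked directly on matrices. A sketch in plain Python (an arbitrary pair of real symmetric 2×2 matrices; note that aA = Aa, since A is a function of a):

```python
# Check of  Bb - Aa = B(b - a)A + Bb(a - b)aA  with A = (1 + a^2)^{-1},
# B = (1 + b^2)^{-1}, on an arbitrary pair of symmetric 2x2 matrices.

def mul(x, y):
    return [[sum(x[j][k] * y[k][m] for k in range(2)) for m in range(2)]
            for j in range(2)]

def add(x, y):
    return [[x[j][m] + y[j][m] for m in range(2)] for j in range(2)]

def sub(x, y):
    return [[x[j][m] - y[j][m] for m in range(2)] for j in range(2)]

def inv(x):                            # 2x2 matrix inverse
    det = x[0][0] * x[1][1] - x[0][1] * x[1][0]
    return [[x[1][1] / det, -x[0][1] / det],
            [-x[1][0] / det, x[0][0] / det]]

I = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 0.5], [0.5, -2.0]]
b = [[0.3, 1.0], [1.0, 2.0]]

A = inv(add(I, mul(a, a)))
B = inv(add(I, mul(b, b)))

lhs = sub(mul(B, b), mul(A, a))        # Bb - Aa  (Aa = aA here)
rhs = add(mul(mul(B, sub(b, a)), A),
          mul(mul(mul(B, b), sub(a, b)), mul(a, A)))

gap = max(abs(lhs[j][k] - rhs[j][k]) for j in range(2) for k in range(2))
print(gap)                             # -> 0 up to rounding
```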
This bounded function being strong, we find (by multiplication with the strong function t) that t²/(1 + t²) is strong, and so is 1/(1 + t²) by difference. Applying the Stone-Weierstrass theorem on ℝ, we find that all continuous functions on ℝ tending to 0 at infinity are strong. Then, if f is only continuous and bounded on ℝ, tf(t)/(1 + t²) tends to 0 at infinity and hence is strong and bounded. Then writing

f(t) = f(t)/(1 + t²) + t · (tf(t)/(1 + t²))

we find that f itself is strong. This applies in particular to cos t, sin t, and we have everything we need to prove the theorem.
everything we need to prove the theorem.
S.a. operators. Let b ∈ A'' be s.a. with norm ≤ 1. Since b belongs to the weak closure of A and the involution is continuous in the weak topology, it belongs to the weak closure of A_sa. This set being convex, b belongs in fact to its strong closure. We choose selfadjoint b_i ∈ A_sa converging strongly to b, and we deduce from the lemma that h(b_i) → h(b) strongly. And now h(b_i) belongs to the unit ball of A_sa, while every element a in the unit ball of A''_sa can be represented as h(b), since h induces a homeomorphism of [−1, 1] to itself.
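The last claim is elementary but worth checking: h maps [−1, 1] monotonically onto itself, with the explicit inverse t = s/(1 + √(1 − s²)), the root of st² − 2t + s = 0 lying in [−1, 1]. A sketch in plain Python:

```python
import math

# h(t) = 2t/(1 + t^2) restricted to [-1, 1]: increasing, fixes the
# endpoints, and is inverted by t = s/(1 + sqrt(1 - s^2)).

def h(t):
    return 2 * t / (1 + t * t)

def hinv(s):
    return s / (1 + math.sqrt(1 - s * s))

grid = [-1 + i / 500 for i in range(1001)]
worst = max(abs(h(hinv(s)) - s) for s in grid)
increasing = all(h(grid[i]) < h(grid[i + 1]) for i in range(1000))
print(h(-1.0), h(1.0), worst, increasing)   # -> -1.0 1.0 ~0 True
```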
The same reasoning gives a little more: if a is positive, so is b. Then we also have h⁺(b_i) → h⁺(b) = a, and these approximating operators are positive too.
Unitaries. If u ∈ A'' is unitary, it has a spectral representation u = ∫_{−π}^{π} e^{it} dE_t with spectral projections E_t ∈ A''. The s.a. operator a = ∫_{−π}^{π} t dE_t can be approximated strongly by s.a. operators a_j ∈ A, and then the unitaries e^{ia_j} converge strongly to u.
Arbitrary operators of norm ≤ 1. Consider a ∈ A'' of norm ≤ 1. On ℋ ⊕ ℋ, let the *-algebra Ã consist of all matrices ( j k ; l m ) with j, k, l, m ∈ A. Its commutant Ã' consists of the matrices ( p 0 ; 0 p ) with p ∈ A', and its bicommutant Ã'' of all such matrices with entries in A''. The operator ( 0 a ; a* 0 ) is s.a. with norm ≤ 1 and belongs to the bicommutant Ã''. From the result above it can be approximated by s.a. operators ( r s ; s* t ) of norm ≤ 1, with r, s, t ∈ A. Then s has norm ≤ 1 and converges strongly to a, and the proof gives an additional result: the approximants s → a can be chosen so that s* → a*.
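The dilation used here can be made concrete: for a 2×2 matrix a, the operator M = ( 0 a ; a* 0 ) is self-adjoint, and M² = diag(aa*, a*a), so the operator norm of M equals that of a. A sketch in plain Python with the arbitrary choice a = [[0, 2], [0, 0]] (norm 2, a*a = diag(0, 4)):

```python
# The dilation M = ( 0 a ; a* 0 ) of a 2x2 matrix a is self-adjoint with
# M^2 = diag(a a*, a* a), so the spectrum of M^2 is that of a*a together
# with a a*, and the norm of M equals the norm of a.

a = [[0j, 2 + 0j], [0j, 0j]]
astar = [[a[k][j].conjugate() for k in range(2)] for j in range(2)]

M = [[0j] * 4 for _ in range(4)]       # assemble the 4x4 dilation
for j in range(2):
    for k in range(2):
        M[j][2 + k] = a[j][k]
        M[2 + j][k] = astar[j][k]

selfadjoint = all(M[j][k] == M[k][j].conjugate()
                  for j in range(4) for k in range(4))

M2 = [[sum(M[j][k] * M[k][m] for k in range(4)) for m in range(4)]
      for j in range(4)]
diag = [M2[j][j] for j in range(4)]
offdiag_zero = all(M2[j][k] == 0 for j in range(4) for k in range(4) if j != k)

print(selfadjoint, offdiag_zero, diag)
# -> True True [(4+0j), 0j, 0j, (4+0j)]
```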
REMARK. Assume A was closed in the "normal" topology. We have just seen that the unit ball B of A'' is the strong or weak closure of the unit ball of A. But on the unit ball of L(ℋ) the weak and normal topologies coincide, and therefore B is contained in A. Thus A = A'' and therefore A is a vNa.

The predual of a von Neumann algebra

5 The Banach space L(ℋ) is the dual of T(ℋ), the space of trace class operators. Therefore every subspace A ⊂ L(ℋ) which is closed for the normal topology is itself a dual, that of the quotient Banach space T/A⊥. In the case A is a vNa, this space is called the predual of A and denoted by A_*. The predual can be described as the space of all complex linear functionals on A which are continuous in the normal topology. We call them normal linear functionals below.
According to the Hahn-Banach theorem, every normal functional μ on A can be extended to a normal functional on L(ℋ), and therefore can be represented as μ(a) = Tr(aw) where w is a trace class operator. In particular, decomposing w we see that every normal linear functional is a complex linear combination of four normal laws (states). In the case of laws, the following lemma (which is not essential for the sequel) gives a more precise description.
LEMMA. Every normal law μ on A can be extended as a normal law on L(ℋ); hence the trace class operator w can be taken positive.
PROOF. For simplicity, we assume that ℋ is separable. Since μ(a*) is the complex conjugate of μ(a), μ is also associated with the trace class operator w*, and replacing w by (w + w*)/2 we may assume w is s.a.. Then we choose an o.n.b. (e_n) such that we_n = λ_n e_n with Σ_n |λ_n| < ∞. Consider the direct sum ℋ̃ of countably many copies ℋ_n of ℋ, on which A operates diagonally (ã(⊕_n x_n) = ⊕_n ax_n). Let 1 = c ⊕_n √|λ_n| e_n ∈ ℋ̃, where c is a normalization constant, and ν be the corresponding state ν(a) = <1, ã1>. We have

μ(a) = Σ_n λ_n <e_n, ae_n> ,   ν(a) = c² Σ_n |λ_n| <e_n, ae_n> .

Let K be the closed (stable) subspace generated by Ã1 in ℋ̃. Then μ is dominated by a scalar multiple of ν, and therefore (by a simple lemma proved in subs. 8 below) is of the form μ(a) = <T1, ã1> where T is a bounded positive operator on K which commutes with the representation. Then putting j = √T 1 we have μ(a) = <j, ãj>. Taking a = I we note that j is a unit vector. In the last expression we may replace ã by b ∈ L(ℋ) operating diagonally on ℋ̃, and we have found an extension of μ as a law on L(ℋ).

6 In this subsection, we prove that for a positive linear functional on A, being normal is equivalent to being order continuous, an intrinsic property, in the sense that it can be defined within A, independently of the way it acts on the Hilbert space ℋ. As a consequence, the predual A_* and the normal topology itself are intrinsic.
In the literature, properties of a concrete vNa A of operators on ℋ which depend explicitly on ℋ are often called spatial properties.
Let (a_i) be a family of positive elements of A, which is filtering to the right and norm bounded. Then for x ∈ ℋ the limit lim_i <x, a_i x> exists, and by polarization also lim_i <y, a_i x>. Therefore lim_i a_i = a exists in the weak topology, and of course belongs to A. Since ||a_i|| is bounded, the convergence also takes place in the normal topology. It is also strong, as shown by the relation

||(a − a_i)x||² ≤ ||a − a_i|| <x, (a − a_i)x> → 0

(the inequality b² ≤ ||b|| b for b ≥ 0 is used here). On the other hand, a can be described without reference to ℋ as the l.u.b. of the family (a_i) in A.
Let μ be a positive linear functional on A. If a = sup_i a_i implies μ(a) = sup_i μ(a_i), we say that μ is order continuous. Since order convergence implies normal convergence, every normal functional is also order continuous. We are going to prove the converse.
We assume that μ is order continuous. We associate with every positive element b ∈ A a (complex!) linear functional μ_b(a) = μ(ab), and if μ_b is normal we say that b is regular. Thus our aim is to prove that I is regular. We achieve this in three steps.

1) Let (b_i) be a norm bounded, increasing family of regular elements. We prove that its upper bound b is regular.
Since the predual A_* is a Banach space, it suffices to prove that μ_{b_i} converges in norm to μ_b. For a in the unit ball of A we have

|μ(a(b − b_i))| = |μ(a(b − b_i)^{1/2}(b − b_i)^{1/2})| ≤ μ(a(b − b_i)a*)^{1/2} μ(b − b_i)^{1/2}   (Schwarz)

The first factor is bounded independently of a, the second one tends to 0 since μ is order continuous.
2) Let m ∈ A be positive and ≠ 0. Then there exists a regular element q ≠ 0 dominated by m.
We choose a vector x such that <x, mx> > μ(m), and choose p positive and ≤ m such that <x, px> ≤ μ(p). Since μ is order continuous, Zorn's lemma allows us to choose a maximal p. Let us prove that q = m − p fulfills our conditions. First of all, it is positive, dominated by m, and the choice of x forbids that q = 0. The maximality of p implies that for every positive u ≤ q we have μ(u) ≤ <x, ux>, and this is extended to the case u ≤ Cq by homogeneity.
We now prove that μ_q is strongly continuous, hence also weakly continuous, and finally normal. For a ∈ A we have |μ(aq)| ≤ μ(I)^{1/2} μ(qa*aq)^{1/2}. On the other hand, qa*aq ≤ ||a||² q² ≤ ||a||² ||q|| q, a constant times q. Therefore

μ(qa*aq) ≤ <x, qa*aq x> = ||aqx||² .

So finally we have |μ_q(a)| ≤ μ(I)^{1/2} ||aqx||, whence the strong continuity of μ_q.
3) Zorn's lemma and 1) allow us to choose a maximal regular element b such that 0 ≤ b ≤ I. Using 2) the only possibility is b = I, and we are finished.

About integration theory

7 The standard theory of non-commutative integration is developed for faithful normal laws (states) on a von Neumann algebra. However, as usual with integration theory, the problem is to start from a functional on a small space of functions and to extend it to a larger space. We are going to discuss (rather superficially) this problem.
First of all, we consider a C*-algebra, denoted by A°, with a probability law μ. The GNS construction provides us with a Hilbert space ℋ on which A° acts (we simply write ax for the action of a ∈ A° on x ∈ ℋ), and a cyclic vector 1 such that μ(a) = <1, a1>. Since the kernel N of the GNS representation is a two sided ideal and the quotient A°/N is a C*-algebra, we do not lose generality by assuming the GNS representation is 1-1. Thus A° appears as a norm closed subalgebra of L(ℋ). Let us denote by A its bicommutant, a von Neumann algebra containing A°, and to which μ trivially extends as the (normal) law μ(a) = <1, a1>.
What have we achieved from the point of view of integration theory? Almost nothing. Think of the commutative case. Let F° be a boolean algebra of subsets of some set E, and let μ be a finitely additive law on F°. The space of complex elementary functions (linear combinations of finitely many indicators of sets) is a unital *-algebra, and its completion in the uniform norm is a commutative C*-algebra A°. If we perform all the construction above, we get a normal law on some huge commutative vNa containing A°, and the essential assumption of integration theory, namely the countable additivity of μ, is completely irrelevant!
Besides that, to develop a rich integration theory, it seems necessary to use a faithful law, which cannot be achieved simply (as we pointed out in §2 subs. 3) by a further passage to the quotient. To demand that the state be faithful on A° is not sufficient to make sure it will be faithful on A. However, the following easy lemma gives a necessary condition:
LEMMA. Assume the law μ is faithful, and (a_i)_{i∈I} is a norm bounded family such that lim_i μ(a_i* a_i) = 0 (along some filter on I). Then we also have lim_i μ(a_i b) = 0 for b ∈ A.
PROOF. We use the language of the GNS representation. It is sufficient to prove that μ(a_i b) → 0 along every ultrafilter on the index set I, finer than the given filter. Since the operators a_i are bounded in norm, they have a limit a in the weak topology of operators (even in the normal one), and a_i 1 → a1 weakly. The condition μ(a_i* a_i) → 0 means that lim_i a_i 1 = 0 in the strong sense, and therefore a1 = 0. Since the vector 1 is separating, this implies a = 0. On the other hand μ(a_i b) = <1, a_i b1> tends to <1, ab1> = 0.
The same conclusion would be true under the hypothesis that the norm bounded family a_i converges weakly to 0: this will be useful in subsection 9.
Knowing this, we assume that the probability law μ on the C*-algebra A° satisfies the property

(7.1) ||a_i|| bounded, lim_i μ(a_i* a_i) = 0 implies lim_i μ(a_i b) = 0 for every b ∈ A°.

(It would be sufficient to assume this for sequences: the easy proof is left to the reader). In particular, (μ(a*a) = 0) implies (μ(ab) = 0) for every b, and therefore the set of all "left negligible elements" is the same as the kernel N of the GNS representation. Passing to the quotient, we may assume that the GNS representation is faithful, and imbed A° in L(ℋ). On the other hand, the property (7.1) is preserved when we pass to the quotient.
Let us prove now that the cyclic vector 1 is separating for the von Neumann algebra A, the bicommutant of A°.
PROOF. Consider a ∈ A such that a1 = 0. According to Kaplansky's theorem, there exists a norm bounded family of elements a_i of A° that converges strongly to a, and in particular lim_i a_i b1 = ab1 for every b ∈ A°. Taking b = I we have μ(a_i* a_i) → 0, and the same for c* a_i (c ∈ A°) since left multiplications are bounded. Then we deduce from (7.1) that

0 = lim_i μ(c* a_i b) = lim_i <c1, a_i b1> = <c1, ab1> .

Since b, c are arbitrary and 1 is cyclic, we have a = 0.


From now on, we use the notation |a| for the operator norm, and keep the notation ||a|| for the norm μ(a*a)^{1/2}.

8 In the commutative case, a space L^∞(μ) can be interpreted as the space of essentially bounded functions, and as a space of measures possessing bounded densities with respect to μ. In the non-commutative case, if we think of A° as consisting of "continuous functions", the first interpretation of L^∞ becomes the von Neumann algebra A, and we are going to show that the second interpretation leads, when A has a cyclic and separating vector 1, to the commutant of A.
A positive linear functional π on A may be said to have a bounded density if it is dominated by a constant c times μ. Then we have for a, b ∈ A

(8.1) |π(b*a)| ≤ π(b*b)^{1/2} π(a*a)^{1/2} ≤ c ||b|| ||a||


Forgetting the middle term, the inequality is meaningful for a complex linear functional π. If it is satisfied, we say that π is a complex measure with bounded density. Let D be the dense subspace A1 of ℋ (in the GNS construction, this is nothing but another name for A itself). Since 1 is separating we may define on D a sesquilinear form p(b1, a1) = π(b*a); it is bounded, and can be extended to ℋ. Given any unitary u ∈ A we have p(ub1, ua1) = p(b1, a1), and by continuity this can be extended to p(uy, ux) = p(y, x) for x, y ∈ ℋ. With the bounded form p on ℋ × ℋ we associate the unique bounded operator α on ℋ such that p(y, x) = <αy, x>. Then the unitary invariance of p means that α commutes with every unitary u ∈ A and thus belongs to A'. Otherwise stated, the commutant A' of A appears as a space of measures on A, given by the formula

(8.2) π(a) = p(1, a1) = <α1, a1> .

Conversely, any linear functional on A of the form (8.2) satisfies (8.1).


REMARK. The name of "measures with bounded density" for functionals of the form π(a) = μ(αa) associated with α ∈ A' is reasonable. First assume α ≥ 0. Then π is positive, and for a ∈ A_+ we have

0 ≤ π(a) = ||√a √α 1||² ≤ |√α|² ||√a 1||² = |√α|² μ(a) .

Thus π is dominated by a constant times μ. On the other hand, every element in A' can be written as p + iq using s.a. elements, each of which can be decomposed into two positive elements of A' to which the preceding reasoning applies.
The symmetry between A and A' is very clear in the following lemma (which can easily be improved to show that 1 is cyclic (separating) for A iff it is separating (cyclic) for A').
LEMMA. If the vector 1 is cyclic and separating for A, it has the same properties with respect to A'.
PROOF. 1) Let K be the closed subspace generated by A'1. The projection on K⊥ commutes with A' (hence belongs to A) and kills 1; since 1 is separating for A, this projection is 0. Therefore K⊥ = {0} and 1 is cyclic for A'.
2) Consider α ∈ A' such that α1 = 0. Then for a ∈ A we have αa1 = aα1 = 0, hence α = 0 on the dense subspace D, thus α = 0, and we see that 1 is separating for A'.

The linear Radon-Nikodym theorem

9 As we have just seen, measures absolutely continuous w.r.t. μ are naturally defined by "densities" belonging to A'. But if we insist on having densities in A, it is unreasonable to define the "measure with (s.a.) density m" by a formula like π(a) = μ(ma), since it would give a complex result for non commuting s.a. operators. We have a choice between non linear formulas like π(a) = μ(√m a √m) (for m ≥ 0, leading to a positive linear functional) and linear ones like

(9.1) π(a) = ½ μ(ma + am) ,

which give a real functional on s.a. operators if the "density" m is s.a., but usually not a positive functional for m ≥ 0. The possibility of representing "measures" by such a formula is an important lemma for the Tomita-Takesaki theory.
However, if we want to construct a s.a. operator from a pair of s.a. operators a, m, we are not restricted to (9.1): we can use any linear combination of the form kam + k̄ma where k is complex. For this reason, we put

(9.2) π_{m,k}(a) = ½ μ(kam + k̄ma)

(written simply as π if there is no ambiguity). Then we can state Sakai's linear Radon-Nikodym theorem as follows.
THEOREM. Let φ be a positive linear functional on A dominated by μ. For every k such that Re(k) > 0 there exists a unique positive element m ∈ A such that φ = π_{m,k}, and we have m ≤ Re(k)⁻¹ I.
PROOF. Uniqueness. Let us assume that μ(kam + k̄ma) = 0 for a ∈ A_sa. Since Re(k) ≠ 0, the case a = m gives μ(m²) = 0. Since m is s.a. and μ is faithful this implies m = 0.
Existence. We may assume that Re(k) = 1. The set M of all m ∈ A_sa such that 0 ≤ m ≤ I is compact and convex in the normal topology. For m ∈ M the linear functional π_m from (9.2) on A is normal; indeed, if a family (a_i) from the unit ball of A is weakly convergent to 0, the same is true for ma_i and a_i m, and therefore π_m(a_i) → 0. Also, the 1-1 mapping m ↦ π_m is continuous from M to A_*: this amounts to saying that if a family (m_i) from the unit ball of A converges weakly to 0, then μ(m_i a) and μ(am_i) tend to 0, and this was mentioned as a remark following the lemma in subsection 7.
Let the real predual consist of those normal functionals on A which are real valued on A_sa. It is easy to see that it is a real Banach space with dual A_sa, and we may embed M in it, via the preceding 1-1 mapping, as a convex, weakly compact subset.
We want to prove that φ ∈ M. According to the Hahn-Banach theorem, this can be reduced to the following property:

for every a ∈ A_sa such that <M, a> ≤ 1 we have <φ, a> ≤ 1.

The condition <M, a> ≤ 1 means that ½ μ(kam + k̄ma) ≤ 1 for every m ∈ A_sa such that 0 ≤ m ≤ I. We decompose a into a⁺ − a⁻ and take for m the projection 1_{]0,∞[}(a). Then ma = am = a⁺, and the condition implies that ½(k + k̄) μ(a⁺) ≤ 1. Since we assumed that Re(k) = 1 we have μ(a⁺) ≤ 1. On the other hand, φ is a positive linear functional dominated by μ, and therefore φ(a) ≤ φ(a⁺) ≤ μ(a⁺) ≤ 1.
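The key computation here, that m = 1_{]0,∞[}(a) satisfies ma = am = a⁺, can be verified on a symmetric matrix. A sketch in plain Python, with the arbitrary choice a = u diag(2, −1) uᵀ for a rotation u:

```python
import math

# With m = 1_{(0,inf)}(a) one has m a = a m = a^+ : check on
# a = u diag(2, -1) u^T, u a rotation by the arbitrary angle 0.3.

def matmul(x, y):
    return [[sum(x[j][k] * y[k][m] for k in range(2)) for m in range(2)]
            for j in range(2)]

c, s = math.cos(0.3), math.sin(0.3)
u  = [[c, -s], [s, c]]
ut = [[c, s], [-s, c]]

def conjugate_diag(d):                 # u diag(d) u^T
    return matmul(matmul(u, [[d[0], 0], [0, d[1]]]), ut)

a     = conjugate_diag([2, -1])
m     = conjugate_diag([1, 0])        # spectral projection on (0, infinity)
aplus = conjugate_diag([2, 0])        # positive part a^+

gap_left = max(abs(matmul(m, a)[j][k] - aplus[j][k])
               for j in range(2) for k in range(2))
gap_right = max(abs(matmul(a, m)[j][k] - aplus[j][k])
                for j in range(2) for k in range(2))
print(gap_left, gap_right)             # -> both 0 up to rounding
```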

The KMS condition

10 Let ℋ be a Hilbert space, and let H be a "Hamiltonian": a s.a. operator, generally unbounded and positive (or at least bounded from below), generating a unitary group U_t = e^{itH} on ℋ, and a group of automorphisms of the C*-algebra L(ℋ)

η_t(a) = e^{itH} a e^{−itH} .

One of the problems in statistical mechanics is the construction and the study of equilibrium states for this evolution. Von Neumann suggested that the natural equilibrium state at (absolute) temperature T is given by

μ(a) = Tr(aw) with w = C e^{−H/κT}

where κ would be Boltzmann's constant if we had been using the correct physical notations and units. The coefficient 1/κT is frequently denoted by β. However, this definition is meaningful only if e^{−βH} is a trace class operator, a condition which is not generally satisfied for large quantum systems. For instance, it is satisfied by the number operator of a finite dimensional harmonic oscillator, but not by the number operator on boson Fock space.
The KMS (Kubo-Martin-Schwinger) condition is a far reaching generalization of the preceding idea, which consists in forgetting about traces, and retaining only the fact that e^{−βH} is an analytic extension of e^{itH} to purely imaginary values of t which is well behaved w.r.t. μ. From our point of view, the most remarkable feature of the KMS property is the natural way, completely independent of its roots in statistical mechanics, in which it appears in the Tomita-Takesaki theory. It seems to be a very basic idea indeed!
Given an automorphism group (η_t) of a C*-algebra A, we say that a ∈ A is entire if the function η_t a on the real axis is extendable as an A-valued entire function on ℂ. Every strongly continuous automorphism group of a C*-algebra has a dense set of entire elements (see subsection 11).
DEFINITION. A law μ on the C*-algebra A satisfies the KMS condition at β w.r.t. the automorphism group (η_t) if there exists a dense set of entire elements a such that we have, for every b ∈ A and t real

(10.1) μ((η_t a) b) = μ(b(η_{t+iβ} a)) .

One can always assume that β = 1, replacing if necessary η_t by η_{tβ} (this applies also to β < 0).
One can prove that, whenever e^{−βH} is a trace class operator, the state defined above satisfies the KMS property at β.
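For a finite dimensional Hamiltonian this last claim can be checked directly against (10.1). A sketch in plain Python (H is taken diagonal, so that η_z acts entrywise; the energy levels, matrices and parameters are arbitrary illustrative choices):

```python
import cmath

# KMS check for the Gibbs state mu(x) = Tr(x w), w = exp(-beta H)/Z, with
# eta_z(x) = e^{izH} x e^{-izH}.  H is diagonal, so eta_z acts entrywise:
# (eta_z x)_{jk} = e^{iz(E_j - E_k)} x_{jk}.

E = [0.0, 0.7, 1.9]                    # arbitrary energy levels
beta, t = 1.3, 0.4
n = len(E)

Z = sum(cmath.exp(-beta * e) for e in E)
w = [cmath.exp(-beta * e) / Z for e in E]   # diagonal of the density operator

def eta(z, x):
    return [[cmath.exp(1j * z * (E[j] - E[k])) * x[j][k] for k in range(n)]
            for j in range(n)]

def mu_prod(x, y):                     # Tr(x y w) for diagonal w
    return sum(x[j][k] * y[k][j] * w[j] for j in range(n) for k in range(n))

a = [[1, 2j, 0], [0.5, -1, 1j], [0, 1, 3]]
b = [[0, 1, 1], [1j, 2, 0], [0.3, 0, -1j]]

lhs = mu_prod(eta(t, a), b)               # mu((eta_t a) b)
rhs = mu_prod(b, eta(t + 1j * beta, a))   # mu(b (eta_{t+i beta} a))
print(abs(lhs - rhs))                     # -> 0.0 up to rounding
```

The two sides agree exactly because the analytic continuation to t + iβ produces the factor e^{−β(E_j − E_k)}, which shifts the Boltzmann weights; this is the trace computation alluded to above.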
There is another version of the KMS condition, which has the advantage of applying to arbitrary elements instead of entire ones. We say that a function is holomorphic on a closed strip of the complex plane if it is holomorphic in the open strip and continuous on the closure.
THEOREM. The KMS condition is equivalent to the following one: for arbitrary a, b ∈ A, there exists a bounded function f, holomorphic on the horizontal unit strip {0 ≤ Im(z) ≤ 1}, such that

(10.2) f(t + i0) = μ(b(η_t a)) , f(t + i1) = μ((η_t a) b) .

PROOF. Assume the KMS condition holds. We may approximate a by a sequence (a_n) of entire elements satisfying (10.1), and we set f_n(z) = μ(b(η_z a_n)), an entire function. We prove that f_n converges uniformly on the horizontal unit strip, its limit being then the function f. Since the analytic functions η_{t+z} a_n and η_t η_z a_n (for real t) are equal for z ∈ ℝ, they are equal, and

|f_n(t + z)| ≤ |b| |η_{t+z} a_n| = |b| |η_z a_n|

is bounded in every horizontal strip, and therefore f_n satisfies the maximum principle on horizontal strips. On the other hand

|μ(b η_t(a_n − a_m))| ≤ |b| |a_n − a_m|

and using KMS

|μ(b η_{t+i}(a_n − a_m))| = |μ((η_t(a_n − a_m)) b)| ≤ |b| |a_n − a_m| .

Therefore f_n converges uniformly.
Conversely, assume (10.2) holds and consider an entire element a. The function f(z) − μ(b(η_z a)) is holomorphic in the open unit strip and its limit on the real axis is 0. By the reflection principle, it is extendable to the open strip {−1 < Im(z) < 1}, equal to 0 on the real axis, hence equal to 0. Therefore we also have f(t + i) = μ(b(η_{t+i} a)). On the other hand, by (10.2) we have f(t + i) = μ((η_t a) b), and (10.1) is established. Note that it has been proved for an arbitrary entire element a.
We draw two consequences from the KMS condition. The first one is the invariance of μ. Let a be an entire element. We apply KMS with b = I. We have just remarked that the function μ(η_z a) is bounded on horizontal strips. On the other hand, KMS implies that it is periodic with period i. An entire bounded function being constant, μ is invariant.
The second consequence is the regularity property (7.1):

b_j bounded, lim_j μ(b_j* b_j) = 0 implies, for all a ∈ A, lim_j μ(b_j a) = 0.

We may assume a is an entire element, and pass to the limit. On the other hand, by KMS μ(b_j a) = μ((η_{−i} a) b_j), and we are reduced to the trivial case of left multiplication by a fixed element.

11 Finally, we mention the result on entire vectors we used above. Let (U_t) be any
strongly continuous group of contractions on a Banach space B. For every b ∈ B set
b_n = c_n ∫ e^{−ns²} U_s b ds, where c_n is a normalization constant. Then we have

U_t b_n = c_n e^{−nt²} ∫_{−∞}^{∞} e^{2nst − ns²} U_s b ds

and it is trivial to replace t by z in this formula. On the other hand, strong continuity
of the group implies that for every b we have b_n → b in norm. We remark that
this function is norm bounded in vertical strips of finite width.
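As a quick sanity check of this regularization (our own illustration, not part of the text), take the scalar unitary group U_s b = e^{is} b on ℂ. Here b_n = c_n ∫ e^{−ns²} U_s b ds can be computed in closed form with the normalization c_n = √(n/π), giving b_n = e^{−1/(4n)} b → b, and the shift identity above can be verified numerically:

```python
import math, cmath

def riemann(f, lo, hi, steps):
    # simple midpoint rule for a complex-valued integrand
    h = (hi - lo) / steps
    return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

def b_n(n, b=1.0):
    # b_n = c_n * integral of e^{-n s^2} U_s b ds, with U_s b = e^{is} b
    c = math.sqrt(n / math.pi)
    return c * riemann(lambda s: cmath.exp(-n * s * s + 1j * s) * b, -8, 8, 100_000)

# closed form: c_n * sqrt(pi/n) * e^{-1/(4n)} = e^{-1/(4n)}
for n in (1, 4, 16):
    assert abs(b_n(n) - math.exp(-1 / (4 * n))) < 1e-6

# shift identity: U_t b_n = c_n e^{-n t^2} * integral of e^{2nst - n s^2} U_s b ds
n, t = 3, 0.3
c = math.sqrt(n / math.pi)
lhs = cmath.exp(1j * t) * b_n(n)            # U_t applied to b_n
rhs = c * cmath.exp(-n * t * t) * riemann(
    lambda s: cmath.exp(2 * n * s * t - n * s * s + 1j * s), -8, 8, 100_000)
assert abs(lhs - rhs) < 1e-6

# convergence b_n -> b (= 1) in norm
assert abs(b_n(100) - 1.0) < 0.01
```

The Gaussian weight makes each b_n an entire vector while keeping b_n close to b, which is exactly the role it plays in the proof above.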

§4. THE TOMITA-TAKESAKI THEORY

Nearly all papers on non-commutative integration begin with the statement "let 𝒜
be a von Neumann algebra with a faithful normal state ...". In the language of GNS
representations, the statement becomes "Let 𝒜 be a von Neumann algebra of operators
on a Hilbert space ℋ, with a cyclic and separating vector ...". This is also the starting
point for the Tomita-Takesaki theorem.

Though the theory is extremely beautiful, it does not apply to our main example, that
of the vacuum state on the CCR algebra acting on simple Fock space, which isn't a
separating state. It applies to the vacuum state of non-Fock representations.

We did our best in the following sections to justify this starting point : μ being a
law on a C*-algebra 𝒜°, we know from the GNS theorem that it can be interpreted as
the law associated with a cyclic vector 1 in a Hilbert space ℋ on which 𝒜° operates.
Then we showed that a weak regularity assumption on μ allows to reduce to the case
1 is also separating for the vNa 𝒜 generated by 𝒜°.

We follow the approach of Rieffel-van Daele [RvD1], which avoids almost completely
the techniques of unbounded closable operators. The beginning of their paper has
already been given in Appendix 3. It should be mentioned that the same methods
lead to a simple proof ([RvD2]) of the commutation theorem (𝒜 ⊗ ℬ)′ = 𝒜′ ⊗ ℬ′ for
tensor products of von Neumann algebras. There is also a Tomita-Takesaki theorem for
weights on vN algebras, which are the analogue of σ-finite measures. This result does
not concern us here.
Before we start, let us describe the aim of this section. In classical integration theory,
bounded measures absolutely continuous w.r.t. the law μ on L²(μ) are represented as
scalar products

m(f) = ∫ p(x) f(x) μ(dx)

for p ∈ L¹, and in particular measures with bounded densities are represented by means
of elements of L^∞. Thus we have an antilinear 1-1 mapping p ↦ p̄μ from functions
to measures with bounded densities. In the noncommutative setup, this becomes an
antilinear 1-1 mapping between the vNa 𝒜 and its commutant 𝒜′, called the modular
conjugation J.
4. Tomita-Takesaki theory 255

This conjugation then gives a general meaning to the preceding formula, at least in
the case of a positive measure π absolutely continuous w.r.t. μ (translation : a positive
element of the predual of 𝒜). Its positive density in L¹ is interpreted as a product q̄q
for some q ∈ L², i.e. a scalar product π(f) = < q, fq >. A similar representation holds
in the non commutative case, but we will only state this result, which is too long to
prove.

Finally, when one has defined conveniently L¹, L² and L^∞, the way to L^p is open
by interpolation and duality. We do not include the theory here.

The main operators


We will use the notations and results of the Appendix on "two events", beginning
with the definition of the real subspaces of ℋ to which it will be applied.

1 We consider ℋ as a real Hilbert space, with the scalar product (x, y) = ℜe < x, y >.
We introduce the real subspaces

A₀ = 𝒜_sa 1 ,  A′₀ = 𝒜′_sa 1 ,  B₀ = iA₀ ,  B′₀ = iA′₀ .

We denote the corresponding closures with the same letters, omitting the index 0. Since
1 is cyclic for 𝒜, A₀ + B₀ (and therefore A + B) is dense in ℋ. Similarly, A′ + B′ is
dense.
The following lemma translates a little of the algebra into a geometric property.
LEMMA. We have A^⊥ = B′, and similarly (A′)^⊥ = B.

PROOF. Consider a ∈ 𝒜_sa, α ∈ 𝒜′_sa. Since the product of two commuting s.a. operators
is s.a., < a1, iα1 > is purely imaginary, and this is translated as the orthogonality of A
and B′.

To say that the orthogonal of A is B′ and no larger amounts to saying that the
only x ∈ ℋ orthogonal to A and B′ is x = 0. As often, this will be proved by a matrix
trick.

We let a ∈ 𝒜 act on ℋ ⊕ ℋ as the matrix ã = ( a 0 ; 0 a ), and we put x̃ = (1, x). Let
j be the (complex) projection on the subspace K generated by all vectors ã x̃ (a ∈ 𝒜);
since it commutes with each ã, it can be written as a matrix j = ( α γ ; γ* β ) with α, β ∈ 𝒜′_sa
(more precisely, 0 ≤ α, β ≤ I), and γ ∈ 𝒜′. From the definition of x we deduce three
simple properties.

1) For a ∈ 𝒜_sa, < x, a1 > is purely imaginary. Otherwise stated

< x, a1 > = − < a1, x > = − < 1, ax > .

The equality between the extreme members is a ℂ-linear property, and therefore
extends from 𝒜_sa to 𝒜. Then it is interpreted as the complex orthogonality of
(x, 1) to the vectors ã x̃, or as the property j(x, 1) = 0. In particular

(i)  αx + γ1 = 0 ,  implying  < x, αx > = − < x, γ1 > .



2) For c ∈ 𝒜′_sa, < x, c1 > is real. As above, we deduce

< x, c1 > = < c1, x > = < 1, cx > .

The equality between the extreme members is a ℂ-linear property, and therefore
extends to c ∈ 𝒜′. Taking then c = γ we have

(ii)  − < x, γ1 > = − < 1, γx > .

3) Since I ∈ 𝒜, x̃ belongs to the subspace K, therefore j x̃ = x̃. In particular

(iii)  α1 + γx = 1 ,  implying  − < 1, γx > = − < 1, 1 − α1 > .

The relations (i)-(iii) taken together imply < x, αx > = − < 1, 1 − α1 >, an equality
between a positive and a negative number. Therefore < x, αx > = 0, implying √α x = 0,
then αx = 0. Similarly the second relation implies (I − α)1 = 0. Since 1 is separating
for 𝒜′ we have α = I, and finally x = 0.
2 In this subsection we define the modular symmetry and the modular group. We
refer to Appendix 3 for some details (note that we use capital letters here for the main
operators).

The real subspaces A, B are not in "general position", since A ∩ B^⊥ = A ∩ A′
contains c1 for every c belonging to the center of 𝒜, and similarly A^⊥ ∩ B contains
ic1. We have

(2.1)  A ∩ B = {0} = A^⊥ ∩ B^⊥ .

The projections on A, B being denoted by P, Q, we put

S = P − Q ;  C = P + Q − I .

These operators are symmetric and bounded, such that S² + C² = I, SC + CS = 0,
and their spectra are contained in [−1, 1]. The operator S being injective, we may
define J = sgn(S), which will be called the modular symmetry. We also put D = |S|,
which commutes with P, Q, S, C, J. On the other hand, we have

(2.2)  JP = (I − Q) J ,  JQ = (I − P) J .

The operator of multiplication by i satisfies iP = Qi, so that C and D commute with
i (i.e. are complex linear) while S and J anticommute with i (are conjugate linear).
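The algebraic identities above can be tested on the simplest nontrivial model: two orthogonal projections P, Q on distinct lines in ℝ². The sketch below is our own toy illustration (a purely real model, so only the real-linear identities are tested, not the complex structure); it builds S, C, D = |S| and J = sgn(S) and checks the stated relations:

```python
import math

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(a, b, s=1):
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def neg(m):
    return [[-v for v in row] for row in m]

def close(a, b, tol=1e-9):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

def line_proj(theta):
    # orthogonal projection on the line of angle theta
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

I = [[1.0, 0.0], [0.0, 1.0]]
P, Q = line_proj(0.3), line_proj(1.1)     # two lines in "generic position"
S = add(P, Q, -1)                         # S = P - Q (injective here)
C = add(add(P, Q), I, -1)                 # C = P + Q - I

def sqrt_spd(m):
    # square root of a 2x2 symmetric positive definite matrix:
    # sqrt(M) = (M + sqrt(det M) I) / sqrt(tr M + 2 sqrt(det M))
    d = math.sqrt(m[0][0] * m[1][1] - m[0][1] * m[1][0])
    t = math.sqrt(m[0][0] + m[1][1] + 2 * d)
    return [[(m[i][j] + d * (i == j)) / t for j in range(2)] for i in range(2)]

def inv(m):
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / d, -m[0][1] / d], [-m[1][0] / d, m[0][0] / d]]

D = sqrt_spd(mul(S, S))                   # D = |S|
J = mul(S, inv(D))                        # J = sgn(S)

assert close(add(mul(S, S), mul(C, C)), I)      # S^2 + C^2 = I
assert close(mul(S, C), neg(mul(C, S)))         # SC + CS = 0
assert close(mul(J, J), I)                      # J is a symmetry
assert close(mul(J, P), mul(add(I, Q, -1), J))  # JP = (I - Q) J
assert close(mul(J, Q), mul(add(I, P, -1), J))  # JQ = (I - P) J
assert close(mul(D, C), mul(C, D))              # D commutes with C
```

In this 2×2 model D is a scalar multiple of the identity (|sin φ| I, with φ the angle between the two lines), which makes the commutation relations trivial, but the identities S² + C² = I and JP = (I − Q)J hold for any pair of projections with S injective.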
3 The operators I ± C are injective too, and we define the modular operator Δ by
the following formula

(3.1)  Δ = (I − C)/(I + C) = ∫ ((1 − λ)/(1 + λ)) dE_λ ,

where (E_λ) is the spectral decomposition of C. It is selfadjoint, positive, injective,
generally unbounded. Its square root occurs often and will be denoted by ε. The
domain of Δ contains the range of I + C. For instance, we have P1 = 1, Q1 = 0,
hence (I + C)1 = 1 and Δ1 = 1.

The modular group is the unitary group

(3.2)  Δ^{it} = (I − C)^{it} (I + C)^{−it} .

Note that the right side does not involve unbounded operators. These operators
commute with C and D, but they have the fundamental property of also commuting
with J, hence also with JD = S, with P and Q. Otherwise stated, they preserve the
two subspaces A and B, and define automorphisms of the whole geometric situation.
On the other hand, the really deep result of the Tomita theory is the fact that they
preserve, not only the closed subspaces A and B, but also A₀, B₀, A′₀, B′₀.
PROOF. It suffices to prove that J(I ∓ C)^{it} J = (I ± C)^{−it}. Introducing the spectral
decompositions U_λ and V_λ of I − C and I + C respectively, this amounts to

J (∫ λ^{it} dU_λ) J = ∫ λ^{−it} dV_λ .

Since J and C anticommute, we have J(I − C) = (I + C) J, and therefore J U_λ J = V_λ.
The result then follows from the fact that J is conjugate linear.

If we exchange the vNa 𝒜 with its commutant 𝒜′, the space A gets exchanged with
A′ = B^⊥ (this is almost the only place where we need the precise result of subsection
1). Thus P′ = I − Q, Q′ = I − P, S′ = S, J′ = J, C′ = −C, and Δ′ = Δ^{−1}. This
remark will be used at the very end of the proof.

Interpretation of the adjoint

4 To simplify notation, we often identify a ∈ 𝒜 with the vector a1 ∈ ℋ. Then the
space A₀ + B₀ is identified with the vNa 𝒜 itself. Every x ∈ ℋ can be interpreted as
a (generally unbounded) operator Op_x : α1 ↦ αx with domain D = 𝒜′1.

The following result is very important, because it sets a bridge between the preceding
(i.e. Rieffel and van Daele's) definition of J and ε, and the classical presentation of the
Tomita-Takesaki theory, in which Jε would appear as the polar decomposition of the
(unbounded, closable) operator a + ib ↦ a − ib for a, b ∈ 𝒜_sa, which describes the
operation *.

THEOREM. For a, b ∈ 𝒜_sa, the vector a + ib belongs to Dom(ε) and we have

(4.1)  Jε(a + ib) = a − ib  (ε = Δ^{1/2}) .

PROOF. We remark that, for a ∈ 𝒜_sa,

2a = 2Pa = (I + C + S) a = (I + C + (I − C²)^{1/2} J) a = (I + C)^{1/2} x

with x = (I + C)^{1/2} a + (I − C)^{1/2} Ja, so that a belongs to the range of (I + C)^{1/2},
and therefore to the domain of ε = (I − C)^{1/2}/(I + C)^{1/2}. Then we have

2εa = (I − C²)^{1/2} a + (I − C) Ja = Da + (I − C) Ja

and applying J we have 2Jεa = JDa + (I + C) a = Sa + (I + C) a = 2Pa = 2a.

Since C, ε are complex linear and J conjugate linear, this implies Jε(ib) = −ib for
b ∈ 𝒜_sa, and (4.1) follows. □

The modular property

5 The second important result is the so called "modular property" of the unitary group
Δ^{it} relative to the real subspace A. It will be interpreted later as a KMS condition.
Recall that a function holomorphic in a closed strip of the complex plane is a function
holomorphic in the open strip and continuous in the closure. For clarity, we do not
identify a and a1, etc. in the statement of the theorem (we do in the proof).

THEOREM. Given arbitrary a, b ∈ 𝒜_sa, there exists a bounded holomorphic function
f(z) in the closed strip 0 ≤ ℑm z ≤ 1, such that for t real

f(t + i0) = < b1, Δ^{−it} a1 > ,  f(t + i1) = < Δ^{−it} a1, b1 > .

PROOF. We try to extend the relation

Δ^{−it} = (I − C)^{−it} (I + C)^{it}

to complex values of t. Since I ± C is a positive operator, (I − C)^{−iz} can be defined
as a bounded operator for ℑm z ≥ 0 and (I + C)^{iz} for ℑm z ≤ 0 : there is no global
extension outside the real axis. However, we saw at the beginning of the preceding proof
that a ∈ A can be expressed as (I + C)^{1/2} u. Then we put

Δ^{−iz} a = F(z) = (I − C)^{−iz} (I + C)^{iz + 1/2} u ,

which is well defined, holomorphic and bounded in the closed strip 0 ≤ ℑm z ≤ 1/2.
For z = t + i0 we have < b, F(t) > = < b, Δ^{−it} a >. We are going to prove that
< b, F(t + i/2) > is real. By the Schwarz reflection principle, this function will then
be extendable to the closed strip 0 ≤ ℑm z ≤ 1, and assume conjugate values at
corresponding points t + i0, t + i1 of the boundary.
We start from the relation

F(t + i/2) = (I − C)^{−it + 1/2} (I + C)^{it} u = Δ^{−it} (I − C)^{1/2} u .

On the other hand, a = (I + C)^{1/2} u, hence (I − C)^{1/2} u = Ja, and

< b, F(t + i/2) > = < b, Δ^{−it} Ja > = < b, Jc >  with  c = Δ^{−it} a ∈ A .

Now Jc belongs to (iA)^⊥ (the real orthogonal space of iA), meaning that < ib, Jc >
is purely imaginary, or < b, Jc > is real. □
Rieffel and van Daele prove that the modular group is the only strongly continuous
unitary group which leaves A invariant and satisfies the above property.

Using the linear Radon-Nikodym theorem

6 All the preceding discussion was about the closed real subspaces A, A′. We now
discuss the von Neumann algebras 𝒜, 𝒜′ themselves, and proceed to the deeper results,
translating first the Sakai linear R-N theorem, whose statement we recall. Given any
k such that ℜe k > 0, and any β ∈ 𝒜′_sa (β is a "measure", whence the greek letter)
there exists a unique b ∈ 𝒜_sa (a "function") such that for a ∈ 𝒜

(6.1)  < β1, a1 > = < 1, (k ab + k̄ ba) 1 >


(we keep writing explicitly 1 here and below, for clarity). For each k, this extends to
a conjugate linear mapping from 𝒜′ to 𝒜, which for the moment has no name except
β ↦ b. We are going to express (6.1) with the help of the operators C and S, as
follows.

LEMMA. For k = 1/2, we have

(6.2)  Pβ1 = b1 ,  Qβ1 = 0 ,  hence  Sβ1 = b1 ,

and in the general case

(6.3)  SβS = k (I + C) b (I − C) + k̄ (I − C) b (I + C) .

PROOF. The case k = 1/2 goes as follows :

< β1, a1 > = ½ < 1, (ab + ba) 1 > = ½ (< a1, b1 > + < b1, a1 >) = ℜe < a1, b1 > = (a1, b1) .

Since a is arbitrary, this means b1 = Pβ1. On the other hand, β1 is orthogonal to
B = iA, hence Qβ1 = 0.
We now consider a general k. We replace in (6.1) a by ca (a, b, c ∈ 𝒜_sa, but ca ∈ 𝒜
only), so that

< cβ1, a1 > = k < 1, cab1 > + k̄ < 1, bca1 > = k < c1, ab1 > + k̄ < cb1, a1 > .

Let λ, μ be arbitrary in 𝒜′_sa, and ℓ, m be the corresponding elements of 𝒜 through
the preceding construction with k = 1/2. In the preceding formula we replace c by ℓ
and a by m :

< ℓβ1, m1 > = k < ℓ1, mb1 > + k̄ < ℓb1, m1 > .

Since Sλ1 = ℓ1, Sμ1 = m1, and ℓ commutes with β, this is rewritten

< βSλ1, Sμ1 > = k < Sλ1, mb1 > + k̄ < ℓb1, Sμ1 > .

Since S is real selfadjoint and complex conjugate linear, it satisfies the relation
< u, Sv > = < Su, v >‾. Therefore we can move the second S in the left member
to the "bra" side, the result being < SβSλ1, μ1 >‾.

In the right member, we use the result from subsection 4 :

mb1 = Jε(mb)*1 = Jε bm1 = Jεb Sμ1 ,  similarly  ℓb1 = Jεb Sλ1 .

Then < Sλ1, mb1 > = < Sλ1, Jεb Sμ1 > = < εJ Sλ1, b Sμ1 >‾, the adjoint of the
real selfadjoint, complex conjugate linear operator Jε being εJ. On the other hand,
εJS = εD = I − C. Doing the same on the second term we get

< SβSλ1, μ1 >‾ = k < (I − C) λ1, b Sμ1 >‾ + k̄ < b Sλ1, (I − C) μ1 >‾ .



On the other hand, λ1, μ1 belong to the kernel of Q, and the relations S =
P − Q, I + C = P + Q allow us to replace S by I + C. Taking out the conjugation
sign, and using the fact that C and SβS are complex selfadjoint, we get

< λ1, SβSμ1 > = k̄ < λ1, (I − C) b (I + C) μ1 > + k < λ1, (I + C) b (I − C) μ1 > .

This is complex linear in μ, and therefore can be extended from 𝒜′_sa to 𝒜′, and
similarly in λ. Since 1 is cyclic for 𝒜′, we can write this as the operator relation (6.3).

7 We now make explicit the dependence on k, putting k = e^{iθ/2} -- the condition
ℜe(k) > 0 then becomes |θ| < π. We keep β ∈ 𝒜′_sa fixed, and denote by b_θ the
element of 𝒜_sa corresponding to β for the given value of θ. It is really amazing that
b_θ can be explicitly computed by the integral formula

(7.1)  b_θ = ∫ Δ^{it} Jβ J Δ^{−it} h_θ(t) dt ,  h_θ(t) = e^{−θt} / (2 Ch(πt)) .

PROOF. We consider two analytic vectors x, y ∈ ℋ for the unitary group Δ^{it}, as
constructed in §2, subs. 11. Then Δ^z x, Δ^z y are entire functions of z, bounded in every
vertical strip of finite width, and the same is true for the scalar function

f(z) = < Δ^z x, DbD Δ^{−z} y > .

We recall that D = |S| commutes with C and S, and S = JD (see Appendix 3). We
have Δ^{1/2} D = I − C, Δ^{−1/2} D = I + C. Then

f(½ + it) = < x, Δ^{it} Δ^{−1/2} D b D Δ^{1/2} Δ^{−it} y > = < x, Δ^{it} (I + C) b (I − C) Δ^{−it} y > .

Similarly

f(−½ + it) = < x, Δ^{it} (I − C) b (I + C) Δ^{−it} y > .

Then using (6.3) we get the basic formula (k = e^{iθ/2})

k f(½ + it) + k̄ f(−½ + it) = < x, Δ^{it} SβS Δ^{−it} y > .

We now apply the following Cauchy integral formula for a bounded holomorphic function
g(z) in the closed vertical strip |ℜe(z)| ≤ 1/2 :

g(0) = ∫ (g(½ + it) + g(−½ + it)) dt / (2 Ch(πt)) .

We replace g(z) by e^{izθ} f(z) with |θ| < π, and get

f(0) = ∫ (k f(½ + it) + k̄ f(−½ + it)) h_θ(t) dt ,

or explicitly

< x, DbD y > = < x, (∫ Δ^{it} SβS Δ^{−it} h_θ(t) dt) y > .



This equality can be extended by density to arbitrary x, y. Since S = DJ, we have
SβS = D JβJ D, and since D is selfadjoint and commutes with Δ^{it}, we may write

< Dx, b Dy > = < Dx, (∫ Δ^{it} JβJ Δ^{−it} h_θ(t) dt) Dy > ,

and (7.1) follows, the range of D being dense.
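The scalar Cauchy formula with the weight h_θ can be checked directly (our own numerical sanity check, not part of the text). For the test function f(z) = e^{az}, bounded and holomorphic in every vertical strip, and k = e^{iθ/2}, the identity f(0) = ∫ (k f(½ + it) + k̄ f(−½ + it)) h_θ(t) dt reduces to the Fourier transform ∫ e^{iat} sech(πt) dt = sech(a/2):

```python
import math, cmath

def h(theta, t):
    # h_theta(t) = e^{-theta t} / (2 cosh(pi t))
    return math.exp(-theta * t) / (2 * math.cosh(math.pi * t))

def check(a, theta, lo=-30.0, hi=30.0, steps=300_000):
    # midpoint-rule evaluation of  integral (k f(1/2+it) + conj(k) f(-1/2+it)) h_theta(t) dt
    # for f(z) = e^{az}; the result should equal f(0) = 1 whenever |theta| < pi
    k = cmath.exp(1j * theta / 2)
    f = lambda z: cmath.exp(a * z)
    step = (hi - lo) / steps
    total = 0j
    for n in range(steps):
        t = lo + (n + 0.5) * step
        total += (k * f(0.5 + 1j * t) + k.conjugate() * f(-0.5 + 1j * t)) * h(theta, t)
    total *= step
    return abs(total - f(0))   # deviation from the claimed identity

assert check(0.7, 0.0) < 1e-6      # theta = 0 : the plain Cauchy formula, k = 1
assert check(0.7, 2.0) < 1e-6      # a genuinely complex k = e^{i}
assert check(-1.3, -2.5) < 1e-6    # |theta| < pi still required for integrability
```

The constraint |θ| < π is visible in the code: for |θ| ≥ π the factor e^{−θt} overwhelms the sech(πt) decay and the integral diverges.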


Before we proceed to the main results, we need one more remark :

(7.2)  For β ∈ 𝒜′ we have Δ^{it} JβJ Δ^{−it} ∈ 𝒜 .

It suffices to prove this operator commutes with every α ∈ 𝒜′. We put g(t) =
< x, [Δ^{it} JβJ Δ^{−it}, α] y >. From (7.1) we deduce ∫ g(t) h_θ(t) dt = < x, [b_θ, α] y > =
0. Since θ is arbitrary one may deduce that g(t) = 0 a.e., and then everywhere.

The main theorems

8 We first introduce some notation. The modular symmetry J operates on vectors of
ℋ ; given a bounded operator a ∈ ℒ(ℋ) we define j(a) = JaJ. This is a conjugation
on ℒ(ℋ) : it is conjugate linear, but doesn't reverse the order of products. Similarly,
we extend the modular group as a group of automorphisms of ℒ(ℋ), putting δ_t a =
Δ^{it} a Δ^{−it}. The fact that Δ^{it} leaves 1 fixed implies that the law μ is invariant (this is
made more precise by the KMS condition in theorem 3).
THEOREM 1. The conjugation j exchanges 𝒜 and 𝒜′.

PROOF. If in (7.2) we take t = 0, we find that j(𝒜′) ⊂ 𝒜. Exchanging the roles of 𝒜
and 𝒜′ doesn't change the modular symmetry J, and therefore j(𝒜) ⊂ 𝒜′. Then we
have 𝒜 = j(j(𝒜)) ⊂ j(𝒜′) and equality follows.

COMMENT. We had seen previously that J maps the closed space A onto A′. Now we
have the much more precise result that it maps A₀ onto A′₀, without taking closures.
Also, j(a) is the unique α ∈ 𝒜′ such that α1 = Ja1.
THEOREM 2. The group (δ_t) preserves 𝒜 and 𝒜′.

PROOF. The fact that 𝒜 is preserved follows from (7.2) and theorem 1, since JβJ is
an arbitrary element of 𝒜. Then we exchange the roles of 𝒜 and 𝒜′.
The statement in the next theorem is not exactly the same as that of the KMS
condition (10.2) in the preceding section : the boundary lines of the strip are exchanged.

THEOREM 3. Given arbitrary a, b ∈ 𝒜, there exists a bounded holomorphic function
f(z) in the closed strip 0 ≤ ℑm(z) ≤ 1 such that

(8.1)  f(t + i0) = μ((δ_t a) b) ,  f(t + i1) = μ(b δ_t a) .

PROOF. It is sufficient to prove this result when a, b are selfadjoint. Then the result
follows from the modular property on vectors, if we remark that (1 being invariant by
Δ^{it})

μ(b δ_t a) = < 1, b Δ^{it} a Δ^{−it} 1 > = < b1, Δ^{it} a1 >

μ((δ_t a) b) = μ(a δ_{−t} b) = < 1, a Δ^{−it} b Δ^{it} 1 > = < Δ^{it} a1, b1 > .

One can prove that this property characterizes the automorphism group (δ_t).

Additional results

9 The Tomita-Takesaki theorem is not an end in itself, but the gateway to new
developments. The end of the academic year also put an end to the author's efforts
to give examples and self-contained proofs, and the work could not be resumed. We
only give a sketch without proofs of important results due to Araki, Connes, Haagerup.
They are available in book form in [BrR1].

Everything we are doing depends on the choice of the faithful normal state μ (or the
separating cyclic vector 1 in the GNS Hilbert space ℋ). It turns out at the end, as in
the classical case, that replacing μ by an equivalent state leaves the situation
unchanged up to well controlled isomorphisms.
We take the bold step of representing by x̄ the vector Jx for x ∈ ℋ, and by c̄
the operator j(c) = JcJ for c ∈ ℒ(ℋ), returning to the "J" notation whenever clarity
demands it. Note that āb = bā for a, b ∈ 𝒜. For x ∈ ℋ, c ∈ ℒ(ℋ), we have (cx)‾ = c̄ x̄ ;
indeed, (JcJ) Jx = J(cx). Similarly, for c, d ∈ ℒ(ℋ), (cd)‾ = c̄ d̄ and (c̄)‾ = c.

The positive cone 𝒫 in the Hilbert space ℋ is by definition the closure of the set of
vectors a ā 1 for a ∈ 𝒜. Since 1 is invariant by J, the same is true for every element of
𝒫 (all elements of 𝒫 are "real"). Also, for a, b ∈ 𝒜 we have b b̄ a ā 1 = (ba)(ba)‾ 1, therefore
𝒫 is stable under b b̄.
We now give a list of properties of the positive cone.

1) 𝒫 is the closure of Δ^{1/4}(𝒜⁺ 1), and therefore is a convex cone.

2) 𝒫 is stable under the modular group Δ^{it}.

3) 𝒫 is pointed, i.e. (−𝒫) ∩ 𝒫 = {0}, and is self-dual (i.e. the form < x, · > is positive
on 𝒫 if and only if x ∈ 𝒫).

4) Every "real" vector x (i.e. Jx = x) is a difference of two elements of 𝒫, which
can be uniquely chosen so as to be orthogonal.
Now, we describe what happens when we change states, assuming first the new state
is a vector state ν(a) = < ω, aω > with ω ∈ 𝒫 (we will see later that this is not a
restriction).

5) The state ν is faithful if and only if ω is a cyclic vector for 𝒜 operating on ℋ.

6) Assuming this is true, we have J_ν = J, 𝒫_ν = 𝒫.
Finally, we give the main result, which describes all states on 𝒜 which are absolutely
continuous with respect to μ.

THEOREM. Every positive normal linear functional ν on 𝒜 can be uniquely represented
as

ν(a) = < ω, aω >  with  ω ∈ 𝒫 .

Formally, this says that every positive element of L¹ is something like the square of
a positive element of L².
10 EXAMPLES. 1) The trivial case from the point of view of the T-T theory is that of
a tracial state μ. The tracial property can be written

< a1, b1 > = μ(a*b) = μ(ba*) = < b*1, a*1 > .



Then property (8.1) holds with δ_t = I, and taking for granted the uniqueness property
mentioned after Theorem 3, we see that the modular group is trivial. According to (4.1),
since ε = I the modular symmetry J must be given by

Ja1 = a*1  for a ∈ 𝒜 .

Indeed, this is a well defined operator on 𝒜1, and we have < Ja1, Jb1 > = < b1, a1 >
because the state is tracial, so that J extends to a well defined conjugation on ℋ. To
check that ā = JaJ belongs to 𝒜′, it is sufficient to prove that for arbitrary b, c, d ∈ 𝒜,

< c1, ā b d1 > = < c1, b ā d1 > .

On the left hand side we replace ā b d1 = JaJ bd1 by J a d*b* 1 and then by b d a* 1.
Similarly on the right hand side we replace ā d1 by d a* 1, and the result is the same.
2) The second case which can be more or less explicitly handled is that of 𝒜 = ℒ(𝒦)
with a state of the form μ(a) = Tr(aw). Then μ is faithful if and only if w is injective.
The modular group is given by δ_t(a) = w^{it} a w^{−it}, because this group of automorphisms
satisfies the KMS condition of Theorem 3. Indeed, if we put for a, b ∈ 𝒜

f(z) = Tr(w w^{iz} a w^{−iz} b)

it is easy to see that f(z) is holomorphic and bounded in the unit horizontal strip, with
f(t + i0) = μ(δ_t(a) b) and f(t + i1) = μ(b δ_t(a)).
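This KMS computation can be verified numerically for 2×2 matrices (our own sketch; the particular density w and the matrices a, b below are arbitrary choices). Taking w diagonal makes the complex power w^{iz} explicit:

```python
import cmath

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def tr(a):
    return a[0][0] + a[1][1]

w1, w2 = 0.7, 0.3                 # diagonal faithful density with trace 1
w = [[w1, 0], [0, w2]]

def w_pow(z):
    # w^{iz} for diagonal w : diag(w1^{iz}, w2^{iz})
    return [[cmath.exp(1j * z * cmath.log(w1)), 0],
            [0, cmath.exp(1j * z * cmath.log(w2))]]

a = [[1.2, 0.4 - 0.3j], [0.1 + 0.2j, -0.7]]   # arbitrary elements of L(K)
b = [[0.3, -1.1j], [0.5, 0.9 + 0.6j]]

def mu(c):                         # the state mu(c) = Tr(c w)
    return tr(mul(c, w))

def delta(t, c):                   # modular automorphism w^{it} c w^{-it}
    return mul(w_pow(t), mul(c, w_pow(-t)))

def f(z):
    return tr(mul(w, mul(w_pow(z), mul(a, mul(w_pow(-z), b)))))

t = 0.37
assert abs(f(t) - mu(mul(delta(t, a), b))) < 1e-12        # f(t + i0) = mu(delta_t(a) b)
assert abs(f(t + 1j) - mu(mul(b, delta(t, a)))) < 1e-12   # f(t + i1) = mu(b delta_t(a))
```

The second assertion is exactly the jump across the strip: evaluating w^{iz} at z = t + i inserts a factor w^{−1} on one side and w on the other, which by cyclicity of the trace moves b from the right of δ_t(a) to its left.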
3) Let us mention a striking application of the T-T theory, due to Takesaki : given a
von Neumann algebra 𝒜 with a faithful normal law μ and a von Neumann subalgebra
ℬ, a conditional expectation relative to μ exists if and only if ℬ is stable by the modular
automorphism group of 𝒜 w.r.t. μ. On the subject of conditional expectations, we
advise the reader to consult the two papers [AcC] by Accardi and Cecchini given in the
references.

These lectures on von Neumann algebras stopped here, just when things started to
be really interesting, and no chance was given to resume work and discuss L^p spaces,
etc. Unfortunately, most results are still scattered in journals, and no exposition for
probabilists seems to exist.
Appendix 5

Local Times and Fock Space


This Appendix is devoted to applications of Fock space to the theory of local times.
We first present Dynkin's formula [Dyn2], which shows how expectations relative to
the stochastic process of local times L^x of a symmetric Markov process, indexed by the
points of the state space, can be computed by means of an auxiliary Gaussian process
whose covariance is the potential density of the Markov process. Though this theorem
does not mention explicitly Fock space, it was suggested to Dynkin by Symanzik's
program for the construction of interacting quantum fields. Then we mention without
proof a remarkable application to the continuity of local times, discovered by Marcus
and Rosen. We continue with the "supersymmetric" version of Dynkin's theorem, due
to Le Jan, and mention (again without proof) its application to the self-intersection
theory of two-dimensional Brownian motion. Thus local times and self-intersection form
the probabilistic background to present a variety of results on Fock space (symmetric,
antisymmetric, and mixed) which might belong as well to Chapter V. There is an
incredible amount of literature on the antisymmetric case, of which I have read very
little : so please consider this chapter as a mere introduction to the subject.
Some parts of this Appendix depend on the theory of Markov processes. As they
concern mostly the probabilistic motivations, and we presume our reader is interested
principally in the non-commutative methods, we have chosen to omit technical details
altogether. Everything can be found in the original papers, and the beautiful Marcus-
Rosen article also contains information on Markov processes for non-specialists.

§1. Dynkin's formula

Symmetric Markov processes

1 Let E be a state space with a σ-finite measure η. We consider on this space a
transient (sub)Markov semigroup (P_t) whose potential kernel G has a potential density
g(x, y) with respect to η. Explicitly, for f ≥ 0 on E

(1.1)  ∫₀^∞ ∫_E P_t(x, dy) f(y) dt = ∫ G(x, dy) f(y) = ∫ g(x, y) f(y) η(dy) .

Then g(·, y) may be chosen to be excessive (= positive superharmonic) for every y, and
we may define potentials of positive measures, Gμ(x) = ∫ g(x, y) μ(dy). We are mostly
interested in the case of a symmetric density g. The case of non-transient processes
1. Dynkin's formula 265

like low-dimensional Brownian motion can be reduced to the transient case, replacing
the semigroup (P_t) by e^{−ct} P_t (which amounts to killing the process at an independent
exponential time).

On the sample space Ω of all right continuous mappings from ℝ₊ to E with lifetime
ζ, and left limits in E up to the lifetime, we define the measures ℙ_θ, under which (X_t)
(the coordinate process) is Markov with semigroup (P_t) and initial measure θ. This
requires some assumptions on E and (P_t), which we do not care to describe. When
θ = ε_x, one writes simply ℙ^x, 𝔼^x, and for every positive r.v. h on Ω, 𝔼^·[h] is a
function on E.

We will also need to use "conditioned" measures ℙ_θ^{/k}, where k is an excessive
function; the corresponding initial measure is θ, and the semigroup is replaced by

(1.2)  P_t^{/k}(x, dy) = P_t(x, dy) k(y)/k(x) .

The corresponding potential G^{/k}(x, dy) is G(x, dy) k(y)/k(x), and the potential density
w.r.t. η has the same form ¹. It would take us too far to explain how conditioning
establishes a symmetry between initial and final specifications on the process, but this
will be visible on the last formula (1.9).
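In discrete time the analogue of (1.2) is immediate to check: if P is a substochastic matrix and k is excessive (Pk ≤ k), then P^{/k}(x, y) = P(x, y) k(y)/k(x) is again substochastic, and conditioning commutes with composition, so the powers of P^{/k} are the h-transforms of the powers of P. A small sketch of ours (the matrix and k are arbitrary choices):

```python
P = [[0.2, 0.5], [0.3, 0.1]]      # substochastic transition matrix (mass is killed)
k = [2.0, 1.5]                    # excessive function: (P k)(x) <= k(x) for each x

def mat_mul(a, b):
    return [[sum(a[i][l] * b[l][j] for l in range(2)) for j in range(2)] for i in range(2)]

assert all(sum(P[x][y] * k[y] for y in range(2)) <= k[x] for x in range(2))

def h_transform(m):
    # m^{/k}(x, y) = m(x, y) k(y) / k(x)
    return [[m[x][y] * k[y] / k[x] for y in range(2)] for x in range(2)]

Pk = h_transform(P)
# the conditioned kernel is substochastic again
assert all(sum(row) <= 1.0 + 1e-12 for row in Pk)
# (P^{/k})^2 = (P^2)^{/k} : the diagonal conjugation passes through products
lhs = mat_mul(Pk, Pk)
rhs = h_transform(mat_mul(P, P))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

The last assertion is just the conjugation identity (D⁻¹PD)(D⁻¹PD) = D⁻¹P²D with D = diag(k), which is why the conditioned potential G^{/k} has the same product form as G.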
A random field is a linear mapping f ↦ Φ(f) from some class of functions to
random variables. We shall be interested here in the occupation field, which maps f
to A(f)_∞ = ∫₀^∞ f(X_s) ds -- this is always meaningful for positive functions, and
transience means that it is meaningful for bounded signed functions of suitably restricted
support.
Let f be a function such that |f| has a bounded potential, let us put A = A(f)
and define, for n a positive integer,

(1.3)  h^{(n)} = 𝔼^·[A_∞^n] .

In particular, h = h^{(1)} is the potential 𝔼^·[A_∞] = Gf. Then we have

h^{(n)} = 𝔼^·[ −∫₀^∞ d(A_∞ − A_s)^n ] = 𝔼^·[ ∫₀^∞ n (A_∞ − A_s)^{n−1} dA_s ]

= 𝔼^·[ ∫₀^∞ n 𝔼[(A_∞ − A_s)^{n−1} | ℱ_s] dA_s ] = 𝔼^·[ ∫₀^∞ n h^{(n−1)}(X_s) dA_s ] = n G(h^{(n−1)} f) .

An easy induction then gives the formula

(1.4)  h^{(n)}(x) = n! ∫ g(x, y₁) f(y₁) η(dy₁) g(y₁, y₂) ... g(y_{n−1}, y_n) f(y_n) η(dy_n) .
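The recursion h^{(n)} = n G(h^{(n−1)} f) can be tested by simulation (our own sketch, with an arbitrarily chosen model): a two-state chain jumping between its states at rate 1 and killed at rate 1 from each state, so that the generator is Q = [[−2, 1], [1, −2]] and the potential matrix is G = (−Q)⁻¹. We compare h^{(2)}(0) = 2 G(f·Gf)(0) with a seeded Monte Carlo estimate of 𝔼₀[A_∞²]:

```python
import random

# potential matrix G = (-Q)^{-1} for Q = [[-2, 1], [1, -2]]
G = [[2 / 3, 1 / 3], [1 / 3, 2 / 3]]
f = [1.0, 0.5]

def pot(v):                        # x -> (G v)(x)
    return [sum(G[x][y] * v[y] for y in range(2)) for x in range(2)]

h1 = pot(f)                                                    # h^(1) = Gf
h2 = [2 * u for u in pot([f[x] * h1[x] for x in range(2)])]    # h^(2) = 2 G(f h^(1))

random.seed(7)
n_paths, acc = 200_000, 0.0
for _ in range(n_paths):
    state, A = 0, 0.0
    while True:
        A += f[state] * random.expovariate(2.0)   # holding time at total rate 1 + 1 = 2
        if random.random() < 0.5:                 # killed with probability 1/2
            break
        state = 1 - state                         # otherwise jump to the other state
    acc += A * A
# E_0[A_inf^2] should match h^(2)(0) = 4/3
assert abs(acc / n_paths - h2[0]) < 0.08
```

With these choices h^{(2)}(0) = 4/3 exactly, and the Monte Carlo estimate agrees to well within its statistical error.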

We put μ = f·η and integrate w.r.t. an initial measure θ, adding an integration to the
left

(1.5)  𝔼_θ[(A_∞)^n] = n! ∫ g(x, y₁) g(y₁, y₂) ... g(y_{n−1}, y_n) θ(dx) μ(dy₁) ... μ(dy_n) .

1 Symmetry is lost, but could be recovered using k²η as reference measure.


266 Appendix 5

To add similarly an integration on the right, we apply the preceding formula to the
conditioned semigroup P^{/k} with k = Gλ, assumed for simplicity to be finite and
strictly positive. Then we have, still denoting by μ the measure f·η,

(1.6)  𝔼_{kθ}^{/k}[(A_∞)^n] =

n! ∫ g(x, y₁) g(y₁, y₂) ... g(y_{n−1}, y_n) g(y_n, z) θ(dx) μ(dy₁) ... μ(dy_n) λ(dz) .

In the computation, the first 1/k(x) of g^{/k} simplifies with the k of kθ, the intermediate
k(y_i)'s collapse to 1, and the last remaining k(y_n) is made explicit as the integral w.r.t.
λ on the right.
Now the symmetry comes into play. We define the symmetric bilinear form on
measures (first positive ones)

(1.7)  e(λ, μ) = ∫∫ g(x, y) λ(dx) μ(dy) .

We say that μ has finite energy if e(|μ|, |μ|) < ∞. For instance, all positive bounded
measures with bounded potential have finite energy. One can prove that the space of all
measures of finite energy with the bilinear form (1.7) is a prehilbert space ℋ (usually it
is not complete : there exist "distributions of finite energy" which are not measures). On
the other hand, with every measure μ of finite energy one can associate a continuous
additive functional A_t(μ) (or A_t^μ for the sake of typesetting) reducing to A_t(f) when
μ = f·η, and we still have

(1.8)  𝔼_{kθ}^{/k}[(A_∞(μ))^n] =

n! ∫ g(x, y₁) g(y₁, y₂) ... g(y_{n−1}, y_n) g(y_n, z) θ(dx) μ(dy₁) ... μ(dy_n) λ(dz) .

Some dust has been swept under the rug here : there was no problem about defining
the same A(f) for the two semigroups (P_t) and (P_t^{/k}), because we had an explicit
expression for it, but as far as A(μ) is concerned it requires some work. Take it on
faith that it can be done.
Finally, we "polarize" this relation to compute the expectation 𝔼_{kθ}^{/k}[A_∞^{μ₁} ... A_∞^{μ_n}]
(called by Dynkin "the n-point function of the occupation field") as a sum over all
permutations σ of {1, ..., n}

(1.9)  Σ_σ ∫ g(x, y₁) g(y₁, y₂) ... g(y_n, z) θ(dx) μ_{σ(1)}(dy₁) ... μ_{σ(n)}(dy_n) λ(dz) .

This is the main probabilistic formula, which will be interpreted using the combinatorics
of Gaussian random variables. Note that it is symmetric in the measures μ_i, even if the
potential kernel is not symmetric.

A Gaussian process

2 For simplicity, we reduce the prehilbert space ℋ to the space of all signed measures
which can be written as differences of two positive bounded measures with bounded
potentials -- such measures clearly are of finite energy. Though we will need it only
much later, let us give an interpretation of formula (1.9) on this space : we denote by
G_i the operator on ℋ which maps a measure λ to (Gλ) μ_i (a bounded measure with
bounded potential since Gλ is bounded), and rewrite (1.9) as

(2.1)  Σ_σ e(θ, G_{σ(1)} G_{σ(2)} ... G_{σ(n)} λ) .

On the same probability space as (X_t) and independently of it, we construct a Gaussian
(real, centered) linear process Y_μ, indexed by μ ∈ H, with covariance

(2.2)   ⟨Y_λ, Y_μ⟩ = ∫ λ(dx) g(x, y) μ(dy) .

We claim that, if θ and χ are point masses ε_x and ε_y ¹, given a function F ≥ 0 on
ℝ^n, we have Dynkin's formula

(2.3)   E^{θ/χ}[F(A_1 + (Y_1²/2), …, A_n + (Y_n²/2))] = E[F(Y_1²/2, …, Y_n²/2) Y_0 Y_{n+1}] ,

where A_i = A_∞(μ_i), Y_i = Y_{μ_i}, and Y_0, Y_{n+1} stand for Y_{ε_x}, Y_{ε_y}. Note that θ and χ
have been moved inside the Gaussian field expectation on the right hand side.
We follow the exposition of Marcus and Rosen, which is Dynkin's proof without
Feynman diagrams. The first step consists in considering the case of a function of
n variables which is a product of coordinates, F(x_1, …, x_n) = x_1 ⋯ x_n. Since the
measures μ_i are not assumed to be different, (2.3) will follow for polynomials. Then a
routine path will lead us to the general case via complex exponentials, Fourier transforms
and a monotone class theorem -- this extension will not be detailed, see Dynkin [Dyn2]
or Marcus-Rosen [MaR].
Given a set E with an even number 2p of elements, a pairing π of E is a partition
of E into p sets with two elements -- one of which is arbitrarily chosen so that we
call them {α, β}. We may also consider π as the mapping π(α) = β, π(β) = α, an
involution without fixed point. The expectation of a product of an even number m = 2p
of (centered) Gaussian random variables ξ_i is given by a sum over all pairings π of the
set {1, …, 2p} (see formula (4.3) in Chapter IV, subs. 3)

(2.4)   E[ξ_1 ⋯ ξ_{2p}] = Σ_π Π_{{α,β}∈π} E[ξ_α ξ_β] .
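As a quick sanity check of (2.4), the pairing sum can be evaluated mechanically from a covariance matrix. The sketch below (plain Python; the helper names are ours) enumerates all pairings of {1, …, 2p} and recovers two classical consequences: the fourth-moment identity E[ξ_1ξ_2ξ_3ξ_4] = C_{12}C_{34} + C_{13}C_{24} + C_{14}C_{23}, and E[X^{2p}] = (2p−1)!! for a standard Gaussian X (all covariances equal to 1, so the sum simply counts the pairings).

```python
def pairings(indices):
    """Yield all partitions of the list `indices` into unordered pairs."""
    if not indices:
        yield []
        return
    a = indices[0]
    for j in range(1, len(indices)):
        b = indices[j]
        rest = indices[1:j] + indices[j + 1:]
        for p in pairings(rest):
            yield [(a, b)] + p

def gaussian_moment(C):
    """E[xi_1 ... xi_m] for centered jointly Gaussian xi's with covariance C."""
    m = len(C)
    if m % 2:
        return 0.0
    total = 0.0
    for p in pairings(list(range(m))):
        prod = 1.0
        for (a, b) in p:
            prod *= C[a][b]
        total += prod
    return total

# standard Gaussian X: all covariances are 1, so the sum counts pairings
ones = lambda m: [[1.0] * m for _ in range(m)]
print(gaussian_moment(ones(4)))   # 3  = 3!!
print(gaussian_moment(ones(6)))   # 15 = 5!!

# fourth moment of four distinct variables
C = [[1.0, 0.2, 0.3, 0.4],
     [0.2, 1.0, 0.5, 0.6],
     [0.3, 0.5, 1.0, 0.7],
     [0.4, 0.6, 0.7, 1.0]]
lhs = gaussian_moment(C)
rhs = C[0][1] * C[2][3] + C[0][2] * C[1][3] + C[0][3] * C[1][2]
print(abs(lhs - rhs) < 1e-12)     # True
```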

We apply (2.4) to the following family of 2n+2 Gaussian r.v.'s : for i = 1, …, n we
define x_i = Y_i, y_i = Y_i, and the r.v.'s Y_0, Y_{n+1} which play a special role are denoted by
x_0 and x_{n+1}. Given a pairing π, we separate the indexes i in two classes : "closed"
indexes such that x_i is paired to y_i, and "open" indexes i, such that x_i is paired to
some x_j or y_j with j ≠ i, necessarily open. Note that i = 0, n+1 is open since there is

¹ An assumption wrongly omitted from the earlier edition, as Prof. Marcus kindly pointed out to us.



no y_i, and if i ≠ 0, n+1, y_i is paired to some x_k or y_k with k ≠ i. Then we consider the
corresponding product in (2.4) : first we have factors corresponding to "closed" indexes,
E[x_i y_i] = E[Y_i²]. Next we start from the open index 0 ; π(x_0) is some z_j, where
j is open and z_j is either x_j or y_j (unless j = n+1, in which case the only choice is
z_j = x_j). Then π(z_j) is some z_k with k open, etc., and we iterate until we end at z_{n+1}.
This open chain of random variables contributes to the product a factor

E[Y_0 Y_{j_1}] E[Y_{j_1} Y_{j_2}] ⋯ E[Y_{j_r} Y_{n+1}] .

On the other hand, this chain does not necessarily exhaust the "open" indexes (think
of the case where x_0 is paired directly with x_{n+1}). Then starting with an "open" index that
does not belong to the chain, we may isolate a new chain, this time a closed one, or
cycle

E[Y_{i_1} Y_{i_2}] E[Y_{i_2} Y_{i_3}] ⋯ E[Y_{i_r} Y_{i_1}] ;

one or more such closed chains may be necessary to exhaust the product. Calling the
number of expectation signs the length of the chain (open or closed), we see that the
case of "closed indexes" appears as that of cycles of minimal length 1.
Thus we are reorganizing the right hand side of (2.3) as a sum of products of
expectations over one open and several closed chains of indexes, and we must take into
account the numbers of summands and the powers of 2. If we exchange in all possible
ways the names x_i, y_i given to each "open" r.v. Y_i, i = 1, …, n, we get all the pairings
leading to the same decomposition into chains. Assuming the lengths of the chains
are m_0 for the open chain, m_1, …, m_k for the cycles, with m_0 + ⋯ + m_k = n + 2,
the number of such pairings corresponding to the given decomposition in chains is
2^{m_0−1} ⋯ 2^{m_k−1} = 2^{n+1−k}. On the other hand, the right side of (2.3) has a factor of
2^{−n}. Therefore, we end with a factor of 2^{1−k}, k being the number of cycles.
We would have a similar computation for a product containing only squares, i.e.
without the factor Y_0 Y_{n+1} : the open chain linking Y_0 to Y_{n+1} would disappear, and
we would get a sum of products over cycles only.
We now turn to the left hand side of (2.3) : we expand the product into a sum of
terms 2^{−j} E[Y_{m_1}² ⋯ Y_{m_j}²] E[A_{i_1} ⋯ A_{i_{n−j}}] where the two sets of indexes partition
{1, …, n}. We have just computed the first expectation as a sum of products over
cycles, and formula (1.9) tells exactly that the second expectation contributes the open
chain term.

3  Marcus and Rosen use Dynkin's formula to attack the problem of continuity of local
times : assuming the potential density is finite and continuous (as in the case of one-dimensional
Brownian motion killed at an exponential time), point masses are measures
of finite energy, and the corresponding additive functionals are the local times (L_t^x). On
the other hand, we have a Gaussian process Y_x indexed by points of the state space.
Then Marcus and Rosen prove that the local times can be chosen to be jointly continuous
in (t, x) if and only if the Gaussian process has a continuous version. The relationship
can be made more precise, local continuity or boundedness properties being equivalent
for the two processes. This is proved directly, without referring to the powerful theory
of regularity of Gaussian processes -- but as soon as the result is known, the knowledge
of Gaussian processes can be transferred to local times. We can only refer the reader to
the article [MaR], which is beautifully written for non-specialists.

§2 Le Jan's "supersymmetric" approach

Taking his inspiration from Dynkin's theorem and from a paper by Luttinger, Le Jan
developed a method leading to an isomorphism theorem of a simpler algebraic structure
than Dynkin's. This theorem is "supersymmetric" in the general sense that it mixes
commuting and anticommuting variables. That is, the functions which are the main
object of analysis are replaced by something like non-homogeneous differential forms of
arbitrary degrees. Starting from a Hilbert space H, one works on the tensor product of the
symmetric Fock space over H, providing the coefficients of the differential forms, and the
antisymmetric Fock space providing the differential elements (the example of standard
differential forms on a manifold, in which the coefficients are smooth functions while the
differential elements belong to a finite-dimensional exterior algebra, suggests that two
different Hilbert spaces should be involved here). However, this is not quite sufficient :
Le Jan's theorem requires analogues of a complex Brownian motion Fock space, i.e.
the symmetric and antisymmetric components are built over a Hilbert space which has
already been "doubled". We are going first to investigate this structure.
The construction will be rather long, as we will first construct leisurely the required
Fock spaces, adding several interesting results (many of which are borrowed from papers
by J. Kupsch), supplementing Chapter V on complex Brownian motion.

Computations on complex Brownian motion (symmetric case)

1  We first recall a few facts about complex Brownian motion, mentioned in Chapter
V, §1. The corresponding L² space, generated by multiple integrals of all orders
(m, n) w.r.t. two real Brownian motions X, Y, is isomorphic to the Fock space over
G = L²(ℝ₊) ⊕ L²(ℝ₊). It is more elegant to generate it by multiple integrals w.r.t. two
conjugate complex Brownian motions Z, Z̄. The first chaos of the ordinary Fock space
over G contains two kinds of stochastic integrals associated with a given u ∈ L²(ℝ₊)

(1.1)   Z_u = ∫ u_s dZ_s ,   Z̄_u = ∫ u_s dZ̄_s ,

both of them being linear in u. The conjugation maps Z_u into Z̄_{ū}, exchanging the
two kinds of integrals. The Wiener product of random variables is related to the Wick
product by

(1.2)   Z_u Z_v = Z_u : Z_v ,   Z̄_u Z̄_v = Z̄_u : Z̄_v ,   Z_u Z̄_v = Z_u : Z̄_v + (u, v) 1 .


It is convenient to formulate things in a more algebraic way : we have a complex Hilbert
space G with a conjugation, and two complex subspaces H and H′ exchanged by
the conjugation, such that G = H ⊕ H′. On G we have a bilinear scalar product
(u, v) = ⟨ū, v⟩, and this bilinear form is equal to 0 on H and H′ -- they are
(maximal) isotropic subspaces. Thus the Fock space structure over G is enriched in two
ways : by the conjugation, which leads to the bilinear form (u, v) (and to the notion of
a Wiener product), and by the choice of the pair H, H′.
Here is an important example of such a situation : we take any complex Hilbert space
H, denote by H′ its dual space -- the pairing is written v′(u) = (v′, u) -- and define
G = H ⊕ H′. There is a canonical antilinear mapping u ↦ u* from H to H′ (if u
is a ket, u* is the corresponding bra), and it is very easy to extend it to a conjugation
of G. Then every element of G can be written as u + v* with u, v ∈ H, and we have

(1.3)   (u + v*, w + z*) = ⟨v, w⟩ + ⟨z, u⟩ .

An element of the incomplete (boson) Fock space over G is a finite sum of homogeneous
elements of order (m, n), themselves linear combinations of elements of the form

(1.4)   u_1 ∘ ⋯ ∘ u_m ∘ v′_1 ∘ ⋯ ∘ v′_n

with u_i ∈ H, v′_i ∈ H′. Another way of writing this is

(1.4′)   u_1 ∘ ⋯ ∘ u_m ∘ v_1* ∘ ⋯ ∘ v_n*

where this time all u_i, v_i belong to H, and * is the standard map from H to H′.
If a conjugation is given on H, we also have a canonical scalar product (bilinear)
(v, u) = ⟨v̄, u⟩, and a canonical linear mapping from H to H′ denoted v ↦ v′.
Then we may split each v′ in (1.4) into a v ∈ H and a ′. This is essentially what we
do when we use the probabilistic notation, a third way of writing the generic vectors :

(1.4″)   Z_{u_1} : ⋯ : Z_{u_m} : Z̄_{v_1} : ⋯ : Z̄_{v_n}

again with u_i, v_i ∈ H = L²(ℝ₊), with its standard conjugation. The symbols ∘ and :
have the same meaning : generally, we use the first when dealing with abstract vectors,
and the second one for random variables -- a mere matter of taste.
The form (1.4″) can be rewritten as a multiple Ito integral

(1.5)   ∫ u_1(s_1) ⋯ u_m(s_m) v_1(t_1) ⋯ v_n(t_n) dZ_{s_1} ⋯ dZ_{s_m} dZ̄_{t_1} ⋯ dZ̄_{t_n} ,

which is incorrectly written : through a suitable symmetrization of the integrand, (1.5)
should be given the form

(1.6)   m! n! ∫ f(s_1, …, s_m ; t_1, …, t_n) dZ_{s_1} ⋯ dZ_{s_m} dZ̄_{t_1} ⋯ dZ̄_{t_n} ,

extended to ℝ₊^m × ℝ₊^n, f being a symmetric function in the variables s_i, t_i separately.
In our case, because of the factorials in front of the integral, f is equal to a sum over
permutations

(1.7)   Σ_{σ,τ} u_{σ(1)}(s_1) ⋯ u_{σ(m)}(s_m) v_{τ(1)}(t_1) ⋯ v_{τ(n)}(t_n) .
2  Instead of the Wick product (1.4″), let us compute the Wiener product (usually
written without a product sign ; if necessary we use a dot)

(2.1)   Z_{u_1} ⋯ Z_{u_m} Z̄_{v_1} ⋯ Z̄_{v_n} .

According to the complex Brownian motion form of the multiplication formula (Chapter
V, §1, formula (2.4)), the Wiener product (2.1) has a component in every chaos of
order (m − p, n − p). For p = 0 we have the Wick product

Z_{u_1} : ⋯ : Z_{u_m} : Z̄_{v_1} : ⋯ : Z̄_{v_n} ;

for p = 1, we have a sum of Wick products

(2.2)   Σ_{i,j} (v_j, u_i) Z_{u_1} : ⋯ Ẑ_{u_i} ⋯ : Z_{u_m} : Z̄_{v_1} : ⋯ Ẑ̄_{v_j} ⋯ : Z̄_{v_n} ,

where the terms with a hat are omitted. Such a term is called the contraction of
the indexes i and j in the Wick product. Similarly, the term in the chaos of order
(m − 2, n − 2) involves all contractions of two pairs of indexes, etc.
If we had computed a Wiener product in the form (1.4′), in algebraic notation
u_1 ⋯ u_m v_1* ⋯ v_n* ,
then the contractions would have involved the hermitian scalar product ⟨v_j, u_i⟩.
The expectation of (2.1) is equal to 0 for m ≠ n, and for m = n it is given by

(2.3)   E[Z_{u_1} ⋯ Z_{u_m} Z̄_{v_1} ⋯ Z̄_{v_m}] = per (v_j, u_i) ,

where per denotes a permanent. We may translate this into algebraic language, as a
linear functional on the incomplete Fock space, given in the representation (1.4) by

(2.4)   Ex[u_1 ∘ ⋯ ∘ u_m ∘ v′_1 ∘ ⋯ ∘ v′_m] = per (v′_j, u_i)

(the notation Ex should recall the word "expectation" without suggesting an "expo-
nential"). In the representation (1.4′) we have

(2.5)   Ex[u_1 ∘ ⋯ ∘ u_m ∘ v_1* ∘ ⋯ ∘ v_m*] = per ⟨v_j, u_i⟩ .
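Formula (2.3) can be tested in the simplest case u_i = v_i = u with ‖u‖ = 1 : then Z_u is a standard complex Gaussian with E[|Z_u|^{2m}] = m!, while the matrix (v_j, u_i) is the all-ones m×m matrix, whose permanent is m!. A small sketch (plain Python; the helper name `per` is ours) computes the permanent from its definition as a sum over permutations without signs:

```python
from itertools import permutations
from math import factorial

def per(M):
    """Permanent of a square matrix: a determinant-like sum, without signs."""
    m = len(M)
    total = 0.0
    for sigma in permutations(range(m)):
        p = 1.0
        for i, j in enumerate(sigma):
            p *= M[i][j]
        total += p
    return total

# per of the all-ones matrix counts the permutations: per = m!,
# matching E[|Z_u|^{2m}] = m! for a standard complex Gaussian Z_u
for m in range(1, 6):
    ones = [[1.0] * m for _ in range(m)]
    assert per(ones) == factorial(m)
print("per(ones_m) = m! for m = 1..5")
```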

3  The following two subsections are devoted to comments on the preceding definitions.
They are very close to considerations in Slowikowski [Slo1] and Nielsen [Nie].
The Wiener and Wick products can be described in the same manner : consider a
(pre)Hilbert space H with a bilinear functional ξ(u, v). We are going to define on the
incomplete Fock space Γ₀(H) an associative product admitting 1 as unit element, such
that on the first chaos

(3.1)   u.v = u ∘ v + ξ(u, v) 1 .

Note the analogy with Clifford algebra. This product is commutative if ξ is symmetric,
and satisfies in general an analogue of the CCR

u.v − v.u = (ξ(u, v) − ξ(v, u)) 1 .

We must supplement the rule (3.1) by a prescription for higher order products of vectors
belonging to the first chaos

(3.2)   (u_1 ∘ ⋯ ∘ u_m).v = u_1 ∘ ⋯ ∘ u_m ∘ v + Σ_i ξ(u_i, v) u_1 ∘ ⋯ û_i ⋯ ∘ u_m ,
        v.(u_1 ∘ ⋯ ∘ u_n) = v ∘ u_1 ∘ ⋯ ∘ u_n + Σ_i ξ(v, u_i) u_1 ∘ ⋯ û_i ⋯ ∘ u_n ,

the hat on a vector indicating that it is omitted. Then an easy induction shows that
the product (u_1 ∘ ⋯ ∘ u_m).(v_1 ∘ ⋯ ∘ v_n) is well defined, with a term in each chaos of
order m + n − 2p involving all possible p-fold contractions. The Wick (symmetric)
product corresponds to ξ = 0 ; the Wiener product needs a conjugation and corresponds
to ξ(u, v) = (u, v).
We have, u_∘^m and v_∘^n denoting symmetric (Wick) powers,

(3.3)   u_∘^m . v_∘^n = Σ_p [m! n! / (p! (m−p)! (n−p)!)] ξ(u, v)^p u_∘^{m−p} ∘ v_∘^{n−p} .

Note the analogy with the multiplication formula for Hermite polynomials. We deduce
a product formula for exponential vectors

(3.4)   ℰ(u).ℰ(v) = e^{ξ(u,v)} ℰ(u + v) ,

on which associativity is seen to depend on the identity

(3.5)   ξ(u, v + w) + ξ(v, w) = ξ(u, v) + ξ(u + v, w) ,

trivially satisfied in our case.

trivially satisfied in our case. A n o t h e r relation with H e r m i t e p o l y n o m i a l occurs if we


express the ( power u~n using Wick powers, and conversely (the index u( or Uo
indicates in which sense the polynomial operates on u )

(3.5) Uom = H m ( u ( , ~ ( u , u ) ) u~~ = H m ( u o , - ( ( u , u ) )

where the generalized H e r m i t e polynomiMs are defined by


~771
(3.5') ~'~-°~:/: = Z,,~ ,,~-7.H , , ( . < , ) ,

and we have in particular, as in the W i e n e r product case

(3.6) eg= ee(",")/2e(u).


from which a generalized Weyl relation follows

exp_ξ(u) exp_ξ(v) = e^{(ξ(u,v) − ξ(v,u))/2} exp_ξ(u + v) .
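To make the ξ-product concrete in the simplest case, take H one-dimensional, spanned by a single vector u with ξ(u, u) = q. The incomplete Fock space is then the algebra of polynomials in the Wick powers u_∘^m, and the power multiplication rule reduces to u_∘^m . u_∘^n = Σ_p [m!n!/(p!(m−p)!(n−p)!)] q^p u_∘^{m+n−2p}. The sketch below (plain Python, exact rational arithmetic; all names are ours) implements this product and checks the first-chaos rule u.u = u_∘² + q, the Hermite relation u_ξ³ = u_∘³ + 3q u_∘ obtained by iterating the product, and associativity on an arbitrary triple.

```python
from fractions import Fraction
from math import comb, factorial

Q = Fraction(1, 2)  # xi(u, u) = q; any value works

def mul(f, g, q=Q):
    """xi-product of polynomials in the Wick powers u_o^m,
    represented as dicts {degree: coefficient}."""
    out = {}
    for m, a in f.items():
        for n, b in g.items():
            for p in range(min(m, n) + 1):
                coeff = a * b * q**p * factorial(p) * comb(m, p) * comb(n, p)
                d = m + n - 2 * p
                out[d] = out.get(d, Fraction(0)) + coeff
    return {d: c for d, c in out.items() if c != 0}

u = {1: Fraction(1)}

# first-chaos rule (3.1): u.u = u_o^2 + q
assert mul(u, u) == {2: Fraction(1), 0: Q}

# third xi-power: u.u.u = u_o^3 + 3q u_o  (= H_3(u_o, -q))
assert mul(mul(u, u), u) == {3: Fraction(1), 1: 3 * Q}

# associativity on an arbitrary triple of polynomials
f = {0: Fraction(2), 1: Fraction(1)}
g = {2: Fraction(1), 1: Fraction(-3)}
h = {3: Fraction(1), 0: Fraction(5)}
assert mul(mul(f, g), h) == mul(f, mul(g, h))
print("xi-product checks passed")
```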

Let us give at least one interesting non-commutative example. We return to the case of
G = H ⊕ H′, with elements represented as u + v′, and we identify u with the creation
operator a⁺_u, and v′ with the annihilation operator a⁻_{v′} (indexed by an element of
H′ : see VI.1.4). Then the operator multiplication can be described as a ξ-product
involving the non-symmetric bilinear form

(3.7)   ξ(u + v*, w + z*) = ⟨v, w⟩ ,

"half" the symmetric bilinear form (1.3) that leads to the Wiener product. For instance

(3.8)   (a⁺_u + a⁻_{v*})(a⁺_w + a⁻_{z*}) = (a⁺_u + a⁻_{v*}) : (a⁺_w + a⁻_{z*}) + ⟨v, w⟩ 1 .



One easily recovers the well known fact that the (commutative) Wick product of a string
of creation and annihilation operators is equal to its normally ordered operator product.
On the other hand, for non normally ordered strings the operator product is reduced to
the Wick product using the CCR.
For instance, the operator exponential e^{u+v*} is given by (3.6), and can be seen to
act on coherent vectors as follows (interpreting the Wick product as a normally ordered
product, and then performing the computation)

e^{u+v*} ℰ(h) = e^{⟨v,u⟩/2} e^u e^{v*} ℰ(h) = e^{⟨v,u⟩/2 + ⟨v,h⟩} ℰ(u + h) ,

which is the formula we gave in Chapter IV for the action of Weyl operators on
exponential vectors.
4  We now discuss a topic of probabilistic interest, that of Stratonovich integrals. This
is close to what [Slo1] and [Nie] call "ultracoherence", with a different language. We
again proceed leisurely, including topics which are not necessary for our main purpose.
On the other hand, many details have been omitted : see the papers of Hu-Meyer [HuM],
Johnson-Kallianpur [JoK], and a partial exposition in [DMM].
First of all, we consider for a while a real Hilbert space H, to forget the distinction
between the hermitian and bilinear scalar products. As usual, a generic element of Fock
space is a sum f = Σ_n f_n/n!, f_n being a symmetric element of H^{⊗n} and the sum
Σ_n ‖f_n‖²/n! being convergent. We call F = (f_n) the representing sequence of f, and
use the notation f = I(F), I for "Ito", since when H = L²(ℝ₊), F is the sequence
of coefficients in the Wiener-Ito expansion of the r.v. f. Many interesting spaces of
sequences (with varying assumptions of growth or convergence) can be interpreted as
spaces of "test-functions" or of "generalized random variables" (see the references [Kré],
[KrR]).
Given a (continuous) bilinear functional ξ on H, we wish to define a mapping Tr_ξ
from H^{⊗m} to H^{⊗(m−2)}, which in the case of H = L²(ℝ₊), ξ(u, v) = (u, v) (the
bilinear scalar product), should be the trace

(4.1)   Tr f_m(s_1, …, s_{m−2}) = ∫ f_m(s_1, …, s_{m−2}, s, s) ds

(for m = 0, 1 we define Tr f_m = 0). If ξ is given by a kernel a(s, t), we will have

(4.2)   Tr_ξ f_m(s_1, …, s_{m−2}) = ∫ f_m(s_1, …, s_{m−2}, s, t) a(s, t) ds dt .

To give an algebraic definition, we begin with the incomplete tensor power H^{⊗m} and
define

(4.3)   Tr_ξ(u_1 ⊗ ⋯ ⊗ u_m) = ξ(u_{m−1}, u_m) u_1 ⊗ ⋯ ⊗ u_{m−2} .

Since we will apply this definition to symmetric tensors, which indexes are contracted
is irrelevant, and we made an arbitrary choice. This definition is meaningful for any
bounded bilinear functional ξ, and it can be extended to the completed m-th tensor
power if ξ belongs to the Hilbert-Schmidt class.

We now consider Tr_ξ as a mapping from sequences F = (f_n) to sequences
Tr_ξ F = (Tr_ξ f_{n+2}). For instance, given an element u from the first chaos, the sequence
F = (u^{⊗n}) is the representing sequence for ℰ(u), and we have Tr_ξ(F) = ξ(u, u) F.
At least on the space of finite representing sequences, we may define the iterates
Tr_ξ^n and put T_ξ = e^{(1/2)Tr_ξ} = Σ_n Tr_ξ^n / 2^n n!. On the example above, we have
T_ξ(F) = e^{(1/2)ξ(u,u)} F. It is clear on this example that, if we carry the operator T_ξ
to Fock space without changing notation, what we get is

(4.4)   T_ξ ℰ(u) = e^{(1/2)ξ(u,u)} ℰ(u) = exp_ξ(u) ,

the exponential of u for the ξ-product. If ξ is symmetric, we have exp_ξ(u + v) =
exp_ξ(u) exp_ξ(v) (the product being the ξ-product), and this suggests that T_ξ is a
homomorphism from the incomplete Fock space with the Wick (= symmetric) product
onto itself with the ξ-product. It turns out that this result is true (see [HuM] for the
case of the Wiener product), and it can't
be true in the non-symmetric case, since the Wick product is commutative.
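The action of Tr_ξ and T_ξ on representing sequences is easy to experiment with numerically when H = ℝ^d. The sketch below (NumPy; all names are ours) builds the sequence F = (u^{⊗n}) of an exponential vector, contracts the last two slots with a kernel matrix as in (4.3), and checks both Tr_ξ F = ξ(u, u) F and the scalar component of T_ξ F = e^{ξ(u,u)/2} F by summing the series.

```python
import math
import numpy as np

A = np.array([[0.3, 0.1, 0.0],
              [0.1, 0.2, 0.1],
              [0.0, 0.1, 0.4]])          # kernel of the bilinear form xi
u = np.array([0.5, -0.2, 0.3])

def xi(x, y):
    return float(x @ A @ y)

def tensor_power(x, n):
    t = np.ones(())                      # rank-0 tensor for n = 0
    for _ in range(n):
        t = np.tensordot(t, x, axes=0)   # outer product adds one slot
    return t

def tr_xi(f):
    """Contract the last two slots of a rank >= 2 tensor with the kernel A."""
    return np.tensordot(f, A, axes=([f.ndim - 2, f.ndim - 1], [0, 1]))

# definition (4.3) on powers: Tr_xi(u^{(n+2)}) = xi(u,u) u^{(n)}
f4 = tensor_power(u, 4)
assert np.allclose(tr_xi(f4), xi(u, u) * tensor_power(u, 2))

# scalar component of T_xi F for F = (u^{(n)}):
# sum_k Tr^k f_{2k} / (2^k k!) = sum_k xi(u,u)^k / (2^k k!) = e^{xi(u,u)/2}
q = xi(u, u)
partial = sum(q**k / (2**k * math.factorial(k)) for k in range(30))
assert abs(partial - math.exp(q / 2)) < 1e-12
print("Tr_xi and T_xi checks passed")
```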
Given a sequence F, and assuming I(T_ξ(F)) is meaningful, it is denoted by S_ξ(F).
In the case of L²(ℝ₊) and the usual scalar product, S(F) is called a Stratonovich chaos
expansion. An intuitive description can be given as follows : an Ito multiple integral
and a Stratonovich integral of the same symmetric function f(s_1, …, s_m) differ by the
contribution of the diagonals, which is 0 in the case of the Ito integral, and computed
in the case of the Stratonovich integral according to the rule dX_s² = ds.
Let us now discuss the most important case for the theory of local times : that of
complex Brownian motion. The chaos coefficients are separately symmetric functions,
and the "diagonals" which contribute to the Stratonovich integral are the diagonals
{s_i = t_j}, the trace being computed according to the rules

dZ_s² = dZ̄_s² = 0 ,   dZ_s dZ̄_s = dZ̄_s dZ_s = ds .

More generally, there exist contracted integrals using a function ξ(s, t), not necessarily
symmetric. The trace in the complex case involves the contraction of one single s with
one single t, and the Tr_ξ operator applied to the representing sequence F = (u^{⊗m} v^{⊗n})
of an exponential vector multiplies it by ξ(u, v). The operator T_ξ (Stratonovich integral)
has no coefficient 1/2 : S_ξ(F) = I(e^{Tr_ξ} F).
The linear functional Ex on the space of sequences, i.e. the ordinary (vacuum)
expectation of the Stratonovich integral, is generalized as follows (we write the formula
in the complex case)

(4.5)   Ex_ξ[F] = Σ_m (1/m!) Tr_ξ^m f_{m,m} .

5  The following computation is an essential step in the proof of Le Jan's result. Let
G be an element of the chaos of order (1, 1)

(5.1)   G = Z_{u_1} ∘ Z̄_{v_1} + ⋯ + Z_{u_n} ∘ Z̄_{v_n} .



In algebraic language, the vectors v_i would rather be written as v′_i ∈ H′, and G would
be u_1 ∘ v′_1 + ⋯ + u_n ∘ v′_n. We also associate with G the corresponding Wiener product

(5.2)   Ĝ = Z_{u_1} Z̄_{v_1} + ⋯ + Z_{u_n} Z̄_{v_n} ,

which is the "Stratonovich" version of (5.1). Similarly, the Wiener exponential e^{−cĜ} is
the "Stratonovich" version of the Wick exponential exp_∘(−cG), and therefore we have

(5.3)   E[e^{−cĜ}] = Ex[exp_∘(−cG)] ,

which will be seen to exist for c close to 0. We are going to compute this quantity.
LEMMA 1. Let A be the square matrix (v_j, u_i). Then we have

(5.4)   E[e^{−cĜ}] = 1 / det(I + cA) .

Here is a useful extension : for c small enough, we have

(5.5)   E[Z_{a_1} Z̄_{b_1} ⋯ Z_{a_m} Z̄_{b_m} e^{−cĜ}] = per (b_j, (I + cA)^{−1} a_i) / det(I + cA) ,

where A now also denotes the operator with matrix (v_j, u_i), so that (I + cA)^{−1} a_i
makes sense. We give two proofs of this formula, a combinatorial one
and a more probabilistic one (only sketched) -- indeed, this is nothing but a complex
Gaussian computation.
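For the simplest instance of Lemma 1 one can check (5.4) numerically : take n = 1 and u_1 = v_1 = u with ‖u‖ = 1, so that Ĝ = |Z_u|² where Z_u is a standard complex Gaussian, |Z_u|² is exponentially distributed with mean 1, and det(I + cA) = 1 + c. The sketch below (plain Python, midpoint-rule quadrature; all names are ours) compares E[e^{−c|Z_u|²}] = ∫₀^∞ e^{−cr} e^{−r} dr with 1/(1+c):

```python
import math

def expectation_exp(c, upper=60.0, steps=200_000):
    """Midpoint-rule approximation of E[exp(-c R)] for R ~ Exp(1),
    i.e. the integral of exp(-c r) exp(-r) over [0, infinity)."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        r = (i + 0.5) * h
        total += math.exp(-(1.0 + c) * r) * h
    return total

for c in (0.1, 0.3, 0.7):
    lhs = expectation_exp(c)          # E[e^{-c G_hat}] with n = 1
    rhs = 1.0 / (1.0 + c)             # 1 / det(I + cA) with A = (1)
    assert abs(lhs - rhs) < 1e-6
print("Lemma 1 verified for n = 1")
```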
We work with the Wiener product and E instead of the Wick product and Ex. We
expand the exponential of (5.2) as a series

(5.6)   E[e^{−cĜ}] = 1 + Σ_k ((−c)^k / k!) E[(Z_{u_1}Z̄_{v_1} + ⋯ + Z_{u_n}Z̄_{v_n})^k]

(5.7)   = 1 + Σ_k ((−c)^k / k!) Σ_{τ∈Γ_k} E[Z_{u_{τ(1)}}Z̄_{v_{τ(1)}} ⋯ Z_{u_{τ(k)}}Z̄_{v_{τ(k)}}]

where Γ_k is the set of all mappings from {1, …, k} to {1, …, n}. Before we compute
(5.7), we make a remark : using the basic formula (2.3) for the expectation of a product,
we know it can be developed into a permanent of scalar products (v_j, u_i). Therefore,
the expectation does not change if we replace each v_i by its projection on the subspace
K generated by the vectors u_j. On the other hand, Ĝ depends only on the finite rank
operator α = v′_1 ⊗ u_1 + ⋯ + v′_n ⊗ u_n, with range K, and from the above we may assume the
v_j's also belong to K. Then we may assume that the u_i constitute an orthogonal basis
of K, in which the matrix of α is triangular, and the determinant we are interested in
is the product of the diagonal elements (1 + cλ_i) of I + cα.
We now start computing (5.7) : let i_1 < ⋯ < i_p be the range of the mapping τ ; since
τ is not necessarily injective, each point i_j has a multiplicity k_j with k_1 + ⋯ + k_p = k.
All the mappings τ with the same range and multiplicities contribute the same term
to the sum, and their number is k!/k_1! ⋯ k_p!. Therefore we rewrite (5.7) as

(5.8)   1 + Σ_k (−c)^k Σ_{k_1+⋯+k_p=k ; i_1<⋯<i_p} E[(Z_{u_{i_1}}Z̄_{v_{i_1}})^{k_1} ⋯ (Z_{u_{i_p}}Z̄_{v_{i_p}})^{k_p}] / k_1! ⋯ k_p!

We rewrite the expectation as

E[Z_{f_1} Z̄_{h_1} ⋯ Z_{f_k} Z̄_{h_k}]

with f_j = u_{i_1}, h_j = v_{i_1} for 1 ≤ j ≤ k_1, u_{i_2}, v_{i_2} for k_1 < j ≤ k_1 + k_2, etc., and
formula (2.3) expands it as a sum, over all permutations σ of {1, …, k}, of products
Π_j (h_j, f_{σ(j)}). The matrix A being triangular, σ does not contribute unless it preserves
the intervals (1, k_1), (k_1+1, k_1+k_2), … ; the contribution of such a permutation is
(v_{i_1}, u_{i_1})^{k_1} ⋯ (v_{i_p}, u_{i_p})^{k_p}, and the number of such permutations is k_1! ⋯ k_p!. Therefore
the expectation (5.6) has the value

1 + Σ_k (−c)^k Σ_{k_1+⋯+k_p=k ; i_1<⋯<i_p} λ_{i_1}^{k_1} ⋯ λ_{i_p}^{k_p} = (1 + cλ_1)^{−1} ⋯ (1 + cλ_n)^{−1}

for c small enough. The theorem is proved. The extension is not difficult : assuming the
t_i's and c are small enough, we have from the preceding result

E[exp(−cĜ + t_1 Z_{a_1}Z̄_{b_1} + ⋯ + t_m Z_{a_m}Z̄_{b_m})] = 1 / det(I + cA − U) ,

where U = Σ_i t_i b′_i ⊗ a_i. Putting C = I + cA and assuming C is invertible, this
determinant can be written det(C) det(I − C^{−1}U). On the other hand, C^{−1}U is the
finite rank operator Σ_i t_i b′_i ⊗ c_i, with c_i = C^{−1}a_i, and if the t_i are small enough, we
may use again the main result to compute 1/det(I − C^{−1}U). Therefore

E[exp(−cĜ + Σ_i t_i Z_{a_i}Z̄_{b_i})] = E[exp(Σ_i t_i Z_{c_i}Z̄_{b_i})] / det(C) .

We identify the coefficients of t_1 ⋯ t_m on both sides and get the required formula.
There is a similar computation for the "ξ-product" defined above, if ξ is symmetric :
the contractions ξ(v_j, u_i) and ξ(b_j, a_i) then replace the scalar products (v_j, u_i) and
(b_j, a_i).
Let us now sketch a proof using Gaussian integrals. It has the advantage of expressing
the Wick exponential itself as a standard (Wiener) exponential, instead of merely
computing its expectation. We start with the elementary Fock space over ℂ or ℂ²,
i.e. the L² space generated by a standard Gaussian variable X (real) or Z (complex).
We recall the classical formula (for c > 0)

(5.9)   e^{cX²/2} = (1/√c) ∫ e^{uX} e^{−u²/2c} du ,

where du is the "Plancherel measure" (Lebesgue measure divided by √(2π)). From the
interpretation of exp:(uX) as e^{uX − u²/2} we deduce that

(5.10)   exp:(cX²/2) = (1/√(1+c)) e^{(c/(1+c)) X²/2} ,   e^{aX²/2} = (1/√(1−a)) exp:((a/(1−a)) X²/2) .
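Since a Wick exponential has vacuum expectation 1, the first identity in (5.10) can be tested numerically : one should find E[(1+c)^{−1/2} e^{cX²/(2(1+c))}] = 1 for a standard Gaussian X, which follows from E[e^{bX²}] = (1−2b)^{−1/2}. The sketch below (plain Python, midpoint quadrature against the normal density; all names are ours) performs this check:

```python
import math

def normal_expectation(f, lo=-12.0, hi=12.0, steps=200_000):
    """Midpoint-rule approximation of E[f(X)] for X standard normal."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += f(x) * math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi) * h
    return total

for c in (0.2, 0.5, 1.0):
    b = c / (2.0 * (1.0 + c))         # exponent in (5.10): c X^2 / (2(1+c))
    val = normal_expectation(lambda x: math.exp(b * x * x)) / math.sqrt(1.0 + c)
    assert abs(val - 1.0) < 1e-6      # vacuum expectation of a Wick exponential
print("E[exp:(c X^2 / 2)] = 1 checked")
```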

What about the ξ-product ? Since the basic Hilbert space H is simply ℂ, let us put
ξ(1, 1) = q and assume for the moment that q > 0. Then the ξ-product is the Wiener
product w.r.t. a Gaussian law of variance q, and the Wick exponential exp:(uX) takes
the form e^{uX − qu²/2}. This has a combinatorial meaning which can be extended to all
values of q. The same reasoning as above then gives us

(5.11)   exp:(cX²/2) = (1/√(1+qc)) exp_ξ((c/(1+qc)) X²/2) .

The computation is similar, but simpler, in the complex case : we have exp:(uZ + vZ̄) =
e^{uZ + vZ̄ − uv}, and (returning to the representation Z = (X + iY)/√2 as an intermediate
step) we deduce

(5.12)   exp:(cZZ̄) = (1/(1+c)) e^{(c/(1+c)) ZZ̄} .

This elementary formula can now be used to compute Wick exponentials on the complex
Brownian motion Fock space, for elements of the second chaos of the form

G = Z_{u_1} : Z̄_{v_1} + ⋯ + Z_{u_n} : Z̄_{v_n}

(vectors and linear functionals are identified via the bilinear scalar product). To perform
this, one remarks that G depends only on the operator A = Σ_i u_i ⊗ v′_i, and then one
reduces A to a diagonal or at least triangular form : we give no details.

Antisymmetric computations

For a complete exposition of the rich algebra underlying the informal presentation
we give here, we refer to the papers by Helmstetter in the references.
6  We start with a translation of the elementary Clifford algebra of Chapter II. There
we had a finite dimensional Hilbert space H with a basis x_i, orthonormal for the
bilinear scalar product ( , ) ; we built the corresponding basis (x_A) for the (symmetric
or) antisymmetric (toy) Fock space, and the Clifford multiplication, which we now
denote explicitly by ◇, was defined by the table x_A ◇ x_B = (−1)^{n(A,B)} x_{AΔB}. Now
the antisymmetric Fock space is nothing but the exterior algebra over H, and for
A = {i_1 < ⋯ < i_n}, x_A is nothing but the exterior product x_{i_1} ∧ ⋯ ∧ x_{i_n}. Thus we
have two different multiplications on the same space, and we want to put into algebraic
language the relation between them, as we did for Wick and Wiener. The main element
of structure is the bilinear scalar product, which we denote by ξ(u, v) instead of (u, v).
First of all, 1 is the unit for both products. Then for elements of the first chaos, we
have

(6.1)   u ◇ v = u ∧ v + ξ(u, v) 1 ,

implying the CAR {u, v}_◇ = 2 ξ(u, v) 1.


We have a "multiplication formula for stochastic integrals" which expresses

(u_1 ∧ ⋯ ∧ u_m) ◇ (v_1 ∧ ⋯ ∧ v_n)

as the sum of terms in all the chaos of order m + n − 2p involving p contractions.
Let us write the simplest non trivial case :

(6.2)   u ◇ (v_1 ∧ ⋯ ∧ v_n) =
u ∧ v_1 ∧ ⋯ ∧ v_n + Σ_i (−1)^{i+1} ξ(u, v_i) v_1 ∧ ⋯ ∧ v̂_i ∧ ⋯ ∧ v_n ,

omitting the vector with a hat. We leave it to the reader to write down the general
formula. These prescriptions can be checked directly on the multiplication table, when
each u_i or v_j is a basis element x_k, and then extended to arbitrary elements of the first
chaos by linearity. On the other hand, the associativity of the multiplication formula
we get is clear, as we are computing within an algebra known to be associative ! Since
any non-degenerate symmetric bilinear form can be interpreted as a scalar product,
we get in this way a general associativity result. Once it is known for non-degenerate
bilinear forms, it extends to degenerate ones. Finally, the result extends at once to
infinite-dimensional spaces.
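The toy Fock space multiplication table is small enough to check by machine. The sketch below (plain Python; all names are ours) represents the basis vectors x_A by frozensets and implements x_A ◇ x_B = (−1)^{n(A,B)} x_{AΔB}, taking n(A,B) = #{(a, b) ∈ A×B : a > b} as the sign count (our assumption for the convention of Chapter II, which is not reproduced here). It then verifies associativity on all basis triples over {1, 2, 3} and the CAR {x_i, x_j}_◇ = 2δ_{ij} 1:

```python
from itertools import chain, combinations

def n_sign(A, B):
    """Number of pairs (a, b) in A x B with a > b (assumed sign count)."""
    return sum(1 for a in A for b in B if a > b)

def diamond(x, y):
    """Clifford product of linear combinations of basis vectors x_A,
    represented as dicts {frozenset: coefficient}."""
    out = {}
    for A, ca in x.items():
        for B, cb in y.items():
            C = A ^ B                       # symmetric difference A delta B
            s = (-1) ** n_sign(A, B)
            out[C] = out.get(C, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

def basis(*elems):
    return {frozenset(elems): 1}

subsets = [frozenset(s) for s in chain.from_iterable(
    combinations((1, 2, 3), r) for r in range(4))]

# associativity on all basis triples
for A in subsets:
    for B in subsets:
        for C in subsets:
            a, b, c = {A: 1}, {B: 1}, {C: 1}
            assert diamond(diamond(a, b), c) == diamond(a, diamond(b, c))

# CAR: x_i <> x_j + x_j <> x_i = 2 delta_ij 1
for i in (1, 2, 3):
    for j in (1, 2, 3):
        anti = diamond(basis(i), basis(j))
        for k, v in diamond(basis(j), basis(i)).items():
            anti[k] = anti.get(k, 0) + v
        anti = {k: v for k, v in anti.items() if v != 0}
        expected = {frozenset(): 2} if i == j else {}
        assert anti == expected
print("toy Clifford checks passed")
```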
REMARK. In the symmetric case, it is clear on formula (3.4) that the standard annihilation
operators a⁻_{u′} act as derivations on all ξ-products. Here we have the same result
for fermion operators b⁻_{u′}, with the only difference that the derivations are graded ones,
i.e. b_{u′}(x ◇ y) = (b_{u′} x) ◇ y + σ(x) ◇ b_{u′}(y), where σ is the parity automorphism.
Now comes a crucial remark for Le Jan's result : what we have done does not
require the symmetry of ξ, though of course the proof of associativity by reduction
to toy Fock space does. We postpone the general proof of associativity to the last
subsection of this Appendix. The symmetric form 2ξ must be replaced in the CAR by
η(u, v) = ξ(u, v) + ξ(v, u) : thus all algebras with the same η are isomorphic Clifford
algebras, but the way they are concretely realized on the antisymmetric Fock space
depends on the choice of ξ. In particular, if ξ is antisymmetric, what we get is a new
Grassmann algebra structure ¹ on Fock space, which we distinguish from the usual one
by means of the sign ◇.
We take now a linear space H with a bilinear form ξ, and denote by H′ a second copy
of H -- the ′ may thus have two meanings, either of suggesting duality when a scalar
product is given, or of a mark distinguishing elements of the second summand, and the
confusion between these meanings is harmless, or even useful. We define G = H ⊕ H′,
and extend ξ as a bilinear form on G under which the two summands are totally
isotropic spaces :

(6.3)   ξ(u + v′, w + z′) = ξ(u, z) ± ξ(w, v) .

If we choose the + (−) sign, we get a symmetric (antisymmetric) form. The generators
we use for the (incomplete) antisymmetric Fock space over G are "normally ordered"
ones, u_1 ∧ ⋯ ∧ u_m ∧ v′_1 ∧ ⋯ ∧ v′_n. Applying the preceding remark, we define multiplications
on this double Fock space, with the following prescriptions : strings of u's or v′'s are
multiplied as in the exterior algebra without correction ; the products of mixed strings
are expressed as modified exterior products involving contractions. For instance

(6.4)   u ◇ (u_1 ∧ ⋯ ∧ u_m ∧ v′_1 ∧ ⋯ ∧ v′_n) =
u ∧ u_1 ∧ ⋯ ∧ u_m ∧ v′_1 ∧ ⋯ ∧ v′_n + Σ_i (−1)^{m+i+1} ξ(u, v_i) u_1 ∧ ⋯ ∧ u_m ∧ v′_1 ∧ ⋯ v̂′_i ⋯ ∧ v′_n ,

(6.4')  v' ◇ (u_1 ∧ … ∧ u_m ∧ v'_1 ∧ … ∧ v'_n) =
        (−1)^m u_1 ∧ … ∧ u_m ∧ v' ∧ v'_1 ∧ … ∧ v'_n ∓ Σ_i (−1)^{m+i} ξ(u_i, v) u_1 ∧ … û_i … ∧ u_m ∧ v'_1 ∧ … ∧ v'_n

1 This was misunderstood in the report [Mey4], which contains a confusion between
this "new Grassmann" product and a Clifford product.

(the sign ∓ is − for Clifford, + for "new Grassmann").


If H = L²(ℝ₊) and ξ is the standard scalar product, one may also use a probabilistic
notation : we take an antisymmetric double Fock space and denote by P and Q the
corresponding two complex Brownian motions (which anticommute). Then we read u
as ∫ u(s) dP_s and v' as ∫ v(s) dQ_s. We also read

u_1 ∧ … ∧ u_m ∧ v'_1 ∧ … ∧ v'_n   as   P_{u_1} ∧ … ∧ P_{u_m} ∧ Q_{v_1} ∧ … ∧ Q_{v_n} .


The notation P, Q comes from symplectic geometry : it has nothing to do with the two
real Brownian motions on simple Fock space, which satisfied the CCR : we are here in
the CAR realm. In both cases we have for all s, t that dP_s anticommutes with dP_t,
dQ_s with dQ_t without restriction, and dP_s anticommutes with dQ_t for s ≠ t. For the
Clifford product, we have

(6.5)   dP_t ◇ dQ_t = dt = dQ_t ◇ dP_t ,

and therefore {dP_t, dQ_t} = 2 dt, while for the "new Grassmann" product, we have

(6.6)   dP_t ◇ dQ_t = dt = − dQ_t ◇ dP_t ,

and thus dP_s anticommutes with dQ_t without restriction.
We concentrate our interest on the "new Grassmann" product ◇, and leave aside
the Clifford one.
The ◇-products associated with non-symmetric forms have another interesting appli-
cation : Le Jan [LeJ3] used them to extend Dynkin's method to non-symmetric Markov
processes.
One can define on the incomplete double Fock space a linear functional E× similar
to that of symmetric Fock space, and such that

E×[u_1 ∧ … ∧ u_m ∧ v'_1 ∧ … ∧ v'_n] = E[u_1 ◇ … ◇ u_m ◇ v'_1 ◇ … ◇ v'_n] =
E[(u_1 ∧ … ∧ u_m) ◇ (v'_1 ∧ … ∧ v'_n)] .

Intuitively speaking, E× is the expectation of an antisymmetric Stratonovich integral,
which, however, has not been systematically developed in the antisymmetric case : a
"trace" operator should be introduced and studied to define E× for classes of coefficients
belonging to the completed L² spaces.
This linear functional is given by the formula

(6.7)   E×[P_{u_1} ∧ … ∧ P_{u_m} ∧ Q_{v_1} ∧ … ∧ Q_{v_n}] = ρ(m) det(⟨u_i, v_j⟩)   if m = n ,

and 0 if m ≠ n.
7  We are going now to prove the antisymmetric analogue of (5.4). We consider an
element of the second chaos of the form

(7.1)   G = P_{u_1} ∧ Q_{v_1} + … + P_{u_n} ∧ Q_{v_n} ,

a finite sum ; u_1 ∧ v'_1 + … + u_n ∧ v'_n if you prefer algebraic notation. We associate with
it a similar "new Grassmann" object

(7.2)   G̃ = P_{u_1} ◇ Q_{v_1} + … + P_{u_n} ◇ Q_{v_n} .



The corresponding exponential series are finite sums, and no convergence problems
arise. The result is the following :
LEMMA 2. Denoting by A the matrix (⟨v_j, u_i⟩), we have

(7.3)   E×[e^G] = E[exp_◇ G̃] = det(I + A) .

As in the symmetric case there is a useful extension :

(7.4)   E×[P_{a_1} ∧ Q_{b_1} … P_{a_m} ∧ Q_{b_m} e^G] = det(⟨b_j, (I + C)^{−1} a_i⟩) det(I + A) ,

where C is the operator Σ_j v'_j(·) u_j and I + C is assumed to be invertible.


PROOF. We use the standard Grassmann product and E×. We first expand

e^G = 1 + Σ_k (1/k!) Σ_{τ∈I_k} P_{u_{τ(1)}} ∧ Q_{v_{τ(1)}} ∧ … ∧ P_{u_{τ(k)}} ∧ Q_{v_{τ(k)}}

where I_k is the set of all injective mappings from {1,…,k} to {1,…,n}. Moving
around a block P_u Q_v produces an even number of inversions and does not change the
product; therefore we may reduce I_k to the set J_k of strictly increasing mappings,
cancelling the factor 1/k!. Now we take all the P_u's to the left and get

1 + Σ_k Σ_{τ∈J_k} ρ(k) P_{u_{τ(1)}} ∧ … ∧ P_{u_{τ(k)}} ∧ Q_{v_{τ(1)}} ∧ … ∧ Q_{v_{τ(k)}} .

Then we apply E× and the factor ρ(k) disappears : we get (putting a_{ji} = ⟨v_j, u_i⟩)

1 + Σ_k Σ_{τ∈J_k} det(⟨v_{τ(j)}, u_{τ(i)}⟩) = 1 + Σ_{∅ ≠ B ⊂ {1,…,n}} det((a_{ji})_{i,j∈B}) .

We must show this is the same as

det(I + A) = Σ_σ ε_σ Π_i (δ_{iσ(i)} + a_{iσ(i)}) .

We expand the product on the right. Each index i may contribute either a factor a_{iσ(i)}
or a factor δ_{iσ(i)}. Let B be the set of indices of the first kind. If the product is
different from 0, σ must leave the indices of the second kind fixed, and therefore arises
from a permutation of B, which has the same signature. We then finish the proof by
rearranging the sum as a sum over B. The extension is proved as in the symmetric case.
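The combinatorial identity used in this proof, det(I + A) = 1 + Σ_B det(A_B) where B runs over the nonempty subsets of indices (the sum of principal minors), is easy to check numerically; the sketch below is our own illustration, using numpy.

```python
import itertools
import numpy as np

def sum_of_principal_minors(a: np.ndarray) -> float:
    """Compute 1 + sum over nonempty index sets B of det of the principal submatrix A_B."""
    n = a.shape[0]
    total = 1.0
    for k in range(1, n + 1):
        for b in itertools.combinations(range(n), k):
            total += np.linalg.det(a[np.ix_(b, b)])
    return total

rng = np.random.default_rng(0)
a = rng.standard_normal((5, 5))
# The expansion of det(I + A) over choices of delta- or a-factors, grouped by B:
assert np.isclose(np.linalg.det(np.eye(5) + a), sum_of_principal_minors(a))
```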

S u p e r s y m m e t r i c Fock space

8  We now consider the supersymmetric Fock space, i.e. the tensor product of the
previously considered symmetric and antisymmetric Fock spaces, constructed over H ⊕ H'
-- with H = L²(ℝ₊) in the probabilistic interpretation, two complex Brownian motions
being thus involved, or four real ones. The classical analogue is the space of all differential
forms on a symplectic manifold. We may provide this space with several different
products. Those we are interested in are the "Wick" product (Wick ⊗ Grassmann) which

we denote by *, and the "Wiener" product (Wiener ⊗ new Grassmann) which we denote
by ◇ (sometimes omitted!) as its antisymmetric part.
Consider an operator α of finite rank on H : it can be written (non-uniquely) as

(8.1)   α = Σ_i v'_i(·) u_i

with v'_i ∈ H'. Then we associate with it an element of the supersymmetric Fock space
(which depends only on α, not on the representation)

(8.2)   λ(α) = Σ_i (Z_{u_i} Z̄_{v_i} − P_{u_i} ◇ Q_{v_i}) = Σ_i (:Z_{u_i} Z̄_{v_i}: − P_{u_i} ∧ Q_{v_i}) .

We have used the probabilistic interpretation, where v'_i becomes v_i. The "supersym-
metric miracle" which happens with these elements is the following : if we multiply
(5.4) and (7.3), the determinants in the numerator and the denominator cancel, and
therefore we have, for ε small enough, the exponential being a Wiener one

(8.3)   E[exp_◇(ε λ(α))] = 1 .

Now we insert in front of each term in the sum (8.2) a coefficient ε_i (assumed to be
small) and differentiate with respect to all of them. We find that, given any finite family
of operators of finite rank,

(8.4)   E[λ(α_1) … λ(α_n)] = 0 .

If instead of applying (5.4) and (7.3) we apply their extensions, we find that

E[Z_a Z̄_b exp(Σ_i ε_i (Z_{u_i} Z̄_{v_i} − P_{u_i} ◇ Q_{v_i}))] = ⟨b, (I − Σ_i ε_i v'_i ⊗ u_i)^{−1} a⟩

and then, if we identify the coefficients of ε_1 … ε_n on both sides, we get, putting
α_i = v'_i ⊗ u_i (an operator of rank 1),

(8.5)   E[Z_a Z̄_b λ(α_1) … λ(α_n)] = Σ_σ ⟨b, α_{σ(1)} … α_{σ(n)} a⟩ .

By linearity, we may extend this to operators α_i of finite rank. The formula can be
extended to Hilbert-Schmidt operators, but still this is a serious restriction.
The right side of (8.5) is the same as (2.1) in section 1, which itself was a rewriting
of (1.9), "the n-point function of the occupation field". This has now become the
expectation of one single product on supersymmetric Fock space, a simpler result than
we could achieve in §1. Maybe our reader has forgotten the concrete application we
had in view, assuming local times do exist : the computation, for F a polynomial in n
variables, of the expectation of F(L^{x_1}, …, L^{x_n}) !

Properties of the supersymmetric Wiener product

9  We are going to prove a few useful algebraic results concerning the supersymmetric
Wiener product. First, let us compute the norm of λ(α) in Fock space. Let α and β
be two finite rank operators. Then we have

(9.1)   ⟨λ(α), λ(β)⟩ = 2 ⟨α, β⟩_HS ,

where the scalar product on the right is the Hilbert-Schmidt scalar product between
operators. To see this, we write

λ(α) = Σ_i :Z_{u_i} Z̄_{v_i}: − P_{u_i} ∧ Q_{v_i}

and similarly for β with vectors ū_i, v̄_i (it does not restrict generality to assume the
index sets are the same). We understand these elements as random variables in the
second chaos of a double complex Brownian motion Fock space. Then the Z, Z̄ and
P, Q parts decouple, and contribute the same quantity (hence the factor 2), which is
Σ_{ij} ⟨u_i, ū_j⟩ ⟨v̄_j, v_i⟩, easily interpreted as a Hilbert-Schmidt scalar product.
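In the real, finite dimensional case this last step reduces to the elementary identity tr(αᵀβ) = Σ_{ij} ⟨u_i, ū_j⟩⟨v̄_j, v_i⟩ for α = Σ_i u_i v_iᵀ and β = Σ_j ū_j v̄_jᵀ; a quick numerical check (our own illustration, with our variable names):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 3
U, V = rng.standard_normal((r, d)), rng.standard_normal((r, d))     # alpha = sum_i u_i v_i^T
Ub, Vb = rng.standard_normal((r, d)), rng.standard_normal((r, d))   # beta  = sum_j ubar_j vbar_j^T

alpha = sum(np.outer(U[i], V[i]) for i in range(r))
beta = sum(np.outer(Ub[j], Vb[j]) for j in range(r))

hs = np.trace(alpha.T @ beta)   # Hilbert-Schmidt scalar product of the two operators
double_sum = sum((U[i] @ Ub[j]) * (Vb[j] @ V[i]) for i in range(r) for j in range(r))
assert np.isclose(hs, double_sum)
```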


We now describe a second "supersymmetric miracle". We consider an arbitrary vector
u in H with ⟨u, u⟩ = q. We deduce from formula (5.12) that, for s small enough,

(9.2)   exp_:(s Z_u Z̄_u) = (1/(1 + sq)) exp((s/(1 + sq)) Z_u Z̄_u) .

On the other hand, we have

(9.3)   exp_∧(−s P_u ∧ Q_u) = 1 − s P_u ∧ Q_u = 1 − s (P_u ◇ Q_u − q)
        = (1 + sq)(1 − (s/(1 + sq)) P_u ◇ Q_u) = (1 + sq) exp_◇(−(s/(1 + sq)) P_u ◇ Q_u) .

Multiplying (9.2) and (9.3) we get

(9.4)   exp_*(s λ(u ⊗ u')) = exp_◇((s/(1 + sq)) λ(u ⊗ u')) .

We recall the generating function for a family of Laguerre polynomials

exp(tx/(1 + t)) = Σ_n P_n(x) t^n   with   P_n(x) = (−1)^n L_n^{(−1)}(x) ,

and then expanding the exponential gives an expression for supersymmetric "Wick"
powers using "Wiener" powers

(9.5)   λ(u ⊗ u')^{*n} = n! q^n P_n(λ(u ⊗ u')/q)_◇ .
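The generating function quoted above can be checked numerically: the sketch below (our own illustration) computes L_n^{(−1)} by the standard three-term recurrence for generalized Laguerre polynomials and compares the series Σ_n (−1)^n L_n^{(−1)}(x) t^n with exp(tx/(1+t)).

```python
import math

def laguerre_minus1(n_max: int, x: float) -> list[float]:
    """L_n^{(-1)}(x) for n = 0..n_max, via the three-term Laguerre recurrence."""
    alpha = -1.0
    L = [1.0, 1.0 + alpha - x]   # L_0 = 1, L_1^{(alpha)}(x) = 1 + alpha - x
    for n in range(1, n_max):
        L.append(((2*n + 1 + alpha - x) * L[n] - (n + alpha) * L[n-1]) / (n + 1))
    return L[:n_max + 1]

x, t, N = 1.7, 0.1, 40
L = laguerre_minus1(N, x)
series = sum((-1)**n * L[n] * t**n for n in range(N + 1))
assert abs(series - math.exp(t * x / (1 + t))) < 1e-12
```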

10 After we used the theory of local times as our motivation to get into twenty pages of
algebra, it may be disappointing that we devote little space to probabilistic applications.

However, we cannot hide the fact that, for these applications, the best methods are
purely probabilistic : a good way to dive into the literature is to look at the volumes
XIX to XXI of the Séminaire de Probabilités, and to look therein at the papers by
Dynkin, Le Gall, Le Jan, Rosen and Yor (and their references). On the other hand, the
relations with quantum field theory are described in a series of papers by Dynkin. Here,
we merely mention the problems.
Recall that our basic (pre)Hilbert space H is a space of signed measures with
bounded potential, with the energy scalar product. With every measure μ of finite
energy, we associated an operator a_μ on the space H

a_μ(ν) = (Gμ) ν .

If the process has local times, i.e. if point measures have finite energy, and if the support
of μ is a finite set, then a_μ has finite rank, and we may associate with it an element
λ(μ) of the second chaos as explained before, and apply all the previous algebraic
computations. In particular, we may associate with every point x an element Z_x Z̄_x
of the second symmetric chaos, and an element λ_x = Z_x Z̄_x − P_x Q_x of the second
supersymmetric chaos. Computations involving the system of local times may be reduced
to algebraic computations on these vectors (see Le Jan's paper [LeJ1] for details). Note
that λ_x has expectation 0 : its definition implicitly contains a "renormalization".
Assume now that local times do not exist. Then we may think of Z_x Z̄_x and λ_x as
"generalized random variables" and wonder whether they belong to a true "generalized
field", i.e. whether we may define something like ∫ λ_x f(x) dx for a smooth function f with
compact support, and more generally ∫ λ_x^{*n} f(x) dx (Wick power). The symmetric
part of this integral will then be interpreted as a "renormalized power" of the occupation
field. The method to achieve this is the following : we replace the point measure ε_x by
a regularization ε_x(t), λ_x by the corresponding λ_x(t), and use the algebra to check
whether the corresponding integrals converge in L² as t → 0. It turns out that the
computation is not too difficult, and that convergence takes place in dimension 2,
but not in higher dimensions (the divergence of g(x, y) on the diagonal is logarithmic
in dimension 2). For details, we refer to [LeJ2]. Unfortunately, this description of
the renormalization problem has reduced it to its abstract core, stripping it of its
probabilistic interest : relations with multiple points of Brownian motion, asymptotic
behaviour of the volume of the Wiener sausage, etc.

Proof of associativity of ◇-products


11  We are going to prove that the computations involving contractions we performed
around (6.2) define an associative product. The following proof is adapted from Le Jan.
We may reduce the infinite dimensional case to a finite dimensional one : consider a
space H of finite dimension N with a scalar product ⟨u, v⟩. We give ourselves a bilinear
form ξ(u, v) = ⟨Au, v⟩ on H, and put η(u, v) = ξ(u, v) + ξ(v, u). We will assume that ξ
and η are non-degenerate. This is not a serious restriction : non-degeneracy holds for
ξ(u, v) + t ⟨u, v⟩ except for finitely many values of t, and the associativity result may
be extended by continuity to t = 0.
On the antisymmetric Fock space Φ over H, we denote by b⁺_u, b_u the standard
fermion creation and annihilation operators, satisfying the CAR {b⁺_u, b_v} = ⟨u, v⟩.

Then we put

(11.1)   R_u = b⁺_u + b_{Au} ,   S_u = b⁺_{Bu} − b_u ,

where B satisfies ⟨Au, Bv⟩ = ⟨u, v⟩. Knowing how b_u operates on exterior products,
all amounts to proving this : there exists an associative product ◇ on Φ, with 1 as
unit, such that for u ∈ H and x ∈ Φ we have u ◇ x = R_u x.
It is easy to see that

{R_u, R_v} = ⟨u, Av⟩ + ⟨Au, v⟩ ,   {S_u, S_v} = −⟨u, Bv⟩ − ⟨Bu, v⟩ ,   {R_u, S_v} = 0 .

Therefore, the operators R_u, S_v taken together generate a Clifford algebra over the
space H ⊕ H of even dimension 2N, relative to a non-degenerate quadratic form.
Applying the fundamental uniqueness result on Clifford algebras (II.5.5) we see that
the operators R_u, S_v generate an algebra of dimension 2^{2N}, which is therefore the full
algebra L(Φ). On the other hand, the two algebras R and S generated by the R_u and
S_v separately are Clifford algebras of dimension 2^N.
Let Φ⁺ (Φ⁻) be the even (odd) subspace of Φ ; let R⁺ (S⁺) and R⁻ (S⁻) be
the even (odd) subspaces of R and S. Note that R_u and S_u are odd operators in the
usual sense, and therefore R⁺, S⁺ preserve parity in Φ while R⁻, S⁻ change it. We
denote below by R^ε, S^δ generic elements of R^±, S^±.
Let A ∈ R such that A1 = 0 ; decompose it into A⁺ + A⁻. Since A^±1 belongs to
Φ^± we must have A^±1 = 0. Then applying R^ε for ε = ± and using the appropriate
commutation relation we find that A^± R^ε 1 = 0. Then applying S^δ for δ = ± and
using the appropriate commutation relation with A^± R^ε we find that A^± R^ε S^δ 1 = 0.
Putting everything together we find that A R S 1 = 0 for arbitrary R ∈ R, S ∈ S.
On the other hand, the algebra generated by R and S is the full algebra, and therefore A = 0. The
mapping A ↦ A1 from R to Φ being injective, a dimension argument shows that
it is surjective. Therefore we may use it to identify R with Φ, and define on Φ an
associative product ◇, with 1 as unit, such that, if x, y are elements of Φ, and R_x, R_y
are the unique elements of R such that x = R_x 1, y = R_y 1, we have x ◇ y = R_x R_y 1.
In particular, if u, v belong to H, we have

u ◇ v = R_u R_v 1 = (b⁺_u + b_{Au}) v = u ∧ v + ⟨Au, v⟩ .

More generally, we have for x ∈ Φ, u ◇ x = R_u R_x 1 = R_u x, the main property we
wanted to prove.
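The construction above can be tested in small dimension: realize the CAR concretely (we use a Jordan-Wigner matrix construction, which is our own choice, not the text's), form R_u = b⁺_u + b_{Au}, and check the anticommutation relations and the formula u ◇ v = R_u R_v 1 = u ∧ v + ⟨Au, v⟩. A sketch with numpy, variable names ours:

```python
import numpy as np
from functools import reduce

N = 3
I2, Z = np.eye(2), np.diag([1.0, -1.0])
sig = np.array([[0.0, 1.0], [0.0, 0.0]])      # single-mode annihilator |0><1|

def mode(k):
    """Jordan-Wigner annihilation operator c_k on (C^2)^{tensor N}."""
    return reduce(np.kron, [Z] * k + [sig] + [I2] * (N - 1 - k))

c = [mode(k) for k in range(N)]
anti = lambda a, b: a @ b + b @ a

rng = np.random.default_rng(2)
A = rng.standard_normal((N, N))               # xi(u, v) = <Au, v>
u, v = rng.standard_normal(N), rng.standard_normal(N)

b = lambda w: sum(wk * ck for wk, ck in zip(w, c))   # b_w  (real coefficients)
bdag = lambda w: b(w).T                               # b^+_w
R = lambda w: bdag(w) + b(A @ w)                      # R_w = b^+_w + b_{Aw}

eta = A + A.T                                 # eta(u, v) = xi(u, v) + xi(v, u)
assert np.allclose(anti(bdag(u), b(v)), (u @ v) * np.eye(2**N))          # CAR
assert np.allclose(anti(R(u), R(v)), (u @ eta @ v) * np.eye(2**N))       # {R_u, R_v} = eta(u, v)

# u <> v := R_u R_v Omega equals u ^ v + xi(u, v) Omega
omega = np.zeros(2**N); omega[0] = 1.0
x = R(u) @ (R(v) @ omega)
assert np.isclose(x @ omega, A @ u @ v)       # vacuum part = <Au, v> = xi(u, v)
for i in range(N):
    for j in range(i + 1, N):
        two = c[i].T @ (c[j].T @ omega)       # basis vector c_i^+ c_j^+ Omega
        assert np.isclose(x @ two, u[i] * v[j] - u[j] * v[i])
```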
REMARK. In Helmstetter's notes [Hel2] (or p. 35 of the more easily accessible paper
[Hel1]), one can find an explicit formula for the ◇ product, from which a direct and
general proof of associativity can be deduced, including the degenerate cases. We will
now give this formula.
Let E be a finite dimensional space with dual E'. We use greek letters for elements
of E' and its exterior algebra. We recall there is a mapping (α, x) ↦ α ⌟ x from
Λ(E') × Λ(E) to Λ(E), called the interior product, which is a generalized annihilation
operator :
1) For α ∈ E', x ∈ Λ(E), we have α ⌟ x = b_α(x), with the additional convention
that 1 ⌟ x = x.

2) (α ∧ β) ⌟ x = α ⌟ (β ⌟ x).
This product acts as a graded derivation :
3) α ⌟ (x ∧ y) = (α ⌟ x) ∧ y + σ(x) ∧ (α ⌟ y), σ denoting the parity automorphism.
More generally, Helmstetter ([Hel2] p. 25) shows that we may replace ∧ by ◇, an
arbitrary ξ product on E.
We take E = H ⊕ H̄, where H is finite dimensional with dual H'. To distinguish
the second component we use a bar, since we cannot use the ' as before. We identify the
exterior algebra Λ(E) with Λ(H) ⊗ Λ(H̄) : this means we use antisymmetry to express
elements of Λ(E)

(u_1 + v̄_1) ∧ … ∧ (u_p + v̄_p)

as linear combinations of normally ordered products

(u_{i_1} ∧ … ∧ u_{i_m}) ⊗ (v̄_{j_1} ∧ … ∧ v̄_{j_n})

with m + n = p. The dual Λ(E') then gets identified with Λ(H') ⊗ Λ(H̄'). On the other
hand, a bilinear form ξ on H defines an antisymmetric bilinear form on H ⊕ H̄,
which is thus imbedded into Λ²(E'), and has an exterior exponential in Λ(E').
Finally, there is a natural mapping μ from E to H given by μ(u + v̄) = u + v, which
extends to the exterior algebras by μ(h ∧ k) = μ(h) ∧ μ(k) for h, k ∈ Λ(E) ; for example,
given x, y ∈ Λ(H) we simply have μ(x ⊗ ȳ) = x ∧ y.
After all these preliminaries, we can give the closed formula for the ξ product on
Λ(H) : given x, y ∈ Λ(H), x ⊗ ȳ belongs to Λ(E), and we take

x ◇ y = μ(exp_∧(ξ) ⌟ (x ⊗ ȳ)) .

For instance, if x, y ∈ H, the only terms in the exponential that contribute are 1 + ξ,
and we get

1 ⌟ (x ⊗ ȳ) = x ⊗ ȳ ,   ξ ⌟ (x ⊗ ȳ) = ξ(x, y) 1 ,

and applying μ we have x ◇ y = x ∧ y + ξ(x, y) 1 as it should be. This is clearly the
analogue of the trace formula for Stratonovich integrals.
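Helmstetter's closed formula can be turned into a small program: represent elements of the exterior algebras by dictionaries of sorted index tuples, implement the wedge and interior products by rules 1)-3) above, and define x ◇ y = μ(exp_∧(ξ) ⌟ (x ⊗ ȳ)). The sketch below is our own illustration (the variable names and the sign convention for embedding ξ into Λ²(E'), fixed by the degree-one case, are ours); it checks the degree-one formula and associativity for a random bilinear form in dimension 2.

```python
import itertools
import random

n = 2  # dim H; E = H + Hbar has basis indices 0..n-1 (H) and n..2n-1 (Hbar)

def wedge(x, y):
    """Exterior product; elements are dicts {sorted index tuple: coefficient}."""
    out = {}
    for s, a in x.items():
        for t, b in y.items():
            if set(s) & set(t):
                continue
            inv = sum(1 for i in s for j in t if i > j)  # sign of the sorting shuffle
            k = tuple(sorted(s + t))
            out[k] = out.get(k, 0.0) + (-1) ** inv * a * b
    return out

def contract_one(a, x):
    """Interior product by a single dual basis vector (an annihilation operator)."""
    out = {}
    for s, c in x.items():
        if a in s:
            p = s.index(a)
            out[s[:p] + s[p + 1:]] = out.get(s[:p] + s[p + 1:], 0.0) + (-1) ** p * c
    return out

def contract(alpha, x):
    """alpha -| x, with (a1 ^ ... ^ ak) -| x = a1 -| (a2 -| ... (ak -| x))."""
    out = {}
    for s, a in alpha.items():
        y = dict(x)
        for idx in reversed(s):
            y = contract_one(idx, y)
        for k, v in y.items():
            out[k] = out.get(k, 0.0) + a * v
    return out

random.seed(3)
xi = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]  # bilinear form on H

# xi as an element of Lambda^2(E'); the sign is fixed by xi -| (e_i ^ ebar_j) = xi[i][j]
xi_form = {(i, n + j): -xi[i][j] for i in range(n) for j in range(n)}

# exterior exponential exp_wedge(xi), truncated at the top degree 2n
exp_xi, term = {(): 1.0}, {(): 1.0}
for k in range(1, n + 1):
    term = {s: c / k for s, c in wedge(term, xi_form).items()}
    for s, c in term.items():
        exp_xi[s] = exp_xi.get(s, 0.0) + c

def mu(x):
    """mu(e_i) = mu(ebar_i) = e_i, extended multiplicatively to Lambda(E)."""
    out = {}
    for s, c in x.items():
        h = tuple(i for i in s if i < n)
        hb = tuple(i - n for i in s if i >= n)
        for k, v in wedge({h: 1.0}, {hb: 1.0}).items():
            out[k] = out.get(k, 0.0) + c * v
    return out

def diamond(x, y):
    """x <> y = mu( exp_wedge(xi) -| (x tensor ybar) )."""
    ybar = {tuple(i + n for i in s): c for s, c in y.items()}
    return mu(contract(exp_xi, wedge(x, ybar)))

# degree-one case: e_i <> e_j = e_i ^ e_j + xi(e_i, e_j) 1
for i in range(n):
    for j in range(n):
        p = diamond({(i,): 1.0}, {(j,): 1.0})
        assert abs(p.get((), 0.0) - xi[i][j]) < 1e-9

# associativity of <> on random elements of Lambda(H)
monos = [s for k in range(n + 1) for s in itertools.combinations(range(n), k)]
for _ in range(5):
    x, y, z = ({s: random.gauss(0, 1) for s in monos} for _ in range(3))
    lhs, rhs = diamond(diamond(x, y), z), diamond(x, diamond(y, z))
    for s in monos:
        assert abs(lhs.get(s, 0.0) - rhs.get(s, 0.0)) < 1e-9
```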
Appendix 6

More on Stochastic Integration


In this appendix, we prove the representation theorem for operator martingales of
Parthasarathy and Sinha, its extension to semimartingales by Attal, and the "true" Ito
formula for bounded representable semimartingales, also due to Attal.

§1. EVERYWHERE DEFINED STOCHASTIC INTEGRALS

A strong disadvantage of the H-P stochastic calculus compared to classical stochastic
calculus is the fact that operators defined on the exponential domain cannot be
composed, and there is no true "Ito formula". Following Attal we are going to define
a family of bounded "semimartingales" which is closed under composition, and for
which an Ito formula can be proved. We will work directly on a Fock space of arbitrary
multiplicity (calling K the multiplicity space), but for simplicity we take a trivial initial
space. Most of the results given here appear in the paper [AtM] by Attal and Meyer.
1  First of all, we are going to prove a formula -- not quite explicit outside of the
exponential domain -- which tells how a "semimartingale of operators" acts on a
"semimartingale of vectors". It makes rigorous the heuristic formula (2.6) of Chapter
VI, §2 (p. 128) and extends it to arbitrary multiplicity. First of all, let us say what we
mean by a semimartingale of vectors : it is an adapted process of vectors

(1.1)   f_t = c + Σ_ρ ∫_0^t f̂_ρ(s) dX^ρ(s)

-- we recall that ρ, σ… may take the value 0, contrary to α, β…, and that dX⁰_s = ds.
In this formula c is a constant, and each f̂_ρ(·) is a vector valued process, measurable
and adapted, which satisfies the conditions

(1.2)   ∫_0^t ‖f̂_0(s)‖ ds < ∞ ,   Σ_α ∫_0^t ‖f̂_α(s)‖² ds < ∞

for t finite. If there is a non-trivial initial space J, c becomes an element of J and
the mappings f̂_ρ(s) take values in J ⊗ Φ, but we will not insist on this extension.
We may rewrite (1.1) in a symbolic way as

(1.1')   f_t = c + ∫_0^t f̂_0(s) ds + ∫_0^t f̂(s) dX_s ,

where (f̂_0(s)) is an adapted process in L¹_loc(Φ), while (f̂(s)) is in L²_loc(Φ ⊗ K) -- the
component of index 0 is now excluded from it.

We have f̂_0 = 0 iff the process (f_t) is a martingale. In all cases, the coefficients f̂_α(s)
are uniquely determined for a.e. s. In the case of an exponential vector f_s = E(u 1_{[0,s]})
we have f̂_0(s) = 0, f̂_α(s) = u^α(s) f_s. It is always assumed that u is locally bounded.
Next, a "semimartingale of operators" is a family of adapted operators J_t which has
a stochastic integral representation

(1.3)   J_t = Σ_ρσ ∫_0^t H^σ_ρ(s) da^ρ_σ(s) .

We could add an initial operator, but omit it for simplicity. We assume that all operators
involved (J_t and the H^σ_ρ) have a common domain D which contains the exponential
domain 1. Again, we may write (1.3) in compact form

(1.3')   J_t = ∫_0^t H⁺(s) da⁺(s) + H°(s) da°(s) + H⁻(s) da⁻(s) + H'(s) ds ,

where H' = H⁰_0, H° is the matrix (H^α_β) (an operator on Φ ⊗ K), H⁺ is the column
(H^α_0), mapping Φ to Φ ⊗ K, and H⁻ the line (H⁰_α) mapping Φ ⊗ K to Φ. The unusual
position of the line or column index is an unfortunate feature of our notation.
On the exponential domain, the meaning of J_t is clear by the H-P theory of
stochastic integration, under the standard integrability hypotheses, but on a larger
domain it can only be defined by closure (i.e. double adjoint). Thus we will also assume
the operators J_t and H^σ_ρ(t) have adjoints J*_t and (H^σ_ρ(t))* = H*^ρ_σ(t), defined on the
exponential domain, and satisfying on it a similar representation (in the H-P sense)

(1.4)   J*_t = Σ_ρσ ∫_0^t H*^σ_ρ(s) da^ρ_σ(s)   on D .

Assume now that we have a semimartingale of vectors f_t such that f_t and all f̂_ρ(t)
belong to D ; such a representation will be called admissible. Let us define a new set
of functions f_ρ(s) by

(1.5)   f_α(s) = f̂_α(s)   but   f_0(s) = f_s , not f̂_0(s) .

Then we want to show that, under convenient assumptions, we have

(1.6)   J_t f_t = Σ_ρ ∫_0^t J_s f̂_ρ(s) dX^ρ_s + Σ_ρσ ∫_0^t H^σ_ρ(s) f_σ(s) dX^ρ(s) .

In the case of martingales (f̂_0 = 0) the first sum on the right side bears on α instead
of ρ, and in the case of exponential vectors f_σ(s) = u^σ(s) f_s with u⁰ = 1, and we get
the familiar expressions of Chapter VI.

1 The exponential domain could be replaced for most purposes by the incomplete Fock space (the
algebraic sum of the chaos spaces), but we give no details.

In the notation without indexes, the formula is written as follows (J_s on Φ ⊗ K
means J_s ⊗ I) :

(1.6')   J_t f_t = ∫_0^t (J_s f̂(s) + H⁺(s) f_s + H°(s) f̂(s)) dX_s
         + ∫_0^t (J_s f̂_0(s) + H⁻(s) f̂(s) + H'(s) f_s) ds .

Formula (1.6) is meaningful under the following conditions, for all t < ∞ :

(1.7)   ∫_0^t ‖J_s f̂_0(s)‖ ds < ∞ ,   ∫_0^t ‖J_s f̂(s)‖² ds < ∞ ,

(1.8)   ∫_0^t ‖H'(s) f_s‖ ds < ∞ ,

(1.9)   ∫_0^t ‖H⁻(s) f̂(s)‖ ds < ∞ ,   ∫_0^t ‖H⁺(s) f_s‖² ds < ∞ ,

(1.10)  ∫_0^t ‖H°(s) f̂(s)‖² ds < ∞ .

In each one of these statements involving the norm of a vector, it is assumed this vector
exists, which implicitly means some infinite series is strongly convergent. And now we
have a satisfactory result, due to Attal :
THEOREM. Under these conditions which make (1.6) meaningful, it is also true.
Before we prove this result let us state its main application. Let us assume all
operators H^σ_ρ(s) and J_t are known to be bounded, and take for D the whole space. Let
us assume that the norm of H'(s) is locally in L¹, the norms of H⁻(s) and H⁺(s) are
locally in L², and the norm of H°(s) locally in L∞. Then all the preceding hypotheses
are easily seen to be satisfied and (1.6) holds true identically.
To prove this theorem, we will show that the scalar products of the two sides of (1.6)
with an arbitrary exponential martingale g_t = g_t(u) are equal. To this end, we take
adjoints and are reduced to the truth of (1.6) for the adjoint process, and for exponential
vectors :

J*_t g_t = Σ_α ∫_0^t J*_s ĝ_α(s) dX^α_s + Σ_ρσ ∫_0^t H*^σ_ρ(s) g_σ(s) dX^ρ(s) .

Note that we wrote as above ĝ_α in the first integral, and g_σ in the second one, ĝ_0
being 0 since (g_t) is an exponential martingale. We want to prove

⟨g_t, J_t f_t⟩ = Σ_ρ ∫_0^t ⟨g_t, J_s f̂_ρ(s) dX^ρ_s⟩ + Σ_ρσ ∫_0^t ⟨g_t, H^σ_ρ(s) f_σ(s) dX^ρ(s)⟩ .

The left hand side is equal to ⟨J*_t g_t, f_t⟩. We transform the first sum on the right
hand side into

Σ_α ∫_0^t ⟨ĝ_α(s), J_s f̂_α(s)⟩ ds

and replace J_s by J*_s acting to the left, after which we return to a form

⟨ Σ_α ∫_0^t J*_s ĝ_α(s) dX^α_s , f_t ⟩ .

In the second sum, we separate the terms ρ = α > 0 and ρ = 0. In the first case we
can rewrite the sum

Σ_ασ ∫_0^t ⟨ĝ_α(s), H^σ_α(s) f_σ(s)⟩ ds

and shift H^σ_α taking an adjoint. Then we again distinguish two cases, σ = β > 0, for
which we get

⟨ Σ_αβ ∫_0^t u^β(s) H*^α_β(s) ĝ_α(s) dX^α_s , f_t ⟩ ,

and σ = 0, for which

Σ_α ∫_0^t ⟨H*^α_0(s) ĝ_α(s), f_s⟩ ds = Σ_α ∫_0^t ⟨H*^α_0(s) ĝ_α(s), f_t⟩ ds .

The second case ρ = 0 is similar. Details are left to the reader.

An algebra of operator semimartingales

2  Let us now define the space S of operator semimartingales J_t of bounded operators,
admitting a representation

(2.1)   J_t = J_0 + ∫_0^t H⁺(s) da⁺(s) + H°(s) da°(s) + H⁻(s) da⁻(s) + H'(s) ds ,

with bounded adapted coefficients H^ε(s) between the appropriate spaces and with

(2.2)   ‖H'(s)‖ ∈ L¹_loc ,   ‖H^±(s)‖ ∈ L²_loc ,   ‖H°(s)‖ ∈ L∞_loc .

The crucial assumption in this statement is that of individual boundedness of the
operators J_t ; together with (2.1) it implies the stronger property of local boundedness
of ‖J_t‖. Indeed, the operators

M_t = J_t − ∫_0^t H'(s) ds

constitute a martingale of bounded operators, hence ‖M_t‖ is an increasing function,
and ‖J_t‖ must be locally bounded.
It is clear that S is stable under passage to adjoints. Let us prove that it is also closed
under products, and compute the coefficients of the representation of the product 1.
Thus let us consider a second "semimartingale" (J̃_t) with coefficients H̃^ε(t) and put

1 Let us recall incidentally that the coefficients of such a representation are unique (Chapter VI, §1,
n°10), but the proof there is only sketched : a more satisfactory proof is given by Attal [Att1].

L_t = J̃_t J_t with coefficients K^ε to be computed. We take a martingale (an exponential
one is enough)

f_t = c + ∫_0^t f̂(s) dX_s

and compute first J_t f_t using (1.6'), then J̃_t (J_t f_t) in the same way, and get an expression
of the form (1.6'), which means that (L_t) is a representable semimartingale of operators.
We make its coefficients explicit as follows

(2.3)   K⁺_s = J̃_s H⁺_s + H̃⁺_s J_s + H̃°_s H⁺_s ,   K⁻_s = J̃_s H⁻_s + H̃⁻_s J_s + H̃⁻_s H°_s ,
        K°_s = J̃_s H°_s + H̃°_s J_s + H̃°_s H°_s ,   K'_s = J̃_s H'_s + H̃'_s J_s + H̃⁻_s H⁺_s .

It is very easy now to check that these coefficients satisfy again the assumptions (2.2) ;
otherwise stated, (L_t) belongs to S. The computation (2.3) is nothing but the Ito
formula, the last term in the expression of each coefficient being the Ito correction. It
will be convenient later to rewrite this formula as follows

(2.4)   d(J̃_s J_s) = J̃_s dJ_s + dJ̃_s J_s + [dJ̃_s, dJ_s] ,

where the last term is called the square bracket of the two semimartingales, and
represents the Ito correction :

(2.5)   [dJ̃_s, dJ_s] = H̃°_s H⁺_s da⁺_s + H̃°_s H°_s da°_s + H̃⁻_s H°_s da⁻_s + H̃⁻_s H⁺_s ds .

As in the commutative theory, the process itself is denoted by [J̃, J]_t.
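The correction terms in (2.3) and (2.5) are governed by the quantum Ito multiplication table da⁻ da⁺ = dt, da° da⁺ = da⁺, da⁻ da° = da⁻, da° da° = da°, all other products vanishing. A quick sketch (our own illustration) encodes this table and checks that it is associative, which is what makes iterated compositions consistent:

```python
# Quantum Ito table: basis symbols for the four differentials.
DIFFS = ["da+", "da0", "da-", "dt"]
TABLE = {("da-", "da+"): "dt", ("da0", "da+"): "da+",
         ("da-", "da0"): "da-", ("da0", "da0"): "da0"}

def prod(x, y):
    """Product of two differentials; None stands for zero."""
    return TABLE.get((x, y))

# The table is associative: (x y) z == x (y z) for all triples.
for x in DIFFS:
    for y in DIFFS:
        for z in DIFFS:
            xy, yz = prod(x, y), prod(y, z)
            left = prod(xy, z) if xy else None
            right = prod(x, yz) if yz else None
            assert left == right
```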
This definition is due to Attal, and he proved that this "square bracket" has many
of the properties of the square bracket of classical stochastic calculus. It appears clearly
that the "square bracket" has a nlartingale part consisting of the first three terms, and
a "finite variation part" which is the last term : it is naturally called the angle bracket
of the two semimartingales

(2.6) (dJs, dJs) = ~I2 H + <ls .

We refer to A t t a l ' s papers for a detailed study of these "brackets", and limit ourselves
to a few remarks.
REMARKS. a) The square bracket of two semimartingales belonging to S does not
belong to S in general, for the following reason : the representation (2.5) has bounded
coefficients with the appropriate norm conditions, but the operators [J̃, J]_t may be
unbounded.
Thus it seems worthwhile to introduce the following definition : an adapted process
(J_t) defined on the exponential domain belongs to the class S' if it has a representa-
tion (1.3') with bounded coefficients H^ε(s), such that ‖H'(s)‖, ‖H^±(s)‖, ‖H°(s)‖
belong locally to L¹, L², L∞.
This class is somewhat of a mystery : in the commutative theory it would consist
of processes having BMO-like properties, and though it would not act on L² by
multiplication, it would map very large domains into L². In the non-commutative case
nothing is known. But other elements of the class occur naturally. For instance, given

two elements J and L of S, it is easy to define the left and right stochastic integrals
∫_0^t L_s dJ_s and ∫_0^t (dJ_s) L_s, which belong to the class S', and to write the integration by
parts formula

(2.7)   d(JL)_t = J_t dL_t + (dJ_t) L_t + [dJ_t, dL_t] ,

the left side belonging to S, and the three processes on the right hand side to S'.
It is easy to see that the square bracket is an associative operation on the class S'.
On the other hand, it can be approximated by "quadratic Riemann sums" as in classical
probability theory. Namely we have

(2.8)   [J, L]_t = lim Σ_i (J_{t_{i+1}} − J_{t_i})(L_{t_{i+1}} − L_{t_i}) ,

the limit being taken over dyadic partitions of the interval [0, t], and understood as
weak convergence on exponential vectors. This shows that the bracket doesn't depend
on the integral representation. There is a similar (and easier) formula for the angle
bracket, the quadratic sum being replaced by

Σ_i E_{t_i} (J_{t_{i+1}} − J_{t_i})(L_{t_{i+1}} − L_{t_i}) E_{t_i} .

§2. REGULAR SEMIMARTINGALES

It is well known that every square integrable martingale on Wiener space has a
stochastic integral representation w.r.t. Brownian motion. A natural conjecture is that,
similarly, any reasonable martingale of operators on Fock space has a stochastic integral
representation w.r.t. the three basic operator martingales. The general validity of this
conjecture was disproved by a counterexample of J.-L. Journé (Sém. Prob. XX), showing
that the martingale of conditional expectations of a fixed bounded operator may not be
representable in the usual sense. Hence the main positive result in this direction remains
that of Parthasarathy and Sinha, which we present now in Attal's extended version.

Non-commutative "semimartingales"

1  The space S of "quantum semimartingales" considered in the preceding section
may be very convenient, but how can we prove that a process of bounded operators is
representable as a stochastic integral, with coefficients satisfying the appropriate norm
conditions ?
For simplicity, we work on simple Fock space. We consider an operator martingale
(1.1) or semimartingale (1.2)

(1.1)   M_t = cI + ∫_0^t (H⁺_s da⁺_s + H°_s da°_s + H⁻_s da⁻_s) ,

(1.2)   M_t = cI + ∫_0^t (H⁺_s da⁺_s + H°_s da°_s + H⁻_s da⁻_s + H'_s ds) ,

and describe first how one may compute the operators H^ε knowing (M_t). Given a fixed
time r and a vector f_r ∈ Φ_r, the process Y_t = (M_t − M_r) f_r in the case (1.1) is an
ordinary martingale with representation

(1.3)   (M_t − M_r) f_r = ∫_r^t H⁺_u f_r dX_u ,

and in the case (1.2) it is a semimartingale

(1.4)   (M_t − M_r) f_r = ∫_r^t H⁺_u f_r dX_u + ∫_r^t H'_u f_r du .

Since the semimartingale Y on [r, ∞[ has a unique decomposition, we may recover
the process (H'_u f_r) as the drift of Y (the density of its finite variation part) and the
process (H⁺_u f_r) as the density of the angle bracket ⟨Y, X⟩. In principle, knowing
H'_u f_r for f_r ∈ Φ_r, r < u, and using boundedness we may compute H'_u f_u for f_u ∈ Φ_u
and then, by adaptation, the operator H'_u itself. The same remark applies to H⁺.
Applying the same procedure to the adjoint process, we could get (H⁻_t)* and then
H⁻_t. But we can also get H⁻_t itself by a similar construction, as follows. Consider
the process Z_t = M_t f_t. We can see on (1.3) that its drift is equal to H'_t f_t + H⁻_t f̂_t.
Since we already know H'_t, this allows us to compute H⁻_t f̂_t, from which the operator
H⁻_t itself can be computed. Similarly, the density of the angle bracket ⟨Z, X⟩ is
M_t f̂_t + H⁺_t f_t + H°_t f̂_t, from which (as we already know M_t and H⁺_t) H°_t may be
computed.
We are going to turn these formal computations into a rigorous reasoning.
DEFINITION. Let M = (M_t) be an adapted process of bounded operators. We say
M is a regular semimartingale if there exists some increasing absolutely continuous
deterministic function m(t) = ∫_0^t m'(s) ds such that, for r ≤ s ≤ t, f_r ∈ Φ_r

(1.5)   ‖(M_t − M_s) f_r‖² ≤ ‖f_r‖² (m(t) − m(s)) ,

(1.6)   ‖(M*_t − M*_s) f_r‖² ≤ ‖f_r‖² (m(t) − m(s)) ,
(1.7)   ‖(E_s M_t − M_s) f_r‖ ≤ ‖f_r‖ (m(t) − m(s)) .

TttEOREM. A process M = (;~It) of bounded operators is a regular semimartingale i f


and only if it belongs to S , i.e. if it has a representation (1.1) with bounded operators
H I satisfying the conditions
T h e mappings s ~ ]l H~ )l are locally in L 1 for 5 = . , in L 2 for e = + ,
in L °° for s = o ;

The fact that these assumptions on the coefficients imply the regularity of (M_t) is
neither difficult nor very important, and we refer to Attal [Att4] for details. There is one
interesting remark, however : the local boundedness of \| H_s^\circ \| is not used in this part of
the proof, a local L^2 property is all we need. On the other hand, no known assumption
on the coefficients (here as in the classical case) implies the boundedness of M_t

¹ Since the operators are bounded, we may restrict f_r to a dense set.



itself, and this property (together with the regularity of the semimartingale as stated)
implies that \| H_s^\circ \| is locally in L^\infty, as we shall see.
To show that regularity implies the representation as stated is the main part of the
proof, and for it we will need a lifting theorem. Let us introduce an auxiliary result from
measure theory which greatly simplifies the reasoning. People reluctant to use additional
axioms for set theory (here the continuum hypothesis, but Martin's axiom could be
used instead) may refer to the original proof of Parthasarathy-Sinha.

2 A lifting theorem. This result is due independently to Chatterji and to Mokobodzki
(here we follow the latter's version). We add a small remark, which apparently went
unnoticed before, but which is very useful in our problem.
THEOREM. There exists a linear mapping f \mapsto \gamma(f) from the usual Lebesgue space
L^\infty(\mathbb{R}_+, dt) to the Banach space \mathcal{B}^\infty(\mathbb{R}_+) of bounded everywhere defined Borel
functions on \mathbb{R}_+ with the uniform norm, such that
1) \gamma is a lifting : for every class f, \gamma(f) is a representative of f.
2) \gamma is order preserving ( f \le g a.e. implies \gamma(f) \le \gamma(g) everywhere), norm
preserving, and product preserving ( \gamma(fg) = \gamma(f)\gamma(g) ).
3) If g is continuous on \mathbb{R}_+ and has a limit at infinity, the lifting of the class of g
is g itself (this applies in particular to g = 1).
The additional remark is the locality of such a lifting : if a class f vanishes a.e.
in some open set U of the line, then \gamma(f) vanishes everywhere in U. Indeed, let h
be a continuous function with compact support in U ; then fh = 0 a.e., therefore
\gamma(f)\,h = \gamma(f)\,\gamma(h) = \gamma(fh) vanishes everywhere. In turn, the locality implies that 3)
holds for all continuous bounded functions, regardless of their behaviour at infinity.
We will need an obvious extension of this theorem to random variables taking values
in a separable Hilbert space \mathcal{H}. Let f be such a class. Then for every u \in \mathcal{H} the class
\langle u, f \rangle has a norm in L^\infty bounded by \| u \| \, \| f \|_\infty, and therefore its lifting \gamma(\langle u, f \rangle)
is a Borel function of the same norm, which depends antilinearly on u. Therefore its
value at t may be written as \langle u, \gamma(f)_t \rangle where \gamma(f)_t is an element of \mathcal{H}. The mapping
t \mapsto \gamma(f)_t is scalarly Borel, and hence Borel since \mathcal{H} is separable. We denote it by
\gamma(f) and we are finished.

3 The drift term. We start with the coefficient H^\cdot. Since we have for r < s < t

\| E_s M_t f_r - M_s f_r \| \le \| f_r \| \,(m(t) - m(s))

the vector process M_t f_r on [r, \infty[ is a quasimartingale. Now Hilbert space quasi-
martingales were studied by O. Enchev who proved that, as in the scalar case, they can
be decomposed into a martingale and a process with finite variation in the norm sense.
(A report on Enchev's work can be found in Sém. Prob. XXII, p. 86-88.) Using it leads
to considerable simplification of Attal's proof, which in part reproves Enchev's theorem.
Its finite variation part is a limit over partitions (t_i) of [r, t]

A_t = \lim_{(t_i)} \sum_i \bigl( E_{t_i} M_{t_{i+1}} f_r - M_{t_i} f_r \bigr)

and therefore we have \| A_t - A_s \| \le \| f_r \| \,(m(t) - m(s)) for s < t. Since m is assumed
to be absolutely continuous, we may write

A_t - A_s = \int_s^t \eta_u(f_r)\,du \quad \text{where} \quad \| \eta_u(f_r) \| \le \| f_r \|\, m'(u) \ \text{a.e.}

Let us apply the lifting theorem to the Hilbert space valued mapping u \mapsto
\eta_u(f_r)/m'(u) (defined to be 0 at points where m'(u) does not exist) : we get in this
way an everywhere defined mapping, with a norm dominated by \| f_r \|, and multiplying
by m'(u) again, we get a mapping u \mapsto H_u^r(f_r) from ]r, \infty[ into \Phi, satisfying an
inequality \| H_u^r f_r \| \le \| f_r \|\, m'(u) (and defined to be 0 whenever m'(u) does not exist).
We may consider H_u^r to be a bounded linear operator from \Phi_r to \Phi. Let us describe
its properties.
Assume r < s. Then a vector f_r \in \Phi_r also belongs to \Phi_s, and the two functions
\eta_u(f_r) corresponding to the two interpretations of f_r agree a.e. on ]s, \infty[. Therefore,
the two functions H_u^r f_r and H_u^s f_r agree everywhere on ]s, \infty[. Otherwise stated, for
fixed u and r < s < u, the operator H_u^s restricted to \Phi_r is the same as H_u^r. Therefore
these are restrictions of one single operator H_u, defined on \bigcup_{s<u} \Phi_s. By continuity, we extend
it to \Phi_{u-} = \Phi_u.
Next, consider again \eta_u(f_r) for r < u and fix v > r. Let g be a vector orthogonal
to \Phi_v. Then \langle g, \eta_u(f_r) \rangle = 0 for a.e. u \in ]r, v[. Using locality again we have that
\langle g, H_u f_r \rangle = 0 everywhere in the same interval, meaning that H_u f_r \in \Phi_v for
r < u < v. This is extended to H_u f with f \in \Phi_{u-}, and then taking an intersection over
v we find that H_u maps \Phi_u into itself. Then it is trivial to turn it into an adapted
operator H_u^\cdot on \Phi.
Because of the inequality \| H_u^\cdot \| \le m'(u), it is clear that the operators \int_0^t H_u^\cdot\,du are
bounded, and then the process M_t - \int_0^t H_u^\cdot\,du is a martingale of bounded operators,
which satisfies the same regularity conditions (with a different function m). We change
notations to denote this martingale by (M_t), and we are reduced to the case considered
by Parthasarathy-Sinha, discussed in the next subsection.
4 The case of regular martingales. We now assume (M_t) is an operator martingale.
Given f \in \Phi_r, we define an adapted process h_u(f) on ]r, \infty[ such that, for r < s < t

M_t f - M_s f = \int_s^t h_u(f)\,dX_u \,.

From property (1.5) we know that the (a.e. defined) function h_\cdot(f)/m' belongs to
L^\infty(]r, \infty[, \Phi), with norm \le \| f \|. We extend it by 0 on ]0, r[ and apply the
lifting \gamma. The value of the lifting on ]t, \infty[ only depends on the restriction of h_u(f)
to ]t, \infty[ ; therefore, if f happens to belong to some \Phi_{r'} with r' < r, its lifting does
not change. Multiplying by m', we get a bounded linear operator H_{rt} from \Phi_r to \Phi_t
(zero if m'(t) does not exist), depending measurably on t and such that

for all r < t, \ f \in \Phi_r, \quad H_{rt} f = h_t(f) \ \text{a.s.}

Besides that, if f \in \bigcup_{r<t} \Phi_r, H_{rt} f depends only on f, not on r, and therefore the
operator extends by continuity to \Phi_{t-} = \Phi_t, as an operator H_t. For f \in \Phi_r, H_t f

is strongly Borel, and replacing it by E_t H_t f if necessary we add adaptation to the
preceding properties.
To extract H^-, we may either pass to the adjoint, or proceed directly in a similar
way, from the definition of H^- as a drift.

5 The number operator coefficient. We recall the notation

(5.1) \quad f_t = c + \int_0^t \varphi_s \, dX_s

which defines for each t an isometry between the spaces \Phi_t and \mathbb{C} \oplus L^2_a([0,t] \times \Omega)
( L^2_a indicates that only adapted processes are considered), and we define for every t an
operator N_t on \Phi_t as follows

(5.2) \quad N_t f_t = M_t f_t - \int_0^t M_s \varphi_s \, dX_s - \int_0^t H_s^+ f_s \, dX_s - \int_0^t H_s^- \varphi_s \, ds \,.

The preceding section implies the following statements :
1) For every t, N_t is a bounded operator on \Phi_t.
2) For s < t we have (N_t - N_s) f_s = 0.
3) N_\cdot transforms martingales into martingales, otherwise stated E_s N_t f_t = N_s f_s =
N_t f_s, or finally E_s N_t = N_s E_s for s < t.
Then we define a constant c' and a process (\tilde\varphi_s) by

N_t \Bigl( c + \int_0^t \varphi_s \, dX_s \Bigr) = c' + \int_0^t \tilde\varphi_s \, dX_s

(they do not depend on t). Applying E_0 we see that c' = c\,N1, N1 being a constant.
Consider the mapping (between processes) \varphi_\cdot \mapsto \tilde\varphi_\cdot ; for every T it is bounded from
L^2_a([0,T], \Phi_T) (the space of all square integrable adapted processes on [0,T])
into itself, and commutes with multiplication by I_{[0,s]}, (s < T). Then it also commutes
with multiplication by Borel bounded functions on [0,T]. Composing it with the
previsible projection operator, we extend it to a mapping from L^2([0,T], \Phi_T) into
itself, which commutes with multiplication by deterministic Borel bounded functions.
We now use the following classical result, which says that L^\infty is maximal abelian,
or that classical probability defines a complete observable in quantum probability :
a bounded operator on L^2([0,T]) which commutes with multiplication by bounded
functions is itself a multiplication operator by an essentially bounded function, whose
L^\infty-norm is the same as its operator norm. It is not difficult, using the lifting theorem,
to extend this result to operators on L^2([0,T], \mathcal{H}) : every bounded operator on this
space, of norm k, which commutes with multiplication by bounded scalar functions
a(t), is of the form (h_t) \mapsto (K_t h_t), where (K_t) is a strongly measurable family of
operators on \mathcal{H} with \| K_t \| \le k for a.e. t.
In the case of the mapping \varphi \mapsto \tilde\varphi we have \mathcal{H} = \Phi_T, and we may replace K_t
by E_t(K_t) without change, i.e. suppose the family is adapted. Also, the local property
of the lifting implies that K_t does not depend on the choice of the interval [0,T]
(containing t). Then returning to (5.2) we have the formula

(5.3) \quad M_t f_t = c' + \int_0^t (M_s + K_s)\,\varphi_s \, dX_s + \int_0^t H_s^+ f_s \, dX_s + \int_0^t H_s^- \varphi_s \, ds \,.

Then it is clear that K_t is a good version of the number operator coefficient. When f
is an exponential vector we get the theorem of Parthasarathy-Sinha.
Let us again emphasize the following result, since it is only implicit in P-S : If a
martingale of bounded operators is representable at all, the number operator coefficients
form a family of bounded operators, whose norms not only are locally in L^2, but even
locally in L^\infty. This turns out to be important for applications.
REMARK. Let (M_t) be a real valued bounded martingale on Wiener space (say)

M_t = c + \int_0^t \mu_s \, dX_s \,,

considered as a family of multiplication operators. When is it regular ? Is every bounded
martingale regular ?
The regularity condition is E[(M_t - M_s)^2 f_s^2] \le \| f_s \|^2 (m(t) - m(s)), or

(5.4) \quad E\bigl[ (M_t - M_s)^2 \bigm| \mathcal{F}_s \bigr] = E\Bigl[ \int_s^t \mu_r^2 \, dr \Bigm| \mathcal{F}_s \Bigr] \le m(t) - m(s) \quad \text{a.s.}

Since m is absolutely continuous, this amounts to saying that \mu_t belongs to L^\infty with a
locally square integrable norm. It is clear there should exist bounded martingales which
do not satisfy this condition, but explicit counterexamples are not easily found. M. Yor
indicated to us on Wiener space the example of the continuous martingale

M_t = L_t X_t = \int_0^t L_s \, dX_s \,,

X denoting Brownian motion and L its local time at 0 ; this martingale is unbounded,
so we stop it at the first time T at which |M_t| exceeds C. Then the r.v.'s L_t are not
bounded on \{t \le T\}. Indeed, such a property would mean there exist some t and
some constants M, c such that

(|M_s| \le M \ \text{for} \ s \le t) \implies (L_t \le c) \,.

Choosing a > c and b small enough so that ab < M, we would have

(L_t \le a \ \text{and} \ X_t^* \le b) \implies (L_t \le c) \,,

where X_t^* = \sup_{s \le t} |X_s|, and the law of the pair (L_t, X_t^*) would not charge
]c, a] \times [0, b]. On the other hand, it can be shown to have a strictly positive density
everywhere on \mathbb{R}_+^2, and we get a contradiction.
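As a numerical aside (not part of the text), Yor's martingale M_t = L_t X_t can be simulated on a grid, approximating the local time through Tanaka's formula L_t = |X_t| - \int_0^t \mathrm{sgn}(X_s)\,dX_s ; the sketch below only illustrates the martingale property E[M_t] = 0. The grid size, path count and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 20000, 500, 1.0
dt = T / n_steps

# Brownian increments and paths
dX = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(dX, axis=1)
X_prev = np.hstack([np.zeros((n_paths, 1)), X[:, :-1]])

# Tanaka's formula on the grid: L_t ~ |X_t| - sum_i sgn(X_{t_i}) (X_{t_{i+1}} - X_{t_i})
L = np.abs(X) - np.cumsum(np.sign(X_prev) * dX, axis=1)

M = L * X  # the (unbounded) martingale M_t = L_t X_t, started at 0
print(M[:, -1].mean())  # close to 0, up to Monte Carlo error
```

The empirical mean of M_1 is zero up to Monte Carlo noise; the unboundedness of L_t on \{t \le T\} discussed above is, of course, not visible in any finite sample.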
We do not discuss here the very interesting result that the martingale of conditional
expectations of a Hilbert-Schmidt operator is always representable, and has in fact a
representation with a number operator coefficient of a very special kind. An exposition
of the theorem of Parthasarathy and Sinha concerning these "Hilbert-Schmidt martin-
gales" can be found in Sém. Prob. XXVII.
The extension of the above result to Fock space with arbitrary multiplicity, or
involving a non-trivial initial space, is easy provided one uses the compact notation
(1.3') in section 1 for the stochastic integral representation. Details are left to the reader
(a complete proof is given in Attal [Att4]).

§3. REPRESENTABLE INTERPRETATIONS OF FOCK SPACE

1 A probabilistic interpretation of simple Fock space is a commutative martingale (X_t)
on some probability space, which is normal (X_t^2 - t is a martingale), and which has
the chaotic representation property : the space L^2 of the \sigma-field generated by this
martingale is isomorphic to Fock space. This definition is not precise enough, in the
sense that no relation is given between the standard structure of Fock space (creation
and annihilation operators, etc.) and the probabilistic structure. We assume for simplicity
that X_0 = 0.
Since X has the chaotic representation property, it has the predictable representation
property, and satisfies a structure equation

(1.1) \quad d[X, X]_t = dt + \psi_t \, dX_t \,,

which simply means that the martingale [X, X]_t - t has the predictable representation
property.
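As an illustration (not taken from the text), a solution of the structure equation can be approximated by a discrete scheme in which, given the past, the increment over [t, t+h] takes the two roots of x^2 = h + \psi_t x, with probabilities chosen to make it centered; this is the standard two-point approximation from Emery's work on structure equations [Eme]. The sketch below takes \psi_t = c X_t (the Azéma family) with c = -1; the step count, path count and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, T, c = 50000, 200, 1.0, -1.0
h = T / n_steps

X = np.zeros(n_paths)
for _ in range(n_steps):
    psi = c * X                                      # structure coefficient psi_t = c X_t
    disc = np.sqrt(psi**2 + 4.0 * h)
    x_up, x_dn = (psi + disc) / 2, (psi - disc) / 2  # roots of x^2 = h + psi*x
    p_up = -x_dn / (x_up - x_dn)                     # makes the increment centered
    jump = np.where(rng.random(n_paths) < p_up, x_up, x_dn)
    X = X + jump

# X is a martingale started at 0, and each step adds conditional variance h,
# so E[X_T] = 0 and E[X_T^2] = T exactly in this scheme.
print(X.mean(), (X**2).mean())
```

The two printed moments approximate 0 and T, reflecting the normality condition (X_t^2 - t a martingale) built into the scheme.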
We also denote by X_t and \psi_t the multiplication operators by the corresponding
random variables in this probabilistic interpretation. It is reasonable to expect that

(1.2) \quad dX_t = dQ_t + \psi_t \, da_t^\circ \,,

where Q_t = a_t^+ + a_t^- as usual. A necessary condition for this equation to be meaningful
is that, for an exponential vector \mathcal{E}(u) we should have for t finite

(1.3) \quad \int_0^t \| \psi_s \, \mathcal{E}(u) \|^2 \, ds < \infty \,.

If this property holds, the probabilistic interpretation will be said to be representable.
We now prove, following Attal, that (1.2) is true. We call (J_t) the operator process
defined by the right hand side. According to the basic formula (1.6) in §1, which holds
at least on the exponential domain,

J_t f_t = \int_0^t J_s \varphi_s \, dX_s + \int_0^t f_s \, dX_s + \int_0^t \psi_s \varphi_s \, dX_s + \int_0^t \varphi_s \, ds \,,

while using the relation f_t = c + \int_0^t \varphi_s \, dX_s, the classical Ito (integration by parts)
formula and the structure equation, we get

X_t f_t = \int_0^t X_s \varphi_s \, dX_s + \int_0^t f_s \, dX_s + \int_0^t \varphi_s (ds + \psi_s \, dX_s) \,.

Taking a difference, we see that the ordinary process Y_t = J_t f_t - X_t f_t is a semimartin-
gale satisfying the ordinary stochastic differential equation Y_t = \int_0^t (J_s - X_s)\varphi_s \, dX_s ;
on the exponential domain, where \varphi_s = u(s) f_{s-}, this reads Y_t = \int_0^t u(s)\,Y_{s-}\,dX_s,
whose only solution is 0. In particular, we get that X_t \, \mathcal{E}(u) belongs to L^2.
An interesting variant of this proof concerns a normal martingale X which has
the predictable representation property, but not necessarily the chaotic representation
property, so that Fock space may be strictly smaller than the corresponding L^2 space.
Let us assume, not only (1.2), but that multiplication by \psi_t maps the exponential
domain into Fock space (this is not very natural : some other domain like an incomplete
chaos space may be simpler to use here). Then the stochastic integral process (1.2) is well
defined, and coincides on the exponential domain with the operator of multiplication
by X_t. For instance, these hypotheses seem to be satisfied by the Azéma martingales
for all values of the parameter, since \psi_t = c X_t is known to belong to all L^p spaces ; it is
not known whether this is the case for all exponentials, but the domain of polynomials
in the coordinate mappings is likely to be stable under the mappings f \mapsto f_s (this is
called "admissible" in Attal's papers).
Attal proves a slightly different result : we assume nothing like (1.3), and consider
instead a bounded martingale (f_t) with a predictable representation c + \int_0^t \hat f_s \, dX_s, such
that \| \hat f_t \|_\infty and \| \hat f_t \psi_t \|_\infty are bounded. There are plenty of such martingales : we start
from an arbitrary bounded martingale g_t = c + \int_0^t \hat g_s \, dX_s, then put

h_t = c + \int_0^t \hat g_s \, I_{\{ |\hat g_s| \le n, \ |\hat g_s \psi_s| \le n \}} \, dX_s

to get a martingale which is arbitrarily close to the preceding one for n large, and has a
"derivative" \hat h_t satisfying the required conditions ; then (h_t) itself may not be bounded,
but its jumps are smaller than those of (g_t), hence bounded. Then we may construct (f_t)
by stopping. Attal then proves that the process of (bounded) multiplication operators
by (f_t) belongs to the class S, since it has the everywhere defined stochastic integral
representation

f_t = cI + \int_0^t \hat f_s \, dQ_s + \int_0^t \hat f_s \psi_s \, da_s^\circ \,.

The proof is essentially the same as for X_t above.


2 Let us consider a probabilistic interpretation as above. Denote by * the correspond-
ing multiplication. In all such interpretations, the curve X_t * 1 = a_t^+ 1 is the same. Also
in all interpretations, given a vector y, the martingale Y_t = E_t y is the same as a curve,
and its predictable representation as a vector stochastic integral c + \int_0^t \hat y_s \, dX_s depends
only on the Fock space structure, not on the product *.
On the other hand, there are several interesting operators which depend on the
probabilistic interpretation, as explained by Attal. For any vector f, consider the
associated martingale f_t = c + \int_0^t \varphi_s \, dX_s.
1) Given an adapted process (h_t), define

(2.1) \quad J_t f_t = \int_0^t h_s \, df_s = \int_0^t h_s \varphi_s \, dX_s \,.

This depends on the interpretation, since the product is involved. It follows from formula
(1.6) in §1 that we have

(2.2) \quad J_t = \int_0^t ( h_s - J_s ) \, da_s^\circ \,,

where h_s is meant as a multiplication operator in the given interpretation. This formula
is rigorous if \| h_t \|_\infty is locally bounded (then J_t is bounded with a locally bounded
norm).
2) Let M_t be a martingale with representation c + \int_0^t m_s \, dX_s (as a process of vectors).
We consider the operators

(2.3) \quad J_t f_t = \int_0^t f_s \, dM_s = \int_0^t f_s m_s \, dX_s \,.

According to §1, (1.6), we have

(2.4) \quad J_t = - \int_0^t J_s \, da_s^\circ + \int_0^t m_s \, da_s^+ \,,

again m_s is a multiplication operator.
3) Consider now the angle bracket

(2.5) \quad J_t f_t = \int_0^t d\langle M, f \rangle_s = \int_0^t m_s \varphi_s \, ds \,.

This time we have

(2.6) \quad J_t = - \int_0^t J_s \, da_s^\circ + \int_0^t m_s \, da_s^- \,.

4) Finally, (p_t) being a measurable adapted process with \| p_t \|_\infty locally bounded,
the operator

(2.7) \quad J_t f_t = \int_0^t p_s f_s \, ds

has the representation

(2.8) \quad J_t = - \int_0^t J_s \, da_s^\circ + \int_0^t p_s \, ds \,.

In all four cases, there is a contribution of the number operator which prevents
the representation from being an explicit one : the left side is involved in its own
representation.
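A single mechanism produces the J_s terms in (2.2), (2.4), (2.6) and (2.8). The following display (a sketch, applying formula (1.6) of §1 to a generic integral, valid at least on the exponential domain) makes it explicit:

```latex
% Generic integral
% J_t = \int_0^t ( H^{+}_s\,da^{+}_s + H^{\circ}_s\,da^{\circ}_s
%                 + H^{-}_s\,da^{-}_s + H_s\,ds ) :
J_t f_t \;=\; \int_0^t (J_s + H^{\circ}_s)\,\varphi_s\,dX_s
        \;+\; \int_0^t H^{+}_s f_s\,dX_s
        \;+\; \int_0^t H^{-}_s \varphi_s\,ds
        \;+\; \int_0^t H_s f_s\,ds\,.
```

Matching this with the classical expressions (2.1), (2.3), (2.5) and (2.7) forces H^\circ_s = h_s - J_s in case 1) and H^\circ_s = -J_s in the three other cases : the number coefficient always carries the operator J itself.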
References

The references QP.I-QP.VIII concern the volumes on quantum probability and applications, edited
by L. Accardi and W. von Waldenfels from vol. II on, whose numbers in the Springer Lecture Notes
series are respectively 1055 (1986), 1136 (1987), 1303 (1988), 1396 (1989), 1442 (1990) ; Vol. VI (1992),
Vol. VII (1992) and Vol. VIII (1993) are published by World Scientific (Singapore). The references
Sém. Prob. concern the Séminaire de Probabilités, edited by J. Azéma, P.A. Meyer and M. Yor. The
abbreviations LNM, LNP, LNCI refer to the Springer Lecture Notes series in Mathematics, Physics,
Control and Information. The order of articles of a given author is arbitrary.

[Acc] ACCARDI (L.). 1. On the quantum Feynman-Kac formula, Rendiconti Sem. Mat. e Fis.
di Milano, 48, 1978. 2. On square roots of measures, C*-algebras and their applications, Proc.
International School "Enrico Fermi", Varenna 1973, North-Holland. 3. A note on Meyer's note, QP.III,
p. 1-5. 4. On the quantum Feynman-Kac formula, Rend. Sem. Mat. Fis. Univ. Polit. Milano, 48, 1978,
p. 135-179.
[AcB] ACCARDI (L.) and BACH (A.). 1. The harmonic oscillator as quantum central limit of Bernoulli
processes, 1985, unpublished preprint. 2. Central limits of squeezing operators, QP.IV, p. 7-19. 3.
Quantum central limit theorems for strongly mixing random variables, Z. für W-theorie Verw. Geb.,
68, 1985, p. 393-402.
[AcC] ACCARDI (L.) and CECCHINI (C.). 1. Conditional expectations in von Neumann algebras
and a theorem of Takesaki, J. Funct. Anal., 45, 1982, p. 245-273. 2. Surjectivity of the conditional
expectations on the L^1 spaces, Harmonic Analysis, Cortona 1982, LNM 982, 1983, p. 434-442.
[AcF] ACCARDI (L.) and FAGNOLA (F.). Stochastic integration, QP.III, p. 6-19.
[AFQ] ACCARDI (L.), FAGNOLA (F.) and QUAEGEBEUR (J.). A representation-free quantum
stochastic calculus, J. Funct. Anal., 104, 1992, p. 149-197.
[AFL] ACCARDI (L.), FRIGERIO (A.) and LEWIS (J.). Quantum stochastic processes, Publ. RIMS
Kyoto, 18, 1982, p. 94-133.
[AJL] ACCARDI (L.), JOURNÉ (J.L.) and LINDSAY (J.M.). On multidimensional markovian cocycles,
QP.IV, p. 59-66.
[AcM] ACCARDI (L.) and MOHARI (A.). On the structure of classical and quantum flows, J. Funct.
Anal., to appear.
[AcP] ACCARDI (L.) and PARTHASARATHY (K.R.). A martingale characterization of the canonical
commutation and anticommutation relations, J. Funct. Anal., 77, 1988, p. 211-231.
[AcQ] ACCARDI (L.) and QUAEGEBEUR (J.). A Fermion Lévy theorem, J. Funct. Anal., 110, 1992,
p. 131-160.
[AlL] ALICKI (R.) and LENDI (K.). Quantum Dynamical Semigroups and Applications, LNP 286,
1987.
[App] APPLEBAUM (D.). 1. Quantum stochastic parallel transport on non-commutative vector bundles,
QP.III, p. 20-36. 2. Unitary evolutions and horizontal lifts in quantum stochastic calculus, Comm.
Math. Phys., 140, 1991, p. 63-80. 3. Towards a quantum theory of classical diffusions on Riemannian
manifolds, QP.VI, p. 93-111.
[ApH] APPLEBAUM (D.) and HUDSON (R.L.). 1. Fermion Ito's formula and stochastic evolutions,
Comm. Math. Phys., 96, 1984, p. 473-496. 2. Fermion diffusions, J. Math. Phys., 25, 1984, p. 858-861.
[Ara] ARAKI (H.). Bogoliubov automorphisms and Fock representations of canonical anticommutation
relations, Operator Algebras and Math. Physics, A.M.S. Contemp. Math., 62, 1987, p. 23-141.
[ACGT] ARECCHI (F.T.), COURTENS (E.), GILMORE (R.) and THOMAS (H.). Atomic coherent
states in quantum optics, Phys. Review A, 6, 1972, p. 2211-2237.

[Att] ATTAL (S.). 1. Problèmes d'unicité dans les représentations d'opérateurs sur l'espace de Fock,
Sém. Prob. XXVI, LNM 1526, 1992, p. 617-632. 2. Characterizations of some operators on Fock
space, QP.VIII, 1993, p. 37-47. 3. Characterization of operators commuting with some conditional
expectations on multiple Fock spaces, QP.VIII, p. 47-70. 4. An algebra of non commutative bounded
semimartingales : square and angle quantum brackets, J. Funct. Anal., 124, 1994, p. 292-331.
[AtM] ATTAL (S.) and MEYER (P.A.). Interprétation probabiliste et extension des intégrales stochas-
tiques non commutatives, Sém. Prob. XXVII, LNM 1557, p. 312-327.
[Aze] AZÉMA (J.). Sur les fermés aléatoires, Sém. Prob. XIX, LNM 1123, p. 397-495.
[Bar] BARGMANN (V.). A Hilbert space of analytic functions and an associated integral transform,
Pure Appl. Math. Sci., 14, 1961, p. 187-214.
[BSW] BARNETT (C.), STREATER (R.F.) and WILDE (I.F.). 1. The Ito-Clifford integral I, J. Funct.
Anal., 48, 1982, p. 172-212. 2. -- II, J. London Math. Soc., 27, 1983, p. 373-384. 3. -- III, Markov
properties of solutions to s.d.e.'s, Comm. Math. Phys., 89, 1983, p. 13-17. 4. -- IV, a Radon-Nikodym
theorem and bracket processes, J. Oper. Th., 11, 1984, p. 255-271. 5. Quasi-free quantum stochastic
integrals for the CAR and CCR, J. Funct. Anal., 48, 1982, p. 255-271.
[BaW] BARNETT (C.) and WILDE (I.F.). 1. Quantum stopping times, QP.VI, p. 127-136. 2. Natural
processes and Doob-Meyer decompositions over a probability gage space, J. Funct. Anal., 58, 1984,
p. 320-334. 3. Quantum Doob-Meyer decompositions, J. Oper. Th., 29, 1988, p. 133-164.
[Bel] BELAVKIN (V.P.). 1. A quantum non adapted Ito formula and non stationary evolution in Fock
scale, QP.VI, p. 137-180. 2. A new form and a *-algebraic structure of quantum stochastic integrals
in Fock space, Rend. Sem. Mat. Fis. Milano, 58, 1988, p. 177-193. 3. A new wave equation for a
continuous nondemolition experiment, and 4. A quantum particle undergoing continuous observation,
Phys. Rev. A, 140, 1989, p. 355-362. 5. A quantum non adapted Ito formula and stochastic analysis in
Fock scale, J. Funct. Anal., 102, 1991, p. 414-447. 6. Kernel representations of *-semigroups associated
with infinitely divisible states, QP.VII, 1992, p. 31-50. 7. The quantum functional Ito formula has the
pseudo-Poisson structure, QP.VIII, 1993, p. 81-86.
[BeL] BELAVKIN (V.P.) and LINDSAY (J.M.). The kernel of a Fock space operator II, QP.VIII, 1993,
p. 87-94.
[BCR] BERG (C.), CHRISTENSEN (J.P.R.) and RESSEL (P.). Harmonic Analysis on Semigroups,
Springer, 1984.
[Ber] BEREZIN (F.A.). The method of second quantization, Academic Press, New York 1966 (transla-
tion).
[Bha] BHAT (B.V.R.). Markov dilations of nonconservative quantum dynamical semigroups and a
quantum boundary theory, Dissertation, Indian Statistical Institute, Delhi 1993.
[BhP] BHAT (B.V.R.) and PARTHASARATHY (K.R.). 1. Kolmogorov's existence theorem for Markov
processes on C*-algebras, Proc. Indian Acad. Sci., 104, 1994, p. 253-262. 2. Markov dilations of
nonconservative quantum dynamical semigroups and a quantum boundary theory, to appear.
[Bia] BIANE (Ph.). 1. Marches de Bernoulli quantiques, Sém. Prob. XXIV, LNM 1426, p. 329-344. 2.
Chaotic representations for finite Markov chains, Stochastics, 30, 1990, p. 61-68. 3. Quantum random
walks on the dual of SU(n), Prob. Th. Rel. Fields, 89, 1991, p. 117-129. 4. Some properties of quantum
Bernoulli random walks, QP.VI, p. 193-204.
[Bra] BRADSHAW (W.S.). Stochastic cocycles as a characterization of quantum flows, Bull. Sc. Math.,
116, 1992, p. 1-34.
[BrR] BRATTELI (O.) and ROBINSON (D.W.). 1.2. Operator Algebras and Quantum Statistical
Mechanics I, II, Springer 1981.
[Cha] CHAIKEN (J.M.). Number operators for representations of the CCR, Comm. Math. Phys., 8,
1968, p. 164-184.
[CaN] CARMONA (R.) and NUALART (D.). Traces on Wiener space and the Onsager-Machlup
functional, J. Funct. Anal., 1992.

[Che] CHEBOTAREV (A.M.). 1. Sufficient conditions for conservativity of dynamical semigroups, Theor.
Math. Phys., 80, 1989. 2. The theory of conservative dynamical semigroups and its applications, to
appear.
[ChF] CHEBOTAREV (A.M.) and FAGNOLA (F.). Sufficient conditions for conservativity of quantum
dynamical semigroups, J. Funct. Anal., 118, 1993, p. 131-153.
[CFF] CHEBOTAREV (A.M.), FAGNOLA (F.) and FRIGERIO (A.). Towards a stochastic Stone's
theorem, Stochastic Partial Diff. Eq. and Applications III, Pitman Research Notes in M., vol. 268,
1992, p. 86-97.
[CoH] COCKROFT (A.M.) and HUDSON (R.L.). Quantum mechanical Wiener processes, J. Multiv.
Anal., 7, 1978, p. 107-124.
[CCSS] COHENDET (O.), COMBE (P.), SIRUGUE (M.) and SIRUGUE-COLLIN (M.). 1. A stochastic
treatment of the dynamics of an integer spin, J. Phys. A, 21, 1988, p. 2875-2883. See also same J.,
23, 1990, p. 2001-2011. 2. Weyl quantization for Z^n × Z^n phase space : stochastic aspect, Stochastic
Methods in Math. and Phys., Karpacz 1988, World Scientific 1989.
[Coo] COOK (J.M.). The mathematics of second quantization, Trans. Amer. Math. Soc., 74, 1953,
p. 222-245.
[CuH] CUSHEN (C.D.) and HUDSON (R.L.). A quantum mechanical central limit theorem, J. Appl.
Prob., 8, 1971, p. 454-469.
[Dav] DAVIES (E.B.). 1. Quantum Theory of Open Systems, Academic Press, 1976. 2. On the Borel
structure of C*-algebras, with an Appendix by R.V. Kadison, Comm. Math. Phys., 8, 1968, p. 147-
163.
[D'A] Dell'ANTONIO (G.F.). On the limits of sequences of normal states, Comm. Pure Appl. Math.,
20, 1967, p. 413-429.
[DMM] DELLACHERIE (C.), MAISONNEUVE (B.) and MEYER (P.A.). Probabilités et Potentiel, Vol.
E, Hermann 1992.
[DeM] DELLACHERIE (C.) and MEYER (P.A.). 1. Probabilités et Potentiel, Vol. A, Hermann, Paris,
1975, english translation, North-Holland 1977. 2. Vol. B, 1980, transl. 1982. 3. Vol. C, 1987, transl.
1989.
[Der] DERMOUNE (A.). 1. Formule de composition pour une classe d'opérateurs, Sém. Prob. XXIV,
LNM 1426, p. 397-401. 2. Opérateurs d'échange et quelques applications des méthodes de noyaux, Ann.
Sci. Univ. Clermont, 96, 1991, p. 45-54 (see also p. 55-58).
[Dix] DIXMIER (J.). Les algèbres d'opérateurs dans l'espace Hilbertien, Paris, Gauthier-Villars 1969.
[Dyn] DYNKIN (E.B.). 1. Polynomials of the occupation field and related random fields, J. Funct. Anal.,
58, 1984, p. 20-52. 2. Gaussian and non-Gaussian random fields associated with Markov processes,
J. Funct. Anal., 55, 1984, p. 344-376. 3. Functionals associated with self-intersection of the planar
Brownian motion, Sém. Prob. XX, LNM 1204, 1986, p. 553-571. 4. Self-intersection local times,
occupation fields and stochastic integrals, Adv. Math., 65, 1987, p. 254-271. 5. Self-intersection gauge for
random walks and for Brownian motion, Ann. Prob., 16, 1988, p. 1-57. 6. Regularized self-intersection
local times of planar Brownian motion, Ann. Prob., 16, 1988, p. 58-74.
[Eme] EMERY (M.). 1. On the Azéma martingales, Sém. Prob. XXIII, LNM 1372, p. 66-87. 2.
Sur les martingales d'Azéma (suite), Sém. Prob. XXIV, LNM 1426, p. 442-447. 3. Quelques cas de
représentation chaotique, Sém. Prob. XXV, LNM 1485, p. 10-23.
[Enc] ENCHEV (O.). Hilbert space valued quasimartingales, Boll. Unione Mat. Ital., 2, 1988, p. 19-39.
[Eva] EVANS (M.). Existence of quantum diffusions, Prob. Th. Rel. Fields, 81, 1989, p. 473-483.
[EvH] EVANS (M.) and HUDSON (R.L.). 1. Multidimensional quantum diffusions, QP.III, p. 69-88. 2.
Perturbation of quantum diffusions, J. London Math. Soc., 41, 1992, p. 373-384.
[EvL] EVANS (D.E.) and LEWIS (J.T.). Dilations of Irreversible Evolutions in Algebraic Quantum
Theory, Publ. Dublin Institute for Advanced Study, Series A, 1977.

[Fag] FAGNOLA (F.). 1. On q u a n t u m stochastic differential equations with unbounded coefficients,


Prob. Theory Rel. Fields, 86, 1990, p. 501-516. 2. Pure birth and death processes as q u a n t u m flows
in Fock space, Sankhya, set. A, 53, 1991, p. 288-297. 3. Unitarity of solutions to q u a n t u m stochas-
tic differential equations and conservativity of the associated q u a n t u m dynamical semigroup, QP. VII,
1992, p. 139-148.4. Chebotarev's sufficient conditions for conservativity of q u a n t u m dynamical semi-
groups, QP. VIII, 1993, p. 123-142.5. Diffusion processes in Fock space, preprint. 6. Characterization
of isometric and unitary weakly differentiable cocycles in Fock space, QP. VIII, 1993, p. 143-164.
[FaS] FAGNOI.A (F.) and SINHA (K.B.). Q u a n t u m flows with unbounded structure m a p s and finite
degrees of freedom, J. London M. Soc., 48, 1993, p. 537-551.
[Fan] FANO (U.). Description of states in q u a n t u m mechanics by density matrix and operator
techniques, Rev. Mod. Phys. 29~ 1957, p. 74-93.
[Fey] FEYEL (D.). Sur la m6tbode de Pieard (ddo et 4ds). Sdm. Prob. XXI, LNM 1247, 1987, p. 515
519.
[Foc] FOCK (V.A.). Zur Quantenelektrodynamik, Soviet Phys., 6, 1934, p. 425.
[Foil FOLLAND (G.B.). Harmonic Analysis in Phase Space, Ann. Math. Studies, Princeton 1989.
[Fug] FUGLEDE (B.). The multidimensional moment problem, Exp. Math., 1, 1983, p. 47-65.
[GaW] GARDING (L.) and WIGHTMAN (A.S.). Representations of the c o m m u t a t i o n relations, Proc.
Natl. Acad. Sci. USA, 40, 1954, p. 617-621.
[GvW] GIRI (N.) and Von WALDENFELS (W.). An algebraic version of the central limit theorem, Z.
W-theorie Verw. Geb., 42, 1978, p. 129-134.
[Glo] GLOCKNER (P.). Quantum stochastic differential equations on *-bigebras, Math. Proc. Cambridge Phil. Soc., 109, 1991, p. 571-595.
[GKS] GORINI (V.), KOSSAKOWSKI (A.) and SUDARSHAN (E.). Completely positive dynamical
semigroups of n-level systems, J. Math. Phys., 17, 1976, p. 821-825.
[Gro] GROSS (L.). 1. Existence and uniqueness of physical ground states, J. Funct. Anal., 10, 1972,
p. 52-109.
[Gui] GUICHARDET (A.). Symmetric Hilbert spaces and related topics, LNM 261, 1970.
[Hal] HALMOS (P.R.). Two subspaces, Trans. Amer. Math. Soc., 144, 1969, p. 381-389.
[HeW] HE (S.W.) and WANG (J.G.). Two results on jump processes, Sém. Prob. XVIII, LNM 1059, 1984, p. 256-267.
[Hel] HELMSTETTER (J.). 1. Monoïdes de Clifford et déformations d'algèbres de Clifford, J. of Algebra, 111, 1987, p. 14-48. 2. Algèbres de Clifford et algèbres de Weyl, Cahiers Math. Univ. de Montpellier, 25, 1982. 3. Algèbres de Weyl et *-produit, Cahiers Math. Univ. de Montpellier, 34, 1985.
[Hid] HIDA (T.). Brownian motion, Springer 1980.
[Hol] HOLEVO (A.S.). 1. Probabilistic and Statistical Aspects of Quantum Theory, North Holland, 1982. 2. Time ordered exponentials in quantum stochastic calculus, QP.VII, 1992, p. 175-202. 3. Conditionally positive definite functions in quantum probability, Proc. Intern. Congress Math. Berkeley 1986, p. 1011-1018. 4. Stochastic representation of quantum dynamical semigroups, Trudy Mat. Inst. A.N. SSSR, 191, 1989, p. 130-139 (Russian).
[HuM] HU (Y.Z.) and MEYER (P.A.). 1. Sur les intégrales multiples de Stratonovitch, Sém. Prob. XXII, LNM 1321, 1987, p. 72-81. 2. On the approximation of Stratonovich multiple integrals, to appear in the volume dedicated to G. Kallianpur, Springer 1992.
[HIP] HUDSON (R.L.), ION (P.D.F.) and PARTHASARATHY (K.R.). Time orthogonal unitary dilations and non-commutative Feynman-Kac formulas, Comm. Math. Phys., 83, 1982, p. 261-280.
[HKP] HUDSON (R.L.), KARANDIKAR (R.L.) and PARTHASARATHY (K.R.). Towards a theory of non commutative semimartingales adapted to brownian motion and a quantum Ito's formula, Theory and Applications of Random Fields, Bangalore 1982, LNCI 49, 1983, p. 96-110.
[HuL] HUDSON (R.L.) and LINDSAY (J.M.). 1. A noncommutative martingale representation theorem for non Fock quantum Brownian motion, J. Funct. Anal., 61, 1985, p. 202-221. 2. Uses of non-Fock quantum Brownian motion and a quantum martingale representation theorem, QP.II, p. 276-305.
[HLP] HUDSON (R.L.), LINDSAY (J.M.) and PARTHASARATHY (K.R.). Stochastic integral representation of some quantum martingales in Fock space, From local times to global geometry, control and physics, p. 121-131, Pitman Research Notes, 1986.
[HuP] HUDSON (R.L.) and PARTHASARATHY (K.R.). 1. Quantum Ito's formula and stochastic evolutions, Comm. Math. Phys., 93, 1984, p. 301-323. 2. Stochastic dilations of uniformly continuous completely positive semigroups, Acta Appl. Math., 2, 1984, p. 353-398. 3. Unification of fermion and boson stochastic calculus, Comm. Math. Phys., 104, 1986, p. 457-470. 4. Casimir chaos in a Boson Fock space, J. Funct. Anal., 119, 1994, p. 319-339.
[HuR] HUDSON (R.L.) and ROBINSON (P.). Quantum diffusions and the noncommutative torus, Lett. Math. Phys., 15, 1988, p. 47-53.
[HuS] HUDSON (R.L.) and STREATER (R.F.). 1. Ito's formula is the chain rule with Wick ordering, Phys. Letters, 86, 1982, p. 277-279. 2. Examples of quantum martingales, Phys. Letters, 85, 1981, p. 64-67.
[Ito] ITO (K.). Multiple Wiener integral, J. Math. Soc. Japan, 3, 1951, p. 157-169.
[JoK] JOHNSON (G.W.) and KALLIANPUR (G.). 1. Some remarks on Hu and Meyer's paper and infinite dimensional calculus on finitely additive canonical Hilbert space, Teor. Verojat., 34, 1989, p. 742-752. 2. Homogeneous chaos, p-forms, scaling and the Feynman integral, Technical Report n° 274, Center of Stoch. Processes, Univ. of North Carolina, 1989. To appear in Trans. Amer. Math. Soc.
[Jou] JOURNÉ (J.L.). Structure des cocycles markoviens sur l'espace de Fock, Prob. Theory Rel. Fields, 75, 1987, p. 291-316.
[JoM] JOURNÉ (J.L.) and MEYER (P.A.). Une martingale d'opérateurs bornés, non représentable en intégrale stochastique, Sém. Prob. XX, LNM 1204, p. 313-316.
[KaM] KALLIANPUR (G.) and MANDREKAR (V.). 1. Multiplicity and representation theory of purely non-deterministic stochastic processes, Teor. Ver., 10, 1965, p. 553-580 (transl.). 2. Semigroups of isometries and the representation and multiplicity of weakly stationary processes, Ark. för Mat., 6, 1965, p. 319-335.
[Kas] KASTLER (D.). Introduction à l'électrodynamique quantique, Dunod, Paris, 1961.
[KlS] KLAUDER (J.R.) and SKAGERSTAM (B.S.). Coherent States, World Scientific, 1985.
[Kré] KRÉE (P.). La théorie des distributions en dimension quelconque et l'intégration stochastique, Stochastic Analysis and Related Topics, Proc. Silivri 1986, LNM 1316, p. 170-233.
[KrR] KRÉE (P.) and RACZKA (R.). Kernels and symbols in quantum field theory, Ann. Inst. Henri Poincaré ser. A, 28, 1978, p. 41-73.
[Küm] KÜMMERER (B.). 1. Markov dilations on W*-algebras, J. Funct. Anal., 63, 1985, p. 139-177. 2. Survey on a theory of noncommutative stationary Markov processes, QP.III, p. 154-182.
[Kup] KUPSCH (J.). 1. Functional integration for euclidean Dirac fields, Ann. Inst. Henri Poincaré (Phys.), 50, 1989, p. 143-160. 2. Functional integration for euclidean Fermi fields, Gauge theory and fundamental interactions, Proc. S. Banach Center 1988, World Scientific, Singapore 1990.
[LaM] de Sam LAZARO (J.) and MEYER (P.A.). Questions de théorie des flots V, VI, Sém. Prob. IX, LNM 465, p. 52-89.
[LeJ] Le JAN (Y.). 1. Temps local et superchamp, Sém. Prob. XXI, LNM 1247, 1987, p. 176-190. 2. On the Fock space representation of functionals of the occupation field and their renormalization, J. Funct. Anal., 80, 1988, p. 88-108. 3. On the Fock space representation of occupation times for non reversible Markov processes, Stochastic Analysis, LNM 1322, 1988, p. 134-138.
[Ldb] LINDBLAD (G.). On the generators of quantum dynamical semigroups. Comm. Math. Phys., 48,
1976, p. 119-130.
[Lin] LINDSAY (J.M.). 1. Independence for quantum stochastic integrators, QP.VI, p. 325-332. 2. Quantum and non-causal stochastic calculus, Prob. Th. Rel. Fields, to appear. 3. On set convolution and integral sum kernel operators, Prob. Theory and Math. Stat., 2, 1990, p. 105-123. 4. The kernel of a Fock space operator I, QP.VIII, 1993, p. 271-280.
[LiM] LINDSAY (J.M.) and MAASSEN (H.). 1. An integral kernel approach to noise, QP.III, 1988, p. 192-208. 2. Duality transform as *-algebraic isomorphism, QP.V, p. 247-250. 3. Stochastic calculus for quantum brownian motion of non-minimal variance.
[LiP] LINDSAY (J.M.) and PARTHASARATHY (K.R.). 1. Cohomology of power sets with applications in quantum probability, Comm. Math. Phys., 124, 1989, p. 337-364. 2. The passage from random walk to diffusion in quantum probability II, Sankhya, ser. A, 50, 1988, p. 151-170. 3. Rigidity of the Poisson convolution, QP.V, 1989, p. 251-262.
[LiW] LINDSAY (J.M.) and WILDE (I.F.). On non Fock boson stochastic integrals, J. Funct. Anal., 65, 1986, p. 76-82.
[Lou] LOUISELL (W.H.). Radiation and Noise in Quantum Electronics, McGraw Hill, 1964.
[Maj] MAJOR (P.). Multiple Wiener-Itô integrals, LNM 849, 1981.
[Maa] MAASSEN (H.). 1. Quantum Markov processes on Fock space described by integral kernels, QP.II, 1985, p. 361-374. 2. Addition of freely independent random variables, J. Funct. Anal., 106, 1992, p. 409-438.
[Mal] MALLIAVIN (P.). Stochastic calculus of variations and hypoelliptic operators, Proc. Intern. Conf.
on Stochastic Differential Equations, Kyoto 1976, p. 195-263, Kinokuniya, Wiley 1978.
[MaR] MARCUS (M.B.) and ROSEN (J.). Sample path properties of the local times of strongly symmetric Markov processes via Gaussian processes. To appear in Ann. Prob., 1992.
[MMC] MASSON (D.) and McCLARY (W.K.). Classes of C∞ vectors and essential self-adjointness, J. Funct. Anal., 10, 1972, p. 19-32.
[Mey] MEYER (P.A.). 1. Éléments de probabilités quantiques, Sém. Prob. XX, LNM 1204, p. 186-312. 2. Sém. Prob. XXI, LNM 1247, p. 34-80. 3. A note on shifts and cocycles, QP.III, p. 209-212. 4. Calculs antisymétriques et supersymétriques en probabilités, Sém. Prob. XXII, LNM 1321, p. 101-123 (correction in Sém. Prob. XXV).
[Moh] MOHARI (A.). 1. Quantum stochastic calculus with infinite degrees of freedom and its applications. Dissertation, ISI Delhi 1992. 2. Quantum stochastic differential equations with unbounded coefficients and dilations of Feller's minimal solution, Sankhya, ser. A, 53, 1991, p. 255-287.
[MoP] MOHARI (A.) and PARTHASARATHY (K.R.). 1. On a class of generalized Evans-Hudson flows related to classical Markov processes, QP.VII, 1992, p. 221-250. 2. A quantum probabilistic analogue of Feller's condition for the existence of unitary Markovian cocycles in Fock space, preprint.
[MoS] MOHARI (A.) and SINHA (K.B.). 1. Quantum stochastic flows with infinite degrees of freedom and countable state Markov processes, Sankhya, ser. A, 52, 1990, p. 43-57.
[Moy] MOYAL (J.E.). Quantum mechanics as a statistical theory, Proc. Cambridge Phil. Soc. 45, 1949,
p. 99-124.
[Nel] NELSON (E.). 1. Analytic vectors, Ann. Math., 70, 1959, p. 572-614. 2. Notes on non commutative integration, J. Funct. Anal., 15, 1974, p. 103-116.
[Nie] NIELSEN (T.T.). Bose Algebras: The Complex and Real Wave Representations, LNM 1472, 1991.
[Nus] NUSSBAUM (A.). 1. Quasi-analytic vectors, Ark. Mat., 6, 1965, p. 179-191. 2. A note on quasi-analytic vectors, Studia Math., 33, 1969, p. 305-310.
[Par] PARTHASARATHY (K.R.). 1. An Introduction to Quantum Stochastic Calculus, Birkhäuser, Basel 1992. 2. What is a Bernoulli trial in Quantum Probability? (unpublished, see Sém. Prob. XXIII, LNM 1372, p. 183-185). 3. Azéma martingales and quantum stochastic calculus, Proceedings of the R.C. Bose Symp. on Prob. and Stat., Wiley Eastern, New Delhi 1990, p. 551-569. 4. Discrete time quantum stochastic flows, Markov chains and chaos expansions, Proc. 5-th Intern. Vilnius Conf. on Prob. and Stat., to appear. 5. The passage from random walk to diffusion in quantum probability, Celebration Volume in Applied Probability, 1988, p. 231-245. Applied Prob. Trust, Sheffield. 6. Comparison of completely positive maps on a C*-algebra and a Lebesgue decomposition theorem, to appear. 7. Maassen kernels and self-similar quantum fields, to appear.
[PSch] PARTHASARATHY (K.R.) and SCHMIDT (K.). Positive definite kernels, continuous tensor products, and central limit theorems of probability theory, LNM 272, 1972.
[PaS] PARTHASARATHY (K.R.) and SINHA (K.B.). 1. Stochastic integral representations of bounded quantum martingales in Fock space, J. Funct. Anal., 67, 1986, p. 126-151, see also QP.III, p. 231-250. 2. Boson-Fermion transformations in several dimensions, Pramana J. Phys., 27, 1986, p. 105-116. 3. Markov chains as Evans-Hudson diffusions in Fock space, Sém. Prob. XXIV, LNM 1426, 1990, p. 362-369. 4. Representation of a class of quantum martingales, QP.III, p. 232-250. 5. Stop times in Fock space stochastic calculus, Prob. Th. Rel. Fields, 75, 1987, p. 431-458. 6. Unification of quantum noise processes in Fock space, QP.VI, p. 371-384.
[Per] PERELOMOV (A.). Generalized Coherent States and their Applications, Springer 1986.
[Ply] PLYMEN (R.J.). C*-algebras and Mackey's axioms, Comm. Math. Phys., 8, 1968, p. 132-146.
[Ped] PEDERSEN (G.K.). C*-algebras and their automorphism groups, Academic Press, 1979.
[Poo] POOL (J.C.T.). Mathematical aspects of the Weyl correspondence, J. Math. Phys., 7, 1966, p. 66-76.
[Pro] PROTTER (P.). Stochastic Integration and Differential Equations, a New Approach, Springer 1990.
[ReS] REED (M.) and SIMON (B.). Methods of Modern Mathematical Physics II, Fourier Analysis, Self-adjointness, Academic Press, 1975.
[RvD] RIEFFEL (M.A.) and VAN DAELE (A.). 1. A bounded operator approach to Tomita-Takesaki theory, Pacific J. Math., 69, 1977, p. 187-221. 2. The commutation theorem for tensor products of von Neumann algebras, Bull. London Math. Soc., 7, 1975, p. 257-260.
[Sch] SCHÜRMANN (M.). 1. Positive and conditionally positive linear functionals on coalgebras, Quantum Probability II, LNM 1136, 1985, p. 475-492. 2. White noise on involutive bialgebras, QP.VI, p. 401-420. 3. Non commutative stochastic processes with independent and stationary increments satisfy quantum stochastic differential equations, Prob. Th. Rel. Fields, 84, 1990, p. 473-490. 4. A class of representations of involutive bialgebras, Math. Proc. Cambridge Phil. Soc., 107, 1990, p. 149-175. 5. The Azéma martingales as components of quantum independent increment processes, Sém. Prob. XXV, LNM 1485, p. 24-30. 6. A central limit theorem for coalgebras, Probability Measures on Groups VIII, LNM 1210, 1986, p. 153-157. 7. Infinitely divisible states on cocommutative bialgebras, Probability Measures on Groups IX, LNM 1379, 1987, p. 310-324. 8. Noncommutative stochastic processes with independent and stationary additive increments, J. Multiv. Anal., 38, 1990, p. 15-35. 9. Gaussian states on bialgebras, Quantum Probability V, LNM 1442, p. 347-367. 10. Quantum q-white noise and a q-central limit theorem, Comm. Math. Phys., 140, 1991, p. 589-615. 11. White Noise on Bialgebras, LNM, to appear.
[SvW] SCHÜRMANN (M.) and von WALDENFELS (W.). A central limit theorem on the free Lie group, QP.III, p. 300-318.
[Seg] SEGAL (I.). 1. Tensor algebras over Hilbert spaces I, Trans. Amer. Math. Soc., 81, 1956, p. 106-134. 2. The complex wave representation of the free Bose field, Topics on Functional Analysis, Adv. in Math. Suppl. n° 3, 1978. 3. Mathematical characterization of the physical vacuum, Ill. J. Math., 6, 1962, p. 500-523. 4. A non commutative extension of abstract integration, Ann. Math., 57, 1953, p. 401-457.
[Sim] SIMON (B.). The P(φ)₂ Euclidean Quantum Field Theory, Princeton Univ. Press, 1974.
[Slo] SLOWIKOWSKI (W.). 1. Ultracoherence in Bose algebras, Adv. Appl. Math., 8, 1988, p. 377-427. 2. Commutative Wick algebras I, Vector Measures and Applications, LNM 644, 1978.
[Spe] SPEICHER (R.). 1. A new example of "independence" and "white noise", Prob. Th. Rel. Fields, 84, 1990, p. 141-159. 2. Survey of stochastic integration on the full Fock space, QP.VI, p. 421-437.
[Str] STROOCK (D.W.). 1. The Malliavin Calculus and its applications to second order partial differential equations, Math. Systems Th., 14, 1981, p. 25-65 and 141-171. 2. The Malliavin Calculus: a functional analytic approach, J. Funct. Anal., 44, 1981, p. 212-257.
[Sur] SURGAILIS (D.). 1. On Poisson multiple stochastic integrals and associated equilibrium Markov semigroups, Theory and Application of Random Fields, LNCI 49, 1983, p. 233-248. 2. Same title, Prob. and Math. Stat., 3, 1984, p. 227-231.
[ViS] VINCENT-SMITH (G.F.). 1. Unitary quantum stochastic evolutions, Proc. London Math. Soc., 63, 1991, p. 1-25. 2. Quantum stochastic evolutions with unbounded generators, QP.VI, p. 465-472.
[Voi] VOICULESCU (D.). 1. Addition of certain non-commuting random variables, J. Funct. Anal., 66, 1986, p. 323-346. 2. Multiplication of certain non commuting random variables, J. Operator Th., 18, 1987, p. 223-235.
[vN] VON NEUMANN (J.). 1. Mathematische Grundlagen der Quantenmechanik, Springer, 1932. 2. On infinite direct products, Compos. Math., 6, 1938, p. 1-77.
[vW] VON WALDENFELS (W.). 1. An algebraic central limit theorem in the anticommuting case, Z. W-theorie Verw. Geb., 42, 1978, p. 135-140. 2. The Markov process of total spins, Sém. Prob. XXIV, LNM 1426, p. 357-361. 3. Illustration of the central limit theorem by independent addition of spins, Sém. Prob. XXIV, LNM 1426, p. 349-356. 4. Ito solution of the linear quantum stochastic differential equation describing light emission and absorption, QP.I, p. 384-411. 5. Positive and conditionally positive sesquilinear forms on anticocommutative coalgebras, Probability Measures on Groups VII, LNM 1064, 1984.
[Wie] WIENER (N.). 1. The homogeneous chaos, Amer. J. Math., 60, 1938, p. 897-936.
[Wor] WORONOWICZ (S.L.). The quantum problem of moments I, II, Reports Math. Phys., 1, 1970,
p. 135-145 and p. 175-183.
Index
Adapted operators, processes, II.2.4, VI.1.1, curve, IV.2.6. Additive functional, A5.1.1. Affiliated, operator -- to a vNa, A4.3.3. Analytic vectors, I.1.6. Angular momentum commutation relations, II.3.1. Annihilation operator, canonical pair --, III.2.1, Fock space --, IV.1.4. Annihilation operator, elementary --, II.1.3, discrete --, II.2.2, spin --, II.3.2, fermion discrete --, II.5.1. Antiparticle, V.1.3. Antipode, VII.1.1. Antisymmetric product, norm, IV.1.1. Approximate unit, A4.2.5. Automorphisms of the Clifford structure, II.5.9. Azéma martingale, IV.2.2, VI.3.14, VII.2.3.
Belavkin's formulas for stochastic integrals, VI.3.3. Belavkin's notation, V.3.2. Bialgebra, VII.1.1. Bicommutant, A4.3.2. Bimodule, VI.3.7. Bloch coherent vectors, II.3.3. Bosons, IV.1.1. Bracket, square, angle, A6.1.2.
Cameron-Martin functions, IV.2.3. Canonical, commutation relation, III.1.1, pair, III.1.1, -- transformation, III.1.3. CAR, II.5.1, IV.1.5. CCR algebra, IV.1.9 remark. CCR, canonical commutation relation, III.1.1, IV.1.5. Chaos spaces, IV.1.1. Chaotic representation property, IV.2.2. Characteristic function, III.1.3. Clebsch-Gordan coefficients, II.3.4. Clifford algebra, II.5.5. Clifford product (continuous), IV.3.8. C-M, abbr. for Cameron-Martin, IV.2.3. Coalgebra, VII.1.1. Cochain, VI.3.7. Cocommutative, VII.1.1. Cocycle, (multiplicative functional) VI.1.12, (cohomology) VI.3.7. Coherent vector (normalized exponential vector), IV.1.3, V.1.4. Coherent vector, elementary --, II.1.4, discrete --, II.2.5, spin --, II.3.3, harmonic oscillator --, III.2.4. Cohomology, VI.3.7. Combinatorial lemma, IV.3.3. Completely positive mapping, A2.5. Complex gaussian model, III.2.5. Composition formula for kernels, IV.4.3 (3.1), (3.2), V.2.3, V.3.3. Conditionally positive, VII.1.6. Conditioned semigroup or process, A5.1.1. Continuous Markov chain, VI.3.11. Continuous spin field, V.2.1, example. Continuous sum of Hilbert spaces, A2.4. Contraction, A5.2.2, of order p, IV.3.1. Contractivity condition, VI.4.4. Convolution, VII.1.3. Coproduct, VII.1.1. Counit, VII.1.1. Creation operator, canonical pair, III.2.1, Fock space, IV.1.4. Creation operator, elementary --, II.1.3, discrete --, II.2.2, spin --, II.3.2, fermion discrete --, II.5.1. Cuntz algebra, IV.1.8. Cyclic vector, V.1.4, A4.2.2.
Density operator, I.1.4. Diagonals, IV.3.1. Differential second quantization, IV.1.6. Dirac's notation, I.1.9. Discrete flows, VI.3.6. Discrete kernels, IV.4.4. Discrete Markov chain, VI.3.8-10. Discrete multiple integrals, VI.3.6. Dominated, operator kernel -- by a scalar kernel, V.2.5. Dynamical semigroup, A2.5. Dynkin's formula, A5.1.2.
Energy, A5.1.1. Enlarged exponential domain Γ₁(ℋ), IV.1.4 remark. Evans notation, V.2.1, delta, V.2.1. Evans-Hudson flow, VI.2.5, VI.4.11. Exchange operators, V.2.1. Exclusion principle, IV.1.1. Exponential domain ℰ, IV.1.3. Exponential s.d.e., right and left, VI.3.1, VI.4.1. Exponential vectors ℰ(h), IV.1.3, V.1.4 (double Fock space), VI.1.8. Exponential vectors, discrete --, II.2.5, spin --, II.3.5.
Faithful state, A4.2.3. Fermion brownian motion, IV.3.9, kernels, IV.4.6. Fermion creation/annihilation operators, II.5.1, IV.1.4. Feynman integral, IV.3.3. Field operators, IV.1.5. Flow, VI.2.5. Fock space, IV.1.2. Fourier-Wiener transform, IV.2.3. Full Fock space, IV.1.2 (remark), IV.1.8 (remark).
Gauge process, IV.2.4. Gaussian states of the CCR, III.1.4. General position (subspaces), A3.1. GNS construction and theorem, A4.2.2-3. Grassmann product, IV.3.8. Guichardet's notation, IV.2.2. Gupta's negative energy oscillator, IV.3.3.
Half-density, VI.3.13. Hamiltonian, III.1.1. Harmonic oscillator, III.2.2. Hellinger-Hahn theorem, A2.4. Hermite polynomials, III.2.3. Hermitian element in an algebra, I.1.1. Hilbert-Schmidt operators, A1.1. Hochschild cohomology, VI.3.7. Hopf algebra, VII.1.1. HS, abbr. for Hilbert-Schmidt, A1.1.
Incomplete n-th chaos, IV.1.1, Fock space, IV.1.2. Increasing simplex, IV.1.2. Initial space J, V.2.5. Interpretation of Fock space, A6.3.1. Intrinsic Hilbert space of a manifold, VI.3.13. Isobe-Sato formula, VI.3.2. Isometry, coisometry and unitarity conditions, VI.2.3. Isotropic subspace, A5.2.1. Ito formula, VI.1.(9.3). Ito table, IV.4.3, V.1.5, V.2.1, V.3.1.
Jordan-Wigner transformation, II.5.2, IV.4.2 (2.8). Journé's time reversal principle, VI.4.9-10.
Kaplansky's theorem, A4.3.4. Kernels, IV.4.1, IV.1.5, V.2.2, V.3.3. KMS condition, A4.3.10.
Left cocycle, VI.1.12. Left exponential, VI.3.1. Lévy process, V.2.6. Lifting theorem, A6.2.2. Lindblad generator, A2.8. Local times, A5.1.3. Lusin space, IV.2.2.
Maassen's kernel, IV.4.1 (1.2)-(1.2'), V.2.2. Maassen's ~-lamina, IV.3.3. Markov chain, VI.3.8-11. Markov process (quantum), A2.7. Martingales (of operators), VI.1.1. Matrix model of the canonical pair, III.2.3. Minimal (Markov) process, A2.6. Minimal uncertainty states, III.1.5. Mixed state, I.1.4. Modular group, A4.4.3. Moebius inversion formula, IV.4.2, IV.4.5. Mohari-Sinha conditions for s.d.e.'s, VI.4.2, for flows VI.4.11. Moment sequence, I.1.1. Multiple integrals, IV.2.1, V.1.1. Multiplication formula (Wiener), IV.3.1, IV.3.2 (2.1), IV.3.4 (4.1), V.1.2, V.1.5. Multiplication formula for Hermite polynomials, IV.3.5. Multiplicative integral, VI.4.7. Multiplicity of Fock space, multiplicity space, V.1.1. Multiplicity theory, A2.4.
Narrow convergence, A1.4. Nelson's theorem, I.1.6. Non Fock representations of the CCR, V.1.3. Normal element in an algebra, I.1.1, A4.1. Normal form, ordering, II.2.3, IV.4.3. Normal martingale, A6.3.1. Normal state, I.1.4, A4.3.1. Normal topology of operators (=ultraweak), A1.5, A4.3.1. n-particle space, IV.1.1. Nuclear operator, A1.2. Number operator, discrete, II.2.2, canonical pair, III.2.1, Fock space, IV.1.6.
Observable, bounded, I.1.4. Occupation field, A5.1.1. Occupation numbers, IV.1.1. Ornstein-Uhlenbeck semigroup, III.2.2, IV.2.4. OU, abbr. for Ornstein-Uhlenbeck, IV.2.4.
Partial isometry, A1.2. Pauli matrices, II.1.2. Pauli's exclusion principle, II.1.3. Permanent, IV.1.1, A5.2.2. Poisson process, IV.2.2. Poisson product formula, IV.3.6. Polar decomposition, A1.2. Polarization formula, IV.1.(3.3). Positive cone, A4.4.9. Potential kernel, A5.1.1. Predictable, previsible, II.2.4. Predual of a vN algebra, A4.3.5. Prohorov's condition, A1.4. Projective unitary representation, III.1.3, IV.1.9. Pure state, I.1.4, A4.2.1.
Quantum dynamical semigroup, VI.2.4, A2.9. Quantum Ito formula, VI.1.(9.3).
Radon-Nikodym theorem (vN algebras), A4.3.9. Random field, A5.1.1. Real gaussian model of the harmonic oscillator, III.2.2. Regular kernel, IV.4.5, V.2.2. Regular process of kernels, VI.3.2. Regular semimartingale, A6.2.1. Representation of a C*-algebra, A4.2.2. Representing sequence, IV.1.2, remark, A5.2.4. Resolution of identity, I.1.6. Riemann products, VI.4.6-7. Right cocycle, VI.1.12. Right exponential, VI.3.1, VI.4.1.
Schrödinger's equation, III.1.1, model, III.1.2. Schürmann triple, VII.1.6. Screw line, III.1.6. Second quantization, IV.1.6. Selfadjoint operators, bounded, I.1.4, unbounded I.1.5. Semimartingale, A6.1.1. Separating vector, V.1.4, A4.2.3. Shift operators, VI.1.11. Shorthand notation, IV.2.2, V.1.1. Simplex, IV.1.2. Skorohod integral, VI.2.1. Smooth process of kernels, VI.3.2. Spatial property, A4.3.6. Spectral measure, I.1.6. Spectral type, A2.4. Spin field, V.2.1. Spin ladder, II.3.2. Spin operator, II.3.2. State, I.1.1. Stinespring's theorem, A2.5. Stochastic exponential, IV.2.3. Stone's theorem, I.1.5. Stone-von Neumann theorem, III.1.6. Stratonovich integral, A5.2.4, chaos expansion, A5.2.4. Strong topology of operators, A1.5. Structure equations, VI.2.5, A6.3.1. Supersymmetric Fock space, A5.2.8. Support of u, VI.1.8, VI.3.1. Symmetric product, norm, IV.1.1.
Temperature states, III.2.6. Tensor norm, IV.1.1. Test vector, IV.4.5, V.2.2. Time reversal, VI.4.9. Toy Fock space, II.2.2. Trace (in a chaos expansion), A5.2.4. Trace class, A1.2, -- norm, A1.2. Tracial state, A4.2.3. Transition kernel, A2.5.
Ultraweak topology, A1.5, A4.3.1. Uncertainty relations, III.1.5. Unitary group, I.1.3, -- element, A4.1.
Vacuum vector, III.2.2, IV.1.1. Vague convergence, A1.4. vNa = von Neumann algebra. Von Neumann algebra, A4.3.2, -- bicommutant theorem, A4.3.2.
Wave function, VI.3.13. Weak cocycle property, VI.1.12. Weak topology of operators, A1.5. Weyl commutation relation, IV.1.9. Weyl operator, elementary --, II.1.4, spin --, II.3.3, canonical pair --, III.2.4, finite dimensional --, III.1.3. Weyl operators, Fock space, IV.1.9, V.1.4. Weyl's commutation relation, III.1.3. Wick monomial, IV.4.1. Wick product of vectors, IV.3.3, discrete --, II.5.7. Wick's theorem, IV.4.3. Wiener measure, IV.2.1. Wiener product, IV.3.1, supersymmetric A5.2.8. Wigner's semicircle law, I.1.2.
Index of Notation
⟨y|, |x⟩, bra and ket vectors, I.1.9.
σx, σy, σz, the Pauli matrices, II.1.2.
xA, discrete basis elements, II.2.1.
𝒫, a set of finite subsets, II.2.1.
a±k, a°k, discrete creation/annih./number operators, II.2.2.
b±, b°, elementary creation/annih./number operators, II.2.3.
qk, pk, discrete field operators, II.2.2.
ℱ, the "Fourier transform" on toy Fock space, II.2.2.
J, spin operator, II.3.2.
n(A, B), number of inversions, II.5.1.
ρ, σ, τ, mappings in a Clifford algebra, II.5.9.
P, Q, the canonical pair, III.1.1.
f̂, the Fourier transform of f, III.1.2.
dx, the Plancherel measure, III.1.(2.1).
1, the vacuum vector, III.2.1, etc.
𝔖n, the group of permutations, IV.1.1.
eA, basis elements, IV.1.1.
∧, ∘, exterior, symmetric product, IV.1.1.
ℋ⊗n, IV.1.1, the n-th tensor power of ℋ.
1, the vacuum vector, IV.1.1.
ℋ∘n, ℋ∧n, ℋn, IV.1.1, n-th sym/antisym/full chaos.
Σn, the increasing simplex, IV.1.2.
Γ(ℋ), the Fock space over ℋ, IV.1.2.
Γ₀(ℋ), the incomplete Fock space, IV.1.2.
ℰ, the exponential domain, IV.1.3.
ℰ(h), an exponential vector, IV.1.3.
a⁺h, b⁺h, boson/fermion creation operators, IV.1.4.
a⁻h, b⁻h, boson/fermion annihilation operators, IV.1.4.
a⁺(h), a⁻(h), IV.1.4.
Γ₁(ℋ), the enlarged exponential domain, IV.1.4.
Qh, Ph, field operators, IV.1.5.
λ(A), a°(A), the differential second quantization of A, IV.1.6.
a°h, b°h, number operators, IV.1.6.
Γ(A), the second quantization of A, IV.1.6.
Cn, the n-th Wiener chaos, IV.2.1.
c⁻h, c⁺h, annihilation/creation operators on the full Fock space, IV.1.8.
∫ f(A) dXA, shorthand notation for multiple integrals, IV.2.2.
|A|, the number of elements of A, IV.2.2.
𝒫, 𝒫n, the space of finite subsets of ℝ₊, IV.2.2.
h', abbreviation for ∫ h(s) dXs, IV.2.3.
h, a Cameron-Martin function, IV.2.3.
∇h, the derivative operator on Wiener space, IV.2.3.
N, the number operator on Fock space, IV.2.4.
Γt], Γ[t, past and future Fock spaces, Γ[s,t], IV.2.6.
f : g, Wick product, IV.3.3.
𝒦, the multiplicity space, V.1.1.
a O , a g , annihilation and creation operators, V.2.1.
a~, number and exchange operators, V.2.1.
dX ° = dt, Evans' notation, V.2.1.
δ̂, Evans' delta, V.2.1.
J, the initial space, V.2.5.
a⁺(|u⟩), a⁻(⟨u|), a°(Q), V.3.1.
Et f, 𝔼t(A), conditional expectation of a vector or operator, VI.1.1.
Φ, Fock space (no initial space), VI.1.1.
Φt], Φt, Φ[t, Φ[s,t], VI.1.1.
Ψ = J ⊗ Φ, Fock space (initial space added), VI.1.1.
ktu, k′tu, VI.1.1.
Ψt], Ψt, Ψ[t, VI.1.1.
I⁺t(K), I⁻t(K), I°t(K), stochastic integrals, VI.1.6-7.
S(u), the "support" of an exponential vector, VI.1.8.
ν(u, ds), a measure associated with an exponential vector, VI.1.(9.7).
τt, τ*t, shift operators on L²(ℝ₊), VI.1.11.
Θt, Θ*t, shift operators on operators of Fock space, VI.1.11.
θt, θ*t, shift operators on vectors of Fock space, VI.1.11.

L̂αβ = (Lβα)*, VI.3.1.
Lαβ, coefficients of a flow, VI.3.5.
F ∘ Xt (Xt homomorphism, applied to F), VI.3.5.
Pt, P̂t, evolution of vectors and operators, VI.3.(1.3-4).
φ√μ, half density, VI.3.13.
Rt, R̃t, time reversal operators, VI.4.9.
ℒ, ℒ∞, the space of bounded operators, A1.3.
ℒ², the space of Hilbert-Schmidt operators, A1.1.
ℒ¹, the space of trace class operators, A1.2.
Tr(a), the trace of a, A1.2.
Sp(a), the spectrum of a, A4.1.2.
ρ(a), the spectral radius of a, A4.1.2.
J′, J″, the commutant and bicommutant of J, A4.3.2.
Δ, coproduct, δ, counit, VII.1.1.
⋆, convolution, VII.1.3.
g(x, y), a potential density, A5.1.1.
p/h, ℙμ/h, conditioned semigroup, A5.1.1.
e(μ, ν), energy form, A5.1.1.
Ex, the expectation of a "Stratonovich" integral, A5.2.2, A5.2.6.
~-product, symmetric, A5.2.3, antisymmetric, A5.2.6.
I(F), the standard chaos expansion for a representing sequence F, A5.2.4.
Tr, Trp, trace operators on representing sequences, A5.2.4.
S(F), Sg(F), the Stratonovich chaos expansion for a representing sequence F, A5.2.4.
0, Clifford product, A5.2.6, supersymmetric Wiener-Grassmann product, A5.2.8.
P, Q, two anticommuting complex Brownian motions, A5.2.6.
*, Wick-Grassmann product, A5.2.8.
k(α), an element of the supersymmetric second chaos, A5.2.8.
𝒮, algebra of bounded semimartingales, A6.1.2.
[[dHs, dKs]], square bracket of semimartingales, A6.2.2.