Mathematical Methods
in Quantum Mechanics
With Applications to Schrödinger Operators
Gerald Teschl
Fakultät für Mathematik
Nordbergstraße 15
Universität Wien
1090 Wien, Austria
Email: Gerald.Teschl@univie.ac.at
URL: http://www.mat.univie.ac.at/~gerald/
2000 Mathematics subject classification. 81-01, 81Qxx, 46-01
Abstract. This manuscript provides a self-contained introduction to mathematical methods in quantum mechanics (spectral theory) with applications to Schrödinger operators. The first part covers mathematical foundations of quantum mechanics from self-adjointness, the spectral theorem, quantum dynamics (including Stone's and the RAGE theorem) to perturbation theory for self-adjoint operators.
The second part starts with a detailed study of the free Schrödinger operator as well as the position, momentum, and angular momentum operators. Then we develop Weyl–Titchmarsh theory for Sturm–Liouville operators and apply it to spherically symmetric problems, in particular to the hydrogen atom. Next we investigate self-adjointness of atomic Schrödinger operators and their essential spectrum, in particular the HVZ theorem. Finally we have a look at scattering theory and prove asymptotic completeness in the short range case.
Keywords and phrases. Schrödinger operators, quantum mechanics, unbounded operators, spectral theory.
Typeset by AMS-LaTeX and Makeindex.
Version: April 19, 2006
Copyright © 1999–2005 by Gerald Teschl
Contents

Preface

Part 0. Preliminaries

Chapter 0. A first look at Banach and Hilbert spaces
§0.1. Warm up: Metric and topological spaces
§0.2. The Banach space of continuous functions
§0.3. The geometry of Hilbert spaces
§0.4. Completeness
§0.5. Bounded operators
§0.6. Lebesgue L^p spaces
§0.7. Appendix: The uniform boundedness principle

Part 1. Mathematical Foundations of Quantum Mechanics

Chapter 1. Hilbert spaces
§1.1. Hilbert spaces
§1.2. Orthonormal bases
§1.3. The projection theorem and the Riesz lemma
§1.4. Orthogonal sums and tensor products
§1.5. The C* algebra of bounded linear operators
§1.6. Weak and strong convergence
§1.7. Appendix: The Stone–Weierstraß theorem

Chapter 2. Self-adjointness and spectrum
§2.1. Some quantum mechanics
§2.2. Self-adjoint operators
§2.3. Resolvents and spectra
§2.4. Orthogonal sums of operators
§2.5. Self-adjoint extensions
§2.6. Appendix: Absolutely continuous functions

Chapter 3. The spectral theorem
§3.1. The spectral theorem
§3.2. More on Borel measures
§3.3. Spectral types
§3.4. Appendix: The Herglotz theorem

Chapter 4. Applications of the spectral theorem
§4.1. Integral formulas
§4.2. Commuting operators
§4.3. The min-max theorem
§4.4. Estimating eigenspaces
§4.5. Tensor products of operators

Chapter 5. Quantum dynamics
§5.1. The time evolution and Stone's theorem
§5.2. The RAGE theorem
§5.3. The Trotter product formula

Chapter 6. Perturbation theory for self-adjoint operators
§6.1. Relatively bounded operators and the Kato–Rellich theorem
§6.2. More on compact operators
§6.3. Hilbert–Schmidt and trace class operators
§6.4. Relatively compact operators and Weyl's theorem
§6.5. Strong and norm resolvent convergence

Part 2. Schrödinger Operators

Chapter 7. The free Schrödinger operator
§7.1. The Fourier transform
§7.2. The free Schrödinger operator
§7.3. The time evolution in the free case
§7.4. The resolvent and Green's function

Chapter 8. Algebraic methods
§8.1. Position and momentum
§8.2. Angular momentum
§8.3. The harmonic oscillator

Chapter 9. One-dimensional Schrödinger operators
§9.1. Sturm–Liouville operators
§9.2. Weyl's limit circle, limit point alternative
§9.3. Spectral transformations

Chapter 10. One-particle Schrödinger operators
§10.1. Self-adjointness and spectrum
§10.2. The hydrogen atom
§10.3. Angular momentum
§10.4. The eigenvalues of the hydrogen atom
§10.5. Nondegeneracy of the ground state

Chapter 11. Atomic Schrödinger operators
§11.1. Self-adjointness
§11.2. The HVZ theorem

Chapter 12. Scattering theory
§12.1. Abstract theory
§12.2. Incoming and outgoing states
§12.3. Schrödinger operators with short range potentials

Part 3. Appendix

Appendix A. Almost everything about Lebesgue integration
§A.1. Borel measures in a nutshell
§A.2. Extending a premeasure to a measure
§A.3. Measurable functions
§A.4. The Lebesgue integral
§A.5. Product measures
§A.6. Decomposition of measures
§A.7. Derivatives of measures

Bibliography
Glossary of notations
Index
Preface
Overview
The present manuscript was written for my course Schrödinger Operators held at the University of Vienna in Winter 1999, Summer 2002, and Summer 2005. It is supposed to give a brief but rather self-contained introduction to the mathematical methods of quantum mechanics with a view towards applications to Schrödinger operators. The applications presented are highly selective, and many important and interesting items are not touched upon.
The first part is a stripped down introduction to spectral theory of unbounded operators where I try to introduce only those topics which are needed for the applications later on. This has the advantage that you will not get drowned in results which are never used again before you get to the applications. In particular, I am not trying to provide an encyclopedic reference. Nevertheless I still feel that the first part should give you a solid background covering all important results which are usually taken for granted in more advanced books and research papers.
My approach is built around the spectral theorem as the central object. Hence I try to get to it as quickly as possible. Moreover, I do not take the detour over bounded operators but go straight for the unbounded case. In addition, existence of spectral measures is established via the Herglotz rather than the Riesz representation theorem, since this approach paves the way for an investigation of spectral types via boundary values of the resolvent as the spectral parameter approaches the real line.
The second part starts with the free Schrödinger equation and computes the free resolvent and time evolution. In addition, I discuss position, momentum, and angular momentum operators via algebraic methods. This is usually found in any physics textbook on quantum mechanics, with the only difference that I include some technical details which are usually not found there. Furthermore, I compute the spectrum of the hydrogen atom; again I try to provide some mathematical details not found in physics textbooks. Further topics are nondegeneracy of the ground state, spectra of atoms (the HVZ theorem), and scattering theory.
Prerequisites
I assume some previous experience with Hilbert spaces and bounded linear operators, which should be covered in any basic course on functional analysis. However, while this assumption is reasonable for mathematics students, it might not always be for physics students. For this reason there is a preliminary chapter reviewing all necessary results (including proofs). In addition, there is an appendix (again with proofs) providing all necessary results from measure theory.
Reader's guide
There is some intentional overlap between Chapter 0, Chapter 1, and Chapter 2. Hence, provided you have the necessary background, you can start reading in Chapter 1 or even Chapter 2. Chapters 2 and 3 are key chapters and you should study them in detail (except for Section 2.5 which can be skipped on first reading). Chapter 4 should give you an idea of how the spectral theorem is used. You should have a look at (e.g.) the first section and you can come back to the remaining ones as needed. Chapter 5 contains two key results from quantum dynamics, Stone's theorem and the RAGE theorem. In particular the RAGE theorem shows the connections between long time behavior and spectral types. Finally, Chapter 6 is again of central importance and should be studied in detail.
The chapters in the second part are mostly independent of each other except for the first one, Chapter 7, which is a prerequisite for all others except for Chapter 9.
If you are interested in one-dimensional models (Sturm–Liouville equations), Chapter 9 is all you need.
If you are interested in atoms, read Chapter 7, Chapter 10, and Chapter 11. In particular, you can skip the separation of variables method (Sections 10.3 and 10.4, which require Chapter 9) for computing the eigenvalues of the hydrogen atom if you are happy with the fact that there are countably many which accumulate at the bottom of the continuous spectrum.
If you are interested in scattering theory, read Chapter 7, the first two sections of Chapter 10, and Chapter 12. Chapter 5 is one of the key prerequisites in this case.
Availability
It is available from
http://www.mat.univie.ac.at/~gerald/ftp/bookschroe/
Acknowledgments
I'd like to thank Volker Enß for making his lecture notes available to me and Wang Lanning, Maria Hoffmann-Ostenhof, Zhenyou Huang, Harald Rindler, and Karl Unterkofler for pointing out errors in previous versions.
Gerald Teschl
Vienna, Austria
February, 2005
Part 0
Preliminaries
Chapter 0
A first look at Banach and Hilbert spaces
I assume that the reader has some basic familiarity with measure theory and functional analysis. For convenience, some facts needed from Banach and L^p spaces are reviewed in this chapter. A crash course in measure theory can be found in the appendix. If you feel comfortable with terms like Lebesgue L^p spaces, Banach space, or bounded linear operator, you can skip this entire chapter. However, you might want to at least browse through it to refresh your memory.
0.1. Warm up: Metric and topological spaces
Before we begin I want to recall some basic facts from metric and topological
spaces. I presume that you are familiar with these topics from your calculus
course. A good reference is [8].
A metric space is a space X together with a function d : X × X → ℝ such that
(i) d(x, y) ≥ 0,
(ii) d(x, y) = 0 if and only if x = y,
(iii) d(x, y) = d(y, x),
(iv) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
If (ii) does not hold, d is called a semi-metric.
Example. Euclidean space ℝⁿ together with d(x, y) = (Σ_{k=1}^n (x_k − y_k)²)^{1/2} is a metric space, and so is ℂⁿ together with d(x, y) = (Σ_{k=1}^n |x_k − y_k|²)^{1/2}.
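The axioms (i)–(iv) are easy to check numerically. The following Python sketch (an illustration, not part of the text) verifies them for the Euclidean metric on a few sample points of ℝ²:

```python
import itertools
import math

def d(x, y):
    # Euclidean metric on R^n: d(x, y) = (sum_k (x_k - y_k)^2)^(1/2)
    return math.sqrt(sum((xk - yk) ** 2 for xk, yk in zip(x, y)))

points = [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5), (1.0, 2.0)]

for x, y, z in itertools.product(points, repeat=3):
    assert d(x, y) >= 0                            # (i) nonnegativity
    assert (d(x, y) == 0) == (x == y)              # (ii) definiteness
    assert d(x, y) == d(y, x)                      # (iii) symmetry
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12    # (iv) triangle inequality
```

The small tolerance in (iv) only accounts for floating point rounding; the inequality itself is exact.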
The set
B_r(x) = {y ∈ X | d(x, y) < r}   (0.1)
is called an open ball around x with radius r > 0. A point x of some set U is called an interior point of U if U contains some ball around x. If x is an interior point of U, then U is also called a neighborhood of x. A point x is called a limit point of U if B_r(x) ∩ (U∖{x}) ≠ ∅ for every ball. Note that a limit point need not lie in U, but U contains points arbitrarily close to x. Moreover, x is not a limit point of U if and only if it is an interior point of the complement of U.
Example. Consider R with the usual metric and let U = (−1, 1). Then
every point x ∈ U is an interior point of U. The points ±1 are limit points
of U.
A set consisting only of interior points is called open. The family of open sets O satisfies the following properties:
(i) ∅, X ∈ O,
(ii) O₁, O₂ ∈ O implies O₁ ∩ O₂ ∈ O,
(iii) {O_α} ⊆ O implies ⋃_α O_α ∈ O.
That is, O is closed under finite intersections and arbitrary unions.
In general, a space X together with a family of sets O, the open sets,
satisfying (i)–(iii) is called a topological space. The notions of interior
point, limit point, and neighborhood carry over to topological spaces if we
replace open ball by open set.
There are usually different choices for the topology. Two usually not very interesting examples are the trivial topology O = {∅, X} and the discrete topology O = P(X) (the power set of X). Given two topologies O₁ and O₂ on X, O₁ is called weaker (or coarser) than O₂ if and only if O₁ ⊆ O₂.
Example. Note that different metrics can give rise to the same topology. For example, we can equip ℝⁿ (or ℂⁿ) with the Euclidean distance as before, or we could also use
d̃(x, y) = Σ_{k=1}^n |x_k − y_k|.   (0.2)
The inequality
(1/√n) Σ_{k=1}^n |x_k| ≤ (Σ_{k=1}^n |x_k|²)^{1/2} ≤ Σ_{k=1}^n |x_k|   (0.3)
shows B_{r/√n}(x) ⊆ B̃_r(x) ⊆ B_r(x), where B, B̃ are balls computed using d, d̃, respectively. Hence the topology is the same for both metrics.
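The two-sided estimate (0.3) can be observed numerically. A Python sketch (random points in ℝ⁵, purely illustrative) confirms d̃/√n ≤ d ≤ d̃, which is exactly what the ball inclusions encode:

```python
import math
import random

def d_euclid(x, y):
    # Euclidean metric d from the previous example
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def d_sum(x, y):
    # the metric d~ from (0.2)
    return sum(abs(a - b) for a, b in zip(x, y))

random.seed(0)
n = 5
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    y = [random.uniform(-10, 10) for _ in range(n)]
    # (0.3): d~/sqrt(n) <= d <= d~, hence both metrics give the same topology
    assert d_sum(x, y) / math.sqrt(n) <= d_euclid(x, y) + 1e-12
    assert d_euclid(x, y) <= d_sum(x, y) + 1e-12
```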
Example. We can always replace a metric d by the bounded metric
d̃(x, y) = d(x, y)/(1 + d(x, y))   (0.4)
without changing the topology.
Every subspace Y of a topological space X becomes a topological space of its own if we call O ⊆ Y open if there is some open set Õ ⊆ X such that O = Õ ∩ Y (induced topology).
Example. The set (0, 1] ⊆ ℝ is not open in the topology of X = ℝ, but it is open in the induced topology when considered as a subset of Y = [−1, 1].
A family of open sets B ⊆ O is called a base for the topology if for each x and each neighborhood U(x), there is some set O ∈ B with x ∈ O ⊆ U(x). Since, for each x in an open set O, we can choose some O(x) ∈ B with x ∈ O(x) ⊆ O, we have O = ⋃_{x∈O} O(x) and hence
Lemma 0.1. If B ⊆ O is a base for the topology, then every open set can be written as a union of elements from B.
If there exists a countable base, then X is called second countable.
Example. By construction the open balls B_{1/n}(x) are a base for the topology in a metric space. In the case of ℝⁿ (or ℂⁿ) it even suffices to take balls with rational center, and hence ℝⁿ (and ℂⁿ) are second countable.
A topological space is called a Hausdorff space if for any two different points there are always two disjoint neighborhoods.
Example. Any metric space is a Hausdorff space: Given two different points x and y, the balls B_{d/2}(x) and B_{d/2}(y), where d = d(x, y) > 0, are disjoint neighborhoods (a semi-metric space will not be Hausdorff).
The complement of an open set is called a closed set. It follows from de Morgan's rules that the family of closed sets C satisfies
(i) ∅, X ∈ C,
(ii) C₁, C₂ ∈ C implies C₁ ∪ C₂ ∈ C,
(iii) {C_α} ⊆ C implies ⋂_α C_α ∈ C.
That is, closed sets are closed under finite unions and arbitrary intersections.
The smallest closed set containing a given set U is called the closure
U̅ = ⋂_{C ∈ C, U ⊆ C} C,   (0.5)
and the largest open set contained in a given set U is called the interior
U° = ⋃_{O ∈ O, O ⊆ U} O.   (0.6)
It is straightforward to check that
Lemma 0.2. Let X be a topological space, then the interior of U is the set
of all interior points of U and the closure of U is the set of all limit points
of U.
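For a finite topological space one can compute closures and interiors directly from (0.5) and (0.6). A small Python sketch (the four-point space and its topology are made up for illustration):

```python
X = frozenset({1, 2, 3, 4})
# a topology on X: closed under finite intersections and arbitrary unions
O = [frozenset(), frozenset({1}), frozenset({1, 2}), X]
C = [X - o for o in O]          # closed sets are complements of open sets

def closure(U):
    # (0.5): intersection of all closed sets containing U
    result = X
    for c in C:
        if U <= c:
            result &= c
    return result

def interior(U):
    # (0.6): union of all open sets contained in U
    result = frozenset()
    for o in O:
        if o <= U:
            result |= o
    return result

U = frozenset({1, 3})
assert interior(U) == frozenset({1})   # largest open subset of {1, 3}
assert closure(U) == X                 # no proper closed set contains {1, 3}
```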
A sequence (x_n)_{n=1}^∞ ⊆ X is said to converge to some point x ∈ X if d(x, x_n) → 0. We write lim_{n→∞} x_n = x as usual in this case. Clearly the limit is unique if it exists (this is not true for a semi-metric).
Every convergent sequence is a Cauchy sequence, that is, for every ε > 0 there is some N ∈ ℕ such that
d(x_n, x_m) ≤ ε,   n, m ≥ N.   (0.7)
If the converse is also true, that is, if every Cauchy sequence has a limit, then X is called complete.
Example. Both ℝⁿ and ℂⁿ are complete metric spaces.
A point x is clearly a limit point of U if and only if there is some sequence x_n ∈ U∖{x} converging to x. Hence
Lemma 0.3. A closed subset of a complete metric space is again a complete metric space.
Note that convergence can also be formulated equivalently in topological terms: A sequence x_n converges to x if and only if for every neighborhood U of x there is some N ∈ ℕ such that x_n ∈ U for n ≥ N. In a Hausdorff space the limit is unique.
A metric space is called separable if it contains a countable dense set. A set U is called dense if its closure is all of X, that is, if U̅ = X.
Lemma 0.4. Let X be a separable metric space. Every subset of X is again
separable.
Proof. Let A = {x_n}_{n∈ℕ} be a dense set in X. The only problem is that A ∩ Y might contain no elements at all. However, some elements of A must be at least arbitrarily close: Let J ⊆ ℕ² be the set of all pairs (n, m) for which B_{1/m}(x_n) ∩ Y ≠ ∅ and choose some y_{n,m} ∈ B_{1/m}(x_n) ∩ Y for all (n, m) ∈ J. Then B = {y_{n,m}}_{(n,m)∈J} ⊆ Y is countable. To see that B is dense, choose y ∈ Y. Then there is some sequence x_{n_k} with d(x_{n_k}, y) < 1/k. Hence (n_k, k) ∈ J and d(y_{n_k,k}, y) ≤ d(y_{n_k,k}, x_{n_k}) + d(x_{n_k}, y) ≤ 2/k → 0.
A function f between metric spaces X and Y is called continuous at a point x ∈ X if for every ε > 0 we can find a δ > 0 such that
d_Y(f(x), f(y)) ≤ ε if d_X(x, y) < δ.   (0.8)
If f is continuous at every point, it is called continuous.
Lemma 0.5. Let X be a metric space. The following are equivalent:
(i) f is continuous at x (i.e., (0.8) holds),
(ii) f(x_n) → f(x) whenever x_n → x,
(iii) for every neighborhood V of f(x), f⁻¹(V) is a neighborhood of x.
Proof. (i) ⇒ (ii) is obvious. (ii) ⇒ (iii): If (iii) does not hold, there is a neighborhood V of f(x) such that B_δ(x) ⊄ f⁻¹(V) for every δ. Hence we can choose a sequence x_n ∈ B_{1/n}(x) such that x_n ∉ f⁻¹(V). Thus x_n → x but f(x_n) ↛ f(x). (iii) ⇒ (i): Choose V = B_ε(f(x)) and observe that by (iii) B_δ(x) ⊆ f⁻¹(V) for some δ.
The last item implies that f is continuous if and only if the inverse image
of every open (closed) set is again open (closed).
Note: In a topological space, (iii) is used as the definition of continuity. However, in general (ii) and (iii) will no longer be equivalent unless one uses generalized sequences, so-called nets, where the index set ℕ is replaced by arbitrary directed sets.
If X and Y are metric spaces, then X × Y together with
d((x₁, y₁), (x₂, y₂)) = d_X(x₁, x₂) + d_Y(y₁, y₂)   (0.9)
is a metric space. A sequence (x_n, y_n) converges to (x, y) if and only if x_n → x and y_n → y. In particular, the projections onto the first coordinate (x, y) ↦ x, respectively onto the second coordinate (x, y) ↦ y, are continuous.
In particular, by
|d(x_n, y_n) − d(x, y)| ≤ d(x_n, x) + d(y_n, y)   (0.10)
we see that d : X × X → ℝ is continuous.
Example. If we consider ℝ × ℝ, we do not get the Euclidean distance of ℝ² unless we modify (0.9) as follows:
d̃((x₁, y₁), (x₂, y₂)) = (d_X(x₁, x₂)² + d_Y(y₁, y₂)²)^{1/2}.   (0.11)
As noted in our previous example, the topology (and thus also convergence/continuity) is independent of this choice.
If X and Y are just topological spaces, the product topology is defined by calling O ⊆ X × Y open if for every point (x, y) ∈ O there are open neighborhoods U of x and V of y such that U × V ⊆ O. In the case of metric spaces this clearly agrees with the topology defined via the product metric (0.9).
A cover of a set Y ⊆ X is a family of sets {U_α} such that Y ⊆ ⋃_α U_α. A cover is called open if all U_α are open. A subset of {U_α} is called a subcover.
A subset K ⊂ X is called compact if every open cover has a finite subcover.
Lemma 0.6. A topological space is compact if and only if it has the ﬁnite
intersection property: The intersection of a family of closed sets is empty
if and only if the intersection of some ﬁnite subfamily is empty.
Proof. By taking complements, to every family of open sets there is a corresponding family of closed sets and vice versa. Moreover, the open sets are a cover if and only if the corresponding closed sets have empty intersection.
A subset K ⊂ X is called sequentially compact if every sequence has a convergent subsequence.
Lemma 0.7. Let X be a topological space.
(i) The continuous image of a compact set is compact.
(ii) Every closed subset of a compact set is compact.
(iii) If X is Hausdorﬀ, any compact set is closed.
(iv) The product of compact sets is compact.
(v) A compact set is also sequentially compact.
Proof. (i) Just observe that if {O_α} is an open cover for f(Y), then {f⁻¹(O_α)} is one for Y.
(ii) Let {O_α} be an open cover for the closed subset Y. Then {O_α} ∪ {X∖Y} is an open cover for X.
(iii) Let Y ⊆ X be compact. We show that X∖Y is open. Fix x ∈ X∖Y (if Y = X there is nothing to do). By the definition of Hausdorff, for every y ∈ Y there are disjoint neighborhoods V(y) of y and U_y(x) of x. By compactness of Y, there are y₁, …, y_n such that the V(y_j) cover Y. But then U(x) = ⋂_{j=1}^n U_{y_j}(x) is a neighborhood of x which does not intersect Y.
(iv) Let {O_α} be an open cover for X × Y. For every (x, y) ∈ X × Y there is some α(x, y) such that (x, y) ∈ O_{α(x,y)}. By definition of the product topology there is some open rectangle U(x, y) × V(x, y) ⊆ O_{α(x,y)}. Hence for fixed x, {V(x, y)}_{y∈Y} is an open cover of Y, so there are finitely many points y_k(x) such that the V(x, y_k(x)) cover Y. Set U(x) = ⋂_k U(x, y_k(x)). Since finite intersections of open sets are open, {U(x)}_{x∈X} is an open cover and there are finitely many points x_j such that the U(x_j) cover X. By construction, the sets U(x_j) × V(x_j, y_k(x_j)) ⊆ O_{α(x_j, y_k(x_j))} cover X × Y.
(v) Let x_n be a sequence which has no convergent subsequence. Then K = {x_n} has no limit points and is hence closed, and thus compact by (ii). For every n there is a ball B_{ε_n}(x_n) which contains only finitely many elements of K. However, finitely many such balls suffice to cover K, a contradiction.
In a metric space compact and sequentially compact are equivalent.
Lemma 0.8. Let X be a metric space. Then a subset is compact if and only
if it is sequentially compact.
Proof. First of all note that every cover by open balls with fixed radius ε > 0 has a finite subcover. Indeed, if this were false, we could construct a sequence x_n ∈ X∖⋃_{m=1}^{n−1} B_ε(x_m) such that d(x_n, x_m) > ε for m < n, and such a sequence has no convergent subsequence.
In particular, we are done if we can show that for every open cover {O_α} there is some ε > 0 such that for every x we have B_ε(x) ⊆ O_α for some α = α(x). Indeed, choosing {x_k}_{k=1}^n such that the B_ε(x_k) are a cover, we have that the O_{α(x_k)} are a cover as well.
So it remains to show that there is such an ε. If there were none, for every ε > 0 there must be an x such that B_ε(x) ⊄ O_α for every α. Choose ε = 1/n and pick a corresponding x_n. Since X is sequentially compact, it is no restriction to assume x_n converges (after maybe passing to a subsequence). Let x = lim x_n, then x lies in some O_α and hence B_ε(x) ⊆ O_α. But choosing n so large that 1/n < ε/2 and d(x_n, x) < ε/2, we have B_{1/n}(x_n) ⊆ B_ε(x) ⊆ O_α, contradicting our assumption.
Please also recall the Heine–Borel theorem:
Theorem 0.9 (Heine–Borel). In ℝⁿ (or ℂⁿ) a set is compact if and only if it is bounded and closed.
Proof. By Lemma 0.7 (ii) and (iii) it suffices to show that a closed interval I ⊆ ℝ is compact. Moreover, by Lemma 0.8 it suffices to show that every sequence in I = [a, b] has a convergent subsequence. Let x_n be our sequence and divide I = [a, (a+b)/2] ∪ [(a+b)/2, b]. Then at least one of these two intervals, call it I₁, contains infinitely many elements of our sequence. Let y₁ = x_{n₁} be the first one. Subdivide I₁ and pick y₂ = x_{n₂}, with n₂ > n₁, as before. Proceeding like this we obtain a Cauchy sequence y_n (note that by construction I_{n+1} ⊆ I_n and hence |y_n − y_m| ≤ (b − a)/2ⁿ for m ≥ n).
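The bisection argument in this proof is effectively an algorithm. Below is a Python sketch of it; since a program cannot test "contains infinitely many elements", the choice of half interval is approximated by counting among a large finite number of terms (an assumption for illustration; the example sequence sin(n) is also just an arbitrary bounded sequence):

```python
import math

def halving_subsequence(x, a, b, stages, horizon=20000):
    """Sketch of the bisection argument: x is a bounded sequence (a function
    of n) with values in [a, b]. At each stage keep the half that contains
    more of the first `horizon` terms (a finite stand-in for "infinitely
    many") and pick a later term lying in it."""
    lo, hi = float(a), float(b)
    indices = []
    last = -1
    for _ in range(stages):
        mid = (lo + hi) / 2
        left = sum(1 for n in range(horizon) if lo <= x(n) <= mid)
        right = sum(1 for n in range(horizon) if mid <= x(n) <= hi)
        lo, hi = (lo, mid) if left >= right else (mid, hi)
        n = last + 1
        while not (lo <= x(n) <= hi):   # next term inside the chosen half
            n += 1
        indices.append(n)
        last = n
    return indices, (lo, hi)

x = lambda n: math.sin(n)               # a bounded sequence in [-1, 1]
idx, (lo, hi) = halving_subsequence(x, -1, 1, 8)
ys = [x(n) for n in idx]
# indices strictly increase and, as in the proof, |y_n - y_m| <= (b-a)/2^(n+1)
assert all(i < j for i, j in zip(idx, idx[1:]))
assert all(abs(ys[m] - ys[n]) <= 2 / 2 ** (n + 1) + 1e-12
           for n in range(len(ys)) for m in range(n, len(ys)))
```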
A topological space is called locally compact if every point has a compact neighborhood.
Example. ℝⁿ is locally compact.
The distance between a point x ∈ X and a subset Y ⊆ X is
dist(x, Y) = inf_{y∈Y} d(x, y).   (0.12)
Note that x ∈ Y̅ if and only if dist(x, Y) = 0.
Lemma 0.10. Let X be a metric space, then
|dist(x, Y) − dist(z, Y)| ≤ d(x, z).   (0.13)
In particular, x ↦ dist(x, Y) is continuous.
Proof. Taking the infimum in the triangle inequality d(x, y) ≤ d(x, z) + d(z, y) shows dist(x, Y) ≤ d(x, z) + dist(z, Y). Hence dist(x, Y) − dist(z, Y) ≤ d(x, z). Interchanging x and z shows dist(z, Y) − dist(x, Y) ≤ d(x, z).
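The Lipschitz estimate (0.13) is easy to test numerically. A Python sketch (with a finite set Y ⊂ ℝ, so the infimum is a minimum; the random data is purely illustrative):

```python
import random

def dist(x, Y):
    # dist(x, Y) = inf over y in Y of |x - y|; Y is finite here, so inf = min
    return min(abs(x - y) for y in Y)

random.seed(1)
Y = [random.uniform(-5, 5) for _ in range(50)]
for _ in range(1000):
    x = random.uniform(-10, 10)
    z = random.uniform(-10, 10)
    # (0.13): |dist(x, Y) - dist(z, Y)| <= d(x, z)
    assert abs(dist(x, Y) - dist(z, Y)) <= abs(x - z) + 1e-12
```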
Lemma 0.11 (Urysohn). Suppose C₁ and C₂ are disjoint closed subsets of a metric space X. Then there is a continuous function f : X → [0, 1] such that f is one on C₁ and zero on C₂.
If X is locally compact and C₁ is compact, one can choose f with compact support.
Proof. To prove the first claim, set f(x) = dist(x, C₂)/(dist(x, C₁) + dist(x, C₂)). For the second claim, observe that there is an open set O such that O̅ is compact and C₁ ⊂ O ⊂ O̅ ⊂ X∖C₂. In fact, for every x ∈ C₁, there is a ball B_ε(x) such that B̅_ε(x) is compact and B̅_ε(x) ⊂ X∖C₂. Since C₁ is compact, finitely many of them cover C₁ and we can choose the union of those balls to be O. Now replace C₂ by X∖O.
Note that Urysohn's lemma implies that a metric space is normal, that is, for any two disjoint closed sets C₁ and C₂, there are disjoint open sets O₁ and O₂ such that C_j ⊆ O_j, j = 1, 2. In fact, choose f as in Urysohn's lemma; then the preimages f⁻¹([0, 1/2)) and f⁻¹((1/2, 1]) are disjoint open sets, one containing C₁ and the other containing C₂.
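The Urysohn function is an explicit formula, so it can be evaluated directly. In the following Python sketch the two disjoint closed sets are modeled by finite subsets of ℝ (an assumption for illustration): g = dist(·, C₂)/(dist(·, C₁) + dist(·, C₂)) is continuous with values in [0, 1], equals one on C₁ and zero on C₂, and 1 − g interchanges the roles of the two sets.

```python
def dist(x, C):
    # distance from the point x to the finite set C
    return min(abs(x - c) for c in C)

def g(x, C1, C2):
    # one on C1, zero on C2; the denominator never vanishes since
    # C1 and C2 are disjoint and closed
    d1, d2 = dist(x, C1), dist(x, C2)
    return d2 / (d1 + d2)

C1 = [0.0, 1.0]        # finite stand-ins for the disjoint closed sets
C2 = [5.0, 6.0]
assert g(0.0, C1, C2) == 1.0
assert g(5.0, C1, C2) == 0.0
assert 0.0 <= g(3.0, C1, C2) <= 1.0
```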
0.2. The Banach space of continuous functions
Now let us have a first look at Banach spaces by investigating the set of continuous functions C(I) on a compact interval I = [a, b] ⊂ ℝ. Since we want to handle complex models, we will always consider complex-valued functions!
One way of declaring a distance, well known from calculus, is the maximum norm:
‖f − g‖∞ = max_{x∈I} |f(x) − g(x)|.   (0.14)
It is not hard to see that with this definition C(I) becomes a normed linear space:
A normed linear space X is a vector space over ℂ (or ℝ) with a real-valued function (the norm) ‖·‖ such that
• ‖f‖ ≥ 0 for all f ∈ X and ‖f‖ = 0 if and only if f = 0,
• ‖λ f‖ = |λ| ‖f‖ for all λ ∈ ℂ and f ∈ X, and
• ‖f + g‖ ≤ ‖f‖ + ‖g‖ for all f, g ∈ X (triangle inequality).
From the triangle inequality we also get the inverse triangle inequality (Problem 0.1)
|‖f‖ − ‖g‖| ≤ ‖f − g‖.   (0.15)
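Both triangle inequalities are easy to observe for the maximum norm (0.14). A Python sketch, approximating the maximum over I = [0, 1] by a finite grid (a numerical stand-in for the exact norm; the test functions are arbitrary):

```python
import math

def sup_norm(f, grid):
    # maximum norm (0.14), approximated on a finite grid of I
    return max(abs(f(x)) for x in grid)

grid = [k / 100 for k in range(101)]           # I = [0, 1]
f = lambda x: math.sin(2 * math.pi * x)
g = lambda x: x * x

nf, ng = sup_norm(f, grid), sup_norm(g, grid)
nfg = sup_norm(lambda x: f(x) - g(x), grid)
nsum = sup_norm(lambda x: f(x) + g(x), grid)
assert nsum <= nf + ng + 1e-12                 # triangle inequality
assert abs(nf - ng) <= nfg + 1e-12             # inverse triangle inequality (0.15)
```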
Once we have a norm, we have a distance d(f, g) = ‖f − g‖ and hence we know when a sequence of vectors f_n converges to a vector f. We will write f_n → f or lim_{n→∞} f_n = f, as usual, in this case. Moreover, a mapping F : X → Y between two normed spaces is called continuous if f_n → f implies F(f_n) → F(f). In fact, it is not hard to see that the norm, vector addition, and multiplication by scalars are continuous (Problem 0.2).
In addition to the concept of convergence we also have the concept of a Cauchy sequence and hence the concept of completeness: A normed space is called complete if every Cauchy sequence has a limit. A complete normed space is called a Banach space.
Example. The space ℓ¹(ℕ) of all sequences a = (a_j)_{j=1}^∞ for which the norm
‖a‖₁ = Σ_{j=1}^∞ |a_j|   (0.16)
is finite, is a Banach space.
To show this, we need to verify three things: (i) ℓ¹(ℕ) is a vector space, that is, closed under addition and scalar multiplication, (ii) ‖·‖₁ satisfies the three requirements for a norm, and (iii) ℓ¹(ℕ) is complete.
First of all observe
Σ_{j=1}^k |a_j + b_j| ≤ Σ_{j=1}^k |a_j| + Σ_{j=1}^k |b_j| ≤ ‖a‖₁ + ‖b‖₁   (0.17)
for any finite k. Letting k → ∞, we conclude that ℓ¹(ℕ) is closed under addition and that the triangle inequality holds. That ℓ¹(ℕ) is closed under scalar multiplication and the two other properties of a norm are straightforward. It remains to show that ℓ¹(ℕ) is complete. Let a^n = (a^n_j)_{j=1}^∞ be a Cauchy sequence, that is, for given ε > 0 we can find an N_ε such that ‖a^m − a^n‖₁ ≤ ε for m, n ≥ N_ε. This implies in particular |a^m_j − a^n_j| ≤ ε for any fixed j. Thus a^n_j is a Cauchy sequence for fixed j and, by completeness of ℂ, has a limit: lim_{n→∞} a^n_j = a_j. Now consider
Σ_{j=1}^k |a^m_j − a^n_j| ≤ ε   (0.18)
and take m → ∞:
Σ_{j=1}^k |a_j − a^n_j| ≤ ε.   (0.19)
Since this holds for any finite k, we even have ‖a − a^n‖₁ ≤ ε. Hence (a − a^n) ∈ ℓ¹(ℕ) and since a^n ∈ ℓ¹(ℕ) we finally conclude a = a^n + (a − a^n) ∈ ℓ¹(ℕ).
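The finite-k estimate (0.17) behind the triangle inequality can be illustrated in Python (using finitely supported sequences represented as lists; the particular numbers are arbitrary):

```python
def l1_norm(a):
    # ||a||_1 = sum_j |a_j| for a finitely supported sequence given as a list
    return sum(abs(x) for x in a)

a = [1.0, -0.5, 0.25, -0.125]
b = [0.5, 0.5, -1.0, 2.0]
s = [x + y for x, y in zip(a, b)]
# (0.17): the partial sums, and hence the norms, obey the triangle inequality
assert l1_norm(s) <= l1_norm(a) + l1_norm(b)
```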
Example. The space ℓ^∞(ℕ) of all bounded sequences a = (a_j)_{j=1}^∞ together with the norm
‖a‖∞ = sup_{j∈ℕ} |a_j|   (0.20)
is a Banach space (Problem 0.3).
Now what about convergence in this space? A sequence of functions f_n(x) converges to f if and only if
lim_{n→∞} ‖f − f_n‖ = lim_{n→∞} sup_{x∈I} |f_n(x) − f(x)| = 0.   (0.21)
That is, in the language of real analysis, f_n converges uniformly to f. Now let us look at the case where f_n is only a Cauchy sequence. Then f_n(x) is clearly a Cauchy sequence of complex numbers for any fixed x ∈ I. In particular, by completeness of ℂ, there is a limit f(x) for each x. Thus we get a limiting function f(x). Moreover, letting m → ∞ in
|f_m(x) − f_n(x)| ≤ ε,   ∀ m, n > N_ε, x ∈ I,   (0.22)
we see
|f(x) − f_n(x)| ≤ ε,   ∀ n > N_ε, x ∈ I,   (0.23)
that is, f_n(x) converges uniformly to f(x). However, up to this point we do not know whether f is in our vector space C(I), that is, whether it is continuous. Fortunately, there is a well-known result from real analysis which tells us that the uniform limit of continuous functions is again continuous. Hence f(x) ∈ C(I) and thus every Cauchy sequence in C(I) converges. Or, in other words,
Theorem 0.12. C(I) with the maximum norm is a Banach space.
Next we want to know if there is a basis for C(I). In order to have only countable sums, we would even prefer a countable basis. If such a basis exists, that is, if there is a set {u_n} ⊂ X of linearly independent vectors such that every element f ∈ X can be written as
f = Σ_n c_n u_n,   c_n ∈ ℂ,   (0.24)
then the span span{u_n} (the set of all finite linear combinations) of {u_n} is dense in X. A set whose span is dense is called total, and if we have a total set, we also have a countable dense set (consider only linear combinations with rational coefficients; show this). A normed linear space containing a countable dense set is called separable.
Example. The Banach space ℓ¹(ℕ) is separable. In fact, the set of vectors δ^n, with δ^n_n = 1 and δ^n_m = 0 for n ≠ m, is total: Let a ∈ ℓ¹(ℕ) be given and set a^n = Σ_{k=1}^n a_k δ^k, then
‖a − a^n‖₁ = Σ_{j=n+1}^∞ |a_j| → 0   (0.25)
since a^n_j = a_j for 1 ≤ j ≤ n and a^n_j = 0 for j > n.
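The convergence (0.25) is just the statement that the tails of a summable series go to zero. A quick Python illustration with the (purely illustrative) sequence a_j = 1/j², truncated to finitely many terms:

```python
def tail_l1(a, n):
    # ||a - a^n||_1 = sum_{j > n} |a_j| as in (0.25), for a given as a list
    return sum(abs(x) for x in a[n:])

a = [1.0 / (j * j) for j in range(1, 2001)]   # a_j = 1/j^2 lies in l^1(N)
tails = [tail_l1(a, n) for n in range(0, 2000, 100)]
# the truncations a^n converge to a in l^1: the tails decrease towards 0
assert all(t1 >= t2 for t1, t2 in zip(tails, tails[1:]))
assert tails[-1] < 1e-2
```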
Luckily this is also the case for C(I):
Theorem 0.13 (Weierstraß). Let I be a compact interval. Then the set of
polynomials is dense in C(I).
Proof. Let f(x) ∈ C(I) be given. By considering f(x) − f(a) − (f(b) − f(a))(x − a)/(b − a), it is no loss to assume that f vanishes at the boundary points. Moreover, without restriction we only consider I = [−1/2, 1/2] (why?).
Now the claim follows from the lemma below using
u_n(x) = (1/I_n)(1 − x²)ⁿ,   (0.26)
where
I_n = ∫_{−1}^{1} (1 − x²)ⁿ dx = n!/((1/2)(1/2 + 1) ⋯ (1/2 + n)) = √π Γ(1 + n)/Γ(3/2 + n) = √(π/n) (1 + O(1/n)).   (0.27)
(Remark: The integral is known as the Beta function and the asymptotics follow from Stirling's formula.)
Lemma 0.14 (Smoothing). Let u_n(x) be a sequence of nonnegative continuous functions on [−1, 1] such that
∫_{|x|≤1} u_n(x) dx = 1 and ∫_{δ≤|x|≤1} u_n(x) dx → 0, δ > 0.   (0.28)
(In other words, u_n has mass one and concentrates near x = 0 as n → ∞.)
Then for every f ∈ C[−1/2, 1/2] which vanishes at the endpoints, f(−1/2) = f(1/2) = 0, we have that
f_n(x) = ∫_{−1/2}^{1/2} u_n(x − y) f(y) dy   (0.29)
converges uniformly to f(x).
Proof. Since f is uniformly continuous, for given ε we can find a δ (independent of x) such that |f(x) − f(y)| ≤ ε whenever |x − y| ≤ δ. Moreover, we can choose n such that ∫_{δ≤|y|≤1} u_n(y) dy ≤ ε. Now abbreviate M = max |f| and note
|f(x) − ∫_{−1/2}^{1/2} u_n(x − y) f(x) dy| = |f(x)| |1 − ∫_{−1/2}^{1/2} u_n(x − y) dy| ≤ Mε.   (0.30)
In fact, either the distance of x to one of the boundary points ±1/2 is smaller than δ, and hence |f(x)| ≤ ε, or otherwise the difference between one and the integral is smaller than ε.
Using this we have
|f_n(x) − f(x)| ≤ ∫_{−1/2}^{1/2} u_n(x − y) |f(y) − f(x)| dy + Mε
≤ ∫_{|y|≤1/2, |x−y|≤δ} u_n(x − y) |f(y) − f(x)| dy + ∫_{|y|≤1/2, |x−y|≥δ} u_n(x − y) |f(y) − f(x)| dy + Mε
≤ ε + 2Mε + Mε = (1 + 3M)ε,   (0.31)
which proves the claim.
Note that f_n will be as smooth as u_n, hence the title smoothing lemma. The same idea is used to approximate noncontinuous functions by smooth ones (of course the convergence will no longer be uniform in this case).
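Since the proof of Theorem 0.13 is constructive, it can be tried numerically. The Python sketch below convolves f with the kernel from (0.26) using simple Riemann sums (the grid sizes, the normalization by a numerically computed I_n, and the test function are arbitrary choices for illustration); the uniform error decreases as n grows:

```python
import math

def kernel(x, n):
    # the unnormalized kernel (1 - x^2)^n from (0.26), supported on [-1, 1]
    return (1 - x * x) ** n if abs(x) <= 1 else 0.0

def smooth(f, n, x, m=1000):
    # f_n(x) = int_{-1/2}^{1/2} u_n(x - y) f(y) dy via a midpoint Riemann sum,
    # with u_n = kernel / I_n and I_n computed numerically over [-1, 1]
    I_n = sum(kernel(-1 + (k + 0.5) * (2 / m), n) * (2 / m) for k in range(m))
    h = 1 / m
    ys = [-0.5 + (k + 0.5) * h for k in range(m)]
    return sum(kernel(x - y, n) * f(y) for y in ys) * h / I_n

f = lambda x: math.cos(math.pi * x)   # vanishes at +-1/2, as Lemma 0.14 requires

def uniform_error(n):
    xs = [k / 50 - 0.5 for k in range(51)]
    return max(abs(smooth(f, n, x) - f(x)) for x in xs)

assert uniform_error(64) < uniform_error(4)   # better approximation for larger n
```

Note that each f_n here is (up to the numerical discretization) a polynomial in x of degree 2n, which is how the lemma yields the Weierstraß theorem.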
Corollary 0.15. C(I) is separable.

The same is true for $\ell^1(\mathbb{N})$, but not for $\ell^\infty(\mathbb{N})$ (Problem 0.4)!
Problem 0.1. Show that $|\|f\| - \|g\|| \le \|f - g\|$.
Problem 0.2. Show that the norm, vector addition, and multiplication by scalars are continuous. That is, if $f_n \to f$, $g_n \to g$, and $\lambda_n \to \lambda$, then $\|f_n\| \to \|f\|$, $f_n + g_n \to f + g$, and $\lambda_n g_n \to \lambda g$.
Problem 0.3. Show that $\ell^\infty(\mathbb{N})$ is a Banach space.

Problem 0.4. Show that $\ell^\infty(\mathbb{N})$ is not separable. (Hint: Consider sequences which take only the values one and zero. How many are there? What is the distance between two such sequences?)
0.3. The geometry of Hilbert spaces
So it looks like C(I) has all the properties we want. However, there is still one thing missing: How should we define orthogonality in C(I)? In Euclidean space, two vectors are called orthogonal if their scalar product vanishes, so we would need a scalar product:
Suppose H is a vector space. A map $\langle\cdot,\cdot\rangle : H \times H \to \mathbb{C}$ is called a skew linear form if it is conjugate linear in the first and linear in the second argument, that is,
\[ \langle \alpha_1 f_1 + \alpha_2 f_2, g\rangle = \alpha_1^* \langle f_1, g\rangle + \alpha_2^* \langle f_2, g\rangle, \]
\[ \langle f, \alpha_1 g_1 + \alpha_2 g_2\rangle = \alpha_1 \langle f, g_1\rangle + \alpha_2 \langle f, g_2\rangle, \qquad \alpha_1, \alpha_2 \in \mathbb{C}, \qquad(0.32) \]
where ‘∗’ denotes complex conjugation. A skew linear form satisfying the requirements

(i) $\langle f, f\rangle > 0$ for $f \ne 0$ (positive definite)

(ii) $\langle f, g\rangle = \langle g, f\rangle^*$ (symmetry)

is called an inner product or scalar product. Associated with every scalar product is a norm
\[ \|f\| = \sqrt{\langle f, f\rangle}. \qquad(0.33) \]
The pair $(H, \langle\cdot,\cdot\rangle)$ is called an inner product space. If H is complete it is called a Hilbert space.
Example. Clearly $\mathbb{C}^n$ with the usual scalar product
\[ \langle a, b\rangle = \sum_{j=1}^{n} a_j^* b_j \qquad(0.34) \]
is a (finite dimensional) Hilbert space.
Example. A somewhat more interesting example is the Hilbert space $\ell^2(\mathbb{N})$, that is, the set of all sequences
\[ \Big\{ (a_j)_{j=1}^{\infty} \;\Big|\; \sum_{j=1}^{\infty} |a_j|^2 < \infty \Big\} \qquad(0.35) \]
with scalar product
\[ \langle a, b\rangle = \sum_{j=1}^{\infty} a_j^* b_j. \qquad(0.36) \]
(Show that this is in fact a separable Hilbert space! Problem 0.5)
Of course I still owe you a proof for the claim that $\sqrt{\langle f, f\rangle}$ is indeed a norm. Only the triangle inequality is nontrivial, which will follow from the Cauchy–Schwarz inequality below.

A vector $f \in H$ is called normalized or a unit vector if $\|f\| = 1$. Two vectors $f, g \in H$ are called orthogonal or perpendicular ($f \perp g$) if $\langle f, g\rangle = 0$ and parallel if one is a multiple of the other.
For two orthogonal vectors we have the Pythagorean theorem:
\[ \|f+g\|^2 = \|f\|^2 + \|g\|^2, \qquad f \perp g, \qquad(0.37) \]
which is one line of computation.
Suppose u is a unit vector, then the projection of f in the direction of u is given by
\[ f_\parallel = \langle u, f\rangle u \qquad(0.38) \]
and $f_\perp$ defined via
\[ f_\perp = f - \langle u, f\rangle u \qquad(0.39) \]
is perpendicular to u since $\langle u, f_\perp\rangle = \langle u, f - \langle u, f\rangle u\rangle = \langle u, f\rangle - \langle u, f\rangle\langle u, u\rangle = 0$.
[Figure: the decomposition $f = f_\parallel + f_\perp$ relative to the unit vector u.]
Taking any other vector parallel to u, it is easy to see
\[ \|f - \alpha u\|^2 = \|f_\perp + (f_\parallel - \alpha u)\|^2 = \|f_\perp\|^2 + |\langle u, f\rangle - \alpha|^2 \qquad(0.40) \]
and hence $f_\parallel = \langle u, f\rangle u$ is the unique vector parallel to u which is closest to f.
As a first consequence we obtain the Cauchy–Schwarz–Bunjakowski inequality:

Theorem 0.16 (Cauchy–Schwarz–Bunjakowski). Let $H_0$ be an inner product space, then for every $f, g \in H_0$ we have
\[ |\langle f, g\rangle| \le \|f\|\,\|g\| \qquad(0.41) \]
with equality if and only if f and g are parallel.
Proof. It suffices to prove the case $\|g\| = 1$. But then the claim follows from $\|f\|^2 = |\langle g, f\rangle|^2 + \|f_\perp\|^2$.
Note that the Cauchy–Schwarz inequality entails that the scalar product is continuous in both variables, that is, if $f_n \to f$ and $g_n \to g$ we have $\langle f_n, g_n\rangle \to \langle f, g\rangle$.
As another consequence we infer that the map $\|\cdot\|$ is indeed a norm:
\[ \|f+g\|^2 = \|f\|^2 + \langle f, g\rangle + \langle g, f\rangle + \|g\|^2 \le (\|f\| + \|g\|)^2. \qquad(0.42) \]
But let us return to C(I). Can we find a scalar product which has the maximum norm as associated norm? Unfortunately the answer is no! The reason is that the maximum norm does not satisfy the parallelogram law (Problem 0.7).
Theorem 0.17 (Jordan–von Neumann). A norm is associated with a scalar product if and only if the parallelogram law
\[ \|f+g\|^2 + \|f-g\|^2 = 2\|f\|^2 + 2\|g\|^2 \qquad(0.43) \]
holds.
In this case the scalar product can be recovered from its norm by virtue of the polarization identity
\[ \langle f, g\rangle = \frac{1}{4}\Big( \|f+g\|^2 - \|f-g\|^2 + i\|f - i g\|^2 - i\|f + i g\|^2 \Big). \qquad(0.44) \]
Proof. If an inner product space is given, verification of the parallelogram law and the polarization identity is straightforward (Problem 0.6).

To show the converse, we define
\[ s(f, g) = \frac{1}{4}\Big( \|f+g\|^2 - \|f-g\|^2 + i\|f - i g\|^2 - i\|f + i g\|^2 \Big). \qquad(0.45) \]
Then $s(f, f) = \|f\|^2$ and $s(f, g) = s(g, f)^*$ are straightforward to check. Moreover, another straightforward computation using the parallelogram law shows
\[ s(f, g) + s(f, h) = 2\, s\Big(f, \frac{g+h}{2}\Big). \qquad(0.46) \]
Now choosing $h = 0$ (and using $s(f, 0) = 0$) shows $s(f, g) = 2\, s(f, \frac{g}{2})$ and thus $s(f, g) + s(f, h) = s(f, g + h)$. Furthermore, by induction we infer $\frac{m}{2^n} s(f, g) = s(f, \frac{m}{2^n} g)$, that is, $\lambda s(f, g) = s(f, \lambda g)$ for every positive rational λ. By continuity (check this!) this holds for all $\lambda > 0$, and $s(f, -g) = -s(f, g)$ respectively $s(f, i g) = i\, s(f, g)$ finishes the proof.
Note that the parallelogram law and the polarization identity even hold
for skew linear forms (Problem 0.6).
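Both halves of Theorem 0.17 can be spot-checked numerically. The following NumPy sketch (not part of the text) recovers the finite dimensional scalar product (0.34) from its norm via (0.44), and then shows the parallelogram law (0.43) failing for the maximum norm (Problem 0.7).

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(5) + 1j * rng.standard_normal(5)
g = rng.standard_normal(5) + 1j * rng.standard_normal(5)
norm = np.linalg.norm

# Polarization identity (0.44), with the convention of (0.32): conjugate
# linear in the first argument, linear in the second.
recovered = 0.25 * (norm(f + g)**2 - norm(f - g)**2
                    + 1j * norm(f - 1j * g)**2 - 1j * norm(f + 1j * g)**2)
inner = np.vdot(f, g)                       # sum_j f_j^* g_j, as in (0.34)

# The maximum norm violates the parallelogram law (0.43): take
# f = (1, 0), g = (0, 1) with the sup norm.
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sup = lambda v: np.max(np.abs(v))
lhs = sup(a + b)**2 + sup(a - b)**2         # left side of (0.43)
rhs = 2 * sup(a)**2 + 2 * sup(b)**2         # right side of (0.43)
```

Here `lhs` = 2 while `rhs` = 4, so no scalar product can induce the maximum norm.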
But how do we define a scalar product on C(I)? One possibility is
\[ \langle f, g\rangle = \int_a^b f^*(x)\, g(x)\,dx. \qquad(0.47) \]
The corresponding inner product space is denoted by $L^2_{cont}(I)$. Note that we have
\[ \|f\| \le \sqrt{|b-a|}\;\|f\|_\infty \qquad(0.48) \]
and hence the maximum norm is stronger than the $L^2_{cont}$ norm.
Suppose we have two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on a space X. Then $\|\cdot\|_2$ is said to be stronger than $\|\cdot\|_1$ if there is a constant $m > 0$ such that
\[ \|f\|_1 \le m \|f\|_2. \qquad(0.49) \]
It is straightforward to check that

Lemma 0.18. If $\|\cdot\|_2$ is stronger than $\|\cdot\|_1$, then any $\|\cdot\|_2$ Cauchy sequence is also a $\|\cdot\|_1$ Cauchy sequence.

Hence if a function $F : X \to Y$ is continuous in $(X, \|\cdot\|_1)$ it is also continuous in $(X, \|\cdot\|_2)$, and if a set is dense in $(X, \|\cdot\|_2)$ it is also dense in $(X, \|\cdot\|_1)$.
In particular, $L^2_{cont}$ is separable. But is it also complete? Unfortunately the answer is no:
Example. Take $I = [0, 2]$ and define
\[ f_n(x) = \begin{cases} 0, & 0 \le x \le 1 - \frac{1}{n} \\ 1 + n(x-1), & 1 - \frac{1}{n} \le x \le 1 \\ 1, & 1 \le x \le 2, \end{cases} \qquad(0.50) \]
then $f_n(x)$ is a Cauchy sequence in $L^2_{cont}$, but there is no limit in $L^2_{cont}$! Clearly the limit should be the step function which is 0 for $0 \le x < 1$ and 1 for $1 \le x \le 2$, but this step function is discontinuous (Problem 0.8)!
This shows that in infinite dimensional spaces different norms will give rise to different convergent sequences! In fact, the key to solving problems in infinite dimensional spaces is often finding the right norm! This is something which cannot happen in the finite dimensional case.
Theorem 0.19. If X is finite dimensional, then all norms are equivalent. That is, for any two given norms $\|\cdot\|_1$ and $\|\cdot\|_2$ there are constants $m_1$ and $m_2$ such that
\[ \frac{1}{m_2}\|f\|_1 \le \|f\|_2 \le m_1 \|f\|_1. \qquad(0.51) \]
Proof. Clearly we can choose a basis $u_j$, $1 \le j \le n$, and assume that $\|\cdot\|_2$ is the usual Euclidean norm, $\|\sum_j \alpha_j u_j\|_2^2 = \sum_j |\alpha_j|^2$. Let $f = \sum_j \alpha_j u_j$, then by the triangle and Cauchy–Schwarz inequalities
\[ \|f\|_1 \le \sum_j |\alpha_j|\, \|u_j\|_1 \le \sqrt{\sum_j \|u_j\|_1^2}\;\|f\|_2 \qquad(0.52) \]
and we can choose $m_2 = \sqrt{\sum_j \|u_j\|_1^2}$.
In particular, if $f_n$ is convergent with respect to $\|\cdot\|_2$ it is also convergent with respect to $\|\cdot\|_1$. Thus $\|\cdot\|_1$ is continuous with respect to $\|\cdot\|_2$ and attains its minimum $m > 0$ on the unit sphere (which is compact by the Heine–Borel theorem). Now choose $m_1 = 1/m$.
Problem 0.5. Show that $\ell^2(\mathbb{N})$ is a separable Hilbert space.
Problem 0.6. Let $s(f, g)$ be a skew linear form and $p(f) = s(f, f)$ the associated quadratic form. Prove the parallelogram law
\[ p(f+g) + p(f-g) = 2p(f) + 2p(g) \qquad(0.53) \]
and the polarization identity
\[ s(f, g) = \frac{1}{4}\big( p(f+g) - p(f-g) + i\, p(f - i g) - i\, p(f + i g) \big). \qquad(0.54) \]
Problem 0.7. Show that the maximum norm (on C[0, 1]) does not satisfy
the parallelogram law.
Problem 0.8. Prove the claims made about $f_n$, defined in (0.50), in the last example.
0.4. Completeness
Since $L^2_{cont}$ is not complete, how can we obtain a Hilbert space out of it? Well, the answer is simple: take the completion.
If X is an (incomplete) normed space, consider the set $\tilde X$ of all Cauchy sequences. Call two Cauchy sequences equivalent if their difference converges to zero and denote by $\bar X$ the set of all equivalence classes. It is easy to see that $\bar X$ (and $\tilde X$) inherit the vector space structure from X. Moreover,
Lemma 0.20. If $x_n$ is a Cauchy sequence, then $\|x_n\|$ converges.

Consequently the norm of a Cauchy sequence $(x_n)_{n=1}^\infty$ can be defined by $\|(x_n)_{n=1}^\infty\| = \lim_{n\to\infty} \|x_n\|$ and is independent of the equivalence class (show this!). Thus $\bar X$ is a normed space ($\tilde X$ is not! why?).
Theorem 0.21. $\bar X$ is a Banach space containing X as a dense subspace if we identify $x \in X$ with the equivalence class of all sequences converging to x.
Proof. (Outline) It remains to show that $\bar X$ is complete. Let $\xi_n = [(x_{n,j})_{j=1}^\infty]$ be a Cauchy sequence in $\bar X$. Then it is not hard to see that $\xi = [(x_{j,j})_{j=1}^\infty]$ is its limit.
Let me remark that the completion $\bar X$ is unique. More precisely, any other complete space which contains X as a dense subset is isomorphic to $\bar X$. This can for example be seen by showing that the identity map on X has a unique extension to $\bar X$ (compare Theorem 0.24 below).

In particular it is no restriction to assume that a normed linear space or an inner product space is complete. However, in the important case of $L^2_{cont}$ it is somewhat inconvenient to work with equivalence classes of Cauchy sequences and hence we will give a different characterization using the Lebesgue integral later.
0.5. Bounded operators
A linear map A between two normed spaces X and Y will be called a (linear) operator
\[ A : D(A) \subseteq X \to Y. \qquad(0.55) \]
The linear subspace D(A) on which A is defined is called the domain of A and is usually required to be dense. The kernel
\[ \mathrm{Ker}(A) = \{f \in D(A) \mid Af = 0\} \qquad(0.56) \]
and range
\[ \mathrm{Ran}(A) = \{Af \mid f \in D(A)\} = A\,D(A) \qquad(0.57) \]
are defined as usual. The operator A is called bounded if the operator norm
\[ \|A\| = \sup_{\|f\|_X = 1} \|Af\|_Y \qquad(0.58) \]
is finite.
The set of all bounded linear operators from X to Y is denoted by
L(X, Y ). If X = Y we write L(X, X) = L(X).
Theorem 0.22. The space L(X, Y ) together with the operator norm (0.58)
is a normed space. It is a Banach space if Y is.
Proof. That (0.58) is indeed a norm is straightforward. If Y is complete and $A_n$ is a Cauchy sequence of operators, then $A_n f$ converges to an element g for every f. Define a new operator A via $Af = g$. By continuity of the vector operations, A is linear and by continuity of the norm $\|Af\| = \lim_{n\to\infty} \|A_n f\| \le (\lim_{n\to\infty} \|A_n\|)\, \|f\|$ it is bounded. Furthermore, given $\varepsilon > 0$ there is some N such that $\|A_n - A_m\| \le \varepsilon$ for $n, m \ge N$ and thus $\|A_n f - A_m f\| \le \varepsilon \|f\|$. Taking the limit $m \to \infty$ we see $\|A_n f - A f\| \le \varepsilon \|f\|$, that is, $A_n \to A$.
By construction, a bounded operator is Lipschitz continuous
\[ \|Af\|_Y \le \|A\|\, \|f\|_X \qquad(0.59) \]
and hence continuous. The converse is also true:
Theorem 0.23. An operator A is bounded if and only if it is continuous.
Proof. Suppose A is continuous but not bounded. Then there is a sequence of unit vectors $u_n$ such that $\|A u_n\| \ge n$. Then $f_n = \frac{1}{n} u_n$ converges to 0 but $\|A f_n\| \ge 1$ does not converge to 0.
Moreover, if A is bounded and densely deﬁned, it is no restriction to
assume that it is deﬁned on all of X.
Theorem 0.24. Let A ∈ L(X, Y ) and let Y be a Banach space. If D(A)
is dense, there is a unique (continuous) extension of A to X, which has the
same norm.
Proof. Since a bounded operator maps Cauchy sequences to Cauchy sequences, this extension can only be given by
\[ Af = \lim_{n\to\infty} A f_n, \qquad f_n \in D(A), \quad f \in X. \qquad(0.60) \]
To show that this definition is independent of the sequence $f_n \to f$, let $g_n \to f$ be a second sequence and observe
\[ \|A f_n - A g_n\| = \|A(f_n - g_n)\| \le \|A\|\, \|f_n - g_n\| \to 0. \qquad(0.61) \]
From continuity of vector addition and scalar multiplication it follows that our extension is linear. Finally, from continuity of the norm we conclude that the norm does not increase.
An operator in $L(X, \mathbb{C})$ is called a bounded linear functional and the space $X^* = L(X, \mathbb{C})$ is called the dual space of X. A sequence $f_n$ is said to converge weakly, $f_n \rightharpoonup f$, if $\ell(f_n) \to \ell(f)$ for every $\ell \in X^*$.
The Banach space of bounded linear operators L(X) even has a multiplication given by composition. Clearly this multiplication satisfies
\[ (A+B)C = AC + BC, \qquad A(B+C) = AB + AC, \qquad A, B, C \in L(X) \qquad(0.62) \]
and
\[ (AB)C = A(BC), \qquad \alpha(AB) = (\alpha A)B = A(\alpha B), \qquad \alpha \in \mathbb{C}. \qquad(0.63) \]
Moreover, it is easy to see that we have
\[ \|AB\| \le \|A\|\, \|B\|. \qquad(0.64) \]
However, note that our multiplication is not commutative (unless X is one dimensional). We even have an identity, the identity operator I, satisfying $\|I\| = 1$.

A Banach space together with a multiplication satisfying the above requirements is called a Banach algebra. In particular, note that (0.64) ensures that multiplication is continuous.
Problem 0.9. Show that the integral operator
\[ (Kf)(x) = \int_0^1 K(x, y) f(y)\,dy, \qquad(0.65) \]
where $K(x, y) \in C([0,1] \times [0,1])$, defined on $D(K) = C[0,1]$, is a bounded operator both in $X = C[0,1]$ (max norm) and $X = L^2_{cont}(0,1)$.
Problem 0.10. Show that the differential operator $A = \frac{d}{dx}$ defined on $D(A) = C^1[0,1] \subset C[0,1]$ is an unbounded operator.
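The contrast between Problems 0.9 and 0.10 can be seen numerically (a NumPy sketch, not part of the text; the kernel $K(x,y) = e^{-|x-y|}$ is an arbitrary illustrative choice): the ratios $\|Af\|_\infty/\|f\|_\infty$ blow up along the unit vectors $f_n(x) = \sin(n\pi x)$ for the differential operator, but stay below $\sup_x \int_0^1 |K(x,y)|\,dy$ for the integral operator.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 2001)
dx = xs[1] - xs[0]

# Problem 0.10: A = d/dx is unbounded. ||f_n||_inf = 1 but ||A f_n||_inf = n*pi.
deriv_ratios = []
for n in (1, 10, 100):
    f = np.sin(n * np.pi * xs)
    Af = n * np.pi * np.cos(n * np.pi * xs)          # exact derivative of f_n
    deriv_ratios.append(np.max(np.abs(Af)) / np.max(np.abs(f)))

# Problem 0.9: the integral operator (0.65) is bounded, illustrated with the
# (assumed, not from the text) kernel K(x,y) = exp(-|x-y|).
K = np.exp(-np.abs(xs[:, None] - xs[None, :]))
bound = np.max(K.sum(axis=1) * dx)                   # sup_x \int_0^1 |K(x,y)| dy
int_ratios = []
for n in (1, 10, 100):
    f = np.sin(n * np.pi * xs)
    Kf = (K * f[None, :]).sum(axis=1) * dx           # Riemann sum for (Kf)(x)
    int_ratios.append(np.max(np.abs(Kf)) / np.max(np.abs(f)))
```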
Problem 0.11. Show that $\|AB\| \le \|A\|\,\|B\|$ for every $A, B \in L(X)$.

Problem 0.12. Show that the multiplication in a Banach algebra X is continuous: $x_n \to x$ and $y_n \to y$ implies $x_n y_n \to x y$.
0.6. Lebesgue $L^p$ spaces

We fix some measure space $(X, \Sigma, \mu)$ and define the $L^p$ norm by
\[ \|f\|_p = \Big( \int_X |f|^p\,d\mu \Big)^{1/p}, \qquad 1 \le p, \qquad(0.66) \]
and denote by $\mathcal{L}^p(X, d\mu)$ the set of all complex valued measurable functions for which $\|f\|_p$ is finite. First of all note that $\mathcal{L}^p(X, d\mu)$ is a linear space, since $|f+g|^p \le 2^p \max(|f|, |g|)^p \le 2^p \max(|f|^p, |g|^p) \le 2^p (|f|^p + |g|^p)$. Of course our hope is that $\mathcal{L}^p(X, d\mu)$ is a Banach space. However, there is a small technical problem (recall that a property is said to hold almost everywhere if the set where it fails to hold is contained in a set of measure zero):
Lemma 0.25. Let f be measurable, then
\[ \int_X |f|^p\,d\mu = 0 \qquad(0.67) \]
if and only if $f(x) = 0$ almost everywhere with respect to µ.
Proof. Observe that we have $A = \{x \mid f(x) \ne 0\} = \bigcup_n A_n$, where $A_n = \{x \mid |f(x)| > \frac{1}{n}\}$. If $\int |f|^p\,d\mu = 0$ we must have $\mu(A_n) = 0$ for every n and hence $\mu(A) = \lim_{n\to\infty} \mu(A_n) = 0$. The converse is obvious.
Note that the proof also shows that if f is not 0 almost everywhere, there is an $\varepsilon > 0$ such that $\mu(\{x \mid |f(x)| \ge \varepsilon\}) > 0$.
Example. Let λ be the Lebesgue measure on $\mathbb{R}$. Then the characteristic function of the rationals $\chi_{\mathbb{Q}}$ is zero a.e. (with respect to λ). Let Θ be the Dirac measure centered at 0, then $f(x) = 0$ a.e. (with respect to Θ) if and only if $f(0) = 0$.
Thus $\|f\|_p = 0$ only implies $f(x) = 0$ for almost every x, but not for all! Hence $\|\cdot\|_p$ is not a norm on $\mathcal{L}^p(X, d\mu)$. The way out of this misery is to identify functions which are equal almost everywhere: Let
\[ \mathcal{N}(X, d\mu) = \{f \mid f(x) = 0\ \mu\text{-almost everywhere}\}. \qquad(0.68) \]
Then $\mathcal{N}(X, d\mu)$ is a linear subspace of $\mathcal{L}^p(X, d\mu)$ and we can consider the quotient space
\[ L^p(X, d\mu) = \mathcal{L}^p(X, d\mu)/\mathcal{N}(X, d\mu). \qquad(0.69) \]
If dµ is the Lebesgue measure on $X \subseteq \mathbb{R}^n$ we simply write $L^p(X)$. Observe that $\|f\|_p$ is well defined on $L^p(X, d\mu)$.
Even though the elements of $L^p(X, d\mu)$ are strictly speaking equivalence classes of functions, we will still call them functions for notational convenience. However, note that for $f \in L^p(X, d\mu)$ the value $f(x)$ is not well defined (unless there is a continuous representative and different continuous functions are in different equivalence classes, e.g., in the case of Lebesgue measure).

With this modification we are back in business since $L^p(X, d\mu)$ turns out to be a Banach space. We will show this in the following sections.
But before that let us also define $L^\infty(X, d\mu)$. It should be the set of bounded measurable functions B(X) together with the sup norm. The only problem is that if we want to identify functions equal almost everywhere, the supremum is no longer independent of the equivalence class. The solution is the essential supremum
\[ \|f\|_\infty = \inf\{C \mid \mu(\{x \mid |f(x)| > C\}) = 0\}. \qquad(0.70) \]
That is, C is an essential bound if $|f(x)| \le C$ almost everywhere and the essential supremum is the infimum over all essential bounds.
Example. If λ is the Lebesgue measure, then the essential sup of $\chi_{\mathbb{Q}}$ with respect to λ is 0. If Θ is the Dirac measure centered at 0, then the essential sup of $\chi_{\mathbb{Q}}$ with respect to Θ is 1 (since $\chi_{\mathbb{Q}}(0) = 1$, and $x = 0$ is the only point which counts for Θ).
As before we set
\[ L^\infty(X, d\mu) = B(X)/\mathcal{N}(X, d\mu) \qquad(0.71) \]
and observe that $\|f\|_\infty$ is independent of the equivalence class.

If you wonder where the ∞ comes from, have a look at Problem 0.13.
As a preparation for proving that $L^p$ is a Banach space, we will need Hölder's inequality, which plays a central role in the theory of $L^p$ spaces. In particular, it will imply Minkowski's inequality, which is just the triangle inequality for $L^p$.
Theorem 0.26 (Hölder's inequality). Let p and q be dual indices, that is,
\[ \frac{1}{p} + \frac{1}{q} = 1 \qquad(0.72) \]
with $1 \le p \le \infty$. If $f \in L^p(X, d\mu)$ and $g \in L^q(X, d\mu)$, then $fg \in L^1(X, d\mu)$ and
\[ \|f g\|_1 \le \|f\|_p\, \|g\|_q. \qquad(0.73) \]
Proof. The case $p = 1$, $q = \infty$ (respectively $p = \infty$, $q = 1$) follows directly from the properties of the integral and hence it remains to consider $1 < p, q < \infty$.

First of all it is no restriction to assume $\|f\|_p = \|g\|_q = 1$. Then, using the elementary inequality (Problem 0.14)
\[ a^{1/p}\, b^{1/q} \le \frac{1}{p} a + \frac{1}{q} b, \qquad a, b \ge 0, \qquad(0.74) \]
with $a = |f|^p$ and $b = |g|^q$ and integrating over X gives
\[ \int_X |f g|\,d\mu \le \frac{1}{p} \int_X |f|^p\,d\mu + \frac{1}{q} \int_X |g|^q\,d\mu = 1 \qquad(0.75) \]
and finishes the proof.
As a consequence we also get

Theorem 0.27 (Minkowski's inequality). Let $f, g \in L^p(X, d\mu)$, then
\[ \|f+g\|_p \le \|f\|_p + \|g\|_p. \qquad(0.76) \]
Proof. Since the cases $p = 1, \infty$ are straightforward, we only consider $1 < p < \infty$. Using $|f+g|^p \le |f|\,|f+g|^{p-1} + |g|\,|f+g|^{p-1}$, we obtain from Hölder's inequality (note $(p-1)q = p$)
\[ \|f+g\|_p^p \le \|f\|_p\, \|(f+g)^{p-1}\|_q + \|g\|_p\, \|(f+g)^{p-1}\|_q = (\|f\|_p + \|g\|_p)\, \|f+g\|_p^{p-1}. \qquad(0.77) \]
Dividing by $\|f+g\|_p^{p-1}$ (the case $\|f+g\|_p = 0$ being trivial) finishes the proof.
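On a finite set with counting measure the integrals in (0.73) and (0.76) become finite sums, so both inequalities can be spot-checked directly (a NumPy sketch, not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
# Measure space: X = {0, ..., 9} with counting measure.
f = rng.standard_normal(10)
g = rng.standard_normal(10)

def lp(h, p):
    """||h||_p for the counting measure, as in (0.66)."""
    return np.sum(np.abs(h) ** p) ** (1.0 / p)

p = 3.0
q = p / (p - 1.0)                       # dual index: 1/p + 1/q = 1, see (0.72)
holder_lhs = np.sum(np.abs(f * g))      # ||f g||_1
holder_rhs = lp(f, p) * lp(g, q)
minkowski_lhs = lp(f + g, p)
minkowski_rhs = lp(f, p) + lp(g, p)
```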
This shows that $L^p(X, d\mu)$ is a normed linear space. Finally it remains to show that $L^p(X, d\mu)$ is complete.

Theorem 0.28. The space $L^p(X, d\mu)$ is a Banach space.
Proof. Suppose $f_n$ is a Cauchy sequence. It suffices to show that some subsequence converges (show this). Hence we can drop some terms such that
\[ \|f_{n+1} - f_n\|_p \le \frac{1}{2^n}. \qquad(0.78) \]
Now consider $g_n = f_n - f_{n-1}$ (set $f_0 = 0$). Then
\[ G(x) = \sum_{k=1}^{\infty} |g_k(x)| \qquad(0.79) \]
is in $L^p$. This follows from
\[ \Big\| \sum_{k=1}^{n} |g_k| \Big\|_p \le \sum_{k=1}^{n} \|g_k\|_p \le \|f_1\|_p + 1 \qquad(0.80) \]
using the monotone convergence theorem. In particular, $G(x) < \infty$ almost everywhere and the sum
\[ \sum_{n=1}^{\infty} g_n(x) = \lim_{n\to\infty} f_n(x) \qquad(0.81) \]
is absolutely convergent for those x. Now let $f(x)$ be this limit. Since $|f(x) - f_n(x)|^p$ converges to zero almost everywhere and $|f(x) - f_n(x)|^p \le 2^p G(x)^p \in L^1$, dominated convergence shows $\|f - f_n\|_p \to 0$.
In particular, in the proof of the last theorem we have seen:

Corollary 0.29. If $\|f_n - f\|_p \to 0$, then there is a subsequence which converges pointwise almost everywhere.
It even turns out that $L^p$ is separable.

Lemma 0.30. Suppose X is a second countable topological space (i.e., it has a countable basis) and µ is a regular Borel measure. Then $L^p(X, d\mu)$, $1 \le p < \infty$, is separable.
Proof. The set of all characteristic functions $\chi_A(x)$ with $A \in \Sigma$ and $\mu(A) < \infty$ is total by construction of the integral. Now our strategy is as follows: Using outer regularity we can restrict A to open sets and using the existence of a countable base, we can restrict A to open sets from this base.

Fix A. By outer regularity, there is a decreasing sequence of open sets $O_n$ such that $\mu(O_n) \to \mu(A)$. Since $\mu(A) < \infty$ it is no restriction to assume $\mu(O_n) < \infty$, and thus $\mu(O_n \setminus A) = \mu(O_n) - \mu(A) \to 0$. Now dominated convergence implies $\|\chi_A - \chi_{O_n}\|_p \to 0$. Thus the set of all characteristic functions $\chi_O(x)$ with O open and $\mu(O) < \infty$ is total. Finally let $\mathcal{B}$ be a countable basis for the topology. Then every open set O can be written as $O = \bigcup_{j=1}^{\infty} \tilde O_j$ with $\tilde O_j \in \mathcal{B}$. Moreover, by considering the set of all finite unions of elements from $\mathcal{B}$, it is no restriction to assume $\bigcup_{j=1}^{n} \tilde O_j \in \mathcal{B}$. Hence there is an increasing sequence $\tilde O_n \nearrow O$ with $\tilde O_n \in \mathcal{B}$. By monotone convergence, $\|\chi_O - \chi_{\tilde O_n}\|_p \to 0$ and hence the set of all characteristic functions $\chi_{\tilde O}$ with $\tilde O \in \mathcal{B}$ is total.
To finish this chapter, let us show that continuous functions are dense in $L^p$.

Theorem 0.31. Let X be a locally compact metric space and let µ be a σ-finite regular Borel measure. Then the set $C_c(X)$ of continuous functions with compact support is dense in $L^p(X, d\mu)$, $1 \le p < \infty$.
Proof. As in the previous proof the set of all characteristic functions $\chi_K(x)$ with K compact is total (using inner regularity). Hence it suffices to show that $\chi_K(x)$ can be approximated by continuous functions. By outer regularity there is an open set $O \supset K$ such that $\mu(O \setminus K) \le \varepsilon$. By Urysohn's lemma (Lemma 0.11) there is a continuous function $f_\varepsilon$ which is one on K and 0 outside O. Since
\[ \int_X |\chi_K - f_\varepsilon|^p\,d\mu = \int_{O\setminus K} |f_\varepsilon|^p\,d\mu \le \mu(O\setminus K) \le \varepsilon \qquad(0.82) \]
we have $\|f_\varepsilon - \chi_K\|_p \to 0$ and we are done.
If X is some subset of $\mathbb{R}^n$ we can do even better.

A nonnegative function $u \in C_c^\infty(\mathbb{R}^n)$ is called a mollifier if
\[ \int_{\mathbb{R}^n} u(x)\,dx = 1. \qquad(0.83) \]
The standard mollifier is $u(x) = \exp(\frac{1}{|x|^2-1})$ for $|x| < 1$ and $u(x) = 0$ otherwise.

If we scale a mollifier according to $u_k(x) = k^n u(k\,x)$ such that its mass is preserved ($\|u_k\|_1 = 1$) and it concentrates more and more around the origin, we have the following result (Problem 0.17):
Lemma 0.32. Let u be a mollifier in $\mathbb{R}^n$ and set $u_k(x) = k^n u(k\,x)$. Then for any (uniformly) continuous function $f : \mathbb{R}^n \to \mathbb{C}$ we have that
\[ f_k(x) = \int_{\mathbb{R}^n} u_k(x-y) f(y)\,dy \qquad(0.84) \]
is in $C^\infty(\mathbb{R}^n)$ and converges to f (uniformly).
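A one dimensional numerical sketch of Lemma 0.32 (using NumPy, not part of the text): mollifying $f(x) = |x|$, which is uniformly continuous but not differentiable at 0, with the scaled kernels $u_k$ built from the standard mollifier.

```python
import numpy as np

def u(t):
    """Unnormalized standard mollifier profile exp(1/(t^2 - 1)) on (-1, 1)."""
    t = np.clip(t, -0.9999, 0.9999)          # avoid 1/0; the clipped tail underflows to 0
    return np.exp(1.0 / (t**2 - 1.0))

ts = np.linspace(-1.0, 1.0, 4001)
c = 1.0 / (np.sum(u(ts)) * (ts[1] - ts[0]))  # normalization so that (0.83) holds

def mollified(f, k, grid):
    """f_k(x) = \\int u_k(x - y) f(y) dy with u_k(t) = k * c * u(k t), n = 1, (0.84)."""
    ys = np.linspace(-2.0, 2.0, 8001)
    dy = ys[1] - ys[0]
    fy = f(ys)
    return np.array([np.sum(k * c * u(k * (x - ys)) * fy) * dy for x in grid])

f = np.abs                                   # uniformly continuous, kink at 0
grid = np.linspace(-1.0, 1.0, 201)
errs = {k: np.max(np.abs(mollified(f, k, grid) - f(grid))) for k in (2, 20)}
```

The convolution smooths out the kink, and the uniform error is of order 1/k.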
Now we are ready to prove

Theorem 0.33. If $X \subseteq \mathbb{R}^n$ and µ is a Borel measure, then the set $C_c^\infty(X)$ of all smooth functions with compact support is dense in $L^p(X, d\mu)$, $1 \le p < \infty$.

Proof. By our previous result it suffices to show that any continuous function $f(x)$ with compact support can be approximated by smooth ones. By setting $f(x) = 0$ for $x \notin X$, it is no restriction to assume $X = \mathbb{R}^n$. Now choose a mollifier u and observe that $f_k$ has compact support (since f has). Moreover, since f has compact support it is uniformly continuous and $f_k \to f$ uniformly. But this implies $f_k \to f$ in $L^p$.
Problem 0.13. Suppose $\mu(X) < \infty$. Show that
\[ \lim_{p\to\infty} \|f\|_p = \|f\|_\infty \qquad(0.85) \]
for any bounded measurable function.

Problem 0.14. Prove (0.74). (Hint: Show that $f(x) = (1-t) + t x - x^t$, $x > 0$, $0 < t < 1$, satisfies $f(a/b) \ge 0 = f(1)$.)
Problem 0.15. Show the following generalization of Hölder's inequality
\[ \|f g\|_r \le \|f\|_p\, \|g\|_q, \qquad(0.86) \]
where $\frac{1}{p} + \frac{1}{q} = \frac{1}{r}$.
Problem 0.16 (Lyapunov inequality). Let $0 < \theta < 1$. Show that if $f \in L^{p_1} \cap L^{p_2}$, then $f \in L^p$ and
\[ \|f\|_p \le \|f\|_{p_1}^{\theta}\, \|f\|_{p_2}^{1-\theta}, \qquad(0.87) \]
where $\frac{1}{p} = \frac{\theta}{p_1} + \frac{1-\theta}{p_2}$.
Problem 0.17. Prove Lemma 0.32. (Hint: To show that $f_k$ is smooth use Problems A.7 and A.8.)
Problem 0.18. Construct a function $f \in L^p(0, 1)$ which has a pole at every rational number in $[0, 1]$. (Hint: Start with the function $f_0(x) = |x|^{-\alpha}$ which has a single pole at 0, then $f_j(x) = f_0(x - x_j)$ has a pole at $x_j$.)
0.7. Appendix: The uniform boundedness
principle
Recall that the interior of a set is the largest open subset (that is, the union
of all open subsets). A set is called nowhere dense if its closure has empty
interior. The key to several important theorems about Banach spaces is the
observation that a Banach space cannot be the countable union of nowhere
dense sets.
Theorem 0.34 (Baire category theorem). Let X be a complete metric space,
then X cannot be the countable union of nowhere dense sets.
Proof. Suppose $X = \bigcup_{n=1}^{\infty} X_n$. We can assume that the sets $X_n$ are closed and none of them contains a ball, that is, $X \setminus X_n$ is open and nonempty for every n. We will construct a Cauchy sequence $x_n$ which stays away from all $X_n$.
Since $X \setminus X_1$ is open and nonempty there is a ball $B_{r_1}(x_1) \subseteq X \setminus X_1$. Reducing $r_1$ a little, we can even assume $\overline{B_{r_1}(x_1)} \subseteq X \setminus X_1$. Moreover, since $X_2$ cannot contain $B_{r_1}(x_1)$, there is some $x_2 \in B_{r_1}(x_1)$ that is not in $X_2$. Since $B_{r_1}(x_1) \cap (X \setminus X_2)$ is open there is a closed ball $B_{r_2}(x_2) \subseteq B_{r_1}(x_1) \cap (X \setminus X_2)$. Proceeding by induction we obtain a sequence of balls such that
\[ B_{r_n}(x_n) \subseteq B_{r_{n-1}}(x_{n-1}) \cap (X \setminus X_n). \qquad(0.88) \]
Now observe that in every step we can choose $r_n$ as small as we please, hence without loss of generality $r_n \to 0$. Since by construction $x_n \in B_{r_N}(x_N)$ for $n \ge N$, we conclude that $x_n$ is Cauchy and converges to some point $x \in X$. But $x \in B_{r_n}(x_n) \subseteq X \setminus X_n$ for every n, contradicting our assumption that the $X_n$ cover X.
(Sets which can be written as countable union of nowhere dense sets are
called of ﬁrst category. All other sets are second category. Hence the name
category theorem.)
In other words, if $X_n \subseteq X$ is a sequence of closed subsets which cover X, at least one $X_n$ contains a ball of radius $\varepsilon > 0$.
Now we come to the first important consequence, the uniform boundedness principle.

Theorem 0.35 (Banach–Steinhaus). Let X be a Banach space and Y some normed linear space. Let $\{A_\alpha\} \subseteq L(X, Y)$ be a family of bounded operators. Suppose $\|A_\alpha x\| \le C(x)$ is bounded for fixed $x \in X$. Then $\|A_\alpha\| \le C$ is uniformly bounded.
Proof. Let
\[ X_n = \{x \mid \|A_\alpha x\| \le n \text{ for all } \alpha\} = \bigcap_\alpha \{x \mid \|A_\alpha x\| \le n\}, \qquad(0.89) \]
then $\bigcup_n X_n = X$ by assumption. Moreover, by continuity of $A_\alpha$ and the norm, each $X_n$ is an intersection of closed sets and hence closed. By Baire's theorem at least one contains a ball of positive radius: $B_\varepsilon(x_0) \subset X_n$. Now observe
\[ \|A_\alpha y\| \le \|A_\alpha (y + x_0)\| + \|A_\alpha x_0\| \le n + \|A_\alpha x_0\| \qquad(0.90) \]
for $\|y\| \le \varepsilon$. Setting $y = \varepsilon \frac{x}{\|x\|}$ we obtain
\[ \|A_\alpha x\| \le \frac{n + C(x_0)}{\varepsilon}\, \|x\| \qquad(0.91) \]
for any x.
Part 1. Mathematical Foundations of Quantum Mechanics

Chapter 1. Hilbert spaces

The phase space in classical mechanics is the Euclidean space $\mathbb{R}^{2n}$ (for the n position and n momentum coordinates). In quantum mechanics the phase space is always a Hilbert space H. Hence the geometry of Hilbert spaces stands at the outset of our investigations.
1.1. Hilbert spaces
Suppose H is a vector space. A map $\langle\cdot,\cdot\rangle : H \times H \to \mathbb{C}$ is called a skew linear form if it is conjugate linear in the first and linear in the second argument. A positive definite skew linear form is called an inner product or scalar product. Associated with every scalar product is a norm
\[ \|\psi\| = \sqrt{\langle \psi, \psi\rangle}. \qquad(1.1) \]
The triangle inequality follows from the Cauchy–Schwarz–Bunjakowski inequality:
\[ |\langle \psi, \varphi\rangle| \le \|\psi\|\,\|\varphi\| \qquad(1.2) \]
with equality if and only if ψ and ϕ are parallel.

If H is complete with respect to the above norm, it is called a Hilbert space. It is no restriction to assume that H is complete since one can easily replace it by its completion.
Example. The space $L^2(M, d\mu)$ is a Hilbert space with scalar product given by
\[ \langle f, g\rangle = \int_M f(x)^* g(x)\,d\mu(x). \qquad(1.3) \]
Similarly, the set of all square summable sequences $\ell^2(\mathbb{N})$ is a Hilbert space with scalar product
\[ \langle f, g\rangle = \sum_{j\in\mathbb{N}} f_j^* g_j. \qquad(1.4) \]
(Note that the second example is a special case of the first one; take $M = \mathbb{R}$ and µ a sum of Dirac measures.)
A vector $\psi \in H$ is called normalized or a unit vector if $\|\psi\| = 1$. Two vectors $\psi, \varphi \in H$ are called orthogonal or perpendicular ($\psi \perp \varphi$) if $\langle\psi, \varphi\rangle = 0$ and parallel if one is a multiple of the other. If ψ and ϕ are orthogonal we have the Pythagorean theorem:
\[ \|\psi+\varphi\|^2 = \|\psi\|^2 + \|\varphi\|^2, \qquad \psi \perp \varphi, \qquad(1.5) \]
which is straightforward to check.

Suppose ϕ is a unit vector, then the projection of ψ in the direction of ϕ is given by
\[ \psi_\parallel = \langle\varphi, \psi\rangle\varphi \qquad(1.6) \]
and $\psi_\perp$ defined via
\[ \psi_\perp = \psi - \langle\varphi, \psi\rangle\varphi \qquad(1.7) \]
is perpendicular to ϕ.
These results can also be generalized to more than one vector. A set of vectors $\{\varphi_j\}$ is called an orthonormal set if $\langle\varphi_j, \varphi_k\rangle = 0$ for $j \ne k$ and $\langle\varphi_j, \varphi_j\rangle = 1$.
Lemma 1.1. Suppose $\{\varphi_j\}_{j=0}^n$ is an orthonormal set. Then every $\psi \in H$ can be written as
\[ \psi = \psi_\parallel + \psi_\perp, \qquad \psi_\parallel = \sum_{j=0}^{n} \langle\varphi_j, \psi\rangle\varphi_j, \qquad(1.8) \]
where $\psi_\parallel$ and $\psi_\perp$ are orthogonal. Moreover, $\langle\varphi_j, \psi_\perp\rangle = 0$ for all $0 \le j \le n$. In particular,
\[ \|\psi\|^2 = \sum_{j=0}^{n} |\langle\varphi_j, \psi\rangle|^2 + \|\psi_\perp\|^2. \qquad(1.9) \]
Moreover, every $\hat\psi$ in the span of $\{\varphi_j\}_{j=0}^n$ satisfies
\[ \|\psi - \hat\psi\| \ge \|\psi_\perp\| \qquad(1.10) \]
with equality holding if and only if $\hat\psi = \psi_\parallel$. In other words, $\psi_\parallel$ is uniquely characterized as the vector in the span of $\{\varphi_j\}_{j=0}^n$ being closest to ψ.
Proof. A straightforward calculation shows $\langle\varphi_j, \psi - \psi_\parallel\rangle = 0$ and hence $\psi_\parallel$ and $\psi_\perp = \psi - \psi_\parallel$ are orthogonal. The formula for the norm follows by applying (1.5) iteratively.

Now, fix a vector
\[ \hat\psi = \sum_{j=0}^{n} c_j \varphi_j \qquad(1.11) \]
in the span of $\{\varphi_j\}_{j=0}^n$. Then one computes
\[ \|\psi - \hat\psi\|^2 = \|\psi_\parallel + \psi_\perp - \hat\psi\|^2 = \|\psi_\perp\|^2 + \|\psi_\parallel - \hat\psi\|^2 = \|\psi_\perp\|^2 + \sum_{j=0}^{n} |c_j - \langle\varphi_j, \psi\rangle|^2 \qquad(1.12) \]
from which the last claim follows.
From (1.9) we obtain Bessel's inequality
\[ \sum_{j=0}^{n} |\langle\varphi_j, \psi\rangle|^2 \le \|\psi\|^2 \qquad(1.13) \]
with equality holding if and only if ψ lies in the span of $\{\varphi_j\}_{j=0}^n$.
Recall that a scalar product can be recovered from its norm by virtue of the polarization identity
\[ \langle\varphi, \psi\rangle = \frac{1}{4}\Big( \|\varphi+\psi\|^2 - \|\varphi-\psi\|^2 + i\|\varphi - i\psi\|^2 - i\|\varphi + i\psi\|^2 \Big). \qquad(1.14) \]
A bijective operator $U \in L(H_1, H_2)$ is called unitary if U preserves scalar products:
\[ \langle U\varphi, U\psi\rangle_2 = \langle\varphi, \psi\rangle_1, \qquad \varphi, \psi \in H_1. \qquad(1.15) \]
By the polarization identity this is the case if and only if U preserves norms: $\|U\psi\|_2 = \|\psi\|_1$ for all $\psi \in H_1$. The two Hilbert spaces $H_1$ and $H_2$ are called unitarily equivalent in this case.
Problem 1.1. The operator $S : \ell^2(\mathbb{N}) \to \ell^2(\mathbb{N})$, $(a_1, a_2, a_3, \dots) \mapsto (0, a_1, a_2, \dots)$, clearly satisfies $\|Sa\| = \|a\|$. Is it unitary?
1.2. Orthonormal bases
Of course, since we cannot assume H to be a finite dimensional vector space, we need to generalize Lemma 1.1 to arbitrary orthonormal sets $\{\varphi_j\}_{j\in J}$. We start by assuming that J is countable. Then Bessel's inequality (1.13) shows that
\[ \sum_{j\in J} |\langle\varphi_j, \psi\rangle|^2 \qquad(1.16) \]
converges absolutely. Moreover, for any finite subset $K \subset J$ we have
\[ \Big\| \sum_{j\in K} \langle\varphi_j, \psi\rangle\varphi_j \Big\|^2 = \sum_{j\in K} |\langle\varphi_j, \psi\rangle|^2 \qquad(1.17) \]
by the Pythagorean theorem and thus $\sum_{j\in J} \langle\varphi_j, \psi\rangle\varphi_j$ is Cauchy if and only if $\sum_{j\in J} |\langle\varphi_j, \psi\rangle|^2$ is. Now let J be arbitrary. Again, Bessel's inequality shows that for any given $\varepsilon > 0$ there are at most finitely many j for which $|\langle\varphi_j, \psi\rangle| \ge \varepsilon$. Hence there are at most countably many j for which $|\langle\varphi_j, \psi\rangle| > 0$. Thus it follows that
\[ \sum_{j\in J} |\langle\varphi_j, \psi\rangle|^2 \qquad(1.18) \]
is well-defined and so is
\[ \sum_{j\in J} \langle\varphi_j, \psi\rangle\varphi_j. \qquad(1.19) \]
In particular, by continuity of the scalar product we see that Lemma 1.1
holds for arbitrary orthonormal sets without modiﬁcations.
Theorem 1.2. Suppose $\{\varphi_j\}_{j\in J}$ is an orthonormal set. Then every $\psi \in H$ can be written as
\[ \psi = \psi_\parallel + \psi_\perp, \qquad \psi_\parallel = \sum_{j\in J} \langle\varphi_j, \psi\rangle\varphi_j, \qquad(1.20) \]
where $\psi_\parallel$ and $\psi_\perp$ are orthogonal. Moreover, $\langle\varphi_j, \psi_\perp\rangle = 0$ for all $j \in J$. In particular,
\[ \|\psi\|^2 = \sum_{j\in J} |\langle\varphi_j, \psi\rangle|^2 + \|\psi_\perp\|^2. \qquad(1.21) \]
Moreover, every $\hat\psi$ in the span of $\{\varphi_j\}_{j\in J}$ satisfies
\[ \|\psi - \hat\psi\| \ge \|\psi_\perp\| \qquad(1.22) \]
with equality holding if and only if $\hat\psi = \psi_\parallel$. In other words, $\psi_\parallel$ is uniquely characterized as the vector in the span of $\{\varphi_j\}_{j\in J}$ being closest to ψ.

Note that from Bessel's inequality (which of course still holds) it follows that the map $\psi \to \psi_\parallel$ is continuous.
An orthonormal set which is not a proper subset of any other orthonormal set is called an orthonormal basis due to the following result:

Theorem 1.3. For an orthonormal set $\{\varphi_j\}_{j\in J}$ the following conditions are equivalent:

(i) $\{\varphi_j\}_{j\in J}$ is a maximal orthonormal set.

(ii) For every vector $\psi \in H$ we have
\[ \psi = \sum_{j\in J} \langle\varphi_j, \psi\rangle\varphi_j. \qquad(1.23) \]

(iii) For every vector $\psi \in H$ we have
\[ \|\psi\|^2 = \sum_{j\in J} |\langle\varphi_j, \psi\rangle|^2. \qquad(1.24) \]

(iv) $\langle\varphi_j, \psi\rangle = 0$ for all $j \in J$ implies $\psi = 0$.
Proof. We will use the notation from Theorem 1.2.

(i) ⇒ (ii): If $\psi_\perp \ne 0$, then we can normalize $\psi_\perp$ to obtain a unit vector $\tilde\psi_\perp$ which is orthogonal to all vectors $\varphi_j$. But then $\{\varphi_j\}_{j\in J} \cup \{\tilde\psi_\perp\}$ would be a larger orthonormal set, contradicting the maximality of $\{\varphi_j\}_{j\in J}$.

(ii) ⇒ (iii): Follows since (ii) implies $\psi_\perp = 0$.

(iii) ⇒ (iv): If $\langle\psi, \varphi_j\rangle = 0$ for all $j \in J$ we conclude $\|\psi\|^2 = 0$ and hence $\psi = 0$.

(iv) ⇒ (i): If $\{\varphi_j\}_{j\in J}$ were not maximal, there would be a unit vector ϕ such that $\{\varphi_j\}_{j\in J} \cup \{\varphi\}$ is a larger orthonormal set. But $\langle\varphi_j, \varphi\rangle = 0$ for all $j \in J$ implies $\varphi = 0$ by (iv), a contradiction.

Since $\psi \to \psi_\parallel$ is continuous, it suffices to check conditions (ii) and (iii) on a dense set.
Example. The set of functions
$$\varphi_n(x)=\frac{1}{\sqrt{2\pi}}\,\mathrm{e}^{\mathrm{i}nx},\qquad n\in\mathbb{Z}, \tag{1.25}$$
forms an orthonormal basis for $H=L^2(0,2\pi)$. The corresponding orthogonal expansion is just the ordinary Fourier series. (Problem 1.17)
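As a quick numerical sanity check (not part of the text, and the function name is my own), one can approximate the inner products $\langle\varphi_m,\varphi_n\rangle$ on a uniform grid over $[0,2\pi)$; for trigonometric polynomials this Riemann sum is exact up to rounding.

```python
import numpy as np

def fourier_inner(m, n, num=4096):
    """Approximate <phi_m, phi_n> for phi_n(x) = e^{inx}/sqrt(2 pi) on [0, 2 pi)
    by a Riemann sum on a uniform grid (exact for trigonometric polynomials)."""
    x = 2 * np.pi * np.arange(num) / num
    phi_m = np.exp(1j * m * x) / np.sqrt(2 * np.pi)
    phi_n = np.exp(1j * n * x) / np.sqrt(2 * np.pi)
    # <f, g> = int conj(f) g dx, discretized with step 2 pi / num
    return np.sum(np.conj(phi_m) * phi_n) * (2 * np.pi / num)
```

For instance, `fourier_inner(3, 3)` should be close to 1 and `fourier_inner(2, 5)` close to 0, reflecting orthonormality.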
A Hilbert space is separable if and only if there is a countable orthonormal basis. In fact, if $H$ is separable, then there exists a countable total set $\{\psi_j\}_{j=0}^N$. After throwing away some vectors we can assume that $\psi_{n+1}$ cannot be expressed as a linear combination of the vectors $\psi_0,\dots,\psi_n$. Now we can construct an orthonormal basis as follows: We begin by normalizing $\psi_0$:
$$\varphi_0=\frac{\psi_0}{\|\psi_0\|}. \tag{1.26}$$
Next we take $\psi_1$, remove the component parallel to $\varphi_0$, and normalize again:
$$\varphi_1=\frac{\psi_1-\langle\varphi_0,\psi_1\rangle\varphi_0}{\|\psi_1-\langle\varphi_0,\psi_1\rangle\varphi_0\|}. \tag{1.27}$$
Proceeding like this, we define recursively
$$\varphi_n=\frac{\psi_n-\sum_{j=0}^{n-1}\langle\varphi_j,\psi_n\rangle\varphi_j}{\|\psi_n-\sum_{j=0}^{n-1}\langle\varphi_j,\psi_n\rangle\varphi_j\|}. \tag{1.28}$$
This procedure is known as Gram–Schmidt orthogonalization. Hence we obtain an orthonormal set $\{\varphi_j\}_{j=0}^N$ such that $\operatorname{span}\{\varphi_j\}_{j=0}^n=\operatorname{span}\{\psi_j\}_{j=0}^n$ for any finite $n$ and thus also for $N$. Since $\{\psi_j\}_{j=0}^N$ is total, we infer that $\{\varphi_j\}_{j=0}^N$ is an orthonormal basis.
Example. In $L^2(-1,1)$ we can orthogonalize the polynomials $f_n(x)=x^n$. The resulting polynomials are, up to normalization, equal to the Legendre polynomials
$$P_0(x)=1,\quad P_1(x)=x,\quad P_2(x)=\frac{3x^2-1}{2},\ \dots \tag{1.29}$$
(which are normalized such that $P_n(1)=1$).
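One can reproduce this numerically by discretizing the $L^2(-1,1)$ inner product on a fine grid and running Gram–Schmidt on the monomials (a sketch with ad hoc names; the grid inner product only approximates the integral):

```python
import numpy as np

# Discretize L^2(-1, 1): <f, g> ~ sum f * g * dx on a fine grid.
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

def inner(f, g):
    return np.sum(f * g) * dx

# Gram-Schmidt on the monomials 1, x, x^2 as in (1.26)-(1.28).
ons = []
for f in [x**0, x**1, x**2]:
    v = f.copy()
    for phi in ons:
        v = v - inner(phi, v) * phi
    ons.append(v / np.sqrt(inner(v, v)))

# Rescale so the value at x = 1 equals 1, the Legendre normalization P_n(1) = 1.
P2 = ons[2] / ons[2][-1]
```

The rescaled third vector matches $(3x^2-1)/2$ up to discretization error.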
In fact, if there is one countable basis, then it follows that any other basis is countable as well.
Theorem 1.4. If $H$ is separable, then every orthonormal basis is countable.

Proof. We know that there is at least one countable orthonormal basis $\{\varphi_j\}_{j\in J}$. Now let $\{\phi_k\}_{k\in K}$ be a second basis and consider the set $K_j=\{k\in K\,|\,\langle\phi_k,\varphi_j\rangle\ne 0\}$. Since these are the expansion coefficients of $\varphi_j$ with respect to $\{\phi_k\}_{k\in K}$, this set is countable. Hence the set $\tilde K=\bigcup_{j\in J}K_j$ is countable as well. But $k\in K\setminus\tilde K$ implies $\phi_k=0$ and hence $\tilde K=K$. □
We will assume all Hilbert spaces to be separable.

In particular, it can be shown that $L^2(M,d\mu)$ is separable. Moreover, it turns out that, up to unitary equivalence, there is only one (separable) infinite dimensional Hilbert space:

Let $H$ be an infinite dimensional Hilbert space and let $\{\varphi_j\}_{j\in\mathbb{N}}$ be any orthonormal basis. Then the map $U:H\to\ell^2(\mathbb{N})$, $\psi\mapsto(\langle\varphi_j,\psi\rangle)_{j\in\mathbb{N}}$, is unitary (by Theorem 1.3 (iii)). In particular,

Theorem 1.5. Any separable infinite dimensional Hilbert space is unitarily equivalent to $\ell^2(\mathbb{N})$.

Let me remark that if $H$ is not separable, there still exists an orthonormal basis. However, the proof requires Zorn's lemma: The collection of all orthonormal sets in $H$ can be partially ordered by inclusion. Moreover, any linearly ordered chain has an upper bound (the union of all sets in the chain). Hence Zorn's lemma implies the existence of a maximal element, that is, an orthonormal basis.
1.3. The projection theorem and the Riesz lemma

Let $M\subseteq H$ be a subset. Then $M^\perp=\{\psi\,|\,\langle\varphi,\psi\rangle=0,\ \forall\varphi\in M\}$ is called the orthogonal complement of $M$. By continuity of the scalar product it follows that $M^\perp$ is a closed linear subspace and by linearity that $(\operatorname{span}(M))^\perp=M^\perp$. For example we have $H^\perp=\{0\}$, since any vector in $H^\perp$ must be in particular orthogonal to all vectors in some orthonormal basis.
Theorem 1.6 (projection theorem). Let $M$ be a closed linear subspace of a Hilbert space $H$. Then every $\psi\in H$ can be uniquely written as $\psi=\psi_\parallel+\psi_\perp$ with $\psi_\parallel\in M$ and $\psi_\perp\in M^\perp$. One writes
$$M\oplus M^\perp=H \tag{1.30}$$
in this situation.

Proof. Since $M$ is closed, it is a Hilbert space and has an orthonormal basis $\{\varphi_j\}_{j\in J}$. Hence the result follows from Theorem 1.2. □
In other words, to every $\psi\in H$ we can assign a unique vector $\psi_\parallel$ which is the vector in $M$ closest to $\psi$. The rest, $\psi-\psi_\parallel$, lies in $M^\perp$. The operator $P_M\psi=\psi_\parallel$ is called the orthogonal projection corresponding to $M$. Note that we have
$$P_M^2=P_M \qquad\text{and}\qquad \langle P_M\psi,\varphi\rangle=\langle\psi,P_M\varphi\rangle \tag{1.31}$$
since $\langle P_M\psi,\varphi\rangle=\langle\psi_\parallel,\varphi_\parallel\rangle=\langle\psi,P_M\varphi\rangle$. Clearly we have $P_{M^\perp}\psi=\psi-P_M\psi=\psi_\perp$.

Moreover, we see that the vectors in a closed subspace $M$ are precisely those which are orthogonal to all vectors in $M^\perp$, that is, $M^{\perp\perp}=M$. If $M$ is an arbitrary subset, we have at least
$$M^{\perp\perp}=\overline{\operatorname{span}(M)}. \tag{1.32}$$
Note that by $H^\perp=\{0\}$ we see that $M^\perp=\{0\}$ if and only if $M$ is dense.
Finally we turn to linear functionals, that is, to operators $\ell:H\to\mathbb{C}$. By the Cauchy–Schwarz inequality we know that $\ell_\varphi:\psi\mapsto\langle\varphi,\psi\rangle$ is a bounded linear functional (with norm $\|\varphi\|$). It turns out that in a Hilbert space every bounded linear functional can be written in this way.

Theorem 1.7 (Riesz lemma). Suppose $\ell$ is a bounded linear functional on a Hilbert space $H$. Then there is a vector $\varphi\in H$ such that $\ell(\psi)=\langle\varphi,\psi\rangle$ for all $\psi\in H$. In other words, a Hilbert space is equivalent to its own dual space, $H^*=H$.

Proof. If $\ell\equiv 0$ we can choose $\varphi=0$. Otherwise $\operatorname{Ker}(\ell)=\{\psi\,|\,\ell(\psi)=0\}$ is a proper subspace and we can find a unit vector $\tilde\varphi\in\operatorname{Ker}(\ell)^\perp$. For every $\psi\in H$ we have $\ell(\psi)\tilde\varphi-\ell(\tilde\varphi)\psi\in\operatorname{Ker}(\ell)$ and hence
$$0=\langle\tilde\varphi,\ell(\psi)\tilde\varphi-\ell(\tilde\varphi)\psi\rangle=\ell(\psi)-\ell(\tilde\varphi)\langle\tilde\varphi,\psi\rangle. \tag{1.33}$$
In other words, we can choose $\varphi=\ell(\tilde\varphi)^*\tilde\varphi$. □
The following easy consequence is left as an exercise.

Corollary 1.8. Suppose $B$ is a bounded skew linear form, that is,
$$|B(\psi,\varphi)|\le C\,\|\psi\|\,\|\varphi\|. \tag{1.34}$$
Then there is a unique bounded operator $A$ such that
$$B(\psi,\varphi)=\langle A\psi,\varphi\rangle. \tag{1.35}$$
Problem 1.2. Show that an orthogonal projection $P_M\ne 0$ has norm one.

Problem 1.3. Suppose $P_1$ and $P_2$ are orthogonal projections. Show that $P_1\le P_2$ (that is, $\langle\psi,P_1\psi\rangle\le\langle\psi,P_2\psi\rangle$) is equivalent to $\operatorname{Ran}(P_1)\subseteq\operatorname{Ran}(P_2)$.

Problem 1.4. Prove Corollary 1.8.

Problem 1.5. Let $\{\varphi_j\}$ be some orthonormal basis. Show that a bounded linear operator $A$ is uniquely determined by its matrix elements $A_{jk}=\langle\varphi_j,A\varphi_k\rangle$ with respect to this basis.

Problem 1.6. Show that $\mathcal{L}(H)$ is not separable if $H$ is infinite dimensional.

Problem 1.7. Show that $P:L^2(\mathbb{R})\to L^2(\mathbb{R})$, $f(x)\mapsto\frac{1}{2}(f(x)+f(-x))$, is a projection. Compute its range and kernel.
1.4. Orthogonal sums and tensor products

Given two Hilbert spaces $H_1$ and $H_2$, we define their orthogonal sum $H_1\oplus H_2$ to be the set of all pairs $(\psi_1,\psi_2)\in H_1\times H_2$ together with the scalar product
$$\langle(\varphi_1,\varphi_2),(\psi_1,\psi_2)\rangle=\langle\varphi_1,\psi_1\rangle_1+\langle\varphi_2,\psi_2\rangle_2. \tag{1.36}$$
It is left as an exercise to verify that $H_1\oplus H_2$ is again a Hilbert space. Moreover, $H_1$ can be identified with $\{(\psi_1,0)\,|\,\psi_1\in H_1\}$ and we can regard $H_1$ as a subspace of $H_1\oplus H_2$, and similarly for $H_2$. It is also customary to write $\psi_1+\psi_2$ instead of $(\psi_1,\psi_2)$.
More generally, let $H_j$, $j\in\mathbb{N}$, be a countable collection of Hilbert spaces and define
$$\bigoplus_{j=1}^\infty H_j=\Big\{\sum_{j=1}^\infty\psi_j\ \Big|\ \psi_j\in H_j,\ \sum_{j=1}^\infty\|\psi_j\|_j^2<\infty\Big\}, \tag{1.37}$$
which becomes a Hilbert space with the scalar product
$$\Big\langle\sum_{j=1}^\infty\varphi_j,\sum_{j=1}^\infty\psi_j\Big\rangle=\sum_{j=1}^\infty\langle\varphi_j,\psi_j\rangle_j. \tag{1.38}$$
Suppose $H$ and $\tilde H$ are two Hilbert spaces. Our goal is to construct their tensor product. The elements should be products $\psi\otimes\tilde\psi$ of elements $\psi\in H$ and $\tilde\psi\in\tilde H$. Hence we start with the set of all finite linear combinations of elements of $H\times\tilde H$,
$$\mathcal{T}(H,\tilde H)=\Big\{\sum_{j=1}^n\alpha_j(\psi_j,\tilde\psi_j)\ \Big|\ (\psi_j,\tilde\psi_j)\in H\times\tilde H,\ \alpha_j\in\mathbb{C}\Big\}. \tag{1.39}$$
Since we want $(\psi_1+\psi_2)\otimes\tilde\psi=\psi_1\otimes\tilde\psi+\psi_2\otimes\tilde\psi$, $\psi\otimes(\tilde\psi_1+\tilde\psi_2)=\psi\otimes\tilde\psi_1+\psi\otimes\tilde\psi_2$, and $(\alpha\psi)\otimes\tilde\psi=\psi\otimes(\alpha\tilde\psi)$, we consider $\mathcal{T}(H,\tilde H)/\mathcal{N}(H,\tilde H)$, where
$$\mathcal{N}(H,\tilde H)=\operatorname{span}\Big\{\sum_{j,k=1}^n\alpha_j\beta_k(\psi_j,\tilde\psi_k)-\Big(\sum_{j=1}^n\alpha_j\psi_j,\sum_{k=1}^n\beta_k\tilde\psi_k\Big)\Big\} \tag{1.40}$$
and write $\psi\otimes\tilde\psi$ for the equivalence class of $(\psi,\tilde\psi)$.
Next we define
$$\langle\psi\otimes\tilde\psi,\phi\otimes\tilde\phi\rangle=\langle\psi,\phi\rangle\langle\tilde\psi,\tilde\phi\rangle, \tag{1.41}$$
which extends to a skew linear form on $\mathcal{T}(H,\tilde H)/\mathcal{N}(H,\tilde H)$. To show that we obtain a scalar product, we need to ensure positivity. Let $\psi=\sum_i\alpha_i\,\psi_i\otimes\tilde\psi_i\ne 0$ and pick orthonormal bases $\varphi_j$, $\tilde\varphi_k$ for $\operatorname{span}\{\psi_i\}$, $\operatorname{span}\{\tilde\psi_i\}$, respectively. Then
$$\psi=\sum_{j,k}\alpha_{jk}\,\varphi_j\otimes\tilde\varphi_k,\qquad \alpha_{jk}=\sum_i\alpha_i\langle\varphi_j,\psi_i\rangle\langle\tilde\varphi_k,\tilde\psi_i\rangle, \tag{1.42}$$
and we compute
$$\langle\psi,\psi\rangle=\sum_{j,k}|\alpha_{jk}|^2>0. \tag{1.43}$$
The completion of $\mathcal{T}(H,\tilde H)/\mathcal{N}(H,\tilde H)$ with respect to the induced norm is called the tensor product $H\otimes\tilde H$ of $H$ and $\tilde H$.
Lemma 1.9. If $\varphi_j$, $\tilde\varphi_k$ are orthonormal bases for $H$, $\tilde H$, respectively, then $\varphi_j\otimes\tilde\varphi_k$ is an orthonormal basis for $H\otimes\tilde H$.

Proof. That $\varphi_j\otimes\tilde\varphi_k$ is an orthonormal set is immediate from (1.41). Moreover, since $\operatorname{span}\{\varphi_j\}$, $\operatorname{span}\{\tilde\varphi_k\}$ are dense in $H$, $\tilde H$, respectively, it is easy to see that $\varphi_j\otimes\tilde\varphi_k$ is dense in $\mathcal{T}(H,\tilde H)/\mathcal{N}(H,\tilde H)$. But the latter is dense in $H\otimes\tilde H$. □
Example. We have $H\otimes\mathbb{C}^n=H^n$.
Example. Let $(M,d\mu)$ and $(\tilde M,d\tilde\mu)$ be two measure spaces. Then we have $L^2(M,d\mu)\otimes L^2(\tilde M,d\tilde\mu)=L^2(M\times\tilde M,d\mu\times d\tilde\mu)$.

Clearly we have $L^2(M,d\mu)\otimes L^2(\tilde M,d\tilde\mu)\subseteq L^2(M\times\tilde M,d\mu\times d\tilde\mu)$. Now take an orthonormal basis $\varphi_j\otimes\tilde\varphi_k$ for $L^2(M,d\mu)\otimes L^2(\tilde M,d\tilde\mu)$ as in our previous lemma. Then
$$\int_M\int_{\tilde M}(\varphi_j(x)\tilde\varphi_k(y))^*f(x,y)\,d\mu(x)\,d\tilde\mu(y)=0 \tag{1.44}$$
implies
$$\int_M\varphi_j(x)^*f_k(x)\,d\mu(x)=0,\qquad f_k(x)=\int_{\tilde M}\tilde\varphi_k(y)^*f(x,y)\,d\tilde\mu(y), \tag{1.45}$$
and hence $f_k(x)=0$ for $\mu$-a.e. $x$. But this implies $f(x,y)=0$ for $\mu$-a.e. $x$ and $\tilde\mu$-a.e. $y$ and thus $f=0$. Hence $\varphi_j\otimes\tilde\varphi_k$ is a basis for $L^2(M\times\tilde M,d\mu\times d\tilde\mu)$ and equality follows.
It is straightforward to extend the tensor product to any finite number of Hilbert spaces. We even note that
$$\Big(\bigoplus_{j=1}^\infty H_j\Big)\otimes H=\bigoplus_{j=1}^\infty(H_j\otimes H), \tag{1.46}$$
where equality has to be understood in the sense that both spaces are unitarily equivalent by virtue of the identification
$$\Big(\sum_{j=1}^\infty\psi_j\Big)\otimes\psi=\sum_{j=1}^\infty\psi_j\otimes\psi. \tag{1.47}$$

Problem 1.8. We have $\psi\otimes\tilde\psi=\phi\otimes\tilde\phi$ if and only if there is some $\alpha\in\mathbb{C}\setminus\{0\}$ such that $\psi=\alpha\phi$ and $\tilde\psi=\alpha^{-1}\tilde\phi$.

Problem 1.9. Show (1.46).
1.5. The C* algebra of bounded linear operators

We start by introducing a conjugation for operators on a Hilbert space $H$. Let $A\in\mathcal{L}(H)$; then the adjoint operator is defined via
$$\langle\varphi,A^*\psi\rangle=\langle A\varphi,\psi\rangle \tag{1.48}$$
(compare Corollary 1.8).

Example. If $H=\mathbb{C}^n$ and $A=(a_{jk})_{1\le j,k\le n}$, then $A^*=(a_{kj}^*)_{1\le j,k\le n}$.
Lemma 1.10. Let $A,B\in\mathcal{L}(H)$. Then

(i) $(A+B)^*=A^*+B^*$, $(\alpha A)^*=\alpha^*A^*$,
(ii) $A^{**}=A$,
(iii) $(AB)^*=B^*A^*$,
(iv) $\|A\|=\|A^*\|$ and $\|A\|^2=\|A^*A\|=\|AA^*\|$.

Proof. (i) and (ii) are obvious. (iii) follows from $\langle\varphi,(AB)\psi\rangle=\langle A^*\varphi,B\psi\rangle=\langle B^*A^*\varphi,\psi\rangle$. (iv) follows from
$$\|A^*\|=\sup_{\|\varphi\|=\|\psi\|=1}|\langle\psi,A^*\varphi\rangle|=\sup_{\|\varphi\|=\|\psi\|=1}|\langle A\psi,\varphi\rangle|=\|A\| \tag{1.49}$$
and
$$\|A^*A\|=\sup_{\|\varphi\|=\|\psi\|=1}|\langle\varphi,A^*A\psi\rangle|=\sup_{\|\varphi\|=\|\psi\|=1}|\langle A\varphi,A\psi\rangle|=\sup_{\|\varphi\|=1}\|A\varphi\|^2=\|A\|^2, \tag{1.50}$$
where we have used $\|\varphi\|=\sup_{\|\psi\|=1}|\langle\psi,\varphi\rangle|$. □

As a consequence of $\|A^*\|=\|A\|$, observe that taking the adjoint is continuous.
In general, a Banach algebra $\mathcal{A}$ together with an involution
$$(a+b)^*=a^*+b^*,\quad(\alpha a)^*=\alpha^*a^*,\quad a^{**}=a,\quad(ab)^*=b^*a^*, \tag{1.51}$$
satisfying
$$\|a\|^2=\|a^*a\| \tag{1.52}$$
is called a C* algebra. The element $a^*$ is called the adjoint of $a$. Note that $\|a^*\|=\|a\|$ follows from (1.52) and $\|aa^*\|\le\|a\|\,\|a^*\|$.

Any subalgebra which is also closed under involution is called a *-algebra. An ideal is a subspace $\mathcal{I}\subseteq\mathcal{A}$ such that $a\in\mathcal{I}$, $b\in\mathcal{A}$ implies $ab\in\mathcal{I}$ and $ba\in\mathcal{I}$. If it is closed under the adjoint map, it is called a *-ideal. Note that if there is an identity $e$, we have $e^*=e$ and hence $(a^{-1})^*=(a^*)^{-1}$ (show this).

Example. The continuous functions $C(I)$ together with complex conjugation form a commutative C* algebra.

An element $a\in\mathcal{A}$ is called normal if $aa^*=a^*a$, self-adjoint if $a=a^*$, unitary if $aa^*=a^*a=\mathbb{I}$, an (orthogonal) projection if $a=a^*=a^2$, and positive if $a=bb^*$ for some $b\in\mathcal{A}$. Clearly both self-adjoint and unitary elements are normal.

Problem 1.10. Let $A\in\mathcal{L}(H)$. Show that $A$ is normal if and only if
$$\|A\psi\|=\|A^*\psi\|,\qquad\forall\psi\in H. \tag{1.53}$$
(Hint: Problem 0.6.)
Problem 1.11. Show that $U:H\to H$ is unitary if and only if $U^{-1}=U^*$.

Problem 1.12. Compute the adjoint of $S:\ell^2(\mathbb{N})\to\ell^2(\mathbb{N})$, $(a_1,a_2,a_3,\dots)\mapsto(0,a_1,a_2,\dots)$.
1.6. Weak and strong convergence

Sometimes a weaker notion of convergence is useful: We say that $\psi_n$ converges weakly to $\psi$ and write
$$\operatorname*{w-lim}_{n\to\infty}\psi_n=\psi\quad\text{or}\quad\psi_n\rightharpoonup\psi \tag{1.54}$$
if $\langle\varphi,\psi_n\rangle\to\langle\varphi,\psi\rangle$ for every $\varphi\in H$ (show that a weak limit is unique).

Example. Let $\varphi_n$ be an (infinite) orthonormal set. Then $\langle\psi,\varphi_n\rangle\to 0$ for every $\psi$ since these are just the expansion coefficients of $\psi$. ($\varphi_n$ does not converge to $0$, since $\|\varphi_n\|=1$.)
Clearly $\psi_n\to\psi$ implies $\psi_n\rightharpoonup\psi$, and hence this notion of convergence is indeed weaker. Moreover, the weak limit is unique, since $\langle\varphi,\psi_n\rangle\to\langle\varphi,\psi\rangle$ and $\langle\varphi,\psi_n\rangle\to\langle\varphi,\tilde\psi\rangle$ imply $\langle\varphi,\psi-\tilde\psi\rangle=0$. A sequence $\psi_n$ is called a weak Cauchy sequence if $\langle\varphi,\psi_n\rangle$ is Cauchy for every $\varphi\in H$.
Lemma 1.11. Let $H$ be a Hilbert space.

(i) $\psi_n\rightharpoonup\psi$ implies $\|\psi\|\le\liminf\|\psi_n\|$.
(ii) Every weak Cauchy sequence $\psi_n$ is bounded: $\|\psi_n\|\le C$.
(iii) Every weak Cauchy sequence converges weakly.
(iv) For a weakly convergent sequence $\psi_n\rightharpoonup\psi$ we have: $\psi_n\to\psi$ if and only if $\limsup\|\psi_n\|\le\|\psi\|$.

Proof. (i) Observe
$$\|\psi\|^2=\langle\psi,\psi\rangle=\liminf\langle\psi,\psi_n\rangle\le\|\psi\|\liminf\|\psi_n\|. \tag{1.55}$$
(ii) For every $\varphi$ we have that $|\langle\varphi,\psi_n\rangle|\le C(\varphi)$ is bounded. Hence by the uniform boundedness principle we have $\|\psi_n\|=\|\langle\psi_n,\cdot\rangle\|\le C$.
(iii) Let $\varphi_m$ be an orthonormal basis and define $c_m=\lim_{n\to\infty}\langle\varphi_m,\psi_n\rangle$. Then $\psi=\sum_m c_m\varphi_m$ is the desired limit.
(iv) By (i) we have $\lim\|\psi_n\|=\|\psi\|$ and hence
$$\|\psi-\psi_n\|^2=\|\psi\|^2-2\operatorname{Re}\langle\psi,\psi_n\rangle+\|\psi_n\|^2\to 0. \tag{1.56}$$
The converse is straightforward. □
Clearly an orthonormal basis does not have a norm convergent subsequence. Hence the unit ball in an infinite dimensional Hilbert space is never compact. However, we can at least extract weakly convergent subsequences:

Lemma 1.12. Let $H$ be a Hilbert space. Every bounded sequence $\psi_n$ has a weakly convergent subsequence.

Proof. Let $\varphi_k$ be an orthonormal basis. Then by the usual diagonal sequence argument we can find a subsequence $\psi_{n_m}$ such that $\langle\varphi_k,\psi_{n_m}\rangle$ converges for all $k$. Since $\psi_n$ is bounded, $\langle\varphi,\psi_{n_m}\rangle$ converges for every $\varphi\in H$, and hence $\psi_{n_m}$ is a weak Cauchy sequence. □
Finally, let me remark that similar concepts can be introduced for operators. This is of particular importance for the case of unbounded operators, where convergence in the operator norm makes no sense at all.

A sequence of operators $A_n$ is said to converge strongly to $A$,
$$\operatorname*{s-lim}_{n\to\infty}A_n=A\quad:\Leftrightarrow\quad A_n\psi\to A\psi\quad\forall\psi\in D(A)\subseteq D(A_n). \tag{1.57}$$
It is said to converge weakly to $A$,
$$\operatorname*{w-lim}_{n\to\infty}A_n=A\quad:\Leftrightarrow\quad A_n\psi\rightharpoonup A\psi\quad\forall\psi\in D(A)\subseteq D(A_n). \tag{1.58}$$
Clearly norm convergence implies strong convergence, and strong convergence implies weak convergence.
Example. Consider the operator $S_n\in\mathcal{L}(\ell^2(\mathbb{N}))$ which shifts a sequence $n$ places to the left,
$$S_n(x_1,x_2,\dots)=(x_{n+1},x_{n+2},\dots), \tag{1.59}$$
and the operator $S_n^*\in\mathcal{L}(\ell^2(\mathbb{N}))$ which shifts a sequence $n$ places to the right and fills up the first $n$ places with zeros,
$$S_n^*(x_1,x_2,\dots)=(\underbrace{0,\dots,0}_{n\text{ places}},x_1,x_2,\dots). \tag{1.60}$$
Then $S_n$ converges to zero strongly but not in norm, and $S_n^*$ converges weakly to zero but not strongly.
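On finitely truncated sequences one can watch the three behaviors side by side: for a fixed $x\in\ell^2$, $\|S_nx\|\to 0$ (strong convergence to zero) even though $\|S_n\|=1$ for all $n$; for $S_n^*$, $\|S_n^*x\|=\|x\|$ stays constant while each fixed coordinate of $S_n^*x$ tends to $0$ (weak convergence). A truncation-based sketch (function names are mine):

```python
import numpy as np

x = 1.0 / np.arange(1, 1001)  # a fixed square-summable sequence, truncated at 1000 terms

def S(n, x):
    """Shift n places to the left, cf. (1.59)."""
    return x[n:]

def S_star(n, x):
    """Shift n places to the right, filling with zeros, cf. (1.60)."""
    return np.concatenate([np.zeros(n), x])

tail = np.linalg.norm(S(500, x))       # small: S_n x -> 0 (strong convergence)
kept = np.linalg.norm(S_star(500, x))  # equals ||x||: S_n* does not converge strongly
coord = S_star(500, x)[10]             # a fixed coordinate -> 0 (weak convergence)
```

This only illustrates the truncated picture; the actual statements concern the full $\ell^2(\mathbb{N})$.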
Note that this example also shows that taking adjoints is not continuous with respect to strong convergence! If $A_n\xrightarrow{s}A$, we only have
$$\langle\varphi,A_n^*\psi\rangle=\langle A_n\varphi,\psi\rangle\to\langle A\varphi,\psi\rangle=\langle\varphi,A^*\psi\rangle \tag{1.61}$$
and hence only $A_n^*\rightharpoonup A^*$ in general. However, if $A_n$ and $A$ are normal, we have
$$\|(A_n-A)^*\psi\|=\|(A_n-A)\psi\| \tag{1.62}$$
and hence $A_n^*\xrightarrow{s}A^*$ in this case. Thus at least for normal operators taking adjoints is continuous with respect to strong convergence.
Lemma 1.13. Suppose $A_n$ is a sequence of bounded operators.

(i) $\operatorname*{s-lim}_{n\to\infty}A_n=A$ implies $\|A\|\le\liminf\|A_n\|$.
(ii) Every strong Cauchy sequence $A_n$ is bounded: $\|A_n\|\le C$.
(iii) If $A_n\psi\to A\psi$ for $\psi$ in a dense set and $\|A_n\|\le C$, then $\operatorname*{s-lim}_{n\to\infty}A_n=A$.

The same result holds if strong convergence is replaced by weak convergence.

Proof. (i) and (ii) follow as in Lemma 1.11 (i).
(iii) Just use
$$\|A_n\psi-A\psi\|\le\|A_n\psi-A_n\varphi\|+\|A_n\varphi-A\varphi\|+\|A\varphi-A\psi\|\le 2C\|\psi-\varphi\|+\|A_n\varphi-A\varphi\| \tag{1.63}$$
and choose $\varphi$ in the dense subspace such that $\|\psi-\varphi\|\le\frac{\varepsilon}{4C}$ and $n$ large such that $\|A_n\varphi-A\varphi\|\le\frac{\varepsilon}{2}$.

The case of weak convergence is left as an exercise. □
Problem 1.13. Suppose $\psi_n\to\psi$ and $\varphi_n\rightharpoonup\varphi$. Then $\langle\psi_n,\varphi_n\rangle\to\langle\psi,\varphi\rangle$.

Problem 1.14. Let $\{\varphi_j\}$ be some orthonormal basis. Show that $\psi_n\rightharpoonup\psi$ if and only if $\psi_n$ is bounded and $\langle\varphi_j,\psi_n\rangle\to\langle\varphi_j,\psi\rangle$ for every $j$. Show that this is wrong without the boundedness assumption.

Problem 1.15. Let $\{\varphi_j\}_{j=1}^\infty$ be some orthonormal basis and define
$$\|\psi\|_w=\sum_{j=1}^\infty\frac{1}{2^j}|\langle\varphi_j,\psi\rangle|. \tag{1.64}$$
Show that $\|\cdot\|_w$ is a norm. Show that $\psi_n\rightharpoonup\psi$ if and only if $\|\psi_n-\psi\|_w\to 0$.

Problem 1.16. A subspace $M\subseteq H$ is closed if and only if every weak Cauchy sequence in $M$ has a limit in $M$. (Hint: $M=M^{\perp\perp}$.)
1.7. Appendix: The Stone–Weierstraß theorem

In the case of a self-adjoint operator, the spectral theorem will show that the closed *-algebra generated by this operator is isomorphic to the C* algebra of continuous functions $C(K)$ over some compact set. Hence it is important to be able to identify dense sets:

Theorem 1.14 (Stone–Weierstraß, real version). Suppose $K$ is a compact set and let $C(K,\mathbb{R})$ be the Banach algebra of continuous functions (with the sup norm).

If $F\subset C(K,\mathbb{R})$ contains the identity $1$ and separates points (i.e., for every $x_1\ne x_2$ there is some function $f\in F$ such that $f(x_1)\ne f(x_2)$), then the algebra generated by $F$ is dense.
Proof. Denote by $A$ the closure of the algebra generated by $F$. Note that if $f\in A$, we have $|f|\in A$: By the Weierstraß approximation theorem (Theorem 0.13) there is a polynomial $p_n(t)$ such that $\big||t|-p_n(t)\big|<\frac{1}{n}$, and hence $p_n(f)\to|f|$.

In particular, if $f,g\in A$, we also have
$$\max\{f,g\}=\frac{(f+g)+|f-g|}{2},\qquad\min\{f,g\}=\frac{(f+g)-|f-g|}{2} \tag{1.65}$$
in $A$.

Now fix $f\in C(K,\mathbb{R})$. We need to find some $f^\varepsilon\in A$ with $\|f-f^\varepsilon\|_\infty<\varepsilon$.

First of all, since $A$ separates points, observe that for given $y,z\in K$ there is a function $f_{y,z}\in A$ such that $f_{y,z}(y)=f(y)$ and $f_{y,z}(z)=f(z)$ (show this). Next, for every $y\in K$ there is a neighborhood $U(y)$ such that
$$f_{y,z}(x)>f(x)-\varepsilon,\qquad x\in U(y), \tag{1.66}$$
and since $K$ is compact, finitely many, say $U(y_1),\dots,U(y_j)$, cover $K$. Then
$$f_z=\max\{f_{y_1,z},\dots,f_{y_j,z}\}\in A \tag{1.67}$$
satisfies $f_z>f-\varepsilon$ by construction. Since $f_z(z)=f(z)$, for every $z\in K$ there is a neighborhood $V(z)$ such that
$$f_z(x)<f(x)+\varepsilon,\qquad x\in V(z), \tag{1.68}$$
and a corresponding finite cover $V(z_1),\dots,V(z_k)$. Now
$$f^\varepsilon=\min\{f_{z_1},\dots,f_{z_k}\}\in A \tag{1.69}$$
satisfies $f^\varepsilon<f+\varepsilon$. Since $f-\varepsilon<f_{z_l}$ for all $l$, we also have $f-\varepsilon<f^\varepsilon$, and we have found the required function. □
Theorem 1.15 (Stone–Weierstraß). Suppose $K$ is a compact set and let $C(K)$ be the C* algebra of continuous functions (with the sup norm).

If $F\subset C(K)$ contains the identity $1$ and separates points, then the *-algebra generated by $F$ is dense.

Proof. Just observe that $\tilde F=\{\operatorname{Re}(f),\operatorname{Im}(f)\,|\,f\in F\}$ satisfies the assumptions of the real version. Hence any real-valued continuous function can be approximated by elements from $\tilde F$; in particular, this holds for the real and imaginary parts of any given complex-valued function. □

Note that the additional requirement of being closed under complex conjugation is crucial: The functions holomorphic on the unit ball and continuous on the boundary separate points, but they are not dense (since the uniform limit of holomorphic functions is again holomorphic).
Corollary 1.16. Suppose $K$ is a compact set and let $C(K)$ be the C* algebra of continuous functions (with the sup norm).

If $F\subset C(K)$ separates points, then the closure of the *-algebra generated by $F$ is either $C(K)$ or $\{f\in C(K)\,|\,f(t_0)=0\}$ for some $t_0\in K$.

Proof. There are two possibilities: either all $f\in F$ vanish at one point $t_0\in K$ (there can be at most one such point since $F$ separates points), or there is no such point. If there is no such point, we can proceed as in the proof of the Stone–Weierstraß theorem to show that the identity can be approximated by elements in $A$ (note that to show $|f|\in A$ if $f\in A$ we do not need the identity, since $p_n$ can be chosen to contain no constant term). If there is such a $t_0$, the identity is clearly missing from $A$. However, adding the identity to $A$ we get $A+\mathbb{C}=C(K)$, and it is easy to see that $A=\{f\in C(K)\,|\,f(t_0)=0\}$. □
Problem 1.17. Show that the set of functions $\varphi_n(x)=\frac{1}{\sqrt{2\pi}}\mathrm{e}^{\mathrm{i}nx}$, $n\in\mathbb{Z}$, forms an orthonormal basis for $H=L^2(0,2\pi)$.

Problem 1.18. Show that the *-algebra generated by $f_z(t)=\frac{1}{t-z}$, $z\in\mathbb{C}$, is dense in the C* algebra $C_\infty(\mathbb{R})$ of continuous functions vanishing at infinity. (Hint: Add $\infty$ to $\mathbb{R}$ to make it compact.)
Chapter 2

Self-adjointness and spectrum

2.1. Some quantum mechanics

In quantum mechanics, a single particle living in $\mathbb{R}^3$ is described by a complex-valued function (the wave function)
$$\psi(x,t),\qquad(x,t)\in\mathbb{R}^3\times\mathbb{R}, \tag{2.1}$$
where $x$ corresponds to a point in space and $t$ corresponds to time. The quantity $\rho_t(x)=|\psi(x,t)|^2$ is interpreted as the probability density of the particle at the time $t$. In particular, $\psi$ must be normalized according to
$$\int_{\mathbb{R}^3}|\psi(x,t)|^2\,d^3x=1,\qquad t\in\mathbb{R}. \tag{2.2}$$
The location $x$ of the particle is a quantity which can be observed (i.e., measured) and is hence called an observable. Due to our probabilistic interpretation it is also a random variable whose expectation is given by
$$E_\psi(x)=\int_{\mathbb{R}^3}x\,|\psi(x,t)|^2\,d^3x. \tag{2.3}$$
In a real life setting, it will not be possible to measure $x$ directly and one will only be able to measure certain functions of $x$. For example, it is possible to check whether the particle is inside a certain area $\Omega$ of space (e.g., inside a detector). The corresponding observable is the characteristic function $\chi_\Omega(x)$ of this set. In particular, the number
$$E_\psi(\chi_\Omega)=\int_{\mathbb{R}^3}\chi_\Omega(x)|\psi(x,t)|^2\,d^3x=\int_\Omega|\psi(x,t)|^2\,d^3x \tag{2.4}$$
corresponds to the probability of finding the particle inside $\Omega\subseteq\mathbb{R}^3$. An important point to observe is that, in contradistinction to classical mechanics, the particle is no longer localized at a certain point. In particular, the mean-square deviation (or variance) $\Delta_\psi(x)^2=E_\psi(x^2)-E_\psi(x)^2$ is always nonzero.
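As a purely illustrative computation (a one-dimensional simplification, not from the text), for a normalized Gaussian wave packet one can evaluate the normalization (2.2), the expectation (2.3), and the detection probability (2.4) numerically:

```python
import numpy as np

# A normalized 1D Gaussian wave packet centered at x0 (an illustrative stand-in
# for psi(x, t) at a fixed time t).
x0, sigma = 1.5, 0.5
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-(x - x0) ** 2 / (4 * sigma**2))
rho = np.abs(psi) ** 2                 # probability density |psi|^2

norm = np.sum(rho) * dx                # cf. (2.2): should be 1
mean = np.sum(x * rho) * dx            # cf. (2.3): E_psi(x)
prob = np.sum(rho[x > 0]) * dx         # cf. (2.4) with Omega = (0, infinity)
```

For this packet the probability of detecting the particle at positive positions is close to 1, since the density is concentrated around $x_0=1.5$.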
In general, the configuration space (or phase space) of a quantum system is a (complex) Hilbert space $H$ and the possible states of this system are represented by the elements $\psi$ having norm one, $\|\psi\|=1$.

An observable $a$ corresponds to a linear operator $A$ in this Hilbert space, and its expectation, if the system is in the state $\psi$, is given by the real number
$$E_\psi(A)=\langle\psi,A\psi\rangle=\langle A\psi,\psi\rangle, \tag{2.5}$$
where $\langle\cdot,\cdot\rangle$ denotes the scalar product of $H$. Similarly, the mean-square deviation is given by
$$\Delta_\psi(A)^2=E_\psi(A^2)-E_\psi(A)^2=\|(A-E_\psi(A))\psi\|^2. \tag{2.6}$$
Note that $\Delta_\psi(A)$ vanishes if and only if $\psi$ is an eigenstate corresponding to the eigenvalue $E_\psi(A)$, that is, $A\psi=E_\psi(A)\psi$.
From a physical point of view, (2.5) should make sense for any $\psi\in H$. However, this is not in the cards, as our simple example of one particle already shows. In fact, the reader is invited to find a square integrable function $\psi(x)$ for which $x\psi(x)$ is no longer square integrable. The deeper reason behind this nuisance is that $E_\psi(x)$ can attain arbitrarily large values if the particle is not confined to a finite domain, which renders the corresponding operator unbounded. But unbounded operators cannot be defined on the entire Hilbert space in a natural way by the closed graph theorem (Theorem 2.7 below).

Hence, $A$ will only be defined on a subset $D(A)\subseteq H$ called the domain of $A$. Since we want $A$ to be defined for at least most states, we require $D(A)$ to be dense.

However, it should be noted that there is no general prescription for finding the operator corresponding to a given observable.
Now let us turn to the time evolution of such a quantum mechanical system. Given an initial state $\psi(0)$ of the system, there should be a unique $\psi(t)$ representing the state of the system at time $t\in\mathbb{R}$. We will write
$$\psi(t)=U(t)\psi(0). \tag{2.7}$$
Moreover, it follows from physical experiments that superposition of states holds, that is, $U(t)(\alpha_1\psi_1(0)+\alpha_2\psi_2(0))=\alpha_1\psi_1(t)+\alpha_2\psi_2(t)$ ($|\alpha_1|^2+|\alpha_2|^2=1$). In other words, $U(t)$ should be a linear operator. Moreover, since $\psi(t)$ is a state (i.e., $\|\psi(t)\|=1$), we have
$$\|U(t)\psi\|=\|\psi\|. \tag{2.8}$$
Such operators are called unitary. Next, since we have assumed uniqueness of solutions to the initial value problem, we must have
$$U(0)=\mathbb{I},\qquad U(t+s)=U(t)U(s). \tag{2.9}$$
A family of unitary operators $U(t)$ having this property is called a one-parameter unitary group. In addition, it is natural to assume that this group is strongly continuous,
$$\lim_{t\to t_0}U(t)\psi=U(t_0)\psi,\qquad\psi\in H. \tag{2.10}$$
Each such group has an infinitesimal generator defined by
$$H\psi=\lim_{t\to 0}\frac{\mathrm{i}}{t}\big(U(t)\psi-\psi\big),\qquad D(H)=\Big\{\psi\in H\ \Big|\ \lim_{t\to 0}\frac{1}{t}(U(t)\psi-\psi)\text{ exists}\Big\}. \tag{2.11}$$
This operator is called the Hamiltonian and corresponds to the energy of the system. If $\psi(0)\in D(H)$, then $\psi(t)$ is a solution of the Schrödinger equation (in suitable units)
$$\mathrm{i}\frac{d}{dt}\psi(t)=H\psi(t). \tag{2.12}$$
This equation will be the main subject of our course.
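In finite dimensions, a strongly continuous one-parameter unitary group is simply $U(t)=\mathrm{e}^{-\mathrm{i}tH}$ with $H$ Hermitian, and (2.8), (2.9), and the generator formula (2.11) can all be checked numerically. A sketch under these assumptions (variable names are my own), with the matrix exponential computed via the spectral decomposition of $H$:

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
H = (B + B.conj().T) / 2          # a Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)          # H = V diag(E) V*

def U(t):
    """U(t) = exp(-i t H), via the spectral decomposition of H."""
    return V @ np.diag(np.exp(-1j * t * E)) @ V.conj().T

psi = np.array([1.0, 1j, -1.0])
psi = psi / np.linalg.norm(psi)   # a normalized state
```

One finds that `U(t)` preserves norms, satisfies the group law `U(s) U(t) = U(s + t)`, and that the difference quotient in (2.11) recovers `H @ psi` for small `t`.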
In summary, we have the following axioms of quantum mechanics.

Axiom 1. The configuration space of a quantum system is a complex separable Hilbert space $H$ and the possible states of this system are represented by the elements of $H$ which have norm one.

Axiom 2. Each observable $a$ corresponds to a linear operator $A$ defined maximally on a dense subset $D(A)$. Moreover, the operator corresponding to a polynomial $P_n(a)=\sum_{j=0}^n\alpha_j a^j$, $\alpha_j\in\mathbb{R}$, is $P_n(A)=\sum_{j=0}^n\alpha_j A^j$, $D(P_n(A))=D(A^n)=\{\psi\in D(A)\,|\,A\psi\in D(A^{n-1})\}$ ($A^0=\mathbb{I}$).

Axiom 3. The expectation value for a measurement of $a$, when the system is in the state $\psi\in D(A)$, is given by (2.5), which must be real for all $\psi\in D(A)$.

Axiom 4. The time evolution is given by a strongly continuous one-parameter unitary group $U(t)$. The generator of this group corresponds to the energy of the system.

In the following sections we will try to draw some mathematical consequences from these assumptions:
First we will see that Axioms 2 and 3 imply that observables correspond to self-adjoint operators. Hence these operators play a central role in quantum mechanics and we will derive some of their basic properties. Another crucial role is played by the set of all possible expectation values for the measurement of $a$, which is connected with the spectrum $\sigma(A)$ of the corresponding operator $A$.

The problem of defining functions of an observable will lead us to the spectral theorem (in the next chapter), which generalizes the diagonalization of symmetric matrices.

Axiom 4 will be the topic of Chapter 5.
2.2. Self-adjoint operators

Let $H$ be a (complex separable) Hilbert space. A linear operator is a linear mapping
$$A:D(A)\to H, \tag{2.13}$$
where $D(A)$ is a linear subspace of $H$, called the domain of $A$. It is called bounded if the operator norm
$$\|A\|=\sup_{\|\psi\|=1}\|A\psi\|=\sup_{\|\varphi\|=\|\psi\|=1}|\langle\psi,A\varphi\rangle| \tag{2.14}$$
is finite. The second equality follows since equality in $|\langle\psi,A\varphi\rangle|\le\|\psi\|\,\|A\varphi\|$ is attained when $A\varphi=z\psi$ for some $z\in\mathbb{C}$. If $A$ is bounded, it is no restriction to assume $D(A)=H$ and we will always do so. The Banach space of all bounded linear operators is denoted by $\mathcal{L}(H)$.

The expression $\langle\psi,A\psi\rangle$ encountered in the previous section is called the quadratic form
$$q_A(\psi)=\langle\psi,A\psi\rangle,\qquad\psi\in D(A), \tag{2.15}$$
associated to $A$. An operator can be reconstructed from its quadratic form via the polarization identity
$$\langle\varphi,A\psi\rangle=\frac{1}{4}\big(q_A(\varphi+\psi)-q_A(\varphi-\psi)+\mathrm{i}\,q_A(\varphi-\mathrm{i}\psi)-\mathrm{i}\,q_A(\varphi+\mathrm{i}\psi)\big). \tag{2.16}$$
A densely defined linear operator $A$ is called symmetric (or Hermitian) if
$$\langle\varphi,A\psi\rangle=\langle A\varphi,\psi\rangle,\qquad\psi,\varphi\in D(A). \tag{2.17}$$
The justification for this definition is provided by the following lemma.

Lemma 2.1. A densely defined operator $A$ is symmetric if and only if the corresponding quadratic form is real-valued.

Proof. Clearly (2.17) implies that $\operatorname{Im}(q_A(\psi))=0$. Conversely, taking the imaginary part of the identity
$$q_A(\psi+\mathrm{i}\varphi)=q_A(\psi)+q_A(\varphi)+\mathrm{i}(\langle\psi,A\varphi\rangle-\langle\varphi,A\psi\rangle) \tag{2.18}$$
shows $\operatorname{Re}\langle A\varphi,\psi\rangle=\operatorname{Re}\langle\varphi,A\psi\rangle$. Replacing $\varphi$ by $\mathrm{i}\varphi$ in this last equation shows $\operatorname{Im}\langle A\varphi,\psi\rangle=\operatorname{Im}\langle\varphi,A\psi\rangle$ and finishes the proof. □
In other words, a densely defined operator $A$ is symmetric if and only if
$$\langle\psi,A\psi\rangle=\langle A\psi,\psi\rangle,\qquad\psi\in D(A). \tag{2.19}$$
This already narrows the class of admissible operators to the class of symmetric operators by Axiom 3. Next, let us tackle the issue of the correct domain.

By Axiom 2, $A$ should be defined maximally, that is, if $\tilde A$ is another symmetric operator such that $A\subseteq\tilde A$, then $A=\tilde A$. Here we write $A\subseteq\tilde A$ if $D(A)\subseteq D(\tilde A)$ and $A\psi=\tilde A\psi$ for all $\psi\in D(A)$. In addition, we write $A=\tilde A$ if both $\tilde A\subseteq A$ and $A\subseteq\tilde A$ hold.

The adjoint operator $A^*$ of a densely defined linear operator $A$ is defined by
$$D(A^*)=\{\psi\in H\,|\,\exists\tilde\psi\in H:\ \langle\psi,A\varphi\rangle=\langle\tilde\psi,\varphi\rangle,\ \forall\varphi\in D(A)\},\qquad A^*\psi=\tilde\psi. \tag{2.20}$$
The requirement that $D(A)$ is dense implies that $A^*$ is well-defined. However, note that $D(A^*)$ might not be dense in general. In fact, it might contain no vectors other than $0$.
Clearly we have $(\alpha A)^*=\alpha^*A^*$ for $\alpha\in\mathbb{C}$ and $(A+B)^*\supseteq A^*+B^*$ provided $D(A+B)=D(A)\cap D(B)$ is dense. However, equality will not hold in general unless one operator is bounded (Problem 2.1).

For later use, note that (Problem 2.2)
$$\operatorname{Ker}(A^*)=\operatorname{Ran}(A)^\perp. \tag{2.21}$$
For symmetric operators we clearly have $A\subseteq A^*$. If, in addition, $A=A^*$ holds, then $A$ is called self-adjoint. Our goal is to show that observables correspond to self-adjoint operators. This is for example true in the case of the position operator $x$, which is a special case of a multiplication operator.

Example. (Multiplication operator) Consider the multiplication operator
$$(Af)(x)=A(x)f(x),\qquad D(A)=\{f\in L^2(\mathbb{R}^n,d\mu)\,|\,Af\in L^2(\mathbb{R}^n,d\mu)\}, \tag{2.22}$$
given by multiplication with the measurable function $A:\mathbb{R}^n\to\mathbb{C}$. First of all note that $D(A)$ is dense. In fact, consider $\Omega_n=\{x\in\mathbb{R}^n\,|\,|A(x)|\le n\}\nearrow\mathbb{R}^n$. Then, for every $f\in L^2(\mathbb{R}^n,d\mu)$ the function $f_n=\chi_{\Omega_n}f\in D(A)$ converges to $f$ as $n\to\infty$ by dominated convergence.
Next, let us compute the adjoint of $A$. Performing a formal computation, we have for $h,f\in D(A)$ that
$$\langle h,Af\rangle=\int h(x)^*A(x)f(x)\,d\mu(x)=\int(A(x)^*h(x))^*f(x)\,d\mu(x)=\langle\tilde Ah,f\rangle, \tag{2.23}$$
where $\tilde A$ is multiplication by $A(x)^*$,
$$(\tilde Af)(x)=A(x)^*f(x),\qquad D(\tilde A)=\{f\in L^2(\mathbb{R}^n,d\mu)\,|\,\tilde Af\in L^2(\mathbb{R}^n,d\mu)\}. \tag{2.24}$$
Note $D(\tilde A)=D(A)$. At first sight this seems to show that the adjoint of $A$ is $\tilde A$. But for our calculation we had to assume $h\in D(A)$, and there might be some functions in $D(A^*)$ which do not satisfy this requirement! In particular, our calculation only shows $\tilde A\subseteq A^*$. To show that equality holds, we need to work a little harder:

If $h\in D(A^*)$, there is some $g\in L^2(\mathbb{R}^n,d\mu)$ such that
$$\int h(x)^*A(x)f(x)\,d\mu(x)=\int g(x)^*f(x)\,d\mu(x),\qquad f\in D(A), \tag{2.25}$$
and thus
$$\int(h(x)A(x)^*-g(x))^*f(x)\,d\mu(x)=0,\qquad f\in D(A). \tag{2.26}$$
In particular,
$$\int\chi_{\Omega_n}(x)(h(x)A(x)^*-g(x))^*f(x)\,d\mu(x)=0,\qquad f\in L^2(\mathbb{R}^n,d\mu), \tag{2.27}$$
which shows that $\chi_{\Omega_n}(h(x)A(x)^*-g(x))^*\in L^2(\mathbb{R}^n,d\mu)$ vanishes. Since $n$ is arbitrary, we even have $h(x)A(x)^*=g(x)\in L^2(\mathbb{R}^n,d\mu)$, and thus $A^*$ is multiplication by $A(x)^*$ and $D(A^*)=D(A)$.

In particular, $A$ is self-adjoint if $A$ is real-valued. In the general case we have at least $\|Af\|=\|A^*f\|$ for all $f\in D(A)=D(A^*)$. Such operators are called normal.
Now note that
$$A\subseteq B\ \Rightarrow\ B^*\subseteq A^*, \tag{2.28}$$
that is, increasing the domain of $A$ implies decreasing the domain of $A^*$. Thus there is no point in trying to extend the domain of a self-adjoint operator further. In fact, if $A$ is self-adjoint and $B$ is a symmetric extension, we infer $A\subseteq B\subseteq B^*\subseteq A^*=A$, implying $A=B$.

Corollary 2.2. Self-adjoint operators are maximal, that is, they do not have any symmetric extensions.
Furthermore, if $A^*$ is densely defined (which is the case if $A$ is symmetric), we can consider $A^{**}$. From the definition (2.20) it is clear that $A\subseteq A^{**}$, and thus $A^{**}$ is an extension of $A$. This extension is closely related to extending a linear subspace $M$ via $M^{\perp\perp}=\overline{M}$ (as we will see a bit later) and thus is called the closure $\overline{A}=A^{**}$ of $A$.

If $A$ is symmetric, we have $A\subseteq A^*$ and hence $\overline{A}=A^{**}\subseteq A^*$, that is, $\overline{A}$ lies between $A$ and $A^*$. Moreover, $\langle\psi,A^*\varphi\rangle=\langle\overline{A}\psi,\varphi\rangle$ for all $\psi\in D(\overline{A})$, $\varphi\in D(A^*)$ implies that $\overline{A}$ is symmetric, since $A^*\varphi=\overline{A}\varphi$ for $\varphi\in D(\overline{A})$.
Example. (Differential operator) Take $H = L^2(0, 2\pi)$.

(i). Consider the operator
$$A_0 f = -i\frac{d}{dx} f, \qquad D(A_0) = \{f \in C^1[0, 2\pi] \mid f(0) = f(2\pi) = 0\}. \qquad (2.29)$$
That $A_0$ is symmetric can be shown by a simple integration by parts (do this). Note that the boundary conditions $f(0) = f(2\pi) = 0$ are chosen such that the boundary terms occurring from integration by parts vanish. However, this will also follow once we have computed $A_0^*$. If $g \in D(A_0^*)$ we must have
$$\int_0^{2\pi} g(x)^* (-i f'(x))\, dx = \int_0^{2\pi} \tilde{g}(x)^* f(x)\, dx \qquad (2.30)$$
for some $\tilde{g} \in L^2(0, 2\pi)$. Integration by parts shows
$$\int_0^{2\pi} f'(x) \Big(g(x) - i\int_0^x \tilde{g}(t)\,dt\Big)^* dx = 0 \qquad (2.31)$$
and hence $g(x) - i\int_0^x \tilde{g}(t)\,dt \in \{f' \mid f \in D(A_0)\}^\perp$. But $\{f' \mid f \in D(A_0)\} = \{h \in C(0, 2\pi) \mid \int_0^{2\pi} h(t)\,dt = 0\}$, implying $g(x) = g(0) + i\int_0^x \tilde{g}(t)\,dt$ since $\overline{\{f' \mid f \in D(A_0)\}} = \{h \in H \mid \langle 1, h\rangle = 0\} = \{1\}^\perp$ and $\{1\}^{\perp\perp} = \mathrm{span}\{1\}$. Thus $g \in AC[0, 2\pi]$, where
$$AC[a, b] = \{f \in C[a, b] \mid f(x) = f(a) + \int_a^x g(t)\,dt,\ g \in L^1(a, b)\} \qquad (2.32)$$
denotes the set of all absolutely continuous functions (see Section 2.6). In summary, $g \in D(A_0^*)$ implies $g \in AC[0, 2\pi]$ and $A_0^* g = \tilde{g} = -i g'$. Conversely, for every $g \in H^1(0, 2\pi) = \{f \in AC[0, 2\pi] \mid f' \in L^2(0, 2\pi)\}$, (2.30) holds with $\tilde{g} = -ig'$, and we conclude
$$A_0^* f = -i\frac{d}{dx} f, \qquad D(A_0^*) = H^1(0, 2\pi). \qquad (2.33)$$
In particular, $A_0$ is symmetric but not self-adjoint. Since $A_0^{**} \subseteq A_0^*$ we compute
$$0 = \langle g, A_0 f\rangle - \langle A_0^* g, f\rangle = i(f(0) g(0)^* - f(2\pi) g(2\pi)^*) \qquad (2.34)$$
and since the boundary values of $g \in D(A_0^*)$ can be prescribed arbitrarily, we must have $f(0) = f(2\pi) = 0$. Thus
$$\overline{A_0} f = -i\frac{d}{dx} f, \qquad D(\overline{A_0}) = \{f \in D(A_0^*) \mid f(0) = f(2\pi) = 0\}. \qquad (2.35)$$
(ii). Now let us take
$$Af = -i\frac{d}{dx} f, \qquad D(A) = \{f \in C^1[0, 2\pi] \mid f(0) = f(2\pi)\}, \qquad (2.36)$$
which is clearly an extension of $A_0$. Thus $A^* \subseteq A_0^*$ and we compute
$$0 = \langle g, Af\rangle - \langle A^* g, f\rangle = i f(0)(g(0)^* - g(2\pi)^*). \qquad (2.37)$$
Since this must hold for all $f \in D(A)$ we conclude $g(0) = g(2\pi)$ and
$$A^* f = -i\frac{d}{dx} f, \qquad D(A^*) = \{f \in H^1(0, 2\pi) \mid f(0) = f(2\pi)\}. \qquad (2.38)$$
Similarly, as before, $\overline{A} = A^{**} = A^*$ and thus $\overline{A}$ is self-adjoint.
One might suspect that there is no big difference between the two symmetric operators $A_0$ and $A$ from the previous example, since they coincide on a dense set of vectors. However, quite the opposite is true: For example, the first operator $A_0$ has no eigenvectors at all (i.e., solutions of the equation $A_0\psi = z\psi$, $z \in \mathbb{C}$), whereas the second one has an orthonormal basis of eigenvectors!
Example. Compute the eigenvectors of $A_0$ and $A$ from the previous example.

(i). By definition an eigenvector is a (nonzero) solution of $A_0 u = zu$, $z \in \mathbb{C}$, that is, a solution of the ordinary differential equation
$$-i\, u'(x) = z\, u(x) \qquad (2.39)$$
satisfying the boundary conditions $u(0) = u(2\pi) = 0$ (since we must have $u \in D(A_0)$). The general solution of the differential equation is $u(x) = u(0)e^{izx}$ and the boundary conditions imply $u(x) = 0$. Hence there are no eigenvectors.

(ii). Now we look for solutions of $Au = zu$, that is, the same differential equation as before, but now subject to the boundary condition $u(0) = u(2\pi)$. Again the general solution is $u(x) = u(0)e^{izx}$ and the boundary condition requires $u(0) = u(0)e^{2\pi i z}$. Thus there are two possibilities. Either $u(0) = 0$ (which is of no use for us) or $z \in \mathbb{Z}$. In particular, we see that all eigenvectors are given by
$$u_n(x) = \frac{1}{\sqrt{2\pi}} e^{inx}, \qquad n \in \mathbb{Z}, \qquad (2.40)$$
which are well known to form an orthonormal basis.
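As a numerical illustration (added here, not part of the original text), the orthonormality of the eigenfunctions $u_n$ can be checked by quadrature; the grid size below is an arbitrary choice and NumPy is assumed to be available.

```python
import numpy as np

# Sanity check: u_n(x) = e^{inx}/sqrt(2*pi) form an orthonormal system in
# L^2(0, 2*pi). We approximate the inner product by a Riemann sum on a
# uniform periodic grid (exact up to rounding for these trigonometric sums).
m = 4000
h = 2 * np.pi / m
x = np.arange(m) * h

def u(n):
    return np.exp(1j * n * x) / np.sqrt(2 * np.pi)

def inner(f, g):
    # <f, g> = int_0^{2*pi} f(x)^* g(x) dx
    return h * np.sum(np.conj(f) * g)

print(abs(inner(u(1), u(1)) - 1.0) < 1e-8)   # True: normalized
print(abs(inner(u(1), u(2))) < 1e-8)         # True: orthogonal
```

The same sums with any pair of distinct integers give zero, reflecting that the $u_n$ are the standard Fourier basis.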
We will see a bit later that this is a consequence of self-adjointness of $A$. Hence it will be important to know whether a given operator is self-adjoint or not. Our example shows that symmetry is easy to check (in the case of differential operators it usually boils down to integration by parts), but computing the adjoint of an operator is a nontrivial job even in simple situations. However, we will learn soon that self-adjointness is a much stronger property than symmetry, justifying the additional effort needed to prove it.

On the other hand, if a given symmetric operator $A$ turns out not to be self-adjoint, this raises the question of self-adjoint extensions. Two cases need to be distinguished. If $\overline{A}$ is self-adjoint, then there is only one self-adjoint extension (if $B$ is another one, we have $\overline{A} \subseteq B$ and hence $\overline{A} = B$ by Corollary 2.2). In this case $A$ is called essentially self-adjoint and $D(A)$ is called a core for $\overline{A}$. Otherwise there might be more than one self-adjoint extension or none at all. This situation is more delicate and will be investigated in Section 2.5.
Since we have seen that computing $A^*$ is not always easy, a criterion for self-adjointness not involving $A^*$ will be useful.

Lemma 2.3. Let $A$ be symmetric such that $\mathrm{Ran}(A + z) = \mathrm{Ran}(A + z^*) = H$ for one $z \in \mathbb{C}$. Then $A$ is self-adjoint.
Proof. Let $\psi \in D(A^*)$ and $A^*\psi = \tilde{\psi}$. Since $\mathrm{Ran}(A + z^*) = H$, there is a $\vartheta \in D(A)$ such that $(A + z^*)\vartheta = \tilde{\psi} + z^*\psi$. Now we compute
$$\langle \psi, (A + z)\varphi\rangle = \langle \tilde{\psi} + z^*\psi, \varphi\rangle = \langle (A + z^*)\vartheta, \varphi\rangle = \langle \vartheta, (A + z)\varphi\rangle, \qquad \varphi \in D(A), \qquad (2.41)$$
and hence $\psi = \vartheta \in D(A)$ since $\mathrm{Ran}(A + z) = H$. □
To proceed further, we will need more information on the closure of an operator. We will use a different approach which avoids the use of the adjoint operator. We will establish equivalence with our original definition in Lemma 2.4.

The simplest way of extending an operator $A$ is to take the closure of its graph $\Gamma(A) = \{(\psi, A\psi) \mid \psi \in D(A)\} \subset H^2$. That is, if $(\psi_n, A\psi_n) \to (\psi, \tilde{\psi})$ we might try to define $\overline{A}\psi = \tilde{\psi}$. For $\overline{A}\psi$ to be well-defined, we need that $(\psi_n, A\psi_n) \to (0, \tilde{\psi})$ implies $\tilde{\psi} = 0$. In this case $A$ is called closable and the unique operator $\overline{A}$ which satisfies $\Gamma(\overline{A}) = \overline{\Gamma(A)}$ is called the closure of $A$. Clearly, $A$ is called closed if $A = \overline{A}$, which is the case if and only if the graph of $A$ is closed. Equivalently, $A$ is closed if and only if $\Gamma(A)$ equipped with the graph norm $\|\psi\|_{\Gamma(A)}^2 = \|\psi\|^2 + \|A\psi\|^2$ is a Hilbert space (i.e., closed). A bounded operator is closed if and only if its domain is closed (show this!).
Example. Let us compute the closure of the operator $A_0$ from the previous example without the use of the adjoint operator. Let $f \in D(\overline{A_0})$ and let $f_n \in D(A_0)$ be a sequence such that $f_n \to f$, $A_0 f_n \to -ig$. Then $f_n' \to g$ and hence $f(x) = \int_0^x g(t)\,dt$. Thus $f \in AC[0, 2\pi]$ and $f(0) = 0$. Moreover, $f(2\pi) = \lim_{n\to\infty} \int_0^{2\pi} f_n'(t)\,dt = 0$. Conversely, any such $f$ can be approximated by functions in $D(A_0)$ (show this).
Next, let us collect a few important results.

Lemma 2.4. Suppose $A$ is a densely defined operator.

(i) $A^*$ is closed.

(ii) $A$ is closable if and only if $D(A^*)$ is dense, and $\overline{A} = A^{**}$ respectively $(\overline{A})^* = A^*$ in this case.

(iii) If $A$ is injective and $\mathrm{Ran}(A)$ is dense, then $(A^*)^{-1} = (A^{-1})^*$. If $A$ is closable and $\overline{A}$ is injective, then $\overline{A}^{-1} = \overline{A^{-1}}$.
Proof. Let us consider the following two unitary operators from $H^2$ to itself:
$$U(\varphi, \psi) = (\psi, -\varphi), \qquad V(\varphi, \psi) = (\psi, \varphi). \qquad (2.42)$$

(i). From
$$\Gamma(A^*) = \{(\varphi, \tilde{\varphi}) \in H^2 \mid \langle \varphi, A\psi\rangle = \langle \tilde{\varphi}, \psi\rangle\ \forall \psi \in D(A)\}$$
$$= \{(\varphi, \tilde{\varphi}) \in H^2 \mid \langle (-\tilde{\varphi}, \varphi), (\psi, \tilde{\psi})\rangle_{H^2} = 0\ \forall (\psi, \tilde{\psi}) \in \Gamma(A)\}$$
$$= U(\Gamma(A)^\perp) = (U\Gamma(A))^\perp \qquad (2.43)$$
we conclude that $A^*$ is closed.

(ii). From
$$\Gamma(\overline{A}) = \overline{\Gamma(A)} = \Gamma(A)^{\perp\perp} = (U\Gamma(A^*))^\perp = \{(\psi, \tilde{\psi}) \mid \langle \psi, A^*\varphi\rangle - \langle \tilde{\psi}, \varphi\rangle = 0\ \forall \varphi \in D(A^*)\} \qquad (2.44)$$
we see that $(0, \tilde{\psi}) \in \overline{\Gamma(A)}$ if and only if $\tilde{\psi} \in D(A^*)^\perp$. Hence $A$ is closable if and only if $D(A^*)$ is dense. In this case, equation (2.43) also shows $(\overline{A})^* = A^*$. Moreover, replacing $A$ by $A^*$ in (2.43) and comparing with the last formula shows $A^{**} = \overline{A}$.

(iii). Next note that (provided $A$ is injective)
$$\Gamma(A^{-1}) = V\Gamma(A). \qquad (2.45)$$
Hence if $\mathrm{Ran}(A)$ is dense, then $\mathrm{Ker}(A^*) = \mathrm{Ran}(A)^\perp = \{0\}$ and
$$\Gamma((A^*)^{-1}) = V\Gamma(A^*) = VU\Gamma(A)^\perp = UV\Gamma(A)^\perp = U(V\Gamma(A))^\perp \qquad (2.46)$$
shows that $(A^*)^{-1} = (A^{-1})^*$. Similarly, if $A$ is closable and $\overline{A}$ is injective, then $\overline{A}^{-1} = \overline{A^{-1}}$ by
$$\Gamma(\overline{A}^{-1}) = V\Gamma(\overline{A}) = \overline{V\Gamma(A)} = \Gamma(\overline{A^{-1}}). \qquad (2.47)$$
□
If $A \in \mathcal{L}(H)$ we clearly have $D(A^*) = H$ and by Corollary 1.8, $A^* \in \mathcal{L}(H)$. In particular, since $A = A^{**}$ we obtain

Theorem 2.5. We have $A \in \mathcal{L}(H)$ if and only if $A^* \in \mathcal{L}(H)$.
Now we can also generalize Lemma 2.3 to the case of essentially self-adjoint operators.

Lemma 2.6. A symmetric operator $A$ is essentially self-adjoint if and only if one of the following conditions holds for one $z \in \mathbb{C}\setminus\mathbb{R}$:

• $\overline{\mathrm{Ran}(A + z)} = \overline{\mathrm{Ran}(A + z^*)} = H$.

• $\mathrm{Ker}(A^* + z) = \mathrm{Ker}(A^* + z^*) = \{0\}$.

If $A$ is non-negative, that is $\langle \psi, A\psi\rangle \geq 0$ for all $\psi \in D(A)$, we can also admit $z \in (-\infty, 0)$.
Proof. As noted earlier, $\mathrm{Ker}(A^*) = \mathrm{Ran}(A)^\perp$, and hence the two conditions are equivalent. By taking the closure of $A$ it is no restriction to assume that $A$ is closed. Let $z = x + iy$. From
$$\|(A + z)\psi\|^2 = \|(A + x)\psi + iy\psi\|^2 = \|(A + x)\psi\|^2 + y^2\|\psi\|^2 \geq y^2\|\psi\|^2 \qquad (2.48)$$
we infer that $\mathrm{Ker}(A + z) = \{0\}$ and hence $(A + z)^{-1}$ exists. Moreover, setting $\psi = (A + z)^{-1}\varphi$ ($y \neq 0$) shows $\|(A + z)^{-1}\| \leq |y|^{-1}$. Hence $(A + z)^{-1}$ is bounded and closed. Since it is densely defined by assumption, its domain $\mathrm{Ran}(A + z)$ must be equal to $H$. Replacing $z$ by $z^*$ and applying Lemma 2.3 finishes the general case. The argument for the non-negative case with $z < 0$ is similar, using $\varepsilon\|\psi\|^2 \leq |\langle \psi, (A + \varepsilon)\psi\rangle| \leq \|\psi\|\,\|(A + \varepsilon)\psi\|$, which shows $\|(A + \varepsilon)^{-1}\| \leq \varepsilon^{-1}$, $\varepsilon > 0$. □
In addition, we can also prove the closed graph theorem, which shows that an unbounded operator cannot be defined on the entire Hilbert space.

Theorem 2.7 (closed graph). Let $H_1$ and $H_2$ be two Hilbert spaces and $A$ an operator defined on all of $H_1$. Then $A$ is bounded if and only if $\Gamma(A)$ is closed.

Proof. If $A$ is bounded, then it is easy to see that $\Gamma(A)$ is closed. So let us assume that $\Gamma(A)$ is closed. Then $A^*$ is well defined and for all unit vectors $\varphi \in D(A^*)$ we have that the linear functional $\ell_\varphi(\psi) = \langle A^*\varphi, \psi\rangle$ is pointwise bounded,
$$|\ell_\varphi(\psi)| = |\langle \varphi, A\psi\rangle| \leq \|A\psi\|. \qquad (2.49)$$
Hence by the uniform boundedness principle there is a constant $C$ such that $\|\ell_\varphi\| = \|A^*\varphi\| \leq C$. That is, $A^*$ is bounded and so is $A = A^{**}$. □
Finally we want to draw some further consequences of Axiom 2 and show that observables correspond to self-adjoint operators. Since self-adjoint operators are already maximal, the difficult part remaining is to show that an observable has at least one self-adjoint extension. There is a good way of doing this for non-negative operators, and hence we will consider this case first.

An operator is called non-negative (resp. positive) if $\langle \psi, A\psi\rangle \geq 0$ (resp. $> 0$ for $\psi \neq 0$) for all $\psi \in D(A)$. If $A$ is positive, the map $(\varphi, \psi) \mapsto \langle \varphi, A\psi\rangle$ is a scalar product. However, there might be sequences which are Cauchy with respect to this scalar product but not with respect to our original one. To avoid this, we introduce the scalar product
$$\langle \varphi, \psi\rangle_A = \langle \varphi, (A + 1)\psi\rangle, \qquad A \geq 0, \qquad (2.50)$$
defined on $D(A)$, which satisfies $\|\psi\| \leq \|\psi\|_A$. Let $H_A$ be the completion of $D(A)$ with respect to the above scalar product. We claim that $H_A$ can be regarded as a subspace of $H$, that is, $D(A) \subseteq H_A \subseteq H$.
If $(\psi_n)$ is a Cauchy sequence in $D(A)$, then it is also Cauchy in $H$ (since $\|\psi\| \leq \|\psi\|_A$ by assumption), and hence we can identify it with the limit of $(\psi_n)$ regarded as a sequence in $H$. For this identification to be unique, we need to show that if $(\psi_n) \subset D(A)$ is a Cauchy sequence in $H_A$ such that $\|\psi_n\| \to 0$, then $\|\psi_n\|_A \to 0$. This follows from
$$\|\psi_n\|_A^2 = \langle \psi_n, \psi_n - \psi_m\rangle_A + \langle \psi_n, \psi_m\rangle_A \leq \|\psi_n\|_A \|\psi_n - \psi_m\|_A + \|\psi_n\|\,\|(A + 1)\psi_m\| \qquad (2.51)$$
since the right-hand side can be made arbitrarily small by choosing $m, n$ large.
Clearly the quadratic form $q_A$ can be extended to every $\psi \in H_A$ by setting
$$q_A(\psi) = \langle \psi, \psi\rangle_A - \|\psi\|^2, \qquad \psi \in Q(A) = H_A. \qquad (2.52)$$
The set $Q(A)$ is also called the form domain of $A$.
Example. (Multiplication operator) Let $A$ be multiplication by $A(x) \geq 0$ in $L^2(\mathbb{R}^n, d\mu)$. Then
$$Q(A) = D(A^{1/2}) = \{f \in L^2(\mathbb{R}^n, d\mu) \mid A^{1/2} f \in L^2(\mathbb{R}^n, d\mu)\} \qquad (2.53)$$
and
$$q_A(f) = \int_{\mathbb{R}^n} A(x)|f(x)|^2\, d\mu(x) \qquad (2.54)$$
(show this).
Now we come to our extension result. Note that $A + 1$ is injective and the best we can hope for is that for a non-negative extension $\tilde{A}$, $\tilde{A} + 1$ is a bijection from $D(\tilde{A})$ onto $H$.

Lemma 2.8. Suppose $A$ is a non-negative operator. Then there is a non-negative extension $\tilde{A}$ such that $\mathrm{Ran}(\tilde{A} + 1) = H$.
Proof. Let us define an operator $\tilde{A}$ by
$$D(\tilde{A}) = \{\psi \in H_A \mid \exists \tilde{\psi} \in H : \langle \varphi, \psi\rangle_A = \langle \varphi, \tilde{\psi}\rangle\ \forall \varphi \in H_A\},$$
$$\tilde{A}\psi = \tilde{\psi} - \psi. \qquad (2.55)$$
Since $H_A$ is dense, $\tilde{\psi}$ is well-defined. Moreover, it is straightforward to see that $\tilde{A}$ is a non-negative extension of $A$.

It is also not hard to see that $\mathrm{Ran}(\tilde{A} + 1) = H$. Indeed, for any $\tilde{\psi} \in H$, $\varphi \mapsto \langle \tilde{\psi}, \varphi\rangle$ is a bounded linear functional on $H_A$. Hence there is an element $\psi \in H_A$ such that $\langle \tilde{\psi}, \varphi\rangle = \langle \psi, \varphi\rangle_A$ for all $\varphi \in H_A$. By the definition of $\tilde{A}$, $(\tilde{A} + 1)\psi = \tilde{\psi}$ and hence $\tilde{A} + 1$ is onto. □
Now it is time for another

Example. Let us take $H = L^2(0, \pi)$ and consider the operator
$$Af = -\frac{d^2}{dx^2} f, \qquad D(A) = \{f \in C^2[0, \pi] \mid f(0) = f(\pi) = 0\}, \qquad (2.56)$$
which corresponds to the one-dimensional model of a particle confined to a box.
(i). First of all, using integration by parts twice, it is straightforward to check that $A$ is symmetric:
$$\int_0^\pi g(x)^*(-f'')(x)\,dx = \int_0^\pi g'(x)^* f'(x)\,dx = \int_0^\pi (-g'')(x)^* f(x)\,dx. \qquad (2.57)$$
Note that the boundary conditions $f(0) = f(\pi) = 0$ are chosen such that the boundary terms occurring from integration by parts vanish. Moreover, the same calculation also shows that $A$ is positive:
$$\int_0^\pi f(x)^*(-f'')(x)\,dx = \int_0^\pi |f'(x)|^2\,dx > 0, \qquad f \neq 0. \qquad (2.58)$$
(ii). Next let us show $H_A = \{f \in H^1(0, \pi) \mid f(0) = f(\pi) = 0\}$. In fact, since
$$\langle g, f\rangle_A = \int_0^\pi \big(g'(x)^* f'(x) + g(x)^* f(x)\big)\,dx, \qquad (2.59)$$
we see that $f_n$ is Cauchy in $H_A$ if and only if both $f_n$ and $f_n'$ are Cauchy in $L^2(0, \pi)$. Thus $f_n \to f$ and $f_n' \to g$ in $L^2(0, \pi)$, and $f_n(x) = \int_0^x f_n'(t)\,dt$ implies $f(x) = \int_0^x g(t)\,dt$. Thus $f \in AC[0, \pi]$. Moreover, $f(0) = 0$ is obvious, and from $0 = f_n(\pi) = \int_0^\pi f_n'(t)\,dt$ we have $f(\pi) = \lim_{n\to\infty} \int_0^\pi f_n'(t)\,dt = 0$. So we have $H_A \subseteq \{f \in H^1(0, \pi) \mid f(0) = f(\pi) = 0\}$. To see the converse, approximate $f'$ by smooth functions $g_n$. Using $g_n - \frac{1}{\pi}\int_0^\pi g_n(t)\,dt$ instead of $g_n$, it is no restriction to assume $\int_0^\pi g_n(t)\,dt = 0$. Now define $f_n(x) = \int_0^x g_n(t)\,dt$ and note $f_n \in D(A)$ and $f_n \to f$ in $H_A$.
(iii). Finally, let us compute the extension $\tilde{A}$. We have $f \in D(\tilde{A})$ if for all $g \in H_A$ there is an $\tilde{f}$ such that $\langle g, f\rangle_A = \langle g, \tilde{f}\rangle$. That is,
$$\int_0^\pi g'(x)^* f'(x)\,dx = \int_0^\pi g(x)^*(\tilde{f}(x) - f(x))\,dx. \qquad (2.60)$$
Integration by parts on the right-hand side shows
$$\int_0^\pi g'(x)^* f'(x)\,dx = -\int_0^\pi g'(x)^* \int_0^x (\tilde{f}(t) - f(t))\,dt\,dx \qquad (2.61)$$
or equivalently
$$\int_0^\pi g'(x)^* \Big(f'(x) + \int_0^x (\tilde{f}(t) - f(t))\,dt\Big)\,dx = 0. \qquad (2.62)$$
Now observe $\{g' \in H \mid g \in H_A\} = \{h \in H \mid \int_0^\pi h(t)\,dt = 0\} = \{1\}^\perp$ and thus $f'(x) + \int_0^x (\tilde{f}(t) - f(t))\,dt \in \{1\}^{\perp\perp} = \mathrm{span}\{1\}$. So we see $f \in H^2(0, \pi) = \{f \in AC[0, \pi] \mid f' \in H^1(0, \pi)\}$ and $\tilde{A}f = -f''$. The converse is easy and hence
$$\tilde{A}f = -\frac{d^2}{dx^2} f, \qquad D(\tilde{A}) = \{f \in H^2(0, \pi) \mid f(0) = f(\pi) = 0\}. \qquad (2.63)$$
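The operator in (2.63) is explicit enough to be tested numerically. The following sketch (an added illustration, not part of the original text; the grid size is an arbitrary choice) discretizes $-d^2/dx^2$ on $(0, \pi)$ by the standard second-difference matrix, with the Dirichlet conditions $f(0) = f(\pi) = 0$ built in by keeping only interior grid points; the lowest eigenvalues should approach $n^2$, $n = 1, 2, 3, \dots$, the eigenvalues of the box Hamiltonian.

```python
import numpy as np

# Finite-difference model of -d^2/dx^2 on (0, pi) with Dirichlet boundary
# conditions: tridiagonal matrix (2, -1, -1)/h^2 on N interior points.
N = 1000
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
ev = np.linalg.eigvalsh(A)
print(np.round(ev[:4], 3))   # approximately [1, 4, 9, 16] = n^2
```

The $O(h^2)$ discretization error is visible only in the later digits; refining the grid drives the lowest eigenvalues toward the exact values $n^2$.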
Now let us apply this result to operators $A$ corresponding to observables. Since $A$ will, in general, not satisfy the assumptions of our lemma, we will consider $1 + A^2$, $D(1 + A^2) = D(A^2)$, instead, which has a symmetric extension whose range is $H$. By our requirement for observables, $1 + A^2$ is maximally defined and hence is equal to this extension. In other words, $\mathrm{Ran}(1 + A^2) = H$. Moreover, for any $\varphi \in H$ there is a $\psi \in D(A^2)$ such that
$$(A - i)(A + i)\psi = (A + i)(A - i)\psi = \varphi \qquad (2.64)$$
and since $(A \pm i)\psi \in D(A)$, we infer $\mathrm{Ran}(A \pm i) = H$. As an immediate consequence we obtain

Corollary 2.9. Observables correspond to self-adjoint operators.
But there is another important consequence of the results which is worthwhile mentioning.

Theorem 2.10 (Friedrichs extension). Let $A$ be a semi-bounded symmetric operator, that is,
$$q_A(\psi) = \langle \psi, A\psi\rangle \geq \gamma\|\psi\|^2, \qquad \gamma \in \mathbb{R}. \qquad (2.65)$$
Then there is a self-adjoint extension $\tilde{A}$ which is also bounded from below by $\gamma$ and which satisfies $D(\tilde{A}) \subseteq H_{A-\gamma}$.

Proof. Replacing $A$ by $A - \gamma$ we can reduce it to the case considered in Lemma 2.8. The rest is straightforward. □
Problem 2.1. Show $(\alpha A)^* = \alpha^* A^*$ and $(A + B)^* \supseteq A^* + B^*$ (where $D(A^* + B^*) = D(A^*) \cap D(B^*)$) with equality if one operator is bounded. Give an example where equality does not hold.

Problem 2.2. Show (2.21).

Problem 2.3. Show that if $A$ is normal, so is $A + z$ for any $z \in \mathbb{C}$.

Problem 2.4. Show that normal operators are closed.

Problem 2.5. Show that the kernel of a closed operator is closed.

Problem 2.6. Show that if $A$ is bounded and $B$ closed, then $BA$ is closed.

Problem 2.7. Let $A = -\frac{d^2}{dx^2}$, $D(A) = \{f \in H^2(0, \pi) \mid f(0) = f(\pi) = 0\}$, and let $\psi(x) = \frac{1}{2\sqrt{\pi}}\, x(\pi - x)$. Find the error in the following argument: Since $A$ is symmetric we have $1 = \langle A\psi, A\psi\rangle = \langle \psi, A^2\psi\rangle = 0$.

Problem 2.8. Suppose $A$ is a closed operator. Show that $A^* A$ (with $D(A^* A) = \{\psi \in D(A) \mid A\psi \in D(A^*)\}$) is self-adjoint. (Hint: $A^* A \geq 0$.)

Problem 2.9. Show that $A$ is normal if and only if $AA^* = A^* A$.
2.3. Resolvents and spectra

Let $A$ be a (densely defined) closed operator. The resolvent set of $A$ is defined by
$$\rho(A) = \{z \in \mathbb{C} \mid (A - z)^{-1} \in \mathcal{L}(H)\}. \qquad (2.66)$$
More precisely, $z \in \rho(A)$ if and only if $(A - z) : D(A) \to H$ is bijective and its inverse is bounded. By the closed graph theorem (Theorem 2.7), it suffices to check that $A - z$ is bijective. The complement of the resolvent set is called the spectrum
$$\sigma(A) = \mathbb{C}\setminus\rho(A) \qquad (2.67)$$
of $A$. In particular, $z \in \sigma(A)$ if $A - z$ has a nontrivial kernel. A vector $\psi \in \mathrm{Ker}(A - z)$ is called an eigenvector and $z$ is called an eigenvalue in this case.

The function
$$R_A : \rho(A) \to \mathcal{L}(H), \qquad z \mapsto (A - z)^{-1} \qquad (2.68)$$
is called the resolvent of $A$. Note the convenient formula
$$R_A(z)^* = ((A - z)^{-1})^* = ((A - z)^*)^{-1} = (A^* - z^*)^{-1} = R_{A^*}(z^*). \qquad (2.69)$$
In particular,
$$\rho(A^*) = \rho(A)^*. \qquad (2.70)$$
Example. (Multiplication operator) Consider again the multiplication operator
$$(Af)(x) = A(x)f(x), \qquad D(A) = \{f \in L^2(\mathbb{R}^n, d\mu) \mid Af \in L^2(\mathbb{R}^n, d\mu)\}, \qquad (2.71)$$
given by multiplication with the measurable function $A : \mathbb{R}^n \to \mathbb{C}$. Clearly $(A - z)^{-1}$ is given by the multiplication operator
$$(A - z)^{-1} f(x) = \frac{1}{A(x) - z} f(x), \qquad D((A - z)^{-1}) = \{f \in L^2(\mathbb{R}^n, d\mu) \mid \tfrac{1}{A - z} f \in L^2(\mathbb{R}^n, d\mu)\}, \qquad (2.72)$$
whenever this operator is bounded. But $\|(A - z)^{-1}\| = \|\tfrac{1}{A - z}\|_\infty \leq \tfrac{1}{\varepsilon}$ is equivalent to $\mu(\{x \mid |A(x) - z| < \varepsilon\}) = 0$ and hence
$$\rho(A) = \{z \in \mathbb{C} \mid \exists \varepsilon > 0 : \mu(\{x \mid |A(x) - z| < \varepsilon\}) = 0\}. \qquad (2.73)$$
Moreover, $z$ is an eigenvalue of $A$ if $\mu(A^{-1}(\{z\})) > 0$, and $\chi_{A^{-1}(\{z\})}$ is a corresponding eigenfunction in this case.
Example. (Differential operator) Consider again the differential operator
$$Af = -i\frac{d}{dx} f, \qquad D(A) = \{f \in AC[0, 2\pi] \mid f' \in L^2,\ f(0) = f(2\pi)\} \qquad (2.74)$$
in $L^2(0, 2\pi)$. We already know that the eigenvalues of $A$ are the integers and that the corresponding normalized eigenfunctions
$$u_n(x) = \frac{1}{\sqrt{2\pi}} e^{inx} \qquad (2.75)$$
form an orthonormal basis.

To compute the resolvent, we must find the solution of the corresponding inhomogeneous equation $-i f'(x) - z f(x) = g(x)$. By the variation of constants formula the solution is given by (this can also be easily verified directly)
$$f(x) = f(0)e^{izx} + i\int_0^x e^{iz(x-t)} g(t)\,dt. \qquad (2.76)$$
Since $f$ must lie in the domain of $A$, we must have $f(0) = f(2\pi)$, which gives
$$f(0) = \frac{i}{e^{-2\pi i z} - 1} \int_0^{2\pi} e^{-izt} g(t)\,dt, \qquad z \in \mathbb{C}\setminus\mathbb{Z}. \qquad (2.77)$$
(Since $z \in \mathbb{Z}$ are the eigenvalues, the inverse cannot exist in this case.) Hence
$$(A - z)^{-1} g(x) = \int_0^{2\pi} G(z, x, t) g(t)\,dt, \qquad (2.78)$$
where
$$G(z, x, t) = e^{iz(x-t)} \begin{cases} \dfrac{-i}{1 - e^{-2\pi i z}}, & t > x, \\[2mm] \dfrac{i}{1 - e^{2\pi i z}}, & t < x, \end{cases} \qquad z \in \mathbb{C}\setminus\mathbb{Z}. \qquad (2.79)$$
In particular, $\sigma(A) = \mathbb{Z}$.
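Since (2.79) is explicit, it can be checked numerically: applying the integral kernel to the eigenfunction $e^{int}$ must reproduce $e^{inx}/(n - z)$. The following Python sketch (an added illustration; the sample values $z$, $n$, $x_0$ and the grid size are arbitrary choices) integrates the two smooth branches of the kernel separately, splitting at the jump $t = x_0$.

```python
import numpy as np

z, n, x0 = 0.5 + 0.3j, 2, 1.0   # z not in Z, eigenfunction index n, point x0

def G(t):
    # Green's function (2.79) at fixed x = x0, as a function of t
    lower = 1j / (1 - np.exp(2j * np.pi * z))     # branch t < x0
    upper = -1j / (1 - np.exp(-2j * np.pi * z))   # branch t > x0
    return np.exp(1j * z * (x0 - t)) * np.where(t < x0, lower, upper)

def midpoint(f, a, b, m=20000):
    # composite midpoint rule on [a, b]
    t = a + (np.arange(m) + 0.5) * (b - a) / m
    return np.sum(f(t)) * (b - a) / m

# (A - z)^{-1} applied to g(t) = e^{int}, integral split at the jump t = x0
val = midpoint(lambda t: G(t) * np.exp(1j * n * t), 0.0, x0) \
    + midpoint(lambda t: G(t) * np.exp(1j * n * t), x0, 2 * np.pi)
print(abs(val - np.exp(1j * n * x0) / (n - z)) < 1e-6)   # True
```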
If $z, z' \in \rho(A)$, we have the first resolvent formula
$$R_A(z) - R_A(z') = (z - z') R_A(z) R_A(z') = (z - z') R_A(z') R_A(z). \qquad (2.80)$$
In fact,
$$(A - z)^{-1} - (z - z')(A - z)^{-1}(A - z')^{-1} = (A - z)^{-1}\big(1 - (z - A + A - z')(A - z')^{-1}\big) = (A - z')^{-1}, \qquad (2.81)$$
which proves the first equality. The second follows after interchanging $z$ and $z'$. Now fix $z' = z_0$ and use (2.80) recursively to obtain
$$R_A(z) = \sum_{j=0}^n (z - z_0)^j R_A(z_0)^{j+1} + (z - z_0)^{n+1} R_A(z_0)^{n+1} R_A(z). \qquad (2.82)$$
The sequence of bounded operators
$$R_n = \sum_{j=0}^n (z - z_0)^j R_A(z_0)^{j+1} \qquad (2.83)$$
converges to a bounded operator if $|z - z_0| < \|R_A(z_0)\|^{-1}$, and clearly we expect $z \in \rho(A)$ and $R_n \to R_A(z)$ in this case. Let $R_\infty = \lim_{n\to\infty} R_n$ and set $\varphi_n = R_n\psi$, $\varphi = R_\infty\psi$ for some $\psi \in H$. Then a quick calculation shows
$$A R_n \psi = \psi + (z - z_0)\varphi_{n-1} + z_0\varphi_n. \qquad (2.84)$$
Hence $(\varphi_n, A\varphi_n) \to (\varphi, \psi + z\varphi)$ shows $\varphi \in D(A)$ (since $A$ is closed) and $(A - z)R_\infty\psi = \psi$. Similarly, for $\psi \in D(A)$,
$$R_n A\psi = \psi + (z - z_0)\varphi_{n-1} + z_0\varphi_n \qquad (2.85)$$
and hence $R_\infty(A - z)\psi = \psi$ after taking the limit. Thus $R_\infty = R_A(z)$ as anticipated.
If $A$ is bounded, a similar argument verifies the Neumann series for the resolvent
$$R_A(z) = -\sum_{j=0}^{n-1} \frac{A^j}{z^{j+1}} + \frac{1}{z^n} A^n R_A(z) = -\sum_{j=0}^\infty \frac{A^j}{z^{j+1}}, \qquad |z| > \|A\|. \qquad (2.86)$$
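For a bounded operator, the Neumann series can be tested directly on a matrix. The sketch below (an added illustration; the random $5\times 5$ example and the number of terms are arbitrary choices) compares the truncated series with the inverse computed directly.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
z = 3.0 * np.linalg.norm(A, 2) + 1.0     # |z| > ||A||, so the series converges

R_direct = np.linalg.inv(A - z * np.eye(5))
# truncated Neumann series: -sum_j A^j / z^{j+1}
R_series = sum(-np.linalg.matrix_power(A, j) / z**(j + 1) for j in range(80))
print(np.allclose(R_direct, R_series))   # True
```

Since $\|A\|/|z| < 1/3$ here, the truncation error after 80 terms is far below floating-point precision.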
In summary we have proved the following

Theorem 2.11. The resolvent set $\rho(A)$ is open and $R_A : \rho(A) \to \mathcal{L}(H)$ is holomorphic, that is, it has an absolutely convergent power series expansion around every point $z_0 \in \rho(A)$. In addition,
$$\|R_A(z)\| \geq \mathrm{dist}(z, \sigma(A))^{-1} \qquad (2.87)$$
and if $A$ is bounded we have $\{z \in \mathbb{C} \mid |z| > \|A\|\} \subseteq \rho(A)$.
As a consequence we obtain the useful

Lemma 2.12. We have $z \in \sigma(A)$ if there is a sequence $\psi_n \in D(A)$ such that $\|\psi_n\| = 1$ and $\|(A - z)\psi_n\| \to 0$. If $z$ is a boundary point of $\rho(A)$, then the converse is also true. Such a sequence is called a Weyl sequence.

Proof. Let $\psi_n$ be a Weyl sequence. Then $z \in \rho(A)$ is impossible by $1 = \|\psi_n\| = \|R_A(z)(A - z)\psi_n\| \leq \|R_A(z)\|\,\|(A - z)\psi_n\| \to 0$. Conversely, by (2.87) there is a sequence $z_n \to z$ and corresponding vectors $\varphi_n \in H$ such that $\|R_A(z_n)\varphi_n\|\,\|\varphi_n\|^{-1} \to \infty$. Let $\psi_n = R_A(z_n)\varphi_n$ and rescale $\varphi_n$ such that $\|\psi_n\| = 1$. Then $\|\varphi_n\| \to 0$ and hence
$$\|(A - z)\psi_n\| = \|\varphi_n + (z_n - z)\psi_n\| \leq \|\varphi_n\| + |z - z_n| \to 0 \qquad (2.88)$$
shows that $\psi_n$ is a Weyl sequence. □
Let us also note the following spectral mapping result.

Lemma 2.13. Suppose $A$ is injective. Then
$$\sigma(A^{-1})\setminus\{0\} = (\sigma(A)\setminus\{0\})^{-1}. \qquad (2.89)$$
In addition, we have $A\psi = z\psi$ if and only if $A^{-1}\psi = z^{-1}\psi$.

Proof. Suppose $z \in \rho(A)\setminus\{0\}$. Then we claim
$$R_{A^{-1}}(z^{-1}) = -zAR_A(z) = -z - z^2 R_A(z). \qquad (2.90)$$
In fact, the right-hand side is a bounded operator from $H \to \mathrm{Ran}(A) = D(A^{-1})$ and
$$(A^{-1} - z^{-1})(-zAR_A(z))\varphi = (-z + A)R_A(z)\varphi = \varphi, \qquad \varphi \in H. \qquad (2.91)$$
Conversely, if $\psi \in D(A^{-1}) = \mathrm{Ran}(A)$ we have $\psi = A\varphi$ and hence
$$(-zAR_A(z))(A^{-1} - z^{-1})\psi = AR_A(z)((A - z)\varphi) = A\varphi = \psi. \qquad (2.92)$$
Thus $z^{-1} \in \rho(A^{-1})$. The rest follows after interchanging the roles of $A$ and $A^{-1}$. □
Next, let us characterize the spectra of self-adjoint operators.

Theorem 2.14. Let $A$ be symmetric. Then $A$ is self-adjoint if and only if $\sigma(A) \subseteq \mathbb{R}$, and $A \geq 0$ if and only if $\sigma(A) \subseteq [0, \infty)$. Moreover, $\|R_A(z)\| \leq |\mathrm{Im}(z)|^{-1}$ and, if $A \geq 0$, $\|R_A(\lambda)\| \leq |\lambda|^{-1}$, $\lambda < 0$.

Proof. If $\sigma(A) \subseteq \mathbb{R}$, then $\mathrm{Ran}(A + z) = H$, $z \in \mathbb{C}\setminus\mathbb{R}$, and hence $A$ is self-adjoint by Lemma 2.6. Conversely, if $A$ is self-adjoint (resp. $A \geq 0$), then $R_A(z)$ exists for $z \in \mathbb{C}\setminus\mathbb{R}$ (resp. $z \in \mathbb{C}\setminus(-\infty, 0]$) and satisfies the given estimates as has been shown in the proof of Lemma 2.6. □
In particular, we obtain

Theorem 2.15. Let $A$ be self-adjoint. Then
$$\inf \sigma(A) = \inf_{\psi \in D(A),\, \|\psi\| = 1} \langle \psi, A\psi\rangle. \qquad (2.93)$$
For the eigenvalues and corresponding eigenfunctions we have:

Lemma 2.16. Let $A$ be symmetric. Then all eigenvalues are real and eigenvectors corresponding to different eigenvalues are orthogonal.

Proof. If $A\psi_j = \lambda_j\psi_j$, $j = 1, 2$, we have
$$\lambda_1\|\psi_1\|^2 = \langle \psi_1, \lambda_1\psi_1\rangle = \langle \psi_1, A\psi_1\rangle = \langle A\psi_1, \psi_1\rangle = \langle \lambda_1\psi_1, \psi_1\rangle = \lambda_1^*\|\psi_1\|^2 \qquad (2.94)$$
and
$$(\lambda_1 - \lambda_2)\langle \psi_1, \psi_2\rangle = \langle A\psi_1, \psi_2\rangle - \langle \psi_1, A\psi_2\rangle = 0, \qquad (2.95)$$
finishing the proof. □
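In the finite-dimensional case Lemma 2.16 is easy to observe numerically. The sketch below (an added illustration using a random Hermitian matrix; not part of the original text) checks that the eigenvalues are real and the eigenvectors orthonormal.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                  # Hermitian: the matrix analogue of symmetric
ev, V = np.linalg.eigh(A)           # eigh is tailored to Hermitian matrices
print(ev.dtype)                              # float64: the eigenvalues are real
print(np.allclose(V.conj().T @ V, np.eye(4)))  # True: orthonormal eigenvectors
```

Note that `eigh` returns the eigenvectors already orthonormalized, which for a degenerate eigenvalue amounts to the Gram-Schmidt step mentioned below.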
The result does not imply that two linearly independent eigenfunctions to the same eigenvalue are orthogonal. However, it is no restriction to assume that they are, since we can use Gram-Schmidt to find an orthonormal basis for $\mathrm{Ker}(A - \lambda)$. If $H$ is finite dimensional, we can always find an orthonormal basis of eigenvectors. In the infinite dimensional case this is no longer true in general. However, if there is an orthonormal basis of eigenvectors, then $A$ is essentially self-adjoint.

Theorem 2.17. Suppose $A$ is a symmetric operator which has an orthonormal basis of eigenfunctions $\{\varphi_j\}$. Then $A$ is essentially self-adjoint. In particular, it is essentially self-adjoint on $\mathrm{span}\{\varphi_j\}$.
Proof. Consider the set of all finite linear combinations $\psi = \sum_{j=0}^n c_j\varphi_j$, which is dense in $H$. Then $\varphi = \sum_{j=0}^n \frac{c_j}{\lambda_j \pm i}\varphi_j \in D(A)$ and $(A \pm i)\varphi = \psi$ shows that $\mathrm{Ran}(A \pm i)$ is dense. □
In addition, we note the following asymptotic expansion for the resolvent.

Lemma 2.18. Suppose $A$ is self-adjoint. For every $\psi \in H$ we have
$$\lim_{\mathrm{Im}(z)\to\infty} \|A R_A(z)\psi\| = 0. \qquad (2.96)$$
In particular, if $\psi \in D(A^n)$, then
$$R_A(z)\psi = -\sum_{j=0}^n \frac{A^j\psi}{z^{j+1}} + o\Big(\frac{1}{z^{n+1}}\Big), \qquad \text{as } \mathrm{Im}(z) \to \infty. \qquad (2.97)$$

Proof. It suffices to prove the first claim since the second then follows as in (2.86). Write $\psi = \tilde{\psi} + \varphi$, where $\tilde{\psi} \in D(A)$ and $\|\varphi\| \leq \varepsilon$. Then
$$\|A R_A(z)\psi\| \leq \|R_A(z) A\tilde{\psi}\| + \|A R_A(z)\varphi\| \leq \frac{\|A\tilde{\psi}\|}{\mathrm{Im}(z)} + \|\varphi\|, \qquad (2.98)$$
by (2.48), finishing the proof. □
Similarly, we can characterize the spectra of unitary operators. Recall that a bijection $U$ is called unitary if $\langle U\psi, U\psi\rangle = \langle \psi, U^* U\psi\rangle = \langle \psi, \psi\rangle$. Thus $U$ is unitary if and only if
$$U^* = U^{-1}. \qquad (2.99)$$

Theorem 2.19. Let $U$ be unitary. Then $\sigma(U) \subseteq \{z \in \mathbb{C} \mid |z| = 1\}$. All eigenvalues have modulus one and eigenvectors corresponding to different eigenvalues are orthogonal.

Proof. Since $\|U\| \leq 1$ we have $\sigma(U) \subseteq \{z \in \mathbb{C} \mid |z| \leq 1\}$. Moreover, $U^{-1}$ is also unitary and hence $\sigma(U) \subseteq \{z \in \mathbb{C} \mid |z| \geq 1\}$ by Lemma 2.13. If $U\psi_j = z_j\psi_j$, $j = 1, 2$, we have
$$(z_1 - z_2)\langle \psi_1, \psi_2\rangle = \langle U^*\psi_1, \psi_2\rangle - \langle \psi_1, U\psi_2\rangle = 0 \qquad (2.100)$$
since $U\psi = z\psi$ implies $U^*\psi = U^{-1}\psi = z^{-1}\psi = z^*\psi$. □
Problem 2.10. What is the spectrum of an orthogonal projection?

Problem 2.11. Compute the resolvent of $Af = f'$, $D(A) = \{f \in H^1[0, 1] \mid f(0) = 0\}$, and show that unbounded operators can have empty spectrum.

Problem 2.12. Compute the eigenvalues and eigenvectors of $A = -\frac{d^2}{dx^2}$, $D(A) = \{f \in H^2(0, \pi) \mid f(0) = f(\pi) = 0\}$. Compute the resolvent of $A$.
Problem 2.13. Find a Weyl sequence for the self-adjoint operator $A = -\frac{d^2}{dx^2}$, $D(A) = H^2(\mathbb{R})$, for $z \in (0, \infty)$. What is $\sigma(A)$? (Hint: Cut off the solutions of $-u''(x) = z\, u(x)$ outside a finite ball.)

Problem 2.14. Suppose $A$ is bounded. Show that the spectra of $AA^*$ and $A^* A$ coincide away from $0$ by showing
$$R_{AA^*}(z) = \frac{1}{z}\big(A R_{A^* A}(z) A^* - 1\big), \qquad R_{A^* A}(z) = \frac{1}{z}\big(A^* R_{AA^*}(z) A - 1\big). \qquad (2.101)$$
2.4. Orthogonal sums of operators

Let $H_j$, $j = 1, 2$, be two given Hilbert spaces and let $A_j : D(A_j) \to H_j$ be two given operators. Setting $H = H_1 \oplus H_2$ we can define an operator
$$A = A_1 \oplus A_2, \qquad D(A) = D(A_1) \oplus D(A_2) \qquad (2.102)$$
by setting $A(\psi_1 + \psi_2) = A_1\psi_1 + A_2\psi_2$ for $\psi_j \in D(A_j)$. Clearly $A$ is closed, (essentially) self-adjoint, etc., if and only if both $A_1$ and $A_2$ are. The same considerations apply to countable orthogonal sums
$$A = \bigoplus_j A_j, \qquad D(A) = \bigoplus_j D(A_j) \qquad (2.103)$$
and we have

Theorem 2.20. Suppose $A_j$ are self-adjoint operators on $H_j$. Then $A = \bigoplus_j A_j$ is self-adjoint and
$$R_A(z) = \bigoplus_j R_{A_j}(z), \qquad z \in \rho(A) = \mathbb{C}\setminus\sigma(A), \qquad (2.104)$$
where
$$\sigma(A) = \overline{\bigcup_j \sigma(A_j)} \qquad (2.105)$$
(the closure can be omitted if there are only finitely many terms).
Proof. By $\mathrm{Ran}(A \pm i) = (A \pm i)D(A) = \bigoplus_j (A_j \pm i)D(A_j) = \bigoplus_j H_j = H$ we see that $A$ is self-adjoint. Moreover, if $z \in \sigma(A_j)$ there is a corresponding Weyl sequence $\psi_n \in D(A_j) \subseteq D(A)$ and hence $z \in \sigma(A)$. Conversely, if $z \notin \overline{\bigcup_j \sigma(A_j)}$, set $\varepsilon = d(z, \bigcup_j \sigma(A_j)) > 0$. Then $\|R_{A_j}(z)\| \leq \varepsilon^{-1}$ and hence $\|\bigoplus_j R_{A_j}(z)\| \leq \varepsilon^{-1}$ shows that $z \in \rho(A)$. □
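For matrices, Theorem 2.20 reduces to the familiar fact that the spectrum of a block-diagonal matrix is the union of the spectra of its blocks. A minimal sketch (an added illustration with arbitrarily chosen blocks):

```python
import numpy as np

A1 = np.array([[1.0, 0.5], [0.5, 2.0]])   # self-adjoint block on H_1
A2 = np.array([[5.0, 0.0], [0.0, 6.0]])   # self-adjoint block on H_2
Z = np.zeros((2, 2))
A = np.block([[A1, Z], [Z, A2]])          # A = A1 (+) A2, orthogonal sum

spec = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(np.concatenate([np.linalg.eigvalsh(A1),
                                   np.linalg.eigvalsh(A2)]))
print(np.allclose(spec, expected))        # True: sigma(A) = sigma(A1) u sigma(A2)
```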
Conversely, given an operator $A$, it might be useful to write $A$ as an orthogonal sum and investigate each part separately.

Let $H_1 \subseteq H$ be a closed subspace and let $P_1$ be the corresponding projector. We say that $H_1$ reduces the operator $A$ if $P_1 A \subseteq A P_1$. Note that this implies $P_1 D(A) \subseteq D(A)$. Moreover, if we set $H_2 = H_1^\perp$, we have $H = H_1 \oplus H_2$ and $P_2 = I - P_1$ reduces $A$ as well.
Lemma 2.21. Suppose $H_1 \subseteq H$ reduces $A$. Then $A = A_1 \oplus A_2$, where
$$A_j\psi = A\psi, \qquad D(A_j) = P_j D(A) \subseteq D(A). \qquad (2.106)$$
If $A$ is closable, then $H_1$ also reduces $\overline{A}$ and
$$\overline{A} = \overline{A_1} \oplus \overline{A_2}. \qquad (2.107)$$
Proof. As already noted, $P_1 D(A) \subseteq D(A)$ and hence $P_2 D(A) = (I - P_1)D(A) \subseteq D(A)$. Thus we see $D(A) = D(A_1) \oplus D(A_2)$. Moreover, if $\psi \in D(A_j)$ we have $A\psi = A P_j\psi = P_j A\psi \in H_j$ and thus $A_j : D(A_j) \to H_j$, which proves the first claim.

Now let us turn to the second claim. Clearly $\overline{A_1} \oplus \overline{A_2} \subseteq \overline{A}$. Conversely, suppose $\psi \in D(\overline{A})$. Then there is a sequence $\psi_n \in D(A)$ such that $\psi_n \to \psi$ and $A\psi_n \to \overline{A}\psi$. Then $P_j\psi_n \to P_j\psi$ and $A P_j\psi_n = P_j A\psi_n \to P_j\overline{A}\psi$. In particular, $P_j\psi \in D(\overline{A})$ and $\overline{A} P_j\psi = P_j\overline{A}\psi$. □
If $A$ is self-adjoint, then $H_1$ reduces $A$ if $P_1 D(A) \subseteq D(A)$ and $A P_1\psi \in H_1$ for every $\psi \in D(A)$. In fact, if $\psi \in D(A)$ we can write $\psi = \psi_1 \oplus \psi_2$, $\psi_j = P_j\psi \in D(A)$. Since $A P_1\psi = A\psi_1$ and $P_1 A\psi = P_1 A\psi_1 + P_1 A\psi_2 = A\psi_1 + P_1 A\psi_2$, we need to show $P_1 A\psi_2 = 0$. But this follows since
$$\langle \varphi, P_1 A\psi_2\rangle = \langle A P_1\varphi, \psi_2\rangle = 0 \qquad (2.108)$$
for every $\varphi \in D(A)$.
Problem 2.15. Show $(A_1 \oplus A_2)^* = A_1^* \oplus A_2^*$.

2.5. Self-adjoint extensions

It is safe to skip this entire section on first reading.

In many physical applications a symmetric operator is given. If this operator turns out to be essentially self-adjoint, there is a unique self-adjoint extension and everything is fine. However, if it is not, it is important to find out if there are self-adjoint extensions at all (for physical problems there had better be) and to classify them.
In Section 2.2 we have seen that $A$ is essentially self-adjoint if $\mathrm{Ker}(A^* - z) = \mathrm{Ker}(A^* - z^*) = \{0\}$ for one $z \in \mathbb{C}\setminus\mathbb{R}$. Hence self-adjointness is related to the dimension of these spaces, and one calls the numbers
$$d_\pm(A) = \dim K_\pm, \qquad K_\pm = \mathrm{Ran}(A \pm i)^\perp = \mathrm{Ker}(A^* \mp i), \qquad (2.109)$$
defect indices of $A$ (we have chosen $z = i$ for simplicity; any other $z \in \mathbb{C}\setminus\mathbb{R}$ would be as good). If $d_-(A) = d_+(A) = 0$ there is one self-adjoint extension of $A$, namely $\overline{A}$. But what happens in the general case? Is there more than one extension, or maybe none at all? These questions can be answered by virtue of the Cayley transform
$$V = (A - i)(A + i)^{-1} : \mathrm{Ran}(A + i) \to \mathrm{Ran}(A - i). \qquad (2.110)$$
Theorem 2.22. The Cayley transform is a bijection from the set of all symmetric operators $A$ to the set of all isometric operators $V$ (i.e., $\|V\varphi\| = \|\varphi\|$ for all $\varphi \in D(V)$) for which $\mathrm{Ran}(1 + V)$ is dense.

Proof. Since $A$ is symmetric we have $\|(A \pm i)\psi\|^2 = \|A\psi\|^2 + \|\psi\|^2$ for all $\psi \in D(A)$ by a straightforward computation. And thus for every $\varphi = (A + i)\psi \in D(V) = \mathrm{Ran}(A + i)$ we have
$$\|V\varphi\| = \|(A - i)\psi\| = \|(A + i)\psi\| = \|\varphi\|. \qquad (2.111)$$
Next observe
$$1 \pm V = ((A + i) \pm (A - i))(A + i)^{-1} = \begin{cases} 2A(A + i)^{-1}, \\ 2i(A + i)^{-1}, \end{cases} \qquad (2.112)$$
which shows $D(A) = \mathrm{Ran}(1 - V)$ and
$$A = i(1 + V)(1 - V)^{-1}. \qquad (2.113)$$
Conversely, let $V$ be given and use the last equation to define $A$.

Since $V$ is isometric we have $\langle (1 \pm V)\varphi, (1 \mp V)\varphi\rangle = \pm 2i\,\mathrm{Im}\langle V\varphi, \varphi\rangle$ for all $\varphi \in D(V)$ by a straightforward computation. And thus for every $\psi = (1 - V)\varphi \in D(A) = \mathrm{Ran}(1 - V)$ we have
$$\langle A\psi, \psi\rangle = -i\langle (1 + V)\varphi, (1 - V)\varphi\rangle = i\langle (1 - V)\varphi, (1 + V)\varphi\rangle = \langle \psi, A\psi\rangle, \qquad (2.114)$$
that is, $A$ is symmetric. Finally observe
$$A \pm i = i((1 + V) \pm (1 - V))(1 - V)^{-1} = \begin{cases} 2i(1 - V)^{-1}, \\ 2iV(1 - V)^{-1}, \end{cases} \qquad (2.115)$$
which shows that $A$ is the Cayley transform of $V$ and finishes the proof. □
Thus $A$ is self-adjoint if and only if its Cayley transform $V$ is unitary. Moreover, finding a self-adjoint extension of $A$ is equivalent to finding a unitary extension of $V$, and this in turn is equivalent to (taking the closure and) finding a unitary operator from $D(V)^\perp$ to $\mathrm{Ran}(V)^\perp$. This is possible if and only if both spaces have the same dimension, that is, if and only if $d_+(A) = d_-(A)$.
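For a Hermitian matrix the Cayley transform and its inverse can be verified directly. The following Python sketch (an added illustration, not part of the original text; the random $3\times 3$ example is an arbitrary choice) checks that $V$ is unitary and that (2.113) recovers $A$.

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2                              # self-adjoint (real symmetric)
I = np.eye(3)

V = (A - 1j * I) @ np.linalg.inv(A + 1j * I)   # Cayley transform (2.110)
print(np.allclose(V.conj().T @ V, I))          # True: V is unitary
A_back = 1j * (I + V) @ np.linalg.inv(I - V)   # inverse transform (2.113)
print(np.allclose(A_back, A))                  # True: A is recovered
```

In finite dimensions $\mathrm{Ran}(A \pm i)$ is all of the space, so $V$ is automatically unitary; the defect subtleties above only appear for unbounded operators.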
Theorem 2.23. A symmetric operator has self-adjoint extensions if and only if its defect indices are equal.

In this case let $A_1$ be a self-adjoint extension and $V_1$ its Cayley transform. Then
$$D(A_1) = D(A) + (1 - V_1)K_+ = \{\psi + \varphi_+ - V_1\varphi_+ \mid \psi \in D(A),\ \varphi_+ \in K_+\} \qquad (2.116)$$
and
$$A_1(\psi + \varphi_+ - V_1\varphi_+) = A\psi + i\varphi_+ + iV_1\varphi_+. \qquad (2.117)$$
Moreover,
$$(A_1 \pm i)^{-1} = (A \pm i)^{-1} \oplus \frac{\mp i}{2} \sum_j \langle \varphi_j^\pm, \cdot\,\rangle (\varphi_j^\pm - \varphi_j^\mp), \qquad (2.118)$$
where $\{\varphi_j^+\}$ is an orthonormal basis for $K_+$ and $\varphi_j^- = V_1\varphi_j^+$.
Corollary 2.24. Suppose $A$ is a closed symmetric operator with equal defect indices $d = d_+(A) = d_-(A)$. Then $\dim \mathrm{Ker}(A^* - z^*) = d$ for all $z \in \mathbb{C}\setminus\mathbb{R}$.

Proof. First of all we note that instead of $z = i$ we could use $V(z) = (A + z^*)(A + z)^{-1}$ for any $z \in \mathbb{C}\setminus\mathbb{R}$. Let $d_\pm(z) = \dim K_\pm(z)$, $K_+(z) = \mathrm{Ran}(A + z)^\perp$ respectively $K_-(z) = \mathrm{Ran}(A + z^*)^\perp$. The same arguments as before show that there is a one-to-one correspondence between the self-adjoint extensions of $A$ and the unitary operators on $\mathbb{C}^{d(z)}$. Hence $d(z_1) = d(z_2) = d_\pm(A)$. □
Example. Recall the operator A = −i d/dx, D(A) = {f ∈ H^1(0, 2π) | f(0) =
f(2π) = 0} with adjoint A^* = −i d/dx, D(A^*) = H^1(0, 2π).
Clearly
K_± = span{e^{∓x}} (2.119)
is one-dimensional and hence all unitary maps are of the form
V_θ e^{2π−x} = e^{iθ} e^x, θ ∈ [0, 2π). (2.120)
The functions in the domain of the corresponding operator A_θ are given by
f_θ(x) = f(x) + α(e^{2π−x} − e^{iθ} e^x), f ∈ D(A), α ∈ C. (2.121)
In particular, f_θ satisfies
f_θ(2π) = e^{iθ̃} f_θ(0),   e^{iθ̃} = (1 − e^{iθ} e^{2π}) / (e^{2π} − e^{iθ}), (2.122)
and thus we have
D(A_θ) = {f ∈ H^1(0, 2π) | f(2π) = e^{iθ̃} f(0)}. (2.123)
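As a quick numerical sanity check (an illustration, not part of the text), the factor in (2.122) really is a phase: it has modulus one for every θ, so the boundary condition in (2.123) couples the endpoint values by a pure phase, as it must for a self-adjoint boundary condition:

```python
import numpy as np

# |e^{iθ̃}| = |1 - e^{iθ} e^{2π}| / |e^{2π} - e^{iθ}| = 1 for all θ,
# since both numerator and denominator have squared modulus
# 1 + e^{4π} - 2 e^{2π} cos θ.
theta = np.linspace(0, 2 * np.pi, 361, endpoint=False)
a = np.exp(2 * np.pi)                       # e^{2π}
phase = (1 - np.exp(1j * theta) * a) / (a - np.exp(1j * theta))
assert np.allclose(np.abs(phase), 1.0)
```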
Concerning closures we note that a bounded operator is closed if and
only if its domain is closed and any operator is closed if and only if its
inverse is closed. Hence we have
Lemma 2.25. The following items are equivalent.
• A is closed.
• D(V ) = Ran(A+ i) is closed.
• Ran(V ) = Ran(A−i) is closed.
• V is closed.
Next, we give a useful criterion for the existence of self-adjoint extensions.
A skew linear map C : H → H is called a conjugation if it satisfies
C^2 = I and ⟨Cψ, Cϕ⟩ = ⟨ϕ, ψ⟩. The prototypical example is of course
complex conjugation Cψ = ψ^*. An operator A is called C-real if
CD(A) ⊆ D(A), and ACψ = CAψ, ψ ∈ D(A). (2.124)
Note that in this case CD(A) = D(A), since D(A) = C^2 D(A) ⊆ CD(A).
Theorem 2.26. Suppose the symmetric operator A is C-real, then its defect
indices are equal.

Proof. Let {ϕ_j} be an orthonormal set in Ran(A + i)^⊥. Then {Cϕ_j} is an
orthonormal set in Ran(A − i)^⊥. Hence {ϕ_j} is an orthonormal basis for
Ran(A + i)^⊥ if and only if {Cϕ_j} is an orthonormal basis for Ran(A − i)^⊥.
Hence the two spaces have the same dimension.
Finally, we note the following useful formula for the difference of resolvents
of self-adjoint extensions.
Lemma 2.27. If A_j, j = 1, 2 are self-adjoint extensions and if {ϕ_j(z)} is
an orthonormal basis for Ker(A^* − z^*), then
(A_1 − z)^{-1} − (A_2 − z)^{-1} = Σ_{j,k} (α^1_{jk}(z) − α^2_{jk}(z)) ⟨ϕ_k(z), ·⟩ ϕ_j(z^*), (2.125)
where
α^l_{jk}(z) = ⟨ϕ_j(z^*), (A_l − z)^{-1} ϕ_k(z)⟩. (2.126)
Proof. First observe that ((A_1 − z)^{-1} − (A_2 − z)^{-1})ϕ is zero for every ϕ ∈
Ran(A − z). Hence it suffices to consider vectors of the form ϕ =
Σ_j ⟨ϕ_j(z), ϕ⟩ ϕ_j(z) ∈ Ran(A − z)^⊥. Hence we have
(A_1 − z)^{-1} − (A_2 − z)^{-1} = Σ_j ⟨ϕ_j(z), ·⟩ ψ_j(z), (2.127)
where
ψ_j(z) = ((A_1 − z)^{-1} − (A_2 − z)^{-1}) ϕ_j(z). (2.128)
Now computing the adjoint once using ((A_j − z)^{-1})^* = (A_j − z^*)^{-1} and
once using (Σ_j ⟨ψ_j, ·⟩ ϕ_j)^* = Σ_j ⟨ϕ_j, ·⟩ ψ_j we obtain
Σ_j ⟨ϕ_j(z^*), ·⟩ ψ_j(z^*) = Σ_j ⟨ψ_j(z), ·⟩ ϕ_j(z). (2.129)
Evaluating at ϕ_k(z) implies
ψ_k(z) = Σ_j ⟨ψ_j(z^*), ϕ_k(z)⟩ ϕ_j(z^*) (2.130)
and finishes the proof.
Problem 2.16. Compute the defect indices of A_0 = i d/dx, D(A_0) = C_c^∞((0, ∞)).
Can you give a self-adjoint extension of A_0?
2.6. Appendix: Absolutely continuous functions
Let (a, b) ⊆ R be some interval. We denote by
AC(a, b) = {f ∈ C(a, b) | f(x) = f(c) + ∫_c^x g(t)dt, c ∈ (a, b), g ∈ L^1_{loc}(a, b)} (2.131)
the set of all absolutely continuous functions. That is, f is absolutely
continuous if and only if it can be written as the integral of some locally
integrable function. Note that AC(a, b) is a vector space.
By Corollary A.33, f(x) = f(c) + ∫_c^x g(t)dt is differentiable a.e. (with
respect to Lebesgue measure) and f'(x) = g(x). In particular, g is determined
uniquely a.e.
If [a, b] is a compact interval we set
AC[a, b] = {f ∈ AC(a, b) | g ∈ L^1(a, b)} ⊆ C[a, b]. (2.132)
If f, g ∈ AC[a, b] we have the formula of partial integration
∫_a^b f(x)g'(x)dx = f(b)g(b) − f(a)g(a) − ∫_a^b f'(x)g(x)dx. (2.133)
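Formula (2.133) is easy to check numerically; the sketch below does so for one arbitrary choice of smooth (hence absolutely continuous) functions on [0, 1]:

```python
import numpy as np

# Numerical check of (2.133) on [a, b] = [0, 1] with f(x) = x^2 and
# g(x) = sin(x) (arbitrary illustrative choices).
def trapezoid(y, x):
    """Composite trapezoid rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

x = np.linspace(0.0, 1.0, 20001)
f, fp = x**2, 2 * x                 # f and its derivative f'
g, gp = np.sin(x), np.cos(x)        # g and its derivative g'

lhs = trapezoid(f * gp, x)
rhs = f[-1] * g[-1] - f[0] * g[0] - trapezoid(fp * g, x)
assert abs(lhs - rhs) < 1e-6
```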
We set
H^m(a, b) = {f ∈ L^2(a, b) | f^{(j)} ∈ AC(a, b), f^{(j+1)} ∈ L^2(a, b), 0 ≤ j ≤ m − 1}. (2.134)
Then we have

Lemma 2.28. Suppose f ∈ H^m(a, b), m ≥ 1. Then f is bounded and
lim_{x↓a} f^{(j)}(x) respectively lim_{x↑b} f^{(j)}(x) exist for 0 ≤ j ≤ m − 1. Moreover,
the limit is zero if the endpoint is infinite.
Proof. If the endpoint is finite, then f^{(j+1)} is integrable near this endpoint
and hence the claim follows. If the endpoint is infinite, note that
|f^{(j)}(x)|^2 = |f^{(j)}(c)|^2 + 2 ∫_c^x Re(f^{(j)}(t)^* f^{(j+1)}(t))dt (2.135)
shows that the limit exists (dominated convergence). Since f^{(j)} is square
integrable the limit must be zero.
Let me remark that it suffices to check that the function plus the highest
derivative is in L^2; the lower derivatives are then automatically in L^2. That
is,
H^m(a, b) = {f ∈ L^2(a, b) | f^{(j)} ∈ AC(a, b), 0 ≤ j ≤ m − 1, f^{(m)} ∈ L^2(a, b)}. (2.136)
For a finite endpoint this is straightforward. For an infinite endpoint this
can also be shown directly, but it is much easier to use the Fourier transform
(compare Section 7.1).
Problem 2.17. Show (2.133). (Hint: Fubini)
Problem 2.18. Show that H^1(a, b) together with the norm
‖f‖^2_{2,1} = ∫_a^b |f(t)|^2 dt + ∫_a^b |f'(t)|^2 dt (2.137)
is a Hilbert space.
Problem 2.19. What is the closure of C_0^∞(a, b) in H^1(a, b)? (Hint: Start
with the case where (a, b) is finite.)
Chapter 3
The spectral theorem
The time evolution of a quantum mechanical system is governed by the
Schrödinger equation
i d/dt ψ(t) = Hψ(t). (3.1)
If H = C^n, and H is hence a matrix, this system of ordinary differential
equations is solved by the matrix exponential
ψ(t) = exp(−itH)ψ(0). (3.2)
This matrix exponential can be defined by a convergent power series
exp(−itH) = Σ_{n=0}^∞ ((−it)^n / n!) H^n. (3.3)
For this approach the boundedness of H is crucial, which might not be the
case for a quantum system. However, the best way to compute the matrix
exponential, and to understand the underlying dynamics, is to diagonalize
H. But how do we diagonalize a self-adjoint operator? The answer is known
as the spectral theorem.
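In finite dimensions both routes are available, and they agree. The sketch below (the matrix H and time t are arbitrary choices) computes exp(−itH) once from the truncated power series (3.3) and once by diagonalization, and checks that the resulting propagator is unitary:

```python
import numpy as np

# A small Hermitian (here: real symmetric) matrix and a time step.
H = np.array([[2.0, 1.0], [1.0, -1.0]])
t = 0.7

# Route 1: the power series (3.3), truncated once terms are negligible.
U_series = np.zeros_like(H, dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 60):
    U_series += term                       # add (-itH)^{n-1} / (n-1)!
    term = term @ (-1j * t * H) / n

# Route 2: diagonalize H = Q diag(w) Q^T, then
# exp(-itH) = Q diag(e^{-itw}) Q^T.
w, Q = np.linalg.eigh(H)
U_diag = Q @ np.diag(np.exp(-1j * t * w)) @ Q.T

assert np.allclose(U_series, U_diag)
# The propagator is unitary, as expected for self-adjoint H.
assert np.allclose(U_diag.conj().T @ U_diag, np.eye(2))
```

The spectral theorem below extends the second route to unbounded self-adjoint operators, for which the power series is unavailable.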
3.1. The spectral theorem
In this section we want to address the problem of defining functions of a
self-adjoint operator A in a natural way, that is, such that
(f + g)(A) = f(A) + g(A), (fg)(A) = f(A)g(A), (f^*)(A) = f(A)^*. (3.4)
As long as f and g are polynomials, no problems arise. If we want to extend
this definition to a larger class of functions, we will need to perform some
limiting procedure. Hence we could consider convergent power series or
equip the space of polynomials with the sup norm. In both cases this only
works if the operator A is bounded. To overcome this limitation, we will use
characteristic functions χ_Ω(A) instead of powers A^j. Since χ_Ω(λ)^2 = χ_Ω(λ),
the corresponding operators should be orthogonal projections. Moreover,
we should also have χ_R(A) = I and χ_Ω(A) = Σ_{j=1}^n χ_{Ω_j}(A) for any finite
union Ω = ∪_{j=1}^n Ω_j of disjoint sets. The only remaining problem is of course
the definition of χ_Ω(A). However, we will defer this problem and begin
by developing a functional calculus for a family of characteristic functions
χ_Ω(A).
Denote the Borel sigma algebra of R by B. A projection-valued measure
is a map
P : B → L(H), Ω → P(Ω), (3.5)
from the Borel sets to the set of orthogonal projections, that is, P(Ω)^* =
P(Ω) and P(Ω)^2 = P(Ω), such that the following two conditions hold:
(i) P(R) = I.
(ii) If Ω = ∪_n Ω_n with Ω_n ∩ Ω_m = ∅ for n ≠ m, then Σ_n P(Ω_n)ψ =
P(Ω)ψ for every ψ ∈ H (strong σ-additivity).
Note that we require strong convergence, Σ_n P(Ω_n)ψ = P(Ω)ψ, rather
than norm convergence, Σ_n P(Ω_n) = P(Ω). In fact, norm convergence
does not even hold in the simplest case where H = L^2(I) and P(Ω) = χ_Ω
(multiplication operator), since for a multiplication operator the norm is just
the sup norm of the function. Furthermore, it even suffices to require weak
convergence, since w-lim P_n = P for some orthogonal projections implies
s-lim P_n = P by ⟨ψ, P_n ψ⟩ = ⟨ψ, P_n^2 ψ⟩ = ⟨P_n ψ, P_n ψ⟩ = ‖P_n ψ‖^2 together
with Lemma 1.11 (iv).
Example. Let H = C^n and A ∈ GL(n) be some symmetric matrix. Let
λ_1, . . . , λ_m be its (distinct) eigenvalues and let P_j be the projections onto
the corresponding eigenspaces. Then
P_A(Ω) = Σ_{j: λ_j ∈ Ω} P_j (3.6)
is a projection-valued measure.
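This matrix case can be played with directly. The sketch below (matrix and sets are arbitrary choices) builds P_A(Ω) from an eigendecomposition and checks the properties (3.9) and (3.10) stated further down:

```python
import numpy as np

# A small symmetric matrix, with a degenerate eigenvalue on purpose.
A = np.diag([0.0, 1.0, 1.0, 3.0])

def P(omega):
    """Spectral projection onto the eigenvalues λ_j ∈ Ω (Ω as a predicate)."""
    w, Q = np.linalg.eigh(A)
    cols = Q[:, [j for j in range(len(w)) if omega(w[j])]]
    return cols @ cols.T

I = np.eye(4)
assert np.allclose(P(lambda l: True), I)                      # P(R) = I
P1 = P(lambda l: l <= 1.5)                                    # Ω₁ = (-∞, 1.5]
P2 = P(lambda l: l >= 0.5)                                    # Ω₂ = [0.5, ∞)
assert np.allclose(P1 @ P2, P(lambda l: 0.5 <= l <= 1.5))     # (3.10)
assert np.allclose(P1 + P2, I + P(lambda l: 0.5 <= l <= 1.5)) # (3.9)
```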
Example. Let H = L^2(R) and let f be a real-valued measurable function.
Then
P(Ω) = χ_{f^{-1}(Ω)} (3.7)
is a projection-valued measure (Problem 3.2).
It is straightforward to verify that any projection-valued measure satisfies
P(∅) = 0, P(R∖Ω) = I − P(Ω), (3.8)
and
P(Ω_1 ∪ Ω_2) + P(Ω_1 ∩ Ω_2) = P(Ω_1) + P(Ω_2). (3.9)
Moreover, we also have
P(Ω_1)P(Ω_2) = P(Ω_1 ∩ Ω_2). (3.10)
Indeed, suppose Ω_1 ∩ Ω_2 = ∅ first. Then, taking the square of (3.9) we infer
P(Ω_1)P(Ω_2) + P(Ω_2)P(Ω_1) = 0. (3.11)
Multiplying this equation from the right by P(Ω_2) shows that P(Ω_1)P(Ω_2) =
−P(Ω_2)P(Ω_1)P(Ω_2) is self-adjoint and thus P(Ω_1)P(Ω_2) = P(Ω_2)P(Ω_1) = 0.
For the general case Ω_1 ∩ Ω_2 ≠ ∅ we now have
P(Ω_1)P(Ω_2) = (P(Ω_1 − Ω_2) + P(Ω_1 ∩ Ω_2))(P(Ω_2 − Ω_1) + P(Ω_1 ∩ Ω_2))
= P(Ω_1 ∩ Ω_2) (3.12)
as stated.
To every projection-valued measure there corresponds a resolution of
the identity
P(λ) = P((−∞, λ]) (3.13)
which has the properties (Problem 3.3):
(i) P(λ) is an orthogonal projection.
(ii) P(λ_1) ≤ P(λ_2) (that is, ⟨ψ, P(λ_1)ψ⟩ ≤ ⟨ψ, P(λ_2)ψ⟩) or equivalently
Ran(P(λ_1)) ⊆ Ran(P(λ_2)) for λ_1 ≤ λ_2.
(iii) s-lim_{λ_n ↓ λ} P(λ_n) = P(λ) (strong right continuity).
(iv) s-lim_{λ→−∞} P(λ) = 0 and s-lim_{λ→+∞} P(λ) = I.
As before, strong right continuity is equivalent to weak right continuity.
Picking ψ ∈ H, we obtain a finite Borel measure µ_ψ(Ω) = ⟨ψ, P(Ω)ψ⟩ =
‖P(Ω)ψ‖^2 with µ_ψ(R) = ‖ψ‖^2 < ∞. The corresponding distribution function
is given by µ(λ) = ⟨ψ, P(λ)ψ⟩ and since for every distribution function
there is a unique Borel measure (Theorem A.2), for every resolution of the
identity there is a unique projection-valued measure.
Using the polarization identity (2.16) we also have the following complex
Borel measures
µ_{ϕ,ψ}(Ω) = ⟨ϕ, P(Ω)ψ⟩ = (1/4)(µ_{ϕ+ψ}(Ω) − µ_{ϕ−ψ}(Ω) + i µ_{ϕ−iψ}(Ω) − i µ_{ϕ+iψ}(Ω)). (3.14)
Note also that, by Cauchy–Schwarz, |µ_{ϕ,ψ}(Ω)| ≤ ‖ϕ‖ ‖ψ‖.
Now let us turn to integration with respect to our projection-valued
measure. For any simple function f = Σ_{j=1}^n α_j χ_{Ω_j} (where Ω_j = f^{-1}(α_j))
we set
P(f) ≡ ∫_R f(λ)dP(λ) = Σ_{j=1}^n α_j P(Ω_j). (3.15)
In particular, P(χ_Ω) = P(Ω). Then ⟨ϕ, P(f)ψ⟩ = Σ_j α_j µ_{ϕ,ψ}(Ω_j) shows
⟨ϕ, P(f)ψ⟩ = ∫_R f(λ)dµ_{ϕ,ψ}(λ) (3.16)
and, by linearity of the integral, the operator P is a linear map from the set
of simple functions into the set of bounded linear operators on H. Moreover,
‖P(f)ψ‖^2 = Σ_j |α_j|^2 µ_ψ(Ω_j) (the sets Ω_j are disjoint) shows
‖P(f)ψ‖^2 = ∫_R |f(λ)|^2 dµ_ψ(λ). (3.17)
Equipping the set of simple functions with the sup norm, this implies
‖P(f)ψ‖ ≤ ‖f‖_∞ ‖ψ‖, (3.18)
that is, P has norm one. Since the simple functions are dense in the Banach
space of bounded Borel functions B(R), there is a unique extension of P to
a bounded linear operator P : B(R) → L(H) (whose norm is one) from the
bounded Borel functions on R (with sup norm) to the set of bounded linear
operators on H. In particular, (3.16) and (3.17) remain true.
There is some additional structure behind this extension. Recall that
the set L(H) of all bounded linear mappings on H forms a C^*-algebra. A
C^*-algebra homomorphism φ is a linear map between two C^*-algebras which
respects both the multiplication and the adjoint, that is, φ(ab) = φ(a)φ(b)
and φ(a^*) = φ(a)^*.
Theorem 3.1. Let P(Ω) be a projection-valued measure on H. Then the
operator
P : B(R) → L(H), f → ∫_R f(λ)dP(λ) (3.19)
is a C^*-algebra homomorphism with norm one.
In addition, if f_n(x) → f(x) pointwise and if the sequence sup_{λ∈R} |f_n(λ)|
is bounded, then P(f_n) → P(f) strongly.
Proof. The properties P(1) = I, P(f^*) = P(f)^*, and P(fg) = P(f)P(g)
are straightforward for simple functions f. For general f they follow from
continuity. Hence P is a C^*-algebra homomorphism.
The last claim follows from the dominated convergence theorem and
(3.17).
As a consequence, observe
⟨P(g)ϕ, P(f)ψ⟩ = ∫_R g^*(λ)f(λ)dµ_{ϕ,ψ}(λ) (3.20)
and thus
dµ_{P(g)ϕ,P(f)ψ} = g^* f dµ_{ϕ,ψ}. (3.21)
Example. Let H = C^n and A ∈ GL(n) respectively P_A as in the previous
example. Then
P_A(f) = Σ_{j=1}^m f(λ_j) P_j. (3.22)
In particular, P_A(f) = A for f(λ) = λ.
Next we want to define this operator for unbounded Borel functions.
Since we expect the resulting operator to be unbounded, we need a suitable
domain first. Motivated by (3.17) we set
D_f = {ψ ∈ H | ∫_R |f(λ)|^2 dµ_ψ(λ) < ∞}. (3.23)
This is clearly a linear subspace of H since µ_{αψ}(Ω) = |α|^2 µ_ψ(Ω) and since
µ_{ϕ+ψ}(Ω) ≤ 2(µ_ϕ(Ω) + µ_ψ(Ω)) (by the triangle inequality).
For every ψ ∈ D_f, the bounded Borel function
f_n = χ_{Ω_n} f, Ω_n = {λ | |f(λ)| ≤ n}, (3.24)
converges to f in the sense of L^2(R, dµ_ψ). Moreover, because of (3.17),
P(f_n)ψ converges to some vector ψ̃. We define P(f)ψ = ψ̃. By construction,
P(f) is a linear operator such that (3.16) and (3.17) hold.
In addition, D_f is dense. Indeed, let Ω_n be defined as in (3.24) and
abbreviate ψ_n = P(Ω_n)ψ. Now observe that dµ_{ψ_n} = χ_{Ω_n} dµ_ψ and hence
ψ_n ∈ D_f. Moreover, ψ_n → ψ by (3.17) since χ_{Ω_n} → 1 in L^2(R, dµ_ψ).
The operator P(f) has some additional properties. One calls an unbounded
operator A normal if D(A) = D(A^*) and ‖Aψ‖ = ‖A^*ψ‖ for all
ψ ∈ D(A).

Theorem 3.2. For any Borel function f, the operator
P(f) = ∫_R f(λ)dP(λ), D(P(f)) = D_f, (3.25)
is normal and satisfies
P(f)^* = P(f^*). (3.26)
Proof. Let f be given and define f_n, Ω_n as above. Since (3.26) holds for
f_n by our previous theorem, we get
⟨ϕ, P(f)ψ⟩ = ⟨P(f^*)ϕ, ψ⟩ (3.27)
for any ϕ, ψ ∈ D_f = D_{f^*} by continuity. Thus it remains to show that
D(P(f)^*) ⊆ D_f. If ψ ∈ D(P(f)^*) we have ⟨ψ, P(f)ϕ⟩ = ⟨ψ̃, ϕ⟩ for all
ϕ ∈ D_f by definition. Now observe that P(f_n^*)ψ = P(Ω_n)ψ̃ since we have
⟨P(f_n^*)ψ, ϕ⟩ = ⟨ψ, P(f_n)ϕ⟩ = ⟨ψ, P(f)P(Ω_n)ϕ⟩ = ⟨P(Ω_n)ψ̃, ϕ⟩ (3.28)
for any ϕ ∈ H. To see the second equality use P(f_n)ϕ = P(f_m χ_n)ϕ =
P(f_m)P(Ω_n)ϕ for m ≥ n and let m → ∞. This proves existence of the limit
lim_{n→∞} ∫_R |f_n|^2 dµ_ψ(λ) = lim_{n→∞} ‖P(f_n^*)ψ‖^2 = lim_{n→∞} ‖P(Ω_n)ψ̃‖^2 = ‖ψ̃‖^2, (3.29)
which implies f ∈ L^2(R, dµ_ψ), that is, ψ ∈ D_f. That P(f) is normal follows
from ‖P(f)ψ‖^2 = ‖P(f^*)ψ‖^2 = ∫_R |f(λ)|^2 dµ_ψ.
These considerations seem to indicate some kind of correspondence between
the operators P(f) in H and f in L^2(R, dµ_ψ). Recall that U : H → H̃ is
called unitary if it is a bijection which preserves scalar products, ⟨Uϕ, Uψ⟩ =
⟨ϕ, ψ⟩. The operators A in H and Ã in H̃ are said to be unitarily equivalent
if
UA = ÃU, UD(A) = D(Ã). (3.30)
Clearly, A is self-adjoint if and only if Ã is, and σ(A) = σ(Ã).
Now let us return to our original problem and consider the subspace
H_ψ = {P(f)ψ | f ∈ L^2(R, dµ_ψ)} ⊆ H. (3.31)
Observe that this subspace is closed: If ψ_n = P(f_n)ψ converges in H, then
f_n converges to some f in L^2 (since ‖ψ_n − ψ_m‖^2 = ∫ |f_n − f_m|^2 dµ_ψ) and
hence ψ_n → P(f)ψ.
The vector ψ is called cyclic if H_ψ = H. By (3.17), the relation
U_ψ(P(f)ψ) = f (3.32)
defines a unique unitary operator U_ψ : H_ψ → L^2(R, dµ_ψ) such that
U_ψ P(f) = f U_ψ, (3.33)
where f is identified with its corresponding multiplication operator. Moreover,
if f is unbounded we have U_ψ(D_f ∩ H_ψ) = D(f) = {g ∈ L^2(R, dµ_ψ) | fg ∈
L^2(R, dµ_ψ)} (since ϕ = P(f)ψ implies dµ_ϕ = |f|^2 dµ_ψ) and the above equation
still holds.
If ψ is cyclic, our picture is complete. Otherwise we need to extend this
approach. A set {ψ_j}_{j∈J} (J some index set) is called a set of spectral vectors
if ‖ψ_j‖ = 1 and H_{ψ_i} ⊥ H_{ψ_j} for all i ≠ j. A set of spectral vectors is called a
spectral basis if ⊕_j H_{ψ_j} = H. Luckily a spectral basis always exists:
Lemma 3.3. For every projection-valued measure P, there is an (at most
countable) spectral basis {ψ_n} such that
H = ⊕_n H_{ψ_n} (3.34)
and a corresponding unitary operator
U = ⊕_n U_{ψ_n} : H → ⊕_n L^2(R, dµ_{ψ_n}) (3.35)
such that for any Borel function f,
UP(f) = fU, UD_f = D(f). (3.36)
Proof. It suffices to show that a spectral basis exists. This can be easily
done using a Gram–Schmidt type construction. First of all observe that if
{ψ_j}_{j∈J} is a spectral set and ψ ⊥ H_{ψ_j} for all j, we have H_ψ ⊥ H_{ψ_j} for all j.
Indeed, ψ ⊥ H_{ψ_j} implies P(g)ψ ⊥ H_{ψ_j} for every bounded function g since
⟨P(g)ψ, P(f)ψ_j⟩ = ⟨ψ, P(g^* f)ψ_j⟩ = 0. But P(g)ψ with g bounded is dense
in H_ψ, implying H_ψ ⊥ H_{ψ_j}.
Now start with some total set {ψ̃_j}. Normalize ψ̃_1 and choose this to be
ψ_1. Move to the first ψ̃_j which is not in H_{ψ_1}, take the orthogonal complement
with respect to H_{ψ_1} and normalize it. Choose the result to be ψ_2. Proceeding
like this we get a set of spectral vectors {ψ_j} such that span{ψ̃_j} ⊆ ⊕_j H_{ψ_j}.
Hence H, being the closed span of {ψ̃_j}, is contained in ⊕_j H_{ψ_j}.
It is important to observe that the cardinality of a spectral basis is not
well-defined (in contradistinction to the cardinality of an ordinary basis of
the Hilbert space). However, it can be at most equal to the cardinality of an
ordinary basis. In particular, since H is separable, it is at most countable.
The minimal cardinality of a spectral basis is called the spectral multiplicity
of P. If the spectral multiplicity is one, the spectrum is called simple.
Example. Let H = C^2 and A = ( 0 0 ; 0 1 ), the diagonal matrix with
entries 0 and 1. Then ψ_1 = (1, 0) and ψ_2 = (0, 1) are a spectral basis.
However, ψ = (1, 1) is cyclic and hence the spectrum of A is simple. If
A = ( 1 0 ; 0 1 ) there is no cyclic vector (why?) and hence the spectral
multiplicity is two.
Using this canonical form of projection-valued measures it is straightforward
to prove

Lemma 3.4. Let f, g be Borel functions and α, β ∈ C. Then we have
αP(f) + βP(g) ⊆ P(αf + βg), D(αP(f) + βP(g)) = D_{|f|+|g|} (3.37)
and
P(f)P(g) ⊆ P(f g), D(P(f)P(g)) = D_g ∩ D_{f g}. (3.38)
Now observe that to every projection-valued measure P we can assign a
self-adjoint operator A = ∫_R λ dP(λ). The question is whether we can invert
this map. To do this, we consider the resolvent R_A(z) = ∫_R (λ − z)^{-1} dP(λ).
By (3.16) the corresponding quadratic form is given by
F_ψ(z) = ⟨ψ, R_A(z)ψ⟩ = ∫_R 1/(λ − z) dµ_ψ(λ), (3.39)
which is known as the Borel transform of the measure µ_ψ. It can be shown
(see Section 3.4) that F_ψ(z) is a holomorphic map from the upper half
plane to itself. Such functions are called Herglotz functions. Moreover,
the measure µ_ψ can be reconstructed from F_ψ(z) by the Stieltjes inversion
formula
µ_ψ(λ) = lim_{δ↓0} lim_{ε↓0} (1/π) ∫_{−∞}^{λ+δ} Im(F_ψ(t + iε)) dt. (3.40)
Conversely, if F_ψ(z) is a Herglotz function satisfying |F(z)| ≤ M/Im(z), then it
is the Borel transform of a unique measure µ_ψ (given by the Stieltjes inversion
formula).
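The inversion formula (3.40) can be tried out numerically. The sketch below uses an arbitrary discrete test measure µ = 0.3 δ_{−1} + 0.7 δ_{+1}, whose Borel transform is F(z) = 0.3/(−1 − z) + 0.7/(1 − z), and recovers the mass of (−∞, 0] for a small fixed ε (the full formula would then send ε ↓ 0):

```python
import numpy as np

def F(z):
    """Borel transform of μ = 0.3 δ_{-1} + 0.7 δ_{+1}."""
    return 0.3 / (-1 - z) + 0.7 / (1 - z)

eps = 1e-2
t = np.linspace(-50.0, 0.0, 500_001)           # truncates (-∞, 0]
y = np.imag(F(t + 1j * eps)) / np.pi           # (1/π) Im F(t + iε)
mass = np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2   # trapezoid rule

# μ((-∞, 0]) = 0.3, the weight of the point mass at -1.
assert abs(mass - 0.3) < 0.01
```

Shrinking eps (and refining the grid accordingly) drives the error to zero, in line with (3.40).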
So let A be a given self-adjoint operator and consider the expectation of
the resolvent of A,
F_ψ(z) = ⟨ψ, R_A(z)ψ⟩. (3.41)
This function is holomorphic for z ∈ ρ(A) and satisfies
F_ψ(z^*) = F_ψ(z)^* and |F_ψ(z)| ≤ ‖ψ‖^2 / Im(z) (3.42)
(see Theorem 2.14). Moreover, the first resolvent formula (2.80) shows
Im(F_ψ(z)) = Im(z) ‖R_A(z)ψ‖^2, (3.43)
that is, it maps the upper half plane to itself; hence it is a Herglotz function.
So by our above remarks, there is a corresponding measure µ_ψ(λ) given by
the Stieltjes inversion formula. It is called the spectral measure corresponding
to ψ.
More generally, by polarization, for each ϕ, ψ ∈ H we can find a corresponding
complex measure µ_{ϕ,ψ} such that
⟨ϕ, R_A(z)ψ⟩ = ∫_R 1/(λ − z) dµ_{ϕ,ψ}(λ). (3.44)
The measure µ_{ϕ,ψ} is conjugate linear in ϕ and linear in ψ. Moreover, a
comparison with our previous considerations suggests that we define a family of
operators P_A(Ω) via
⟨ϕ, P_A(Ω)ψ⟩ = ∫_R χ_Ω(λ) dµ_{ϕ,ψ}(λ). (3.45)
This is indeed possible by Corollary 1.8 since |⟨ϕ, P_A(Ω)ψ⟩| = |µ_{ϕ,ψ}(Ω)| ≤
‖ϕ‖ ‖ψ‖. The operators P_A(Ω) are non-negative (0 ≤ ⟨ψ, P_A(Ω)ψ⟩ ≤ 1) and
hence self-adjoint.
Lemma 3.5. The family of operators P_A(Ω) forms a projection-valued measure.
Proof. We first show P_A(Ω_1)P_A(Ω_2) = P_A(Ω_1 ∩ Ω_2) in two steps. First
observe (using the first resolvent formula (2.80))
∫_R 1/(λ − z̃) dµ_{R_A(z^*)ϕ,ψ}(λ) = ⟨R_A(z^*)ϕ, R_A(z̃)ψ⟩ = ⟨ϕ, R_A(z)R_A(z̃)ψ⟩
= 1/(z − z̃) (⟨ϕ, R_A(z)ψ⟩ − ⟨ϕ, R_A(z̃)ψ⟩)
= 1/(z − z̃) ∫_R (1/(λ − z) − 1/(λ − z̃)) dµ_{ϕ,ψ}(λ) = ∫_R 1/(λ − z̃) · dµ_{ϕ,ψ}(λ)/(λ − z) (3.46)
implying dµ_{R_A(z^*)ϕ,ψ}(λ) = (λ − z)^{-1} dµ_{ϕ,ψ}(λ) since a Herglotz function is
uniquely determined by its measure. Secondly we compute
∫_R 1/(λ − z) dµ_{ϕ,P_A(Ω)ψ}(λ) = ⟨ϕ, R_A(z)P_A(Ω)ψ⟩ = ⟨R_A(z^*)ϕ, P_A(Ω)ψ⟩
= ∫_R χ_Ω(λ) dµ_{R_A(z^*)ϕ,ψ}(λ) = ∫_R (χ_Ω(λ)/(λ − z)) dµ_{ϕ,ψ}(λ)
implying dµ_{ϕ,P_A(Ω)ψ}(λ) = χ_Ω(λ) dµ_{ϕ,ψ}(λ). Equivalently we have
⟨ϕ, P_A(Ω_1)P_A(Ω_2)ψ⟩ = ⟨ϕ, P_A(Ω_1 ∩ Ω_2)ψ⟩ (3.47)
since χ_{Ω_1} χ_{Ω_2} = χ_{Ω_1 ∩ Ω_2}. In particular, choosing Ω_1 = Ω_2, we see that
P_A(Ω_1) is a projector.
The relation P_A(R) = I follows from (3.93) below and Lemma 2.18, which
imply µ_ψ(R) = ‖ψ‖^2.
Now let Ω = ∪_{n=1}^∞ Ω_n with Ω_n ∩ Ω_m = ∅ for n ≠ m. Then
Σ_{j=1}^n ⟨ψ, P_A(Ω_j)ψ⟩ = Σ_{j=1}^n µ_ψ(Ω_j) → ⟨ψ, P_A(Ω)ψ⟩ = µ_ψ(Ω) (3.48)
by σ-additivity of µ_ψ. Hence P_A is weakly σ-additive, which implies strong
σ-additivity, as pointed out earlier.
Now we can prove the spectral theorem for self-adjoint operators.

Theorem 3.6 (Spectral theorem). To every self-adjoint operator A there
corresponds a unique projection-valued measure P_A such that
A = ∫_R λ dP_A(λ). (3.49)
Proof. Existence has already been established. Moreover, Lemma 3.4 shows
that P_A((λ − z)^{-1}) = R_A(z), z ∈ C∖R. Since the measures µ_{ϕ,ψ} are uniquely
determined by the resolvent and the projection-valued measure is uniquely
determined by the measures µ_{ϕ,ψ}, we are done.
The quadratic form of A is given by
q_A(ψ) = ∫_R λ dµ_ψ(λ) (3.50)
and can be defined for every ψ in the form domain of the self-adjoint operator,
Q(A) = {ψ ∈ H | ∫_R |λ| dµ_ψ(λ) < ∞} (3.51)
(which is larger than the domain D(A) = {ψ ∈ H | ∫_R λ^2 dµ_ψ(λ) < ∞}). This
extends our previous definition for non-negative operators.
Note that if A and Ã are unitarily equivalent as in (3.30), then U R_A(z) =
R_Ã(z) U and hence
dµ_ψ = dµ̃_{Uψ}. (3.52)
In particular, we have U P_A(f) = P_Ã(f) U, U D(P_A(f)) = D(P_Ã(f)).
Finally, let us give a characterization of the spectrum of A in terms of
the associated projectors.

Theorem 3.7. The spectrum of A is given by
σ(A) = {λ ∈ R | P_A((λ − ε, λ + ε)) ≠ 0 for all ε > 0}. (3.53)
Proof. Let Ω_n = (λ_0 − 1/n, λ_0 + 1/n). Suppose P_A(Ω_n) ≠ 0. Then we can find
a ψ_n ∈ P_A(Ω_n)H with ‖ψ_n‖ = 1. Since
‖(A − λ_0)ψ_n‖^2 = ‖(A − λ_0)P_A(Ω_n)ψ_n‖^2 = ∫_R (λ − λ_0)^2 χ_{Ω_n}(λ) dµ_{ψ_n}(λ) ≤ 1/n^2 (3.54)
we conclude λ_0 ∈ σ(A) by Lemma 2.12.
Conversely, if P_A((λ_0 − ε, λ_0 + ε)) = 0, set f_ε(λ) = χ_{R∖(λ_0−ε,λ_0+ε)}(λ)(λ − λ_0)^{-1}. Then
(A − λ_0)P_A(f_ε) = P_A((λ − λ_0)f_ε(λ)) = P_A(R∖(λ_0 − ε, λ_0 + ε)) = I. (3.55)
Similarly P_A(f_ε)(A − λ_0) = I|_{D(A)} and hence λ_0 ∈ ρ(A).
Thus P_A((λ_1, λ_2)) = 0 if and only if (λ_1, λ_2) ⊆ ρ(A) and we have
P_A(σ(A)) = I and P_A(R ∩ ρ(A)) = 0 (3.56)
and consequently
P_A(f) = P_A(σ(A))P_A(f) = P_A(χ_{σ(A)} f). (3.57)
In other words, P_A(f) is not affected by the values of f on R∖σ(A)!
It is clearly more intuitive to write P_A(f) = f(A) and we will do so from
now on. This notation is justified by the elementary observation
P_A(Σ_{j=0}^n α_j λ^j) = Σ_{j=0}^n α_j A^j. (3.58)
Moreover, this also shows that if A is bounded and f(A) can be defined via
a convergent power series, then this agrees with our present definition by
Theorem 3.1.
Problem 3.1. Show that a self-adjoint operator P is a projection if and
only if σ(P) ⊆ {0, 1}.

Problem 3.2. Show that (3.7) is a projection-valued measure. What is the
corresponding operator?

Problem 3.3. Show that P(λ) satisfies the properties (i)–(iv).

Problem 3.4 (Polar decomposition). Let A be a bounded operator and set
|A| = √(A^* A). Show that
‖|A|ψ‖ = ‖Aψ‖.
Conclude that Ker(A) = Ker(|A|) = Ran(|A|)^⊥ and that
U : ϕ = |A|ψ → Aψ if ϕ ∈ Ran(|A|),   ϕ → 0 if ϕ ∈ Ker(|A|)
extends to a well-defined partial isometry, that is, U : Ker(U)^⊥ → H is
norm preserving.
In particular, we have the polar decomposition
A = U|A|.

Problem 3.5. Compute |A| = √(A^* A) of A = ⟨ϕ, ·⟩ψ.
3.2. More on Borel measures
Section 3.1 showed that in order to understand self-adjoint operators, one
needs to understand multiplication operators on L^2(R, dµ), where dµ is a
finite Borel measure. This is the purpose of the present section.
The set of all growth points, that is,
σ(µ) = {λ ∈ R | µ((λ − ε, λ + ε)) > 0 for all ε > 0}, (3.59)
is called the spectrum of µ. Invoking Morera's together with Fubini's theorem
shows that the Borel transform
F(z) = ∫_R 1/(λ − z) dµ(λ) (3.60)
is holomorphic for z ∈ C∖σ(µ). The converse follows from the Stieltjes inversion
formula. Associated with this measure is the operator
Af(λ) = λ f(λ), D(A) = {f ∈ L^2(R, dµ) | λ f(λ) ∈ L^2(R, dµ)}. (3.61)
By Theorem 3.7 the spectrum of A is precisely the spectrum of µ, that is,
σ(A) = σ(µ). (3.62)
Note that 1 ∈ L^2(R, dµ) is a cyclic vector for A and that
dµ_{g,f}(λ) = g(λ)^* f(λ) dµ(λ). (3.63)
Now what can we say about the function f(A) (which is precisely the
multiplication operator by f) of A? We are only interested in the case where
f is real-valued. Introduce the measure
(f⋆µ)(Ω) = µ(f^{-1}(Ω)), (3.64)
then
∫_R g(λ) d(f⋆µ)(λ) = ∫_R g(f(λ)) dµ(λ). (3.65)
In fact, it suffices to check this formula for simple functions g, which follows
since χ_Ω ∘ f = χ_{f^{-1}(Ω)}. In particular, we have
P_{f(A)}(Ω) = χ_{f^{-1}(Ω)}. (3.66)
It is tempting to conjecture that f(A) is unitarily equivalent to multiplication
by λ in L^2(R, d(f⋆µ)) via the map
L^2(R, d(f⋆µ)) → L^2(R, dµ), g → g ∘ f. (3.67)
However, this map is only unitary if its range is L^2(R, dµ), that is, if f is
injective.

Lemma 3.8. Suppose f is injective, then
U : L^2(R, dµ) → L^2(R, d(f⋆µ)), g → g ∘ f^{-1} (3.68)
is a unitary map such that U f(λ) = λ.
Example. Let f(λ) = λ^2, then (g ∘ f)(λ) = g(λ^2) and the range of the
above map is given by the symmetric functions. Note that we can still
get a unitary map L^2(R, d(f⋆µ)) ⊕ L^2(R, χ d(f⋆µ)) → L^2(R, dµ), (g_1, g_2) →
g_1(λ^2) + g_2(λ^2)(χ(λ) − χ(−λ)), where χ = χ_{(0,∞)}.
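The pushforward (3.64)–(3.65) is easiest to see in action on a discrete measure, where a non-injective f merges point masses (the measure and the functions below are arbitrary illustrative choices):

```python
from collections import defaultdict
import numpy as np

# μ = Σ w_j δ_{λ_j}; its pushforward under f is (f⋆μ) = Σ w_j δ_{f(λ_j)}.
lam = np.array([-1.0, 0.5, 1.0])
w = np.array([0.2, 0.3, 0.5])
f = lambda x: x**2                        # not injective: f(±1) = 1

# Build f⋆μ explicitly: the masses at ±1 both land on the point 1.
push = defaultdict(float)
for l, wt in zip(lam, w):
    push[f(l)] += wt
assert abs(push[1.0] - 0.7) < 1e-12       # 0.2 + 0.5 merged

# Check (3.65): ∫ g d(f⋆μ) = ∫ g∘f dμ for a test function g.
g = lambda x: 3 * x + 1
lhs = sum(wt * g(p) for p, wt in push.items())
rhs = float(np.sum(w * g(f(lam))))
assert abs(lhs - rhs) < 1e-12
```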
Lemma 3.9. Let f be real-valued. The spectrum of f(A) is given by
σ(f(A)) = σ(f⋆µ). (3.69)
In particular,
σ(f(A)) ⊆ \overline{f(σ(A))}, (3.70)
where equality holds if f is continuous and the closure can be dropped if, in
addition, σ(A) is bounded (i.e., compact).
Proof. If λ_0 ∈ σ(f⋆µ), then the sequence g_n = µ(Ω_n)^{-1/2} χ_{Ω_n}, Ω_n =
f^{-1}((λ_0 − 1/n, λ_0 + 1/n)), satisfies ‖g_n‖ = 1, ‖(f(A) − λ_0)g_n‖ < n^{-1} and hence
λ_0 ∈ σ(f(A)). Conversely, if λ_0 ∉ σ(f⋆µ), then µ(Ω_n) = 0 for some n and
hence we can change f on Ω_n such that f(R) ∩ (λ_0 − 1/n, λ_0 + 1/n) = ∅ without
changing the corresponding operator. Thus (f(A) − λ_0)^{-1} = (f(λ) − λ_0)^{-1}
exists and is bounded, implying λ_0 ∉ σ(f(A)).
If f is continuous, f^{-1}((f(λ) − ε, f(λ) + ε)) contains an open interval
around λ and hence f(λ) ∈ σ(f(A)) if λ ∈ σ(A). If, in addition, σ(A) is
compact, then f(σ(A)) is compact and hence closed.
Whether two operators with simple spectrum are unitarily equivalent can be
read off from the corresponding measures:
Lemma 3.10. Let A_1, A_2 be self-adjoint operators with simple spectrum and
corresponding spectral measures µ_1 and µ_2 of cyclic vectors. Then A_1 and
A_2 are unitarily equivalent if and only if µ_1 and µ_2 are mutually absolutely
continuous.
Proof. Without restriction we can assume that A_j is multiplication by λ
in L^2(R, dµ_j). Let U : L^2(R, dµ_1) → L^2(R, dµ_2) be a unitary map such that
U A_1 = A_2 U. Then we also have U f(A_1) = f(A_2) U for any bounded Borel
function and hence
U f(λ) = U f(λ) · 1 = f(λ) U(1)(λ) (3.71)
and thus U is multiplication by u(λ) = U(1)(λ). Moreover, since U is
unitary we have
µ_1(Ω) = ∫_R |χ_Ω|^2 dµ_1 = ∫_R |u χ_Ω|^2 dµ_2 = ∫_Ω |u|^2 dµ_2, (3.72)
that is, dµ_1 = |u|^2 dµ_2. Reversing the roles of A_1 and A_2 we obtain dµ_2 =
|v|^2 dµ_1, where v = U^{-1} 1.
The converse is left as an exercise (Problem 3.10).
Next we recall the unique decomposition of µ with respect to Lebesgue
measure,
dµ = dµ_ac + dµ_s, (3.73)
where µ_ac is absolutely continuous with respect to Lebesgue measure (i.e.,
we have µ_ac(B) = 0 for all B with Lebesgue measure zero) and µ_s is singular
with respect to Lebesgue measure (i.e., µ_s is supported, µ_s(R∖B) = 0, on
a set B with Lebesgue measure zero). The singular part µ_s can be further
decomposed into a (singularly) continuous and a pure point part,
dµ_s = dµ_sc + dµ_pp, (3.74)
where µ_sc is continuous on R and µ_pp is a step function. Since the measures
dµ_ac, dµ_sc, and dµ_pp are mutually singular, they have mutually disjoint
supports M_ac, M_sc, and M_pp. Note that these sets are not unique. We will
choose them such that M_pp is the set of all jumps of µ(λ) and such that M_sc
has Lebesgue measure zero.
To the sets M_ac, M_sc, and M_pp correspond projectors P_ac = χ_{M_ac}(A),
P_sc = χ_{M_sc}(A), and P_pp = χ_{M_pp}(A) satisfying P_ac + P_sc + P_pp = I. In
other words, we have a corresponding direct sum decomposition of both our
Hilbert space
L^2(R, dµ) = L^2(R, dµ_ac) ⊕ L^2(R, dµ_sc) ⊕ L^2(R, dµ_pp) (3.75)
and our operator A,
A = (A P_ac) ⊕ (A P_sc) ⊕ (A P_pp). (3.76)
The corresponding spectra, σ_ac(A) = σ(µ_ac), σ_sc(A) = σ(µ_sc), and σ_pp(A) =
σ(µ_pp), are called the absolutely continuous, singularly continuous, and pure
point spectrum of A, respectively.
It is important to observe that σ_pp(A) is in general not equal to the set
of eigenvalues
σ_p(A) = {λ ∈ R | λ is an eigenvalue of A} (3.77)
since we only have σ_pp(A) = \overline{σ_p(A)}.
Example. Let H = ℓ^2(N) and let A be given by A δ_n = (1/n) δ_n, where δ_n
is the sequence which is 1 at the n-th place and zero else (that is, A is
a diagonal matrix with diagonal elements 1/n). Then σ_p(A) = {1/n | n ∈ N}
but σ(A) = σ_pp(A) = σ_p(A) ∪ {0}. To see this, just observe that δ_n is the
eigenvector corresponding to the eigenvalue 1/n and for z ∉ σ(A) we have
R_A(z) δ_n = (n/(1 − nz)) δ_n. At z = 0 this formula still gives the inverse of A, but
it is unbounded and hence 0 ∈ σ(A) but 0 ∉ σ_p(A). Since a continuous
measure cannot live on a single point, and hence also not on a countable set,
we have σ_ac(A) = σ_sc(A) = ∅.
Example. An example with purely absolutely continuous spectrum is given
by taking µ to be Lebesgue measure. An example with purely singularly
continuous spectrum is given by taking µ to be the Cantor measure.
Problem 3.6. Construct a multiplication operator on L^2(R) which has
dense point spectrum.
Problem 3.7. Let λ be Lebesgue measure on R. Show that if f ∈ AC(R)
with f' > 0, then
d(f⋆λ) = 1/(f'(f^{-1}(λ))) dλ. (3.78)
Problem 3.8. Let dµ(λ) = χ_{[0,1]}(λ) dλ and f(λ) = χ_{(−∞,t]}(λ), t ∈ R. Compute
f⋆µ.
Problem 3.9. Let A be the multiplication operator by the Cantor function
in L^2(0, 1). Compute the spectrum of A. Determine the spectral types.
Problem 3.10. Show the missing direction in the proof of Lemma 3.10.
3.3. Spectral types
Our next aim is to transfer the results of the previous section to arbitrary
self-adjoint operators A using Lemma 3.3. To do this we will need a spectral
measure which contains the information from all measures in a spectral basis.
This will be the case if there is a vector ψ such that for every ϕ ∈ H its
spectral measure µ_ϕ is absolutely continuous with respect to µ_ψ. Such a
vector will be called a maximal spectral vector of A and µ_ψ will be
called a maximal spectral measure of A.
Lemma 3.11. For every self-adjoint operator A there is a maximal spectral
vector.

Proof. Let {ψ_j}_{j∈J} be a spectral basis and choose nonzero numbers ε_j with
Σ_{j∈J} |ε_j|^2 = 1. Then I claim that
ψ = Σ_{j∈J} ε_j ψ_j (3.79)
is a maximal spectral vector. Let ϕ be given, then we can write it as ϕ =
Σ_j f_j(A)ψ_j and hence dµ_ϕ = Σ_j |f_j|^2 dµ_{ψ_j}. But µ_ψ(Ω) = Σ_{j∈J} |ε_j|^2 µ_{ψ_j}(Ω) = 0
implies µ_{ψ_j}(Ω) = 0 for every j ∈ J and thus µ_ϕ(Ω) = 0.
A set $\{\psi_j\}$ of spectral vectors is called ordered if $\psi_k$ is a maximal spectral vector for $A$ restricted to $(\bigoplus_{j=1}^{k-1} H_{\psi_j})^\perp$. As in the unordered case one can show
Theorem 3.12. For every self-adjoint operator there is an ordered spectral basis.

Observe that if $\{\psi_j\}$ is an ordered spectral basis, then $\mu_{\psi_{j+1}}$ is absolutely continuous with respect to $\mu_{\psi_j}$.
If $\mu$ is a maximal spectral measure we have $\sigma(A) = \sigma(\mu)$ and the following generalization of Lemma 3.9 holds.

Theorem 3.13 (Spectral mapping). Let $\mu$ be a maximal spectral measure and let $f$ be real-valued. Then the spectrum of $f(A)$ is given by
$$\sigma(f(A)) = \{\lambda \in \mathbb{R} \mid \mu(f^{-1}((\lambda-\varepsilon, \lambda+\varepsilon))) > 0 \text{ for all } \varepsilon > 0\}. \qquad (3.80)$$
In particular,
$$\sigma(f(A)) \subseteq \overline{f(\sigma(A))}, \qquad (3.81)$$
where equality holds if $f$ is continuous and the closure can be dropped if, in addition, $\sigma(A)$ is bounded.
Next, we want to introduce the splitting (3.75) for arbitrary self-adjoint operators $A$. It is tempting to pick a spectral basis and treat each summand in the direct sum separately. However, since it is not clear that this approach is independent of the spectral basis chosen, we use the more sophisticated definition
$$H_{ac} = \{\psi \in H \mid \mu_\psi \text{ is absolutely continuous}\},$$
$$H_{sc} = \{\psi \in H \mid \mu_\psi \text{ is singularly continuous}\},$$
$$H_{pp} = \{\psi \in H \mid \mu_\psi \text{ is pure point}\}. \qquad (3.82)$$
Lemma 3.14. We have
$$H = H_{ac} \oplus H_{sc} \oplus H_{pp}. \qquad (3.83)$$
There are Borel sets $M_{xx}$ such that the projector onto $H_{xx}$ is given by $P_{xx} = \chi_{M_{xx}}(A)$, $xx \in \{ac, sc, pp\}$. In particular, the subspaces $H_{xx}$ reduce $A$. For the sets $M_{xx}$ one can choose the corresponding supports of some maximal spectral measure $\mu$.
Proof. We will use the unitary operator $U$ of Lemma 3.3. Pick $\varphi \in H$ and write $\varphi = \sum_n \varphi_n$ with $\varphi_n \in H_{\psi_n}$. Let $f_n = U\varphi_n$, then, by construction of the unitary operator $U$, $\varphi_n = f_n(A)\psi_n$ and hence $d\mu_{\varphi_n} = |f_n|^2\,d\mu_{\psi_n}$. Moreover, since the subspaces $H_{\psi_n}$ are orthogonal, we have
$$d\mu_\varphi = \sum_n |f_n|^2\,d\mu_{\psi_n} \qquad (3.84)$$
and hence
$$d\mu_{\varphi,xx} = \sum_n |f_n|^2\,d\mu_{\psi_n,xx}, \quad xx \in \{ac, sc, pp\}. \qquad (3.85)$$
This shows
$$U H_{xx} = \bigoplus_n L^2(\mathbb{R}, d\mu_{\psi_n,xx}), \quad xx \in \{ac, sc, pp\} \qquad (3.86)$$
and reduces our problem to the considerations of the previous section.
The absolutely continuous, singularly continuous, and pure point spectrum of $A$ are defined as
$$\sigma_{ac}(A) = \sigma(A|_{H_{ac}}), \quad \sigma_{sc}(A) = \sigma(A|_{H_{sc}}), \quad \text{and} \quad \sigma_{pp}(A) = \sigma(A|_{H_{pp}}), \qquad (3.87)$$
respectively. If $\mu$ is a maximal spectral measure we have $\sigma_{ac}(A) = \sigma(\mu_{ac})$, $\sigma_{sc}(A) = \sigma(\mu_{sc})$, and $\sigma_{pp}(A) = \sigma(\mu_{pp})$.
If $A$ and $\tilde A$ are unitarily equivalent via $U$, then so are $A|_{H_{xx}}$ and $\tilde A|_{\tilde H_{xx}}$ by (3.52). In particular, $\sigma_{xx}(A) = \sigma_{xx}(\tilde A)$.
Problem 3.11. Compute $\sigma(A)$, $\sigma_{ac}(A)$, $\sigma_{sc}(A)$, and $\sigma_{pp}(A)$ for the multiplication operator $A = \frac{1}{1+x^2}$ in $L^2(\mathbb{R})$. What is its spectral multiplicity?
3.4. Appendix: The Herglotz theorem
A holomorphic function $F : \mathbb{C}_+ \to \mathbb{C}_+$, $\mathbb{C}_\pm = \{z \in \mathbb{C} \mid \pm\mathrm{Im}(z) > 0\}$, is called a Herglotz function. We can define $F$ on $\mathbb{C}_-$ using $F(z^*) = F(z)^*$.

Suppose $\mu$ is a finite Borel measure. Then its Borel transform is defined via
$$F(z) = \int_{\mathbb{R}} \frac{d\mu(\lambda)}{\lambda - z}. \qquad (3.88)$$
Theorem 3.15. The Borel transform of a finite Borel measure is a Herglotz function satisfying
$$|F(z)| \le \frac{\mu(\mathbb{R})}{\mathrm{Im}(z)}, \quad z \in \mathbb{C}_+. \qquad (3.89)$$
Moreover, the measure $\mu$ can be reconstructed via the Stieltjes inversion formula
$$\frac{1}{2}\big(\mu((\lambda_1,\lambda_2)) + \mu([\lambda_1,\lambda_2])\big) = \lim_{\varepsilon\downarrow 0} \frac{1}{\pi}\int_{\lambda_1}^{\lambda_2} \mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon))\,d\lambda. \qquad (3.90)$$

Proof. By Morera's and Fubini's theorems, $F$ is holomorphic on $\mathbb{C}_+$ and the remaining properties follow from $0 < \mathrm{Im}((\lambda-z)^{-1})$ and $|\lambda-z|^{-1} \le \mathrm{Im}(z)^{-1}$. The Stieltjes inversion formula follows from Fubini's theorem and the dominated convergence theorem since
$$\frac{1}{2\pi\mathrm{i}}\int_{\lambda_1}^{\lambda_2}\Big(\frac{1}{x-\lambda-\mathrm{i}\varepsilon} - \frac{1}{x-\lambda+\mathrm{i}\varepsilon}\Big)d\lambda \to \frac{1}{2}\big(\chi_{[\lambda_1,\lambda_2]}(x) + \chi_{(\lambda_1,\lambda_2)}(x)\big) \qquad (3.91)$$
pointwise.
Observe
$$\mathrm{Im}(F(z)) = \mathrm{Im}(z)\int_{\mathbb{R}} \frac{d\mu(\lambda)}{|\lambda-z|^2} \qquad (3.92)$$
and
$$\lim_{\lambda\to\infty} \lambda\,\mathrm{Im}(F(\mathrm{i}\lambda)) = \mu(\mathbb{R}). \qquad (3.93)$$
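Both (3.90) and (3.93) are easy to check numerically in the simplest case. The following sketch (the choice $\mu = \delta_0$, with Borel transform $F(z) = -1/z$, is ours) recovers $\mu((-1,1)) = 1$ from the boundary values of $\mathrm{Im}\,F$:

```python
import numpy as np

# Numerical sketch of the Stieltjes inversion formula (3.90) for the point
# mass mu = delta_0, whose Borel transform is F(z) = -1/z: integrating
# Im F(lambda + i*eps)/pi over (-1, 1) recovers mu((-1, 1)) = 1 as eps -> 0.
eps = 1e-4
lam = np.linspace(-1.0, 1.0, 200001)
dlam = lam[1] - lam[0]
F = -1.0 / (lam + 1j * eps)
integral = F.imag.sum() * dlam / np.pi
assert abs(integral - 1.0) < 1e-3

# The normalization (3.93): lambda * Im F(i*lambda) -> mu(R) = 1.
assert np.isclose(1e6 * (-1.0 / (1j * 1e6)).imag, 1.0)
```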
The converse of the previous theorem is also true.

Theorem 3.16. Suppose $F$ is a Herglotz function satisfying
$$|F(z)| \le \frac{M}{\mathrm{Im}(z)}, \quad z \in \mathbb{C}_+. \qquad (3.94)$$
Then there is a unique Borel measure $\mu$, satisfying $\mu(\mathbb{R}) \le M$, such that $F$ is the Borel transform of $\mu$.
Proof. We abbreviate $F(z) = v(z) + \mathrm{i}\,w(z)$ and $z = x + \mathrm{i}\,y$. Next we choose a contour
$$\Gamma = \{x + \mathrm{i}\varepsilon + \lambda \mid \lambda \in [-R,R]\} \cup \{x + \mathrm{i}\varepsilon + Re^{\mathrm{i}\varphi} \mid \varphi \in [0,\pi]\} \qquad (3.95)$$
and note that $z$ lies inside $\Gamma$ and $z^* + 2\mathrm{i}\varepsilon$ lies outside $\Gamma$ if $0 < \varepsilon < y < R$. Hence we have by Cauchy's formula
$$F(z) = \frac{1}{2\pi\mathrm{i}}\int_\Gamma \Big(\frac{1}{\zeta - z} - \frac{1}{\zeta - z^* - 2\mathrm{i}\varepsilon}\Big)F(\zeta)\,d\zeta. \qquad (3.96)$$
Inserting the explicit form of $\Gamma$ we see
$$F(z) = \frac{1}{\pi}\int_{-R}^{R} \frac{y-\varepsilon}{\lambda^2 + (y-\varepsilon)^2}\,F(x+\mathrm{i}\varepsilon+\lambda)\,d\lambda + \frac{\mathrm{i}}{\pi}\int_0^\pi \frac{y-\varepsilon}{R^2 e^{2\mathrm{i}\varphi} + (y-\varepsilon)^2}\,F(x+\mathrm{i}\varepsilon+Re^{\mathrm{i}\varphi})Re^{\mathrm{i}\varphi}\,d\varphi. \qquad (3.97)$$
The integral over the semicircle vanishes as $R \to \infty$ and hence we obtain
$$F(z) = \frac{1}{\pi}\int_{\mathbb{R}} \frac{y-\varepsilon}{(\lambda-x)^2 + (y-\varepsilon)^2}\,F(\lambda+\mathrm{i}\varepsilon)\,d\lambda \qquad (3.98)$$
and taking imaginary parts
$$w(z) = \int_{\mathbb{R}} \phi_\varepsilon(\lambda) w_\varepsilon(\lambda)\,d\lambda, \qquad (3.99)$$
where $\phi_\varepsilon(\lambda) = (y-\varepsilon)/((\lambda-x)^2+(y-\varepsilon)^2)$ and $w_\varepsilon(\lambda) = w(\lambda+\mathrm{i}\varepsilon)/\pi$. Letting $y \to \infty$ we infer from our bound
$$\int_{\mathbb{R}} w_\varepsilon(\lambda)\,d\lambda \le M. \qquad (3.100)$$
In particular, since $|\phi_\varepsilon(\lambda) - \phi_0(\lambda)| \le \mathrm{const}\,\varepsilon$ we have
$$w(z) = \lim_{\varepsilon\downarrow 0}\int_{\mathbb{R}} \phi_0(\lambda)\,d\mu_\varepsilon(\lambda), \qquad (3.101)$$
where $\mu_\varepsilon(\lambda) = \int_{-\infty}^{\lambda} w_\varepsilon(x)\,dx$. It remains to establish that the monotone functions $\mu_\varepsilon$ converge properly. Since $0 \le \mu_\varepsilon(\lambda) \le M$, there is a convergent subsequence for fixed $\lambda$. Moreover, by the standard diagonal trick, there is even a subsequence $\varepsilon_n$ such that $\mu_{\varepsilon_n}(\lambda)$ converges for each rational $\lambda$. For irrational $\lambda_0$ we set $\mu(\lambda_0) = \inf_{\lambda \ge \lambda_0}\{\mu(\lambda) \mid \lambda\ \text{rational}\}$. Then $\mu(\lambda)$ is monotone, $0 \le \mu(\lambda_1) \le \mu(\lambda_2) \le M$ for $\lambda_1 \le \lambda_2$, and we claim
$$w(z) = \int_{\mathbb{R}} \phi_0(\lambda)\,d\mu(\lambda). \qquad (3.102)$$
Fix $\delta > 0$ and let $\lambda_1 < \lambda_2 < \cdots < \lambda_{m+1}$ be rational numbers such that
$$|\lambda_{j+1} - \lambda_j| \le \delta \quad\text{and}\quad \lambda_1 \le x - \sqrt{\frac{y^3}{\delta}}, \quad \lambda_{m+1} \ge x + \sqrt{\frac{y^3}{\delta}}. \qquad (3.103)$$
Then (since $|\phi_0'(\lambda)| \le y^{-2}$)
$$|\phi_0(\lambda) - \phi_0(\lambda_j)| \le \frac{\delta}{y^2}, \quad \lambda_j \le \lambda \le \lambda_{j+1}, \qquad (3.104)$$
and
$$|\phi_0(\lambda)| \le \frac{\delta}{y^2}, \quad \lambda \le \lambda_1 \text{ or } \lambda_{m+1} \le \lambda. \qquad (3.105)$$
Now observe
$$\Big|\int_{\mathbb{R}} \phi_0(\lambda)\,d\mu(\lambda) - \int_{\mathbb{R}} \phi_0(\lambda)\,d\mu_{\varepsilon_n}(\lambda)\Big| \le \Big|\int_{\mathbb{R}} \phi_0(\lambda)\,d\mu(\lambda) - \sum_{j=1}^{m} \phi_0(\lambda_j)\big(\mu(\lambda_{j+1}) - \mu(\lambda_j)\big)\Big|$$
$$+ \Big|\sum_{j=1}^{m} \phi_0(\lambda_j)\big(\mu(\lambda_{j+1}) - \mu(\lambda_j) - \mu_{\varepsilon_n}(\lambda_{j+1}) + \mu_{\varepsilon_n}(\lambda_j)\big)\Big|$$
$$+ \Big|\int_{\mathbb{R}} \phi_0(\lambda)\,d\mu_{\varepsilon_n}(\lambda) - \sum_{j=1}^{m} \phi_0(\lambda_j)\big(\mu_{\varepsilon_n}(\lambda_{j+1}) - \mu_{\varepsilon_n}(\lambda_j)\big)\Big|. \qquad (3.106)$$
The first and third terms can be bounded by $2M\delta/y^2$. Moreover, since $\phi_0(\lambda) \le 1/y$ we can find an $N \in \mathbb{N}$ such that
$$|\mu(\lambda_j) - \mu_{\varepsilon_n}(\lambda_j)| \le \frac{y}{2m}\delta, \quad n \ge N, \qquad (3.107)$$
and hence the second term is bounded by $\delta$. In summary, the difference in (3.106) can be made arbitrarily small.
Now $F(z)$ and $\int_{\mathbb{R}} (\lambda-z)^{-1}\,d\mu(\lambda)$ have the same imaginary part and thus they only differ by a real constant. By our bound this constant must be zero.
The Radon–Nikodym derivative of $\mu$ can be obtained from the boundary values of $F$.
Theorem 3.17. Let $\mu$ be a finite Borel measure and $F$ its Borel transform, then
$$(\underline{D}\mu)(\lambda) \le \liminf_{\varepsilon\downarrow 0} \frac{1}{\pi}\mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) \le \limsup_{\varepsilon\downarrow 0} \frac{1}{\pi}\mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) \le (\overline{D}\mu)(\lambda). \qquad (3.108)$$
Proof. We need to estimate
$$\mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) = \int_{\mathbb{R}} K_\varepsilon(t-\lambda)\,d\mu(t), \quad K_\varepsilon(t) = \frac{\varepsilon}{t^2+\varepsilon^2}. \qquad (3.109)$$
We first split the integral into two parts
$$\mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) = \int_{I_\delta} K_\varepsilon(t-\lambda)\,d\mu(t) + \int_{\mathbb{R}\setminus I_\delta} K_\varepsilon(t-\lambda)\,d\mu(t), \quad I_\delta = (\lambda-\delta, \lambda+\delta). \qquad (3.110)$$
Clearly the second part can be estimated by
$$\int_{\mathbb{R}\setminus I_\delta} K_\varepsilon(t-\lambda)\,d\mu(t) \le K_\varepsilon(\delta)\mu(\mathbb{R}). \qquad (3.111)$$
To estimate the first part we integrate
$$K_\varepsilon'(s)\,ds\,d\mu(t) \qquad (3.112)$$
over the triangle $\{(s,t) \mid \lambda - s < t < \lambda + s,\ 0 < s < \delta\} = \{(s,t) \mid \lambda - \delta < t < \lambda + \delta,\ |t-\lambda| < s < \delta\}$ and obtain
$$\int_0^\delta \mu(I_s) K_\varepsilon'(s)\,ds = \int_{I_\delta} \big(K_\varepsilon(\delta) - K_\varepsilon(t-\lambda)\big)\,d\mu(t). \qquad (3.113)$$
Now suppose there are constants $c$ and $C$ such that $c \le \frac{\mu(I_s)}{2s} \le C$, $0 \le s \le \delta$, then
$$2c\arctan\Big(\frac{\delta}{\varepsilon}\Big) \le \int_{I_\delta} K_\varepsilon(t-\lambda)\,d\mu(t) \le 2C\arctan\Big(\frac{\delta}{\varepsilon}\Big) \qquad (3.114)$$
since
$$\delta K_\varepsilon(\delta) + \int_0^\delta \big(-s K_\varepsilon'(s)\big)\,ds = \arctan\Big(\frac{\delta}{\varepsilon}\Big). \qquad (3.115)$$
Thus the claim follows combining both estimates.
As a consequence of Theorem A.34 and Theorem A.35 we obtain

Theorem 3.18. Let $\mu$ be a finite Borel measure and $F$ its Borel transform, then the limit
$$\mathrm{Im}(F(\lambda)) = \lim_{\varepsilon\downarrow 0} \mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) \qquad (3.116)$$
exists a.e. with respect to both $\mu$ and Lebesgue measure (finite or infinite) and
$$(D\mu)(\lambda) = \frac{1}{\pi}\mathrm{Im}(F(\lambda)) \qquad (3.117)$$
whenever $(D\mu)(\lambda)$ exists.
Moreover, the set $\{\lambda \mid \mathrm{Im}(F(\lambda)) = \infty\}$ is a support for the singular part and $\{\lambda \mid \mathrm{Im}(F(\lambda)) < \infty\}$ is a support for the absolutely continuous part.

In particular,

Corollary 3.19. The measure $\mu$ is purely absolutely continuous on $I$ if $\limsup_{\varepsilon\downarrow 0} \mathrm{Im}(F(\lambda+\mathrm{i}\varepsilon)) < \infty$ for all $\lambda \in I$.
Chapter 4
Applications of the
spectral theorem
This chapter can be mostly skipped on first reading. You might want to have a look at the first section and then come back to the remaining ones later.
Now let us show how the spectral theorem can be used. We will give a few typical applications:

Firstly we will derive an operator-valued version of Stieltjes' inversion formula. To do this, we need to show how to integrate a family of functions of $A$ with respect to a parameter. Moreover, we will show that these integrals can be evaluated by computing the corresponding integrals of the complex-valued functions.

Secondly we will consider commuting operators and show how certain facts, which are known to hold for the resolvent of an operator $A$, can be established for a larger class of functions.

Then we will show how the eigenvalues below the essential spectrum and the dimension of $\mathrm{Ran}\,P_A(\Omega)$ can be estimated using the quadratic form.

Finally, we will investigate tensor products of operators.
4.1. Integral formulas
We begin with the first task by having a closer look at the projectors $P_A(\Omega)$. They project onto subspaces corresponding to expectation values in the set $\Omega$. In particular, the number
$$\langle\psi, \chi_\Omega(A)\psi\rangle \qquad (4.1)$$
is the probability for a measurement of $a$ to lie in $\Omega$. In addition, we have
$$\langle\psi, A\psi\rangle = \int_\Omega \lambda\,d\mu_\psi(\lambda) \in \mathrm{hull}(\Omega), \quad \psi \in P_A(\Omega)H,\ \|\psi\| = 1, \qquad (4.2)$$
where $\mathrm{hull}(\Omega)$ is the convex hull of $\Omega$.

The space $\mathrm{Ran}\,\chi_{\{\lambda_0\}}(A)$ is called the eigenspace corresponding to $\lambda_0$ since we have
$$\langle\varphi, A\psi\rangle = \int_{\mathbb{R}} \lambda\,\chi_{\{\lambda_0\}}(\lambda)\,d\mu_{\varphi,\psi}(\lambda) = \lambda_0 \int_{\mathbb{R}} d\mu_{\varphi,\psi}(\lambda) = \lambda_0\langle\varphi, \psi\rangle \qquad (4.3)$$
and hence $A\psi = \lambda_0\psi$ for all $\psi \in \mathrm{Ran}\,\chi_{\{\lambda_0\}}(A)$. The dimension of the eigenspace is called the multiplicity of the eigenvalue.
Moreover, since
$$\lim_{\varepsilon\downarrow 0} \frac{-\mathrm{i}\varepsilon}{\lambda - \lambda_0 - \mathrm{i}\varepsilon} = \chi_{\{\lambda_0\}}(\lambda) \qquad (4.4)$$
we infer from Theorem 3.1 that
$$\lim_{\varepsilon\downarrow 0} -\mathrm{i}\varepsilon R_A(\lambda_0 + \mathrm{i}\varepsilon)\psi = \chi_{\{\lambda_0\}}(A)\psi. \qquad (4.5)$$
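Formula (4.5) is easy to see in action for a Hermitian matrix (a numerical sketch; the particular matrix and the value of $\varepsilon$ are our choices):

```python
import numpy as np

# Finite-dimensional illustration of (4.5): for a Hermitian matrix A and an
# eigenvalue lambda0, -i*eps*R_A(lambda0 + i*eps) approaches the orthogonal
# projection onto the corresponding eigenspace as eps -> 0.
A = np.diag([0.0, 1.0, 1.0, 3.0])     # eigenvalue 1 has multiplicity 2
lam0, eps = 1.0, 1e-6
R = np.linalg.inv(A - (lam0 + 1j * eps) * np.eye(4))   # resolvent R_A(z) = (A - z)^{-1}
P = -1j * eps * R                                      # approximates chi_{lam0}(A)
P_exact = np.diag([0.0, 1.0, 1.0, 0.0])
assert np.allclose(P, P_exact, atol=1e-5)
```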
Similarly, we can obtain an operator-valued version of Stieltjes' inversion formula. But first we need to recall a few facts from integration in Banach spaces.

We will consider the case of mappings $f : I \to X$ where $I = [t_0, t_1] \subset \mathbb{R}$ is a compact interval and $X$ is a Banach space. As before, a function $f : I \to X$ is called simple if the image of $f$ is finite, $f(I) = \{x_i\}_{i=1}^n$, and if each inverse image $f^{-1}(x_i)$, $1 \le i \le n$, is a Borel set. The set of simple functions $S(I, X)$ forms a linear space and can be equipped with the sup norm
$$\|f\|_\infty = \sup_{t\in I} \|f(t)\|. \qquad (4.6)$$
The corresponding Banach space obtained after completion is called the set of regulated functions $R(I, X)$.
Observe that $C(I, X) \subset R(I, X)$. In fact, consider the simple function $f_n = \sum_{i=0}^{n-1} f(s_i)\chi_{[s_i, s_{i+1})}$, where $s_i = t_0 + i\frac{t_1-t_0}{n}$. Since $f \in C(I, X)$ is uniformly continuous, we infer that $f_n$ converges uniformly to $f$.
For $f \in S(I, X)$ we can define a linear map $\int : S(I, X) \to X$ by
$$\int_I f(t)\,dt = \sum_{i=1}^n x_i\,|f^{-1}(x_i)|, \qquad (4.7)$$
where $|\Omega|$ denotes the Lebesgue measure of $\Omega$. This map satisfies
$$\Big\|\int_I f(t)\,dt\Big\| \le \|f\|_\infty (t_1 - t_0) \qquad (4.8)$$
and hence it can be extended uniquely to a linear map $\int : R(I, X) \to X$ with the same norm $(t_1 - t_0)$ by Theorem 0.24. We even have
$$\Big\|\int_I f(t)\,dt\Big\| \le \int_I \|f(t)\|\,dt, \qquad (4.9)$$
which clearly holds for $f \in S(I, X)$ and thus for all $f \in R(I, X)$ by continuity. In addition, if $\ell \in X^*$ is a continuous linear functional, then
$$\ell\Big(\int_I f(t)\,dt\Big) = \int_I \ell(f(t))\,dt, \quad f \in R(I, X). \qquad (4.10)$$
In particular, if $A(t) \in R(I, \mathcal{L}(H))$, then
$$\Big(\int_I A(t)\,dt\Big)\psi = \int_I \big(A(t)\psi\big)\,dt. \qquad (4.11)$$
If $I = \mathbb{R}$, we say that $f : I \to X$ is integrable if $f \in R([-r,r], X)$ for all $r > 0$ and if $\|f(t)\|$ is integrable. In this case we can set
$$\int_{\mathbb{R}} f(t)\,dt = \lim_{r\to\infty} \int_{[-r,r]} f(t)\,dt \qquad (4.12)$$
and (4.9) and (4.10) still hold.
We will use the standard notation $\int_{t_2}^{t_3} f(s)\,ds = \int_I \chi_{(t_2,t_3)}(s)f(s)\,ds$ and $\int_{t_3}^{t_2} f(s)\,ds = -\int_{t_2}^{t_3} f(s)\,ds$.
We write $f \in C^1(I, X)$ if
$$\frac{d}{dt}f(t) = \lim_{\varepsilon\to 0} \frac{f(t+\varepsilon) - f(t)}{\varepsilon} \qquad (4.13)$$
exists for all $t \in I$. In particular, if $f \in C(I, X)$, then $F(t) = \int_{t_0}^t f(s)\,ds \in C^1(I, X)$ and $dF/dt = f$ as can be seen from
$$\|F(t+\varepsilon) - F(t) - f(t)\varepsilon\| = \Big\|\int_t^{t+\varepsilon} \big(f(s) - f(t)\big)\,ds\Big\| \le |\varepsilon| \sup_{s\in[t,t+\varepsilon]} \|f(s) - f(t)\|. \qquad (4.14)$$
The important facts for us are the following two results.

Lemma 4.1. Suppose $f : I \times \mathbb{R} \to \mathbb{C}$ is a bounded Borel function such that $f(\cdot, \lambda)$ is continuous, and set $F(\lambda) = \int_I f(t,\lambda)\,dt$. Let $A$ be self-adjoint. Then $f(t, A) \in R(I, \mathcal{L}(H))$ and
$$F(A) = \int_I f(t, A)\,dt \quad\text{respectively}\quad F(A)\psi = \int_I f(t, A)\psi\,dt. \qquad (4.15)$$

Proof. That $f(t, A) \in R(I, \mathcal{L}(H))$ follows from the spectral theorem, since it is no restriction to assume that $A$ is multiplication by $\lambda$ in some $L^2$ space.
We compute
$$\Big\langle\varphi, \Big(\int_I f(t, A)\,dt\Big)\psi\Big\rangle = \int_I \langle\varphi, f(t, A)\psi\rangle\,dt = \int_I \int_{\mathbb{R}} f(t, \lambda)\,d\mu_{\varphi,\psi}(\lambda)\,dt$$
$$= \int_{\mathbb{R}} \int_I f(t, \lambda)\,dt\,d\mu_{\varphi,\psi}(\lambda) = \int_{\mathbb{R}} F(\lambda)\,d\mu_{\varphi,\psi}(\lambda) = \langle\varphi, F(A)\psi\rangle \qquad (4.16)$$
by Fubini's theorem and hence the first claim follows.
Lemma 4.2. Suppose $f : \mathbb{R} \to \mathcal{L}(H)$ is integrable and $A \in \mathcal{L}(H)$. Then
$$A\int_{\mathbb{R}} f(t)\,dt = \int_{\mathbb{R}} A f(t)\,dt \quad\text{respectively}\quad \Big(\int_{\mathbb{R}} f(t)\,dt\Big)A = \int_{\mathbb{R}} f(t)A\,dt. \qquad (4.17)$$

Proof. It suffices to prove the case where $f$ is simple and of compact support. But for such functions the claim is straightforward.
Now we can prove Stone's formula.

Theorem 4.3 (Stone's formula). Let $A$ be self-adjoint, then
$$\frac{1}{2\pi\mathrm{i}}\int_{\lambda_1}^{\lambda_2} \big(R_A(\lambda+\mathrm{i}\varepsilon) - R_A(\lambda-\mathrm{i}\varepsilon)\big)\,d\lambda \to \frac{1}{2}\big(P_A([\lambda_1,\lambda_2]) + P_A((\lambda_1,\lambda_2))\big) \qquad (4.18)$$
strongly.

Proof. The result follows combining Lemma 4.1 with Theorem 3.1 and (3.91).
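Stone's formula can be verified numerically for a diagonal matrix (a sketch; the eigenvalues, interval, and $\varepsilon$ below are our choices). Eigenvalues inside $(\lambda_1, \lambda_2)$ are picked up with weight $1$, those outside with weight $0$:

```python
import numpy as np

# Numerical sketch of Stone's formula (4.18) for A = diag(a_k): integrating
# the resolvent difference over (lam1, lam2) recovers the diagonal of the
# spectral projector onto the eigenvalues inside the interval.
a = np.array([0.0, 1.0, 2.0, 5.0])
lam1, lam2, eps = 0.5, 2.5, 1e-3          # (lam1, lam2) contains 1 and 2 only

lam = np.linspace(lam1, lam2, 200001)
dlam = lam[1] - lam[0]
diff = 1.0/(a[None, :] - lam[:, None] - 1j*eps) - 1.0/(a[None, :] - lam[:, None] + 1j*eps)
P_diag = (diff.sum(axis=0) * dlam / (2j * np.pi)).real
assert np.allclose(P_diag, [0.0, 1.0, 1.0, 0.0], atol=1e-2)
```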
Problem 4.1. Let $\Gamma$ be a differentiable Jordan curve in $\rho(A)$. Show
$$\chi_\Omega(A) = -\frac{1}{2\pi\mathrm{i}}\oint_\Gamma R_A(z)\,dz, \qquad (4.19)$$
where $\Omega$ is the intersection of the interior of $\Gamma$ with $\mathbb{R}$ and $\Gamma$ is positively oriented.
4.2. Commuting operators
Now we come to commuting operators. As a preparation we can now prove

Lemma 4.4. Let $K \subseteq \mathbb{R}$ be closed and let $C_\infty(K)$ be the set of all continuous functions on $K$ which vanish at $\infty$ (if $K$ is unbounded) with the sup norm. The $*$-algebra generated by the function
$$\lambda \mapsto \frac{1}{\lambda - z} \qquad (4.20)$$
for one $z \in \mathbb{C}\setminus K$ is dense in $C_\infty(K)$.
Proof. If $K$ is compact, the claim follows directly from the complex Stone–Weierstraß theorem since $(\lambda_1 - z)^{-1} = (\lambda_2 - z)^{-1}$ implies $\lambda_1 = \lambda_2$. Otherwise, replace $K$ by $\tilde K = K \cup \{\infty\}$, which is compact, and set $(\infty - z)^{-1} = 0$. Then we can again apply the complex Stone–Weierstraß theorem to conclude that our $*$-subalgebra is equal to $\{f \in C(\tilde K) \mid f(\infty) = 0\}$ which is equivalent to $C_\infty(K)$.
We say that two bounded operators $A$, $B$ commute if
$$[A, B] = AB - BA = 0. \qquad (4.21)$$
If $A$ or $B$ is unbounded, we soon run into trouble with this definition since the above expression might not even make sense for any nonzero vector (e.g., take $B = \langle\varphi, \cdot\rangle\psi$ with $\psi \notin D(A)$). To avoid this nuisance we will replace $A$ by a bounded function of $A$. A good candidate is the resolvent. Hence if $A$ is self-adjoint and $B$ is bounded we will say that $A$ and $B$ commute if
$$[R_A(z), B] = [R_A(z^*), B] = 0 \qquad (4.22)$$
for one $z \in \rho(A)$.
Lemma 4.5. Suppose $A$ is self-adjoint and commutes with the bounded operator $B$. Then
$$[f(A), B] = 0 \qquad (4.23)$$
for any bounded Borel function $f$. If $f$ is unbounded, the claim holds for any $\psi \in D(f(A))$.

Proof. Equation (4.22) tells us that (4.23) holds for any $f$ in the $*$-subalgebra generated by $R_A(z)$. Since this subalgebra is dense in $C_\infty(\sigma(A))$, the claim follows for all such $f \in C_\infty(\sigma(A))$. Next fix $\psi \in H$ and let $f$ be bounded. Choose a sequence $f_n \in C_\infty(\sigma(A))$ converging to $f$ in $L^2(\mathbb{R}, d\mu_\psi)$. Then
$$Bf(A)\psi = \lim_{n\to\infty} Bf_n(A)\psi = \lim_{n\to\infty} f_n(A)B\psi = f(A)B\psi. \qquad (4.24)$$
If $f$ is unbounded, let $\psi \in D(f(A))$ and choose $f_n$ as in (3.24). Then
$$f(A)B\psi = \lim_{n\to\infty} f_n(A)B\psi = \lim_{n\to\infty} Bf_n(A)\psi \qquad (4.25)$$
shows $f \in L^2(\mathbb{R}, d\mu_{B\psi})$ (i.e., $B\psi \in D(f(A))$) and $f(A)B\psi = Bf(A)\psi$.
Corollary 4.6. If $A$ is self-adjoint and bounded, then (4.22) holds if and only if (4.21) holds.

Proof. Since $\sigma(A)$ is compact, we have $\lambda \in C_\infty(\sigma(A))$ and hence (4.21) follows from (4.23) by our lemma. Conversely, since $B$ commutes with any polynomial of $A$, the claim follows from the Neumann series.
As another consequence we obtain

Theorem 4.7. Suppose $A$ is self-adjoint and has simple spectrum. A bounded operator $B$ commutes with $A$ if and only if $B = f(A)$ for some bounded Borel function $f$.

Proof. Let $\psi$ be a cyclic vector for $A$. By our unitary equivalence it is no restriction to assume $H = L^2(\mathbb{R}, d\mu_\psi)$. Then
$$Bg(\lambda) = B\big(g(\lambda)\cdot 1\big) = g(\lambda)(B1)(\lambda) \qquad (4.26)$$
since $B$ commutes with the multiplication operator $g(\lambda)$. Hence $B$ is multiplication by $f(\lambda) = (B1)(\lambda)$.

The assumption that the spectrum of $A$ is simple is crucial as the example $A = \mathbb{I}$ shows. Note that the functions $\exp(-\mathrm{i}tA)$ can also be used instead of resolvents.
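Theorem 4.7 is transparent in finite dimensions (a numerical sketch; the matrices below are our choices): a matrix commuting with a Hermitian $A$ with distinct eigenvalues is diagonal in the same basis, hence a polynomial in $A$.

```python
import numpy as np

# Finite-dimensional illustration of Theorem 4.7: A Hermitian with simple
# spectrum; any B commuting with A is diagonal in the same basis, hence
# B = f(A) for the (interpolating polynomial) function f(a_k) = b_k.
rng = np.random.default_rng(0)
a = np.array([0.0, 1.0, 2.0])                  # distinct eigenvalues -> simple spectrum
b = np.array([5.0, -1.0, 3.0])
Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthonormal eigenbasis
A = Q @ np.diag(a) @ Q.T
B = Q @ np.diag(b) @ Q.T                       # commutes with A by construction
assert np.allclose(A @ B - B @ A, 0)

c = np.polyfit(a, b, 2)                        # degree-2 polynomial with f(a_k) = b_k
fA = c[0] * A @ A + c[1] * A + c[2] * np.eye(3)
assert np.allclose(fA, B)                      # B = f(A)
```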
Lemma 4.8. Suppose $A$ is self-adjoint and $B$ is bounded. Then $B$ commutes with $A$ if and only if
$$[e^{-\mathrm{i}At}, B] = 0 \qquad (4.27)$$
for all $t \in \mathbb{R}$.

Proof. It suffices to show $[\hat f(A), B] = 0$ for $f \in \mathcal{S}(\mathbb{R})$, since these functions are dense in $C_\infty(\mathbb{R})$ by the complex Stone–Weierstraß theorem. Here $\hat f$ denotes the Fourier transform of $f$, see Section 7.1. But for such $f$ we have
$$[\hat f(A), B] = \frac{1}{\sqrt{2\pi}}\Big[\int_{\mathbb{R}} f(t)e^{-\mathrm{i}At}\,dt, B\Big] = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}} f(t)[e^{-\mathrm{i}At}, B]\,dt = 0 \qquad (4.28)$$
by Lemma 4.2.
The extension to the case where $B$ is self-adjoint and unbounded is straightforward. We say that $A$ and $B$ commute in this case if
$$[R_A(z_1), R_B(z_2)] = [R_A(z_1^*), R_B(z_2)] = 0 \qquad (4.29)$$
for one $z_1 \in \rho(A)$ and one $z_2 \in \rho(B)$ (the claim for $z_2^*$ follows by taking adjoints). From our above analysis it follows that this is equivalent to
$$[e^{-\mathrm{i}At}, e^{-\mathrm{i}Bs}] = 0, \quad t, s \in \mathbb{R}, \qquad (4.30)$$
respectively
$$[f(A), g(B)] = 0 \qquad (4.31)$$
for arbitrary bounded Borel functions $f$ and $g$.
4.3. The min-max theorem

In many applications a self-adjoint operator has a number of eigenvalues below the bottom of the essential spectrum. The essential spectrum is obtained from the spectrum by removing all discrete eigenvalues with finite multiplicity (we will have a closer look at it in Section 6.2). In general there is no way of computing the lowest eigenvalues and their corresponding eigenfunctions explicitly. However, one often has some idea of what the eigenfunctions might approximately look like.
So suppose we have a normalized function $\psi_1$ which is an approximation for the eigenfunction $\varphi_1$ of the lowest eigenvalue $E_1$. Then by Theorem 2.15 we know that
$$\langle\psi_1, A\psi_1\rangle \ge \langle\varphi_1, A\varphi_1\rangle = E_1. \qquad (4.32)$$
If we add some free parameters to $\psi_1$, one can optimize them and obtain quite good upper bounds for the first eigenvalue.
But is there also something one can say about the next eigenvalues? Suppose we know the first eigenfunction $\varphi_1$, then we can restrict $A$ to the orthogonal complement of $\varphi_1$ and proceed as before: $E_2$ will be the infimum over all expectations restricted to this subspace. If we restrict to the orthogonal complement of an approximating eigenfunction $\psi_1$, there will still be a component in the direction of $\varphi_1$ left and hence the infimum of the expectations will be lower than $E_2$. Thus the optimal choice $\psi_1 = \varphi_1$ will give the maximal value $E_2$.
More precisely, let $\{\varphi_j\}_{j=1}^N$ be an orthonormal basis for the space spanned by the eigenfunctions corresponding to eigenvalues below the essential spectrum. Assume they satisfy $(A - E_j)\varphi_j = 0$, where $E_j \le E_{j+1}$ are the eigenvalues (counted according to their multiplicity). If the number of eigenvalues $N$ is finite we set $E_j = \inf \sigma_{ess}(A)$ for $j > N$ and choose $\varphi_j$ orthonormal such that $\|(A - E_j)\varphi_j\| \le \varepsilon$.

Define
$$U(\psi_1, \ldots, \psi_n) = \{\psi \in D(A) \mid \|\psi\| = 1,\ \psi \in \mathrm{span}\{\psi_1, \ldots, \psi_n\}^\perp\}. \qquad (4.33)$$
(i) We have
$$\inf_{\psi \in U(\psi_1,\ldots,\psi_{n-1})} \langle\psi, A\psi\rangle \le E_n + O(\varepsilon). \qquad (4.34)$$
In fact, set $\psi = \sum_{j=1}^n \alpha_j\varphi_j$ and choose $\alpha_j$ such that $\psi \in U(\psi_1, \ldots, \psi_{n-1})$, then
$$\langle\psi, A\psi\rangle = \sum_{j=1}^n |\alpha_j|^2 E_j + O(\varepsilon) \le E_n + O(\varepsilon) \qquad (4.35)$$
and the claim follows.
(ii) We have
$$\inf_{\psi \in U(\varphi_1,\ldots,\varphi_{n-1})} \langle\psi, A\psi\rangle \ge E_n - O(\varepsilon). \qquad (4.36)$$
In fact, set $\psi = \varphi_n$.
Since $\varepsilon$ can be chosen arbitrarily small we have proven

Theorem 4.9 (Min-max). Let $A$ be self-adjoint and let $E_1 \le E_2 \le E_3 \le \cdots$ be the eigenvalues of $A$ below the essential spectrum respectively the infimum of the essential spectrum once there are no more eigenvalues left. Then
$$E_n = \sup_{\psi_1,\ldots,\psi_{n-1}} \inf_{\psi \in U(\psi_1,\ldots,\psi_{n-1})} \langle\psi, A\psi\rangle. \qquad (4.37)$$

Clearly the same result holds if $D(A)$ is replaced by the quadratic form domain $Q(A)$ in the definition of $U$. In addition, as long as $E_n$ is an eigenvalue, the sup and inf are in fact max and min, explaining the name.
Corollary 4.10. Suppose $A$ and $B$ are self-adjoint operators with $A \ge B$ (i.e., $A - B \ge 0$). Then $E_n(A) \ge E_n(B)$.

Problem 4.2. Suppose $A$, $A_n$ are bounded and $A_n \to A$. Then $E_k(A_n) \to E_k(A)$. (Hint: $\|A - A_n\| \le \varepsilon$ is equivalent to $A - \varepsilon \le A_n \le A + \varepsilon$.)
4.4. Estimating eigenspaces
Next, we show that the dimension of the range of $P_A(\Omega)$ can be estimated if we have some functions which lie approximately in this space.
Theorem 4.11. Suppose $A$ is a self-adjoint operator and $\psi_j$, $1 \le j \le k$, are linearly independent elements of $H$.

(i). Let $\lambda \in \mathbb{R}$, $\psi_j \in Q(A)$. If
$$\langle\psi, A\psi\rangle < \lambda\|\psi\|^2 \qquad (4.38)$$
for any nonzero linear combination $\psi = \sum_{j=1}^k c_j\psi_j$, then
$$\dim \mathrm{Ran}\,P_A((-\infty, \lambda)) \ge k. \qquad (4.39)$$
Similarly, $\langle\psi, A\psi\rangle > \lambda\|\psi\|^2$ implies $\dim \mathrm{Ran}\,P_A((\lambda, \infty)) \ge k$.

(ii). Let $\lambda_1 < \lambda_2$, $\psi_j \in D(A)$. If
$$\Big\|\Big(A - \frac{\lambda_2+\lambda_1}{2}\Big)\psi\Big\| < \frac{\lambda_2-\lambda_1}{2}\|\psi\| \qquad (4.40)$$
for any nonzero linear combination $\psi = \sum_{j=1}^k c_j\psi_j$, then
$$\dim \mathrm{Ran}\,P_A((\lambda_1, \lambda_2)) \ge k. \qquad (4.41)$$
Proof. (i). Let $M = \mathrm{span}\{\psi_j\} \subseteq H$. We claim $\dim P_A((-\infty,\lambda))M = \dim M = k$. For this it suffices to show $\mathrm{Ker}\,P_A((-\infty,\lambda))|_M = \{0\}$. Suppose $P_A((-\infty,\lambda))\psi = 0$, $\psi \ne 0$. Then we see that for any nonzero linear combination $\psi$
$$\langle\psi, A\psi\rangle = \int_{\mathbb{R}} \eta\,d\mu_\psi(\eta) = \int_{[\lambda,\infty)} \eta\,d\mu_\psi(\eta) \ge \lambda \int_{[\lambda,\infty)} d\mu_\psi(\eta) = \lambda\|\psi\|^2. \qquad (4.42)$$
This contradicts our assumption (4.38).

(ii). Using the same notation as before we need to show $\mathrm{Ker}\,P_A((\lambda_1,\lambda_2))|_M = \{0\}$. If $P_A((\lambda_1,\lambda_2))\psi = 0$, $\psi \ne 0$, then
$$\Big\|\Big(A - \frac{\lambda_2+\lambda_1}{2}\Big)\psi\Big\|^2 = \int_{\mathbb{R}} \Big(x - \frac{\lambda_2+\lambda_1}{2}\Big)^2 d\mu_\psi(x) = \int_\Omega x^2\,d\mu_\psi\Big(x + \frac{\lambda_2+\lambda_1}{2}\Big)$$
$$\ge \frac{(\lambda_2-\lambda_1)^2}{4}\int_\Omega d\mu_\psi\Big(x + \frac{\lambda_2+\lambda_1}{2}\Big) = \frac{(\lambda_2-\lambda_1)^2}{4}\|\psi\|^2, \qquad (4.43)$$
where $\Omega = (-\infty, -(\lambda_2-\lambda_1)/2] \cup [(\lambda_2-\lambda_1)/2, \infty)$. But this is a contradiction as before.
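Part (i) of Theorem 4.11 is easy to test for a matrix (a numerical sketch; the matrix, threshold, and trial vectors are our choices). The hypothesis "every nonzero combination has Rayleigh quotient below $\lambda$" is equivalent to the largest eigenvalue of the compression to the trial space lying below $\lambda$:

```python
import numpy as np

# Sketch of Theorem 4.11 (i): if k independent trial vectors all have
# Rayleigh quotient < lambda, then A has at least k eigenvalues below lambda.
A = np.diag([0.0, 0.5, 0.9, 2.0, 3.0])
lam = 1.0
trial = np.eye(5)[:, :3]                 # three trial vectors psi_1, psi_2, psi_3
comp = trial.T @ A @ trial               # compression of A to the trial space
assert np.linalg.eigvalsh(comp)[-1] < lam      # hypothesis (4.38) holds
assert (np.linalg.eigvalsh(A) < lam).sum() >= 3  # conclusion (4.39): >= 3 eigenvalues
```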
4.5. Tensor products of operators
Suppose $A_j$, $1 \le j \le n$, are self-adjoint operators on $H_j$. For every monomial $\lambda_1^{n_1}\cdots\lambda_n^{n_n}$ we can define
$$(A_1^{n_1} \otimes \cdots \otimes A_n^{n_n})\,\psi_1 \otimes \cdots \otimes \psi_n = (A_1^{n_1}\psi_1) \otimes \cdots \otimes (A_n^{n_n}\psi_n), \quad \psi_j \in D(A_j^{n_j}). \qquad (4.44)$$
Hence for every polynomial $P(\lambda_1, \ldots, \lambda_n)$ of degree $N$ we can define
$$P(A_1, \ldots, A_n)\,\psi_1 \otimes \cdots \otimes \psi_n, \quad \psi_j \in D(A_j^N), \qquad (4.45)$$
and extend this definition to obtain a linear operator on the set
$$D = \mathrm{span}\{\psi_1 \otimes \cdots \otimes \psi_n \mid \psi_j \in D(A_j^N)\}. \qquad (4.46)$$
Moreover, if $P$ is real-valued, then the operator $P(A_1, \ldots, A_n)$ on $D$ is symmetric and we can consider its closure, which will again be denoted by $P(A_1, \ldots, A_n)$.
Theorem 4.12. Suppose $A_j$, $1 \le j \le n$, are self-adjoint operators on $H_j$, let $P(\lambda_1, \ldots, \lambda_n)$ be a real-valued polynomial, and define $P(A_1, \ldots, A_n)$ as above.

Then $P(A_1, \ldots, A_n)$ is self-adjoint and its spectrum is the closure of the range of $P$ on the product of the spectra of the $A_j$, that is,
$$\sigma(P(A_1, \ldots, A_n)) = \overline{P(\sigma(A_1), \ldots, \sigma(A_n))}. \qquad (4.47)$$
Proof. By the spectral theorem it is no restriction to assume that $A_j$ is multiplication by $\lambda_j$ on $L^2(\mathbb{R}, d\mu_j)$ and $P(A_1, \ldots, A_n)$ is hence multiplication by $P(\lambda_1, \ldots, \lambda_n)$ on $L^2(\mathbb{R}^n, d\mu_1 \times \cdots \times d\mu_n)$. Since $D$ contains the set of all functions $\psi_1(\lambda_1)\cdots\psi_n(\lambda_n)$ for which $\psi_j \in L^2_c(\mathbb{R}, d\mu_j)$ it follows that the domain of the closure of $P$ contains $L^2_c(\mathbb{R}^n, d\mu_1 \times \cdots \times d\mu_n)$. Hence $P$ is the maximally defined multiplication operator by $P(\lambda_1, \ldots, \lambda_n)$, which is self-adjoint.

Now let $\lambda = P(\lambda_1, \ldots, \lambda_n)$ with $\lambda_j \in \sigma(A_j)$. Then there exist Weyl sequences $\psi_{j,k} \in D(A_j^N)$ with $(A_j - \lambda_j)\psi_{j,k} \to 0$ as $k \to \infty$. Then $(P - \lambda)\psi_k \to 0$, where $\psi_k = \psi_{1,k} \otimes \cdots \otimes \psi_{n,k}$, and hence $\lambda \in \sigma(P)$. Conversely, if $\lambda \notin \overline{P(\sigma(A_1), \ldots, \sigma(A_n))}$, then $|P(\lambda_1, \ldots, \lambda_n) - \lambda| \ge \varepsilon$ for a.e. $\lambda_j$ with respect to $\mu_j$ and hence $(P - \lambda)^{-1}$ exists and is bounded, that is, $\lambda \in \rho(P)$.
The two main cases of interest are $A_1 \otimes A_2$, in which case
$$\sigma(A_1 \otimes A_2) = \overline{\sigma(A_1)\sigma(A_2)} = \overline{\{\lambda_1\lambda_2 \mid \lambda_j \in \sigma(A_j)\}}, \qquad (4.48)$$
and $A_1 \otimes \mathbb{I} + \mathbb{I} \otimes A_2$, in which case
$$\sigma(A_1 \otimes \mathbb{I} + \mathbb{I} \otimes A_2) = \overline{\sigma(A_1) + \sigma(A_2)} = \overline{\{\lambda_1 + \lambda_2 \mid \lambda_j \in \sigma(A_j)\}}. \qquad (4.49)$$
Chapter 5
Quantum dynamics
As in the finite dimensional case, the solution of the Schrödinger equation
$$\mathrm{i}\frac{d}{dt}\psi(t) = H\psi(t) \qquad (5.1)$$
is given by
$$\psi(t) = \exp(-\mathrm{i}tH)\psi(0). \qquad (5.2)$$
A detailed investigation of this formula will be our first task. Moreover, in the finite dimensional case the dynamics is understood once the eigenvalues are known, and the same is true in our case once we know the spectrum. Note that, like any Hamiltonian system from classical mechanics, our system is not hyperbolic (i.e., the spectrum is not away from the real axis) and hence simple results like "all solutions tend to the equilibrium position" cannot be expected.
5.1. The time evolution and Stone’s theorem
In this section we want to have a look at the initial value problem associated with the Schrödinger equation (2.12) in the Hilbert space $H$. If $H$ is one dimensional (and hence $A$ is a real number), the solution is given by
$$\psi(t) = e^{-\mathrm{i}tA}\psi(0). \qquad (5.3)$$
Our hope is that this formula also applies in the general case and that we can reconstruct a one-parameter unitary group $U(t)$ from its generator $A$ (compare (2.11)) via $U(t) = \exp(-\mathrm{i}tA)$. We first investigate the family of operators $\exp(-\mathrm{i}tA)$.

Theorem 5.1. Let $A$ be self-adjoint and let $U(t) = \exp(-\mathrm{i}tA)$.

(i). $U(t)$ is a strongly continuous one-parameter unitary group.
(ii). The limit $\lim_{t\to 0}\frac{1}{t}(U(t)\psi - \psi)$ exists if and only if $\psi \in D(A)$, in which case $\lim_{t\to 0}\frac{1}{t}(U(t)\psi - \psi) = -\mathrm{i}A\psi$.

(iii). $U(t)D(A) = D(A)$ and $AU(t) = U(t)A$.
Proof. The group property (i) follows directly from Theorem 3.1 and the corresponding statements for the function $\exp(-\mathrm{i}t\lambda)$. To prove strong continuity observe that
$$\lim_{t\to t_0} \|e^{-\mathrm{i}tA}\psi - e^{-\mathrm{i}t_0 A}\psi\|^2 = \lim_{t\to t_0} \int_{\mathbb{R}} |e^{-\mathrm{i}t\lambda} - e^{-\mathrm{i}t_0\lambda}|^2\,d\mu_\psi(\lambda) = \int_{\mathbb{R}} \lim_{t\to t_0} |e^{-\mathrm{i}t\lambda} - e^{-\mathrm{i}t_0\lambda}|^2\,d\mu_\psi(\lambda) = 0 \qquad (5.4)$$
by the dominated convergence theorem.

Similarly, if $\psi \in D(A)$ we obtain
$$\lim_{t\to 0} \Big\|\frac{1}{t}(e^{-\mathrm{i}tA}\psi - \psi) + \mathrm{i}A\psi\Big\|^2 = \lim_{t\to 0} \int_{\mathbb{R}} \Big|\frac{1}{t}(e^{-\mathrm{i}t\lambda} - 1) + \mathrm{i}\lambda\Big|^2\,d\mu_\psi(\lambda) = 0 \qquad (5.5)$$
since $|e^{\mathrm{i}t\lambda} - 1| \le |t\lambda|$. Now let $\tilde A$ be the generator defined as in (2.11). Then $\tilde A$ is a symmetric extension of $A$ since we have
$$\langle\varphi, \tilde A\psi\rangle = \lim_{t\to 0} \Big\langle\varphi, \frac{\mathrm{i}}{t}(U(t)-1)\psi\Big\rangle = \lim_{t\to 0} \Big\langle\frac{\mathrm{i}}{-t}(U(-t)-1)\varphi, \psi\Big\rangle = \langle\tilde A\varphi, \psi\rangle \qquad (5.6)$$
and hence $\tilde A = A$ by Corollary 2.2. This settles (ii).

To see (iii) replace $\psi \to U(s)\psi$ in (ii).
For our original problem this implies that formula (5.3) is indeed the solution to the initial value problem of the Schrödinger equation. Moreover,
$$\langle U(t)\psi, AU(t)\psi\rangle = \langle U(t)\psi, U(t)A\psi\rangle = \langle\psi, A\psi\rangle \qquad (5.7)$$
shows that the expectations of $A$ are time independent. This corresponds to conservation of energy.
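Unitarity and the time-invariance (5.7) can be observed directly for a matrix Hamiltonian (a numerical sketch; the random matrix and times are our choices):

```python
import numpy as np
from scipy.linalg import expm

# Numerical sketch of (5.2)/(5.7): for a Hermitian matrix H, U(t) = exp(-itH)
# is unitary and leaves both the norm and the energy expectation invariant.
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
H = (M + M.T) / 2
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

E0 = np.vdot(psi0, H @ psi0).real
for t in [0.1, 1.0, 10.0]:
    psi = expm(-1j * t * H) @ psi0
    assert np.isclose(np.linalg.norm(psi), 1.0)          # unitarity
    assert np.isclose(np.vdot(psi, H @ psi).real, E0)    # energy conservation
```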
On the other hand, the generator of the time evolution of a quantum mechanical system should always be a self-adjoint operator since it corresponds to an observable (energy). Moreover, there should be a one-to-one correspondence between the unitary group and its generator. This is ensured by Stone's theorem.
Theorem 5.2 (Stone). Let $U(t)$ be a weakly continuous one-parameter unitary group. Then its generator $A$ is self-adjoint and $U(t) = \exp(-\mathrm{i}tA)$.

Proof. First of all observe that weak continuity together with Lemma 1.11 (iv) shows that $U(t)$ is in fact strongly continuous.
Next we show that $A$ is densely defined. Pick $\psi \in H$ and set
$$\psi_\tau = \int_0^\tau U(t)\psi\,dt \qquad (5.8)$$
(the integral is defined as in Section 4.1) implying $\lim_{\tau\to 0} \tau^{-1}\psi_\tau = \psi$. Moreover,
$$\frac{1}{t}(U(t)\psi_\tau - \psi_\tau) = \frac{1}{t}\int_t^{t+\tau} U(s)\psi\,ds - \frac{1}{t}\int_0^\tau U(s)\psi\,ds$$
$$= \frac{1}{t}\int_\tau^{\tau+t} U(s)\psi\,ds - \frac{1}{t}\int_0^t U(s)\psi\,ds$$
$$= \frac{1}{t}U(\tau)\int_0^t U(s)\psi\,ds - \frac{1}{t}\int_0^t U(s)\psi\,ds \to U(\tau)\psi - \psi \qquad (5.9)$$
as $t \to 0$ shows $\psi_\tau \in D(A)$. As in the proof of the previous theorem, we can show that $A$ is symmetric and that $U(t)D(A) = D(A)$.
Next, let us prove that $A$ is essentially self-adjoint. By Lemma 2.6 it suffices to prove $\mathrm{Ker}(A^* - z^*) = \{0\}$ for $z \in \mathbb{C}\setminus\mathbb{R}$. Suppose $A^*\varphi = z^*\varphi$, then for each $\psi \in D(A)$ we have
$$\frac{d}{dt}\langle\varphi, U(t)\psi\rangle = \langle\varphi, -\mathrm{i}AU(t)\psi\rangle = -\mathrm{i}\langle A^*\varphi, U(t)\psi\rangle = -\mathrm{i}z\langle\varphi, U(t)\psi\rangle \qquad (5.10)$$
and hence $\langle\varphi, U(t)\psi\rangle = \exp(-\mathrm{i}zt)\langle\varphi, \psi\rangle$. Since the left-hand side is bounded for all $t \in \mathbb{R}$ and the exponential on the right-hand side is not, we must have $\langle\varphi, \psi\rangle = 0$ implying $\varphi = 0$ since $D(A)$ is dense.
So $A$ is essentially self-adjoint and we can introduce $V(t) = \exp(-\mathrm{i}tA)$. We are done if we can show $U(t) = V(t)$.

Let $\psi \in D(A)$ and abbreviate $\psi(t) = (U(t) - V(t))\psi$. Then
$$\lim_{s\to 0} \frac{\psi(t+s) - \psi(t)}{s} = -\mathrm{i}A\psi(t) \qquad (5.11)$$
and hence $\frac{d}{dt}\|\psi(t)\|^2 = 2\mathrm{Re}\langle\psi(t), -\mathrm{i}A\psi(t)\rangle = 0$. Since $\psi(0) = 0$ we have $\psi(t) = 0$ and hence $U(t)$ and $V(t)$ coincide on $D(A)$. Furthermore, since $D(A)$ is dense we have $U(t) = V(t)$ by continuity.
As an immediate consequence of the proof we also note the following useful criterion.

Corollary 5.3. Suppose $D \subseteq D(A)$ is dense and invariant under $U(t)$. Then $A$ is essentially self-adjoint on $D$.

Proof. As in the above proof it follows $\langle\varphi, \psi\rangle = 0$ for any $\varphi \in \mathrm{Ker}(A^* - z^*)$ and $\psi \in D$.
Note that by Lemma 4.8 two strongly continuous one-parameter groups commute,
$$[e^{-\mathrm{i}tA}, e^{-\mathrm{i}sB}] = 0, \qquad (5.12)$$
if and only if the generators commute.

Clearly, for a physicist, one of the goals must be to understand the time evolution of a quantum mechanical system. We have seen that the time evolution is generated by a self-adjoint operator, the Hamiltonian, and is given by a linear first order differential equation, the Schrödinger equation. To understand the dynamics of such a first order differential equation, one must understand the spectrum of the generator. Some general tools for this endeavor will be provided in the following sections.

Problem 5.1. Let $H = L^2(0, 2\pi)$ and consider the one-parameter unitary group given by $U(t)f(x) = f(x - t \mod 2\pi)$. What is the generator of $U$?
5.2. The RAGE theorem
Now, let us discuss why the decomposition of the spectrum introduced in Section 3.3 is of physical relevance. Let $\|\varphi\| = \|\psi\| = 1$. The vector $\langle\varphi, \psi\rangle\varphi$ is the projection of $\psi$ onto the (one-dimensional) subspace spanned by $\varphi$. Hence $|\langle\varphi, \psi\rangle|^2$ can be viewed as the part of $\psi$ which is in the state $\varphi$. A first question one might raise is, how does
$$|\langle\varphi, U(t)\psi\rangle|^2, \quad U(t) = e^{-\mathrm{i}tA}, \qquad (5.13)$$
behave as $t \to \infty$? By the spectral theorem,
$$\hat\mu_{\varphi,\psi}(t) = \langle\varphi, U(t)\psi\rangle = \int_{\mathbb{R}} e^{-\mathrm{i}t\lambda}\,d\mu_{\varphi,\psi}(\lambda) \qquad (5.14)$$
is the Fourier transform of the measure $\mu_{\varphi,\psi}$. Thus our question is answered by Wiener's theorem.
Theorem 5.4 (Wiener). Let $\mu$ be a finite complex Borel measure on $\mathbb{R}$ and let
$$\hat\mu(t) = \int_{\mathbb{R}} e^{-\mathrm{i}t\lambda}\,d\mu(\lambda) \qquad (5.15)$$
be its Fourier transform. Then the Cesàro time average of $\hat\mu(t)$ has the following limit
$$\lim_{T\to\infty} \frac{1}{T}\int_0^T |\hat\mu(t)|^2\,dt = \sum_{\lambda\in\mathbb{R}} |\mu(\{\lambda\})|^2, \qquad (5.16)$$
where the sum on the right-hand side is finite.
Proof. By Fubini we have
$$\frac{1}{T}\int_0^T |\hat\mu(t)|^2\,dt = \frac{1}{T}\int_0^T \int_{\mathbb{R}}\int_{\mathbb{R}} e^{-\mathrm{i}(x-y)t}\,d\mu(x)\,d\mu^*(y)\,dt = \int_{\mathbb{R}}\int_{\mathbb{R}} \Big(\frac{1}{T}\int_0^T e^{-\mathrm{i}(x-y)t}\,dt\Big)\,d\mu(x)\,d\mu^*(y). \qquad (5.17)$$
The function in parentheses is bounded by one and converges pointwise to $\chi_{\{0\}}(x-y)$ as $T \to \infty$. Thus, by the dominated convergence theorem, the limit of the above expression is given by
$$\int_{\mathbb{R}}\int_{\mathbb{R}} \chi_{\{0\}}(x-y)\,d\mu(x)\,d\mu^*(y) = \int_{\mathbb{R}} \mu(\{y\})\,d\mu^*(y) = \sum_{y\in\mathbb{R}} |\mu(\{y\})|^2. \qquad (5.18)$$
To apply this result to our situation, observe that the subspaces $H_{ac}$, $H_{sc}$, and $H_{pp}$ are invariant with respect to time evolution since $P_{xx}U(t) = \chi_{M_{xx}}(A)\exp(-\mathrm{i}tA) = \exp(-\mathrm{i}tA)\chi_{M_{xx}}(A) = U(t)P_{xx}$, $xx \in \{ac, sc, pp\}$. Moreover, if $\psi \in H_{xx}$ we have $P_{xx}\psi = \psi$ which shows $\langle\varphi, f(A)\psi\rangle = \langle\varphi, P_{xx}f(A)\psi\rangle = \langle P_{xx}\varphi, f(A)\psi\rangle$ implying $d\mu_{\varphi,\psi} = d\mu_{P_{xx}\varphi,\psi}$. Thus if $\mu_\psi$ is ac, sc, or pp, so is $\mu_{\varphi,\psi}$ for every $\varphi \in H$.

That is, if $\psi \in H_c = H_{ac} \oplus H_{sc}$, then the Cesàro mean of $|\langle\varphi, U(t)\psi\rangle|^2$ tends to zero. In other words, the average of the probability of finding the system in any prescribed state tends to zero if we start in the continuous subspace $H_c$ of $A$.
If ψ ∈ H
ac
, then dµ
ϕ,ψ
is absolutely continuous with respect to Lebesgue
measure and thus ˆ µ
ϕ,ψ
(t) is continuous and tends to zero as [t[ → ∞. In
fact, this follows from the RiemannLebesgue lemma (see Lemma 7.6 below).
Now we want to draw some additional consequences from Wiener's theorem. This will eventually yield a dynamical characterization of the continuous and pure point spectrum due to Ruelle, Amrein, Georgescu, and Enß. But first we need a few definitions.

An operator $K \in L(H)$ is called finite rank if its range is finite dimensional. The dimension $n = \dim\operatorname{Ran}(K)$ is called the rank of $K$. If $\{\psi_j\}_{j=1}^n$ is an orthonormal basis for $\operatorname{Ran}(K)$ we have
$$ K\psi = \sum_{j=1}^n \langle\psi_j, K\psi\rangle\psi_j = \sum_{j=1}^n \langle\varphi_j, \psi\rangle\psi_j, \qquad (5.19) $$
where $\varphi_j = K^*\psi_j$. The elements $\varphi_j$ are linearly independent since $\overline{\operatorname{Ran}(K)} = \operatorname{Ker}(K^*)^\perp$. Hence every finite rank operator is of the form (5.19). In addition, the adjoint of $K$ is also finite rank and given by
$$ K^*\psi = \sum_{j=1}^n \langle\psi_j, \psi\rangle\varphi_j. \qquad (5.20) $$
The closure of the set of all finite rank operators in $L(H)$ is called the set of compact operators $C(H)$. It is straightforward to verify (Problem 5.2)

Lemma 5.5. The set of all compact operators $C(H)$ is a closed $*$-ideal in $L(H)$.
There is also a weaker version of compactness which is useful for us. The operator $K$ is called relatively compact with respect to $A$ if
$$ K R_A(z) \in C(H) \qquad (5.21) $$
for one $z \in \rho(A)$. By the first resolvent identity this then follows for all $z \in \rho(A)$. In particular we have $D(A) \subseteq D(K)$.
Now let us return to our original problem.

Theorem 5.6. Let $A$ be self-adjoint and suppose $K$ is relatively compact. Then
$$ \lim_{T\to\infty}\frac{1}{T}\int_0^T \|K e^{-itA} P_c\psi\|^2\, dt = 0 \quad\text{and}\quad \lim_{t\to\infty}\|K e^{-itA} P_{ac}\psi\| = 0 \qquad (5.22) $$
for every $\psi \in D(A)$. In particular, if $K$ is also bounded, then the result holds for any $\psi \in H$.
Proof. Let $\psi \in H_c$ respectively $\psi \in H_{ac}$, and drop the projectors. Then, if $K$ is a rank one operator (i.e., $K = \langle\varphi_1, \cdot\rangle\varphi_2$), the claim follows from Wiener's theorem respectively the Riemann–Lebesgue lemma. Hence it holds for any finite rank operator $K$.

If $K$ is compact, there is a sequence $K_n$ of finite rank operators such that $\|K - K_n\| \le 1/n$ and hence
$$ \|K e^{-itA}\psi\| \le \|K_n e^{-itA}\psi\| + \frac{1}{n}\|\psi\|. \qquad (5.23) $$
Thus the claim holds for any compact operator $K$.

If $\psi \in D(A)$ we can set $\psi = (A + i)^{-1}\varphi$, where $\varphi \in H_c$ if and only if $\psi \in H_c$ (since $H_c$ reduces $A$). Since $K(A + i)^{-1}$ is compact by assumption, the claim can be reduced to the previous situation. If, in addition, $K$ is bounded, we can find a sequence $\psi_n \in D(A)$ such that $\|\psi - \psi_n\| \le 1/n$ and hence
$$ \|K e^{-itA}\psi\| \le \|K e^{-itA}\psi_n\| + \frac{1}{n}\|K\|, \qquad (5.24) $$
concluding the proof.
With the help of this result we can now prove an abstract version of the RAGE theorem.

Theorem 5.7 (RAGE). Let $A$ be self-adjoint. Suppose $K_n \in L(H)$ is a sequence of relatively compact operators which converges strongly to the identity. Then
$$ H_c = \{\psi \in H \mid \lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T \|K_n e^{-itA}\psi\|\, dt = 0\}, $$
$$ H_{pp} = \{\psi \in H \mid \lim_{n\to\infty}\sup_{t\ge 0}\|(I - K_n)e^{-itA}\psi\| = 0\}. \qquad (5.25) $$
Proof. Abbreviate $\psi(t) = \exp(-itA)\psi$. We begin with the first equation. Let $\psi \in H_c$, then
$$ \frac{1}{T}\int_0^T \|K_n\psi(t)\|\, dt \le \left(\frac{1}{T}\int_0^T \|K_n\psi(t)\|^2\, dt\right)^{1/2} \to 0 \qquad (5.26) $$
by Cauchy–Schwarz and the previous theorem. Conversely, if $\psi \notin H_c$, we can write $\psi = \psi_c + \psi_{pp}$. By our previous estimate it suffices to show $\|K_n\psi_{pp}(t)\| \ge \varepsilon > 0$ for $n$ large. In fact, we even claim
$$ \lim_{n\to\infty}\sup_{t\ge 0}\|K_n\psi_{pp}(t) - \psi_{pp}(t)\| = 0. \qquad (5.27) $$
By the spectral theorem, we can write $\psi_{pp}(t) = \sum_j \alpha_j(t)\psi_j$, where the $\psi_j$ are orthonormal eigenfunctions and $\alpha_j(t) = \exp(-it\lambda_j)\alpha_j$. Truncate this expansion after $N$ terms. Then this part converges uniformly to the desired limit by strong convergence of $K_n$. Moreover, by Lemma 1.13 we have $\|K_n\| \le M$, and hence the error can be made arbitrarily small by choosing $N$ large.

Now let us turn to the second equation. If $\psi \in H_{pp}$, the claim follows by (5.27). Conversely, if $\psi \notin H_{pp}$, we can write $\psi = \psi_c + \psi_{pp}$, and by our previous estimate it suffices to show that $\|(I - K_n)\psi_c(t)\|$ does not tend to $0$ as $n \to \infty$. If it did, we would have
$$ 0 = \lim_{T\to\infty}\frac{1}{T}\int_0^T \|(I - K_n)\psi_c(t)\|^2\, dt \ge \|\psi_c(t)\|^2 - \lim_{T\to\infty}\frac{1}{T}\int_0^T \|K_n\psi_c(t)\|^2\, dt = \|\psi_c(t)\|^2, \qquad (5.28) $$
a contradiction.
In summary, regularity properties of spectral measures are related to the long-time behavior of the corresponding quantum mechanical system. However, a more detailed investigation of this topic is beyond the scope of this manuscript. For a survey containing several recent results see [9].

It is often convenient to treat the observables as time dependent rather than the states. We set
$$ K(t) = e^{itA} K e^{-itA} \qquad (5.29) $$
and note
$$ \langle\psi(t), K\psi(t)\rangle = \langle\psi, K(t)\psi\rangle, \qquad \psi(t) = e^{-itA}\psi. \qquad (5.30) $$
This point of view is often referred to as the Heisenberg picture in the physics literature. If $K$ is unbounded we will assume $D(A) \subseteq D(K)$ such that the above equations make sense at least for $\psi \in D(A)$. The main interest is the behavior of $K(t)$ for large $t$. The strong limits are called asymptotic observables if they exist.
Theorem 5.8. Suppose $A$ is self-adjoint and $K$ is relatively compact. Then
$$ \lim_{T\to\infty}\frac{1}{T}\int_0^T e^{itA} K e^{-itA}\psi\, dt = \sum_{\lambda\in\sigma_p(A)} P_A(\{\lambda\}) K P_A(\{\lambda\})\psi, \qquad \psi \in D(A). \qquad (5.31) $$
If $K$ is in addition bounded, the result holds for any $\psi \in H$.
Proof. We will assume that $K$ is bounded. To obtain the general result, use the same trick as before and replace $K$ by $K R_A(z)$. Write $\psi = \psi_c + \psi_{pp}$. Then
$$ \lim_{T\to\infty}\frac{1}{T}\left\|\int_0^T K(t)\psi_c\, dt\right\| \le \lim_{T\to\infty}\frac{1}{T}\int_0^T \|K(t)\psi_c\|\, dt = 0 \qquad (5.32) $$
by Theorem 5.6. As in the proof of the previous theorem we can write $\psi_{pp} = \sum_j \alpha_j\psi_j$ and hence
$$ \sum_j \alpha_j \frac{1}{T}\int_0^T K(t)\psi_j\, dt = \sum_j \alpha_j \left(\frac{1}{T}\int_0^T e^{it(A-\lambda_j)}\, dt\right) K\psi_j. \qquad (5.33) $$
As in the proof of Wiener's theorem, we see that the operator in parentheses tends to $P_A(\{\lambda_j\})$ strongly as $T\to\infty$. Since this operator is also bounded by $1$ for all $T$, we can interchange the limit with the summation and the claim follows.
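In finite dimensions the limit (5.31) is just the "diagonal part" of $K$ with respect to the eigenprojections of $A$, and it can be observed numerically. A sketch with made-up $4\times 4$ Hermitian data (matrices standing in for the operators):

```python
import numpy as np

rng = np.random.default_rng(5)
w = np.array([-1.0, 0.3, 1.2, 2.5])                 # distinct eigenvalues of A
V, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
A = (V * w) @ V.conj().T                            # A = V diag(w) V^*
K = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

def U(t):
    # U(t) = e^{-itA} via the eigendecomposition of A
    return (V * np.exp(-1j * w * t)) @ V.conj().T

# Cesaro average of K(t) = e^{itA} K e^{-itA} over [0, T]
T, N = 2000.0, 20001
avg = sum(U(t).conj().T @ K @ U(t) for t in np.linspace(0.0, T, N)) / N

# predicted limit: sum over eigenvalues of P_A({lambda}) K P_A({lambda})
limit = sum(np.outer(V[:, j], V[:, j].conj()) @ K @ np.outer(V[:, j], V[:, j].conj())
            for j in range(4))

print(np.linalg.norm(avg - limit))  # small, and it shrinks further as T grows
```

The off-diagonal terms average out at rate $O(1/T)$, controlled by the gaps between eigenvalues, exactly as in the proof above.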
We also note the following corollary.

Corollary 5.9. Under the same assumptions as in the RAGE theorem we have
$$ \lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T e^{itA} K_n e^{-itA}\psi\, dt = P_{pp}\psi \qquad (5.34) $$
respectively
$$ \lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T e^{itA}(I - K_n)e^{-itA}\psi\, dt = P_c\psi. \qquad (5.35) $$

Problem 5.2. Prove Lemma 5.5.

Problem 5.3. Prove Corollary 5.9.
5.3. The Trotter product formula

In many situations the operator is of the form $A + B$, where $e^{itA}$ and $e^{itB}$ can be computed explicitly. Since $A$ and $B$ will not commute in general, we cannot obtain $e^{it(A+B)}$ from $e^{itA} e^{itB}$. However, we at least have:

Theorem 5.10 (Trotter product formula). Suppose $A$, $B$, and $A + B$ are self-adjoint. Then
$$ e^{it(A+B)} = \operatorname*{s-lim}_{n\to\infty}\left(e^{i\frac{t}{n}A} e^{i\frac{t}{n}B}\right)^n. \qquad (5.36) $$
Proof. First of all note that we have
$$ \left(e^{i\tau A} e^{i\tau B}\right)^n - e^{it(A+B)} = \sum_{j=0}^{n-1}\left(e^{i\tau A} e^{i\tau B}\right)^{n-1-j}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right)\left(e^{i\tau(A+B)}\right)^j, \qquad (5.37) $$
where $\tau = \frac{t}{n}$, and hence
$$ \|\big((e^{i\tau A} e^{i\tau B})^n - e^{it(A+B)}\big)\psi\| \le |t|\max_{|s|\le|t|} F_\tau(s), \qquad (5.38) $$
where
$$ F_\tau(s) = \left\|\frac{1}{\tau}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right) e^{is(A+B)}\psi\right\|. \qquad (5.39) $$
Now for $\psi \in D(A+B) = D(A) \cap D(B)$ we have
$$ \frac{1}{\tau}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right)\psi \to iA\psi + iB\psi - i(A+B)\psi = 0 \qquad (5.40) $$
as $\tau \to 0$. So $\lim_{\tau\to 0} F_\tau(s) = 0$ at least pointwise, but we need this uniformly with respect to $s \in [-|t|, |t|]$.

Pointwise convergence implies
$$ \left\|\frac{1}{\tau}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right)\psi\right\| \le C(\psi) \qquad (5.41) $$
and, since $D(A+B)$ is a Hilbert space when equipped with the graph norm $\|\psi\|_{\Gamma(A+B)}^2 = \|\psi\|^2 + \|(A+B)\psi\|^2$, we can invoke the uniform boundedness principle to obtain
$$ \left\|\frac{1}{\tau}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right)\psi\right\| \le C\|\psi\|_{\Gamma(A+B)}. \qquad (5.42) $$
Now
$$ |F_\tau(s) - F_\tau(r)| \le \left\|\frac{1}{\tau}\left(e^{i\tau A} e^{i\tau B} - e^{i\tau(A+B)}\right)\left(e^{is(A+B)} - e^{ir(A+B)}\right)\psi\right\| \le C\left\|\left(e^{is(A+B)} - e^{ir(A+B)}\right)\psi\right\|_{\Gamma(A+B)} \qquad (5.43) $$
shows that $F_\tau(\cdot)$ is uniformly continuous, and the claim follows by a standard $\frac{\varepsilon}{2}$ argument.
If the operators are semibounded from below, the same proof shows

Theorem 5.11 (Trotter product formula). Suppose $A$, $B$, and $A + B$ are self-adjoint and semibounded from below. Then
$$ e^{-t(A+B)} = \operatorname*{s-lim}_{n\to\infty}\left(e^{-\frac{t}{n}A} e^{-\frac{t}{n}B}\right)^n, \qquad t \ge 0. \qquad (5.44) $$

Problem 5.4. Prove Theorem 5.11.
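For bounded operators everything in the Trotter formula converges in norm, so the rate can be seen directly. A sketch with made-up Hermitian matrices (the roughly $O(1/n)$ decay of the error reflects the first commutator correction):

```python
import numpy as np

def expm_herm(H, z):
    # exp(z H) for Hermitian H via its eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(z * w)) @ V.conj().T

rng = np.random.default_rng(0)
def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

A, B, t = rand_herm(4), rand_herm(4), 1.0
exact = expm_herm(A + B, 1j * t)

errs = []
for n in [1, 10, 100, 1000]:
    step = expm_herm(A, 1j * t / n) @ expm_herm(B, 1j * t / n)
    errs.append(np.linalg.norm(np.linalg.matrix_power(step, n) - exact, 2))
    print(n, errs[-1])   # the error decays roughly like 1/n
```

For unbounded $A$, $B$ only strong convergence survives, which is exactly what Theorem 5.10 asserts.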
Chapter 6

Perturbation theory for self-adjoint operators

The Hamiltonian of a quantum mechanical system is usually the sum of the kinetic energy $H_0$ (free Schrödinger operator) plus an operator $V$ corresponding to the potential energy. Since $H_0$ is easy to investigate, one usually tries to consider $V$ as a perturbation of $H_0$. This will only work if $V$ is small with respect to $H_0$. Hence we study such perturbations of self-adjoint operators next.
6.1. Relatively bounded operators and the Kato–Rellich theorem

An operator $B$ is called $A$-bounded or relatively bounded with respect to $A$ if $D(A) \subseteq D(B)$ and if there are constants $a, b \ge 0$ such that
$$ \|B\psi\| \le a\|A\psi\| + b\|\psi\|, \qquad \psi \in D(A). \qquad (6.1) $$
The infimum of all constants $a$ for which a corresponding $b$ exists such that (6.1) holds is called the $A$-bound of $B$.

The triangle inequality implies

Lemma 6.1. Suppose $B_j$, $j = 1, 2$, are $A$-bounded with respective $A$-bounds $a_j$, $j = 1, 2$. Then $\alpha_1 B_1 + \alpha_2 B_2$ is also $A$-bounded with $A$-bound less than $|\alpha_1| a_1 + |\alpha_2| a_2$. In particular, the set of all $A$-bounded operators forms a linear space.

There are also the following equivalent characterizations:
Lemma 6.2. Suppose $A$ is closed and $B$ is closable. Then the following are equivalent:
(i) $B$ is $A$-bounded.
(ii) $D(A) \subseteq D(B)$.
(iii) $B R_A(z)$ is bounded for one (and hence for all) $z \in \rho(A)$.
Moreover, the $A$-bound of $B$ is no larger than $\inf_{z\in\rho(A)} \|B R_A(z)\|$.

Proof. (i) ⇒ (ii) is true by definition. (ii) ⇒ (iii) since $B R_A(z)$ is a closed (Problem 2.6) operator defined on all of $H$ and hence bounded by the closed graph theorem (Theorem 2.7). To see (iii) ⇒ (i), let $\psi \in D(A)$. Then
$$ \|B\psi\| = \|B R_A(z)(A - z)\psi\| \le a\|(A - z)\psi\| \le a\|A\psi\| + (a|z|)\|\psi\|, \qquad (6.2) $$
where $a = \|B R_A(z)\|$. Finally, note that if $B R_A(z)$ is bounded for one $z \in \rho(A)$, it is bounded for all $z \in \rho(A)$ by the first resolvent formula.
We are mainly interested in the situation where $A$ is self-adjoint and $B$ is symmetric. Hence we will restrict our attention to this case.

Lemma 6.3. Suppose $A$ is self-adjoint and $B$ relatively bounded. The $A$-bound of $B$ is given by
$$ \lim_{\lambda\to\infty} \|B R_A(\pm i\lambda)\|. \qquad (6.3) $$
If $A$ is bounded from below, we can also replace $\pm i\lambda$ by $-\lambda$.

Proof. Let $\varphi = R_A(\pm i\lambda)\psi$, $\lambda > 0$, and let $a_\infty$ be the $A$-bound of $B$. If $B$ is $A$-bounded, then (use the spectral theorem to estimate the norms)
$$ \|B R_A(\pm i\lambda)\psi\| \le a\|A R_A(\pm i\lambda)\psi\| + b\|R_A(\pm i\lambda)\psi\| \le \left(a + \frac{b}{\lambda}\right)\|\psi\|. \qquad (6.4) $$
Hence $\limsup_\lambda \|B R_A(\pm i\lambda)\| \le a_\infty$, which together with $a_\infty \le \inf_\lambda \|B R_A(\pm i\lambda)\|$ from the previous lemma proves the claim.

The case where $A$ is bounded from below is similar.
Now we will show the basic perturbation result due to Kato and Rellich.

Theorem 6.4 (Kato–Rellich). Suppose $A$ is (essentially) self-adjoint and $B$ is symmetric with $A$-bound less than one. Then $A + B$, $D(A + B) = D(A)$, is (essentially) self-adjoint. If $A$ is essentially self-adjoint we have $D(\overline{A}) \subseteq D(B)$ and $\overline{A + B} = \overline{A} + B$.

If $A$ is bounded from below by $\gamma$, then $A + B$ is bounded from below by $\min(\gamma, b/(a - 1))$.

Proof. Since $D(A) \subseteq D(B)$ and $D(A) \subseteq D(A+B)$ by (6.1), we can assume that $A$ is closed (i.e., self-adjoint). It suffices to show that $\operatorname{Ran}(A + B \pm i\lambda) = H$. By the above lemma we can find a $\lambda > 0$ such that $\|B R_A(\pm i\lambda)\| < 1$. Hence $-1 \in \rho(B R_A(\pm i\lambda))$ and thus $I + B R_A(\pm i\lambda)$ is onto. Thus
$$ (A + B \pm i\lambda) = (I + B R_A(\pm i\lambda))(A \pm i\lambda) \qquad (6.5) $$
is onto and the proof of the first part is complete.

If $A$ is bounded from below we can replace $\pm i\lambda$ by $-\lambda$, and the above equation shows that $R_{A+B}$ exists for $\lambda$ sufficiently large. By the proof of the previous lemma we can choose $-\lambda < \min(\gamma, b/(a - 1))$.
Finally, let us show that there is also a connection between the resolvents.

Lemma 6.5. Suppose $A$ and $B$ are closed and $D(A) \subseteq D(B)$. Then we have the second resolvent formula
$$ R_{A+B}(z) - R_A(z) = -R_A(z) B R_{A+B}(z) = -R_{A+B}(z) B R_A(z) \qquad (6.6) $$
for $z \in \rho(A) \cap \rho(A+B)$.

Proof. We compute
$$ R_{A+B}(z) + R_A(z) B R_{A+B}(z) = R_A(z)(A + B - z) R_{A+B}(z) = R_A(z). \qquad (6.7) $$
The second identity is similar.

Problem 6.1. Compute the resolvent of $A + \alpha\langle\psi, \cdot\rangle\psi$. (Hint: Show
$$ (I + \alpha\langle\varphi, \cdot\rangle\psi)^{-1} = I - \frac{\alpha}{1 + \alpha\langle\varphi, \psi\rangle}\langle\varphi, \cdot\rangle\psi \qquad (6.8) $$
and use the second resolvent formula.)
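The second resolvent formula (6.6) is purely algebraic and can be verified directly for matrices. A sketch with made-up data, using the convention $R_A(z) = (A - z)^{-1}$; the chosen $z$ is assumed to lie in both resolvent sets for this sample:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
A = rng.normal(size=(d, d))
B = rng.normal(size=(d, d))
z = 2.0 + 1.0j              # assumed to be in rho(A) and rho(A+B) here

I = np.eye(d)
RA = np.linalg.inv(A - z * I)        # R_A(z)
RAB = np.linalg.inv(A + B - z * I)   # R_{A+B}(z)

lhs = RAB - RA
print(np.linalg.norm(lhs + RA @ B @ RAB))   # ~ 0
print(np.linalg.norm(lhs + RAB @ B @ RA))   # ~ 0
```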
6.2. More on compact operators

Recall from Section 5.2 that we have introduced the set of compact operators $C(H)$ as the closure of the set of all finite rank operators in $L(H)$. Before we can proceed, we need to establish some further results for such operators. We begin by investigating the spectrum of self-adjoint compact operators and show that the spectral theorem takes a particularly simple form in this case.

We introduce some notation first. The discrete spectrum $\sigma_d(A)$ is the set of all eigenvalues which are discrete points of the spectrum and whose corresponding eigenspace is finite dimensional. The complement of the discrete spectrum is called the essential spectrum $\sigma_{ess}(A) = \sigma(A)\setminus\sigma_d(A)$. If $A$ is self-adjoint we might equivalently set
$$ \sigma_d(A) = \{\lambda \in \sigma_p(A) \mid \operatorname{rank}(P_A((\lambda - \varepsilon, \lambda + \varepsilon))) < \infty \text{ for some } \varepsilon > 0\} \qquad (6.9) $$
respectively
$$ \sigma_{ess}(A) = \{\lambda \in \mathbb{R} \mid \operatorname{rank}(P_A((\lambda - \varepsilon, \lambda + \varepsilon))) = \infty \text{ for all } \varepsilon > 0\}. \qquad (6.10) $$
Theorem 6.6 (Spectral theorem for compact operators). Suppose the operator $K$ is self-adjoint and compact. Then the spectrum of $K$ consists of an at most countable number of eigenvalues which can only cluster at $0$. Moreover, the eigenspace of each nonzero eigenvalue is finite dimensional. In other words,
$$ \sigma_{ess}(K) \subseteq \{0\}, \qquad (6.11) $$
where equality holds if and only if $H$ is infinite dimensional.

In addition, we have
$$ K = \sum_{\lambda\in\sigma(K)} \lambda P_K(\{\lambda\}). \qquad (6.12) $$

Proof. It suffices to show $\operatorname{rank}(P_K((\lambda - \varepsilon, \lambda + \varepsilon))) < \infty$ for $0 < \varepsilon < |\lambda|$.

Let $K_n$ be a sequence of finite rank operators such that $\|K - K_n\| \le 1/n$. If $\operatorname{Ran} P_K((\lambda - \varepsilon, \lambda + \varepsilon))$ is infinite dimensional, we can find a vector $\psi_n$ in this range such that $\|\psi_n\| = 1$ and $K_n\psi_n = 0$. But this yields the contradiction
$$ \frac{1}{n} \ge |\langle\psi_n, (K - K_n)\psi_n\rangle| = |\langle\psi_n, K\psi_n\rangle| \ge |\lambda| - \varepsilon > 0 \qquad (6.13) $$
by (4.2).
As a consequence we obtain the canonical form of a general compact operator.

Theorem 6.7 (Canonical form of compact operators). Let $K$ be compact. There exist orthonormal sets $\{\hat\phi_j\}$, $\{\phi_j\}$ and positive numbers $s_j = s_j(K)$ such that
$$ K = \sum_j s_j \langle\phi_j, \cdot\rangle\hat\phi_j, \qquad K^* = \sum_j s_j \langle\hat\phi_j, \cdot\rangle\phi_j. \qquad (6.14) $$
Note $K\phi_j = s_j\hat\phi_j$ and $K^*\hat\phi_j = s_j\phi_j$, and hence $K^*K\phi_j = s_j^2\phi_j$ and $KK^*\hat\phi_j = s_j^2\hat\phi_j$.

The numbers $s_j(K)^2 > 0$ are the nonzero eigenvalues of $KK^*$ respectively $K^*K$ (counted with multiplicity) and $s_j(K) = s_j(K^*) = s_j$ are called singular values of $K$. There are either finitely many singular values (if $K$ is finite rank) or they converge to zero.
Proof. By Lemma 5.5 $K^*K$ is compact, and hence Theorem 6.6 applies. Let $\{\phi_j\}$ be an orthonormal basis of eigenvectors for $P_{K^*K}((0,\infty))H$ and let $s_j^2$ be the eigenvalue corresponding to $\phi_j$. Then, for any $\psi \in H$ we can write
$$ \psi = \sum_j \langle\phi_j, \psi\rangle\phi_j + \tilde\psi \qquad (6.15) $$
with $\tilde\psi \in \operatorname{Ker}(K^*K) = \operatorname{Ker}(K)$. Then
$$ K\psi = \sum_j s_j \langle\phi_j, \psi\rangle\hat\phi_j, \qquad (6.16) $$
where $\hat\phi_j = s_j^{-1} K\phi_j$, since $\|K\tilde\psi\|^2 = \langle\tilde\psi, K^*K\tilde\psi\rangle = 0$. By $\langle\hat\phi_j, \hat\phi_k\rangle = (s_j s_k)^{-1}\langle K\phi_j, K\phi_k\rangle = (s_j s_k)^{-1}\langle K^*K\phi_j, \phi_k\rangle = s_j s_k^{-1}\langle\phi_j, \phi_k\rangle$ we see that the $\{\hat\phi_j\}$ are orthonormal, and the formula for $K^*$ follows by taking the adjoint of the formula for $K$ (Problem 6.2).

If $K$ is self-adjoint, then $\phi_j = \sigma_j\hat\phi_j$, $\sigma_j^2 = 1$, are the eigenvectors of $K$ and $\sigma_j s_j$ are the corresponding eigenvalues.
Moreover, note that we have
$$ \|K\| = \max_j s_j(K). \qquad (6.17) $$
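For matrices the canonical form (6.14) is the singular value decomposition, and (6.17) is the statement that the operator norm equals the largest singular value. A sketch with a made-up complex matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))

# K = sum_j s_j <phi_j, .> phihat_j: read phihat_j, s_j, phi_j off the SVD
phihat, s, phiH = np.linalg.svd(K, full_matrices=False)
phi = phiH.conj().T                    # columns are the phi_j

# reassemble K from its canonical form
K2 = sum(s[j] * np.outer(phihat[:, j], phi[:, j].conj()) for j in range(len(s)))

print(np.linalg.norm(K, 2), s.max())   # operator norm = largest singular value
print(np.linalg.norm(K - K2))          # ~ 0
```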
Finally, let me remark that there are a number of other equivalent definitions for compact operators.
Lemma 6.8. For $K \in L(H)$ the following statements are equivalent:
(i) $K$ is compact.
(i') $K^*$ is compact.
(ii) $A_n \in L(H)$ and $A_n \to A$ strongly implies $A_n K \to AK$.
(iii) $\psi_n$ converges weakly to $\psi$ implies $K\psi_n \to K\psi$ in norm.
(iv) $\psi_n$ bounded implies that $K\psi_n$ has a (norm) convergent subsequence.

Proof. (i) ⇔ (i'). This is immediate from Theorem 6.7.

(i) ⇒ (ii). Translating $A_n \to A_n - A$, it is no restriction to assume $A = 0$. Since $\|A_n\| \le M$, it suffices to consider the case where $K$ is finite rank. Then (by (6.14))
$$ \|A_n K\|^2 = \sup_{\|\psi\|=1} \sum_{j=1}^N s_j |\langle\phi_j, \psi\rangle|^2 \|A_n\hat\phi_j\|^2 \le \sum_{j=1}^N s_j \|A_n\hat\phi_j\|^2 \to 0. \qquad (6.18) $$

(ii) ⇒ (iii). Again, replace $\psi_n \to \psi_n - \psi$ and assume $\psi = 0$. Choose $A_n = \langle\psi_n, \cdot\rangle\varphi$, $\|\varphi\| = 1$. Then $\|K\psi_n\| = \|A_n K^*\| \to 0$.

(iii) ⇒ (iv). If $\psi_n$ is bounded, it has a weakly convergent subsequence by Lemma 1.12. Now apply (iii) to this subsequence.

(iv) ⇒ (i). Let $\varphi_j$ be an orthonormal basis and set
$$ K_n = \sum_{j=1}^n \langle\varphi_j, \cdot\rangle K\varphi_j. \qquad (6.19) $$
Then
$$ \gamma_n = \|K - K_n\| = \sup_{\psi\in\operatorname{span}\{\varphi_j\}_{j=n}^\infty,\, \|\psi\|=1} \|K\psi\| \qquad (6.20) $$
is a decreasing sequence tending to a limit $\varepsilon \ge 0$. Moreover, we can find a sequence of unit vectors $\psi_n \in \operatorname{span}\{\varphi_j\}_{j=n}^\infty$ for which $\|K\psi_n\| \ge \varepsilon$. By assumption, $K\psi_n$ has a convergent subsequence which, since $\psi_n$ converges weakly to $0$, converges to $0$. Hence $\varepsilon$ must be $0$ and we are done.
The last condition explains the name compact. Moreover, note that you cannot replace $A_n K \to AK$ by $K A_n \to KA$ unless you additionally require $A_n$ to be normal (then this follows by taking adjoints; recall that only for normal operators is taking adjoints continuous with respect to strong convergence). Without the requirement that $A_n$ be normal the claim is wrong, as the following example shows.

Example. Let $H = \ell^2(\mathbb{N})$, let $A_n$ be the operator which shifts each sequence $n$ places to the left, and let $K = \langle\delta_1, \cdot\rangle\delta_1$, where $\delta_1 = (1, 0, \dots)$. Then $\operatorname*{s-lim} A_n = 0$ but $\|K A_n\| = 1$.

Problem 6.2. Deduce the formula for $K^*$ from the one for $K$ in (6.14).
6.3. Hilbert–Schmidt and trace class operators

Among the compact operators, two special classes are of particular importance. The first one are integral operators
$$ K\psi(x) = \int_M K(x, y)\psi(y)\, d\mu(y), \qquad \psi \in L^2(M, d\mu), \qquad (6.21) $$
where $K(x, y) \in L^2(M \times M, d\mu \otimes d\mu)$. Such an operator is called a Hilbert–Schmidt operator. Using Cauchy–Schwarz,
$$ \int_M |K\psi(x)|^2\, d\mu(x) \le \int_M \left(\int_M |K(x, y)\psi(y)|\, d\mu(y)\right)^2 d\mu(x) \le \int_M \left(\int_M |K(x, y)|^2\, d\mu(y)\right)\left(\int_M |\psi(y)|^2\, d\mu(y)\right) d\mu(x) = \left(\int_M\int_M |K(x, y)|^2\, d\mu(y)\, d\mu(x)\right)\left(\int_M |\psi(y)|^2\, d\mu(y)\right), \qquad (6.22) $$
we see that $K$ is bounded. Next, pick an orthonormal basis $\varphi_j(x)$ for $L^2(M, d\mu)$. Then, by Lemma 1.9, $\varphi_i(x)\varphi_j(y)$ is an orthonormal basis for $L^2(M \times M, d\mu \otimes d\mu)$ and
$$ K(x, y) = \sum_{i,j} c_{i,j}\varphi_i(x)\varphi_j(y), \qquad c_{i,j} = \langle\varphi_i, K\varphi_j^*\rangle, \qquad (6.23) $$
where
$$ \sum_{i,j} |c_{i,j}|^2 = \int_M\int_M |K(x, y)|^2\, d\mu(y)\, d\mu(x) < \infty. \qquad (6.24) $$
In particular,
$$ K\psi(x) = \sum_{i,j} c_{i,j}\langle\varphi_j^*, \psi\rangle\varphi_i(x) \qquad (6.25) $$
shows that $K$ can be approximated by finite rank operators (take finitely many terms in the sum) and is hence compact.
Using (6.14) we can also give a different characterization of Hilbert–Schmidt operators.

Lemma 6.9. If $H = L^2(M, d\mu)$, then a compact operator $K$ is Hilbert–Schmidt if and only if $\sum_j s_j(K)^2 < \infty$, and
$$ \sum_j s_j(K)^2 = \int_M\int_M |K(x, y)|^2\, d\mu(x)\, d\mu(y) \qquad (6.26) $$
in this case.

Proof. If $K$ is compact, we can define approximating finite rank operators $K_n$ by considering only finitely many terms in (6.14):
$$ K_n = \sum_{j=1}^n s_j \langle\phi_j, \cdot\rangle\hat\phi_j. \qquad (6.27) $$
Then $K_n$ has the kernel $K_n(x, y) = \sum_{j=1}^n s_j \phi_j(y)^* \hat\phi_j(x)$ and
$$ \int_M\int_M |K_n(x, y)|^2\, d\mu(x)\, d\mu(y) = \sum_{j=1}^n s_j(K)^2. \qquad (6.28) $$
Now if one side converges, so does the other, and in particular (6.26) holds in this case.

Hence we will call a compact operator Hilbert–Schmidt if its singular values satisfy
$$ \sum_j s_j(K)^2 < \infty. \qquad (6.29) $$
By our lemma this coincides with our previous definition if $H = L^2(M, d\mu)$.
Since every Hilbert space is isomorphic to some $L^2(M, d\mu)$, we see that the Hilbert–Schmidt operators together with the norm
$$ \|K\|_2 = \left(\sum_j s_j(K)^2\right)^{1/2} \qquad (6.30) $$
form a Hilbert space (isomorphic to $L^2(M \times M, d\mu \otimes d\mu)$). Note that $\|K\|_2 = \|K^*\|_2$ (since $s_j(K) = s_j(K^*)$). There is another useful characterization for identifying Hilbert–Schmidt operators:

Lemma 6.10. A compact operator $K$ is Hilbert–Schmidt if and only if
$$ \sum_n \|K\psi_n\|^2 < \infty \qquad (6.31) $$
for some orthonormal basis and
$$ \sum_n \|K\psi_n\|^2 = \|K\|_2^2 \qquad (6.32) $$
in this case.
Proof. This follows from
$$ \sum_n \|K\psi_n\|^2 = \sum_{n,j} |\langle\hat\phi_j, K\psi_n\rangle|^2 = \sum_{n,j} |\langle K^*\hat\phi_j, \psi_n\rangle|^2 = \sum_j \|K^*\hat\phi_j\|^2 = \sum_j s_j(K)^2. \qquad (6.33) $$
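In finite dimensions (6.33) says that $\sum_n \|K\psi_n\|^2$ over any orthonormal basis equals the squared Frobenius norm $\sum_j s_j(K)^2$. A sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
K = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

s = np.linalg.svd(K, compute_uv=False)
hs2 = np.sum(s ** 2)                     # sum_j s_j(K)^2 = ||K||_2^2

# sum over an orthonormal basis: columns of a random unitary Q
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
total = sum(np.linalg.norm(K @ Q[:, n]) ** 2 for n in range(d))

print(hs2, total)  # both equal the squared Hilbert-Schmidt norm of K
```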
Corollary 6.11. The set of Hilbert–Schmidt operators forms a $*$-ideal in $L(H)$ and
$$ \|KA\|_2 \le \|A\|\|K\|_2 \quad\text{respectively}\quad \|AK\|_2 \le \|A\|\|K\|_2. \qquad (6.34) $$

Proof. Let $K$ be Hilbert–Schmidt and $A$ bounded. Then $AK$ is compact and
$$ \|AK\|_2^2 = \sum_n \|AK\psi_n\|^2 \le \|A\|^2 \sum_n \|K\psi_n\|^2 = \|A\|^2\|K\|_2^2. \qquad (6.35) $$
For $KA$ just consider adjoints.
This approach can be generalized by defining
$$ \|K\|_p = \left(\sum_j s_j(K)^p\right)^{1/p} \qquad (6.36) $$
plus corresponding spaces
$$ \mathcal{J}_p(H) = \{K \in C(H) \mid \|K\|_p < \infty\}, \qquad (6.37) $$
which are known as Schatten $p$-classes. Note that
$$ \|K\| \le \|K\|_p \qquad (6.38) $$
and that by $s_j(K) = s_j(K^*)$ we have
$$ \|K\|_p = \|K^*\|_p. \qquad (6.39) $$
Lemma 6.12. The spaces $\mathcal{J}_p(H)$ together with the norm $\|\cdot\|_p$ are Banach spaces. Moreover,
$$ \|K\|_p = \sup\left\{\left(\sum_j |\langle\psi_j, K\varphi_j\rangle|^p\right)^{1/p} \,\Big|\, \{\psi_j\}, \{\varphi_j\} \text{ ONS}\right\}, \qquad (6.40) $$
where the sup is taken over all orthonormal systems.
Proof. The hard part is to prove (6.40): Choose $q$ such that $\frac{1}{p} + \frac{1}{q} = 1$ and use Hölder's inequality to obtain (writing $s_j|\dots|^2 = (s_j^p|\dots|^2)^{1/p}|\dots|^{2/q}$)
$$ \sum_j s_j |\langle\varphi_n, \phi_j\rangle|^2 \le \left(\sum_j s_j^p |\langle\varphi_n, \phi_j\rangle|^2\right)^{1/p}\left(\sum_j |\langle\varphi_n, \phi_j\rangle|^2\right)^{1/q} \le \left(\sum_j s_j^p |\langle\varphi_n, \phi_j\rangle|^2\right)^{1/p}. \qquad (6.41) $$
Clearly the analogous equation holds for $\hat\phi_j$, $\psi_n$. Now using Cauchy–Schwarz we have
$$ |\langle\psi_n, K\varphi_n\rangle|^p = \Big|\sum_j s_j^{1/2}\langle\varphi_n, \phi_j\rangle\, s_j^{1/2}\langle\hat\phi_j, \psi_n\rangle\Big|^p \le \left(\sum_j s_j^p |\langle\varphi_n, \phi_j\rangle|^2\right)^{1/2}\left(\sum_j s_j^p |\langle\psi_n, \hat\phi_j\rangle|^2\right)^{1/2}. \qquad (6.42) $$
Summing over $n$, a second appeal to Cauchy–Schwarz and interchanging the order of summation finally gives
$$ \sum_n |\langle\psi_n, K\varphi_n\rangle|^p \le \left(\sum_{n,j} s_j^p |\langle\varphi_n, \phi_j\rangle|^2\right)^{1/2}\left(\sum_{n,j} s_j^p |\langle\psi_n, \hat\phi_j\rangle|^2\right)^{1/2} \le \left(\sum_j s_j^p\right)^{1/2}\left(\sum_j s_j^p\right)^{1/2} = \sum_j s_j^p. \qquad (6.43) $$
Since equality is attained for $\varphi_n = \phi_n$ and $\psi_n = \hat\phi_n$, equation (6.40) holds.

Now the rest is straightforward. From
$$ \left(\sum_j |\langle\psi_j, (K_1 + K_2)\varphi_j\rangle|^p\right)^{1/p} \le \left(\sum_j |\langle\psi_j, K_1\varphi_j\rangle|^p\right)^{1/p} + \left(\sum_j |\langle\psi_j, K_2\varphi_j\rangle|^p\right)^{1/p} \le \|K_1\|_p + \|K_2\|_p \qquad (6.44) $$
we infer that $\mathcal{J}_p(H)$ is a vector space and that the triangle inequality holds. The other requirements for a norm are obvious, and it remains to check completeness. If $K_n$ is a Cauchy sequence with respect to $\|\cdot\|_p$, it is also a Cauchy sequence with respect to $\|\cdot\|$ (since $\|K\| \le \|K\|_p$). Since $C(H)$ is closed, there is a compact $K$ with $\|K - K_n\| \to 0$, and by $\|K_n\|_p \le C$ we have
$$ \left(\sum_j |\langle\psi_j, K\varphi_j\rangle|^p\right)^{1/p} \le C \qquad (6.45) $$
for any finite ONS. Since the right-hand side is independent of the ONS (and in particular of the number of vectors), $K$ is in $\mathcal{J}_p(H)$.
The two most important cases are $p = 1$ and $p = 2$: $\mathcal{J}_2(H)$ is the space of Hilbert–Schmidt operators investigated in the previous section and $\mathcal{J}_1(H)$ is the space of trace class operators. Since Hilbert–Schmidt operators are easy to identify, it is important to relate $\mathcal{J}_1(H)$ with $\mathcal{J}_2(H)$:

Lemma 6.13. An operator is trace class if and only if it can be written as the product of two Hilbert–Schmidt operators, $K = K_1 K_2$, and we have
$$ \|K\|_1 \le \|K_1\|_2 \|K_2\|_2 \qquad (6.46) $$
in this case.
Proof. By Cauchy–Schwarz we have
$$ \sum_n |\langle\varphi_n, K\psi_n\rangle| = \sum_n |\langle K_1^*\varphi_n, K_2\psi_n\rangle| \le \left(\sum_n \|K_1^*\varphi_n\|^2 \sum_n \|K_2\psi_n\|^2\right)^{1/2} = \|K_1\|_2 \|K_2\|_2, \qquad (6.47) $$
and hence $K = K_1 K_2$ is trace class if both $K_1$ and $K_2$ are Hilbert–Schmidt operators. To see the converse, let $K$ be given by (6.14) and choose $K_1 = \sum_j \sqrt{s_j(K)}\,\langle\phi_j, \cdot\rangle\hat\phi_j$ respectively $K_2 = \sum_j \sqrt{s_j(K)}\,\langle\phi_j, \cdot\rangle\phi_j$.
Corollary 6.14. The set of trace class operators forms a $*$-ideal in $L(H)$ and
$$ \|KA\|_1 \le \|A\|\|K\|_1 \quad\text{respectively}\quad \|AK\|_1 \le \|A\|\|K\|_1. \qquad (6.48) $$

Proof. Write $K = K_1 K_2$ with $K_1$, $K_2$ Hilbert–Schmidt and use Corollary 6.11.
Now we can also explain the name trace class:

Lemma 6.15. If $K$ is trace class, then for any ONB $\{\varphi_n\}$ the trace
$$ \operatorname{tr}(K) = \sum_n \langle\varphi_n, K\varphi_n\rangle \qquad (6.49) $$
is finite and independent of the ONB.

Moreover, the trace is linear, and if $K_1 \le K_2$ are both trace class we have $\operatorname{tr}(K_1) \le \operatorname{tr}(K_2)$.

Proof. Let $\{\psi_n\}$ be another ONB. If we write $K = K_1 K_2$ with $K_1$, $K_2$ Hilbert–Schmidt, we have
$$ \sum_n \langle\varphi_n, K_1 K_2\varphi_n\rangle = \sum_n \langle K_1^*\varphi_n, K_2\varphi_n\rangle = \sum_{n,m} \langle K_1^*\varphi_n, \psi_m\rangle\langle\psi_m, K_2\varphi_n\rangle = \sum_{m,n} \langle K_2^*\psi_m, \varphi_n\rangle\langle\varphi_n, K_1\psi_m\rangle = \sum_m \langle K_2^*\psi_m, K_1\psi_m\rangle = \sum_m \langle\psi_m, K_2 K_1\psi_m\rangle. \qquad (6.50) $$
Hence the trace is independent of the ONB, and we even have $\operatorname{tr}(K_1 K_2) = \operatorname{tr}(K_2 K_1)$.

The rest follows from linearity of the scalar product and the fact that $K_1 \le K_2$ if and only if $\langle\varphi, K_1\varphi\rangle \le \langle\varphi, K_2\varphi\rangle$ for every $\varphi \in H$.
Clearly, for self-adjoint trace class operators the trace is the sum over all eigenvalues (counted with their multiplicity). To see this you just have to choose the ONB to consist of eigenfunctions. This is even true for all trace class operators and is known as the Lidskii trace theorem (see [17] or [6] for an easy to read introduction).
Problem 6.3. Let $H = \ell^2(\mathbb{N})$ and let $A$ be multiplication by a sequence $a(n)$. Show that $A \in \mathcal{J}_p(\ell^2(\mathbb{N}))$ if and only if $a \in \ell^p(\mathbb{N})$. Furthermore, show that $\|A\|_p = \|a\|_p$ in this case.

Problem 6.4. Show that $A \ge 0$ is trace class if (6.49) is finite for one (and hence all) ONB. (Hint: $A$ is self-adjoint (why?) and $A = \sqrt{A}\sqrt{A}$.)

Problem 6.5. Show that for an orthogonal projection $P$ we have
$$ \dim\operatorname{Ran}(P) = \operatorname{tr}(P), $$
where we set $\operatorname{tr}(P) = \infty$ if (6.49) is infinite (for one and hence all ONB by the previous problem).

Problem 6.6. Show that for $K \in C(H)$ we have
$$ |K| = \sum_j s_j \langle\phi_j, \cdot\rangle\phi_j, $$
where $|K| = \sqrt{K^*K}$. Conclude that
$$ \|K\|_p = (\operatorname{tr}(|K|^p))^{1/p}. $$

Problem 6.7. Show that $K: \ell^2(\mathbb{N}) \to \ell^2(\mathbb{N})$, $f(n) \mapsto \sum_{j\in\mathbb{N}} k(n + j)f(j)$, is Hilbert–Schmidt if $|k(n)| \le C(n)$, where $C(n)$ is decreasing and summable.
6.4. Relatively compact operators and Weyl's theorem

In the previous section we have seen that the sum of a self-adjoint and a symmetric operator is again self-adjoint if the perturbing operator is small. In this section we want to study the influence of perturbations on the spectrum. Our hope is that at least some parts of the spectrum remain invariant.

Let $A$ be self-adjoint. Note that if we add a multiple of the identity to $A$, we shift the entire spectrum. Hence, in general, we cannot expect a (relatively) bounded perturbation to leave any part of the spectrum invariant. Next, if $\lambda_0$ is in the discrete spectrum, we can easily remove this eigenvalue with a finite rank perturbation of arbitrarily small norm. In fact, consider
$$ A + \varepsilon P_A(\{\lambda_0\}). \qquad (6.51) $$
Hence our only hope is that the remainder, namely the essential spectrum, is stable under finite rank perturbations. To show this, we first need a good criterion for a point to be in the essential spectrum of $A$.
Lemma 6.16 (Weyl criterion). A point $\lambda$ is in the essential spectrum of a self-adjoint operator $A$ if and only if there is a sequence $\psi_n$ such that $\|\psi_n\| = 1$, $\psi_n$ converges weakly to $0$, and $\|(A - \lambda)\psi_n\| \to 0$. Moreover, the sequence can be chosen to be orthonormal. Such a sequence is called a singular Weyl sequence.

Proof. Let $\psi_n$ be a singular Weyl sequence for the point $\lambda_0$. By Lemma 2.12 we have $\lambda_0 \in \sigma(A)$, and hence it suffices to show $\lambda_0 \notin \sigma_d(A)$. If $\lambda_0 \in \sigma_d(A)$ we can find an $\varepsilon > 0$ such that $P_\varepsilon = P_A((\lambda_0 - \varepsilon, \lambda_0 + \varepsilon))$ is finite rank. Consider $\tilde\psi_n = P_\varepsilon\psi_n$. Clearly $\tilde\psi_n$ converges weakly to zero and $\|(A - \lambda_0)\tilde\psi_n\| \to 0$. Moreover,
$$ \|\psi_n - \tilde\psi_n\|^2 = \int_{\mathbb{R}\setminus(\lambda_0-\varepsilon,\lambda_0+\varepsilon)} d\mu_{\psi_n}(\lambda) \le \frac{1}{\varepsilon^2}\int_{\mathbb{R}\setminus(\lambda_0-\varepsilon,\lambda_0+\varepsilon)} (\lambda - \lambda_0)^2\, d\mu_{\psi_n}(\lambda) \le \frac{1}{\varepsilon^2}\|(A - \lambda_0)\psi_n\|^2, \qquad (6.52) $$
and hence $\|\tilde\psi_n\| \to 1$. Thus $\varphi_n = \|\tilde\psi_n\|^{-1}\tilde\psi_n$ is also a singular Weyl sequence. But $\varphi_n$ is a sequence of unit length vectors which lives in a finite dimensional space and converges to $0$ weakly, a contradiction.

Conversely, if $\lambda_0 \in \sigma_{ess}(A)$, consider $P_n = P_A([\lambda_0 - \frac{1}{n}, \lambda_0 - \frac{1}{n+1}) \cup (\lambda_0 + \frac{1}{n+1}, \lambda_0 + \frac{1}{n}])$. Then $\operatorname{rank}(P_{n_j}) > 0$ for an infinite subsequence $n_j$. Now pick $\psi_j \in \operatorname{Ran} P_{n_j}$.
Now let $K$ be a self-adjoint compact operator and $\psi_n$ a singular Weyl sequence for $A$. Then $\psi_n$ converges weakly to zero and hence
$$ \|(A + K - \lambda)\psi_n\| \le \|(A - \lambda)\psi_n\| + \|K\psi_n\| \to 0, \qquad (6.53) $$
since $\|(A - \lambda)\psi_n\| \to 0$ by assumption and $\|K\psi_n\| \to 0$ by Lemma 6.8 (iii). Hence $\sigma_{ess}(A) \subseteq \sigma_{ess}(A + K)$. Reversing the roles of $A + K$ and $A$ shows $\sigma_{ess}(A + K) = \sigma_{ess}(A)$. Since we have shown that we can remove any point in the discrete spectrum by a self-adjoint finite rank operator, we obtain the following equivalent characterization of the essential spectrum.

Lemma 6.17. The essential spectrum of a self-adjoint operator $A$ is precisely the part which is invariant under rank-one perturbations. In particular,
$$ \sigma_{ess}(A) = \bigcap_{K\in C(H),\, K^* = K} \sigma(A + K). \qquad (6.54) $$
There is an even larger class of operators under which the essential spectrum is invariant.

Theorem 6.18 (Weyl). Suppose $A$ and $B$ are self-adjoint operators. If
$$ R_A(z) - R_B(z) \in C(H) \qquad (6.55) $$
for one $z \in \rho(A) \cap \rho(B)$, then
$$ \sigma_{ess}(A) = \sigma_{ess}(B). \qquad (6.56) $$

Proof. In fact, suppose $\lambda \in \sigma_{ess}(A)$ and let $\psi_n$ be a corresponding singular Weyl sequence. Then $(R_A(z) - \frac{1}{\lambda - z})\psi_n = \frac{R_A(z)}{z - \lambda}(A - \lambda)\psi_n$ and thus $\|(R_A(z) - \frac{1}{\lambda - z})\psi_n\| \to 0$. Moreover, by our assumption we also have $\|(R_B(z) - \frac{1}{\lambda - z})\psi_n\| \to 0$ and thus $\|(B - \lambda)\varphi_n\| \to 0$, where $\varphi_n = R_B(z)\psi_n$. Since $\lim_{n\to\infty}\|\varphi_n\| = \lim_{n\to\infty}\|R_A(z)\psi_n\| = |\lambda - z|^{-1} \ne 0$ (since $\|(R_A(z) - \frac{1}{\lambda - z})\psi_n\| = \|\frac{1}{\lambda - z} R_A(z)(A - \lambda)\psi_n\| \to 0$), we obtain a singular Weyl sequence for $B$, showing $\lambda \in \sigma_{ess}(B)$. Now interchange the roles of $A$ and $B$.
As a first consequence note the following result.

Theorem 6.19. Suppose $A$ is symmetric with equal finite defect indices. Then all self-adjoint extensions have the same essential spectrum.

Proof. By Lemma 2.27 the resolvent difference of two self-adjoint extensions is a finite rank operator if the defect indices are finite.
In addition, the following result is of interest.

Lemma 6.20. Suppose
$$ R_A(z) - R_B(z) \in C(H) \qquad (6.57) $$
for one $z \in \rho(A) \cap \rho(B)$. Then this holds for all $z \in \rho(A) \cap \rho(B)$. In addition, if $A$ and $B$ are self-adjoint, then
$$ f(A) - f(B) \in C(H) \qquad (6.58) $$
for any $f \in C_\infty(\mathbb{R})$.

Proof. If the condition holds for one $z$, it holds for all, since we have (using both resolvent formulas)
$$ R_A(z') - R_B(z') = (1 - (z - z')R_B(z'))(R_A(z) - R_B(z))(1 - (z - z')R_A(z')). \qquad (6.59) $$
Let $A$ and $B$ be self-adjoint. The set of all functions $f$ for which the claim holds is a closed $*$-subalgebra of $C_\infty(\mathbb{R})$ (with sup norm). Hence the claim follows from Lemma 4.4.
Remember that we have called $K$ relatively compact with respect to $A$ if $K R_A(z)$ is compact (for one and hence for all $z$), and note that the resolvent difference $R_{A+K}(z) - R_A(z)$ is compact if $K$ is relatively compact. In particular, Theorem 6.18 applies if $B = A + K$, where $K$ is relatively compact.

For later use observe that the set of all operators which are relatively compact with respect to $A$ forms a linear space (since compact operators do) and relatively compact operators have $A$-bound zero.

Lemma 6.21. Let $A$ be self-adjoint and suppose $K$ is relatively compact with respect to $A$. Then the $A$-bound of $K$ is zero.
Proof. Write
$$ K R_A(\lambda i) = (K R_A(i))((A + i)R_A(\lambda i)) \qquad (6.60) $$
and observe that the first operator is compact and the second is normal and converges strongly to $0$ (by the spectral theorem). Hence the claim follows from Lemma 6.3 and the discussion after Lemma 6.8 (since $R_A$ is normal).
In addition, note the following result which is a straightforward consequence of the second resolvent identity.

Lemma 6.22. Suppose $A$ is self-adjoint and $B$ is symmetric with $A$-bound less than one. If $K$ is relatively compact with respect to $A$, then it is also relatively compact with respect to $A + B$.

Proof. Since $B$ is $A$-bounded with $A$-bound less than one, we can choose a $z \in \mathbb{C}$ such that $\|B R_A(z)\| < 1$. Hence
$$ B R_{A+B}(z) = B R_A(z)(I + B R_A(z))^{-1} \qquad (6.61) $$
shows that $B$ is also $(A+B)$-bounded, and the result follows from
$$ K R_{A+B}(z) = K R_A(z)(I - B R_{A+B}(z)), \qquad (6.62) $$
since $K R_A(z)$ is compact and $B R_{A+B}(z)$ is bounded.
Problem 6.8. Show that $A = -\frac{d^2}{dx^2} + q(x)$, $D(A) = H^2(\mathbb{R})$, is self-adjoint if $q \in L^\infty(\mathbb{R})$. Show that if $-u''(x) + q(x)u(x) = zu(x)$ has a solution for which $u$ and $u'$ are bounded near $+\infty$ (or $-\infty$) but $u$ is not square integrable near $+\infty$ (or $-\infty$), then $z \in \sigma_{ess}(A)$. (Hint: Use $u$ to construct a Weyl sequence by restricting it to a compact set. Now modify your construction to get a singular Weyl sequence by observing that functions with disjoint support are orthogonal.)
6.5. Strong and norm resolvent convergence
Suppose $A_n$ and $A$ are self-adjoint operators. We say that $A_n$ converges to $A$ in norm respectively strong resolvent sense if
\[
\lim_{n\to\infty} R_{A_n}(z) = R_A(z) \quad\text{respectively}\quad \operatorname*{s-lim}_{n\to\infty} R_{A_n}(z) = R_A(z) \tag{6.63}
\]
for one $z \in \Gamma = \mathbb{C}\backslash\Sigma$, $\Sigma = \sigma(A) \cup \bigcup_n \sigma(A_n)$.
Using the Stone–Weierstraß theorem we obtain as a first consequence

Theorem 6.23. Suppose $A_n$ converges to $A$ in norm resolvent sense, then $f(A_n)$ converges to $f(A)$ in norm for any bounded continuous function $f: \Sigma \to \mathbb{C}$ with $\lim_{\lambda\to-\infty} f(\lambda) = \lim_{\lambda\to\infty} f(\lambda)$. If $A_n$ converges to $A$ in strong resolvent sense, then $f(A_n)$ converges to $f(A)$ strongly for any bounded continuous function $f: \Sigma \to \mathbb{C}$.
Proof. The set of functions for which the claim holds clearly forms a $*$-algebra (since resolvents are normal, taking adjoints is continuous even with respect to strong convergence) and since it contains $f(\lambda) = 1$ and $f(\lambda) = \frac{1}{\lambda - z_0}$ this $*$-algebra is dense by the Stone–Weierstraß theorem. The usual $\frac{\varepsilon}{3}$ argument shows that this $*$-algebra is also closed.

To see the last claim let $\chi_m$ be a compactly supported continuous function ($0 \le \chi_m \le 1$) which is one on the interval $[-m, m]$. Then $f(A_n)\chi_m(A_n) \stackrel{s}{\to} f(A)\chi_m(A)$ by the first part and hence
\begin{align}
\|(f(A_n) - f(A))\psi\| \le{}& \|f(A_n)\|\,\|(1 - \chi_m(A_n))\psi\| \nonumber\\
&+ \|f(A_n)\|\,\|(\chi_m(A_n) - \chi_m(A))\psi\| \nonumber\\
&+ \|(f(A_n)\chi_m(A_n) - f(A)\chi_m(A))\psi\| \nonumber\\
&+ \|f(A)\|\,\|(1 - \chi_m(A))\psi\| \tag{6.64}
\end{align}
can be made arbitrarily small since $\|f(\cdot)\| \le \|f\|_\infty$ and $\chi_m(\cdot) \stackrel{s}{\to} \mathbb{I}$ by Theorem 3.1.
As a consequence note that the point $z \in \Gamma$ is of no importance:

Corollary 6.24. Suppose $A_n$ converges to $A$ in norm or strong resolvent sense for one $z_0 \in \Gamma$, then this holds for all $z \in \Gamma$.
and that we have

Corollary 6.25. Suppose $A_n$ converges to $A$ in strong resolvent sense, then
\[
e^{\mathrm{i}tA_n} \stackrel{s}{\to} e^{\mathrm{i}tA}, \quad t \in \mathbb{R}, \tag{6.65}
\]
and if all operators are semi-bounded
\[
e^{-tA_n} \stackrel{s}{\to} e^{-tA}, \quad t \ge 0. \tag{6.66}
\]
Finally we need some good criteria to check for norm respectively strong
resolvent convergence.
Lemma 6.26. Let $A_n$, $A$ be self-adjoint operators with $D(A_n) = D(A)$. Then $A_n$ converges to $A$ in norm resolvent sense if there are sequences $a_n$ and $b_n$ converging to zero such that
\[
\|(A_n - A)\psi\| \le a_n\|\psi\| + b_n\|A\psi\|, \quad \psi \in D(A) = D(A_n). \tag{6.67}
\]
Proof. From the second resolvent identity
\[
R_{A_n}(z) - R_A(z) = R_{A_n}(z)(A - A_n)R_A(z) \tag{6.68}
\]
we infer
\[
\|(R_{A_n}(\mathrm{i}) - R_A(\mathrm{i}))\psi\| \le \|R_{A_n}(\mathrm{i})\|\bigl(a_n\|R_A(\mathrm{i})\psi\| + b_n\|AR_A(\mathrm{i})\psi\|\bigr) \le (a_n + b_n)\|\psi\| \tag{6.69}
\]
and hence $\|R_{A_n}(\mathrm{i}) - R_A(\mathrm{i})\| \le a_n + b_n \to 0$.
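The estimate (6.69) is easy to watch at work in finite dimensions, where every symmetric matrix is self-adjoint. The following sketch (assuming NumPy; the random matrices and the $1/n$ perturbation are invented for illustration) checks that the resolvent difference at $z = \mathrm{i}$ decays together with the perturbation:

```python
import numpy as np

# Hypothetical finite-dimensional illustration of Lemma 6.26: for matrices,
# A_n - A -> 0 in norm forces the resolvents R_{A_n}(i), R_A(i) together.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = (B + B.T) / 2                      # a self-adjoint (symmetric) matrix
P = rng.standard_normal((5, 5))
P = (P + P.T) / 2                      # self-adjoint perturbation

def resolvent(M, z):
    """R_M(z) = (M - z)^{-1}."""
    return np.linalg.inv(M - z * np.eye(M.shape[0]))

errs = []
for n in [1, 10, 100, 1000]:
    A_n = A + P / n                    # a_n = ||P||/n, b_n = 0 in (6.67)
    diff = resolvent(A_n, 1j) - resolvent(A, 1j)
    errs.append(np.linalg.norm(diff, 2))

# the resolvent difference decays with the perturbation, as (6.69) predicts
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))
```

The decay is first order in $1/n$ here since $b_n = 0$ and $\|R(\mathrm{i})\| \le 1$ for self-adjoint matrices.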
In particular, norm convergence implies norm resolvent convergence:

Corollary 6.27. Let $A_n$, $A$ be bounded self-adjoint operators with $A_n \to A$, then $A_n$ converges to $A$ in norm resolvent sense.
Similarly, if no domain problems get in the way, strong convergence implies strong resolvent convergence:

Lemma 6.28. Let $A_n$, $A$ be self-adjoint operators. Then $A_n$ converges to $A$ in strong resolvent sense if there is a core $D_0$ of $A$ such that for any $\psi \in D_0$ we have $\psi \in D(A_n)$ for $n$ sufficiently large and $A_n\psi \to A\psi$.
Proof. Using the second resolvent identity we have
\[
\|(R_{A_n}(\mathrm{i}) - R_A(\mathrm{i}))\psi\| \le \|(A - A_n)R_A(\mathrm{i})\psi\| \to 0 \tag{6.70}
\]
for $\psi \in (A - \mathrm{i})D_0$, which is dense, since $D_0$ is a core. The rest follows from Lemma 1.13.
If you wonder why we did not deﬁne weak resolvent convergence, here
is the answer: it is equivalent to strong resolvent convergence.
Lemma 6.29. Suppose $\operatorname*{w-lim}_{n\to\infty} R_{A_n}(z) = R_A(z)$ for some $z \in \Gamma$, then also $\operatorname*{s-lim}_{n\to\infty} R_{A_n}(z) = R_A(z)$.
Proof. Since $R_{A_n}(z)$ converges weakly to $R_A(z)$, so does $R_{A_n}(z)^*$ to $R_A(z)^*$, and thus by the first resolvent identity
\begin{align}
\|R_{A_n}(z)\psi\|^2 - \|R_A(z)\psi\|^2 &= \langle\psi, R_{A_n}(z^*)R_{A_n}(z)\psi - R_A(z^*)R_A(z)\psi\rangle \nonumber\\
&= \frac{1}{z - z^*}\langle\psi, \bigl(R_{A_n}(z) - R_{A_n}(z^*) - R_A(z) + R_A(z^*)\bigr)\psi\rangle \to 0. \tag{6.71}
\end{align}
Together with the weak convergence $R_{A_n}(z)\psi \rightharpoonup R_A(z)\psi$ we have $R_{A_n}(z)\psi \to R_A(z)\psi$ by virtue of Lemma 1.11 (iv).
Now what can we say about the spectrum?
Theorem 6.30. Let $A_n$ and $A$ be self-adjoint operators. If $A_n$ converges to $A$ in strong resolvent sense we have $\sigma(A) \subseteq \lim_{n\to\infty}\sigma(A_n)$. If $A_n$ converges to $A$ in norm resolvent sense we have $\sigma(A) = \lim_{n\to\infty}\sigma(A_n)$.
Proof. Suppose the first claim were wrong. Then we can find a $\lambda \in \sigma(A)$ and some $\varepsilon > 0$ such that $\sigma(A_n) \cap (\lambda - \varepsilon, \lambda + \varepsilon) = \emptyset$. Choose a bounded continuous function $f$ which is one on $(\lambda - \frac{\varepsilon}{2}, \lambda + \frac{\varepsilon}{2})$ and vanishes outside $(\lambda - \varepsilon, \lambda + \varepsilon)$. Then $f(A_n) = 0$ and hence $f(A)\psi = \lim f(A_n)\psi = 0$ for every $\psi$. On the other hand, since $\lambda \in \sigma(A)$ there is a nonzero $\psi \in \operatorname{Ran}P_A((\lambda - \frac{\varepsilon}{2}, \lambda + \frac{\varepsilon}{2}))$ implying $f(A)\psi = \psi$, a contradiction.
To see the second claim, recall that the norm of $R_A(z)$ is just one over the distance from the spectrum. In particular, $\lambda \notin \sigma(A)$ if and only if $\|R_A(\lambda + \mathrm{i})\| < 1$. So $\lambda \notin \sigma(A)$ implies $\|R_A(\lambda + \mathrm{i})\| < 1$, which implies $\|R_{A_n}(\lambda + \mathrm{i})\| < 1$ for $n$ sufficiently large, which implies $\lambda \notin \sigma(A_n)$ for $n$ sufficiently large.
Note that the spectrum can contract if we only have convergence in strong resolvent sense: Let $A_n$ be multiplication by $\frac{1}{n}x$ in $L^2(\mathbb{R})$. Then $A_n$ converges to $0$ in strong resolvent sense, but $\sigma(A_n) = \mathbb{R}$ and $\sigma(0) = \{0\}$.
Lemma 6.31. Suppose $A_n$ converges in strong resolvent sense to $A$. If $P_A(\{\lambda\}) = 0$, then
\[
\operatorname*{s-lim}_{n\to\infty} P_{A_n}((-\infty, \lambda)) = \operatorname*{s-lim}_{n\to\infty} P_{A_n}((-\infty, \lambda]) = P_A((-\infty, \lambda)) = P_A((-\infty, \lambda]). \tag{6.72}
\]
Proof. The idea is to approximate $\chi_{(-\infty,\lambda)}$ by a continuous function $f$ with $0 \le f \le \chi_{(-\infty,\lambda)}$. Then
\begin{align}
\|(P_A((-\infty, \lambda)) - P_{A_n}((-\infty, \lambda)))\psi\| \le{}& \|(P_A((-\infty, \lambda)) - f(A))\psi\| \nonumber\\
&+ \|(f(A) - f(A_n))\psi\| + \|(f(A_n) - P_{A_n}((-\infty, \lambda)))\psi\|. \tag{6.73}
\end{align}
The first term can be made arbitrarily small if we let $f$ converge pointwise to $\chi_{(-\infty,\lambda)}$ (Theorem 3.1) and the same is true for the second if we choose $n$ large (Theorem 6.23). However, the third term can only be made small for fixed $n$. To overcome this problem let us choose another continuous function $g$ with $\chi_{(-\infty,\lambda]} \le g \le 1$. Then
\[
\|(f(A_n) - P_{A_n}((-\infty, \lambda)))\psi\| \le \|(g(A_n) - f(A_n))\psi\| \tag{6.74}
\]
since $f \le \chi_{(-\infty,\lambda)} \le \chi_{(-\infty,\lambda]} \le g$. Furthermore,
\[
\|(g(A_n) - f(A_n))\psi\| \le \|(g(A_n) - g(A))\psi\| + \|(g(A) - f(A))\psi\| + \|(f(A) - f(A_n))\psi\| \tag{6.75}
\]
and now all terms are under control. Since we can replace $P_\cdot((-\infty, \lambda))$ by $P_\cdot((-\infty, \lambda])$ in all calculations we are done.
Using $P((\lambda_0, \lambda_1)) = P((-\infty, \lambda_1)) - P((-\infty, \lambda_0])$ we also obtain
Corollary 6.32. Suppose $A_n$ converges in strong resolvent sense to $A$. If $P_A(\{\lambda_0\}) = P_A(\{\lambda_1\}) = 0$, then
\[
\operatorname*{s-lim}_{n\to\infty} P_{A_n}((\lambda_0, \lambda_1)) = \operatorname*{s-lim}_{n\to\infty} P_{A_n}([\lambda_0, \lambda_1]) = P_A((\lambda_0, \lambda_1)) = P_A([\lambda_0, \lambda_1]). \tag{6.76}
\]
Example. The following example shows that the requirement $P_A(\{\lambda\}) = 0$ is crucial, even if we have bounded operators and norm convergence. In fact, let $H = \mathbb{C}^2$ and
\[
A_n = \frac{1}{n}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{6.77}
\]
Then $A_n \to 0$ and
\[
P_{A_n}((-\infty, 0)) = P_{A_n}((-\infty, 0]) = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \tag{6.78}
\]
but $P_0((-\infty, 0)) = 0$ and $P_0((-\infty, 0]) = \mathbb{I}$.
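The example (6.77)–(6.78) can be checked directly with a few lines of NumPy (a hypothetical numerical companion, not part of the text):

```python
import numpy as np

# Numerical companion to the C^2 example (6.77)-(6.78): the spectral
# projections P_{A_n}((-inf,0]) do not converge to P_0((-inf,0]) = I
# because 0 is an eigenvalue of the limit (P_A({0}) != 0).
def proj_leq(M, lam):
    """Spectral projection P_M((-inf, lam]) of a symmetric matrix M."""
    w, V = np.linalg.eigh(M)
    keep = w <= lam
    return V[:, keep] @ V[:, keep].T

for n in [1, 10, 100]:
    A_n = np.diag([1.0, -1.0]) / n
    P_n = proj_leq(A_n, 0.0)
    # for every n this is the rank-one projection onto the second basis vector
    assert np.allclose(P_n, np.diag([0.0, 1.0]))

P_limit = proj_leq(np.zeros((2, 2)), 0.0)   # P_0((-inf,0]) = I
assert np.allclose(P_limit, np.eye(2))
```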
Problem 6.9. Show that for self-adjoint operators, strong resolvent convergence is equivalent to convergence with respect to the metric
\[
d(A, B) = \sum_{n\in\mathbb{N}}\frac{1}{2^n}\|(R_A(\mathrm{i}) - R_B(\mathrm{i}))\varphi_n\|, \tag{6.79}
\]
where $\{\varphi_n\}_{n\in\mathbb{N}}$ is some (fixed) ONB.
Problem 6.10 (Weak convergence of spectral measures). Suppose $A_n \to A$ in strong resolvent sense and let $\mu_\psi$, $\mu_{\psi,n}$ be the corresponding spectral measures. Show that
\[
\int f(\lambda)\,d\mu_{\psi,n}(\lambda) \to \int f(\lambda)\,d\mu_\psi(\lambda) \tag{6.80}
\]
for every bounded continuous $f$. Give a counterexample when $f$ is not continuous.
Part 2
Schrödinger Operators
Chapter 7

The free Schrödinger operator

7.1. The Fourier transform
We ﬁrst review some basic facts concerning the Fourier transform which
will be needed in the following section.
Let $C^\infty(\mathbb{R}^n)$ be the set of all complex-valued functions which have partial derivatives of arbitrary order. For $f \in C^\infty(\mathbb{R}^n)$ and $\alpha \in \mathbb{N}^n_0$ we set
\[
\partial_\alpha f = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1}\cdots\partial x_n^{\alpha_n}}, \quad x^\alpha = x_1^{\alpha_1}\cdots x_n^{\alpha_n}, \quad |\alpha| = \alpha_1 + \cdots + \alpha_n. \tag{7.1}
\]
An element $\alpha \in \mathbb{N}^n_0$ is called a multi-index and $|\alpha|$ is called its order. Recall the Schwartz space
\[
\mathcal{S}(\mathbb{R}^n) = \{f \in C^\infty(\mathbb{R}^n) \,|\, \sup_x|x^\alpha(\partial_\beta f)(x)| < \infty,\ \alpha, \beta \in \mathbb{N}^n_0\} \tag{7.2}
\]
which is dense in $L^2(\mathbb{R}^n)$ (since $C_c^\infty(\mathbb{R}^n) \subset \mathcal{S}(\mathbb{R}^n)$ is). For $f \in \mathcal{S}(\mathbb{R}^n)$ we define
\[
\mathcal{F}(f)(p) \equiv \hat f(p) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}e^{-\mathrm{i}px}f(x)\,d^n x. \tag{7.3}
\]
Then it is an exercise in partial integration to prove

Lemma 7.1. For any multi-index $\alpha \in \mathbb{N}^n_0$ and any $f \in \mathcal{S}(\mathbb{R}^n)$ we have
\[
(\partial_\alpha f)^\wedge(p) = (\mathrm{i}p)^\alpha\hat f(p), \quad (x^\alpha f(x))^\wedge(p) = \mathrm{i}^{|\alpha|}\partial_\alpha\hat f(p). \tag{7.4}
\]
Hence we will sometimes write $pf(x)$ for $-\mathrm{i}\partial f(x)$, where $\partial = (\partial_1, \dots, \partial_n)$ is the gradient.
In particular $\mathcal{F}$ maps $\mathcal{S}(\mathbb{R}^n)$ into itself. Another useful property is the convolution formula.

Lemma 7.2. The Fourier transform of the convolution
\[
(f * g)(x) = \int_{\mathbb{R}^n}f(y)g(x - y)\,d^n y = \int_{\mathbb{R}^n}f(x - y)g(y)\,d^n y \tag{7.5}
\]
of two functions $f, g \in \mathcal{S}(\mathbb{R}^n)$ is given by
\[
(f * g)^\wedge(p) = (2\pi)^{n/2}\hat f(p)\hat g(p). \tag{7.6}
\]
Proof. We compute
\begin{align}
(f * g)^\wedge(p) &= \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}e^{-\mathrm{i}px}\int_{\mathbb{R}^n}f(y)g(x - y)\,d^n y\,d^n x \nonumber\\
&= \int_{\mathbb{R}^n}e^{-\mathrm{i}py}f(y)\,\frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}e^{-\mathrm{i}p(x-y)}g(x - y)\,d^n x\,d^n y \nonumber\\
&= \int_{\mathbb{R}^n}e^{-\mathrm{i}py}f(y)\hat g(p)\,d^n y = (2\pi)^{n/2}\hat f(p)\hat g(p), \tag{7.7}
\end{align}
where we have used Fubini's theorem.
Next, we want to compute the inverse of the Fourier transform. For this the following lemma will be needed.

Lemma 7.3. We have $e^{-zx^2/2} \in \mathcal{S}(\mathbb{R}^n)$ for $\mathrm{Re}(z) > 0$ and
\[
\mathcal{F}(e^{-zx^2/2})(p) = \frac{1}{z^{n/2}}e^{-p^2/(2z)}. \tag{7.8}
\]
Here $z^{n/2}$ has to be understood as $(\sqrt{z})^n$, where the branch cut of the root is chosen along the negative real axis.
Proof. Due to the product structure of the exponential, one can treat each coordinate separately, reducing the problem to the case $n = 1$.

Let $\phi_z(x) = \exp(-zx^2/2)$. Then $\phi_z'(x) + zx\phi_z(x) = 0$ and hence $\mathrm{i}(p\hat\phi_z(p) + z\hat\phi_z'(p)) = 0$. Thus $\hat\phi_z(p) = c\phi_{1/z}(p)$ and (Problem 7.1)
\[
c = \hat\phi_z(0) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}\exp(-zx^2/2)\,dx = \frac{1}{\sqrt{z}} \tag{7.9}
\]
at least for $z > 0$. However, since the integral is holomorphic for $\mathrm{Re}(z) > 0$, this holds for all $z$ with $\mathrm{Re}(z) > 0$ if we choose the branch cut of the root along the negative real axis.
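Formula (7.8) is easy to test numerically. The following sketch (assuming NumPy; grid and interval are ad hoc choices) approximates the integral (7.3) for $n = 1$, $z = 1$ by a Riemann sum and compares with $e^{-p^2/2}$:

```python
import numpy as np

# Sanity check of Lemma 7.3 in one dimension with z = 1: the transform of
# exp(-x^2/2) should again be exp(-p^2/2), see eq. (7.8).
x = np.linspace(-20.0, 20.0, 4001)
f = np.exp(-x**2 / 2)
h = x[1] - x[0]

def fourier(p):
    """F(f)(p) = (2*pi)^(-1/2) * integral of exp(-i*p*x) f(x) dx, eq. (7.3)."""
    return np.sum(np.exp(-1j * p * x) * f) * h / np.sqrt(2 * np.pi)

for p in [0.0, 0.5, 1.0, 2.0]:
    assert abs(fourier(p) - np.exp(-p**2 / 2)) < 1e-8
```

Because the integrand is a rapidly decaying entire function, the equispaced sum converges far faster than its nominal second order.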
Now we can show

Theorem 7.4. The Fourier transform $\mathcal{F}: \mathcal{S}(\mathbb{R}^n) \to \mathcal{S}(\mathbb{R}^n)$ is a bijection. Its inverse is given by
\[
\mathcal{F}^{-1}(g)(x) \equiv \check g(x) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}e^{\mathrm{i}px}g(p)\,d^n p. \tag{7.10}
\]
We have $\mathcal{F}^2(f)(x) = f(-x)$ and thus $\mathcal{F}^4 = \mathbb{I}$.
Proof. It suffices to show $\mathcal{F}^2(f)(x) = f(-x)$. Consider $\phi_z(x)$ from the proof of the previous lemma and observe $\mathcal{F}^2(\phi_z)(x) = \phi_z(-x)$. Moreover, using Fubini this even implies $\mathcal{F}^2(f_\varepsilon)(x) = f_\varepsilon(-x)$ for any $\varepsilon > 0$, where
\[
f_\varepsilon(x) = \frac{1}{\varepsilon^n}\int_{\mathbb{R}^n}\phi_{1/\varepsilon^2}(x - y)f(y)\,d^n y. \tag{7.11}
\]
Since $\lim_{\varepsilon\downarrow 0}f_\varepsilon(x) = f(x)$ for every $x \in \mathbb{R}^n$ (Problem 7.2), we infer from dominated convergence $\mathcal{F}^2(f)(x) = \lim_{\varepsilon\downarrow 0}\mathcal{F}^2(f_\varepsilon)(x) = \lim_{\varepsilon\downarrow 0}f_\varepsilon(-x) = f(-x)$.
From Fubini's theorem we also obtain Parseval's identity
\begin{align}
\int_{\mathbb{R}^n}|\hat f(p)|^2\,d^n p &= \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}f(x)^*\hat f(p)e^{\mathrm{i}px}\,d^n p\,d^n x \nonumber\\
&= \int_{\mathbb{R}^n}|f(x)|^2\,d^n x \tag{7.12}
\end{align}
for $f \in \mathcal{S}(\mathbb{R}^n)$ and thus we can extend $\mathcal{F}$ to a unitary operator:
Theorem 7.5. The Fourier transform $\mathcal{F}$ extends to a unitary operator $\mathcal{F}: L^2(\mathbb{R}^n) \to L^2(\mathbb{R}^n)$. Its spectrum satisfies
\[
\sigma(\mathcal{F}) = \{z \in \mathbb{C} \,|\, z^4 = 1\} = \{1, -1, \mathrm{i}, -\mathrm{i}\}. \tag{7.13}
\]
Proof. It remains to compute the spectrum. In fact, if $\psi_n$ is a Weyl sequence, then $(\mathcal{F}^2 + z^2)(\mathcal{F} + z)(\mathcal{F} - z)\psi_n = (\mathcal{F}^4 - z^4)\psi_n = (1 - z^4)\psi_n \to 0$ implies $z^4 = 1$. Hence $\sigma(\mathcal{F}) \subseteq \{z \in \mathbb{C} \,|\, z^4 = 1\}$. We defer the proof for equality to Section 8.3, where we will explicitly compute an orthonormal basis of eigenfunctions.
Lemma 7.1 also allows us to extend differentiation to a larger class. Let us introduce the Sobolev space
\[
H^r(\mathbb{R}^n) = \{f \in L^2(\mathbb{R}^n) \,|\, |p|^r\hat f(p) \in L^2(\mathbb{R}^n)\}. \tag{7.14}
\]
We will abbreviate
\[
\partial_\alpha f = \bigl((\mathrm{i}p)^\alpha\hat f(p)\bigr)^\vee, \quad f \in H^r(\mathbb{R}^n),\ |\alpha| \le r, \tag{7.15}
\]
which implies
\[
\int_{\mathbb{R}^n}g(x)(\partial_\alpha f)(x)\,d^n x = (-1)^{|\alpha|}\int_{\mathbb{R}^n}(\partial_\alpha g)(x)f(x)\,d^n x \tag{7.16}
\]
for $f \in H^r(\mathbb{R}^n)$ and $g \in \mathcal{S}(\mathbb{R}^n)$. That is, $\partial_\alpha f$ is the derivative of $f$ in the sense of distributions.
Finally, we have the Riemann-Lebesgue lemma.

Lemma 7.6 (Riemann-Lebesgue). Let $C_\infty(\mathbb{R}^n)$ denote the Banach space of all continuous functions $f: \mathbb{R}^n \to \mathbb{C}$ which vanish at $\infty$ equipped with the sup norm. Then the Fourier transform is a bounded map from $L^1(\mathbb{R}^n)$ into $C_\infty(\mathbb{R}^n)$ satisfying
\[
\|\hat f\|_\infty \le (2\pi)^{-n/2}\|f\|_1. \tag{7.17}
\]
Proof. Clearly we have $\hat f \in C_\infty(\mathbb{R}^n)$ if $f \in \mathcal{S}(\mathbb{R}^n)$. Moreover, the estimate
\[
\sup_p|\hat f(p)| \le \frac{1}{(2\pi)^{n/2}}\sup_p\int_{\mathbb{R}^n}|e^{-\mathrm{i}px}f(x)|\,d^n x = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}|f(x)|\,d^n x \tag{7.18}
\]
shows $\hat f \in C_\infty(\mathbb{R}^n)$ for arbitrary $f \in L^1(\mathbb{R}^n)$ since $\mathcal{S}(\mathbb{R}^n)$ is dense in $L^1(\mathbb{R}^n)$.
Problem 7.1. Show that $\int_{\mathbb{R}}\exp(-x^2/2)\,dx = \sqrt{2\pi}$. (Hint: Square the integral and evaluate it using polar coordinates.)

Problem 7.2. Extend Lemma 0.32 to the case $u, f \in \mathcal{S}(\mathbb{R}^n)$ (compare Problem 0.17).
7.2. The free Schrödinger operator
In Section 2.1 we have seen that the Hilbert space corresponding to one particle in $\mathbb{R}^3$ is $L^2(\mathbb{R}^3)$. More generally, the Hilbert space for $N$ particles in $\mathbb{R}^d$ is $L^2(\mathbb{R}^n)$, $n = Nd$. The corresponding nonrelativistic Hamilton operator, if the particles do not interact, is given by
\[
H_0 = -\Delta, \tag{7.19}
\]
where $\Delta$ is the Laplace operator
\[
\Delta = \sum_{j=1}^n\frac{\partial^2}{\partial x_j^2}. \tag{7.20}
\]
Our first task is to find a good domain such that $H_0$ is a self-adjoint operator. By Lemma 7.1 we have that
\[
-\Delta\psi(x) = (p^2\hat\psi(p))^\vee(x), \quad \psi \in H^2(\mathbb{R}^n), \tag{7.21}
\]
and hence the operator
\[
H_0\psi = -\Delta\psi, \quad D(H_0) = H^2(\mathbb{R}^n), \tag{7.22}
\]
is unitarily equivalent to the maximally defined multiplication operator
\[
(\mathcal{F}H_0\mathcal{F}^{-1})\varphi(p) = p^2\varphi(p), \quad D(p^2) = \{\varphi \in L^2(\mathbb{R}^n) \,|\, p^2\varphi(p) \in L^2(\mathbb{R}^n)\}. \tag{7.23}
\]
Theorem 7.7. The free Schrödinger operator $H_0$ is self-adjoint and its spectrum is characterized by
\[
\sigma(H_0) = \sigma_{ac}(H_0) = [0, \infty), \quad \sigma_{sc}(H_0) = \sigma_{pp}(H_0) = \emptyset. \tag{7.24}
\]
Proof. It suffices to show that $d\mu_\psi$ is purely absolutely continuous for every $\psi$. First observe that
\[
\langle\psi, R_{H_0}(z)\psi\rangle = \langle\hat\psi, R_{p^2}(z)\hat\psi\rangle = \int_{\mathbb{R}^n}\frac{|\hat\psi(p)|^2}{p^2 - z}\,d^n p = \int_{\mathbb{R}}\frac{1}{r^2 - z}\,d\tilde\mu_\psi(r), \tag{7.25}
\]
where
\[
d\tilde\mu_\psi(r) = \chi_{[0,\infty)}(r)\,r^{n-1}\Bigl(\int_{S^{n-1}}|\hat\psi(r\omega)|^2\,d^{n-1}\omega\Bigr)dr. \tag{7.26}
\]
Hence, after a change of coordinates, we have
\[
\langle\psi, R_{H_0}(z)\psi\rangle = \int_{\mathbb{R}}\frac{1}{\lambda - z}\,d\mu_\psi(\lambda), \tag{7.27}
\]
where
\[
d\mu_\psi(\lambda) = \frac{1}{2}\chi_{[0,\infty)}(\lambda)\,\lambda^{n/2-1}\Bigl(\int_{S^{n-1}}|\hat\psi(\sqrt{\lambda}\,\omega)|^2\,d^{n-1}\omega\Bigr)d\lambda, \tag{7.28}
\]
proving the claim.
Finally, we note that the compactly supported smooth functions are a core for $H_0$.

Lemma 7.8. The set $C_c^\infty(\mathbb{R}^n) = \{f \in \mathcal{S}(\mathbb{R}^n) \,|\, \mathrm{supp}(f)\ \text{is compact}\}$ is a core for $H_0$.
Proof. It is not hard to see that $\mathcal{S}(\mathbb{R}^n)$ is a core (Problem 7.3) and hence it suffices to show that the closure of $H_0\big|_{C_c^\infty(\mathbb{R}^n)}$ contains $H_0\big|_{\mathcal{S}(\mathbb{R}^n)}$. To see this, let $\varphi(x) \in C_c^\infty(\mathbb{R}^n)$ be a function which is one for $|x| \le 1$ and vanishes for $|x| \ge 2$. Set $\varphi_n(x) = \varphi(\frac{1}{n}x)$, then $\psi_n(x) = \varphi_n(x)\psi(x)$ is in $C_c^\infty(\mathbb{R}^n)$ for every $\psi \in \mathcal{S}(\mathbb{R}^n)$ and $\psi_n \to \psi$ respectively $\Delta\psi_n \to \Delta\psi$.
Note also that the quadratic form of $H_0$ is given by
\[
q_{H_0}(\psi) = \sum_{j=1}^n\int_{\mathbb{R}^n}|\partial_j\psi(x)|^2\,d^n x, \quad \psi \in Q(H_0) = H^1(\mathbb{R}^n). \tag{7.29}
\]
Problem 7.3. Show that $\mathcal{S}(\mathbb{R}^n)$ is a core for $H_0$. (Hint: Show that the closure of $H_0\big|_{\mathcal{S}(\mathbb{R}^n)}$ contains $H_0$.)

Problem 7.4. Show that $\{\psi \in \mathcal{S}(\mathbb{R}) \,|\, \psi(0) = 0\}$ is dense but not a core for $H_0 = -\frac{d^2}{dx^2}$.
7.3. The time evolution in the free case
Now let us look at the time evolution. We have
\[
e^{-\mathrm{i}tH_0}\psi(x) = \mathcal{F}^{-1}e^{-\mathrm{i}tp^2}\hat\psi(p). \tag{7.30}
\]
The right hand side is a product and hence our operator should be expressible as an integral operator via the convolution formula. However, since $e^{-\mathrm{i}tp^2}$ is not in $L^2$, a more careful analysis is needed.
Consider
\[
f_\varepsilon(p^2) = e^{-(\mathrm{i}t+\varepsilon)p^2}, \quad \varepsilon > 0. \tag{7.31}
\]
Then $f_\varepsilon(H_0)\psi \to e^{-\mathrm{i}tH_0}\psi$ by Theorem 3.1. Moreover, by Lemma 7.3 and the convolution formula we have
\[
f_\varepsilon(H_0)\psi(x) = \frac{1}{(4\pi(\mathrm{i}t+\varepsilon))^{n/2}}\int_{\mathbb{R}^n}e^{-\frac{|x-y|^2}{4(\mathrm{i}t+\varepsilon)}}\psi(y)\,d^n y \tag{7.32}
\]
and hence
\[
e^{-\mathrm{i}tH_0}\psi(x) = \frac{1}{(4\pi\mathrm{i}t)^{n/2}}\int_{\mathbb{R}^n}e^{\mathrm{i}\frac{|x-y|^2}{4t}}\psi(y)\,d^n y \tag{7.33}
\]
for $t \neq 0$ and $\psi \in L^1 \cap L^2$. For general $\psi \in L^2$ the integral has to be understood as a limit.
Using this explicit form, it is not hard to draw some immediate consequences. For example, if $\psi \in L^2(\mathbb{R}^n) \cap L^1(\mathbb{R}^n)$, then $\psi(t) \in C(\mathbb{R}^n)$ for $t \neq 0$ (use dominated convergence and continuity of the exponential) and it satisfies
\[
\|\psi(t)\|_\infty \le \frac{1}{|4\pi t|^{n/2}}\|\psi(0)\|_1 \tag{7.34}
\]
by the Riemann-Lebesgue lemma. Thus we have spreading of wave functions in this case. Moreover, it is even possible to determine the asymptotic form of the wave function for large $t$ as follows. Observe
\begin{align}
e^{-\mathrm{i}tH_0}\psi(x) &= \frac{e^{\mathrm{i}\frac{x^2}{4t}}}{(4\pi\mathrm{i}t)^{n/2}}\int_{\mathbb{R}^n}e^{\mathrm{i}\frac{y^2}{4t}}\psi(y)e^{-\mathrm{i}\frac{xy}{2t}}\,d^n y \nonumber\\
&= \Bigl(\frac{1}{2\mathrm{i}t}\Bigr)^{n/2}e^{\mathrm{i}\frac{x^2}{4t}}\Bigl(e^{\mathrm{i}\frac{y^2}{4t}}\psi(y)\Bigr)^\wedge\Bigl(\frac{x}{2t}\Bigr). \tag{7.35}
\end{align}
Moreover, since $\exp(\mathrm{i}\frac{y^2}{4t})\psi(y) \to \psi(y)$ in $L^2$ as $|t| \to \infty$ (dominated convergence) we obtain
Lemma 7.9. For any $\psi \in L^2(\mathbb{R}^n)$ we have
\[
e^{-\mathrm{i}tH_0}\psi(x) - \Bigl(\frac{1}{2\mathrm{i}t}\Bigr)^{n/2}e^{\mathrm{i}\frac{x^2}{4t}}\hat\psi\Bigl(\frac{x}{2t}\Bigr) \to 0 \tag{7.36}
\]
in $L^2$ as $|t| \to \infty$.
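The propagator formula (7.33) can be tested against a closed form: for the Gaussian $\psi(x) = e^{-x^2/2}$ in one dimension, (7.30) and Lemma 7.3 give $e^{-\mathrm{i}tH_0}\psi(x) = (1+2\mathrm{i}t)^{-1/2}e^{-x^2/(2(1+2\mathrm{i}t))}$. The sketch below (assuming NumPy; the grid parameters are ad hoc) compares both and checks the dispersive bound (7.34):

```python
import numpy as np

# Sketch (not from the text): evolve psi(x) = exp(-x^2/2) with the free
# propagator (7.33) for n = 1 and compare against the closed form obtained
# from Lemma 7.3: psi(t,x) = (1+2it)^(-1/2) exp(-x^2/(2(1+2it))).
t = 1.0
y = np.linspace(-15.0, 15.0, 15001)
h = y[1] - y[0]
psi0 = np.exp(-y**2 / 2)

def evolved(x):
    """e^{-i t H_0} psi at the point x, via the integral kernel in (7.33)."""
    kernel = np.exp(1j * (x - y)**2 / (4 * t)) / np.sqrt(4j * np.pi * t)
    return np.sum(kernel * psi0) * h

for x in [0.0, 1.0, 2.5]:
    exact = np.exp(-x**2 / (2 * (1 + 2j * t))) / np.sqrt(1 + 2j * t)
    assert abs(evolved(x) - exact) < 1e-6

# dispersive bound (7.34): sup|psi(t)| <= (4*pi*|t|)^(-1/2) * ||psi(0)||_1,
# the Gaussian peaks at x = 0 and ||psi(0)||_1 = sqrt(2*pi)
assert abs(evolved(0.0)) <= np.sqrt(2 * np.pi) / np.sqrt(4 * np.pi * t) + 1e-9
```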
Next we want to apply the RAGE theorem in order to show that for any
initial condition, a particle will escape to inﬁnity.
Lemma 7.10. Let $g(x)$ be the multiplication operator by $g$ and let $f(p)$ be the operator given by $f(p)\psi(x) = \mathcal{F}^{-1}(f(p)\hat\psi(p))(x)$. Denote by $L^\infty_\infty(\mathbb{R}^n)$ the bounded Borel functions which vanish at infinity. Then
\[
f(p)g(x) \quad\text{and}\quad g(x)f(p) \tag{7.37}
\]
are compact if $f, g \in L^\infty_\infty(\mathbb{R}^n)$ and (extend to) Hilbert–Schmidt operators if $f, g \in L^2(\mathbb{R}^n)$.
Proof. By symmetry it suffices to consider $g(x)f(p)$. Let $f, g \in L^2$, then
\[
g(x)f(p)\psi(x) = \frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^n}g(x)\check f(x - y)\psi(y)\,d^n y \tag{7.38}
\]
shows that $g(x)f(p)$ is Hilbert–Schmidt since $g(x)\check f(x - y) \in L^2(\mathbb{R}^n \times \mathbb{R}^n)$.

If $f, g$ are bounded, then the functions $f_R(p) = \chi_{\{p\,|\,p^2 \le R\}}(p)f(p)$ and $g_R(x) = \chi_{\{x\,|\,x^2 \le R\}}(x)g(x)$ are in $L^2$. Thus $g_R(x)f_R(p)$ is compact and tends to $g(x)f(p)$ in norm since $f, g$ vanish at infinity.
In particular, this lemma implies that
\[
\chi_\Omega(H_0 + \mathrm{i})^{-1} \tag{7.39}
\]
is compact if $\Omega \subseteq \mathbb{R}^n$ is bounded and hence
\[
\lim_{t\to\infty}\|\chi_\Omega e^{-\mathrm{i}tH_0}\psi\|^2 = 0 \tag{7.40}
\]
for any $\psi \in L^2(\mathbb{R}^n)$ and any bounded subset $\Omega$ of $\mathbb{R}^n$. In other words, the particle will eventually escape to infinity since the probability of finding the particle in any bounded set tends to zero. (If $\psi \in L^1(\mathbb{R}^n)$ this of course also follows from (7.34).)
7.4. The resolvent and Green’s function
Now let us compute the resolvent of $H_0$. We will try to use a similar approach as for the time evolution in the previous section. However, since it is highly nontrivial to compute the inverse Fourier transform of $\exp(-\varepsilon p^2)(p^2 - z)^{-1}$ directly, we will use a small ruse.
Note that
\[
R_{H_0}(z) = \int_0^\infty e^{zt}e^{-tH_0}\,dt, \quad \mathrm{Re}(z) < 0, \tag{7.41}
\]
by Lemma 4.1. Moreover,
\[
e^{-tH_0}\psi(x) = \frac{1}{(4\pi t)^{n/2}}\int_{\mathbb{R}^n}e^{-\frac{|x-y|^2}{4t}}\psi(y)\,d^n y, \quad t > 0, \tag{7.42}
\]
by the same analysis as in the previous section. Hence, by Fubini, we have
\[
R_{H_0}(z)\psi(x) = \int_{\mathbb{R}^n}G_0(z, |x - y|)\psi(y)\,d^n y, \tag{7.43}
\]
where
\[
G_0(z, r) = \int_0^\infty\frac{1}{(4\pi t)^{n/2}}e^{-\frac{r^2}{4t}+zt}\,dt, \quad r > 0,\ \mathrm{Re}(z) < 0. \tag{7.44}
\]
The function $G_0(z, r)$ is called Green's function of $H_0$. The integral can be evaluated in terms of modified Bessel functions of the second kind,
\[
G_0(z, r) = \frac{1}{2\pi}\Bigl(\frac{-z}{4\pi^2 r^2}\Bigr)^{\frac{n-2}{4}}K_{\frac{n}{2}-1}(\sqrt{-z}\,r). \tag{7.45}
\]
The functions $K_\nu(x)$ satisfy the following differential equation
\[
\Bigl(\frac{d^2}{dx^2} + \frac{1}{x}\frac{d}{dx} - 1 - \frac{\nu^2}{x^2}\Bigr)K_\nu(x) = 0 \tag{7.46}
\]
and have the following asymptotics
\[
K_\nu(x) = \begin{cases}\frac{\Gamma(\nu)}{2}\bigl(\frac{x}{2}\bigr)^{-\nu} + O(x^{-\nu+1}), & \nu \neq 0,\\[1mm] -\ln(\frac{x}{2}) + O(1), & \nu = 0,\end{cases} \tag{7.47}
\]
for $|x| \to 0$ and
\[
K_\nu(x) = \sqrt{\frac{\pi}{2x}}\,e^{-x}(1 + O(x^{-1})) \tag{7.48}
\]
for $|x| \to \infty$. For more information see for example [22]. In particular, $G_0(z, r)$ has an analytic continuation for $z \in \mathbb{C}\backslash[0, \infty) = \rho(H_0)$. Hence we can define the right hand side of (7.43) for all $z \in \rho(H_0)$ such that
\[
\int_{\mathbb{R}^n}\int_{\mathbb{R}^n}\varphi(x)G_0(z, |x - y|)\psi(y)\,d^n y\,d^n x \tag{7.49}
\]
is analytic for $z \in \rho(H_0)$ and $\varphi, \psi \in \mathcal{S}(\mathbb{R}^n)$ (by Morera's theorem). Since it is equal to $\langle\varphi, R_{H_0}(z)\psi\rangle$ for $\mathrm{Re}(z) < 0$ it is equal to this function for all $z \in \rho(H_0)$, since both functions are analytic in this domain. In particular, (7.43) holds for all $z \in \rho(H_0)$.
If $n$ is odd, we have the case of spherical Bessel functions which can be expressed in terms of elementary functions. For example, we have
\[
G_0(z, r) = \frac{1}{2\sqrt{-z}}e^{-\sqrt{-z}\,r}, \quad n = 1, \tag{7.50}
\]
and
\[
G_0(z, r) = \frac{1}{4\pi r}e^{-\sqrt{-z}\,r}, \quad n = 3. \tag{7.51}
\]
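As a sanity check, the Laplace-transform representation (7.44) can be evaluated numerically for $n = 1$, $z = -1$ and compared with the elementary formula (7.50), which gives $G_0(-1, r) = e^{-r}/2$. A quadrature sketch (assuming NumPy; grid choices are arbitrary):

```python
import numpy as np

# Numerical check (n = 1, z = -1) that the integral representation (7.44)
# of the Green's function reproduces the elementary formula (7.50).
t = np.linspace(1e-8, 40.0, 200001)
h = t[1] - t[0]

def green_n1(r):
    integrand = np.exp(-r**2 / (4 * t) - t) / np.sqrt(4 * np.pi * t)
    return np.sum(integrand) * h

for r in [0.5, 1.0, 2.0]:
    assert abs(green_n1(r) - np.exp(-r) / 2) < 1e-5
```

The integrand vanishes rapidly both at $t \to 0$ (because of $e^{-r^2/4t}$) and at $t \to \infty$ (because of $e^{-t}$), so a plain equispaced sum suffices.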
Problem 7.5. Verify (7.43) directly in the case n = 1.
Chapter 8
Algebraic methods
8.1. Position and momentum
Apart from the Hamiltonian $H_0$, which corresponds to the kinetic energy, there are several other important observables associated with a single particle in three dimensions. Using the commutation relations between these observables, many important consequences can be derived.

First consider the one-parameter unitary group
\[
(U_j(t)\psi)(x) = e^{-\mathrm{i}tx_j}\psi(x), \quad 1 \le j \le 3. \tag{8.1}
\]
For $\psi \in \mathcal{S}(\mathbb{R}^3)$ we compute
\[
\lim_{t\to 0}\mathrm{i}\,\frac{e^{-\mathrm{i}tx_j}\psi(x) - \psi(x)}{t} = x_j\psi(x) \tag{8.2}
\]
and hence the generator is the multiplication operator by the $j$-th coordinate function. By Corollary 5.3 it is essentially self-adjoint on $\psi \in \mathcal{S}(\mathbb{R}^3)$. It is customary to combine all three operators into one vector-valued operator $x$, which is known as the position operator. Moreover, it is not hard to see that the spectrum of $x_j$ is purely absolutely continuous and given by $\sigma(x_j) = \mathbb{R}$. In fact, let $\{\varphi_i(x)\}$ be an orthonormal basis for $L^2(\mathbb{R})$. Then $\varphi_i(x_1)\varphi_j(x_2)\varphi_k(x_3)$ is an orthonormal basis for $L^2(\mathbb{R}^3)$ and $x_1$ can be written as an orthogonal sum of operators restricted to the subspaces spanned by $\varphi_j(x_2)\varphi_k(x_3)$. Each subspace is unitarily equivalent to $L^2(\mathbb{R})$ and $x_1$ is given by multiplication with the identity. Hence the claim follows (or use Theorem 4.12).
Next, consider the one-parameter unitary group of translations
\[
(U_j(t)\psi)(x) = \psi(x - te_j), \quad 1 \le j \le 3, \tag{8.3}
\]
where $e_j$ is the unit vector in the $j$-th coordinate direction. For $\psi \in \mathcal{S}(\mathbb{R}^3)$ we compute
\[
\lim_{t\to 0}\mathrm{i}\,\frac{\psi(x - te_j) - \psi(x)}{t} = \frac{1}{\mathrm{i}}\frac{\partial}{\partial x_j}\psi(x) \tag{8.4}
\]
and hence the generator is $p_j = \frac{1}{\mathrm{i}}\frac{\partial}{\partial x_j}$. Again it is essentially self-adjoint on $\psi \in \mathcal{S}(\mathbb{R}^3)$. Moreover, since it is unitarily equivalent to $x_j$ by virtue of the Fourier transform, we conclude that the spectrum of $p_j$ is again purely absolutely continuous and given by $\sigma(p_j) = \mathbb{R}$. The operator $p$ is known as the momentum operator. Note that since
\[
[H_0, p_j]\psi(x) = 0, \quad \psi \in \mathcal{S}(\mathbb{R}^3), \tag{8.5}
\]
we have
\[
\frac{d}{dt}\langle\psi(t), p_j\psi(t)\rangle = 0, \quad \psi(t) = e^{-\mathrm{i}tH_0}\psi(0) \in \mathcal{S}(\mathbb{R}^3), \tag{8.6}
\]
that is, the momentum is a conserved quantity for the free motion. Similarly one has
\[
[p_j, x_k]\psi(x) = \frac{\delta_{jk}}{\mathrm{i}}\psi(x), \quad \psi \in \mathcal{S}(\mathbb{R}^3), \tag{8.7}
\]
which is known as the Weyl relation.
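The Weyl relation can be verified numerically by discretizing $p$ (a finite-difference sketch assuming NumPy; the grid and the Gaussian test function are arbitrary choices). With $p = \frac{1}{\mathrm{i}}\frac{d}{dx}$ in one dimension one expects $(px - xp)\psi = \frac{1}{\mathrm{i}}\psi$:

```python
import numpy as np

# Finite-difference sketch of the Weyl relation (8.7) in one dimension.
x = np.linspace(-10.0, 10.0, 20001)
h = x[1] - x[0]
psi = np.exp(-x**2 / 2)               # a convenient Schwartz function

def p_op(f):
    """p f = (1/i) f', via central differences (boundary discarded below)."""
    return np.gradient(f, h) / 1j

commutator = p_op(x * psi) - x * p_op(psi)
interior = slice(100, -100)           # stay away from the boundary
assert np.allclose(commutator[interior], psi[interior] / 1j, atol=1e-5)
```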
The Weyl relations also imply that the mean-square deviation of position and momentum cannot be made arbitrarily small simultaneously:

Theorem 8.1 (Heisenberg Uncertainty Principle). Suppose $A$ and $B$ are two symmetric operators, then for any $\psi \in D(AB) \cap D(BA)$ we have
\[
\Delta_\psi(A)\Delta_\psi(B) \ge \frac{1}{2}|E_\psi([A, B])| \tag{8.8}
\]
with equality if
\[
(B - E_\psi(B))\psi = \mathrm{i}\lambda(A - E_\psi(A))\psi, \quad \lambda \in \mathbb{R}\backslash\{0\}, \tag{8.9}
\]
or if $\psi$ is an eigenstate of $A$ or $B$.
Proof. Let us fix $\psi \in D(AB) \cap D(BA)$ and abbreviate
\[
\hat A = A - E_\psi(A), \quad \hat B = B - E_\psi(B). \tag{8.10}
\]
Then $\Delta_\psi(A) = \|\hat A\psi\|$, $\Delta_\psi(B) = \|\hat B\psi\|$ and hence by Cauchy-Schwarz
\[
|\langle\hat A\psi, \hat B\psi\rangle| \le \Delta_\psi(A)\Delta_\psi(B). \tag{8.11}
\]
Now note that
\[
\hat A\hat B = \frac{1}{2}\{\hat A, \hat B\} + \frac{1}{2}[A, B], \quad \{\hat A, \hat B\} = \hat A\hat B + \hat B\hat A, \tag{8.12}
\]
where $\{\hat A, \hat B\}$ and $\mathrm{i}[A, B]$ are symmetric. So
\[
|\langle\hat A\psi, \hat B\psi\rangle|^2 = |\langle\psi, \hat A\hat B\psi\rangle|^2 = \frac{1}{4}|\langle\psi, \{\hat A, \hat B\}\psi\rangle|^2 + \frac{1}{4}|\langle\psi, [A, B]\psi\rangle|^2 \tag{8.13}
\]
which proves (8.8).
To have equality if $\psi$ is not an eigenstate we need $\hat B\psi = z\hat A\psi$ for equality in Cauchy-Schwarz and $\langle\psi, \{\hat A, \hat B\}\psi\rangle = 0$. Inserting the first into the second requirement gives $0 = (z + z^*)\|\hat A\psi\|^2$ and shows $\mathrm{Re}(z) = 0$.
In the case of position and momentum we have ($\|\psi\| = 1$)
\[
\Delta_\psi(p_j)\Delta_\psi(x_k) \ge \frac{\delta_{jk}}{2} \tag{8.14}
\]
and the minimum is attained for the Gaussian wave packets
\[
\psi(x) = \Bigl(\frac{\lambda}{\pi}\Bigr)^{n/4}e^{-\frac{\lambda}{2}|x-x_0|^2 - \mathrm{i}p_0x}, \tag{8.15}
\]
which satisfy $E_\psi(x) = x_0$ and $E_\psi(p) = p_0$ respectively $\Delta_\psi(p_j)^2 = \frac{\lambda}{2}$ and $\Delta_\psi(x_k)^2 = \frac{1}{2\lambda}$.
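For the wave packets (8.15) the saturation of (8.14) can be checked numerically. The sketch below (assuming NumPy) takes one dimension with $x_0 = p_0 = 0$ and $\lambda = 2$, so $\psi$ is real and $E_\psi(x) = E_\psi(p) = 0$:

```python
import numpy as np

# Sketch: the Gaussian packet (8.15) attains the minimum 1/2 in (8.14).
lam = 2.0
x = np.linspace(-15.0, 15.0, 30001)
h = x[1] - x[0]
psi = (lam / np.pi) ** 0.25 * np.exp(-lam * x**2 / 2)

assert abs(np.sum(psi**2) * h - 1.0) < 1e-9          # normalization

var_x = np.sum(x**2 * psi**2) * h                     # Delta(x)^2, E(x) = 0
dpsi = np.gradient(psi, h)
var_p = np.sum(dpsi**2) * h                           # Delta(p)^2, psi real

assert abs(var_x - 1 / (2 * lam)) < 1e-6              # = 1/(2 lambda)
assert abs(var_p - lam / 2) < 1e-4                    # = lambda/2
assert abs(np.sqrt(var_x * var_p) - 0.5) < 1e-4       # product = 1/2
```

Here $\Delta_\psi(p)^2 = \int|\psi'|^2\,dx$ was used, which is valid for real $\psi$ with $E_\psi(p) = 0$.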
Problem 8.1. Check that (8.15) realizes the minimum.
8.2. Angular momentum
Now consider the one-parameter unitary group of rotations
\[
(U_j(t)\psi)(x) = \psi(M_j(t)x), \quad 1 \le j \le 3, \tag{8.16}
\]
where $M_j(t)$ is the matrix of rotation around $e_j$ by an angle of $t$. For $\psi \in \mathcal{S}(\mathbb{R}^3)$ we compute
\[
\lim_{t\to 0}\mathrm{i}\,\frac{\psi(M_i(t)x) - \psi(x)}{t} = \sum_{j,k=1}^3\varepsilon_{ijk}x_jp_k\psi(x), \tag{8.17}
\]
where
\[
\varepsilon_{ijk} = \begin{cases}1 & \text{if } ijk \text{ is an even permutation of } 123,\\ -1 & \text{if } ijk \text{ is an odd permutation of } 123,\\ 0 & \text{else.}\end{cases} \tag{8.18}
\]
Again one combines the three components into one vector-valued operator $L = x \wedge p$, which is known as the angular momentum operator. Since $e^{\mathrm{i}2\pi L_j} = \mathbb{I}$, we see that the spectrum is a subset of $\mathbb{Z}$. In particular, the continuous spectrum is empty. We will show below that we have $\sigma(L_j) = \mathbb{Z}$.

Note that since
\[
[H_0, L_j]\psi(x) = 0, \quad \psi \in \mathcal{S}(\mathbb{R}^3), \tag{8.19}
\]
we have again
\[
\frac{d}{dt}\langle\psi(t), L_j\psi(t)\rangle = 0, \quad \psi(t) = e^{-\mathrm{i}tH_0}\psi(0) \in \mathcal{S}(\mathbb{R}^3), \tag{8.20}
\]
that is, the angular momentum is a conserved quantity for the free motion as well.
Moreover, we even have
\[
[L_i, K_j]\psi(x) = \mathrm{i}\sum_{k=1}^3\varepsilon_{ijk}K_k\psi(x), \quad \psi \in \mathcal{S}(\mathbb{R}^3),\ K_j \in \{L_j, p_j, x_j\}, \tag{8.21}
\]
and these algebraic commutation relations are often used to derive information on the point spectra of these operators. In this respect the following domain
\[
D = \mathrm{span}\{x^\alpha e^{-\frac{x^2}{2}} \,|\, \alpha \in \mathbb{N}^n_0\} \subset \mathcal{S}(\mathbb{R}^n) \tag{8.22}
\]
is often used. It has the nice property that the finite dimensional subspaces
\[
D_k = \mathrm{span}\{x^\alpha e^{-\frac{x^2}{2}} \,|\, |\alpha| \le k\} \tag{8.23}
\]
are invariant under $L_j$ (and hence they reduce $L_j$).
Lemma 8.2. The subspace $D \subset L^2(\mathbb{R}^n)$ defined in (8.22) is dense.

Proof. By Lemma 1.9 it suffices to consider the case $n = 1$. Suppose $\langle\varphi, \psi\rangle = 0$ for every $\psi \in D$. Then
\[
\frac{1}{\sqrt{2\pi}}\int\varphi(x)^*e^{-\frac{x^2}{2}}\sum_{j=0}^k\frac{(\mathrm{i}tx)^j}{j!}\,dx = 0 \tag{8.24}
\]
for any finite $k$ and hence also in the limit $k \to \infty$ by the dominated convergence theorem. But the limit is the Fourier transform of $\varphi(x)^*e^{-\frac{x^2}{2}}$, which shows that this function is zero. Hence $\varphi(x) = 0$.
Since $D$ is invariant under the unitary groups generated by $L_j$, the operators $L_j$ are essentially self-adjoint on $D$ by Corollary 5.3.

Introducing $L^2 = L_1^2 + L_2^2 + L_3^2$, it is straightforward to check
\[
[L^2, L_j]\psi(x) = 0, \quad \psi \in \mathcal{S}(\mathbb{R}^3). \tag{8.25}
\]
Moreover, $D_k$ is invariant under $L^2$ and $L_3$ and hence $D_k$ reduces $L^2$ and $L_3$. In particular, $L^2$ and $L_3$ are given by finite matrices on $D_k$. Now let $H_m = \mathrm{Ker}(L_3 - m)$ and denote by $P_k$ the projector onto $D_k$. Since $L^2$ and $L_3$ commute on $D_k$, the space $P_kH_m$ is invariant under $L^2$, which shows that we can choose an orthonormal basis consisting of eigenfunctions of $L^2$ for $P_kH_m$. Increasing $k$ we get an orthonormal set of simultaneous eigenfunctions whose span is equal to $D$. Hence there is an orthonormal basis of simultaneous eigenfunctions of $L^2$ and $L_3$.
Now let us try to draw some further consequences by using the commutation relations (8.21). (All commutation relations below hold for $\psi \in \mathcal{S}(\mathbb{R}^3)$.) Denote by $H_{l,m}$ the set of all functions in $D$ satisfying
\[
L_3\psi = m\psi, \quad L^2\psi = l(l+1)\psi. \tag{8.26}
\]
By $L^2 \ge 0$ and $\sigma(L_3) \subseteq \mathbb{Z}$ we can restrict our attention to the case $l \ge 0$ and $m \in \mathbb{Z}$.
First introduce two new operators
\[
L_\pm = L_1 \pm \mathrm{i}L_2, \quad [L_3, L_\pm] = \pm L_\pm. \tag{8.27}
\]
Then, for every $\psi \in H_{l,m}$ we have
\[
L_3(L_\pm\psi) = (m \pm 1)(L_\pm\psi), \quad L^2(L_\pm\psi) = l(l+1)(L_\pm\psi), \tag{8.28}
\]
that is, $L_\pm H_{l,m} \to H_{l,m\pm 1}$. Moreover, since
\[
L^2 = L_3^2 \pm L_3 + L_\mp L_\pm \tag{8.29}
\]
we obtain
\[
\|L_\pm\psi\|^2 = \langle\psi, L_\mp L_\pm\psi\rangle = \bigl(l(l+1) - m(m\pm 1)\bigr)\|\psi\|^2 \tag{8.30}
\]
for every $\psi \in H_{l,m}$. If $\psi \neq 0$ we must have $l(l+1) - m(m\pm 1) \ge 0$, which shows $H_{l,m} = \{0\}$ for $|m| > l$. Moreover, $L_\pm H_{l,m} \to H_{l,m\pm 1}$ is injective unless $|m| = l$. Hence we must have $H_{l,m} = \{0\}$ for $l \notin \mathbb{N}_0$.
Up to this point we know $\sigma(L^2) \subseteq \{l(l+1) \,|\, l \in \mathbb{N}_0\}$, $\sigma(L_3) \subseteq \mathbb{Z}$. In order to show that equality holds in both cases, we need to show that $H_{l,m} \neq \{0\}$ for $l \in \mathbb{N}_0$, $m = -l, -l+1, \dots, l-1, l$. First of all we observe
\[
\psi_{0,0}(x) = \frac{1}{\pi^{3/4}}e^{-\frac{x^2}{2}} \in H_{0,0}. \tag{8.31}
\]
Next, we note that (8.21) implies
\begin{align}
[L_3, x_\pm] &= \pm x_\pm, \quad x_\pm = x_1 \pm \mathrm{i}x_2, \nonumber\\
[L_\pm, x_\pm] &= 0, \quad [L_\pm, x_\mp] = \pm 2x_3, \nonumber\\
[L^2, x_\pm] &= 2x_\pm(1 \pm L_3) \mp 2x_3L_\pm. \tag{8.32}
\end{align}
Hence, if $\psi \in H_{l,l}$, then $(x_1 \pm \mathrm{i}x_2)\psi \in H_{l\pm 1,l\pm 1}$. And thus
\[
\psi_{l,l}(x) = \frac{1}{\sqrt{l!}}(x_1 + \mathrm{i}x_2)^l\psi_{0,0}(x) \in H_{l,l}, \tag{8.33}
\]
respectively
\[
\psi_{l,m}(x) = \sqrt{\frac{(l+m)!}{(l-m)!(2l)!}}\,L_-^{l-m}\psi_{l,l}(x) \in H_{l,m}. \tag{8.34}
\]
The constants are chosen such that $\|\psi_{l,m}\| = 1$.
In summary,

Theorem 8.3. There exists an orthonormal basis of simultaneous eigenvectors for the operators $L^2$ and $L_j$. Moreover, their spectra are given by
\[
\sigma(L^2) = \{l(l+1) \,|\, l \in \mathbb{N}_0\}, \quad \sigma(L_3) = \mathbb{Z}. \tag{8.35}
\]
We will rederive this result using diﬀerent methods in Section 10.3.
8.3. The harmonic oscillator
Finally, let us consider another important model whose algebraic structure is similar to that of the angular momentum, the harmonic oscillator
\[
H = H_0 + \omega^2x^2, \quad \omega > 0. \tag{8.36}
\]
As domain we will choose
\[
D(H) = D = \mathrm{span}\{x^\alpha e^{-\frac{x^2}{2}} \,|\, \alpha \in \mathbb{N}^3_0\} \subseteq L^2(\mathbb{R}^3) \tag{8.37}
\]
from our previous section.
We will first consider the one-dimensional case. Introducing
\[
A_\pm = \frac{1}{\sqrt{2}}\Bigl(\sqrt{\omega}\,x \mp \frac{1}{\sqrt{\omega}}\frac{d}{dx}\Bigr) \tag{8.38}
\]
we have
\[
[A_-, A_+] = 1 \tag{8.39}
\]
and
\[
H = \omega(2N + 1), \quad N = A_+A_-, \tag{8.40}
\]
for any function in $D$.
Moreover, since
\[
[N, A_\pm] = \pm A_\pm, \tag{8.41}
\]
we see that $N\psi = n\psi$ implies $NA_\pm\psi = (n \pm 1)A_\pm\psi$. Moreover, $\|A_+\psi\|^2 = \langle\psi, A_-A_+\psi\rangle = (n+1)\|\psi\|^2$ respectively $\|A_-\psi\|^2 = n\|\psi\|^2$ in this case, and hence we conclude that $\sigma(N) \subseteq \mathbb{N}_0$.

If $N\psi_0 = 0$, then we must have $A_-\psi_0 = 0$, and the normalized solution of this last equation is given by
\[
\psi_0(x) = \Bigl(\frac{\omega}{\pi}\Bigr)^{1/4}e^{-\frac{\omega x^2}{2}} \in D. \tag{8.42}
\]
Hence
\[
\psi_n(x) = \frac{1}{\sqrt{n!}}A_+^n\psi_0(x) \tag{8.43}
\]
is a normalized eigenfunction of $N$ corresponding to the eigenvalue $n$. Moreover, since
\[
\psi_n(x) = \frac{1}{\sqrt{2^nn!}}\Bigl(\frac{\omega}{\pi}\Bigr)^{1/4}H_n(\sqrt{\omega}\,x)e^{-\frac{\omega x^2}{2}}, \tag{8.44}
\]
where $H_n(x)$ is a polynomial of degree $n$ given by
\[
H_n(x) = e^{\frac{x^2}{2}}\Bigl(x - \frac{d}{dx}\Bigr)^ne^{-\frac{x^2}{2}} = (-1)^ne^{x^2}\frac{d^n}{dx^n}e^{-x^2}, \tag{8.45}
\]
we conclude $\mathrm{span}\{\psi_n\} = D$. The polynomials $H_n(x)$ are called Hermite polynomials.
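The second expression in (8.45) is the physicists' convention for the Hermite polynomials, which is also the one implemented in NumPy's `numpy.polynomial.hermite` module. A quick sketch (the three-term recurrence used below is a standard identity, not derived in the text):

```python
import numpy as np
from numpy.polynomial import hermite

# Check the Rodrigues-type formula (8.45) against NumPy's Hermite basis:
# H_2(x) = 4x^2 - 2 and H_3(x) = 8x^3 - 12x in the physicists' convention.
x = np.linspace(-2.0, 2.0, 9)
assert np.allclose(hermite.hermval(x, [0, 0, 1]), 4 * x**2 - 2)
assert np.allclose(hermite.hermval(x, [0, 0, 0, 1]), 8 * x**3 - 12 * x)

# three-term recurrence H_{n+1} = 2x H_n - 2n H_{n-1} (standard identity)
H = [np.ones_like(x), 2 * x]
for n in range(1, 6):
    H.append(2 * x * H[n] - 2 * n * H[n - 1])
    coeffs = [0] * (n + 1) + [1]      # selects the basis polynomial H_{n+1}
    assert np.allclose(H[n + 1], hermite.hermval(x, coeffs))
```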
In summary,
Theorem 8.4. The harmonic oscillator $H$ is essentially self-adjoint on $D$ and has an orthonormal basis of eigenfunctions
\[
\psi_{n_1,n_2,n_3}(x) = \psi_{n_1}(x_1)\psi_{n_2}(x_2)\psi_{n_3}(x_3), \tag{8.46}
\]
with $\psi_{n_j}(x_j)$ from (8.44). The spectrum is given by
\[
\sigma(H) = \{(2n + 3)\omega \,|\, n \in \mathbb{N}_0\}. \tag{8.47}
\]
Finally, there is also a close connection with the Fourier transformation. Without restriction we choose $\omega = 1$ and consider only one dimension. Then it is easy to verify that $H$ commutes with the Fourier transformation,
\[
\mathcal{F}H = H\mathcal{F}. \tag{8.48}
\]
Moreover, by $\mathcal{F}A_\pm = \mp\mathrm{i}A_\pm\mathcal{F}$ we even infer
\[
\mathcal{F}\psi_n = \frac{1}{\sqrt{n!}}\mathcal{F}A_+^n\psi_0 = \frac{(-\mathrm{i})^n}{\sqrt{n!}}A_+^n\mathcal{F}\psi_0 = (-\mathrm{i})^n\psi_n, \tag{8.49}
\]
since $\mathcal{F}\psi_0 = \psi_0$ by Lemma 7.3. In particular,
\[
\sigma(\mathcal{F}) = \{z \in \mathbb{C} \,|\, z^4 = 1\}. \tag{8.50}
\]
Problem 8.2. Show that $H = -\frac{d^2}{dx^2} + q$ can be written as $H = AA^*$, where $A = -\frac{d}{dx} + \phi$, if the differential equation $-\psi'' + q\psi = 0$ has a positive solution. Compute $\tilde H = A^*A$. (Hint: $\phi = -\frac{\psi'}{\psi}$.)
Chapter 9

One-dimensional Schrödinger operators

9.1. Sturm-Liouville operators
In this section we want to illustrate some of the results obtained thus far by investigating a specific example, the Sturm-Liouville equations
\[
\tau f(x) = \frac{1}{r(x)}\Bigl(-\frac{d}{dx}p(x)\frac{d}{dx}f(x) + q(x)f(x)\Bigr), \quad f, pf' \in AC_{loc}(I). \tag{9.1}
\]
The case $p = r = 1$ can be viewed as the model of a particle in one dimension in the external potential $q$. Moreover, the case of a particle in three dimensions can in some situations be reduced to the investigation of Sturm-Liouville equations. In particular, we will see how this works when explicitly solving the hydrogen atom.
The suitable Hilbert space is
\[
L^2((a, b), r(x)dx), \quad \langle f, g\rangle = \int_a^b f(x)^*g(x)r(x)\,dx, \tag{9.2}
\]
where $I = (a, b) \subset \mathbb{R}$ is an arbitrary open interval.

We require

(i) $p^{-1} \in L^1_{loc}(I)$, real-valued,

(ii) $q \in L^1_{loc}(I)$, real-valued,

(iii) $r \in L^1_{loc}(I)$, positive.
If $a$ is finite and if $p^{-1}, q, r \in L^1((a, c))$ ($c \in I$), then the Sturm-Liouville equation (9.1) is called regular at $a$. Similarly for $b$. If it is regular at both $a$ and $b$ it is called regular.

The maximal domain of definition for $\tau$ in $L^2(I, r\,dx)$ is given by
\[
D(\tau) = \{f \in L^2(I, r\,dx) \,|\, f, pf' \in AC_{loc}(I),\ \tau f \in L^2(I, r\,dx)\}. \tag{9.3}
\]
It is not clear that $D(\tau)$ is dense unless (e.g.) $p \in AC_{loc}(I)$, $p', q \in L^2_{loc}(I)$, $r^{-1} \in L^\infty_{loc}(I)$, since $C_0^\infty(I) \subset D(\tau)$ in this case. We will defer the general case to Lemma 9.4 below.
Since we are interested in self-adjoint operators $H$ associated with (9.1), we perform a little calculation. Using integration by parts (twice) we obtain ($a < c < d < b$):
\[
\int_c^d g^*(\tau f)\,r\,dy = W_d(g^*, f) - W_c(g^*, f) + \int_c^d(\tau g)^*f\,r\,dy \tag{9.4}
\]
for $f, g, pf', pg' \in AC_{loc}(I)$, where
\[
W_x(f_1, f_2) = \bigl(p(f_1f_2' - f_1'f_2)\bigr)(x) \tag{9.5}
\]
is called the modified Wronskian.
Equation (9.4) also shows that the Wronskian of two solutions of $\tau u = z u$ is constant,

$$W_x(u_1, u_2) = W(u_1, u_2), \qquad \tau u_{1,2} = z u_{1,2}. \quad (9.6)$$

Moreover, it is nonzero if and only if $u_1$ and $u_2$ are linearly independent (compare Theorem 9.1 below).
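As a quick numerical aside (not part of the original text), constancy of the modified Wronskian (9.6) can be checked by integrating the equation for two solutions; the potential $q(x) = x^2$ and the value $z = 2$ below are arbitrary sample choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sturm-Liouville equation with p = r = 1:  tau u = -u'' + q u = z u,
# i.e. u'' = (q - z) u.  q and z are arbitrary illustrative choices.
q = lambda x: x**2
z = 2.0

def rhs(x, y):
    u, up = y
    return [up, (q(x) - z) * u]

xs = np.linspace(0.0, 3.0, 200)
# Two solutions with W(u1, u2) = 1 at x = 0:
# u1(0) = 1, u1'(0) = 0 and u2(0) = 0, u2'(0) = 1.
sol1 = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)
sol2 = solve_ivp(rhs, (0.0, 3.0), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12)

# Modified Wronskian W_x(u1, u2) = p (u1 u2' - u1' u2); here p = 1.
W = sol1.y[0] * sol2.y[1] - sol1.y[1] * sol2.y[0]
print(np.max(np.abs(W - 1.0)))  # tiny: the Wronskian stays constant
```

The same check works for complex $z$ if the initial data are given as complex arrays.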
If we choose $f, g \in D(\tau)$ in (9.4), then we can take the limits $c \to a$ and $d \to b$, which results in

$$\langle g, \tau f\rangle = W_b(g^*, f) - W_a(g^*, f) + \langle \tau g, f\rangle, \qquad f, g \in D(\tau). \quad (9.7)$$

Here $W_{a,b}(g^*, f)$ has to be understood as a limit.
Finally, we recall the following well-known result from ordinary differential equations.

Theorem 9.1. Suppose $rg \in L^1_{loc}(I)$, then there exists a unique solution $f, pf' \in AC_{loc}(I)$ of the differential equation

$$(\tau - z)f = g, \qquad z \in \mathbb{C}, \quad (9.8)$$

satisfying the initial condition

$$f(c) = \alpha, \quad (pf')(c) = \beta, \qquad \alpha, \beta \in \mathbb{C},\ c \in I. \quad (9.9)$$

In addition, $f$ is holomorphic with respect to $z$.

Note that $f, pf'$ can be extended continuously to a regular end point.
Lemma 9.2. Suppose $u_1, u_2$ are two solutions of $(\tau - z)u = 0$ with $W(u_1, u_2) = 1$. Then any other solution of (9.8) can be written as ($\alpha, \beta \in \mathbb{C}$)

$$\begin{aligned}
f(x) &= u_1(x)\left(\alpha + \int_c^x u_2\, g\, r\,dy\right) + u_2(x)\left(\beta - \int_c^x u_1\, g\, r\,dy\right),\\
f'(x) &= u_1'(x)\left(\alpha + \int_c^x u_2\, g\, r\,dy\right) + u_2'(x)\left(\beta - \int_c^x u_1\, g\, r\,dy\right). \end{aligned} \quad (9.10)$$

Note that the constants $\alpha, \beta$ coincide with those from Theorem 9.1 if $u_1(c) = (pu_2')(c) = 1$ and $(pu_1')(c) = u_2(c) = 0$.
Proof. It suffices to check $\tau f - z f = g$. Differentiating the first equation of (9.10) gives the second. Next we compute

$$\begin{aligned}
(pf')' &= (pu_1')'\left(\alpha + \int u_2\, g\, r\,dy\right) + (pu_2')'\left(\beta - \int u_1\, g\, r\,dy\right) - W(u_1, u_2)\, g\, r\\
&= (q - z)u_1\left(\alpha + \int u_2\, g\, r\,dy\right) + (q - z)u_2\left(\beta - \int u_1\, g\, r\,dy\right) - g\, r\\
&= (q - z)f - g\, r \end{aligned} \quad (9.11)$$

which proves the claim. $\square$
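As a concrete check of Lemma 9.2 (a numerical aside, not from the original), take $p = r = 1$, $q = 0$ and $z = 0$, so that $u_1 = 1$ and $u_2 = x$ solve $\tau u = 0$ with $W(u_1, u_2) = 1$; the right-hand side $g$ below is an arbitrary choice.

```python
import sympy as sp

x, y = sp.symbols('x y')
alpha, beta = sp.symbols('alpha beta')

# p = r = 1, q = 0, z = 0: tau f = -f'', and u1 = 1, u2 = x solve tau u = 0
# with modified Wronskian W(u1, u2) = p (u1 u2' - u1' u2) = 1.
u1, u2 = sp.Integer(1), x
g = sp.sin(y)          # arbitrary right-hand side
c = 0                  # arbitrary base point

# Variation of constants formula (9.10):
f = (u1 * (alpha + sp.integrate(u2.subs(x, y) * g, (y, c, x)))
     + u2 * (beta - sp.integrate(u1 * g, (y, c, x))))

# (tau - z) f = -f'' should equal g(x) for all alpha, beta
residual = sp.simplify(-sp.diff(f, x, 2) - g.subs(y, x))
print(residual)  # 0
```

The constants $\alpha, \beta$ drop out of the residual, matching the statement that they only fix the initial data.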
Now we want to obtain a symmetric operator and hence we choose

$$A_0 f = \tau f, \qquad D(A_0) = D(\tau) \cap AC_c(I), \quad (9.12)$$

where $AC_c(I)$ denotes the functions in $AC(I)$ with compact support. This definition clearly ensures that the Wronskian of two such functions vanishes on the boundary, implying that $A_0$ is symmetric. Our first task is to compute the closure of $A_0$ and its adjoint. For this the following elementary fact will be needed.
Lemma 9.3. Suppose $V$ is a vector space and $l, l_1, \dots, l_n$ are linear functionals (defined on all of $V$) such that $\bigcap_{j=1}^n \mathrm{Ker}(l_j) \subseteq \mathrm{Ker}(l)$. Then $l = \sum_{j=1}^n \alpha_j l_j$ for some constants $\alpha_j \in \mathbb{C}$.
Proof. First of all it is no restriction to assume that the functionals $l_j$ are linearly independent. Then the map $L : V \to \mathbb{C}^n$, $f \mapsto (l_1(f), \dots, l_n(f))$, is surjective (since $x \in \mathrm{Ran}(L)^\perp$ implies $\sum_{j=1}^n x_j l_j(f) = 0$ for all $f$). Hence there are vectors $f_k \in V$ such that $l_j(f_k) = 0$ for $j \neq k$ and $l_j(f_j) = 1$. Then $f - \sum_{j=1}^n l_j(f) f_j \in \bigcap_{j=1}^n \mathrm{Ker}(l_j)$ and hence $l(f) - \sum_{j=1}^n l_j(f)\, l(f_j) = 0$. Thus we can choose $\alpha_j = l(f_j)$. $\square$
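Lemma 9.3 is pure linear algebra, so it can be illustrated in finite dimensions. The toy sketch below (our own example, with arbitrarily chosen functionals) recovers the coefficients $\alpha_j = l(f_j)$ exactly as in the proof.

```python
import numpy as np

# Finite-dimensional illustration of Lemma 9.3 on V = R^3 (toy example).
# l1, l2 are functionals (row vectors); l is chosen in their span, so
# Ker(l1) ∩ Ker(l2) ⊆ Ker(l) holds automatically.
l1 = np.array([1.0, 2.0, 0.0])
l2 = np.array([0.0, 1.0, 1.0])
l = 3.0 * l1 - 2.0 * l2            # coefficients the lemma should recover

L = np.vstack([l1, l2])            # the map L : V -> C^2 from the proof
# "Dual" vectors f_k with l_j(f_k) = delta_{jk}: columns of the pseudoinverse,
# since L @ pinv(L) = I when L has full row rank.
F = np.linalg.pinv(L)              # shape (3, 2), F[:, k] = f_k
alpha = l @ F                      # alpha_j = l(f_j), as in the proof
print(np.round(alpha, 10))         # -> [ 3. -2.]
print(np.allclose(alpha @ L, l))   # l = sum_j alpha_j l_j -> True
```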
Now we are ready to prove

Lemma 9.4. The operator $A_0$ is densely defined and its closure is given by

$$\overline{A_0} f = \tau f, \qquad D(\overline{A_0}) = \{f \in D(\tau) \mid W_a(f, g) = W_b(f, g) = 0,\ \forall g \in D(\tau)\}. \quad (9.13)$$
Its adjoint is given by

$$A_0^* f = \tau f, \qquad D(A_0^*) = D(\tau). \quad (9.14)$$

Proof. We start by computing $A_0^*$ and ignore for now the fact that we don't know whether $D(A_0)$ is dense.
By (9.7) we have $D(\tau) \subseteq D(A_0^*)$ and it remains to show $D(A_0^*) \subseteq D(\tau)$. If $h \in D(A_0^*)$ we must have

$$\langle h, A_0 f\rangle = \langle k, f\rangle, \qquad \forall f \in D(A_0), \quad (9.15)$$

for some $k \in L^2(I, r\,dx)$. Using (9.10) we can find a $\tilde{h}$ such that $\tau \tilde{h} = k$ and from integration by parts we obtain

$$\int_a^b (h(x) - \tilde{h}(x))^* (\tau f)(x)\, r(x)dx = 0, \qquad \forall f \in D(A_0). \quad (9.16)$$
Clearly we expect that $h - \tilde{h}$ will be a solution of $\tau u = 0$ and to prove this we will invoke Lemma 9.3. Therefore we consider the linear functionals

$$l(g) = \int_a^b (h(x) - \tilde{h}(x))^* g(x)\, r(x)dx, \qquad l_j(g) = \int_a^b u_j(x)^* g(x)\, r(x)dx, \quad (9.17)$$

on $L^2_c(I, r\,dx)$, where $u_j$ are two solutions of $\tau u = 0$ with $W(u_1, u_2) \neq 0$.
We have $\mathrm{Ker}(l_1) \cap \mathrm{Ker}(l_2) \subseteq \mathrm{Ker}(l)$. In fact, if $g \in \mathrm{Ker}(l_1) \cap \mathrm{Ker}(l_2)$, then

$$f(x) = u_1(x) \int_a^x u_2(y) g(y) r(y)dy + u_2(x) \int_x^b u_1(y) g(y) r(y)dy \quad (9.18)$$

is in $D(A_0)$ and $g = \tau f \in \mathrm{Ker}(l)$ by (9.16). Now Lemma 9.3 implies

$$\int_a^b \big(h(x) - \tilde{h}(x) + \alpha_1 u_1(x) + \alpha_2 u_2(x)\big)^* g(x)\, r(x)dx = 0, \qquad \forall g \in L^2_c(I, r\,dx), \quad (9.19)$$

and hence $h = \tilde{h} + \alpha_1 u_1 + \alpha_2 u_2 \in D(\tau)$.
Now what if $D(A_0)$ were not dense? Then there would be some freedom in the choice of $k$ since we could always add a component in $D(A_0)^\perp$. So suppose we have two choices $k_1 \neq k_2$. Then by the above calculation, there are corresponding functions $\tilde{h}_1$ and $\tilde{h}_2$ such that $h = \tilde{h}_1 + \alpha_{1,1} u_1 + \alpha_{1,2} u_2 = \tilde{h}_2 + \alpha_{2,1} u_1 + \alpha_{2,2} u_2$. In particular, $\tilde{h}_1 - \tilde{h}_2$ is in the kernel of $\tau$ and hence $k_1 = \tau \tilde{h}_1 = \tau \tilde{h}_2 = k_2$, contradicting our assumption.
Next we turn to $\overline{A_0}$. Denote the set on the right-hand side of (9.13) by $D$. Then we have $D \subseteq D(A_0^{**}) = \overline{A_0}$ by (9.7). Conversely, since $\overline{A_0} \subseteq A_0^*$ we can use (9.7) to conclude

$$W_a(f, h) + W_b(f, h) = 0, \qquad f \in D(\overline{A_0}),\ h \in D(A_0^*). \quad (9.20)$$
Now replace $h$ by a $\tilde{h} \in D(A_0^*)$ which coincides with $h$ near $a$ and vanishes identically near $b$ (Problem 9.1). Then $W_a(f, h) = W_a(f, \tilde{h}) + W_b(f, \tilde{h}) = 0$. Finally, $W_b(f, h) = -W_a(f, h) = 0$ shows $f \in D$. $\square$
Example. If $\tau$ is regular at $a$, then $f \in D(\overline{A_0})$ if and only if $f(a) = (pf')(a) = 0$. This follows since we can prescribe the values of $g(a)$, $(pg')(a)$ for $g \in D(\tau)$ arbitrarily.
This result shows that any self-adjoint extension of $A_0$ must lie between $\overline{A_0}$ and $A_0^*$. Moreover, self-adjointness seems to be related to the Wronskian of two functions at the boundary. Hence we collect a few properties first.
Lemma 9.5. Suppose $v \in D(\tau)$ with $W_a(v^*, v) = 0$ and suppose there is a $\hat{f} \in D(\tau)$ with $W_a(v^*, \hat{f}) \neq 0$. Then we have

$$W_a(v, f) = 0 \quad \Leftrightarrow \quad W_a(v, f^*) = 0, \qquad \forall f \in D(\tau), \quad (9.21)$$

and

$$W_a(v, f) = W_a(v, g) = 0 \quad \Rightarrow \quad W_a(g^*, f) = 0, \qquad \forall f, g \in D(\tau). \quad (9.22)$$
Proof. For all $f_1, \dots, f_4 \in D(\tau)$ we have the Plücker identity

$$W_x(f_1, f_2) W_x(f_3, f_4) + W_x(f_1, f_3) W_x(f_4, f_2) + W_x(f_1, f_4) W_x(f_2, f_3) = 0, \quad (9.23)$$

which remains valid in the limit $x \to a$. Choosing $f_1 = v$, $f_2 = f$, $f_3 = v^*$, $f_4 = \hat{f}$ we infer (9.21). Choosing $f_1 = f$, $f_2 = g^*$, $f_3 = v$, $f_4 = \hat{f}$ we infer (9.22). $\square$
Problem 9.1. Given $\alpha, \beta, \gamma, \delta$, show that there is a function $f \in D(\tau)$ restricted to $[c,d] \subseteq (a,b)$ such that $f(c) = \alpha$, $(pf')(c) = \beta$ and $f(d) = \gamma$, $(pf')(d) = \delta$. (Hint: Lemma 9.2.)
Problem 9.2. Let $A_0 = -\frac{d^2}{dx^2}$, $D(A_0) = \{f \in H^2[0,1] \mid f(0) = f(1) = 0\}$, and $B = q$, $D(B) = \{f \in L^2(0,1) \mid qf \in L^2(0,1)\}$. Find a $q \in L^1(0,1)$ such that $D(A_0) \cap D(B) = \{0\}$. (Hint: Problem 0.18.)
9.2. Weyl's limit circle, limit point alternative

We call $\tau$ limit circle (l.c.) at $a$ if there is a $v \in D(\tau)$ with $W_a(v^*, v) = 0$ such that $W_a(v, f) \neq 0$ for at least one $f \in D(\tau)$. Otherwise $\tau$ is called limit point (l.p.) at $a$. Similarly for $b$.
Example. If $\tau$ is regular at $a$, it is limit circle at $a$. Since

$$W_a(v, f) = (pf')(a)\, v(a) - (pv')(a)\, f(a), \quad (9.24)$$

any real-valued $v$ with $(v(a), (pv')(a)) \neq (0, 0)$ works.
Note that if $W_a(f, v) \neq 0$, then $W_a(f, \mathrm{Re}(v)) \neq 0$ or $W_a(f, \mathrm{Im}(v)) \neq 0$. Hence it is no restriction to assume that $v$ is real, and $W_a(v^*, v) = 0$ is trivially satisfied in this case. In particular, $\tau$ is limit point if and only if $W_a(f, g) = 0$ for all $f, g \in D(\tau)$.
Theorem 9.6. If $\tau$ is l.c. at $a$, then let $v \in D(\tau)$ with $W_a(v^*, v) = 0$ and $W_a(v, f) \neq 0$ for some $f \in D(\tau)$. Similarly, if $\tau$ is l.c. at $b$, let $w$ be an analogous function. Then the operator

$$A : D(A) \to L^2(I, r\,dx), \qquad f \mapsto \tau f, \quad (9.25)$$

with

$$D(A) = \{f \in D(\tau) \mid W_a(v, f) = 0 \text{ if l.c. at } a,\ W_b(w, f) = 0 \text{ if l.c. at } b\} \quad (9.26)$$

is self-adjoint.
Proof. Clearly $A \subseteq A^* \subseteq A_0^*$. Let $g \in D(A^*)$. As in the computation of $\overline{A_0}$ we conclude $W_a(f, g) = 0$ for all $f \in D(A)$. Moreover, we can choose $f$ such that it coincides with $v$ near $a$ and hence $W_a(v, g) = 0$, that is, $g \in D(A)$. $\square$
The name limit circle respectively limit point stems from the original approach of Weyl, who considered the set of solutions of $\tau u = z u$, $z \in \mathbb{C}\setminus\mathbb{R}$, which satisfy $W_c(u^*, u) = 0$. They can be shown to lie on a circle which converges to a circle respectively a point as $c \to a$ (or $c \to b$).
Example. If $\tau$ is regular at $a$ we can choose $v$ such that $(v(a), (pv')(a)) = (\sin(\alpha), -\cos(\alpha))$, $\alpha \in [0, \pi)$, such that

$$W_a(v, f) = \cos(\alpha) f(a) + \sin(\alpha) (pf')(a). \quad (9.27)$$

The most common choice $\alpha = 0$ is known as the Dirichlet boundary condition $f(a) = 0$.
Next we want to compute the resolvent of $A$.

Lemma 9.7. Suppose $z \in \rho(A)$, then there exists a solution $u_a(z, x)$ which is in $L^2((a,c), r\,dx)$ and which satisfies the boundary condition at $a$ if $\tau$ is l.c. at $a$. Similarly, there exists a solution $u_b(z, x)$ with the analogous properties near $b$.

The resolvent of $A$ is given by

$$(A - z)^{-1} g(x) = \int_a^b G(z, x, y)\, g(y)\, r(y)dy, \quad (9.28)$$

where

$$G(z, x, y) = \frac{1}{W(u_b(z), u_a(z))} \begin{cases} u_b(z, x)\, u_a(z, y), & x \geq y,\\ u_a(z, x)\, u_b(z, y), & x \leq y. \end{cases} \quad (9.29)$$
Proof. Let $g \in L^2_c(I, r\,dx)$ be real-valued and consider $f = (A - z)^{-1} g \in D(A)$. Since $(\tau - z)f = 0$ near $a$ respectively $b$, we obtain $u_a(z, x)$ by setting it equal to $f$ near $a$ and using the differential equation to extend it to the rest of $I$. Similarly we obtain $u_b$. The only problem is that $u_a$ or $u_b$ might be identically zero. Hence we need to show that this can be avoided by choosing $g$ properly.
Fix $z$ and let $g$ be supported in $(c, d) \subset I$. Since $(\tau - z)f = g$ we have

$$f(x) = u_1(x)\left(\alpha + \int_a^x u_2\, g\, r\,dy\right) + u_2(x)\left(\beta + \int_x^b u_1\, g\, r\,dy\right). \quad (9.30)$$

Near $a$ ($x < c$) we have $f(x) = \alpha u_1(x) + \tilde{\beta} u_2(x)$ and near $b$ ($x > d$) we have $f(x) = \tilde{\alpha} u_1(x) + \beta u_2(x)$, where $\tilde{\alpha} = \alpha + \int_a^b u_2\, g\, r\,dy$ and $\tilde{\beta} = \beta + \int_a^b u_1\, g\, r\,dy$. If $f$ vanishes identically near both $a$ and $b$ we must have $\alpha = \beta = \tilde{\alpha} = \tilde{\beta} = 0$ and thus $\alpha = \beta = 0$ and $\int_a^b u_j(y) g(y) r(y)dy = 0$, $j = 1, 2$. This case can be avoided by choosing $g$ suitably and hence there is at least one solution, say $u_b(z)$.
Now choose $u_1 = u_b$ and consider the behavior near $b$. If $u_2$ is not square integrable on $(d, b)$, we must have $\beta = 0$ since $\beta u_2 = f - \tilde{\alpha} u_b$ is. If $u_2$ is square integrable, we can find two functions in $D(\tau)$ which coincide with $u_b$ and $u_2$ near $b$. Since $W(u_b, u_2) = 1$ we see that $\tau$ is l.c. at $b$ and hence $0 = W_b(u_b, f) = W_b(u_b, \tilde{\alpha} u_b + \beta u_2) = \beta$. Thus $\beta = 0$ in both cases and we have

$$f(x) = u_b(x)\left(\alpha + \int_a^x u_2\, g\, r\,dy\right) + u_2(x) \int_x^b u_b\, g\, r\,dy. \quad (9.31)$$
Now choosing $g$ such that $\int_a^b u_b\, g\, r\,dy \neq 0$ we infer existence of $u_a(z)$. Choosing $u_2 = u_a$ and arguing as before we see $\alpha = 0$ and hence

$$f(x) = u_b(x) \int_a^x u_a(y) g(y) r(y)dy + u_a(x) \int_x^b u_b(y) g(y) r(y)dy = \int_a^b G(z, x, y) g(y) r(y)dy \quad (9.32)$$

for any $g \in L^2_c(I, r\,dx)$. Since this set is dense, the claim follows. $\square$
Example. If $\tau$ is regular at $a$ with a boundary condition as in the previous example, we can choose $u_a(z, x)$ to be the solution corresponding to the initial conditions $(u_a(z, a), (pu_a')(z, a)) = (\sin(\alpha), -\cos(\alpha))$. In particular, $u_a(z, x)$ exists for all $z \in \mathbb{C}$.
If $\tau$ is regular at both $a$ and $b$ there is a corresponding solution $u_b(z, x)$, again for all $z$. So the only values of $z$ for which $(A - z)^{-1}$ does not exist must be those with $W(u_b(z), u_a(z)) = 0$. However, in this case $u_a(z, x)$ and $u_b(z, x)$ are linearly dependent and $u_a(z, x) = \gamma u_b(z, x)$ satisfies both boundary conditions. That is, $z$ is an eigenvalue in this case.

In particular, regular operators have pure point spectrum. We will see in Theorem 9.10 below that this holds for any operator which is l.c. at both end points.
In the previous example $u_a(z, x)$ is holomorphic with respect to $z$ and satisfies $u_a(z, x)^* = u_a(z^*, x)$ (since it corresponds to real initial conditions and our differential equation has real coefficients). In general we have:
Lemma 9.8. Suppose $z \in \rho(A)$, then $u_a(z, x)$ from the previous lemma can be chosen locally holomorphic with respect to $z$ such that

$$u_a(z, x)^* = u_a(z^*, x). \quad (9.33)$$

Similarly for $u_b(z, x)$.
Proof. Since this is a local property near $a$ we can suppose $b$ is regular and choose $u_b(z, x)$ such that $(u_b(z, b), (pu_b')(z, b)) = (\sin(\beta), -\cos(\beta))$ as in the example above. In addition, choose a second solution $v_b(z, x)$ such that $(v_b(z, b), (pv_b')(z, b)) = (\cos(\beta), \sin(\beta))$ and observe $W(u_b(z), v_b(z)) = 1$. If $z \in \rho(A)$, $z$ is no eigenvalue and hence $u_a(z, x)$ cannot be a multiple of $u_b(z, x)$. Thus we can set

$$u_a(z, x) = v_b(z, x) + m(z)\, u_b(z, x) \quad (9.34)$$

and it remains to show that $m(z)$ is holomorphic with $m(z)^* = m(z^*)$.
Choosing $h$ with compact support in $(a, c)$ and $g$ with support in $(c, b)$ we have

$$\langle h, (A - z)^{-1} g\rangle = \langle h, u_a(z)\rangle \langle g^*, u_b(z)\rangle = \big(\langle h, v_b(z)\rangle + m(z) \langle h, u_b(z)\rangle\big) \langle g^*, u_b(z)\rangle \quad (9.35)$$

(with a slight abuse of notation since $u_b$, $v_b$ might not be square integrable). Choosing (real-valued) functions $h$ and $g$ such that $\langle h, u_b(z)\rangle \langle g^*, u_b(z)\rangle \neq 0$ we can solve for $m(z)$:

$$m(z) = \frac{\langle h, (A - z)^{-1} g\rangle - \langle h, v_b(z)\rangle \langle g^*, u_b(z)\rangle}{\langle h, u_b(z)\rangle \langle g^*, u_b(z)\rangle}. \quad (9.36)$$

This finishes the proof. $\square$
Example. We already know that $\tau = -\frac{d^2}{dx^2}$ on $I = (-\infty, \infty)$ gives rise to the free Schrödinger operator $H_0$. Furthermore,

$$u_\pm(z, x) = e^{\mp\sqrt{-z}\,x}, \qquad z \in \mathbb{C}, \quad (9.37)$$

are two linearly independent solutions (for $z \neq 0$) and since $\mathrm{Re}(\sqrt{-z}) > 0$ for $z \in \mathbb{C}\setminus[0, \infty)$, there is precisely one solution (up to a constant multiple) which is square integrable near $\pm\infty$, namely $u_\pm$. In particular, the only choice for $u_a$ is $u_-$ and for $u_b$ is $u_+$, and we get

$$G(z, x, y) = \frac{1}{2\sqrt{-z}}\, e^{-\sqrt{-z}\,|x - y|}, \quad (9.38)$$

which we already found in Section 7.4.
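As a numerical aside (not from the original text), one can check that the kernel (9.38) really inverts $H_0 - z$. Below $z = -1$, so $\sqrt{-z} = 1$ and $G(z,x,y) = e^{-|x-y|}/2$; the test function $g$ is an arbitrary Gaussian.

```python
import numpy as np

# Verify that f(x) = int G(z, x, y) g(y) dy solves -f'' - z f = g
# for the free Green's function (9.38) with z = -1.
z = -1.0
xs = np.linspace(-10.0, 10.0, 2001)
h = xs[1] - xs[0]
g = np.exp(-xs**2)                      # arbitrary test function

G = np.exp(-np.abs(xs[:, None] - xs[None, :])) / 2.0
f = G @ g * h                           # quadrature for (A - z)^{-1} g

# -f'' - z f via central differences (interior grid points only)
lhs = -(f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2 - z * f[1:-1]
err = np.max(np.abs(lhs - g[1:-1]))
print(err)  # small (set by grid spacing and quadrature error)
```

The kink of $G$ at $x = y$ is what produces the $-f''$ term, so the residual shrinks as the grid is refined.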
If, as in the previous example, there is only one square integrable solution, there is no choice for $G(z, x, y)$. But since different boundary conditions must give rise to different resolvents, there is no room for boundary conditions in this case. This indicates a connection between our l.c., l.p. distinction and square integrability of solutions.
Theorem 9.9 (Weyl alternative). The operator $\tau$ is l.c. at $a$ if and only if for one $z_0 \in \mathbb{C}$ all solutions of $(\tau - z_0)u = 0$ are square integrable near $a$. This then holds for all $z \in \mathbb{C}$. Similarly for $b$.
Proof. If all solutions are square integrable near $a$, $\tau$ is l.c. at $a$ since the Wronskian of two linearly independent solutions does not vanish.

Conversely, take two functions $v, \tilde{v} \in D(\tau)$ with $W_a(v, \tilde{v}) \neq 0$. By considering real and imaginary parts it is no restriction to assume that $v$ and $\tilde{v}$ are real-valued. Thus they give rise to two different self-adjoint operators $A$ and $\tilde{A}$ (choose any fixed $w$ for the other endpoint). Let $u_a$ and $\tilde{u}_a$ be the corresponding solutions from above, then $W(u_a, \tilde{u}_a) \neq 0$ (since otherwise $A = \tilde{A}$ by Lemma 9.5) and thus there are two linearly independent solutions which are square integrable near $a$. Since any other solution can be written as a linear combination of those two, every solution is square integrable near $a$.
It remains to show that all solutions of $(\tau - z)u = 0$ for all $z \in \mathbb{C}$ are square integrable near $a$ if $\tau$ is l.c. at $a$. In fact, the above argument ensures this for every $z \in \rho(A) \cap \rho(\tilde{A})$, that is, at least for all $z \in \mathbb{C}\setminus\mathbb{R}$.

Suppose $(\tau - z)u = 0$ and choose two linearly independent solutions $u_j$, $j = 1, 2$, of $(\tau - z_0)u = 0$ with $W(u_1, u_2) = 1$. Using $(\tau - z_0)u = (z - z_0)u$ and (9.10) we have ($a < c < x < b$)

$$u(x) = \alpha u_1(x) + \beta u_2(x) + (z - z_0) \int_c^x \big(u_1(x) u_2(y) - u_1(y) u_2(x)\big) u(y) r(y)\, dy. \quad (9.39)$$

Since $u_j \in L^2((c, b), r\,dx)$ we can find a constant $M \geq 0$ such that

$$\int_c^b |u_{1,2}(y)|^2 r(y)\, dy \leq M. \quad (9.40)$$
Now choose $c$ close to $b$ such that $|z - z_0| M^2 \leq 1/4$. Next, estimating the integral using Cauchy–Schwarz gives

$$\begin{aligned}
\left|\int_c^x \big(u_1(x) u_2(y) - u_1(y) u_2(x)\big) u(y) r(y)\, dy\right|^2 &\leq \int_c^x |u_1(x) u_2(y) - u_1(y) u_2(x)|^2 r(y)\, dy \int_c^x |u(y)|^2 r(y)\, dy\\
&\leq M \big(|u_1(x)|^2 + |u_2(x)|^2\big) \int_c^x |u(y)|^2 r(y)\, dy \end{aligned} \quad (9.41)$$

and hence

$$\int_c^x |u(y)|^2 r(y)\, dy \leq (|\alpha|^2 + |\beta|^2) M + 2|z - z_0| M^2 \int_c^x |u(y)|^2 r(y)\, dy \leq (|\alpha|^2 + |\beta|^2) M + \frac{1}{2} \int_c^x |u(y)|^2 r(y)\, dy. \quad (9.42)$$

Thus

$$\int_c^x |u(y)|^2 r(y)\, dy \leq 2(|\alpha|^2 + |\beta|^2) M \quad (9.43)$$

and since $u \in AC_{loc}(I)$ we have $u \in L^2((c, b), r\,dx)$ for every $c \in (a, b)$. $\square$
Note that all eigenvalues are simple. If $\tau$ is l.p. at one endpoint this is clear, since there is at most one solution of $(\tau - \lambda)u = 0$ which is square integrable near this endpoint. If $\tau$ is l.c. this also follows since the fact that two solutions of $(\tau - \lambda)u = 0$ satisfy the same boundary condition implies that their Wronskian vanishes.
Finally, let us shed some additional light on the number of possible boundary conditions. Suppose $\tau$ is l.c. at $a$ and let $u_1, u_2$ be two solutions of $\tau u = 0$ with $W(u_1, u_2) = 1$. Abbreviate

$$BC_x^j(f) = W_x(u_j, f), \qquad f \in D(\tau). \quad (9.44)$$
Let $v$ be as in Theorem 9.6; then, using Lemma 9.5, it is not hard to see that

$$W_a(v, f) = 0 \quad \Leftrightarrow \quad \cos(\alpha)\, BC_a^1(f) + \sin(\alpha)\, BC_a^2(f) = 0, \quad (9.45)$$

where $\tan(\alpha) = -\frac{BC_a^1(v)}{BC_a^2(v)}$. Hence all possible boundary conditions can be parametrized by $\alpha \in [0, \pi)$. If $\tau$ is regular at $a$ and if we choose $u_1(a) = p(a)u_2'(a) = 1$ and $p(a)u_1'(a) = u_2(a) = 0$, then

$$BC_a^1(f) = f(a), \qquad BC_a^2(f) = p(a)f'(a), \quad (9.46)$$

and the boundary condition takes the simple form

$$\cos(\alpha) f(a) + \sin(\alpha)\, p(a) f'(a) = 0. \quad (9.47)$$
Finally, note that if $\tau$ is l.c. at both $a$ and $b$, then Theorem 9.6 does not give all possible self-adjoint extensions. For example, one could also choose

$$BC_a^1(f) = e^{i\alpha}\, BC_b^1(f), \qquad BC_a^2(f) = e^{i\alpha}\, BC_b^2(f). \quad (9.48)$$

The case $\alpha = 0$ gives rise to periodic boundary conditions in the regular case.
Now we turn to the investigation of the spectrum of $A$. If $\tau$ is l.c. at both endpoints, then the spectrum of $A$ is very simple.

Theorem 9.10. If $\tau$ is l.c. at both end points, then the resolvent is a Hilbert–Schmidt operator, that is,

$$\int_a^b \int_a^b |G(z, x, y)|^2 r(y)dy\, r(x)dx < \infty. \quad (9.49)$$

In particular, the spectrum of any self-adjoint extension is purely discrete and the eigenfunctions (which are simple) form an orthonormal basis.
Proof. This follows from the estimate

$$\int_a^b \left(\int_a^x |u_b(x) u_a(y)|^2 r(y)dy + \int_x^b |u_b(y) u_a(x)|^2 r(y)dy\right) r(x)dx \leq 2 \int_a^b |u_a(y)|^2 r(y)dy \int_a^b |u_b(y)|^2 r(y)dy, \quad (9.50)$$

which shows that the resolvent is Hilbert–Schmidt and hence compact. $\square$
If $\tau$ is not l.c. the situation is more complicated and we can only say something about the essential spectrum.

Theorem 9.11. All self-adjoint extensions have the same essential spectrum. Moreover, if $A_{ac}$ and $A_{cb}$ are self-adjoint extensions of $\tau$ restricted to $(a, c)$ and $(c, b)$ (for any $c \in I$), then

$$\sigma_{ess}(A) = \sigma_{ess}(A_{ac}) \cup \sigma_{ess}(A_{cb}). \quad (9.51)$$
Proof. Since $(\tau - i)u = 0$ has two linearly independent solutions, the defect indices are at most two (they are zero if $\tau$ is l.p. at both end points, one if $\tau$ is l.c. at one and l.p. at the other end point, and two if $\tau$ is l.c. at both endpoints). Hence the first claim follows from Theorem 6.19.

For the second claim restrict $\tau$ to the functions with compact support in $(a, c) \cup (c, b)$. Then this operator is the orthogonal sum of the operators $A_{0,ac}$ and $A_{0,cb}$. Hence the same is true for the adjoints and hence the defect indices of $A_{0,ac} \oplus A_{0,cb}$ are at most four. Now note that $A$ and $A_{ac} \oplus A_{cb}$ are both self-adjoint extensions of this operator. Thus the second claim also follows from Theorem 6.19. $\square$
Problem 9.3. Compute the spectrum and the resolvent of $\tau = -\frac{d^2}{dx^2}$, $I = (0, \infty)$, defined on $D(A) = \{f \in D(\tau) \mid f(0) = 0\}$.
9.3. Spectral transformations

In this section we want to provide some fundamental tools for investigating the spectra of Sturm–Liouville operators and, at the same time, give some nice illustrations of the spectral theorem.
Example. Consider again $\tau = -\frac{d^2}{dx^2}$ on $I = (-\infty, \infty)$. From Section 7.2 we know that the Fourier transform maps the associated operator $H_0$ to the multiplication operator with $p^2$ in $L^2(\mathbb{R})$. To get multiplication by $\lambda$, as in the spectral theorem, we set $p = \sqrt{\lambda}$ and split the Fourier integral into a positive and a negative part:

$$(Uf)(\lambda) = \begin{pmatrix} \int_\mathbb{R} e^{i\sqrt{\lambda}x} f(x)\, dx\\[1mm] \int_\mathbb{R} e^{-i\sqrt{\lambda}x} f(x)\, dx \end{pmatrix}, \qquad \lambda \in \sigma(H_0) = [0, \infty). \quad (9.52)$$

Then

$$U : L^2(\mathbb{R}) \to \bigoplus_{j=1}^2 L^2\Big(\mathbb{R},\, \frac{\chi_{[0,\infty)}(\lambda)}{2\sqrt{\lambda}}\, d\lambda\Big) \quad (9.53)$$

is the spectral transformation whose existence is guaranteed by the spectral theorem (Lemma 3.3).
Note that in the previous example the kernel $e^{\pm i\sqrt{\lambda}x}$ of the integral transform $U$ is just a pair of linearly independent solutions of the underlying differential equation (though not eigenfunctions, since they are not square integrable).
More generally, if

$$U : L^2(I, r\,dx) \to L^2(\mathbb{R}, d\mu), \qquad f(x) \mapsto \int_\mathbb{R} u(\lambda, x) f(x) r(x)\, dx, \quad (9.54)$$

is an integral transformation which maps a self-adjoint Sturm–Liouville operator $A$ to multiplication by $\lambda$, then its kernel $u(\lambda, x)$ is a solution of the underlying differential equation. This formally follows from $U A f = \lambda U f$, which implies

$$\int_\mathbb{R} u(\lambda, x)(\tau - \lambda) f(x) r(x)\, dx = \int_\mathbb{R} (\tau - \lambda) u(\lambda, x) f(x) r(x)\, dx \quad (9.55)$$

and hence $(\tau - \lambda) u(\lambda, \cdot) = 0$.
Lemma 9.12. Suppose

$$U : L^2(I, r\,dx) \to \bigoplus_{j=1}^k L^2(\mathbb{R}, d\mu_j) \quad (9.56)$$

is a spectral mapping as in Lemma 3.3. Then $U$ is of the form

$$(Uf)(\lambda) = \int_a^b u(\lambda, x) f(x) r(x)\, dx, \quad (9.57)$$

where $u(\lambda, x) = (u_1(\lambda, x), \dots, u_k(\lambda, x))$ and each $u_j(\lambda, \cdot)$ is a solution of $\tau u_j = \lambda u_j$ for a.e. $\lambda$ (with respect to $\mu_j$). The inverse is given by

$$(U^{-1} F)(x) = \sum_{j=1}^k \int_\mathbb{R} u_j(\lambda, x)^* F_j(\lambda)\, d\mu_j(\lambda). \quad (9.58)$$
Moreover, the solutions $u_j(\lambda)$ are linearly independent if the spectral measures are ordered and, if $\tau$ is l.c. at some endpoint, they satisfy the boundary condition. In particular, for ordered spectral measures we always have $k \leq 2$, and even $k = 1$ if $\tau$ is l.c. at one endpoint.
Proof. Using $U_j R_A(z) = \frac{1}{\lambda - z} U_j$ we have

$$(U_j f)(\lambda) = (\lambda - z)\, U_j \int_a^b G(z, x, y) f(y) r(y)\, dy. \quad (9.59)$$

If we restrict $R_A(z)$ to a compact interval $[c, d] \subset (a, b)$, then $R_A(z)\chi_{[c,d]}$ is Hilbert–Schmidt since $G(z, x, y)\chi_{[c,d]}(y)$ is square integrable over $(a, b) \times (a, b)$. Hence $U_j \chi_{[c,d]} = (\lambda - z) U_j R_A(z) \chi_{[c,d]}$ is Hilbert–Schmidt as well and by Lemma 6.9 there is a corresponding kernel $u_j^{[c,d]}(\lambda, y)$ such that

$$(U_j \chi_{[c,d]} f)(\lambda) = \int_a^b u_j^{[c,d]}(\lambda, x) f(x) r(x)\, dx. \quad (9.60)$$
Now take a larger compact interval $[\hat{c}, \hat{d}] \supseteq [c, d]$; then the kernels coincide on $[c, d]$, $u_j^{[c,d]}(\lambda, \cdot) = u_j^{[\hat{c},\hat{d}]}(\lambda, \cdot)\chi_{[c,d]}$, since we have $U_j \chi_{[c,d]} = U_j \chi_{[\hat{c},\hat{d}]} \chi_{[c,d]}$. In particular, there is a kernel $u_j(\lambda, x)$ such that

$$(U_j f)(\lambda) = \int_a^b u_j(\lambda, x) f(x) r(x)\, dx \quad (9.61)$$

for every $f$ with compact support in $(a, b)$. Since functions with compact support are dense and $U_j$ is continuous, this formula holds for any $f$ (provided the integral is understood as the corresponding limit).
Using the fact that $U$ is unitary, $\langle F, Ug\rangle = \langle U^{-1}F, g\rangle$, we see

$$\sum_j \int_\mathbb{R} F_j(\lambda)^* \left(\int_a^b u_j(\lambda, x) g(x) r(x)\, dx\right) d\mu_j(\lambda) = \int_a^b (U^{-1}F)(x)^* g(x) r(x)\, dx. \quad (9.62)$$

Interchanging the order of integration (which is permitted at least for $g$, $F$ with compact support), the formula for the inverse follows.
Next, from $U_j A f = \lambda U_j f$ we have

$$\int_a^b u_j(\lambda, x)(\tau f)(x) r(x)\, dx = \lambda \int_a^b u_j(\lambda, x) f(x) r(x)\, dx \quad (9.63)$$

for a.e. $\lambda$ and every $f \in D(A_0)$. Restricting everything to $[c, d] \subset (a, b)$, the above equation implies $u_j(\lambda, \cdot)|_{[c,d]} \in D(A_{cd,0}^*)$ and $A_{cd,0}^*\, u_j(\lambda, \cdot)|_{[c,d]} = \lambda u_j(\lambda, \cdot)|_{[c,d]}$. In particular, $u_j(\lambda, \cdot)$ is a solution of $\tau u_j = \lambda u_j$. Moreover, if $\tau$ is l.c. near $a$, we can choose $c = a$ and $f$ to satisfy the boundary condition.
Finally, fix $l \leq k$. If we assume the $\mu_j$ are ordered, there is a set $\Omega_l$ such that $\mu_j(\Omega_l) \neq 0$ for $1 \leq j \leq l$. Suppose

$$\sum_{j=1}^l c_j(\lambda)\, u_j(\lambda, x) = 0; \quad (9.64)$$

then we have

$$\sum_{j=1}^l c_j(\lambda)\, F_j(\lambda) = 0, \qquad F_j = U_j f, \quad (9.65)$$

for every $f$. Since $U$ is surjective, we can prescribe $F_j$ arbitrarily, e.g., $F_j(\lambda) = 1$ for $j = j_0$ and $F_j(\lambda) = 0$ else, which shows $c_{j_0}(\lambda) = 0$. Hence $u_j(\lambda, x)$, $1 \leq j \leq l$, are linearly independent for $\lambda \in \Omega_l$, which shows $k \leq 2$ since there are at most two linearly independent solutions. If $\tau$ is l.c. and $u_j(\lambda, x)$ must satisfy the boundary condition, there is only one linearly independent solution and thus $k = 1$. $\square$
Please note that the integral in (9.57) has to be understood as

$$(U_j f)(\lambda) = \lim_{\alpha\downarrow a,\, \beta\uparrow b} \int_\alpha^\beta u_j(\lambda, x) f(x) r(x)\, dx, \quad (9.66)$$

where the limit is taken in $L^2(\mathbb{R}, d\mu_j)$. Similarly for (9.58).
For simplicity we will only pursue the case where one endpoint, say $a$, is regular. The general case can usually be reduced to this case by choosing $c \in (a, b)$ and splitting $A$ as in Theorem 9.11.

We choose a boundary condition

$$\cos(\alpha) f(a) + \sin(\alpha)\, p(a) f'(a) = 0 \quad (9.67)$$

and introduce two solutions $s(z, x)$ and $c(z, x)$ of $\tau u = z u$ satisfying the initial conditions

$$s(z, a) = -\sin(\alpha), \quad p(a)s'(z, a) = \cos(\alpha), \qquad c(z, a) = \cos(\alpha), \quad p(a)c'(z, a) = \sin(\alpha). \quad (9.68)$$
Note that $s(z, x)$ is the solution which satisfies the boundary condition at $a$, that is, we can choose $u_a(z, x) = s(z, x)$. Moreover, in our previous lemma we have $u_1(\lambda, x) = \gamma_a(\lambda) s(\lambda, x)$ and using the rescaling $d\mu(\lambda) = |\gamma_a(\lambda)|^2 d\mu_1(\lambda)$ and $(U_1 f)(\lambda) = \gamma_a(\lambda)(Uf)(\lambda)$ we obtain a unitary map

$$U : L^2(I, r\,dx) \to L^2(\mathbb{R}, d\mu), \qquad (Uf)(\lambda) = \int_a^b s(\lambda, x) f(x) r(x)dx, \quad (9.69)$$

with inverse

$$(U^{-1}F)(x) = \int_\mathbb{R} s(\lambda, x) F(\lambda)\, d\mu(\lambda). \quad (9.70)$$
Note, however, that while this rescaling gets rid of the unknown factor $\gamma_a(\lambda)$, it destroys the normalization of the measure $\mu$. For $\mu_1$ we know $\mu_1(\mathbb{R}) = 1$, but $\mu$ might not even be bounded! In fact, it turns out that $\mu$ is indeed unbounded.

So up to this point we have our spectral transformation $U$ which maps $A$ to multiplication by $\lambda$, but we know nothing about the measure $\mu$. Furthermore, the measure $\mu$ is the object of desire since it contains all the spectral information of $A$. So our next aim must be to compute $\mu$. If $A$ has only pure point spectrum (i.e., only eigenvalues) this is straightforward, as the following example shows.
Example. Suppose $E \in \sigma_p(A)$ is an eigenvalue. Then $s(E, x)$ is the corresponding eigenfunction and hence the same is true for $S_E(\lambda) = (U s(E))(\lambda)$. In particular, $S_E(\lambda) = 0$ for a.e. $\lambda \neq E$, that is,

$$S_E(\lambda) = \begin{cases} \|s(E)\|^2, & \lambda = E,\\ 0, & \lambda \neq E. \end{cases} \quad (9.71)$$

Moreover, since $U$ is unitary we have

$$\|s(E)\|^2 = \int_a^b s(E, x)^2\, r(x)dx = \int_\mathbb{R} S_E(\lambda)^2\, d\mu(\lambda) = \|s(E)\|^4\, \mu(\{E\}), \quad (9.72)$$

that is, $\mu(\{E\}) = \|s(E)\|^{-2}$. In particular, if $A$ has pure point spectrum (e.g., if $\tau$ is limit circle at both endpoints), we have

$$d\mu(\lambda) = \sum_{j=1}^\infty \frac{1}{\|s(E_j)\|^2}\, d\Theta(\lambda - E_j), \qquad \sigma_p(A) = \{E_j\}_{j=1}^\infty, \quad (9.73)$$

where $\Theta$ is the Dirac measure centered at $0$. For arbitrary $A$, the above formula holds at least for the pure point part $\mu_{pp}$.
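To make the formula $\mu(\{E\}) = \|s(E)\|^{-2}$ concrete, the sketch below (our own illustrative check, not from the text) treats the regular problem $\tau = -\frac{d^2}{dx^2}$ on $(0,1)$ with Dirichlet conditions ($\alpha = 0$ at both ends), where $s(z,x) = \sin(\sqrt{z}x)/\sqrt{z}$ and the eigenvalues are $E_n = (n\pi)^2$, so the formula predicts $\mu(\{E_n\}) = 2n^2\pi^2$.

```python
import numpy as np
from scipy.integrate import quad

# Dirichlet problem on (0, 1): s(z, x) = sin(sqrt(z) x)/sqrt(z),
# eigenvalues E_n = (n pi)^2, and mu({E_n}) = ||s(E_n)||^{-2}.
mus = []
for n in (1, 2, 3):
    E = (n * np.pi) ** 2
    norm2, _ = quad(lambda x: np.sin(np.sqrt(E) * x) ** 2 / E, 0.0, 1.0)
    mus.append(1.0 / norm2)                      # mu({E_n})
print(np.round(mus, 6))                          # matches 2 n^2 pi^2
print([2 * n**2 * np.pi**2 for n in (1, 2, 3)])
```

The point masses grow like $E_n$, illustrating why $\mu$ is typically an unbounded measure.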
In the general case we have to work a bit harder. Since $c(z, x)$ and $s(z, x)$ are linearly independent solutions,

$$W(c(z), s(z)) = 1, \quad (9.74)$$

we can write $u_b(z, x) = \gamma_b(z)\big(c(z, x) + m_b(z) s(z, x)\big)$, where

$$m_b(z) = \frac{\cos(\alpha)\, p(a)u_b'(z, a) - \sin(\alpha)\, u_b(z, a)}{\cos(\alpha)\, u_b(z, a) + \sin(\alpha)\, p(a)u_b'(z, a)}, \qquad z \in \rho(A), \quad (9.75)$$
is known as the Weyl–Titchmarsh $m$-function. Note that $m_b(z)$ is holomorphic in $\rho(A)$ and that

$$m_b(z)^* = m_b(z^*) \quad (9.76)$$

since the same is true for $u_b(z, x)$ (the denominator in (9.75) only vanishes if $u_b(z, x)$ satisfies the boundary condition at $a$, that is, if $z$ is an eigenvalue). Moreover, the constant $\gamma_b(z)$ is of no importance and can be chosen equal to one,

$$u_b(z, x) = c(z, x) + m_b(z)\, s(z, x). \quad (9.77)$$
Lemma 9.13. The Weyl $m$-function is a Herglotz function and satisfies

$$\mathrm{Im}(m_b(z)) = \mathrm{Im}(z) \int_a^b |u_b(z, x)|^2 r(x)\, dx, \quad (9.78)$$

where $u_b(z, x)$ is normalized as in (9.77).
Proof. Given two solutions $u(x), v(x)$ of $\tau u = z u$, $\tau v = \hat{z} v$, it is straightforward to check

$$(z - \hat{z}) \int_a^x u(y) v(y) r(y)\, dy = W_x(u, v) - W_a(u, v) \quad (9.79)$$

(clearly it is true for $x = a$; now differentiate with respect to $x$). Now choose $u(x) = u_b(z, x)$ and $v(x) = u_b(z, x)^* = u_b(z^*, x)$, where $W_a(u_b(z), u_b(z)^*) = -2i\,\mathrm{Im}(m_b(z))$ by (9.68) and (9.77), to obtain

$$2i\,\mathrm{Im}(z) \int_a^x |u_b(z, y)|^2 r(y)\, dy = W_x(u_b(z), u_b(z)^*) + 2i\,\mathrm{Im}(m_b(z)), \quad (9.80)$$

and observe that $W_x(u_b, u_b^*)$ vanishes as $x \uparrow b$, since both $u_b$ and $u_b^*$ are in $D(\tau)$ near $b$. $\square$
Lemma 9.14. We have

$$(U u_b(z))(\lambda) = \frac{1}{\lambda - z}, \quad (9.81)$$

where $u_b(z, x)$ is normalized as in (9.77).
Proof. First of all note that from $R_A(z) f = U^{-1} \frac{1}{\lambda - z} U f$ we have

$$\int_a^b G(z, x, y) f(y) r(y)\, dy = \int_\mathbb{R} \frac{s(\lambda, x) F(\lambda)}{\lambda - z}\, d\mu(\lambda), \quad (9.82)$$

where $F = U f$. Here equality is to be understood in $L^2$, that is, for a.e. $x$. However, the right-hand side is continuous with respect to $x$ and so is the left-hand side, at least if $F$ has compact support. Hence in this case the formula holds for all $x$ and we can choose $x = a$ to obtain

$$\sin(\alpha) \int_a^b u_b(z, y) f(y) r(y)\, dy = \sin(\alpha) \int_\mathbb{R} \frac{F(\lambda)}{\lambda - z}\, d\mu(\lambda) \quad (9.83)$$

for all $f$, where $F$ has compact support. Since these functions are dense, the claim follows if we can cancel $\sin(\alpha)$, that is, if $\alpha \neq 0$. To see the case $\alpha = 0$, first differentiate (9.82) with respect to $x$ before setting $x = a$. $\square$
Now combining the last two lemmas we infer from unitarity of $U$ that

$$\mathrm{Im}(m_b(z)) = \mathrm{Im}(z) \int_a^b |u_b(z, x)|^2 r(x)\, dx = \mathrm{Im}(z) \int_\mathbb{R} \frac{1}{|\lambda - z|^2}\, d\mu(\lambda), \quad (9.84)$$

and since a holomorphic function is determined up to a real constant by its imaginary part we obtain
Theorem 9.15. The Weyl $m$-function is given by

$$m_b(z) = d + \int_\mathbb{R} \left(\frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^2}\right) d\mu(\lambda), \qquad d \in \mathbb{R}, \quad (9.85)$$

and

$$d = \mathrm{Re}(m_b(i)), \qquad \int_\mathbb{R} \frac{1}{1 + \lambda^2}\, d\mu(\lambda) = \mathrm{Im}(m_b(i)) < \infty. \quad (9.86)$$

Moreover, $\mu$ is given by the Stieltjes inversion formula

$$\mu(\lambda) = \lim_{\delta\downarrow 0} \lim_{\varepsilon\downarrow 0} \frac{1}{\pi} \int_\delta^{\lambda+\delta} \mathrm{Im}(m_b(\lambda + i\varepsilon))\, d\lambda, \quad (9.87)$$

where

$$\mathrm{Im}(m_b(\lambda + i\varepsilon)) = \varepsilon \int_a^b |u_b(\lambda + i\varepsilon, x)|^2 r(x)\, dx. \quad (9.88)$$
Proof. Choosing $z = i$ in (9.84) shows (9.86) and hence the right-hand side of (9.85) is a well-defined holomorphic function in $\mathbb{C}\setminus\mathbb{R}$. By

$$\mathrm{Im}\left(\frac{1}{\lambda - z} - \frac{\lambda}{1 + \lambda^2}\right) = \frac{\mathrm{Im}(z)}{|\lambda - z|^2}$$

its imaginary part coincides with that of $m_b(z)$ and hence equality follows. The Stieltjes inversion formula follows as in the case where the measure is bounded. $\square$
Example. Consider $\tau = -\frac{d^2}{dx^2}$ on $I = (0, \infty)$. Then

$$c(z, x) = \cos(\alpha)\cos(\sqrt{z}x) + \frac{\sin(\alpha)}{\sqrt{z}}\sin(\sqrt{z}x) \quad (9.89)$$

and

$$s(z, x) = -\sin(\alpha)\cos(\sqrt{z}x) + \frac{\cos(\alpha)}{\sqrt{z}}\sin(\sqrt{z}x). \quad (9.90)$$

Moreover,

$$u_b(z, x) = u_b(z, 0)\, e^{-\sqrt{-z}x} \quad (9.91)$$

and thus

$$m_b(z) = \frac{\sqrt{-z}\cos(\alpha) + \sin(\alpha)}{\sqrt{-z}\sin(\alpha) - \cos(\alpha)}, \quad (9.92)$$

respectively

$$d\mu(\lambda) = \frac{\sqrt{\lambda}}{\pi\big(\cos(\alpha)^2 + \lambda\sin(\alpha)^2\big)}\, d\lambda. \quad (9.93)$$
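As a numerical aside (not part of the original text), the density (9.93) can be recovered from the explicit $m$-function (9.92) via the boundary values $\mathrm{Im}(m_b(\lambda + i\varepsilon))/\pi$ appearing in the Stieltjes inversion formula; the sample values of $\alpha$ and $\lambda$ are arbitrary.

```python
import numpy as np

# Explicit m-function (9.92) for tau = -d^2/dx^2 on (0, infinity).
def m_b(z, alpha):
    w = np.sqrt(-z)  # numpy's principal branch has Re(sqrt(-z)) >= 0
    return (w * np.cos(alpha) + np.sin(alpha)) / (w * np.sin(alpha) - np.cos(alpha))

eps = 1e-8
for alpha in (0.0, 0.7):
    for lam in (0.5, 2.0, 9.0):
        # density of mu at lam, as in the Stieltjes inversion formula
        density = np.imag(m_b(lam + 1j * eps, alpha)) / np.pi
        # closed form (9.93)
        exact = np.sqrt(lam) / (np.pi * (np.cos(alpha)**2 + lam * np.sin(alpha)**2))
        print(alpha, lam, density, exact)  # the last two columns agree
```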
Note that if $\alpha \neq 0$ we even have $\int \frac{1}{\lambda - z}\, d\mu(\lambda) < \infty$ in the previous example and hence

$$m_b(z) = \cot(\alpha) + \int_\mathbb{R} \frac{1}{\lambda - z}\, d\mu(\lambda) \quad (9.94)$$

in this case (the constant $\cot(\alpha)$ follows by considering the limit $|z| \to \infty$ of both sides). One can show that this remains true in the general case. Formally it follows by choosing $x = a$ in $u_b(z, x) = (U^{-1}\frac{1}{\lambda - z})(x)$; however, since we know equality only for a.e. $x$, a more careful analysis is needed.
We conclude this section with some asymptotics for large $z$ away from the spectrum. For simplicity we only consider the case $p = r \equiv 1$. We first observe (Problem 9.4):
Lemma 9.16. We have

$$\begin{aligned}
c(z, x) &= \frac{1}{2}\left(\cos(\alpha) + \frac{\sin(\alpha)}{\sqrt{-z}}\right) e^{\sqrt{-z}(x-a)}\left(1 + O\Big(\frac{1}{\sqrt{-z}}\Big)\right),\\
s(z, x) &= \frac{1}{2}\left(-\sin(\alpha) + \frac{\cos(\alpha)}{\sqrt{-z}}\right) e^{\sqrt{-z}(x-a)}\left(1 + O\Big(\frac{1}{\sqrt{-z}}\Big)\right) \end{aligned} \quad (9.95)$$

for fixed $x \in (a, b)$ as $\mathrm{Im}(z) \to \infty$.
Furthermore we have

Lemma 9.17. The Green's function of $A$,

$$G(z, x, y) = s(z, x)\, u_b(z, y), \qquad x \leq y, \quad (9.96)$$

satisfies

$$\lim_{\mathrm{Im}(z)\to\infty} G(z, x, y) = 0 \quad (9.97)$$

for every $x, y \in (a, b)$.
Proof. Let $\varphi$ be some continuous positive function supported in $[-1, 1]$ with $\int \varphi(x)dx = 1$ and set $\varphi_k(x) = k\,\varphi(kx)$. Then, by $\|R_A(z)\| \leq \mathrm{Im}(z)^{-1}$ (see Theorem 2.14),

$$|G(z, x, y)| \leq |G_k(z, x, y)| + |G(z, x, y) - G_k(z, x, y)| \leq \frac{k\|\varphi\|^2}{\mathrm{Im}(z)} + |G(z, x, y) - G_k(z, x, y)|, \quad (9.98)$$

where

$$G_k(z, x, y) = \langle \varphi_k(\cdot - x), R_A(z)\varphi_k(\cdot - y)\rangle. \quad (9.99)$$

Since $G(z, x, y) = \lim_{k\to\infty} G_k(z, x, y)$ we can first choose $k$ sufficiently large such that $|G(z, x, y) - G_k(z, x, y)| \leq \frac{\varepsilon}{2}$ and then $\mathrm{Im}(z)$ so large that $\frac{k\|\varphi\|^2}{\mathrm{Im}(z)} \leq \frac{\varepsilon}{2}$. Hence $|G(z, x, y)| \leq \varepsilon$ and the claim follows. $\square$
Combining the last two lemmas we also obtain

Lemma 9.18. The Weyl $m$-function satisfies

$$m_b(z) = \begin{cases} \cot(\alpha) + O\big(\frac{1}{\sqrt{-z}}\big), & \alpha \neq 0,\\ -\sqrt{-z} + O\big(\frac{1}{\sqrt{-z}}\big), & \alpha = 0, \end{cases} \quad (9.100)$$

as $\mathrm{Im}(z) \to \infty$.

Proof. Plugging the asymptotic expansions from Lemma 9.16 into Lemma 9.17 we see

$$m_b(z) = -\frac{c(z, x)}{s(z, x)} + o\big(e^{-2\sqrt{-z}(x-a)}\big), \quad (9.101)$$

from which the claim follows. $\square$
Note that assuming $q \in C^k([a, b))$ one can obtain further asymptotic terms in Lemma 9.16 and hence also in the expansion of $m_b(z)$.
Problem 9.4. Prove the asymptotics for $c(z,x)$ and $s(z,x)$ from Lemma 9.16. (Hint: Use that
\[ c(z,x) = \cos(\alpha)\cosh(\sqrt{-z}(x-a)) + \frac{\sin(\alpha)}{\sqrt{-z}}\sinh(\sqrt{-z}(x-a)) - \frac{1}{\sqrt{-z}}\int_a^x \sinh(\sqrt{-z}(x-y))\, q(y)\, c(z,y)\, dy \]
by Lemma 9.2 and consider $\tilde c(z,x) = e^{-\sqrt{-z}(x-a)} c(z,x)$.)
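For $q = 0$ the integral term in the hint vanishes and $c(z,x)$ is given in closed form, so the leading behavior in (9.95) can be checked numerically. The sketch below is an illustration only; the values $\alpha = 0.3$, $a = 0$, $x = 1$ are arbitrary sample choices, not taken from the text.

```python
import cmath

ALPHA, A, X = 0.3, 0.0, 1.0          # sample parameters (assumed)

def c_exact(z):
    # solution of -u'' = z u with u(a) = cos(alpha), u'(a) = sin(alpha)
    w = cmath.sqrt(-z)
    d = X - A
    return cmath.cos(ALPHA)*cmath.cosh(w*d) + cmath.sin(ALPHA)/w*cmath.sinh(w*d)

def c_leading(z):
    # leading term of (9.95)
    w = cmath.sqrt(-z)
    return 0.5*(cmath.cos(ALPHA) + cmath.sin(ALPHA)/w)*cmath.exp(w*(X - A))

# relative error tends to 0 as Im(z) grows
errors = [abs(c_exact(1j*t)/c_leading(1j*t) - 1) for t in (1e1, 1e2, 1e3)]
print(errors)
```

For $q = 0$ the error even decays exponentially, since the only correction to the leading term comes from the $e^{-\sqrt{-z}(x-a)}$ part of $\cosh$ and $\sinh$.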
Chapter 10

One-particle Schrödinger operators

10.1. Self-adjointness and spectrum

Our next goal is to apply these results to Schrödinger operators. The Hamiltonian of one particle in $d$ dimensions is given by
\[ H = H_0 + V, \tag{10.1} \]
where $V: \mathbb{R}^d \to \mathbb{R}$ is the potential energy of the particle. We are mainly interested in the case $1 \leq d \leq 3$ and want to find classes of potentials which are relatively bounded respectively relatively compact. To do this we need a better understanding of the functions in the domain of $H_0$.
Lemma 10.1. Suppose $n \leq 3$ and $\psi \in H^2(\mathbb{R}^n)$. Then $\psi \in C_\infty(\mathbb{R}^n)$ and for any $a > 0$ there is a $b > 0$ such that
\[ \|\psi\|_\infty \leq a\|H_0\psi\| + b\|\psi\|. \tag{10.2} \]
Proof. The important observation is that $(p^2+\gamma^2)^{-1} \in L^2(\mathbb{R}^n)$ if $n \leq 3$. Hence, since $(p^2+\gamma^2)\hat\psi \in L^2(\mathbb{R}^n)$, the Cauchy–Schwarz inequality
\[ \|\hat\psi\|_1 = \|(p^2+\gamma^2)^{-1}(p^2+\gamma^2)\hat\psi(p)\|_1 \leq \|(p^2+\gamma^2)^{-1}\|\, \|(p^2+\gamma^2)\hat\psi(p)\| \tag{10.3} \]
shows $\hat\psi \in L^1(\mathbb{R}^n)$. But now everything follows from the Riemann–Lebesgue lemma:
\[ \|\psi\|_\infty \leq (2\pi)^{-n/2}\|(p^2+\gamma^2)^{-1}\|\big(\|p^2\hat\psi(p)\| + \gamma^2\|\hat\psi(p)\|\big) = (\gamma/2\pi)^{n/2}\|(p^2+1)^{-1}\|\big(\gamma^{-2}\|H_0\psi\| + \|\psi\|\big), \tag{10.4} \]
and choosing $\gamma$ sufficiently large finishes the proof.
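In polar coordinates the key fact $(p^2+\gamma^2)^{-1} \in L^2(\mathbb{R}^n)$ for $n \leq 3$ reduces to convergence of the radial integral $\int_0^\infty r^{n-1}(r^2+\gamma^2)^{-2}\,dr$, which fails for $n \geq 4$. A quick numerical confirmation for $n = 3$, $\gamma = 1$ (a sketch for illustration, not part of the proof; the value $\pi/4$ of the radial integral is elementary calculus):

```python
import numpy as np
from scipy.integrate import quad

# radial part of the squared L^2 norm of (p^2+1)^{-1} in R^3
# (up to the surface area of the unit sphere)
val, err = quad(lambda r: r**2/(r**2 + 1)**2, 0, np.inf)
print(val)   # finite; equals pi/4
```

For $n = 4$ the integrand behaves like $1/r$ at infinity and the integral diverges, which is why the lemma is restricted to $n \leq 3$.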
Now we come to our first result.

Theorem 10.2. Let $V$ be real-valued with $V \in L^\infty_\infty(\mathbb{R}^n)$ if $n > 3$ and $V \in L^\infty_\infty(\mathbb{R}^n) + L^2(\mathbb{R}^n)$ if $n \leq 3$. Then $V$ is relatively compact with respect to $H_0$. In particular,
\[ H = H_0 + V, \qquad D(H) = H^2(\mathbb{R}^n), \tag{10.5} \]
is self-adjoint, bounded from below and
\[ \sigma_{ess}(H) = [0,\infty). \tag{10.6} \]
Moreover, $C_0^\infty(\mathbb{R}^n)$ is a core for $H$.

Proof. Our previous lemma shows $D(H_0) \subseteq D(V)$ and the rest follows from Lemma 7.10 using $f(p) = (p^2 - z)^{-1}$ and $g(x) = V(x)$. Note that $f \in L^\infty_\infty(\mathbb{R}^n) \cap L^2(\mathbb{R}^n)$ for $n \leq 3$.
Observe that since $C_c^\infty(\mathbb{R}^n) \subseteq D(H_0)$, we must have $V \in L^2_{loc}(\mathbb{R}^n)$ if $D(H_0) \subseteq D(V)$.
10.2. The hydrogen atom

We begin with the simple model of a single electron in $\mathbb{R}^3$ moving in the external potential $V$ generated by a nucleus (which is assumed to be fixed at the origin). If one takes only the electrostatic force into account, then $V$ is given by the Coulomb potential and the corresponding Hamiltonian is given by
\[ H^{(1)} = -\Delta - \frac{\gamma}{|x|}, \qquad D(H^{(1)}) = H^2(\mathbb{R}^3). \tag{10.7} \]
If the potential is attracting, that is, if $\gamma > 0$, then it describes the hydrogen atom and is probably the most famous model in quantum mechanics.

As domain we have chosen $D(H^{(1)}) = D(H_0) \cap D(\frac{1}{|x|}) = D(H_0)$ and by Theorem 10.2 we conclude that $H^{(1)}$ is self-adjoint. Moreover, Theorem 10.2 also tells us
\[ \sigma_{ess}(H^{(1)}) = [0,\infty) \tag{10.8} \]
and that $H^{(1)}$ is bounded from below,
\[ E_0 = \inf\sigma(H^{(1)}) > -\infty. \tag{10.9} \]
If $\gamma \leq 0$ we have $H^{(1)} \geq 0$ and hence $E_0 = 0$, but if $\gamma > 0$, we might have $E_0 < 0$ and there might be some discrete eigenvalues below the essential spectrum.
In order to say more about the eigenvalues of $H^{(1)}$ we will use the fact that both $H_0$ and $V^{(1)} = -\gamma/|x|$ have a simple behavior with respect to scaling. Consider the dilation group
\[ U(s)\psi(x) = e^{-ns/2}\psi(e^{-s}x), \qquad s \in \mathbb{R}, \tag{10.10} \]
which is a strongly continuous one-parameter unitary group. The generator can be easily computed:
\[ D\psi(x) = \frac{1}{2}(xp + px)\psi(x) = \Big(xp - \frac{\mathrm{i}n}{2}\Big)\psi(x), \qquad \psi \in \mathcal{S}(\mathbb{R}^n). \tag{10.11} \]
Now let us investigate the action of $U(s)$ on $H^{(1)}$:
\[ H^{(1)}(s) = U(-s)H^{(1)}U(s) = e^{-2s}H_0 + e^{-s}V^{(1)}, \qquad D(H^{(1)}(s)) = D(H^{(1)}). \tag{10.12} \]
Now suppose $H\psi = \lambda\psi$. Then
\[ \langle\psi, [U(s),H]\psi\rangle = \langle U(-s)\psi, H\psi\rangle - \langle H\psi, U(s)\psi\rangle = 0 \tag{10.13} \]
and hence
\[ 0 = \lim_{s\to 0}\frac{1}{s}\langle\psi, [U(s),H]\psi\rangle = \lim_{s\to 0}\Big\langle U(-s)\psi, \frac{H - H(s)}{s}\psi\Big\rangle = \langle\psi, (2H_0 + V^{(1)})\psi\rangle. \tag{10.14} \]
Thus we have proven the virial theorem.

Theorem 10.3. Suppose $H = H_0 + V$ with $U(-s)VU(s) = e^{-s}V$. Then any normalized eigenfunction $\psi$ corresponding to an eigenvalue $\lambda$ satisfies
\[ \lambda = -\langle\psi, H_0\psi\rangle = \frac{1}{2}\langle\psi, V\psi\rangle. \tag{10.15} \]
In particular, all eigenvalues must be negative.
This result even has some further consequences for the point spectrum of $H^{(1)}$.

Corollary 10.4. Suppose $\gamma > 0$. Then
\[ \sigma_p(H^{(1)}) = \sigma_d(H^{(1)}) = \{E_j\}_{j\in\mathbb{N}_0}, \qquad E_j < E_{j+1} < 0, \tag{10.16} \]
with $\lim_{j\to\infty} E_j = 0$.
Proof. Choose $\psi \in C_c^\infty(\mathbb{R}^3\backslash\{0\})$ and set $\psi(s) = U(-s)\psi$. Then
\[ \langle\psi(s), H^{(1)}\psi(s)\rangle = e^{-2s}\langle\psi, H_0\psi\rangle + e^{-s}\langle\psi, V^{(1)}\psi\rangle, \tag{10.17} \]
which is negative for $s$ large. Now choose a sequence $s_n \to \infty$ such that we have $\mathrm{supp}(\psi(s_n)) \cap \mathrm{supp}(\psi(s_m)) = \emptyset$ for $n \neq m$. Then Theorem 4.11 (i) shows that $\mathrm{rank}(P_{H^{(1)}}((-\infty,0))) = \infty$. Since each eigenvalue $E_j$ has finite multiplicity (it lies in the discrete spectrum) there must be an infinite number of eigenvalues which accumulate at $0$.
If $\gamma \leq 0$ we have $\sigma_d(H^{(1)}) = \emptyset$ since $H^{(1)} \geq 0$ in this case. Hence we have gotten a quite complete picture of the spectrum of $H^{(1)}$.

Next, we could try to compute the eigenvalues of $H^{(1)}$ (in the case $\gamma > 0$) by solving the corresponding eigenvalue equation, which is given by the partial differential equation
\[ -\Delta\psi(x) - \frac{\gamma}{|x|}\psi(x) = \lambda\psi(x). \tag{10.18} \]
For a general potential this is hopeless, but in our case we can use the rotational symmetry of our operator to reduce our partial differential equation to ordinary ones.
First of all, it suggests itself to switch to spherical coordinates $(x_1, x_2, x_3) \to (r,\theta,\varphi)$,
\[ x_1 = r\sin(\theta)\cos(\varphi), \quad x_2 = r\sin(\theta)\sin(\varphi), \quad x_3 = r\cos(\theta), \tag{10.19} \]
which correspond to a unitary transform
\[ L^2(\mathbb{R}^3) \to L^2((0,\infty), r^2 dr) \otimes L^2((0,\pi), \sin(\theta)d\theta) \otimes L^2((0,2\pi), d\varphi). \tag{10.20} \]
In these new coordinates $(r,\theta,\varphi)$ our operator reads
\[ H^{(1)} = -\frac{1}{r^2}\frac{\partial}{\partial r}r^2\frac{\partial}{\partial r} + \frac{1}{r^2}L^2 + V(r), \qquad V(r) = -\frac{\gamma}{r}, \tag{10.21} \]
where
\[ L^2 = L_1^2 + L_2^2 + L_3^2 = -\frac{1}{\sin(\theta)}\frac{\partial}{\partial\theta}\sin(\theta)\frac{\partial}{\partial\theta} - \frac{1}{\sin(\theta)^2}\frac{\partial^2}{\partial\varphi^2}. \tag{10.22} \]
(Recall the angular momentum operators $L_j$ from Section 8.2.)
Making the product ansatz (separation of variables)
\[ \psi(r,\theta,\varphi) = R(r)\Theta(\theta)\Phi(\varphi) \tag{10.23} \]
we obtain the following three Sturm–Liouville equations:
\[ \Big(-\frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr} + \frac{l(l+1)}{r^2} + V(r)\Big)R(r) = \lambda R(r), \]
\[ \frac{1}{\sin(\theta)}\Big(-\frac{d}{d\theta}\sin(\theta)\frac{d}{d\theta} + \frac{m^2}{\sin(\theta)}\Big)\Theta(\theta) = l(l+1)\Theta(\theta), \]
\[ -\frac{d^2}{d\varphi^2}\Phi(\varphi) = m^2\Phi(\varphi). \tag{10.24} \]
The form chosen for the constants $l(l+1)$ and $m^2$ is for convenience later on. These equations will be investigated in the following sections.
10.3. Angular momentum

We start by investigating the equation for $\Phi(\varphi)$, which is associated with the Sturm–Liouville equation
\[ \tau\Phi = -\Phi'', \qquad I = (0,2\pi). \tag{10.25} \]
Since we want $\psi$ defined via (10.23) to be in the domain of $H_0$ (in particular continuous), we choose periodic boundary conditions for the Sturm–Liouville equation:
\[ A\Phi = \tau\Phi, \qquad D(A) = \{\Phi \in L^2(0,2\pi)\,|\,\Phi \in AC^1[0,2\pi],\ \Phi(0) = \Phi(2\pi),\ \Phi'(0) = \Phi'(2\pi)\}. \tag{10.26} \]
From our analysis in Section 9.1 we immediately obtain

Theorem 10.5. The operator $A$ defined via (10.26) is self-adjoint. Its spectrum is purely discrete,
\[ \sigma(A) = \sigma_d(A) = \{m^2\,|\,m \in \mathbb{Z}\}, \tag{10.27} \]
and the corresponding eigenfunctions
\[ \Phi_m(\varphi) = \frac{1}{\sqrt{2\pi}}e^{\mathrm{i}m\varphi}, \qquad m \in \mathbb{Z}, \tag{10.28} \]
form an orthonormal basis for $L^2(0,2\pi)$.
Note that except for the lowest eigenvalue, all eigenvalues are twice degenerate.
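The spectrum (10.27), including the double degeneracy, can be illustrated by discretizing $A$ on a periodic grid. This is a numerical sketch only; the grid size $N = 400$ is an arbitrary choice.

```python
import numpy as np

N = 400
h = 2*np.pi/N
# second-difference approximation of A = -d^2/dphi^2 on [0, 2*pi)
# with periodic boundary conditions
T = (2*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))/h**2
T[0, -1] -= 1/h**2   # wrap-around entries enforcing periodicity
T[-1, 0] -= 1/h**2
ev = np.sort(np.linalg.eigvalsh(T))
print(np.round(ev[:7], 3))   # approximately [0, 1, 1, 4, 4, 9, 9]
```

The lowest eigenvalue $0$ is simple, while $1, 4, 9, \dots$ appear twice, matching the theorem (up to the $O(h^2)$ discretization error).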
We note that this operator is essentially the square of the angular momentum in the third coordinate direction, since in polar coordinates
\[ L_3 = \frac{1}{\mathrm{i}}\frac{\partial}{\partial\varphi}. \tag{10.29} \]
Now we turn to the equation for $\Theta(\theta)$:
\[ \tau_m\Theta(\theta) = \frac{1}{\sin(\theta)}\Big(-\frac{d}{d\theta}\sin(\theta)\frac{d}{d\theta} + \frac{m^2}{\sin(\theta)}\Big)\Theta(\theta), \qquad I = (0,\pi),\ m \in \mathbb{N}_0. \tag{10.30} \]
For the investigation of the corresponding operator we use the unitary transform
\[ L^2((0,\pi), \sin(\theta)d\theta) \to L^2((-1,1), dx), \qquad \Theta(\theta) \to f(x) = \Theta(\arccos(x)). \tag{10.31} \]
The operator $\tau_m$ transforms to the somewhat simpler form
\[ \tau_m = -\frac{d}{dx}(1-x^2)\frac{d}{dx} + \frac{m^2}{1-x^2}. \tag{10.32} \]
The corresponding eigenvalue equation
\[ \tau_m u = l(l+1)u \tag{10.33} \]
is the associated Legendre equation. For $l \in \mathbb{N}_0$ it is solved by the associated Legendre functions
\[ P_{lm}(x) = (1-x^2)^{m/2}\frac{d^m}{dx^m}P_l(x), \tag{10.34} \]
where
\[ P_l(x) = \frac{1}{2^l l!}\frac{d^l}{dx^l}(x^2-1)^l \tag{10.35} \]
are the Legendre polynomials. This is straightforward to check. Moreover, note that the $P_l(x)$ are (nonzero) polynomials of degree $l$. A second, linearly independent solution is given by
\[ Q_{lm}(x) = P_{lm}(x)\int_0^x \frac{dt}{(1-t^2)P_{lm}(t)^2}. \tag{10.36} \]
In fact, for every Sturm–Liouville equation, $v(x) = u(x)\int^x \frac{dt}{p(t)u(t)^2}$ satisfies $\tau v = 0$ whenever $\tau u = 0$. Now fix $l = 0$ and note $P_0(x) = 1$. For $m = 0$ we have $Q_{00} = \mathrm{arctanh}(x) \in L^2$ and so $\tau_0$ is l.c.\ at both end points. For $m > 0$ we have $Q_{0m} = (x\pm 1)^{-m/2}(C + O(x\pm 1))$, which shows that it is not square integrable. Thus $\tau_m$ is l.c.\ for $m = 0$ and l.p.\ for $m > 0$ at both endpoints.
In order to make sure that the eigenfunctions for $m = 0$ are continuous (such that $\psi$ defined via (10.23) is continuous) we choose the boundary condition generated by $P_0(x) = 1$ in this case:
\[ A_m f = \tau_m f, \qquad D(A_m) = \{f \in L^2(-1,1)\,|\,f \in AC^1(-1,1),\ \tau_m f \in L^2(-1,1),\ \lim_{x\to\pm 1}(1-x^2)f'(x) = 0\}. \tag{10.37} \]
Theorem 10.6. The operator $A_m$, $m \in \mathbb{N}_0$, defined via (10.37) is self-adjoint. Its spectrum is purely discrete,
\[ \sigma(A_m) = \sigma_d(A_m) = \{l(l+1)\,|\,l \in \mathbb{N}_0,\ l \geq m\}, \tag{10.38} \]
and the corresponding eigenfunctions
\[ u_{lm}(x) = \sqrt{\frac{2l+1}{2}\,\frac{(l-m)!}{(l+m)!}}\, P_{lm}(x), \qquad l \in \mathbb{N}_0,\ l \geq m, \tag{10.39} \]
form an orthonormal basis for $L^2(-1,1)$.
Proof. By Theorem 9.6, $A_m$ is self-adjoint. Moreover, $P_{lm}$ is an eigenfunction corresponding to the eigenvalue $l(l+1)$ and it suffices to show that the $P_{lm}$ form a basis. To prove this, it suffices to show that the functions $P_{lm}(x)$ are dense. Since $(1-x^2) > 0$ for $x \in (-1,1)$ it suffices to show that the functions $(1-x^2)^{-m/2}P_{lm}(x)$ are dense. But the span of these functions contains every polynomial. Every continuous function can be approximated by polynomials (in the sup norm and hence in the $L^2$ norm) and since the continuous functions are dense, so are the polynomials. The only thing remaining is the normalization of the eigenfunctions, which can be found in any book on special functions.
Returning to our original setting we conclude that
\[ \Theta_{lm}(\theta) = \sqrt{\frac{2l+1}{2}\,\frac{(l-m)!}{(l+m)!}}\, P_{lm}(\cos(\theta)), \qquad l = m, m+1, \dots \tag{10.40} \]
form an orthonormal basis for $L^2((0,\pi), \sin(\theta)d\theta)$ for any fixed $m \in \mathbb{N}_0$.
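The normalization constant in (10.40) can be checked numerically against the classical identity $\int_{-1}^1 P_{lm}(x)^2\,dx = \frac{2}{2l+1}\frac{(l+m)!}{(l-m)!}$, e.g.\ with SciPy's associated Legendre functions (a sketch; SciPy's `lpmv` includes the Condon–Shortley phase $(-1)^m$, which drops out after squaring):

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

# Gauss-Legendre quadrature: exact for polynomials of degree < 2*64,
# and P_lm(x)^2 is a polynomial of degree 2l
x, w = np.polynomial.legendre.leggauss(64)
for l in range(6):
    for m in range(l + 1):
        norm2 = np.sum(w*lpmv(m, l, x)**2)
        expected = 2/(2*l + 1)*factorial(l + m)/factorial(l - m)
        assert np.isclose(norm2, expected)
print("normalization in (10.39)/(10.40) confirmed")
```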
Theorem 10.7. The operator $L^2$ on $L^2((0,\pi),\sin(\theta)d\theta) \otimes L^2((0,2\pi))$ has a purely discrete spectrum given by
\[ \sigma(L^2) = \{l(l+1)\,|\,l \in \mathbb{N}_0\}. \tag{10.41} \]
The spherical harmonics
\[ Y_{lm}(\theta,\varphi) = \Theta_{l|m|}(\theta)\Phi_m(\varphi) = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-|m|)!}{(l+|m|)!}}\, P_{l|m|}(\cos(\theta))\, e^{\mathrm{i}m\varphi}, \qquad |m| \leq l, \tag{10.42} \]
form an orthonormal basis and satisfy $L^2 Y_{lm} = l(l+1)Y_{lm}$ and $L_3 Y_{lm} = m Y_{lm}$.

Proof. Everything follows from our construction, if we can show that the $Y_{lm}$ form a basis. But this follows as in the proof of Lemma 1.9.
Note that transforming $Y_{lm}$ back to cartesian coordinates gives
\[ Y_{l,\pm m}(x) = \sqrt{\frac{2l+1}{4\pi}\,\frac{(l-m)!}{(l+m)!}}\, \tilde P_{lm}\Big(\frac{x_3}{r}\Big)\Big(\frac{x_1 \pm \mathrm{i}x_2}{r}\Big)^m, \qquad r = |x|, \tag{10.43} \]
where $\tilde P_{lm}$ is a polynomial of degree $l - m$ given by
\[ \tilde P_{lm}(x) = (1-x^2)^{-m/2}P_{lm}(x) = \frac{1}{2^l l!}\frac{d^{l+m}}{dx^{l+m}}(x^2-1)^l. \tag{10.44} \]
In particular, the $Y_{lm}$ are smooth away from the origin and by construction they satisfy
\[ -\Delta Y_{lm} = \frac{l(l+1)}{r^2}Y_{lm}. \tag{10.45} \]
10.4. The eigenvalues of the hydrogen atom

Now we want to use the considerations from the previous section to decompose the Hamiltonian of the hydrogen atom. In fact, we can even admit any spherically symmetric potential $V(x) = V(|x|)$ with
\[ V(r) \in L^\infty_\infty((0,\infty)) + L^2((0,\infty), r^2 dr). \tag{10.46} \]
The important observation is that the spaces
\[ H_{lm} = \{\psi(x) = R(r)Y_{lm}(\theta,\varphi)\,|\,R(r) \in L^2((0,\infty), r^2 dr)\} \tag{10.47} \]
reduce our operator $H = H_0 + V$. Hence
\[ H = H_0 + V = \bigoplus_{l,m} \tilde H_l, \tag{10.48} \]
where
\[ \tilde H_l R(r) = \tilde\tau_l R(r), \qquad \tilde\tau_l = -\frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr} + \frac{l(l+1)}{r^2} + V(r), \qquad D(\tilde H_l) \subseteq L^2((0,\infty), r^2 dr). \tag{10.49} \]
Using the unitary transformation
\[ L^2((0,\infty), r^2 dr) \to L^2((0,\infty)), \qquad R(r) \to u(r) = rR(r), \tag{10.50} \]
our operator transforms to
\[ A_l f = \tau_l f, \qquad \tau_l = -\frac{d^2}{dr^2} + \frac{l(l+1)}{r^2} + V(r), \qquad D(A_l) \subseteq L^2((0,\infty)). \tag{10.51} \]
It remains to investigate this operator.
Theorem 10.8. The domain of the operator $A_l$ is given by
\[ D(A_l) = \{f \in L^2(I)\,|\,f, f' \in AC(I),\ \tau f \in L^2(I),\ \lim_{r\to 0}(f(r) - rf'(r)) = 0 \text{ if } l = 0\}, \tag{10.52} \]
where $I = (0,\infty)$. Moreover, $\sigma_{ess}(A_l) = [0,\infty)$.

Proof. By construction of $A_l$ we know that it is self-adjoint and satisfies $\sigma_{ess}(A_l) = [0,\infty)$. Hence it remains to compute the domain. We know at least $D(A_l) \subseteq D(\tau)$ and since $D(H) = D(H_0)$ it suffices to consider the case $V = 0$. In this case the solutions of $-u''(r) + \frac{l(l+1)}{r^2}u(r) = 0$ are given by $u(r) = \alpha r^{l+1} + \beta r^{-l}$. Thus we are in the l.p.\ case at $\infty$ for any $l \in \mathbb{N}_0$. However, at $0$ we are in the l.p.\ case only if $l > 0$, that is, we need an additional boundary condition at $0$ if $l = 0$. Since we need $R(r) = \frac{u(r)}{r}$ to be bounded (such that (10.23) is in the domain of $H_0$), we have to take the boundary condition generated by $u(r) = r$.
10.4. The eigenvalues of the hydrogen atom 185
Finally let us turn to some explicit choices for V , where the correspond
ing diﬀerential equation can be explicitly solved. The simplest case is V = 0
in this case the solutions of
−u
(r) +
l(l + 1)
r
2
u(r) = zu(r) (10.53)
are given by the spherical Bessel respectively spherical Neumann func
tions
u(r) = αj
l
(
√
zr) +β n
l
(
√
zr), (10.54)
where
j
l
(r) = (−r)
l
_
1
r
d
dr
_
l
sin(r)
r
. (10.55)
In particular,
u
a
(z, r) = j
l
(
√
zr) and u
b
(z, r) = j
l
(
√
zr) + in
l
(
√
zr) (10.56)
are the functions which are square integrable and satisfy the boundary con
dition (if any) near a = 0 and b = ∞, respectively.
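The Rayleigh formula (10.55) can be compared against a standard implementation of the spherical Bessel functions. The following symbolic/numeric sketch (the sample point $r = 2.7$ is arbitrary) builds $j_l$ by iterating the operator $\frac{1}{r}\frac{d}{dr}$:

```python
import numpy as np
import sympy as sp
from scipy.special import spherical_jn

r = sp.symbols('r', positive=True)

def j_rayleigh(l):
    # (-r)^l (1/r d/dr)^l sin(r)/r, built by repeated differentiation
    expr = sp.sin(r)/r
    for _ in range(l):
        expr = sp.diff(expr, r)/r
    return (-r)**l * expr

for l in range(4):
    f = sp.lambdify(r, j_rayleigh(l), 'numpy')
    assert np.isclose(f(2.7), spherical_jn(l, 2.7))
print("Rayleigh formula (10.55) matches scipy's spherical_jn")
```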
The second case is that of our Coulomb potential
\[ V(r) = -\frac{\gamma}{r}, \qquad \gamma > 0, \tag{10.57} \]
where we will try to compute the eigenvalues plus corresponding eigenfunctions. It turns out that they can be expressed in terms of the Laguerre polynomials
\[ L_j(r) = e^r\frac{d^j}{dr^j}e^{-r}r^j \tag{10.58} \]
and the associated Laguerre polynomials
\[ L^k_j(r) = \frac{d^k}{dr^k}L_j(r). \tag{10.59} \]
Note that $L^k_j$ is a polynomial of degree $j - k$.
Theorem 10.9. The eigenvalues of $H^{(1)}$ are explicitly given by
\[ E_n = -\Big(\frac{\gamma}{2(n+1)}\Big)^2, \qquad n \in \mathbb{N}_0. \tag{10.60} \]
An orthonormal basis for the corresponding eigenspace is given by
\[ \psi_{nlm}(x) = R_{nl}(r)Y_{lm}(x), \tag{10.61} \]
where
\[ R_{nl}(r) = \sqrt{\frac{\gamma^3(n-l)!}{2(n+1)^4((n+l+1)!)^3}}\,\Big(\frac{\gamma r}{n+1}\Big)^l e^{-\frac{\gamma r}{2(n+1)}}\, L^{2l+1}_{n+l+1}\Big(\frac{\gamma r}{n+1}\Big). \tag{10.62} \]
In particular, the lowest eigenvalue $E_0 = -\frac{\gamma^2}{4}$ is simple and the corresponding eigenfunction $\psi_{000}(x) = \sqrt{\frac{\gamma^3}{8\pi}}\, e^{-\gamma r/2}$ is positive.
186 10. Oneparticle Schr¨odinger operators
Proof. It is a straightforward calculation to check that R
nl
are indeed eigen
functions of A
l
corresponding to the eigenvalue −(
γ
2(n+1)
)
2
and for the norm
ing constants we refer to any book on special functions. The only problem
is to show that we have found all eigenvalues.
Since all eigenvalues are negative, we need to look at the equation
−u
(r) + (
l(l + 1)
r
2
−
γ
r
)u(r) = λu(r) (10.63)
for λ < 0. Introducing new variables x =
√
−λr and v(x) =
e
x
x
l+1
u(
x
√
−λ
)
this equation transforms into
xv
(x) + 2(l + 1 −x)v
(x) + 2nv(x) = 0, n =
γ
2
√
−λ
−(l + 1). (10.64)
Now let us search for a solution which can be expanded into a convergent
power series
v(x) =
∞
j=0
v
j
x
j
, v
0
= 1. (10.65)
The corresponding u(r) is square integrable near 0 and satisﬁes the boundary
condition (if any). Thus we need to ﬁnd those values of λ for which it is
square integrable near +∞.
Substituting the ansatz (10.65) into our diﬀerential equation and com
paring powers of x gives the following recursion for the coeﬃcients
v
j+1
=
2(j −n)
(j + 1)(j + 2(l + 1))
v
j
(10.66)
and thus
v
j
=
1
j!
j−1
k=0
2(k −n)
k + 2(l + 1)
. (10.67)
Now there are two cases to distinguish. If n ∈ N
0
, then v
j
= 0 for j > n
and v(x) is a polynomial. In this case u(r) is square integrable and hence an
eigenfunction corresponding to the eigenvalue λ
n
= −(
γ
2(n+l+1)
)
2
. Otherwise
we have v
j
≥
(2−ε)
j
j!
for j suﬃciently large. Hence by adding a polynomial
to v(x) we can get a function ˜ v(x) such that ˜ v
j
≥
(2−ε)
j
j!
for all j. But
then ˜ v(x) ≥ exp((2 − ε)x) and thus the corresponding u(r) is not square
integrable near −∞.
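Both the termination of the recursion (10.66) for $n \in \mathbb{N}_0$ and the resulting eigenfunction can be verified symbolically. In the sketch below the quantum numbers $n = 2$, $l = 1$ and $\gamma = 1$ are an arbitrary sample, not taken from the text:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
gamma = sp.Integer(1)
n, l = 2, 1                                   # sample quantum numbers (assumed)
lam = -(gamma/(2*(n + l + 1)))**2             # candidate eigenvalue
k = sp.sqrt(-lam)
# coefficients from the recursion (10.66); they terminate at j = n
v = [sp.Integer(1)]
for j in range(n):
    v.append(v[-1]*2*(j - n)/((j + 1)*(j + 2*(l + 1))))
x = k*r
u = x**(l + 1)*sp.exp(-x)*sum(c*x**i for i, c in enumerate(v))
# plug u into (10.63); the residual must vanish identically
residual = -sp.diff(u, r, 2) + (l*(l + 1)/r**2 - gamma/r)*u - lam*u
assert sp.simplify(residual) == 0
print("u is an eigenfunction for lambda =", lam)
```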
10.5. Nondegeneracy of the ground state

The lowest eigenvalue (below the essential spectrum) of a Schrödinger operator is called the ground state. Since the laws of physics state that a quantum system will transfer energy to its surroundings (e.g., an atom emits radiation) until it eventually reaches its ground state, this state is in some sense the most important state. We have seen that the hydrogen atom has a nondegenerate (simple) ground state with a corresponding positive eigenfunction. In particular, the hydrogen atom is stable in the sense that there is a lowest possible energy. This is quite surprising since the corresponding classical mechanical system is not: the electron could fall into the nucleus!

Our aim in this section is to show that the ground state is simple with a corresponding positive eigenfunction. Note that it suffices to show that any ground state eigenfunction is positive, since nondegeneracy then follows for free: two positive functions cannot be orthogonal.
To set the stage let us introduce some notation. Let $\mathfrak{H} = L^2(\mathbb{R}^n)$. We call $f \in L^2(\mathbb{R}^n)$ positive if $f \geq 0$ a.e.\ and $f \neq 0$. We call $f$ strictly positive if $f > 0$ a.e. A bounded operator $A$ is called positivity preserving if $f \geq 0$ implies $Af \geq 0$ and positivity improving if $f \geq 0$ implies $Af > 0$. Clearly $A$ is positivity preserving (improving) if and only if $\langle f, Ag\rangle \geq 0$ ($> 0$) for $f, g \geq 0$.

Example. Multiplication by a positive function is positivity preserving (but not improving). Convolution with a strictly positive function is positivity improving.
We first show that positivity improving operators have positive eigenfunctions.

Theorem 10.10. Suppose $A \in \mathcal{L}(L^2(\mathbb{R}^n))$ is a self-adjoint, positivity improving and real (i.e., it maps real functions to real functions) operator. If $\|A\|$ is an eigenvalue, then it is simple and the corresponding eigenfunction is strictly positive.

Proof. Let $\psi$ be an eigenfunction. It is no restriction to assume that $\psi$ is real (since $A$ is real, both real and imaginary part of $\psi$ are eigenfunctions as well). We assume $\|\psi\| = 1$ and denote by $\psi_\pm = \frac{|\psi|\pm\psi}{2}$ the positive and negative parts of $\psi$. Then by $|A\psi| = |A\psi_+ - A\psi_-| \leq A\psi_+ + A\psi_- = A|\psi|$ we have
\[ \|A\| = \langle\psi, A\psi\rangle \leq \langle|\psi|, |A\psi|\rangle \leq \langle|\psi|, A|\psi|\rangle \leq \|A\|, \tag{10.68} \]
that is, $\langle\psi, A\psi\rangle = \langle|\psi|, A|\psi|\rangle$ and thus
\[ \langle\psi_+, A\psi_-\rangle = \frac{1}{4}\big(\langle|\psi|, A|\psi|\rangle - \langle\psi, A\psi\rangle\big) = 0. \tag{10.69} \]
Consequently $\psi_- = 0$ or $\psi_+ = 0$, since otherwise $A\psi_- > 0$ and hence also $\langle\psi_+, A\psi_-\rangle > 0$. Without restriction $\psi = \psi_+ \geq 0$ and since $A$ is positivity improving we even have $\psi = \|A\|^{-1}A\psi > 0$.
So we need a positivity improving operator. By (7.42) and (7.43) both $e^{-tH_0}$, $t > 0$, and $R_\lambda(H_0)$, $\lambda < 0$, are positivity improving, since they are given by convolution with a strictly positive function. Our hope is that this property carries over to $H = H_0 + V$.
Theorem 10.11. Suppose $H = H_0 + V$ is self-adjoint and bounded from below with $C_c^\infty(\mathbb{R}^n)$ as a core. If $E_0 = \min\sigma(H)$ is an eigenvalue, it is simple and the corresponding eigenfunction is strictly positive.

Proof. We first show that $e^{-tH}$, $t > 0$, is positivity preserving. If we set $V_n = V\chi_{\{x\,|\,|V(x)|\leq n\}}$, then $V_n$ is bounded and $H_n = H_0 + V_n$ is positivity preserving by the Trotter product formula, since both $e^{-tH_0}$ and $e^{-tV_n}$ are. Moreover, we have $H_n\psi \to H\psi$ for $\psi \in C_c^\infty(\mathbb{R}^n)$ (note that necessarily $V \in L^2_{loc}$) and hence $H_n \to H$ in strong resolvent sense by Lemma 6.28. Hence $e^{-tH_n} \to e^{-tH}$ strongly by Theorem 6.23, which shows that $e^{-tH}$ is at least positivity preserving (since $0$ cannot be an eigenvalue of $e^{-tH}$ it cannot map a positive function to $0$).

Next I claim that for $\psi$ positive the closed set
\[ N(\psi) = \{\varphi \in L^2(\mathbb{R}^n)\,|\,\varphi \geq 0,\ \langle\varphi, e^{-sH}\psi\rangle = 0\ \forall s \geq 0\} \tag{10.70} \]
is just $\{0\}$. If $\varphi \in N(\psi)$ we have by $e^{-sH}\psi \geq 0$ that $\varphi\,e^{-sH}\psi = 0$. Hence $e^{tV_n}\varphi\,e^{-sH}\psi = 0$, that is, $e^{tV_n}\varphi \in N(\psi)$. In other words, both $e^{tV_n}$ and $e^{-tH}$ leave $N(\psi)$ invariant and invoking again Trotter's formula the same is true for
\[ e^{-t(H-V_n)} = \operatorname*{s-lim}_{k\to\infty}\Big(e^{-\frac{t}{k}H}e^{\frac{t}{k}V_n}\Big)^k. \tag{10.71} \]
Since $e^{-t(H-V_n)} \to e^{-tH_0}$ strongly, we finally obtain that $e^{-tH_0}$ leaves $N(\psi)$ invariant. But this operator is positivity improving and thus $N(\psi) = \{0\}$.

Now it remains to use (7.41), which shows
\[ \langle\varphi, R_H(\lambda)\psi\rangle = \int_0^\infty e^{\lambda t}\langle\varphi, e^{-tH}\psi\rangle\, dt > 0, \qquad \lambda < E_0, \tag{10.72} \]
for $\varphi, \psi$ positive. So $R_H(\lambda)$ is positivity improving for $\lambda < E_0$.

If $\psi$ is an eigenfunction of $H$ corresponding to $E_0$, it is an eigenfunction of $R_H(\lambda)$ corresponding to $\frac{1}{E_0 - \lambda}$ and the claim follows since $\|R_H(\lambda)\| = \frac{1}{E_0 - \lambda}$.
Chapter 11

Atomic Schrödinger operators

11.1. Self-adjointness

In this section we want to have a look at the Hamiltonian corresponding to more than one interacting particle. It is given by
\[ H = -\sum_{j=1}^N \Delta_j + \sum_{j<k}^N V_{j,k}(x_j - x_k). \tag{11.1} \]
We first consider the case of two particles, which will give us a feeling for how the many particle case differs from the one particle case and how the difficulties can be overcome.

We denote the coordinates corresponding to the first particle by $x_1 = (x_{1,1}, x_{1,2}, x_{1,3})$ and those corresponding to the second particle by $x_2 = (x_{2,1}, x_{2,2}, x_{2,3})$. If we assume that the interaction is again of the Coulomb type, the Hamiltonian is given by
\[ H = -\Delta_1 - \Delta_2 - \frac{\gamma}{|x_1 - x_2|}, \qquad D(H) = H^2(\mathbb{R}^6). \tag{11.2} \]
Since Theorem 10.2 does not allow singularities for $n \geq 3$, it does not tell us whether $H$ is self-adjoint or not. Let
\[ (y_1, y_2) = \frac{1}{\sqrt{2}}\begin{pmatrix} I & I\\ -I & I \end{pmatrix}(x_1, x_2), \tag{11.3} \]
then $H$ reads in these new coordinates
\[ H = (-\Delta_1) + \Big(-\Delta_2 - \frac{\gamma/\sqrt{2}}{|y_2|}\Big). \tag{11.4} \]
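The coordinate change (11.3) can be checked directly: it is orthogonal (so the kinetic term keeps its form) and maps $x_1 - x_2$ to $-\sqrt{2}\,y_2$, producing the one-body potential in (11.4). A small numerical sketch with random sample vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.standard_normal(3), rng.standard_normal(3)
# the orthogonal change of coordinates (11.3)
y1, y2 = (x1 + x2)/np.sqrt(2), (x2 - x1)/np.sqrt(2)
# orthogonality: the quadratic (kinetic) form is preserved ...
assert np.isclose(np.sum(x1**2) + np.sum(x2**2), np.sum(y1**2) + np.sum(y2**2))
# ... while the interaction becomes gamma/(sqrt(2)|y_2|)
assert np.isclose(np.linalg.norm(x1 - x2), np.sqrt(2)*np.linalg.norm(y2))
print("coordinate change verified")
```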
In particular, it is the sum of a free particle plus a particle in an external Coulomb field. From a physics point of view, the first part corresponds to the center of mass motion and the second part to the relative motion.

Using that $\gamma/(\sqrt{2}|y_2|)$ has $(-\Delta_2)$-bound $0$ in $L^2(\mathbb{R}^3)$ it is not hard to see that the same is true for the $(-\Delta_1 - \Delta_2)$-bound in $L^2(\mathbb{R}^6)$ (details will follow in the next section). In particular, $H$ is self-adjoint and semibounded for any $\gamma \in \mathbb{R}$. Moreover, you might suspect that $\gamma/(\sqrt{2}|y_2|)$ is relatively compact with respect to $-\Delta_1 - \Delta_2$ in $L^2(\mathbb{R}^6)$ since it is with respect to $-\Delta_2$ in $L^2(\mathbb{R}^3)$. However, this is not true! This is due to the fact that $\gamma/(\sqrt{2}|y_2|)$ does not vanish as $|y| \to \infty$.
Let us look at this problem from the physical view point. If $\lambda \in \sigma_{ess}(H)$, this means that the movement of the whole system is somehow unbounded. There are two possibilities for this.

Firstly, both particles are far away from each other (such that we can neglect the interaction) and the energy corresponds to the sum of the kinetic energies of both particles. Since both can be arbitrarily small (but positive), we expect $[0,\infty) \subseteq \sigma_{ess}(H)$.

Secondly, both particles remain close to each other and move together. In the new coordinates this corresponds to a bound state of the second operator. Hence we expect $[\lambda_0,\infty) \subseteq \sigma_{ess}(H)$, where $\lambda_0 = -\gamma^2/8$ is the smallest eigenvalue of the second operator if the forces are attracting ($\gamma \geq 0$) and $\lambda_0 = 0$ if they are repelling ($\gamma \leq 0$).
It is not hard to translate these intuitive ideas into a rigorous proof. Let $\psi_1(y_1)$ be a Weyl sequence corresponding to $\lambda \in [0,\infty)$ for $-\Delta_1$ and $\psi_2(y_2)$ be a Weyl sequence corresponding to $\lambda_0$ for $-\Delta_2 - \gamma/(\sqrt{2}|y_2|)$. Then $\psi_1(y_1)\psi_2(y_2)$ is a Weyl sequence corresponding to $\lambda + \lambda_0$ for $H$ and thus $[\lambda_0,\infty) \subseteq \sigma_{ess}(H)$. Conversely, we have $-\Delta_1 \geq 0$ respectively $-\Delta_2 - \gamma/(\sqrt{2}|y_2|) \geq \lambda_0$ and hence $H \geq \lambda_0$. Thus we obtain
\[ \sigma(H) = \sigma_{ess}(H) = [\lambda_0,\infty), \qquad \lambda_0 = \begin{cases} -\gamma^2/8, & \gamma \geq 0,\\ 0, & \gamma \leq 0. \end{cases} \tag{11.5} \]
Clearly, the physically relevant information is the spectrum of the operator $-\Delta_2 - \gamma/(\sqrt{2}|y_2|)$, which is hidden by the spectrum of $-\Delta_1$. Hence, in order to reveal the physics, one first has to remove the center of mass motion.

To avoid clumsy notation, we will restrict ourselves to the case of one atom with $N$ electrons whose nucleus is fixed at the origin. In particular, this implies that we do not have to deal with the center of mass motion encountered in our example above. The Hamiltonian is given by
\[ H^{(N)} = -\sum_{j=1}^N \Delta_j - \sum_{j=1}^N V_{ne}(x_j) + \sum_{j=1}^N\sum_{j<k}^N V_{ee}(x_j - x_k), \qquad D(H^{(N)}) = H^2(\mathbb{R}^{3N}), \tag{11.6} \]
where $V_{ne}$ describes the interaction of one electron with the nucleus and $V_{ee}$ describes the interaction of two electrons. Explicitly we have
\[ V_j(x) = \frac{\gamma_j}{|x|}, \qquad \gamma_j > 0, \quad j = ne, ee. \tag{11.7} \]
We first need to establish self-adjointness of $H^{(N)}$. This will follow from Kato's theorem.

Theorem 11.1 (Kato). Let $V_k \in L^\infty_\infty(\mathbb{R}^d) + L^2(\mathbb{R}^d)$, $d \leq 3$, be real-valued and let $V_k(y^{(k)})$ be the multiplication operator in $L^2(\mathbb{R}^n)$, $n = Nd$, obtained by letting $y^{(k)}$ be the first $d$ coordinates of a unitary transform of $\mathbb{R}^n$. Then $V_k$ is $H_0$-bounded with $H_0$-bound $0$. In particular,
\[ H = H_0 + \sum_k V_k(y^{(k)}), \qquad D(H) = H^2(\mathbb{R}^n), \tag{11.8} \]
is self-adjoint and $C_0^\infty(\mathbb{R}^n)$ is a core.
Proof. It suffices to consider one $k$. After a unitary transform of $\mathbb{R}^n$ we can assume $y^{(1)} = (x_1, \dots, x_d)$, since such transformations leave both the scalar product of $L^2(\mathbb{R}^n)$ and $H_0$ invariant. Now let $\psi \in \mathcal{S}(\mathbb{R}^n)$. Then
\[ \|V_k\psi\|^2 \leq a^2\int_{\mathbb{R}^n}|\Delta_1\psi(x)|^2\, d^n x + b^2\int_{\mathbb{R}^n}|\psi(x)|^2\, d^n x, \tag{11.9} \]
where $\Delta_1 = \sum_{j=1}^d \partial^2/\partial x_j^2$, by our previous lemma. Hence we obtain
\[ \|V_k\psi\|^2 \leq a^2\int_{\mathbb{R}^n}\Big|\sum_{j=1}^d p_j^2\hat\psi(p)\Big|^2 d^n p + b^2\|\psi\|^2 \leq a^2\int_{\mathbb{R}^n}\Big|\sum_{j=1}^n p_j^2\hat\psi(p)\Big|^2 d^n p + b^2\|\psi\|^2 = a^2\|H_0\psi\|^2 + b^2\|\psi\|^2, \tag{11.10} \]
which implies that $V_k$ is relatively bounded with bound $0$.
11.2. The HVZ theorem

The considerations of the beginning of this section show that it is not so easy to determine the essential spectrum of $H^{(N)}$, since the potential does not decay in all directions as $|x| \to \infty$. However, there is still something we can do. Denote the infimum of the spectrum of $H^{(N)}$ by $\lambda_N$. Then, let us split the system into $H^{(N-1)}$ plus a single electron. If the single electron is far away from the remaining system such that there is little interaction, the energy should be the sum of the kinetic energy of the single electron and the energy of the remaining system. Hence arguing as in the two electron example of the previous section we expect

Theorem 11.2 (HVZ). Let $H^{(N)}$ be the self-adjoint operator given in (11.6). Then $H^{(N)}$ is bounded from below and
\[ \sigma_{ess}(H^{(N)}) = [\lambda_{N-1}, \infty), \tag{11.11} \]
where $\lambda_{N-1} = \min\sigma(H^{(N-1)}) < 0$.

In particular, the ionization energy (i.e., the energy needed to remove one electron from the atom in its ground state) of an atom with $N$ electrons is given by $\lambda_N - \lambda_{N-1}$.

Our goal for the rest of this section is to prove this result, which is due to Zhislin, van Winter and Hunziker and known as the HVZ theorem. In fact there is a version which holds for general $N$-body systems. The proof is similar but involves some additional notation.
The idea of proof is the following. To prove $[\lambda_{N-1},\infty) \subseteq \sigma_{ess}(H^{(N)})$ we choose Weyl sequences for $H^{(N-1)}$ and $-\Delta_N$ and proceed according to our intuitive picture from above. To prove $\sigma_{ess}(H^{(N)}) \subseteq [\lambda_{N-1},\infty)$ we will localize $H^{(N)}$ on sets where either one electron is far away from the others or all electrons are far away from the nucleus. Since the error turns out to be relatively compact, it remains to consider the infimum of the spectra of these operators. For all cases where one electron is far away it is $\lambda_{N-1}$ and for the case where all electrons are far away from the nucleus it is $0$ (since the electrons repel each other).
We begin with the first inclusion. Let $\psi^{N-1}(x_1,\dots,x_{N-1}) \in H^2(\mathbb{R}^{3(N-1)})$ such that $\|\psi^{N-1}\| = 1$, $\|(H^{(N-1)} - \lambda_{N-1})\psi^{N-1}\| \leq \varepsilon$ and $\psi^1 \in H^2(\mathbb{R}^3)$ such that $\|\psi^1\| = 1$, $\|(-\Delta_N - \lambda)\psi^1\| \leq \varepsilon$ for some $\lambda \geq 0$. Now consider $\psi_r(x_1,\dots,x_N) = \psi^{N-1}(x_1,\dots,x_{N-1})\psi^1_r(x_N)$, $\psi^1_r(x_N) = \psi^1(x_N - r)$. Then
\[ \|(H^{(N)} - \lambda - \lambda_{N-1})\psi_r\| \leq \|(H^{(N-1)} - \lambda_{N-1})\psi^{N-1}\|\,\|\psi^1_r\| + \|\psi^{N-1}\|\,\|(-\Delta_N - \lambda)\psi^1_r\| + \Big\|\Big(V_N - \sum_{j=1}^{N-1}V_{N,j}\Big)\psi_r\Big\|, \tag{11.12} \]
where $V_N = V_{ne}(x_N)$ and $V_{N,j} = V_{ee}(x_N - x_j)$. Since $(V_N - \sum_{j=1}^{N-1}V_{N,j})\psi^{N-1} \in L^2(\mathbb{R}^{3N})$ and $|\psi^1_r| \to 0$ pointwise as $|r| \to \infty$ (by Lemma 10.1), the third term can be made smaller than $\varepsilon$ by choosing $|r|$ large (dominated convergence). In summary,
\[ \|(H^{(N)} - \lambda - \lambda_{N-1})\psi_r\| \leq 3\varepsilon, \tag{11.13} \]
proving $[\lambda_{N-1},\infty) \subseteq \sigma_{ess}(H^{(N)})$.
The second inclusion is more involved. We begin with a localization formula, which can be verified by a straightforward computation.

Lemma 11.3 (IMS localization formula). Suppose $\phi_j \in C^\infty(\mathbb{R}^n)$, $0 \leq j \leq N$, is such that
\[ \sum_{j=0}^N \phi_j(x)^2 = 1, \qquad x \in \mathbb{R}^n. \tag{11.14} \]
Then
\[ \Delta\psi = \sum_{j=0}^N \big(\phi_j\Delta(\phi_j\psi) + |\partial\phi_j|^2\psi\big), \qquad \psi \in H^2(\mathbb{R}^n). \tag{11.15} \]
Abbreviate $B = \{x \in \mathbb{R}^{3N}\,|\,|x| \geq 1\}$. Now we will choose $\phi_j$, $1 \leq j \leq N$, in such a way that $x \in \mathrm{supp}(\phi_j) \cap B$ implies that the $j$-th particle is far away from all the others and from the nucleus. Similarly, we will choose $\phi_0$ in such a way that $x \in \mathrm{supp}(\phi_0) \cap B$ implies that all particles are far away from the nucleus.

Lemma 11.4. There exist functions $\phi_j \in C^\infty(\mathbb{R}^n, [0,1])$, $0 \leq j \leq N$, such that (11.14) holds,
\[ \mathrm{supp}(\phi_j) \cap B \subseteq \{x \in B\,|\,|x_j - x_\ell| \geq C|x| \text{ for all } \ell \neq j, \text{ and } |x_j| \geq C|x|\}, \]
\[ \mathrm{supp}(\phi_0) \cap B \subseteq \{x \in B\,|\,|x_\ell| \geq C|x| \text{ for all } \ell\} \tag{11.16} \]
for some $C \in [0,1]$, and $|\partial\phi_j(x)| \to 0$ as $|x| \to \infty$.
Proof. Consider the sets
\[ U^n_j = \{x \in S^{3N-1}\,|\,|x_j - x_\ell| > n^{-1} \text{ for all } \ell \neq j, \text{ and } |x_j| > n^{-1}\}, \]
\[ U^n_0 = \{x \in S^{3N-1}\,|\,|x_\ell| > n^{-1} \text{ for all } \ell\}. \tag{11.17} \]
We claim that
\[ \bigcup_{n=1}^\infty\bigcup_{j=0}^N U^n_j = S^{3N-1}. \tag{11.18} \]
Indeed, suppose there is an $x \in S^{3N-1}$ which is not an element of this union. Then $x \notin U^n_0$ for all $n$ implies $0 = |x_j|$ for some $j$, say $j = 1$. Next, $x \notin U^n_1$ for all $n$ implies $0 = |x_j - x_1| = |x_j|$ for some $j > 1$, say $j = 2$. Proceeding like this we end up with $x = 0$, a contradiction. By compactness of $S^{3N-1}$ we even have
\[ \bigcup_{j=0}^N U^n_j = S^{3N-1} \tag{11.19} \]
for $n$ sufficiently large. It is well-known that there is a partition of unity $\tilde\phi_j(x)$ subordinate to this cover. Extend $\tilde\phi_j(x)$ to a smooth function from $\mathbb{R}^{3N}\backslash\{0\}$ to $[0,1]$ by
\[ \tilde\phi_j(\lambda x) = \tilde\phi_j(x), \qquad x \in S^{3N-1},\ \lambda > 0, \tag{11.20} \]
and pick a function $\tilde\phi \in C^\infty(\mathbb{R}^{3N}, [0,1])$ with support inside the unit ball which is $1$ in a neighborhood of the origin. Then
\[ \phi_j = \frac{\tilde\phi + (1-\tilde\phi)\tilde\phi_j}{\sqrt{\sum_{\ell=0}^N\big(\tilde\phi + (1-\tilde\phi)\tilde\phi_\ell\big)^2}} \tag{11.21} \]
are the desired functions. The gradient tends to zero since $\phi_j(\lambda x) = \phi_j(x)$ for $\lambda \geq 1$ and $|x| \geq 1$, which implies $(\partial\phi_j)(\lambda x) = \lambda^{-1}(\partial\phi_j)(x)$.
By our localization formula we have
\[ H^{(N)} = \sum_{j=0}^N \phi_j H^{(N,j)}\phi_j + K, \qquad K = \sum_{j=0}^N \big(\phi_j^2 V^{(N,j)} - |\partial\phi_j|^2\big), \tag{11.22} \]
where
\[ H^{(N,j)} = -\sum_{\ell=1}^N \Delta_\ell - \sum_{\ell\neq j} V_\ell + \sum_{k<\ell,\ k,\ell\neq j} V_{k,\ell}, \qquad H^{(N,0)} = -\sum_{\ell=1}^N \Delta_\ell + \sum_{k<\ell} V_{k,\ell}, \]
\[ V^{(N,j)} = -V_j + \sum_{\ell\neq j} V_{j,\ell}, \qquad V^{(N,0)} = -\sum_{\ell=1}^N V_\ell. \tag{11.23} \]
To show that our choice of the functions $\phi_j$ implies that $K$ is relatively compact with respect to $H$ we need the following

Lemma 11.5. Let $V$ be $H_0$-bounded with $H_0$-bound $0$ and suppose that $\|\chi_{\{x\,|\,|x|\geq R\}}V R_{H_0}(z)\| \to 0$ as $R \to \infty$. Then $V$ is relatively compact with respect to $H_0$.
Proof. Let $\psi_n$ converge to $0$ weakly. Note that $\|\psi_n\| \leq M$ for some $M > 0$. It suffices to show that $\|V R_{H_0}(z)\psi_n\|$ converges to $0$. Choose $\phi \in C_0^\infty(\mathbb{R}^n, [0,1])$ such that it is one for $|x| \leq R$. Then
\[ \|V R_{H_0}(z)\psi_n\| \leq \|(1-\phi)V R_{H_0}(z)\psi_n\| + \|V\phi R_{H_0}(z)\psi_n\| \leq \|(1-\phi)V R_{H_0}(z)\|\,\|\psi_n\| + a\|H_0\phi R_{H_0}(z)\psi_n\| + b\|\phi R_{H_0}(z)\psi_n\|. \tag{11.24} \]
By assumption, the first term can be made smaller than $\varepsilon$ by choosing $R$ large. Next, the same is true for the second term choosing $a$ small. Finally, the last term can also be made smaller than $\varepsilon$ by choosing $n$ large since $\phi$ is $H_0$-compact.
The terms $|\partial\phi_j|^2$ are bounded and vanish at $\infty$, hence they are $H_0$-compact by Lemma 7.10. The terms $\phi_j^2 V^{(N,j)}$ are relatively compact by the lemma and hence $K$ is relatively compact with respect to $H_0$. By Lemma 6.22, $K$ is also relatively compact with respect to $H^{(N)}$ since $V^{(N)}$ is relatively bounded with respect to $H_0$.
In particular, $H^{(N)} - K$ is self-adjoint on $H^2(\mathbb{R}^{3N})$ and $\sigma_{ess}(H^{(N)}) = \sigma_{ess}(H^{(N)} - K)$. Since the operators $H^{(N,j)}$, $1 \leq j \leq N$, are all of the form $H^{(N-1)}$ plus one particle which does not interact with the others and the nucleus, we have $H^{(N,j)} - \lambda_{N-1} \geq 0$, $1 \leq j \leq N$. Moreover, we have $H^{(N,0)} \geq 0$ since $V_{j,k} \geq 0$ and hence
\[ \langle\psi, (H^{(N)} - K - \lambda_{N-1})\psi\rangle = \sum_{j=0}^N \langle\phi_j\psi, (H^{(N,j)} - \lambda_{N-1})\phi_j\psi\rangle \geq 0. \tag{11.25} \]
Thus we obtain the remaining inclusion
\[ \sigma_{ess}(H^{(N)}) = \sigma_{ess}(H^{(N)} - K) \subseteq \sigma(H^{(N)} - K) \subseteq [\lambda_{N-1}, \infty), \tag{11.26} \]
which finishes the proof of the HVZ theorem.
Note that the same proof works if we add additional nuclei at fixed
locations. That is, we can also treat molecules if we assume that the nuclei
are fixed in space.

Finally, let us consider the example of helium-like atoms (N = 2). By
the HVZ theorem and the considerations of the previous section we have

    \sigma_{ess}(H^{(2)}) = \Bigl[-\frac{\gamma_{ne}^2}{4}, \infty\Bigr).    (11.27)

Moreover, if γ_ee = 0 (no electron interaction), we can take products of one-particle
eigenfunctions to show that

    -\gamma_{ne}^2\Bigl(\frac{1}{4n^2} + \frac{1}{4m^2}\Bigr) \in \sigma_p\bigl(H^{(2)}(\gamma_{ee}=0)\bigr),
    \qquad n, m \in \mathbb{N}.    (11.28)

In particular, there are eigenvalues embedded in the essential spectrum in
this case. Moreover, since the electron interaction term is positive, we see

    H^{(2)} \ge -\frac{\gamma_{ne}^2}{2}.    (11.29)

Note that there can be no positive eigenvalues by the virial theorem. This
even holds for arbitrary N,

    \sigma_p(H^{(N)}) \subset (-\infty, 0).    (11.30)
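To see which of the eigenvalues (11.28) are actually embedded (a short check elaborating on the text): for n, m ≥ 2,

```latex
-\gamma_{ne}^2\Bigl(\frac{1}{4n^2}+\frac{1}{4m^2}\Bigr)
\ \ge\ -\gamma_{ne}^2\Bigl(\frac{1}{16}+\frac{1}{16}\Bigr)
\ =\ -\frac{\gamma_{ne}^2}{8}\ >\ -\frac{\gamma_{ne}^2}{4},
```

so these values lie inside σ_ess(H^{(2)}) = [−γ_ne²/4, ∞), whereas for min(n,m) = 1 the value is strictly below the bottom of the essential spectrum and hence is a discrete eigenvalue.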
Chapter 12
Scattering theory
12.1. Abstract theory
In physical measurements one often has the following situation. A particle
is shot into a region where it interacts with some forces and then leaves
the region again. Outside this region the forces are negligible and hence the
time evolution should be asymptotically free. Hence one expects asymptotic
states ψ_±(t) = exp(−itH_0)ψ_±(0) to exist such that

    \|\psi(t) - \psi_\pm(t)\| \to 0 \quad\text{as}\quad t \to \pm\infty.    (12.1)
(Figure: the trajectory ψ(t) together with its incoming and outgoing asymptotes ψ_−(t) and ψ_+(t).)
Rewriting this condition we see

    0 = \lim_{t\to\pm\infty} \|e^{-itH}\psi(0) - e^{-itH_0}\psi_\pm(0)\|
      = \lim_{t\to\pm\infty} \|\psi(0) - e^{itH}e^{-itH_0}\psi_\pm(0)\|    (12.2)

and motivated by this we define the wave operators by

    D(\Omega_\pm) = \{\psi \in H \mid \exists\, \lim_{t\to\pm\infty} e^{itH}e^{-itH_0}\psi\},
    \qquad \Omega_\pm\psi = \lim_{t\to\pm\infty} e^{itH}e^{-itH_0}\psi.    (12.3)
The set D(Ω_±) is the set of all incoming/outgoing asymptotic states ψ_± and
Ran(Ω_±) is the set of all states which have an incoming/outgoing asymptotic
state. If a state ψ has both, that is, ψ ∈ Ran(Ω_+) ∩ Ran(Ω_−), it is called a
scattering state.
By construction we have

    \|\Omega_\pm\psi\| = \lim_{t\to\pm\infty} \|e^{itH}e^{-itH_0}\psi\| = \lim_{t\to\pm\infty} \|\psi\| = \|\psi\|    (12.4)

and it is not hard to see that D(Ω_±) is closed. Moreover, interchanging the
roles of H_0 and H amounts to replacing Ω_± by Ω_±^{−1}, and hence Ran(Ω_±) is
also closed. In summary,

Lemma 12.1. The sets D(Ω_±) and Ran(Ω_±) are closed and Ω_± : D(Ω_±) →
Ran(Ω_±) is unitary.
Next, observe that

    \lim_{t\to\pm\infty} e^{itH}e^{-itH_0}(e^{-isH_0}\psi)
    = \lim_{t\to\pm\infty} e^{-isH}\bigl(e^{i(t+s)H}e^{-i(t+s)H_0}\psi\bigr)    (12.5)

and hence

    \Omega_\pm e^{-itH_0}\psi = e^{-itH}\Omega_\pm\psi, \qquad \psi \in D(\Omega_\pm).    (12.6)

In addition, D(Ω_±) is invariant under exp(−itH_0) and Ran(Ω_±) is invariant
under exp(−itH). Moreover, if ψ ∈ D(Ω_±)^⊥ then

    \langle\varphi, \exp(-itH_0)\psi\rangle = \langle\exp(itH_0)\varphi, \psi\rangle = 0,
    \qquad \varphi \in D(\Omega_\pm).    (12.7)

Hence D(Ω_±)^⊥ is invariant under exp(−itH_0) and Ran(Ω_±)^⊥ is invariant
under exp(−itH). Consequently, D(Ω_±) reduces exp(−itH_0) and Ran(Ω_±)
reduces exp(−itH). Moreover, differentiating (12.6) with respect to t we
obtain from Theorem 5.1 the intertwining property of the wave operators.

Theorem 12.2. The subspaces D(Ω_±) respectively Ran(Ω_±) reduce H_0 respectively
H and the operators restricted to these subspaces are unitarily
equivalent:

    \Omega_\pm H_0 \psi = H \Omega_\pm \psi, \qquad \psi \in D(\Omega_\pm) \cap D(H_0).    (12.8)
It is interesting to know the correspondence between incoming and outgoing
states. Hence we define the scattering operator

    S = \Omega_+^{-1}\Omega_-, \qquad D(S) = \{\psi \in D(\Omega_-) \mid \Omega_-\psi \in \mathrm{Ran}(\Omega_+)\}.    (12.9)

Note that we have D(S) = D(Ω_−) if and only if Ran(Ω_−) ⊆ Ran(Ω_+) and
Ran(S) = D(Ω_+) if and only if Ran(Ω_+) ⊆ Ran(Ω_−). Moreover, S is unitary
from D(S) onto Ran(S) and we have

    H_0 S\psi = S H_0 \psi, \qquad \psi \in D(H_0) \cap D(S).    (12.10)

However, note that this whole theory is meaningless until we can show
that D(Ω_±) are nontrivial. We first show a criterion due to Cook.
Lemma 12.3 (Cook). Suppose D(H) ⊆ D(H_0). If

    \int_0^\infty \|(H - H_0)\exp(\mp itH_0)\psi\|\,dt < \infty, \qquad \psi \in D(H_0),    (12.11)

then ψ ∈ D(Ω_±), respectively. Moreover, we even have

    \|(\Omega_\pm - I)\psi\| \le \int_0^\infty \|(H - H_0)\exp(\mp itH_0)\psi\|\,dt    (12.12)

in this case.

Proof. The result follows from

    e^{itH}e^{-itH_0}\psi = \psi + i\int_0^t \exp(isH)(H - H_0)\exp(-isH_0)\psi\,ds,    (12.13)

which holds for ψ ∈ D(H_0). □
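On a finite-dimensional toy model the Duhamel-type identity (12.13) can be checked numerically. This is only an illustrative sketch: the discrete Laplacian, the rank-one potential, the wave packet, and all sizes below are arbitrary choices, not anything from the text.

```python
# Numerical check of (12.13):
#   e^{itH} e^{-itH0} psi = psi + i * int_0^t e^{isH} (H - H0) e^{-isH0} psi ds
# on a toy model: H0 a discrete 1D Laplacian, H = H0 + V with V of rank one.
import numpy as np
from scipy.linalg import expm

n = 40
H0 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # free discrete Laplacian
V = np.zeros((n, n))
V[n // 2, n // 2] = 0.5                                  # toy localized potential
H = H0 + V

psi = np.exp(-0.1 * (np.arange(n) - n / 2) ** 2).astype(complex)
psi /= np.linalg.norm(psi)                               # normalized wave packet

t, steps = 1.0, 400
lhs = expm(1j * t * H) @ expm(-1j * t * H0) @ psi - psi

# trapezoidal rule for i * int_0^t e^{isH} (H - H0) e^{-isH0} psi ds
h = t / steps
s_grid = np.linspace(0.0, t, steps + 1)
vals = np.array([expm(1j * s * H) @ (V @ (expm(-1j * s * H0) @ psi))
                 for s in s_grid])
rhs = 1j * h * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)

print(np.linalg.norm(lhs - rhs))  # small discretization error
```

The same model also exhibits the isometry (12.4): ‖e^{itH}e^{−itH_0}ψ‖ = ‖ψ‖ exactly, since both factors are unitary.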
As a simple consequence we obtain the following result for Schrödinger
operators in ℝ³.

Theorem 12.4. Suppose H_0 is the free Schrödinger operator and H =
H_0 + V with V ∈ L²(ℝ³), then the wave operators exist and D(Ω_±) = H.

Proof. Since we want to use Cook's lemma, we need to estimate

    \|V\psi(s)\|^2 = \int_{\mathbb{R}^3} |V(x)\psi(s,x)|^2\,dx, \qquad \psi(s) = \exp(isH_0)\psi,    (12.14)

for given ψ ∈ D(H_0). Invoking (7.34) we get

    \|V\psi(s)\| \le \|\psi(s)\|_\infty \|V\| \le \frac{1}{(4\pi s)^{3/2}}\|\psi\|_1 \|V\|, \qquad s > 0,    (12.15)

at least for ψ ∈ L¹(ℝ³). Moreover, this implies

    \int_1^\infty \|V\psi(s)\|\,ds \le \frac{1}{4\pi^{3/2}}\|\psi\|_1 \|V\|    (12.16)

and thus any such ψ is in D(Ω_+). Since such functions are dense, we obtain
D(Ω_+) = H. Similarly for Ω_−. □
By the intertwining property ψ is an eigenfunction of H_0 if and only
if it is an eigenfunction of H. Hence for ψ ∈ H_pp(H_0) it is easy to check
whether it is in D(Ω_±) or not and only the continuous subspace is of interest.
We will say that the wave operators exist if all elements of H_ac(H_0) are
asymptotic states, that is,

    H_{ac}(H_0) \subseteq D(\Omega_\pm),    (12.17)

and that they are complete if, in addition, all elements of H_ac(H) are
scattering states, that is,

    H_{ac}(H) \subseteq \mathrm{Ran}(\Omega_\pm).    (12.18)
If we even have

    H_c(H) \subseteq \mathrm{Ran}(\Omega_\pm),    (12.19)

they are called asymptotically complete. We will be mainly interested in
the case where H_0 is the free Schrödinger operator and hence H_ac(H_0) = H.
In this latter case the wave operators exist if D(Ω_±) = H, they are complete if
H_ac(H) = Ran(Ω_±), and asymptotically complete if H_c(H) = Ran(Ω_±). In
particular, asymptotic completeness implies H_sc(H) = ∅ since H restricted
to Ran(Ω_±) is unitarily equivalent to H_0.
12.2. Incoming and outgoing states

In the remaining sections we want to apply this theory to Schrödinger
operators. Our first goal is to give a precise meaning to some terms in the
intuitive picture of scattering theory introduced in the previous section.

This physical picture suggests that we should be able to decompose
ψ ∈ H into an incoming and an outgoing part. But how should incoming
respectively outgoing be defined for ψ ∈ H? Well, incoming (outgoing)
means that the expectation of x² should decrease (increase). Set x(t)² =
exp(iH_0 t) x² exp(−iH_0 t), then, abbreviating ψ(t) = e^{−itH_0}ψ,

    \frac{d}{dt} E_\psi(x(t)^2) = \langle\psi(t), i[H_0, x^2]\psi(t)\rangle
    = 4\langle\psi(t), D\psi(t)\rangle, \qquad \psi \in \mathcal{S}(\mathbb{R}^n),    (12.20)

where D is the dilation operator introduced in (10.11). Hence it is natural
to consider ψ ∈ Ran(P_±),

    P_\pm = P_D((0, \pm\infty)),    (12.21)

as outgoing respectively incoming states. If we project a state in Ran(P_±)
to energies in the interval (a², b²), we expect that it cannot be found in a
ball of radius proportional to a|t| as t → ±∞ (a is the minimal velocity of
the particle, since we have assumed the mass to be two). In fact, we will
show below that the tail decays faster than any inverse power of |t|.
We first collect some properties of D which will be needed later on. Note

    \mathcal{F} D = -D \mathcal{F}    (12.22)

and hence ℱ f(D) = f(−D) ℱ, where ℱ denotes the Fourier transform. To say
more we will look for a transformation which maps D to a multiplication
operator.

Since the dilation group acts on |x| only, it seems reasonable to switch
to polar coordinates x = rω, (r, ω) ∈ ℝ_+ × S^{n−1}. Since U(s) essentially
transforms r into r exp(s) we will replace r by ρ = ln(r). In these coordinates
we have

    U(s)\psi(e^{\rho}\omega) = e^{-ns/2}\psi(e^{\rho-s}\omega)    (12.23)
and hence U(s) corresponds to a shift of ρ (the constant in front is absorbed
by the volume element). Thus D corresponds to differentiation with respect
to this coordinate and all we have to do to make it a multiplication operator
is to take the Fourier transform with respect to ρ.

This leads us to the Mellin transform

    \mathcal{M} : L^2(\mathbb{R}^n) \to L^2(\mathbb{R}\times S^{n-1}),
    \qquad \psi(r\omega) \mapsto (\mathcal{M}\psi)(\lambda,\omega)
    = \frac{1}{\sqrt{2\pi}}\int_0^\infty r^{-i\lambda}\psi(r\omega)\,r^{\frac{n}{2}-1}\,dr.    (12.24)

By construction, ℳ is unitary, that is,

    \int_{\mathbb{R}}\int_{S^{n-1}} |(\mathcal{M}\psi)(\lambda,\omega)|^2\,d\lambda\,d^{n-1}\omega
    = \int_{\mathbb{R}_+}\int_{S^{n-1}} |\psi(r\omega)|^2\,r^{n-1}\,dr\,d^{n-1}\omega,    (12.25)

where d^{n−1}ω is the normalized surface measure on S^{n−1}. Moreover,

    \mathcal{M}^{-1} U(s) \mathcal{M} = e^{-is\lambda}    (12.26)

and hence

    \mathcal{M}^{-1} D \mathcal{M} = \lambda.    (12.27)

From this it is straightforward to show that

    \sigma(D) = \sigma_{ac}(D) = \mathbb{R}, \qquad \sigma_{sc}(D) = \sigma_{pp}(D) = \emptyset    (12.28)

and that 𝒮(ℝ^n) is a core for D. In particular we have P_+ + P_− = I.
Using the Mellin transform we can now prove Perry's estimate [11].

Lemma 12.5. Suppose f ∈ C_c^∞(ℝ) with supp(f) ⊂ (a², b²) for some a, b >
0. For any R ∈ ℝ, N ∈ ℕ there is a constant C such that

    \|\chi_{\{x : |x| < 2a|t|\}} e^{-itH_0} f(H_0) P_D((\pm R, \pm\infty))\| \le \frac{C}{(1+|t|)^N},
    \qquad \pm t \ge 0,    (12.29)

respectively.
Proof. We prove only the + case, the remaining one being similar. Consider
ψ ∈ 𝒮(ℝ^n). Introducing

    \psi(t,x) = e^{-itH_0} f(H_0) P_D((R,\infty))\psi(x)
    = \langle K_{t,x}, \mathcal{F} P_D((R,\infty))\psi\rangle
    = \langle K_{t,x}, P_D((-\infty,-R))\hat\psi\rangle,    (12.30)

where

    K_{t,x}(p) = \frac{1}{(2\pi)^{n/2}} e^{i(tp^2 - px)} f(p^2)^*,    (12.31)

we see that it suffices to show

    \|P_D((-\infty,-R)) K_{t,x}\|^2 \le \frac{const}{(1+|t|)^{2N}},
    \qquad \text{for } |x| < 2a|t|,\ t > 0.    (12.32)
Now we invoke the Mellin transform to estimate this norm:

    \|P_D((-\infty,-R)) K_{t,x}\|^2
    = \int_{-\infty}^{-R}\int_{S^{n-1}} |(\mathcal{M}K_{t,x})(\lambda,\omega)|^2\,d\lambda\,d^{n-1}\omega.    (12.33)

Since

    (\mathcal{M}K_{t,x})(\lambda,\omega) = \frac{1}{(2\pi)^{(n+1)/2}} \int_0^\infty \tilde f(r) e^{i\alpha(r)}\,dr    (12.34)

with f̃(r) = f(r²)* r^{n/2−1} ∈ C_c^∞((a, b)) and α(r) = tr² + rωx − λ ln(r).
Estimating the derivative of α we see

    \alpha'(r) = 2tr + \omega x - \lambda/r > 0, \qquad r \in (a, b),    (12.35)

for λ ≤ −R and t > R(2εa)^{−1}, where ε is the distance of a to the support
of f̃. Hence we can find a constant such that

    \frac{1}{|\alpha'(r)|} \le \frac{const}{1 + |\lambda| + |t|}, \qquad r \in (a, b),    (12.36)

and λ, t as above. Using this we can estimate the integral in (12.34),

    \Bigl|\int_0^\infty \tilde f(r)\frac{1}{\alpha'(r)}\frac{d}{dr} e^{i\alpha(r)}\,dr\Bigr|
    \le \frac{const}{1 + |\lambda| + |t|}\Bigl|\int_0^\infty \tilde f'(r) e^{i\alpha(r)}\,dr\Bigr|    (12.37)

(the last step uses integration by parts), for λ, t as above. By increasing the
constant we can even assume that it holds for t ≥ 0 and λ ≤ −R. Moreover,
by iterating the last estimate we see

    |(\mathcal{M}K_{t,x})(\lambda,\omega)| \le \frac{const}{(1 + |\lambda| + |t|)^N}    (12.38)

for any N ∈ ℕ and t ≥ 0 and λ ≤ −R. This finishes the proof. □
Corollary 12.6. Suppose that f ∈ C_c^∞((0, ∞)) and R ∈ ℝ. Then the
operator P_D((±R, ±∞)) f(H_0) exp(−itH_0) converges strongly to 0 as t →
∓∞.

Proof. Abbreviating P_D = P_D((±R, ±∞)) and χ = χ_{{x : |x| < 2a|t|}} we have

    \|P_D f(H_0) e^{-itH_0}\psi\| \le \|\chi e^{itH_0} f(H_0)^* P_D\|\,\|\psi\|
    + \|f(H_0)\|\,\|(I-\chi)\psi\|,    (12.39)

since ‖A‖ = ‖A*‖. Taking t → ∓∞ the first term goes to zero by our lemma
and the second goes to zero since χψ → ψ. □
12.3. Schrödinger operators with short range potentials

By the RAGE theorem we know that for ψ ∈ H_c, ψ(t) will eventually leave
every compact ball (at least on the average). Hence we expect that the
time evolution will asymptotically look like the free one for ψ ∈ H_c if the
potential decays sufficiently fast. In other words, we expect such potentials
to be asymptotically complete.

Suppose V is relatively bounded with bound less than one. Introduce

    h_1(r) = \|V R_{H_0}(z)\chi_r\|, \qquad h_2(r) = \|\chi_r V R_{H_0}(z)\|, \qquad r \ge 0,    (12.40)

where

    \chi_r = \chi_{\{x : |x| \ge r\}}.    (12.41)

The potential V will be called short range if these quantities are integrable.
We first note that it suffices to check this for h_1 or h_2 and for one z ∈ ρ(H_0).
Lemma 12.7. The function h_1 is integrable if and only if h_2 is. Moreover,
h_j integrable for one z_0 ∈ ρ(H_0) implies h_j integrable for all z ∈ ρ(H_0).

Proof. Pick φ ∈ C^∞(ℝ^n, [0,1]) such that φ(x) = 0 for 0 ≤ |x| ≤ 1/2 and
φ(x) = 1 for 1 ≤ |x|. Then it is not hard to see that h_j is integrable if and
only if h̃_j is integrable, where

    \tilde h_1(r) = \|V R_{H_0}(z)\varphi_r\|, \qquad \tilde h_2(r) = \|\varphi_r V R_{H_0}(z)\|,
    \qquad r \ge 1,    (12.42)

and φ_r(x) = φ(x/r). Using

    [R_{H_0}(z), \varphi_r] = -R_{H_0}(z)[H_0, \varphi_r]R_{H_0}(z)
    = R_{H_0}(z)\bigl(\Delta\varphi_r + 2(\partial\varphi_r)\partial\bigr)R_{H_0}(z)    (12.43)

and Δφ_r = φ_{r/2}Δφ_r, ‖Δφ_r‖_∞ ≤ ‖Δφ‖_∞/r², respectively (∂φ_r) = φ_{r/2}(∂φ_r),
‖∂φ_r‖_∞ ≤ ‖∂φ‖_∞/r, we see

    |\tilde h_1(r) - \tilde h_2(r)| \le \frac{c}{r}\,\tilde h_1(r/2), \qquad r \ge 1.    (12.44)

Hence h̃_2 is integrable if h̃_1 is. Conversely,

    \tilde h_1(r) \le \tilde h_2(r) + \frac{c}{r}\,\tilde h_1(r/2)
    \le \tilde h_2(r) + \frac{c}{r}\,\tilde h_2(r/2) + \frac{2c^2}{r^2}\,\tilde h_1(r/4)    (12.45)

shows that h̃_1 is integrable if h̃_2 is.

Invoking the first resolvent formula,

    \|\varphi_r V R_{H_0}(z)\| \le \|\varphi_r V R_{H_0}(z_0)\|\,\|I - (z - z_0)R_{H_0}(z)\\|    (12.46)

finishes the proof. □
As a first consequence note

Lemma 12.8. If V is short range, then R_H(z) − R_{H_0}(z) is compact.

Proof. The operator R_H(z)V(I − χ_r)R_{H_0}(z) is compact since (I − χ_r)R_{H_0}(z)
is by Lemma 7.10 and R_H(z)V is bounded by Lemma 6.22. Moreover, by
our short range condition it converges in norm to

    R_H(z) V R_{H_0}(z) = R_H(z) - R_{H_0}(z)    (12.47)

as r → ∞ (at least for some subsequence). □
In particular, by Weyl's theorem we have σ_ess(H) = [0, ∞). Moreover,
V short range implies that H and H_0 look alike far outside.

Lemma 12.9. Suppose R_H(z) − R_{H_0}(z) is compact, then so is f(H) − f(H_0)
for any f ∈ C_∞(ℝ) and

    \lim_{r\to\infty} \|(f(H) - f(H_0))\chi_r\| = 0.    (12.48)

Proof. The first part is Lemma 6.20 and the second part follows from part
(ii) of Lemma 6.8 since χ_r converges strongly to 0. □
However, this is clearly not enough to prove asymptotic completeness
and we need a more careful analysis. The main ideas are due to Enß [4].

We begin by showing that the wave operators exist. By Cook's criterion
(Lemma 12.3) we need to show that

    \|V \exp(\mp itH_0)\psi\| \le \|V R_{H_0}(-1)(I - \chi_{2a|t|})\exp(\mp itH_0)(H_0 + I)\psi\|
    + \|V R_{H_0}(-1)\chi_{2a|t|}(H_0 + I)\psi\|    (12.49)

is integrable for a dense set of vectors ψ. The second term is integrable by our
short range assumption. The same is true by Perry's estimate (Lemma 12.5)
for the first term if we choose ψ = f(H_0)P_D((±R, ±∞))ϕ. Since vectors of
this form are dense, we see that the wave operators exist,

    D(\Omega_\pm) = H.    (12.50)
Since H restricted to Ran(Ω_±) is unitarily equivalent to H_0, we obtain
[0, ∞) = σ_ac(H_0) ⊆ σ_ac(H). And by σ_ac(H) ⊆ σ_ess(H) = [0, ∞) we even
have σ_ac(H) = [0, ∞).

To prove asymptotic completeness of the wave operators we will need
that the operators (Ω_± − I)f(H_0)P_± are compact.
Lemma 12.10. Let f ∈ C_c^∞((0, ∞)) and suppose ψ_n converges weakly to 0.
Then

    \lim_{n\to\infty} \|(\Omega_\pm - I)f(H_0)P_\pm\psi_n\| = 0,    (12.51)

that is, (Ω_± − I)f(H_0)P_± is compact.

Proof. By (12.13) we see

    \|R_H(z)(\Omega_\pm - I)f(H_0)P_\pm\psi_n\|
    \le \int_0^\infty \|R_H(z)V\exp(-isH_0)f(H_0)P_\pm\psi_n\|\,ds.    (12.52)

Since R_H(z)V R_{H_0}(−1) is compact we see that the integrand

    R_H(z)V\exp(-isH_0)f(H_0)P_\pm\psi_n
    = R_H(z)V R_{H_0}(-1)\exp(-isH_0)(H_0 + 1)f(H_0)P_\pm\psi_n    (12.53)

converges pointwise to 0. Moreover, arguing as in (12.49), the integrand
is bounded by an L¹ function depending only on ‖ψ_n‖. Thus R_H(z)(Ω_± −
I)f(H_0)P_± is compact by the dominated convergence theorem. Furthermore,
using the intertwining property we see that

    (\Omega_\pm - I)f(H_0)P_\pm
    = R_H(-1)(\Omega_\pm - I)\tilde f(H_0)P_\pm + (R_H(-1) - R_{H_0}(-1))\tilde f(H_0)P_\pm    (12.54)

is compact by Lemma 6.20, where f̃(λ) = (λ + 1)f(λ). □
Now we have gathered enough information to tackle the problem of
asymptotic completeness.

We first show that the singular continuous spectrum is absent. This
is not really necessary, but avoids the use of Cesàro means in our main
argument.

Abbreviate P = P^{sc}_H P_H((a, b)), 0 < a < b. Since H restricted to
Ran(Ω_±) is unitarily equivalent to H_0 (which has purely absolutely continuous
spectrum), the singular part must live on Ran(Ω_±)^⊥, that is, P^{sc}_H Ω_± = 0.
Thus Pf(H_0) = P(I − Ω_+)f(H_0)P_+ + P(I − Ω_−)f(H_0)P_− is compact. Since
f(H) − f(H_0) is compact, it follows that Pf(H) is also compact. Choosing
f such that f(λ) = 1 for λ ∈ [a, b] we see that P = Pf(H) is compact
and hence finite dimensional. In particular σ_sc(H) ∩ (a, b) is a finite
set. But a continuous measure cannot be supported on a finite set,
showing σ_sc(H) ∩ (a, b) = ∅. Since 0 < a < b are arbitrary we even
have σ_sc(H) ∩ (0, ∞) = ∅ and by σ_sc(H) ⊆ σ_ess(H) = [0, ∞) we obtain
σ_sc(H) = ∅.

Observe that replacing P^{sc}_H by P^{pp}_H the same argument shows that all
nonzero eigenvalues have finite multiplicity and cannot accumulate in (0, ∞).

In summary we have shown

Theorem 12.11. Suppose V is short range. Then

    \sigma_{ac}(H) = \sigma_{ess}(H) = [0, \infty), \qquad \sigma_{sc}(H) = \emptyset.    (12.55)

All nonzero eigenvalues have finite multiplicity and cannot accumulate in
(0, ∞).
Now we come to the anticipated asymptotic completeness result of Enß.
Choose

    \psi \in H_c(H) = H_{ac}(H) \quad\text{such that}\quad \psi = f(H)\psi    (12.56)

for some f ∈ C_c^∞((0, ∞)) and abbreviate ψ(t) = exp(−itH)ψ. By the RAGE
theorem the sequence ψ(t) converges weakly to zero as t → ±∞. Introduce

    \varphi_\pm(t) = f(H_0)P_\pm\psi(t),    (12.57)

which satisfy

    \lim_{t\to\pm\infty} \|\psi(t) - \varphi_+(t) - \varphi_-(t)\| = 0.    (12.58)

Indeed this follows from

    \psi(t) = \varphi_+(t) + \varphi_-(t) + (f(H) - f(H_0))\psi(t)    (12.59)

and Lemma 6.20. Moreover, we even have

    \lim_{t\to\pm\infty} \|(\Omega_\pm - I)\varphi_\pm(t)\| = 0    (12.60)
by Lemma 12.10. Now suppose ψ ∈ Ran(Ω_±)^⊥. Then

    \|\psi\|^2 = \lim_{t\to\pm\infty} \langle\psi(t), \psi(t)\rangle
    = \lim_{t\to\pm\infty} \langle\psi(t), \varphi_+(t) + \varphi_-(t)\rangle
    = \lim_{t\to\pm\infty} \langle\psi(t), \Omega_+\varphi_+(t) + \Omega_-\varphi_-(t)\rangle.    (12.61)

By Theorem 12.2, Ran(Ω_±)^⊥ is invariant under H and hence ψ(t) ∈ Ran(Ω_±)^⊥,
implying

    \|\psi\|^2 = \lim_{t\to\pm\infty} \langle\psi(t), \Omega_\mp\varphi_\mp(t)\rangle
    = \lim_{t\to\pm\infty} \langle P_\mp f(H_0)^* \Omega_\mp^* \psi(t), \psi(t)\rangle.    (12.62)

Invoking the intertwining property we see

    \|\psi\|^2 = \lim_{t\to\pm\infty} \langle P_\mp f(H_0)^* e^{-itH_0} \Omega_\mp^* \psi, \psi(t)\rangle = 0    (12.63)

by Corollary 12.6. Hence Ran(Ω_±) = H_ac(H) = H_c(H) and we thus have
shown

Theorem 12.12 (Enß). Suppose V is short range, then the wave operators
are asymptotically complete.

For further results and references see for example [3].
Part 3
Appendix
Appendix A
Almost everything
about Lebesgue
integration
In this appendix I give a brief introduction to measure theory. Good
references are [2] or [18].
A.1. Borel measures in a nutshell

The first step in defining the Lebesgue integral is extending the notion of
size from intervals to arbitrary sets. Unfortunately, this turns out to be too
much, since a classical paradox by Banach and Tarski shows that one can
break the unit ball in ℝ³ into a finite number of (wild – choosing the pieces
uses the Axiom of Choice and cannot be done with a jigsaw ;-) pieces, rotate
and translate them, and reassemble them to get two copies of the unit ball
(compare Problem A.1). Hence any reasonable notion of size (i.e., one which
is translation and rotation invariant) cannot be defined for all sets!

A collection of subsets 𝔄 of a given set X such that

• X ∈ 𝔄,
• 𝔄 is closed under finite unions,
• 𝔄 is closed under complements,

is called an algebra. Note that ∅ ∈ 𝔄 and that, by de Morgan, 𝔄 is also
closed under finite intersections. If an algebra is closed under countable
unions (and hence also countable intersections), it is called a σ-algebra.
Moreover, the intersection of any family of (σ-)algebras {𝔄_α} is again
a (σ-)algebra, and for any collection S of subsets there is a unique smallest
(σ-)algebra Σ(S) containing S (namely the intersection of all (σ-)algebras
containing S). It is called the (σ-)algebra generated by S.

If X is a topological space, the Borel σ-algebra of X is defined to be
the σ-algebra generated by all open (respectively all closed) sets. Sets in the
Borel σ-algebra are called Borel sets.

Example. In the case X = ℝ^n the Borel σ-algebra will be denoted by B^n
and we will abbreviate B = B¹.
Now let us turn to the definition of a measure: A set X together with a σ-algebra
Σ is called a measure space. A measure µ is a map µ : Σ → [0, ∞]
on a σ-algebra Σ such that

• µ(∅) = 0,
• µ(⋃_{j=1}^∞ A_j) = Σ_{j=1}^∞ µ(A_j) if A_j ∩ A_k = ∅ for all j ≠ k (σ-additivity).

It is called σ-finite if there is a countable cover {X_j}_{j=1}^∞ of X with µ(X_j) <
∞ for all j. (Note that it is no restriction to assume X_j ↗ X.) It is called
finite if µ(X) < ∞. The sets in Σ are called measurable sets.

If we replace the σ-algebra by an algebra 𝔄, then µ is called a premeasure.
In this case σ-additivity clearly only needs to hold for disjoint sets
A_n for which ⋃_n A_n ∈ 𝔄.
We will write A_n ↗ A if A_n ⊆ A_{n+1} (note A = ⋃_n A_n) and A_n ↘ A if
A_{n+1} ⊆ A_n (note A = ⋂_n A_n).
Theorem A.1. Any measure µ satisfies the following properties:

(i) A ⊆ B implies µ(A) ≤ µ(B) (monotonicity).
(ii) µ(A_n) → µ(A) if A_n ↗ A (continuity from below).
(iii) µ(A_n) → µ(A) if A_n ↘ A and µ(A_1) < ∞ (continuity from above).

Proof. The first claim is obvious. The second follows using Ã_n = A_n \ A_{n−1}
and σ-additivity. The third follows from the second using Ã_n = A_1 \ A_n and
µ(Ã_n) = µ(A_1) − µ(A_n). □
Example. Let A ∈ P(X) and set µ(A) to be the number of elements of A
(respectively ∞ if A is infinite). This is the so-called counting measure.
Note that if X = ℕ and A_n = {j ∈ ℕ | j ≥ n}, then µ(A_n) = ∞, but
µ(⋂_n A_n) = µ(∅) = 0, which shows that the requirement µ(A_1) < ∞ in the
last claim of Theorem A.1 is not superfluous.
A measure on the Borel σ-algebra is called a Borel measure if µ(C) <
∞ for any compact set C. A Borel measure is called outer regular if

    \mu(A) = \inf_{A \subseteq O,\ O\ \text{open}} \mu(O)    (A.1)

and inner regular if

    \mu(A) = \sup_{C \subseteq A,\ C\ \text{compact}} \mu(C).    (A.2)

It is called regular if it is both outer and inner regular.

But how can we obtain some more interesting Borel measures? We will
restrict ourselves to the case of X = ℝ for simplicity. Then the strategy
is as follows: Start with the algebra of finite unions of disjoint intervals
and define µ for those sets (as the sum over the intervals). This yields a
premeasure. Extend this to an outer measure for all subsets of ℝ. Show
that the restriction to the Borel sets is a measure.
Let us first show how we should define µ for intervals: To every Borel
measure on B we can assign its distribution function

    \mu(x) = \begin{cases} -\mu((x,0]), & x < 0 \\ 0, & x = 0 \\ \mu((0,x]), & x > 0 \end{cases}    (A.3)

which is right continuous and nondecreasing. Conversely, given a right
continuous nondecreasing function µ : ℝ → ℝ we can set

    \mu(A) = \begin{cases}
    \mu(b) - \mu(a), & A = (a,b] \\
    \mu(b) - \mu(a-), & A = [a,b] \\
    \mu(b-) - \mu(a), & A = (a,b) \\
    \mu(b-) - \mu(a-), & A = [a,b)
    \end{cases}    (A.4)

where µ(a−) = lim_{ε↓0} µ(a − ε). In particular, this gives a premeasure on the
algebra of finite unions of intervals which can be extended to a measure:

Theorem A.2. For every right continuous nondecreasing function µ : ℝ → ℝ
there exists a unique regular Borel measure µ which extends (A.4). Two
different functions generate the same measure if and only if they differ by a
constant.

Since the proof of this theorem is rather involved, we defer it to the next
section and look at some examples first.
Example. Suppose Θ(x) = 0 for x < 0 and Θ(x) = 1 for x ≥ 0. Then we
obtain the so-called Dirac measure at 0, which is given by Θ(A) = 1 if
0 ∈ A and Θ(A) = 0 if 0 ∉ A.

Example. Suppose λ(x) = x, then the associated measure is the ordinary
Lebesgue measure on ℝ. We will abbreviate the Lebesgue measure of a
Borel set A by λ(A) = |A|.
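A minimal sketch of (A.4) in code: given a right continuous nondecreasing distribution function, compute the measure of each interval type. The left limit µ(a−) is crudely approximated by evaluating slightly to the left, which suffices for the two examples above; this is an illustration, not a general-purpose implementation.

```python
# Interval measures from a distribution function, following (A.4).
EPS = 1e-9

def interval_measure(mu, a, b, kind):
    left = lambda x: mu(x - EPS)     # crude stand-in for the left limit mu(x-)
    if kind == "(]":
        return mu(b) - mu(a)
    if kind == "[]":
        return mu(b) - left(a)
    if kind == "()":
        return left(b) - mu(a)
    if kind == "[)":
        return left(b) - left(a)
    raise ValueError(kind)

theta = lambda x: 1.0 if x >= 0 else 0.0  # distribution function of the Dirac measure at 0
ident = lambda x: x                       # distribution function of Lebesgue measure

print(interval_measure(theta, -1, 1, "()"))  # 1.0: the open interval contains 0
print(interval_measure(theta, 0, 1, "()"))   # 0.0: this one does not
print(interval_measure(ident, 2, 5, "(]"))   # 3.0
```

Note how the point mass at 0 is picked up exactly when the interval contains 0, mirroring the Dirac example.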
It can be shown that Borel measures on a separable metric space are
always regular.

A set A ∈ Σ is called a support for µ if µ(X \ A) = 0. A property is
said to hold µ-almost everywhere (a.e.) if it holds on a support for
µ or, equivalently, if the set where it does not hold is contained in a set of
measure zero.

Example. The set of rational numbers has Lebesgue measure zero: λ(ℚ) =
0. In fact, any single point has Lebesgue measure zero, and so has any
countable union of points (by countable additivity).
Example. The Cantor set is an example of a closed uncountable set of
Lebesgue measure zero. It is constructed as follows: Start with C_0 = [0, 1]
and remove the middle third to obtain C_1 = [0, 1/3] ∪ [2/3, 1]. Next, again remove
the middle thirds of the remaining sets to obtain C_2 = [0, 1/9] ∪ [2/9, 1/3] ∪ [2/3, 7/9] ∪
[8/9, 1].

(Figure: the first four steps C_0, C_1, C_2, C_3 of the construction.)

Proceeding like this we obtain a sequence of nesting sets C_n and the limit
C = ⋂_n C_n is the Cantor set. Since C_n is compact, so is C. Moreover,
C_n consists of 2^n intervals of length 3^{−n}, and thus its Lebesgue measure
is λ(C_n) = (2/3)^n. In particular, λ(C) = lim_{n→∞} λ(C_n) = 0. Using the
ternary expansion it is extremely simple to describe: C is the set of all
x ∈ [0, 1] whose ternary expansion contains no ones, which shows that C is
uncountable (why?). It has some further interesting properties: it is totally
disconnected (i.e., it contains no subintervals) and perfect (it has no isolated
points).
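The computation λ(C_n) = (2/3)^n can be replayed in exact rational arithmetic (a small illustration, not part of the text):

```python
# Remove middle thirds repeatedly and track the total length:
# after n steps there are 2^n intervals of total length (2/3)^n.
from fractions import Fraction

def remove_middle_thirds(intervals):
    out = []
    for a, b in intervals:
        third = (b - a) / 3
        out.append((a, a + third))
        out.append((b - third, b))
    return out

C = [(Fraction(0), Fraction(1))]
for n in range(1, 5):
    C = remove_middle_thirds(C)
    total = sum(b - a for a, b in C)
    print(n, len(C), total)   # 2^n intervals, total length (2/3)^n
```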
Problem A.1 (Vitali set). Call two numbers x, y ∈ [0, 1) equivalent if x−y
is rational. Construct the set V by choosing one representative from each
equivalence class. Show that V cannot be measurable with respect to any
ﬁnite translation invariant measure on [0, 1). (Hint: How can you build up
[0, 1) from V ?)
A.2. Extending a premeasure to a measure

The purpose of this section is to prove Theorem A.2. It is rather technical and
should be skipped on first reading.

In order to prove Theorem A.2 we need to show how a premeasure can
be extended to a measure. As a prerequisite we first establish that it suffices
to check increasing (or decreasing) sequences of sets when checking whether
a given algebra is in fact a σ-algebra:

A collection of sets 𝔐 is called a monotone class if A_n ↗ A implies
A ∈ 𝔐 whenever A_n ∈ 𝔐, and A_n ↘ A implies A ∈ 𝔐 whenever A_n ∈ 𝔐.
Every σ-algebra is a monotone class and the intersection of monotone classes
is a monotone class. Hence every collection of sets S generates a smallest
monotone class 𝔐(S).
Theorem A.3. Let 𝔄 be an algebra. Then 𝔐(𝔄) = Σ(𝔄).

Proof. We first show that 𝔐 = 𝔐(𝔄) is an algebra.

Put M(A) = {B ∈ 𝔐 | A ∪ B ∈ 𝔐}. If B_n is an increasing sequence
of sets in M(A) then A ∪ B_n is an increasing sequence in 𝔐 and hence
⋃_n (A ∪ B_n) ∈ 𝔐. Now

    A \cup \Bigl(\bigcup_n B_n\Bigr) = \bigcup_n (A \cup B_n)    (A.5)

shows that M(A) is closed under increasing sequences. Similarly, M(A) is
closed under decreasing sequences and hence it is a monotone class. But
does it contain any elements? Well, if A ∈ 𝔄 we have 𝔄 ⊆ M(A) implying
M(A) = 𝔐. Hence A ∪ B ∈ 𝔐 if at least one of the sets is in 𝔄. But this
shows 𝔄 ⊆ M(A) and hence M(A) = 𝔐 for any A ∈ 𝔐. So 𝔐 is closed
under finite unions.

To show that we are closed under complements consider M = {A ∈
𝔐 | X \ A ∈ 𝔐}. If A_n is an increasing sequence then X \ A_n is a decreasing
sequence and X \ ⋃_n A_n = ⋂_n X \ A_n ∈ 𝔐 if A_n ∈ M. Similarly for decreasing
sequences. Hence M is a monotone class and must be equal to 𝔐 since
it contains 𝔄.

So we know that 𝔐 is an algebra. To show that it is a σ-algebra let
A_n be given and put Ã_n = ⋃_{k≤n} A_k. Then Ã_n is increasing and ⋃_n Ã_n =
⋃_n A_n ∈ 𝔐. □

The typical use of this theorem is as follows: First verify some property
for sets in an algebra 𝔄. In order to show that it holds for any set in Σ(𝔄),
it suffices to show that the collection of sets for which it holds is closed under
countable increasing and decreasing sequences (i.e., is a monotone class).
Now we start by proving that (A.4) indeed gives rise to a premeasure.
Lemma A.4. µ as defined in (A.4) gives rise to a unique σ-finite regular
premeasure on the algebra 𝔄 of finite unions of disjoint intervals.

Proof. First of all, (A.4) can be extended to finite unions of disjoint intervals
by summing over all intervals. It is straightforward to verify that µ is
well defined (one set can be represented by different unions of intervals) and
by construction additive.

To show regularity, we can assume any such union to consist of open
intervals and points only. To show outer regularity replace each point {x}
by a small open interval (x − ε, x + ε) and use that µ({x}) = lim_{ε↓0} µ(x + ε) −
µ(x − ε). Similarly, to show inner regularity, replace each open interval (a, b)
by a compact one [a_n, b_n] ⊆ (a, b) and use µ((a, b)) = lim_{n→∞} µ(b_n) − µ(a_n)
if a_n ↓ a and b_n ↑ b.

It remains to verify σ-additivity. We need to show

    \mu\Bigl(\bigcup_k I_k\Bigr) = \sum_k \mu(I_k)    (A.6)

whenever I_k ∈ 𝔄 and I = ⋃_k I_k ∈ 𝔄. Since each I_k is a finite union of intervals,
we can as well assume each I_k is just one interval (just split I_k into
its subintervals and note that the sum does not change by additivity). Similarly,
we can assume that I is just one interval (just treat each subinterval
separately).

By additivity µ is monotone and hence

    \sum_{k=1}^n \mu(I_k) = \mu\Bigl(\bigcup_{k=1}^n I_k\Bigr) \le \mu(I),    (A.7)

which shows

    \sum_{k=1}^\infty \mu(I_k) \le \mu(I).    (A.8)

To get the converse inequality we need to work harder.

By outer regularity we can cover each I_k by an open interval J_k such that
µ(J_k) ≤ µ(I_k) + ε/2^k. Suppose I is compact first. Then finitely many of the
J_k, say the first n, cover I and we have

    \mu(I) \le \mu\Bigl(\bigcup_{k=1}^n J_k\Bigr) \le \sum_{k=1}^n \mu(J_k)
    \le \sum_{k=1}^\infty \mu(I_k) + \varepsilon.    (A.9)

Since ε > 0 is arbitrary, this shows σ-additivity for compact intervals. By
additivity we can always add/subtract the end points of I and hence σ-additivity
holds for any bounded interval. If I is unbounded, say I = [a, ∞),
then given x > a we can find an n such that the J_k, k ≤ n, cover at least [a, x]
and hence

    \sum_{k=1}^n \mu(I_k) \ge \sum_{k=1}^n \mu(J_k) - \varepsilon \ge \mu([a, x]) - \varepsilon.    (A.10)

Since x > a and ε > 0 are arbitrary we are done. □
This premeasure determines the corresponding measure µ uniquely (if
there is one at all):

Theorem A.5 (Uniqueness of measures). Let µ be a σ-finite premeasure
on an algebra 𝔄. Then there is at most one extension to Σ(𝔄).

Proof. We first assume that µ(X) < ∞. Suppose there is another extension
µ̃ and consider the set

    S = \{A \in \Sigma(\mathfrak{A}) \mid \mu(A) = \tilde\mu(A)\}.    (A.11)

I claim S is a monotone class and hence S = Σ(𝔄) since 𝔄 ⊆ S by assumption
(Theorem A.3).

Let A_n ↗ A. If A_n ∈ S we have µ(A_n) = µ̃(A_n) and taking limits
(Theorem A.1 (ii)) we conclude µ(A) = µ̃(A). Next let A_n ↘ A and take
again limits. This finishes the finite case. To extend our result to the σ-finite
case let X_j ↗ X be an increasing sequence such that µ(X_j) < ∞. By the
finite case µ(A ∩ X_j) = µ̃(A ∩ X_j) (just restrict µ, µ̃ to X_j). Hence

    \mu(A) = \lim_{j\to\infty} \mu(A \cap X_j) = \lim_{j\to\infty} \tilde\mu(A \cap X_j) = \tilde\mu(A)    (A.12)

and we are done. □
Note that if our premeasure is regular, so will be the extension:

Lemma A.6. Suppose µ is a σ-finite premeasure on some algebra 𝔄 generating
the Borel sets B. Then outer (inner) regularity holds for all Borel
sets if it holds for all sets in 𝔄.

Proof. We first assume that µ(X) < ∞. Set

    \mu^\circ(A) = \inf_{A \subseteq O,\ O\ \text{open}} \mu(O) \ge \mu(A)    (A.13)

and let M = {A ∈ B | µ°(A) = µ(A)}. Since by assumption M contains
some algebra generating B it suffices to prove that M is a monotone class.
Let A_n ∈ M be a monotone sequence and let O_n ⊇ A_n be open sets such
that µ(O_n) ≤ µ(A_n) + 1/n. Then

    \mu(A_n) \le \mu^\circ(A_n) \le \mu(O_n) \le \mu(A_n) + \frac{1}{n}.    (A.14)

Now if A_n ↗ A just take limits and use continuity from below of µ. Similarly
if A_n ↘ A.
Now let µ be arbitrary. Given A we can split it into disjoint sets A_j
such that A_j ⊆ X_j (A_1 = A ∩ X_1, A_2 = (A \ A_1) ∩ X_2, etc.). Let X_j be a
cover with µ(X_j) < ∞. By regularity, we can assume X_j open. Thus there
are open (in X) sets O_j covering A_j such that µ(O_j) ≤ µ(A_j) + ε/2^j. Then
O = ⋃_j O_j is open, covers A, and satisfies

    \mu(A) \le \mu(O) \le \sum_j \mu(O_j) \le \mu(A) + \varepsilon.    (A.15)

This settles outer regularity.

Next let us turn to inner regularity. If µ(X) < ∞ one can show as before
that M = {A ∈ B | µ°(A) = µ(A)}, where

    \mu^\circ(A) = \sup_{C \subseteq A,\ C\ \text{compact}} \mu(C) \le \mu(A),    (A.16)

is a monotone class. This settles the finite case.

For the σ-finite case split A again as before. Since X_j has finite measure,
there are compact subsets K_j of A_j such that µ(A_j) ≤ µ(K_j) + ε/2^j. Now
we need to distinguish two cases: If µ(A) = ∞, the sum Σ_j µ(A_j) will
diverge and so will Σ_j µ(K_j). Hence K̃_n = ⋃_{j=1}^n K_j ⊆ A is compact with
µ(K̃_n) → ∞ = µ(A). If µ(A) < ∞, the sum Σ_j µ(A_j) will converge and
choosing n sufficiently large we will have

    \mu(\tilde K_n) \le \mu(A) \le \mu(\tilde K_n) + 2\varepsilon.    (A.17)

This finishes the proof. □
So it remains to ensure that there is an extension at all. For any premeasure $\mu$ we define
$$\mu^*(A) = \inf\Big\{ \sum_{n=1}^\infty \mu(A_n) \,\Big|\, A \subseteq \bigcup_{n=1}^\infty A_n,\ A_n \in \mathcal{A} \Big\}, \tag{A.18}$$
where the infimum extends over all countable covers from $\mathcal{A}$. Then the function $\mu^* : \mathfrak{P}(X) \to [0, \infty]$ is an outer measure, that is, it has the properties (Problem A.2)

• $\mu^*(\emptyset) = 0$,
• $A_1 \subseteq A_2 \Rightarrow \mu^*(A_1) \le \mu^*(A_2)$, and
• $\mu^*(\bigcup_{n=1}^\infty A_n) \le \sum_{n=1}^\infty \mu^*(A_n)$ (subadditivity).

Note that $\mu^*(A) = \mu(A)$ for $A \in \mathcal{A}$ (Problem A.3).
Theorem A.7 (Extensions via outer measures). Let $\mu^*$ be an outer measure. Then the set $\Sigma$ of all sets $A$ satisfying the Carathéodory condition
$$\mu^*(E) = \mu^*(A \cap E) + \mu^*(A' \cap E) \qquad \forall E \subseteq X \tag{A.19}$$
(where $A' = X \setminus A$ is the complement of $A$) forms a σ-algebra, and $\mu^*$ restricted to this σ-algebra is a measure.
Proof. We first show that $\Sigma$ is an algebra. It clearly contains $X$ and is closed under complements. Let $A, B \in \Sigma$. Applying Carathéodory's condition twice finally shows
$$\mu^*(E) = \mu^*(A \cap B \cap E) + \mu^*(A' \cap B \cap E) + \mu^*(A \cap B' \cap E) + \mu^*(A' \cap B' \cap E) \ge \mu^*((A \cup B) \cap E) + \mu^*((A \cup B)' \cap E), \tag{A.20}$$
where we have used De Morgan's laws and
$$\mu^*(A \cap B \cap E) + \mu^*(A' \cap B \cap E) + \mu^*(A \cap B' \cap E) \ge \mu^*((A \cup B) \cap E), \tag{A.21}$$
which follows from subadditivity and $(A \cup B) \cap E = (A \cap B \cap E) \cup (A' \cap B \cap E) \cup (A \cap B' \cap E)$. Since the reverse inequality is just subadditivity, we conclude that $\Sigma$ is an algebra.
Next, let $A_n$ be a sequence of sets from $\Sigma$. Without restriction we can assume that they are disjoint (compare the last argument in the proof of Theorem A.3). Abbreviate $\tilde A_n = \bigcup_{k \le n} A_k$, $A = \bigcup_n A_n$. Then for any set $E$ we have
$$\mu^*(\tilde A_n \cap E) = \mu^*(A_n \cap \tilde A_n \cap E) + \mu^*(A_n' \cap \tilde A_n \cap E) = \mu^*(A_n \cap E) + \mu^*(\tilde A_{n-1} \cap E) = \dots = \sum_{k=1}^n \mu^*(A_k \cap E). \tag{A.22}$$
Using $\tilde A_n \in \Sigma$ and monotonicity of $\mu^*$, we infer
$$\mu^*(E) = \mu^*(\tilde A_n \cap E) + \mu^*(\tilde A_n' \cap E) \ge \sum_{k=1}^n \mu^*(A_k \cap E) + \mu^*(A' \cap E). \tag{A.23}$$
Letting $n \to \infty$ and using subadditivity finally gives
$$\mu^*(E) \ge \sum_{k=1}^\infty \mu^*(A_k \cap E) + \mu^*(A' \cap E) \ge \mu^*(A \cap E) + \mu^*(A' \cap E) \ge \mu^*(E), \tag{A.24}$$
and we infer that $\Sigma$ is a σ-algebra.

Finally, setting $E = A$ in (A.24) we have
$$\mu^*(A) = \sum_{k=1}^\infty \mu^*(A_k \cap A) + \mu^*(A' \cap A) = \sum_{k=1}^\infty \mu^*(A_k) \tag{A.25}$$
and we are done.
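The Carathéodory condition (A.19) can be probed directly in a finite toy model. The following sketch (the underlying set, blocks, and weights are made up for illustration) builds $\mu^*$ from a premeasure on the algebra generated by two blocks of a four-point set via (A.18) and then checks which subsets satisfy (A.19); only the unions of the two blocks survive.

```python
from itertools import chain, combinations

# Toy model: X = {0,1,2,3}, premeasure mu({0,1}) = 1, mu({2,3}) = 2 on the
# algebra generated by these two blocks (all values here are made up).
X = frozenset(range(4))
algebra = {frozenset(): 0, frozenset({0, 1}): 1, frozenset({2, 3}): 2, X: 3}

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def mu_star(B):
    # (A.18): infimum of sum(mu(A_n)) over all covers of B from the algebra
    best = float("inf")
    for cover in powerset(algebra):
        union = frozenset().union(*cover) if cover else frozenset()
        if B <= union:
            best = min(best, sum(algebra[a] for a in cover))
    return best

subsets = [frozenset(s) for s in powerset(X)]

def caratheodory(A):
    # (A.19): mu*(E) = mu*(A cap E) + mu*(A' cap E) for every E
    return all(mu_star(E) == mu_star(A & E) + mu_star((X - A) & E)
               for E in subsets)

sigma = [A for A in subsets if caratheodory(A)]
# sigma contains exactly the four unions of blocks; a single point such as {0}
# fails, since mu* cannot separate it from the block {0,1}.
```

The example also shows why Theorem A.7 is sharp: $\Sigma$ need not be all of $\mathfrak{P}(X)$, only a σ-algebra on which $\mu^*$ is additive.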
Remark: The constructed measure $\mu$ is complete, that is, for any measurable set $A$ of measure zero, any subset of $A$ is again measurable (Problem A.4).

The only remaining question is whether there are any nontrivial sets satisfying the Carathéodory condition.
Lemma A.8. Let $\mu$ be a premeasure on $\mathcal{A}$ and let $\mu^*$ be the associated outer measure. Then every set in $\mathcal{A}$ satisfies the Carathéodory condition.

Proof. Let $A_n \in \mathcal{A}$ be a countable cover for $E$. Then for any $A \in \mathcal{A}$ we have
$$\sum_{n=1}^\infty \mu(A_n) = \sum_{n=1}^\infty \mu(A_n \cap A) + \sum_{n=1}^\infty \mu(A_n \cap A') \ge \mu^*(E \cap A) + \mu^*(E \cap A'), \tag{A.26}$$
since $A_n \cap A \in \mathcal{A}$ is a cover for $E \cap A$ and $A_n \cap A' \in \mathcal{A}$ is a cover for $E \cap A'$. Taking the infimum we have $\mu^*(E) \ge \mu^*(E \cap A) + \mu^*(E \cap A')$, which finishes the proof.
Thus, as a consequence we obtain Theorem A.2.

Problem A.2. Show that $\mu^*$ defined in (A.18) is an outer measure. (Hint for the last property: Take a cover $\{B_{nk}\}_{k=1}^\infty$ for $A_n$ such that $\sum_{k=1}^\infty \mu(B_{nk}) \le \mu^*(A_n) + \frac{\varepsilon}{2^n}$ and note that $\{B_{nk}\}_{n,k=1}^\infty$ is a cover for $\bigcup_n A_n$.)

Problem A.3. Show that $\mu^*$ defined in (A.18) extends $\mu$. (Hint: For the cover $A_n$ it is no restriction to assume $A_n \cap A_m = \emptyset$ and $A_n \subseteq A$.)

Problem A.4. Show that the measure constructed in Theorem A.7 is complete.
A.3. Measurable functions

The Riemann integral works by splitting the x coordinate into small intervals and approximating $f(x)$ on each interval by its minimum and maximum. The problem with this approach is that the difference between maximum and minimum will only tend to zero (as the intervals get smaller) if $f(x)$ is sufficiently nice. To avoid this problem we can force the difference to go to zero by considering, instead of an interval, the set of $x$ for which $f(x)$ lies between two given numbers $a < b$. Now we need the size of the set of these $x$, that is, the size of the preimage $f^{-1}((a, b))$. For this to work, preimages of intervals must be measurable.
A function $f : X \to \mathbb{R}^n$ is called measurable if $f^{-1}(A) \in \Sigma$ for every $A \in \mathfrak{B}^n$. A complex-valued function is called measurable if both its real and imaginary parts are. Clearly it suffices to check this condition for every set $A$ in a collection of sets which generates $\mathfrak{B}^n$, say for all open intervals or even for open intervals which are products of $(a, \infty)$:
Lemma A.9. A function $f : X \to \mathbb{R}^n$ is measurable if and only if
$$f^{-1}(I) \in \Sigma \qquad \forall\, I = \prod_{j=1}^n (a_j, \infty). \tag{A.27}$$
In particular, a function $f : X \to \mathbb{R}^n$ is measurable if and only if every component is measurable.

Proof. All you have to use is $f^{-1}(\mathbb{R}^n \setminus A) = X \setminus f^{-1}(A)$, $f^{-1}(\bigcup_j A_j) = \bigcup_j f^{-1}(A_j)$, and the fact that any open set is a countable union of open intervals.
If $\Sigma$ is the Borel σ-algebra, we will also call a measurable function a Borel function. Note that, in particular,

Lemma A.10. Any continuous function is measurable, and the composition of two measurable functions is again measurable.

Moreover, sometimes it is also convenient to allow $\pm\infty$ as possible values for $f$, that is, functions $f : X \to \overline{\mathbb{R}}$, $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, \infty\}$. In this case $A \subseteq \overline{\mathbb{R}}$ is called Borel if $A \cap \mathbb{R}$ is.
The set of all measurable functions forms an algebra.

Lemma A.11. Suppose $f, g : X \to \mathbb{R}$ are measurable functions. Then the sum $f + g$ and the product $fg$ are measurable.

Proof. Note that addition and multiplication are continuous functions from $\mathbb{R}^2 \to \mathbb{R}$ and hence the claim follows from the previous lemma.
Moreover, the set of all measurable functions is closed under all important limiting operations.

Lemma A.12. Suppose $f_n : X \to \overline{\mathbb{R}}$ is a sequence of measurable functions. Then
$$\inf_{n \in \mathbb{N}} f_n, \quad \sup_{n \in \mathbb{N}} f_n, \quad \liminf_{n \to \infty} f_n, \quad \limsup_{n \to \infty} f_n \tag{A.28}$$
are measurable as well.

Proof. It suffices to prove that $\sup f_n$ is measurable, since the rest follows from $\inf f_n = -\sup(-f_n)$, $\liminf f_n = \sup_k \inf_{n \ge k} f_n$, and $\limsup f_n = \inf_k \sup_{n \ge k} f_n$. But $(\sup f_n)^{-1}((a, \infty)) = \bigcup_n f_n^{-1}((a, \infty))$ and we are done.

It follows that if $f$ and $g$ are measurable functions, so are $\min(f, g)$, $\max(f, g)$, $|f| = \max(f, -f)$, and $f^\pm = \max(\pm f, 0)$.
A.4. The Lebesgue integral

Now we can define the integral for measurable functions as follows. A measurable function $s : X \to \mathbb{R}$ is called simple if its range is finite, $s(X) = \{\alpha_j\}_{j=1}^p$, that is, if
$$s = \sum_{j=1}^p \alpha_j \chi_{A_j}, \qquad A_j = s^{-1}(\alpha_j) \in \Sigma. \tag{A.29}$$
Here $\chi_A$ is the characteristic function of $A$, that is, $\chi_A(x) = 1$ if $x \in A$ and $\chi_A(x) = 0$ else.

For a positive simple function we define its integral as
$$\int_A s\, d\mu = \sum_{j=1}^p \alpha_j\, \mu(A_j \cap A). \tag{A.30}$$
Here we use the convention $0 \cdot \infty = 0$.
Lemma A.13. The integral has the following properties:

(i) $\int_A s\, d\mu = \int_X \chi_A\, s\, d\mu$.
(ii) $\int_{\bigcup_{j=1}^\infty A_j} s\, d\mu = \sum_{j=1}^\infty \int_{A_j} s\, d\mu$.
(iii) $\int_A \alpha s\, d\mu = \alpha \int_A s\, d\mu$.
(iv) $\int_A (s + t)\, d\mu = \int_A s\, d\mu + \int_A t\, d\mu$.
(v) $A \subseteq B \Rightarrow \int_A s\, d\mu \le \int_B s\, d\mu$.
(vi) $s \le t \Rightarrow \int_A s\, d\mu \le \int_A t\, d\mu$.
Proof. (i) is clear from the definition. (ii) follows from σ-additivity of $\mu$. (iii) is obvious. (iv) Let $s = \sum_j \alpha_j \chi_{A_j}$, $t = \sum_j \beta_j \chi_{B_j}$ and abbreviate $C_{jk} = (A_j \cap B_k) \cap A$. Then
$$\int_A (s + t)\, d\mu = \sum_{j,k} \int_{C_{jk}} (s + t)\, d\mu = \sum_{j,k} (\alpha_j + \beta_k)\, \mu(C_{jk}) = \sum_{j,k} \Big( \int_{C_{jk}} s\, d\mu + \int_{C_{jk}} t\, d\mu \Big) = \int_A s\, d\mu + \int_A t\, d\mu. \tag{A.31}$$
(v) follows from monotonicity of $\mu$. (vi) follows using $t - s \ge 0$ and arguing as in (iii).
Our next task is to extend this definition to arbitrary positive functions by
$$\int_A f\, d\mu = \sup_{s \le f} \int_A s\, d\mu, \tag{A.32}$$
where the supremum is taken over all simple functions $s \le f$. Note that, except for possibly (ii) and (iv), Lemma A.13 still holds for this extension.
Theorem A.14 (Monotone convergence). Let $f_n$ be a monotone nondecreasing sequence of positive measurable functions, $f_n \nearrow f$. Then
$$\int_A f_n\, d\mu \to \int_A f\, d\mu. \tag{A.33}$$

Proof. By property (v), $\int_A f_n\, d\mu$ is monotone and converges to some number $\alpha$. By $f_n \le f$ and again (v) we have
$$\alpha \le \int_A f\, d\mu. \tag{A.34}$$
To show the converse, let $s$ be simple such that $s \le f$ and let $\theta \in (0, 1)$. Put $A_n = \{x \in A \mid f_n(x) \ge \theta s(x)\}$ and note $A_n \nearrow A$ (show this). Then
$$\int_A f_n\, d\mu \ge \int_{A_n} f_n\, d\mu \ge \theta \int_{A_n} s\, d\mu. \tag{A.35}$$
Letting $n \to \infty$ we see
$$\alpha \ge \theta \int_A s\, d\mu. \tag{A.36}$$
Since this is valid for any $\theta < 1$, it still holds for $\theta = 1$. Finally, since $s \le f$ is arbitrary, the claim follows.
In particular,
$$\int_A f\, d\mu = \lim_{n \to \infty} \int_A s_n\, d\mu \tag{A.37}$$
for any monotone sequence $s_n \nearrow f$ of simple functions. Note that there is always such a sequence, for example,
$$s_n(x) = \sum_{k=0}^{n 2^n} \frac{k}{2^n}\, \chi_{f^{-1}(A_k)}(x), \qquad A_k = \Big[\frac{k}{2^n}, \frac{k+1}{2^n}\Big), \quad A_{n 2^n} = [n, \infty). \tag{A.38}$$
By construction $s_n$ converges uniformly if $f$ is bounded, since $s_n(x) = n$ if $f(x) = \infty$ and $f(x) - s_n(x) < \frac{1}{2^n}$ if $f(x) < n$.
Now what about the missing items (ii) and (iv) from Lemma A.13? Since
limits can be spread over sums, the extension is linear (i.e., item (iv) holds)
and (ii) also follows directly from the monotone convergence theorem. We
even have the following result:
Lemma A.15. If $f \ge 0$ is measurable, then $d\nu = f\, d\mu$ defined via
$$\nu(A) = \int_A f\, d\mu \tag{A.39}$$
is a measure such that
$$\int g\, d\nu = \int g f\, d\mu. \tag{A.40}$$

Proof. As already mentioned, additivity of $\nu$ is equivalent to linearity of the integral, and σ-additivity follows from the monotone convergence theorem:
$$\nu\Big(\bigcup_{n=1}^\infty A_n\Big) = \int \Big(\sum_{n=1}^\infty \chi_{A_n}\Big) f\, d\mu = \sum_{n=1}^\infty \int \chi_{A_n} f\, d\mu = \sum_{n=1}^\infty \nu(A_n). \tag{A.41}$$
The second claim holds for simple functions and hence for all functions by construction of the integral.
If $f_n$ is not necessarily monotone, we have at least

Theorem A.16 (Fatou's lemma). If $f_n$ is a sequence of nonnegative measurable functions, then
$$\int_A \liminf_{n \to \infty} f_n\, d\mu \le \liminf_{n \to \infty} \int_A f_n\, d\mu. \tag{A.42}$$

Proof. Set $g_n = \inf_{k \ge n} f_k$. Then $g_n \le f_n$, implying
$$\int_A g_n\, d\mu \le \int_A f_n\, d\mu. \tag{A.43}$$
Now take the liminf on both sides and note that by the monotone convergence theorem
$$\liminf_{n \to \infty} \int_A g_n\, d\mu = \lim_{n \to \infty} \int_A g_n\, d\mu = \int_A \lim_{n \to \infty} g_n\, d\mu = \int_A \liminf_{n \to \infty} f_n\, d\mu, \tag{A.44}$$
proving the claim.
If the integral is finite for both the positive and negative part $f^\pm$ of an arbitrary measurable function $f$, we call $f$ integrable and set
$$\int_A f\, d\mu = \int_A f^+\, d\mu - \int_A f^-\, d\mu. \tag{A.45}$$
The set of all integrable functions is denoted by $L^1(X, d\mu)$.

Lemma A.17. Lemma A.13 holds for integrable functions $s, t$.

Similarly, we handle the case where $f$ is complex-valued by calling $f$ integrable if both the real and imaginary part are and setting
$$\int_A f\, d\mu = \int_A \mathrm{Re}(f)\, d\mu + \mathrm{i} \int_A \mathrm{Im}(f)\, d\mu. \tag{A.46}$$
Clearly $f$ is integrable if and only if $|f|$ is.
Lemma A.18. For any integrable functions $f, g$ we have
$$\Big| \int_A f\, d\mu \Big| \le \int_A |f|\, d\mu \tag{A.47}$$
and (triangle inequality)
$$\int_A |f + g|\, d\mu \le \int_A |f|\, d\mu + \int_A |g|\, d\mu. \tag{A.48}$$

Proof. Put $\alpha = \frac{z^*}{|z|}$, where $z = \int_A f\, d\mu$ (without restriction $z \ne 0$). Then
$$\Big| \int_A f\, d\mu \Big| = \alpha \int_A f\, d\mu = \int_A \alpha f\, d\mu = \int_A \mathrm{Re}(\alpha f)\, d\mu \le \int_A |f|\, d\mu, \tag{A.49}$$
proving the first claim. The second follows from $|f + g| \le |f| + |g|$.
In addition, our integral is well behaved with respect to limiting operations.

Theorem A.19 (Dominated convergence). Let $f_n$ be a convergent sequence of measurable functions and set $f = \lim_{n \to \infty} f_n$. Suppose there is an integrable function $g$ such that $|f_n| \le g$. Then $f$ is integrable and
$$\lim_{n \to \infty} \int f_n\, d\mu = \int f\, d\mu. \tag{A.50}$$

Proof. The real and imaginary parts satisfy the same assumptions and so do the positive and negative parts. Hence it suffices to prove the case where $f_n$ and $f$ are nonnegative.

By Fatou's lemma
$$\liminf_{n \to \infty} \int_A f_n\, d\mu \ge \int_A f\, d\mu \tag{A.51}$$
and
$$\liminf_{n \to \infty} \int_A (g - f_n)\, d\mu \ge \int_A (g - f)\, d\mu. \tag{A.52}$$
Subtracting $\int_A g\, d\mu$ on both sides of the last inequality finishes the proof since $\liminf(-f_n) = -\limsup f_n$.

Remark: Since sets of measure zero do not contribute to the value of the integral, it clearly suffices if the requirements of the dominated convergence theorem are satisfied almost everywhere (with respect to $\mu$).

Note that the existence of $g$ is crucial, as the example $f_n(x) = \frac{1}{n} \chi_{[-n,n]}(x)$ on $\mathbb{R}$ with Lebesgue measure shows.
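The counterexample can be made concrete in exact arithmetic: $f_n$ tends to $0$ even uniformly, yet $\int f_n\, d\lambda = 2$ for every $n$, so the limit of the integrals ($2$) differs from the integral of the limit ($0$); no integrable dominating $g$ can exist.

```python
from fractions import Fraction

def f(n, x):
    # f_n = (1/n) * characteristic function of [-n, n]
    return Fraction(1, n) if -n <= x <= n else Fraction(0)

def integral_f(n):
    # exact Lebesgue integral: constant 1/n on an interval of measure 2n
    return Fraction(1, n) * (2 * n)

assert all(integral_f(n) == 2 for n in range(1, 200))  # integrals stay at 2
assert f(10 ** 6, 0) == Fraction(1, 10 ** 6)           # sup norm of f_n -> 0
```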
Example. If $\mu(x) = \sum_n \alpha_n \Theta(x - x_n)$ is a sum of Dirac measures, $\Theta(x)$ centered at $x = 0$, then
$$\int f(x)\, d\mu(x) = \sum_n \alpha_n f(x_n). \tag{A.53}$$
Hence our integral contains sums as special cases.
Problem A.5. Show that the set $B(X)$ of bounded measurable functions is a Banach space. Show that the set $S(X)$ of simple functions is dense in $B(X)$. Show that the integral is a bounded linear functional on $B(X)$. (Hence Theorem 0.24 could be used to extend the integral from simple to bounded measurable functions.)

Problem A.6. Show that the dominated convergence theorem implies (under the same assumptions)
$$\lim_{n \to \infty} \int |f_n - f|\, d\mu = 0. \tag{A.54}$$

Problem A.7. Suppose $y \mapsto f(x, y)$ is measurable for every $x$ and $x \mapsto f(x, y)$ is continuous for every $y$. Show that
$$F(x) = \int_A f(x, y)\, d\mu(y) \tag{A.55}$$
is continuous if there is an integrable function $g(y)$ such that $|f(x, y)| \le g(y)$.

Problem A.8. Suppose $y \mapsto f(x, y)$ is measurable for fixed $x$ and $x \mapsto f(x, y)$ is differentiable for fixed $y$. Show that
$$F(x) = \int_A f(x, y)\, d\mu(y) \tag{A.56}$$
is differentiable if there is an integrable function $g(y)$ such that $|\frac{\partial}{\partial x} f(x, y)| \le g(y)$. Moreover, $x \mapsto \frac{\partial}{\partial x} f(x, y)$ is measurable and
$$F'(x) = \int_A \frac{\partial}{\partial x} f(x, y)\, d\mu(y) \tag{A.57}$$
in this case.
A.5. Product measures

Let $\mu_1$ and $\mu_2$ be two measures on $\Sigma_1$ and $\Sigma_2$, respectively. Let $\Sigma_1 \otimes \Sigma_2$ be the σ-algebra generated by rectangles of the form $A_1 \times A_2$.

Example. Let $\mathfrak{B}$ be the Borel sets in $\mathbb{R}$. Then $\mathfrak{B}^2 = \mathfrak{B} \otimes \mathfrak{B}$ are the Borel sets in $\mathbb{R}^2$ (since the rectangles are a basis for the product topology).

Any set in $\Sigma_1 \otimes \Sigma_2$ has the section property, that is,
Lemma A.20. Suppose $A \in \Sigma_1 \otimes \Sigma_2$. Then its sections
$$A_1(x_2) = \{x_1 \mid (x_1, x_2) \in A\} \quad \text{and} \quad A_2(x_1) = \{x_2 \mid (x_1, x_2) \in A\} \tag{A.58}$$
are measurable.

Proof. Denote by $S$ the set of all $A \in \Sigma_1 \otimes \Sigma_2$ with the property that $A_1(x_2) \in \Sigma_1$. Clearly all rectangles are in $S$, and it suffices to show that $S$ is a σ-algebra. If $A \in S$, then $(A')_1(x_2) = (A_1(x_2))' \in \Sigma_1$ and thus $S$ is closed under complements. Similarly, if $A_n \in S$, then $(\bigcup_n A_n)_1(x_2) = \bigcup_n (A_n)_1(x_2)$ shows that $S$ is closed under countable unions.
This implies that if $f$ is a measurable function on $X_1 \times X_2$, then $f(\,\cdot\,, x_2)$ is measurable on $X_1$ for every $x_2$ and $f(x_1, \,\cdot\,)$ is measurable on $X_2$ for every $x_1$ (observe $A_1(x_2) = \{x_1 \mid f(x_1, x_2) \in B\}$, where $A = \{(x_1, x_2) \mid f(x_1, x_2) \in B\}$). In fact, this is even equivalent since $\chi_{A_1(x_2)}(x_1) = \chi_{A_2(x_1)}(x_2) = \chi_A(x_1, x_2)$.
Given two measures $\mu_1$ on $\Sigma_1$ and $\mu_2$ on $\Sigma_2$, we now want to construct the product measure $\mu_1 \otimes \mu_2$ on $\Sigma_1 \otimes \Sigma_2$ such that
$$\mu_1 \otimes \mu_2(A_1 \times A_2) = \mu_1(A_1)\, \mu_2(A_2), \qquad A_j \in \Sigma_j,\ j = 1, 2. \tag{A.59}$$
Theorem A.21. Let $\mu_1$ and $\mu_2$ be two σ-finite measures on $\Sigma_1$ and $\Sigma_2$, respectively. Let $A \in \Sigma_1 \otimes \Sigma_2$. Then $\mu_2(A_2(x_1))$ and $\mu_1(A_1(x_2))$ are measurable and
$$\int_{X_1} \mu_2(A_2(x_1))\, d\mu_1(x_1) = \int_{X_2} \mu_1(A_1(x_2))\, d\mu_2(x_2). \tag{A.60}$$

Proof. Let $S$ be the set of all subsets for which our claim holds. Note that $S$ contains at least all rectangles. It even contains the algebra of finite disjoint unions of rectangles. Thus it suffices to show that $S$ is a monotone class. If $\mu_1$ and $\mu_2$ are finite, this follows from continuity from above and below of measures. The case where $\mu_1$ and $\mu_2$ are σ-finite can be handled as in Theorem A.5.
Hence we can define
$$\mu_1 \otimes \mu_2(A) = \int_{X_1} \mu_2(A_2(x_1))\, d\mu_1(x_1) = \int_{X_2} \mu_1(A_1(x_2))\, d\mu_2(x_2) \tag{A.61}$$
or, equivalently,
$$\mu_1 \otimes \mu_2(A) = \int_{X_1} \Big( \int_{X_2} \chi_A(x_1, x_2)\, d\mu_2(x_2) \Big) d\mu_1(x_1) = \int_{X_2} \Big( \int_{X_1} \chi_A(x_1, x_2)\, d\mu_1(x_1) \Big) d\mu_2(x_2). \tag{A.62}$$
Additivity of $\mu_1 \otimes \mu_2$ follows from the monotone convergence theorem.

Note that (A.59) uniquely defines $\mu_1 \otimes \mu_2$ as a σ-finite premeasure on the algebra of finite disjoint unions of rectangles. Hence by Theorem A.5 it is the only measure on $\Sigma_1 \otimes \Sigma_2$ satisfying (A.59).
Finally we have:

Theorem A.22 (Fubini). Let $f$ be a measurable function on $X_1 \times X_2$ and let $\mu_1$, $\mu_2$ be σ-finite measures on $X_1$, $X_2$, respectively.

(i) If $f \ge 0$, then $\int f(\,\cdot\,, x_2)\, d\mu_2(x_2)$ and $\int f(x_1, \,\cdot\,)\, d\mu_1(x_1)$ are both measurable and
$$\iint f(x_1, x_2)\, d\mu_1 \otimes \mu_2(x_1, x_2) = \int \Big( \int f(x_1, x_2)\, d\mu_1(x_1) \Big) d\mu_2(x_2) = \int \Big( \int f(x_1, x_2)\, d\mu_2(x_2) \Big) d\mu_1(x_1). \tag{A.63}$$

(ii) If $f$ is complex, then
$$\int |f(x_1, x_2)|\, d\mu_1(x_1) \in L^1(X_2, d\mu_2), \tag{A.64}$$
respectively
$$\int |f(x_1, x_2)|\, d\mu_2(x_2) \in L^1(X_1, d\mu_1), \tag{A.65}$$
if and only if $f \in L^1(X_1 \times X_2, d\mu_1 \otimes d\mu_2)$. In this case (A.63) holds.

Proof. By Theorem A.21 the claim holds for simple functions. Now (i) follows from the monotone convergence theorem and (ii) from the dominated convergence theorem.

In particular, if $f(x_1, x_2)$ is either nonnegative or integrable, then the order of integration can be interchanged.
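The integrability hypothesis in (ii) cannot be dropped. A standard illustration (with counting measure on $\mathbb{N} \times \mathbb{N}$, where integrals become sums): for $a(m,n) = 1$ if $m = n$, $-1$ if $m = n+1$, and $0$ otherwise, the two iterated sums disagree, precisely because $\sum |a| = \infty$.

```python
def a(m, n):
    # 1 on the diagonal, -1 on the subdiagonal, 0 elsewhere
    if m == n:
        return 1
    if m == n + 1:
        return -1
    return 0

def row_sum(m):   # "integrate" over n first; only finitely many nonzero terms
    return sum(a(m, n) for n in range(m + 2))

def col_sum(n):   # "integrate" over m first
    return sum(a(m, n) for m in range(n + 2))

N = 200
assert sum(row_sum(m) for m in range(N)) == 1   # rows first: total 1
assert sum(col_sum(n) for n in range(N)) == 0   # columns first: total 0
# a is not in L^1 of the product: the absolute sums grow without bound
assert sum(abs(a(m, n)) for m in range(N) for n in range(N)) == 2 * N - 1
```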
Lemma A.23. If $\mu_1$ and $\mu_2$ are σ-finite regular Borel measures, so is $\mu_1 \otimes \mu_2$.

Proof. Regularity holds for every rectangle and hence also for the algebra of finite disjoint unions of rectangles. Thus the claim follows from Lemma A.6.
Note that we can iterate this procedure.

Lemma A.24. Suppose $\mu_j$, $j = 1, 2, 3$, are σ-finite measures. Then
$$(\mu_1 \otimes \mu_2) \otimes \mu_3 = \mu_1 \otimes (\mu_2 \otimes \mu_3). \tag{A.66}$$

Proof. First of all note that $(\Sigma_1 \otimes \Sigma_2) \otimes \Sigma_3 = \Sigma_1 \otimes (\Sigma_2 \otimes \Sigma_3)$ is the σ-algebra generated by the cuboids $A_1 \times A_2 \times A_3$ in $X_1 \times X_2 \times X_3$. Moreover, since
$$((\mu_1 \otimes \mu_2) \otimes \mu_3)(A_1 \times A_2 \times A_3) = \mu_1(A_1)\, \mu_2(A_2)\, \mu_3(A_3) = (\mu_1 \otimes (\mu_2 \otimes \mu_3))(A_1 \times A_2 \times A_3), \tag{A.67}$$
the two measures coincide on the algebra of finite disjoint unions of cuboids. Hence they coincide everywhere by Theorem A.5.

Example. If $\lambda$ is Lebesgue measure on $\mathbb{R}$, then $\lambda^n = \lambda \otimes \cdots \otimes \lambda$ is Lebesgue measure on $\mathbb{R}^n$. Since $\lambda$ is regular, so is $\lambda^n$.
A.6. Decomposition of measures

Let $\mu$, $\nu$ be two measures on a measure space $(X, \Sigma)$. They are called mutually singular (in symbols $\mu \perp \nu$) if they are supported on disjoint sets. That is, there is a measurable set $N$ such that $\mu(N) = 0$ and $\nu(X \setminus N) = 0$.

Example. Let $\lambda$ be the Lebesgue measure and $\Theta$ the Dirac measure (centered at $0$). Then $\lambda \perp \Theta$: Just take $N = \{0\}$; then $\lambda(\{0\}) = 0$ and $\Theta(\mathbb{R} \setminus \{0\}) = 0$.

On the other hand, $\nu$ is called absolutely continuous with respect to $\mu$ (in symbols $\nu \ll \mu$) if $\mu(A) = 0$ implies $\nu(A) = 0$.

Example. The prototypical example is the measure $d\nu = f\, d\mu$ (compare Lemma A.15). Indeed $\mu(A) = 0$ implies
$$\nu(A) = \int_A f\, d\mu = 0 \tag{A.68}$$
and shows that $\nu$ is absolutely continuous with respect to $\mu$. In fact, we will show below that every absolutely continuous measure is of this form.
The two main results will follow as simple consequences of the following result:

Theorem A.25. Let $\mu$, $\nu$ be σ-finite measures. Then there exists a unique (a.e.) nonnegative function $f$ and a set $N$ of $\mu$ measure zero such that
$$\nu(A) = \nu(A \cap N) + \int_A f\, d\mu. \tag{A.69}$$
Proof. We first assume $\mu$, $\nu$ to be finite measures. Let $\alpha = \mu + \nu$ and consider the Hilbert space $L^2(X, d\alpha)$. Then
$$\ell(h) = \int_X h\, d\nu \tag{A.70}$$
is a bounded linear functional by Cauchy–Schwarz:
$$|\ell(h)|^2 = \Big| \int_X 1 \cdot h\, d\nu \Big|^2 \le \Big( \int |1|^2\, d\nu \Big) \Big( \int |h|^2\, d\nu \Big) \le \nu(X) \int |h|^2\, d\alpha = \nu(X)\, \|h\|^2. \tag{A.71}$$
Hence by the Riesz lemma (Theorem 1.7) there exists a $g \in L^2(X, d\alpha)$ such that
$$\ell(h) = \int_X h g\, d\alpha. \tag{A.72}$$
By construction
$$\nu(A) = \int \chi_A\, d\nu = \int \chi_A g\, d\alpha = \int_A g\, d\alpha. \tag{A.73}$$
In particular, $g$ must be positive a.e. (take $A$ the set where $g$ is negative). Furthermore, let $N = \{x \mid g(x) \ge 1\}$. Then
$$\nu(N) = \int_N g\, d\alpha \ge \alpha(N) = \mu(N) + \nu(N), \tag{A.74}$$
which shows $\mu(N) = 0$. Now set
$$f = \frac{g}{1 - g}\, \chi_{N'}, \qquad N' = X \setminus N. \tag{A.75}$$
Then, since (A.73) implies $d\nu = g\, d\alpha$ respectively $d\mu = (1 - g)\, d\alpha$, we have
$$\int_A f\, d\mu = \int \chi_A \frac{g}{1 - g}\, \chi_{N'}\, d\mu = \int \chi_{A \cap N'}\, g\, d\alpha = \nu(A \cap N') \tag{A.76}$$
as desired. Clearly $f$ is unique, since if there is a second function $\tilde f$, then $\int_A (f - \tilde f)\, d\mu = 0$ for every $A$ shows $f - \tilde f = 0$ a.e.
To see the σ-finite case, observe that $X_n \nearrow X$, $\mu(X_n) < \infty$ and $Y_n \nearrow X$, $\nu(Y_n) < \infty$ implies $X_n \cap Y_n \nearrow X$ and $\alpha(X_n \cap Y_n) < \infty$. Hence when restricted to $X_n \cap Y_n$ we have sets $N_n$ and functions $f_n$. Now take $N = \bigcup N_n$ and choose $f$ such that $f|_{X_n} = f_n$ (this is possible since $f_{n+1}|_{X_n} = f_n$ a.e.). Then $\mu(N) = 0$ and
$$\nu(A \cap N') = \lim_{n \to \infty} \nu(A \cap (X_n \setminus N)) = \lim_{n \to \infty} \int_{A \cap X_n} f\, d\mu = \int_A f\, d\mu, \tag{A.77}$$
which finishes the proof.
Now the anticipated results follow with no effort:

Theorem A.26 (Lebesgue decomposition). Let $\mu$, $\nu$ be two σ-finite measures on a measure space $(X, \Sigma)$. Then $\nu$ can be uniquely decomposed as $\nu = \nu_{ac} + \nu_{sing}$, where $\nu_{ac}$ and $\nu_{sing}$ are mutually singular and $\nu_{ac}$ is absolutely continuous with respect to $\mu$.

Proof. Taking $\nu_{sing}(A) = \nu(A \cap N)$ and $d\nu_{ac} = f\, d\mu$, there is at least one such decomposition. To show uniqueness, let $\nu$ be finite first. If there is another one, $\nu = \tilde\nu_{ac} + \tilde\nu_{sing}$, then let $\tilde N$ be such that $\mu(\tilde N) = 0$ and $\tilde\nu_{sing}(\tilde N') = 0$. Then $\nu_{sing}(A) - \tilde\nu_{sing}(A) = \int_A (\tilde f - f)\, d\mu$. In particular, $\int_{A \cap N' \cap \tilde N'} (\tilde f - f)\, d\mu = 0$ and hence $\tilde f = f$ a.e. away from $N \cup \tilde N$. Since $\mu(N \cup \tilde N) = 0$, we have $\tilde f = f$ a.e. and hence $\tilde\nu_{ac} = \nu_{ac}$ as well as $\tilde\nu_{sing} = \nu - \tilde\nu_{ac} = \nu - \nu_{ac} = \nu_{sing}$. The σ-finite case follows as usual.
Theorem A.27 (Radon–Nikodym). Let $\mu$, $\nu$ be two σ-finite measures on a measure space $(X, \Sigma)$. Then $\nu$ is absolutely continuous with respect to $\mu$ if and only if there is a positive measurable function $f$ such that
$$\nu(A) = \int_A f\, d\mu \tag{A.78}$$
for every $A \in \Sigma$. The function $f$ is determined uniquely a.e. with respect to $\mu$ and is called the Radon–Nikodym derivative $\frac{d\nu}{d\mu}$ of $\nu$ with respect to $\mu$.

Proof. Just observe that in this case $\nu(A \cap N) = 0$ for every $A$, that is, $\nu_{sing} = 0$.
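On a finite set the Radon–Nikodym picture becomes completely explicit: every measure is a weight function, $\nu \ll \mu$ just means $\nu$ puts no weight where $\mu$ vanishes, and $\frac{d\nu}{d\mu}$ is the pointwise ratio of weights. A sketch in exact arithmetic (the weights below are made up for illustration):

```python
from fractions import Fraction

X = ["a", "b", "c", "d"]
mu = {"a": Fraction(1), "b": Fraction(2), "c": Fraction(1), "d": Fraction(0)}
nu = {"a": Fraction(3), "b": Fraction(1), "c": Fraction(0), "d": Fraction(0)}

assert all(nu[x] == 0 for x in X if mu[x] == 0)   # nu << mu

# Radon-Nikodym derivative as pointwise ratio; on the mu-null set {d} the
# value is irrelevant (f is only determined a.e. with respect to mu)
f = {x: (nu[x] / mu[x] if mu[x] != 0 else Fraction(0)) for x in X}

def nu_from_f(A):
    return sum(f[x] * mu[x] for x in A)           # (A.78): nu(A) = int_A f dmu

assert all(nu_from_f([x]) == nu[x] for x in X)
assert nu_from_f(X) == sum(nu.values())
```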
Problem A.9. Let $\mu$ be a Borel measure on $\mathfrak{B}$ and suppose its distribution function $\mu(x)$ is differentiable. Show that the Radon–Nikodym derivative equals the ordinary derivative $\mu'(x)$.

Problem A.10 (Chain rule). Show that $\nu \ll \mu$ is a transitive relation. In particular, if $\omega \ll \nu \ll \mu$, show that
$$\frac{d\omega}{d\mu} = \frac{d\omega}{d\nu}\, \frac{d\nu}{d\mu}.$$

Problem A.11. Suppose $\nu \ll \mu$. Show that
$$\frac{d\omega}{d\mu}\, d\mu = \frac{d\omega}{d\nu}\, d\nu + d\zeta,$$
where $\zeta$ is a positive measure which is singular with respect to $\nu$. Show that $\zeta = 0$ if and only if $\mu \ll \nu$.
A.7. Derivatives of measures

If $\mu$ is a Borel measure on $\mathfrak{B}$ and its distribution function $\mu(x)$ is differentiable, then the Radon–Nikodym derivative is just the ordinary derivative $\mu'(x)$. Our aim in this section is to generalize this result to arbitrary measures on $\mathfrak{B}^n$.

We call
$$(D\mu)(x) = \lim_{\varepsilon \downarrow 0} \frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|} \tag{A.79}$$
the derivative of $\mu$ at $x \in \mathbb{R}^n$, provided the above limit exists. (Here $B_r(x) \subset \mathbb{R}^n$ is a ball of radius $r$ centered at $x \in \mathbb{R}^n$ and $|A|$ denotes the Lebesgue measure of $A \in \mathfrak{B}^n$.)

Note that for a Borel measure on $\mathfrak{B}$, $(D\mu)(x)$ exists if and only if $\mu(x)$ (as defined in (A.3)) is differentiable at $x$, and $(D\mu)(x) = \mu'(x)$ in this case.

We will assume that $\mu$ is regular throughout this section. It can be shown that every Borel measure on $\mathbb{R}^n$ is regular.
To compute the derivative of $\mu$ we introduce the upper and lower derivatives,
$$(\overline{D}\mu)(x) = \limsup_{\varepsilon \downarrow 0} \frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|} \quad \text{and} \quad (\underline{D}\mu)(x) = \liminf_{\varepsilon \downarrow 0} \frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|}. \tag{A.80}$$
Clearly $\mu$ is differentiable at $x$ if $(\overline{D}\mu)(x) = (\underline{D}\mu)(x) < \infty$. First of all note that they are measurable:

Lemma A.28. The upper derivative is lower semicontinuous, that is, the set $\{x \mid (\overline{D}\mu)(x) > \alpha\}$ is open for every $\alpha \in \mathbb{R}$. Similarly, the lower derivative is upper semicontinuous, that is, $\{x \mid (\underline{D}\mu)(x) < \alpha\}$ is open.
Proof. We only prove the claim for $\overline{D}\mu$, the case $\underline{D}\mu$ being similar. Abbreviate
$$M_r(x) = \sup_{0 < \varepsilon < r} \frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|} \tag{A.81}$$
and note that it suffices to show that $O_r = \{x \mid M_r(x) > \alpha\}$ is open.

If $x \in O_r$, there is some $\varepsilon < r$ such that
$$\frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|} > \alpha. \tag{A.82}$$
Let $\delta > 0$ and $y \in B_\delta(x)$. Then $B_\varepsilon(x) \subseteq B_{\varepsilon+\delta}(y)$, implying
$$\frac{\mu(B_{\varepsilon+\delta}(y))}{|B_{\varepsilon+\delta}(y)|} \ge \Big( \frac{\varepsilon}{\varepsilon + \delta} \Big)^n \frac{\mu(B_\varepsilon(x))}{|B_\varepsilon(x)|} > \alpha \tag{A.83}$$
for $\delta$ sufficiently small. That is, $B_\delta(x) \subseteq O_r$.
In particular, both the upper and lower derivative are measurable. Next, the following geometric fact of $\mathbb{R}^n$ will be needed.

Lemma A.29. Given open balls $B_1, \dots, B_m$ in $\mathbb{R}^n$, there is a subset of disjoint balls $B_{j_1}, \dots, B_{j_k}$ such that
$$\Big| \bigcup_{i=1}^m B_i \Big| \le 3^n \sum_{i=1}^k |B_{j_i}|. \tag{A.84}$$

Proof. Start with a ball of largest radius, $B_{j_1} = B_{r_{j_1}}(x_{j_1})$, and remove all balls from our list which intersect $B_{j_1}$. Since their radii are at most $r_{j_1}$, the removed balls are all contained in $3 B_{j_1} = B_{3 r_{j_1}}(x_{j_1})$. Proceeding like this (always taking a largest remaining ball) we obtain $B_{j_1}, \dots, B_{j_k}$ such that
$$\bigcup_{i=1}^m B_i \subseteq \bigcup_{i=1}^k B_{3 r_{j_i}}(x_{j_i}) \tag{A.85}$$
and the claim follows since $|B_{3r}(x)| = 3^n |B_r(x)|$.
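The greedy selection in the proof of Lemma A.29 is easy to run numerically. A one-dimensional sketch (balls are open intervals, $3^n = 3$; the random configuration is arbitrary): repeatedly keep a largest remaining interval and discard everything meeting it, then verify (A.84).

```python
import random

def union_length(intervals):
    # total length of a union of open intervals, by sorting and merging
    total, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in sorted(intervals):
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        total += cur_hi - cur_lo
    return total

def vitali_select(balls):
    # greedy selection from Lemma A.29: largest first, discard intersecting
    remaining = sorted(balls, key=lambda b: b[1] - b[0], reverse=True)
    chosen = []
    while remaining:
        b = remaining.pop(0)
        chosen.append(b)
        # keep only balls disjoint from b; discarded ones lie inside 3b
        remaining = [c for c in remaining if c[1] <= b[0] or c[0] >= b[1]]
    return chosen

random.seed(1)
balls = [(c - r, c + r) for c, r in
         ((random.uniform(0, 10), random.uniform(0.1, 1)) for _ in range(50))]
chosen = vitali_select(balls)
assert union_length(balls) <= 3 * sum(hi - lo for lo, hi in chosen)  # (A.84)
```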
Now we can show

Lemma A.30. Let $\alpha > 0$. For any Borel set $A$ we have
$$|\{x \in A \mid (\overline{D}\mu)(x) > \alpha\}| \le 3^n \frac{\mu(A)}{\alpha} \tag{A.86}$$
and
$$|\{x \in A \mid (\overline{D}\mu)(x) > 0\}| = 0 \quad \text{whenever } \mu(A) = 0. \tag{A.87}$$

Proof. Let $A_\alpha = \{x \in A \mid (\overline{D}\mu)(x) > \alpha\}$. We will show
$$|K| \le 3^n \frac{\mu(O)}{\alpha} \tag{A.88}$$
for any compact set $K$ and open set $O$ with $K \subseteq A_\alpha \subseteq O$. The first claim then follows from regularity of $\mu$ and the Lebesgue measure.

Given fixed $K$, $O$, for every $x \in K$ there is some $r_x$ such that $B_{r_x}(x) \subseteq O$ and $|B_{r_x}(x)| < \alpha^{-1} \mu(B_{r_x}(x))$. Since $K$ is compact, we can choose a finite subcover of $K$. Moreover, by Lemma A.29 we can refine our set of balls such that
$$|K| \le 3^n \sum_{i=1}^k |B_{r_i}(x_i)| < \frac{3^n}{\alpha} \sum_{i=1}^k \mu(B_{r_i}(x_i)) \le 3^n \frac{\mu(O)}{\alpha}. \tag{A.89}$$
To see the second claim, observe that
$$\{x \in A \mid (\overline{D}\mu)(x) > 0\} = \bigcup_{j=1}^\infty \{x \in A \mid (\overline{D}\mu)(x) > \tfrac{1}{j}\} \tag{A.90}$$
and by the first part $|\{x \in A \mid (\overline{D}\mu)(x) > \frac{1}{j}\}| = 0$ for any $j$ if $\mu(A) = 0$.
Theorem A.31 (Lebesgue). Let $f$ be (locally) integrable. Then for a.e. $x \in \mathbb{R}^n$ we have
$$\lim_{r \downarrow 0} \frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y) - f(x)|\, dy = 0. \tag{A.91}$$

Proof. Decompose $f$ as $f = g + h$, where $g$ is continuous and $\|h\|_1 < \varepsilon$ (Theorem 0.31), and abbreviate
$$D_r(f)(x) = \frac{1}{|B_r(x)|} \int_{B_r(x)} |f(y) - f(x)|\, dy. \tag{A.92}$$
Then, since $\lim_{r \downarrow 0} D_r(g)(x) = 0$ (for every $x$) and $D_r(f) \le D_r(g) + D_r(h)$, we have
$$\limsup_{r \downarrow 0} D_r(f)(x) \le \limsup_{r \downarrow 0} D_r(h)(x) \le (\overline{D}\mu)(x) + |h(x)|, \tag{A.93}$$
where $d\mu = |h|\, dx$. Using $|\{x \mid |h(x)| \ge \alpha\}| \le \alpha^{-1} \|h\|_1$ and the first part of Lemma A.30 we see
$$|\{x \mid \limsup_{r \downarrow 0} D_r(f)(x) \ge \alpha\}| \le (3^n + 1)\, \frac{\varepsilon}{\alpha}. \tag{A.94}$$
Since $\varepsilon$ is arbitrary, the Lebesgue measure of this set must be zero for every $\alpha$. That is, the set where the limsup is positive has Lebesgue measure zero.
The points where (A.91) holds are called Lebesgue points of $f$.

Note that the balls can be replaced by more general sets: A sequence of sets $A_j(x)$ is said to shrink to $x$ nicely if there are balls $B_{r_j}(x)$ with $r_j \to 0$ and a constant $\varepsilon > 0$ such that $A_j(x) \subseteq B_{r_j}(x)$ and $|A_j(x)| \ge \varepsilon |B_{r_j}(x)|$. For example, $A_j(x)$ could be balls or cubes (not necessarily containing $x$). However, the portion of $B_{r_j}(x)$ which they occupy must not go to zero! For example, the rectangles $(0, \frac{1}{j}) \times (0, \frac{2}{j}) \subset \mathbb{R}^2$ do shrink nicely to $0$, but the rectangles $(0, \frac{1}{j}) \times (0, \frac{2}{j^2})$ don't.
Lemma A.32. Let $f$ be (locally) integrable. Then at every Lebesgue point we have
$$f(x) = \lim_{j \to \infty} \frac{1}{|A_j(x)|} \int_{A_j(x)} f(y)\, dy \tag{A.95}$$
whenever $A_j(x)$ shrinks to $x$ nicely.

Proof. Let $x$ be a Lebesgue point and choose some nicely shrinking sets $A_j(x)$ with corresponding $B_{r_j}(x)$ and $\varepsilon$. Then
$$\frac{1}{|A_j(x)|} \int_{A_j(x)} |f(y) - f(x)|\, dy \le \frac{1}{\varepsilon |B_{r_j}(x)|} \int_{B_{r_j}(x)} |f(y) - f(x)|\, dy \tag{A.96}$$
and the claim follows.
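At a point of continuity (which is automatically a Lebesgue point) the convergence in (A.95) can be seen numerically. A midpoint-rule sketch for $f(y) = y^2$ at $x = 0.5$, where the exact ball average is $x^2 + r^2/3$, so the error decays like $r^2$:

```python
def ball_average(f, x, r, m=10000):
    # midpoint-rule approximation of (1/|B_r(x)|) * integral of f over B_r(x)
    pts = [x - r + 2 * r * (k + 0.5) / m for k in range(m)]
    return sum(f(p) for p in pts) / m

f = lambda y: y * y
x = 0.5
errors = [abs(ball_average(f, x, r) - f(x)) for r in (0.5, 0.1, 0.01, 0.001)]
assert errors == sorted(errors, reverse=True)  # the error decreases with r
assert errors[-1] < 1e-5                       # averages converge to f(x)
```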
Corollary A.33. Suppose $\mu$ is an absolutely continuous Borel measure on $\mathbb{R}$. Then its distribution function is differentiable a.e. and $d\mu(x) = \mu'(x)\, dx$.
As another consequence we obtain

Theorem A.34. Let $\mu$ be a Borel measure on $\mathbb{R}^n$. The derivative $D\mu$ exists a.e. with respect to Lebesgue measure and equals the Radon–Nikodym derivative of the absolutely continuous part of $\mu$ with respect to Lebesgue measure, that is,
$$\mu_{ac}(A) = \int_A (D\mu)(x)\, dx. \tag{A.97}$$

Proof. If $d\mu = f\, dx$ is absolutely continuous with respect to Lebesgue measure, the claim follows from Theorem A.31. To see the general case, use the Lebesgue decomposition of $\mu$ and let $N$ be a support for the singular part with $|N| = 0$. Then $(D\mu_{sing})(x) = 0$ for a.e. $x \in N'$ by the second part of Lemma A.30.

In particular, $\mu$ is singular with respect to Lebesgue measure if and only if $D\mu = 0$ a.e. with respect to Lebesgue measure.
Using the upper and lower derivatives, we can also give supports for the absolutely continuous and singular parts.

Theorem A.35. The set $\{x \mid (\overline{D}\mu)(x) = \infty\}$ is a support for the singular part and $\{x \mid 0 < (\overline{D}\mu)(x) < \infty\}$ is a support for the absolutely continuous part.

Proof. Suppose $\mu$ is purely singular first. Let us show that the set $O_k = \{x \mid (\overline{D}\mu)(x) < k\}$ satisfies $\mu(O_k) = 0$ for every $k \in \mathbb{N}$.

Let $K \subset O_k$ be compact and let $V_j \supset K$ be some open set such that $|V_j \setminus K| \le \frac{1}{j}$. For every $x \in K$ there is some $\varepsilon = \varepsilon(x)$ such that $B_\varepsilon(x) \subseteq V_j$ and $\mu(B_\varepsilon(x)) \le k |B_\varepsilon(x)|$. By compactness, finitely many of these balls cover $K$ and hence
$$\mu(K) \le \sum_i \mu(B_{\varepsilon_i}(x_i)) \le k \sum_i |B_{\varepsilon_i}(x_i)|. \tag{A.98}$$
Selecting disjoint balls as in Lemma A.29 further shows
$$\mu(K) \le k\, 3^n \sum_i |B_{\varepsilon_{j_i}}(x_{j_i})| \le k\, 3^n |V_j|. \tag{A.99}$$
Letting $j \to \infty$ we see $\mu(K) \le k\, 3^n |K|$, and by regularity we even have $\mu(A) \le k\, 3^n |A|$ for every $A \subseteq O_k$. Hence $\mu$ is absolutely continuous on $O_k$, and since we assumed $\mu$ to be singular we must have $\mu(O_k) = 0$.

Thus $(\overline{D}\mu_{sing})(x) = \infty$ for a.e. $x$ with respect to $\mu_{sing}$ and we are done.
Example. The Cantor function is constructed as follows: Take the sets $C_n$ used in the construction of the Cantor set $C$: $C_n$ is the union of $2^n$ closed intervals with $2^n - 1$ open gaps in between. Set $f_n$ equal to $j/2^n$ on the $j$'th gap of $C_n$ and extend it to $[0, 1]$ by linear interpolation. Note that, since we are creating precisely one new gap between every old gap when going from $C_n$ to $C_{n+1}$, the value of $f_{n+1}$ is the same as the value of $f_n$ on the gaps of $C_n$. In particular, $\|f_n - f_m\|_\infty \le 2^{-\min(n,m)}$ and hence we can define the Cantor function as $f = \lim_{n \to \infty} f_n$. By construction $f$ is a continuous function which is constant on every subinterval of $[0, 1] \setminus C$. Since $C$ is of Lebesgue measure zero, this set is of full Lebesgue measure and hence $f' = 0$ a.e. in $[0, 1]$. In particular, the corresponding measure, the Cantor measure, is supported on $C$ and purely singular with respect to Lebesgue measure.
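The approximants $f_n$ can be generated by the self-similar recursion $f_0(x) = x$, $f_{n+1}(x) = \frac{1}{2} f_n(3x)$ on $[0, \frac{1}{3}]$, $\frac{1}{2}$ on $[\frac{1}{3}, \frac{2}{3}]$, and $\frac{1}{2} + \frac{1}{2} f_n(3x - 2)$ on $[\frac{2}{3}, 1]$ (an equivalent packaging of the gap-by-gap construction above), which also makes the bound $\|f_n - f_{n-1}\|_\infty \le 2^{-(n-1)}$ checkable:

```python
def cantor_fn(n):
    # n-th piecewise-linear approximant of the Cantor function
    def f(x):
        if n == 0:
            return x
        if x < 1 / 3:
            return cantor_fn(n - 1)(3 * x) / 2        # left copy, scaled by 1/2
        if x <= 2 / 3:
            return 0.5                                # plateau on the middle gap
        return 0.5 + cantor_fn(n - 1)(3 * x - 2) / 2  # right copy
    return f

xs = [i / 729 for i in range(730)]
for n in range(1, 6):
    diff = max(abs(cantor_fn(n)(x) - cantor_fn(n - 1)(x)) for x in xs)
    assert diff <= 2.0 ** (1 - n)   # ||f_n - f_{n-1}||_inf <= 2^{-min(n, n-1)}
```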
Glossary of notations

AC(I) . . . absolutely continuous functions, 72
B = B^1
B^n . . . Borel σ-field of R^n, 210
C(H) . . . set of compact operators, 112
C(U) . . . set of continuous functions from U to C
C_∞(U) . . . set of functions in C(U) which vanish at ∞
C(U, V) . . . set of continuous functions from U to V
C_c^∞(U, V) . . . set of compactly supported smooth functions
χ_Ω(.) . . . characteristic function of the set Ω
dim . . . dimension of a linear space
dist(x, Y) = inf_{y∈Y} ‖x − y‖, distance between x and Y
D(.) . . . domain of an operator
e . . . exponential function, e^z = exp(z)
E(A) . . . expectation of an operator A, 47
F . . . Fourier transform, 139
H . . . Schrödinger operator, 177
H_0 . . . free Schrödinger operator, 142
H^m(a, b) . . . Sobolev space, 72
H^m(R^n) . . . Sobolev space, 141
hull(.) . . . convex hull
H . . . a separable Hilbert space
i . . . complex unity, i^2 = −1
Im(.) . . . imaginary part of a complex number
inf . . . infimum
Ker(A) . . . kernel of an operator A
L(X, Y) . . . set of all bounded linear operators from X to Y, 20
L(X) = L(X, X)
L^p(M, dµ) . . . Lebesgue space of p integrable functions, 22
L^p_loc(M, dµ) . . . locally p integrable functions
L^p_c(M, dµ) . . . compactly supported p integrable functions
L^∞(M, dµ) . . . Lebesgue space of bounded functions, 23
L^∞_∞(R^n) . . . Lebesgue space of bounded functions vanishing at ∞
λ . . . a real number
max . . . maximum
M . . . Mellin transform, 201
µ_ψ . . . spectral measure, 82
N . . . the set of positive integers
N_0 = N ∪ {0}
Ω . . . a Borel set
Ω_± . . . wave operators, 197
P_A(.) . . . family of spectral projections of an operator A
P_± . . . projector onto outgoing/incoming states, 200
Q . . . the set of rational numbers
Q(.) . . . form domain of an operator, 84
R(I, X) . . . set of regulated functions, 98
R_A(z) . . . resolvent of A, 62
Ran(A) . . . range of an operator A
rank(A) = dim Ran(A), rank of an operator A, 111
Re(.) . . . real part of a complex number
ρ(A) . . . resolvent set of A, 61
R . . . the set of real numbers
S(I, X) . . . set of simple functions, 98
S(R^n) . . . set of smooth functions with rapid decay, 139
sup . . . supremum
supp . . . support of a function
σ(A) . . . spectrum of an operator A, 61
σ_ac(A) . . . absolutely continuous spectrum of A, 91
σ_sc(A) . . . singular continuous spectrum of A, 91
σ_pp(A) . . . pure point spectrum of A, 91
σ_p(A) . . . point spectrum (set of eigenvalues) of A, 88
σ_d(A) . . . discrete spectrum of A, 119
σ_ess(A) . . . essential spectrum of A, 119
span(M) . . . set of finite linear combinations from M, 12
Z . . . the set of integers
z . . . a complex number
I . . . identity operator
√z . . . square root of z with branch cut along (−∞, 0)
z* . . . complex conjugation
A* . . . adjoint of A, 51
A̅ . . . closure of A, 55
f̂ = F f, Fourier transform of f
f̌ = F^{−1} f, inverse Fourier transform of f
‖.‖ . . . norm in the Hilbert space H
‖.‖_p . . . norm in the Banach space L^p, 22
⟨., ..⟩ . . . scalar product in H
E_ψ(A) = ⟨ψ, Aψ⟩, expectation value
Δ_ψ(A) = E_ψ(A^2) − E_ψ(A)^2, variance
⊕ . . . orthogonal sum of linear spaces or operators, 38
Δ . . . Laplace operator, 142
∂ . . . gradient, 139
∂^α . . . derivative, 139
M^⊥ . . . orthogonal complement, 36
(λ_1, λ_2) = {λ ∈ R | λ_1 < λ < λ_2}, open interval
[λ_1, λ_2] = {λ ∈ R | λ_1 ≤ λ ≤ λ_2}, closed interval
ψ_n → ψ . . . norm convergence
ψ_n ⇀ ψ . . . weak convergence, 41
A_n → A . . . norm convergence
A_n →^s A . . . strong convergence, 43
A_n ⇀ A . . . weak convergence, 43
A_n →^{nr} A . . . norm resolvent convergence, 131
A_n →^{sr} A . . . strong resolvent convergence, 131
Index
a.e., see almost everywhere
Absolutely continuous
function, 72
measure, 227
Adjoint, 40
Algebra, 209
Almost everywhere, 212
Angular momentum operator, 151
Banach algebra, 21
Banach space, 11
Basis
orthonormal, 34
spectral, 80
Bessel function, 146
spherical, 185
Bessel inequality, 33
Borel
function, 219
measure, 211
set, 210
σ-algebra, 210
Borel measure
regular, 211
Borel transform, 82
C-real, 71
Cantor function, 234
Cantor measure, 234
Cantor set, 212
Cauchy–Schwarz inequality, 16
Cayley transform, 69
Ces`aro average, 110
Characteristic function, 220
Closed set, 5
Closure, 5
Commute, 101
Compact, 8
locally, 9
sequentially, 8
Complete, 6, 11
Conﬁguration space, 48
Conjugation, 71
Continuous, 6
Convolution, 140
Core, 55
Cover, 7
C∗ algebra, 41
Cyclic vector, 80
Dense, 6
Dilation group, 179
Dirac measure, 211, 224
Dirichlet boundary condition, 162
Distance, 9
Distribution function, 211
Domain, 20, 48, 50
Eigenspace, 98
Eigenvalue, 61
multiplicity, 98
Eigenvector, 61
Element
adjoint, 41
normal, 41
positive, 41
self-adjoint, 41
unitary, 41
Essential supremum, 23
Expectation, 47
First resolvent formula, 63
Form domain, 58, 84
Fourier series, 35
Fourier transform, 110, 139
Friedrichs extension, 61
Function
absolutely continuous, 72
Gaussian wave packet, 151
Gradient, 139
Gram–Schmidt orthogonalization, 35
Graph, 55
Graph norm, 55
Green’s function, 146
Ground state, 186
H¨older’s inequality, 23
Hamiltonian, 49
Harmonic oscillator, 154
Hausdorﬀ space, 5
Heisenberg picture, 114
Herglotz functions, 82
Hermite polynomials, 154
Hilbert space, 15, 31
separable, 35
Hydrogen atom, 178
Ideal, 41
Induced topology, 5
Inner product, 15
Inner product space, 15
Integrable, 222
Integral, 220
Interior, 5
Interior point, 4
Intertwining property, 198
Involution, 41
Ionisation, 192
Kernel, 20
l.c., see Limit circle
l.p., see Limit point
Laguerre polynomial, 185
associated, 185
Lebesgue measure, 212
Lebesgue point, 232
Legendre equation, 182
Lemma
Riemann–Lebesgue, 142
Lidskij trace theorem, 127
Limit circle, 161
Limit point, 4, 161
Linear functional, 21, 37
Localization formula, 193
Mean-square deviation, 48
Measurable
function, 218
set, 210
Measure, 210
absolutely continuous, 227
complete, 218
ﬁnite, 210
growth point, 86
mutually singular, 227
product, 225
projection-valued, 76
spectral, 82
support, 212
Measure space, 210
Mellin transform, 201
Metric space, 3
Minkowski’s inequality, 24
Molliﬁer, 26
Momentum operator, 150
Multi-index, 139
order, 139
Multiplicity
spectral, 81
Neighborhood, 4
Neumann function
spherical, 185
Neumann series, 64
Norm, 10
operator, 20
Norm resolvent convergence, 131
Normal, 10
Normalized, 15, 32
Normed space, 10
Nowhere dense, 27
Observable, 47
One-parameter unitary group, 49
Open ball, 4
Open set, 4
Operator
adjoint, 51
bounded, 20
closable, 55
closed, 55
closure, 55
compact, 112
domain, 20, 50
ﬁnite rank, 111
Hermitian, 50
Hilbert–Schmidt, 122
linear, 20, 50
nonnegative, 58
normal, 79
positive, 58
relatively bounded, 117
relatively compact, 112
self-adjoint, 51
semibounded, 61
strong convergence, 43
symmetric, 50
unitary, 33, 49
weak convergence, 43
Orthogonal, 15, 32
Orthogonal complement, 36
Orthogonal projection, 37
Orthogonal sum, 38
Outer measure, 216
Parallel, 15, 32
Parallelogram law, 17
Parseval’s identity, 141
Partial isometry, 85
Perpendicular, 15, 32
Phase space, 48
Pl¨ ucker identity, 161
Polar decomposition, 85
Polarization identity, 17, 33, 50
Position operator, 149
Positivity improving, 187
Positivity preserving, 187
Premeasure, 210
Probability density, 47
Product measure, 225
Projection, 41
Pythagorean theorem, 16
Quadratic form, 50
Range, 20
Rank, 111
Regulated function, 98
Relatively compact, 112
Resolution of the identity, 77
Resolvent, 62
Neumann series, 64
Resolvent set, 61
Riesz lemma, 37
Scalar product, 15
Scattering operator, 198
Scattering state, 198
Schatten p-class, 125
Schr¨odinger equation, 49
Second countable, 5
Second resolvent formula, 119
Self-adjoint
essentially, 55
Semimetric, 3
Separable, 6, 13
Short range, 203
σ-algebra, 209
σ-finite, 210
Simple function, 98, 220
Simple spectrum, 81
Singular values, 120
Span, 12
Spectral basis, 80
ordered, 90
Spectral measure
maximal, 89
Spectral theorem, 83
compact operators, 120
Spectral vector, 80
maximal, 89
Spectrum, 61
absolutely continuous, 91
discrete, 119
essential, 119
pure point, 91
singularly continuous, 91
Spherical harmonics, 183
∗algebra, 41
∗ideal, 41
Stieltjes inversion formula, 82
Stone’s formula, 100
Strong resolvent convergence, 131
Sturm–Liouville equation, 157
regular, 158
Subcover, 7
Subspace
reducing, 67
Superposition, 48
Tensor product, 39
Theorem
Banach–Steinhaus, 28
closed graph, 57
dominated convergence, 223
Fubini, 226
Heine–Borel, 9
HVZ, 192
Kato–Rellich, 118
Lebesgue decomposition, 229
monotone convergence, 221
Pythagorean, 32
Radon–Nikodym, 229
RAGE, 113
spectral, 84
spectral mapping, 90
Stone, 108
Stone–Weierstraß, 45
virial, 179
Weierstraß, 13
Weyl, 129
Wiener, 110
Topological space, 4
Topology
base, 5
product, 7
Total, 12
Trace, 127
Trace class, 126
Triangle inequality, 11
Trotter product formula, 115
Uncertainty principle, 150
Uniform boundedness principle, 28
Unit vector, 15, 32
Unitary group
Generator, 49
Urysohn lemma, 10
Variance, 48
Vitali set, 212
Wave function, 47
Wave operators, 197
Weak convergence, 21, 41
Weierstraß approximation, 13
Weyl relation, 150
Weyl sequence, 64
singular, 128
Weyl–Titchmarsh m-function, 172
Wronskian, 158
§1.4. Orthogonal sums and tensor products 38
§1.5. The C∗ algebra of bounded linear operators 40
§1.6. Weak and strong convergence 41
§1.7. Appendix: The Stone–Weierstraß theorem 44

Chapter 2. Self-adjointness and spectrum 47
§2.1. Some quantum mechanics 47
§2.2. Self-adjoint operators 50
§2.3. Resolvents and spectra 61
§2.4. Orthogonal sums of operators 67
§2.5. Self-adjoint extensions 68
§2.6. Appendix: Absolutely continuous functions 72

Chapter 3. The spectral theorem 75
§3.1. The spectral theorem 75
§3.2. More on Borel measures 85
§3.3. Spectral types 89
§3.4. Appendix: The Herglotz theorem 91

Chapter 4. Applications of the spectral theorem 97
§4.1. Integral formulas 97
§4.2. Commuting operators 100
§4.3. The min-max theorem 103
§4.4. Estimating eigenspaces 104
§4.5. Tensor products of operators 105

Chapter 5. Quantum dynamics 107
§5.1. The time evolution and Stone's theorem 107
§5.2. The RAGE theorem 110
§5.3. The Trotter product formula 115

Chapter 6. Perturbation theory for self-adjoint operators 117
§6.1. Relatively bounded operators and the Kato–Rellich theorem 117
§6.2. More on compact operators 119
§6.3. Hilbert–Schmidt and trace class operators 122
§6.4. Relatively compact operators and Weyl's theorem 128
§6.5. Strong and norm resolvent convergence 131

Part 2. Schrödinger Operators

Chapter 7. The free Schrödinger operator 139
§7.1. The Fourier transform 139
§7.2. The free Schrödinger operator 142
§7.3. The time evolution in the free case 144
§7.4. The resolvent and Green's function 145

Chapter 8. Algebraic methods 149
§8.1. Position and momentum 149
§8.2. Angular momentum 151
§8.3. The harmonic oscillator 154

Chapter 9. One dimensional Schrödinger operators 157
§9.1. Sturm–Liouville operators 157
§9.2. Weyl's limit circle, limit point alternative 161
§9.3. Spectral transformations 168

Chapter 10. One-particle Schrödinger operators 177
§10.1. Self-adjointness and spectrum 177
§10.2. The hydrogen atom 178
§10.3. Angular momentum 181
§10.4. The eigenvalues of the hydrogen atom 184
§10.5. Nondegeneracy of the ground state 186

Chapter 11. Atomic Schrödinger operators 189
§11.1. Self-adjointness 189
§11.2. The HVZ theorem 191

Chapter 12. Scattering theory 197
§12.1. Abstract theory 197
§12.2. Incoming and outgoing states 200
§12.3. Schrödinger operators with short range potentials 202

Part 3. Appendix

Appendix A. Almost everything about Lebesgue integration 209
§A.1. Borel measures in a nut shell 209
§A.2. Extending a premeasure to a measure 213
§A.3. Measurable functions 218
§A.4. The Lebesgue integral 220
§A.5. Product measures 224
§A.6. Decomposition of measures 227
§A.7. Derivatives of measures 229

Bibliography 235
Glossary of notations 237
Index 241
Preface

Overview

The present manuscript was written for my course Schrödinger Operators held at the University of Vienna in Winter 1999, Summer 2002, and Summer 2005. It is supposed to give a brief but rather self contained introduction to the mathematical methods of quantum mechanics with a view towards applications to Schrödinger operators. The applications presented are highly selective and many important and interesting items are not touched. In particular, I am not trying to provide an encyclopedic reference.

My approach is built around the spectral theorem as the central object. Hence I try to get to it as quickly as possible. Moreover, I do not take the detour over bounded operators but I go straight for the unbounded case. In addition, existence of spectral measures is established via the Herglotz rather than the Riesz representation theorem since this approach paves the way for an investigation of spectral types via boundary values of the resolvent as the spectral parameter approaches the real line.

The first part is a stripped down introduction to spectral theory of unbounded operators where I try to introduce only those topics which are needed for the applications later on. This has the advantage that you will not get drowned in results which are never used again before you get to the applications. Nevertheless I still feel that the first part should give you a solid background covering all important results which are usually taken for granted in more advanced books and research papers.

The second part starts with the free Schrödinger equation and computes the free resolvent and time evolution. In addition, I discuss position, momentum, and angular momentum operators via algebraic methods. This is usually found in any physics textbook on quantum mechanics, with the only difference that I include some technical details which are usually not found there. Finally, I compute the spectrum of the hydrogen atom; again I try to provide some mathematical details not found in physics textbooks. Further topics are nondegeneracy of the ground state, spectra of atoms (the HVZ theorem) and scattering theory.

Prerequisites

I assume some previous experience with Hilbert spaces and bounded linear operators which should be covered in any basic course on functional analysis. However, while this assumption is reasonable for mathematics students, it might not always be for physics students. For this reason there is a preliminary chapter reviewing all necessary results (including proofs). In addition, there is an appendix (again with proofs) providing all necessary results from measure theory.

Readers guide

There is some intentional overlap between Chapter 0, Chapter 1 and Chapter 2. Hence, provided you have the necessary background, you can start reading in Chapter 1 or even Chapter 2. Chapters 2, 3 are key chapters and you should study them in detail (except for Section 2.5 which can be skipped on first reading). Chapter 4 should give you an idea of how the spectral theorem is used. You should have a look at (e.g.) the first section and you can come back to the remaining ones as needed. Chapter 5 contains two key results from quantum dynamics, Stone's theorem and the RAGE theorem. In particular the RAGE theorem shows the connections between long time behavior and spectral types. Chapter 6 is again of central importance and should be studied in detail.

The chapters in the second part are mostly independent of each other except for the first one, Chapter 7, which is a prerequisite for all others except for Chapter 9.

If you are interested in one dimensional models (Sturm–Liouville equations), Chapter 9 is all you need.

If you are interested in atoms, read Chapter 7, Chapter 10, and Chapter 11. In particular, you can skip the separation of variables (Sections 10.3 and 10.4, which require Chapter 9) method for computing the eigenvalues of the Hydrogen atom if you are happy with the fact that there are countably many which accumulate at the bottom of the continuous spectrum.

If you are interested in scattering theory, read Chapter 7, the first two sections of Chapter 10, and Chapter 12. Chapter 5 is one of the key prerequisites in this case.

Availability

It is available from
http://www.mat.univie.ac.at/~gerald/ftp/book-schroe/

Acknowledgments

I'd like to thank Volker Enß for making his lecture notes available to me and Wang Lanning, Maria Hoffmann-Ostenhof, Zhenyou Huang, Harald Rindler, and Karl Unterkofler for pointing out errors in previous versions.

Gerald Teschl

Vienna, Austria
February, 2005
Part 0

Preliminaries
Chapter 0

A first look at Banach and Hilbert spaces

I assume that the reader has some basic familiarity with measure theory and functional analysis. For convenience, some facts needed from Banach and L^p spaces are reviewed in this chapter. A crash course in measure theory can be found in the appendix. If you feel comfortable with terms like Lebesgue L^p spaces, Banach space, or bounded linear operator, you can skip this entire chapter. However, you might want to at least browse through it to refresh your memory.

0.1. Warm up: Metric and topological spaces

Before we begin I want to recall some basic facts from metric and topological spaces. I presume that you are familiar with these topics from your calculus course. A good reference is [8].

A metric space is a space X together with a function d : X × X → R such that

(i) d(x, y) ≥ 0,
(ii) d(x, y) = 0 if and only if x = y,
(iii) d(x, y) = d(y, x),
(iv) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).

If (ii) does not hold, d is called a semi-metric.

Example. Euclidean space R^n together with d(x, y) = (∑_{k=1}^n (x_k − y_k)^2)^{1/2} is a metric space, and so is C^n together with d(x, y) = (∑_{k=1}^n |x_k − y_k|^2)^{1/2}.
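The axioms (i)–(iv) above are easy to test numerically for a concrete candidate metric. The sketch below is purely illustrative (the helper names are mine, not the text's); it checks the axioms for the Euclidean distance on random sample points.

```python
import math
import random

def euclid(x, y):
    # Euclidean metric d(x, y) = (sum_k (x_k - y_k)^2)^(1/2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def check_metric_axioms(d, points, tol=1e-12):
    # Verify axioms (i)-(iv) on every pair/triple of sample points.
    for x in points:
        assert d(x, x) <= tol                      # (ii), the easy direction
        for y in points:
            assert d(x, y) >= 0                    # (i)  nonnegativity
            assert abs(d(x, y) - d(y, x)) <= tol   # (iii) symmetry
            for z in points:
                # (iv) triangle inequality, up to rounding
                assert d(x, z) <= d(x, y) + d(y, z) + tol
    return True

random.seed(0)
pts = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(15)]
print(check_metric_axioms(euclid, pts))  # True
```

Such a check can of course only falsify, never prove, the axioms; but replacing `euclid` by a non-symmetric "distance" makes it fail immediately, which is a cheap way to catch a function that is not actually a metric.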
The set

B_r(x) = {y ∈ X | d(x, y) < r}   (0.1)

is called an open ball around x with radius r > 0.

A point x of some set U is called an interior point of U if U contains some ball around x. A point x is called a limit point of U if B_r(x) ∩ (U\{x}) ≠ ∅ for every ball. Note that a limit point need not lie in U, but U contains points arbitrarily close to x. Moreover, x is not a limit point of U if and only if it is an interior point of the complement of U.

Example. Consider R with the usual metric and let U = (−1, 1). Then every point x ∈ U is an interior point of U. The points ±1 are limit points of U.

A set consisting only of interior points is called open. The family of open sets O satisfies the following properties:

(i) ∅, X ∈ O,
(ii) O_1, O_2 ∈ O implies O_1 ∩ O_2 ∈ O,
(iii) {O_α} ⊆ O implies ∪_α O_α ∈ O.

That is, O is closed under finite intersections and arbitrary unions.

In general, a space X together with a family of sets O, the open sets, satisfying (i)–(iii), is called a topological space. If x is an interior point of U, then U is also called a neighborhood of x.

There are usually different choices for the topology. Two usually not very interesting examples are the trivial topology O = {∅, X} and the discrete topology O = P(X) (the power set of X). Given two topologies O_1 and O_2 on X, O_1 is called weaker (or coarser) than O_2 if and only if O_1 ⊆ O_2.

Example. Note that different metrics can give rise to the same topology. For example, we can equip R^n (or C^n) with the Euclidean distance d(x, y) as before, or we could also use

d̃(x, y) = ∑_{k=1}^n |x_k − y_k|.   (0.2)

Then

(1/√n) ∑_{k=1}^n |x_k| ≤ (∑_{k=1}^n |x_k|^2)^{1/2} ≤ ∑_{k=1}^n |x_k|   (0.3)

shows B_{r/√n}(x) ⊆ B̃_r(x) ⊆ B_r(x), where B, B̃ are balls computed using d, d̃, respectively. Hence the topology is the same for both metrics.
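The two-sided estimate (0.3) can be sanity-checked numerically. The sketch below (function names are mine, for illustration only) samples random vectors and verifies both inequalities.

```python
import math
import random

def norm1(x):
    # l^1 norm: sum_k |x_k|
    return sum(abs(t) for t in x)

def norm2(x):
    # l^2 norm: (sum_k |x_k|^2)^(1/2)
    return math.sqrt(sum(t * t for t in x))

random.seed(1)
n = 5
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(n)]
    # (1/sqrt(n)) * norm1(x) <= norm2(x) <= norm1(x), cf. (0.3)
    assert norm1(x) / math.sqrt(n) <= norm2(x) + 1e-9
    assert norm2(x) <= norm1(x) + 1e-9
print("(0.3) holds on all samples")
```

The left inequality is sharp for vectors with equal entries, the right one for vectors supported on a single coordinate; this is exactly why the balls of d and d̃ squeeze each other as described above.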
Example. We can always replace a metric d by the bounded metric

d̃(x, y) = d(x, y) / (1 + d(x, y))   (0.4)

without changing the topology.

Every subspace Y of a topological space X becomes a topological space of its own if we call Õ ⊆ Y open if there is some open set O ⊆ X such that Õ = O ∩ Y (induced topology).

Example. The set (0, 1] ⊆ R is not open in the topology of X = R, but it is open in the induced topology when considered as a subset of Y = [−1, 1].

A family of open sets B ⊆ O is called a base for the topology if for each x and each neighborhood U(x), there is some set O ∈ B with x ∈ O ⊆ U(x). Since O = ∪_{x∈O} U(x) we have

Lemma 0.1. If B ⊆ O is a base for the topology, then every open set can be written as a union of elements from B.

If there exists a countable base, then X is called second countable.

Example. By construction the open balls B_{1/n}(x) are a base for the topology in a metric space. In the case of R^n (or C^n) it even suffices to take balls with rational center, and hence R^n (and C^n) are second countable.

A topological space is called a Hausdorff space if for two different points there are always two disjoint neighborhoods.

Example. Any metric space is a Hausdorff space: Given two different points x and y, the balls B_{d/2}(x) and B_{d/2}(y), where d = d(x, y) > 0, are disjoint neighborhoods (a semi-metric space will not be Hausdorff).

The complement of an open set is called a closed set. It follows from de Morgan's rules that the family of closed sets C satisfies

(i) ∅, X ∈ C,
(ii) C_1, C_2 ∈ C implies C_1 ∪ C_2 ∈ C,
(iii) {C_α} ⊆ C implies ∩_α C_α ∈ C.

That is, closed sets are closed under finite unions and arbitrary intersections.

The smallest closed set containing a given set U is called the closure

U̅ = ∩_{C∈C, U⊆C} C,   (0.5)

and the largest open set contained in a given set U is called the interior

U° = ∪_{O∈O, O⊆U} O.   (0.6)

It is straightforward to check that

Lemma 0.2. Let X be a topological space, then the interior of U is the set of all interior points of U, and the closure of U is the set of all limit points of U.

A sequence (x_n)_{n=1}^∞ ⊆ X is said to converge to some point x ∈ X if d(x, x_n) → 0. We write lim_{n→∞} x_n = x as usual in this case. Clearly the limit is unique if it exists (this is not true for a semi-metric).

Note that convergence can also be equivalently formulated in topological terms: A sequence x_n converges to x if and only if for every neighborhood U of x there is some N ∈ N such that x_n ∈ U for n ≥ N. In a Hausdorff space the limit is unique.

A point x is clearly a limit point of U if and only if there is some sequence x_n ∈ U converging to x.

Every convergent sequence is a Cauchy sequence, that is, for every ε > 0 there is some N ∈ N such that

d(x_n, x_m) ≤ ε,   n, m ≥ N.   (0.7)

If the converse is also true, that is, if every Cauchy sequence has a limit, then X is called complete.

A set U is called dense if its closure is all of X, that is, if U̅ = X. A metric space is called separable if it contains a countable dense set.

Lemma 0.3. Let X be a separable metric space. Every subset of X is again separable.

Proof. Let A = {x_n}_{n∈N} be a dense set in X. The only problem is that A ∩ Y might contain no elements at all. However, some elements of A must be at least arbitrarily close: Let J ⊆ N^2 be the set of all pairs (n, m) for which B_{1/m}(x_n) ∩ Y ≠ ∅ and choose some y_{n,m} ∈ B_{1/m}(x_n) ∩ Y for all (n, m) ∈ J. Then B = {y_{n,m}}_{(n,m)∈J} ⊆ Y is countable. To see that B is dense choose y ∈ Y. Then there is some sequence x_{n_k} with d(x_{n_k}, y) < 1/k. Hence (n_k, k) ∈ J and d(y_{n_k,k}, y) ≤ d(y_{n_k,k}, x_{n_k}) + d(x_{n_k}, y) ≤ 2/k → 0.

A function between metric spaces X and Y is called continuous at a point x ∈ X if for every ε > 0 we can find a δ > 0 such that

d_Y(f(x), f(y)) ≤ ε  if  d_X(x, y) < δ.   (0.8)

If f is continuous at every point it is called continuous.

Lemma 0.4. A closed subset of a complete metric space is again a complete metric space.
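Completeness can fail for a subspace such as Q: the Newton iterates for √2 below form a Cauchy sequence of rationals whose limit is irrational (an illustrative sketch, not part of the original text).

```python
from fractions import Fraction

# Newton iteration x_{n+1} = (x_n + 2/x_n) / 2 stays inside Q,
# but the limit sqrt(2) is not rational: Q is not complete.
x = Fraction(2)
iterates = [x]
for _ in range(6):
    x = (x + 2 / x) / 2
    iterates.append(x)

# Successive distances shrink rapidly: Cauchy behaviour inside Q ...
gaps = [abs(float(a - b)) for a, b in zip(iterates, iterates[1:])]
assert all(later < earlier for earlier, later in zip(gaps, gaps[1:]))

# ... while every single iterate is an exact rational number.
print(float(iterates[-1]))  # ~1.4142135623730951
```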
Example. Both R^n and C^n are complete metric spaces.

Lemma 0.5. Let X, Y be metric spaces and f : X → Y. The following are equivalent:

(i) f is continuous at x (i.e., (0.8) holds),
(ii) f(x_n) → f(x) whenever x_n → x,
(iii) for every neighborhood V of f(x), f^{−1}(V) is a neighborhood of x.

Proof. (i) ⇒ (ii) is obvious. (ii) ⇒ (iii): If (iii) does not hold, there is a neighborhood V of f(x) such that B_δ(x) ⊄ f^{−1}(V) for every δ. Hence we can choose a sequence x_n ∈ B_{1/n}(x) such that f(x_n) ∉ V. Thus x_n → x but f(x_n) ↛ f(x). (iii) ⇒ (i): Choose V = B_ε(f(x)) and observe that by (iii), B_δ(x) ⊆ f^{−1}(V) for some δ.

The last item implies that f is continuous if and only if the inverse image of every open (closed) set is again open (closed).

Note: In a topological space, (iii) is used as the definition of continuity. However, in general (ii) and (iii) will no longer be equivalent unless one uses generalized sequences, so called nets, where the index set N is replaced by arbitrary directed sets.

If X and Y are metric spaces, then X × Y together with

d((x_1, y_1), (x_2, y_2)) = d_X(x_1, x_2) + d_Y(y_1, y_2)   (0.9)

is a metric space. A sequence (x_n, y_n) converges to (x, y) if and only if x_n → x and y_n → y. In particular, the projections onto the first, (x, y) ↦ x, respectively onto the second, (x, y) ↦ y, coordinate are continuous.

Example. If we consider R × R, we do not get the Euclidean distance of R^2 unless we modify (0.9) as follows:

d̃((x_1, y_1), (x_2, y_2)) = (d_X(x_1, x_2)^2 + d_Y(y_1, y_2)^2)^{1/2}.   (0.10)

As noted in our previous example, the topology (and thus also convergence/continuity) is independent of this choice.

In particular, by

|d(x_n, y_n) − d(x, y)| ≤ d(x_n, x) + d(y_n, y)   (0.11)

we see that d : X × X → R is continuous.

If X and Y are just topological spaces, the product topology is defined by calling O ⊆ X × Y open if for every point (x, y) ∈ O there are open neighborhoods U of x and V of y such that U × V ⊆ O. In the case of metric spaces this clearly agrees with the topology defined via the product metric (0.9).

A cover of a set Y ⊆ X is a family of sets {U_α} such that Y ⊆ ∪_α U_α. A cover is called open if all U_α are open. A subset of {U_α} is called a subcover.
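The equivalence of the two product metrics (0.9) and (0.10) can be illustrated with a short numerical sketch (the function names are mine):

```python
import math

def d_sum(p, q):
    # Product metric (0.9): d_X(x1, x2) + d_Y(y1, y2) on R x R
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_eucl(p, q):
    # Modified version (0.10): the Euclidean distance on R^2
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Equivalence: d_eucl <= d_sum <= sqrt(2) * d_eucl, so both metrics
# generate the same (product) topology.
p, q = (0.0, 0.0), (1.0, 1.0)
assert d_eucl(p, q) <= d_sum(p, q) <= math.sqrt(2) * d_eucl(p, q) + 1e-12
print(d_sum(p, q), round(d_eucl(p, q), 3))  # 2.0 1.414
```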
A sequence (xn . (x2 .5.10) As noted in our previous example. The last item implies that f is continuous if and only if the inverse image of every open (closed) set is again open (closed).0.
yk (x)) cover Y . yn such that V (yj ) cover Y . For every n . By compactness of Y . to every family of open sets there is a corresponding family of closed sets and vice versa. there are y1 . . j=1 (iv) Let {Oα } be an open cover for X × Y . (ii) Let {Oα } be an open cover for the closed subset Y . (iii) Let Y ⊆ X be compact.y) . A topological space is compact if and only if it has the ﬁnite intersection property: The intersection of a family of closed sets is empty if and only if the intersection of some ﬁnite subfamily is empty. Let X be a topological space. (i) The continuous image of a compact set is compact. By deﬁnition of the product topology there is some open rectangle U (x. (iv) The product of compact sets is compact. (ii) Every closed subset of a compact set is compact. (i) Just observe that if {Oα } is an open cover for f (Y ). Fix x ∈ X\Y (if Y = X there is nothing to do). (v) A compact set is also sequentially compact. then {f −1 (Oα )} is one for Y . Then {Oα } ∪ {X\Y } is an open cover for X. the open sets are a cover if and only if the corresponding closed sets have empty intersection. Lemma 0. y) × V (x.6. y) such that (x. Proof. . (v) Let xn be a sequence which has no convergent subsequence. A ﬁrst look at Banach and Hilbert spaces A subset K ⊂ X is called compact if every open cover has a ﬁnite subcover. Proof. yk (x)). Then K = {xn } has no limit points and is hence compact by (ii). We show that X\Y is open. Hence there are ﬁnitely many points yk (x) such V (x. y) ∈ Oα(x. yk (xj )) ⊆ Oα(xj .7. Lemma 0. y)}y∈Y is an open cover of Y . U (xj ) × V (xj .yk (xj )) cover X × Y . A subset K ⊂ X is called sequentially compact if every sequence has a convergent subsequence. Hence for ﬁxed x. By taking complements. By construction. For every (x. {V (x. y) ∈ X × Y there is some α(x. Moreover.y) . . y) ⊆ Oα(x.8 0. Set U (x) = k U (x. (iii) If X is Hausdorﬀ. {U (x)}x∈X is an open cover and there are ﬁnitely many points xj such U (xj ) cover X. 
By the deﬁnition of Hausdorﬀ. any compact set is closed. But then U (x) = n Uyj (x) is a neighborhood of x which does not intersect Y . for every y ∈ Y there are disjoint neighborhoods V (y) of y and Uy (x) of x. Since ﬁnite intersections of open sets are open.
Proceeding like this we obtain a Cauchy sequence yn (note that by construction In+1 ⊆ In and hence yn − ym  ≤ b−a for m ≥ n). Subdivide I1 and pick y2 = xn2 .12) . If there were none. b] has a convergent subsequence. In Rn (or Cn ) a set is compact if and only if it is bounded and closed. The distance between a point x ∈ X and a subset Y ⊆ X is dist(x. So it remains to show that there is such an ε. m=1 In particular. n A topological space is called locally compact if every point has a compact neighborhood. we are done if we can show that for every open cover {Oα } there is some ε > 0 such that for every x we have Bε (x) ⊆ Oα for some α = α(x). ﬁnitely many suﬃce to cover K. Choose 1 ε = n and pick a corresponding xn . call it I1 .0. y∈Y (0. we k=1 have that Oα(xk ) is a cover as well. In a metric space compact and sequentially compact are equivalent. Let y1 = xn1 be the ﬁrst one. Rn is locally compact. Please also recall the HeineBorel theorem: Theorem 0. Let X be a metric space. Then at least one of these two 2 2 intervals. Let x = lim xn . Moreover. First of all note that every cover of open balls with ﬁxed radius ε > 0 has a ﬁnite subcover. x) < 2 we have B1/n (xn ) ⊆ Bε (x) ⊆ Oα contradicting our assumption.1. xm ) > ε for m < n. Let xn be our sequence and divide I = [a. then x lies in some Oα and hence Bε (x) ⊆ Oα . But choosing 1 ε ε n so large that n < 2 and d(xn . a contradiction. Proof. for every ε > 0 there must be an x such that Bε (x) ⊆ Oα for every α. with n2 > n1 as before. by Lemma 0. Since if this were false we could construct a sequence xn ∈ X\ n−1 Bε (xm ) such that d(xn .9 (HeineBorel).8 it suﬃces to show that every sequence in I = [a. Then a subset is compact if and only if it is sequentially compact. Example. By Lemma 0. it is no restriction to assume xn converges (after maybe passing to a subsequence). However. y). a+b ] ∪ [ a+b ]. contains inﬁnitely many elements of our sequence. Indeed. Y ) = inf d(x. 
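For a finite set Y, the infimum in (0.12) is just a minimum, so the distance function is easy to evaluate. A small sketch (the function name is mine; `math.dist` needs Python 3.8 or later):

```python
import math

def dist_to_set(x, Y):
    # dist(x, Y) = inf_{y in Y} d(x, y); for finite Y the inf is a min.
    return min(math.dist(x, y) for y in Y)

Y = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(dist_to_set((2.0, 0.0), Y))  # 1.0  (closest point is (1, 0))
print(dist_to_set((0.0, 0.0), Y))  # 0.0  (x lies in Y)
```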
Warm up: Metric and topological spaces 9 there is a ball Bεn (xn ) which contains only ﬁnitely many elements of K. choosing {xk }n such that Bε (xk ) is a cover. Since X is sequentially compact.7 (ii) and (iii) it suﬃces to show that a closed interval in I ⊆ R is compact. Lemma 0. Proof.8.
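The bisection construction in the proof above can be mimicked numerically. The sketch below is my own illustration, not part of the text: for a finite prefix of a bounded sequence, it keeps the half-interval containing more of the remaining terms (a heuristic stand-in for "infinitely many") and picks one term from each nested interval, so successive picks lie in intervals of length (b − a)/2ⁿ⁺¹ and form a Cauchy subsequence.

```python
import math

def bisection_subsequence(x, a, b, steps=20):
    """Pick a subsequence of x (values in [a, b]) via repeated bisection."""
    lo, hi = a, b
    idx = -1            # index of the last chosen term
    chosen = []
    for _ in range(steps):
        mid = (lo + hi) / 2
        # indices (after idx) whose values fall in each half-interval
        left = [n for n in range(idx + 1, len(x)) if lo <= x[n] <= mid]
        right = [n for n in range(idx + 1, len(x)) if mid < x[n] <= hi]
        # heuristic: keep the half with more remaining terms
        if len(left) >= len(right):
            hi, pool = mid, left
        else:
            lo, pool = mid, right
        if not pool:
            break
        idx = pool[0]
        chosen.append(x[idx])
    return chosen

seq = [math.sin(n) for n in range(10000)]   # a bounded sequence in [-1, 1]
sub = bisection_subsequence(seq, -1.0, 1.0)
# terms i and i+1 both lie in the interval kept at step i, of length 2/2**(i+1)
gaps = [abs(sub[i + 1] - sub[i]) for i in range(len(sub) - 1)]
```

By construction the gaps obey the geometric bound from the proof, which is exactly why the subsequence is Cauchy.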
Lemma 0.10. Let X be a metric space. Then the map x ↦ dist(x, Y) is continuous; in fact,

	|dist(x, Y) − dist(z, Y)| ≤ d(x, z).	(0.13)

Moreover, x ∈ Ȳ if and only if dist(x, Y) = 0.

Proof. Taking the infimum in the triangle inequality d(x, y) ≤ d(x, z) + d(z, y) shows dist(x, Y) ≤ d(x, z) + dist(z, Y), that is, dist(x, Y) − dist(z, Y) ≤ d(x, z). Interchanging x and z gives dist(z, Y) − dist(x, Y) ≤ d(x, z), and the two estimates together yield (0.13). □

Lemma 0.11 (Urysohn). Suppose C₁ and C₂ are disjoint closed subsets of a metric space X. Then there is a continuous function f : X → [0, 1] such that f is zero on C₁ and one on C₂.

If X is locally compact and C₁ is compact, one can choose f with compact support.

Proof. To prove the first claim, set

	f(x) = dist(x, C₁) / (dist(x, C₁) + dist(x, C₂)),

which is continuous by the previous lemma (the denominator does not vanish, since C₁ and C₂ are disjoint and closed). For the second claim, observe that there is an open set O such that Ō is compact and C₁ ⊂ O ⊂ Ō ⊂ X\C₂. In fact, for every x ∈ C₁ there is a ball B_ε(x) such that its closure is compact and contained in X\C₂. Since C₁ is compact, finitely many of them cover C₁ and we can choose the union of those balls to be O. Now replace C₂ by X\O. □

Note that Urysohn's lemma implies that a metric space is normal; that is, for any two disjoint closed sets C₁ and C₂ there are disjoint open sets O₁ and O₂ such that Cⱼ ⊆ Oⱼ, j = 1, 2. In fact, choose f as in Urysohn's lemma and set O₁ = f⁻¹([0, 1/2)) respectively O₂ = f⁻¹((1/2, 1]).
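The Urysohn function from the proof can be checked directly on a concrete example. The sketch below is my own illustration (the sets C₁ = [−2, −1] and C₂ = [1, 2] in ℝ are sample choices, not from the text): it builds f(x) = dist(x, C₁)/(dist(x, C₁) + dist(x, C₂)) and verifies that f is 0 on C₁, 1 on C₂, and takes values in [0, 1].

```python
def dist_to_interval(x, a, b):
    # distance from a point x to the closed interval [a, b]
    return max(a - x, 0.0, x - b)

def urysohn(x):
    d1 = dist_to_interval(x, -2.0, -1.0)   # dist(x, C1)
    d2 = dist_to_interval(x, 1.0, 2.0)     # dist(x, C2)
    # denominator is positive because C1 and C2 are disjoint and closed
    return d1 / (d1 + d2)

vals = [urysohn(-1.5), urysohn(0.0), urysohn(1.5)]   # on C1 / between / on C2
```

Values on C₁ come out exactly 0 and values on C₂ exactly 1, since the corresponding distance in the numerator or denominator vanishes.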
Moreover.0. a mapping F : X → Y between to normed spaces is called continuous if fn → f implies F (fn ) → F (f ). we need to verify three things: (i) 1 (N) is a Vector space. From the triangle inequality we also get the inverse triangle inequality (Problem 0. Example. that is closed under addition and scalar multiplication (ii) . for given ε > 0 we can ﬁnd an Nε such that am − an 1 ≤ ε for m. it is not hard to see that the norm.16) To show this.2. That 1 (N) is closed under scalar multiplication and the two other properties of a norm are straightforward.1)  f − g ≤ f −g . vector addition.17) for any ﬁnite k. In fact. 1 satisﬁes the three requirements for a norm and (iii) 1 (N) is complete. Now consider j k am − an  ≤ ε j j j=1 (0. In addition to the concept of convergence we have also the concept of a Cauchy sequence and hence the concept of completeness: A normed space is called complete if every Cauchy sequence has a limit. (0. as usual. n ≥ Nε . g ∈ X (triangle inequality).18) . that is. g) = f −g and hence we know when a sequence of vectors fn converges to a vector f .2). It remains to show that 1 (N) is complete. and • f + g ≤ f + g for all f. Letting k → ∞ we conclude that 1 (N) is closed under addition and that the triangle inequality holds. First of all observe k k k aj + bj  ≤ j=1 j=1 aj  + j=1 bj  ≤ a 1 + b 1 (0. This implies in particular am − an  ≤ ε for j j any ﬁxed j. Let an = (an )∞ be j j=1 a Cauchy sequence. We will write fn → f or limn→∞ fn = f . Thus an is a Cauchy sequence for ﬁxed j and by completeness j of C has a limit: limn→∞ an = aj . is a Banach space. in this case. 1 = j=1 aj  (0. A complete normed space is called a Banach space.15) Once we have a norm. The space 1 (N) of all sequences a = (aj )∞ for which the norm j=1 ∞ a is ﬁnite. The Banach space of continuous functions 11 • λ f = λ f for all λ ∈ C and f ∈ X. we have a distance d(f. and multiplication by scalars are continuous (Problem 0.
in the language of real analysis. Next we want to know if there is a basis for C(I). if there is a set {un } ⊂ X of linearly independent vectors such that every element f ∈ X can be written as f= n ∀m. fn converges uniformly to f .23) that is. there is a wellknown result from real analysis which tells us that the uniform limit of continuous functions is again continuous. n > Nε .21) That is. C(I) with the maximum norm is a Banach space. up to this point we don’t know whether it is in our vector space C(I) or not. Or.22) cn un . The space with the norm ∞ (N) of all bounded sequences a = (aj )∞ together j=1 a ∞ = sup aj  j∈N (0. that is. j j=1 (0. In particular.24) then the span span{un } (the set of all ﬁnite linear combinations) of {un } is dense in X. letting m → ∞ in fm (x) − fn (x) ≤ ε we see f (x) − fn (x) ≤ ε ∀n > Nε . x ∈ I (0. Hence f (x) ∈ C(I) and thus every Cauchy sequence in C(I) converges. there is a limit f (x) for each x. we would even prefer a countable basis. whether it is continuous or not. x ∈ I. Now let us look at the case where fn is only a Cauchy sequence. in other words Theorem 0. In order to have only countable sums. n→∞ x∈I (0. that is. Then fn (x) is clearly a Cauchy sequence of real numbers for any ﬁxed x ∈ I. However.12. If such a basis exists. (0. A ﬁrst look at Banach and Hilbert spaces and take m → ∞: k aj − an  ≤ ε. Now what about convergence in this space? A sequence of functions fn (x) converges to f if and only if n→∞ lim f − fn = lim sup fn (x) − f (x) = 0. Hence (a−an ) ∈ 1 (N) and since a ∈ 1 (N) we ﬁnally conclude a = a + (a − a ) ∈ 1 (N).19) Since this holds for any ﬁnite k we even have a−an 1 ≤ ε. Moreover. A set whose span is dense is called total and if we have a total set. we also have a countable dense set (consider only linear combinations . (0. Thus we get a limiting function f (x). cn ∈ C.20) is a Banach space (Problem 0. n n n Example.3).12 0. by completeness of C. Fortunately. fn (x) converges uniformly to f (x).
A normed linear space containing a countable dense set is called separable.

Example. The Banach space ℓ¹(ℕ) is separable. In fact, the set of vectors δⁿ, with δⁿₙ = 1 and δⁿₘ = 0 for n ≠ m, is total: Let a ∈ ℓ¹(ℕ) be given and set aⁿ = Σ_{k=1}^n a_k δᵏ. Then

	‖a − aⁿ‖₁ = Σ_{j=n+1}^∞ |aⱼ| → 0	(0.25)

since aⱼⁿ = aⱼ for 1 ≤ j ≤ n and aⱼⁿ = 0 for j > n.

Luckily this is also the case for C(I):

Theorem 0.13 (Weierstraß). Let I be a compact interval. Then the set of polynomials is dense in C(I).

Proof. Let f(x) ∈ C(I) be given. By considering f(x) − f(a) − (f(b) − f(a))(x − a)/(b − a), it is no loss to assume that f vanishes at the boundary points. Moreover, without restriction we only consider I = [−1/2, 1/2] (why?); in particular, f(−1/2) = f(1/2) = 0. Now the claim follows from the lemma below using

	uₙ(x) = (1/Iₙ)(1 − x²)ⁿ,	(0.26)

where

	Iₙ = ∫_{−1}^{1} (1 − x²)ⁿ dx = n!/((1/2 + 1)⋯(1/2 + n)) = √π Γ(1 + n)/Γ(3/2 + n) = √(π/n)(1 + O(1/n)).	(0.27)

(Remark: The integral is known as the Beta function, and the asymptotics follow from Stirling's formula.) □

Lemma 0.14 (Smoothing). Let uₙ(x) be a sequence of nonnegative continuous functions on [−1, 1] such that

	∫_{|x|≤1} uₙ(x) dx = 1  and  ∫_{δ≤|x|≤1} uₙ(x) dx → 0,  δ > 0.	(0.28)

(In other words, uₙ has mass one and concentrates near x = 0 as n → ∞.)

Then for every f ∈ C[−1/2, 1/2] which vanishes at the endpoints, f(−1/2) = f(1/2) = 0, we have that

	fₙ(x) = ∫_{−1/2}^{1/2} uₙ(x − y) f(y) dy	(0.29)

converges uniformly to f(x).

Proof. Since f is uniformly continuous, for given ε we can find a δ (independent of x) such that |f(x) − f(y)| ≤ ε whenever |x − y| ≤ δ. Moreover, we can choose n such that ∫_{δ≤|y|≤1} uₙ(y) dy ≤ ε. Now abbreviate M = max |f| and note

	|f(x) − ∫_{−1/2}^{1/2} uₙ(x − y) f(x) dy| = |f(x)| |1 − ∫_{−1/2}^{1/2} uₙ(x − y) dy| ≤ M ε.	(0.30)

In fact, either the distance of x to one of the boundary points ±1/2 is smaller than δ, and hence |f(x)| ≤ ε, or otherwise the difference between one and the integral is smaller than ε. Using this we have

	|fₙ(x) − f(x)| ≤ ∫_{−1/2}^{1/2} uₙ(x − y) |f(y) − f(x)| dy + M ε
	 ≤ ∫_{|y|≤1/2, |x−y|≤δ} uₙ(x − y) |f(y) − f(x)| dy + ∫_{|y|≤1/2, |x−y|≥δ} uₙ(x − y) |f(y) − f(x)| dy + M ε
	 ≤ ε + 2M ε + M ε = (1 + 3M) ε,	(0.31)

which proves the claim. □

Note that fₙ will be as smooth as uₙ, hence the title smoothing lemma. The same idea is used to approximate noncontinuous functions by smooth ones (of course, the convergence will no longer be uniform in this case).

Corollary 0.15. C(I) is separable. The same is true for ℓ¹(ℕ), but not for ℓ∞(ℕ) (Problem 0.4)!
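The Landau-kernel convolution (0.26)–(0.29) can be tried out numerically. The sketch below is my own illustration and not part of the text: it discretizes the convolution by a midpoint Riemann sum, normalizes the kernel numerically instead of using (0.27), and uses the sample function f(y) = cos(πy), which vanishes at ±1/2 as the lemma requires. The sup-norm error on the grid should shrink as n grows.

```python
import math

M = 400
h = 1.0 / M
ys = [-0.5 + (k + 0.5) * h for k in range(M)]      # midpoint grid on [-1/2, 1/2]
f = [math.cos(math.pi * y) for y in ys]             # f(-1/2) = f(1/2) = 0

def sup_error(n):
    # normalize the Landau kernel numerically: I_n = int_{-1}^{1} (1 - t^2)^n dt
    K = 4 * M
    hk = 2.0 / K
    In = sum((1 - t * t) ** n
             for t in (-1 + (k + 0.5) * hk for k in range(K))) * hk
    err = 0.0
    for i, x in enumerate(ys):
        # f_n(x) = (1/I_n) * int (1 - (x - y)^2)^n f(y) dy  (note |x - y| <= 1)
        conv = sum((1 - (x - y) ** 2) ** n * f[j] for j, y in enumerate(ys)) * h / In
        err = max(err, abs(conv - f[i]))
    return err

errs = [sup_error(n) for n in (2, 16, 128)]         # should decrease with n
```

Since the convolution of f with the polynomial kernel is itself a polynomial in x, this is simultaneously a (crude) numerical Weierstraß approximation.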
Problem 0.1. Show that | ‖f‖ − ‖g‖ | ≤ ‖f − g‖.

Problem 0.2. Show that the norm, vector addition, and multiplication by scalars are continuous. That is, if fₙ → f, gₙ → g, and λₙ → λ, then ‖fₙ‖ → ‖f‖, fₙ + gₙ → f + g, and λₙ gₙ → λ g.

Problem 0.3. Show that ℓ∞(ℕ) is a Banach space.

Problem 0.4. Show that ℓ∞(ℕ) is not separable. (Hint: Consider sequences which take only the values one and zero. How many are there? What is the distance between two such sequences?)

0.3. The geometry of Hilbert spaces

So it looks like C(I) has all the properties we want. However, there is still one thing missing: How should we define orthogonality in C(I)?
In Euclidean space, two vectors are called orthogonal if their scalar product vanishes, so we would need a scalar product: Suppose H is a vector space. A map ⟨·,·⟩ : H × H → ℂ is called a skew linear form if it is conjugate linear in the first and linear in the second argument, that is,

	⟨α₁f₁ + α₂f₂, g⟩ = α₁* ⟨f₁, g⟩ + α₂* ⟨f₂, g⟩,
	⟨f, α₁g₁ + α₂g₂⟩ = α₁ ⟨f, g₁⟩ + α₂ ⟨f, g₂⟩,  α₁, α₂ ∈ ℂ,	(0.32)

where '∗' denotes complex conjugation. A skew linear form satisfying the requirements

(i) ⟨f, f⟩ > 0 for f ≠ 0 (positive definite),

(ii) ⟨f, g⟩ = ⟨g, f⟩* (symmetry)

is called an inner product or scalar product. Associated with every scalar product is a norm

	‖f‖ = √⟨f, f⟩.	(0.33)

The pair (H, ⟨·,·⟩) is called an inner product space. If H is complete, it is called a Hilbert space.

Example. Clearly ℂⁿ with the usual scalar product

	⟨a, b⟩ = Σ_{j=1}^n aⱼ* bⱼ	(0.34)

is a (finite dimensional) Hilbert space.

Example. A somewhat more interesting example is the Hilbert space ℓ²(ℕ), that is, the set of all sequences

	{ (aⱼ)_{j=1}^∞ | Σ_{j=1}^∞ |aⱼ|² < ∞ }	(0.35)

with scalar product

	⟨a, b⟩ = Σ_{j=1}^∞ aⱼ* bⱼ.	(0.36)

(Show that this is in fact a separable Hilbert space! Problem 0.5)

A vector f ∈ H is called normalized or a unit vector if ‖f‖ = 1. Two vectors f, g ∈ H are called orthogonal or perpendicular (f ⊥ g) if ⟨f, g⟩ = 0, and parallel if one is a multiple of the other.

Of course, I still owe you a proof for the claim that ‖f‖ = √⟨f, f⟩ is indeed a norm. Only the triangle inequality is nontrivial; it will follow from the Cauchy–Schwarz inequality below.
For two orthogonal vectors we have the Pythagorean theorem:

	‖f + g‖² = ‖f‖² + ‖g‖²,  f ⊥ g,	(0.37)

which is one line of computation.

Suppose u is a unit vector. Then the projection of f in the direction of u is given by

	f_∥ = ⟨u, f⟩ u,	(0.38)

and f_⊥, defined via

	f_⊥ = f − ⟨u, f⟩ u,	(0.39)

is perpendicular to u, since ⟨u, f_⊥⟩ = ⟨u, f − ⟨u, f⟩u⟩ = ⟨u, f⟩ − ⟨u, f⟩⟨u, u⟩ = 0.

[Figure: decomposition f = f_∥ + f_⊥ of a vector f into the components parallel and perpendicular to the unit vector u.]

Taking any other vector parallel to u, it is easy to see

	‖f − αu‖² = ‖f_⊥ + (f_∥ − αu)‖² = ‖f_⊥‖² + |⟨u, f⟩ − α|²,	(0.40)

and hence f_∥ = ⟨u, f⟩u is the unique vector parallel to u which is closest to f.

As a first consequence we obtain the Cauchy–Schwarz–Bunjakowski inequality:

Theorem 0.16 (Cauchy–Schwarz–Bunjakowski). Let H₀ be an inner product space. Then for every f, g ∈ H₀ we have

	|⟨f, g⟩| ≤ ‖f‖ ‖g‖,	(0.41)

with equality if and only if f and g are parallel.

Proof. It suffices to prove the case ‖g‖ = 1. But then the claim follows from

	‖f‖² = |⟨g, f⟩|² + ‖f_⊥‖² ≥ |⟨g, f⟩|².	(0.42)
□

Note that the Cauchy–Schwarz inequality entails that the scalar product is continuous in both variables; that is, if fₙ → f and gₙ → g, we have ⟨fₙ, gₙ⟩ → ⟨f, g⟩.

As another consequence we infer that the map f ↦ ‖f‖ is indeed a norm. In fact,

	‖f + g‖² = ‖f‖² + ⟨f, g⟩ + ⟨g, f⟩ + ‖g‖² ≤ (‖f‖ + ‖g‖)².
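The projection decomposition (0.38)–(0.39) and the Cauchy–Schwarz inequality (0.41) are easy to verify numerically. The following sketch is my own illustration in the real inner product space ℝ³ (the vectors are sample choices, not from the text).

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return dot(a, a) ** 0.5

f = [1.0, 2.0, -1.0]
u = [x / norm([3.0, 0.0, 4.0]) for x in [3.0, 0.0, 4.0]]   # a unit vector

c = dot(u, f)                           # <u, f>
f_par = [c * x for x in u]              # component parallel to u
f_perp = [x - y for x, y in zip(f, f_par)]

# Pythagoras: ||f||^2 = |<u,f>|^2 + ||f_perp||^2
pythagoras = abs(norm(f) ** 2 - (c ** 2 + norm(f_perp) ** 2))
orth = abs(dot(u, f_perp))              # should vanish
cs_gap = norm(f) * norm(u) - abs(c)     # >= 0 by Cauchy-Schwarz, > 0 here
```

Since f and u are not parallel, the Cauchy–Schwarz gap is strictly positive, matching the equality condition in the theorem.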
But let us return to C(I). Can we find a scalar product which has the maximum norm as associated norm? Unfortunately the answer is no! The reason is that the maximum norm does not satisfy the parallelogram law (Problem 0.6).

Theorem 0.17 (Jordan–von Neumann). A norm is associated with a scalar product if and only if the parallelogram law

	‖f + g‖² + ‖f − g‖² = 2‖f‖² + 2‖g‖²	(0.43)

holds. In this case the scalar product can be recovered from its norm by virtue of the polarization identity

	⟨f, g⟩ = ¼ (‖f + g‖² − ‖f − g‖² + i‖f − ig‖² − i‖f + ig‖²).	(0.44)

Proof. If an inner product space is given, verification of the parallelogram law and the polarization identity is straightforward (Problem 0.7).

To show the converse, we define

	s(f, g) = ¼ (‖f + g‖² − ‖f − g‖² + i‖f − ig‖² − i‖f + ig‖²).	(0.45)

Then s(f, f) = ‖f‖² and s(f, g) = s(g, f)* are straightforward to check. Moreover, another straightforward computation using the parallelogram law shows

	s(f, g) + s(f, h) = 2 s(f, (g + h)/2).	(0.46)

Now choosing h = 0 (and using s(f, 0) = 0) shows s(f, g) = 2 s(f, g/2), and thus s(f, g) + s(f, h) = s(f, g + h). Furthermore, by induction we infer (m/2ⁿ) s(f, g) = s(f, (m/2ⁿ) g); that is, λ s(f, g) = s(f, λg) for every positive rational λ. By continuity (check this!) this holds for all λ > 0, and s(f, −g) = −s(f, g) respectively s(f, ig) = i s(f, g) finishes the proof. □

Note that the parallelogram law and the polarization identity even hold for skew linear forms (Problem 0.7).

But how do we define a scalar product on C(I)? One possibility is

	⟨f, g⟩ = ∫_a^b f(x)* g(x) dx.	(0.47)

The corresponding inner product space is denoted by L²_cont(I). Note that we have

	‖f‖ ≤ √(b − a) ‖f‖_∞,	(0.48)

and hence the maximum norm is stronger than the L²_cont norm.

Suppose we have two norms ‖·‖₁ and ‖·‖₂ on a space X. Then ‖·‖₂ is said to be stronger than ‖·‖₁ if there is a constant m > 0 such that

	‖f‖₁ ≤ m ‖f‖₂.	(0.49)
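Both directions of the Jordan–von Neumann criterion can be seen in a two-dimensional example. The sketch below is my own illustration (the vectors are sample choices): the Euclidean norm satisfies the parallelogram law and the (real) polarization identity recovers the scalar product, while the maximum norm violates the parallelogram law, so it comes from no scalar product — exactly the point made for C(I) above.

```python
def eucl(v):
    return (v[0] ** 2 + v[1] ** 2) ** 0.5

def maxn(v):
    return max(abs(v[0]), abs(v[1]))

def add(v, w):
    return [v[0] + w[0], v[1] + w[1]]

def sub(v, w):
    return [v[0] - w[0], v[1] - w[1]]

f, g = [1.0, 2.0], [3.0, -1.0]

# parallelogram law: ||f+g||^2 + ||f-g||^2 = 2||f||^2 + 2||g||^2
lhs = eucl(add(f, g)) ** 2 + eucl(sub(f, g)) ** 2
rhs = 2 * eucl(f) ** 2 + 2 * eucl(g) ** 2

# real polarization: <f, g> = (||f+g||^2 - ||f-g||^2) / 4
polar = (eucl(add(f, g)) ** 2 - eucl(sub(f, g)) ** 2) / 4
dot_fg = f[0] * g[0] + f[1] * g[1]

# the same identity fails for the maximum norm
lhs_max = maxn(add(f, g)) ** 2 + maxn(sub(f, g)) ** 2
rhs_max = 2 * maxn(f) ** 2 + 2 * maxn(g) ** 2
```

Here lhs = rhs = 30 and polar = dot_fg = 1 for the Euclidean norm, while lhs_max = 25 ≠ 26 = rhs_max.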
It is straightforward to check:

Lemma 0.18. If ‖·‖₂ is stronger than ‖·‖₁, then every ‖·‖₂ Cauchy sequence is also a ‖·‖₁ Cauchy sequence.

Hence if a function F : X → Y is continuous in (X, ‖·‖₁), it is also continuous in (X, ‖·‖₂), and if a set is dense in (X, ‖·‖₂), it is also dense in (X, ‖·‖₁). In particular, L²_cont is separable.

But is L²_cont also complete? Unfortunately the answer is no:

Example. Take I = [0, 2] and define

	fₙ(x) = 0 for 0 ≤ x ≤ 1 − 1/n,  fₙ(x) = 1 + n(x − 1) for 1 − 1/n ≤ x ≤ 1,  fₙ(x) = 1 for 1 ≤ x ≤ 2.	(0.50)

Then fₙ(x) is a Cauchy sequence in L²_cont, but there is no limit in L²_cont! Clearly the limit should be the step function which is 0 for 0 ≤ x < 1 and 1 for 1 ≤ x ≤ 2, but this step function is discontinuous (Problem 0.8)!

This example shows that in infinite dimensional spaces different norms will give rise to different convergent sequences! In fact, the key to solving problems in infinite dimensional spaces is often finding the right norm! This is something which cannot happen in the finite dimensional case:

Theorem 0.19. If X is a finite dimensional vector space, then all norms are equivalent. That is, for any two given norms ‖·‖₁ and ‖·‖₂ there are constants m₁ and m₂ such that

	(1/m₂) ‖f‖₁ ≤ ‖f‖₂ ≤ m₁ ‖f‖₁.	(0.51)

Proof. Clearly we can choose a basis uⱼ, 1 ≤ j ≤ n, and assume that ‖·‖₂ is the usual Euclidean norm, ‖Σⱼ αⱼuⱼ‖₂² = Σⱼ |αⱼ|². Let f = Σⱼ αⱼ uⱼ. Then, by the triangle and Cauchy–Schwarz inequalities,

	‖f‖₁ ≤ Σⱼ |αⱼ| ‖uⱼ‖₁ ≤ √(Σⱼ ‖uⱼ‖₁²) ‖f‖₂,	(0.52)

and we can choose m₂ = √(Σⱼ ‖uⱼ‖₁²). In particular, ‖·‖₁ is continuous with respect to ‖·‖₂ and attains its minimum m > 0 on the unit sphere (which is compact by the Heine–Borel theorem). Now choose m₁ = 1/m. □

Problem 0.5. Show that ℓ²(ℕ) is a separable Hilbert space.
0.4. Completeness

Since L²_cont is not complete, how can we obtain a Hilbert space out of it? Well, the answer is simple: take the completion.

If X is an (incomplete) normed space, consider the set X̃ of all Cauchy sequences in X. Call two Cauchy sequences equivalent if their difference converges to zero, and denote by X̄ the set of all equivalence classes. It is easy to see that X̄ (and X̃) inherit the vector space structure from X. Moreover,

Lemma 0.20. If xₙ is a Cauchy sequence, then ‖xₙ‖ converges.

Consequently, the norm of a Cauchy sequence (xₙ)_{n=1}^∞ can be defined by ‖(xₙ)_{n=1}^∞‖ = lim_{n→∞} ‖xₙ‖ and is independent of the equivalence class (show this!). Thus X̄ is a normed space (X̃ is not! why?).

Theorem 0.21. X̄ is a Banach space containing X as a dense subspace if we identify x ∈ X with the equivalence class of all sequences converging to x.

Proof. (Outline) It remains to show that X̄ is complete. Let ξₙ = [(x_{n,j})_{j=1}^∞] be a Cauchy sequence in X̄. Then it is not hard to see that ξ = [(x_{j,j})_{j=1}^∞] is its limit. □

Let me remark that the completion X̄ is unique. More precisely, any other complete space which contains X as a dense subset is isomorphic to X̄. This can for example be seen by showing that the identity map on X has a unique extension to X̄ (compare Theorem 0.24 below). In particular, it is no restriction to assume that a normed linear space or an inner product space is complete. However, in the important case of L²_cont it is somewhat inconvenient to work with equivalence classes of Cauchy sequences, and hence we will give a different characterization using the Lebesgue integral later.

Problem 0.6. Show that the maximum norm (on C[0, 1]) does not satisfy the parallelogram law.

Problem 0.7. Let s(f, g) be a skew linear form and p(f) = s(f, f) the associated quadratic form. Prove the parallelogram law

	p(f + g) + p(f − g) = 2p(f) + 2p(g)	(0.53)

and the polarization identity

	s(f, g) = ¼ (p(f + g) − p(f − g) + i p(f − ig) − i p(f + ig)).	(0.54)

Problem 0.8. Prove the claims made about fₙ, defined in (0.50), in the last example.
0.5. Bounded operators

A linear map A between two normed spaces X and Y will be called a (linear) operator

	A : D(A) ⊆ X → Y.	(0.55)

The linear subspace D(A) on which A is defined is called the domain of A and is usually required to be dense. The kernel

	Ker(A) = {f ∈ D(A) | Af = 0}	(0.56)

and range

	Ran(A) = {Af | f ∈ D(A)} = A D(A)	(0.57)

are defined as usual. The operator A is called bounded if the operator norm

	‖A‖ = sup_{‖f‖_X = 1} ‖Af‖_Y	(0.58)

is finite. In particular, a bounded operator is Lipschitz continuous,

	‖Af‖_Y ≤ ‖A‖ ‖f‖_X,	(0.59)

and hence continuous. The converse is also true:

Theorem 0.22. An operator A is bounded if and only if it is continuous.

Proof. Suppose A is continuous but not bounded. Then there is a sequence of unit vectors uₙ such that ‖Auₙ‖ ≥ n. Then fₙ = (1/n) uₙ converges to 0, but ‖Afₙ‖ ≥ 1 does not converge to 0, a contradiction. □

The set of all bounded linear operators from X to Y is denoted by L(X, Y). If X = Y, we write L(X, X) = L(X).

Theorem 0.23. The space L(X, Y) together with the operator norm (0.58) is a normed space. It is a Banach space if Y is.

Proof. That (0.58) is indeed a norm is straightforward. If Y is complete and Aₙ is a Cauchy sequence of operators, then Aₙf converges to an element g for every f. Define a new operator A via Af = g. By continuity of the vector operations, A is linear, and by continuity of the norm, ‖Af‖ = lim_{n→∞} ‖Aₙf‖ ≤ (lim_{n→∞} ‖Aₙ‖) ‖f‖ shows that it is bounded. Furthermore, given ε > 0, there is some N such that ‖Aₙ − Aₘ‖ ≤ ε for n, m ≥ N and thus ‖Aₙf − Aₘf‖ ≤ ε ‖f‖. Taking the limit m → ∞, we see ‖Aₙf − Af‖ ≤ ε ‖f‖; that is, Aₙ → A. □

Moreover, if A is bounded and densely defined, it is no restriction to assume that it is defined on all of X.

Theorem 0.24. Let A ∈ L(X, Y) and let Y be a Banach space. If D(A) is dense, there is a unique (continuous) extension of A to X which has the same norm.

Proof. Since a bounded operator maps Cauchy sequences to Cauchy sequences, this extension can only be given by

	Af = lim_{n→∞} Afₙ,  fₙ ∈ D(A),  f ∈ X.	(0.60)

To show that this definition is independent of the sequence fₙ → f, let gₙ → f be a second sequence and observe

	‖Afₙ − Agₙ‖ = ‖A(fₙ − gₙ)‖ ≤ ‖A‖ ‖fₙ − gₙ‖ → 0.	(0.61)

From continuity of vector addition and scalar multiplication it follows that our extension is linear. Finally, from continuity of the norm we conclude that the norm does not increase. □

An operator in L(X, ℂ) is called a bounded linear functional, and the space X* = L(X, ℂ) is called the dual space of X. A sequence fₙ is said to converge weakly, fₙ ⇀ f, if ℓ(fₙ) → ℓ(f) for every ℓ ∈ X*.

The Banach space of bounded linear operators L(X) even has a multiplication given by composition. Clearly this multiplication satisfies

	(A + B)C = AC + BC,  A(B + C) = AB + AC,  α(AB) = (αA)B = A(αB),  α ∈ ℂ,	(0.62)

and (AB)C = A(BC). Moreover, it is easy to see that we have

	‖AB‖ ≤ ‖A‖ ‖B‖.	(0.63)

In particular, (0.63) ensures that multiplication is continuous. We even have an identity, the identity operator 𝕀, satisfying ‖𝕀‖ = 1. A Banach space together with a multiplication satisfying the above requirements is called a Banach algebra.

Problem 0.9. Show that the differential operator A = d/dx, defined on D(A) = C¹[0, 1] ⊂ C[0, 1], is an unbounded operator.

Problem 0.10. Show that the integral operator

	(Kf)(x) = ∫₀¹ K(x, y) f(y) dy,	(0.65)

where K(x, y) ∈ C([0, 1] × [0, 1]), defined on D(K) = C[0, 1], is a bounded operator both in X = C[0, 1] (max norm) and X = L²_cont(0, 1).
dµ) and we can consider the quotient space Lp (X. Lebesgue Lp spaces We ﬁx some measure space (X.25. g)p ≤ 2p max(f p . Σ. then f p dµ = 0 X (0. (0. Let f be measurable.68) Then N (X. Observe that we have A = {xf (x) = 0} = n An . Show that AB ≤ A B for every A. Of course our hope is that Lp (X. then f (x) = 0 a. Thus f p = 0 only implies f (x) = 0 for almost every x. dµ) is a Banach space. Note that the proof also shows that if f is not 0 almost everywhere.67) if and only if f (x) = 0 almost everywhere with respect to µ. The converse is obvious.66) and denote by Lp (X. (with respect to λ). dµ) is a linear space. dµ). since f + gp ≤ 2p max(f . dµ)/N (X. If f p dµ = 0 we must have µ(An ) = 0 for every n and hence µ(A) = limn→∞ µ(An ) = 0. Proof. However. where An = 1 {x f (x) > n }. gp ) ≤ 2p (f p + gp ). A ﬁrst look at Banach and Hilbert spaces Problem 0. 1≤p (0. 0. (0. (with respect to Θ) if and only if f (0) = 0.69) . but not for all! Hence . Show that the multiplication in a Banach algebra X is continuous: xn → x and yn → y implies xn yn → xy. dµ) is a linear subspace of Lp (X. µ) and deﬁne the Lp norm by 1/p f p = X f p dµ . there is a small technical problem (recall that a property is said to hold almost everywhere if the set where it fails to hold is contained in a set of measure zero): Lemma 0. dµ). Let Θ be the Dirac measure centered at 0. The way out of this misery is to identify functions which are equal almost everywhere: Let N (X. Problem 0. Example. dµ) = Lp (X. B ∈ L(X).e.6.11.e. Let λ be the Lebesgue measure on R. dµ) = {f f (x) = 0 µalmost everywhere}. dµ) the set of all complex valued measurable functions for which f p is ﬁnite. Then the characteristic function of the rationals χQ is zero a. First of all note that Lp (X.12.22 0. p is not a norm on Lp (X. there is an ε > 0 such that µ({x f (x) ≥ ε}) > 0.
Observe that ‖f‖_p is well defined on the quotient space, that is, independent of the representative. If dµ is the Lebesgue measure on X ⊆ ℝⁿ, we simply write Lᵖ(X). With this modification we are back in business, since Lᵖ(X, dµ) turns out to be a Banach space. We will show this in the following sections.

Even though the elements of Lᵖ(X, dµ) are, strictly speaking, equivalence classes of functions, we will still call them functions for notational convenience. However, note that for f ∈ Lᵖ(X, dµ) the value f(x) is not well defined (unless there is a continuous representative, and different continuous functions are in different equivalence classes, e.g., in the case of Lebesgue measure).

With this in mind, let us also define L∞(X, dµ). It should be the set of bounded measurable functions B(X) together with the sup norm. The only problem is that, if we want to identify functions equal almost everywhere, the supremum is no longer independent of the equivalence class. The solution is the essential supremum

	‖f‖_∞ = inf{ C | µ({x | |f(x)| > C}) = 0 }.	(0.70)

That is, C is an essential bound if |f(x)| ≤ C almost everywhere, and the essential supremum is the infimum over all essential bounds.

Example. If λ is the Lebesgue measure, then the essential sup of χ_ℚ with respect to λ is 0. If Θ is the Dirac measure centered at 0, then the essential sup of χ_ℚ with respect to Θ is 1 (since χ_ℚ(0) = 1, and x = 0 is the only point which counts for Θ).

As before we set

	L∞(X, dµ) = B(X)/N(X, dµ)	(0.71)

and observe that ‖f‖_∞ is independent of the equivalence class. If you wonder where the ∞ comes from, have a look at Problem 0.13.

As a preparation for proving that Lᵖ is a Banach space, we will need Hölder's inequality, which plays a central role in the theory of Lᵖ spaces. In particular, it will imply Minkowski's inequality, which is just the triangle inequality for Lᵖ.

Theorem 0.26 (Hölder's inequality). Let p and q be dual indices, that is,

	1/p + 1/q = 1,	(0.72)

with 1 ≤ p ≤ ∞. If f ∈ Lᵖ(X, dµ) and g ∈ L^q(X, dµ), then fg ∈ L¹(X, dµ) and

	‖f g‖₁ ≤ ‖f‖_p ‖g‖_q.	(0.73)
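Hölder's inequality is easy to spot-check numerically. The sketch below is my own illustration and not from the text: it uses counting measure on {1, …, N} (so all integrals are finite sums) with the dual pair p = 3, q = 3/2 and sample sequences f, g.

```python
p, q = 3.0, 1.5                                  # dual indices: 1/3 + 2/3 = 1
N = 500
f = [1.0 / j ** 0.7 for j in range(1, N + 1)]
g = [(-1) ** j / j ** 0.4 for j in range(1, N + 1)]

def norm_r(x, r):
    # the L^r norm with respect to counting measure
    return sum(abs(t) ** r for t in x) ** (1.0 / r)

lhs = sum(abs(s * t) for s, t in zip(f, g))      # ||f g||_1
rhs = norm_r(f, p) * norm_r(g, q)                # ||f||_p ||g||_q
slack = rhs - lhs                                 # >= 0 by Hölder
```

The same helper immediately also checks the triangle (Minkowski) inequality ‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p, which Hölder's inequality implies in the proof below.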
Proof. The case p = 1, q = ∞ (respectively p = ∞, q = 1) follows directly from the properties of the integral, and hence it remains to consider 1 < p, q < ∞. First of all, it is no restriction to assume ‖f‖_p = ‖g‖_q = 1. Then, using the elementary inequality (Problem 0.14)

	a^{1/p} b^{1/q} ≤ (1/p) a + (1/q) b,  a, b ≥ 0,	(0.74)

with a = |f|ᵖ and b = |g|^q, and integrating over X gives

	∫_X |f g| dµ ≤ (1/p) ∫_X |f|ᵖ dµ + (1/q) ∫_X |g|^q dµ = 1,	(0.75)

which finishes the proof. □

As a consequence we also get

Theorem 0.27 (Minkowski's inequality). Let f, g ∈ Lᵖ(X, dµ). Then

	‖f + g‖_p ≤ ‖f‖_p + ‖g‖_p.	(0.76)

Proof. Since the cases p = 1, ∞ are straightforward, we only consider 1 < p < ∞. Using |f + g|ᵖ ≤ |f| |f + g|^{p−1} + |g| |f + g|^{p−1}, we obtain from Hölder's inequality (note (p − 1)q = p)

	‖f + g‖_p^p ≤ ‖f‖_p ‖(f + g)^{p−1}‖_q + ‖g‖_p ‖(f + g)^{p−1}‖_q = (‖f‖_p + ‖g‖_p) ‖f + g‖_p^{p−1}.	(0.77)

Dividing by ‖f + g‖_p^{p−1} (which we may assume nonzero) finishes the proof. □

This shows that Lᵖ(X, dµ) is a normed linear space. Finally, it remains to show that Lᵖ(X, dµ) is complete.

Theorem 0.28. The space Lᵖ(X, dµ), 1 ≤ p, is a Banach space.

Proof. Suppose fₙ is a Cauchy sequence. It suffices to show that some subsequence converges (show this). Hence we can drop some terms such that

	‖fₙ₊₁ − fₙ‖_p ≤ 1/2ⁿ.	(0.78)

Now consider gₙ = fₙ − fₙ₋₁ (set f₀ = 0). Then

	G(x) = Σ_{k=1}^∞ |g_k(x)|	(0.79)

is in Lᵖ. This follows from

	‖ Σ_{k=1}^n |g_k| ‖_p ≤ Σ_{k=1}^n ‖g_k‖_p ≤ ‖f₁‖_p + 1	(0.80)
using the monotone convergence theorem. In particular, G(x) < ∞ almost everywhere and the sum

	Σ_{n=1}^∞ gₙ(x) = lim_{n→∞} fₙ(x)	(0.81)

is absolutely convergent for those x. Now let f(x) be this limit. Since |f(x) − fₙ(x)|ᵖ converges to zero almost everywhere and |f(x) − fₙ(x)|ᵖ ≤ 2ᵖ G(x)ᵖ ∈ L¹, dominated convergence shows ‖f − fₙ‖_p → 0. □

In particular, in the proof of the last theorem we have seen:

Corollary 0.29. If ‖fₙ − f‖_p → 0, then there is a subsequence which converges pointwise almost everywhere.

It even turns out that Lᵖ is separable:

Lemma 0.30. Suppose X is a second countable topological space (i.e., it has a countable basis) and µ is a regular Borel measure. Then Lᵖ(X, dµ), 1 ≤ p < ∞, is separable.

Proof. The set of all characteristic functions χ_A(x) with A ∈ Σ and µ(A) < ∞ is total by construction of the integral. Now our strategy is as follows: Using outer regularity we can restrict A to open sets, and using the existence of a countable base, we can restrict A to open sets from this base.

Fix A. By outer regularity, there is a decreasing sequence of open sets Oₙ such that µ(Oₙ) → µ(A). Since µ(A) < ∞, it is no restriction to assume µ(Oₙ) < ∞, and thus µ(Oₙ\A) = µ(Oₙ) − µ(A) → 0. Now dominated convergence implies ‖χ_A − χ_{Oₙ}‖_p → 0; hence the set of all characteristic functions χ_O(x) with O open and µ(O) < ∞ is total. Finally, let B be a countable basis for the topology. Then every open set O can be written as O = ∪_{j=1}^∞ Õⱼ with Õⱼ ∈ B. Moreover, by considering the set B̃ of all finite unions of elements from B, it is no restriction to assume ∪_{j=1}^n Õⱼ ∈ B̃. Hence there is an increasing sequence Õₙ ↗ O with Õₙ ∈ B̃. By monotone convergence, ‖χ_O − χ_{Õₙ}‖_p → 0, and hence the set of all characteristic functions χ_Õ with Õ ∈ B̃ is total. □

To finish this chapter, let us show that continuous functions are dense in Lᵖ:

Theorem 0.31. Let X be a locally compact metric space and let µ be a σ-finite regular Borel measure. Then the set C_c(X) of continuous functions with compact support is dense in Lᵖ(X, dµ), 1 ≤ p < ∞.

Proof. As in the previous proof, the set of all characteristic functions χ_K(x) with K compact is total (using inner regularity). Hence it suffices to show that χ_K(x) can be approximated by continuous functions. By outer regularity there is an open set O ⊃ K such that µ(O\K) ≤ ε. By Urysohn's lemma (Lemma 0.11) there is a continuous function f_ε which is one on K and zero outside O. Since

	∫_X |χ_K − f_ε|ᵖ dµ = ∫_{O\K} |f_ε|ᵖ dµ ≤ µ(O\K) ≤ ε,	(0.82)

we have ‖f_ε − χ_K‖_p → 0 and we are done. □
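The approximation step in the last proof has a very concrete flavor for Lebesgue measure on (0, 1). The sketch below is my own illustration (the interval K = [0.3, 0.6], ε = 0.05, p = 2, and the midpoint-rule discretization are assumptions of the illustration): a continuous function which is 1 on K, 0 outside the ε-neighborhood O of K, and linear in between plays the role of the Urysohn function, and ‖χ_K − f_ε‖_p^p ≤ µ(O\K) = 2ε.

```python
def f_eps(x, a, b, eps):
    # continuous: 1 on [a, b], 0 outside (a - eps, b + eps), linear in between
    if a <= x <= b:
        return 1.0
    if x <= a - eps or x >= b + eps:
        return 0.0
    return (x - (a - eps)) / eps if x < a else ((b + eps) - x) / eps

p, eps = 2.0, 0.05
M = 100000
h = 1.0 / M
err_p = 0.0
for k in range(M):
    x = (k + 0.5) * h
    chi = 1.0 if 0.3 <= x <= 0.6 else 0.0       # characteristic function of K
    err_p += abs(chi - f_eps(x, 0.3, 0.6, eps)) ** p * h
lp_err = err_p ** (1 / p)                        # ~ ||chi_K - f_eps||_p
```

Shrinking ε drives lp_err to 0, which is exactly the density statement of the theorem for this K.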
If X is some subset of ℝⁿ, we can do even better. A nonnegative function u ∈ C_c^∞(ℝⁿ) is called a mollifier if

	∫_{ℝⁿ} u(x) dx = 1.	(0.83)

The standard mollifier is u(x) = exp(1/(|x|² − 1)) for |x| < 1 and u(x) = 0 else (normalized such that its integral is one).

If we scale a mollifier according to u_k(x) = kⁿ u(k x), such that its mass is preserved (‖u_k‖₁ = 1) and it concentrates more and more around the origin, we have the following result (Problem 0.17):

[Figure: the scaled mollifier u_k concentrating at the origin as k → ∞.]

Lemma 0.32. Let u be a mollifier in ℝⁿ and set u_k(x) = kⁿ u(k x). Then for any (uniformly) continuous function f : ℝⁿ → ℂ we have that

	f_k(x) = ∫_{ℝⁿ} u_k(x − y) f(y) dy	(0.84)

is in C^∞(ℝⁿ) and converges to f (uniformly).

Now we are ready to prove:

Theorem 0.33. If X ⊆ ℝⁿ and µ is a Borel measure, then the set C_c^∞(X) of all smooth functions with compact support is dense in Lᵖ(X, dµ), 1 ≤ p < ∞.

Proof. By our previous result it suffices to show that any continuous function f(x) with compact support can be approximated by smooth ones. By setting f(x) = 0 for x ∉ X, it is no restriction to assume X = ℝⁿ. Now choose a mollifier u and observe that f_k has compact support (since f has). Moreover, since f has compact support, it is uniformly continuous and f_k → f uniformly. But this implies f_k → f in Lᵖ. □
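Mollification is also easy to watch numerically in one dimension. The following sketch is my own illustration (the sample function f(x) = |sin 2x|, the grid, and the midpoint-rule quadrature are assumptions of the illustration): it convolves f with the scaled standard mollifier u_k(x) = k u(kx) and checks that the sup-norm error on a grid decreases as k grows, in line with Lemma 0.32.

```python
import math

def u(x):
    # standard mollifier (unnormalized); normalized numerically below
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

K = 2000
ts = [-1 + (k + 0.5) * (2.0 / K) for k in range(K)]   # quadrature grid on [-1, 1]
mass = sum(u(t) for t in ts) * (2.0 / K)              # numerical normalization

def mollify_error(kscale, f, grid):
    # sup over the grid of |f_k(x) - f(x)|, with
    # f_k(x) = (1/mass) * int u(t) f(x - t/kscale) dt   (substituting t = k(x-y))
    err = 0.0
    for x in grid:
        fk = sum(u(t) * f(x - t / kscale) for t in ts) * (2.0 / K) / mass
        err = max(err, abs(fk - f(x)))
    return err

def f(x):
    return abs(math.sin(2 * x))                        # continuous but not smooth

grid = [-2 + 0.1 * i for i in range(41)]
errs = [mollify_error(k, f, grid) for k in (2, 8, 32)]
```

The error is governed by the modulus of continuity of f at scale 1/k, so it shrinks roughly like 1/k here.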
Problem 0.13. Suppose µ(X) < ∞. Show that

	lim_{p→∞} ‖f‖_p = ‖f‖_∞	(0.85)

for any bounded measurable function.

Problem 0.14. Prove (0.74). (Hint: Show that f(x) = (1 − t) + t x − x^t, x > 0, 0 < t < 1, satisfies f(a/b) ≥ 0 = f(1).)

Problem 0.15. Show the following generalization of Hölder's inequality:

	‖f g‖_r ≤ ‖f‖_p ‖g‖_q,	(0.86)

where 1/p + 1/q = 1/r.

Problem 0.16 (Lyapunov inequality). Let 0 < θ < 1. Show that if f ∈ L^{p₁} ∩ L^{p₂}, then f ∈ L^p and

	‖f‖_p ≤ ‖f‖_{p₁}^θ ‖f‖_{p₂}^{1−θ},	(0.87)

where 1/p = θ/p₁ + (1 − θ)/p₂.

Problem 0.17. Prove Lemma 0.32. (Hint: To show that f_k is smooth, use Problems A.7 and A.8.)

Problem 0.18. Construct a function f ∈ Lᵖ(0, 1) which has a pole at every rational number in [0, 1]. (Hint: Start with the function f₀(x) = |x|^{−α}, which has a single pole at 0; then fⱼ(x) = f₀(x − xⱼ) has a pole at xⱼ.)

0.7. Appendix: The uniform boundedness principle

Recall that the interior of a set is the largest open subset (that is, the union of all open subsets). A set is called nowhere dense if its closure has empty interior. The key to several important theorems about Banach spaces is the observation that a Banach space cannot be the countable union of nowhere dense sets.

Theorem 0.34 (Baire category theorem). Let X be a complete metric space. Then X cannot be the countable union of nowhere dense sets.

Proof. Suppose X = ∪_{n=1}^∞ Xₙ. We can assume that the sets Xₙ are closed and that none of them contains a ball; that is, X\Xₙ is open and nonempty for every n. We will construct a Cauchy sequence xₙ which stays away from all Xₙ.
Since X\X₁ is open and nonempty there is a closed ball B_{r₁}(x₁) ⊆ X\X₁. Reducing r₁ a little, we can even assume B_{r₁}(x₁) ⊆ X\X₁. Moreover, since X₂ cannot contain B_{r₁}(x₁), there is some x₂ ∈ B_{r₁}(x₁) that is not in X₂. Since B_{r₁}(x₁) ∩ (X\X₂) is open, there is a closed ball B_{r₂}(x₂) ⊆ B_{r₁}(x₁) ∩ (X\X₂). Proceeding by induction, we obtain a sequence of balls such that

B_{rₙ}(xₙ) ⊆ B_{rₙ₋₁}(xₙ₋₁) ∩ (X\Xₙ).  (0.88)

Now observe that in every step we can choose rₙ as small as we please, hence without loss of generality rₙ → 0. Since by construction xₙ ∈ B_{r_N}(x_N) for n ≥ N, we conclude that xₙ is Cauchy and converges to some point x ∈ X. But x ∈ B_{rₙ}(xₙ) ⊆ X\Xₙ for every n, contradicting our assumption that the Xₙ cover X. □

(Sets which can be written as a countable union of nowhere dense sets are called of first category. All other sets are second category. Hence the name category theorem.)

In other words, if Xₙ ⊆ X is a sequence of closed subsets which cover X, at least one Xₙ contains a ball of radius ε > 0.

Now we come to the first important consequence, the uniform boundedness principle.

Theorem 0.35 (Banach–Steinhaus). Let X be a Banach space and Y some normed linear space. Let {A_α} ⊆ L(X, Y) be a family of bounded operators. Suppose ‖A_α x‖ ≤ C(x) is bounded for fixed x ∈ X. Then ‖A_α‖ ≤ C is uniformly bounded.

Proof. Let

Xₙ = {x | ‖A_α x‖ ≤ n for all α} = ∩_α {x | ‖A_α x‖ ≤ n},  (0.89)

then ∪ₙ Xₙ = X by assumption. Moreover, by continuity of A_α and the norm, each Xₙ is an intersection of closed sets and hence closed. By Baire's theorem at least one contains a ball of positive radius: B_ε(x₀) ⊂ Xₙ. Now observe

‖A_α y‖ ≤ ‖A_α(y + x₀)‖ + ‖A_α x₀‖ ≤ n + ‖A_α x₀‖  (0.90)

for ‖y‖ < ε. Setting y = ε x/‖x‖, we obtain

‖A_α x‖ ≤ ((n + C(x₀))/ε) ‖x‖  (0.91)

for any x. □
Part 1 Mathematical Foundations of Quantum Mechanics .
Chapter 1

Hilbert spaces

The phase space in classical mechanics is the Euclidean space R^{2n} (for the n position and n momentum coordinates). In quantum mechanics the phase space is always a Hilbert space H. Hence the geometry of Hilbert spaces stands at the outset of our investigations.

1.1. Hilbert spaces

Suppose H is a vector space. A map ⟨.,.⟩ : H × H → C is called a skew linear form if it is conjugate linear in the first and linear in the second argument. A positive definite skew linear form is called an inner product or scalar product. Associated with every scalar product is a norm

‖ψ‖ = √⟨ψ,ψ⟩.  (1.1)

The triangle inequality follows from the Cauchy–Schwarz–Bunjakowski inequality:

|⟨ψ,ϕ⟩| ≤ ‖ψ‖ ‖ϕ‖,  (1.2)

with equality if and only if ψ and ϕ are parallel.

If H is complete with respect to the above norm, it is called a Hilbert space. It is no restriction to assume that H is complete, since one can easily replace it by its completion.

Example. The space L²(M, dµ) is a Hilbert space with scalar product given by

⟨f,g⟩ = ∫_M f(x)* g(x) dµ(x).  (1.3)
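The Cauchy–Schwarz inequality (1.2), including its equality case for parallel vectors, can be checked directly in C^n with the conjugate-linear-in-the-first-argument convention used here. A minimal sketch (names mine, not from the text):

```python
import numpy as np

def inner(f, g):
    """<f,g> conjugate linear in the first and linear in the second argument."""
    return np.vdot(f, g)    # np.vdot conjugates its first argument

norm = lambda v: np.sqrt(inner(v, v).real)

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
phi = rng.normal(size=8) + 1j * rng.normal(size=8)

# Cauchy-Schwarz: |<psi,phi>| <= ||psi|| ||phi||
cs_holds = abs(inner(psi, phi)) <= norm(psi) * norm(phi) + 1e-12
print(cs_holds)   # True

# Equality exactly for parallel vectors, e.g. phi = 3i * psi
print(np.isclose(abs(inner(psi, 3j * psi)), norm(psi) * norm(3j * psi)))
```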
Similarly, the set of all square summable sequences ℓ²(N) is a Hilbert space with scalar product

⟨f,g⟩ = Σ_{j∈N} f_j* g_j.  (1.4)

(Note that the second example is a special case of the first one; take M = R and µ a sum of Dirac measures.)

A vector ψ ∈ H is called normalized or a unit vector if ‖ψ‖ = 1. Two vectors ψ, ϕ ∈ H are called orthogonal or perpendicular (ψ ⊥ ϕ) if ⟨ψ,ϕ⟩ = 0 and parallel if one is a multiple of the other. If ψ and ϕ are orthogonal we have the Pythagorean theorem,

‖ψ + ϕ‖² = ‖ψ‖² + ‖ϕ‖²,  (1.5)

which is straightforward to check.

Suppose ϕ is a unit vector. Then the projection of ψ in the direction of ϕ is given by

ψ∥ = ⟨ϕ,ψ⟩ϕ,  (1.6)

and ψ⊥ defined via

ψ⊥ = ψ − ⟨ϕ,ψ⟩ϕ  (1.7)

is perpendicular to ϕ.

These results can also be generalized to more than one vector. A set of vectors {ϕj} is called an orthonormal set if ⟨ϕj,ϕk⟩ = 0 for j ≠ k and ⟨ϕj,ϕj⟩ = 1.

Lemma 1.1. Suppose {ϕj}_{j=0}^n is an orthonormal set. Then every ψ ∈ H can be written as

ψ = ψ∥ + ψ⊥,  ψ∥ = Σ_{j=0}^n ⟨ϕj,ψ⟩ϕj,  (1.8)

where ψ∥ and ψ⊥ are orthogonal. Moreover, ⟨ϕj,ψ⊥⟩ = 0 for all 1 ≤ j ≤ n. In particular,

‖ψ‖² = Σ_{j=0}^n |⟨ϕj,ψ⟩|² + ‖ψ⊥‖².  (1.9)

Moreover, every ψ̂ in the span of {ϕj}_{j=0}^n satisfies

‖ψ − ψ̂‖ ≥ ‖ψ⊥‖  (1.10)

with equality holding if and only if ψ̂ = ψ∥. In other words, ψ∥ is uniquely characterized as the vector in the span of {ϕj}_{j=0}^n being closest to ψ.
Proof. A straightforward calculation shows ⟨ϕj, ψ − ψ∥⟩ = 0, and hence ψ∥ and ψ⊥ = ψ − ψ∥ are orthogonal. The formula for the norm follows by applying (1.5) iteratively. Now, fix a vector

ψ̂ = Σ_{j=0}^n c_j ϕj  (1.11)

in the span of {ϕj}_{j=0}^n. Then one computes

‖ψ − ψ̂‖² = ‖ψ∥ + ψ⊥ − ψ̂‖² = ‖ψ⊥‖² + ‖ψ∥ − ψ̂‖² = ‖ψ⊥‖² + Σ_{j=0}^n |c_j − ⟨ϕj,ψ⟩|²,  (1.12)

from which the last claim follows. □

From (1.9) we obtain Bessel's inequality

Σ_{j=0}^n |⟨ϕj,ψ⟩|² ≤ ‖ψ‖²  (1.13)

with equality holding if and only if ψ lies in the span of {ϕj}_{j=0}^n.

Recall that a scalar product can be recovered from its norm by virtue of the polarization identity

⟨ϕ,ψ⟩ = ¼( ‖ϕ+ψ‖² − ‖ϕ−ψ‖² + i‖ϕ−iψ‖² − i‖ϕ+iψ‖² ).  (1.14)

A bijective operator U ∈ L(H₁,H₂) is called unitary if U preserves scalar products:

⟨Uϕ, Uψ⟩₂ = ⟨ϕ,ψ⟩₁,  ϕ,ψ ∈ H₁.  (1.15)

By the polarization identity this is the case if and only if U preserves norms: ‖Uψ‖₂ = ‖ψ‖₁ for all ψ ∈ H₁. The two Hilbert spaces H₁ and H₂ are called unitarily equivalent in this case.

Problem 1.1. The operator S : ℓ²(N) → ℓ²(N), (a₁, a₂, a₃, …) → (0, a₁, a₂, …), clearly satisfies ‖Sa‖ = ‖a‖. Is it unitary?

1.2. Orthonormal bases

Of course, since we cannot assume H to be a finite dimensional vector space, we need to generalize Lemma 1.1 to arbitrary orthonormal sets {ϕj}_{j∈J}. We start by assuming that J is countable. Then Bessel's inequality (1.13) shows that

Σ_{j∈J} |⟨ϕj,ψ⟩|²  (1.16)
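The polarization identity (1.14) can be verified numerically in C^n with the same convention (scalar product conjugate linear in the first argument). A minimal sketch, with function names of my choosing:

```python
import numpy as np

def polarize(phi, psi):
    """Recover <phi,psi> from norms via the polarization identity (1.14)."""
    n2 = lambda v: np.vdot(v, v).real      # ||v||^2
    return 0.25 * (n2(phi + psi) - n2(phi - psi)
                   + 1j * n2(phi - 1j * psi) - 1j * n2(phi + 1j * psi))

rng = np.random.default_rng(2)
phi = rng.normal(size=5) + 1j * rng.normal(size=5)
psi = rng.normal(size=5) + 1j * rng.normal(size=5)

ok = np.allclose(polarize(phi, psi), np.vdot(phi, psi))
print(ok)   # True: the scalar product is recovered from norms alone
```

Note that the ordering of the four terms depends on the convention; with linearity in the first argument instead, the signs of the imaginary terms flip.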
converges absolutely. Moreover, for any finite subset K ⊂ J we have

‖Σ_{j∈K} ⟨ϕj,ψ⟩ϕj‖² = Σ_{j∈K} |⟨ϕj,ψ⟩|²  (1.17)

by the Pythagorean theorem, and thus Σ_{j∈J} ⟨ϕj,ψ⟩ϕj is Cauchy if and only if Σ_{j∈J} |⟨ϕj,ψ⟩|² is.

Now let J be arbitrary. Again, Bessel's inequality shows that for any given ε > 0 there are at most finitely many j for which |⟨ϕj,ψ⟩| ≥ ε. Hence there are at most countably many j for which |⟨ϕj,ψ⟩| > 0. Thus it follows that

ψ∥ = Σ_{j∈J} ⟨ϕj,ψ⟩ϕj  (1.18)

is well-defined and so is

Σ_{j∈J} |⟨ϕj,ψ⟩|².  (1.19)

In particular, by continuity of the scalar product we see that Lemma 1.1 holds for arbitrary orthonormal sets without modifications.

Theorem 1.2. Suppose {ϕj}_{j∈J} is an orthonormal set. Then every ψ ∈ H can be written as

ψ = ψ∥ + ψ⊥,  ψ∥ = Σ_{j∈J} ⟨ϕj,ψ⟩ϕj,  (1.20)

where ψ∥ and ψ⊥ are orthogonal. Moreover, ⟨ϕj,ψ⊥⟩ = 0 for all j ∈ J. In particular,

‖ψ‖² = Σ_{j∈J} |⟨ϕj,ψ⟩|² + ‖ψ⊥‖².  (1.21)

Moreover, every ψ̂ in the span of {ϕj}_{j∈J} satisfies

‖ψ − ψ̂‖ ≥ ‖ψ⊥‖  (1.22)

with equality holding if and only if ψ̂ = ψ∥. In other words, ψ∥ is uniquely characterized as the vector in the span of {ϕj}_{j∈J} being closest to ψ.

Note that from Bessel's inequality (which of course still holds) it follows that the map ψ → ψ∥ is continuous.

An orthonormal set which is not a proper subset of any other orthonormal set is called an orthonormal basis, due to the following result:

Theorem 1.3. For an orthonormal set {ϕj}_{j∈J} the following conditions are equivalent:

(i) {ϕj}_{j∈J} is a maximal orthonormal set.
(ii) For every vector ψ ∈ H we have

ψ = Σ_{j∈J} ⟨ϕj,ψ⟩ϕj.  (1.23)
(iii) For every vector ψ ∈ H we have

‖ψ‖² = Σ_{j∈J} |⟨ϕj,ψ⟩|².  (1.24)

(iv) ⟨ϕj,ψ⟩ = 0 for all j ∈ J implies ψ = 0.

Proof. We will use the notation from Theorem 1.2.
(i) ⇒ (ii): If ψ⊥ ≠ 0, then we can normalize ψ⊥ to obtain a unit vector ψ̃⊥ which is orthogonal to all vectors ϕj. But then {ϕj}_{j∈J} ∪ {ψ̃⊥} would be a larger orthonormal set, contradicting maximality of {ϕj}_{j∈J}.
(ii) ⇒ (iii): Follows since (ii) implies ψ⊥ = 0.
(iii) ⇒ (iv): If ⟨ψ,ϕj⟩ = 0 for all j ∈ J, we conclude ‖ψ‖² = 0 and hence ψ = 0.
(iv) ⇒ (i): If {ϕj}_{j∈J} were not maximal, there would be a unit vector ϕ such that {ϕj}_{j∈J} ∪ {ϕ} is a larger orthonormal set. But ⟨ϕj,ϕ⟩ = 0 for all j ∈ J implies ϕ = 0 by (iv), a contradiction. □

Since ψ → ψ∥ is continuous, it suffices to check conditions (ii) and (iii) on a dense set.

Example. The set of functions

ϕn(x) = (1/√2π) e^{inx},  n ∈ Z,  (1.25)

forms an orthonormal basis for H = L²(0, 2π). The corresponding orthogonal expansion is just the ordinary Fourier series. (Problem 1.17)

A Hilbert space is separable if and only if there is a countable orthonormal basis. In fact, if H is separable, then there exists a countable total set {ψj}_{j=0}^N. After throwing away some vectors, we can assume that ψ_{n+1} cannot be expressed as a linear combination of the vectors ψ₀, …, ψn. Now we can construct an orthonormal basis as follows: We begin by normalizing ψ₀,

ϕ₀ = ψ₀/‖ψ₀‖.  (1.26)

Next we take ψ₁ and remove the component parallel to ϕ₀ and normalize again:

ϕ₁ = (ψ₁ − ⟨ϕ₀,ψ₁⟩ϕ₀) / ‖ψ₁ − ⟨ϕ₀,ψ₁⟩ϕ₀‖.  (1.27)

Proceeding like this, we define recursively

ϕn = (ψn − Σ_{j=0}^{n−1} ⟨ϕj,ψn⟩ϕj) / ‖ψn − Σ_{j=0}^{n−1} ⟨ϕj,ψn⟩ϕj‖.  (1.28)

This procedure is known as Gram–Schmidt orthogonalization. Hence we obtain an orthonormal set {ϕj}_{j=0}^N such that span{ϕj}_{j=0}^n = span{ψj}_{j=0}^n
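The recursion (1.26)–(1.28) translates directly into code. A minimal sketch of Gram–Schmidt orthogonalization in C^n (function names are mine, not from the text):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors by the recursion
    (1.26)-(1.28): subtract the components parallel to the previously
    constructed phi_j, then normalize."""
    basis = []
    for psi in vectors:
        v = psi.astype(complex)
        for phi in basis:
            v = v - np.vdot(phi, v) * phi     # remove <phi_j, psi> phi_j
        basis.append(v / np.linalg.norm(v))
    return basis

vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]
phis = gram_schmidt(vecs)
G = np.array([[np.vdot(p, q) for q in phis] for p in phis])
print(np.allclose(G, np.eye(3)))   # True: the phi_j form an orthonormal set
```

The Gram matrix G being the identity verifies both orthogonality and normalization; the span is preserved at each step since only previously constructed basis vectors are subtracted.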
Theorem 1. if there is one countable basis.3. (1. However. P1 (x) = x. Now let {φk }k∈K be a second basis and consider the set Kj = {k ∈ K φk . Moreover. Any separable inﬁnite dimensional Hilbert space is unitarily equivalent to 2 (N). The resulting polynomials are up to a normalization equal to the Legendre polynomials P0 (x) = 1.29) (which are normalized such that Pn (1) = 1). Hence Zorn’s lemma implies the existence of a maximal element. up to unitary equivalence. then every orthonormal basis is countable. then M ⊥ = {ψ ϕ. ϕj = 0}.36 1. Then the map U : H → 2 (N).4. ψ = 0. it turns out that. In particular. it can be shown that L2 (M. Since these are the expansion coeﬃcients of ϕj with ˜ respect to {φk }k∈K . We know that there is at least one countable orthonormal basis {ϕj }j∈J .5. Hence the set K = j∈J Kj is ˜ ˜ countable as well. Let me remark that if H is not separable. 2 .. there is only one (separable) inﬁnite dimensional Hilbert space: Let H be an inﬁnite dimensional Hilbert space and let {ϕj }j∈N be any orthogonal basis. Proof. Since {ψj }N is total. Moreover. dµ) is separable. there still exists an orthonormal basis. 1) we can orthogonalize the polynomial fn (x) = xn .. If fact. 1. Theorem 1. If H is separable. an orthonormal basis. Hilbert spaces for any ﬁnite n and thus also for N . j=0 Example. By continuity of the scalar product it follows that M ⊥ is a closed linear subspace and by linearity that .3 (iii)). we infer that j=0 {ϕj }N is an orthonormal basis. any linearly ordered chain has an upper bound (the union of all sets in the chain). the proof requires Zorn’s lemma: The collection of all orthonormal sets in H can be partially ordered by inclusion. But k ∈ K\K implies φk = 0 and hence K = K. ψ → ( ϕj . ∀ϕ ∈ M } is called the orthogonal complement of M . In particular. then it follows that any other basis is countable as well. that is. In L2 (−1. P2 (x) = 3 x2 − 1 . We will assume all Hilbert spaces to be separable. this set is countable. 
1.3. The projection theorem and the Riesz lemma

Let M ⊆ H be a subset. Then M⊥ = {ψ | ⟨ϕ,ψ⟩ = 0 ∀ϕ ∈ M} is called the orthogonal complement of M. By continuity of the scalar product it follows that M⊥ is a closed linear subspace and by linearity that
(1. ψ for all ψ ∈ H. M ⊥⊥ = M . In other words. it is a Hilbert space and has an orthonormal basis {ϕj }j∈J . ˜ ˜ The following easy consequence is left as an exercise. to operators : H → C. ϕ = ψ. ϕ = ψ . that is. If ≡ 0 we can choose ϕ = 0. that is. we see that the vectors in a closed subspace M are precisely those which are orthogonal to all vectors in M ⊥ . By the CauchySchwarz inequality we know that ϕ : ψ → ϕ. = ψ. a Hilbert space is equivalent to its own dual space H∗ = H. Theorem 1. For every ˜ ψ ∈ H we have (ψ)ϕ − (ϕ)ψ ∈ Ker( ) and hence ˜ ˜ 0 = ϕ.31) since PM ψ. Finally we turn to linear functionals. If M is an arbitrary subset we have at least M ⊥⊥ = span(M ).30) and PM ψ. to every ψ ∈ H we can assign a unique vector ψ which is the vector in M closest to ψ. (ψ)ϕ − (ϕ)ψ = (ψ) − (ϕ) ϕ. In other words. Theorem 1. The projection theorem and the Riesz lemma 37 (span(M ))⊥ = M ⊥ . Otherwise Ker( ) = {ψ (ψ) = 0} is a proper subspace and we can ﬁnd a unit vector ϕ ∈ Ker( )⊥ .33) . ψ . (1.2. For example we have H⊥ = {0} since any vector in H⊥ must be in particular orthogonal to all vectors in some orthonormal basis. Hence the result follows from Theorem 1. Proof. ϕ PM ψ = ψ ⊥ . PM ϕ . Suppose is a bounded linear functional on a Hilbert space H.6 (projection theorem). Clearly we have PM ⊥ ψ = ψ − Moreover. ˜ ˜ ˜ ˜ ˜ In other words.3. PM ϕ (1.7 (Riesz lemma). In turns out that in a Hilbert space every bounded linear functional can be written in this way. Then there is a vector ϕ ∈ H such that (ψ) = ϕ.32) Note that by H⊥ = {} we see that M ⊥ = {0} if and only if M is dense. The operator PM ψ = ψ is called the orthogonal projection corresponding to M . Note that we have 2 PM = PM (1. then every ψ ∈ H can be uniquely written as ψ = ψ + ψ⊥ with ψ ∈ M and ψ⊥ ∈ M ⊥ . Since M is closed. Let M be a closed linear subspace of a Hilbert space H. Proof.1. ψ is a bounded linear functional (with norm ϕ ). we can choose ϕ = (ϕ)∗ ϕ. One writes M ⊕ M⊥ = H in this situation. 
The rest ψ − ψ lies in M ⊥ .
Corollary 1.8. Suppose B is a bounded skew linear form, that is,

|B(ψ,ϕ)| ≤ C ‖ψ‖ ‖ϕ‖.  (1.34)

Then there is a unique bounded operator A such that

B(ψ,ϕ) = ⟨Aψ, ϕ⟩.  (1.35)

Problem 1.2. Prove Corollary 1.8.

Problem 1.3. Suppose P₁ and P₂ are orthogonal projections. Show that P₁ ≤ P₂ (that is, ⟨ψ, P₁ψ⟩ ≤ ⟨ψ, P₂ψ⟩) is equivalent to Ran(P₁) ⊆ Ran(P₂).

Problem 1.4. Show that P : L²(R) → L²(R), f(x) → ½(f(x) + f(−x)), is a projection. Compute its range and kernel.

Problem 1.5. Show that an orthogonal projection PM ≠ 0 has norm one.

Problem 1.6. Show that a bounded linear operator A is uniquely determined by its matrix elements Ajk = ⟨ϕj, Aϕk⟩ with respect to some orthonormal basis {ϕj}.

Problem 1.7. Show that L(H) is not separable if H is infinite dimensional.

1.4. Orthogonal sums and tensor products

Given two Hilbert spaces H₁ and H₂, we define their orthogonal sum H₁ ⊕ H₂ to be the set of all pairs (ψ₁,ψ₂) ∈ H₁ × H₂ together with the scalar product

⟨(ϕ₁,ϕ₂), (ψ₁,ψ₂)⟩ = ⟨ϕ₁,ψ₁⟩₁ + ⟨ϕ₂,ψ₂⟩₂.  (1.36)

It is left as an exercise to verify that H₁ ⊕ H₂ is again a Hilbert space. Moreover, H₁ can be identified with {(ψ₁, 0) | ψ₁ ∈ H₁}, and we can regard H₁ as a subspace of H₁ ⊕ H₂; similarly for H₂. It is also custom to write ψ₁ + ψ₂ instead of (ψ₁,ψ₂).

More generally, let Hj, j ∈ N, be a countable collection of Hilbert spaces and define

⊕_{j=1}^∞ Hj = { Σ_{j=1}^∞ ψj | ψj ∈ Hj, Σ_{j=1}^∞ ‖ψj‖²_j < ∞ },  (1.37)

which becomes a Hilbert space with the scalar product

⟨ Σ_{j=1}^∞ ϕj, Σ_{j=1}^∞ ψj ⟩ = Σ_{j=1}^∞ ⟨ϕj, ψj⟩_j.  (1.38)

Suppose H and H̃ are two Hilbert spaces. Our goal is to construct their tensor product. The elements should be products ψ ⊗ ψ̃ of elements ψ ∈ H
and ψ̃ ∈ H̃. Hence we start with the set of all finite linear combinations of elements of H × H̃:

F(H, H̃) = { Σ_{j=1}^n αj(ψj, ψ̃j) | (ψj, ψ̃j) ∈ H × H̃, αj ∈ C }.  (1.39)

Since we want (ψ₁+ψ₂) ⊗ ψ̃ = ψ₁⊗ψ̃ + ψ₂⊗ψ̃, ψ ⊗ (ψ̃₁+ψ̃₂) = ψ⊗ψ̃₁ + ψ⊗ψ̃₂, and (αψ) ⊗ ψ̃ = ψ ⊗ (αψ̃), we consider F(H, H̃)/N(H, H̃), where

N(H, H̃) = span{ Σ_{j,k=1}^n αj βk (ψj, ψ̃k) − (Σ_{j=1}^n αj ψj, Σ_{k=1}^n βk ψ̃k) },  (1.40)

and write ψ ⊗ ψ̃ for the equivalence class of (ψ, ψ̃).

Next we define

⟨ψ⊗ψ̃, φ⊗φ̃⟩ = ⟨ψ,φ⟩⟨ψ̃,φ̃⟩,  (1.41)

which extends to a skew linear form on F(H, H̃)/N(H, H̃). To show that we obtain a scalar product, we need to ensure positivity. Let ψ = Σ_i αi ψi ⊗ ψ̃i ≠ 0 and pick orthonormal bases ϕj, ϕ̃k for span{ψi}, span{ψ̃i}, respectively. Then

ψ = Σ_{j,k} αjk ϕj ⊗ ϕ̃k,  αjk = Σ_i αi ⟨ϕj,ψi⟩⟨ϕ̃k,ψ̃i⟩,  (1.42)

and we compute

⟨ψ, ψ⟩ = Σ_{j,k} |αjk|² > 0.  (1.43)

The completion of F(H, H̃)/N(H, H̃) with respect to the induced norm is called the tensor product H ⊗ H̃ of H and H̃.

Lemma 1.9. If ϕj, ϕ̃k are orthonormal bases for H, H̃, respectively, then ϕj ⊗ ϕ̃k is an orthonormal basis for H ⊗ H̃.

Proof. That ϕj ⊗ ϕ̃k is an orthonormal set is immediate from (1.41). Moreover, since span{ϕj}, span{ϕ̃k} is dense in H, H̃, respectively, it is easy to see that ϕj ⊗ ϕ̃k is dense in F(H, H̃)/N(H, H̃). But the latter is dense in H ⊗ H̃. □

Example. We have H ⊗ C^n = H^n.

Example. Let (M, dµ) and (M̃, dµ̃) be two measure spaces. Then we have L²(M, dµ) ⊗ L²(M̃, dµ̃) = L²(M × M̃, dµ × dµ̃).

Clearly we have L²(M, dµ) ⊗ L²(M̃, dµ̃) ⊆ L²(M × M̃, dµ × dµ̃). Now take an orthonormal basis ϕj ⊗ ϕ̃k for L²(M, dµ) ⊗ L²(M̃, dµ̃) as in our previous lemma. Then

∫_M ∫_M̃ (ϕj(x) ϕ̃k(y))* f(x, y) dµ(x) dµ̃(y) = 0  (1.44)
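For finite dimensions the tensor product is realized by the Kronecker product, and both the defining property (1.41) and Lemma 1.9 can be checked directly. A sketch (the dimensions and names are mine):

```python
import numpy as np

# Orthonormal bases of H = C^2 and H~ = C^3
e = np.eye(2)
f = np.eye(3)

# phi_j (tensor) phi~_k realized as Kronecker products in C^6
basis = [np.kron(e[j], f[k]) for j in range(2) for k in range(3)]
G = np.array([[np.vdot(p, q) for q in basis] for p in basis])
print(np.allclose(G, np.eye(6)))   # True: an orthonormal basis of H (x) H~

# The defining property (1.41): <psi(x)psi~, phi(x)phi~> = <psi,phi><psi~,phi~>
rng = np.random.default_rng(3)
psi, phi = rng.normal(size=2), rng.normal(size=2)
ps2, ph2 = rng.normal(size=3), rng.normal(size=3)
lhs = np.vdot(np.kron(psi, ps2), np.kron(phi, ph2))
rhs = np.vdot(psi, phi) * np.vdot(ps2, ph2)
print(np.isclose(lhs, rhs))        # True
```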
implies

∫_M ϕj(x)* fk(x) dµ(x) = 0,  fk(x) = ∫_M̃ ϕ̃k(y)* f(x, y) dµ̃(y),  (1.45)

and hence fk(x) = 0 for µ-a.e. x and thus f(x, y) = 0 for µ-a.e. x and µ̃-a.e. y, that is, f = 0. Hence ϕj ⊗ ϕ̃k is a basis for L²(M × M̃, dµ × dµ̃) and equality follows.

It is straightforward to extend the tensor product to any finite number of Hilbert spaces. We even note

(⊕_{j=1}^∞ Hj) ⊗ H = ⊕_{j=1}^∞ (Hj ⊗ H),  (1.46)

where equality has to be understood in the sense that both spaces are unitarily equivalent by virtue of the identification

(Σ_{j=1}^∞ ψj) ⊗ ψ = Σ_{j=1}^∞ ψj ⊗ ψ.  (1.47)

Problem 1.8. We have ψ ⊗ ψ̃ = φ ⊗ φ̃ ≠ 0 if and only if there is some α ∈ C\{0} such that ψ = αφ and ψ̃ = α⁻¹φ̃.

Problem 1.9. Show (1.46).

1.5. The C∗ algebra of bounded linear operators

We start by introducing a conjugation for operators on a Hilbert space H. Let A ∈ L(H), then the adjoint operator is defined via

⟨ϕ, A∗ψ⟩ = ⟨Aϕ, ψ⟩  (1.48)

(compare Corollary 1.8).

Example. If H = C^n and A = (ajk)_{1≤j,k≤n}, then A∗ = (a*kj)_{1≤j,k≤n}.

Lemma 1.10. Let A, B ∈ L(H). Then
(i) (A + B)∗ = A∗ + B∗, (αA)∗ = α∗A∗,
(ii) A∗∗ = A,
(iii) (AB)∗ = B∗A∗,
(iv) ‖A‖ = ‖A∗‖ and ‖A‖² = ‖A∗A‖ = ‖AA∗‖.

Proof. (i) and (ii) are obvious. (iii) follows from ⟨ϕ, (AB)ψ⟩ = ⟨A∗ϕ, Bψ⟩ = ⟨B∗A∗ϕ, ψ⟩. (iv) follows from

‖A∗‖ = sup_{‖ϕ‖=‖ψ‖=1} |⟨ψ, A∗ϕ⟩| = sup_{‖ϕ‖=‖ψ‖=1} |⟨Aψ, ϕ⟩| = ‖A‖  (1.49)
(1.52) is called a C ∗ algebra.50) sup ϕ =1 = A 2. Example. Note that if there is and identity e we have e∗ = e and hence (a−1 )∗ = (a∗ )−1 (show this). is called a ∗algebra. (ab)∗ = b∗ a∗ .51) a 2 = a∗ a (1.12. An element a ∈ A is called normal if aa∗ = a∗ a. a Banach algebra A together with an involution (αa)∗ = α∗ a∗ . (orthogonal) projection if a = a∗ = a2 . . Weak and strong convergence 41 and A∗ A = = sup ϕ = ψ =1  ϕ. A∗ Aψ  = Aϕ 2 sup ϕ = ψ =1  Aϕ. (a + b)∗ = a∗ + b∗ . The element a∗ is called the adjoint of a. . b ∈ A implies ab ∈ I and ba ∈ I. Problem 1.6) Problem 1. Any subalgebra which is also closed under involution. a∗∗ = a. a2 . a1 . where we have used ϕ = sup As a consequence of A∗ continuous. . ψ. and positive if a = bb∗ for some b ∈ A. (1. An ideal is a subspace I ⊆ A such that a ∈ I.54) . Aψ  (1. Let A ∈ L(H).11.52) and aa∗ ≤ a a∗ . (a1 . 2 (N) ∀ψ ∈ H. Show that U : H → H is unitary if and only if U −1 = U ∗ . unitary if aa∗ = a∗ a = I. ) → 1.10.6.6. Weak and strong convergence Sometimes a weaker notion of convergence is useful: We say that ψn converges weakly to ψ and write wlim ψn = ψ n→∞ or ψn ψ. ϕ . a2 . ). If it is closed under the adjoint map it is called a ∗ideal. Clearly both selfadjoint and unitary elements are normal. a3 .53) → 2 (N). Compute the adjoint of S : (0. (Hint: Problem 0. The continuous function C(I) together with complex conjugation form a commutative C ∗ algebra. selfadjoint if a = a∗ . satisfying ψ =1  = A observe that taking the adjoint is In general.1. Problem 1. (1. Show that A is normal if and only if Aψ = A∗ ψ . . Note that a∗ = a follows from (1. .
if ⟨ϕ, ψn⟩ → ⟨ϕ, ψ⟩ for every ϕ ∈ H. Clearly ψn → ψ implies ψn ⇀ ψ, and hence this notion of convergence is indeed weaker. Moreover, the weak limit is unique, since ⟨ϕ, ψn⟩ → ⟨ϕ, ψ⟩ and ⟨ϕ, ψn⟩ → ⟨ϕ, ψ̃⟩ imply ⟨ϕ, ψ − ψ̃⟩ = 0. A sequence ψn is called a weak Cauchy sequence if ⟨ϕ, ψn⟩ is Cauchy for every ϕ ∈ H.

Example. Let ϕn be an (infinite) orthonormal set. Then ⟨ψ, ϕn⟩ → 0 for every ψ, since these are just the expansion coefficients of ψ. (ϕn does not converge to 0, since ‖ϕn‖ = 1.)

Lemma 1.11. Let H be a Hilbert space.
(i) ψn ⇀ ψ implies ‖ψ‖ ≤ lim inf ‖ψn‖.
(ii) Every weak Cauchy sequence ψn is bounded: ‖ψn‖ ≤ C.
(iii) Every weak Cauchy sequence converges weakly.
(iv) For a weakly convergent sequence ψn ⇀ ψ we have: ψn → ψ if and only if lim sup ‖ψn‖ ≤ ‖ψ‖.

Proof. (i) Observe

‖ψ‖² = ⟨ψ, ψ⟩ = lim inf ⟨ψ, ψn⟩ ≤ ‖ψ‖ lim inf ‖ψn‖.  (1.55)

(ii) For every ϕ we have that |⟨ϕ, ψn⟩| ≤ C(ϕ) is bounded. Hence by the uniform boundedness principle we have ‖ψn‖ ≤ C.
(iii) Let ϕm be an orthonormal basis and define cm = lim_{n→∞} ⟨ϕm, ψn⟩. Then ψ = Σm cm ϕm is the desired limit.
(iv) By (i) we have lim ‖ψn‖ = ‖ψ‖ and hence

‖ψ − ψn‖² = ‖ψ‖² − 2Re(⟨ψ, ψn⟩) + ‖ψn‖² → 0.  (1.56)

The converse is straightforward. □

Clearly an orthonormal basis does not have a norm convergent subsequence. Hence the unit ball in an infinite dimensional Hilbert space is never compact. However, we can at least extract weakly convergent subsequences:

Lemma 1.12. Let H be a Hilbert space. Every bounded sequence ψn has a weakly convergent subsequence.

Proof. Let ϕk be an orthonormal basis. By the usual diagonal sequence argument we can find a subsequence ψnm such that ⟨ϕk, ψnm⟩ converges for all k. Since ψn is bounded, ⟨ϕ, ψnm⟩ converges for every ϕ ∈ H, and hence ψnm is a weak Cauchy sequence. □

Finally, let me remark that similar concepts can be introduced for operators. This is of particular importance for the case of unbounded operators, where convergence in the operator norm makes no sense at all.
A sequence of operators An is said to converge strongly to A,

s-lim_{n→∞} An = A  :⇔  Anψ → Aψ  ∀ψ ∈ D(A) ⊆ D(An).  (1.57)

It is said to converge weakly to A,

w-lim_{n→∞} An = A  :⇔  Anψ ⇀ Aψ  ∀ψ ∈ D(A) ⊆ D(An).  (1.58)

Clearly norm convergence implies strong convergence, and strong convergence implies weak convergence.

Example. Consider the operator Sn ∈ L(ℓ²(N)) which shifts a sequence n places to the left,

Sn(x₁, x₂, …) = (x_{n+1}, x_{n+2}, …),  (1.59)

and the operator Sn* ∈ L(ℓ²(N)) which shifts a sequence n places to the right and fills up the first n places with zeros,

Sn*(x₁, x₂, …) = (0, …, 0, x₁, x₂, …),  (1.60)
                   (n places)

Then Sn converges to zero strongly but not in norm, and Sn* converges weakly to zero but not strongly.

Note that this example also shows that taking adjoints is not continuous with respect to strong convergence! If An → A strongly, we only have

⟨ϕ, An*ψ⟩ = ⟨Anϕ, ψ⟩ → ⟨Aϕ, ψ⟩ = ⟨ϕ, A*ψ⟩,  (1.61)

and hence only An* ⇀ A* in general. However, if An and A are normal, we have

‖(An − A)*ψ‖ = ‖(An − A)ψ‖  (1.62)

and hence An*ψ → A*ψ in this case. Thus at least for normal operators taking adjoints is continuous with respect to strong convergence.

Lemma 1.13. Suppose An is a sequence of bounded operators.
(i) s-lim_{n→∞} An = A implies ‖A‖ ≤ lim inf ‖An‖.
(ii) Every strong Cauchy sequence An is bounded: ‖An‖ ≤ C.
(iii) If Any → Ay for y in a dense set and ‖An‖ ≤ C, then s-lim_{n→∞} An = A. The same result holds if strong convergence is replaced by weak convergence.

Proof. (i) and (ii) follow as in Lemma 1.11 (i).
(iii) Just use

‖Anψ − Aψ‖ ≤ ‖Anψ − Anϕ‖ + ‖Anϕ − Aϕ‖ + ‖Aϕ − Aψ‖ ≤ 2C‖ψ − ϕ‖ + ‖Anϕ − Aϕ‖  (1.63)
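The behavior of the shift operators (1.59)–(1.60) is easy to observe on truncated sequences. A sketch (the finite truncation of ℓ²(N) and all names are my simplifications):

```python
import numpy as np

N = 1000                        # truncation of ell^2(N)
x = 1.0 / np.arange(1, N + 1)   # a fixed square-summable sequence

def S(n, x):
    """Shift n places to the left: (x_{n+1}, x_{n+2}, ...)."""
    return np.concatenate([x[n:], np.zeros(n)])

def Sstar(n, x):
    """Shift n places to the right, filling the first n places with zeros."""
    return np.concatenate([np.zeros(n), x[:-n]])

# S_n x -> 0 for the fixed x (strong convergence to zero), ...
print(np.linalg.norm(S(100, x)))   # small

# ... while ||S_n|| = 1 for every n, since S_n maps e_{n+1} to e_1
e = np.zeros(N); e[100] = 1.0
print(np.linalg.norm(S(100, e)))   # 1.0: no norm convergence

# S_n* preserves the norm (no strong convergence to 0), yet weakly
# <x, S_n* x> -> 0 as n grows
print(abs(np.dot(x, Sstar(100, x))))   # small
```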
and choose ϕ in the dense subspace such that ‖ψ − ϕ‖ ≤ ε/(4C) and n large such that ‖Anϕ − Aϕ‖ ≤ ε/2. The case of weak convergence is left as an exercise. □

Problem 1.13. Suppose ψn → ψ and ϕn ⇀ ϕ. Then ⟨ψn, ϕn⟩ → ⟨ψ, ϕ⟩.

Problem 1.14. A subspace M ⊆ H is closed if and only if every weak Cauchy sequence in M has a limit in M. (Hint: M = M⊥⊥.)

Problem 1.15. Let {ϕj}_{j=1}^∞ be some orthonormal basis and define

‖ψ‖_w = Σ_{j=1}^∞ (1/2^j) |⟨ϕj, ψ⟩|.  (1.64)

Show that ‖.‖_w is a norm and that, for bounded sequences, ψn ⇀ ψ if and only if ‖ψn − ψ‖_w → 0.

Problem 1.16. Show that ψn ⇀ ϕ if and only if ψn is bounded and ⟨ϕj, ψn⟩ → ⟨ϕj, ϕ⟩ for every j, where {ϕj} is some orthonormal basis. Show that this is wrong without the boundedness assumption.

1.7. Appendix: The Stone–Weierstraß theorem

In the case of a selfadjoint operator, the spectral theorem will show that the closed ∗-algebra generated by this operator is isomorphic to the C∗ algebra of continuous functions C(K) over some compact set. Hence it is important to be able to identify dense sets:

Theorem 1.14 (Stone–Weierstraß, real version). Suppose K is a compact set and let C(K, R) be the Banach algebra of continuous functions (with the sup norm). If F ⊂ C(K, R) contains the identity 1 and separates points (i.e., for every x₁ ≠ x₂ there is some function f ∈ F such that f(x₁) ≠ f(x₂)), then the algebra generated by F is dense.

Proof. Denote by A the algebra generated by F. Note that if f ∈ A, we also have |f| ∈ A: By the Weierstraß approximation theorem (Theorem 0.13) there is a polynomial pn(t) such that ||t| − pn(t)| < 1/n, and hence pn(f) → |f|. In particular, if f, g are in A, we also have

max{f, g} = ((f + g) + |f − g|)/2,  min{f, g} = ((f + g) − |f − g|)/2  (1.65)

in A. Now fix f ∈ C(K, R). We need to find some fε ∈ A with ‖f − fε‖_∞ < ε. First of all, since A separates points, observe that for given y, z ∈ K there is a function f_{y,z} ∈ A such that f_{y,z}(y) = f(y) and f_{y,z}(z) = f(z)
(show this). Next, for every y ∈ K there is a neighborhood U(y) such that

f_{y,z}(x) > f(x) − ε,  x ∈ U(y),  (1.66)

and since K is compact, finitely many, say U(y₁), …, U(y_j), cover K. Then

f_z = max{f_{y₁,z}, …, f_{y_j,z}} ∈ A

and f_z satisfies f_z > f − ε by construction. Since f_z(z) = f(z) for every z ∈ K, there is a neighborhood V(z) such that

f_z(x) < f(x) + ε,  x ∈ V(z),  (1.67)

and a corresponding finite cover V(z₁), …, V(z_k). Now

fε = min{f_{z₁}, …, f_{z_k}} ∈ A

satisfies fε < f + ε. Since f − ε < f_{z_l} < fε, we have found a required function. □

Theorem 1.15 (Stone–Weierstraß). Suppose K is a compact set and let C(K) be the C∗ algebra of continuous functions (with the sup norm). If F ⊂ C(K) contains the identity 1 and separates points, then the ∗-algebra generated by F is dense.

Proof. Just observe that F̃ = {Re(f), Im(f) | f ∈ F} satisfies the assumptions of the real version. Hence any real-valued continuous function can be approximated by elements from F̃; in particular, this holds for the real and imaginary part of any given complex-valued function. □

Note that the additional requirement of being closed under complex conjugation is crucial: The functions holomorphic on the unit ball and continuous on the boundary separate points, but they are not dense (since the uniform limit of holomorphic functions is again holomorphic).

Corollary 1.16. Suppose K is a compact set and let C(K) be the C∗ algebra of continuous functions (with the sup norm). If F ⊂ C(K) separates points, then the closure of the ∗-algebra generated by F is either C(K) or {f ∈ C(K) | f(t₀) = 0} for some t₀ ∈ K.

Proof. There are two possibilities: either all f ∈ F vanish at one point t₀ ∈ K (there can be at most one such point since F separates points) or there is no such point. If there is such a t₀, the identity is clearly missing from A. However, adding the identity to A we get A + C = C(K), and it is easy to see that A = {f ∈ C(K) | f(t₀) = 0}. If there is no such point, we can proceed as in the proof of the Stone–Weierstraß theorem to show that the identity can be approximated by elements in A (note that to show |f| ∈ A if f ∈ A we do not need the identity, since pn can be chosen to contain no constant term). □
Problem 1.17. Show that the set of functions ϕn(x) = (1/√2π) e^{inx}, n ∈ Z, forms an orthonormal basis for H = L²(0, 2π).

Problem 1.18. Show that the ∗-algebra generated by fz(t) = 1/(t − z), z ∈ C\R, is dense in the C∗ algebra C∞(R) of continuous functions vanishing at infinity. (Hint: Add ∞ to R to make it compact.)
Chapter 2
Selfadjointness and spectrum
2.1. Some quantum mechanics
In quantum mechanics, a single particle living in R3 is described by a complexvalued function (the wave function) ψ(x, t), (x, t) ∈ R3 × R, (2.1)
where x corresponds to a point in space and t corresponds to time. The quantity ρt(x) = |ψ(x, t)|² is interpreted as the probability density of the particle at the time t. In particular, ψ must be normalized according to

∫_{R³} |ψ(x, t)|² d³x = 1,  t ∈ R.  (2.2)
The location x of the particle is a quantity which can be observed (i.e., measured) and is hence called an observable. Due to our probabilistic interpretation it is also a random variable whose expectation is given by

E_ψ(x) = ∫_{R³} x |ψ(x, t)|² d³x.  (2.3)
In a real life setting, it will not be possible to measure x directly, and one will only be able to measure certain functions of x. For example, it is possible to check whether the particle is inside a certain area Ω of space (e.g., inside a detector). The corresponding observable is the characteristic function χΩ(x) of this set. In particular, the number

E_ψ(χΩ) = ∫_{R³} χΩ(x) |ψ(x, t)|² d³x = ∫_Ω |ψ(x, t)|² d³x  (2.4)
corresponds to the probability of finding the particle inside Ω ⊆ R³. An important point to observe is that, in contradistinction to classical mechanics, the particle is no longer localized at a certain point. In particular, the mean-square deviation (or variance) ∆_ψ(x)² = E_ψ(x²) − E_ψ(x)² is always nonzero.

In general, the configuration space (or phase space) of a quantum system is a (complex) Hilbert space H, and the possible states of this system are represented by the elements ψ having norm one, ‖ψ‖ = 1.

An observable a corresponds to a linear operator A in this Hilbert space and its expectation, if the system is in the state ψ, is given by the real number

E_ψ(A) = ⟨ψ, Aψ⟩ = ⟨Aψ, ψ⟩,  (2.5)

where ⟨., .⟩ denotes the scalar product of H. Similarly, the mean-square deviation is given by

∆_ψ(A)² = E_ψ(A²) − E_ψ(A)² = ‖(A − E_ψ(A))ψ‖².  (2.6)
Note that ∆ψ (A) vanishes if and only if ψ is an eigenstate corresponding to the eigenvalue Eψ (A), that is, Aψ = Eψ (A)ψ. From a physical point of view, (2.5) should make sense for any ψ ∈ H. However, this is not in the cards as our simple example of one particle already shows. In fact, the reader is invited to ﬁnd a square integrable function ψ(x) for which xψ(x) is no longer square integrable. The deeper reason behind this nuisance is that Eψ (x) can attain arbitrary large values if the particle is not conﬁned to a ﬁnite domain, which renders the corresponding operator unbounded. But unbounded operators cannot be deﬁned on the entire Hilbert space in a natural way by the closed graph theorem (Theorem 2.7 below) . Hence, A will only be deﬁned on a subset D(A) ⊆ H called the domain of A. Since we want A to at least be deﬁned for most states, we require D(A) to be dense. However, it should be noted that there is no general prescription how to ﬁnd the operator corresponding to a given observable. Now let us turn to the time evolution of such a quantum mechanical system. Given an initial state ψ(0) of the system, there should be a unique ψ(t) representing the state of the system at time t ∈ R. We will write ψ(t) = U (t)ψ(0). (2.7)
Moreover, it follows from physical experiments that superposition of states holds, that is, U(t)(α₁ψ₁(0) + α₂ψ₂(0)) = α₁ψ₁(t) + α₂ψ₂(t) (|α₁|² + |α₂|² = 1). In other words, U(t) should be a linear operator. Moreover,
since ψ(t) is a state (i.e., ‖ψ(t)‖ = 1), we have

‖U(t)ψ‖ = ‖ψ‖.  (2.8)
Such operators are called unitary. Next, since we have assumed uniqueness of solutions to the initial value problem, we must have U (0) = I, U (t + s) = U (t)U (s). (2.9)
A family of unitary operators U(t) having this property is called a one-parameter unitary group. In addition, it is natural to assume that this group is strongly continuous,

lim_{t→t₀} U(t)ψ = U(t₀)ψ,  ψ ∈ H.  (2.10)
Each such group has an infinitesimal generator defined by

Hψ = lim_{t→0} (i/t)(U(t)ψ − ψ),  D(H) = {ψ ∈ H | lim_{t→0} (1/t)(U(t)ψ − ψ) exists}.  (2.11)

This operator is called the Hamiltonian and corresponds to the energy of the system. If ψ(0) ∈ D(H), then ψ(t) is a solution of the Schrödinger equation (in suitable units)

i (d/dt) ψ(t) = Hψ(t).  (2.12)
This equation will be the main subject of our course.

In summary, we have the following axioms of quantum mechanics.

Axiom 1. The configuration space of a quantum system is a complex separable Hilbert space H and the possible states of this system are represented by the elements of H which have norm one.

Axiom 2. Each observable a corresponds to a linear operator A defined maximally on a dense subset D(A). Moreover, the operator corresponding to a polynomial Pₙ(a) = Σ_{j=0}^{n} αⱼaʲ, αⱼ ∈ R, is Pₙ(A) = Σ_{j=0}^{n} αⱼAʲ, D(Pₙ(A)) = D(Aⁿ) = {ψ ∈ D(A) | Aψ ∈ D(Aⁿ⁻¹)} (A⁰ = I).

Axiom 3. The expectation value for a measurement of a, when the system is in the state ψ ∈ D(A), is given by (2.5), which must be real for all ψ ∈ D(A).

Axiom 4. The time evolution is given by a strongly continuous one-parameter unitary group U(t). The generator of this group corresponds to the energy of the system.

In the following sections we will try to draw some mathematical consequences from these assumptions:
First we will see that Axioms 2 and 3 imply that observables correspond to selfadjoint operators. Hence these operators play a central role in quantum mechanics and we will derive some of their basic properties. Another crucial role is played by the set of all possible expectation values for the measurement of a, which is connected with the spectrum σ(A) of the corresponding operator A. The problem of defining functions of an observable will lead us to the spectral theorem (in the next chapter), which generalizes the diagonalization of symmetric matrices. Axiom 4 will be the topic of Chapter 5.
2.2. Selfadjoint operators
Let H be a (complex separable) Hilbert space. A linear operator is a linear mapping
A : D(A) → H, (2.13)
where D(A) is a linear subspace of H, called the domain of A. It is called bounded if the operator norm
‖A‖ = sup_{‖ψ‖=1} ‖Aψ‖ = sup_{‖ϕ‖=‖ψ‖=1} |⟨ψ, Aϕ⟩| (2.14)
is finite. The second equality follows since equality in |⟨ψ, Aϕ⟩| ≤ ‖ψ‖ ‖Aϕ‖ is attained when Aϕ = zψ for some z ∈ C. If A is bounded, it is no restriction to assume D(A) = H and we will always do so. The Banach space of all bounded linear operators is denoted by L(H). The expression ⟨ψ, Aψ⟩ encountered in the previous section is called the quadratic form
q_A(ψ) = ⟨ψ, Aψ⟩, ψ ∈ D(A), (2.15)
associated to A. An operator can be reconstructed from its quadratic form via the polarization identity
⟨ϕ, Aψ⟩ = (1/4) (q_A(ϕ + ψ) − q_A(ϕ − ψ) + i q_A(ϕ − iψ) − i q_A(ϕ + iψ)). (2.16)
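The polarization identity (2.16) is easy to verify numerically for matrices; a small sketch (numpy, with the inner product conjugate-linear in the first argument, as in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# An arbitrary (not necessarily symmetric) complex matrix A.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
phi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
psi = rng.standard_normal(n) + 1j * rng.standard_normal(n)

def q(v):
    # Quadratic form q_A(v) = <v, Av>; np.vdot conjugates its first argument.
    return np.vdot(v, A @ v)

lhs = np.vdot(phi, A @ psi)
rhs = (q(phi + psi) - q(phi - psi) + 1j * q(phi - 1j * psi) - 1j * q(phi + 1j * psi)) / 4
assert np.allclose(lhs, rhs)
```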
A densely defined linear operator A is called symmetric (or Hermitian) if
⟨ϕ, Aψ⟩ = ⟨Aϕ, ψ⟩, ψ, ϕ ∈ D(A). (2.17)
The justification for this definition is provided by the following lemma.

Lemma 2.1. A densely defined operator A is symmetric if and only if the corresponding quadratic form is real-valued.
Proof. Clearly (2.17) implies that Im(q_A(ψ)) = 0. Conversely, taking the imaginary part of the identity
q_A(ψ + iϕ) = q_A(ψ) + q_A(ϕ) + i(⟨ψ, Aϕ⟩ − ⟨ϕ, Aψ⟩) (2.18)
shows Re⟨Aϕ, ψ⟩ = Re⟨ϕ, Aψ⟩. Replacing ϕ by iϕ in this last equation shows Im⟨Aϕ, ψ⟩ = Im⟨ϕ, Aψ⟩ and finishes the proof. □

In other words, a densely defined operator A is symmetric if and only if
⟨ψ, Aψ⟩ = ⟨Aψ, ψ⟩, ψ ∈ D(A). (2.19)
This already narrows the class of admissible operators to the class of symmetric operators by Axiom 3. Next, let us tackle the issue of the correct domain. By Axiom 2, A should be defined maximally, that is, if Ã is another symmetric operator such that A ⊆ Ã, then A = Ã. Here we write A ⊆ Ã if D(A) ⊆ D(Ã) and Aψ = Ãψ for all ψ ∈ D(A). In addition, we write A = Ã if both A ⊆ Ã and Ã ⊆ A hold.

The adjoint operator A* of a densely defined linear operator A is defined by
D(A*) = {ψ ∈ H | ∃ψ̃ ∈ H : ⟨ψ, Aϕ⟩ = ⟨ψ̃, ϕ⟩, ∀ϕ ∈ D(A)}, A*ψ = ψ̃. (2.20)
The requirement that D(A) is dense implies that A* is well-defined. However, note that D(A*) might not be dense in general; in fact, it might contain no vectors other than 0. Clearly we have (αA)* = α*A* for α ∈ C and (A + B)* ⊇ A* + B* provided D(A + B) = D(A) ∩ D(B) is dense. However, equality will not hold in general unless one operator is bounded (Problem 2.1). For later use, note that (Problem 2.2)
Ker(A*) = Ran(A)⊥. (2.21)

For symmetric operators we clearly have A ⊆ A*. If, in addition, A = A* holds, then A is called selfadjoint. Our goal is to show that observables correspond to selfadjoint operators. This is for example true in the case of the position operator x, which is a special case of a multiplication operator.

Example. (Multiplication operator) Consider the multiplication operator
(Af)(x) = A(x)f(x), D(A) = {f ∈ L²(Rⁿ, dµ) | Af ∈ L²(Rⁿ, dµ)}, (2.22)
given by multiplication with the measurable function A : Rⁿ → C. First of all note that D(A) is dense. In fact, consider Ωₙ = {x ∈ Rⁿ | |A(x)| ≤ n} ↗ Rⁿ. Then, for every f ∈ L²(Rⁿ, dµ) the function fₙ = χ_Ωₙ f ∈ D(A) converges to f as n → ∞ by dominated convergence.

Next, let us compute the adjoint of A. Performing a formal computation, we have for h, f ∈ D(A) that
⟨h, Af⟩ = ∫ h(x)* A(x) f(x) dµ(x) = ∫ (A(x)* h(x))* f(x) dµ(x) = ⟨Ãh, f⟩, (2.23)
where Ã is multiplication by A(x)*,
(Ãf)(x) = A(x)* f(x), D(Ã) = {f ∈ L²(Rⁿ, dµ) | Ãf ∈ L²(Rⁿ, dµ)}. (2.24)
Note D(Ã) = D(A). At first sight this seems to show that the adjoint of A is Ã, but our calculation only shows Ã ⊆ A*: for our calculation we had to assume h ∈ D(A), and there might be some functions in D(A*) which do not satisfy this requirement! To show that equality holds, we need to work a little harder: If h ∈ D(A*), there is some g ∈ L²(Rⁿ, dµ) such that
∫ h(x)* A(x) f(x) dµ(x) = ∫ g(x)* f(x) dµ(x), f ∈ D(A), (2.25)
and thus
∫ (h(x) A(x)* − g(x))* f(x) dµ(x) = 0, f ∈ D(A). (2.26)
In particular,
∫ χ_Ωₙ(x) (h(x) A(x)* − g(x))* f(x) dµ(x) = 0, f ∈ L²(Rⁿ, dµ), (2.27)
which shows that χ_Ωₙ (h A* − g)* ∈ L²(Rⁿ, dµ) vanishes. Since n is arbitrary, we even have h(x)A(x)* = g(x) ∈ L²(Rⁿ, dµ), and thus A* is multiplication by A(x)* with D(A*) = D(A). In particular, A is selfadjoint if A is real-valued. In the general case we have at least ‖Af‖ = ‖A*f‖ for all f ∈ D(A) = D(A*). Such operators are called normal. □

Now note that
A ⊆ B ⇒ B* ⊆ A*, (2.28)
that is, increasing the domain of A implies decreasing the domain of A*. Thus there is no point in trying to extend the domain of a selfadjoint operator further. In fact, if A is selfadjoint and B is a symmetric extension, we infer A ⊆ B ⊆ B* ⊆ A* = A, implying A = B.

Corollary 2.2. Selfadjoint operators are maximal, that is, they do not have any symmetric extensions.

Furthermore, if A* is densely defined (which is the case if A is symmetric), we can consider A**. From the definition (2.20) it is clear that A ⊆ A**, and thus A** is an extension of A.
This extension is closely related to extending a linear subspace M via M⊥⊥ = M̄ (as we will see a bit later) and thus is called the closure A̅ = A** of A. If A is symmetric, we have A ⊆ A* and hence A̅ = A** ⊆ A*, that is, A̅ lies between A and A*. Moreover, ⟨ψ, A*ϕ⟩ = ⟨Aψ, ϕ⟩ for all ψ ∈ D(A), ϕ ∈ D(A*) implies that A̅ is symmetric, since A*ϕ = A̅ϕ for ϕ ∈ D(A̅).

Example. (Differential operator) Take H = L²(0, 2π).

(i). Consider the operator
A₀f = −i (d/dx) f, D(A₀) = {f ∈ C¹[0, 2π] | f(0) = f(2π) = 0}. (2.29)
That A₀ is symmetric can be shown by a simple integration by parts (do this). Note that the boundary conditions f(0) = f(2π) = 0 are chosen such that the boundary terms occurring from integration by parts vanish. However, this will also follow once we have computed A₀*. If g ∈ D(A₀*), we must have
∫₀^{2π} g(x)* (−i f′(x)) dx = ∫₀^{2π} g̃(x)* f(x) dx (2.30)
for some g̃ ∈ L²(0, 2π). Integration by parts shows
∫₀^{2π} f′(x) (g(x) − i ∫₀ˣ g̃(t) dt)* dx = 0 (2.31)
and hence g(x) − i ∫₀ˣ g̃(t) dt ∈ {f′ | f ∈ D(A₀)}⊥. But {f′ | f ∈ D(A₀)} = {h ∈ C(0, 2π) | ∫₀^{2π} h(t) dt = 0} = {1}⊥, and {1}⊥⊥ = span{1}, implying g(x) = g(0) + i ∫₀ˣ g̃(t) dt. Thus g ∈ AC[0, 2π], where
AC[a, b] = {f ∈ C[a, b] | f(x) = f(a) + ∫ₐˣ g(t) dt, g ∈ L¹(a, b)} (2.32)
denotes the set of all absolutely continuous functions (see Section 2.6). In summary, g ∈ D(A₀*) implies g ∈ AC[0, 2π] and A₀*g = g̃ = −i g′. Conversely, every g ∈ H¹(0, 2π) = {f ∈ AC[0, 2π] | f′ ∈ L²(0, 2π)} satisfies (2.30) with g̃ = −i g′, and we conclude
A₀*f = −i (d/dx) f, D(A₀*) = H¹(0, 2π). (2.33)
In particular, A₀ is symmetric but not selfadjoint. Since A** = A̅ ⊆ A* we compute
0 = ⟨g, A₀f⟩ − ⟨A₀*g, f⟩ = i(f(0) g(0)* − f(2π) g(2π)*) (2.34)
and since the boundary values of g ∈ D(A₀*) can be prescribed arbitrarily, we must have f(0) = f(2π) = 0. Thus
A̅₀f = −i (d/dx) f, D(A̅₀) = {f ∈ D(A₀*) | f(0) = f(2π) = 0}. (2.35)

(ii). Now let us take
Af = −i (d/dx) f, D(A) = {f ∈ C¹[0, 2π] | f(0) = f(2π)}, (2.36)
which is clearly an extension of A₀. Thus A* ⊆ A₀* and we compute
0 = ⟨g, Af⟩ − ⟨A*g, f⟩ = i f(0)(g(0)* − g(2π)*). (2.37)
Since this must hold for all f ∈ D(A), we conclude g(0) = g(2π) and
A*f = −i (d/dx) f, D(A*) = {f ∈ H¹(0, 2π) | f(0) = f(2π)}. (2.38)
Similarly, as before, A̅ = A**, and in particular A** = A*, that is, the closure of A is selfadjoint.

Example. Compute the eigenvectors of A₀ and A from the previous example.

(i). By definition, an eigenvector is a (nonzero) solution of A₀u = zu, z ∈ C, that is, a solution of the ordinary differential equation
−i u′(x) = z u(x) (2.39)
satisfying the boundary conditions u(0) = u(2π) = 0 (since we must have u ∈ D(A₀)). The general solution of the differential equation is u(x) = u(0)e^{izx} and the boundary conditions imply u(x) = 0. Hence there are no eigenvectors.

(ii). Now we look for solutions of Au = zu, that is, the same differential equation as before, but now subject to the boundary condition u(0) = u(2π). Again the general solution is u(x) = u(0)e^{izx}, and the boundary condition requires u(0) = u(0)e^{2πiz}. Thus there are two possibilities: either u(0) = 0 (which is of no use for us) or z ∈ Z. In particular, we see that all eigenvectors are given by
u_n(x) = (1/√(2π)) e^{inx}, n ∈ Z, (2.40)
which are well known to form an orthonormal basis.

One might suspect that there is no big difference between the two symmetric operators A₀ and A from the previous example, since they coincide on a dense set of vectors. However, the converse is true: For example, the first operator A₀ has no eigenvectors at all (i.e., solutions of the equation A₀ψ = zψ, z ∈ C), whereas the second one has an orthonormal basis of eigenvectors!
We will see a bit later that this is a consequence of selfadjointness of A.

Our example shows that symmetry is easy to check (in the case of differential operators it usually boils down to integration by parts), whereas computing the adjoint of an operator is a nontrivial job even in simple situations. Hence it will be important to know whether a given operator is selfadjoint or not. Moreover, we will learn soon that selfadjointness is a much stronger property than symmetry, justifying the additional effort needed to prove it. Since we have seen that computing A* is not always easy, a criterion for selfadjointness not involving A* will be useful.

Lemma 2.3. Let A be symmetric such that Ran(A + z) = Ran(A + z*) = H for one z ∈ C. Then A is selfadjoint.

Proof. Let ψ ∈ D(A*) and A*ψ = ψ̃. Since Ran(A + z*) = H, there is a ϑ ∈ D(A) such that (A + z*)ϑ = ψ̃ + z*ψ. Now we compute
⟨ψ, (A + z)ϕ⟩ = ⟨ψ̃ + z*ψ, ϕ⟩ = ⟨(A + z*)ϑ, ϕ⟩ = ⟨ϑ, (A + z)ϕ⟩, ϕ ∈ D(A), (2.41)
and hence ψ = ϑ ∈ D(A), since Ran(A + z) = H. □

On the other hand, if a given symmetric operator A turns out not to be selfadjoint, this raises the question of selfadjoint extensions. To proceed further, we will need more information on the closure of an operator. The simplest way of extending an operator A is to take the closure of its graph
Γ(A) = {(ψ, Aψ) | ψ ∈ D(A)} ⊂ H².
If (ψₙ, Aψₙ) → (ψ, ψ̃), we might try to define Aψ = ψ̃. For Aψ to be well-defined, we need that (ψₙ, Aψₙ) → (0, ψ̃) implies ψ̃ = 0. In this case A is called closable, and the unique operator A̅ which satisfies Γ(A̅) = Γ(A)̄ is called the closure of A. Clearly, A is called closed if A = A̅, which is the case if and only if the graph of A is closed. Equivalently, A is closed if and only if Γ(A) equipped with the graph norm ‖ψ‖²_{Γ(A)} = ‖ψ‖² + ‖Aψ‖² is a Hilbert space (i.e., closed). A bounded operator is closed if and only if its domain is closed (show this!).

Two cases need to be distinguished. If the closure A̅ of a symmetric operator A is selfadjoint, then A is called essentially selfadjoint and D(A) is called a core for A̅. In this case there is only one selfadjoint extension: if B is another one, we have A ⊆ B and hence A̅ ⊆ B, implying A̅ = B by Corollary 2.2. Otherwise there might be more than one selfadjoint extension, or none at all. This situation is more delicate and will be investigated in Section 2.5.

Since we have seen that computing A* is not always easy, we will use a different approach which avoids the use of the adjoint operator; we will establish equivalence with our original definition in Lemma 2.4.
Lemma 2.4. Suppose A is a densely defined operator.
(i) A* is closed.
(ii) A is closable if and only if D(A*) is dense, and in this case A̅ = A** and (A̅)* = A*.
(iii) If A is injective and Ran(A) is dense, then (A*)⁻¹ = (A⁻¹)*. If A is closable and A̅ is injective, then (A̅)⁻¹ is the closure of A⁻¹.

Proof. Let us consider the following two unitary operators from H² to itself:
U(ϕ, ψ) = (ψ, −ϕ), V(ϕ, ψ) = (ψ, ϕ). (2.42)
(i). From
Γ(A*) = {(ϕ, ϕ̃) ∈ H² | ⟨ϕ, Aψ⟩ = ⟨ϕ̃, ψ⟩ ∀ψ ∈ D(A)} = {(ϕ, ϕ̃) ∈ H² | ⟨(−ϕ̃, ϕ), (ψ, Aψ)⟩ = 0 ∀(ψ, Aψ) ∈ Γ(A)} = U(Γ(A)⊥) = (U Γ(A))⊥ (2.43)
we conclude that A* is closed.
(ii). From
Γ(A)̄ = Γ(A)⊥⊥ = (U Γ(A*))⊥ = {(ψ, ψ̃) | ⟨ψ, A*ϕ⟩ − ⟨ψ̃, ϕ⟩ = 0, ∀ϕ ∈ D(A*)} (2.44)
we see that (0, ψ̃) ∈ Γ(A)̄ if and only if ψ̃ ∈ D(A*)⊥. Hence A is closable if and only if D(A*) is dense. In this case, equation (2.43) also shows (A̅)* = A*. Moreover, replacing A by A* in (2.43) and comparing with the last formula shows A** = A̅.
(iii). Next note that (provided A is injective)
Γ(A⁻¹) = V Γ(A). (2.45)
Hence, if Ran(A) is dense, then Ker(A*) = Ran(A)⊥ = {0} and
Γ((A*)⁻¹) = V Γ(A*) = V U Γ(A)⊥ = U V Γ(A)⊥ = U (V Γ(A))⊥ (2.46)
shows that (A*)⁻¹ = (A⁻¹)*. Similarly, if A is closable and A̅ is injective, then (A̅)⁻¹ is the closure of A⁻¹ by Γ((A̅)⁻¹) = V Γ(A̅) = V Γ(A)̄ = Γ(A⁻¹)̄. (2.47) □

Example. Let us compute the closure of the operator A₀ from the previous example without the use of the adjoint operator. Let f ∈ D(A̅₀), and let fₙ ∈ D(A₀) be a sequence such that fₙ → f, A₀fₙ → −i g. Then fₙ′ → g and hence f(x) = ∫₀ˣ g(t) dt. Thus f ∈ AC[0, 2π] and f(0) = 0. Moreover, f(2π) = limₙ ∫₀^{2π} fₙ′(t) dt = 0. Conversely, any such f can be approximated by functions in D(A₀) (show this).
Now we can also generalize Lemma 2.3 to the case of essentially selfadjoint operators.

Lemma 2.5. A symmetric operator A is essentially selfadjoint if and only if one of the following conditions holds for one z ∈ C\R:
• Ran(A + z) and Ran(A + z*) are dense in H,
• Ker(A* + z) = Ker(A* + z*) = {0}.
If A is nonnegative, that is ⟨ψ, Aψ⟩ ≥ 0 for all ψ ∈ D(A), we can also admit z ∈ (−∞, 0).

Proof. As noted earlier, Ker(A*) = Ran(A)⊥, and hence the two conditions are equivalent. By taking the closure of A, it is no restriction to assume that A is closed. Let z = x + iy. From
‖(A + z)ψ‖² = ‖(A + x)ψ + iyψ‖² = ‖(A + x)ψ‖² + y²‖ψ‖² ≥ y²‖ψ‖², (2.48)
we infer that Ker(A + z) = {0}, and hence (A + z)⁻¹ exists. Moreover, setting ψ = (A + z)⁻¹ϕ (y ≠ 0) shows ‖(A + z)⁻¹‖ ≤ |y|⁻¹. Hence (A + z)⁻¹ is bounded and closed. Since it is densely defined by assumption, its domain Ran(A + z) must be equal to H. Replacing z by z* and applying Lemma 2.3 finishes the general case. The argument for the nonnegative case with z < 0 is similar, using ε‖ψ‖² ≤ ⟨ψ, (A + ε)ψ⟩ ≤ ‖ψ‖ ‖(A + ε)ψ‖, which shows ‖(A + ε)⁻¹‖ ≤ ε⁻¹, ε > 0. □

Since we have seen that computing A* is not always easy, let us also note the following.

Lemma 2.6. We have A ∈ L(H) if and only if A* ∈ L(H).

Proof. If A ∈ L(H), we clearly have D(A*) = H, and by Corollary 1.8, A* ∈ L(H). Conversely, if A* ∈ L(H), then so is A**, and the claim follows from A ⊆ A**. □

In addition, we can also prove the closed graph theorem, which shows that an unbounded operator cannot be defined on the entire Hilbert space in a natural way.

Theorem 2.7 (closed graph). Let H₁ and H₂ be two Hilbert spaces and A an operator defined on all of H₁. Then A is bounded if and only if Γ(A) is closed.

Proof. If A is bounded, then it is easy to see that Γ(A) is closed. So let us assume that Γ(A) is closed, that is, A is closed. Since A is densely defined, A* is well defined, and for all unit vectors ϕ ∈ D(A*) we have that the linear functional ℓ_ϕ(ψ) = ⟨A*ϕ, ψ⟩ is pointwise bounded,
|ℓ_ϕ(ψ)| = |⟨ϕ, Aψ⟩| ≤ ‖Aψ‖. (2.49)
Hence, by the uniform boundedness principle, there is a constant C such that ‖ℓ_ϕ‖ = ‖A*ϕ‖ ≤ C. That is, A* is bounded, and so is A = A** by Lemma 2.6. □

Finally, we want to draw some further consequences of Axiom 2 and show that observables correspond to selfadjoint operators. Since selfadjoint operators are already maximal, the difficult part remaining is to show that an observable has at least one selfadjoint extension. There is a good way of doing this for nonnegative operators, and hence we will consider this case first.

An operator is called nonnegative (resp. positive) if ⟨ψ, Aψ⟩ ≥ 0 (resp. > 0 for ψ ≠ 0) for all ψ ∈ D(A). If A is positive, the map (ϕ, ψ) → ⟨ϕ, Aψ⟩ is a scalar product. However, there might be sequences which are Cauchy with respect to this scalar product but not with respect to our original one. To avoid this, we introduce the scalar product
⟨ϕ, ψ⟩_A = ⟨ϕ, (A + 1)ψ⟩, A ≥ 0, (2.50)
defined on D(A), which satisfies ‖ψ‖ ≤ ‖ψ‖_A. Let H_A be the completion of D(A) with respect to the above scalar product. We claim that H_A can be regarded as a subspace of H: If (ψₙ) ⊂ D(A) is a Cauchy sequence in H_A, then it is also Cauchy in H (since ‖ψ‖ ≤ ‖ψ‖_A by assumption), and hence we can identify the limit in H_A with the limit of (ψₙ) regarded as a sequence in H. For this identification to be unique, we need to show that if (ψₙ) ⊂ D(A) is a Cauchy sequence in H_A such that ψₙ → 0 in H, then ‖ψₙ‖_A → 0. This follows from
‖ψₙ‖²_A = ⟨ψₙ, ψₙ − ψₘ⟩_A + ⟨ψₙ, ψₘ⟩_A ≤ ‖ψₙ‖_A ‖ψₙ − ψₘ‖_A + ‖ψₙ‖ ‖(A + 1)ψₘ‖, (2.51)
since the right-hand side can be made arbitrarily small choosing m, n large. In summary,
D(A) ⊆ H_A ⊆ H.
Clearly the quadratic form q_A can be extended to every ψ ∈ H_A by setting
q_A(ψ) = ⟨ψ, ψ⟩_A − ‖ψ‖², ψ ∈ Q(A) = H_A. (2.52)
The set Q(A) is also called the form domain of A.

Example. (Multiplication operator) Let A be multiplication by A(x) ≥ 0 in L²(Rⁿ, dµ). Then
Q(A) = D(A^{1/2}) = {f ∈ L²(Rⁿ, dµ) | A^{1/2}f ∈ L²(Rⁿ, dµ)} (2.53)
and
q_A(f) = ∫_{Rⁿ} A(x) |f(x)|² dµ(x) (2.54)
(show this).
Now we come to our extension result. Note that A + 1 is injective, and the best we can hope for is that a nonnegative extension Ã is such that Ã + 1 is a bijection from D(Ã) onto H.

Lemma 2.8. Suppose A is a nonnegative operator. Then there is a nonnegative extension Ã such that Ran(Ã + 1) = H.

Proof. Let us define an operator Ã by
D(Ã) = {ψ ∈ H_A | ∃ψ̃ ∈ H : ⟨ϕ, ψ⟩_A = ⟨ϕ, ψ̃⟩, ∀ϕ ∈ H_A}, Ãψ = ψ̃ − ψ. (2.55)
Since H_A is dense, ψ̃ is well-defined. Moreover, it is straightforward to see that Ã is a nonnegative extension of A. It is also not hard to see that Ran(Ã + 1) = H: Indeed, for any ψ̃ ∈ H, the map ϕ → ⟨ψ̃, ϕ⟩ is a bounded linear functional on H_A. Hence there is an element ψ ∈ H_A such that ⟨ψ̃, ϕ⟩ = ⟨ψ, ϕ⟩_A for all ϕ ∈ H_A. By the definition of Ã, (Ã + 1)ψ = ψ̃, and hence Ã + 1 is onto. □

Now it is time for another example. Take H = L²(0, π) and consider the operator
Af = −(d²/dx²) f, D(A) = {f ∈ C²[0, π] | f(0) = f(π) = 0}, (2.56)
which corresponds to the one-dimensional model of a particle confined to a box.

(i). First of all, using integration by parts twice, it is straightforward to check that A is symmetric:
∫₀^π g(x)* (−f″)(x) dx = ∫₀^π g′(x)* f′(x) dx = ∫₀^π (−g″)(x)* f(x) dx. (2.57)
Note that the boundary conditions f(0) = f(π) = 0 are chosen such that the boundary terms occurring from integration by parts vanish. In fact, the same calculation also shows that A is positive:
∫₀^π f(x)* (−f″)(x) dx = ∫₀^π |f′(x)|² dx > 0, f ≠ 0. (2.58)

(ii). Next let us show H_A = {f ∈ H¹(0, π) | f(0) = f(π) = 0}. In fact, since
⟨g, f⟩_A = ∫₀^π (g′(x)* f′(x) + g(x)* f(x)) dx, (2.59)
we see that fₙ is Cauchy in H_A if and only if both fₙ and fₙ′ are Cauchy in L²(0, π). Thus fₙ → f and fₙ′ → g in L²(0, π), and fₙ(x) = ∫₀ˣ fₙ′(t) dt implies f(x) = ∫₀ˣ g(t) dt. Thus f ∈ AC[0, π]. Moreover, f(0) = 0 is obvious, and from 0 = fₙ(π) = ∫₀^π fₙ′(t) dt we have f(π) = limₙ→∞ ∫₀^π fₙ′(t) dt = 0. So we have H_A ⊆ {f ∈ H¹(0, π) | f(0) = f(π) = 0}. To see the converse, approximate f′ by smooth functions gₙ. Using gₙ − (1/π)∫₀^π gₙ(t) dt instead of gₙ, it is no restriction to assume ∫₀^π gₙ(t) dt = 0. Now define fₙ(x) = ∫₀ˣ gₙ(t) dt and note fₙ ∈ D(A) → f.

(iii). Finally, let us compute the extension Ã. We have f ∈ D(Ã) if, for all g ∈ H_A, there is an f̃ such that ⟨g, f⟩_A = ⟨g, f̃⟩, that is,
∫₀^π g′(x)* f′(x) dx = ∫₀^π g(x)* (f̃(x) − f(x)) dx. (2.60)
Integration by parts on the right-hand side shows
∫₀^π g′(x)* f′(x) dx = −∫₀^π g′(x)* ∫₀ˣ (f̃(t) − f(t)) dt dx (2.61)
or equivalently
∫₀^π g′(x)* ( f′(x) + ∫₀ˣ (f̃(t) − f(t)) dt ) dx = 0. (2.62)
Now observe {g′ | g ∈ H_A} = {h ∈ H | ∫₀^π h(t) dt = 0} = {1}⊥, and thus f′(x) + ∫₀ˣ (f̃(t) − f(t)) dt ∈ {1}⊥⊥ = span{1}. So we see f ∈ H²(0, π) = {f ∈ AC[0, π] | f′ ∈ H¹(0, π)} and Ãf = −f″. The converse is easy, and hence
Ãf = −(d²/dx²) f, D(Ã) = {f ∈ H²[0, π] | f(0) = f(π) = 0}. (2.63)

Now let us apply this result to operators A corresponding to observables. Since A will, in general, not satisfy the assumptions of our lemma, we will consider 1 + A², D(1 + A²) = D(A²), instead, which has a symmetric extension whose range is H. By our requirement for observables, 1 + A² is maximally defined and hence is equal to this extension. In other words, Ran(1 + A²) = H. That is, for any ϕ ∈ H there is a ψ ∈ D(A²) such that
(A − i)(A + i)ψ = (A + i)(A − i)ψ = ϕ (2.64)
and since (A ± i)ψ ∈ D(A), we infer Ran(A ± i) = H. As an immediate consequence we obtain

Corollary 2.9. Observables correspond to selfadjoint operators.

But there is another important consequence of the results which is worthwhile mentioning.
Theorem 2.10 (Friedrichs extension). Let A be a semibounded symmetric operator, that is,
q_A(ψ) = ⟨ψ, Aψ⟩ ≥ γ ‖ψ‖², γ ∈ R. (2.65)
Then there is a selfadjoint extension Ã which is also bounded from below by γ and which satisfies D(Ã) ⊆ H_{A−γ}.

Proof. Replacing A by A − γ, we can reduce it to the case considered in Lemma 2.8. The rest is straightforward. □

Problem 2.1. Show (αA)* = α*A* and (A + B)* ⊇ A* + B* (where D(A* + B*) = D(A*) ∩ D(B*)) with equality if one operator is bounded. Give an example where equality does not hold.
Problem 2.2. Show (2.21).
Problem 2.3. Show that if A is normal, so is A + z for any z ∈ C.
Problem 2.4. Show that normal operators are closed.
Problem 2.5. Show that the kernel of a closed operator is closed.
Problem 2.6. Show that if A is bounded and B closed, then BA is closed.
Problem 2.7. Show that A*A (with D(A*A) = {ψ ∈ D(A) | Aψ ∈ D(A*)}) is selfadjoint. (Hint: A*A ≥ 0.)
Problem 2.8. Show that A is normal if and only if AA* = A*A.
Problem 2.9. Let A = −d²/dx², D(A) = {f ∈ H²(0, π) | f(0) = f(π) = 0}, and let ψ(x) = (1/(2√π)) x(π − x). Find the error in the following argument: Since A is symmetric, we have 1 = ⟨Aψ, Aψ⟩ = ⟨ψ, A²ψ⟩ = 0, since A²ψ = 0.

2.3. Resolvents and spectra

Let A be a (densely defined) closed operator. The resolvent set of A is defined by
ρ(A) = {z ∈ C | (A − z)⁻¹ ∈ L(H)}. (2.66)
More precisely, z ∈ ρ(A) if and only if (A − z) : D(A) → H is bijective and its inverse is bounded. By the closed graph theorem (Theorem 2.7), it suffices to check that A − z is bijective. The complement of the resolvent set is called the spectrum
σ(A) = C\ρ(A) (2.67)
of A. In particular, z ∈ σ(A) if A − z has a nontrivial kernel. A vector ψ ∈ Ker(A − z) is called an eigenvector, and z is called an eigenvalue in this case.
The function
R_A : ρ(A) → L(H), z → (A − z)⁻¹, (2.68)
is called the resolvent of A. Note the convenient formula
R_A(z)* = ((A − z)⁻¹)* = ((A − z)*)⁻¹ = (A* − z*)⁻¹ = R_{A*}(z*). (2.69)
In particular, ρ(A*) = ρ(A)*.

Example. (Multiplication operator) Consider again the multiplication operator
(Af)(x) = A(x)f(x), D(A) = {f ∈ L²(Rⁿ, dµ) | Af ∈ L²(Rⁿ, dµ)}, (2.70)
given by multiplication with the measurable function A : Rⁿ → C. Clearly (A − z)⁻¹ is given by the multiplication operator
(A − z)⁻¹f(x) = (1/(A(x) − z)) f(x), D((A − z)⁻¹) = {f ∈ L²(Rⁿ, dµ) | (1/(A − z)) f ∈ L²(Rⁿ, dµ)}, (2.71)
whenever this operator is bounded. But
‖(A − z)⁻¹‖ = ‖1/(A − z)‖_∞, (2.72)
and ‖(A − z)⁻¹‖ ≤ ε⁻¹ is equivalent to µ({x | |A(x) − z| < ε}) = 0. Hence
ρ(A) = {z ∈ C | ∃ε > 0 : µ({x | |A(x) − z| ≤ ε}) = 0}. (2.73)
Moreover, z is an eigenvalue of A if µ(A⁻¹({z})) > 0, and χ_{A⁻¹({z})} is a corresponding eigenfunction in this case.

Example. (Differential operator) Consider again the differential operator
Af = −i (d/dx) f, D(A) = {f ∈ AC[0, 2π] | f′ ∈ L², f(0) = f(2π)}, (2.74)
in L²(0, 2π). We already know that the eigenvalues of A are the integers and that the corresponding normalized eigenfunctions
u_n(x) = (1/√(2π)) e^{inx}
form an orthonormal basis. To compute the resolvent, we must find the solution of the corresponding inhomogeneous equation
−i f′(x) − z f(x) = g(x). (2.75)
By the variation of constants formula, the solution is given by (this can also be easily verified directly)
f(x) = f(0) e^{izx} + i ∫₀ˣ e^{iz(x−t)} g(t) dt. (2.76)
Since f must lie in the domain of A, we must have f(0) = f(2π), which gives
f(0) = (i/(e^{−2πiz} − 1)) ∫₀^{2π} e^{−izt} g(t) dt, z ∈ C\Z. (2.77)
(Since z ∈ Z are the eigenvalues, the inverse cannot exist in this case.) Hence
(A − z)⁻¹ g(x) = ∫₀^{2π} G(z, x, t) g(t) dt, (2.78)
where
G(z, x, t) = e^{iz(x−t)} · ( −i/(1 − e^{−2πiz}) for t > x, i/(1 − e^{2πiz}) for t < x ), z ∈ C\Z. (2.79)
In particular, σ(A) = Z.

If z, z′ ∈ ρ(A), we have the first resolvent formula
R_A(z) − R_A(z′) = (z − z′) R_A(z) R_A(z′) = (z − z′) R_A(z′) R_A(z). (2.80)
In fact,
(A − z)⁻¹ − (z − z′)(A − z)⁻¹(A − z′)⁻¹ = (A − z)⁻¹(1 − (z − A + A − z′)(A − z′)⁻¹) = (A − z′)⁻¹, (2.81)
which proves the first equality. The second follows after interchanging z and z′. Now fix z′ = z₀ and use (2.80) recursively to obtain
R_A(z) = Σ_{j=0}^{n} (z − z₀)ʲ R_A(z₀)^{j+1} + (z − z₀)^{n+1} R_A(z₀)^{n+1} R_A(z). (2.82)
The sequence of bounded operators
Rₙ = Σ_{j=0}^{n} (z − z₀)ʲ R_A(z₀)^{j+1} (2.83)
converges to a bounded operator if |z − z₀| < ‖R_A(z₀)‖⁻¹, and clearly we expect z ∈ ρ(A) and Rₙ → R_A(z) in this case. Let R_∞ = limₙ→∞ Rₙ and set ϕₙ = Rₙψ for ψ ∈ H. Then a quick calculation shows
A Rₙψ = ψ + (z − z₀)ϕₙ₋₁ + z₀ϕₙ. (2.84)
Hence (ϕₙ, Aϕₙ) → (ϕ, ψ + zϕ) shows ϕ = R_∞ψ ∈ D(A) (since A is closed) and (A − z)R_∞ψ = ψ. Similarly, for ψ ∈ D(A),
RₙAψ = ψ + (z − z₀)ϕₙ₋₁ + z₀ϕₙ (2.85)
and hence R_∞(A − z)ψ = ψ after taking the limit. Thus R_∞ = R_A(z) as anticipated.
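In finite dimensions the first resolvent formula (2.80) can be checked directly; a minimal numpy sketch (the matrix and the points z₁, z₂ are arbitrary choices; the matrix is taken symmetric so that nonreal points lie in the resolvent set):

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                       # real symmetric: real spectrum
I = np.eye(6)

def R(z):
    # Resolvent R_A(z) = (A - z)^{-1}, defined for z off the spectrum.
    return np.linalg.inv(A - z * I)

z1, z2 = 1.0 + 2.0j, -0.5 + 1.0j
# First resolvent formula, both orderings (the two resolvents commute).
assert np.allclose(R(z1) - R(z2), (z1 - z2) * R(z1) @ R(z2))
assert np.allclose(R(z1) - R(z2), (z1 - z2) * R(z2) @ R(z1))
```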
If A is bounded, a similar argument verifies the Neumann series for the resolvent,
R_A(z) = −Σ_{j=0}^{n−1} Aʲ/z^{j+1} + (1/zⁿ) Aⁿ R_A(z) = −Σ_{j=0}^{∞} Aʲ/z^{j+1}, |z| > ‖A‖. (2.86)
In summary, we have proved the following

Theorem 2.11. The resolvent set ρ(A) is open and R_A : ρ(A) → L(H) is holomorphic, that is, it has an absolutely convergent power series expansion around every point z₀ ∈ ρ(A). In addition,
‖R_A(z)‖ ≥ dist(z, σ(A))⁻¹, (2.87)
and if A is bounded, we have {z ∈ C | |z| > ‖A‖} ⊆ ρ(A).

As a consequence we obtain the useful

Lemma 2.12. We have z ∈ σ(A) if there is a sequence ψₙ ∈ D(A) such that ‖ψₙ‖ = 1 and ‖(A − z)ψₙ‖ → 0. Such a sequence is called a Weyl sequence. If z is a boundary point of ρ(A), then the converse is also true.

Proof. Let ψₙ be a Weyl sequence. Then z ∈ ρ(A) is impossible by
1 = ‖ψₙ‖ = ‖R_A(z)(A − z)ψₙ‖ ≤ ‖R_A(z)‖ ‖(A − z)ψₙ‖ → 0. (2.88)
Conversely, by (2.87) there is a sequence zₙ → z and corresponding vectors ϕₙ ∈ H such that ‖R_A(zₙ)ϕₙ‖ ‖ϕₙ‖⁻¹ → ∞. Let ψₙ = R_A(zₙ)ϕₙ and rescale ϕₙ such that ‖ψₙ‖ = 1. Then ‖ϕₙ‖ → 0 and hence
‖(A − z)ψₙ‖ = ‖ϕₙ + (zₙ − z)ψₙ‖ ≤ ‖ϕₙ‖ + |z − zₙ| → 0 (2.89)
shows that ψₙ is a Weyl sequence. □

Let us also note the following spectral mapping result.

Lemma 2.13. Suppose A is injective. Then
σ(A⁻¹)\{0} = (σ(A)\{0})⁻¹. (2.90)
In addition, we have Aψ = zψ if and only if A⁻¹ψ = z⁻¹ψ.

Proof. Suppose z ∈ ρ(A)\{0}. Then we claim
R_{A⁻¹}(z⁻¹) = −zA R_A(z) = −z − z² R_A(z). (2.91)
In fact, the right-hand side is a bounded operator from H → Ran(A) = D(A⁻¹), and
(A⁻¹ − z⁻¹)(−zA R_A(z))ϕ = (−z + A) R_A(z)ϕ = ϕ, ϕ ∈ H.
Conversely, if ψ ∈ D(A⁻¹) = Ran(A), we have ψ = Aϕ and hence
(−zA R_A(z))(A⁻¹ − z⁻¹)ψ = A R_A(z)((A − z)ϕ) = Aϕ = ψ. (2.92)
Thus z⁻¹ ∈ ρ(A⁻¹). The rest follows after interchanging the roles of A and A⁻¹. □

Next, let us characterize the spectra of selfadjoint operators.
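For bounded A the Neumann series (2.86) can likewise be verified numerically (the truncation order and the value of z are ad hoc; z is chosen with |z| well above ‖A‖ so that the series converges fast):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2                       # symmetric; ||A|| is a few units here
I = np.eye(6)
z = 100.0                               # |z| > ||A||, so the series converges
R = np.linalg.inv(A - z * I)
# Partial sum of -sum_j A^j / z^(j+1); the terms decay like (||A||/|z|)^j.
S = -sum(np.linalg.matrix_power(A, j) / z ** (j + 1) for j in range(25))
assert np.allclose(S, R)
```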
Theorem 2.14. Let A be selfadjoint. Then R_A(z) exists for z ∈ C\R and
‖R_A(z)‖ ≤ |Im(z)|⁻¹, and, if A ≥ 0, R_A(λ) exists for λ < 0 with ‖R_A(λ)‖ ≤ |λ|⁻¹. (2.93)

Proof. The resolvents exist on C\R (resp. C\(−∞, 0]) and satisfy the given estimates, as has been shown in the proof of Lemma 2.5. □

Moreover, if A ≥ 0, then inf σ(A) = inf_{ψ∈D(A), ‖ψ‖=1} ⟨ψ, Aψ⟩.

For the eigenvalues and corresponding eigenfunctions we have:

Lemma 2.15. Let A be symmetric. Then all eigenvalues are real, and eigenvectors corresponding to different eigenvalues are orthogonal.

Proof. If Aψⱼ = λⱼψⱼ, j = 1, 2, we have λ₁‖ψ₁‖² = ⟨ψ₁, Aψ₁⟩ = ⟨Aψ₁, ψ₁⟩ = λ₁*‖ψ₁‖² and
(λ₁ − λ₂)⟨ψ₁, ψ₂⟩ = ⟨Aψ₁, ψ₂⟩ − ⟨ψ₁, Aψ₂⟩ = 0, (2.94)
finishing the proof. □

The result does not imply that two linearly independent eigenfunctions to the same eigenvalue are orthogonal. However, it is no restriction to assume that they are, since we can use Gram-Schmidt to find an orthonormal basis for Ker(A − λ). If H is finite dimensional, we can always find an orthonormal basis of eigenvectors. In the infinite dimensional case this is no longer true in general. However, if there is an orthonormal basis of eigenvectors, then A is essentially selfadjoint.

Theorem 2.16. Suppose A is a symmetric operator which has an orthonormal basis of eigenfunctions {ϕⱼ}. Then A is essentially selfadjoint. In particular, it is essentially selfadjoint on span{ϕⱼ}.

Proof. Consider the set of all finite linear combinations ψ = Σ_{j=0}^{n} cⱼϕⱼ, which is dense in H. Then
φ = Σ_{j=0}^{n} (cⱼ/(λⱼ ± i)) ϕⱼ ∈ D(A) (2.95)
and (A ± i)φ = ψ shows that Ran(A ± i) is dense. Hence A is essentially selfadjoint by Lemma 2.5. □

Theorem 2.17. Let A be a closed symmetric operator. Then A is selfadjoint if and only if σ(A) ⊆ R, and A = A* ≥ 0 if and only if σ(A) ⊆ [0, ∞).

Proof. If σ(A) ⊆ R, then Ran(A + z) = H, z ∈ C\R, and hence A is selfadjoint by Lemma 2.3; similarly, if σ(A) ⊆ [0, ∞), then Ran(A + λ) = H for λ < 0. The converse directions follow from Theorem 2.14. □
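In finite dimensions these statements can be sanity-checked with a Hermitian matrix as a stand-in for a selfadjoint operator (a sketch; all names are illustrative, and the equality of the resolvent norm with the inverse distance to the spectrum is special to normal operators):

```python
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (B + B.conj().T) / 2                # Hermitian matrix
evals, V = np.linalg.eigh(A)

# Eigenvalues are real (eigh returns floats) and eigenvectors orthonormal.
assert np.allclose(A @ V, V * evals)
assert np.allclose(V.conj().T @ V, np.eye(5))

# Resolvent bound ||R_A(z)|| <= |Im z|^{-1}; for Hermitian A it equals
# dist(z, sigma(A))^{-1}.
z = 0.3 + 0.8j
Rnorm = np.linalg.norm(np.linalg.inv(A - z * np.eye(5)), 2)
assert Rnorm <= 1 / abs(z.imag) + 1e-12
assert np.isclose(Rnorm, 1 / np.min(np.abs(evals - z)))
```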
(2. U −1 is also unitary and hence σ(U ) ⊆ {z ∈ C z ≥ 1} by Lemma 2. then σ(U ) ⊆ {z ∈ C z = 1}. Recall that a bijection U is called unitary if U ψ. j = 1. Thus U is unitary if and only if U ∗ = U −1 . D(A) = {f ∈ H 2 . 2 we have (z1 − z2 ) ψ1 . U ψ2 = 0 since U ψ = zψ implies U ∗ψ = U −1 ψ = z −1 ψ = z ∗ ψ. ﬁnishing the proof. z j+1 z as Im(z) → ∞. we can characterize the spectra of unitary operators.48).18. It suﬃces to prove the ﬁrst claim since the second then follows as in (2. Then φ = j=0 λj ±i ϕj ∈ D(A) and (A ± i)φ = ψ shows that Ran(A ± i) is dense. What is the spectrum of an orthogonal projection? Problem 2. Compute the resolvent of A. Let U be unitary. U ∗ U ψ = ψ. Lemma 2.19.98) Problem 2. Then ARA (z)ψ ≤ ≤ by (2. (2. then n RA (z)ψ = − j=0 1 Aj ψ + o( n+1 ).99) Theorem 2.13. U ψ = ψ. Im(z) (2. D(A) = {f ∈ H 1 [0. 2 (0.11.100) ˜ RA (z)Aψ + ARA (z)ϕ ˜ Aψ + ϕ . Proof. Suppose A is selfadjoint. Compute the eigenvalues and eigenvectors of A = − dx2 .66 2.10. ˜ ˜ Write ψ = ψ + ϕ.97) Proof. d Problem 2. For every ψ ∈ H we have lim Im(z)→∞ ARA (z)ψ = 0.96) In particular. Since U ≤ 1 we have σ(U ) ⊆ {z ∈ C z ≤ 1}.12. we note the following asymptotic expansion for the resolvent. π)f (0) = f (π) = 0}. ψ2 = U ∗ ψ1 . Similarly. ψ2 − ψ1 . If U ψj = zj ψj . All eigenvalues have modulus one and eigenvectors corresponding to diﬀerent eigenvalues are orthogonal. where ψ ∈ D(A) and ϕ ≤ ε. 1]  f (0) = 0} and show that unbounded operators can have empty spectrum. Consider the set of all ﬁnite linear combinations ψ = j=0 cj ϕj cj n which is dense in H. ψ . Moreover. (2. Selfadjointness and spectrum n Proof. if ψ ∈ D(An ).86). (2. Compute the resolvent of Af = f . In addition.
Problem 2.13. Find a Weyl sequence for the self-adjoint operator A = −d²/dx², D(A) = H²(R), for z ∈ (0, ∞). What is σ(A)? (Hint: Cut off the solutions of −u″(x) = z u(x) outside a finite ball.)

Problem 2.14. Suppose A is bounded. Show that the spectra of AA* and A*A coincide away from 0 by showing
R_{AA*}(z) = (1/z)(A R_{A*A}(z) A* − 1), R_{A*A}(z) = (1/z)(A* R_{AA*}(z) A − 1). (2.101)

2.4. Orthogonal sums of operators

Let H_j, j = 1, 2, be two given Hilbert spaces and let A_j : D(A_j) → H_j be two given operators. Setting H = H_1 ⊕ H_2, we can define an operator
A = A_1 ⊕ A_2, D(A) = D(A_1) ⊕ D(A_2) (2.102)
by setting A(ψ_1 + ψ_2) = A_1ψ_1 + A_2ψ_2 for ψ_j ∈ D(A_j). Clearly A is closed, (essentially) self-adjoint, etc., if and only if both A_1 and A_2 are. The same considerations apply to countable orthogonal sums
A = ⊕_j A_j, D(A) = ⊕_j D(A_j) (2.103)
and we have

Theorem 2.20. Suppose A_j are self-adjoint operators on H_j. Then A = ⊕_j A_j is self-adjoint and
R_A(z) = ⊕_j R_{A_j}(z), z ∈ ρ(A) = C\σ(A), (2.104)
where
σ(A) = closure of ⋃_j σ(A_j) (2.105)
(the closure can be omitted if there are only finitely many terms).

Proof. By Ran(A ± i) = (A ± i)D(A) = ⊕_j (A_j ± i)D(A_j) = ⊕_j H_j = H we see that A is self-adjoint. Moreover, if z ∈ σ(A_j), there is a corresponding Weyl sequence ψ_n ∈ D(A_j) ⊆ D(A) and hence z ∈ σ(A). Conversely, if z ∉ ⋃_j σ(A_j), set ε = d(z, ⋃_j σ(A_j)) > 0. Then ‖R_{A_j}(z)‖ ≤ ε⁻¹ and hence ‖⊕_j R_{A_j}(z)‖ ≤ ε⁻¹ shows that z ∈ ρ(A).

Conversely, given an operator A, it might be useful to write A as an orthogonal sum and investigate each part separately. Let H_1 ⊆ H be a closed subspace and let P_1 be the corresponding projector. We say that H_1 reduces the operator A if P_1 A ⊆ A P_1. Note that this
implies P_1 D(A) ⊆ D(A). Moreover, if we set H_2 = H_1⊥, we have H = H_1 ⊕ H_2 and P_2 = I − P_1 reduces A as well.

Lemma 2.21. Suppose H_1 ⊆ H reduces A. Then A = A_1 ⊕ A_2, where
A_j ψ = Aψ, D(A_j) = P_j D(A) ⊆ D(A). (2.106)
If A is closable, then H_1 also reduces A‾ and
A‾ = A‾_1 ⊕ A‾_2. (2.107)

Proof. As already noted, P_1 D(A) ⊆ D(A) and hence P_2 D(A) = (I − P_1) D(A) ⊆ D(A). Thus we see D(A) = D(A_1) ⊕ D(A_2). Moreover, if ψ ∈ D(A_j), we have Aψ = A P_j ψ = P_j Aψ ∈ H_j and thus A_j : D(A_j) → H_j, which proves the first claim.

Now let us turn to the second claim. Clearly A‾ ⊆ A‾_1 ⊕ A‾_2. Conversely, suppose ψ ∈ D(A‾). Then there is a sequence ψ_n ∈ D(A) such that ψ_n → ψ and A ψ_n → A‾ψ. Then P_j ψ_n → P_j ψ and A P_j ψ_n = P_j A ψ_n → P_j A‾ψ. In particular, P_j ψ ∈ D(A‾) and A‾ P_j ψ = P_j A‾ψ.

If A is self-adjoint, then H_1 reduces A if P_1 D(A) ⊆ D(A) and A P_1 ψ ∈ H_1 for every ψ ∈ D(A). In fact, if ψ ∈ D(A) we can write ψ = ψ_1 ⊕ ψ_2, ψ_j = P_j ψ ∈ D(A). Since A P_1 ψ = A ψ_1 and P_1 A ψ = P_1 A ψ_1 + P_1 A ψ_2 = A ψ_1 + P_1 A ψ_2, we need to show P_1 A ψ_2 = 0. But this follows since ⟨ϕ, P_1 A ψ_2⟩ = ⟨A P_1 ϕ, ψ_2⟩ = 0 for every ϕ ∈ D(A).

Problem 2.15. Show (A_1 ⊕ A_2)* = A_1* ⊕ A_2*. (2.108)

2.5. Self-adjoint extensions

It is safe to skip this entire section on first reading.

In many physical applications a symmetric operator is given. If this operator turns out to be essentially self-adjoint, there is a unique self-adjoint extension and everything is fine. However, if it is not, it is important to find out if there are self-adjoint extensions at all (for physical problems there better are) and to classify them.

In Section 2.2 we have seen that A is essentially self-adjoint if Ker(A* − z) = Ker(A* − z*) = {0} for one z ∈ C\R. Hence self-adjointness is related to the dimension of these spaces, and one calls the numbers
d_±(A) = dim K_±, K_± = Ran(A ± i)⊥ = Ker(A* ∓ i), (2.109)
defect indices of A (we have chosen z = i for simplicity; any other z ∈ C\R would be as good). If d_−(A) = d_+(A) = 0, there is one self-adjoint extension of A, namely A‾. But what happens in the general case? Is there more than
one extension, or maybe none at all? These questions can be answered by virtue of the Cayley transform
V = (A − i)(A + i)⁻¹ : Ran(A + i) → Ran(A − i). (2.110)

Theorem 2.22. The Cayley transform is a bijection from the set of all symmetric operators A to the set of all isometric operators V (i.e., ‖Vϕ‖ = ‖ϕ‖ for all ϕ ∈ D(V)) for which Ran(1 + V) is dense.

Proof. Since A is symmetric, we have ‖(A ± i)ψ‖² = ‖Aψ‖² + ‖ψ‖² for all ψ ∈ D(A) by a straightforward computation. And thus for every ϕ = (A + i)ψ ∈ D(V) = Ran(A + i) we have
‖Vϕ‖ = ‖(A − i)ψ‖ = ‖(A + i)ψ‖ = ‖ϕ‖. (2.111)
Next observe
1 + V = ((A + i) + (A − i))(A + i)⁻¹ = 2A(A + i)⁻¹, 1 − V = ((A + i) − (A − i))(A + i)⁻¹ = 2i(A + i)⁻¹, (2.112)
which shows D(A) = Ran(1 − V) and
A = i(1 + V)(1 − V)⁻¹. (2.113)

Conversely, let V be given and use the last equation to define A. Since V is isometric, we have ⟨(1 ± V)ϕ, (1 ∓ V)ϕ⟩ = ±2i Im⟨Vϕ, ϕ⟩ for all ϕ ∈ D(V) by a straightforward computation. And thus for every ψ = (1 − V)ϕ ∈ D(A) = Ran(1 − V) we have
⟨Aψ, ψ⟩ = −i ⟨(1 + V)ϕ, (1 − V)ϕ⟩ = i ⟨(1 − V)ϕ, (1 + V)ϕ⟩ = ⟨ψ, Aψ⟩, (2.114)
that is, A is symmetric. Finally observe
A ± i = ((1 + V) ± (1 − V))(1 − V)⁻¹, that is, A + i = 2i(1 − V)⁻¹, A − i = 2iV(1 − V)⁻¹, (2.115)
which shows that A is the Cayley transform of V and finishes the proof.

Thus A is self-adjoint if and only if its Cayley transform V is unitary. Moreover, finding a self-adjoint extension of A is equivalent to finding a unitary extension of V, and this in turn is equivalent to (taking the closure and) finding a unitary operator from D(V)⊥ to Ran(V)⊥. This is possible if and only if both spaces have the same dimension, that is, if and only if d_+(A) = d_−(A).

Theorem 2.23. A symmetric operator has self-adjoint extensions if and only if its defect indices are equal. In this case let A_1 be a self-adjoint extension and V_1 its Cayley transform. Then
D(A_1) = D(A) + (1 − V_1)K_+ = {ψ + ϕ_+ − V_1ϕ_+ | ψ ∈ D(A), ϕ_+ ∈ K_+} (2.116)
and
A_1(ψ + ϕ_+ − V_1ϕ_+) = Aψ + iϕ_+ + iV_1ϕ_+. (2.117)
Moreover,
(A_1 ± i)⁻¹ = (A ± i)⁻¹ ⊕ (i/2) Σ_j ⟨ϕ_j^±, ·⟩ (ϕ_j^∓ − ϕ_j^±), (2.118)
where {ϕ_j^+} is an orthonormal basis for K_+ and ϕ_j^− = V_1 ϕ_j^+.

First of all we note that instead of z = i we could use V(z) = (A + z*)(A + z)⁻¹ for any z ∈ C\R. Let d_±(z) = dim K_±(z), K_+(z) = Ran(A + z)⊥ respectively K_−(z) = Ran(A + z*)⊥. The same arguments as before show that there is a one-to-one correspondence between the self-adjoint extensions of A and the unitary operators on C^{d(z)}. Hence d(z_1) = d(z_2) = d_±(A), and we have

Corollary 2.24. Suppose A is a closed symmetric operator with equal defect indices d = d_+(A) = d_−(A). Then dim Ker(A* − z*) = d for all z ∈ C\R.

Example. Recall the operator A = −i d/dx, D(A) = {f ∈ H¹(0, 2π) | f(0) = f(2π) = 0}, with adjoint A* = −i d/dx, D(A*) = H¹(0, 2π). Clearly K_± = span{e^{∓x}} is one dimensional, and hence all unitary maps are of the form
V_θ e^{2π−x} = e^{iθ} e^x, θ ∈ [0, 2π). (2.119)
The functions in the domain of the corresponding operator A_θ are given by
f_θ(x) = f(x) + α(e^{2π−x} − e^{iθ}e^x), f ∈ D(A), α ∈ C. (2.120)
In particular, f_θ satisfies
f_θ(2π) = ẽ^{iθ} f_θ(0), ẽ^{iθ} = (1 − e^{iθ}e^{2π})/(e^{2π} − e^{iθ}), (2.121)
and thus we have
D(A_θ) = {f ∈ H¹(0, 2π) | f(2π) = ẽ^{iθ} f(0)}. (2.122)

Concerning closures, we note that a bounded operator is closed if and only if its domain is closed, and that any operator is closed if and only if its inverse is closed. Hence we have

Lemma 2.25. The following items are equivalent.
• A is closed.
• D(V) = Ran(A + i) is closed.
• Ran(V) = Ran(A − i) is closed.
• V is closed.

Next, we give a useful criterion for the existence of self-adjoint extensions. A skew linear map C : H → H is called a conjugation if it satisfies C² = I and ⟨Cψ, Cϕ⟩ = ⟨ϕ, ψ⟩. The prototypical example is of course complex conjugation Cψ = ψ*. An operator A is called C-real if
C D(A) ⊆ D(A) and A Cψ = C Aψ, ψ ∈ D(A). (2.123)
Note that in this case C D(A) = D(A), since D(A) = C² D(A) ⊆ C D(A).

Theorem 2.26. Suppose the symmetric operator A is C-real. Then its defect indices are equal.

Proof. Let {ϕ_j} be an orthonormal set in Ran(A + i)⊥. Then {Cϕ_j} is an orthonormal set in Ran(A − i)⊥. Hence {ϕ_j} is an orthonormal basis for Ran(A + i)⊥ if and only if {Cϕ_j} is an orthonormal basis for Ran(A − i)⊥. Hence the two spaces have the same dimension.

Finally, we note the following useful formula for the difference of resolvents of self-adjoint extensions.

Lemma 2.27. If A_j, j = 1, 2, are self-adjoint extensions and if {ϕ_j(z)} is an orthonormal basis for Ker(A* − z), then
(A_1 − z)⁻¹ − (A_2 − z)⁻¹ = Σ_{j,k} (α¹_{jk}(z) − α²_{jk}(z)) ⟨ϕ_k(z*), ·⟩ ϕ_j(z), (2.125)
where
α^l_{jk}(z) = ⟨ϕ_j(z*), (A_l − z)⁻¹ ϕ_k(z)⟩. (2.126)

Proof. First observe that ((A_1 − z)⁻¹ − (A_2 − z)⁻¹)ϕ is zero for every ϕ ∈ Ran(A − z). Hence it suffices to consider it for vectors ϕ = Σ_j ⟨ϕ_j(z), ϕ⟩ ϕ_j(z) ∈ Ran(A − z)⊥. Hence we have
(A_1 − z)⁻¹ − (A_2 − z)⁻¹ = Σ_j ⟨ϕ_j(z), ·⟩ ψ_j(z), (2.127)
where
ψ_j(z) = ((A_1 − z)⁻¹ − (A_2 − z)⁻¹) ϕ_j(z). (2.128)
Now, computing the adjoint once using ((A_l − z)⁻¹)* = (A_l − z*)⁻¹ and once using (Σ_j ⟨ϕ_j, ·⟩ ψ_j)* = Σ_j ⟨ψ_j, ·⟩ ϕ_j, we obtain
Σ_j ⟨ψ_j(z*), ·⟩ ϕ_j(z*) = Σ_j ⟨ϕ_j(z), ·⟩ ψ_j(z). (2.129)
Evaluating at ϕ_k(z) implies
ψ_k(z) = Σ_j ⟨ψ_j(z*), ϕ_k(z)⟩ ϕ_j(z*) (2.130)
and finishes the proof.

Problem 2.16. Compute the defect indices of A_0 = i d/dx, D(A_0) = C_c^∞((0, ∞)). Can you give a self-adjoint extension of A_0?

2.6. Appendix: Absolutely continuous functions

Let (a, b) ⊆ R be some interval. We denote by
AC(a, b) = {f ∈ C(a, b) | f(x) = f(c) + ∫_c^x g(t)dt, c ∈ (a, b), g ∈ L¹_loc(a, b)} (2.131)
the set of all absolutely continuous functions. That is, f is absolutely continuous if and only if it can be written as the integral of some locally integrable function. Note that AC(a, b) is a vector space.

By Corollary A.33, f(x) = f(c) + ∫_c^x g(t)dt is differentiable a.e. (with respect to Lebesgue measure) and f′(x) = g(x). In particular, g is determined uniquely a.e.

If [a, b] is a compact interval, we set
AC[a, b] = {f ∈ AC(a, b) | g ∈ L¹(a, b)} ⊆ C[a, b]. (2.132)
If f, g ∈ AC[a, b], we have the formula of partial integration
∫_a^b f(x)g′(x)dx = f(b)g(b) − f(a)g(a) − ∫_a^b f′(x)g(x)dx. (2.133)

We set
H^m(a, b) = {f ∈ L²(a, b) | f^(j) ∈ AC(a, b), f^(j+1) ∈ L²(a, b), 0 ≤ j ≤ m − 1}. (2.134)
Then we have

Lemma 2.28. Suppose f ∈ H^m(a, b), m ≥ 1. Then f is bounded and lim_{x↓a} f^(j)(x) respectively lim_{x↑b} f^(j)(x) exist for 0 ≤ j ≤ m − 1. Moreover, the limit is zero if the endpoint is infinite.

Proof. If the endpoint is finite, then f^(j+1) is integrable near this endpoint and hence the claim follows. If the endpoint is infinite, note that
|f^(j)(x)|² = |f^(j)(c)|² + 2 ∫_c^x Re(f^(j)(t)* f^(j+1)(t)) dt (2.135)
shows that the limit exists (dominated convergence). Since f^(j) is square integrable, the limit must be zero.
Let me remark that it suffices to check that the function plus the highest derivative is in L²; the lower derivatives are then automatically in L². That is,
H^m(a, b) = {f ∈ L²(a, b) | f^(j) ∈ AC(a, b), 0 ≤ j ≤ m − 1, f^(m) ∈ L²(a, b)}. (2.136)
For a finite endpoint this is straightforward. For an infinite endpoint this can also be shown directly, but it is much easier to use the Fourier transform (compare Section 7.1).

Problem 2.17. Show (2.133). (Hint: Fubini.)

Problem 2.18. Show that H¹(a, b) together with the norm
‖f‖²_{2,1} = ∫_a^b |f(t)|² dt + ∫_a^b |f′(t)|² dt (2.137)
is a Hilbert space.

Problem 2.19. What is the closure of C_0^∞(a, b) in H¹(a, b)? (Hint: Start with the case where (a, b) is finite.)
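The partial integration formula (2.133) lends itself to a quick numerical sanity check. The sketch below is not from the text; it assumes only numpy, uses arbitrarily chosen functions f(x) = x² and g(x) = sin x on [0, 1], and compares both sides with a composite trapezoidal rule.

```python
import numpy as np

def trap(y, x):
    """Composite trapezoidal rule for samples y on the grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

a, b = 0.0, 1.0
x = np.linspace(a, b, 20001)
f, fp = x**2, 2*x              # f(x) = x^2 and its derivative f'
g, gp = np.sin(x), np.cos(x)   # g(x) = sin(x) and g'

lhs = trap(f * gp, x)
rhs = f[-1]*g[-1] - f[0]*g[0] - trap(fp * g, x)
assert abs(lhs - rhs) < 1e-8   # both sides of (2.133) agree to quadrature error
```

The two sides differ only by the trapezoidal error of ∫(fg)′, which is O(h²) for the grid spacing h, hence the tight tolerance.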
Chapter 3

The spectral theorem

The time evolution of a quantum mechanical system is governed by the Schrödinger equation
i (d/dt) ψ(t) = Hψ(t). (3.1)
If H = Cⁿ, and H is hence a matrix, this system of ordinary differential equations is solved by the matrix exponential
ψ(t) = exp(−itH) ψ(0). (3.2)
This matrix exponential can be defined by a convergent power series
exp(−itH) = Σ_{n=0}^∞ ((−it)ⁿ/n!) Hⁿ. (3.3)
For this approach the boundedness of H is crucial, which might not be the case for a quantum system. However, the best way to compute the matrix exponential, and to understand the underlying dynamics, is to diagonalize H. But how do we diagonalize a self-adjoint operator? The answer is known as the spectral theorem.

3.1. The spectral theorem

In this section we want to address the problem of defining functions of a self-adjoint operator A in a natural way, that is, such that
(f + g)(A) = f(A) + g(A), (fg)(A) = f(A)g(A), (f*)(A) = f(A)*. (3.4)
As long as f and g are polynomials, no problems arise. If we want to extend this definition to a larger class of functions, we will need to perform some limiting procedure. Hence we could consider convergent power series or equip the space of polynomials with the sup norm. In both cases this only
works if the operator A is bounded. To overcome this limitation, we will use characteristic functions χ_Ω(A) instead of powers Aʲ. Since χ_Ω(λ)² = χ_Ω(λ), the corresponding operators should be orthogonal projections. Moreover, we should also have χ_R(A) = I and χ_Ω(A) = Σ_{j=1}^n χ_{Ω_j}(A) for any finite union Ω = ⋃_{j=1}^n Ω_j of disjoint sets. The only remaining problem is of course the definition of χ_Ω(A). However, we will defer this problem and begin by developing a functional calculus for a family of characteristic functions χ_Ω(A).

Denote the Borel sigma algebra of R by B. A projection-valued measure is a map
P : B → L(H), Ω ↦ P(Ω), (3.5)
from the Borel sets to the set of orthogonal projections, that is, P(Ω)* = P(Ω) and P(Ω)² = P(Ω), such that the following two conditions hold:
(i) P(R) = I.
(ii) If Ω = ⋃_n Ω_n with Ω_n ∩ Ω_m = ∅ for n ≠ m, then Σ_n P(Ω_n)ψ = P(Ω)ψ for every ψ ∈ H (strong σ-additivity).

Note that we require strong convergence, Σ_n P(Ω_n)ψ = P(Ω)ψ, rather than norm convergence. In fact, norm convergence does not even hold in the simplest case where H = L²(I) and P(Ω) = χ_Ω (multiplication operator), since for a multiplication operator the norm is just the sup norm of the function. Furthermore, it even suffices to require weak convergence, since w-lim P_n = P for some orthogonal projections implies s-lim P_n = P by ⟨ψ, P_nψ⟩ = ‖P_nψ‖² together with Lemma 1.11 (iv).

Example. Let H = Cⁿ and A ∈ GL(n) be some symmetric matrix. Let λ_1, ..., λ_m be its (distinct) eigenvalues and let P_j be the projections onto the corresponding eigenspaces. Then
P_A(Ω) = Σ_{j: λ_j ∈ Ω} P_j (3.6)
is a projection-valued measure.

Example. Let H = L²(R) and let f be a real-valued measurable function. Then
P(Ω) = χ_{f⁻¹(Ω)} (3.7)
is a projection-valued measure (Problem 3.2).

It is straightforward to verify that any projection-valued measure satisfies
P(∅) = 0, P(R\Ω) = I − P(Ω), and P(Ω_1 ∪ Ω_2) + P(Ω_1 ∩ Ω_2) = P(Ω_1) + P(Ω_2). (3.8)
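The first example can be made fully explicit. The following sketch (assuming numpy; the helper `pvm` is our own, not the text's) builds P_A(Ω) = Σ_{λ_j∈Ω} P_j for a small symmetric matrix and checks the projection property and the properties just listed, including an eigenvalue of multiplicity two.

```python
import numpy as np

A = np.diag([0.0, 1.0, 1.0, 2.0])     # symmetric matrix, eigenvalues 0, 1 (twice), 2
lam, V = np.linalg.eigh(A)            # orthonormal eigenvectors (columns of V)
assert np.allclose(V.T @ V, np.eye(4))

def pvm(omega):
    """P_A(Omega): sum of the eigenprojections P_j with lambda_j in Omega."""
    cols = [j for j in range(len(lam)) if omega(lam[j])]
    return sum((np.outer(V[:, j], V[:, j]) for j in cols), np.zeros_like(A))

P1 = pvm(lambda l: l < 0.5)           # projection onto the eigenspace of 0
P2 = pvm(lambda l: 0.5 < l < 1.5)     # eigenspace of 1 (two dimensional)
assert np.allclose(P1, P1.T) and np.allclose(P1 @ P1, P1)   # orthogonal projection
assert np.allclose(P1 @ P2, 0)                              # disjoint sets
assert np.allclose(pvm(lambda l: l < 1.5), P1 + P2)         # additivity
assert np.allclose(pvm(lambda l: True), np.eye(4))          # P_A(R) = I
```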
Moreover, we also have
P(Ω_1)P(Ω_2) = P(Ω_1 ∩ Ω_2). (3.9)
Indeed, suppose Ω_1 ∩ Ω_2 = ∅ first. Then, taking the square of (3.8), we infer
P(Ω_1)P(Ω_2) + P(Ω_2)P(Ω_1) = 0. (3.10)
Multiplying this equation from the right by P(Ω_2) shows that P(Ω_1)P(Ω_2) = −P(Ω_2)P(Ω_1)P(Ω_2) is self-adjoint and thus P(Ω_1)P(Ω_2) = P(Ω_2)P(Ω_1) = 0. For the general case Ω_1 ∩ Ω_2 ≠ ∅ we now have
P(Ω_1)P(Ω_2) = (P(Ω_1 − Ω_2) + P(Ω_1 ∩ Ω_2))(P(Ω_2 − Ω_1) + P(Ω_1 ∩ Ω_2)) = P(Ω_1 ∩ Ω_2) (3.11)
as stated.

To every projection-valued measure there corresponds a resolution of the identity
P(λ) = P((−∞, λ]) (3.12)
which has the properties (Problem 3.3):
(i) P(λ) is an orthogonal projection.
(ii) P(λ_1) ≤ P(λ_2) (that is, ⟨ψ, P(λ_1)ψ⟩ ≤ ⟨ψ, P(λ_2)ψ⟩), or equivalently Ran(P(λ_1)) ⊆ Ran(P(λ_2)), for λ_1 ≤ λ_2.
(iii) s-lim_{λ_n↓λ} P(λ_n) = P(λ) (strong right continuity).
(iv) s-lim_{λ→−∞} P(λ) = 0 and s-lim_{λ→+∞} P(λ) = I.
As before, strong right continuity is equivalent to weak right continuity.

Picking ψ ∈ H, we obtain a finite Borel measure µ_ψ(Ω) = ⟨ψ, P(Ω)ψ⟩ = ‖P(Ω)ψ‖² with µ_ψ(R) = ‖ψ‖² < ∞. The corresponding distribution function is given by µ(λ) = ⟨ψ, P(λ)ψ⟩, and since for every distribution function there is a unique Borel measure (Theorem A.2), for every resolution of the identity there is a unique projection-valued measure.

Using the polarization identity (2.16), we also have the following complex Borel measures
µ_{ϕ,ψ}(Ω) = ⟨ϕ, P(Ω)ψ⟩ = (1/4)(µ_{ϕ+ψ}(Ω) − µ_{ϕ−ψ}(Ω) + i µ_{ϕ−iψ}(Ω) − i µ_{ϕ+iψ}(Ω)). (3.14)
Note also that, by Cauchy–Schwarz, |µ_{ϕ,ψ}(Ω)| ≤ ‖ϕ‖ ‖ψ‖.

Now let us turn to integration with respect to our projection-valued measure. For any simple function f = Σ_{j=1}^n α_j χ_{Ω_j} (where Ω_j = f⁻¹(α_j))
we set
P(f) ≡ ∫_R f(λ) dP(λ) = Σ_{j=1}^n α_j P(Ω_j). (3.15)
In particular, P(χ_Ω) = P(Ω). Then ⟨ϕ, P(f)ψ⟩ = Σ_j α_j µ_{ϕ,ψ}(Ω_j) shows
⟨ϕ, P(f)ψ⟩ = ∫_R f(λ) dµ_{ϕ,ψ}(λ) (3.16)
and, by linearity of the integral, the operator P is a linear map from the set of simple functions into the set of bounded linear operators on H. Moreover, ‖P(f)ψ‖² = Σ_j |α_j|² µ_ψ(Ω_j) (the sets Ω_j are disjoint) shows
‖P(f)ψ‖² = ∫_R |f(λ)|² dµ_ψ(λ). (3.17)
Equipping the set of simple functions with the sup norm, this implies
‖P(f)ψ‖ ≤ ‖f‖_∞ ‖ψ‖, (3.18)
that is, P has norm one. Since the simple functions are dense in the Banach space of bounded Borel functions B(R), there is a unique extension of P to a bounded linear operator P : B(R) → L(H) (whose norm is one) from the bounded Borel functions on R (with sup norm) to the set of bounded linear operators on H. In particular, (3.16) and (3.17) remain true.

There is some additional structure behind this extension. Recall that the set L(H) of all bounded linear mappings on H forms a C*-algebra. A C*-algebra homomorphism φ is a linear map between two C*-algebras which respects both the multiplication and the adjoint, that is, φ(ab) = φ(a)φ(b) and φ(a*) = φ(a)*.

Theorem 3.1. Let P(Ω) be a projection-valued measure on H. Then the operator
P : B(R) → L(H), f ↦ ∫_R f(λ) dP(λ), (3.19)
is a C*-algebra homomorphism with norm one. In addition, if f_n(x) → f(x) pointwise and if the sequence sup_{λ∈R} |f_n(λ)| is bounded, then P(f_n) → P(f) strongly.

Proof. The properties P(1) = I, P(f*) = P(f)*, and P(fg) = P(f)P(g) are straightforward for simple functions f. For general f they follow from continuity. Hence P is a C*-algebra homomorphism. The last claim follows from the dominated convergence theorem and (3.17).
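In the matrix setting the homomorphism property of Theorem 3.1 can be checked directly by defining the functional calculus through the eigendecomposition; this also reproduces the power series definition of exp(−itA) from the chapter introduction. A sketch assuming numpy (the helper `calc` is our own):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, eigenvalues 1 and 3
lam, V = np.linalg.eigh(A)

def calc(f):
    """f(A) = V diag(f(lambda)) V^T, the functional calculus for the matrix A."""
    return (V * f(lam)) @ V.T

f = lambda l: l**2 + 1.0
g = lambda l: np.cos(l)
assert np.allclose(calc(lambda l: f(l)*g(l)), calc(f) @ calc(g))  # (fg)(A) = f(A)g(A)
assert np.allclose(calc(f), calc(f).T)        # real f gives a self-adjoint f(A)
assert np.allclose(calc(lambda l: l), A)      # f(lambda) = lambda recovers A

# exp(-itA) via the calculus agrees with the power series (3.3)
t = 0.7
E = (V * np.exp(-1j*t*lam)) @ V.T
S = np.eye(2, dtype=complex)
term = np.eye(2, dtype=complex)
for n in range(1, 60):                        # truncated power series
    term = term @ (-1j*t*A) / n
    S += term
assert np.allclose(E, S)
assert np.allclose(E.conj().T @ E, np.eye(2))  # the time evolution is unitary
```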
As a consequence, observe
⟨P(g)ϕ, P(f)ψ⟩ = ∫_R g*(λ) f(λ) dµ_{ϕ,ψ}(λ) (3.20)
and thus
dµ_{P(g)ϕ,P(f)ψ} = g* f dµ_{ϕ,ψ}. (3.21)

Example. Let H = Cⁿ and A ∈ GL(n), respectively P_A, as in the previous example. Then
P_A(f) = Σ_{j=1}^m f(λ_j) P_j. (3.22)
In particular, P_A(f) = A for f(λ) = λ.

Next we want to define this operator for unbounded Borel functions. Since we expect the resulting operator to be unbounded, we need a suitable domain first. Motivated by (3.17), we set
D_f = {ψ ∈ H | ∫_R |f(λ)|² dµ_ψ(λ) < ∞}. (3.23)
This is clearly a linear subspace of H, since µ_{αψ}(Ω) = |α|² µ_ψ(Ω) and since µ_{ϕ+ψ}(Ω) ≤ 2(µ_ϕ(Ω) + µ_ψ(Ω)) (by the triangle inequality).

For every ψ ∈ D_f, the bounded Borel function
f_n = χ_{Ω_n} f, Ω_n = {λ | |f(λ)| ≤ n}, (3.24)
converges to f in the sense of L²(R, dµ_ψ), and because of (3.17), P(f_n)ψ converges to some vector ψ̃. We define P(f)ψ = ψ̃. By construction, P(f) is a linear operator such that (3.16) and (3.17) hold.

In addition, D_f is dense. Indeed, let Ω_n be defined as in (3.24) and abbreviate ψ_n = P(Ω_n)ψ. Now observe that dµ_{ψ_n} = χ_{Ω_n} dµ_ψ and hence ψ_n ∈ D_f. Moreover, ψ_n → ψ by (3.17) since χ_{Ω_n} → 1 in L²(R, dµ_ψ).

The operator P(f) has some additional properties. One calls an unbounded operator A normal if D(A) = D(A*) and ‖Aψ‖ = ‖A*ψ‖ for all ψ ∈ D(A).

Theorem 3.2. For any Borel function f, the operator
P(f) = ∫_R f(λ) dP(λ), D(P(f)) = D_f, (3.25)
is normal and satisfies
P(f)* = P(f*). (3.26)

Proof. Let f be given and define f_n, Ω_n as above.
Since (3.26) holds for f_n by our previous theorem, we get
⟨ϕ, P(f)ψ⟩ = ⟨P(f*)ϕ, ψ⟩ (3.27)
for any ϕ, ψ ∈ D_f by continuity. Thus it remains to show that D(P(f)*) ⊆ D_f. If ψ ∈ D(P(f)*), we have ⟨ψ, P(f)ϕ⟩ = ⟨ψ̃, ϕ⟩ for all ϕ ∈ D_f by definition. Now observe that P(f_n*)ψ = P(Ω_n)ψ̃, since we have
⟨P(f_n*)ψ, ϕ⟩ = ⟨ψ, P(f_n)ϕ⟩ = ⟨ψ, P(f)P(Ω_n)ϕ⟩ = ⟨P(Ω_n)ψ̃, ϕ⟩. (3.28)
To see the second equality, use P(f_n)ϕ = P(f_m χ_{Ω_n})ϕ = P(f_m)P(Ω_n)ϕ for m ≥ n and let m → ∞. This proves existence of the limit
lim_{n→∞} ∫_R |f_n*|² dµ_ψ(λ) = lim_{n→∞} ‖P(f_n*)ψ‖² = lim_{n→∞} ‖P(Ω_n)ψ̃‖² = ‖ψ̃‖², (3.29)
which implies f* ∈ L²(R, dµ_ψ), that is, ψ ∈ D_{f*} = D_f. That P(f) is normal follows from
‖P(f)ψ‖² = ‖P(f*)ψ‖² = ∫_R |f(λ)|² dµ_ψ. (3.30)

These considerations seem to indicate some kind of correspondence between the operators P(f) in H and the multiplication operators f in L²(R, dµ_ψ). Recall that U : H → H̃ is called unitary if it is a bijection which preserves scalar products, ⟨Uϕ, Uψ⟩ = ⟨ϕ, ψ⟩. Clearly, if A and Ã are unitarily equivalent, then A is self-adjoint if and only if Ã is, and σ(A) = σ(Ã).

Now let us return to our original problem and consider the subspace
H_ψ = {P(f)ψ | f ∈ L²(R, dµ_ψ)} ⊆ H. (3.31)
Observe that this subspace is closed: if ψ_n = P(f_n)ψ converges in H, then f_n converges to some f in L² (since ‖ψ_n − ψ_m‖² = ∫ |f_n − f_m|² dµ_ψ) and hence ψ_n → P(f)ψ. The vector ψ is called cyclic if H_ψ = H. By (3.17), the relation
U_ψ(P(f)ψ) = f (3.32)
defines a unique unitary operator U_ψ : H_ψ → L²(R, dµ_ψ) such that
U_ψ P(f) = f U_ψ, (3.33)
where f is identified with its corresponding multiplication operator. Moreover, if f is unbounded, we have U_ψ(D_f ∩ H_ψ) = D(f) = {g ∈ L²(R, dµ_ψ) | fg ∈ L²(R, dµ_ψ)} (since ϕ = P(f)ψ implies dµ_ϕ = |f|² dµ_ψ) and the above equation still holds.

If ψ is cyclic, our picture is complete. Otherwise we need to extend this approach. A set {ψ_j}_{j∈J} (J some index set) is called a set of spectral vectors if ‖ψ_j‖ = 1 and H_{ψ_i} ⊥ H_{ψ_j} for all i ≠ j. A set of spectral vectors is called a spectral basis if ⊕_j H_{ψ_j} = H. Luckily a spectral basis always exists:
Here the operators A in H and Ã in H̃ are said to be unitarily equivalent if
U A = Ã U, U D(A) = D(Ã). (3.34)
Lemma 3.3. For every projection-valued measure P, there is an (at most countable) spectral basis {ψ_n} such that
H = ⊕_n H_{ψ_n} (3.35)
and a corresponding unitary operator
U = ⊕_n U_{ψ_n} : H → ⊕_n L²(R, dµ_{ψ_n}) (3.36)
such that for any Borel function f,
U P(f) = f U, U D_f = D(f). (3.37)

Proof. It suffices to show that a spectral basis exists. This can be easily done using a Gram–Schmidt type construction. First of all, observe that if {ψ_j}_{j∈J} is a spectral set and ψ ⊥ H_{ψ_j} for all j, we have H_ψ ⊥ H_{ψ_j} for all j. Indeed, ψ ⊥ H_{ψ_j} implies P(g)ψ ⊥ H_{ψ_j} for every bounded function g, since ⟨P(g)ψ, P(f)ψ_j⟩ = ⟨ψ, P(g*f)ψ_j⟩ = 0. But P(g)ψ with g bounded is dense in H_ψ, implying H_ψ ⊥ H_{ψ_j}.

Now start with some total set {ψ̃_j}. Normalize ψ̃_1 and choose this to be ψ_1. Move to the first ψ̃_j which is not in H_{ψ_1}, take the orthogonal complement with respect to H_{ψ_1}, and normalize it. Choose the result to be ψ_2. Proceeding like this, we get a set of spectral vectors {ψ_j} such that span{ψ̃_j} ⊆ ⊕_j H_{ψ_j}. Hence H = span{ψ̃_j} ⊆ ⊕_j H_{ψ_j}.

It is important to observe that the cardinality of a spectral basis is not well-defined (in contradistinction to the cardinality of an ordinary basis of the Hilbert space). However, it can be at most equal to the cardinality of an ordinary basis. In particular, since H is separable, it is at most countable. The minimal cardinality of a spectral basis is called the spectral multiplicity of P. If the spectral multiplicity is one, the spectrum is called simple.

Example. Let H = C² and A = diag(0, 1). Then ψ = (1, 1) is cyclic, and hence the spectrum of A is simple. If A = diag(1, 1), there is no cyclic vector (why?), and hence the spectral multiplicity is two. In this case ψ_1 = (1, 0) and ψ_2 = (0, 1) are a spectral basis.

Using this canonical form of projection-valued measures it is straightforward to prove

Lemma 3.4. Let f, g be Borel functions and α, β ∈ C. Then we have
α P(f) + β P(g) ⊆ P(αf + βg), D(αP(f) + βP(g)) = D_{|f|+|g|} (3.38)
and
P(f)P(g) ⊆ P(fg), D(P(f)P(g)) = D_g ∩ D_{fg}. (3.39)

Now observe that to every projection-valued measure P we can assign a self-adjoint operator A = ∫_R λ dP(λ). The question is whether we can invert this map. To do this, we consider the resolvent
R_A(z) = ∫_R (λ − z)⁻¹ dP(λ). (3.40)
By (3.16) the corresponding quadratic form is given by
F_ψ(z) = ⟨ψ, R_A(z)ψ⟩ = ∫_R (1/(λ − z)) dµ_ψ(λ), (3.41)
which is known as the Borel transform of the measure µ_ψ. It can be shown (see Section 3.4) that F_ψ(z) is a holomorphic map from the upper half plane to itself. Such functions are called Herglotz functions. Moreover, the measure µ_ψ can be reconstructed from F_ψ(z) by Stieltjes inversion formula
µ_ψ(λ) = lim_{δ↓0} lim_{ε↓0} (1/π) ∫_{−∞}^{λ+δ} Im(F_ψ(t + iε)) dt. (3.42)
Conversely, if F_ψ(z) is a Herglotz function satisfying |F_ψ(z)| ≤ M/Im(z), then it is the Borel transform of a unique measure µ_ψ (given by Stieltjes inversion formula).

So let A be a given self-adjoint operator and consider the expectation of the resolvent of A,
F_ψ(z) = ⟨ψ, R_A(z)ψ⟩. (3.43)
This function is holomorphic for z ∈ ρ(A) and satisfies
F_ψ(z*) = F_ψ(z)* and |F_ψ(z)| ≤ ‖ψ‖²/Im(z) (3.44)
(see Theorem 2.14). Moreover, the first resolvent formula (2.80) shows
Im(F_ψ(z)) = Im(z) ‖R_A(z)ψ‖²,
that is, F_ψ maps the upper half plane to itself: it is a Herglotz function. So by our above remarks, there is a corresponding measure µ_ψ(λ) given by Stieltjes inversion formula. It is called the spectral measure corresponding to ψ.

Moreover, by polarization, for each ϕ, ψ ∈ H we can find a corresponding complex measure µ_{ϕ,ψ} such that
⟨ϕ, R_A(z)ψ⟩ = ∫_R (1/(λ − z)) dµ_{ϕ,ψ}(λ).
The measure µ_{ϕ,ψ} is conjugate linear in ϕ and linear in ψ. Now a comparison with our previous considerations begs us to define a family of
operators P_A(Ω) via
⟨ϕ, P_A(Ω)ψ⟩ = ∫_R χ_Ω(λ) dµ_{ϕ,ψ}(λ). (3.45)
This is indeed possible by Corollary 1.8 since |µ_{ϕ,ψ}(Ω)| ≤ ‖ϕ‖ ‖ψ‖. The operators P_A(Ω) are non-negative (0 ≤ ⟨ψ, P_A(Ω)ψ⟩ ≤ 1) and hence self-adjoint.

Lemma 3.5. The family of operators P_A(Ω) forms a projection-valued measure.

Proof. We first show P_A(Ω_1)P_A(Ω_2) = P_A(Ω_1 ∩ Ω_2) in two steps. First observe (using the first resolvent formula (2.80))
∫_R (1/(λ − z̃)) dµ_{R_A(z*)ϕ,ψ}(λ) = ⟨R_A(z*)ϕ, R_A(z̃)ψ⟩ = ⟨ϕ, R_A(z)R_A(z̃)ψ⟩
= (1/(z − z̃)) (⟨ϕ, R_A(z)ψ⟩ − ⟨ϕ, R_A(z̃)ψ⟩)
= (1/(z − z̃)) ∫_R ((1/(λ − z)) − (1/(λ − z̃))) dµ_{ϕ,ψ}(λ) = ∫_R (1/(λ − z̃)) (1/(λ − z)) dµ_{ϕ,ψ}(λ), (3.46)
implying dµ_{R_A(z*)ϕ,ψ}(λ) = (λ − z)⁻¹ dµ_{ϕ,ψ}(λ), since a Herglotz function is uniquely determined by its measure. Secondly, we compute
∫_R (1/(λ − z)) dµ_{ϕ,P_A(Ω)ψ}(λ) = ⟨ϕ, R_A(z)P_A(Ω)ψ⟩ = ⟨R_A(z*)ϕ, P_A(Ω)ψ⟩
= ∫_R χ_Ω(λ) dµ_{R_A(z*)ϕ,ψ}(λ) = ∫_R (1/(λ − z)) χ_Ω(λ) dµ_{ϕ,ψ}(λ), (3.47)
implying dµ_{ϕ,P_A(Ω)ψ}(λ) = χ_Ω(λ) dµ_{ϕ,ψ}(λ). Equivalently, since χ_{Ω_1}χ_{Ω_2} = χ_{Ω_1∩Ω_2}, we have
⟨ϕ, P_A(Ω_1)P_A(Ω_2)ψ⟩ = ⟨ϕ, P_A(Ω_1 ∩ Ω_2)ψ⟩. (3.48)
In particular, choosing Ω_1 = Ω_2, we see that P_A(Ω_1) is a projector.

The relation P_A(R) = I follows from (3.93) below and Lemma 2.18, which imply µ_ψ(R) = ‖ψ‖². Now let Ω = ⋃_{n=1}^∞ Ω_n with Ω_n ∩ Ω_m = ∅ for n ≠ m. Then
Σ_{j=1}^n ⟨ψ, P_A(Ω_j)ψ⟩ = Σ_{j=1}^n µ_ψ(Ω_j) → ⟨ψ, P_A(Ω)ψ⟩ (3.49)
by σ-additivity of µ_ψ. Hence P_A is weakly σ-additive, which implies strong σ-additivity, as pointed out earlier.
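The route from the resolvent to the spectral measure can be tested numerically. For a diagonal model, F_ψ(z) = Σ_j w_j/(λ_j − z) with weights w_j = |⟨v_j, ψ⟩|², and (1/π) Im F_ψ(t + iε) integrates, for small ε, to approximately the mass of µ_ψ in an interval. A sketch assuming numpy (ε, the grid, and the data are arbitrary choices of ours):

```python
import numpy as np

lam = np.array([-1.0, 0.5, 2.0])   # eigenvalues
w = np.array([0.2, 0.5, 0.3])      # weights mu_psi({lambda_j}) = |<v_j, psi>|^2

def F(z):
    """Borel transform of the discrete measure mu_psi."""
    return np.sum(w / (lam - z))

assert F(1j).imag > 0              # Herglotz: upper half plane to itself

eps = 1e-2
t = np.linspace(0.0, 1.0, 4001)    # interval around the eigenvalue 0.5
dens = np.array([F(s + 1j*eps).imag / np.pi for s in t])
mass = float(np.sum((dens[1:] + dens[:-1]) * np.diff(t)) / 2)
assert abs(mass - 0.5) < 0.02      # recovers mu_psi((0,1)) = 0.5 up to O(eps)
```

The error is of order ε, since Im F_ψ(t + iε)/π is a sum of Lorentzians of width ε centered at the eigenvalues.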
Now we can prove the spectral theorem for self-adjoint operators.

Theorem 3.6 (Spectral theorem). To every self-adjoint operator A there corresponds a unique projection-valued measure P_A such that
A = ∫_R λ dP_A(λ). (3.50)

Proof. Existence has already been established. Moreover, Lemma 3.4 shows that P_A((λ − z)⁻¹) = R_A(z), z ∈ C\R. Since the measures µ_{ϕ,ψ} are uniquely determined by the resolvent, and the projection-valued measure is uniquely determined by the measures µ_{ϕ,ψ}, we are done.

The quadratic form of A is given by
q_A(ψ) = ∫_R λ dµ_ψ(λ) (3.51)
and can be defined for every ψ in the form domain
Q(A) = {ψ ∈ H | ∫_R |λ| dµ_ψ(λ) < ∞} (3.52)
(which is larger than the operator domain D(A) = {ψ ∈ H | ∫_R λ² dµ_ψ(λ) < ∞}). This extends our previous definition for non-negative operators.

Note that if A and Ã are unitarily equivalent as in (3.34), then U R_A(z) = R_Ã(z) U and hence dµ_ψ = dµ̃_{Uψ}. In particular, we have U P_A(f) = P_Ã(f) U, U D(P_A(f)) = D(P_Ã(f)).

Finally, let us give a characterization of the spectrum of A in terms of the associated projectors.

Theorem 3.7. The spectrum of A is given by
σ(A) = {λ ∈ R | P_A((λ − ε, λ + ε)) ≠ 0 for all ε > 0}. (3.53)

Proof. Let Ω_n = (λ_0 − 1/n, λ_0 + 1/n) and suppose P_A(Ω_n) ≠ 0 for all n. Then we can find a ψ_n ∈ P_A(Ω_n)H with ‖ψ_n‖ = 1. Since
‖(A − λ_0)ψ_n‖² = ‖(A − λ_0)P_A(Ω_n)ψ_n‖² = ∫_R (λ − λ_0)² χ_{Ω_n}(λ) dµ_{ψ_n}(λ) ≤ 1/n², (3.54)
we conclude λ_0 ∈ σ(A) by Lemma 2.12. Conversely, if P_A((λ_0 − ε, λ_0 + ε)) = 0, set f_ε(λ) = χ_{R\(λ_0−ε,λ_0+ε)}(λ)(λ − λ_0)⁻¹. Then
(A − λ_0)P_A(f_ε) = P_A((λ − λ_0)f_ε(λ)) = P_A(R\(λ_0 − ε, λ_0 + ε)) = I. (3.55)
Similarly, P_A(f_ε)(A − λ_0) = I restricted to D(A), and hence λ_0 ∈ ρ(A).
Thus P_A((λ_1, λ_2)) = 0 if and only if (λ_1, λ_2) ⊆ ρ(A), and we have
P_A(σ(A)) = I and P_A(R ∩ ρ(A)) = 0 (3.56)
and consequently
P_A(f) = P_A(σ(A)) P_A(f) = P_A(χ_{σ(A)} f). (3.57)
In other words, P_A(f) is not affected by the values of f on R\σ(A)!

It is clearly more intuitive to write P_A(f) = f(A), and we will do so from now on. This notation is justified by the elementary observation
P_A(Σ_{j=0}^n α_j λʲ) = Σ_{j=0}^n α_j Aʲ. (3.58)
Moreover, this also shows that if A is bounded and f(A) can be defined via a convergent power series, then this agrees with our present definition by Theorem 3.1.

Problem 3.1. Show that a self-adjoint operator P is a projection if and only if σ(P) ⊆ {0, 1}.

Problem 3.2. Show that (3.7) is a projection-valued measure. What is the corresponding operator?

Problem 3.3. Show that P(λ) satisfies the properties (i)–(iv).

Problem 3.4 (Polar decomposition). Let A be a bounded operator and set |A| = √(A*A). Show that ‖|A|ψ‖ = ‖Aψ‖. Conclude that Ker(A) = Ker(|A|) = Ran(|A|)⊥ and that
U : ϕ = |A|ψ ↦ Aψ if ϕ ∈ Ran(|A|), ϕ ↦ 0 if ϕ ∈ Ker(|A|),
extends to a well-defined partial isometry, that is, U : Ker(U)⊥ → H is norm preserving. In particular, we have the polar decomposition A = U|A|.

Problem 3.5. Compute |A| = √(A*A) of A = ⟨ϕ, ·⟩ψ.

3.2. More on Borel measures

Section 3.1 showed that in order to understand self-adjoint operators, one needs to understand multiplication operators on L²(R, dµ), where dµ is a finite Borel measure. This is the purpose of the present section.
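In the simplest case of a measure with finitely many atoms, multiplication by λ on L²(R, dµ) is just a diagonal matrix, and the statements of this section reduce to elementary linear algebra. A sketch assuming numpy (the atoms are an arbitrary choice of ours):

```python
import numpy as np

# mu = sum_j w_j delta_{x_j} with three atoms; L^2(R, dmu) is C^3 and
# multiplication by lambda is the diagonal matrix with entries x_j.
x = np.array([0.0, 1.0, 2.5])        # atoms of mu (all with positive weight)
A = np.diag(x)

# the spectrum of A is exactly the set of atoms
assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(x))

# the constant function 1 is cyclic: 1, A 1, A^2 1 span the space
one = np.ones(3)
K = np.column_stack([one, A @ one, A @ A @ one])   # a Vandermonde matrix
assert abs(np.linalg.det(K)) > 1e-9                # invertible since atoms differ
```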
The set of all growth points, that is,
σ(µ) = {λ ∈ R | µ((λ − ε, λ + ε)) > 0 for all ε > 0}, (3.59)
is called the spectrum of µ. Invoking Morera's together with Fubini's theorem shows that the Borel transform
F(z) = ∫_R (1/(λ − z)) dµ(λ) (3.60)
is holomorphic for z ∈ C\σ(µ). The converse follows from Stieltjes inversion formula.

Associated with this measure is the operator
A f(λ) = λ f(λ), D(A) = {f ∈ L²(R, dµ) | λ f(λ) ∈ L²(R, dµ)}. (3.61)
By Theorem 3.7 the spectrum of A is precisely the spectrum of µ, that is,
σ(A) = σ(µ). (3.62)
Note that 1 ∈ L²(R, dµ) is a cyclic vector for A and that
dµ_{g,f}(λ) = g(λ)* f(λ) dµ(λ). (3.63)

Now what can we say about the function f(A) (which is precisely the multiplication operator by f) of A? We are only interested in the case where f is real-valued. Introduce the measure
(f⋆µ)(Ω) = µ(f⁻¹(Ω)). (3.64)
Then
∫_R g(λ) d(f⋆µ)(λ) = ∫_R g(f(λ)) dµ(λ). (3.65)
In fact, it suffices to check this formula for simple functions g, which follows since χ_Ω ∘ f = χ_{f⁻¹(Ω)}. In particular, we have
P_{f(A)}(Ω) = χ_{f⁻¹(Ω)}. (3.66)

It is tempting to conjecture that f(A) is unitarily equivalent to multiplication by λ in L²(R, d(f⋆µ)) via the map
L²(R, d(f⋆µ)) → L²(R, dµ), g ↦ g ∘ f. (3.67)
However, this map is only unitary if its range is L²(R, dµ), that is, if f is injective. Suppose f is injective; then
U : L²(R, dµ) → L²(R, d(f⋆µ)), g ↦ g ∘ f⁻¹, (3.68)
is a unitary map such that U f(λ) = λ.
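For a discrete measure, the pushforward f⋆µ and the change of variables formula can be verified directly, including a non-injective f where atoms merge. A sketch assuming numpy and the standard library (the data and test function are our own choices):

```python
import numpy as np
from collections import defaultdict

x = np.array([-2.0, -1.0, 1.0, 2.0])   # atoms of mu
w = np.array([0.1, 0.4, 0.3, 0.2])     # their weights
f = lambda l: l**2                     # not injective: f(-1) = f(1), f(-2) = f(2)
g = lambda l: np.cos(l) + l            # a test function

# pushforward measure f*mu: atoms f(x_j) with merged weights
push = defaultdict(float)
for xj, wj in zip(x, w):
    push[f(xj)] += wj
assert sorted(push) == [1.0, 4.0]      # f*mu lives on f({atoms}) = {1, 4}

lhs = sum(g(p) * m for p, m in push.items())   # integral of g d(f*mu)
rhs = float(np.sum(g(f(x)) * w))               # integral of g(f) dmu
assert np.isclose(lhs, rhs)                    # change of variables holds
```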
Example. Let f(λ) = λ². Then (g ∘ f)(λ) = g(λ²), and the range of the above map is given by the symmetric functions. Note that we can still get a unitary map
L²(R, d(f⋆µ)) ⊕ L²(R, χ d(f⋆µ)) → L²(R, dµ), (g_1, g_2) ↦ g_1(λ²) + g_2(λ²)(χ(λ) − χ(−λ)), (3.69)
where χ = χ_{(0,∞)}.

Lemma 3.9. Let f be real-valued. The spectrum of f(A) is given by
σ(f(A)) = σ(f⋆µ). (3.70)
In particular,
σ(f(A)) ⊆ closure of f(σ(A)), (3.71)
where equality holds if f is continuous, and the closure can be dropped if, in addition, σ(A) is bounded (i.e., compact).

Proof. If λ_0 ∈ σ(f⋆µ), then the sequence g_n = (f⋆µ)(Ω_n)^{−1/2} χ_{Ω_n}, Ω_n = f⁻¹((λ_0 − 1/n, λ_0 + 1/n)), satisfies ‖g_n‖ = 1, ‖(f(A) − λ_0)g_n‖ < n⁻¹ and hence λ_0 ∈ σ(f(A)). Conversely, if λ_0 ∉ σ(f⋆µ), then (f⋆µ)(Ω_n) = 0 for some n, and hence we can change f on Ω_n such that f(R) ∩ (λ_0 − 1/n, λ_0 + 1/n) = ∅ without changing the corresponding operator. Thus (f(A) − λ_0)⁻¹ = (f(λ) − λ_0)⁻¹ exists and is bounded, implying λ_0 ∉ σ(f(A)).

If f is continuous, f⁻¹((f(λ) − ε, f(λ) + ε)) contains an open interval around λ and hence f(λ) ∈ σ(f(A)) if λ ∈ σ(A). If, in addition, σ(A) is compact, then f(σ(A)) is compact and hence closed.

Whether two operators with simple spectrum are unitarily equivalent can be read off from the corresponding measures:

Lemma 3.10. Let A_1, A_2 be self-adjoint operators with simple spectrum and corresponding spectral measures µ_1 and µ_2 of cyclic vectors. Then A_1 and A_2 are unitarily equivalent if and only if µ_1 and µ_2 are mutually absolutely continuous.

Proof. Without restriction we can assume that A_j is multiplication by λ in L²(R, dµ_j). Let U : L²(R, dµ_1) → L²(R, dµ_2) be a unitary map such that U A_1 = A_2 U. Then we also have U f(A_1) = f(A_2) U for any bounded Borel function f and hence
U f(λ) = U f(λ)·1 = f(λ) U(1)(λ), (3.72)
and thus U is multiplication by u(λ) = U(1)(λ). Moreover, since U is unitary, we have
µ_1(Ω) = ∫_R |χ_Ω|² dµ_1 = ∫_R |u χ_Ω|² dµ_2 = ∫_Ω |u|² dµ_2, (3.73)
that is, dµ_1 = |u|² dµ_2. Reversing the role of A_1 and A_2, we obtain dµ_2 = |v|² dµ_1, where v = U⁻¹ 1. In particular, µ_1 and µ_2 are mutually absolutely continuous.
dµpp ) and our operator A A = (AP ac ) ⊕ (AP sc ) ⊕ (AP pp ). Since a continuous measure cannot live on a single point and hence also not on a countable set. µs (R\B) = 0. we have σac (A) = σsc (A) = ∅. dµ = dµac + dµs . (3. and P pp = χMpp (A) satisfying P ac + P sc + P pp = I. To see this. dµsc . We will choose them such that Mpp is the set of all jumps of µ(λ) and such that Msc has Lebesgue measure zero. (3. σsc (A) = σ(µsc ).) Next we recall the unique decomposition of µ with respect to Lebesgue measure. A is 1 1 a diagonal matrix with diagonal elements n ). let H = 2 (N) and let A be given by Aδn = n δn . but it is unbounded and hence 0 ∈ σ(A) but 0 ∈ σp (A)..75) (3. respectively. dµac ) ⊕ L2 (R. P sc = χMsc (A). dµ) = L2 (R.10. they have mutually disjoint supports Mac . At z = 0 this formula still gives the inverse of A. σac (A) = σ(µac ). and Mpp . Msc . and dµpp are mutually singular. on a set B with Lebesgue measure zero). we have a corresponding direct sum decomposition of both our Hilbert space L2 (R. (3.74) where µsc is continuous on R and µpp is a step function. and pure point spectrum of A. 1 Example. and Mpp correspond projectors P ac = χMac (A).73) where µac is absolutely continuous with respect to Lebesgue measure (i.76) The corresponding spectra. just observe that δn is the 1 eigenvector corresponding to the eigenvalue n and for z ∈ σ(A) we have n RA (z)δn = 1−nz δn . µs is supported..88 3.77) . Msc . we have µac (B) = 0 for all B with Lebesgue measure zero) and µs is singular with respect to Lebesgue measure (i. Then σp (A) = { n n ∈ N} but σ(A) = σpp (A) = σp (A) ∪ {0}. Note that these sets are not unique.e. singularly continuous. and σpp (A) = σ(µpp ) are called the absolutely continuous. To the sets Mac . (3. dµsc ) ⊕ L2 (R. dµs = dµsc + dµpp .e. where δn is the sequence which is 1 at the n’th place and zero else (that is. The singular part µs can be further decomposed into a (singularly) continuous and a pure point part. 
Since the measures dµac, dµsc, and dµpp are mutually singular, they have mutually disjoint supports. It is important to observe that σpp(A) is in general not equal to the set of eigenvalues

σp(A) = {λ ∈ R | λ is an eigenvalue of A},

since we only have σpp(A) = σp(A)‾ (the closure of the set of eigenvalues). The converse is left as an exercise (Problem 3.
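The example Aδn = (1/n)δn above can be probed with finite truncations: the inverse of the truncated operator has norm N, so (A − 0)⁻¹ admits no uniform bound and 0 belongs to σ(A) of the full operator even though it is not an eigenvalue. A minimal numerical sketch (the truncation sizes are assumptions):

```python
import numpy as np

# Finite truncations of the diagonal operator A delta_n = (1/n) delta_n:
# sigma_p(A) = {1/n | n in N}, but 0 lies in sigma(A) as well, since the
# inverse (multiplication by n) is unbounded.  On the N-dimensional
# truncation the inverse has spectral norm exactly N.
for N in (10, 100, 1000):
    A = np.diag(1.0 / np.arange(1, N + 1))
    inv_norm = np.linalg.norm(np.linalg.inv(A), 2)   # = N, blows up as N grows
    print(N, inv_norm)
```

Since the resolvent norms at z = 0 are unbounded along the truncations, no bounded inverse can exist for the full operator, matching the discussion in the text.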
7.78) d(f λ) = f (λ) Problem 3. Problem 3. then 1 dλ. (3. Construct a multiplication operator on L2 (R) which has dense point spectrum. Let {ψj }j∈J be a spectral basis and choose nonzero numbers εj with 2 j∈J εj  = 1.3.8. A set {ψj } of spectral vectors is called ordered if ψk is a maximal k−1 spectral vector for A restricted to ( j=1 Hψj )⊥ . For every selfadjoint operator A there is a maximal spectral vector. Spectral types Our next aim is to transfer the results of the previous section to arbitrary selfadjoint operators A using Lemma 3.3. As in the unordered case one can show . Problem 3. Proof. Then I claim that ψ= j∈J εj ψj (3.11. t ∈ R. To do this we will need a spectral measure which contains the information from all measures in a spectral basis. Show that if f ∈ AC(R) with f > 0. Spectral types 89 Example. Let dµ(λ) = χ[0.79) is a maximal spectral vector. Compute the spectrum of A. Let λ be Lebesgue measure on R. Determine the spectral types. then we can write it as ϕ = 2 2 j fj (A)ψj and hence dµϕ = j fj  dµψj .10. Problem 3.9. Problem 3. 3. An example with purely singularly continuous spectrum is given by taking µ to be the Cantor measure. Compute f µ. Such a vector will be called a maximal spectral vector of A and µψ will be called a maximal spectral measure of A. An example with purely absolutely continuous spectrum is given by taking µ to be Lebesgue measure.3.6.t] . But µψ (Ω) = j εj  µψj (Ω) = 0 implies µψj (Ω) = 0 for every j ∈ J and thus µϕ (Ω) = 0. This will be the case if there is a vector ψ such that for every ϕ ∈ H its spectral measureµϕ is absolutely continuous with respect to µψ . 1). Lemma 3.3.1] (λ)dλ and f (λ) = χ(−∞. Let ϕ be given. Let A be the multiplication operator by the Cantor function in L2 (0. Show the missing direction in the proof of Lemma 3.10.
Theorem 3. pp}. In particular.13 (Spectral mapping).75) for arbitrary selfadjoint operators A.83) There are Borel sets Mxx such that the projector onto Hxx is given by P xx = χMxx (A).14. pp}. Hsc = {ψ ∈ Hµψ is singularly continuous}. Let fn = U ϕn . sc. xx ∈ {ac. by construction of the unitary operator U . since the subspaces Hψn are orthogonal. If µ is a maximal spectral measure we have σ(A) = σ(µ) and the following generalization of Lemma 3.81) where equality holds if f is continuous and the closure can be dropped if.3. (3. It is tempting to pick a spectral basis and treat each summand in the direct sum separately. Pick ϕ ∈ H and write ϕ = n ϕn with ϕn ∈ Hψn . then. Hpp = {ψ ∈ Hµψ is pure point}. σ(A) is bounded.xx .85) . Next.82) fn 2 dµψn (3. xx ∈ {ac.80) (3. σ(f (A)) ⊆ f (σ(A)). We have H = Hac ⊕ Hsc ⊕ Hpp . In particular. Proof. Then the spectrum of f (A) is given by σ(f (A)) = {λ ∈ Rµ(f −1 (λ − ε. Let µ be a maximal spectral measure and let f be realvalued. Lemma 3. Observe that if {ψj } is an ordered spectral basis.xx = n fn 2 dµψn . then µψj+1 is absolutely continuous with respect to µψj . in addition. For every selfadjoint operator there is an ordered spectral basis. λ + ε)) > 0 for all ε > 0}. we have dµϕ = n (3. Moreover. we want to introduce the splitting (3.84) and hence dµϕ. (3. The spectral theorem Theorem 3. (3. However. the subspaces Hxx reduce A. For the sets Mxx one can choose the corresponding supports of some maximal spectral measure µ. we use the more sophisticated deﬁnition Hac = {ψ ∈ Hµψ is absolutely continuous}.90 3. We will use the unitary operator U of Lemma 3.9 holds. sc.12. ϕn = fn (A)ψn and hence dµϕn = fn 2 dµψn . since it is not clear that this approach is independent of the spectral basis chosen.
What is its spectral multiplicity? σac (A) = σ(AHac ).88) Theorem 3. Stieltjes inversion formula follows from Fubini’s theorem and the dominated convergence theorem since 1 2πi λ2 ( λ1 1 1 1 − )dλ → χ (x) + χ(λ1 . dµψn . The absolutely continuous. ˜ ˜˜ If A and A are unitarily equivalent via U . σsc (A) = σ(AHsc ).λ2 ) (x) (3. Appendix: The Herglotz theorem 91 This shows U Hxx = n L2 (R. We can deﬁne F on C− using F (z ∗ ) = F (z)∗ . sc. σxx (A) = σxx (A).87) respectively.4. then so are AHxx and AHxx ˜ by (3.15. The Borel transform of a ﬁnite Borel measure is a Herglotz function satisfying µ(R) . Problem 3. If µ is a maximal spectral measure we have σac (A) = σ(µac ). By Morea’s and Fubini’s theorem. C± = {z ∈ C ± Im(z) > 0}. and pure point spectrum of A are deﬁned as and σpp (A) = σ(AHpp ). In particular.52). σac (A). Compute σ(A). (3. (3.86) and reduces our problem to the considerations of the previous section.4. .90) Proof.91) x − λ − iε x − λ + iε 2 [λ1 . σsc (A) = σ(µsc ).89) F (z) ≤ Im(z) Moreover. 3. z ∈ C+ . singularly continuous. Appendix: The Herglotz theorem A holomorphic function F : C+ → C+ . λ2 ])) = lim ε↓0 π 2 λ2 Im(F (λ + iε))dλ. Then its Borel transform is deﬁned via F (z) = R dµ(λ) . the measure µ can be reconstructed via Stieltjes inversion formula 1 1 (µ((λ1 . F is holomorphic on C+ and the remaining properties follow from 0 < Im((λ−z)−1 ) and λ−z−1 ≤ Im(z)−1 . is called a Herglotz function.3. Suppose µ is a ﬁnite Borel measure. λ−z (3. σsc (A). λ2 )) + µ([λ1 .11.λ2 ] pointwise. xx ∈ {ac. and σpp (A) = σ(µpp ).xx ). λ1 (3. pp} (3. and σpp (A) for the multi1 plication operator A = 1+x2 in L2 (R).
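Stieltjes' inversion formula (3.90) can be illustrated numerically. The sketch below assumes the toy measure µ = δ0, whose Borel transform is F(z) = −1/z, and recovers µ((−1, 1)) = 1 in the limit ε ↓ 0:

```python
import numpy as np

# Stieltjes inversion for the assumed point mass mu = delta_0:
# F(z) = -1/z, so Im F(lambda + i eps) = eps / (lambda^2 + eps^2), and
# (1/pi) * integral over [-1, 1] tends to mu((-1, 1)) = 1 as eps -> 0.
lam = np.linspace(-1.0, 1.0, 200001)
dlam = lam[1] - lam[0]

def inversion(eps):
    imF = eps / (lam**2 + eps**2)        # Im F(lambda + i eps)
    return np.sum(imF) * dlam / np.pi    # Riemann sum of the inversion integral

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, inversion(eps))           # -> approaches 1
```

The endpoints ±1 carry no mass here, so the boundary terms ½µ({λ1}) and ½µ({λ2}) in (3.90) do not contribute.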
such that F is the Borel transform of µ. Im(z) (3. The spectral theorem Observe Im(F (z)) = Im(z) R dµ(λ) λ − z2 (3. Next we choose a contour Γ = {x + iε + λλ ∈ [−R.94) Then there is a unique Borel measure µ.92 3. Suppose F is a Herglotz function satisfying M F (z) ≤ .92) (3. z ∈ C+ . R (3. satisfying µ(R) ≤ M .96) Inserting the explicit form of Γ we see F (z) = 1 π y−ε F (x + iε + λ)dλ λ2 + (y − ε)2 −R i π y−ε + F (x + iε + Reiϕ )Reiϕ dϕ.16. The converse of the previous theorem is also true Theorem 3. (3. Hence we have by Cauchy’s formula F (z) = 1 2πi Γ 1 1 − ∗ − 2iε ζ −z ζ −z F (ζ)dζ.100) In particular. Proof.93) and λ→∞ lim λ Im(F (iλ)) = µ(R).98) π R (λ − x)2 + (y − ε)2 and taking imaginary parts w(z) = R φε (λ)wε (λ)dλ. R]} ∪ {x + iε + Reiϕ ϕ ∈ [0. (3. since φε (λ) − φ0 (λ) ≤ const ε we have w(z) = lim ε↓0 R φ0 (λ)dµε (λ). (3.99) where φε (λ) = (y − ε)/((λ − x)2 + (y − ε)2 ) and wε (λ) = w(λ + iε)/π.101) . We abbreviate F (z) = v(z) + i w(z) and z = x + i y. (3. 2 e2iϕ + (y − ε)2 π 0 R R (3.95) and note that z lies inside Γ and z ∗ + 2iε lies outside Γ if 0 < ε < y < R.97) The integral over the semi circle vanishes as R → ∞ and hence we obtain 1 y−ε F (z) = F (λ + iε)dλ (3. π]}. Letting y → ∞ we infer from our bound wε (λ)dλ ≤ M.
δ δ (3.102) Fix δ > 0 and let λ1 < λ2 < · · · < λm+1 be rational numbers such that λj+1 − λj  ≤ δ and λ1 ≤ x − y3 y3 . y2 λ ≤ λ1 or λm+1 ≤ λ. (3.103) Then (since φ0 (λ) ≤ y −2 ) φ0 (λ) − φ0 (λj ) ≤ and φ0 (λ) ≤ Now observe  R δ . The RadonNikodym derivative of µ can be obtained from the boundary values of F . n ≥ N. λ1 ≤ λ2 . the diﬀerence in (3. For irrational λ we set µ(λ0 ) = inf λ≥λ0 {µ(λ)λ rational}. Moreover. Then µ(λ) is monotone. Appendix: The Herglotz theorem 93 where µε (λ) = −∞ wε (x)dx.4. Moreover. In summary. Now F (z) and R (λ − z)−1 dµ(λ) have the same imaginary part and thus they only diﬀer by a real constant.107) 2m and hence the second term is bounded by δ. and we claim w(z) = R λ φ0 (λ)dµ(λ). there is a convergent subsequence for ﬁxed λ.106) can be made arbitrarily small. by the standard diagonal trick. Since 0 ≤ µε (λ) ≤ M . there is even a subsequence εn such that µεn (λ) converges for each rational λ. (3.105) φ0 (λ)dµ(λ) − R φ0 (λ)dµεn (λ) ≤ m  R φ0 (λ)dµ(λ) − j=1 m φ0 (λj )(µ(λj+1 ) − µ(λj )) + j=1 φ0 (λj )(µ(λj+1 ) − µ(λj ) − µεn (λj+1 ) + µεn (λj )) m + R φ0 (λ)dµεn (λ) − j=1 φ0 (λj )(µεn (λj+1 ) − µεn (λj )) (3. It remains to establish that the monotone functions µε converge properly. (3. since φ0 (y) ≤ 1/y we can ﬁnd an N ∈ N such that y µ(λj ) − µεn (λj ) ≤ δ. y2 λj ≤ λ ≤ λj+1 . 0 ≤ µ(λ1 ) ≤ µ(λ2 ) ≤ M . λm+1 ≥ x + .3. (3.104) δ . By our bound this constant must be zero.106) The ﬁrst and third term can be bounded by 2M δ/y 2 . .
(3.110) Clearly the second part can be estimated by Kε (t − λ)µ(t) ≤ Kε (δ)µ(R).18. then 1 1 (Dµ)(λ) ≤ lim inf F (λ + iε) ≤ lim sup F (λ + iε) ≤ (Dµ)(λ).115) As a consequence of Theorem A.109) We ﬁrst split the integral into two parts Im(F (λ+iε)) = Iδ Kε (t−λ)dµ(t)+ R\Iδ Kε (t−λ)µ(t). Let µ be a ﬁnite Borel measure and F its Borel transform.112) over the triangle {(s.e. . then the limit 1 (3. λ+δ). (3.17.35 we obtain Theorem 3.117) π whenever (Dµ)(λ) exists. δ ≤ C.116) Im(F (λ)) = lim Im(F (λ + iε)) ε↓0 π exists a. with respect to both µ and Lebesgue measure (ﬁnite or inﬁnite) and 1 (Dµ)(λ) = Im(F (λ)) (3. (3.94 3. ε 0 Thus the claim follows combining both estimates. We need to estimate Im(F (λ + iε)) = R Kε (t)dµ(t). t)λ − s < t < λ + s.34 and Theorem A. (3.108) ε↓0 π π ε↓0 Proof. R\Iδ (3. Iδ = (λ−δ.113) µ(Is ) 2s Now suppose there is are constants c and C such that c ≤ 0 ≤ s ≤ δ. The spectral theorem Theorem 3. Kε (t) = t2 ε . + ε2 (3.114) (3. 0 < s < δ} = {(s. t − λ < s < δ} and obtain δ µ(Is )Kε (s)ds = 0 Iδ (K(δ) − Kε (t − λ))dµ(t). then δ δ 2c arctan( ) ≤ Kε (t − λ)dµ(t) ≤ 2C arctan( ) ε ε Iδ since δKε (δ) + δ −sKε (s)ds = arctan( ). Let µ be a ﬁnite Borel measure and F its Borel transform. t)λ − δ < t < λ + δ.111) To estimate the ﬁrst part we integrate Kε (s) ds dµ(t) (3.
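Theorem 3.18 can likewise be illustrated: for an absolutely continuous measure, the boundary values of (1/π) Im F recover the Radon–Nikodym derivative. The measure dµ = χ[0,1](λ) dλ below is an assumed example, for which F(z) = log((1 − z)/(−z)):

```python
import numpy as np

# Boundary values of the Borel transform for the assumed measure
# d mu = chi_[0,1](lambda) d lambda: F(z) = log((1 - z)/(-z)), and
# (1/pi) Im F(lambda + i eps) converges to the density (D mu)(lambda),
# which is 1 inside (0, 1) and 0 outside supp(mu).
def density(lam, eps):
    z = lam + 1j * eps
    return np.log((1 - z) / (-z)).imag / np.pi

for eps in (1e-2, 1e-4, 1e-6):
    print(eps, density(0.5, eps))    # -> tends to 1 (interior point)
print(density(2.0, 1e-6))            # -> tends to 0 (outside supp mu)
```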
Moreover, the set {λ | Im F(λ) = ∞} is a support for the singular part and {λ | Im F(λ) < ∞} is a support for the absolutely continuous part.

Corollary 3.19. The measure µ is purely absolutely continuous on I if lim supε↓0 Im(F(λ + iε)) < ∞ for all λ ∈ I.
Chapter 4

Applications of the spectral theorem

This chapter can be mostly skipped on first reading. You might want to have a look at the first section and then come back to the remaining ones later.

Now let us show how the spectral theorem can be used. We will give a few typical applications: Firstly we will derive an operator valued version of Stieltjes' inversion formula. To do this, we need to show how to integrate a family of functions of A with respect to a parameter. Moreover, we will show that these integrals can be evaluated by computing the corresponding integrals of the complex valued functions.

Secondly we will consider commuting operators and show how certain facts, which are known to hold for the resolvent of an operator A, can be established for a larger class of functions. Then we will show how the eigenvalues below the essential spectrum and the dimension of Ran PA(Ω) can be estimated using the quadratic form. Finally, we will investigate tensor products of operators.

4.1. Integral formulas

We begin with the first task by having a closer look at the projector PA(Ω). It projects onto the subspace corresponding to expectation values in the set Ω. In particular, the number

⟨ψ, χΩ(A)ψ⟩    (4.1)
In addition.4) (4. t1 ] ⊂ R is a compact interval and X is a Banach space.si+1 ) . since −iε = χ{λ0 } (λ) ε↓0 λ − λ0 − iε we infer from Theorem 3. This map satisﬁes f (t)dt ≤ f I ∞ (t1 − t0 ) (4. For f ∈ S(I.2) where hull(Ω) is the convex hull of Ω.ψ (λ) = λ0 ϕ. we have ψ. In fact. Aψ = R λ χ{λ0 } (λ)dµϕ. consider the simple function n−1 t1 −t0 fn = i=0 f (si )χ[si . Observe that C(I. ψ (4. X) forms a linear space and can be equipped with the sup norm f ∞ = sup f (t) . t∈I (4. and if each inverse i=1 image f −1 (xi ). Moreover. ε↓0 (4. where Ω denotes the Lebesgue measure of Ω.98 4. 1 ≤ i ≤ n. ψ ∈ PA (Ω)H. (4. Aψ = Ω λ dµψ (λ) ∈ hull(Ω). a function f : I → X is called simple if the image of f is ﬁnite. X) → X by (4.5) Similarly.8) . X) we can deﬁne a linear map n : S(I. we can obtain an operator valued version of Stieltjes’ inversion formula. We will consider the case of mappings f : I → X where I = [t0 . Since f ∈ C(I. we infer that fn converges uniformly to f . The space Ranχ{λ0 } (A) is called the eigenspace corresponding to λ0 since we have ϕ.6) The corresponding Banach space obtained after completion is called the set of regulated functions R(I. X) is uniformly continuous. Applications of the spectral theorem is the probability for a measurement of a to lie in Ω. X) ⊂ R(I. where si = t0 + i n . ψ = 1. As before. X). is a Borel set.7) f (t)dt = I i=1 xi f −1 (xi ). The set of simple functions S(I. The dimension of the eigenspace is called the multiplicity of the eigenvalue. X). But ﬁrst we need to recall a few facts from integration in Banach spaces.3) and hence Aψ = λ0 ψ for all ψ ∈ Ranχ{λ0 } (A).ψ (λ) = λ0 R dµϕ. f (I) = {xi }n .1 that lim lim −iεRA (λ0 + iε)ψ = χ{λ0 } (A)ψ.
r] f (t)dt (4.9) which clearly holds for f ∈ S(I. A) ∈ R(I. f (t + ε) − f (t) d f (t) = lim ε→0 dt ε We write f ∈ C 1 (I. X) and dF/dt = f as can be seen from t+ε f (s)ds ∈ F (t + ε) − F (t) − f (t)ε = t (f (s) − f (t))ds ≤ ε sup f (s) − f (t) . In this case we can set f (t)dt = lim R r→∞ [−r. L(H)). A) ∈ R(I. then ( I f (t)dt) = I (f (t))dt. Suppose f : I × R → C is a bounded Borel function such that f (. L(H)) follows from the spectral theorem. X). we say that f : I → X is integrable if f ∈ R([−r.13) t t0 exists for all t ∈ I. λ) and set F (λ) = I f (t. Let A be selfadjoint. then A(t)dt ψ = I I (A(t)ψ)dt. (4. A)dt respectively F (A)ψ = I f (t. A)ψ dt. s∈[t. X) by continuity. since it is no restriction to assume that A is multiplication by λ in some L2 space. if f ∈ C(I. f ∈ R(I. X). X) und thus for all f ∈ R(I..10) still hold. X) → X with the same norm (t1 − t0 ) by Theorem 0. We even have f (t)dt ≤ I I f (t) dt. if A(t) ∈ R(I. (4. In particular.12) and (4. Lemma 4.14) The important facts for us are the following two results.9) and (4. (4. then F (t) = C 1 (I. .11) If I = R. X) for all r > 0 and if f (t) is integrable. r]. if ∈ X ∗ is a continuous linear functional. That f (t.15) Proof. λ)dt.1.1. L(H)) and F (A) = I f (t.t+ε] (4. Integral formulas 99 and hence it can be extended uniquely to a linear map : R(I. (4.t3 ) (s)f (s)ds and f (s)ds. Then f (t. We will use the standard notation t2 t3 f (s)ds = − t3 t2 t3 t2 f (s)ds = I χ(t2 . In addition.4.24.10) In particular. X) if (4.
2. f (t. λ)dt dµϕ. But for such functions the claim is straightforward.1 and (3.ψ (λ)dt I R = = R I f (t.ψ (λ) = ϕ. Then A R f (t)dt = R Af (t)dt respectively R f (t)dtA = R f (t)Adt.91). Lemma 4. And let C∞ (K) be the set of all continuous functions on K which vanish at ∞ (if K is unbounded) with the sup norm.4. Let K ⊆ R be closed. Now we can prove Stone’s formula.3 (Stone’s formula).19) where Ω is the intersection of the interior of Γ with R.2. Theorem 4.ψ (λ) F (λ)dµϕ. It suﬃces to prove the case where f is simple and of compact support. (4.1 with Theorem 3. 4.17) Proof.16) by Fubini’s theorem and hence the ﬁrst claim follows. λ)dµϕ. λ2 ]) + PA ((λ1 . Proof. The ∗algebra generated by the function 1 λ→ (4.100 4. A)dt)ψ I = I ϕ. Problem 4. Applications of the spectral theorem We compute ϕ. Let Γ be a diﬀerentiable Jordan curve in ρ(A). Commuting operators Now we come to commuting operators.20) λ−z for one z ∈ C\K is dense in C∞ (K). The result follows combining Lemma 4. Suppose f : R → L(H) is integrable and A ∈ L(H). Show χΩ (A) = Γ RA (z)dz. Let A be selfadjoint. . F (A)ψ R = (4. λ2 ))) 2 (4.18) strongly. ( f (t. (4.1. As a preparation we can now prove Lemma 4. A)ψ dt f (t. then 1 2πi λ2 (RA (λ + iε) − RA (λ − iε))dλ → λ1 1 (PA ([λ1 .
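Stone's formula can be tested on a Hermitian matrix, where the spectral projector is known explicitly; the matrix, the interval and the regularization parameters below are assumptions for illustration:

```python
import numpy as np

# Stone's formula on an assumed Hermitian 2x2 matrix: the regularized
# resolvent integral over [lambda1, lambda2] converges to the spectral
# projector onto the eigenvalues inside the interval (here the eigenvalue 0,
# while the eigenvalue 2 lies outside).
A = np.diag([0.0, 2.0])
I2 = np.eye(2)

def stone(lam1, lam2, eps, n=20001):
    lam = np.linspace(lam1, lam2, n)
    acc = np.zeros((2, 2), dtype=complex)
    for l in lam:
        Rp = np.linalg.inv(A - (l + 1j * eps) * I2)   # R_A(lambda + i eps)
        Rm = np.linalg.inv(A - (l - 1j * eps) * I2)   # R_A(lambda - i eps)
        acc += Rp - Rm
    return (acc * (lam[1] - lam[0]) / (2j * np.pi)).real

P = stone(-1.0, 1.0, 1e-3)
print(np.round(P, 3))    # -> close to diag(1, 0), the projector onto eigenvalue 0
```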
the claim follows from the Neumann series. Proof.g. A good candidate is the resolvent. Lemma 4. As another consequence we obtain .22) (4. ˜ replace K by K = K ∪{∞}. we have λ ∈ C∞ (σ(A)) and hence (4. dµψ ). let ψ ∈ D(f (A)) and choose fn as in (3. Then f (A)Bψ = lim fn (A)Bψ = lim Bfn (A)ψ n→∞ n→∞ (4.e.5.22) tell us that (4.25) shows f ∈ L2 (R.23) holds for any f in the ∗subalgebra generated by RA (z). dµBψ ) (i. We say that two bounded operators A. we soon run into trouble with this deﬁnition since the above expression might not even make sense for any nonzero vector (e. Commuting operators 101 Proof. (4. the claim follows directly from the complex Stone– Weierstraß theorem since (λ1 −z)−1 = (λ2 −z)−1 implies λ1 = λ2 . Proof.. which is compact. B] = AB − BA = 0. If K is compact. since B commutes with any polynomial of A. Corollary 4. ψ with ψ ∈ D(A)). Conversely. Then Bf (A)ψ = lim Bfn (A)ψ = lim fn (A)Bψ = f (A)Bψ. .6. Since σ(A) is compact. the claim follows for all such f ∈ C∞ (σ(A)). B] = 0 for one z ∈ ρ(A). Then we can again apply the complex Stone–Weierstraß theorem to conclude that ˜ our ∗subalgebra is equal to {f ∈ C(K)f (∞) = 0} which is equivalent to C∞ (K). and set (∞−z)−1 = 0.21) If A or B is unbounded. Bψ ∈ D(f (A))) and f (A)Bψ = BF (A)ψ.24) If f is unbounded. Then [f (A).21) follows from (4. Suppose A is selfadjoint and commutes with the bounded operator B.22) holds if and only if (4. Hence if A is selfadjoint and B is bounded we will say that A and B commute if [RA (z).2. Otherwise. To avoid this nuisance we will replace A by a bounded function of A.23) by our lemma.23) for any bounded Borel function f . the claim holds for any ψ ∈ D(f (A)). B commute if [A.21) holds. then (4. Since this subalgebra is dense in C∞ (σ(A)). n→∞ n→∞ (4. Choose a sequence fn ∈ C∞ (σ(A)) converging to f in L2 (R. Next ﬁx ψ ∈ H and let f be bounded. take B = ϕ. If f is unbounded. B] = 0 (4. If A is selfadjoint and bounded. Equation (4.4..24). B] = [RA (z ∗ ).
(4. But for such f we have 1 ˆ [f (A). Proof. since these functions ˆ are dense in C∞ (R) by the complex Stone–Weierstraß theorem. B]dt = 0 (4.26) since B commutes with the multiplication operator g(λ). The extension to the case where B is selfadjoint and unbounded is straightforward. B] = 0 for all t ∈ R.7. Suppose A is selfadjoint and has simple spectrum. RB (z2 )] = [RA (z1 ). g(B)] = 0 for arbitrary bounded Borel functions f and g. dµψ ). Suppose A is selfadjoint and B is bounded. (4. From our above analysis it follows that this is equivalent to [e−iAt .2.30) [f (A). Then B commutes with A if and only if [e−iAt . Then Bg(λ) = Bg(λ) · 1 = g(λ)(B1)(λ) (4. Here f denotes the Fourier transform of f . Applications of the spectral theorem Theorem 4. s ∈ R. B] = √ [ 2π by Lemma 4. RB (z2 )] = 0 (4.27) 1 f (t)e−iAt dt. We say that A and B commute in this case if ∗ [RA (z1 ). ˆ Proof. Hence B is multiplication by f (λ) = (B1)(λ). Lemma 4.1. see Section 7.28) R (4. It suﬃces to show [f (A). The assumption that the spectrum of A is simple is crucial as the example A = I shows. Let ψ be a cyclic vector for A. A bounded operator B commutes with A if and only if B = f (A) for some bounded Borel function.8. respectively t. Note also that the functions exp(−itA) can also be used instead of resolvents.31) . B] = √ 2π R f (t)[e−iAt .102 4. B] = 0 for f ∈ S(R). By our unitary equivalence it is no restriction to assume H = L2 (R.29) ∗ for one z1 ∈ ρ(A) and one z2 ∈ ρ(B) (the claim for z2 follows by taking adjoints). e−iBs ] = 0.
. More precisely. In general there is no way of computing the lowest eigenvalues and their corresponding eigenfunctions explicitly.. However. Deﬁne U (ψ1 . Aψ = j=1 αj 2 Ej + O(ε) ≤ En + O(ε) (4. Aψ ≤ En + O(ε). ψ ∈ span{ψ1 . let {ϕj }N be an orthonormal basis for the space spanned j=1 by the eigenfunctions corresponding to eigenvalues below the essential spectrum. .15 we know that ψ1 . . .. then we can restrict A to the orthogonal complement of ϕ1 and proceed as before: E2 will be the inﬁmum over all expectations restricted to this subspace. there will still be a component in the direction of ϕ1 left and hence the inﬁmum of the expectations will be lower than E2 . ψn ) = {ψ ∈ D(A) ψ = 1. The minmax theorem 103 4. Then by Theorem 2..32) If we add some free parameters to ψ1 .3. So suppose we have a normalized function ψ1 which is an approximation for the eigenfunction ϕ1 of the lowest eigenvalue E1 . Aϕ1 = E1 . one often has some idea how the eigenfunctions might approximately look like.33) ψ. If the number of eigenvalues N is ﬁnite we set Ej = inf σess (A) for j > N and choose ϕj orthonormal such that (A − Ej )ϕj ≤ ε. ψn−1 ). The essential spectrum is obtained from the spectrum by removing all discrete eigenvalues with ﬁnite multiplicity (we will have a closer look at it in Section 6.3. . . If we restrict to the orthogonal complement of an approximating eigenfunction ψ1 . . . one can optimize them and obtain quite good upper bounds for the ﬁrst eigenvalue. . . n ψ. The minmax theorem In many applications a selfadjoint operator has a number of eigenvalues below the bottom of the essential spectrum. .. (i) We have inf ψ∈U (ψ1 .35) and the claim follows. . (4.ψn−1 ) (4.4. But is there also something one can say about the next eigenvalues? Suppose we know the ﬁrst eigenfunction ϕ1 . Aψ1 ≥ ϕ1 .34) In fact. . Assume they satisfy (A − Ej )ϕj = 0.2). set ψ = then n j=1 αj ϕj and choose αj such that ψ ∈ U (ψ1 . (4. ψn }⊥ }. 
Thus the optimal choice ψ1 = ϕ1 will give the maximal value E2 . where Ej ≤ Ej+1 are the eigenvalues (counted according to their multiplicity).
9 (MinMax). ∞)) ≥ k. Suppose A.. the sup and inf are in fact max and min. dim Ran PA ((−∞. Problem 4.10. Corollary 4. Estimating eigenspaces Next. Then En = sup inf ψ. explaining the name. as long as En is an eigenvalue. Applications of the spectral theorem (ii) We have inf ψ∈U (ϕ1 . we show that the dimension of the range of PA (Ω) can be estimated if we have some functions which lie approximately in this space.e. (4. Similarly.. (4..ψn−1 ) Clearly the same result holds if D(A) is replaced by the quadratic form domain Q(A) in the deﬁnition of U . A − B ≥ 0). ψj ∈ D(A). Aψ ≥ En − O(ε). In addition.41) dim Ran PA ((λ1 . (Hint A − An ≤ ε is equivalent to A − ε ≤ A ≤ A + ε. . ψ.. Let λ1 < λ2 . ψj ∈ Q(A). Aψ ..ϕn−1 ) ψ.40) for any nonzero linear combination ψ = then (4. λ2 )) ≥ k. Suppose A is a selfadjoint operator and ψj ...2. Aψ < λ ψ for any nonzero linear combination ψ = 2 (4. Since ε can be chosen arbitrarily small we have proven Theorem 4..) 4.104 4.ψn−1 ψ∈U (ψ1 .36) In fact. Theorem 4. set ψ = ϕn . Let A be selfadjoint and let E1 ≤ E2 ≤ E3 · · · be the eigenvalues of A below the essential spectrum respectively the inﬁmum of the essential spectrum once there are no more eigenvalues left.. If (A − λ2 − λ1 λ2 + λ1 )ψ < ψ 2 2 k j=1 cj ψj . (4.. then En (A) ≥ En (B). Let λ ∈ R. An are bounded and An → A. (ii). Then Ek (An ) → Ek (A). λ)) ≥ k..4.11. Aψ > λ ψ 2 implies dim Ran PA ((λ.37) ψ1 . Suppose A and B are selfadjoint operators with A ≥ B (i. (i).38) then (4.. 1 ≤ j ≤ k. If ψ.39) k j=1 cj ψj . are linearly independent elements of a H.
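Two easy consequences of the min-max principle can be checked numerically on an assumed toy matrix: any trial vector bounds E1 from above, and restricting to the orthogonal complement of the first eigenfunction bounds E2 from below.

```python
import numpy as np

# Min-max sketch on an assumed symmetric matrix with eigenvalues
# 2 - sqrt(2), 2, 2 + sqrt(2).
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
E, V = np.linalg.eigh(A)

def rayleigh(psi):
    return (psi @ A @ psi) / (psi @ psi)

trial = np.array([1.0, 1.0, 1.0])        # any trial vector bounds E_1 above
assert rayleigh(trial) >= E[0]

phi1 = V[:, 0]                           # first eigenfunction
psi = trial - (phi1 @ trial) * phi1      # project out phi1
print(rayleigh(psi) >= E[1] - 1e-12)     # -> True: inf over the complement is E_2
```

This is exactly the mechanism described above: with the true eigenfunction φ1 removed, no component in its direction survives, so the infimum of the expectations on the complement is E2 and not lower.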
5.4. (i). 4. An )ψ1 ⊗ · · · ⊗ ψn . . . (4. An ) as above.46) Moreover. . j (4. An ). For every monomial λn1 · · · λnn we can deﬁne n 1 (An1 ⊗ · · · ⊗ Ann )ψ1 ⊗ · · · ⊗ ψn = (An1 ψ1 ) ⊗ · · · ⊗ (Ann ψn ). . then the operator P (A1 . Then P (A1 . then. For this it suﬃces to show KerPA ((−∞. λn ) of degree N we can deﬁne P (A1 .42) ≥ λ [λ. if P is realvalued.5. . . Using the same notation as before we need to show KerPA ((λ1 . λ))ψ = 0.43) 4 2 4 Ω where Ω = (−∞. λ2 ))M = {0}. . Let M = span{ψj } ⊆ H. . λ))M = {0}. . Tensor products of operators Suppose Aj . . . ψ = 0. . . An )) = P (σ(A1 ).12. .47) . But this is a contradiction as before. An ) is selfadjoint and its spectrum is the closure of the range of P on the product of the spectra of the Aj . ψj ∈ D(AN ). Suppose PA ((−∞.45) n and extend this deﬁnition to obtain a linear operator on the set D = span{ψ1 ⊗ · · · ⊗ ψn  ψj ∈ D(AN )}.∞) η dµψ (η) (4. We claim dim PA ((−∞. . . (4. j (4. (4. .∞) dµψ (η) = λ ψ 2 . 1 ≤ j ≤ n. If PA ((λ1 . . . σ(An )). An ) on D is symmetric and we can consider its closure. Suppose Aj . are selfadjoint operators on Hj . . . (A − λ2 + λ 1 λ2 + λ1 2 λ2 + λ 1 )ψ 2 = (x − ) dµψ (x) = x2 dµψ (x + ) 2 2 2 R Ω (λ2 − λ1 )2 λ 2 + λ1 (λ2 − λ1 )2 ≥ dµψ (x + )= ψ 2. . λ))M = dim M = k. ∞). . Aψ = R η dµψ (η) = [λ. . σ(P (A1 .38). .44) Hence for every polynomial P (λ1 . are selfadjoint operators on Hj and let P (λ1 . . . . . . Theorem 4. . This contradicts our assumption (4. . . . Tensor products of operators 105 Proof. . n n 1 1 ψj ∈ D(Aj j ). that is. ψ = 0. −(λ2 −λ1 )/2]∪[(λ2 −λ1 )/2. 1 ≤ j ≤ n. (ii). Then we see that for any nonzero linear combination ψ ψ. λn ) be a realvalued polynomial and deﬁne P (A1 . λ2 ))ψ = 0. . which will again be denoted by P (A1 .
k ∈ D(AN ) with (Aj − λj )ψj. dµj ) and P (A1 . dµ1 × · · · × dµn ). Since D contains the set of all functions ψ1 (λ1 ) · · · ψn (λn ) for which ψj ∈ L2 (R. . λn ) with λj ∈ σ(Aj ). Then. . By the spectral theorem it is no restriction to assume that Aj is multiplication by λj on L2 (R.k and hence λ ∈ σ(P ). . An ) is hence multiplication by P (λ1 . in which case σ(A1 ⊗ A2 ) = σ(A1 )σ(A2 ) = {λ1 λ2 λj ∈ σ(Aj )}. The two main cases of interest are A1 ⊗ A2 . Applications of the spectral theorem Proof. Then there exists Weyl sequences ψj. in which case σ(A1 ⊗ I + I ⊗ A2 ) = σ(A1 ) + σ(A2 ) = {λ1 + λ2 λj ∈ σ(Aj )}. . .k ⊗ · · · ⊗ ψ1. . which is selfadjoint. Hence P is c the maximally deﬁned multiplication operator by P (λ1 . λj with respect to µj and hence (P −λ)−1 exists and is bounded. . Conversely.106 4. λn ) on L2 (Rn . . .49) (4. . . . . .k → 0 as k → ∞. if λ ∈ P (σ(A1 ). λn ) − λ ≥ ε for a. . .e. Now let λ = P (λ1 . . . and A1 ⊗ I + I ⊗ A2 . σ(An )). (4. λn ).48) . that is λ ∈ ρ(P ). . . . . . (P − j λ)ψk → 0. dµj ) it follows that the c domain of the closure of P contains L2 (Rn . then P (λ1 . where ψk = ψ1. . dµ1 × · · · × dµn ).
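The two spectral identities for σ(A1 ⊗ A2) and σ(A1 ⊗ I + I ⊗ A2) are easy to verify for matrices using Kronecker products; the diagonal factors below are toy assumptions.

```python
import numpy as np

# Tensor products via Kronecker products (assumed toy factors):
# sigma(A1 (x) I + I (x) A2) = {l1 + l2} and sigma(A1 (x) A2) = {l1 * l2}.
A1 = np.diag([1.0, 4.0])     # sigma(A1) = {1, 4}
A2 = np.diag([0.0, 3.0])     # sigma(A2) = {0, 3}
I2 = np.eye(2)

S = np.kron(A1, I2) + np.kron(I2, A2)    # A1 (x) I + I (x) A2
P = np.kron(A1, A2)                      # A1 (x) A2

sums = np.sort(np.linalg.eigvalsh(S))
prods = np.sort(np.linalg.eigvalsh(P))
print(sums)     # -> [1. 4. 4. 7.]   = {1+0, 1+3, 4+0, 4+3}
print(prods)    # -> [ 0.  0.  3. 12.] = {1*0, 1*3, 4*0, 4*3}
```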
Chapter 5

Quantum dynamics

As in the finite dimensional case, the solution of the Schrödinger equation

i d/dt ψ(t) = Hψ(t)    (5.1)

is given by

ψ(t) = exp(−itH)ψ(0).    (5.2)

A detailed investigation of this formula will be our first task. Moreover, in the finite dimensional case the dynamics is understood once the eigenvalues are known, and the same is true in our case once we know the spectrum. Note that, like any Hamiltonian system from classical mechanics, our system is not hyperbolic (i.e., the spectrum is not away from the real axis) and hence simple results like "all solutions tend to the equilibrium position" cannot be expected.

5.1. The time evolution and Stone's theorem

In this section we want to have a look at the initial value problem associated with the Schrödinger equation (2.12) in the Hilbert space H. If H is one-dimensional (and hence A is a real number), the solution is given by

ψ(t) = e−itA ψ(0).    (5.3)

Our hope is that this formula also applies in the general case and that we can reconstruct a one-parameter unitary group U(t) from its generator A (compare (2.11)) via U(t) = exp(−itA). We first investigate the family of operators exp(−itA).

Theorem 5.1. Let A be selfadjoint and let U(t) = exp(−itA).

(i) U(t) is a strongly continuous one-parameter unitary group.
there should be a one to one correspondence between the unitary group and its generator. To see (iii) replace ψ → U (s)ψ in (ii).11 (iv) shows that U (t) is in fact strongly continuous. U (t)D(A) = D(A) and AU (t) = U (t)A. Moreover.2. U (t)Aψ = ψ. To prove strong continuity observe that t→t0 lim e−itA ψ − e−it0 A ψ 2 = = t→t0 lim e−itλ − e−it0 λ 2 dµψ (λ) R R t→t0 lim e−itλ − e−it0 λ 2 dµψ (λ) = 0 (5. the generator of the time evolution of a quantum mechanical system should always be a selfadjoint operator since it corresponds to an observable (energy). Aψ (5. AU (t)ψ = U (t)ψ. Aψ = lim ϕ. Similarly. Theorem 5. Moreover.2 (Stone). This settles (ii).108 5. On the other hand.7) shows that the expectations of A are time independent.5) t ˜ since eitλ − 1 ≤ tλ.6) ϕ. Let U (t) be a weakly continuous oneparameter unitary group. This corresponds to conservation of energy. ψ (5. (U (t) − 1)ψ = lim t→0 −t t→0 t ˜ and hence A = A by Corollary 2. Now let A be the generator deﬁned as in (2. The group property (i) follows directly from Theorem 3. Then its generator A is selfadjoint and U (t) = exp(−itA). . if ψ ∈ D(A) we obtain t→0 lim 1 −itA (e ψ − ψ) + iAψ t 2 = lim t→0 R 1  (e−itλ − 1) + iλ2 dµψ (λ) = 0 (5.1 and the corresponding statements for the function exp(−itλ). This is ensured by Stone’s theorem. Proof.11). o U (t)ψ. t (iii). First of all observe that weak continuity together with Lemma 1.4) by the dominated convergence theorem. Proof. Quantum dynamics (ii). For our original problem this implies that formula (5.3) is indeed the solution to the initial value problem of the Schr¨dinger equation. The limit limt→0 1 (U (t)ψ − ψ) exists if and only if ψ ∈ D(A) in t which case limt→0 1 (U (t)ψ − ψ) = −iAψ. Then ˜ A is a symmetric extension of A since we have i i ˜ ˜ (U (−t) − 1)ϕ. ψ = Aϕ.
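Theorems 5.1 and 5.2 can be sketched for a Hermitian matrix: U(t) = exp(−itA) is a unitary group, expectations of A are conserved in time, and the generator is recovered from the difference quotient. The matrix and initial state below are assumptions.

```python
import numpy as np

# Unitary group generated by an assumed Hermitian matrix A, built through
# the spectral decomposition: U(t) = V diag(exp(-i t lambda_j)) V^*.
A = np.array([[1.0, 0.5],
              [0.5, -1.0]])
evals, V = np.linalg.eigh(A)

def U(t):
    return V @ np.diag(np.exp(-1j * t * evals)) @ V.conj().T

psi = np.array([1.0, 1j]) / np.sqrt(2)

# unitarity and the group law U(t + s) = U(t) U(s)
assert np.allclose(U(0.3).conj().T @ U(0.3), np.eye(2))
assert np.allclose(U(0.7), U(0.3) @ U(0.4))

# conservation of energy: <U(t)psi, A U(t)psi> is time independent
e0 = np.vdot(psi, A @ psi)
assert np.isclose(np.vdot(U(5.0) @ psi, A @ (U(5.0) @ psi)), e0)

# generator: (U(t)psi - psi)/t -> -i A psi as t -> 0
t = 1e-6
print(np.linalg.norm((U(t) @ psi - psi) / t + 1j * (A @ psi)))  # -> small
```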
Next. So A is essentially selfadjoint and we can introduce V (t) = exp(−itA). 1 τ 1 t+τ 1 U (s)ψds − U (s)ψds (U (t)ψτ − ψτ ) = t t t t 0 1 τ +t 1 t = U (s)ψds − U (s)ψds t τ t 0 t 1 1 t = U (τ ) U (s)ψds − U (s)ψds → U (τ )ψ − ψ t t 0 0 (5. Moreover. Pick ψ ∈ H and set τ ψτ = 0 U (t)ψdt (5. −iAU (t)ψ = −i A∗ ϕ. since D(A) is dense we have U (t) = V (t) by continuity. U (t)ψ dt = ϕ. let us prove that A is essentially selfadjoint. we must have ϕ. ψ = 0 for any ϕ ∈ Ker(A∗ − z ∗ ) and ψ ∈ D. Since ψ(0) = 0 we have ψ(t) = 0 and hence U (t) and V (t) coincide on D(A). Proof. Corollary 5. We are done if we can show U (t) = V (t).6 it suﬃces to prove Ker(A∗ − z ∗ ) = {0} for z ∈ C\R. Then A is essentially selfadjoint on D. As in the proof of the previous theorem. ψ = 0 implying ϕ = 0 since D(A) is dense. By Lemma 2.9) as t → 0 shows ψτ ∈ D(A).11) d and hence dt ψ(t) 2 = 2Re ψ(t).1) implying limτ →0 τ −1 ψτ = ψ. As in the above proof it follows ϕ. Furthermore. U (t)ψ (5. iAψ(t) = 0. we can show that A is symmetric and that U (t)D(A) = D(A).3.1. Let ψ ∈ D(A) and abbreviate ψ(t) = (U (t) − V (t))ψ. ψ . Suppose D ⊆ D(A) is dense and invariant under U (t). U (t)ψ and hence ϕ.5. then for each ψ ∈ D(A) we have d ϕ. The time evolution and Stone’s theorem 109 Next we show that A is densely deﬁned.10) = −iz ϕ.8) (the integral is deﬁned as in Section 4. U (t)ψ = exp(−izt) ϕ. Then ψ(t + s) − ψ(t) = iAψ(t) s→0 s lim (5. Since the left hand side is bounded for all t ∈ R and the exponential on the right hand side is not. . Suppose A∗ ϕ = z ∗ ϕ. As an immediate consequence of the proof we also note the following useful criterion.
Note that by Lemma 4.8 two strongly continuous one-parameter groups commute,
\[
[e^{-\mathrm{i}tA}, e^{-\mathrm{i}sB}] = 0, \tag{5.12}
\]
if and only if the generators commute.

Clearly, for a physicist, one of the goals must be to understand the time evolution of a quantum mechanical system. We have seen that the time evolution is generated by a selfadjoint operator, the Hamiltonian, and is given by a linear first order differential equation, the Schr\"odinger equation. To understand the dynamics of such a first order differential equation, one must understand the spectrum of the generator. Some general tools for this endeavor will be provided in the following sections.

Problem 5.1. Let $\mathfrak{H} = L^2(0, 2\pi)$ and consider the one-parameter unitary group given by $U(t)f(x) = f(x - t \mod 2\pi)$. What is the generator of $U$?

5.2. The RAGE theorem

Now, let us discuss why the decomposition of the spectrum introduced in Section 3.3 is of physical relevance. Let $\|\varphi\| = \|\psi\| = 1$. The vector $\langle\varphi,\psi\rangle\varphi$ is the projection of $\psi$ onto the (one-dimensional) subspace spanned by $\varphi$. Hence $|\langle\varphi,\psi\rangle|^2$ can be viewed as the part of $\psi$ which is in the state $\varphi$. A first question one might raise is, how does
\[
|\langle\varphi, U(t)\psi\rangle|^2, \qquad U(t)=e^{-\mathrm{i}tA}, \tag{5.13}
\]
behave as $t\to\infty$? By the spectral theorem,
\[
\hat\mu_{\varphi,\psi}(t) = \langle\varphi, U(t)\psi\rangle = \int_{\mathbb R} e^{-\mathrm{i}t\lambda}\, d\mu_{\varphi,\psi}(\lambda) \tag{5.14}
\]
is the Fourier transform of the measure $\mu_{\varphi,\psi}$. Thus our question is answered by Wiener's theorem.

Theorem 5.4 (Wiener). Let $\mu$ be a finite complex Borel measure on $\mathbb R$ and let
\[
\hat\mu(t) = \int_{\mathbb R} e^{-\mathrm{i}t\lambda}\, d\mu(\lambda) \tag{5.15}
\]
be its Fourier transform. Then the Ces\`aro time average of $\hat\mu(t)$ has the following limit
\[
\lim_{T\to\infty}\frac{1}{T}\int_0^T |\hat\mu(t)|^2\, dt = \sum_{\lambda\in\mathbb R} |\mu(\{\lambda\})|^2, \tag{5.16}
\]
where the sum on the right hand side is finite.
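Wiener's theorem can be illustrated numerically: the time average of $|\hat\mu(t)|^2$ picks out exactly the squared point masses, while any absolutely continuous part averages away. The specific measure below (two point masses plus a flat density) is an assumption chosen for illustration, not taken from the text.

```python
import numpy as np

# mu = 0.5*delta_{1} + 0.25*delta_{-2} + (density 0.25 * chi_[0,1] dx)
def mu_hat(t):
    t = np.asarray(t, dtype=float)
    pp = 0.5 * np.exp(-1j * t * 1.0) + 0.25 * np.exp(-1j * t * (-2.0))
    ts = np.where(t == 0, 1.0, t)                       # avoid 0/0 at t = 0
    ac = 0.25 * (np.exp(-1j * ts) - 1) / (-1j * ts)     # Fourier transform of the density
    ac = np.where(t == 0, 0.25, ac)
    return pp + ac

T = 20000.0
t = np.linspace(0.0, T, 400001)
cesaro = np.mean(np.abs(mu_hat(t))**2)                  # Cesaro time average over [0, T]
predicted = 0.5**2 + 0.25**2                            # sum of squared point masses (5.16)
```

The cross terms and the absolutely continuous contribution decay like $1/T$ (up to logarithms), so the finite-$T$ average is already close to the predicted limit $0.3125$.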
Proof. By Fubini we have
\[
\frac{1}{T}\int_0^T |\hat\mu(t)|^2\,dt = \frac{1}{T}\int_0^T\int_{\mathbb R}\int_{\mathbb R} e^{-\mathrm{i}(x-y)t}\,d\mu(x)\,d\mu^*(y)\,dt
= \int_{\mathbb R}\int_{\mathbb R}\Big(\frac{1}{T}\int_0^T e^{-\mathrm{i}(x-y)t}\,dt\Big)d\mu(x)\,d\mu^*(y). \tag{5.17}
\]
The function in parentheses is bounded by one and converges pointwise to $\chi_{\{0\}}(x-y)$ as $T\to\infty$. Thus, by the dominated convergence theorem, the limit of the above expression is given by
\[
\int_{\mathbb R}\int_{\mathbb R}\chi_{\{0\}}(x-y)\,d\mu(x)\,d\mu^*(y) = \int_{\mathbb R}\mu(\{y\})\,d\mu^*(y) = \sum_{y\in\mathbb R}|\mu(\{y\})|^2. \tag{5.18}
\]
$\square$

To apply this result to our situation, observe that the subspaces $\mathfrak{H}^{ac}$, $\mathfrak{H}^{sc}$, and $\mathfrak{H}^{pp}$ are invariant with respect to time evolution since
\[
P^{xx}U(t) = \chi_{M_{xx}}(A)\exp(-\mathrm{i}tA) = \exp(-\mathrm{i}tA)\chi_{M_{xx}}(A) = U(t)P^{xx}, \qquad xx\in\{ac, sc, pp\}.
\]
Moreover, if $\psi\in\mathfrak{H}^{xx}$ we have $P^{xx}\psi=\psi$, which shows
\[
\langle\varphi, f(A)\psi\rangle = \langle\varphi, f(A)P^{xx}\psi\rangle = \langle P^{xx}\varphi, f(A)\psi\rangle
\]
implying $d\mu_{\varphi,\psi} = d\mu_{P^{xx}\varphi,\psi}$. Thus if $\mu_\psi$ is ac, sc, or pp, so is $\mu_{\varphi,\psi}$ for every $\varphi\in\mathfrak{H}$.

In other words, if $\psi\in\mathfrak{H}^c = \mathfrak{H}^{ac}\oplus\mathfrak{H}^{sc}$, then the Ces\`aro mean of $\langle\varphi, U(t)\psi\rangle$ tends to zero. In addition, if $\psi\in\mathfrak{H}^{ac}$, then $d\mu_{\varphi,\psi}$ is absolutely continuous with respect to Lebesgue measure and thus $\hat\mu_{\varphi,\psi}(t)$ is continuous and tends to zero as $t\to\infty$. In fact, this follows from the Riemann--Lebesgue lemma (see Lemma 7.6 below).

That is, the average of the probability of finding the system in any prescribed state tends to zero if we start in the continuous subspace $\mathfrak{H}^c$ of $A$. This will eventually yield a dynamical characterization of the continuous and pure point spectrum due to Ruelle, Amrein, Georgescu, and En{\ss}. But first we need a few definitions.

An operator $K\in\mathcal{L}(\mathfrak{H})$ is called finite rank if its range is finite dimensional. The dimension $n=\dim\mathrm{Ran}(K)$ is called the rank of $K$. If $\{\psi_j\}_{j=1}^n$ is an orthonormal basis for $\mathrm{Ran}(K)$ we have
\[
K\psi = \sum_{j=1}^n \langle\psi_j, K\psi\rangle\psi_j = \sum_{j=1}^n \langle\varphi_j, \psi\rangle\psi_j, \tag{5.19}
\]
(5. the claim can be reduced to the previous situation. if K is also bounded. If. In particular we have D(A) ⊆ D(K). then the result holds for any ψ ∈ H. Theorem 5. It is straightforward to verify (Problem 5. Hence it holds for any ﬁnite rank operator K. In particular. in addition. Then 1 T lim Ke−itA P c ψ 2 dt = 0 and lim Ke−itA P ac ψ = 0 t→∞ T →∞ T 0 (5.24) n . Let ψ ∈ Hc respectively ψ ∈ Hac and drop the projectors. The elements ϕj are linearly independent since Ran(K) = Ker(K ∗ )⊥ .6. .23) n Thus the claim holds for any compact operator K. In addition. the adjoint of K is also ﬁnite rank and given by n K ∗ψ = j=1 ψ j . where ϕ ∈ Hc if and only if ψ ∈ Hc (since Hc reduces A).21) for one z ∈ ρ(A). ϕ2 ).22) for every ψ ∈ D(A). If K is compact. Then. if K is a rank one operator (i. Hence every ﬁnite rank operator is of the form (5.. If ψ ∈ D(A) we can set ψ = (A − i)−1 ϕ.20) The closure of the set of all ﬁnite rank operators in L(H) is called the set of compact operators C(H). (5. we can ﬁnd a sequence ψn ∈ D(A) such that ψ − ψn ≤ 1/n and hence 1 Ke−itA ψ ≤ Ke−itA ψn + K .19). there is a sequence Kn of ﬁnite rank operators such that K − Kn ≤ 1/n and hence 1 Ke−itA ψ ≤ Kn e−itA ψ + ψ . There is also a weaker version of compactness which is useful for us. ψ ϕj . Now let us return to our original problem. The set of all compact operators C(H) is a closed ∗ideal in L(H). Let A be selfadjoint and suppose K is relatively compact. Proof. K = ϕ1 . K is bounded.e. Since K(A + i)−1 is compact by assumption.112 5.5. (5. Quantum dynamics where ϕj = K ∗ ψj . The operator K is called relatively compact with respect to A if KRA (z) ∈ C(H) (5. By the ﬁrst resolvent identity this then follows for all z ∈ ρ(A).2) Lemma 5. the claim follows from Wiener’s theorem respectively the RiemannLebesgue lemma.
concluding the proof. $\square$

In summary, regularity properties of spectral measures are related to the long time behavior of the corresponding quantum mechanical system. With the help of this result we can now prove an abstract version of the RAGE theorem.

Theorem 5.7 (RAGE). Let $A$ be selfadjoint. Suppose $K_n\in\mathcal{L}(\mathfrak{H})$ is a sequence of relatively compact operators which converges strongly to the identity. Then
\[
\mathfrak{H}^c = \Big\{\psi\in\mathfrak{H}\ \Big|\ \lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T \|K_n e^{-\mathrm{i}tA}\psi\|\,dt = 0\Big\},\qquad
\mathfrak{H}^{pp} = \Big\{\psi\in\mathfrak{H}\ \Big|\ \lim_{n\to\infty}\sup_{t\ge 0}\|(\mathbb{I} - K_n) e^{-\mathrm{i}tA}\psi\| = 0\Big\}. \tag{5.25}
\]

Proof. Abbreviate $\psi(t) = \exp(-\mathrm{i}tA)\psi$. We begin with the first equation. Let $\psi\in\mathfrak{H}^c$, then
\[
\frac{1}{T}\int_0^T \|K_n\psi(t)\|\,dt \le \Big(\frac{1}{T}\int_0^T \|K_n\psi(t)\|^2\,dt\Big)^{1/2} \to 0 \tag{5.26}
\]
by Cauchy--Schwarz and the previous theorem. Conversely, if $\psi\notin\mathfrak{H}^c$ we can write $\psi = \psi^c + \psi^{pp}$ with $\psi^{pp}\ne 0$, and by our previous estimate it suffices to show $\|K_n\psi^{pp}(t)\| \ge \varepsilon > 0$ for $n$ large. In fact, we even claim
\[
\lim_{n\to\infty}\sup_{t\ge 0}\|K_n\psi^{pp}(t) - \psi^{pp}(t)\| = 0. \tag{5.27}
\]
By the spectral theorem, we can write $\psi^{pp}(t) = \sum_j \alpha_j(t)\psi_j$, where the $\psi_j$ are orthonormal eigenfunctions and $\alpha_j(t) = \exp(-\mathrm{i}t\lambda_j)\alpha_j$. Truncate this expansion after $N$ terms; then this part converges uniformly to the desired limit by strong convergence of $K_n$. Moreover, by Lemma 1.13 we have $\|K_n\|\le M$, and hence the error can be made arbitrarily small by choosing $N$ large.

Now let us turn to the second equation. If $\psi\in\mathfrak{H}^{pp}$ the claim follows by (5.27). Conversely, if $\psi\notin\mathfrak{H}^{pp}$ we can write $\psi = \psi^c + \psi^{pp}$ with $\psi^c\ne 0$, and by our previous estimate it suffices to show that $\|(\mathbb{I}-K_n)\psi^c(t)\|$ does not tend to $0$ as $n\to\infty$. If it would, we would have (using $\|(\mathbb{I}-K_n)\psi^c(t)\|^2 \ge \|\psi^c(t)\|^2 - 2\|K_n\psi^c(t)\|\,\|\psi^c(t)\|$ together with the fact that the time average of $\|K_n\psi^c(t)\|$ tends to zero by the first part)
\[
0 = \lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T \|(\mathbb{I}-K_n)\psi^c(t)\|^2\,dt \ge \|\psi^c\|^2, \tag{5.28}
\]
a contradiction. $\square$
However, a more detailed investigation of this topic is beyond the scope of this manuscript. For a survey containing several recent results see [9].

It is often convenient to treat the observables as time dependent rather than the states. We set
\[
K(t) = e^{\mathrm{i}tA} K e^{-\mathrm{i}tA} \tag{5.29}
\]
and note
\[
\langle\psi(t), K\psi(t)\rangle = \langle\psi, K(t)\psi\rangle, \qquad \psi(t) = e^{-\mathrm{i}tA}\psi. \tag{5.30}
\]
This point of view is often referred to as Heisenberg picture in the physics literature. We will assume that $K$ is bounded. If $K$ is unbounded we will assume $D(A)\subseteq D(K)$ such that the above equations make sense at least for $\psi\in D(A)$. The main interest is the behavior of $K(t)$ for large $t$. The strong limits are called asymptotic observables if they exist.

Theorem 5.8. Suppose $A$ is selfadjoint and $K$ is relatively compact. Then
\[
\lim_{T\to\infty}\frac{1}{T}\int_0^T e^{\mathrm{i}tA} K e^{-\mathrm{i}tA}\psi\,dt = \sum_{\lambda\in\sigma_p(A)} P_A(\{\lambda\}) K P_A(\{\lambda\})\psi, \qquad \psi\in D(A). \tag{5.31}
\]
If $K$ is in addition bounded, the result holds for any $\psi\in\mathfrak{H}$.

Proof. Write $\psi = \psi^c + \psi^{pp}$. Then
\[
\lim_{T\to\infty}\Big\|\frac{1}{T}\int_0^T K(t)\psi^c\,dt\Big\| \le \lim_{T\to\infty}\frac{1}{T}\int_0^T \|K(t)\psi^c\|\,dt = 0 \tag{5.32}
\]
by Theorem 5.6. As in the proof of the previous theorem we can write $\psi^{pp} = \sum_j \alpha_j\psi_j$ and hence
\[
\sum_j \alpha_j \frac{1}{T}\int_0^T K(t)\psi_j\,dt = \sum_j \alpha_j\Big(\frac{1}{T}\int_0^T e^{\mathrm{i}t(A-\lambda_j)}\,dt\Big) K\psi_j. \tag{5.33}
\]
As in the proof of Wiener's theorem, we see that the operator in parentheses tends to $P_A(\{\lambda_j\})$ strongly as $T\to\infty$. Since this operator is also bounded by $1$ for all $T$, we can interchange the limit with the summation and the claim follows. To obtain the general result, use the same trick as before and replace $K$ by $KR_A(z)$. $\square$

We also note the following corollary.

Corollary 5.9. Under the same assumptions as in the RAGE theorem we have
\[
\lim_{n\to\infty}\lim_{T\to\infty}\frac{1}{T}\int_0^T e^{\mathrm{i}tA} K_n e^{-\mathrm{i}tA}\psi\,dt = P^{pp}\psi \tag{5.34}
\]
(5. So limτ →0 Fτ (s) = 0 at least pointwise. s≤t (5. we can invoke the uniform boundedness ψ 2 Γ(A+B) = ψ principle to obtain 1 iτ A iτ B (e e − eiτ (A+B) )ψ ≤ C ψ Γ(A+B) .37) j where τ = t n. A. and A + B are selfadjoint. where eitA and eitB can be computed explicitly.2.39) τ Now for ψ ∈ D(A + B) = D(A) ∩ D(B) we have 1 iτ A iτ B (e e − eiτ (A+B) )ψ → iAψ + iBψ − i(A + B)ψ = 0 (5.3.35) Problem 5.9. Prove Lemma 5.42) τ where . (5. First of all note that we have eiτ A eiτ B n−1 n − eit(A+B) n−1−j = j=0 eiτ A eiτ B eiτ A eiτ B − eiτ (A+B) eiτ (A+B) .41) τ and.(5. However. Suppose. t]. we at least have: Theorem 5.36) Proof. Prove Corollary 5.3. B.40) τ as τ → 0. (5. Then eit(A+B) = slim ei n A ei n B n→∞ t t n .10 (Trotter product formula). 5.5.3. since D(A + B) is a Hilbert space when equipped with the graph norm 2 + (A + B)ψ 2 . and hence (eiτ A eiτ B )n − eit(A+B) ψ ≤ t max Fτ (s). The Trotter product formula In many situations the operator is of the form A + B.38) 1 iτ A iτ B (e e − eiτ (A+B) )eis(A+B) ψ . Since A and B will not commute in general. but we need this uniformly with respect to s ∈ [−t. Fτ (s) = Pointwise convergence implies 1 iτ A iτ B (e e − eiτ (A+B) )ψ ≤ C(ψ) (5. The Trotter product formula 115 respectively 1 n→∞ T →∞ T lim lim T 0 e−itA (I − Kn )e−itA ψdt = P c ψ.5. (5. Problem 5. we cannot obtain eit(A+B) from eitA eitB .
116
5. Quantum dynamics
Now Fτ (s) − Fτ (r) ≤ 1 iτ A iτ B (e e − eiτ (A+B) )(eis(A+B) − eir(A+B) )ψ τ ≤ C (eis(A+B) − eir(A+B) )ψ Γ(A+B) . (5.43)
shows that Fτ (.) is uniformly continuous and the claim follows by a standard ε 2 argument. If the operators are semibounded from below the same proof shows Theorem 5.11 (Trotter product formula). Suppose, A, B, and A + B are selfadjoint and semibounded from below. Then e−t(A+B) = slim e− n A e− n B
n→∞
t t
n
,
t ≥ 0.
(5.44)
Problem 5.4. Proof Theorem 5.11.
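The Trotter formula is easy to test numerically for two non-commuting Hermitian matrices, where all domain issues of the proof disappear; the error of the $n$-step product decays like $1/n$. The choice of Pauli matrices below is an illustrative assumption.

```python
import numpy as np

def herm_exp(H, t):
    # exp(-itH) for a Hermitian matrix H, via the spectral theorem
    lam, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * lam)) @ V.conj().T

A = np.array([[0.0, 1.0], [1.0, 0.0]])    # sigma_x
B = np.array([[1.0, 0.0], [0.0, -1.0]])   # sigma_z, does not commute with A
t = 1.0
exact = herm_exp(A + B, t)

def trotter(n):
    # (exp(-i t/n A) exp(-i t/n B))^n, cf. (5.36)
    step = herm_exp(A, t / n) @ herm_exp(B, t / n)
    return np.linalg.matrix_power(step, n)

err_10 = np.linalg.norm(trotter(10) - exact)
err_1000 = np.linalg.norm(trotter(1000) - exact)
```

The first-order bound $\|(\cdot)^n - e^{-\mathrm{i}t(A+B)}\| \le \frac{t^2}{2n}\|[A,B]\|$ predicts an error of at most $10^{-3}$ for $n = 1000$ here.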
Chapter 6
Perturbation theory for selfadjoint operators
The Hamiltonian of a quantum mechanical system is usually the sum of the kinetic energy $H_0$ (free Schr\"odinger operator) plus an operator $V$ corresponding to the potential energy. Since $H_0$ is easy to investigate, one usually tries to consider $V$ as a perturbation of $H_0$. This will only work if $V$ is small with respect to $H_0$. Hence we study such perturbations of selfadjoint operators next.
6.1. Relatively bounded operators and the Kato–Rellich theorem
An operator $B$ is called $A$ bounded or relatively bounded with respect to $A$ if $D(A)\subseteq D(B)$ and if there are constants $a, b\ge 0$ such that
\[
\|B\psi\| \le a\|A\psi\| + b\|\psi\|, \qquad \psi\in D(A). \tag{6.1}
\]
The infimum of all constants $a$ for which a corresponding $b$ exists such that (6.1) holds is called the $A$-bound of $B$.

The triangle inequality implies

Lemma 6.1. Suppose $B_j$, $j = 1, 2$, are $A$ bounded with respective $A$-bounds $a_j$, $j = 1, 2$. Then $\alpha_1 B_1 + \alpha_2 B_2$ is also $A$ bounded with $A$-bound less than $|\alpha_1|a_1 + |\alpha_2|a_2$. In particular, the set of all $A$ bounded operators forms a linear space.

There are also the following equivalent characterizations:
Lemma 6.2. Suppose $A$ is closed and $B$ is closable. Then the following are equivalent:
(i) $B$ is $A$ bounded.
(ii) $D(A)\subseteq D(B)$.
(iii) $BR_A(z)$ is bounded for one (and hence for all) $z\in\rho(A)$.
Moreover, the $A$-bound of $B$ is no larger than $\inf_{z\in\rho(A)}\|BR_A(z)\|$.

Proof. (i) $\Rightarrow$ (ii) is true by definition. (ii) $\Rightarrow$ (iii) since $BR_A(z)$ is a closed (Problem 2.6) operator defined on all of $\mathfrak{H}$ and hence bounded by the closed graph theorem (Theorem 2.7). To see (iii) $\Rightarrow$ (i) let $\psi\in D(A)$, then
\[
\|B\psi\| = \|BR_A(z)(A - z)\psi\| \le a\|(A - z)\psi\| \le a\|A\psi\| + (a|z|)\|\psi\|, \tag{6.2}
\]
where $a = \|BR_A(z)\|$. Finally, note that if $BR_A(z)$ is bounded for one $z\in\rho(A)$, it is bounded for all $z\in\rho(A)$ by the first resolvent formula. $\square$

We are mainly interested in the situation where $A$ is selfadjoint and $B$ is symmetric. Hence we will restrict our attention to this case.

Lemma 6.3. Suppose $A$ is selfadjoint and $B$ relatively bounded. The $A$-bound of $B$ is given by
\[
\lim_{\lambda\to\infty}\|BR_A(\pm\mathrm{i}\lambda)\|. \tag{6.3}
\]
If $A$ is bounded from below, we can also replace $\pm\mathrm{i}\lambda$ by $-\lambda$.

Proof. Let $\varphi = R_A(\pm\mathrm{i}\lambda)\psi$, $\lambda > 0$, and let $a_\infty$ be the $A$-bound of $B$. If $B$ is $A$ bounded, then (use the spectral theorem to estimate the norms)
\[
\|BR_A(\pm\mathrm{i}\lambda)\psi\| \le a\|AR_A(\pm\mathrm{i}\lambda)\psi\| + b\|R_A(\pm\mathrm{i}\lambda)\psi\| \le \Big(a + \frac{b}{\lambda}\Big)\|\psi\|. \tag{6.4}
\]
Hence $\limsup_\lambda \|BR_A(\pm\mathrm{i}\lambda)\| \le a_\infty$ which, together with $a_\infty \le \inf_\lambda \|BR_A(\pm\mathrm{i}\lambda)\|$ from the previous lemma, proves the claim. The case where $A$ is bounded from below is similar. $\square$

Now we will show the basic perturbation result due to Kato and Rellich.

Theorem 6.4 (Kato--Rellich). Suppose $A$ is (essentially) selfadjoint and $B$ is symmetric with $A$-bound less than one. Then $A + B$, $D(A + B) = D(A)$, is (essentially) selfadjoint. If $A$ is essentially selfadjoint we have $D(\overline{A})\subseteq D(\overline{B})$ and $\overline{A + B} = \overline{A} + \overline{B}$. If $A$ is bounded from below by $\gamma$, then $A + B$ is bounded from below by $\min(\gamma, b/(a - 1))$.
Proof. Since $D(A)\subseteq D(B)$ and $D(A)\subseteq D(A+B)$ by (6.1) we can assume that $A$ is closed (i.e., selfadjoint). It suffices to show that $\mathrm{Ran}(A+B\pm\mathrm{i}\lambda) = \mathfrak{H}$. By the above lemma we can find a $\lambda > 0$ such that $\|BR_A(\pm\mathrm{i}\lambda)\| < 1$. Hence $-1\in\rho(BR_A(\pm\mathrm{i}\lambda))$ and thus $\mathbb{I} + BR_A(\pm\mathrm{i}\lambda)$ is onto. Thus
\[
(A + B \pm\mathrm{i}\lambda) = (\mathbb{I} + BR_A(\pm\mathrm{i}\lambda))(A\pm\mathrm{i}\lambda) \tag{6.5}
\]
is onto and the proof of the first part is complete.

If $A$ is bounded from below we can replace $\pm\mathrm{i}\lambda$ by $-\lambda$ and the above equation shows that $R_{A+B}$ exists for $\lambda$ sufficiently large. By the proof of the previous lemma we can choose $-\lambda < \min(\gamma, b/(a-1))$. $\square$

Finally, let us show that there is also a connection between the resolvents.

Lemma 6.5. Suppose $A$ and $B$ are closed and $D(A)\subseteq D(B)$. Then we have the second resolvent formula
\[
R_{A+B}(z) - R_A(z) = -R_A(z) B R_{A+B}(z) = -R_{A+B}(z) B R_A(z) \tag{6.6}
\]
for $z\in\rho(A)\cap\rho(A+B)$.

Proof. We compute
\[
R_{A+B}(z) + R_A(z) B R_{A+B}(z) = R_A(z)(A + B - z) R_{A+B}(z) = R_A(z). \tag{6.7}
\]
The second identity is similar. $\square$

Problem 6.1. Compute the resolvent of $A + \alpha\langle\psi,\cdot\rangle\psi$. (Hint: Show
\[
\big(\mathbb{I} + \alpha\langle\varphi,\cdot\rangle\psi\big)^{-1} = \mathbb{I} - \frac{\alpha}{1 + \alpha\langle\varphi,\psi\rangle}\langle\varphi,\cdot\rangle\psi \tag{6.8}
\]
and use the second resolvent formula.)
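The second resolvent formula can be verified directly for finite Hermitian matrices, where every nonreal $z$ lies in both resolvent sets (the random matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
def rand_herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, B = rand_herm(5), rand_herm(5)
z = 0.3 + 1.0j                      # nonreal, hence in rho(A) and rho(A+B)
I = np.eye(5)
RA = np.linalg.inv(A - z * I)       # R_A(z)
RAB = np.linalg.inv(A + B - z * I)  # R_{A+B}(z)
lhs = RAB - RA
err1 = np.linalg.norm(lhs + RA @ B @ RAB)    # -R_A B R_{A+B}
err2 = np.linalg.norm(lhs + RAB @ B @ RA)    # -R_{A+B} B R_A
```

Both residuals vanish to machine precision, illustrating that the two right hand sides of (6.6) agree.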
6.2. More on compact operators
Recall from Section 5.2 that we have introduced the set of compact operators $C(\mathfrak{H})$ as the closure of the set of all finite rank operators in $\mathcal{L}(\mathfrak{H})$. Before we can proceed, we need to establish some further results for such operators. We begin by investigating the spectrum of selfadjoint compact operators and show that the spectral theorem takes a particularly simple form in this case.

We introduce some notation first. The discrete spectrum $\sigma_d(A)$ is the set of all eigenvalues which are discrete points of the spectrum and whose corresponding eigenspace is finite dimensional. The complement of the discrete spectrum is called the essential spectrum $\sigma_{ess}(A) = \sigma(A)\backslash\sigma_d(A)$. If $A$ is selfadjoint we might equivalently set
\[
\sigma_d(A) = \{\lambda\in\sigma_p(A)\mid \operatorname{rank}(P_A((\lambda-\varepsilon,\lambda+\varepsilon))) < \infty \text{ for some } \varepsilon > 0\}, \tag{6.9}
\]
respectively
\[
\sigma_{ess}(A) = \{\lambda\in\mathbb{R}\mid \operatorname{rank}(P_A((\lambda-\varepsilon,\lambda+\varepsilon))) = \infty \text{ for all } \varepsilon > 0\}. \tag{6.10}
\]

Theorem 6.6 (Spectral theorem for compact operators). Suppose the operator $K$ is selfadjoint and compact. Then the spectrum of $K$ consists of an at most countable number of eigenvalues which can only cluster at $0$. Moreover, the eigenspace to each nonzero eigenvalue is finite dimensional. In other words,
\[
\sigma_{ess}(K)\subseteq\{0\}, \tag{6.11}
\]
where equality holds if and only if $\mathfrak{H}$ is infinite dimensional. In addition, we have
\[
K = \sum_{\lambda\in\sigma(K)} \lambda P_K(\{\lambda\}). \tag{6.12}
\]

Proof. It suffices to show $\operatorname{rank}(P_K((\lambda-\varepsilon,\lambda+\varepsilon))) < \infty$ for $0 < \varepsilon < |\lambda|$. Let $K_n$ be a sequence of finite rank operators such that $\|K - K_n\|\le 1/n$. If $\operatorname{Ran} P_K((\lambda-\varepsilon,\lambda+\varepsilon))$ is infinite dimensional, we can find a vector $\psi_n$ in this range such that $\|\psi_n\| = 1$ and $K_n\psi_n = 0$. But this yields a contradiction
\[
\frac{1}{n}\ge |\langle\psi_n, (K - K_n)\psi_n\rangle| = |\langle\psi_n, K\psi_n\rangle| \ge |\lambda| - \varepsilon > 0 \tag{6.13}
\]
by (4.2). $\square$

As a consequence we obtain the canonical form of a general compact operator.

Theorem 6.7 (Canonical form of compact operators). Let $K$ be compact. There exist orthonormal sets $\{\varphi_j\}$, $\{\hat\varphi_j\}$ and positive numbers $s_j = s_j(K)$ such that
\[
K = \sum_j s_j\langle\varphi_j,\cdot\rangle\hat\varphi_j, \qquad K^* = \sum_j s_j\langle\hat\varphi_j,\cdot\rangle\varphi_j. \tag{6.14}
\]
Note $K\varphi_j = s_j\hat\varphi_j$ and $K^*\hat\varphi_j = s_j\varphi_j$, and hence $K^*K\varphi_j = s_j^2\varphi_j$ and $KK^*\hat\varphi_j = s_j^2\hat\varphi_j$.

The numbers $s_j(K)^2 > 0$ are the nonzero eigenvalues of $KK^*$ respectively $K^*K$ (counted with multiplicity) and $s_j(K) = s_j(K^*) = s_j$ are called singular values of $K$. There are either finitely many singular values (if $K$ is finite rank) or they converge to zero.

Proof. By Lemma 5.5 $K^*K$ is compact and hence Theorem 6.6 applies. Let $\{\varphi_j\}$ be an orthonormal basis of eigenvectors for $P_{K^*K}((0,\infty))\mathfrak{H}$ and let $s_j^2$ be the eigenvalue corresponding to $\varphi_j$. Then, for any $\psi\in\mathfrak{H}$ we can write
\[
\psi = \sum_j \langle\varphi_j,\psi\rangle\varphi_j + \tilde\psi \tag{6.15}
\]
with $\tilde\psi\in\operatorname{Ker}(K^*K) = \operatorname{Ker}(K)$ (note $\operatorname{Ker}(K^*K) = \operatorname{Ker}(K)$ since $\|K\psi\|^2 = \langle\psi, K^*K\psi\rangle$). Then
\[
K\psi = \sum_j s_j\langle\varphi_j,\psi\rangle\hat\varphi_j, \tag{6.16}
\]
where $\hat\varphi_j = s_j^{-1}K\varphi_j$. By
\[
\langle\hat\varphi_j,\hat\varphi_k\rangle = (s_j s_k)^{-1}\langle K\varphi_j, K\varphi_k\rangle = (s_j s_k)^{-1}\langle K^*K\varphi_j, \varphi_k\rangle = s_j s_k^{-1}\langle\varphi_j,\varphi_k\rangle \tag{6.17}
\]
we see that $\{\hat\varphi_j\}$ are orthonormal, and the formula for $K^*$ follows by taking the adjoint of the formula for $K$ (Problem 6.2). $\square$

If $K$ is selfadjoint, then $\varphi_j = \sigma_j\hat\varphi_j$, $\sigma_j^2 = 1$, are the eigenvectors of $K$ and $\sigma_j s_j$ are the corresponding eigenvalues. Moreover, note that we have
\[
\|K\| = \max_j s_j(K). \tag{6.18}
\]

Finally, let me remark that there are a number of other equivalent definitions for compact operators.

Lemma 6.8. For $K\in\mathcal{L}(\mathfrak{H})$ the following statements are equivalent:
(i) $K$ is compact.
(i') $K^*$ is compact.
(ii) $A_n\in\mathcal{L}(\mathfrak{H})$ and $A_n\to A$ strongly implies $A_nK\to AK$.
(iii) $\psi_n\rightharpoonup\psi$ weakly implies $K\psi_n\to K\psi$ in norm.
(iv) $\psi_n$ bounded implies that $K\psi_n$ has a (norm) convergent subsequence.

Proof. (i) $\Leftrightarrow$ (i'). This is immediate from Theorem 6.7.

(i) $\Rightarrow$ (ii). Translating $A_n\to A_n - A$ it is no restriction to assume $A = 0$. Since $\|A_n\|\le M$, it suffices to consider the case where $K$ is finite rank. Then (by (6.14))
\[
\|A_nK\|^2 = \sup_{\|\psi\|=1}\sum_{j=1}^N s_j^2|\langle\varphi_j,\psi\rangle|^2\|A_n\hat\varphi_j\|^2 \le \sum_{j=1}^N s_j^2\|A_n\hat\varphi_j\|^2 \to 0.
\]

(ii) $\Rightarrow$ (iii). Again, replace $\psi_n\to\psi_n-\psi$ and assume $\psi = 0$. Choose $A_n = \langle\psi_n,\cdot\rangle\varphi$, $\|\varphi\| = 1$; then $\|K\psi_n\| = \|A_nK^*\|\to 0$.

(iii) $\Rightarrow$ (iv). If $\psi_n$ is bounded it has a weakly convergent subsequence by Lemma 1.12. Now apply (iii) to this subsequence.
(iv) $\Rightarrow$ (i). Let $\varphi_j$ be an orthonormal basis and set
\[
K_n = \sum_{j=1}^n \langle\varphi_j,\cdot\rangle K\varphi_j. \tag{6.19}
\]
Then
\[
\gamma_n = \|K - K_n\| = \sup_{\psi\in\operatorname{span}\{\varphi_j\}_{j=n}^\infty,\ \|\psi\|=1}\|K\psi\| \tag{6.20}
\]
is a decreasing sequence tending to a limit $\varepsilon\ge 0$. Moreover, we can find a sequence of unit vectors $\psi_n\in\operatorname{span}\{\varphi_j\}_{j=n}^\infty$ for which $\|K\psi_n\|\ge\varepsilon$. By assumption, $K\psi_n$ has a convergent subsequence which, since $\psi_n$ converges weakly to $0$, converges to $0$. Hence $\varepsilon$ must be $0$ and we are done. $\square$

The last condition explains the name compact. Moreover, note that you cannot replace $A_nK\to AK$ by $KA_n\to KA$ unless you additionally require $A_n$ to be normal (then this follows by taking adjoints — recall that only for normal operators taking adjoints is continuous with respect to strong convergence). Without the requirement that $A_n$ is normal the claim is wrong as the following example shows.

Example. Let $\mathfrak{H} = \ell^2(\mathbb{N})$, let $A_n$ be the operator which shifts each sequence $n$ places to the left, and let $K = \langle\delta_1,\cdot\rangle\delta_1$, where $\delta_1 = (1, 0, \dots)$. Then $\operatorname{s-lim} A_n = 0$ but $\|KA_n\| = 1$.

Problem 6.2. Deduce the formula for $K^*$ from the one for $K$ in (6.14).

6.3. Hilbert--Schmidt and trace class operators

Among the compact operators two special classes are of particular importance. The first one are integral operators
\[
K\psi(x) = \int_M K(x,y)\psi(y)\,d\mu(y), \qquad \psi\in L^2(M, d\mu), \tag{6.21}
\]
where $K(x,y)\in L^2(M\times M, d\mu\otimes d\mu)$. Such an operator is called a Hilbert--Schmidt operator. Using Cauchy--Schwarz,
\[
\int_M |K\psi(x)|^2\,d\mu(x) \le \int_M\Big(\int_M |K(x,y)|^2\,d\mu(y)\Big)\Big(\int_M |\psi(y)|^2\,d\mu(y)\Big)d\mu(x)
= \Big(\int_M\int_M |K(x,y)|^2\,d\mu(y)\,d\mu(x)\Big)\|\psi\|^2, \tag{6.22}
\]
we see that $K$ is bounded. Next, pick an orthonormal basis $\varphi_j(x)$ for $L^2(M, d\mu)$. Then $\varphi_i(x)\varphi_j(y)^*$ is an orthonormal basis for $L^2(M\times M, d\mu\otimes d\mu)$ and
\[
K(x,y) = \sum_{i,j} c_{i,j}\varphi_i(x)\varphi_j(y)^*, \qquad c_{i,j} = \langle\varphi_i, K\varphi_j\rangle, \tag{6.23}
\]
where, in particular,
\[
\sum_{i,j}|c_{i,j}|^2 = \int_M\int_M |K(x,y)|^2\,d\mu(x)\,d\mu(y) < \infty. \tag{6.24}
\]
Moreover,
\[
K\psi(x) = \sum_{i,j} c_{i,j}\langle\varphi_j,\psi\rangle\varphi_i(x) \tag{6.25}
\]
shows that $K$ can be approximated by finite rank operators (take finitely many terms in the sum) and is hence compact.

Using (6.14) we can also give a different characterization of Hilbert--Schmidt operators. Hence we will call a compact operator Hilbert--Schmidt if its singular values satisfy
\[
\sum_j s_j(K)^2 < \infty. \tag{6.26}
\]

Lemma 6.9. If $\mathfrak{H} = L^2(M, d\mu)$, then a compact operator $K$ is Hilbert--Schmidt if and only if it is of the form (6.21) with $K(x,y)\in L^2(M\times M, d\mu\otimes d\mu)$, and
\[
\sum_j s_j(K)^2 = \int_M\int_M |K(x,y)|^2\,d\mu(x)\,d\mu(y) \tag{6.27}
\]
in this case.

Proof. If $K$ is compact we can define approximating finite rank operators $K_n$ by considering only finitely many terms in (6.14):
\[
K_n = \sum_{j=1}^n s_j\langle\varphi_j,\cdot\rangle\hat\varphi_j. \tag{6.28}
\]
Then $K_n$ has the kernel $K_n(x,y) = \sum_{j=1}^n s_j\hat\varphi_j(x)\varphi_j(y)^*$ and
\[
\int_M\int_M |K_n(x,y)|^2\,d\mu(x)\,d\mu(y) = \sum_{j=1}^n s_j(K)^2. \tag{6.29}
\]
Now if one side converges, so does the other and, in particular, (6.27) holds in this case. $\square$
Since every Hilbert space is isomorphic to some $L^2(M, d\mu)$ we see that the Hilbert--Schmidt operators together with the norm
\[
\|K\|_2 = \Big(\sum_j s_j(K)^2\Big)^{1/2} \tag{6.30}
\]
form a Hilbert space (isomorphic to $L^2(M\times M, d\mu\otimes d\mu)$).

There is another useful characterization for identifying Hilbert--Schmidt operators:

Lemma 6.10. A compact operator $K$ is Hilbert--Schmidt if and only if
\[
\sum_n \|K\psi_n\|^2 < \infty \tag{6.31}
\]
for some orthonormal basis and
\[
\sum_n \|K\psi_n\|^2 = \|K\|_2^2 \tag{6.32}
\]
in this case.

Proof. This follows from
\[
\sum_n \|K\psi_n\|^2 = \sum_{n,j} |\langle\hat\varphi_j, K\psi_n\rangle|^2 = \sum_{n,j} |\langle K^*\hat\varphi_j, \psi_n\rangle|^2 = \sum_j \|K^*\hat\varphi_j\|^2 = \sum_j s_j(K)^2. \tag{6.33}
\]
$\square$

Corollary 6.11. The set of Hilbert--Schmidt operators forms a $*$-ideal in $\mathcal{L}(\mathfrak{H})$ and
\[
\|KA\|_2 \le \|A\|\,\|K\|_2 \quad\text{respectively}\quad \|AK\|_2 \le \|A\|\,\|K\|_2. \tag{6.34}
\]

Proof. Let $K$ be Hilbert--Schmidt and $A$ bounded. Then $AK$ is compact and
\[
\|AK\|_2^2 = \sum_n \|AK\psi_n\|^2 \le \|A\|^2\sum_n \|K\psi_n\|^2 = \|A\|^2\|K\|_2^2. \tag{6.35}
\]
For $KA$ just consider adjoints (note that $\|K\|_2 = \|K^*\|_2$ since $s_j(K) = s_j(K^*)$). $\square$

This approach can be generalized by defining
\[
\|K\|_p = \Big(\sum_j s_j(K)^p\Big)^{1/p} \tag{6.36}
\]
plus corresponding spaces
\[
J_p(\mathfrak{H}) = \{K\in C(\mathfrak{H})\mid \|K\|_p < \infty\}, \tag{6.37}
\]
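In finite dimensions the canonical form (6.14) is the singular value decomposition, and the identities around it are easy to confirm numerically: $s_j(K)^2$ are the eigenvalues of $K^*K$, $s_j(K) = s_j(K^*)$, $\|K\| = \max_j s_j$, and the Hilbert--Schmidt norm of Lemma 6.10 can be computed from any orthonormal basis. The random matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
K = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
s = np.linalg.svd(K, compute_uv=False)                    # singular values s_j(K)
eigs = np.sort(np.linalg.eigvalsh(K.conj().T @ K))[::-1]  # eigenvalues of K*K
err_eig = np.linalg.norm(s**2 - eigs)
err_adj = np.linalg.norm(s - np.linalg.svd(K.conj().T, compute_uv=False))
op_norm_err = abs(np.linalg.norm(K, 2) - s.max())         # ||K|| = max_j s_j, cf. (6.18)
# ||K||_2^2 = sum_n ||K psi_n||^2 for the ONB given by the columns of a unitary Q
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
hs_onb = sum(np.linalg.norm(K @ Q[:, n])**2 for n in range(5))
err_hs = abs(hs_onb - np.sum(s**2))
```

All four residuals vanish to machine precision, mirroring (6.18) and (6.32).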
which are known as Schatten $p$-classes. Note that $\|K\|\le\|K\|_p$ and that by $s_j(K) = s_j(K^*)$ we have $\|K\|_p = \|K^*\|_p$.

Lemma 6.12. The spaces $J_p(\mathfrak{H})$ together with the norm $\|\cdot\|_p$ are Banach spaces. Moreover,
\[
\|K\|_p = \sup\Big\{\Big(\sum_j |\langle\psi_j, K\varphi_j\rangle|^p\Big)^{1/p}\ \Big|\ \{\psi_j\},\{\varphi_j\}\ \text{ONS}\Big\}, \tag{6.40}
\]
where the sup is taken over all orthonormal systems.

Proof. The hard part is to prove (6.40). Using (6.14) and Cauchy--Schwarz we have
\[
|\langle\psi_n, K\varphi_n\rangle|^p = \Big|\sum_j s_j^{1/2}\langle\varphi_j,\varphi_n\rangle\, s_j^{1/2}\langle\psi_n,\hat\varphi_j\rangle\Big|^p
\le \Big(\sum_j s_j|\langle\varphi_j,\varphi_n\rangle|^2\Big)^{p/2}\Big(\sum_j s_j|\langle\psi_n,\hat\varphi_j\rangle|^2\Big)^{p/2}. \tag{6.42}
\]
Summing over $n$, a second appeal to Cauchy--Schwarz and interchanging the order of summation finally gives (using Bessel's inequality for both orthonormal systems)
\[
\sum_n |\langle\psi_n, K\varphi_n\rangle|^p
\le \Big(\sum_{n,j} s_j^p|\langle\varphi_j,\varphi_n\rangle|^2\Big)^{1/2}\Big(\sum_{n,j} s_j^p|\langle\psi_n,\hat\varphi_j\rangle|^2\Big)^{1/2}
\le \sum_j s_j^p. \tag{6.43}
\]
Since equality is attained for $\varphi_n = \varphi_n$, $\psi_n = \hat\varphi_n$ (the canonical sets from (6.14)), equation (6.40) holds.

From
\[
\Big(\sum_j |\langle\psi_j, (K_1+K_2)\varphi_j\rangle|^p\Big)^{1/p}
\le \Big(\sum_j |\langle\psi_j, K_1\varphi_j\rangle|^p\Big)^{1/p} + \Big(\sum_j |\langle\psi_j, K_2\varphi_j\rangle|^p\Big)^{1/p}
\le \|K_1\|_p + \|K_2\|_p \tag{6.44}
\]
we infer that $J_p(\mathfrak{H})$ is a vector space and the triangle inequality. The other requirements for a norm are obvious and it remains to check completeness. If $K_n$ is a Cauchy sequence with respect to $\|\cdot\|_p$ it is also a Cauchy sequence with respect to $\|\cdot\|$ ($\|K\|\le\|K\|_p$). Since $C(\mathfrak{H})$ is closed, there is a compact $K$ with $\|K - K_n\|\to 0$ and by $\|K_n\|_p\le C$ we have
\[
\Big(\sum_j |\langle\psi_j, K\varphi_j\rangle|^p\Big)^{1/p} \le C \tag{6.45}
\]
for any finite ONS. Since the right hand side is independent of the ONS (and in particular of the number of vectors), $K$ is in $J_p(\mathfrak{H})$. $\square$

The two most important cases are $p = 1$ and $p = 2$: $J_2(\mathfrak{H})$ is the space of Hilbert--Schmidt operators investigated in the previous section and $J_1(\mathfrak{H})$ is the space of trace class operators. Since Hilbert--Schmidt operators are easy to identify it is important to relate $J_1(\mathfrak{H})$ with $J_2(\mathfrak{H})$:

Lemma 6.13. An operator is trace class if and only if it can be written as the product of two Hilbert--Schmidt operators, $K = K_1K_2$.

Proof. By Cauchy--Schwarz we have
\[
\sum_n |\langle\varphi_n, K\psi_n\rangle| = \sum_n |\langle K_1^*\varphi_n, K_2\psi_n\rangle|
\le \Big(\sum_n \|K_1^*\varphi_n\|^2\Big)^{1/2}\Big(\sum_n \|K_2\psi_n\|^2\Big)^{1/2} \tag{6.46}
\]
\[
= \|K_1\|_2\|K_2\|_2 \tag{6.47}
\]
and hence $K = K_1K_2$ is trace class if both $K_1$ and $K_2$ are Hilbert--Schmidt operators. To see the converse, let $K$ be given by (6.14) and choose $K_1 = \sum_j \sqrt{s_j(K)}\langle\varphi_j,\cdot\rangle\hat\varphi_j$ respectively $K_2 = \sum_j \sqrt{s_j(K)}\langle\varphi_j,\cdot\rangle\varphi_j$. $\square$

Corollary 6.14. The set of trace class operators forms a $*$-ideal in $\mathcal{L}(\mathfrak{H})$ and
\[
\|KA\|_1 \le \|A\|\,\|K\|_1 \quad\text{respectively}\quad \|AK\|_1 \le \|A\|\,\|K\|_1. \tag{6.48}
\]

Proof. Write $K = K_1K_2$ with $K_1$, $K_2$ Hilbert--Schmidt and use Corollary 6.11. $\square$
Now we can also explain the name trace class:

Lemma 6.15. If $K$ is trace class, then for any ONB $\{\varphi_n\}$ the trace
\[
\operatorname{tr}(K) = \sum_n \langle\varphi_n, K\varphi_n\rangle \tag{6.49}
\]
is finite and independent of the ONB. Moreover, the trace is linear and if $K_1\le K_2$ are both trace class we have $\operatorname{tr}(K_1)\le\operatorname{tr}(K_2)$.

Proof. Let $\{\psi_m\}$ be another ONB. If we write $K = K_1K_2$ with $K_1$, $K_2$ Hilbert--Schmidt we have
\[
\sum_n \langle\varphi_n, K_1K_2\varphi_n\rangle = \sum_n \langle K_1^*\varphi_n, K_2\varphi_n\rangle
= \sum_{n,m}\langle K_1^*\varphi_n,\psi_m\rangle\langle\psi_m, K_2\varphi_n\rangle
= \sum_{m,n}\langle K_2^*\psi_m,\varphi_n\rangle\langle\varphi_n, K_1\psi_m\rangle
= \sum_m \langle\psi_m, K_2K_1\psi_m\rangle. \tag{6.50}
\]
Hence the trace is independent of the ONB and we even have $\operatorname{tr}(K_1K_2) = \operatorname{tr}(K_2K_1)$. The rest follows from linearity of the scalar product and $K_1\le K_2$ if and only if $\langle\varphi, K_1\varphi\rangle\le\langle\varphi, K_2\varphi\rangle$ for every $\varphi\in\mathfrak{H}$. $\square$

Clearly, for selfadjoint trace class operators the trace is the sum over all eigenvalues (counted with their multiplicity). To see this you just have to choose the ONB to consist of eigenfunctions. This is even true for all trace class operators and is known as Lidskij's trace theorem (see [17] or [6] for an easy to read introduction).

Problem 6.3. Let $\mathfrak{H} = \ell^2(\mathbb{N})$ and let $A$ be multiplication by a sequence $a(n)$. Show that $A\in J_p(\ell^2(\mathbb{N}))$ if and only if $a\in\ell^p(\mathbb{N})$. Furthermore, show that $\|A\|_p = \|a\|_p$ in this case.

Problem 6.4. Show that $A\ge 0$ is trace class if (6.49) is finite for one (and hence all) ONB. (Hint: $A$ is selfadjoint (why?) and $A = \sqrt{A}\sqrt{A}$.)

Problem 6.5. Show that for an orthogonal projection $P$ we have $\dim\operatorname{Ran}(P) = \operatorname{tr}(P)$, where we set $\operatorname{tr}(P) = \infty$ if (6.49) is infinite (for one and hence all ONB by the previous problem).
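The basis independence of the trace and the cyclicity $\operatorname{tr}(K_1K_2) = \operatorname{tr}(K_2K_1)$ from the proof of Lemma 6.15 can be checked directly in finite dimensions, where every operator is trace class (the random matrices are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
K = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
# Columns of a unitary Q form an ONB {psi_m}; sum_m <psi_m, K psi_m> = tr(Q* K Q)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
err_basis = abs(np.trace(K) - np.trace(Q.conj().T @ K @ Q))    # (6.49) is basis independent
K1 = rng.standard_normal((n, n))
K2 = rng.standard_normal((n, n))
err_cyclic = abs(np.trace(K1 @ K2) - np.trace(K2 @ K1))        # tr(K1 K2) = tr(K2 K1)
```

Both residuals vanish to machine precision, in agreement with (6.50).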
Problem 6.6. Show that for $K\in C(\mathfrak{H})$ we have
\[
|K| = \sum_j s_j\langle\varphi_j,\cdot\rangle\varphi_j,
\]
where $|K| = \sqrt{K^*K}$. Conclude that $\|K\|_p = (\operatorname{tr}(|K|^p))^{1/p}$.

Problem 6.7. Show that $K:\ell^2(\mathbb{N})\to\ell^2(\mathbb{N})$, $f(n)\mapsto\sum_{j\in\mathbb{N}} k(n+j)f(j)$ is Hilbert--Schmidt if $|k(n)|\le C(n)$, where $C(n)$ is decreasing and summable.

6.4. Relatively compact operators and Weyl's theorem

In the previous section we have seen that the sum of a selfadjoint and a symmetric operator is again selfadjoint if the perturbing operator is small. In this section we want to study the influence of perturbations on the spectrum. Our hope is that at least some parts of the spectrum remain invariant.

Note that if we add a multiple of the identity to $A$, we shift the entire spectrum. Hence, in general, we cannot expect a (relatively) bounded perturbation to leave any part of the spectrum invariant. Moreover, if $\lambda_0$ is in the discrete spectrum, we can easily remove this eigenvalue with a finite rank perturbation of arbitrary small norm. To show this, consider
\[
A + \varepsilon P_A(\{\lambda_0\}). \tag{6.51}
\]
Hence our only hope is that the remainder, namely the essential spectrum, is stable under finite rank perturbations. To show this, we first need a good criterion for a point to be in the essential spectrum of $A$.

Lemma 6.16 (Weyl criterion). A point $\lambda$ is in the essential spectrum of a selfadjoint operator $A$ if and only if there is a sequence $\psi_n$ such that $\|\psi_n\| = 1$, $\psi_n$ converges weakly to $0$, and $\|(A-\lambda)\psi_n\|\to 0$. Moreover, the sequence can be chosen to be orthonormal. Such a sequence is called a singular Weyl sequence.

Proof. Let $\psi_n$ be a singular Weyl sequence for the point $\lambda_0$. Then $\lambda_0\in\sigma(A)$ and hence it suffices to show $\lambda_0\notin\sigma_d(A)$. If $\lambda_0\in\sigma_d(A)$ we can find an $\varepsilon > 0$ such that $P_\varepsilon = P_A((\lambda_0-\varepsilon,\lambda_0+\varepsilon))$ is finite rank. Consider $\tilde\psi_n = P_\varepsilon\psi_n$. Clearly $\tilde\psi_n$ converges weakly to zero and $\|(A-\lambda_0)\tilde\psi_n\|\to 0$.
Moreover,
\[
\|\psi_n - \tilde\psi_n\|^2 = \int_{\mathbb{R}\backslash(\lambda_0-\varepsilon,\lambda_0+\varepsilon)} d\mu_{\psi_n}(\lambda)
\le \frac{1}{\varepsilon^2}\int_{\mathbb{R}\backslash(\lambda_0-\varepsilon,\lambda_0+\varepsilon)} (\lambda-\lambda_0)^2\,d\mu_{\psi_n}(\lambda)
\le \frac{1}{\varepsilon^2}\|(A-\lambda_0)\psi_n\|^2 \tag{6.52}
\]
and hence $\|\tilde\psi_n\|\to 1$. Thus $\varphi_n = \|\tilde\psi_n\|^{-1}\tilde\psi_n$ is also a singular Weyl sequence. But $\varphi_n$ is a sequence of unit length vectors which lives in a finite dimensional space and converges to $0$ weakly, a contradiction.

Conversely, if $\lambda\in\sigma_{ess}(A)$, consider $P_n = P_A\big([\lambda-\frac1n,\lambda-\frac{1}{n+1})\cup(\lambda+\frac{1}{n+1},\lambda+\frac1n]\big)$. If $\operatorname{rank}(P_{n_j}) > 0$ for an infinite subsequence $n_j$, pick $\psi_j\in\operatorname{Ran} P_{n_j}$ with $\|\psi_j\| = 1$. Otherwise $\lambda$ is an eigenvalue of infinite multiplicity and we can pick an orthonormal sequence $\psi_j$ in $\operatorname{Ker}(A-\lambda)$. In both cases $\psi_j$ is an orthonormal singular Weyl sequence. $\square$

Now let $K$ be a selfadjoint compact operator and $\psi_n$ a singular Weyl sequence for $A$. Then $\psi_n$ converges weakly to zero and hence
\[
\|(A + K - \lambda)\psi_n\| \le \|(A-\lambda)\psi_n\| + \|K\psi_n\| \to 0 \tag{6.53}
\]
since $\|(A-\lambda)\psi_n\|\to 0$ by assumption and $\|K\psi_n\|\to 0$ by Lemma 6.8 (iii). Hence $\sigma_{ess}(A)\subseteq\sigma_{ess}(A+K)$. Reversing the roles of $A+K$ and $A$ shows $\sigma_{ess}(A+K) = \sigma_{ess}(A)$.

Since we have shown that we can remove any point in the discrete spectrum by a selfadjoint finite rank operator, we obtain the following equivalent characterization of the essential spectrum.

Lemma 6.17. The essential spectrum of a selfadjoint operator $A$ is precisely the part which is invariant under rank-one perturbations. In particular,
\[
\sigma_{ess}(A) = \bigcap_{K\in C(\mathfrak{H}),\,K^*=K} \sigma(A + K). \tag{6.54}
\]

In fact, there is even a larger class of operators under which the essential spectrum is invariant.

Theorem 6.18 (Weyl). Suppose $A$ and $B$ are selfadjoint operators. If
\[
R_A(z) - R_B(z)\in C(\mathfrak{H}) \tag{6.55}
\]
for one $z\in\rho(A)\cap\rho(B)$, then
\[
\sigma_{ess}(A) = \sigma_{ess}(B). \tag{6.56}
\]

Proof. In fact, suppose $\lambda\in\sigma_{ess}(A)$ and let $\psi_n$ be a corresponding singular Weyl sequence. Then $(R_A(z) - \frac{1}{\lambda-z})\psi_n = \frac{1}{z-\lambda}R_A(z)(A-\lambda)\psi_n$ and thus $\|(R_A(z) - \frac{1}{\lambda-z})\psi_n\|\to 0$. Moreover, by our assumption we also have $\|(R_B(z) - \frac{1}{\lambda-z})\psi_n\|\to 0$ and thus $\|(B-\lambda)\varphi_n\|\to 0$, where $\varphi_n = R_B(z)\psi_n$. Since $\lim_{n\to\infty}\|\varphi_n\| = \lim_{n\to\infty}\|R_A(z)\psi_n\| = |\lambda-z|^{-1}\ne 0$, the normalized vectors $\|\varphi_n\|^{-1}\varphi_n$ form a singular Weyl sequence for $B$, showing $\lambda\in\sigma_{ess}(B)$. Hence $\sigma_{ess}(A)\subseteq\sigma_{ess}(B)$. Now interchange the roles of $A$ and $B$. $\square$
27 the resolvent diﬀerence of two selfadjoint extensions is a ﬁnite rank operator if the defect indices are ﬁnite. In particular. Proof. Lemma 6. Now interchange the roles of A and B. Let A be selfadjoint and suppose K is relatively compact with respect to A. showing λ ∈ σess (B). Suppose RA (z) − RB (z) ∈ C(H) (6. Then the Abound of K is zero. By Lemma 2. then f (A) − f (B) ∈ C(H) for any f ∈ C∞ (R). Proof.59) Let A and B be selfadjoint. then all selfadjoint extensions have the same essential spectrum. In addition. Hence the claim follows from Lemma 4.21. The set of all functions f for which the claim holds is a closed ∗subalgebra of C∞ (R) (with sup norm).130 6. For later use observe that set of all operators which are relatively compact with respect to A forms a linear space (since compact operators do) and relatively compact operators have Abound zero.18 applies if B = A + K. Lemma 6. (6. then this holds for all z ∈ ρ(A) ∩ ρ(B). As a ﬁrst consequence note the following result Theorem 6. Remember that we have called K relatively compact with respect to A if KRA (z) is compact (for one and hence for all z) and note that the the resolvent diﬀerence RA+K (z) − RA (z) is compact if K is relatively compact.57) for one z ∈ ρ(A) ∩ ρ(B).19. If the condition holds for one z it holds for all since we have (using both resolvent formulas) RA (z ) − RB (z ) = (1 − (z − z )RB (z ))(RA (z) − RB (z))(1 − (z − z )RA (z )).4. if A and B are selfadjoint. In addition.58) . where K is relatively compact.20. the following result is of interest. Theorem 6. (6. Perturbation theory for selfadjoint operators limn→∞ ϕn = limn→∞ RA (z)ψn = λ − z−1 = 0 (since (RA (z) − 1 1 λ−z )ψn = λ−z RA (z)(A − λ)ψn → 0) we obtain a singular Weyl sequence for B. Suppose A is symmetric with equal ﬁnite defect indices.
Proof. Write

KRA(λi) = (KRA(i)) ((A + i)RA(λi))   (6.60)

and observe that the first operator is compact and the second is normal and converges strongly to 0 as λ → ∞ (by the spectral theorem). Hence the claim follows from Lemma 6.3 and the discussion after Lemma 6.8 (since RA is normal). □

In addition, note the following result, which is a straightforward consequence of the second resolvent identity.

Lemma 6.22. Suppose A is self-adjoint and B is symmetric with A-bound less than one. If K is relatively compact with respect to A, then it is also relatively compact with respect to A + B.

Proof. Since B is A-bounded with A-bound less than one, we can choose a z ∈ ℂ such that ‖BRA(z)‖ < 1. Hence

BRA+B(z) = BRA(z)(I + BRA(z))⁻¹   (6.61)

shows that B is also (A + B)-bounded, and the result follows from

KRA+B(z) = KRA(z)(I − BRA+B(z))   (6.62)

since KRA(z) is compact and BRA+B(z) is bounded. □

Problem 6.7. Show that A = −d²/dx² + q(x), D(A) = H²(ℝ), is self-adjoint if q ∈ L∞(ℝ).

Problem 6.8. Show that if −u″(x) + q(x)u(x) = zu(x) has a solution for which u and u′ are bounded near +∞ (or −∞) but u is not square integrable near +∞ (or −∞), then z ∈ σess(A). (Hint: Use u to construct a Weyl sequence by restricting it to a compact set. Now modify your construction to get a singular Weyl sequence by observing that functions with disjoint support are orthogonal.)

6.5. Strong and norm resolvent convergence

Suppose An and A are self-adjoint operators. We say that An converges to A in norm respectively strong resolvent sense if

lim_{n→∞} RAn(z) = RA(z) respectively s-lim_{n→∞} RAn(z) = RA(z)   (6.63)

for one z ∈ Γ = ℂ\Σ, Σ = σ(A) ∪ ⋃_n σ(An).

Using the Stone–Weierstraß theorem we obtain as a first consequence

Theorem 6.23. Suppose An converges to A in norm resolvent sense. Then f(An) converges to f(A) in norm for any bounded continuous function f : Σ → ℂ with lim_{λ→−∞} f(λ) = lim_{λ→∞} f(λ). If An converges to A in strong resolvent sense, then f(An) converges to f(A) strongly for any bounded continuous function f : Σ → ℂ.
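Problem 6.8's cutoff construction can be made concrete numerically (an illustrative sketch only — the grid, the Gaussian cutoff, and the finite-difference Laplacian are ad-hoc choices, not part of the text): for q = 0 the solution u(x) = cos(x) of −u″ = u is bounded but not square integrable, and restricting it to ever larger regions produces an approximate Weyl sequence for A = −d²/dx² at λ = 1.

```python
import numpy as np

def weyl_ratio(n):
    """||(A - 1) psi_n|| / ||psi_n|| for psi_n = cos(x) times a cutoff of width ~ n,
    with A = -d^2/dx^2 discretized by central differences (ad-hoc grid)."""
    L = 10.0 * n
    x = np.arange(-L, L, 0.01)
    cutoff = np.exp(-x**2 / (2 * (L / 4)**2))     # smooth bump, widens with n
    psi = np.cos(x) * cutoff
    Apsi = -(np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / 0.01**2
    return np.linalg.norm(Apsi - psi) / np.linalg.norm(psi)

ratios = [weyl_ratio(n) for n in (1, 2, 4, 8)]
print(ratios)   # decreasing toward 0: psi_n becomes a Weyl sequence
```

The ratio decays like the inverse width of the cutoff, exactly because the error terms involve only derivatives of the slowly varying bump.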
Proof.
Lemma 6.64) can be made arbitrarily small since f (. To see the last claim let χn be a compactly supported continuous funcs tion (0 ≤ χm ≤ 1) which is one on the interval [−m. Suppose An converges to A in norm or strong resolvent sense for one z0 ∈ Γ.66) Finally we need some good criteria to check for norm respectively strong resolvent convergence. taking adjoints is continuous even with respect to strong convergence) and since it contains f (λ) = 1 and f (λ) = 1 ε λ−z0 this ∗algebra is dense by the Stone–Weierstaß theorem.) ≤ f rem 3. m]. ψ ∈ D(A) = D(An ).25.67) Proof. (6. Then An converges to A in norm resolvent sense if there are sequences an and bn converging to zero such that (An − A)ψ ≤ an ψ + bn Aψ .132 6. s s t ∈ R. and if all operators are semibounded e−tAn → e−tA . (6. then this holds for all z ∈ Γ.68) ≤ (an + bn ) ψ .) → I by Theo As a consequence note that the point z ∈ Γ is of no importance Corollary 6.1. The usual 3 shows that this ∗algebra is also closed.69) (6. Suppose An converges to A in strong resolvent sense. t ≥ 0. A be selfadjoint operators with D(An ) = D(A).26. Let An . From the second resolvent identity RAn (z) − RA (z) = RAn (z)(A − An )RA (z) we infer (RAn (i) − RA (i))ψ ≤ RAn (i) an RA (i)ψ + bn ARA (i)ψ (6. The set of functions for which the claim holds clearly forms a ∗algebra (since resolvents are normal. then eitAn → eitA .65) (6. Then f (An )χm (An ) → f (A)χm (A) by the ﬁrst part and hence (f (An ) − f (A))ψ ≤ f (An ) (1 − χm (An ))ψ (χm (An ) − χm (A))ψ (1 − χm (A))ψ s ∞ + f (An ) + f (A) + (f (An )χm (An ) − f (A)χm (A))ψ (6. Perturbation theory for selfadjoint operators Proof.24. and χm (. and that we have Corollary 6.
28. Then we can ﬁnd a λ ∈ σ(A) and some ε > 0 such that σ(An ) ∩ (λ − ε.27. λ + 2 ) and vanishes outside (λ − ε. Let An . Let An and A be selfadjoint operators. A be bounded selfadjoint operators with An → A.6. strong convergence implies strong resolvent convergence: Lemma 6. Suppose the ﬁrst claim were wrong. Similarly. Then f (An ) = 0 and hence f (A)ψ = lim f (An )ψ = 0 for every ψ. if no domain problems get in the way. here is the answer: it is equivalent to strong resolvent convergence.29. λ + ε). A be selfadjoint operators. On the other hand. norm convergence implies norm resolvent convergence: Corollary 6. If you wonder why we did not deﬁne weak resolvent convergence. By RAn (z) RA (z) we have also RAn (z)∗ the ﬁrst resolvent identity RA (z)∗ and thus by RAn (z)ψ 2 − RA (z)ψ 2 = ψ. then An converges to A in norm resolvent sense. Proof. Let An . λ + ε) = ∅. since D0 is a core.5.30. λ + 2 )) implying f (A)ψ = ψ. Choose a bounded ε ε continuous function f which is one on (λ − 2 . The rest follows from Lemma 1.11 (iv). Lemma 6. Now what can we say about the spectrum? Theorem 6. a contradiction. then also slimn→∞ RAn (z) = RA (z). Proof. (6. (RAn (z) − RAn (z ∗ ) + RA (z) − RA (z ∗ ))ψ → 0.13. Proof. If An converges to A in strong resolvent sense we have σ(A) ⊆ limn→∞ σ(An ). Strong and norm resolvent convergence 133 and hence RAn (i) − RA (i) ≤ an + bn → 0. Suppose wlimn→∞ RAn (z) = RA (z) for some z ∈ Γ. RAn (z ∗ )RAn (z)ψ − RA (z ∗ )RA (z)ψ 1 = ψ. Using the second resolvent identity we have (RAn (i) − RA (i))ψ ≤ (A − An )RA (i)ψ → 0 (6. If An converges to A in norm resolvent sense we have σ(A) = limn→∞ σ(An ).71) z − z∗ Together with RAn (z)ψ RA (z)ψ we have RAn (z)ψ → RA (z)ψ by virtue of Lemma 1. since λ ∈ σ(A) there is a nonzero ψ ∈ RanPA ((λ − ε ε 2 . In particular. 
Then An converges to A in strong resolvent sense if there is a core D0 of A such that for any ψ ∈ D0 we have ψ ∈ D(An) for n sufficiently large and Anψ → Aψ.
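The quantitative norm-resolvent criterion above — ‖(An − A)ψ‖ ≤ an‖ψ‖ + bn‖Aψ‖ forces ‖RAn(i) − RA(i)‖ ≤ an + bn — can be watched on finite self-adjoint matrices, where resolvents are plain matrix inverses. A toy sketch; the matrices A, B and the seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 40
A = np.diag(np.linspace(-5.0, 5.0, d))               # a fixed self-adjoint "A"
B = rng.standard_normal((d, d)); B = (B + B.T) / 2   # a bounded symmetric perturbation

def resolvent(M, z):
    return np.linalg.inv(M - z * np.eye(d))

gaps, bounds = [], []
for n in (1, 10, 100):
    An = A + B / n                                   # here a_n = ||B||/n and b_n = 0
    gaps.append(np.linalg.norm(resolvent(An, 1j) - resolvent(A, 1j), 2))
    bounds.append(np.linalg.norm(B, 2) / n)

print(gaps)     # each gap lies below the corresponding bound a_n + b_n
print(bounds)
```

The inequality holds here because ‖R(i)‖ ≤ 1 for any self-adjoint matrix, mirroring the estimate in the proof.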
134 6. Perturbation theory for self-adjoint operators

To see the second claim, recall that the norm of RA(z) is just one over the distance from the spectrum. In particular, λ ∉ σ(A) if and only if ‖RA(λ + i)‖ < 1. So λ ∉ σ(A) implies ‖RA(λ + i)‖ < 1, which implies ‖RAn(λ + i)‖ < 1 for n sufficiently large, which implies λ ∉ σ(An) for n sufficiently large. □

Note that the spectrum can contract if we only have convergence in strong resolvent sense: Let An be multiplication by (1/n)x in L²(ℝ). Then An converges to 0 in strong resolvent sense, but σ(An) = ℝ and σ(0) = {0}.

Lemma 6.31. Suppose An converges in strong resolvent sense to A. If PA({λ}) = 0, then

s-lim_{n→∞} PAn((−∞, λ)) = s-lim_{n→∞} PAn((−∞, λ]) = PA((−∞, λ)) = PA((−∞, λ]).   (6.72)

Proof. The idea is to approximate χ(−∞,λ) by a continuous function f with 0 ≤ f ≤ χ(−∞,λ). Then

‖(PA((−∞, λ)) − PAn((−∞, λ)))ψ‖ ≤ ‖(PA((−∞, λ)) − f(A))ψ‖ + ‖(f(A) − f(An))ψ‖ + ‖(f(An) − PAn((−∞, λ)))ψ‖.   (6.73)

The first term can be made arbitrarily small if we let f converge pointwise to χ(−∞,λ) (Theorem 3.1) and the same is true for the second if we choose n large (Theorem 6.23). However, the third term can only be made small for fixed n. To overcome this problem let us choose another continuous function g with χ(−∞,λ] ≤ g ≤ 1. Then

‖(f(An) − PAn((−∞, λ)))ψ‖ ≤ ‖(g(An) − f(An))ψ‖,   (6.74)

since f ≤ χ(−∞,λ) ≤ χ(−∞,λ] ≤ g. Furthermore,

‖(g(An) − f(An))ψ‖ ≤ ‖(g(An) − g(A))ψ‖ + ‖(g(A) − f(A))ψ‖ + ‖(f(A) − f(An))ψ‖   (6.75)

and now all terms are under control: the first and third can be made small by choosing n large, and the second can be made arbitrarily small by letting f and g converge pointwise to χ(−∞,λ) and χ(−∞,λ], respectively, since PA({λ}) = 0. Since we can replace PAn((−∞, λ)) by PAn((−∞, λ]) in all calculations, we are done. □

Using PA((λ0, λ1)) = PA((−∞, λ1)) − PA((−∞, λ0]) we also obtain

Corollary 6.32. Suppose An converges in strong resolvent sense to A. If PA({λ0}) = PA({λ1}) = 0, then

s-lim_{n→∞} PAn((λ0, λ1)) = s-lim_{n→∞} PAn([λ0, λ1]) = PA((λ0, λ1)) = PA([λ0, λ1]).   (6.76)
6.5. Strong and norm resolvent convergence 135

Example. The following example shows that the requirement PA({λ}) = 0 is crucial, even if we have bounded operators and norm convergence. In fact, let H = ℂ² and

An = (1/n) (1 0; 0 −1).   (6.77)

Then An → 0 in norm and

PAn((−∞, 0)) = (0 0; 0 1),   (6.78)

but P0((−∞, 0)) = 0 and P0((−∞, 0]) = I.

Problem 6.9. Show that for self-adjoint operators, strong resolvent convergence is equivalent to convergence with respect to the metric

d(A, B) = Σ_{n∈ℕ} 2⁻ⁿ ‖(RA(i) − RB(i))φn‖,   (6.79)

where {φn}_{n∈ℕ} is some (fixed) ONB.

Problem 6.10 (Weak convergence of spectral measures). Suppose An → A in strong resolvent sense and let μψ, μψ,n be the corresponding spectral measures. Show that

∫ f(λ) dμψ,n(λ) → ∫ f(λ) dμψ(λ)   (6.80)

for every bounded continuous f. Give a counterexample when f is not continuous.
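The 2×2 example above is easy to verify numerically. The helper below computes spectral projections of a Hermitian matrix by diagonalization (an illustrative sketch, not part of the text):

```python
import numpy as np

def proj_below(A, lam):
    """Spectral projection P_A((-inf, lam)) of a Hermitian matrix, via eigh."""
    w, V = np.linalg.eigh(A)
    Vs = V[:, w < lam]
    return Vs @ Vs.conj().T

n = 1000
An = np.diag([1.0, -1.0]) / n          # An -> 0 in norm as n grows
Pn = proj_below(An, 0.0)               # projects onto the eigenvalue -1/n
P_limit = proj_below(np.zeros((2, 2)), 0.0)

print(Pn)        # diag(0, 1) for every n
print(P_limit)   # the zero matrix: P_0((-inf, 0)) = 0
```

The projections PAn((−∞, 0)) stay equal to diag(0, 1) for every n, so they cannot converge to P0((−∞, 0)) = 0 — precisely the failure mode when PA({λ}) ≠ 0.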
Part 2

Schrödinger Operators
Chapter 7

The free Schrödinger operator

7.1. The Fourier transform

We first review some basic facts concerning the Fourier transform which will be needed in the following section.

Let C∞(ℝⁿ) be the set of all complex-valued functions which have partial derivatives of arbitrary order. For f ∈ C∞(ℝⁿ) and α ∈ ℕ₀ⁿ we set

∂αf = ∂_{x₁}^{α₁} ··· ∂_{xₙ}^{αₙ} f, xα = x₁^{α₁} ··· xₙ^{αₙ}, |α| = α₁ + ··· + αₙ.   (7.1)

An element α ∈ ℕ₀ⁿ is called a multi-index and |α| is called its order. Recall the Schwartz space

S(ℝⁿ) = {f ∈ C∞(ℝⁿ) | sup_x |xα (∂βf)(x)| < ∞, α, β ∈ ℕ₀ⁿ},   (7.2)

which is dense in L²(ℝⁿ) (since C∞_c(ℝⁿ) ⊂ S(ℝⁿ) is). For f ∈ S(ℝⁿ) we define

F(f)(p) ≡ f̂(p) = (2π)^{−n/2} ∫_{ℝⁿ} e^{−ipx} f(x) dⁿx.   (7.3)

Then it is an exercise in partial integration to prove

Lemma 7.1. For any multi-index α ∈ ℕ₀ⁿ and any f ∈ S(ℝⁿ) we have

(∂αf)^(p) = (ip)^α f̂(p), (xα f(x))^(p) = i^{|α|} ∂α f̂(p).   (7.4)

Hence we will sometimes write pf(x) for −i∂f(x), where ∂ = (∂₁, ..., ∂ₙ) is the gradient.
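Lemma 7.1 can be sanity-checked numerically by approximating the integral in (7.3) with a Riemann sum on a truncated grid (an ad-hoc discretization chosen for this sketch; the test function is arbitrary):

```python
import numpy as np

def fhat(f, p, L=15.0, N=60000):
    """(2*pi)^(-1/2) * integral of exp(-i p x) f(x) dx, by a Riemann sum."""
    x = np.linspace(-L, L, N, endpoint=False)
    dx = x[1] - x[0]
    return np.sum(np.exp(-1j * p * x) * f(x)) * dx / np.sqrt(2 * np.pi)

f  = lambda x: np.exp(-x**2 / 2) * np.cos(x)          # some Schwartz function
df = lambda x: -x * np.exp(-x**2 / 2) * np.cos(x) - np.exp(-x**2 / 2) * np.sin(x)

errs = [abs(fhat(df, p) - 1j * p * fhat(f, p)) for p in (0.3, 1.0, 2.5)]
print(errs)   # all tiny: (f')^(p) = (ip) f^(p)
```

For rapidly decaying smooth functions the Riemann sum converges extremely fast, so the identity (∂f)^ = (ip)f̂ is reproduced essentially to machine precision.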
140 7. The free Schrödinger operator

In particular, F maps S(ℝⁿ) into itself. Another useful property is the convolution formula.

Lemma 7.2. The Fourier transform of the convolution

(f ∗ g)(x) = ∫_{ℝⁿ} f(y) g(x − y) dⁿy = ∫_{ℝⁿ} f(x − y) g(y) dⁿy   (7.5)

of two functions f, g ∈ S(ℝⁿ) is given by

(f ∗ g)^(p) = (2π)^{n/2} f̂(p) ĝ(p).   (7.6)

Proof. We compute

(f ∗ g)^(p) = (2π)^{−n/2} ∫_{ℝⁿ} e^{−ipx} ∫_{ℝⁿ} f(y) g(x − y) dⁿy dⁿx
= ∫_{ℝⁿ} e^{−ipy} f(y) (2π)^{−n/2} ∫_{ℝⁿ} e^{−ip(x−y)} g(x − y) dⁿx dⁿy
= ∫_{ℝⁿ} e^{−ipy} f(y) ĝ(p) dⁿy = (2π)^{n/2} f̂(p) ĝ(p),   (7.7)

where we have used Fubini's theorem. □

Next, we want to compute the inverse of the Fourier transform. For this the following lemma will be needed.

Lemma 7.3. We have e^{−zx²/2} ∈ S(ℝⁿ) for Re(z) > 0 and

F(e^{−zx²/2})(p) = z^{−n/2} e^{−p²/(2z)}.   (7.8)

Here z^{n/2} has to be understood as (√z)ⁿ, where the branch cut of the root is chosen along the negative real axis.

Proof. Due to the product structure of the exponential, one can treat each coordinate separately, reducing the problem to the case n = 1.

Let φz(x) = exp(−zx²/2). Then φ′z(x) + zxφz(x) = 0 and hence i(p φ̂z(p) + z φ̂′z(p)) = 0. Thus φ̂z(p) = c φ_{1/z}(p) and (Problem 7.1)

c = φ̂z(0) = (1/√(2π)) ∫_ℝ exp(−zx²/2) dx = 1/√z,   (7.9)

at least for z > 0. However, since the integral is holomorphic for Re(z) > 0, this holds for all z with Re(z) > 0 if we choose the branch cut of the root along the negative real axis. □

Now we can show
7.1. The Fourier transform 141

Theorem 7.4. The Fourier transform F : S(ℝⁿ) → S(ℝⁿ) is a bijection. Its inverse is given by

F⁻¹(g)(x) ≡ ǧ(x) = (2π)^{−n/2} ∫_{ℝⁿ} e^{ipx} g(p) dⁿp.   (7.10)

We have F²(f)(x) = f(−x) and thus F⁴ = I.

Proof. It suffices to show F²(f)(x) = f(−x). Consider φz(x) from the proof of the previous lemma and observe F²(φz)(x) = φz(−x). Moreover, using Fubini this even implies F²(fε)(x) = fε(−x) for any ε > 0, where

fε(x) = (1/(√(2π) ε)ⁿ) ∫_{ℝⁿ} φ_{1/ε²}(x − y) f(y) dⁿy.   (7.11)

Since lim_{ε↓0} fε(x) = f(x) for every x ∈ ℝⁿ (Problem 7.2), we infer from dominated convergence F²(f)(x) = lim_{ε↓0} F²(fε)(x) = lim_{ε↓0} fε(−x) = f(−x). □

From Fubini's theorem we also obtain Parseval's identity

∫_{ℝⁿ} |f̂(p)|² dⁿp = (2π)^{−n/2} ∫_{ℝⁿ} ∫_{ℝⁿ} f(x)* f̂(p) e^{ipx} dⁿp dⁿx = ∫_{ℝⁿ} |f(x)|² dⁿx   (7.12)

for f ∈ S(ℝⁿ), and thus we can extend F to a unitary operator:

Theorem 7.5. The Fourier transform F extends to a unitary operator F : L²(ℝⁿ) → L²(ℝⁿ). Its spectrum satisfies

σ(F) = {z ∈ ℂ | z⁴ = 1} = {1, −1, i, −i}.   (7.13)

Proof. It remains to compute the spectrum. In fact, if ψn is a Weyl sequence, then (F² + z²)(F + z)(F − z)ψn = (F⁴ − z⁴)ψn = (1 − z⁴)ψn → 0 implies z⁴ = 1. Hence σ(F) ⊆ {z ∈ ℂ | z⁴ = 1}. We defer the proof for equality to Section 8.3, where we will explicitly compute an orthonormal basis of eigenfunctions. □

Lemma 7.1 also allows us to extend differentiation to a larger class. Let us introduce the Sobolev space

H^r(ℝⁿ) = {f ∈ L²(ℝⁿ) | |p|^r f̂(p) ∈ L²(ℝⁿ)}.   (7.14)

We will abbreviate

∂αf = ((ip)^α f̂(p))ˇ, f ∈ H^r(ℝⁿ), |α| ≤ r,   (7.15)

which implies

∫_{ℝⁿ} g(x)(∂αf)(x) dⁿx = (−1)^{|α|} ∫_{ℝⁿ} (∂αg)(x) f(x) dⁿx   (7.16)
for f ∈ H^r(ℝⁿ) and g ∈ S(ℝⁿ). That is, ∂αf is the derivative of f in the sense of distributions.

Finally, we have the Riemann–Lebesgue lemma.

Lemma 7.6 (Riemann–Lebesgue). Let C∞(ℝⁿ) denote the Banach space of all continuous functions f : ℝⁿ → ℂ which vanish at ∞, equipped with the sup norm. Then the Fourier transform is a bounded map from L¹(ℝⁿ) into C∞(ℝⁿ) satisfying

‖f̂‖∞ ≤ (2π)^{−n/2} ‖f‖₁.   (7.17)

Proof. Clearly we have f̂ ∈ C∞(ℝⁿ) if f ∈ S(ℝⁿ). Moreover, the estimate

sup_p |f̂(p)| ≤ (2π)^{−n/2} sup_p ∫_{ℝⁿ} |e^{−ipx} f(x)| dⁿx = (2π)^{−n/2} ∫_{ℝⁿ} |f(x)| dⁿx   (7.18)

shows f̂ ∈ C∞(ℝⁿ) for arbitrary f ∈ L¹(ℝⁿ), since S(ℝⁿ) is dense in L¹(ℝⁿ). □

Problem 7.1. Show that ∫_ℝ exp(−x²/2) dx = √(2π). (Hint: Square the integral and evaluate it using polar coordinates.)

Problem 7.2. Extend Lemma 0.32 to the case u, f ∈ S(ℝⁿ) (compare Problem 0.17).

7.2. The free Schrödinger operator

In Section 2.1 we have seen that the Hilbert space corresponding to one particle in ℝ³ is L²(ℝ³). More generally, the Hilbert space for N particles in ℝ^d is L²(ℝⁿ), n = Nd. The corresponding non-relativistic Hamilton operator, if the particles do not interact, is given by

H₀ = −Δ,   (7.19)

where Δ is the Laplace operator

Δ = Σ_{j=1}^{n} ∂²/∂x_j².   (7.20)

Our first task is to find a good domain such that H₀ is a self-adjoint operator. By Lemma 7.1 we have

−Δψ(x) = (p² ψ̂(p))ˇ(x), ψ ∈ H²(ℝⁿ),   (7.21)

and hence the operator

H₀ψ = −Δψ, D(H₀) = H²(ℝⁿ),   (7.22)
7.2. The free Schrödinger operator 143

is unitarily equivalent to the maximally defined multiplication operator

(F H₀ F⁻¹)φ(p) = p² φ(p), D(p²) = {φ ∈ L²(ℝⁿ) | p²φ(p) ∈ L²(ℝⁿ)}.   (7.23)

Theorem 7.7. The free Schrödinger operator H₀ is self-adjoint and its spectrum is characterized by

σ(H₀) = σac(H₀) = [0, ∞), σsc(H₀) = σpp(H₀) = ∅.   (7.24)

Proof. It suffices to show that dμψ is purely absolutely continuous for every ψ. First observe that

⟨ψ, RH₀(z)ψ⟩ = ⟨ψ̂, Rp²(z)ψ̂⟩ = ∫_{ℝⁿ} |ψ̂(p)|²/(p² − z) dⁿp = ∫_ℝ (1/(r² − z)) dμ̃ψ(r),   (7.25)

where

dμ̃ψ(r) = χ_{[0,∞)}(r) r^{n−1} (∫_{S^{n−1}} |ψ̂(rω)|² d^{n−1}ω) dr.   (7.26)

Hence, after a change of coordinates, we have

⟨ψ, RH₀(z)ψ⟩ = ∫_ℝ (1/(λ − z)) dμψ(λ),   (7.27)

where

dμψ(λ) = (1/2) χ_{[0,∞)}(λ) λ^{n/2−1} (∫_{S^{n−1}} |ψ̂(√λ ω)|² d^{n−1}ω) dλ,   (7.28)

proving the claim. □

Finally, we note that the compactly supported smooth functions are a core for H₀.

Lemma 7.8. The set C∞_c(ℝⁿ) = {f ∈ S(ℝⁿ) | supp(f) is compact} is a core for H₀.

Proof. It is not hard to see that S(ℝⁿ) is a core (Problem 7.3), and hence it suffices to show that the closure of H₀ restricted to C∞_c(ℝⁿ) contains H₀ restricted to S(ℝⁿ). To see this, let φ(x) ∈ C∞_c(ℝⁿ) be one for |x| ≤ 1 and vanish for |x| ≥ 2. Set φn(x) = φ(x/n); then ψn(x) = φn(x)ψ(x) is in C∞_c(ℝⁿ) for every ψ ∈ S(ℝⁿ), and ψn → ψ respectively Δψn → Δψ. □

Note also that the quadratic form of H₀ is given by

qH₀(ψ) = Σ_{j=1}^{n} ∫_{ℝⁿ} |∂jψ(x)|² dⁿx, ψ ∈ Q(H₀) = H¹(ℝⁿ).   (7.29)

Problem 7.3. Show that S(ℝⁿ) is a core for H₀. (Hint: Show that the closure of H₀ restricted to S(ℝⁿ) contains H₀.)
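The change to polar coordinates behind (7.25)–(7.28) can be checked numerically in the case n = 2, where the factor λ^{n/2−1} equals 1: both sides below approximate ⟨ψ, RH₀(z)ψ⟩ for a Gaussian ψ̂ (the grids and the particular ψ̂ and z are arbitrary choices for this sketch):

```python
import numpy as np

psihat = lambda p1, p2: np.exp(-((p1 - 1.0)**2 + p2**2) / 2)   # a Gaussian psi-hat
z = 1.0 + 0.5j                                                 # z off [0, inf)

# direct 2d integral of |psihat|^2 / (p^2 - z)
g = np.linspace(-8, 8, 801)
dg = g[1] - g[0]
P1, P2 = np.meshgrid(g, g)
lhs = np.sum(np.abs(psihat(P1, P2))**2 / (P1**2 + P2**2 - z)) * dg**2

# the same integral in polar coordinates, lambda = r^2 (n = 2)
lam = np.linspace(1e-6, 64.0, 8000)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
R = np.sqrt(lam)[:, None]
ring = np.sum(np.abs(psihat(R * np.cos(theta), R * np.sin(theta)))**2, axis=1) * (theta[1] - theta[0])
rhs = np.sum(0.5 * ring / (lam - z)) * (lam[1] - lam[0])

rel_err = abs(lhs - rhs) / abs(lhs)
print(rel_err)   # small: the two parametrizations agree
```

The factor 1/2 in the radial sum is exactly the Jacobian λ^{n/2−1}/2 of the substitution λ = r² for n = 2.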
144 7. The free Schrödinger operator

Problem 7.4. Show that {ψ ∈ S(ℝ) | ψ(0) = 0} is dense but not a core for H₀ = −d²/dx².

7.3. The time evolution in the free case

Now let us look at the time evolution. We have

e^{−itH₀}ψ(x) = F⁻¹ (e^{−itp²} ψ̂(p)).   (7.30)

The right hand side is a product and hence our operator should be expressible as an integral operator via the convolution formula. However, since e^{−itp²} is not in L², a more careful analysis is needed. Consider

fε(p²) = e^{−(it+ε)p²}, ε > 0.   (7.31)

Then fε(H₀)ψ → e^{−itH₀}ψ by Theorem 3.1. Moreover, by Lemma 7.3 and the convolution formula we have

fε(H₀)ψ(x) = (4π(it + ε))^{−n/2} ∫_{ℝⁿ} e^{−|x−y|²/(4(it+ε))} ψ(y) dⁿy   (7.32)

and hence

e^{−itH₀}ψ(x) = (4πit)^{−n/2} ∫_{ℝⁿ} e^{i|x−y|²/(4t)} ψ(y) dⁿy   (7.33)

for t ≠ 0 and ψ ∈ L¹ ∩ L². For general ψ ∈ L² the integral has to be understood as a limit.

Using this explicit form, it is not hard to draw some immediate consequences. For example, if ψ ∈ L²(ℝⁿ) ∩ L¹(ℝⁿ), then ψ(t) ∈ C(ℝⁿ) for t ≠ 0 (use dominated convergence and continuity of the exponential) and satisfies

‖ψ(t)‖∞ ≤ (4π|t|)^{−n/2} ‖ψ(0)‖₁.   (7.34)

Thus we have spreading of wave functions in this case. Moreover, it is even possible to determine the asymptotic form of the wave function for large t as follows. Observe

e^{−itH₀}ψ(x) = (4πit)^{−n/2} e^{i x²/(4t)} ∫_{ℝⁿ} e^{i y²/(4t)} ψ(y) e^{−i xy/(2t)} dⁿy = (1/(2it))^{n/2} e^{i x²/(4t)} (e^{i y²/(4t)} ψ(y))^∧ (x/(2t)).   (7.35)

Moreover, since exp(i y²/(4t))ψ(y) → ψ(y) in L² as t → ∞ (dominated convergence), we obtain
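The dispersive behaviour just described — conservation of the L² norm, decay of the sup norm as in (7.34), and spreading — can be observed by implementing e^{−itH₀} = F⁻¹ e^{−itp²} F with the FFT on a large periodic grid (an ad-hoc discretization; the grid parameters and the Gaussian initial state are arbitrary choices for this sketch):

```python
import numpy as np

N, L = 4096, 400.0
x = (np.arange(N) - N // 2) * (L / N)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)        # normalized Gaussian

def evolve(t):
    """e^{-itH0} psi0 = F^{-1} e^{-itp^2} F psi0, approximated via the FFT."""
    return np.fft.ifft(np.exp(-1j * t * k**2) * np.fft.fft(psi0))

norms, sups, probs = [], [], []
for t in (1.0, 4.0, 16.0):
    psit = evolve(t)
    norms.append(np.sum(np.abs(psit)**2) * dx)                 # conserved
    sups.append(np.abs(psit).max())                            # decays like t^{-1/2}
    probs.append(np.sum(np.abs(psit[np.abs(x) < 5])**2) * dx)  # mass in |x| < 5 shrinks
print(norms, sups, probs)
```

The shrinking mass in any fixed bounded region is exactly the "escape to infinity" statement made precise via the RAGE theorem below.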
7.3. The time evolution in the free case 145

Lemma 7.9. For any ψ ∈ L²(ℝⁿ) we have

e^{−itH₀}ψ(x) − (1/(2it))^{n/2} e^{i x²/(4t)} ψ̂(x/(2t)) → 0   (7.36)

in L² as t → ∞.

Next we want to apply the RAGE theorem in order to show that for any initial condition, a particle will escape to infinity.

Lemma 7.10. Let g(x) be the multiplication operator by g and let f(p) be the operator given by f(p)ψ(x) = F⁻¹(f(p)ψ̂(p))(x). Denote by L∞∞(ℝⁿ) the bounded Borel functions which vanish at infinity. Then

f(p)g(x) and g(x)f(p)   (7.37)

are compact if f, g ∈ L∞∞(ℝⁿ) and (extend to) Hilbert–Schmidt operators if f, g ∈ L²(ℝⁿ).

Proof. By symmetry it suffices to consider g(x)f(p). Let f, g ∈ L². Then

g(x)f(p)ψ(x) = (2π)^{−n/2} ∫_{ℝⁿ} g(x) f̌(x − y) ψ(y) dⁿy   (7.38)

shows that g(x)f(p) is Hilbert–Schmidt, since g(x)f̌(x − y) ∈ L²(ℝⁿ × ℝⁿ). If f, g are bounded, then the functions fR(p) = χ_{{p : p² ≤ R}}(p)f(p) and gR(x) = χ_{{x : x² ≤ R}}(x)g(x) are in L². Thus gR(x)fR(p) is compact and tends to g(x)f(p) in norm, since f, g vanish at infinity. □

In particular, this lemma implies that

χΩ (H₀ + i)⁻¹   (7.39)

is compact if Ω ⊆ ℝⁿ is bounded, and hence

lim_{t→∞} ‖χΩ e^{−itH₀} ψ‖² = 0   (7.40)

for any ψ ∈ L²(ℝⁿ) and any bounded subset Ω of ℝⁿ. In other words, the particle will eventually escape to infinity, since the probability of finding the particle in any bounded set tends to zero. (If ψ ∈ L¹(ℝⁿ) this of course also follows from (7.34).)

7.4. The resolvent and Green's function

Now let us compute the resolvent of H₀. We will try to use a similar approach as for the time evolution in the previous section. However, since it is highly nontrivial to compute the inverse Fourier transform of exp(−εp²)(p² − z)⁻¹ directly, we will use a small ruse.
The resolvent and Green’s function 147 If n is odd. Verify (7. r) = √ e− −z r . n = 3. r) = e 4πr Problem 7. n = 1. we have the case of spherical Bessel functions which can be expressed in terms of elementary functions. (7.5.4. For example. (7. we have √ 1 G0 (z.50) 2 −z and 1 −√−z r .7. .43) directly in the case n = 1.51) G0 (z.
Chapter 8

Algebraic methods

8.1. Position and momentum

Apart from the Hamiltonian H₀, there are several other important observables associated with a single particle in three dimensions. Using the commutation relations between these observables, many important consequences about these observables can be derived.

First consider the one-parameter unitary group

(Uj(t)ψ)(x) = e^{−itxj} ψ(x), 1 ≤ j ≤ 3.   (8.1)

For ψ ∈ S(ℝ³) we compute

lim_{t→0} i (e^{−itxj}ψ(x) − ψ(x))/t = xj ψ(x)   (8.2)

and hence the generator is the multiplication operator by the j-th coordinate function. By Corollary 5.3 it is essentially self-adjoint on ψ ∈ S(ℝ³). It is custom to combine all three operators into one vector-valued operator x, which is known as the position operator. Moreover, it is not hard to see that the spectrum of xj is purely absolutely continuous and given by σ(xj) = ℝ. In fact, let φ(x) be an orthonormal basis for L²(ℝ). Then φi(x₁)φj(x₂)φk(x₃) is an orthonormal basis for L²(ℝ³), and x₁ can be written as an orthogonal sum of operators restricted to the subspaces spanned by φj(x₂)φk(x₃). Each subspace is unitarily equivalent to L²(ℝ) and x₁ is given by multiplication with the identity. Hence the claim follows (or use Theorem 4.12).

Next, consider the one-parameter unitary group of translations

(Uj(t)ψ)(x) = ψ(x − t ej), 1 ≤ j ≤ 3,   (8.3)
Algebraic methods where ej is the unit vector in the jth coordinate direction.5) d ψ(t). Let us ﬁx ψ ∈ D(AB) ∩ D(AB) and abbreviate ˆ A = A − Eψ (A). The operator p is known as momentum operator. we have ψ ∈ S(R3 ) (8. The Weyl relations also imply that the meansquare deviation of position and momentum cannot be made arbitrarily small simultaneously: Theorem 8. then for any ψ ∈ D(AB) ∩ D(AB) we have 1 ∆ψ (A)∆ψ (B) ≥ Eψ ([A. B} = AB + B A 2 2 ˆ ˆ where {A. For ψ ∈ S(R3 ) we compute ψ(x − tej ) − ψ(x) 1 ∂ lim i ψ(x) (8. B]. (8.150 8. B]ψ 2 2 2 (8. ABψ 2 =  ψ. Bψ  ≤ ∆ψ (A)∆ψ (B). pj ψ(t) = 0. ψ ∈ S(R3 ).4) = t→0 t i ∂xj ∂ and hence the generator is pj = 1 ∂xj . B] are symmetric. Again it is essentially selfadjoint i on ψ ∈ S(R3 ). B}ψ 2 +  ψ. Proof.6) dt that is. Now note that 1 ˆ ˆ 1 ˆˆ ˆ ˆ ˆˆ ˆˆ AB = {A.13) .8) 2 with equality if (B − Eψ (B))ψ = iλ(A − Eψ (A))ψ. pj ]ψ(x) = 0.1 (Heisenberg Uncertainty Principle). [A. Similarly one has [pj . or if ψ is an eigenstate of A or B. Moreover. {A. since it is unitarily equivalent to xj by virtue of the Fourier transform we conclude that the spectrum of pj is again purely absolutely continuous and given by σ(pj ) = R. the momentum is a conserved quantity for the free motion. ψ(t) = e−itH0 ψ(0) ∈ S(R3 ). (8. {A.9) (8. B} and i[A.12) (8.11) λ ∈ R\{0}. Suppose A and B are two symmetric operators. So 1 1 ˆ ˆ ˆˆ ˆ ˆ  Aψ. B]) (8.7) which is known as the Weyl relation. ∆ψ (B) = Bψ and hence by CauchySchwarz ˆ ˆ  Aψ. Bψ 2 =  ψ. (8. (8. xk ]ψ(x) = δjk ψ(x). ˆ B = B − Eψ (B). Note that since [H0 . B} + [A.10) ˆ ˆ Then ∆ψ (A) = Aψ .
8). else (8.2. the angular momentum is a conserved quantity for the free motion as well. (8.16) where Mj (t) is the matrix of rotation around ej by an angle of t. (8. For ψ ∈ S(R3 ) we compute ψ(Mi (t)x) − ψ(x) lim i = t→0 t where εijk 1 −1 = 0 if ijk is an even permutation of 123 if ijk is an odd permutation of 123 .19) we have again d ψ(t). the continuous spectrum is empty. . ˆ ˆ To have equality if ψ is not an eigenstate we need Bψ = z Aψ for equality ˆ ˆ in CauchySchwarz and ψ.2.18) 3 εijk xj pk ψ(x). Lj ψ(t) = 0. and 8. Problem 8. we see that the spectrum is a subset of Z. 1 ≤ j ≤ 3. In particular. {A. Since ei2πLj = I. which is known as angular momentum operator. Angular momentum Now consider the oneparameter unitary group of rotations (Uj (t)ψ)(x) = ψ(Mj (t)x).k=1 (8.17) Again one combines the three components to one vector valued operator L = x ∧ p.15) realizes the minimum. Angular momentum 151 which proves (8.8. Check that (8. (8.14) e− 2 x−x0  λ 2 −ip x 0 . ˆ requirement gives 0 = (z − z In case of position and momentum we have ( ψ = 1) δjk ∆ψ (pj )∆ψ (xk ) ≥ 2 and the minimum is attained for the Gaussian wave packets ψ(x) = λ π n/4 (8. We will show below that we have σ(Lj ) = Z. B}ψ = 0. ψ ∈ S(R3 ). j. ψ(t) = e−itH0 ψ(0) ∈ S(R3 ).20) dt that is. Lj ]ψ(x) = 0. Note that since [H0 .15) λ 2 which satisfy Eψ (x) = x0 and Eψ (p) = p0 respectively ∆ψ (pj )2 = 1 ∆ψ (xk )2 = 2λ . Inserting the ﬁrst into the second ∗ ) Aψ 2 and shows Re(z) = 0.1. (8.
(8. (All commutation relations below hold for ψ ∈ S(R3 ). It has the nice property that the ﬁnite dimensional subspaces Dk = span{xα e− x2 2  α ≤ k} (8. But the limit is the Fourier transform of ϕ(x)e− shows that this function is zero. Proof. Kj ]ψ(x) = i k=1 εijk Kk ψ(x).24) for any ﬁnite k and hence also in the limit k → ∞ by the dominated convergence theorem.23) are invariant under Lj (and hence they reduce Lj ). Since L2 and L3 commute on Dk . Then 1 √ 2π ϕ(x)e −x 2 2 k j=1 (itx)j =0 j! x2 2 (8. L2 ψ = l(l + 1)ψ. (8. ψ ∈ S(R3 ). Hence ϕ(x) = 0. (8. pj .2. we even have 3 [Li . Now let us try to draw some further consequences by using the commutation relations (8. Dk is invariant under L2 and L3 and hence Dk reduces L2 and L3 . Introducing L2 = L2 + L2 + L2 it is straightforward to check 1 2 3 [L2 . Now let Hm = Ker(L3 − m) and denote by Pk the projector onto Dk . xj }. the space Pk Hm is invariant under L2 which shows that we can choose an orthonormal basis consisting of eigenfunctions of L2 for Pk Hm .25) Moreover. Kj ∈ {Lj .22) is dense.152 8. By Lemma 1. ψ ∈ S(R3 ).21). In this respect the following domain x2 D = span{xα e− 2  α ∈ Nn } ⊂ S(Rn ) (8. ψ = 0 for every ψ ∈ D.21) and these algebraic commutation relations are often used to derive information on the point spectra of these operators. Lemma 8.9 it suﬃces to consider the case n = 1. which Since it is invariant under the unitary groups generated by Lj . Suppose ϕ. L2 and L3 are given by ﬁnite matrices on Dk . In particular. The subspace D ⊂ L2 (Rn ) deﬁned in (8. the operators Lj are essentially selfadjoint on D by Corollary 5.22) 0 is often used. .m the set of all functions in D satisfying L3 ψ = mψ.) Denote by Hl. Increasing k we get an orthonormal set of simultaneous eigenfunctions whose span is equal to D. Hence there is an orthonormal basis of simultaneous eigenfunctions of L2 and L3 .26) . Lj ]ψ(x) = 0.3. Algebraic methods Moreover.
Moreover. since L2 = L2 ± L3 + L L± 3 we obtain L± ψ 2 [L3 .m (x) = (l + m)! Ll−m ψl. First of all we observe x2 1 ψ0.30) = ψ. [L .35) We will rederive this result using diﬀerent methods in Section 10. (8. we note that (8. There exists an orthonormal basis of simultaneous eigenvectors for the operators L2 and Lj .l (x) ∈ Hl. L L± ψ = (l(l + 1) − m(m ± 1)) ψ for every ψ ∈ Hl.l . (8. .0 . x ] = ±2x3 . Moreover.m = {0} for l ∈ N0 . L± Hl. . 2 x± = x1 ± ix2 .21) implies [L3 . First introduce two new operators L± = L1 ± iL2 . Theorem 8. −l + 1. . (8.28) that is. l − 1. And thus 1 ψl.27) (8.l±1 .l (x) = √ (x1 ± ix2 )l ψ0. m = −l.29) (8. l. (l − m)!(2l)! − (8.m±1 is injective unless m = l.0 (x) = 3/2 e− 2 ∈ H0. x± ] = 0.m±1 .m → Hl.m we have L3 (L± ψ) = (m ± 1)(L± ψ).m .2. L± ] = ±L± .34) The constants are chosen such that ψl.m = {0} for l ∈ N0 . .32) [L± .l . L2 (L± ψ) = l(l + 1)(L± ψ).3. Up to this point we know σ(L2 ) ⊆ {l(l + 1)l ∈ N0 }. for every ψ ∈ Hl. 2x3 L± . l! respectively ψl.31) π Next. x± ] = ±x± . their spectra are given by σ(L2 ) = {l(l + 1)l ∈ N0 }. [L± . Then. In summary.m . (8. (8.m → Hl.m = 1. If ψ = 0 we must have l(l + 1) − m(m ± 1) ≥ 0 which shows Hl. σ(L3 ) = Z. we need to show that Hl. Moreover. Angular momentum 153 By L2 ≥ 0 and σ(L3 ) ⊆ Z we can restrict our attention to the case l ≥ 0 and m ∈ Z. σ(L3 ) ⊆ Z. . Hence we must have Hl.m = {0} for m > l. L± Hl.0 (x) ∈ Hl. In order to show that equality holds in both cases. x± ] = 2x± (1 ± L3 ) Hence if ψ ∈ Hl.3.8. then (x1 ± ix2 )ψ ∈ Hl±1.33) (8.
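The ladder-operator relations can be realized concretely by finite matrices: on the (2l+1)-dimensional span of the L₃-eigenvectors with m = −l, …, l, the coefficients ‖L±ψ‖² = l(l+1) − m(m±1) determine L± up to phases. The sketch below (an illustration, not part of the text; the phase convention is an arbitrary choice) builds these matrices and confirms the commutation relations and L² = l(l+1):

```python
import numpy as np

def spin_matrices(l):
    """Matrices on the span of the L3-eigenvectors m = -l, ..., l, built from
    ||L+- psi||^2 = l(l+1) - m(m+-1), with all phases fixed to +1."""
    m = np.arange(-l, l + 1)
    L3 = np.diag(m).astype(complex)
    c = np.sqrt(l * (l + 1.0) - m[:-1] * (m[:-1] + 1.0))
    Lp = np.diag(c, -1).astype(complex)    # raising operator L+
    Lm = Lp.conj().T                       # lowering operator L-
    return (Lp + Lm) / 2, (Lp - Lm) / 2j, L3

l = 2
L1, L2, L3 = spin_matrices(l)
Lsq = L1 @ L1 + L2 @ L2 + L3 @ L3

print(np.allclose(L1 @ L2 - L2 @ L1, 1j * L3))              # [L1, L2] = i L3
print(np.allclose(Lsq, l * (l + 1) * np.eye(2 * l + 1)))    # L^2 = l(l+1)
print(np.linalg.eigvalsh(L3).round(6))                      # integers -l, ..., l
```

The integer eigenvalues of L₃ mirror the inclusion σ(L₃) ⊆ ℤ derived in the text.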
154 8. Algebraic methods

8.3. The harmonic oscillator

Finally, let us consider another important model whose algebraic structure is similar to that of the angular momentum, the harmonic oscillator

H = H₀ + ω²x², ω > 0.   (8.36)

As domain we will choose

D(H) = D = span{xα e^{−x²/2} | α ∈ ℕ₀³} ⊆ L²(ℝ³)   (8.37)

from our previous section. We will first consider the one-dimensional case. Introducing

A± = (1/√2)(√ω x ∓ (1/√ω) d/dx)   (8.38)

we have

[A−, A₊] = 1   (8.39)

and

H = ω(2N + 1), N = A₊A−.   (8.40)

Moreover, since

[N, A±] = ±A±,   (8.41)

we see that Nψ = nψ implies NA±ψ = (n ± 1)A±ψ. Moreover, ‖A₊ψ‖² = ⟨ψ, A−A₊ψ⟩ = (n + 1)‖ψ‖² respectively ‖A−ψ‖² = n‖ψ‖² in this case, and hence we conclude that σ(N) ⊆ ℕ₀.

If Nψ₀ = 0, then we must have A−ψ₀ = 0, and the normalized solution of this last equation is given by

ψ₀(x) = (ω/π)^{1/4} e^{−ωx²/2} ∈ D.   (8.42)

Hence

ψn(x) = (1/√n!) A₊ⁿ ψ₀(x)   (8.43)

is a normalized eigenfunction of N corresponding to the eigenvalue n. Moreover, since

ψn(x) = (1/√(2ⁿn!)) (ω/π)^{1/4} Hn(√ω x) e^{−ωx²/2},   (8.44)

where Hn(x) is a polynomial of degree n given by

Hn(x) = e^{x²/2} (x − d/dx)ⁿ e^{−x²/2} = (−1)ⁿ e^{x²} (dⁿ/dxⁿ) e^{−x²},   (8.45)

we conclude span{ψn} = D. The polynomials Hn(x) are called Hermite polynomials.

Theorem 8.4. The harmonic oscillator H is essentially self-adjoint on D and has an orthonormal basis of eigenfunctions

ψ_{n₁,n₂,n₃}(x) = ψ_{n₁}(x₁) ψ_{n₂}(x₂) ψ_{n₃}(x₃),   (8.46)

with ψ_{nj}(xj) from (8.44). The spectrum is given by

σ(H) = {(2n + 3)ω | n ∈ ℕ₀}.   (8.47)

Finally, there is also a close connection with the Fourier transformation. Without restriction we choose ω = 1 and consider only one dimension. Then it is easy to verify that H commutes with the Fourier transformation,

FH = HF,   (8.48)

on D. Moreover, by FA± = ∓iA±F we even infer

Fψn = (1/√n!) F A₊ⁿ ψ₀ = ((−i)ⁿ/√n!) A₊ⁿ F ψ₀ = (−i)ⁿ ψn,   (8.49)

since Fψ₀ = ψ₀ by Lemma 7.3. In particular,

σ(F) = {z ∈ ℂ | z⁴ = 1}.   (8.50)

Problem 8.3. Show that H = −d²/dx² + q can be written as H = A*A, where A = −d/dx + φ, if the differential equation ψ″ = qψ has a positive solution. Compute H̃ = AA*. (Hint: φ = ψ′/ψ.)
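The one-dimensional oscillator eigenfunctions built from Hermite polynomials can be cross-checked numerically: with ω = 1 the Rayleigh quotient of ψn for H = −d²/dx² + x² should be 2n + 1. A sketch using NumPy's (physicists') Hermite polynomials and an ad-hoc finite-difference Laplacian:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

def psi(n):
    """n-th normalized eigenfunction of H = -d^2/dx^2 + x^2 (omega = 1)."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = 1.0 / np.sqrt(2.0**n * math.factorial(n) * np.sqrt(np.pi))
    return norm * hermval(x, c) * np.exp(-x**2 / 2)

rqs = []
for n in range(4):
    p = psi(n)
    Hp = -(np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx**2 + x**2 * p
    rqs.append(np.sum(p * Hp) * dx)     # Rayleigh quotient, expected 2n + 1

print(rqs)   # close to [1, 3, 5, 7]
```

With ω restored, each factor ψ_{nj} contributes (2nj + 1)ω, which is how the three-dimensional eigenvalues (2n + 3)ω arise.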
Chapter 9

One-dimensional Schrödinger operators

9.1. Sturm-Liouville operators

In this section we want to illustrate some of the results obtained thus far by investigating a specific example, the Sturm-Liouville equations

τf(x) = (1/r(x)) (− (d/dx) p(x) (d/dx) f(x) + q(x) f(x)), f, pf′ ∈ AC_loc(I),   (9.1)

where I = (a, b) ⊂ ℝ is an arbitrary open interval. We require

(i) p⁻¹ ∈ L¹_loc(I), real-valued,
(ii) q ∈ L¹_loc(I), real-valued,
(iii) r ∈ L¹_loc(I), positive.

The case p = r = 1 can be viewed as the model of a particle in one dimension in the external potential q. Moreover, the case of a particle in three dimensions can in some situations be reduced to the investigation of Sturm-Liouville equations. In particular, we will see how this works when explicitly solving the hydrogen atom.

The suitable Hilbert space is

L²((a, b), r(x)dx), ⟨f, g⟩ = ∫_a^b f(x)* g(x) r(x) dx.   (9.2)
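In the simplest regular case p = r = 1, q = 0 on I = (0, π) with Dirichlet boundary conditions, the eigenvalues of τ are n² (eigenfunctions sin(nx)), and a crude finite-difference discretization (an ad-hoc choice for this sketch) reproduces them:

```python
import numpy as np

# tau u = -u'' on (0, pi) with u(0) = u(pi) = 0; exact eigenvalues 1, 4, 9, 16, ...
N = 500
h = np.pi / (N + 1)
T = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2
evals = np.linalg.eigvalsh(T)[:4]
print(evals)   # approximately [1, 4, 9, 16]
```

The discrete eigenvalues are (4/h²) sin²(kh/2), which converge to k² as h → 0, so the agreement improves quadratically with the grid resolution.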
Similarly for b.4).158 9. r ∈ L1 ((a. satisfying the initial condition f (c) = α.2 = zu1.9) z ∈ C.4 below. f ) − Wa (g ∗ . f ) has to be understood as limit. α. pf . .1. (9. If we choose f.8) f. q. τ f = Wb (g ∗ .4) for f. pf ∈ ACloc (I). Here Wa. r dx)f. We will defer the general loc case to Lemma 9. f ) − Wc (g ∗ . r dx)}. we perform a little calculation. f is holomorphic with respect to z. g ∈ D(τ ) in (9. it is nonzero if and only if u1 and u2 are linearly independent (compare Theorem 9. r dx) is given by D(τ ) = {f ∈ L2 (I. then the SturmLiouville equation (9. Equation (9. Using integration by parts (twice) we obtain (a < c < d < b): d c g ∗ (τ f ) rdy = Wd (g ∗ . c ∈ I.7) (9. If it is both regular at a and b it is called regular. pf can be extended continuously to a regular end point. Theorem 9. Suppose rg ∈ L1 (I). (9.g.) p ∈ ACloc (I).3) L2 (I). Finally. (pf )(c) = β.5) In addition. pf ∈ ACloc (I) of the diﬀerential equation (τ − z)f = g. loc It is not clear that D(τ ) is dense unless (e. f ) + c d (τ g)∗ f rdy. we recall the following wellknown result from ordinary diﬀerential equations. which results in g. u2 ). (9. q ∈ ∞ r−1 ∈ L∞ (I) since C0 (I) ⊂ D(τ ) in this case. τ u1. f2 ) = p(f1 f2 − f1 f2 ) (x) is called the modiﬁed Wronskian. g ∈ D(τ ). (9.2 . β ∈ C. f ) + τ g. then there exists a unique solution loc f.1). u2 ) = W (u1 . g. pg ∈ ACloc (I) where Wx (f1 . (9.1 below). Note that f. One dimensional Schr¨dinger operators o If a is ﬁnite and if p−1 . τ f ∈ L2 (I.b (g ∗ . c)) (c ∈ I). (9. p . than we can take the limits c → a and d → b. Since we are interested in selfadjoint operators H associated with (9.1) is called regular at a.6) Moreover. f . The maximal domain of deﬁnition for τ in L2 (I.4) also shows that the Wronskian of two solutions of τ u = zu is constant Wx (u1 .
Lemma 9.2. Suppose u1, u2 are two solutions of (τ − z)u = 0 with W(u1, u2) = 1. Then any other solution of (9.8) can be written as (α, β ∈ C)

f(x) = u1(x) (α + ∫_c^x u2 g r dy) + u2(x) (β − ∫_c^x u1 g r dy),
f′(x) = u1′(x) (α + ∫_c^x u2 g r dy) + u2′(x) (β − ∫_c^x u1 g r dy). (9.10)

Note that the constants α, β coincide with those from Theorem 9.1 if u1(c) = (pu2′)(c) = 1 and (pu1′)(c) = u2(c) = 0.

Proof. It suffices to check τf − zf = g. Differentiating the first equation of (9.10) gives the second. Next we compute

(pf′)′ = (pu1′)′ (α + ∫ u2 g r dy) + (pu2′)′ (β − ∫ u1 g r dy) − W(u1, u2) g r
= (q − z) u1 (α + ∫ u2 g r dy) + (q − z) u2 (β − ∫ u1 g r dy) − g r
= (q − z) f − g r, (9.11)

which proves the claim.

Now we want to obtain a symmetric operator and hence we choose

A0 f = τf,   D(A0) = D(τ) ∩ AC_c(I), (9.12)

where AC_c(I) are the functions in AC(I) with compact support. This definition clearly ensures that the Wronskian of two such functions vanishes on the boundary, implying that A0 is symmetric. Our first task is to compute the closure of A0 and its adjoint. For this the following elementary fact will be needed.

Lemma 9.3. Suppose V is a vector space and l, l1, ..., ln are linear functionals (defined on all of V) such that ∩_{j=1}^n Ker(l_j) ⊆ Ker(l). Then l = Σ_{j=1}^n α_j l_j for some constants α_j ∈ C.

Proof. First of all it is no restriction to assume that the functionals l_j are linearly independent. Then the map L : V → Cⁿ, f → (l1(f), ..., ln(f)), is surjective (since x ∈ Ran(L)⊥ implies Σ_j x_j l_j(f) = 0 for all f). Hence there are vectors f_k ∈ V such that l_j(f_k) = 0 for j ≠ k and l_j(f_j) = 1. Then f − Σ_j l_j(f) f_j ∈ ∩_j Ker(l_j) and hence l(f) − Σ_j l_j(f) l(f_j) = 0. Thus we can choose α_j = l(f_j).

Now we are ready to prove

Lemma 9.4. The operator A0 is densely defined and its closure is given by

Ā0 f = τf,   D(Ā0) = {f ∈ D(τ) | W_a(f, g) = W_b(f, g) = 0, ∀g ∈ D(τ)}. (9.13)
Its adjoint is given by

A0* f = τf,   D(A0*) = D(τ). (9.14)

Proof. We start by computing A0* and ignore the fact that we don't know whether D(A0) is dense for now. By (9.7) we have D(τ) ⊆ D(A0*) and it remains to show D(A0*) ⊆ D(τ). If h ∈ D(A0*) we must have

⟨h, A0 f⟩ = ⟨k, f⟩,   ∀f ∈ D(A0), (9.15)

for some k ∈ L²(I, r dx). Using (9.10) we can find a h̃ such that τh̃ = k and from integration by parts we obtain

∫_a^b (h(x) − h̃(x))* (τf)(x) r(x) dx = 0,   ∀f ∈ D(A0). (9.16)

Clearly we expect that h − h̃ will be a solution of τu = 0 and to prove this we will invoke Lemma 9.3. Therefore we consider the linear functionals

l(g) = ∫_a^b (h(x) − h̃(x))* g(x) r(x) dx,   l_j(g) = ∫_a^b u_j(x)* g(x) r(x) dx, (9.17)

on L²(I, r dx), where u_j are two solutions of τu = 0 with W(u1, u2) ≠ 0. We have Ker(l1) ∩ Ker(l2) ⊆ Ker(l). In fact, if g ∈ Ker(l1) ∩ Ker(l2), then

f(x) = u1(x) ∫_a^x u2(y) g(y) r(y) dy + u2(x) ∫_x^b u1(y) g(y) r(y) dy (9.18)

is in D(A0) and g = τf ∈ Ker(l) by (9.16). Now Lemma 9.3 implies

∫_a^b (h(x) − h̃(x) + α1 u1(x) + α2 u2(x))* g(x) r(x) dx = 0,   ∀g ∈ L²(I, r dx), (9.19)

and hence h = h̃ + α1 u1 + α2 u2 ∈ D(τ).

Now what if D(A0) were not dense? Then there would be some freedom in the choice of k since we could always add a component in D(A0)⊥. So suppose we have two choices k1 ≠ k2. Then by the above calculation, there are corresponding functions h̃1 and h̃2 such that h = h̃1 + α_{1,1} u1 + α_{1,2} u2 = h̃2 + α_{2,1} u1 + α_{2,2} u2. In particular, h̃1 − h̃2 is in the kernel of τ and hence k1 = τh̃1 = τh̃2 = k2, contradicting our assumption.

Next we turn to Ā0. Denote the set on the right hand side of (9.13) by D. Then we have D ⊆ D(A0**) = Ā0 by (9.7). Conversely, since Ā0 ⊆ A0* we can use (9.7) to conclude

W_a(f, h) + W_b(f, h) = 0,   f ∈ D(Ā0), h ∈ D(A0*). (9.20)
Now replace h by a h̃ ∈ D(A0*) which coincides with h near a and vanishes identically near b (Problem 9.2). Then W_a(f, h̃) = W_a(f, h) and W_b(f, h̃) = 0, and hence W_b(f, h) = W_a(f, h) = 0 shows f ∈ D.

Example. If τ is regular at a, then f ∈ D(Ā0) if and only if f(a) = (pf′)(a) = 0. This follows since we can prescribe the values of g(a), (pg′)(a) for g ∈ D(τ) arbitrarily.

This result shows that any self-adjoint extension of A0 must lie between Ā0 and A0*. Moreover, self-adjointness seems to be related to the Wronskian of two functions at the boundary. Hence we collect a few properties first.

Lemma 9.5. Suppose v ∈ D(τ) with W_a(v*, v) = 0 and suppose there is a f̂ ∈ D(τ) with W_a(v, f̂) ≠ 0. Then we have

W_a(v, f) = 0 ⇔ W_a(v, f*) = 0, ∀f ∈ D(τ) (9.21)

and

W_a(v, f) = W_a(v, g) = 0 ⇒ W_a(g*, f) = 0, ∀f, g ∈ D(τ). (9.22)

Proof. For all f1, f2, f3, f4 ∈ D(τ) we have the Plücker identity

W_x(f1, f2) W_x(f3, f4) + W_x(f1, f3) W_x(f4, f2) + W_x(f1, f4) W_x(f2, f3) = 0, (9.23)

which remains valid in the limit x → a. Choosing f1 = v, f2 = f, f3 = v*, f4 = f̂ we infer (9.21). Choosing f1 = f, f2 = g*, f3 = v, f4 = f̂ we infer (9.22).

Problem 9.1. Given α, β, γ, δ, show that there is a function f ∈ D(τ) restricted to [c, d] ⊆ (a, b) such that f(c) = α, (pf′)(c) = β and f(d) = γ, (pf′)(d) = δ. (Hint: Lemma 9.2.)

Problem 9.2. Let A0 = −d²/dx², D(A0) = {f ∈ H²[0, 1] | f(0) = f(1) = 0}. Find a q ∈ L¹(0, 1) such that D(A0) ∩ D(B) = {0}, where B = q, (Hint: Problem 0.2.)

9.2. Weyl's limit circle, limit point alternative

We call τ limit circle (l.c.) at a if there is a v ∈ D(τ) with W_a(v*, v) = 0 such that W_a(v, f) ≠ 0 for at least one f ∈ D(τ). Otherwise τ is called limit point (l.p.) at a. Similarly for b.

Example. If τ is regular at a, it is limit circle at a. Since W_a(v, f) = (pf′)(a)v(a) − (pv′)(a)f(a), any real-valued v with (v(a), (pv′)(a)) ≠ (0, 0) works.
D(B) = {f ∈ L2 (0. Moreover.1). .
Note that if W_a(f, v) ≠ 0 for some f ∈ D(τ), then W_a(f, Re(v)) ≠ 0 or W_a(f, Im(v)) ≠ 0. Hence it is no restriction to assume that v is real, and W_a(v*, v) = 0 is trivially satisfied in this case. In particular, τ is limit point if and only if W_a(f, g) = 0 for all f, g ∈ D(τ).

Example. If τ is regular at a, we can choose v such that (v(a), (pv′)(a)) = (sin(α), −cos(α)), α ∈ [0, π), such that

W_a(v, f) = cos(α)f(a) + sin(α)(pf′)(a). (9.25)

The most common choice α = 0 is known as the Dirichlet boundary condition f(a) = 0.

Theorem 9.6. If τ is l.c. at a, then let v ∈ D(τ) with W_a(v*, v) = 0 and W_a(v, f) ≠ 0 for at least one f ∈ D(τ). Similarly, if τ is l.c. at b, let w be an analogous function. Then the operator

A : D(A) → L²(I, r dx), f → τf, (9.26)

with

D(A) = {f ∈ D(τ) | W_a(v, f) = 0 if l.c. at a, W_b(w, f) = 0 if l.c. at b} (9.27)

is self-adjoint.

Proof. Clearly A ⊆ A* ⊆ A0*. Let g ∈ D(A*). As in the computation of Ā0 we conclude W_a(f, g) = 0 for all f ∈ D(A). Moreover, we can choose f such that it coincides with v near a and hence W_a(v, g) = 0. Similarly, if τ is l.c. at b, W_b(w, g) = 0, that is g ∈ D(A).

The name limit circle respectively limit point stems from the original approach of Weyl, who considered the set of solutions of τu = zu, z ∈ C\R, which satisfy W_c(u*, u) = 0. They can be shown to lie on a circle which converges to a circle respectively a point as c → a (or c → b).

Next we want to compute the resolvent of A.

Theorem 9.7. Suppose z ∈ ρ(A), then there exists a solution ua(z, x) which is in L²((a, c), r dx) and which satisfies the boundary condition at a if τ is l.c. at a. Similarly, there exists a solution ub(z, x) with the analogous properties near b. The resolvent of A is given by

(A − z)⁻¹ g(x) = ∫_a^b G(z, x, y) g(y) r(y) dy, (9.28)

where

G(z, x, y) = (1/W(ub(z), ua(z))) · { ub(z, x) ua(z, y), x ≥ y;  ub(z, y) ua(z, x), x ≤ y }. (9.29)
Proof. Let g ∈ L²(I, r dx) be real-valued and consider f = (A − z)⁻¹ g ∈ D(A). Fix z and let g be supported in (c, d) ⊂ I. Since (τ − z)f = g, equation (9.10) implies

f(x) = u1(x) (α + ∫_a^x u2 g r dy) + u2(x) (β + ∫_x^b u1 g r dy). (9.30)

Near a (x < c) we have f(x) = α̃ u1(x) + β̃ u2(x) and near b (x > d) we have f(x) = α̂ u1(x) + β̂ u2(x), where α̂ = α + ∫_a^b u2 g r dy and β̃ = β + ∫_a^b u1 g r dy.

Since f is square integrable near a and satisfies the boundary condition at a if τ is l.c. at a, we can choose ua(z, x) by setting it equal to f near a and using the differential equation to extend it to the rest of I. The only problem is that ua or ub might be identically zero. Hence we need to show that this can be avoided by choosing g properly. If f vanishes identically near both a and b we must have α̃ = β̃ = α̂ = β̂ = 0 and thus α = β = 0 and

∫_a^b u_j(y) g(y) r(y) dy = 0,   j = 1, 2. (9.31)

Now choosing g such that ∫_a^b ub g r dy ≠ 0 we infer existence of ua(z). Similarly we obtain ub.

Now choose u1 = ub and consider the behavior near b. If u2 is not square integrable on (d, b), we must have β̂ = 0 since β̂ u2 = f − α̂ ub is. If u2 is square integrable, we can find two functions in D(τ) which coincide with ub and u2 near b. Since W(ub, u2) = 1 we see that τ is l.c. at b and hence 0 = W_b(ub, f) = W_b(ub, α̂ ub + β̂ u2) = β̂. Thus β̂ = 0 in both cases and we have

f(x) = ub(x) (α + ∫_a^x u2 g r dy) + u2(x) ∫_x^b ub g r dy.

Choosing u2 = ua and arguing as before we see α = 0 and hence

f(x) = ub(x) ∫_a^x ua(y) g(y) r(y) dy + ua(x) ∫_x^b ub(y) g(y) r(y) dy = ∫_a^b G(z, x, y) g(y) r(y) dy (9.32)

for any g ∈ L²(I, r dx) with compact support in (c, d). Since this set is dense the claim follows.

Example. If τ is regular at a with a boundary condition as in the previous example, we can choose ua(z, x) to be the solution corresponding to the initial conditions (ua(z, a), (pua′)(z, a)) = (sin(α), −cos(α)). In particular, ua(z, x) exists for all z ∈ C. If τ is regular at both a and b there is a corresponding solution ub(z, x).
So the only values of z for which (A − z)⁻¹ does not exist must be those with W(ub(z), ua(z)) = 0, that is, those where ua(z, x)
and ub(z, x) are linearly dependent and ua(z, x) = γ ub(z, x) satisfies both boundary conditions. That is, z is an eigenvalue in this case. In particular, regular operators have pure point spectrum. We will see in Theorem 9.10 below that this holds for any operator which is l.c. at both end points.

In the previous example ua(z, x) is holomorphic with respect to z and satisfies ua(z, x)* = ua(z*, x) (since it corresponds to real initial conditions and our differential equation has real coefficients). In general we have:

Lemma 9.8. Suppose z ∈ ρ(A), then ua(z, x) from the previous lemma can be chosen locally holomorphic with respect to z such that ua(z, x)* = ua(z*, x). Similarly for ub(z, x).

Proof. Since this is a local property near a we can suppose b is regular and choose ub(z, x) such that

(ub(z, b), (pub′)(z, b)) = (sin(β), −cos(β)) (9.33)

as in the example above. In addition, choose a second solution vb(z, x) such that (vb(z, b), (pvb′)(z, b)) = (cos(β), sin(β)) and observe W(ub(z), vb(z)) = 1. If z ∈ ρ(A), z is no eigenvalue and hence ua(z, x) cannot be a multiple of ub(z, x). Thus we can set

ua(z, x) = vb(z, x) + m(z) ub(z, x) (9.34)

and it remains to show that m(z) is holomorphic with m(z)* = m(z*). Choosing h with compact support in (a, c) and g with support in (c, b) we have

⟨h, (A − z)⁻¹ g⟩ = ⟨h, ua(z)⟩ ⟨g*, ub(z)⟩ = (⟨h, vb(z)⟩ + m(z) ⟨h, ub(z)⟩) ⟨g*, ub(z)⟩ (9.35)

(with a slight abuse of notation since ub, vb might not be square integrable). Choosing (real-valued) functions h and g such that ⟨h, ub(z)⟩ ⟨g*, ub(z)⟩ ≠ 0 we can solve for m(z):

m(z) = ( ⟨h, (A − z)⁻¹ g⟩ − ⟨h, vb(z)⟩ ⟨g*, ub(z)⟩ ) / ( ⟨h, ub(z)⟩ ⟨g*, ub(z)⟩ ). (9.36)

This finishes the proof.

Example. We already know that τ = −d²/dx² on I = (−∞, ∞) gives rise to the free Schrödinger operator H0. Furthermore,

u±(z, x) = e^{∓√−z x},   z ∈ C, (9.37)

are two linearly independent solutions (for z ≠ 0) and since Re(√−z) > 0 for z ∈ C\[0, ∞), there is precisely one solution (up to a constant multiple)
which is square integrable near ±∞, namely u±. In particular, the only choice for ua is u− and for ub is u+, and we get

G(z, x, y) = (1/(2√−z)) e^{−√−z |x − y|} (9.38)
which we already found in Section 7.4.

If, as in the previous example, there is only one square integrable solution, there is no choice for G(z, x, y). But since different boundary conditions must give rise to different resolvents, there is no room for boundary conditions in this case. This indicates a connection between our l.c., l.p. distinction and square integrability of solutions.

Theorem 9.9 (Weyl alternative). The operator τ is l.c. at a if and only if for one z0 ∈ C all solutions of (τ − z0)u = 0 are square integrable near a. This then holds for all z ∈ C. Similarly for b.

Proof. If all solutions are square integrable near a, τ is l.c. at a since the Wronskian of two linearly independent solutions does not vanish.

Conversely, take two functions v, ṽ ∈ D(τ) with W_a(v, ṽ) ≠ 0. By considering real and imaginary parts it is no restriction to assume that v and ṽ are real-valued. Thus they give rise to two different self-adjoint operators A and Ã (choose any fixed w for the other endpoint). Let ua and ũa be the corresponding solutions from above, then W(ua, ũa) ≠ 0 (since otherwise Ã = A by Lemma 9.5) and thus there are two linearly independent solutions which are square integrable near a. Since any other solution can be written as a linear combination of those two, every solution is square integrable near a.

It remains to show that all solutions of (τ − z)u = 0 for all z ∈ C are square integrable near a if τ is l.c. at a. In fact, the above argument ensures this for every z ∈ ρ(A) ∩ ρ(Ã), that is, at least for all z ∈ C\R. Suppose (τ − z)u = 0 and choose two linearly independent solutions uj, j = 1, 2, of (τ − z0)u = 0 with W(u1, u2) = 1. Using (τ − z0)u = (z − z0)u and (9.10) we have (a < c < x < b)
u(x) = α u1(x) + β u2(x) + (z − z0) ∫_c^x (u1(x)u2(y) − u1(y)u2(x)) u(y) r(y) dy. (9.39)

Since u_j ∈ L²((c, b), r dx) we can find a constant M ≥ 0 such that

∫_c^b |u_{1,2}(y)|² r(y) dy ≤ M. (9.40)
Now choose c close to b such that |z − z0| M² ≤ 1/4. Next, estimating the integral using Cauchy–Schwarz gives

|∫_c^x (u1(x)u2(y) − u1(y)u2(x)) u(y) r(y) dy|²
≤ ∫_c^x |u1(x)u2(y) − u1(y)u2(x)|² r(y) dy ∫_c^x |u(y)|² r(y) dy
≤ M (|u1(x)|² + |u2(x)|²) ∫_c^x |u(y)|² r(y) dy (9.41)

and hence

∫_c^x |u(y)|² r(y) dy ≤ (|α|² + |β|²) M + 2|z − z0| M² ∫_c^x |u(y)|² r(y) dy
≤ (|α|² + |β|²) M + (1/2) ∫_c^x |u(y)|² r(y) dy. (9.42)

Thus

∫_c^x |u(y)|² r(y) dy ≤ 2 (|α|² + |β|²) M (9.43)
and since u ∈ AC_loc(I) we have u ∈ L²((c, b), r dx) for every c ∈ (a, b).

Note that all eigenvalues are simple. If τ is l.p. at one endpoint this is clear, since there is at most one solution of (τ − λ)u = 0 which is square integrable near this end point. If τ is l.c. this also follows since the fact that two solutions of (τ − λ)u = 0 satisfy the same boundary condition implies that their Wronskian vanishes.

Finally, let us shed some additional light on the number of possible boundary conditions. Suppose τ is l.c. at a and let u1, u2 be two solutions of τu = 0 with W(u1, u2) = 1. Abbreviate

BC_x^j(f) = W_x(u_j, f),   f ∈ D(τ). (9.44)

Let v be as in Theorem 9.6, then, using Lemma 9.5, it is not hard to see that

W_a(v, f) = 0 ⇔ cos(α) BC_a^1(f) + sin(α) BC_a^2(f) = 0, (9.45)

where tan(α) = −BC_a^1(v)/BC_a^2(v). Hence all possible boundary conditions can be parametrized by α ∈ [0, π). If τ is regular at a and if we choose u1(a) = p(a)u2′(a) = 1 and p(a)u1′(a) = u2(a) = 0, then

BC_a^1(f) = f(a),   BC_a^2(f) = p(a)f′(a) (9.46)

and the boundary condition takes the simple form

cos(α)f(a) + sin(α)p(a)f′(a) = 0. (9.47)

Finally, note that if τ is l.c. at both a and b, then Theorem 9.6 does not give all possible self-adjoint extensions. For example, one could also choose

BC_a^1(f) = e^{iα} BC_b^1(f),   BC_a^2(f) = e^{iα} BC_b^2(f). (9.48)
The case α = 0 gives rise to periodic boundary conditions in the regular case.

Now we turn to the investigation of the spectrum of A. If τ is l.c. at both endpoints, then the spectrum of A is very simple.

Theorem 9.10. If τ is l.c. at both end points, then the resolvent is a Hilbert–Schmidt operator, that is,

∫_a^b ∫_a^b |G(z, x, y)|² r(y) dy r(x) dx < ∞. (9.49)

In particular, the spectrum of any self-adjoint extension is purely discrete and the eigenfunctions (which are simple) form an orthonormal basis.

Proof. This follows from the estimate

∫_a^b ( ∫_a^x |ub(x) ua(y)|² r(y) dy + ∫_x^b |ub(y) ua(x)|² r(y) dy ) r(x) dx
≤ 2 ∫_a^b |ua(y)|² r(y) dy ∫_a^b |ub(y)|² r(y) dy, (9.50)

which shows that the resolvent is Hilbert–Schmidt and hence compact.

If τ is not l.c. the situation is more complicated and we can only say something about the essential spectrum.

Theorem 9.11. All self-adjoint extensions have the same essential spectrum. Moreover, if A_{ac} and A_{cb} are self-adjoint extensions of τ restricted to (a, c) and (c, b) (for any c ∈ I), then

σ_ess(A) = σ_ess(A_{ac}) ∪ σ_ess(A_{cb}). (9.51)
Proof. Since (τ − i)u = 0 has two linearly independent solutions, the defect indices are at most two (they are zero if τ is l.p. at both end points, one if τ is l.c. at one and l.p. at the other end point, and two if τ is l.c. at both endpoints). Hence the first claim follows from Theorem 6.19. For the second claim restrict τ to the functions with compact support in (a, c) ∪ (c, b). Then this operator is the orthogonal sum of the operators A_{0,ac} and A_{0,cb}. Hence the same is true for the adjoints and hence the defect indices of A_{0,ac} ⊕ A_{0,cb} are at most four. Now note that A and A_{ac} ⊕ A_{cb} are both self-adjoint extensions of this operator. Thus the second claim also follows from Theorem 6.19.
Problem 9.3. Compute the spectrum and the resolvent of τ = −d²/dx², I = (0, ∞), defined on D(A) = {f ∈ D(τ) | f(0) = 0}.
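As a sanity check on resolvent formulas of the type (9.28)/(9.38), one can apply the whole-line Green's function G(z, x, y) = e^{−√−z|x−y|}/(2√−z) to a test function and verify (τ − z)f = g by finite differences. The grid, the value z = −1, and the Gaussian test function are hypothetical choices for illustration:

```python
import numpy as np

# Check of the free Green's function G(z,x,y) = e^{-sqrt(-z)|x-y|}/(2 sqrt(-z))
# at z = -1, where G(x,y) = e^{-|x-y|}/2: the function f = \int G(.,y) g(y) dy
# should satisfy -f'' - z f = g.
z = -1.0
h = 0.01
x = np.arange(-10.0, 10.0 + h/2, h)
g = np.exp(-x**2)                       # test function, negligible at the cut-off

K = np.exp(-np.abs(x[:, None] - x[None, :])) / 2.0
w = np.full_like(x, h); w[0] = w[-1] = h/2   # trapezoidal weights
f = K @ (w * g)                              # f = (A - z)^{-1} g

fpp = (f[2:] - 2*f[1:-1] + f[:-2]) / h**2    # second difference
residual = -fpp - z*f[1:-1] - g[1:-1]
print(np.abs(residual[100:-100]).max())      # small away from the boundary
```

The residual is dominated by the second-order quadrature and differencing errors, both O(h²).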
9.3. Spectral transformations
In this section we want to provide some fundamental tools for investigating the spectra of SturmLiouville operators and, at the same time, give some nice illustrations of the spectral theorem.
Example. Consider again τ = −d²/dx² on I = (−∞, ∞). From Section 7.2 we know that the Fourier transform maps the associated operator H0 to the multiplication operator with p² in L²(R). To get multiplication by λ, as in the spectral theorem, we set p = √λ and split the Fourier integral into a positive and a negative part:

(Uf)(λ) = ( (1/√(2π)) ∫_R e^{i√λ x} f(x) dx , (1/√(2π)) ∫_R e^{−i√λ x} f(x) dx ),   λ ∈ σ(H0) = [0, ∞). (9.52)

Then

U : L²(R) → ⊕_{j=1}^{2} L²(R, χ_{[0,∞)}(λ)/(2√λ) dλ) (9.53)
is the spectral transformation whose existence is guaranteed by the spectral theorem (Lemma 3.3).

Note that in the previous example the kernel e^{±i√λ x} of the integral transform U is just a pair of linearly independent solutions of the underlying differential equation (though not eigenfunctions, since they are not square integrable). More generally, if

U : L²(I, r dx) → L²(R, dµ),   f(x) → ∫_R u(λ, x) f(x) r(x) dx, (9.54)

is an integral transformation which maps a self-adjoint Sturm-Liouville operator A to multiplication by λ, then its kernel u(λ, x) is a solution of the underlying differential equation. This formally follows from UAf = λUf, which implies

∫_R u(λ, x) ((τ − λ)f)(x) r(x) dx = ∫_R ((τ − λ)u(λ, ·))(x) f(x) r(x) dx (9.55)

and hence (τ − λ)u(λ, ·) = 0.

Lemma 9.12. Suppose

U : L²(I, r dx) → ⊕_{j=1}^{k} L²(R, dµ_j) (9.56)

is a spectral mapping as in Lemma 3.3. Then U is of the form

(Uf)(λ) = ∫_a^b u(λ, x) f(x) r(x) dx, (9.57)
170 9.58).0 λuj (λ. x). Finally. dµj ).)[c. Since U is surjective.g. we can prescribe Fj arbitrarily. .66) where the limit is taken in L2 (R. If τ is l. a) = − sin(α).d] = cd. p(a)c (z.) is a solution of τ uj = λuj . x) and using the rescaling dµ(λ) = .0 ab.c. x) = s(z.11.67) and introduce two solution s(z. a) = cos(α). For simplicity we will only pursue the case where one endpoint. In particular.64) then we have l cj (λ)Fj (λ) = 0. d] ⊂ (a.c. Restricting everything to [c.) is τ is l. We choose a boundary condition cos(α)f (a) + sin(α)p(a)f (a) = 0 (9. x) and c(z. The general case can usually be reduced to this case by choosing c ∈ (a. there is a set Ωl such that µj (Ωl ) = 0 for 1 ≤ j ≤ l. are linearly independent for λ ∈ Ωl which shows k ≤ 2 since there are at most two linearly independent solutions. Please note that the integral in (9.)[c.d] ∈ D(A∗ ) and A∗ uj (λ. we can choose α = a and f to satisfy the boundary condition. Hence uj (λ. x) = γa (λ)s(λ. that is. If we assume the µj are ordered. if uj (λ. x) = 0 j=1 (9. .57) has to be understood as β Uj f (x) = lim α↓a. λ and every f ∈ D(A0 ).β↑b α uj (λ. One dimensional Schr¨dinger operators o for a. (9. e. j=1 Fj = Uj f.. 1 ≤ j ≤ l. x) must satisfy the boundary condition. p(a)s (z. ﬁx l ≤ k. a) = sin(α). . . near a.68) Note that s(z. a) = cos(α).e. Moreover. is regular. x) is the solution which satisﬁes the boundary condition at a. and uj (λ. c(z. there is only one linearly independent solution and thus k = 1. in our previous lemma we have u1 (λ.)[c. . x) of τ u = zu satisfying the initial conditions s(z. Suppose l cj (λ)uj (λ. x)f (x)r(x) dx. b) the above equation implies uj (λ. Moreover. say a. x). b) and splitting A as in Theorem 9. Similarly for (9. (9. (9.d] . we can choose ua (z. uj (λ.65) for every f . Fj (λ) = 1 for j = j0 and Fj (λ) = 0 else which shows cj0 (λ) = 0.
r dx) → L2 (R. the measure µ is the object of desire since it contains all the spectral information of A.e. R (9.. Suppose E ∈ σp (A) is an eigenvalue.e.71) Moreover. s(z)) = 1. that is SE (λ) = s(E) 2 . x) = γb (z)(c(z. (9. If A has only pure point spectrum (i. if τ is limit circle at both endpoints).69) s(λ. Then s(E. For arbitrary A. since U is unitary we have b s(E) 2 = a s(E. cos(α)ub (z.75) (9. (9. x) + mb (z)s(z. λ = E. For µ1 we know µ1 (R) = 1.. Since c(z. with inverse (U −1 F )(x) = (U f )(λ) = a s(λ. a) z ∈ ρ(A). only eigenvalues) this is straightforward as the following example shows.73) where dΘ is the Dirac measure centered at 0.70) Note however. x)F (λ)dµ(λ). x) and s(z.74) . In the general case we have to work a bit harder. we can write ub (z. Furthermore. In particular. but µ might not even be bounded! In fact. where mb (z) = cos(α)p(a)ub (z. but we know nothing about the measure µ. Example. a) + sin(α)p(a)ub (z. SE (λ)2 dµ(λ) = s(E) 4 µ({E}). Spectral transformations 171 γa (λ)2 dµa (λ) and (U1 f )(λ) = γa (λ)(U f )(λ) we obtain a unitary map b U : L2 (I. if A has pure point spectrum (e. So up to this point we have our spectral transformation U which maps A to multiplication by λ. it destroys the normalization of the measure µ.72) that is µ({E}) = s(E) In particular. x) is the corresponding eigenfunction and hence the same is true for SE (λ) = (U s(E))(λ). it turns out that µ is indeed unbounded. W (c(z). So our next aim must be to compute µ. λ = E . a) . x) are linearly independent solutions. x)2 r(x)dx = R −2 . SE (λ) = 0 for a. x)). a) − sin(α)ub (z. x)f (x)r(x)dx (9. that while this rescaling gets rid of the unknown factor γa (λ). dµ). j=1 (9. we have ∞ dµ(λ) = j=1 1 s(Ej ) 2 dΘ(λ − Ej ).9. the above formula holds at least for the pure point part µpp .3. λ=0 (9. σp (A) = {Ej }∞ .g. 0.
y)f (y)r(y) dy = a R s(λ. if z is an eigenvalue). Proof.76) since the same is true for ub (z. u∗ ) vanishes as x ↑ b. Now choose u(x) = ub (z. the right hand side is continuous with respect to x and so is the left hand side. x.172 9.e.77) Lemma 9. the constant γb (z) is of no importance and can be chosen equal to one.79) (clearly it is true for x = a. that is. x)F (λ) dµ(λ). x) = c(z. We have (U ub (z))(λ) = where ub (z.83) . v) (9. since both ub and u∗ are in b b D(τ ) near b. ub (z)∗ ) − 2Im(mb (z)). One dimensional Schr¨dinger operators o is known as WeylTitchmarsh mfunction.80) and observe that Wx (ub . x) + mb (z)s(z. 1 Proof. x)2 r(x) dx. Note that mb (z) is holomorphic in ρ(A) and that mb (z)∗ = mb (z ∗ ) (9. Here equality is to be understood in L2 . x) and v(x) = ub (z. v(x) of τ u = zu. λ−z (9. ub (z.81) G(z. x)∗ = ub (z ∗ . τ v = z v it is straightforˆ ward to check x (ˆ − z) z a u(y)v(y)r(y) dy = Wx (u. (9. x. x).13. (9. y)2 r(y) dy = Wx (ub (z). Lemma 9.75) only vanishes if ub (z. x) satisﬁes the boundary condition at a. x) is normalized as in (9. now diﬀerentiate with respect to x). x) is normalized as in (9.14. Moreover. that is for a. Hence in this case the formula holds for all x and we can choose x = a to obtain b sin(α) a ub (z. at least if F has compact support. However. x − 2Im(z) a ub (z. (9. The Weyl mfunction is a Herglotz function and satisﬁes b Im(mb (z)) = Im(z) a ub (z. λ−z (9. Given two solutions u(x). y)f (y)r(y) dy = sin(α) R F (λ) dµ(λ).78) where ub (z.82) where F = U f . v) − Wa (u. λ−z (9.77). x) (the denominator in (9.77). x). First of all note that from RA (z)f = U −1 λ−z U f we have b 1 .
∞).86) Moreover. 0)e− and thus √ −zx (9.85) and d = Re(mb (i)). where F has compact support.15.88) Proof. Spectral transformations 173 for all f . d Example. α = 0. Now combining the last two lemmas we infer from unitarity of U that b Im(mb (z)) = Im(z) a ub (z. The Weyl mfunction is given by mb (z) = d + R 1 λ − λ − z 1 + λ2 dµ(λ).3.82) with respect to x before setting x = a. δ b (9. (9.85) is a welldeﬁned holomorphic function in C\R. Then 2 Im(z) λ−z2 √ √ sin(α) c(z.86) and hence the right hand side 1 λ of (9. x) = ub (z.84) shows (9.91) (9. 1 + λ2 1 π λ+δ (9. µ is given by Stieltjes inversion formula µ(λ) = lim lim δ↓0 ε↓0 Im(mb (λ + iε))dλ. Choosing z = i in (9. Stieltjes inversion formula follows as in the case where the measure is bounded. x)2 r(x) dx.89) (9.84) λ − z2 and since a holomorphic function is determined up to a real constant by its imaginary part we obtain Theorem 9.9. x)2 r(x) dx = Im(z) R 1 dµ(λ) (9. R 1 dµ(λ) = Im(mb (i)) < ∞. (9. (9. x) = − sin(α) cos( zx) + √ sin( zx). Consider τ = − dx2 on I = (0. z ub (z.87) where Im(mb (λ + iε)) = ε a ub (λ + iε.90) Moreover. d ∈ R. the claim follows if we can cancel sin(α). To see the case α = 0. ﬁrst diﬀerentiate (9. By Im( λ−z − 1+λ2 ) = its imaginary part coincides with that of mb (z) and hence equality follows. x) = cos(α) cos( zx) + √ sin( zx) z and √ √ cos(α) s(z. that is.92) √ −z cos(α) + sin(α) mb (z) = √ −z sin(α) − cos(α) . Since these functions are dense.
Let ϕ be some continuous positive function supported in [−1. y) − Gk (z. 1 Formally it follows by choosing x = a in ub (z. 1] with ϕ(x)dx = 1 and set ϕk (x) = k ϕ(k x).96) (9. x. a more careful analysis is needed. x. b) as Im(z) → ∞. y) − Gk (z.98) . since we know equality only for a. We ﬁrst observe (Problem 9. One dimensional Schr¨dinger operators o respectively dµ(λ) = λ dλ. x. b). For simplicity we only consider the case p = k ≡ 1. mb (z) = cot(α) + We conclude this section with some asymptotics for large z away from the spectrum. x) = (U −1 λ−z )(x). y) = ϕk (. y) k ϕ 2 + G(z. y) = 0 for every x.93) Note that if α = 0 we even have and hence < 0 in the previous example 1 dµ(λ) (9.e. ≤ Im(z) where Gk (z.17. One can show that this remains true in the general case. x. y) + G(z. Then.14). y) = s(z. Proof. We have 1 sin(α) √−z(x−a) 1 c(z. y ∈ (a. (9. G(z. y) ≤ Gk (z. − x). however. y).174 9. The Green’s function of A G(z.94) R λ−z in this case (the factor − cot(α) follows by considering the limit z → ∞ of both sides). x. x. satisﬁes lim Im(z)→∞ x ≤ y. (9. y). x.97) G(z. x) = − sin(α) + √ e −z(x−a) (1 + O( √ )) (9.95) 2 −z −z for ﬁxed x ∈ (a.4): Lemma 9. x. x. x)ub (z. Furthermore we have Lemma 9. by RA (z) ≤ Im(z)−1 (see Theorem 2.16. RA (z)ϕk (.99) (9. x. x) = cos(α) + √ e (1 + O( √ )) 2 −z −z √ 1 cos(α) 1 s(z. − y) . π(cos(α)2 + λ sin(α)2 ) 1 λ−z dµ(λ) √ (9.
Spectral transformations 175 Since G(z. −z α=0 α=0 (9. x. y) − Gk (z. y) ≤ ε an the claim follows.101) s(z. Note that assuming q ∈ C k ([a.9. y) ≤ 2 and then Im(z) so large such that k ϕ 2 Im(z) ε ≤ 2 . x) = cos(α) cosh( −zx) + √ sinh( −zx) −z x √ 1 −√ sinh( −z(x − y))q(y)c(z. Problem 9.2 and consider c(z. x) form which the claim follows. x) = e− ˜ √ −zx c(z. x. y) we can ﬁrst choose k suﬃciently large ε such that G(z. (Hint: Use that √ √ sin(α) c(z.) cot(α) + O( √1 ). x) and s(z.18. x) from Lemma 9. Plugging the asymptotic expansions from Lemma 9.16 into Lemma 9. x). y) = limk→∞ Gk (z. y)dy −z a by Lemma 9. −z √ − −z + O( √1 ). Prove the asymptotics for c(z. x) mb (z) = − + o(e−2 −z(x−a) ) (9. x.16 and hence also in the expansion of mb (z). b)) one can obtain further asymptotic terms in Lemma 9. Combining the last two lemmas we also obtain Lemma 9. x.3. Hence G(z.17 we see √ c(z.4. Proof. The Weyl mfunction satisﬁes mb (z) = as Im(z) → ∞.16.100) . x.
.
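The half-line example above (τ = −d²/dx² on (0, ∞)) makes the Herglotz property and Stieltjes inversion formula concrete: (1/π) Im m_b(λ + iε) should converge to the spectral density √λ/(π(cos²α + λ sin²α)) as ε ↓ 0. A numerical sketch, where the boundary angle α = 0.3 and the grids are hypothetical choices:

```python
import numpy as np

# Boundary values of the Weyl m-function
#   m_b(z) = (sqrt(-z) cos a + sin a) / (sqrt(-z) sin a - cos a)
# should reproduce the spectral density
#   dmu/dlam = sqrt(lam) / (pi (cos^2 a + lam sin^2 a))
# via (1/pi) Im m_b(lam + i eps) as eps -> 0.
def mb(z, a):
    s = np.sqrt(-z + 0j)   # principal branch: Re(sqrt(-z)) > 0 off [0, infty)
    return (s*np.cos(a) + np.sin(a)) / (s*np.sin(a) - np.cos(a))

a = 0.3                                # hypothetical boundary-condition angle
lam = np.linspace(0.1, 5.0, 50)
eps = 1e-8
density = np.imag(mb(lam + 1j*eps, a)) / np.pi
exact = np.sqrt(lam) / (np.pi*(np.cos(a)**2 + lam*np.sin(a)**2))
print(np.abs(density - exact).max())   # O(eps)
```

For λ > 0 one has √(−λ − i0) = −i√λ on the principal branch, which is what makes Im m_b nonzero exactly on the spectrum [0, ∞).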
Chapter 10

One-particle Schrödinger operators

10.1. Self-adjointness and spectrum

Our next goal is to apply these results to Schrödinger operators. The Hamiltonian of one particle in d dimensions is given by

H = H0 + V, (10.1)

where V : R^d → R is the potential energy of the particle. We are mainly interested in the case 1 ≤ d ≤ 3 and want to find classes of potentials which are relatively bounded respectively relatively compact. To do this we need a better understanding of the functions in the domain of H0.

Lemma 10.1. Suppose n ≤ 3 and ψ ∈ H²(R^n). Then ψ ∈ C∞(R^n) and for any a > 0 there is a b > 0 such that

‖ψ‖∞ ≤ a ‖H0ψ‖ + b ‖ψ‖. (10.2)

Proof. The important observation is that (p² + γ²)⁻¹ ∈ L²(R^n) if n ≤ 3. Hence, since (p² + γ²)ψ̂ ∈ L²(R^n), the Cauchy-Schwarz inequality

‖ψ̂‖₁ = ‖(p² + γ²)⁻¹ (p² + γ²)ψ̂(p)‖₁ ≤ ‖(p² + γ²)⁻¹‖ ‖(p² + γ²)ψ̂(p)‖ (10.3)

shows ψ̂ ∈ L¹(R^n). But now everything follows from the Riemann-Lebesgue lemma

‖ψ‖∞ ≤ (2π)^{−n/2} ‖(p² + γ²)⁻¹‖ ( ‖p²ψ̂(p)‖ + γ² ‖ψ̂(p)‖ )
= (γ/2π)^{n/2} ‖(p² + 1)⁻¹‖ ( γ⁻² ‖H0ψ‖ + ‖ψ‖ ), (10.4)

which
finishes the proof.

Now we come to our first result.

Theorem 10.2. Let V be real-valued, and V ∈ L∞(R^n) if n > 3 and V ∈ L∞(R^n) + L²(R^n) if n ≤ 3. Then V is relatively compact with respect to H0. In particular,

H = H0 + V,   D(H) = H²(R^n), (10.5)

is self-adjoint, bounded from below and

σ_ess(H) = [0, ∞). (10.6)

Moreover, C0∞(R^n) is a core for H.

Proof. Our previous lemma shows D(H0) ⊆ D(V) and the rest follows from Lemma 7.10 using f(p) = (p² − z)⁻¹ and g(x) = V(x). Note that f ∈ L∞(R^n) ∩ L²(R^n) for n ≤ 3.

Observe that since C_c∞(R^n) ⊆ D(H0), we must have V ∈ L²_loc(R^n) if D(V) ⊆ D(H0).

10.2. The hydrogen atom

We begin with the simple model of a single electron in R³ moving in the external potential V generated by a nucleus (which is assumed to be fixed at the origin). If one takes only the electrostatic force into account, then V is given by the Coulomb potential and the corresponding Hamiltonian is given by

H^(1) = −Δ − γ/|x|,   D(H^(1)) = H²(R³). (10.7)

If the potential is attracting, that is, if γ > 0, then it describes the hydrogen atom and is probably the most famous model in quantum mechanics.

As domain we have chosen D(H^(1)) = D(H0) ∩ D(1/|x|) = D(H0) and by Theorem 10.2 we conclude that H^(1) is self-adjoint. Moreover, Theorem 10.2 also tells us

σ_ess(H^(1)) = [0, ∞) (10.8)

and that H^(1) is bounded from below,

E0 = inf σ(H^(1)) > −∞. (10.9)

If γ ≤ 0 we have H^(1) ≥ 0 and hence E0 = 0, but if γ > 0 we might have E0 < 0 and there might be some discrete eigenvalues below the essential spectrum.
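The key observation in the proof of Lemma 10.1 — that (p² + γ²)⁻¹ ∈ L²(R^n) for n ≤ 3 — can be made quantitative for n = 3, γ = 1, where the integral evaluates to π². A short check (the substitution p = tan t is just a convenient change of variables):

```python
import numpy as np

# For n = 3 and gamma = 1:
#   \int_{R^3} dp/(p^2+1)^2 = 4 pi \int_0^infty p^2/(1+p^2)^2 dp,
# and the substitution p = tan(t) turns the radial integral into
#   \int_0^{pi/2} sin^2(t) dt = pi/4,
# so the full integral equals pi^2.
t = np.linspace(0.0, np.pi/2, 100001)
y = np.sin(t)**2
dt = t[1] - t[0]
radial = dt*(y.sum() - 0.5*(y[0] + y[-1]))  # trapezoidal rule, = pi/4
value = 4*np.pi*radial
print(value, np.pi**2)                      # both close to 9.8696...
```

For n ≥ 4 the radial integrand grows like p^{n−1}/p⁴, so the same integral diverges, which is why the lemma is restricted to n ≤ 3.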
In order to say more about the eigenvalues of H⁽¹⁾ we will use the fact that both H0 and V⁽¹⁾ = −γ/|x| have a simple behavior with respect to scaling. Consider the dilation group

U(s)ψ(x) = e^{−ns/2} ψ(e^{−s}x), s ∈ R, (10.10)

which is a strongly continuous one-parameter unitary group. The generator can be easily computed:

Dψ(x) = (1/2)(xp + px)ψ(x) = (xp − in/2)ψ(x), ψ ∈ S(Rⁿ). (10.11)

Now let us investigate the action of U(s) on H⁽¹⁾:

H⁽¹⁾(s) = U(−s)H⁽¹⁾U(s) = e^{−2s}H0 + e^{−s}V⁽¹⁾, D(H⁽¹⁾(s)) = D(H⁽¹⁾). (10.12)

Now suppose Hψ = λψ. Then

⟨ψ, [U(s), H]ψ⟩ = ⟨U(−s)ψ, Hψ⟩ − ⟨Hψ, U(s)ψ⟩ = 0 (10.13)

and hence

0 = lim_{s→0} (1/s) ⟨ψ, [U(s), H]ψ⟩ = lim_{s→0} ⟨U(−s)ψ, ((H − H(s))/s) ψ⟩ = ⟨ψ, (2H0 + V⁽¹⁾)ψ⟩. (10.14)

Thus we have proven the virial theorem.

Theorem 10.3. Suppose H = H0 + V with U(−s)V U(s) = e^{−s}V. Then any normalized eigenfunction ψ corresponding to an eigenvalue λ satisfies

λ = −⟨ψ, H0ψ⟩ = (1/2) ⟨ψ, Vψ⟩. (10.15)

In particular, all eigenvalues must be negative.

This result even has some further consequences for the point spectrum of H⁽¹⁾.

Corollary 10.4. Suppose γ > 0. Then

σp(H⁽¹⁾) = σd(H⁽¹⁾) = {Ej}_{j∈N0}, E0 < Ej < Ej+1 < 0, (10.16)

with lim_{j→∞} Ej = 0.

Proof. Choose ψ ∈ C∞_c(R³\{0}) and set ψ(s) = U(−s)ψ. Then

⟨ψ(s), H⁽¹⁾ψ(s)⟩ = e^{−2s}⟨ψ, H0ψ⟩ + e^{−s}⟨ψ, V⁽¹⁾ψ⟩, (10.17)

which is negative for s large. Now choose a sequence sₙ → ∞ such that supp(ψ(sₙ)) ∩ supp(ψ(sₘ)) = ∅ for n ≠ m. Then Theorem 4.11 (i) shows that rank(P_{H⁽¹⁾}((−∞, 0))) = ∞. Since each eigenvalue Ej has finite multiplicity (it lies in the discrete spectrum), there must be an infinite number of eigenvalues which accumulate at 0. □
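As a numerical aside (not part of the mathematical development; the sketch assumes Python with NumPy), the virial theorem can be checked for the hydrogen ground state. With ψ proportional to e^{−γr/2} (so that, for l = 0, the reduced radial function is u(r) = r e^{−γr/2}) one has ⟨ψ, H0ψ⟩ = ∫u′² dr / ∫u² dr = γ²/4 and ⟨ψ, V⁽¹⁾ψ⟩ = −γ ∫u²/r dr / ∫u² dr = −γ²/2, so that λ = −⟨H0⟩ = ½⟨V⁽¹⁾⟩ = −γ²/4:

```python
import numpy as np

gamma = 1.5
r = np.linspace(1e-6, 80.0, 400_001)
dr = r[1] - r[0]

def trap(y):
    # trapezoidal rule on the radial grid
    return (y[:-1] + y[1:]).sum() * dr / 2

u = r * np.exp(-gamma * r / 2)   # reduced radial ground state, l = 0
du = np.gradient(u, dr)          # u'(r)

norm = trap(u**2)
kinetic = trap(du**2) / norm                 # ⟨ψ, H0 ψ⟩, should be γ²/4
potential = -gamma * trap(u**2 / r) / norm   # ⟨ψ, V⁽¹⁾ ψ⟩, should be −γ²/2

print(kinetic, potential)   # virial: λ = −kinetic = potential/2
```

Both quadratures converge quickly since u decays exponentially; the identity λ = −⟨H0⟩ = ½⟨V⁽¹⁾⟩ holds to the accuracy of the grid.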
If γ ≤ 0 we have σd(H⁽¹⁾) = ∅ since H⁽¹⁾ ≥ 0 in this case. Hence we have gotten a quite complete picture of the spectrum of H⁽¹⁾.

Next, we could try to compute the eigenvalues of H⁽¹⁾ (in the case γ > 0) by solving the corresponding eigenvalue equation, which is given by the partial differential equation

−Δψ(x) − (γ/|x|) ψ(x) = λψ(x). (10.18)

For a general potential this is hopeless, but in our case we can use the rotational symmetry of our operator to reduce our partial differential equation to ordinary ones.

First of all, it suggests itself to switch to spherical coordinates (x1, x2, x3) → (r, θ, φ),

x1 = r sin(θ) cos(φ), x2 = r sin(θ) sin(φ), x3 = r cos(θ), (10.19)

which correspond to a unitary transform

L²(R³) → L²((0, ∞), r²dr) ⊗ L²((0, π), sin(θ)dθ) ⊗ L²((0, 2π), dφ). (10.20)

In these new coordinates (r, θ, φ) our operator reads

H⁽¹⁾ = −(1/r²) (∂/∂r) r² (∂/∂r) + (1/r²) L² + V(r), V(r) = −γ/r, (10.21)

where

L² = L1² + L2² + L3² = −(1/sin(θ)) (∂/∂θ) sin(θ) (∂/∂θ) − (1/sin(θ)²) (∂²/∂φ²). (10.22)

(Recall the angular momentum operators Lj from Section 8.2.)

Making the product ansatz (separation of variables)

ψ(r, θ, φ) = R(r)Θ(θ)Φ(φ) (10.23)

we obtain the following three Sturm–Liouville equations

(−(1/r²) (d/dr) r² (d/dr) + l(l+1)/r² + V(r)) R(r) = λR(r),
(−(1/sin(θ)) (d/dθ) sin(θ) (d/dθ) + m²/sin(θ)²) Θ(θ) = l(l+1)Θ(θ),
−(d²/dφ²) Φ(φ) = m²Φ(φ). (10.24)

The form chosen for the constants l(l+1) and m² is for convenience later on. These equations will be investigated in the following sections.
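As a numerical aside (not part of the mathematical development; the sketch assumes Python with NumPy), the spherical form (10.21)–(10.22) of the Laplacian can be checked on a function of the product type (10.23). The test function f(x) = x₃ e^{−r²} = r cos(θ) e^{−r²} is our own choice; it corresponds to R(r) = r e^{−r²} with angular part cos(θ), i.e. l = 1, so −Δf should equal (−R″ − 2R′/r + 2R/r²) cos(θ). Both sides are evaluated by finite differences:

```python
import numpy as np

h = 1e-3
f = lambda x, y, z: z * np.exp(-(x*x + y*y + z*z))

def lap_cartesian(x, y, z):
    # standard 7-point finite-difference Laplacian
    return (f(x+h, y, z) + f(x-h, y, z) + f(x, y+h, z) + f(x, y-h, z)
            + f(x, y, z+h) + f(x, y, z-h) - 6*f(x, y, z)) / h**2

def lap_spherical(x, y, z):
    # radial part (1/r²)(r²R')' − l(l+1)R/r² with R(r) = r e^{−r²} and l = 1,
    # multiplied by the angular factor cos(θ) = z/r
    r = np.sqrt(x*x + y*y + z*z)
    R = lambda s: s * np.exp(-s*s)
    d2R = (R(r+h) - 2*R(r) + R(r-h)) / h**2
    dR = (R(r+h) - R(r-h)) / (2*h)
    radial = d2R + 2*dR/r - 2*R(r)/r**2   # (1/r²)(r²R')' = R'' + 2R'/r
    return radial * z / r

pts = [(0.3, -0.7, 0.5), (1.1, 0.2, -0.4), (-0.6, 0.9, 1.3)]
for p in pts:
    print(lap_cartesian(*p), lap_spherical(*p))
```

Both evaluations also agree with the analytic value Δf = (4r² − 10) x₃ e^{−r²} up to the O(h²) truncation error of the stencils.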
10.3. Angular momentum

We start by investigating the equation for Φ(φ), which is associated with the Sturm–Liouville equation

τΦ = −Φ″, I = (0, 2π). (10.25)

Since we want ψ defined via (10.23) to be in the domain of H0 (in particular continuous), we choose periodic boundary conditions, that is,

AΦ = τΦ, D(A) = {Φ ∈ L²(0, 2π) | Φ ∈ AC¹[0, 2π], Φ(0) = Φ(2π), Φ′(0) = Φ′(2π)}. (10.26)

From our analysis in Section 9.1 we immediately obtain

Theorem 10.5. The operator A defined via (10.26) is self-adjoint. Its spectrum is purely discrete, that is,

σ(A) = σd(A) = {m² | m ∈ Z}, (10.27)

and the corresponding eigenfunctions

Φm(φ) = (1/√(2π)) e^{imφ}, m ∈ Z, (10.28)

form an orthonormal basis for L²(0, 2π).

Note that, except for the lowest eigenvalue, all eigenvalues are twice degenerate. We also note that this operator is essentially the square of the angular momentum in the third coordinate direction, since in polar coordinates

L3 = (1/i) (∂/∂φ). (10.29)

Now we turn to the equation for Θ(θ):

τmΘ(θ) = (1/sin(θ)) (−(d/dθ) sin(θ) (d/dθ) + m²/sin(θ)) Θ(θ), I = (0, π), m ∈ N0. (10.30)

For the investigation of the corresponding operator we use the unitary transform

L²((0, π), sin(θ)dθ) → L²((−1, 1), dx), Θ(θ) → f(x) = Θ(arccos(x)). (10.31)

The operator τm transforms to the somewhat simpler form

τm = −(d/dx)(1 − x²)(d/dx) + m²/(1 − x²). (10.32)
The corresponding eigenvalue equation

τm u = l(l + 1)u (10.33)

is the associated Legendre equation. For l ∈ N0, l ≥ m, it is solved by the associated Legendre functions

Plm(x) = (1 − x²)^{m/2} (d^m/dx^m) Pl(x), (10.34)

where

Pl(x) = (1/(2^l l!)) (d^l/dx^l) (x² − 1)^l (10.35)

are the Legendre polynomials. This is straightforward to check. Moreover, note that Pl(x) are (nonzero) polynomials of degree l. A second, linearly independent solution is given by

Qlm(x) = Plm(x) ∫^x dt / ((1 − t²)Plm(t)²). (10.36)

In fact, for every Sturm–Liouville equation, v(x) = u(x) ∫^x dt/(p(t)u(t)²) satisfies τv = 0 whenever τu = 0.

Now fix l = 0 and note P0(x) = 1. For m = 0 we have Q00 = arctanh(x) ∈ L², and so τ0 is l.c. at both endpoints. For m > 0 we have Q0m = (x ± 1)^{−m/2}(C + O(x ± 1)), which shows that it is not square integrable. Thus τm is l.c. for m = 0 and l.p. for m > 0 at both endpoints. In order to make sure that the eigenfunctions for m = 0 are continuous (such that ψ defined via (10.23) is continuous), we choose the boundary condition generated by P0(x) = 1 in this case:

Am f = τf, D(Am) = {f ∈ L²(−1, 1) | f ∈ AC¹(−1, 1), τf ∈ L²(−1, 1), lim_{x→±1}(1 − x²)f′(x) = 0}. (10.37)

Theorem 10.6. The operator Am, m ∈ N0, defined via (10.37) is self-adjoint. Its spectrum is purely discrete, that is,

σ(Am) = σd(Am) = {l(l + 1) | l ∈ N0, l ≥ m}, (10.38)

and the corresponding eigenfunctions

ulm(x) = √( ((2l + 1)/2) ((l − m)!/(l + m)!) ) Plm(x), l ≥ m, (10.39)

form an orthonormal basis for L²(−1, 1).

Proof. By Theorem 9.6, Am is self-adjoint. Moreover, Plm is an eigenfunction corresponding to the eigenvalue l(l + 1) and it suffices to show that the Plm form a basis. To prove this, it suffices to show that the functions Plm(x) are dense. Since (1 − x²) > 0 for x ∈ (−1, 1), it suffices to show that the functions (1 − x²)^{−m/2}Plm(x) are dense. But the span of these functions
contains every polynomial. Every continuous function can be approximated by polynomials (in the sup norm and hence in the L² norm) and since the continuous functions are dense, so are the polynomials. The only thing remaining is the normalization of the eigenfunctions, which can be found in any book on special functions. □

Returning to our original setting we conclude that

Θlm(θ) = √( ((2l + 1)/2) ((l − m)!/(l + m)!) ) Plm(cos(θ)), l = m, m + 1, . . . , (10.40)

form an orthonormal basis for L²((0, π), sin(θ)dθ) for any fixed m ∈ N0.

Theorem 10.7. The operator L² on L²((0, π), sin(θ)dθ) ⊗ L²((0, 2π)) has a purely discrete spectrum given by

σ(L²) = {l(l + 1) | l ∈ N0}. (10.41)

The spherical harmonics

Ylm(θ, φ) = Θ_{l|m|}(θ)Φm(φ) = √( ((2l + 1)/(4π)) ((l − |m|)!/(l + |m|)!) ) P_{l|m|}(cos(θ)) e^{imφ}, |m| ≤ l, (10.42)

form an orthonormal basis and satisfy L²Ylm = l(l + 1)Ylm and L3Ylm = mYlm.

Proof. Everything follows from our construction, if we can show that the Ylm form a basis. But this follows as in the proof of Lemma 1.9. □

Note that transforming Ylm back to Cartesian coordinates gives

Y_{l,±m}(x) = √( ((2l + 1)/(4π)) ((l − m)!/(l + m)!) ) P̃lm(x3/r) ((x1 ± ix2)/r)^m, r = |x|, (10.43)

where P̃lm is a polynomial of degree l − m given by

P̃lm(x) = (1 − x²)^{−m/2} Plm(x) = (d^{l+m}/dx^{l+m}) (1 − x²)^l. (10.44)

In particular, the Ylm are smooth away from the origin and by construction they satisfy

−ΔYlm = (l(l + 1)/r²) Ylm. (10.45)
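As a numerical aside (not part of the mathematical development; the sketch assumes Python with NumPy), the normalization entering (10.39) and (10.40) can be checked for m = 0: generating the Legendre polynomials through the standard three-term recurrence (l+1)P_{l+1}(x) = (2l+1)xP_l(x) − lP_{l−1}(x) and integrating numerically reproduces ∫₋₁¹ Pl(x)P_{l′}(x) dx = 2/(2l+1) δ_{ll′}:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200_001)
dx = x[1] - x[0]

# Legendre polynomials via the three-term recurrence
P = [np.ones_like(x), x.copy()]
for l in range(1, 8):
    P.append(((2*l + 1) * x * P[l] - l * P[l-1]) / (l + 1))

def inner(f, g):
    # trapezoidal rule for ∫_{-1}^{1} f g dx
    h = f * g
    return (h[:-1] + h[1:]).sum() * dx / 2

for l in range(8):
    for k in range(l + 1):
        print(l, k, inner(P[l], P[k]))
```

Off-diagonal inner products vanish to grid accuracy, and the diagonal ones match 2/(2l+1), consistent with the factor √((2l+1)/2) in (10.39) for m = 0.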
10.4. The eigenvalues of the hydrogen atom

Now we want to use the considerations from the previous section to decompose the Hamiltonian of the hydrogen atom. In fact, we can even admit any spherically symmetric potential V(x) = V(|x|) with

V(r) ∈ L∞_∞((0, ∞)) + L²((0, ∞), r²dr). (10.46)

The important observation is that the spaces

Hlm = {ψ(x) = R(r)Ylm(θ, φ) | R(r) ∈ L²((0, ∞), r²dr)} (10.47)

reduce our operator H = H0 + V. Hence

H = H0 + V = ⊕_{l,m} H̃l, (10.48)

where

H̃l R(r) = τ̃l R(r), τ̃l = −(1/r²) (d/dr) r² (d/dr) + l(l + 1)/r² + V(r), D(H̃l) ⊆ L²((0, ∞), r²dr). (10.49)

Using the unitary transformation

L²((0, ∞), r²dr) → L²((0, ∞)), R(r) → u(r) = rR(r), (10.50)

our operator transforms to

Al f = τl f, τl = −d²/dr² + l(l + 1)/r² + V(r), D(Al) ⊆ L²((0, ∞)). (10.51)

It remains to investigate this operator. At ∞ we are in the l.p. case for any l ∈ N0; at 0 we are in the l.p. case only if l > 0. Hence we need an additional boundary condition at 0 if l = 0. Since we need R(r) = u(r)/r to be bounded (such that (10.23) is in the domain of H0), we have to take the boundary condition generated by u(r) = r, that is,

D(Al) = {f ∈ L²(I) | f, f′ ∈ AC(I), τf ∈ L²(I), lim_{r→0}(f(r) − rf′(r)) = 0 if l = 0}, I = (0, ∞). (10.52)

Theorem 10.8. The domain of the operator Al is given by (10.52).

Proof. We know at least D(Al) ⊆ D(τ), and since D(H) = D(H0) it suffices to consider the case V = 0. In this case the solutions of −u″(r) + (l(l+1)/r²)u(r) = 0 are given by u(r) = αr^{l+1} + βr^{−l}, and the stated boundary condition singles out the solution u(r) = r for l = 0. Moreover, by construction of Al we know that it is self-adjoint and satisfies σess(Al) = [0, ∞). □
Finally, let us turn to some explicit choices for V, where the corresponding differential equation can be explicitly solved. The simplest case is V = 0. In this case the solutions of

−u″(r) + (l(l + 1)/r²) u(r) = zu(r) (10.53)

are given by the spherical Bessel respectively spherical Neumann functions,

u(r) = α jl(√z r) + β nl(√z r), (10.54)

where

jl(r) = (−r)^l ((1/r)(d/dr))^l (sin(r)/r). (10.55)

In particular,

ua(z, r) = jl(√z r) and ub(z, r) = jl(√z r) + i nl(√z r) (10.56)

are the solutions which are square integrable and satisfy the boundary condition (if any) near a = 0 and b = ∞, respectively.

The second case is that of our Coulomb potential

V(r) = −γ/r, γ > 0, (10.57)

where we will try to compute the eigenvalues plus corresponding eigenfunctions. It turns out that they can be expressed in terms of the Laguerre polynomials

Lj(r) = e^r (d^j/dr^j) (e^{−r} r^j) (10.58)

and the associated Laguerre polynomials

L^k_j(r) = (d^k/dr^k) Lj(r). (10.59)

Note that L^k_j is a polynomial of degree j − k.

Theorem 10.9. The eigenvalues of H⁽¹⁾ are explicitly given by

En = −(γ/(2(n + 1)))², n ∈ N0. (10.60)

An orthonormal basis for the corresponding eigenspace is given by

ψnlm(x) = Rnl(r)Ylm(x), 0 ≤ l ≤ n, |m| ≤ l, (10.61)

where

Rnl(r) = √( γ³(n − l)! / (2(n + 1)⁴((n + l + 1)!)³) ) (γr/(n + 1))^l e^{−γr/(2(n+1))} L^{2l+1}_{n+l+1}(γr/(n + 1)). (10.62)

In particular, the lowest eigenvalue E0 = −γ²/4 is simple and the corresponding eigenfunction ψ000(x) = √(γ³/(8π)) e^{−γr/2} is positive.
Proof. It is a straightforward calculation to check that the Rnl are indeed eigenfunctions of Al corresponding to the eigenvalue −(γ/(2(n + 1)))², and for the norming constants we refer to any book on special functions. The only problem is to show that we have found all eigenvalues.

Since all eigenvalues are negative, we need to look at the equation

−u″(r) + (l(l + 1)/r² − γ/r) u(r) = λu(r) (10.63)

for λ < 0. Introducing new variables x = √(−λ) r and v(x) = x^{−(l+1)} e^{x} u(x/√(−λ)), this equation transforms into

x v″(x) + 2(l + 1 − x) v′(x) + 2n v(x) = 0, n = γ/(2√(−λ)) − (l + 1). (10.64)

Now let us search for a solution which can be expanded into a convergent power series

v(x) = Σ_{j=0}^∞ vj x^j, v0 = 1. (10.65)

The corresponding u(r) is square integrable near 0 and satisfies the boundary condition (if any). Thus we need to find those values of λ for which it is square integrable near +∞.

Substituting the ansatz (10.65) into our differential equation and comparing powers of x gives the following recursion for the coefficients,

v_{j+1} = (2(j − n)/((j + 1)(j + 2(l + 1)))) vj, (10.66)

and thus

vj = (1/j!) Π_{k=0}^{j−1} 2(k − n)/(k + 2(l + 1)). (10.67)

Now there are two cases to distinguish. If n ∈ N0, then vj = 0 for j > n and v(x) is a polynomial. In this case u(r) is square integrable and hence an eigenfunction corresponding to the eigenvalue λn = −(γ/(2(n + l + 1)))². Otherwise we have vj ≥ (2 − ε)^j/j! for j sufficiently large. Hence, by adding a polynomial to v(x), we can get a function ṽ(x) such that ṽj ≥ (2 − ε)^j/j! for all j. But then ṽ(x) ≥ exp((2 − ε)x), and thus the corresponding u(r) is not square integrable near +∞. □
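As a numerical aside (not part of the mathematical development; the sketch assumes Python), the dichotomy in the proof can be observed directly from the recursion (10.66): for n ∈ N0 the coefficients terminate and v is a polynomial, while otherwise infinitely many coefficients are nonzero and behave like those of an exponential:

```python
def coeffs(n, l, jmax=30):
    """Coefficients of v(x) from v_{j+1} = 2(j-n)/((j+1)(j+2(l+1))) v_j, v_0 = 1."""
    v = [1.0]
    for j in range(jmax):
        v.append(2 * (j - n) / ((j + 1) * (j + 2 * (l + 1))) * v[j])
    return v

poly = coeffs(3, 0)        # n = 3 ∈ N0: the recursion terminates at degree 3
print(poly[:6])            # v_4 = v_5 = ... = 0

generic = coeffs(2.5, 0)   # n ∉ N0: no coefficient ever vanishes
print(all(c != 0 for c in generic))
```

In the first case the factor 2(j − n) vanishes at j = n and all later coefficients are exactly zero; in the second case the ratio v_{j+1}/v_j behaves like 2/j for large j, which is the growth of the Taylor coefficients of e^{2x}, as used in the proof.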
10.5. Nondegeneracy of the ground state

The lowest eigenvalue (below the essential spectrum) of a Schrödinger operator is called the ground state. Since the laws of physics state that a quantum system will transfer energy to its surroundings (e.g., an atom emits radiation) until it eventually reaches its ground state, this state is in some sense the most important state. We have seen that the hydrogen atom has a nondegenerate (simple) ground state with a corresponding positive eigenfunction. In particular, the hydrogen atom is stable in the sense that there is a lowest possible energy. This is quite surprising since the corresponding classical mechanical system is not: the electron could fall into the nucleus!

Our aim in this section is to show that the ground state is simple with a corresponding positive eigenfunction. Note that it suffices to show that any ground state eigenfunction is positive, since nondegeneracy then follows for free: two positive functions cannot be orthogonal.

To set the stage let us introduce some notation. Let H = L²(Rⁿ). We call f ∈ L²(Rⁿ) positive if f ≥ 0 a.e. and f ≠ 0. We call f strictly positive if f > 0 a.e. A bounded operator A is called positivity preserving if f ≥ 0 implies Af ≥ 0 and positivity improving if f positive implies Af > 0. Clearly A is positivity preserving (improving) if and only if ⟨f, Ag⟩ ≥ 0 (> 0) for f, g ≥ 0 (f, g ≠ 0).

Example. Multiplication by a positive function is positivity preserving (but not improving). Convolution with a strictly positive function is positivity improving.

We first show that positivity improving operators have positive eigenfunctions.

Theorem 10.10. Suppose A ∈ L(L²(Rⁿ)) is a self-adjoint, positivity improving and real (i.e., it maps real functions to real functions) operator. If ‖A‖ is an eigenvalue, then it is simple and the corresponding eigenfunction is strictly positive.

Proof. Let ψ be an eigenfunction. It is no restriction to assume that ψ is real (since A is real, both real and imaginary parts of ψ are eigenfunctions as well). We assume ‖ψ‖ = 1 and denote by ψ± = (|ψ| ± ψ)/2 the positive and negative parts of ψ. Then, by |Aψ| = |Aψ+ − Aψ−| ≤ Aψ+ + Aψ− = A|ψ|, we have

‖A‖ = ⟨ψ, Aψ⟩ ≤ ⟨|ψ|, A|ψ|⟩ ≤ ‖A‖, (10.68)

that is, ⟨ψ, Aψ⟩ = ⟨|ψ|, A|ψ|⟩ and thus

⟨ψ+, Aψ−⟩ = (1/4)( ⟨|ψ|, A|ψ|⟩ − ⟨ψ, Aψ⟩ ) = 0. (10.69)

Consequently ψ− = 0 or ψ+ = 0, since otherwise Aψ− > 0 and hence also ⟨ψ+, Aψ−⟩ > 0. Without restriction ψ = ψ+ ≥ 0, and since A is positivity improving we even have ψ = ‖A‖⁻¹ Aψ > 0. In particular, the eigenvalue is simple, since two strictly positive functions cannot be orthogonal. □

So we need a positivity improving operator. By (7.42) and (7.43), both e^{−tH0}, t > 0, and Rλ(H0), λ < 0, are positivity improving, since they are given by convolution with a strictly positive function. Our hope is that this property carries over to H = H0 + V.

Theorem 10.11. Suppose H = H0 + V is self-adjoint and bounded from below with C∞_c(Rⁿ) as a core. If E0 = min σ(H) is an eigenvalue, then it is simple and the corresponding eigenfunction is strictly positive.

Proof. We first show that e^{−tH}, t > 0, is positivity preserving. If we set Vn = V χ_{{x | |V(x)| ≤ n}}, then Vn is bounded and Hn = H0 + Vn is positivity preserving by the Trotter product formula, since both e^{−tH0} and e^{−tVn} are. Moreover, we have Hnψ → Hψ for ψ ∈ C∞_c(Rⁿ) (note that necessarily V ∈ L²_loc) and hence Hn → H in strong resolvent sense by Lemma 6.28. Hence e^{−tHn} →s e^{−tH} by Theorem 6.23, which shows that e^{−tH} is at least positivity preserving (since 0 cannot be an eigenvalue of e^{−tH}, it cannot map a positive function to 0).

Next I claim that for ψ positive the closed set

N(ψ) = {φ ∈ L²(Rⁿ) | φ ≥ 0, ⟨φ, e^{−sH}ψ⟩ = 0 ∀ s ≥ 0} (10.70)

is just {0}. If φ ∈ N(ψ), we have by e^{−sH}ψ ≥ 0 that φ e^{−sH}ψ = 0. Hence ⟨e^{tVn}φ, e^{−sH}ψ⟩ = 0, that is, e^{tVn} leaves N(ψ) invariant. Invoking again Trotter's formula, the same is true for

e^{−t(H−Vn)} = s-lim_{k→∞} ( e^{−(t/k)H} e^{(t/k)Vn} )^k. (10.71)

Now it remains to use (7.41), which shows e^{−t(H−Vn)} →s e^{−tH0}, so that e^{−tH0} also leaves N(ψ) invariant. But this operator is positivity improving, and thus N(ψ) = {0}.

In other words, for ψ, φ positive,

⟨φ, RH(λ)ψ⟩ = ∫₀^∞ e^{λt} ⟨φ, e^{−tH}ψ⟩ dt > 0, λ < E0. (10.72)

So RH(λ) is positivity improving for λ < E0. If ψ is an eigenfunction of H corresponding to E0, it is an eigenfunction of RH(λ) corresponding to 1/(E0 − λ), and the claim follows from Theorem 10.10 since ‖RH(λ)‖ = 1/(E0 − λ). □
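As a numerical aside (not part of the mathematical development; the sketch assumes Python with NumPy, and the grid and kernel are our own choices), the example used above, convolution with a strictly positive kernel such as the heat kernel of e^{−tH0}, can be illustrated: a nonnegative, nonzero function becomes strictly positive after one convolution step.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
t = 0.5
# heat kernel of e^{tΔ} in one dimension; strictly positive everywhere
kernel = np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

f = np.where(np.abs(x) < 1, 1.0, 0.0)         # positive, but zero on a large set
g = np.convolve(f, kernel, mode="same") * dx   # discretized (e^{tΔ} f)(x)

print(f.min(), g.min())   # f vanishes somewhere, g > 0 at every grid point
```

The convolution also preserves the total mass of f (the kernel integrates to one), while spreading the support over the whole grid; this is the discrete shadow of the positivity improving property.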
Chapter 11

Atomic Schrödinger operators

11.1. Self-adjointness

In this section we want to have a look at the Hamiltonian corresponding to more than one interacting particle. It is given by

H = −Σ_{j=1}^N Δj + Σ_{j<k} Vj,k(xj − xk). (11.1)

We first consider the case of two particles, which will give us a feeling for how the many particle case differs from the one particle case and how the difficulties can be overcome.

We denote the coordinates corresponding to the first particle by x1 = (x1,1, x1,2, x1,3) and those corresponding to the second particle by x2 = (x2,1, x2,2, x2,3). If we assume that the interaction is again of the Coulomb type, the Hamiltonian is given by

H = −Δ1 − Δ2 − γ/|x1 − x2|, D(H) = H²(R⁶). (11.2)

Since Theorem 10.2 does not allow singularities for n ≥ 3, it does not tell us whether H is self-adjoint or not. Let

(y1, y2) = (1/√2) (I I; −I I) (x1, x2); (11.3)

then H reads in these new coordinates

H = (−Δ1) + (−Δ2 − γ/(√2 |y2|)). (11.4)
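As a numerical aside (not part of the mathematical development; the sketch assumes Python with NumPy), the coordinate change (11.3) is an orthogonal transformation of R⁶, so it leaves both the scalar product of L²(R⁶) and H0 invariant, and it satisfies |x1 − x2| = √2 |y2|, which is how the Coulomb singularity ends up in the second summand of (11.4):

```python
import numpy as np

I = np.eye(3)
T = np.block([[I, I], [-I, I]]) / np.sqrt(2)   # (y1, y2) = T (x1, x2)

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
y = T @ np.concatenate([x1, x2])
y2 = y[3:]

print(np.allclose(T.T @ T, np.eye(6)))                      # T is orthogonal
print(np.linalg.norm(x1 - x2), np.sqrt(2) * np.linalg.norm(y2))
```

Orthogonality of T is exactly the statement that the change of variables is unitary on L²(R⁶).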
In particular, the first part corresponds to the center of mass motion and the second part to the relative motion. Using that γ/(√2|y2|) has (−Δ2)-bound 0 in L²(R³), it is not hard to see that the same is true for the (−Δ1 − Δ2)-bound in L²(R⁶) (details will follow in the next section). In particular, H is self-adjoint and semibounded for any γ ∈ R.

Moreover, you might suspect that γ/(√2|y2|) is relatively compact with respect to −Δ1 − Δ2 in L²(R⁶), since it is with respect to −Δ2 in L²(R³). However, this is not true! This is due to the fact that γ/(√2|y2|) does not vanish as |y| → ∞.

Let us look at this problem from the physical view point. If λ ∈ σess(H), this means that the movement of the whole system is somehow unbounded. There are two possibilities for this. Firstly, both particles are far away from each other (such that we can neglect the interaction) and the energy corresponds to the sum of the kinetic energies of both particles. Since both can be arbitrarily small (but positive), we expect [0, ∞) ⊆ σess(H). Secondly, both particles remain close to each other and move together. In the last coordinates this corresponds to a bound state of the second operator. Hence we expect [λ0, ∞) ⊆ σess(H), where λ0 = −γ²/8 is the smallest eigenvalue of the second operator if the forces are attracting (γ ≥ 0) and λ0 = 0 if they are repelling (γ ≤ 0).

It is not hard to translate these intuitive ideas into a rigorous proof. Let ψ1(y1) be a Weyl sequence corresponding to λ ∈ [0, ∞) for −Δ1 and ψ2(y2) a Weyl sequence corresponding to λ0 for −Δ2 − γ/(√2|y2|). Then ψ1(y1)ψ2(y2) is a Weyl sequence corresponding to λ + λ0 for H, and thus [λ0, ∞) ⊆ σess(H). Conversely, we have −Δ1 ≥ 0 respectively −Δ2 − γ/(√2|y2|) ≥ λ0 and hence H ≥ λ0. Thus we obtain

σ(H) = σess(H) = [λ0, ∞), λ0 = −γ²/8 if γ ≥ 0, λ0 = 0 if γ ≤ 0. (11.5)

Clearly, the physically relevant information is the spectrum of the operator −Δ2 − γ/(√2|y2|), which is hidden by the spectrum of −Δ1. Hence, in order to reveal the physics, one first has to remove the center of mass motion.

To avoid clumsy notation, we will restrict ourselves to the case of one atom with N electrons whose nucleus is fixed at the origin. In particular, this implies that we do not have to deal with the center of mass motion
encountered in our example above.

The Hamiltonian is given by

H⁽ᴺ⁾ = −Σ_{j=1}^N Δj − Σ_{j=1}^N Vne(xj) + Σ_{j<k} Vee(xj − xk), D(H⁽ᴺ⁾) = H²(R^{3N}), (11.6)

where Vne describes the interaction of one electron with the nucleus and Vee describes the interaction of two electrons. Explicitly we have

Vj(x) = γj/|x|, j = ne, ee, γj > 0. (11.7)

We first need to establish self-adjointness of H⁽ᴺ⁾. This will follow from Kato's theorem.

Theorem 11.1 (Kato). Let Vk ∈ L∞_∞(R^d) + L²(R^d), d ≤ 3, be real-valued and let Vk(y^{(k)}) be the multiplication operator in L²(Rⁿ), n = Nd, obtained by letting y^{(k)} be the first d coordinates of a unitary transform of Rⁿ. Then Vk is H0-bounded with H0-bound 0. In particular,

H = H0 + Σk Vk(y^{(k)}), D(H) = H²(Rⁿ), (11.8)

is self-adjoint and C∞₀(Rⁿ) is a core.

Proof. It suffices to consider one k. After a unitary transform of Rⁿ we can assume y^{(1)} = (x1, . . . , xd), since such transformations leave both the scalar product of L²(Rⁿ) and H0 invariant. Now let ψ ∈ S(Rⁿ). Then

‖Vkψ‖² ≤ a² ∫_{Rⁿ} |Δ1ψ(x)|² dⁿx + b² ∫_{Rⁿ} |ψ(x)|² dⁿx, (11.9)

where Δ1 = Σ_{j=1}^d ∂²/∂x j², by our previous lemma. Hence we obtain

‖Vkψ‖² ≤ a² ∫_{Rⁿ} |Σ_{j=1}^d pj² ψ̂(p)|² dⁿp + b²‖ψ‖² ≤ a² ∫_{Rⁿ} |Σ_{j=1}^n pj² ψ̂(p)|² dⁿp + b²‖ψ‖² = a²‖H0ψ‖² + b²‖ψ‖², (11.10)

which implies that Vk is relatively bounded with bound 0. □
11.2. The HVZ theorem

The considerations of the beginning of this section show that it is not so easy to determine the essential spectrum of H⁽ᴺ⁾, since the potential does not decay in all directions as |x| → ∞. However, there is still something we can do. Let us split the system into H⁽ᴺ⁻¹⁾ plus a single electron. If the single electron is far away from the remaining system such that there is little interaction, the energy should be the sum of the kinetic energy of the single electron and the energy of the remaining system. Hence, arguing as in the two electron example of the previous section, we expect

Theorem 11.2 (HVZ). Let H⁽ᴺ⁾ be the self-adjoint operator given in (11.6). Then H⁽ᴺ⁾ is bounded from below and

σess(H⁽ᴺ⁾) = [λ^{N−1}, ∞), (11.11)

where λ^{N−1} = min σ(H⁽ᴺ⁻¹⁾) < 0.

In particular, denoting the infimum of the spectrum of H⁽ᴺ⁾ by λ^N, the ionization energy (i.e., the energy needed to remove one electron from the atom in its ground state) of an atom with N electrons is given by λ^N − λ^{N−1}.

Our goal for the rest of this section is to prove this result, which is due to Zhislin, van Winter and Hunziker and known as the HVZ theorem. In fact, there is a version which holds for general N-body systems; the proof is similar but involves some additional notation.

The idea of the proof is the following. To prove [λ^{N−1}, ∞) ⊆ σess(H⁽ᴺ⁾) we choose Weyl sequences for H⁽ᴺ⁻¹⁾ and −ΔN and proceed according to our intuitive picture from above. To prove σess(H⁽ᴺ⁾) ⊆ [λ^{N−1}, ∞) we will localize H⁽ᴺ⁾ on sets where either one electron is far away from the others or all electrons are far away from the nucleus. Since the error turns out relatively compact, it remains to consider the infimum of the spectra of these operators. For all cases where one electron is far away it is λ^{N−1}, and for the case where all electrons are far away from the nucleus it is 0 (since the electrons repel each other).

We begin with the first inclusion. Let ψ^{N−1}(x1, . . . , x_{N−1}) ∈ H²(R^{3(N−1)}) such that ‖ψ^{N−1}‖ = 1, ‖(H⁽ᴺ⁻¹⁾ − λ^{N−1})ψ^{N−1}‖ ≤ ε, and let ψ¹ ∈ H²(R³) such that ‖ψ¹‖ = 1, ‖(−ΔN − λ)ψ¹‖ ≤ ε for some λ ≥ 0. Now consider

ψr(x1, . . . , xN) = ψ^{N−1}(x1, . . . , x_{N−1}) ψ¹r(xN), ψ¹r(xN) = ψ¹(xN − r). (11.12)

Then

‖(H⁽ᴺ⁾ − λ − λ^{N−1})ψr‖ ≤ ‖(H⁽ᴺ⁻¹⁾ − λ^{N−1})ψ^{N−1}‖ ‖ψ¹r‖ + ‖ψ^{N−1}‖ ‖(−ΔN − λ)ψ¹r‖ + ‖(VN − Σ_{j=1}^{N−1} VN,j)ψr‖, (11.13)

where VN = Vne(xN) and VN,j = Vee(xN − xj). Since (VN − Σ_{j=1}^{N−1} VN,j)ψ^{N−1} ∈ L²(R^{3N}) and |ψ¹r| → 0 pointwise as |r| → ∞ (by Lemma 10.1), the third
term can be made smaller than ε by choosing |r| large (dominated convergence). In summary,

‖(H⁽ᴺ⁾ − λ − λ^{N−1})ψr‖ ≤ 3ε, (11.14)

proving [λ^{N−1}, ∞) ⊆ σess(H⁽ᴺ⁾).

The second inclusion is more involved. We begin with a localization formula, which can be verified by a straightforward computation.

Lemma 11.3 (IMS localization formula). Suppose φj ∈ C∞(Rⁿ), 0 ≤ j ≤ N, is such that

Σ_{j=0}^N φj(x)² = 1, x ∈ Rⁿ. (11.15)

Then

Δψ = Σ_{j=0}^N ( φjΔ(φjψ) + |∂φj|²ψ ), ψ ∈ H²(Rⁿ). (11.16)

Abbreviate B = {x ∈ R^{3N} | |x| ≥ 1}. Now we will choose the φj, 1 ≤ j ≤ N, in such a way that x ∈ supp(φj) ∩ B implies that the j-th particle is far away from all the others and from the nucleus. Similarly, we will choose φ0 in such a way that x ∈ supp(φ0) ∩ B implies that all particles are far away from the nucleus.

Lemma 11.4. There exist functions φj ∈ C∞(Rⁿ, [0, 1]), 0 ≤ j ≤ N, such that (11.15) holds,

supp(φj) ∩ B ⊆ {x ∈ B | |xj − xl| ≥ C|x| for all l ≠ j, and |xj| ≥ C|x|}, 1 ≤ j ≤ N,
supp(φ0) ∩ B ⊆ {x ∈ B | |xl| ≥ C|x| for all l}, (11.17)

for some C ∈ (0, 1], and |∂φj(x)| → 0 as |x| → ∞.

Proof. Consider the sets

Uⁿj = {x ∈ S^{3N−1} | |xj − xl| > n⁻¹ for all l ≠ j, and |xj| > n⁻¹}, 1 ≤ j ≤ N,
Uⁿ0 = {x ∈ S^{3N−1} | |xl| > n⁻¹ for all l}. (11.18)

We claim that

∪_{n=1}^∞ ∪_{j=0}^N Uⁿj = S^{3N−1}. (11.19)

Indeed, suppose there is an x ∈ S^{3N−1} which is not an element of this union. Then x ∉ Uⁿ0 for all n implies 0 = |xj| for some j, say j = 1. Next, x ∉ Uⁿ1 for all n implies 0 = |x1 − xl| = |xl| for some l, say l = 2. Proceeding like this we end up with x = 0, a contradiction. By compactness of S^{3N−1} we even have

∪_{j=0}^N Uⁿj = S^{3N−1} (11.20)

for n sufficiently large. It is well-known that there is a partition of unity φ̃j(x) subordinate to this cover. Extend φ̃j(x) to a smooth function from R^{3N}\{0} to [0, 1] by

φ̃j(λx) = φ̃j(x), x ∈ S^{3N−1}, λ > 0, (11.21)

and pick a function φ̃ ∈ C∞(R^{3N}, [0, 1]) with support inside the unit ball which is 1 in a neighborhood of the origin. Then

φj = (φ̃ + (1 − φ̃)φ̃j) / ( Σ_{l=0}^N (φ̃ + (1 − φ̃)φ̃l)² )^{1/2} (11.22)

are the desired functions. The gradient tends to zero since φj(λx) = φj(x) for λ ≥ 1 and |x| ≥ 1, which implies (∂φj)(λx) = λ⁻¹(∂φj)(x). □

By our localization formula we have

H⁽ᴺ⁾ = Σ_{j=0}^N φj H^{(N,j)} φj + K, (11.23)

where

H^{(N,j)} = −Σ_{l=1}^N Δl − Σ_{l≠j} Vl + Σ_{k<l; k,l≠j} Vk,l, V^{(N,j)} = −Vj + Σ_{l≠j} Vj,l, 1 ≤ j ≤ N,
H^{(N,0)} = −Σ_{l=1}^N Δl + Σ_{k<l} Vk,l, V^{(N,0)} = −Σ_{l=1}^N Vl, (11.24)

(with Vl = Vne(xl), Vk,l = Vee(xk − xl)) and

K = Σ_{j=0}^N φj² V^{(N,j)} − Σ_{j=0}^N |∂φj|². (11.25)

To show that our choice of the functions φj implies that K is relatively compact with respect to H we need the following

Lemma 11.5. Let V be H0-bounded with H0-bound 0 and suppose that ‖χ_{{x | |x| ≥ R}} V R_{H0}(z)‖ → 0 as R → ∞. Then V is relatively compact with respect to H0.

Proof. Let ψn converge to 0 weakly; note that ‖ψn‖ ≤ M for some M > 0. It suffices to show that ‖V R_{H0}(z)ψn‖ converges to 0. Choose φ ∈ C∞₀(Rⁿ, [0, 1]) such that it is one for |x| ≤ R. Then

‖V R_{H0}(z)ψn‖ ≤ ‖(1 − φ)V R_{H0}(z)ψn‖ + ‖V φ R_{H0}(z)ψn‖
≤ ‖(1 − φ)V R_{H0}(z)‖ ‖ψn‖ + a ‖H0 φ R_{H0}(z)ψn‖ + b ‖φ R_{H0}(z)ψn‖. (11.26)

By assumption, the first term can be made smaller than ε by choosing R large. Next, the same is true for the second term by choosing a small. Finally, the last term can also be made smaller than ε by choosing n large, since φ R_{H0}(z) is compact. □
The terms |∂φj|² are bounded and vanish at ∞, hence they are H0-compact by Lemma 7.10. The terms φj V^{(N,j)} are relatively compact by the lemma, and hence K is relatively compact with respect to H0. By Lemma 6.22, K is also relatively compact with respect to H⁽ᴺ⁾, since V⁽ᴺ⁾ is relatively bounded with respect to H0. In particular, H⁽ᴺ⁾ − K is self-adjoint on H²(R^{3N}) and σess(H⁽ᴺ⁾) = σess(H⁽ᴺ⁾ − K).

Since the operators H^{(N,j)}, 1 ≤ j ≤ N, are all of the form H⁽ᴺ⁻¹⁾ plus one particle which does not interact with the others and the nucleus, we have H^{(N,j)} − λ^{N−1} ≥ 0, 1 ≤ j ≤ N. Moreover, we have H^{(N,0)} ≥ 0 since Vj,k ≥ 0, and hence

⟨ψ, (H⁽ᴺ⁾ − K − λ^{N−1})ψ⟩ = Σ_{j=0}^N ⟨φjψ, (H^{(N,j)} − λ^{N−1})φjψ⟩ ≥ 0. (11.27)

Thus we obtain the remaining inclusion

σess(H⁽ᴺ⁾) = σess(H⁽ᴺ⁾ − K) ⊆ σ(H⁽ᴺ⁾ − K) ⊆ [λ^{N−1}, ∞), (11.28)

which finishes the proof of the HVZ theorem.

Note that the same proof works if we add additional nuclei at fixed locations. That is, we can also treat molecules if we assume that the nuclei are fixed in space.

Finally, let us consider the example of Helium-like atoms (N = 2). By the HVZ theorem and the considerations of the previous section we have

σess(H⁽²⁾) = [−γne²/4, ∞). (11.29)

Moreover, since the electron interaction term is positive, we see

H⁽²⁾ ≥ −γne²/2. (11.30)

Note that there can be no positive eigenvalues by the virial theorem. This even holds for arbitrary N, that is,

σp(H⁽ᴺ⁾) ⊂ (−∞, 0). (11.31)

Moreover, if γee = 0 (no electron interaction), we can take products of one particle eigenfunctions to show that

−γne² (1/(4n²) + 1/(4m²)) ∈ σp(H⁽²⁾(γee = 0)), n, m ∈ N. (11.32)

In particular, there are eigenvalues embedded in the essential spectrum in this case.
Chapter 12

Scattering theory

12.1. Abstract theory

In physical measurements one often has the following situation. A particle is shot into a region where it interacts with some forces and then leaves the region again. Outside this region the forces are negligible and hence the time evolution should be asymptotically free. Hence one expects asymptotic states ψ±(t) = exp(−itH0)ψ±(0) to exist such that

‖ψ(t) − ψ±(t)‖ → 0 as t → ±∞. (12.1)

(Figure: a trajectory ψ(t) approaching the free asymptotes ψ−(t) and ψ+(t).)

Rewriting this condition we see

0 = lim_{t→±∞} ‖e^{−itH}ψ(0) − e^{−itH0}ψ±(0)‖ = lim_{t→±∞} ‖ψ(0) − e^{itH}e^{−itH0}ψ±(0)‖, (12.2)

and motivated by this we define the wave operators by

D(Ω±) = {ψ ∈ H | ∃ lim_{t→±∞} e^{itH}e^{−itH0}ψ},
Ω±ψ = lim_{t→±∞} e^{itH}e^{−itH0}ψ. (12.3)
The set D(Ω±) is the set of all incoming/outgoing asymptotic states ψ±, and Ran(Ω±) is the set of all states which have an incoming/outgoing asymptotic state. If a state ψ has both, that is, ψ ∈ Ran(Ω+) ∩ Ran(Ω−), it is called a scattering state.

By construction we have

‖Ω±ψ‖ = lim_{t→±∞} ‖e^{itH}e^{−itH0}ψ‖ = lim_{t→±∞} ‖ψ‖ = ‖ψ‖, (12.4)

and it is not hard to see that D(Ω±) is closed. Moreover, interchanging the roles of H0 and H amounts to replacing Ω± by Ω±⁻¹, and hence Ran(Ω±) is also closed. In summary,

Lemma 12.1. The sets D(Ω±) and Ran(Ω±) are closed and Ω± : D(Ω±) → Ran(Ω±) is unitary.

Next, observe that

lim_{t→±∞} e^{itH}e^{−itH0}(e^{−isH0}ψ) = lim_{t→±∞} e^{−isH}(e^{i(t+s)H}e^{−i(t+s)H0}ψ) (12.5)

and hence

Ω± e^{−itH0}ψ = e^{−itH} Ω±ψ, ψ ∈ D(Ω±). (12.6)

In addition, D(Ω±) is invariant under exp(−itH0) and Ran(Ω±) is invariant under exp(−itH). Moreover, if ψ ∈ D(Ω±)⊥, then

⟨φ, exp(−itH0)ψ⟩ = ⟨exp(itH0)φ, ψ⟩ = 0, φ ∈ D(Ω±). (12.7)

Hence D(Ω±)⊥ is invariant under exp(−itH0) and Ran(Ω±)⊥ is invariant under exp(−itH). Consequently, D(Ω±) reduces exp(−itH0) and Ran(Ω±) reduces exp(−itH). Moreover, differentiating (12.6) with respect to t, we obtain from Theorem 5.1 the intertwining property of the wave operators.

Theorem 12.2. The subspaces D(Ω±) respectively Ran(Ω±) reduce H0 respectively H, and the operators restricted to these subspaces are unitarily equivalent:

Ω± H0 ψ = H Ω± ψ, ψ ∈ D(Ω±) ∩ D(H0). (12.8)

It is interesting to know the correspondence between incoming and outgoing states. Hence we define the scattering operator

S = Ω+⁻¹ Ω−, D(S) = {ψ ∈ D(Ω−) | Ω−ψ ∈ Ran(Ω+)}. (12.9)

Note that we have D(S) = D(Ω−) if and only if Ran(Ω−) ⊆ Ran(Ω+), and Ran(S) = D(Ω+) if and only if Ran(Ω+) ⊆ Ran(Ω−). Moreover, S is unitary from D(S) onto Ran(S) and we have

H0 S ψ = S H0 ψ, ψ ∈ D(H0) ∩ D(S). (12.10)

However, note that this whole theory is meaningless until we can show that the D(Ω±) are nontrivial. We first show a criterion due to Cook.
Lemma 12.3 (Cook). Suppose $D(H) \subseteq D(H_0)$. If

$$\int_0^\infty \|(H - H_0)\exp(\mp itH_0)\psi\|\,dt < \infty, \qquad \psi \in D(H_0), \qquad(12.11)$$

then $\psi \in D(\Omega_\pm)$, respectively. Moreover, we even have

$$\|(\Omega_\pm - I)\psi\| \le \int_0^\infty \|(H - H_0)\exp(\mp itH_0)\psi\|\,dt \qquad(12.12)$$

in this case.

Proof. The result follows from

$$e^{itH}e^{-itH_0}\psi = \psi + i\int_0^t \exp(isH)(H - H_0)\exp(-isH_0)\psi\,ds, \qquad(12.13)$$

which holds for $\psi \in D(H_0)$. □

As a simple consequence we obtain the following result for Schrödinger operators in $\mathbb{R}^3$.

Theorem 12.4. Suppose $H_0$ is the free Schrödinger operator and $H = H_0 + V$ with $V \in L^2(\mathbb{R}^3)$. Then the wave operators exist and $D(\Omega_\pm) = \mathcal{H}$.

Proof. Since we want to use Cook's lemma, we need to estimate

$$\|V\psi(s)\|^2 = \int_{\mathbb{R}^3} |V(x)\psi(s,x)|^2\,dx, \qquad \psi(s) = \exp(isH_0)\psi, \qquad(12.14)$$

for given $\psi \in D(H_0)$. Invoking (7.34) we get

$$\|V\psi(s)\| \le \|\psi(s)\|_\infty\|V\| \le \frac{1}{(4\pi s)^{3/2}}\,\|\psi\|_1\|V\|, \qquad s > 0, \qquad(12.15)$$

at least for $\psi \in L^1(\mathbb{R}^3)$. Moreover, this implies

$$\int_1^\infty \|V\psi(s)\|\,ds \le \frac{1}{(4\pi)^{3/2}}\,\|\psi\|_1\|V\| \int_1^\infty \frac{ds}{s^{3/2}} < \infty, \qquad(12.16)$$

and thus any such $\psi$ is in $D(\Omega_+)$. Since such functions are dense, we obtain $D(\Omega_+) = \mathcal{H}$. Similarly for $\Omega_-$. □

By the intertwining property, $\psi$ is an eigenfunction of $H_0$ if and only if it is an eigenfunction of $H$. Hence for $\psi \in \mathcal{H}_{pp}(H_0)$ it is easy to check whether it is in $D(\Omega_\pm)$ or not, and only the continuous subspace is of interest. We will say that the wave operators exist if all elements of $\mathcal{H}_{ac}(H_0)$ are asymptotic states, that is,

$$\mathcal{H}_{ac}(H_0) \subseteq D(\Omega_\pm), \qquad(12.17)$$

and that they are complete if, in addition, all elements of $\mathcal{H}_{ac}(H)$ are scattering states, that is,

$$\mathcal{H}_{ac}(H) \subseteq \mathrm{Ran}(\Omega_\pm). \qquad(12.18)$$
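The dispersive bound (12.15) is the engine of the proof above: the free evolution flattens out at a power rate, so $\|V\psi(s)\|$ is integrable at infinity. This decay is easy to see numerically. The sketch below is ours, not from the text: it works in one dimension (where the analogue of (12.15) decays like $s^{-1/2}$), propagates a Gaussian under $H_0 = -d^2/dx^2$ spectrally via FFT, and compares $\|\psi(s)\|_\infty$ with the exact value $\pi^{-1/4}(1+4s^2)^{-1/4}$; the grid sizes and the initial state are arbitrary choices.

```python
import numpy as np

# Free evolution exp(-is H0) with H0 = -d^2/dx^2 (the mass-two convention of the
# text), computed spectrally: multiply the Fourier transform by exp(-is k^2).
N, L = 4096, 400.0                                  # grid resolution and box size (our choice)
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
psi0 = np.pi ** -0.25 * np.exp(-x ** 2 / 2)         # normalized Gaussian initial state

def free_evolve(psi, s):
    """Apply exp(-is H0) by multiplying with the dispersive phase in Fourier space."""
    return np.fft.ifft(np.exp(-1j * s * k ** 2) * np.fft.fft(psi))

# Sup norm of the evolved state versus the exact closed form for a Gaussian.
sup_norms = {s: np.abs(free_evolve(psi0, s)).max() for s in (5.0, 10.0)}
exact = {s: np.pi ** -0.25 * (1 + 4 * s ** 2) ** -0.25 for s in (5.0, 10.0)}
```

For a Gaussian the evolved state is known in closed form, which makes the comparison exact up to discretization error; for general $\psi \in L^1$ only the upper bound of (12.15) survives.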
If we even have

$$\mathcal{H}_c(H) \subseteq \mathrm{Ran}(\Omega_\pm), \qquad(12.19)$$

they are called asymptotically complete. We will be mainly interested in the case where $H_0$ is the free Schrödinger operator and hence $\mathcal{H}_{ac}(H_0) = \mathcal{H}$. In this latter case the wave operators exist if $D(\Omega_\pm) = \mathcal{H}$, they are complete if $\mathcal{H}_{ac}(H) = \mathrm{Ran}(\Omega_\pm)$, and they are asymptotically complete if $\mathcal{H}_c(H) = \mathrm{Ran}(\Omega_\pm)$. In particular, asymptotic completeness implies $\mathcal{H}_{sc}(H) = \emptyset$, since $H$ restricted to $\mathrm{Ran}(\Omega_\pm)$ is unitarily equivalent to $H_0$.

12.2. Incoming and outgoing states

In the remaining sections we want to apply this theory to Schrödinger operators. Our first goal is to give a precise meaning to some terms in the intuitive picture of scattering theory introduced in the previous section. This physical picture suggests that we should be able to decompose $\psi \in \mathcal{H}$ into an incoming and an outgoing part. But how should incoming respectively outgoing be defined for $\psi \in \mathcal{H}$? Well, incoming (outgoing) means that the expectation of $x^2$ should decrease (increase). Set $x(t)^2 = \exp(iH_0 t)\,x^2\exp(-iH_0 t)$. Then, abbreviating $\psi(t) = e^{-itH_0}\psi$, we have

$$\frac{d}{dt}\,\mathbb{E}_\psi(x(t)^2) = \langle\psi(t), i[H_0, x^2]\psi(t)\rangle = 4\langle\psi(t), D\psi(t)\rangle, \qquad \psi \in \mathcal{S}(\mathbb{R}^n), \qquad(12.20)$$

where $D$ is the dilation operator introduced in (10.11). Hence it is natural to consider $\psi \in \mathrm{Ran}(P_\pm)$,

$$P_\pm = P_D((0,\pm\infty)), \qquad(12.21)$$

as outgoing respectively incoming states.

If we project a state in $\mathrm{Ran}(P_\pm)$ to energies in the interval $(a^2, b^2)$, we expect that it cannot be found in a ball of radius proportional to $a|t|$ as $t \to \pm\infty$ ($a$ is the minimal velocity of the particle, since we have assumed the mass to be two). In fact, we will show below that the tail decays faster than any inverse power of $|t|$.

We first collect some properties of $D$ which will be needed later on. Note

$$\mathcal{F}D = -D\mathcal{F} \qquad(12.22)$$

and hence

$$\mathcal{F}f(D) = f(-D)\mathcal{F}. \qquad(12.23)$$

To say more we will look for a transformation which maps $D$ to a multiplication operator. Since the dilation group acts on $|x|$ only, it seems reasonable to switch to polar coordinates $x = r\omega$, $(r,\omega) \in \mathbb{R}^+ \times S^{n-1}$. Since $U(s)$ essentially transforms $r$ into $r\exp(s)$, we will replace $r$ by $\rho = \ln(r)$. In these coordinates we have

$$U(s)\psi(e^\rho\omega) = e^{-ns/2}\psi(e^{\rho-s}\omega). \qquad(12.24)$$
Hence $U(s)$ corresponds to a shift of $\rho$ (the constant in front is absorbed by the volume element). Thus $D$ corresponds to differentiation with respect to this coordinate, and all we have to do to make it a multiplication operator is to take the Fourier transform with respect to $\rho$. This leads us to the Mellin transform

$$\mathcal{M}: L^2(\mathbb{R}^n) \to L^2(\mathbb{R}\times S^{n-1}), \qquad \psi(r\omega) \mapsto (\mathcal{M}\psi)(\lambda,\omega) = \frac{1}{\sqrt{2\pi}}\int_0^\infty r^{-i\lambda}\psi(r\omega)\,r^{n/2-1}\,dr. \qquad(12.25)$$

By construction, $\mathcal{M}$ is unitary, that is,

$$\int_{\mathbb{R}}\int_{S^{n-1}} |(\mathcal{M}\psi)(\lambda,\omega)|^2\,d\lambda\,d^{n-1}\omega = \int_{\mathbb{R}^+}\int_{S^{n-1}} |\psi(r\omega)|^2\,r^{n-1}\,dr\,d^{n-1}\omega, \qquad(12.26)$$

where $d^{n-1}\omega$ is the normalized surface measure on $S^{n-1}$. Moreover,

$$\mathcal{M}^{-1}U(s)\mathcal{M} = e^{-is\lambda} \qquad(12.27)$$

and hence

$$\mathcal{M}^{-1}D\mathcal{M} = \lambda. \qquad(12.28)$$

From this it is straightforward to show that

$$\sigma(D) = \sigma_{ac}(D) = \mathbb{R}, \qquad \sigma_{sc}(D) = \sigma_{pp}(D) = \emptyset, \qquad(12.29)$$

and that $\mathcal{S}(\mathbb{R}^n)$ is a core for $D$. In particular we have $P_+ + P_- = I$.

Using the Mellin transform we can now prove Perry's estimate [11].

Lemma 12.5. Suppose $f \in C_c^\infty(\mathbb{R})$ with $\mathrm{supp}(f) \subset (a^2, b^2)$ for some $a, b > 0$. For any $R \in \mathbb{R}$, $N \in \mathbb{N}$ there is a constant $C$ such that

$$\|\chi_{\{x \mid |x| < 2a|t|\}}\, e^{-itH_0} f(H_0) P_D((\pm R,\pm\infty))\| \le \frac{C}{(1+|t|)^N}, \qquad \pm t \ge 0, \qquad(12.30)$$

respectively.

Proof. We prove only the $+$ case, the remaining one being similar. Consider $\psi \in \mathcal{S}(\mathbb{R}^n)$. Introducing

$$\psi(t,x) = e^{-itH_0}f(H_0)P_D((R,\infty))\psi(x) = \langle K_{t,x}, \widehat{P_D((R,\infty))\psi}\rangle, \qquad K_{t,x}(p) = \frac{1}{(2\pi)^{n/2}}\,e^{i(tp^2 - px)}f(p^2)^*, \qquad(12.31)$$

for $|x| < 2at$, and using $\mathcal{F}P_D((R,\infty)) = P_D((-\infty,-R))\mathcal{F}$ (by (12.23)), we see that it suffices to show

$$\|P_D((-\infty,-R))K_{t,x}\| \le \frac{\mathrm{const}}{(1+|t|)^{2N}}. \qquad(12.32)$$

Now we invoke the Mellin transform to estimate this norm,

$$\|P_D((-\infty,-R))K_{t,x}\|^2 = \int_{-\infty}^{-R}\int_{S^{n-1}} |(\mathcal{M}K_{t,x})(\lambda,\omega)|^2\,d\lambda\,d^{n-1}\omega, \qquad(12.33)$$

where

$$(\mathcal{M}K_{t,x})(\lambda,\omega) = \frac{1}{(2\pi)^{(n+1)/2}}\int_0^\infty \tilde f(r)\,e^{i\alpha(r)}\,dr \qquad(12.34)$$

with $\tilde f(r) = f(r^2)^*\,r^{n/2-1} \in C_c^\infty((a,b))$ and $\alpha(r) = tr^2 + r\,\omega x - \lambda\ln(r)$. Estimating the derivative of $\alpha$, we see

$$\alpha'(r) = 2tr + \omega x - \lambda/r > 0, \qquad r \in (a,b), \qquad(12.35)$$

for $\lambda \le -R$ and $t > R(2\varepsilon a)^{-1}$, where $\varepsilon$ is the distance of $a$ to the support of $\tilde f$. Hence we can find a constant such that

$$\frac{1}{|\alpha'(r)|} \le \frac{\mathrm{const}}{1+|\lambda|+|t|}, \qquad r \in (a,b), \qquad(12.36)$$

for $\lambda$, $t$ as above. Using this we can estimate the integral in (12.34),

$$\Big|\int_0^\infty \tilde f(r)\,e^{i\alpha(r)}\,dr\Big| = \Big|\int_0^\infty \tilde f(r)\,\frac{1}{i\alpha'(r)}\,\frac{d}{dr}e^{i\alpha(r)}\,dr\Big| \le \frac{\mathrm{const}}{1+|\lambda|+|t|} \qquad(12.37)$$

(the last step uses integration by parts), for $\lambda$, $t$ as above. Moreover, by iterating the last estimate we see

$$|(\mathcal{M}K_{t,x})(\lambda,\omega)| \le \frac{\mathrm{const}}{(1+|\lambda|+|t|)^N} \qquad(12.38)$$

for any $N \in \mathbb{N}$ and $\lambda$, $t$ as above. By increasing the constant we can even assume that this bound holds for all $t \ge 0$ and $\lambda \le -R$. Inserting it into (12.33) finishes the proof. □

Corollary 12.6. Suppose that $f \in C_c^\infty((0,\infty))$ and $R \in \mathbb{R}$. Then the operator $P_D((\pm R,\pm\infty))f(H_0)\exp(-itH_0)$ converges strongly to 0 as $t \to \mp\infty$.

Proof. Abbreviating $P_D = P_D((\pm R,\pm\infty))$ and $\chi = \chi_{\{x \mid |x| < 2a|t|\}}$, we have

$$\|P_D f(H_0)e^{-itH_0}\psi\| \le \|\chi\,e^{itH_0}f(H_0)^* P_D\|\,\|\psi\| + \|f(H_0)\|\,\|(I-\chi)\psi\| \qquad(12.39)$$

since $\|A\| = \|A^*\|$. Taking $t \to \mp\infty$, the first term goes to zero by our lemma and the second goes to zero since $\chi\psi \to \psi$. □

12.3. Schrödinger operators with short range potentials

By the RAGE theorem we know that, for $\psi \in \mathcal{H}_c$, $\psi(t)$ will eventually leave every compact ball (at least on the average). Hence we expect that the time evolution will asymptotically look like the free one for $\psi \in \mathcal{H}_c$ if the potential decays sufficiently fast. In other words, we expect such potentials to be asymptotically complete.
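The RAGE heuristic just invoked, that a continuum state eventually leaves every compact ball, is already visible for the free evolution itself. The sketch below is ours (one dimension, FFT propagation, an arbitrary Gaussian initial state and an arbitrary ball radius): it computes the probability of finding the freely evolved particle in a fixed ball and watches it decay.

```python
import numpy as np

# Free 1D evolution exp(-it H0), H0 = -d^2/dx^2, computed spectrally via FFT.
N, L = 4096, 800.0                                  # grid parameters (our choice)
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dx = L / N
psi = np.pi ** -0.25 * np.exp(-x ** 2 / 2)          # normalized Gaussian

def prob_in_ball(t, R=5.0):
    """Probability of finding exp(-it H0) psi inside the ball |x| < R."""
    psi_t = np.fft.ifft(np.exp(-1j * t * k ** 2) * np.fft.fft(psi))
    return dx * np.sum(np.abs(psi_t[np.abs(x) < R]) ** 2)

probs = [prob_in_ball(t) for t in (0.0, 5.0, 25.0)]
```

The probability in the ball decays like $|t|^{-1}$ here (the square of the 1D sup-norm decay), while the total norm is conserved; the mass simply spreads outward, which is exactly what asymptotic completeness for decaying potentials exploits.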
Suppose $V$ is relatively bounded with bound less than one and introduce

$$h_1(r) = \|V R_{H_0}(z)\chi_r\|, \qquad h_2(r) = \|\chi_r V R_{H_0}(z)\|, \qquad r \ge 0, \qquad(12.40)$$

where $\chi_r = \chi_{\{x \mid |x| \ge r\}}$. The potential $V$ will be called short range if these quantities are integrable. We first note that it suffices to check this for $h_1$ or $h_2$ and for one $z \in \rho(H_0)$.

Lemma 12.7. The function $h_1$ is integrable if and only if $h_2$ is. Moreover, $h_j$ integrable for one $z_0 \in \rho(H_0)$ implies $h_j$ integrable for all $z \in \rho(H_0)$.

Proof. Pick $\varphi \in C_c^\infty(\mathbb{R}^n, [0,1])$ such that $\varphi(x) = 0$ for $0 \le |x| \le 1/2$ and $\varphi(x) = 1$ for $1 \le |x|$, and set $\varphi_r(x) = \varphi(x/r)$. Then it is not hard to see that $h_j$ is integrable if and only if $\tilde h_j$ is integrable, where

$$\tilde h_1(r) = \|V R_{H_0}(z)\varphi_r\|, \qquad \tilde h_2(r) = \|\varphi_r V R_{H_0}(z)\|, \qquad r \ge 1. \qquad(12.41)$$

Using

$$[R_{H_0}(z), \varphi_r] = -R_{H_0}(z)[H_0, \varphi_r]R_{H_0}(z) = R_{H_0}(z)\big(\Delta\varphi_r + (\partial\varphi_r)\partial\big)R_{H_0}(z) \qquad(12.42)$$

together with $\Delta\varphi_r = \varphi_{r/2}\Delta\varphi_r$, $(\partial\varphi_r) = \varphi_{r/2}(\partial\varphi_r)$ and

$$\|\Delta\varphi_r\|_\infty \le \|\Delta\varphi\|_\infty/r^2, \qquad \|\partial\varphi_r\|_\infty \le \|\partial\varphi\|_\infty/r, \qquad(12.43)$$

we see

$$|\tilde h_1(r) - \tilde h_2(r)| \le \frac{c}{r}\,\tilde h_1(r/2), \qquad r \ge 1. \qquad(12.44)$$

Hence $\tilde h_2$ is integrable if $\tilde h_1$ is. Conversely, iterating

$$\tilde h_1(r) \le \tilde h_2(r) + \frac{c}{r}\,\tilde h_1(r/2) \le \tilde h_2(r) + \frac{c}{r}\,\tilde h_2(r/2) + \frac{2c}{r^2}\,\tilde h_1(r/4) \le \dots \qquad(12.45)$$

shows that $\tilde h_1$ is integrable if $\tilde h_2$ is. Finally, invoking the first resolvent formula,

$$\|\varphi_r V R_{H_0}(z)\| \le \|\varphi_r V R_{H_0}(z_0)\|\,\|I - (z - z_0)R_{H_0}(z)\|, \qquad(12.46)$$

finishes the proof. □

As a first consequence note

Lemma 12.8. If $V$ is short range, then $R_H(z) - R_{H_0}(z)$ is compact.

Proof. The operator $R_H(z)V(I-\chi_r)R_{H_0}(z)$ is compact, since $(I-\chi_r)R_{H_0}(z)$ is compact by Lemma 7.10 and $R_H(z)V$ is bounded by Lemma 6.22. Moreover, by our short range condition it converges in norm to

$$R_H(z)V R_{H_0}(z) = R_H(z) - R_{H_0}(z) \qquad(12.47)$$

as $r \to \infty$ (at least for some subsequence). □
Lemma 12.9. Suppose $R_H(z) - R_{H_0}(z)$ is compact. Then so is $f(H) - f(H_0)$ for any $f \in C_\infty(\mathbb{R})$, and

$$\lim_{r\to\infty} \|(f(H) - f(H_0))\chi_r\| = 0. \qquad(12.48)$$

Proof. The first part is Lemma 6.20, and the second part follows from part (ii) of Lemma 6.8, since $\chi_r$ converges strongly to 0. □

In particular, $V$ short range implies that $H$ and $H_0$ look alike far outside.

We begin by showing that the wave operators exist. By Cook's criterion (Lemma 12.3) we need to show that

$$\|V\exp(\mp itH_0)\psi\| \le \|V R_{H_0}(-1)(I-\chi_{2a|t|})\exp(\mp itH_0)(H_0+I)\psi\| + \|V R_{H_0}(-1)\chi_{2a|t|}\|\,\|(H_0+I)\psi\| \qquad(12.49)$$

is integrable for a dense set of vectors $\psi$. The second term is integrable by our short range assumption. The same is true by Perry's estimate (Lemma 12.5) for the first term if we choose $\psi = f(H_0)P_D((\pm R,\pm\infty))\varphi$. Since vectors of this form are dense, we see that the wave operators exist, $D(\Omega_\pm) = \mathcal{H}$.

Since $H$ restricted to $\mathrm{Ran}(\Omega_\pm)$ is unitarily equivalent to $H_0$, we obtain

$$[0,\infty) = \sigma_{ac}(H_0) \subseteq \sigma_{ac}(H). \qquad(12.50)$$

Moreover, by Weyl's theorem we have $\sigma_{ess}(H) = [0,\infty)$, and by $\sigma_{ac}(H) \subseteq \sigma_{ess}(H) = [0,\infty)$ we even have $\sigma_{ac}(H) = [0,\infty)$.

To prove asymptotic completeness of the wave operators we will need that the operators $(\Omega_\pm - I)f(H_0)P_\pm$ are compact.

Lemma 12.10. Let $f \in C_c^\infty((0,\infty))$ and suppose $\psi_n$ converges weakly to 0. Then

$$\lim_{n\to\infty} \|(\Omega_\pm - I)f(H_0)P_\pm\psi_n\| = 0, \qquad(12.51)$$

that is, $(\Omega_\pm - I)f(H_0)P_\pm$ is compact.

Proof. By (12.13) we see

$$\|R_H(z)(\Omega_\pm - I)f(H_0)P_\pm\psi_n\| \le \int_0^\infty \|R_H(z)V\exp(\mp isH_0)f(H_0)P_\pm\psi_n\|\,ds. \qquad(12.52)$$

Since $R_H(z)V R_{H_0}$ is compact, we see that the integrand

$$R_H(z)V\exp(\mp isH_0)f(H_0)P_\pm\psi_n = R_H(z)V R_{H_0}\exp(\mp isH_0)(H_0+1)f(H_0)P_\pm\psi_n \qquad(12.53)$$

converges pointwise to 0. Moreover, arguing as in (12.49), the integrand is bounded by an $L^1$ function depending only on $\|\psi_n\|$. Thus $R_H(z)(\Omega_\pm - I)f(H_0)P_\pm$ is compact by the dominated convergence theorem. Furthermore, using the intertwining property we see that

$$(\Omega_\pm - I)\tilde f(H_0)P_\pm = R_H(z)(\Omega_\pm - I)f(H_0)P_\pm - (R_H(z) - R_{H_0}(z))f(H_0)P_\pm, \qquad \tilde f(\lambda) = (\lambda+1)f(\lambda), \qquad(12.54)$$

is compact by Lemma 6.20. □

Now we have gathered enough information to tackle the problem of asymptotic completeness. We first show that the singular continuous spectrum is absent. This is not really necessary, but it avoids the use of Cesàro means in our main argument.

Abbreviate $P = P_H^{sc}P_H((a,b))$, $0 < a < b$. Since $H$ restricted to $\mathrm{Ran}(\Omega_\pm)$ is unitarily equivalent to $H_0$ (which has purely absolutely continuous spectrum), the singular part must live on $\mathrm{Ran}(\Omega_\pm)^\perp$, that is, $P_H^{sc}\Omega_\pm = 0$. Thus $Pf(H_0) = P(I-\Omega_+)f(H_0)P_+ + P(I-\Omega_-)f(H_0)P_-$ is compact. Since $f(H) - f(H_0)$ is compact, it follows that $Pf(H)$ is also compact. Choosing $f$ such that $f(\lambda) = 1$ for $\lambda \in [a,b]$, we see that $P = Pf(H)$ is compact and hence finite dimensional. In particular, $\sigma_{sc}(H) \cap (a,b)$ is a finite set. But a continuous measure cannot be supported on a finite set, showing $\sigma_{sc}(H) \cap (a,b) = \emptyset$. Since $0 < a < b$ are arbitrary, we even have $\sigma_{sc}(H) \cap (0,\infty) = \emptyset$, and by $\sigma_{sc}(H) \subseteq \sigma_{ess}(H) = [0,\infty)$ we obtain $\sigma_{sc}(H) = \emptyset$. Moreover, observe that replacing $P_H^{sc}$ by $P_H^{pp}$, the same argument shows that all nonzero eigenvalues have finite multiplicity and cannot accumulate in $(0,\infty)$.

In summary we have shown

Theorem 12.11. Suppose $V$ is short range. Then

$$\sigma_{ac}(H) = \sigma_{ess}(H) = [0,\infty), \qquad \sigma_{sc}(H) = \emptyset. \qquad(12.55)$$

All nonzero eigenvalues have finite multiplicity and cannot accumulate in $(0,\infty)$.

Now we come to the anticipated asymptotic completeness result of Enß. The main ideas are due to Enß [4]. Choose

$$\psi \in \mathcal{H}_c(H) = \mathcal{H}_{ac}(H) \quad\text{such that}\quad \psi = f(H)\psi \qquad(12.56)$$

for some $f \in C_c^\infty((0,\infty))$, and abbreviate $\psi(t) = \exp(-itH)\psi$. By the RAGE theorem, $\psi(t)$ converges weakly to zero as $t \to \pm\infty$. Introduce

$$\varphi_\pm(t) = f(H_0)P_\pm\psi(t), \qquad(12.57)$$

which satisfy

$$\lim_{t\to\pm\infty} \|\psi(t) - \varphi_+(t) - \varphi_-(t)\| = 0. \qquad(12.58)$$

Indeed, this follows from

$$\psi(t) = \varphi_+(t) + \varphi_-(t) + (f(H) - f(H_0))\psi(t) \qquad(12.59)$$

and Lemma 6.20. Moreover, we even have

$$\lim_{t\to\pm\infty} \|(\Omega_\pm - I)\varphi_\pm(t)\| = 0 \qquad(12.60)$$

by Lemma 12.10.

Now suppose $\psi \in \mathrm{Ran}(\Omega_\pm)^\perp$. By Theorem 12.2, $\mathrm{Ran}(\Omega_\pm)^\perp$ is invariant under $H$, and hence $\psi(t) \in \mathrm{Ran}(\Omega_\pm)^\perp$, implying

$$\|\psi\|^2 = \lim_{t\to\pm\infty} \langle\psi(t), \psi(t)\rangle = \lim_{t\to\pm\infty} \langle\psi(t), \varphi_+(t) + \varphi_-(t)\rangle = \lim_{t\to\pm\infty} \langle\psi(t), \Omega_+\varphi_+(t) + \Omega_-\varphi_-(t)\rangle \qquad(12.61)$$

by (12.58) and (12.60). Since $\psi(t) \perp \mathrm{Ran}(\Omega_\pm)$, the term containing $\Omega_\pm$ drops out, and invoking the intertwining property for the remaining one we see

$$\|\psi\|^2 = \lim_{t\to\pm\infty} \langle\Omega_\mp^*\psi(t), \varphi_\mp(t)\rangle = \lim_{t\to\pm\infty} \langle e^{-itH_0}\Omega_\mp^*\psi, f(H_0)P_\mp\psi(t)\rangle \qquad(12.62)$$

and hence

$$\|\psi\|^2 = \lim_{t\to\pm\infty} \langle P_\mp f(H_0)^* e^{-itH_0}\Omega_\mp^*\psi, \psi(t)\rangle = 0 \qquad(12.63)$$

by Corollary 12.6. Hence $\mathrm{Ran}(\Omega_\pm) = \mathcal{H}_{ac}(H) = \mathcal{H}_c(H)$, and we thus have shown

Theorem 12.12 (Enß). Suppose $V$ is short range. Then the wave operators are asymptotically complete.

For further results and references see for example [3].
Part 3

Appendix
Appendix A

Almost everything about Lebesgue integration

In this appendix I give a brief introduction to measure theory. Good references are [2] or [18].

A.1. Borel measures in a nutshell

The first step in defining the Lebesgue integral is extending the notion of size from intervals to arbitrary sets. Unfortunately, this turns out to be too much, since a classical paradox by Banach and Tarski shows that one can break the unit ball in $\mathbb{R}^3$ into a finite number of (wild: choosing the pieces uses the Axiom of Choice and cannot be done with a jigsaw) pieces, rotate and translate them, and reassemble them to get two copies of the unit ball (compare Problem A.1). Hence any reasonable notion of size (i.e., one which is translation and rotation invariant) cannot be defined for all sets!

A collection of subsets $A$ of a given set $X$ such that
• $X \in A$,
• $A$ is closed under finite unions,
• $A$ is closed under complements,
is called an algebra. Note that $\emptyset \in A$ and that, by de Morgan, $A$ is also closed under finite intersections. If an algebra is closed under countable unions (and hence also countable intersections), it is called a σ-algebra.
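On a finite set, the (σ-)algebra generated by a collection $S$ can be computed by brute force: close $S \cup \{X\}$ under complements and pairwise unions until nothing new appears. The following helper is our own illustration, not from the text; it also confirms closure under intersections, as de Morgan predicts.

```python
def generated_algebra(X, S):
    """Smallest algebra on the finite set X containing every set in S.

    On a finite set every algebra is automatically a sigma-algebra, so the
    closure under complements and finite unions already gives Sigma(S).
    """
    algebra = {frozenset(X), frozenset()} | {frozenset(A) for A in S}
    while True:
        new = {frozenset(X) - A for A in algebra}            # complements
        new |= {A | B for A in algebra for B in algebra}     # pairwise unions
        if new <= algebra:                                   # nothing new: done
            return algebra
        algebra |= new

X = {1, 2, 3, 4}
A = generated_algebra(X, [{1}, {2, 3}])
```

Here the generating sets induce the partition $\{1\}, \{2,3\}, \{4\}$, so the generated σ-algebra consists of the $2^3 = 8$ unions of these atoms.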
Moreover, the intersection of any family of (σ-)algebras $\{A_\alpha\}$ is again a (σ-)algebra, and for any collection $S$ of subsets there is a unique smallest (σ-)algebra $\Sigma(S)$ containing $S$ (namely the intersection of all (σ-)algebras containing $S$). It is called the (σ-)algebra generated by $S$.

If $X$ is a topological space, the Borel σ-algebra of $X$ is defined to be the σ-algebra generated by all open (respectively all closed) sets. In the case $X = \mathbb{R}^n$ the Borel σ-algebra will be denoted by $\mathfrak{B}^n$ and we will abbreviate $\mathfrak{B} = \mathfrak{B}^1$. Sets in the Borel σ-algebra are called Borel sets.

Now let us turn to the definition of a measure: A set $X$ together with a σ-algebra $\Sigma$ is called a measure space. The sets in $\Sigma$ are called measurable sets. A measure $\mu$ is a map $\mu : \Sigma \to [0,\infty]$ on a σ-algebra $\Sigma$ such that
• $\mu(\emptyset) = 0$,
• $\mu(\bigcup_{j=1}^\infty A_j) = \sum_{j=1}^\infty \mu(A_j)$ if $A_j \cap A_k = \emptyset$ for all $j \ne k$ (σ-additivity).
It is called σ-finite if there is a countable cover $\{X_j\}_{j=1}^\infty$ of $X$ with $\mu(X_j) < \infty$ for all $j$. (Note that it is no restriction to assume $X_j \nearrow X$.) It is called finite if $\mu(X) < \infty$.

Example. Let $A \in \mathfrak{P}(M)$ and set $\mu(A)$ to be the number of elements of $A$ (respectively $\infty$ if $A$ is infinite). This is the so-called counting measure. Note that if $X = \mathbb{N}$ and $A_n = \{j \in \mathbb{N} \mid j \ge n\}$, then $\mu(A_n) = \infty$, but $\mu(\bigcap_n A_n) = \mu(\emptyset) = 0$, which shows that the requirement $\mu(A_1) < \infty$ in the last claim of Theorem A.1 below is not superfluous.

We will write $A_n \nearrow A$ if $A_n \subseteq A_{n+1}$ (note $A = \bigcup_n A_n$) and $A_n \searrow A$ if $A_{n+1} \subseteq A_n$ (note $A = \bigcap_n A_n$).

Theorem A.1. Any measure $\mu$ satisfies the following properties:
(i) $A \subseteq B$ implies $\mu(A) \le \mu(B)$ (monotonicity).
(ii) $\mu(A_n) \to \mu(A)$ if $A_n \nearrow A$ (continuity from below).
(iii) $\mu(A_n) \to \mu(A)$ if $A_n \searrow A$ and $\mu(A_1) < \infty$ (continuity from above).

Proof. The first claim is obvious. The second follows using $\tilde A_n = A_n \backslash A_{n-1}$ and σ-additivity. The third follows from the second using $\tilde A_n = A_1 \backslash A_n$ and $\mu(\tilde A_n) = \mu(A_1) - \mu(A_n)$. □

If we replace the σ-algebra by an algebra $A$, then $\mu$ is called a premeasure. In this case σ-additivity clearly only needs to hold for disjoint sets $A_n$ for which $\bigcup_n A_n \in A$.
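For a measure given by point weights, the properties of Theorem A.1 can be checked mechanically. The sketch below is our own toy example (the weights and sets are arbitrary choices): it implements $\mu(A) = \sum_{x\in A} w(x)$ on $X = \{0,\dots,9\}$ and exhibits monotonicity, additivity, and continuity from below.

```python
# A finite measure on X = {0,...,9} given by point weights (our choice).
w = {x: 1.0 / (x + 1) for x in range(10)}

def mu(A):
    """Measure of A as the sum of the point weights of its elements."""
    return sum(w[x] for x in A)

# An increasing sequence A_n converging to X, to illustrate Theorem A.1 (ii).
A_n = [set(range(n + 1)) for n in range(10)]
limit = mu(set(range(10)))
```

Continuity from above holds here as well since $\mu(X) < \infty$; the counting-measure example above shows what goes wrong when $\mu(A_1) = \infty$.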
A measure on the Borel σ-algebra is called a Borel measure if $\mu(C) < \infty$ for any compact set $C$. A Borel measure is called outer regular if

$$\mu(A) = \inf_{A \subseteq O,\ O\ \text{open}} \mu(O) \qquad(A.1)$$

and inner regular if

$$\mu(A) = \sup_{C \subseteq A,\ C\ \text{compact}} \mu(C). \qquad(A.2)$$

It is called regular if it is both outer and inner regular.

But how can we obtain some more interesting Borel measures? We will restrict ourselves to the case of $X = \mathbb{R}$ for simplicity. Let us first show how we should define $\mu$ for intervals: To every Borel measure on $\mathfrak{B}$ we can assign its distribution function

$$\mu(x) = \begin{cases} -\mu((x,0]), & x < 0, \\ 0, & x = 0, \\ \mu((0,x]), & x > 0, \end{cases} \qquad(A.3)$$

which is right continuous and nondecreasing. Conversely, given a right continuous nondecreasing function $\mu : \mathbb{R} \to \mathbb{R}$ we can set

$$\mu(A) = \begin{cases} \mu(b) - \mu(a), & A = (a,b], \\ \mu(b) - \mu(a-), & A = [a,b], \\ \mu(b-) - \mu(a), & A = (a,b), \\ \mu(b-) - \mu(a-), & A = [a,b), \end{cases} \qquad(A.4)$$

where $\mu(a-) = \lim_{\varepsilon\downarrow 0}\mu(a-\varepsilon)$. In particular, this gives a premeasure on the algebra of finite unions of intervals which can be extended to a measure. The strategy is as follows: Start with the algebra of finite unions of disjoint intervals and define $\mu$ for those sets (as the sum over the intervals). This yields a premeasure. Extend this to an outer measure for all subsets of $\mathbb{R}$. Show that the restriction to the Borel sets is a measure.

Theorem A.2. For every right continuous nondecreasing function $\mu : \mathbb{R} \to \mathbb{R}$ there exists a unique regular Borel measure $\mu$ which extends (A.4). Two different functions generate the same measure if and only if they differ by a constant.

Since the proof of this theorem is rather involved, we defer it to the next section and look at some examples first.

Example. Suppose $\Theta(x) = 0$ for $x < 0$ and $\Theta(x) = 1$ for $x \ge 0$. Then we obtain the so-called Dirac measure at 0, which is given by $\Theta(A) = 1$ if $0 \in A$ and $\Theta(A) = 0$ if $0 \notin A$.
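Formula (A.4) can be evaluated directly once the distribution function is known. The following sketch is ours (function names are our own): it computes interval measures from a distribution function and recovers the Dirac measure at 0 from the Heaviside function $\Theta$. The left limit $\mu(a-)$ is approximated numerically by $\mu(a - \varepsilon)$, which is exact for the piecewise constant examples used here.

```python
def mu_left(mu, x, eps=1e-9):
    """Numerical stand-in for the left limit mu(x-)."""
    return mu(x - eps)

def measure_interval(mu, a, b, closed_left=False, closed_right=True):
    """Measure of an interval from its distribution function mu, via (A.4).

    Defaults give the half-open interval (a, b] with measure mu(b) - mu(a).
    """
    lo = mu_left(mu, a) if closed_left else mu(a)
    hi = mu(b) if closed_right else mu_left(mu, b)
    return hi - lo

# Heaviside step: the distribution function of the Dirac measure at 0.
theta = lambda x: 1.0 if x >= 0 else 0.0
```

As expected, $\Theta$ assigns measure 1 to any interval containing 0 and measure 0 otherwise, while the identity $\lambda(x) = x$ reproduces interval length.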
Example. Suppose $\lambda(x) = x$. Then the associated measure is the ordinary Lebesgue measure on $\mathbb{R}$. We will abbreviate the Lebesgue measure of a Borel set $A$ by $\lambda(A) = |A|$.

It can be shown that Borel measures on a separable metric space are always regular.

A set $A \in \Sigma$ is called a support for $\mu$ if $\mu(X\backslash A) = 0$. A property is said to hold $\mu$-almost everywhere (a.e.) if it holds on a support for $\mu$ or, equivalently, if the set where it does not hold is contained in a set of measure zero.

Example. The set of rational numbers has Lebesgue measure zero: $\lambda(\mathbb{Q}) = 0$. In fact, any single point has Lebesgue measure zero, and so has any countable union of points (by countable additivity).

Example. The Cantor set is an example of a closed uncountable set of Lebesgue measure zero. It is constructed as follows: Start with $C_0 = [0,1]$ and remove the middle third to obtain $C_1 = [0,\frac{1}{3}]\cup[\frac{2}{3},1]$. Next, again remove the middle thirds of the remaining sets to obtain $C_2 = [0,\frac{1}{9}]\cup[\frac{2}{9},\frac{1}{3}]\cup[\frac{2}{3},\frac{7}{9}]\cup[\frac{8}{9},1]$, and so on:

$$C_0 = [0,1], \quad C_1 = [0,\tfrac{1}{3}]\cup[\tfrac{2}{3},1], \quad C_2 = [0,\tfrac{1}{9}]\cup[\tfrac{2}{9},\tfrac{1}{3}]\cup[\tfrac{2}{3},\tfrac{7}{9}]\cup[\tfrac{8}{9},1], \quad \dots$$

Proceeding like this, we obtain a sequence of nesting sets $C_n$, and the limit $C = \bigcap_n C_n$ is the Cantor set. Since $C_n$ is compact, so is $C$. Moreover, $C_n$ consists of $2^n$ intervals of length $3^{-n}$, and thus its Lebesgue measure is $\lambda(C_n) = (2/3)^n$. In particular, $\lambda(C) = \lim_{n\to\infty}\lambda(C_n) = 0$. Using the ternary expansion it is extremely simple to describe: $C$ is the set of all $x \in [0,1]$ whose ternary expansion contains no one's, which shows that $C$ is uncountable (why?). It has some further interesting properties: it is totally disconnected (i.e., it contains no subintervals) and perfect (it has no isolated points).

Problem A.1 (Vitali set). Call two numbers $x, y \in [0,1)$ equivalent if $x - y$ is rational. Construct the set $V$ by choosing one representative from each equivalence class. Show that $V$ cannot be measurable with respect to any finite translation invariant measure on $[0,1)$. (Hint: How can you build up $[0,1)$ from $V$?)
A.2. Extending a premeasure to a measure

The purpose of this section is to prove Theorem A.2. It is rather technical and should be skipped on first reading.

In order to prove Theorem A.2, we need to show how a premeasure can be extended to a measure. As a prerequisite we first establish that it suffices to check increasing (or decreasing) sequences of sets when checking whether a given algebra is in fact a σ-algebra:

A collection of sets $\mathcal{M}$ is called a monotone class if $A_n \nearrow A$ implies $A \in \mathcal{M}$ whenever $A_n \in \mathcal{M}$, and $A_n \searrow A$ implies $A \in \mathcal{M}$ whenever $A_n \in \mathcal{M}$. Every σ-algebra is a monotone class, and the intersection of monotone classes is a monotone class. Hence every collection of sets $S$ generates a smallest monotone class $\mathcal{M}(S)$.

Theorem A.3. Let $A$ be an algebra. Then $\mathcal{M}(A) = \Sigma(A)$.

Proof. We first show that $\mathcal{M} = \mathcal{M}(A)$ is an algebra.

Put $\mathcal{M}(A) = \{B \in \mathcal{M} \mid A \cup B \in \mathcal{M}\}$. If $B_n$ is an increasing sequence of sets in $\mathcal{M}(A)$, then $A \cup B_n$ is an increasing sequence in $\mathcal{M}$ and hence $\bigcup_n(A \cup B_n) \in \mathcal{M}$. Now

$$A \cup \bigcup_n B_n = \bigcup_n (A \cup B_n) \qquad(A.5)$$

shows that $\mathcal{M}(A)$ is closed under increasing sequences. Similarly, $\mathcal{M}(A)$ is closed under decreasing sequences, and hence it is a monotone class. But does it contain any elements? Well, if $A \in A$ we have $A \subseteq \mathcal{M}(A)$, implying $\mathcal{M}(A) = \mathcal{M}$. Hence $A \cup B \in \mathcal{M}$ if at least one of the sets is in $A$. But this shows $A \subseteq \mathcal{M}(A)$, and hence $\mathcal{M}(A) = \mathcal{M}$ for any $A \in \mathcal{M}$. So $\mathcal{M}$ is closed under finite unions.

To show that we are closed under complements, consider $\tilde{\mathcal{M}} = \{A \in \mathcal{M} \mid X\backslash A \in \mathcal{M}\}$. If $A_n$ is an increasing sequence, then $X\backslash A_n$ is a decreasing sequence and $X\backslash\bigcup_n A_n = \bigcap_n X\backslash A_n \in \mathcal{M}$ if $A_n \in \tilde{\mathcal{M}}$. Similarly for decreasing sequences. Hence $\tilde{\mathcal{M}}$ is a monotone class and must be equal to $\mathcal{M}$ since it contains $A$.

So we know that $\mathcal{M}$ is an algebra. To show that it is a σ-algebra, let $A_n$ be given and put $\tilde A_n = \bigcup_{k\le n} A_k$. Then $\tilde A_n$ is increasing and $\bigcup_n \tilde A_n = \bigcup_n A_n \in \mathcal{M}$. □

The typical use of this theorem is as follows: First verify some property for sets in an algebra $A$. In order to show that it holds for any set in $\Sigma(A)$, it suffices to show that the collection of sets for which it holds is closed under countable increasing and decreasing sequences (i.e., is a monotone class).
Now we start by proving that (A.4) indeed gives rise to a premeasure.

Lemma A.4. $\mu$ as defined in (A.4) gives rise to a unique σ-finite regular premeasure on the algebra $A$ of finite unions of disjoint intervals.

Proof. First of all, (A.4) can be extended to finite unions of disjoint intervals by summing over all intervals. It is straightforward to verify that $\mu$ is well defined (one set can be represented by different unions of intervals) and by construction additive.

To show regularity, we can assume that $I$ is just one interval (just treat each subinterval separately). To show outer regularity, replace each point $\{x\}$ by a small open interval $(x-\varepsilon, x+\varepsilon)$ and use that $\mu(\{x\}) = \lim_{\varepsilon\downarrow 0}\mu(x+\varepsilon) - \mu(x-\varepsilon)$. Similarly, to show inner regularity, replace each open interval $(a,b)$ by a compact one $[a_n, b_n] \subseteq (a,b)$ and use $\mu((a,b)) = \lim_{n\to\infty}\mu(b_n) - \mu(a_n)$ if $a_n \downarrow a$ and $b_n \uparrow b$.

It remains to verify σ-additivity. We need to show

$$\mu\Big(\bigcup_k I_k\Big) = \sum_k \mu(I_k) \qquad(A.6)$$

whenever $I_n \in A$ and $I = \bigcup_k I_k \in A$. Since each $I_n$ is a finite union of intervals, we can as well assume each $I_n$ is just one interval (just split $I_n$ into its subintervals and note that the sum does not change by additivity). Similarly, we can assume that $I$ is just one interval (just treat each subinterval separately).

By additivity $\mu$ is monotone, and hence

$$\sum_{k=1}^n \mu(I_k) = \mu\Big(\bigcup_{k=1}^n I_k\Big) \le \mu(I), \qquad(A.7)$$

which shows

$$\sum_{k=1}^\infty \mu(I_k) \le \mu(I). \qquad(A.8)$$

To get the converse inequality we need to work harder.

By outer regularity we can cover each $I_k$ by an open interval $J_k$ such that $\mu(J_k) \le \mu(I_k) + \frac{\varepsilon}{2^k}$. Suppose $I$ is compact first. Then finitely many of the $J_k$, say the first $n$, cover $I$ and we have

$$\mu(I) \le \mu\Big(\bigcup_{k=1}^n J_k\Big) \le \sum_{k=1}^n \mu(J_k) \le \sum_{k=1}^\infty \mu(I_k) + \varepsilon. \qquad(A.9)$$

Since $\varepsilon > 0$ is arbitrary, this shows σ-additivity for compact intervals. By additivity we can always add/subtract the endpoints of $I$, and hence σ-additivity holds for any bounded interval. If $I$ is unbounded, say $I = [a,\infty)$, then given $x > 0$ we can find an $n$ such that the $J_k$ cover at least $[a,x]$, and hence

$$\sum_{k=1}^n \mu(I_k) \ge \sum_{k=1}^n \mu(J_k) - \varepsilon \ge \mu([a,x]) - \varepsilon. \qquad(A.10)$$

Since $x > a$ and $\varepsilon > 0$ are arbitrary, we are done. □

This premeasure determines the corresponding measure $\mu$ uniquely (if there is one at all):

Theorem A.5 (Uniqueness of measures). Let $\mu$ be a σ-finite premeasure on an algebra $A$. Then there is at most one extension to $\Sigma(A)$.

Proof. We first assume that $\mu(X) < \infty$. Suppose there is another extension $\tilde\mu$ and consider the set

$$S = \{A \in \Sigma(A) \mid \mu(A) = \tilde\mu(A)\}. \qquad(A.11)$$

I claim $S$ is a monotone class, and hence $S = \Sigma(A)$ since $A \subseteq S$ by assumption (Theorem A.3).

Let $A_n \nearrow A$. If $A_n \in S$, we have $\mu(A_n) = \tilde\mu(A_n)$ and taking limits (Theorem A.1 (ii)) we conclude $\mu(A) = \tilde\mu(A)$. Next let $A_n \searrow A$ and take again limits. This finishes the finite case. To extend our result to the σ-finite case, let $X_j \nearrow X$ be an increasing sequence such that $\mu(X_j) < \infty$. By the finite case $\mu(A \cap X_j) = \tilde\mu(A \cap X_j)$ (just restrict $\mu$, $\tilde\mu$ to $X_j$). Hence

$$\mu(A) = \lim_{j\to\infty}\mu(A \cap X_j) = \lim_{j\to\infty}\tilde\mu(A \cap X_j) = \tilde\mu(A) \qquad(A.12)$$

and we are done. □

Note that if our premeasure is regular, so will be the extension:

Lemma A.6. Suppose $\mu$ is a σ-finite premeasure on some algebra $A$ generating the Borel sets $\mathfrak{B}$. Then outer (inner) regularity holds for all Borel sets if it holds for all sets in $A$.

Proof. We first assume that $\mu(X) < \infty$. Set

$$\mu^\circ(A) = \inf_{A \subseteq O,\ O\ \text{open}} \mu(O) \ge \mu(A) \qquad(A.13)$$

and let $\mathcal{M} = \{A \in \mathfrak{B} \mid \mu^\circ(A) = \mu(A)\}$. Since by assumption $\mathcal{M}$ contains some algebra generating $\mathfrak{B}$, it suffices to prove that $\mathcal{M}$ is a monotone class.

Let $A_n \in \mathcal{M}$ be a monotone sequence and let $O_n \supseteq A_n$ be open sets such that $\mu(O_n) \le \mu(A_n) + \frac{1}{n}$. Then

$$\mu(A_n) \le \mu^\circ(A_n) \le \mu(O_n) \le \mu(A_n) + \frac{1}{n}. \qquad(A.14)$$

Now if $A_n \nearrow A$, just take limits and use continuity from below of $\mu$. Similarly for decreasing sequences $A_n \searrow A$. This finishes the finite case.

To extend this to the σ-finite case, let $X_j \nearrow X$ be a cover with $\mu(X_j) < \infty$. Since $X_j$ has finite measure, we can assume $X_j$ open. Given $A$ we can split it into disjoint sets $A_j$ such that $A_j \subseteq X_j$ ($A_1 = A \cap X_1$, $A_2 = (A\backslash A_1)\cap X_2$, etc.). Thus there are open (in $X$) sets $O_j$ covering $A_j$ such that $\mu(O_j) \le \mu(A_j) + \frac{\varepsilon}{2^j}$. Then $O = \bigcup_j O_j$ is open, covers $A$, and satisfies

$$\mu(A) \le \mu(O) \le \sum_j \mu(O_j) \le \mu(A) + \varepsilon. \qquad(A.15)$$

This settles outer regularity.

Next let us turn to inner regularity. If $\mu(X) < \infty$, one can show as before that

$$\mathcal{M} = \{A \in \mathfrak{B} \mid \mu_\circ(A) = \mu(A)\}, \qquad \mu_\circ(A) = \sup_{C \subseteq A,\ C\ \text{compact}} \mu(C) \le \mu(A), \qquad(A.16)$$

is a monotone class. This settles the finite case.

For the σ-finite case, split again $A$ as before. Since $X_j$ has finite measure, there are compact subsets $K_j$ of $A_j$ such that $\mu(A_j) \le \mu(K_j) + \frac{\varepsilon}{2^j}$. Now we need to distinguish two cases: If $\mu(A) = \infty$, the sum $\sum_j \mu(A_j)$ will diverge and so will $\sum_j \mu(K_j)$. Hence $\tilde K_n = \bigcup_{j=1}^n K_j \subseteq A$ is compact with $\mu(\tilde K_n) \to \infty = \mu(A)$. If $\mu(A) < \infty$, the sum $\sum_j \mu(A_j)$ will converge, and choosing $n$ sufficiently large we will have

$$\mu(\tilde K_n) \le \mu(A) \le \mu(\tilde K_n) + 2\varepsilon. \qquad(A.17)$$

This finishes the proof. □

So it remains to ensure that there is an extension at all. For any premeasure $\mu$ we define

$$\mu^*(A) = \inf\Big\{\sum_{n=1}^\infty \mu(A_n) \,\Big|\, A \subseteq \bigcup_{n=1}^\infty A_n,\ A_n \in A\Big\}, \qquad(A.18)$$

where the infimum extends over all countable covers from $A$. Then the function $\mu^* : \mathfrak{P}(X) \to [0,\infty]$ is an outer measure, that is, it has the properties (Problem A.2)
• $\mu^*(\emptyset) = 0$,
• $A_1 \subseteq A_2 \Rightarrow \mu^*(A_1) \le \mu^*(A_2)$, and
• $\mu^*(\bigcup_{n=1}^\infty A_n) \le \sum_{n=1}^\infty \mu^*(A_n)$ (subadditivity).
Note that $\mu^*(A) = \mu(A)$ for $A \in A$ (Problem A.3).

Theorem A.7 (Extensions via outer measures). Let $\mu^*$ be an outer measure. Then the set $\Sigma$ of all sets $A$ satisfying the Carathéodory condition

$$\mu^*(E) = \mu^*(A \cap E) + \mu^*(A' \cap E) \qquad \forall E \subseteq X \qquad(A.19)$$

(where $A' = X\backslash A$ is the complement of $A$) forms a σ-algebra, and $\mu^*$ restricted to this σ-algebra is a measure.

Proof. We first show that $\Sigma$ is an algebra. It clearly contains $X$ and is closed under complements. Let $A, B \in \Sigma$. Applying Carathéodory's condition twice shows

$$\mu^*(E) = \mu^*(A\cap B\cap E) + \mu^*(A'\cap B\cap E) + \mu^*(A\cap B'\cap E) + \mu^*(A'\cap B'\cap E) \ge \mu^*((A\cup B)\cap E) + \mu^*((A\cup B)'\cap E), \qquad(A.20)$$

where we have used De Morgan and

$$\mu^*(A\cap B\cap E) + \mu^*(A'\cap B\cap E) + \mu^*(A\cap B'\cap E) \ge \mu^*((A\cup B)\cap E), \qquad(A.21)$$

which follows from subadditivity and $(A\cup B)\cap E = (A\cap B\cap E)\cup(A'\cap B\cap E)\cup(A\cap B'\cap E)$. Since the reverse inequality is just subadditivity, we conclude that $\Sigma$ is an algebra.

Next, let $A_n$ be a sequence of sets from $\Sigma$. Without restriction we can assume that they are disjoint (compare the last argument in the proof of Theorem A.3). Abbreviate $\tilde A_n = \bigcup_{k\le n} A_k$ and $A = \bigcup_n A_n$. Then for any set $E$ we have

$$\mu^*(\tilde A_n \cap E) = \mu^*(A_n \cap \tilde A_n \cap E) + \mu^*(A_n' \cap \tilde A_n \cap E) = \mu^*(A_n \cap E) + \mu^*(\tilde A_{n-1} \cap E) = \dots = \sum_{k=1}^n \mu^*(A_k \cap E). \qquad(A.22)$$

Using $\tilde A_n \in \Sigma$ and monotonicity of $\mu^*$, we infer

$$\mu^*(E) = \mu^*(\tilde A_n \cap E) + \mu^*(\tilde A_n' \cap E) \ge \sum_{k=1}^n \mu^*(A_k \cap E) + \mu^*(A' \cap E). \qquad(A.23)$$

Letting $n \to \infty$ and using subadditivity finally gives

$$\mu^*(E) \ge \sum_{k=1}^\infty \mu^*(A_k \cap E) + \mu^*(A' \cap E) \ge \mu^*(A \cap E) + \mu^*(A' \cap E) \ge \mu^*(E), \qquad(A.24)$$

and we infer that $\Sigma$ is a σ-algebra.

Finally, setting $E = A$ in (A.24), we have

$$\mu^*(A) = \sum_{k=1}^\infty \mu^*(A_k \cap A) + \mu^*(A' \cap A) = \sum_{k=1}^\infty \mu^*(A_k) \qquad(A.25)$$

and we are done. □
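For sets that are finite unions of intervals, the outer measure (A.18) can be computed exactly by merging overlaps, and the Carathéodory condition (A.19) can then be checked concretely. The sketch below is our own illustration (helper names are ours; intervals are represented by their endpoints, and boundary points carry no measure):

```python
def length(intervals):
    """Total Lebesgue measure of a finite union of intervals, merging overlaps."""
    total, end = 0.0, float("-inf")
    for a, b in sorted(intervals):
        a = max(a, end)          # skip the part already counted
        if b > a:
            total += b - a
            end = b
    return total

def intersect(I, J):
    """Pairwise intersections of two finite unions of intervals."""
    return [(max(a, c), min(b, d)) for a, b in I for c, d in J
            if min(b, d) > max(a, c)]

# Carathéodory's condition for A = [0, 1] tested against E = [0.5, 1.5]:
E = [(0.5, 1.5)]
A = [(0.0, 1.0)]
A_compl = [(float("-inf"), 0.0), (1.0, float("inf"))]

lhs = length(intersect(E, A)) + length(intersect(E, A_compl))  # should equal length(E)
```

Here the test set $E$ is split by $A$ into $(0.5, 1.0)$ and $(1.0, 1.5)$, and the two pieces add up to $\mu^*(E) = 1$, as (A.19) demands for the measurable set $A$.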
Remark: The constructed measure $\mu$ is complete, that is, for any measurable set $A$ of measure zero, any subset of $A$ is again measurable (Problem A.4).

The only remaining question is whether there are any nontrivial sets satisfying the Carathéodory condition.

Lemma A.8. Let $\mu$ be a premeasure on $A$ and let $\mu^*$ be the associated outer measure. Then every set in $A$ satisfies the Carathéodory condition.

Proof. Let $A_n \in A$ be a countable cover for $E$. Then for any $A \in A$ we have

$$\sum_{n=1}^\infty \mu(A_n) = \sum_{n=1}^\infty \mu(A_n \cap A) + \sum_{n=1}^\infty \mu(A_n \cap A') \ge \mu^*(E \cap A) + \mu^*(E \cap A') \qquad(A.26)$$

since $A_n \cap A \in A$ is a cover for $E \cap A$ and $A_n \cap A' \in A$ is a cover for $E \cap A'$. Taking the infimum over all covers, we have $\mu^*(E) \ge \mu^*(E \cap A) + \mu^*(E \cap A')$, which together with subadditivity gives the Carathéodory condition. □